000 | a | ||
---|---|---|---|
999 | _c32157 _d32157 | ||
008 | 231101b xxu||||| |||| 00| 0 eng d | ||
020 | _a9781316511961 | ||
082 | _a006.3 _bMEY | ||
100 | _aMeyn, S. P. | ||
245 | _aControl systems and reinforcement learning | ||
260 | _aCambridge : _bCambridge University Press, _c2022 | ||
300 | _axv, 435 p. : _bill. ; _c27 cm. | ||
365 | _b49.99 _cGBP _d107.60 | ||
504 | _aIncludes bibliographical references and index. | ||
520 | _aA high school student can create deep Q-learning code to control her robot, without any understanding of the meaning of "deep" or "Q", or why the code sometimes fails. This book is designed to explain the science behind reinforcement learning and optimal control in a way that is accessible to students with a background in calculus and matrix algebra. A unique focus is algorithm design to obtain the fastest possible speed of convergence for learning algorithms, along with insight into why reinforcement learning sometimes fails. Advanced stochastic process theory is avoided at the start by substituting random exploration with more intuitive deterministic probing for learning. Once these ideas are understood, it is not difficult to master techniques rooted in stochastic control. These topics are covered in the second part of the book, starting with Markov chain theory and ending with a fresh look at actor-critic methods for reinforcement learning. | ||
650 | _aControl theory | ||
650 | _aMathematical optimization | ||
650 | _aComputer vision | ||
650 | _aPattern recognition | ||
650 | _aStochastic processes | ||
650 | _aEconometrics | ||
650 | _aCost function | ||
650 | _aApproximation | ||
942 | _2ddc _cBK | ||