by Dimitri P. Bertsekas
ISBN: 1-886529-42-6, 978-1-886529-42-7
Publication: April 2013, 256 pages, hardcover
Table of Contents and Preface
A research monograph that synthesizes the foundations of dynamic programming, a line of research begun nearly 50 years ago, with the modern theory of approximate dynamic programming and the new class of semicontractive models.
It aims at a unified and economical development of the core theory and algorithms of total cost sequential decision problems, based on the strong connections of the subject with fixed point theory. The analysis focuses on the abstract mapping that underlies dynamic programming and defines the mathematical character of the associated problem. The discussion centers on two fundamental properties that this mapping may have: monotonicity and (weighted sup-norm) contraction. It turns out that the nature of the analytical and algorithmic DP theory is determined primarily by the presence or absence of these two properties, and the rest of the problem's structure is largely inconsequential. New research is focused on two areas: 1) The ramifications of these properties in the context of algorithms for approximate dynamic programming, and 2) The new class of semicontractive models, exemplified by stochastic shortest path problems, where some but not all policies are contractive.
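The two properties named above can be made concrete on a small example. The sketch below (the two-state, two-control MDP, its costs, and transition probabilities are invented for illustration) applies the discounted-cost Bellman operator repeatedly; because that operator is a sup-norm contraction with modulus equal to the discount factor, the iterates converge to its unique fixed point, which is the optimal cost vector.

```python
import numpy as np

# A tiny discounted-cost MDP, invented for illustration: 2 states, 2 controls.
alpha = 0.9  # discount factor = contraction modulus of T
g = np.array([[1.0, 2.0],
              [0.5, 3.0]])  # g[x, u]: cost of control u at state x
# P[u][x, y]: transition probability from state x to state y under control u
P = [np.array([[0.7, 0.3], [0.4, 0.6]]),
     np.array([[0.2, 0.8], [0.9, 0.1]])]

def T(J):
    """Bellman operator: (TJ)(x) = min_u [ g(x,u) + alpha * sum_y p_xy(u) J(y) ]."""
    return np.min(np.stack([g[:, u] + alpha * P[u] @ J for u in range(2)],
                           axis=1), axis=1)

# Value iteration: repeated application of T.  T is monotone (J1 <= J2
# implies TJ1 <= TJ2) and a sup-norm contraction, so the iterates converge
# geometrically to the unique fixed point J* = TJ*.
J = np.zeros(2)
for _ in range(200):
    J = T(J)
print("J* ~", J, " fixed-point residual:", np.max(np.abs(T(J) - J)))
```

In the book's abstract framework, the specific form of `T` is inessential: the same convergence argument applies to any monotone weighted-sup-norm contraction, which is what makes the analysis portable across discounted, shortest path, game, and risk-sensitive models.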
The book is an excellent supplement to two of our books: Dynamic Programming and Optimal Control (Athena Scientific, 2012) and Neuro-Dynamic Programming (Athena Scientific, 1996).
From the review by Panos Pardalos (Optimization Methods and Software):
"The mathematical development in this book is elegant and mathematically rigorous, relying on the power of abstraction to focus on the fundamentals. The monograph provides for the first time a comprehensive synthesis of the field, while presenting much new research, some of which is relevant to currently very active fields such as approximate dynamic programming. Many examples are sprinkled through the book, illustrating the unifying power of the theory and applying it to specific types of problems, such as discounted, stochastic shortest path, semi-Markov, minimax, sequential games, multiplicative, and risk-sensitive models. The book also includes end-of-chapter exercises (with complete solutions), which supplement the text with examples, counterexamples, and theoretical extensions.
Like several other books by Bertsekas, this book is well-written, and well-suited for self-study. It can be used as a supplement to graduate dynamic programming classes. I highly recommend this book for everyone interested in dynamic optimization."
Among its features, the book:
develops the algorithmic foundations of approximate dynamic programming within an abstract, broadly applicable unifying framework
develops the connections of dynamic programming with fixed point theory
contains significant new research
Dimitri P. Bertsekas is McAfee Professor of Engineering at the Massachusetts Institute of Technology and a member of the prestigious United States National Academy of Engineering. He is the recipient of the 2001 AACC John R. Ragazzini Education Award and the 2009 INFORMS Expository Writing Award.
The material listed below can be freely downloaded, reproduced, and distributed.