This course will study the theory and applications of stochastic dynamic programming. We will first study the theory, including finite-stage models, discounted dynamic programming, and optimal stopping problems, and then study both classical and recent applications in operations management. Although the applications are drawn from operations management, the underlying techniques apply broadly to other fields in the social sciences and engineering.
On successful completion of this course, students should be able to understand the theory of dynamic programming, beginning with finite-stage models, the contraction-mapping framework of discounted dynamic programming, and the essentials of negative dynamic programming. They will become familiar with the different models of dynamic programming and their expected solution structures, so that they can formulate specific research contexts as dynamic programs. They will also develop foundations for obtaining solutions, both numerically (e.g., successive approximation, policy iteration, linear programming) and analytically, using standard mathematical induction and/or techniques that exploit a problem's specific structure (e.g., one-stage-look-ahead policies, interchange arguments). Students will further understand the Bayesian learning framework and sufficient statistics, and be able to apply conjugate prior–posterior pairs in their models. Finally, students will be exposed to applications of dynamic programming, including inventory and production management, technology management, and the operations/marketing interface.
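To give a flavor of the numerical methods mentioned above, the following is a minimal sketch of successive approximation (value iteration), which repeatedly applies the Bellman operator — a contraction in the sup norm for discount factors below one — until it reaches a fixed point. The two-state, two-action MDP here is entirely hypothetical, invented for illustration; it does not come from the course materials or the textbook.

```python
import numpy as np

# Hypothetical 2-state, 2-action discounted MDP (illustrative numbers only).
P = np.array([            # P[a, s, s'] = transition probability under action a
    [[0.9, 0.1], [0.2, 0.8]],   # action 0
    [[0.5, 0.5], [0.7, 0.3]],   # action 1
])
r = np.array([            # r[a, s] = expected one-stage reward
    [1.0, 0.0],
    [0.5, 2.0],
])
beta = 0.9                # discount factor, beta < 1 ensures contraction

def value_iteration(P, r, beta, tol=1e-10):
    """Iterate the Bellman operator until the sup-norm change is below tol.

    Returns the (approximate) optimal value function and a greedy policy.
    """
    _, n_states, _ = P.shape
    V = np.zeros(n_states)
    while True:
        Q = r + beta * (P @ V)        # Q[a, s]: one-step lookahead values
        V_new = Q.max(axis=0)         # Bellman update: maximize over actions
        if np.max(np.abs(V_new - V)) < tol:
            return V_new, Q.argmax(axis=0)
        V = V_new

V, policy = value_iteration(P, r, beta)
print("optimal values:", V)
print("greedy policy:", policy)
```

Because the operator is a beta-contraction, the iterates converge geometrically from any starting guess, which is the essence of the successive-approximation approach covered in the course.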
Students will demonstrate a conceptual understanding of the fundamental concepts and applications of dynamic programming. They will develop key skills such as critical reflection, problem solving, and effective communication of scholarly ideas. They will also learn to link different theoretical methods to topical areas of research in stochastic modeling.
Assessment is as follows: 55% for an unseen 3-hour examination at the end of the course, 35% in total for seven problem sets during the term, and 10% for class participation.
Required textbook: Introduction to Stochastic Dynamic Programming, Sheldon M. Ross, Academic Press, 1983.