Preface
Published online by Cambridge University Press: 07 September 2011
Summary
The purpose of this book is to provide a fundamental description of stochastic control theory and its applications to dynamic optimization in economics. Its content is particularly suitable for graduate students and researchers in applied mathematics, economics, and engineering.
A stochastic control problem poses the question: what is the optimal magnitude of a choice variable at each time in a dynamical system under uncertainty? In stochastic control theory, the state variables describe the random dynamics of the system, while the control variables represent the inputs. The state evolves according to a stochastic differential equation (SDE) that depends on the controls. By steering these control variables, we aim to optimize a performance criterion expressed by an objective functional. Stochastic control can thus be viewed as a problem of decision making under maximization or minimization. This subject has generated a great deal of mathematics as well as a large variety of applications in economics, mathematical finance, and engineering.
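As a minimal illustration of this setup (not taken from the book), consider the controlled SDE dX = u dt + σ dW with running cost X² + u², simulated by the Euler–Maruyama scheme. The sketch below compares the expected cost of applying no control with a simple linear feedback rule u = −x; all parameter values are illustrative assumptions.

```python
import math
import random

def expected_cost(policy, x0=1.0, sigma=0.5, T=1.0, n_steps=100,
                  n_paths=2000, seed=0):
    """Monte Carlo estimate of E[ integral_0^T (X_t^2 + u_t^2) dt ]
    for dX = u dt + sigma dW, simulated by Euler-Maruyama."""
    rng = random.Random(seed)
    dt = T / n_steps
    total = 0.0
    for _ in range(n_paths):
        x, cost = x0, 0.0
        for _ in range(n_steps):
            u = policy(x)                      # control chosen from the current state
            cost += (x * x + u * u) * dt       # accumulate the running cost
            x += u * dt + sigma * math.sqrt(dt) * rng.gauss(0.0, 1.0)
        total += cost
    return total / n_paths

# Same seed for both policies: common random numbers sharpen the comparison.
cost_no_control = expected_cost(lambda x: 0.0)
cost_feedback = expected_cost(lambda x: -x)    # steer the state toward the origin
print(cost_no_control, cost_feedback)
```

Even this crude linear feedback reduces the expected cost relative to doing nothing, which is the kind of comparison the optimal control formalizes.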
This book provides the basic elements of stochastic differential equations and stochastic control theory in a simple and self-contained way. In particular, a key to the stochastic control problem is the dynamic programming principle (DPP), which leads to the notion of viscosity solutions of Hamilton–Jacobi–Bellman (HJB) equations. The theory of viscosity solutions, introduced by M. Crandall and P.-L. Lions in the 1980s, provides a useful tool for dealing with the lack of smoothness of the value functions in stochastic control.
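For orientation, a standard form of the HJB equation (generic notation, not quoted from the book): for one-dimensional controlled dynamics \(dX_t = b(X_t, u_t)\,dt + \sigma(X_t, u_t)\,dW_t\), discount rate \(\rho > 0\), and running reward \(f\), the DPP formally yields

```latex
\rho\, v(x) \;=\; \sup_{u \in U} \Big\{ b(x,u)\, v'(x)
  \;+\; \tfrac{1}{2}\,\sigma^{2}(x,u)\, v''(x) \;+\; f(x,u) \Big\}.
```

Since the value function \(v\) need not be twice differentiable, the derivatives here are interpreted in the viscosity sense, which is precisely where the Crandall–Lions theory enters.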
Stochastic Control and Mathematical Modeling: Applications in Economics, pp. xi–xiv. Publisher: Cambridge University Press. Print publication year: 2010.