
On the dynamic programming approach to Pontriagin's maximum principle

Published online by Cambridge University Press:  14 July 2016

Richard Morton*
Affiliation:
University of Manchester

Extract

Suppose that the state variables x = (x₁,…,xₙ)′ satisfy the equations ẋ = f(x, u), where the dot refers to derivatives with respect to time t, and u ∊ U is a vector of controls. The object is to transfer x⁰ to x¹ by choosing the controls so that the functional ∫ f₀(x, u) dt takes on its minimum value J(x), called the Bellman function (although we shall define it in a different way). The Dynamic Programming Principle leads to the maximisation with respect to u of −f₀(x, u) − Σᵢ (∂J/∂xᵢ) fᵢ(x, u) ≦ 0, and equality is obtained upon maximisation.
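The relations in the extract can be written out in full. The following is a sketch under the standard formulation of Pontryagin et al. [1]; the symbols f₀, fᵢ and the adjoint variables ψᵢ are the conventional ones and are assumed here, not quoted verbatim from this extract:

```latex
% State equations and cost functional (standard formulation):
\[
\dot{x}_i = f_i(x, u), \quad i = 1,\dots,n, \qquad
J(x) = \min_{u(\cdot)} \int_{t_0}^{t_1} f_0\bigl(x(t), u(t)\bigr)\,dt .
\]
% The Dynamic Programming Principle gives, for every admissible u in U,
\[
f_0(x, u) + \sum_{i=1}^{n} \frac{\partial J}{\partial x_i}\, f_i(x, u) \;\ge\; 0,
\]
% with equality when u maximises the left-hand side's negative, i.e.
\[
\max_{u \in U} \Bigl[\, -f_0(x, u) - \sum_{i=1}^{n}
  \frac{\partial J}{\partial x_i}\, f_i(x, u) \Bigr] = 0 .
\]
% Writing psi_i for the negative gradient of the Bellman function,
\[
\psi_i = -\frac{\partial J}{\partial x_i},
\qquad
H(\psi, x, u) = -f_0(x, u) + \sum_{i=1}^{n} \psi_i\, f_i(x, u),
\]
% the condition above becomes the familiar maximum condition
% max over u of H(psi, x, u) = 0 along the optimal trajectory.
```

This identification of the adjoint variables with the (negative) gradient of J is the usual heuristic link between dynamic programming and the maximum principle; making it rigorous when J is not differentiable is precisely the difficulty addressed in [3] and [4].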

Type
Research Papers
Copyright
Copyright © Applied Probability Trust 1968 


References

[1] Pontriagin, L. S., Boltyanskii, V. G., Gamkrelidze, R. V. and Mishchenko, E. F. (1962) The Mathematical Theory of Optimal Processes. (Trans. Trirogoff.) Wiley and Sons, New York.
[2] Athans, M. and Falb, P. L. (1966) Optimal Control. McGraw-Hill, New York.
[3] Desoer, C. A. (1961) Pontriagin's maximum principle and the principle of optimality. J. Franklin Inst. 271, 361–367. Addendum (1962) J. Franklin Inst. 272, 313.
[4] Boltyanskii, V. G. (1966) Sufficient conditions for optimality and the justification of the dynamic programming method. SIAM J. Control 4, 326–361.
[5] Bellman, R. (1962) Dynamic Programming. Princeton Univ. Press.