
14.—Dynamic Programming applied to Some Non-linear Stochastic Control Systems

Published online by Cambridge University Press: 14 February 2012

A. T. Fuller
Affiliation: Department of Engineering, University of Cambridge

Synopsis

Optimal switching curves are investigated for some second-order control systems with random disturbances, saturating control, and a mean-square-error performance index.

It is first shown that the optimal switching curves are asymptotic at infinity to the optimal switching curves for corresponding deterministic systems.
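As an illustration of a deterministic switching curve of the kind these stochastic curves approach, consider the standard double-integrator plant x1' = x2, x2' = u with |u| <= 1, whose time-optimal switching curve is x1 = -x2|x2|/2. This is a textbook stand-in, not necessarily one of the systems studied in the paper; the following sketch simply closes the loop on that curve and integrates to the origin:

```python
def switching_curve(x2):
    # Time-optimal switching curve of the double integrator
    # x1' = x2, x2' = u, |u| <= 1:  x1 = -x2*|x2|/2.
    return -0.5 * x2 * abs(x2)

def bang_bang_control(x1, x2):
    # u = -1 above the curve, u = +1 below it; on the curve,
    # drive x2 toward zero so the state slides to the origin.
    s = x1 - switching_curve(x2)
    if s > 0:
        return -1.0
    if s < 0:
        return 1.0
    return -1.0 if x2 > 0 else (1.0 if x2 < 0 else 0.0)

def simulate(x1, x2, dt=1e-3, t_max=20.0):
    # Forward-Euler integration of the closed loop.
    t = 0.0
    while t < t_max and (abs(x1) > 1e-3 or abs(x2) > 1e-3):
        u = bang_bang_control(x1, x2)
        x1, x2 = x1 + x2 * dt, x2 + u * dt
        t += dt
    return x1, x2, t
```

Far from the origin the curve behaves like -x2|x2|/2 regardless of the noise, which is the sense in which the stochastic and deterministic curves agree at infinity.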

The systems are then discretised in time and state, and ordinary dynamic programming is applied to the resulting Markov chain models. The discretisation techniques are described in some detail.
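A minimal sketch of this kind of discretisation, again using a noisy double integrator as a stand-in: the state is snapped to a grid, the disturbance is replaced by a three-point distribution, and value iteration is run on the resulting controlled Markov chain. The plant, grid size, time step, discount and noise model are all illustrative choices of mine, not the paper's:

```python
import numpy as np

N = 21                       # grid points per axis (illustrative)
xs = np.linspace(-2.0, 2.0, N)
h = xs[1] - xs[0]            # grid spacing
dt = 0.1
gamma = 0.95                 # discount keeps value iteration contractive
noise = [-h, 0.0, h]         # x2 disturbed by +/- one cell
p_noise = [0.25, 0.5, 0.25]

def nearest(v):
    # Nearest-grid-point state discretisation.
    return min(max(int(round((v - xs[0]) / h)), 0), N - 1)

def q_value(V, x1, x2, u):
    # One-step cost plus expected discounted cost-to-go
    # under the three-point noise distribution.
    ni = nearest(x1 + x2 * dt)
    ev = sum(p * V[ni, nearest(x2 + u * dt + w)]
             for w, p in zip(noise, p_noise))
    return x1 * x1 * dt + gamma * ev

V = np.zeros((N, N))
for _ in range(100):         # value iteration sweeps
    V = np.array([[min(q_value(V, x1, x2, -1.0), q_value(V, x1, x2, 1.0))
                   for x2 in xs] for x1 in xs])

# Optimal bang-bang policy; its sign change traces the
# approximate switching curve on the grid.
policy = np.array([[-1.0 if q_value(V, x1, x2, -1.0) <= q_value(V, x1, x2, 1.0)
                    else 1.0
                    for x2 in xs] for x1 in xs])
```

The zero-level contour of `policy` is the discrete approximation to the switching curve; its accuracy is limited by the grid, which is the difficulty the synopsis goes on to discuss.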

The same techniques are applied to a deterministic system for which the optimal switching curve is known, and it is found that the resulting discretisation errors are considerable.
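One contributor to such errors is easy to exhibit: merely representing a switching curve on a state grid of spacing h displaces it by up to h/2, and near the origin, where the curve is nearly flat, that displacement is large relative to the curve itself. A sketch, again assuming the double-integrator curve rather than the paper's test system:

```python
# Quantising the switching curve x1 = -x2*|x2|/2 to a grid of
# spacing h shifts it by up to half a cell; near the origin the
# curve is flat, so the relative error there is large.
h = 0.2                                    # grid spacing (illustrative)
x2_vals = [k * h for k in range(-10, 11)]
analytic = [-0.5 * x2 * abs(x2) for x2 in x2_vals]
quantised = [round(a / h) * h for a in analytic]
worst = max(abs(a - q) for a, q in zip(analytic, quantised))
```

This accounts only for state quantisation; time discretisation and the substitution of a Markov chain for the diffusion add further error on top of it.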

Thus it turns out that ordinary discrete dynamic programming gives only a rough approximation to the optimal switching curve. It seems desirable to tackle the equations of continuous dynamic programming by techniques more refined than crude discretisation. One such technique is suggested.

Type: Research Article
Copyright © Royal Society of Edinburgh 1976

