
Optimal search for a randomly moving object

Published online by Cambridge University Press:  14 July 2016

R. R. Weber*
Affiliation:
University of Cambridge
*
Postal address: Engineering Department, Control and Management Systems Group, Mill Lane, Cambridge, CB2 1RX, UK.

Abstract

It is desired to minimize the expected cost of finding an object which moves back and forth between two locations according to an unobservable Markov process. When the object is in location i (i = 1, 2) it resides there for a time which is exponentially distributed with parameter λi and then moves to the other location. The location of the object is not known, and at each instant until it is found exactly one of the two locations must be searched. Searching location i for time δ costs ciδ, and conditional on the object being in location i there is a probability αiδ + o(δ) that this search will find it. The probability that the object starts in location 1 is known to be p1(0). The location to be searched at time t is to be chosen on the basis of the value of p1(t), the probability that the object is in location 1, given that it has not yet been discovered. We prove that there exists a threshold Π such that the optimal policy may be described as: search location 1 if and only if the probability that the object is in location 1 is greater than Π. Expressions for the threshold Π are given in terms of the parameters of the model.
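Between detections, p1(t) evolves deterministically: it drifts toward the stationary distribution of the two-state Markov chain, and an unsuccessful search of a location is Bayesian evidence against the object being there. The sketch below simulates this no-detection filtering equation under the threshold rule; the parameter values are illustrative only, and the ODE is the standard conditional-probability update for this model, not an expression quoted from the paper.

```python
# Euler simulation of p1(t), the conditional probability that the object
# is in location 1 given that it has not yet been found, under a
# threshold policy. All parameter values are hypothetical.

def simulate(p1=0.5, lam1=1.0, lam2=1.5, a1=2.0, a2=1.0,
             threshold=0.6, dt=1e-3, horizon=10.0):
    t = 0.0
    while t < horizon:
        search_1 = p1 > threshold          # the threshold rule from the abstract
        # Drift from the object switching locations (rate lam_i out of location i)
        drift = lam2 * (1.0 - p1) - lam1 * p1
        # Bayesian correction for a search that fails in time dt
        if search_1:
            drift -= a1 * p1 * (1.0 - p1)  # searching 1 and not finding lowers p1
        else:
            drift += a2 * p1 * (1.0 - p1)  # searching 2 and not finding raises p1
        p1 += drift * dt
        t += dt
    return p1

p_final = simulate()
```

With these illustrative parameters the posterior chatters close to the threshold: whenever p1 exceeds Π, searching location 1 drives it back down, and whenever it falls below Π, searching location 2 drives it back up, which is the bang-bang character of the optimal rule.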

Type
Research Paper
Copyright
Copyright © Applied Probability Trust 1986 


Footnotes

This research was conducted at the Department of Electrical Engineering and Computer Sciences and Electronics Research Laboratory, University of California, Berkeley and was supported by the Office of Naval Research Contract N00014-80-C-0507.
