
Some results on two-armed bandits when both projects vary

Published online by Cambridge University Press:  14 July 2016

Brendan O'Flaherty*
Affiliation:
Columbia University
* Postal address: Department of Economics, Columbia University, New York, NY 10027, USA.

Abstract

In the multi-armed bandit problem, the decision-maker must choose a single project to work on each period. The chosen project yields an immediate reward that depends on its current state; next period the chosen project makes a stochastic transition to a new state, while projects that are not chosen remain in the same state. What happens in a two-armed bandit context if unchosen projects do not remain in the same state? We derive two sufficient conditions for the optimal policy to be myopic: either the transition function for chosen projects exhibits, in a certain sense, uniformly stronger stochastic dominance than the transition function for unchosen projects, or both transition processes are normal martingales whose variance is independent of the history of process choices.
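As a rough illustration of the setting (not of the paper's analysis), the following minimal Python sketch simulates a two-armed bandit in which both projects evolve every period, under an assumption matching the second sufficient condition: each project's state follows a normal martingale with fixed, history-independent variance. All names and parameter values here are hypothetical choices for the sketch.

```python
import numpy as np

# Minimal sketch, assuming both projects' states follow normal martingales:
# next state = current state + zero-mean Gaussian noise with fixed variance.
# The immediate reward from the chosen project is its current state, and the
# myopic rule simply works on whichever project currently has the higher state.

rng = np.random.default_rng(0)

def simulate(T=1000, sigma_chosen=1.0, sigma_unchosen=0.5):
    state = np.array([0.0, 0.0])   # current states of the two projects
    total_reward = 0.0
    for _ in range(T):
        arm = int(np.argmax(state))     # myopic choice: higher current state
        total_reward += state[arm]      # immediate reward from chosen project
        # Unlike the classical bandit, BOTH projects transition each period;
        # the chosen and unchosen projects may have different noise scales.
        state[arm] += rng.normal(0.0, sigma_chosen)
        state[1 - arm] += rng.normal(0.0, sigma_unchosen)
    return total_reward

print(simulate())
```

Because both variances are independent of the history of choices, the states remain martingales regardless of which project is worked on, which is the feature the abstract's second condition exploits.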

Type
Short Communications
Copyright
Copyright © Applied Probability Trust 1989 

