
Bounds for the Ruin Probability of a Discrete-Time Risk Process

Published online by Cambridge University Press:  14 July 2016

Maikol A. Diasparra*
Affiliation:
Universidad Simón Bolívar
Rosario Romera**
Affiliation:
Universidad Carlos III de Madrid
* Postal address: Department of Pure and Applied Mathematics, Universidad Simón Bolívar, 1080-A Caracas, Venezuela. Email address: maikold@yahoo.com
** Postal address: Department of Statistics, Universidad Carlos III de Madrid, 28903 Getafe, Spain. Email address: mrromera@est-econ.uc3m.es
Rights & Permissions [Opens in a new window]

Abstract


We consider a discrete-time risk process driven by proportional reinsurance and an interest rate process. We assume that the interest rate process behaves as a Markov chain. To reduce the risk of ruin, we may reinsure a part or even all of the reserve. Recursive and integral equations for ruin probabilities are given. Generalized Lundberg inequalities for the ruin probabilities are derived under a stationary policy. To illustrate these results, a numerical example is included.
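As a rough illustration of the kind of model described above, the following Monte Carlo sketch estimates finite-horizon ruin probabilities for a discrete-time surplus process with a Markov-chain interest rate and a stationary proportional-reinsurance policy. The surplus recursion X_n = X_{n-1}(1 + I_n) + c(b) - b·Y_n, the premium rule c(b), the exponential claim law, and all parameter values are illustrative assumptions, not the paper's exact formulation.

```python
# Hypothetical Monte Carlo sketch of a discrete-time risk process with
# proportional reinsurance and a Markov-chain interest rate, in the spirit
# of the model described in the abstract.  The recursion, premium rule,
# claim distribution, and parameters below are assumptions for illustration.
import numpy as np

rng = np.random.default_rng(0)

# Markov-chain interest rate: states and one-step transition matrix (assumed).
rates = np.array([0.01, 0.03, 0.06])
P = np.array([[0.6, 0.3, 0.1],
              [0.2, 0.6, 0.2],
              [0.1, 0.3, 0.6]])

def ruin_probability(x0, b, horizon=100, n_paths=5_000,
                     premium=1.0, reins_load=1.2, claim_mean=0.8):
    """Estimate P(ruin before `horizon`) for initial surplus x0 and a
    stationary retention level b in (0, 1] (proportional reinsurance)."""
    # Net premium after ceding a fraction (1 - b) of each claim to the
    # reinsurer at a loaded rate (assumed premium rule).
    c_b = premium - (1.0 - b) * reins_load * claim_mean
    ruined = 0
    for _ in range(n_paths):
        x = x0
        state = rng.integers(len(rates))            # initial interest state
        for _ in range(horizon):
            state = rng.choice(len(rates), p=P[state])
            claim = rng.exponential(claim_mean)     # assumed claim law
            # Surplus earns interest, collects the net premium, and pays
            # the retained share b of the claim.
            x = x * (1.0 + rates[state]) + c_b - b * claim
            if x < 0:
                ruined += 1
                break
    return ruined / n_paths

if __name__ == "__main__":
    for b in (1.0, 0.7, 0.5):
        print(f"b = {b:.1f}: estimated ruin prob. ~ {ruin_probability(5.0, b):.4f}")
```

Lowering the retention level b cedes more of each claim to the reinsurer at the cost of premium income; that trade-off between retained risk and net premium is the kind of effect the Lundberg-type bounds in the paper are meant to control.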

Type
Research Article
Copyright
Copyright © Applied Probability Trust 2009 
