We propose a generalized Cramér–Lundberg model of the risk theory of non-life insurance and study its ruin probability. Our model extends that of Dubey (1977) to the case of multiple insureds, where the counting process is a mixed Poisson process and the continuously varying premium rate is determined by a Bayesian rule on the number of claims. We give two proofs that, for each fixed value of the safety loading, the ruin probability is the same as that of the classical Cramér–Lundberg model and depends on neither the distribution of the mixing variable of the driving mixed Poisson process nor the number of claim contracts.
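The classical Cramér–Lundberg benchmark that the abstract compares against can be sketched by plain Monte Carlo. The exponential claim sizes and all parameter values below are illustrative assumptions, not taken from the paper:

```python
import random

def ruin_probability(u, c, lam, claim_mean, horizon, n_paths, seed=0):
    """Monte Carlo estimate of the finite-horizon ruin probability in the
    classical Cramér–Lundberg model: reserve R(t) = u + c*t - S(t), with
    Poisson(lam) claim arrivals and exponential claim sizes (mean claim_mean).
    All parameters here are illustrative, not from the paper."""
    rng = random.Random(seed)
    ruined = 0
    for _ in range(n_paths):
        t, reserve = 0.0, float(u)
        while True:
            dt = rng.expovariate(lam)          # waiting time to next claim
            if t + dt > horizon:
                break                          # no ruin before the horizon
            t += dt
            reserve += c * dt                  # premium income since last claim
            reserve -= rng.expovariate(1.0 / claim_mean)  # claim payout
            if reserve < 0:
                ruined += 1
                break
    return ruined / n_paths
```

With safety loading θ = c/(λ·claim_mean) − 1 > 0, raising the initial reserve u drives the estimate down, matching the classical exponential decay in u.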
The rich-get-richer rule reinforces actions that have been chosen frequently in the past. What happens to the evolution of individuals’ inclinations to choose an action when agents interact? Interaction tends to homogenize, while each individual’s dynamics tends to reinforce its own position. Interacting stochastic systems of reinforced processes have recently been considered in many papers, in which the asymptotic behavior is proven to exhibit almost sure synchronization. In this paper we consider models where, even though interaction among agents is present, synchronization may fail because of the choice of an individual nonlinear reinforcement. We show how these systems can naturally be considered as models for coordination games and for technological or opinion dynamics.
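A minimal single-agent illustration of nonlinear reinforcement (no interaction, which is the paper's actual setting) is a two-action urn in which an action is chosen with probability proportional to a power of the number of times it has been chosen. The exponent and initial weights below are illustrative assumptions:

```python
import random

def reinforced_urn(alpha, n_steps, seed=0):
    """Nonlinear Pólya-type urn sketch: action i is chosen with probability
    proportional to counts[i] ** alpha. alpha = 1 is the classical linear
    urn; alpha > 1 (superlinear reinforcement) tends to lock onto one action.
    Illustrative toy model, not the paper's interacting system."""
    rng = random.Random(seed)
    counts = [1, 1]                       # initial weights for two actions
    for _ in range(n_steps):
        w0, w1 = counts[0] ** alpha, counts[1] ** alpha
        pick = 0 if rng.random() < w0 / (w0 + w1) else 1
        counts[pick] += 1                 # reinforce the chosen action
    return counts
```

For alpha well above 1 one action rapidly monopolizes the choices, which is the kind of individual "locking" behavior that can defeat synchronization even when agents interact.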
In this paper, we consider an extended class of univariate and multivariate generalized Pólya processes and study its properties. In the generalized Pólya process considered in , each occurrence of an event increases the stochastic intensity of the counting process. In the extended class studied in this paper, by contrast, each occurrence decreases the stochastic intensity of the process, which induces a kind of negative dependence between the increments over disjoint time intervals. First, we define the extended class of generalized Pólya processes and derive some preliminary results used in the remainder of the paper. The extended class of generalized Pólya processes can be viewed as a class of generalized pure death processes, in which the death rate depends on both the state and the time. Based on the preliminary results, the main properties of the multivariate extended generalized Pólya process and meaningful characterizations are obtained. Finally, possible applications to reliability modeling are briefly discussed.
Taylor’s power law (or fluctuation scaling) states that, across comparable populations, the variance of each sample is approximately proportional to a power of the mean of the population. The law has been shown to hold by empirical observation in a broad range of disciplines, including demography, biology, economics, physics, and mathematics. In particular, it has been observed in problems involving population dynamics, market trading, thermodynamics, and number theory. In applications, many authors consider panel data in order to obtain laws of large numbers; essentially, we aim to consider ergodic behaviors without independence. We restrict our study to stationary time series and develop different Taylor exponents in this setting. From a theoretical point of view, there has been growing interest in the study of this phenomenon, with most works focusing on the so-called static Taylor’s law for independent samples. In this paper we introduce a dynamic Taylor’s law for dependent samples, using self-normalized expressions involving Bernstein blocks. A central limit theorem (CLT) is proved under either weak dependence or strong mixing assumptions for the marginal process. The limit behavior of the estimator involves a series of covariances, unlike the classical framework, where the limit behavior involves only the marginal variance. We also provide an asymptotic result for a goodness-of-fit procedure suitable for checking whether the corresponding dynamic Taylor’s law holds in empirical studies.
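The static Taylor’s law mentioned above is commonly estimated by ordinary least squares on log-variance against log-mean across samples; the sketch below uses synthetic exponential data (for which the true exponent is 2, since Var = Mean² for exponentials) as an illustrative assumption:

```python
import math
import random

def taylor_exponent(samples):
    """OLS estimate of b in the (static) Taylor law Var ~ a * Mean**b,
    fitted on (log sample mean, log sample variance) across samples."""
    xs, ys = [], []
    for s in samples:
        n = len(s)
        m = sum(s) / n
        v = sum((x - m) ** 2 for x in s) / (n - 1)   # unbiased variance
        xs.append(math.log(m))
        ys.append(math.log(v))
    mx, my = sum(xs) / len(xs), sum(ys) / len(ys)
    num = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    den = sum((x - mx) ** 2 for x in xs)
    return num / den                                  # OLS slope = exponent b

# Illustrative check: exponential samples with different means give b near 2.
rng = random.Random(1)
samples = [[rng.expovariate(1.0 / mu) for _ in range(20000)]
           for mu in (1, 2, 5, 10)]
b = taylor_exponent(samples)
```

The paper's dynamic version replaces these independent samples with Bernstein blocks of a stationary series and self-normalized statistics; the OLS-on-logs step above is only the classical static baseline.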
This paper investigates spatial data on the unit sphere. Traditionally, isotropic Gaussian random fields are used as the underlying mathematical model of cosmic microwave background (CMB) data. We discuss the generalized multifractional Brownian motion and its pointwise Hölder exponent on the sphere. The multifractional approach is used to investigate CMB data from the Planck mission, which consist of radiation measurements at narrow angles of the sky sphere. The results suggest that the estimated Hölder exponents for different CMB regions change from location to location, so the CMB temperature intensities are multifractional. The methodology developed is used to suggest two approaches for detecting regions with anomalies in the cleaned CMB maps.
This paper considers a variant of the classical Cramér–Lundberg model that is particularly appropriate in the credit context, with the distinguishing feature that it corresponds to a finite number of obligors. The focus is on computing the ruin probability, i.e. the probability that the initial reserve, increased by the interest received from the obligors and decreased by the losses due to defaults, drops below zero. As well as an exact analysis (in terms of transforms) of this ruin probability, an asymptotic analysis is performed, including an efficient importance-sampling-based simulation approach.
The base model is extended in multiple dimensions: (i) we consider a model in which there may, in addition, be losses that do not correspond to defaults; (ii) we analyze a model in which the individual obligors are coupled via a regime-switching mechanism; (iii) we extend the model so that, between losses, the reserve process behaves as a Brownian motion rather than a deterministic drift; and (iv) we consider a set-up with multiple groups of statistically identical obligors.
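The base model's ruin probability (a finite pool of obligors, interest income while they survive, a loss at each default) can be estimated by crude Monte Carlo before resorting to importance sampling. Exponential default times and losses, and all parameter values, are illustrative assumptions of this sketch:

```python
import random

def credit_ruin_prob(u, n_obligors, rate_per_obligor, default_rate,
                     loss_mean, n_paths, seed=0):
    """Monte Carlo sketch of the credit variant of Cramér–Lundberg:
    the reserve earns interest from the obligors still alive and drops by a
    random loss at each default; a path ends when all obligors have
    defaulted or the reserve goes negative. Parameters are illustrative."""
    rng = random.Random(seed)
    ruined = 0
    for _ in range(n_paths):
        reserve, alive = float(u), n_obligors
        while alive > 0:
            dt = rng.expovariate(alive * default_rate)   # time to next default
            reserve += alive * rate_per_obligor * dt     # interest income
            reserve -= rng.expovariate(1.0 / loss_mean)  # loss at the default
            alive -= 1
            if reserve < 0:
                ruined += 1
                break
    return ruined / n_paths
```

Because the obligor pool is finite, each path terminates after at most n_obligors defaults, unlike the infinite-horizon classical model; the rare-event regime (large u) is where the paper's importance-sampling approach becomes necessary.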
We consider the optimal portfolio and consumption problem for a jump-diffusion process with regime switching. Under the criterion of maximizing the expected discounted total utility of consumption, two methods, the dynamic programming principle and the stochastic maximum principle, are used to obtain the optimal result for a general objective function, namely the solution to a system of partial differential equations. Furthermore, we investigate power utility as a specific example and analyse the existence and uniqueness of the optimal solution. Under the constraints of no short-selling and nonnegative consumption, closed-form expressions for the optimal strategy and the value function are derived. In addition, we compare the optimal results for the jump-diffusion model with those for the pure diffusion model. Finally, we discuss our optimal results in some special cases.
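For the pure diffusion benchmark against which the jump-diffusion results are compared, the classical closed form under power utility is Merton's constant fraction of wealth in the risky asset. This is the textbook formula, not the paper's regime-switching solution; the numbers below are illustrative:

```python
def merton_fraction(mu, r, sigma, gamma):
    """Merton's optimal fraction of wealth in the risky asset under power
    utility with relative risk aversion gamma, in the pure diffusion model:
    pi* = (mu - r) / (gamma * sigma**2). Classical benchmark only."""
    return (mu - r) / (gamma * sigma ** 2)

# Illustrative parameters: drift 8%, risk-free rate 2%, volatility 20%,
# risk aversion 2 -> invest 75% of wealth in the risky asset.
pi_star = merton_fraction(0.08, 0.02, 0.2, 2.0)
```

In the paper's setting this fraction becomes regime-dependent and is further modified by the jump component and the no-short-selling constraint.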
This study empirically examines whether shock size matters for the effects of US monetary policy. Using a nonlinear local projection method, I find that large monetary policy shocks are less powerful than smaller ones, with the information effect being the potential source of the observed asymmetry in monetary policy efficacy.
We derive the large-sample distribution of the number of species in a version of Kingman’s Poisson–Dirichlet model constructed from an α-stable subordinator but with an underlying negative binomial process instead of a Poisson process. The model thus depends on a parameter from the subordinator and a parameter from the negative binomial process. The large-sample distribution of the number of species is derived as the sample size tends to infinity. An important component in the derivation is the introduction of a two-parameter version of the Dickman distribution, generalising the existing one-parameter version. Our analysis adds to the range of Poisson–Dirichlet-related distributions available for modeling purposes.
Rough volatility is a well-established statistical stylized fact of financial assets. This property has led to the design and analysis of various new rough stochastic volatility models. However, most of these developments have been carried out in the mono-asset case. In this work, we show that some specific multivariate rough volatility models arise naturally from microstructural properties of the joint dynamics of asset prices. To do so, we use Hawkes processes to build microscopic models that accurately reproduce high-frequency cross-asset interactions and investigate their long-term scaling limits. We emphasize the relevance of our approach by providing insights into the role of microscopic features such as momentum and mean reversion in the multidimensional price-formation process. In particular, we recover classical properties of high-dimensional stock correlation matrices.
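The microscopic building block here, a Hawkes process, can be simulated by Ogata's thinning algorithm. The sketch below is univariate with an exponential kernel, whereas the paper's models are multivariate; all parameter values are illustrative:

```python
import math
import random

def simulate_hawkes(mu, alpha, beta, horizon, seed=0):
    """Ogata thinning for a univariate Hawkes process with intensity
    lambda(t) = mu + sum_i alpha * exp(-beta * (t - t_i)) over past events
    t_i. Stability requires branching ratio alpha/beta < 1. A minimal
    univariate sketch; the paper works with multivariate versions."""
    rng = random.Random(seed)
    events, t = [], 0.0
    excitation = 0.0                  # sum of alpha * exp(-beta * (t - t_i))
    while t < horizon:
        lam_bar = mu + excitation     # upper bound: intensity only decays
        w = rng.expovariate(lam_bar)  # candidate waiting time
        excitation *= math.exp(-beta * w)   # decay over the waiting time
        t += w
        if t >= horizon:
            break
        if rng.random() <= (mu + excitation) / lam_bar:  # thinning step
            events.append(t)
            excitation += alpha       # self-excitation jump at the event
    return events
```

On average the process produces mu * horizon / (1 - alpha/beta) events; cross-exciting multivariate versions of this construction are what generate the rough-volatility scaling limits discussed in the abstract.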
In this paper a new stochastic failure model is proposed for cascading failures. In a system subject to cascading failures, after each component failure the remaining components suffer from increased load or stress, which shortens their residual lifetimes. To model this effect, the concept of the usual stochastic order is employed along with the accelerated life test model, and a new general class of stochastic failure models is generated.
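The qualitative mechanism, each failure accelerating the survivors' ageing, can be sketched with a load-sharing toy model. Exponential lifetimes and a constant hazard multiplier per failure are illustrative assumptions, not the paper's general class:

```python
import random

def cascading_failure_times(n, base_rate, accel, seed=0):
    """Sketch of a cascading/load-sharing model: n components with
    exponential lifetimes; after each failure the hazard rate of every
    survivor is multiplied by accel >= 1 (an accelerated-life effect).
    Returns the successive failure times. Illustrative toy model."""
    rng = random.Random(seed)
    times, t, rate = [], 0.0, base_rate
    for remaining in range(n, 0, -1):
        # memorylessness: time to the next failure among the survivors
        t += rng.expovariate(remaining * rate)
        times.append(t)
        rate *= accel              # increased stress on the survivors
    return times
```

With accel > 1 the cascade compresses the later failure times, so the system fails sooner on average than with independent (accel = 1) components.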