The retirement systems in many developed countries have been increasingly moving from defined benefit towards defined contribution systems. In defined contribution systems, financial and longevity risks are shifted from pension providers to retirees. In this paper, we use a probabilistic approach to analyse the uncertainty associated with superannuation accumulation and decumulation. We apply an economic scenario generator called the Simulation of Uncertainty for Pension Analysis (SUPA) model to project uncertain future financial and economic variables. This multi-factor stochastic investment model, based on the Monte Carlo method, allows us to obtain the probability distribution of possible outcomes for the superannuation accumulation and decumulation phases, including relevant percentiles. We present two examples to demonstrate the implementation of the SUPA model for the uncertainties during both phases under the current superannuation and Age Pension policy, and test two superannuation policy reforms suggested by the Grattan Institute.
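The SUPA model itself is a multi-factor economic scenario generator, but the core idea of obtaining outcome percentiles by Monte Carlo can be illustrated with a much simpler sketch. The contribution level, return distribution and horizon below are illustrative assumptions, not values from the paper:

```python
import numpy as np

rng = np.random.default_rng(42)

# Hypothetical assumptions (not from the paper): a 30-year accumulation
# phase, annual contributions of $10,000, and i.i.d. annual real returns
# with 5% mean and 12% volatility. The point is only to show how Monte
# Carlo yields a probability distribution of retirement balances.
n_paths, n_years = 100_000, 30
contribution = 10_000.0

returns = rng.normal(0.05, 0.12, size=(n_paths, n_years))
balances = np.zeros(n_paths)
for year in range(n_years):
    balances = (balances + contribution) * (1.0 + returns[:, year])

# Percentiles of the simulated accumulation outcomes.
for p in (5, 25, 50, 75, 95):
    print(f"{p}th percentile: ${np.percentile(balances, p):,.0f}")
```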
One of the limiting factors for the analysis of minor elements in multiphase materials by electron probe microanalysis is the effect of secondary fluorescence (SF), which is not accounted for by matrix corrections. Although the apparent concentration due to SF can be calculated numerically or measured experimentally, detailed investigations of this effect for fine-grained materials are scarce. In this work, we use the Monte Carlo simulation program PENEPMA to examine and correct the effect of SF affecting micron-sized mineral inclusions hosted by other minerals. A concentration profile across an olivine [(Mg,Fe)2SiO4] inclusion in chromite (Fe2+Cr2O4) is measured and used to assess the reliability of calculations, where different boundary geometries are examined. Three application examples are presented, which include the determination of Cr in olivine and serpentine [Mg3Si2O5(OH)4] inclusions hosted by chromite and of Fe in quartz (SiO2) inclusions hosted by almandine garnet (Fe3Al2Si3O12). Our results show that neglecting SF leads to concentrations that are overestimated by ~0.1–0.8 wt%, depending on inclusion size. In addition, assuming a straight boundary leads to an underestimation of SF effects by a factor of ~2–4. Because of its long-range nature, SF severely compromises trace element analyses even for phases as large as 1 mm in size.
Many different phases of matter can be characterized by the symmetries that they break. The Ising model for interacting spins illustrates this idea. In the absence of a magnetic field, there is a critical temperature, below which there is ferromagnetic ordering, and above which there is not. The magnetization is the order parameter for this transition: it is non-zero only when there is ferromagnetic ordering. The ferromagnetic phase transition in the Ising model is explored using the approximate method of mean field theory. Exact solutions are known for the Ising model in one and two dimensions and are discussed, along with numerical solutions using Monte Carlo simulations. Finally, the ideas of broken symmetry and their relationship to phase transitions are placed in the general framework of Landau theory and compared to results from mean field theory.
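Since the abstract mentions numerical solutions of the Ising model via Monte Carlo, a minimal Metropolis sketch may help make the method concrete. The lattice size, temperature and sweep count below are illustrative choices, not taken from the text:

```python
import numpy as np

rng = np.random.default_rng(0)

# Minimal Metropolis Monte Carlo for the 2D Ising model (J = 1, no field).
# Illustrative parameters only; production studies use larger lattices,
# longer runs and separate equilibration/measurement phases.
L, T, n_sweeps = 16, 2.0, 500   # T below Tc ~ 2.269, so ordering is expected
spins = rng.choice([-1, 1], size=(L, L))

for _ in range(n_sweeps):
    for _ in range(L * L):
        i, j = rng.integers(L), rng.integers(L)
        # Energy change from flipping spin (i, j), periodic boundaries.
        nn = (spins[(i + 1) % L, j] + spins[(i - 1) % L, j]
              + spins[i, (j + 1) % L] + spins[i, (j - 1) % L])
        dE = 2 * spins[i, j] * nn
        # Metropolis acceptance rule.
        if dE <= 0 or rng.random() < np.exp(-dE / T):
            spins[i, j] *= -1

# |m| is the order parameter: non-zero below Tc, near zero above it.
print("magnetization per spin:", abs(spins.mean()))
```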
This study aims to evaluate dosimetric parameters such as percentage depth dose, dosimetric field size, depth of maximum dose, surface dose, penumbra and output factors, measured using an IBA CC01 pinpoint chamber, an IBA stereotactic field diode (SFD) and a PTW microDiamond detector, against Monte Carlo (MC) simulation for 6 MV flattening filter-free small fields.
Materials and Methods:
The linear accelerator used in the study was a Varian TrueBeam® STx. All field sizes were defined by jaws. The required shift to the effective point of measurement was applied for the CC01, SFD and microDiamond in depth-dose measurements. The output factor of a given field size was taken as the ratio of meter readings normalised to the 10 × 10 cm2 reference field size, without applying any correction to account for changes in detector response. MC simulation was performed using PRIMO (a PENELOPE-based program). The phase space files for the MC simulation were obtained from the MyVarian website.
Results and Discussion:
Variations were seen between the detectors and MC, especially for fields smaller than 2 × 2 cm2, where lateral charged particle equilibrium was not satisfied. The microDiamond detector was found to be the most suitable for all measurements above 1 × 1 cm2. The SFD agreed closely with MC results except for an under-response in output factor measurements. The CC01 was observed to be suitable for field sizes above 2 × 2 cm2; a volume-averaging effect was observed in its penumbra measurements. No detector was found suitable for surface dose measurement, as surface ionisation differed from surface dose owing to fluence perturbation. Some discrepancies between measurements and MC values were observed, which may reflect source occlusion, a shift in the focal point, or mismatches between the real accelerator geometry and the simulated geometry.
For output factor measurements, the correction factors suggested by TRS-483 need to be applied to account for differences in detector response. The CC01 can be used for field sizes above 2 × 2 cm2 and the microDiamond detector is suitable above 1 × 1 cm2. Below these field sizes, perturbation and volume-averaging corrections need to be applied.
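The TRS-483 formalism corrects the raw detector reading ratio with a field-size- and detector-specific output correction factor. A minimal sketch of that arithmetic follows; the readings and the correction factor are hypothetical placeholders, and real values must come from the TRS-483 tables for the specific detector and beam:

```python
# Sketch of applying a TRS-483-style output correction factor.
def field_output_factor(m_clin, m_ref, k_correction):
    """Corrected output factor: detector reading ratio multiplied by the
    field-size-dependent output correction factor."""
    return (m_clin / m_ref) * k_correction

m_1x1, m_10x10 = 4.21e-9, 6.55e-9   # hypothetical chamber readings (C)
k_1x1 = 1.041                       # hypothetical TRS-483 correction factor
print("uncorrected:", m_1x1 / m_10x10)
print("corrected  :", field_output_factor(m_1x1, m_10x10, k_1x1))
```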
In this study, the radiation contamination dose (RCD) for different combinations of electron energy/distance, applicator and radius around the light intraoperative accelerator (LIAC), a high-dose-per-pulse dedicated intraoperative electron radiotherapy machine, has been estimated. Awareness of the magnitude of the RCD is highly recommended for medical linear electron accelerators.
Materials and methods:
The Monte Carlo N-Particle (MCNP) code was used to simulate the LIAC® head and calculate RCDs. Experimental RCD measurements were also performed with an Advanced Markus chamber inside an MP3-XS water phantom. Relative differences between simulations and measurements were calculated.
RCD reduction with distance from the machine follows the inverse-square law, as expected. The RCD decreased with increasing angle from the applicator walls opposite to the electron beam direction. The maximum differences between the simulation and measurement results were lower than 3%.
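The inverse-square behaviour means an RCD known at one reference distance can be rescaled to another. A minimal sketch, with purely illustrative numbers:

```python
# Inverse-square scaling of the radiation contamination dose.
def rcd_at(d_new, d_ref, rcd_ref):
    """Scale a reference RCD from distance d_ref to d_new (same units)."""
    return rcd_ref * (d_ref / d_new) ** 2

# Hypothetical example: an RCD of 0.4 (arbitrary units) at 1 m drops to
# a quarter of that value at 2 m.
print(rcd_at(d_new=2.0, d_ref=1.0, rcd_ref=0.4))
```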
The RCD is strongly dependent on electron beam energy, applicator size and distance from the accelerator head. Agreement between the MCNP results and ionometric dosimetry confirms the applicability of this simulation code for modelling the intraoperative electron beam and obtaining the dosimetric parameters. The RCD is a parameter that would restrict working with the LIAC in an unshielded operating room.
Fine-scale temporal processes, such as the synchronous deposition of organic materials, can be challenging to identify using 14C datasets. While some events, such as volcanic eruptions, leave clear evidence for synchronous deposition, synchroneity is more difficult to establish for other types of events. This has been a source of controversy regarding 14C dates associated with a hypothesized extraterrestrial impact at the Younger Dryas Boundary (YDB). To address this controversy, we first aggregate 14C measurements from Northern Hemisphere YDB sites. We also aggregate 14C measurements associated with a known synchronous event, the Laacher See volcanic eruption. We then use a Monte Carlo simulation to evaluate the magnitude of variability expected in a 14C dataset associated with a synchronous event. The simulation accounts for measurement error, calibration uncertainty, “old wood” effects, and laboratory measurement biases. The Laacher See 14C dataset is consistent with expectations of synchroneity generated by the simulation. However, the YDB 14C dataset is inconsistent with the simulated expectations for synchroneity. These results suggest that a central requirement of the Younger Dryas Impact Hypothesis, synchronous global deposition of a YDB layer, is extremely unlikely, calling into question the Younger Dryas Impact Hypothesis more generally.
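The paper's simulation accounts for measurement error, calibration uncertainty, old-wood effects and laboratory biases. A stripped-down version of the same logic is sketched below, with illustrative error magnitudes rather than the paper's values; calibration uncertainty is omitted for brevity:

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy synchroneity test: simulate the spread of 14C ages that would be
# reported for a single, truly synchronous event. All magnitudes below
# are illustrative assumptions, not the paper's inputs.
true_age = 12_900              # hypothetical event age
n_dates, n_sims = 30, 10_000

spreads = np.empty(n_sims)
for s in range(n_sims):
    meas_err = rng.normal(0, 50, n_dates)      # reported measurement error
    lab_bias = rng.normal(0, 20, n_dates)      # inter-laboratory offsets
    old_wood = rng.exponential(30, n_dates)    # samples older than the event
    dates = true_age + old_wood + lab_bias + meas_err
    spreads[s] = dates.std()

# An observed dataset whose spread falls far in the upper tail of this
# simulated distribution is inconsistent with a synchronous event.
print("95th percentile of simulated spread:", np.percentile(spreads, 95))
```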
In this paper, we introduce a new large family of Lévy-driven point processes with (and without) contagion, by generalising the classical self-exciting Hawkes process and doubly stochastic Poisson processes with non-Gaussian Lévy-driven Ornstein–Uhlenbeck-type intensities. The resulting framework may possess many desirable features such as skewness, leptokurtosis, mean-reverting dynamics, and more importantly, the ‘contagion’ or feedback effects, which could be very useful for modelling event arrivals in finance, economics, insurance, and many other fields. We characterise the distributional properties of this new class of point processes and develop an efficient sampling method for generating sample paths exactly. Our simulation scheme is mainly based on the distributional decomposition of the point process and its intensity process. Extensive numerical implementations and tests are reported to demonstrate the accuracy and effectiveness of our scheme. Moreover, we use portfolio risk management as an example to show the applicability and flexibility of our algorithms.
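The classical self-exciting Hawkes process that the paper generalises can be sampled by Ogata's thinning algorithm, sketched below for an exponential kernel with illustrative parameters. Note that this is the standard baseline method, not the paper's exact scheme based on distributional decomposition:

```python
import numpy as np

rng = np.random.default_rng(2)

# Ogata thinning for a classical Hawkes process with intensity
# lambda(t) = mu + sum_i alpha * exp(-beta * (t - t_i)).
mu, alpha, beta, horizon = 0.5, 0.8, 1.2, 100.0   # illustrative (alpha/beta < 1)

events, t = [], 0.0
while t < horizon:
    # With an exponential kernel, the intensity decays between events, so
    # the current intensity is a valid upper bound until the next event.
    lam_bar = mu + sum(alpha * np.exp(-beta * (t - ti)) for ti in events)
    t += rng.exponential(1.0 / lam_bar)
    if t >= horizon:
        break
    lam_t = mu + sum(alpha * np.exp(-beta * (t - ti)) for ti in events)
    if rng.random() < lam_t / lam_bar:   # accept with prob lambda(t)/bound
        events.append(t)

print(f"simulated {len(events)} events on [0, {horizon}]")
```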
Options with extendable features have many applications in finance and these provide the motivation for this study. The pricing of extendable options when the underlying asset follows a geometric Brownian motion with constant volatility has appeared in the literature. In this paper, we consider holder-extendable call options when the underlying asset follows a mean-reverting stochastic volatility process. The option price is expressed in integral forms which have known closed-form characteristic functions. We price these options using a fast Fourier transform, a finite difference method and Monte Carlo simulation, and we determine the efficiency and accuracy of the Fourier method in pricing holder-extendable call options for Heston parameters calibrated from the subprime crisis. We show that the fast Fourier transform reduces the computational time required to produce a range of holder-extendable call option prices by at least an order of magnitude. Numerical results also demonstrate that when the Heston correlation is negative, the Black–Scholes model under-prices in-the-money and over-prices out-of-the-money holder-extendable call options compared with the Heston model, which is analogous to the behaviour for vanilla calls.
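As a point of reference for the Monte Carlo leg of such a comparison, a plain Euler-discretised Heston simulation is sketched below for a vanilla European call. The parameters are illustrative rather than the subprime-crisis calibration, and the payoff omits the holder-extendable feature:

```python
import numpy as np

rng = np.random.default_rng(3)

# Monte Carlo pricing of a vanilla European call under the Heston model,
# using a full-truncation Euler scheme. All parameters are illustrative.
s0, v0, r, T, K = 100.0, 0.04, 0.02, 1.0, 100.0
kappa, theta, xi, rho = 2.0, 0.04, 0.5, -0.7
n_paths, n_steps = 100_000, 200
dt = T / n_steps

s = np.full(n_paths, s0)
v = np.full(n_paths, v0)
for _ in range(n_steps):
    z1 = rng.standard_normal(n_paths)
    z2 = rho * z1 + np.sqrt(1 - rho**2) * rng.standard_normal(n_paths)
    v_pos = np.maximum(v, 0.0)               # full truncation of variance
    s *= np.exp((r - 0.5 * v_pos) * dt + np.sqrt(v_pos * dt) * z1)
    v += kappa * (theta - v_pos) * dt + xi * np.sqrt(v_pos * dt) * z2

price = np.exp(-r * T) * np.maximum(s - K, 0.0).mean()
print("Heston MC call price:", round(price, 4))
```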
The analysis of list experiments depends on two assumptions, known as "no design effect" and "no liars". The no liars assumption is strong and may fail in many list experiments. In this paper, I relax the no liars assumption and develop a method to provide bounds for the prevalence of sensitive behaviors or attitudes under a weaker behavioral assumption about respondents' truthfulness toward the sensitive item. I apply the method to a list experiment on the anti-immigration attitudes of California residents and to a broad set of existing list experiment datasets. The prevalence of different items and the correlation structure among items on the list jointly determine the width of the bound estimates. In particular, the bounds tend to be narrower when the list consists of items of the same category, such as multiple groups or organizations, different corporate activities, or various considerations in politician decision-making. My paper illustrates when the full power of the no liars assumption is most needed to pin down the prevalence of the sensitive behavior or attitude, and facilitates estimation of the prevalence robust to violations of the no liars assumption for many list experiment applications.
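For context, the standard point estimator that relies on the no liars assumption is a simple difference in means between the treatment group's item counts (J + 1 items) and the control group's (J items). A sketch with fabricated data, where J = 3 and the true prevalence is 0.25:

```python
import numpy as np

rng = np.random.default_rng(4)

# Fabricated list-experiment data: counts over J = 3 control items, plus
# one sensitive item (prevalence 0.25) in the treatment group.
control = rng.binomial(3, 0.4, size=500)
treated = rng.binomial(3, 0.4, size=500) + rng.binomial(1, 0.25, size=500)

# Difference-in-means estimator of the sensitive-item prevalence.
tau_hat = treated.mean() - control.mean()
se = np.sqrt(treated.var(ddof=1) / 500 + control.var(ddof=1) / 500)
print(f"estimated prevalence: {tau_hat:.3f} (SE {se:.3f})")
```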
The fixed-effects estimator is biased in the presence of dynamic misspecification and omitted within variation correlated with one of the regressors. We argue and demonstrate that fixed-effects estimates can amplify the bias from dynamic misspecification, and that with omitted time-invariant variables and dynamic misspecification, the fixed-effects estimator can be more biased than the ‘naïve’ OLS estimator. We also demonstrate that the Hausman test does not reliably identify the least biased estimator when time-invariant and time-varying omitted variables or dynamic misspecifications exist. Accordingly, empirical researchers are ill-advised to rely on the Hausman test for model selection or to use the fixed-effects model as a default unless they can convincingly justify the assumption of correctly specified dynamics. Our findings caution applied researchers not to overlook the potential drawbacks of relying on the fixed-effects estimator as a default. The results presented here also call upon methodologists to study the properties of estimators in the presence of multiple model misspecifications. Our results suggest that scholars ought to devote much more attention to modeling dynamics appropriately, rather than relying on a default solution, before they control for potentially omitted variables with constant effects using a fixed-effects specification.
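A small simulation can make the setting concrete: the true process is dynamic, both estimators omit the lag, and the unit effects are correlated with the regressor. The data-generating parameters below are illustrative choices, not the paper's designs, and which estimator is more biased depends on them:

```python
import numpy as np

rng = np.random.default_rng(7)

# Dynamic panel DGP: y depends on its own lag, a regressor x, and unit
# effects alpha that are correlated with x. Both estimators below fit a
# misspecified static model (the lag is omitted).
N, T, rho, beta, n_sims = 200, 10, 0.7, 1.0, 500
fe_est, ols_est = [], []

for _ in range(n_sims):
    alpha = rng.standard_normal(N)
    x = rng.standard_normal((N, T)) + 0.5 * alpha[:, None]
    y = np.zeros((N, T))
    for t in range(1, T):
        y[:, t] = rho * y[:, t - 1] + beta * x[:, t] + alpha + rng.standard_normal(N)

    # Static fixed-effects (within) estimator, omitting the lag.
    xd = x - x.mean(axis=1, keepdims=True)
    yd = y - y.mean(axis=1, keepdims=True)
    fe_est.append((xd * yd).sum() / (xd ** 2).sum())

    # 'Naive' pooled OLS, also omitting the lag.
    ols_est.append(np.polyfit(x.ravel(), y.ravel(), 1)[0])

print("FE  mean bias:", np.mean(fe_est) - beta)
print("OLS mean bias:", np.mean(ols_est) - beta)
```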
The finding of this study is that the interaction volume in transmission-mode electron microscopy is laterally well ordered, with a remarkable and unexpected consequence: lateral subsections of the interaction volume produce subsections of the Kikuchi diffraction pattern. This makes the microstructure of samples directly visible in Kikuchi patterns. This is first illustrated on polycrystalline Ti–10Al–25Nb with an on-axis transmission Kikuchi diffraction set-up in a scanning electron microscope. It is then shown, via a Monte Carlo simulation and a large-angle convergent-beam electron diffraction experiment, that this phenomenon originates in the nature of the differential elastic and quasi-elastic cross sections. The phenomenon is then quantified by careful image analysis of Kikuchi patterns recorded across a vertical interface in a specifically designed and fabricated silicon sample. A Monte Carlo simulation reproducing all the geometric parameters is conducted. Experiments and simulations match qualitatively very well, but with a slight quantitative gap. The specificity of the thermal diffuse scattering cross section, not available in the simulation, is thought to be responsible for this gap. Besides Kikuchi diffraction, the case of the diffraction spots and diffuse background present in the pattern is also discussed.
A flow corridor is a new class of trajectory-based airspace that encloses groups of flights which fly along the same path in one direction and accept responsibility for separation from each other. A well-designed corridor could reduce airspace complexity, decrease the workload of air traffic controllers and increase airspace capacity. This paper analyses the impact of different self-separation parameters on the capacity and conflicts of the flow corridor. Both the quantitative impact and the interaction effects of pairs of parameters are evaluated using a combined discrete-continuous model and the Monte Carlo simulation method. The simulation results show that, although the initial separation is the dominating factor, the interactions between initial separation and separation buffer, minimum separation, extra switch buffer, extra threshold buffer and velocity difference threshold also have significant impacts on the capacity and conflicts of the flow corridor.
One of the stereotactic radiosurgery techniques is Gamma Knife radiosurgery, in which intracranial lesions that are inaccessible or inappropriate for surgery are treated using 201 cobalt-60 sources in one treatment session. In this conformal technique, the penumbra width, which results in out-of-field dose to tumour-adjacent normal tissues, should be determined accurately. The aim of this study is to calculate the penumbra widths of single and 201 beams for different collimator sizes of the Gamma Knife machine model 4C using the EGSnrc/BEAMnrc Monte Carlo simulation code and to compare the results with EBT3 film dosimetry data.
Methods and materials
In this study, simulation of the Gamma Knife machine model 4C was performed with the EGSnrc/BEAMnrc Monte Carlo codes. To investigate the physical penumbra width (80−20%), the single-beam and 201-beam profiles were obtained using the EGSnrc/DOSXYZnrc code and EBT3 films located at the isocentre in a spherical Plexiglas head phantom.
Based on the results, the single beam penumbra widths obtained from simulation data for 4, 8, 14 and 18 mm collimator sizes along X axis were 0·75, 0·77, 0·90 and 0·92 mm, respectively. The data for 201 beams obtained from simulation were 2·61, 4·80, 7·92 and 9·81 mm along X axis and 1·31, 1·60, 1·91 and 2·14 mm along Z axis and from film dosimetry were 3·21, 4·90, 8·00 and 10·61 mm along X axis and 1·22, 1·69, 2·01 and 2·25 mm along Z axis, respectively.
The differences between measured and simulated penumbra widths are in an acceptable range. However, for more precise measurement in the penumbra region, where the dose gradient is high, Monte Carlo simulation is recommended.
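The physical penumbra width reported above is the lateral distance between the 80% and 20% dose levels of a profile. A minimal sketch of that computation on a synthetic profile (the tanh edge below is a stand-in for measured or simulated data):

```python
import numpy as np

# Compute the 80-20% penumbra width of a lateral dose profile by
# interpolating the positions of the 80% and 20% dose levels.
x = np.linspace(-20.0, 20.0, 4001)                # position (mm)
profile = 0.5 * (1 + np.tanh(-(x - 9.0) / 0.8))   # synthetic field edge
profile /= profile.max()                          # normalise to 1.0

def crossing(x, d, level):
    """Linearly interpolate where the falling edge crosses `level`."""
    idx = np.where(d >= level)[0][-1]             # last point >= level
    x0, x1, d0, d1 = x[idx], x[idx + 1], d[idx], d[idx + 1]
    return x0 + (level - d0) * (x1 - x0) / (d1 - d0)

penumbra = crossing(x, profile, 0.20) - crossing(x, profile, 0.80)
print(f"80-20% penumbra width: {penumbra:.2f} mm")
```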
This letter compares the performance of multiple imputation and listwise deletion using a simulation approach. The focus is on data that are “missing not at random” (MNAR), in which case both multiple imputation and listwise deletion are known to be biased. In these simulations, multiple imputation yields results that are frequently more biased, less efficient, and with worse coverage than listwise deletion when data are MNAR. This is the case even with very strong correlations between fully observed variables and variables with missing values, such that the data are very nearly “missing at random.” These results recommend caution when comparing results from multiple imputation and listwise deletion if the true data-generating process is unknown.
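A stripped-down version of such a simulation is sketched below: y is missing with probability increasing in y itself (MNAR), and a single stochastic regression imputation stands in for full multiple imputation. All settings are illustrative, not those of the letter:

```python
import numpy as np

rng = np.random.default_rng(5)

# MNAR simulation: neither complete-case analysis nor imputation based
# on x can fully remove the bias when missingness depends on y itself.
n, n_sims, beta = 1_000, 500, 1.0
bias_lwd, bias_imp = [], []

for _ in range(n_sims):
    x = rng.standard_normal(n)
    y = beta * x + rng.standard_normal(n)
    missing = rng.random(n) < 1 / (1 + np.exp(-y))   # MNAR: depends on y
    obs = ~missing

    # Listwise deletion: OLS slope on complete cases only.
    b_lwd = np.polyfit(x[obs], y[obs], 1)[0]

    # Single stochastic regression imputation fitted to complete cases
    # (a simplified stand-in for multiple imputation).
    b0, a0 = np.polyfit(x[obs], y[obs], 1)
    resid_sd = (y[obs] - (a0 + b0 * x[obs])).std(ddof=2)
    y_imp = y.copy()
    y_imp[missing] = a0 + b0 * x[missing] + rng.normal(0, resid_sd, missing.sum())
    b_imp = np.polyfit(x, y_imp, 1)[0]

    bias_lwd.append(b_lwd - beta)
    bias_imp.append(b_imp - beta)

print("mean bias, listwise deletion:", np.mean(bias_lwd))
print("mean bias, regression imputation:", np.mean(bias_imp))
```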
We develop a forward-reverse expectation-maximization (FREM) algorithm for estimating parameters of a discrete-time Markov chain evolving through a certain measurable state-space. For the construction of the FREM method, we develop forward-reverse representations for Markov chains conditioned on a certain terminal state. We prove almost sure convergence of our algorithm for a Markov chain model with curved exponential family structure. On the numerical side, we carry out a complexity analysis of the forward-reverse algorithm by deriving its expected cost. Two application examples are discussed.
Computers hold the potential to draw legislative districts in a neutral way. Existing approaches to automated redistricting may introduce bias and encounter difficulties when drawing districts for large and even medium-sized jurisdictions. We present a new algorithm that neutrally generates legislative districts that are contiguous, balanced and relatively compact. The algorithm does not show the kinds of bias found in prior algorithms and is an advance over previously published redistricting algorithms because it is computationally more efficient. We use the new algorithm to draw 10,000 maps of congressional districts in Mississippi, Virginia, and Texas. We find it unlikely that the number of majority-minority districts we observe in the congressional maps of these states would arise through a neutral redistricting process.
Multiple imputation (MI) is often presented as an improvement over listwise deletion (LWD) for regression estimation in the presence of missing data. Against a common view, we demonstrate anew that the complete case estimator can be unbiased, even if data are not missing completely at random. As long as the analyst can control for the determinants of missingness, MI offers no benefit over LWD for bias reduction in regression analysis. We highlight the conditions under which MI is most likely to improve the accuracy and precision of regression results, and develop concrete guidelines that researchers can adopt to increase transparency and promote confidence in their results. While MI remains a useful approach in certain contexts, it is no panacea, and access to imputation software does not absolve researchers of their responsibility to know the data.
The Monte Carlo method is considered the most accurate method for dose calculation in radiotherapy. The purpose of this research is to build a Monte Carlo geometry of a 6 MV Primus LINAC as realistically as possible and to compare the simulation output with commissioning data using EGSnrc. The BEAMnrc and DOSXYZnrc (EGSnrc package) Monte Carlo model of the LINAC head was used as a benchmark.
In the first part, BEAMnrc was used to design the LINAC treatment head. In the second part, dose calculations and 3D dose files were produced with DOSXYZnrc. The simulated PDD and beam profiles obtained were compared with those calculated from commissioning data. Good agreement was found between the PDD (within 1·1%) and beam profiles from the Monte Carlo simulation and the commissioning data. After validation, TPR20,10, TMR and Sp values were calculated for five different field sizes.
Good agreement was found between the values calculated using the Monte Carlo simulation and the commissioning data. The average difference over the five field sizes is about 0·83% for Sp; for TPR20,10, the difference for the 10 × 10 cm2 field size is 0·29%; and for TMR, the average difference over the five field sizes is ~1·6%.
In conclusion, the BEAMnrc and DOSXYZnrc code package has very good accuracy in calculating the dose distribution for a 6 MV photon beam and can be considered a promising method for patient dose calculations. The Monte Carlo model of the Primus linear accelerator built in this study can be used to calculate dose distributions for cancer patients.
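TPR20,10, the beam-quality index validated above, can be measured directly or estimated from percentage depth doses at 20 and 10 cm depth using the empirical relation given in IAEA TRS-398. A minimal sketch, with hypothetical PDD values rather than the study's commissioning data:

```python
# Estimate the beam-quality index TPR20,10 from depth-dose data via the
# empirical TRS-398 relation: TPR20,10 = 1.2661 * PDD20,10 - 0.0595.
def tpr_20_10_from_pdd(pdd20, pdd10):
    """Estimate TPR20,10 from percentage depth doses at 20 and 10 cm."""
    return 1.2661 * (pdd20 / pdd10) - 0.0595

# Hypothetical PDD values typical of a 6 MV beam give TPR20,10 ~ 0.68.
print(tpr_20_10_from_pdd(pdd20=38.9, pdd10=67.0))
```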
A numerical comparison of the Monte Carlo (MC) simulation and the finite-difference method for pricing European options under a regime-switching framework is presented in this paper. We consider pricing options on stocks having two to four volatility regimes. Numerical results show that the MC simulation outperforms the Crank–Nicolson (CN) finite-difference method in both the low-frequency case and the high-frequency case. Even though both methods have linear growth, as the number of regimes increases, the computational time of CN grows much faster than that of MC. In addition, for the two-state case, we propose a much faster simulation algorithm whose computational time is almost independent of the switching frequency. We also investigate the performances of two variance-reduction techniques: antithetic variates and control variates, to further improve the efficiency of the simulation.
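Antithetic variates, one of the two variance-reduction techniques investigated, pair each driving normal draw z with -z and average the paired payoffs. A sketch under plain geometric Brownian motion (a single-regime simplification of the paper's regime-switching setting, with illustrative parameters):

```python
import numpy as np

rng = np.random.default_rng(6)

# Antithetic variates for a European call under geometric Brownian motion.
s0, K, r, sigma, T, n = 100.0, 100.0, 0.05, 0.2, 1.0, 200_000

def terminal_price(z):
    """Terminal asset price under risk-neutral GBM driven by normal draw z."""
    return s0 * np.exp((r - 0.5 * sigma**2) * T + sigma * np.sqrt(T) * z)

z = rng.standard_normal(n)
disc = np.exp(-r * T)
plain = disc * np.maximum(terminal_price(z) - K, 0.0)
anti = disc * 0.5 * (np.maximum(terminal_price(z) - K, 0.0)
                     + np.maximum(terminal_price(-z) - K, 0.0))

# The antithetic estimator has the same mean but a smaller standard error.
print("plain MC     :", plain.mean(), "+/-", plain.std(ddof=1) / np.sqrt(n))
print("antithetic MC:", anti.mean(), "+/-", anti.std(ddof=1) / np.sqrt(n))
```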