Introduced mammalian predators are responsible for the decline and extinction of many native species, with rats (genus Rattus) being among the most widespread and damaging invaders worldwide. In a naturally fragmented landscape, we demonstrate the multi-year effectiveness of snap traps in the removal of Rattus rattus and Rattus exulans from lava-surrounded forest fragments ranging in size from <0.1 to >10 ha. Relative to other studies, we observed low levels of fragment recolonization. Larger rats were the first to be trapped, with the average size of trapped rats decreasing over time. Rat removal led to distinct shifts in the foraging height and location of mongooses and mice, emphasizing the need to focus control efforts on multiple invasive species at once. Furthermore, because of a specially designed trap casing, we observed low non-target capture rates, suggesting that on Hawai‘i and similar islands lacking native rodents the risk of killing non-target species in snap traps may be lower than with the application of rodenticides, which have the potential to contaminate food webs. These efforts demonstrate that targeted snap-trapping is an effective removal method for invasive rats in fragmented habitats and that, where used, monitoring of recolonization should be included as part of a comprehensive biodiversity management strategy.
The approach to vascular access in children with CHD is a complex decision-making process that may have long-term implications. To date, evidence-based recommendations have not been established to inform this process.
The RAND/UCLA Appropriateness Method was used to develop miniMAGIC, including sequential phases: definition of scope and key terms; information synthesis and literature review; expert multidisciplinary panel selection and engagement; case scenario development; and appropriateness ratings by expert panel via two rounds. Specific recommendations were made for children with CHD.
Recommendations were established for the appropriateness of the selection, characteristics, and insertion technique of intravenous catheters in children with CHD with both univentricular and biventricular physiology.
miniMAGIC-CHD provides evidence-based criteria for intravenous catheter selection for children with CHD.
The DESCANT (Dementia Early Stage Cognitive Aids New Trial) intervention provided a personalised care package to improve the cognitive abilities, function and well-being of people with early-stage dementia and their carers by providing a range of memory aids, with training and support for use. This presentation will explore findings from a goal attainment scaling exercise undertaken within a multi-site pragmatic randomised trial, part of an NIHR-funded research programme ‘Effective Home Support in Dementia Care: Components, Impacts and Costs of Tertiary Prevention.’
The aim was to describe the Goal Attainment Scaling (GAS) approach developed; investigate the types of goals identified by people with dementia and their carers and subsequent attainment; and explore the role of Dementia Support Practitioners (DSPs) in the process. This GAS exercise was designed by researchers, a clinical psychologist, a clinician and a DSP. Goal setting and attainment were conducted with the person with dementia and their carer and recorded by DSPs. Data were obtained from 117 intervention records and semi-structured interviews with five DSPs delivering the intervention across seven NHS Trusts in England and Wales. The GAS exercise was conducted as planned with goals and extent of involvement in the exercise tailored to individual participants and engagement was high. Demographic characteristics from the trial baseline dataset were analysed. Measures were created from intervention records to permit quantification and descriptive analysis. Interviews were professionally transcribed and subject to thematic analysis to identify salient themes.
A total of 293 goals were identified across the 117 participants. From these, 17 goal types were distinguished across six domains: self-care; household tasks; daily occupation; orientation; communication; and well-being and safety. A measure of goal attainment appropriate to both the client group and a modest intervention was obtained. On average, participants evidenced some improvement regarding the goals set. Qualitative findings suggested that, overall, DSPs were positive about their experience of goal setting. Although several challenges were identified, if these were overcome, measuring goal attainment was generally viewed as straightforward. GAS can be used in the context of a psychosocial intervention for people with early-stage dementia to identify and measure attainment of personalised care goals.
In this paper, we explore the use of an extensive list of Archimedean copulas in general and life insurance modelling. We consider not only the usual choices like the Clayton, Gumbel–Hougaard, and Frank copulas but also several others which have not drawn much attention in previous applications. First, we apply different copula functions to two general insurance data sets, co-modelling losses and allocated loss adjustment expenses, and also losses to building and contents. Second, we adopt these copulas for modelling the mortality trends of two neighbouring countries and calculate the market price of a mortality bond. Our results clearly show that the diversity of Archimedean copula structures gives much flexibility for modelling different kinds of data sets and that the copula and tail dependence assumption can have a significant impact on pricing and valuation. Moreover, we conduct a large simulation exercise to investigate further the caveats in copula selection. Finally, we examine a number of other estimation methods which have not been tested in previous insurance applications.
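As a concrete illustration of the kind of Archimedean structure discussed above, the following sketch simulates from a bivariate Clayton copula via the Marshall-Olkin frailty construction. This is a generic textbook device, not the paper's estimation code, and the parameter choice theta = 2 is arbitrary.

```python
import numpy as np
from scipy.stats import kendalltau

def sample_clayton(n, theta, rng=None):
    """Draw n pairs from a bivariate Clayton copula (theta > 0) using the
    Marshall-Olkin gamma-frailty construction."""
    rng = np.random.default_rng(rng)
    v = rng.gamma(shape=1.0 / theta, scale=1.0, size=n)  # latent frailty
    e = rng.exponential(size=(n, 2))                     # independent Exp(1)
    # Laplace transform of Gamma(1/theta, 1): psi(s) = (1 + s)^(-1/theta)
    return (1.0 + e / v[:, None]) ** (-1.0 / theta)

u = sample_clayton(50_000, theta=2.0, rng=42)
tau, _ = kendalltau(u[:, 0], u[:, 1])
print(round(tau, 2))  # Clayton implies Kendall's tau = theta / (theta + 2) = 0.5
```

The frailty construction generalises to the other copulas mentioned (Gumbel-Hougaard, Frank) by swapping in the Laplace transform of the corresponding mixing distribution.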
In this chapter I defend the view that to know what it is like to experience a phenomenal property is just to be consciously acquainted with it, to experience it. Knowledge of what it is like is not knowledge that. It is not conceptual/propositional at all. It does not require thought, or the deployment of concepts. Nor is it knowledge what in the sense of, for example, knowing what time it is, or knowing what the positive square root of 169 is, which is also conceptual. And it is not some kind of know-how. It is, I will argue, simple acquaintance with, being familiar with, a phenomenal property. To know what a particular kind of experience is like is to be familiar with the phenomenal property or properties that characterize it; and to be familiar with such properties is just to experience them. Acquaintance is the fundamental mode of knowledge of phenomenal properties instantiated in experience; it is knowing what it is like; and all it requires is the experience itself.
We consider a modification to the Poisson common factor model and utilise a generalised linear model (GLM) framework that incorporates a smoothing process and a set of linear constraints. We extend the standard GLM model structure to adopt Lagrange methods and P-splines such that smoothing and constraints are applied simultaneously as the parameters are estimated. Our results on Australian, Canadian and Norwegian data show that this modification results in an improvement in mortality projection in terms of producing more accurate forecasts in the out-of-sample testing. At the same time, projected male-to-female ratio of death rates at each age converges to a constant and the residuals of the models are sufficiently random, indicating that the use of smoothing does not adversely affect the fit of the model. Further, the irregular patterns in the estimates of the age-specific parameters are moderated as a result of smoothing and this model can be used to produce more regular projected life tables for pricing purposes.
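The constrained, penalised GLM fit described above can be written schematically as follows; this is a generic sketch with illustrative symbols, not the authors' exact notation:

```latex
\hat{\theta}
  = \arg\max_{\theta}\;
    \ell(\theta) \;-\; \tfrac{\lambda}{2}\,\theta^{\top} D_d^{\top} D_d\,\theta
  \quad \text{subject to } C\theta = c ,
```

solved by stationarising the Lagrangian \(\mathcal{L}(\theta,\omega) = \ell(\theta) - \tfrac{\lambda}{2}\theta^{\top} D_d^{\top} D_d\,\theta + \omega^{\top}(C\theta - c)\), where \(\ell\) is the GLM log-likelihood, \(D_d\) is a \(d\)-th order difference matrix acting on the spline coefficients (the P-spline penalty), and \(C\), \(c\) encode the linear constraints; the multipliers \(\omega\) are estimated jointly with \(\theta\), which is how smoothing and constraints are applied simultaneously.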
In this paper we analyse insurance claim frequency data using the bivariate negative binomial regression (BNBR) model. We use general insurance data on claims from simple third-party liability insurance and comprehensive insurance. We find that bivariate regression, with its capacity for modelling correlation between the two observed claim counts, provides both a superior fit and out-of-sample prediction compared with the more common practice of fitting univariate negative binomial regression models separately to each claim type. Noting the complexity of BNBR models and their potential for a large number of parameters, we explore the use of model shrinkage methodology, namely the least absolute shrinkage and selection operator (Lasso) and ridge regression. We find that models estimated using shrinkage methods outperform the ordinary likelihood-based models when being used to make predictions out-of-sample. We find that the Lasso performs better than ridge regression as a method of shrinkage.
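The shrinkage comparison above can be illustrated on a toy linear problem (simulated data, not the paper's BNBR models): the Lasso's L1 penalty zeroes out irrelevant coefficients, while ridge's L2 penalty only shrinks them.

```python
import numpy as np
from sklearn.linear_model import Lasso, Ridge

rng = np.random.default_rng(0)
n, p = 200, 10
X = rng.normal(size=(n, p))
beta = np.array([2.0, -1.5] + [0.0] * 8)     # only 2 of 10 predictors matter
y = X @ beta + rng.normal(scale=0.5, size=n)

lasso = Lasso(alpha=0.1).fit(X, y)
ridge = Ridge(alpha=10.0).fit(X, y)

# The L1 penalty sets irrelevant coefficients exactly to zero;
# the L2 penalty only shrinks them towards zero.
lasso_zeros = int(np.sum(np.abs(lasso.coef_) < 1e-8))
ridge_zeros = int(np.sum(np.abs(ridge.coef_) < 1e-8))
print(lasso_zeros, ridge_zeros)
```

This automatic variable selection is why the Lasso is attractive for heavily parameterised count-regression models, where the penalty strength would in practice be chosen by cross-validation.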
In the 1970s, Feldman and Moore classified separably acting von Neumann algebras containing Cartan maximal abelian self-adjoint subalgebras (MASAs) using measured equivalence relations and 2-cocycles on such equivalence relations. In this paper we give a new classification in terms of extensions of inverse semigroups. Our approach is more algebraic in character and less point-based than that of Feldman and Moore. As an application, we give a restatement of the spectral theorem for bimodules in terms of subsets of inverse semigroups. We also show how our viewpoint leads naturally to a description of maximal subdiagonal algebras.
Recent studies point to overlap between neuropsychiatric disorders in symptomatology and genetic aetiology.
To systematically investigate genomics overlap between childhood and adult attention-deficit hyperactivity disorder (ADHD), autism spectrum disorder (ASD) and major depressive disorder (MDD).
Analysis of whole-genome blood gene expression and genetic risk scores of 318 individuals. Participants included individuals affected with adult ADHD (n = 93), childhood ADHD (n = 17), MDD (n = 63), ASD (n = 51), childhood dual diagnosis of ADHD–ASD (n = 16) and healthy controls (n = 78).
Weighted gene co-expression analysis results reveal disorder-specific signatures for childhood ADHD and MDD, and also highlight two immune-related gene co-expression modules correlating inversely with MDD and adult ADHD disease status. We find no significant relationship between polygenic risk scores and gene expression signatures.
Our results reveal disorder overlap and specificity at the genetic and gene expression level. They suggest new pathways contributing to distinct pathophysiology in psychiatric disorders and shed light on potential shared genomic risk factors.
This is the first of two papers in which we estimate transition probabilities amongst levels of disability as defined in the Australian Survey of Disability, Ageing and Carers. In this paper we describe both the main tools of our estimation and the estimation of the numbers of individuals in different disability categories at annual intervals using survey data that are available at five-year intervals. In Paper II we describe our estimation procedure, followed by its implementation, discussion of results and graduation of the estimated transition probabilities.
This is the second of two papers in which we estimate transition probabilities amongst levels of disability as defined in the Australian Survey of Disability, Ageing and Carers. In this paper we describe our estimation procedure, followed by its implementation, discussion of results and graduation of the estimated transition probabilities.
This paper explores how we can apply various modern data mining techniques to better understand Australian Income Protection Insurance (IPI). We provide a fast and objective method of scoring claims into different portfolios using available rating factors. Results from fitting several prediction models are compared based on not only the conventional loss prediction error function, but also a modified loss function. We demonstrate that the prediction power of all the data mining methods under consideration is clearly evident using a misclassification plot. We also point out that this predictability can be masked by looking at just the conventional prediction error function. We then suggest using the stepwise regression technique to reduce the number of variables used in the data mining methods. Apart from this variable selection method, we also look at principal components analysis to increase understanding of the rating factors that drive claim durations of insured lives. We also discuss and compare how different variable combining techniques can be used to weight available predicting variables. One interesting outcome we discover is that principal components analysis and the weighted combination prediction model together provide very consistent results on identifying the most significant variables for explaining claim durations.
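As a toy illustration of how principal components analysis can reveal that a handful of latent factors drive many correlated rating factors (simulated data; the paper's IPI variables are not reproduced here):

```python
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(0)
# Toy stand-in: 5 correlated "rating factors" on 500 policies, driven by
# 2 latent factors plus a little measurement noise.
latent = rng.normal(size=(500, 2))
X = latent @ rng.normal(size=(2, 5)) + 0.1 * rng.normal(size=(500, 5))

pca = PCA().fit(X)
cum = pca.explained_variance_ratio_.cumsum()
print(np.round(cum, 2))  # the first two components capture almost all variance
```

A steep cumulative explained-variance curve like this is the signal that a few combined factors, rather than every raw rating variable, are doing the predictive work.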
It has been postulated that aging is the consequence of an accelerated accumulation of somatic DNA mutations and that subsequent errors in the primary structure of proteins ultimately reach levels sufficient to affect organismal functions. The technical limitations of detecting somatic changes and the lack of insight about the minimum level of erroneous proteins to cause an error catastrophe hampered any firm conclusions on these theories. In this study, we sequenced the whole genome of DNA in whole blood of two pairs of monozygotic (MZ) twins, 40 and 100 years old, by two independent next-generation sequencing (NGS) platforms (Illumina and Complete Genomics). Potentially discordant single-base substitutions supported by both platforms were validated extensively by Sanger, Roche 454, and Ion Torrent sequencing. We demonstrate that the genomes of the two twin pairs are germ-line identical between co-twins, and that the genomes of the 100-year-old MZ twins differ by eight confirmed somatic single-base substitutions, five of which are within introns. Putative somatic variation between the 40-year-old twins was not confirmed in the validation phase. We conclude from this systematic effort that by using two independent NGS platforms, somatic single nucleotide substitutions can be detected, and that a century of life did not result in a large number of detectable somatic mutations in blood. The low number of somatic variants observed by using two NGS platforms might provide a framework for detecting disease-related somatic variants in phenotypically discordant MZ twins.
This paper aims to evaluate the aggregate claims distribution under the collective risk model when the number of claims follows a so-called generalised (a, b, 1) family distribution. The definition of the generalised (a, b, 1) family of distributions is given first; then a simple matrix-form recursion for the compound generalised (a, b, 1) distributions is derived to calculate the aggregate claims distribution with discrete non-negative individual claims. Continuous individual claims are discussed as well and an integral equation of the aggregate claims distribution is developed. Moreover, a recursive formula for calculating the moments of aggregate claims is also obtained in this paper. With the recursive calculation framework being established, members that belong to the generalised (a, b, 1) family are discussed. As an illustration of potential applications of the proposed generalised (a, b, 1) distribution family on modelling insurance claim numbers, two numerical examples are given. The first example illustrates the calculation of the aggregate claims distribution using a matrix-form Poisson for claim frequency with logarithmic claim sizes. The second example is based on real data and illustrates maximum likelihood estimation for a set of distributions in the generalised (a, b, 1) family.
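The matrix-form recursion itself is not reproduced in the abstract, but the scalar (a, b, 1)-class Panjer recursion it generalises can be sketched as follows. This is the standard textbook form, assuming claim sizes supported on 1, 2, …; the zero-truncated Poisson example is purely illustrative.

```python
import math

def panjer_ab1(p0, p1, a, b, fx, smax):
    """Aggregate-claims pmf f_S via the (a, b, 1)-class recursion:
    f_S(s) = (p1 - (a+b)p0) f_X(s) + sum_{j=1}^{s} (a + b*j/s) f_X(j) f_S(s-j),
    assuming f_X(0) = 0, so that f_S(0) = P(N = 0) = p0."""
    fs = [0.0] * (smax + 1)
    fs[0] = p0
    for s in range(1, smax + 1):
        total = (p1 - (a + b) * p0) * (fx[s] if s < len(fx) else 0.0)
        for j in range(1, min(s, len(fx) - 1) + 1):
            total += (a + b * j / s) * fx[j] * fs[s - j]
        fs[s] = total
    return fs

# Zero-truncated Poisson(lam) claim counts: a = 0, b = lam, p0 = 0.
lam = 2.0
p1 = lam * math.exp(-lam) / (1 - math.exp(-lam))
fx = [0.0, 1 / 3, 1 / 3, 1 / 3]    # claim sizes uniform on {1, 2, 3}
fs = panjer_ab1(p0=0.0, p1=p1, a=0.0, b=lam, fx=fx, smax=60)
print(round(sum(fs), 6))           # pmf mass captured up to s = 60
```

With an untruncated Poisson the leading term vanishes (p1 = (a + b) p0) and the recursion collapses to the familiar (a, b, 0) Panjer formula.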
In this paper, we present a Markov chain Monte Carlo (MCMC) simulation algorithm for estimating parameters in the kernel density estimation of bivariate insurance claim data via transformations. Our data set consists of two types of auto insurance claim costs and exhibits a high level of skewness in the marginal empirical distributions. Therefore, the kernel density estimator based on original data does not perform well. However, the density of the original data can be estimated through estimating the density of the transformed data using kernels. It is well known that the performance of a kernel density estimator is mainly determined by the bandwidth, and only in a minor way by the kernel. In the current literature, there have been some developments in the area of estimating densities based on transformed data, where bandwidth selection usually depends on pre-determined transformation parameters. Moreover, in the bivariate situation, the transformation parameters were estimated for each dimension individually. We use a Bayesian sampling algorithm and present a Metropolis-Hastings sampling procedure to sample the bandwidth and transformation parameters from their posterior density. Our contribution is to estimate the bandwidths and transformation parameters simultaneously within a Metropolis-Hastings sampling procedure. Moreover, we demonstrate that the correlation between the two dimensions is better captured through the bivariate density estimator based on transformed data.
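A minimal random-walk Metropolis-Hastings step of the kind referred to above looks as follows. The standard-normal target here is a stand-in; in the paper's setting, `log_post` would be the joint posterior density of the bandwidths and transformation parameters.

```python
import numpy as np

def metropolis_hastings(log_post, x0, n_iter, step, rng=None):
    """Generic random-walk Metropolis-Hastings sampler (illustrative)."""
    rng = np.random.default_rng(rng)
    x, lp = x0, log_post(x0)
    draws = np.empty(n_iter)
    for i in range(n_iter):
        prop = x + rng.normal(scale=step)         # symmetric proposal
        lp_prop = log_post(prop)
        if np.log(rng.uniform()) < lp_prop - lp:  # accept w.p. min(1, ratio)
            x, lp = prop, lp_prop
        draws[i] = x
    return draws

# Toy target: standard-normal log-density (constants cancel in the ratio).
draws = metropolis_hastings(lambda x: -0.5 * x * x, x0=0.0,
                            n_iter=20_000, step=1.0, rng=1)
post = draws[5_000:]          # discard burn-in
print(post.mean(), post.std())
```

Sampling all parameters jointly within one such chain, rather than fixing the transformation parameters before selecting bandwidths, is the simultaneous-estimation idea the abstract describes.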
Sensitivity analysis, or so-called ‘stress-testing’, has long been part of the actuarial contribution to pricing, reserving and management of capital levels in both life and non-life assurance. Recent developments in the area of derivatives pricing have seen the application of adjoint methods to the calculation of option price sensitivities including the well-known ‘Greeks’ or partial derivatives of option prices with respect to model parameters. These methods have been the foundation for efficient and simple calculations of a vast number of sensitivities to model parameters in financial mathematics. This methodology has yet to be applied to actuarial problems in insurance or in pensions. In this paper we consider a model for a defined benefit pension scheme and use adjoint methods to illustrate the sensitivity of fund valuation results to key inputs such as mortality rates, interest rates and levels of salary rate inflation. The method of adjoints is illustrated in the paper and numerical results are presented. Efficient calculation of the sensitivity of key valuation results to model inputs is useful information for practising actuaries as it provides guidance as to the relative ultimate importance of various judgments made in the formation of a liability valuation basis.
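To make the adjoint idea concrete, here is a hedged toy sketch, not the paper's pension model: the expected present value of a unit annuity, together with its sensitivity to every one-year survival probability, obtained in a single backward sweep and cross-checked against bump-and-revalue.

```python
import numpy as np

def epv_and_adjoint(p, v):
    """Expected present value of a unit annuity paid at times 1..n while
    alive, plus d(EPV)/d(p_s) for every one-year survival probability p_s,
    all obtained in one backward (adjoint) sweep."""
    n = len(p)
    surv = np.cumprod(p)                # surv[t-1] = prob. alive at time t
    disc = v ** np.arange(1, n + 1)
    epv = float(np.sum(disc * surv))
    grad = np.empty(n)
    tail = 0.0                          # running sum of v^t * surv_t for t >= s
    for s in range(n - 1, -1, -1):
        tail += disc[s] * surv[s]
        grad[s] = tail / p[s]           # each affected term is linear in p_s
    return epv, grad

p = np.full(10, 0.98)                   # flat toy survival probabilities
epv, grad = epv_and_adjoint(p, v=1 / 1.04)

# Cross-check one sensitivity against bump-and-revalue.
eps = 1e-6
p_bumped = p.copy()
p_bumped[3] += eps
bump = (epv_and_adjoint(p_bumped, v=1 / 1.04)[0] - epv) / eps
print(abs(grad[3] - bump) < 1e-4)
```

The point of the adjoint sweep is the cost profile: all n sensitivities come from one extra pass, whereas bump-and-revalue needs n full revaluations.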
We consider a set of workers' compensation insurance claim data where the aggregate number of losses (claims) reported to insurers are classified by year of occurrence of the event causing loss, the US state in which the loss event occurred and the occupation class of the insured workers to which the loss count relates. An exposure measure, equal to the total payroll of observed workers in each three-way classification, is also included in the dataset. Data are analysed across ten different states, 24 different occupation classes and seven separate observation years. A multiple linear regression model, with only predictors for main effects, could be estimated in 2^(23 + 9 + 1 + 1) = 2^34 ways, theoretically more than 17 billion different possible models! In addition, one might expect that the number of claims recorded in each year in the same state and relating to the same occupation class, are positively correlated. Different modelling assumptions as to the nature of this correlation should also be considered. On the other hand, it may reasonably be assumed that the number of losses reported from different states and from different occupation classes are independent. Our data can therefore be modelled using the statistical techniques applicable to panel data and we work with generalised estimating equations (GEE) in the paper. For model selection, Pan (2001) suggested an alternative to the AIC, namely the quasi-likelihood under independence model criterion (QIC). This paper develops and applies a Gibbs sampling algorithm for efficiently locating, out of the more than 17 billion possible models that could be considered for the analysis, that model with the optimal (least) QIC value. The technique is illustrated using both a simulation study and workers' compensation insurance claim data.
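The scale of that search space is easy to verify (2^34 = 17,179,869,184), and the coordinate-wise search idea can be sketched on a toy regression. Note this greedy version is a deliberate simplification: the paper's algorithm flips inclusion indicators probabilistically within a Gibbs sampler and scores models by QIC, whereas the stand-in below scores by AIC of an OLS fit.

```python
import numpy as np

assert 2 ** 34 == 17_179_869_184    # the "more than 17 billion" models

rng = np.random.default_rng(0)
n, p = 300, 12                      # 12 candidate predictors (toy stand-in)
X = rng.normal(size=(n, p))
beta = np.zeros(p)
beta[[0, 3, 7]] = [1.5, -2.0, 1.0]  # only three predictors truly matter
y = X @ beta + rng.normal(size=n)

def aic(mask):
    """Stand-in selection criterion (AIC of OLS); the paper scores by QIC."""
    cols = [np.ones(n)] + [X[:, j] for j in range(p) if mask[j]]
    Xs = np.column_stack(cols)
    coef, *_ = np.linalg.lstsq(Xs, y, rcond=None)
    rss = float(np.sum((y - Xs @ coef) ** 2))
    return n * np.log(rss / n) + 2 * Xs.shape[1]

# Coordinate-wise scan over inclusion indicators: a greedy simplification
# of a Gibbs-style scan over the 2^p candidate models.
mask = rng.integers(0, 2, size=p)
for _ in range(5):
    for j in range(p):
        flipped = mask.copy()
        flipped[j] ^= 1
        if aic(flipped) < aic(mask):
            mask = flipped
print(np.flatnonzero(mask))         # should include the true predictors 0, 3, 7
```

Visiting one indicator at a time means each pass costs p model fits rather than 2^p, which is what makes a 17-billion-model space searchable at all.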
Public organizations in developed countries are becoming more and more diverse, particularly along racial, ethnic and gender lines. Globalization, technology, and shifting cultural norms have contributed to increased percentages of women and people of colour in the workforce worldwide. In European countries, EU guidelines on immigration have led to marked increases in the numbers of foreign nationals living in many member countries (OECD 2003). The workforces of countries formerly under apartheid rule, such as Namibia and South Africa, are seeing fast, significant increases in the representation of ethnic minorities (Balogun 2001). In the United States, growing immigration has led to sharp increases in non-native-English-speaking residents (Rubaii-Barrett and Wise 2007). This surge in workforce diversity has created challenges for public managers who are accustomed to managing in a homogeneous environment. Diversity provides both opportunities and struggles for organizations with shifting demographic profiles, but research on managing diversity is still in its infancy. As this literature develops, it is becoming clear that employee diversity creates implications for both the sociology of organizations and the human resources management strategies that best address employee differences. These two lenses – one behavioural, one managerial – have provided the foundation for much of the work on organizational diversity and its consequences for performance.