We conducted unmanned aerial vehicle lidar missions in the Maya Lowlands between June 2017 and June 2018 to develop appropriate methods, procedures, and standards for drone lidar surveys of ancient Maya settlements and landscapes. Three site locations were tested within the upper Usumacinta River region using Phoenix Lidar Systems: Piedras Negras, Guatemala, was tested in 2017, and Budsilha and El Infiernito, both in Mexico, were tested in 2018. These sites represent a range of natural and cultural contexts, making them ideal for evaluating the usefulness of the technology in the field. Results from standard digital elevation and surface models demonstrate the utility of deploying drone lidar in the Maya Lowlands and throughout Latin America. Drone surveys can be used to target and efficiently document ancient landscapes and settlements. Such an approach is adaptive to fieldwork and is cost-effective but still requires planning and thoughtful evaluation of samples. Future studies will test and evaluate the methods and techniques for filtering and processing these data.
We consider the problem of pricing modern guarantee concepts in unit-linked life insurance, where the guaranteed amount grows contingent on the performance of an investment fund that acts simultaneously as the underlying security and the replicating portfolio. Using the Martingale Method, this nonstandard pricing problem can be transformed into a fixed-point problem, whose solution requires the evaluation of conditional expectations of highly path-dependent payoffs. By adapting the least-squares Monte Carlo method for American option pricing problems, we develop a new numerical approach to approximate the value of contingent guarantees and prove its convergence. Our valuation procedure can be applied to large-scale pricing problems, for which existing methods are infeasible, and leads to significant improvements in performance.
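The least-squares Monte Carlo idea the abstract adapts can be illustrated with its classic application, the Longstaff-Schwartz method for American options: conditional expectations of path-dependent continuation values are approximated by regressing realized cash flows on basis functions of the current state. The sketch below is a hedged illustration of that regression step only; the dynamics (geometric Brownian motion), the American put payoff, and all parameter values are illustrative assumptions, not the paper's guarantee contract or fixed-point scheme.

```python
# Minimal Longstaff-Schwartz sketch: approximate conditional expectations
# by least-squares regression on simulated paths. All parameters are
# illustrative assumptions, not taken from the paper.
import numpy as np

rng = np.random.default_rng(0)
S0, K, r, sigma, T = 100.0, 100.0, 0.05, 0.2, 1.0
n_steps, n_paths = 50, 20000
dt = T / n_steps
disc = np.exp(-r * dt)

# Simulate geometric Brownian motion paths of the underlying.
z = rng.standard_normal((n_paths, n_steps))
log_incr = (r - 0.5 * sigma**2) * dt + sigma * np.sqrt(dt) * z
S = S0 * np.exp(np.cumsum(log_incr, axis=1))
S = np.hstack([np.full((n_paths, 1), S0), S])

payoff = lambda s: np.maximum(K - s, 0.0)  # American put payoff

# Backward induction: cash flows under the current exercise policy.
cash = payoff(S[:, -1])
for t in range(n_steps - 1, 0, -1):
    cash *= disc                       # discount one step back to time t
    itm = payoff(S[:, t]) > 0          # regress only on in-the-money paths
    if itm.any():
        # Estimate the continuation value E[cash | S_t] by least squares
        # on a cubic polynomial basis in the (scaled) spot price.
        A = np.vander(S[itm, t] / K, 4)
        coef, *_ = np.linalg.lstsq(A, cash[itm], rcond=None)
        cont = A @ coef
        exercise = payoff(S[itm, t]) > cont
        idx = np.where(itm)[0][exercise]
        cash[idx] = payoff(S[idx, t])  # exercise now on these paths

price = disc * cash.mean()
print(round(price, 2))
```

The regression replaces an intractable conditional expectation with a cheap projection, which is the same device the abstract's fixed-point valuation of contingent guarantees relies on.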
Objective: Although most hospitals report very high levels of hand hygiene compliance (HHC), the accuracy of these overtly observed rates is questionable due to the Hawthorne effect and other sources of bias. In this study, we aimed (1) to compare HHC rates estimated using the standard audit method of overt observation by a known observer with those from a new audit method that employed a rapid (<15 minutes) “secret shopper” approach and (2) to pilot test a novel feedback tool.
Design: Quality improvement project using a quasi-experimental stepped-wedge design.
Setting: This study was conducted in 5 acute-care hospitals (17 wards, 5 intensive care units) in the Midwestern United States.
Methods: Sites recruited a hand hygiene observer from outside the acute-care units to rapidly and covertly observe entry and exit HHC during the study period, October 2016–September 2017. After 3 months of observations, sites received a monthly feedback tool that communicated HHC information from the new audit method.
Results: The absolute difference in HHC estimates between the standard and new audit methods was ~30%. No significant differences in HHC were detected between the baseline and feedback phases (OR, 0.92; 95% CI, 0.84–1.01), but the standard audit method had significantly higher estimates than the new audit method (OR, 9.83; 95% CI, 8.82–10.95).
Conclusions: HHC estimates obtained using the new audit method were substantially lower than estimates obtained using the standard audit method, suggesting that the rapid, secret-shopper method is less subject to bias. Providing feedback using HHC from the new audit method did not seem to impact HHC behaviors.
Background: Heterozygous loss-of-function mutations in the synaptic scaffolding gene SHANK2 are strongly associated with autism spectrum disorder (ASD). However, their impact on the function of human neurons is unknown. Derivation of induced pluripotent stem cells (iPSC) from affected individuals permits generation of live neurons to answer this question.
Methods: We generated iPSCs by reprogramming dermal fibroblasts of neurotypic and ASD-affected donors. To isolate the effect of SHANK2, we used CRISPR/Cas9 to knock out SHANK2 in control iPSCs and correct a heterozygous nonsense mutation in ASD-affected donor iPSCs. We then derived cortical neurons from SOX1+ neural precursor cells differentiated from these iPSCs. Using a novel assay that overcomes line-to-line variability, we compared neuronal morphology, total synapse number, and electrophysiological properties between SHANK2 mutants and controls.
Results: Relative to controls, SHANK2 mutant neurons have increased dendrite complexity, dendrite length, total synapse number (1.5-2-fold), and spontaneous excitatory postsynaptic current (sEPSC) frequency (3-7.6-fold).
Conclusions: ASD-associated heterozygous loss-of-function mutations in SHANK2 increase synaptic connectivity among human neurons by increasing synapse number and sEPSC frequency. This is partially supported by increased dendrite length and complexity, providing evidence that SHANK2 functions as a suppressor of dendrite branching during neurodevelopment.
I am grateful to Krishnarao Appasani for making the Herculean effort to prepare this volume on the rapidly expanding fields of Genome-Wide Association Studies (GWAS) and personalized medicine, and for inviting me to offer a few of my own introductory statements.
The complexity of the human disease state has always been an area of human curiosity. Over the last decade, GWASs have enabled us to expand our understanding of complex diseases using genetic-based approaches. We now see GWAS as a technology platform that promises to help move us into the era of personalized medicine.
In Genome-Wide Association Studies: From Polymorphism to Personalized Medicine, Dr. Appasani has assembled the contributing chapters into five main areas: an introduction to GWAS in medicine, GWAS in pharmacogenomics, different classes of genetic variants for GWAS, new technologies including next-generation sequencing, and population genetics. The component chapters will be highly valuable, not only to those who are experimentally active in these aspects of research, but also to those interested in potential drug discovery applications. The historical perspectives offered also bring forward a unique vantage point into ongoing and future research in this field.
I believe that this book will become a reliable guide for anyone attempting to understand the successes of GWAS, planning new experiments, and assessing its potential for the advancement of medicine. We are approaching a time when advances in genome sequencing technologies will deliver the long-awaited $1,000 genome, which promises to enable capture of all classes of genetic variants in a single experiment, empowering new and innovative future studies. As such, studying the content of this book will allow us to pause and reflect on where the field has come from and where it needs to go next.
Almost 10 years of GWAS: a feast of discoveries
Genome-wide association studies (GWASs) are less than 10 years old but have revolutionized discoveries in human genetics. The GWAS experimental design is based upon association: it exploits the fact that genetic variants located close together tend to be statistically correlated. Driven by advances in array technologies, these genome surveys of genetic variation have led to the discovery of thousands of DNA variants that are associated with complex traits, including many diseases. They have also led to new insights into human evolution and population differences.
Over the last twenty years, genome-wide association studies (GWAS) have revealed a great deal about the genetic basis of a wide range of complex diseases and they will undoubtedly continue to have a broad impact as we move to an era of personalised medicine. This authoritative text, written by leaders and innovators from both academia and industry, covers the basic science as well as the clinical, biotechnological and pharmaceutical potential of these methods. With special emphasis given to highlighting pharmacogenomics and population genomics studies using next-generation technology approaches, this is the first book devoted to combining association studies with single nucleotide polymorphisms, copy number variants, haplotypes and expressed quantitative trait loci. A reliable guide for newcomers to the field as well as for experienced scientists, this is a unique resource for anyone interested in how the revolutionary power of genomics can be applied to solve problems in complex disease.
The results of meta-analytic (MA) and validity generalization (VG) studies continue to be impressive. In contrast to earlier findings that capped the variance accounted for in job performance at roughly 16%, many recent studies suggest that a single predictor variable can account for between 16 and 36% of the variance in some aspect of job performance. This article argues that this “enhancement” in variance accounted for is often attributable not to improvements in science but to a dumbing down of the standards for the values of statistics used in correction equations. With rare exceptions, applied researchers have suspended judgment about what is and is not an acceptable threshold for criterion reliability in their quest for higher validities. We demonstrate a statistical dysfunction that is a direct result of using low criterion reliabilities in corrections for attenuation. Corrections typically applied to a single predictor in a VG study are instead applied to multiple predictors. A multiple correlation analysis is then conducted on corrected validity coefficients. It is shown that the corrections often used in single predictor studies yield a squared multiple correlation that appears suspect. Basically, the multiple predictor study exposes the tenuous statistical foundation of using abjectly low criterion reliabilities in single predictor VG studies. Recommendations for restoring scientific integrity to the meta-analyses that permeate industrial–organizational (I–O) psychology are offered.
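The statistical dysfunction the article describes turns on the classical correction for attenuation, in which an observed validity coefficient is divided by the square root of the criterion reliability. A short sketch makes the mechanism concrete; the correlation and reliability values below are illustrative assumptions, not figures from any study the article analyzes.

```python
# Hedged illustration of disattenuation: r_corrected = r_obs / sqrt(r_yy).
# Lower criterion reliabilities (r_yy) mechanically inflate the corrected
# validity and the variance "accounted for". Numbers are illustrative only.
import math

def correct_for_attenuation(r_observed, criterion_reliability):
    """Classical correction for attenuation in the criterion."""
    return r_observed / math.sqrt(criterion_reliability)

r_obs = 0.30  # hypothetical observed predictor-criterion correlation
for r_yy in (0.80, 0.52, 0.30):  # progressively lower criterion reliability
    r_c = correct_for_attenuation(r_obs, r_yy)
    print(f"r_yy={r_yy:.2f} -> corrected r={r_c:.3f}, R^2={r_c**2:.1%}")
```

Holding the observed correlation fixed at 0.30, dropping the assumed criterion reliability raises the corrected variance accounted for from roughly 11% toward 30%, which mirrors the article's point that the jump from 16% to 36% can reflect a lowered correction standard rather than better prediction.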