Medicine is becoming increasingly reliant on diagnostic, prognostic and screening tests for the successful treatment of patients. With new tests being developed all the time, a more informed understanding of the benefits and drawbacks of these tests is crucial. This book provides readers with the tools needed to evaluate and interpret these tests, and numerous real-world examples demonstrate the practical application and relevance of the material. The mathematics involved are rigorously explained using simple and informative language. Topics covered include the diagnostic process, reliability and accuracy of tests, and quantifying treatment benefits using randomized trials, amongst others. Engaging illustrations act as visual representations of the concepts discussed in the book, complementing the textual explanation. Based on decades of experience teaching in a clinical research training program, this fully updated second edition is an essential guide for anyone looking to select, develop or market medical tests.
This tutorial reference serves as a coherent overview of various statistical and mathematical approaches used in brain network analysis, where modeling the complex structures and functions of the human brain often poses many unique computational and statistical challenges. This book fills a gap as a textbook for graduate students while simultaneously articulating important and technically challenging topics. Whereas most available books are graph theory-centric, this text introduces techniques arising from graph theory and expands to include other models in its discussion of network science, regression, and algebraic topology. Links are included to the sample data and code used in generating the book's results and figures, helping to make the methods immediately usable to both researchers and students.
This practical book is designed for applied researchers who want to use mixed models with their data. It discusses the basic principles of mixed model analysis, including two-level and three-level structures, and covers continuous outcome variables, dichotomous outcome variables, and categorical and survival outcome variables. Emphasizing interpretation of results, the book develops the most important applications of mixed models, such as the study of group differences, longitudinal data analysis, multivariate mixed model analysis, IPD meta-analysis, and mixed model predictions. All examples are analyzed with Stata, and an extensive overview and comparison of alternative software packages is provided. All datasets used in the book are available for download, so readers can re-analyze the examples to gain a strong understanding of the methods. Although most examples are taken from epidemiological and clinical studies, this book is also highly recommended for researchers working in other fields.
Reflecting a sea change in how empirical research has been conducted over the past three decades, Foundations of Agnostic Statistics presents an innovative treatment of modern statistical theory for the social and health sciences. This book develops the fundamentals of what the authors call agnostic statistics, which considers what can be learned about the world without assuming that there exists a simple generative model that can be known to be true. Aronow and Miller provide the foundations for statistical inference for researchers unwilling to make assumptions beyond what they or their audience would find credible. Building from first principles, the book covers topics including estimation theory, regression, maximum likelihood, missing data, and causal inference. Using these principles, readers will be able to formally articulate their targets of inquiry, distinguish substantive assumptions from statistical assumptions, and ultimately engage in cutting-edge quantitative empirical research that contributes to human knowledge.
This book builds a much-needed bridge between biostatistics and organismal biology by linking the arithmetic of statistical studies of organismal form to the biological inferences that may follow from it. It incorporates a cascade of new explanations of regression, correlation, covariance analysis, and principal components analysis, before applying these techniques to an increasingly common data resource: the description of organismal forms by sets of landmark point configurations. For each data set, multiple analyses are interpreted and compared for insight into the relation between the arithmetic of the measurements and the rhetoric of the subsequent biological explanations. The text includes examples that range broadly over growth, evolution, and disease. For graduate students and researchers alike, this book offers a unique consideration of the scientific context surrounding the analysis of form in today's biosciences.
Specifically intended for lab-based biomedical researchers, this practical guide shows how to design experiments that are reproducible, with low bias, high precision, and widely applicable results. With specific examples from research using both cell cultures and model organisms, it explores key ideas in experimental design, assesses common designs, and shows how to plan a successful experiment. It demonstrates how to control biological and technical factors that can introduce bias or add noise, and covers rarely discussed topics such as graphical data exploration, choosing outcome variables, data quality control checks, and data pre-processing. It also shows how to use R for analysis, and is designed for those with no prior experience. An accompanying website (https://stanlazic.github.io/EDLB.html) includes all R code, data sets, and the labstats R package. This is an ideal guide for anyone conducting lab-based biological research, from students to principal investigators working in either academia or industry.
Planning a clinical study is much more than determining the basic study design. Who will you be studying? How do you plan to recruit your study subjects? How do you plan to retain them in the study? What data do you plan to collect? How will you obtain this data? How will you minimize bias? All these decisions must be consistent with the ethical considerations of studying people. This book teaches how to choose the best design for your question. Drawing on their many years working in clinical research, Nancy G. Berman and Robert A. Parker guide readers through the essential elements of study planning to help get them started. The authors offer numerous examples to illustrate the key decisions needed, describing what works, what does not work, and why. Written specifically for junior investigators beginning their research careers, this guide will also be useful to senior investigators needing to review specific topics.
Many problems in biology require an understanding of the relationships among variables in a multivariate causal context. Exploring such cause-effect relationships through a series of statistical methods, this book explains how to test causal hypotheses when randomised experiments cannot be performed. This completely revised and updated edition features detailed explanations for carrying out statistical methods using the popular and freely available R statistical language. Sections on d-sep tests, latent constructs that are common in biology, missing values, phylogenetic constraints, and multilevel models are also an important feature of this new edition. Written for biologists and using a minimum of statistical jargon, the concept of testing multivariate causal hypotheses using structural equations and path analysis is demystified. Assuming only a basic understanding of statistical analysis, this new edition is a valuable resource for both students and practising biologists.
This sophisticated package of statistical methods is for advanced master's (MPH) and PhD students in public health and epidemiology who are involved in the analysis of data. It makes the link from statistical theory to data analysis, focusing on the methods and data types most common in public health and related fields. Like most toolboxes, the statistical tools in this book are organized into sections with similar objectives. Unlike most toolboxes, however, these tools are accompanied by complete instructions, explanations, detailed examples, and advice on relevant issues and potential pitfalls - conveying skills, intuition, and experience. The only prerequisite is a first-year statistics course and familiarity with a computing package such as R, Stata, SPSS, or SAS. Though the book is not tied to a particular computing language, its figures and analyses were all created using R. Relevant R code, data sets, and links to public data sets are available from www.cambridge.org/9781107113084.
In most modern biomedical research projects, high-throughput genomic, proteomic, and transcriptomic experiments have become an essential component. Popular technologies include microarray, next-generation sequencing, mass spectrometry and proteomics assays. As these technologies have matured and their price has become affordable, omics data are being generated rapidly, and the integration and modeling of multi-lab and/or multi-omics data is a growing problem in the bioinformatics field. This book provides comprehensive coverage of these topics and will have a long-lasting impact on this evolving subject. Each chapter, written by a leader in the field, introduces state-of-the-art methods for handling the information integration, experimental data, and database problems of omics data.
What decides whether a person suffering misfortune bounces back quickly or falls into despair for years? Which processes and mechanisms constitute psychological resilience? Is there a particular, evolutionarily shaped model of human adaptation that enables a person to maintain mental health in unfavorable and dynamically changing circumstances?
'All these questions are addressed by the contributors to the monograph titled Resilience and Health in a Fast-Changing World. While searching for the answers, the authors draw on an extensive scholarly literature, their own theoretical investigations, and the outcomes of the empirical research they have conducted.' Nina Oginska-Bulik
Providing genome-informed personalized treatment is a goal of modern medicine. Identifying new translational targets in nucleic acid characterizations is an important step toward that goal. The information tsunami produced by such genome-scale investigations is stimulating parallel developments in statistical methodology and inference, analytical frameworks, and computational tools. Within the context of genomic medicine and with a strong focus on cancer research, this book describes the integration of high-throughput bioinformatics data from multiple platforms to inform our understanding of the functional consequences of genomic alterations. This includes rigorous and scalable methods for simultaneously handling diverse data types such as gene expression array, miRNA, copy number, methylation, and next-generation sequencing data. This material is written for statisticians who are interested in modeling and analyzing high-throughput data. Chapters by experts in the field offer a thorough introduction to the biological and technical principles behind multiplatform high-throughput experimentation.
Bioterrorism is not a new threat, but in an increasingly interconnected world, the potential for catastrophic outcomes is greater today than ever. The medical and public health communities are establishing biosurveillance systems designed to proactively monitor populations for possible disease outbreaks as a first line of defense. The ideal biosurveillance system should identify trends not visible to individual physicians and clinicians in near-real time. Many of these systems use statistical algorithms to look for anomalies and to trigger epidemiologic investigation, quantification, localization and outbreak management. This book discusses the design and evaluation of statistical methods for effective biosurveillance for readers with minimal statistical training. Weaving public health and statistics together, it presents basic and more advanced methods, with a focus on empirically demonstrating added value. Although the emphasis is on epidemiologic and syndromic surveillance, the statistical methods can be applied to a broad class of public health surveillance problems.
Recent decades have brought advances in statistical theory for missing data, which, combined with advances in computing ability, have allowed implementation of a wide array of analyses. In fact, so many methods are available that it can be difficult to ascertain when to use which method. This book focuses on the prevention and treatment of missing data in longitudinal clinical trials. Based on his extensive experience with missing data, the author offers advice on choosing analysis methods and on ways to prevent missing data through appropriate trial design and conduct. He offers a practical guide to key principles and explains analytic methods for the non-statistician using limited statistical notation and jargon. The book's goal is to present a comprehensive strategy for preventing and treating missing data, and to make available the programs used to conduct the analyses of the example dataset.
Genomics is having a major impact on therapeutics development in medicine. This book contains up-to-date information on the use of genomics in the design and analysis of therapeutic clinical trials, with a focus on novel approaches that provide a reliable basis for identifying which patients are most likely to benefit from each treatment. It is oriented to both clinical investigators and statisticians. For clinical investigators, it includes background information on clinical trial design and statistical analysis. For statisticians and others who want to go deeper, it covers state-of-the-art adaptive designs and the development and validation of probabilistic classifiers. The author describes the development and validation of prognostic and predictive biomarkers and their integration into clinical trials that establish their clinical utility for informing treatment decisions for future patients.
Translating laboratory discoveries into successful therapeutics can be difficult. Clinical Trials in Neurology aims to improve the efficiency of clinical trials and the development of interventions in order to enhance the development of new treatments for neurologic diseases. It introduces the reader to the key concepts underpinning trials in the neurosciences. This volume tackles the challenges of developing therapies for neurologic disorders, from the measurement of agents in the nervous system to the progression of clinical signs and symptoms, illustrating specific study designs and their applications to different therapeutic areas. Clinical Trials in Neurology covers key issues in Phase I, II and III clinical trials, as well as post-marketing safety surveillance. Topics addressed include regulatory and implementation issues, outcome measures and common problems in drug development. Written by a multidisciplinary team, this comprehensive guide is essential reading for neurologists, psychiatrists, neurosurgeons, neuroscientists, statisticians and clinical researchers in the pharmaceutical industry.
The success of the Apgar score demonstrates the astounding power of an appropriate clinical instrument. This down-to-earth book provides practical advice, underpinned by theoretical principles, on developing and evaluating measurement instruments in all fields of medicine. It equips you to choose the most appropriate instrument for specific purposes. The book covers measurement theories, methods and criteria for evaluating and selecting instruments. It provides methods to assess measurement properties, such as reliability, validity and responsiveness, and interpret the results. Worked examples and end-of-chapter assignments use real data and well-known instruments to build your skills at implementation and interpretation through hands-on analysis of real-life cases. All data and solutions are available online. This is a perfect course book for students and a perfect companion for professionals/researchers in the medical and health sciences who care about the quality and meaning of the measurements they perform.
Functional magnetic resonance imaging (fMRI) has become the most popular method for imaging brain function. Handbook of Functional MRI Data Analysis provides a comprehensive and practical introduction to the methods used for fMRI data analysis. Using minimal jargon, this book explains the concepts behind processing fMRI data, focusing on the techniques that are most commonly used in the field. This book provides background about the methods employed by common data analysis packages including FSL, SPM and AFNI. Some of the newest cutting-edge techniques, including pattern classification analysis, connectivity modeling and resting state network analysis, are also discussed. Readers of this book, whether newcomers to the field or experienced researchers, will obtain a deep and effective knowledge of how to employ fMRI analysis to ask scientific questions and become more sophisticated users of fMRI analysis software.
This book is for anyone who has biomedical data and needs to identify variables that predict an outcome, for two-group outcomes such as tumor/not-tumor, survival/death, or response to treatment. Statistical learning machines are ideally suited to these types of prediction problems, especially if the variables being studied may not meet the assumptions of traditional techniques. Learning machines come from the world of probability and computer science but are not yet widely used in biomedical research. This introduction brings learning machine techniques to the biomedical world in an accessible way, explaining the underlying principles in nontechnical language and using extensive examples and figures. The authors connect these new methods to familiar techniques by showing how to use the learning machine models to generate smaller, more easily interpretable traditional models. Coverage includes single decision trees, multiple-tree techniques such as Random Forests™, neural nets, support vector machines, nearest neighbors and boosting.