Good education requires student experiences that deliver lessons about practice as well as theory and that encourage students to work for the public good, especially in the operation of democratic institutions (Dewey 1923; Dewey 1938). We report on an evaluation of the pedagogical value of a research project involving 23 colleges and universities across the country. Faculty trained and supervised students who observed polling places in the 2016 General Election. Our findings indicate that this was a valuable learning experience in both the short and long terms. Students found their experiences valuable and reported learning both in general and specifically in relation to course material. Postelection, they also felt more knowledgeable about election science, voting behavior, and research methods. Students reported interest in participating in similar research in the future, said they would recommend the experience to other students, and expressed interest in further learning and research on the topics central to their experience. Our results suggest that participants came to appreciate the importance of elections and their study. Collectively, the participating students are engaged and efficacious, essential qualities of citizens in a democracy.
In May 2001, the World Health Assembly (WHA) passed a resolution urging member states to attain, by 2010, a minimum target of regularly administering anthelminthic drugs to at least 75%, and up to 100%, of all school-aged children at risk of morbidity. The refined global strategy for the prevention and control of schistosomiasis and soil-transmitted helminthiasis was issued the following year, and large-scale administration of anthelminthic drugs was endorsed as its central feature. This strategy has subsequently been termed ‘preventive chemotherapy’. Clearly, the 2001 WHA resolution led the way for concurrently controlling multiple neglected tropical diseases. In this paper, we recall the schistosomiasis situation in Africa in mid-2003. Adhering to strategic guidelines issued by the World Health Organization, we estimate the projected annual treatment needs with praziquantel among the school-aged population and critically discuss these estimates. The important role of geospatial tools for disease risk mapping, surveillance and predictions for resource allocation is emphasised. We clarify that schistosomiasis is only one of many neglected tropical diseases and that considerable uncertainties remain regarding global burden estimates. We examine new control initiatives targeting schistosomiasis and other tropical diseases that are often neglected. The prospects and challenges of integrated control are discussed, and the need for combining biomedical, educational and engineering strategies and geospatial tools for sustainable disease control is highlighted. We conclude that, for achieving integrated and sustainable control of neglected tropical diseases, a set of interventions must be tailored to a given endemic setting and fine-tuned over time in response to the changing nature and impact of control. Consequently, besides the environment, the prevailing demographic, health and social systems contexts need to be considered.
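The arithmetic behind such treatment-needs projections is essentially a multiplication of the at-risk school-aged population by the coverage target and an average drug quantity per treatment. The sketch below is purely illustrative and not taken from the paper: the `coverage` default reflects the 75% WHA target, while `tablets_per_child` is a hypothetical average assuming the standard 40 mg/kg single oral praziquantel dose dispensed as 600 mg tablets.

```python
def praziquantel_needs(at_risk_children, coverage=0.75, tablets_per_child=2.5):
    """Rough annual praziquantel requirement for school-aged children.

    All numbers here are illustrative assumptions, not figures from the
    paper: `coverage` reflects the WHA target of treating at least 75% of
    at-risk children; `tablets_per_child` is a hypothetical average number
    of 600 mg tablets per treatment, based on the standard 40 mg/kg single
    oral dose and typical school-age body weights.
    """
    treatments = at_risk_children * coverage
    tablets = treatments * tablets_per_child
    return treatments, tablets

# Hypothetical example: 100 million at-risk school-aged children.
treatments, tablets = praziquantel_needs(100_000_000)
print(f"{treatments:,.0f} treatments/year, about {tablets:,.0f} tablets/year")
```

Real projections are of course far more involved, since prevalence and risk vary spatially, which is precisely why the paper stresses geospatial tools for resource allocation.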
Students and investigators working in statistics, biostatistics, or applied statistics in general are constantly exposed to problems that involve large quantities of data. This is even more evident today, when massive datasets with an impressive amount of detail are produced in novel fields such as genomics or bioinformatics at large. Because, in such a context, exact statistical inference may be computationally out of reach and in many cases not even mathematically tractable, they have to rely on approximate results. Traditionally, the justification for these approximations was based on the convergence of the first four moments of the distributions of the statistics under investigation to those of some normal distribution. Today we know that such an approach is not always theoretically adequate and that a somewhat more sophisticated set of techniques based on asymptotic considerations may provide the appropriate justification. This need for more profound mathematical theory in statistical large-sample theory is patent in areas involving dependent sequences of observations, such as longitudinal and survival data or life tables, in which the use of martingale or related structures has distinct advantages.
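The kind of moment-based normal approximation described above can be seen in a short simulation (a minimal sketch, not from the text): the first four empirical moments of the standardized mean of Exponential(1) observations approach those of the standard normal distribution (0, 1, 0 and 3) as the sample size grows.

```python
import math
import random

random.seed(42)

def standardized_mean_sample(n, reps=5000):
    """Draw `reps` standardized sample means of n Exponential(1) variables.

    Exponential(1) has mean 1 and variance 1, so the standardized mean is
    sqrt(n) * (xbar - 1), which the central limit theorem says is
    approximately N(0, 1) for large n.
    """
    out = []
    for _ in range(reps):
        xbar = sum(random.expovariate(1.0) for _ in range(n)) / n
        out.append(math.sqrt(n) * (xbar - 1.0))
    return out

def first_four_moments(xs):
    """Empirical mean, variance, skewness and kurtosis of a sample."""
    m = sum(xs) / len(xs)
    var = sum((x - m) ** 2 for x in xs) / len(xs)
    skew = sum((x - m) ** 3 for x in xs) / len(xs) / var ** 1.5
    kurt = sum((x - m) ** 4 for x in xs) / len(xs) / var ** 2
    return m, var, skew, kurt

# As n grows, the moments approach those of N(0, 1): 0, 1, 0 and 3.
for n in (5, 50, 500):
    m, v, s, k = first_four_moments(standardized_mean_sample(n))
    print(f"n={n:3d}: mean={m:+.3f}, var={v:.3f}, skew={s:+.3f}, kurt={k:.3f}")
```

For small n the skewness of the underlying exponential is still visible; it fades at roughly the rate 2/sqrt(n), which is exactly the kind of statement that asymptotic theory makes precise.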
Unfortunately, most of the technical background for understanding such methods is dealt with in specialised articles or textbooks written for a readership with such a high level of mathematical knowledge that they exclude a large portion of potential users. We tried to bridge this gap in a previous text (Sen and Singer: Large Sample Methods in Statistics: An Introduction with Applications), on which our new enterprise is based.
Statistical estimation as well as hypothesis testing may be viewed as important topics within a more general (and admittedly more abstract) statistical decision theory (SDT). Having its genesis in the theory of games and an affinity with Bayes methods, SDT has been continuously fortified with sophisticated mathematical tools as well as philosophical justifications. In conformity with the general objectives and intended intermediate level of this monograph, we provide an overall introduction to the general principles of SDT with some emphasis on Bayes methodology (as well as some of its variants), avoiding, to the extent possible, the usual philosophical deliberations and mathematical sophistication. See Berger (1993) for a detailed exposition.
The connection between the estimation and hypothesis testing theories treated in the preceding chapters and SDT relates to the uncertainty of statistical conclusions or decisions based on observed data sets, and to the adequate provision for quantifying the frequency of incorrect ones. This has generated the notions of loss and risk functions that form the foundation of SDT. These notions serve as building blocks for the formulation of minimum risk and minimax (risk) estimation theory, in which Bayes estimates occupy a focal position. In the same vein, Bayes tests, which are not necessarily isomorphic to the Neyman–Pearson–Wald likelihood-based tests, have cropped up in SDT. In either case, the basic difference stems from the concepts of prior and posterior distributions, which leave more room for subjective judgement in the inferential process.
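The interplay of loss, risk, and minimaxity can be made concrete with the classical binomial example (a standard textbook illustration, sketched here rather than taken from this text): for estimating a binomial proportion p under squared error loss, the Bayes estimator under a Beta(a, b) prior is the posterior mean (x + a)/(n + a + b), and the particular choice a = b = sqrt(n)/2 yields a Bayes rule whose risk is constant in p, which makes it minimax.

```python
import math

def mle_risk(p, n):
    """Risk (mean squared error) of the MLE x/n for binomial p: p(1-p)/n."""
    return p * (1 - p) / n

def bayes_estimator_risk(p, n, a, b):
    """Frequentist risk (MSE) of the Beta(a, b) posterior-mean estimator.

    The estimator is delta(x) = (x + a) / (n + a + b); its MSE at p is
    its variance plus its squared bias.
    """
    variance = n * p * (1 - p) / (n + a + b) ** 2
    bias = (a - (a + b) * p) / (n + a + b)
    return variance + bias ** 2

n = 20
a = b = math.sqrt(n) / 2  # this prior makes the Bayes rule minimax

# The minimax rule's risk is constant in p; the MLE beats it near the
# boundaries of [0, 1] but is worse near p = 1/2, where its risk peaks.
for p in (0.1, 0.3, 0.5, 0.7, 0.9):
    print(f"p={p}: MLE risk={mle_risk(p, n):.5f}, "
          f"minimax risk={bayes_estimator_risk(p, n, a, b):.5f}")
```

The example also shows how the prior enters the inferential process: a different Beta(a, b) choice shifts where on [0, 1] the estimator performs best, which is precisely the room for subjective judgement noted above.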