To provide cross-national data for selected countries of the Americas on service utilization for psychiatric and substance use disorders, the distribution of these services among treatment sectors, treatment adequacy, and factors associated with mental health treatment and treatment adequacy.
Data come from 6710 adults with a 12-month mental disorder, interviewed in 2001–2015 across seven surveys in six countries in North (USA), Central (Mexico) and South (Argentina, Brazil, Colombia, Peru) America as part of the World Health Organization (WHO) World Mental Health (WMH) Surveys. DSM-IV diagnoses were made with the WHO Composite International Diagnostic Interview (CIDI). Interviews also assessed service utilization by treatment sector, adequacy of treatment received and socio-demographic correlates of treatment.
A little over one in four respondents with any 12-month DSM-IV/CIDI disorder received any treatment. Although the vast majority (87.1%) of this treatment was minimally adequate, only 35.3% of cases received treatment that met acceptable quality guidelines. Indicators of social advantage (high education and income) were associated with higher rates of service use and adequacy, but a number of other correlates varied across survey sites.
These results shed light on an enormous public health problem involving under-treatment of common mental disorders, although the problem is most extreme among people with social disadvantage. Promoting services that are more accessible, especially for those with few resources, is urgently needed.
Fiona Sampson looks beyond any simplistic account of legacy in her nuanced tracing of Plath’s continuing influence on British poetry. While Plath left no substantial or explicit articulation of her poetics, her early published work indicates some of her own literary debts. The free verse which eventually muscles its way out of that initial formality is closely related, in both rhythm and register, to exactly contemporary work by Ted Hughes. Almost universally read by contemporary British poets, she contributes a Plathian dimension to contemporary British poetics as a whole. This is less apparent in today’s Confessional free verse, which owes much to life writing and oral forms, than in the continuation, alongside the Hardy/Larkin mainstream, of a more risk-taking, symbolic and higher-register tradition. Its protagonists include Sharon Olds, Louise Glück, Selima Hill and Denise Riley.
We aimed to critically evaluate decision aids developed for practitioners and caregivers when providing care for someone with dementia or for use by people with dementia themselves. Decision aids may be videos, booklets, or web-based tools that explicitly state the decision, provide information about the decision, and summarize options along with associated benefits and harms. This helps guide the decision maker through clarifying the values they place on the benefits or harms of the options.
We conducted a systematic review of peer-reviewed literature in electronic databases (CINAHL, The Cochrane Library, EMBASE, MEDLINE, and PsycINFO) in March 2018. Reference lists were searched for relevant papers and citations tracked. Data were synthesized with meta-analysis and narrative synthesis. Papers were included if they met the following criteria: 1) the focus of the paper was on the evaluation of a decision aid; 2) the decision aid was used in dementia care; and 3) the decision aid was aimed at professionals, people with dementia, or caregivers.
We identified 3618 studies, of which 10 were included, covering three topics across six decision aids: 1) support with eating/feeding options, 2) place of care, and 3) goals of care. The mode of delivery and format of the decision aids varied and included paper-based, video-based, and audio-based decision aids. The decision aids were shown to be effective, increasing knowledge and the quality of communication. The meta-analysis demonstrated that decision aids are effective in reducing decisional conflict among caregivers (standardized mean difference = −0.50, 95% confidence interval [−0.97, −0.02]).
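To make the pooled effect size above concrete, the short Python sketch below shows how a standardized mean difference (Cohen's d) is computed from summary statistics. The group means, standard deviations and sample sizes here are invented purely for illustration and are not taken from the review; they are chosen so that the result happens to equal −0.50, matching the magnitude of the reported effect.

```python
import math

def standardized_mean_difference(mean_a, sd_a, n_a, mean_b, sd_b, n_b):
    """Cohen's d: difference in group means divided by the pooled SD."""
    pooled_sd = math.sqrt(
        ((n_a - 1) * sd_a**2 + (n_b - 1) * sd_b**2) / (n_a + n_b - 2)
    )
    return (mean_a - mean_b) / pooled_sd

# Hypothetical decisional-conflict scores (lower = less conflict):
# intervention group mean 30, SD 10, n 50; control group mean 35, SD 10, n 50.
d = standardized_mean_difference(30, 10, 50, 35, 10, 50)
print(round(d, 2))  # -0.5
```

A negative value indicates lower decisional conflict in the intervention group, which is the direction of effect reported in the meta-analysis.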
Decision aids offer a promising approach for providing support for decision-making in dementia care. People are often faced with more than one decision, and decisions are often interrelated. The decision aids identified in this review focus on single topics. There is a need for decision aids that cover multiple topics in one aid to reflect this complexity and better support caregivers.
Palliative care and the hospice movement were founded as a response to people dying from cancer. However, there is now wide recognition that palliative care should be provided and made available for people with a range of progressive advanced chronic diseases including dementia, frailty and organ failure. This is particularly pertinent as the population ages and a growing number of people are dying with these conditions. This chapter defines palliative care and the role of the psychiatrist, and examines some current issues in palliative care including having difficult conversations, dealing with uncertainty, symptom control and supporting grieving family and friends both before and after death, with a focus on the needs of those with dementia.
There is growing evidence that many people attending annual screening for diabetic retinopathy in the United Kingdom (UK) are at low risk of developing the disease. This has led to new policy statements. However, the basis on which to establish a risk-based individualized variable-recall screening program has not yet been determined. We present a methodology for using information on an individual's risk factors to improve the allocation of resources within a screening program.
We developed a patient-level state-transition model to evaluate the cost-effectiveness of risk-based screening for diabetic retinopathy in the UK. The model incorporated a recently developed risk calculation engine that predicts an individual's risk of disease onset, and allocated individuals to alternative screening recall periods according to this level of risk. Using the findings, we demonstrate a means of estimating: (i) a threshold level of risk, above which individuals should be invited to screening, and (ii) the optimum screening recall period for an individual, based on the expected cost-effectiveness of screening and treatment.
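The allocation step described above can be sketched in a few lines of Python. This is a hypothetical illustration only: the function name, risk thresholds and recall periods below are invented, whereas in the study the thresholds would be derived from the cost-effectiveness analysis and the risk values would come from the risk calculation engine.

```python
# Hypothetical sketch: map an individual's predicted annual risk of
# sight-threatening retinopathy to a screening recall period.
# Thresholds are invented for illustration, not taken from the paper.
def recall_interval_months(annual_risk):
    if annual_risk >= 0.05:
        return 6    # high risk: screen every six months
    elif annual_risk >= 0.01:
        return 12   # moderate risk: annual screening
    else:
        return 24   # low risk: extend recall to two years

# Three hypothetical patients with risks from a risk calculation engine:
patients = {"A": 0.002, "B": 0.03, "C": 0.08}
schedule = {pid: recall_interval_months(r) for pid, r in patients.items()}
print(schedule)  # {'A': 24, 'B': 12, 'C': 6}
```

The point of the sketch is the shape of the decision rule: individuals below a risk threshold are moved to longer recall periods, freeing screening capacity for those at higher risk.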
The cost-effectiveness analysis demonstrated that standardized screening (current practice) is the least cost-effective program. Individualized screening can improve outcomes at a reduced cost. We found it feasible – though computationally expensive – to incorporate a risk calculation engine into a decision model in Microsoft Excel. In an optimized screening program, the majority of patients would be invited to attend screening at least two years after a negative screening result.
Individualized risk-based screening is likely to be cost-effective in the context of diabetic eye disease in the UK. It is expected that risk calculation engines will be developed in other disease areas in the future, and used to allocate screening and treatment at the individual level. It is important that researchers develop robust methods for combining risk calculation engines into decision analytic models and health technology assessment more broadly.
Interventions and services for people with mental health problems can have broad remits: they are often designed to treat people with a variety of diagnoses. Furthermore, addressing mental health problems can have long-term implications for economic, social, and health outcomes. This represents a challenge for health technology assessment, for which long-term trial data can be lacking. In this review, we sought to identify how analysts have tackled this problem. We reviewed the methods used to extrapolate costs and outcomes for the purpose of economic evaluation where long-term trial data are not available.
We conducted a systematic review of the medical and economic literature evaluating long-term costs and outcomes for mental health interventions and services designed to treat or prevent more than two mental health conditions. We searched key databases including MEDLINE, Embase, PsycINFO, CINAHL, and EconLit. Two authors independently screened citations. Articles were excluded if they reported within-trial analyses or employed a time horizon of less than 5 years.
The search identified 829 unique records. No papers could be included in the review.
This review highlights the lack of research and understanding available to inform the appraisal of broad mental health interventions. In light of our findings, we consider the reasons for this lack of information and review relevant literature on the subject. Potential barriers to research in this context include: (i) challenges in understanding the value of broad mental health services, such as the mental and physical health nexus, intersectoral costs and benefits, and interpersonal impacts, (ii) methodological difficulties, such as data availability, patient heterogeneity, and the challenge of extrapolation, and (iii) parity of esteem. We make recommendations for resolving this problem with regard to funding, data collection, modelling methods, and outcome measurement.
This chapter will examine elements of communication in our role as expert searchers. First, interprofessional communication and communicating as part of a team will be examined from the framework of core professional competencies. The practical use of peer communication to assure or improve search quality will be examined with the PRESS checklist. Next will be a discussion of presenting search results to gain maximum impact.
The chapter will go on to examine ways to increase discoverability, reproducibility and reusability of our work through mechanisms such as protocol registration, open access publication and data deposit. These are key to clear, complete, transparent scientific communication. The role of social media, broadly defined, in professional communication in support of search will also be discussed.
Our advances through formal research studies in methods of searching need to be communicated to professional audiences, both in library science and, more broadly, through academic communication or knowledge translation. Finally, future directions for research and practice will be suggested and conclusions and key points for reflection will be presented.
‘Interdisciplinary communication’ is defined under the Medical Subject Headings (MeSH) term as:
Communication, in the sense of cross-fertilization of ideas, involving two or more academic disciplines (such as the disciplines that comprise the cross-disciplinary field of bioethics, including the health and biological sciences, the humanities, and the social sciences and law). Also includes differences in patterns of language usage in different academic or medical disciplines.
(National Library of Medicine, 2018)
Chapter 11 has shown just how important this collaboration with other members of the review team can be to maintaining the professional standing of the expert searcher.
Understanding training and expectations for communication can be very helpful. CanMEDS is a Canadian framework that identifies and describes the abilities physicians require to meet effectively the health care needs of the people they serve (Royal College of Physicians and Surgeons of Canada, 2011). The overarching competency is that of medical expert, but the first specific role is that of communicator (see Case study 12.1). These role expectations can be paraphrased to describe the role of the searcher in the reference interview (see Case study 12.2).
The prevalence of mental disorders among Black, Latino, and Asian adults is lower than among Whites. Factors that explain these differences are largely unknown. We examined whether racial/ethnic differences in exposure to traumatic events (TEs) or vulnerability to trauma-related psychopathology explained the lower rates of psychopathology among racial/ethnic minorities.
We estimated the prevalence of TE exposure and associations with onset of DSM-IV depression, anxiety and substance disorders and with lifetime post-traumatic stress disorder (PTSD) in the Collaborative Psychiatric Epidemiology Surveys, a national sample (N = 13 775) with substantial proportions of Black (35.9%), Latino (18.9%), and Asian Americans (14.9%).
TE exposure varied across racial/ethnic groups. Asians were most likely to experience organized violence – particularly being a refugee – but had the lowest exposure to all other TEs. Blacks had the greatest exposure to participation in organized violence, sexual violence, and other TEs, Latinos had the highest exposure to physical violence, and Whites were most likely to experience accidents/injuries. Racial/ethnic minorities had lower odds ratios of depression, anxiety, and substance disorder onset relative to Whites. Neither variation in TE exposure nor vulnerability to psychopathology following TEs across racial/ethnic groups explained these differences. Vulnerability to PTSD did vary across groups, however, such that Asians were less likely and Blacks more likely to develop PTSD following TEs than Whites.
Lower prevalence of mental disorders among racial/ethnic minorities does not appear to reflect reduced vulnerability to TEs, with the exception of PTSD among Asians. This highlights the importance of investigating other potential mechanisms underlying racial/ethnic differences in psychopathology.
In “Toward a Theory of Race, Crime, and Urban Inequality,” Sampson and Wilson (1995) argued that racial disparities in violent crime are attributable in large part to the persistent structural disadvantages that are disproportionately concentrated in African American communities. They also argued that the ultimate causes of crime were similar for both Whites and Blacks, leading to what has been labeled the thesis of “racial invariance.” In light of the large scale social changes of the past two decades and the renewed political salience of race and crime in the United States, this paper reassesses and updates evidence evaluating the theory. In so doing, we clarify key concepts from the original thesis, delineate the proper context of validation, and address new challenges. Overall, we find that the accumulated empirical evidence provides broad but qualified support for the theoretical claims. We conclude by charting a dual path forward: an agenda for future research on the linkages between race and crime, and policy recommendations that align with the theory’s emphasis on neighborhood level structural forces but with causal space for cultural factors.
Monitoring vectors is relevant to ascertain transmission of lymphatic filariasis (LF). This may require the best sampling method that can capture high numbers of specific species to give an indication of transmission. Gravid anophelines are good indicators for assessing transmission due to close contact with humans through blood meals. This study compared the efficiency of an Anopheles gravid trap (AGT) with other mosquito collection methods including the box and the Centers for Disease Control and Prevention gravid, light, exit and BioGent-sentinel traps, indoor resting collection (IRC) and pyrethrum spray catches across two endemic regions of Ghana. The AGT showed high trapping efficiency by collecting the highest mean number of anophelines per night in the Western (4.6) and Northern (7.3) regions compared with the outdoor collection methods. Additionally, IRC was similarly efficient in the Northern region (8.9), where vectors exhibit a high degree of endophily. The AGT also showed good trapping potential for collecting Anopheles melas, which is usually difficult to catch with existing methods. Screening of mosquitoes for infection showed rates of 0.80–3.01% for Wuchereria bancrofti and 2.15–3.27% for Plasmodium spp. in Anopheles gambiae. The AGT has been shown to be appropriate for surveying Anopheles populations and can be useful for xenomonitoring for both LF and malaria.
Technology wants us, but what does it want for us? What do we get out of its long journey?
(Kevin Kelly, What Technology Wants, 2010, 347)
Providing access to born-digital content can be both straightforward and exceedingly complex. For those files that are simple, standalone and in common file formats, providing access can be as simple as double-clicking on the file to open it in a contemporary computer with a standard suite of software installed. This is currently true of standalone PDF, DOCX, PPT, JPG, MP3 and AVI files, but less true of older file types like WordPerfect and Windows Media Audio, as well as complex proprietary formats – for example, older versions of AutoCAD. You will also have special considerations to keep in mind when providing access to e-mail, website and mobile device content.
There are several options for providing access to your born-digital content, each with its own benefits and challenges. You may be considering providing unrestricted online access to your content, some form of restricted online access or on-site-only access; you may want to provide a combination of these methods. In this chapter, we'll take a look at some of the options that are available to you for online, remote and restricted on-site access and review some of the additional things you will need to consider for providing access to born-digital content generally.
Deciding on your access strategy
There are several things that you will need to consider when developing your access strategy. First, you will need to have an understanding of the nature of your current and future born-digital content with regard to: the format-specific hardware and software needed in order to provide access; access restrictions imposed by law or donor agreements; and the significant properties inherent to the content that you may or may not need to preserve. Second, you will need to understand your current technological infrastructure and what it can and can't support, as well as have a roadmap for expanding and adapting this infrastructure as your needs grow and the technological landscape changes. Last, it is very important to know your users’ requirements for accessing your content.
For tens of millennia humankind has made purposeful, material marks on whatever surface was available. Human beings have recorded evidence of their existence with ground rock smeared on cave walls, carvings in stone, plant fluids brushed onto papyrus, gold and coloured inks painted on animal skin, dark inks rolled onto movable type and pressed into paper, and magnetised iron oxide on a plastic substrate disk. These artefacts, whether they can be read ten minutes or ten millennia from now, are all evidence of humans attempting the often Herculean feat of making sense of the world around them. No matter the medium, we are fixing our ideas and creations into a form that will allow them to move into the future. Over time, the content has been relatively similar, but the quantity and methods of recording this content have changed drastically.
In our current age, nearly all data and creative outputs are generated, stored and accessed through the use of computers. Records of our transactions, of our communication and experiences with one another, of our thoughts, ideas and creative outputs are almost all created, stored and transmitted via digital encoding. How much of your own communication and work is transacted or recorded digitally? More importantly for the library and archival professions, how do we go about collecting, preserving and providing access to it? This question may seem difficult or daunting to answer, but we can make it simple for you by starting with the basics and building from there.
What is born-digital content?
Photographs, books and maps created and printed on paper-based mediums can be ‘digitised’. For the past few decades digitised content has been in high demand and a game-changer for libraries and archives’ ability to share their resources across the globe. Digitising valuable and fragile materials reduces handling and therefore helps preserve the originals for longer periods of time.
Recently, however, more attention has been directed toward the content that is being created, distributed and used solely in digital form. This content is called ‘born digital’ because it was created or ‘born’ digitally, and in most cases is not transferred or accessed otherwise. Because there is no original paper-based or analogue version of born-digital content, it poses some unique challenges in preserving access to it over the long term.
Computers are the most complex objects we human beings have ever created, but in a fundamental sense they are remarkably simple.
(Danny Hillis, The Pattern on the Stone, 1998, vii)
Learning how to preserve, conserve and describe paper-based materials usually entails learning about what the paper is made of and how it was made. It also involves knowing how the ink was made and how it was applied to the paper. Interpreting messages fixed on paper also requires an understanding of the language in which the messages were written, which also requires knowledge of the shapes and symbols used in the language represented. Understanding the basics of preserving and interpreting born-digital information is no different. It helps to understand how digital information is encoded and fixed onto physical media to make informed decisions about how best to preserve and provide access to it.
This chapter explains basic encoding methods used to convert various types of information into digital form, describes how digital information is fixed onto physical mediums and discusses basics of the command line and navigating code repositories. This may feel like an intimidating chapter to start with, but once you understand the concepts presented here, the rest of the principles and processes presented throughout the book will be simple to master.
What is digital information?
At a basic level, the word digital refers to information that is expressed in digits, or numbers; more specifically the numbers 1 and 0. The numbers 1 and 0 can represent any kind of binary information. This can be the presence (1) or absence (0) of something, different orientations of something like up (1) or down (0), statements of truth like TRUE (1) or FALSE (0), polar orientation like North (1) or South (0), dashes (1) or dots (0) like in Morse code; basically anything that can be represented by a maximum of two different states. Since digital information is encoded into only one of two digits, it is also referred to as ‘binary’ encoding, where ‘bi’ means ‘two’. Each individual digit (a 1 or a 0) is called a ‘bit’. A string of eight bits is called a ‘byte’.
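As an illustration of bits and bytes (ours, not the book's), a few lines of Python show how text characters become numbers, and how each number is stored as an eight-bit byte:

```python
# Sketch: how the text "Hi" becomes bits and bytes.
text = "Hi"

# Each character maps to a number (its Unicode/ASCII code point)...
codes = [ord(ch) for ch in text]

# ...and each number is stored as one byte: eight binary digits (bits).
bits = [format(c, "08b") for c in codes]

print(codes)  # [72, 105]
print(bits)   # ['01001000', '01101001']
```

Reading the output: the character ‘H’ is the number 72, which a computer stores as the byte 01001000; two characters therefore occupy two bytes, or sixteen bits.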
The best of my nature reveals itself in play, and play is sacred.
(Karen Blixen, On Modern Marriage and Other Observations, 1987, 80)
Now that you've made your way through the book we hope that you feel confident enough to embark on some born-digital content management, and to start the long and exciting journey of learning and growing as the technology we use to record knowledge shifts and changes over time. Now that we've whetted your appetite, we encourage you to take a look at the additional resources in Appendix A for more avenues to explore. We wholeheartedly encourage you to pick up more computer programming skills if that's a path that interests you. If it doesn't, that's fine too.
The fact that you picked up this book indicates you have at least an interest in the subject, which is half the battle. We want to emphasise that having strong computer programming and technological skills is not imperative in order to be a good born-digital content manager. You need to have the basic knowledge of the challenge, to find the right people who can help you and to communicate with them what you need to happen and why. We hope that we have shared with you enough of the fundamental knowledge you will need in order to do this and to guide you in the direction of more information that you can pick up as you go.
After reading this book you should know a little bit more about the basics of how digital information is created and rendered, about born-digital-specific practices around selection, acquisition, description, preservation and access, how to tie all of these elements together, and a little bit about what may lie ahead.
In Chapter 1 you learned about the range of methods by which words, numbers, images, sound and videos are encoded into the binary information that computers are designed to interpret and render in various ways. We considered a number of different digital file formats and looked closely at how binary information is encoded on different types of physical media. In that chapter we also introduced some basic information about the command line and how you might use it in your practice.