We assessed patterns of enteric infections caused by 14 pathogens in a longitudinal cohort study of sequelae in British Columbia (BC), Canada, 2005–2014. Our population cohort of 5.8 million individuals was followed for an average of 7.5 years per person; during this time, 40 523 individuals experienced 42 308 incident laboratory-confirmed, provincially reported enteric infections (96.4 incident infections per 100 000 person-years). Most individuals (38 882/40 523; 96%) had only one reported infection, but 4% had multiple concurrent infections or more than one infection across the study period. Among individuals with more than one infection, the pathogens and combinations occurring most frequently per individual matched the pathogens occurring most frequently in the BC population. An additional 298 557 new fee-for-service physician visits and hospitalisations for enteric infections that did not coincide with a reported enteric infection also occurred; some of these may represent unreported enteric infections. Our findings demonstrate that sequelae risk analyses should explore the possible impacts of multiple infections and that estimating risk for individuals who may have had an unreported enteric infection is warranted.
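As a rough consistency check (our arithmetic, not stated in the source; the 7.5-year figure is a rounded average, so the result is approximate), the reported rate follows from dividing incident infections by total person-time:

\[
\frac{42\,308 \text{ infections}}{5.8 \times 10^{6} \text{ persons} \times 7.5 \text{ years}} \times 10^{5} \approx 97 \text{ per } 100\,000 \text{ person-years},
\]

which agrees with the reported 96.4 once exact rather than rounded person-time is used.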
The recent discovery in Lima's National Museum of History of a sixteenth-century inspection of the royal repartimiento of Yanque Collaguas in the district of Arequipa has stimulated a search for further important documentation about the region. The investigation was made possible by financial support from the Ford Foundation under its Peruvian program for projects in the social sciences (PA73–807). The project is coordinated by Dr. Franklin Pease G. Y., with the active participation of Dr. N. David Cook, Professor Juan Carlos Crespo (Catholic University, Lima), and Dr. Alejandro Malaga Medina (University of San Agustin, Arequipa).
Police, like other bureaucratic agencies, are responsible for collecting and disseminating policy-relevant data. Nonetheless, critical data, including killings by police, often go unreported. We argue that this is due in part to the limited oversight capacity of legislative bodies to whom police are accountable. Although many local assemblies lack the means for effective oversight, well-resourced state legislatures may induce transparency from state and substate agencies. This argument is evaluated in two studies of police transparency in the United States. First, we examine the compliance of 19,095 state, county, and municipal police agencies with official data requests over five decades, finding strong positive effects of state legislative capacity on transparency. Second, we examine the accuracy of transmitted data on killings by police, finding that lethality is systematically underreported in states with lower-capacity legislatures. Collectively, our findings have implications for research on policing, legislatures, agency control, and analyses of government data.
Virtual reality has emerged as a unique educational modality for medical trainees. However, incorporation of virtual reality curricula into formal training programmes has been limited. We describe a multi-centre effort to develop, implement, and evaluate the efficacy of a virtual reality curriculum for residents participating in paediatric cardiology rotations.
Methods:
A virtual reality software program (“The Stanford Virtual Heart”) was utilised. Users are placed “inside the heart” and explore non-traditional views of cardiac anatomy. Modules for six common congenital heart lesions were developed, including narrative scripts. A prospective case–control study was performed involving three large paediatric residency programmes. From July 2018 to June 2019, trainees participating in an outpatient cardiology rotation completed a 27-question, validated assessment tool. From July 2019 to February 2020, trainees completed the virtual reality curriculum and assessment tool during their cardiology rotation. Qualitative feedback on the virtual reality experience was also gathered. Intervention and control group performances were compared using univariate analyses.
Results:
There were 80 trainees in the control group and 52 in the intervention group. Trainees in the intervention group achieved higher scores on the assessment (20.4 ± 2.9 versus 18.8 ± 3.8 out of 27 questions answered correctly, p = 0.01). Further analysis showed significant improvement in the intervention group for questions specifically testing visuospatial concepts. In total, 100% of users recommended integration of the programme into the residency curriculum.
Conclusions:
Virtual reality is an effective and well-received adjunct to clinical curricula for residents participating in paediatric cardiology rotations. Our results support continued virtual reality use and expansion to include other trainees.
Understanding how cardiovascular structure and physiology guide management is critically important in paediatric cardiology. However, few validated educational tools are available to assess trainee knowledge. To address this deficit, paediatric cardiologists and fellows from four institutions collaborated to develop a multimedia assessment tool for use with medical students and paediatric residents. This tool was developed in support of a novel 3-dimensional virtual reality curriculum created by our group.
Methods:
Educational domains were identified, and questions were iteratively developed by a group of clinicians from multiple centres to assess understanding of key concepts. To evaluate content validity, content experts completed the assessment and reviewed items, rating item relevance to educational domains using a 4-point Likert scale. An item-level content validity index was calculated for each question, and a scale-level content validity index was calculated for the assessment tool, with scores of ≥0.78 and ≥0.90, respectively, representing excellent content validity.
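For reference, these indices are conventionally computed as follows (standard definitions from the content-validity literature; the symbols are ours, not the authors'): with N content experts and k candidate items (32 in the initial pool here),

\[
\text{I-CVI}_i = \frac{n_i}{N}, \qquad \text{S-CVI/Ave} = \frac{1}{k} \sum_{i=1}^{k} \text{I-CVI}_i,
\]

where n_i is the number of experts rating item i as 3 or 4 (relevant) on the 4-point scale. The thresholds of ≥0.78 at the item level and ≥0.90 at the scale level quoted above are the usual benchmarks for these two quantities.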
Results:
The mean content expert assessment score was 92% (range 88–97%). Two questions were answered correctly by ≤50% of content experts. The item-level content validity index for 29 out of 32 questions was ≥0.78, and the scale-level content validity index was 0.92. Qualitative feedback included suggestions for future improvement. Questions with ≤50% content expert agreement and item-level content validity index scores <0.78 were removed, yielding a 27-question assessment tool.
Conclusions:
We describe a multi-centre effort to create and validate a multimedia assessment tool which may be implemented within paediatric trainee cardiology curricula. Future efforts may focus on content refinement and expansion to include additional educational domains.
Electrical injury (EI) is a significant, multifaceted trauma, often with multi-domain cognitive sequelae, even when the expected current path does not pass through the brain. Chronic pain (CP) research suggests pain may affect cognition both directly and indirectly, by influencing emotional distress which then impacts cognitive functioning. As chronic pain may be critical to understanding EI-related cognitive difficulties, the aims of the current study were to examine the direct and indirect effects of pain on cognition following EI and to compare the relationship between pain and cognition in EI and CP populations.
Method:
This cross-sectional study used data from a clinical sample of 50 patients with EI (84.0% male; mean age = 43.7 years) who were administered standardized measures of pain (Pain Patient Profile), depression, and neurocognitive functioning. A CP comparison sample of 93 patients was also included.
Results:
Higher pain levels were associated with poorer attention/processing speed and executive functioning performance among patients with EI. Depression was significantly correlated with pain and mediated the relationship between pain and attention/processing speed in patients with EI. When comparing the patients with EI and CP, the relationship between pain and cognition was similar for both clinical groups.
Conclusions:
Findings indicate that pain impacts mood and cognition in patients with EI, and pain and its effects on cognition should be considered in the assessment and treatment of patients who have experienced an electrical injury.
Psychopathic Personality Disorder (PPD) plays a central role in forensic clinical practice. It has relevance for violence risk and treatment responsivity; those suffering from it place financial and other burdens on services. PPD remains a controverted concept; the very essence of the disorder is disputed. In this chapter I examine the history of the concept, methods for its evaluation, its demography and its relevance to clinical practice—from the first interview, through risk formulation to intervention. I describe recent attempts to explicate the concept of PPD, including the development of the Comprehensive Assessment of Psychopathic Personality (CAPP). I conclude by considering current controversies regarding the diagnostic significance of criminal behaviour, the predictive utility of historical instruments such as the Psychopathy Checklist-Revised and the reliability of that instrument in forensic clinical practice.
Childhood trauma is strongly associated with poor health outcomes. Although many studies have found associations between adverse childhood experiences (ACEs), a well-established indicator of childhood trauma, and diet-related health outcomes, few have explored the relationship between ACEs and diet quality, despite growing literature in epidemiology and neurobiology suggesting that childhood trauma has an important but poorly understood relationship with diet. Thus, we performed a cross-sectional study of the association of ACEs and adult diet quality in the Southern Community Cohort Study, a largely low-income and racially diverse population in the southeastern United States. We used ordinal logistic regression to estimate the association of ACEs with the Healthy Eating Index-2010 (HEI-10) score among 30 854 adults aged 40–79 enrolled from 2002 to 2009. Having experienced any ACE was associated with higher odds of a worse HEI-10 among all participants (odds ratio (OR) 1⋅22; 95 % confidence interval (CI) 1⋅17, 1⋅27) and for all race–sex groups, and the association remained significant after adjustment for adult income. An increasing number of ACEs was also associated with increasing odds of a worse HEI-10 (OR for 4+ ACEs: 1⋅34; 95 % CI 1⋅27, 1⋅42). The association with worse HEI-10 score was especially strong for ACEs in the household dysfunction category, including having had a family member in prison (OR 1⋅34; 95 % CI 1⋅25, 1⋅42) and having had parents who divorced (OR 1⋅25; 95 % CI 1⋅20, 1⋅31). In summary, ACEs are associated with poor adult diet quality, independent of race, sex and adult income. Research is needed to explore whether trauma intervention strategies can impact adult diet quality.
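The odds ratios above presumably arise from the standard proportional-odds form of ordinal logistic regression (an assumption on our part; the abstract does not state the exact specification), in which a single coefficient is shared across all cutpoints of the ordinal outcome:

\[
\log \frac{P(Y \le j \mid x)}{P(Y > j \mid x)} = \alpha_j + \beta x, \qquad \text{OR} = e^{\beta},
\]

with Y the ordered HEI-10 category, x the ACE exposure, and j indexing the cutpoints. Under this model (up to the direction in which categories are coded), one OR, such as 1⋅22, summarises the shift in odds of falling into a worse diet-quality category at every cutpoint simultaneously.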
During the 1970s, the calculus by which a movie was considered successful or not in the marketplace changed. Between the mid- and late 1960s, the studios teetered on the brink of insolvency because of the impact of the Paramount decrees, the national diffusion of television, and poor financial decisions made in their wake—the abortive attempt to convert to full stereoscopic production in the early 1950s among them. Starting with Universal's acquisition by MCA in 1962 and Paramount's absorption by Gulf and Western Industries in 1966, the classical Hollywood studios were all bought up by larger corporations that didn't really know how to run them, resulting in an industry-wide recession between 1969 and 1971. At the same time, in October 1968, the 34-year-old Production Code Administration (PCA) was replaced by a Code and Ratings Administration (CARA) that transformed the national audience—more or less homogeneous since its formation, except at the ethnic fringes—into a demographically segregated one. The conviction grew in Hollywood that the regular flow of product was no longer profitable because there was no longer a regular and predictable audience for its films. In such a volatile context, if a few productions became big hits, they could amortize those that produced a loss or simply broke even. This idea of engineering “event” movies to generate windfall profits and carry the rest along with them became known as “the blockbuster syndrome.” This was not a new phenomenon—“blockbusters” like Paramount's The Ten Commandments (Cecil B. DeMille, 1956) and MGM's Ben-Hur (William Wyler, 1959) had kept their respective studios profitable for years after their release, and expensive flops like Cleopatra (Joseph Mankiewicz, 1963) had nearly ruined 20th Century Fox. What was different was the amount of capital involved in hedging the bets.
Between 1972 and 1979, the average production budget or “negative cost” for a single film rose by 178 percent, or nearly four times the rate of general inflation. By the end of 1979, negative costs had doubled their 1977 level to reach the then-staggering sum of $8.9 million, so that the financial risks of production were substantially multiplied.
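Back-calculating from these figures (our arithmetic; the passage states only the percentages and the 1979 endpoint): a 1979 average negative cost of \$8.9 million that is double the 1977 level implies roughly \$4.45 million in 1977, and a 178 percent rise over 1972–1979 implies a 1972 average of about

\[
\frac{\$8.9\text{M}}{1 + 1.78} \approx \$3.2\text{M}.
\]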
While digital stereoscopy represents the most recent manifestation of a process that began with Charles Wheatstone's stereoscopic line drawings nearly two centuries ago, it is not the end of it. The ultimate technology for seeing things in three dimensions is Virtual Reality (VR), which uses a hybrid of advanced modern technology—Lidar scanners, hyper-accelerated graphics cards, and so on—and the stereoscopic illusion first quantified by Wheatstone to create a state of sensory immersion that borders on otherness. Finding a way to mass-market the VR experience as a form of popular cinema, rather than as an enhanced form of video game, has become the new Grail Quest of the film industry. But VR also has a more elevated, philosophical justification.
The iconic French film theorist André Bazin believed that each new technological development in the history of cinema (e.g., the introduction of sound, color, widescreen, and 3-D) brought it closer to its ultimate destiny—the production of a total simulacrum of reality—which was implicit in its very beginnings. As he wrote in 1946:
The idea of Cinema tends toward a full and total representation of reality, and immediately considers the dream of a perfect illusion of the outside world with sound, color, and depth. If the cinema in its infancy did not have all the attributes of the total cinema of tomorrow, this was no fault of its own, and was only because its fairies were technically impotent and couldn't give them to it even though they wanted to.
The guiding myth, then, inspiring the invention of cinema, is the accomplishment of that which dominated in a more or less vague fashion all the techniques of the mechanical reproduction of reality in the nineteenth century, from photography to the phonograph, namely an integral realism, a recreation of the world in its own image, an image unburdened by the freedom of interpretation of the artist or the irreversibility of time. […] Every new development added to the cinema must, paradoxically, take it nearer and nearer to its origins. In short, cinema has not yet been invented! (“The Myth of Total Cinema,” in André Bazin, What Is Cinema?, vol. 2, trans. Hugh Gray [University of California Press, 1967])
Most of the three-dimensional (3D) cinema discussed or mentioned in this book originated in the United States and Europe, with some nods to work done in China, India, and Japan. Many other countries experimented with stereoscopic motion pictures in the early 1950s (e.g., Hungary, the Soviet Union, West Germany, Italy, and Mexico), but each produced only a few films in the process and experienced nothing like the 1952–55 boomlet in Hollywood. In the post-Avatar period, 2010–20, digital 3D movies were widely popular and produced by many non-Western industries, but only a handful of such films appeared outside of their domestic markets, either in theaters or on 3D Blu-ray discs. When the COVID-19 pandemic of 2020–21 ravaged the world's film industries, stereoscopic films nearly disappeared altogether because they could not be viably exploited either in theaters, due to the possibility of viral infection, or in homes, due to the scarcity of 3D-capable televisions since 2016. The omission of non-Western cinema cultures from this study is therefore due to a lack of accessibility rather than critical choice.
The source of Émile Reynaud's despondency was apparently his failure to have invented true motion pictures, in which he was preceded by the Edison Corporation in 1893 and the Lumière brothers in 1895. The Edison Kinetograph, actually “invented” by laboratory assistant William Kennedy Laurie Dickson (1860–1935), was a clever synthesis of existing principles and techniques derived from Eadweard Muybridge (series photography), Étienne-Jules Marey (chronophotography), and others that used celluloid roll film in a battery-driven camera to take photographic exposures of things in motion at the approximate rate of 40 frames (and later 16 and 24) per second. The Kinetograph incorporated what have come to be recognized as the two essential components of motion picture camera engineering: (1) a stop-motion device to ensure the regular but intermittent movement of the film strip through the camera (adapted by Dickson from the escapement mechanism of a watch) and (2) a perforated film strip (first innovated by Reynaud for his Théâtre Optique). Intermittency permits the unexposed film strip, in its transit through the camera, to be stopped for a fraction of a second before the lens while the shutter opens to admit light bouncing off the subject and expose the individual frames. The synchronization of the film strip and the shutter is accomplished by the perforations in the film strip, which is pulled through the camera by a system of clawed gears known as sprockets. This process is reversed in the projector or viewing device, with the perforations—or sprocket holes—ensuring the exact regularity of the film strip's movement as it passes through both machines. Because it was battery-driven, the Kinetograph was virtually inert, weighing about 600 lbs, and it could only be used inside a purpose-built studio called the “Black Maria.” Furthermore, Edison did not initially commission the designing of a projector, so motion pictures made with the Kinetograph were first seen in a coin-operated peepshow device called the Kinetoscope. (In 1896, Edison bought the rights to a projector invented by C. Francis Jenkins and Thomas Armat and patented it as the “Edison Vitascope,” so that Kinetograph films could then be exhibited to large audiences.)
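To put the intermittent mechanism in concrete terms (simple arithmetic, not from the source): at the Kinetograph's initial rate of about 40 frames per second, each frame's complete cycle of pull-down, rest, and exposure occupies

\[
\frac{1}{40} \text{ s} = 25 \text{ ms},
\]

of which only a fraction is available as the stationary exposure that the escapement-derived stop-motion device provides; at the later 16 frames per second, the cycle lengthens to 62.5 ms.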
A History of Three-Dimensional Cinema chronicles 3-D cinema as a single, continuous and coherent medium, proceeding from nineteenth-century experiments in stereoscopic photography and lantern projection (1839–1892) to stereoscopic cinema's “long novelty period” (1893–1952). It then examines the first Hollywood boom in anaglyphic stereo (1953–1955), when the mainstream industry produced 69 features in 3-D, mostly action films that could exploit the depth illusion, but also a handful of big-budget films, for example Kiss Me Kate (George Sidney, 1953) and Dial M for Murder (Alfred Hitchcock, 1954), until audiences tired of the process; the anaglyphic revival of 1970–1985, when 3-D was sustained as a novelty feature in sensational genres like soft-core pornography and horror; the age of IMAX 3-D (1986–2008); the current era of digital 3-D cinema, which began in 2009 when James Cameron's Avatar became the highest-grossing feature of all time and the studios once again stampeded into 3-D production; and finally the future promise of Virtual Reality.
Most of us see the world in three dimensions. This is so primarily because humans possess binocular vision, although other factors such as foreground overlap, gradient texture, and relative size are important to depth perception too. (For example, the illusion of depth on the flat surfaces of paintings, drawings, and photographs depends on the replication of these alternate depth cues through the system of linear perspective.) Because our two eyes are aligned so that their fields of vision partially overlap, each eye sees objects from a slightly different point of view, a slight parallax; the combination of these two views in the brain produces binocular vision. The distance between the two eyes of a normal adult is almost invariably 6.5 cm, and the resulting binocular disparity provides cues that the brain automatically interprets as depth.
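The geometry can be made concrete (a standard sketch from vision science; the symbols and the worked numbers are ours, not the author's): for interocular distance b and an object at distance Z, the two lines of sight differ by roughly b/Z radians, so the relative disparity between objects at distances Z and Z + ΔZ is

\[
\delta \approx \frac{b}{Z} - \frac{b}{Z + \Delta Z} \approx \frac{b\,\Delta Z}{Z^{2}}.
\]

With b = 6.5 cm, Z = 2 m, and ΔZ = 10 cm, this gives δ ≈ 1.6 × 10⁻³ radians, about 5.6 minutes of arc, comfortably above the disparity thresholds the visual system can resolve.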
Since ancient times, humans have also been understood as beings who create and manipulate signs—things that take the place of or stand for other things. We are the “representational animal,” or animal symbolicum, to use the term of early twentieth-century philosopher Ernst Cassirer. Representation (or “re-presentation”), significantly, is the central characteristic of art. So it is not unnatural that three-dimensional representation would become an important issue for aesthetics early in our history. By the middle Renaissance, such concerns were manifest in the development of linear perspective in painting and drawing, as noted above; the disposition of scenographic space in the theater; and the manipulation of spatial perspective in landscape architecture, sculpture, and bas-relief. Contemporaneously, at the level of popular entertainment, a cultural precedent for three-dimensional cinema appeared in the form of the European peepshow, which in the eighteenth century spread to the United States, China, and Japan.
According to Richard Balzer, a peepshow is a closed or semi-closed box with at least one hole through which a perspective is seen; it may or may not use mirrors to create its illusion. The factor of enclosure is important because it gives the viewer the sense of approaching an inaccessible space and seeing a hidden image, similar to the voyeuristic experience of watching a movie in a darkened theater. Its origin seems to date from the work of Italian painter and architect Leon Battista Alberti, who in 1437 developed an instrument to view perspectives through a small hole in a box.