We describe the glacial geomorphology and initial geochronology of two ice-free valley systems within the Neptune Range of the Pensacola Mountains, Antarctica. These valleys are characterized by landforms associated with formerly more expanded ice sheet(s) that were at least 200 m thicker than at present. The most conspicuous features are areas of supraglacial debris, discrete debris accumulations separated from modern-day ice, and curvilinear ridges and mounds. The landsystem bears similarities to debris-rich cold-based glacial landsystems described elsewhere in Antarctica and the Arctic where buried ice is prevalent. Geochronological data demonstrate multiple phases of ice expansion. The oldest, occurring > 3 Ma, overtopped much of the landscape. Subsequent, less expansive advances into the valleys occurred > 2 Ma and > ~1 Ma. An expansion of some local glaciers occurred < 250 ka. This sequence of glacial stages is similar to that described from the northernmost massif of the Pensacola Mountains (Dufek Massif), suggesting that it represents a regional signal of ice-sheet evolution over the Plio-Pleistocene. The geomorphological record and its evolution over millions of years make the Neptune Range valleys an area worthy of future research, and we highlight potential avenues for this.
When vaccination depends on injection, it is plausible that the blood-injection-injury cluster of fears may contribute to hesitancy. Our primary aim was to estimate in the UK adult population the proportion of COVID-19 vaccine hesitancy explained by blood-injection-injury fears.
In total, 15 014 UK adults, quota sampled to match the population for age, gender, ethnicity, income and region, took part (19 January–5 February 2021) in a non-probability online survey. The Oxford COVID-19 Vaccine Hesitancy Scale assessed intent to be vaccinated. Two scales (Specific Phobia Scale-blood-injection-injury phobia and Medical Fear Survey–injections and blood subscale) assessed blood-injection-injury fears. Four items from these scales were used to create a factor score specifically for injection fears.
In total, 3927 (26.2%) screened positive for blood-injection-injury phobia. Individuals screening positive (22.0%) were more likely to report COVID-19 vaccine hesitancy compared to individuals screening negative (11.5%), odds ratio = 2.18, 95% confidence interval (CI) 1.97–2.40, p < 0.001. The population attributable fraction (PAF) indicated that if blood-injection-injury phobia were absent then this may prevent 11.5% of all instances of vaccine hesitancy, AF = 0.11; 95% CI 0.09–0.14, p < 0.001. COVID-19 vaccine hesitancy was associated with higher scores on the Specific Phobia Scale, r = 0.22, p < 0.001, Medical Fear Survey, r = 0.23, p < 0.001 and injection fears, r = 0.25, p < 0.001. Injection fears were higher in youth and in Black and Asian ethnic groups, and explained a small degree of why vaccine hesitancy is higher in these groups.
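The odds-ratio and attributable-fraction arithmetic above can be sketched directly from the reported proportions (a crude, unadjusted calculation: the published AF = 0.11 comes from the study's own, presumably covariate-adjusted, estimator, so the crude PAF will not match it exactly):

```python
# Back-of-the-envelope check of the unadjusted odds ratio and population
# attributable fraction (PAF) from the proportions reported in the abstract.
# The published estimates are model-based, so exact values differ.

p_exposed = 0.262      # proportion screening positive for blood-injection-injury phobia
p_hes_exp = 0.220      # hesitancy among those screening positive
p_hes_unexp = 0.115    # hesitancy among those screening negative

# Unadjusted odds ratio from the two hesitancy proportions
odds_ratio = (p_hes_exp / (1 - p_hes_exp)) / (p_hes_unexp / (1 - p_hes_unexp))

# Crude PAF: (overall risk - risk in unexposed) / overall risk
p_hes_overall = p_exposed * p_hes_exp + (1 - p_exposed) * p_hes_unexp
paf = (p_hes_overall - p_hes_unexp) / p_hes_overall

print(f"unadjusted OR  = {odds_ratio:.2f}")   # close to the reported 2.18
print(f"crude PAF      = {paf:.2f}")          # the reported AF = 0.11 is adjusted
```

The crude PAF overstates the adjusted one, as expected when the exposure is correlated with other drivers of hesitancy.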
Across the adult population, blood-injection-injury fears may explain approximately 10% of cases of COVID-19 vaccine hesitancy. Addressing such fears will likely improve the effectiveness of vaccination programmes.
Our aim was to estimate provisional willingness to receive a coronavirus 2019 (COVID-19) vaccine, identify predictive socio-demographic factors, and, principally, determine potential causes in order to guide information provision.
A non-probability online survey was conducted (24th September–17th October 2020) with 5,114 UK adults, quota sampled to match the population for age, gender, ethnicity, income, and region. The Oxford COVID-19 Vaccine Hesitancy Scale assessed intent to take an approved vaccine. Structural equation modelling estimated explanatory factor relationships.
71.7% (n=3,667) were willing to be vaccinated, 16.6% (n=849) were very unsure, and 11.7% (n=598) were strongly hesitant. An excellent model fit (RMSEA=0.05/CFI=0.97/TLI=0.97), explaining 86% of variance in hesitancy, was provided by beliefs about the collective importance, efficacy, side-effects, and speed of development of a COVID-19 vaccine. A second model, with reasonable fit (RMSEA=0.03/CFI=0.93/TLI=0.92), explaining 32% of variance, highlighted two higher-order explanatory factors: ‘excessive mistrust’ (r=0.51), including conspiracy beliefs, negative views of doctors, and need for chaos, and ‘positive healthcare experiences’ (r=−0.48), including supportive doctor interactions and good NHS care. Hesitancy was associated with younger age, female gender, lower income, and ethnicity, but socio-demographic information explained little variance (9.8%). Hesitancy was associated with lower adherence to social distancing guidelines.
COVID-19 vaccine hesitancy is relatively evenly spread across the population. Willingness to take a vaccine is closely bound to recognition of its collective importance. Vaccine public information that highlights prosocial benefits may be especially effective. Factors such as conspiracy beliefs that foster mistrust and erode social cohesion will lower vaccine uptake.
Acute cannabis administration can produce transient psychotic-like effects in healthy individuals. However, the mechanisms through which this occurs and which factors predict vulnerability remain unclear. We investigate whether cannabis inhalation leads to psychotic-like symptoms and speech illusion, whether cannabidiol (CBD) blunts such effects (study 1), and whether adolescence heightens such effects (study 2).
Two double-blind placebo-controlled studies, assessing speech illusion in a white noise task, and psychotic-like symptoms on the Psychotomimetic States Inventory (PSI). Study 1 compared effects of Cann-CBD (cannabis containing Δ-9-tetrahydrocannabinol (THC) and negligible levels of CBD) with Cann+CBD (cannabis containing THC and CBD) in 17 adults. Study 2 compared effects of Cann-CBD in 20 adolescents and 20 adults. All participants were healthy individuals who currently used cannabis.
In study 1, relative to placebo, both Cann-CBD and Cann+CBD increased PSI scores but not speech illusion. No differences between Cann-CBD and Cann+CBD emerged. In study 2, relative to placebo, Cann-CBD increased PSI scores and incidence of speech illusion, with the odds of experiencing speech illusion 3.1 (95% CI 1.3–7.2) times higher after Cann-CBD. No age group differences were found for speech illusion, but adults showed heightened effects on the PSI.
Inhalation of cannabis reliably increases psychotic-like symptoms in healthy cannabis users and may increase the incidence of speech illusion. CBD did not influence psychotic-like effects of cannabis. Adolescents may be less vulnerable to acute psychotic-like effects of cannabis than adults.
Personality factors analogous to the Big Five observed in humans are present in the great apes. However, few studies have examined the long-term stability of great ape personality, particularly using factor-based personality instruments. Here, we assessed overall group, and individual-level, stability of chimpanzee personality by collecting ratings for chimpanzees (N = 50) and comparing them with ratings collected approximately 10 years previously, using the same personality scale. The overall mean scores of three of the six factors differed across the two time points. Sex differences in personality were also observed, with overall sex differences found for three traits, and males and females showing different trajectories for two further traits over the 10-year period. Regardless of sex, rank-order stability analysis revealed strong stability for dominance; individuals who were dominant at the first time point were also dominant 10 years later. The other personality factors exhibited poor to moderate rank-order stability, indicating that individuals were variable in their rank-position consistency over time. As many studies assessing chimpanzee cognition rely on personality data collected several years prior to testing, these data highlight the importance of collecting current personality data when correlating them with cognitive performance.
The National Institute for Health and Care Excellence referral guidelines prompting urgent two-week referrals were updated in 2015. Additional symptoms with a lower positive predictive value threshold of 3 per cent were integrated. This study aimed to examine whether current pan-London urgent referral guidelines for suspected head and neck cancer lead to efficient and accurate referrals by assessing frequency of presenting symptoms and risk factors, and examining their correlation with positive cancer diagnoses.
The risk factors and symptoms of 984 consecutive patients (over a six-month period in 2016) were collected retrospectively from urgent referral letters to University College London Hospital for suspected head and neck cancer.
Only 37 referrals (3.76 per cent) resulted in a head and neck cancer diagnosis. Four of the 23 recommended symptoms demonstrated statistically significant results. Nine of the 23 symptoms had a positive predictive value of over 3 per cent.
The findings indicate that the current referral guidelines are not effective at detecting patients with cancer. Detection rates have decreased from 10–15 per cent to 3.76 per cent. A review of the current head and neck cancer referral guidelines is recommended, along with further data collection for comparison.
The diurnal feeding patterns of dairy cows affect the 24 h robot utilisation of pasture-based automatic milking systems (AMS). A decline in robot utilisation between 2400 and 0600 h currently occurs in pasture-based AMS, as cow feeding activity is greatly reduced during this time. Here, we investigate the effect of a temporal variation in feed quality and quantity on cow feeding behaviour between 2400 and 0600 h as a potential tool to increase voluntary cow trafficking in an AMS at night. The day was allocated into four equal feeding periods (0600 to 1200, 1200 to 1800, 1800 to 2400 and 2400 to 0600 h). Lucerne hay cubes (CP = 19.1%, water soluble carbohydrate = 3.8%) and oat, ryegrass and clover hay cubes with 20% molasses (CP = 11.8%, water soluble carbohydrate = 10.7%) were offered as the ‘standard’ and ‘preferred’ (preference determined previously) feed types, respectively. The four treatments were (1) standard feed offered ad libitum (AL) throughout 24 h; (2) as per AL, with preferred feed replacing standard feed between 2400 and 0600 h (AL + P); (3) standard feed offered at a restricted rate, with quantity varying between each feeding period (20:10:30:60%, respectively) as a proportion of the (previously) measured daily ad libitum intake (VA); (4) as per VA, with preferred feed replacing standard feed between 2400 and 0600 h (VA + P). Eight non-lactating dairy cows were used in a 4 × 4 Latin square design. During each experimental period, treatment cows were fed for 7 days, including 3 days habituation and 4 days data collection. Total daily intake was approximately 8% greater (P < 0.001) for the AL and AL + P treatments (23.1 and 22.9 kg DM/cow) as compared with the VA and VA + P treatments (21.6 and 20.9 kg DM/cow). The AL + P and VA treatments had 21% and 90% greater (P < 0.001) dry matter intake (DMI) between 2400 and 0600 h, respectively, compared with the AL treatment. In contrast, the VA + P treatment had similar DMI to the VA treatment.
Our experiment demonstrates that cow feeding activity at night can be increased by varying feed type and quantity, though it is possible that a penalty to total DMI may occur using VA. Further research is required to determine if the implementation of variable feed allocation on pasture-based AMS farms is likely to improve milking robot utilisation by increasing cow feeding activity at night.
Introduction: Endotracheal intubation (ETI) is a lifesaving procedure commonly performed by emergency department (ED) physicians that may lead to patient discomfort or adverse events (e.g., unintended extubation) if sedation is inadequate. No ED-based sedation guidelines currently exist, so individual practice varies widely. This study's objective was to describe the self-reported post-ETI sedation practice of Canadian adult ED physicians. Methods: An anonymous, cross-sectional, web-based survey featuring 7 common ED scenarios requiring ETI was distributed to adult ED physician members of the Canadian Association of Emergency Physicians (CAEP). Scenarios included post-cardiac arrest, hypercapnic and hypoxic respiratory failure, status epilepticus, polytrauma, traumatic brain injury, and toxicology. Participants indicated first and second choice of sedative medication following ETI, as well as bolus vs. infusion administration in each scenario. Data were summarised with descriptive statistics. Results: 207 ED physicians (response rate 16.8%) responded to the survey. Emergency medicine training of respondents included CCFP-EM (47.0%), FRCPC (35.8%), and CCFP (13.9%). 51.0% of respondents work primarily in academic/teaching hospitals and 40.4% work in community teaching hospitals. On average, responding physicians report providing care for 4.9 ± 6.8 (mean ± SD) intubated adult patients per month for varying durations (39.2% for 1–2 hours, 27.8% for 2–4 hours, and 22.7% for ≤1 hour). Combining all clinical scenarios, propofol was the most frequently used medication for post-ETI sedation (38.0% of all responses) and was the most frequently used agent except for the post-cardiac arrest, polytrauma, and hypercapnic respiratory failure scenarios. Ketamine was used second most frequently (28.2%), with midazolam being third most common (14.5%).
Post-ETI sedation was provided by > 98% of physicians in all situations except the post-cardiac arrest (26.1% indicating no sedation) and toxicology (15.5% indicating no sedation) scenarios. Sedation was provided by infusion in 74.6% of cases and bolus in 25.4%. Conclusion: Significant practice variability with respect to post-ETI sedation exists amongst Canadian emergency physicians. Future quality improvement studies should examine sedation provided in real clinical scenarios with a goal of establishing best sedation practices to improve patient safety and quality of care.
The EU has threatened to suspend Generalized Scheme of Preferences (GSP) status for Myanmar, under which the country's exports can enter Europe without any tariffs or quotas. The official reason cited by the EU is a growing concern over human rights violations and issues around labour rights in Myanmar. If this threat were to be carried out, the business sector that would be most affected is Myanmar's burgeoning garment sector, which employs around 700,000 people, most of whom are women. The principal worry in Myanmar is that if EU buyers and brands have to start paying tariffs to import Myanmar-made garments, then they will opt to shift their sourcing to other countries. Without GSP, Myanmar's garment exports may no longer be price competitive. As one of the few manufacturing sectors in Myanmar to employ semi-skilled women, many of whom migrated from poor rural areas, the garment sector has come to play an important socioeconomic role in the country. Whether or not the EU decides to withdraw GSP status, Myanmar's garment sector faces a number of challenges. How Myanmar's policymakers and garment industry leaders respond to global industry trends will be just as important, in the long run, in determining the sector's commercial sustainability.
MYANMAR'S GARMENT SECTOR AND THE EU'S THREAT TO SUSPEND GSP PREFERENCES
In 2013 Myanmar was reinstated into the EU Single Market's “Generalized Scheme of Preferences” (GSP), under which goods from the country — and forty-six other least developed countries — may enter the EU duty- and quota-free, in conformity with the “Everything But Arms” (EBA) trade scheme. This followed the positive progress that Myanmar had recently made in transitioning away from a military-led government, and served as “recognition of [Myanmar's] efforts to launch ambitious political, social and labour reforms”. However, in October 2018, following a fact-finding mission to Myanmar, the EU cautioned that Myanmar's GSP privileges might be suspended because of “deeply worrying developments highlighted in various United Nations reports, in particular as regards human rights violations in Rakhine, Kachin and Shan States and concerns around labour rights”.
In 2017, Myanmar exported goods to the EU collectively valued at €1.55 billion, of which the largest proportion by far, 72 per cent, was ready-made garments. Not coincidentally, one of the few manufacturing sectors to display significant growth in Myanmar over the last five years has been the garment sector. Some of that growth has been domestically generated, but a significant proportion stems from foreign investment in the sector, bringing in not just capital, but also non-financial inputs, such as global industry knowledge and expertise, and the all-important networks and relationships with international buyers.
Thus, the dynamics that lie behind Myanmar's recent garment sector growth are multifaceted, and comprise the following elements:
• Significant foreign investment inflows from Asian manufacturing companies (such as those from China and South Korea), leading to newly established export-oriented garment factories in Myanmar, and particularly in and around Yangon's industrial zones.
• A marked reduction in the number of domestically owned garment companies in Myanmar, which are unable to compete and are therefore effectively shut out of the export market. Most are typically left supplying cheap garments and uniforms to the relatively modest domestic market.
• An increase in export volumes for garments, principally to the EU, that is partly dependent on GSP privileges that allow Myanmar-based (but not necessarily Myanmar-owned) garment producers to compete with overseas rivals on the in-store prices for garments sold in the EU market.
The economic, political, strategic and cultural dynamism in Southeast Asia has gained added relevance in recent years with the spectacular rise of giant economies in East and South Asia. This has drawn greater attention to the region and to the enhanced role it now plays in international relations and global economics.
The sustained effort made by Southeast Asian nations since 1967 towards a peaceful and gradual integration of their economies has had indubitable success, and perhaps as a consequence of this, most of these countries are undergoing deep political and social changes domestically and are constructing innovative solutions to meet new international challenges. Big Power tensions continue to be played out in the neighbourhood despite the tradition of neutrality exercised by the Association of Southeast Asian Nations (ASEAN).
The Trends in Southeast Asia series acts as a platform for serious analyses by selected authors who are experts in their fields. It is aimed at encouraging policymakers and scholars to contemplate the diversity and dynamism of this exciting region.
Achieving a consistent level of robot utilisation throughout 24 h maximises automatic milking system (AMS) utilisation. However, levels of robot utilisation in the early morning hours are typically low, caused by the diurnal feeding behaviour of cows, limiting the inherent capacity and total production of pasture-based AMS. Our objective was to determine robot utilisation throughout 24 h by dairy cows, based on milking frequency (MF; milking events per animal per day) in a pasture-based AMS. Milking data were collected over 56 days in January and February 2013, from a single herd of 186 animals (Bos taurus) utilising three Lely A3 robotic milking units, located in Tasmania, Australia. The dairy herd was categorised into three equal-sized groups (n=62 per group) according to the cow’s mean daily MF over the duration of the study. Robot utilisation was characterised by an interaction (P < 0.001) between the three MF groups and time of day, with peak milking time for high MF cows within 1 h of a fresh pasture allocation becoming available, followed by the medium MF and low MF cows 2 and 4 h later, respectively. Cows in the high MF group also presented for milking between 2400 and 0600 h more frequently (77% of nights), compared to the medium MF group (57%) and low MF group (50%). This study has shown the formation of three distinct groups of cows within a herd, based on their MF levels. Further work is required to determine if this finding is replicated across other pasture-based AMS farms.
Changes in cannabis regulation globally make it increasingly important to determine what predicts an individual's risk of experiencing adverse drug effects. Relevant studies have used diverse self-report measures of cannabis use, and few include multiple biological measures. Here we aimed to determine which biological and self-report measures of cannabis use predict cannabis dependency and acute psychotic-like symptoms.
In a naturalistic study, 410 young cannabis users were assessed once when intoxicated with their own cannabis and once when drug-free in counterbalanced order. Biological measures of cannabinoids [Δ9-tetrahydrocannabinol (THC), cannabidiol (CBD), cannabinol (CBN) and their metabolites] were derived from three samples: each participant's own cannabis (THC, CBD), a sample of their hair (THC, THC-OH, THC-COOH, CBN, CBD) and their urine (THC-COOH/creatinine). Comprehensive self-report measures were also obtained. Self-reported and clinician-rated assessments were taken for cannabis dependency [Severity of Dependence Scale (SDS), DSM-IV-TR] and acute psychotic-like symptoms [Psychotomimetic States Inventory (PSI) and Brief Psychiatric Rating Scale (BPRS)].
Cannabis dependency was positively associated with days per month of cannabis use on both measures, and with urinary THC-COOH/creatinine for the SDS. Acute psychotic-like symptoms were positively associated with age of first cannabis use and negatively with urinary THC-COOH/creatinine; no predictors emerged for BPRS.
Levels of THC exposure are positively associated with both cannabis dependency and tolerance to the acute psychotic-like effects of cannabis. Combining urinary and self-report assessments (use frequency; age first used) enhances the measurement of cannabis use and its association with adverse outcomes.
We present observations of 50 deg² of the Mopra carbon monoxide (CO) survey of the Southern Galactic Plane, covering Galactic longitudes l = 300–350° and latitudes |b| ⩽ 0.5°. These data have been taken at 0.6 arcmin spatial resolution and 0.1 km s⁻¹ spectral resolution, providing an unprecedented view of the molecular clouds and gas of the Southern Galactic Plane in the 109–115 GHz J = 1–0 transitions of ¹²CO, ¹³CO, C¹⁸O, and C¹⁷O.
We present a series of velocity-integrated maps, spectra, and position-velocity plots that illustrate Galactic arm structures and trace masses on the order of ~10⁶ M⊙ deg⁻², and include a preliminary catalogue of C¹⁸O clumps located between l = 330–340°. Together with the information about the noise statistics of the survey, these data can be retrieved from the Mopra CO website and the PASA data store.
Objective: To examine whether social media and online behaviours are associated with unhealthy food and beverage consumption in children.
Design: A cross-sectional online survey was used to assess Internet and social media use, including engagement with food and beverage brand content, and frequency of consumption of unhealthy foods and beverages. Linear regression models were used to examine associations between online behaviours, including engagement with food and beverage brand content, and consumption of unhealthy foods and beverages, adjusting for age, sex and socio-economic status.
Setting: New South Wales, Australia, in 2014.
Participants: Children aged 10–16 years (n 417).
Results: Watching food brand video content on YouTube, purchasing food online and seeing favourite food brands advertised online were significantly associated with higher frequency of consumption of unhealthy foods and drinks after adjustment for age, sex and socio-economic status.
Conclusions: Children who have higher online engagement with food brands and content, particularly through online video, are more likely to consume unhealthy foods and drinks. Our findings highlight the need to include social media in regulations and policies designed to limit children’s exposure to unhealthy food marketing. Social media companies have a greater role to play in protecting children from advertising.
Novel approaches to improving disaster response have begun to include the use of big data and information and communication technology (ICT). However, there remains a dearth of literature on the use of these technologies in disasters. We have conducted an integrative literature review on the role of ICT and big data in disasters. Included in the review were 113 studies that met our predetermined inclusion criteria. Most studies used qualitative methods (39.8%, n=45) over mixed methods (31%, n=35) or quantitative methods (29.2%, n=33). Nearly 80% (n=88) covered only the response phase of disasters and only 15% (n=17) of the studies addressed disasters in low- and middle-income countries. The 4 most frequently mentioned tools were geographic information systems, social media, patient information, and disaster modeling. We suggest testing ICT and big data tools more widely, especially outside of high-income countries, as well as in nonresponse phases of disasters (eg, disaster recovery), to increase an understanding of the utility of ICT and big data in disasters. Future studies should also include descriptions of the intended users of the tools, as well as implementation challenges, to assist other disaster response professionals in adapting or creating similar tools. (Disaster Med Public Health Preparedness. 2019;13:353–367)
Non-psychotic affective symptoms are important components of psychotic syndromes. They are frequent and are now thought to influence the emergence of paranoia and hallucinations. Evidence supporting this model of psychosis comes from recent cross-fertilising epidemiological and intervention studies. Epidemiological studies identify plausible targets for intervention but must be interpreted cautiously. Nevertheless, causal inference can be strengthened substantially using modern statistical methods.
Directed Acyclic Graphs (DAGs) were used in a dynamic Bayesian network approach to learn the overall dependence structure of chosen variables. DAG-based inference identifies the most likely directional links between multiple variables, thereby locating them in a putative causal cascade. We used initial and 18-month follow-up data from the 2000 British National Psychiatric Morbidity survey (N = 8580 and N = 2406).
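The structure-learning step can be illustrated with a toy sketch of score-based DAG search (synthetic data and hypothetical variable names; the study's actual dynamic Bayesian network machinery is considerably richer): enumerate every DAG over a few binary variables, score each with a decomposable BIC, and keep the best.

```python
# Minimal, illustrative score-based DAG structure learning: exhaustive search
# over all DAGs on three binary variables, scored by BIC. Variable names and
# the generating chain (worry -> low_mood -> paranoia) are synthetic stand-ins.

import itertools, math, random

random.seed(0)
N = 2000
VARS = ["worry", "low_mood", "paranoia"]

def flip(x, p):
    """Return x, flipped with probability p."""
    return x if random.random() > p else 1 - x

data = []
for _ in range(N):
    w = random.randint(0, 1)
    d = flip(w, 0.1)          # low_mood depends on worry
    p = flip(d, 0.1)          # paranoia depends on low_mood
    data.append({"worry": w, "low_mood": d, "paranoia": p})

def bic_node(child, parents):
    """Decomposable BIC contribution of one node given its parent set."""
    counts, parent_counts = {}, {}
    for row in data:
        pa = tuple(row[v] for v in parents)
        counts[(pa, row[child])] = counts.get((pa, row[child]), 0) + 1
        parent_counts[pa] = parent_counts.get(pa, 0) + 1
    loglik = sum(n * math.log(n / parent_counts[pa]) for (pa, _), n in counts.items())
    k = (2 - 1) * (2 ** len(parents))        # free parameters for a binary node
    return loglik - 0.5 * k * math.log(N)

def is_acyclic(edges):
    """DFS cycle check over the directed edge set."""
    graph = {v: [b for a, b in edges if a == v] for v in VARS}
    def visit(v, seen):
        if v in seen:
            return False
        return all(visit(u, seen | {v}) for u in graph[v])
    return all(visit(v, set()) for v in VARS)

best_score, best_edges = -math.inf, None
pairs = list(itertools.combinations(VARS, 2))
for choice in itertools.product([0, 1, 2], repeat=len(pairs)):  # none / a->b / b->a
    edges = []
    for (a, b), c in zip(pairs, choice):
        if c == 1: edges.append((a, b))
        elif c == 2: edges.append((b, a))
    if not is_acyclic(edges):
        continue
    score = sum(bic_node(v, [a for a, b in edges if b == v]) for v in VARS)
    if score > best_score:
        best_score, best_edges = score, edges

skeleton = {frozenset(e) for e in best_edges}
print(skeleton)  # the chain's skeleton: worry-low_mood and low_mood-paranoia
```

Note that BIC cannot distinguish DAGs within a Markov equivalence class (the chain and the common-cause model score identically), which is why only the skeleton is checked; orienting edges and handling many variables over time requires the heavier machinery the study employs.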
We analysed persecutory ideation, hallucinations, a range of affective symptoms and the effects of cannabis and problematic alcohol use. Worry was central to the links between symptoms, with plausible direct effects on insomnia, depressed mood and generalised anxiety, and recent cannabis use. Worry linked the other affective phenomena with paranoia. Hallucinations were connected only to worry and persecutory ideation. General anxiety, worry, sleep problems, and persecutory ideation were strongly self-predicting. Worry and persecutory ideation were connected over the 18-month interval in an apparent feedback loop.
These results have implications for understanding dynamic processes in psychosis and for targeting psychological interventions. The reciprocal influence of worry and paranoia implies that treating either symptom is likely to ameliorate the other.