This study considers the relationships between first-year law students' admission credentials, the amount of time they spend in study, and the grades they receive on examination. The findings include the following: there is a significant drop in effort during the first year; although effort invested in study pays off in improved grades, it explains grades far less well than student ability as measured by LSAT scores and undergraduate grades; students in the middle and bottom of the class benefit more from substantial study than those at the top; class attendance raises grades far more than equivalent time spent in other study; and none of the various study techniques examined could be linked with major differences in results.
The medical burden in mood disorders is high; various factors are thought to drive this pattern. Little research has examined the role of childhood maltreatment and its effects on medical morbidity in adulthood among people with unipolar depression and bipolar disorder.
This is the first study to explore the association between childhood maltreatment and medical morbidity in bipolar disorder and in unipolar depression, and to examine whether the impacts of abuse and neglect are distinct or combined.
The participants consisted of 354 psychiatrically healthy controls, 248 participants with recurrent unipolar depression and 72 with bipolar disorder. Participants completed the Childhood Trauma Questionnaire and received a validated medical history interview.
Any type of childhood maltreatment, child abuse and child neglect were significantly associated with medical burden in bipolar disorder, but not in unipolar depression or in controls. These associations followed a dose–response pattern: participants with bipolar disorder who had a history of two or more types of childhood maltreatment had the highest odds of having a medical illness, relative to those with no such history or those who reported one form. No such dose–response pattern was detected for participants with unipolar depression or for controls.
These findings suggest that childhood maltreatment may play a stronger role in the development of medical illnesses in individuals with bipolar disorder than in those with unipolar depression. Maltreated individuals with a mood disorder, especially bipolar disorder, may benefit most from prevention and intervention efforts addressing physical health.
This paper reviews recent research into predicting the eating quality of beef. A range of instrumental and grading approaches are discussed, highlighting implications for the European beef industry. Studies incorporating a number of instrumental and spectroscopic techniques illustrate the potential for online systems to non-destructively measure muscle pH, colour, fat and moisture content of beef with R2 (coefficient of determination) values >0.90. Direct predictions of eating quality (tenderness, flavour, juiciness) and fatty acid content using these methods are also discussed, though success is highly variable: R2 values for instrumental measures of tenderness have been quoted as high as 0.85, whereas R2 values for sensory tenderness can be as low as 0.01. Discriminant analysis models can improve prediction of variables such as pH and shear force, correctly classifying beef samples into categorical groups with >90% accuracy. Prediction of beef flavour continues to challenge researchers and industry alike, with R2 values rarely quoted above 0.50, regardless of the instrumental or statistical analysis used. Beef grading systems such as the EUROP and United States Department of Agriculture systems provide carcase classification and some indication of yield. Other systems attempt to classify the whole carcase according to expected eating quality. These are being supplemented by schemes such as Meat Standards Australia (MSA), based on consumer satisfaction for individual cuts. In Australia, MSA has grown steadily since its inception, generating a 10% premium, worth $187 million, for the beef industry in 2015–16. There is evidence that European consumers would respond to an eating quality guarantee provided it is simple and independently controlled.
A European beef quality assurance system might encompass environmental and nutritional measures as well as eating quality and would need to be profitable, simple, effective and sufficiently flexible to allow companies to develop their own brands.
The Meat Standards Australia (MSA) grading scheme has the ability to predict beef eating quality for each ‘cut×cooking method combination’ from animal and carcass traits such as sex, age, breed, marbling, hot carcass weight and fatness, ageing time, etc. Following MSA testing protocols, a total of 22 different muscles, cooked by four different cooking methods and to three different degrees of doneness, were tasted by over 19 000 consumers from Northern Ireland, Poland, Ireland, France and Australia. Consumers scored the sensory characteristics (tenderness, flavour liking, juiciness and overall liking) and then allocated samples to one of four quality grades: unsatisfactory, good-every-day, better-than-every-day and premium. We observed that 26% of the beef was unsatisfactory. As previously reported, 68% of samples were allocated to the correct quality grades using the MSA grading scheme. Furthermore, only 7% of the beef rated unsatisfactory by consumers was misclassified as acceptable. Overall, we concluded that an MSA-like grading scheme could be used to predict beef eating quality and hence underpin commercial brands or labels in a number of European countries, and possibly the whole of Europe. In addition, such an eating quality guarantee system may allow the implementation of an MSA genetic index to improve eating quality through genetics as well as through management. Finally, such an eating quality guarantee system is likely to generate economic benefits to be shared along the beef supply chain from farmers to retailers, as consumers are willing to pay more for a better quality product.
Tenderness can be considered as a function of three components: connective tissue content/composition, sarcomere length and proteolysis of the myofibrillar proteins (ageing) (Koohmaraie, 2002). Improvement of sarcomere length and proteolysis can be achieved through optimal processing (e.g. hanging and ageing; Thompson, 2006). The main technique for improving sarcomere length is tenderstretch hanging, which increases the tension of the hindlimb and loin muscles, preventing contraction of the fibres at rigor (Bouton, 1973). The aim of this experiment was to compare, under commercial conditions, two methods of tenderstretch hanging and to examine their potential to improve the tenderness of lamb muscles.
A considerable proportion of beef produced in the UK is a byproduct of the dairy industry. Young animals from this source are generally regarded as low in quality and meat from animals of this type is usually destined for the commodity minced beef market. The objective of the present study was to evaluate the effect of slaughter weight on sensory characteristics of meat from Holstein-Friesian bulls and steers offered a cereal-based ration.
Accurately quantifying a consumer’s willingness to pay (WTP) for beef of different eating qualities is intrinsically linked to the development of eating-quality-based meat grading systems, and therefore to the delivery of consistent, quality beef to the consumer. Following Australian MSA (Meat Standards Australia) testing protocols, over 19 000 consumers from Northern Ireland, Poland, Ireland, France and Australia were asked to detail their willingness to pay for beef from one of four categories that best described the sample: unsatisfactory, good-every-day, better-than-every-day or premium quality. These figures were subsequently converted to a proportion relative to the good-every-day category (P-WTP) to allow comparison between different currencies and time periods. Consumers also answered a short demographic questionnaire. Consumer P-WTP was found to be remarkably consistent between different demographic groups. After quality grade, by far the greatest influence on P-WTP was country of origin. This difference could not be explained by the other demographic factors examined in this study, such as occupation, gender, frequency of consumption and the importance of beef in the diet. Therefore, we can conclude that the P-WTP for beef is highly transferable between different consumer groups, but not between countries.
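The conversion to P-WTP described above is a simple normalisation against the good-every-day category; a minimal sketch follows, with invented figures for illustration (these are not study data).

```python
def p_wtp(wtp_by_grade, reference="good-every-day"):
    """Convert raw willingness-to-pay figures (in any one currency) into
    proportions relative to the reference grade, so that values can be
    compared across currencies and time periods."""
    ref = wtp_by_grade[reference]
    return {grade: wtp / ref for grade, wtp in wtp_by_grade.items()}

# Hypothetical per-kg figures in an arbitrary currency (not study data).
raw = {"unsatisfactory": 5.0, "good-every-day": 10.0,
       "better-than-every-day": 15.0, "premium": 20.0}
print(p_wtp(raw))  # premium -> 2.0, i.e. twice the good-every-day price
```

Because each consumer's figures are divided by their own good-every-day figure, the resulting proportions are unitless and directly comparable across countries.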
The most important factors known to influence the eating quality of beef are well established and include both pre- and post-slaughter events, with many of the determinants interacting with each other. A substantial programme of work has been conducted by the Agri-Food and Biosciences Institute in Northern Ireland aimed at quantifying those factors of most importance to the local beef industry. In the Northern Ireland studies, post-slaughter effects such as carcase chilling and electrical stimulation, ageing, carcase hanging and cooking method were shown to have a significant impact on eating quality compared with pre-slaughter activities such as animal handling and lairage time. However, animal breed, and particularly the use of dairy-breed animals, was shown to significantly improve eating quality. Many of these factors were found to interact with each other.
To achieve their conservation goals, individuals, communities and organizations need to acquire a diversity of skills, knowledge and information (i.e. capacity). Despite current efforts to build and maintain appropriate levels of conservation capacity, it has been recognized that there will need to be a significant scaling-up of these activities in sub-Saharan Africa. This is because of the rapid increase in the number and extent of environmental problems in the region. We present a range of socio-economic contexts relevant to four key areas of African conservation capacity building: protected area management, community engagement, effective leadership, and professional e-learning. Under these core themes, 39 specific recommendations are presented. These were derived from multi-stakeholder workshop discussions at an international conference held in Nairobi, Kenya, in 2015. At the meeting 185 delegates (practitioners, scientists, community groups and government agencies) represented 105 organizations from 24 African nations and eight non-African nations. The 39 recommendations constituted six broad types of suggested action: (1) the development of new methods, (2) the provision of capacity building resources (e.g. information or data), (3) the communication of ideas or examples of successful initiatives, (4) the implementation of new research or gap analyses, (5) the establishment of new structures within and between organizations, and (6) the development of new partnerships. A number of cross-cutting issues also emerged from the discussions: the need for a greater sense of urgency in developing capacity building activities; the need to develop novel capacity building methodologies; and the need to move away from one-size-fits-all approaches.
The beef industry must become more responsive to the changing market place and consumer demands. An essential part of this is quantifying consumers’ perception of the eating quality of beef and their willingness to pay for that quality, across a broad range of demographics. Over 19 000 consumers from Northern Ireland, Poland, Ireland and France each tasted seven beef samples and scored them for tenderness, juiciness, flavour liking and overall liking. These scores were weighted and combined to create a fifth score, termed the Meat Quality 4 score (MQ4) (0.3×tenderness, 0.1×juiciness, 0.3×flavour liking and 0.3×overall liking). They also allocated the beef samples to one of four quality grades that best described the sample: unsatisfactory, good-every-day, better-than-every-day or premium. After the completion of the tasting panel, consumers were asked to detail, in their own currency, their willingness to pay for these four categories, which was subsequently converted to a proportion relative to the good-every-day category (P-WTP). Consumers also answered a short demographic questionnaire. The four sensory scores, the MQ4 score and the P-WTP were analysed separately as dependent variables in linear mixed effects models. The answers from the demographic questionnaire were included in the models as fixed effects. Overall, there were only small differences in consumer scores and P-WTP between demographic groups. Consumers who preferred their beef cooked medium or well-done scored beef higher, except in Poland, where the opposite trend was found. This may be because Polish consumers were more likely to prefer their beef cooked well-done, but samples were cooked medium for this group. There was a small positive relationship with the importance of beef in the diet, increasing sensory scores by about 4% in Poland and Northern Ireland. Men also scored beef about 2% higher than women for most sensory scores in most countries.
In most countries, consumers were willing to pay between 150 and 200% more for premium beef, and there was a 50% penalty in value for unsatisfactory beef. After quality grade, by far the greatest influence on P-WTP was country of origin. Consumer age also had a small negative relationship with P-WTP. The results indicate that a single quality score could reliably describe the eating quality experienced by all consumers. In addition, if reliable quality information is delivered to consumers they will pay more for better quality beef, which would add value to the beef industry and encourage improvements in quality.
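The MQ4 weighting quoted in the abstract above (0.3 × tenderness, 0.1 × juiciness, 0.3 × flavour liking, 0.3 × overall liking) is a straightforward linear combination; a minimal sketch follows, with invented sensory scores for illustration.

```python
def mq4(tenderness, juiciness, flavour_liking, overall_liking):
    """Meat Quality 4 (MQ4) score: a weighted combination of the four
    consumer sensory scores, using the weights quoted in the abstract."""
    return (0.3 * tenderness + 0.1 * juiciness
            + 0.3 * flavour_liking + 0.3 * overall_liking)

# Hypothetical sensory scores on a 0-100 scale (not study data).
print(round(mq4(70, 60, 65, 68), 1))  # -> 66.9
```

Note that juiciness carries only a third of the weight of each other attribute, reflecting its smaller contribution to overall consumer satisfaction.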
Quantifying consumer responses to beef across a broad range of demographics, nationalities and cooking methods is vitally important for any system evaluating beef eating quality. On the basis of previous work, it was expected that consumer scores would be highly accurate in determining quality grades for beef, thereby providing evidence that such a technique could form the basis of an eating quality grading system for beef. Following the Australian MSA (Meat Standards Australia) testing protocols, over 19 000 consumers from Northern Ireland, Poland, Ireland, France and Australia tasted cooked beef samples, then allocated them to a quality grade: unsatisfactory, good-every-day, better-than-every-day or premium. The consumers also scored beef samples for tenderness, juiciness, flavour liking and overall liking. The beef was sourced from all countries involved in the study and cooked by four different cooking methods and to three different degrees of doneness, with each experimental group in the study consisting of a single cooking doneness within a cooking method for each country. For each experimental group, and for the data set as a whole, a linear discriminant function was calculated from the four sensory scores and used to predict the quality grade. This process was repeated using two conglomerate scores derived by weighting and combining the consumer sensory scores for tenderness, juiciness, flavour liking and overall liking: the original meat quality 4 score (oMQ4) (0.4, 0.1, 0.2, 0.3) and the current meat quality 4 score (cMQ4) (0.3, 0.1, 0.3, 0.3). From the results of these analyses, the optimal weightings of the sensory scores to generate an ‘ideal meat quality 4 score (MQ4)’ for each country were calculated, and the MQ4 values that reflected the boundaries between the four quality grades were determined.
The oMQ4 weightings were far more accurate in categorising European meat samples than the cMQ4 weightings, highlighting that tenderness is more important than flavour to the consumer when determining quality. The accuracy of the discriminant analysis in predicting the consumer-scored quality grades was similar across all consumer groups (68%) and similar to previously reported values. These results demonstrate that this technique, as used in the MSA system, could be used to predict consumer assessment of beef eating quality and therefore to underpin a commercial eating quality guarantee for all European consumers.
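The grade-prediction idea can be illustrated with a toy nearest-centroid classifier over the four sensory scores. This is a deliberate simplification of the linear discriminant functions used in the study (nearest class centroid rather than a fitted discriminant), and the training samples below are invented for illustration.

```python
def centroid(rows):
    """Mean of each column across a list of equal-length score vectors."""
    n = len(rows)
    return [sum(r[i] for r in rows) / n for i in range(len(rows[0]))]

def classify(scores, centroids):
    """Assign a sample's four sensory scores to the quality grade whose
    centroid is nearest (squared Euclidean distance)."""
    def dist2(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    return min(centroids, key=lambda grade: dist2(scores, centroids[grade]))

# Invented training samples: (tenderness, juiciness, flavour, overall).
training = {
    "unsatisfactory": [[25, 30, 28, 26], [30, 28, 32, 29]],
    "good-every-day": [[55, 50, 52, 54], [58, 52, 55, 56]],
    "premium": [[85, 80, 83, 84], [88, 82, 86, 87]],
}
centroids = {g: centroid(rows) for g, rows in training.items()}
print(classify([57, 51, 54, 55], centroids))  # -> good-every-day
```

A fitted linear discriminant additionally accounts for the covariance between the sensory scores, but the core idea, mapping a point in the four-score space to the closest quality grade, is the same.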
European conformation and fat grades are a major factor determining carcass value throughout Europe. The relationships between these scores and sensory scores were investigated. A total of 3786 French, Polish and Irish consumers evaluated steaks, grilled to a medium doneness, according to protocols of the ‘Meat Standards Australia’ system, from 18 muscles representing 455 local, commercial cattle from commercial abattoirs. A mixed linear effects model was used for the analysis. There was a negative relationship between juiciness and European conformation score. For the other sensory scores, a maximum of three muscles out of a possible 18 demonstrated negative effects of conformation score on sensory scores. There was a positive effect of European fat score on three individual muscles. However, this was accounted for by marbling score. Thus, while the European carcass classification system may indicate yield, it has no consistent relationship with sensory scores at a carcass level that is suitable for use in a commercial system. The industry should consider using an additional system related to eating quality to aid in the determination of the monetary value of carcasses, rewarding eating quality in addition to yield.
Delivering beef of consistent quality to the consumer is vital for consumer satisfaction and will help to ensure demand and therefore profitability within the beef industry. In Australia, this is being tackled with Meat Standards Australia (MSA), which uses carcass traits and processing factors to deliver an individual eating quality guarantee to the consumer for 135 different ‘cut by cooking method’ combinations from each carcass. The carcass traits used in the MSA model, such as ossification score, carcass weight and marbling, explain the majority of the differences between breeds and sexes. Therefore, it was expected that the model would predict the eating quality of bulls and dairy breeds with good accuracy. In total, 8128 muscle samples from 482 carcasses from France, Poland, Ireland and Northern Ireland were MSA graded at slaughter and then evaluated for tenderness, juiciness, flavour liking and overall liking by untrained consumers, according to MSA protocols. The scores were weighted (0.3, 0.1, 0.3, 0.3) and combined to form a global eating quality (meat quality (MQ4)) score. The carcasses were grouped into one of three breed categories: beef breeds, dairy breeds and crosses. The differences between the actual and the MSA-predicted MQ4 scores were analysed using a linear mixed effects model including fixed effects for carcass hang method, cook type, muscle type, sex, country, breed category and post-mortem ageing period, and random terms for animal identification, consumer country and kill group. Bulls had lower MQ4 scores than steers and females and were predicted less accurately by the MSA model. Beef breeds had lower eating quality scores than dairy breeds and crosses for five out of the 16 muscles tested. Beef breeds were also over-predicted in comparison with the cross and dairy breeds for six out of the 16 muscles tested. Therefore, even after accounting for differences in carcass traits, bulls still differ in eating quality when compared with females and steers.
Breed also influenced eating quality beyond differences in carcass traits. However, in this case, it was only for certain muscles. This should be taken into account when estimating the eating quality of meat. In addition, the coefficients used by the Australian MSA model for some muscles, marbling score and ultimate pH do not exactly reflect the influence of these factors on eating quality in this data set, and if this system was to be applied to Europe then the coefficients for these muscles and covariates would need further investigation.
Ossification score and animal age are both used as proxies for maturity-related collagen crosslinking and the consequent decrease in beef tenderness. Ossification score is strongly influenced by the hormonal status of the animal and may therefore better reflect physiological maturity, and consequently eating quality. As part of a broader cross-European study, local consumers scored 18 different muscle types, cooked in three ways, from 482 carcasses with ages ranging from 590 to 6135 days and ossification scores ranging from 110 to 590. The data were studied across three different maturity ranges: the complete range of maturities, a lesser range and a more mature range. The lesser maturity group consisted of carcasses with either an ossification score of 200 or less or an age of 987 days or less, with the remainder in the greater maturity group. The three maturity ranges were analysed separately with a linear mixed effects model. Across all the data, and for the greater maturity group, animal age had a greater magnitude of effect on eating quality than ossification score. This is likely due to a loss of sensitivity in mature carcasses, where ossification approached and even reached the maximum value. In contrast, age had no relationship with eating quality in the lesser maturity group, leaving ossification score as the more appropriate measure. Therefore, ossification score is more appropriate for most commercial beef carcasses; however, it is inadequate for carcasses of greater maturity, such as cull cows. Both measures may therefore be required in models predicting eating quality over populations with a wide range of maturity.
Major depressive disorder (MDD) is a common and disabling condition with well-established heritability and environmental risk factors. Gene–environment interaction studies in MDD have typically investigated candidate genes, though the disorder is known to be highly polygenic. This study aims to test for interaction between polygenic risk and stressful life events (SLEs) or childhood trauma (CT) in the aetiology of MDD.
The RADIANT UK sample consists of 1605 MDD cases and 1064 controls with SLE data, and a subset of 240 cases and 272 controls with CT data. Polygenic risk scores (PRS) were constructed using results from a mega-analysis on MDD by the Psychiatric Genomics Consortium. PRS and environmental factors were tested for association with case/control status and for interaction between them.
PRS significantly predicted depression, explaining 1.1% of variance in phenotype (p = 1.9 × 10⁻⁶). SLEs and CT were also associated with MDD status (p = 2.19 × 10⁻⁴ and p = 5.12 × 10⁻²⁰, respectively). No interactions were found between PRS and SLEs. Significant PRS × CT interactions were found (p = 0.002), but showed an inverse association with MDD status, as cases who experienced more severe CT tended to have a lower PRS than other cases or controls. This relationship between PRS and CT was not observed in independent replication samples.
CT is a strong risk factor for MDD but may have greater effect in individuals with lower genetic liability for the disorder. Including environmental risk along with genetics is important in studying the aetiology of MDD and PRS provide a useful approach to investigating gene–environment interactions in complex traits.
Intrauterine variations in nutrient allowance can alter body composition and tissue features of the porcine offspring around birth. This study aimed to determine the effects of fetal weight variations between littermates, and of maternal dietary regimen during gestation, on fetal muscle traits just before birth. Fourteen pregnant gilts were reared under a conventional (control, CTL; n=7) or an experimental (treatment, TRT; n=7) dietary regimen during gestation. The dietary treatment provided 70% of the protein and digestible energy contents of the CTL diet during the first 70 days of gestation and then 115% of the protein and digestible energy contents up to farrowing. At 110 days of gestation, sows were sacrificed and, per litter, one fetus with a low BW (824±140 g) and one with a normal BW (1218±192 g) were sampled. Irrespective of maternal dietary regimen, the longissimus muscle of the small fetuses exhibited higher expression levels of DLK1/Pref1 and NCAM1/CD56, two genes known to be downregulated during normal skeletal muscle development. Expression levels of the embryonic isoform of the myosin heavy chain (MyHC), both at the mRNA and at the protein level, were also higher in small fetuses. In addition, the ratios of perinatal to embryonic and of adult fast to developmental MyHC isoforms were generally lower in light fetuses compared with their medium-weight littermates. These modifications suggest delayed myofiber development in spontaneously growth-retarded fetuses. Finally, GLUT1 was expressed to a lesser extent in the muscle of small v. normal fetuses, suggesting a decreased ability for glucose uptake in muscle. Initial feed restriction and subsequent overfeeding of sows during gestation led to a lower expression of the myogenic factor MYOD1, a prerequisite for myogenic initiation in skeletal muscle. This maternal strategy was also associated with a lower expression level of insulin-like growth factor 1 receptor (IGFR) but an upregulation of IGF2.
This suggests an altered susceptibility of muscle cells to IGF signalling in fetuses from treated sows. Altogether, intrauterine growth restriction impaired fetal muscle development, and restricted feeding followed by overfeeding of gestating sows did not allow small fetuses to recover normal contractile and metabolic characteristics.
Strategies to dissect phenotypic and genetic heterogeneity of major depressive disorder (MDD) have mainly relied on subphenotypes, such as age at onset (AAO) and recurrence/episodicity. Yet, evidence on whether these subphenotypes are familial or heritable is scarce. The aims of this study are to investigate the familiality of AAO and episode frequency in MDD and to assess the proportion of their variance explained by common single nucleotide polymorphisms (SNP heritability).
For investigating familiality, we used 691 families with 2–5 full siblings with recurrent MDD from the DeNt study. We fitted (square root) AAO and episode count in a linear and a negative binomial mixed model, respectively, with family as random effect and adjusting for sex, age and center. The strength of familiality was assessed with intraclass correlation coefficients (ICC). For estimating SNP heritabilities, we used 3468 unrelated MDD cases from the RADIANT and GSK Munich studies. After similarly adjusting for covariates, derived residuals were used with the GREML method in GCTA (genome-wide complex trait analysis) software.
Significant familial clustering was found for both AAO (ICC = 0.28) and episodicity (ICC = 0.07). From the respective ICC estimates, we calculated the maximal additive heritability of AAO (0.56) and of episodicity (0.15). The SNP heritability of AAO was 0.17 (p = 0.04); the analysis was underpowered for calculating the SNP heritability of episodicity.
AAO and episodicity aggregate in families to a moderate and small degree, respectively. AAO is under stronger additive genetic control than episodicity. Larger samples are needed to calculate the SNP heritability of episodicity. The described statistical framework could be useful in future analyses.
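The step from a full-sibling ICC to a maximal additive heritability can be made explicit. A common approximation, assumed here (the abstract does not state the exact derivation), is that full siblings share on average half of the additive genetic variance, so the ICC bounds narrow-sense heritability at 2 × ICC; this reproduces the 0.56 figure for AAO quoted above (the episodicity figure differs slightly because the published ICC is rounded).

```python
def max_additive_heritability(sibling_icc):
    """Upper bound on narrow-sense heritability implied by a full-sibling
    intraclass correlation: full siblings share on average half of the
    additive genetic variance, so h2 <= 2 * ICC. This approximation
    ignores shared environment and non-additive effects (an assumption
    for illustration, not the study's own code)."""
    return 2 * sibling_icc

print(max_additive_heritability(0.28))  # -> 0.56 (AAO)
```

Because shared environment also inflates sibling resemblance, the true additive heritability can only be lower than this bound, which is why the text describes these as maximal values.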
The mineralogy and isotopic compositions of subglacially precipitated carbonate crusts (SPCCs) provide information on conditions and processes beneath former glaciers and ice sheets. Here we describe SPCCs formed on gneissic bedrock at the bed of the Laurentide Ice Sheet (LIS) during the last glacial maximum on central Baffin Island. Geochemical data indicate that the Ca in the crusts was likely derived from subglacial chemical weathering of Ca-bearing minerals in the local bedrock. C and Sr isotopic analyses reveal that the C in the calcite was derived predominantly from older plant debris. The δ18O values of the SPCCs suggest that these crusts formed in isotopic equilibrium with basal LIS ice preserved in the Barnes Ice Cap (BIC). Columnar crystal fabric and the predominance of sparite over micrite in the SPCCs are indicative of carbonate precipitation under open-system conditions. However, the mean δ18O value of the calcite crusts is ~10‰ higher than that of primary LIS ice preserved in the BIC, demonstrating that SPCCs record the isotopic composition of basal ice only. Palynomorph assemblages preserved within the calcite and basal BIC ice include species last endemic to the Arctic in the early Tertiary. The source of these palynomorphs remains enigmatic.
Although stressful life events (SLEs) are usually thought of as external environmental stressors, twin studies have reported a significant heritable component for measures of SLEs.
We examined the variance in SLEs captured by common genetic variants from a genome-wide association study (GWAS) of 2578 individuals. Genome-wide complex trait analysis (GCTA) was used to estimate the phenotypic variance tagged by single nucleotide polymorphisms (SNPs). We also performed a GWAS on the number of SLEs, and looked at correlations between siblings.
A significant proportion of variance in SLEs was captured by SNPs (30%, p = 0.04). When events were divided into those considered to be dependent or independent, an equal amount of variance was explained for both. This ‘heritability’ was in part confounded by personality measures of neuroticism and psychoticism. A GWAS for the total number of SLEs revealed one SNP that reached genome-wide significance (p = 4 × 10⁻⁸), although this association was not replicated in separate samples. Using available sibling data for 744 individuals, we also found a significant positive sibling correlation in SLEs (R2 = 0.08, p = 0.03).
These results provide independent validation from molecular data for the heritability of reporting environmental measures, and show that this heritability is in part due to both common variants and the confounding effect of personality.
This chapter describes the diagnosis, pathophysiology, and treatment of twin reversed arterial perfusion (TRAP). The existence of TRAP requires two conditions: pump or forward-flow failure in the acardiac twin, and a set of arterioarterial and venovenous placental anastomoses connecting the acardiac and pump twins' circulatory systems. Ultrasonography with color Doppler is the primary method for diagnosing TRAP. The added benefits of color Doppler sonography include the ability to trace fetal vessels and to document reversed flow through an arterioarterial anastomosis, confirming the diagnosis of TRAP. MRI has also been used as an adjunct modality in the diagnosis of TRAP. Using MRI, one can determine the extent of blood flow in the umbilical cord of the acardiac twin, as well as evaluate the pump twin for anomalies, cardiac decompensation, and signs of chronic hypoxia such as brain ischemia. Several surgical treatments are available, but no single technique has been found to be unequivocally superior.