The voyage data recorder (VDR) is a data recording system that aims to preserve all navigational, positional, communication, sensor, control and command information for data-driven investigation of accidents onboard ships. Due to ships' increasing dependence on interconnected networks, cybersecurity threats are among the most severe and critical problems in safeguarding sensitive information and assets. Cybersecurity is especially important for the VDR, considering that modern VDRs may have internet connections for data transfer, network links to the ship's critical systems and the capacity to record potentially sensitive data. This research therefore adopted failure modes and effects analysis (FMEA) to perform a cybersecurity risk assessment of a VDR, in order to identify cyber vulnerabilities and specific cyberattacks that might be launched against it. The findings of the study indicate certain cyberattacks (false information, command injection, viruses) as well as specific VDR components (the data acquisition unit (DAU), remote access, playback software) that require special attention. Accordingly, preventative and control measures to improve VDR cybersecurity are discussed in detail. This research contributes significantly to the improvement of ship safety management systems, particularly in terms of cybersecurity.
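As a rough illustration of how FMEA ranks threats, the sketch below computes risk priority numbers (RPN = severity × occurrence × detection) for the components and attacks the abstract highlights. The 1–10 ratings are hypothetical placeholders, not the study's values.

```python
# Minimal FMEA sketch: each failure mode gets severity (S), occurrence (O)
# and detection (D) ratings on a 1-10 scale; the risk priority number is
# RPN = S * O * D, and failure modes are ranked by RPN.
failure_modes = [
    # (component, cyberattack, S, O, D) -- illustrative ratings only
    ("DAU", "false information injection", 9, 6, 7),
    ("remote access", "command injection", 8, 5, 6),
    ("playback software", "virus infection", 7, 6, 5),
]

ranked = sorted(
    ((name, attack, s * o * d) for name, attack, s, o, d in failure_modes),
    key=lambda x: x[2],
    reverse=True,
)
for name, attack, rpn in ranked:
    print(f"{name}: {attack} -> RPN {rpn}")
```

The highest-RPN modes are the ones that, in an FMEA, would receive preventative measures first.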
Responding to mistrust in the European agencies’ risk assessments in politically salient cases, the European Union (EU) legislator, the European Food Safety Authority and the European Medicines Agency alike have accelerated their efforts to foster EU regulatory science transparency. These simultaneous endeavours have, however, taken place in a fragmented legislative and administrative context, with each agency operating under a different legal framework. By focusing on authorisation procedures, from registration of studies to authorisation of novel foods, pesticides and human medicines, this article examines the resulting regimes governing the disclosure of scientific data by EU agencies to identify common trends and sectoral specificities. Against the background of an overall shift towards enhanced transparency, we shed light on, first, the circulation of institutional arrangements and practices among agencies and, second, the new dimensions of transparency emerging from these developments. We also highlight the remaining sectoral differences and argue that they could have potentially large impacts on the amount and type of information disclosed and on the level of transparency perceived by stakeholders and citizens. We argue that more coherence across the sectoral transparency regimes is needed, in particular in light of the agencies’ contested legitimacy and of their increasing cooperation on cross-cutting issues like antimicrobial resistance and medicine and pesticide residues in food.
While the incidence of pregnancy has increased among individuals with adult CHD, little has been described about the pregnancy-related considerations and experiences of patients with adult CHD.
Objective:
We aimed to explore patients’ motivations, concerns, and decision-making processes regarding pregnancy.
Methods:
Between April 2019 and January 2020, we conducted in-depth telephone interviews with patients (n = 25) with simple, moderate, or complex adult CHD, who received prenatal care at the University of Washington during 2010–2019 and experienced a live birth. Transcripts were analysed using thematic analysis.
Results:
Participants described motivations for pregnancy as both internal desires (motherhood, marriage fulfillment, biological connection, fetal personhood, self-efficacy) and external drivers (family or community), as well as concerns for the health and survival of themselves and the fetus. Factors that enabled their decision to maintain a pregnancy included having a desire that outweighed their perceived risk, using available data to guide their decision, planning for contingencies and knowing their beliefs about termination, plus having a trusted healthcare team, social support, and resources. Factors that led to insurmountable risk in subsequent pregnancies included desire having been fulfilled by the first pregnancy, compounding risk with age and additional pregnancies, new responsibility to an existing child, and reduced healthcare team and social support.
Conclusions:
Understanding individuals’ motivations and concerns, and how they weigh their decisions to become or remain pregnant, can help clinicians better support patients with adult CHD considering pregnancy. Clinician education on patient experiences is warranted.
The welfare of transgenic animals is often not considered prior to their generation. However, we demonstrate here how a welfare risk assessment can be carried out before transgenic animals are created. We describe a risk assessment identifying potential welfare problems in transgenic pigs generated for future xeno-donation of organs. This assessment is based on currently available information concerning transgenic animal models in which one or more transgenes relevant to future xeno-donation have been inserted. The welfare risk assessment reveals that future xeno-donor pigs may have an increased tendency toward septicaemias, reduced fertility and/or impaired vision. The transgenic animal models used in generating hypotheses about the welfare of xeno-donor pigs can also assist in the testing of these hypotheses. To ensure high levels of welfare of transgenic animals, analogous risk assessments can be used to identify potential welfare problems during the early stages of the generation of new transgenic animals. Such assessments may form part of the basis on which licenses to generate new transgenic animals are granted to research groups.
Risk is defined as a situation involving exposure to danger. Risk assessment by nature characterises the probability of a negative event occurring and quantifies the consequences of such an event. Risk assessment is increasingly being used in the field of animal welfare as a means of drawing comparisons between multiple welfare problems within and between species and identifying those that should be prioritised by policy-makers, either because they affect a large proportion of the population or because they have particularly severe consequences for those affected. The assessment of risk is typically based on three fundamental factors: intensity of consequences, duration affected by consequences and prevalence. However, it has been recognised that these factors alone do not give a complete picture of a hazard and its associated consequences. Rather, to get a complete picture, it is important to also consider information about the hazard itself: probability of exposure to the hazard and duration of exposure to the hazard. The method has been applied to a variety of farmed species (eg poultry, dairy cows, farmed fish), investigating housing, husbandry and slaughter procedures, as well as companion animals, where it has been used to compare inherited defects in pedigree dogs and horses. To what extent can we trust current risk assessment methods to get the priorities straight? How should we interpret the results produced by such assessments? Here, the potential difficulties and pitfalls of the welfare risk assessment method will be discussed: (i) the assumption that welfare hazards are independent; (ii) the problem of quantifying the model parameters; and (iii) assessing and incorporating variability and uncertainty into welfare risk assessments.
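The five factors listed above (intensity, duration of consequences, prevalence, probability of exposure, duration of exposure) could be combined in many ways; the sketch below uses one simple, purely illustrative multiplicative aggregation, not the cited method's actual formula.

```python
# Illustrative aggregation of the five risk factors named above into a single
# hazard score -- a sketch of the idea, not the published method's formula.
def welfare_risk_score(intensity, duration_of_consequence, prevalence,
                       p_exposure, duration_of_exposure):
    """Combine consequence-side and exposure-side factors multiplicatively.

    All inputs are assumed to be normalised to [0, 1].
    """
    consequence = intensity * duration_of_consequence * prevalence
    exposure = p_exposure * duration_of_exposure
    return consequence * exposure

# A severe-but-rare hazard vs a mild-but-ubiquitous one (made-up numbers):
severe_rare = welfare_risk_score(0.9, 0.8, 0.1, 0.2, 0.5)
mild_common = welfare_risk_score(0.2, 0.9, 0.9, 0.9, 0.9)
print(severe_rare, mild_common)
```

Even this toy version shows why the extra exposure factors matter: a mild but ubiquitous hazard can outrank a severe but rare one.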
Science forms a vital part of animal welfare assessment. However, many animal welfare issues are more influenced by public perception and political pressure than by science. The discipline of epidemiology has had an important role to play in examining the effects that management, environment and infrastructure have on animal-based measures of welfare. Standard multifactorial analyses have been used to investigate the effects of these various inputs on outcomes such as lameness. Such research has thereby established estimates of the probability of occurrence of these adverse welfare outcomes (AWOs) given exposure to particular management inputs (welfare challenges). Welfare science has established various measures of the consequences of challenges to welfare. In this paper, a method is proposed, within a science-based risk assessment framework, for comparing the likely impact of different welfare challenges, incorporating both the probability of AWOs resulting from a welfare challenge and their impacts or consequences if they occur. The rationale of this framework is explained. The method does not provide objective measures or scores of welfare without some context of comparison, and does not provide new welfare measures; rather, it provides a framework enabling objective comparison. Possible applications include comparing the effects of specific management inputs, prioritising welfare challenges to inform the allocation of resources for addressing them, and comparing the lifetime welfare effects of management inputs or systems. The use of risk assessment methods in the animal welfare field can facilitate objective comparisons of situations that are currently assessed with some level of subjectivity. The methodology will require significant validation to determine its most productive use.
The risk assessment approach could have a productive role in advancing quantitative assessment in animal welfare science.
In Australia, flystrike can severely compromise sheep welfare. Traditionally, the surgical practice of mulesing was performed to alter wool distribution and breech conformation and thereby reduce flystrike risk. The aim of this study was to use published data to evaluate the effectiveness of an epidemiologically based risk assessment model in comparing welfare outcomes in sheep undergoing mulesing, mulesing with pain relief, plastic skin-fold clips, and no mulesing. We used four measures, based on cortisol, haptoglobin, bodyweight and behavioural change, across three farming regions in Australia. All data were normalised to a common scale, based on the range between the highest and lowest responses for each variable (‘welfare impact’; I). Lifetime severity of welfare challenge (SWC) was estimated by summing annual SWCs (SWC = I × P, where P = probability of that impact occurring). The severity of welfare challenge during the first year of life was higher for mulesed animals compared to unmulesed. However, over five years of life, the highest severity of welfare challenge was for unmulesed animals, and the lowest was for the plastic skin-fold clips. The model produced estimates of SWC that are in broad agreement with expert consensus that, although mulesing historically represented a welfare benefit for sheep under Australian conditions, the replacement of mulesing with less invasive procedures, and ultimately genetic selection combined with anti-fly treatments, will provide a sustainable welfare benefit. However, the primary objective of this work was to evaluate the use of the risk assessment framework; not to compare welfare outcomes from mulesing and its alternatives.
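The normalisation to a common scale and the SWC = I × P model described above can be sketched as follows. The yearly impact and probability values are illustrative placeholders, not the study's data.

```python
def normalise(value, low, high):
    """Scale a raw response (cortisol, haptoglobin, etc.) to [0, 1] across the
    observed range between lowest and highest responses ('welfare impact', I)."""
    return (value - low) / (high - low)

def lifetime_swc(annual_impacts, annual_probabilities):
    """Lifetime severity of welfare challenge: sum of annual SWC = I * P."""
    return sum(i * p for i, p in zip(annual_impacts, annual_probabilities))

# Illustrative numbers only -- not the study's data. A procedure like mulesing
# concentrates impact in year 1; an unmulesed animal carries a smaller but
# recurring flystrike risk every year.
impacts = [0.8, 0.2, 0.2, 0.2, 0.2]        # normalised impact I, years 1-5
probabilities = [1.0, 0.1, 0.1, 0.1, 0.1]  # probability P of that impact
print(round(lifetime_swc(impacts, probabilities), 3))
```

Summing annual SWCs over a five-year lifetime is what allows a one-off early-life impact to be compared against a recurring risk, as in the abstract's mulesed-versus-unmulesed comparison.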
The Food and Drug Administration (FDA) warned against administering over-the-counter cough and cold medicines to children under 2. This study evaluated whether experienced parents show poorer adherence to the FDA warning, as safe experiences are predicted to reduce the impact of warnings, and how adherence can be improved. Participants included 218 American parents (mean age: 29.98 (SD = 6.16), 82.9% female) with children age ≤ 2 who were aware of the FDA warning. We compared adherence among experienced (N=142; with other children > age 2) and inexperienced parents (N=76; only children ≤2). We also evaluated potential moderating variables (amount of warning-related information received, prevalence of side effects, trust in the FDA, frequency of coughs and colds, trust in drug packaging) and quantified the impact of amount of information. Logistic regression assessed the ability of experience alone, and experience combined with amount of information, to predict adherence. 53.3% of inexperienced but 28.4% of experienced parents were adherent (p = 0.0003). The groups did not differ on potential moderating variables. Adherence was 39.5% among experienced parents receiving “a lot of information”, but 15.4% for those receiving less (p = 0.002); amount of information did not affect adherence in inexperienced parents (p = 0.22) but uniquely predicted adherence compared to a model with experience alone (p = 0.0005). Experienced parents were also less likely to mistrust drug packaging (p = 0.03). Targeting FDA information to experienced parents, particularly via drug packaging, may improve their adherence.
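As a back-of-envelope check on the reported proportions (53.3% of inexperienced vs 28.4% of experienced parents adherent), the odds of adherence for experienced relative to inexperienced parents can be computed directly; this reproduces the direction of the abstract's logistic regression finding but is not the study's fitted model.

```python
# Odds ratio for adherence by parental experience, derived from the
# proportions reported in the abstract (not from the raw study data).
def odds(p):
    """Convert a probability to odds."""
    return p / (1 - p)

p_inexperienced = 0.533  # adherent share among inexperienced parents (n=76)
p_experienced = 0.284    # adherent share among experienced parents (n=142)

or_exp_vs_inexp = odds(p_experienced) / odds(p_inexperienced)
print(round(or_exp_vs_inexp, 2))
```

The resulting odds ratio of roughly 0.35 (well below 1) reflects the markedly lower odds of adherence among experienced parents.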
The Eurasian beaver has returned to Britain, presenting fundamental challenges and opportunities for all involved. Beavers will inevitably expand throughout British freshwater systems and provide significant benefits. Unofficial releases have presented challenges in terms of sourcing and genetics, health status and disease risks, the risk of introducing the non-native North American beaver species, and the lack of engagement with communities and resulting conflict. Agreed approaches require development using multi-stakeholder approaches to recognise and promote benefits whilst sensitively managing beavers’ impacts on people’s livelihoods.
Disease outbreaks may be a threat to the outcome of conservation translocations, and disease risk analysis is best completed before translocation. Disease risk analysis is hampered by knowledge of the full complement and pathogenicity of parasites harboured by wild animals.
Increased global movement of biological materials, coupled with climate change, and other environmental pressures are leading to increasing threats to plants from pests and pathogens. These pests and pathogens are relevant to plant conservation translocations as a source of translocation failure, and because the translocation itself can lead to pest and pathogen transmission. Many plant conservation translocations are relatively low risk, especially those involving the small-scale local movement of plant material between proximal sites. In contrast, plant translocations that involve movement of large amounts of material, and/or large geographical distances or crossing natural ecological barriers, are intrinsically higher risk. Additional high-risk factors include the potential for pest and pathogen transmission to occur at nursery/propagation facilities, especially if the translocated material is held in close proximity to other plants infected with pests and pathogens and/or material sourced from distant localities. Despite the importance of these issues, plant health risks are often not explicitly considered in plant conservation translocations. To support greater awareness and the effective uptake of appropriate biosecurity steps in plant conservation translocations, there is a pressing need to develop generally applicable best-practice guidelines targeted at translocation practitioners.
In the IPCC’s AR6, the chapters of each of the three Working Groups are structured with the intention of integrating ‘cross-cutting themes’ and ‘handshakes’ between them. While integration received special emphasis in AR6, it is not new. The IPCC has long considered how to treat issues such as representations of uncertainty and scenario data consistently across WGs. The IPCC’s effort to integrate knowledge across WGs raises important epistemological and ethical questions related to how the humanities, natural sciences, and social sciences shape understandings of climate change. To illustrate the theme of integration as applied within the IPCC, this chapter focuses on how risk is integrated across WGI and WGII in the AR6.
This study investigated how the proximity of disaster experience was associated with financial preparedness for emergencies.
Methods:
The data used were from the 2018 National Household Survey, which was administered by the Federal Emergency Management Agency. The working sample included 4779 respondents.
Results:
Logistic regression showed that the likelihood of setting aside emergency funds tended to be highest 2–5 years after experiencing a disaster, declining slightly thereafter but persisting even after 16 years. Recent disaster experience within 1 year did not show a significant impact, indicating a period of substantial need. However, the proximity of disaster experience did not significantly affect the amount of money set aside.
Conclusion:
It is suspected that increased risk perception related to previous experiences of disasters is more relevant to the likelihood of preparing financially, whereas other capacity-related factors such as income and having a disability have a greater effect on the amount of money set aside.
Digital psychiatry could empower individuals to navigate their context-specific experiences outside healthcare visits. This editorial discusses how leveraging digital health technologies could dramatically transform how we conceptualise mental health and the mental health professional's day-to-day practice, and how patients could be enabled to navigate their mental health with greater agency.
Mortality among people with mental disorders is higher than in the general population. There is a scarcity of studies on mortality in this group in Central and Eastern European countries.
Methods
The study aimed to assess all-cause mortality in people with mental disorders in Poland. We conducted a nationwide, register-based cohort study utilizing data from two nationwide registries in Poland: the registry of healthcare services reported to the National Health Fund (2009–2018) and the all-cause death registry from Statistics Poland (2019). We identified individuals who were consulted or hospitalized in public mental healthcare facilities and received at least one diagnosis of a mental disorder (International Statistical Classification of Diseases and Related Health Problems, 10th Revision [ICD-10]) from 2009 to 2018. Standardized mortality ratios (SMRs) were compared between people with a history of mental disorder and the general population.
Results
The study comprised 4,038,517 people. The SMR for individuals with any mental disorder compared with the general population was 1.54. SMRs varied across diagnostic groups, with the highest values for substance use disorders (3.04; 95% CI 3.00–3.09), schizophrenia, schizotypal and delusional disorders (2.12; 95% CI 2.06–2.18), and pervasive and specific developmental disorders (1.68; 95% CI 1.08–2.29). When only inpatients were considered, all-cause mortality risk was almost threefold higher than in the general population (SMR 2.90; 95% CI 2.86–2.94).
Conclusions
In Poland, mortality in people with mental disorders is significantly higher than in the general population. The results provide a reference point for future longitudinal studies on mortality in Poland.
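A standardized mortality ratio of the kind reported above is the ratio of observed to expected deaths in the exposed group; the sketch below computes one with an approximate 95% CI via a log-normal approximation for the Poisson death count. The counts are invented to reproduce an SMR of 1.54 for illustration; the abstract reports only the ratios, not raw counts.

```python
import math

def smr_with_ci(observed, expected, z=1.96):
    """Standardized mortality ratio (observed/expected deaths) with an
    approximate 95% CI, using a log-normal approximation for the
    Poisson-distributed count of observed deaths."""
    smr = observed / expected
    se_log = 1 / math.sqrt(observed)  # SE of log(SMR)
    lower = smr * math.exp(-z * se_log)
    upper = smr * math.exp(z * se_log)
    return smr, lower, upper

# Invented counts chosen to give SMR = 1.54, matching the reported ratio.
smr, lower, upper = smr_with_ci(observed=462, expected=300)
print(f"SMR {smr:.2f} (95% CI {lower:.2f}-{upper:.2f})")
```

With the very large counts in a 4-million-person cohort, the real intervals would be far narrower than this toy example's.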
This study leveraged machine learning to evaluate the contribution of information from multiple developmental stages to prospective prediction of depression and anxiety in mid-adolescence.
Methods
A community sample (N = 374; 53.5% male) of children and their families completed tri-annual assessments across ages 3–15. The feature set included several important risk factors spanning psychopathology, temperament/personality, family environment, life stress, interpersonal relationships, neurocognitive, hormonal, and neural functioning, and parental psychopathology and personality. We used canonical correlation analysis (CCA) to reduce the large feature set to a lower dimensional space while preserving the longitudinal structure of the data. Ablation analysis was conducted to evaluate the relative contributions to prediction of information gathered at different developmental periods and relative to previous disorder status (i.e. age 12 depression or anxiety) and demographics (sex, race, ethnicity).
Results
CCA components from individual waves predicted age 15 disorder status better than chance across ages 3, 6, 9, and 12 for anxiety and 9 and 12 for depression. Only the components from age 12 for depression, and ages 9 and 12 for anxiety, improved prediction over prior disorder status and demographics.
Conclusions
These findings suggest that screening for risk of adolescent depression can be successful as early as age 9, while screening for risk of adolescent anxiety can be successful as early as age 3. Assessing additional risk factors at age 12 for depression, and going back to age 9 for anxiety, can improve screening for risk at age 15 beyond knowing standard demographics and disorder history.
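The core CCA step described in the Methods above, reducing two paired feature blocks to maximally correlated components, can be sketched with a minimal whitened-SVD implementation. The simulated data, dimensions, and regularisation constant are illustrative assumptions, not the study's.

```python
import numpy as np

def cca_first_correlation(X, Y, reg=1e-6):
    """First canonical correlation between feature blocks X (n x p) and
    Y (n x q): whiten each block's covariance, then take the leading
    singular value of the whitened cross-covariance."""
    X = X - X.mean(axis=0)
    Y = Y - Y.mean(axis=0)
    n = X.shape[0]
    Sxx = X.T @ X / (n - 1) + reg * np.eye(X.shape[1])
    Syy = Y.T @ Y / (n - 1) + reg * np.eye(Y.shape[1])
    Sxy = X.T @ Y / (n - 1)

    def inv_sqrt(S):
        # Inverse matrix square root of a symmetric positive-definite matrix.
        w, V = np.linalg.eigh(S)
        return V @ np.diag(1 / np.sqrt(w)) @ V.T

    K = inv_sqrt(Sxx) @ Sxy @ inv_sqrt(Syy)
    singular_values = np.linalg.svd(K, compute_uv=False)
    return singular_values[0]  # first canonical correlation, in [0, 1]

# Simulated example: two blocks sharing one latent factor, as when one
# assessment wave's risk factors relate to a later outcome block.
rng = np.random.default_rng(0)
latent = rng.normal(size=(200, 1))
X = latent + 0.5 * rng.normal(size=(200, 3))  # "wave features"
Y = latent + 0.5 * rng.normal(size=(200, 2))  # "outcome block"
r = cca_first_correlation(X, Y)
print(round(r, 2))
```

Because both blocks load on the same latent factor, the first canonical correlation is high; with unrelated blocks it would hover near zero, which is the logic behind testing each wave's components against chance.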
The strong, life-long association between epilepsy and intellectual disability means that psychiatric teams, and the services they exist in, have a need for significant competencies in the field of epilepsy. This article addresses these competencies through the pathway of care. It will focus on those areas most relevant to psychiatric care and, when possible, explore where technology has begun to influence practice. The pathway leads from diagnosis through, in some cases, to mortality and support of the bereaved in psychiatric care. We will approach the topic through showing how the intertwining themes of information, empowerment, access to care, assessment of risk and psychological support are important. Technological advances are supporting changes in most of these areas, and psychological support, a knowledge of the needs of people with epilepsy and intellectual disability and epilepsy skills remain the foundation in the application of these advances.
This chapter examines the risks of pregnancy for women over 40 and the strategies to optimise the management of pregnancy, labour and puerperium in this age group. In the UK, antenatal care for women under 40 is not usually any different unless there are other confounding factors. Women of advanced age booking for pregnancy should have a thorough risk assessment to ascertain the risk of hypertensive diseases of pregnancy, and those at higher risk should be started on 75 mg aspirin from 12 weeks until 36 weeks. Increased surveillance for GDM is not recommended in the UK on the basis of age alone. However, it should be noted that AMA is associated with an increased background incidence of diabetes, and it is our practice to offer a mini glucose tolerance test. Risk of venous thromboembolism should be assessed at booking and at each encounter. Serial growth scans with Doppler studies should be performed from 26–28 weeks of gestation in women over 40 years. Induction of labour is recommended between 39 and 40 weeks when maternal age is over 40 years. There is insufficient evidence to comment on the possible effect of this intervention on perinatal mortality and rates of operative delivery, and this should be mentioned when counselling for induction of labour.
Converging theoretical and empirical evidence points to suicide being a fundamentally aleatory event – that risk of suicide is opaque to useful assessment at the level of the individual. This chapter presents an integrated evolutionary and clinical argument that the time has come to transcend efforts to categorise people's risk of taking their own lives. A brighter future awaits mental healthcare if the behaviour's essential non-predictability is understood and accepted. The pain-brain evolutionary theory of suicide predicts inter alia that all intellectually competent humans carry the potential for suicide, and that suicides will occur largely at random. The randomness arises because, over an evolutionary timescale, selection of adaptive defences will have sought out and exploited all operative correlates of suicide and will thus have exhausted those correlates' predictive power. Completed suicides are therefore statistical residuals – events intrinsically devoid of informational cues by which the organism could have avoided self-destruction. Empirical evidence supports this theoretical expectation. Suicide resists useful prediction at the level of the individual. Regardless of the means by which the assessment is made, people rated 'high risk' seldom take their own lives, even over extended periods. Consequently, if a prevention treatment is sufficiently safe and effective to be worth allotting to the 'high-risk' subset of a cohort of patients, it will be just as worthwhile for the rest. Prevention measures will offer the greatest prospects for success where the aleatory nature of suicide is accepted, acknowledging that 'fault' for rare, near-random, self-induced death resides not within the individual but as a universal human potentiality. A realistic, evolution-informed, clinical approach is proposed that focuses on risk communication in place of risk assessment.
All normally sapient humans carry a vanishingly small daily risk of taking their own lives but are very well adapted to avoiding that outcome. Almost all of us nearly always find other solutions to the stresses of living.
Risk assessment in clinical practice is often characterised as a process of analysing information so as to make a judgement about the likelihood of harmful behaviour occurring in the future. However, this characterisation is brought into question when the evidence does not support the current use of risk assessment approaches to predict, or provide probability estimates of, future behaviour in a way that is usable in single instances arising in individual cases. This article sets out a broader and more clinically applicable description of risk assessment which takes account of the wider influences on how this clinical activity takes place. In so doing, it provides a framework to guide clinicians, researchers and authors of practice guidance who are interested in improving approaches to risk assessment.