This article examines three Japan–South Korea postwar compensation cases: the comfort women issue, the Sakhalin Island forced labor issue, and the Korean atomic bomb survivor issue. These compensation movements produced vastly different results, even though the basic policy directions for compensation provision in all three cases were similar. Japan's approach toward the comfort women problem has been a complete failure, while its treatment of the Sakhalin forced labor and atomic bomb issues has been more successful. This article's explanation of the different outcomes focuses on the character and geographical base of the civic groups leading these compensation movements. In South Korea, women's rights activists spearheaded the comfort women compensation movement and related victim-relief activities. The Korean non-governmental organizations (NGOs) that assisted the comfort women treated this problem not only as a women's rights issue, but also as a nationalist issue. In contrast, the Red Cross, a politically neutral international organization, promoted the Sakhalin forced labor and atomic bomb issues. In short, the different receptions accorded to those championing the comfort women issue and those promoting the Sakhalin forced labor and atomic bomb issues depended on the principal agent of each compensation process. This article aims to draw implications for successfully implementing postwar compensation policies. It suggests that, if successful postwar compensation policy depends on successful perpetrator–victim reconciliation, establishing solidarity between perpetrator and victim countries’ civic groups is important. This can only be facilitated through the depoliticized and transparent operation of leading NGOs both inside and outside the redress-seeking nation.
Identifying more homogeneous subtypes of patients with obsessive–compulsive disorder (OCD) using biological evidence is critical for understanding the complexities of the disorder in this heterogeneous population. Age of onset serves as a useful subtyping scheme that divides OCD into two subgroups in a way that aligns with neurodevelopmental perspectives. The neurobiological markers underlying these distinct neurodevelopmental differences can be identified by investigating gyrification changes, establishing homogeneous subtypes grounded in biological evidence.
We compared whole-brain cortical gyrification in 84 patients with early-onset OCD, 84 patients with late-onset OCD, and 152 healthy controls (HCs) to identify potential markers for early neurodevelopmental deficits using the local gyrification index (lGI). Then, the relationships between lGI in clusters showing significant differences and performance in visuospatial memory and verbal fluency, which are considered trait-related neurocognitive impairments in OCD, were further examined in early-onset OCD patients.
The early-onset OCD patients exhibited significantly greater gyrification than late-onset OCD patients and HCs in frontoparietal and cingulate regions, including the bilateral precentral, postcentral, precuneus, paracentral, posterior cingulate, superior frontal, and caudal anterior cingulate gyri. Moreover, impaired neurocognitive functions in early-onset OCD patients were correlated with increased gyrification.
Our findings provide a neurobiological marker to distinguish the OCD population into more neurodevelopmentally homogeneous subtypes, which may contribute to understanding the neurodevelopmental underpinnings of the etiology of early-onset OCD, consistent with the accumulated phenotypic evidence of greater neurodevelopmental deficits in early-onset than in late-onset OCD.
The COVID-19 pandemic poses a major threat to mental health and is associated with an increased risk of suicide. An understanding of suicidal behaviours during the pandemic is necessary for establishing policies to prevent suicides in such social conditions.
We aimed to investigate vulnerable individuals and the characteristics of changes in suicidal behaviour during the COVID-19 pandemic.
We retrospectively reviewed the medical records of patients with suicide attempts who visited the emergency department from February 2019 to January 2021. We analysed the demographic and clinical characteristics, risk factors and rescue factors of patients, and compared the findings between the pre-pandemic and pandemic periods.
In total, 519 patients were included. During the pre-pandemic and pandemic periods, 303 and 270 patients visited the emergency department after a suicide attempt, respectively. The proportion of suicide attempts by women (60.1% v. 69.3%, P = 0.035) and patients with a previous psychiatric illness (63.4% v. 72.9%, P = 0.006) increased during the COVID-19 pandemic. In addition, patients’ rescue scores during the pandemic were lower than those during the pre-pandemic period (12 (interquartile range: 11–13) v. 13 (interquartile range: 12–14), P < 0.001).
Women and people with previous psychiatric illnesses were more vulnerable to suicide attempts during the COVID-19 pandemic. Suicide prevention policies, such as continuous monitoring and staying in touch with vulnerable individuals, are necessary to cope with suicide risk.
The risk factors of environmental contamination by SARS-CoV-2 are largely unknown. We analyzed 1,320 environmental samples obtained from COVID-19 patients over 1 year. The risk factors for contamination of COVID-19 patients’ surrounding environment were higher viral load in the respiratory tract and shorter duration from symptom onset to sample collection.
Two aphid-transmitted RNA viruses, broad bean wilt virus 2 (BBWV2) and cucumber mosaic virus (CMV), are the most prevalent viruses in Korean pepper fields and cause chronic damage in pepper production. In this study, we employed a screening system for pathotype-specific resistance of pepper germplasm to BBWV2 and CMV by utilizing infectious cDNA clones of different pathotypes of the viruses (two BBWV2 strains and three CMV strains). We first examined pathogenic characteristics of the BBWV2 and CMV strains in various plant species and their phylogenetic positions in the virus population structures. We then screened 34 commercial pepper cultivars and seven accessions for resistance. While 21 pepper cultivars were resistant to CMV Fny strain, only two cultivars were resistant to CMV P1 strain. We also found only one cultivar partially resistant to BBWV2 RP1 strain. However, all tested commercial pepper cultivars were susceptible to the resistance-breaking CMV strain GTN (CMV-GTN) and BBWV2 severe strain PAP1 (BBWV2-PAP1), suggesting that breeding new cultivars resistant to these virus strains is necessary. Fortunately, we identified several pepper accessions that were resistant or partially resistant to CMV-GTN and one symptomless accession despite systemic infection with BBWV2-PAP1. These genetic resources will be useful in pepper breeding programs to deploy resistance to BBWV2 and CMV.
Prognostic heterogeneity in early psychosis patients yields significant difficulties in determining the degree and duration of early intervention; this heterogeneity highlights the need for prognostic biomarkers. Although mismatch negativity (MMN) has been widely studied across early phases of psychotic disorders, its potential as a common prognostic biomarker in early periods, such as clinical high risk (CHR) for psychosis and first-episode psychosis (FEP), has not been fully studied.
A total of 104 FEP patients, 102 CHR individuals, and 107 healthy controls (HCs) participated in baseline MMN recording. Clinical outcomes were assessed: 17 FEP patients were treatment resistant, 73 FEP patients were nonresistant, 56 CHR individuals were nonremitters (15 transitioned to a psychotic disorder), and 22 CHR subjects were remitters. Baseline MMN amplitudes were compared across clinical outcome groups and tested for utility as prognostic biomarkers using binary logistic regression.
MMN amplitudes were greatest in HCs, intermediate in CHR subjects, and smallest in FEP patients. In the clinical outcome groups, MMN amplitudes were reduced from the baseline in both FEP and CHR patients with poor prognostic trajectories. Reduced baseline MMN amplitudes were a significant predictor of later treatment resistance in FEP patients [Exp(β) = 2.100, 95% confidence interval (CI) 1.104–3.993, p = 0.024] and nonremission in CHR individuals [Exp(β) = 1.898, 95% CI 1.065–3.374, p = 0.030].
These findings suggest that MMN could be used as a common prognostic biomarker across early psychosis periods, which will aid clinical decisions for early intervention.
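As a sanity check on effect sizes like those above, the Wald statistics of a binary logistic regression can be recovered from a published odds ratio and its 95% confidence interval. A minimal Python sketch, assuming a standard Wald interval on the log-odds scale (the input numbers are the abstract's reported values for treatment resistance in FEP):

```python
import math

def wald_from_or(odds_ratio, ci_low, ci_high):
    """Recover beta, its standard error, and a two-sided Wald p-value
    from a reported odds ratio and its 95% confidence interval."""
    z_crit = 1.959964  # standard normal quantile for a 95% CI
    beta = math.log(odds_ratio)
    se = (math.log(ci_high) - math.log(ci_low)) / (2 * z_crit)
    z = abs(beta / se)
    # two-sided p-value from the standard normal CDF via math.erf
    p = 2 * (1 - 0.5 * (1 + math.erf(z / math.sqrt(2))))
    return beta, se, p

# Reported: Exp(beta) = 2.100, 95% CI 1.104-3.993
beta, se, p = wald_from_or(2.100, 1.104, 3.993)
# p comes out close to the reported 0.024
```

This reconstruction is only valid when the interval was itself Wald-based, but it is a quick way to verify that a reported odds ratio, CI, and p-value are mutually consistent.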
Several studies have supported the usefulness of “the surprise question” for predicting 1-year mortality. “The surprise question” requires a “Yes” or “No” answer to the question “Would I be surprised if this patient died in [specific time frame]?” However, the 1-year time frame is often too long for advanced cancer patients seen by palliative care personnel. “The surprise question” with shorter time frames is needed for decision making. We examined the accuracy of “the surprise question” for 7-day, 21-day, and 42-day survival in hospitalized patients admitted to palliative care units (PCUs).
This was a prospective multicenter cohort study of 130 adult patients with advanced cancer admitted to 7 hospital-based PCUs in South Korea. The accuracy of “the surprise question” was compared with that of the temporal question for clinician's prediction of survival.
We analyzed 130 inpatients who died in PCUs during the study period. The median survival was 21.0 days. The sensitivity, specificity, and overall accuracy of the 7-day “surprise question” were 46.7%, 88.7%, and 83.9%, respectively. The sensitivity, specificity, and overall accuracy of the 7-day temporal question were 6.7%, 98.3%, and 87.7%, respectively. The c-indices of the 7-day “surprise question” and 7-day temporal question were 0.662 (95% CI: 0.539–0.785) and 0.521 (95% CI: 0.464–0.579), respectively. The c-indices of the 42-day “surprise question” and 42-day temporal question were 0.554 (95% CI: 0.509–0.599) and 0.616 (95% CI: 0.569–0.663), respectively.
Significance of results
Surprisingly, the “surprise questions” and temporal questions had similar accuracies. The high specificities of the 7-day “surprise question” and the 7- and 21-day temporal questions suggest they may be useful to rule in death if positive.
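The accuracy figures above follow directly from a 2×2 table of clinician predictions against observed 7-day deaths. A minimal sketch using hypothetical counts: the abstract does not report the underlying table, and the counts below are one reconstruction consistent with the reported percentages and the sample of 130 patients:

```python
def diagnostic_metrics(tp, fn, tn, fp):
    """Sensitivity, specificity, and overall accuracy from a 2x2 table."""
    sensitivity = tp / (tp + fn)
    specificity = tn / (tn + fp)
    accuracy = (tp + tn) / (tp + fn + tn + fp)
    return sensitivity, specificity, accuracy

# Hypothetical reconstruction for the 7-day "surprise question" (n = 130):
# 15 patients died within 7 days, 7 of whom the clinician flagged (tp);
# 115 survived past 7 days, 102 of whom were not flagged (tn).
sens, spec, acc = diagnostic_metrics(tp=7, fn=8, tn=102, fp=13)
# reproduces the reported 46.7% sensitivity, 88.7% specificity, ~83.9% accuracy
```

The same arithmetic explains the rule-in observation: a high specificity means few false positives, so a positive answer strongly raises the probability of death within the time frame.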
Researchers argue that social investment policies contribute not only to equal opportunity and human capital development, but also to the sustainability of welfare states. In that respect, these policies are regarded as the new vanguard of the welfare state (Morel et al., 2012). Yet, in the West, many criticise the role of social investment policies, as they tend to place too much focus on the (re)commodification of labour and are unable to cope with increasing inequality. In fact, scholars suspect social investment policies create a Matthew effect (Bonoli et al., 2017). However, many commentators note that East Asian welfare regimes do not need social investment policies to enhance human capital, as these countries are well known for highly commodified labour and high rankings in the Programme for International Student Assessment (PISA).
However, these commentators seem to largely neglect the social outcomes of education policies in East Asian countries. Behind the scenes of their remarkable educational achievements, these countries seem to suffer from decreasing social mobility. For example, in South Korea (hereafter Korea), once praised for its active upward social mobility, the media has frequently referred to the country's increasing social inequality and reduced social mobility using the terms ‘gold spoon’ and ‘dirt spoon’. Despite this decreasing social mobility, overall education expenditure in Korea stands at 8 per cent of GDP, and public expenditure increased from 3 per cent in 2000 to more than 5 per cent in 2015 (World Bank, 2018). This suggests that education policy and expenditure have not been able to reverse labour market dualisation and have failed to secure an equitable outcome. Therefore, it is still important to look at education policy from the perspective of social investment.
This chapter aims to explore the role of education and social investment, with special attention on the effects of shadow education on social mobility in Korea. There has been much social and political discussion about social mobility, but few empirical studies have been conducted. This study analyses how family background and shadow education influence educational attainment and, subsequently, how educational attainment affects incomes, using data from the Korea Education and Employment Panel (KEEP).
Over the past two decades, early detection and early intervention in psychosis have become essential goals of psychiatry. However, clinical impressions are insufficient for predicting psychosis outcomes in clinical high-risk (CHR) individuals; a more rigorous and objective model is needed. This study aims to develop and internally validate a model for predicting the transition to psychosis within 10 years.
Two hundred and eight help-seeking individuals who fulfilled the CHR criteria were enrolled from the prospective, naturalistic cohort program for CHR at the Seoul Youth Clinic (SYC). The least absolute shrinkage and selection operator (LASSO)-penalized Cox regression was used to develop a predictive model for a psychotic transition. We performed k-means clustering and survival analysis to stratify the risk of psychosis.
The predictive model, which includes clinical and cognitive variables, identified the following six baseline variables as important predictors: 1-year percentage decrease in the Global Assessment of Functioning score, IQ, California Verbal Learning Test score, Strange Stories test score, and scores in two domains of the Social Functioning Scale. The predictive model showed a cross-validated Harrell's C-index of 0.78 and identified three subclusters with significantly different risk levels.
Overall, our predictive model showed good predictive ability and could facilitate a personalized therapeutic approach tailored to the different risk levels of high-risk individuals.
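Harrell's C-index, the discrimination measure reported above (0.78 after cross-validation), is the proportion of usable subject pairs in which the model assigns the higher risk score to the subject whose transition occurs earlier; pairs whose ordering is hidden by censoring are excluded. A pure-Python sketch on toy data (illustrative only, not the study's data):

```python
def harrell_c(times, events, risks):
    """Harrell's concordance index for right-censored survival data.
    times: follow-up times; events: 1 if the transition was observed,
    0 if censored; risks: model scores (higher = earlier expected event)."""
    concordant, tied, comparable = 0, 0, 0
    n = len(times)
    for i in range(n):
        for j in range(n):
            # a pair is usable only when subject i has an observed event
            # strictly before subject j's follow-up time
            if events[i] == 1 and times[i] < times[j]:
                comparable += 1
                if risks[i] > risks[j]:
                    concordant += 1
                elif risks[i] == risks[j]:
                    tied += 1  # ties in risk score count half
    return (concordant + 0.5 * tied) / comparable

# Toy data: follow-up years, transition flag (0 = censored), risk score
times = [1.0, 2.5, 4.0, 6.0, 10.0]
events = [1, 1, 0, 1, 0]
risks = [0.9, 0.4, 0.7, 0.5, 0.2]
c = harrell_c(times, events, risks)  # 0.75 on this toy data
```

A value of 0.5 corresponds to random ordering and 1.0 to perfect ordering, so the reported 0.78 indicates moderately strong discrimination between earlier and later transitions.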
Background: After the Middle East respiratory syndrome coronavirus outbreak in Korea in 2015, the government newly established an additional reimbursement for infection prevention to encourage infection control activities in hospitals. The new policy was announced in December 2015 and implemented in September 2016. We evaluated how infection control activities improved in Korean hospitals after this policy change.
Methods: Three cross-sectional surveys using the WHO Hand Hygiene Self-Assessment Framework (HHSAF) were conducted in 2013, 2015, and 2017. Using a multivariable linear regression model including hospital characteristics, we analyzed the changes in total HHSAF scores according to survey time.
Results: In total, 32 hospitals participated in the survey in 2013, 52 in 2015, and 101 in 2017. The number of inpatient beds per infection control professional decreased from 324 in 2013 to 303 in 2015 and 179 in 2017. Most hospitals were at intermediate or advanced levels of progress (90.6% in 2013, 86.6% in 2015, and 94.1% in 2017). In the multivariable linear regression model, total HHSAF scores were significantly associated with hospital teaching status (β coefficient of major teaching hospital, 52.6; 95% CI, 8.9–96.4; P = .018), bed size (β coefficient of 100-bed increase, 5.1; 95% CI, 0.3–9.8; P = .038), and survey time (β coefficient of 2017 survey, 45.1; 95% CI, 19.3–70.9; P = .001).
Conclusions: After the national policy implementation, the number of infection control professionals increased, and the promotion of hand hygiene activities was strengthened in Korean hospitals.
Obsession and delusion are theoretically distinct from each other in terms of reality testing. Despite such phenomenological distinction, no extant studies have examined the identification of common and distinct neural correlates of obsession and delusion by employing biologically grounded methods. Here, we investigated dimensional effects of obsession and delusion spanning across the traditional diagnostic boundaries reflected upon the resting-state functional connectivity (RSFC) using connectome-wide association studies (CWAS).
Our study sample comprised 96 patients with obsessive–compulsive disorder, 75 patients with schizophrenia, and 65 healthy controls. A connectome-wide analysis was conducted to examine the relationship between obsession and delusion severity and RSFC using multivariate distance-based matrix regression.
Obsession was associated with the supplementary motor area, precentral gyrus, and superior parietal lobule, while delusion was associated with the precuneus. Follow-up seed-based RSFC and modularity analyses revealed that obsession was related to aberrant inter-network connectivity strength. Additional inter-network analyses demonstrated the association between obsession severity and inter-network connectivity between the frontoparietal control network and the dorsal attention network.
Our CWAS study based on the Research Domain Criteria (RDoC) provides novel evidence for the circuit-level functional dysconnectivity associated with obsession and delusion severity across diagnostic boundaries. Further refinement and accumulation of biomarkers from studies embedded within the RDoC framework would provide useful information in treating individuals who have some obsession or delusion symptoms but cannot be identified by the category of clinical symptoms alone.
For decades, fructose intake has been recognised as an environmental risk factor for metabolic syndromes and diseases. Here we comprehensively examined the effects of fructose intake on the mouse liver transcriptome. Fructose-supplemented water (34 %; w/v) was fed ad libitum to both male and female C57BL/6N mice for 6 weeks, followed by hepatic transcriptomics analysis. Based on our criteria, differentially expressed genes (DEG) were selected and subjected to further computational analyses to predict key pathways and upstream regulator(s). Subsequently, predicted genes and pathways from the transcriptomics dataset were validated via quantitative RT-PCR analyses. As a result, we identified eighty-nine down-regulated and eighty-eight up-regulated mRNAs in fructose-fed mouse livers. These DEG were analysed with bioinformatics tools and were mainly enriched in xenobiotic metabolic processes; further, Ingenuity Pathway Analysis suggested that the aryl hydrocarbon receptor (AhR) is an upstream regulator governing the overall changes and that fructose suppresses the AhR signalling pathway. In our quantitative RT-PCR validation, we confirmed that fructose suppressed AhR signalling by modulating the expression of a transcription factor (AhR nuclear translocator; Arnt) and upstream regulators (Ncor2 and Rb1). Altogether, we demonstrated that ad libitum fructose intake suppresses the canonical AhR signalling pathway in C57BL/6N mouse liver. Based on our current observations, further studies are warranted, especially on the effects of co-exposure to fructose with (1) other types of carcinogens and (2) inflammation-inducing agents (or diets such as a high-fat diet), to clarify the implications of fructose-induced AhR suppression.
Refugees commonly experience difficulties with emotional processing, such as alexithymia, due to stressful or traumatic experiences. However, the functional connectivity of the amygdala, which is central to emotional processing, has yet to be assessed in refugees. Thus, the present study investigated the resting-state functional connectivity of the amygdala and its association with emotional processing in North Korean (NK) refugees.
This study included 45 NK refugees and 40 native South Koreans (SK). All participants were administered the Toronto Alexithymia Scale (TAS), Beck Depression Inventory (BDI), and Clinician-administered PTSD Scale (CAPS), and differences between NK refugees and native SK in terms of resting-state functional connectivity of the amygdala were assessed. Additionally, the association between the strength of amygdala connectivity and the TAS score was examined.
Resting-state connectivity values from the left amygdala to the bilateral dorsolateral prefrontal cortex (dlPFC) and dorsal anterior cingulate cortex (dACC) were higher in NK refugees than in native SK. Additionally, the strength of connectivity between the left amygdala and right dlPFC was positively associated with TAS score after controlling for the number of traumatic experiences and BDI and CAPS scores.
The present study found that NK refugees exhibited heightened frontal–amygdala connectivity, and that this connectivity was correlated with alexithymia. The present results suggest that increased frontal–amygdala connectivity in refugees may represent frontal down-regulation of the amygdala, which in turn may produce alexithymia.
Residual stress is generally evaluated using indentation by comparing the indentation curves of stressed and stress-free states. Here, we suggest a new method that can evaluate surface residual stress without indentation testing on a stress-free specimen, using stress-independent indentation parameters and an analysis of indentation contact morphology for the stress-free state. Through Vickers indentation testing under various stress states, we found that several indentation parameters are independent of stress. The indentation contact morphology can be represented by indentation parameters, including the stress-independent ones, and by applying the stress-independent parameters obtained from the stressed state to the indentation contact depth function, we can estimate an indentation curve for the stress-free state. The estimated curve matches the experimental stress-free indentation curve well, and the applied stress values evaluated by comparing the estimated curve with the stressed indentation curve agree well with reference values obtained from strain gauges.
This paper aims to test two types of legislative shirking in a new democracy, South Korea. Using the lame-duck sessions of the Korean National Assembly, we test whether a legislator shirks in voting participation and in voting decisions. We weave two competing motivations of legislative shirking in voting participation – that to secure more leisure time and that to utilize the last, valuable voting opportunity – into a synthetic hypothesis and test it with two-part hurdle models. To test a shirking in voting participation hypothesis, we analyze legislators’ choices on bills that are supposedly related to the interests of constituents or political parties. Empirical results strongly support our shirking in voting participation claims, while only partial evidence is found on shirking in voting decisions. The findings suggest that, besides the trade-off between labor and leisure, some legislators deem the lame-duck sessions an opportunity to express their own preferences unconstrained.
We examined whether hypotension in very low birth weight infants aged ⩽1 week was associated with hospital morbidities and overall mortality. Further, we studied whether hypotension was associated with poor neurodevelopmental outcomes in these patients at the corrected age of 18 months. A total of 166 very low birth weight infants were studied during this period. Hospital outcomes and neurodevelopmental outcomes at the corrected age of 18 months were evaluated. Among the 166 very low birth weight infants, 95 (57.2%) experienced hypotension at ⩽1 week of age, which was associated with an increased incidence of morbidities and mortality. At the corrected age of 18 months, the hypotension ⩽1 week group had significantly lower scores in all three composites – cognitive, language, and motor – of the Bayley Scales of Infant and Toddler Development, Third Edition (Bayley-III) screening tests. In addition, a multivariable logistic regression analysis showed that longer mechanical ventilation and periventricular leukomalacia were additionally associated with worse cognitive and language neurodevelopmental outcomes. Hypotension in very low birth weight infants within 1 week of life was associated with increased morbidities and overall mortality. It was also associated with an increased risk of poor cognitive and language outcomes.