Prognostication and disposition among older emergency department (ED) patients with suspected infection remain challenging. Frailty is increasingly recognized as a predictor of poor prognosis among critically ill patients; however, its association with clinical outcomes among older ED patients with suspected infection is unknown.
We conducted a multicenter prospective cohort study at two tertiary care EDs. We included older ED patients (≥75 years) with suspected infection. Frailty at baseline (before the index illness) was explicitly measured for all patients by the treating physicians using the Clinical Frailty Scale (CFS). We defined frailty as a CFS score of 5–8. The primary outcome was 30-day mortality. We used multivariable logistic regression to adjust for known confounders. We also compared the prognostic accuracy of frailty with the Systemic Inflammatory Response Syndrome (SIRS) and Quick Sequential Organ Failure Assessment (qSOFA) criteria.
We enrolled 203 patients, of whom 117 (57.6%) were frail. Frail patients were more likely to develop septic shock (adjusted odds ratio [aOR], 1.83; 95% confidence interval [CI], 1.08–2.51) and more likely to die within 30 days of ED presentation (aOR, 2.05; 95% CI, 1.02–5.24). Sensitivity for mortality was highest for the CFS (73.1%; 95% CI, 52.2–88.4), compared with SIRS ≥ 2 (65.4%; 95% CI, 44.3–82.8) or qSOFA ≥ 2 (38.4%; 95% CI, 20.2–59.4).
Frailty is a highly prevalent prognostic factor that can be used to risk-stratify older ED patients with suspected infection. ED clinicians should consider screening for frailty to optimize disposition in this population.
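The sensitivities reported above are simple proportions of deaths correctly flagged by each screening criterion. A minimal sketch of the calculation, using hypothetical counts chosen only for illustration (not the study's raw data):

```python
def sensitivity(true_positives, false_negatives):
    """Sensitivity = TP / (TP + FN): the fraction of 30-day deaths that
    the screening criterion flagged as high risk."""
    return true_positives / (true_positives + false_negatives)

# Hypothetical counts for illustration only: 19 of 26 deaths flagged.
print(round(sensitivity(19, 7) * 100, 1))  # 73.1
```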
Introduction: There is ongoing concern about the burden placed on healthcare systems by lab tests. Although these concerns are widespread, it is difficult to quantify the extent of the problem. One approach involves use of a metric known as the Mean Abnormal Response Rate (MARR), which is the proportion of tests ordered that return an abnormal result; a higher MARR value indicates higher yield. The primary objective of this study was to calculate MARRs for tests ordered between April 2014 and March 2019 at the four adult emergency departments (EDs) covering a metropolitan population of 1.3 million. Secondary objectives included identifying tests with highest and lowest MARRs; comparison of MARRs for nurse- and physician-initiated orders; correlation of the number of tests per order requisition to MARR; and correlation of physician experience to MARR. Methods: In total, 40 laboratory tests met inclusion criteria for this study. Administrative data on these tests as ordered at the four EDs were obtained and analyzed. Multi-component test results, such as from CBC, were consolidated such that an abnormal result for any component was coded as an abnormal result for the entire test. Repeat tests ordered within a single patient visit were excluded. Physician experience was quantified for 209 ED physicians as number of years since licensure. Analyses were descriptive where appropriate for whole-population data. Risk of bias was attenuated by the focus on administrative data. Results: The population dataset comprised 33,757,004 test results on 415,665 unique patients. Of these results, 30.3% were the outcomes of nurse-initiated orders. The 5-year MARRs for the four hospitals were 38.3%, 40.0%, 40.7% and 40.9%. The highest per-test MARRs were for BNP (80.5%) and CBC (62.6%), while the lowest were for glucose (7.9%) and sodium (11.6%). MARRs were higher for nurse-initiated orders than for physician-initiated orders (44.7% vs. 
38.1%), likely due to the greater order frequency of high-yield CBC in nurse-initiated orders (38.6% vs. 18.1%). The number of tests per order requisition was inversely associated with MARR (r = -0.90, p < 0.001). Finally, the number of years since licensure was modestly but significantly associated with MARR (r = 0.28, p < 0.001). Conclusion: This is the first and largest study to apply the MARR in an ED setting. As a metric, MARR effectively identifies differences in test ordering practices on per-test and per-hospital bases, which could be useful for data-informed practice optimization.
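The MARR calculation described above, including the consolidation of multi-component tests such as CBC and the handling of repeat orders within a visit, can be sketched as follows. The record layout is hypothetical, and repeats are collapsed into a single order here as a simplification of the study's exclusion of repeats:

```python
from collections import defaultdict

def mean_abnormal_response_rate(results):
    """Compute MARR: the proportion of ordered tests that return an
    abnormal result.

    `results` is an iterable of (visit_id, test_name, component_abnormal)
    tuples. Multi-component tests (e.g. CBC) are consolidated: an abnormal
    result for any component marks the whole test abnormal. Repeats of a
    test within one visit are collapsed into a single order.
    """
    consolidated = defaultdict(bool)  # (visit_id, test_name) -> any abnormal?
    for visit_id, test_name, component_abnormal in results:
        consolidated[(visit_id, test_name)] |= component_abnormal
    if not consolidated:
        return 0.0
    return sum(consolidated.values()) / len(consolidated)

# Hypothetical toy data: visit 1's CBC has one abnormal component.
records = [
    (1, "CBC", False), (1, "CBC", True),   # consolidated -> abnormal
    (1, "glucose", False),
    (2, "CBC", False),
    (2, "sodium", False),
]
print(mean_abnormal_response_rate(records))  # 1 abnormal of 4 tests -> 0.25
```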
Introduction: Several recent observational studies have presented concerning data regarding the safety of cardioversion (CV) for acute atrial fibrillation and flutter (AAFF). We conducted this systematic review to determine whether it is safe to cardiovert AAFF patients without prescribing oral anticoagulation (OAC) post-CV for those who are CHADS-65 negative. Methods: We conducted a librarian-assisted search of MEDLINE, Embase, and Cochrane from inception through November 23, 2019. We included observational studies and randomized trials reporting thromboembolic (TE) events (i.e., stroke, transient ischemic attack, or systemic thromboembolism) within 30 days following CV in patients with AAFF, where onset of symptoms was <48 hours. Two reviewers independently screened studies and extracted data. The main outcome was the risk of TE events within 30 days post-CV, stratified by OAC use. Risk of bias was assessed with the Quality in Prognostic Studies (QUIPS) tool. The primary analysis was based on prospective studies and the secondary analysis on retrospective studies. We performed meta-analyses for TE events where 2 or more studies were available, applying the DerSimonian-Laird random-effects model. We implemented analyses stratified by study design using Open MetaAnalyst and generated the forest plots. Results: Our search yielded 969 titles; 74 were selected for full-text review and 20 studies were included in the review. The primary meta-analysis of 6 prospective studies, including two randomized trials, found a TE event rate of 0.15% (2 TE events/1,314 CVs). Within this prospective group, OAC use was not associated with a decreased risk of TE events (RR = 2.15, where RR > 1 indicates increased risk of TE events with OAC compared to no OAC; 95% CI 0.50 to 9.31; I² = 0%). Five of the 6 prospective studies had a low or moderate risk of bias in all QUIPS domains.
Secondary meta-analysis of 6 retrospective studies revealed a TE event rate of 0.53% (56 TE events/10,521 CVs). This subgroup showed a trend favouring OAC use, with a decreased risk of TE events (RR = 0.34, where RR < 1 suggests decreased risk of TE events with OAC; 95% CI 0.17 to 0.72; I² = 0%). Conclusion: In the primary analysis of prospective studies, we found a low TE event rate following CV of AAFF, irrespective of OAC use. This contradicts previous analyses of retrospective studies. Our study supports the longstanding practice of not necessarily prescribing OAC post-CV in the ED for AAFF patients who are CHADS-65 negative.
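The DerSimonian-Laird random-effects pooling used in these meta-analyses can be sketched as below. The per-study log risk ratios and variances are hypothetical values for illustration, not the review's data:

```python
import math

def dersimonian_laird(log_effects, variances):
    """Pool per-study log effect sizes (e.g. log risk ratios) with the
    DerSimonian-Laird random-effects model.
    Returns (pooled_log_effect, tau_squared)."""
    w = [1.0 / v for v in variances]          # inverse-variance (fixed) weights
    fixed = sum(wi * y for wi, y in zip(w, log_effects)) / sum(w)
    # Cochran's Q measures between-study heterogeneity.
    q = sum(wi * (y - fixed) ** 2 for wi, y in zip(w, log_effects))
    df = len(log_effects) - 1
    c = sum(w) - sum(wi ** 2 for wi in w) / sum(w)
    tau2 = max(0.0, (q - df) / c)             # between-study variance estimate
    w_star = [1.0 / (v + tau2) for v in variances]
    pooled = sum(wi * y for wi, y in zip(w_star, log_effects)) / sum(w_star)
    return pooled, tau2

# Hypothetical log risk ratios and variances for three studies.
log_rr = [math.log(0.5), math.log(0.4), math.log(0.8)]
var = [0.25, 0.4, 0.3]
pooled, tau2 = dersimonian_laird(log_rr, var)
print(math.exp(pooled))  # pooled RR back on the ratio scale
```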
To explore the clinical characteristics, assessment, biological and psychosocial correlates, and treatment of pediatric bipolar disorder (BD) in China.
All the studies published during the past 20 years on pediatric bipolar disorder in China were reviewed.
There is a lack of a unified diagnostic system in China. A series of genetic studies showed the familial aggregation and genetic predisposition of BD. Findings on the core symptoms of the disorder are consistent. BD is characterized by comorbidity with other disorders such as ADHD and OCD. Mood stabilizers and the combined use of antipsychotics and TCAs remain the main choices of psychiatrists for treating pediatric patients with BD. The effectiveness of specific psychotherapies needs further study.
A unified diagnostic system and criteria for BD across age groups are crucial for further work. Combining various treatments, such as mood stabilizers, AC, TCA and traditional Chinese medicine, is effective for these patients. More studies, especially randomized controlled trials, should be conducted to explore the etiology, pharmacotherapy and psychotherapy of this disease.
Despite strong evidence that the pathophysiology of tic disorders (TD) involves structural and functional disturbances of the basal ganglia, inconsistent findings from several TD imaging studies have supported contradictory conclusions.
To identify brain structural differences between children with TD and healthy children, and to test the hypothesis that the basal ganglia play an important role in this disorder.
Right-handed children with first-episode TD were recruited. The Yale Global Tic Severity Scale (YGTSS) was used to assess tic severity. MRI scans were performed on the TD children and controls. The volumes of the caudate nucleus, putamen and globus pallidus, and total intracranial volume, were measured on high-resolution MR images. We compared volumes, relative volumes and the asymmetry index (AI) between groups.
In total, 11 patients finished this study, two of whom were excluded for images blurred by tics, and 18 subjects (9 TD patients and 9 controls) were finally analyzed. The right globus pallidus was significantly larger in TD patients. The volume of the left caudate was significantly increased in both TD patients and controls. There was no significant difference in the asymmetry index between the two groups, and relative volumes did not correlate significantly with tic severity or disease course.
The right globus pallidus may be the site of the primary pathological change in TD. Asymmetry indexes did not differ significantly between the two groups. Relative volumes of the basal ganglia structures showed no significant correlation with tic severity or disease course.
The main aim of this study was to investigate the capacity of variables from four domains (clinical, psychosocial, cognitive and genetic) to predict antidepressant treatment outcome, and to combine the predictors in one integrated regression model to investigate which predictor contributed most.
In a semi-naturalistic prospective cohort study with a total of 241 fully assessed MDD patients, the decrease in HAM-D scores from baseline to after 6 weeks of treatment was used to measure antidepressant treatment outcome.
The clinical and psychosocial model (R² = 0.451) showed that the baseline HAM-D score and the MMPI-2 paranoia scale were the best clinical and psychosocial predictors of treatment outcome, respectively. The cognitive model (R² = 0.502) revealed that the combination of better performance on the TMT-B test and worse performance on the TOH and WAIS-R Digit Backward tests predicted the decline in HAM-D scores. The genetic analysis found only that the median percent improvement in HAM-D scores was significantly lower in G-allele carriers of the GR gene BclI polymorphism (72.2%) than in non-G-allele carriers (80.1%). The integrated model showed that three predictors in combination, baseline HAM-D score, the MMPI-2 paranoia scale and the TMT-B test, explained 57.1% of the variance.
Three markers, baseline HAM-D score, the MMPI-2 paranoia scale and the TMT-B test, might serve as predictors of antidepressant outcome in daily psychiatric practice.
A long duration of untreated psychosis (DUP) has been associated with brain morphological changes in schizophrenia in cross-sectional analyses. It is unclear whether DUP relates to brain volume change over time.
Our aim was to analyse the association between the length of DUP and total brain volume change in schizophrenia in a general population-based sample.
All members of the Northern Finland 1966 Birth Cohort (NFBC1966) known to have had a psychotic illness were invited to a field study at the age of 34 years (on average 10 years after the onset of psychosis) and to a follow-up nine years later at the age of 43 years. DUP was assessed from medical records. Total brain volume change over the scan interval and DUP information were available for 32 subjects with DSM-III-R schizophrenia. We analysed the correlation between the length of DUP and mean annual whole-brain volume reduction, adjusted for age of illness onset and sex.
The mean annual whole-brain volume reduction was 0.66%. The reduction was 0.76% among those with the shortest DUP, 0.58% among those with median DUP, and 0.63% among those with the longest DUP. There was no statistically significant correlation between DUP and annual brain volume change when adjusted for onset age and/or sex.
We did not find an association between long DUP and brain volume decrease in schizophrenia over a 9-year follow-up. Although long DUP has been associated with differences in brain volume in cross-sectional analyses, the long-term significance of DUP for brain morphology remains unclear.
Currently available antidepressants exhibit a low remission rate and a long response lag. Growing evidence has demonstrated that an acute sub-anesthetic dose of ketamine exerts rapid, robust, and lasting antidepressant effects. However, long-term use of ketamine tends to elicit adverse reactions. The present study aimed to investigate the antidepressant-like effects of intermittent and consecutive administrations of ketamine in chronic unpredictable mild stress (CUMS) rats, and to determine whether ketamine can compensate for the response lag of classic antidepressants. Behavioral responses were assessed by the sucrose preference test, forced swimming test, and open field test. In the first stage of experiments, all four ketamine treatment regimens (10 mg/kg ip, once daily for 3 or 7 consecutive days, or once every 7 or 3 days, over a total of 21 days) showed robust antidepressant-like effects, with no significant influence on locomotor activity or stereotyped behavior in the CUMS rats. The intermittent regimens produced longer-lasting antidepressant-like effects than the consecutive regimens, and administration every 7 days produced antidepressant-like effects similar to administration every 3 days with fewer administrations. In the second stage of experiments, the combination of ketamine (10 mg/kg ip, once every 7 days) and citalopram (20 mg/kg po, once daily) for 21 days produced more rapid and sustained antidepressant-like effects than citalopram administered alone. In summary, repeated sub-anesthetic doses of ketamine can compensate for the response lag of citalopram, suggesting that the combination of ketamine and a classic antidepressant is a promising regimen for depression, with a quick onset and stable, lasting effects.
Many institutions are attempting to implement patient-reported outcome (PRO) measures. Because PROs often change clinical workflows significantly for patients and providers, implementation choices can have major impact. While various implementation guides exist, a stepwise list of decision points covering the full implementation process and drawing explicitly on a sociotechnical conceptual framework does not exist.
To facilitate real-world implementation of PROs in electronic health records (EHRs) for use in clinical practice, members of the EHR Access to Seamless Integration of Patient-Reported Outcomes Measurement Information System (PROMIS) Consortium developed structured PRO implementation planning tools. Each institution pilot tested the tools. Joint meetings led to the identification of critical sociotechnical success factors.
Three tools were developed and tested: (1) a PRO Planning Guide summarizes the empirical knowledge and guidance about PRO implementation in routine clinical care; (2) a Decision Log allows decision tracking; and (3) an Implementation Plan Template simplifies creation of a sharable implementation plan. Seven lessons learned during implementation underscore the iterative nature of planning and the importance of the clinician champion, as well as the need to understand aims, manage implementation barriers, minimize disruption, provide ample discussion time, and continuously engage key stakeholders.
Highly structured planning tools, informed by a sociotechnical perspective, enabled the construction of clear, clinic-specific plans. By developing and testing three reusable tools (freely available for immediate use), our project addressed the need for consolidated guidance and created new materials for PRO implementation planning. We identified seven important lessons that, while common to technology implementation, are especially critical in PRO implementation.
Short-term peripheral venous catheter–related bloodstream infection (PVCR-BSI) rates have not been systematically studied in resource-limited countries, and data on their incidence by number of device days are not available.
A prospective surveillance study of PVCR-BSI was conducted from September 1, 2013, to May 31, 2019, in 727 intensive care units (ICUs) by members of the International Nosocomial Infection Control Consortium (INICC), from 268 hospitals in 141 cities of 42 countries in the Africa, Americas, Eastern Mediterranean, Europe, South East Asia, and Western Pacific regions. For this research, we applied the definitions and criteria of the CDC NHSN, the INICC methodology, and the INICC Surveillance Online System software.
We followed 149,609 ICU patients for 731,135 bed days and 743,508 short-term peripheral venous catheter (PVC) days. We identified 1,789 PVCR-BSIs, for an overall rate of 2.41 per 1,000 PVC days. Mortality was 6.67% in patients with a PVC but without PVCR-BSI and 18% in patients with a PVC and PVCR-BSI. The length of stay was 4.83 days in patients with a PVC but without PVCR-BSI and 9.85 days in patients with a PVC and PVCR-BSI. Among these infections, the microorganism profile showed 58% gram-negative bacteria: Escherichia coli (16%), Klebsiella spp (11%), Pseudomonas aeruginosa (6%), Enterobacter spp (4%), and others (20%), including Serratia marcescens. Staphylococcus aureus was the predominant gram-positive bacterium (12%).
PVCR-BSI rates in INICC ICUs were much higher than rates published from industrialized countries. Infection prevention programs must be implemented to reduce the incidence of PVCR-BSIs in resource-limited countries.
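The overall rate reported above follows directly from the published counts, expressed as infections per 1,000 device (PVC) days:

```python
# Reproducing the overall PVCR-BSI rate from the counts reported in the
# abstract: 1,789 infections over 743,508 PVC days.
infections = 1789
pvc_days = 743508

rate_per_1000 = infections / pvc_days * 1000
print(round(rate_per_1000, 2))  # 2.41, matching the reported overall rate
```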
As an important dimensionless parameter for the vortex formation process, the general form of the formation time defined by Dabiri (Annu. Rev. Fluid Mech., vol. 41, 2009, pp. 17–33) is refined so as to provide better normalization for various vortex generator configurations. Our proposed definition utilizes the total circulation over the entire flow domain rather than that of the forming vortex ring alone. It adopts an integral form by considering the instantaneous infinitesimal increment in the formation time so that the effect of temporally varying properties of the flow configuration can be accounted for properly. By including the effect of buoyancy, the specific form of the general formation time for the starting forced plumes with negative and positive buoyancy is derived. A theoretical prediction based on the Kelvin–Benjamin variational principle shows that the general formation time manifests the invariance of the critical time scale, i.e. the formation number, under the influence of a source–ambient density difference. It demonstrates that the general formation time, based on the circulation production over the entire flow field, could take into account the effect of various vorticity production mechanisms, such as from a flux term or in a baroclinic fluid, on the critical formation number. The proposed definition may, therefore, serve as a guideline for deriving the specific form of the formation time in other types of starting/pulsatile flows.
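For orientation, the classical piston-cylinder case illustrates how a circulation-based formation time can recover the familiar stroke ratio. Under the standard slug model (a textbook approximation, not this paper's derivation):

```latex
% Slug model for a piston of speed U_p and nozzle diameter D:
% the circulation flux at the exit plane is approximately
\frac{\mathrm{d}\Gamma}{\mathrm{d}t} \approx \frac{U_p^{2}}{2}
\qquad\Rightarrow\qquad
\Gamma(t) \approx \frac{U_p^{2}\, t}{2},
% so a formation time built from the accumulated circulation,
\hat{t} \;=\; \frac{2\,\Gamma(t)}{U_p D} \;\approx\; \frac{U_p t}{D} \;=\; \frac{L}{D},
% reduces to the classical stroke-to-diameter ratio for this configuration.
```

The generalized definition discussed above extends this idea to configurations where the circulation budget includes buoyancy-driven and baroclinic production, for which the simple slug estimate no longer applies.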
Multiple lines of evidence suggest the presence of altered neuroimmune processes in patients with schizophrenia (Sz) and severe mood disorders. Recent studies using a novel free water diffusion tensor imaging (FW DTI) approach, proposed as a putative biomarker of neuroinflammation, atrophy, or edema, have shown significantly increased FW in patients with Sz. However, no studies to date have investigated the longitudinal stability of FW alterations during the early course of psychosis, nor have studies focused separately on first-episode (FE) psychosis patients with Sz or bipolar disorder (BD) with psychotic features.
The current study included 188 participants who underwent diffusion magnetic resonance imaging scanning at baseline. Sixty-four participants underwent follow-up rescanning after 12 months. DTI-based alterations in patients were calculated using voxelwise tract-based spatial statistics and region of interest analyses.
Patients with FE psychosis, both Sz and BD, exhibited increased FW at illness onset which remained unchanged over the 12-month follow-up period. Preliminary analyses suggested that antipsychotic medication exposure was associated with higher FW in gray matter that reached significance in the BD group. Higher FW in white matter correlated with negative symptom severity.
Our results support the presence of elevated FW at the onset of psychosis in both Sz and BD, which remains stable during the early course of the illness, with no evidence of either progression or remission.
Identifying risk factors in individuals at clinical high risk (CHR) for psychosis is vital to prevention and early intervention efforts. Among prodromal abnormalities, cognitive functioning has shown intermediate levels of impairment in CHR relative to first-episode psychosis and healthy controls, highlighting its potential role as a risk factor for transition to psychosis and other negative clinical outcomes. The current study used the AX-CPT, a brief 15-minute computerized task, to determine whether cognitive control impairments in CHR at baseline could predict clinical status at 12-month follow-up.
Baseline AX-CPT data were obtained from 117 CHR individuals participating in two studies, the Early Detection, Intervention, and Prevention of Psychosis Program (EDIPPP) and the Understanding Early Psychosis Programs (EP) and used to predict clinical status at 12-month follow-up. At 12 months, 19 individuals converted to a first episode of psychosis (CHR-C), 52 remitted (CHR-R), and 46 had persistent sub-threshold symptoms (CHR-P). Binary logistic regression and multinomial logistic regression were used to test prediction models.
Baseline AX-CPT performance (d-prime context) was less impaired in the CHR-R group than in the CHR-P and CHR-C groups. AX-CPT predictive validity was robust (0.723) for discriminating converters vs. non-converters, and even greater (0.771) when predicting the three CHR subgroups.
These longitudinal outcome data indicate that cognitive control deficits, as measured by AX-CPT d-prime context, are a strong predictor of clinical outcome in CHR individuals. The AX-CPT is a brief, easily implemented and cost-effective measure that may be valuable for large-scale prediction efforts.
The co-occurrence of hepatic cystic echinococcosis (CE) and alveolar echinococcosis (AE) is extremely rare. Here, we present the clinical manifestations and treatment outcomes of three cases of co-occurring CE and AE in the liver. Computed tomography (CT), magnetic resonance imaging and 18F-fluorodeoxyglucose positron emission tomography-CT were used for preoperative diagnosis. Specimens were taken intraoperatively and sent for pathological study to confirm the coexistence of CE and AE by the laminated membrane, daughter cysts or germinal layer, and the infiltrative structure. Albendazole was prescribed postoperatively for 12 months. All patients recovered completely and showed no recurrence at the last follow-up. Therefore, surgical intervention and postoperative albendazole are recommended for patients with concurrent hepatic AE and CE.
Co-receptor tropism has been shown to correlate with HIV-1 transmission and disease progression in patients. A molecular epidemiological investigation of co-receptor tropism is important for clinical practice and effective control of HIV-1. In this study, we investigated the co-receptor tropism of HIV-1 variants from 85 antiretroviral-naive patients with the Geno2pheno algorithm at a false-positive rate of 10%. Our data showed that a majority of the subjects harboured CCR5-tropic virus (81.2%, 69/85). No significant differences in gender, age, baseline CD4+ T-cell counts or transmission routes were observed between subjects infected with CXCR4-tropic and CCR5-tropic virus. Co-receptor tropism appeared to be associated with virus genotype: significantly more CXCR4 use was predicted in CRF01_AE infections, whereas all CRF07_BC and CRF08_BC strains were predicted to use the CCR5 co-receptor. Sequence analysis of V3 revealed a higher median net charge in CXCR4 viruses than in CCR5 viruses (4.0 vs. 3.0, P < 0.05). The predicted N-linked glycosylation site between amino acids 6 and 8 of the V3 region was conserved in CCR5 viruses, but not in CXCR4 viruses. In addition, variable crown motifs were observed in both CCR5 and CXCR4 viruses, of which the most prevalent, GPGQ, was present in both viral tropisms and in almost all genotypes identified in this study except subtype B. These findings may offer important implications for clinical practice and enhance our understanding of HIV-1 biology.
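The V3 net charge compared above is commonly computed by counting charged residues: positively charged arginine and lysine minus negatively charged aspartate and glutamate. A minimal sketch, using the widely cited subtype-B consensus V3 sequence purely as an example (histidine and more refined charge schemes are deliberately ignored here):

```python
def v3_net_charge(seq):
    """Net charge of a V3 loop sequence: count of positively charged
    residues (R, K) minus negatively charged residues (D, E).
    Histidine and terminal charges are ignored in this simple scheme."""
    positive = sum(seq.count(aa) for aa in "RK")
    negative = sum(seq.count(aa) for aa in "DE")
    return positive - negative

# Subtype-B consensus V3 loop (35 residues), used purely as an example.
v3 = "CTRPNNNTRKSIHIGPGRAFYTTGEIIGDIRQAHC"
print(v3_net_charge(v3))   # 5 positive - 2 negative = 3
print(v3[14:18])           # crown motif at residues 15-18: GPGR here
```

A net charge of 3 is consistent with the CCR5-tropic median reported above; note this consensus sequence carries the subtype-B GPGR crown rather than the GPGQ motif prevalent in the study's non-B genotypes.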
The aim of the study was to investigate any association between extrauterine growth restriction (EUGR) and the intestinal flora of preterm infants of <30 weeks' gestation. A total of 59 preterm infants were assigned to EUGR (n=23) and non-EUGR (n=36) groups. Intestinal bacteria were compared using high-throughput sequencing of bacterial rRNA. The total abundance of bacteria across 344 genera (7,568 vs. 13,760; P<0.0001) and 456 species (10,032 vs. 18,240; P<0.0001) was significantly decreased in the EUGR group compared with the non-EUGR group. After application of a multivariate logistic model, adjustment for potential confounding factors, and false-discovery rate corrections, we found four bacterial genera with higher and one bacterial genus with lower abundance in the EUGR group compared with the control group. In addition, the EUGR group showed significantly increased abundances of six species (Streptococcus parasanguinis, Bacterium RB5FF6, two Klebsiella species and Microbacterium), but decreased frequencies of three species (one Acinetobacter species, Endosymbiont_of_Sphenophorus_lev and one Enterobacter species) compared with the non-EUGR group. Taken together, there were significant changes in the intestinal microflora of preterm infants with EUGR compared with preterm infants without EUGR.
Starch digestion in the small intestine of the dairy cow is low, largely due to a shortage of α-amylase synthesis. One strategy to improve the situation is to enhance α-amylase synthesis. The mammalian target of rapamycin (mTOR) signalling pathway, which acts as a central regulator of protein synthesis, can be activated by leucine. Our objectives were to investigate the effects of leucine on the mTOR signalling pathway and to define the associations between these signalling activities and pancreatic enzyme synthesis, using an in vitro model of cultured Holstein dairy calf pancreatic tissue. Pancreatic tissue was incubated in culture medium containing l-leucine for 3 h and sampled hourly; a control medium without l-leucine was included. Leucine supplementation increased α-amylase and trypsin activities and the messenger RNA expression of their coding genes (P < 0.05), and it enhanced mTOR synthesis and the phosphorylation of mTOR, ribosomal protein S6 kinase 1 and eukaryotic initiation factor 4E-binding protein 1 (P < 0.05). In addition, rapamycin inhibited the mTOR signalling pathway factors during leucine treatment. In sum, leucine regulates α-amylase and trypsin synthesis in dairy calves through regulation of the mTOR signalling pathway.