Smoking prevalence is higher amongst individuals with schizophrenia and depression compared with the general population. Mendelian randomisation (MR) can examine whether this association is causal using genetic variants identified in genome-wide association studies (GWAS).
Methods
We conducted two-sample MR to explore the bi-directional effects of smoking on schizophrenia and depression. For smoking behaviour, we used (1) a smoking initiation GWAS from the GSCAN consortium and (2) our own GWAS of lifetime smoking behaviour (capturing smoking duration, heaviness and cessation), conducted in a sample of 462,690 individuals from the UK Biobank. We validated this instrument using positive control outcomes (e.g. lung cancer). For schizophrenia and depression we used GWAS from the PGC consortium.
Results
There was strong evidence to suggest smoking is a risk factor for both schizophrenia (odds ratio (OR) 2.27, 95% confidence interval (CI) 1.67–3.08, p < 0.001) and depression (OR 1.99, 95% CI 1.71–2.32, p < 0.001). Results were consistent across both lifetime smoking and smoking initiation. We found some evidence that genetic liability to depression increases smoking (β = 0.091, 95% CI 0.027–0.155, p = 0.005) but evidence was mixed for schizophrenia (β = 0.022, 95% CI 0.005–0.038, p = 0.009) with very weak evidence for an effect on smoking initiation.
Conclusions
These findings suggest that the association between smoking, schizophrenia and depression is due, at least in part, to a causal effect of smoking, providing further evidence for the detrimental consequences of smoking on mental health.
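The two-sample MR estimates above combine per-variant summary statistics across many SNPs. As a hedged sketch of the general technique (the per-SNP numbers below are invented for illustration, not the GSCAN/PGC data, and real analyses add sensitivity checks for pleiotropy), an inverse-variance-weighted (IVW) estimate averages per-SNP Wald ratios:

```python
# Minimal inverse-variance-weighted (IVW) two-sample MR sketch.
# All numbers are hypothetical summary statistics, not the study's data.
import math

beta_exp = [0.10, 0.08, 0.12]  # SNP-exposure effects (e.g. on smoking)
beta_out = [0.05, 0.03, 0.07]  # SNP-outcome effects (e.g. on depression, log-odds)
se_out   = [0.02, 0.02, 0.03]  # standard errors of the SNP-outcome effects

# Per-SNP causal estimates are Wald ratios beta_out / beta_exp;
# IVW weights each ratio by beta_exp^2 / se_out^2 (first-order approximation).
ratios  = [bo / be for bo, be in zip(beta_out, beta_exp)]
weights = [be**2 / se**2 for be, se in zip(beta_exp, se_out)]

beta_ivw = sum(w * r for w, r in zip(weights, ratios)) / sum(weights)
se_ivw = math.sqrt(1 / sum(weights))
or_ivw = math.exp(beta_ivw)  # exponentiate a log-odds estimate to get an OR
```

For a binary outcome, exponentiating the pooled log-odds estimate (as in the last line) yields odds ratios of the kind reported above.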
Dietary Zn has significant impacts on the growth and development of breeding rams. The objectives of this study were to evaluate the effects of dietary Zn source and concentration on serum Zn concentration, growth performance, wool traits and reproductive performance in rams. Forty-four Targhee rams (14 months; 68 ± 18 kg BW) were used in an 84-day completely randomized design and were fed one of three pelleted dietary treatments: (1) a control without fortified Zn (CON; n = 15; ~1 × NRC); (2) a diet fortified with a Zn amino acid complex (ZnAA; n = 14; ~2 × NRC) and (3) a diet fortified with ZnSO4 (ZnSO4; n = 15; ~2 × NRC). Growth and wool characteristics measured throughout the course of the study were BW, average daily gain (ADG), dry matter intake (DMI), feed efficiency (G : F), longissimus dorsi muscle depth (LMD), back fat (BF), wool staple length (SL) and average fibre diameter (AFD). Blood was collected from each ram at four time periods to quantify serum Zn and testosterone concentrations. Semen was collected 1 to 2 days after the trial was completed. There were no differences in BW (P = 0.45), DMI (P = 0.18), LMD (P = 0.48), BF (P = 0.47) and AFD (P = 0.90) among treatment groups. Rams fed ZnSO4 had greater (P ≤ 0.03) serum Zn concentrations compared with the ZnAA and CON treatments. Rams consuming ZnAA had greater (P ≤ 0.03) ADG than ZnSO4 and CON. There tended to be differences among groups for G : F (P = 0.06), with ZnAA being numerically greater than ZnSO4 and CON. Wool staple length regrowth was greater (P < 0.001) in ZnSO4 and tended to be longer (P = 0.06) in the ZnAA treatment group compared with CON. No differences were observed among treatments in scrotal circumference, testosterone, spermatozoa concentration within ram semen, % motility, % live sperm and % sperm abnormalities (P ≥ 0.23).
Results indicated beneficial effects of feeding increased Zn concentrations to developing Targhee rams, although Zn source elicited differential responses in performance characteristics measured.
Identifying risk factors for individuals in a clinical high-risk (CHR) state for psychosis is vital to prevention and early intervention efforts. Among prodromal abnormalities, cognitive functioning has shown intermediate levels of impairment in CHR relative to first-episode psychosis and healthy controls, highlighting a potential role as a risk factor for transition to psychosis and other negative clinical outcomes. The current study used the AX-CPT, a brief 15-min computerized task, to determine whether cognitive control impairments in CHR at baseline could predict clinical status at 12-month follow-up.
Methods
Baseline AX-CPT data were obtained from 117 CHR individuals participating in two studies, the Early Detection, Intervention, and Prevention of Psychosis Program (EDIPPP) and the Understanding Early Psychosis Programs (EP), and used to predict clinical status at 12-month follow-up. At 12 months, 19 individuals converted to a first episode of psychosis (CHR-C), 52 remitted (CHR-R), and 46 had persistent sub-threshold symptoms (CHR-P). Binary logistic regression and multinomial logistic regression were used to test prediction models.
Results
Baseline AX-CPT performance (d-prime context) was less impaired in CHR-R compared to CHR-P and CHR-C patient groups. AX-CPT predictive validity was robust (0.723) for discriminating converters v. non-converters, and even greater (0.771) when predicting the three CHR subgroups.
Conclusions
These longitudinal outcome data indicate that cognitive control deficits, as measured by AX-CPT d-prime context, are a strong predictor of clinical outcome in CHR individuals. The AX-CPT is a brief, easily implemented and cost-effective measure that may be valuable for large-scale prediction efforts.
Whole-grain cereal breakfast consumption has been associated with beneficial effects on glucose and insulin metabolism as well as satiety. Pearl millet is a popular ancient grain variety that can be grown in hot, dry regions. However, little is known about its health effects. The present study investigated the effect of a pearl millet porridge (PMP) compared with a well-known Scottish oats porridge (SOP) on glycaemic, gastrointestinal, hormonal and appetitive responses. In a randomised, two-way crossover trial, twenty-six healthy participants consumed two isoenergetic/isovolumetric PMP or SOP breakfast meals, served with a drink of water. Blood samples for glucose, insulin, glucagon-like peptide 1, glucose-dependent insulinotropic polypeptide (GIP), peptide YY, gastric volumes and appetite ratings were collected 2 h postprandially, followed by an ad libitum meal and food intake records for the remainder of the day. The incremental AUC (iAUC2h) for blood glucose was not significantly different between the porridges (P > 0·05). The iAUC2h for gastric volume was larger for PMP compared with SOP (P = 0·045). The iAUC2h for GIP concentration was significantly lower for PMP compared with SOP (P = 0·001). Other hormones and appetite responses were similar between meals. In conclusion, the present study reports, for the first time, data on glycaemic and physiological responses to a pearl millet breakfast, showing that this ancient grain could represent a sustainable alternative with health-promoting characteristics comparable with oats. GIP is an incretin hormone linked to TAG absorption in adipose tissue; therefore, the lower GIP response for PMP may be an added health benefit.
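The incremental AUC (iAUC2h) reported above is typically computed by the trapezoidal rule over the postprandial sampling times, counting only area above the baseline value. A minimal sketch, with invented glucose values rather than the trial's data (conventions for handling dips below baseline vary between studies):

```python
# Incremental area under the curve (iAUC) by the trapezoidal rule,
# counting only area above the fasting baseline (illustrative values).
def iauc(times, values):
    """times in minutes, values at matching times; baseline = first value."""
    baseline = values[0]
    total = 0.0
    for (t0, v0), (t1, v1) in zip(zip(times, values), zip(times[1:], values[1:])):
        # increments above baseline, clipped at zero for dips below it
        a0 = max(v0 - baseline, 0.0)
        a1 = max(v1 - baseline, 0.0)
        total += (a0 + a1) / 2 * (t1 - t0)  # trapezoid for this interval
    return total

# Hypothetical postprandial glucose (mmol/L) at 0, 30, 60, 90, 120 min
iauc_2h = iauc([0, 30, 60, 90, 120], [5.0, 7.0, 6.5, 5.5, 5.0])
```

The same routine applies to hormone concentrations or gastric volumes, which is how iAUC2h comparisons between the two porridges would be set up.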
We have observed the G23 field of the Galaxy And Mass Assembly (GAMA) survey using the Australian Square Kilometre Array Pathfinder (ASKAP) in its commissioning phase to validate the performance of the telescope and to characterise the detected galaxy populations. This observation covers ~48 deg2 with a synthesised beam of 32.7 arcsec by 17.8 arcsec at 936 MHz, and ~39 deg2 with a synthesised beam of 15.8 arcsec by 12.0 arcsec at 1320 MHz. At both frequencies, the root-mean-square (r.m.s.) noise is ~0.1 mJy/beam. We combine these radio observations with the GAMA galaxy data, which includes spectroscopy of galaxies that are i-band selected with a magnitude limit of 19.2. Wide-field Infrared Survey Explorer (WISE) infrared (IR) photometry is used to determine which galaxies host an active galactic nucleus (AGN). In properties including source counts, mass distributions, and IR versus radio luminosity relation, the ASKAP-detected radio sources behave as expected. Radio galaxies have higher stellar mass and luminosity in IR, optical, and UV than other galaxies. We apply optical and IR AGN diagnostics and find that they disagree for ~30% of the galaxies in our sample. We suggest possible causes for the disagreement. Some cases can be explained by optical extinction of the AGN, but for more than half of the cases we do not find a clear explanation. Radio sources are more likely (~6%) to have an AGN than radio-quiet galaxies (~1%), but the majority of AGN are not detected in radio at this sensitivity.
To determine the impact of pre-operative intratympanic gentamicin injection on the recovery of patients undergoing translabyrinthine resection of vestibular schwannomas.
Methods
This prospective, case–control pilot study included eight patients undergoing surgical labyrinthectomy, divided into two groups: four patients who received pre-operative intratympanic gentamicin and four patients who did not. The post-operative six-canal video head impulse test responses and length of in-patient stay were assessed.
Results
The average length of stay was shorter for patients who received intratympanic gentamicin (6.75 days; range, 6–7 days) than for those who did not (9.5 days; range, 8–11 days) (p = 0.0073). Additionally, the gentamicin group had normal post-operative video head impulse test responses in the contralateral ear, while the non-gentamicin group did not.
Conclusion
Pre-operative intratympanic gentamicin improves recovery following vestibular schwannoma resection; as measured by the video head impulse test, it eliminated the impact of labyrinthectomy on the contralateral labyrinth.
This chapter describes the current state of, and normative basis for, the law of reasonable royalties among the leading jurisdictions for patent infringement litigation, as well as the principal arguments for and against various practices relating to the calculation of reasonable royalties; for each of the major issues discussed, the chapter provides one or more recommendations. The chapter’s principal recommendation is that, when applying a “bottom-up” approach to estimating reasonable royalties, courts should replace the Georgia-Pacific factors (and analogous factors used outside the United States) with a smaller list of considerations, specifically (1) calculating the incremental value of the invention and dividing it appropriately between the parties; (2) assessing market evidence, such as comparable licenses; and (3) where feasible and cost justified, using each of these first two considerations as a “check” on the accuracy of the other.
A national need is to prepare for and respond to accidental or intentional disasters categorized as chemical, biological, radiological, nuclear, or explosive (CBRNE). These incidents require specific subject-matter expertise, yet have commonalities. We identify 7 core elements comprising CBRNE science that require integration for effective preparedness planning and public health and medical response and recovery. These core elements are (1) basic and clinical sciences, (2) modeling and systems management, (3) planning, (4) response and incident management, (5) recovery and resilience, (6) lessons learned, and (7) continuous improvement. A key feature is the ability of relevant subject matter experts to integrate information into response operations. We propose the CBRNE medical operations science support expert as a professional who (1) understands that CBRNE incidents require an integrated systems approach, (2) understands the key functions and contributions of CBRNE science practitioners, (3) helps direct strategic and tactical CBRNE planning and responses through first-hand experience, and (4) provides advice to senior decision-makers managing response activities. Recognition of both CBRNE science as a distinct competency and the establishment of the CBRNE medical operations science support expert informs the public of the enormous progress made, broadcasts opportunities for new talent, and enhances the sophistication and analytic expertise of senior managers planning for and responding to CBRNE incidents.
Shiga toxin-producing Escherichia coli (STEC) infection can cause serious illness including haemolytic uraemic syndrome. The role of socio-economic status (SES) in differential clinical presentation and exposure to potential risk factors amongst STEC cases has not previously been reported in England. We conducted an observational study using a dataset of all STEC cases identified in England, 2010–2015. Odds ratios for clinical characteristics of cases and foodborne, waterborne and environmental risk factors were estimated using logistic regression, stratified by SES, adjusting for baseline demographic factors. Incidence was higher in the highest SES group compared to the lowest (RR 1.54, 95% CI 1.19–2.00). Odds of Accident and Emergency attendance (OR 1.35, 95% CI 1.10–1.75) and hospitalisation (OR 1.71, 95% CI 1.36–2.15) because of illness were higher in the most disadvantaged compared to the least, suggesting potential lower ascertainment of milder cases or delayed care-seeking behaviour in disadvantaged groups. Advantaged individuals were significantly more likely to report salad/fruit/vegetable/herb consumption (OR 1.59, 95% CI 1.16–2.17), non-UK or UK travel (OR 1.76, 95% CI 1.40–2.27; OR 1.85, 95% CI 1.35–2.56) and environmental exposures (walking in a paddock, OR 1.82, 95% CI 1.22–2.70; soil contact, OR 1.52, 95% CI 1.09–2.13), suggesting other unmeasured risks, such as person-to-person transmission, could be more important in the most disadvantaged group.
Cognitive-behavioural therapy (CBT) is an effective treatment for depressed adults. CBT interventions are complex, as they include multiple content components and can be delivered in different ways. We compared the effectiveness of different types of therapy, different components and combinations of components and aspects of delivery used in CBT interventions for adult depression. We conducted a systematic review of randomised controlled trials in adults with a primary diagnosis of depression, which included a CBT intervention. Outcomes were pooled using a component-level network meta-analysis. Our primary analysis classified interventions according to the type of therapy and delivery mode. We also fitted more advanced models to examine the effectiveness of each content component or combination of components. We included 91 studies and found strong evidence that CBT interventions yielded a larger short-term decrease in depression scores compared to treatment-as-usual, with a standardised difference in mean change of −1.11 (95% credible interval −1.62 to −0.60) for face-to-face CBT, −1.06 (−2.05 to −0.08) for hybrid CBT, and −0.59 (−1.20 to 0.02) for multimedia CBT, whereas wait list control showed a detrimental effect of 0.72 (0.09 to 1.35). We found no evidence of specific effects of any content components or combinations of components. Technology is increasingly used in the context of CBT interventions for depression. Multimedia and hybrid CBT might be as effective as face-to-face CBT, although results need to be interpreted cautiously. The effectiveness of specific combinations of content components and delivery formats remains unclear. Wait list controls should be avoided if possible.
We studied the genetic diversity and the population structure of human isolates of Histoplasma capsulatum, the causative agent of histoplasmosis, using a randomly amplified polymorphic DNA-polymerase chain reaction (RAPD-PCR) assay to identify associations with the geographic distribution of isolates from Mexico, Guatemala, Colombia and Argentina. The RAPD-PCR pattern analyses revealed the genetic diversity by estimating the percentage of polymorphic loci, effective number of alleles, Shannon's index and heterozygosity. Population structure was identified by the index of association (IA) test. Thirty-seven isolates were studied and clustered into three groups by the unweighted pair-group method with arithmetic mean (UPGMA). Group I contained five subgroups based on geographic origin. The consistency of the UPGMA dendrogram was estimated by the cophenetic correlation coefficient (CCCr = 0.94, P = 0.001). Isolates from Mexico and Colombia presented higher genetic diversity than isolates from Argentina. Isolates from Guatemala grouped together with the reference strains from the United States of America and Panama. The IA values suggest the presence of a clonal population structure in the Argentinian H. capsulatum isolates and also validate the presence of recombining populations in the Colombian and Mexican isolates. These data contribute to the knowledge on the molecular epidemiology of histoplasmosis in Latin America.
Introduction: Although use of point of care ultrasound (PoCUS) protocols for patients with undifferentiated hypotension in the Emergency Department (ED) is widespread, our previously reported SHoC-ED study showed no clear survival or length of stay benefit for patients assessed with PoCUS. In this analysis, we examine if the use of PoCUS changed fluid administration and rates of other emergency interventions between patients with different shock types. The primary comparison was between cardiogenic and non-cardiogenic shock types. Methods: A post-hoc analysis was completed on the database from an RCT of 273 patients who presented to the ED with undifferentiated hypotension (SBP < 100 mmHg or shock index > 1) and who had been randomized to receive standard care with or without PoCUS in 6 centres in Canada and South Africa. PoCUS-trained physicians performed scans after initial assessment. Shock categories and diagnoses recorded at 60 minutes after ED presentation were used to allocate patients into subcategories of shock for analysis of treatment. We analyzed actual care delivered including initial IV fluid bolus volumes (mL), rates of inotrope use and major procedures. Standard statistical tests were employed. Sample size was powered at 0.80 (α:0.05) for a moderate difference. Results: Although there were expected differences in the mean fluid bolus volume between patients with non-cardiogenic and cardiogenic shock, there was no difference in fluid bolus volume between the control and PoCUS groups (non-cardiogenic control 1878 mL (95% CI 1550–2206 mL) vs. non-cardiogenic PoCUS 1687 mL (1458–1916 mL); and cardiogenic control 768 mL (194–1341 mL) vs. cardiogenic PoCUS 981 mL (341–1620 mL)). Likewise, there were no differences in rates of inotrope administration or major procedures for any of the subcategories of shock between the control group and PoCUS group patients. The most common subcategory of shock was distributive.
Conclusion: Despite differences in care delivered by subcategory of shock, we did not find any significant difference in actual care delivered between patients who were examined using PoCUS and those who were not. This may help to explain the previously reported lack of outcome difference between groups.
Introduction: Point of care ultrasound has been reported to improve diagnosis in non-traumatic hypotensive ED patients. We compared diagnostic performance of physicians with and without PoCUS in undifferentiated hypotensive patients as part of an international prospective randomized controlled study. The primary outcome was diagnostic performance of PoCUS for cardiogenic vs. non-cardiogenic shock. Methods: SHoC-ED recruited hypotensive patients (SBP < 100 mmHg or shock index > 1) in 6 centres in Canada and South Africa. We describe previously unreported secondary outcomes relating to diagnostic accuracy. Patients were randomized to standard clinical assessment (No PoCUS) or PoCUS groups. PoCUS-trained physicians performed scans after initial assessment. Demographics, clinical details and findings were collected prospectively. Initial and secondary diagnoses including shock category were recorded at 0 and 60 minutes. Final diagnosis was determined by independent blinded chart review. Standard statistical tests were employed. Sample size was powered at 0.80 (α:0.05) for a moderate difference. Results: 273 patients were enrolled with follow-up for primary outcome completed for 270. Baseline demographics and perceived category of shock were similar between groups. 11% of patients were determined to have cardiogenic shock. PoCUS had a sensitivity of 80.0% (95% CI 54.8 to 93.0%), specificity 95.5% (90.0 to 98.1%), LR+ve 17.9 (7.34 to 43.8), LR-ve 0.21 (0.08 to 0.58), Diagnostic OR 85.6 (18.2 to 403.6) and accuracy 93.7% (88.0 to 97.2%) for cardiogenic shock. Standard assessment without PoCUS had a sensitivity of 91.7% (64.6 to 98.5%), specificity 93.8% (87.8 to 97.0%), LR+ve 14.8 (7.1 to 30.9), LR-ve 0.09 (0.01 to 0.58), Diagnostic OR 166.6 (18.7 to 1481) and accuracy of 93.6% (87.8 to 97.2%). There was no significant difference in sensitivity (-11.7% (-37.8 to 18.3%)) or specificity (1.73% (-4.67 to 8.29%)).
Diagnostic performance was also similar between other shock subcategories. Conclusion: As reported in other studies, PoCUS-based assessment performed well diagnostically in undifferentiated hypotensive patients, especially as a rule-in test. However, performance was similar to standard (non-PoCUS) assessment, which was excellent in this study.
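The metrics reported above (sensitivity, specificity, likelihood ratios, diagnostic odds ratio, accuracy) all derive from a single 2x2 table of test result against final diagnosis. A hedged sketch with illustrative counts (not the SHoC-ED data) shows the derivations:

```python
# Diagnostic accuracy metrics from a 2x2 table (illustrative counts only):
#   tp = test positive & disease present,  fp = test positive & disease absent
#   fn = test negative & disease present,  tn = test negative & disease absent
def diagnostics(tp, fp, fn, tn):
    sens = tp / (tp + fn)              # sensitivity: detected fraction of disease
    spec = tn / (tn + fp)              # specificity: cleared fraction of non-disease
    return {
        "sensitivity": sens,
        "specificity": spec,
        "LR+": sens / (1 - spec),      # positive likelihood ratio (rule-in strength)
        "LR-": (1 - sens) / spec,      # negative likelihood ratio (rule-out strength)
        "diagnostic_OR": (tp * tn) / (fp * fn),
        "accuracy": (tp + tn) / (tp + fp + fn + tn),
    }

# Hypothetical cohort of 270 patients, ~11% with cardiogenic shock
m = diagnostics(tp=24, fp=12, fn=6, tn=228)
```

A large LR+ with a modest sensitivity, as in the PoCUS results above, is what characterises a rule-in test.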
The mammal family Tenrecidae (Afrotheria: Afrosoricida) is endemic to Madagascar. Here we present the conservation priorities for the 31 species of tenrec that were assessed or reassessed in 2015–2016 for the IUCN Red List of Threatened Species. Six species (19.4%) were found to be threatened (4 Vulnerable, 2 Endangered) and one species was categorized as Data Deficient. The primary threat to tenrecs is habitat loss, mostly as a result of slash-and-burn agriculture, but some species are also threatened by hunting and incidental capture in fishing traps. In the longer term, climate change is expected to alter tenrec habitats and ranges. However, the lack of data for most tenrecs on population size, ecology and distribution, together with frequent changes in taxonomy (with many cryptic species being discovered based on genetic analyses) and the poorly understood impact of bushmeat hunting on spiny species (Tenrecinae), hinders conservation planning. Priority conservation actions are presented for Madagascar's tenrecs for the first time since 1990 and focus on conserving forest habitat (especially through improved management of protected areas) and filling essential knowledge gaps. Tenrec research, monitoring and conservation should be integrated into broader sustainable development objectives and programmes targeting higher profile species, such as lemurs, if we are to see an improvement in the conservation status of tenrecs in the near future.
Medical equipment can transmit pathogenic bacteria to patients. This single-institution point prevalence study aimed to characterise the types and relative amount of bacteria found on surgical loupes, headlights and their battery packs.
Method
Surgical loupes, headlights and battery packs of 16 otolaryngology staff and residents were sampled, cultured and quantified. Plate scores were summed for each equipment type, and the total was divided by the number of users to generate mean bacterial burden scores. Residents completed a questionnaire regarding their equipment cleaning practices.
Results
The contamination rates of loupes, headlights and battery packs were 68.75 per cent, 100 per cent and 75 per cent, respectively. Battery packs cultured more bacteria (mean score 1.58 ± 1.00 per swab) than loupes (0.75 ± 0.66 per swab; p = 0.024). Headlights had non-significantly greater growth (1.50 ± 0.71 per swab) than loupes (p = 0.052). Bacterial growth was significantly higher from inner surfaces of loupes (p = 0.035) and headlights (p = 0.037). Potentially pathogenic bacteria were cultured from the equipment of five participants, including Pantoea agglomerans, Acinetobacter radioresistens, Staphylococcus aureus, Acinetobacter calcoaceticus–baumannii complex and Moraxella osloensis.
Conclusion
This study demonstrates that surgical loupes and headlights used in otolaryngology harbour non-pathogenic skin flora and potentially pathogenic bacteria.
Transoral laser microsurgery is an increasingly common treatment modality for glottic carcinoma. This study aimed to determine the effect of age, gender, stage and time on voice-related quality of life using the Voice Handicap Index-10.
Methods
Primary early glottic carcinoma patients treated with transoral laser microsurgery were included in the study. Self-reported Voice Handicap Index testing was completed pre-operatively, three months post-operatively, and yearly at follow-up appointments.
Results
Voice Handicap Index improvement was found to be dependent on age and tumour stage, while no significant differences were found in Voice Handicap Index for gender. Voice Handicap Index score was significantly improved at 12 months and 24 months. Time versus Voice Handicap Index modelling revealed a preference for non-linear over linear regression.
Conclusion
Age and stage are important factors, as younger patients with more advanced tumours show greater voice improvement post-operatively. Patients' Voice Handicap Index scores are predicted to reach 95 per cent of maximal improvement by 5.5 months post-operatively.
When the accuracy of a lattice parameter determination is carried beyond about 0.01% it becomes of special importance to consider the equivalence of λ and θ in solving the Bragg equation for d. A reference angle on the observed diffractometer profile must be identified with the corresponding wavelength of the incident X-ray spectral distribution. Exact identity is not possible because the diffractometer profiles are broadened, distorted asymmetrically, and displaced from their correct positions by amounts dependent on the shape of the incident spectral lines, the angular separation of the Kα1,2 doublet lines, and the specimen, instrumental, and geometrical aberrations inherent in the experimental method. The aberration functions vary with the experimental conditions and are Bragg-angle-dependent, thereby introducing systematic errors which are not eliminated by extrapolation procedures.
Most of the published diffractometer measurements of lattice parameters have used as the reflection angle, 2θ, of the diffractometer profile the peak P(2θ) or the midpoint of chords at various heights above background M1/2(2θ), M2/3(2θ), etc., of the Kα1 line. The relationship between these various angular measures of the line profile is not constant; P(2θ) may be equal to, greater than, or less than M1/2(2θ), depending on the asymmetry of the line profile. The X-ray wavelengths currently used in diffractometry refer to the peak P(λ) of the spectral distribution. The use of P(λ) with different angular measures of the diffractometer profiles results in a range of d's from which different values of the lattice parameters are calculated. The selection of arbitrary methods of defining 2θ does not take into account the significant aspects of the diffraction process, nor does it facilitate the correction of the data for systematic errors inherent in the experimental measurements.
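The sensitivity described above is easy to quantify: solving the Bragg equation λ = 2d sin θ for d shows how a small displacement in the measured 2θ reference angle propagates into the derived spacing. A sketch under assumed values (Cu Kα1 wavelength; the angles are illustrative, not measured data):

```python
# Solving the Bragg equation lambda = 2 d sin(theta) for the spacing d,
# and showing the effect of a small shift in the 2-theta reference angle.
import math

def d_spacing(wavelength_angstrom, two_theta_deg):
    """d = lambda / (2 sin(theta)), with theta = half the measured 2-theta."""
    theta = math.radians(two_theta_deg / 2)
    return wavelength_angstrom / (2 * math.sin(theta))

lam = 1.5406  # Cu K-alpha1 wavelength in angstroms (assumed reference value)

d_peak = d_spacing(lam, 44.50)  # profile peak taken as the reference angle
d_mid  = d_spacing(lam, 44.48)  # chord midpoint displaced by 0.02 degrees

# A 0.02-degree displacement in 2-theta shifts d by roughly 0.001 angstroms
# here, i.e. well above a 0.01% accuracy target for the lattice parameter.
delta_d = d_mid - d_peak
```

This is why the choice between P(2θ) and the chord midpoints M1/2(2θ), M2/3(2θ) matters once sub-0.01% accuracy is sought.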
X-ray diffraction topographs were obtained from large arsenic single crystals. The camera employed copper Kα radiation from a microfocus tube, and an oscillating assembly of Soller slits limited the beam divergence. Reflections of the type (11) and (20) (primitive rhombohedral cell) were used to characterise dislocation Burgers vectors. The technique has been applied to arsenic single crystals grown from the vapour and from the melt. The majority of dislocations were found to belong to Burgers vectors <10>. Comparison has been made between dislocation etch pit patterns on (111) surfaces and X-ray topographs.
We assessed whether paternal demographic, anthropometric and clinical factors influence the risk of an infant being born large-for-gestational-age (LGA). We examined the data on 3659 fathers of term offspring (including 662 LGA infants) born to primiparous women from Screening for Pregnancy Endpoints (SCOPE). LGA was defined as birth weight >90th centile as per INTERGROWTH 21st standards, with reference group being infants ⩽90th centile. Associations between paternal factors and likelihood of an LGA infant were examined using univariable and multivariable models. Men who fathered LGA babies were 180 g heavier at birth (P<0.001) and were more likely to have been born macrosomic (P<0.001) than those whose infants were not LGA. Fathers of LGA infants were 2.1 cm taller (P<0.001), 2.8 kg heavier (P<0.001) and had similar body mass index (BMI). In multivariable models, increasing paternal birth weight and height were independently associated with greater odds of having an LGA infant, irrespective of maternal factors. One unit increase in paternal BMI was associated with 2.9% greater odds of having an LGA boy but not girl; however, this association disappeared after adjustment for maternal BMI. There were no associations between paternal demographic factors or clinical history and infant LGA. In conclusion, fathers who were heavier at birth and were taller were more likely to have an LGA infant, but maternal BMI had a dominant influence on LGA.