Delineating the proximal urethra can be critical for radiotherapy planning but is challenging on computerised tomography (CT) imaging.
Materials and methods:
We trialled a novel non-invasive technique for visualising the proximal urethra, using a rapid-sequence magnetic resonance imaging (MRI) protocol to capture urinary flow in patients voiding during the simulation scan.
Results:
Of the seven patients enrolled, four were able to void during the MRI scan. For these four patients, direct visualisation of urinary flow through the proximal urethra was achieved. The average volume of the proximal urethra contoured on voiding MRI was significantly greater than that contoured on CT (4·07 versus 1·60 cc; p = 0·02). The location of the proximal urethra also differed: the average Dice coefficient was 0·28 (range 0–0·62).
Findings:
In this small proof-of-concept prospective clinical trial, the volume and location of the proximal urethra differed significantly when contoured on a voiding MRI scan compared with conventional CT simulation. The shape of the proximal urethra on voiding MRI may be more anatomically correct than the shape determined with a semi-rigid catheter in place.
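The Dice coefficient reported above measures the overlap of two contours as 2|A∩B| / (|A| + |B|) over their voxel sets. A minimal sketch, using made-up voxel index sets rather than study data:

```python
def dice(a, b):
    """Dice similarity coefficient between two voxel sets: 2|A & B| / (|A| + |B|)."""
    a, b = set(a), set(b)
    if not a and not b:
        return 1.0  # two empty contours are defined here as identical
    return 2 * len(a & b) / (len(a) + len(b))

# Hypothetical voxel index sets for a CT contour and a voiding-MRI contour.
ct = {(0, 0), (0, 1), (1, 0), (1, 1)}
mri = {(1, 1), (1, 2), (2, 1), (2, 2)}
overlap = dice(ct, mri)  # 2*1 / (4 + 4) = 0.25
```

A Dice value of 1 means identical contours and 0 means no overlap, so the study's average of 0·28 indicates substantial spatial disagreement between the CT- and MRI-defined urethras.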
Previous research demonstrates various associations between depression, cardiovascular disease (CVD) incidence and mortality. Differences between studies may arise from differences in methodology.
Objectives:
This work investigated the impact of using two different methods to measure depression and two different methods of analysis to establish relationships.
Aims:
The work investigated the association between depression, CVD incidence (CVDI) and mortality from coronary heart disease (MCHD), smoking-related conditions (MSRC), and all causes (MALL) in a major population study, using depression measured with a validated scale and a depression measure derived by factor analysis, and using analyses based on continuous and on grouped data.
Methods:
Data from the PRIME Study (N=9,798 men) on depression and ten-year CVD incidence and mortality were analysed using Cox proportional hazards models.
Results:
Using continuous data, no relationships with CVDI were found, but both measures of depression yielded positive associations between depression and mortality (MCHD, MSRC, MALL). Using grouped data, no associations with CVDI or MCHD were found, and the associations between the factor-analysis-derived measure and MSRC and MALL were also lost. Positive associations remained only between depression measured using the validated items and MSRC and MALL.
Conclusions:
These data demonstrate a possible association between depression and mortality, but detecting this association depends on the methodology used. Methodology-dependent findings present clear problems for the determination of relationships. The differences observed here favour the use of validated scales and caution against over-reduction via factor analysis and grouping.
Original studies published over the last decade regarding time trends in dementia report mixed results. The aims of the present study were to use linked administrative health data for the province of Saskatchewan for the period 2005/2006 to 2012/2013 to: (1) examine simultaneous temporal trends in annual age- and sex-specific dementia incidence and prevalence among individuals aged 45 and older, and (2) stratify the changes in incidence over time by database of identification.
Methods:
Using a population-based retrospective cohort study design, data were extracted from seven provincial administrative health databases linked by a unique anonymized identification number. Individuals 45 years and older at first identification of dementia between April 1, 2005 and March 31, 2013 were included, based on case definition criteria met within any one of four administrative health databases (hospital, physician, prescription drug, and long-term care).
Results:
Between 2005/2006 and 2012/2013, the 12-month age-standardized incidence rate of dementia declined significantly by 11.07% and the 12-month age-standardized prevalence increased significantly by 30.54%. The number of incident cases decreased from 3,389 to 3,270 and the number of prevalent cases increased from 8,795 to 13,012. Incidence rate reductions were observed in every database of identification.
Conclusions:
We observed a simultaneous trend of decreasing incidence and increasing prevalence of dementia over the relatively short 8-year period from 2005/2006 to 2012/2013. These trends indicate that average survival time with dementia is lengthening. Continued observation of these trends is warranted given the short study period.
Feed is a major component of the variable costs associated with dairy systems and is therefore an important consideration for breeding objectives. As a result, measures of feed efficiency are becoming popular traits for genetic analyses. Several countries already account for feed efficiency in their breeding objectives by approximating the amount of energy required for milk production, maintenance, etc. However, variation in actual feed intake is currently not captured in dairy selection objectives, although this could be possible by evaluating traits such as residual feed intake (RFI), defined as the difference between actual and predicted feed (or energy) intake. As feed intake is expensive to measure accurately on large numbers of cows, phenotypes derived from it are obvious candidates for genomic selection, provided that: (1) the trait is heritable; (2) the reliability of genomic predictions is acceptable to those using the breeding values; and (3) if breeding values are estimated for heifers rather than cows, the heifer and cow traits are correlated. The accuracy of genomic prediction of dry matter intake (DMI) and RFI has been estimated at around 0.4 in beef and dairy cattle studies. There are opportunities to increase the accuracy of prediction; for example, pooling data from three research herds (in Australia and Europe) has been shown to increase the accuracy of genomic prediction of DMI from 0.33 within country to 0.35 using a three-country reference population. Before including RFI as a selection objective, genetic correlations with other traits need to be estimated. Weak unfavourable genetic correlations between RFI and fertility have been published. This could be because RFI is mathematically similar to the calculation of energy balance, and failure to account correctly for the mobilisation of body reserves may result in selection for a trait that resembles selecting for reduced (or negative) energy balance.
If RFI is to become a selection objective, it is sensible to include it in an overall multi-trait selection index where the breeding objective is net profit, as this would allow genetic correlations with other traits to be properly accounted for. If genetic parameters are accurately estimated, RFI is a logical breeding objective; if there is uncertainty in them, DMI may be preferable.
For centuries, animal breeders have very effectively selected livestock species, making use of the natural variation that exists within populations. As part of the development towards broader breeding goals, the RobustMilk project was designed to develop new practical technologies to allow breeders to re-focus their selection to include milk quality and dairy cow robustness, and to evaluate the consequences of selection for these traits, taking cognisance of various milk production systems. Here we introduce the background to robustness, the value of expanding milk quality analysis (including the possibility of using milk quality characteristics as proxy measures for robustness traits), interactions between robustness and milk quality traits, and the need for different breeding tools to enable delivery of these concepts to the industry. The core of the project comprised developing a database of phenotypes from research herds across Europe, phenotyping tools based on mid-infrared spectroscopic analysis of milk, and statistical and genomic tools for robustness and milk quality. The following papers describe the outcomes and developments of the project.
Data from 113 Dutch organic farms were analysed to determine the effect of cross-breeding on production and functional traits. In total, data on 33 788 lactations between January 2003 and February 2009 from 15 015 cows were available. Holstein–Friesian pure-bred cows produced the most milk (kg in 305 days), but with the lowest percentages of fat and protein of all pure-bred cows in the data set. Cross-breeding Holstein dairy cows with other breeds (Brown Swiss, Dutch Friesian, Groningen White Headed, Jersey, Meuse Rhine Yssel, Montbéliarde or Fleckvieh) decreased milk production but improved fertility and udder health in most cross-bred animals. In most breeds, heterosis had a significant effect (P < 0.05) on milk yield (kg in 305 days), fat- and protein-corrected milk production (kg in 305 days) and calving interval (CI) in the favourable direction (i.e. more milk, shorter CI), but an unfavourable effect on somatic cell count (higher cell count). Recombination was unfavourable for the milk production traits but favourable for the functional traits (fertility and udder health). Farm characteristics, such as soil type or housing system, significantly affected the regression coefficients on breed components. The effect of the Holstein breed on milk yield was twice as large in cubicle housing as in other housing systems, and Jerseys had a negative effect on fertility only on farms on sandy soils. Hence, breed effects differ across organic farming systems, and farmers can use such information to dovetail their farming system with the type of cow they use.
Dramatic advances in the control of physical systems at the atomic scale have provided many new ways to manufacture devices. An important question is how best to design these ultra-small complex systems. Access to vast amounts of inexpensive computing power makes it possible to accurately simulate their physical properties. Furthermore, high-performance computers allow us to explore the large number of degrees of freedom with which to construct new device configurations. This book aims to lay the groundwork for a methodology to exploit these emerging capabilities using optimal device design. By combining applied mathematics, smart computation, physical modeling, and twenty-first-century engineering and fabrication tools it is possible to find atomic and nanoscale configurations that result in components with performance characteristics that have not been achieved using other methods.
Imagine you want to design and build a novel nanoscale device. How would you go about it? A conventional starting point is to look at a macroscopic component with similar functionality and consider ways to make it smaller. This approach has several potential pitfalls. For one, with continued reduction in size, device behavior becomes quantum in character, and classical concepts and models cease to apply. Moreover, the approach is limited by ad hoc designs, typically rooted in our unwillingness to consider aperiodic configurations unless absolutely mandated by physical constraints. Most importantly, this conventional approach misses the enormous opportunity of exploring the full landscape of possible system responses offered by breaking all conceivable symmetries.
Computational resources, realistic physical models, and advanced optimization algorithms now make it possible to efficiently explore the properties of many more configurations than could be tested in a typical laboratory.
Explore the frontier of device engineering by applying optimization to nanoscience and device design. This cutting-edge work shows how robust, manufacturable designs that meet previously unobtainable system specifications can be created using a combination of modern computer power, adaptive algorithms, and realistic device-physics models. Applying this method to nanoscience is a path to creating new devices with new functionality, and it could be the key design element in making nanoscience a practical technology. The book progresses from basic introductory examples with MATLAB code to more formal and sophisticated approaches, and examines specific applications and designs. It is essential reading for researchers and engineers in electronic devices, nanoscience, materials science, applied mathematics, and applied physics.
The aim of this study was to describe a systematic process of record-linkage, cross-validation, case-ascertainment and capture–recapture analysis to assess the quality of tuberculosis registers and to estimate the completeness of notification of incident tuberculosis cases in The Netherlands in 1998. After record-linkage and cross-validation 1499 tuberculosis patients were identified, of whom 1298 were notified, resulting in an observed under-notification of 13·4%. After adjustment for possible imperfect record-linkage and remaining false-positive hospital cases observed under-notification was 7·3%. Log-linear capture–recapture analysis initially estimated a total number of 2053 (95% CI 1871–2443) tuberculosis cases, resulting in an estimated under-notification of 36·8%. After adjustment for possible imperfect record-linkage and remaining false-positive hospital cases various capture–recapture models estimated under-notification at 13·6%. One of the reasons for the higher than expected estimated under-notification in a country with a well-organized system of tuberculosis control might be that some tuberculosis cases, e.g. extrapulmonary tuberculosis, are managed by clinicians less familiar with notification of infectious diseases. This study demonstrates the possible impact of violation of assumptions underlying capture–recapture analysis, especially the perfect record-linkage, perfect positive predictive value and absent three-way interaction assumptions.
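The study's log-linear models handle multiple registers and their interactions, but the simplest two-source version of capture–recapture is the Lincoln–Petersen estimator, which conveys the idea: estimate the total population from two overlapping incomplete lists. A minimal sketch with hypothetical counts, not the study's figures:

```python
def lincoln_petersen(n1, n2, m):
    """Two-source capture-recapture estimate of total population size:
    N_hat = n1 * n2 / m, where n1 and n2 are the counts in each source
    and m is the number of cases found in both (assumes independent
    sources and perfect record-linkage)."""
    if m == 0:
        raise ValueError("no overlap between sources; estimate undefined")
    return n1 * n2 / m

# Hypothetical counts: notification register, hospital register, overlap.
n_notified, n_hospital, both = 1298, 600, 450
n_hat = lincoln_petersen(n_notified, n_hospital, both)

# Under-notification = share of the estimated total missed by notification.
under_notification = 1 - n_notified / n_hat
```

Note how the estimate hinges on the assumptions the abstract flags: imperfect linkage inflates or deflates the overlap m, and dependence between sources (the "three-way interaction" problem in the multi-source case) biases the estimated total.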