Systemic lupus erythematosus (SLE) is a multi-system inflammatory disease in which genetic susceptibility, coupled with largely undefined environmental factors, is reported to underlie the aetiology of the disease. One such factor is low vitamin D status. The primary source of vitamin D is endogenous synthesis following exposure of the skin to UVB light. Photosensitivity, sunlight avoidance and the use of sun protection factor, in combination with medications prescribed to treat the symptoms of the disease, put SLE patients at increased risk of vitamin D deficiency. Decreased conversion of 25-hydroxyvitamin D to the metabolically active form, 1,25-dihydroxyvitamin D3, is also possible, as the renal impairment common in SLE places additional stress on vitamin D metabolism. The majority of studies have identified low 25-hydroxyvitamin D in SLE patients, albeit using varying cut-offs (<25 to <80 nmol/l). Of these studies, fifteen have investigated a link between status and disease activity, with conflicting results. Variation in the disease activity index measures used, alongside methodological limitations in study design, may partially explain these findings. This review discusses the importance of optimal vitamin D status in SLE, critically evaluates the research on vitamin D in SLE carried out to date, and highlights the need for a well-designed observational study that controls for diet, medication use, dietary supplements, UV exposure and seasonality, and that uses sensitive methods for measuring vitamin D status and disease activity, in order to establish conclusively the role of vitamin D in SLE.
Dietary reference values for essential trace elements are designed to meet requirements with minimal risk of deficiency and toxicity. Risk–benefit analysis requires data on habitual dietary intakes, an estimate of variation and effects of deficiency and excess on health. For some nutrients, the range between the upper and lower limits may be extremely narrow and even overlap, which creates difficulties when setting safety margins. A new approach for estimating optimal intakes, taking into account several health biomarkers, has been developed and applied to selenium, but at present there are insufficient data to extend this technique to other micronutrients. The existing methods for deriving reference values for Cu and Fe are described. For Cu, there are no sensitive biomarkers of status or health relating to marginal deficiency or toxicity, despite the well-characterised genetic disorders of Menkes disease and Wilson's disease which, if untreated, lead to lethal deficiency and overload, respectively. For Fe, the wide variation in bioavailability confounds the relationship between intake and status and complicates risk–benefit analysis. As with Cu, health effects associated with deficiency or toxicity are not easy to quantify; status is therefore the most accessible variable for risk–benefit analysis. Serum ferritin reflects Fe stores but is affected by infection/inflammation, and therefore additional biomarkers are generally employed to measure and assess Fe status. Characterising the relationship between health and dietary intake is problematic for both these trace elements owing to the confounding effects of bioavailability, inadequate biomarkers of status and a lack of sensitive and specific biomarkers for health outcomes.
Session 1: Balancing intake and output: food v. exercise
Symposium on ‘Nutrition: getting the balance right in 2010’
Satiety, which is the inhibition of eating following the end of a meal, is influenced by a number of food characteristics, including compositional and structural factors. An increased understanding of these factors and the mechanisms whereby they exert their effects on satiety may offer a food-based approach to weight management. Water and gas, which are often neglected in nutrition, are major components of many foods and contribute to volume, and to sensory and other characteristics. A review of previous short-term studies that evaluated the effects of water or gas in foods on satiety showed that while satiety was generally increased, effects on subsequent intakes were not always apparent. These studies were diverse in terms of design, timings and food matrices, which precludes definitive conclusions. However, the results indicate that solids may be more effective at increasing satiety than liquids, but gas may be as effective as water. Although increased gastric distension may be the main mechanism underlying these effects, pre-ingestive and ingestive impacts on cognitive, anticipatory and sensory responses also appear to be involved. Furthermore, there is limited evidence that water on its own may be effective at increasing satiety and decreasing intakes when drunk before, but not with, a meal. Longer-term extrapolation suggests that increasing food volumes with water or gas may offer weight-management strategies. However, from a practical viewpoint, the effects of water and gas on satiety may be best exploited by using these non-nutrients to manipulate perceived portion sizes, without increasing energy contents.
Professor Pennington was an advocate for quality in all aspects of nutrition support and its delivery, ensuring that the patient remained at the centre of all decisions, and that specialist artificial nutrition support was best managed by the multidisciplinary nutrition team and the education of the wider healthcare community. Within the conference theme of ‘Quality’, this commentary aims to outline drivers for and risks to aspects of quality in parenteral nutrition (PN) services. Quality is defined as a particular property or attribute associated with excellence; in the context of the provision of PN this can be translated to quality processes and standards in the assessment, prescription, preparation, administration and monitoring of PN. Quality products and services are delivered through the timely application of knowledge, competence, procedures and standards. Quality can be so easily compromised; inattention, ignorance and arrogance all play their part. PN is a high-risk therapy; the quality of its delivery should not be entirely dependent on the skills, knowledge and competence of those delivering this care but on accepted standards, procedures, communication, resource and infrastructure. Identification of key steps in the provision of PN and a review of the relevant patient safety data reveal points where safeguards can be put in place to ensure quality is not compromised. Full evaluation of standardisation, computerisation and competency-based training as risk-reduction strategies is required.
The health benefits associated with soya food consumption have been widely studied, with soya isoflavones and soya protein implicated in protection against CVD, osteoporosis and cancers such as those of the breast and prostate. Equol (7-hydroxy-3-(4′-hydroxyphenyl)-chroman), a metabolite of the soya isoflavone daidzein, is produced via the formation of the intermediate dihydrodaidzein by human intestinal bacteria, with only approximately 30–40% of the adult population having the ability to perform this transformation following a soya challenge. Inter-individual variation in conversion of daidzein to equol has been attributed, in part, to differences in diet and in gut microflora composition, although the specific bacteria responsible for the colonic biotransformation of daidzein to equol are yet to be identified. Equol is a unique compound in that it can exert oestrogenic effects, but is also a potent antagonist of dihydrotestosterone in vivo. Furthermore, in vitro studies suggest that equol is more biologically active than its parent compound, daidzein, with a higher affinity for the oestrogen receptor and a more potent antioxidant activity. Although some observational and intervention studies suggest that the ability to produce equol is associated with reduced risk of breast and prostate cancer and CVD, improved bone health and reduced incidence of hot flushes, others have reported null or adverse effects. Studies to date have been limited, and well-designed studies that are sufficiently powered to investigate the relationship between equol production and disease risk are warranted before the clinical relevance of the equol phenotype can be fully elucidated.
Enteral feeding (or ‘tube feeding’) is a very common inpatient intervention to maintain nutritional status where the oral route is inadequate, unsafe or inaccessible. A proportion of patients will need to continue tube feeding in the community after their admission and will require a gastrostomy tube. Although gastrostomy insertion is relatively straightforward, it is not without complications in an often frail and vulnerable group of patients and a multidisciplinary approach is necessary to ensure that the procedure is appropriate. Some patients are better managed with careful assisted hand feeding or nasogastric tubes. Particular care needs to be taken in deciding whether patients with dementia should have a gastrostomy in view of data suggesting that this group of patients have a particularly poor prognosis after the procedure. Decisions regarding the provision of enteral nutrition at the end of life or where patients are not competent to make an informed judgement are particularly challenging and need to be made on a case-by-case basis.
High-fat diet-induced obesity is associated with a chronic state of low-grade inflammation, which predisposes to insulin resistance (IR) and can subsequently lead to type 2 diabetes mellitus. Macrophages represent a heterogeneous population of cells that are instrumental in initiating the innate immune response. Recent studies have shown that macrophages are key mediators of obesity-induced IR, with a progressive infiltration of macrophages into obese adipose tissue. These adipose tissue macrophages are referred to as classically activated (M1) macrophages. They release cytokines such as IL-1β, IL-6 and TNFα, creating a pro-inflammatory environment that blocks adipocyte insulin action and contributes to the development of IR and type 2 diabetes mellitus. In lean individuals macrophages are in an alternatively activated (M2) state. M2 macrophages are involved in wound healing and immunoregulation. Wound-healing macrophages play a major role in tissue repair and homoeostasis, while immunoregulatory macrophages produce IL-10, an anti-inflammatory cytokine, which may protect against inflammation. The functional role of T-cell accumulation in adipose tissue has recently been characterised. Cytotoxic T-cells are effector T-cells and have been implicated in macrophage differentiation, activation and migration. Infiltration of cytotoxic T-cells into obese adipose tissue is thought to precede macrophage accumulation. T-cell-derived cytokines such as interferon γ promote the recruitment and activation of M1 macrophages, augmenting adipose tissue inflammation and IR. Manipulating adipose tissue macrophage/T-cell activity and accumulation in vivo through dietary fat modification may attenuate adipose tissue inflammation, representing a therapeutic target for ameliorating obesity-induced IR.
Unlike energy expenditure, energy intake occurs during discrete events: snacks and meals. The prevailing view is that meal size is governed by physiological and psychological events that promote satiation towards the end of a meal. This review explores an alternative and perhaps controversial proposition: specifically, that satiation plays a secondary role, and that meal size (kJ) is controlled by decisions about portion size made before a meal begins. Recently, techniques have been developed that enable us to quantify ‘expected satiation’ and ‘expected satiety’ (respectively, the fullness and the respite from hunger that foods are expected to confer). When compared on a kJ-for-kJ basis, these expectations differ markedly across foods. Moreover, in self-selected meals, these measures are remarkably good predictors of the energy content of food that ends up on our plate; indeed, they are better predictors than palatability. Expected satiation and expected satiety are influenced by the physical characteristics of a food (e.g. perceived volume). However, they are also learned. Indeed, there is now mounting evidence for ‘expected-satiation drift’, a general tendency for a food to have higher expected satiation as it increases in familiarity. Together, these findings show that important elements of control (discrimination and learning/adaptation) are clearly evident in plans around portion size. Since most meals are eaten in their entirety, understanding the nature of these controls should be given high priority.