In Australia, aeromedical retrieval provides a vital link for rural communities with limited health services to definitive care in urban centers. Yet, there are few studies of aeromedical patient experiences and outcomes, or clear measures of the service quality provided to these patients.
Study Objective:
This study explores whether a previously developed quality framework could usefully be applied to existing air ambulance patient journeys (ie, the sequences of care that span multiple settings; prehospital and hospital-based pre-flight, flight transport, after-flight hospital in-patient, and disposition). The study aimed to use linked data from aeromedical, emergency department (ED), and hospital sources, and from death registries, to document and analyze patient journeys.
Methods:
A previously developed air ambulance quality framework was used to place patient, prehospital, and in-hospital service outcomes in relevant quality domains identified from the Institute of Medicine (IOM) and Donabedian models. To understand the aeromedical patients’ journeys, data from all relevant sources were linked by unique patient identifiers, and the outcomes of the resulting analyses were applied to the air ambulance quality framework.
Results:
Overall, air ambulance referral pathways could be classified into three categories: Intraregional (those retrievals which stayed within the region), Out of Region, and Into Region. Patient journeys and service outcomes varied markedly between referral pathways. Prehospital and in-hospital service variables and patient outcomes showed that the framework could be used to explore air ambulance service quality.
Conclusion:
The air ambulance quality framework can usefully be applied to air ambulance patient experiences and outcomes using linked data analysis. The framework can help guide prehospital and in-hospital performance reporting. Given the marked variation between regional referral pathways, this knowledge will aid planning within the local service. The study successfully linked data from aeromedical, ED, in-hospital, and death sources and explored the aeromedical patients’ journeys.
This study was intended to assess the prevalence of disordered eating attitudes and the nutritional status of adolescent girls in Saudi Arabia. Disordered eating attitudes and behaviour were assessed using the EAT-26. The type of eating disorder (ED) was determined using the Diagnostic and Statistical Manual of Mental Disorders, Fifth Edition. The nutritional status of the adolescent girls was determined by measuring their weight and height twice using standard protocols. BMI-for-age and height-for-age were defined using WHO growth charts. Comparisons between adolescent girls with and without EDs were conducted using SPSS version 26. EDs were prevalent among 10⋅2 % of the girls. Other specified feeding or eating disorder was the most prevalent ED (7⋅6 %), followed by unspecified feeding or eating disorder (2⋅4 %). Anorexia nervosa was present in 0⋅3 % of the girls. The adolescents with EDs were either overweight (7⋅7 %), obese (10⋅3 %), stunted (7⋅7 %) or severely stunted (2⋅6 %). ANOVA revealed that BMI-for-age was influenced by age (P = 0⋅028), the type of ED (P = 0⋅019) and the EAT-26 score (P < 0⋅0001). Pearson's correlation showed that the EAT-26 score increased significantly with BMI (r 0⋅22, P = 0⋅0001), height (r 0⋅12, P = 0⋅019) and weight (r 0⋅22, P = 0⋅0001). Early detection of EDs among adolescents is highly recommended to reduce the risk of future health impairment. Nutrition professionals must target adolescents, teachers and parents and provide nutritional education about the early signs and symptoms of EDs and the benefits of following a healthy dietary pattern.
Optimal maternal long-chain PUFA (LCPUFA) status is essential for the developing fetus. The fatty acid desaturase (FADS) genes are involved in the endogenous synthesis of LCPUFA. The minor alleles of various FADS SNPs have been associated with increased maternal concentrations of the precursors linoleic acid (LA) and α-linolenic acid (ALA), and lower concentrations of arachidonic acid (AA) and DHA. There is limited research on the influence of FADS genotype on cord PUFA status. The current study investigated the influence of maternal and child genetic variation in FADS genotype on cord blood PUFA status in a high fish-eating cohort. Cord blood samples (n 1088) collected from the Seychelles Child Development Study (SCDS) Nutrition Cohort 2 (NC2) were analysed for total serum PUFA. Of those with cord PUFA data available, maternal (n 1062) and child (n 916) genotypes for FADS1 (rs174537 and rs174561), FADS2 (rs174575), and FADS1-FADS2 (rs3834458) were determined. Regression analysis determined that maternal minor allele homozygosity was associated with lower cord blood concentrations of DHA and the sum of EPA + DHA. Lower cord blood AA concentrations were observed in children who were minor allele homozygous for rs3834458 (β = 0·075; P = 0·037). Children who were minor allele carriers for rs174537, rs174561, rs174575 and rs3834458 had a lower cord blood AA:LA ratio (P < 0·05 for all). Both maternal and child FADS genotype were associated with cord LCPUFA concentrations; therefore, the influence of FADS genotype was observed despite the high intake of preformed dietary LCPUFA from fish in this population.
All the NHSTs in previous chapters compare two dependent variable means. When there are three or more group means, it is possible to use unpaired two-sample t-tests for each pair of group means, but there are two problems with this strategy. First, as the number of groups increases, the number of t-tests required grows even faster: k groups require k(k − 1)/2 pairwise tests. Second, the risk of Type I error increases with each additional t-test.
The analysis of variance (ANOVA) fixes both problems. Its null hypothesis is that all group means are equal. ANOVA follows the same eight steps as other NHST procedures. ANOVA produces an effect size, η2. The η2 effect size can be interpreted in two ways. First, η2 quantifies the percentage of dependent variable variance that is shared with the independent variable’s variance. Second, η2 measures how much better the group mean functions as a predicted score when compared to the grand mean.
ANOVA only says whether a difference exists – not which means differ from other means. To determine this, a post hoc test is frequently performed. The most common procedure is Tukey’s test. This helps researchers identify the location of the difference(s).
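The calculations described above can be sketched in a few lines. This is a minimal illustration with made-up scores for three groups (not data from any study): it computes the between- and within-group sums of squares, the F statistic, and the η² effect size as the share of total variance attributable to group membership. A post hoc procedure such as Tukey's test would follow a significant F, but it is not implemented here.

```python
from statistics import mean

# Hypothetical scores for three groups (illustrative data only)
groups = [
    [4.0, 5.1, 6.2, 5.5],
    [6.8, 7.0, 7.9, 6.5],
    [5.0, 4.2, 4.8, 5.6],
]

all_scores = [x for g in groups for x in g]
grand_mean = mean(all_scores)

# Between-group ("treatment") and within-group ("error") sums of squares
ss_between = sum(len(g) * (mean(g) - grand_mean) ** 2 for g in groups)
ss_within = sum((x - mean(g)) ** 2 for g in groups for x in g)
ss_total = ss_between + ss_within

df_between = len(groups) - 1
df_within = len(all_scores) - len(groups)

F = (ss_between / df_between) / (ss_within / df_within)

# Effect size: share of dependent-variable variance shared with the
# independent (grouping) variable
eta_squared = ss_between / ss_total

print(f"F({df_between}, {df_within}) = {F:.2f}, eta^2 = {eta_squared:.2f}")
```

Comparing the F statistic against the critical value of the F distribution for the given degrees of freedom completes the usual NHST steps.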
It is difficult to obtain an accurate blood pressure (BP) measurement, especially in the prehospital environment. It is not known fully how various BP measurement techniques differ from one another.
Study Objective:
The study hypothesized that there are differences in the accuracy of various non-invasive blood pressure (NIBP) measurement strategies as compared to the gold standard of intra-arterial (IA) measurement.
Methods:
The study enrolled adult intensive care unit (ICU) patients with radial IA catheters placed to measure radial intra-arterial blood pressure (RIBP) as a part of their standard care at a large, urban, tertiary-care Level I trauma center. Systolic blood pressure (SBP) was taken by three different NIBP techniques (oscillometric, auscultated, and palpated) and compared to RIBP measurements. Data were analyzed using the paired t-test to detect differences between RIBP measurements and each NIBP method. The primary outcome was the difference between RIBP and NIBP measurements. There was also a predetermined subgroup analysis based on gender, body mass index (BMI), primary diagnosis requiring IA line placement, and current vasoactive medication use.
Results:
Forty-four patients were enrolled to detect a predetermined clinically significant difference of 5mmHg in SBP. The patient population was 63.6% male and 36.4% female with an average age of 58.4 years. The most common primary diagnoses were septic shock (47.7%), stroke (13.6%), and increased intracranial pressure (ICP; 13.6%). Most patients were receiving some form of sedation (63.4%), while 50.0% were receiving vasopressor medication and 31.8% were receiving anti-hypertensive medication. When compared to RIBP values, only the palpated SBP values had a clinically significant difference (9.88mmHg less than RIBP; P < .001). When compared to RIBP, the oscillometric and auscultated SBP readings showed statistically, but not clinically, significant lower values. The palpated method also showed a clinically significant lower SBP reading than the oscillometric method (5.48mmHg; P < .001) and the auscultated method (5.06mmHg; P < .001). There was no significant difference between the oscillometric and auscultated methods (0.42mmHg; P = .73).
Conclusion:
Overall, NIBP methods significantly underestimated RIBP measurements. Palpated BP measurements were consistently lower than RIBP, a difference that was both statistically and clinically significant. These results raise concern about the accuracy of palpated BP and its pervasive use in prehospital care. The data also suggest that auscultated and oscillometric BP may provide similar measurements.
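The paired-samples comparison used in the methods above can be sketched as follows. The readings are hypothetical, not the study's data: the t statistic is the mean within-patient difference divided by its standard error, which is exactly how a paired t-test contrasts each NIBP method against the RIBP reference.

```python
from math import sqrt
from statistics import mean, stdev

# Hypothetical paired systolic readings (mmHg): intra-arterial vs palpated
ribp     = [132, 118, 145, 126, 139, 150, 121, 128]
palpated = [124, 110, 133, 118, 131, 138, 112, 120]

# Paired design: analyze the within-patient differences
diffs = [a - b for a, b in zip(ribp, palpated)]
n = len(diffs)

# Paired t statistic: mean difference over its standard error
t = mean(diffs) / (stdev(diffs) / sqrt(n))

print(f"mean difference = {mean(diffs):.1f} mmHg, t({n - 1}) = {t:.2f}")
```

The resulting t statistic would then be compared against the t distribution with n − 1 degrees of freedom to obtain the P value.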
Manual ventilation with a bag-valve device (BVD) is a Basic Life Support skill. Prolonged manual ventilation may be required in resource-poor locations and in severe disasters such as hurricanes, pandemics, and chemical events. In such circumstances, trained operators may not be available and lay persons may need to be quickly trained to do the job.
Objectives:
The current study investigated whether minimally trained operators were able to manually ventilate a simulated endotracheally intubated patient for six hours.
Methods:
Two groups of 10 volunteers, previously unfamiliar with manual ventilation, received brief, structured BVD-tube ventilation training and performed six hours of manual ventilation on an electronic lung simulator. Operator cardiorespiratory variables and perceived effort, as well as the quality of the delivered ventilation, were recorded. Group One ventilated a “normal lung” (compliance 50cmH2O/L, resistance 5cmH2O/L/min). Group Two ventilated a “moderately injured lung” (compliance 20cmH2O/L, resistance 20cmH2O/L/min).
Results:
Volunteers’ blood pressure, heart rate (HR), respiratory rate (RR), and peripheral capillary oxygen saturation (SpO2) were stable throughout the study. Perceived effort was minimal. The two groups provided clinically adequate and similar RRs (13.3 [SD = 3.0] and 14.1 [SD = 2.5] breaths/minute, respectively) and minute volume (MV; 7.6 [SD = 2.1] and 7.7 [SD = 1.4] L/minute, respectively).
Conclusions:
The results indicate that minimally trained persons can effectively perform six hours of manual BVD-tube ventilation of normal and moderately injured lungs, without undue effort. Quality of delivered ventilation was clinically adequate.
This manuscript summarizes the global incidence, exposures, mortality, and morbidity associated with extreme weather event (EWE) disasters over the past 50 years (1969-2018).
Methods:
A historical database (1969-2018) was created from the Emergency Events Database (EM-DAT) to include all disasters caused by seven EWE hazards (ie, cyclones, droughts, floods, heatwaves, landslides, cold weather, and storms). The annual incidence of EWE hazards and rates of exposure, morbidity, and mortality were calculated. Regression analysis and analysis of variance (ANOVA) calculations were performed to evaluate the association between the exposure rate and the hazard incidence rate, as well as the association between morbidity and mortality incidence rates and rates of human exposure and annual EWE incidence.
Results:
From 1969-2018, 10,009 EWE disasters caused 2,037,415 deaths and 3,998,466 cases of disease. A reported 7,350,276,440 persons required immediate assistance. Floods and storms were the most common. Most (89%) of EWE-related disaster mortality was caused by storms, droughts, and floods. Nearly all (96%) of EWE-related disaster morbidity was caused by cold weather, floods, and storms. Regression analysis revealed strong evidence (R2 = 0.88) that the annual incidence of EWE disasters is increasing world-wide, and ANOVA calculations identified an association between human exposure rates and hazard incidence (P value = .01). No significant trends were noted for rates of exposure, morbidity, or mortality.
Conclusions:
The annual incidence of EWEs appears to be increasing. The incidence of EWEs also appears to be associated with rates of human exposure. However, there is insufficient evidence of an associated increase in health risk or human exposures to EWEs over time.
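A trend analysis of the kind reported above can be sketched with ordinary least squares. The annual counts below are hypothetical, not EM-DAT figures: the code fits a line to counts over years and reports the slope and R², the share of variance in counts explained by the linear trend.

```python
from statistics import mean

# Hypothetical annual disaster counts for a short run of years
years  = list(range(2009, 2019))
counts = [180, 195, 190, 210, 205, 220, 230, 228, 245, 252]

xbar, ybar = mean(years), mean(counts)
sxx = sum((x - xbar) ** 2 for x in years)
sxy = sum((x - xbar) * (y - ybar) for x, y in zip(years, counts))

# Least-squares slope and intercept
slope = sxy / sxx
intercept = ybar - slope * xbar

# R^2: share of variance in counts explained by the linear trend
ss_tot = sum((y - ybar) ** 2 for y in counts)
ss_res = sum((y - (intercept + slope * x)) ** 2 for x, y in zip(years, counts))
r_squared = 1 - ss_res / ss_tot

print(f"trend = {slope:.1f} disasters/year, R^2 = {r_squared:.2f}")
```

A high R², as in the study's reported 0.88, indicates that a linear year-on-year increase accounts for most of the variation in annual incidence.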
Tognini-Bonelli (2001) made the following distinction between corpus-based and corpus-driven studies: while corpus-based studies start with pre-existing theories which are tested using corpus data, in corpus-driven studies the hypothesis is derived by examination of the corpus evidence. This chapter gives an overview of the two families of statistical tests suited to these two approaches. For corpus-based approaches, we use more traditional statistics, such as the t-test or ANOVA, which return a p-value telling us to what extent we should accept or reject the initial hypothesis. Multi-level modelling (also known as mixed modelling) is a newer technique which shows considerable promise for corpus-based studies, and will also be described here and used to analyse the ENNTT subset of the Europarl corpus. Multi-level modelling is useful for the examination of hierarchically structured or “nested” data, where for example translations may be “nested” together in a class if they have the same language of origin. A multi-level model takes account both of the variation between individual translations and the variation between classes. For example, we might expect the scores (such as vocabulary richness or readability scores) of two translations in the same class to be more similar to each other than those of two translations in different classes.
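The variance-partitioning idea behind multi-level modelling can be illustrated with a minimal sketch. The scores and class labels below are invented: the code estimates between-class and within-class variance components from a balanced one-way layout and reports the intraclass correlation, the fraction of total variance attributable to class membership, which is what motivates modelling the nesting explicitly rather than pooling all translations.

```python
from statistics import mean

# Hypothetical vocabulary-richness scores for translations nested in
# source-language classes (class label -> scores); illustrative only
classes = {
    "de": [52.1, 54.3, 53.0, 55.2],
    "fr": [47.8, 46.5, 48.9, 47.1],
    "it": [50.2, 49.7, 51.3, 50.8],
}

k = len(classes)                      # number of classes
n = 4                                 # translations per class (balanced)
grand = mean(x for g in classes.values() for x in g)

# One-way random-effects ANOVA estimators of the variance components
ms_between = n * sum((mean(g) - grand) ** 2 for g in classes.values()) / (k - 1)
ms_within = sum((x - mean(g)) ** 2 for g in classes.values() for x in g) / (k * (n - 1))

# Method-of-moments estimate of the between-class variance (floored at 0)
var_between = max((ms_between - ms_within) / n, 0.0)
icc = var_between / (var_between + ms_within)

print(f"intraclass correlation = {icc:.2f}")
```

A high intraclass correlation means two translations from the same class are much more alike than two from different classes, so a model ignoring the nesting would understate its standard errors.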
The Richter Scale measures the magnitude of a seismic occurrence, but it does not quantify the magnitude of the “disaster” at the point of impact in terms of real humanitarian needs, based on United Nations International Strategy for Disaster Reduction (UNISDR; Geneva, Switzerland) 2009 Disaster Terminology. A Disaster Severity Index (DSI), similar to the Richter Scale and the Mercalli Scale, has been formulated; it quantifies needs holistically and objectively, can be used by any stakeholder, and allows comparison across timelines.
Background
An agreed terminology for quantifying “disaster” matters; inconsistent measurement by stakeholders has posed a global challenge in formulating legislation and policies to respond to disasters.
Methods
A quantitative, mathematical calculation was used, with a median score percentage of 100% as the baseline indicating the ability to cope within local capacity. Seventeen indicators were selected based on the UNISDR 2009 disaster definition, with vulnerability, exposure, and a holistic approach as pre-conditions. The severity of the disaster is defined as the level of unmet needs. Thirty natural disasters were tested retrospectively, and non-parametric tests were used to test the correlation of the DSI score against the indicators.
Results
The findings showed that 20 out of the 30 natural disasters tested met the disaster-terminology criterion of inability to cope within local capacity. Non-parametric tests showed that there was a correlation between the 30 DSI scores and the indicators.
Conclusion
By computing a median fit percentage score of 100% as the ability to cope, and correlating the 17 indicators in this DSI Scale, 20 natural disasters fitted the disaster definition. This DSI will enable humanitarian stakeholders to measure and compare the severity of disasters objectively, as well as enable future responses to be based on needs.
Yew YY, Castro Delgado R, Heslop DJ, Arcos González P. The Yew Disaster Severity Index: A New Tool in Disaster Metrics. Prehosp Disaster Med. 2019;34(1):8–19.
Crop scientists occasionally compute sample correlations between traits based on observed data from designed experiments, and this is often accompanied by significance tests of the null hypothesis that traits are uncorrelated. This simple approach does not account for effects due to the randomization layout and treatment structure of the experiments, and hence statistical inference based on standard procedures is not appropriate. The present paper describes how valid inferences accounting for all relevant effects can be obtained using bivariate mixed linear model procedures. A salient feature of the approach is that the bivariate model is commensurate with the model used for univariate analysis of individual traits and allows bivariate correlations to be computed at the level of effects. Heterogeneity of correlations between effects can be assessed by likelihood ratio tests or by graphical inspection of bivariate scatter plots of effect estimates. If heterogeneity is found to be substantial, it is crucial to focus on the correlation of effects, and usually the main interest will be in the treatment effects. If heterogeneity is judged to be negligible, the marginal correlation can be estimated from the bivariate model for an overall assessment of association. The proposed methods are illustrated using four examples. Hints are given to alternative routes of analysis accounting for all treatment and design effects, such as regression with groups and analysis of covariance.
Pediatric traumatic brain injury (TBI) is a leading cause of long-term disability in children and adolescents worldwide. Among the wide array of consequences known to occur after pediatric TBI, behavioral impairments are some of the most widespread and may particularly affect children who sustain injury early in the course of development. The aim of this study was to investigate the presence of internalizing and externalizing behavioral problems 6 months after preschool (i.e. 18–60 months old) mild TBI.
Methods
This work is part of a prospective, longitudinal cohort study of preschool TBI. Participants (N = 229) were recruited to one of three groups: children with mild TBI, typically developing children, and orthopedically injured (OI) children. Mothers of children in all three groups completed the Child Behavior Checklist as a measure of behavioral outcomes 6 months post-injury. Demographics, injury-related characteristics, level of parental distress, and estimates of pre-injury behavioral problems were also documented.
Results
The three groups did not differ on baseline characteristics (e.g. demographics and pre-injury behavioral problems for the mild TBI and OI groups) and level of parental distress. Mothers’ ratings of internalizing and externalizing behaviors were higher in the mild TBI group compared with the two control groups. Pre-injury behavioral problems and maternal distress were found to be significant predictors of outcome.
Conclusion
Our results show that even in its mildest form, preschool TBI may cause disruption to the immature brain serious enough to result in behavioral changes, which persist for several months post-injury.
The primary goal of this study was to compare paramedic first pass success rate between two different video laryngoscopes and direct laryngoscopy (DL) under simulated prehospital conditions in a cadaveric model.
Methods
This was a non-randomized, group-controlled trial in which five non-embalmed, non-frozen cadavers were intubated under prehospital spinal immobilization conditions using DL and both the GlideScope Ranger (GL; Verathon Inc, Bothell, Washington USA) and the VividTrac VT-A100 (VT; Vivid Medical, Palo Alto, California USA). Participants had to intubate each cadaver with each of the three devices (DL, GL, or VT) in a randomly assigned order. Paramedics were given 31 seconds for an intubation attempt and a maximum of three attempts per device to successfully intubate each cadaver. Successful endotracheal intubation (ETI) was confirmed by one of the six on-site physicians.
Results
Successful ETI within three attempts across all devices occurred 99.5% of the time overall and individually 98.5% of the time for VT, 100.0% of the time for GL, and 100.0% of the time for DL. First pass success overall was 64.4%. Individually, first pass success was 60.0% for VT, 68.8% for GL, and 64.5% for DL. A chi-square test revealed no statistically significant difference amongst the three devices for first pass success rates (P=.583). Average time to successful intubation was 42.2 seconds for VT, 38.0 seconds for GL, and 33.7 seconds for DL. The average number of intubation attempts for each device was as follows: 1.48 for VT, 1.40 for GL, and 1.42 for DL.
Conclusion
There was no statistically significant difference in first pass or overall successful ETI rates between DL and video laryngoscopy (VL) with either the GL or VT (adult).
Hodnick R, Zitek T, Galster K, Johnson S, Bledsoe B, Ebbs D. A Comparison of Paramedic First Pass Endotracheal Intubation Success Rate of the VividTrac VT-A 100, GlideScope Ranger, and Direct Laryngoscopy Under Simulated Prehospital Cervical Spinal Immobilization Conditions in a Cadaveric Model. Prehosp Disaster Med. 2017;32(6):621–624.
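The chi-square test of independence used above can be sketched by hand. The per-device counts below are hypothetical, chosen only to approximate the reported first pass percentages: the statistic sums (observed − expected)²/expected over the cells of the device-by-outcome contingency table.

```python
# Hypothetical first-pass counts per device: (successes, failures),
# chosen to roughly match the reported 60.0% / 68.8% / 64.5% rates
table = {"VT": (27, 18), "GL": (31, 14), "DL": (29, 16)}

row_totals = {d: s + f for d, (s, f) in table.items()}
col_totals = [sum(s for s, _ in table.values()), sum(f for _, f in table.values())]
grand_total = sum(row_totals.values())

# Pearson chi-square statistic: sum of (observed - expected)^2 / expected,
# where expected = row total * column total / grand total
chi2 = 0.0
for d, cells in table.items():
    for obs, col in zip(cells, col_totals):
        exp = row_totals[d] * col / grand_total
        chi2 += (obs - exp) ** 2 / exp

df = (len(table) - 1) * (2 - 1)
print(f"chi2({df}) = {chi2:.3f}")
```

A small chi-square statistic relative to its degrees of freedom, as here, is consistent with the study's non-significant result (P=.583).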
The Panama Canal (PC) has recently been in the world spotlight. In August 2014, it celebrated 100 years of uninterrupted service and, in June 2016, the expansion project for the canal was inaugurated. The final project involved building a third set of locks. Once the canal started to operate, it could be seen that the way in which vessels transited the canal remained the same. However, the dimensions of locks and their revised operating procedures have had an effect on vessel size and the manoeuvres for the larger vessels. After the first transit on 26 June 2016, it was possible to have access to data on the new lockage systems for Neopanamax ships. The thorough statistical study of these new datasets (composed of Analysis of Variance (ANOVA), multivariate regression and statistical quality control techniques) has shown the main drivers of transit time across the Cocolí and Agua Clara locks. It has also made it possible to test the learning curve of Panama Canal pilots in the newly expanded canal. The effects of pilot training on the time it takes to transit through the locks, direction of entry in each lock, the type of vessel, vessel dimensions and the use of different types of manoeuvres have been analysed. The results are used to characterise and help optimise the performance of this new and unique lock system.
The American Heart Association (AHA; Dallas, Texas USA) and European Resuscitation Council (Niel, Belgium) cardiac arrest (CA) guidelines recommend the intraosseous (IO) route when intravenous (IV) access cannot be obtained. Vasopressin has been used as an alternative to epinephrine to treat ventricular fibrillation (VF).
Hypothesis/Problem
Limited data exist on the pharmacokinetics and resuscitative effects of vasopressin administered by the humeral IO (HIO) route for treatment of VF. The purpose of this study was to evaluate the effects of HIO and IV vasopressin on the occurrence, odds, and time of return of spontaneous circulation (ROSC), and on pharmacokinetic measures, in a swine model of VF.
Methods
Twenty-seven Yorkshire-cross swine (60 to 80 kg) were assigned randomly to three groups: HIO (n=9), IV (n=9), and a control group (n=9). Ventricular fibrillation was induced and left untreated for two minutes. Chest compressions began at two minutes post-arrest, and vasopressin (40 U) was administered at four minutes post-arrest. Serial blood specimens were collected for four minutes, then the swine were resuscitated until ROSC or until 29 post-arrest minutes had elapsed.
Results
Fisher’s Exact test determined ROSC was significantly higher in the HIO 5/7 (71.5%) and IV 8/11 (72.7%) groups compared to the control 0/9 (0.0%; P=.001). Odds ratios of ROSC indicated no significant difference between the treatment groups (P=.68) but significant differences between the HIO and control, and the IV and control groups (P=.03 and .01, respectively). Analysis of Variance (ANOVA) indicated the mean time to ROSC for HIO and IV was 621.20 seconds (SD=204.21 seconds) and 554.50 seconds (SD=213.96 seconds), respectively, with no significant difference between the groups (U=11; P=.22). Multivariate Analysis of Variance (MANOVA) revealed the maximum plasma concentration (Cmax) and time to maximum concentration (Tmax) of vasopressin in the HIO and IV groups was 71,753.9 pg/mL (SD=26,744.58 pg/mL) and 61,853.7 pg/mL (SD=22,745.04 pg/mL); 111.42 seconds (SD=51.3 seconds) and 114.55 seconds (SD=55.02 seconds), respectively. Repeated measures ANOVA indicated no significant difference in plasma vasopressin concentrations between the treatment groups over four minutes (P=.48).
Conclusions
The HIO route delivered vasopressin effectively in a swine model of VF. Occurrence, time, and odds of ROSC, as well as pharmacokinetic measurements of HIO vasopressin, were comparable to IV.
Burgert JM, Johnson AD, Garcia-Blanco J, Fulton LV, Loughren MJ. The Resuscitative and Pharmacokinetic Effects of Humeral Intraosseous Vasopressin in a Swine Model of Ventricular Fibrillation. Prehosp Disaster Med. 2017;32(3):305–310.
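Fisher's Exact test, used above for the small ROSC counts, can be sketched directly from the hypergeometric distribution. The 2×2 table below uses the ROSC counts reported in the abstract (HIO 5/7 vs control 0/9), but the one-sided tail computation here is only an illustration of the method, not a reproduction of the study's analysis.

```python
from math import comb

def fisher_exact_one_sided(a, b, c, d):
    """One-sided Fisher's exact P for the 2x2 table [[a, b], [c, d]]:
    probability of observing >= a successes in row 1 under the
    hypergeometric null of no association."""
    row1, col1, n = a + b, a + c, a + b + c + d
    return sum(
        comb(col1, x) * comb(n - col1, row1 - x) / comb(n, row1)
        for x in range(a, min(row1, col1) + 1)
    )

# ROSC vs no ROSC: HIO group 5/7 vs control group 0/9 (from the abstract)
p = fisher_exact_one_sided(5, 2, 0, 9)
print(f"one-sided P = {p:.4f}")
```

Fisher's test is preferred over chi-square here because several expected cell counts are below 5, which makes the chi-square approximation unreliable.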
There are various reasons for using statistics, but perhaps the most important is that the biological sciences are empirical sciences. There is always an element of variability that can only be dealt with by applying statistics. Essentially, statistics is a way to summarize the variability of data so that we can confidently say whether there is a difference among treatments or among regression parameters and tell others about the variability of the results. To that end, we must use the most appropriate statistics to get a “correct” picture of the experimental variability, and the best way of doing that is to report the size of the parameters or the means and their associated standard errors or confidence intervals. Simply declaring that the yields were 1 or 2 ton ha−1 does not mean anything without associated standard errors for those yields. Another driving force is that no journal will accept publications without the data having been subjected to some kind of statistical analysis.
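The recommendation above, reporting means together with their standard errors or confidence intervals, can be sketched as follows. The yields are invented for illustration; the 95% interval uses the t critical value for 5 degrees of freedom (2.571).

```python
from math import sqrt
from statistics import mean, stdev

# Hypothetical plot yields (t/ha) for one treatment; illustrative only
yields = [1.8, 2.1, 1.9, 2.3, 2.0, 1.7]

n = len(yields)
# Standard error of the mean: sample SD over sqrt(n)
se = stdev(yields) / sqrt(n)

# 95% CI using the t critical value for df = n - 1 = 5
t_crit = 2.571
lo, hi = mean(yields) - t_crit * se, mean(yields) + t_crit * se

print(f"yield = {mean(yields):.2f} +/- {se:.2f} t/ha (95% CI {lo:.2f}-{hi:.2f})")
```

Reported this way, a yield figure carries its own measure of experimental variability, which is exactly what a bare "2 t/ha" lacks.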
Evaluation of economic outcome associated with a given weed management system is an important component in the decision-making process within crop production systems. The objective of this research was to investigate how risk-efficiency criteria could be used to improve herbicide-based weed management decision making, assuming different risk preferences among growers. Data were obtained from existing weed management trials in corn conducted at the University of Minnesota Southern Research and Outreach Center at Waseca. Weed control treatments represented a range of practices including one-pass soil-applied, one-pass postemergence, and sequential combinations of soil and postemergence herbicide application systems. Analysis of risk efficiency across 23 herbicide-based weed control treatments was determined with the mean variance and stochastic dominance techniques. We show how these techniques can result in different outcomes for the decision maker, depending on risk attitudes. For example, mean variance and stochastic dominance techniques are used to evaluate risk associated with one- vs. two-pass herbicide treatments with and without cultivation. Based on these analyses, it appears that a one-pass system is preferred by a risk-averse grower. However, we argue that this may not be the best option considering potential changes in weed emergence patterns, application timing concerns, etc. The techniques for economic analysis of weed control data outlined in this article will help growers match herbicide-based weed management systems to their own production philosophies based on economic risk.
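The two risk-efficiency criteria named above can be contrasted in a short sketch. The per-hectare returns are hypothetical, constructed so that a lower-variance system and a higher-spread system face off: mean-variance ranks them by mean and spread, while first-degree stochastic dominance compares their entire empirical distributions.

```python
from statistics import mean, pstdev

# Hypothetical net returns ($/ha) for two weed-control systems over 8 years
one_pass = [310, 290, 305, 280, 300, 295, 315, 285]
two_pass = [340, 240, 355, 230, 345, 250, 360, 235]

# Mean-variance criterion: a risk-averse grower prefers higher mean,
# lower spread
for name, r in [("one-pass", one_pass), ("two-pass", two_pass)]:
    print(f"{name}: mean = {mean(r):.0f}, sd = {pstdev(r):.0f}")

def fsd(a, b):
    """First-degree stochastic dominance: True if a's empirical CDF lies
    at or below b's everywhere (a is never the worse gamble)."""
    def cdf(r, x):
        return sum(v <= x for v in r) / len(r)
    return all(cdf(a, x) <= cdf(b, x) for x in sorted(set(a) | set(b)))

print("one-pass FSD two-pass:", fsd(one_pass, two_pass))
print("two-pass FSD one-pass:", fsd(two_pass, one_pass))
```

In this constructed example neither system dominates by the first-degree criterion, yet mean-variance favors the steadier one-pass system, illustrating how the two techniques can point decision makers with different risk attitudes toward different choices.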
The interaction between wheat and perennial ryegrass seed density and seedlings of different ages was studied under controlled conditions. Root length of perennial ryegrass, after sowing, was suppressed by wheat and was dependent on the density of wheat seeds. Shoot growth of perennial ryegrass, however, was unaffected by the presence of wheat. Perennial ryegrass density had no effect on the first 2 wk of wheat seedling growth. The age of wheat seedlings had no appreciable influence on either root or shoot growth of perennial ryegrass. The present study highlights the need for an unbiased two-way experimental design to identify the dominant competitor.
In the paper, the influence of shape memory alloy (SMA) by varying the parameters such as volume fraction, orientation, and temperature on the hybrid-SMA composite laminate subjected to low-velocity impact is studied. A theoretical model for the composite laminated plate bonded with SMA reinforced layers is presented. The constitutive relation of the SMA layer is obtained by using the method of micromechanics. The governing relations obtained can be used for theoretical predications of thermomechanical properties of SMA plies in this paper. The analytical expressions for the hybrid SMA composite plate are derived based on Tanaka's constitutive equation and linear phase transformation kinetics presented by Liang and Rogers.
The laminated plate theory, first-order shear deformation theory and minimal potential energy principle is utilized to solve the governing equations of the hybrid composite plate and calculate the absorbed energies including tensile, shear and bending.
An orthogonal array and analysis of variance is employed to investigate the influence of the mentioned parameters on the energy absorption of the hybrid laminated plate. The results showed that the effects of the phase transformation temperature are more significant than the effects of the volume fraction and orientation of SMA on structural energy absorption.
It is well documented that “unanticipated” information contained in United States Department of Agriculture (USDA) crop reports induces large price reactions in corn and soybean markets. Thus, a natural question that arises from this literature is: To what extent are futures hedges able to remove or reduce increased price risk around report release dates? This paper addresses this question by simulating daily futures returns, daily cash returns, and daily hedged returns around report release dates for two storable commodities (corn and soybeans) in two market settings (North Central Illinois and Memphis, Tennessee). Various risk measures, including “Value at Risk,” are used to determine hedging effectiveness, and “Analysis of Variance” is used to uncover the underlying factors that contribute to hedging effectiveness.
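The "Value at Risk" measure mentioned above can be sketched with the simple historical method. The daily returns are hypothetical: VaR at the 5% level is the loss threshold exceeded only on the worst 5% of days in the sample.

```python
# Hypothetical daily hedged returns (%) around a report release date
returns = [-1.2, 0.4, -0.3, 2.1, -2.5, 0.8, -0.6, 1.5, -1.8, 0.2,
           -0.9, 1.1, -0.1, 0.6, -1.4, 0.9, 2.4, -2.0, 0.3, -0.7]

# Historical 5% Value at Risk: the empirical 5th-percentile return,
# reported as a positive loss figure
alpha = 0.05
worst = sorted(returns)
var_5 = -worst[int(alpha * len(worst))]

print(f"5% one-day VaR = {var_5:.1f}% loss")
```

Comparing this figure for hedged versus unhedged return series is one way the paper's question about hedging effectiveness around report dates can be made concrete.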
In this paper, we present an adaptive, analysis of variance (ANOVA)-based data-driven stochastic method (ANOVA-DSM) to study stochastic partial differential equations (SPDEs) in the multi-query setting. Our new method integrates the advantages of both the adaptive ANOVA decomposition technique and the data-driven stochastic method. To handle high-dimensional stochastic problems, we investigate the use of adaptive ANOVA decomposition in the stochastic space as an effective dimension-reduction technique. To improve the slow convergence of the generalized polynomial chaos (gPC) method or stochastic collocation (SC) method, we adopt the data-driven stochastic method (DSM) for speed-up. An essential ingredient of the DSM is to construct a set of stochastic basis functions under which the stochastic solutions enjoy a compact representation for a broad range of forcing functions and/or boundary conditions.
Our ANOVA-DSM consists of offline and online stages. In the offline stage, the original high-dimensional stochastic problem is decomposed into a series of low-dimensional stochastic subproblems, according to the ANOVA decomposition technique. Then, for each subproblem, a data-driven stochastic basis is computed using the Karhunen-Loève expansion (KLE) and a two-level preconditioning optimization approach. Multiple trial functions are used to enrich the stochastic basis and improve the accuracy. In the online stage, we solve each stochastic subproblem for any given forcing function by projecting the stochastic solution into the data-driven stochastic basis constructed offline. In our ANOVA-DSM framework, solving the original high-dimensional stochastic problem is reduced to solving a series of ANOVA-decomposed stochastic subproblems using the DSM. An adaptive ANOVA strategy is also provided to further reduce the number of stochastic subproblems and speed up our method. To demonstrate the accuracy and efficiency of our method, numerical examples are presented for one- and two-dimensional elliptic PDEs with random coefficients.
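The ANOVA decomposition that underlies the dimension-reduction step above can be illustrated on a toy function rather than an SPDE. This sketch (the function f and the uniform grid are the author's choices here, not the paper's setting) splits f(x1, x2) into a constant term plus first-order terms via conditional means, then reports first-order Sobol indices, the share of total variance carried by each input alone; the remainder is the interaction term that the adaptive strategy would decide whether to keep.

```python
from statistics import mean

# ANOVA (Sobol) decomposition sketch for a toy function on [0,1]^2,
# approximated with a midpoint grid; illustrative only
def f(x1, x2):
    return x1 + 2 * x2 ** 2 + 0.5 * x1 * x2

n = 200
grid = [(i + 0.5) / n for i in range(n)]

# Zeroth-order (constant) term: the overall mean of f
f0 = mean(f(x1, x2) for x1 in grid for x2 in grid)

# First-order terms: conditional means with the constant removed
f1 = {x1: mean(f(x1, x2) for x2 in grid) - f0 for x1 in grid}
f2 = {x2: mean(f(x1, x2) for x1 in grid) - f0 for x2 in grid}

# Variance components and first-order Sobol indices
total_var = mean((f(x1, x2) - f0) ** 2 for x1 in grid for x2 in grid)
v1 = mean(v ** 2 for v in f1.values())
v2 = mean(v ** 2 for v in f2.values())

print(f"S1 = {v1 / total_var:.2f}, S2 = {v2 / total_var:.2f}")
```

When, as here, the first-order terms capture nearly all of the variance, the high-dimensional problem can be replaced by a short series of low-dimensional subproblems, which is precisely the effective-dimension argument the ANOVA-DSM exploits.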