Sugarbeet growers have only recently combined ethofumesate, S-metolachlor, and dimethenamid-P in a weed control system for waterhemp control. Sugarbeet plant density, visible stature reduction, root yield, percent sucrose content, and recoverable sucrose were measured in field experiments at five environments between 2014 and 2016. Reductions in sugarbeet stand density and visible stature occurred in some but not all environments. Stand density was reduced by PRE application of S-metolachlor at 1.60 kg ai ha–1 and of S-metolachlor at 0.80 kg ha–1 + ethofumesate at 1.68 kg ai ha–1, alone or followed by POST application of dimethenamid-P at 0.95 kg ai ha–1. Sugarbeet visible stature was reduced when dimethenamid-P followed PRE treatments. Stature reduction was greatest with ethofumesate at 1.68 or 4.37 kg ha–1 PRE and with S-metolachlor at 0.80 kg ha–1 + ethofumesate at 1.68 kg ha–1 PRE followed by dimethenamid-P at 0.95 kg ha–1 POST. Stature reduction ranged from 0 to 32% 10 d after treatment (DAT), but sugarbeet recovered quickly and visible injury was negligible 23 DAT. Although root yield and recoverable sucrose were similar across herbicide treatments and environments, we caution against the use of S-metolachlor at 0.80 kg ha–1 + ethofumesate at 1.68 kg ha–1 PRE followed by dimethenamid-P at 0.95 kg ha–1 in sugarbeet.
OBJECTIVES/SPECIFIC AIMS: To evaluate the ability of various techniques to track changes in body fluid volumes before and after a rapid infusion of saline. METHODS/STUDY POPULATION: Eight healthy participants (5M; 3F) completed baseline measurements of 1) total body water using ethanol dilution and bioelectrical impedance analysis (BIA) and 2) blood volume, plasma volume, and red blood cell (RBC) volume using the carbon monoxide rebreathe technique and I-131 albumin dilution. Subsequently, 30 mL of saline per kg of body weight was administered intravenously over 20 minutes, after which BIA and ethanol dilution were repeated. RESULTS/ANTICIPATED RESULTS: On average, 2.29±0.35 L of saline was infused, with an average increase in net fluid input-output (I/O) of 1.56±0.29 L. BIA underestimated measured I/O by −3.4±7.9%, while ethanol dilution did not demonstrate a measurable change in total body water. Carbon monoxide rebreathe differed from I-131 albumin dilution measurements of blood, plasma, and RBC volumes by +0.6±2.8%, −5.4±3.6%, and +11.0±4.7%, respectively. DISCUSSION/SIGNIFICANCE OF IMPACT: BIA is capable of tracking modest changes in total body water. Carbon monoxide rebreathe appears to be a viable alternative to the I-131 albumin dilution technique for determining blood volume. Together, these two techniques may be useful in monitoring fluid status in patients with impaired fluid regulation.
In this article, we describe the results of the second phase of a randomized controlled trial of Minding the Baby (MTB), an interdisciplinary reflective parenting intervention for infants and their families. Young first-time mothers living in underserved, poor, urban communities received intensive home visiting services from a nurse and social worker team for 27 months, from pregnancy to the child's second birthday. Results indicate that MTB mothers' levels of reflective functioning were more likely to increase over the course of the intervention than were those of control group mothers. Likewise, infants in the MTB group were significantly more likely to be securely attached, and significantly less likely to be disorganized, than infants in the control group. We discuss our findings in terms of their contribution to understanding the impacts and import of intensive intervention with vulnerable families during the earliest stages of parenthood in preventing the intergenerational transmission of disrupted relationships and insecure attachment.
OBJECTIVES/SPECIFIC AIMS: Delirium is a well-described form of acute brain organ dysfunction characterized by decreased or increased movement, changes in attention and concentration, as well as perceptual disturbances (i.e., hallucinations) and delusions. Catatonia, a neuropsychiatric syndrome traditionally described in patients with severe psychiatric illness, can present as phenotypically similar to delirium and is characterized by increased, decreased, and/or abnormal movements, staring, rigidity, and mutism. Delirium and catatonia can co-occur in the setting of medical illness, but no studies have explored this relationship by age. Our objective was to assess whether advancing age and the presence of catatonia are associated with delirium. METHODS/STUDY POPULATION: We prospectively enrolled critically ill patients at a single institution who were on a ventilator or in shock and evaluated them daily for delirium using the Confusion Assessment Method for the ICU and for catatonia using the Bush Francis Catatonia Rating Scale. Measures of association (OR) were assessed with a simple logistic regression model with catatonia as the independent variable and delirium as the dependent variable. Effect measure modification by age was assessed using a likelihood ratio test. RESULTS/ANTICIPATED RESULTS: We enrolled 136 medical and surgical critically ill patients with 452 matched (concomitant) delirium and catatonia assessments. Median age was 59 years (IQR: 52–68). In our cohort of 136 patients, 58 patients (43%) had delirium only, 4 (3%) had catatonia only, 42 (31%) had both delirium and catatonia, and 32 (24%) had neither. Age was significantly associated with prevalent delirium (i.e., increasing age associated with decreased risk for delirium) (p=0.04) after adjusting for catatonia severity. Catatonia was significantly associated with prevalent delirium (p<0.0001) after adjusting for age.
Peak delirium risk was for patients aged 55 years with 3 or more catatonic signs, who had 53.4 times the odds of delirium (95% CI: 16.06, 176.75) compared with those with no catatonic signs. Patients 70 years and older with 3 or more catatonia features had half this risk. DISCUSSION/SIGNIFICANCE OF IMPACT: Catatonia is significantly associated with prevalent delirium even after controlling for age. These data support an inverted U-shaped risk of delirium after adjusting for catatonia. This relationship and its clinical ramifications need to be examined in a larger sample, including patients with dementia. Additionally, we need to assess which acute brain syndrome (delirium or catatonia) develops first.
The aim of this study was to determine the feasibility and efficacy of a culturally tailored lifestyle intervention, ¡Vivir Mi Vida! (Live My Life!). This intervention was designed to improve the health and well-being of high-risk, late middle-aged Latino adults and to be implemented in a rural primary care system.
Rural-dwelling Latino adults experience higher rates of chronic disease compared with their urban counterparts, a disparity exacerbated by limited access to healthcare services. Very few lifestyle interventions exist that are both culturally sensitive and compatible for delivery within a non-metropolitan primary care context.
Participants were 37 Latino, Spanish-speaking adults aged 50–64 years, recruited from a rural health clinic in the Antelope Valley of California. ¡Vivir Mi Vida! was delivered by a community health worker–occupational therapy team over a 16-week period. Subjective health, lifestyle factors, and cardiometabolic measures were collected pre- and post-intervention. Follow-up interviews and focus groups were held to collect information related to the subjective experiences of key stakeholders and participants.
Participants demonstrated improvements in systolic blood pressure, sodium and saturated fat intake, and numerous patient-centered outcomes ranging from increased well-being to reduced stress. Although participants were extremely satisfied with the program, stakeholders identified a number of implementation challenges. The findings suggest that a tailored lifestyle intervention led by community health workers and occupational therapists is feasible to implement in a primary care setting and can improve health outcomes in rural-dwelling, late middle-aged Latinos.
While state legislative rollbacks of public-sector workers’ collective bargaining rights in Wisconsin and other US states in 2011 appeared to signal an unprecedented wave of hostility toward the public sector, such episodes have a long history. Drawing on recent work on “governance repertoires,” this article compares antistate initiatives in Wisconsin in 2011 to two previous periods of conflict over the size and shape of government: the 1930s and the 1970s. We find that while small government advocates in all three periods used similar language and emphasized comparable themes, the outcomes of their advocacy were different due to the distinct historical moments in which they unfolded and the way local initiatives were linked to political projects at the national level. We explore the relationship of local versions of small government activism to their national-level counterparts in each period to show how national-level movements and the ideological, social, and material resources they provided shaped governance repertoires in Wisconsin. We argue that the three moments of conflict over the size of government are deeply intertwined with the prehistory, emergence, and rise to dominance of neoliberal political rationality and can provide insight into how that new “governance repertoire” was experienced and built at the local level.
With European laser facilities such as the Extreme Light Infrastructure (ELI) and the Helmholtz International Beamline for Extreme Fields (HIBEF) scheduled to come online within the next couple of years, General Atomics, as a major supplier of targets and target components for the High Energy Density Physics community in the United States, is gearing up to meet their demand for large numbers of low-cost targets. Using the production of a subassembly for the National Ignition Facility’s fusion targets as an example, we demonstrate that through automation of assembly tasks, the design of targets and their experimental setup can be fairly complex while keeping the assembly time and cost to a minimum. A six-axis Mitsubishi robot is used in combination with vision feedback and a force–torque sensor to assemble target subassemblies of different scales and designs with minimal change of tooling, allowing for design flexibility and short assembly setup times. Implementing automated measurement routines on a Nikon NEXIV microscope further reduces the effort required for target metrology, while electronic data collection and transfer complete a streamlined target production operation that can be adapted to a large variety of target designs.
We have discussed the concepts of multivariate spaces and distances in earlier chapters. With discriminant analysis, we created a space that separated groups of observations. With principal components, we defined a multivariate space for the observations in fewer dimensions while losing the least amount of information. With correspondence analysis, we displayed observations and variables in terms of their Chi-square distances. This chapter is the first of two that focus on quantitative methods that start with a distance matrix and represent the distances between observations either in the form of a map (this chapter) or by grouping observations that are similar (Chapter 15). In both cases we generally start by computing a distance or similarity measure between pairs of observations.
The first part of the chapter describes different ways of defining distance and similarity. Different choices in the measurement of distance can have substantial influence on the results. The second part describes ways of analyzing distance matrices in order to represent them in the form of a map. If you have ever looked at a highway map, you may have noticed a triangular table of distances between major cities. What if you only had the table of distances? How would you go about reconstructing the map? Scaling methods were primarily developed in the field of psychology where data expressing perception or preferences are gathered directly in the form of a similarity matrix that reflects judgments regarding how similar pairs of stimuli are. In archaeology, the classic application is seriation, which attempts to represent variation in assemblages along a single dimension that may represent time (Chapter 17). The third part of the chapter illustrates how to compare two distance matrices. For example, if we have sites located in a region and collections of ceramics from those sites, how do we compare the geographic distances to ceramic assemblage distances?
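The map-reconstruction idea can be sketched in a few lines of base R with classical multidimensional scaling. The three-city distance table below is invented for illustration; `cmdscale()` recovers coordinates whose pairwise distances reproduce the table.

```r
# Invented distances among three "cities"; a 3-4-5 right triangle (x100)
d <- matrix(c(  0, 300, 400,
              300,   0, 500,
              400, 500,   0),
            nrow = 3,
            dimnames = list(c("A", "B", "C"), c("A", "B", "C")))
coords <- cmdscale(as.dist(d), k = 2)  # classical MDS: recover 2-D coordinates
coords
dist(coords)                           # pairwise distances match the input table
```

Note that the recovered configuration is unique only up to rotation, reflection, and translation, which is why a reconstructed "map" may appear flipped or tilted relative to the true one.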
DISTANCE, DISSIMILARITY, AND SIMILARITY
The term distance refers to a numeric score that indicates how close or far two observations are in terms of a set of variables. Larger distances mean the observations are less similar to one another. Dissimilarity measures are larger when two objects are more distant or different from one another. Similarity measures are larger when two objects are more like one another.
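These distinctions can be illustrated with base R functions; the data below are invented, and subtracting a similarity from its maximum value is one common (not the only) way to convert it into a dissimilarity.

```r
set.seed(42)
x <- matrix(rnorm(20), nrow = 5)    # 5 observations on 4 variables (invented)
d <- dist(x, method = "euclidean")  # dissimilarity: larger = less alike
s <- cor(t(x))                      # correlation between observations:
                                    #   a similarity, larger = more alike
d2 <- as.dist(1 - s)                # convert similarity to dissimilarity
```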
A table is simply a two-dimensional presentation of data or a summary of the data. We use tables to inspect the original data for errors or problems such as missing entries. We used tables to present condensed summaries of data values in Chapter 3 (e.g., numSummary()). Those summaries involved computing summary statistics by a categorical variable to see how the groups differed from one another. We can also use tables to see how categorical variables covary.
Nominal or categorical data play a large role in archaeological research. At the regional level, sites are the categories and we are interested in the number of different types of artifacts (also a category) found in each site. The same applies at the site level where the artifact categories are distributed across excavation units. Within sites, different kinds of features are present and features contain different types of artifacts. At the artifact level, some properties of artifacts are represented by categories. Because of this, the same data are often represented in different ways for different purposes. That is not a problem unless the statistical procedures we are using expect a format different from the one we are currently using. In Chapter 3, we created tables of descriptive statistics. In this chapter we are concerned with tables in which the cell entries consist of counts of objects.
R distinguishes between tables and data frames and some functions will work with one but not the other. Data frames have columns that represent different types of data (e.g., character strings, factors, numbers), but tables in R represent numeric data only. In fact, R tables are a kind of matrix. Before constructing tables, we will briefly describe how R encodes categorical data using factors.
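A short sketch, with invented data, of the distinction: a data frame mixes column types, while `table()` produces a numeric array of counts that behaves like a matrix.

```r
# A data frame with two character/factor columns (invented data)
df <- data.frame(site = c("A", "A", "B", "B", "B"),
                 type = c("flake", "core", "flake", "flake", "sherd"))
tab <- table(df$site, df$type)  # cross-tabulation: counts of types by site
class(tab)                      # "table"
is.matrix(tab)                  # TRUE -- a two-way table is matrix-like
```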
FACTORS IN R
Factors are a way of storing categorical information in R. If you have coded a variable into a set of categories, you have the choice of storing the information as a character or factor vector. A factor stores each category as an integer and the category labels are stored as levels. If you import your data into a data frame, R will automatically convert character vectors into factors unless you use the argument stringsAsFactors=FALSE.
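A minimal sketch of factor storage, with invented artifact categories. Note that the automatic conversion of character vectors described above applies to R versions before 4.0.0; from R 4.0.0 onward the default is `stringsAsFactors = FALSE`, so conversion must be requested explicitly.

```r
material <- c("chert", "obsidian", "chert", "basalt")
f <- factor(material)
levels(f)      # "basalt" "chert" "obsidian" -- alphabetical by default
as.integer(f)  # 2 3 2 1 -- the underlying integer codes
# Explicit conversion when building a data frame (required in R >= 4.0.0):
df <- data.frame(material, stringsAsFactors = TRUE)
str(df$material)
```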
Archaeological data come in all sizes, shapes, and quantities ranging from Egyptian pyramids (large in size, small in the number of specimens) to micro-debitage from a lithic workshop or molecular residues in a ceramic bowl. Because the questions we ask of the data are different, our representations of those data differ. One way of representing the data dominates however, because it is so flexible. That is a rectangular arrangement of data so that each row represents an observation and each column represents a measurement on that observation. Some of those measurements can be counts, and each count is a potential observation for another data table.
For example, we may have located a variety of archaeological sites in a river valley. One data table could consist of the grid units that were surveyed so that each row of the table is a grid square (e.g., 100 m on a side). The columns of the data set include the coordinates of the unit and the number of sites and isolated artifact finds discovered during the survey. There could be other columns identifying when the unit was surveyed and information about the location of the unit with respect to topographic features such as dominant soil type, major waterways, lakes, and so on. This data set would be relevant to exploring questions about site density. For example, are there more sites near water features and fewer in upland areas away from any water source?
Each of the counts in this data set is a potential row in another data set. That data set consists of a row for each site and columns for the location of the site, the area of the site, the physical characteristics around the site (e.g., slope, elevation, aspect, soil type), and the number of different kinds of artifacts and features found on the site. This data set would be relevant to questions regarding where sites are located and how the artifacts and features found on sites differ.
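The relationship between the two tables can be sketched as a pair of data frames; all unit names and values below are invented for illustration.

```r
# Survey-level table: one row per surveyed grid unit (invented data)
grid <- data.frame(unit       = c("N100E200", "N100E300"),
                   east       = c(200, 300),
                   north      = c(100, 100),
                   n_sites    = c(2, 1),
                   n_isolates = c(0, 3))
# Site-level table: one row per site, linked back to its grid unit
sites <- data.frame(site    = c("S1", "S2", "S3"),
                    unit    = c("N100E200", "N100E200", "N100E300"),
                    area_m2 = c(450, 120, 800))
# The counts in grid$n_sites correspond to rows in `sites`:
table(sites$unit)
```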
Each of the artifacts and features in the site data set is a potential row in another data set (or more likely multiple data sets). At this point it may make sense to create separate data sets for projectile points, flakes, cores, pottery sherds, shells, bones, and other categories of material.
Quantitative Methods in Archaeology Using R is the first hands-on guide to using the R statistical computing system written specifically for archaeologists. It shows how to use the system to analyze many types of archaeological data. Part I includes tutorials on R, with applications to real archaeological data showing how to compute descriptive statistics, create tables, and produce a wide variety of charts and graphs. Part II addresses the major multivariate approaches used by archaeologists, including multiple regression (and the generalized linear model); multivariate analysis of variance and discriminant analysis; principal components analysis; correspondence analysis; distances and scaling; and cluster analysis. Part III covers specialized topics in archaeology, including intra-site spatial analysis, seriation, and assemblage diversity.
Seriation involves finding a one-dimensional ordering of multivariate data (Marquardt, 1978). In archaeology, it is usually expected that the ordering will reflect chronological change, but the methods cannot guarantee that the ordering will be chronological. Seriation has a long history in archaeology, beginning with Sir Flinders Petrie (1899), who was attempting to order 900 graves chronologically. Because it involves interesting problems in combinatorial mathematics, the idea has provoked the interest of a number of mathematicians over the last century, including David Kendall (1963) and W. S. Robinson (1951). Ecologists share an interest in finding one-dimensional orderings of ecological communities that match environmental gradients, although they refer to the process as ordination rather than seriation.
The usual organization of data for seriation is a data frame where the columns represent artifact types (whether present/absent, or percentages) and the rows represent assemblages (graves, houses, sites, stratigraphic layers within sites, etc.). Before the widespread use of computers, seriation involved shuffling the rows of the data set to concentrate the values in each column into as few contiguous rows as possible. Ford (1962) proposed an approach to seriation of assemblages with types represented as percentages that involved shuffling rows to form “battleship curves.” In 1951, Robinson proposed an alternative approach that involved the construction of a similarity matrix. The rows and columns of the matrix are shuffled until the “best” solution is reached based on criteria that Robinson proposed. As computers became available, programs were written to implement both types of seriation. More recently, multivariate methods including multidimensional scaling, principal components, and correspondence analysis have been applied to seriation. These methods often represent the seriation as a parabola, which is referred to in the archaeological, statistical, and ecological literature as a “horseshoe” (Kendall, 1971). Kendall noted that the horseshoe results from the fact that distance measures generally have a maximum distance such that we cannot resolve the relative distances of objects beyond the maximum distance (a horizon effect). Assemblages that do not share any types are on this horizon. Unwrapping the horseshoe is necessary to produce a one-dimensional ordering.
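The correspondence-analysis route to seriation can be sketched in base R by applying a singular value decomposition to the chi-square-standardized count matrix and ordering assemblages by their first-axis scores. The grave-by-type counts below are invented for illustration.

```r
# Invented assemblage-by-type counts: rows = graves, columns = artifact types
counts <- matrix(c(10, 5,  0,
                    4, 8,  2,
                    0, 6, 10),
                 nrow = 3, byrow = TRUE,
                 dimnames = list(c("grave1", "grave2", "grave3"),
                                 c("typeA", "typeB", "typeC")))
p  <- counts / sum(counts)          # table of proportions
r  <- rowSums(p); cc <- colSums(p)  # row and column masses
# Chi-square standardized residuals, as in correspondence analysis
s  <- diag(1 / sqrt(r)) %*% (p - r %o% cc) %*% diag(1 / sqrt(cc))
sv <- svd(s)
row_scores <- sv$u[, 1] / sqrt(r)   # first-axis row scores
order(row_scores)                   # a candidate one-dimensional ordering
```

With only one strong axis this ordering is a candidate seriation; when assemblages span a long sequence, plotting the first two axes will typically show the horseshoe pattern described above, which must be "unwrapped" to read off a single ordering.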