The monotone homogeneity model (MHM—also known as the unidimensional monotone latent variable model) is a nonparametric IRT formulation that provides the underpinning for partitioning a collection of dichotomous items to form scales. Ellis (Psychometrika 79:303–316, 2014, doi:10.1007/s11336-013-9341-5) has recently derived inequalities that are implied by the MHM, yet require only the bivariate (inter-item) correlations. In this paper, we incorporate these inequalities within a mathematical programming formulation for partitioning a set of dichotomous scale items. The objective criterion of the partitioning model is to produce clusters of maximum cardinality. The formulation is a binary integer linear program that can be solved exactly using commercial mathematical programming software. However, we have also developed a standalone branch-and-bound algorithm that produces globally optimal solutions. Simulation results and a numerical example are provided to demonstrate the proposed method.
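To make the shape of such a formulation concrete, here is a minimal sketch in Python using the open-source PuLP modeler. It is an illustration, not the authors' model: the `pair_ok` predicate is a deliberately simplified stand-in for the Ellis (2014) correlation inequalities (which the abstract does not reproduce), and maximizing the number of assigned items is one reading of the maximum-cardinality criterion.

```python
# A minimal sketch (not the authors' formulation): assign items to clusters so
# that the number of assigned items is maximized, while pairs failing an
# MHM-implied screening condition may never share a cluster. `pair_ok` is a
# placeholder; the actual Ellis (2014) inequalities on the inter-item
# correlations are more involved.
from pulp import LpProblem, LpMaximize, LpVariable, lpSum, LpBinary

def partition_items(R, n_clusters, pair_ok):
    """R: inter-item correlation matrix; pair_ok(R, i, j): may i and j share a scale?"""
    n = R.shape[0]
    prob = LpProblem("mhm_partition", LpMaximize)
    x = {(i, k): LpVariable(f"x_{i}_{k}", cat=LpBinary)
         for i in range(n) for k in range(n_clusters)}
    for i in range(n):                       # each item joins at most one cluster
        prob += lpSum(x[i, k] for k in range(n_clusters)) <= 1
    for i in range(n):                       # forbid co-membership of bad pairs
        for j in range(i + 1, n):
            if not pair_ok(R, i, j):
                for k in range(n_clusters):
                    prob += x[i, k] + x[j, k] <= 1
    prob += lpSum(x.values())                # objective: total items assigned
    prob.solve()
    return [[i for i in range(n) if x[i, k].value() == 1]
            for k in range(n_clusters)]

pair_ok = lambda R, i, j: R[i, j] > 0        # toy screening rule for illustration
```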
Dynamic programming methods for matrix permutation problems in combinatorial data analysis can produce globally optimal solutions for matrices up to size 30×30, but are computationally infeasible for larger matrices because of enormous computer memory requirements. Branch-and-bound methods also guarantee globally optimal solutions, but computation time considerations generally limit their applicability to matrix sizes no greater than 35×35. Accordingly, a variety of heuristic methods have been proposed for larger matrices, including iterative quadratic assignment, tabu search, simulated annealing, and variable neighborhood search. Although these heuristics can produce exceptional results, they are prone to converge to local optima where the permutation is difficult to dislodge via traditional neighborhood moves (e.g., pairwise interchanges, object-block relocations, and object-block reversals). We show that a heuristic implementation of dynamic programming yields an efficient procedure for escaping local optima. Specifically, we propose applying dynamic programming to reasonably sized subsequences of consecutive objects in the locally optimal permutation, identified by simulated annealing, to further improve the value of the objective function. Experimental results are provided for three classic matrix permutation problems in the combinatorial data analysis literature: (a) maximizing a dominance index for an asymmetric proximity matrix; (b) least-squares unidimensional scaling of a symmetric dissimilarity matrix; and (c) approximating an anti-Robinson structure for a symmetric dissimilarity matrix.
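The repair step lends itself to a compact sketch. Below is an illustrative Python version for the dominance-index problem: each window of w consecutive objects in the locally optimal permutation is re-optimized exactly with Held-Karp-style dynamic programming over subsets. Pairs straddling the window keep their relative order under any internal rearrangement, so only the internal arrangement affects the objective. The window size and names are illustrative, not the authors' settings.

```python
# Illustrative window-DP repair for maximizing sum of A[i][j] over pairs with
# i placed before j. Window size w <= ~15 keeps the 2^w DP states tractable.
from itertools import combinations

def dp_reorder(A, objs):
    """Return the ordering of objs maximizing sum of A[i][j] over pairs i before j."""
    idx = {o: b for b, o in enumerate(objs)}
    full = (1 << len(objs)) - 1
    best = {0: (0.0, [])}                    # bitmask -> (value, partial order)
    for size in range(1, len(objs) + 1):
        for subset in combinations(objs, size):
            S = sum(1 << idx[o] for o in subset)
            cands = []
            for j in subset:                 # try j as the last-placed object
                prev, order = best[S ^ (1 << idx[j])]
                gain = sum(A[i][j] for i in subset if i != j)
                cands.append((prev + gain, order + [j]))
            best[S] = max(cands)
    return best[full][1]

def polish(A, perm, w=12):
    """Slide a w-wide window over a locally optimal permutation, re-solving each."""
    perm = list(perm)
    for start in range(0, len(perm) - w + 1):
        perm[start:start + w] = dp_reorder(A, perm[start:start + w])
    return perm
```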
Several authors have touted the p-median model as a plausible alternative to within-cluster sums of squares (i.e., K-means) partitioning. Purported advantages of the p-median model include the provision of “exemplars” as cluster centers, robustness with respect to outliers, and the accommodation of a diverse range of similarity data. We developed a new simulated annealing heuristic for the p-median problem and completed a thorough investigation of its computational performance. The salient findings from our experiments are that our new method substantially outperforms a previous implementation of simulated annealing and is competitive with the most effective metaheuristics for the p-median problem.
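For readers unfamiliar with the approach, the sketch below shows a textbook simulated-annealing loop for the p-median problem with vertex-substitution moves (swap one center for a non-center). The cooling schedule, move selection, and parameter values are generic defaults for illustration, not the tuned implementation evaluated in the study.

```python
# Generic simulated annealing for p-median: Metropolis acceptance over
# vertex-substitution moves, with geometric cooling. Illustrative only.
import math
import random

def pmedian_cost(D, centers):
    return sum(min(D[i][c] for c in centers) for i in range(len(D)))

def anneal_pmedian(D, p, T=1.0, cooling=0.999, steps=20000, seed=0):
    rng = random.Random(seed)
    n = len(D)
    centers = set(rng.sample(range(n), p))
    cost = pmedian_cost(D, centers)
    best, best_cost = set(centers), cost
    for _ in range(steps):
        out = rng.choice(sorted(centers))                      # center to drop
        inn = rng.choice([i for i in range(n) if i not in centers])
        cand = (centers - {out}) | {inn}
        delta = pmedian_cost(D, cand) - cost
        if delta < 0 or rng.random() < math.exp(-delta / T):   # Metropolis rule
            centers, cost = cand, cost + delta
            if cost < best_cost:
                best, best_cost = set(centers), cost
        T *= cooling                                           # geometric cooling
    return best, best_cost
```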
Although the K-means algorithm for minimizing the within-cluster sums of squared deviations from cluster centroids is perhaps the most common method for applied cluster analyses, a variety of other criteria are available. The p-median model is an especially well-studied clustering problem that requires the selection of p objects to serve as cluster centers. The objective is to choose the cluster centers such that the sum of the Euclidean distances (or some other dissimilarity measure) of objects assigned to each center is minimized. Using 12 data sets from the literature, we demonstrate that a three-stage procedure consisting of a greedy heuristic, Lagrangian relaxation, and a branch-and-bound algorithm can produce globally optimal solutions for p-median problems of nontrivial size (several hundred objects, five or more variables, and up to 10 clusters). We also report the results of an application of the p-median model to an empirical data set from the telecommunications industry.
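As a sketch of the first stage only, the greedy heuristic below seeds the procedure by repeatedly adding the center that gives the largest drop in total assignment distance; the Lagrangian-relaxation bounding and branch-and-bound stages that certify global optimality are beyond a short example.

```python
# Greedy add-center heuristic for p-median over a dissimilarity matrix D.
def greedy_pmedian(D, p):
    n = len(D)
    # Seed with the 1-median: the single object minimizing total distance.
    centers = [min(range(n), key=lambda c: sum(D[i][c] for i in range(n)))]
    nearest = [D[i][centers[0]] for i in range(n)]
    while len(centers) < p:
        def drop(c):  # savings from opening c as an additional center
            return sum(max(nearest[i] - D[i][c], 0.0) for i in range(n))
        c = max((c for c in range(n) if c not in centers), key=drop)
        centers.append(c)
        nearest = [min(nearest[i], D[i][c]) for i in range(n)]
    return centers
```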
The clique partitioning problem (CPP) requires the establishment of an equivalence relation for the vertices of a graph such that the sum of the edge costs associated with the relation is minimized. The CPP has important applications for the social sciences because it provides a framework for clustering objects measured on a collection of nominal or ordinal attributes. In such instances, the CPP incorporates edge costs obtained from an aggregation of binary equivalence relations among the attributes. We review existing theory and methods for the CPP and propose two versions of a new neighborhood search algorithm for efficient solution. The first version (NS-R) uses a relocation algorithm in the search for improved solutions, whereas the second (NS-TS) uses an embedded tabu search routine. The new algorithms are compared to simulated annealing (SA) and tabu search (TS) algorithms from the CPP literature. Although the heuristics yielded comparable results for some test problems, the neighborhood search algorithms generally yielded the best performances for large and difficult instances of the CPP.
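The relocation move at the heart of NS-R can be sketched compactly: repeatedly move the single vertex whose reassignment (to an existing cluster or to a new singleton) most reduces the sum of within-cluster edge costs, until no move improves. The version below is a minimal illustration; the embedded tabu-search variant (NS-TS) is not shown.

```python
# Relocation neighborhood search for the CPP: minimize the sum of edge costs
# C[u][v] over pairs assigned to the same cluster.
def relocation_search(C, labels):
    """C: symmetric edge-cost matrix; labels: initial cluster label per vertex."""
    n = len(C)
    improved = True
    while improved:
        improved = False
        for v in range(n):
            # Cost v would contribute to each candidate cluster.
            contrib = {}
            for u in range(n):
                if u != v:
                    contrib[labels[u]] = contrib.get(labels[u], 0.0) + C[v][u]
            current = contrib.get(labels[v], 0.0)  # v's cost in its own cluster
            fresh = max(labels) + 1                # opening a new singleton
            contrib.setdefault(fresh, 0.0)
            target = min(contrib, key=contrib.get)
            if contrib[target] < current:          # strictly improving move only
                labels[v] = target
                improved = True
    return labels
```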
Cellulose of tree rings is often assumed to be predominantly formed by direct assimilation of CO2 through photosynthesis, and consequently can be used to reconstruct past atmospheric 14C concentrations at annual resolution. Yet little is known about the extent and age of stored carbon from previous years that is incorporated into tree rings in addition to direct assimilates. Here, we studied 14C in earlywood and latewood cellulose of four different species (oak, pine, larch and spruce), which are commonly used for radiocarbon calibration and dating. These trees were still growing during the radiocarbon bomb-peak period (1958–1972). We compared cellulose 14C measured in tree-ring subdivisions with the atmospheric 14C corresponding to the time of ring formation. We observed that cellulose 14C carried up to about 50% of the atmospheric 14C signal from the previous 1–2 years in the earlywood of oak only, whereas in conifers the stored-carbon contribution was up to 20% in the earlywood and, in the case of spruce, also in the latewood. The bias introduced by using the full ring of trees growing in a temperate oceanic climate to estimate atmospheric 14C concentration might be minimal, considering that earlywood has a low mass contribution and that the variability in atmospheric 14C over a few years is usually less than 3‰.
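A simple way to picture the stored-carbon effect is a lagged mixing model: cellulose 14C as a weighted mix of the atmospheric signal of the current and the two previous growing seasons. The least-squares sketch below illustrates that idea under that assumption; it is not the authors' exact analysis.

```python
# Illustrative two-lag mixing model: cell(t) = a*atm(t) + b*atm(t-1) + c*atm(t-2)
# with a + b + c = 1. Substituting c = 1 - a - b turns this into an ordinary
# least-squares problem for (a, b). Series are assumed annual and aligned.
import numpy as np

def mixing_weights(cell, atm):
    y = cell[2:] - atm[:-2]
    X = np.column_stack([atm[2:] - atm[:-2], atm[1:-1] - atm[:-2]])
    (a, b), *_ = np.linalg.lstsq(X, y, rcond=None)
    return a, b, 1.0 - a - b   # weights for lags 0, 1, and 2
```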
The IntCal family of radiocarbon (14C) calibration curves is based on research spanning more than three decades. The IntCal group have collated the 14C and calendar age data (mostly derived from primary publications, together with other types of data and metadata) and, since 2010, made them available for further analysis through an open-access database. This has ensured transparency in terms of the data used in the construction of the ratified calibration curves. As the IntCal database expands, work is underway to facilitate best practice for new data submissions, make more of the associated metadata available in a structured form, and help those wishing to process the data with programming languages such as R, Python, and MATLAB. The data and metadata are complex because of the range of different types of archives. A restructured interface, based on the “IntChron” open-access data model, includes tools which allow the data to be plotted and compared without the need for export. The intention is to include complementary information which can be used alongside the main 14C series to provide new insights into the global carbon cycle, as well as facilitating access to the data for other research applications. Overall, this work aims to streamline the generation of new calibration curves.
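For instance, a Python user might load and plot a local copy of the published IntCal20 curve file roughly as follows. The file name, the '#'-prefixed header lines, and the column layout (cal BP, 14C age BP, 1σ, Δ14C, 1σ) are assumptions about the distributed format, not a documented API.

```python
# A minimal sketch, assuming a local copy of intcal20.14c (from intcal.org)
# with '#' header lines and comma-separated columns as described above.
import pandas as pd
import matplotlib.pyplot as plt

curve = pd.read_csv("intcal20.14c", comment="#", header=None,
                    names=["cal_bp", "c14_bp", "sigma", "d14c", "d14c_sigma"])

plt.plot(curve["cal_bp"], curve["c14_bp"], lw=0.5)
plt.fill_between(curve["cal_bp"], curve["c14_bp"] - curve["sigma"],
                 curve["c14_bp"] + curve["sigma"], alpha=0.4)
plt.gca().invert_xaxis()   # show the present day at the right-hand side
plt.xlabel("Calendar age (cal BP)")
plt.ylabel("14C age (BP)")
plt.show()
```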
Nowadays, most radiocarbon (14C) laboratories can reliably avoid and remove possible sample contamination during the pretreatment of organic samples (e.g., bones, charcoal, or trees) thanks to a series of methods commonly used by the radiocarbon community. However, what about the final step, the storage of graphite? Rarely do laboratories produce their graphite and ship it as pressed targets to accelerator mass spectrometry (AMS) facilities for measurement. Pressed graphite in aluminum targets is vulnerable to contamination, and during shipment or storage exogenous carbon can be introduced again. Here we report a test on various archaeological sample materials from different environments and different periods (from the past three millennia to the Middle Paleolithic period). We transformed them into graphite, pressed the graphite into targets, and sent them to two different AMS laboratories to be dated. We observe that the packing details of the targets and extended shipment and storage times may lead to contamination, which can be avoided by packing the targets in tight metal cans sealed in vacuum bags. Close cooperation and coordination between our chemistry laboratory and the AMS facilities, high standards in contamination removal, and efficient measurement planning enabled us to obtain reliable 14C ages within a short time.
Radiocarbon (14C) ages cannot provide absolutely dated chronologies for archaeological or paleoenvironmental studies directly but must be converted to calendar age equivalents using a calibration curve compensating for fluctuations in atmospheric 14C concentration. Although calibration curves are constructed from independently dated archives, they invariably require revision as new data become available and our understanding of the Earth system improves. In this volume the international 14C calibration curves for both the Northern and Southern Hemispheres, as well as for the ocean surface layer, have been updated to include a wealth of new data and extended to 55,000 cal BP. Based on tree rings, IntCal20 now extends as a fully atmospheric record to ca. 13,900 cal BP. For the older part of the timescale, IntCal20 comprises statistically integrated evidence from floating tree-ring chronologies, lacustrine and marine sediments, speleothems, and corals. We utilized improved evaluation of the timescales and location variable 14C offsets from the atmosphere (reservoir age, dead carbon fraction) for each dataset. New statistical methods have refined the structure of the calibration curves while maintaining a robust treatment of uncertainties in the 14C ages, the calendar ages and other corrections. The inclusion of modeled marine reservoir ages derived from a three-dimensional ocean circulation model has allowed us to apply more appropriate reservoir corrections to the marine 14C data rather than the previous use of constant regional offsets from the atmosphere. Here we provide an overview of the new and revised datasets and the associated methods used for the construction of the IntCal20 curve and explore potential regional offsets for tree-ring data. We discuss the main differences with respect to the previous calibration curve, IntCal13, and some of the implications for archaeology and geosciences ranging from the recent past to the time of the extinction of the Neanderthals.
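The conversion the curve enables can be stated compactly: a measurement r ± s is turned into a density over calendar ages θ via p(θ) ∝ exp(−(r − μ(θ))² / 2(s² + σ(θ)²)), with μ and σ the curve mean and uncertainty. The sketch below is that textbook step only, without reservoir corrections or further Bayesian modeling; the array names follow the earlier loading example.

```python
# Minimal calibration sketch: r, s are the lab 14C age and error; cal_bp,
# c14_bp, sigma are the curve grid, mean, and 1-sigma as numpy arrays.
import numpy as np

def calibrate(r, s, cal_bp, c14_bp, sigma):
    var = s**2 + sigma**2
    dens = np.exp(-0.5 * (r - c14_bp)**2 / var) / np.sqrt(var)
    return cal_bp, dens / dens.sum()   # normalized, assuming a uniform grid
```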
We undertook a strengths, weaknesses, opportunities, and threats (SWOT) analysis of Northern Hemisphere tree-ring datasets included in IntCal20 in order to evaluate their strategic fit with the demands of archaeological users. Case studies on wiggle-matching single tree rings from timbers in historic buildings and Bayesian modeling of series of results on archaeological samples from Neolithic long barrows in central-southern England exemplify the archaeological implications that arise when using IntCal20. The SWOT analysis provides an opportunity to think strategically about future radiocarbon (14C) calibration so as to maximize the utility of 14C dating in archaeology and safeguard its reputation in the discipline.
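As an illustration of the wiggle-matching mentioned above, the sketch below slides a series of 14C dates with known ring offsets along the calibration curve and scores each trial felling date by chi-squared. This is a simplified stand-in for the formal Bayesian treatment; the names and the 1-yr interpolation are assumptions.

```python
# Simplified chi-squared wiggle-match: r[k] +/- s[k] are 14C dates on rings
# lying gaps[k] years before the final ring; cal_bp, c14_bp, sigma are curve
# arrays as in the earlier sketches.
import numpy as np

def wiggle_match(r, s, gaps, cal_bp, c14_bp, sigma):
    o = np.argsort(cal_bp)                       # np.interp needs ascending x
    grid = np.arange(cal_bp.min(), cal_bp.max() + 1.0)
    mu = np.interp(grid, cal_bp[o], c14_bp[o])
    sg = np.interp(grid, cal_bp[o], sigma[o])
    gaps = np.asarray(gaps)
    best_t, best_chi2 = None, np.inf
    for t in grid:                               # trial cal BP of the final ring
        theta = t + gaps                         # earlier rings lie further back
        if theta.max() > grid[-1]:
            continue
        m = np.interp(theta, grid, mu)
        e = np.interp(theta, grid, sg)
        chi2 = np.sum((r - m) ** 2 / (s ** 2 + e ** 2))
        if chi2 < best_chi2:
            best_t, best_chi2 = t, chi2
    return best_t, best_chi2
```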
OBJECTIVES/GOALS: Large-scale clinical proteomic studies of cancer tissues often entail complex workflows and are resource-intensive. In this study we analyzed ovarian tumors using an emerging, high-throughput proteomic technology termed SWATH. We compared SWATH with the more widely used iTRAQ workflow based on robustness, complexity, ability to detect differential protein expression, and the elucidated biological information. METHODS/STUDY POPULATION: Proteomic measurements of 103 clinically-annotated high-grade serous ovarian cancer (HGSOC) tumors previously genomically characterized by The Cancer Genome Atlas were conducted using two orthogonal mass spectrometry-based proteomic methods: iTRAQ and SWATH. The analytical differences between the two methods were compared with respect to relative protein abundances. To assess the ability to classify the tumors into subtypes based on proteomic signatures, an unbiased molecular taxonomy of HGSOC was established using protein abundance data. The 1,599 proteins quantified in both datasets were classified based on z-score-transformed protein abundances, and the emergent protein modules were characterized using weighted gene-correlation network analysis and Reactome pathway enrichment. RESULTS/ANTICIPATED RESULTS: Despite the greater than two-fold difference in the analytical depth of each proteomic method, common differentially expressed proteins in enriched pathways associated with the HGSOC Mesenchymal subtype were identified by both methods. The stability of tumor subtype classification was sensitive to the number of analyzed samples, and the statistically stable subgroups were identified by the data from both methods. Additionally, the homologous recombination deficiency-associated enriched DNA repair and chromosome organization pathways were conserved in both data sets. DISCUSSION/SIGNIFICANCE OF IMPACT: SWATH is a robust proteomic method that can be used to elucidate cancer biology. The lower number of proteins detected by SWATH compared to iTRAQ is mitigated by its streamlined workflow, increased sample throughput, and reduced sample requirement. SWATH therefore presents novel opportunities to enhance the efficiency of clinical proteomic studies.
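A rough sketch of the classification preprocessing described here (z-scoring each protein across tumors, then clustering the tumors) might look as follows. The linkage choice and cluster count are illustrative, and the consensus-stability analysis, WGCNA module detection, and pathway enrichment are omitted.

```python
# Hypothetical sketch: abund is a proteins-by-tumors abundance matrix.
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster

def subtype_tumors(abund, k):
    z = (abund - abund.mean(axis=1, keepdims=True)) / abund.std(axis=1, keepdims=True)
    Z = linkage(z.T, method="ward")                # cluster tumors (columns)
    return fcluster(Z, t=k, criterion="maxclust")  # k subtype labels
```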
As the worldwide standard for radiocarbon (14C) dating over the past ca. 50,000 years, the International Calibration Curve (IntCal) is continuously improving towards higher resolution and replication. Tree-ring-based 14C measurements provide absolute dating throughout most of the Holocene, although high-precision data are limited for the Younger Dryas interval and farther back in time. Here, we describe the dendrochronological characteristics of 1448 new 14C dates, between ~11,950 and 13,160 cal BP, from 13 pines that were growing in Switzerland. Significantly enhancing the ongoing IntCal update (IntCal20), this Late Glacial (LG) compilation contains more annually precise 14C dates than any other contribution, for any period of time. Our results thus provide unique geochronological dating into the Younger Dryas, a pivotal period of climate and environmental change at the transition from LG to Early Holocene conditions.
As part of the ongoing effort to improve the Northern Hemisphere radiocarbon (14C) calibration curve, this study investigates the period of 856 BC to 626 BC (2805–2575 yr BP) with a total of 403 single-year 14C measurements. In this age range, IntCal13 was constructed largely from German and Irish oak as well as Californian bristlecone pine 14C dates, with most samples measured at a 10-yr resolution. The new data presented here are the first atmospheric 14C single-year record of the older end of the Hallstatt plateau based on an absolutely dated tree-ring chronology. The data helped reveal a major solar proton event (SPE), which caused a spike in the production rate of cosmogenic radionuclides around 2610/2609 BP. This production event is thought to have reached a magnitude similar to the 774/775 AD production event but had remained undetected due to averaging effects in the decadal calibration data. The record leading up to the 2610/2609 BP event reveals an 11-yr solar cycle with varying cyclicity. Features of the new data and the benefits of higher-resolution calibration are discussed.
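To illustrate why single-year resolution matters here: a production event of this kind appears as a year-over-year jump in Δ14C that decadal averaging smears out. A toy detector under that assumption (the threshold and names are illustrative):

```python
# Toy spike detector: flag year-over-year rises in annual Delta14C exceeding
# n_sigma times a robust estimate of the year-to-year noise.
import numpy as np

def find_spikes(years, d14c, n_sigma=4.0):
    diff = np.diff(d14c)
    noise = 1.4826 * np.median(np.abs(diff - np.median(diff)))  # MAD -> sigma
    return [years[i + 1] for i in np.flatnonzero(diff > n_sigma * noise)]
```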
Advances in accelerator mass spectrometry have resulted in an unprecedented amount of new high-precision radiocarbon (14C) dates, some of which will redefine the international 14C calibration curves (IntCal and SHCal). Often these datasets are unaccompanied by the detailed quality assurances in place at the laboratory, raising the question of whether the 14C structure is real, a result of laboratory variation, or measurement scatter. A handful of intercomparison studies attempt to elucidate laboratory offsets but may fail to identify measurement scatter, and they are often financially constrained. Here we introduce a protocol, called Quality Dating, implemented at ETH-Zürich to ensure reproducible and accurate high-precision 14C dates. The protocol highlights the importance of the continuous measurement and evaluation of blanks, standards, references, and replicates. This protocol is tested on an absolutely dated German Late Glacial tree-ring chronology, part of which was intercompared with the Curt Engelhorn-Center for Archaeometry, Mannheim, Germany (CEZA). The combined dataset contains 170 highly resolved, highly precise 14C dates that supplement three decadal dates spanning 280 cal. years in IntCal, and provides detailed 14C structure for this interval.
Collusion is a largely unconscious, dynamic bond that may occur between patients and clinicians, between patients and family members, or between different health professionals. It is widely prevalent in the palliative care setting, where it provokes intense emotions and unreflective behavior and has a negative impact on care. However, research on collusion is limited due to a lack of conceptual clarity and of robust instruments to investigate this complex phenomenon. We have therefore developed the Collusion Classification Grid (CCG), which we aimed to evaluate with regard to its potential utility for analyzing instances of collusion, be it for supervision in the clinical setting or for research.
Method
Situations of difficult interactions with patients with advanced disease (N = 10), presented by clinicians in supervision with a liaison psychiatrist, were retrospectively analyzed by means of the CCG.
Results
1) All items constituting the grid were mobilized at least once; 2) one new item had to be added; and 3) the CCG identified different types of collusion.
Significance of results
This case series of collusions assessed with the CCG is a first step towards the investigation of larger samples. Such studies could identify setting-dependent and recurrent types of collusion, as well as patterns emerging among the items of the CCG. A better grasp of collusion could ultimately lead to a deeper understanding of its impact on the patient encounter and on clinical decision-making.
In different pathophysiological conditions, plasminogen activator inhibitor-1 (PAI-1) plasma concentrations are elevated. As dietary patterns are considered to influence PAI-1 concentration, we aimed to determine active PAI-1 plasma concentrations and mRNA expression in adipose tissue before and after consumption of a high-fat diet (HFD), and the contribution of additive genetic effects, in humans. For 6 weeks, 46 healthy, non-obese pairs of twins (aged 18–70) received a normal nutritionally balanced diet (ND), followed by an isocaloric HFD for 6 weeks. Active PAI-1 plasma levels and PAI-1 mRNA expression in subcutaneous adipose tissue were assessed after the ND and after 1 and 6 weeks of HFD. Active PAI-1 plasma concentrations and PAI-1 mRNA expression in adipose tissue were significantly increased after both 1 and 6 weeks of HFD compared to concentrations determined after the ND (p < .05), with the increases in active PAI-1 being independent of gender, age, and changes in BMI and intrahepatic fat content. However, analysis of covariance suggests that serum insulin concentration significantly affected the increase of active PAI-1 plasma concentrations. Furthermore, the increase of active PAI-1 plasma concentrations after 6 weeks of HFD was highly heritable (47%). In contrast, changes in PAI-1 mRNA expression in fatty tissue in response to the HFD showed no heritability and were independent of all tested covariates. In summary, our data suggest that even an isocaloric exchange of macronutrients, such as a switch to a fat-rich diet, affects PAI-1 concentrations in humans, and that this effect is highly heritable.
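The abstract does not state how the 47% heritability was estimated; twin studies typically use structural-equation (ACE) modeling. As a simplified stand-in, Falconer's classic formula h^2 = 2(r_MZ - r_DZ) compares monozygotic and dizygotic twin-pair correlations of the HFD response:

```python
# Illustrative Falconer estimate, assuming arrays of twin-pair values (one twin
# per column) for the HFD-induced change in active PAI-1. A simplified stand-in
# for formal ACE modeling, not the authors' method.
import numpy as np

def falconer_h2(mz_pairs, dz_pairs):
    r_mz = np.corrcoef(mz_pairs[:, 0], mz_pairs[:, 1])[0, 1]
    r_dz = np.corrcoef(dz_pairs[:, 0], dz_pairs[:, 1])[0, 1]
    return 2.0 * (r_mz - r_dz)
```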