A significant proportion of inpatient antimicrobial prescriptions are inappropriate. Post-prescription review with feedback has been shown to be an effective means of reducing inappropriate antimicrobial use. However, implementation is resource intensive. Our aim was to evaluate the performance of traditional statistical models and machine-learning models designed to predict which patients receiving broad-spectrum antibiotics require a stewardship intervention.
Methods:
We performed a single-center retrospective cohort study of inpatients who received an antimicrobial tracked by the antimicrobial stewardship program. Data were extracted from the electronic medical record and were used to develop logistic regression and boosted-tree models to predict whether antibiotic therapy required stewardship intervention on any given day, as compared to the criterion standard of a note left by the antimicrobial stewardship team in the patient’s chart. We measured the performance of these models using the area under the receiver operating characteristic curve (AUROC) and evaluated them on a hold-out validation cohort.
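The abstract does not name the modeling software; purely as an illustrative sketch of the comparison described, with synthetic data standing in for the EMR extract and scikit-learn's LogisticRegression and GradientBoostingClassifier standing in for whatever implementations were actually used, the hold-out AUROC evaluation might look like this:

    # Hypothetical sketch: compare logistic regression and boosted trees by
    # AUROC on a hold-out cohort; synthetic data stands in for the EMR features.
    from sklearn.datasets import make_classification
    from sklearn.ensemble import GradientBoostingClassifier
    from sklearn.linear_model import LogisticRegression
    from sklearn.metrics import roc_auc_score
    from sklearn.model_selection import train_test_split

    X, y = make_classification(n_samples=5000, n_features=30, weights=[0.9],
                               random_state=0)
    X_train, X_holdout, y_train, y_holdout = train_test_split(
        X, y, test_size=0.3, stratify=y, random_state=0)

    logit = LogisticRegression(max_iter=1000).fit(X_train, y_train)
    boosted = GradientBoostingClassifier(random_state=0).fit(X_train, y_train)

    for name, model in [("logistic regression", logit), ("boosted trees", boosted)]:
        auroc = roc_auc_score(y_holdout, model.predict_proba(X_holdout)[:, 1])
        print(f"{name}: hold-out AUROC = {auroc:.2f}")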
Results:
Both the logistic regression and boosted-tree models demonstrated fair discriminatory power with AUROCs of 0.73 (95% confidence interval [CI], 0.69–0.77) and 0.75 (95% CI, 0.72–0.79), respectively (P = .07). Both models demonstrated good calibration. The number of patients that would need to be reviewed to identify 1 patient who required stewardship intervention was high for both models (41.7–45.5 for models tuned to a sensitivity of 85%).
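The "number needed to review" can be read as the reciprocal of the positive predictive value at the probability threshold that attains the target sensitivity on the hold-out cohort. A hypothetical helper along those lines (only the 85% target comes from the abstract; the function and its defaults are assumptions):

    # Hypothetical helper: patients flagged for review per true intervention
    # (1 / PPV) at the lowest threshold whose sensitivity reaches the target.
    import numpy as np
    from sklearn.metrics import roc_curve

    def number_needed_to_review(y_true, scores, target_sensitivity=0.85):
        y_true = np.asarray(y_true)
        scores = np.asarray(scores)
        _, tpr, thresholds = roc_curve(y_true, scores)
        if not np.any(tpr >= target_sensitivity):
            raise ValueError("target sensitivity not attainable")
        idx = int(np.argmax(tpr >= target_sensitivity))  # first threshold reaching it
        flagged = scores >= thresholds[idx]
        ppv = (flagged & (y_true == 1)).sum() / flagged.sum()
        return 1.0 / ppv

Applied to the hold-out predictions from the sketch above, this would be called as, e.g., number_needed_to_review(y_holdout, boosted.predict_proba(X_holdout)[:, 1]).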
Conclusions:
Complex models can be developed to predict which patients require a stewardship intervention. However, further work is required to develop models with adequate discriminatory power to be applicable to real-world antimicrobial stewardship practice.
Constitutional scholars emphasize the importance of an enduring, stable constitutional order, which North and Weingast (1989) argue is consistent with credible commitments to sustainable fiscal policies. However, this view is controversial and has received little empirical study. We use 19th-century US state-level data to estimate relationships between constitutional design and the likelihood of a government default. Results indicate that more entrenched and less specific constitutions are associated with a lower likelihood of default.
A rough balance of political power between monarchs and a militarized landed aristocracy characterized medieval Western Europe. Scholars have argued that this balance of power contributed to a tradition of limited government and constitutional bargaining. I argue that 5th- and 6th-century barbarian settlements created a foundation for this balance of power. The settlements provided barbarians with allotments of lands or taxes due from the lands. The allotments served to align the incentives of barbarian warriors and Roman landowners, and realign the incentives of barbarian warriors and their leadership elite. Barbarian military forces became decentralized and the warriors became politically powerful shareholders of the realm.
We use the Stansel (2013) metropolitan area economic freedom index and 25 conditioning variables to analyze the spatial relationships between institutional quality and economic outcomes across 381 U.S. metropolitan areas. Specifically, we allow for spatial dependence in both the dependent and independent variables and estimate how economic freedom impacts both per capita income growth and per capita income levels. We find that economic freedom is positively related to both per capita income growth and per capita income levels. Furthermore, we find that the total (direct plus indirect) effects on all metropolitan areas are positive and larger in magnitude than the direct effects alone, indicating that freedom-enhancing reforms in one metropolitan area lead to positive-sum games with neighboring metropolitan areas.
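Allowing for spatial dependence in both the dependent and the independent variables corresponds to a spatial Durbin specification; a sketch of that form, with W a row-standardized spatial weight matrix (the authors' exact specification and estimator may differ):

    % Spatial Durbin model: spatial lags of both y and X
    y = \rho W y + X \beta + W X \theta + \varepsilon
    % Marginal effects of covariate x_k across all metropolitan areas:
    \partial y / \partial x_k' = (I - \rho W)^{-1} ( I \beta_k + W \theta_k )

In this framework the direct effect of a covariate is the average diagonal element of the partial-derivative matrix, the indirect (spillover) effect is the average of its off-diagonal row sums, and the total effect is their sum.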
Large-scale trends in planktonic foraminiferal diversity have so far been based on synoptic biostratigraphic range charts. Although this approach ensures the taxonomic consistency and quality of the data being used, it takes no formal account of any sampling biases that might exist in the fossil record. We demonstrate that the occurrence data of planktonic foraminifera, as recorded in the primary literature, are strongly biased by sampling. We do this by showing that raw diversity curves derived from the land-based and deep-sea records are strikingly different, but that they each correlate with the intensity of sampling in their respective environments, and thus are ultimately controlled by the structure of the geological record in each setting. Because sampling of the Mesozoic record is best in our land record whereas sampling of the Cenozoic is best in our deep-sea record, we combine the two to generate the best-supported estimates of species and genus diversity over time from these data. We correct for sampling bias using shareholder quorum subsampling and a modeling approach. The data are then transformed to generate a range-through plot of species richness that is compared with two earlier estimates of the diversity history where comparable species-in-bin data can be recovered. No robust statistical correlation is found among the three estimates. Although differences in amplitude are to be expected, differences in the actual shape of the curve are surprising. We conclude that these differences stem from the nature of the data themselves, namely the taxonomic scheme adopted and the taxonomic coverage used.
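Shareholder quorum subsampling draws occurrences at random until the summed relative frequencies ("coverage") of the taxa drawn reach a chosen quorum and records how many taxa were needed. A minimal sketch of that core idea, omitting the published method's refinements (Good's u correction, handling of dominant and single-occurrence taxa):

    # Minimal shareholder quorum subsampling (SQS) sketch. `occurrences` is a
    # flat list of taxon names, one entry per occurrence record.
    import random
    from collections import Counter

    def sqs_richness(occurrences, quorum=0.5, trials=100):
        freqs = Counter(occurrences)          # occurrence count per taxon
        total = sum(freqs.values())
        richness = []
        for _ in range(trials):
            pool = list(occurrences)
            random.shuffle(pool)
            seen, coverage = set(), 0.0
            for taxon in pool:
                if coverage >= quorum:        # quorum of "shares" reached
                    break
                if taxon not in seen:
                    seen.add(taxon)
                    coverage += freqs[taxon] / total
            richness.append(len(seen))
        return sum(richness) / trials         # subsampled richness estimate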
We provide industry-level estimates of the elasticity of substitution (σ) between capital and labor in the United States. We also estimate rates of factor augmentation. Aggregate estimates are produced. Our empirical model comes from the first-order conditions associated with a constant elasticity of substitution (CES) production function. Our data represent 35 industries at roughly the 2-digit SIC level, 1960–2005. We find that aggregate U.S. σ is likely less than 0.620. σ is likely less than unity for a large majority of individual industries. Evidence also suggests that aggregate σ is less than the value-added share-weighted average of industry σ's. Aggregate technical change appears to be net labor-augmenting. This also appears to be true for the large majority of individual industries, but several industries may be characterized by net capital augmentation. When industry-level elasticity estimates are mapped to model sectors, the manufacturing sector σ is lower than that of services; the investment sector σ is lower than that of consumption.
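One standard form of those first-order conditions (the authors' exact system may differ): for a CES technology with factor-augmenting technical change, the labor condition under competitive factor pricing rearranges into an estimating equation in which σ is the coefficient on the real wage and the labor-augmentation rate enters through the trend:

    % CES production function with factor-augmenting technology A^K, A^L:
    Y_t = [ \alpha (A^K_t K_t)^{(\sigma-1)/\sigma}
          + (1-\alpha) (A^L_t L_t)^{(\sigma-1)/\sigma} ]^{\sigma/(\sigma-1)}
    % Labor first-order condition (w_t the real wage), with A^L_t = A^L_0 e^{\lambda_L t}:
    \ln(Y_t / L_t) = \text{const} + \sigma \ln w_t - (\sigma - 1) \lambda_L t + u_t

A symmetric condition for capital brings in the capital-augmentation rate; estimating such conditions jointly is one way σ and the augmentation rates can be separately identified.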
The formal commissioning of the IRWG occurred at the 1991 Buenos Aires General Assembly, following a Joint Commission meeting at the IAU GA in Baltimore in 1988 that identified the problems with ground-based infrared photometry. The meeting justification, papers, and conclusions can be found in Milone (1989). In summary, the challenges involved how to explain the failure to achieve the milli-magnitude precision expected of infrared photometry and an apparent 3% limit on system transformability. The proposed solution was to redefine the broadband Johnson system, the passbands of which had proven so unsatisfactory that over time effectively different systems proliferated, although bearing the same “JHKLMNQ” designations; the new system needed to be better positioned and centered in the spectral windows of the Earth's atmosphere, and the variable water vapour content of the atmosphere needed to be measured in real time to better correct for atmospheric extinction.
The formal origin of the IRWG occurred at the 1991 Buenos Aires General Assembly, following a Joint Commission meeting at the IAU GA in Baltimore in 1988 that identified the problems with ground-based infrared photometry. The situation is summarized in Milone (1989). In short, the challenges involved how to explain the failure to achieve the milli-magnitude precision expected of infrared photometry and an apparent 3% limit on system transformability. The proposed solution was to redefine the broadband Johnson system, the passbands of which had proven so unsatisfactory that over time effectively different systems proliferated, although bearing the same JHKLMNQ designations; the new system needed to be better positioned and centered in the Earth's atmospheric windows, and the variable water vapour content of the atmosphere needed to be measured in real time to better correct for atmospheric extinction.
As we have noted before, the WG-IR was created following a Joint Commission Meeting at the IAU General Assembly in Baltimore in 1988, a meeting that provided both diagnosis and prescription for the perceived ailments of infrared photometry at the time. The results were summarized in Milone (1989). The challenges involved how to explain the failure to systematically achieve the milli-magnitude precision expected of infrared photometry and an apparent 3% limit on system transformability. The proposed solution was to redefine the broadband Johnson system, the passbands of which had proven so unsatisfactory that over time effectively different systems proliferated, although bearing the same JHKLMNQ designations; the new system needed to be better positioned and centered in the Earth's atmospheric windows, and the variable water vapour content of the atmosphere needed to be measured in real time to better correct for atmospheric extinction.
The SB9 Working Group of Commission 30 aims at compiling the 9th Catalogue of Orbits of Spectroscopic Binaries. By definition, this is a never-ending task, as orbits of newly discovered systems keep appearing in the literature. Despite this, the working group tries to catch up on the backlog, as nothing was done between 1989, when the 8th catalogue by Batten et al. appeared, and 2000, when the WG was set up. In 2006, at its business meeting, the WG decided to focus on the completeness of systems rather than on the completeness of orbits. Although the latter is a valuable objective, only the former is useful to any statistical investigation of spectroscopic binaries.