Introduction: September 2017 saw the launch of the British Columbia (BC) Emergency Medicine Network (EM Network), an innovative clinical network established to improve emergency care across the province. The intent of the EM Network is to support the delivery of evidence-informed, patient-centered care in all 108 Emergency Departments and Diagnostic and Treatment Centres in BC. After one year, the Network undertook a formative evaluation to guide its growth. Our objective is to describe the evaluation approach and early findings. Methods: The EM Network was evaluated on three levels: member demographics, online engagement, and member perceptions of value and progress. For member demographics and online engagement, data were captured from member registration information on the Network's website, Google Analytics, and Twitter Analytics. Member feedback was sought through an online survey using a social network analysis tool, PARTNER (Program to Analyze, Record, and Track Networks to Enhance Relationships), and through semi-structured individual interviews. The evaluation framework was developed from recommendations in the literature, in collaboration with Network members, including patient representatives. Results: There are currently 622 EM Network members from an eligible denominator of approximately 1,400 physicians (44%). Seventy-three percent of the Emergency Departments and Diagnostic and Treatment Centres in BC currently have Network members, and since launch, the EM Network website has been accessed from 11,154 unique IP addresses. Use of the online discussion forum is low but growing, and the Twitter following is high: there are currently 550 Twitter followers and an average of 27 ‘mentions’ of the Network by Twitter users per month. Member feedback through the survey and individual interviews indicates that the Network is respected and credible, but many members remain unaware of its purpose and offerings. Conclusion: Our findings underscore that early evaluation is useful for identifying development needs; for the Network these include increasing awareness and online dialogue. However, our results must be interpreted cautiously in such a young Network, and we therefore intend to re-evaluate regularly. Specific action recommendations from this baseline evaluation include: increasing face-to-face visits to targeted communities; maintaining or accelerating communication strategies to increase engagement; and providing new techniques that encourage member contributions in order to grow and improve content.
Research on the cities of the Classical Greek world has traditionally focused on mapping the organisation of urban space and studying major civic or religious buildings. More recently, newer techniques such as field survey and geophysical survey have facilitated exploration of the extent and character of larger areas within urban settlements, raising questions about economic processes. At the same time, detailed analysis of residential buildings has also supported a change of emphasis towards understanding some of the functional and social aspects of the built environment as well as purely formal ones. This article argues for the advantages of analysing Greek cities using a multidisciplinary, multi-scalar framework which encompasses all of these various approaches and adds to them other analytical techniques (particularly micro-archaeology). We suggest that this strategy can lead towards a more holistic view of a city, not only as a physical place, but also as a dynamic community, revealing its origins, development and patterns of social and economic activity. Our argument is made with reference to the research design, methodology and results of the first three seasons of fieldwork at the city of Olynthos, carried out by the Olynthos Project.
Introduction: Understanding physician human resources in British Columbia’s (BC) emergency settings is essential for planning training, recruitment and professional development programs. In 2014 we conducted an online and telephone survey of the site leads for the 95 Emergency Departments (EDs) attached to hospitals in BC. Methods: A one-page survey was developed by the authors (JC and JM). Each hospital listed on the BC Ministry of Health’s website was contacted to confirm that it had a functioning ED and to identify its site lead. Each ED site lead was then emailed the questionnaire; up to three follow-up emails and direct telephone requests were made as needed. Results: 92 of the 95 EDs completed the survey. Just over 1,000 physicians deliver emergency care in BC, approximately half doing so in combination with family practice. There was an estimated shortfall of 199 physicians providing emergency care in 2014, with anticipated shortfalls of 287 by 2017 and 399 by 2019. Slightly more than half had formal certification: 28% through the Royal College of Physicians and Surgeons of Canada and 70% through the College of Family Physicians of Canada. Conclusion: More than 1,000 physicians care for patients in EDs across BC, but there is a significant and growing need for more physicians. There is tremendous variation across health authorities in emergency medicine certification, but approximately half of those who deliver emergency care have formal certification. Despite the limitations of a survey method, this study provides the most accurate and current estimate of emergency practitioner resources and training in BC and will be important in guiding discussions to address the identified gaps.
In 1842 when Charles Dickens visited North America, he was only twenty-nine years old and yet had already achieved superstar status. Everywhere he went in the young republic, the author was pressed by mobs wishing to catch a glimpse of Boz, to shake his hand, to beg for a lock of his hair, even to snatch a plug of fur from his bearskin coat. Dickens had travelled to the New World to discover America, but what he learned was that America had discovered him. Fairly early in his career Dickens thus achieved worldwide fame. As a young man he was fascinated by the American experiment, hoping to find in the United States a liberal democracy living up to its high ideals, but his visit to the States proved disappointing. While the Commonwealth of Massachusetts met with his approval for its innovative institutions and Boston seemed every bit the Athens of America, the rest of the country was not, in his words, ‘the Republic of my imagination’. Dickens could no longer in good conscience view America as a suitable destination for British emigrants. Directly after his trip, he wrote American Notes for General Circulation (1842) in which he levied much criticism against the country (and offered some praise, though that is largely forgotten in light of his jabs and stabs at American manners and institutions).
On 2011 July 14, a transient X-ray source, Swift J1822.3–1606, was detected by Swift BAT via its burst activities. It was subsequently identified as a new magnetar upon the detection of a pulse period of 8.4 s. Using follow-up RXTE, Swift, and Chandra observations, we have determined a spin-down rate of Ṗ ~ 3 × 10⁻¹³, implying a dipole magnetic field of ~5 × 10¹³ G, second lowest among known magnetars, although our timing solution is contaminated by timing noise. The post-outburst flux evolution is well modelled by surface cooling resulting from heat injection in the outer crust, although we cannot rule out other models. We measure an absorption column density similar to that of the open cluster M17 at 10′ away, arguing for a comparable distance of ~1.6 kpc. If confirmed, this could be the nearest known magnetar.
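The quoted field follows from the conventional vacuum-dipole spin-down estimate (a standard formula, not stated in the abstract) with P = 8.4 s, P in seconds:

\[
B \simeq 3.2\times10^{19}\,\sqrt{P\,\dot{P}}\ \mathrm{G}
  \simeq 3.2\times10^{19}\,\sqrt{8.4 \times 3\times10^{-13}}\ \mathrm{G}
  \approx 5\times10^{13}\ \mathrm{G},
\]

which reproduces the value quoted above.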
The challenge for the next 50 years is to increase the productivity of major livestock species to address the food needs of the world, while at the same time minimizing the environmental impact. The present review presents an optimistic view of this challenge. The completion of genome sequences, and high-density analytical tools to map genetic markers, allows for whole-genome selection programmes based on linkage disequilibrium for a wide spectrum of traits, simultaneously. In turn, it will be possible to redefine genetic prediction based on allele sharing, rather than pedigree relationships and to make breeding value predictions early in the life of the peak sire. Selection will be applied to a much wider range of traits, including those that are directed towards environmental or adaptive outcomes. In parallel, reproductive technologies will continue to advance to allow acceleration of genetic selection, probably including recombination in vitro. Transgenesis and/or mutagenesis will be applied to introduce new genetic variation or desired phenotypes. Traditional livestock systems will continue to evolve towards more intensive integrated farming modes that control inputs and outputs to minimize the impact and improve efficiency. The challenges of the next 50 years can certainly be met, but only if governments reverse the long-term disinvestment in agricultural research.
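As a minimal sketch of what ‘genetic prediction based on allele sharing, rather than pedigree relationships’ can look like in practice (my own illustration, not from the review; the VanRaden-style construction is one common choice, and the toy genotypes are hypothetical):

```python
import numpy as np

def genomic_relationship(M):
    """Genomic relationship matrix (VanRaden-style) from a genotype matrix M
    of shape (animals, SNPs), coded 0/1/2 copies of an allele. Replaces
    pedigree kinship with realized, marker-based allele sharing."""
    p = M.mean(axis=0) / 2.0              # per-SNP allele frequencies
    Z = M - 2.0 * p                        # centre genotypes
    denom = 2.0 * np.sum(p * (1.0 - p))    # scales G to pedigree-A units
    return Z @ Z.T / denom

# Toy example: 4 animals typed at 6 SNPs (hypothetical data)
M = np.array([[0, 1, 2, 1, 0, 2],
              [1, 1, 2, 0, 0, 2],
              [2, 0, 0, 1, 1, 0],
              [2, 0, 1, 2, 1, 0]])
print(np.round(genomic_relationship(M), 2))  # off-diagonals: allele sharing
```

In a whole-genome selection programme, a matrix of this kind would feed a mixed-model (GBLUP-type) prediction, allowing breeding values to be estimated early in an animal's life.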
Characterisation of genetic diversity in a large number of European pig populations has been undertaken with EC support. The populations sampled included local (rare) breeds, national varieties of the major international breeds, commercial lines and the Chinese Meishan breed. A second phase of the project will sample a further 50 Chinese breeds. Neutral genetic markers (AFLP and microsatellites), with individual or bulk typing, were used and compared.
DNA was extracted from samples of about 50 individuals in each of 59 European pig populations. Individuals were typed for 50 microsatellites and for 148 AFLP bands. A subset of 25 populations was typed for 20 microsatellites on pools of DNA. For the co-dominant markers, allele frequencies were estimated by direct allele counting. Frequencies of AFLP null alleles were estimated as the square root of the frequency of band-absent individuals, band absence being the homozygous-null phenotype. Within-breed variability was summarised using standard statistics: expected and observed heterozygosity, mean observed and effective numbers of alleles, and F-statistics. Between-breed diversity analysis was based on a bootstrapped Neighbor-Joining (NJ) tree derived from Reynolds distances (DR). The standard distance of Nei (DS) was also calculated.
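A minimal sketch of these two estimates (my own illustration, assuming Hardy–Weinberg proportions; the function names and the example frequency are hypothetical):

```python
import math

def aflp_null_allele_freq(absent_band_freq):
    """Band absence at a dominant AFLP locus is the homozygous-null
    phenotype, so under Hardy-Weinberg q = sqrt(freq of absent band)."""
    return math.sqrt(absent_band_freq)

def expected_heterozygosity(allele_freqs):
    """Expected heterozygosity He = 1 - sum(p_i^2) over allele frequencies."""
    return 1.0 - sum(p * p for p in allele_freqs)

q = aflp_null_allele_freq(0.36)      # 36% band-absent individuals -> q = 0.6
p = 1.0 - q
print(q, expected_heterozygosity([p, q]))   # 0.6  0.48
```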
To evaluate the relation between antimicrobial use and resistance in intensive-care unit (ICU) and non-ICU inpatient areas in eight US hospitals.
We determined antimicrobial use in terms of defined daily doses, antimicrobial-use density (defined daily doses/1,000 patient days), and percentage resistance for five antimicrobial-organism combinations in the ICU and non-ICU inpatient areas of eight US hospitals participating in project Intensive Care Antimicrobial Resistance Epidemiology.
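For concreteness, the antimicrobial-use density defined here is a simple rate; a sketch follows (the figures are illustrative, not data from the project):

```python
def use_density(total_ddd, patient_days):
    """Antimicrobial-use density: defined daily doses per 1,000 patient-days."""
    return 1000.0 * total_ddd / patient_days

# e.g. 3,200 DDDs dispensed over 21,500 patient-days (hypothetical figures)
print(round(use_density(3200, 21500), 1))   # -> 148.8 DDD/1,000 patient-days
```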
Antimicrobial resistance and use varied tremendously among the eight hospitals. Antimicrobial resistance among these five nosocomial pathogens was significantly higher within the inpatient setting of these hospitals than in the outpatient setting. One hospital consistently ranked highest for use of all classes of antimicrobials examined. High antimicrobial use was not necessarily associated with high resistance for a particular antimicrobial-organism pair.
Antimicrobial use varied significantly across these hospitals but was generally higher in ICUs. These results suggest that concomitant surveillance of both antimicrobial resistance and antimicrobial use is helpful in interpreting antimicrobial resistance in a hospital or ICU, and that further analysis is required to determine the role of variables other than antimicrobial use in a statistical model for predicting antimicrobial resistance.
Thirty-two N'Dama bulls aged 3 to 4 years were used to study the interactions between work, trypanosomosis and nutrition. The bulls were randomly allocated to two treatments, working (W) and non-working (N). Half in each treatment were placed on an andropogon hay basal diet (B), the other half on a better quality groundnut hay diet (H). Five days a week, four pairs of animals in the BW group and four pairs in the HW group pulled weighted sledges four times around a 2056-m track. Loads were set to ensure energy expenditure was equivalent to 1·4 times maintenance. After 4 weeks all 32 bulls were injected intradermally with 10⁴ Trypanosoma congolense organisms. The trial continued for a further 8 weeks.
Trypanosome infection caused a significant (P < 0·001) decline in packed cell volume (PCV), and the anaemia was more severe (P < 0·05) in working animals; three pairs in the HW group and two pairs in the BW group were withdrawn because PCV fell below 17%. Diet had no effect on PCV or parasitaemia. Infection caused a decline in food intake (P < 0·001) but with significant interactions between diet and work. Intake patterns were similar in the BN and BW groups, whilst the HW animals consumed significantly more groundnut hay compared with the HN group (P < 0·01). However, nutrition had no significant effect on lap times or the team's ability to work under trypanosomosis challenge. Post-infection, diet was the dominant factor determining weight change: HN and HW animals weighed significantly more than BN and BW animals (P < 0·01), and the interaction between period, diet and work demonstrated that BW had the lowest weights in the latter stages of the trial (P < 0·05).
The results suggest that supplementation with better quality forages confers no benefit to an animal infected with trypanosomes. Nor can trypanotolerant cattle sustain long periods of work if subjected to a primary challenge of trypanosomes.
The impetus for a biomimetic approach to mineralization stems from the need for increasingly sophisticated materials showing greater efficiency, specialization, and optimization—properties that ultimately depend on the control of molecular and supramolecular structure, and hence on methods of predictive chemical fabrication. Biomineralization is of central importance to the development of new approaches in materials science because, as discussed in the preceding article by Fink, the formation of bioinorganic materials, such as bones, shells, and teeth is highly regulated and responsive to the surrounding environment in a manner not achieved by conventional synthetic routes. Some possible areas of overlap are shown in Figure 1. As in the other areas of biomaterials discussed in this and next month's issue of the MRS Bulletin, there are two potential connections between the natural processes of biomineralization and the synthetic demands of materials science; first, the commercial exploitation of biologically derived, tailored materials, and second, the assimilation and adaptation of biological concepts and mechanisms into “artificial” materials design and synthesis. The former is an extension of biotechnology, by which microbial systems could be utilized to produce mineral powders. Some of the possible processes have been discussed elsewhere. In general, the use of biological sources is only applicable where the high production costs are offset by a marketable specialty product. While this is feasible for organic-based products such as polyhydroxybutyrate (see next month's MRS Bulletin) it imposes a severe limitation when we transfer the approach to biomineralization.
I call this “Lerner's Problem” although it has emerged in recent literature under other labels, particularly “The Incentive Problem for a Soviet-style Manager.” I think that a brief digression to explain my title may be in order.
I imagine that many students of my generation were brought up, as I was, to speak (in an oral tradition) of “Lerner–Lange socialism.” Even a casual re-reading of their work (Lerner, 1944; Lange, 1936/7) is sufficient to show that the hyphen is inappropriate. Lange's contribution was to suggest that the Central Planning Board (henceforth CPB) should announce prices rather than quantities. This is in the tradition of Taylor (1929) rather than of Barone (1908). The advantages of announcing prices rather than quantities were two, one perhaps trivial, the other more serious. The first is that it might be technically easier to solve the dual than the primal problem. This, we now know, is a little trivial: if we have the information and computational facilities to solve one, we can solve the other. The second advantage is important. Lange, like Taylor, thought that by announcing only prices, the CPB could achieve a large measure of decentralization: given prices, economic agents could be left to “get on with it,” maximizing on their own account, and thus reproducing, or “choosing,” the quantities which would have emerged from a solution to the primal problem, without any need for concern with implementation.
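To fix ideas, the primal/dual pair in question is the standard one from linear programming (a textbook statement, not the author's notation): the primal chooses quantities, the dual chooses prices.

\[
\begin{aligned}
\text{primal:}\quad & \max_{x \ge 0}\ c^{\top}x \quad \text{subject to } Ax \le b,\\
\text{dual:}\quad & \min_{y \ge 0}\ b^{\top}y \quad \text{subject to } A^{\top}y \ge c.
\end{aligned}
\]

Strong duality gives \(c^{\top}x^{*} = b^{\top}y^{*}\) at the optimum, so the quantity solution and the shadow-price solution carry the same information; this is the sense in which solving one problem solves the other.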
The object of ch. 6 is not, I fear, to offer any serious solution to the general problem of the Second Best. It has three much less ambitious objects. The first is to suggest a new use for a familiar Criterion Function, the simple “cost-benefit” criterion of the project-evaluation literature (see Hammond, 1980). The second is to clear up operationally the old question of which definition of complementarity is appropriate in dealing with Second-Best solutions (on which see Corlett and Hague, 1954; Meade, 1955a and 1955b; and of course Lipsey and Lancaster, 1957). The third is to illustrate the use of adaptive control methods in a general rather than, as hitherto, a purely partial-equilibrium model.
We have had occasion to notice, in ch. 5, two familiar reasons for thinking that the standard First-Best solution is unattainable and/or undesirable: aversion to risk and preference for leisure. In the explicit Second-Best literature neither of these difficulties has been addressed: it has rather been assumed that the reason for First Best being unattainable is lack of instruments: that the government will not alter a tax or tariff, or cannot control an important monopolist. Here I shall follow the Second-Best literature: for illustrative purposes, I shall ignore both risk and effort. First Best will be unattainable due only to lack of instruments with which to control a monopolist; and, though a certain non-convexity in the technology will be assumed, it will be only to facilitate exposition.
The axiom of selfishness and the Two Theorems of Welfare Economics
The preferences attributed to individuals in welfare economics are usually assumed to satisfy the axiom of selfishness – that is, each individual is assumed to order consumption bundles for himself without regard to anyone else's preferences or actual consumption, and is said to be better off if he receives a more preferred bundle. There are two reasons for doubting if this is a satisfactory foundation for individualistic welfare economics. The first is empirical: it is doubtful if people are, always and everywhere, so purely selfish. The second is that it is hard to find much force in normative prescription for a world in which all agents are, by assumption, amoral. (The difficulty of a utilitarianism that accepts the axiom as descriptively accurate but goes on to recommend policy on utilitarian moral grounds is well known.) It therefore seems worth trying to relax this axiom if we can: we might gain in positive content and add moral force to normative individualistic prescription. We may indeed drop it, but we must enquire into the cost of doing so, and with what we may replace it.
Since Arrow (1951a), two outstanding contributions by Edgeworth (1881) have commonly been called the First and Second Theorems of Welfare Economics. The First Theorem is that any competitive equilibrium is a Pareto-optimum. The Second is that any optimal allocation can be supported by competitive prices if the initial endowment is appropriate.
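Stated compactly, with the usual hypotheses made explicit (a standard modern formalisation, not the author's notation):

\[
\text{(I)}\quad (x^{*},p^{*})\ \text{a competitive equilibrium, preferences locally non-satiated} \;\Longrightarrow\; x^{*}\ \text{is Pareto-optimal};
\]
\[
\text{(II)}\quad x^{*}\ \text{Pareto-optimal, preferences convex} \;\Longrightarrow\; \exists\, p^{*}\ \text{and endowments}\ \omega\ \text{such that}\ (x^{*},p^{*})\ \text{is a competitive equilibrium}.
\]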
In ch. 4, I describe the application of a real-time iterative control process to the control of a producer–producer externality. The example is very simple. It may, however, serve several purposes here. First, it provides an opportunity to display in some detail the mechanics of such a process, which may be unfamiliar to some readers. Second, it allows us to check the process against the list of necessary or desirable properties given in section 3.3 above. Third – and perhaps most important – it provides an opportunity to state and consider some necessary, and restrictive, technological assumptions. Consideration of strategic behavior is deferred to ch. 5. Ch. 4 concludes with some brief remarks on the relationship between the control problem studied here and the more general problem of Second Best.
It may be helpful to explain at the beginning who “we” are. “We” are government or some agency thereof, perhaps a Ministry of Production. We might even be the members of the Planning Commission for some industry or group of industries who have decided that the best way to discharge their office is to make it redundant. We are, in any event, in charge of designing the control process.
While methods of deriving optimal tax functions to control producer–producer externalities, at least in simple cases, are very well known, the computation of rates has remained a difficult problem.
As the assumptions of the model have been altered, the Second Theorem may so far have appeared quite robust. It survives the introduction of extended preferences if they are assumed to satisfy the non-paternalist condition. Introduction of effort-aversion certainly destroys the optimality of the simple Rule, but not by automatic implication the Theorem: there is no obvious reason why it should not survive in a properly constructed model (which I do not provide). Introduction of risk is another matter; we lose not only the Rule but, in the absence of complete and costless information, the possibility of attaining First Best at all. It remains to consider increasing returns to scale, which, in the consideration of Lerner's Problem (ch. 5), I was careful to postpone. A sufficient reason was that a partial-equilibrium framework is inadequate to the treatment of this problem. The important result, which requires general-equilibrium analysis, is that, in the presence of non-convexities, the “divorce,” seemingly justified by the Second Theorem, between considerations of efficiency and of distribution, or equity, cannot be made. There is, of course, quite another reason for thinking this divorce impossible (which will not be further explored here). If the simple notion of the representative consumer cannot be employed (see again Blackorby, Davidson, and Schworm, 1991), then any Criterion Function employed to judge any change must aggregate preferences in a fashion that must depend on value judgments.
The purpose of ch. 7 is to illustrate two further uses of step-wise controls. Other examples might have been chosen. Thus an iterative control may be used to force a profit-maximizing multiproduct firm to Ramsey prices (see Finsinger and Vogelsang, 1979). Ramsey prices themselves may require some justification, but after our consideration of the Rule may be thought of as at least having some plausible rationality. The iterative scheme proposed by Finsinger and Vogelsang is, of course, informationally decentralized in the sense discussed in ch. 3; we need no information which is not obtained easily during the process. We have the usual trade-off between the possibilities of overshoot (lack of monotonicity) and strategic behavior.
Another attractive, but probably frustrating, example comes to mind: application to the control of a common-property, open-access, renewable resource such as a fishery. The target is obviously rent. Without control, open access ensures that this is dissipated in over-fishing. Most methods of control depend on constructing expensive, and usually unreliable, models of the fish population, and imposing quotas, restricting fishing methods, or both, incurring further resource cost, both for enforcement and for the required fishing methods (to say nothing of the problems of international agreement and enforcement on the high seas). Rent is then dissipated in all directions: it is certainly not collected. It would seem easy and natural to impose a tax on fish landed, and to endow the tax clerk with a simple little algorithm by which to adjust the rate until its yield was maximized.
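A minimal sketch of such an algorithm (entirely hypothetical: the adjustment rule, the function names, and the toy revenue curve are my own illustration, not the author's):

```python
def adjust_tax_rate(observe_yield, rate=1.0, step=0.25, periods=40, shrink=0.5):
    """The clerk's rule: nudge the landing tax each period, keep moves that
    raise observed revenue, otherwise reverse direction and damp the step.
    observe_yield(rate) returns that period's tax revenue at the given rate."""
    revenue = observe_yield(rate)
    for _ in range(periods):
        trial = max(rate + step, 0.0)
        trial_revenue = observe_yield(trial)
        if trial_revenue > revenue:
            rate, revenue = trial, trial_revenue   # keep moving this way
        else:
            step *= -shrink                        # overshoot: back off
    return rate

# Toy revenue curve t -> t * max(10 - 2t, 0), maximized at t = 2.5
print(round(adjust_tax_rate(lambda t: t * max(10.0 - 2.0 * t, 0.0)), 2))
```

As the surrounding chapters stress, such step-wise rules trade monotonicity against overshoot and invite strategic behavior; a real fishery would add stochastic stocks and response lags on top of that.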