Mental health services in the UK National Health Service have evolved to include primary-care generalist, secondary-care generalist and secondary-care specialist services. We argue that there continues to be an important role for the secondary-care generalists as they minimise interfaces, can live with diagnostic uncertainty and support continuity of care. The lack of commissioning and clinical boundaries in secondary-care generalist services can undermine their feasibility, leading to difficulties recruiting and retaining staff. There is a risk of a polo-mint service, where the specialist services on the edge are well resourced, but the secondary-care generalist services taking the greatest burden struggle to recruit and retain clinicians. We need to establish equity in resources and expectations between generalist and specialist mental health services.
Despite global deterioration of coral reef health, not all reef-associated organisms are in decline. Bioeroding sponges are thought to be largely resistant to the factors that stress and kill corals, and are increasing in abundance on many reefs. However, there is a paucity of information on how environmental factors influence spatial variation in the distribution of these sponges, and how they might be affected by different stressors. We aimed to identify the factors that explained differences in bioeroding sponge abundance and assemblage composition, and to determine whether bioeroding sponges benefit from the same environmental conditions that can contribute towards coral mortality. Abundance surveys were conducted in the Wakatobi region of Indonesia on reefs characterized by different biotic and abiotic conditions. Bioeroding sponges occupied an average of 8.9% of available dead substrate, and variation in abundance and assemblage composition was primarily attributed to differences in the availability of dead substrate. Our results imply that if dead substrate availability increases as a consequence of coral mortality, bioeroding sponge abundance is also likely to increase. However, bioeroding sponge abundance was lowest on a sedimented reef, despite abundant dead substrate. This suggests that not all forms of coral mortality will benefit all bioeroding sponge species, and sediment-degraded reefs are likely to be dominated by a few resilient bioeroding sponge species. Overall, we demonstrate the importance of understanding the drivers of bioeroding sponge abundance and assemblage composition in order to predict possible impacts of different stressors on reef communities.
The runtime for a modern, concurrent, garbage collected language like Java or Haskell is like an operating system: sophisticated, complex, performant, but alas very hard to change. If more of the runtime system were in the high-level language, it would be far more modular and malleable. In this paper, we describe a novel concurrency substrate design for the Glasgow Haskell Compiler that allows multicore schedulers for concurrent and parallel Haskell programs to be safely and modularly described as libraries in Haskell. The approach relies on abstracting the interface to the user-implemented schedulers through scheduler activations, together with the use of Software Transactional Memory to promote safety in a multicore context.
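The scheduler-as-library idea can be illustrated with a small STM sketch: a run queue of the kind a user-implemented scheduler might maintain. The names below (`RunQueue`, `enqueue`, `dequeue`) are hypothetical illustrations of the style, not the actual GHC concurrency-substrate API described in the paper.

```haskell
import Control.Concurrent.STM

-- An STM-protected FIFO run queue. Because operations are STM
-- transactions, they compose safely with other transactional state
-- a library-level scheduler might keep, even on multicore.
newtype RunQueue a = RunQueue (TVar [a])

newRunQueue :: IO (RunQueue a)
newRunQueue = RunQueue <$> newTVarIO []

-- Append a task to the back of the queue.
enqueue :: RunQueue a -> a -> STM ()
enqueue (RunQueue tv) x = modifyTVar' tv (++ [x])

-- Take the next task, blocking (via STM retry) until one is available.
dequeue :: RunQueue a -> STM a
dequeue (RunQueue tv) = do
  xs <- readTVar tv
  case xs of
    []       -> retry
    (y : ys) -> writeTVar tv ys >> pure y
```

Using `retry` rather than explicit locks means a blocked dequeue is automatically rewoken when another core commits an `enqueue`, which is one way STM promotes safety in this setting.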
Higher-order languages that encourage currying are typically implemented using one of two basic evaluation models: push/enter or eval/apply. Implementors use their intuition and qualitative judgements to choose one model or the other. Our goal in this paper is to provide, for the first time, a more substantial basis for this choice, based on our qualitative and quantitative experience of implementing both models in a state-of-the-art compiler for Haskell. Our conclusion is simple, and contradicts our initial intuition: compiled implementations should use eval/apply.
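The choice matters because currying hides a function's arity at many call sites. The sketch below (function names are illustrative, not from the paper) shows the three source-level situations any implementation must handle; under eval/apply it is the caller that inspects arity, making a direct call when saturated and building a partial application otherwise.

```haskell
-- A curried function of arity three.
add3 :: Int -> Int -> Int -> Int
add3 x y z = x + y + z

-- Saturated call to a known function: the compiler can jump
-- straight to the function's code.
saturated :: Int
saturated = add3 1 2 3

-- Under-application: fewer arguments than the arity, so the
-- implementation must build a partial-application closure.
partial :: Int -> Int
partial = add3 1 2

-- Call of an unknown function: arity is not statically known,
-- so a generic apply mechanism is needed.
unknown :: (Int -> Int) -> Int
unknown f = f 10
```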
Server applications, and in particular network-based server applications, place a unique
combination of demands on a programming language: lightweight concurrency, high I/O
throughput, and fault tolerance are all important. This paper describes a prototype web
server written in Concurrent Haskell (with extensions), and presents two useful results: firstly,
a conforming server could be written with minimal effort, leading to an implementation in
less than 1500 lines of code, and secondly the naive implementation produced reasonable
performance. Furthermore, making minor modifications to a few time-critical components
improved performance to a level acceptable for anything but the most heavily loaded web servers.
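The lightweight-concurrency pattern such a server relies on, one cheap Haskell thread per client connection, can be sketched as below. Here `acceptClient` and `serveClient` are hypothetical stand-ins for the real socket operations, passed in so the sketch stays self-contained.

```haskell
import Control.Concurrent (forkIO)
import Control.Monad (forever)

-- The classic accept loop: block for the next connection, then hand
-- it to a freshly forked Haskell thread. Threads in GHC are so cheap
-- that one-thread-per-connection scales to thousands of clients.
serverLoop :: IO conn -> (conn -> IO ()) -> IO ()
serverLoop acceptClient serveClient = forever $ do
  conn <- acceptClient
  _ <- forkIO (serveClient conn)
  pure ()
```

A real server would wrap `serveClient` with exception handling so one misbehaving connection cannot bring down the accept loop, which is where the fault-tolerance demand enters.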
Higher-order languages such as Haskell encourage the programmer to build abstractions by
composing functions. A good compiler must inline many of these calls to recover an efficiently
executable program. In principle, inlining is dead simple: just replace the call of a function by
an instance of its body. But any compiler-writer will tell you that inlining is a black art, full
of delicate compromises that work together to give good performance without unnecessary
code bloat. The purpose of this paper is, therefore, to articulate the key lessons we learned
from a full-scale “production” inliner, the one used in the Glasgow Haskell compiler. We
focus mainly on the algorithmic aspects, but we also provide some indicative measurements
to substantiate the importance of various aspects of the inliner.
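The "dead simple" core of inlining can be shown directly. The before/after pair below is an illustration of the transformation in general, not code from the GHC inliner; note how the `let` preserves sharing of the argument, one of the compromises that keeps inlining from duplicating work.

```haskell
-- Before inlining: a small function and a call site.
square :: Int -> Int
square x = x * x

before :: Int -> Int
before n = square (n + 1)

-- After inlining 'square' at the call site, the compiler produces
-- the equivalent of the following. The argument is bound with a
-- 'let' rather than substituted twice, so (n + 1) is computed once.
after :: Int -> Int
after n = let x = n + 1 in x * x
```

Deciding *when* such a rewrite is profitable, given code-size growth and the risk of losing sharing, is exactly the black art the paper sets out to articulate.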