Radiocarbon (14C) measurements undertaken by the NERC Radiocarbon Laboratory using accelerator mass spectrometry (AMS) are now freely available on a new online database. The data presented cover measurements of the wide range of sample types that are processed for research projects in the fields of Earth and environmental science, supported by the United Kingdom’s Natural Environment Research Council. Sample types within the database include organic remains, soils, sediments, carbonates, dissolved organic and inorganic carbon, and carbon dioxide. Currently, the database contains 14C data for over 2400 individual samples that were measured and reported between 2005 and 2013, but it is envisaged that this will expand considerably as more data are made available. Contextual information such as sampling location and associated publications is provided where available, and searches can be performed on sample location, sample type, project number, and publication code. This new database complements an existing, publicly available database of measurements performed using radiometric methods by the laboratory, which has recently been expanded to present over 2000 measurements. It is hoped that this archive will prove useful to workers in the community who would benefit from greater availability of measurements for particular locations or sample types, and for the purposes of performing meta-analyses and/or syntheses of larger datasets.
In many countries scientists planning research that may involve the use of animals are required by law to examine the possibilities for replacement, reduction or refinement (the Three Rs) of these experiments. In addition to the large number of literature databases, there are now many specialist databases specifically addressing the Three Rs. Information centres, with a mandate to assist scientists and lay people in locating information on the Three Rs, have also been established. Email discussion lists and their archives constitute another, although less quality-controlled, source of information. Furthermore, guidelines for the care and use of animals in research have been produced by both regulatory bodies and scientific organisations. The growth of the internet has put an enormous amount of data into the public domain, and the problems of accessing relevant information are discussed. Suggestions are also given for search strategies when using these information sources.
Researchers searching for alternatives to painful procedures that involve animals may find that the dispersed relevant literature and the array of databases make the search challenging and even onerous. This paper addresses a significant gap that exists for researchers, in identifying appropriate databases to use when searching for specific types of information on alternatives. To facilitate the efficient and effective searching by users, and to ensure compliance with new requirements and improved science, we initiate an evolving guide comprising search grids of database resources organised by animal models and topics (http://www.vetmed.ucdavis.edu/Animal_Alternatives/databaseapproach.html). The search grids present organised lists of specific databases and other resources for each animal model and topic, with live links. The search grids also indicate resources that are freely available worldwide, and those that are proprietary and available only to subscribers. The search grids are divided into two categories: ‘animal models’ and ‘topics’. The category ‘animal models’ comprises: animal model selections; mice; rodents — rats/guinea pigs/hamsters; rabbits; dogs, cats; farm animals, sheep, swine; non-human primates; fish, frogs, aquatic; and exotics. The category ‘topics’ comprises: husbandry; behaviour; euthanasia; toxicity; monoclonal antibodies; teaching; endpoints; disease models; analgesia/anaesthesia; emerging technologies; strategies for specific intervention procedures; and standard operating procedures (for example, drawing blood, behavioural training, transportation, handling, restraint and identification). Users are provided with a selected list of linked resources relevant to their particular search. Starting with an appropriate database that covers the type of information that is being sought is the first step in conducting an effective search that can yield useful information to enhance animal welfare.
In 1981, the Scientific Committee on Antarctic Research (SCAR) endorsed a program for ship-based collection of Antarctic iceberg data, to be coordinated by the Norwegian Polar Institute (NPI). Icebergs were recorded by most research vessels during the austral summers 1982/1983 to 1997/1998, and by fewer vessels up to 2009/2010. The NPI database makes up 80% of the SCAR International Iceberg Database presented here, the remainder being Australian National Antarctic Research Expedition observations. The database contains positions of 374 142 icebergs resulting from 34 662 observations. Within these, 298 235 icebergs are classified into different size categories. The ship-based data are particularly useful because they include systematic observations of smaller icebergs not covered by current satellite-based datasets. Here, we assess regional and seasonal variations in iceberg density and total quantities, we identify drift patterns and exit zones from the continent, and we discuss iceberg dissolution rates and calving rates. There are significant differences in the extent of icebergs observed over the 30-plus years of observations, but much of these can be ascribed to differences in observation density and location. In the summer, Antarctic icebergs >10 m in length number ~130 000, of which 1000 are found north of the Southern Ocean boundary.
A substantial database of published excavation and other reports has been used to map the character and distribution in Roman Britain of whetstones, those unprepossessing implements essential in the home, farmstead, workshop and barracks for the maintenance of edge-tools and weapons. The quality of the geological identifications in the reports varies considerably, but a wide range of lithologies are reported as put to use: granite, basalts-dolerites, lava, tuff, mica-schist, slates/phyllites, Brownstones, Pennant sandstone, micaceous sandstones, grey sandstones/siltstones, Millstone Grit, Coal Measures, red sandstones, ferruginous sandstones, sarsen, Weald Clay Formation sandstones, sandy limestones, shelly limestones, cementstones, and (Lower) Carboniferous Limestone. On distributional evidence, some of these categories are aliases for alternatively and more familiarly named lithologies. Bringing ‘high-end’ products to the market, the long-running industry based on sandstones from the Weald Clay Formation (Lower Cretaceous) emerges as a British economic feature, evidenced from the Channel coast to the Scottish Borders, and with a recently demonstrated, substantial representation on the Roman near-continent. The distribution maps point to another and more complete British industry, based on the Brownstones (Old Red Sandstone, Devonian) and Pennant sandstone (Upper Carboniferous), outcropping close together in the West Country. A more systematic and geology-based treatment of excavated whetstones in the future is likely to yield yet more insights into the role these artefacts played in the economy of Roman Britain.
Chapter 18 covers the licensing of commercial software, data and databases. It begins with a discussion of database protection, including the history of legislative attempts to cover data and databases under copyright law, and the divergent pathways taken in the US and EU. It next addresses commercial data and database licensing practices (MD Mark v. Kerr-McGee, Eden Hannon v. Sumitomo). The chapter next discusses data privacy and protection regulations and their implementation in licensing agreements. It then moves to commercial software licensing, discussing the distinctions between source and object code, and offering some background on the legal protection of computer software through copyright, patent and trade secret law. Particular software licensing practices are discussed next, including provisions necessary to license source code, maintenance and support obligations, and reverse engineering restrictions. It concludes with a discussion of licensing in the cloud and how different companies have approached this business model.
Carbon reduction is an important process for Earth-like origins of life events and of great interest to the astrobiology community. In this paper, we have collected experimental results, field work and modelling data on CO and CO2 reduction in order to summarize the research that has been carried out particularly in relation to the early Earth and Mars. By having a database of this work, researchers will be able to clearly survey the parameters tested and find knowledge gaps wherein more experimentation would be most beneficial. We focused on reviewing the modelling parameters, field work and laboratory conditions relevant to Mars and the early Earth. We highlight important areas addressed as well as suggest future work needed, including identifying relevant parameters to test in both laboratory and modelling work. We also discuss the utility of organizing research results in such a database in astrobiology.
The Pediatric Cardiac Critical Care Consortium (PC4) is a multi-institutional quality improvement registry focused on the care delivered in the cardiac ICU for patients with CHD and acquired heart disease. To assess data quality, a rigorous procedure of data auditing has been in place since the inception of the consortium.
Materials and methods:
This report describes the data auditing process and quantifies the audit results for the initial 39 audits that took place after the transition from version one to version two of the registry’s database.
Results:
In total, 2219 encounters were audited, an average of 57 encounters per site. The overall data accuracy rate across all sites was 99.4%, with a major discrepancy rate of 0.52%. A passing score is based on an overall accuracy of >97% (achieved by all sites) and a major discrepancy rate of <1.5% (achieved by 38 of 39 sites, with 35 of 39 sites having a major discrepancy rate of <1%). Fields with the highest discrepancy rates included arrhythmia type, cardiac arrest count, and current surgical status.
Conclusions:
The extensive PC4 auditing process, including initial and routinely scheduled follow-up audits of every participating site, demonstrates an extremely high level of accuracy across a broad array of audited fields and supports the continued use of consortium data to identify best practices in paediatric cardiac critical care.
Student counselling services are at the forefront of providing mental health support to Irish Higher Education students. Since 1996, the Psychological Counsellors in Higher Education in Ireland (PCHEI) association, through their annual survey collection, has collected aggregate data for the sector. However, to identify national trends and effective interventions, a standardised non-aggregate sectoral approach to data collection is required. The Higher Education Authority funded project, 3SET, builds on the PCHEI survey through the development of a national database. In this paper, we outline the steps followed in developing the database, identify the parties involved at each stage and contrast the approach taken to the development of similar databases. Important factors shaping the development have been the autonomy of counselling services, compliance with General Data Protection Regulation, and the involvement of practitioners. This is an ongoing project with the long-term sustainability of the database being a primary objective.
Between 2018 and 2020 the Kipot ja kielet [Beakers and Speakers] project (KiKi) collected a typological database of archaeological artefacts in Finland and a typological linguistic database of Uralic languages. Both datasets will be accessible through a public online interface (URHIA) from 2021. The data will help integrate Finnish- and Uralic-speaking areas into global perspectives on human history.
Administrative databases (ADs) are repositories of administrative and clinical data related to patient contact episodes with all sorts of health facilities (primary care, hospitals, pharmacies, etc.). The large number of patients and contact episodes with pharmaceutical facilities available, the systematic and broad registration, and the fact that ADs provide real-world data are some of the advantages of using AD data.
Objectives
To perform a narrative review on the role of Big Data pharmaceutical registries in Mental Health research.
Methods
We conducted a narrative review using the MEDLINE and Google Scholar databases in order to analyse the current literature regarding the role of Big Data pharmaceutical registries in Mental Health research.
Results
Administrative variables such as drug names and prices may be used and linked to other clinical variables such as patients’ diagnoses, in-hospital mortality and length of stay, among others. The use of electronic medical records may also contribute to systematic surveillance approaches such as local or national pharmacovigilance strategies, identification of patients at risk of developing complications, and software pop-up warnings related to medication dosage, duplication and side effects. The use of Big Data pharmaceutical registries makes it possible to create predictive epidemiological models of drug side effects or interactions and may help in performing phase 4 pharmacovigilance clinical trials. It may also be applied to the optimisation of clinical decision-making, the monitoring of adverse drug events, drug cost and administrative monitoring, and surrogate measures of quality-of-care indicators.
Conclusions
The use of Big Data in pharmaceutical registries allows the collection of large and important clinical and administrative data that may later be used in Mental Health care and research.
Substantial progress has been made in the standardization of nomenclature for paediatric and congenital cardiac care. In 1936, Maude Abbott published her Atlas of Congenital Cardiac Disease, which was the first formal attempt to classify congenital heart disease. The International Paediatric and Congenital Cardiac Code (IPCCC) is now utilized worldwide and has most recently become the paediatric and congenital cardiac component of the Eleventh Revision of the International Classification of Diseases (ICD-11). The most recent publication of the IPCCC was in 2017. This manuscript provides an updated 2021 version of the IPCCC.
The International Society for Nomenclature of Paediatric and Congenital Heart Disease (ISNPCHD), in collaboration with the World Health Organization (WHO), developed the paediatric and congenital cardiac nomenclature that is now within the eleventh version of the International Classification of Diseases (ICD-11). This unification of IPCCC and ICD-11 is the IPCCC ICD-11 Nomenclature and is the first time that the clinical nomenclature for paediatric and congenital cardiac care and the administrative nomenclature for paediatric and congenital cardiac care are harmonized. The resultant congenital cardiac component of ICD-11 was increased from 29 congenital cardiac codes in ICD-9 and 73 congenital cardiac codes in ICD-10 to 318 codes submitted by ISNPCHD through 2018 for incorporation into ICD-11. After these 318 terms were incorporated into ICD-11 in 2018, the WHO ICD-11 team added an additional 49 terms, some of which are acceptable legacy terms from ICD-10, while others provide greater granularity than the ISNPCHD thought was originally acceptable. Thus, the total number of paediatric and congenital cardiac terms in ICD-11 is 367. In this manuscript, we describe and review the terminology, hierarchy, and definitions of the IPCCC ICD-11 Nomenclature. This article, therefore, presents a global system of nomenclature for paediatric and congenital cardiac care that unifies clinical and administrative nomenclature.
The members of ISNPCHD realize that the nomenclature published in this manuscript will continue to evolve. The version of the IPCCC that was published in 2017 has evolved and changed, and it is now replaced by this 2021 version. In the future, ISNPCHD will again publish updated versions of IPCCC, as IPCCC continues to evolve.
The documentation and analysis of archaeological lithics must navigate a basic tension between examining and recording data on individual artifacts or on aggregates of artifacts. This poses a challenge both for artifact processing and for database construction. We present here an R Shiny solution that enables lithic analysts to enter data for both individual artifacts and aggregates of artifacts while maintaining a robust yet flexible data structure. This takes the form of a browser-based database interface that uses R to query existing data and transform new data as necessary so that users entering data of varying resolutions still produce data structured around individual artifacts. We demonstrate the function and efficacy of this tool (termed the Queryable Artifact Recording Interface [QuARI]) using the example of the Stelida Naxos Archaeological Project (SNAP), which, focused on a Paleolithic and Mesolithic chert quarry, has necessarily confronted challenges of processing and analyzing large quantities of lithic material.
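QuARI itself is an R Shiny application; its core transformation, expanding an aggregate entry into individual artifact records so that all data end up structured around single artifacts, can be sketched in a few lines. The Python sketch below uses hypothetical field names (context, type, count) rather than the actual QuARI schema:

```python
def expand_aggregate(entry):
    """Expand one aggregate lithics entry into per-artifact records.

    `entry` is a hypothetical dict such as
    {"context": "Trench A", "type": "flake", "count": 3, "raw_material": "chert"};
    shared attributes are copied onto each individual artifact record.
    """
    count = entry.get("count", 1)
    shared = {k: v for k, v in entry.items() if k != "count"}
    return [dict(shared, artifact_no=i + 1) for i in range(count)]

# An aggregate entry ("three chert flakes from Trench A") becomes three
# artifact-level rows, the same shape a single-artifact entry would produce.
rows = expand_aggregate(
    {"context": "Trench A", "type": "flake", "count": 3, "raw_material": "chert"}
)
```

Because both entry modes converge on the same artifact-level rows, queries and analyses never need to distinguish how the data were originally recorded.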
Chapter 2 is a deep dive into the threads that link the chemical and mineralogical makeup of soils to that of the surrounding cosmos. The first section examines elements across the periodic table, and how they systematically change in abundance due to a variety of cosmic processes. The second part of the chapter examines how the elements are combined into minerals, and especially the silicate minerals. Discussion of the factors that dictate mineral stability in soils is introduced, and these will be examined in even greater depth in Chapter 8. Secondary minerals are also introduced, again with a strong focus on the silicate group. Cation exchange is examined. The chapter ends with the effect of plants, and biology, on soil chemistry, which is expanded upon in Chapter 3. The activities at the conclusion of the chapter give students an opportunity to work with spreadsheets and gain experience and confidence in data analysis.
Multicentre research databases can provide insights into healthcare processes to improve outcomes and make practice recommendations for novel approaches. Effective audits can establish a framework for reporting research efforts, ensuring accurate reporting, and spearheading quality improvement. Although a variety of data auditing models and standards exist, barriers to effective auditing, including costs, regulatory requirements, travel, and design complexity, must be considered.
Materials and methods:
The Congenital Cardiac Research Collaborative (CCRC) conducted a virtual data training initiative and remote source data verification audit on a retrospective multicentre dataset. CCRC investigators across nine institutions were trained to extract and enter data into a robust dataset on patients with tetralogy of Fallot who required neonatal intervention. Centres provided de-identified source files for a randomised 10% patient sample audit. Key auditing variables, discrepancy types, and severity levels were analysed across two study groups, primary repair and staged repair.
Results:
Of the total 572 study patients, data from 58 patients (31 staged repairs and 27 primary repairs) were source data verified. Amongst the 1790 variables audited, 45 discrepancies were discovered, resulting in an overall accuracy rate of 97.5%. High accuracy rates were consistent across all CCRC institutions, ranging from 94.6% to 99.4%, and were reported for both minor (1.5%) and major (1.1%) discrepancy classifications.
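The reported figures are internally consistent if overall accuracy is read as the share of audited variables without a discrepancy. A quick check of that reading (a sketch of the arithmetic, not the CCRC’s published formula):

```python
variables_audited = 1790
discrepancies = 45

# accuracy = share of audited variables with no discrepancy of any severity
accuracy_pct = 100 * (1 - discrepancies / variables_audited)
print(round(accuracy_pct, 1))  # 97.5, matching the reported overall rate
```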
Conclusion:
Findings indicate that implementing a virtual multicentre training initiative and remote source data verification audit can identify data quality concerns and produce a reliable, high-quality dataset. Remote auditing capacity is especially important during the current COVID-19 pandemic.
Knowledge about political representatives' behavior is crucial for a deeper understanding of politics and policy-making processes. Yet resources on legislative elites are scattered, often specialized, limited in scope or not always accessible. This article introduces the Comparative Legislators Database (CLD), which joins micro-data collection efforts on open-collaboration platforms and other sources, and integrates with renowned political science datasets. The CLD includes political, sociodemographic, career, online presence, public attention, and visual information for over 45,000 contemporary and historical politicians from ten countries. The authors provide a straightforward and open-source interface to the database through an R package, offering targeted, fast and analysis-ready access in formats familiar to social scientists and standardized across time and space. The data is verified against human-coded datasets, and its use for investigating legislator prominence and turnover is illustrated. The CLD contributes to a central hub for versatile information about legislators and their behavior, supporting individual-level comparative research over long periods.
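As an illustration of what ‘analysis-ready and standardized across time and space’ implies in practice, namely tables keyed on a shared legislator identifier that merge cleanly, the Python sketch below joins two toy tables; every field name and value is hypothetical rather than the CLD’s actual schema or the API of its R package:

```python
# Hypothetical rows in the spirit of the CLD's standardized tables.
core = [
    {"legislator_id": "L001", "name": "A. Example", "country": "DEU"},
    {"legislator_id": "L002", "name": "B. Sample", "country": "GBR"},
]
sessions = [
    {"legislator_id": "L001", "session": 19, "party": "X"},
    {"legislator_id": "L002", "session": 57, "party": "Y"},
]

# Because both tables share a standardized identifier, merging is a
# simple keyed lookup rather than error-prone fuzzy name matching.
by_id = {row["legislator_id"]: row for row in core}
merged = [dict(by_id[s["legislator_id"]], **s) for s in sessions]
```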
Despite the training and skills of airway managers, airway management complications still occur and may cause patient harm or death. The causes are multifactorial and may include patient, environment and clinician factors. Airway complications likely contribute to a significant proportion of deaths due to anaesthesia and are certainly more common outside the operating theatre and especially in the critical care unit. Reported incidences of failure and harm during airway management vary depending on the population studied and definitions used. Numbers may be of less value than understanding themes that help us improve care and reduce harm. The chapter emphasises that conventional research (e.g. device evaluation studies and randomised controlled trials) may be of little use in identifying low frequency events and complications because of their restricted inclusion and exclusion criteria, the use of devices only by experts and in conventional settings and because of their focus on efficacy rather than safety. The chapter highlights the important and growing role of registries and databases. Several are described in detail including the 4th National Audit Project and the Dutch ‘mini-NAP’. The value and limitations of litigation databases are explored. Specific complications of note are described at the end of the chapter.
Species inventories are essential to the implementation of conservation policies to mitigate biodiversity loss and maintain ecosystem services and their value to society. This is particularly topical with respect to climate change and direct anthropogenic effects on Antarctic biodiversity, with the identification of the most at-risk taxa and geographical areas becoming a priority. Identification tools are often neglected and considered helpful only for taxonomists. However, the development of new online information technologies and computer-aided identification tools provides an opportunity to promote them to a wider audience, especially considering the emerging generation of scientists who apply an integrative approach to taxonomy. This paper aims to clarify essential concepts and provide convenient and accessible tools, tips and suggested systems to use and develop knowledge bases (KBs). The software Xper3 was selected as an example of a user-friendly KB management system to give a general overview of existing tools and functionalities through two applications: the ‘Antarctic Echinoids’ and ‘Odontasteridae Southern Ocean (Asteroids)’ KBs. We highlight the advantages provided by KBs over more classical tools and discuss future potential uses, including the production of field guides to aid in the compilation of species inventories for biodiversity conservation purposes.
Digital technology has had a profound and generally beneficial effect on dictionaries and other language-reference tools. Electronic dictionaries continue to evolve and it seems likely that for people born in the current century and beyond, ‘dictionary’ may cease to have its primary denotation as a thick book filled with a list of alphabetised words and their definitions. The idea of the dictionary developed over centuries to its place of privilege in the mid-twentieth century: an authoritative book that could be found in nearly every home. In the decades since then, the idea of the dictionary has rapidly evolved to become, especially for today’s digital natives, an amorphous collection of data that lives in the cloud and that should be quickly retrievable by anyone who desires to find the definition of a word they don’t know, using whatever device they have at hand. In their efforts to become the newest, best, and most dazzling, makers of electronic dictionaries today must not lose sight of the fact that the core need of their user is a simple one that can be met with a simple solution, provided to them with what is now relatively simple technology.
This chapter reviews the transformative effects of technology on dictionary-making, focusing on four main areas: the use of databases for storing and organising dictionary text; the creation and exploitation of corpora for use as the dictionary’s evidence base; the enhancement of the value and usability of corpus data through the application of software tools developed in the NLP (natural language processing) community; and the migration of dictionaries from print to online media. During the last half-century, activity in all these areas has brought fundamental changes to the way dictionaries are created and made available to their users. We trace the development of corpus-based lexicography in English, from the early work of John Sinclair and his colleagues in the 1980s to the present day. Lexicographers working in English and other widely used languages now have access to resources which would scarcely have been imaginable thirty years ago: very large corpora (measured in tens of billions of words) and sophisticated corpus-querying tools are routinely available. The move from print to digital publication is a more recent development, but no less significant. The far-reaching implications of these changes – for dictionary-makers and dictionary-users alike – are explored at every stage.