Quantitative plant biology is a growing field, thanks to the substantial progress of models and artificial intelligence dealing with big data. However, collecting large enough datasets is not always straightforward. The citizen science approach can multiply the workforce, thereby helping researchers with data collection and analysis, while also spreading scientific knowledge and methods to volunteers. The reciprocal benefits go far beyond the project community: by empowering volunteers and increasing the robustness of scientific results, the scientific method spreads to the socio-ecological scale. This review aims to demonstrate that citizen science has huge potential (i) for science, through the development of different tools to collect and analyse much larger datasets, (ii) for volunteers, by increasing their involvement in project governance, and (iii) for the socio-ecological system, by increasing the sharing of knowledge thanks to a cascade effect and the help of ‘facilitators’.
Managing healthcare data is a major challenge for today’s medicine. Artificial intelligence and big data tools have helped address questions related to this topic. However, the wide heterogeneity of psychiatric consultation records limits retrospective analysis of these data, owing to missing information and differences between specialists.
We aim to develop a platform that allows the structured recording of medical care data (based on dementia) while maintaining the flexibility and format needed for clinical practice in psychiatry.
We developed a web-based platform for the structured and semi-structured recording of psychiatric evaluations. The instrument is diagnosis-oriented (our version is built around dementia). We used core outcome sets and expert opinion to identify the outcomes relevant to care.
A web-based platform is presented for the care of people with suspected dementia at different levels of care, designed to record information of interest for research while also being of clinical utility for closer follow-up.
This strategy allows the proposal to be extended to other pathologies of interest. Moreover, with the integration of recommendation algorithms, a monitoring and recommendation system could be built to promote knowledge of psychiatric illness from routine practice. This proposal intends to have an impact by increasing the quality of care, reducing care times, and providing better approaches from primary care systems.
There is limited knowledge about the potential role of machine learning (ML) in quality improvement of psychiatric care.
Our case study set out to determine whether ML decision trees applied to patient databases are suitable for focussing mental healthcare quality audits on specific patient population samples. Populations were identified by patient and care provider variables and by the time of treatment. Outcomes were defined as hospital mortality, over-long hospitalization (more than 1 SD or 2 SD above the mean), and short hospitalization (more than 1 SD below the mean; under 3 days).
We performed a train/test split in Python for our outcomes on national mental health inpatient turnover data (2010 through 2018 for training, 2019 for testing). A well-fitting decision tree had an area under the receiver operating characteristic (ROC) curve (AUC) >= 0.7 and a specificity >= 0.9. Performing qualitative analyses of the decision trees, we rejected those with little clinical relevance.
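As a rough illustration of the kind of workflow described above (not the authors’ actual code), the following sketch trains a decision tree on pre-2019 admissions and evaluates it on 2019 data against the AUC and specificity thresholds; the file name, outcome column, and feature list are hypothetical placeholders rather than the study’s variables.

```python
# Minimal sketch: fit a decision tree on 2010-2018 admissions, evaluate on 2019
# with AUC and specificity. File name, columns and features are hypothetical.
import pandas as pd
from sklearn.tree import DecisionTreeClassifier
from sklearn.metrics import roc_auc_score, confusion_matrix

df = pd.read_csv("inpatient_turnover.csv")             # hypothetical national extract
features = ["age", "sex", "unit_type", "provider_id"]  # illustrative patient/provider variables
X = pd.get_dummies(df[features])
y = df["hospital_death"]                                # one of the studied outcomes

train_mask = df["year"].between(2010, 2018)             # training years
test_mask = df["year"] == 2019                          # held-out test year

tree = DecisionTreeClassifier(max_depth=4, random_state=0)
tree.fit(X[train_mask], y[train_mask])

proba = tree.predict_proba(X[test_mask])[:, 1]
pred = tree.predict(X[test_mask])
tn, fp, fn, tp = confusion_matrix(y[test_mask], pred).ravel()

auc = roc_auc_score(y[test_mask], proba)
specificity = tn / (tn + fp)
print(f"AUC={auc:.2f}  specificity={specificity:.2f}")
# A tree is kept for qualitative review only if AUC >= 0.7 and specificity >= 0.9.
```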
Decision trees fit well (AUC = 0.7 to 0.9; specificity = 0.7 to 1.0; sensitivity = 0 to 0.69). For hospital death cases, the decision tree had an AUC of 0.86, showed no difference after controlling for the types of hospital units, and was clinically relevant. Models predicting over-long hospitalization fit well (AUC = 0.9); however, after controlling for care pathways, both the good fit and the sensitivity vanished. No valid models emerged for short hospitalizations. The decision trees revealed unique combinations of variables.
Our ML decision trees used on healthcare databases proved promising for focussing quality audit efforts. Narrative analysis for the clinical contexts of the decision trees is indispensable.
Patients with schizophreniform disorder (SD) and schizophrenia present similar symptoms; however, SD has a shorter duration, lasting at least 1 month but less than 6 months.
To describe and analyse schizophreniform disorder-related hospitalizations in a national hospitalization database.
We performed a retrospective observational study using a nationwide hospitalization database containing all hospitalizations registered in Portuguese public hospitals from 2008 to 2015. Hospitalizations with a primary diagnosis of schizophreniform disorder were selected based on the International Classification of Diseases, Ninth Revision, Clinical Modification (ICD-9-CM) diagnosis code 295.4x. Birth date, sex, residence address, primary and secondary diagnoses, admission date, discharge date, length of stay (LoS), discharge status, and hospital charges were obtained. Comorbidities were analysed using the Charlson Index Score. Independent-sample t-tests were performed to assess differences in continuous variables with a normal distribution, and Mann-Whitney U tests were used when no normal distribution was registered.
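For illustration only, a minimal sketch of the sex-based comparisons described above might look like the following; the data file and column names are hypothetical, and which test applies to which variable depends on the normality checks reported in the study.

```python
# Illustrative sketch: a t-test for a variable treated as normally distributed
# and a Mann-Whitney U test otherwise. File and column names are placeholders.
import pandas as pd
from scipy import stats

df = pd.read_csv("hospitalizations_icd9_2954x.csv")   # hypothetical extract of selected episodes

males = df[df["sex"] == "M"]
females = df[df["sex"] == "F"]

# Independent-sample t-test, e.g. for age at admission if normally distributed.
t_stat, p_age = stats.ttest_ind(males["age_at_admission"], females["age_at_admission"])

# Mann-Whitney U test, e.g. for length of stay, which is typically skewed.
u_stat, p_los = stats.mannwhitneyu(males["length_of_stay"], females["length_of_stay"])

print(f"Age: t = {t_stat:.2f}, p = {p_age:.4f}")
print(f"LoS: U = {u_stat:.0f}, p = {p_los:.4f}")
```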
In Portuguese public hospitals, a total of 594 hospitalizations with a primary diagnosis of schizophreniform disorder were registered during the 8-year study period. Most involved male patients (72.1%; n = 428). The mean age at admission was 35.99 years and differed significantly between sexes (males: 34.44; females: 40.19; p < 0.001). The median LoS was 17.00 days and in-hospital mortality was 0.5% (n = 3). Only 6.1% (n = 36) of the hospitalization episodes had 1 or more registered comorbidities.
Hospitalizations with a primary diagnosis of schizophreniform disorder occur more frequently in young male patients. This is the first nationwide study analysing all such hospitalization episodes in Portugal.
This chapter sets out the main functions of artificial intelligence (AI), the Internet of Things (IoT) and distributed ledger technology (DLT) within the environmental sphere, and how these technologies can be combined to create more effective climate solutions through weather-pattern prediction, increased transparency and accounting, technical feasibility and carbon-market resilience.
To look forward, it is necessary to look back and learn from the past. Hence this diverse collection of histories provides an excellent opportunity to reflect on what a future assessment might look like globally. Across the regions the themes for the future of assessment were similar. These centered around the need to adapt tests, to incorporate more local or emic assessments, as well as the need to use more indigenous knowledge. There was a clear narrative of a lack of training and resources in countries that would typically be described as developing or as low- to middle-income countries. This need was also evident in countries that are relatively developed but engaged with psychological assessment later in their histories in comparison to the United States, Britain, France, and Germany, for example. Typically, countries that were colonized, especially those colonized by Britain, showed more developments in the field of assessment as they had more contact with early developments in the field. This chapter reflects on the international histories of assessment and provides an overarching view on what we assess, why we assess, who assesses, and how we assess. In so doing the chapter presents a possible blueprint for the way forward for assessment in the global village that will be equally accessible and applicable to all.
This manuscript presents a novel approach to the study of contemporary material culture using digital data. Scholars interested in the materiality of past and contemporary societies have been limited to information derived from assemblages of excavated, collected, or physically observed materials; they have yet to take full advantage of large or complex digital datasets afforded by the internet. To demonstrate the power of this approach and its potential to disrupt our understanding of the material world, we present a study of an ongoing global health crisis, the COVID-19 pandemic. In particular, we focus on face-mask production during the pandemic across the United States in 2020 and 2021. Scraping information on homemade face-mask characteristics at multimonth intervals—including location and materials—we analyze the production of masks and their change over time. We demonstrate that this new methodology, coupled with a sociopolitical examination of mask use according to state policies and politicization, provides an unprecedented avenue to understand the changing distributions and social significances of material culture. Our study of mask making elucidates a clear linkage between partisan politics and decreasing disease mitigation effectiveness. We further reveal how time-averaged assemblages drown out the political meanings of artifacts otherwise visible with finer temporal resolution.
The Fourth Industrial Revolution (4IR) is reshaping the globe at a rate far quicker than earlier revolutions. It is also having a greater influence on society and industry. We are currently witnessing extraordinary technology such as self-driving cars and 3D printing, as well as robots that can follow exact instructions. And hitherto unconnected sectors are combining to achieve unfathomable effects. It is critical to comprehend this new era of technology since it will significantly alter life during the next several years in this age of technological advancement. In particular, one of the most significant findings is that 4IR technologies must be used responsibly and to benefit people, companies and countries as a whole; as a result, the development of artificial intelligence, the Internet of Things, blockchain, and robotics systems will be advanced most effectively by grouping a multidisciplinary team from areas such as computer science, education and social sciences.
Focusing on the big data elements of cybersecurity, this chapter looks at the landscape of the big data technologies and the complexities of the different types of data, including spatial and graph data. It outlines examples in these complex data types and how they can be evaluated using data analytics.
Quality assurance and enhancement exercises are important in higher education. In the past, curriculum assurance and enhancement exercises relied primarily on raw assessment data and self-reports, and lacked follow-up mechanisms to gauge their effectiveness. This paper reports on an impact study of a curriculum review exercise using both digitalised data and self-reported data. Both the original review and its impact study were conducted on an English Programme in a Hong Kong university taken by around 6,000 students each year. Both adopted a learning analytics approach with digitalised behavioural and assessment data. Results of the impact study, which is the focus of this paper, demonstrate the strength of using learning analytics, including its capability for inter-course and intra-course investigations. Learning analytics can also empirically confirm and/or refute concerns reported by teachers and students. The use of digitalised data for learning analytics offers opportunities to implement and follow up on quality assurance measures.
Recently, voices have been raised regarding the challenges of Big Data-driven global approaches, including the realization that exclusively tackling the global scale masks social and historical realities. While multi-scalar analyses have confronted this problem, the effects of global approaches are being felt. We highlight one of these effects: as classical scholarship struggles to decolonize itself, the ancient Mediterranean in global archaeology pivots around the Graeco-Roman world only, marginalizing the non-classical Mediterranean, thus foiling attempts at promoting post-colonial perspectives. In highlighting this, our aim is twofold: first, to invigorate the debate on multi-scalar approaches, proposing to incorporate microhistory into archaeological analysis; second, to use the non-classical Mediterranean to demonstrate that historical depth at the micro level is essential to augmenting the interpretive power of our analyses.
This chapter is based on two standard reference corpora, the British National Corpus and the Corpus of Contemporary American English, as opposed to the multi-billion-word database of Google Books Ngrams, which has, despite its allure, not been used in many systematic linguistic studies so far. Focusing on indefinite article allomorphy (a vs an) as an orthographic cue to the phonological strength of ‹h›-onsets in British and American English, the size advantage of the Ngrams database expectedly plays out in larger type and token counts, more stable estimates and fewer distortions due to data sparsity. However, as metadata are extremely limited (to year and variety), a fully accountable analysis is not feasible. The case study illustrates how richly annotated corpora can shed light on potential disturbances arising from two sources: genre differences and between-author variability. A sensitivity analysis offers some degree of reassurance when extending the analysis to the Ngrams database. In this way, the authors demonstrate that the strengths and limitations of corpora and big data resources can, with due caution, be counterbalanced to answer questions of linguistic interest.
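As a toy illustration of the basic measurement behind this case study, the sketch below tallies “a” versus “an” before ‹h›-initial words in a plain-text sample; the file name is a hypothetical stand-in, and real work with the BNC, COCA or the Ngrams database would of course go through their own query interfaces.

```python
# Toy sketch: count "a" vs "an" before h-initial words in a plain-text sample,
# reading the "an"-rate as an orthographic cue to a weakly pronounced h-onset.
# "sample_corpus.txt" is a hypothetical stand-in for real corpus access.
import re
from collections import Counter

text = open("sample_corpus.txt", encoding="utf-8").read().lower()

counts = Counter()
for article, word in re.findall(r"\b(an?)\s+(h\w+)", text):
    counts[(word, article)] += 1

for word in sorted({w for w, _ in counts}):
    a, an = counts[(word, "a")], counts[(word, "an")]
    print(f"{word}: a={a}, an={an}, an-rate={an / (a + an):.2f}")
```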
This chapter underlines how, in the field of data, European digital constitutionalism would not suggest introducing new safeguards but providing a teleological interpretation of the GDPR unveiling its constitutional dimension. The first part of this chapter focuses on the rise and consolidation of data protection in the European framework. The second part addresses the rise of the big data environment and the constitutional challenges introduced by automated decision-making technologies. The third part focuses on the GDPR, underlining the opportunities and challenges of European data protection law concerning artificial intelligence. This part aims to highlight to what extent the system of the GDPR can ensure the protection of the right to privacy and data protection in relation to artificial intelligence technologies. The fourth part underlines the constitutional values underpinning the GDPR to provide a constitutional interpretation of how European data protection law, as one of the more mature expressions of European digital constitutionalism, can mitigate the rise of unaccountable powers in the algorithmic society.
The COVID-19 pandemic has highlighted that leveraging medical big data can help to better predict and control outbreaks from the outset. However, there are still challenges to overcome in the 21st century to efficiently use medical big data, promote innovation and public health activities and adequately protect individuals’ privacy. The metaphor that property is a “bundle of sticks” applies equally to medical big data. Understanding medical big data in this way raises a number of questions, including: Who has the right to make money off its buying and selling, or is it inalienable? When does medical big data become sufficiently stripped of identifiers that the rights of an individual concerning the data disappear? How have different regimes such as the General Data Protection Regulation in Europe and the Health Insurance Portability and Accountability Act in the US answered these questions differently? In this chapter, we will discuss three topics: (1) privacy and data sharing, (2) informed consent, and (3) ownership.
Chapter 7 is the conclusion. We provide a short and selective synopsis of our argument and briefly review, and elaborate on, the empirical illustrations from previous chapters. Theoretically, we suggest that cross-class solidarity, which has sometimes been linked to dense networks of civic associations, is likely to originate in low information and encompassing social insurance programs. The chapter also discusses promising avenues for future research.
Chapter 1 introduces the topic and motivates our study. It explains the general logic of our argument and introduces the methods and evidence we rely on. The chapter gives an overview of the book’s organization and main insights and hence serves as a preview.
A core principle of the welfare state is that everyone pays taxes or contributions in exchange for universal insurance against social risks such as sickness, old age, unemployment, and plain bad luck. This solidarity principle assumes that everyone is a member of a single national insurance pool, and it is commonly explained by poor and asymmetric information, which undermines markets and creates the perception that we are all in the same boat. Living in the midst of an information revolution, this is no longer a satisfactory approach. This book explores, theoretically and empirically, the consequences of 'big data' for the politics of social protection. Torben Iversen and Philipp Rehm argue that more and better data polarize preferences over public insurance and often segment social insurance into smaller, more homogenous, and less redistributive pools, using case studies of health and unemployment insurance and statistical analyses of life insurance, credit markets, and public opinion.
Prior research has emphasized the importance of dynamic capabilities to organizational transformation. In this paper, we explore how dynamic capabilities can have varying roles in change, and only potentially create transformational outcomes. By conducting ethnographic, phenomenon-driven research and observing the interactions of specific customer-data-related capabilities over a long period of time, we relate the potential for change to the way in which capabilities interact, and identify three different mechanisms for change. Transformation requires a disruption of existing operational capabilities, which may result from one of the three identified mechanisms. Introducing a more theoretically consistent and practical taxonomy for (dynamic) capabilities may help in resolving some of the criticisms of their unclear practical implications. Further, our findings underline the importance of studying capabilities in their networks within organizations and over time.
Modern digital life has produced big data in businesses and organizations. Deriving information for decision-making from these enormous datasets requires a lot of work at several levels. The storage, transmission, processing, mining, and serving of big data create problems for digital domains. Despite several efforts to implement big data in businesses, basic issues remain, particularly in big-data management (BDM). Cloud computing, for example, provides companies with well-suited, cost-effective, and consistent on-demand services for big data and analytics. This paper introduces modern systems for organizational BDM and analyzes the latest research on managing organization-generated data using cloud computing. The findings revealed several benefits of integrating big data and cloud computing, the most notable of which are increased company efficiency and improved international trade. This study also highlighted some hazards in the sophisticated computing environment. Cloud computing has the potential to significantly improve corporate management and accountants' jobs. This article's major contribution is to discuss the demands, advantages, and problems of using big data and cloud computing in contemporary businesses and institutions.
Computational information processing has gradually supplanted traditional records and recordkeeping for the physical record, undermining practices centered on the “moral defense” of the record and replacing them with practices centered on datafication. Prioritizing data malleability rather than the defense of information from manipulation and corruption has, this chapter argues, contributed to the current diminution of the trustworthiness of information and an unraveling of society’s evidentiary foundations. Fields such as archival science and the law have long considered questions of how records may testify to the events and actions of which they form a part – serving as proofs of claims, that is, as evidentials – but research in the field of computing has only relatively recently focused on these issues. Despite its roots in computing culture, blockchain technology offers the promise of an immutable ledger that may halt the processes of datafication contributing to the current widespread potential for manipulation of records. The design and spirit of blockchains – offering the ability to cryptographically “fix” the record, chaining it in place so that any tampering is extremely difficult and immediately evident – hark back to a pre-digital past when the materiality of paper records more readily fixed in place transactional “facts” and protected their integrity from manipulation.
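To make the chaining idea concrete, here is a minimal, purely illustrative hash-chain sketch in Python (not a description of any particular blockchain system): each entry commits to the hash of its predecessor, so any later alteration of a past record is immediately evident on verification.

```python
# Toy hash-chained ledger: each entry stores the hash of the previous entry,
# so retroactive tampering breaks the chain and verification fails.
import hashlib
import json

def make_entry(record, prev_hash):
    body = {"record": record, "prev": prev_hash}
    digest = hashlib.sha256(json.dumps(body, sort_keys=True).encode()).hexdigest()
    return {**body, "hash": digest}

def verify(chain):
    for i, entry in enumerate(chain):
        body = {"record": entry["record"], "prev": entry["prev"]}
        recomputed = hashlib.sha256(json.dumps(body, sort_keys=True).encode()).hexdigest()
        linked = (i == 0) or (entry["prev"] == chain[i - 1]["hash"])
        if entry["hash"] != recomputed or not linked:
            return False
    return True

ledger = [make_entry("contract signed", prev_hash="0" * 64)]
ledger.append(make_entry("payment received", prev_hash=ledger[-1]["hash"]))

print(verify(ledger))                   # True: the record is "fixed" in place
ledger[0]["record"] = "contract voided"
print(verify(ledger))                   # False: tampering is immediately evident
```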