Computer scientists Jonathan Gratch and Stacy Marsella trace the influence of the 1988 publication of the OCC model, noting its impact on computational models of the interplay between emotion and cognition and on practical applications such as those relating to emotion recognition and the generation of emotion-related behaviors. They explain how OCC’s detailing of specific rules for reasoning about emotions invigorated work in affective computing and artificial intelligence more generally by giving computer scientists a clear pathway for modeling emotion processes. The model, they suggest, had both “upstream influences” on work concerning the cognitive antecedents of emotions and “downstream influences” on work focused on modeling some of the consequences of emotions, such as those concerned with coping and decision-making and their relation to changes in beliefs, desires, and intentions. Finally, at the level of the sociology of science, the authors suggest that the OCC model contributed significantly to bringing together the emotion research community in psychology and the computer science communities interested in modeling affective phenomena.
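As a purely illustrative sketch of the kind of appraisal rule the OCC model made computationally tractable (the function and variable names below are hypothetical and not drawn from the chapter), an event-based OCC rule can be expressed as a simple mapping from appraisal variables to emotion types:

```python
def occ_event_emotion(desirability: float, is_prospective: bool) -> str:
    """Toy OCC-style appraisal rule for event-based emotions (illustrative only).

    desirability > 0 : the event is appraised as desirable for the agent
    desirability < 0 : the event is appraised as undesirable
    is_prospective   : True if the event is a prospect that has not yet occurred
    """
    if is_prospective:
        # Prospect-based emotions: hope for desirable prospects, fear for undesirable ones.
        return "hope" if desirability > 0 else "fear"
    # Well-being emotions for events that have already occurred.
    return "joy" if desirability > 0 else "distress"


# Example: an undesirable event that might still happen is appraised as fear.
print(occ_event_emotion(desirability=-0.8, is_prospective=True))  # -> fear
```

In fuller OCC-inspired computational models, appraisal variables such as desirability and likelihood would also determine emotion intensity, not just the emotion label.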
Giving rise to a diffuse liability that potentially involves a long chain of actors (the designers and managers of the system, the authority that authorised it, the vehicle manufacturer, the intelligent road network operator and the driver), autonomous driving challenges liability law’s requirement to establish fault, or at least accountability. Accordingly, the complex system of algorithms that makes autonomous driving possible disrupts these classical mechanisms of liability, which appear unable to meet the contemporary concern of guaranteeing compensation to the victims of accidents caused by these vehicles.
The notion of constitutive equations (or laws) is elucidated together with the meaning of related notions such as material constants, calibration and response envelopes. Also discussed is why a comparison of constitutive equations is conceptually difficult.
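As an illustration of these notions (not part of the abstract itself), a classical constitutive equation is linear isotropic elasticity, whose material constants are fixed by calibration against experiments:

\[
\boldsymbol{\sigma} = \lambda\,\operatorname{tr}(\boldsymbol{\varepsilon})\,\mathbf{I} + 2\mu\,\boldsymbol{\varepsilon},
\]

where the Lamé constants \(\lambda = \frac{E\nu}{(1+\nu)(1-2\nu)}\) and \(\mu = \frac{E}{2(1+\nu)}\) are the material constants, calibrated from a measured Young’s modulus \(E\) and Poisson’s ratio \(\nu\); a response envelope is then the set of stress responses this equation predicts for strain probes of equal magnitude applied in different directions.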
Drones represent a rapidly developing industry. Devices initially designed for military purposes have evolved into a new area with a plethora of commercial applications. One of the biggest hindrances to the commercial development of drones is legal uncertainty concerning the legal regimes applicable to the multitude of issues that arise with this new technology. This is especially prevalent in situations concerning autonomous drones (ie drones operating without a pilot). This article provides an overview of some of these uncertainties. A scenario based on the fictitious but plausible event of an autonomous drone falling from the sky and injuring people on the ground is analysed from the perspectives of both German and English private law. This working scenario is used to illustrate the problem of legal uncertainty facing developers, and the article provides valuable knowledge by mapping real uncertainties that impede the development of autonomous drone technology, alongside providing multidisciplinary insights from law as well as software, electronic, and computer engineering.
Artificial intelligence (AI) represents an emerging technology with beneficial potential for the maritime domain, understood to encompass all natural and manmade features, events, or activities appertaining to the seas, oceans or other navigable waterways. It is not a single technology but a continuum of capabilities designed to synergize computational processing power with human creativity. This chapter introduces key AI concepts, including, but not limited to, algorithms, reinforcement learning, deep learning and artificial general intelligence. Science has not yet achieved sentient machines, and fully autonomous vessels may not become commonplace for a number of years; nevertheless, current AI technologies offer risk-reduction methodologies to human-crewed vessels operating in dynamic and often dangerous conditions. In general, AI can enhance compliance with the law of the sea and reduce marine casualties. Specifically, this chapter proposes that AI technologies should be adopted to facilitate safer navigation through improved hydrographic services and AI-supported decision-making for vessel masters and human crews at sea.
Immunohistochemistry has progressed from its humble beginnings as an experimental technique to what is now considered a routine and essential analytical tool. Technologies available today in the research sphere may be adopted for standard practice in the near future. Multiplex assays can help pathology facilities extract more information from less tissue, and improved amplification methods may reveal ultra-low-expressing proteins that have previously escaped detection. New forms of 'tagging' antibodies using nucleotides and quantum dots may replace traditional chromogen and fluorochrome protocols. Next-generation immunohistochemistry enlisting mass spectrometry principles not only to localize antigens in tissue but also to quantify the amount present has real clinical potential. Digital pathology and whole-slide imaging have made significant progress, to the point of being financially viable. All of these developments are standing at the doorway to the future of immunohistochemistry. Whether they are accepted and implemented in diagnostic laboratories remains to be seen. It is exciting to witness the continuing progression of immunohistochemistry.
Mass-casualty incidents (MCIs) are events in which many people are injured during the same period of time. This has major implications for practical concerns and planning, both for personnel and for medical equipment. Smart glasses are modern tools that could help Emergency Medical Services (EMS) estimate the number of potential patients in an MCI. However, there is currently no study on the advantage of using smart glasses in MCIs in Thailand.
Study Objective:
This study aims to compare the overall accuracy and the amount of time required when using smart glasses versus manual counting to assess the number of casualties at the scene.
Methods:
This study was a randomized controlled trial, conducted as a field-exercise experimental study in the EMS unit of Srinagarind Hospital, Thailand. The participants were divided into two groups (those using smart glasses and those doing manual counting). On the days of the simulation (February 25 and 26, 2022), the participants in the smart glasses group received a 30-minute training session on the use of the smart glasses. After that, both groups of participants independently counted the number of casualties on the simulation field.
Results:
Sixty-eight participants were examined; in the smart glasses group, 58.8% (N = 20) of the participants were male and the mean age was 39.4 years. The most common length of EMS experience in the smart glasses group was four to six years (44.1%). The smart glasses group achieved its highest accuracy when assessing 21-30 casualties (98.0%), compared with 89.2% in the manual counting group. Additionally, the smart glasses group required less time than the manual counting group to tally 11-20 casualties (6.3 versus 11.2 seconds; P = .04) and 21-30 casualties (22.1 versus 44.5 seconds; P = .02).
Conclusion:
The use of smart glasses to assess the number of casualties in MCIs is useful when the number of patients is between 11 and 30, offering greater accuracy and requiring less time than manual counting.
There is no doubt that students’ emotional states influence their general well-being and their learning success. Although recent advances in computer hardware and Artificial Intelligence (AI) techniques have made it possible to include real-time emotion-sensing as one of the ways to improve students’ learning experience and performance, there are many challenges that might inhibit or delay the deployment and usage of emotional learning analytics (LA) in education. In this chapter, we critically review the current state of the art of emotion detection techniques, analysis and visualisation, the benefits of emotion analysis in education, and the ethical issues surrounding emotion-aware systems in education. Finally, we hope that our guidelines on how to tackle each of those issues can support research in this area.
This chapter is set against the backdrop of a rapidly changing world that brings considerable challenges and possibilities for UK higher education. While the world of work is transitioning to Industrial Digitalisation (I4.0) technologies, the widespread lack of relevant skills among academics in a number of non-STEM disciplines is a fundamental impediment to harnessing the power of I4.0 in learning and teaching. Furthermore, there is no clear direction for how to start the process of curriculum innovation. To guide academics in non-STEM subjects, a three-step heuristic model for embedding core digitalisation competencies in the non-STEM curriculum is introduced. As well as seeking to bring about curricular change by empowering academics to take the first steps in embedding disciplinary relevant digitalisation competencies, this chapter intends to stimulate discussion about how universities can best produce graduates with the skillset and mindset to critique, understand and find spaces to thrive in digitalisation-informed workplaces.
Adaptive learning is not new. Yet, with the rapid advances in the field of artificial intelligence and the global shift to online education, which occurred almost overnight due to the Covid-19 pandemic, adaptive learning is seen by many as a promising path to a smarter higher education. This chapter aims to shed light on both the opportunities and the challenges of the adoption of adaptive learning in higher education. The cases of universities, business schools and corporate universities adopting this approach are used to illustrate the role and benefits of adaptive learning powered by artificial intelligence at the current stage. The chapter concludes with an open call for academic experts and regulatory agencies to hold adaptive learning up as the defining shift in the future of education. It will likely become the established approach in education, filling the existing gaps in knowledge procurement.
The past few years have seen a rapid global scramble among governments to develop strategic plans for Artificial Intelligence (AI). We focus in this chapter on four of the largest global economies and what their AI plans mean for higher education. Since 2016, China, the EU, the US, and the UK have all published strategic plans for AI (Fa, 2017; Hall and Pesenti, 2017; USA Government, 2019; European Commission, 2020). The speed with which these plans appeared indicates a sense of urgency and high governmental priority. This chapter discusses how these plans differ, what we might look out for, and the likely developments ahead for higher education. Higher education is of central importance to meeting the coming AI economy’s demands, and embracing AI is broadly considered a transformative existential requirement for many universities’ survival (Aoun, 2017). How governments and their higher learning institutions are planning this transformation is of the utmost importance to us all.
Teaching formats have constantly evolved over the years to adapt to newer methods of student learning. In ancient times, under the ‘Gurukul system’, students would go and live with a teacher and learn all that the teacher knew and practiced, by listening and observing. The method adopted here was more a tacit-to-tacit knowledge transfer between the teacher and the students. Learning was a function of the student’s ability to absorb skill and knowledge, and evaluation was a function of a real demonstration of the student’s ability. This has changed over the last hundred years, as a more formal schooling system has evolved that brings teachers and students together in one place, with knowledge transferred explicitly in the form of a prescribed curriculum. Learning in a ‘classroom’ environment is more formatted, and evaluation is based on formal assessments. In recent years, much interest has developed in innovating teaching formats so that ‘student-centric learning’ can be practiced in a classroom environment. As described by Kaplan (2021) in this book’s first chapter, changes triggered by the COVID-19 pandemic have impacted the education sector extensively, giving rise to a digital mode of instruction as the new normal. The digital platform, and the transformation it can bring to practice, has created new interest among the teaching community in exploring better teaching formats.
In this editorial article, we aim to map out the central features of algorithmic regulation and its conceptual basis – seeking to bring together different strands of the literature relating to the topic that have often remained apart. We then reflect on the ways through which algorithmic law could evolve to address the challenges of artificial intelligence in the legal domain, particularly by examining the potential of applying a “prudential” test in order to determine whether automated decision-making systems are suitable to adequately support legal decision-making.
This chapter analyses the path leading the Union to shift from a liberal approach to a democratic constitutional strategy to address the consolidation of platform powers. It aims to explain the reasons for this paradigmatic shift, looking at content and data as the two paradigmatic areas through which to examine the rise of a new phase of European digital constitutionalism. The chapter focuses on three phases: digital liberalism, judicial activism and digital constitutionalism. The first part frames the first steps taken by the Union in the phase of digital liberalism at the end of the last century. The second part analyses the role of judicial activism in moving the attention from fundamental freedoms to fundamental rights online in the aftermath of the adoption of the Lisbon Treaty. The third part examines the shift in the approach of the Union towards a constitutional democratic strategy and the consolidation of European digital constitutionalism.
Human behaviour is increasingly governed by automated decisional systems based on machine learning (ML) and ‘Big Data’. While these systems promise a range of benefits, they also throw up a congeries of challenges, not least for our ability as humans to understand their logic and ramifications. This chapter maps the basic mechanics of such systems, the concerns they raise, and the degree to which these concerns may be remedied by data protection law, particularly those provisions of the EU General Data Protection Regulation that specifically target automated decision-making. Drawing upon the work of Ulrich Beck, the chapter employs the notion of ‘cognitive sovereignty’ to provide an overarching conceptual framing of the subject matter. Cognitive sovereignty essentially denotes our moral and legal interest in being able to comprehend our environs and ourselves. Focus on this interest, the chapter argues, fills a blind spot in scholarship and policy discourse on ML-enhanced decisional systems, and is vital for grounding claims for greater explicability of machine processes.
This chapter introduces the research questions, methodology and structure of the book. The first part defines the primary goal of the book, which consists of reframing the role of constitutional democracies in the algorithmic society, and introduces the notion of digital constitutionalism. The second part underlines the path of constitutionalisation which has led to the rise of multiple entities expressing their norms and spaces. The third part underlines the talent of European constitutionalism to react against the consolidation of digital powers, while the fourth and fifth parts define the research questions and the structure of the book.
This chapter argues that the characteristics of European digital constitutionalism would lead to a third way that escapes polarisation. The primary goal of this chapter is to underline how the talent of European digital constitutionalism would promote sustainable growth of the internal market while protecting fundamental rights and democratic values in the long run. The first part of this chapter focuses on the relationship between digital humanism and digital capitalism, underlining the potential path characterising the European approach to artificial intelligence technologies. The second part examines how European digital constitutionalism would lead to a third way between public authority and private ordering. The third part underlines to what extent the Union would likely extend the scope of its constitutional values to address the global challenges of artificial intelligence technologies. Once the chapter has addressed the potential road ahead for European digital constitutionalism, the fourth part summarises the primary findings of this research.
This chapter proposes using Artificial Intelligence (AI) to reposition the place of the child in society. Advancements in digital technology and applied statistical analysis offer an opportunity to dislodge the largely entrenched view of the child as an inferior rights holder. As currently positioned, the child’s power is derived from the parent(s) or legal guardian(s). This currently accepted derivative power structure limits the child’s autonomy to wield power independently from the parent. This structure was successful in the past. However, technological advances and the modern child’s dependence on digital resources require a re-examination of this parent-based derivative power structure. Parents may now have less capability to perform protective and preparatory duties owed to children in the digital context. The role of the parent as gatekeeper for participatory rights in the modern digital context is critiqued, and the ability of AI to alleviate this problem is proposed.
This element shows, based on a review of the literature, how digital technology has affected liberal democracies, with a focus on three key aspects of democratic politics: political communication, political participation, and policy-making. The impact of digital technology permeates the entire political process, affecting the flow of information among citizens and political actors, the connection between the mass public and political elites, and the development of policy responses to societal problems. This element discusses how digital technology has shaped these different domains, identifies areas of research consensus as well as unresolved questions, and argues that a key perspective involves issue definition, that is, how the nature of the problems raised by digital technology is subject to political contestation.
In recent years, the concept of ‘prototype warfare’ has been adopted by Western militaries to accelerate the experimental development, acquisition, and deployment of emerging technologies in warfare. Building on scholarship at the intersection of Science and Technology Studies and International Relations investigating the broader discursive and material infrastructures that underpin contemporary logics of war, and taking a specific interest in the relationship between science, technology, and war, this article points out how prototype warfare captures the emergence of a new regime of warfare, which I term the experimental way of warfare. While warfare has always been defined by experimental activity, what is particular in the current context is how experimentation spans across an increasingly wide range of military practices, operating on the basis of a highly speculative understanding of experimentation that embraces failure as a productive force. Tracing the concept of prototype warfare across Western military discourse and practice, and zooming in on how prototype warfare takes experimentation directly into the battlefield, the article concludes by outlining how prototype warfare reconfigures and normalises military intervention as an opportunity for experimentation, while outsourcing the failures that are a structural condition of the experimental way of warfare to others, ‘over there’.