
Part VII - Responsible AI Healthcare and Neurotechnology Governance

Published online by Cambridge University Press:  28 October 2022

Silja Voeneky, Albert-Ludwigs-Universität Freiburg, Germany
Philipp Kellmeyer, Medical Center, Albert-Ludwigs-Universität Freiburg, Germany
Oliver Mueller, Albert-Ludwigs-Universität Freiburg, Germany
Wolfram Burgard, Technische Universität Nürnberg

The Cambridge Handbook of Responsible Artificial Intelligence: Interdisciplinary Perspectives, pp. 377–444
Publisher: Cambridge University Press
Print publication year: 2022
This content is Open Access and distributed under the terms of the Creative Commons Attribution licence CC-BY-NC-ND 4.0 https://creativecommons.org/cclicenses/

22 Medical AI: Key Elements at the International Level

Fruzsina Molnár-Gábor and Johanne Giesecke Footnote *
I. Introduction

It is impossible to imagine biomedicine today without Artificial Intelligence (AI). On the one hand, its application is grounded in its integration into scientific research. With AI methods moving into cancer biology, for example, it is now possible to better understand how drugs or gene variants might affect the spread of tumours in the body.Footnote 1 In genomics, AI has helped to decipher genetic instructions and, in doing so, to reveal rules of gene regulation.Footnote 2 A major driving force for the application of AI methods, and particularly of deep learning, in biomedical research has been the explosive growth of life-sciences data, prominently from gene-sequencing technologies, paired with the rapid generation of complex imaging data, producing tera- and petabytes of information. To better understand the contribution of genetic variation and alteration to human health, pooling large datasets and providing access to them are key for identifying connections between genetic variants and pathological phenotypes. This is not only true for rare diseases or molecularly characterized cancer entities, but also plays a central role in the study of the genetic influence of common diseases. The sheer growth and combination of datasets for analysis have created an emerging need to mine them faster than purely manual approaches are able to.Footnote 3

On the other hand, based on this knowledge from biomedical research, the use of AI is already widespread at various levels in healthcare. These applications can help in the prevention of infectious diseases, for example by making it easier to identify whether a patient exhibiting potential early COVID-19 symptoms has the virus even before they have returned a positive test.Footnote 4 It can also help to understand and classify diseases at the morphological and molecular level, such as breast cancer,Footnote 5 and can foster the effective treatment of diseases such as in the case of a stroke.Footnote 6 AI methods are also increasingly involved in the evaluation of medical interventions, such as in the assessment of surgical performance.Footnote 7 Additionally, physicians increasingly face comparison with AI-based systems in terms of successful application of their expertise.Footnote 8

With life-sciences research increasingly becoming part of medical treatment through the rapid translation of its findings into healthcare and through technology transfer, issues around the application of AI-based methods and products are becoming pertinent in medical care. AI applications, already ubiquitous, will only continue to multiply, permanently altering the healthcare system and in particular the individual doctor–patient relationship. Precisely because medical treatment has a direct impact on the life and physical integrity as well as the right of self-determination of patients involved, standards must be developed for the use of AI in healthcare. These guidelines are needed at the international level in order to ease the inevitable cross-border use of AI-based systems while boosting their beneficial impact on patients’ healthcare. This would not only promote patient welfare and general confidenceFootnote 9 in the benefits of medical AI, but would also help, for example, with the international marketing and uniform certification of AI-based medical devices,Footnote 10 thereby promoting innovation and facilitating trade.

A look at current statements, recommendations, and declarations by international organizations such as the United Nations Educational, Scientific and Cultural Organization (UNESCO), the World Health Organization (WHO), the Organisation for Economic Co-operation and Development (OECD), and the Council of Europe (CoE), as well as by non-governmental organizations such as the World Medical Association (WMA), shows that the importance of dealing with AI in as internationally uniform a manner as possible is already well recognized.Footnote 11 However, as will be shown in the following sections, international standardization for potential concrete AI applications in the various stages of medical treatment is not yet sufficient in terms of content. The situation is further complicated by the fact that the aforementioned instruments have varying degrees of binding force and legal effect. Following the identification of those gaps requiring regulation or guidance at the international level, the aim is to critically examine the international organizations and non-governmental organizations that could be considered for the job of closing them. In particular, when considering the spillover effect of the WMA’s guidelines and statements on national medical professional law, it will be necessary to justify why the WMA is particularly suitable for creating regulations governing the scope of application of AI in the doctor–patient relationship.

II. Application Areas of AI in Medicine Addressed by International Guidelines So Far

As sketched in the introduction, AI can be used to draw insights from large amounts of data at various stages of medical treatment. Thereby, AI can generally be defined as ‘the theory and development of computer systems capable to perform tasks normally requiring human intelligence, such as visual perception, speech recognition, decision-making […]’.Footnote 12 Besides different types of AI systems as regards their autonomy and learning type,Footnote 13 a further distinction can be drawn in the context of decision-making in medical treatment as to whether AI is used as a decision aid or as a decision-maker.Footnote 14 While in the former case the physician retains the decision-making or interpretative authority over the findings of AI, in the latter case this does not normally apply, or only to a very limited extent. In any case, this distinction must be viewed critically insofar as even where AI is acting as a decision-maker the actors themselves, who are involved only to a small degree in the development and application of AI, each make individual decisions. Altogether, it is questionable whether decision-making can be assumed to be solely the result of AI’s self-learning properties.Footnote 15 Given that AI can, at least potentially, be used in every stage of medical treatment, from anamnesis to aftercare and documentation, and that the medical standards must be upheld, and that the patient must be kept informed at every stage, the gaps to be filled by an international guideline must be defined on the basis of a holistic view of medical treatment.

1. Anamnesis and Diagnostic Findings

The doctor–patient relationship usually begins with the patient contacting the doctor due to physical complaints, which the doctor tries to understand by means of anamnesis and diagnosis. Anamnesis includes the generation of potentially medically relevant information,Footnote 16 for example about previous illnesses, allergies, or regularly taken medications. The findings are collected by physical, chemical, or instrumental examinations or by functional testing of respiration, blood pressure, or circulation.Footnote 17

An important AI application area is oncology. Based on clinical or dermatopathological images, AI can be used to diagnose and to classify skin cancerFootnote 18 or make a more accurate interpretation of mammograms for early detection of breast cancer.Footnote 19 Another study from November 2020 shows that AI could also someday be used to automatically segment the major organs and skeleton in less than a second, which helps in localizing cancer metastases.Footnote 20

Among other things, wearables (miniaturized computers worn close to the body) and digital health applicationsFootnote 21 are also being developed for the field of oncology and are already being used by patients independently to collect their own findings. For example, melanoma screening can be performed in advance of a skin cancer diagnosis using mobile applications such as store-and-forward teledermatology and automated smartphone apps.Footnote 22 Another use case is monitoring patients with depression. The Computer Science and Artificial Intelligence Laboratory (CSAIL) at the Massachusetts Institute of Technology (MIT) is seeking to complement existing apps for monitoring the writing and reading behaviour of depressed patients with an app that provides AI-based speech analysis. The model recognizes speech style and word sequences and finds patterns indicative of depression. Using machine learning, it learns to detect depression in new patients.Footnote 23
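To make the mechanism more tangible, the following minimal sketch shows how such a speech-based screening model could in principle be structured. It is not the CSAIL system; the pipeline, features, and sample transcripts are purely illustrative assumptions.

```python
# Illustrative sketch only: a minimal text classifier for depression-risk
# screening from transcribed speech. Data and pipeline are assumptions for
# demonstration, not the model described above.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Hypothetical training data: transcribed interview segments with labels
# (1 = clinician-assessed depression, 0 = control).
transcripts = [
    "I have not slept well and nothing feels worth doing",
    "Work was busy but I enjoyed the weekend with friends",
]
labels = [1, 0]

model = make_pipeline(
    TfidfVectorizer(ngram_range=(1, 2)),  # word sequences as crude proxies for speech patterns
    LogisticRegression(),
)
model.fit(transcripts, labels)

# A new patient's transcript yields a risk score, not a diagnosis.
print(model.predict_proba(["I feel tired and empty most days"])[0][1])
```

Even this toy example makes the governance point visible: the prediction depends entirely on the training data, and the output is a statistical indication that still requires professional interpretation.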

As regards health apps and wearables, the WMA distinguishes between ‘technologies used for lifestyle purposes and those which require the medical expertise of physicians and meet the definition of medical devices’ and calls for the use of the latter to be appropriately regulated.Footnote 24 In its October 2019 statement, the WMA emphasizes that protecting the confidentiality and control of patient data is a core principle of the doctor–patient relationship.Footnote 25 In line with this, the CoE recommends that data protection principles be respected in the processing of health data, especially where health insurers are involved, and that patients should be able to decide whether their data will be disclosed.Footnote 26 The WHO draws attention to the complexity of the governance of data obtained from wearables, which may not have been collected initially for healthcare or research purposes.Footnote 27

These statements provide a basic direction, but do not differentiate more closely between wearable technologies and digital health applications with regard to the type of use, the scope of health data collected and any transfer of this data to the physician. It is unclear how physicians should handle generated health data, such as whether they must conduct an independent review of the data or whether a plausibility check is sufficient to use the data when taking down a patient’s medical history and making findings. The degree of transparency for the patient regarding the workings of the AI application as well as any data processing is also not specified. The implementation of a minimum standard or certification procedure could be considered here.

Telematics infrastructure can play a particularly important role at the beginning of the doctor-patient relationship. In its 2019 recommendations, the WHO distinguished between two categories of telemedicine. First, it recommends client-to-provider telemedicine, provided this does not replace personal contact between doctor and patient, but merely supplements it.Footnote 28 Here it agrees with the WMA’s comprehensive 2018 statement on telemedicine, which made clear that telemedicine should only be used when timely face-to-face contact is not possible.Footnote 29 This also means that the physician treating by means of telemedicine should be the physician otherwise treating in person, if possible. This would require reliable identification mechanisms.Footnote 30 Furthermore, education, particularly about the operation of telemedicine, becomes highly important in this context so the patient can give informed consent.Footnote 31 The monitoring of patient safety, data protection, traceability, and accountability must all also be ensured.Footnote 32 After the first category of client-to-provider telemedicine has been established, the WHO also recommends provider-to-provider telemedicine as a second category, so that healthcare professionals, including physicians, can support each other in diagnoses, for example, by sharing images and video footage.Footnote 33 Thus, many factors must be clarified at the national level when creating a legal framework including licensing, cross-border telemedicine treatment, and use cases for remote consultations and their documentation.Footnote 34

In Germany, for example, the first regulations for the implementation of a telematics infrastructure have been in force since October 2020,Footnote 35 implementing, among others, the recommendations of the WHO and the WMA. Under these provisions, the telematics infrastructure is to be an interoperable and compatible information, communication, and security infrastructure that serves to network service providers, payers, insured persons, and other players in the healthcare system and in rehabilitation and care.Footnote 36 This infrastructure is intended to enable telemedical procedures, for instance video consultations in medical care accredited by statutory health insurance.Footnote 37 For this purpose, § 365 SGB V explicitly refers to the high requirements of the physician’s duty of disclosure for informed consent pursuant to § 630e BGB (German Civil Code)Footnote 38, which correspond to those applicable to in-person treatment.

Telemedicine should be increasingly used to close gaps in care and thus counteract disadvantages, especially in areas with weaker local medical infrastructure.Footnote 39 To this end, it could be helpful to identify for which illnesses telemedical treatment is sufficient, or to determine whether such treatment can already be carried out at the beginning of the doctor–patient relationship. One example is the set of large-scale projects in the German region of Brandenburg, where patients’ vital signs were transmitted telemedically as part of a study to provide care for heart patients.Footnote 40 In the follow-up study, AI is now also being used to prepare the vital data received at the telemedicine center for medical staff.Footnote 41

2. Diagnosis

The findings must then be evaluated professionally, incorporating ideas about the causes and origins of the disease, and assigned to a clinical picture.Footnote 42

Accordingly, AI transparency and explicability become especially important in the area of diagnosis. In its October 2019 statement, the WMA pointed out that physicians need to understand AI methods and systems so that they can make medical recommendations based on them, or refrain from doing so if individual patient data differs from the training data used.Footnote 43 It can be concluded, just as UNESCO’s Ad Hoc Expert Group (AHEG) directly stated in its September 2020 draft, that AI can be used as a decision support tool, but should not be used as a decision-maker replacing human decision and responsibility.Footnote 44 The WHO also recommends the use of AI as a decision support tool only when its use falls within the scope of the physicians’ current field of work, so that the physicians provide only the services for which they have been trained.Footnote 45

There is no clarification as to the extent to which transparency is required of the physician as regards AI algorithms and decision logic. A distinction should be made here between open-loop and closed-loop systems.Footnote 46 An open-loop system, in which the output has no influence on the subsequent behaviour of the system, is generally easier to understand and explain, allowing stricter requirements to be placed on the control of AI decisions and treatments based on them. On the other hand, it is more difficult to deal with closed-loop systems, in which one or more feedback loops feed the output back into the input, so that the system’s future behaviour depends on its own previous outputs. In addition, there is the psychological danger that the physician, knowing the nature of the system and its performance, may consciously or unconsciously exercise less rigorous control over the AI decision. It is, therefore, necessary to differentiate according to both the type of system and the extent of the AI’s influence as a decision aid in order to determine the necessary intensity of control, from simple plausibility checks to more intensive review obligations on the physician. It is also clear that there is a need to explain which training data and patient data were processed and influenced the specific diagnosis and why other diagnoses were excluded.Footnote 47 This is particularly relevant in the area of personalized and stratified diagnostics. In this context, the previously rejected possibility of AI acting as a decision-maker, and the physician’s ultimate decision-making authority, could be re-explored and enabled under specific, narrowly defined conditions depending on the type of application and the type and stage of the disease, which could reduce the burden on healthcare infrastructure.
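The distinction can be made concrete with a minimal, purely illustrative sketch; the toy risk-scoring model and its update rule are assumptions for demonstration and do not represent any real clinical decision-support system.

```python
# Toy contrast between open-loop and closed-loop decision support.
# All thresholds and update rules are invented for illustration.

class ToyModel:
    def __init__(self) -> None:
        self.threshold = 0.5  # internal decision rule

    def predict(self, risk_score: float) -> str:
        return "flag for review" if risk_score > self.threshold else "no action"

    def update(self, outcome_was_correct: bool) -> None:
        # Feedback: the system adjusts its own decision rule over time.
        self.threshold += -0.05 if outcome_was_correct else 0.05


def open_loop(model: ToyModel, risk_score: float) -> str:
    # Open loop: the output is handed to the physician; the model's state never
    # changes, so its behaviour stays fixed and comparatively easy to audit.
    return model.predict(risk_score)


def closed_loop(model: ToyModel, risk_score: float, outcome_was_correct: bool) -> str:
    # Closed loop: the observed outcome is fed back and the model adapts itself,
    # so later outputs no longer follow a fixed, externally reviewable rule.
    suggestion = model.predict(risk_score)
    model.update(outcome_was_correct)
    return suggestion


model = ToyModel()
print(open_loop(model, 0.7))                              # model unchanged
print(closed_loop(model, 0.7, outcome_was_correct=True))  # model has adapted
print(model.threshold)                                    # decision rule has shifted
```

The contrast illustrates why the review obligations discussed above differ: the open-loop variant leaves the decision rule fixed and auditable, whereas the closed-loop variant adapts itself, which is precisely what complicates the physician’s control.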

3. Information, Education, and Consent

Before treatment in accordance with the diagnosis can be started, the patient must be provided with treatment information to ensure that the patient’s behavior is in line with the treatment and with economic information on the assumption of costs by the health insurance company.Footnote 48 In addition, information about the diagnosis, risks, and course of treatment as well as real alternatives to treatment is a prerequisite for effective patient consent.Footnote 49

The WMA’s Declaration of Helsinki states that information and consent should be obtained by a person qualified to give treatment.Footnote 50 The CoE’s May 2019 paper also requires that the user or patient be informed when AI is used to interact with them in the context of treatment.Footnote 51 It is questionable whether a general duty of disclosure can be derived from this for every case in which AI is involved in patient care, even if only to a very small extent. The WHO’s recent guidelines emphasize the increasing infeasibility of true informed consent particularly for the purpose of securing privacy.Footnote 52 In any case, there is currently a lack of guidance regarding the scope of the duty to disclose the functioning of the specific AI.

It would also be conceivable to use AI to provide information itself, for instance, through a type of chatbot system, if it had the training level and the knowledge of a corresponding specialist physician and queries to the treating physician remained possible. In any case, if this is rejected with regard to the physician’s ultimate decision-making authority, obtaining consent with the help of an AI application after information has been provided by a physician could be considered for time-efficiency reasons.

4. Treatment and Aftercare

Treatment is selected based on a diagnosis, the weighing of various measures and risks, the purpose of the treatment and the prospects of success according to its medical indication. After treatment is complete, monitoring, follow-up examinations, and any necessary rehabilitation take place.Footnote 53

According to the Declaration of Helsinki, the physician’s reservation and compliance with medical standards both apply, particularly in the therapeutic treatment of the patient.Footnote 54 No specific regulation has been formulated to govern the conditions under which AI used by physicians in treatment fulfils medical standards, and it is not clear whether it is necessary for AI systems to meet those standards at all or whether even higher requirements should be placed on AI.Footnote 55 In addition, the limitations on a physician’s right to refuse the use of AI for treatment are unclear. It is possible that the weight of the physician’s ultimate decision-making authority could be graded to correspond to the measure and the risks of the treatment, especially in the context of personalized and stratified medicine, so that, depending on the degree of this grading, treatment by AI could be made possible.

AI allows the remote monitoring of health status via telemedicine, wearables, and health applications, for example, by monitoring sleep rhythms, movement profiles, and dietary patterns, as well as reminders to take medication. This is of great advantage especially in areas with poorer healthcare structures.Footnote 56 For example, a hybrid closed-loop system for follow-up care has already been developed for monitoring diabetes patients that uses AI to automate and personalize diabetes management. The self-learning insulin delivery system autonomously monitors the user’s glucose level and delivers an appropriate amount of insulin when needed.Footnote 57 Furthermore, a December 2020 study shows that AI can also be used in follow-up and preventive care for young patients who have suffered from depression or have high-risk syndromes to predict the transition to psychosis in a personalized way.Footnote 58 Meanwhile, follow-up also includes monitoring or digital tracking using an electronic patient file or other type of electronic health record so that, for example, timely follow-up examinations can be recommended. This falls under the digital tracking of clients’ health status and services, which the WHO recommends in combination with decision support and targeted client communication (if the existing healthcare system can support implementation, the area of application falls within the area of competence of the responsible physician, and the protection of patient data is ensured).Footnote 59
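A deliberately simplified sketch of such a closed-loop arrangement, using invented parameters and a naive proportional rule rather than any clinically validated algorithm, illustrates why regulatory questions arise once the system itself initiates measures.

```python
# Purely illustrative toy sketch of a closed-loop insulin controller under
# assumed parameters. Real systems are regulated medical devices and use far
# more sophisticated, clinically validated algorithms.

TARGET_GLUCOSE_MG_DL = 110.0
GAIN_UNITS_PER_MG_DL = 0.02   # assumed proportional gain, not a clinical value

def suggest_basal_adjustment(glucose_mg_dl: float) -> float:
    """Return additional insulin units for the next interval (never negative)."""
    error = glucose_mg_dl - TARGET_GLUCOSE_MG_DL
    return max(0.0, GAIN_UNITS_PER_MG_DL * error)

# Simulated continuous-glucose-monitor readings (mg/dL) over a few intervals.
for reading in [95, 140, 210, 180, 120]:
    dose = suggest_basal_adjustment(reading)
    print(f"glucose {reading} mg/dL -> additional insulin {dose:.2f} U")
```

Even in this toy form it is apparent that the system acts without a physician in the loop at each step, which is exactly the constellation for which, as noted below, no regulatory framework yet exists.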

However, there is as yet no regulatory framework for the independent monitoring and initiation of AI measures included in such applications. Apart from the need for regulation of wearables and health applications,Footnote 60 there is also a need for regulation of the transmission of patient data to AI, which must be addressed in a way that complies with data protection rules.Footnote 61

5. Documentation and Issuing of Certificates

The course of medical treatment is subject to mandatory documentation.Footnote 62 There is no clarification as to what must be documented and the extent of documentation required in relation to the use of AI in medical treatment. A documentation obligation could, for example, extend to the training status of AI, any training data used, the nature of its application, and its influence on the success of the treatment.
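What such a documentation obligation might capture can be illustrated with a hypothetical record structure; the fields are assumptions derived from the items listed above, not requirements taken from any existing standard or professional code.

```python
# Hypothetical sketch of a per-treatment AI documentation record.
from dataclasses import dataclass, field
from datetime import datetime

@dataclass
class AIUsageRecord:
    system_name: str               # which AI system was used
    model_version: str             # training status / version at the time of use
    training_data_reference: str   # pointer to the training-data documentation
    role_in_decision: str          # "decision aid" vs. "decision-maker"
    physician_review: str          # plausibility check or full independent review
    influence_on_outcome: str      # physician's assessment of the AI's contribution
    timestamp: datetime = field(default_factory=datetime.utcnow)

# Assumed example entry, purely for illustration.
record = AIUsageRecord(
    system_name="dermatology-classifier",
    model_version="2.3.1",
    training_data_reference="dataset-card-0042",
    role_in_decision="decision aid",
    physician_review="independent review of flagged lesions",
    influence_on_outcome="confirmed suspicion; biopsy ordered",
)
print(record)
```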

Both economically and in terms of saving time, it could make sense to employ AI at the documentation stage in addition to its use during treatment, as well as for issuing health certificates and attestations, leaving more time for the physician to interact with the patient.

6. Data Protection

The use of AI in the medical field must also be balanced against the data protection law applicable in the respective jurisdiction. In the EU this would be the General Data Protection Regulation (GDPR)Footnote 63 and the corresponding member state implementation thereof.

The autonomyFootnote 64 and interconnectednessFootnote 65 of AI alone pose data protection law challenges, and these are only exacerbated when AI is used in the context of medical treatment due to the sensitivity of personal health-related data. For example, as Article 22(1) of the GDPR protects data subjects from adverse decisions based solely on automated processing, at least the final decision must remain in human hands.Footnote 66

The processing of sensitive personal data such as health data is lawful if the data subject has given his or her express consent.Footnote 67 Effective consent is defined as words or actions given voluntarily and with knowledge of the specific factual situation.Footnote 68 A person must, therefore, know what is to happen to the data. In order to consent to treatment involving the use of AI, the patient would have to be informed accordingly.Footnote 69 However, it is difficult to determine how to inform the patient about the processing of the data if the data processing procedure changes autonomously due to the self-learning property of the AI. Broad consentFootnote 70 on the part of the patient is challenging as they would be consenting to unforeseeable developments and would consequently have precisely zero knowledge of the specific factual situation at the time of consent, effectively waiving the exercise of part of their right to self-determination. The GDPR operationalizes the fundamental right to the protection of personal data by defining subjective rights of data subjects, but it is questionable to what extent these rights would enable the patient to intercept and control data processing. The role of the patient, on the other hand, would be strengthened by means of dynamic information and consentFootnote 71, as the patient could give his or her consent bit by bit over the course of treatment using AI. The challenge here would be primarily on the technical side, as an appropriate organization and communication structure would have to be created to inform the patient about further, new data processing by the AI.Footnote 72 The patient would have to be provided with extensive information not only about the processed data but also about the resulting metadata if the latter reveals personally identifiable information, not least in order to revoke their consent, if necessary, in a differentiated way,Footnote 73 and to arrange for the deletion of their data.
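By way of illustration, a dynamic-consent mechanism could be organized as an append-only ledger of per-purpose decisions, as in the following minimal sketch; the purpose names and the interface are assumptions, not a proposal for a concrete technical standard.

```python
# Minimal illustrative sketch of a dynamic-consent ledger: the patient grants
# or revokes consent per processing purpose as new AI processing steps arise.
from datetime import datetime

class DynamicConsentLedger:
    def __init__(self) -> None:
        self._events = []  # append-only history of consent decisions

    def record(self, purpose: str, granted: bool) -> None:
        self._events.append((datetime.utcnow(), purpose, granted))

    def is_permitted(self, purpose: str) -> bool:
        # The most recent decision for a purpose governs; no decision = no consent.
        for _, recorded_purpose, granted in reversed(self._events):
            if recorded_purpose == purpose:
                return granted
        return False

ledger = DynamicConsentLedger()
ledger.record("AI-assisted image diagnosis", granted=True)
ledger.record("secondary research use of imaging data", granted=False)
print(ledger.is_permitted("AI-assisted image diagnosis"))             # True
print(ledger.is_permitted("secondary research use of imaging data"))  # False
```

The append-only design reflects the accountability interest discussed above: every grant and every revocation remains documented even after consent has been withdrawn.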

Correspondingly, Articles 13 and 14 of the GDPR provide for information obligations and Article 17 of the GDPR for a right to deletion. A particular problem here is that the patient data fed in becomes the basis for the independent development of the AI and can no longer be deleted. Technical procedures for anonymizing the data could in principle help here, although this would be futile in a highly contextualized environment.Footnote 74 The use of different pseudonymization types (for instance noise) to lower the chance of re-identifiability might also be worth considering. This might, however, render the data less usable.Footnote 75 In any case, the balancing of the conflicting legal positions could lead to a restriction of deletion rights.Footnote 76 This in turn raises the question of the extent to which consent, which may also be dynamic, could be used as a basis of legitimacy for the corresponding processing of the data, even after the appropriate information about the limitations has been provided. In order to avoid a revocation of consent leading to the exclusion of certain data, other legal bases for processing are often proposed.Footnote 77 This often fails to take into account that erasure rights and the right to be forgotten may lead to a severe restriction of processing regardless. Additionally, the compliance with other rights, such as the right to data portability,Footnote 78 might be hampered or limited due to the self-learning capabilities of AI, with the enforcement of such rights leading to the availability of a given data set or at least its particular patterns throughout different applications, obstructing the provision of privacy through control over personal data by the data subject.
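The noise-based approach mentioned above can be sketched, under assumed parameters, as the addition of calibrated random noise to a numeric health attribute before secondary use; this is only an illustration of the trade-off between re-identifiability and usability, not a guarantee of anonymity.

```python
# Illustrative sketch: perturbing a numeric health attribute with Laplace noise.
# The scale parameters are assumptions for demonstration purposes.
import math
import random

def add_laplace_noise(value: float, sensitivity: float, epsilon: float) -> float:
    """Perturb a value with Laplace noise scaled to sensitivity/epsilon."""
    scale = sensitivity / epsilon
    u = random.random() - 0.5
    noise = -scale * math.copysign(1.0, u) * math.log(1 - 2 * abs(u))
    return value + noise

# Example: a systolic blood pressure reading released with noise. A smaller
# epsilon means more noise, lower re-identifiability, and less usable data.
print(add_laplace_noise(132.0, sensitivity=1.0, epsilon=0.5))
```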

Because of the AI methods involved in processing patients’ sensitive data and their regularly high contextualization, the likelihood of anonymized data, or data thought to be anonymized, becoming re-identifiable is also higher. Based on AI methods of pattern recognition, particular combinations of data fed into a self-learning AI system might be re-identified if the AI system trained with that data is later, in the course of its application, confronted with the same pattern. In this way, even if data or data sets were originally anonymized before being fed into an AI system, privacy issues may emerge due to the high contextuality of AI applications and their self-learning characteristics.Footnote 79 As a consequence, privacy issues will not only be relevant when data is moved between different data protection regimes, but also when data is analysed. However, the fact of re-identifiability might remain hidden for a considerable time.

Once re-identifiability is discovered, the processing of affected personal data will fall within the scope of application of the GDPR. Although at first glance this implies higher protection, unique characteristics of AI applications pose challenges to safeguarding the rights of data subjects. A prominent example is the right to be forgotten. Related to informational self-determination, the right to be forgotten is intended to prevent the representation and permanent presence of information in order to guarantee the possibility of free development of the personality. With the right to be forgotten, the digital, unlimited remembrance and retrieval of information is confronted with a claim to deletion in the form of non-traceability.Footnote 80 The concept of forgetting does not necessarily include a third party but does imply the disappearance of information as such.Footnote 81 For data fed into AI applications, the connection between one’s own state of ignorance and that of others, as well as their forgetting, including AI’s ability to forget, remains decisive. Even if the person is initially able to ward off knowledge, it is still conceivable that others might experience or use this knowledge (relying on the increased re-identifiability of the data), and then in some form, even if derivatively, connect the data back to the individual. In this respect, forgetting by third parties is also relevant as an upstream protection for one’s own forgetting. Furthermore, the right to be forgotten becomes an indispensable condition for many further rights of the person concerned. Forgetting by others and by oneself becomes necessary where information processing detaches itself from the person concerned, becomes independent, and is then fed back into their inherently internal decision-making processes, undermining the realization of (negative) informational self-determination.Footnote 82

7. Interim Conclusion

The variety of already-existing uses of AI in the context of medical treatment, from initial contact to follow-up and documentation, shows the increasingly urgent need for uniform international standards, not least from a medical ethics perspective. Above all, international organizations such as the WHO and non-governmental organizations such as the WMA have set an initial direction with their statements and recommendations regarding the digitization of healthcare. However, it is striking that, on the one hand, the more recent and differentiated recommendations address the application of AI in medical treatment in general but are not directed at physicians in particular, and that, on the other hand, such recommendations regularly focus on individual subareas and on governance in healthcare without comprehensively examining possible applications in the physician–patient relationship. Medical professionals, especially physicians, are thus exposed to different individual and general recommendations in addition to the technical challenges already posed by AI. This could lead to uncertainties and differing approaches among physicians and could ultimately have a chilling effect on innovation. Guidelines from a competent international organization or professional association that cover the use of AI in all stages of medical treatment, especially from the physician’s perspective, would therefore be desirable.

III. International Guidance for the Field of AI Application during Medical Treatment
1. International Organizations and Their Soft-Law Guidance

Both the WHO and UNESCO are specialized agencies of the United Nations traditionally responsible for the governance of public health.Footnote 83 The WHO has been regularly engaged in fieldwork as an aid to research ethics committees, but has recently increasingly moved into developing guidance within the area of public health and emerging technologies.Footnote 84 UNESCO derives its responsibility for addressing biomedical issues from the preamble to its statutes and, at the latest since the 2005 Bioethics Declaration,Footnote 85 has indicated that it intends to assume the role of international coordinator in the governance of biomedical issues.Footnote 86 Here, UNESCO relies on an institutionalization of its ethical mandate in the form of the International Bioethics Committee.Footnote 87 Currently, both organizations’ key activity in this area focuses on setting standards: as the development of science and technology has become increasingly global, global principles are needed in various areas in order to accompany progress, provide the necessary overview, and ensure equal access to the benefits of scientific development; member states can apply these principles as a reference framework for establishing specific regulatory measures.

Such global principles are developed by both organizations, notably in the form of international soft law.Footnote 88 According to prevailing opinion, this term covers rules of conduct of an abstract, general nature that have been enacted by subjects of international law but which cannot be assigned to any formal source of law and are not directly binding.Footnote 89 However, soft law instruments cannot be reduced to mere political recommendations but can unfold de facto ‘extra-legal binding effect’, despite their lack of direct legal binding force.Footnote 90 International soft law can also be used as an indicator of legal convictions for the interpretation of traditional sources of international law such as treaties.Footnote 91 Furthermore, it can provide evidence of the emergence of customary law and lead to obligations of good faith.Footnote 92 Soft law can also serve the further development of international law: It can often be a practical aid to consensus-building and can also provide a basis for the subsequent development of legally binding norms.Footnote 93 Such instruments can also have an effect on national legal systems if, for example, they are introduced into national legal frameworks through references in court decisions.Footnote 94

Criticism of UNESCO’s soft law documents is mainly directed at the participation in and deliberation of decisions.Footnote 95 Article 3(2) of the Statutes of the International Bioethics Committee of UNESCO (IBC Statutes)Footnote 96 prescribes the nomination of eminent experts to the member states.Footnote 97 Although the IBC’s reports generally show a particular sensitivity to normative challenges of emerging health technologies, the statute allows the involvement of external experts in the drafting processes – an option that has not been widely used by the IBC in the course of preparing the main UNESCO declaration in the area of bioethics.Footnote 98 The IBC’s reports are regularly revised and finalized by the Inter-Governmental Bioethics Committee (IGBC), which represents the member states’ governments.Footnote 99 This is justified by the fact that the addressees and primary actors in the promotion and implementation of the declarations are the member states.Footnote 100 However, only 36 member states are represented on the committee at once, which is just one-fifth of all UNESCO member states. Moreover, the available seats do not correspond to the number of member states in each geographic region. While approximately every fourth member state is represented from Western Europe and the North American states, only approximately every fifth member state is represented from the remaining regions.Footnote 101

2. The World Medical Association

The highest ethical demands are to be made of physicians within the scope of their professional practice because of their great responsibility towards the life, the bodily integrity and the right of self-determination of the patient.Footnote 102 In order to establish such an approach worldwide, the WMA was founded in 1947 following the Nuremberg trials as a reaction to the atrocities of German physicians in the Third Reich.Footnote 103 Today, as a federation of 115 national medical associations, it promotes ‘the highest possible standards of medical ethics’ and ‘provides ethical guidance to physicians through its Declarations, Resolutions and Statements’.Footnote 104 Unlike the international organizations described earlier, it is not a subject of international law, but a non-governmental organization that acts autonomously on a private law basis. As it is not based on a treaty under international law, the treaties it concludes with states would not be subject to international treaty law either.Footnote 105 The WMA is, therefore, to be treated as a subject of private law.

Such subjects of private law are well able to focus on specific topics to provide guidance and are, therefore, in a good position to address the challenges of biomedical issues. However, the Declaration of Helsinki and other declarations of the WMA have no legally binding character as resolutions of an international alliance of national associations under private law and can only be regarded as a codification of professional law, not as international soft law.Footnote 106 Yet, as will also be shown using the example of Germany, they are well integrated into national professional laws.

One criticism of the WMA’s decision-making legitimacy is that its internal deliberation is not very transparent and takes place primarily within the Council and the relevant committee(s), whose members are designated by the Council from its own members.Footnote 107 This means that some national medical associations barely participate in the deliberation. Currently, for example, only nine out of 27 Council members are from the Asian continent and one out of 27 from the African continent,Footnote 108 which is disproportionately low relative to the populations of those regions. Council bills are debated and discussed in the General Assembly but, given the limited time and the number of bills to be discussed, the Assembly does not have as much influence on the content as the Council and Committees.Footnote 109 Each national medical association may send one voting delegate to the General Assembly. In addition, they may send one additional voting member for every ten thousand members for whom all membership dues have been paid.Footnote 110 This makes the influence of a national medical association dependent, among other things, on its financial situation. Of additional concern is the fact that these national medical associations do not necessarily represent all types of physicians, because membership is not mandatory in most countries.Footnote 111 Moreover, other professional groups affected by the decisions of the WMA are not automatically heard.Footnote 112 As a consequence of the WMA’s genesis in response to human experimentation by physicians in the Third Reich and of the organization’s basis in the original Declaration of Helsinki,Footnote 113 the guidelines of the WMA are based primarily on American- or European-influenced medical ethics, although the membership of the WMA is more diverse.Footnote 114

3. Effect of International Measures in National Law
a. Soft Law

Declarations of UNESCO as international soft lawFootnote 115 are adopted by the General Conference.Footnote 116 They cannot be made binding on the member states and are not subject to ratification. They set forth universal principles to which member states ‘wish to attribute the greatest possible authority and to afford the broadest possible support’.Footnote 117 Additionally, UNESCO’s Constitution does not include declarations among the proposals which may be submitted to the General Conference for adoptionFootnote 118, although the General Conference can, in practice, adopt a document submitted to it in the form of a declaration.Footnote 119 Besides their contribution to shaping and developing binding norms and helping the interpretation of international law, soft law norms may also have immediate legal effects in the field of good faith, even if this does not change the non-legal nature of soft law.Footnote 120 This effect has particular relevance in the field of medicine and bioethics. The principle of good faith requires relevant actors not to contradict their own conduct.Footnote 121 Accordingly, in the area of soft law, it legally protects expectations produced by these norms insofar as this is justified by the conduct of the parties concerned.Footnote 122 UNESCO itself states that declarations may be considered to engender, on the part of the body adopting them, a strong expectation that member states will abide by them. Consequently, insofar as the expectation is gradually justified by state practice, a declaration may by custom become recognized as laying down rules that are binding upon states.Footnote 123

b. Incorporation of WMA Measures into Professional Law

At the national level, professional law has an outstanding importance for physicians. In Germany, for example, the definition of individual professional duties is the responsibility of the respective state medical association, which issues professional regulations in the form of statutes. The autonomy of the statutes is granted to the state medical associations by virtue of state law and is an expression of the functional self-administration of the medical associations. In addition to defining professional duties, the state medical associations are also responsible for monitoring physicians’ compliance with these duties.Footnote 124 Due to the compulsory membership of physicians in the state medical associations, the professional law or respective professional code of conduct is obligatory for each individual physician.Footnote 125 The state medical associations are guided in terms of content by the Model Code of Professional Conduct for Physicians (MBO-Ä),Footnote 126 which is set out by the German Medical Association (Bundesärztekammer) as the association of state medical associations (and thus the German member of the WMA). If a declaration or statement is adopted at the international level by the WMA, the German Medical Association will incorporate the contents into the MBO-Ä, not least if it was involved in the deliberation. In addition to the statutes issued by the state medical associations, regulations on the professional conduct of physicians are found partly in federal laws such as the Criminal Code,Footnote 127 or the Civil Code,Footnote 128 and partly in state laws such as hospital laws. Regardless of which regulations are applicable in a specific case, the physician must always carry out the treatment of a patient in accordance with medical standards.Footnote 129

The medical standard to be applied in a specific case must be interpreted according to the circumstances of the individual case, taking into account what has objectively emerged as medical practice in scientific debate and practical experience and is recognized in professional circles as the path to therapeutic success, as well as what may be expected subjectively from the respective physician on average.Footnote 130 Any scientific debate about the application of AI in medical treatment on the level of the WMA would take place in professional circles and could thereby influence the applicable medical standard on a national level. Overall, the WMA’s guidelines would have a spillover effect in national professional law, whether in the area of professional regulations or in the scope of application of other federal or state laws. In this way, the contents of the guidance defined by the WMA could ultimately become binding for the individual physician licensed in Germany.

The situation is similar in Spain. The Spanish Medical Colleges Organization is a member of the WMA as the national medical association of Spain and ‘regulates the Spanish medical profession, ensures proper standards and promotes an ethical practice’.Footnote 131 Furthermore, the WMA is the main instrument for the participation of national medical associations in international issues. For example, the American Medical Association, as a member of the WMA, makes proposals for international guidelines and agendas and lobbies at the national level to achieve the goals of physicians in the health field.Footnote 132

IV. Conclusion: Necessity of Regulation by the World Medical Association

In order to close the gaps in the international guidance on the application of AI in medical care, active guidance by the WMA is recommended. Although it is not a subject of international law, meaning its guidance does not have legally binding effects, it is the only organization that has a strong indirect influence on national medical professional law through its members, as shown above. The incorporation of guidance adopted by the WMA is faster and less complex in this way than the path of achieving legal effects through international soft law documents, particularly as the integration of WMA guidelines into national professional law reaches, in only a few implementation steps, the physicians who actually apply emerging technologies such as AI.

Furthermore, national professional laws and national professional regulations form not only the legal but also the ethical basis of the medical profession.Footnote 133 Consequently, professional law cannot be seen independently of professional ethics; instead, ethics constantly affect the legal doctor–patient relationship.Footnote 134 For example, the preamble to the German Model Code of Professional Conduct of the German Medical AssociationFootnote 135 states, among other things, that the purpose of the code of professional conduct is to preserve trust in the doctor–patient relationship, to ensure the quality of medical practice, to prevent conduct unbecoming a doctor, and to preserve the freedom of the medical profession. Furthermore, §2(1) sentence 1 MBO-Ä requires that physicians practice their profession according to their conscience, the prescriptions of medical ethics, and humanity. In addition, § 3(1) MBO-Ä also prohibits the practice of a secondary activity that is not compatible with the ethical principles of the medical profession. Preceding the regulations and the preamble of the model professional code of conduct is the medical vow set out in the WMA’s Declaration of GenevaFootnote 136, which is a modernized form of the Hippocratic Oath, itself over 2,000 years old. Altogether, this shows that ethics of professional conduct are not isolated from the law; they have a constant, universal effect on the legal relationship between the physician and the patient. Since the law largely assumes as a legal duty what professional ethics require from the physician,Footnote 137 the inclusion of medical ethics principles in professional law seems more direct in its effect than the inclusion of bioethical principles in international soft law.Footnote 138

From this example and the overall impact of the Declaration of Helsinki, it is clear that the WMA has the potential to work toward a standard that is widely recognized internationally. The orientation of the WMA towards European or American medical ethics must, however, be kept in mind when issuing guidelines. In particular, the ethical concerns of other members should be heard and included in the internal deliberation. Furthermore, the associations of other medical professions, such as the International Council of Nurses,Footnote 139 with whom partnerships already exist in most cases,Footnote 140 should be consulted, not least because their own professional field is strongly influenced by the use of AI in the treatment of patients, but also to aid the dissemination of medical ethics and standards throughout the health sector. Expanding participation in deliberation increases the legitimacy of the WMA’s guidelines and thus the spillover effect into the national professional law of physicians and other professions beyond. A comparison with other international organizations, such as UNESCO, also shows that the WMA, precisely because it is composed of physicians and because of its partnerships with other professional organizations, is particularly well suited from a professional point of view to grasp the problems of the use of AI in medical treatment and to develop and establish regulations for dealing with AI in the physician–patient relationship as well as in the entire health sector.

23 “Hey Siri, How Am I Doing?” Legal Challenges for Artificial Intelligence Alter Egos in Healthcare

Christoph Krönke
I. Introduction

In response to the question ‘Hey Siri, how am I doing?’, Apple’s intelligent language assistant today only gives ready-made answers (‘You’re OK. And I’m OK. And this is the best of all possible worlds.’). In the foreseeable future, however, it is quite conceivable that intelligent systems with comprehensive access to the health data of individual users could provide information and assessments of an individual’s state of health, make recommendations for a better way of life and possible treatments, and communicate directly with other actors in the medical field (e.g. a treating physician). This opens up the prospect that, with a simple touch of (or even a conversation with) our smartphones, we could enjoy all the promises generally associated with the digitalization of healthcare: comprehensive individual health data would be available and manageable anywhere and anytime, and they could be used to generate high-quality medical diagnoses using Artificial Intelligence (AI), such as those that are already within reach for skin cancer diagnosisFootnote 1 or breast cancer detection.Footnote 2

At the same time, the perspective on AI Alter Egos in the health sector raises numerous legal questions. The most essential of these increasingly pressing issues shall be identified and briefly discussed in this contribution – in a way that is understandable not only for die-hard lawyers.Footnote 3 First and foremost, responsible AI Alter Egos in healthcare would certainly require, on the one hand, a high level of data protection and IT security, for example, with regard to an individual’s informed consent to the data processing and with respect to the (centralized or decentralized) storage of health data. On the other hand, such dynamic systems would pose particular challenges to medical devices law, for instance with regard to the necessary monitoring of a self-learning system with medical device functions. Furthermore, conflicts of interest between the areas of law involved are becoming apparent, particularly with regard to the rather restrictive, limiting approach of data protection law on one side of the spectrum, and the rules of product safety law aiming for efficiency, high quality, and high performance of applications on the other side. With my considerations I would like to show that, all in all, the development of AI Alter Egos in healthcare will require an evolving interpretation of the applicable legal frameworks while – at the same time – ensuring that these systems make responsible decisions. Ignoring either of these necessities would put both the individual patient’s (data) sovereignty and the quality of the system outputs at stake.

I would like to proceed as follows: first of all, I would like to outline and describe the functionalities of AI Alter Egos in the healthcare sector,Footnote 4 namely the functions of an Alter Ego as a program for storing and managing individual health data,Footnote 5 as software for generating individual medical diagnoses,Footnote 6 and finally as an interface for a collective analysis and evaluation of Big Health Data.Footnote 7 On this basis, I will identify the key elements of the applicable legal framework and discuss the three basic functions of an AI Alter Ego in light of the basic requirements following from this framework.Footnote 8 In doing so, I will focus primarily on the supranational requirements of European Union law so as not to become entangled in the thicket of national legislation.Footnote 9

II. AI Alter Egos in Healthcare: Concepts and Functions

In determining the concept and the description of the aforementioned functions of an AI Alter Ego in the healthcare sector, I am guided primarily by the considerations of Eugen MünchFootnote 10 who has been developing the idea of a digital Alter Ego for decadesFootnote 11. This is mainly due to the fact that his ideas seem very sound and general and do not reflect a concrete business model, but rather the main features that any AI Alter Ego in healthcare could have. Moreover, Münch had anticipated much of what many digital assistants and smart objects are designed for today. In the context of this contribution, it should remain open whether the carrier of an Alter Ego in the healthcare sector should be one or more decidedly state players or (public or private) economic enterprises, and whether the Alter Ego can operate on the basis of a specific legal framework or on the general basis of private contracts.Footnote 12 Certainly, the past has shown that the innovative and performance capabilities of private sector players are often superior to those of digital government initiatives. Even if Alter Ego projects should initially come from the private sector, however, one thing must be clear from the outset: the overriding (ethical) principle behind the idea of an Alter Ego in the health sector is not to enable utmost economic usability of health data, but rather to preserve the data sovereignty of the individual.

This being said, the general idea of an AI Alter Ego in healthcare involves two components and key functions: database functions and diagnostic functions.

1. Individual Health Data Storage and Management

The prerequisite for AI Alter Egos is a vast database that contains and manages as much personal health data of individual users as possible. In the ideal case, the entire individual data stock forms and reflects a digital image of the physical condition of the individual – in other words, a (complete) digital ‘Alter Ego’. In this way, the individual user has (at least theoretically) full access to the health-related information relating to him or her and can grant third parties, such as physicians, health companies, or insurers, access to one or several specific data areas; subject, of course, to the practically highly critical question of suitable data formats and interfaces. From a purely technical point of view, storage of the health data of all Alter Egos in a central database is just as conceivable as decentralized storage on systems that are controlled by the individual users or trustworthy third parties. However, as has been stated at the outset, the Alter Ego is designed as a tool that is intended to serve, first and foremost, as a benefit to the user. It shall, therefore, enable him or her to decide independently and responsibly (‘sovereignly’) on the access to and use of his or her health data. This idea of the individual’s health-specific ‘data sovereignty’ can hardly be reconciled with central storage of his or her data – let alone with outsourcing to ‘health clouds’ located beyond European sovereign borders.

2. Individual Medical Diagnostics

Building on this storage and management function, the digital Alter Ego should also have the potential to generate customized and high-quality medical diagnoses, taking into account all available health-related data points of the individual, possibly monitored on a real-time basis. When classifying this second, diagnostic function, however, one should maintain a strict sense of realism. On the basis of the common differentiation, to be thought of as a sliding scale, between ‘weak’ (or ‘narrow’) AI, which is merely involved in the processing of concrete, relatively limited tasks, and ‘strong’ (or ‘general’) AI, which can be entrusted with comparatively comprehensive tasks like a human doctor,Footnote 13 all of the intelligent diagnostic systems that are, will be, or might be implemented in the foreseeable future can be clearly classified as forms of narrow AI, with very specific functions such as cloud-based applications that analyze and interpret computed tomography (CT) images using self-learning algorithms to prepare medical reportsFootnote 14. Strong intelligent systems, on the other hand, are the stuff of science fiction novels and movies and should therefore not be the basis for legal considerations.

3. Interface for Collective Analysis and Evaluation of Big Health Data

The performance of the diagnostic functions depends on the quantity and quality of the health data, on the basis of which the algorithms used in the Alter Ego are trained and ultimately formed into robust decision rules. Against this background, a possible third, rather secondary function of the digital Alter Egos in their entirety could be to provide an all-encompassing data basis for their various possible diagnostic functions. In this respect, the individual Alter Ego could be both the limiting and enabling interface for a supra-individual (collective) analysis and evaluation of Big Health Data, from which the individual ‘data sovereign’ could ultimately benefit. Even if this function is reminiscent of the dystopian scenario in which humans merely act as data sources and mutate into ‘transparent patients’, the price of any medical evaluation method, however advanced, is always the availability of a comprehensive basis of health data.

III. Key Elements of the Legal Framework and Legal Challenges

As explained in the introduction, the legal framework for the establishment and operation of digital Alter Egos is primarily provided by European data protection lawFootnote 15 and the law on medical devices.Footnote 16 In the following, I will set each of the aforementioned functions of an Alter Ego against the background of these legal rules and assess the prospects of AI Alter Egos in healthcare under the existing legal framework. In doing so, I will focus on the scope of application as well as the material goals and basic concepts of these regimes.

1. European Data Protection Law

In order to adequately assess the specific data protection standards in their relevance for Alter Egos, it is not sufficient to make general references to the protection of informational self-determination or the rights to privacy and the protection of personal data.Footnote 17 As a matter conceived in terms of ‘risk law’,Footnote 18 data protection law shields the rights and interests of the persons concerned from various risks that can be typified to a certain extent. The resulting needs for protection form the actual, concrete purposes of data protection law. The processing of personal data by digital Alter Egos touches on several of these purposes, which, in turn, can be assigned to the two fundamental protection concepts of data protection law, namely, the limitation and transparency of data processing.Footnote 19 Taking account of the different basic functions of AI Alter Egos, the following major data protection goals can be distinguished for their use in healthcare.

a. Limitation of Data Processing: Data Protection-Friendly and Secure Design

The individual data storage and management functions of Alter Egos readily trigger the data protection requirements under both the General Data Protection Regulation (GDPR)Footnote 20 and the supplementary European basic rights on data protection.Footnote 21 All health-related information relating to individuals is personal data – indeed particularly sensitive data in the sense of Article 9 of the GDPR – and all possible ‘work steps’ of data handling by the Alter Ego are covered by the processing operations defined in Article 4(2) of the GDPR, such as the collection, storage, reading, querying, matching, use, modification, and transmission of personal data.

Additionally, with regard to the function of Alter Egos as interfaces to a collective database for a comprehensive analysis and evaluation of Big Health Data, the data protection rules are likely fully applicable as well. In the context of medical treatments, almost every piece of information can be assigned a personal and health reference that makes the person behind it at least ‘identifiable’ in the sense of Article 4(1) GDPR. In particular, medical data like a complete blood count or an ECG recording are so unique to an individual that they can hardly be fully anonymized. Complete technical anonymization, which would lead to the inapplicability of data protection law, is therefore illusory. In this respect, it is certainly true that, in principle, ‘anonymous data’ no longer exists in the healthcare sector.Footnote 22

The data protection rules of the GDPR will thus subject almost every single processing of health-related data in Alter Egos to certain requirements with regard to the ‘whether’ and ‘how’ of data processing. With regard to the ‘whether’ of lawful data processing, Article 6(1) GDPR establishes the principle that processing of personal data is only permissible if it can be based on one of the processing situations mentioned in Article 6(1)(a) to (f) GDPR (the so-called prohibition principle). In particular, Articles 6(1)(a) and 9(2)(a) as well as Articles 6(1)(e) and 9(2)(g) and (h) of the GDPR can be considered the predominant legal bases for the processing of health data by an Alter Ego, since the processing operations would be regularly based either on the explicit consent of the users or on specific legal provisions introduced by Member States in order to create a legal basis for the storage, management, and diagnostic analysis of individual health data. In addition, the opening clause of Article 9(2)(j) GDPR may also become relevant specifically for collective analysis and evaluation. This allows Member States to create legal processing powers for ‘scientific research purposes’ to a large extent, including also private research.Footnote 23 This permits researchers to process health data even without the consent of the data subjects. Despite all the emphasis on the high level of protection in the health sector, the GDPR thus gives research interests comprehensive priority over the data protection interests of the data subjects.

With regard to the ‘how’ of lawful data processing, Article 5 GDPR defines the essential ‘principles of data processing’, which include in particular the principles of purpose limitation,Footnote 24 data minimizationFootnote 25 and storage limitationFootnote 26. In addition to these basic processing rules, the Union’s data protection legislation contains numerous other provisions. Some of these supplement the basic rules with sector-specific requirements, for example, with the particularly strict requirements for the processing of health-related data pursuant to Article 9 GDPR. Others specify, concretize, and flank them in more detail, for example in the rights of data subjects pursuant to Article 12 et seq. GDPR, and in some cases they add structural requirements that go beyond concrete data processing, for example by requiring data protection-friendly and secure technology design in accordance with Article 25(2) and Article 32 GDPR.

In more concrete terms, the principle of purpose specification and limitation under Article 5(1)(b) GDPR requires that the information be collected only ‘for specified, explicit and legitimate purposes’ and ‘not further processed in a way incompatible with those purposes’. The importance of this principle is underlined by its embodiment in Sentence 1 of Article 8(2) of the Charter of Fundamental Rights. Therefore, the storage of health and other personal data ‘for undetermined and not yet determinable purposes’ is clearly impermissible under European Union law.Footnote 27 Otherwise, the data subjects would no longer be able to see by which bodies the specifically collected personal data are processed in which context. The principle of purpose limitation is supplemented by the principles of data minimization and necessity under Article 5(1)(c) GDPR. Accordingly, the collection and storage of each piece of information must be necessary in relation to the specified processing purposes, in other words, it must be necessary for the specified diagnostic and other medical purposes. In the case of health-related information of a particularly sensitive nature, the necessity of collecting the data may have to be established in a specific, case-by-case decision.

Against this background, any storage of health data would have to be carried out for a definable medical purpose from the outset. The monitoring of bodily functions ‘into the blue’, that is, for as yet unknown medical purposes that might (or might not) become relevant in the future, seems inadmissible. The creation of a ‘digital Alter Ego’ in the sense of a complete image of all physical processes in the patient’s body, irrespective of an existing medical need, is therefore hardly possible under current data protection law – at least at first glance.

The specific requirements that can be derived from the principle of purpose limitation and the principle of necessity and data minimization continue to apply when accessing and retrieving information stored in the Alter Ego. For example, the principle of purpose limitation prohibits the processing of stored data for purposes that are not compatible with the originally defined purpose of collection. Accordingly, changes of purpose with regard to the processing of health-related data are only permissible if the conditions set out in Article 6(4) GDPR are met. Thus, either the (explicit) consent of the data subject is obtainedFootnote 28 or another ground pursuant to Article 9(2) GDPR is available, in which case an additional compatibility check is to be carried out in accordance with Article 6(4) GDPR.Footnote 29

Such changes of purpose will likely become inevitable with the increasing use of Alter Egos and the extension of their diagnostic functions. One could think of information initially collected and stored solely for the purpose of monitoring cardiovascular functions which is later processed for the purpose of cancer detection as well. As long as the general medical purpose of data processing is not abandoned, the compatibility test is in general satisfied for both individual diagnostic and collective analysis and evaluation purposes – provided the legal bases are interpreted in a way that takes the individual’s interest in the performance of his or her own Alter Ego into account. This performance, however, depends crucially on the possibility that health data which were initially collected in a permissible manner can also be processed for additional purposes, including the generation of decision rules on the basis of large supra-individual (big data) databases. With regard to general research purposes, this idea has been explicitly laid down in the GDPR: according to Article 5(1)(b) GDPR, processing for (further) scientific research purposes is ‘not considered incompatible with the original purposes’. This flexibilization does not exempt the controller from checking the compatibility of the secondary purpose with the primary purpose according to Article 6(4) GDPR on a case-by-case basis – the principle of purpose limitation remains valid – but, as a rule, the controller may assume that compatibility is guaranteed.Footnote 30
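Purely for illustration, the following toy sketch shows how the decision logic just described – same purpose, explicit consent, a privileged research purpose, or another Article 9(2) ground combined with a compatibility test – might be operationalized in an Alter Ego’s access layer. The compatibility table, function name, and purposes are invented assumptions, and the sketch is of course no substitute for the legal assessment required in each individual case.

```python
# Toy sketch only (invented names, not legal advice): one way an Alter Ego's access
# layer could record the original collection purpose of a data item and refuse further
# processing unless one of the routes described above applies.
COMPATIBLE = {
    # hypothetical compatibility table: original purpose -> purposes presumed compatible
    # after an Article 6(4) GDPR assessment
    "cardiovascular_monitoring": {"cardiovascular_monitoring", "cancer_detection"},
}


def may_process(original_purpose: str, new_purpose: str,
                explicit_consent: bool, art9_basis: bool,
                scientific_research: bool) -> bool:
    if new_purpose == original_purpose:
        return True                 # no change of purpose at all
    if explicit_consent:
        return True                 # Art. 6(4) in conjunction with Art. 9(2)(a) GDPR
    if scientific_research:
        return True                 # Art. 5(1)(b): as a rule presumed compatible,
                                    # still subject to a case-by-case check
    if art9_basis:
        # another Art. 9(2) ground plus the compatibility test of Art. 6(4) GDPR
        return new_purpose in COMPATIBLE.get(original_purpose, set())
    return False


# Data collected for cardiovascular monitoring, later used for cancer detection:
print(may_process("cardiovascular_monitoring", "cancer_detection",
                  explicit_consent=False, art9_basis=True, scientific_research=False))
```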

Most certainly, the conception of a comprehensive individual health database, which can also form the foundation for potential collective (Big Health) data analysis and evaluation, involves serious structural dangers and risks with respect to both the lawfulness of the processing and the security of the stored information.Footnote 31 Automated processing of health data and the accessing of these data (whether on the basis of a centralized or a decentralized storage system) entail a particular risk of inadmissible or even abusive data entry and access. This stands in obvious tension with the requirements of Articles 24 and 25(1) GDPR, according to which the responsible body must take ‘appropriate technical and organisational measures’, taking into account the relevant risks, which serve to ‘implement data protection principles, such as data minimisation, in an effective manner and to integrate the necessary guarantees in the processing in order to meet the requirements of this Regulation and protect the rights of the data subjects’. Similar structural requirements are laid down in Article 32 GDPR specifically with regard to data security.Footnote 32

In view of the obligations to ensure that technology is designed in a ‘privacy by design’ manner, it is imperative for any healthcare Alter Ego system that a highly effective access rights management system be introduced that is absolutely subordinate to the ‘health data sovereignty’ of the individual. Furthermore, in view of the high risks involved, it is likely to be imperative to develop a decentralized (rather than a centralized) data storage system. Against this background, the ethical principle of data sovereignty of the individual also forms a legal principle with binding organizational effects for any Alter Ego in healthcare.

b. Securing a Self-Determined Lifestyle and Protection from Processing-Specific Errors through Transparency

In contrast to its database functions, the diagnostic function of an AI Alter Ego faces above all the typical data protection concerns that apply to all intelligent AI systems. In particular, the specific lack of transparency of algorithmically controlled decisions of intelligent systems challenges the goal of guaranteeing an autonomous, self-determined lifestyle. An example with special relevance to data protection law is medical diagnoses that are made according to rules based on Big Data procedures. Such decisions are typically based, firstly, on correlations (and thus not necessarily on causalities) and, secondly, on a multitude of different health-related data feeding into the concrete decision. The results of the medical recommendations of an Alter Ego in the healthcare sector could range from the (comparatively harmless) recommendation to take a walk to stimulate the circulation to more sensitive predictions such as suspected diabetes or a skin cancer diagnosis. If the rules and factors relevant to the decision in question, particularly with regard to the relevance of certain health-related and other personal circumstances, are not sufficiently clear to the person affected by the decision, this person has, on the one hand, no opportunity to adjust his or her behavior to the decision and, on the other hand, cannot recognize or correct factual errors of the Alter Ego.Footnote 33 In such a context, an autonomous, self-determined way of life appears to be possible only to a limited extent as the range of diagnostic possibilities increases. For such reasons, the creation of transparency in data processing has long been a recognized principle of data protection law.Footnote 34 The diagnostic function of an Alter Ego operating by means of AI therefore stands in specific tension with this principle and with the many transparency-securing provisions of data protection law.

Furthermore, the use of intelligent systems such as AI Alter Egos in healthcare regularly touches on the need to protect the data subject from processing operations based on inappropriate decision rules. For example, if the decisions fail to achieve their medical (data processing) purpose due to inappropriate programming or use of the Alter Ego, they might generate inappropriate output. On the one hand, this concerns the specific quality problems that may affect intelligent systems in general.Footnote 35 These problems can be based on various factors, such as an inferior data basis used for the development of the decision rules, the improper or even illegal programming of the Alter Ego, or its use in a context that is not suitable for it. On the other hand, a specific element of the regulatory objective of avoiding inappropriate output of data processing lies in the protection against discrimination specific to data processing. What is meant is not unequal treatment as such, which occurs when a person is discriminated against based on particularly sensitive personality traits such as origin or disability. Rather, it refers to disadvantageous treatment in a broader sense, namely when the person concerned is treated merely as a member of a group of persons previously formed by the system. This second definition includes circumstances in which persons are assigned to a group that the system has defined specifically for a single person in the first place; such groups can therefore be understood as ‘tailor-made’.

The decision-making rules of an Alter Ego in the health sector will typically be based on the linking of certain health-related or other personal data points, like name, place of residence, educational level or income, eating and other habits. These data points are often ‘developed’ by the system itself and typically include the results expected from the output of the Alter Ego, such as a specific diagnosis of a disease or general life expectancy. Even though Big Data procedures in particular aim to achieve the most granular classifications and evaluations by including as many data points as possible, these procedures inevitably lead to the formation of groups of people associated with a certain expectation or evaluation. To provide an example: a higher risk of suffering from a certain disease might be linked to membership of a certain group profile, for instance, people with a foreign name, a place of residence with low purchasing power, an unhealthy diet, moderate exercise, no university studies, etc. Because the Alter Ego does not necessarily take all individual health-related characteristics of a person into account but rather decides merely on the basis of a quasi-random group membership derived from more or less health-related (and other personal) data, a negative decision for a person who in fact has the desired characteristic (like a low risk of illness), contrary to the system’s expectation based on his or her profile, may prove to be arbitrary.Footnote 36
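The arbitrariness described above can be illustrated with a deliberately crude toy example: a profile-based ‘risk score’ built only from coarse, partly non-medical proxy features. All features, weights, and thresholds below are invented for the purpose of illustration; the point is merely that a person who matches the high-risk group profile is flagged even where the individual clinical findings – which the rule never consults – are unremarkable.

```python
# Toy example only: a profile-based risk score built from invented proxy features,
# illustrating how a decision based on group membership can contradict an
# individual's actual health status.
def profile_risk_score(person: dict) -> float:
    """Assigns a 'risk of illness' score from coarse, partly non-medical proxies."""
    score = 0.0
    score += 0.3 if person.get("residence_purchasing_power") == "low" else 0.0
    score += 0.3 if person.get("diet") == "unhealthy" else 0.0
    score += 0.2 if person.get("exercise") == "moderate_or_less" else 0.0
    score += 0.2 if not person.get("university_degree", False) else 0.0
    return score


# A person who matches the high-risk profile but is in fact healthy (e.g. normal
# blood values, no diagnoses) still receives a high score, because the individual
# clinical data are simply not part of the decision rule.
person = {
    "residence_purchasing_power": "low",
    "diet": "unhealthy",
    "exercise": "moderate_or_less",
    "university_degree": False,
    "blood_values": "normal",        # ignored by the profile-based rule
}
print(profile_risk_score(person))    # 1.0 -> flagged as high risk despite normal findings
```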

One aspect, however, must be particularly emphasized at this point, as it is often not sufficiently taken into account in legal scholarship:Footnote 37 data protection law itself does not prohibit incorrect or unlawful outputs, and in particular it does not prohibit discrimination in general. That unequal treatment based on gender, origin, other group memberships, or simple arbitrariness is impermissible follows not from data protection regimes, but from substantive anti-discrimination legislation. Only the structural bias of automated data processing in general, and of intelligent Alter Egos in particular, is relevant from the perspective of data protection law. Such structural bias consists in the tendency to treat individuals, in relation to a specific (medical) processing purpose, on the basis of selective, typifying characteristics – a treatment that may be inappropriate, arbitrary, and/or contrary to the purpose of the processing.

2. European Medical Devices Regulation

In the healthcare sector, such substantial-qualitative normative requirements – which cannot be derived from data protection law itself – arise from European medical devices law with regard to the outputs of an AI Alter Ego. According to the two introductory recitals of the applicable Medical Devices Regulation (MDR),Footnote 38 European medical devices law not only aims to ensure a functioning internal market for medical devices, and thus pursues both cross-border coordination and economic promotion purposes, but is also intended to guarantee high standards with regard to the quality (performance of the products) and safety (prevention of hazards and risks) of medical devices. First of all, it depends on the classification of the individual functions of an AI Alter EgoFootnote 39 under medical devices law whether and to what extent the general objectives and the specific requirements of the MDRFootnote 40 apply.

a. Classifying AI Alter Ego Functions in Terms of the Medical Devices Regulation

It goes without saying that software like an AI Alter Ego or, more precisely, individual functions of it can be classified as ‘medical devices’ in the legal sense. Software and software-supported products have been playing a significant role in the markets for medical services in the broader sense for some time. Possible distribution channels include software purchase or software rental as well as purely remote sales-based diagnostic or therapeutic services.Footnote 41 Possible applications which could also be used as part of an Alter Ego system range from comparatively simple computer programs, such as classical practice software for maintaining electronic patient records or health-related smart watch functionsFootnote 42, to more complex, intelligent programs and systems, such as cloud-based applications that analyze and interpret computed tomography (CT) images using self-learning algorithms to prepare medical reports.Footnote 43 A differentiation between different types of applications is particularly useful with regard to the respective use context intended, as the distinction between medical devices and non-medical devices as well as the classification according to different risk classesFootnote 44 are primarily based on the intended purpose of the product.Footnote 45 Against this background, four types of software functions can be distinguished from the outset in the context of AI Alter Egos in healthcare: (1) functions that qualify as ‘software as a medical device’ (so-called stand-alone software – SaMD); (2) functions that qualify as software as an accessory of a medical device; furthermore, (3) Alter Ego functions that fall within the category of software as a component of a medical device (so-called integrated software); and finally (4) functions that merely qualify as software in the medical field.Footnote 46

First of all, (1) certain Alter Ego functions could fall under the term ‘medical devices’ in themselves, if they are intended to fulfil one of the ‘specific medical purposes’ mentioned in Article 2(1) MDR, i.e. if they are intended to diagnose, monitor or treat diseases, injuries or disabilities. A direct effect in or on the human body is not necessary for this purpose; an intended use ‘for human beings’ is sufficient, even if it is aimed only at an indirect physical effect.Footnote 47 In this sense (and explicitly according to the former directive terminology) ‘independent’Footnote 48 software products are considered ‘active’ medical devices under Article 2(4) MDR, for which specific classification rules and material requirements apply; they are also subject to special regulations, such as those of the MDR’s UDIFootnote 49 system. Practical examples of such SaMDs are decision-support programs comparing medical databases with the data of individual patients in order to provide medical personnel or patients directly with recommendations for the diagnosis, monitoring, or treatment of the patient in question.Footnote 50 The complex systems for the (possibly adaptive) analysis of image and other data with descriptive, predictive, or prescriptive functions mentioned earlier in this contribution also fall into this group of software products. This category is probably the most relevant for the diagnostic functions of an AI Alter Ego in healthcare.

Other Alter Ego functions will qualify as (2) ‘accessories’ in the sense of Article 2(2) MDR.Footnote 51 In contrast to (completely independent) standalone software, accessory software does not fulfil a specific medical purpose itself. However, it does fulfil such a purpose in combination with one or more other ‘medical devices’, by enabling or at least supporting its specific function as a medical device. In particular, software marketed separately for programming and controlling medical devices as well as their integrated software (e.g. of pacemakers)Footnote 52 is regularly qualified as accessory software. Against this background, support software that is compatible with an AI Alter Ego but marketed separately could fall within the category of an accessory.

Distinct from these first two categories are (3) supportive Alter Ego functions forming an integral part of one or more other Alter Ego functions that qualify as medical devices at the time of the placing on the market.Footnote 53 Important examples of such integrated software include programs for the control of medical devices, like blood pressure monitorsFootnote 54 or the power supply.Footnote 55 Such programs are not treated as medical devices themselves but as mere components of the respective product.

In contrast, (4) all other functions of an AI Alter Ego would have – as such! – no relevance under medical devices law. These can be programs with essential but merely auxiliary functions such as collecting, archiving, compressing, searching, or transmitting data. Examples include important information and communication systems that are connected with the diagnostic functions of the Alter Ego, such as communication systems for separate tele-medicine services,Footnote 56 medical knowledge databases,Footnote 57 hospital information systems (HIS) with pure data collection, administration, scheduling, and accounting functions as well as picture archiving and communication systems (PACS) without reporting functionFootnote 58. Furthermore, as recital 19 sentence 1 of the MDR states in principle, programs used for lifestyle and well-being purposes are not sufficiently related to specific medical purposes. These include, in particular, the functions of a smartwatch or lifestyle app for recording and evaluating calories burned through movement or sleep rhythms. Of course, software with completely unspecific functions, for example operating systems or word processing programs, is also irrelevant under medical devices law. Against the background of these considerations, software serving the individual data storage and management function of an AI Alter Ego as well as possible functions aiming for the collective analysis and evaluation of the (big) health data gathered through the participating Alter Egos in their entirety would – as such! – not qualify as ‘medical devices’ or ‘accessories’ under the MDR.

This does not mean, however, that the individual database functions and the collective Big Health Data functions of an AI Alter Ego are entirely irrelevant under medical devices law. It is not only the diagnostic functions that are relevant. Of course, the usual case in practiceFootnote 59 deals with information technology systems consisting of several modules. In such instances, some of these modules can typically be qualified as a medical device or accessory, while other modules can only be qualified as software in the medical field. Consequently, the rules of medical devices law, especially the obligation to label, only apply to the first-mentioned modules.Footnote 60 Nevertheless, it has probably become clear that the performance of the diagnostic functions of an AI Alter Ego is crucially dependent on the quantity and quality of the data sets, including the software used to store, manage, analyze, and evaluate them. Even if the databases and their management software as well as the algorithms used to analyze and evaluate them are not subject to medical devices law as such, their quality and design have a decisive influence on how the diagnostic functions are to be assessed under medical devices law. In this respect, the individual database functions and the Big Health Data functions of an AI Alter Ego are not directly, but indirectly relevant for the following medical devices law considerations.

b. Objectives and Requirements Stipulated in the MDR

The potentially high quantitative and qualitative performance of the diagnostic functions of AI Alter Egos bears on the core objective of medical devices law to ensure high quality standards in the healthcare sector, just like the use of AI in the healthcare sector in general. The need for such systems, also in view of cost aspects, becomes obvious when, for example, in a side-by-side comparison of 157 dermatologists with an algorithm for evaluating skin anomalies, only seven of the experts were able to make more precise assessments of skin abnormalities than the computer system.Footnote 61

At the same time, the safety-related requirements of medical devices law are also touched upon. These requirements aim for the prevention and elimination of quality defects as well as imminent hazards and risks. The characteristic lack of transparency of algorithmic decision rules (which can produce unforeseen and unpredictable results) as well as the adaptability of continuously learning systems add specific risks to the increased basic risk inherent in all medical devices. Yet, precisely this adaptability is considered particularly attractive in the field of intelligent medical devices. Nevertheless, and in view of the high-ranking fundamental rights to which medical device risks generally refer (life and limb), these specific risks must be taken seriously and addressed appropriately by the regulatory authorities.

Particularly relevant for the development and operation of Alter Egos in the health sector and their basic functions (i.e. indirectly for the individual database function and the collective Big Health Data function, directly for their diagnostic functions) are the structural requirements laid down by the MDR. A look at these structural requirements of medical devices law shows that the introduction of intelligent Alter Egos in the healthcare sector will encounter a body of law that is already particularly well adapted to the specific technology-related risks that such products pose to the legal interests concerned.

At the top of these structural requirements is the general obligation to ensure the safety and efficacy of the medical device,Footnote 62 which is differentiated by further requirements, such as the obligation to perform a clinical evaluation or a clinical trial according to Article 10(3) MDR.Footnote 63 For the marketing of intelligent Alter Egos, some of these specifications seem particularly relevant. For example, in addition to the obligation to set up a general quality management system as part of quality assurance, which has been customary for industrial manufacturers for decades,Footnote 64 the MDR mandates the introduction of a risk management system,Footnote 65 in the context of which the specific risks of software and data-based products in particular must also be explicitly addressed.Footnote 66 In addition, according to Article 10(10) MDR, the ‘manufacturer’ of the Alter Ego must set up a post-marketing surveillance system in the sense of Article 83 MDR. At least in theory, the typical possibility of unforeseen outputs of AI Alter Egos in general and the adaptability of continuous learning systems in particular can be countered with such systems. In accordance with the regulatory concept of medical devices law, these abstract and general requirements are also specified in more detail for software products by means of special (‘harmonized’) technical standards. Particularly relevant in this respect is the international standard IEC 62304Footnote 67, adopted by the responsible European standardization organization Cenelec, which supplements the risk management standard ISO 14971 with software-specific aspects and also formulates requirements for the development, maintenance, and decommissioning of stand-alone software and for integrated software.Footnote 68 These standards contain, for instance, guidelines for the handling of raw data and its transformation into ‘clean data’ as well as for the proper training and validation of algorithms.
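By way of illustration, the following sketch shows the kind of strict separation of training, validation, and test data that such good-practice guidelines call for, here using scikit-learn on a public toy dataset. It is a schematic example under invented parameters, not a rendering of IEC 62304 or any harmonized standard.

```python
# Minimal sketch of the separation of training, validation, and test data mentioned
# above; a hypothetical example using scikit-learn, not a prescription of any standard.
from sklearn.datasets import load_breast_cancer
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

X, y = load_breast_cancer(return_X_y=True)

# Strictly disjoint splits: the test set is held out until final evaluation,
# the validation set is used only for model selection during development.
X_train, X_tmp, y_train, y_tmp = train_test_split(
    X, y, test_size=0.4, random_state=0, stratify=y)
X_val, X_test, y_val, y_test = train_test_split(
    X_tmp, y_tmp, test_size=0.5, random_state=0, stratify=y_tmp)

# "Raw data to clean data": standardization stands in here for a documented
# preprocessing step that would itself be validated in a real quality system.
model = make_pipeline(StandardScaler(), LogisticRegression(max_iter=1000))
model.fit(X_train, y_train)

print("validation AUC:", roc_auc_score(y_val, model.predict_proba(X_val)[:, 1]))
print("held-out test AUC:", roc_auc_score(y_test, model.predict_proba(X_test)[:, 1]))
```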

It is quite likely that new types of risks would arise in the development of intelligent medical devices if AI Alter Egos actually became widely used and replaced conventional medical services and institutions. Depending on whether and to what extent such scenarios actually materialize, and in the event that these new types of risks are not specifically addressed in the MDR or in other relevant harmonized standards, the corresponding standards can certainly be further developed. Manufacturers and ‘notified bodies’ (i.e. the certified inspectors of medical devices) are called upon to take account of the special features of intelligent systems in the context of conformity assessment by means of a risk-conscious but innovation-friendly interpretation of the regulatory requirements. Such an interpretative approach should also be taken where it requires a specification of, or perhaps even a deviation from, relevant technical standards.Footnote 69 It will be possible, for instance, to derive certain Good Machine Learning Practices (GMLPs) from the general provisions of the MDR, including the reference to the development and production of software according to the ‘state of the art’.Footnote 70 According to the GMLPs, for example, only training data suitable for the product purpose may be selected; training, validation, and test data must be carefully separated from each other; and, finally, it is necessary to work towards sufficient transparency of the intended output and the operative decision rules.Footnote 71 Continuous learning systems in Alter Egos are systems whose decision rules can change continuously during product operation and which therefore constitute AI in the narrower sense; their application may generate specific risks as well. In principle, a change in the decision rules can become legally relevant from three points of view: it can affect the performance, the safety, or the intended use and/or the data input of the product or its evaluation.Footnote 72 The manufacturer has to prepare for such changes already under the current regulatory situation, especially since Article 83(1) and (2) MDR obliges him to monitor the system behavior in a way that is adequate to the risk and the product. The manufacturer will have to identify and address such expected changes (by developing a specific algorithm change protocol) already when establishing his risk management system (as pre-specifications).Footnote 73 In any case, the distribution of intelligent medical devices does not pose insurmountable difficulties for medical devices law.
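A minimal sketch of what such a pre-specified change protocol could look like in practice is given below: the manufacturer fixes a performance envelope and the permitted input data in advance, and any retrained model that falls outside this envelope is blocked pending a new conformity assessment. All thresholds, names, and metrics are assumptions made for illustration only, not requirements taken from the MDR or any standard.

```python
# Illustrative sketch only (all thresholds and names are invented): one way a
# manufacturer could pre-specify a performance envelope for a continuously learning
# function and lock model updates that fall outside it.
PRE_SPECIFICATION = {
    "min_sensitivity": 0.90,                  # envelope agreed at certification time
    "min_specificity": 0.85,
    "allowed_inputs": {"ecg", "heart_rate"},  # no silent extension of the intended use
}


def update_allowed(candidate_metrics: dict, candidate_inputs: set) -> bool:
    """Return True only if the retrained model stays within the pre-specified envelope."""
    if not candidate_inputs <= PRE_SPECIFICATION["allowed_inputs"]:
        return False  # change of input data -> outside the pre-specified change protocol
    return (candidate_metrics["sensitivity"] >= PRE_SPECIFICATION["min_sensitivity"]
            and candidate_metrics["specificity"] >= PRE_SPECIFICATION["min_specificity"])


print(update_allowed({"sensitivity": 0.93, "specificity": 0.88}, {"ecg"}))          # True
print(update_allowed({"sensitivity": 0.95, "specificity": 0.80}, {"ecg"}))          # False
print(update_allowed({"sensitivity": 0.95, "specificity": 0.90}, {"ecg", "spo2"}))  # False
```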

However, against the backdrop of the ‘general obligation to ensure the safety and efficacy of the medical device’ as described and explained above, the restrictions imposed by data protection law on the collection, storage, management, and other processing of health-related information appear to be a possible point of conflict. If restrictions on the use of health-related data, such as limitations on changes of purpose, prove to be an obstacle to the quality of outputs for medical purposes, the question arises as to which regime should be given preference in case of doubt. Generalized statements are not helpful here. Rather, these problems should be handled on a case-by-case basis. Of primary relevance is the concrete medical function of the Alter Ego that is specifically affected. In the context of particularly sensitive functions – such as the monitoring of cardiovascular functions or the diagnosis of serious diseases – quality problems or system failures can have particularly far-reaching or even fatal consequences; here, any restrictions imposed by data protection law should be overcome by an appropriate interpretation of its legal bases. Conversely, a function designed to encourage the data subject to take regular walks should not necessarily be able to access all information, especially highly sensitive information.

IV. Conclusion

Overall, my considerations have shown that Alter Egos in the health sector, while appearing somewhat futuristic, already have an appropriate legal framework – at least if it is handled in an appropriate manner that is open to development. The truism will apply: not everything that is technically possible will (immediately) be legally permitted. The creation of a completely ‘transparent patient’ is (rightly) forbidden in view of the data protection principles of purpose limitation, necessity, and data minimization. Instead, the creation of comprehensive individual health databases in Alter Egos must be carried out step by step. The argument that every piece of health-related data could (in the future) have some kind of medical relevance does not hold water here. On the other hand, data protection law and its legal bases must be interpreted in a way that is open to development and innovation in order to enable medical services that are already feasible and to allow individuals to make comprehensive and effective use of their health data for medical purposes. In order to ensure the quality of these medical functions, the existing rules of medical devices law already provide appropriate instruments that can be easily and adequately applied to AI Alter Egos. Hence, if the existing legal requirements are handled correctly, responsibility and performance can go hand in hand in the use of AI Alter Egos in the health sector.

24 ‘Neurorights’ A Human Rights–Based Approach for Governing Neurotechnologies

Philipp Kellmeyer
I. Introduction

The combination of digital technologies for data collection and processing with advances in neurotechnology promises a new generation of highly adaptable, AI-based brain–computer interfaces for clinical but also consumer-oriented purposes. By integrating various types of personal data – physiological data, behavioural data, biographical data, and other types – such systems could become adept at inferring mental states and predicting behaviour, for example, intended movements or consumer choices. This development has spawned a discussion – often framed around the idea of ‘neurorights’ – about how to protect mental privacy and mental integrity in the interaction with AI-based systems. Here, I review the current state of this debate from the perspective of philosophy, ethics, neuroscience, and psychology and propose some conceptual refinements on how to understand mental privacy and mental integrity in human–AI interactions.

The dynamic convergence of neuroscience, neurotechnology, and AI that we see today was initiated by progress in the scientific understanding of brain processes and by the invention of computing machines and algorithmic programming in the early and mid-twentieth century.

In his book The Sciences of the Artificial, computer science, cybernetics, and AI pioneer Herbert A. Simon characterizes the relationship between the human mind and the human brain as follows:

As our knowledge increases, the relation between physiological and information-processing explanations will become just like the relation between quantum-mechanical and physiological explanations in biology (or the relation between solid-state physics and programming explanations in computer science). They constitute two linked levels of explanation with (in the case before us) the limiting properties of the inner system showing up at the interface between them.Footnote 1

This description captures the general spirit and prevailing analogy of the beginnings and early decades of the computer age: just as the computer is the hardware on which software is implemented, the brain is the hardware on which the mind runs. In the early 1940s, well before the first digital computers were built, Warren S. McCulloch and Walter Pitts introduced the idea of artificial neural networks that could compute logical functions.Footnote 2 Later, in 1949, Donald Hebb developed, in The Organization of Behavior,Footnote 3 a theory of efficient encoding of statistics in neural networks; the book became a foundational text for early AI researchers and engineers. Later yet, in 1958, Frank Rosenblatt introduced the concept of a perceptron, a simple artificial neural network, which had comparatively limited information-processing capabilities back then but constitutes the conceptual basis from which the powerful artificial neural networks for deep learning are built today.
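To make the historical reference concrete, the following short Python sketch implements Rosenblatt’s perceptron learning rule on a toy, linearly separable problem. The hyperparameters and example data are of course invented; the point is only to show how simple the original building block of today’s deep neural networks is.

```python
# A minimal sketch of Rosenblatt's perceptron learning rule (NumPy); toy data and
# hyperparameters are invented for illustration.
import numpy as np


def train_perceptron(X, y, epochs=20, lr=0.1):
    """Learn weights w and bias b so that sign(w.x + b) matches the labels y in {0, 1}."""
    w = np.zeros(X.shape[1])
    b = 0.0
    for _ in range(epochs):
        for xi, target in zip(X, y):
            pred = 1 if xi @ w + b > 0 else 0
            error = target - pred          # 0 if correct, +1 or -1 if wrong
            w += lr * error * xi           # Rosenblatt's update rule
            b += lr * error
    return w, b


# Toy example: a linearly separable logical AND function.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]])
y = np.array([0, 0, 0, 1])
w, b = train_perceptron(X, y)
print([(1 if xi @ w + b > 0 else 0) for xi in X])   # expected: [0, 0, 0, 1]
```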

Much of this early cross-fertilization between discoveries in neurophysiology and the design of computational systems was driven by the insight that both computers and human brains can be broadly characterized as information-processing systems. This analogy certainly has intuitive appeal and motivates research programs to this day. The aim is to find a common framework that unifies approaches from diverse fields – computer science, AI, cybernetics, cognitive science, neuroscience – into a coherent account of information processing in (neuro)biological and artificial systems. But philosophy, especially philosophy of mind, (still) has unfinished business and keeps throwing conceptual wrenches – in the form of thought experiments, the most famous of which is arguably John Searle’s Chinese Room ArgumentFootnote 4 – into this supposedly well-oiled machine of informational dualism.

Today, through the ‘super-convergence’Footnote 5 of digital and information technologies, this original affinity and mutual inspiration between computer science (artificial neural networks, cognitive systems, and other approaches) and the sciences of the human brain and cognition is driving a new generation of AI-inspired neurotechnology and neuroscience-inspired AI.Footnote 6

In the field of brain–computer interfacing, for example, the application of AI-related machine learning methods, particularly artificial neural networks for deep learning, has demonstrated superior performance to conventional algorithms.Footnote 7 The same machine learning approach also excels in distinguishing normal from disease-related patterns of brain activity, for example, in finding patterns of epileptic brain activity in conventional electroencephalography (EEG) diagnostics.Footnote 8 These and other successes in applying AI-related methods to analysing and interpreting brain data drive an innovation ecosystem in which not only academic researchers and private companies, but also military research organizations invest heavily (and compete) in the field of ‘intelligent’ neurotechnologies.Footnote 9 This development has spawned an increasing number of analyses and debates on the ethical, legal, social, and policy-related relevance of brain data analytics and intelligent neurotechnologies.Footnote 10 Central concepts in this debate are the notions of mental privacy and mental integrity.
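As a purely illustrative sketch of the kind of deep-learning decoder referred to here, the following PyTorch snippet trains a tiny one-dimensional convolutional network to separate two classes of synthetic, EEG-like signals. The architecture, the simulated data, and the ‘spike’ class are invented stand-ins; real EEG decoding pipelines are considerably more involved.

```python
# Illustrative sketch only: a tiny 1D convolutional network (PyTorch) trained on
# synthetic, EEG-like signals. Everything here is invented for demonstration.
import math
import torch
import torch.nn as nn

torch.manual_seed(0)


def synthetic_eeg(n_per_class=200, length=256):
    t = torch.linspace(0, 1, length)
    normal = torch.sin(2 * math.pi * 10 * t) + 0.5 * torch.randn(n_per_class, length)
    spiky = normal.clone()
    spiky[:, ::32] += 3.0                               # crude stand-in for epileptiform spikes
    X = torch.cat([normal, spiky]).unsqueeze(1)         # shape: (N, channels=1, time)
    y = torch.cat([torch.zeros(n_per_class), torch.ones(n_per_class)]).long()
    return X, y


model = nn.Sequential(
    nn.Conv1d(1, 8, kernel_size=7, padding=3), nn.ReLU(), nn.MaxPool1d(4),
    nn.Conv1d(8, 16, kernel_size=7, padding=3), nn.ReLU(), nn.AdaptiveAvgPool1d(1),
    nn.Flatten(), nn.Linear(16, 2),
)

X, y = synthetic_eeg()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-2)
loss_fn = nn.CrossEntropyLoss()

for _ in range(50):                                     # full-batch training, toy scale
    optimizer.zero_grad()
    loss = loss_fn(model(X), y)
    loss.backward()
    optimizer.step()

accuracy = (model(X).argmax(dim=1) == y).float().mean().item()
print(f"training accuracy: {accuracy:.2f}")             # reported on training data only
```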

In this chapter, I will first give an account of the current understanding as well as ethical and legal implications of mental privacy and propose some conceptual refinements. Then I will attempt to clarify the conceptual foundations of mental integrity and propose a description that can be applied across various contexts. I will then address the debate on neurorights and advocate for an intermediate position between human rights conservatism (no new rights are necessary to protect mental privacy and integrity) and human rights reformism (existing human rights frameworks are insufficient to protect mental privacy and integrity and need to be revised). I will argue that the major problem is not the lack of well-conceptualized fundamental rights but insufficient pathways and mechanisms for applying these rights to effectively protect mental privacy and mental integrity from undue interference.

II. Mental Privacy
1. The Mental Realm: The Spectre of Dualism, Freedom of Thought and Related Issues

As outlined in the introduction and in the absence of a universal definition, I propose the following pragmatic operational description: ‘Mental privacy denotes the domain of a person’s active brain processes and experiences – perceptions, thoughts, emotions, volition; roughly corresponding to Kant’s notion of the locus internus in philosophyFootnote 11 – which are exceptionally hard (if not impossible) to access externally.’ The mental ‘realm’ implicated in this description refers to an agent’s phenomenological subjective experiences, indicated in language by terms such as ‘thoughts’, ‘inner speech’, ‘intentions’, ‘beliefs’, and ‘desires’, but also ‘fear’, ‘anxiety’ and emotions (such as ‘sadness’). While it makes intuitive sense from a folk-psychological perspective, calling for special protection of this mental realm is predicated on a precise understanding of the relationship between levels of subjective experience and the corresponding brain processes – a requirement that neuroscientific evidence and models cannot meetFootnote 12.

From a monist and materialist position, these qualitative terms merely offer convenient ways for us to refer to subjective experiences; on this view, there is – in the strict ontological sense – nothing but physical processes in the human body (and the brain most of all), no dualistic ‘second substance’ or, as René Descartes referred to it, res cogitans. In such an interpretation, there is no ‘mind–body problem’ because there is no such thing as a mind to begin with, and the human practice of talking as if there were a mental realm that is separate from the physical realm arises from our (again folk-psychological, or anthropological) propensity to interpret our subjective experience as separate from brain processes, perhaps because we have no direct sensory access to these processes in the first place.

This spectre of dualism, the illusion – as a materialist (e.g. a physicalist) would put it – that our physical brain processes and our experiences are separate ‘things’, is so convincing and persuasive that it not only haunts everyday language, but is also deeply engrained in concept-formation and theorizing in psychological and neuroscientific disciplines such as experimental psychology or cognitive neuroscience as well as the medical fields of neurology, psychosomatic medicine, and psychiatry.Footnote 13

To date, there is no widely accepted and satisfying explanation of the precise relationship between the phenomenological level of subjective experience and brain processes. This conundrum allows for a wide range of theoretical positions, from strictly neuroessentialist and neurodeterministic interpretations (i.e. there is nothing separate from brain processes; brain activity does not give rise to mental phenomena but simply is neurophysiology), to positions that emphasize the ‘4E’Footnote 14 character of human cognition and all the way to modern versions of dualist positions, such as ‘naturalistic dualism’Footnote 15. An interesting intermediate position that has experienced somewhat of a renaissance in the philosophy of mind in recent years is the concept of panpsychism. The main idea in panpsychism is that consciousness is a fundamental and ubiquitous feature of the natural world. In this view, the richness of our mental experience could be explained as an emergent property that depends on the complexity of biological organisms and their central nervous systems.Footnote 16 Intriguingly, there seem to be conceptually rich connections between advanced neuroscientific theories of consciousness, particularly the so-called Integrated Information Theory (IIT)Footnote 17, and emergentist panpsychist interpretations of consciousness and mental phenomena.Footnote 18 The reason why this is relevant for our topic here – brain data, information about brain processes, and neurotechnology – is that these conceptual and neuroscientific advances in building a unified theory of causal mechanisms of subjective experience might become an important foundation for future analytical approaches to decoding brain data from neurotechnologies and inferring mental information from these analyses.

2. Privacy of Data and Information: Ownership, Authorship, Interest, and Responsibility

Before delving into the current debate around mental privacy, let me provide a few propaedeutic thoughts on the terminology and conceptual foundations of privacy and how it is understood in the context of data and information processing. Etymologically, ‘privacy’ originates from the Latin term privatus, which means ‘withdrawn from public life’.Footnote 19 An important historical usage context for the concept of ‘privacy’ was in the military and warfare domain, for example in the notion of ‘privateers’, that is, a person or ship that privately participated in an armed naval conflict under official commission of war (distinguishing privateering from outlawed activities such as piracy).Footnote 20 The term and concept have a rich history in jurisprudence and the law. There is no space here to retrace all ramifications of the legal-philosophical understandings of privacy, but one notion that seems relevant for our context – and that sets privacy apart from the related notions of seclusion and secrecyFootnote 21 – is that privacy ultimately concerns a person’s ‘autonomy within society’.Footnote 22 In the current age of digital information technology, this autonomy extends into the realm of the informational – in other words, the ‘infosphere’ as elucidated by Luciano FloridiFootnote 23 – which is reflected by an increasing number of ethical and legal analyses of ‘informational privacy’ and the metamorphosis of persons into ‘data subjects’ and digital service providers into ‘data controllers’ in the digital realm.Footnote 24 In this context, it may be worthwhile to recall that data and information (and knowledge for that matter), though intricately intertwined, are not interchangeable notions. Whereas data are ‘numbers and words without relationships’, information is ‘numbers and words with relationships’ and knowledge refers to inferences gleaned from information.Footnote 25 This distinction is important for the development and application of granular and context-sensitive legal and policy instruments for protecting a person’s privacy.Footnote 26

For contexts in which questions around the protection of (and threats to) data or informational privacy originate from the creation, movement, storage and analysis of digital data, it would seem appropriate to conceptualize ‘informational privacy’ as the autonomy of persons over the collection, access, and use of data and information about themselves. Related to these questions, this expanding discussion has made the question of data (and information) ownership a central aspect of ethical and legal scholarship and policy debates.Footnote 27 In a legal context, the protection of data or informational privacy is relevant, inter alia, in trade law (e.g. confidential trade secrets), copyright law, health law and many other legal areas. Importantly, however, individuals do not have property rights regarding their personal information, e.g. information about their body, health and disease in medical records.Footnote 28 Separate from the question of ownership of personal information is the question of authorship, in other words, who can be regarded as the creator of specific data and information about a person.Footnote 29 But, even in contexts in which persons are neither the author/creator nor the owner of data and information about themselves, they nevertheless have legitimate interests in protecting this information from being misused to their disadvantage, and therefore a legitimate interest – and, derived from it, a right – to keep it private. This right to informational privacy is now a fundamental tenet in consumer protection laws as well as data protection and privacy laws, for example the European Union’s (EU) General Data Protection Regulation (GDPR).Footnote 30

Finally, these questions of ownership, authorship, and interests in personal data and information – and the legal mechanisms for protecting the right to informational privacy – of course also raise the questions of responsibility for and stewardship of personal data and information to protect them from unwarranted access and from misuse. Typically, many different participants and stakeholders are involved in the creation, administration, distribution, and use of personal data and information (i.e. the creator(s)/author(s), owner(s), persons with legitimate and vested interests). Under many circumstances, this creates a problem of ascribing responsibility for data stewardship – a diffusion of responsibility. This may be further complicated by the fact that the creator of a particular set of personal information, the owner, and the person to whom these data and information pertain may reside in different jurisdictions and may therefore be subject to different data protection and privacy laws.

3. Mental Privacy: Protecting Data and Information about the Human Brain and Associated Mental Phenomena

In the debate around ‘neurorights’, the term mental privacy has established itself to refer to the ‘mental realm’ outlined above. However, from a materialist, neurodeterministic position, it would not make much sense to give mental phenomena special juridical protection if we have neither ways to measure these phenomena nor a model of causal mechanisms to give an account of how they arise. For the law, however, such a strict mechanistic interpretation of mental mechanisms might not be required to ensure adequate protections. Consider, for example, that crimes with large immaterial components such as ‘hate speech’ or ‘perjury’ also contain a large component of internal processes that might remain hidden from the eye of the law. In hate speech, for instance, neither the internal motivation of the perpetrator nor the internal processes of psychological injury in the injured party need to be objectified in order to establish whether or not a punishable crime was committed.

The precise understanding and interpretation of mental privacy also differs substantially across literatures, contexts, and debates. In legal philosophy, for instance, mental privacy is mainly discussed in the context of foundational questions and justifications in criminal justice such as the concept of mens rea (the ‘guilty mind’),Footnote 31 freedom of the will, the feasibility of lie detection, and other ‘neurolaw’ issues.Footnote 32 In neuroethics, mental privacy is often invoked in discussions around brain data governance and regulation as well as in reference to ‘neurorights’: the question of whether the protection of mental privacy is (or shall become) a part of human rights frameworks and legislation.Footnote 33 The discussion here shall be concerned with the latter context.

III. Mental Integrity through the Lens of Vulnerability Ethics

Mental integrity, much like the term mental privacy, has an evocative appeal which allows for an intuitive and immediate approximate understanding: to protect the intactness and inviolability of brain structure and functions (and the associated mental experiences).

Like mental privacy, however, mental integrity is currently still lacking a broadly accepted definition across philosophy, ethics, cognitive science, and neuroscience.Footnote 34 Most operational descriptions refer to the idea that the structure and function of the human brain and the corresponding mental experiences allow for an integrated mental experience for an individual and that external interference with this integrated experience requires a reasonable justification (such as medication for disturbed states of mind in psychosis, for example) to be morally (and legally) acceptable. The problem that the nature of subjective mental experience, phenomenal consciousness, is inaccessible both internally (as the subject can only describe the qualitative aspects of the experience itself, but not the mechanics of its composite nature) and externally also affects the way in which we conceptualize the notion of an integrated mind. As individuals – indivisible persons in the literal sense – we mostly experience the world in a more or less unified way, even though separate parallel perceptual, cognitive, and emotive processes have to be integrated in a complex manner to allow for this holistic experience. When asked, for example, by a curious experimental psychologist or cognitive scientist to describe the nature of our experience, such as seeing a red apple on a table, we can identify qualitative characteristics of the apple: its shape, texture, colour, and perhaps smell. Yet, we have no shared terminology to describe the quality of our inner experience of seeing the apple – outside of associating particular thoughts, memories, or emotions with this instance of an apple or apples in general. Put in another way: we all know intuitively what a unified or integrated experience of seeing an apple is like, but we cannot explain it in such a way that the descriptions necessarily evoke the same experience(s) in others. To better understand what an integrated experience is like, we might also consider what a disintegrated, disunified, or fragmented experience is like. In certain dream-like states, pathological states like psychosis, or under the influence of psychoactive substances, an experience can disintegrate into certain constitutive components (e.g. perceiving the shape and colour of the apple separately, yet simultaneously) or perceptions can be qualitatively altered in countless ways (consider, for instance, the phenomenon of synaesthesia, ‘seeing’ tones or ‘hearing’ colours). This demonstrated potential for the composite nature of mental experiences suggests that it is not inconceivable that we might find more targeted and precise ways to influence the qualitative nature (and perhaps content) of our mental experiences, for example, through precision drugs or neurotechnological interventions.Footnote 35 Emerging techniques such as optogenetics, for instance, have already been demonstrated to be able to ‘incept’ false memories into a research animal’s brain.Footnote 36 But our mental integrity can, of course, also be compromised by non-neurotechnological interventions. Consider approaches from (behavioral) psychology such as nudging or subliminal priming (and related techniques)Footnote 37 that can influence decision making and choice (and have downstream effects on the experiences associated with these decisions and choices), or more overt psychological interventions such as psychotherapy or the broad – and lately much questioned (in the context of the replication crisis in psychologyFootnote 38) – field of positive psychology, for example mindfulness,Footnote 39 meditation, and related approaches.

Direct neurotechnologically mediated interventions into the brain intuitively raise health and safety concerns, for example concerning potential adverse effects on mental experience and therefore mental integrity. While such safety concerns are surely reasonable given the direct physical nature of the intervention into the brain, there is to date no evidence of serious adverse effects for commonly used extracranial electric or electromagnetic neurostimulation techniques such as transcranial direct-current stimulation (tDCS) or repetitive transcranial magnetic stimulation (rTMS).Footnote 40 In stark contrast, comparatively little attention has been paid until recently to the adverse effects of psychological interventions. Studies in the past few years have now demonstrated that seemingly benign interventions such as psychotherapy, mindfulness, or meditation can have discernible and sometimes serious adverse effects on mental health and well-being, and thus on mental integrity.Footnote 41

Another context in which there is intensive debate around the ethical aspects and societal impact of influencing mental experience and behavior concerns internet-based digital technologies, especially the issue of gamificationFootnote 42 and other incentivizing forms of user engagement in ‘social’ media platforms or apps. Certain types of digital behavioral technologiesFootnote 43 are specifically designed to tap into reward-based psychological and neurobiological mechanisms with the aim of maximizing user engagement, which drives the business model of many companies and developers in the data economy.Footnote 44 While these digital behavioral technologies (DBT) might be used in a healthcare provision context, for example to deliver digital mental health services,Footnote 45 the use of DBT apps in an uncontrolled environment, such as internet-based media and communication platforms, raises concerns about the long-term impact on users’ mental integrity.

To summarize, the quality and content of our mental experience is multifaceted, and the ability to successfully integrate different levels of mental experience into a holistic sense of self (as an important component of selfhood or personhood) – mental integrity – is an important prerequisite for mental health and well-being. There are several ways to interfere with mental integrity, through neurotechnologically mediated interventions as well as by many other means. The disruption of the integrated nature of our mental life can lead to severe psychological distress and potentially mental illness. Therefore, protecting our mental life from unwarranted and/or unconsented intervention seems like a justified ethical demand. The law offers many mechanisms for protection in that respect, both at the level of fundamental rights, for example in Article 3 (‘Right to the integrity of the person’) of the EU Charter of Fundamental Rights,Footnote 46 and at the level of specific civil laws such as consumer protection law and medical law.

IV. Neurorights: Legal Innovation or New Wine in Leaky Bottles?

As we have seen in the preceding sections, there are ethically justifiable and scientifically informed reasons to claim that mental privacy and mental integrity are indeed aspects of our human existence (‘anthropological goods’ if you will) that are worthy of being protected by the law. In this section, I will therefore give an overview of recent developments in the legal and policy domain regarding the implementation of such ‘neurorights’.Footnote 47 First, I will describe the current debate around the legal foundations and scope of neurorights; second, I will propose some conceptual additions to the notion of neurorights; and third, I will propose a pragmatic and human rights–based approach for making neurorights actionable.

1. The Current Debate on the Conceptual and Normative Foundations and the Legal Scope of Neurorights

For a few years now, the debate around the legal foundations and precise scope of neurorights has been steadily growing. From a bird’s eye perspective, it seems fair to say that two main positions dominate the current scholarly discourse: rights conservatism and rights innovationism/reformism. Scholars who argue from a rights conservatism position make the case that the existing set of fundamental rights, as enshrined for example in the Universal Declaration of Human Rights (UDHR) (but also in many constitutional legal frameworks in different states and specific jurisdictions), provides enough coverage to protect the anthropological goods of mental privacy and mental integrity.Footnote 48 Scholars who argue from the position of rights innovationism or reformism emphasize that there is something qualitatively special and new about the ways in which emerging neurotechnologies (and other methods, see above) (may) allow for unprecedented access to a person’s mental experience or (may) interfere with their mental integrity, and that, therefore, either new fundamental rights are necessary (legal innovation) or existing fundamental rights should be amended or expanded (legal reformism).Footnote 49 Common to both positions is the acknowledgment that the privacy and integrity of mental experience are indeed aspects of human existence that should be protected by the law; the two positions, however, have vastly different implications for national and international law in terms of how such protection would be implemented. Whereas the rights conservative would have to do the work of showing precisely how national, international, and supranational legal frameworks could be effectively applied to protect mental privacy and integrity in specific contexts, the reformist position implies changes in the legal landscape that would have seismic and far-reaching consequences for many areas of the law, national and international policymaking, as well as consumer protection and regulatory affairs.

From a pragmatic point of view, two major problems immediately present themselves regarding the addition of new fundamental rights that refer to the protection of mental experience to the catalogue of human rights. The first problem concerns the potential for unintended consequences of introducing such novel rights. It is a well-known problem, both in moral philosophy and legal philosophy, that moral and legal goods – especially if they are not conceptually dependent on each other – can (and often do) exist in conflict with each other, which, in applied moral philosophy, gives rise to classical dilemma situations, for example. Therefore, introducing new fundamental rights might serve the purpose of protecting a specific anthropological good, such as mental privacy, in a granular way, but at the same time it increases the complexity of balancing different fundamental rights and therefore also the potential for moral and/or legal dilemmas. Another often voiced criticism is the perceived problem of rights inflation, in other words, the notion that the juridification (German: ‘Verrechtlichung’) of ethical norms leads to an inflation of fundamental rights – and thus rights-based narratives and juridical claims – that undermine the ability of the polity to effectively address systemic social and other structural injustices.Footnote 50

From my point of view, the current state of this debate suffers from the following two major problems: firstly, an insufficient conceptual specification of mental privacy and mental integrity and, secondly, a lack of transdisciplinary collaborative discourses and proposals for translating the ethical demands that are framed as neurorights into actionable frameworks for responsible and effective governance of neurotechnologies. In the following sections, I address both concerns by suggesting some conceptual additions to the academic framing and discourse around neurorights and proposing a strategy for making neurorights actionable.

2. New Conceptual Aspects: Mental Privacy and Mental Integrity As Anthropological Goods

The variability of operational descriptions of mental privacy and mental integrity in the literature shows that both notions are still ‘under construction’ from a conceptual perspective. As important as this ongoing conceptual work is for refining these ideas and making them accessible to a wide scholarly audience, I would propose here that understanding them mainly as relevant anthropological goodsFootnote 51 – rather than mainly as philosophical or legal concepts – could help to theorize and discuss mental privacy and mental integrity across disciplinary divides. However, the anthropological goods of mental privacy and mental integrity are conceptually underspecified in the following two respects.

First, no clear account is given in the literature of what the typical, let alone the best approximate, correlates of mental experience (as the substrate of mental privacy) are. Some authors suggest that neurodata or brain data are – or might well become (with advances in neuroscience) – the most direct correlate of mental experience and that, therefore, brain data (and information gleaned from these data) should be considered a noteworthy and special category of personal data.Footnote 52 It could be argued that, in addition to brain data, many different kinds of contextual data (e.g. from smartphones, wearables, digital media, and other contexts) allow for similar levels of diagnostic or predictive modelling and inferences on the quality and content of a person’s mental experience.Footnote 53 What is lacking, however, is a critical discussion of the right level for protecting a person’s mental privacy: the level of data protection (data privacy); the level of protecting the information/content that can be extracted from these data (informational privacy); both; or whether we should also address the question of how and to what ends mental data/information are being used. As discussed above, I would suggest that a very important and legitimate dimension of ethical concern is also the question of whether and to what extent any kind of neurotechnology or neurodecoding approach has a negative impact on enabling a person to exercise their legitimate interest in their own mental data and information. To be able to respect a person’s interest in data and information on their mental states, however, we would need ethically viable means of disclosing these interests to a third party in ways that do not themselves create additional problems of privacy protection, in other words, to avoid a self-perpetuating privacy protection problem. At the level of data and information protection, one strategy could be to establish trustworthy technological means (such as blockchain technology, differential privacy, homomorphic encryption, and other techniquesFootnote 54) and/or institutions – data fiduciaries – for handling any data of a person that might allow for inferences on mental experience.
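As a purely illustrative aside, the data-level strategy just mentioned can be made concrete with a short sketch of differential privacy, one of the techniques named above. The following is a minimal, hypothetical example in Python: the records, the counting query, and the chosen privacy budget are assumptions made for the sake of exposition and do not describe any existing brain-data platform or the author’s own proposal.

```python
import numpy as np


def laplace_mechanism(true_value: float, sensitivity: float, epsilon: float) -> float:
    """Return an epsilon-differentially private version of a numeric query result."""
    # Laplace noise scaled to sensitivity/epsilon is the classic mechanism for
    # protecting numeric queries under differential privacy.
    return true_value + np.random.laplace(loc=0.0, scale=sensitivity / epsilon)


# Hypothetical scenario: a platform holding brain-derived records wants to report
# how many participants show a particular EEG-derived marker without revealing
# whether any single individual is among them.
marker_present = np.array([1, 0, 1, 1, 0, 0, 1, 0, 1, 1])  # illustrative records
true_count = float(marker_present.sum())

# A counting query changes by at most 1 if one person is added or removed, so its
# sensitivity is 1. A smaller epsilon means stronger privacy and more noise.
noisy_count = laplace_mechanism(true_count, sensitivity=1.0, epsilon=0.5)

print(f"true count: {true_count:.0f}, privately released count: {noisy_count:.1f}")
```

The key design choice in such a scheme is the privacy budget (epsilon), which trades off the accuracy of the released figure against the protection afforded to any single person’s record; the same logic extends to more realistic aggregate statistics over mental or brain-derived data.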

Second, the demand for protecting mental integrity is undermined by the problem that we do not have a consensual conceptual understanding of key notions such as agency, autonomy, and the self. Take psychedelic recreational drugs as an example of an outside interference with mental integrity. We have ample evidence from psychological and psychiatric research suggesting that certain types of recreational psychedelic drugs, such as LSD or psilocybin, have discernible effects on mental experiences associated with personal identity and self-experience, variously called, for example, ‘ego dissolution’Footnote 55 or ‘boundlessness’.Footnote 56 However, most systematic research studying these effects, say in experimental psychology or psychiatry, is not predicated on a universal understanding or model of human self-experience, personal identity, and related notions. As even a preliminary engagement with conceptual models of personal identity or ‘the’ self in psychology, cognitive science, and philosophy will quickly reveal, there are indeed many different competing, often conceptually non-overlapping or incommensurable models available: ranging from constructivist ideas of a ‘narrative self’, to embodiment-related (or more generally 4E-cognition-related) notions of an ‘embodied self’ or ‘active self’, to more socially inspired notions such as the ‘relational self’ or ‘social self’.Footnote 57 Consequently, any interpretation, let alone systematic understanding, of how certain interventions might or might not affect mental integrity – here represented by the dimension of self-experience and personal identity – will heavily depend on the conceptual model of mental experience that one has.

This rather obvious point about the inevitable interdependencies between theory-driven modelling and data-driven inferences and interpretation has important consequences for the ethical demands and rights-claims that characterize the debate on neurorights. First, it should lead to the demand and recommendation that any empirical research investigating the relationship between physical interventions (for instance via neurotechnologies or drugs) or psychological interventions (for example through behavioural psychology, such as nudging) and mental experience make its underlying model of self-experience and personal identity explicit and specify it in a conceptually rigorous manner. Second, transdisciplinary research on the conceptual foundations of mental (self-)experience, involving philosophers, cognitive scientists, psychologists, neuroscientists, and clinicians, should be encouraged to arrive at more widely accepted working models that can then be tested empirically.

3. Making Neurorights Actionable and Justiciable: A Human Rights–Based Approach

Irrespective of whether new fundamental rights will ultimately be deemed necessary or whether existing fundamental rights will prove sufficient to protect the anthropological goods of mental privacy and mental integrity, regulation and governance of complex emerging sciences and technologies, such as AI-based neurotechnology, is a daunting challenge. If one agrees that reasonable demands for any governance regime that allows responsible innovation of emerging technologies include that the regime be context-sensitive, adaptive, anticipatory, effective, agile, and at the right level of ethical and legal granularity, then the scattered and inhomogeneous landscape of national and international regulatory and legal frameworks and instruments presents a particularly complex problem of technology governance.Footnote 58

Apart from the conceptual issues discussed here, which need to be further clarified to elucidate the basis for specific ethical/normative demands for protecting mental privacy and mental integrity, another important step for making neurorights actionable is finding the right levels of governance and regulation and appropriate (and proportional) granularities of legal frameworks. So far, no multi-level approach to the legal protection of mental privacy and mental integrity is available. Instead, we find various proposals and initiatives at different levels: at the level of ethical self-regulation and self-governance, represented, for example, by ethical codes of conduct in the context of neuroscience researchFootnote 59 or in the private sector around AI governance;Footnote 60 at the level of national policy, regulatory, and legislative initiatives (e.g. in Chile);Footnote 61 and at the level of supranational policies and treaties, represented, for example, by the intergovernmental report on responsible innovation in neurotechnology of the Organization for Economic Co-operation and Development (OECD) from 2019.Footnote 62

Taking these complex problems into account, I would advocate for a pragmatic, human rights–based approach to regulating and governing AI-based neurotechnologies and for protecting mental privacy and mental integrity as anthropological goods. This approach is predicated on the assumption that existing fundamental rights, as enshrined in the UDHR and many national constitutional laws, such as the right to freedom of thought,Footnote 63 provide sufficient normative foundations. On top of these foundations, however, a multi-level governance approach is required that provides context-sensitive and adaptive regulatory, legal, and political solutions (at the right level of granularity) for protecting humans from potential threats to mental privacy and mental integrity, such as in the context of hitherto un- or underregulated consumer neurotechnologies. Such a complex web of legal and governance tools will likely include bottom-up instruments, such as ethical self-regulation, as well as laws (constitutional laws, but also consumer protection laws and other civil laws) and regulations (data protection and consumer regulations) at the national and supranational levels, and soft-law instruments at the supranational level (such as the OECD framework for responsible innovation of neurotechnology, or widely adopted ethics declarations from specialized agencies of the United Nations (UN), such as the UN Educational, Scientific and Cultural Organization (UNESCO) or the World Health Organization (WHO)).

But making any fundamental right actionable (and justiciable) at all levels of societies and international communities requires a legally binding and ethically weighty framework to resolve current, complex, and controversial issues in science, society, and science policy. Therefore, conceptualizing neurorights as a scientifically grounded and normatively oriented bundle of fundamental rights (and applied legal and political translational mechanisms) may have substantial inspirational and instrumental value for ensuring that the innovation potential of neurotechnologies, especially AI-based approaches, can be leveraged for applications that promote human health, well-being, and flourishing.

V. Summary and Conclusions

In summary, neurorights have become an important subject for scholarly debate, driven partly by innovation in AI-based decoding of neural activity, and as a result different positions are emerging in the discussion around the legal status of brain data and the legal approach to protecting the brain and mental content from unwarranted access and interference.

I have argued that mental privacy and mental integrity could be understood as important anthropological goods that need to be protected from unwarranted and undue interference, for example, by means of neurotechnology, particularly AI-based neurotechnology.

In the debate on the question of how neurorights relate to existing national and supranational legal frameworks, especially to human rights, three distinct positions are emerging: (a) a rights conservatism position, in which scholars argue that existing fundamental rights (e.g. constitutional rights at the national level and human rights at the supranational level) provide adequate protection to mental privacy and mental integrity; (b) a reformist, innovationist position, in which scholars argue that existing legal frameworks are not sufficient to protect the brain and mental content of individuals under envisioned near-future scenarios of AI-based brain decoding through neurotechnologies and that, therefore, reforms of existing frameworks – such as constitutional laws or even the Universal Declaration of Human Rights – are required; and (c) a human rights–based approach that acknowledges that the law (in most national jurisdictions as well as internationally) provides sufficient legal instruments but that its scattered nature – across jurisdictions as well as different areas and levels of the law (such as consumer protection laws, constitutional rights, etc.) – requires an approach that makes neurorights actionable and justiciable, for example by connecting fundamental rights to specific applied laws (e.g. in consumer protection laws).

The latter position – which in the policy domain would translate into a multi-level governance approach – has the advantage that it does not argue from entrenched positions with little room for consilience but provides deliberative space in which agreements, treaties, soft law declarations, and similar instruments for supra- and transnational harmonization can thrive.

25 AI-Supported Brain–Computer Interfaces and the Emergence of ‘Cyberbilities’

Boris Essmann and Oliver Mueller
I. Introduction

Recent advances in brain–computer interfacing (BCI) technology hold out the prospect of technological intervention into the basis of human agency to supplement and restore functioning in agency-limited individuals and even to augment and enhance capacities for natural agency. By increasingly using Artificial Intelligence (AI), for example machine learning methods, a new generation of brain–computer interfaces aims to advance the technological possibilities to intervene into agentive capacities even further, creating new forms of human–machine interaction in the process. This trend further accentuates concerns about the impact of neurotechnology on human agency, not only regarding far-reaching visions like the widely publicized propositions by Elon Musk (Neuralink) but also with respect to current developments in medicine. Because these developments could be understood as (worrisome) ‘fusions’ of human, machinic, and software agency, we investigate neurotechnology and AI-assisted brain–computer interfaces by directly focusing on agentive dimensions and potential changes of agency in these types of interactions. By providing a philosophical discussion of these topics, we aim to capture the broad impact of this technology on our future and contribute valuable perspectives on its ethically and socially relevant dimensions. Although we adopt a philosophical approach, we do not restrict ourselves to a single disciplinary perspective, such as an exclusively ethical or neuroscience-oriented analysis. Given the potential to fundamentally reshape our individual and collective lives, the combination of neurotechnology and AI-technology may well create challenges that exceed disciplinary boundaries and which, therefore, cannot be met by a single discipline.

Our contribution to discussing the ‘fusion’ of human and artificial agency is the introduction of two neologisms – cyberbilities and hybrid agency – which we understand as concepts that integrate a range of disciplinary perspectives on this phenomenon. At a fundamental level, the concept of cyberbilities loosely draws on Amartya Sen’s and Martha Nussbaum’s capabilities approach, but retools the notion of capabilities to analyze intricate human–machine interactions. We specifically adopt the normative core of capabilities – the ethical value of well-being opportunities – as a conceptual tool to evaluate risks and benefits of AI-supported brain–computer interfaces. However, like capabilities, cyberbilities presuppose a concept of human agency. Therefore, devising this concept requires a clarification of the underlying understanding of agency. Furthermore, because cyberbilities involve agency that is assisted by neurotechnology, we will also include an analysis of the various interactions between human and non-human elements involved.

This chapter is divided into three main sections. In the first section, we present conceptual expositions of the terms capabilities, agency, and human–machine interaction, which serve both as an illustration of the complex nature of BCI technology and as necessary background to motivate the following line of argument.Footnote 1 This section is not intended to exhaust the topic from a specific (e.g., ethical or neuroscientific) perspective, but rather to amalgamate three very different but – as we maintain – complementary approaches. Specifically, we draw on the work of capability theorists such as Sen and Nussbaum.Footnote 2 Also, since neurotechnology affects human agency on various levels, we discuss the notions of agency and human–machine interaction from the perspectives of neuroscience, philosophical action theory, and a sociological framework.Footnote 3 In the next section, we introduce the above-mentioned novel concepts of hybrid agency and cyberbilities, which combine our preceding line of argument and denote new forms of agency resulting from ‘agentive’ technologies.Footnote 4 A cyberbility is a type of capability; in other words, it is a normative concept designed to gauge the various ways in which neurotechnology can lead to achievements of (or want of) well-being and contribute to (or detract from) human flourishing. In the last section, we propose a list of cyberbilities that illustrates ways in which neurotechnology can lead to well-being gains (or losses) and explores the personal, social, and political ramifications of neurotechnologically assisted (or, in our terms, hybrid) agency.Footnote 5 However, this list of cyberbilities should not be understood as a conclusive result of the preceding conceptual work, but rather as a tentative and incomplete catalogue of core claims and requirements that reflect how new kinds of technologies challenge our established understanding of agency and human–machine interaction. In this sense, we see the list of cyberbilities not as a completed ethical evaluation, but as a foray into mapping tentative points of normative orientation.Footnote 6 Finally, we discuss a potential objection regarding our approach.Footnote 7

II. From Capabilities to Cyberbilities

Let’s start by anticipating our definition of cyberbilities: Cyberbilities are capabilities that originate from hybrid agency (i.e. human–machine interactions), in which agency is distributed across human and neurotechnological elements. As we will lay out in the following sections, this definition emphasizes that cyberbilities are embedded not only in personal aspects of agency, but also in a social environment that is shaped by the ‘logic’ of the respective technology and the institutions that deploy it (i.e. the ‘technological condition’).

In order to provide the necessary background for the notion of cyberbilities, we shall proceed in three steps. Firstly, we will briefly explain in which way we retool the capabilities approach for our own purposes. Secondly, we argue that we need to revisit the concept of agency concerning its use in neuroscience and philosophy if we want to reliably describe the complex interactions between human and artificial elements, especially in the context of brain–computer interfaces. Lastly, we will draw on the notion of distributed agency introduced by sociologist Werner RammertFootnote 8 to illuminate how technology affects agency and, consequently, human–machine interactions. All three steps serve to review current disciplinary views on the topics at hand and to prepare our proposal of an extended and integrated perspective in Section III.

1. Capabilities

The capabilities approach, first introduced by SenFootnote 9 and extended by NussbaumFootnote 10, is a theoretical framework used in a number of fields to evaluate the well-being of individuals in relation to their social, political, and psychological circumstances. To capability theorists, each person can be described (and thus compared) in terms of their ‘capabilities’ to achieve and maintain well-being, and any restrictions of those capabilities are subject to ethical scrutiny. As a philosophical term, well-being does not mean, for example, happiness, wealth, or the absence of negative emotions or circumstances. Rather, well-being is meant to encompass how well a person’s life is going overall: not just in relation to the available means to lead a comfortable life or to achieve temporary positive emotional states, but in the sense that each person is understood as an end, with the focus on the opportunities to lead a good life that are available to that person.Footnote 11

There is a long history of debate on the capabilities approach, and Sen and Nussbaum themselves have offered further refinements of the approach. We are aware of the fact that there are a number of controversies and open questions, for example the criticism that Sen’s account is overly individualisticFootnote 12, or concerns regarding certain essentialist traitsFootnote 13 of Nussbaum’s version of the capabilities approach. However, due to the explorative purpose of this chapter, we do not want to engage in further discussions of these aspects. Rather, we draw on Sen’s and Nussbaum’s theories in a pragmatic way, adopting some of their core elements in order to develop a basis for our tentative list of cyberbilities, which we see as a conceptual means not only to grasp novel kinds of agency in the upcoming age of human–machine fusions but also to propose a perspective that could help to evaluate these human–machine mergers.

But what are capabilities? Loosely following Sen, a capability describes what a person is actually able to be and do to increase her well-being. To capability theorists, ‘the freedom to achieve well-being is of primary moral importance’Footnote 14, and can therefore be used to evaluate if a person’s social, political, and developmental circumstances support or hinder her well-being. In more technical terms, a capability is the real opportunity (or freedom) to achieve functionings, where functionings are beings and doings (or states) of a person, like ‘being well-nourished’ or ‘taking the bus to work’. Both capabilities and functionings are treated as a measure of a person’s well-being, and therefore allow us to compare people in terms of how well their life is going. They are distinguished, however, from resources like wealth or commodities, because those metrics arguably provide only limited or indirect information about how well the life of a person is going.

Nussbaum further developed the capabilities approach, specifically by extending the scope of Sen’s pragmatic and result-oriented theory.Footnote 15 For her, a functioning is ‘an active realization of one or more capabilities (…). Functionings are beings and doings that are the outgrowths or realization of capabilities.’Footnote 16 Nussbaum stresses that she does not intend to deliver a theory on human nature as such. But she does understand the capabilities approach as an inherently evaluative and ethical theory that focuses on capacities that human beings have reason to value and that a just society is obligated to nurture and support.Footnote 17 Here, too, the normative criterion is well-being (although quality of life or human flourishing are sometimes used synonymously). According to Nussbaum’s ambitious theory, the development of capabilities is connected to the notions of freedom (as in Sen’s theory) and dignity (in which she goes beyond Sen); she states: ‘In general (…) the Capabilities Approach, in my version, focuses on the protection of areas of freedom so central that their removal makes a life not worthy of human dignity.’Footnote 18 Against this background Nussbaum famously compiled a list of ten central capabilities, ranging from life, bodily health, and bodily integrity to affiliation with others and political and material control over one’s environment.Footnote 19

Our conception of cyberbilities shares not only Sen’s focus on well-being and functionings, but also Nussbaum’s idea of providing a list of core cyberbilities. However, we understand our list not as a substitution for, but as a supplement to, Nussbaum’s, taking into account that AI-based brain–computer interfaces might change our understanding of both capabilities and agency.

Our reasoning is that modern technology is so complex and so closely connected to human agency and well-being that it has the potential not only to subvert but also to strengthen capabilities in complex ways. This relation will only become more intricate as neurotechnology and AI become more elaborate and more deeply integrated into our bodies, especially with the human–machine fusions promised by future BCI technologies. Simply asking whether such technologies contribute to or detract from well-being, or contradict or strengthen central capabilities, might be undercut by the impact they have on human agency as a whole. We could overlook subtle but undesirable effects on agency if a technology grants certain well-being benefits, or miss beneficial effects on flourishing, for example in the case of capability-tradeoffsFootnote 20 realized by new types of technologically-assisted agency.

For this reason, we argue that evaluating current and future neurotechnology on the basis of the capabilities approach alone might fall short. Instead, we propose to combine the well-being and functioning focus of the capability approach with an extended perspective on agency that is tailored to identifying the impact of neurotechnology and AI on human agency as a whole. The specific challenge is that neurotechnological devices are not just another type of tool that human beings can use as an external means to realize capabilities and achieve well-being. By intervening into the brain of a person, neurotechnology interacts intimately with the basis of human agency, which opens up the possibility of affecting agency and capabilities in unforeseen ways. Because we may not be able to predict whether this new kind of interaction relates positively or negatively to those dimensions, it seems prudent to develop a perspective that can accompany the coming neurotechnological developments with ethical scrutiny.

Hence, cyberbilities are an extension of the core tenets of the capability approach insofar as they are capabilities that arise from agency that is already enabled or affected by neuro- and/or AI-technology.

2. Agency and Human–Machine Interactions

After having briefly introduced the notion of capabilities we now focus on the conceptions of agency and human–machine interaction. This section will work towards an understanding of the ways in which human agency intersects and merges with machinic and software agency in technological contexts, a phenomenon which sociologist Rammert calls distributed agency.Footnote 21 The concept of hybrid agency, which we introduce in Section III, is a specific type of distributed agency which is also the core of the notion of a cyberbility.

There are two dimensions we consider to be central to human–machine interaction in general, and human–computer interaction in particular: Firstly, the causal efficacy of intentions, in other words, the idea that human intentions are the causal origin of technologically mediated actions, and secondly, the social aspect of acting in a technological context, especially when interacting with technological devices. We review established views on both agency and human–machine interaction in the context of BCI operationFootnote 22 and then go on to discuss these views in more depth.Footnote 23 While these two dimensions by no means exhaust the spectrum of relevant aspects in human–machine interaction, we see them as instructive starting points to develop our extended view that leads to introducing the novel concepts of hybrid agency and cyberbilities.

a. The ‘Standard View’: Compensating Causality and Interactivity

Philosophically speaking, the concept of agency is connected with the phenomenon of intentionality and intention. An intention is a specific type of mental state that aggregates other action-related mental states (such as beliefs and desires), representing a concrete goal or plan and adding a stable commitment to actually perform actions aimed at realizing the respective goal or plan.Footnote 24 Theories that explain how intentions work conceptually are numerousFootnote 25, but the so-called standard view is that intentions govern and direct behavior through their specific causal efficacy.Footnote 26 In other words, intentions govern behavior by virtue of their direct and indirect causal effects on the chain of events from mental states to the execution of movements.Footnote 27 Hence, saying that a person ‘has agency’ amounts to saying that his intentions causally affect how the brain produces behavioral output, from cortical to spinal neural activity.

This view of agency is common not only in philosophy, but also in other disciplines, such as psychology and neuroscience. Principally, these disciplines agree that our behavior is governed by causally efficacious mental states, which emerge from the brain as their physiological basis. As a result, this view is compatible with a neuroscientific view of behavior and agency, and can be used to describe the basic rationale of current brain–computer interfaces and neuromodulation technologies. In what follows, we will primarily focus on motoric neuroprostheses, as they provide a clear and instructive case of application. The rationale for motoric neuroprostheses reads: If an agent can no longer perform actions and movements because the causal chain from the brain to the extremities is, in some way or another, interrupted, disrupted, or limited, the brain–computer interface can bridge causal gaps in this chain by (re-)connecting the neural correlates of intention with an artificial effector, such as a wheelchair or robotic arm.Footnote 28

This basic rationale highlights a mainly restorative and supplemental quality of neurotechnology, which we call the compensatory view, as its main focus is to compensate for lost or limited neural function. The compensatory nature of neurotechnology is illustrated by Walter Glannon in his analysis of the specific interaction between brain–computer interface and user. Arguing that neurotechnologically assisted agency is comparable to natural agency, Glannon states: ‘BCIs do not supplant, but supplement the agent’s mental states in a model of shared control. Rather than undermining the subject’s control of his behavior, they enable control by restoring the neural functions mediating the relevant mental and physical capacities’.Footnote 29 Besides drawing on the standard view of agency, in which the device compensates for the interrupted chain of events from intention to movement by bridging causal gaps, Glannon states that an ‘extended embodiment’Footnote 30 is a further prerequisite: If the user fails to experience the device as part of her own body schema, she may not perceive the movements of the robotic arm as ‘her own’, which could ‘undermine the feeling of being in control of one’s behavior’, thereby disrupting her sense of agency.Footnote 31

According to Glannon, the restorative and supplemental character of brain–computer interfaces stems from the specific interaction between user and device, which creates the phenomenon of shared control, in other words, control over the course of action is partly on the side of the user, and partly delegated to the brain–computer interface. The interaction consists of the user directing her mental states in such a way that the interface can detect neural states which ‘encode’ her intentions. This kind of interaction is the basis of Glannon’s notion of shared control, and successful extended embodiment is necessary to sustain and improve this kind of interactive control.

Brain–computer interfaces based on these principles have been successfully implemented in human patients, and the technology clearly has the potential to compensate for limitations of agency in the way described above. However, it is important to note that while this conclusion is valid, it also stems from a specific understanding of technology, which might support the conclusion while also obscuring other relevant aspects. The compensatory view conceives neurotechnology as a type of instrumental technology and hence frames brain–computer interfaces as auxiliary devices. From this perspective, neurotechnological devices are conceptualized as tools which remain, by definition, fundamentally subordinate to human autonomy and intention. BCI operation appears as auxiliary in nature because the device takes over only partial segments of a course of action, and the overall goal and regulation of action remains governed by human agency.

In principle, many technological settings can be usefully described from the perspective of instrumental technology. But is this the case in BCI operation? After all, a brain–computer interface is not just an external object, but a device implanted into the brain, affecting and interacting with the origins of action rather than just the external locus of object manipulation. So, does this intimate characteristic distinguish a brain–computer interface from an external tool?

To address this question, we need to examine the effects of BCI operation concerning its causal and neurophysiological nature to see if brain–computer interfaces ‘just bridge a causal gap’, or if they do more than that.Footnote 32 This analysis will suggest that the compensatory view on BCI technology is an extension of the standard view on agency, thereby inheriting its conceptual limits. To counteract this limitation, we need to extend the vocabulary we use to describe agency, and we will do this by taking a closer look at the specific kind of interaction between user and device, taking into account certain social characteristics of this interaction.Footnote 33 The basic idea we need to address is that there are some human–machine interactions which are so intimate that it becomes hard to say where human agency ends and machine-agency starts: The interaction between human and machine is such that agency is actually distributed across both interaction partners, rather than ultimately remaining under the governance of human intention.

b. Reframing Causality

Concerning the neurophysiological nature of a brain–computer interface operation, and shared control specifically, it should be noted that ‘a brain–computer interface records brain activity’ does not mean that it simply ‘detects intentions in the brain’. A brain–computer interface is not like an ECG that detects a heartbeat. Rather, operating a brain–computer interface relies on a mutual learning process: Recently developed interfaces increasingly rely on machine learning to distinguish relevant from irrelevant information about intended movement from a narrow recording site that yields a stream of noisy and limited data.Footnote 34 At the same time, the user has to learn to influence his neural activity in such a way that the recording site provides enough information in the first place to successfully operate the external effector. This is achieved by passing through a lengthy training period in which user and interface gradually attune and adapt to each other.Footnote 35 Shared control over actions in Glannon’s sense is based on this kind of mutual adaptation.Footnote 36
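To give a rough sense of what the machine-learning side of this mutual learning process involves, the following is a minimal, hypothetical sketch in Python of a decoder that maps windowed features of recorded neural activity to an intended command. The feature representation, the choice of a logistic-regression classifier, and the three command classes are illustrative assumptions only; they do not correspond to Glannon’s account or to any particular clinical system.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(seed=0)

# Stand-in calibration data: each row represents band-power features extracted
# from a short window of recorded neural activity; each label encodes the
# movement the user was cued to imagine (0 = rest, 1 = left, 2 = right).
n_trials, n_features = 300, 16
features = rng.normal(size=(n_trials, n_features))
labels = rng.integers(low=0, high=3, size=n_trials)

# In a real calibration session the labels come from cued imagery trials;
# because these data are random, decoding accuracy will hover around chance.
X_train, X_test, y_train, y_test = train_test_split(
    features, labels, test_size=0.25, random_state=0
)

decoder = LogisticRegression(max_iter=1000)
decoder.fit(X_train, y_train)
print(f"held-out decoding accuracy: {decoder.score(X_test, y_test):.2f}")

# At run time, each new window of features is decoded into a command that
# drives the external effector (e.g. a cursor or a robotic arm).
new_window = rng.normal(size=(1, n_features))
print(f"decoded command: {decoder.predict(new_window)[0]}")
```

The point of the sketch is the two-sided nature of calibration described above: the classifier is refitted as new labelled windows arrive, while the user simultaneously learns to produce neural activity that the chosen features can actually pick up.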

However, this attunement and adaptation between brain–computer interface and user also affects the brain as a whole, which mitigates the claim that in these user–computer interactions, control is merely partly delegated from user to device. As Jonathan R. Wolpaw and Elizabeth Winter Wolpaw note, natural (i.e. not neurotechnologically assisted) agency is a product of activity distributed across the whole central nervous system, which continually adapts and changes to produce appropriate behavioral responses to its environment.Footnote 37 Introducing a brain–computer interface basically creates a novel output modality for this complex system. As a result, the central nervous system as a whole adapts and rearranges in order to learn to control this new way of interacting with its surroundings. And because brain–computer interfaces rely on a localized recording site and a specific type of neural signal, the user needs to retrain a small part of this extensive system to provide an output which normally is produced by the whole central nervous system, which in turn affects how the central nervous system works as a whole.

In our view, this speaks against the basic tenet of the compensatory view that a brain–computer interface just supplements the agent’s mental states, as the whole system that produces mental states is affected by neurotechnological interfacing. Specifically, it calls into question the view that a brain–computer interface simply bridges a causal gap in the action chain of its user, as a brain–computer interface does not carefully target a specific causal gap. Rather, it modulates the whole system to restore causal efficacy, restructuring the causal chain from intention to action in the process. While this does not mean that a brain–computer interface necessarily supplants a person’s agency, we still claim that the compensatory view might easily miss important ramifications of the technology, even in terms of causal efficacy.

Furthermore, we argue that the compensatory view also falls short of identifying more overarching agency-altering effects of neurotechnology. While Glannon discusses aspects of the sense of agency in terms of extended embodiment and experiencing control over one’s behavior – important aspects that contribute to explaining the sense of agency – both embodiment and the sense of agency include further aspects. For example, it has been suggested that the sense of agency is an aggregation of at least three distinct phenomena, namely, the sense of intentional causation, the sense of initiation, and the sense of control.Footnote 38 The latter can be distinguished further into the sense of motor, situational, and rational controlFootnote 39, raising the question of which aspects of control are actually shared between user and brain–computer interface. While the case of motor control seems quite clear, any effect of a neurotechnological device on rational or situational control over actions should be analyzed rigorously – the question is whether an exclusively causal and neurophysiological vocabulary will suffice to explore these effects and their overarching consequences. It is to be expected that this situation will become even more pressing with the inclusion of increasingly complex and autonomous AI-technology. As outlined earlier, even current machine learning–supported brain–computer interfaces cannot be understood as simple ‘translators’ between brain and computer. Advanced AI-technologies will likely introduce additional dimensions of influence by establishing more sophisticated means of interaction between human and machine. We argue that this necessitates a framework that can capture not only specific causal effects, but also changes in interactivity between human and machine which might modulate the causal setting of agency altogether.

c. Reframing Interactivity

The compensatory view addresses interactions between user and brain–computer interface by highlighting that both the causal compensation and the integration into the body schema are based on a reciprocal learning process. However, the interactions and adaptations between user and brain–computer interface also have a social dimension which is not addressed by the compensatory view. We argue that this is due to conceptual blind spots that result from its vocabulary, which treats agency and intentionality as purely biological functions. As a result, the compensatory view struggles with identifying and factoring in nonbiological (e.g., social and normative) and nonhuman (i.e. artificially intelligent) dimensions of agency.

To counteract this shortcoming, it is necessary to extend the vocabulary of agency accordingly. Sociology, Science and Technology Studies, and the Philosophy of Technology have a rich history of analyzing how technology permeates modern life and deeply affects and changes human agency. We will paradigmatically draw on a sociological theory called the gradualized concept of agencyFootnote 40, which shifts the focus from agency as a biological capacity to agency as a phenomenon that emerges from various types of interactions between and among humans, machines, and software. Advanced technologies, it is argued, create a multitude of heterogeneous artificial ‘agencies’ which interact with and influence not only each other, but also human agency in fundamental ways. Importantly, the gradualized concept of agency can be used to examine interactions between a brain–computer interface and its user on the level of human–machine interactions without contradicting the neurophysiological aspects of human agency discussed earlier. In fact, the gradualized concept of agency may help to emphasize that the compensatory view is not outright false by demonstrating its blind spots in a constructive manner.

As argued above, the compensatory view regards neurotechnology as a passive tool by arguing that its contributions to a course of instrumental action concern only partial sequences in the causal chain, while the order of causal events still is governed and regulated by human intention. Hence, the significance and involvement of technological contributions is derived primarily from human intention: The user and his intentions remain in control of the action.

By contrast, the gradualized concept of agency offers an analysis of this kind of relation that shows how advanced technology can subtly restructure instrumental action and lead to agency-altering consequences. It draws on an action-theoretic distinction between three dimensions of agency. The intentional dimension contains the rational capacity to set action goals and to deliberate on courses of action. Human intention embodies this capacity as an overarching mental state that governs action from planning to execution. The regulative dimension corresponds to the control and monitoring of action courses. And the effective dimension describes the base-level efficacy to causally affect the environment depending on intentional and regulative aspects.Footnote 41

Based on this model, the gradualized concept of agency argues that technological involvement in the effective dimension can easily cascade from the effective to the regulative and even the intentional dimension. Three common motives of instrumental action illustrate this shift, as technology is often used to delegate effective and regulative aspects of actions in order to save time, to improve action outcomes, and to realize action goals the agent could not realize herself. While these aspects may not seem noteworthy when using a conventional tool like a hammer or a common car, their significance and interconnectivity increase the more advanced a technological device is. This can be illustrated by way of two examples: Firstly, a navigation system not only saves time when planning a route, it also improves travel times by calculating and continuously adjusting the best route based on current traffic data. Secondly, the Google search algorithm seems to be a simple tool to search for relevant information on the Internet. But by scanning billions of websites and documents in fractions of a second, it is not only infinitely more efficient in finding information, but also autonomously regulates the search by ranking relevant information depending on context, which it determines dynamically. Google not only finds information; it evaluates which information is relevant.

It is noteworthy that technological artifacts themselves are the product of complex intentional actions, and that they embody the intentionality of their design: They are ‘objectively materialized structures of meaning’Footnote 42. In this perspective, artifacts carry normative weight which affects the structure of the actions they are involved in. Their designed versatility stems from being oriented towards typical rather than individual action, making them multipurpose and offering reliable repeatability of action. As a consequence, using an artifact requires that the agent adapts to its purpose rather than the other way around – particularly in cases where the artifact takes on partial actions which a human agent could not perform. This characteristic illustrates that technology not only improves or creates new courses of action, but that it is suggestive of certain action goals. Hence, artifacts have an active role in the intentional dimension as well. This effect is magnified when artifacts use software algorithms so that the user can delegate aspects of planning, monitoring, and control to the respective program.

These examples show that many interactions between user and advanced technology consist in various forms of delegation. In the context of AI-based neurotechnology, the combination of machines and software is of critical importance, as the involvement of machine learning and other AI-technology amounts to including in the equation increasingly autonomous software agents that are capable of generating actions themselves. Because software agents not only interact with human users, but also (and mostly) with other software agents, their ‘intra-activities’Footnote 43 create open systems which lose the transparency of operation we usually expect from technological tools. Hence, when delegating actions to such intra-acting software agents, we do not use a tool, but interact with another type of agency. Rammert notes that ‘[w]hen human actions, machine operations and programmed activities are so closely knit together that they form a “seamless web”, [we need to] analyze this hybrid constellation as a heterogeneous network of activities and interactivities.’Footnote 44 The gradualized concept of agency enables this kind of analysis by proposing the concept of distributed agency, which can be seen as a nondualist perspectiveFootnote 45 on the complex interactions between human and nonhuman contributors to agency. Of particular interest to us is the notion that agency can be (and often is) distributed across a hybrid constellation of entities, including (but not limited to) humans, machines, software, and AI. In this respect, being ‘distributed’ means that a simple observable movement performed by a patient with a BCI-enabled prosthesis is the result of a complex interplay of activities, interactivities, and intra-activities. So, who is acting in scenarios of neurotechnologically assisted agency? Following the gradualized concept of agency, it is not a singular agent, but a hybrid constellation of people, machines, and programs over all of which agency is distributed in complex ways.

The concept of distributed agency includes a further dimension which is of importance to our argument, namely the modern sociotechnological setting, or the ‘technological condition’ we mentioned in the introduction. With the concept of distributed agency, the gradualized concept of agency argues that technologically assisted agency emerges from ‘many loci of agency’Footnote 46 rather than from singular instrumental actions (e.g., tool use) performed by an individual human agent. While the individual agent does contribute to agency, his contribution is only one activity in a stream of human interactions, machinic intra-activities, and human–machine interactivities. The sociotechnological setting can be addressed by further analyzing human interactions and machinic intra-activities.

Rammert notes that complex technological actions, such as flying tourists to Tenerife with a commercial airplane, include not only individual actions by the pilot, but also considerable contributions from a multitude of both human and nonhuman contributors.Footnote 47 On the human side, the pilot is fully dependent on the flight team on board (co-pilot) and on the ground (air traffic controllers, radio operators), as well as on the airline company which planned and scheduled the flight, and also on the passengers buying the tickets, and so on. On the technical side, the flight is also facilitated by the intra-activities of the various machines and programs integrated into the airplane as well as the respective facilities on the ground. Also, consider that the majority of the flight actions are performed by the auto-pilot, which consists of software programs that constantly measure, monitor, and adjust the mechanical parts of the airplane while checking back with the software networks on the ground that assist in planning, controlling, and navigating the airplane.

Coming back to the example of a movement performed by a patient with an AI-based, BCI-enabled prosthesis, we can apply the same perspective. At first glance, it is just the patient who directly performs the movement of the prosthesis. However, we need to acknowledge the different teams involved, for example the doctors and nurses who performed the initial surgery, and the researchers, technicians, and engineers who built the prosthesis, designed the clinical study, and maintain the device. The hospital, the healthcare system, and research and development institutions are further associations of people involved. And lastly, funding agencies, policies, and social demands contribute to enabling the movement of the neuroprosthesis as well. On the technical side, a neuroprosthesis includes the ‘decoder’, which can be considered a piece of AI as it employs machine learning to interpret the neural data monitored by the implanted electrodes. While still closer to science fiction at the moment, the inclusion of more complex AI solutions in brain–computer interfaces may well become achievable in the near future.

III. Hybrid Agency As the Foundation of Cyberbilities

The concept of distributed agency is a valuable tool to describe agency beyond the scope of the individual biological functions that underlie a person’s capacity to act in accordance with her intentions and plans. It shifts the perspective from the limited compensatory view of technological agency to the complex context in which technological agency not only takes place but emerges as the product of a broad spectrum of biological, psychological, social, and political factors. In this sense, the notion of distributed agency can be used as a viable philosophical tool to expose the conditions of possibility regarding concepts such as intention or capability.

1. Distributed Agency and Hybrid Agency

Because we aim to focus this critical potential on neurotechnologically-assisted agency in particular, we face the challenge of addressing both its neurophysiological dimension – because neurotechnological devices are directly ‘wired’ into a person’s brain – and its sociotechnological dimension – as such a device entails complex inter- and intra-activities between and among humans and machines. Thus, we introduce the concept of hybrid agency as a special case of distributed agency, namely as human–machine interactions in which agency is distributed across human and neurotechnological elements. This further emphasizes that neurotechnology – which, by definition, is technology that is directly connected to the brain – is not a conventional tool because it shapes agency not only by being used, but also by directly interacting with the origin of agency. Hence, hybrid agency describes intimate ‘fusions’ of human and machinic agency and requires direct human–neurotechnology interaction as a basis – but, of course, this does not exclude the biological, psychological, social, or political factors that are directly or indirectly related to neurotechnology. These related or indirect factors still shape the structures of neurotechnologically assisted agency, and can themselves be shaped by neurotechnology. And, importantly, hybrid agency specifically includes the various systems of intra-activities among technological and software agents which neurotechnological devices imply.

The concept of hybrid agency directly opposes the compensatory view, which reduces these complex dimensions by drawing on the instrumental theory of technology, equating neuroprosthetics with conventional tool-use. In this model, neurotechnologically-assisted agency means that a single human agent uses a passive technological tool that compensates for limitations in the action chain, allowing the user to perform actions she would have performed anyway if she could have done so.

2. Cyberbilities As Neurotechnological Capabilities

Hybrid agency is the foundation of cyberbilities insofar as this kind of technologically-assisted agency creates specific types of capabilities (i.e. opportunities to gain functionings) which we call cyberbilities. A formal definition reads: ‘cyberbilities are capabilities that originate from hybrid agency, i.e. human–machine interactions in which agency is distributed across human and neurotechnological elements.’ Because capabilities are defined as real opportunities to achieve functionings – beings and doings that increase well-being – cyberbilities are real opportunities to achieve such functionings as the result of hybrid agency.

It is important to emphasize that cyberbilities are capabilities, not functionings. They are not specific skills or abilities a person may gain from neurotechnology. Rather, they denote the opportunities to gain all kinds of (neurotechnological or ‘natural’) functionings. And even functionings are not just skills or abilities (doings), but also include states of being (like having financial or social resources or being informed about a certain subject matter). If a paraplegic person uses a brain–computer interface to gain the ability to control her wheelchair, the resulting cyberbilities are related to the opportunities that are gained by this type of technological agency. The brain–computer interface opens up a spectrum of agency that was previously restricted, allowing this person, for example, to attend a wedding and thus participate in socializing, which potentially increases this person’s well-being.

Hence, cyberbilities denote the opportunities opening up for users of neurotechnology. But because they are the result of hybrid agency, they are also the product of a technology that affects agency as a whole, in other words, not only on the level of causal efficacy, but also concerning psychological, social, and political factors. While a neurotechnological device may be designed to restore, facilitate, or enhance specific skills, gaining or regaining such skills has wider implications in that this can change how we conceptualize and live our lives. This is why neurotechnological agency cannot be reduced to gaining specific skills. We devised cyberbilities as a conceptual tool to reflect this important factor and provide a means of orientation concerning the potential developments entailed by the use of neurotechnology. Furthermore, cyberbilities are also concerned with the social ramifications of neurotechnological agency. The more the availability of neurotechnology increases, the more it affects all members of society.

IV. Cyberbilities and the Responsible Development of Neurotechnology

After having developed the concept of cyberbilities, we would like to propose a first tentative and incomplete list of cyberbilities, inspired by Nussbaum’s list of capabilities.Footnote 48 We consider our list to be incomplete because it is not meant to cover all basic needs of human beings, nor does it pursue any other holistic ambition. Therefore, the list presented in the following section should not be understood as a replacement for Nussbaum’s list. Rather, we merely aim to stimulate discussions about the implications of future neurotechnologies by drawing on core ideas of the capabilities approach. However, cyberbilities are comparable to capabilities in the following way: Nussbaum’s central capabilities describe opportunities, grounded in personal and social circumstances, which, if restricted or unattainable, would greatly reduce a person’s chances to gain well-being-related functionings (to ‘lead a good life’). Similarly, cyberbilities describe opportunities created by hybrid agency, which, if restricted or unattainable when using neurotechnology, would greatly reduce the chances to gain well-being-related functionings for a neurotechnologically assisted agent.

Our list of cyberbilities is also necessarily tentative: In order to address future neurotechnologies we have to work with a hypothetical view of neurotechnology that includes a type of AI-supported human–machine fusion that is yet to come. We base this view on current developments, where we can observe various endeavors aiming at advancing AI-assisted neurotechnology, from neuroprostheses for severely paralyzed patients, to sophisticated machine learning approaches, up to straightforward futuristic visions such as Musk’s neurotech company Neuralink.Footnote 49 Based on such enterprises we think of a future technology that is highly invasive and uses AI methods to generate a novel kind of human–machine fusion that goes far beyond traditional technological tools or machines. We assembled this list with this kind of future technology in mind. In the following, we first introduce our list of cyberbilities,Footnote 50 then provide some remarks on the responsible development of neurotechnology,Footnote 51 and finally discuss a potential objection against our proposal.Footnote 52

1. Introducing a List of Cyberbilities

The five cyberbilities we introduce below fall on a spectrum that ranges from individual to social and political agency. While neurotechnological interventions can create specific neurotechnologically enabled functionings, they also affect a person in more general ways. New, enhanced, or restored functionings extend and shift a person’s individual range of agency, and invasive or otherwise intimate interactions between human and machine may change how a person relates to their body. Both aspects can affect the identity and self-expression of a person, modulating their individual agency. But hybrid agency also affects social agency: On the one hand, neurotechnologies enable individual actions which can be the basis of social interactions and participation, potentially adding a social dimension even to the most basic movements.Footnote 53 On the other hand, hybrid agency itself is a type of interaction between human and neurotechnology which already includes various social aspects. Neurotechnology has the potential to support social agency, but some of its aspects may also radically reshape social engagement. Furthermore, hybrid agency has distinct political dimensions that range from enabling a person to take part in communal to political and democratic processes.

Autonomy and self-endorsement: Neurotechnological devices are often used with the intent to restore or increase a person’s functionings (skills, abilities, states), which might also suggest that such devices generally support their autonomy as a more general capability. However, this view might be too simplistic if those functionings result from hybrid agency. Hybrid agency entails a relational dimension of autonomy because autonomy is no longer restricted to interactions between human beings but also concerns the interactivity between human and machine. A neurotechnologically-assisted person could retain autonomy in relation to human interactions while losing it in the context of human–machine interaction. Furthermore, due to the intimate fusion of human and machine, simply insisting that the human part must retain autonomy over the machinic part might be an oversimplified demand. Instead, we should address autonomy in this setting not in terms of the primacy and efficacy of human intention (i.e. the compensatory view), but in terms of ‘self-endorsed agency’. Autonomy then denotes the extent to which a person experiences their behavior as volitional and self-endorsed as opposed to coerced, driven, or covertly directed by external forces. Understanding autonomy as a cyberbility that is focused on self-endorsed agency might be a viable way to safeguard and promote self-expression and identity.

Embodiment and identity: A technological device should restore or enhance a person’s body in such a way that the person is able to integrate the device into her bodily experience, meaning that the person can, without disruptions, identify with the artificial ‘part’ of herself. She should be able to say ‘I have acted like this with the support of the technology’ or ‘the device and I have acted together’ or ‘I have acted like this, and I did not experience the interference of the device’, etc. Even if a neurotechnological device is not imperceptibly ‘merged’ with the body (as, for instance, a deep brain stimulator is), but instead remains separate from it, the person should have the impression that the device ‘behaves’ in such a way that she can unreservedly identify with the actions she performs with the support of the respective device. In other words: The person may not have a sense of ownership but should have a sense of agency. The technological tool should be integrated into the body schema of a person, even if the body image is radically changed, for example, in the case of neuroprostheses consisting of external artificial limbs which are ‘wired’ directly into the motor cortex while remaining clearly separated from the patient’s body.

Understandability and life-world: Hybrid agency describes the fusion between a person and a neurotechnological device that is intimately connected with the brain and body of its user. Although a lay person may never entirely comprehend how such a device works exactly, a certain degree of understanding is indispensable. Complementing existing approaches to an ‘explainable AI’, a technological device should be ‘understandable’ in the sense that the user knows that the device creates a situation of hybrid agency and roughly how the device might affect her agency and behavior (e.g., knowing that a brain–computer interface complements the causal efficacy of her intentions and where the causal contribution lies, which might concern not only the execution of movements but also their planning or initiation). Furthermore, a person should be able to act in interplay with the device in such a way that she can always identify herself with the resulting joint action. She does not need to be able to explain how the device works on a technical level; rather, she needs to understand how it contributes to hybrid actions and how it creates well-being opportunities and thus becomes deeply integrated into her ‘life-world’.

Social embeddedness and social experience: Hybrid agency can create opportunities to engage with the social world, be it on the level of restoring mobility and allowing a person to meet other people or on the level of being able to express thoughts and feelings, for example via digital communication devices. Enabling, restoring, and extending such engagements – for example, in the case of severe paralysis, situations that restrict direct social contact (such as a pandemic), or when trying to socialize over long distances – hold the potential of significant well-being gains. At the same time, however, neurotechnology shapes and alters the basic conditions of social interactions, thereby influencing the way both neurotechnology users and nonusers are socially embedded in the first place. One possible way to capture such fundamental changes could be to focus on how our social experiences are affected by technology.

Political engagement and participation: By supporting individual and social agency, neurotechnology also opens up opportunities to engage in political activities at various levels and in other forms of campaigning for the common good. Neurotechnological devices should be designed to foster participation in democratic processes such as voting, politicking, or running for office, and should also support engagement in local and global communities, organizations, and institutions.

2. Remarks on the Responsible Development of Neurotechnology

Because neurotechnologies are developed within a society and its ever changing and shifting norms and regulations, cyberbilities are also linked to broad and ongoing societal, ethical, and legal questions. The keywords listed below are not to be understood as cyberbilities, but as indicators of more general questions surrounding cyberbilities. For example, because resources are usually limited, we may encounter questions such as which patients would benefit from this technology, meaning that not all persons may have the chance to alter their agency by gaining cyberbilities. Also, the neurotechnological engagement in certain activities may require laws that protect the user’s personal data (e.g., online services, healthcare, marketing). Because neurotechnology is and will most likely continue to be heavily regulated, the use of neurotechnology on the individual and social level will inherit the legal and political aspects associated with the regulation of neurotechnology, potentially affecting neurotechnology users and their agency. These complex areas will require careful analysis in the coming years, and the following remarks address some of the most basic requirements to safeguard the responsible development of neurotechnology. Furthermore, both the question of the trustworthiness of technological devices (especially regarding AI systems) in general and questions around data protection and informational self-determination will affect the future of neurotechnology and also how we evaluate cyberbilities in the future.

Availability: Market approval of neurotechnological devices is related to a host of important questions. Who will have access to neurotechnology? How is access regulated – via healthcare systems, or even the open market? And how does regulated access affect not only neurotechnology users, but also those who do not have access to neurotechnology and who have to interact or compete (e.g. in the job market) with those who do? Such questions indicate important consequences for well-being on multiple levels: If neurotechnology users are individually, socially, politically, or otherwise advantaged or disadvantaged, this circumstance generally affects neurotechnology-related opportunities to gain well-being – both for those who have and those who do not have access to neurotechnology. The question of availability specifically reveals that neurotechnology affects not only those who gain hybrid agency, but also those who do not. This aspect could even result in a ‘feedback loop’, as the relationship between neurotechnology users and nonusers might affect how norms and regulations develop, further changing this initial relation.

Data protection: Because neurotechnological devices monitor, record, and process neurophysiological (and potentially other biological or psychological) data, hybrid agency opens up a plethora of ways in which the data can be used and shared to create functionings or cyberbilities. But the same data could also be used for, among other things, political or commercial purposes. A neurotechnological device should be designed in such a way that it collects and uses personal data as conservatively as possible (e.g. restricted to momentary joint actions and activities), or at least implements particularly robust measures to prevent misuse of data (e.g. through encryption). Because AI (i.e. machine learning) is already implemented in neuroprostheses in order to interpret brain activity faster and more efficiently, such devices should be regarded as a genuine ‘part’ of the patient and thus be subject to the same legal and political protection concerning personal information and human rights as the user herself. Also, any further implementation of AI-technology needs to be carefully designed to safeguard both the data of its user and any human or nonhuman interaction partners.
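To illustrate the kind of data-minimizing and protective design gestured at above, the following sketch shows, under simplified assumptions, how a device might encrypt an aggregated neural data record locally before it is stored or transmitted. The record structure, its field names, and the key handling are hypothetical; an actual system would require certified security engineering and proper key management, for example in a secure hardware element.

```python
# Illustrative sketch (not a production design): encrypting an aggregated neural
# data record on the device before storage or transmission, so that raw recordings
# are not exposed if intercepted. Key management is deliberately simplified here.
import json
from cryptography.fernet import Fernet

# In practice the key would be provisioned and stored in secure hardware;
# it is generated ad hoc here purely for illustration.
key = Fernet.generate_key()
cipher = Fernet(key)

record = {
    "timestamp": "2022-05-01T10:15:00Z",   # hypothetical sample
    "channel_means": [0.12, -0.03, 0.44],  # aggregated features rather than raw signals
}

token = cipher.encrypt(json.dumps(record).encode("utf-8"))    # what leaves the device
restored = json.loads(cipher.decrypt(token).decode("utf-8"))  # what an authorized party recovers
assert restored == record
```

Storing and transmitting only aggregated features rather than raw signals, as assumed here, reflects the principle of collecting personal data as conservatively as possible.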

Trustworthiness: A technological device should not only be reliable in a mere technological sense, but the person should be able to trust herself and the device, especially in cases when the device is merged with the human body or brain. This trust could be seen as a broad psychological foundation of neurotechnology usage, as it includes many of the other items on this list and the list of cyberbilities, like trusting that hybrid agency can be self-endorsed, confidence in the physiological safety and digital security (hacking, manipulation, privacy) of neurotechnology, and reliance on understanding, in principle, the ways in which the device modifies and influences one’s natural capacity for agency.

V. Discussion and Closing Remarks

Neurotechnology will continue to afford us astounding possibilities. While the application of neurotechnology is currently restricted to medical usage, we hope to have provided a convincing argument that the future scope of this technology will go above and beyond the therapeutic restoration of specific skills and abilities. By proposing the concepts of hybrid agency and cyberbilities, we aim to broaden our perspective so that the enormous potential and overarching impact of neurotechnology may come to the fore.

However, we want to discuss one objection that could be raised on this point, namely that the focus on well-being is too one-sided and may lead to disregarding the intrinsic value of human agency. After all, cyberbilities are not based on ‘natural’ agency, but hybrid agency. What if this novel kind of agency is in some way deficient, because its technological portion somehow detracts from the human part of agency? In some cases, then, well-being could be achieved at the price of losing aspects of ‘natural’ agency.

This reasonable objection raises questions about the relative normative weights of well-being and agency, a topic that also applies to the capabilities approach. There, capabilities and functionings are embedded in the more general concept of agency, and the latter itself has an intrinsic normative value. But does the importance of agency outweigh the importance of well-being? If we transfer this question to the cyberbilities approach, we could ask: Could the pursuit of cyberbilities lead to justifying a loss of ‘natural’ agency for the sake of gaining well-being that is less connected to human agency, but rather grounded in technological agency? And to add a utopian twist, could an AI-based brain–computer interface at some point know better and decide for itself whether human or technological agency leads to more well-being gains?

There probably is no clear answer to these questions. While it could be argued that this thought experiment warrants preserving ‘natural’ agency, our line of argument in previous sections hopefully demonstrated that ‘natural agency’ is not easy to define. Following the standard view, natural agency would mean that the intentions of the human agent systematically modulate which actions are carried out. But considering the gradualized concept of agency, we also saw that human agency is entangled in complex social, institutional, and political systems that influence which intentions are available to human agents in the first place. Human agency is already intrinsically affected by our use of technology and its sociopolitical context.

However, we want to address a point we think is related to this general question: the possibility of capability tradeoffs. We argued that neurotechnology might not just compensate for causal gaps in the action chain, but rather has an influence on the entire action chain by modulating how the brain works as a whole. Furthermore, neurotechnology also affects, in various ways, the formation of intentions that lead to action chains in the first place. As a result, neurotechnology has the potential to lead to both gaining and losing capabilities.

Consider this example: A neurotechnological device might allow a person to achieve mobility-based functionings (like performing grasping movements with a robotic arm, or getting to work with a wheelchair controlled with the help of a brain–computer interface). If this device also has the effect that its user does not experience her movements as caused by herself (significant portions of grasping movements are controlled by the prosthesis; the wheelchair autonomously navigates to the workplace), then the well-being achievements (being self-sufficient at home and earning money) are realized at the cost of losing some portion of agency. This is a capability tradeoff: The capability (in this case, cyberbility) of neurotechnologically enabled mobility is traded off against the capability of controlling and planning one’s movements (which is a part of ‘natural’ agency).

Of course, such tradeoffs are not necessarily adverse or harmful: In the case of grasping, delegating control to the device at the cost of the sense of control might be acceptable as long as a general sense of agency remains intact (for instance, if the prosthesis overall performs in line with the user’s intentions). The case of the autonomous wheelchair is similar, although here the delegation of control goes much further because it includes planning and deciding how to navigate. Our argument is that there might be a point at which the ‘cost’ becomes unacceptable, for example, if significant portions of agency are traded off. Possible examples could be that the device increasingly detracts from agency, severely influences the decisions of users, or significantly affects the process of intention formation.

Naturally, determining the point at which capability tradeoffs become unacceptable is a difficult task as this is not a technical or scientific problem, but a normative one that needs to be addressed from ethical, legal, social, and political viewpoints. But this open question might help to conclude our line of argument, as we understand cyberbilities as a potential safeguard against unacceptable capability tradeoffs.Footnote 54

Footnotes

22 Medical AI Key Elements at the International Level

* The authors acknowledge funding by the Volkswagen Foundation, grant No. 95827. The state of the science is reflected in this chapter until the end of March 2021. The sources have been updated until mid-September 2021.

1 E Landhuis, ‘Deep Learning Takes on Tumours’ (2020) 580 Nature 550.

2 Ž Avsec and others, ‘Base-Resolution Models of Transcription-Factor Binding Reveal Soft Motif Syntax’ (2021) 53 Nat Genet 354.

3 E Landhuis, ‘Deep Learning Takes on Tumours’ (2020) 580 Nature 550.

4 S Porter, ‘AI Database Used to Improve Treatment of UK COVID-19 Patients’ (Healthcare IT News, 20 January 2021) www.healthcareitnews.com/news/emea/ai-database-used-improve-treatment-uk-covid-19-patients; concerning the usefulness of AI applications for pandemic response, see: M van der Schaar and others, ‘How Artificial Intelligence and Machine Learning Can Help Healthcare Systems Respond to COVID-19’ (2021) 110 Mach Learn 1.

5 A Binder and others, ‘Morphological and Molecular Breast Cancer Profiling through Explainable Machine Learning’ (Nat Mach Intell, 8 March 2021) www.nature.com/articles/s42256-021-00303-4.

6 Medieninformation, ‘Hirnschlag mit künstlicher Intelligenz wirksamer behandeln dank Verbundlernen’ (Universität Bern, 9 March 2021). www.caim.unibe.ch/unibe/portal/fak_medizin/dept_zentren/inst_caim/content/e998130/e998135/e1054959/e1054962/210309_Medienmitteilung_InselGruppe_UniBE_ASAP_eng.pdf; WHO, WHO Guideline: Recommendations on Digital Health Interventions for Health System Strengthening (WHO/RHR/19.8, 2019) (hereafter WHO, Recommendations on Digital Health).

7 JL Lavanchy and others, ‘Automation of Surgical Skill Assessment Using a Three-Stage Machine Learning Algorithm’ (2021) 11 Sci Rep 5197.

8 M Nagendran and others, ‘Artificial Intelligence versus Clinicians: Systematic Review of Design, Reporting Standards, and Claims of Deep Learning Studies’ (2020) BMJ 368:m689.

9 See also European Commission, ‘High-Level Expert Group on Artificial Intelligence: Ethics Guidelines for Trustworthy AI’ (European Commission, 8 April 2019) https://digital-strategy.ec.europa.eu/en/library/ethics-guidelines-trustworthy-ai. The Guidelines of this group have been subject to criticism, cf. M Veale, ‘A Critical Take on the Policy Recommendations of the EU High-Level Expert Group on Artificial Intelligence’ (2020) 11 European Journal of Risk Regulation 1, E1 doi:10.1017/err.2019.65.

10 Cf., for example, FDA, ‘Digital Health Software Precertification (Pre-Cert) Program’ (FDA, 14 September 2020) www.fda.gov/medical-devices/digital-health-center-excellence/digital-health-software-precertification-pre-cert-program.

11 CoE Commissioner for Human Rights, ‘Unboxing Artificial Intelligence: 10 Steps to Protect Human Rights’ (Council of Europe, May 2019) 10 et seq. https://rm.coe.int/unboxing-artificial-intelligence-10-steps-to-protect-human-rights-reco/1680946e64; CoE Committee of Ministers, ‘Declaration by the Committee of Ministers on the manipulative capabilities of algorithmic processes’ (1337th meeting of the Ministers’ Deputies, Decl(13/02/2019)1, 13 February 2019) No. 9 https://search.coe.int/cm/pages/result_details.aspx?ObjectId=090000168092dd4b; OECD, ‘Recommendation of the Council on Artificial Intelligence’ (OECD/LEGAL/0449, 22 November 2019) Section 2 https://legalinstruments.oecd.org/en/instruments/OECD-LEGAL-0449; UNESCO, ‘Recommendation on the Ethics of Artificial Intelligence’ (SHS/BIO/PI/2021/1, 23 November 2021) II.7 https://unesdoc.unesco.org/ark:/48223/pf0000381137; WHO, ‘Ethics and Governance for Artificial Intelligence for Health’ (WHO, 21 June 2021) 2 et seq., 17 https://www.who.int/publications/i/item/9789240029200 (hereafter WHO, ‘Ethics and Governance for Artificial Intelligence for Health’); and the following documents issued by the WMA: ‘WMA Statement on Mobile Health’ (66th WMA General Assembly, Russia, 20 February 2017) www.wma.net/policies-post/wma-statement-on-mobile-health/ (hereafter WMA, ‘WMA Statement on Mobile Health’); ‘WMA Statement on Augmented Intelligence in Medical Care’ (70th WMA General Assembly, Georgia, 26 November 2019) www.wma.net/policies-post/wma-statement-on-augmented-intelligence-in-medical-care/ (hereafter WMA, ‘WMA Statement on Augmented Intelligence’); ‘WMA Statement on the Ethics of Telemedicine’ (58th WMA General Assembly, Denmark, amended by 69th General Assembly, Iceland, 21 September 2020) No 1 www.wma.net/policies-post/wma-statement-on-the-ethics-of-telemedicine/ (hereafter WMA, ‘WMA Statement on the Ethics of Telemedicine’); ‘Declaration of Helsinki – Ethical Principles for Medical Research Involving Human Subjects’ (18th WMA General Assembly, Finland, last amended by the 64th WMA General Assembly, Brazil, 9 July 2018) No 26 www.wma.net/policies-post/wma-declaration-of-helsinki-ethical-principles-for-medical-research-involving-human-subjects/ (hereafter Declaration of Helsinki).

12 English Oxford Living Dictionary, ‘Artificial Intelligence’ www.lexico.com/definition/artificial_intelligence. In this chapter, when elaborating on AI methods, Deep Learning and Machine Learning are the focus of consideration.

13 Acatech, ‘Machine Learning in der Medizintechnik’ (acatech, 5 May 2020) 8, 11 www.acatech.de/publikation/machine-learning-in-der-medizintechnik/.

14 Datenethikkommission, ‘Gutachten der Datenethikkommission’ (2020) 24, 28 (Federal Ministry of the Interior, Building and Community, 23 October 2019) www.bmi.bund.de/SharedDocs/downloads/DE/publikationen/themen/it-digitalpolitik/gutachten-datenethikkommission.pdf?__blob=publicationFile&v=6 (hereafter Datenethikkommission, ‘Gutachten’).

15 The ‘actorhood’ of AI is discussed mainly from the perspectives of action theory and moral philosophy, which are not addressed in this chapter. Currently, however, it is assumed that AI-based systems cannot themselves be bearers of moral responsibility, because they do not fulfill certain prerequisites assumed for this purpose, such as freedom, higher-level intentionality, and the ability to act according to reason. On the abilities required for ethical machine reasoning and the programming features that enable them, cf. LM Pereira and A Saptawijaya, Programming Machine Ethics (2016). On the question of the extent to which AI-based systems can act, cf. C Misselhorn, Grundfragen der Maschinenethik (2018) and Chapter 3 in this volume. With regard to the legal assessment related to the ‘actorhood’ of AI systems and the idea of granting algorithmic systems with a high degree of autonomous legal personality in the future (‘electronic person’), the authors agree with the position of the German Data Ethics Commission, according to which this idea should not be pursued further. Cf. Datenethikkommission, ‘Gutachten’ (Footnote n 14) Executive Summary, 31 Nr 73. For this reason, the article only talks about AI per se for the sake of simplicity; this is neither intended to imply any kind of ‘personalization’ nor to represent a position in the debate about ‘personalization’ with normative consequences.

16 A Laufs, BR Kern, and M Rehborn, ‘§ 50 Die Anamnese’ in A Laufs, BR Kern, and M Rehborn (eds), Handbuch des Arztrechts (5th ed. 2019) para 1.

17 C Katzenmeier, ‘Arztfehler und Haftpflicht’ in A Laufs, C Katzenmeier, and V Lipp (eds), Arztrecht (8th ed. 2021) para 4.

18 TJ Brinker and others, ‘Deep Learning Outperformed 136 of 157 Dermatologists in a Head-To-Head Dermoscopic Melanoma Image Classification Task’ (2019) 113 European Journal of Cancer 47.

19 A Esteva and others, ‘Dermatologist-Level Classification of Skin Cancer with Deep Neural Networks’ (2017) 542 Nature 115.

20 O Schoppe and others, ‘Deep Learning-Enabled Multi-Organ Segmentation in Whole-Body Mouse Scans’ (2020) 11 Nat Commun 5626.

21 The Federal Institute for Drugs and Medical Devices keeps a record of all digital medical applications (DiGA-Verzeichnis) https://diga.bfarm.de/de/verzeichnis.

22 S Chan and others, ‘Machine Learning in Dermatology: Current Applications, Opportunities, and Limitations’ (2020) 10 Dermatol Ther (Heidelb) 365, 375.

23 T Alhanai, M Ghassemi, and J Glass, ‘Detecting Depression with Audio/Text Sequence Modelling of Interviews’ (2018) Proc Interspeech 1716. Cf. also M Tasmin and E Stroulia, ‘Detecting Depression from Voice’ in Canadian Conference on AI: Advances in Artificial Intelligence (2019) 472.

24 WMA, ‘WMA Statement on Mobile Health’ (Footnote n 11).

25 WMA, ‘WMA Statement on Augmented Intelligence’ (Footnote n 11).

26 CoE Commissioner for Human Rights, ‘Unboxing Artificial Intelligence: 10 Steps to Protect Human Rights’ (n 11) 10 et seq.

27 WHO, ‘Ethics and Governance for Artificial Intelligence for Health’ (Footnote n 11) 84.

28 WHO, Recommendations on Digital Health (Footnote n 6).

29 WMA, ‘WMA Statement on the Ethics of Telemedicine’ (Footnote n 11).

30 Footnote Ibid, No 2.

31 Footnote Ibid, No 4.

32 WHO, Recommendations on Digital Health (Footnote n 6) 50.

33 Footnote Ibid, 53 et seq.

35 Law on the Protection of Electronic Patient Data within the Telematic Infrastructure (Gesetz zum Schutz elektronischer Patientendaten in der Telematikinfrastruktur), BGBl. 2020, 2115.

36 Social Security Statute Book V – Statutory Health Insurance (SGB V), Article 1 of the Act of 20 December 1988 (Federal Law Gazette [Bundesgesetzblatt] I page 2477, 2482), last amended by Article 1b of the Act of 23 May 2022 (Federal Law Gazette I page 760), §306(1) sentence 2.

37 § 364 et seq. SGB V.

38 Civil Code in the version promulgated on 2 January 2002 (Federal Law Gazette [Bundesgesetzblatt] I page 42, 2909; 2003 I page 738), last amended by Article 2 of the Act of 21 December 2021 (Federal Law Gazette I page 5252).

39 These advantages, which also increase health workers’ acceptance of digital health interventions, are described by the WHO: World Health Organization, Recommendations on Digital Health (Footnote n 6) 34. In addition, the WHO has recently suggested exploring whether the introduction and use of AI in healthcare exacerbates the digital divide. Ultimately, AI used in telemedicine should reduce the gap in access to healthcare and ensure equitable access to quality care, regardless of geographic and other demographic factors: WHO, ‘Ethics and Governance for Artificial Intelligence for Health’ (Footnote n 11) 74.

40 For more information cf. Charité, ‘Fontane’ https://telemedizin.charite.de/forschung/fontane/.

41 For more information cf. Charité, ‘Telemed5000’ https://telemedizin.charite.de/forschung/telemed5000/.

42 A Laufs, BR Kern, and M Rehborn, ‘§ 52 Die Diagnosestellung’ in A Laufs, BR Kern, and M Rehborn (eds), Handbuch des Arztrechts (5th ed. 2019) para 7 et seq.

43 WMA, ‘WMA Statement on Augmented Intelligence’ (Footnote n 11).

44 Ad Hoc Expert Group (AHEG) for the preparation of a draft text of a recommendation on ethics of artificial intelligence, ‘Outcome Document: First Draft of the Recommendation on the Ethics of Artificial Intelligence’ (September 2020) No 36 https://unesdoc.unesco.org/ark:/48223/pf0000373434.

45 WHO, ‘Guideline: Recommendations on Digital Health Interventions for Health System Strengthening’ (n 6) 65; WHO, ‘Ethics and Governance for Artificial Intelligence for Health’ (Footnote n 11) p 6.

46 Acatech, ‘Machine Learning in der Medizintechnik’ (n 13) 11.

47 WHO, ‘Ethics and Governance for Artificial Intelligence for Health’ (Footnote n 11) 106 et seq.

48 C Katzenmeier, ‘Aufklärungspflicht und Einwilligung’ in A Laufs, C Katzenmeier, and V Lipp (eds), Arztrecht (8th ed. 2021) para 16, 21.

49 Footnote Ibid, para 14.

50 WMA, ‘Declaration of Helsinki’ (Footnote n 11) No 26.

51 CoE Commissioner for Human Rights, ‘Unboxing Artificial Intelligence: 10 Steps to Protect Human Rights’ (n 11) 10 et seq.

52 WHO, ‘Ethics and Governance for Artificial Intelligence for Health’ (Footnote n 11) 40 et seq., with some suggestions in Box 4, 48, 82, and 90.

53 C Katzenmeier, ‘Arztfehler und Haftpflicht’ in A Laufs, C Katzenmeier, V Lipp (eds), Arztrecht (8th ed. 2021) para 4.

54 WMA, ‘Declaration of Helsinki’ (Footnote n 11) No 12, No 10.

55 WHO, ‘Ethics and Governance for Artificial Intelligence for Health’ (Footnote n 11) 77.

57 For further information see C Amadou, S Franc, PY Benhamou, S Lablanche, E Huneker, G Charpentier, A Penfornis and Diabeloop Consortium, ‘Diabeloop DBLG1 Closed-Loop System Enables Patients With Type 1 Diabetes to Significantly Improve Their Glycemic Control in Real-Life Situations Without Serious Adverse Events: 6-Month Follow-up’ (2021) 44(3) Diabetes Care 844.

58 N Koutsouleris and others, ‘Multimodal Machine Learning Workflows for Prediction of Psychosis in Patients with Clinical High-Risk Syndromes and Recent-Onset Depression’ (JAMA Psychiatry, 2 December 2020) https://jamanetwork.com/journals/jamapsychiatry/fullarticle/2773732.

59 WHO, ‘Guideline: Recommendations on Digital Health Interventions for Health System Strengthening’ (n 6) 69 et seq. Considering the use of AI to extend ‘clinical’ care beyond the formal health-care system based on monitoring: WHO, ‘Ethics and Governance for Artificial Intelligence for Health’ (Footnote n 11) 9 et seq.

62 For example, in civil law provisions in Germany according to § 630f BGB and for research studies based on the international standards of the WMA according to the Declaration of Helsinki (Footnote n 11) No 22.

63 Regulation (EU) 2016/679 of the European Parliament and of the Council of 27 April 2016 on the protection of natural persons with regard to the processing of personal data and on the free movement of such data, and repealing Directive 95/46/EC (General Data Protection Regulation), OJ L 119, 4.5.2016, p. 1–88.

64 R Konertz and R Schönhof, Das technische Phänomen „Künstliche Intelligenz” im allgemeinen Zivilrecht. Eine kritische Betrachtung im Lichte von Autonomie, Determinismus und Vorhersehbarkeit (2020) 69.

65 H Zech, ‘Künstliche Intelligenz und Haftungsfragen’ (2019) ZfPW, 118, 202.

66 B Buchner, ‘DS-GVO Art. 22’ in J Kühling and B Buchner (eds), Datenschutzgrundverordnung BDSG Kommentar (3rd ed. 2020) para 14 et seq.; P Schantz and HA Wolff, Das neue Datenschutzrecht (2017) recital 736.

67 GDPR, Article 9(2)(a), in conjunction with Article 6(1)(a) GDPR or Article 6(1)(b) GDPR (doctor–patient relationship as a contractual obligation under civil law).

68 D Kampert, ‘DSGVO Art. 9’ in G Sydow (ed), Europäische Datenschutzgrundverordnung (2nd ed. 2018) para 14.

69 For challenges see Sub-section II 3.

70 GDPR, Recital 33. WHO, ‘Ethics and Governance for Artificial Intelligence for Health’ (Footnote n 11) 84 et seq.

71 Cf. instead of many: HC Stoeklé and others, ‘Vers un consentement éclairé dynamique’ [Toward Dynamic Informed Consent] (2017) 33 Med Sci (Paris) 188; I Budin-Ljøsne and others, ‘Dynamic Consent: A Potential Solution to Some of the Challenges of Modern Biomedical Research’ (2017) 18(1) BMC Med Ethics 4; WHO, ‘Ethics and Governance for Artificial Intelligence for Health’ (Footnote n 11) 82.

72 Information obligations in the course of broad and dynamic consent: Datenschutzkonferenz, ‘Beschluss der 97. Konferenz der unabhängigen Datenschutzaufsichtsbehörden des Bundes und der Länder zu Auslegung des Begriffs “bestimmte Bereiche wissenschaftlicher Forschung” im Erwägungsgrund 33 der DS-GVO’ (3 April 2019) www.datenschutzkonferenz-online.de/media/dskb/20190405_auslegung_bestimmte_bereiche_wiss_forschung.pdf.

73 For problems with this, see Sub-section II 3.

74 PHG Foundation, ‘The GDPR and Genomic Data: The Impact of the GDPR and DPA 2018 on Genomic Healthcare and Research’ (2020) 44 et seq. www.phgfoundation.org/media/123/download/gdpr-and-genomic-data-report.pdf?v=1&inline=1.

75 Footnote Ibid, 167.

76 Cf. GDPR, Article 17(3)(d). T Herbst, ‘DS-GVO Art. 22’ in J Kühling, B Buchner (eds), Datenschutzgrundverordnung BDSG Kommentar (3rd ed. 2020) para. 81 et seq.

77 For special categories of personal data, cf. the exemptions defined in Article 9(2) GDPR.

78 Cf. GDPR, Article 20.

79 Cf. instead of many others: B Murdoch, ‘Privacy and Artificial Intelligence: Challenges for Protecting Health Information in a New Era’ (2021) 22(1) BMC Medical Ethics 122.

80 CJEU, Case C-131/12 Google Spain v Gonzalez [2014] ECLI:EU:C:2014:317 para 87 et seq.

81 OJ Gstrein, Das Recht auf Vergessenwerden als Menschenrecht (2016) 111.

82 For this in-depth analysis of the right to be forgotten, cf. F Molnár-Gábor, ‘Das Recht auf Nichtwissen. Fragen der Verrechtlichung im Kontext von Big Data in der modernen Biomedizin’ in G Duttge and Ch Lemke (eds), Das sogenannte Recht auf Nichtwissen. Normatives Fundament und anwendungspraktische Geltungskraft (2019) 83, 99 et seq.

83 JE Alvarez, International Organizations as Law-Makers (2005) 4, 6 et seq. Due to the regional character of the Council of Europe, its instruments are not further elaborated on here.

84 WHO, ‘Global Health Ethics’ https://apps.who.int/iris/handle/10665/164576.

85 UNESCO, ‘Universal Declaration on Bioethics and Human Rights, 19 October 2005, Records of the UNESCO General Conference, 33rd Session, Paris, 3–21 October 2005’ (33 C/Resolution 36) 74 et seq.

86 Constitution of the UNESCO, 4 UNTS 275, UN Reg No I-52 (hereafter UNESCO-Constitution).

87 F Molnar-Gabor, Die internationale Steuerung der Biotechnologie am Beispiel neuer genetischer Analysen (2017) 202 et seq.

88 On the advantages of international soft law compared to international treaties when it comes to the regulation of biomedicine cf. A Boyle, ‘Some Reflections on the Relationship of Treaties and Soft Law’ (1999) 48 International and Comparative Law Quarterly 901, 902 et seq., 912 et seq.; R Andorno, Principles of International Biolaw. Seeking Common Ground at the Intersection of Bioethics and Human Rights (2013) 39 et seq.; W Höfling, ‘Professionelle Standards und Gesetz’ in HH Trute and others (eds), Allgemeines Verwaltungsrecht – zur Tragfähigkeit eines Konzepts, Festschrift für Schmidt-Aßmann zum 70. Geburtstag (2008) 45, 52.

89 M Bothe, ‘Legal and Non-Legal Norms: A Meaningful Distinction in International Relations?’ (1980) 11 Netherlands Yearbook of International Law 65, 67 et seq.

90 J Klabbers, An Introduction to International Institutional Law (2nd ed. 2009) 183.

91 H Hilgenberg, ‘Soft Law im Völkerrecht’ (1998) 1 Zeitschrift für Europarechtliche Studien 81, 100 et seq.

92 M Goldmann, Internationale öffentliche Gewalt (2015) 34, 60 et seq., 187 et seq., 199 et seq.

93 I Venzke, How Interpretation Makes International Law (2012) 380.

94 TA Faunce, ‘Will International Human Rights Subsume Medical Ethics? Intersections in the UNESCO Universal Bioethics Declaration’ (2005) 31 Journal of Medical Ethics 173, 176; D Thürer, ‘Soft Law’ in R Wolfrum (ed), Max Planck Encyclopedia of Public International Law (2009) recital 2.

95 Cf. instead of many others: A Langlois, Negotiating Bioethics (2013) (hereafter Langlois, Negotiating Bioethics) 144.

96 Statutes of the International Bioethics Committee of UNESCO (IBC), Adopted by the Executive Board at its 154th Session, on 7 May 1998 (154 EX/Dec. 8).

97 F Molnár-Gábor, Die internationale Steuerung der Biotechnologie am Beispiel neuer genetischer Analysen (2017) 298 et seq.

98 Footnote Ibid, 301.

99 Statutes of the International Bioethics Committee of UNESCO (IBC) (Footnote n 96) Article 11. Cf. Rules of Procedure of the Intergovernmental Bioethics Committee (IGBC), Adopted by IGBC at its 3rd session on 23 June 2003 in Paris and amended at its 5th session on 20 July 2007 and at its 7th session on 5 September 2011 (SHS/EST/IGBC-5/07/CONF.204/7 Rev) Article 1.

100 Critically on this Langlois, Negotiating Bioethics (Footnote n 95) 56.

101 F Molnár-Gábor, Die internationale Steuerung der Biotechnologie am Beispiel neuer genetischer Analysen (2017) 299 et seq. For the critical assessment of the Inter-Governmental Meeting of Experts, cf. Langlois, Negotiating Bioethics (Footnote n 95) 56. The distribution of seats and the election take place according to the decision of the Executive Council: 155 EX/Decision 9.2, Paris, 03.12.1998. According to this, Group I (Western Europe and the North American States) has seven seats, Group II (Eastern Europe) has four, Group III (Latin America and the Caribbean States) has six, Group IV (Asia and the Pacific States) has seven, and Group V (Africa [eight] and the Arab States [four]) has a total of twelve seats.

102 W Spann, ‘Ärztliche Rechts- und Standeskunde’ in A Ponsold (ed), Lehrbuch der Gerichtlichen Medizin (1957) 4.

103 T Richards, ‘The World Medical Association: Can Hope Triumph Over Experience?’ (1994) BMJ, 308 (hereafter Richards, ‘The World Medical Association’).

104 See official homepage: WMA, ‘About Us’ www.wma.net/who-we-are/about-us/ (hereafter WMA, ‘About Us’).

105 S Vöneky, ‘Rechtsfragen der Totalsequenzierung des menschlichen Genoms in internationaler und nationaler Perspektive’ (2012) Freiburger Informationspapiere zum Völkerrecht und Öffentlichen Recht 4, Footnote note 16, https://www.jura.uni-freiburg.de/de/institute/ioeffr2/downloads/online-papers/fip_4_2012_totalsequenzierung.pdf.

107 On the decision-making process M Chang, ‘Bioethics and Human Rights: The Legitimacy of Authoritative Ethical Guidelines Governing International Clinical Trials’ in S Voeneky and others (eds), Ethics and Law: The Ethicalization of Law (2013) 177, 210 (hereafter Chang, ‘Bioethics and Human Rights’).

108 See official homepage: WMA, ‘About Us’ (Footnote n 104).

109 Chang, ‘Bioethics and Human Rights’ (Footnote n 107), 177, 209. Cf. Richards, ‘The World Medical Association’ (Footnote n 103).

110 Chang, ‘Bioethics and Human Rights’ (Footnote n 107), 177, 209 et seq. The threshold was 50,000 members a few years ago. Cf. Richards, ‘The World Medical Association’ (Footnote n 103).

111 Chang, ‘Bioethics and Human Rights’ (Footnote n 107), 177, 214.

112 Cf. Chang, ‘Bioethics and Human Rights’ (Footnote n 107), 177, 212.

113 WMA, ‘Declaration of Helsinki’ (Footnote n 11).

114 This medical ethics has been condensed into the four bioethical principles of autonomy, beneficence, non-maleficence, and justice (as set down by Beauchamp and Childress). TL Beauchamp and JF Childress, Principles of Biomedical Ethics (8th ed. 2012). For criticism of principlism cf. U Wiesing, ‘Vom Nutzen und Nachteil der Prinzipienethik für die Medizin’ in O Rauprich and F Steger (eds), Prinzipienethik in der Biomedizin. Moralphilosophie und medizinische Praxis (2005) 74, 77 et seq.

115 S Voeneky, Recht, Moral und Ethik (2010) 383.

117 UNESCO, ‘General Introduction to the Standard-Setting Instruments of UNESCO’ http://portal.unesco.org/en/ev.php-URL_ID=23772&URL_DO=DO_TOPIC&URL_SECTION=201.html (hereafter UNESCO, ‘General Introduction’).

118 Article 4(4) UNESCO-Constitution (Footnote n 86).

119 UNESCO, ‘General Introduction’ (Footnote n 117).

120 D Thürer, ‘Soft Law’ in R Wolfrum (ed), Max Planck Encyclopedia of Public International Law (2009) recital 27 (hereafter Thürer, ‘Soft Law’).

121 M Kotzur, ‘Good Faith (Bona Fide)’ in R Wolfrum (ed), Max Planck Encyclopedia of Public International Law (2009) recital 25.

122 Thürer, ‘Soft Law’ (Footnote n 120) recital 27. Cf. definition by M Goldmann, Internationale öffentliche Gewalt (2015) p. 3.

123 UNESCO, ‘General Introduction’ (Footnote n 117).

124 Compare V Lipp, ‘Ärztliches Berufsrecht’ in A Laufs, C Katzenmeier and V Lipp (eds), Arztrecht (8th ed. 2021) recital 12.

125 U Wiesing, Ethik in der Medizin (2nd ed. 2004) 75.

126 (Model) Professional Code for Physicians in Germany – MBO-Ä 1997 – The Resolutions of the 121st German Medical Assembly 2018 in Erfurt as amended by a Resolution of the Executive Board of the German Medical Association 14/12/2018 (hereafter MBO-Ä 1997).

127 E.g. § 203 StGB (German Criminal Code) which protects patient confidentiality.

128 Civil law regulates the contracts for the treatment of patients in §§ 630a ff. BGB.

129 Cf. § 630a BGB, C Katzenmeier, ‘BGB § 630a’ in BeckOK BGB (61st ed. 2022) para. 1 et seq.

130 M Quaas, ‘§ 14 Die Rechtsbeziehungen zwischen Arzt (Krankenhaus) und Patient’ in R Zuck, T Clemens, and M Quaas (eds), Medizinrecht (4th ed. 2018) recital 128.

131 For more information see Organización Médica Colegial de España, ‘Funciones del CGCOM’ www.cgcom.es/funciones.

132 For more information see American Medical Association, ‘AMA’s International Involvement’ www.ama-assn.org/about/office-international-relations/ama-s-international-involvement.

133 Bundesärztekammer, ‘(Muster-)Berufsordnung-Ärzte’ https://www.bundesaerztekammer.de/themen/recht/berufsrecht.

134 BVerfGE, 52, 131 (BVerfG BvR 878/74) para 116.

135 MBO-Ä 1997 (Footnote n 126).

136 WMA, ‘Declaration of Geneva (1947), last amended by the 68th General Assembly in Chicago, USA, October 2017’ (WMA, 9 July 2018) www.wma.net/policies-post/wma-declaration-of-geneva/.

137 ‘Far more than in other social relations of human beings, the ethical and the legal merge in the medical profession.’ E Schmidt, ‘Der Arzt im Strafrecht’ in A Ponsold (ed), Lehrbuch der gerichtlichen Medizin (2nd ed. 1957) 1, 2; BVerfGE, 52, 131 (BVerfG BvR 878/74).

138 UNESCO states, for example, that ‘Human rights law contains provisions that are analogous to the principles that flow from analysis of moral obligations implicit in doctor–patient relationships, which is the starting point, for example, of much of the Anglo-American bioethics literature, as well as the bioethics traditions in other communities.’ UNESCO IBC, ‘Report on Human Gene Therapy’ SHS-94/CONF.011/8, Paris, 24.12.1994, IV.1.

139 International Council of Nurses www.icn.ch.

140 WMA, ‘Partners, WMA Partnerships’ www.wma.net/who-we-are/alliance-and-partner/partners/.

23 “Hey Siri, How Am I Doing?” Legal Challenges for Artificial Intelligence Alter Egos in Healthcare

1 There are already analytical methods for the detection of skin cancer that can be implemented using a commercially available smartphone and that are significantly more powerful than the cognitive abilities of the average dermatologist, cf. A Esteva and others, ‘Dermatologist-Level Classification of Skin Cancer with Deep Neural Networks’ (2017) 542 Nature 115, 117 et seq.

2 See e.g. ED Pisano, ‘AI Shows Promise for Breast Cancer Screening’ (2020) 577 Nature 35, 35 et seq.

3 Many of the legal considerations I am making in this chapter are essentially based on my thoughts on data protection and medical devices law developed in my habilitation thesis, published as C Krönke, Öffentliches Digitalwirtschaftsrecht (2020) 467 et seq. (data protection law) and 500 et seq. (medical devices law).

4 See Section II.

9 For this reason, specific national legislation, such as the provisions of the 2019 Digital Supply Act (Gesetz für eine bessere Versorgung durch Digitalisierung und Innovation) (Digitale-Versorgung-Gesetz, DVG) will not be covered. For more information on this legislation cf. J Kühling and R Schildbach, ‘Die Reform der Datentransparenzvorschriften im SGB V’ (2020) 2 NZS 41, 41 et seq.

10 Founder of the Münch Foundation. See www.stiftung-muench.org/.

11 See e.g. the report on Eugen Münch’s idea: A Seith ‘Sanierung via Laptopmedizin’ Der Spiegel (12 January 2005) www.spiegel.de/wirtschaft/landklinik-sterben-sanierung-via-laptopmedizin-a-387338.html. Münch recently appointed an informal ‘Digital Alter Ego’ expert commission, of which I have been a member since early 2020.

12 These are highly significant organizational issues that are undoubtedly crucial to the success of any Alter Ego project. However, they depend on the political will and the specific legal framework of individual countries and therefore cannot be discussed in detail in this chapter.

13 Cf. for this differentiation for instance I Revolidis and A Dahi ‘The Peculiar Case of the Mushroom Picking Robot: Extra-contractual Liability in Robotics’ in M Corrales, M Fenwick, and N Forgó (eds), Robotics, AI and the Future of Law (2018), 57–59; see also the differentiation made in the AI strategy of the German Federal Government: Die Bundesregierung, ‘Strategie Künstliche Intelligenz der Bundesregierung’ (KI Strategie Deutschland, November 2018) 4, 5 https://www.bmbf.de/bmbf/shareddocs/downloads/files/nationale_ki-strategie.pdf?__blob=publicationFile&v=1.

14 In 2019, for example, the Siemens AI-based AI-Rad Companion Chest CT program was the first application of the company’s AI-Rad Companion platform to receive CE marking (see M Bludszuweit, ‘KI-basierte Software AI-Rad Companion Chest CT von Siemens Healthineers für Europa zugelassen’ (Siemens Healthineers, 26 July 2019) www.siemens-healthineers.com/de/press-room/press-releases/pr-20190726028shs.html). The program evaluates CT images of the thorax from any source, highlights abnormalities with respect to the corresponding organs (heart or lung), the carotid artery and vertebrae, and automatically generates a report for the radiologist, including any indications of possible abnormalities.

16 See Section III 2. The applicable Medical Devices Regulation will be supplemented in the foreseeable future by the EU Artificial Intelligence Act, which at least in its draft version (see COM(2021) 206 final) refers to the Medical Devices Regulation and modifies it slightly with regard to high-risk systems.

17 See the Charter of Fundamental Rights of the European Union (26 October 2012) 2012/C 326/02 (Charter of Fundamental Rights), Articles 7 and 8.

18 The characterization of data protection law as a risk-focused legal regime seems not to be controversial, even though it is rarely explicitly addressed – see as an exception for example K Ladeur, ‘Das Recht auf informationelle Selbstbestimmung: Eine juristische Fehlkonstruktion?’ (2009) 62 DÖV 45, 53 et seq.

19 Cf. with reference to the distinction of (limiting) opacity tools and (transparency-creating) transparency tools by P De Hert and S Gutwirth, ‘Regulating Profiling in a Democratic Constitutional State’ in E Claes, S Gutwirth, and A Duff (eds), Privacy and the Criminal Law (2006) 67 et seq.; N Marsch, Das europäische Datenschutzgrundrecht (2018) 96 et seq., who refers to these concepts as ‘protection goals’.

20 Regulation (EU) 2016/679 of the European Parliament and of the Council of 27 April 2016 on the protection of natural persons with regard to the processing of personal data and on the free movement of such data, and repealing Directive 95/46/EC (General Data Protection Regulation), OJ 2016 L 119/1.

21 See in particular Articles 7 and 8 of the Charter of Fundamental Rights.

22 Cf. M Martini and M Hohmann, ‘Der gläserne Patient: Dystopie oder Zukunftsrealität? Perspektiven datengetriebener Gesundheitsforschung unter der DS-GVO und dem Digitale-Versorgung-Gesetz’ (2020) 49 NJW 3573, 3574 (hereafter Martini and Hohmann, ‘Der gläserne Patient’). Due to this de facto lack of watertight anonymization possibilities, they plead for the introduction of a concept of legal anonymization de lege ferenda, which would eliminate the identifiability of a data subject through health data by legal fiction, as long as sufficient technical and organizational security measures were in place.

23 It should be noted that this (wide) interpretation of the term ‘research’ is disputed in legal scholarship. Some authors would like to interpret Art. 9 GDPR as exclusively referring to research in the public interest, see e.g. T Weichert, ‘Art 9 Verarbeitung besonderer Kategorien personenbezogener Daten’ in J Kühling and B Buchner (eds), Datenschutz-Grundverordnung Bundesdatenschutzgesetz: DS-GVO/BDSG (2nd ed. 2018) para 122. For a view similar to the one taken in this contribution cf., for instance, Martini and Hohmann, ‘Der gläserne Patient’ (Footnote n 22) 3576.

24 Article 5(1)(b).

25 Article 5(1)(c).

26 Article 5(1)(d).

27 Cf. (in a different, public context) CJEU, Joined Cases C-293/12 and C-594/1 Digital Rights Ireland Ltd v Minister for Communications and Others (8 April 2014).

28 See GDPR, Article 9(2)(a).

29 For a detailed analysis of the requirements following from GDPR, Article 6(4) see e.g. B Buchner and T Petri, ‘Art 6 Räumlicher Anwendungsbereich’ in J Kühling and B Buchner (eds), Datenschutz-Grundverordnung Bundesdatenschutzgesetz: DS-GVO/BDSG (3rd ed. 2020) paras 178 et seq.

30 Cf. A Roßnagel, ‘Datenschutz in der Forschung’ (2019) 4 ZD 157, 162.

31 It should be mentioned that the field of ‘data protection and Big Data’ has become a subject of extensive research and will, as such, not be further discussed here. See e.g. T Weichert, ‘Big Data und Datenschutz – Chancen und Risiken einer neuen Form der Datenanalyse’ (2013) 6 ZD 251; A Roßnagel, ‘Big Data – Small Privacy? Konzeptionelle Herausforderungen für das Datenschutzrecht’ (2013) 11 ZD 562, 562 et seq.; JP Ohrtmann and S Schwiering, ‘Big Data und Datenschutz – Rechtliche Herausforderungen und Lösungsansätze’ (2014) 41 NJW 2984, 2984 et seq.; T Helbling, ‘Big Data und der datenschutzrechtliche Grundsatz der Zweckbindung’ (2015) 3 K&R 145, 145 et seq.; P Richter, ‘Datenschutz zwecklos? – Das Prinzip der Zweckbindung im Ratsentwurf der DSGVO’ (2015) 39 DuD 735, 735 et seq.; C Werkmeister and E Brandt, ‘Datenschutzrechtliche Herausforderungen für Big Data’ (2016) 4 CR 233, 237 et seq.; K Ladeur, ‘“Big Data” im Gesundheitsrecht – Ende der Datensparsamkeit?’ (2016) 40 DuD 360, 360–361; N Culik and C Döpke, ‘Zweckbindungsgrundsatz gegen unkontrollierten Einsatz von Big Data Anwendungen – Analyse möglicher Auswirkungen der DS-GVO’ (2017) 5 ZD 226, 228; T Hoeren, ‘IT- und Internetrecht – kein Neuland für die NJW’ (2017) 22 NJW 1587, 1591; BP Paal and M Hennemann, ‘Wettbewerbs- und daten(schutz)rechtliche Herausforderungen’ (2017) 24 NJW 1697, 1700 et seq.; see also the contributions of G Hornung, ‘Erosion traditioneller Prinzipien des Datenschutzrechts durch Big Data’ and Y Hermstrüwer, ‘Die Regulierung der prädikativen Analytik: eine juristisch-verhaltenswissenschaftliche Skizze’ in W Hoffmann-Riem (ed), Big Data – Regulative Challenges (2018) 79, 99.

32 The relationship between Article 32 and Articles 24 et seq. GDPR is illuminated by M Martini in BP Paal and DA Pauly (eds), Datenschutz-Grundverordnung Bundesdatenschutzgesetz: DS-GVO/BDSG (2nd ed. 2018) paras 7 et seq.

33 Cf. M Martini, Blackbox Algorithmus – Grundfragen einer Regulierung Künstlicher Intelligenz (2019) 30 et seq.

34 See GDPR, Article 5(1).

35 Cf. for example T Wischmeyer, ‘Regulierung intelligenter Systeme’ (2018) 143 AöR 1, 23 et seq., who also treats quality control as an overarching regulatory concern and protection against discrimination as a special problem of ‘failure’ of intelligent systems.

36 Cf. with regard to AI-based decisions in general M Martini, Blackbox Algorithmus (Footnote n 33) 50.

37 See for the following considerations C Krönke, Öffentliches Digitalwirtschaftsrecht (2020) 500 et seq.

38 Regulation (EU) 2017/745 of the European Parliament and of the Council of 5 April 2017 on medical devices, amending Directive 2001/83/EC, Regulation (EC) No 178/2002 and Regulation (EC) No 1223/2009 and repealing Council Directives 90/385/EEC and 93/42/EEC, OJ 2017 L 117 (MDR).

41 Such sales forms are also explicitly covered by medical devices law, see MDR, Article 6.

42 For functions of the Apple Watch (so far in versions 4 and 5) there are CE markings for an ‘ECG App’, which records a 1-channel electrocardiogram (ECG) and evaluates it with regard to atrial fibrillation (AFib), as well as a function ‘Messages in case of irregular heart rhythm’, which analyses the pulse rate with regard to irregularities indicating AFib (see the description on www.apple.com/de/healthcare/apple-watch/).

43 See the references earlier at (Footnote n 14).

44 See MDR, Annex VIII 3(1): ‘The application of the classification rules depends on the intended purpose of the products’.

45 See the legal definition in Article 2(1) MDR, according to which each medical device ‘shall fulfil one or more of the specific medical purposes [described in detail in the regulation]’.

46 Cf. on this common classification, which is also the basis for the scheme of the Commission’s Guidelines on the qualification and classification of stand-alone software used in healthcare within the regulatory framework of medical devices, European Commission DG Internal Market, Industry, Entrepreneurship and SMEs, ‘Medical Devices: Guidance document’ (2016) MEDDEV 2.1/6 9 et seq. (hereafter European Commission, ‘Medical Devices’), for example R Oen, ‘Software als Medizinprodukt’ (2009) 2 MPR 55, 55 et seq.; M Klümper and E Vollebregt, ‘Die geänderten Anforderungen für die CE-Kennzeichnung und Konformitätsbewertung auf Grund der Richtlinie 2007/47/EG’ (2009) 2 MPJ 99, 100-101; S Jabri, ‘Artificial Intelligence and Healthcare: Products and Procedures’ in T Rademacher and T Wischmeyer (eds), Regulating Artificial Intelligence (2020) 307, 314 et seq.

47 CJEU, C-329/16 Snitem and Philips France (26 January 2018) paras 27 et seq. (hereafter Snitem and Philips France).

48 Cf., critically, on the renouncement of this terminology in the MDR and its practical consequences, UM Gassner, ‘Software als Medizinprodukt – zwischen Regulierung und Selbstregulierung’ (2016) 4 MPR 109, 110–111. The previous differentiation between independent and integrated software should therefore remain valid.

49 Short for Unique Device Identification.

50 See German Federal Institute for Drugs and Medical Devices, ‘Orientierungshilfe Medical Apps’ (BfArM, 1 November 2015) https://docplayer.org/63901775-Bfarm-orientierungshilfe-medical-apps.html point 3 (hereafter BfArM, ‘Orientierungshilfe Medical Apps’). Such a program was also the subject of the proceedings in CJEU, Snitem and Philips France (Footnote n 47) paras 17 et seq. After entering individual patient data, the program alerted the user to possible contraindications, interactions with other drugs and overdoses, etc.

51 From recital 19 sentence 2 of the MDR it becomes clear that software can indeed qualify as an accessory. This was previously controversial, see UM Gassner, ‘Software als Medizinprodukt – zwischen Regulierung und Selbstregulierung’ (2016) 4 MPR 109, 111.

52 Cf. for this example M Klümper and E Vollebregt, ‘Die geänderten Anforderungen für die CE-Kennzeichnung und Konformitätsbewertung auf Grund der Richtlinie 2007/47/EG’ (2009) 2 MPJ 99, 100.

53 Cf. for a general definition of ‘integrated’ medical software e.g. R Tomasini, Standalone-Software als Medizinprodukt (2015) 44.

54 Cf. for this example G Sachs, ‘Software in Systemen und Behandlungseinheiten’ in UM Gassner (ed), Software als Medizinprodukt – IT vs. Medizintechnik? (2013) 31 et seq.

55 M Klümper and E Vollebregt, ‘Die geänderten Anforderungen für die CE-Kennzeichnung und Konformitätsbewertung auf Grund der Richtlinie 2007/47/EG’ (2009) 2 MPJ 99, 100.

56 Cf. BfArM, ‘Orientierungshilfe Medical Apps’ (Footnote n 50) point 3.

57 See CJEU, Snitem and Philips France (Footnote n 47) para 33.

58 Cf. for the latter two examples again BfArM, ‘Orientierungshilfe Medical Apps’ (Footnote n 50) point 3.

59 Cf. also with numerous practical examples in European Commission, ‘Medical Devices’ (Footnote n 46) 17, 18.

60 See in principle CJEU, Snitem and Philips France (Footnote n 47) para 36.

61 Cf. with this very example Y Frost, ‘Künstliche Intelligenz in Medizinprodukten und damit verbundene medizinprodukte- und datenschutzrechtliche Herausforderungen’ (2019) 4 MPR 117, 117.

62 MDR, Article 10(1) in conjunction with Annex I Chapter I 1.

63 In addition to these general warranty and risk management requirements, there are also labeling, documentation, recording, reporting, and notification obligations that relate to the warranty and risk management requirements. For reasons of simplification, they will not be discussed further here.

64 See MDR, Article 10(9) in connection with Annex IX Chapter I. Cf. on the emergence of quality assurance systems from the 1960s onwards and on the principles of quality management in detail F Reimer, Qualitätssicherung. Grundlagen eines Dienstleistungsverwaltungsrechts (2010) 115 et seq.

65 MDR, Article 10(2) in conjunction with Annex I Chapter I 3.

66 See MDR, Annex I Chapter II 17, in particular point 17.2 MDR: ‘For products incorporating software or in the form of software, the software shall be designed and manufactured in accordance with the state of the art, taking into account the principles of software life cycle, risk management including information security, verification and validation’.

67 International Standard IEC 62304 Medical Device Software – Software Life Cycle Processes.

68 For further relevant standards, see for example the overviews in C Johner, M Hölzer-Klüpfel, and S Wittorf, Basiswissen Medizinische Software (2nd ed. 2015) 28 et seq.; G Heidenreich and G Neumann, Software for medical devices (2015) 260 et seq.

69 A deviation then requires justification, see for example the explicit requirement in MDR, Annex IX Chapter I 2.3, which specifies the test program of an audit procedure by a Notified Body. Cf. on the delicate balance of technical standards between their function of concretizing legal norms on the one hand and the compulsion to design products in conformity with the standard on the other hand, which is to be avoided because it may not be appropriate to the risks and/or innovation, H Pünder, ‘Zertifizierung und Akkreditierung – private Qualitätskontrolle unter staatlicher Gewährleistungsverantwortung’ (2006) 170 ZHR 567, 571.

70 See the formulation in MDR, Annex I Chapter II 17.2. If the harmonized standards do not (any longer) adequately reflect these requirements and a corresponding software product is assessed as compliant, the market surveillance authorities can nevertheless argue that the software product does not comply with the Regulation, as compliance with the standards pursuant to Art. 8 para. 1 MDR only gives rise to a presumption of conformity.

71 For these examples of GMLPs, see the considerations at M Diamond and others, ‘Proposed Regulatory Framework for Modifications to Artificial Intelligence/Machine Learning (AI/ML)-Based Software as a Medical Device’ (FDA, 2019) www.fda.gov/media/122535/download 9–10 (hereafter Diamond and others, ‘Proposed Regulatory Framework’).

72 These possible areas of change are already covered in the Medical Devices Regulation, namely in MDR, Annex VI Part C 6.5.2. Almost identical is the information given in Diamond and others, ‘Proposed Regulatory Framework’ (Footnote n 71) 6–7, which differentiates between changes regarding performance, inputs and intended use.

73 For such SaMD Pre-Specifications (SPS) and an Algorithm Change Protocol (ACP) see Diamond and others, ‘Proposed Regulatory Framework’ (Footnote n 71) 10 et seq.

24 ‘Neurorights’: A Human Rights–Based Approach for Governing Neurotechnologies

1 HA Simon, The Sciences of the Artificial (2001) 83.

2 WS McCulloch and W Pitts, ‘A Logical Calculus of the Ideas Immanent in Nervous Activity’ (1943) 5(4) The Bulletin of Mathematical Biophysics 115–133 https://doi.org/10.1007/BF02478259.

3 DO Hebb, The Organization of Behavior (1949).

4 JR Searle, ‘Minds, Brains, and Programs’ (1980) 3 Behavioral and Brain Sciences 417–457 https://doi.org/10.1017/S0140525X00005756.

5 The confluence of big data, artificial neural networks for deep learning, the web, microsensorics, and other transformative technologies, cf. H Hahn and A Schreiber, ‘E-Health’ in R Neugebauer (ed), Digital Transformation (2019) 311–334 https://doi.org/10.1007/978-3-662-58134-6_19.

6 P Kellmeyer, ‘Artificial Intelligence in Basic and Clinical Neuroscience: Opportunities and Ethical Challenges’ (2019) 25(4) Neuroforum 241–250 https://doi.org/10.1515/nf-2019-0018; AH Marblestone, G Wayne, and KP Kording, ‘Toward an Integration of Deep Learning and Neuroscience’ (Frontiers in Computational Neuroscience, 14 September 2016) 94 https://doi.org/10.3389/fncom.2016.00094.

7 D Kuhner and others, ‘A Service Assistant Combining Autonomous Robotics, Flexible Goal Formulation, and Deep-Learning-Based Brain–Computer Interfacing’ (2019) 116 Robotics and Autonomous Systems 98–113 https://doi.org/10.1016/j.robot.2019.02.015; F Burget and others, ‘Acting Thoughts: Towards a Mobile Robotic Service Assistant for Users with Limited Communication Skills’ (IEEE, 9 November 2017) 1–6 https://doi.org/10.1109/ECMR.2017.8098658.

8 LAW Gemein and others, ‘Machine-Learning-Based Diagnostics of EEG Pathology’ (2020) 220 NeuroImage 117021 https://doi.org/10.1016/j.neuroimage.2020.117021.

9 P Kellmeyer, ‘Big Brain Data: On the Responsible Use of Brain Data from Clinical and Consumer-Directed Neurotechnological Devices’ (2018) 14 Neuroethics 83–98 https://doi.org/10.1007/s12152-018-9371-x (hereafter Kellmeyer, ‘Big Brain Data’); M Ienca, P Haselager, and EJ Emanuel, ‘Brain Leaks and Consumer Neurotechnology’ (2018) 36 Nature Biotechnology 805–810 https://doi.org/10.1038/nbt.4240.

10 P Kellmeyer and others, ‘Neuroethics at 15: The Current and Future Environment for Neuroethics’ (2019) 10(3) AJOB Neuroscience 104–110; S Rainey and others, ‘Data as a Cross-Cutting Dimension of Ethical Importance in Direct-to-Consumer Neurotechnologies’ (2019) 10(4) AJOB Neuroscience 180–182 https://doi.org/10.1080/21507740.2019.1665134; Kellmeyer, ‘Big Brain Data’ (Footnote n 9); R Yuste and others, ‘Four Ethical Priorities for Neurotechnologies and AI’ (2017) 551(7679) Nature News 159 https://doi.org/10.1038/551159a (hereafter Yuste and others, ‘Four Ethical Priorities for Neurotechnologies and AI’); M Ienca and R Andorno, ‘Towards New Human Rights in the Age of Neuroscience and Neurotechnology’ (2017) 13 Life Sciences, Society and Policy 5 https://doi.org/10.1186/s40504-017-0050-1 (hereafter Ienca and Andorno, ‘Towards New Human Rights in the Age of Neuroscience and Neurotechnology’).

11 I Leclerc, ‘The Meaning of “Space”’ in LW Beck (ed), Kant’s Theory of Knowledge: Selected Papers from the Third International Kant Congress (1974) 87–94 https://doi.org/10.1007/978-94-010-2294-1_10. This division into a locus internus (as described here) and locus externus – the set of externally observable facts about human behavior – is reflected in the ongoing debate about the nature of human phenomenological experience, consciousness, and free will in philosophy; the intricacies and ramifications of which lie outside of the scope of this article. For recent contributions to these overlapping debates, see e.g. the excellent overview in P Goff’s Galileo’s Error (2020).

12 I deliberately refrain from qualifying this statement as to whether, and if so when, we should expect neuroscience to ever be able to give a full account of a mechanistic understanding, both for conceptual reasons and practical reasons, for example, inherent limitations of current, and likely future, measurement tools in observing brain processes at the ‘right’ levels of granularity or scale (microscale, mesoscale, and macroscale) and at the appropriate level of temporal and frequency-related sampling to relate them to any given subjective experience.

13 Consider, for example, the concept of ‘dissociation’ in psychiatry (in the context of post-traumatic stress disorder) or neurology (in epilepsy), the notion that brain processes and mental processes can become uncoupled.

14 The 4E framework emphasizes that human cognition cannot be separated from the way in which cognitive processes are embodied (in a physical body [German: ‘Leib’]), embedded (into the environment), extended (how we use tools to facilitate cognition), and enactive (cognition enacts itself in interaction with others); see R Menary, ‘Introduction to the Special Issue on 4E Cognition’ (2010) 9(4) Phenomenology and the Cognitive Sciences 459–463 https://doi.org/10.1007/s11097-010-9187-6.

15 D Chalmers, ‘Naturalistic Dualism’ in S Schneider and M Velmans (eds), The Blackwell Companion to Consciousness (2017) 363–373 https://doi.org/10.1002/9781119132363.ch26.

16 P Goff, Consciousness and Fundamental Reality (2017); P Goff, W Seager, and S Allen-Hermanson, ‘Panpsychism’ in EN Zalta (ed), The Stanford Encyclopedia of Philosophy (2020) https://plato.stanford.edu/archives/sum2020/entries/panpsychism/.

17 G Tononi and others, ‘Integrated Information Theory: From Consciousness to Its Physical Substrate’ (2016) 17(7) Nature Reviews Neuroscience 450–461 https://doi.org/10.1038/nrn.2016.44.

18 HH Mørch, ‘Is the Integrated Information Theory of Consciousness Compatible with Russellian Panpsychism?’ (2019) 84(5) Erkenntnis 1065–1085 https://doi.org/10.1007/s10670-018-9995-6.

19 TF Hoad, ‘Private’ in TF Hoad (ed), The Concise Oxford Dictionary of English Etymology (2003) www.oxfordreference.com/view/10.1093/acref/9780192830982.001.0001/acref-9780192830982-e-11928.

20 Another legacy in the military domain is the rank of private, i.e. soldiers of the lowest military rank.

21 See e.g. the usage definition from Merriam Webster, ‘Privacy’ (Merriam Webster Dictionary) www.merriam-webster.com/dictionary/privacy.

22 J Hirshleifer, ‘Privacy: Its Origin, Function, and Future’ (1980) 9(4) The Journal of Legal Studies 649–664.

23 L Floridi, The Fourth Revolution: How the Infosphere Is Reshaping Human Reality (2014).

24 AD Vanberg, ‘Informational Privacy Post GDPR: End of the Road or the Start of a Long Journey?’ (2021) 25(1) The International Journal of Human Rights 52–78 https://doi.org/10.1080/13642987.2020.1789109 (hereafter Vanberg, ‘Informational Privacy Post GDPR’); TW Kim and BR Routledge, ‘Informational Privacy, A Right to Explanation, and Interpretable AI’ in IEEE (ed), 2018 IEEE Symposium on Privacy-Aware Computing (PAC) (2018) 64–74 https://doi.org/10.1109/PAC.2018.00013; AD Moore, ‘Toward Informational Privacy Rights 2007 Editor’s Symposium’ (2007) 44(4) San Diego Law Review 809–846; L Floridi, ‘Four Challenges for a Theory of Informational Privacy’ (2006) 8(3) Ethics and Information Technology 109–119 https://doi.org/10.1007/s10676-006-9121-3 (hereafter Floridi, ‘Four Challenges for a Theory of Informational Privacy’).

25 J Pohl, ‘Transition From Data to Information’ in Collaborative Agent Design Research Center Technical Report - RESU72 (2001) 1–8.

26 Depending on the context, a very different granularity of privacy protection might be necessary. Consider, for example, the difference between collecting only one specific type of biometric data (without other contextual data) vs. collecting multimodal personal data to glean health-related information in a consumer technology context, which would require different granularity of data and information protection.

27 A Ballantyne, ‘How Should We Think about Clinical Data Ownership?’ (2020) 46(5) Journal of Medical Ethics 289–294 https://doi.org/10.1136/medethics-2018-105340; P Hummel, M Braun and P Dabrock, ‘Own Data? Ethical Reflections on Data Ownership’ (2020) Philosophy & Technology 1–28 https://doi.org/10.1007/s13347-020-00404-9; M Mirchev, I Mircheva and A Kerekovska, ‘The Academic Viewpoint on Patient Data Ownership in the Context of Big Data: Scoping Review’ (2020) 22(8) Journal of Medical Internet Research https://doi.org/10.2196/22214; N Duch-Brown, B Martens and F Mueller-Langer, ‘The Economics of Ownership, Access and Trade in Digital Data’ (SSRN, 17 February 2017) https://doi.org/10.2139/ssrn.2914144.

28 Canada Supreme Court, McInerney v MacDonald (11 June 1992) 93 Dominion Law Reports 415–31.

29 JC Wallis and CL Borgman, ‘Who Is Responsible for Data? An Exploratory Study of Data Authorship, Ownership, and Responsibility’ (2011) 48(1) Proceedings of the American Society for Information Science and Technology 1–10 https://doi.org/10.1002/meet.2011.14504801188.

30 Vanberg, ‘Informational Privacy Post GDPR’ (Footnote n 24); FT Beke, F Eggers, and PC Verhoef, ‘Consumer Informational Privacy: Current Knowledge and Research Directions’ (2018) 11(1) Foundations and Trends(R) in Marketing 1–71; HT Tavani, ‘Informational Privacy: Concepts, Theories, and Controversies’ in KH Himma and HT Tavani (eds), The Handbook of Information and Computer Ethics (2008) 131–64 https://doi.org/10.1002/9780470281819.ch6; Floridi, ‘Four Challenges for a Theory of Informational Privacy’ (Footnote n 24).

31 P Kellmeyer, ‘Ethical and Legal Implications of the Methodological Crisis in Neuroimaging’ (2017) 26(4) Cambridge Quarterly of Healthcare Ethics: CQ: The International Journal of Healthcare Ethics Committees 530–554 https://doi.org/10.1017/S096318011700007X.

32 G Meynen, ‘Neurolaw: Neuroscience, Ethics, and Law. Review Essay’ (2014) 17(4) Ethical Theory and Moral Practice 819–829 http://www.jstor.org/stable/24478606; TM Spranger, ‘Neurosciences and the Law: An Introduction’ in TM Spranger (ed), International Neurolaw (2012) 1–10 https://doi.org/10.1007/978-3-642-21541-4_1.

33 Yuste and others, ‘Four Ethical Priorities for Neurotechnologies and AI’ (Footnote n 10); Ienca and Andorno, ‘Towards New Human Rights in the Age of Neuroscience and Neurotechnology’ (Footnote n 10); Kellmeyer, ‘Big Brain Data’ (Footnote n 9).

34 A Lavazza, ‘Freedom of Thought and Mental Integrity: The Moral Requirements for Any Neural Prosthesis’ (Frontiers in Neuroscience, 19 February 2018) 12 https://doi.org/10.3389/fnins.2018.00082.

35 F Germani and others, ‘Engineering Minds? Ethical Considerations on Biotechnological Approaches to Mental Health, Well-Being, and Human Flourishing’ (Trends in Biotechnology, 3 May 2021) https://doi.org/10.1016/j.tibtech.2021.04.007; P Kellmeyer, ‘Neurophilosophical and Ethical Aspects of Virtual Reality Therapy in Neurology and Psychiatry’ (2018) 27(4) Cambridge Quarterly of Healthcare Ethics 610–627 https://doi.org/10.1017/S0963180118000129.

36 CK Kim, A Adhikari, and K Deisseroth, ‘Integration of Optogenetics with Complementary Methodologies in Systems Neuroscience’ (2017) 18(4) Nature Reviews Neuroscience 222–235 https://doi.org/10.1038/nrn.2017.15.

37 C Janiszewski and RS Wyer, ‘Content and Process Priming: A Review’ (2014) 24(1) Journal of Consumer Psychology 96–118 https://doi.org/10.1016/j.jcps.2013.05.006; DM Hausman, ‘Nudging and Other Ways of Steering Choices’ (2018) 1 Intereconomics 17–20.

38 Open Science Collaboration, ‘Estimating the Reproducibility of Psychological Science’ (2015) 349(6251) Science https://doi.org/10.1126/science.aac4716.

39 JD Creswell, ‘Mindfulness Interventions’ (2017) 68(1) Annual Review of Psychology 491–516 https://doi.org/10.1146/annurev-psych-042716-051139.

40 H Matsumoto and Y Ugawa, ‘Adverse Events of TDCS and TACS: A Review’ (2017) 2 Clinical Neurophysiology Practice 19–25 https://doi.org/10.1016/j.cnp.2016.12.003; F Fregni and A Pascual-Leone, ‘Technology Insight: Noninvasive Brain Stimulation in Neurology—Perspectives on the Therapeutic Potential of RTMS and TDCS’ (2007) 3(7) Nature Clinical Practice Neurology 383–393 https://doi.org/10.1038/ncpneuro0530.

41 AWM Evers and others, ‘Implications of Placebo and Nocebo Effects for Clinical Practice: Expert Consensus’ (2018) 87(4) Psychotherapy and Psychosomatics 204–210 https://doi.org/10.1159/000490354; WB Britton and others, ‘Defining and Measuring Meditation-Related Adverse Effects in Mindfulness-Based Programs’ (Clinical Psychological Science, 18 May 2021) https://doi.org/10.1177/2167702621996340; M Farias and others, ‘Adverse Events in Meditation Practices and Meditation-Based Therapies: A Systematic Review’ (2020) 142(5) Acta Psychiatrica Scandinavica 374–393 https://doi.org/10.1111/acps.13225; D Lambert, NH van den Berg, and A Mendrek, ‘Adverse Effects of Meditation: A Review of Observational, Experimental and Case Studies’ (Current Psychology, 24 February 2021) https://doi.org/10.1007/s12144-021-01503-2.

42 A Hoffmann, CA Christmann, and G Bleser, ‘Gamification in Stress Management Apps: A Critical App Review’ (2017) 5(2) JMIR Serious Games https://doi.org/10.2196/games.7216.

43 L Herzog, P Kellmeyer, and V Wild, ‘Digital Behavioral Technology, Vulnerability and Justice: An Integrated Approach’ (Review of Social Economy, 30 June 2021) www.tandfonline.com/doi/full/10.1080/00346764.2021.1943755?scroll=top&needAccess=true (hereafter Herzog, Kellmeyer, and Wild, ‘Digital Behavioral Technology, Vulnerability and Justice: An Integrated Approach’).

44 T Wu, The Attention Merchants: The Epic Scramble to Get Inside Our Heads (2017); AA Alhassan and others, ‘The Relationship between Addiction to Smartphone Usage and Depression Among Adults: A Cross Sectional Study’ (BMC Psychiatry, 25 May 2018) https://doi.org/10.1186/s12888-018-1745-4; DT Courtwright, Age of Addiction: How Bad Habits Became Big Business (2021); NM Petry and others, ‘An International Consensus for Assessing Internet Gaming Disorder Using the New DSM-5 Approach’ (2014) 109(9) Addiction 1399–1406 https://doi.org/10.1111/add.12457.

45 VW Sze Cheng and others, ‘Gamification in Apps and Technologies for Improving Mental Health and Well-Being: Systematic Review’ (2019) 6(6) JMIR Mental Health https://doi.org/10.2196/13717.

46 Charter of Fundamental Rights of the European Union, OJ 2007 C 303/1.

47 As I am not a legal scholar, this section provides an outside view, informed by my understanding of the neuroscientific facts and ethical discussions, of the current debate at the intersection of neurolaw and neuroethics on the relevance of fundamental rights, particularly international human rights, for protecting mental privacy and mental integrity. In the scholarly debate, this set of issues is usually referred to as ‘neurorights’ and I will therefore use this term here too.

48 S Ligthart and others, ‘Forensic Brain-Reading and Mental Privacy in European Human Rights Law: Foundations and Challenges’ (Neuroethics, 20 June 2020) https://doi.org/10.1007/s12152-020-09438-4; C Bublitz, ‘Cognitive Liberty or the International Human Right to Freedom of Thought’ in J Clausen and N Levy (eds), Handbook of Neuroethics (2015) 1309–1333 https://doi.org/10.1007/978-94-007-4707-4_166.

49 Yuste and others, ‘Four Ethical Priorities for Neurotechnologies and AI’ (Footnote n 10); Ienca and Andorno, ‘Towards New Human Rights in the Age of Neuroscience and Neurotechnology’ (Footnote n 10).

50 D Clément, ‘Human Rights or Social Justice? The Problem of Rights Inflation’ (2018) 22(2) The International Journal of Human Rights 155–169 https://doi.org/10.1080/13642987.2017.1349245. Though there are also important objections to these lines of argument: JT Theilen, ‘The Inflation of Human Rights: A Deconstruction’ (2021) Leiden Journal of International Law 1–24 https://doi.org/10.1017/S0922156521000297.

51 An anthropological good, in my usage here, refers to a key foundational dimension of human existence that, throughout history and across cultures, is connected to strong human interests and preferences. Examples would be the interest in and preference for being alive, for having shelter, freedom, food, and so forth. In this understanding, anthropological goods antecede and often are the basis for normative demands, such as ethical claims and rights claims. As a pre-theoretical notion, they are also related to the more developed notion of ‘capabilities’ [M Nussbaum, ‘Capabilities and Social Justice’ (2002) 4(2) International Studies Review 123–135 https://doi.org/10.1111/1521-9488.00258] insofar as capabilities give a philosophically comprehensive account of how dimensions of human existence relate to fundamental rights.

52 Kellmeyer, ‘Big Brain Data’ (Footnote n 9); S Goering and others, ‘Recommendations for Responsible Development and Application of Neurotechnologies’ (2021) Neuroethics https://doi.org/10.1007/s12152-021-09468-6.

53 Herzog, Kellmeyer, and Wild, ‘Digital Behavioral Technology, Vulnerability and Justice: An Integrated Approach’ (Footnote n 43); KV Kreitmair, MK Cho, and DC Magnus, ‘Consent and Engagement, Security, and Authentic Living Using Wearable and Mobile Health Technology’ (2017) 35(7) Nature Biotechnology 617–620 https://doi.org/10.1038/nbt.3887; N Minielly, V Hrincu, and J Illes, ‘A View on Incidental Findings and Adverse Events Associated with Neurowearables in the Consumer Marketplace’ in I Bárd and E Hildt (eds), Developments in Neuroethics and Bioethics, vol. 3 (2020) 267–277 https://doi.org/10.1016/bs.dnb.2020.03.010.

54 V Jaiman and V Urovi, ‘A Consent Model for Blockchain-Based Health Data Sharing Platforms’ in IEEE Access 8 (2020) 143734–143745 https://doi.org/10.1109/ACCESS.2020.3014565; A Khedr and G Gulak, ‘SecureMed: Secure Medical Computation Using GPU-Accelerated Homomorphic Encryption Scheme’ (2018) 22(2) IEEE Journal of Biomedical and Health Informatics 597–606 https://doi.org/10.1109/JBHI.2017.2657458; MU Hassan, MH Rehmani, and J Chen, ‘Differential Privacy Techniques for Cyber Physical Systems: A Survey’ (2020) 22(1) IEEE Communications Surveys Tutorials 746–789 https://doi.org/10.1109/COMST.2019.2944748.

55 C Letheby and P Gerrans, ‘Self Unbound: Ego Dissolution in Psychedelic Experience’ (2017) 1 Neuroscience of Consciousness https://doi.org/10.1093/nc/nix016.

56 FX Vollenweider and KH Preller, ‘Psychedelic Drugs: Neurobiology and Potential for Treatment of Psychiatric Disorders’ (2020) 21(11) Nature Reviews Neuroscience 611–624 https://doi.org/10.1038/s41583-020-0367-2.

57 PT Durbin, ‘Brain Research and the Social Self in a Technological Culture’ (2017) 32(2) AI & SOCIETY 253–260 https://doi.org/10.1007/s00146-015-0609-4; S Gallagher, ‘A Pattern Theory of Self’ (2013) 7 Frontiers in Human Neuroscience https://doi.org/10.3389/fnhum.2013.00443; T Fuchs, The Embodied Self: Dimensions, Coherence, and Disorders (2010); D Parfit, ‘Personal Identity’ (1971) 80(1) The Philosophical Review 3–27.

58 More generally, the complexity of the legal landscape and political processes creates the well-known ‘pacing problem’ in governing and regulating technological innovations, also referred to as the ‘Collingridge Dilemma’, cf. for example: A Genus and A Stirling, ‘Collingridge and the Dilemma of Control: Towards Responsible and Accountable Innovation’ (2018) 47(1) Research Policy 61–69 https://doi.org/10.1016/j.respol.2017.09.012.

59 Exemplified by the Ethics Policy of the Society for Neuroscience, the largest professional body representing neuroscience researchers: SfN, ‘Professional Conduct’ (SfN) https://www.sfn.org/about/professional-conduct.

60 Consider for example: Partnership on AI www.partnershiponai.org/.

61 L Dayton, ‘Call for Human Rights Protections on Emerging Brain-Computer Interface Technologies’ (Nature Index, 16 March 2021) https://www.natureindex.com/news-blog/human-rights-protections-artificial-intelligence-neurorights-brain-computer-interface.

62 OECD Legal Documents, ‘Recommendation of the Council on Responsible Innovation in Neurotechnology’ https://legalinstruments.oecd.org/en/instruments/OECD-LEGAL-0457.

63 Article 18, UDHR.

25 AI-Supported Brain–Computer Interfaces and the Emergence of ‘Cyberbilities’

1 See Section II.

7 See Section V.

8 W Rammert, ‘Where the Action Is: Distributed Agency between Humans, Machines, and Programs’ in U Seifert, JH Kim, and A Moore (eds) Paradoxes of Interactivity (2008) (hereafter Rammert, ‘Distributed Agency’).

9 E.g., A Sen, Commodities and Capabilities (1985) and A Sen, Development as Freedom (2001); as an introduction also cf. A Sen, ‘Development as Capability Expansion’ (1989) 19 Journal of Development Planning 41–58.

10 E.g., M Nussbaum, Women and Human Development: The Capabilities Approach (2001) (hereafter Nussbaum, ‘Capabilities Approach’); as an introduction cf. M Nussbaum, Creating Capabilities: The Human Development Approach (2013) (hereafter Nussbaum, ‘Creating Capabilities’).

11 Nussbaum, ‘Creating Capabilities’ (Footnote n 10) 18.

12 C Gore, ‘Irreducibly Social Goods and the Informational Bias of Amartya Sen’s Capability Approach’ (1997) 9(2) Journal of International Development 235–250.

13 SM Okin, ‘Poverty, Well-being, and Gender: What Counts, Who’s Heard?’ (2003) 31(3) Philosophy & Public Affairs 280–316.

14 I Robeyns and M Fibieger Byskov, ‘The Capability Approach’ (2020) The Stanford Encyclopedia of Philosophy Winter 2020 Edition https://plato.stanford.edu/archives/win2020/entries/capability-approach.

15 Nussbaum, ‘Creating Capabilities’ (Footnote n 10).

19 Footnote Ibid, 33–34.

20 Cf. Section V.

21 Cf. Rammert, ‘Distributed Agency’ (Footnote n 8), 77–86.

24 Cf. M Bratman, Intention, Plans, and Practical Reason (1987).

25 Cf. T O’Connor and C Sandis (eds), A Companion to the Philosophy of Action (2010).

26 E.g., A Mele, Springs of Action: Understanding Intentional Behavior (1992); M Bratman, Faces of Intention: Selected Essays on Intention and Agency (1999).

27 For an account detailing the effects of intentions not only on other mental states but also neurophysiological states underlying the execution of movements, cf. E Pacherie, ‘The Phenomenology of Action: A Conceptual Framework’ (2008) 107 Cognition (hereafter Pacherie, ‘Action’).

28 The same general rationale applies to many other use cases of neurotechnologies that alter, modulate, or monitor brain activity to, for example, enable the use of digital keyboards or cursors, neurofeedback systems, or brain stimulation devices such as deep-brain-stimulators.

29 W Glannon, ‘Neuromodulation, Agency and Autonomy’ (2014) 27 Brain Topography 46 (hereafter Glannon, ‘Neuromodulation’).

31 Footnote Ibid, 51. Note that experiencing control over one’s behavior may be only one aspect of the sense of agency (cf. Sub-section II 2(c)).

34 For an overview of the principles of brain–computer interface operation see JR Wolpaw and EW Wolpaw (eds), Brain-Computer Interfaces: Principles and Practice (2012) (hereafter Wolpaw and Wolpaw, ‘Brain-Computer Interfaces’) or B Graimann, B Allison, and G Pfurtscheller (eds), Brain-Computer-Interfaces: Revolutionizing Human-Computer-Interaction (2010).

35 For an exemplary case see JL Collinger and others, ‘High-Performance Neuroprosthetic Control by an Individual with Tetraplegia’ (2013) 381 Lancet 557–564.

36 Cf. Wolpaw and Wolpaw, ‘Brain-Computer Interfaces’ (Footnote n 34) 7: ‘BCI operation depends on the interaction of two adaptive controllers [brain and BCI]’.

37 Wolpaw and Wolpaw, ‘Brain-Computer Interfaces’ (Footnote n 34) 6.

38 Cf. Pacherie, ‘Action’ (Footnote n 27), who integrates empirical studies in her theory. For phenomenological aspects see S Gallagher, ‘Multiple Aspects in the Sense of Agency’ 31(1) New Ideas in Psychology.

39 Pacherie, ‘Action’ (Footnote n 27) 209–213. Also cf. J Shepherd, ‘The Contours of Control’ (2014) 170 Philosophical Studies.

40 Cf. W Rammert and I Schulz-Schaeffer, ‘Technik und Handeln. Wenn soziales Handeln sich auf menschliches Verhalten und technische Abläufe verteilt’ in W Rammert and I Schulz-Schaeffer (eds), Können Maschinen handeln? 11–64; and I Schulz-Schaeffer and W Rammert, ‘Technik, Handeln und Praxis. Das Konzept gradualisierten Handelns revisited’ in C Schubert and I Schulz-Schaeffer (eds), Berliner Schlüssel zur Techniksoziologie 41–76. For further aspects also see I Schulz-Schaeffer, ‘Technik und Handeln. Eine handlungstheoretische Analyse’ in C Schubert and I Schulz-Schaeffer (eds), Berliner Schlüssel zur Techniksoziologie (hereafter Schulz-Schaeffer, ‘Technik und Handeln’) and Rammert, ‘Distributed Agency’ (Footnote n 8).

41 Schulz-Schaeffer, ‘Technik und Handeln’ (Footnote n 40) 4–5. For an English version with slightly different terminology and line of argument cf. Rammert, ‘Distributed Agency’ (Footnote n 8) 74–77.

42 Schulz-Schaeffer, ‘Technik und Handeln’ (Footnote n 40) 8, 18–19.

43 In the gradualized concept of agency, intra-activity describes interactions among artificial (e.g., machinic and software) agents.

44 Rammert, ‘Distributed Agency’ (Footnote n 8) 82. Note that the gradualized concept of agency defines interactivity as the specific case when human and nonhuman agencies intersect (Footnote ibid, 71).

45 The traditional dualist or asymmetrical perspective on human–machine interaction asserts a dichotomy between ‘human action’ and ‘machine operation’, matching the former with the realm of autonomy and morality and the latter with heteronomy and causality (cf. instrumental theories of technology and the paradigm of tool use). The gradualized concept of agency directly opposes this perspective, at least in the case of complex technology.

46 Rammert, ‘Distributed Agency’ (Footnote n 8) 78–81.

47 Footnote Ibid, 78–80.

48 Cf. Nussbaum, ‘Capabilities Approach’ (Footnote n 10) 78–80.

49 As a first application, Neuralink wants to develop brain–computer interfaces for patients with spinal cord injury, allowing them to control computers and mobile devices. Neuralink’s vision includes constructing an automated robotic neurosurgery system that implants a fully integrated brain–computer interface with over 1000 channels for monitoring and stimulating neuronal activity in multiple brain regions. Neuralink ultimately wants to make this technology available for commercial use (cf. https://neuralink.com).

52 See Section V.

53 Cf. W Wang and others, ‘An Electrocorticographic Brain Interface in an Individual with Tetraplegia’ (2013) 8(2) PLoS ONE; supplemental material shows the patient controlling an external robotic arm with a brain–computer interface and intentionally touching the hand of his girlfriend for the first time in years: UPMC, ‘Paralyzed Man Moves Robotic Arm with His Thoughts’ (YouTube, 7 October 2011) www.youtube.com/watch?v=yff20TlHv34&ab_channel=UPMC.

54 Funding acknowledgement: The work leading to this publication was supported by FUTUREBODY, funded by ERA-NET NEURON JTC2017.
