
22 - Medical AI

Key Elements at the International Level

from Part VII - Responsible AI Healthcare and Neurotechnology Governance

Published online by Cambridge University Press:  28 October 2022

Silja Voeneky
Affiliation:
Albert-Ludwigs-Universität Freiburg, Germany
Philipp Kellmeyer
Affiliation:
Medical Center, Albert-Ludwigs-Universität Freiburg, Germany
Oliver Mueller
Affiliation:
Albert-Ludwigs-Universität Freiburg, Germany
Wolfram Burgard
Affiliation:
Technische Universität Nürnberg

Summary

In this chapter, Fruzsina Molnár-Gábor and Johanne Giesecke consider specific aspects of how the application of AI-based systems in medical contexts may be guided under international standards. They sketch the relevant international frameworks for the governance of medical AI. Among the frameworks that exist, the World Medical Association’s activity appears particularly promising as a guide for standardisation processes. The organisation has already unified the application of medical expertise to a certain extent worldwide, and its guidance is anchored in the rules of various legal systems. It might provide the basis for a certain level of conformity of acceptance and implementation of new guidelines within national rules and regulations, such as those on new technology applications within the AI field. In order to develop a draft declaration, the authors then sketch out the potential applications of AI and its effects on the doctor–patient relationship in terms of information, consent, diagnosis, treatment, aftercare, and education. Finally, they spell out an assessment of how further activities of the WMA in this field might affect national rules, using the example of Germany.

Type
Chapter
Information
The Cambridge Handbook of Responsible Artificial Intelligence
Interdisciplinary Perspectives, pp. 379–396
Publisher: Cambridge University Press
Print publication year: 2022
Creative Commons
This content is Open Access and distributed under the terms of the Creative Commons Attribution licence CC-BY-NC-ND 4.0 https://creativecommons.org/cclicenses/

I. Introduction

It is impossible to imagine biomedicine today without Artificial Intelligence (AI). On the one hand, its application is grounded in its integration into scientific research. With AI methods moving into cancer biology, for example, it is now possible to better understand how drugs or gene variants might affect the spread of tumours in the body.Footnote 1 In genomics, AI has helped to decipher genetic instructions and, in doing so, to reveal rules of gene regulation.Footnote 2 A major driving force for the application of AI methods, and particularly of deep learning, in biomedical research has been the explosive growth of life-sciences data, based prominently on gene-sequencing technologies and paired with the rapid generation of complex imaging data, producing tera- and petabytes of information. To better understand the contribution of genetic variation and alteration to human health, pooling large datasets and providing access to them are key for identifying connections between genetic variants and pathological phenotypes. This is not only true for rare diseases or molecularly characterized cancer entities, but also plays a central role in the study of the genetic influence on common diseases. The sheer growth and combination of data sets for analysis have created an emerging need to mine them faster than purely manual approaches are able to.Footnote 3

On the other hand, based on this knowledge from biomedical research, the use of AI is already widespread at various levels in healthcare. These applications can help in the prevention of infectious diseases, for example by making it easier to identify whether a patient exhibiting potential early COVID-19 symptoms has the virus even before they have returned a positive test.Footnote 4 It can also help to understand and classify diseases at the morphological and molecular level, such as breast cancer,Footnote 5 and can foster the effective treatment of diseases such as in the case of a stroke.Footnote 6 AI methods are also increasingly involved in the evaluation of medical interventions, such as in the assessment of surgical performance.Footnote 7 Additionally, physicians increasingly face comparison with AI-based systems in terms of successful application of their expertise.Footnote 8

With life-sciences research increasingly becoming part of medical treatment through the rapid translation of its findings into healthcare and through technology transfer, issues around the application of AI-based methods and products are becoming pertinent in medical care. AI applications, already ubiquitous, will only continue to multiply, permanently altering the healthcare system and in particular the individual doctor–patient relationship. Precisely because medical treatment has a direct impact on the life and physical integrity as well as the right of self-determination of patients involved, standards must be developed for the use of AI in healthcare. These guidelines are needed at the international level in order to ease the inevitable cross-border use of AI-based systems while boosting their beneficial impact on patients’ healthcare. This would not only promote patient welfare and general confidenceFootnote 9 in the benefits of medical AI, but would also help, for example, with the international marketing and uniform certification of AI-based medical devices,Footnote 10 thereby promoting innovation and facilitating trade.

A look at current statements, recommendations, and declarations by international organizations such as the United Nations Educational, Scientific and Cultural Organization (UNESCO), the World Health Organization (WHO), the Organisation for Economic Co-operation and Development (OECD), and the Council of Europe (CoE), as well as by non-governmental organizations such as the World Medical Association (WMA), shows that the importance of dealing with AI in as internationally uniform a manner as possible is already well recognized.Footnote 11 However, as will be shown in the following sections, international standardization for potential concrete AI applications in the various stages of medical treatment is not yet sufficient in terms of content. The situation is further complicated by the fact that the aforementioned instruments have varying degrees of binding force and legal effect. Following the identification of those gaps requiring regulation or guidance at the international level, the aim is to critically examine the international organizations and non-governmental organizations that could be considered for the job of closing them. In particular, when considering the spillover effect of the WMA’s guidelines and statements on national medical professional law, it will be necessary to justify why the WMA is particularly suitable for creating regulations governing the scope of application of AI in the doctor–patient relationship.

II. Application Areas of AI in Medicine Addressed by International Guidelines So Far

As sketched in the introduction, AI can be used to draw insights from large amounts of data at various stages of medical treatment. AI can generally be defined as ‘the theory and development of computer systems capable to perform tasks normally requiring human intelligence, such as visual perception, speech recognition, decision-making […]’.Footnote 12 Besides the different types of AI systems as regards their autonomy and learning type,Footnote 13 a further distinction can be drawn in the context of decision-making in medical treatment as to whether AI is used as a decision aid or as a decision-maker.Footnote 14 While in the former case the physician retains decision-making or interpretative authority over the findings of the AI, in the latter case this does not normally apply, or applies only to a very limited extent. In any case, this distinction must be viewed critically insofar as, even where AI acts as a decision-maker, the actors involved in its development and application, however small their individual contribution, each make individual decisions. Altogether, it is questionable whether decision-making can be assumed to be solely the result of AI’s self-learning properties.Footnote 15 Given that AI can, at least potentially, be used in every stage of medical treatment, from anamnesis to aftercare and documentation, and that medical standards must be upheld and the patient kept informed at every stage, the gaps to be filled by an international guideline must be defined on the basis of a holistic view of medical treatment.

1. Anamnesis and Diagnostic Findings

The doctor–patient relationship usually begins with the patient contacting the doctor due to physical complaints, which the doctor tries to understand by means of anamnesis and diagnosis. Anamnesis includes the generation of potentially medically relevant information,Footnote 16 for example about previous illnesses, allergies, or regularly taken medications. The findings are collected by physical, chemical, or instrumental examinations or by functional testing of respiration, blood pressure, or circulation.Footnote 17

An important AI application area is oncology. Based on clinical or dermatopathological images, AI can be used to diagnose and to classify skin cancerFootnote 18 or make a more accurate interpretation of mammograms for early detection of breast cancer.Footnote 19 Another study from November 2020 shows that AI could also someday be used to automatically segment the major organs and skeleton in less than a second, which helps in localizing cancer metastases.Footnote 20

Among other things, wearables (miniaturized computers worn close to the body) and digital health applicationsFootnote 21 are also being developed for the field of oncology and are already being used by patients independently, for example, to determine their own findings. For example, melanoma screening can be performed in advance of a skin cancer diagnosis using mobile applications such as store-and-forward teledermatology and automated smartphone apps.Footnote 22 Another use case is the monitoring of patients with depression. The Computer Science and Artificial Intelligence Laboratory (CSAIL) at the Massachusetts Institute of Technology (MIT) is seeking to complement existing apps for monitoring the writing and reading behaviors of depressed patients with an app that provides AI-based speech analysis. The model recognizes speech style and word sequences and finds patterns indicative of depression. Using machine learning, it learns to detect depression in new patients.Footnote 23

As regards health apps and wearables, the WMA distinguishes between ‘technologies used for lifestyle purposes and those which require the medical expertise of physicians and meet the definition of medical devices’ and calls for the use of the latter to be appropriately regulated.Footnote 24 In its October 2019 statement, the WMA emphasizes that protecting the confidentiality and control of patient data is a core principle of the doctor–patient relationship.Footnote 25 In line with this, the CoE recommends that data protection principles be respected in the processing of health data, especially where health insurers are involved, and that patients should be able to decide whether their data will be disclosed.Footnote 26 The WHO draws attention to the complexity of the governance of data obtained from wearables, which may not have been collected initially for healthcare or research purposes.Footnote 27

These statements provide a basic direction, but do not differentiate more closely between wearable technologies and digital health applications with regard to the type of use, the scope of health data collected and any transfer of this data to the physician. It is unclear how physicians should handle generated health data, such as whether they must conduct an independent review of the data or whether a plausibility check is sufficient to use the data when taking down a patient’s medical history and making findings. The degree of transparency for the patient regarding the workings of the AI application as well as any data processing is also not specified. The implementation of a minimum standard or certification procedure could be considered here.

Telematics infrastructure can play a particularly important role at the beginning of the doctor-patient relationship. In its 2019 recommendations, the WHO distinguished between two categories of telemedicine. First, it recommends client-to-provider telemedicine, provided this does not replace personal contact between doctor and patient, but merely supplements it.Footnote 28 Here it agrees with the WMA’s comprehensive 2018 statement on telemedicine, which made clear that telemedicine should only be used when timely face-to-face contact is not possible.Footnote 29 This also means that the physician treating by means of telemedicine should be the physician otherwise treating in person, if possible. This would require reliable identification mechanisms.Footnote 30 Furthermore, education, particularly about the operation of telemedicine, becomes highly important in this context so the patient can give informed consent.Footnote 31 The monitoring of patient safety, data protection, traceability, and accountability must all also be ensured.Footnote 32 After the first category of client-to-provider telemedicine has been established, the WHO also recommends provider-to-provider telemedicine as a second category, so that healthcare professionals, including physicians, can support each other in diagnoses, for example, by sharing images and video footage.Footnote 33 Thus, many factors must be clarified at the national level when creating a legal framework including licensing, cross-border telemedicine treatment, and use cases for remote consultations and their documentation.Footnote 34

In Germany, for example, the first regulations for the implementation of a telematics infrastructure have been in force since October 2020,Footnote 35 implementing the recommendations of the WHO and the WMA among others. There, the telematics infrastructure is to be an interoperable and compatible information, communication, and security infrastructure that serves to network service providers, payers, insured persons, and other players in the healthcare system and in rehabilitation and care.Footnote 36 This infrastructure is intended to enable telemedical procedures, for instance, for video consultation in SHI-accredited medical care.Footnote 37 For this purpose, § 365 SGB V explicitly refers to the high requirements of the physician’s duty of disclosure for informed consent pursuant to § 630e BGB (German Civil Code)Footnote 38, which correspond to those of personal treatment.

Telemedicine should be increasingly used to close gaps in care and thus counteract disadvantages, especially in areas with a poorer infrastructure in terms of local medical care.Footnote 39 To this end, it could be helpful to identify the illnesses for which telemedical treatment is sufficient, or to determine whether such treatment can already be carried out at the beginning of the doctor–patient relationship. Examples thereof are the large-scale projects in the German region of Brandenburg, where patients’ vital signs were transmitted telemedically as part of a study to provide care for heart patients.Footnote 40 In the follow-up study, AI is now also being used to prepare the vital data received at the telemedicine center for the medical staff.Footnote 41

2. Diagnosis

The findings must then be evaluated professionally, incorporating ideas about the causes and origins of the disease, and assigned to a clinical picture.Footnote 42

Accordingly, AI transparency and explicability become especially important in the area of diagnosis. In its October 2019 statement, the WMA pointed out that physicians need to understand AI methods and systems so that they can make medical recommendations based on them, or refrain from doing so if individual patient data differs from the training data used.Footnote 43 It can be concluded, just as UNESCO’s Ad Hoc Expert Group (AHEG) directly stated in its September 2020 draft, that AI can be used as a decision support tool, but should not be used as a decision-maker replacing human decision and responsibility.Footnote 44 The WHO also recommends the use of AI as a decision support tool only when its use falls within the scope of the physicians’ current field of work, so that the physicians provide only the services for which they have been trained.Footnote 45

There is no clarification as to the extent to which transparency is required of the physician as regards AI algorithms and decision logic. A distinction should be made here between open-loop and closed-loop systems.Footnote 46 An open-loop system, in which the output has no influence on the control effect of the system, is generally easier to understand and explain, allowing stricter requirements to be placed on the control of AI decisions and of treatments based on them. Closed-loop systems, by contrast, are more difficult to deal with: there, the output depends on the input because the system has one or more feedback loops between its output and input. In addition, there is the psychological danger that the physician, knowing the nature of the system and its performance, may consciously or unconsciously exercise less rigorous control over the AI decision. It is, therefore, necessary to differentiate according to both the type of system and the influence of the AI as a decision aid in order to identify the necessary intensity of control, from simple plausibility checks to more intensive review obligations on the physician. It is also clear that there is a need to explain which training data and patient data were processed and influenced the specific diagnosis, and why other diagnoses were excluded.Footnote 47 This is particularly relevant in the area of personalized and stratified diagnostics. In this context, the previously rejected possibility of AI as a decision-maker, and with it the physician’s ultimate decision-making authority, could be re-explored and enabled under specific, narrowly defined conditions depending on the type of application and the type and stage of the disease, which could reduce the burden on healthcare infrastructure.
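The structural difference between the two system types can be pictured with a deliberately simplified sketch (hypothetical code, not drawn from any cited system; the "flag abnormal reading" task and all names are illustrative). In the open-loop variant every output depends only on the current input and can be audited in isolation; in the closed-loop variant each output feeds back into the system's internal state, so later recommendations depend on earlier ones:

```python
# Hypothetical sketch contrasting open-loop and closed-loop decision aids
# for a simple "flag abnormal reading" task.

def open_loop(readings, threshold=0.5):
    # open loop: each output depends only on the current input,
    # so every single recommendation can be audited in isolation
    return ["flag" if r > threshold else "ok" for r in readings]

def closed_loop(readings, threshold=0.5, step=0.05):
    # closed loop: each output feeds back and shifts the internal
    # threshold, so later recommendations depend on earlier ones
    out = []
    for r in readings:
        decision = "flag" if r > threshold else "ok"
        out.append(decision)
        # feedback: a flagged reading makes the system more sensitive
        threshold -= step if decision == "flag" else -step
    return out

readings = [0.6, 0.48, 0.4]
print(open_loop(readings))    # ['flag', 'ok', 'ok']
print(closed_loop(readings))  # ['flag', 'flag', 'ok'] -> the middle reading is
                              # flagged only because of the earlier feedback
```

The same middle input is judged differently by the two systems: in the closed-loop case, explaining that single decision requires reconstructing the system's entire history, which illustrates why the stricter review obligations discussed above are harder to operationalize for closed-loop systems.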

3. Information, Education, and Consent

Before treatment in accordance with the diagnosis can be started, the patient must be provided with treatment information to ensure that the patient’s behavior is in line with the treatment and with economic information on the assumption of costs by the health insurance company.Footnote 48 In addition, information about the diagnosis, risks, and course of treatment as well as real alternatives to treatment is a prerequisite for effective patient consent.Footnote 49

The WMA’s Declaration of Helsinki states that information and consent should be obtained by a person qualified to give treatment.Footnote 50 The CoE’s May 2019 paper also requires that the user or patient be informed when AI is used to interact with them in the context of treatment.Footnote 51 It is questionable whether a general duty of disclosure can be derived from this for every case in which AI is involved in patient care, even if only to a very small extent. The WHO’s recent guidelines emphasize the increasing infeasibility of true informed consent particularly for the purpose of securing privacy.Footnote 52 In any case, there is currently a lack of guidance regarding the scope of the duty to disclose the functioning of the specific AI.

It would also be conceivable to use AI to provide information itself, for instance, through a type of chatbot system, if it had the training level and the knowledge of a corresponding specialist physician and queries to the treating physician remained possible. In any case, if this is rejected with regard to the physician’s ultimate decision-making authority, obtaining consent with the help of an AI application after information has been provided by a physician could be considered for time-efficiency reasons.

4. Treatment and Aftercare

Treatment is selected based on a diagnosis, the weighing of various measures and risks, the purpose of the treatment and the prospects of success according to its medical indication. After treatment is complete, monitoring, follow-up examinations, and any necessary rehabilitation take place.Footnote 53

According to the Declaration of Helsinki, the reservation of treatment to physicians and compliance with medical standards both apply, particularly in the therapeutic treatment of the patient.Footnote 54 No specific regulation has been formulated to govern the conditions under which AI used by physicians in treatment fulfils medical standards, and it is not clear whether it is necessary for AI to meet those standards at all or whether even higher requirements should be placed on it.Footnote 55 In addition, the limitations on a physician’s right to refuse the use of AI for treatment are unclear. It is possible that the weight of the physician’s ultimate decision-making authority could be graded to correspond to the measure and the risks of the treatment, especially in the context of personalized and stratified medicine, so that, depending on the degree of this grading, treatment by AI could be made possible.

AI allows the remote monitoring of health status via telemedicine, wearables, and health applications, for example, by monitoring sleep rhythms, movement profiles, and dietary patterns, as well as reminders to take medication. This is of great advantage especially in areas with poorer healthcare structures.Footnote 56 For example, a hybrid closed-loop system for follow-up care has already been developed for monitoring diabetes patients that uses AI to automate and personalize diabetes management. The self-learning insulin delivery system autonomously monitors the user’s insulin level and delivers an appropriate amount of insulin when needed.Footnote 57 Furthermore, a December 2020 study shows that AI can also be used in follow-up and preventive care for young patients who have suffered from depression or have high-risk syndromes to predict the transition to psychosis in a personalized way.Footnote 58 Meanwhile, follow-up also includes monitoring or digital tracking using an electronic patient file or other type of electronic health record so that, for example, timely follow-up examinations can be recommended. This falls under the digital tracking of clients’ health status and services, which the WHO recommends in combination with decision support and targeted client communication (if the existing healthcare system can support implementation and the area of application falls within the area of competence of the responsible physician and the protection of patient data is ensured).Footnote 59

However, there is as yet no regulatory framework for the independent monitoring and initiation of AI measures included in such applications. Apart from the need for regulation of wearables and health applications,Footnote 60 there is also a need for regulation of the transmission of patient data to AI, which must be solved in a way that is compliant with data protection rules.Footnote 61

5. Documentation and Issuing of Certificates

The course of medical treatment is subject to mandatory documentation.Footnote 62 There is no clarification as to what must be documented and the extent of documentation required in relation to the use of AI in medical treatment. A documentation obligation could, for example, extend to the training status of AI, any training data used, the nature of its application, and its influence on the success of the treatment.

Both economically and in terms of saving time, it could make sense to employ AI at the documentation stage in addition to its use during treatment, as well as for issuing health certificates and attestations, leaving more time for the physician to interact with the patient.

6. Data Protection

The use of AI in the medical field must also be balanced against the data protection law applicable in the respective jurisdiction. In the EU this would be the General Data Protection Regulation (GDPR)Footnote 63 and the corresponding member state implementation thereof.

The autonomyFootnote 64 and interconnectednessFootnote 65 of AI alone pose data protection law challenges, and these are only exacerbated when AI is used in the context of medical treatment due to the sensitivity of personal health-related data. For example, as Article 22(1) of the GDPR protects data subjects from adverse decisions based solely on automated processing, at least the final decision must remain in human hands.Footnote 66

The processing of sensitive personal data such as health data is lawful if the data subject has given his or her express consent.Footnote 67 Effective consent is defined as words or actions given voluntarily and with knowledge of the specific factual situation.Footnote 68 A person must, therefore, know what is to happen to their data. In order to consent to treatment involving the use of AI, the patient would have to be informed accordingly.Footnote 69 However, it is difficult to determine how to inform the patient about the processing of the data if the data processing procedure changes autonomously due to the self-learning property of the AI. Broad consentFootnote 70 on the part of the patient is challenging, as they would be consenting to unforeseeable developments and would consequently have no knowledge of the specific factual situation at the time of consent, effectively waiving the exercise of part of their right to self-determination. The GDPR operationalizes the fundamental right to the protection of personal data by defining subjective rights of data subjects, but it is questionable to what extent these rights would enable the patient to intercept and control data processing. The role of the patient, on the other hand, would be strengthened by means of dynamic information and consent,Footnote 71 as the patient could give his or her consent step by step over the course of treatment using AI. The challenge here would lie primarily on the technical side, as an appropriate organization and communication structure would have to be created to inform the patient about further, new data processing by the AI.Footnote 72 The patient would have to be provided with extensive information not only about the processed data but also about the resulting metadata if the latter reveals personally identifiable information, not least in order to revoke their consent, if necessary, in a differentiated way,Footnote 73 and to arrange for the deletion of their data.
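The dynamic-consent idea can be pictured as a simple record-keeping structure (a hypothetical sketch; the class and purpose names are illustrative and not taken from any existing system): each new processing purpose introduced by the self-learning system starts out unanswered, processing for that purpose remains blocked until the patient grants it, and any grant can later be revoked in a differentiated way.

```python
# Hypothetical sketch of dynamic consent: every new processing purpose
# must be requested, starts out denied, and can be granted or revoked
# individually over the course of treatment.
class DynamicConsent:
    def __init__(self):
        self.granted = {}  # purpose -> consent decision (True/False)

    def request(self, purpose):
        # a newly derived purpose starts unanswered; processing stays blocked
        self.granted.setdefault(purpose, False)

    def grant(self, purpose):
        self.granted[purpose] = True

    def revoke(self, purpose):
        # differentiated revocation: only this one purpose is withdrawn
        self.granted[purpose] = False

    def allowed(self, purpose):
        return self.granted.get(purpose, False)

consent = DynamicConsent()
consent.request("diagnosis support")
consent.grant("diagnosis support")

# the self-learning system later derives a new purpose; it must ask again
consent.request("secondary research use")
print(consent.allowed("diagnosis support"))      # True
print(consent.allowed("secondary research use")) # False until explicitly granted
```

The sketch also shows where the technical challenge noted above arises: some component must reliably detect that the AI has derived a new purpose and route the corresponding request to the patient.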

Correspondingly, Articles 13 and 14 of the GDPR provide for information obligations, and Article 17 of the GDPR for a right to deletion. A particular problem here is that the patient data fed in becomes the basis for the independent development of the AI and can no longer be deleted. Technical procedures for anonymizing the data could in principle help here, although this would be futile in a highly contextualized environment.Footnote 74 The use of different pseudonymization types (for instance, the addition of noise) to lower the chance of re-identifiability might also be worth considering. This might, however, render the data less usable.Footnote 75 In any case, the balancing of the conflicting legal positions could lead to a restriction of deletion rights.Footnote 76 This in turn raises the question of the extent to which consent, which may also be dynamic, could be used as a basis of legitimacy for the corresponding processing of the data, even after the appropriate information about the limitations has been provided. In order to avoid a revocation of consent leading to the exclusion of certain data, other legal bases for processing are often proposed.Footnote 77 This often fails to take into account that erasure rights and the right to be forgotten may lead to a severe restriction of processing regardless. Additionally, compliance with other rights, such as the right to data portability,Footnote 78 might be hampered or limited by the self-learning capabilities of AI, with the enforcement of such rights leading to the availability of a given data set, or at least of its particular patterns, across different applications, obstructing the provision of privacy through the data subject’s control over personal data.
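The trade-off between noise-based pseudonymization and usability can be made concrete with a minimal sketch (hypothetical code under simplifying assumptions: a single numeric attribute, an attacker who re-identifies by exact value matching, and Gaussian noise as the perturbation):

```python
# Hypothetical sketch of the trade-off: adding random noise to a numeric
# health attribute lowers the chance of re-identification by exact value
# matching, but also lowers the analytical usability of the data.
import random

def add_noise(values, scale, seed=0):
    # pseudonymization by perturbation; scale 0 means no protection at all
    rng = random.Random(seed)
    return [v + rng.gauss(0, scale) for v in values]

def exact_match_rate(original, published):
    # share of records an attacker could link back by exact value matching
    matches = sum(1 for o, p in zip(original, published)
                  if round(p, 1) == round(o, 1))
    return matches / len(original)

heart_rates = [62.0, 75.0, 75.0, 88.0, 91.0]
for scale in (0.0, 5.0):
    noisy = add_noise(heart_rates, scale)
    mean_error = sum(abs(n - o) for n, o in zip(noisy, heart_rates)) / len(heart_rates)
    print(f"scale={scale}: match rate={exact_match_rate(heart_rates, noisy):.2f}, "
          f"mean error={mean_error:.2f}")
```

With a scale of 0.0 every record still matches exactly (no protection, no distortion); as the scale grows the match rate typically drops while the mean error grows, which is precisely the loss of usability noted above.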

Because of the AI methods involved in processing patients’ sensitive data and the regularly high contextualization of that data, the likelihood of anonymized data, or data thought to be anonymized, becoming re-identifiable is also higher. Based on AI methods of pattern recognition, particular combinations of data fed into a self-learning AI system might be re-identified if the AI system trained with that data is later, in the course of its application, confronted with the same pattern. In this way, even if data or data sets were originally anonymized before being fed into an AI system, privacy issues may emerge due to the high contextuality of AI applications and their self-learning characteristics.Footnote 79 As a consequence, privacy issues will be relevant not only when data is moved between different data protection regimes, but also when data is analysed. However, the fact of re-identifiability might remain hidden for a considerable time.

Once re-identifiability is discovered, the processing of the affected personal data falls within the scope of application of the GDPR. Although at first glance this implies higher protection, unique characteristics of AI applications pose challenges to safeguarding the rights of data subjects. A prominent example is the right to be forgotten. As related to informational self-determination, the right to be forgotten is intended to prevent the representation and permanent presence of information in order to guarantee the possibility of free development of the personality. With the right to be forgotten, the digital, unlimited remembrance and retrieval of information is confronted with a claim to deletion in the form of non-traceability.Footnote 80 The concept of forgetting does not necessarily include a third party but does imply the disappearance of information as such.Footnote 81 Relating to data fed into AI applications, the connection between one’s own state of ignorance and that of others, as well as their forgetting, including AI’s ability to forget, remains decisive. Even if the person is initially able to ward off knowledge, it is still conceivable that others might experience or use this knowledge (relying on the increased re-identifiability of the data) and then in some form, even if derivatively, connect the data back to the individual. In this respect, forgetting by third parties is also relevant as an upstream protection for one’s own forgetting. Furthermore, the right to be forgotten becomes an indispensable condition for many further rights of the person concerned. Forgetting by others and by the person concerned becomes necessary where information processing detaches itself from the person concerned, becomes independent, and is then fed back into their inherently internal decision-making processes, undermining the realization of (negative) informational self-determination.Footnote 82

7. Interim Conclusion

The variety of already-existing uses of AI in the context of medical treatment, from initial contact to follow-up and documentation, shows the increasingly urgent need for uniform international standards, not least from a medical ethics perspective. Above all, international organizations such as the WHO and non-governmental organizations such as the WMA have set an initial direction with their statements and recommendations regarding the digitization of healthcare. However, it is striking that, on the one hand, the more recent, differentiated recommendations for the application of AI in medical treatment are not directed at physicians in particular, and that, on the other hand, such recommendations regularly focus on individual subareas and on governance in healthcare, without a comprehensive examination of possible applications in the physician–patient relationship. Medical professionals, especially physicians, are thus exposed to differing individual and general recommendations in addition to the technical challenges already posed by AI. This could lead to uncertainties and divergent approaches among physicians and could ultimately have a chilling effect on innovation. Guidelines from a competent international organization or professional association that cover the use of AI in all stages of medical treatment, especially from the physician’s perspective, would therefore be desirable.

III. International Guidance for the Field of AI Application during Medical Treatment

1. International Organizations and Their Soft-Law Guidance

Both the WHO and UNESCO are specialized agencies of the United Nations traditionally responsible for the governance of public health.Footnote 83 The WHO has regularly engaged in fieldwork as an aid to research ethics committees but has recently moved increasingly towards developing guidance in the area of public health and emerging technologies.Footnote 84 UNESCO derives its responsibility for addressing biomedical issues from the preamble to its statutes and, at the latest since the 2005 Bioethics Declaration,Footnote 85 has indicated that it intends to assume the role of international coordinator in the governance of biomedical issues.Footnote 86 Here, UNESCO relies on an institutionalization of its ethical mandate in the form of the International Bioethics Committee.Footnote 87 Currently, both organizations’ key activity in this area is standard-setting: because the development of science and technology has become increasingly global, there is a need for global principles in various areas, which member states can apply as a reference framework for establishing specific regulatory measures, in order to accompany progress, provide the necessary overview, and ensure equal access to the benefits of scientific development.

Such global principles are developed by both organizations, notably in the form of international soft law.Footnote 88 According to prevailing opinion, this term covers rules of conduct of an abstract, general nature that have been enacted by subjects of international law but which cannot be assigned to any formal source of law and are not directly binding.Footnote 89 However, soft law instruments cannot be reduced to mere political recommendations but can unfold de facto ‘extra-legal binding effect’, despite their lack of direct legal binding force.Footnote 90 International soft law can also be used as an indicator of legal convictions for the interpretation of traditional sources of international law such as treaties.Footnote 91 Furthermore, it can provide evidence of the emergence of customary law and lead to obligations of good faith.Footnote 92 Soft law can also serve the further development of international law: It can often be a practical aid to consensus-building and can also provide a basis for the subsequent development of legally binding norms.Footnote 93 Such instruments can also have an effect on national legal systems if, for example, they are introduced into national legal frameworks through references in court decisions.Footnote 94

Criticism of UNESCO’s soft law documents is mainly directed at participation and deliberation in its decision-making.Footnote 95 Article 3(2) of the Statutes of the International Bioethics Committee of UNESCO (IBC Statutes)Footnote 96 assigns the nomination of eminent experts to the member states.Footnote 97 Although the IBC’s reports generally show a particular sensitivity to the normative challenges of emerging health technologies, the statute allows the involvement of external experts in the drafting processes – an option that was not widely used by the IBC in the course of preparing the main UNESCO declaration in the area of bioethics.Footnote 98 The IBC’s reports are regularly revised and finalized by the Inter-Governmental Bioethics Committee (IGBC), which represents the member states’ governments.Footnote 99 This is justified by the fact that the addressees and primary actors in the promotion and implementation of the declarations are the member states.Footnote 100 However, only 36 member states are represented on the committee at any one time, which is just one-fifth of all UNESCO member states. Moreover, the available seats do not correspond proportionally to the number of member states in each geographic region: while approximately every fourth member state from Western Europe and the North American states is represented, only approximately every fifth member state from the remaining regions is.Footnote 101

2. The World Medical Association

Because of their great responsibility for patients’ lives, bodily integrity, and right to self-determination, physicians are held to the highest ethical standards in their professional practice.Footnote 102 In order to establish such an approach worldwide, the WMA was founded in 1947, following the Nuremberg trials, as a reaction to the atrocities committed by German physicians in the Third Reich.Footnote 103 Today, as a federation of 115 national medical associations, it promotes ‘the highest possible standards of medical ethics’ and ‘provides ethical guidance to physicians through its Declarations, Resolutions and Statements’.Footnote 104 Unlike the international organizations described earlier, it is not a subject of international law but a non-governmental organization that acts autonomously on a private law basis. As it is not founded on a treaty under international law, any treaties it concludes with states would not be subject to international treaty law either.Footnote 105 The WMA is, therefore, to be treated as a subject of private law.

Such subjects of private law are well able to focus on specific topics to provide guidance and are, therefore, in a good position to address the challenges of biomedical issues. However, the Declaration of Helsinki and other declarations of the WMA have no legally binding character as resolutions of an international alliance of national associations under private law and can only be regarded as a codification of professional law, not as international soft law.Footnote 106 Yet, as will also be shown using the example of Germany, they are well integrated into national professional laws.

One criticism of the WMA’s decision-making legitimacy is that its internal deliberation is not very transparent and takes place primarily within the Council and the relevant committee(s), whose members are designated by the Council from among its own members.Footnote 107 This means that some national medical associations barely participate in the deliberation. Currently, for example, only nine of the 27 Council members are from the Asian continent and one of the 27 from the African continent,Footnote 108 which is disproportionate to their populations. Council bills are debated and discussed in the General Assembly but, given the lack of time and the number of bills to be discussed, the Assembly does not have as much influence on the content as the Council and committees.Footnote 109 Each national medical association may send one voting delegate to the General Assembly. In addition, it may send one additional voting member for every ten thousand members for whom all membership dues have been paid.Footnote 110 This makes the influence of a national medical association dependent, among other things, on its financial situation. An additional concern is that national medical associations do not necessarily represent all physicians, because membership is not mandatory in most countries.Footnote 111 Moreover, other professional groups affected by the decisions of the WMA are not automatically heard.Footnote 112 As a consequence of the WMA’s genesis in response to human experimentation by physicians in the Third Reich and the organization’s basis in the original Declaration of Helsinki,Footnote 113 the guidelines of the WMA are based primarily on American- or European-influenced medical ethics, although the membership of the WMA is more diverse.Footnote 114

3. Effect of International Measures in National Law

a. Soft Law

Declarations of UNESCO as international soft lawFootnote 115 are adopted by the General Conference.Footnote 116 They cannot be made binding on the member states and are not subject to ratification. They set forth universal principles to which member states ‘wish to attribute the greatest possible authority and to afford the broadest possible support’.Footnote 117 Additionally, UNESCO’s Constitution does not include declarations among the proposals which may be submitted to the General Conference for adoption,Footnote 118 although the General Conference can, in practice, adopt a document submitted to it in the form of a declaration.Footnote 119 Besides their contribution to shaping and developing binding norms and aiding the interpretation of international law, soft law norms may also have immediate legal effects in the field of good faith, even if this does not change the non-legal nature of soft law.Footnote 120 This effect has particular relevance in the field of medicine and bioethics. The principle of good faith requires relevant actors not to contradict their own conduct.Footnote 121 Accordingly, in the area of soft law, it legally protects expectations produced by these norms insofar as this is justified by the conduct of the parties concerned.Footnote 122 UNESCO itself states that declarations may be considered to engender, on the part of the body adopting them, a strong expectation that member states will abide by them. Consequently, insofar as this expectation is gradually justified by state practice, a declaration may by custom become recognized as laying down rules binding upon states.Footnote 123

b. Incorporation of WMA Measures into Professional Law

At the national level, professional law has an outstanding importance for physicians. In Germany, for example, the definition of individual professional duties is the responsibility of the respective state medical association, which issues professional regulations in the form of statutes. The autonomy of the statutes is granted to the state medical associations by virtue of state law and is an expression of the functional self-administration of the medical associations. In addition to defining professional duties, the state medical associations are also responsible for monitoring physicians’ compliance with these duties.Footnote 124 Due to the compulsory membership of physicians in the state medical associations, the professional law or respective professional code of conduct is obligatory for each individual physician.Footnote 125 The state medical associations are guided in terms of content by the Model Code of Professional Conduct for Physicians (MBO-Ä),Footnote 126 which is set out by the German Medical Association (Bundesärztekammer) as the association of state medical associations (and thus the German member of the WMA). If a declaration or statement is adopted at the international level by the WMA, the German Medical Association will incorporate the contents into the MBO-Ä, not least if it was involved in the deliberation. In addition to the statutes issued by the state medical associations, regulations on the professional conduct of physicians are found partly in federal laws such as the Criminal Code,Footnote 127 or the Civil Code,Footnote 128 and partly in state laws such as hospital laws. Regardless of which regulations are applicable in a specific case, the physician must always carry out the treatment of a patient in accordance with medical standards.Footnote 129

The medical standard to be applied in a specific case must be interpreted according to the circumstances of the individual case, taking into account what has objectively emerged as medical practice in scientific debate and practical experience and is recognized in professional circles as the path to therapeutic success, as well as what may be expected subjectively from the respective physician on average.Footnote 130 Any scientific debate about the application of AI in medical treatment on the level of the WMA would take place in professional circles and could thereby influence the applicable medical standard on a national level. Overall, the WMA’s guidelines would have a spillover effect in national professional law, whether in the area of professional regulations or in the scope of application of other federal or state laws. In this way, the contents of the guidance defined by the WMA could ultimately become binding for the individual physician licensed in Germany.

The situation is similar in Spain. The Spanish Medical Colleges Organization is a member of the WMA as the national medical association of Spain and ‘regulates the Spanish medical profession, ensures proper standards and promotes an ethical practice’.Footnote 131 Furthermore, the WMA is the main instrument for the participation of national medical associations in international issues. For example, the American Medical Association, as a member of the WMA, makes proposals for international guidelines and agendas and lobbies at the national level to achieve the goals of physicians in the health field.Footnote 132

IV. Conclusion: Necessity of Regulation by the World Medical Association

In order to close the gaps in the international guidance on the application of AI in medical care, active guidance by the WMA is recommended. Although it is not a subject of international law, meaning its guidance has no legally binding effect, it is the only organization that has a strong indirect influence on national medical professional law through its members, as shown above. Incorporating the contents of guidance adopted by the WMA in this way is faster and less complex than achieving legal effects through international soft law documents, particularly as the integration of WMA guidelines into national professional laws reaches, in only a few implementation steps, the physicians who apply emerging technologies such as AI.

Furthermore, national professional laws and national professional regulations form not only the legal but also the ethical basis of the medical profession.Footnote 133 Consequently, professional law cannot be seen independently of professional ethics; instead, ethics constantly affect the legal doctor–patient relationship.Footnote 134 For example, the preamble to the German Model Code of Professional Conduct of the German Medical AssociationFootnote 135 states, among other things, that the purpose of the code of professional conduct is to preserve trust in the doctor–patient relationship, to ensure the quality of medical practice, to prevent conduct unbecoming a doctor, and to preserve the freedom of the medical profession. Furthermore, § 2(1) sentence 1 MBO-Ä requires that physicians practice their profession according to their conscience, the prescriptions of medical ethics, and humanity. In addition, § 3(1) MBO-Ä also prohibits the practice of a secondary activity that is not compatible with the ethical principles of the medical profession. Preceding the regulations and the preamble of the model professional code of conduct is the medical vow set out in the WMA’s Declaration of Geneva,Footnote 136 which is a modernized form of the Hippocratic Oath, itself over 2,000 years old. Altogether, this shows that ethics of professional conduct are not isolated from the law; they have a constant, universal effect on the legal relationship between the physician and the patient. Since the law largely assumes as a legal duty what professional ethics require from the physician,Footnote 137 the inclusion of medical ethics principles in professional law seems more direct in its effect than the inclusion of bioethical principles in international soft law.Footnote 138

From this example and the overall impact of the Declaration of Helsinki, it is clear that the WMA has the potential to work toward a standard that is widely recognized internationally. The orientation of the WMA towards European or American medical ethics must, however, be kept in mind when issuing guidelines. In particular, the ethical concerns of other members should be heard and included in the internal deliberation. Furthermore, the associations of other medical professions, such as the International Council of Nurses,Footnote 139 with whom partnerships already exist in most cases,Footnote 140 should be consulted, not least because their own professional field is strongly influenced by the use of AI in the treatment of patients, but also to aid the dissemination of medical ethics and standards throughout the health sector. Expanding participation in deliberation increases the legitimacy of the WMA’s guidelines and thus the spillover effect into the national professional law of physicians and other professions beyond. A comparison with other international organizations, such as UNESCO, also shows that the WMA, precisely because it is composed of physicians and because of its partnerships with other professional organizations, is particularly well suited from a professional point of view to grasp the problems of the use of AI in medical treatment and to develop and establish regulations for dealing with AI in the physician–patient relationship as well as in the entire health sector.

Footnotes

* The authors acknowledge funding by the Volkswagen Foundation, grant No. 95827. The state of the science is reflected in this chapter until the end of March 2021. The sources have been updated until mid-September 2021.

1 E Landhuis, ‘Deep Learning Takes on Tumours’ (2020) 580 Nature 550.

2 Ž Avsec and others, ‘Base-Resolution Models of Transcription-Factor Binding Reveal Soft Motif Syntax’ (2021) 53 Nat Genet 354.

3 E Landhuis, ‘Deep Learning Takes on Tumours’ (2020) 580 Nature 550.

4 S Porter, ‘AI Database Used to Improve Treatment of UK COVID-19 Patients’ (Healthcare IT News, 20 January 2021) www.healthcareitnews.com/news/emea/ai-database-used-improve-treatment-uk-covid-19-patients; concerning the usefulness of AI applications for pandemic response, see: M van der Schaar and others, ‘How Artificial Intelligence and Machine Learning Can Help Healthcare Systems Respond to COVID-19’ (2021) 110 Mach Learn 1.

5 A Binder and others, ‘Morphological and Molecular Breast Cancer Profiling through Explainable Machine Learning’ (Nat Mach Intell, 8 March 2021) www.nature.com/articles/s42256-021-00303-4.

6 Medieninformation, ‘Hirnschlag mit künstlicher Intelligenz wirksamer behandeln dank Verbundlernen’ (Universität Bern, 9 March 2021). www.caim.unibe.ch/unibe/portal/fak_medizin/dept_zentren/inst_caim/content/e998130/e998135/e1054959/e1054962/210309_Medienmitteilung_InselGruppe_UniBE_ASAP_eng.pdf; WHO, WHO Guideline: Recommendations on Digital Health Interventions for Health System Strengthening (WHO/RHR/19.8, 2019) (hereafter WHO, Recommendations on Digital Health).

7 JL Lavanchy and others, ‘Automation of Surgical Skill Assessment Using a Three-Stage Machine Learning Algorithm’ (2021) 11 Sci Rep 5197.

8 M Nagendran and others, ‘Artificial Intelligence versus Clinicians: Systematic Review of Design, Reporting Standards, and Claims of Deep Learning Studies’ (2020) BMJ 368:m689.

9 See also European Commission, ‘High-Level Expert Group on Artificial Intelligence: Ethics Guidelines for Trustworthy AI’ (European Commission, 8 April 2019) https://digital-strategy.ec.europa.eu/en/library/ethics-guidelines-trustworthy-ai. The guidelines of this group have been subject to criticism, cf. M Veale, ‘A Critical Take on the Policy Recommendations of the EU High-Level Expert Group on Artificial Intelligence’ (2020) 11 European Journal of Risk Regulation 1, E1 doi:10.1017/err.2019.65.

10 Cf., for example, FDA, ‘Digital Health Software Precertification (Pre-Cert) Program’ (FDA, 14 September 2020) www.fda.gov/medical-devices/digital-health-center-excellence/digital-health-software-precertification-pre-cert-program.

11 CoE Commissioner for Human Rights, ‘Unboxing Artificial Intelligence: 10 Steps to Protect Human Rights’ (Council of Europe, May 2019) 10 et seq. https://rm.coe.int/unboxing-artificial-intelligence-10-steps-to-protect-human-rights-reco/1680946e64; CoE Committee of Ministers, ‘Declaration by the Committee of Ministers on the manipulative capabilities of algorithmic processes’ (1337th meeting of the Ministers’ Deputies, Decl(13/02/2019)1, 13 February 2019) No. 9 https://search.coe.int/cm/pages/result_details.aspx?ObjectId=090000168092dd4b; OECD, ‘Recommendation of the Council on Artificial Intelligence’ (OECD/LEGAL/0449, 22 November 2019) Section 2 https://legalinstruments.oecd.org/en/instruments/OECD-LEGAL-0449; UNESCO, ‘Recommendation on the Ethics of Artificial Intelligence’ (SHS/BIO/PI/2021/1, 23 November 2021) II.7 https://unesdoc.unesco.org/ark:/48223/pf0000381137; WHO, ‘Ethics and Governance for Artificial Intelligence for Health’ (WHO, 21 June 2021) 2 et seq., 17 https://www.who.int/publications/i/item/9789240029200 (hereafter WHO, ‘Ethics and Governance for Artificial Intelligence for Health’); and the following documents issued by the WMA: ‘WMA Statement on Mobile Health’ (66th WMA General Assembly, Russia, 20 February 2017) www.wma.net/policies-post/wma-statement-on-mobile-health/ (hereafter WMA, ‘WMA Statement on Mobile Health’); ‘WMA Statement on Augmented Intelligence in Medical Care’ (70th WMA General Assembly, Georgia, 26 November 2019) www.wma.net/policies-post/wma-statement-on-augmented-intelligence-in-medical-care/ (hereafter WMA, ‘WMA Statement on Augmented Intelligence’); ‘WMA Statement on the Ethics of Telemedicine’ (58th WMA General Assembly, Denmark, amended by 69th General Assembly, Iceland, 21 September 2020) No 1 www.wma.net/policies-post/wma-statement-on-the-ethics-of-telemedicine/ (hereafter WMA, ‘WMA Statement on the Ethics of Telemedicine’); ‘Declaration of Helsinki – Ethical Principles for Medical Research Involving Human Subjects’ 
(18th WMA General Assembly, Finland, last amended by the 64th WMA General Assembly, Brazil, 9 July 2018) No 26 www.wma.net/policies-post/wma-declaration-of-helsinki-ethical-principles-for-medical-research-involving-human-subjects/ (hereafter Declaration of Helsinki).

12 English Oxford Living Dictionary, ‘Artificial Intelligence’ www.lexico.com/definition/artificial_intelligence. In this chapter, when elaborating on AI methods, Deep Learning and Machine Learning are the focus of consideration.

13 Acatech, ‘Machine Learning in der Medizintechnik’ (acatech, 5 May 2020) 8, 11 www.acatech.de/publikation/machine-learning-in-der-medizintechnik/.

14 Datenethikkommission, ‘Gutachten der Datenethikkommission’ (2020) 24, 28 (Federal Ministry of the Interior, Building and Community, 23 October 2019) www.bmi.bund.de/SharedDocs/downloads/DE/publikationen/themen/it-digitalpolitik/gutachten-datenethikkommission.pdf?__blob=publicationFile&v=6 (hereafter Datenethikkommission, ‘Gutachten’).

15 The ‘actorhood’ of AI is discussed mainly from the perspectives of action theory and moral philosophy, which are not addressed in this chapter. Currently, however, it is assumed that AI-based systems cannot themselves be bearers of moral responsibility, because they do not fulfill certain prerequisites assumed for this purpose, such as freedom, higher-level intentionality, and the ability to act according to reason. On the abilities required for ethical machine reasoning and the programming features that enable them, cf. LM Pereira and A Saptawijaya, Programming Machine Ethics (2016). On the question of the extent to which AI-based systems can act, cf. C Misselhorn, Grundfragen der Maschinenethik (2018) and Chapter 3 in this volume. With regard to the legal assessment related to the ‘actorhood’ of AI systems and the idea of granting algorithmic systems with a high degree of autonomy legal personality in the future (‘electronic person’), the authors agree with the position of the German Data Ethics Commission, according to which this idea should not be pursued further. Cf. Datenethikkommission, ‘Gutachten’ (Footnote n 14) Executive Summary, 31 Nr 73. For this reason, this chapter only talks about AI per se for the sake of simplicity; this is neither intended to imply any kind of ‘personalization’ nor to represent a position in the debate about ‘personalization’ with normative consequences.

16 A Laufs, BR Kern, and M Rehborn, ‘§ 50 Die Anamnese’ in A Laufs, BR Kern, and M Rehborn (eds), Handbuch des Arztrechts (5th ed. 2019) para 1.

17 C Katzenmeier, ‘Arztfehler und Haftpflicht’ in A Laufs, C Katzenmeier, and V Lipp (eds), Arztrecht (8th ed. 2021) para 4.

18 TJ Brinker and others, ‘Deep Learning Outperformed 136 of 157 Dermatologists in a Head-To-Head Dermoscopic Melanoma Image Classification Task’ (2019) 113 European Journal of Cancer 47.

19 A Esteva and others, ‘Dermatologist-Level Classification of Skin Cancer with Deep Neural Networks’ (2017) 542 Nature 115.

20 O Schoppe and others, ‘Deep Learning-Enabled Multi-Organ Segmentation in Whole-Body Mouse Scans’ (2020) 11 Nat Commun 5626.

21 The Federal Institute for Drugs and Medical Devices keeps a record of all digital medical applications (DiGA-Verzeichnis) https://diga.bfarm.de/de/verzeichnis.

22 S Chan and others, ‘Machine Learning in Dermatology: Current Applications, Opportunities, and Limitations’ (2020) 10 Dermatol Ther (Heidelb) 365, 375.

23 T Alhanai, M Ghassemi, and J Glass, ‘Detecting Depression with Audio/Text Sequence Modelling of Interviews’ (2018) Proc Interspeech 1716. Cf. also M Tasmin and E Stroulia, ‘Detecting Depression from Voice’ in Canadian Conference on AI: Advances in Artificial Intelligence (2019) 472.

24 WMA, ‘WMA Statement on Mobile Health’ (Footnote n 11).

25 WMA, ‘WMA Statement on Augmented Intelligence’ (Footnote n 11).

26 CoE Commissioner for Human Rights, ‘Unboxing Artificial Intelligence: 10 Steps to Protect Human Rights’ (n 11) 10 et seq.

27 WHO, ‘Ethics and Governance for Artificial Intelligence for Health’ (Footnote n 11) 84.

28 WHO, Recommendations on Digital Health (Footnote n 6).

29 WMA, ‘WMA Statement on the Ethics of Telemedicine’ (Footnote n 11).

30 Footnote Ibid, No 2.

31 Footnote Ibid, No 4.

32 WHO, Recommendations on Digital Health (Footnote n 6) 50.

33 Footnote Ibid, 53 et seq.

35 Law on the Protection of Electronic Patient Data within the Telematic Infrastructure (Gesetz zum Schutz elektronischer Patientendaten in der Telematikinfrastruktur), BGBl. 2020, 2115.

36 Social Security Statute Book V – Statutory Health Insurance (SGB V), Article 1 of the Act of 20 December 1988 (Federal Law Gazette [Bundesgesetzblatt] I page 2477, 2482), last amended by Artikel 1b of the Act of 23 Mai 2022 (Federal Law Gazette I page 760), §306(1) sentence 2.

37 § 364 et seq. SGB V.

38 Civil Code in the version promulgated on 2 January 2002 (Federal Law Gazette [Bundesgesetzblatt] I page 42, 2909; 2003 I page 738), last amended by Article 2 of the Act of 21 December 2021 (Federal Law Gazette I page 5252).

39 These advantages, which also increase health workers’ acceptance of digital health interventions, are described by the WHO: WHO, Recommendations on Digital Health (Footnote n 6) 34. In addition, the WHO has recently suggested exploring whether the introduction and use of AI in healthcare exacerbates the digital divide. Ultimately, AI using telemedicine should reduce the gap in access to healthcare and ensure equitable access to quality care, regardless of geographic and other demographic factors: WHO, ‘Ethics and Governance for Artificial Intelligence for Health’ (Footnote n 11) 74.

40 For more information cf. Charité, ‘Fontane’ https://telemedizin.charite.de/forschung/fontane/.

41 For more information cf. Charité, ‘Telemed5000’ https://telemedizin.charite.de/forschung/telemed5000/.

42 A Laufs, BR Kern, and M Rehborn, ‘§ 52 Die Diagnosestellung’ in A Laufs, BR Kern, and M Rehborn (eds), Handbuch des Arztrechts (5th ed. 2019) para 7 et seq.

43 WMA, ‘WMA Statement on Augmented Intelligence’ (Footnote n 11).

44 Ad Hoc Expert Group (AHEG) for the preparation of a draft text of a recommendation on ethics of artificial intelligence, ‘Outcome Document: First Draft of the Recommendation on the Ethics of Artificial Intelligence’ (September 2020) No 36 https://unesdoc.unesco.org/ark:/48223/pf0000373434.

45 WHO, ‘Guideline: Recommendations on Digital Health Interventions for Health System Strengthening’ (n 6) 65; WHO, ‘Ethics and Governance for Artificial Intelligence for Health’ (Footnote n 11) p 6.

46 Acatech, ‘Machine Learning in der Medizintechnik’ (n 13) 11.

47 WHO, ‘Ethics and Governance for Artificial Intelligence for Health’ (Footnote n 11) 106 et seq.

48 C Katzenmeier, ‘Aufklärungspflicht und Einwilligung’ in A Laufs, C Katzenmeier, and V Lipp (eds), Arztrecht (8th ed. 2021) para 16, 21.

49 Footnote Ibid, para 14.

50 WMA, ‘Declaration of Helsinki’ (Footnote n 11) No 26.

51 CoE Commissioner for Human Rights, ‘Unboxing Artificial Intelligence: 10 Steps to Protect Human Rights’ (n 11) 10 et seq.

52 WHO, ‘Ethics and Governance for Artificial Intelligence for Health’ (Footnote n 11) 40 et seq., with some suggestions in Box 4, 48, 82, and 90.

53 C Katzenmeier, ‘Arztfehler und Haftpflicht’ in A Laufs, C Katzenmeier, V Lipp (eds), Arztrecht (8th ed. 2021) para 4.

54 WMA, ‘Declaration of Helsinki’ (Footnote n 11) No 12, No 10.

55 WHO, ‘Ethics and Governance for Artificial Intelligence for Health’ (Footnote n 11) 77.

57 For further information see C Amadou and others (Diabeloop Consortium), ‘Diabeloop DBLG1 Closed-Loop System Enables Patients With Type 1 Diabetes to Significantly Improve Their Glycemic Control in Real-Life Situations Without Serious Adverse Events: 6-Month Follow-up’ (2021) 44 Diabetes Care 3, 844.

58 N Koutsouleris and others, ‘Multimodal Machine Learning Workflows for Prediction of Psychosis in Patients with Clinical High-Risk Syndromes and Recent-Onset Depression’ (JAMA Psychiatry, 2 December 2020) https://jamanetwork.com/journals/jamapsychiatry/fullarticle/2773732.

59 WHO, ‘Guideline: Recommendations on Digital Health Interventions for Health System Strengthening’ (n 6) 69 et seq. Considering the use of AI to extend ‘clinical’ care beyond the formal health-care system based on monitoring: WHO, ‘Ethics and Governance for Artificial Intelligence for Health’ (Footnote n 11) 9 et seq.

62 For example, in civil law provisions in Germany according to § 630f BGB and for research studies based on the international standards of the WMA according to the Declaration of Helsinki (Footnote n 11) No 22.

63 Regulation (EU) 2016/679 of the European Parliament and of the Council of 27 April 2016 on the protection of natural persons with regard to the processing of personal data and on the free movement of such data, and repealing Directive 95/46/EC (General Data Protection Regulation), OJ L 119, 4.5.2016, p. 1–88.

64 R Konertz and R Schönhof, Das technische Phänomen „Künstliche Intelligenz” im allgemeinen Zivilrecht. Eine kritische Betrachtung im Lichte von Autonomie, Determinismus und Vorhersehbarkeit (2020) 69.

65 H Zech, ‘Künstliche Intelligenz und Haftungsfragen’ (2019) ZfPW, 118, 202.

66 B Buchner, ‘DS-GVO Art. 22’ in J Kühling and B Buchner (eds), Datenschutzgrundverordnung BDSG Kommentar (3rd ed. 2020) para 14 et seq.; P Schantz and HA Wolff, Das neue Datenschutzrecht (2017) recital 736.

67 GDPR, Article 9(2)(a), in conjunction with Article 6(1)(a) GDPR or Article 6(1)(b) GDPR (doctor–patient relationship as a contractual obligation under civil law).

68 D Kampert, ‘DSGVO Art. 9’ in G Sydow (ed), Europäische Datenschutzgrundverordnung (2nd ed. 2018) para 14.

69 For challenges see Sub-section II 3.

70 GDPR, Recital 33. WHO, ‘Ethics and Governance for Artificial Intelligence for Health’ (Footnote n 11) 84 et seq.

71 Cf. instead of many: HC Stoeklé and others, ‘Vers un consentement éclairé dynamique’ [Toward Dynamic Informed Consent] (2017) 33 Med Sci (Paris) 188; I Budin-Ljøsne and others, ‘Dynamic Consent: A Potential Solution to Some of the Challenges of Modern Biomedical Research’ (2017) 18(1) BMC Med Ethics 4; WHO, ‘Ethics and Governance for Artificial Intelligence for Health’ (Footnote n 11) 82.

72 Information obligations in the course of broad and dynamic consent: Datenschutzkonferenz, ‘Beschluss der 97. Konferenz der unabhängigen Datenschutzaufsichtsbehörden des Bundes und der Länder zu Auslegung des Begriffs “bestimmte Bereiche wissenschaftlicher Forschung” im Erwägungsgrund 33 der DS-GVO’ (3 April 2019) www.datenschutzkonferenz-online.de/media/dskb/20190405_auslegung_bestimmte_bereiche_wiss_forschung.pdf.

73 For problems with this, see Sub-section II 3.

74 PHG Foundation, ‘The GDPR and Genomic Data: The Impact of the GDPR and DPA 2018 on Genomic Healthcare and Research’ (2020) 44 et seq. www.phgfoundation.org/media/123/download/gdpr-and-genomic-data-report.pdf?v=1&inline=1.

75 Ibid, 167.

76 Cf. GDPR, Article 17(3)(d). T Herbst, ‘DS-GVO Art. 17’ in J Kühling and B Buchner (eds), Datenschutzgrundverordnung BDSG Kommentar (3rd ed. 2020) para 81 et seq.

77 For special categories of personal data, cf. the exemptions defined in Article 9(2) GDPR.

78 Cf. GDPR, Article 20.

79 Cf. instead of many others: B Murdoch, ‘Privacy and Artificial Intelligence: Challenges for Protecting Health Information in a New Era’ (2021) 22(1) BMC Medical Ethics 122.

80 CJEU, Case C-131/12 Google Spain v González [2014] ECLI:EU:C:2014:317 para 87 et seq.

81 OJ Gstrein, Das Recht auf Vergessenwerden als Menschenrecht (2016) 111.

82 For this in-depth analysis of the right to be forgotten, cf. F Molnár-Gábor, ‘Das Recht auf Nichtwissen. Fragen der Verrechtlichung im Kontext von Big Data in der modernen Biomedizin’ in G Duttge and Ch Lemke (eds), Das sogenannte Recht auf Nichtwissen. Normatives Fundament und anwendungspraktische Geltungskraft (2019) 83, 99 et seq.

83 JE Alvarez, International Organizations as Law-Makers (2005) 4, 6 et seq. Due to the regional character of the Council of Europe, its instruments are not further elaborated on here.

84 WHO, ‘Global Health Ethics’ https://apps.who.int/iris/handle/10665/164576.

85 UNESCO, ‘Universal Declaration on Bioethics and Human Rights, 19 October 2005, Records of the UNESCO General Conference, 33rd Session, Paris, 3–21 October 2005’ (33 C/Resolution 36) 74 et seq.

86 Constitution of the UNESCO, 4 UNTS 275, UN Reg No I-52 (hereafter UNESCO-Constitution).

87 F Molnár-Gábor, Die internationale Steuerung der Biotechnologie am Beispiel neuer genetischer Analysen (2017) 202 et seq.

88 On the advantages of international soft law compared to international treaties when it comes to the regulation of biomedicine cf. A Boyle, ‘Some Reflections on the Relationship of Treaties and Soft Law’ (1999) 48 International and Comparative Law Quarterly 901, 902 et seq., 912 et seq.; R Andorno, Principles of International Biolaw. Seeking Common Ground at the Intersection of Bioethics and Human Rights (2013) 39 et seq.; W Höfling, ‘Professionelle Standards und Gesetz’ in HH Trute and others (eds), Allgemeines Verwaltungsrecht – zur Tragfähigkeit eines Konzepts, Festschrift für Schmidt-Aßmann zum 70. Geburtstag (2008) 45, 52.

89 M Bothe, ‘Legal and Non-Legal Norms: A Meaningful Distinction in International Relations?’ (1980) 11 Netherlands Yearbook of International Law 65, 67 et seq.

90 J Klabbers, An Introduction to International Institutional Law (2nd ed. 2009) 183.

91 H Hilgenberg, ‘Soft Law im Völkerrecht’ (1998) 1 Zeitschrift für Europarechtliche Studien 81, 100 et seq.

92 M Goldmann, Internationale öffentliche Gewalt (2015) 34, 60 et seq., 187 et seq., 199 et seq.

93 I Venzke, How Interpretation Makes International Law (2012) 380.

94 TA Faunce, ‘Will International Human Rights Subsume Medical Ethics? Intersections in the UNESCO Universal Bioethics Declaration’ (2005) 31 Journal of Medical Ethics 173, 176; D Thürer, ‘Soft Law’ in R Wolfrum (ed), Max Planck Encyclopedia of Public International Law (2009) recital 2.

95 Cf. instead of many others: A Langlois, Negotiating Bioethics (2013) (hereafter Langlois, Negotiating Bioethics) 144.

96 Statutes of the International Bioethics Committee of UNESCO (IBC), Adopted by the Executive Board at its 154th Session, on 7 May 1998 (154 EX/Dec. 8).

97 F Molnár-Gábor, Die internationale Steuerung der Biotechnologie am Beispiel neuer genetischer Analysen (2017) 298 et seq.

98 Ibid, 301.

99 Statutes of the International Bioethics Committee of UNESCO (IBC) (n 96) Article 11. Cf. Rules of Procedure of the Intergovernmental Bioethics Committee (IGBC), Adopted by IGBC at its 3rd session on 23 June 2003 in Paris and amended at its 5th session on 20 July 2007 and at its 7th session on 5 September 2011 (SHS/EST/IGBC-5/07/CONF.204/7 Rev) Article 1.

100 Critically on this, see Langlois, Negotiating Bioethics (n 95) 56.

101 F Molnár-Gábor, Die internationale Steuerung der Biotechnologie am Beispiel neuer genetischer Analysen (2017) 299 et seq. For the critical assessment of the Inter-Governmental Meeting of Experts, cf. Langlois, Negotiating Bioethics (n 95) 56. The distribution of seats and the election take place according to the decision of the Executive Council: 155 EX/Decision 9.2, Paris, 03.12.1998. According to this, Group I (Western Europe and the North American States) has seven seats, Group II (Eastern Europe) has four, Group III (Latin America and the Caribbean States) has six, Group IV (Asia and the Pacific States) has seven, and Group V (Africa [eight] and the Arab States [four]) has a total of twelve seats.

102 W Spann, ‘Ärztliche Rechts- und Standeskunde’ in A Ponsold (ed), Lehrbuch der Gerichtlichen Medizin (1957) 4.

103 T Richards, ‘The World Medical Association: Can Hope Triumph Over Experience?’ (1994) BMJ, 308 (hereafter Richards, ‘The World Medical Association’).

104 See official homepage: WMA, ‘About Us’ www.wma.net/who-we-are/about-us/ (hereafter WMA, ‘About Us’).

105 S Vöneky, ‘Rechtsfragen der Totalsequenzierung des menschlichen Genoms in internationaler und nationaler Perspektive’ (2012) Freiburger Informationspapiere zum Völkerrecht und Öffentlichen Recht 4, note 16, https://www.jura.uni-freiburg.de/de/institute/ioeffr2/downloads/online-papers/fip_4_2012_totalsequenzierung.pdf.

107 On the decision-making process, see M Chang, ‘Bioethics and Human Rights: The Legitimacy of Authoritative Ethical Guidelines Governing International Clinical Trials’ in S Voeneky and others (eds), Ethics and Law: The Ethicalization of Law (2013) 177, 210 (hereafter Chang, ‘Bioethics and Human Rights’).

108 See official homepage: WMA, ‘About Us’ (n 104).

109 Chang, ‘Bioethics and Human Rights’ (n 107), 177, 209. Cf. Richards, ‘The World Medical Association’ (n 103).

110 Chang, ‘Bioethics and Human Rights’ (n 107), 177, 209 et seq. The threshold was 50,000 members a few years ago. Cf. Richards, ‘The World Medical Association’ (n 103).

111 Chang, ‘Bioethics and Human Rights’ (n 107), 177, 214.

112 Cf. Chang, ‘Bioethics and Human Rights’ (n 107), 177, 212.

113 WMA, ‘Declaration of Helsinki’ (n 11).

114 This medical ethics has been condensed into the four bioethical principles of autonomy, beneficence, non-maleficence, and justice (as set down by Beauchamp and Childress). TL Beauchamp and JF Childress, Principles of Biomedical Ethics (8th ed. 2019). For criticism of principlism cf. U Wiesing, ‘Vom Nutzen und Nachteil der Prinzipienethik für die Medizin’ in O Rauprich and F Steger (eds), Prinzipienethik in der Biomedizin. Moralphilosophie und medizinische Praxis (2005) 74, 77 et seq.

115 S Voeneky, Recht, Moral und Ethik (2010) 383.

117 UNESCO, ‘General Introduction to the Standard-Setting Instruments of UNESCO’ http://portal.unesco.org/en/ev.php-URL_ID=23772&URL_DO=DO_TOPIC&URL_SECTION=201.html (hereafter UNESCO, ‘General Introduction’).

118 Article 4(4) UNESCO-Constitution (n 86).

119 UNESCO, ‘General Introduction’ (n 117).

120 D Thürer, ‘Soft Law’ in R Wolfrum (ed), Max Planck Encyclopedia of Public International Law (2009) recital 27 (hereafter Thürer, ‘Soft Law’).

121 M Kotzur, ‘Good Faith (Bona Fide)’ in R Wolfrum (ed), Max Planck Encyclopedia of Public International Law (2009) recital 25.

122 Thürer, ‘Soft Law’ (n 120) recital 27. Cf. definition by M Goldmann, Internationale öffentliche Gewalt (2015) 3.

123 UNESCO, ‘General Introduction’ (n 117).

124 Compare V Lipp, ‘Ärztliches Berufsrecht’ in A Laufs, C Katzenmeier and V Lipp (eds), Arztrecht (8th ed. 2021) recital 12.

125 U Wiesing, Ethik in der Medizin (2nd ed. 2004) 75.

126 (Model) Professional Code for Physicians in Germany – MBO-Ä 1997 – The Resolutions of the 121st German Medical Assembly 2018 in Erfurt as amended by a Resolution of the Executive Board of the German Medical Association 14/12/2018 (hereafter MBO-Ä 1997).

127 E.g. § 203 StGB (German Criminal Code) which protects patient confidentiality.

128 Civil law regulates the contracts for the treatment of patients in §§ 630a ff. BGB.

129 Cf. § 630a BGB, C Katzenmeier, ‘BGB § 630a’ in BeckOK BGB (61st ed. 2022) para. 1 et seq.

130 M Quaas, ‘§ 14 Die Rechtsbeziehungen zwischen Arzt (Krankenhaus) und Patient’ in R Zuck, T Clemens, and M Quaas (eds), Medizinrecht (4th ed. 2018) recital 128.

131 For more information see Organización Médica Colegial de España, ‘Funciones del CGCOM’ www.cgcom.es/funciones.

132 For more information see American Medical Association, ‘AMA’s International Involvement’ www.ama-assn.org/about/office-international-relations/ama-s-international-involvement.

133 Bundesärztekammer, ‘(Muster-)Berufsordnung-Ärzte’ https://www.bundesaerztekammer.de/themen/recht/berufsrecht.

134 BVerfGE 52, 131 (BVerfG BvR 878/74) para 116.

135 MBO-Ä 1997 (n 126).

136 WMA, ‘Declaration of Geneva (1947), last amended by the 68th General Assembly in Chicago, USA, October 2017’ (WMA, 9 July 2018) www.wma.net/policies-post/wma-declaration-of-geneva/.

137 ‘Far more than in other social relations of human beings, the ethical and the legal merge in the medical profession.’ E Schmidt, ‘Der Arzt im Strafrecht’ in A Ponsold (ed), Lehrbuch der gerichtlichen Medizin (2nd ed. 1957) 1, 2; BVerfGE 52, 131 (BVerfG BvR 878/74).

138 UNESCO states, for example, that ‘Human rights law contains provisions that are analogous to the principles that flow from analysis of moral obligations implicit in doctor–patient relationships, which is the starting point, for example, of much of the Anglo-American bioethics literature, as well as the bioethics traditions in other communities.’ UNESCO IBC, ‘Report on Human Gene Therapy’ SHS-94/CONF.011/8, Paris, 24.12.1994, IV.1.

139 International Council of Nurses www.icn.ch.

140 WMA, ‘Partners, WMA Partnerships’ www.wma.net/who-we-are/alliance-and-partner/partners/.
