
Part I - Automated Banks

Published online by Cambridge University Press:  16 November 2023

Zofia Bednarz
Affiliation:
University of Sydney
Monika Zalnieriute
Affiliation:
University of New South Wales, Sydney


Type: Chapter
In: Money, Power, and AI: Automated Banks and Automated States, pp. 7-92
Publisher: Cambridge University Press
Print publication year: 2023
Creative Commons
This content is Open Access and distributed under the terms of the Creative Commons Attribution licence CC-BY-NC-ND 4.0 https://creativecommons.org/cclicenses/

1 AI in the Financial Sector: Policy Challenges and Regulatory Needs

Teresa Rodríguez de las Heras Ballell
1.1 Setting the Scene: AI in the Financial Sector

The progressive, but irrepressible, automation of activities, tasks, and decision-making processes through the systematic, pervasive application of AI techniques and systems is ushering in a new era in the digital transformation of contemporary society and the modern economy.Footnote 1 The financial sector, traditionally receptive and permeableFootnote 2 to technological advances, is not oblivious to this process of intense and extensive incorporation of AI, for multiple purposes and under variegated forms.Footnote 3 The advantages and opportunities that AI solutions offer in terms of efficiency, personalisation potential, risk management, and cost reduction have not gone unnoticed by the financial sector. On the contrary, the characteristics of AI systems seem to fit perfectly with the features of financial services and to address masterfully their most distinctive and challenging needs. Thus, the financial industry provides a receptive and conducive environment for the growing application of AI solutions.

Despite the spotlight on AI, the fact that AI solutions are usually applied, implemented, and incorporated in the financial activity in synergetic combination with other transformative and emerging technologies should not be disregarded. These are technologies such as big data, Internet of Things (IoT), cloud computing, distributed ledger technology (DLT), quantum computing, platform model, virtual reality, and augmented realityFootnote 4 that are synchronously present in the market, with similar levels of technical maturity,Footnote 5 commercial viability, and practical applicability. In fact, the multiplying effects triggered by such a combination of sophisticated technological ecosystems largely explain the perceived disruptive nature of AI and its actual impact.

With very diverse uses and applications, AI has penetrated financial markets across the board in an increasingly visible way.Footnote 6 Its alliance with analytical and predictive processing of big data by financial institutionsFootnote 7 is perhaps the most telling dimension of a profound transformation of the industry, business strategies, risks, and operations.Footnote 8

The perception of the usefulness of AI solutionsFootnote 9 and, above all, of the timeliness and desirability of their increasingly pressing incorporation has been encouraged by markedly different competitive conditions, shaped precisely by the impact of technology on market architecture and by the exceptional circumstances arising from the pandemic.Footnote 10 Indeed, this process of intense digital migration has altered the structure and conditions of competition in the market with the opening of new niches for the emergence of innovative fintech firmsFootnote 11 and the sweeping entry of Big Tech in the financial services sector. The essential function of financial markets as mechanisms for the efficient allocation of savings to investment can take many different forms. Technological innovation has endowed the sector with new architecturesFootnote 12 on a continuum that shifts from platform modelsFootnote 13 based on a centralised structure to decentralised or distributed modelsFootnote 14 – to varying degrees – that DLTFootnote 15 makes it possible to articulate.Footnote 16

Changes in market architecture and opportunities for the provision of new services and intermediation in the distribution of new financial assets and products have driven the emergence of new market players – crowdfunding platform operators, aggregators, comparators, robo-advisers, algorithm providers, social trading platform operators, and multilateral trading system operators – encouraged by low barriers to entry, promising business opportunities, cost reduction, and economies of scale.

In this new landscape, complex relationships of cooperationFootnote 17 and competitionFootnote 18 are established between entrants and incumbents.Footnote 19 The presence of new players in the market – offering complementary or instrumental services, creating new environments and channels of communication and intermediation, and adding value to traditional services and products – challenges the traditional scope of regulation and the classical limits of supervision.Footnote 20

On the other hand, mobility restrictions, with closures, confinements, and limitations on travel aimed at containing the spread of the Covid-19 pandemic from the first quarter of 2020, although temporary, have turned the opportunity of digital banking into a survival necessity and even an obligation, in practice, for the proper provision of service and customer care. In a fully digital context for all customer interactions and operations, the use of AI for optimisation, personalisation, or recommendation is key. The processing of increasing amounts of data requires automated means. At this forced and exceptional juncture, many digitalisation initiatives have been prioritised to meet the needs of the changed circumstances. A bank that has completed its digital migration is in a very favourable and receptive position for AI solutions.

This trend, as a response to market demands, is met with increasing regulatory attention seeking to unleash the possibilities and contain the risks of AI. The European Union (EU) provides a perfect illustration. Efforts to define a harmonised regulatory framework for the market introduction, operation, and use of AI systems under certain prohibitions, requirements, and obligations crystallised in the proposed Regulation known as the AI Act.Footnote 21 From a sectoral perspective, the European Banking Authority (EBA) had already advocated the need to incorporate a set of fundamental principles to ensure the responsible use and implementation of safe and reliable AI in the banking sector.Footnote 22 Indeed, promoting safe, reliable, and high-quality AI in Europe has become one of the backbones of the EU’s digital strategy as defined in the strategic package adopted on 19 February 2020. The White Paper on AIFootnote 23 and the European Commission Report on Safety and Liability Implications of AI, the Internet of Things and RoboticsFootnote 24 define the coordinates for Europe’s digital future.Footnote 25 The Ethics Guidelines for Trustworthy AI prepared by the independent High-Level Expert Group on AI in the European Union,Footnote 26 which the EBA takes as a reference, marked the first step towards the consolidation of a body of principles and rules for AI – explainability, traceability, avoidance of bias, fairness, data quality, security, and data protection. But the legal regime for the development, implementation, marketing, or use of AI systems requires incorporating other rules found in European legislation, in particular the recently adopted Regulations on digital services and digital markets – the Digital Services Act (DSA)Footnote 27 and the Digital Markets Act (DMA)Footnote 28 – or some of the forthcoming instruments related to AI liability. Even so, this does not result in a coherent and comprehensive body of rules relating to the use of AI systems in the banking sector. It is necessary to compose a heterogeneous and plural set of rules that derive from sectoral regulations, result from the inference of general principles, apply standards from international harmonisation instruments, or project the rules on obligations, contracts, or liability through more or less successful schemes based on functional equivalence and technological neutrality.Footnote 29

The aim of this chapter is to follow this path, which starts with the observation of a growing and visible use of AI in the financial sector, moves into the regulatory and normative debate, and concludes with a reflection on the principles that should guide the design, development, and implementation of AI systems in decision-making (ADM) in the sector. To this end, the chapter is structured as follows. First, it explores the concept of an AI system, considering definitions proposed in the EU, especially in the AI Act, and the interaction of this term with other related terms such as ADM (Section 1.2.1). The various applications of AI in the financial sector in general and in the banking sector in particular are then explored (Section 1.2.2). This provides the conceptual basis for analysing the regulatory framework, including existing and emerging standards, applicable to AI systems (Section 1.3). The chapter concludes with a proposal of the main principles that should guide the design, implementation, and use of AI systems in the financial sector (Section 1.4).

1.2 Concept and Taxonomy: AI System and ADM

The digital transformation is generating an intimate and intense intertwining of various technologies with socioeconomic reality. This implies recalibrating not only principles and rules, but also terminology and concepts. The legal response must be articulated with appropriate definitions and legally relevant concepts that adequately grasp the distinctive features of technological solutions without falling into a mere technical description, which would make the law irremediably and forever obsolete in the face of technological progress. The law should rather opt for a functional categorisation, one that captures functions without prejudging the technological solution or the business model.

1.2.1 AI Systems: Concept and Definition

In European legislation, whether in force or pending adoption, references to automation appear scattered and with disparate terminology. In both the General Data Protection Regulation (GDPR)Footnote 30 and the Digital Services Act, references can be spotted to automated individual decisions, algorithmic decisions, algorithmic content recommendation or moderation systems, algorithmic prioritisation, or the use of automatic or automated means for various purposes. But there is no definition of or explicit reference to ‘AI’ in these texts. It is the future, and still evolving, AI Act that expressly defines ‘AI systems’, for the purposes of the regulation, in order to delimit its material scope of application.

The initial definition of AI system for the purposes of the proposed instrument in the European Commission’s proposal was as follows: artificial intelligence system (AI system) means ‘software that is developed using one or more of the techniques and approaches listed in Annex I and can, for a given set of human-defined objectives, generate outputs such as content, predictions, recommendations or decisions that influence the environments with which it interacts’ (Art. 3.1 AI Act).

With this definition, AI systems are characterised on the basis of two components. The first is their qualification as learning systems, which separates them from more traditional computational systems; that is, the fact that they employ or are developed using ‘AI’ techniques, which the AI Act would define in an Annex I, subject to further extension or modification, and which currently includes: machine learning strategies, including supervised, unsupervised, and reinforcement learning, employing a wide variety of methods, including deep learning; logic- and knowledge-based strategies, especially knowledge representation, inductive (logic) programming, knowledge bases, inference and deduction engines, expert systems, and (symbolic) reasoning; and statistical strategies, Bayesian estimation, and search and optimisation methods. The second is the influence on the environment with which they interact, generating outcomes such as predictions, recommendations, content, or actual decisions. Behind this definition lies the assumption that it is precisely the ‘learning’ capabilities of these systems that largely determine their disruptive effectsFootnote 31 (opacity, vulnerability, complexity, data dependence, autonomy) and hence the need to reconsider the adequacy of traditional rules. This is, in fact, the reasoning that leads to rethinking the rules of liability and assessing their adequacy in the face of the distinctive features of AI, as proposed in the Report on Liability for Artificial Intelligence and Other Emerging Technologies, published on 21 November 2019Footnote 32 by the Expert GroupFootnote 33 on New Technologies and Liability advising the European Commission.Footnote 34
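
To make this distinction more concrete, the following sketch (a hypothetical illustration, not drawn from the AI Act or from any system discussed in this chapter) contrasts a traditional, hand-coded credit rule with a simple model whose decision boundary is learned from data; only the latter would count as a ‘learning’ system in the sense just described. The feature names, thresholds, and toy training data are assumptions made purely for illustration.

```python
# Hypothetical illustration: a hand-coded rule versus a learned model for the
# same credit decision. Feature names, thresholds, and data are invented.
import numpy as np
from sklearn.linear_model import LogisticRegression


def rule_based_decision(income: float, existing_debt: float) -> bool:
    """Traditional software: a fixed rule written entirely by its author."""
    return income > 30_000 and existing_debt / max(income, 1.0) < 0.4


# 'Learning' system: the decision boundary is inferred from historical
# outcomes rather than written by hand (toy synthetic data).
X_train = np.array([[45_000, 5_000], [20_000, 15_000], [60_000, 10_000], [25_000, 20_000]])
y_train = np.array([1, 0, 1, 0])  # 1 = loan repaid, 0 = loan defaulted
model = LogisticRegression(max_iter=1000).fit(X_train, y_train)


def learned_decision(income: float, existing_debt: float) -> bool:
    """Behaviour depends on the training data, not only on the code."""
    return bool(model.predict([[income, existing_debt]])[0])


print(rule_based_decision(40_000, 10_000), learned_decision(40_000, 10_000))
```

The contrast is purely conceptual: the first function’s behaviour is fully specified by its author, while the second depends on whatever data it was trained on, which is what the definition’s emphasis on ‘learning’ tries to capture.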

Along the same lines, the Commission adopted two related proposals in 2022: proposal for a directive on adapting non-contractual civil liability rules to artificial intelligenceFootnote 35 and proposal for a directive of the European Parliament and of the Council on liability for defective products.Footnote 36

However, the wording of this definition for AI systems in the Commission’s proposal has been subject to significant reconsideration and might still evolve into its final wording. The compromise text submitted at the end of November 2021 by the Slovenian Presidency of the European Council (Council of the European Union, Presidency compromise text, 29 November 2021, 2021/0106(COD), hereafter simply Joint Undertaking) proposed some changes to this definition.Footnote 37 The text, in its preamble, explains that the changes make an explicit reference to the fact that the AI system should be able to determine how to achieve a given set of pre-defined human objectives through learning, reasoning, or modelling, in order to distinguish them more clearly and unambiguously from more traditional software systems, which should not fall within the scope of the proposed Regulation. But also with this proposal, the definition of an AI system is stylised and structurally reflects the three basic building blocks: inputs, processes, and outputs.

Yet, a subsequent version of the compromise textFootnote 38 of the AI Act offers another definition that refines the previous drafting and provides sufficiently clear criteria for distinguishing AI from simpler software systems. Thus, AI system means a system that is designed to operate with a certain level of autonomy and that, based on machine and/or human-provided data and inputs, infers how to achieve a given set of human-defined objectives using machine learning and/or logic- and knowledge-based approaches, and produces system-generated outputs such as content (generative AI systems), predictions, recommendations, or decisions influencing the environments with which the AI system interacts. Some key elements of the initial definition are preserved or recovered in this recent version, which finally narrows down the description of ‘learning systems’ in the definition to systems developed through machine learning approaches and logic- and knowledge-based approaches.

The definition is still evolving. In the latest compromise textFootnote 39 the new definition of AI system is ‘a machine-based system that is designed to operate with varying levels of autonomy and that can, for explicit or implicit objectives, generate outputs such as predictions, recommendations, or decisions, that influence physical or virtual environments’.

Also, the European Parliament Resolution on liability for the operation of artificial intelligence systemsFootnote 40 referred expressly to AI systems and formulated its own definition (Art. 3.a).Footnote 41 This Resolution contains a set of recommendations for a Regulation of the European Parliament and of the Council on civil liability for damage caused by the operation of AI systems. The proposal has not been adopted. Instead, the Commission proposed the abovementioned tandem of draft Directives, which follow a substantially different approach, aimed at revising the defective product liability rules so as to accommodate AI-driven products and at alleviating the burden of proof in fault-based liability scenarios concerning damage caused by AI systems.

The rest of the regulatory texts do not explicitly refer to AI, although they contain rules on algorithms, algorithmic systems of various types, automation or automatic decision-making. Thus, as mentioned above, the GDPR, the DSA, the DMA or, among others, the P2B RegulationFootnote 42 refer to algorithmic rating, algorithmic decision-making, algorithmic recommendation systems, algorithmic content moderation, algorithmic structures, automated profiling, or a variety of activities and actions performed by automated means. They include rules related to algorithms, such as disclosure, risk assessment, accountability and transparency audits, on-site inspections, obtaining consent, etc. As the definition of AI systems proposed by the AI Act reveals, recommendations, decisions, predictions, or other digital content of any kind, as well as actions resulting from the system in or in relation to the environment, are natural and frequent outputs of AI systems. Consequently, regulatory provisions that in some way regulate algorithmic processes and decision-making by automated means in a variety of scenarios and for a variety of purposes are also relevant for the construction of the regulatory framework for AI in the European Union.

Provided that an AI system falls under the scope of application of the proposed AI Act, it may be subject to the AI Act as well as to other rules, depending on its specific purpose, the use for which it is intended, or the specific action it performs. As an illustration, if the system is intended to produce recommendations by a very large banking platform, the DSA (Art. 27) – applicable to any online platform – applies; if the system is intended for profiling, the GDPR (Art. 22) would be relevant.

In conclusion, understanding the complementarity between the various legal texts that directly or indirectly address the use of AI systems for a variety of purposes and from a range of legal perspectives is fundamental to composing the current and future regulatory framework for AI, as discussed below.

1.2.2 Current and Potential Uses of AI in the Financial Sector

With varying degrees of intensity, AI systems are used transversally in the banking sector along the entire value chain, from front-line to mid-office to back-office functions. For customer service and interaction, AI systems offer extraordinary possibilities for personalisation, recommendation and profiling, account management, trading and financial advice (robo-advisers), continuous service via chatbots and virtual assistants, and sophisticated Know Your Customer (KYC) solutions.Footnote 43 In the internal management of operations, AI solutions are applied in the automation of corporate, administrative, and transactional processes, in the optimisation of various activities, or in compliance management. For risk management, AI solutions are projected to improve fraud prevention mechanisms, early warning and cybersecurity systems, as well as being incorporated in predictive models for recruitment and promotion. Another interesting useFootnote 44 of advanced analysis models with machine learning is the calculation and determination of regulatory capital. Significant cost savings are estimatedFootnote 45 if these models are used to calculate risk-weighted assets.

Acknowledging this transversal and multipurpose use makes it possible to anticipate some considerations of interest and relevance for legal analysis. It can be seen that automation has an impact on decision-making processes, actions, or operations of a diverse nature, and this will be decisive in determining at least three elements.

First, the applicable regulatory regime – for example, whether it is used to automate compliance with reporting rules, to prevent fraud, to personalise customer offers, or to handle complaints via a chatbot. Second, the possible liability scenarios – for example, whether algorithmic biases and data obtained from social media for the credit scoring and creditworthiness assessment system could lead to systematic discriminatory actions. Third, the transactional context in which it is used – for example, in consumer relations with retail customers, in relations with the supervisor, or in internal relations with employees or partners.

The benefits deriving from the use of automation and AI and the expected gains from systematic and extensive application are numerous.Footnote 46 Algorithm-driven systems provide speed, simplicity, and efficiency in solving a multitude of problems. Automation drastically reduces transaction costs, enabling services that would otherwise be unprofitable, unaffordable, or unviable to be provided on reasonable and competitive terms. Cost reduction explains, for example, the burgeoning sector of robo-advisersFootnote 47 that have expanded the market beyond traditional financial advisers with appreciable benefits for consumers by diversifying supply, increasing competition, and improving financial inclusion.Footnote 48 Such expansion has facilitated financial advice to small investment and low-income investors on market terms.

ADM systems can therefore perform automated tasks and make mass decisions efficiently (high-frequency algorithmic trading, search engines, facial recognition, personal assistants, machine translation, predictive algorithms, and recommender systems). The use of automated means is critical for the large-scale provision of critical services in our society that would otherwise be impossible or highly inefficient (search, sorting, filtering, rating, and ranking).

However, the expansive and growing use of algorithms in our society can also be a source of new risks: it can lead to unintended outcomes, have unintended consequences, or raise legal concerns and social challenges of many different kinds. ADM may be biased or discriminatoryFootnote 49 as a result of prejudiced preconditions, based on stereotypes or aimed at exploiting user vulnerabilities, inadequate algorithm design, or an insufficient or inaccurate training and learning data set.Footnote 50 Automation makes bias massive, amplified, and distorted, and allows it to gain virality easily. In a densely connected society such as ours, virality acts as an amplifier of the harmful effects of any action. Negative impacts spread rapidly, the magnitude of the damage increases, and the reversibility of the effects becomes less likely and increasingly impractical. The incorporation of decision and learning techniques into increasingly sophisticated AI systems adds to the growing unpredictability of future response. This leads to greater unpredictability and unstoppable complexity that is not always consistent with traditional rules and formulas for attribution of legal effects and allocations of risk and liability (infra Sections 1.3.2.2 and 1.3.2.3).
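
As a purely hypothetical illustration of how such bias might be surfaced in an Automated Bank’s own records, the sketch below compares approval rates across two groups in a log of automated credit decisions. The column names, the toy data, and the four-fifths (0.8) screening threshold are illustrative assumptions, not a legal test proposed in this chapter.

```python
# Hypothetical sketch: comparing approval rates across groups in a decision log.
import pandas as pd

decisions = pd.DataFrame({
    "group":    ["A", "A", "A", "A", "B", "B", "B", "B"],
    "approved": [1,    1,   1,   0,   1,   0,   0,   0],
})

# Approval rate per group, and the ratio between the worst- and best-treated group.
rates = decisions.groupby("group")["approved"].mean()
disparate_impact_ratio = rates.min() / rates.max()

print(rates)
print(f"Disparate impact ratio: {disparate_impact_ratio:.2f}")
if disparate_impact_ratio < 0.8:  # illustrative 'four-fifths' screening threshold
    print("Warning: approval rates differ markedly between groups - review for bias.")
```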

1.3 An Initial Review of the Policy and Regulatory Framework in the European Union

The use of AI systems for decision-making and the automation of tasks and activities in the financial sector does not have a comprehensive and specific legal framework, either across the board or in its various sectoral applications.

The legal and regulatory framework needs to be assembled by the interlocking of legal provisions from various instruments and completed by the inference of certain principles from rules applicable to equivalent non-automated decisions. The application of the principle of functional equivalence (between automated and non-automated decisions with equivalent functions) guided by technological neutrality makes it possible to extract or extrapolate existing rules to the use of AI systems. However, as argued in the final part of this chapter, this effort to accommodate existing rules to the use of different technologies, under an approach of non-discrimination on the basis of the medium used, presents difficulties due to the distinctive characteristics of AI systems, thus compromising legal certainty and consistency. It is therefore suggested that a set of principles be formulated and a critical review of regulation be conducted to ensure that the European Union has a framework that provides certainty and encourages the responsible use of AI systems in the financial sector.

1.3.1 The Expected Application of the Future AI Act to the Uses of AI in the Financial Sector

The (future) AI Act is based on a risk-based classification of AI uses, applications, and practices, to which a specific legal regime is attached: prohibition, requirements for high-risk systems, and transparency and other obligations for certain low-risk systems. The classification of AI systems is not done on the basis of the technology employed but in conformity with the (intended, actual, reasonably expected) specific uses or applications. This means that there is no explicit sectoral selection, but certain practices map onto typical sectoral uses, such as creditworthiness assessment and automated credit rating determination.

1.3.1.1 Prohibited Practices under the AI Act and Their Relevance to Financial Activity

The prohibited practices under Article 5 of the AI Act do not at first sight naturally embrace the expected uses of AI in the financial sector but, to the extent that they are defined on the basis of certain effects, they cannot be fully ruled out and should be taken into consideration as red lines. Consider, for example, a personalised-marketing AI system that uses subliminal techniques to substantially alter behaviour in a way that may cause physical or psychological harm (Art. 5.a), or a loan offering and marketing system that exploits any of the vulnerabilities of a group of people based on age, physical or mental disability, or a specific social or economic situation (Art. 5.b).

As initially drafted, although slightly nuanced in subsequent versions (in the Joint Undertaking), the scenarios for the use of biometric identification systems or the assessment or classification of the trustworthiness of natural persons according to their social behaviour or personality are less likely to cover AI applications in the financial sector. The reason is that the prohibition is linked to their use by (or on behalf of) public authorities, or in publicly accessible spaces for law enforcement purposes (although there are still scenarios in which they could apply, such as, precisely, banks, mentioned in Recital 9 as ‘publicly accessible spaces’). These requirements, questioned for being excessively restrictive, would leave outside the scope of the prohibition the use, in the private space of an institution, of biometric recognition systems or even of assessment systems (social scoring) that could be implemented to accompany a creditworthiness assessment or to profile the eligibility of applicants for banking products. Therefore, in the latest compromise text (14 June 2023), these restrictive criteria have been deleted, and the prohibition now extends (Article 5.1.d) to the use of ‘real-time’ remote biometric identification systems in publicly accessible spaces.

While the potential impact of the AI Act’s prohibitions of certain practices on the financial sector appears limited, the likelihood of financial-sector AI systems being classified as high risk is certainly much higher.

1.3.1.2 High-Risk Systems in the AI Act

Annex III of the AI Act provides a list of AI systems, related to eight areas (pursuant to the most recent version of the compromise text), defined by their use, purpose or aim, which will very easily reflect frequent applications of AI in the financial sector: systems for the remote biometric identification of natural persons (1.a Annex III), systems for recruitment or selection of natural persons or for making decisions on promotion or termination of employment relationships or assignment of tasks and monitoring and evaluation of performance (4.a and b Annex III), and, directly and obviously, systems for assessing the creditworthiness of natural persons or establishing their credit rating – with the exception of AI systems used for the purpose of detecting financial fraud – (5.b Annex III).
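
By way of illustration only, the sketch below shows how a bank’s intended AI uses might be given a rough first-pass screening against the Annex III areas just listed. The categories and the fraud-detection carve-out follow the text above, but the data structure and keyword matching are simply our assumption about how such a screening could be recorded; they are not prescribed by the AI Act.

```python
# Hypothetical first-pass screening of intended AI uses against Annex III areas.
HIGH_RISK_USES = [
    ("biometric", "remote biometric identification (Annex III, 1.a)"),
    ("recruitment", "recruitment, promotion, or employment decisions (Annex III, 4.a-b)"),
    ("promotion", "recruitment, promotion, or employment decisions (Annex III, 4.a-b)"),
    ("creditworthiness", "creditworthiness assessment / credit rating (Annex III, 5.b)"),
    ("credit scoring", "creditworthiness assessment / credit rating (Annex III, 5.b)"),
]


def screen_use(intended_use: str, fraud_detection: bool = False) -> str:
    """Very rough screening of an intended AI use against the Annex III areas."""
    text = intended_use.lower()
    if "credit" in text and fraud_detection:
        # Annex III 5.b carves out AI systems used to detect financial fraud.
        return "not high risk under 5.b (fraud-detection carve-out); check other rules"
    for keyword, label in HIGH_RISK_USES:
        if keyword in text:
            return f"potentially high risk: {label}"
    return "not on the Annex III list; other obligations may still apply"


print(screen_use("creditworthiness assessment of retail loan applicants"))
print(screen_use("transaction monitoring for credit card fraud", fraud_detection=True))
print(screen_use("chatbot for general account queries"))
```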

Confirming that the AI Act applies to certain uses of AI systems by a financial institution will mean that they are subject to more intensive requirements if they are qualified as high risk (Art. 8 AI Act). These are essentially audit, risk assessment and management, data governance (training, validation, and testing), technical documentation, event logging, cybersecurity, and transparency and reporting obligations, to which financial institutions are by no means oblivious or unfamiliar. They respond to a regulatory strategy of supervision and risk management that is well known in regulated sectors such as the financial sector. In fact, the need to avoid duplications, contradictions, or overlaps with sectoral regulations has been taken into account in the AI Act, in particular in relation to the financial sector, which is already subject to risk management, assessment, and supervision obligations similar to those envisaged in the future Regulation (see Recital 80 and Articles 17.3, 18.2, 20.2, 29.4, 61.4, 62.3). In this regard, the AI Act articulates some solutions to ensure consistency with the obligations of credit institutions under Directive 2013/36/EUFootnote 51 when they employ, operate, or place on the market AI systems in the exercise of their activity.

1.3.2 Principles and Rules for the Use of AI Systems in Decision-Making

However, the eventual application of the AI Act does not exhaust the regulatory framework of reference for the use of AI systems in the financial sector, nor, in fact, does it resolve a good number of questions that the implementation and subsequent operation of such systems in the course of their activity will generate. To this end, and for this reason, it is essential to explore other regulatory instruments and to discover legal avenues to answer a number of important questions. First, to what extent can automated systems be used with full functional equivalence for any activity, decision, or process without prior legal authorisation? Second, to what extent are decisions taken or assisted by AI systems attributed to the financial institution operating the system? Third, who bears the risks and liability for damage caused by the AI systems used?

1.3.2.1 On the Principle of Non-discrimination for the Use of AI in Decision-Making

Neither the AI Act nor, in principle, any other regulation expressly enables the use of AI systems to support decision-making or to automate specific tasks, processes, or activities.Footnote 52 Occasionally and incidentally, references to automation are found in some texts, even simply in the recitals – Regulation (EU) 2020/1503Footnote 53 refers to automatic investment in par. 20 – without further specification or development in the legal provisions. In other cases, this possibility is confirmed because reference is made to ‘with or without human intervention’ or ‘by automatic means’, as in the DSA – Art. 3(s) on recommender systems, Art. 16(6) on means of notification and action, Art. 17(3) on the statement of reasons. And in other cases, an express limitation to full automation of a decision-making process such as complaints handling on a platform – DSA, Art. 20(6) – is provided for.

Within this regulatory context, the question on the admissibility, validity, and enforceability of the use of AI systems must be approached on the basis of two backbone principles: the principle of functional equivalence and the principle of non-discrimination and technological neutrality. These principles lead to a positive and enabling initial response that allows the use of AI systems for decision-making or to assist in decision-making, to automate tasks, processes, and activities in a general way and without the need for prior express legislative recognition. There is no reason to deny this functional equivalence or to generally discriminate against the use of AI systems under analogous conditions. Subject-specific limits or sector-specific regulatory requirements might in practice restrict certain applications in the financial sector, but the basic rule is the feasibility of using AI in any activity and for any decision-making.

Naturally, the implementation of an AI system will require ensuring that the automated process is in compliance with the rules applicable to the same process, situation, or transaction if it were not automated. AI systems have to be designed, implemented, and operated in such a way that they comply with the rules that would apply to the legal nature of the decision or activity and, therefore, also to its regulatory treatment in the financial field. If the marketing of certain financial products is automated through a digital banking application, it should be ensured that the legal requirements for pre-contractual information are met. If an automated robo-adviser system is implemented, the requirements for financial advice must be met, if it is indeed categorised as such.

Despite the apparent simplicity of this principle of non-discriminatory recognition of AI, its effect is intense and powerful. It constitutes, in practice, a natural enabler for the multiple and intensive integration of AI in any area of financial activity, as a principle. As long as compliance with the rules and requirements applicable to the action or the equivalent non-automated process can be ensured, AI can be employed to make or assist in making any decision.

1.3.2.2 On the Attribution of Legal Effects

The particular complexity of the chain of design, development, implementation, and operation of AI systems, with a set of actors involved, very often without prior agreement or coordination among them, raises a legal question of indisputable business relevance: to whom the legal effects – and thus the risks – of a decision or an action resulting from an automated process are attributed.

Although this issue can be interpreted as a single attribution problem from a business perspective, from a legal point of view, it is useful to distinguish between two different, albeit related, issues.

First is the question of to whom the decision – any decision with contractual relevance (offer, acceptance, modification, renegotiation, termination) – or the resulting action – a commercial practice, compliance with a supervisory request – is to be attributed. That is, if a bank implements an application that incorporates a credit scoring system leading to the automated granting or refusal (without human intervention in each decision) of consumer credit applications, the assessment of creditworthiness and the decision to accept or deny the credit request are attributable to the bank. Thus, if the credit is granted, the bank is the counterparty to the resulting credit contract; whereas, if it is unjustifiably denied, discriminating against certain groups, the bank would be the offender, violating, for example, the right not to be discriminated against. Similar reasoning would apply to the use of an AI system in an employee recruitment or promotion programme, or to a fraud detection and prevention system.

This attribution of legal effects is based on the formulation of the concept of ‘operator’. This concept, proposed by the Report on Liability for Artificial Intelligence and Other Emerging Technologies and subsequently taken over by the European Parliament Resolution of 2020,Footnote 54 is based on two factors: control and benefit. Thus, the operator will be the centre of imputation of the legal effects insofar as it controls (or should be able to control) the risks of operating an AI system that it decides to integrate into its activity and, therefore, benefits from its operation.

This attribution of legal effects to the operator also has another important consequence. The operator cannot hide behind the automated or increasingly autonomous nature of the AI system used in order not to assume the consequences of the action or decision taken, nor can the bank consider attributing such effects to other actors involved in the life cycle of the AI system. Thus, for example, responsibility cannot be attributed to the developer of the system, the distributor, or the provider of the data per se and vis-à-vis the bank customer. This is without prejudice to the possibility for the operator (the bank) to bring subsequent actions or remedies against these actors. However, it is the operator who assumes the legal effects – whether arising from law or contract – vis-à-vis the affected person concerned (the customer).

Second, a question arises as to who should bear the risks and liability for damage caused by the operation of AI systems, as expounded below.

1.3.2.3 On Liability for Damage Caused by the Operation of AI Systems

The operation of an AI system can cause a wide range of damages. In certain sectors, substantial property damage and personal injury can be anticipated (autonomous vehicles, drones, home automation, care robots). Their applications in financial activities are linked to systemic risks, threats to economic stability and financial integrity, or cyclical responses and market shocks. But their malfunctioning can also simply cause massive data loss, disrupt access to services and products, generate misleading messages to customers about the status of their accounts, recommend unsuitable investments according to risk profile, or result in non-compliance with certain obligations vis-à-vis supervisory authorities. The use in rankings, recruitment services, content filtering, or virtual assistants for complaint handling opens the door to a far-reaching debate on their impact on fundamental rights and freedoms – freedom of expression, the right not to be discriminated against, the right to honour, and personality rights – but also on the competitive structure of the market or on the fairness of the commercial practices. Hence, the approach adopted by the proposed AI Act in Europe is based on the identification of certain AI practices, uses, or applications which, due to their particular risk or criticality, are prohibited, qualified as high risk and therefore subject to certain obligations and requirements, or subject to harmonised rules regulating their introduction on the market, their putting into service, and their use.Footnote 55

However, in the face of such potentially negative effects, the fundamental question is whether, beyond the adoption of specific rules for AI systems aimed at controlling their use and mitigating their negative effects, traditional legal liability regimes are adequately equipped to manage the risks and effectively resolve the conflicts arising from such situations in complex technological environments.

In this respect, the European Union faces important legislative policy choices. The first is to assess whether a thorough reform of the product liability regimeFootnote 56 is necessary to accommodate AI systems.Footnote 57 The questions are manifold: are AI systems products? Is a decision of an AI system that causes damage necessarily the result of a defect? And do the provisions of the Directive work adequately in the face of an AI system that has been updated since it was put on the market? The second is to consider whether it is appropriate to establish a harmonised liability regime specific to AI, as suggested in the abovementioned Parliament Resolution,Footnote 58 and, if so, whether it should be an operator’s liability and whether the distinction between strict liability for high-risk systems and fault-based liability for the rest is appropriate. The Commission’s 2022 ProposalFootnote 59 departs from the route initiated by the Parliament in 2020: it proposes a Directive instead of a Regulation, and it puts forward a minimum, complementary, and targeted harmonisation of national rules on fault-based, non-contractual civil liability, with rules on specific aspects of fault-based liability at Union level.

1.4 Concluding Remarks: Principles for the Responsible Use of AI in Decision-Making

The principle of non-discrimination against the use of AI systems in any activity and for any decision-making enables intense and extensive automation in the banking (financial) sector through the implementation of AI solutions. Within this favourable and automation-friendly framework, compliance with the regulatory requirements demanded by the nature of the sectoral activity (law-compliant AI systems) must nevertheless be ensured and some specific limitations must be added which, by reason of their use or purpose (e.g. credit scoring, recruitment and promotion, biometric recognition), the future AI Act could prohibit or subject to certain obligations. To the extent that these AI systems are also employed to provide recommendations, personalise offers, produce rankings, or moderate content, additional rules (DSA, DMA, GDPR) could apply if they are used by financial institutions that have transformed their business model into an online platform.

Even so, there is neither a compact and coherent set of principles capable of guiding automation strategies nor a comprehensive body of rules that would provide full legal certainty for the implementation of AI systems in the banking sector. The highly distinctive characteristics of AI do not always make an application of existing rules under a technology-neutral and functional equivalence approach fully satisfactory, nor are the existing rules always feasible or workable in the AI context. Therefore, there are calls in the European Union for the complementation of the legal framework with other specific principles to crystallise a body of rules suitable for AI. The EBA also advocated for this strategy at sectoral level.

The formulation of ethical principles is certainly a starting point, but the integration of AI systems in the course of an economic activity, throughout the transactional cycle, and for business management requires a clear framework of duties and obligations. This is the endeavour that policymakers in the European Union and internationally must face now.Footnote 60 It is necessary to specify how AI systems should be designed, implemented, and commissioned to satisfy the principles of traceability, explainability, transparency, human oversight, auditability, non-discrimination, reasoned explanation of decisions, and access to a review mechanism for significant decisions. It will be key to understand how the provisions of the future AI Act interact with contract law and liability rules,Footnote 61 to what extent the classification of an AI system as high risk under the AI Act could imply the application of a strict liability regime (as previously proposed under the Parliament’s resolution scheme, even if this approach has not been followed by the Commission’s recent proposals for Directives), what effects the failure to articulate a human-intervention mechanism under Art. 22 GDPR would have on the validity and effectiveness of an automated decision based on profiling, and what implications the failure of the bank operator to comply with the requirements of the AI Act would have on the validity and the enforceability of the contract or on the eventual categorisation of certain bank practices as unfair commercial practices.

It is essential for financial firms, referred to in this book as Automated Banks, to be provided with clear and coherent rules for the use and implementation of AI systems in decision-making. The law must be developed in combination with, and accompanied by, detailed (technical) standards, best practices, and protocols that are progressively and increasingly harmonised in the financial sector.

2 Demystifying Consumer-Facing Fintech: Accountability for Automated Advice Tools

Jeannie Paterson, Tim Miller, and Henrietta Lyons
2.1 Introduction: Money, Power, and AI

As the authors of this book recognise, money and power are intimately linked. For most consumers, access to banking services, credit, and a saving plan for retirement are necessary – although not sufficient – requirements for a stable, meaningful, and autonomous life. Conversely, financial hardship may have considerable impact on not only the financial but also the emotional well-being of consumers.Footnote 1 There are many causes of financial hardship, including high levels of personal debt, reliance on high-cost credit, lack of access to mainstream banking services, and unexpected circumstances such as unemployment or ill health.Footnote 2 Additionally, consumers are sometimes subject to fraudulent, deceptive, and dishonest practices, which can escalate their financial problems.Footnote 3 Moreover, many consumers find that they lack the time or skills to manage their day-to-day finances, select optimal credit products, or invest effectively for the future.Footnote 4

So where does AIFootnote 5 – the third theme of this book – sit in this schema? The growing capacity of AI and related digital technologies has contributed to a burgeoning interest in the potential for financial technology (‘fintech’) to transform the way in which traditional banking and financial services are provided.Footnote 6 Governments across the globe have promoted the capacity of AI informed fintech to improve market competition and consumer welfare,Footnote 7 and have introduced initiatives to support the development of innovative fintech products within their jurisdictions.Footnote 8 Fintech products are increasingly being used by the financial services sector for internal processes, decision-making, and interactions with customers.Footnote 9

Inside financial institutions, fintech products are assisting in fraud detection, cybersecurity, marketing, and onboarding new clients.Footnote 10 Fintech products are being developed to automate financial services firms’ decisions about lending, creditworthiness, and pricing credit and insurance.Footnote 11 In a consumer-facing role, fintech products are being used for communicating with customers, such as through chatbots (generative or otherwise),Footnote 12 and in providing access to financial products, for example, loanFootnote 13 or credit card online applications.Footnote 14 Fintech products are being developed to provide credit product comparisons for consumers looking for the best deal.Footnote 15 However, the most common forms of consumer-facing fintech are, at the time of writing, financial advice toolsFootnote 16 primarily for investing and budgeting.Footnote 17

Consumer-facing fintech generally, and automated financial advice tools specifically, are often promoted as benefiting consumers by assisting them to make better decisions about credit, savings, and investment, and by providing these services in a manner that is more cost-effective, convenient, and consistent than could be provided by human advisers.Footnote 18 These features undoubtedly hold attractions for consumers. However, in our opinion, the allure of AI, and its financial market equivalent of fintech, should not be allowed to overshadow the limitations of, and the risks of harm inherent in, these technologies. As this book makes clear, whether used by governments or private sector firms, AI and automated decision-making tools raise risks of harm relating to privacy, efficacy, bias, and the perpetuation of existing power hierarchies. Albeit on a different scale, consumer-facing fintech products, such as automated financial advice tools, carry many of the same kinds of risks, which equally demand regulatory attention and best practice for good governance. There has been little assessment of whether automated financial advice tools are effective in improving the financial well-being of consumers. It is also unclear whether and to what extent such tools are equitable and inclusive, or conversely amplify existing bias or patterns of exclusion in financial services and credit markets.

Some of the potential risks of harm to consumers from automated financial advice tools will be addressed by existing law. However, we argue that there is a need to move past the commercial, and indeed political, promotion of ‘AI’ and ‘fintech’ to understand their specific fields of operation and demystify their scope. This is because the use of AI in this equation is not neutral or without friction. Automated advice tools raise discrete and unique challenges for regulatory oversight, namely opacity, personalisation, and scale. We therefore suggest, drawing on the key principles propounded in AI ethics frameworks, that the effective regulation of automated financial advice tools should require greater transparency about what is being offered to consumers. There should also be a regulatory commitment to ensuring the outputs of such tools are contestable and accountable, having regard to the challenges raised by the technology they utilise.

This chapter explores these issues, beginning with an overview of automated financial advice, focusing on what are currently the most widely available tools, namely ‘robo’ investment advice and budgeting apps. We discuss the risks of harm raised by these uses of AI and related technologies, arising from uncertainty about the quality of the service provided, untrammelled data collection, and the potential for bias, as well as the need for a positive policy focus on the impact of such tools on goals of equity and inclusion. We review the guidance provided by regulators, as well as the gaps and uncertainties in the existing regulatory regimes. We then consider the role of principles of transparency and contestability as preconditions to greater accountability from the firms deploying such tools, and more effective oversight by regulators.

2.2 Aspiration and Application in Consumer-Facing Fintech

The term ‘fintech’ refers to the use of AI and related digital technologies to deliver financial products and services.Footnote 19 The AI used to deliver fintech products may include natural language processing in front-end interfaces to communicate effectively with clients and statistical machine learning models to make predictions that inform financial decision-making. ‘Consumer-facing’ fintech refers to the use of fintech to provide services to consumers, as opposed to use by professional investors, business lenders, or for back-room banking processes. As already noted, perhaps the most prominent form of fintech service offered to consumers, as opposed to informing the internal processes of financial institutions, is automated financial advice, primarily about investing and budgeting.

The aims of most fintech products are to allow services to be delivered at scale, reducing human handling of information, and, in the case of consumer-facing fintech, benefiting consumers. Automated financial advice tools typically purport to offer a low-cost option for financial advice derived from insights from consumer data and statistical analysis and provided through an accessible interface using state-of-the-art processing to identify and respond to consumers’ financial aims. The commonly stated aspiration of governments and regulators in supporting the development of these and other fintech products is to promote innovation and to provide low-cost, reliable, and effective financial services to consumers.Footnote 20 Some fintech providers express aspirations to be more inclusive and empower ordinary people to participate in the financial and banking sectors.Footnote 21

There are undoubted attractions in such aspirations.Footnote 22 The majority of consumers do not seek financial planning advice,Footnote 23 probably because it is perceived as being too expensive.Footnote 24 Yet many consumers find financial matters difficult or confusing. This is due to a combination of factors, including low financial literacy, limits on time, and the impact of behavioural biases on decision-making. In principle, automation should allow financial services providers to lower the cost and improve the consistency of advice,Footnote 25 as well as providing the convenience of an on-demand service.Footnote 26 Additionally, by using consumers’ own data, automated financial advice tools have the potential to be uniquely tailored to those consumers’ individual circumstances.Footnote 27 Indeed, this is one of the premises behind Australia’s consumer data right, which aims to give consumers control over their data to promote innovation and competition in the banking sector.Footnote 28

Currently, the two main kinds of automated financial advice tools are robo-advisers and budgeting apps.Footnote 29 Though these tools will no doubt evolve, they provide a simpler, less personalised service than might be envisaged by the ‘AI’ label commonly attached to them.

2.2.1 Robo-Advisers

Robo-advisersFootnote 30 provide ‘automated financial product advice using algorithms and technology and without the direct involvement of a human adviser’.Footnote 31 In principle, robo-advice might cover automated advice about any topic relevant to financial management, such as budgeting, borrowing, investing, superannuation, retirement planning, and insurance. Currently, most robo-advisers provide automated investment advice and portfolio management.Footnote 32

Typically, robo-advice services begin with consumers answering a questionnaire about their goals, expectations, and aptitude for risk. An investment profile for consumers is derived from this information, based on their goals and capacity to bear risk. An algorithm matches consumers’ profiles with an investment portfolio available through the advisory firm to produce an investment recommendation.Footnote 33 Should a consumer choose to follow the advice and invest in that portfolio, many robo-advisers will also manage the portfolio on an ongoing basis, keeping it within the parameters recommended for the consumer. Consumers generally pay a fee for the service provided by the robo-adviser, often a percentage of the amount invested, with minimum investment amounts required to access the service.

Robo-advice is sometimes described as ‘trading with AI’.Footnote 34 This language might be thought to suggest specialised insights into the stock market uniquely tailored to consumers’ needs and arrived at through sophisticated machine learning models. The practice is more straightforward. At the time of writing, robo-advisers do not rely on state-of-the-art AI technology, such as using neural networks to process data points and make predictions about stock market moves, or link individual profiles to unique investment strategies. As Baker and Dellaert explain, the matching process will be based on ‘a model of how to optimise the fit between the attributes of the financial products available to the consumer and the attributes of the consumers who are using the robo-advisor’.Footnote 35 The robo-adviser will typically build the consumer profile based on the entry questionnaire and match this with an investment strategy established using financial modelling techniques and based on the investment packages already offered by the firm. The process will usually have been automated through some form of expert system – a hand-coded application of binary rules identified by humans. Ongoing management of the consumer’s portfolio will be done on a similar basis, often using exchange-traded funds (ETFs) that ‘require no or less active portfolio management’.Footnote 36
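
To give a sense of what such a hand-coded matching process could look like, the following sketch (our own simplified assumption, not a description of any actual robo-adviser) scores an entry questionnaire, maps the score to a risk band, and maps the band to one of a firm’s pre-built ETF portfolios. The question weights, thresholds, and portfolio mixes are invented for illustration.

```python
# Hypothetical expert-system sketch: questionnaire -> risk score -> risk band -> portfolio.
QUESTION_WEIGHTS = {
    "investment_horizon_years": 0.5,  # longer horizon -> greater capacity to bear risk
    "loss_tolerance_1_to_5": 2.0,     # self-reported tolerance for losses
    "income_stability_1_to_5": 1.0,
}

PORTFOLIOS = {
    "conservative": {"bond ETF": 0.8, "equity ETF": 0.2},
    "balanced":     {"bond ETF": 0.5, "equity ETF": 0.5},
    "growth":       {"bond ETF": 0.2, "equity ETF": 0.8},
}


def risk_score(answers: dict) -> float:
    """Weighted sum of questionnaire answers - a fixed, human-authored formula."""
    return sum(weight * answers[question] for question, weight in QUESTION_WEIGHTS.items())


def recommend(answers: dict) -> dict:
    score = risk_score(answers)
    if score < 10:
        band = "conservative"
    elif score < 18:
        band = "balanced"
    else:
        band = "growth"
    return PORTFOLIOS[band]


print(recommend({"investment_horizon_years": 20,
                 "loss_tolerance_1_to_5": 4,
                 "income_stability_1_to_5": 3}))  # maps to the growth portfolio
```

Ongoing rebalancing can be expressed in the same rule-based way, which is why such tools sit closer to expert systems than to state-of-the-art machine learning.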

Unlike human financial advisers, robo-advice tools typically do not provide budgeting or financial management advice to consumers.Footnote 37 Their recommendations are limited to the kinds of investment that will match consumers’ investment profiles. Robo investment advisers do not provide advice on matters of tax, superannuation, asset management, or savings, and they do not yet have the capacity to provide this more nuanced advice.Footnote 38 Sometimes robo-advice tools are used in conjunction with human financial advisers who will provide a broader suite of advice. Automated budgeting tools are also increasingly available on the market.

2.2.2 Budgeting Tools

Budgeting tools allow consumers to keep track of their spending by categorising expenses and providing dashboard-style visualisations of spending and saving.Footnote 39 Some banks offer budgeting tools to clients, and there are many independent service providers. Some neo-banks have, additionally, consolidated their brand around their in-built budgeting tools.Footnote 40

As with robo-advisers, automated budgeting tools collect information about consumers through an online questionnaire. Budgeting tools also typically require consumers to provide access to their bank accounts, in order to scrape transaction data from that account,Footnote 41 or alternatively rely on data-sharing arrangements.Footnote 42 Based on this information, the services provided by budgeting tools include categorising and keeping track of spending; providing recommendations about budgeting; and monitoring savings.Footnote 43 In some cases, the tools will transfer funds matching consumers’ savings goals to a specific account, provide bill reminders, make bill payments, monitor information about credit scores, and suggest potential savings through various cost-cutting measures or by identifying alternative service providers.Footnote 44 Additionally, automated budgeting tools may provide articles and opinion pieces about financial matters, such as crypto, non-fungible tokens (NFTs), or budgeting.Footnote 45 Some budgeting tools have a credit card option,Footnote 46 and at least one is linked to a ‘buy now-pay later’ provider.Footnote 47

Automated budgeting tools often describe their service as relying on AI.Footnote 48 Again, however, they do not, as might have been expected from this terminology, typically provide a personalised plan for saving derived from insights from multiple data points relating to consumers. They may use some form of natural language processing to identify spending items. Primarily, somewhat like robo-advisers, they rely on predetermined, human-coded rules for categorising spending and presenting savings. Most budgeting tools are free, although some charge for a premium service. This means that the tools are funded in other, more indirect ways, including through selling targeted advertisements on the app, fees for referrals, commissions for third-party products sold on the app, the sale of data (usually aggregated), and in some cases a percentage of the savings where a lower cost service or provider is identified for consumers.Footnote 49
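The predetermined, human-coded categorisation logic described above can be pictured with a short sketch of this kind; the merchant keywords, categories, and transactions are invented for illustration, and real tools may supplement such rules with natural language processing of transaction descriptions.

```python
# Illustrative only: keyword-based categorisation of scraped transaction data.
# Merchant keywords, categories, and transactions are hypothetical examples.

CATEGORY_RULES = {
    "groceries":     ["woolworths", "coles", "aldi"],
    "transport":     ["uber", "opal", "translink"],
    "subscriptions": ["netflix", "spotify"],
}

def categorise(description):
    """Apply predetermined, human-coded rules to a transaction description."""
    text = description.lower()
    for category, keywords in CATEGORY_RULES.items():
        if any(keyword in text for keyword in keywords):
            return category
    return "uncategorised"

transactions = [
    {"description": "WOOLWORTHS METRO 123", "amount": -54.20},
    {"description": "Netflix.com", "amount": -16.99},
    {"description": "Transfer from savings", "amount": 200.00},
]

# Dashboard-style summary: spending per category (outgoing amounts only).
summary = {}
for txn in transactions:
    if txn["amount"] < 0:
        category = categorise(txn["description"])
        summary[category] = summary.get(category, 0.0) - txn["amount"]

print(summary)  # {'groceries': 54.2, 'subscriptions': 16.99}
```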

2.3 Regulation and Risk in Consumer-Facing Fintech

This brief survey of available automated financial advice tools aimed at consumers suggests that they are operating with a fairly narrowly defined scope and using relatively straightforward digital processes. The tools may evolve in the future to make greater use of state-of-the-art AI, such as generative AI for providing general advice to consumers. However, even in their current form, the tools pose risks of harm to consumers that are more than fanciful, and similar to those raised by AI generally. The risks arising from AI are becoming increasingly well recognised, including poor efficacy, eroding privacy, data profiling, and bias and discrimination.Footnote 50 These risks are also inherent in consumer-facing fintech and automated financial advice tools. Moreover, we suggest they are only partially addressed by existing law. While financial services law commonly imposes robust obligations on those providing financial advice, those obligations may not squarely address the issues arising from the automated character of the advice, particularly issues of bias. Additionally, some automated advice tools, such as budgeting apps, may fall outside of these regimes. It is therefore worth considering these issues in more detail.

2.3.1 Quality of Performance

One of the notable features of automated financial advice is that consumers are unlikely to be able to scrutinise the quality of the service provided. Consumers will typically turn to automated advice tools because they lack skills in the relevant area, be it investing or budgeting. This lack of expertise makes it difficult for them to assess the quality of the advice they receive.Footnote 51 Compared with standard consumer goods, there is little information available to help consumers select between different tools. While some rankings of automated financial advice tools have emerged, these often focus on ease of use – the interface, syncing with bank data, fees charged – rather than the quality of the advice provided,Footnote 52 and some ranking reviews include sponsored content.Footnote 53 Accordingly, at least at this point in time, automated financial advice tools may be very much a credence good – one for which assertions of quality are all that is available to consumers. Unless the advice provided by the tools is patently bad, it may not be apparent that the poor quality of the automated process is to blame, as opposed to other external factors. Indeed, without a point of comparison, which is effectively excluded by the personalised nature of the service, it may be difficult for consumers to identify poor quality advice at all.

There is currently little academic research on the extent to which consumers are well-served by automated financial advice tools, particularly when weighted against possible costs in terms of data-sharing.Footnote 54 There have been a number of concerns raised in the literature about how well the tools may function. Although robo-advisers may operate in a manner that is more objective and consistent than human financial advisers,Footnote 55 this does not mean they operate free from the influence of commissions, which may be coded into their advisory process. It is unclear to what extent the recommendations provided by automated financial advice tools are personalised to consumers, as opposed to being generic or based on broad target groupings. Additionally, concerns have been raised about the relatively small number of investment options actually held by robo-investment advisers.Footnote 56 While automated budgeting tools may assist consumers by providing an accessible, straightforward, and visual way of monitoring spending,Footnote 57 this does not necessarily translate into long-term savingsFootnote 58 or improved financial literacy.Footnote 59 It is further possible that one of the main functions of at least some budgeting apps is to obtain consumers’ attention in order to market other financial services, such as credit cards, as well as the opportunity for the providers to profit from the use or sale of consumer data for marketing and data analytics.Footnote 60

In consumer transactions – particularly those that are complex, hard for consumers to monitor, or which carry the risk of high-impact harms – reliance is usually placed on regulators to take ‘ex ante’ measures for ensuring that the products supplied to consumers are acceptably safe and reliable. Financial services regulators in jurisdictions such as Australia, the United Kingdom, the European Union, and the United States of America have responded to the rise of robo-advisers by affirming that the existing regulatory regime applies to this form of advice.Footnote 61 Financial services providers are typically subject to an array of statutory conduct obligations, which overlap, albeit imperfectly, with their fiduciary duties arising under general law.Footnote 62 These statutory duties require firms to manage conflicts of interest,Footnote 63 act in their clients’ best interests,Footnote 64 ensure the suitability of the advice provided,Footnote 65 and take reasonable care in providing the advice.Footnote 66 These obligations should, in principle, assist in addressing concerns about the quality of the service provided by robo-advisers.Footnote 67 Nonetheless, some uncertainties remain, including, for example, whether the category-based approach deployed by robo-advisers fits with statutory requirements for personalised advice that is suitable for the individual consumer.Footnote 68

Regulators have additionally stated they expect firms providing robo-advice to have a ‘human in the loop’, in the sense of a person with ‘an understanding of the technology and algorithms used to provide digital advice’ and who is ‘able to review the digital advice generated by algorithms’.Footnote 69 Recommendations for a human overseeing the automated advice leave open the question of what that human should be monitoring – is it merely compliance with existing law applying to the giving of advice, or should there be other considerations taken into account, arising from the automated character of the advice?

In terms of the issue of automation, regulators have focused on the informational aspects of the process. They have emphasised that firms providing automated advice should give consideration to the way in which the information on which the advice is based is collected from consumers so as to ensure it is accurate and relevant, especially because there is no human intermediary to pick up possible discrepancies or errors. Regulators have also advised firms to take care in the way the advice is framed and explained, given the potential for misunderstanding and error in an automated process.Footnote 70 Issues of information gathering and reporting are important but they are only part of the challenge presented by automation for consumer protection law and policy. Moreover, they tend to represent a very individualised response to the risks of harm to consumers relying on automated financial advice, focusing on what consumers need to provide and understand, as opposed to the substance of the process through which advice is provided.

Notably, there is typically no specific law or regulatory guidance that applies to automated budgeting tools, which do not involve financial services. These tools will be subject to general consumer protection regimes, which typically prohibit misleading conduct, and mandate reasonable care and skill in the provision of services.Footnote 71 Uncertainties about the application of existing law to automated advice give rise to the question of whether other kinds of regulatory mechanisms are required to complement sector-specific or general consumer protection law in order to address the risks of harms that are specific to the use of AI and related digital technologies. In answering this question, we suggest that, at minimum, the risks around data collection and bias need to be considered.

2.3.2 The Data/Service Trade-Off

Automated financial advice tools operate on the core premise that consumers necessarily hand over data to obtain the service. A firm may be using consumer data for the dual purposes of providing advice and making a return for itself, such as through promoting other products for a commission on sales, up-selling add-on products for a fee, or on-selling the data for profit.Footnote 72 This behaviour is particularly apparent in the case of budgeting apps, which are typically free. As already noted, these services earn income through in-app advertising, fees, and commissions for referrals and potentially through selling aggregated consumer data, as well as targeted advertising. Notably, the privacy terms of automated budgeting tools commonly allow the collection of a wide range of consumer data and the use of that data for a number of purposes, including improving the service and related company group services, marketing, and, in aggregated form, sharing with third parties.Footnote 73

Data protection and privacy law impose obligations on the collection and processing of data.Footnote 74 However, the key requirements of notice and consent typically found under these regimes may easily be met in automated advice contexts because the exchange is at the heart of the transaction. Consumers provide their data in order to obtain the advice they need. While consumers may be unaware of how much information they are handing over, there is some evidence that consumers, particularly younger consumers, are prepared to trade data for cheaper, more efficient financial services.Footnote 75 Yet, to the extent consumers are ill-informed or under-informed about the quality of the service being provided by automated advice tools, the data-for-service bargain may look thinner than they might have at first thought.Footnote 76 Under the fintech service model, consumers provide personal data to obtain a personalised and cost-effective service but have few objective measures as to the quality of what is actually being provided.

2.3.3 Bias and Exclusion

In discussing legal and regulatory responses to the growing influence of AI and related technologies, much attention has rightly been given to their role in amplifying surveillance, bias and discrimination.Footnote 77 The technologies may use personal data to profile consumers, which in turn allows firms to differentiate between different consumers and groups with a high degree of precision, leading to risks of harmful manifestations of targeted advertising, or differential pricing.Footnote 78 Bias and error are particular concerns in firms’ use of AI technologies for decision-making, including in decisions about lending,Footnote 79 credit,Footnote 80 or insurance.Footnote 81 Automated lending decisions and credit scoring might be more objective than human-made decisions and might benefit cohorts that have previously been disadvantaged by human prejudice.Footnote 82 But there is no guarantee this is the case, and indeed the outcomes may be worse for these groups. Differential treatment of already disadvantaged groups – such as minority or low-income cohorts – may already be embedded in the practices and processes of the institution. To the extent this data is used in credit-scoring models or to inform automated decisions, historical unequal treatment may be amplifiedFootnote 83 or distorted.Footnote 84 Unequal treatment may, moreover, be difficult to identify or address where it is based, not directly on protected attributes, but on proxies for those attributes found in the training data.Footnote 85

Bias may also be embedded in automated advice tools used by consumers. For example, a robo-advice tool might exhibit bias by treating a person who takes time off work for childrearing as going through a period of precarious employment or being unable to hold down steady employment. An automated budgeting tool might exhibit bias by characterising products for menstruation as discretionary spending, instead of essentials. There are complex technical and policy decisions to be made in identifying and responding to the risks of unacceptable bias in automated financial advice tools.Footnote 86 Consumer protection and financial services law have not traditionally been central to this process, which is primarily the domain of human rights law. However, decisions based on historical prejudice may be unconscionable or unfair, contrary to consumer protection law. Certainly, in the United States, the Federal Trade Commission has indicated that discriminatory algorithms would fall foul of its jurisdiction to respond to unfair business practices.Footnote 87

A related issue concerns financial exclusion. Fintech innovators and government initiatives to encourage innovation often refer to an aspiration of promoting inclusion and overcoming exclusion.Footnote 88 There are few findings on the extent to which this aspiration is achievable. There are plausible reasons why automated advice tools may fail to assist, or assist adequately, consumers already excluded from mainstream financial or banking services, or consumers who have had less engagement with the mainstream banking system, such as where they are ‘not accessing or using financial services in a mainstream market in a way that is appropriate to their needs’.Footnote 89 Financially excluded consumers might not be offered meaningfully relevant advice tools because there is no relevant or useful data about them or because they are unlikely to be sufficiently profitable for financial services providers to develop products suited to them. These consumers may also find that the models on which the advisory tools are based are inaccurate when applied to their circumstances.

For example, investment tools may be of little value to consumers struggling to make ends meet and with no savings to invest. The models used by automated budgeting tools may have a poor fit with consumers living on very low incomes and for whom cutting back on discretionary spending is not an available option. In these circumstances, the tools will do little to improve equity, leaving unrepresented groups without advice, or relevantly personalised advice. Moreover, there may be a real risk of harm. Inept recommendations may subject consumers to harms of financial over-commitment or lull inexperienced consumers into a false sense of financial security. At a more systemic level, the availability of automated advice tools for improving financial well-being may feed into longstanding liberal rhetoric about the value of individual responsibility, as opposed to government initiatives for improving overall financial well-being.

It is possible to envisage services that would be useful to financially excluded consumers or consumers experiencing financial hardship, such as, for example, advice on affordable loans and other services.Footnote 90 Emma Leong and Jodi Gardner point to proposed uses of Open Banking in the United Kingdom to provide tools that assist with better managing fluctuating incomes.Footnote 91 The United Kingdom Financial Conduct Authority notes there are some apps on the market providing legal aid and welfare support advice.Footnote 92 These kinds of initiatives are likely to require a deliberate policy decision to initiate rather than arising ‘naturally’ in the market.Footnote 93 This is because, without government support, there would seem to be little commercial incentive for firms to invest in tools specifically tailored to low-income or otherwise marginalised consumers, from whom there is little likelihood of ongoing lucrative return to the firm.

2.4 New Regulatory Responses to the Risks of Automated Financial Advice

Automated financial advice tools illustrate the continuing uncertainties in regulating consumer-facing fintech and AI-informed consumer products. We have seen that regulators will need to adapt existing regimes to the new ways in which services are being provided to consumers, which requires attention not only to the risks in providing advice but also to those arising from the automation of advice. We further suggest that regulators need to be cognisant of the ways in which the AI and digital technologies informing the tools raise unique challenges for regulation. Opacity is a key concern in any regulatory response to making AI systems more accountable.Footnote 94 Automated financial advice tools may not currently rely on sophisticated AI, in the sense of deep learning or neural networks. Nonetheless, they are for commercial (if not technical) reasons highly opaque as to the technology being utilised and how recommendations are reached. Their very purpose is to provide advice without significant human intervention and at scale, which may amplify harms of bias or error in the system.Footnote 95 The tools typically purport to provide output on factors personal to the consumer, which may make it difficult to determine whether an adverse outcome is merely unfortunate, a systematic error, or a failure of a legal duty.Footnote 96

One response to navigating the challenges of regulating consumer-facing fintech is provided by the principles of ethical AI.Footnote 97 Principles of AI ethics are sometimes criticised as too general to be useful.Footnote 98 The principles operate as a form of soft law – they are not legally binding and must necessarily be supplemented by legal rules.Footnote 99 However, principles of AI ethics may be effective when operationalised to apply to specific contexts and when used in conjunction with other forms of regulation. The principles provide the preconditions for responsible use of AI and automated decision tools by firms. They also provide an indication of what regulators should demand from firms deploying such technology to reduce the risk of harm to consumers.Footnote 100 While there are various formulations of the principles of ethical AI,Footnote 101 key features typically include requirements for AI to be transparent and explainable,Footnote 102 along with mechanisms for ensuring accountabilityFootnote 103 and – at least in the Australian government’s principlesFootnote 104 – contesting adverse outcomes.Footnote 105

2.4.1 Transparency and Explanations

Principles of ethical AI typically require the use of such technologies to be transparent.Footnote 106 A starting place for transparency is to inform consumers when AI is being used in an interaction with them. Applied to automated financial advice tools, transparency must mean more than informing consumers that AI is being used to provide advice. Consumers choosing to turn to a robo-adviser or budgeting app will usually be aware of the automated character of the advice. Consumers also require transparency about the kind of technology being used to provide that advice: that is, whether it is based on machine learning or a hand-coded expert system. Additionally, a principle of transparency would require firms to inform consumers clearly about the scope of the service that is being provided, including the limitations of the technology in terms of personalised or expert advice.Footnote 107 If the advice provided is generalised to broadly defined categories of consumers, then this should be made clear, to counter consumers’ expectations of a unique and personal experience.

To the extent that consumers overestimate the capacities of fintech, transparency in the way the advice is produced is important to ground expectations and allow scrutiny of the veracity of claims made about it. For regulators, transparency is key to overseeing the performance of the tools. It is also key to allowing bias or distortions in the scope of advice to be identified, scrutinised and, in some instances, rectified. Regulation can support the imperative for firms to take these ethical demands seriously, including by treating them as necessary elements of statutory obligations of suitability or best interests, and essential to ensuring that claims about the operation of the product are not misleading. For example, the process of automation, and its claims to objectivity and consistency, may make consumers overconfident about the advice and more likely to act on it.Footnote 108 This might suggest an obligation on firms to be scrupulously clear on the limits of what can be provided by automated advice tools, and of the insights that can be derived from the technology being utilised.Footnote 109

Transparency in ethical AI is closely associated with initiatives in AI ‘explanations’ or ‘explainability’.Footnote 110 Explanations in this sense do not lie in the details of the code. Rather, explainable AI considers the kind and degree of information that should be provided in assisting the various stakeholders in the decision or recommendation process to understand why decisions were taken or the factors that were significant in reaching a recommendation.Footnote 111 Explainable AI aims to provide greater transparency into the basis for automated decisions, predictions, and recommendations.Footnote 112 There are different ways in which explanations may be provided, and indeed the field of study in computer science is still developing.Footnote 113 Possibilities include the use of counterfactuals, feature disclosure scores, weightings of influential factors, or a preference for simpler models where high levels of accuracy are not as imperative.Footnote 114 Overall, however, a requirement for explanations would assist in scrutinising the basis of the recommendations produced through automated financial advice tools.
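As a rough illustration of what a ‘weightings of influential factors’ explanation could look like for a very simple scoring model, consider the sketch below. The feature names, weights, and values are hypothetical, and real explainability tooling is considerably more sophisticated; the example is only meant to show the kind of output such an explanation produces.

```python
# Illustrative only: a feature-weighting explanation for a simple linear
# risk-capacity score. Feature names, weights, and values are hypothetical.

WEIGHTS = {
    "investment_horizon": 0.30,  # longer horizon -> greater capacity for risk
    "loss_tolerance":     0.45,  # self-reported comfort with drawdowns
    "income_stability":   0.25,  # stable income -> greater capacity for risk
}

def score(features):
    """Overall score: a weighted sum of normalised (0-1) inputs."""
    return sum(WEIGHTS[name] * value for name, value in features.items())

def explain(features):
    """Rank each factor by its contribution to the final score."""
    contributions = {name: WEIGHTS[name] * value for name, value in features.items()}
    return sorted(contributions.items(), key=lambda item: item[1], reverse=True)

consumer = {"investment_horizon": 0.8, "loss_tolerance": 0.4, "income_stability": 1.0}

print(f"score = {score(consumer):.2f}")
for factor, contribution in explain(consumer):
    print(f"  {factor}: {contribution:+.2f}")
# A counterfactual-style explanation can be built the same way, e.g. reporting
# how much 'loss_tolerance' would need to change before the recommendation changes.
```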

For lawyers, suggesting that a core element in the regulation of automated financial advice tools should focus on requirements related to transparency/explanations may seem a surprising aspiration.Footnote 115 Disclosure as a consumer protection strategy has increasingly fallen out of favour, particularly in the regulation of financial services and credit. The insights into decision-making from behavioural psychology have shown that mere information disclosure does not lead to better decisions by consumers. Consumers are subject to bounded rationality which means they rely on rules of thumb, heuristics, and behavioural bias rather than information.Footnote 116 In this light, it may be thought that any demand for greater transparency in automated financial advice tools may be of marginal utility. However, in a consumer protection context, consumers’ interests are substantially protected by regulators, and therefore transparency and explanations are relevant to both consumers seeking to protect their interests, and regulators charged with overseeing the market. Explanations should be provided in a form that is meaningful to the recipient.Footnote 117 This means that the detail and technicality of the information provided may need to differ between consumers and regulators.Footnote 118 In other words, the requirements should be scaled according to who is receiving the explanation.

2.4.2 Accountability

Principles of AI ethics typically require mechanisms for ensuring firms are accountable for the operation of the technologies.Footnote 119 To have impact, accountability will require more than allocating responsibility for supervising the AI to a person. There is little worth in having a ‘human in the loop’ in circumstances where the design of the AI or automated tool means it is difficult for that person genuinely to oversee, interrogate or control the tool.Footnote 120 Accountability for automated financial advice tools should therefore require a firm to implement systematic processes for reviewing the operations and performance of the tools.Footnote 121 A commitment to accountability may therefore require firms to have processes for scrutinising the data on which the AI is trained, its ongoing use, and its outputs.Footnote 122 A model for the kind of robust approach required might be found in the audits increasingly recommended for AI used in public sector decision-making.Footnote 123 Such processes should aim to ensure the veracity of the tools and are a critical element in addressing and redressing concerns about bias, equity, and inclusion.Footnote 124

2.4.3 Contestability

There is little utility in requiring transparency and accountability in AI systems if there is no mechanism available to those affected by an AI or automated decision for acting to challenge an outcome that is erroneous, discriminatory, or otherwise flawed. Some formulations of AI ethical principles respond to this issue by requiring processes for contesting adverse outcomes.Footnote 125 While accountability processes should aim to be proactive in preventing these kinds of problems, contestability is a mechanism for individuals, advocates, or regulators to respond to harms that do occur.

Lyons et al. make the point that little is currently known about ‘what contestability in relation to algorithmic decisions entails, and whether the same processes used to contest human decisions … are suitable for algorithmic decision-making’.Footnote 126 Contestability for automated decisions may not be able simply to follow existing mechanisms for dealing with individual complaints or concerns. The models informing AI may be complex and opaque, thus creating challenges for review by subject domain experts who may nonetheless be unfamiliar with the technology. Additionally, scale creates a challenge. This is because one of the benefits of automated decision-making is that it can operate on a scale that is not possible for human decision-makers or advisers, and yet this makes processes for individual review potentially unmanageable.

The inquiry into what contestability requires may be different in the context of automated financial advice tools, as opposed to public sector use of automated decision-making. Consumers using automated advice tools will not be challenging a decision made about their rights to access public resources or benefits. Rather they will be challenging the advice given to them, the consistency of this advice with any representations about the tool, or compliance with any applicable regulatory regimes. Nonetheless, complexity and scale remain significant challenges. It is possible that the field of consumer protection law may have insights given its focus on both legal rights and structural mechanisms for protecting consumers’ interests in circumstances where there are considerable imbalances in power, resources, and information, which in some ways mirrors concerns around AI contestability. For example, in this context of automated financial advice tools, contestability for poor outcomes may come through the oversight provided by ombudsmen and regulators, rather than traditional litigation. These inquiries have the capacity to look at systemic errors, thus bringing expertise and capacity to review processes through which advice or recommendations are provided, rather than necessarily reopening every decision.

2.5 Conclusion

The triad of money, power, and AI collides in fintech innovation, which sees public and private sector support for using AI, along with blockchain and big data, in the delivery of financial services. Currently, the most prominent forms of fintech available to consumers are automated advice tools for investing and budgeting. These tools offer the advantages of low-cost, convenient, and consistent advice on matters consumers often find difficult. Without discounting these attractions, we have argued that the oft-stated aspiration of automated financial advice tools to democratise personal finance should not distract attention from their potential to provide only a marginally useful service, while extracting consumer data and perpetuating the exclusion of some consumer cohorts from adequate access to credit, advice, and banking. From this perspective, consumer-facing fintech provides an instructive example of the need for careful regulatory attention to the use of AI and related technologies even in seemingly low-risk contexts. Fintech tools that hold out to consumers a promise of expertise and assistance should genuinely be fit for purpose. Consumers are unlikely to be able to monitor this quality themselves. As such, robust standards of transparency, accountability, and contestability that facilitate good governance and allow adequate regulatory oversight are crucial, even for these modest applications of AI.

3 Leveraging AI to Mitigate Money Laundering Risks in the Banking System

Doron Goldbarsht
Footnote *
3.1 Introduction

Money laundering involves the transfer of illegally obtained money through legitimate channels so that its original source cannot be traced.Footnote 1 The United Nations estimates that the amount of money laundered each year represents 2–5 per cent of global gross domestic product (GDP); however, due to the surreptitious nature of money laundering, the total could be much higher.Footnote 2 Money launderers conceal the source, possession, or use of funds through a range of methods of varying sophistication, often involving multiple individuals or institutions across several jurisdictions to exploit gaps in the financial economy.Footnote 3

As major facilitators in the global movement of money, banks carry a high level of responsibility for protecting the integrity of the financial system by preventing and obstructing illicit transactions. Many of the financial products and services they offer are specifically associated with money laundering risks. To ensure regulatory compliance in the fight against financial crime, banks must develop artificial intelligence (AI) capabilities that keep pace with emerging money laundering methods and create systems that effectively target suspicious behaviour.Footnote 4

‘Smart’ regulation in the financial industry requires the development and deployment of new strategies and methodologies. Technology can assist regulators, supervisors, and regulated entities by alleviating the existing challenges of anti-money laundering (AML) initiatives. In particular, the use of AI can speed up risk identification and enhance the monitoring of suspicious activity by acquiring, processing, and analysing data rapidly, efficiently, and cost-effectively. It thus has the potential to facilitate improved compliance with domestic AML legal regimes. While the full implications of emerging technologies remain largely unknown, banks would be well advised to evaluate the capabilities, risks, and limitations of AI – as well as the associated ethical considerations.

This chapter will evaluate compliance with the Financial Action Task Force (FATF) global standards for AML,Footnote 5 noting that banks continue to be sanctioned for non-compliance with AML standards. The chapter will then discuss the concept of AI, which can be leveraged by banks to identify, assess, monitor, and manage money laundering risks.Footnote 6 Next, the chapter will examine the deficiencies in the traditional rule-based systems and the FATF’s move to a more risk-oriented approach, which allows banks to concentrate their resources where the risks are particularly high.Footnote 7 Following this, the chapter will consider the potential for AI to enhance the efficiency and effectiveness of AML systems used by banks, as well as the challenges posed by its introduction. Finally, the chapter will offer some concluding thoughts.

3.2 Enforcement and Detection: The Cost of Non-compliance

The FATF sets global standards for AML, with more than 200 jurisdictions committed to implementing its recommendations.Footnote 8 It monitors and assesses how well countries fulfil their commitment through legal, regulatory, and operational measures to combat money laundering (as well as terrorist financing and other related threats).Footnote 9 Pursuant to the FATF recommendations, banks must employ customer due diligence (CDD) measures.Footnote 10 CDD involves the identification and verification of customer identity through the use of reliable, independent source documents and data. Banks should conduct CDD for both new and existing business relationships.Footnote 11 They have a duty to monitor transactions and, where there are reasonable grounds to suspect criminal activity, report them to the relevant financial intelligence agency.Footnote 12 Banks must conduct their operations in ways that withstand the scrutiny of customers, shareholders, governments, and regulators. There are considerable consequences for falling short of AML standards.

In a 2021 report, AUSTRAC, Australia’s financial intelligence agency, assessed the nature and extent of the money laundering risk faced by Australia’s major banks as ‘high’. The report highlighted the consequences for customers, the Australian financial system, and the community at large.Footnote 13 It drew attention to impacts on the banking sector – including financial losses, increased compliance costs, lower share prices, and increased risk of legal action from non-compliance – as well as reputational impacts on Australia’s international economic security.Footnote 14

In this climate of heightened regulatory oversight, banks continue to be sanctioned for failing to maintain sufficient AML controls. In 2009, Credit Suisse Group was fined US$536 million for illegally removing material information, such as customer names and bank names, so that wire transfers would pass undetected through the filters at US banks. The violations were conducted on behalf of Credit Suisse customers in Iran, Sudan, and other sanctioned countries, allowing them to move hundreds of millions of dollars through the US financial system.Footnote 15 Also in 2009, Lloyds Banking Group was fined US$350 million after it deliberately falsified customer information in payment records, ‘repairing’ transfers so that they would not be detected by US banks.Footnote 16 In 2012, US authorities fined HSBC US$1.9 billion in a money laundering settlement.Footnote 17 That same year, the ING Bank group was fined US$619 million for allowing money launderers to illegally move billions of dollars through the US banking system.Footnote 18 The Commonwealth Bank of Australia was fined A$700 million in 2017 after it failed to comply with AML monitoring requirements and failed to report suspicious matters worth tens of millions of dollars.Footnote 19 Even after becoming aware of suspected money laundering, the bank failed to meet its CDD obligations while continuing to conduct business with suspicious customers.Footnote 20 In 2019, fifty-eight AML-related fines were issued worldwide, totalling US$8.14 billion – more than double the amount for the previous year.Footnote 21 Westpac Bank recently agreed to pay an A$1.3 billion fine – an Australian record – for violating the Anti-Money Laundering and Counter-Terrorism Financing Act 2006. Westpac had failed to properly report almost 20 million international fund transfers, amounting to over A$11 billion, to AUSTRAC, thereby exposing Australia’s financial system to criminal misuse.Footnote 22 In 2020, Citigroup agreed to pay a US$400 million fine after engaging in what US regulators called ‘unsafe and unsound banking practices’, including with regard to money laundering.Footnote 23 The bank had previously agreed to a US$97.4 million settlement after ‘failing to safeguard its systems from being infiltrated by drug money and other illicit funds’.Footnote 24 The severity of these fines reflects the fact that non-compliance with AML measures in the banking industry is unacceptable to regulators.Footnote 25 More recently, AUSTRAC accepted an enforceable undertaking from National Australia Bank to improve the bank’s systems, controls, and record keeping so that they are compliant with AML laws.Footnote 26

The pressure on banks comes not only from increased regulatory requirements, but also from a marketplace that is increasingly concerned with financial integrity and reputational risks.Footnote 27 A bank’s failure to maintain adequate systems may have consequences for its share price and its customer base. Citigroup, for example, was fined in 2004 for failing to detect and investigate suspicious transactions. The bank admitted to regulators that it had ‘failed to establish a culture that ensured ongoing compliance with laws and regulations’. Within one week of the announcement by regulators, the value of Citigroup shares had declined by 2.75 per cent.Footnote 28

It is, therefore, in the best interests of the banks themselves to manage risks effectively and to ensure full compliance with the domestic legislation that implements the FATF recommendations, including by retaining senior compliance staff.Footnote 29 Despite the high costs involved, banks have largely expressed a strong commitment to improving their risk management systems to protect their own integrity and that of the financial system – as well as to avoid heavy penalties, such as those detailed above.Footnote 30 Yet, while banks continue to invest in their capabilities in this area, they also continue to attract fines. This suggests that the current systems are inadequate for combating financial crime.

The current systems rely on models that are largely speculative and rapidly outdated.Footnote 31 Fraud patterns change constantly to keep up with technological advancements, making it difficult to distinguish between money laundering and legitimate transactions.Footnote 32 But while emerging technologies can be exploited for criminal activity, they also have the potential to thwart it.Footnote 33 AI has proven effective in improving operational efficiency and predictive accuracy in a range of fields, while also reducing operational costs.Footnote 34 Already, some banks have begun using AI to automate data in order to detect suspicious transactions. Indeed, AI could revolutionise the banking industry, including by improving the banking experience in multiple ways.Footnote 35

3.3 Leveraging AI for AML

AI simulates human thought processes through a set of theories and computerised algorithms that execute activities that would normally require human intellect.Footnote 36 It is, in short, the ability of a computer to mimic the capabilities of the human mind. The technology uses predictive analytics through pattern recognition with differing degrees of autonomy. Machine learning is one of the most effective forms of AI for AML purposes.Footnote 37 It can use computational techniques to gain insights from data, recognise patterns, and create algorithms to execute tasks – all without explicit programming.Footnote 38 Standard programming, in contrast, operates by specific rules that are developed to make inferences and produce outcomes based on input data.Footnote 39 Machine learning initiatives allow AML systems to conduct risk assessments with varying levels of independence from human intervention.Footnote 40 Deep learning, for example, is a form of machine learning that builds an artificial neural network by conducting repeated tasks, allowing it to improve the outcome continuously and solve complex problems by adapting to environmental changes.Footnote 41 Although there are many machine learning techniques, AI has four main capabilities for AML purposes: anomaly detection, suspicious behaviour monitoring, cognitive capabilities, and robotic process automation.Footnote 42 The effectiveness of these capabilities depends largely on processing power, the variability of data, and the quality of data, thus requiring some degree of human expertise.

The processes involved in AI can be broadly grouped into supervised and unsupervised techniques. Supervised techniques use algorithms to learn from a training set of data, allowing new data to be classified into different categories. Unsupervised techniques, which often operate without training data, use algorithms to separate data into clusters that hold unique characteristics. Researchers maintain that algorithmic processes have the potential to detect money laundering by classifying financial transactions at a larger scale than is currently possible – and with greater accuracy and improved cost-efficiency.Footnote 43
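The distinction can be pictured in a few lines of Python. The sketch below assumes scikit-learn and NumPy and uses entirely synthetic transaction features; a random forest classifier stands in for the supervised family, and an isolation forest (an anomaly-detection method) for the unsupervised family. It is a toy example, not a working AML model.

```python
# Illustrative only: supervised vs unsupervised techniques on synthetic data.
# Features: [amount, number of counterparties, share of cross-border transfers].
import numpy as np
from sklearn.ensemble import RandomForestClassifier, IsolationForest

rng = np.random.default_rng(seed=0)
normal = rng.normal(loc=[100, 2, 0.05], scale=[30, 1, 0.05], size=(500, 3))
unusual = rng.normal(loc=[9000, 15, 0.90], scale=[2000, 3, 0.05], size=(10, 3))
X = np.vstack([normal, unusual])
y = np.array([0] * 500 + [1] * 10)  # labels exist only for the supervised case

# Supervised: learn from labelled historical cases, then classify new activity.
clf = RandomForestClassifier(random_state=0).fit(X, y)
new_txn = [[8500, 12, 0.8]]
print("supervised prediction:", clf.predict(new_txn))        # [1] -> suspicious

# Unsupervised: no labels; flag observations that sit apart from the rest.
detector = IsolationForest(random_state=0).fit(X)
print("unsupervised prediction:", detector.predict(new_txn))  # -1 -> anomaly
```

In practice, of course, the supervised route depends on the availability and quality of labelled past cases, a constraint returned to below in the discussion of data quality.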

3.4 The Shift to a Risk-Based Approach

One of the most significant obstacles for banks seeking to meet their compliance obligations is the difficulty of appropriately detecting, analysing, and mitigating money laundering risks – particularly during CDD and when monitoring transactions.Footnote 44 Currently, transaction monitoring and filtering technology is primarily rule-based, meaning that it is relatively simplistic and predominantly focused on automated and predetermined risk factors.Footnote 45 The system operates as a ‘decision tree’, in which identified outliers generate alerts that require investigation by other parties. Thus, when a suspicious activity is flagged, a compliance officer must investigate the alert and, if appropriate, generate a suspicious matter report.Footnote 46
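A stylised sketch of such a rule-based ‘decision tree’ is set out below. The thresholds, rule names, and jurisdiction list are hypothetical and deliberately simplistic, but they illustrate why such systems are easy to document and audit, and equally easy for launderers who know the thresholds to circumvent.

```python
# Illustrative only: predetermined, threshold-driven monitoring rules.
# Thresholds, rule names, and the jurisdiction list are hypothetical.

HIGH_RISK_JURISDICTIONS = {"Jurisdiction X", "Jurisdiction Y"}

def rule_based_alerts(txn):
    """Return the predetermined rules (if any) that a transaction triggers."""
    alerts = []
    if txn["amount"] >= 10_000:
        alerts.append("transaction at or above reporting threshold")
    elif 9_000 <= txn["amount"] < 10_000:
        alerts.append("possible structuring just below reporting threshold")
    if txn["destination"] in HIGH_RISK_JURISDICTIONS:
        alerts.append("transfer to high-risk jurisdiction")
    if txn["daily_count"] > 20:
        alerts.append("unusually high transaction frequency")
    return alerts

txn = {"amount": 9_500, "destination": "Jurisdiction X", "daily_count": 3}
for alert in rule_based_alerts(txn):
    # Each alert is escalated to a compliance officer, who must investigate it
    # and decide whether to file a suspicious matter report; in practice the
    # overwhelming majority of such alerts prove to be false positives.
    print("ALERT:", alert)
```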

In order to minimise the costs and time required to investigate suspicious transactions, it is essential to detect them accurately in the first instance.Footnote 47 In rule-based systems, the task is made all the more difficult by the high false positive rate of the alerts, which is believed to be above 98 per cent.Footnote 48 If risk assessment in low-risk situations is overly strict, unmanageable numbers of false positive identifications can cause significant operational costs.Footnote 49 Conversely, if risk assessments are too lax, illicit transactions can slip through unnoticed.Footnote 50 These static reporting processes struggle to handle increasingly large volumes of data, making them impractical at the scale required by banks. It has thus become necessary for banks to choose between the efficiency and the effectiveness of their AML processes.

Moreover, the rule-based systems rely on human-defined criteria and thresholds that are easy for money launderers to understand and circumvent. The changing patterns of fraud make it difficult for rule-based systems and policies to maintain their effectiveness, thus allowing money laundering transactions to be misidentified as genuine.Footnote 51 AML systems are designed to detect unusual transaction patterns, rather than actual criminal behaviour. Rule-based systems thus have the potential to implicate good customers, initiate criminal investigations against them, and thereby damage customer relationships – all without disrupting actual money laundering activities. This is because the systems were designed for a relatively slow-moving fraud environment in which patterns would eventually emerge and be identified and then incorporated into fraud detection systems. Today, criminal organisations are themselves leveraging evolving technologies to intrude into organisational systems and proceed undetected.Footnote 52 For example, AI allows criminals to use online banking and other electronic payment methods to move illicit funds across borders through the production of bots and false identities that evade AML systems.Footnote 53

According to the FATF, implementing a risk-based approach is the ‘cornerstone of an effective AML/CFT system and is essential to properly managing risks’.Footnote 54 Yet many jurisdictions continue to use antiquated rule-based systems, leading to defensive compliance. To keep pace with modern crime and the increasing volume and velocity of data, banks need a faster and more agile approach to the detection of money laundering. They should reconsider their AML strategies and evolve from traditional rule-based systems to more sophisticated risk-based AI solutions. By leveraging AI, banks can take a proactive and preventive approach to fighting financial crime.Footnote 55

3.5 Advantages and Challenges
3.5.1 Advantages

New technologies are key to improving the management of regulatory risks. Banks have begun exploring the use of AI to assist analysts with what has traditionally been a manually intensive task, thereby improving the performance of AML processes.Footnote 56 In 2018, US government agencies issued a joint statement encouraging banks to use innovative methods, including AI, to further efforts to protect the integrity of the financial system against illicit financial activity.Footnote 57 The United Kingdom Financial Conduct Authority has supported a series of public workshops aimed at encouraging banks to experiment with novel technologies to improve the detection of financial crimes.Footnote 58 AUSTRAC has invested in data analysis and advanced analytics to assist in the investigation of suspicious activity.Footnote 59 Indeed, developments in AI offer an opportunity to fundamentally transform the operations of banks, equipping them to combat modern threats to the integrity of the financial system.Footnote 60 And, where AI reaches the same conclusions as traditional analytical models, this can confirm the accuracy of such assessments, ultimately increasing the safeguards available to supervisors.Footnote 61 Although machine learning remains relatively underutilised in the area of AML, it offers the potential to greatly enhance the efficiency and effectiveness of existing systems.Footnote 62

3.5.1.1 Improved Efficiency

Incorporating AI in AML procedures can reduce the occurrence of false positives and increase the identification of true positives. In Singapore, the United Overseas Bank has already piloted machine learning to enhance its AML surveillance by implementing an AML ‘suite’ that includes know-your-customer (KYC), transaction monitoring, name screening, and payment screening processes.Footnote 63 The suite provides an additional layer of scrutiny that leverages machine learning models over traditional rule-based monitoring systems, resulting in real benefits. In relation to transaction monitoring, the recognition of unknown suspicious patterns saw an increase of 5 per cent in true positives and a decrease of 40 per cent in false positives. There was a more than 50 per cent reduction in false positive findings in relation to name screening.Footnote 64

AI has the capability to analyse vast volumes of data, drawing on an increased number of variables. This means that the quality of the analysis is enhanced and the results obtained are more precise.Footnote 65 At the same time, utilising AI in AML can increase productivity by reducing staff work time by 30 per cent.Footnote 66 By combining transactional data with other information, such as customer profile data, it is possible to investigate AML risks within days. In contrast, traditional methods that review isolated accounts often require months of analysis. Additionally, banks can use AI to facilitate the live monitoring of AML standards, which can also improve governance, auditability, and accountability.Footnote 67 Overall, the use of machine learning has resulted in a 40 per cent increase in operational efficiency, reinforcing the notion that investment in AI initiatives may have positive implications for the reliability of AML processes.Footnote 68

3.5.1.2 Reduced Compliance Costs

By leveraging AI, banks have an opportunity to reduce costs and prioritise human resources in complex areas of AML.Footnote 69 It has been estimated that incorporating AI in AML compliance procedures could save the global banking industry more than US$1 trillion by 2030Footnote 70 and reduce its costs by 22 per cent over the next twelve years.Footnote 71 The opportunities for cost reduction and improved productivity and risk management offer convincing incentives for banks to engage AI and machine learning to achieve greater profitability.Footnote 72 With increased profits, banks could further improve the accuracy of AML systems and, in the process, advance the goals of AML.Footnote 73

3.5.1.3 Increased Inclusiveness

Digital tools have the potential to increase financial inclusion, promoting more equitable access to the formal financial sector.Footnote 74 Customers with less reliable forms of identification – including First Nations peoples and refugees – can access banking services through solutions such as behavioural analytics, which reduces the burden of verification to one instance of customer onboarding. Utilising AI makes banks less reliant on traditional CDD, offering enhanced monitoring capabilities that can be used to manage verification data.Footnote 75

3.5.2 Challenges

Despite the growing recognition of the potential for AI to improve the accuracy, speed, and cost-effectiveness of AML processes, banks remain slow to adopt these technologies due to the regulatory and operational challenges involved.Footnote 76 Significant hurdles to wider adoption persist and these may continue to stifle innovations in AML compliance.

3.5.2.1 Interpretation

The difficulty of interpreting and explaining the outcomes derived from AI technologies is among the main barriers to securing increased support for these tools.Footnote 77 The Basel Committee on Banking Supervision has stated that, in order to replicate models, organisations should be able to demonstrate developmental evidence of theoretical construction, behavioural characteristics, and key assumptions; the types and use of input data; specified mathematical calculations; and code-writing language and protocols.Footnote 78 Yet artificial neural networks may comprise hundreds of millions of connections, each contributing in some small way to the outcomes produced.Footnote 79 Indeed, as technological models become increasingly complex, the inner workings of the algorithms become more obscure and difficult to decode, creating ‘black boxes’ in decision-making.Footnote 80

In the European Union, the increased volume of data processing led to the adoption of the General Data Protection Regulation (GDPR) in 2016.Footnote 81 The GDPR aims to ensure that the data of individuals is protected – particularly in relation to AML procedures, which often collect highly personal data.Footnote 82 With respect to AI and machine learning, Recital 71 specifies that there is a right to obtain an explanation of the decision reached after algorithmic assessment. Because regulated entities remain responsible for the technical details of AI solutions, fears persist concerning accountability and interpretability where technologies cannot offer robust transparency.Footnote 83 While the GDPR expects that internal compliance teams will understand and defend the algorithms utilised by digital tools, compliance officers working in banks require expertise and resources to do so. It may take a long period of time for even the most technologically literate of supervisors to adjust to new regulatory practices.Footnote 84 Efforts to improve the interpretation of AI and machine learning are vital if banks are to enhance risk management and earn the trust of supervisors, regulators, and the public.

3.5.2.2 Data Quality

The data utilised to train and manage AI systems must be of high quality.Footnote 85 Machine learning models are not self-operating; they require human intervention to ensure their optimal functioning.Footnote 86 In other words, machines cannot think for themselves. Rather, they merely execute and learn from their encoded programming.Footnote 87 Since machine learning is only as good as its input, it is crucial that the models used are based on relevant and diverse data.Footnote 88 Where money-laundering transactions have not previously been identified by the system, it may be difficult for machine learning to detect future instances.Footnote 89 Moreover, false positives would be learned into the system if the training data included them.Footnote 90 Therefore, it is essential that data quality is monitored on an ongoing basis to ensure thorough data analysis and regular data cleansing. This serves to highlight the vital importance of vigilant human collaboration in the technological implementation of AI to ensure that models are well maintained and remain effective.Footnote 91
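By way of illustration, ongoing data-quality monitoring of this kind can begin with very simple recurring checks. The sketch below assumes the pandas library and invented field names; it stands in for the far more extensive data-governance processes a bank would actually need.

```python
# Illustrative only: basic recurring data-quality checks on a labelled
# training set of transactions. Field names and data are invented.
import pandas as pd

def data_quality_report(df):
    """Summarise issues that should prompt cleansing before (re)training."""
    return {
        "rows": len(df),
        "duplicate_rows": int(df.duplicated().sum()),
        "missing_values": df.isna().sum().to_dict(),
        "label_balance": df["is_suspicious"].value_counts(normalize=True).to_dict(),
    }

training_data = pd.DataFrame({
    "amount":        [120.0, 9500.0, None, 75.0, 75.0],
    "country":       ["AU", "XX", "AU", "AU", "AU"],
    "is_suspicious": [0, 1, 0, 0, 0],
})

print(data_quality_report(training_data))
# e.g. flags one duplicate row, one missing amount, and a highly imbalanced label.
```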

3.5.2.3 Collaboration

The difficulty of explaining AI outcomes, especially those of machine learning processes, has sparked concerns that are exacerbated by the lack of data harmonisation between actors and users.Footnote 92 Currently, customer privacy rules and information security considerations prevent banks from warning each other about potentially suspicious activity involving their customers. While some customers rely on a single financial services provider for all their banking requirements, criminals often avoid detection by moving illicit proceeds through numerous financial intermediaries.Footnote 93 The FATF has reported that intricate schemes involving complex transaction patterns are difficult and sometimes impossible to detect without information from counterparty banks or other banks providing services to the same customer.Footnote 94 Nevertheless, the FATF’s rules to prevent ‘tipping off’ support the objective of protecting the confidentiality of criminal investigations.Footnote 95

While data standardisation and integrated reporting strategies simplify regulatory reporting processes, they also raise various legal, practical, and competition issues.Footnote 96 It is likely that the capacity of banks to model will continue to be limited by the financial transactions that they themselves process.Footnote 97 Moreover, where information is unavailable across multiple entities, some technological tools may not be cost-effective.Footnote 98 On the other hand, stronger collaboration may introduce the risk of data being exploited on a large scale.Footnote 99 There is as yet no ‘model template’ in relation to private sector information sharing that complies with AML and data protection and privacy requirements. However, information sharing initiatives are being explored and should be considered in targeted AI policy developments.

3.5.2.4 Privacy

Due to the interconnectedness of banks and third party service providers, cyber risks are heightened when tools such as AI and machine learning are used and stored in cloud platforms. Concentrating digital solutions might exacerbate these risks.Footnote 100 These regulatory challenges reinforce the desire to maintain human-based supervisory processes so that digital tools are not replacements but rather aids in the enhancement of regulatory systems.Footnote 101 Article 22 of the GDPR provides that subjects of data analysis have the right not to be subject to a decision with legal or significant consequences ‘based solely on automated processing’.Footnote 102 The FATF also maintains that the adoption of AI technology in AML procedures requires human collaboration, due to particular concerns that technology is incapable of identifying emerging issues such as regional inequalities.Footnote 103

3.5.2.5 Bias

Although algorithmic decision-making may appear to offer an objective alternative to human subjectivity, many AI algorithms replicate the conscious and unconscious biases of their programmers.Footnote 104 This may lead to unfairly targeting the financial activities of certain individuals or entities, or it may produce risk profiles that deny certain persons access to financial services. For example, AI and machine learning are increasingly being used in relation to KYC models.Footnote 105 Recommendation 10 of the FATF standards requires banks to monitor both new and existing customers to ensure that their transactions are legitimate.Footnote 106 Without the incorporation of AI, existing KYC processes are typically costly and labour-intensive.Footnote 107 Utilising AI can help evaluate the legitimacy of customer documentation and calculate the risks for banks where applications may seem to be fake.Footnote 108 The data input team should ensure that it does not unintentionally encode systemic bias into the models by using attributes such as employment status or net worth.Footnote 109 Transactional monitoring is less vulnerable to such biases, as it does not involve personal data such as gender, race, and religion. Nonetheless, AI and machine learning algorithms could implicitly correlate those indicators based on characteristics such as geographical location.Footnote 110 If not implemented responsibly, AI has the potential to exacerbate the financial exclusion of certain populations for cultural, political, or other reasons.Footnote 111 The use of these digital tools may thus lead to unintended discrimination.Footnote 112 Such concerns are heightened by the fact that the correlations are neither explicit nor transparent.Footnote 113 Therefore, regulators must remain mindful of the need to limit bias, ensure fairness, and maintain controls. The evolving field of discrimination-aware data mining may assist the decision-making processes that flow through information technology to ensure that they are not affected on unjust or illegitimate grounds.Footnote 114 It does this by recognising statistical imbalances in data sets and leveraging background information about discrimination-indexed features to identify ‘bad’ patterns that can then be either flagged or filtered out entirely.Footnote 115
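A very small sketch of the kind of statistical-imbalance check on which discrimination-aware approaches rely is given below. The groups, counts, and the 80 per cent threshold are illustrative assumptions rather than a recommended standard, and geographic region is used only as an example of a proxy characteristic.

```python
# Illustrative only: checking whether monitoring alerts fall disproportionately
# on one group. Groups, counts, and the 0.8 threshold are assumptions.

alerts_by_region = {
    # region: (customers, customers alerted by the monitoring model)
    "region_a": (1_000, 40),
    "region_b": (1_000, 120),
}

rates = {region: alerted / customers
         for region, (customers, alerted) in alerts_by_region.items()}

# Flag the rule set for review when one group's alert rate is far higher than
# another's (here, when the lower rate is less than 80% of the higher rate).
ratio = min(rates.values()) / max(rates.values())
print("alert rates:", rates, "ratio:", round(ratio, 2))
if ratio < 0.8:
    print("Disparate alert rates detected: review the rules or features "
          "(e.g. proxies such as location) driving the difference.")
```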

3.5.2.6 Big Data

The term ‘big data’ refers to large, complex, and ever-changing data sets and the technological techniques that are relevant to their analysis.Footnote 116 Policymakers and technical organisations have expressed significant concerns over the potential misuse of data.Footnote 117 There are also apprehensions that the lack of clarity around how data is handled may lead to potential violations of privacy.Footnote 118 In addition, there are uncertainties surrounding the ownership of data, as well as its cross-border flow.Footnote 119 Nonetheless, the primary focus should remain on the use of big data, rather than its collection and storage, as issues pertaining to use have the potential to cause the most egregious harm.Footnote 120

3.5.2.7 Liability

The issues discussed above raise questions of liability regarding who will carry the burden of any systemic faults that result in the loss or corruption of data and related breaches of human rights.Footnote 121 While artificial agents are not human, they are not without responsibility.Footnote 122 Because it is impossible to punish machines, questions of liability are left to be determined between system operators and system providers.Footnote 123 This situation can be likened to a traffic accident in which an employee injures a pedestrian while driving the company truck. While the employer and the employee may both be liable for the injuries, the truck is not.Footnote 124 These issues enliven questions of causation. Will the use of AI and machine learning be considered a novus actus interveniens that breaks the chain of causation and prevents liability from being attributed to other actors?Footnote 125 The answer to this question will largely depend on the characteristics of artificial agents and whether they will be considered as mere tools or as agents in themselves, subject to liability for certain data breaches or losses. Despite the impact of automation processes on decision-making, doubts remain as to whether AI uses ‘mental processes of deliberation’.Footnote 126 Due to the collaborative nature of AI technology and human actors, it is generally assumed that AI is merely an instrument and that accountability will be transferred to banks and developers.Footnote 127 Therefore, where supervisors can be considered legal agents for the operation of artificial technology, they may incur liability on the basis of that agency relationship.Footnote 128 Alternatively, where system developers are negligent as far as security vulnerabilities are concerned, they may be liable for the harm caused by unauthorised users or cyber criminals who exploit these deficiencies.Footnote 129 Thus, supervisors and developers have a duty of care to ensure that they take reasonable steps to prevent harm or damage.Footnote 130 It is possible that, as a result of its continued advancement, machine learning may eventually be granted legal personhood. Rights and obligations would therefore belong to the technology itself, excusing operators and developers from liability.Footnote 131 However, this viewpoint remains highly contested on the basis that AI does not possess ‘free will’, since it is programmed by humans and has little volition of its own.Footnote 132 Banks must not underestimate the importance of these concerns. They should ensure that AI and machine learning are carefully implemented with well-designed governance in place so that risks and liabilities are not unintentionally heightened by the use of new technologies.Footnote 133 Strong checks and balances are required at all stages of the development process.Footnote 134

3.5.2.8 Costs

Banks must consider the costs of maintaining, repairing, and adapting new AI systems.Footnote 135 While AI models have the potential to improve the cost-efficiency of AML compliance, it may be difficult for banks – especially smaller institutions – to budget for high-level AI solutions.Footnote 136 Moreover, there are associated indirect costs that require firms to invest in additional funding – for example, updating existing database systems to make them compatible with new AI solutions and hiring staff with appropriate technical expertise.Footnote 137

3.5.3 Consideration

AI and machine learning have the potential to provide banks with effective tools to improve risk management and compliance with regard to AML. However, if these new technologies are not introduced with care and diligence, they could adversely affect AML systems by introducing greater burdens and risks. Some of the challenges presented by AI are similar to those posed by other technology-based solutions aimed at identifying and preventing money laundering. Machine learning, however, offers a relatively new and unique method of classifying information based on a feedback loop that enables the technology to ‘learn’ through determinations of probability.Footnote 138 Banks can thus analyse and classify information through learned anomaly detection algorithms, a technique that is more effective than traditionally programmed rule-based systems.Footnote 139 At the same time, the utilisation of AI can exacerbate the complexity and severity of the challenges inherent in AML compliance, particularly in relation to interpretation and explanation.Footnote 140 As discussed above, machine learning algorithms usually do not provide a rationale or reasoning for the outcomes they produce, making it difficult for compliance experts to validate the results and deliver clear reports to regulators.Footnote 141 This is particularly concerning for banks, where trust, transparency, and verifiability are of great importance to ensure satisfaction and regulatory confidence.Footnote 142 Nonetheless, in the current regulatory climate, it seems almost inevitable that banks will continue to leverage AI for AML compliance.
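
The contrast between rule-based monitoring and learned anomaly detection can be sketched in a few lines of Python. The fragment below, which uses scikit-learn and entirely invented transaction features, trains an unsupervised model that infers what ‘normal’ activity looks like from the data itself rather than applying hand-written thresholds; it is an illustration of the technique, not a production AML system.

    import numpy as np
    from sklearn.ensemble import IsolationForest

    # Hypothetical features: amount, hour of day, transfers in the last 24 hours.
    rng = np.random.default_rng(0)
    normal = rng.normal(loc=[50, 14, 2], scale=[20, 3, 1], size=(1000, 3))
    unusual = np.array([[9500.0, 3.0, 40.0], [12000.0, 2.0, 55.0]])
    transactions = np.vstack([normal, unusual])

    # Learned anomaly detection: no fixed rules, the model estimates 'normality'.
    model = IsolationForest(contamination=0.01, random_state=0).fit(transactions)
    scores = model.decision_function(transactions)  # lower = more anomalous
    flags = model.predict(transactions)             # -1 marks likely anomalies

In practice, as discussed above, the outputs of such models still need to be validated and explained to compliance staff and regulators, which is precisely where the interpretability challenges arise.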

3.6 Conclusion

The traditional framework for AML compliance is largely premised on old banking models that do not adequately keep pace with the modern evolution of financial crime. Traditional rule-based monitoring systems are clearly inadequate to detect the increasingly sophisticated methods and technologically advanced strategies employed by criminals. Banks are burdened with false positives while most money laundering transactions remain unidentified, posing a significant threat to the integrity of banks and the financial system itself. Banks that do not meet their compliance obligations expose themselves to significant pecuniary losses and reputational damage.Footnote 143

The FATF has highlighted the potential of innovative technologies such as AI and machine learning to make AML measures faster, cheaper, and more effective than current monitoring processes. While rule-based algorithms remain relevant, harnessing AI and machine learning holds great promise for increasing the accuracy of risk identification and heightening its efficiency due to the large analytical capacity of these processes. While these initiatives may be costly and risky to implement, they offer an excellent return on investment for banks that seek to strengthen their internal AML regime. The implementation of AI is increasingly recognised as the next phase in the fight against financial crime.

Due to the various regulatory and operational challenges that are likely to arise, banks should approach the adoption and implementation of AI with cautious optimism. They should ensure that sophisticated AI and machine learning models can be adequately understood and explained. To achieve optimal outcomes, these technologies should operate in conjunction with human analysis, particularly in areas of high risk. However, banks should be aware that the emphasis on collaboration between analysts, investigators, and compliance officers with regard to AI technology may introduce its own legal and ethical complications relating to privacy, liability, and various unintended consequences, such as customer discrimination.

In the increasingly complex environment of financial crime and AML regulation, banks should thoroughly consider the advantages and challenges presented by AI and machine learning as they move towards transforming risk assessment and leveraging AI to mitigate money laundering risks.

4 AI Opacity in the Financial Industry and How to Break It

Zofia Bednarz and Linda Przhedetsky Footnote *
4.1 Introduction

Automated Banks – the financial entities using automated decision-making (ADM) and AI – feed off the culture of secrecy that is pervasive and entrenched in automated processes across sectors from ‘Big Tech’ to finance to government agencies, allowing them to avoid scrutiny, accountability, and liability.Footnote 1 As Pasquale points out, ‘finance industries profit by keeping us in the dark’.Footnote 2

An integral part of the financial industry’s business model is the use of risk scoring to profile consumers of financial services, for example in the form of credit scoring, which is a notoriously opaque process.Footnote 3 The use of non-transparent, almost ‘invisible’ surveillance processes and the harvesting of people’s data is not new: financial firms have always been concerned with collecting, aggregating, and combining data for the purposes of predicting the value of their customers through risk scoring.Footnote 4 AutomationFootnote 5 introduces a new level of opacity in the financial industry, for example through the creation of AI models for which explanations are not provided – either deliberately, or due to technical explainability challenges.Footnote 6

In this chapter we argue that the rise of AI and ADM tools contributes to opacity within the financial services sector, including through the intentional use of the legal system as a ‘shield’ to prevent scrutiny and blur accountability for harms suffered by consumers of financial services. A wealth of literature critiques the status quo, showing that consumers are disadvantaged by information asymmetries,Footnote 7 complicated consent agreements,Footnote 8 information overload,Footnote 9 and other tactics that leave consumers in the dark as to whether, when, and how they have been subject to automated systems. If consumers seek to access a product or service, it is often a requirement that they be analysed and assessed using an automated tool, for example, one that determines a credit score.Footnote 10 The potential harms are interlinked and range from financial exclusion to digital manipulation to targeting of vulnerable consumers and privacy invasions.Footnote 11 In our analysis we are mostly concerned with discrimination as an example of such harm,Footnote 12 as it provides a useful illustration of problems enabled by opacity, such as the significant difficulty in determining whether unfair discrimination has occurred at all, understanding the reasons for the decision affecting the person or group, and accessing redress.

The rules we examine will differ among jurisdictions, and our aim is not to provide a comprehensive comparative analysis of all laws that provide potential protections against scrutiny and increase the opacity of ADM-related processes of Automated Banks. We are interested in exploring certain overarching tendencies, using examples from various legal systems, and showing how financial firms may take advantage of the complex legal and regulatory frameworks applicable to their operations in relation to the use of AI and ADM tools.

As the use of AI and ADM continues to grow in financial services markets, consumers are faced with the additional challenge of knowing about, and considering, how their ever-expanding digital footprint may be used by financial institutions. The more data exists about a person, the better their credit score (of course within certain limits, such as paying off debts on time).Footnote 13 The exact same mechanism may underpin ‘open banking’ schemes: consumers who do not have sufficient data – often vulnerable people, such as domestic violence victims, new immigrants, or Indigenous people – cannot share their data with financial entities and may be excluded from accessing some products or offered higher prices, even if their actual risk is low.Footnote 14

In Australia, consumers have claimed that they have been denied loans due to their use of takeaway food services and digital media subscriptions.Footnote 15 Credit rating agencies such as Experian explicitly state that they access data sources that reflect consumers’ use of new financial products, including ‘Buy Now Pay Later’ schemes.Footnote 16 As more advanced data collection, analysis, and manipulation technologies continue to be developed, there is potential for new categories of data to emerge. Already, companies can draw surprising inferences from big data. For example, studies have shown that seemingly trivial Facebook data can, with reasonable accuracy, predict a range of attributes that have not been disclosed by users: in one study, liking the ‘Hello Kitty’ page correlated strongly with a user having ‘[d]emocratic political views and to be of African-American origin, predominantly Christian, and slightly below average age’.Footnote 17
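
The mechanism behind such inferences is straightforward to sketch, even if the scale and accuracy of real systems are far greater. The toy Python fragment below, with entirely synthetic data, fits a simple classifier that predicts an attribute users never disclosed from nothing more than their pattern of page ‘likes’; it is only meant to illustrate how undisclosed traits can be inferred from seemingly trivial data.

    import numpy as np
    from sklearn.linear_model import LogisticRegression

    # Synthetic data: rows are users, columns indicate whether they 'liked' a page.
    rng = np.random.default_rng(1)
    likes = rng.integers(0, 2, size=(500, 20))

    # An attribute the users never disclosed, correlated with liking page 3.
    noise = rng.random(500) < 0.1
    undisclosed = np.where(noise, 1 - likes[:, 3], likes[:, 3])

    # A simple classifier recovers the attribute from the likes alone.
    model = LogisticRegression().fit(likes, undisclosed)
    print(model.score(likes, undisclosed))  # high accuracy on this toy data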

Unless deliberate efforts are made, both in the selection of data sets and the design and auditing of ADM tools, inferences and proxy data will continue to produce correlations that may result in discriminatory treatment.Footnote 18

This chapter proceeds as follows. We begin Section 4.2 with a discussion of rules that allow corporate secrecy around AI models and their data sources to exist, focusing on three examples of such rules. We discuss the opacity of credit scoring processes and the limited explanations that consumers can expect in relation to a financial decision made about them (Section 4.2.1), trade secrecy laws (Section 4.2.2), and data protection rules which do not protect de-identified or anonymised information (Section 4.2.3). In Section 4.3 we analyse frameworks that incentivise the use of ADM tools by the financial industry, thus providing another ‘protective layer’ for Automated Banks, again discussing two examples: financial product governance regimes (Section 4.3.1) and ‘open banking’ rules (Section 4.3.2). The focus of Section 4.4 is on potential solutions. We argue it is not possible for corporate secrecy and consumer rights to coexist, and provide an overview of potential regulatory interventions, focusing on preventing Automated Banks from using harmful AI systems (Section 4.4.1), aiding consumers to understand when ADM is used (Section 4.4.2), and facilitating regulator monitoring and enforcement (Section 4.4.3). The chapter concludes with Section 4.5.

4.2 Rules That Allow Corporate Secrecy to Exist
4.2.1 Opacity of Credit Scoring and the (Lack of) Explanation of Financial Decisions

Despite their widespread use in the financial industry, credit scores are difficult for consumers to understand or interpret. A person’s credit risk has traditionally been calculated based on ‘three C’s’: collateral, capacity, and character.Footnote 19 Due to the rise of AI and ADM tools in the financial industry, the ‘three C’s’ are increasingly being supplemented and replaced by diverse categories of data.Footnote 20 An interesting example can be found in FICO scores, which arguably represent the first large-scale process in which automated computer models replaced human decision-making.Footnote 21 FICO, one of the best-known credit scoring companies,Footnote 22 explains that their scores are calculated according to five categories: ‘payment history (35%), amounts owed (30%), length of credit history (15%), new credit (10%), and credit mix (10%)’.Footnote 23 These percentage scores are determined by the company to give consumers an understanding of how different pieces of information are weighted in the calculation of a score, and the ratios identified within FICO scores will not necessarily reflect the weightings used by other scoring companies. Further, while FICO provides a degree of transparency, the way in which a category such as ‘payment history’ is calculated remains opaque: consumers are not privy to what is considered a ‘good’ or a ‘bad’ behaviour, as represented by data points in their transaction records.Footnote 24
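
Although the real formula is proprietary, the published category weights can be used to convey the basic arithmetic. The Python sketch below is purely illustrative: the sub-scores are invented, the mapping onto the familiar 300–850 range is a simplification, and FICO’s actual model is far more complex.

    # Published FICO category weights (see above); everything else is hypothetical.
    weights = {
        "payment_history": 0.35,
        "amounts_owed": 0.30,
        "length_of_credit_history": 0.15,
        "new_credit": 0.10,
        "credit_mix": 0.10,
    }
    # Invented sub-scores on a 0-1 scale for one consumer.
    subscores = {
        "payment_history": 0.9,
        "amounts_owed": 0.6,
        "length_of_credit_history": 0.4,
        "new_credit": 0.8,
        "credit_mix": 0.7,
    }
    weighted = sum(weights[k] * subscores[k] for k in weights)  # 0.705
    score = 300 + weighted * (850 - 300)                        # roughly 688
    print(round(score))

The opacity discussed above lies precisely in the step this sketch glosses over: how a raw transaction history is converted into a ‘payment history’ sub-score in the first place.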

Globally, many credit scoring systems (both public and private) produce three-digit numbers within a specified range to determine a consumer’s creditworthiness. For example, privately operated Equifax and Trans Union Empirica score consumers in Canada between 300 and 900,Footnote 25 whereas credit bureaus in Brazil score consumers between 1 and 1,000.Footnote 26 In an Australian context, scores range between 0 and 1,000, or 1,200, depending on the credit reporting agency.Footnote 27 By contrast, other jurisdictions use letter-based ratings, such as Singapore’s HH to AA scale which corresponds with a score range of 1,000–2,000,Footnote 28 or blacklists, such as Sweden’s payment default records.Footnote 29

Credit scoring, it turns out, is surprisingly accurate in predicting financial breakdowns or future loan delinquency,Footnote 30 but the way different data points are combined by models is not something even the model designer can understand using just intuition.Footnote 31 Automated scoring processes become even more complex as credit scoring companies increasingly rely on alternative data sources to assess consumers’ creditworthiness, including ‘predictions about a consumer’s friends, neighbors, and people with similar interests, income levels, and backgrounds’.Footnote 32 And a person’s credit score is just one of the elements that lenders – the Automated Banks – feed into their models to determine a consumer’s risk score. It has been reported that college grades and the time of day an individual applies for a loan have been used to determine a person’s access to credit.Footnote 33 These types of data constitute ‘extrinsic data’ sources, which consumers are unknowingly sharing.Footnote 34

The use of alternative data sources is presented as a way of expanding consumers’ access to credit in instances where there is a lack of quality data (such as previous loan repayment history) to support the underwriting of a consumer’s loan.Footnote 35 Applicants are often faced with a ‘Catch-22 dilemma: to qualify for a loan, one must have a credit history, but to have a credit history one must have had loans’.Footnote 36 This shows how ADM tools offer more than just new means to analyse greater-than-ever quantities of data: they also offer a convenient excuse for Automated Banks to effectively use more data.

Of course, increasing reliance on automated risk scoring is not the origin of unlawful discrimination in financial contexts. However, it is certainly not eliminating discriminatory practices either: greater availability of more granular data, even when facially neutral, leads to the reinforcement of existing inequalities.Footnote 37 Automated Banks have also been shown to use alternative data to target more vulnerable consumers, whom they were not able to reach or identify when only using traditional data on existing customers.Footnote 38 The qualitative change that AI tools promise to bring is to ‘make the data talk’: all data is credit data, if we have the right automated tools to analyse them.Footnote 39

Collection, aggregation, and use of such high volumes of data, including ‘extrinsic data’, also make it more difficult, if not impossible, for consumers to challenge financial decisions affecting them. While laws relating to consumer lending (or consumer financial products in general) in most jurisdictions provide that some form of explanation of a financial decision needs to be made available to consumers,Footnote 40 these rules will rarely be useful in the context of ADM and AI tools used in processes such as risk scoring.

This is because AI tools operate on big data. Too many features of a person are potentially taken into account for any feedback to be meaningful. The fact that risk scores and lending decisions are personalised makes it even more complicated for consumers to compare their offer with anyone else’s. This can be illustrated by the case of the Apple credit card,Footnote 41 which has shown the complexity of the investigation necessary for people to be able to access potential redress: when applying for personalised financial products, consumers cannot immediately know what features are being taken into account by financial firms assessing their risk, and subsequent investigation by regulators or courts may be required.Footnote 42 The lack of a right to meaningful explanation of credit scores and lending decisions based on the scores makes consumers facing Automated Banks and the automated credit scoring system quite literally powerless.Footnote 43

4.2.2 Trade Secrets and ADM Tools in Credit Scoring

The opacity of credit scoring, or risk scoring more generally, and other automated assessment of clients that Automated Banks engage in, is enabled by ADM tools which ‘are highly valuable, closely guarded intellectual property’.Footnote 44 Complementing the limited duty to provide explanation of financial decisions to consumers, trade secrets laws allow for even more effective shielding of the ADM tools from scrutiny, including regulators’ and researchers’ scrutiny.

While trade secrets rules differ between jurisdictions, the origin and general principles that underpin these rules are common across all the legal systems: trade secrets evolved as a mechanism to protect diverse pieces of commercial information, such as formulas, devices, or patterns from competitors.Footnote 45 These rules fill the gap where classic intellectual property law, such as copyright and patent law, fails – and it notably fails in relation to AI systems, since algorithms are specifically excluded from its protection.Footnote 46 Recent legal developments, for example the European Union Trade Secrets Directive,Footnote 47 or the US Supreme Court case of Alice Corp. v CLS Bank,Footnote 48 mean that to protect their proprietary technologies, companies are now turning to trade secrets.Footnote 49 In practice, this greatly reduces the transparency of the ADM tools used: if these cannot be protected through patent rights, they need to be kept secret.Footnote 50

The application of trade secrets rules leads to a situation in which financial entities – for example, lenders or insurers – that apply third-party automated tools to assess the creditworthiness of their prospective clients might not be able to access the models and data those tools use. Using third-party tools is a common practice, and the proprietary nature of the tools and data used to develop and train the models means that financial entities using these tools may be forced to rely on the supplier’s specifications as to their fairness, as they may not be able to access the code themselves.Footnote 51

Secrecy of ADM tools of course has implications for end users, who will be prevented from challenging credit models, and is also a barrier for enforcement and research.Footnote 52 Trade secret protections apply not only to risk scoring models, but often extend also to data sets and inferences generated from information provided by individuals.Footnote 53 Commercial entities openly admit they ‘invest significant amounts of time, money and resources’ to draw inferences about individuals ‘using […] proprietary data analysis tools’, a process ‘only made possible because of the [companies’] technical capabilities and value add’.Footnote 54 This, they argue, makes the data sets containing inferred information a company’s intellectual property.Footnote 55

The application of trade secrets rules to credit scoring in a way that affects the transparency of the financial system is not exactly new: ‘[t]he trade secrecy surrounding credit scoring risk models, and the misuse of the models coupled with the lack of governmental control concerning their use, contributed to a financial industry wide recession (2007–2008)’.Footnote 56

In addition to trade secrets laws, a sui generis protection of source code of algorithms is being introduced in international trade law through free trade agreements,Footnote 57 which limit governments from mandating access to the source code. The members of the World Trade Organization (WTO) are currently negotiating a new E-commerce trade agreement, which may potentially include a prohibition on government-mandated access to software source code.Footnote 58 WTO members, including Canada, the EU, Japan, South Korea, Singapore, Ukraine, and the United States support such a prohibition,Footnote 59 which in practice will mean a limited ability for states to adopt laws that would require independent audits of AI and ADM systems.Footnote 60 It is argued that adoption of the WTO trade agreement could thwart the adoption of the EU’s AI Act,Footnote 61 demonstrating how free trade agreements can impose another layer of rules enhancing the opacity of AI and ADM tools.

4.2.3 ‘Depersonalising’ Information to Avoid Data and Privacy Protection Laws: Anonymisation, De-identification, and Inferences

Automated Banks’ opacity is enabled by the express exclusion of ‘anonymised’ or ‘de-identified’ data from the scope of data and privacy protection laws such as the GDPR.Footnote 62 In its Recital 26, the GDPR defines anonymised information as not relating to ‘an identified or identifiable natural person’ or as ‘data rendered anonymous in such a manner that the data subject is not or no longer identifiable’. This allows firms to engage in various data practices, which purport to use anonymised data.Footnote 63 They argue they do not collect or process ‘personal information’, thus avoiding the application of the rules, and regulatory enforcement.Footnote 64 Also, consumers to whom privacy policies are addressed believe that practices focusing on information that does not directly identify them have no impact on their privacy.Footnote 65 This in turn may mean privacy policies are misrepresenting data practices to consumers, which could potentially invalidate their consent.Footnote 66

There is an inherent inconsistency between privacy and data protection rules and the uses and benefits that ADM tools using big data analytics promise. Principles of purpose limitation and data minimisationFootnote 67 require entities to delimit, quite strictly and in advance, how the data collected are going to be used, and prevent them from collecting and processing more data than necessary for that specific purpose. However, this is not how big data analytics, which fuels ADM and AI models, works.Footnote 68 Big data means that ‘all data is credit data’, incentivising the Automated Banks to collect as much data as possible, for any possible future purpose, potentially not known yet.Footnote 69 The exclusion of anonymised or de-identified data from the scope of the protection frameworks opens doors for firms to take advantage of enhanced analytics powered by new technologies. The contentious question is at which point information becomes, or ceases to be, personal information. If firms purchase, collect, and aggregate streams of data, producing inferences allowing them to describe someone in great detail, including their age, preferences, dislikes, size of clothes they wear and health issues they suffer from, their household size and income level,Footnote 70 but do not link this profile to the person’s name, email, physical address, or IP address – would it be personal information? Such a profile, it could be argued, represents a theoretical, ‘model’ person or consumer, built for commercial purposes through aggregation of demographic and other information available.Footnote 71

De-identified data may still allow a financial firm to achieve more detailed segmentation and profiling of their clients. There are risks of harms in terms of ‘loss of privacy, equality, fairness and due process’ even when anonymised data is used.Footnote 72 Consumers are left unprotected against profiling harms due to such ‘narrow interpretation of the right to privacy as the right to anonymity’.Footnote 73
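
Why removing names and email addresses does not, by itself, prevent profiling or re-identification can be shown with a minimal Python sketch. In the invented example below, every ‘de-identified’ record is still unique once a few ordinary attributes – so-called quasi-identifiers – are combined, which is what allows detailed segmentation and, potentially, linkage back to a person.

    import pandas as pd

    # Hypothetical records with direct identifiers (name, email) already removed.
    records = pd.DataFrame({
        "postcode":          ["2000", "2000", "2010", "2010", "2010"],
        "birth_year":        [1985, 1990, 1985, 1985, 1978],
        "gender":            ["F", "F", "M", "F", "M"],
        "spend_on_pharmacy": [420, 55, 310, 980, 12],
    })

    # How many records share each combination of quasi-identifiers?
    group_sizes = records.groupby(["postcode", "birth_year", "gender"]).size()

    # A group size of 1 means the combination is unique: the 'anonymous' record can
    # be linked to an individual if the same attributes appear in another data set.
    print((group_sizes == 1).sum(), "of", len(records), "records are unique")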

There is also discussion as to the status of inferences under data and privacy protection laws. Credit scoring processes are often based on inferences, where a model predicts someone’s features (and ultimately their riskiness or value as a client) on the basis of other characteristics that they share with others deemed risky by the model.Footnote 74 AI models may thus penalise individuals for ‘shopping at low-end stores’, membership in particular communities or families, and affiliations with certain political, religious, and other groups.Footnote 75 While AI-powered predictions about people’s characteristics are often claimed to be more accurate than those made by humans,Footnote 76 they may also be inaccurate.Footnote 77 The question is if such inferences are considered personal information protected by privacy and data laws.

Entities using consumers’ data, such as technology companies, are resisting the express inclusion of inferred information within the scope of data and privacy protections. For example, Facebook openly admitted that ‘[t]o protect the investment made in generating inferred information and to protect the inferred information from inappropriate interference, inferred information should not be subject to all of the same aspects of the [Australian Privacy Act] as personal information’.Footnote 78 The ‘inappropriate interference’ they mention refers to extending data correction and erasure rights to inferred information.

Further, there is an inherent clash between the operation of privacy and data protection rules and the inference processes AI tools are capable of carrying out. Any information, including sensitive information, may be effectively used by an ADM system, even though it only materialises as an internal encoding of the model and is not recorded in a human-understandable way. The lack of explicit inclusion of inferred information, and its use, within the privacy and data protection frameworks provides another layer of opacity shielding financial firms (as well as other entities) from scrutiny of their ADM tools.

When information is ‘depersonalised’ in some way: de-identified on purpose through the elimination of strictly personal identifiers,Footnote 79 through use of anonymous ‘demographic’ data, through ‘pseudonymisation’ practices, or because it is inferred from data held (either personal or already de-identified), the result is the same – privacy and data protection rules do not apply. The firms take advantage of that exclusion, sometimes balancing on the thin line between legal and illegal data processing, making their data practices non-transparent to avoid scrutiny by consumers and regulators.

As a US judge in a recent ruling put it: ‘[i]t is well established that there is an undeniable link between race and poverty, and any policy that discriminates based on credit worthiness correspondingly results in a disparate impact on communities of color’.Footnote 80 The data used in large-scale AI and ADM models is often de-identified or anonymised, but it inherently mirrors historical inequalities and biases, thus allowing the Automated Banks to claim impartiality and avoid responsibility for the unfairness of data used.

The reason why privacy and data protection rules lack clear consideration of certain data practices and processes enabled by AI may be due to these tools and processes being relatively new and poorly understood phenomena.Footnote 81 This status quo is however very convenient for the companies, who will often raise the argument that ‘innovation’ will suffer if more stringent regulation is introduced.Footnote 82

4.3 Rules That Incentivise the Use of ADM Tools by Financial Entities

In addition to offering direct pathways allowing Automated Banks to evade scrutiny of their AI and ADM models, legal systems and markets in the developed world have also evolved to incentivise the use of automated technology by financial entities. In fact, the use of ADM and AI tools is encouraged, or sometimes even mandated,Footnote 83 by legal and regulatory frameworks. After all, the fact that financial entities are told either to use the technology, or to achieve outcomes that can effectively only be reached by applying the tools in question, provides a very convenient excuse. Though this is mainly an unintended effect of the rules, it should not be ignored.

In this section, we discuss two examples of rules that increase the secrecy of AI or ADM tools used in the context of risk scoring: financial products governance rules and ‘open banking’ regimes.

4.3.1 Financial Products Governance Rules

Financial firms have always been concerned with collecting and using data about their consumers, to differentiate between more and less valuable customers. For example, insurance firms, even before AI profiling tools were invented (or at least before they were applied at a greater scale), were known to engage in practices referred to as ‘cherry-picking’ and ‘lemon-dropping’, such as setting up offices on higher floors of buildings with no lifts so that it would be harder for disabled (potential) clients to reach them.Footnote 84 There is a risk that widespread data profiling and the use of AI tools may exacerbate issues relating to consumers’ access to financial products and services. AI tools may introduce new biases or replicate historical biases present in data,Footnote 85 doing so more efficiently, in a way that is more difficult to discover, and at a greater scale than was possible previously.Footnote 86

An additional disadvantage resulting from opaque risk scoring systems is that consumers may miss out on the opportunity to improve their score (for example, through the provision of counterfactual explanations, or the use of techniques including ‘nearby possible worlds’).Footnote 87 In instances where potential customers who would have no trouble paying back loans are given unfavourable risk scores, two key issues arise: first, the bank misses out on valuable customers, and second, there is a risk that these customers’ rejections, if used as input data to train the selection algorithm, will reinforce existing biases.Footnote 88

Guaranteeing the suitability of financial services is a notoriously complicated task for policymakers and regulators. With disclosure duties alone proving largely unsuccessful in addressing the issue of consumers being offered financial products that are unfit for purpose, policymakers in a number of jurisdictions, such as the EU and its Member States, the United Kingdom, Hong Kong, Australia, and Singapore, have started turning to product governance regimes.Footnote 89 An important component of these financial product governance regimes is an obligation placed on financial firms, which issue and distribute financial products, to ensure their products are fit for purpose and to adopt a consumer-centric approach in the design and distribution of the products. In particular, a number of jurisdictions require financial firms to delimit the target market for their financial products directed at retail customers, and to ensure the distribution of the products within this target market. Such a target market is a group of consumers of a certain financial product who are defined by some general characteristics.Footnote 90

Guides issued by regulators, such as the European Securities and Markets AuthorityFootnote 91 and the Australian Securities and Investments Commission,Footnote 92 indicate which consumer characteristics are to be taken into account by financial firms. The consumers for whom the product is intended are to be identified according to their ‘likely objectives, financial situation, and needs’,Footnote 93 or five ‘categories’: the type of client, their knowledge and experience, financial situation, risk tolerance, and objectives and needs.Footnote 94 For issuers or manufacturers of financial products these considerations are mostly theoretical: as they might not have direct contact with clients, they need to prepare a potential target market, aiming at theoretical consumers and their likely needs and characteristics.Footnote 95 Both issuers and distributors need to take reasonable steps to ensure that products are distributed within the target market, which then translates to the identification of real consumers with specific needs and characteristics that should be compatible with the potential target markets identified. Distributors have to hold sufficient information about their end clients to be able to assess whether they can be included in the target market,Footnote 96 including:

– indicators about the likely circumstances of the consumer or a class of consumers (e.g. concession card status, income, employment status);

– reasonable inferences about the likely circumstances of the consumer or a class of consumers (e.g. for insurance, information inferred from the postcode of the consumer’s residential address); or

– data that the distributor may already hold about the consumer or similar consumers, or results derived from analyses of that data (e.g. analysis undertaken by the distributor of common characteristics of consumers who have purchased a product).Footnote 97

Financial products governance frameworks invite financial firms to collect data on consumers’ vulnerabilities. For example in Australia, financial firms need to consider vulnerabilities consumers may have, such as those resulting from ‘personal or social characteristics that can affect a person’s ability to manage financial interactions’,Footnote 98 as well as those brought about by ‘specific life events or temporary difficulties’,Footnote 99 in addition to vulnerabilities stemming from the product design or market actions.

The rationale of product governance rules is to protect financial consumers, including vulnerable consumers,Footnote 100 yet the same vulnerable consumers may be disproportionately affected by data profiling, thus inhibiting their access to financial products. Financial law is actively asking firms to collect even more data about their current, prospective, and past customers, as well as the general public. It provides more than a convenient excuse to carry out digital profiling and collect data for even more precise risk scoring – it actually mandates this.

4.3.2 How ‘Open Banking’ Increases Opacity

Use of AI and ADM tools, together with ever-increasing data collection feeding the data-hungry models,Footnote 101 is promoted as beneficial to consumers and markets, and endorsed by companies and governments. Data collection is thus held out as a necessary component of fostering AI innovation. Companies boast that AI insights allow them to offer personalised services, ‘tailored’ to individual consumers’ needs. The consulting firm McKinsey hails ‘harnessing the power of external data’, noting that ‘few organizations take full advantage of data generated outside their walls. A well-structured plan for using external data can provide a competitive edge’.Footnote 102

Policymakers use the same rhetoric of promoting ‘innovation’ and encourage data collection through schemes such as open banking.Footnote 103 The aim of open banking is to give consumers the ability to direct companies that hold financial data about them to make it available to financial (or other) companies of the consumer’s choice. Open banking thus makes it possible for organisations to access consumer information they could never obtain from a consumer directly, such as their transaction data for the past ten years.

Jurisdictions such as the EU, United Kingdom, Australia, and Hong Kong have recently adopted regulation promoting open banking, or ‘open finance’ more generally.Footnote 104 The frameworks are praised by the industry as ‘encourag[ing] the development of innovative products and services that help consumers better engage with their finances, make empowered decisions and access tailored products and services’.Footnote 105

While open banking is making it possible for financial firms to develop new products for consumers, the jury is still out as to the scheme’s universally positive implications for consumers and markets.Footnote 106 One thing that is clear, however, is that because of its very nature, open banking contributes to information and power asymmetry between consumers and Automated Banks.

Traditionally, in order to receive a financial product, such as a loan or an insurance product, consumers would have to actively provide relevant data, answering questions or prompts, in relation to their income, spending, age, history of loan repayments, and so on. Open banking – or open finance more broadly – means that consumers can access financial products without answering any questions. But these questions provided a level of transparency to consumers: they knew what they were being asked, and were likely to understand why they were being asked such questions. But when an individual shares their ‘bulk’ data, such as their banking transaction history, through the open banking scheme, do they really know what a financial firm is looking for and how it is being used? At the same time, in such a setting, consumers are deprived of control over which data to share (for example, they cannot just hide transaction data on payments they made to merchants such as liquor stores or pharmacies). The transparency for financial firms when data is shared is therefore significantly higher than in ‘traditional’ settings – but for consumers the process becomes more opaque.Footnote 107

4.4 Can Corporate Secrecy Coexist with Consumer Rights? Possible Regulatory Solutions

ADM tools contribute to maintaining corporate secrecy of Automated Banks, and as we argue in this chapter, legal systems perpetuate, encourage, and feed the opacity further. The opacity then increases the risk of consumer harm, such as discrimination, which is more difficult to observe, and more challenging to prove.

In this section we provide a brief outline of potential interventions that may protect against AI-facilitated harms, particularly if applied synchronously. This discussion does not aim to be exhaustive, but rather aims to show that something can be done to combat the opacity and resulting harms.

Interventions described in academic and grey literature can be divided into three broad categories: (1) regulations that prevent businesses from using harmful AI systems in financial markets, (2) regulations that aid consumers to understand when ADM systems are used in financial markets, and (3) regulations that facilitate regulator monitoring and enforcement against AI-driven harms in financial markets. Approaches to design (including Transparency by DesignFootnote 108) are not included in this list, and while they may contribute to improved consumer outcomes, they are beyond the scope of this chapter.

The somewhat provocative title of this section asks if corporate secrecy is the real source of the AI-related harms in the described context. The interventions outlined below focus on preventing harms, but can the harms really be prevented if the opacity of corporate practices and processes is not addressed first? Corporate secrecy is the major challenge to accountability and scrutiny, and consumer rights, including right to non-discrimination, cannot be guaranteed in an environment as opaque as it currently is. We submit that the regulatory interventions urgently needed are the ones that prevent secrecy first and foremost. AI and ADM tools will continue to evolve, and technology as such is not a good regulatory targetFootnote 109 – the focus must be on harm prevention. Harms can only be prevented if the practices of financial firms, such as credit scoring discussed in this chapter, are transparent and easily monitored both by regulators and consumers.

4.4.1 Preventing Automated Banks from Designing Harmful AI Systems

International and national bodies in multiple jurisdictions have recently adopted, or are currently debating, various measures with an overarching aim of protecting consumers from harm. For example, the US Federal Trade Commission has provided guidance to businesses using AI, explaining that discriminatory outcomes resulting from the use of AI would contravene federal law.Footnote 110 The most comprehensive approach to limiting the use of particular AI tools can be found in the EU’s proposed Artificial Intelligence Act. Its Recital 37 specifically recommends that ‘AI systems used to evaluate the credit score or creditworthiness of natural persons should be classified as high-risk AI systems’. This proposal is a step towards overcoming some opaque practices, through the provision of ‘clear and adequate information to the user’ along with other protections that enable authorities to scrutinise elements of ADM tools in high-risk contexts.Footnote 111 Early criticisms of the proposed Act note that while a regulatory approach informed by the context in which ADM is used has some merit, it does not cover potentially harmful practices such as emotion recognition and remote biometric identification,Footnote 112 which could be used across a range of contexts, generating data sets that may later be used in other markets such as financial services.

An alternative approach to regulating AI systems before they are used in markets is to limit the sources of information that can be used by ADM tools, or restrict the ways in which information can be processed. In addition to privacy protections, some jurisdictions have placed limitations on the kinds of information that can be used to calculate a credit score. For example, in Denmark, the financial services sector can use consumers’ social media data for marketing purposes but is explicitly prohibited from using this information to determine creditworthiness.Footnote 113 Similarly, the EU is considering a Directive preventing the use of personal social media and health data (including cancer data) in the determination of creditworthiness.Footnote 114 Such prohibitions are, however, a rather tricky solution: it may be difficult for the regulation to keep up with a growing list of data that should be excluded from analysis.Footnote 115 One way of overcoming this challenge would be to avoid focusing on restricted data sources, and instead create a list of acceptable data sources, which is a solution applied for example in some types of health insurance.Footnote 116

Imposing limits on how long scores can be kept and/or relied on by Automated Banks is another important consideration. In Australia, credit providers are bound by limits that stipulate the length of time that different pieces of information are held on a consumer’s file: credit providers may only keep financial hardship information for twelve months from the date the monthly payment was made under a financial hardship arrangement, whereas court judgements may be kept on record for five years after the date of the decision.Footnote 117 In Denmark, where the credit reporting system operates as a ‘blacklist’ of people deemed more likely to default, a negative record (for instance, an unpaid debt) is deleted after five years, regardless of whether or not the debt has been paid.Footnote 118 A challenge with these approaches is that the amount of time particular categories of data may be kept may not account for proxy data, purchased data sets, and/or proprietary scoring and profiling systems that group consumers according to complex predictions that are impossible to decode.

4.4.2 Aiding Consumers to Understand When ADM Systems Are Used in Financial Services

Despite the development of many principles-based regulatory initiatives by governments, corporates, and think tanks,Footnote 119 few jurisdictions have legislated protections that require consumers to be notified if and when they have been assessed by an automated system.Footnote 120 In instances where consumers are notified, they may be unable to receive an understandable explanation of the decision-making process, or to seek redress through timely and accessible avenues.

Consumers face a number of challenges in navigating financial markets, such as understanding credit card repayment requirementsFootnote 121 and accurately assessing their credit.Footnote 122 For individuals, it is crucial to understand how they are being scored, as this makes it possible for them to identify inaccuracies,Footnote 123 and to question decisions made about them. Credit scoring is notoriously opaque and difficult to understand, so consumers are likely to benefit from requirements for agencies to simplify and harmonise how scores are presented.Footnote 124 An example of a single scoring system can be found in Sri Lanka, where credit ratings, or ‘CRIB Scores’, are provided by the Credit Information Bureau of Sri Lanka, a public-private partnership between the nation’s Central Bank and a number of financial institutions that hold equity in the Bureau. The Bureau issues CRIB Score reports to consumers in a consistent manner, utilising an algorithm to produce a three-digit number ranging from 250 to 900.Footnote 125 In Sri Lanka’s case, consumers are provided with a singular rating from a central agency, and although this rating is subject to change over time, there is no possibility of consumers receiving two different credit scores from separate providers.

Providing consumers with the opportunity to access their credit scores is another (and in many ways complementary) regulatory intervention. A number of jurisdictions provide consumers with the option to check their credit report and/or credit score online. For example, consumers in CanadaFootnote 126 and AustraliaFootnote 127 are able to access free copies of their credit reports by requesting this information directly from major credit bureaus. In Australia, consumers are able to receive a free copy of their credit report once every three months.Footnote 128

However, such approaches have important limitations. Credit ratings are just one of many automated processes within the financial services industry. Automated Banks, with access to enough data, can create their own tools going outside the well-established credit rating systems. Also, it is consumers who are forced to carry the burden of correcting inaccurate information which is used to make consequential decisions about them, while often being required to pay for the opportunity to do so.Footnote 129

In addition, explainability challenges are faced in every sector that uses AI, and there is considerable investigation ahead to determine the most effective ways of explaining automated decisions in financial markets. It has been suggested that a good explanation is provided when the receiver ‘can no longer keep asking why’.Footnote 130 The recent EU Digital Services ActFootnote 131 emphasises such an approach by noting that recipients of online advertisements should have access to ‘meaningful explanations of the logic used’ for ‘determining that specific advertisement is to be displayed to them’.Footnote 132

Consumer experience of an AI system will depend on a number of parameters, including format of explanations (visual, rule-based, or highlighted key features), their complexity and specificity, application context, and variations suiting users’ cognitive styles (for example, providing some users with more complex information, and others with less).Footnote 133 The development of consumer-facing explainable AI tools is an emerging area of research and practice.Footnote 134

A requirement to provide meaningful feedback to consumers – for example, through counterfactual demonstrationsFootnote 135 – would make it possible for individuals to understand what factors they might need to change to receive a different decision. It would also be an incentive for Automated Banks to be more transparent.
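
The idea of counterfactual feedback can be made concrete with a deliberately simplified Python sketch: given a model and a refused applicant, search nearby combinations of feature values for the smallest change that flips the outcome. Everything below – the features, the toy model, and the distance measure – is invented for illustration; real counterfactual explanation methods operate over much richer models and feature spaces.

    import numpy as np
    from sklearn.linear_model import LogisticRegression

    # Toy credit model: income (in $'000) and number of missed repayments.
    X = np.array([[30, 4], [40, 3], [55, 2], [70, 1], [90, 0], [25, 5], [65, 0], [45, 1]])
    y = np.array([0, 0, 0, 1, 1, 0, 1, 1])  # 1 = approved
    model = LogisticRegression().fit(X, y)

    applicant = np.array([42, 3])  # an applicant the toy model scores as too risky

    # Brute-force search for the nearest 'possible world' in which the decision flips.
    best = None
    for income_delta in range(0, 41):
        for missed_delta in range(0, 4):
            candidate = applicant + np.array([income_delta, -missed_delta])
            if model.predict([candidate])[0] == 1:
                cost = income_delta + 10 * missed_delta  # simple, arbitrary distance
                if best is None or cost < best[0]:
                    best = (cost, candidate)
    print("Counterfactual:", best)

An output of this kind can be translated into feedback a consumer can act on – which factors to change, and by roughly how much, to receive a different decision – which is the sort of meaningful explanation argued for above.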

4.4.3 Facilitating Regulator Monitoring and Enforcement of ADM Harms in Financial Services

The third category of potential measures relies on empowering regulators, thus shifting the burden away from consumers. For example, regulators need to be able to ‘look under the hood’ of any ADM tools, including those of a proprietary character.Footnote 136 This could take the form of explainable AI tools, access to raw code, or the ability to use dummy data to test the model. A certification scheme, such as quality standards, is another option; the problem, however, is the risk of a ‘set and forget’ approach. Another approach to providing regulators with insight into industry practices is the establishment of regulatory sandboxes, which nevertheless have limitations.Footnote 137

Financial institutions could also be required to prove a causal link between the data that they use to generate consumer scores and the likely risk. Such an approach would likely reduce the use of certain categories of data where correlations between data points are not supported by a valid causal relationship. For example, Android phone users are reportedly safer drivers than iPhone users,Footnote 138 but such a rule would prevent insurers from taking this into account when offering a quote on car insurance (while we do not suggest they are currently doing so, in many legal systems they could). In practice, some regulators are looking at this solution. For example, while not going as far as requiring a direct causal link, the New York State financial regulator requires a ‘valid explanation or rationale’ for the underwriting of life insurance where external data or external predictive models are used.Footnote 139 However, such an approach could encourage financial services providers to collect more data, just to be able to prove the causal link,Footnote 140 which may again further disadvantage consumers and introduce more, not less, opacity.

4.5 Conclusions

Far from being unique to credit scoring, the secrecy of ADM tools is a problem affecting multiple sectors and industries.Footnote 141 Human decisions are also unexplainable and opaque, and ADM tools are often held out as a potentially fairer and more transparent alternative. The problem, however, is that secrecy increases, not decreases, with automation.Footnote 142

There are many reasons for this, including purely technological barriers to explainability. It is also simply cheaper and easier not to design and use transparent systems. As we argue in this chapter, opacity is a choice made by organisations, often on purpose, as it allows them to evade scrutiny and hide their practices from the public and regulators. The opacity of the ADM and AI tools used is a logical consequence of the secrecy of corporate practices.

Despite many harms caused by opacity, the legal systems and market practice have evolved to enable or even promote that secrecy surrounding AI and ADM tools, as we have discussed using examples of rules applying to Automated Banks. However, the opacity and harms could be prevented with some of the potential solutions which we have discussed in this chapter. The question is whether there is sufficient motivation to achieve positive social impact with automated tools, without just focusing on optimisation and profits.

Footnotes

1 AI in the Financial Sector Policy Challenges and Regulatory Needs

1 This marks the beginning of a second generation of digital transformation. The terminology ‘first and second generation’ to refer to the successive waves of emerging technologies is used and explained by the author in other previous publications. T Rodríguez de las Heras Ballell, Challenges of Fintech to Financial Regulatory Strategies (Madrid: Marcial Pons, 2019), in particular, pp. 61 et seq.

2 Financial markets have been incorporating state-of-the-art digital communication channels and technological applications for more than two decades – International Finance Corporation (IFC), Digital Financial Services: Challenges and Opportunities for Emerging Market Banks (Report, 2017) footnote 42, p. 1. Regulation has been gradually accommodating these transformations: J Dermine, ‘Digital Banking and Market Disruption: A Sense of déjà vu?’ (2016) 20 Financial Stability Review, Bank of France 17.

3 The study resulting from the survey conducted by the Institute of International Finance – Machine Learning in Credit Risk, May 2018 – revealed that traditional commercial banks are adopting technological solutions (artificial intelligence and machine learning and deep learning techniques) as a strategy to gain efficiency and compete effectively with new fintech entrants (Institute of International Finance, Machine Learning in Credit Risk (Report, May 2018)). PwC’s 2021 Digital Banking Consumer Survey (Survey 2021) confirms this same attitude of traditional banks to rethink their sales, marketing and customer interaction practices, models, and strategies (PwC, Digital Banking Consumer Survey (Report, 2021) <www.pwc.com/us/en/industries/banking-capital-markets/library/digital-banking-consumer-survey.html>). In this overhaul and modernisation strategy, the incorporation of digital technologies – in particular, the use of AI and machine learning models to deliver highly accurate personalised services – is a crucial piece.

4 Capgemini, World Fintech Report 2018 (Report, 2018) highlights the possibilities offered by emerging technologies for the delivery of customer-facing financial services – artificial intelligence, data analytics, robotics, DLT, biometrics, platforms, IoT, augmented reality, chatbots, and virtual assistants – pp. 20 et seq. Capgemini, World Fintech Report 2021 (Report, 2021) confirms how the synergistic combination of these transformative technologies has opened up four routes for innovation in the financial sector: establishing ecosystems, integrating physical and digital processes, reorienting transactional flows, and reimagining core functions.

5 World Economic Forum, Forging New Pathways: The next evolution of innovation in Financial Services (Report, 2020) 14 <www.weforum.org/reports/forging-new-pathways-the-next-evolution-of-innovation-in-financial-services>.

6 According to the European Banking Authority (EBA), 64 per cent of European banks have already implemented AI-based solutions in services and processes, primarily with the aim of reducing costs, increasing productivity, and facilitating new ways of competing. EBA, Risk assessment of the European Banking System (Report, December 2020) 75.

9 European Securities and Markets Authority (ESMA), European Banking Authority (EBA), European Insurance and Occupational Pensions Authority (EIOPA), Joint Committee Discussion Paper on automation in financial advice (Discussion Paper JC 2015 080, 4 December 2015) <https://esas-joint-committee.europa.eu/Publications/Discussion%20Paper/20151204_JC_2015_080_discussion_paper_on_Automation_in_Financial_Advice.pdf>. PwC, Global Fintech Survey 2016, Beyond Automated Advice: How FinTech Is Shaping Asset & Wealth Management (Report, 2016) 8 <www.pwc.com/gx/en/financial-services/pdf/fin-tech-asset-and-wealth-management.pdf>.

10 Capgemini, World Fintech Report 2021 (Report, 2021) <https://fintechworldreport.com/>: ‘The consequences of the pandemic have made the traditional retail banking environment even more demanding’.

11 According to the Financial Stability Board (FSB), Financial Stability Implications from Fintech (Report, June 2017) 7 <www.fsb.org/wpcontent/uploads/R270617.pdf>, fintech is ‘technology-enabled innovation in financial services that could result in new business models, applications, processes or products, with an associated material effect on the provision of financial services’.

12 TF Dapp, ‘Fintech Reloaded-Traditional Banks as Digital Ecosystems’ (2015) Deutsche Bank Research 5.

13 T Rodríguez de las Heras Ballell, ‘The Legal Anatomy of Electronic Platforms: A Prior Study to Assess the Need of a Law of Platforms in the EU’ (2017) 1 The Italian Law Journal 3, 149–76.

14 IH-Y Chiu, ‘Fintech and Disruptive Business Models in Financial Products, Intermediation and Markets – Policy Implications for Financial Regulators’ (2016) 21 Journal of Technology Law and Policy 55.

15 A Wright and P De Filippi, ‘Decentralized Blockchain Technology and the Rise of Lex Cryptographia’ (2015) <https://ssrn.com/abstract=2580664>.

16 R Lewis et al, ‘Blockchain and Financial Market Innovation’ (2017) Federal Reserve Bank of Chicago, Economic Perspectives 7.

17 According to the KPMG-Funcas report, Comparison of Banking vs. Fintech Offerings (Report, 2018) <https://assets.kpmg/content/dam/kpmg/es/pdf/2018/06/comparativa-oferta-%20banca-fintech.pdf>, 48 per cent of domestic fintech firms are complementary to banks, 32 per cent are collaborative, and 20 per cent are competitors. It is estimated that 26 per cent of financial institutions have partnered with Big Tech or technology giants and a similar percentage plan to do so within the next twelve months – KPMG-Funcas, La banca ante las BigTech (Report, December 2019), presented in the framework of the Observatorio de la Digitalización Financiera (ODF).

18 World Economic Forum, Beyond Fintech: A Pragmatic Assessment of Disruptive Potential in Financial Services (Report, 2017) <www.weforum.org/reports/beyond-Fintech-a-pragmatic-assessment-of-disruptive-potential-in-financial-services>.

19 G Biglaiser, E Calvano, and J Crémer, ‘Incumbency Advantage and Its Value’ (2019) 28 Journal of Economics & Management Strategy 1, 41–48.

20 Spanish Fintech and Insurtech Association (AEFI), White Paper on Fintech Regulation in Spain (White Paper, 2017) <https://asociacionfintech.es/wp-content/uploads/2018/06/AEFI_LibroBlanco_02_10_2017.pdf>. Basel Committee on Banking Supervision, Sound Practices. Implications of Fintech Developments for Banks and Bank Supervisors (Report, 2018).

21 Proposal for a Regulation of the European Parliament and of the Council laying down harmonised rules in the field of artificial intelligence (Artificial Intelligence Act) and amending certain legislative acts of the Union, {SEC(2021) 167 final} – {SWD(2021) 84 final} – {SWD(2021) 85 final}, Brussels, 21.4.2021, COM(2021) 206 final, 2021/0106(COD). References to draft provisions in this chapter are to the compromise text adopted on 3 November 2022 and submitted to Coreper on 11 November 2022 for a discussion scheduled on 18 November 2022, with the amendments subsequently adopted by the European Parliament on 14 June 2023.

22 EBA, Report on Big Data and Advanced Analytics (Report EBA/REP/2020/01, 2020), 33–42.

23 White Paper on Artificial Intelligence – A European Approach to Excellence and Trust, COM(2020) 65 final, Brussels, 19 February 2020.

24 Report from the Commission to the European Parliament, the Council and the European Economic and Social Committee, Report on the Security and Liability Implications of Artificial Intelligence, the Internet of Things and Robotics (Report COM(2020) 64, 19 February 2020).

25 Communication from the Commission to the European Parliament, the Council, the European Economic and Social Committee and the Committee of the Regions, Shaping Europe’s Digital Future, COM(2020) 67 final, Brussels, 19 February 2020.

26 ‘Building Trust in Human-Centric AI’, European Commission (Web Page) <https://ec.europa.eu/futurium/en/ai-alliance-consultation.1.html>.

27 Regulation (EU) 2022/2065 of the European Parliament and of the Council of 19 October 2022 on a Single Market For Digital Services and amending Directive 2000/31/EC (Digital Services Act) (Text with EEA relevance), OJ L 277, 1–102.

28 Regulation (EU) 2022/1925 of the European Parliament and of the Council of 14 September 2022 on contestable and fair markets in the digital sector and amending Directives (EU) 2019/1937 and (EU) 2020/1828 (Digital Markets Act) (Text with EEA relevance), OJ L 265, 1–66.

29 Principles enshrined in international harmonisation instruments adopted by the United Nations: notably and essentially, the 1996 Model Law on Electronic Commerce, the 2001 Model Law on Electronic Signatures, the 2005 Convention on the Use of Electronic Communications in International Contracts, and the 2017 Model Law on Electronic Transferable Records <www.uncitral.un.org>.

30 Regulation (EU) 2016/679 of the European Parliament and of the Council of 27 April 2016 on the protection of natural persons with regard to the processing of personal data and on the free movement of such data, and repealing Directive 95/46/EC (General Data Protection Regulation) [2016] OJ L 119/1.

31 T Rodríguez de las Heras Ballell, ‘Legal Challenges of Artificial Intelligence: Modelling the Disruptive Features of Emerging Technologies and Assessing Their Possible Legal Impact’ (2019) 1 Uniform Law Review 113.

32 European Commission, Report of the Expert Group in Its New Technologies Formation, Report on Liability for Artificial Intelligence and Other Emerging Technologies (Report, November 2019) <https://ec.europa.eu/transparency/regexpert/index.cfm?do=groupDetail.groupMeetingDoc&docid=36608>.

33 European Commission, Expert Group on Liability and New Technologies, in its two formations, the New Technologies Formation and the Product Liability Formation <https://ec.europa.eu/transparency/regexpert/index.cfm?do=groupDetail.groupDetail&groupID=3592->.

34 The author is a member of the Expert Group on Liability and New Technologies (New Technologies Formation), which assists the European Commission in developing principles and guidelines for the adaptation of European and national regulatory frameworks for liability in the face of the challenges of emerging digital technologies (Artificial Intelligence, Internet of Things, Big Data, Blockchain, and DLT). The Expert Group issued its Report on Liability for Artificial Intelligence and Other Emerging Technologies which was published on 21 November 2019. The views expressed by the author in this paper are personal and do not necessarily reflect either the opinion of the Expert Group or the position of the European Commission.

35 Proposal COM/2022/496 of 28 September 2022 for a Directive of the European Parliament and of the Council on adapting non-contractual civil liability rules to artificial intelligence (AI Liability Directive).

36 Proposal COM/2022/495 of 28 September 2022 for a Directive of the European Parliament and of the Council on liability for defective products.

37 Artificial intelligence system (AI system) means a system that

  (i) receives machine and/or human-based data and inputs,

  (ii) infers how to achieve a given set of human-defined objectives using learning, reasoning, or modelling implemented with the techniques and approaches listed in Annex I, and

  (iii) generates outputs in the form of content (generative AI systems), predictions, recommendations, or decisions, which influence the environments it interacts with.

38 The drafting history of the fourth compromise text is as follows:

On 5 July 2022, the Czech Presidency held a policy debate in WP TELECOM on the basis of a policy options paper, the outcomes of which were used to prepare the second compromise text. Based on the reactions of the delegations to this compromise, the Czech Presidency prepared the third compromise text, which was presented and discussed in WP TELECOM on 22 and 29 September 2022. After these discussions, the delegations were asked to send in their written comments on the points they felt most strongly about. Based on those comments, as well as using the input obtained during bilateral contacts with the Member States, the Czech Presidency drafted the fourth compromise proposal, which was discussed in the WP TELECOM meeting on 25 October 2022. Based on these discussions, and taking into account final written remarks from the Member States, the Czech Presidency has now prepared the final version of the compromise text.

39 Amendments adopted by the European Parliament on 14 June 2023 on the proposal for a regulation of the European Parliament and of the Council on laying down harmonised rules on artificial intelligence (Artificial Intelligence Act) and amending certain Union legislative acts (COM(2021)0206 – C9-0146/2021 – 2021/0106(COD)).

40 Report with recommendations to the Commission on a civil liability regime for artificial intelligence (2020/2014(INL)), 5 October 2020 <www.europarl.europa.eu/doceo/document/A-9-2020-0178_ES.pdf>.

41 (a) ‘Artificial intelligence system’ means any software-based or hardware-embedded system that exhibits behaviour simulating intelligence, inter alia, by collecting and processing data, analysing and interpreting its environment and taking action, with a degree of autonomy, to achieve specific objectives.

42 Regulation (EU) 2019/1150 of 20 June 2019 of the European Parliament and of the Council on promoting fairness and transparency for professional users of online intermediation services (Text with EEA relevance) [2019] OJ L 186/57.

43 See also discussion in Chapters 2–4 in this book.

44 Although their effective use is still limited, there are very significant advantages that herald very promising expected implementation rates. EBA, Report on Big Data and Advanced Analytics (Report EBA/REP/2020/01, 2020) 20, figure 2.1.

45 A Alonso and JM Carbó, ‘Understanding the Performance of Machine Learning Models to Predict Credit Default: A Novel Approach for Supervisory Evaluation’ (Working Paper No 2105, Banco de España March 2021) <www.bde.es/f/webbde/SES/Secciones/Publicaciones/PublicacionesSeriadas/DocumentosTrabajo/21/Files/dt2105e.pdf>.

46 Deloitte, Artificial Intelligence. Innovation Report (Report, 2018).

47 O Kaya, ‘Robo-Advice: A True Innovation in Asset Management’ (Research Paper, Deutsche Bank Research, EU Monitor Global Financial Markets, 10 August 2017) 9.

48 T Bucher-Koenen, ‘Financial Literacy, Cognitive Abilities, and Long-Term Decision Making: Five Essays on Individual Behavior’ (Doctoral Dissertation, Universität Mannheim, 2010).

49 A Chander, ‘The Racist Algorithm’ (2017) 115 Michigan Law Review 1023.

50 S Barocas and A Selbst, ‘Big Data’s Disparate Impact’ (2016) 104 California Law Review 671.

51 Directive 2013/36/EU of the European Parliament and of the Council of 26 June 2013 relating to the taking up and pursuit of the business of credit institutions and the prudential supervision of credit institutions and investment firms, amending Directive 2002/87/EC and repealing Directives 2006/48/EC and 2006/49/EC Text with EEA relevance [2013] OJ L 176/338.

52 See also arguments raised by Bednarz and Przhedetsky in Chapter 4 in this book as to the legal rules that incentivise the use of ADM and AI tools by financial entities.

53 Regulation (EU) 2020/1503 of the European Parliament and of the Council of 7 October 2020 on European crowdfunding service providers for business, and amending Regulation (EU) 2017/1129 and Directive (EU) 2019/1937 (Text with EEA relevance) [2020] OJ L 347/1.

54 Report with recommendations to the Commission on a civil liability regime for artificial intelligence (2020/2014(INL)), 5 October 2020 <www.europarl.europa.eu/doceo/document/A-9-2020-0178_ES.pdf>.

55 According to Article 1 of the proposal, the Regulation of the European Parliament and of the Council laying down harmonised rules in the field of artificial intelligence (Artificial Intelligence Act) lays down:

  (a) harmonised rules for the placing on the market, putting into service and use of artificial intelligence systems (“AI systems”) in the Union;

  (b) prohibitions of certain artificial intelligence practices;

  (c) specific requirements for high-risk AI systems and obligations for operators of such systems;

  (d) harmonised transparency rules for certain AI systems;

  (e) rules on market monitoring, market surveillance, governance and enforcement.

56 Council Directive 85/374/EEC of 25 July 1985 on the approximation of the laws, regulations and administrative provisions of the Member States concerning liability for defective products [1985] OJ L 210/29.

57 Proposal for a Directive on liability for defective products COM(2022) 495. BA Koch et al, ‘Response of the European Law Institute to the Public Consultation on Civil Liability – Adapting Liability Rules to the Digital Age and Artificial Intelligence’ (2022) 13 Journal of European Tort Law 1, 25–63 <https://doi.org/10.1515/jetl-2022-0002>.

58 Proposal for a Directive on liability for defective products COM(2022) 495.

60 The European Law Institute’s projects on Smart Contract and Blockchain and Algorithmic Contracts, and its Innovation Paper on Guiding Principles for Automated Decision-Making in Europe, seek to contribute to this pre-legislative debate in the Union (‘ELI Projects and Other Activities’, European Law Institute (Web Page) <www.europeanlawinstitute.eu/projects-publications/>). At the international level, work has also started in the same direction, such as the new UNCITRAL work plan project on automation and the use of AI in international trade (‘Working Group IV: Electronic Commerce’, United Nations Commission on International Trade Law (Web Page) <https://uncitral.un.org/es/working_groups/4/electronic_commerce>).

61 C Codagnone, G Liva, and T Rodríguez de las Heras Ballell, Identification and Assessment of Existing and Draft EU Legislation in the Digital Field (Study, 2022) <www.europarl.europa.eu/thinktank/de/document/IPOL_STU(2022)703345>. Study requested by the AIDA special committee, European Parliament.

2 Demystifying Consumer-Facing Fintech: Accountability for Automated Advice Tools

1 See also Jodi Gardner, Mia Gray, and Katharina Moser (eds), Debt and Austerity: Implications of the Financial Crisis (Edward Elgar, 2020).

2 See further Lucinda O’Brien et al, ‘More to Lose: The Attributes of Involuntary Bankruptcy’ (2019) 38 Economic Papers 15.

3 Jeannie Paterson, ‘Knowledge and Neglect in Asset-Based Lending: When Is It Unconscionable or Unjust to Lend to a Borrower Who Cannot Repay?’ (2009) 20 Journal of Banking and Finance Law and Practice 18.

4 See generally, Michael Trebilcock, Anthony Duggan, and Lorne Sossin (eds), Middle Income Access to Justice (University of Toronto Press, 2012).

5 AI is a disputed category – we are using the term to cover automated decision-making processes informed by predictive analytics, machine learning techniques, and natural language processing.

6 See, for example, Ross P Buckley et al, ‘Regulating Artificial Intelligence in Finance: Putting the Human in the Loop’ (2021) 43(1) Sydney Law Review 43.

7 See, for example, the UK Financial Conduct Authority’s innovation services, which aim to ‘create room for the brightest and most innovative companies to enter the sector, support positive innovation to come to market in a controlled and sustainable way, support innovation that has genuine potential to improve the lives of consumers across all areas of financial services [and] support innovation delivered by a diverse range of participants, both in terms of the type of firm, and the people behind the developments’: ‘Our Innovation Services’, Financial Conduct Authority (Web Page) <www.fca.org.uk/firms/innovation/our-innovation-services> accessed 11 July 2023. See also ‘Competition in the Technology Marketplace’, Federal Trade Commission (Web Page) <www.ftc.gov/advice-guidance/competition-guidance/industry-guidance/competition-technology-marketplace> accessed 11 July 2023; Bank of England and Financial Conduct Authority, ‘Machine Learning in UK Financial Services’ (Web Page, October 2019) 3 <www.bankofengland.co.uk/report/2022/machine-learning-in-uk-financial-services> accessed 11 July 2023; Commonwealth Government, Inquiry into Future Directions for the Consumer Data Right (Final Report, October 2020) 19.

8 See, for example, ‘Enhanced Regulatory Sandbox’, Australian Securities & Investments Commission (ASIC) (Web Page, 1 September 2020) <https://asic.gov.au/for-business/innovation-hub/enhanced-regulatory-sandbox> accessed 22 May 2022. Also, Philip Maume, ‘Regulating Robo-Advisory’ (2019) 55(1) Texas International Law Journal 49, 56.

9 OECD, Personal Data Use in Financial Services and the Role of Financial Education: A Consumer Centric Analysis (Report, 2020) 20 <www.oecd.org/daf/fin/financial-education/Personal-Data-Use-in-Financial-Services-andthe-Role-of-Financial-Education.pdf> accessed 20 May 2022.

10 Bank of England and Financial Conduct Authority, ‘Machine Learning in UK Financial Services’, 6.

11 See, for example, Zest (Web Page) <www.zest.ai/> accessed 11 July 2023.

12 OECD, Personal Data Use in Financial Services.

13 See, for example, Better (Web Page, 2022) <https://better.com>; Cashngo (Web Page, 2022) <www.cashngo.com.au>; Nano (Web Page, 2022) <https://nano.com.au>; Rocket Mortgage (Web Page, 2022) <www.rocketmortgage.com>.

14 See, for example, Petal (Web Page, 2022) <www.petalcard.com>.

15 See, for example, LoanOptions.ai (Web Page, 2022) <www.loanoptions.ai>.

16 OECD, Personal Data Use in Financial Services, 20.

17 ‘What You Need to Know about How FinTech Apps Work’, Consumer Action (Web Page, 16 February 2021) <www.consumer-action.org/english/articles/fintech_apps> accessed 20 May 2022.

18 See, for example, Paul Smith and James Eyers, ‘CBA in $134m Play to Be “AI Superpower”’ (8 November 2021) Australian Financial Review <www.afr.com/technology/cba-aims-to-be-ai-superpower-with-us100m-tech-plunge-20211105-p596bx> accessed 20 May 2022. See also Daniel Belanche, Luis V Casaló, and Carlos Flavián, ‘Artificial Intelligence in FinTech: Understanding Robo-Advisors Adoption among Customers’ (2019) 119(7) Industrial Management & Data Systems 1411, 1411.

19 Dirk A Zetsche et al, ‘From Fintech to Techfin: The Regulatory Challenges of Data-Driven Finance’ (2018) 14(2) NYU Journal of Law & Business 393, 400; Bonnie G Buchanan, Artificial Intelligence in Finance (Report, The Alan Turing Institute, 2019) 1 <www.turing.ac.uk/sites/default/files/2019-04/artificial_intelligence_in_finance_-_turing_report_0.pdf> accessed 11 July 2023.

20 The Australian Government, The Treasury, Consumer Data Right Overview (Report, September 2019) 2 <https://treasury.gov.au/sites/default/files/2019-09/190904_cdr_booklet.pdf> accessed 11 July 2023; OECD, Personal Data Use in Financial Services, 15.

21 See, for example, ‘Built to Make Investing Easier’, Betterment (Web Page) <www.betterment.com/investing> accessed 20 May 2022: ‘Automated technology is how we make investing easier, better, and more accessible’. See also ‘About Us’, Robinhood (Web Page) <https://robinhood.com/us/en/about-us> accessed 11 July 2023: ‘We’re on a mission to democratize finance for all’.

22 Australian Securities & Investments Commission (ASIC), Providing Digital Financial Product Advice to Retail Clients (Regulatory Guide 255, August 2016) para 255.3 <https://download.asic.gov.au/media/vbnlotqw/rg255-published-30-august-2016-20220328.pdf> accessed 11 July 2023: ‘digital advice has the potential to be a convenient and low-cost option for retail clients who may not otherwise seek advice’.

23 Ibid para 255.3, noting that only around 20 per cent of adult Australians seek personal financial advice. See also The Australian Government, The Treasury, Financial System Inquiry: Interim Report (Report, July 2014) paras 3.69–3.70 <https://treasury.gov.au/sites/default/files/2019-03/p2014-fsi-interim-report.pdf> accessed 15 May 2022. See also Deloitte Access Economics, ASX Australian Investor Study (Report, 2017) <www2.deloitte.com/content/dam/Deloitte/au/Documents/Economics/deloitte-au-economics-asx-australian-investor-study-190517.pdf> accessed 20 May 2022; Australian Securities & Investments Commission, Regulating Complex Products (Report 384, January 2014) 16–18 <https://download.asic.gov.au/media/lneb1sbb/rep384-published-31-january-2014-03122021.pdf> accessed 11 July 2023.

24 Consumers seeking to buy a home more commonly obtain advice from mortgage brokers, whose services are paid for by commissions from banks. Doubts have been raised about the extent to which conflicts of interest undermine the value of this service to consumers, and about the benefit provided, which is often of unreliable quality. See Australian Securities & Investments Commission, Review of Mortgage Broker Remuneration (Report 516, March 2017) 17 <https://download.asic.gov.au/media/4213629/rep516-published-16-3-2017-1.pdf> accessed 11 July 2023; Productivity Commission, Competition in the Australian Financial System (Inquiry Report No 89, 2018) 301 <www.pc.gov.au/inquiries/completed/financial-system/report> accessed 11 July 2023. See also generally Jeannie Marie Paterson and Elise Bant, ‘Mortgage Broking, Regulatory Failure and Statutory Design’ (2020) 31(1) Journal of Banking and Finance Law and Practice 7. Also, generally Maume, ‘Regulating Robo-Advisory’, 50: noting that the FCA estimates there are sixteen million people in the United Kingdom in this financial advice gap.

25 Bob Ferguson, ‘Robo Advice: An FCA Perspective’ (Annual Conference on Robo Advice and Investing: From Niche to Mainstream, London, 2 October 2017) <www.fca.org.uk/news/speeches/robo-advice-fca-perspective> accessed 20 May 2020; Maume, ‘Regulating Robo-Advisory’, 69.

26 Tom Baker and Benedict Dellaert, ‘Regulating Robo Advice across the Financial Services Industry’ (2018) 103 Iowa Law Review 713, 714.

27 ‘10 Things Consumers Need to Know about FinTech’, Consumers International (Web Page) <www.consumersinternational.org/news-resources/blog/posts/10-things-consumers-need-to-know-about-fintech> accessed 20 May 2022.

28 Australian Government, The Treasury, Consumer Data Right Overview, 2; Edward Corcoran, Open Banking Regulation around the World (Report, BBVA, 11 May 2020) <www.bbva.com/en/open-banking-regulation-around-the-world> accessed 20 May 2022.

29 See also Jeannie Marie Paterson, ‘Making Robo Advisers Careful’ (2023) Law and Financial Markets Review 18.

30 See, for example, Betterment (Web Page) <www.betterment.com> accessed 11 July 2023; Robinhood (Web Page) <https://robinhood.com/us/en/about-us> accessed 11 July 2023; Wealthfront (Web Page) <www.wealthfront.com/> accessed 11 July 2023.

31 ASIC, Providing Digital Financial Product Advice to Retail Clients, para 255.1.

32 Financial Conduct Authority, Automated Investment Services: Our Expectations (Report, 21 May 2018) <www.fca.org.uk/publications/multi-firm-reviews/automated-investment-services-our-expectations> accessed 11 July 2023.

33 Belanche et al, ‘Artificial Intelligence in FinTech’, 1413; Dominik Jung et al, ‘Robo-Advisory: Digitalization and Automation of Financial Advisory’ (2018) 60(1) Business & Information Systems Engineering 81, 81.

34 See, for example, Jaaims (Web Page) <www.jaaimsapp.com> accessed 11 July 2023.

35 Baker and Dellaert, ‘Regulating Robo Advice across the Financial Services Industry’, 734.

36 Jung et al, ‘Robo-Advisory: Digitalization and Automation of Financial Advisory’, 82.

37 Maume, ‘Regulating Robo-Advisory’, 53. But see, providing both investment and budgeting advice, Douugh (Web Page) <https://douugh.com/> accessed 11 July 2023.

38 Sophia Duffy and Steve Parrish, ‘You Say Fiduciary, I Say Binary: A Review and Recommendation of Robo-Advisors and the Fiduciary and Best Interest Standards’ (2021) 17 Hastings Business Law Journal 3, 5.

39 See, for example, Goodbudget (Web Page) <https://goodbudget.com/> accessed 11 July 2023; Mint (Web Page) <https://mint.intuit.com/> accessed 11 July 2023; MoneyBrilliant (Web Page) <https://moneybrilliant.com.au/> accessed 11 July 2023; Empower (Web Page) <www.personalcapital.com/> accessed 11 July 2023; Spendee (Web Page) <www.spendee.com/> accessed 11 July 2023; Toshl (Web Page) <https://toshl.com/> accessed 11 July 2023; Rocketmoney (Web Page) <www.rocketmoney.com/> accessed 11 July 2023; Wemoney (Web Page) <www.wemoney.com.au> accessed 11 July 2023.

40 See, for example, UpBank (Web Page) <https://up.com.au/> accessed 11 July 2023; Revolut (Web Page) <www.revolut.com/en-AU/> accessed 11 July 2023; Pluto Money (Web Page) <https://plutomoney.app/> accessed 11 July 2023.

41 See Han-Wei Liu, ‘Two Decades of Laws and Practice around Screen Scraping in the Common Law World and Its Open Banking Watershed Moment’ (2020) 30(2) Washington International Law Journal 28.

42 See e.g. Frollo using Australia’s open banking regime: <www.instagram.com/p/CHzG3winmBo/>.

44 Joris Lochy, ‘Budgeting Apps – A Red Ocean Looking for a Market’ (Blog Post, 8 March 2020) <https://bankloch.blogspot.com/2020/03/budgeting-apps-red-ocean-looking-for.html> accessed 20 May 2020.

45 See e.g. Spendee (Web Page) <www.spendee.com/> accessed 11 July 2023.

46 See e.g. Mint (Web Page) <https://mint.intuit.com/> accessed 11 July 2023; Rocketmoney (Web Page) <www.rocketmoney.com/> accessed 11 July 2023.

47 E.g., Zippay provides a budgeting function (Web Page) <https://zip.co/au> accessed 11 July 2023.

48 See e.g. ‘We Combine Best-in-Breed AI Driven Categorization and Analytics with a Deep Set of Features That Are Proven to Work’, Budget Bakers (Web Page) <https://budgetbakers.com/> accessed 11 July 2023.

49 Joris Lochy, ‘Budgeting Apps – A Red Ocean Looking for a Market’.

50 See Zofia Bednarz, ‘There and Back Again: How Target Market Determination Obligations for Financial Products May Incentivise Consumer Data Profiling’ [2022] International Review of Law, Computers & Technology <www.tandfonline.com/doi/10.1080/13600869.2022.2060469> accessed 20 May 2022.

51 Baker and Dellaert, ‘Regulating Robo Advice across the Financial Services Industry’, 723.

52 See, e.g., Tamika Seeto, ‘6 Budgeting and Savings Apps Worth Checking Out in 2022’, Canstar (Blog Post, 15 March 2022) <www.canstar.com.au/budgeting/budgeting-apps/> accessed 20 May 2022; Choice (Web Page) <www.choice.com.au/money/financial-planning-and-investing/creating-a-budget/articles/how-we-test-budgeting-apps> accessed 20 May 2022.

53 See also Christy Rakoczy, ‘Best Budgeting Software: Find the Right Software for Any Budgeting Goal’, Investopedia (Web Page) <www.investopedia.com/personal-finance/best-budgeting-software/> accessed 22 May 2022: ‘We recommend the best products through an independent review process, and advertisers do not influence our picks. We may receive compensation if you visit partners we recommend. Read our advertiser disclosure for more info’.

54 Jung et al, ‘Robo-Advisory: Digitalization and Automation of Financial Advisory’, 84.

55 Lukas Brenner and Tobias Meyll, ‘Robo-Advisors: A Substitute for Human Financial Advice?’ (2020) 25 Journal of Behavioral and Experimental Finance 100275.

56 Duffy and Parrish, ‘You Say Fiduciary, I Say Binary’, 23.

57 Yaron Levi and Shlomo Benartzi, ‘Mind the App: Mobile Access to Financial Information and Consumer Behavior’ (17 March 2020) 9: ‘The interpretation of our results is that the mobile apps have a causal impact on the attention and spending behavior among consumers that decided to adopt it.’ <http://dx.doi.org/10.2139/ssrn.3557689> accessed 22 May 2022.

58 Evan Kuh, ‘Budgeting Apps Have Major Flaws When It Comes to Helping Users Actually Save’, CNBC (Halftime Report, 13 June 2019) <www.cnbc.com/2019/06/13/budgeting-apps-don’t-help-users-save-money.html>; Rhiana Whitson, ‘Would You Use a Budgeting App? There Are Some Big Pros and Cons to Consider’, ABC Online (News Report, 4 August 2021) <www.abc.net.au/news/2021-08-04/how-do-you-keep-track-of-your-budget-we-look-at-your-options/100342676>.

59 Stefan Angel, ‘Smart Tools? A Randomized Controlled Trial on the Impact of Three Different Media Tools on Personal Finance’ (2018) 74 Journal of Behavioral and Experimental Economics 104–11: adolescent users of a smartphone budgeting app check their current account balance more than a control group. However, the app did not have a significant effect on subjective or objective financial knowledge indicators.

60 See discussion of the data use below.

61 ASIC, Providing Digital Financial Product Advice to Retail Clients; United States Securities and Exchange Commission, Commission Interpretation Regarding Standard of Conduct for Investment Advisers (Release No IA-5248, 2019) 12–18.

62 See generally Simone Degeling and Jessica Hudson, ‘Financial Robots as Instruments of Fiduciary Loyalty’ (2018) 40 Sydney Law Review 63.

63 See e.g., Corporations Act 2001 (Cth) s 912A(1)(aa), requiring financial services licensees to have ‘adequate arrangements’ for ‘managing’ conflicts of interest.

64 Corporations Act 2001 (Cth) s 961B(1); Securities and Exchange Commission, ‘Commission Interpretation Regarding Standard of Conduct for Investment Advisers’; Securities and Exchange Commission, Regulation Best Interest: The Broker Dealer Standard of Conduct (Release No 34-86031, 5 June 2019). Also, Duffy and Parrish, ‘You Say Fiduciary, I Say Binary’; Han-Wei Liu et al, ‘In Whose Best Interests? Regulating Financial Advisers, the Royal Commission and the Dilemma of Reform’ (2020) 42 Sydney Law Review 37.

65 ‘COBS 9.2 Assessing suitability’, Financial Conduct Authority (United Kingdom) Handbook (Web Page) <www.handbook.fca.org.uk/handbook/COBS/9/2.html>; European Parliament and Council Directive 2014/65/EU of 15 May 2014 Markets in Financial Instruments Directive II [2014] OJ L 173/349, art 25(2). Also, Corporations Act 2001 (Cth) pt 7.8A (design and distribution obligations).

66 Australian Securities and Investments Commission Act 2001 (Cth) s 12ED; Investment Advisers Act Release No. 3060 (28 July 2010) (United States).

67 See Paterson, ‘Making Robo-Advisers Careful’; ASIC, Providing Digital Financial Product Advice to Retail Clients, para 255.55.

68 Melanie L Fein, ‘Regulation of Robo-Advisers in the United States’ in Peter Scholz (ed), Robo-Advisory (Palgrave Macmillan, 2021), 112.

69 ASIC, Providing Digital Financial Product Advice to Retail Clients, paras 255.60, 255.73; Division of Investment Management, Robo Advisers (IM Guidance Update No 2017-02, February 2017) 8 <www.sec.gov/investment/im-guidance-2017-02.pdf> accessed 20 May 2022.

70 Financial Conduct Authority, ‘Automated Investment Services – Our Expectations’; European Securities Markets Authority, Guidelines on Certain Aspects of the MiFID II Suitability Requirements (Guidelines, 28 May 2018); Division of Investment Management, Robo Advisers (IM Guidance Update No 2017-02, February 2017) 3–6 <www.sec.gov/investment/im-guidance-2017-02.pdf> accessed 22 May 2020.

71 Jeannie Paterson and Yvette Maker, ‘AI in the Home: Artificial Intelligence and Consumer Protection’ in Ernest Lim and Phillip Morgan (eds), The Cambridge Handbook of Private Law and Artificial Intelligence (Cambridge: Cambridge University Press, forthcoming, 2024).

72 On this trade-off, see also Matthew Adam Bruckner, ‘The Promise and Perils of Algorithmic Lenders’ Use of Big Data’ (2018) 93 Chicago-Kent Law Review 3. Also, Zetsche et al, ‘From Fintech to Techfin’, 427.

73 See especially ‘Intuit Privacy Policy’, Mint (Web Page) <www.intuit.com/privacy/statement/> accessed 11 July 2023; ‘Privacy Policy’, Frollo (Web Page) <https://frollo.com.au/privacy-policy/> accessed 11 July 2023; ‘Privacy Policy’, Pocketguard (Web Page) <https://pocketguard.com/privacy/> accessed 11 July 2023.

74 See, eg, Regulation (EU) 2016/679 on the protection of natural persons with regard to the processing of personal data and on the free movement of such data, and repealing Directive 95/46/EC (General Data Protection Regulation) [2016] OJ L119/1; Data Protection Act 2018 (UK); Privacy Act 1988 (Cth); California Consumer Privacy Act, 1.81.5 Cal Civ Code § 1798.100–1798.199.100 (2018).

75 OECD, Personal Data Use in Financial Services, 20.

76 Ibid; Bednarz, ‘There and Back Again’.

77 Centre for Data Ethics and Innovation, Review into Bias in Algorithmic Decision-Making (Report, November 2020) 21.

78 Ryan Calo, ‘Digital Market Manipulation’ (2014) 82 George Washington Law Review 995; Bednarz, ‘There and Back Again’.

79 See generally Ari Ezra Waldman, ‘Power, Process, and Automated Decision-Making’ (2019) 88(2) Fordham Law Review 613; Australian Human Rights Commission, Human Rights and Technology (Final Report, 2021).

80 Emmanuel Martinez and Lauren Kirchner, ‘The Secret Bias Hidden in Mortgage-Approval Algorithms’ (25 August 2021) The Markup <https://themarkup.org/denied/2021/08/25/the-secret-bias-hidden-in-mortgage-approval-algorithms> accessed 22 May 2022.

81 See, e.g., Ramnath Balasubramanian, Ari Libarikian, and Doug McElhaney, McKinsey & Co, Insurance 2030: The Impact of AI on the Future of Insurance (Report, 12 March 2021) <www.mckinsey.com/industries/financial-services/our-insights/insurance2030-the-impact-of-ai-on-the-future-of-insurance> accessed 22 May 2022; Zofia Bednarz and Kayleen Manwaring, ‘Keeping the (Good) Faith: Implications of Emerging Technologies for Consumer Insurance Contracts’ (2021) 43 Sydney Law Review 455, 470–75.

82 Jennifer Miller, ‘A Bid to End Loan Bias’ (20 September 2020) The New York Times <https://link.gale.com/apps/doc/A635945144/AONE?u=unimelb&sid=bookmark-AONE&xid=164a6017> accessed 22 May 2022.

83 Andreas Fuster et al, ‘Predictably Unequal? The Effects of Machine Learning on Credit Markets’ (2022) 77(1) Journal of Finance 1.

84 Will Douglas Heaven, ‘Bias Isn’t the Only Problem with Credit Scores – and No, AI Can’t Help’ MIT Technology Review (Blog Post, 17 June 2021) <www.technologyreview.com/2021/06/17/1026519/racial-bias-noisy-data-credit-scores-mortgage-loans-fairness-machine-learning/> accessed 20 May 2020; Laura Blattner and Scott Nelson, ‘How Costly Is Noise? Data and Disparities in Consumer Credit’ (2021) arXiv 2105.07554 <https://arxiv.org/abs/2105.07554> accessed 20 May 2022.

85 See also Zetsche et al, ‘From Fintech to Techfin’, 424.

86 See, e.g., Sian Townson, ‘AI Can Make Bank Loans More Fair’ Harvard Business Review (Article, 6 November 2020) <https://hbr.org/2020/11/ai-can-make-bank-loans-more-fair>.

87 Elisa Jillson, ‘Aiming for Truth, Fairness, and Equity in Your Company’s Use of AI’ Federal Trade Commission Business Blog (Blog Post, 19 April 2021) <www.ftc.gov/business-guidance/blog/2021/04/aiming-truth-fairness-equity-your-companys-use-ai> accessed 22 May 2022.

88 Commonwealth Government, Inquiry into Future Directions for the Consumer Data Right, 66, 172. See also Zetsche et al, ‘From Fintech to Techfin’, 418–22.

89 Emma Leong and Jodi Gardner, ‘Open Banking in the UK and Singapore: Open Possibilities for Enhancing Financial Inclusion’ (2021) 5 Journal of Business Law 424, 426.

90 See, e.g., Tully (Web Page) <https://tullyapp.com>; Touco (Web Page) <https://usetouco.com>.

91 Leong and Gardner, ‘Open Banking in the UK and Singapore’, 429.

92 Financial Conduct Authority, Call for Input: Open Finance (Publication, 2019) 8 [2.11], discussed in Commonwealth Government, Inquiry into Future Directions for the Consumer Data Right, 66.

93 Commonwealth Government, Inquiry into Future Directions for the Consumer Data Right, 171.

94 See Jenna Burrell, ‘How the Machine “Thinks”: Understanding Opacity in Machine Learning Algorithms’ (2016) 3(1) Big Data & Society 1; Jennifer Cobbe, Michelle Seng Ah Lee, and Jatinder Singh, ‘Reviewable Automated Decision-Making: A Framework for Accountable Algorithmic Systems’ (ACM Conference on Fairness, Accountability, and Transparency, 1–10 March 2021) <https://ssrn.com/abstract=3772964> accessed 22 May 2022.

95 William Magnuson, ‘Artificial Financial Intelligence’ (2020) 10 Harvard Business Law Review 337, 340.

96 Bednarz, ‘There and Back Again’; Martinez and Kirchner, ‘The Secret Bias Hidden in Mortgage-Approval Algorithms’.

97 See Anna Jobin, Marcello Ienca, and Effy Vayena, ‘The Global Landscape of AI Ethics Guidelines’ (2019) 1 Nature Machine Intelligence 389, 389: ‘Our results reveal a global convergence emerging around five ethical principles (transparency, justice and fairness, non-maleficence, responsibility and privacy)’.

98 Australian Human Rights Commission, Human Rights and Technology, 54; Brent Mittelstadt, ‘Principles Alone Cannot Guarantee Ethical AI’ (2019) 1 Nature Machine Intelligence 501.

99 Lorne Sossin and Charles W Smith, ‘Hard Choices and Soft Law: Ethical Codes, Policy Guidelines and the Role of the Courts in Regulating Government’ (2003) 40 Alberta Law Review 867.

100 Jake Goldenfein, ‘Algorithmic Transparency and Decision-Making Accountability: Thoughts for Buying Machine Learning Algorithms’ in Cliff Bertram, Asher Gibson, and Adriana Nugent (eds), Closer to the Machine: Technical, Social, and Legal Aspects of AI (Office of the Victorian Information Commissioner, 2019) 43: ‘[T]he time and place for instilling public values like accountability and transparency is in the design and development of technological systems, rather than after-the-fact regulation and review’.

101 See Jobin et al, ‘The Global Landscape of AI Ethics Guidelines’, 389: ‘Our results reveal a global convergence emerging around five ethical principles (transparency, justice and fairness, non-maleficence, responsibility and privacy)’.

102 Australian Government Department of Industry, Science, Energy and Resources, Australia’s Artificial Intelligence Ethics Framework (Report, 2019) <www.industry.gov.au/data-and-publications/building-australias-artificial-intelligence-capability/ai-ethics-framework> accessed 22 May 2022; Australian Council of Learned Academics, The Effective and Ethical Development of Artificial Intelligence: An Opportunity to Improve Our Wellbeing (Report, July 2019) 132; Australian Human Rights Commission, Human Rights and Technology, 49; European Commission, Artificial Intelligence: A European Approach to Excellence and Trust (White Paper, 2020) 20; Select Committee on Artificial Intelligence, AI in the UK: Ready, Willing and Able? (Report, HL 2017–2019) 38.

103 Jobin et al, ‘The Global Landscape of AI Ethics Guidelines’; Institute of Electrical and Electronics Engineers, Ethically Aligned Design: A Vision for Prioritizing Human Well-Being with Autonomous and Intelligent Systems (Report, 2019) 21; Australian Council of Learned Academics, The Effective and Ethical Development of Artificial Intelligence, 105; Australian Human Rights Commission, Human Rights and Technology, 50.

104 Henrietta Lyons, Eduardo Velloso, and Tim Miller, ‘Conceptualising Contestability: Perspectives on Contesting Algorithmic Decisions’ (2021) 5 Proceedings of the ACM on Human-Computer Interaction <https://arxiv.org/abs/2103.01774> accessed 22 May 2022.

105 See Australian Government Department of Industry, Science, Energy and Resources, Australia’s Artificial Intelligence Ethics Framework; European Commission, High-Level Expert Group on Artificial Intelligence, Ethics Guidelines for Trustworthy AI (Guidelines, 8 April 2019) <https://digital-strategy.ec.europa.eu/en/library/ethics-guidelines-trustworthy-ai> accessed 20 May 2022.

106 See Australian Government Department of Industry, Science, Energy and Resources, Australia’s Artificial Intelligence Ethics Framework.

107 Financial Conduct Authority, Automated Investment Services: Our Expectations; ASIC, Providing Digital Financial Product Advice to Retail Clients, para 255.98.

108 See also Brenner and Meyll, ‘Robo-Advisors: A Substitute for Human Financial Advice?’ (substitution effect of robo-advisers is especially driven by investors concerned about investment fraud from human advisers).

109 See Jeannie Paterson, ‘Misleading AI’ (2023) 34 (Symposium) Loyola University Chicago School of Law Consumer Law Review 558.

110 See Select Committee on Artificial Intelligence, ‘AI in the UK’, 40; Australian Human Rights Commission, Human Rights and Technology, 75.

111 On explanations, see Tim Miller, ‘Explanation in Artificial Intelligence: Insights from the Social Sciences’ (2019) 267(1) Artificial Intelligence 1; Sandra Wachter, Brent Mittelstadt, and Chris Russell, ‘Counterfactual Explanations without Opening the Black Box: Automated Decisions and the GDPR’ (2018) 31 Harvard Journal of Law & Technology 841; Jonathan Dodge et al, ‘Explaining Models: An Empirical Study of How Explanations Impact Fairness Judgment’ (International Conference on Intelligent User Interfaces, Marina del Ray, 17–20 March 2019).

112 Tim Miller, ‘Explainable Artificial Intelligence: What Were You Thinking?’ in N Wouters, G Blashki, and H Sykes (eds), Artificial Intelligence: For Better or Worse (Future Leaders, 2019) 19, 21; Wachter et al, ‘Counterfactual Explanations without Opening the Black Box’, 844.

113 Umang Bhatt et al, ‘Explainable Machine Learning in Deployment’ (Conference on Fairness, Accountability, and Transparency, Barcelona, January 2020) 648.

114 See Miller, ‘Explanation in Artificial Intelligence: Insights from the Social Sciences’; Wachter et al, ‘Counterfactual Explanations without Opening the Black Box’.

115 See also Karen Yeung and Adrian Weller, ‘How Is “Transparency” Understood by Legal Scholars and the Machine Learning Community’ in Mireille Hildebrandt et al (eds), Being Profiled: Cogitas Ergo Sum (Amsterdam University Press, 2018); John Zerilli et al, ‘Transparency in Algorithmic and Human Decision-Making: Is There a Double Standard?’ (2019) 32 Philosophy and Technology 661.

116 See generally Robert A Hillman and Jeffrey J Rachlinski, ‘Standard-Form Contracting in the Electronic Age’ (2002) 77 New York University Law Review 429; Russell Korobkin, ‘Bounded Rationality, Standard Form Contracts, and Unconscionability’ (2003) 70 University of Chicago Law Review 1203.

117 Wachter et al, ‘Counterfactual Explanations without Opening the Black Box’, 843. See also Miller, ‘Explanation in Artificial Intelligence: Insights from the Social Sciences’.

118 See Wachter et al, ‘Counterfactual Explanations without Opening the Black Box’, 843.

119 Lyons et al, ‘Conceptualising Contestability’.

120 Madeleine Clare Elish, ‘Moral Crumple Zones: Cautionary Tales in Human-Robot Interaction’ (pre-print, 1 March 2019) Engaging Science, Technology, and Society <http://dx.doi.org/10.2139/ssrn.2757236>.

121 See also Cobbe et al, ‘Reviewable Automated Decision-Making: A Framework for Accountable Algorithmic Systems’ (discussing the principle of reviewability as a core element of accountability for automated decision-making systems).

122 Baker and Dellaert, ‘Regulating Robo Advice across the Financial Services Industry’, 724. Cf Proposal for a Regulation Laying Down Harmonised Rules on Artificial Intelligence (Artificial Intelligence Act) and Amending Certain Union Legislative Acts, COM(2021) 206 final, 2021/0106(COD) (EU AI Draft Regulations).

123 Compare Cobbe et al, ‘Reviewable Automated Decision-Making: A Framework for Accountable Algorithmic Systems’.

124 Brent Mittelstadt, ‘Auditing for Transparency in Content Personalization Systems’ (2016) 10 International Journal of Communication 4991.

125 See, e.g., Australian Government Department of Industry, Science, Energy and Resources, Australia’s Artificial Intelligence Ethics Framework.

126 Lyons et al, ‘Conceptualising Contestability’, 1–2.

3 Leveraging AI to Mitigate Money Laundering Risks in the Banking System

* The author wishes to thank Isabelle Nicolas for her excellent research assistance.

1 Black’s Law Dictionary (2009), 1097.

2 ‘Money Laundering’, United Nations Office on Drugs and Crime (Web Page) <www.unodc.org/unodc/en/money-laundering/overview.html>.

3 Ana Isabel Canhoto, ‘Leveraging Machine Learning in the Global Fight against Money Laundering and Terrorism Financing: An Affordances Perspective’ (2021) 131 Journal of Business Research 441 at 449.

6 FATF, Opportunities and Challenges of New Technologies for AML/CTF (Report, 2021) 5 <www.fatf-gafi.org/media/fatf/documents/reports/Opportunities-Challenges-of-New-Technologies-for-AML-CFT.pdf>.

8 Doron Goldbarsht, ‘Who’s the Legislator Anyway? How the FATF’s Global Norms Reshape Australian Counter Terrorist Financing Laws’ (2017) 45 Federal Law Review 127. See also ‘About’, FATF (Web Page) <www.fatf-gafi.org/about/whoweare/#d.en.11232>.

9 FATF, The FATF Recommendations.

10 Ibid, Recommendation 10.

11 Ibid, Recommendations 10, 11.

12 Ibid, Recommendation 20.

13 AUSTRAC, Australia’s Major Banks: Money Laundering and Terrorism Financing Risk Assessment (Report, 2021) <www.austrac.gov.au/sites/default/files/2021-09/Major%20Banks%20ML_TF_Risk%20Assessment%202021.pdf>.

15 Department of Justice, Office of Public Affairs, ‘Credit Suisse Agrees to Forfeit $536 Million in Connection with Violations of the International Emergency Economic Powers Act and New York State Law’ (Media Release, 16 December 2009) <www.justice.gov/opa/pr/credit-suisse-agrees-forfeit-536-million-connection-violations-international-emergency>.

16 Andrew Clark, ‘Lloyds Forfeits $350 m for Disguising Origin of Funds from Iran and Sudan’ (10 January 2009) The Guardian <www.theguardian.com/business/2009/jan/10/lloyds-forfeits-350m-to-us>.

17 Associated Press, ‘HSBC to Pay $1.9b to Settle Money-Laundering Case’ (11 December 2012) CBC News <www.cbc.ca/news/business/hsbc-to-pay-1-9b-to-settle-money-laundering-case-1.1226871>.

18 Toby Sterling and Bart H Meijer, ‘Dutch Bank ING Fined $900 Million for Failing to Spot Money Laundering’ (4 September 2018) Reuters <www.reuters.com/article/us-ing-groep-settlement-money-laundering-idUSKCN1LK0PE>.

19 AUSTRAC, ‘AUSTRAC and CBA Agree $700 m Penalty’ (Media Release, 4 June 2018) <www.austrac.gov.au/austrac-and-cba-agree-700m-penalty>.

21 Brian Monroe, ‘More than $8 Billion in AML Fines Handed Out in 2019, with USA and UK Leading the Charge: Analysis’ (2021) ACFCS <www.acfcs.org/fincrime-briefing-aml-fines-in-2019-breach-8-billion-treasury-official-pleads-guilty-to-leaking-2020-crypto-compliance-outlook-and-more/>.

22 AUSTRAC, ‘AUSTRAC and Westpac Agree to Proposed $1.3bn Penalty’ (Media Release, 24 September 2020) <www.austrac.gov.au/news-and-media/media-release/austrac-and-westpac-agree-penalty>.

23 Emily Flitter, ‘Citigroup Is Fined $400 Million over “Longstanding” Internal Problems’ (7 October 2020) New York Times <www.nytimes.com/2020/10/07/business/citigroup-fine-risk-management.html>.

24 Michael Corkery and Ben Protess, ‘Citigroup Agrees to $97.4 Million Settlement in Money Laundering Inquiry’ (22 May 2017) New York Times <www.nytimes.com/2017/05/22/business/dealbook/citigroup-settlement-banamex-usa-inquiry.html>.

25 Richard Grint, Chris O’Driscoll, and Sean Paton, New Technologies and Anti-money Laundering Compliance: Financial Conduct Authority (Report, 31 March 2017) <www.fca.org.uk/publication/research/new-technologies-in-aml-final-report.pdf>.

26 AUSTRAC, ‘AUSTRAC Accepted Enforceable Undertaking from National Australia Bank’ (Media Release, 2 May 2022) <www.austrac.gov.au/news-and-media/media-release/enforceable-undertaking-national-australia-bank>.

27 Barry R Johnston and Ian Carrington, ‘Protecting the Financial System from Abuse: Challenges to Banks in Implementing AML/CFT Standards’ (2006) 9 Journal of Money Laundering 49.

29 Raghad Al-Shabandar et al, ‘The Application of Artificial Intelligence in Financial Compliance Management’, in Proceedings of the 2019 International Conference on Artificial Intelligence and Advanced Manufacturing (New York: Association for Computing Machinery, 2019).

30 KPMG, Global Anti-money Laundering Survey: How Banks Are Facing Up to the Challenge (2004), cited in Johnston and Carrington, ‘Protecting the Financial System’, 58.

31 Howard Kunreuther, ‘Risk Analysis and Risk Management in an Uncertain World’ (2002) 22 Risk Analysis 655, cited in Canhoto, ‘Leveraging Machine Learning’, 443.

32 Zhiyuan Chen et al, ‘Machine Learning Techniques for Anti-money Laundering (AML) Solutions in Suspicious Transaction Detection: A Review’ (2018) 57 Knowledge and Information Systems 245.

33 Grint et al, New Technologies and Anti-money Laundering Compliance: Financial Conduct Authority.

34 Institute of International Finance, Machine Learning in Anti-money Laundering: Summary Report (Report, 2018) <www.iif.com/portals/0/Files/private/32370132_iif_machine_learning_in_aml_-_public_summary_report.pdf>.

35 Praveen Kumar Donepudi, ‘Machine Learning and Artificial Intelligence in Banking’ (2017) 5 Engineering International 84.

36 Ana Fernandez, ‘Artificial Intelligence in Financial Services’, Economic Bulletin, June 2019, 1.

37 FATF, Opportunities and Challenges of New Technologies for AML/CTF, 22.

38 Pariwat Ongsulee, ‘Artificial Intelligence, Machine Learning and Deep Learning’ (15th International Conference on ICT and Knowledge Engineering, 2017).

39 Steven S Skiena, The Algorithm Design Manual (London: Springer, 2008), cited in Canhoto, ‘Leveraging Machine Learning’, 443.

40 Ana Isabel Canhoto and Fintan Clear, ‘Artificial Intelligence and Machine Learning as Business Tools: A Framework for Diagnosing Value Destruction Potential’ (2020) 63 Business Horizons 183, cited in Canhoto, ‘Leveraging Machine Learning’, 444.

41 FATF, Opportunities and Challenges of New Technologies for AML/CTF, 22.

42 Alessa, Webinar – An Executive Guide on How to Use Machine Learning and AI for AML Compliance (Video, 2019) <www.youtube.com/watch?v=k46_UY4DGXU>.

43 While this chapter is primarily concerned with the adoption of AI by banks for AML purposes, AI is also increasingly relied on by AML regulators. Occurring in parallel with increased regulatory demands, the evolution of AI in regulatory technology promised to improve compliance monitoring, as well as reduce costs, which undoubtedly motivated its uptake. See Hannah Harris, ‘Artificial Intelligence and Policing of Financial Crime: A Legal Analysis of the State of the Field’ in Doron Goldbarsht and Louis de Koker (eds), Financial Technology and the Law (Cham: Springer, 2022); Lyria Bennett Moses and Janet Chan, ‘Algorithmic Prediction in Policing: Assumptions, Evaluation, and Accountability’ (2018) 28 Policing and Society 806; Douglas W Arner, Janos Barberis, and Ross Buckley, ‘FinTech, RegTech, and the Reconceptualization of Financial Regulation’ (2017) 37 Northwestern Journal of International Law and Business 390.

44 FATF, Opportunities and Challenges of New Technologies for AML/CTF, 11.

45 Institut Polytechnique de Paris, ‘More AI, and Less Box-Ticking, Says FATF in AML/CTF Report’ (Media Release, 13 July 2021) <www.telecom-paris.fr/more-ai-less-box-ticking-fatf-aml-cft>.

46 Dattatray Vishnu Kute et al, ‘Deep Learning and Explainable Artificial Intelligence Techniques Applied for Detecting Money Laundering – A Critical Review’ (IEEE Access, 2021) 82301.

47 Ibid, 82301.

49 Jingguang Han et al, ‘Artificial Intelligence for Anti-money Laundering: A Review and Extension’ (2020) 2 Digital Finance 213.

50 Ibid, 219.

51 FATF, Opportunities and Challenges of New Technologies for AML/CTF, 12.

52 Alessa, Webinar.

53 Richard Paxton, ‘Is AI Changing the Face of Financial Crimes and Money Laundering?’ (26 August 2021) Medium <https://medium.com/@alacergroup/is-ai-changing-the-face-of-financial-crimes-money-laundering-912ce0d168bd>.

54 FATF, Opportunities and Challenges of New Technologies for AML/CTF, 13.

56 Ilze Calitz, ‘AI: The Double-Edged Sword in AML/CTF Compliance’ (27 January 2021) ACAMS Today <www.acamstoday.org/ai-the-double-edged-sword-in-aml-ctf-compliance/>.

57 Board of Governors of the Federal Reserve System, Federal Deposit Insurance Corporation, Financial Crimes Enforcement Network, National Credit Union Administration, and Office of the Comptroller of the Currency, Joint Statement on Innovative Efforts to Combat Money Laundering and Terrorist Financing (3 December 2018).

58 AUSTRAC, Annual Report 2020–21 (Report, 2021) 21.

60 Bob Contri and Rob Galaski, ‘How AI Is Transforming the Financial Ecosystem’ (2018), cited in Deloitte and United Overseas Bank, The Case for Artificial Intelligence in Combating Money Laundering and Terrorist Financing: A Deep Dive into the Application of Machine Learning Technology (Report, 2018) 4.

61 FATF, Opportunities and Challenges of New Technologies for AML/CTF, 14.

62 Mark Luber, cited in Markets Insider, ‘Machine Learning and Artificial Intelligence Algorithm Paves New Ways for Anti-money Laundering Compliance in LexisNexis Risk Solutions’ Award-Winning Solution’ (Media Release, 14 November 2018) <https://markets.businessinsider.com/news/stocks/machine-learning-and-artificial-intelligence-algorithm-paves-new-ways-for-anti-money-laundering-compliance-in-lexisnexis-risk-solutions-award-winning-solution-1027728213>.

63 Deloitte and United Overseas Bank, The Case, 25.

65 Fernandez, ‘Artificial Intelligence’, 2.

66 Deloitte and United Overseas Bank, The Case, 29.

67 FATF, Opportunities and Challenges of New Technologies for AML/CTF, 20.

68 Deloitte and United Overseas Bank, The Case, 29.

69 Financial Stability Board, Artificial Intelligence and Machine Learning in Financial Services: Market Developments and Financial Stability Implications (Report, 1 November 2017) 23.

70 ‘Strengthening AML Protection through AI’ (July 2018) Financier Worldwide Magazine <www.financierworldwide.com/strengthening-aml-protection-through-ai#.YV6BGi0Rrw4>.

72 Financial Stability Board, Artificial Intelligence and Machine Learning in Financial Services, 9.

74 Ratna Sahay et al, ‘Financial Inclusion: Can It Meet Multiple Macroeconomic Goals?’ (IMF Staff Discussion Note SDN/15/17, September 2015).

75 FATF, Opportunities and Challenges of New Technologies for AML/CTF, 17.

76 Grint et al, New Technologies and Anti-money Laundering Compliance: Financial Conduct Authority.

77 FATF, Opportunities and Challenges of New Technologies for AML/CTF, 36.

78 Financial Stability Board, Artificial Intelligence and Machine Learning in Financial Services, 28.

79 Erik Brynjolfsson and Andrew McAfee, ‘Artificial Intelligence, for Real’, Harvard Business Review: The Big Idea (July 2017) 10 <https://starlab-alliance.com/wp-content/uploads/2017/09/AI-Article.pdf>.

80 Financial Stability Board, Artificial Intelligence and Machine Learning in Financial Services, 26.

81 Regulation (EU) 2016/679 of the European Parliament and of the Council of 27 April 2016 on the protection of natural persons with regard to the processing of personal data and on the free movement of such data, and repealing Directive 95/46/EC (General Data Protection Regulation) [2016] OJ L 119/1. See Christa Savia, ‘Processing Financial Crime Data under the GDPR in Light of the 5th Anti-money Laundering Directive’, Thesis, Örebro Universitet (2019) <www.diva-portal.org/smash/get/diva2:1353108/FULLTEXT01.pdf>.

82 Savia, ‘Processing Financial Crime Data’.

83 Penny Crosman, ‘Can AI’s “Black Box” Problem Be Solved?’ (1 January 2019) American Banker 2.

84 FATF, Opportunities and Challenges of New Technologies for AML/CTF, 36.

86 Alessa, Webinar.

87 Lyria Bennett Moses, ‘Not a Single Singularity’ in Simon Deakin and Christopher Markou (eds), Is Law Computable? Critical Perspectives on Law and Artificial Intelligence (Oxford: Hart, 2020) 207.

88 Mireille Hildebrandt, ‘Code-Driven Law: Freezing the Future and Scaling the Past’ in Simon Deakin and Christopher Markou (eds), Is Law Computable? Critical Perspectives on Law and Artificial Intelligence (Oxford: Hart, 2020) 67.

90 McKinsey & Company, Transforming Approaches to AML and Financial Crime.

92 FATF, Opportunities and Challenges of New Technologies for AML/CTF, 38.

93 FATF, Partnering in the Fight against Financial Crime: Data Protection, Technology and Private Sector Information Sharing (Report, July 2022) 12 <www.fatf-gafi.org/media/fatf/documents/Partnering-int-the-fight-against-financial-crime.pdf>.

95 FATF, The FATF Recommendations, Recommendation 21.

96 Juan Carlos Crisanto et al, From Data Reporting to Data Sharing: How Far Can Suptech and Other Innovations Challenge the Status Quo of Regulatory Reporting? (Financial Stability Institute Insights No 29, 16 December 2020) 2.

97 FATF, Stock Take on Data Pooling, Collaborative Analytics and Data Protection (Report, July 2021), 11 <www.fatf-gafi.org/media/fatf/documents/Stocktake-Datapooling-Collaborative-Analytics.pdf>.

98 FATF, Opportunities and Challenges of New Technologies for AML/CTF, 41.

99 Financial Stability Board, Artificial Intelligence and Machine Learning in Financial Services, 31.

100 Crisanto et al, From Data Reporting, 5.

101 FATF, Opportunities and Challenges of New Technologies for AML/CTF, 39.

102 General Data Protection Regulation, art. 22.

103 FATF, Opportunities and Challenges of New Technologies for AML/CTF, 39.

104 Ibid, 41.

105 KYC is an element of CDD that aims to prevent people from opening accounts anonymously or under a false name. See FATF, Opportunities and Challenges of New Technologies for AML/CTF, 43.

106 FATF, The FATF Recommendations, Recommendation 10.

107 Financial Stability Board, Artificial Intelligence and Machine Learning in Financial Services, 20.

108 Ibid, 20.

109 Finextra, ‘Responsible Artificial Intelligence for Anti-money Laundering: How to Address Bias’ (Blog, 1 September 2021) <www.finextra.com/blogposting/20830/responsible-artificial-intelligence-for-anti-money-laundering-how-to-address-bias>.

110 Financial Stability Board, Artificial Intelligence and Machine Learning in Financial Services, 27.

111 World Bank, Principles on Identification for Sustainable Development: Toward the Digital Age (Report, 2021) <https://documents1.worldbank.org/curated/en/213581486378184357/pdf/Principles-on-Identification-for-Sustainable-Development-Toward-the-Digital-Age.pdf>.

112 Financial Stability Board, Artificial Intelligence and Machine Learning in Financial Services, 27.

113 Lyria Bennett Moses and Janet Chan, ‘Using Big Data for Legal/Law Enforcement Decisions: Testing the New Tools’ (2014) 37 UNSW Law Journal 672.

114 Bettina Berendt and Sören Preibusch, ‘Better Decision Support through Exploratory Discrimination-Aware Data Mining: Foundations and Empirical Evidence’ (2014) 22 Artificial Intelligence and Law 180.

115 Ibid, 180.

116 Janet Chan and Lyria Bennett Moses, ‘Making Sense of Big Data for Security’ (2016) 57 British Journal of Criminology 299.

117 Ibid, 314.

118 FATF, Opportunities and Challenges of New Technologies for AML/CTF, 43.

119 Financial Stability Board, Artificial Intelligence and Machine Learning in Financial Services, 37.

120 US President’s Council of Advisors on Science and Technology, cited in Moses and Chan, ‘Using Big Data’, 647.

121 Fernandez, ‘Artificial Intelligence’, 6.

122 Samir Chopra and Laurence F White, ‘Tort Liability for Artificial Agents’ in Samir Chopra and Laurence F White (eds), A Legal Theory for Autonomous Artificial Agents (Ann Arbor: University of Michigan Press, 2011) 120.

123 Ibid, 154.

124 Leon E Wein, ‘The Responsibility of Intelligent Artifacts: Toward an Automation Jurisprudence’ (1992) 6 Harvard Journal of Law and Technology 110, cited in Chopra and White, ‘Tort Liability’, 121.

125 Chopra and White, ‘Tort Liability’, 122.

126 Pintarich v Federal Commissioner of Taxation (2018) 262 FCR 41; [2018] FCAFC 79. This case is relevant to the applicability of judicial review to decisions made by machines.

127 Financial Stability Board, Artificial Intelligence and Machine Learning in Financial Services, 26.

128 Chopra and White, ‘Tort Liability’, 130.

129 Ibid, 126.

130 Ibid, 125.

131 Samir Chopra and Laurence F White, ‘Personhood for Artificial Agents’ in Samir Chopra and Laurence F White (eds), A Legal Theory for Autonomous Artificial Agents (Ann Arbor: University of Michigan Press, 2011). In Australia, AI has already been granted recognition as an inventor in patent applications, suggesting that there is a cultural shift occurring that challenges assumptions in relation to the influence and abilities of AI. See Alexandra Jones, ‘Artificial Intelligence Can Now Be Recognised as an Inventor after Historic Australian Court Decision’ (1 August 2021) ABC News <www.abc.net.au/news/2021-08-01/historic-decision-allows-ai-to-be-recognised-as-an-inventor/100339264>.

132 Chopra and White, ‘Personhood’, 173.

133 Financial Stability Board, Artificial Intelligence and Machine Learning in Financial Services, 26.

134 Basel Committee on Banking Supervision, Revisions to the Principles for Sound Management of Operational Risk (Report, 2021) 16.

135 FATF, Opportunities and Challenges of New Technologies for AML/CTF, 40.

136 Canhoto, ‘Leveraging Machine Learning’, 448.

137 Merendino et al (2018), cited in Canhoto, ‘Leveraging Machine Learning’, 448.

138 FATF, Opportunities and Challenges of New Technologies for AML/CTF, 22.

139 Deloitte and United Overseas Bank, The Case, 25; Fernandez, ‘Artificial Intelligence’, 2; FATF, Opportunities and Challenges of New Technologies for AML/CTF, 20.

140 Kute et al, ‘Deep Learning’, 82313.

141 Ouren Kuiper et al, ‘Exploring Explainable AI in the Financial Sector: Perspectives of Banks and Supervisory Authorities’ in Luis A Leiva et al (eds), Artificial Intelligence and Machine Learning (Cham: Springer, 2022) 105.

142 Ibid, 105.

143 Grint et al, New Technologies and Anti-money Laundering Compliance: Financial Conduct Authority.

4 AI Opacity in the Financial Industry and How to Break It

* The authors would like to thank Arundhati Suma Ajith for excellent research assistance.

1 Frank Pasquale, The Black Box Society (Cambridge: Harvard University Press, 2015) 187.

3 Janine S Hiller and Lindsay Sain Jones, ‘Who’s Keeping Score?: Oversight of Changing Consumer Credit Infrastructure’ (2022) 59(1) American Business Law Journal 61, 104.

4 Pernille Hohnen, Michael Ulfstjerne, and Mathias Sosnowski Krabbe, ‘Assessing Creditworthiness in the Age of Big Data: A Comparative Study of Credit Score Systems in Denmark and the US’ (2021) 5(1) Journal of Extreme Anthropology 29, 34–35.

5 Solon Barocas and Andrew D Selbst, ‘Big Data’s Disparate Impact’ (2016) 104(3) California Law Review 671, 673–77.

6 Alejandro Barredo Arrieta et al, ‘Explainable Artificial Intelligence (XAI): Concepts, Taxonomies, Opportunities and Challenges toward Responsible AI’ (2020) 58 Information Fusion 82, 99–101.

7 Peter Cartwright, ‘Understanding and Protecting Vulnerable Financial Consumers’ (2014) 38(2) Journal of Consumer Policy 119, 121–23.

8 Frederik Borgesius, ‘Consent to Behavioural Targeting in European Law: What Are the Policy Implications of Insights from Behavioural Economics?’ (Conference Paper for Privacy Law Scholars Conference, Berkeley, CA, 6–7 June 2013).

9 Petra Persson, ‘Attention Manipulation and Information Overload’ (2018) 2(1) Behavioural Public Policy 78.

10 Andrew Grant and Luke Deer, ‘Consumer Marketplace Lending in Australia: Credit Scores and Loan Funding Success’ (2020) 45(4) Australian Journal of Management 607.

11 Zofia Bednarz and Kayleen Manwaring, ‘Risky Business: Legal Implications of Emerging Technologies Affecting Consumers of Financial Services’ in Dariusz Szostek and Mariusz Zalucki (eds), Internet and New Technologies Law: Perspectives and Challenges (Baden: Nomos, 2021) 59–74.

12 Aaron Klein, Brookings Institution, Reducing Bias in AI-Based Financial Services (Report, 10 July 2020) <www.brookings.edu/research/reducing-bias-in-ai-based-financial-services/>.

13 Hohnen et al, ‘Assessing Creditworthiness’, 36.

14 Zofia Bednarz, Chris Dolman, and Kimberlee Weatherall, ‘Insurance Underwriting in an Open Data Era – Opportunities, Challenges and Uncertainties’ (Actuaries Institute 2022 Summit, 2–4 May 2022) 10–12 <https://actuaries.logicaldoc.cloud/download-ticket?ticketId=09c77750-aa90-4ba9–835e-280ae347487b>.

15 Su-Lin Tan, ‘Uber Eats, Afterpay and Netflix Accounts Could Hurt Your Home Loan Application’ (5 December 2018) Australian Financial Review <www.afr.com/property/uber-eats-afterpay-and-netflix-accounts-could-hurt-your-home-loan-application-20181128-h18ghz>.

16 ‘Credit Bureau’, Experian Australia (Web Page) <www.experian.com.au/business/solutions/credit-services/credit-bureau>. ‘Secured from critical sectors of the Australian credit industry as well as from niche areas such as Specialty Finance data, short-term loans (including Buy Now Pay Later) and consumer leasing, enabling a more complete view of your customers’.

17 Michal Kosinski, David Stillwell, and Thore Graepel, ‘Private Traits and Attributes Are Predictable from Digital Records of Human Behavior’ (2013) 110 Proceedings of the National Academy of Sciences of the United States of America 5805.

18 Anya ER Prince and Daniel Schwarcz, ‘Proxy Discrimination in the Age of Artificial Intelligence and Big Data’ (2020) 105(3) Iowa Law Review 1257, 1273–76.

19 Eric Rosenblatt, Credit Data and Scoring: The First Triumph of Big Data and Big Algorithms (Cambridge: Elsevier Academic Press, 2020) 1.

20 Hiller and Jones, ‘Who’s Keeping Score?’, 68–77.

21 Rosenblatt, Credit Data and Scoring, 7.

22 Hohnen et al, ‘Assessing Creditworthiness’, 36.

23 ‘What’s in My FICO® Scores?’, MyFico (Web Page) <www.myfico.com/credit-education/whats-in-your-credit-score>.

24 Consumer Financial Protection Bureau, ‘The Impact of Differences between Consumer- and Creditor-Purchased Credit Scores’ (SSRN Scholarly Paper No 3790609, 19 July 2011) 19.

25 ‘What Is a Good Credit Score?’, Equifax Canada (Web Page) <www.consumer.equifax.ca/personal/education/credit-score/what-is-a-good-credit-score>; ‘FICO Score 10, Most Predictive Credit Score in Canadian Market’, FICO Blog (Web Page) <www.fico.com/blogs/fico-score-10-most-predictive-credit-score-canadian-market>.

26 Frederic de Mariz, ‘Using Data for Financial Inclusion: The Case of Credit Bureaus in Brazil’ (SSRN Paper, Journal of International Affairs, 28 April 2020).

27 ‘Credit Scores and Credit Reports’, Moneysmart (Web Page) <https://moneysmart.gov.au/managing-debt/credit-scores-and-credit-reports>.

28 ‘Credit Score’, Credit Bureau (Web Page) <www.creditbureau.com.sg/credit-score.html>.

29 ‘Payment Default Records’, Swedish Authority for Privacy Protection (Web Page) <www.imy.se/en/individuals/credit-information/payment-default-records/>.

30 Or even car accidents one will have in the future: Rosenblatt, Credit Data and Scoring, 6.

32 Mikella Hurley and Julius Adebayo, ‘Credit Scoring in the Era of Big Data’ (2016) 18 Yale Journal of Law and Technology 148, 151.

33 Hiller and Jones, ‘Who’s Keeping Score?’, 68–77.

34 Zofia Bednarz and Kayleen Manwaring, ‘Hidden Depths: The Effects of Extrinsic Data Collection on Consumer Insurance Contracts’ (2022) 45(July) Computer Law and Security Review: The International Journal of Technology Law and Practice 105667.

35 ‘Examining the Use of Alternative Data in Underwriting and Credit Scoring to Expand Access to Credit’ (Hearing before the Task Force on Financial Technology of the Committee on Financial Services, US House of Representatives, 116th Congress, First Session, 25 July 2019) <www.congress.gov/116/chrg/CHRG-116hhrg40160/CHRG-116hhrg40160.pdf>.

36 Hohnen et al, ‘Assessing Creditworthiness’, 38.

37 Hiller and Jones, ‘Who’s Keeping Score?’, 87–96; Bartlett et al, ‘Consumer-Lending Discrimination in the FinTech Era’ (2022) 143(1) Journal of Financial Economics 30.

38 Hiller and Jones, ‘Who’s Keeping Score?’, 92–93.

39 Quentin Hardy, ‘Just the Facts: Yes, All of Them’ (25 March 2012) The New York Times <https://archive.nytimes.com/query.nytimes.com/gst/fullpage-9A0CE7DD153CF936A15750C0A9649D8B63.html>.

40 See for example: US: Equal Credit Opportunity Act (ECOA) s 701, which requires a creditor to notify a credit applicant when it has taken adverse action against the applicant; Fair Credit Reporting Act (FCRA) s 615(a), which requires a person to provide a notice when the person takes an adverse action against a consumer based in whole or in part on information in a consumer report; Australia: Privacy Act 1988 (Cth) s 21P, stating that if a credit provider refuses an application for consumer credit made in Australia, the credit provider must give the individual written notice that the refusal is based wholly or partly on credit eligibility information about one or more of the persons who applied; Privacy (Credit Reporting) Code 2014 (Version 2.3) para 16.3 requiring a credit provider who obtains credit reporting information about an individual from a credit reporting bureau and within 90 days of obtaining that information, refuses a consumer credit application, to provide a written notice of refusal, informing the individual of a number of matters, including their right to access credit reporting information held about them, that the refusal may have been based on the credit reporting information, and the process for correcting the information; UK: lenders are not required to provide reasons for loan refusal, even when asked by a consumer, but s 157 Consumer Credit Act 1974 requires them to indicate which credit reporting agency (if any) they used in assessing the application.

41 Neil Vigdor, ‘Apple Card Investigated after Gender Discrimination Complaints’ (10 November 2019) The New York Times <www.nytimes.com/2019/11/10/business/Apple-creditcard-investigation.html>.

42 See e.g. Corrado Rizzi, ‘Class Action Alleges Wells Fargo Mortgage Lending Practices Discriminate against Black Borrowers’ (21 February 2022) ClassAction.org <www.classaction.org/news/class-action-alleges-wells-fargo-mortgage-lending-practices-discriminate-against-black-borrowers> or Kelly Mehorter, ‘State Farm Discriminates against Black Homeowners When Processing Insurance Claims, Class Action Alleges’ (20 December 2022) ClassAction.org <www.classaction.org/news/state-farm-discriminates-against-black-homeowners-when-processing-insurance-claims-class-action-alleges>; Hiller and Jones, ‘Who’s Keeping Score?’, 83–84.

43 Hiller and Jones, ‘Who’s Keeping Score?’, 65.

44 Consumer Financial Protection Bureau, ‘The Impact of Differences between Consumer- and Creditor-Purchased Credit Scores’ (SSRN Scholarly Paper No 3790609, 19 July 2011) 5.

45 Brenda Reddix-Smalls, ‘Credit Scoring and Trade Secrecy’ (2012) 12 UC Davis Business Law Journal 87, 115.

46 Katarina Foss-Solbrekk, ‘Three Routes to Protecting AI Systems and Their Algorithms under IP Law: The Good, the Bad and the Ugly’ (2021) 16(3) Journal of Intellectual Property Law & Practice 247, 248.

47 Directive (EU) 2016/943 of the European Parliament and of the Council of 8 June 2016 on the protection of undisclosed know-how and business information (trade secrets) against their unlawful acquisition, use and disclosure [2016] OJ L 157/1.

48 573 U.S. 208 (2014).

49 Foss-Solbrekk, ‘Three Routes to Protecting AI Systems and Their Algorithms under IP Law’, 248; Meghan J Ryan, ‘Secret Algorithms, IP Rights and the Public Interest’ (2020) 21(1) Nevada Law Journal 61, 62–63.

50 Ryan, ‘Secret Algorithms’, 62–63.

51 Hiller and Jones, ‘Who’s Keeping Score?’, 83.

52 Reddix-Smalls, ‘Credit Scoring and Trade Secrecy’, 117; see also Bartlett et al, ‘Consumer-Lending Discrimination in the FinTech Era’.

53 Gintarė Surblytė-Namavičienė, Competition and Regulation in the Data Economy: Does Artificial Intelligence Demand a New Balance? (Cheltenham: Edward Elgar, 2020).

54 Facebook, ‘Submission to the Australian Privacy Act Review Issues Paper’ (6 December 2020) 25 <www.ag.gov.au/sites/default/files/2021–02/facebook.PDF>.

56 Reddix-Smalls, ‘Credit Scoring and Trade Secrecy’, 89.

57 Kristina Irion, ‘Algorithms Off-Limits?’ (FAccT’22, 21–24 June 2022, Seoul) 1561 <https://dl.acm.org/doi/pdf/10.1145/3531146.3533212>.

60 Ibid, 1562.

61 Proposal for a Regulation of the European Parliament and of the Council Laying Down Harmonised Rules on Artificial Intelligence (AI Act) and Amending Certain Union Legislative Acts, COM(2021) 206 final.

62 Regulation (EU) 2016/679 of the European Parliament and of the Council of 27 April 2016 on the protection of natural persons with regard to the processing of personal data and on the free movement of such data, and repealing Directive 95/46/EC (GDPR) [2016] OJ L 119/1, Recital (26); Australian Privacy Act 1988 (Cth) s 6.

63 Katharine Kemp, ‘A Rose by Any Other Unique Identifier: Regulating Consumer Data Tracking and Anonymisation Claims’ (August 2022) Competition Policy International TechReg Chronicle 22.

66 Ibid, 27–29.

67 See e.g. Art. 5 GDPR.

68 Tal Zarsky, ‘Incompatible: The GDPR in the Age of Big Data’ (2017) 47 Seton Hall Law Review 995, 1004–18.

69 Ibid, 1010.

70 Wolfie Christl and Sarah Spiekermann, Networks of Control: A Report on Corporate Surveillance, Digital Tracking, Big Data & Privacy (Vienna: Facultas, 2016); Forbrukerrådet (Norwegian Consumer Council), Out of Control: How Consumers Are Exploited by the Online Advertising Industry (Report, 14 January 2020) 19–22.

72 Mireille Hildebrandt, ‘Profiling and the Identity of the European Citizen’ in Mireille Hildebrandt and Serge Gutwirth (eds), Profiling the European Citizen: Cross-Disciplinary Perspectives (New York: Springer, 2008) 305–9; Sandra Wachter, ‘Data Protection in the Age of Big Data’ (2019) 2 Nature Electronics 6, 7.

73 N Chami et al, ‘Data Subjects in the Femtech Matrix: A Feminist Political Economy Analysis of the Global Menstruapps Market’ (Issue Paper 6, Feminist Digital Justice, December 2021) 4.

74 Hurley and Adebayo, ‘Credit Scoring in the Era of Big Data’, 183.

76 Wu Youyou, Michal Kosinski, and David Stillwell, ‘Computer-Based Personality Judgments Are More Accurate than Those Made by Humans’ (Research Paper, Proceedings of the National Academy of Sciences 112(4): 201418680, 12 January 2015).

77 Hurley and Adebayo, ‘Credit Scoring in the Era of Big Data’, 183.

78 Facebook, ‘Submission to the Australian Privacy Act Review Issues Paper’, 25–26.

79 CM O’Keefe et al, The De-Identification Decision-Making Framework (CSIRO Reports EP173122 and EP175702, 18 September 2017), ix.

80 Office of the Insurance Commissioner Washington State, Final Order on Court’s Credit Scoring Decision; Kreidler Will Not Appeal (Media Release, 29 August 2022) <www.insurance.wa.gov/news/final-order-courts-credit-scoring-decision-kreidler-will-not-appeal>.

81 For example, Prof Sandra Wachter has pointed out that the GDPR is based on an outdated concept of a ‘nosey neighbour’: Sandra Wachter, ‘AI’s Legal and Ethical Implications’ Twimlai (Podcast, 23 September 2021) <https://twimlai.com/podcast/twimlai/ais-legal-ethical-implications-sandra-wachter/>.

82 Microsoft Australia, ‘Microsoft Submission to Review of the Privacy Act 1988’ (December 2020) 2–3 <www.ag.gov.au/sites/default/files/2021–02/microsoft-australia.PDF>; Facebook, ‘Submission to the Australian Privacy Act Review Issues Paper’, 25.

83 See Zofia Bednarz, ‘There and Back Again: How Target Market Determination Obligations for Financial Products May Incentivise Consumer Data Profiling’ (2022) 36(2) International Review of Law, Computers & Technology 138.

84 Marshall Allen, ‘Health Insurers Are Vacuuming Up Details about You: And It Could Raise Your Rates’ (17 July 2018) NPR <www.npr.org/sections/health-shots/2018/07/17/629441555/healthinsurers-are-vacuuming-up-details-about-you-and-it-could-raise-your-rates>.

85 Australian Human Rights Commission, Using Artificial Intelligence to Make Decisions: Addressing the Problem of Algorithmic Bias (Technical Paper, November 2020) 34–44.

86 E Martinez and L Kirchner, ‘Denied: The Secret Bias Hidden in Mortgage-Approval Algorithms’ (25 August 2021) The Markup.

87 Sandra Wachter, Brent Mittelstadt, and Chris Russell, ‘Counterfactual Explanations without Opening the Black Box: Automated Decisions and the GDPR’ (2018) 31(2) Harvard Journal of Law & Technology 841, 848.

88 European Union Agency for Fundamental Rights, Bias in Algorithms: Artificial Intelligence and Discrimination (Report, 2022) 8–9 <https://fra.europa.eu/sites/default/files/fra_uploads/fra-2022-bias-in-algorithms_en.pdf>.

89 Hannah Cassidy et al, ‘Product Intervention Powers and Design and Distribution Obligations: A Cross-Border Financial Services Perspective’ (Guide, Herbert Smith Freehills, 11 June 2019) <www.herbertsmithfreehills.com/latest-thinking/product-intervention-powers-and-design-and-distribution-obligations-in-fs>.

90 Martin Hobza and Aneta Vondrackova, ‘Target Market under MiFID II: the Distributor’s Perspective’ (2019) 14 Capital Markets Law Journal 518, 529.

91 European Securities and Markets Authority (ESMA), ‘Guidelines on MiFID II Product Governance Requirements’ (ESMA35-43-620, 5 February 2018).

92 Australian Securities and Investment Commission (ASIC), ‘Regulatory Guide 274: Product Design and Distribution Obligations’ (December 2020).

93 ASIC, ‘Regulatory Guide 274’, para 274.6.

94 ESMA, ‘Guidelines on MiFID II Product Governance Requirements’, 34–35.

95 ESMA, ‘Final Report: Guidelines on MiFID II Product Governance Requirements’ (ESMA35-43-620, 2 June 2017) 34, para 17.

96 ‘The MiFID II Review – Product Governance: How to Assess Target Market’ Ashurst (Financial Regulation Briefing, 3 October 2016) <www.ashurst.com/en/news-and-insights/legal-updates/mifid-12-mifid-ii-product-governance-how-to-assess-target-market/#:~:text=Regular%20review%20by%20the%20manufacturer,how%20to%20get%20that%20information>.

97 ASIC, ‘Regulatory Guide 274’, para. 274.180.

98 ASIC’s RG para. 274.47 provides examples of such personal and social characteristics: ‘speaking a language other than English, having different cultural assumptions or attitudes about money, or experiencing cognitive or behavioural impairments due to intellectual disability, mental illness, chronic health problems or age’.

99 ASIC, ‘Regulatory Guide 274’ para. 274.47: ‘an accident or sudden illness, family violence, job loss, having a baby, or the death of a family member’.

100 For example, Indigenous Australians, whose lack of financial literacy historically made them an easy target for mis-selling of inadequate products: Commonwealth of Australia, Royal Commission into Misconduct in the Banking, Superannuation and Financial Services Industry (Interim Report Vol. 2, 2018) 452–57.

101 Machine learning in particular has been described as ‘very data hungry’ in a World Economic Forum and Deloitte report: WEF and Deloitte, The New Physics of Financial Services: Understanding How Artificial Intelligence Is Transforming the Financial Ecosystem (Report, August 2018) <www.weforum.org/reports/the-new-physics-of-financial-services-how-artificial-intelligence-is-transforming-the-financial-ecosystem/>.

102 Mohammed Aaser and Doug McElhaney, ‘Harnessing the Power of External Data’ (Article, 3 February 2021) McKinsey Digital.

103 Nydia Remolina, ‘Open Banking: Regulatory Challenges for a New Form of Financial Intermediation in a Data-Driven World’ (SMU Centre for AI & Data Governance Research Paper No 2019/05, 28 October 2019).

104 EMEA Center for Regulatory Strategy, ‘Open Banking around the World’ Deloitte (Blog Post) <www.deloitte.com/global/en/Industries/financial-services/perspectives/open-banking-around-the-world.html>.

105 UK Finance, ‘Exploring Open Finance’ (Report, 2022) <www.ukfinance.org.uk/system/files/2022–05/Exploring%20open%20finance_0.pdf>.

106 Joshua Macey and Dan Awrey, ‘The Promise and Perils of Open Finance’ Harvard Law School Forum on Corporate Governance (Forum Post, 4 April 2022) <https://corpgov.law.harvard.edu/2022/04/04/the-promise-and-perils-of-open-finance/>.

107 Bednarz et al, ‘Insurance Underwriting in an Open Data Era’.

108 Heike Felzmann et al, ‘Towards Transparency by Design for Artificial Intelligence’ (2020) 26(6) Science and Engineering Ethics 3333, 3343–53.

109 Lyria Bennett Moses, How to Think about Law, Regulation and Technology: Problems with ‘Technology’ as a Regulatory Target (SSRN Scholarly Paper No ID 2464750, Social Science Research Network, 2013) 18–19.

110 Elisa Jilson, ‘Aiming for Truth, Fairness and Equity in Your Company’s Use of AI’ US Federal Trade Commission (Business Blog Post, 19 April 2021) <www.ftc.gov/business-guidance/blog/2021/04/aiming-truth-fairness-equity-your-companys-use-ai>.

111 European Commission, ‘Regulatory Framework Proposal on Artificial Intelligence’ (Web Page) <https://digital-strategy.ec.europa.eu/en/policies/regulatory-framework-ai#:~:text=encourages%20dangerous%20behaviour.-,High%20risk,life%20(e.g.%20scoring%20of%20exams>.

112 Daniel Leufer, ‘EU Parliament’s Draft of AI Act: Predictive Policing Is Banned, but Work Remains to Protect People’s Rights’ (4 May 2022) Access Now <www.accessnow.org/ai-act-predictive-policing/>.

113 Hohnen et al, ‘Assessing Creditworthiness’.

114 Proposal for a Directive of the European Parliament and of the Council on consumer credits, COM(2021) 347 final, (47).

115 For examples of such potentially harmful data sources see: Pasquale, The Black Box Society, 21, 31; Hurley and Adebayo, ‘Credit Scoring in the Era of Big Data’, 151–52, 158; Hiller and Jones, ‘Who’s Keeping Score?’.

116 E.g., health insurers in the United States, under the US Public Health Service Act, 42 USC § 300gg(a)(1)(A), may only base their underwriting decisions on four factors: individual or family coverage; location; age; and smoking history.

117 ‘Your Credit Report’, Financial Rights Legal Centre (Web Page, 6 February 2017) <https://financialrights.org.au/>.

118 Hohnen et al, ‘Assessing Creditworthiness’, 40.

119 Anna Jobin, Marcello Ienca, and Effy Vayena, ‘The Global Landscape of AI Ethics Guidelines’ (2019) 1(9) Nature Machine Intelligence 389, 2–5.

120 See e.g. Art. 22 GDPR.

121 Jack B Soll, Ralph L Keeney, and Richard P Larrick, ‘Consumer Misunderstanding of Credit Card Use, Payments, and Debt: Causes and Solutions’ (2013) 32(1) Journal of Public Policy & Marketing 66, 77–80.

122 Marsha Courchane, Adam Gailey, and Peter Zorn, ‘Consumer Credit Literacy: What Price Perception?’ (2008) 60(1) Journal of Economics and Business 125, 127–38.

123 Beth Freeborn and Julie Miller, Report to Congress under Section 319 of the Fair and Accurate Credit Transactions Act of 2003 (Report, January 2015) i <www.ftc.gov/system/files/documents/reports/section-319-fair-accurate-credit-transactions-act-2003-sixth-interim-final-report-federal-trade/150121factareport.pdf>. In one study of 1001 US consumers, 26 per cent found inaccuracies in their credit reports.

124 Heather Cotching and Chiara Varazzani, Richer Veins for Behavioural Insight: An Exploration of the Opportunities to Apply Behavioural Insights in Public Policy (Behavioural Economics Team of the Australian Government, Commonwealth of Australia, Department of the Prime Minister and Cabinet, 2019) 1, 14. Studies have shown that simplifying and standardising information in consumer markets aids comprehension and assists consumers in making choices that result in better outcomes.

125 Credit Information Bureau of Sri Lanka, ‘CRIB Score Report Reference Guide’ (Guide) <www.crib.lk/images/pdfs/crib-score-reference-guide.pdf>.

126 ‘Getting Your Credit Report and Credit Score’ Government of Canada (Web Page) <www.canada.ca/en/financial-consumer-agency/services/credit-reports-score/order-credit-report.html>.

127 ‘Access Your Credit Report’ Office of the Australian Information Commissioner (Web Page) <www.oaic.gov.au/privacy/credit-reporting/access-your-credit-report>.

129 Some consumers discovered that their reports ‘featured inconsistent or misleading claims descriptions and statuses, included personal information unrelated to insurance at all, and no explanation of the terms used to assist in comprehensibility’. See Roger Clarke and Nigel Waters, Privacy Practices in the General Insurance Industry (Financial Rights Legal Centre Report, April 2022) vii <https://financialrights.org.au/wp-content/uploads/2022/04/2204_PrivacyGIReport_FINAL.pdf>.

130 Leilani Gilpin et al, ‘Explaining Explanations: An Overview of Interpretability of Machine Learning’ (2019) v3 arXiv, 2 <https://arxiv.org/abs/1806.00069>.

131 Regulation (EU) 2022/2065 of the European Parliament and of the Council of 19 October 2022 on a Single Market for Digital Services and amending Directive 2000/31/EC (Digital Services Act) [2022] OJ L 277/1.

132 Ibid, para 52.

133 Yanou Ramon et al, ‘Understanding Consumer Preferences for Explanations Generated by XAI Algorithms’ (2021) arXiv, 9–14 <http://arxiv.org/abs/2107.02624>.

134 Jessica Morley et al, ‘From What to How: An Initial Review of Publicly Available AI Ethics Tools, Methods and Research to Translate Principles into Practices’ (2020) 26(4) Science and Engineering Ethics 2141.

135 Rory Mc Grath et al, ‘Interpretable Credit Application Predictions with Counterfactual Explanations’ (2018) v2 arXiv, 4–7 <https://arxiv.org/abs/1811.05245>.

136 Ada Lovelace Institute, Technical Methods for the Regulatory Inspection of Algorithmic Systems in Social Media Platforms (December 2021) <www.adalovelaceinstitute.org/wp-content/uploads/2021/12/ADA_Technical-methods-regulatory-inspection_report.pdf>.

137 Sophie Farthing et al, Human Rights and Technology (Australian Human Rights Commission, 1 March 2021) 1, 95–97.

138 Henry Hoenig, ‘Sorry iPhone Fans, Android Users Are Safer Drivers’ Jerry (Blog Post, 20 April 2023) <https://getjerry.com/studies/sorry-iphone-fans-android-users-are-safer-drivers>.

139 New York State Department of Financial Services Circular Letter No 1 (2019), 18 January 2019, ‘RE: Use of External Consumer Data and Information Sources in Underwriting for Life Insurance’.

140 Gert Meyers and Ine Van Hoyweghen, ‘“Happy Failures”: Experimentation with Behaviour-Based Personalisation in Car Insurance’ (2020) 7(1) Big Data and Society 1, 4.

141 See for example Chapters 8, 10 and 11 in this book.

142 Pasquale, The Black Box Society.