
AI: Coming of age?

Published online by Cambridge University Press:  19 January 2022

Trevor Maynard*
Affiliation:
Cambridge Centre for Risk Studies, Judge Business School, University of Cambridge, Cambridge CB2 1TN, UK; Data Science Institute, London School of Economics and Political Science, London WC2A 2AE, UK
Luca Baldassarre
Affiliation:
Swiss Re, Zurich, Switzerland
Yves-Alexandre de Montjoye
Affiliation:
Imperial College London, London SW7 2BX, UK
Liz McFall
Affiliation:
School of Social and Political Science, University of Edinburgh, Edinburgh EH8 9YL, UK
María Óskarsdóttir
Affiliation:
Department of Computer Science, Reykjavik University, Reykjavik, Iceland
*Corresponding author. E-mail: t.maynard572@gmail.com

Abstract

AI has had many summers and winters. Proponents have overpromised, and there has been hype and disappointment. In recent years, however, we have watched the successes with awe, surprise, and hope: better-than-human image recognition; winning at Go; useful chatbots that seem to understand your needs; recommendation algorithms harvesting the wisdom of crowds. And with this success comes the spectre of danger: machine behaviours that embed the worst of human prejudice and bias; techniques that exploit human weaknesses to skew elections or prompt self-harming behaviours. Are we seeing a perfect storm of social media, sensor technologies, new algorithms and edge computing? With this backdrop: is AI coming of age?

Type
Editorial
Copyright
© The Author(s), 2022. Published by Cambridge University Press on behalf of Institute and Faculty of Actuaries

1. Introduction

This guest editorial records the key insights from a panel discussion between the authors (see Footnote 1). The panel focussed on three key areas: ethics, explainability and the transformation of business models. These are presented in turn below.

2. Ethics

We have seen examples of image software misclassifying ethnic groups in a harmful way, or text analysis software mirroring human insults and prejudices. The complexity of social and economic relations makes this a tough problem, and more work is required to create a robust ethics framework. Such a framework would, for example, have to address risks including individual privacy and confidentiality and issues of representation and bias in training data. It is entirely possible to identify individuals even from ‘anonymised’ demographic (Rocher et al., 2019) or location (de Montjoye et al., 2013) information when vast amounts of external data, e.g. on social media, are available to reidentify someone. Evidence documents the role of AI in reproducing the societal inequalities of the data sets it was trained upon (Noble, 2018; Moss et al., 2021).
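
To make the reidentification risk concrete, the toy sketch below (our illustration, not the method of Rocher et al., 2019; all data are randomly generated) shows how quickly records in an ‘anonymised’ data set become unique once a few quasi-identifiers are combined:

```python
# Toy sketch of quasi-identifier uniqueness; all attributes are invented.
from collections import Counter
import random

random.seed(0)

# Hypothetical "anonymised" records: no names, just coarse attributes.
records = [
    (
        random.choice(["1950s", "1960s", "1970s", "1980s", "1990s"]),  # birth decade
        random.choice(["F", "M"]),                                     # sex
        random.choice([f"AREA{z:03d}" for z in range(1000)]),          # coarse area code
        random.choice(["owner", "renter"]),                            # housing status
    )
    for _ in range(10_000)
]

counts = Counter(records)
unique_share = sum(1 for r in records if counts[r] == 1) / len(records)
print(f"Share of records unique on 4 quasi-identifiers: {unique_share:.1%}")
# Anyone who knows these four facts about a person (e.g. from social media)
# can single out that person's record whenever the combination is unique.
```

Even with only four coarse attributes, a majority of records can be unique; richer real-world data sets make the problem far worse.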

There are various techniques and processes to control these risks, and these are discussed in detail in Chapter 2 and the annexes of the proposed EU AI Act (European Commission, 2021a, 2021b). When considering ethical issues, it is important to weigh the benefits against the risks and costs to consumers. For example, products such as parametric insurance sold through chatbots can be created. These have lower overheads, such as brokerage, loss adjusting, and dispute handling, which makes them cheaper. As they are sold online via an end-to-end digital process, they can also be sold at scale, and linking to external data sets can avoid form-filling, allowing a near-instant purchase. Such products have the potential to increase financial inclusion both in the developed world and in emerging markets. At the same time, for applications with such an impact on individuals, it is crucial to ensure algorithms are regularly reviewed and tested, designed to be robust to, for example, exogenous changes in input data, and to introduce appropriate oversight and redress mechanisms. On balance, most of us agree with the Institute of Risk Management, which held a round table discussion (Maynard & Goodman, 2020) with Chief Risk Officers and concluded, with reference to financial inclusion, that ‘society needs this to succeed’.
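
To illustrate why parametric products carry lower overheads, here is a minimal, hypothetical sketch of a parametric payout rule (the cover structure and thresholds are invented for this example): the payment depends only on an externally observable index, so no loss adjuster or dispute process is needed for routine claims.

```python
# Hypothetical parametric drought cover: payout is a pure function of an
# observable rainfall index taken from an external data feed.
from dataclasses import dataclass

@dataclass
class DroughtCover:
    sum_insured: float          # maximum payout
    trigger_mm: float = 100.0   # seasonal rainfall below this starts a payout
    exit_mm: float = 40.0       # rainfall at or below this pays in full

    def payout(self, seasonal_rainfall_mm: float) -> float:
        """Linear payout between trigger and exit; no human adjudication."""
        if seasonal_rainfall_mm >= self.trigger_mm:
            return 0.0
        if seasonal_rainfall_mm <= self.exit_mm:
            return self.sum_insured
        shortfall = (self.trigger_mm - seasonal_rainfall_mm) / (self.trigger_mm - self.exit_mm)
        return round(self.sum_insured * shortfall, 2)

cover = DroughtCover(sum_insured=500.0)
print(cover.payout(120.0))  # 0.0   -- rainfall adequate, no claim
print(cover.payout(70.0))   # 250.0 -- partial payout
print(cover.payout(30.0))   # 500.0 -- full payout
```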

Financial inclusion is a globally recognised problem. Around two billion people do not have a basic bank account and, in some countries, large parts of the population do not have access to useful and affordable financial products and services such as checking accounts, saving instruments, loans, and insurance. The World Bank, the G20, and more than 55 countries have committed to the advancement of financial inclusion worldwide through initiatives such as the Financial Inclusion Global Initiative (FIGI) (http://www.worldbank.org/en/topic/financialinclusion/overview#1). Traditionally, the creditworthiness of potential borrowers is assessed using ratings from credit bureaus and information about their banking history or the repayment history of previous loans. Without a previous banking history, this is clearly not possible. The use of AI may be the solution to this problem, especially in combination with alternative data, i.e. novel sources of information about people’s behaviour, including mobile phone call detail records (CDRs), smartphone usage data, as well as social media, text, and images (Kharif, 2016; Ruiz et al., 2017; Singh et al., 2015; De Cnudde et al., 2019). The insights obtained from such data could thus facilitate access for borrowers with little or no credit history, such as young borrowers or people in developing countries, and potentially help increase the financial well-being of numerous individuals worldwide (Óskarsdóttir et al., 2018, 2019), although concerns exist here too (Kazeem, 2020).
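
The sketch below illustrates the general idea on entirely synthetic data: score applicants who have no credit file using behavioural features of the kind derived from phone usage. The feature names and label construction are invented for illustration and are not taken from the cited studies.

```python
# Minimal alternative-data credit-scoring sketch on synthetic data.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(42)
n = 5_000

# Hypothetical behavioural features for applicants with no credit bureau file.
X = np.column_stack([
    rng.poisson(30, n),        # calls per week
    rng.exponential(5, n),     # mean airtime top-up amount
    rng.uniform(0, 1, n),      # regularity of top-ups (0 = erratic, 1 = regular)
    rng.integers(1, 60, n),    # months the SIM has been active
])

# Synthetic "loan repaid" label, loosely driven by regularity and tenure.
latent = -1.0 + 2.5 * X[:, 2] + 0.03 * X[:, 3] + rng.normal(0, 1, n)
y = (latent > np.median(latent)).astype(int)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
model = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)
print(f"Hold-out AUC: {roc_auc_score(y_te, model.predict_proba(X_te)[:, 1]):.3f}")
```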

In summary, ethical issues must be a core consideration when embedding AI systems but must also be balanced, through robust design and appropriate oversight, against other worthy aims such as increasing financial inclusion.

3. Explainability

We do not understand how human beings come to decisions, yet we have high expectations of the explainability of machine learning, expectations which may have arisen from the comparatively simple nature of traditional regression models. Whilst with such regression models it may be easy to explore the sensitivity of their parameters, it is less clear whether those parameters relate directly to the world they claim to model. As a panel, we considered the question: how can insurers be fair to customers when handling disputes and complaints if they cannot explain the decisions of their algorithms?
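
The contrast can be made concrete: a linear model’s fitted coefficients can be read directly, whereas a black-box model must be probed with model-agnostic tools. The sketch below (synthetic data; feature names invented) uses permutation importance as one common such probe; it is an illustration, not any insurer’s actual method.

```python
# Transparent vs. black-box explanation on the same synthetic task.
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.inspection import permutation_importance
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)
n = 2_000
X = rng.normal(size=(n, 3))        # e.g. standardised age, vehicle value, mileage
y = (1.5 * X[:, 0] - 0.5 * X[:, 2] + rng.normal(0, 1, n) > 0).astype(int)
features = ["age", "vehicle_value", "mileage"]

# Transparent model: the fitted coefficients ARE the explanation.
linear = LogisticRegression().fit(X, y)
print(dict(zip(features, np.round(linear.coef_[0], 2))))

# Black box: explain by measuring how much shuffling each feature hurts it.
black_box = GradientBoostingClassifier(random_state=0).fit(X, y)
imp = permutation_importance(black_box, X, y, n_repeats=10, random_state=0)
print(dict(zip(features, np.round(imp.importances_mean, 3))))
```

Note that both outputs describe the model, not the world: a large coefficient or importance does not establish that the feature causes the outcome, which is precisely the gap the paragraph above identifies.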

Defining principles of fairness in any context is never straightforward. In insurance, fairness is broadly considered in actuarial terms, such that price is determined in accordance with statistical risk, while allowing for a desired return on capital. This is an over-simplification, as choices have to be made about what data to use for pricing and which variables can serve as behavioural proxies. Despite this, actuarial fairness, in general, stands in contrast with other societal or solidaristic approaches to fairness, which may define it in terms of equity or equality (Baker, 2010; Meyers & Van Hoyweghen, 2018). Fairness is related, ineluctably, to the situation, context, individual, and stakeholder interests. Different expectations of fairness have tended to be handled through articulated principles, terms, and conditions explaining how decisions are made. In the context of algorithmic decision making, this has become more challenging as decisions may be ‘black-boxed’ or have outcomes that are not easily explainable (Pasquale, 2016).

Insurance professionals, regulators, and companies are beginning to address this, for example, by introducing reviews of the data used to model risks, principles of explainability, and guidance on avoiding discrimination against protected characteristics via non-causal proxies, but there are no immediate, easily enforceable solutions in sight (EIOPA’s Consultative Expert Group, 2021; New York State Department of Financial Services, 2019; Monetary Authority of Singapore, 2020; Frees & Huang, 2021).

4. Transformation of Business Models

The Internet of Things, including industrial process sensors, personal health devices and webcams, will provide a data stream of unimaginable richness and quantity. We considered how these data can be understood, how industry can extract the signal from the noise and, specifically, what the impact on insurers might be.
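
As a toy illustration of the simplest kind of signal extraction, the sketch below (synthetic readings; the window length and threshold are invented) smooths a noisy sensor stream with a moving average and flags readings that deviate sharply from the local trend:

```python
# Separate a slow trend (signal) from sensor noise and flag one-off anomalies.
import numpy as np

rng = np.random.default_rng(7)
t = np.arange(500)
readings = 20 + 0.01 * t + rng.normal(0, 0.5, t.size)  # slow drift + sensor noise
readings[250] += 5.0                                   # injected fault / anomaly

window = 25
trend = np.convolve(readings, np.ones(window) / window, mode="valid")  # moving average
residual = readings[window - 1:] - trend   # each reading vs. its trailing-window mean
z = (residual - residual.mean()) / residual.std()

anomalies = np.where(np.abs(z) > 4)[0] + window - 1    # map back to original timesteps
print("Anomalous timesteps:", anomalies)               # expected to include t = 250
```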

We are already seeing examples of transformation in insurance. At a simplistic level, it is embedded within the core software we all use: Word, Outlook, etc. More specialist applications are now being used or tested, however. The Lloyd’s Lab (https://lloydslab.com/), an insurance innovation accelerator, has carried out experiments with multiple firms with AI at the heart of their process. These include Scrub AI, who automate data cleaning; Safekeep, focussed on subrogation; Predata/Moonshot and Verisk-Maplecroft, each using predictive analysis to highlight geopolitical risks; Orca AI, providing warnings to shipping; Describe Data, who combine AI and new data sources for a deeper understanding of risk; and Clausematch, who are automating document comparison through natural language processing. So, AI is starting to impact every step of the insurance workflow, from underwriting and regulation to claims (Lloyd’s Innovation Team, 2019). As discussed in McFall et al. (2020), Meyers & Hoyweghen (2020), and Jeanningros & McFall (2020), there has been some academic analysis of how insurance practices are changing to incorporate new data sources such as wearable sensors, telematics devices, or online information, but more work is needed. Customers are wary about what uses their personal, confidential information might be put to and what effect self-tracking, interactive schemes might have on the accessibility or affordability of insurance. Insurers should aim to be as clear as possible about this and communicate with academic and advocacy groups to explore potential use cases and benefits as well as risks.

Again, it is important to look at both the risks and the benefits that AI can bring to society. For example, new algorithms may be able to cut financial crime and fraud. These cost insurers billions of dollars annually (Federal Bureau of Investigation, 2021) so, if they can be reduced, this should ultimately flow through into lower overheads and, once competition and market forces have their usual impact, cheaper insurance. In the case of fraud, it is well known that fraudsters tend to collaborate in order to maximise the reward and mitigate risk. To detect such fraud circles, it is therefore necessary to use an appropriate representation of the data, i.e. social networks, and to develop AI algorithms suitable for such data structures, such as graph neural networks (Šubelj et al., 2011). In fact, fraud detection models with variables derived from networks are better at finding insurance fraud (Óskarsdóttir et al., 2021). These algorithms should, however, only be used to flag highly suspicious cases that need to be investigated further, while making sure they do not incorrectly block accounts with unusual but legal behaviour. AI algorithms provide a guided and intelligent selection of cases and thus, if robust and monitored, can contribute to a more effective fraud investigation process.
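
As a toy illustration of the network representation (not the method of Óskarsdóttir et al., 2021; the graph and threshold are invented), the sketch below scores claims by their proximity, in a shared-party graph, to a known fraudulent claim and flags only the most suspicious for investigation:

```python
# Spread "suspicion" from a known fraudulent claim through shared parties.
import networkx as nx

# Bipartite-style graph: claims connected to the parties involved in them.
G = nx.Graph()
G.add_edges_from([
    ("claim1", "driver_A"), ("claim1", "garage_X"),
    ("claim2", "driver_B"), ("claim2", "garage_X"),
    ("claim3", "driver_C"), ("claim3", "garage_Y"),
])
known_fraud = {"claim1"}  # confirmed fraudulent claim

# Personalised PageRank seeded on known fraud propagates suspicion through
# shared parties (here, garage_X links claim1 and claim2).
scores = nx.pagerank(
    G, personalization={n: 1.0 if n in known_fraud else 0.0 for n in G}
)

SUSPICION_THRESHOLD = 0.10  # tune so investigators see few false alarms
for node, score in sorted(scores.items(), key=lambda kv: -kv[1]):
    if node.startswith("claim") and node not in known_fraud:
        flag = "INVESTIGATE" if score > SUSPICION_THRESHOLD else "ok"
        print(f"{node}: {score:.3f} {flag}")
```

In this toy graph, claim2 is flagged because it shares a garage with the known fraud, while the unconnected claim3 is not; crucially, the flag triggers investigation rather than any automatic blocking, in line with the oversight point above.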

5. Conclusion

The panel concluded that AI has much to offer society in general and insurance in particular, but must be used carefully. Whilst we fully anticipate continued hype and disappointment in some areas, we have also each observed true transformation of some business models using AI. There are core ethical concerns that will need to be balanced against the gains of a more bespoke and personalised digital experience and the hope of greater financial inclusion; recent work by global regulators and the European Commission will help in this regard. Explainability per se should not be a barrier, and we should focus instead on the auditability of the process of creating AI algorithms, including the training data sets, and the rights of redress that customers have to dispute decisions. Based on the comments made in this editorial, it is the view of the authors that AI is indeed coming of age.

Acknowledgements

The panellists are grateful to the organisers of the 3rd Insurance Data Science Conference, Arthur Charpentier, Markus Gesmann, Ioannis Kyriakou, Silvana Pesenti, and Andreas Tsanakas, for an excellent conference and for providing us with the opportunity to discuss these issues.

Footnotes

1 We record here the key insights from a panel discussion held virtually at the 3rd Insurance Data Science conference on 17 June 2021. This paper uses the phrase “the panel” but, as is typical with such discussions, the views and points arose from individual comments. Therefore, the views expressed in this paper do not necessarily reflect those of every panel member, but the authors confirm that the paper is an accurate summary of the issues discussed. The comments made were from an individual perspective and do not necessarily represent those of our employers.

References

Baker, T. (2010). Health insurance, risk, and responsibility after the Patient Protection and Affordable Care Act. University of Pennsylvania Law Review, 159, 1577–1622.
European Commission (2021a). COM(2021) 206 final: Proposal for a Regulation Laying Down Harmonised Rules on Artificial Intelligence (Artificial Intelligence Act), Brussels, 21.4.2021. Available online at the address https://ec.europa.eu/newsroom/dae/redirection/document/75788 [accessed November 2021].
European Commission (2021b). COM(2021) 206 final – Annex II, Brussels, 21.4.2021. Available online at the address https://ec.europa.eu/newsroom/dae/redirection/document/75789 [accessed November 2021].
De Cnudde, S., Moeyersoms, J., Stankova, M., Tobback, E., Javaly, V. & Martens, D. (2019). What does your Facebook profile reveal about your creditworthiness? Using alternative data for microfinance. Journal of the Operational Research Society, 70(3), 353–363.
de Montjoye, Y.-A., Hidalgo, C.A., Verleysen, M. & Blondel, V.D. (2013). Unique in the crowd: the privacy bounds of human mobility. Scientific Reports, 3(1), 1–5.
EIOPA’s Consultative Expert Group (2021). Artificial intelligence governance principles: towards ethical and trustworthy artificial intelligence in the European insurance sector, June 2021. Available online at the address https://www.eiopa.europa.eu/content/artificial-intelligence-governance-principles-towards-ethical-and-trustworthy-artificial_en [accessed November 2021].
Frees, E. & Huang, F. (2021). The discriminating (pricing) actuary. North American Actuarial Journal, 6 August 2021. Available online at the address https://doi.org/10.1080/10920277.2021.1951296 [accessed November 2021].
Jeanningros, H. & McFall, L. (2020). The value of sharing: branding and behaviour in a life and health insurance company. Big Data & Society, 7(2), 1–15.
Kazeem, Y. (2020). A Chinese super app is facing claims of predatory consumer lending in Nigeria, Kenya and India. Quartz Africa, 21 January 2020. Available online at the address https://qz.com/africa/1788351/operas-okash-opesas-predatory-lending-in-nigeria-india-kenya/ [accessed November 2021].
Kharif, O. (2016). No credit history? No problem. Lenders are looking at your phone data. Bloomberg, November 2016. Available online at the address https://www.bloomberg.com/news/articles/2016-11-25/no-credit-history-no-problem-lenders-now-peering-at-phone-data [accessed November 2021].
Lloyd’s Innovation Team (2019). Taking control: artificial intelligence and insurance. Available online at the address https://www.lloyds.com//media/files/news-and-insight/risk-insight/2019/aireport_2019_final_pdf.pdf [accessed November 2021].
Maynard, T. & Goodman, D. (2020). Risk, Science and Decision Making: how should risk leaders of the future work with AI? IRM Occasional Paper, February 2020. Available online at the address https://irmcomms.wufoo.com/forms/how-should-risk-leaders-of-the-future-work-with-ai/ [accessed November 2021].
McFall, L., Meyers, G. & Hoyweghen, I.V. (2020). The personalisation of insurance: data, behaviour and innovation. Big Data & Society, November 2020. Available online at the address https://journals.sagepub.com/doi/full/10.1177/2053951720973707 [accessed November 2021].
Meyers, G. & Hoyweghen, I.V. (2020). ‘Happy failures’: experimentation with behaviour-based personalisation in car insurance. Big Data & Society, 7(1), 1–14.
Meyers, G. & Van Hoyweghen, I. (2018). Enacting actuarial fairness in insurance: from fair discrimination to behaviour-based fairness. Science as Culture, 27(4), 413–438.
Moss, E., Watkins, E., Singh, R. & Elish, M.J. (2021). Assembling accountability: algorithmic impact assessment for the public interest. Data & Society. Available online at the address https://datasociety.net/library/assembling-accountability-algorithmic-impact-assessment-for-the-public-interest/ [accessed November 2021].
New York State Department of Financial Services (2019). Insurance Circular Letter No. 1 Re: Use of External Consumer Data and Information Sources in Underwriting for Life Insurance.
Noble, S. (2018). Algorithms of Oppression: How Search Engines Reinforce Racism, 1st edition. NYU Press, New York.
Óskarsdóttir, M., Ahmed, W., Antonio, K., Baesens, B., Dendievel, R., Donas, T. & Reynkens, T. (2021). Social network analytics for supervised fraud detection in insurance. Risk Analysis. Available online at the address https://doi.org/10.1111/risa.13693 [accessed November 2021].
Óskarsdóttir, M., Bravo, C., Sarraute, C., Vanthienen, J. & Baesens, B. (2019). The value of big data for credit scoring: enhancing financial inclusion using mobile phone data and social network analytics. Applied Soft Computing, 74, 26–39.
Óskarsdóttir, M., Sarraute, C., Bravo, C., Baesens, B. & Vanthienen, J. (2018). Credit scoring for good: enhancing financial inclusion with smartphone-based microlending. In International Conference on Information Systems 2018, ICIS 2018 (pp. 1–9). Association for Information Systems.
Pasquale, F. (2016). The Black Box Society: The Secret Algorithms That Control Money and Information. Harvard University Press, Cambridge, MA.
Rocher, L., Hendrickx, J.M. & de Montjoye, Y.-A. (2019). Estimating the success of re-identifications in incomplete datasets using generative models. Nature Communications, 10, July 2019. Available online at the address https://www.nature.com/articles/s41467-019-10933-3/ [accessed November 2021].
Ruiz, S., Gomes, P., Rodrigues, L. & Gama, J. (2017). Credit scoring in microfinance using non-traditional data. In EPIA Conference on Artificial Intelligence (pp. 447–458). Springer.
Monetary Authority of Singapore (2020). Fairness metrics to aid responsible AI adoption in financial services. Available online at the address https://www.mas.gov.sg/news/media-releases/2020/fairness-metrics-to-aid-responsible-ai-adoption-in-financial-services [accessed November 2021].
Singh, V.K., Bozkaya, B. & Pentland, A. (2015). Money walks: implicit mobility behavior and financial well-being. PLoS One, 10(8), e0136628.
Šubelj, L., Furlan, Š. & Bajec, M. (2011). An expert system for detecting automobile insurance fraud using social network analysis. Expert Systems with Applications, 38(1), 1039–1052.