
Multidisciplinary collaboration on discrimination – not just “Nice to Have”

Published online by Cambridge University Press:  01 November 2021

Chris Dolman
Insurance Australia Group, Sydney, Australia
Edward (Jed) Frees*
Wisconsin School of Business, University of Wisconsin-Madison, Madison, WI, USA, College of Business and Economics, Australian National University, Canberra, Australia
Fei Huang
School of Risk and Actuarial Studies, University of New South Wales, Sydney, Australia
*Corresponding author.


© The Author(s), 2021. Published by Cambridge University Press on behalf of Institute and Faculty of Actuaries

Although much of the discipline of actuarial science has its roots in isolated mathematicians or small collaborative teams toiling to produce fundamental truths, practice today is frequently geared towards large collaborative teams. In some cases, these teams can cross academic disciplines. In our view, whilst certain matters can be effectively researched within isolated disciplines, others are more suited to multidisciplinary teamwork. Discrimination, particularly data-driven discrimination, is an extremely rich and broad topic. Here, we mainly focus on insurance discrimination in underwriting/pricing, and we use the word “discrimination” in an entirely neutral way, taking it to mean the act of treating distinct groups differently – whether or not such discrimination can be justified based on legal, economic or ethical grounds. Whilst narrow research into this subject is certainly possible, a broad perspective is likely to be beneficial in creating robust, well-considered solutions to actual or perceived problems. Significant harms can and, indeed, have been caused by well-intended but narrowly framed solutions to large, difficult problems. In discrimination, for example, the intuitively appealing “fairness through unawareness” is known to make overall discrimination worse in some circumstances (for a worked example, see Reid & O’Callaghan 2018). Whilst the unawareness problem has been understood in the computer science community for some time (see, e.g. Pedreschi et al. 2008), it is an idea still embedded in many laws around the world, and too frequently seen by some as a solution for data-driven discrimination.
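A minimal simulation makes the unawareness failure concrete. The sketch below is our own illustration, not drawn from any of the cited papers: it fits two linear models to hypothetical data in which a legitimate rating factor is correlated with a protected attribute. Dropping the protected attribute does not remove its effect on the fitted price; it merely shifts that effect onto the proxy, where it is harder to see and to govern.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100_000

# Hypothetical setup: a is a protected attribute, x is an otherwise
# legitimate rating factor that is correlated with a (a proxy for it).
a = rng.binomial(1, 0.5, n).astype(float)
x = a + rng.normal(0.0, 1.0, n)                # proxy: correlated with a
y = 1.0 * x + 0.5 * a + rng.normal(0.0, 1.0, n)  # outcome depends on both

def ols(features, y):
    """Ordinary least squares with an intercept; returns coefficients."""
    X = np.column_stack([np.ones(len(y))] + features)
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    return beta

# "Aware" model: including a, the coefficient on x recovers its
# true effect (about 1.0).
beta_aware = ols([x, a], y)

# "Unaware" model: dropping a inflates the coefficient on the proxy x
# (omitted-variable bias, about 1.1 here), so a's effect is still
# priced -- just indirectly and less transparently.
beta_unaware = ols([x], y)

print(beta_aware[1])    # ≈ 1.0
print(beta_unaware[1])  # ≈ 1.1
```

The specific coefficients are an artefact of the simulated correlation structure, but the qualitative point is general: removing a variable does not remove the information it carries when correlated features remain in the model.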

As with other institutions, insurers are redefining the way that they do business with the increasing capacity and computational abilities of computers, availability of new and innovative sources of data, and advanced artificial intelligence algorithms that can detect patterns in data that were previously unknown. Conceptually, Big Data and new technologies do not alter the fundamental issues of insurance discrimination; one can think of credit-based insurance scoring and price optimization as simply forerunners of this movement. Yet, old challenges may become more prominent in this rapidly developing landscape. Issues regarding privacy and the use of algorithmic proxies take on increased importance as insurers’ extensive use of data and computational abilities evolve. Actuaries need to be attuned to these issues and, ideally, involved in proposals to address them.

For example, Frees & Huang (2021) draw upon historical, economic, legal, and computer science literatures to understand insurance discrimination. In particular, they review social and economic principles that can be used to assess whether insurance discrimination is ethical or is “unfair” and morally indefensible in some sense, examine insurance regulations and laws across different lines of business and jurisdictions, and explore the machine learning literature on mitigating proxy discrimination via algorithmic fairness. Drawing on literature from multiple disciplines, they offer actuaries and financial analysts a holistic view of the landscape of insurance prohibitions. Choices regarding insurance prohibitions involve policy decisions that should also involve legal and economics scholars, as well as government representatives and advocates for the industry and for consumers. Actuaries can certainly make important contributions to these discussions, notably by quantifying the financial impact of policy alternatives, but actuaries should not be the sole arbiters of any policy decision. Other voices can and should be heard, and cross-disciplinary collaboration is a productive way to achieve this.

A good example of a broad intent to create multidisciplinary collaboration in academia is the Humanising Machine Intelligence team at the Australian National University, which draws on academics and outside partners (including one of the authors) from a wide range of disciplines – such as philosophy, computer science, political science, law, and sociology – to tackle difficult research problems, including algorithmic ethics and discrimination (certainly including, but not limited to, insurance). This has already led to fruitful collaborations of relevance to insurance; see, for example, Dolman et al. (2020). We expect more structured multidisciplinary teams of this nature to emerge to tackle large societal problems that require them.

On insurance discrimination, we notice that Oxford University Press recently published Calculating Race: Racial Discrimination in Risk Assessment (Wiggins 2020). This book adds an interesting and useful perspective for actuaries. Many actuaries do not have a good understanding of the history of our profession, and this book helps us to understand the historical relationship between statistical risk assessment and race in the life insurance and mortgage industries of the US. We expect that many actuaries would not be aware of the rich lessons of history which such a book can teach us. Wiggins shows how Prudential battled with state regulators at the end of the 19th century in order to overtly discriminate against clients in insurance pricing on the basis of race. The book also investigates the state-sponsored mortgage insurance program of the Federal Housing Administration and its prolonged effects on mortgage lending. Many actuaries will no doubt find these historical facts alarming.

Unfortunately, however, this book oversteps in its conclusions and recommendations for today’s world, perhaps owing to the singular perspective (in this case, historical) that it brings to such a broad topic. In forming conclusions, Wiggins attempts to draw links between these historical incidents – primarily of direct discrimination – and the current challenging topic of indirect discrimination. These links often feel one-sided and, on occasion, without solid basis. Most alarmingly, the book fails to acknowledge the nuanced debate occurring right now in the literature across multiple disciplines. Its discussion of the Correctional Offender Management Profiling for Alternative Sanctions (COMPAS) tool, for example, does not mention the ongoing “fairness metric debate” on how to quantify fairness (see, e.g. Kleinberg et al. 2016; Corbett-Davies et al. 2016; Chouldechova 2017; Berk et al. 2021). Whilst this weakens the conclusions, it does not diminish the value a reader may gain from the historical discussion. Nonetheless, we observe that a much stronger book may have arisen from the incorporation of broader perspectives.
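The fairness metric debate can be stated quite concretely. Chouldechova (2017) shows that, for a binary classifier, calibration-style criteria (equal positive predictive value, PPV) and equal error rates are tied together through the group base rate via the identity FPR = p/(1−p) × (1−PPV)/PPV × (1−FNR). The short sketch below, using hypothetical numbers of our own choosing, works through that identity: holding PPV and the false negative rate equal across two groups with different base rates forces their false positive rates apart.

```python
# Chouldechova (2017): for a binary classifier with positive predictive
# value ppv and false negative rate fnr, the false positive rate is tied
# to the group base rate p by:
#     FPR = p / (1 - p) * (1 - ppv) / ppv * (1 - fnr)
# If ppv and fnr are equalised across groups with different base rates,
# the FPRs cannot also be equal. The numbers below are hypothetical.

def fpr(p: float, ppv: float, fnr: float) -> float:
    """False positive rate implied by base rate p, PPV, and FNR."""
    return p / (1 - p) * (1 - ppv) / ppv * (1 - fnr)

ppv, fnr = 0.7, 0.2          # held equal across both groups
p_a, p_b = 0.3, 0.5          # different base rates

print(fpr(p_a, ppv, fnr))    # group A: ≈ 0.147
print(fpr(p_b, ppv, fnr))    # group B: ≈ 0.343, necessarily higher
```

No modelling choice can escape this arithmetic, which is why the cited literature frames the question as which fairness criterion to prioritise, rather than how to satisfy all of them at once.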

As technological innovation continues to evolve the role of the actuary, actuaries will face more opportunities along with more challenges, including the effects of changing demographics, cyber risk, natural disasters, low interest rates, advances in data science and much more. Solutions to many of these problems will require actuaries not only to work within their own discipline, but also to collaborate and discuss solutions with experts from multiple disciplines. These collaborations will lead to more robust solutions, enhancing the development of all disciplines and benefiting policyholders, insurance markets, regulators, and society at large.


References

Berk, R., Heidari, H., Jabbari, S., Kearns, M., & Roth, A. (2021). Fairness in criminal justice risk assessments: The state of the art. Sociological Methods & Research, 50(1), 3–44.
Chouldechova, A. (2017). Fair prediction with disparate impact: A study of bias in recidivism prediction instruments. Big Data, 5(2), 153–163.
Corbett-Davies, S., Pierson, E., Feller, A., & Goel, S. (2016). A computer program used for bail and sentencing decisions was labeled biased against blacks. It’s actually not that clear. The Washington Post. Available online [accessed 10-Aug-2021].
Dolman, C., Lazar, S., Caetano, T., & Semenovich, D. (2020). Should I Use That Rating Factor? A Philosophical Approach to an Old Problem. Available online [accessed 10-Aug-2021].
Frees, E. & Huang, F. (2021). The (Discriminating) Pricing Actuary. North American Actuarial Journal, doi: 10.1080/10920277.2021.1951296.
Kleinberg, J., Mullainathan, S., & Raghavan, M. (2016). Inherent trade-offs in the fair determination of risk scores. arXiv preprint arXiv:1609.05807.
Pedreschi, D., Ruggieri, S., & Turini, F. (2008). Discrimination-aware data mining. In Proceedings of the 14th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining (pp. 560–568).
Reid, A. & O’Callaghan, S. (2018). Ignorance isn’t bliss. Available online [accessed 10-Aug-2021].
Wiggins, B. (2020). Calculating Race: Racial Discrimination in Risk Assessment. Oxford University Press.