Credit-score models provide one of the many contexts through which the big data micro-segmentation or ‘personalisation’ phenomenon can be analysed and critiqued. This chapter approaches the issue through the lens of anti-discrimination law, and in particular the concept of indirect discrimination. The argument presented is that, despite its initial promise based on its focus on impact, ‘indirect discrimination’ is ultimately unlikely to deliver a mechanism to intervene in and curb the excesses of the personalised service model. The reason for this failure does not lie in the concept’s inherent weaknesses but rather in the ‘shortcomings’ (entrenched biases) of empirical reality itself, which any ‘accurate’ (or useful) statistical analysis cannot but reflect. Still, the anti-discrimination context offers insights that are valuable beyond its own disciplinary boundaries. For example, the opportunities for oversight and review based on correlations within outputs, rather than analysis of inputs, are fundamentally at odds with the current trend that demands greater transparency of AI, but may ultimately prove more practical and realistic considering the ‘natural’ opacity of learning algorithms and businesses’ ‘natural’ secrecy. The credit risk score context also provides a low-key yet powerful illustration of the oppressive potential of a world in which individual behaviour from any sphere or domain may be used for any purpose; where a bank, insurance company, employer, health care provider, or indeed any government authority can tap into our social DNA to pre-judge us, should it be considered appropriate and necessary for their manifold objectives.
This is the introductory chapter to the edited collection ‘Data-Driven Personalisation in Markets, Politics and Law’ (Cambridge University Press, 2021), which explores the emergent pervasive phenomenon of algorithmic prediction of human preferences, responses and likely behaviours in numerous social domains – ranging from personalised advertising and political microtargeting to precision medicine, personalised pricing, and predictive policing and sentencing. This chapter reflects on such human-focused use of predictive technology, first, by situating it within a general framework of profiling and defending data-driven individual and group profiling against some critiques of stereotyping, on the basis that our cognition of the external environment necessarily relies on relevant abstractions or non-universal generalisations. The second set of reflections centres on the philosophical tradition of empiricism as a basis of knowledge or truth production, and uses this tradition to critique data-driven profiling and personalisation practices in their numerous manifestations.
The most fascinating and profitable subject of predictive algorithms is the human actor. Analysing big data through learning algorithms to predict and pre-empt individual decisions gives corporations, political parties and the state a powerful tool. Algorithmic analysis of digital footprints, as an omnipresent form of surveillance, has already been used in diverse contexts: behavioural advertising, personalised pricing, political micro-targeting, precision medicine, and predictive policing and prison sentencing. This volume brings together experts to offer philosophical, sociological and legal perspectives on these personalised data practices. It explores common themes such as choice, personal autonomy, equality, privacy, and corporate and governmental efficiency against the normative frameworks of the market, democracy and the rule of law. By offering these insights, this collection on data-driven personalisation seeks to stimulate an interdisciplinary debate on one of the most pervasive, transformative and insidious socio-technical developments of our time.