The application of machine-learning technologies to medical practice promises to enhance the capabilities of healthcare professionals in the assessment, diagnosis, and treatment of medical conditions. However, there is growing concern that algorithmic bias may perpetuate or exacerbate existing health inequalities. Hence, it matters that we make precise the different respects in which algorithmic bias can arise in medicine, and also make clear the normative relevance of these different kinds of algorithmic bias for broader questions about justice and fairness in healthcare. In this paper, we provide the building blocks for an account of algorithmic bias and its normative relevance in medicine.
This paper presents a dilemma for the additive model of reasons. Either the model accommodates disjunctive cases, in which one ought to perform some act $\phi$ just in case at least one of two factors obtains, or it accommodates conjunctive cases, in which one ought to $\phi$ just in case both of two factors obtain. The dilemma also arises for a revised additive model that accommodates imprecisely weighted reasons. Since disjunctive and conjunctive cases both exist, the additive model is extensionally inadequate. The upshot of the dilemma is that one of the most influential accounts of how reasons accrue to determine what we ought to do is flawed.