The Permissibility of Biased AI in a Biased World: An Ethical Analysis of AI for Screening and Referrals for Diabetic Retinopathy in Singapore
Muyskens K., Ballantyne A., Savulescu J., Nasir HU., Muralidharan A.
A significant ethical tension in resource allocation and public health ethics is that between utility and equity. We explore this tension in the context of health AI through an examination of a diagnostic AI screening tool for diabetic retinopathy developed by a team of researchers at Duke-NUS in Singapore. While this tool was found to be effective, it was not equally effective across every ethnic group in Singapore, being less effective for the minority Malay population than for the Chinese majority. We discuss the problematic normative nature of bias in health AI and explore the ways in which bias can interact with various forms of social inequality. From there, we examine the specifics of the diabetic retinopathy case and weigh up the trade-offs between utility and equity. Ultimately, we conclude that it is ethically permissible to prioritise utility over equity where certain criteria hold. Given that any medical AI is more likely than not to carry lingering bias stemming from biases in its training data, which may in turn reflect wider social inequalities, we argue that it is permissible to implement an AI tool with residual bias where: (1) its introduction reduces the influence of biases (even if overall inequality is worsened), and/or (2) the utility gained is significant enough and shared across groups (even if unevenly).