
The principle of (respect for) patient autonomy has traditionally emphasized independence in medical decision-making, reflecting a broader commitment to epistemic individualism. However, recent philosophical work has challenged this view, suggesting that autonomous decisions are inherently dependent on epistemic and social supports. Wilkinson and Levy's "scaffolded model" of autonomy demonstrates how our everyday decisions rely on distributed cognition and various forms of epistemic scaffolding, from consulting others to using technological aids such as maps or calculators. This paper explores how Large Language Models (LLMs) could operationalize scaffolded autonomy in medical informed consent. We argue that, rather than undermining patient autonomy, appropriately designed LLM systems could enhance it by providing flexible, personalized support for information processing and value clarification. Drawing on examples from clinical practice, we examine how LLMs might serve as cognitive scaffolds in three key areas: enhancing information accessibility and comprehension, supporting value clarification, and facilitating culturally appropriate decision-making processes. However, implementing LLMs in consent procedures raises important challenges regarding epistemic responsibility, authenticity of choice, and the maintenance of appropriate human oversight. We analyze these challenges through the lens of scaffolded autonomy, arguing that successful implementation requires moving beyond simple questions of information provision to consider how technological systems can support genuinely autonomous decision-making. The paper concludes by proposing practical guidelines for LLM implementation while highlighting broader philosophical questions about the nature of autonomous choice in technologically mediated environments.

Original publication

DOI

10.1111/bioe.70030

Type

Journal article

Publication Date

01 February 2026

Volume

40

Pages

183 - 193

Total pages

10

Keywords

artificial intelligence ethics, informed consent, large language models, medical decision-making, patient autonomy, humans, personal autonomy, decision making, language, comprehension, empowerment, patient participation