The AI Transparency Imperative: Should Patients Be Told When Artificial Intelligence is Used in Their Care?
Introduction: The New Frontier of Informed Consent
The integration of Artificial Intelligence (AI) into clinical practice is rapidly transforming healthcare, offering unprecedented capabilities in diagnostics, treatment planning, and administrative workflows. From analyzing medical images to predicting patient risk, AI is becoming an indispensable tool for clinicians. However, this technological leap introduces a profound ethical and legal question: Should patients be explicitly informed when AI is used in their care?
The answer is not a simple yes or no. It lies at the intersection of patient autonomy, the legal doctrine of informed consent, and the practical realities of a busy clinical environment. For professionals and the public interested in digital health, understanding the emerging consensus on AI disclosure is crucial for maintaining trust and ensuring ethical practice in the age of intelligent medicine.
The Foundation: Informed Consent and Patient Autonomy
The bedrock of modern medical ethics is the principle of informed consent, which mandates that a patient must be fully apprised of all material information regarding a proposed medical intervention before agreeing to it. This includes the nature of the procedure, potential risks, benefits, and alternatives. The core legal standard is often the "reasonable patient" standard: what information would a reasonable person in the patient's position consider material to their decision-making?
Surveys consistently show that a significant majority of patients—often over 60%—express discomfort with their physician relying on AI and state a clear preference for being notified of its use [1]. This strong patient preference suggests that the use of AI, particularly in high-stakes decisions, is indeed material information. Failing to disclose AI use risks undermining patient autonomy and eroding the fundamental trust relationship between patient and provider.
The Materiality Framework: When Does AI Use Require Disclosure?
Given that AI is used in countless ways, from the mundane (scheduling optimization) to the critical (cancer diagnosis), a blanket disclosure policy for every instance would lead to information overload, potentially distracting patients from truly important discussions. This challenge has led legal and health policy experts to propose a materiality framework to guide disclosure.
A prominent framework, proposed by Mello and colleagues in JAMA, suggests that disclosure is ethically and legally required when the AI tool's use meets one or more of the following criteria [1]:
| Criterion | Description | Example of Required Disclosure |
|---|---|---|
| High-Impact | The AI tool has a direct, significant impact on the patient's diagnosis, prognosis, or treatment plan. | An AI algorithm that determines a patient's eligibility for a life-saving clinical trial. |
| Novelty or Uncertainty | The AI tool is experimental, newly deployed, or has a high degree of uncertainty regarding its performance or potential for bias. | A proprietary, black-box AI model with limited real-world validation data. |
| Replacement of Human Judgment | The AI tool is replacing or substantially diminishing the role of human clinical judgment in a critical decision. | An autonomous AI system that issues a final diagnostic recommendation without required human oversight. |
If the AI is merely a low-impact clinical decision support tool, similar to a digital calculator or a standard laboratory instrument, and the clinician retains full oversight, disclosure may not be necessary. However, as AI systems become more complex and autonomous, more of their uses will cross the materiality threshold and trigger an obligation to disclose.
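To make the framework concrete, its decision logic can be sketched as a simple any-criterion-met check. The sketch below is illustrative only: the `AIToolUse` fields and the `disclosure_required` helper are hypothetical simplifications of the three criteria in the table above, not an implementation of any published framework or regulatory standard.

```python
from dataclasses import dataclass

@dataclass
class AIToolUse:
    """Hypothetical summary of how an AI tool figures in a patient's care."""
    high_impact: bool              # directly shapes diagnosis, prognosis, or treatment
    novel_or_uncertain: bool       # experimental, newly deployed, or poorly validated
    replaces_human_judgment: bool  # substitutes for clinician judgment on a critical decision

def disclosure_required(use: AIToolUse) -> bool:
    """Disclosure is indicated if any one materiality criterion is met."""
    return use.high_impact or use.novel_or_uncertain or use.replaces_human_judgment

# A validated, low-impact scheduling optimizer under full clinician oversight
# meets no criterion, so disclosure may not be necessary.
print(disclosure_required(AIToolUse(False, False, False)))  # False

# A black-box model with limited real-world validation meets the novelty or
# uncertainty criterion, so disclosure is indicated even with human oversight.
print(disclosure_required(AIToolUse(high_impact=True,
                                    novel_or_uncertain=True,
                                    replaces_human_judgment=False)))  # True
```

The design point the sketch captures is that the criteria are disjunctive: meeting any single one is enough to make the AI's use material. This is why low-impact tools under full clinician oversight fall below the threshold, while a single red flag, such as limited validation data, pushes a tool above it.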
The Practical Challenge: Transparency vs. Trust
The debate is ultimately a balancing act. On one side is the imperative for transparency to uphold patient rights and build trust. On the other is the risk of information overload, which could confuse patients or cause them to irrationally reject a beneficial, AI-assisted treatment.
The solution lies in developing clear, standardized, and context-specific disclosure policies. Healthcare organizations must move beyond generic consent forms and adopt a nuanced approach that informs patients about the type of AI being used and its role in their care. This policy should be part of the organizational governance for any AI deployment.
For more in-depth analysis on this topic, the resources at www.rasitdinc.com provide expert commentary on the ethical governance and practical implementation of AI in clinical settings, offering professional insight into navigating this complex landscape.
Conclusion: A Call for Clear Policy
The question of whether to tell patients when AI is used is evolving from an ethical debate into a legal and regulatory requirement. As AI continues to mature, healthcare providers and systems have an obligation to proactively define their disclosure policies based on the principle of materiality. By prioritizing transparency in high-impact scenarios, the healthcare industry can harness the power of AI while reinforcing the patient-centric values that define ethical medical practice.
References
[1] Mello, M. M., et al. (2025). Ethical Obligations to Inform Patients About Use of AI Tools. JAMA.
[2] Rose, S. L., et al. (2024). An Ethically Supported Framework for Determining Patient Notification of Artificial Intelligence Use in Clinical Care. CHEST.
[3] Cohen, I. G. (2020). Informed Consent and Medical Artificial Intelligence: What to Tell the Patient. Georgetown Law Journal.