Can AI Make Decisions About Experimental Treatments? Navigating the Ethical and Regulatory Frontier

Introduction: The Promise and Peril of Algorithmic Authority

Artificial Intelligence (AI) is profoundly transforming drug discovery and clinical research, promising to accelerate the development of life-saving therapies and reduce costs [1]. However, entrusting AI with decision-making authority—particularly concerning experimental treatments—presents a complex ethical and regulatory challenge. The shift from AI as an analytical tool to AI as a decision-maker fundamentally alters the risk profile for patients and the accountability structure for researchers, demanding rigorous scrutiny.

AI’s Role in the Experimental Treatment Pipeline

AI is deeply embedded in the experimental treatment pipeline, primarily in a supportive capacity focused on efficiency and precision. Applications include optimizing trial design and site selection, identifying and matching eligible patients, monitoring safety signals and data quality in real time, and predicting outcomes such as treatment response or dropout risk.

Crucially, these transformative applications do not grant AI the final decision on a patient's enrollment, treatment course, or withdrawal. That ultimate decision remains firmly with the human clinician and the patient.
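To make that division of labor concrete, the sketch below illustrates a human-in-the-loop pattern under simple assumptions: a model may flag a candidate for review, but nothing becomes an enrollment without explicit clinician approval and documented patient consent. The function names, threshold, and data model are illustrative, not drawn from any cited system.

```python
# Minimal human-in-the-loop sketch: the model only *flags* candidates for
# review; enrollment is never finalized without clinician and patient sign-off.
# All names (screen_candidate, EligibilityFlag, the 0.7 threshold) are
# illustrative assumptions.
from dataclasses import dataclass

@dataclass
class EligibilityFlag:
    patient_id: str
    model_score: float            # e.g., predicted protocol match, 0..1
    recommended_for_review: bool

def screen_candidate(patient_id: str, model_score: float,
                     review_threshold: float = 0.7) -> EligibilityFlag:
    """AI step: surface likely matches for a human to examine."""
    return EligibilityFlag(patient_id, model_score,
                           recommended_for_review=model_score >= review_threshold)

def enroll(flag: EligibilityFlag, clinician_approves: bool,
           patient_consents: bool) -> bool:
    """Decision step: only humans can convert a flag into an enrollment."""
    return flag.recommended_for_review and clinician_approves and patient_consents
```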

The Ethical Imperative: Autonomy, Accountability, and Bias

The prospect of AI making decisions about experimental treatments raises critical ethical questions rooted in the four pillars of medical ethics: autonomy, beneficence, nonmaleficence, and justice [4].

1. Autonomy and Informed Consent

Informed consent is the cornerstone of experimental medicine. When an AI system is involved in the decision, the principle of autonomy is challenged. A patient cannot give truly informed consent if the AI's decision-making process is an opaque "black box." Patients have a right to understand the basis of their treatment recommendation, including the potential for programming errors or data anomalies within the AI model [4]. Moreover, the absence of human empathy in an algorithmic recommendation can erode the trust essential for a patient to consent to a high-risk, experimental procedure [4].
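The "black box" objection is not absolute: for simpler model classes, the basis of a recommendation can be surfaced directly. The sketch below is a minimal illustration, assuming a linear risk score whose per-feature contributions (coefficient times value) can be listed for a clinician to discuss with the patient; the feature names and weights are hypothetical, and real explainability programs go far beyond this.

```python
# Report the top signed contributions to a linear risk score so the main
# factors behind a recommendation can be stated in plain terms.
# Feature names and weights below are purely illustrative.
def contribution_report(weights: dict, features: dict, top_n: int = 3):
    """Return the top contributing factors for a linear risk score."""
    contributions = {name: weights[name] * features.get(name, 0.0) for name in weights}
    ranked = sorted(contributions.items(), key=lambda kv: abs(kv[1]), reverse=True)
    return ranked[:top_n]

weights = {"biomarker_x": 1.8, "prior_therapy_failures": 0.9, "age_decile": -0.4}
patient = {"biomarker_x": 1.2, "prior_therapy_failures": 2, "age_decile": 6}
for factor, contribution in contribution_report(weights, patient):
    print(f"{factor}: {contribution:+.2f}")
```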

2. Accountability and Nonmaleficence

If an AI system makes a decision that leads to patient harm—a violation of nonmaleficence—the question of legal and ethical responsibility is unclear. Is the developer, the prescribing physician, the deploying hospital, or the regulator accountable? The current lack of a clear accountability framework for autonomous AI decisions is a major barrier to its adoption in high-stakes experimental settings.

3. Bias and Justice

AI models are only as unbiased as their training data. If training data disproportionately represent certain demographics, the resulting AI may exhibit systemic bias, leading to unjust or inequitable decisions about access to potentially life-saving experimental treatments [4]. This compounds existing inequities in healthcare and risks widening the gap in access to cutting-edge therapies.
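One routine safeguard is to audit how often the model recommends access across demographic groups, both in the training data and in deployment. The sketch below is a crude demographic-parity screen under assumed inputs; the record format and the disparity threshold are illustrative choices, not a complete fairness methodology.

```python
# Compare recommendation rates across demographic groups and flag large gaps.
# Record format (group_label, recommended) and the 0.1 threshold are
# illustrative assumptions.
from collections import defaultdict

def recommendation_rates(records):
    """records: iterable of (group_label, recommended: bool) pairs."""
    totals, positives = defaultdict(int), defaultdict(int)
    for group, recommended in records:
        totals[group] += 1
        positives[group] += int(recommended)
    return {g: positives[g] / totals[g] for g in totals}

def parity_gap(rates):
    """Largest absolute difference in recommendation rate between groups."""
    values = list(rates.values())
    return max(values) - min(values)

audit = [("A", True), ("A", True), ("A", False),
         ("B", True), ("B", False), ("B", False)]
rates = recommendation_rates(audit)
if parity_gap(rates) > 0.1:   # threshold chosen for illustration only
    print("Potential disparity in recommendation rates:", rates)
```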

The Regulatory Response: A Framework for Credibility

Recognizing the rapid advancement of AI, regulatory bodies are establishing frameworks to govern its use. The U.S. Food and Drug Administration (FDA), for example, has introduced a draft guidance outlining a risk-based credibility assessment framework for AI models supporting regulatory decision-making in drug and biological product development [5].

This framework is a crucial step, acknowledging AI's supportive role and requiring a structured process to ensure reliability. The FDA's approach, based on a seven-step process, begins with defining the question of interest and the context of use (COU) for the AI model. This structured assessment aims to establish "trust in the model’s ability to provide information or data that is scientifically sound and appropriate for its COU" [5], setting a precedent for responsible integration by focusing on the model's credibility rather than the decision itself.
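As a rough illustration of how a sponsor might record the framework's opening steps, the sketch below captures a question of interest, a context of use, and a risk tier that scales the planned validation evidence. Only the question of interest and the COU come from the guidance as described above; the remaining field names and tier labels are assumptions, not the FDA's own schema.

```python
# Hypothetical record for the opening steps of a risk-based credibility
# assessment: define the question of interest and the context of use (COU),
# then assign a risk tier that drives how much validation evidence is planned.
from dataclasses import dataclass, field
from enum import Enum

class RiskTier(Enum):
    LOW = "low"        # model output has limited influence on the decision
    MEDIUM = "medium"
    HIGH = "high"      # model output strongly influences a high-consequence decision

@dataclass
class CredibilityAssessment:
    question_of_interest: str    # the specific question the model addresses
    context_of_use: str          # how and where the model's output is used
    risk_tier: RiskTier
    evidence_plan: list[str] = field(default_factory=list)  # planned validation activities

assessment = CredibilityAssessment(
    question_of_interest="Which enrolled participants are at elevated risk of early dropout?",
    context_of_use="Supports site monitoring; does not alter treatment assignment.",
    risk_tier=RiskTier.MEDIUM,
    evidence_plan=["external validation on held-out sites",
                   "prospective performance monitoring"],
)
```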

For more in-depth analysis on this topic, the resources at www.rasitdinc.com provide expert commentary and a comprehensive look at the intersection of digital health, AI, and ethical governance.

Conclusion: The Future is Collaborative, Not Autonomous

The answer to the question, "Can AI make decisions about experimental treatments?" is currently no, at least not autonomously. While AI is an indispensable tool for optimizing the experimental process, the final decision must remain a collaborative one, involving the patient, the clinician, and the ethical and regulatory guardrails designed to protect human life.

The future of experimental medicine will be defined by AI augmenting human judgment, not replacing it. The focus must shift from technical capability to ethical governance, ensuring that as AI accelerates scientific discovery, it does so with an unwavering commitment to patient safety, autonomy, and justice. The ethical responsibility ultimately rests with the human professionals who deploy and interpret these powerful new tools.


References

[1] Vats, K. (2025). From Design to Closure: Artificial Intelligence Transforming Clinical Research. Cureus. https://www.cureus.com/articles/425698-from-design-to-closure-artificial-intelligence-transforming-clinical-research

[2] Olawade, D. B. (2025). Artificial intelligence in clinical trials: A comprehensive review. ScienceDirect. https://www.sciencedirect.com/science/article/pii/S1386505625003582

[3] Sergi, C. M. (2025). Artificial Intelligence and the future of clinical trials. PMC. https://pmc.ncbi.nlm.nih.gov/articles/PMC12490578/

[4] Farhud, D. D., & Zokaei, S. (2021). Ethical Issues of Artificial Intelligence in Medicine and Healthcare. Iranian Journal of Public Health. https://pmc.ncbi.nlm.nih.gov/articles/PMC8826344/

[5] U.S. Food and Drug Administration. (2025). Considerations for the Use of Artificial Intelligence To Support Regulatory Decision-Making for Drug and Biological Products (Draft Guidance). https://www.fda.gov/regulatory-information/search-fda-guidance-documents/considerations-use-artificial-intelligence-support-regulatory-decision-making-drug-and-biological