What Is the FDA's Approach to Adaptive AI Algorithms?

By Rasit Dinc

Artificial intelligence (AI) and machine learning (ML) are rapidly transforming healthcare, offering unprecedented opportunities to improve diagnostics and personalize treatment. As these technologies become more integrated into clinical practice, regulatory bodies such as the U.S. Food and Drug Administration (FDA) face the challenge of ensuring their safety and effectiveness. This is particularly true for adaptive AI algorithms, which can learn and evolve from real-world data.

The FDA's traditional regulatory paradigm was not designed for the dynamic nature of adaptive AI. Recognizing this, the agency has been proactively developing a new regulatory framework to address the unique challenges posed by these technologies. This blog post explores the FDA's evolving approach to regulating adaptive AI algorithms in healthcare.

The FDA's Evolving Regulatory Framework

The journey towards a new regulatory framework began with the acknowledgment that the existing system was ill-equipped to handle the continuous learning capabilities of AI/ML-based Software as a Medical Device (SaMD). In 2019, the FDA published the discussion paper "Proposed Regulatory Framework for Modifications to Artificial Intelligence/Machine Learning (AI/ML)-Based Software as a Medical Device (SaMD)" [1].

This was followed by the release of the "Artificial Intelligence and Machine Learning Software as a Medical Device Action Plan" in January 2021. This action plan detailed a multi-pronged approach to overseeing AI/ML-based SaMD, including updated guidance, the promotion of good machine learning practices, and a focus on real-world performance monitoring [2].

Key Components of the FDA's Approach

The FDA's approach to regulating adaptive AI is centered on a "Total Product Life Cycle" (TPLC) approach, in which a device is evaluated not only before it reaches the market but throughout its deployment. This approach is supported by several key components:

- Predetermined Change Control Plan (PCCP): a premarket description of the modifications a manufacturer anticipates making to an algorithm, consisting of SaMD Pre-Specifications (SPS), which define what the algorithm is expected to change, and an Algorithm Change Protocol (ACP), which defines how those changes will be implemented and validated. The FDA issued draft guidance on PCCP marketing submissions in 2023 [4]. A hypothetical sketch of such a plan follows this list.
- Good Machine Learning Practice (GMLP): guiding principles for data management, model training, and performance evaluation, published jointly with Health Canada and the UK's MHRA [3].
- Transparency to users: patient-centered communication about how an AI/ML-enabled device works and how it may change over time.
- Real-world performance (RWP) monitoring: ongoing collection and evaluation of post-market data to confirm that a deployed algorithm remains safe and effective as it adapts.
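
To make the PCCP idea more concrete, here is a minimal, hypothetical sketch in Python of how a developer might record pre-specified changes and check a proposed model update against them. The structures, field names, and thresholds are illustrative assumptions; the FDA does not prescribe a machine-readable PCCP format.

```python
from dataclasses import dataclass, field

@dataclass
class PreSpecification:
    """SaMD Pre-Specifications (SPS): the envelope of anticipated changes (hypothetical fields)."""
    allowed_change_types: set = field(default_factory=lambda: {"retrain_on_new_data"})
    min_auc: float = 0.90  # illustrative performance floor an updated model must meet
    locked_intended_use: str = "triage of chest X-rays for radiologist review"

@dataclass
class ChangeProtocol:
    """Algorithm Change Protocol (ACP): how changes are made and validated (hypothetical fields)."""
    validation_dataset: str = "held-out multi-site test set"
    requires_subgroup_analysis: bool = True

def update_within_pccp(change_type: str, new_auc: float, intended_use: str,
                       sps: PreSpecification) -> bool:
    """Return True if a proposed model update stays inside the pre-specified envelope."""
    return (
        change_type in sps.allowed_change_types
        and new_auc >= sps.min_auc
        and intended_use == sps.locked_intended_use
    )

if __name__ == "__main__":
    sps = PreSpecification()
    # Retraining that keeps the intended use and meets the performance floor: within the PCCP
    print(update_within_pccp("retrain_on_new_data", 0.93,
                             "triage of chest X-rays for radiologist review", sps))  # True
    # Changing the intended use falls outside the PCCP and would require a new submission
    print(update_within_pccp("retrain_on_new_data", 0.93,
                             "autonomous diagnosis without radiologist review", sps))  # False
```

In practice, a PCCP is a regulatory document rather than code; the point of the sketch is simply that the "what" (SPS) and the "how" (ACP) are fixed in advance, and any change that falls outside them triggers a new premarket submission.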

Challenges and Criticisms

Despite these efforts, significant challenges remain. Some experts argue that the current regulatory framework may create an "illusion of safety," as it may not fully account for the risks associated with the real-world performance of adaptive algorithms. There are concerns about the lack of robust, real-time monitoring mechanisms to detect performance degradation or the emergence of bias in AI models [6].
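
To illustrate what such monitoring could look like, the sketch below tracks a rolling measure of agreement between a model's predictions and confirmed clinical outcomes and raises a flag when performance drifts below the level reported at clearance. The window size, baseline, and alert threshold are assumptions chosen for illustration, not regulatory requirements.

```python
from collections import deque

class PerformanceMonitor:
    """Minimal illustration of post-market performance monitoring for an adaptive model."""

    def __init__(self, baseline_accuracy: float = 0.92,
                 window_size: int = 500, max_drop: float = 0.05):
        self.baseline = baseline_accuracy        # accuracy reported at clearance time (assumed)
        self.max_drop = max_drop                 # tolerated degradation before alerting (assumed)
        self.window = deque(maxlen=window_size)  # most recent adjudicated cases

    def record(self, prediction: int, outcome: int) -> None:
        """Log whether the model's prediction matched the confirmed clinical outcome."""
        self.window.append(1 if prediction == outcome else 0)

    def degraded(self) -> bool:
        """True when rolling accuracy falls meaningfully below the cleared baseline."""
        if len(self.window) < self.window.maxlen:
            return False  # not enough real-world data yet to judge
        rolling_accuracy = sum(self.window) / len(self.window)
        return rolling_accuracy < self.baseline - self.max_drop

# In deployment, each confirmed case would be streamed in as it is adjudicated:
monitor = PerformanceMonitor()
monitor.record(prediction=1, outcome=1)
if monitor.degraded():
    print("Rolling accuracy has dropped below the cleared baseline; investigate the latest update.")
```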

To address these concerns, there is a growing call for a more adaptive, community-engaged regulatory approach. This would involve mandating extensive post-market evaluations and requiring developers to disclose training data sources.
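
As a rough illustration of what a training-data disclosure might contain, the hypothetical record below lists data sources, coverage, and known gaps for a model release. The schema is entirely an assumption; no such format is mandated by the FDA.

```python
import json

# Hypothetical training-data disclosure record; every field here is an
# illustrative assumption, not an FDA-mandated schema.
training_data_disclosure = {
    "model_version": "2.4.0",
    "data_sources": [
        {"site": "Academic medical center A", "period": "2019-2022", "n_patients": 41000},
        {"site": "Community hospital network B", "period": "2020-2023", "n_patients": 18500},
    ],
    "demographics_summary": {"female": 0.52, "age_over_65": 0.31},
    "known_gaps": ["limited pediatric data", "single imaging vendor"],
}

print(json.dumps(training_data_disclosure, indent=2))
```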

Conclusion

The FDA's approach to regulating adaptive AI algorithms is a work in progress, but it represents a significant step towards balancing innovation with patient safety. By embracing a TPLC approach and implementing mechanisms like the PCCP and ACP, the FDA is creating a regulatory framework that is more agile and responsive to the unique characteristics of AI/ML technologies.

As AI continues to evolve, so too will the regulatory landscape. It is essential for all stakeholders to work together to ensure these powerful technologies are used safely, effectively, and equitably, unlocking AI's full potential to revolutionize healthcare.

References

[1] U.S. Food and Drug Administration. (2019). Proposed Regulatory Framework for Modifications to Artificial Intelligence/Machine Learning (AI/ML)-Based Software as a Medical Device (SaMD) - Discussion Paper and Request for Feedback. https://www.fda.gov/files/medical%20devices/published/US-FDA-Artificial-Intelligence-and-Machine-Learning-Discussion-Paper.pdf

[2] U.S. Food and Drug Administration. (2021). Artificial Intelligence and Machine Learning Software as a Medical Device Action Plan. https://www.fda.gov/medical-devices/software-medical-device-samd/artificial-intelligence-and-machine-learning-software-medical-device-action-plan

[3] U.S. Food and Drug Administration. (2021). Good Machine Learning Practice for Medical Device Development: Guiding Principles. https://www.fda.gov/medical-devices/software-medical-device-samd/good-machine-learning-practice-medical-device-development-guiding-principles

[4] U.S. Food and Drug Administration. (2023). Marketing Submission Recommendations for a Predetermined Change Control Plan for Artificial Intelligence/Machine Learning (AI/ML)-Enabled Device Software Functions. https://www.fda.gov/regulatory-information/search-fda-guidance-documents/marketing-submission-recommendations-predetermined-change-control-plan-artificial-intelligence

[5] Singh, V., Cheng, S., Kwan, A. C., & Ebinger, J. (2025). United States Food and Drug Administration Regulation of Clinical Software in the Era of Artificial Intelligence and Machine Learning. Mayo Clinic Proceedings: Digital Health, 3(3), 100231. https://www.sciencedirect.com/science/article/pii/S2949761225000380

[6] Abulibdeh, R., Celi, L. A., & Sejdić, E. (2025). The illusion of safety: A report to the FDA on AI healthcare product approvals. PLOS Digital Health, 4(6), e0000866. https://pmc.ncbi.nlm.nih.gov/articles/PMC12140231/