Last Updated: August 20, 2019 3:24PM
This post is part of a series on the use of AI in clinical trials. For background and to see related posts, check out the introductory piece.
With any innovation comes risk. Regulatory agencies often step in to guide innovators, shielding those who want to move fast and break things from some of that breakage. But in the case of AI, the healthcare industry has encountered a deafening silence from regulators.
The FDA recently announced draft guidance for AI-based SaMD (software as a medical device). It has remained quiet, though, about policies governing other applications of AI in healthcare, including clinical trials. This lack of guidance feeds a feedback loop:
- Developers are unclear about the requirements for using AI tools in a clinical setting.
- As a result, companies are reticent to submit their ideas—for example, trials with unique screening approaches or responsive algorithms that can predict disease—to the FDA for fear of rejection.
- That, in turn, gives the FDA little frame of reference to develop their recommendations.
- The cycle continues.
“No one wants to be the guinea pig, but we need one to see how regulators handle it,” noted Derek Dunn, Director of Global Clinical Operations for Alexion Pharmaceuticals, Inc.
Without predecessor devices to cite as substantial equivalents, applicants for FDA approval find themselves needing the De Novo pathway to demonstrate that their risk is low enough to avoid classification as Class III devices, which require premarket approval.
Adaptive vs Locked AI Systems
One major barrier to clear regulations for AI in healthcare is the adaptive nature of deep-learning systems. Unlike a physical device such as an ultrasound machine, AI-based SaMD can learn continuously. This is excellent news for patients, researchers, and clinicians, so long as the algorithm is learning correctly. But the black-box nature of neural networks, and the unavoidable presence of human bias in designing and sometimes training these programs, can make it difficult to prevent new errors or to course-correct after finding a bug.
Is the answer to allow only “locked” algorithms on the market? Surely that’s too limiting, as it removes one of AI’s greatest strengths. Perhaps version control or regulation of data quality could address these issues.
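To make the locked-versus-adaptive distinction concrete, here is a minimal sketch of what “version control” for a regulated model might look like: the approved version is pinned by a fingerprint of its parameters, and any retrained variant fails the deployment check until it is re-reviewed. All names here (`model_fingerprint`, `can_deploy`, the toy weights) are hypothetical illustrations, not any agency's or vendor's actual mechanism.

```python
import hashlib
import json

def model_fingerprint(weights: dict) -> str:
    # Hash a canonical serialization of the parameters so any
    # retraining (even a tiny weight change) is detectable.
    canonical = json.dumps(weights, sort_keys=True).encode("utf-8")
    return hashlib.sha256(canonical).hexdigest()

# Toy parameters standing in for the version a regulator reviewed.
approved_weights = {"w1": 0.42, "w2": -1.07, "bias": 0.003}
APPROVED_HASH = model_fingerprint(approved_weights)

def can_deploy(weights: dict) -> bool:
    # A "locked" deployment gate: only the approved version may serve.
    return model_fingerprint(weights) == APPROVED_HASH

# An adaptive system that retrained on new data no longer matches.
retrained_weights = {"w1": 0.45, "w2": -1.07, "bias": 0.003}
print(can_deploy(approved_weights))   # the locked version passes
print(can_deploy(retrained_weights))  # the updated version needs re-review
```

The design point is the trade-off the post describes: a hash gate like this gives regulators a fixed, auditable artifact, but it also freezes out exactly the continuous learning that makes these systems valuable.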
Because approval requires that benefits outweigh risks, the threshold for SaMD approval may differ by specialty. An application that could extend the life of a terminal cancer patient merits more risk than, say, an eczema-management solution that carries the possibility of severe side effects.
Regulations Around the World
These issues, of course, are hardly restricted to US-based trial applicants. When one regulatory body settles on its rules of engagement for AI in healthcare, there is no guarantee that regulators elsewhere around the globe will adopt the same requirements. Differences in how agencies qualify measurable results may also hamper intercontinental use. The hurdle is frustrating, given how quickly organizations could share current data and tools if permitting authorities allowed it.
“There is a big difference between what the FDA wants and what the European regulators want to see with regards to data and endpoints,” said Dunn. “The FDA is focused on more clinical endpoints. They want proof you can change the level of a parameter, and results must be specific and measurable. The European Medicines Agency doesn’t put as much weight on clinical endpoints; they also want to be shown how a product has measurable effects on patients’ lives via health economics data, etc.”
This need is already giving rise to recommendations for an ISO-like regulation system. The key to resolving these differences across specialties and geographies will be communication.
“When we are more educated, that allows us to be more open to the advances with AI,” said Kimberly Guedes, Centrexion Therapeutics Executive Director of Clinical Operations.
That solution is exactly what will help CROs and patients accept AI as well. How? Stay tuned; I’ll discuss that in a future post.