FDA Readies to Regulate AI and Machine Learning in Medical Devices
The U.S. Food and Drug Administration has turned its attention to artificial intelligence and machine learning in medical devices. Experts say the agency will need to open a specific regulatory pathway for these devices, one that is sensitive to the unique risks and benefits posed by the technology and to its possible impact on the practice of medicine.
Continual learning, also known as “continuous” or “lifelong” learning, is an artificial intelligence technique through which mathematical models remain adaptive to the external world. The models evolve by continuously incorporating new information from real-world feedback without losing previously learned knowledge, which can make their decisions more responsive to the outside world, much as human and animal minds acquire new insights. Researchers anticipate that these techniques could advance medical outcomes because of their potential for automation, accuracy, and objectivity.
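The idea can be illustrated with a minimal sketch. The toy perceptron below (a hypothetical example, not any medical-device algorithm) updates its parameters one observation at a time as feedback arrives, rather than being retrained from scratch on a fixed dataset:

```python
# Minimal illustration of continual ("online") learning: the model
# incorporates each new labeled observation incrementally, instead of
# being retrained in a single batch. Toy example only.

class OnlinePerceptron:
    def __init__(self, n_features, lr=0.1):
        self.w = [0.0] * n_features  # learned weights
        self.b = 0.0                 # learned bias
        self.lr = lr                 # learning rate

    def predict(self, x):
        score = sum(wi * xi for wi, xi in zip(self.w, x)) + self.b
        return 1 if score >= 0 else 0

    def update(self, x, y):
        # Incorporate one new observation (real-world feedback):
        # nudge the weights only when the current prediction is wrong.
        error = y - self.predict(x)
        if error:
            self.w = [wi + self.lr * error * xi
                      for wi, xi in zip(self.w, x)]
            self.b += self.lr * error

model = OnlinePerceptron(n_features=2)
# A stream of (features, label) pairs arriving over time.
stream = [([1.0, 1.0], 1), ([0.0, 0.0], 0),
          ([1.0, 0.5], 1), ([0.2, 0.1], 0)]
for x, y in stream:
    model.update(x, y)  # the model keeps adapting as data arrives
```

The regulatory difficulty the article describes follows directly from this structure: because `update` keeps running after deployment, the device a regulator approved is not byte-for-byte the device in use later.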
A review of the relevant databases found that 222 such devices were approved in the U.S. between January 2015 and March 2020, according to a comparative analysis published in the peer-reviewed journal The Lancet. The number of approvals has risen since 2015, with many of those devices cleared for radiology, a medical discipline that uses imaging.
Radiology Today, a trade publication, has said that talk about artificial intelligence in radiology is “running rampant.”
Radiology involves certain tasks that a machine can perform more quickly and simply than those in other areas of medicine, such as psychiatry, which relies more heavily on patient-physician interaction and integrated treatment, said Dr. Kerstin Vokinger, professor at the University of Zurich, affiliated faculty at the Program on Regulations, Therapeutics, and Law at Harvard Medical School, and one of the authors of the Lancet study, explaining why radiology has embraced the technology so quickly.
Publications from the FDA indicate that the regulatory body is working out a framework under which to regulate these technologies. In January, the FDA published an “action plan” stating that these devices could safely and effectively improve patient care, provided they are developed under an “appropriately tailored total product lifecycle-based regulatory oversight.”
It also said that the agency is in the process of collecting real world data about the implementation of these devices.
Continuous learning is already in use in other industries, for example in the Autopilot function of Tesla cars, according to another study.
That study, also published in The Lancet and co-authored by Vokinger, suggests that the FDA will need to account for a few unique risks posed by artificial intelligence and machine learning, including the possibility for new information to introduce errors or to lower the effectiveness of a device after it has been approved, as well as the potential for data sets to be racially or ethnically biased.
“The inherent risks of continual learning systems, as well as the benefits, mean that it’s important that the FDA takes a cautious approach to regulating continual learning systems,” according to the study.
Specifically, the study says, the FDA will need to clarify what aspects of these devices manufacturers can change after they’ve received authorization.
“A crucial principle is to ensure that the introduction of continual learning systems does not lead to a reduction in medical device performance,” the study’s authors wrote. “To reach this goal, the FDA and the manufacturer should determine, during the review process, how the device’s performance could further be improved through learning, for example, by identifying and addressing specific errors.”
Bias is a particular concern since using data based on one portion of the population has the potential to lead to real world harm to the health of marginalized groups, researchers like Vokinger explain.
“What you can see over the past fifty years or so is that these devices in general are associated with higher risks,” Vokinger said.
Continual learning, in this respect, can either lead to the reduction of bias or the introduction of more bias, depending on the data and practices used, Vokinger said.
“This has already been detected as a problem, and the question now is how to mitigate this,” she said.
The FDA action plan also flags this as a particular area of concern and reports that the agency is actively supporting research efforts to “evaluate” and “eliminate” algorithmic bias.
Researchers expect the framework to appear sooner rather than later, given the immense interest in the technology.
Asked how artificial intelligence and machine learning will likely shape the medical device industry, Vokinger said it is challenging to predict what role they will play in the future of medicine.
Just because a device is authorized by the FDA doesn’t mean it will get implemented in clinics, she said, and so it remains to be seen how precisely this will shape the industry moving forward.
“We have this increasing number of products put on the market. To be fair, artificial intelligence is nothing super new, it has already been applied [for many years], but now more manufacturers are focused on this new technology and more sophisticated methodologies that are embedded in artificial intelligence,” she said.
“I do believe that there’s a lot of potential, and I could imagine that in certain areas, especially in more technical areas, there could be quite a big impact,” she said.