This past April, we reported from the World Medical Innovation Forum on the future of Artificial Intelligence (AI) in orthopedics.

We noted that AI is already present in orthopedics in the form of diagnostic programs, robot-assisted surgery programs and patient treatment algorithms. There is big money at stake. Morris Panner, CEO of Ambra Health, has written that funding for companies that provide AI-based health care solutions surpassed $400 million in 2017 and is breaking investment records.

Alan Reznick, M.D., M.B.A., F.A.A.O.S., and Ken Urish, M.D., Ph.D., write in AAOS Now that AAOS is currently developing big data through registries like the American Joint Replacement Registry (AJRR), “and as we look to find a useful home for AI in orthopaedics, AJRR could be one place to start.”

“AI could be used to screen radiographs for subtle abnormalities, back up an emergency department doctor’s nighttime fracture readings with a machine-learning-based second opinion or follow a bone tumor’s response to chemotherapy.”

Urish is currently working on AI as a way to evaluate MRI data to detect osteoarthritis and track cartilage loss over time. “This approach may have many implications as we evaluate the usefulness of treatments like lubricants, platelet-rich plasma, and stem cells, as well as medical treatments for inflammatory arthropathies.”

Proposed Regulatory Framework

But how is the FDA going to assure the safety and effectiveness of artificial intelligence software, also known as software as a medical device (SaMD), whose algorithms can change without human intervention and possibly affect patients in ways for which the software was never approved or cleared?

We didn’t have to wait long to find out how the FDA is thinking about this.

In April the agency issued a Discussion Paper titled: Proposed Regulatory Framework for Modifications to Artificial Intelligence/Machine Learning (AI/ML)-Based Software as a Medical Device (SaMD).

Then-Commissioner Scott Gottlieb, M.D., said this was an effort to “consider a new regulatory framework specifically tailored to promote the development of safe and effective medical devices that use advanced artificial intelligence algorithms.”

Avoid Returning for FDA Permission

At issue is how the agency will regulate AI/ML used in software and how it will keep track of ongoing changes to such software to ensure that any changes are safe and effective, without requiring an entirely new submission for each iteration of the software.

Currently, the FDA's premarket review process does not account for the continued adaptation of SaMDs that use AI/ML.

Simply put, AI algorithms are software that can learn from and act on data. But what if that data changes the safety profile of the device? Does the manufacturer have to go back for another clearance or approval?

The agency wants to base its regulatory framework on the internationally harmonized International Medical Device Regulators Forum (IMDRF) risk categorization principles, FDA’s benefit-risk framework, the risk management principles in its software modifications guidance, and the organization-based total product lifecycle (TPLC) approach envisioned in the Digital Health Software Precertification (Pre-Cert) Program.

Let’s start with some definitions. The FDA defines SaMD as software “intended to be used for one or more medical purposes that perform these purposes without being part of a hardware medical device.”

The agency wants to figure out how to regulate SaMDs that “learn” by using real-world evidence to continuously adapt and improve, and that might therefore otherwise need to be re-submitted for a new premarket review after each change.

For instance, a SaMD may need to undergo maintenance throughout its lifecycle. Maintenance can be adaptive (e.g., adjust to the use environment), perfective (e.g., improve performance), corrective (e.g., correct errors) or preventive (e.g., fix potential errors before they become problems). In the U.S., how and when manufacturers present these changes to the FDA so the agency can reassess the safety and effectiveness of the SaMD is a “work in progress.”

Gottlieb said the artificial intelligence technologies the agency has authorized or cleared so far generally use “locked” algorithms, which don’t continually adapt or learn each time the algorithm is used.
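To make the distinction concrete, here is a minimal conceptual sketch, not drawn from any actual FDA-cleared product: a “locked” classifier whose decision rule is frozen at release, next to an adaptive one whose decision boundary drifts as it learns from real-world cases. The class names, thresholds and learning rule are all hypothetical illustrations.

```python
class LockedClassifier:
    """'Locked' algorithm: behavior is fixed once cleared; every input
    is scored the same way for the life of the device."""
    def __init__(self, threshold):
        self.threshold = threshold  # frozen at the time of clearance

    def predict(self, score):
        return "abnormal" if score >= self.threshold else "normal"


class AdaptiveClassifier:
    """Adaptive algorithm: keeps learning from confirmed real-world
    outcomes, so its behavior can change after clearance."""
    def __init__(self, threshold, learning_rate=0.1):
        self.threshold = threshold
        self.learning_rate = learning_rate

    def predict(self, score):
        return "abnormal" if score >= self.threshold else "normal"

    def learn(self, score, truly_abnormal):
        # Nudge the decision boundary toward each confirmed outcome.
        # This post-market drift is what the proposed framework addresses.
        target = score if truly_abnormal else score + 1.0
        self.threshold += self.learning_rate * (target - self.threshold)


locked = LockedClassifier(threshold=0.5)
adaptive = AdaptiveClassifier(threshold=0.5)

# One confirmed abnormal case at a low score: the adaptive model's
# boundary moves (0.5 -> 0.48), while the locked model is unchanged.
adaptive.learn(score=0.3, truly_abnormal=True)
print(locked.threshold)    # still 0.5
print(adaptive.threshold)  # 0.48
```

The regulatory question the discussion paper raises is exactly this: after many such updates, the adaptive model may no longer behave like the version the agency originally reviewed.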
