Radiology Experts Warn that Ensuring Autonomous AI Safety and Effectiveness is “A Long Way Off”
Imaging artificial intelligence (AI) solutions that provide decision support or administrative assistance to radiologists are poised to improve patient care. However, regulators are currently unable to ensure the safety and effectiveness of AI intended to automate key components of imaging workflow without physician-expert oversight.
That is the message that the American College of Radiology (ACR) and the Radiological Society of North America (RSNA) sent to the U.S. Food and Drug Administration (FDA) regarding the agency’s February 2020 public workshop on the "Evolving Role of Artificial Intelligence in Radiological Imaging," which focused on higher-risk, “autonomously” functioning AI. Examples of autonomous imaging AI include algorithms that identify normal radiological examinations or rule out critical diseases without physician review.
“It is unlikely FDA could provide reasonable assurance of the safety and effectiveness of autonomous AI in radiology patient care without more rigorous testing, surveillance, and other oversight mechanisms throughout the total product life cycle,” the joint letter states.
The radiology groups highlighted the elevated risks of autonomous imaging AI and urged the FDA to adopt more stringent review, validation, monitoring, and product labeling requirements.
Moving forward, the FDA will use the workshop presentations and written feedback to inform future regulatory policymaking on autonomous imaging AI.