Keith J. Dreyer, DO, PhD, FACR, ACR Data Science Institute® (ACR DSI) Chief Science Officer and Chief Data Science Officer, contributed this post.

I recently had the opportunity to present at the U.S. Food and Drug Administration (FDA) Virtual Public Workshop on Transparency of Artificial Intelligence/Machine Learning-Enabled Medical Devices. My talk focused on artificial intelligence (AI) algorithm transparency and what can be done to help healthcare providers find good AI algorithms and mitigate risks to patients, particularly underserved populations, including children.

While hundreds of algorithms have now achieved FDA clearance, much of the available information about them is documentation created for the FDA clearance or approval process. It is produced by the algorithm developers and shared with the FDA, but it is not available in its entirety to consumers of AI. The result is a growing list of cleared algorithms that healthcare providers can't be sure will work on our patient populations and on our equipment. As potential AI consumers, we don't have access to the necessary evidence, such as details of the testing data or summaries of the patient populations used in algorithm development, that would help us determine whether an algorithm is suitable for our particular patient populations and the equipment from our modality manufacturers.

Right now, healthcare organizations looking for an AI solution face unnecessary hurdles from the beginning. Once the AI use case is determined (that is, once they know what they would like an algorithm to do), they browse ACR DSI AI Central or the FDA's list of available products. But they don't have actual performance data. If the first algorithm they try fails on their data, it's back to the list to try again. Performance failures like these aren't published, so we aren't able to learn from the experiences of other institutions. It doesn't have to be like this. If consumers knew the pre-market testing parameters, such as the scanners used for the algorithm training data or the demographics of the testing data, we would save time. We could eliminate from the start some AI products that were unlikely to be a good fit.

We think the answer is for the FDA to ensure that a specific set of parameters is included in all public-facing documents across all FDA regulatory pathways. These parameters, such as testing demographics, scanner parameters, types of findings and methods for ground truth creation, would help healthcare providers understand how the algorithm was developed and tested.
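As a rough illustration only, and not an FDA or ACR specification, such a parameter set could be captured in a structured, machine-readable summary along these lines; all field names and values below are hypothetical:

```python
from dataclasses import dataclass
from typing import List


@dataclass
class AlgorithmTransparencySummary:
    """Hypothetical consumer-facing summary of pre-market testing parameters.

    Field names are illustrative only; this is not an existing FDA or ACR schema.
    """
    algorithm_name: str
    intended_findings: List[str]   # types of findings the algorithm reports
    testing_demographics: dict     # age ranges, sex, pediatric inclusion, etc.
    scanner_models: List[str]      # imaging equipment represented in the test data
    ground_truth_method: str       # how reference labels were established
    test_set_size: int
    sites_represented: int


# Example of what one entry in a public-facing listing could look like.
summary = AlgorithmTransparencySummary(
    algorithm_name="ExampleDetect v1.0",
    intended_findings=["pulmonary nodule"],
    testing_demographics={"age_range": "18-90", "pediatric_data": False},
    scanner_models=["Vendor A, Model X", "Vendor B, Model Y"],
    ground_truth_method="majority read by three board-certified radiologists",
    test_set_size=1200,
    sites_represented=4,
)
```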

We need simple ways for consumers to vet AI and simple processes for reporting on AI performance. Something similar to the ACR registries would be ideal. Just as a radiation dose index is captured for every CT exam, algorithm performance would be tracked to benchmark results and quantify quality. AI users need a way to send back real-time data and to access data from others, and the FDA needs this information as well to inform its premarket approval processes.
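Conceptually, each site running an algorithm could submit a lightweight performance record to a shared registry, in the spirit of the sketch below; the record structure is an assumption for illustration, not an existing ACR or FDA interface:

```python
from dataclasses import dataclass, asdict
from datetime import date
import json


@dataclass
class SitePerformanceReport:
    """Illustrative per-site, per-algorithm performance record (assumed schema)."""
    algorithm_name: str
    site_id: str
    reporting_period_end: str        # ISO date
    exams_processed: int
    confirmed_true_positives: int
    confirmed_false_positives: int
    scanner_models: list


report = SitePerformanceReport(
    algorithm_name="ExampleDetect v1.0",
    site_id="SITE-042",
    reporting_period_end=date(2022, 6, 30).isoformat(),
    exams_processed=850,
    confirmed_true_positives=61,
    confirmed_false_positives=14,
    scanner_models=["Vendor A, Model X"],
)

# Serialized payload a registry could aggregate to benchmark performance across sites.
print(json.dumps(asdict(report), indent=2))
```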

If AI continues to grow at the current rate, four years from now, there will be thousands of algorithms to select from. We need more transparency, clear product labeling, deeper tracking and a simple reporting process for those who have algorithms on-site. It’s time to take AI evaluation to the next level by making information available and accessible to healthcare providers.

Please share your thoughts in the comments section below and join the discussion on Engage (login required).
