ACR Bulletin


The Big Purchase

Now that AI is becoming more accessible, hearing from those who have put a program into place can provide valuable insights.

January 27, 2021

Practical considerations for implementing AI technology into clinical workflow were the focus of the closing session of the 2020 ACR Imaging Informatics Summit, “You’ve Purchased an AI Model. Now What?” Speakers from varied practice settings offered several workable strategies for implementing AI and ensuring a successful program.

Work With Vendors to Develop Implementation Solutions

Kicking off the presentations at the Summit, ACR Informatics Commission Chair Christoph Wald, MD, MBA, PhD, FACR, joked that “all of the data scientists in Boston had already been hired by MGH and Brigham.” So, the Lahey Hospital & Medical Center team — without a data scientist onsite — relied heavily on working relationships with a single AI vendor and a third-party workflow orchestrator to integrate AI algorithms into their clinical workflow. This approach helped the team develop a context-sensitive widget, deployed within the PACS viewer, that alerts radiologists to the presence of AI results on a given study. Wald emphasized the importance of collecting user (i.e., radiologist) feedback that is tied to specific tools and conveyed both to the internal quality assurance team and to the vendor.

Communicate With Radiologists and Staff

These points were further supported by the complementary presentations of Arun Krishnaraj, MD, MPH, chair of the Commission on Patient- and Family-Centered Care, and Christopher M. Gaskin, MD, FACR, associate chief medical information officer at the University of Virginia (UVA) Health System, on their experience implementing AI models at UVA. Krishnaraj explained that their approach begins by mapping a particular tool onto its intended use case to determine how it should integrate into the clinical workflow.

According to Gaskin, working with UVA’s AI vendor was important to tune the implementation and presentation of results, including decisions about precisely when images are exposed to an algorithm and when results are presented to the interpreting radiologist. Ultimately, UVA’s system was set up to alert radiologists to the arrival of new AI results after a report has been finalized and to facilitate integrated review of those results post hoc.
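
To make that sequencing concrete, here is a minimal sketch, assuming a simplified study object with a report-status flag. It is not UVA’s actual implementation; the class and function names are hypothetical, and the handling of results that arrive before a report is finalized is an added assumption.

from dataclasses import dataclass, field
from typing import Dict, List


@dataclass
class Study:
    accession: str
    report_finalized: bool = False
    ai_results: List[Dict] = field(default_factory=list)      # everything received from the AI service
    pending_review: List[Dict] = field(default_factory=list)  # results surfaced for post hoc review


def notify_radiologist(study: Study, result: Dict) -> None:
    """Stand-in for a PACS alert: queue the result for integrated, post hoc review."""
    study.pending_review.append(result)
    print(f"Study {study.accession}: new AI result available for review ({result['finding']})")


def receive_ai_result(study: Study, result: Dict) -> None:
    """Store an incoming AI result; alert only if the report is already final."""
    study.ai_results.append(result)
    if study.report_finalized:
        notify_radiologist(study, result)
    # Otherwise the result waits quietly and never interrupts active reporting.


def finalize_report(study: Study) -> None:
    """When the report is signed, release any AI results that arrived earlier."""
    study.report_finalized = True
    for result in study.ai_results:
        notify_radiologist(study, result)


if __name__ == "__main__":
    study = Study(accession="A12345")
    finalize_report(study)  # report signed before the AI result arrives
    receive_ai_result(study, {"finding": "possible intracranial hemorrhage", "score": 0.91})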

The UVA experience underscored a critical point: clear communication with radiologists and staff throughout the process of developing and implementing AI in the clinical workflow is necessary to ensure success. Krishnaraj focused on the implementation of a lung CT de-noising algorithm to expand CT lung cancer screening to underserved populations in rural Virginia — a fantastic example of AI helping facilitate a public health initiative within their department. He found the biggest challenge in this project was “keeping everyone informed across multiple remote imaging sites during implementation.” Communication became even more important when an algorithm had to be taken out of the clinical workflow at UVA, due to the vendor’s decision to conform to an update to the FDA’s regulatory pathway.

Communication can also play a big role in mitigating the “expectation-reality mismatch,” described by Jayashree Kalpathy-Cramer, PhD, scientific director at Massachusetts General Hospital and Brigham and Women’s Hospital Center for Clinical Data Science. In one example, the initial excitement around AI at UVA — particularly among the trainees and younger faculty — waned over time as radiologists were “underwhelmed” by the AI algorithm’s performance in clinical workflow. According to Kalpathy-Cramer, this is most often due to data heterogeneity and poor generalization of AI algorithms introduced to new sites after initial training and validation elsewhere. Though performance issues like these are well known in data science circles, communicating them to radiologists can help manage expectations for AI in the workflow.

Determine the Value Proposition

While academic departments like UVA’s have a mission to support research and the technological advancement of the field, private practices are more motivated to implement AI if there is a business case for it. At the Summit, Nina E. Kottler, MD, MS, vice president of clinical operations at Radiology Partners, addressed this issue — or, as she put it, “Who is going to pay for AI?”

Different actors in the radiology ecosystem have different incentives. For radiologists, particularly in private practice, efficiency is important enough that radiology practices might be willing to pay for productivity gains. Payors and hospitals have different incentives: For payors, decreasing costs is the priority, while hospitals strive to improve patient throughput. Kottler cautioned that “quality is an expected component of our product as radiologists,” so it may be hard to justify paying for AI that only targets improvements in quality.

Start the AI Adoption Process

Kottler was asked where a practice should begin if it has not yet adopted AI. Harkening back to points made earlier by Wald, Krishnaraj, and Gaskin, she emphasized that the focus should be on choosing a vendor based on its willingness and availability to work with your organization, rather than on selecting a specific algorithm. Wald reminded attendees that this is particularly important for practices without a data scientist on site.

Following up, a conference participant asked how practices might trial an algorithm from a vendor before committing to a contract. Laura Coombs, PhD, ACR vice president of data science and informatics, said the ACR AI-LAB™ platform’s Evaluate module will be able to provide this service. This is important because algorithms are notoriously brittle in environments different from those in which they were trained, and practices should attempt to “try before they buy” — using their own data. The ACR Data Science Institute® has received interest from vendors in engaging in this service, and the details are being worked out.
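
As a rough illustration of what “trying before buying” on local data can involve (this is not the AI-LAB Evaluate module, whose interface is not described here), the sketch below scores a candidate algorithm’s case-level flags against radiologist-confirmed ground truth drawn from a practice’s own archive. The data structures and metrics are assumptions chosen for simplicity.

from typing import Dict


def local_evaluation(predictions: Dict[str, bool], ground_truth: Dict[str, bool]) -> Dict[str, float]:
    """Compute sensitivity and specificity on the practice's own studies."""
    tp = fp = tn = fn = 0
    for accession, truth in ground_truth.items():
        pred = predictions.get(accession, False)
        if truth and pred:
            tp += 1
        elif truth and not pred:
            fn += 1
        elif not truth and pred:
            fp += 1
        else:
            tn += 1
    return {
        "sensitivity": tp / (tp + fn) if (tp + fn) else float("nan"),
        "specificity": tn / (tn + fp) if (tn + fp) else float("nan"),
    }


if __name__ == "__main__":
    # Keys are hypothetical accession numbers; values are case-level flags.
    # Ground truth would come from radiologist-confirmed findings in the local archive.
    algorithm_flags = {"A1": True, "A2": False, "A3": True, "A4": False}
    local_truth = {"A1": True, "A2": True, "A3": False, "A4": False}
    print(local_evaluation(algorithm_flags, local_truth))
    # {'sensitivity': 0.5, 'specificity': 0.5} - likely well below any vendor-reported figure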

A final point to consider: Panelists repeatedly said they believe imaging AI tools — in their current form — are not “ready” for permanent storage. They do not routinely store AI results in PACS, but rather store results in a separate archive for quality assurance purposes. However, the results could indirectly become a part of the medical record if a radiologist acknowledges them in the report.

Practices must also recognize that algorithm performance can degrade over time, so they will need a solution for longitudinal monitoring. The ACR AI-LAB platform’s Assess-AI module links to the ACR National Radiology Data Registry to collect longitudinal data on algorithm performance, along with exam metadata, enabling practices to detect if — and how — an algorithm’s performance degrades with time.
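
Below is a minimal sketch of the longitudinal-monitoring idea: compare recent performance against an early-deployment baseline and flag a meaningful drop. It is a generic illustration rather than the Assess-AI methodology; the agreement metric, window size, and threshold are assumptions.

from statistics import mean
from typing import List


def agreement_rate(flags: List[bool]) -> float:
    """Fraction of cases in which the radiologist agreed with the AI result."""
    return mean(flags) if flags else float("nan")


def detect_degradation(history: List[bool], window: int = 50, drop_threshold: float = 0.10) -> bool:
    """Flag the algorithm if recent agreement falls well below its baseline."""
    if len(history) < 2 * window:
        return False  # not enough longitudinal data yet
    baseline = agreement_rate(history[:window])   # early-deployment period
    recent = agreement_rate(history[-window:])    # most recent cases
    return (baseline - recent) > drop_threshold


if __name__ == "__main__":
    # Simulated record: strong early agreement, weaker agreement later on
    # (for example, after a scanner protocol change shifts the input data).
    history = [True] * 45 + [False] * 5 + [True] * 30 + [False] * 20
    print(detect_degradation(history))  # True: agreement dropped from 0.90 to 0.60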

Author: Walter F. Wiggins, MD, PhD, Clinical Director of the Duke Center for AI in Radiology