In this month’s edition, Colin M. Segovis, MD, a third-year radiology resident at Wake Forest Baptist Hospital, discusses his views on clinical decision support (CDS) from an in-training perspective and how it will affect us in the future.

RFS: Why are you such an advocate for CDS?

CS: Technologies such as PACS, RIS, EHR and computerized physician order entry have improved many aspects of the radiology workflow, but have also eliminated many of the face-to-face interactions that gave radiologists the opportunity to share their imaging expertise and ensure the patient receives the most appropriate imaging exam. CDS brings the radiologist’s expertise out of the reading room and into the exam ordering process.

CDS systems refer to software that interfaces with an existing EHR (e.g., Epic) or operates as a standalone product to provide “just-in-time” information to the health care team about a patient care decision. The goals of a CDS system include improving quality of care, avoiding errors and adverse effects, and improving efficiency and cost-effectiveness. CDS systems are not radiology specific; they can be applied to almost any aspect of patient care. For example, a CDS system could flag an incorrect diet order for a diabetic patient and recommend a diabetic diet instead of a regular house diet.
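The pattern described here — checking an order against a rule set at the moment of order entry and suggesting an alternative — can be sketched in a few lines. This is a purely illustrative toy; the rule contents, function names, and messages below are hypothetical and do not represent any real CDS product’s logic or API.

```python
# Minimal sketch of a rule-based CDS check at order entry.
# The rules and names here are illustrative assumptions, not any
# real product's decision logic.

RULES = {
    # clinical indication -> recommended order (hypothetical entries)
    "uncomplicated low back pain": "no imaging",
    "suspected pulmonary embolism": "CT pulmonary angiography",
}

def check_order(indication: str, ordered_exam: str) -> str:
    """Return a 'just-in-time' message for the ordering provider."""
    recommended = RULES.get(indication.lower())
    if recommended is None:
        return "No guidance available; order proceeds unchanged."
    if recommended == ordered_exam:
        return "Order matches current recommendations."
    return (f"Flag: for '{indication}', current guidance suggests "
            f"'{recommended}' instead of '{ordered_exam}'.")
```

In a real deployment this check runs inside the ordering workflow, before the order is signed, which is what allows the system to intervene at the time of the decision rather than after the exam is performed.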

Radiologists are the imaging experts and have worked tirelessly to make this expertise available through tools such as the ACR Appropriateness Criteria®. Thus, the resources exist to ensure patients receive the appropriate imaging exam, but getting these resources to the ordering physician at the appropriate time is challenging. CDS brings these resources to the ordering physician at the time of ordering an imaging exam. For example, if a hospital deploys ACR Select™, the ACR’s clinical decision support product, then imaging exam orders are checked against ACR Appropriateness Criteria before the orders are signed. If a physician places an imaging exam order that differs from recommendations listed in the ACR Appropriateness Criteria, then the CDS system can flag this order at the time of order entry and suggest a more appropriate exam based on current evidence. Additionally, CDS systems can be used to prevent the ordering of exams that an institution has deemed inappropriate.

The shift in reimbursement from “volume” to “value” makes ordering the appropriate imaging study even more critical. There are countless examples of payers not reimbursing for an “inappropriate” study. Providers and payers are working together to use CDS systems to decrease inappropriate ordering. In some cases, CDS systems are replacing pre-authorization.

CDS systems will enhance patient care and make the radiologist’s expertise more available. Incorporating a CDS system into an institution's patient care and IT infrastructure is not trivial and requires rigorous planning before implementation, but the benefits are substantial. CDS systems decrease inappropriate exam ordering, increase value and enhance patient care.

RFS: Is your program currently involved in any CDS endeavors?

CS: Wake Forest Baptist Medical Center is deploying ACR Select, the CDS product offered by the ACR. This program will interface with our existing electronic ordering system and give ordering providers up-to-date recommendations based on the ACR Appropriateness Criteria as the provider is ordering the imaging study.

RFS: As we transition to a quality, not quantity, mentality in the field, how can we be fairly reimbursed? How do you quantify quality?

CS: This is an extremely difficult question. Quality depends on the outcome measure. Who decides the outcome measure? Does the referring provider define quality metrics ... the institution, the payer, the patient? In radiology, the definition of exam “quality” should include appropriateness, accuracy and report turnaround, but how these factors are measured and weighted remains a question.

A starting point for exam “quality” is exam “appropriateness.” Patients should undergo the imaging exam most suited to answering the clinical question. If the clinical question is not clear, then the radiologist should also have the ability to clarify the clinical question; the radiologist needs the opportunity to serve as a consultant, directly or indirectly, through mechanisms such as CDS systems.

CS: Accuracy, specifically report accuracy, is not a simple quality measure. Clearly, findings that are directly related to the clinical question and that will acutely or chronically affect a patient’s health need to be communicated in a clear and concise manner. But how is report accuracy graded? Radiologists are the imaging experts and therefore are in the best position to grade report accuracy. Some institutions use peer review systems, but peer reviews can be biased. For example, if reimbursement is linked to “accuracy” and “accuracy” is based on peer reviews only from within a practice group, then how does the system prevent accuracy inflation to ensure maximum reimbursement? Also, how do incidental findings factor into the “accuracy” of a report? There are guidelines for incidental findings (for example, on abdominal CT), but there is still substantial variability in how radiologists deal with such findings. Reports must be accurate, but work remains in defining “accuracy.”

Report turnaround time will likely always be a quality measure, but there needs to be recognition that not all imaging studies are equal. Rapid exam reporting is important for patient care but is currently weighted inappropriately. This is not surprising, since a volume-based reimbursement system places undue weight on report turnaround. Furthermore, length of stay is a quality metric in many emergency departments, resulting in pressure to produce reports as quickly as possible. Any quality-based system must include a measure of imaging exam complexity, capturing not just the complexity of the imaging modality but also the complexity of the patient.

How quality-based systems will tie quality metrics to reimbursement is evolving, but appropriateness, accuracy and report turnaround will all likely factor into any new quality-based system.

RFS: What are your thoughts on Accountable Care Organizations (ACOs)? As the landscape for radiology reimbursement changes, do you feel more radiology groups should make an effort to join a physician-led ACO?

CS: Everyone should read Dr. Frank Lexa’s excellent primer on ACOs.

Radiology groups should make an effort to join ACOs in which radiology has a voice, regardless of leadership. One empirical observation that supported the creation of ACOs was the presumption that “supply-sensitive” services, including diagnostic imaging, drove excessive cost in health care (A Radiologist's Primer on Accountable Care Organizations). This observation suggests that decreased imaging utilization will decrease cost. Radiologists should belong to ACOs that are willing to accept evidence supporting appropriate imaging utilization and that place radiologists in a position to help define “appropriateness.” Physician leadership of an ACO does not guarantee that radiologists will be valued. Therefore, any radiology group that joins an ACO should only do so after careful consideration of radiology’s role and voice within the ACO.