In creating the ACR Appropriateness Criteria® (ACR AC), the ACR Task Force on Appropriateness Criteria incorporated the attributes of acceptable medical practice guidelines used by the Agency for Healthcare Research and Quality (AHRQ), as defined by the Institute of Medicine. These attributes are:
An important aspect of committee operations is the disclosure and management of potential conflicts of interest. In 2016, ACR began an organization-wide review of its conflict of interest policies. The current ACR COI policy is available on its website. The AC program’s COI process varies from the organization’s current policy to accommodate the requirements for qualified provider-led entities as designated by the Centers for Medicare and Medicaid Services’ AUC program.
When physicians become participants in the AC program, welcome letters are sent to inform them of their panel roles and responsibilities, including a link to complete the COI form. The COI form requires disclosure of all potential conflicts of interest. ACR staff oversees the COI evaluation process, coordinating with review panels consisting of ACR staff and members, who determine whether a conflict of interest exists and what action, if any, is appropriate. In addition to making the information publicly available, management actions may include exclusion from portions of a topic’s development, exclusion from a topic, or exclusion from the panel.
In addition to the COI disclosure form, AC staff begin every committee call by reading the conflict of interest disclosure statement listed below, which reminds members to update their COI forms. Members who have not yet submitted updates to their COI information are instructed not to participate in any discussion where an undisclosed conflict may exist.
Finally, all ACR AC are published as part of the Journal of the American College of Radiology (JACR) electronic supplement. Those who participated in developing a document and are listed as authors must complete the JACR process, which includes the International Committee of Medical Journal Editors (ICMJE) COI form reviewed by the journal’s staff/publisher.
The following COI statement is read to the Panel at the beginning of each call:
“By participating on this call, you confirm that you do not have a conflict of interest that would exclude you from working on this AC document. If you have a potential conflict of interest that has not been previously disclosed and reviewed by staff, we ask that you recuse yourself from the relevant discussion. Please discuss your possible conflict of interest with ACR staff at your convenience any time after the conference call.”
In 2000, the ACR Task Force on Appropriateness Criteria became the Committee on Appropriateness Criteria under the ACR’s Commission on Quality and Safety. In 2012, two separate AC Committees were created – the Committee on Diagnostic Imaging (DI)/Interventional Radiology (IR) Appropriateness Criteria and the Committee on Radiation Oncology (RO) Appropriateness Criteria. (Currently, the ACR RO AC is undergoing a major transition. While the RO AC panels are not actively developing new content, active RO topics are still available on the ACR website.)
The DI/IR AC committee comprises the panel chairs from the DI and IR panels and oversees the activities of their expert panels. The diagnostic panels are organized mainly along body systems (breast, cardiac, gastrointestinal, musculoskeletal, neuroradiology, thoracic, urologic, vascular, pediatric, and women’s imaging). Each expert panel is chaired by an individual with leadership capabilities and nationally recognized expertise in the area of focus.
The Subcommittee on Radiation Exposure assigns and regularly reviews the relative radiation levels for the procedures included in the topics and updates the “Radiation Dose Assessment Introduction” document as needed.
The Subcommittee on Appropriateness Criteria Methodology provides methodological oversight to all the panels. Its purpose is to ensure that consistent, sound methods are used in developing and revising AC topics across all the panels. It includes members from the various AC panels as well as individuals with methodological and research expertise.
The current AC organizational chart can be found here.
Over 450 volunteer physicians are involved in the ACR Appropriateness Criteria development process.
Each panel chair is responsible for recruiting and selecting the radiology members of their expert panel. Panels include physicians representing diverse geographic regions, practice settings (e.g., freestanding imaging centers, hospitals, private practice), demographic characteristics (e.g., sex, age, years of practice), imaging modalities (e.g., CT, MR, nuclear medicine, ultrasound), and clinical settings (e.g., ambulatory, emergency). ACR staff works with staff from numerous major medical societies to recruit a multidisciplinary team, including nonradiology clinical experts who participate in developing the criteria for topic areas relevant to their specialty. More than 80 representatives from 20 medical specialty organizations participate on the ACR AC expert panels.
In addition to the experts in imaging studies and relevant clinical topics provided by the ACR and collaborating medical specialty organizations, the expert panels include primary care providers (for example, but not limited to, internal medicine, pediatrics, obstetrics/gynecology, and general medicine providers). The expert panels also include members with PhD or master’s degrees and expertise in clinical trial design, methodology, and/or statistical analysis. The panels have about 12 to 15 members on average.
The expert panels are supported by the AC Methodology Subcommittee, which reviews and disseminates updates to the methodology through ACR staff. All staff are trained in the ACR AC methodology in weekly meetings and guide the panels through the day-to-day implementation of the methodology and administrative processes, serving as non-voting methodologists for their assigned panels.
The ACR AC are based on a systematic review of evidence as demonstrated by literature search, evidence table development, and topic development documents. Topic selection may be based on the prevalence of the condition, the variability of practice, the relative cost, the potential for morbidity or mortality, and the potential for improved care. Each question is clarified and refined to be as specific as possible and frequently the clinical conditions are broken down into a number of variants.
Once a clinical condition has been defined, a literature search of peer-reviewed medical journals is conducted and the relevant articles are identified and collected. The topic author assesses the literature and then drafts or revises the narrative summarizing the evidence found in the literature. ACR staff drafts an evidence table based on the analysis of the selected literature; these tables rate the study quality of each article included in the narrative. The expert panel reviews the narrative, evidence table, and supporting literature for each of the topic-variant combinations and assigns an appropriateness rating for each procedure listed in the variant table(s). Each individual panel member assigns a rating based on his or her interpretation of the available evidence.
The ACR adopted the definition of appropriateness mentioned in the RAND/UCLA Appropriateness Method User’s Manual (Fitch 2001) where “the expected health benefit (e.g., increased life expectancy, relief of pain, reduction in anxiety, improved functional capacity) exceeds the expected negative consequences (e.g., mortality, morbidity, anxiety, pain, time lost from work) by a sufficiently wide margin that the procedure is worth doing, exclusive of cost" (Brook et al., 1986; Park et al., 1986).
An assumption when assessing appropriateness is that the ordering health care provider has not yet determined whether a radiological procedure is clinically useful for the specific situation. The expert panel may recommend no radiological procedure as being appropriate for a specific clinical scenario. In those instances where more than one radiological procedure may be appropriate, the expert panel will provide additional guidance or clarification of the issues.
The ACR AC methodology is based on the RAND/UCLA Appropriateness Method. The appropriateness ratings for each procedure or treatment included in the AC topics are assessed using a modified Delphi method. An initial survey is conducted to elicit each panelist’s expert interpretation of the evidence based on the available data regarding the appropriateness of an imaging or therapeutic procedure for a specific clinical scenario. The expert panel members review the evidence presented and assess the risks or harms of doing the procedure balanced against the benefits of performing the procedure. The direct or indirect costs of a procedure are not considered when determining appropriateness (additional assumptions regarding rating appropriateness can be found in the document Rating Round Information). When the evidence for a specific topic and variant is uncertain or incomplete, expert opinion may supplement the available evidence or may be the only means for assessing appropriateness.
Appropriateness is represented on an ordinal scale using integers from 1 to 9, grouped into three categories. Ratings of 1, 2, or 3 fall in the “usually not appropriate” category, where the harms of doing the procedure outweigh the benefits. Ratings of 7, 8, or 9 fall in the “usually appropriate” category, where the benefits of doing the procedure outweigh the harms or risks. The middle category, “may be appropriate,” is represented by ratings of 4, 5, or 6 and applies when the risks and benefits are equivocal or unclear, the dispersion of the individual ratings from the group median rating is too large (i.e., disagreement), the evidence is contradictory or unclear, or special circumstances or subpopulations could influence the risks or benefits embedded in the variant.
The ratings assigned by each panel member are presented in a table displaying the frequency distribution of the ratings without identifying which members provided any particular rating. To determine the panel’s recommendation, the rating category that contains the median group rating without disagreement is selected. This may be determined after either the first or second rating round. If there is disagreement after the first rating round, a conference call is scheduled to discuss the evidence and, if needed, clarify the variant or procedure description. If there is still disagreement after the second rating round, the appropriateness rating is “may be appropriate.”
This method enables each panelist to articulate his or her individual interpretations of the evidence or expert opinion without excessive influence from fellow panelists in a simple, standardized and economical process. For additional information on the ratings process, see the Rating Round Information document.
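The category-selection logic described above (map ratings 1–3, 4–6, and 7–9 to the three categories; take the category containing the group median unless there is disagreement, in which case the result is “may be appropriate”) can be sketched in code. This is an illustrative sketch only: the disagreement rule below, at least one-third of ratings falling in each extreme category, is an assumed stand-in for the RAND/UCLA dispersion criterion, not the ACR’s exact published rule.

```python
import statistics

def rating_category(rating):
    """Map a 1-9 rating to its ACR AC appropriateness category."""
    if rating <= 3:
        return "usually not appropriate"
    if rating <= 6:
        return "may be appropriate"
    return "usually appropriate"

def panel_recommendation(ratings):
    """Return the panel's category from the individual panelist ratings.

    The disagreement test here is an assumption for illustration:
    the panel is treated as disagreeing when at least one-third of
    ratings fall in each extreme category (1-3 and 7-9).
    """
    n = len(ratings)
    low = sum(1 for r in ratings if r <= 3)
    high = sum(1 for r in ratings if r >= 7)
    if low >= n / 3 and high >= n / 3:
        # Persistent disagreement resolves to the middle category.
        return "may be appropriate"
    # Otherwise, use the category containing the group median rating.
    return rating_category(statistics.median(ratings))
```

For example, ratings of [1, 2, 2, 8, 9, 9] split between the two extremes and would resolve to “may be appropriate” under this sketch, while [7, 8, 8, 8, 9] would yield “usually appropriate.”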
ACR is creating online public synopses to help patients better understand the recommendations. The patient committee drafting the synopses can share its unique perspectives with the panel during the development process.
ACR’s online feedback form is available to any registered AC user. Individuals can submit general comments about the AC, such as suggested topics, new evidence, or questions about methodology. The form also allows detailed comments that can be linked to a panel, topic, variant, or a specific procedure for a variant. This linking allows ACR to respond better to specific issues.
ACR is working towards a more formal external review process to better manage solicitations for public comments.