What AI “App Stores” Will Mean for Radiology

Recent studies show that artificial intelligence algorithms can help radiologists improve the speed and accuracy of interpreting X-rays, CT scans, and other types of diagnostic images. Putting the technology into everyday clinical use, however, is challenging because of the complexities of development, testing, and obtaining regulatory approval. But a concept adapted from the world of PCs and smartphones – the app store – shows promise as a tool for bringing radiology AI from trials into day-to-day practice.

Radiology AI marketplaces are conceptually similar to app stores in that they enable discovery, distribution, and monetization of “apps,” or in this case AI models, and provide a feedback channel between users and developers. Where these marketplaces differ from a conventional app store is in how they support the lifecycle requirements for developing, training, obtaining regulatory approval for, deploying, and validating AI models. Our company, Nuance, introduced its AI marketplace for diagnostic imaging in 2017; several others, including Arterys and GE Healthcare with its Edison platform, have also launched AI marketplaces.

Making AI Practical for Healthcare

Radiology algorithms focus narrowly on a single finding on images from a single imaging modality, for example lung nodules on a chest CT. While this can improve diagnostic speed and accuracy in specific cases, the bottom line is that an algorithm can answer only one question at a time. Because there are many types of images and thousands of potential findings and diagnoses, each would require a purpose-built algorithm. In contrast, a radiologist considers myriad questions and findings at once for every imaging exam, as well as incidental findings unrelated to the original reason for the exam, which are quite common.

Accordingly, to fully support just the diagnostic part of radiologists’ work, developers would need to create, train, test, seek FDA clearance for, distribute, support, and update thousands of algorithms. And healthcare organizations and doctors would need to find, evaluate, purchase, and deploy numerous algorithms from many developers, then incorporate them into existing workflows.

Compounding the challenge is deep-learning models’ voracious demand for data. Most models have been developed in controlled settings using available, and often narrow, data sets — and the results algorithms produce are only as robust as the data used to develop them. AI models can be brittle, working well with data from the environment in which they were developed but faltering when applied to data generated at other locations with different patient populations, imaging machines, and techniques. For example, in a November 2018 study published in PLOS Medicine, researchers at the Icahn School of Medicine and other institutions showed that the performance of a deep learning model used to diagnose pneumonia on chest X-rays was significantly lower when it was used to evaluate X-rays from other hospitals.
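To make that brittleness concrete, here is a minimal sketch in Python (purely illustrative: synthetic feature data stands in for real images, and a simple logistic-regression classifier stands in for any particular product). It trains a model on data from one simulated site and evaluates it on a second site whose data distribution has shifted; the resulting drop in AUC mirrors the kind of degradation the PLOS Medicine study observed.

# Illustrative only: synthetic "sites" with different data distributions.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)

def make_site(n, noise=1.0):
    """Hypothetical image-derived features and labels for one site.
    A larger 'noise' value mimics different scanners, protocols, and patients."""
    y = rng.integers(0, 2, size=n)                       # 0 = normal, 1 = finding present
    X = rng.normal(loc=y[:, None], scale=noise, size=(n, 5))
    return X, y

X_train, y_train = make_site(2000)                  # development site
X_internal, y_internal = make_site(500)             # held-out data from the same site
X_external, y_external = make_site(500, noise=2.5)  # a different hospital

model = LogisticRegression().fit(X_train, y_train)
print("Internal AUC:", roc_auc_score(y_internal, model.predict_proba(X_internal)[:, 1]))
print("External AUC:", roc_auc_score(y_external, model.predict_proba(X_external)[:, 1]))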

The Marketplace in Action

AI marketplaces can help tackle these complex challenges. First, they solve the “last mile” problem by giving physicians and hospital systems one-stop access to a wide range of AI models. AI developers in turn gain storefront access to a consolidated market and scalable revenue model.

The marketplace model also includes a built-in feedback channel between developers and users to bridge the gap between the technical functionality of algorithms and how they are used in everyday practice. Radiologists can share results with app developers, allowing them to iteratively refine the algorithms using annotated, real-world data. Developers also gain valuable validation data to support FDA clearance. Hospital systems benefit too, as they can track metrics for algorithm usage, costs, and performance across multiple locations.

AI marketplaces are already creating collaborative communities of healthcare developers and users. For example, the University of Rochester is using an FDA-cleared application developed by Aidoc that analyzes CT exams for a suspected intracranial hemorrhage, then prioritizes them on the radiologist’s worklist for immediate attention when time-to-treatment is critical. For longer-term patient care, the University of Pennsylvania is using an application developed by Aidence and eUnity to assist radiologists in the time-consuming task of detecting and characterizing lung nodules for making follow-up comparisons and reporting.
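As a rough sketch of how such triage works (hypothetical data structures, not any vendor’s actual interface), the Python snippet below moves exams flagged by a detection model to the top of the reading worklist while preserving arrival order for the rest.

from dataclasses import dataclass
from datetime import datetime

@dataclass
class Exam:
    accession: str
    arrived: datetime
    ai_flag: bool = False   # e.g., suspected intracranial hemorrhage

def prioritize(worklist):
    """Flagged exams first; within each group, preserve arrival order."""
    return sorted(worklist, key=lambda e: (not e.ai_flag, e.arrived))

worklist = [
    Exam("CT-1001", datetime(2019, 5, 1, 8, 0)),
    Exam("CT-1002", datetime(2019, 5, 1, 8, 5), ai_flag=True),
    Exam("CT-1003", datetime(2019, 5, 1, 8, 10)),
]
for exam in prioritize(worklist):
    print(exam.accession, "FLAGGED" if exam.ai_flag else "routine")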

Marketplace feedback further supports a number of public-private partnerships to encourage and improve AI development. These include the FDA’s Software Precertification Program (SPP), which works for AI developers much as the TSA PreCheck program does for registered travelers. The SPP evaluates developers’ ability to respond to real-world algorithm performance and provides a more efficient way to bring new AI models to market. The FDA’s National Evaluation System for health Technology (NEST) is working with the American College of Radiology Data Science Institute (DSI) to create ways to validate algorithms and monitor their performance. That collaboration uses the DSI’s Lung-RADS program to validate and monitor algorithms for detecting and classifying lung nodules in lung cancer screening programs. The goal is to build a national registry of algorithm performance metrics that gauge everything from ease of integration into workflows to diagnostic accuracy.

Addressing Burnout

While AI marketplaces should foster widespread adoption of AI in radiology, they also have the potential to help alleviate radiologist burnout by augmenting and assisting radiologists in two ways. The first, through the iterative development process, is facilitating the design of algorithms that integrate seamlessly into radiologists’ workflows and simplify them, rather than introducing additional, burdensome screens and steps to click through.

The second is by improving the speed and quality of radiology reporting. These algorithms can automate repetitive tasks and act as virtual residents, pre-processing images to highlight potentially important findings, making measurements and comparisons, and automatically adding data and clinical intelligence to the report for the radiologist’s review. Algorithms can also provide quality checks, for example by detecting errors in laterality or patient sex, to ensure report accuracy and assist with billing and coding, all of which can reduce clinicians’ stress.
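As a toy illustration of such a quality check (hypothetical metadata fields and report text; a real system would read DICOM headers and use far more robust language processing), the sketch below flags mismatches between exam metadata and the dictated report, such as laterality or patient-sex errors.

def quality_check(metadata, report_text):
    """Return a list of potential report errors based on simple keyword rules."""
    issues = []
    text = report_text.lower()

    # Laterality check: a left-sided exam whose report mentions only the right side.
    laterality = metadata.get("laterality")
    if laterality == "L" and "right" in text and "left" not in text:
        issues.append("Possible laterality error: left-sided exam, report describes the right side.")
    if laterality == "R" and "left" in text and "right" not in text:
        issues.append("Possible laterality error: right-sided exam, report describes the left side.")

    # Patient-sex check: a sex-specific term that conflicts with the recorded sex.
    if metadata.get("patient_sex") == "M" and "uterus" in text:
        issues.append("Possible patient-sex error: male patient, report mentions the uterus.")

    return issues

metadata = {"laterality": "L", "patient_sex": "F"}
report = "Right knee MRI demonstrates a small joint effusion."
for issue in quality_check(metadata, report):
    print(issue)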

Improving Clinical Outcomes

By taking over routine tasks, adding quality checks, and enhancing diagnostic accuracy, AI algorithms can be expected to improve clinical outcomes. For example, an FDA-cleared model available from Densitas automatically assesses breast density on digital mammograms; dense breast tissue has been associated with an increased risk of breast cancer. By handling and standardizing that routine but important task, the algorithm helps direct radiologists’ attention to the patients at highest risk. In addition, in studies AI algorithms have proven equal to, and in some cases better than, the average radiologist at identifying breast cancer on screening mammograms.

As the population ages, the need for diagnostic radiology will surely increase. Meanwhile, radiology residency programs in the U.S. have only recently begun to reverse a multi-year decline in enrollments, raising the specter of a shortage of radiologists as the need for them grows. The recent emergence of AI marketplaces can accelerate the adoption of AI algorithms, helping to manage growing workloads while providing doctors with tools to improve diagnoses, treatments, and, ultimately, patient outcomes.
