AI algorithms are coming to radiology, but how long will it be until they're integrated into workflows?
The rise of deep learning artificial intelligence (AI) in today's society has been swift; nearly every industry has been impacted for the better in some way by the innovative technology. That includes health care, where AI has begun to transform the ways that hospitals, physicians, and patients interact. However, while there is interesting research that will further escalate AI adoption over the next few years, Paras Lakhani, MD, an assistant professor in the department of radiology at Thomas Jefferson University Hospital in Philadelphia, says that there are still many unknowns about the technology.
"For example, we don't know the best ways to integrate these algorithms into routine workflow, and [it is difficult to generalize workflow] at multiple institutions and with multiple vendors and variations in image processing," Lakhani says. "At Thomas Jefferson University Hospital, we are interested mainly in applied research, both in creation and deployment of AI algorithms, as well as consumption of algorithms produced by others."
AI technology is rapidly evolving, however, especially in the radiology field, where the interpretation process is heavily dependent on image-based findings. Elad Benjamin, cofounder and CEO of Zebra Medical Vision, says AI is beginning to provide tools that can have a real impact on automated reading of radiology images. "There is real underlying technology, which is fundamentally better than previous generations," Benjamin says. "Over the next few years, it will change how radiology data are viewed, analyzed, and treated."
Automated Assistants
Examples in which deep learning has been shown to be effective include the analysis of chest radiographs, fracture detection, and mammography. But deep learning could be widely used for many study types, assuming there are enough high-quality, labeled data to train the algorithm.
Daniel H. Kim, MSc, FRCR, MBChB, a radiologist in the South West of England, says there has been a lot of buzz around AI in DR (digital radiography), and the recent release of huge data sets, including around 100,000 chest radiographs, is a huge plus.
"AI technology is ready, but access to well-curated data is the limiting factor at present. The release of more large data sets will be a major boost," Kim says. "Deep learning is excellent for pattern recognition. Plain radiographs are an ideal target for this kind of AI because some studies tend to be fairly repetitive and often involve only a single image. In the near term, AI in DR will be especially useful in situations where there is no access to an expert radiology opinion."
Kim recently published an article about using deep learning for automated fracture detection.
"We achieved some excellent results using transfer learning," Kim says. "I am now beginning some work in cancer detection and characterization, which is exciting and has the potential to reduce diagnostic delay, avoid unnecessary treatment, and improve management decisions."
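The transfer learning Kim mentions typically means keeping a pretrained feature extractor frozen and training only a small, task-specific classification head on top. The sketch below illustrates that split with NumPy; everything in it is a stand-in, not Kim's actual method: a fixed random projection plays the role of the frozen pretrained backbone, and the "radiographs" and fracture labels are synthetic.

```python
import numpy as np

rng = np.random.default_rng(0)

# Frozen "backbone": in real transfer learning this would be a deep
# network pretrained on a large image corpus; a fixed random projection
# stands in for it here (purely illustrative).
backbone = rng.standard_normal((64, 64)) / 8.0

def extract_features(images):
    # The backbone is never updated; only the head below is trained.
    return np.tanh(images @ backbone)

# Synthetic stand-in for labeled radiographs: 200 flattened 8x8
# "images", labeled fracture (1) vs no fracture (0) by a simple rule.
X = rng.standard_normal((200, 64))
y = (X[:, 0] + X[:, 1] > 0).astype(float)

# Trainable head: logistic regression on the frozen features.
feats = extract_features(X)
w = np.zeros(64)
b = 0.0
lr = 0.5
for _ in range(300):
    p = 1.0 / (1.0 + np.exp(-(feats @ w + b)))  # sigmoid
    w -= lr * feats.T @ (p - y) / len(y)        # cross-entropy gradient
    b -= lr * np.mean(p - y)

pred = (feats @ w + b) > 0
accuracy = np.mean(pred == (y == 1))
print(f"training accuracy: {accuracy:.2f}")
```

Because only the small head is trained, this approach needs far fewer labeled cases than training a full network from scratch, which is what makes it attractive for medical imaging, where curated labels are scarce.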
It's generally agreed that AI is very good at mapping one input to another. For DR, use cases could include improving image quality, such as reducing artifacts; recognizing poor-quality images, eg, inadequate coverage, motion artifacts, patient rotation, and penetration; detecting abnormalities; classifying lesions; and reducing dose.
"Right now, this is still in a stage of development," Lakhani says. "I think you'll start to see the initial work being done in screening mammography."
In November, Lunit, an AI-powered medical image analysis software company, introduced Lunit INSIGHT, a cloud-based AI solution for real-time image analysis. Lunit INSIGHT for chest radiography covers lung nodules/mass; consolidation, eg, pneumonia and tuberculosis; and pneumothorax. For mammography, it detects breast cancer lesions in mammograms.
Brandon Suh, MD, MPH, CMO of Lunit, says that AI is very good at pattern recognition such as detecting abnormal lesions in images. He believes AI algorithms will augment the interpretation capabilities of radiologists, enabling them to detect and classify lesions they would otherwise have missed.
"At first, I think most products will be dealing with increasing the efficiency for radiologists, to do what they do faster, by optimizing the workflow or detecting lesions. These products require the least amount of work for regulatory approval, so it is a good first step for the companies," Suh says. "Increasingly, there will be products that are more deeply involved in the interpretation process, namely classification of lesions, rather than simple detection."
Zebra Medical Vision has developed a broad portfolio of automated tools that assist radiologists in creating more complete, accurate, and consistent reports, including multiple tools that automatically help physicians identify bone, lung, liver, and cardiovascular disease.
"AI can help bridge the gap between the growing demand for radiology services and the supply of qualified radiologists," Benjamin says. "Automated assistants can provide preliminary reads, taking some of the analysis burden off physicians. They can also assess numerous findings, which are difficult for physicians to view or quantify, further helping provide better patient care."
Current Limitations
It's important to understand that AI still cannot make decisions on its own. "The AI capabilities we have today are still at the level of helping to identify findings in an easier, faster, and more cost-effective way," Benjamin says. "However, decision making will come as well; it's just a matter of time."
The performance of AI algorithms is limited by the data they are trained on, Suh notes. If an algorithm is exposed only to "easy" cases, it will not perform well on "difficult" cases. In other words, it is difficult for AI algorithms to outperform the data they are exposed to.
"This is because it performs well with 'similar' patterns or features it has been exposed to in the training process, so it struggles with completely new patterns," he says. "In order to overcome this, you need solid, ground-truth data—CT data or biopsy data—that goes beyond the level of human vision. Large-scale data are also very important to expose the AI algorithms to a variety of patterns."
Kim says that, at the moment, AI in medical imaging is good at "closed world" problems where there is a defined, often binary, outcome—for example, the presence or absence of a fracture.
"AI struggles when faced with complex problem solving [or] applying general principles more broadly and when faced with large numbers of possible outcomes," Kim says. "A daily inpatient CT list is a good example of something far beyond the realms of AI in the near future."
Lakhani notes it may take longer before people see an AI system that can generalize as well as a trained physician. "Radiologists do much more than look at images, including communication of findings, being part of the decision-making process on findings that should be actionable, correlating with medical history, and providing information relevant to the care team," he says. "However, you could surmise that an AI system that understands everything—not just the images, but all the medical records for that patient—could do more than we realize."
When it comes to AI software, Lakhani feels that more standards need to be in place. "Much of what you see in research is retrospective analysis on cases held out of training," he says. "Ideally, I think seeing performance across multiple sites or at least multiple vendors would be important to ensure generalizability."
Kim says that models derived from deep learning can be validated by testing the accuracy of the model as a diagnostic test against a new set of images that were not used in the training process. "For these AI methods to be broadly adopted in clinical practice, they will require robust testing for repeatability and reproducibility, they will need to undergo rigorous safety testing, they will need to be validated in prospective clinical trials, and many will require CE or FDA approval," he says.
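The validation Kim describes, testing the model as a diagnostic test against images held out of training, can be sketched in a few lines. Everything below is synthetic and illustrative: one score per case from a hypothetical detector, an operating threshold chosen on the training cases only, and sensitivity and specificity then measured on cases the model never saw.

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic stand-in for a labeled imaging data set: one score per case
# from a hypothetical detection model, higher for truly positive cases.
n = 1000
y_true = rng.integers(0, 2, n)                # 1 = abnormality present
scores = y_true * 1.5 + rng.standard_normal(n)

# Hold out a test set that plays no part in choosing the threshold.
train, test = np.arange(0, 700), np.arange(700, n)

# "Training": pick the operating threshold on the training cases only.
thresholds = np.linspace(scores[train].min(), scores[train].max(), 200)
accs = [np.mean((scores[train] > t) == (y_true[train] == 1))
        for t in thresholds]
threshold = thresholds[int(np.argmax(accs))]

# Validate as a diagnostic test on the unseen cases.
pred = scores[test] > threshold
tp = np.sum(pred & (y_true[test] == 1))
tn = np.sum(~pred & (y_true[test] == 0))
sensitivity = tp / np.sum(y_true[test] == 1)
specificity = tn / np.sum(y_true[test] == 0)
print(f"sensitivity={sensitivity:.2f} specificity={specificity:.2f}")
```

Reporting sensitivity and specificity on the held-out cases, rather than accuracy on the training cases, is what guards against the overfitting that makes retrospective results look better than prospective ones.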
A New Set of Tools
According to Lakhani, there may soon be classification algorithms that detect the presence or absence of an abnormality with an associated probability. There will then be various visualization tools such as heat maps or bounding boxes that assist the physician in read-outs.
"Over time, we may see tools that offer predictions for pathology, eg, for tumors; recommendations for next action/treatment steps; and even predictions for mortality," he says. "However, when we will see this is uncertain. This could take five to 10 years or more, or it could come sooner than we realize."
Thomas Jefferson University has been doing applied research on AI for almost two years, learning much about the technology. The researchers plan to openly disseminate and discuss information, especially in this early stage, so they can ensure that best practices are put forward.
Benjamin expects radiology to leap forward in its capabilities over the next few years. He sees the potential for the specialty to be a guide for how medicine can be practiced, with the right combination of automated assistants and humans, to give patients the best care possible. And while some are asking, "When will radiologists be replaced by software?" Benjamin notes that people should really be asking, "Will radiologists who don't embrace new technologies and tools to help them be more successful be replaced by those who do embrace them?"
Suh foresees the scope of AI algorithms, as they become more advanced, going from detection to diagnosis to prediction. He thinks they will first cover specific narrow areas and later be able to cover a wide range of diseases.
"Within a few years, I believe there will be many products that will be launched in the market that will be used in routine clinical workflow," Suh says. "We will have to see how they are received by the end users, whether it is justified hype or a bubble. Only time will tell, but I think, eventually, the technology will catch up with the expectations."
Kim predicts that AI will increasingly become a part of everyday clinical practice, in radiology and other specialties. "There will be a synergistic relationship between AI systems and expert human analysts," he says. "Computers are good at reliably completing large numbers of repetitive tasks in a short space of time. Humans are better at applying broad clinical principles to solving new and complex problems. This combination is likely to yield significant improvements in patient care."
Many people in the industry agree that this is an exciting and rapidly evolving time for AI and medical imaging. And while some radiologists view AI as a threat and adopt a defensive position on the subject, Kim feels that this is the wrong attitude to have. He sees AI as an opportunity to make significant improvements in many areas, such as cancer care.
"Radiologists have a moral duty to embrace any new technology that can lead to improved patient care," Kim says. "Moreover, the fear is likely to be unwarranted. It is likely that AI and radiologists will work together in a synergistic partnership. Radiologists should take ownership and help to promote this new technology."