An AI Assist for Spotting COVID-19 in X Rays
Chest x rays offer a quick screening method for lung problems: A collapsed lung, the buildup of excess fluid, or swollen tissue are all recognizable by radiologists in these black and white images. Doctors also use the images to help quickly diagnose diseases, such as pneumonia. Making x-ray screening of COVID-19 similarly speedy would benefit overrun hospitals, and the developers of artificial intelligence (AI) algorithms are hoping that they can help. But getting the data they need from hospitals and ensuring accuracy are challenges that they must first overcome.
“In places like New York, where there was this huge explosion of [COVID-19] patients, they’re already taking chest x rays alongside viral testing. So why not have a greater immediate impact by building AI [software] to help screen through all those images?” says Alexander Wong, who works on medical image processing problems using AI at the University of Waterloo, Canada.
The majority of people who get COVID-19 develop symptoms including a dry cough, a fever, and tiredness. Some also develop a severe lung infection that leaves them gasping for air. Pneumonias caused by bacteria or other viruses cause similar symptoms, and differentiating these illnesses is tricky without laboratory tests. For COVID-19, test kits are still in short supply, and the results can take days to weeks to come back. So doctors are turning to x-ray machines for help, though the CDC currently recommends against making a diagnosis based entirely on an x ray. The machines, which are available in most hospitals and are easy to sanitize, can help reduce diagnosis times to minutes, allowing doctors to quickly isolate and treat patients.
Researchers hope that AI algorithms could give doctors an edge by enabling them to better distinguish COVID-19 from other diseases. “We know that specialists are already overloaded, so these kinds of quick screening tools could be incredibly useful,” says Marzyeh Ghassemi, a computer scientist who uses AI algorithms to study medical problems. Ghassemi, who works at Canada’s Vector Institute and at the University of Toronto, previously trained algorithms to learn the features of pneumonia-infected lungs from x rays and is currently retraining her algorithms to do the same for COVID-19. She thinks that AI will be particularly helpful if COVID-19 becomes cyclical, like the flu. “It's going to be very valuable to be able to say, wait a minute, this is not [normal] pneumonia, this is COVID again,” she says.
Like other AI image-recognition algorithms, those developed to interpret chest x rays learn by example. To train an algorithm, scientists give it many different chest x rays from people already diagnosed with a disease or lung problem. The algorithm then learns to recognize the traits of the illness in the images. Lungs infected with bacterial pneumonia, for example, typically show patches of increased opacity. Once trained, the algorithm can identify the same problem in x rays from undiagnosed patients without any input from the user.
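The learn-by-example process described above can be sketched in miniature. The toy code below is not any group's actual model: it uses small synthetic arrays in place of real x rays and a simple nearest-template rule in place of a neural network, purely to illustrate the train-then-screen workflow.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-ins for x rays: 8x8 grayscale arrays. "Diseased"
# images get a bright patch (mimicking an opacity); "normal" images
# are plain noise. All data here is made up for illustration.
def make_image(has_patch):
    img = rng.normal(0.2, 0.05, size=(8, 8))
    if has_patch:
        img[2:5, 2:5] += 0.5  # bright patch standing in for an opacity
    return img

# "Training": from labeled examples, build one average template per
# class (label 1 = diseased, label 0 = normal).
train = [(make_image(True), 1) for _ in range(50)] + \
        [(make_image(False), 0) for _ in range(50)]
templates = {
    label: np.mean([img for img, lbl in train if lbl == label], axis=0)
    for label in (0, 1)
}

# "Screening": assign an undiagnosed image to the class whose
# template it most closely resembles, with no user input.
def classify(img):
    return min(templates, key=lambda lbl: np.linalg.norm(img - templates[lbl]))
```

A real chest-x-ray classifier replaces the template matching with a deep network trained on thousands of images, but the pipeline, labeled examples in, automatic decisions out, is the same.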
The method has been shown to work for pneumonia and for pneumothorax (where air leaks out of a lung). But this success rests on having trained the algorithms on the thousands of pre-diagnosed x-ray images housed in online repositories. For COVID-19-positive patients, however, only a few hundred x rays are publicly available, making it hard to train the algorithms with any degree of robustness.
Many of those COVID-19 x rays lie in a repository put together by Joseph Paul Cohen, at the University of Montreal. Cohen helped develop an AI tool called Chester that came online last year and that can distinguish between different lung problems, such as pneumonia or a lung hernia. When COVID-19 hit, he wanted to augment Chester but quickly came up against a wall because hospitals didn’t want to share their data. “It’s just a nightmare to get images,” he says.
Instead, Cohen and his colleagues turned to journals. “The one avenue where the whole hospital gets behind making data public is to put a figure in a paper,” he says. By scraping x-ray images of COVID-19 patients from publications, and also by collecting those made available by groups including the European Society of Radiology, the team has collected 226 images, along with relevant clinical data, for patients with COVID-19.
Cohen published a paper with the initial dataset on the arXiv preprint server just over a month ago, and the paper has already been cited nearly 40 times by groups developing x-ray screening AI algorithms. One of those groups includes Rodolfo Pereira at the Pontifical Catholic University of Paraná in Brazil. Pereira normally develops AI algorithms for classifying music, but he turned his hand to studying x rays when COVID-19 hit. He and his colleagues have performed over 5000 “experiments” to test a number of different AI x-ray screening methods. “We pretty much threw everything we could at the problem,” he says.
Their results indicate that their algorithms can distinguish between different types of pneumonia, including that caused by COVID-19. But team member Lucas Teixeira from the State University of Maringá, Brazil, offers caution. “[The algorithms] aren’t perfect,” he says. “They work with [Cohen’s] dataset, but we don’t know how they will work with other datasets.”
Every researcher interviewed for this story emphasized questions about the applicability and reliability of chest-x-ray algorithms. The way an AI algorithm interprets an x-ray scan can depend on the machine used to take the image, whether the person was lying down or standing up, or the hospital where the x ray was taken. “There are all sorts of biases that can trip you up,” says Alistair Johnson at the Massachusetts Institute of Technology, who helped to build MIMIC-CXR, an online repository that contains nearly 400,000 (non-COVID-19) x-ray images.
But methods exist to mitigate these problems, says Wong, who, along with his team, has developed a research-grade AI tool for analyzing chest x rays. One possibility is implementing “explainable” algorithms, where the decision-making process is visible to the user, so that issues can be quickly pinpointed and fixed. For example, using explainable AI, Wong realized that one of his team’s test algorithms “cheated” when making diagnoses by reading metadata labels in the image. The researchers therefore wiped the labels from the x rays. Explainable algorithms also benefit clinicians by providing the rationale behind a diagnosis, he says. “At the end of the day what we are trying to build is not about replacing doctors, it’s about augmenting the decision making [process] to make it better and faster.”
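One common way to make an image classifier's reasoning visible, and to catch "cheating" of the kind Wong describes, is occlusion sensitivity: mask each region of the image in turn and see how much the model's output changes. The sketch below is a generic illustration of that technique, not Wong's tool; the scoring function is a made-up stand-in for a trained model.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy stand-in for a trained model: scores an 8x8 "x ray" by its mean
# brightness. A real model would be a trained neural network.
def score(img):
    return float(img.mean())

# Synthetic image with a bright patch that the "model" responds to.
img = rng.normal(0.2, 0.05, size=(8, 8))
img[2:5, 2:5] += 0.5

# Occlusion sensitivity: zero out each small window in turn and record
# how much the score drops. Big drops mark the regions driving the
# decision; a hot spot on a corner text label would expose cheating.
def occlusion_map(img, patch=2):
    base = score(img)
    heat = np.zeros((img.shape[0] - patch + 1, img.shape[1] - patch + 1))
    for i in range(heat.shape[0]):
        for j in range(heat.shape[1]):
            masked = img.copy()
            masked[i:i + patch, j:j + patch] = 0.0
            heat[i, j] = base - score(masked)
    return heat

heat = occlusion_map(img)
i, j = np.unravel_index(np.argmax(heat), heat.shape)
```

Here the hottest cell of the map lands inside the bright patch, showing that the decision was driven by the "lesion" rather than by, say, a metadata label in the image corner.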
Katherine Wright is a Senior Editor for Physics.