Saturday, May 18, 2024

Facebook and NYU trained an AI to predict COVID outcomes

“COVID is a unique virus,” Dr. William Moore of NYU Langone Health told Engadget. Most viruses attack the respiratory bronchioles, resulting in a pneumonia-like area of increased density, he explained. “But what you usually won’t see is a tremendous amount of cloudy density.” Yet that’s exactly what doctors discovered in COVID patients. “They will have an increased density which appears to be an inflammatory process of pneumonia rather than a typical bacterial pneumonia, which is a denser area and in a specific location. [COVID] appears to be bilateral; it appears to be somewhat symmetrical.”

When the epidemic first hit New York City, “we started trying to figure out what to do, how we could actually help manage patients,” Moore continued. “So there were two or three things going on: There was a huge number of patients coming in, and we had to find ways to predict what was going to happen [to them].”

To do this, the NYU-FAIR team started with chest x-rays. As Moore notes, x-rays are performed routinely, essentially whenever a patient complains of shortness of breath or other symptoms of respiratory distress, and are available everywhere from rural community hospitals to large metropolitan medical centers. The team then developed a series of metrics to measure complications as well as patient progression from ICU admission to ventilation, intubation and potential mortality.

“This is another clearly demonstrable metric we could use,” Moore said of patient deaths. “Then we were like, ‘Okay, let’s see what we can use to predict that,’ and of course the chest x-ray was one of the things that we thought was very important.”

Once the team established the necessary metrics, they set about training the AI/ML model. However, this raised another challenge. “Because the disease is new and its progression is not linear,” Facebook AI program manager Nafissa Yakubova, who previously helped NYU develop faster MRI scans, told Engadget, “it’s hard to make predictions, especially long-term predictions.”

In addition, at the start of the epidemic, “we did not have COVID datasets; above all, there were no labeled datasets [for use in training an ML model],” she continued. “And the size of the datasets was also quite small.”

Scott Olson via Getty Images

So the team did the next best thing: they “pre-trained” their model using larger publicly available chest x-ray databases, in particular MIMIC-CXR-JPG and CheXpert, using a self-supervised learning technique called Momentum Contrast (MoCo).
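In broad strokes, MoCo learns without labels by matching each image's "query" embedding against a matching "key" embedding while pushing it away from a queue of negatives, and the key encoder trails the query encoder via a momentum (moving-average) update. The sketch below is a toy NumPy illustration of that loss and update, with linear encoders and random data standing in for the actual ResNet-and-X-ray setup; it is not FAIR's implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

def encode(x, w):
    """Toy linear encoder that returns L2-normalised embeddings."""
    z = x @ w
    return z / np.linalg.norm(z, axis=1, keepdims=True)

def info_nce(q, k_pos, queue, tau=0.07):
    """InfoNCE loss: each query should match its own key, not the queue."""
    pos = np.sum(q * k_pos, axis=1, keepdims=True)       # positive logits
    logits = np.concatenate([pos, q @ queue.T], axis=1) / tau
    logits -= logits.max(axis=1, keepdims=True)          # numerical stability
    log_prob = logits[:, 0] - np.log(np.exp(logits).sum(axis=1))
    return -log_prob.mean()

def momentum_update(w_q, w_k, m=0.999):
    """Key encoder follows the query encoder via a moving average."""
    return m * w_k + (1.0 - m) * w_q

# Query encoder is trained by gradient descent; the key encoder only
# ever changes through the momentum update.
w_q = rng.normal(size=(8, 4))
w_k = w_q.copy()

x = rng.normal(size=(16, 8))                    # a batch of "images"
queue = encode(rng.normal(size=(64, 8)), w_k)   # dictionary of negative keys
loss = info_nce(encode(x, w_q), encode(x, w_k), queue)
w_k = momentum_update(w_q, w_k)
```

The large, slowly evolving queue of negatives is what lets MoCo learn useful representations from unlabeled scans before any COVID-specific labels enter the picture.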

Fundamentally, as Towards Data Science’s Dipam Vasani explains, when you train an AI to recognize specific things – for example, dogs – the model has to develop this ability through a series of steps: first recognizing lines, then basic geometric shapes, then more detailed patterns, before being able to distinguish a Husky from a Border Collie. The FAIR-NYU team took the first steps of their model and pre-trained them on the larger public datasets, then went back and refined the model using the smaller COVID-specific dataset. “We don’t make the diagnosis of COVID – whether you have COVID or not – on the basis of an x-ray,” Yakubova said. “We’re trying to predict the progression of its severity.”
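In practice, that pre-train-then-refine recipe amounts to freezing the early, general-purpose layers and retraining only a small task-specific head on the scarce labeled data. Here is a minimal NumPy sketch of the idea, where a single frozen linear map and synthetic data are our stand-ins for the pretrained backbone and the COVID dataset, not FAIR's actual architecture:

```python
import numpy as np

rng = np.random.default_rng(1)

# Frozen "early layers": a stand-in for features pretrained on the
# large public datasets (MIMIC-CXR-JPG / CheXpert).
backbone = rng.normal(size=(16, 8)) / np.sqrt(16)

# Small labelled set: synthetic stand-in for COVID-specific data.
X = rng.normal(size=(32, 16))
y = rng.normal(size=(32, 1))

feats = X @ backbone       # backbone is frozen: computed once, never updated
head = np.zeros((8, 1))    # only this task head is trained from scratch

def mse(h):
    return float(np.mean((feats @ h - y) ** 2))

loss_before = mse(head)
for _ in range(200):       # plain gradient descent on the head alone
    grad = feats.T @ (feats @ head - y) / len(X)
    head -= 0.1 * grad
loss_after = mse(head)
```

Because only the small head is optimized, this final stage needs orders of magnitude less data and compute than the pre-training, which is exactly what makes it feasible for individual hospitals.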

“The key here, I think, was… to use a series of images,” she continued. When a patient is admitted, the hospital will take an x-ray and then probably take more over the next few days, “so you have this time series of images, which was essential to have a more accurate prediction.” When fully trained, the FAIR-NYU model managed an accuracy of around 75 percent – on par with, and in some cases exceeding, the performance of human radiologists.
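One simple way to exploit such a time series, purely as an illustration (FAIR's published model is more sophisticated), is to feed a classifier both the latest scan's embedding and the change since admission, so the trend across days carries signal a single image lacks:

```python
import numpy as np

rng = np.random.default_rng(2)

def predict_progression(embeddings):
    """Toy severity score from a time series of per-X-ray embeddings.

    Combines the most recent state with the trend since the first scan.
    The random weight vector stands in for trained parameters; this is
    our illustration, not FAIR's architecture.
    """
    latest = embeddings[-1]
    trend = embeddings[-1] - embeddings[0]
    features = np.concatenate([latest, trend])
    w = rng.normal(size=features.shape)              # stand-in weights
    return float(1 / (1 + np.exp(-features @ w)))    # score in (0, 1)

series = rng.normal(size=(3, 4))  # three scans, 4-d embeddings each
score = predict_progression(series)
```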

Radiologists in the radiology department of the Maggiore hospital in Cremona look at CT scans of the lungs of patients with COVID-19. (Photo by Nicola Marfisi / AGF / Universal Images Group via Getty Images)

AGF via Getty Images

It’s a smart solution for a number of reasons. First, the initial pre-training is extremely resource-intensive – to the point that it is simply not possible for individual hospitals and health centers to do it alone. But using this method, massive organizations like Facebook can and will develop the initial model and then provide it to hospitals as open-source code; those healthcare providers can then complete the training using their own datasets and a single GPU.

Second, since the initial models are trained on generalized chest x-rays rather than COVID-specific data, these models could – in theory at least; FAIR has not yet tried it – be adapted to other respiratory diseases by swapping out just the data used for fine-tuning. This would allow healthcare providers not only to model a given disease, but also to adapt that model to their specific locality and situation.

“I think that’s one of the really amazing things the team has done from Facebook,” Moore concluded, “is take something that’s a great resource – the CheXpert and MIMIC databases – and apply it to a new and emerging disease process that we knew very little about when we first started doing it, in March and April.”

