ORLANDO -- Computer-assisted image analysis during colonoscopy could help clinicians better detect polyps and achieve adenoma detection rates (ADRs) that are closer to true adenoma prevalence, researchers reported here.
Using a convolutional neural network (CNN) model -- a type of deep learning that is especially effective at image classification -- researchers obtained an accuracy of 96% and an AUC (area under the ROC curve) of 0.99 when classifying polyp versus non-polyp images, reported William E. Karnes, MD, of the University of California Irvine, and colleagues at the World Congress of Gastroenterology at ACG2017.
Action Points
- Note that this study was published as an abstract and presented at a conference. These data and conclusions should be considered to be preliminary until published in a peer-reviewed journal.
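For readers unfamiliar with the reported metrics: accuracy is the fraction of images classified correctly at a fixed score threshold, while AUC measures how reliably the model ranks polyp images above non-polyp images across all thresholds (1.0 is perfect ranking, 0.5 is chance). A minimal sketch of both computations -- the labels and scores below are made up for illustration, not taken from the study:

```python
def accuracy(labels, scores, threshold=0.5):
    """Fraction of examples whose thresholded score matches the label."""
    preds = [1 if s >= threshold else 0 for s in scores]
    return sum(p == y for p, y in zip(preds, labels)) / len(labels)

def roc_auc(labels, scores):
    """AUC = probability a random positive scores higher than a random
    negative (ties count half)."""
    pos = [s for s, y in zip(scores, labels) if y == 1]
    neg = [s for s, y in zip(scores, labels) if y == 0]
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

# Toy data: 1 = polyp image, 0 = non-polyp image (scores are invented)
labels = [1, 1, 1, 0, 0, 0]
scores = [0.9, 0.8, 0.4, 0.6, 0.2, 0.1]
print(accuracy(labels, scores))  # 4 of 6 correct, ≈ 0.667
print(roc_auc(labels, scores))   # 8 of 9 pos/neg pairs ranked correctly, ≈ 0.889
```

A real evaluation would use a library routine such as scikit-learn's `roc_auc_score`, but the pairwise-ranking definition above is equivalent for small examples.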
Additionally, polyp detection and localization took 10 milliseconds per image -- nearly four times faster than the per-frame processing needed to keep pace with a live video stream, Karnes explained to Ƶ.
"Our deep learning methodology achieved extremely high accuracy for polyp detection in images and has proven applicable to live video," said Karnes. "Continuous learning by CNN has the potential to assist colonoscopy achieve higher ADRs, approaching true adenoma prevalence, and thereby reduce the risk of interval cancers by more than 80%."
Jonathan A. Leighton, MD, of Mayo Clinic Arizona, who wasn't involved in the study, said that while research on visual gaze patterns has been going on for the last few years, using computers to pick up abnormalities is relatively new.
"We go to medical school and we learn a lot of things, but vision is not something that everyone studies or analyzes. Sometimes if you have a polyp miss rate it might be just because it's hard to visualize the entire screen," he told Ƶ.
Karnes' technology currently detects polyps that colonoscopists probably wouldn't miss, but it would be extremely useful if it could pick up polyps, particularly in the right colon, that may be difficult for the human eye to see, Leighton said.
"If you could develop technology to do that, it would be huge," he said.
Karnes and colleagues searched their colonoscopy quality database -- Qualoscopy -- for natural images collected from screening and surveillance colonoscopies. The dataset included images of all portions of the colorectum, including retroviews in the rectum and cecum, the appendiceal orifice, the ileocecal valve, and features such as forceps, snares, melanosis coli, and diverticula.
The tool was pre-trained on the ImageNet data corpus, then trained and validated on 8,641 colon images: 4,088 contained polyps of all sizes and morphologies and 4,553 contained no polyps. They randomly selected 80% of images to train the model, while the remaining 20% were used to validate the model.
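The 80/20 partition described above can be sketched as a simple seeded random split -- the image IDs below are placeholders, not the study's data:

```python
import random

def split_dataset(images, train_frac=0.8, seed=42):
    """Randomly partition a list of image IDs into train and validation sets."""
    rng = random.Random(seed)          # fixed seed keeps the split reproducible
    shuffled = images[:]               # copy so the caller's list is untouched
    rng.shuffle(shuffled)
    cut = int(len(shuffled) * train_frac)
    return shuffled[:cut], shuffled[cut:]

# 8,641 images, as in the study (IDs are hypothetical placeholders)
images = [f"img_{i:05d}" for i in range(8641)]
train, val = split_dataset(images)
print(len(train), len(val))  # 6912 1729
```

In practice the split would also be stratified so that polyp and non-polyp images appear in similar proportions on both sides, but the basic mechanics are as above.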
Karnes explained the process to Ƶ: "We took a bunch of images that didn't have polyps from all parts of the colon -- poop, bubbles, you name it -- and then images that actually had polyps. We then drew boxes around the polyps so that the deep learning algorithm knew what the area of interest was, to learn what was in the box that was different from everything else."
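The article doesn't detail how the hand-drawn boxes were used during training, but a standard way detection models compare a predicted box against an annotated one is intersection-over-union (IoU) -- the overlap area divided by the combined area. A minimal sketch with made-up coordinates:

```python
def iou(box_a, box_b):
    """Intersection-over-union of two boxes given as (x1, y1, x2, y2)."""
    ax1, ay1, ax2, ay2 = box_a
    bx1, by1, bx2, by2 = box_b
    # Intersection rectangle (empty if the boxes don't overlap)
    ix1, iy1 = max(ax1, bx1), max(ay1, by1)
    ix2, iy2 = min(ax2, bx2), min(ay2, by2)
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    area_a = (ax2 - ax1) * (ay2 - ay1)
    area_b = (bx2 - bx1) * (by2 - by1)
    return inter / (area_a + area_b - inter)

print(iou((0, 0, 10, 10), (5, 5, 15, 15)))  # 25 / 175 ≈ 0.143
```

A detection counts as correct when its IoU with an annotated box exceeds some threshold (0.5 is a common choice).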
One benefit of the convolutional neural network model is that it can run without lag on live video using an ordinary desktop machine. It can also run on a Chromebook, which makes it ideal for randomized studies, Karnes said.
He concluded that the convolutional neural network model is deployable and ready for validation as a novel method to assist polyp detection during colonoscopy. His team has already submitted a grant proposal for multicenter studies and is awaiting funding for future research.
Disclosures
Karnes disclosed no financial relationships with industry.
Primary Source
World Congress of Gastroenterology at ACG2017
Karnes WE, et al "Adenoma detection through deep learning" ACG 2017; Poster 1032.