AI for Radiation Therapy Works, but Is it Fully Trusted?

— Most machine learning-generated plans selected over human plans in head-to-head comparison

MedicalToday
Radiologists in front of a computer monitor establishing the radiotherapy treatment area using a treatment planning system.

The use of artificial intelligence in creating radiation treatment plans for prostate cancer appeared to be a success, according to a blinded, head-to-head study.

Overall, radiation oncologists considered the vast majority -- 89 of 100 -- of machine learning (ML)-generated plans clinically acceptable for treatment, reported Thomas Purdie, PhD, and colleagues at Princess Margaret Cancer Centre in Toronto. This finding was stable across both the simulation phase (92%) and deployment phase (86%) of the study.

Moreover, in a head-to-head comparison of radiation treatment plans generated by an ML algorithm or humans, 72% of the ML-generated plans were selected over human-generated plans, "indicating the potential for ML to even surpass human performance owing to increased consistency whereby the ML plan amalgamates a consensus of experts as opposed to a single expert," they wrote in a research letter published in Nature Medicine.

However, when radiation oncologists were faced with actually putting these treatment plans into practice on patients, the proportion of ML plans selected for treatment dropped significantly, from 83% in the simulation phase to 61% in the deployment phase.

"Once you put ML-generated treatments in the hands of people who are relying upon it to make real clinical decisions about their patients, that preference towards ML may drop," said Purdie in a press release. "There can be a disconnect between what's happening in a lab type of setting and a clinical one."

While machine learning "holds great promise" for healthcare delivery, its impact has mostly been tested in simulated environments that do not reflect real-world clinical practice, noted Purdie and colleagues. Therefore, in this study, they aimed to prospectively deploy and test an algorithm for therapeutic radiation treatment planning in prostate cancer patients.

The study involved two phases. The first was a retrospective simulation phase in which previously delivered radiation treatment plans were compared with ML-generated plans in 50 patients by seven radiation oncologists under blinded review.

In the second prospective deployment phase, nine radiation oncologists compared ML- and human-generated radiation treatment plans for 50 patients under blinded review. The selected plan -- ML or human -- was used for patient treatment following all standard clinical quality controls.

Radiation treatment planning using ML reduced the median time needed for the entire planning process by 60%, Purdie and team noted.

Overall, the radiation oncologists selected the radiation treatment plan that was "quantitatively superior based on compliance with clinical guidelines" in 85% of cases in the simulation phase versus 72% in the deployment phase, they said.

"Preference towards ML- or human-generated RT plans was attributed at the individual level based on the perceived origin (ML or human) of the selected RT plan, when it was quantitatively inferior based on consensus review," the authors wrote. For example, they noted that most of the observed preference for human-generated treatment plans was accounted for by two of the nine treating radiation oncologists.

The study demonstrates "that fully automated, ML-generated therapeutics are realizable in a clinical environment," Purdie and colleagues wrote. "Prospective deployment studies validating the impact of ML on real-world clinical settings are necessary to quantify the value of these methods, and to drive acceptance for routine use in patient care."

"When it comes to new technologies and devices, I think there is a general distribution of uptake and acceptance, with some early adopters, some who are more skeptical but convincible, and then the late or reluctant adopters," said Neha Vapiwala, MD, of Perelman School of Medicine at the University of Pennsylvania, who was not involved in the study.

"There is also something to be said for some of the nuances that simply can't be captured with machine learning, such as the body of one's professional experience and history with a specific tumor type, or a particular kind of clinical case," she told MedicalToday. "You might follow the typical guidelines, and a treatment plan may look 'perfect,' but perhaps you've observed outcomes with patients in that particular scenario that you incorporate in your decision making and that may favor a non-automated approach."

Vapiwala also noted that while machine learning as described in this study is not yet broadly available, there are certain elements that are more prevalent in today's clinical practice and appear to be more readily adopted, such as auto-contouring of structures on simulation CT images that are used for dose modeling.

    Mike Bassett is a staff writer focusing on oncology and hematology. He is based in Massachusetts.

Disclosures

This work was supported by the Canadian Institutes of Health Research through the Collaborative Health Research Projects (Natural Sciences and Engineering Research Council of Canada partnered) and the Princess Margaret Cancer Foundation.

Purdie and McIntosh received royalties from RaySearch Laboratories in relation to machine learning radiation therapy treatment planning.

Primary Source

Nature Medicine

McIntosh C, et al "Clinical integration of machine learning for curative-intent radiation treatment of patients with prostate cancer" Nat Med 2021; DOI: 10.1038/s41591-021-01359-w.