Curious Dr. George | Plumbing the Core and Nibbling at the Margins of Cancer

The Challenges of Using Artificial Intelligence to Improve Cancer Treatment

Razelle Kurzrock, MD, and Jeff Shrager, PhD

In a previous post, CureMatch co-founder Razelle Kurzrock, MD, told us all about her company’s artificial intelligence (AI) platform that matches patients with treatments based on their cancer’s molecular profile. Here, AI expert Jeff Shrager, PhD, responds, and Kurzrock offers a rebuttal.

Shrager is Co-Founder and Director of Research at xCures, and was formerly Director of Research at Cancer Commons. He is also an Adjunct Professor in the Symbolic Systems Program at Stanford University. Email: jshrager@stanford.edu.

Kurzrock is Director of the Center for Personalized Cancer Therapy and the Rare Tumor Clinic at U.C. San Diego, and Co-Founder and Board Member of CureMatch, Inc. Email: razelle@curematch.com.

Shrager: While I applaud Dr. Kurzrock and CureMatch for their efforts to apply machine learning in precision oncology, I want to offer a bit of a heads-up.

Whereas it is certainly true that “we live in the ‘big data’ generation,” two senses of that term are often conflated. Google and Facebook have enormous datasets with many independent observations across relatively few features. Medical data, especially at the molecular level, is exactly the opposite: relatively few independent observations across an enormous feature space. Moreover, the settings in which modern AI (i.e., machine learning) has seen success are those where there is an existence proof of a solution that can be drawn upon as a teacher (e.g., self-driving cars, where even 16-year-olds drive adequately well), closed systems for which we have excellent simulators (e.g., astrophysics), domains in which the rules are static (e.g., games), or domains in which experiments are basically free (games again, or any domain with a good simulator).
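To make the shape mismatch concrete, here is a minimal, purely illustrative sketch (the cohort size and feature count are hypothetical, using numpy and scikit-learn): a model trained on a few dozen samples over thousands of noise features fits its training data almost perfectly while predicting no better than chance.

```python
# Minimal sketch: few samples over a huge feature space (n << p).
# All numbers are illustrative, not drawn from any real genomic dataset.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n_patients, n_features = 40, 10_000             # tiny cohort, molecular-scale feature space
X = rng.normal(size=(n_patients, n_features))   # pure-noise "omics" features
y = rng.integers(0, 2, size=n_patients)         # labels carry no real signal

model = LogisticRegression(max_iter=1000)
print("train accuracy:", model.fit(X, y).score(X, y))            # near 1.0: memorizes noise
print("cv accuracy:", cross_val_score(model, X, y, cv=5).mean())  # near 0.5: chance level
```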

Medicine is completely different: We have essentially no good simulators, medical experiments are extremely costly, we lack good treatments (which is why we’re bothering with this at all), and the treatment space changes rapidly. You can’t just teach your robot doctor to cure cancer by observing good doctors curing cancer, because there are no such doctors and cures. There may be some better and some worse doctors, but as far as I know, there isn’t one who can cure cancer “adequately well” whom you could use as a guide; indeed, there may be no cure for cancer at all.

Heads up! Machine-learning applications in domains like medicine, with small numbers of samples ranging over very high-dimensional feature spaces, and with the limitations enumerated above, are exceedingly prone to getting stuck in suboptimal local minima: they prefer solutions that work well enough over exploring solutions that might work better than the ones already observed or tried. The way out of this problem is active learning: Rather than taking the apparently best action in every case, one must balance the strength of belief in one’s rankings against the information gain of trying something new. Doing this requires a global view of the whole medical (or at least oncological) space, and working out some very difficult “statistico-ethical” questions. Indeed, this is what the clinical trial system is striving to do, although it is doing so horribly inefficiently and will basically never get there. We can solve this problem, but it requires a much broader AI approach than simply treating each patient in accord with a locally optimal solution.
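As a purely illustrative sketch of that balance, here is a Bernoulli Thompson-sampling bandit over made-up treatment “arms.” The response rates and priors are invented for exposition; this is a toy model of exploration versus exploitation, not a clinical allocation scheme.

```python
# Illustrative Thompson sampling over hypothetical treatment "arms".
# The response rates below are made up; this toy balances belief in the
# current best arm against the value of trying uncertain alternatives.
import numpy as np

rng = np.random.default_rng(1)
true_response = [0.15, 0.25, 0.40]   # unknown to the learner
successes = np.ones(3)               # Beta(1, 1) prior on each arm
failures = np.ones(3)

for patient in range(500):
    # Sample a plausible response rate per arm from the current posterior;
    # uncertain arms sometimes draw high values, so they still get tried.
    draws = rng.beta(successes, failures)
    arm = int(np.argmax(draws))
    outcome = rng.random() < true_response[arm]
    successes[arm] += outcome
    failures[arm] += 1 - outcome

print("posterior means:", successes / (successes + failures))
print("times each arm was tried:", (successes + failures - 2).astype(int))
```

Over time the sampler concentrates on the best-performing arm while still occasionally probing the others, which is the “statistico-ethical” trade-off in miniature.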

(This commentary abbreviates the argument made in much greater detail in a paper I wrote with my colleagues at xCures last year for The Journal of Law, Medicine & Ethics: “Is Cancer Solvable? Towards Efficient and Ethical Biomedical Science.”)

Kurzrock: I would like to thank Dr. Shrager for highlighting two excellent points, pertaining to the use of AI in routine oncology practice and to the inefficiency of clinical trials; I fully agree with him on both. Allow me to offer some brief comments.

First, I concur that current AI-based software platforms are certainly not sophisticated enough to be “robot doctors” that could treat cancer. Indeed, decision-support platforms like CureMatch’s Bionov™ are not here to replace oncologists. They are necessary tools that help oncologists process immensely complicated data, such as that revealed by next-generation sequencing of tumors. Decision-support platforms are rule-based systems that enable the evaluation of complex information by drawing on prior knowledge.

Moreover, much of the work that lends confidence to decision-support platforms comes from clinical trials. I agree with Dr. Shrager on the extreme inefficiency of clinical trials, on the fact that they are nonetheless indispensable to clinical oncology research, and on the need for new clinical strategies, especially to address the questions raised by today’s precision medicine and its complex molecular diagnostics. For example, the prospective, cross-institutional I-PREDICT study demonstrated the value of customized, matched combination therapies (rather than scripted monotherapies) and of a matching score similar to that used by Bionov™. Other efforts, such as obtaining real-world data via a Master Observational Trial, are also novel approaches that enhance the clinical trial process.
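For readers unfamiliar with the idea, here is a loose, hypothetical sketch of a matching score in the spirit of the one reported with I-PREDICT: alterations addressed by the chosen drugs divided by all characterized alterations. The drug-target map and names below are invented placeholders and do not reflect Bionov™’s actual logic.

```python
# Loose sketch of a matching score in the spirit of I-PREDICT:
# (alterations addressed by the regimen) / (all characterized alterations).
# The drug-target map is an invented placeholder, not Bionov's algorithm.
HYPOTHETICAL_TARGETS = {
    "drug_A": {"KRAS", "NRAS"},
    "drug_B": {"PIK3CA"},
    "drug_C": {"CDKN2A"},
}

def matching_score(tumor_alterations: set[str], regimen: list[str]) -> float:
    """Fraction of a tumor's alterations addressed by at least one drug."""
    covered = set().union(*(HYPOTHETICAL_TARGETS[d] for d in regimen))
    return len(tumor_alterations & covered) / len(tumor_alterations)

profile = {"KRAS", "PIK3CA", "CDKN2A", "TP53"}
print(matching_score(profile, ["drug_A", "drug_B"]))  # 0.5: two of four matched
```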

***

Copyright: This is an open-access article distributed under the terms of the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original author and source are credited.