
Addressing Ethical Concerns Around Artificial Intelligence in Cancer Care With Andrew Hantel, MD

In this interview from the 2023 American Society of Hematology (ASH) Annual Meeting, Dr. Andrew Hantel, Instructor in Medicine at Harvard Medical School, shares more about his presentation on oncologists' perspectives on the ethical implications of artificial intelligence (AI) in cancer care, as well as ongoing efforts to build a framework for the integration of AI into oncology care delivery.

Oncology Data Advisor: Welcome to Oncology Data Advisor. I'm Keira Smith. Today, we're here at the ASH Annual Meeting, and I'm joined by Dr. Andrew Hantel. Thanks so much for coming on the program today. Would you like to start off by introducing yourself and telling us a little bit about the work that you do?

Andrew Hantel, MD: Sure, I'm a medical oncologist. I see patients with acute leukemia and other hematologic malignancies, but about 80% of my time is spent doing health services research and medical ethics research.

Oncology Data Advisor: Great. I know you're presenting your research here on A National Survey of Oncologists' Perspectives on Ethical Implications of Artificial Intelligence in Cancer Care. For background, what are some of the current FDA-approved uses of AI in hematology and oncology?

Dr. Hantel: There are already quite a few. The FDA has approved over 500 different AI tools through its device pathway. Not all of those are for hematology and oncology, but the majority of them are in radiology and pathology, each of which has major implications for oncology. A lot of them are used for oncology diagnostics—looking at computed tomography (CT) scans, doing radiation oncology mapping, and also doing histopathological diagnosis. There are clinical decision-making and risk prediction algorithms as well. Then there are upcoming forms of patient-facing AI, with or without the inclusion of clinicians, through things like chatbots and avatars. So far it has been more on the diagnostic side of things, but it's coming into the therapeutic and patient-facing sides of things as we speak.

Oncology Data Advisor: That's very exciting. I know one of the biggest conversations right now has to do with the ethical concerns around AI. What are these concerns specifically in cancer care?

Dr. Hantel: A lot of the concerns for AI in general are applicable to cancer care. I don't think there are any that aren't, and I don't think there are too many concerns that are idiosyncratic to cancer care. The concerns are largely around our ability to explain what the AI is doing, the decisions it's making, and when it might be going wrong, as well as biases that might emerge over time even after something's approved. If you have, let's say, a drug or a device, a static molecule or a radiation machine, that stays the same the whole time, but the whole point of the AI is that it changes. Even an AI that is proven to have its biases mitigated at this point might change in the future depending on what populations it's applied to. Then you have other issues: is this getting in the way of physicians and patients having a good therapeutic relationship? Are we shifting the responsibilities of clinicians onto the AI rather than making sure that we're taking care of the patient in front of us?

Oncology Data Advisor: There are definitely a lot of questions to address there. Speaking of the survey that you designed, how did you and your team go about designing it and administering it to oncologists?

Dr. Hantel: There are a number of general health care AI frameworks about the different ethical issues, some of which we just touched on, as well as different processes that should be undertaken when you're designing and implementing AI. We published one in the Journal of Clinical Oncology last year about a process framework to make sure that AI is ethically implemented. There are both the different subject and content areas and then how you actually apply them. What we did was we took those backgrounds as the basis for our survey because a lot of that normative work had already been done.

Then we constructed a survey alongside our study team members, which included health services research and survey experts as well as practicing oncologists and AI experts in our group. After coming up with that draft survey, we took it to a number of oncologists to do cognitive testing, which is basically where you work through the survey with them and make sure that the questions you're asking are actually getting at the issues you want to understand, as well as identify whether anything is confusing, make sure the questions are worded right, and check that the responses participants can choose from are appropriate. After that, you have a survey.

Oncology Data Advisor: Great. What results did the survey reveal?

Dr. Hantel: The two major categories, I'd say, were optimism as well as conflict. On the optimism side: in the prior physician surveys that have been done on artificial intelligence and health care, there had been more optimism from the side of patients and a lot of skepticism from the side of physicians. Essentially, at the times those surveys were done, from the mid-2010s to 2020, I think people were less familiar on a global scale with what AI was doing and what it could do, whereas now I think people understand its specific applications for their work as clinicians or, as patients, for how it might affect their cancer journey. From that standpoint, people are becoming more optimistic because they can see that it can potentially take away some of the less interesting portions of their job, like administrative burden. That was the optimism part: physicians feeling like it could improve different aspects of cancer care, both in terms of documentation and in actually helping them appropriately diagnose and treat patients.

Then beyond that, there were also some interesting conflicts in that, despite the optimism, people aren't very well-trained right now about what AI models are, how to assess them as a practicing clinician, how to understand when they're appropriate, when they might make mistakes, and how to identify those mistakes when they happen. We saw some conflicts where physicians were saying, "All right, this AI is going to be used. Is it my responsibility or am I liable when it goes wrong? Is that the responsibility of the company?" We saw that there was a lot of deference of that responsibility to the AI developers rather than to themselves as oncologists. We also saw other signs of conflict around acceptance of that responsibility. The vast majority of people said that it was the oncologist's responsibility to protect their patients from bias, but very few of them, less than a third, thought that they could adequately assess the bias of a tool when presented with one. There's conflict and a big knowledge gap there.

Beyond that, there were also conflicts around the difference between what the AI recommends and what the oncologist would have recommended. The vast majority of practicing oncologists said that they want to be able to explain and understand the decisions that an AI tool is recommending, but very few of them thought that patients needed to. At the same time, when we presented them with a scenario in which an AI would recommend a different regimen than the one they were initially going to recommend, the most common response was for the oncologist to present both options to the patient and have the patient choose. Well, how can the patient choose if you're not adequately able to inform them and make them understand what the decision is? You're then foisting the responsibility upon them. I think that shows a lot of the discomfort and the lack of familiarity with some of the actual ways that this might be used and where physicians' responsibility continues to lie.

Oncology Data Advisor: This is all so interesting. Thanks so much for explaining all the results. Were these results mainly consistent with what you expected to see, or were there any that stood out to you as particularly surprising?

Dr. Hantel: I think the optimism was surprising just because it wasn't consistent with prior studies, and some of the conflicts were surprising, such as the deference of responsibility to the AI rather than saying that "I'm still liable as a physician when an AI recommends something and I abide by it." I was very surprised that while almost 90% thought it was the AI developer's responsibility, less than half thought it was their own responsibility as a clinician. In an era where we think that we are still responsible for the patient in front of us, is this going to no longer be the case? Then if that's the case, do we have any trust in medicine moving forward?

Oncology Data Advisor: As the use of AI in cancer care continues to develop, are there ways to continue addressing these ethical concerns, or any suggestions you have around that?

Dr. Hantel: There are a lot of things coming out about AI governance. The White House's AI Bill of Rights came out recently. The World Health Organization (WHO), even within the last couple of weeks, put together some consensus information, not necessarily guidelines, about how ethical AI should be approached. Neither of those is specific to health care, much less to cancer care; but at least for the processes and the areas that need to be assessed for ethical AI, there are very good frameworks out there. The question is essentially, what are the actual structures that are going to make that governance a reality? Right now, that falls under the auspices of either local institutional review boards (IRBs) or the FDA nationally. From that standpoint, that usually covers traditional research and development, but not necessarily, as I mentioned before, things after they're developed and then applied to different populations. That isn't fully covered by the IRB. It's only covered at the broadest level by the FDA.

The FDA might ask the company that developed the AI, "Hey, are your metrics still as good across this huge, broad population?" The company might say yes, but in different pockets that accuracy or appropriateness might be vastly different. I think the gap right now is that there are no local committees or IRB analogs for following things after they're approved and then applied within the hospital. There's also almost no regulation for the penumbral things, like the operational or non-research applications of AI that are popping up everywhere—something that will write your notes and put them into the computer, and all those things. Very few of those, I think, are going to fall under the auspices of traditional research, but at the same time, they need to have just as much regulation. Right now, there are no governance structures for how those are done. I think there's a big opportunity, and a big need, to create some kind of regulatory policy as well as structures that allow that governance to be formed.

Oncology Data Advisor: Absolutely. My last question for you is, do you have any future research planned to continue investigating the burgeoning role of AI in cancer care?

Dr. Hantel: Yes, in a couple of ways. We're looking at a couple of the different applications that we asked about in the survey—some of those direct-to-patient AIs—to understand how appropriate AI-translated notes from doctor-speak to lay-speak are. That means the clinician asking, "Does this AI-based translation of my note into something that a patient can look up on their portal have fidelity with what I had put in the note, and then does the patient understand it?" Those different issues can come up there. We're looking at some of those different applications and their ethical ramifications. Then from the policy side—a lot of what I just went through in terms of the IRB and local regulation—there's different research that we're doing in that area to develop guidance and governance structures, put them in place, and look at how well they're implemented, how they can be followed, and how that process can actually be borne out and then disseminated into different parts of biomedical research.

Oncology Data Advisor: This is all really fascinating. It was so wonderful to hear about this research, and we look forward to hearing more in the future as the role of AI continues to develop. Thank you so much for coming by to talk about all this today.

Dr. Hantel: Thanks for having me.

About Dr. Hantel

Andrew Hantel, MD, is an Instructor in Medicine at Harvard Medical School and a Faculty Member in the Divisions of Leukemia and Population Sciences at Dana-Farber Cancer Institute and the Harvard Center for Bioethics. His research focuses on leveraging health services and care delivery methods to address ethical questions in oncology, with specific interests in research participation, climate change, and AI.

For More Information

Hantel A, Marron JM, Kehl K, et al (2023). A national survey of oncologists' perspectives on the ethical implications of artificial intelligence in cancer care. Presented at: 2023 American Society of Hematology Annual Meeting. Abstract 2347. Available at: https://ash.confex.com/ash/2023/webprogram/Paper175030.html

Hantel A, Clancy DD, Kehl KL, et al (2022). A process framework for ethically deploying artificial intelligence in oncology. J Clin Oncol, 40(34):3907-3911. DOI:10.1200/JCO.22.01113

White House (2022). Blueprint for an AI Bill of Rights. Available at: https://www.whitehouse.gov/wp-content/uploads/2022/10/Blueprint-for-an-AI-Bill-of-Rights.pdf

World Health Organization (2023). WHO calls for safe and ethical AI for health. Available at: https://www.who.int/news/item/16-05-2023-who-calls-for-safe-and-ethical-ai-for-health

Transcript edited for clarity. Any views expressed above are the speaker's own and do not necessarily reflect those of Oncology Data Advisor. 

