Blood Cancer Awareness Month: Exploring Patient-Reported Outcomes With Rahul Banerjee, MD, FACP, and Ajay Major, MD, MBA
Recently, Dr. Rahul Banerjee, Assistant Professor at the University of Washington and Editorial Board member for Oncology Data Advisor, spoke with Dr. Ajay Major, Assistant Professor at the University of Colorado, in a live interview discussion in honor of Blood Cancer Awareness Month. Dr. Banerjee and Dr. Major engaged in a riveting conversation about the role of patient-reported outcomes (PROs) in lymphoma and myeloma, including how to interpret them and how to include them when designing research.
Oncology Data Advisor: Hi everybody, welcome to Oncology Data Advisor. I'm Keira Smith, and today we're having a live expert interview panel in honor of Blood Cancer Awareness Month. I'm joined by one of our Editorial Board members, Dr. Rahul Banerjee, who is an Assistant Professor at the University of Washington. He'll be speaking with Dr. Ajay Major, who is an Assistant Professor at the University of Colorado. Today, they'll be discussing patient-reported outcomes in lymphoma and myeloma. With that, I'll turn it over to you, so Dr. Banerjee, take it away.
Rahul Banerjee, MD, FACP: Thank you, Keira and Dr. Major; pleasure to have you on the show.
Ajay Major, MD, MBA: Thanks for having me.
Dr. Banerjee: As Keira mentioned, Dr. Major's expertise is in blood cancers, particularly in patients with lymphoma and myeloma. His research expertise is on patient-reported outcomes, so we'll call them PROs for the purpose of this interview. He led a very nice analysis of PROs in patients with neuropathy and myeloma, using data from the Multiple Myeloma Research Foundation. I know he has a couple of other PRO-related papers in the works. Dr. Major, again, it's a pleasure. Thank you again for coming on the show.
Dr. Major: Thanks. Happy to be here.
Dr. Banerjee: I always ask everybody this as my first question: when you look at an abstract or a paper or anything that has PRO data involved, embedded into it, or talks about it, what are the things that you look for? What makes something a great PRO study versus a mediocre PRO study versus, honestly, a lousy PRO-based study?
Dr. Major: I don't really believe there are any lousy PRO studies, and the caveat there is that in the blood cancer space, PROs are fairly new. We finally have good enough therapies that we can actually start to talk about many of these diseases as chronic diseases, where we need to think about quality of life and how tolerable our therapies are. But I think there are two big things that I look for. The first is a really good statistical plan. When you're designing a phase 1 or phase 2 trial, you really think about whether you're going to do a 3+3 design or a Bayesian design, methodically thinking about how you're going to construct your study. That exact same concept applies to PRO studies.
You have to think: what is the patient population you're studying? What is the timeframe over which you're studying it? What do you expect to happen to their quality of life or their symptom burden as a result of their treatment? What I also look for is a really good hypothesis: we hypothesize that quality of life will change in XYZ way, based on how we give XYZ treatment. Based on that, you have a good understanding of the cohort you're studying in that particular trial, and you can see that the PROs have been designed to really answer that specific question. I do sometimes see people throw in a PRO instrument as kind of an afterthought: "Well, we're going to collect it." Collection of PRO data, in my opinion, is good in a lot of settings right now. We should be doing it across settings, in observational studies and clinical trials, of course. But the problem with an afterthought instrument is that you don't really know what you're looking at. So what I really look for is a good hypothesis.
The other big component I look for is seeing that the data have been presented with some granularity when it comes to PROs. This is informed by research that other groups have been doing, and some that our group has been doing too, on reporting someone's change in overall or global quality of life, which you see reported a lot in studies because it feels easier to get a handle on. It's like, "Oh, we know overall their quality of life did this based on this treatment." But really, when you report just global quality of life, you are missing some of the granularity about what actually happens to symptom burden.
We have an abstract that'll hopefully be coming out at the American Society of Hematology (ASH) meeting soon, where we looked at patients with myeloma and how their global quality of life changed during induction with carfilzomib-based therapies. What we found is that overall quality of life was actually the same over that treatment period. But the instrument we used, one of the European Organisation for Research and Treatment of Cancer (EORTC) instruments, looks specifically at disease-related symptoms versus side effects from therapy, and we found that side effects from therapy precipitously increased over the first couple of weeks and then decreased to baseline by six months, while disease-related symptoms precipitously decreased over the treatment period. Those kind of canceled each other out and produced a global quality of life that was completely unchanged. But for patients, there were clearly dynamic changes going on in their symptom burden.
The ZUMA-7 study, one of the big chimeric antigen receptor (CAR) T studies, used EORTC instruments. What I like about that study is that in their supplement, they included all of the individual domains that they used for the PRO instrument. Nausea, vomiting, financial toxicity, all of that was included. You really have to capture all that and present all of that in your paper for us to get a good understanding to counsel patients on what's going to happen with them. I tell people with myeloma, if I'm treating them with carfilzomib upfront, "Listen, you're going to have some more side effects, which we think will resolve, but your disease-related symptoms will get better." I think that's important to set expectations for patients.
Dr. Banerjee: It's also super helpful, especially with blood cancers, because as you know as well as I do, fatigue is so hard to attribute. Is this fatigue from the myeloma, or is it fatigue from the lenalidomide they're on? I think having these kinds of bigger longitudinal studies, where people have been able to suss this out bit by bit or adjust for lenalidomide dose adjustments or duration, is important.
Dr. Major: Yes, absolutely.
Dr. Banerjee: Basically, in my mind, statistical significance in PROs is much less important than clinical significance in terms of clinically important differences and so forth. Can you speak a bit about that, in terms of how you look at what actually is a meaningful change on a PRO instrument for a patient?
Dr. Major: You're opening the Pandora's box in the PRO space, which is a good thing. This whole entity, which some people call the minimal clinically important difference (MCID) or minimally important difference (MID), has been a predominant focus in the PRO space, actually more on the surgical side, interestingly. There's a lot of surgical literature on PROs, looking at results from hip replacements or knee replacements, where they want to attribute some change in a patient's symptoms to the value or the quality of whatever particular surgical intervention they underwent. Especially in the orthopedic space, there has been a lot of work asking what exactly that difference is per patient, but also looking in aggregate at a mean for an entire patient cohort that represents what patients think is clinically significant. I don't know how helpful that is in the blood cancer space with therapeutics.
That's partially because it's really nuanced. A single PRO instrument can have multiple different minimally important differences, based not only on the patient population you're studying, but also on the setting. For example, what is the minimally important difference for multiple myeloma in the front line versus in the first relapse setting? We presume that what feels meaningfully better for patients is probably different in each. The minimally important difference also differs based on the therapy, of course, but also based on time. If you're looking at the difference between zero months and six months in a patient with myeloma—who, by the way, might be undergoing an autologous transplant—versus zero months and three months, when they're still going to be on induction, that minimally important difference is also going to be different. It can get really in the weeds.
I think they're less useful, except in one scenario. There are a couple of different ways to generate a minimally important difference. Some people use a distribution-based model; there's a paper from 2004 that everyone cites, which found that a half-standard-deviation difference, which of course depends on your sample, is considered a minimally important difference. Fine, but there's another strategy called anchor-based generation of the minimally important difference, where you ask patients directly, using what we call the patient global impression of change. We just ask patients in the same study, "How do you feel your health has improved or worsened since you started therapy or since your last appointment?" That allows you to stratify internally within the study and say, "Look, some patients said that since they got their CAR T, they had a significant improvement; here's how their quantitative PRO scores changed," versus the patients who said, "Gosh, Doc, I'm doing way worse since I got the CAR T," and then seeing what that difference is.
I kind of like that model better, and it's pretty simple to put into studies; it's literally one or two questions. It allows you to internally determine the minimally important differences within your study group, so you rely less on, "Oh, this minimally important difference study was done in a different country, so we don't know." It's an internal way of doing it. I think it's more helpful, but people often get lost in, "Oh, did we meet the five-point minimally important difference?" I don't know. I think there are easier ways of looking at that.
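To make the two strategies concrete, here is a minimal Python sketch using entirely hypothetical scores and patient global impression of change (PGIC) responses; the data, the 0-to-100 scale, and the choice of anchor category are illustrative assumptions, not values from any real study.

```python
import statistics

# Hypothetical quality-of-life scores (0-100 scale, higher = better);
# all values below are made up for illustration only.
baseline = [55, 60, 48, 72, 65, 50, 58, 63, 70, 45]
followup = [60, 58, 55, 75, 60, 57, 66, 61, 74, 50]

# Distribution-based MID: the commonly cited half-standard-deviation rule,
# computed from the spread of the baseline scores.
dist_mid = 0.5 * statistics.stdev(baseline)

# Anchor-based MID: mean score change among patients who reported
# "a little better" on a PGIC item asked in the same study.
pgic = ["a little better", "no change", "a little better", "much better",
        "a little worse", "a little better", "much better", "no change",
        "a little better", "a little better"]
changes = [f - b for b, f in zip(baseline, followup)]
anchor_mid = statistics.mean(
    c for c, a in zip(changes, pgic) if a == "a little better"
)

print(f"Distribution-based MID (0.5 SD): {dist_mid:.1f}")
print(f"Anchor-based MID: {anchor_mid:.1f}")
```

On toy data like this the two numbers land close together, but in real cohorts they can diverge, which is one reason the anchor-based approach is appealing: it is calibrated to what patients themselves report as meaningful change.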
Dr. Banerjee: Agreed. Or worse, like, "Oh, we had four points. Can I find a citation that says that four points is fair game?" I'll admit that I actually haven't heard of the anchor-based approach. That's brilliant, almost like it's a patient-reported assessment of a patient-reported outcome.
Dr. Major: Exactly.
Dr. Banerjee: Or what level actually makes a difference to them. Obviously we're talking about all blood cancers here. I think myelofibrosis is a great example where ruxolitinib drastically decreases the spleen size. I think that's obviously clear as day, both clinically and statistically significant. But I'm sure that would be a good example where spleen size by computed tomography (CT) is one thing, but having a patient say, "Look, I feel so much better. I can eat more. My belly isn't bloated," is just huge and way more meaningful.
Dr. Major: Yes, and the other reason I add a caveat with the minimally important difference, which I think is important, is that the cutoffs for what's minimally important are different if you're looking at a mean score. If you're taking 100 patients and looking at their mean scores at zero, three, and six months out, that minimally important difference is going to be lower than what's actually meaningful for an individual patient. There's been some research showing that if you were to implement PROs in more of a routine clinical environment, the meaningful individual patient change is probably very different. That's the other reason I think there's an academic approach of, "Do we think this therapy overall improves someone's myelofibrosis fatigue, spleen size, what have you?" But for an individual patient, absolutely: they may have what would not be considered a minimally important difference, but if they clinically feel better, that's great.
As for ever implementing PROs in clinic—obviously, there's a lot of research there, and that's a whole other area of implementation science—it can come down to a simple question. There's a group at Mayo that just published a paper on a novel question, "Was it worth it?" which I love. It's a simple question, very poignant, but something you could really ask patients in the clinical environment so easily and get a sense of whether the therapy they went through was worth it and how that correlates with changes in their PRO scores.
Dr. Banerjee: Agreed. That also takes into account the expectations they had for this therapy and where they ended up, which I think is also a very real-world thing that our patients think about and want to know about. Maybe then I can ask you, you touched on transplant a little bit. We talked about the ZUMA-7 and CAR-T analysis and the difficulty with transplant. With all of our therapies, but especially with transplants and CAR-T, there's this roller coaster that you expect with these acute periods of toxicities versus a chronic period. When you collect PROs across that range, I'm sure that the curves under the curve, so to speak, make things complicated. Can you talk about where you think the field is headed for longitudinal PROs? Or things that you look for when you see a study of longitudinal PROs, in terms of understanding how to make sense of them from a patient or a physician perspective?
Dr. Major: Yes, and granted, I'm not a biostatistician, because longitudinal analysis is its own thing; there are books on it. But again, I think it comes down to really thinking about what you are looking for in the patient population you're studying and how long you're following them. You bring up a great point—CAR T is the classic example, so is transplant—where a lot of the CAR T studies have found at their three-month PROs that patients feel worse, especially in the ones that compared CAR T to transplant, for example in diffuse large B-cell lymphoma (DLBCL). Of course, people are going to feel worse in the three months during recovery. We would expect to see some sort of rebound, and those studies often have found that by six months, people are feeling better. In that scenario, with a single repeated measure, you have to ask, "Are you missing some of that dynamic change in their quality of life?"
I know there's been a lot of talk about time to deterioration, which is a Kaplan-Meier way of visualizing the percentage of patients who hit some sort of established cutoff at which their quality of life has deteriorated. The problem I have with that is that it only works in some settings. For example, in metastatic solid tumors, it might work, because our expectation is that at some point, the therapies are going to stop working and the cumulative toxicity will get to be too much. There's probably not as much of that dynamic change. There might be for some of the novel therapies in solid tumors, for sure. But we might be saying, "If we give you a therapy in a metastatic solid tumor setting, we do want to see a point at which you have some sort of improvement that is faster than the placebo or whatever other arm you're looking at." That may work in some of the hematologic malignancies, too.
But some of that dynamic change, I think, is a little bit more complicated. Looking at individual means at time points has obviously been done. I'm a big fan of generalized linear mixed models, because they allow you to actually model and take into account random effects—what all the individual patients are doing over time. They also allow you to enter, in kind of a multivariable way, covariates that might help you determine whether there are patient populations that have changes not being represented by the aggregate. We have a paper coming out hopefully soon, where we looked at how quality of life changes in patients with chronic lymphomas who are under a watch-and-wait or active surveillance strategy. The question we were looking at is, "Do patients really have a totally retained quality of life over time when they're on active surveillance?"
Obviously, a lot of patients talk about "watch and worry," which is totally valid. What's their quality of life doing? If you look at an aggregate by repeated-measures analysis of variance (ANOVA) over a three-year period, there was no difference. But when you fit a generalized linear mixed model, you actually tease out that there are gender differences in how quality of life changes, and differences in people who had existing comorbidities, which is really fascinating. People who had existing comorbidities actually had a better initial quality of life; somebody who has already been navigating illness outside of the cancer space has often amassed more social capital surrounding that. People who don't have comorbidities but are diagnosed with a new cancer or chronic lymphoma initially don't have the social capital they need to support their overall quality of life. That's an example of why you've got to pick the right longitudinal way to think about the data based on what you're studying.
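A toy simulation can show why an aggregate mean can look flat while subgroups move in opposite directions, which is exactly the kind of pattern a mixed model with covariates is positioned to detect. Everything below is hypothetical; a real analysis would fit a generalized linear mixed model with a statistical package rather than this simple subgroup comparison.

```python
import random
import statistics

random.seed(0)

# Hypothetical longitudinal QOL data: two subgroups whose trajectories
# move in opposite directions, so the aggregate mean looks unchanged.
months = [0, 12, 24, 36]

def trajectory(start, slope):
    # One patient's QOL scores over time, with a little random noise
    return [start + slope * t + random.gauss(0, 2) for t in months]

group_a = [trajectory(60, +0.3) for _ in range(50)]  # improving over time
group_b = [trajectory(60, -0.3) for _ in range(50)]  # worsening over time

def mean_at(patients, i):
    return statistics.mean(p[i] for p in patients)

everyone = group_a + group_b
overall = [mean_at(everyone, i) for i in range(len(months))]
a_means = [mean_at(group_a, i) for i in range(len(months))]
b_means = [mean_at(group_b, i) for i in range(len(months))]

# The aggregate change is near zero, masking opposite subgroup trends
print("overall change 0->36 mo:", round(overall[-1] - overall[0], 1))
print("group A change:", round(a_means[-1] - a_means[0], 1))
print("group B change:", round(b_means[-1] - b_means[0], 1))
```

The aggregate comparison (like a repeated-measures ANOVA on the pooled cohort) would report no change, while modeling the subgroup covariate recovers two clearly different trajectories.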
Dr. Banerjee: That is fascinating. I look forward to seeing that paper. I totally agree, and I think it's a good reminder. Like you said, biostatistics is probably one of the most important components of PRO analysis. As we design PRO studies as faculty in the field, having biostatistics faculty input into the instruments—how you analyze them and how you actually report them—is, I think, super important. The other big source of angst for all of us—and you can speak to this better than I can—is just the sheer number of PRO instruments out there for anything. That's a little daunting, because it's very hard to compare them: I don't know which one to use, and I don't know how much it's going to cost.
Maybe we can delve into quality of life, which you had talked about. You mentioned the EORTC instruments as an example of PRO instruments that you use. There are so many out there, so many ways to look at quality of life. What are you a fan of? Where do you think the field is headed in five years—ideally across blood cancers in general, but if not, particularly in lymphoma and myeloma?
Dr. Major: I'm a convert to the Patient-Reported Outcomes Measurement Information System (PROMIS) instruments, and I think you probably are too, as somebody who works in the cellular therapy space. But again, and I hate to keep coming back to this point, it really depends on what you're trying to study. I do think that most studies now, especially in hematologic malignancies, probably should include at least some of the PROMIS instruments. That's because the PROMIS instruments are generic, and by generic, what we mean is that they're not disease-specific. They can look at overall quality of life, as well as specific domains of quality of life. You can compare them across therapies without having to account for some of the disease-related characteristics.
Especially in blood cancers, and especially in the cellular therapy space, we're being inundated with new therapies—new CAR T therapies, bispecifics, what have you. It's impossible to design new PRO instruments for just bispecifics, just CAR T, and so on. I think PROMIS instruments probably should be included in some way in most of these heme malignancy studies, because they allow us to look at quality of life across all these different, unusual, new types of therapies. That being said, if your study is looking at how disease-related symptoms change, for example in myeloma, that's a little bit tougher to do with the PROMIS instruments. The whole framework is designed so that you can pull individual questions and design your own instrument, and that's obviously fine to do, but it can be tough: "Do I want to pull… let's see, what pain items do I want to use if it's myeloma-related bone pain? Or how does that differ from…" That can be a little bit challenging.
I think some of the other instruments—for example, the EORTC instruments or the Functional Assessment of Chronic Illness Therapy (FACIT) instruments, especially ones that are specific to lymphoma, like the NFLym, or the EORTC QLQ-MY20, which is specific to myeloma—can and should still be incorporated into studies, with PROMIS added as well, because that serves two purposes. One, it allows you to answer some of the granular questions about disease-related symptoms. But it also allows you, within the study, to start to look at comparisons between these instruments and the newer PROMIS instruments. If the field goes the way I think it's going to go, we're probably going to switch to using PROMIS only. Being able to validate and see, "Oh, this lymphoma-specific instrument has good enough agreement with the PROMIS instruments that we feel we can use them to start to look at those symptoms," I think, is going to be key.
Dr. Banerjee: I love that. I totally agree. Just to familiarize the viewers who don't know what we're talking about, it's PROMIS without the E: instruments that can look at global health, quality of life, sleep, anxiety, and more. Are you familiar with the Patient-Reported Outcomes version of the Common Terminology Criteria for Adverse Events (PRO-CTCAE) as well? Can you speak a little bit about that?
Dr. Major: Yes, and I'm glad you bring that up, because another construct emerging in the PRO space to address this issue is what we call tolerability. There's a lot of discussion right now in the PRO community about what tolerability means, because to your point, a lot of clinical trials are saying, "Okay, PROs are important, but how on earth do we incorporate them into a study?" There is some pressure to come up with a uniform construct for how we do this, especially in the clinical trial setting, and that's what tolerability is. It's a construct that was informed very much by patient advocacy groups, which is great, saying, "Listen, there are therapies whose side effects we are willing to tolerate, knowing that we're going to get some sort of XYZ benefit down the road."
Tolerability, as it's been defined thus far, is a combination of physical functioning, role or social functioning, disease-related symptoms, symptomatic adverse events, and overall side effect burden—kind of those five components. You mentioned the PRO-CTCAE, which is a patient-reported version of the CTCAE grading we use in oncology, for symptomatic adverse events. There are a couple of studies I've been working on where you can use some of the PROMIS instruments to look at physical functioning and role functioning. There are single-item questions you can use to look at overall side effect burden, such as, "How affected are you by the side effects of your treatment?" And incorporating the PRO-CTCAE can be great for looking at specific symptoms you think are from the therapy you're testing. So PROMIS is still part of the construct, but the construct is a little bit ill-defined.
The issue with this tolerability construct, at least in my view, is that there are aspects of quality of life we may be missing. Physical function and role function are important, but they don't reflect, for example, emotional well-being and some of the mental health components, which obviously are super important. They also don't reflect things like financial toxicity, meaning the financial distress a patient may experience from treatment, getting to the hospital, or any number of financial stressors related to their cancer therapy. We know that affects quality of life in a deleterious way too. Again, this is under development right now in the PRO sphere, but the PRO-CTCAE is a way to start to look at some of these individual components that might be driven by new therapies.
Dr. Banerjee: Agreed. I totally agree with this. I learned a lot from this, and I'll add for the audience that it isn't as burdensome a question battery as you might think. Just asking someone, "How bad is the symptom? What is its severity? How much did it interfere with you?" and seeing that severity and interference don't always correlate is pretty compelling, and it shows how much we really don't understand about the patient experience unless we ask about it.
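As a purely illustrative sketch of that severity-versus-interference point: PRO-CTCAE items are answered on short ordinal scales, and the snippet below uses hypothetical item names, 0-to-4 responses, and an ad hoc flagging rule, not official PRO-CTCAE items or scoring.

```python
# Hypothetical PRO-CTCAE-style responses on 0-4 scales; the item names
# and the flagging rule are illustrative assumptions, not an official rule.
responses = {
    "nausea_frequency": 2,
    "nausea_severity": 1,
    "fatigue_severity": 3,
    "fatigue_interference": 1,
    "neuropathy_severity": 2,
    "neuropathy_interference": 3,
}

# Flag any attribute a patient rated 3 or higher
severe_items = sorted(k for k, v in responses.items() if v >= 3)

# Note that severity and interference need not track each other here:
# fatigue is rated severe but barely interferes, while neuropathy is
# only moderate in severity yet highly interfering.
print("items rated >= 3:", severe_items)
```

Even this toy example shows why asking both attributes matters: flagging on severity alone would miss the neuropathy that is interfering most with the patient's life.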
Dr. Major: You're bringing up a great point, which is that there are one or two added questions you can pull from a bank to customize it, but there's also a super important need for good qualitative research within the PRO sphere, especially in blood cancers. The issue with these new therapies—and there's actually a lot of research on this—is that clinician-assessed symptoms do not reflect very well what patients report. Symptoms are often underestimated, or their severity is underestimated, especially for more subjective symptoms. Doing good focus groups and semistructured interviews, talking to patients who have undergone these therapies, understanding the burden of symptoms for them as well as for their caregivers, and publishing those data is extremely informative for us to then decide, "Okay, this is the lived experience of patients who've gone through this new CAR T therapy or bispecific. Let's make sure that the questions we choose on any of these instruments really reflect what patients are actually experiencing."
This is particularly important for, as you mentioned, fatigue. As a lymphoma doc, fatigue is one of the most common symptoms I see in aggressive lymphomas, chronic lymphomas, and survivorship, but fatigue is obviously really difficult to understand. There are a number of researchers who look at fatigue as a symptom cluster. How is fatigue different in a patient with chronic lymphoma versus a survivor of Hodgkin lymphoma versus someone currently undergoing intensive induction for DLBCL? We really need some good qualitative work to understand how those differ and how we need to customize our PROs for them. It's huge. It's paramount. It's critical.
Dr. Banerjee: Agreed. I'll just throw a shoutout to a word you mentioned that I'm glad you did: caregiver. I think for Blood Cancer Awareness Month, that's quite relevant, especially because of the intensity of our therapies, transplant, and CAR T. I think we're still figuring out caregiver experiences. Both you and I have anecdotes that caregivers tell us, and seeing that in a more systematic way—what they go through and how that evolves over the course of these therapies—is super important for us to better understand.
Dr. Major: The great thing is that the PROMIS instruments include some measures for caregivers. In the pediatric and adolescent and young adult (AYA) population, there's a whole patient and patient-proxy section. That's extremely important, especially in the cellular therapy and CAR T space. There was a great abstract at ASH last year looking at caregiver and patient overall burden and quality of life in outpatient versus inpatient transplant, which showed some very interesting differences that I think we have to consider as we start to think about outpatient CAR T and all the things coming down the pipeline.
Dr. Banerjee: Absolutely. We talked briefly about survey burden as kind of the downside of these. I recall that PROMIS is available in different languages, and technically they don't care whether you administer it electronically or by paper. Can you speak to that? How do you think a good PRO instrument should practically be administered to patients?
Dr. Major: Great question. I'm a huge advocate of electronic PROs, just from my experience. There are two ways I'll answer your question. I'll never forget, I was in a meeting with patient advocates, because the question that often comes up is survey burden. How many surveys are too many? How many questions are too many? What was really interesting in talking to patient advocates was that many of them said, "We don't really care how many questions we answer, as long as we're aware of what's going to be going on and the frequency." A lot of them said to me, "Because you're asking about our symptoms and our symptom burden, we want to provide that information." I'll never forget it. That's also why it's so important that patient advocates and patients be incorporated into the design of PRO studies, because people who've lived through CAR T will tell you, "Listen, I don't want to be filling out surveys when it's day 2 and I'm having cytokine release syndrome."
We have to communicate with our patients about what their expectations are and how many surveys are too many. They should be looking at our surveys. They should be helping us determine the frequency. But many of them said that they're willing to answer lots of questions if it's really getting down into what their symptom burden is like. The other component, as I was saying, is that I think most studies need a hybrid model. The electronic PROs are great: through REDCap or any number of systems, surveys can be emailed directly to patients, patients can fill them out, and the responses go directly into REDCap. There's even automatic scoring of PROs in REDCap, so when the patient fills out their survey, it automatically populates a score. That can make it really easy to do some dynamic, rapid research.
You can also text people, by the way; they do a lot of built-in texting for kids and AYA patients. But there's got to be a hybrid model still, because you don't want to miss patients who don't have access to the appropriate electronic means to participate in these studies. You have to make a concerted effort to make sure that you are capturing all of the PROs. But I agree, the nice thing about the PROMIS instruments is that for a translation in a language other than English to even be put out there, they do some initial internal validation of the quality of the questions. That's different from a lot of the other instruments.
When you use those instruments, you can feel more confident that you really are capturing what you think you're capturing, though still with the limitation that larger validation cohorts are always needed. I think a hybrid model is always good, but I tend to favor electronic PROs (ePROs) because I find a lot of patients in this day and age, even older folks, have no problem doing ePROs. I think the response rates, especially in a trial setting, can be quite good. The real world is a little bit of a different story, because if you're just blasting out ePROs to anyone in your hospital system, that's a little bit different. But there's a whole field of implementation science looking at how to improve that.
Dr. Banerjee: Agreed. This is really, really true. I agree that the digital divide is another concern with ePROs. But exactly as you said, if you have a hybrid model in place with a way to reach these patients, then I think that's quite manageable. Another plug for REDCap: most university academic health care centers have it, and PROMIS and a lot of the instruments are already built for REDCap, so you can import them readily. I think for fellows and junior faculty like us who are interested in incorporating PROs in a meaningful way, once you've talked with your patient advocate, once you've talked with your biostatistician, and once you've gotten institutional review board (IRB) approval, REDCap and PROMIS instruments are a nice way to go.
Dr. Major: The other plug I'll give is that automatic survey scheduling is built into REDCap. It's been truly wonderful as an investigator, because you can literally consent people virtually, as long as it's approved by your IRB. Then once they fill out their consent form, it automatically sends them their first survey. Then three months later, it automatically sends the next one. You can really design it to be a fully automated system, which can be really powerful, especially if you're looking at large data sets or doing prospective observational studies.
Dr. Banerjee: Totally agree. Especially when there's a shortage of clinical research coordinators to plug things in, taking a lot of the logistics and automating them for these PROs is great. Wonderful. Well, thank you, Dr. Major. This was very illuminating for all of us. Any closing thoughts or things you want to say about PROs going forward?
Dr. Major: If you're interested in incorporating PROs into your study, for any new investigators out there, talk to patients. But especially try to find someone who actually does PRO research. This is not just a job security plug; truthfully, we're out there, and we want to help you with your studies and help generate some meaningful PRO work. The one thing I will say is that we're getting into the era, and it's already happened, where drugs or medical therapies have been approved because they improve overall quality of life in the hematologic malignancy space. This is so critically important. So talk to your patients, talk to patient advocates, talk to biostatisticians, and talk to investigators like me who are interested in PROs. We can help you incorporate them into your studies and make sure that you have a really robust study design.
Dr. Banerjee: Agreed. I love it. I will say, and obviously we've spent the entire half hour talking about PROs in research settings, but PROs in real-world clinical settings are a topic for another day.
Dr. Major: Oh, totally.
Dr. Banerjee: They've been shown to improve survival in the solid tumor oncology world, and hopefully someday that will be shown here too, but we have a ways to go before we get there. Wonderful. Well, Dr. Major, thank you again for your time, and thanks to everyone for tuning in. Again, my name is Dr. Banerjee. I'm on the Editorial Board of Oncology Data Advisor, and it's been a pleasure talking with you and having all of you listen in.
Dr. Major: Thank you.
Oncology Data Advisor: Thank you so much, Dr. Banerjee and Dr. Major. This was a really valuable conversation, and I think everybody learned a lot, so thank you again.
About Dr. Banerjee and Dr. Major
Rahul Banerjee, MD, FACP, is an Assistant Professor in the Division of Medical Oncology at the University of Washington; he also holds a faculty appointment at the Fred Hutchinson Cancer Center. His clinical interests are in multiple myeloma, AL amyloidosis, and CAR-T therapy, and his research interests are in toxicity management, digital health, and the patient experience.
Ajay Major, MD, MBA, is an Assistant Professor of Medicine at the University of Colorado, where he specializes in the treatment of patients with lymphoma and myeloma. His research focuses on patient-reported outcomes and the effects of treatment toxicities on overall quality of life and survivorship in patients with blood cancers.
For More Information
Major A, Jakubowiak A, Derman B (2022). Longitudinal real-world neuropathy and patient-reported outcomes with bortezomib and lenalidomide in newly diagnosed multiple myeloma. Clin Lymphoma Myeloma Leuk, S2152-2650(22)00219-1. DOI:10.1016/j.clml.2022.07.002
Elsawy M, Chavez JC, Avivi I, et al (2022). Patient-reported outcomes in ZUMA-7, a phase 3 study of axicabtagene ciloleucel in second-line large B-cell lymphoma. Blood. [Epub ahead of print] DOI:10.1182/blood.2022015478
Farivar SS, Liu H, Hays RD (2004). Half standard deviation estimate of the minimally important difference in HRQOL scores? Expert Rev Pharmacoecon Outcomes Res, 4(5):515-523. DOI:10.1586/14737167.4.5.515
Major A, Wright R, Hlubocky FJ, et al (2022). Longitudinal assessment of quality of life in indolent non-Hodgkin lymphomas managed with active surveillance. Leuk Lymphoma. [Epub ahead of print]. DOI:10.1080/10428194.2022.2123225
EuroQOL Research Foundation (2022). EQ-5D-5L. Available at: https://euroqol.org/eq-5d-instruments/eq-5d-5l-about/
National Institutes of Health (2022). Patient-reported outcomes measurement information system (PROMIS). Available at: https://commonfund.nih.gov/promis/index
FACIT Group (2022). FACIT measures and languages. Available at: https://www.facit.org/measures-language-availability
FACIT Group (2022). NFLym-SI-18. Available at: https://www.facit.org/measures/NFLymSI-18
EORTC Study Group on Quality of Life (1999). EORTC QLQ-MY20. Available at: https://www.eortc.org/app/uploads/sites/2/2018/08/Specimen-MY20-English.pdf
National Cancer Institute (2022). Patient-reported outcomes version of the Common Terminology Criteria for Adverse Events (PRO-CTCAE®). Available at: https://healthcaredelivery.cancer.gov/pro-ctcae/
Prica A, Dhir V, Aitken R (2021). Quality of life and caregiver burden in patients and their caregivers undergoing outpatient autologous stem cell transplantation compared to inpatient transplantation. Blood (ASH Annual Meeting Abstracts), 138(suppl_1). Abstract 3055. DOI:10.1182/blood-2021-153120
Transcript edited for clarity. Any views expressed above are the speakers' own and do not necessarily reflect those of Oncology Data Advisor.