Streamlining the Pathology Workflow Through Artificial Intelligence With Waqas Haque, MD, MPH, and Leif Honda of TriMetis

This second episode of Oncology Data Advisor’s podcast series, Exploring Artificial Intelligence in Oncology, features Leif Honda, Chief Innovation Officer at TriMetis, a digital pathology company focused on improving patient outcomes by accelerating human tissue-based research, development, and testing. Dr. Waqas Haque, member of the Oncology Data Advisor Fellows Forum, speaks with Mr. Honda about the TriMetis Computer-Assisted Pathology (TCAP) AI platform, including how it was developed and trained, its capabilities for streamlining the pathology workflow, and the ongoing process of preparing it for clinical utility.

Waqas Haque, MD, MPH: Hi, welcome everybody. I’m Waqas. I’m a third-year Internal Medicine Resident at New York University (NYU) on a Clinical Investigator Track, and I’m starting Oncology Fellowship at the University of Chicago this upcoming summer. I’m excited to bring you the next episode of the Oncology Data Advisor podcast. This week, we’ll be having a feature from Leif Honda, who’s the Chief Innovation Officer at TriMetis, a company dedicated to automating the workflow process through AI for different pathology services.

I’m really excited to talk to you today, Leif. If you don’t mind, just give us a short introduction about yourself and maybe tell us a bit about what TriMetis does, the reasons why it got started, and how it’s improving biotech.

Leif Honda: Yes, thank you, Waqas. Again, my name is Leif Honda, and I’m the Chief Innovation Officer at TriMetis. We essentially have been in the business of handling specimens for research for over 20 years, and through that process, we have improved how we do things by leveraging technology. Those technologies include artificial intelligence, workflow automation, and robotics. We are investing heavily in this space in order to ultimately make life easier for clinicians and researchers. Behind all of that is an effort to accelerate the validation and approval of tools that clinicians could actually use on a daily basis, and I can go into that a little bit later.

Dr. Haque: Awesome, thanks for sharing that. I also read that you have an economics background. I majored in economics myself, so I’m glad to see someone with a similar training in this field.

I want to talk a little bit about the TriMetis Computer-Assisted Pathology (TCAP) AI platform that’s been developed. I attended the AI in Precision Oncology seminar last month, and some of the things they talked about in terms of AI were bioethics and how a lot of the training data for AI companies comes from three different states, Massachusetts, California, and New York. That’s where a lot of the biotech innovation is happening. So, tell us a little bit more about how TCAP was trained, who trained it, and what it encompasses.

Mr. Honda: TCAP came from a place where, essentially, we needed more detail on what was in the tumor tissues that we were using for research—not just a cursory overview of what the pathology and the diagnosis were, but actually the morphology and the information contained in the tissue. That was telling us more about whether those tissues were usable or not for certain studies. We would screen and fail lots of tissues, trying to explicitly find these perfect cohorts of patient data and tissues in order to discover novel biomarkers, therapeutic targets, and other landmark things that could tell us something about the disease state and the change in disease state.

That was a very difficult thing to do, and at the time, we decided that computers are better than the human eye in looking at some of these things. The human eye is designed to do certain things and receive certain inputs, but it isn’t exactly great at counting, especially at the level that we need for research where we need to know if there are 200 cells in a certain tissue. To count 200 cells actually takes longer than is really desired from a pathology perspective. It’s also one of those things where they tend to just guesstimate. They’re looking at surface area and what cells are there, and they might say, “Well, there are roughly 200.” What we found was that 40% to 60% of the time, pathologists were off on their count.

That revealed to us, “Hey, we need to do something that really is able to count these things fast.” It’s really on the low end of a doctor’s training; they don’t go to school for 14 years to count cells. We found that they had no problem getting rid of that relationship and saying, “You tell me how many you see in here, and I’ll tell you if you’re right.” Our system is designed to give answers and then let the clinician say, “Yes, I see that. That is correct, and I’m willing to adopt that.” Or they can adjust it and say, “It’s a little bit more or less, and it missed this or that.”

Generally speaking, in order to look at a whole slide, which isn’t a very big piece of material, it could take hours just to count cells. We’ve automated that whole thing. At the same time, the machine learning has become so sophisticated and so fast, and it’s heading for quantum computing where these results can be very efficient. Therefore, clinicians can see patients much faster and produce results and get through the workload at a pace that is actually quite extraordinary at this point. There are just not enough doctors such as yourself, people studying hematology and oncology, in the world to support the number of patients out there. We’re readily looking at how we can speed up that process for people in the field.

Dr. Haque: It’s really good to hear, and it sounds like you guys have had a lot of success over the past years. Kudos to you and your team. I think one concern that some people will have is the idea of self-governing AI versus non-self-governing AI. Sometimes when you keep adding more data to your system, you have to wonder, if it’s all validated, is it going to work in your population? I know we’re obviously now dealing with the clinical side, but just from a pathology side, what is the role of self-governing AI?

Mr. Honda: When we decided to use AI, we already had experience with diagnostics, therapeutics, and the whole process of getting through the FDA. For the most part, we knew very well that we would be training sets of data, but then in order to get something through, you also have to have confounding data. You have to have different cohorts that challenge the AI and say, “Is this actually what you’re seeing?” What we did is we hired a number of pathologists to train this, and in this case, we chose Visiopharm because they had 20 years in the industry of allowing people to train AI in pathology.

We started to train the AI, and then we introduced a cohort of 100 in a certain indication. These were pristine samples, samples that we knew were good. We knew the diagnosis was good, and they had been checked two or three times by pathologists, so we were able to train on them. It’s an iterative design. You train it and then you challenge it with new information. You let that set get bigger and bigger, and then you introduce things that are confounding, like diseases that may be close to non–small cell lung cancer, for example, but aren’t. Can it identify these cells within that situation? You’re constantly training it and making larger and larger sets.

Then the AI goes in and you start to ask the prompts and say, “What do you see?” In this case, you can find areas and regions where we don’t know what we’re seeing. We have to re-identify that and dig in deeper. Maybe we need to add some other tissues that are a little more difficult. For example, fine needle aspirate is more of a slurry, and it has cells in it that come from the actual methodology of acquiring the sample. What you want to do is start to train with those types of things. Then you get into game theory, which is where you really start to challenge it and force it to learn on its own a little bit.

At that point, once its performance shows that it’s statistically valid, then you have to validate it with tissue sets that are essentially blinded. You give them to doctors, they have a read on it, they put in their number, and then we show them what ours is. You start to have a correspondence between the numbers and say, “Yes, this performance is getting tighter and tighter.” We try to push that up to 98% or 100%, which is well above what is achieved with the naked eye, generally speaking. So, it’s a lot of learning.

Sometimes, there’s the point where you have to cut it off and you have to say, “We’re not going to use AI at this point.” The governance and responsibility that we have ethically and legally is that it has to perform within these certain parameters consistently every time. Although we’re working in research and it isn’t yet a clinical application, in order for it to get to clinical application, it has to prove itself in the research. We go through the discipline of validating these different data sets, challenging it, retraining it, training it again, and then we lock that down and we produce an algorithm. It’s not AI at that point anymore; it actually becomes an algorithm or an application, as we call it.

That is where a doctor, a research doctor in this case, can feel very confident that the cells that are being counted are truly those cells. We have layers of pathologists training, but also layers of pathologists questioning these things. For the most part, we do pilots with a lot of our clients so that they can understand, is this really yielding what it says it is? We’ve found that, generally speaking, it’s almost 100% accurate, but we’re very focused on quality control within the tumors themselves or the cells themselves. We set a new standard by using cellularity that is based on tumor nuclei. It’s no longer an estimate of tumor cells, but of tumor nuclei. Now we’re getting a layer of detail that we’ve never been able to achieve before.

Dr. Haque: Thanks for sharing that, Leif. It sounds like you all are doing a lot of work in terms of making sure the cells are accurate by getting a pathologist to validate the data and by building larger data sets. A similar question—last year the word of the year was hallucinations, where basically the AI is creating data that doesn’t actually exist. Do you have any other thoughts or ideas on how we can prevent that within pathology?

Mr. Honda: I think for the most part, it’s the process of analyzing a giant data lake where you have so many different images that are from different indications, making sure that you’re training on these different sets so that you’re not allowing it to hallucinate in any case, and then being able to narrow it back down. I think that’s, in essence, what has to happen. I know people think of AI in that it thinks for itself and it has a conscience, and all of a sudden it’s going to take off and we’re going to have a Terminator situation. We’re really far away from that right now. But it does come down to, is it good science? Are we using methodologies and are we looking at things that tell us explicitly that what we see is what is truly in those tissues?

What we’ve found is that this approach removes hallucinations. In this case, we’re taking a very small view of the world, and we’re not taking on everything. We’re not trying to say, “Throw an image up there, I’ll identify it for you. I’ll tell you if it’s liver cancer or breast cancer, and then I’ll tell you if it’s this type of diagnosis and what stage it is.” We have to take baby steps right now. We’re not really ready for prime time with that type of thing. The data lake itself is somewhat imbalanced. It’s based on the procedures you can get access to. There are, what, 40,000 new patients a year in hematology? It sounds like it’s a lot, but it’s actually not that many, thankfully. At the same time, you have to have access to those tissues and you have to train on those.

The way those access points happen throughout the world is different. Sometimes we have to go to locations where the care is not using neoadjuvant therapy, or not having an intervention early on those tissues that could wipe out certain features, and to use ground truth that truly reflects the morphology of those tissues. We have to do a lot to stay above-board with all this. These are the things that we’re constantly attentive to and concerned about when it comes to hallucinations. What we can say right now is that we’re not in the clinic, and that’s for good reason. Our objective is to speed up this whole process of getting into the clinic by using digital images and larger data pools, but we also very much control what we’re going after in order to not let it get out of hand and have a lot of misinformation being delivered.

Dr. Haque: Thanks for sharing those thoughts. I agree, we’re definitely far from the doomsday Terminator situation. Being a pathologist is obviously a complex job. It takes years of training through medical school and residency. The way a pathologist reads a slide can be different from beginning to end. They typically tend to scan the image and make sense of the architecture of the slide, then maybe they count the cells and look at the nuclear deoxyribonucleic acid (DNA).

I was actually reading a paper last week talking about how the different parts of the reading that the pathologist does might be optimized for different kinds of machine learning models. One might be convolutional neural networks (CNN), one might be a support vector machine (SVM); there might be different models for different parts of the process from beginning to end. Maybe we can kind of combine all those different things together. My final question: what are your thoughts on how we can take what TriMetis is making with TCAP and move some of this pathology workflow into the clinic, and what’s the best way of going about that in the next decade?

Mr. Honda: Our objective is to move upstream. For the most part, a lot of the digital scanning is being done at the end of the process, and that is where the data has become fixed, let’s say. Our objective is to get the scanning done right away. If it’s a clinical trial and you’ve consented that patient, as soon as it gets to pathology, we capture that image in order to start to understand and follow what’s in that tissue. Once the data diverges from that point, it’s very hard to reconstitute it all. We want to move upstream.

We also want the utilities there to be upstream. At that point, we can look at those and be able to tell the clinician, “This tissue is predisposed to these biomarkers,” for example. We already have this in progress where we can say, with reasonable statistical value, that those biomarkers are present and you should test for those as knowns in that tissue. That is also the point where in research, if you know that this is going to have a ROS1 mutation or KRAS or whatever in there, not only should you test for it and get that information, but it also is the point where you can screen that patient.

What happens in research and development is that the speed at which your study and your research progress is gated by access to this information. If you do it at the end, you essentially have to screen lots of samples, and maybe re-screen lots of samples, to tell you if those markers are in them. If you start way up ahead, then you know the presence of those markers, you can confirm them, and that allows you to do better research within the cancer indication. We’re trying to move upstream, and in the process, we think we can deliver data that’s valuable to the clinician and to the pathologist that’s actionable. They can then say, “Okay, let’s test for this,” or “This has enough tumor in it based on nuclei that we know this genomic sequencing will work.”

That’s very important too, because you don’t want to get through the process, and then—appealing to the economic part of this whole thing—have to say, “Well, I’ve spent $3,000 to test this, but it failed.” You want to know right away, “Is this tissue viable, and can it yield some results that are important to me as the person calling the shots?” The earlier it’s done, the better. In any case, we think that the convergence of those images is going to happen. Now, the systems that are out there as far as scanners go are really gated towards this kind of old model of plugging in and scanning. We think that’ll change entirely where it’ll become automated, where it comes off the hematoxylin and eosin (H&E), it’s scanned right then and there, and those results are presented to pathologists so they can read them and they can agree with them, report on them, and then move on in the process.

Dr. Haque: It sounds really exciting. I was working in the hospital last month and we had to call Hematology on a Friday at 4:00 PM, which is the worst time to call any consultant. We had to get a slide confirmed just to see if someone had lymphoma or not. The Attending had to come personally, get the actual blood draw, run over to a different hospital, and then get it processed so that we could get a read in 24 hours. I think all the work you’re doing and all the work that’s been going on in AI will definitely speed things up. I think faster is better, for sure. Thanks for sharing those thoughts.

Mr. Honda: I think we’re doing a lot of work that we can do better with the technology that’s there. This is technology that often intersects and exists in other sectors. What you see from Google and Amazon and Microsoft and IBM is that they can do these things. It just hasn’t been applied in this area, necessarily, but remarkable things are being done in this space. I think that it’s complementary to everything that doctors do, and hopefully, it also takes some of the strain off the doctors of having to do things.

These things run 24/7 in the background, so once the image is captured, you can go home and rest and be with your family. Those things can be teed up for you the next day. You can have telepathology at work, and you can have pathology distributed globally so that you can get answers. It’s not at 4:00 PM on a Friday that the person’s reading it, and now the bottlenecks kind of fade away and our performance becomes better.

Hopefully, through this whole process, the machine learning and the information that’s in these tissues allows us to actually understand more. Once we get into three-dimensional data, which is literally three dimensions of tissue, with the fourth dimension being the actual data itself, I think pathology will change and morph and become more informed. The pathologists will have more to say about these diseases and hopefully have better therapies, better utilities, and better outcomes. That’s where we’re headed, one foot in front of the other right now.

Dr. Haque: Thanks so much, Leif, for your time. Congratulations and good luck with all the work you and TriMetis are doing.

Mr. Honda: Thank you, and good luck with your endeavors. I look forward to talking to you again. Take care.

About Dr. Haque and Mr. Honda

Waqas Haque, MD, MPH, is a third-year Internal Medicine Resident at New York University (NYU) in a Clinical Investigator Track. He recently matched to the University of Chicago for fellowship in Hematology/Oncology, which he will be beginning later in 2024. As a Clinical Investigator Track Resident, Dr. Haque has balanced his patient care work with a variety of research projects. In fellowship, he hopes to further his work in innovative clinical trial design and value-based care delivery to cancer patients, and to develop as an early-stage clinical investigator.

Leif Honda is the Chief Innovation Officer at TriMetis. He has more than 25 years of experience in the life sciences industry. Before joining TriMetis, he co-founded FoundationBio and several other startup companies in the fields of research biospecimen procurement and diagnostics. Mr. Honda has invented and holds the patents for several peptide research tools and renal insufficiency diagnostics. He is passionate about helping clients further their research and giving back to the community by donating his time and expertise.

For More Information

TriMetis (2023). Available at:

TriMetis (2023). TriMetis Computer-Assisted Pathology (TCAP) AI. Available at:

Transcript edited for clarity. Any views expressed above are the speaker’s own and do not necessarily reflect those of Oncology Data Advisor. 
