Webinar: An Expert Panel Discussion on Rapid Healthcare Innovation
BEGIN VIDEO TRANSCRIPT:
Chris Barnett:
Okay, let’s get started. There are still a few more folks joining the webinar, but I’ll go ahead and get introductions done, and we’ll get to the discussion while people are coming on board. So we’re very pleased today to present an expert panel discussion. Our host today will be Riddhiman Das, who is the founder and CEO of TripleBlind. And we’ve got a great panel today: we’ve got Mayo Clinic’s Dr. Paul Friedman, who is professor of medicine and chair of the department of cardiovascular medicine.
Chris Barnett:
We also have Dr. Suraj Kapa, cardiologist and director of artificial intelligence for knowledge management and delivery. And we’re also joined by Sukant Mittal who’s executive director at Novartis. So to begin our program, Das is going to make a short presentation, so Das I’ll turn it over to you to kick things off.
Riddhiman Das:
Thank you, Chris. So welcome, everybody. It’s great to have you here on our first webinar, actually. What TripleBlind does is unlock private healthcare data. I think everybody understands that data, especially healthcare data, has privacy concerns associated with it. There are patient privacy concerns, and there are PHI and PII concerns in healthcare, not just in the United States but globally. What TripleBlind enables, and is on a mission to deliver, is the liquidity of this data in ways that foster innovation, from new AI routines and algorithms that diagnose diseases, to physician assistant tools, to drug discovery and other similar applications.
Riddhiman Das:
So as we know, lots and lots of data has been generated by healthcare institutions worldwide, and as a result of EMR systems and continuing digitization, most of that data today exists in computer-accessible form. However, the data that exists at those institutions is not broadly shared. Researchers and third-party institutions that want to leverage datasets not resident at their own institutions struggle with de-identification, privacy concerns, regulatory concerns, and compliance concerns in the process of getting access to third-party data.
Riddhiman Das:
What TripleBlind brings to this process is patented, advanced mathematical, cryptographic, and other privacy primitives that enable this data to be shared widely and broadly, without introducing any risks, compliance issues, or any potential for abuse in the process, while at the same time enforcing and respecting patient privacy. The objective of data sharing, of course, is to foster better algorithms that are not just accurate, but also generalize better to data that is not represented at the first-party institution.
Riddhiman Das:
Historically, datasets at first-party institutions have often skewed toward the populations they serve in and around their geographical area. However, when you are able to leverage data sources from different cultures, different socioeconomic backgrounds, and different physiological backgrounds, algorithms that use those in their training processes are able to generalize better to data out in the field.
Riddhiman Das:
So what TripleBlind therefore enables is for institutions to leverage third-party data, or allow third parties to use their data, without introducing any potential for abuse or re-identification, essentially guaranteeing that the data is going to be used for the stated purpose by enforcing permissions and audit logs around this entire process.
Riddhiman Das:
So some of the pain points we often think about when we brought the solution to market are: how are you leveraging third-party data? Or how are you sharing your data and algorithms with third parties? The algorithms part is important, because oftentimes we invest a lot of dollars and time into the development of these algorithms. And just giving an algorithm to somebody does not offer enough protection against reverse engineering, either of the intellectual property of the algorithm or of the training data used in its development. That continues to be a challenge, and obviously TripleBlind has a solution called algorithm encryption that mitigates those challenges. But this is a pain point we often encounter in our prospects and customers.
Riddhiman Das:
In the data sharing process, are you facing regulatory, legal, financial, or time constraints? De-identification or manual anonymization of data is often very expensive, time-consuming, and never offers a firm guarantee around whether or not the data was actually de-identified. And therefore, is there a dream dataset that you would like to go after? Are there multiple dream datasets that could help you build better algorithms that, when presented with new data, will generalize better to the data they see out in the field?
Riddhiman Das:
TripleBlind improves upon a lot of the state of the art and other ways to achieve data sharing today. TripleBlind allows the actual data to be used in a privacy-preserving way, meaning that without introducing any risk of re-identification of the patient, you can use their data to train or run predictions from a trained algorithm. The important distinction there is that we’ve seen synthetic data, differential privacy, and similar techniques that keep the data private by introducing stochastic noise into the dataset. When those are used in the medical field, it often means the actual patient data was not able to be used in the training process, so the stochastic noise added as part of the privacy procedure ends up as a stochastic, or rather a very deterministic, pattern of inaccuracies in the end model.
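[Editor’s note] The noise trade-off described here can be sketched in a few lines of Python. This is a toy illustration of a differentially private release of a statistic, not TripleBlind’s method; the `dp_mean` helper, the heart-rate values, and the clamping bounds are all invented for the example:

```python
import math
import random

def laplace_noise(scale: float, rng: random.Random) -> float:
    # Inverse-CDF sampling from a zero-mean Laplace distribution.
    u = rng.random() - 0.5
    return -scale * math.copysign(1.0, u) * math.log(1.0 - 2.0 * abs(u))

def dp_mean(values, epsilon, lower, upper, rng):
    # Clamp each value so one record can change the mean by at most
    # (upper - lower) / n, then add Laplace noise scaled to that sensitivity.
    clamped = [min(max(v, lower), upper) for v in values]
    true_mean = sum(clamped) / len(clamped)
    sensitivity = (upper - lower) / len(clamped)
    return true_mean + laplace_noise(sensitivity / epsilon, rng)

heart_rates = [62, 75, 58, 80, 71, 66, 90, 54]   # true mean: 69.5
noisy = dp_mean(heart_rates, epsilon=1.0, lower=40, upper=120,
                rng=random.Random(0))
print(noisy)  # privacy-preserving, but deliberately inexact
```

Smaller `epsilon` means stronger privacy but larger noise, which is exactly the systematic accuracy loss the panelists describe when noised data feeds a training pipeline.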
Riddhiman Das:
Interoperability is a big concern: how do you interface with others, whether they may be on a different cloud, a different set of hardware, or potentially a different on-prem data center? And how do you allow institutions to work together even when they are under different regulatory regimes? How does a European hospital collaborate with a US hospital around sharing patient data in a private way, while enforcing GDPR on the European side and HIPAA on the US side?
Riddhiman Das:
Which brings me to my last point, which is compliance: how do you ensure that these techniques do in fact allow you to be compliant, not just with HIPAA and GDPR, but also with the myriad regulations around privacy in the world? Data residency and data protection laws in East Asia, Southeast Asia, and a lot of emerging markets, as well as those we have seen in the Western world: GDPR, and obviously Canada and Mexico.
Riddhiman Das:
And here to have a discussion around this, I’m pleased to be joined by this elite panel here. Thank you all for joining us. So I think Suraj was going to kick us off on how Mayo is addressing some of these today.
Dr. Suraj Kapa:
Thanks, Das. It’s my pleasure to talk to everybody about how we at Mayo are, number one, using AI, and number two, how privacy plays a role in our ability to use the insights we gain from AI to actually benefit the world at large. So we all know that healthcare is a global challenge. The truth of the matter is that as it stands today, and as we deliver healthcare today, providing care at a global level is essentially not a viable construct in traditional brick-and-mortar enterprises.
Dr. Suraj Kapa:
Even at this current day, 80% of Americans admit they delay or forgo preventive care. And in fact, up to a quarter of Americans don’t even have a personal doctor. Which essentially makes it unviable to provide healthcare at a large scale when people are unable to see the experts or actually have medical diagnostic tests ordered at the point of care. So the question is, how do we deliver expert-level insights to individuals in order to address potentially brewing diseases early in their process?
Dr. Suraj Kapa:
And what we’ve realized at Mayo, using our data here, is that AI can take data that’s obtained at a very low level, such as electrocardiograms and other noninvasive tests, and actually get deep insights from it, insights which would often require much more expensive tests and the engagement of more expert clinicians. One clear example of this is the electrocardiogram. The electrocardiogram has been around since the 1800s, and in fact even centuries prior to that there was a recognition that every single time the heart beats there’s an electrical signal that regulates that beat. That electrical signal can be sensed by patches around the chest that essentially see the electrical field, and that field and its characteristics are what define the electrocardiogram in modern parlance.
Dr. Suraj Kapa:
And interestingly, the electrocardiogram is almost a biometric for individuals, even if we strip it of the name, the date of birth, and where they were located. The truth of the matter is that Das’s electrocardiogram, Dr. Friedman’s, and Sukant’s are all different from each other, even though we might all be called normal sinus rhythm, normal ECG. And because of that, it’s almost like genetic data or even chest x-ray data: we can trace it back to the individual level. Is it really enough to just strip the identifiers as we would normally think of them?
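[Editor’s note] The biometric point here, that a waveform can act like a fingerprint even after the named identifiers are gone, can be illustrated with a toy linkage attack. The three-number “ECG feature” vectors and patient labels below are entirely made up for the sketch:

```python
def match_record(anonymized, reference_db):
    """Link a stripped record back to an identity by nearest-neighbour
    distance over its remaining measurements."""
    def dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    return min(reference_db, key=lambda name: dist(anonymized, reference_db[name]))

# Hypothetical per-person ECG feature vectors held by some other party.
reference_db = {
    "patient_A": (0.82, 0.31, 1.10),
    "patient_B": (0.40, 0.95, 0.75),
    "patient_C": (1.20, 0.15, 0.60),
}

# Name and date of birth removed, waveform features intact.
stripped_record = (0.81, 0.33, 1.08)
print(match_record(stripped_record, reference_db))  # → patient_A
```

Because the waveform itself is stable per person, stripping the HIPAA-style identifiers alone does not stop this kind of matching, which is exactly the re-identification worry being raised.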
Dr. Suraj Kapa:
And now you might ask yourself, well, fine, that’s okay, the data will sit on the internal servers. But then how do we deliver AI tools to get deeper insights from relatively straightforward data types that can, in the modern day, be obtained from your watch or your smartphone, so that people can get insights no matter where they are? How do we actually distribute the model to those ECGs without having them sit in a common server where they can be traced back to the individual?
Dr. Suraj Kapa:
At Mayo we demonstrated that using a simple 12-lead ECG, which is extendable to even single-lead formats, we can actually identify the risk of a number of different diseases with a high degree of accuracy: the presence of a low heart function even before patients are symptomatic; hypertrophic cardiomyopathy, a common cause of sudden death in young athletes; the potential risk of atrial fibrillation, which is a global health problem that affects tens of millions within the US and hundreds of millions of individuals globally.
Dr. Suraj Kapa:
Pulmonary hypertension, which early in its stages might be eminently treatable. Amyloid, which is a growing health problem as a population ages. And even physiologic age, which might be a marker of wellness and mortality risk several years after an ECG is obtained. And when we look at all of these insights, what we want to do is we want to scale these cost-effectively while maintaining patient privacy, given the concern about biometric identification and other factors we’ve just touched upon.
Dr. Suraj Kapa:
Ultimately, to achieve this, we need to enable algorithm deployments, and we need to allow algorithms to work upon external data. But obviously there are concerns layered in here. For example, number one, there is current data showing that models can actually be deconstructed, with the data that was used to train them reconstructed from the models themselves.
Dr. Suraj Kapa:
And the question is, how much do you want others to have access to those models? You don’t want to [inaudible 00:12:16], in a crude manner of speaking, to all external sites and potentially have bad actors reconstructing many of those hyperparameters. We need to ensure data privacy, but data privacy has a very complex meaning when we talk about the world of AI, because is it really enough to strip the name and the date of birth if you can reconstruct the individual from their data? In fact, there are multiple papers now showing that even from your genome we’re able to reconstruct somebody’s facial characteristics, and the same thing with a CT or an MRI of their head.
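[Editor’s note] The model-leakage concern can be shown in its most extreme form with a 1-nearest-neighbour classifier, whose “parameters” are literally the training records. Neural networks memorize far more subtly (via model inversion and membership inference), but this invented toy makes the principle concrete:

```python
class OneNN:
    """A 1-nearest-neighbour 'model': its parameters ARE the training data,
    so shipping the model ships every training record verbatim."""

    def fit(self, X, y):
        self.X, self.y = list(X), list(y)
        return self

    def predict(self, x):
        # Return the label of the closest stored training point.
        best = min(range(len(self.X)),
                   key=lambda i: sum((a - b) ** 2
                                     for a, b in zip(self.X[i], x)))
        return self.y[best]

# Hypothetical two-patient training set.
model = OneNN().fit([(0.8, 1.1), (0.4, 0.7)], ["disease", "healthy"])

# A recipient of the model can read the raw patient features back out:
print(model.X)  # → [(0.8, 1.1), (0.4, 0.7)]
print(model.predict((0.79, 1.0)))  # → disease
```

Handing out such a model is handing out the dataset, which is why techniques like the algorithm encryption mentioned earlier aim to let a model run without exposing its internals.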
Dr. Suraj Kapa:
And that means that even though I stripped all of the classic identifiers we talk about in HIPAA and GDPR, it doesn’t necessarily mean the data can’t be brought back to the individual. We also need source-agnostic functionality. In other words, in order to achieve the best benefit of these algorithms, such as the ECG algorithms we talked about, yes, we want them to work on standard 12-lead ECGs in the hospital, but to scale to consumer health we need to apply them to ECGs that might be obtained through multiple different form factors. And the truth is that at the individual level we want all that data to be private, but at the facility level and the company level, they’ll likely want it to be private as well.
Dr. Suraj Kapa:
We might have our own ECG function that operates on it, but we want companies A, B, C, and D, who have different ECG form factors out there, to be using these algorithms as well, to achieve the best opportunities for population health. We need to be able to validate across populations, and by validating across populations, that doesn’t just mean within the US, it means globally. And that means being able to account for every single country’s laws as far as how they implement algorithms and how the data is sent to us to achieve validation.
Dr. Suraj Kapa:
And ultimately, healthcare integration: how are the responses of the algorithms on that data brought back into consumer-level or health-system-level integration so people can act upon those results? And within all of these, we need systems in place, we need approaches in place, that will allow the relationships to exist, so that all the work put into developing these algorithms at institutional and corporate levels can truly be delivered on a global landscape to improve overall health. So I’ll stop there and hand it back to you, Das.
Riddhiman Das:
Thank you, Suraj, that was enlightening. Thanks for covering a lot of ground in that presentation. So we’ll go next to a panel discussion among the four of us. And I’ll kick off with a question targeted at a particular individual, but that doesn’t mean that it was meant for just that individual, feel free to make this more of a dialogue than a monologue. So the first question is for you, Paul. So as you think about where the field of healthcare innovation is in 2021, what are the biggest challenges that you encounter?
Dr. Paul Friedman:
Yeah. It really comes down to what both you and Suraj highlighted, and it really distills down to a simple idea. When you think about healthcare today, it comes down to information: I have an x-ray, there’s a spot on it. Then there’s turning that information into knowledge, that is, what does my old x-ray look like? Is it a new spot or an old spot? Is it cancer? Or is it that I had a GI infection when I was 10 years old and it doesn’t matter? And then there’s using that to manage health. So that knowledge management is one key component, and the second is geographic independence.
Dr. Paul Friedman:
The idea that every time there’s a health issue you have to go to a doctor or a clinic, which gets an icepick view of your health for one hour on one day and then another a year later, is just not the way we live our lives, and increasingly we’re seeing that evolve. And when you think about the barriers to that, the barriers to knowledge are what you and Suraj really eloquently described: healthcare information is siloed. It’s like trying to bake a cake, but you’ve got the flour here, the chocolate is at my neighbor’s house, and the frosting is at the store. It’s just not going to get made until you figure out a way to get it all together, and then you can have something really useful, or delicious, you know what I mean?
Dr. Paul Friedman:
And the reasons for those barriers are, again, the ones that you’ve very nicely highlighted. First, there are regulatory concerns around privacy. Second, increasingly people are recognizing that there’s an ethical right of individuals to own their own data, because maybe I don’t want the world to know that I have a spot on my lungs; whether it’s benign or not, it’s my personal decision. And so there needs to be a way to get the information where it’s needed, so it can be useful, while protecting my privacy. Institutions may feel there’s financial value in the information they have, and anyone who doesn’t believe in the financial value of data, just look at Google, who had their largest profit yet this last quarter, all from making data available, right?
Dr. Paul Friedman:
And in healthcare, obviously, the impact on the health of human beings and the wellness of our societies is critical. So I think at the end of the day, overcoming those barriers in a way that is respectful of people’s privacy rights, respectful of the innovation that others have labored on, and respectful of the potential financial value that some may have labored to aggregate, is critical, so that people say, yeah, sure, here’s my file. Because I’d love to have… if you were looking at my x-ray, or maybe more accurately, Suraj, if you were looking at my x-ray, I would want him to have my old one, even if it were done at a hospital in Kansas, on a different system somewhere else where they may have their own security issues. I’d say, get it there, I need him to see both, because the difference at the end of the day, to me, is whether I’m reassured or whether I get a needle in my chest. So it’s huge.
Dr. Paul Friedman:
So it’s a real-world problem that needs a solution. The ability to find means of sharing information that are respectful of these constraints is critical.
Riddhiman Das:
Thank you, Paul. That confidence is inspiring for us, given the mission TripleBlind is up to. Sukant, I’d like to go to you with the next question. Given what Paul just outlined as some of the biggest challenges, what are the opportunities for innovators to fill these gaps?
Sukant Mittal, Ph. D:
First, Das and the TripleBlind team, thank you for inviting me; I definitely appreciate the invitation to join my esteemed panelists from Mayo. Super broad question, there are literally a million things popping up in my head as opportunities. As we start thinking about this decade, there is definitely going to be an inflection point in what we will be thinking of as innovation. A lot of technologies are actually coming to the fore at the same time, so let’s first talk about what’s happening, to level set, and then we can go into the opportunities.
Sukant Mittal, Ph. D:
We talk about personalization. Paul mentioned owning data at an individual level, so there’s personalization of data and the data-generating modalities that inform it, like sequencing. And then there’s our ability to act on those sequencing data to personalize therapies using technologies like CRISPR, right? So the personalization, where you’re now getting down from a population model to a level of one, is an important factor. And as we start thinking about the insight-generating value chain, that is becoming more mature, right? The computing power is there, storage capacity is there, algorithms are there, there’s just a wealth of the stuff.
Sukant Mittal, Ph. D:
And then the third thing is there are enough outcomes in the data, Das, that we can now start connecting the outcomes to what was once a hypothesis, right? So the classic correlation between genotype and phenotype is coming to the fore. As a result of that, now you can start talking about the opportunities at the highest level. I think, Suraj, you and Dr. Friedman already covered a lot of them. You start thinking from more of a preventative medicine mindset: earlier diagnosis, agile development of drugs. I was having a conversation with the CEO of a large pharma, and he had a very interesting perspective on it.
Sukant Mittal, Ph. D:
And he’s like, well, think about the number of drugs that have been discovered in the life of humanity altogether; you’re basically talking about no more than 500 to a thousand effective drugs. And we were talking about the lifetime of humanity, that’s all we have, right? So that doesn’t really speak volumes about where we are and our efficiency in generating drugs. And then the last part of [inaudible 00:21:40] is improved access to medicine and healthcare. And Suraj had an interesting fact, that only 22.5% of the time can you get access to a personalized physician.
Sukant Mittal, Ph. D:
So what all of that really tells you is that this information somehow continues to reside in silos, and democratization of this information and data liquidity are important to converge all of these different things, so you’re effective.
Sukant Mittal, Ph. D:
I’ll just take a couple more seconds to narrow it down a little more to pharma. We talk about clinical trials; even today there are a lot of multi-center trials, but for some reason when you try to expand these geographically, and I think Dr. Friedman already made this point, you’re not really running trials at a global level, right? So what is it that we can do to bring all of this information together, cut through the red tape of different countries and different laws, to aggregate this information and run these trials so that they’re a little more effective in the outcome they’re producing? We see this conversation today in the form of what’s happening with vaccines: it’s relevant for a certain population and not for the others.
Sukant Mittal, Ph. D:
And on the commercial side, it’s the longitudinal view: as you see what you’re doing in the trials, with a specific set of people, how does that really map to the real-world evidence? And it’s just amazing, coming from a larger organization, I see that we talk about sharing information externally, but even within the organization there’s plenty of opportunity to make sure that data democratization is happening.
Riddhiman Das:
You touched on some great points there, Sukant, really appreciate it. I think a lot about the N of 1, right? Historically, the way we’ve approached de-identification via anonymization is aggregating data up into buckets, to where you can no longer tell what was in the bucket. But how do you do that when N is 1, right? Which is where I think TripleBlind’s de-identification by encryption becomes really fundamental.
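[Editor’s note] The aggregation approach Das describes can be sketched with a toy k-anonymity check: coarsen the quasi-identifiers into buckets, then measure the smallest bucket. The age/ZIP cohort below is invented; the failure mode is that an N-of-1 record leaves a bucket of size one no matter how you coarsen:

```python
from collections import Counter

def generalize(record):
    # Coarsen quasi-identifiers: exact age -> decade, 5-digit ZIP -> prefix.
    age, zipcode = record
    return (f"{(age // 10) * 10}s", zipcode[:3] + "**")

def min_group_size(records):
    # k-anonymity: size of the smallest group sharing identical generalized
    # quasi-identifiers. k == 1 means someone is still unique.
    counts = Counter(generalize(r) for r in records)
    return min(counts.values())

cohort = [(34, "64108"), (37, "64110"), (36, "64105"), (52, "66101")]
print(min_group_size(cohort))  # the 52-year-old sits alone in a bucket: k = 1
```

Three of the records blend into a bucket of three, but the outlier stays unique, which is the “how do you do that when N is 1” problem in miniature.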
Riddhiman Das:
And I also liked what you mentioned about the feedback loop in a clinical trial being so long, right? How do you determine effectiveness, or the potential for effectiveness, or whether you’re trending the right way in the middle of a clinical trial, without of course breaking double-blindedness and other privacy constraints, right? Thank you for sharing that. So Suraj, the next question is targeted at you: what do you think is the future of digital health and practical implementations of it?
Dr. Suraj Kapa:
Yeah, so it’s an interesting question, and it relates directly to what Sukant and Paul were talking about. The long-term promise of digital health is essentially to take away a little bit from the vertical monopolies in healthcare and make it a little bit more horizontal, where we can start enabling the acquisition of data from broader cohorts that don’t necessarily reflect narrow cohorts of patients, but the broader concept of a population.
Dr. Suraj Kapa:
And on top of that actually talking about getting not just the people who happen to go to the right physician or clinician, who happened to get the right test, but enabling the test to maybe even be obtained before they show up to the clinician’s office. To be able to offer the insight by virtue of them having gotten further data from their blood pressure cuff or from their electrocardiogram or from their digital thermometer they’re using, so that they can have a hint that there’s something going on and they should seek medical attention. Because I think people want, and both on the physician side, clinician side, as well as the patient side, they would rather be seeing, or be seen by a clinician for a reason, rather than just because.
Dr. Suraj Kapa:
And if consumer-enabled technologies allow them, through AI-derived insights from the experience of places like Mayo, to say, hey, I’m doing pretty well, I don’t really need to go in and get seen. Or, hey, there might be something brewing; normally I would never see somebody, but this is telling me I might have AFib. Maybe I should go see Paul Friedman to talk about anticoagulation, and maybe I can be treated and actually have an ablation procedure done to make myself feel better. This actually engenders a better approach to how patients interact with the healthcare system. So I think that’s the promise of how digital health can evolve to be somewhat of a consumer-enabled health, rather than just the traditional enterprise of physician- or clinician-enabled health that we’ve been operating on for such a long time, which has also resulted in the extraordinarily high costs and other access issues that we’ve dealt with.
Riddhiman Das:
Yeah, wow, thanks for sharing. Consumer-driven healthcare, I learned a new term today, which tells me how little I know about healthcare, right? But Sukant, based on what Suraj just talked about, the future of digital health and practical implementations, what in your opinion are the right tools that innovators need to bring this vision into reality?
Sukant Mittal, Ph. D:
So the interesting thing here, and these answers are definitely not coordinated, is that I see common themes. All of us are coming from different perspectives, but we are all in essence talking about the same thing. Suraj was talking about healthcare consumerism, right? I mean, this has been on the precipice for a long time, and it all comes back to how you really take these different datasets. Okay, medical datasets, existing chest x-rays and MRIs, and then you have the demographics, then you have the behavior. You’re basically combining all of those different things, right?
Sukant Mittal, Ph. D:
So who is it that essentially helps you merge this stuff compliantly and democratize information? And I love the analogy that Dr. Friedman had, information to knowledge, and I would even take it a step further: it’s information to knowledge to insights, right? Think about what social media gets from sharing information with people. I mean, I guess that’s a little controversial, and they don’t have as many regulations in the consumer and social media space, but it’s undeniable, and there’s a lot more awareness about that now, right?
Sukant Mittal, Ph. D:
There’s a way to compliantly share crowdsourced information on which one can act and operate in real time. I think in healthcare the possibilities are limitless; it will accelerate discovery, and it will definitely accelerate understanding of the correlation of that discovery to the associated outcomes. And I keep coming back to this example of COVID, because it’s top of mind for everyone. You saw an industry-wide, deliberate effort to merge data from all over the place, across institutions, across large pharma, across the globe. And that is what has manifested in accelerating the development of vaccines and understanding the risk factors. So there’s a clear example right there; I mean, it can be done, if only a tool existed to make it easier for everyone and open up the value proposition [crosstalk 00:29:15] possibly these are coming to us.
Riddhiman Das:
I love that. And for the audience, we definitely did not coordinate on the answers, but I love that there is a trend emerging. And I love that you talked about sharing information, because there are times when I’m okay learning information, or sharing information, as long as it doesn’t involve sharing the underlying data, right? If, for example, you had an algorithm that said, Das, this algorithm looks at your Facebook profile of all things, determines how active you are, and from that some other variable in cardiac health.
Riddhiman Das:
I would be okay with you giving me that prediction, which is information, if you had a way to do that without me actually giving you all of my Facebook data, right? So I want to maintain the privacy of my Facebook information, but I would still be interested in a derivation of information from it. And so, for the tools that enable innovators to effectively derive the right information, even when the data is not all in the same place, I can see the value of that.
Riddhiman Das:
So we’ve talked about AI and third-party data and the future of digital health. Suraj, the next question is intended for you: as you and the broader Mayo Clinic develop AI algorithms for diagnosing diseases early and for other use cases, how are you ensuring that those algorithms are ethical and free of bias?
Dr. Suraj Kapa:
Yeah, so that’s a critical question that we’re dealing with, because the problem is you always have to determine whether the model you create is, number one, applicable across populations, because the last thing you want is to run into a situation where the model you created all of a sudden goes into commercialization and commoditization, gets used by other groups, and has large amounts of error in those other groups. I mean, Fitbit is a phenomenal example of that; it’s been around for a long time, but the truth is it’s only recently that the lack of accuracy in people with a higher melanin concentration, which results in poor assessment of heart rate, was recognized.
Dr. Suraj Kapa:
And because of that, I think we’re still working on how we make sure there is no inherent bias in the models we create. And there are several ways we’ve looked into that: validating across independent populations, and looking at different ethnicities, races, et cetera, within our own datasets to determine that there’s consistency in the outputs across different populations. But of course this becomes much more difficult when we talk about a global population; there may be environmental characteristics and other characteristics that affect certain data in certain ways, and we don’t necessarily know that yet.
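[Editor’s note] One simple form of the consistency check Dr. Kapa describes is per-subgroup accuracy, where a gap between groups flags possible bias. The predictions, labels, and group tags below are invented for the sketch:

```python
def subgroup_accuracy(predictions, labels, groups):
    # Accuracy per demographic subgroup; a large gap flags possible bias.
    tallies = {}
    for p, y, g in zip(predictions, labels, groups):
        correct, total = tallies.get(g, (0, 0))
        tallies[g] = (correct + (p == y), total + 1)
    return {g: correct / total for g, (correct, total) in tallies.items()}

preds  = [1, 0, 1, 1, 0, 1, 0, 0]
labels = [1, 0, 1, 0, 1, 1, 1, 0]
groups = ["A", "A", "A", "A", "B", "B", "B", "B"]
print(subgroup_accuracy(preds, labels, groups))  # → {'A': 0.75, 'B': 0.5}
```

Real validation would use held-out cohorts and many more metrics (sensitivity, specificity, calibration per group), but the principle is the same: never report a single aggregate number when the populations differ.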
Dr. Suraj Kapa:
And that’s where we have to get past bias. But then there is the ethical side of this as well, which obviously has a Venn diagram overlap with bias. I mean, part of ethics is making sure there’s equality in the determination, but there’s also equity in the distribution of the benefits that people obtain from these AI-based solutions. So, a couple of very low-level examples: okay, everybody has this tool that gives them insights, even at a consumer level or a primary care level. But on one end, they can immediately go get advanced care, and on the other, they just become extremely nervous about something going on that they can’t validate; you raise concern for them, but they can’t actually seek the higher level of care that they need. And that might actually result in a worse outcome.
Dr. Suraj Kapa:
Another example of ethics related to this is the opportunity to actually distribute that care. We see that, even in the United States, there is variable distribution of 4G and 5G networks and WiFi infrastructure that can allow delivery of many algorithms that require higher-level computational analysis. And to Sukant’s point, yes, computation, even edge computing, has allowed for the rapid distribution of AI, but we’re looking at it from our narrow viewpoint inside already developed societal infrastructure that allows that to permeate.
Dr. Suraj Kapa:
But if you go into the rural United States, or other regions of the world, that infrastructure doesn’t necessarily exist. Edge computing might change that significantly; however, it is still evolving, and thus we might actually expand existing disparities in healthcare: continued improvements in life expectancy in areas where there’s already improvement, and further worsening in communities that we are seeing worsen right now.
Riddhiman Das:
Thank you, Suraj. And I love that you talked about the diversity of data being a key factor in removing bias from algorithms. So Paul, the next question is directed at you. How are you ensuring that when you set out to develop these algorithms, you’re capturing the broadest population possible from a training data standpoint? And you’re muted, by the way.
Dr. Paul Friedman:
I’m new to Zoom, what can I tell you? Really good question, and one that’s critical to everything we do. Because as you understand at a very deep level, neural networks learn really well on things they’ve seen, and they can detect subtle, non-linear, invisible-to-human, multiple interrelated patterns that essentially translate the hidden signals our bodies are giving off all the time, so that an algorithm can read an ECG after having seen a hundred thousand of them and tell you how old you are, whether you have a weak heart pump, whether you’re a man or a woman, and a whole slew of other things.
Dr. Paul Friedman:
But if it has never seen a certain condition, if it isn’t trained on it, then, like any of us, it won’t recognize the pattern and it will malfunction. And one of the concerns is, due to variations between human beings, because of sex or race or ethnicity or geographical upbringing and diet and other things, there may be some physiologic differences that can impact these signals. So to have robust and reliable tools, they have to be trained on a diverse cohort of people; it’s a must.
Dr. Paul Friedman:
So then your question was, how do we do that? Well, there are a couple of ways. First, we analyze the people with the information we have from our own hospitals. Now, it varies between Mayo Clinic in Rochester versus Mayo Clinic in Florida and Arizona, and so we do have some diversity, but I’ll put forward that we’re not comfortable that that’s sufficient diversity when we really want to release something to the world. So we have partnered with other hospitals, and we ran into many of the challenges we talked about earlier and spent a lot of time working through them, either contractually or through encryption. Actually, we really haven’t done it yet through encryption; the goal would be to facilitate it. It’s been more through contracts, through stripping off data, through other means that are very labor-intensive and complex, so that we could get data from hospitals in Norway, in Russia, in South America, in Asia, and especially, as Dr. Kapa referred to, the COVID project.
Dr. Paul Friedman:
We had roughly 30 hospitals from four continents providing data. COVID is a global scourge, a shared enemy to fight, and the idea here was that the virus, in addition to infecting the lungs, impacts the heart and causes changes in the ECG. And in fact we found that it did, but to get to that point, to collect 45,000 ECGs from around the world, was hugely labor-intensive. But it is a requirement: you have to have the diversity of input to be sure that you can trust the output.
Dr. Paul Friedman:
John Halamka, who is our head of platform, has talked about this concept of an AI ingredient list, so that just like when you go to the supermarket and you buy food, you know what’s in it, in some ways you could look at an AI algorithm and say, here’s what it’s been trained on, here are the conditions, so you know you can trust it here. And by the way, it’s never seen someone with X condition or whatnot, so you would know to be super careful. And I think that maybe will also help those of us who are using these algorithms be aware of whether there are constraints.
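The ingredient-list idea can be pictured as a small, machine-readable record attached to an algorithm. The sketch below is purely illustrative; the field names, class name, and example values are assumptions, not any actual Mayo Clinic or platform schema.

```python
from dataclasses import dataclass, field

@dataclass
class IngredientList:
    """Hypothetical 'AI ingredient list': what a model was trained on,
    so a user knows where to trust it and where to be careful."""
    model_name: str
    trained_on: list                       # data sources seen in training
    populations_seen: list                 # cohorts represented
    populations_not_seen: list = field(default_factory=list)

    def caution_needed(self, population):
        """True if this population was not represented in training."""
        return population not in self.populations_seen

# Illustrative card for a made-up model:
card = IngredientList(
    model_name="ecg-screen (illustrative)",
    trained_on=["ECGs from a single health system"],
    populations_seen=["adults, North America"],
    populations_not_seen=["pediatric patients"],
)
# card.caution_needed("pediatric patients") -> True
```

This is essentially the "model card" pattern: the point is not the code but that the disclosure travels with the algorithm, like a nutrition label.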
Riddhiman Das:
That was great. Thank you, Paul. Sukant, how are you accessing these different types of data in your world, in pharma?
Sukant Mittal, Ph. D:
It’s an interesting question, and the answer is not necessarily going to blow you guys away. We are operating in an industry that is, I’d say, fairly conservative, right? I mean, we care a lot about legalities and consents, and patient information is first and foremost for a pharmaceutical company. So we sort of use a container method, wherein we will access information, or we will receive weekly updates or monthly updates or whatever cadence we choose to purchase the information from a specific vendor.
Sukant Mittal, Ph. D:
And then we sort of send that, re-route that information, or we have a direct connection between the source and a third-party vendor to sort of take all that information, put it in a nice format, de-identify it and send it back. And if I remember correctly, one of the things you mentioned on the first slide hits home here: there are financial implications to that, right? And there are time implications to that, and there is a lack of transparency on what is really happening. You’re basically sending data directly, or you’re a pass-through, and what you’re getting back is what you’re sort of forced to believe, right?
Sukant Mittal, Ph. D:
So it’s not a super innovative solution, for the most part, and definitely not at scale. Maybe there is a portion of the larger organization that is Novartis that is sort of playing around with the idea of making it more point-of-source in real time. But by and large we use a container methodology, where they’re sending it to a container, scrubbing it and taking it back.
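The container workflow described here amounts to: a batch of records flows into an intermediary, direct identifiers are stripped, and the cleaned batch flows back. A minimal sketch, with the caveat that the field names are assumptions and real de-identification (e.g. the HIPAA Safe Harbor method) covers many more identifier types than this:

```python
# Assumed set of direct-identifier fields; real lists are much longer.
DIRECT_IDENTIFIERS = {"name", "address", "phone", "ssn", "mrn"}

def deidentify(record):
    """Return a copy of a record dict with direct identifiers removed."""
    return {k: v for k, v in record.items() if k not in DIRECT_IDENTIFIERS}

def container_pass(records):
    """Simulate one batch update passing through the container:
    everything goes in, only scrubbed records come back."""
    return [deidentify(r) for r in records]

batch = [{"name": "Jane Doe", "mrn": "12345", "age": 61, "dx": "AF"}]
clean = container_pass(batch)
# clean -> [{"age": 61, "dx": "AF"}]
```

The transparency problem Dr. Mittal raises is visible even in this toy: the data owner never observes what happens inside `container_pass`, only its output.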
Riddhiman Das:
Got it, got it. Sounds like technology tools could really help there, for third-party data ingestion. We have another question for Paul, and then we’ll move on to audience questions here. So Paul, can you share with us an interesting story about success in the field of digital health that you’ve been a part of, and what the biggest challenge you encountered in that effort might have been?
Dr. Paul Friedman:
Maybe I’ll start with a patient story, a very specific one, to make it real for all of us. And I can share this with you because the patient has given his consent, and in fact it was just published in a very nice article by Casey Ross in STAT. It’s the story of Peter Merklin, a 73-year-old man who 15 years ago had a stroke. No one could figure out why, so he wasn’t getting what turns out to be perhaps the optimal treatment for it.
Dr. Paul Friedman:
He then had a number of standard medical evaluations, and they all looked okay, so: keep taking aspirin. But there wasn’t really any specific treatment. We created an algorithm using AI from 500,000 ECGs to determine if, even though your ECG is in normal rhythm, at other times you’re having an irregular heartbeat called atrial fibrillation, in which the top chambers are quivering and there’s a risk of stroke, and a very specific treatment, a blood thinner.
Dr. Paul Friedman:
And through a research study, we ran the AI on ECGs that were already done, and Peter’s were in the list. He’d had ECGs at Mayo Clinic, probably 20 or 30 of them, and none of them showed the arrhythmia, but the AI looked at these normal ECGs and said, hey, there’s something going on here at other times. Then another algorithm looked and said, if atrial fibrillation is present, would he benefit from the blood-thinning medicines that we know, from multiple prospective randomized trials and professional society guidelines, lower the risk of stroke? And the answer was yes.
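The two-stage workflow in the story can be sketched as a pipeline: one model scores normal-rhythm ECGs for hidden atrial fibrillation, and a second step checks whether treatment would be indicated. Everything below is an illustrative assumption: the threshold, the scores, and the simplified criterion stand in for the actual Mayo Clinic algorithms, which are not public in this form.

```python
AF_THRESHOLD = 0.8  # assumed cutoff for flagging a patient

def flag_hidden_af(ecg_scores):
    """Flag a patient if any of their normal-rhythm ECGs scores above
    the (assumed) threshold for concurrent atrial fibrillation."""
    return any(score >= AF_THRESHOLD for score in ecg_scores)

def anticoagulation_candidate(af_flagged, prior_stroke):
    """Grossly simplified stand-in for the second algorithm: suspected
    AF plus a prior stroke suggests discussing a blood thinner."""
    return af_flagged and prior_stroke

scores = [0.2, 0.35, 0.91]  # hypothetical model outputs for one patient
candidate = anticoagulation_candidate(flag_hidden_af(scores),
                                      prior_stroke=True)
# candidate -> True: invite the patient to confirmatory monitoring
```

Note that in the real story the AI only prompted confirmatory monitoring (the wearable patch); the treatment decision followed traditional diagnosis, which this sketch compresses.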
Dr. Paul Friedman:
So he automatically gets an email note, a portal message saying, we’re doing a research study, you’re a candidate, would you like to enroll? Now keep in mind, this goes to people who may or may not be here anymore; they could be anywhere, for this particular study in the US, but really in the world, right? So he gets this message and says, yes, I’m interested. So he gets in the mail something he wears on his chest for a month, and within days it shows atrial fibrillation. He gets a video consultation, here are the goals and risks, and now, because we know he has atrial fibrillation by traditional methods, and we knew to look because of the AI, the doctor says, here are your options: you can take this medicine that lowers the risk of stroke by 80%, or we can continue to watch. So he’s on the medicine.
Dr. Paul Friedman:
And hopefully this is the kind of thing: why wait for the stroke to happen? Why wait for the heart pump to get weak? Why wait for the hospitalization? Now, as Dr. Kapa mentioned, we have to be mindful about this, because there’s always the risk of false alarms. How do we know? Like any tool, we have to test it and vet it and validate it, and to do that, and to make it available in other hospital systems and to make it available globally, it comes back to this issue: how do we take information and make it knowledge and insights so that we can improve health? And quite often that involves sharing ingredients, sharing knowledge bases, bringing it together to make the cake, as I started off with at the beginning.
Dr. Paul Friedman:
Because in isolation, any one piece of information is of limited utility, but when you put it all together so you can see a pattern, you can then have those insights, have that knowledge, make the informed decisions and leverage the benefit of what medical science knows. That is, if you have someone who has had a stroke and they have atrial fibrillation, you can prevent other strokes with blood thinners.
Riddhiman Das:
Thanks for sharing that Paul.
Chris Barnett:
That’s great. Das, we’ve got a couple of questions from the audience, how about if I can pose this for you, and then you can direct whether you want to respond or hand it off to some of the panelists, okay?
Riddhiman Das:
Uh-huh (affirmative).
Chris Barnett:
Okay. So the first one from the audience is: thank you for the enlightening discussion, and then it says, what role do you see international organizations like the World Health Organization or World Bank playing in governance of these algorithms, and do you think at some point they should be overseen by somebody other than just the private sector? There’s an assumption in that question, but I’ll let you deal with it as it comes.
Riddhiman Das:
So I think I’ll go to Paul on that one, Paul, you’re pretty involved in the development of AI algorithms, what’s your take on that?
Dr. Paul Friedman:
Yeah. Like anything, first of all, sometimes they seem to almost be magical, right? The way they work. But it’s a test, and it needs to be vetted, validated, and done so independently. So we’re taking our tests through the Food and Drug Administration, because that’s how medical tests are reviewed. And similarly, I think that the engagement of independent organizations, whose sole charge is the good of the health of the country or the health of society, is an important part of this.
Dr. Paul Friedman:
So for example, we talk a lot about bringing pieces of data together and sharing the data. I think having assurances from independent bodies that indicate that these are data that work when they’re put together, and that you’re not getting a fuzzy picture but a clear picture of the information you need, will lead people to adopt it, use it and trust it.
Dr. Paul Friedman:
So for example, if we’re getting data through a mechanism that encrypts it as part of a research study at the Mayo Clinic, and the IRB, because it’s been appropriately vetted, either our own IRB or another organization, says this is fine, then all of a sudden consent may be dramatically changed. And all of a sudden the cost of getting information, which is an important barrier, goes way down, because you don’t have to have a study coordinator consenting each patient if they’re saying there’s no way that this patient’s privacy might be violated.
Dr. Paul Friedman:
So having these institutions and organizations review it matters, and that’s why Mayo Clinic has a platform. I think we feel at Mayo that trusted sources have to help in information exchange, and Mayo Clinic, as you know, is doing it with internal and external partners as part of an overall program to make health knowledge available. So this whole concept is really, I think, fundamental to the future of improvement of health: that we quite literally learn from each other. And we have to, otherwise we keep repeating mistakes.
Dr. Paul Friedman:
I mean, there are millions of simple examples, but again, to directly answer that question, I do think that the specific role of the governing bodies will vary, right? It’ll depend on what their jurisdiction is, what’s being done with the data, and whether they simply provide sort of a guidance document versus whether they actually have regulatory oversight. Those are very different things, and maybe more technical and legal in nature.
Riddhiman Das:
Got it. Sukant is there anything you’re seeing from the pharma perspective? Is the governance different for diagnostics versus pharmaceutical algorithms, drug discovery and others?
Sukant Mittal, Ph. D:
Sorry, so the question, is there a difference in the diagnostic space and the pharmaceutical space-
Riddhiman Das:
As it relates to the role of regulatory organizations governing these algorithms.
Sukant Mittal, Ph. D:
Not as much, right? I mean, I think the pharmaceutical companies are still operating on the tried and trusted methods of the past century, the last good five decades at least, and they’re starting to experiment with this. It’s still a work in progress what it means for pharmaceutical companies; most of those efforts are still, I think, focused on the research and discovery side. So not as much governance, because it hasn’t really been applied to real patient data, right? It’s still in research and discovery, so not as much governance yet.
Riddhiman Das:
Got it. Thank you.
Chris Barnett:
All right. Das, I think we have time for one more question from the audience here. So this question is about the international or global aspects of everything the panel has been talking about. There were a couple of mentions of different countries, but the question really is: what are the global implications, or what are the specific implications and considerations, in different parts of the world that might be underserved or low-income, or have special considerations, outside of the US?
Riddhiman Das:
Sure, I’ll take a first stab at that one and then go to one of the panelists. So we’ve seen the emergence of really strong personal and health data privacy regulation as a global phenomenon, right? In the mid-nineties the US came up with HIPAA, and it’s a strong piece of regulation, but we’ve seen arguably stronger, more all-encompassing regulations emerge out of Europe in the form of GDPR, and even in China with its personal data protection law. Most countries in Southeast Asia have what’s called data residency; in fact, we just saw yesterday that India is potentially banning American Express, because American Express was not storing Indian consumer data in India.
Riddhiman Das:
So as a global trend, we see privacy being paramount in how governments are choosing to regulate the internet economy, right? And we’ve seen abuses of that, not just in healthcare, but in financial services and other industries. And so, from a TripleBlind standpoint, the way we’ve chosen to look at the world is that almost everywhere in the world you’re going to see some kind of privacy regulation. And so, as long as the tool or framework you’re using to participate in the data economy allows for encoding of those rules on top of that tool, you are able to cover much wider ground than a tool that just attempts to solve for a specific narrow regulation. Suraj, is there anything you would add to that, in terms of how you look at the world as opposed to just the US?
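The idea of encoding jurisdictional rules on top of a data-sharing tool can be pictured as a policy check run before any transfer or computation. The rule sets below are drastically simplified illustrations, not actual legal requirements under GDPR, HIPAA, or any residency law:

```python
# Assumed, simplified rule sets keyed by jurisdiction.
RULES = {
    "EU":    {"residency_required": False, "deidentify": True},
    "India": {"residency_required": True,  "deidentify": True},
}

def transfer_allowed(jurisdiction, data_leaves_region, is_deidentified):
    """Check a proposed data use against the (assumed) rules of the
    data's home jurisdiction. Unknown jurisdictions are denied."""
    rules = RULES.get(jurisdiction)
    if rules is None:
        return False  # deny by default when no rule set is encoded
    if rules["residency_required"] and data_leaves_region:
        return False  # data-residency style rule
    if rules["deidentify"] and not is_deidentified:
        return False  # identifiable data may not be used
    return True

# Illustrative checks:
# transfer_allowed("India", data_leaves_region=True, is_deidentified=True)
#   -> False, because the residency rule blocks it
```

The design point Das is making is that the rule table is data, not code: a tool structured this way can absorb a new jurisdiction by adding an entry, rather than being rebuilt for each regulation.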
Dr. Suraj Kapa:
Now, I appreciate the question. I mean, your answer is spot on, and it actually relates back even to the previous question about what role governmental bodies play. I think as it stands right now, frankly, governmental bodies don’t really work together, and they often enact their own individual rule sets and considerations, which does make it hard to deploy models nationally and internationally in such a way that they equivalently address all population needs. And it is an issue, and it’s an issue that needs to be addressed.
Dr. Suraj Kapa:
That being said, we also have to address bias and other factors in how the data, as it’s aggregated at the consumer level, is brought to the healthcare landscape. So for example, to Paul’s point about that patient with atrial fibrillation: if they were told by their device, by their ECG, that they might have atrial fibrillation, but they don’t have a cardiologist to see, it just raises concern without actionable intervention.
Dr. Suraj Kapa:
And that’s how I think we need to work closely with local bodies, local political bodies, local socioeconomic bodies, local healthcare bodies, to understand how deployment of these algorithms will integrate with the local healthcare infrastructure.
Dr. Paul Friedman:
I’ll just add that, if we think about the pandemic that we’re all still in the throes of in varying degrees, wherever we are, it’s become clear that arbitrarily siloing healthcare data makes zero sense. One of the reasons we’ve had so much tragedy has been so much variability in response, and inconsistency, and one infected person going from one place to another and making a whole other place infected.
Dr. Paul Friedman:
And I think we’re all human beings on one planet, and we have to work together to improve our shared wellness and health. We can drill into that on multiple levels and multiple disease states, but that’s really what we’re talking about. It’s fundamental; it’s collaboration to improve the human condition through encryption, through platforms, through various methodologies that let us talk to each other. It’s putting down the barriers; it’s the evolution from living in feudal castles surrounded by moats, where people are speaking different vulgar languages, to a shared language where I’m on the phone talking to someone in Japan or Australia, because we have the ability to do that.
Dr. Paul Friedman:
And that same kind of [nectology 00:53:12], with a shared sense of purpose, is what really will improve health globally, I mean, just to take the broad picture of this. And even though you can very quickly get technical and in the weeds, I think it’s important to think about the fundamental and primary importance of this for human health. Which is why Mayo Clinic is so interested in this, why, as I mentioned earlier, we have a platform, and why we’re working with partners and talking to people who are trying to solve this problem from multiple angles, whether it be industry, pharma, technology, or healthcare providers. We all have to come together to make this happen.
Riddhiman Das:
Thanks for sharing, Paul. That was good.
Chris Barnett:
Great, that was a fantastic set of questions and discussion. I think we went just a little bit over, but it was worth it, so we’re going to wrap right there. I really appreciate the panelists today, appreciate your time, appreciate the great insights. And attendees, thank you very much. We’ll be sending out a follow-up email so you can review this if you want, and also there’ll be some contact information if there’s something specific you’d like to follow up on. So thanks everybody, and have a great day. Appreciate your time.
Riddhiman Das:
Thanks for dialing in. Thanks everybody.
Chris Barnett:
[crosstalk 00:54:21] thanks all.
Dr. Suraj Kapa:
Thanks everybody.