Intangiblia™
#1 Podcast on Goodpods - Intellectual Property Indie Podcasts
#3 Podcast on Goodpods - Intellectual Property Podcast
Plain talk about Intellectual Property. Podcast of Intangible Law™
Mireille Gomes - Can Algorithms Heal? Reimagining Health Equity with AI and Data Justice
What if our smartest health tools still miss the people who need them most? We sit down with AI and digital health scientist Mireille Gomes to examine how innovation can serve dignity, not just efficiency—and what it takes to build technology that works from Geneva to rural clinics without electricity.
The journey of Mireille Gomes spans continents and roles, from vaccine strategy at Gavi to AI diagnostics at Merck. Together, we unpack the real barriers to deployment—uneven infrastructure, overworked staff, and data voids that erase entire communities from the record. We look at consent‑first design, why open data must be truly anonymous, and how representation in civil registration and vital statistics underpins every “fair” algorithm. You’ll hear pragmatic ideas for triage tools that flag urgency in seconds, health education in local languages, and micro‑local models that adapt to context while sharing standards globally.
We also push on the hard questions: Who decides which data matters? Can algorithms be biased toward justice if the world is not? Where is the line between breakthrough and overreach when crises demand speed? Mireille argues for building abuse cases into development, testing for misuse before launch, and preserving community storytelling—especially Indigenous knowledge—alongside dashboards. The goal is health equity by design, so no one’s care depends on their birthplace or bandwidth.
If you care about AI in healthcare, data justice, and solutions that actually work on the ground, this conversation offers a clear roadmap and candid guardrails. If it resonates, subscribe, leave a review, and share it with someone shaping the future of digital health.
Check out "Protection for the Inventive Mind" – available now on Amazon in print and Kindle formats.
The views and opinions expressed (by the host and guest(s)) in this podcast are strictly their own and do not necessarily reflect the official policy or position of the entities with which they may be affiliated. This podcast should in no way be construed as promoting or criticizing any particular government policy, institutional position, private interest or commercial entity. Any content provided is for informational and educational purposes only.
The principle would be: if you can identify both the positive and the harmful behaviors a particular technology can cause, you do everything possible to prevent the harmful behavior from happening.
Speaker 1:Can algorithms heal? Reimagining health equity with AI and data justice. Today we are joined by someone who proves that science and strategy don't have to live in separate worlds. Mireille Gomes is an AI and digital health scientist with a PhD from Oxford (in my case the other place; I'm from Cambridge), with global experience in health systems reform and a deep commitment to reducing health inequities. From leading vaccine strategy at Gavi to building AI diagnostics at Merck, she has worked across continents, institutions, and sectors to make data-driven innovation serve the public. Whether negotiating with global alliances or coding genomics models, Mireille brings both precision and purpose to everything she touches. In this episode, we ask: can we design AI systems that not only predict disease but also protect dignity? What does justice look like in a world where health is increasingly mediated by data? So welcome to our podcast. And please give us a short introduction of how you landed here in Geneva.
Speaker 4:Thank you for having me, and for that very gracious introduction. I was working in different parts of Africa and Asia, and I wanted to understand how the global policy was made that was influencing all the health decisions in the countries I was working with. And since a lot of the funding came from Geneva, I thought it was time for me to come here and understand how these decisions were made. So I came to Geneva for a job at Gavi, the Vaccine Alliance. I've been here for a while, both at Gavi and at Merck, and it's interesting to see the world from this lens.
Speaker 1:Yeah, Geneva is a beautiful ecosystem. Yes. And it's full of people from everywhere, from every background. I fell in love with the place before I fell in love with my husband, who is Genevan. So yes, I completely see the beauty of it, and also of working in such a beautiful field, health, which is very taxing, very important, very impactful. So I'm very happy to have you here on the podcast, and let's dive in, please. You work in digital health from Canada to Ethiopia, from policymaking to algorithm design. What led you to combine scientific research with public strategy?
Speaker 4:So, a bit of background about myself. I was born in India. My family then moved to Oman and then Canada; I'm Canadian by citizenship, all for a better quality of life. Along the way, I've moved and lived in different countries. For me, doing this life journey and seeing the health inequities that exist simply because of your birth lottery, and how where you're born determines your access to health and dignity of life, made me want to combine my scientific training with public strategy to enable health equity.
Speaker 1:Yeah, it makes sense. When you have the opportunity of coming from a developing country and moving to a developed country, you see so much; your mind changes so much. And you see things that for you may have been normal. I come from the Dominican Republic, also a developing country, where some systematic failures are accepted as the norm. Then you go abroad and you realize, well, this is not the norm here, and it shouldn't be the norm there. So you start questioning the system a lot, and what you can do to improve it.
Speaker 4:And I completely agree. There's also a lot of history of colonization and other forces that led to the current situation in developing countries. So I personally do not feel it is a self-created problem, but rather a global problem for global citizens, since everyone has been impacted by people across other borders throughout their history, and it's not their own fault or responsibility that things are the way they are. So, as global citizens, we are responsible for each other.
Speaker 1:Yeah, and that's why we built these amazing organizations: to find ways together to tackle all these issues that affect us all. Yes, agreed. So, you have led large-scale partnerships at Merck and Gavi. What's the biggest challenge when using AI or digital health tools across very different health systems?
Speaker 4:For me, there are two specific challenges: one is the training of health workers, and two, infrastructure. To give some examples: with infrastructure, you could build an AI tool for a developed-country hospital that allows a highly trained surgeon to make a very precise incision during surgery, and the AI could be great at guiding the surgeon to be minimally invasive. In contrast, there are lots of hospitals that don't even have electricity. So how do you build an AI tool with non-functioning electricity, or no smartphones, or anything else? For me, that's one of the huge discrepancies preventing AI from being accessible in all settings. And the second one is the training of staff. So, going back to my surgeon: you could have an Oxford- or Cambridge-trained surgeon, or you could have a community health worker who has just literacy skills, who's been taught basic nursing training, and then you're dumping a complicated digital tool on top of their already heavy task load. These are some of the considerations that have to be kept in mind when developing digital tools: one, will it actually function in the setting if there's no electricity? And two, you're asking somebody with basic reading and maybe basic math skills to use your very complicated tool.
Speaker 1:So it's the environment side, and also the person behind the technology whom you are asking to learn it, use it, and tackle it. Exactly. So it's about keeping in mind that the realities are different. Yes. Every country has its own context, its own strengths, and also weaknesses.
Speaker 4:And even within each country, like I could be at a very expensive private hospital in India, or I could be in an area that has a mobile truck coming to provide clinical services.
Speaker 1:Yeah, it can differ so much even within a country. Yeah, makes perfect sense. But who decides what data matters or how it is used?
Speaker 4:That's a very good question. I think the current data gap is that data is not representative and doesn't capture all the groups of individuals that should be represented. And I think that leads to biases in the algorithm, over-predicting or under-predicting for a certain person or group. So, firstly, making sure that every human on this planet is counted, which is just a basic human right and dignity, and then having that reflected in the data being used by AI and machine learning, would be the holistic approach. There are a lot of places where people don't have birth certificates or death certificates.
Speaker 1:So they don't exist.
Speaker 4:They don't exist. On paper. On paper. And it's very sad to have walked this earth and not been recorded as existing.
Speaker 1:There's no footprint of you at all. Yeah. Well, that's quite powerful, to be invisible in the system, because then the system doesn't know that you're there, so it doesn't take your realities or your necessities into account.
Speaker 4:And then how would an AI take it into account?
Speaker 1:The AI cannot go to the field and gather information.
Speaker 4:Yes, and if the information doesn't exist, then you're underrepresented for the AI too.
Speaker 1:Oh wow, that's a that's a scary thought.
Speaker 4:Yes, yes, unfortunately it's a lot of people's reality.
Speaker 1:Yeah, makes sense. You work on disease prediction, vaccine strategy, and data access policy. Where do you draw the line between innovation and overreach?
Speaker 4:So I think we are an interesting species as humans.
Speaker 1:That's a beautiful way of putting it.
Speaker 4:I think we're driven by both curiosity and progress. So I think as a species there will always be both overreach and innovation; that's just who we are. But, as the amazing Spider-Man comics say, with great power comes great responsibility. That was originally said by Voltaire; I think Spider-Man took it from there. But yeah, there's always going to be both innovation and overreach, and you just have to be responsible.
Speaker 1:And find a balance. Yes. Which is not very easy. It's easy to say in the abstract, but in reality it's hard to find a balance, and also to take into account the different forces that drive innovation, or can hinder it as well.
Speaker 4:Agreed. And I think with any commodity or power, it's based on the individual and how they use it.
Speaker 1:Yeah. It's a tool. You can use it to build or to destroy. So, you work at the highest levels of scientific and strategic design, but how do we make sure the person on the ground, a patient, a nurse, a community, is not left behind?
Speaker 4:So it's tough, especially because we just talked about how not everybody is even counted or documented. But making sure that as many different groups of people as possible are represented in the design phase and the design thinking, before a solution is scaled, is important to capture the local context and the gaps that maybe, as someone from a developed country, I would just impose without even realizing what the problems are on the ground. So making sure that every group is represented in the design thinking would be a way to try to capture that. I'm sure we will not be perfect, but at least it's an attempt.
Speaker 1:Well, perfection doesn't exist; that's the first thing. But it makes sense that from the design, the first step is to understand who you are going to impact, and how you are going to impact their lives, the communities, and the health professionals. From that point you can design better, you can address the issues better, because you already have that first information. I believe a lot of programs and projects should take their time to go to the reality, understand it, and design from that reality.
Speaker 4:And I think humility is also important. Yes. Like not designing by thinking you have all the answers. Exactly. I think the innovation is often with the people on the ground and we fail to appreciate it.
Speaker 1:And they already know what is needed as well. They have a deep understanding of what the issue is, what exactly needs to be changed, and they may have an idea of how to change it, but they don't have the resources.
Speaker 4:Completely agree. And maybe I'm just helping them get the resources to implement their innovation.
Speaker 1:Pushing every bit. So what does health equity by design mean to you?
Speaker 4:I think designing health systems to ensure access to healthcare and dignity for everyone is health equity to me. There shouldn't be anybody on this planet who doesn't have full access to healthcare simply because they were born in the wrong house or in the wrong district. And it's sad to think that we as humans would allow another human to live an undignified life.
Speaker 1:Yeah, just because they happen to be on the wrong side of the tracks.
Speaker 4:Yes, exactly. And I think if you have the ability, the resources, then it's also your dignity and responsibility to help those who don't.
Speaker 1:And circumstances change. Yeah. You may be privileged in one moment and then the next you may not be, because life changes and takes you down a different path.
Speaker 4:Agreed.
Speaker 1:Yeah. So it's important to see that. Can algorithms be biased towards justice, or do we need more than just better code?
Speaker 4:So a lot of algorithms are based on the data they're provided, and we talked about having better representation in the data to ensure that everyone is counted. And yes, you could build some principles of justice and equity into algorithms, but as long as there are data gaps, it will be difficult to ensure equity.
Speaker 1:Yeah. Because it comes from the core of the information. Yes. If you could redesign one part of the global health infrastructure with AI, where would you start and what would be different?
Speaker 4:So this is a difficult question.
Speaker 1:I would just solve the world's problems.
Speaker 4:I think, especially given the current climate of lack of global health funding, or the stopping of global health funding by decisions made by very few people, AI could help in a context where you have to do more with less. Now that we're funding-constrained, and even before this funding crisis, there were already a lot of hospitals or primary care facilities that didn't have sufficient staff. So if there's a way that AI can help triage patients into basic primary, secondary, and tertiary care; if AI could bring, for example, health education in local languages or promotion of good practices (the mpox epidemic is still going on, it's just not in our news anymore); these are simple ways to fill the gaps that don't require a very complicated algorithm, just identifying the gaps and providing the information.
Speaker 1:It provides efficiency, and triage, I guess, is critical and takes time. Yes. Because when you receive a patient, the first thing you need to assess is how you can help the patient, and how urgent their need is. And that takes time to evaluate.
Speaker 4:And I think that's true across every health system. So something that could work for the NHS in the UK, which is extremely constrained at the moment, to help identify a patient's severity of disease and where they should go, could also help in a local village in Sierra Leone. Yeah, of course. Where the health worker may only have a high school education, but the AI could at least identify the symptoms and say this is high priority, or you need to be treated now.
Speaker 1:Yeah, exactly.
Speaker 4:Exactly.
unknown:Okay.
Speaker 1:So now we're going into a bit of a fun part of the session after all this heavy talk. I'm ready. Flash questions. Okay. You have to pick one, without necessarily saying why, but if you want to explain yourself a little bit, that's okay. We're open to conversation.
unknown:Got it.
Speaker 1:But the idea is to uh think quick.
Speaker 4:I'm ready.
Speaker 1:Okay. Diagnosing early or understanding deeply? Diagnosing early. Open-source health data or tightly protected patient sovereignty?
Speaker 4:Open-source health data with full anonymity, so that it cannot be traced to an individual.
Speaker 1:Perfect. Tech that scales or localized tools that adapt?
Speaker 4:Tech that scales with the ability to customize locally. A hybrid. Yes.
Speaker 1:I was like, can I get away with that answer? Data transparency at all costs or consent-first design, even if it slows down the system?
Speaker 4:Consent first.
Speaker 1:A world where everyone is tracked for health, or where they are left unmonitored? Opt-in. Okay. Global health driven by AI dashboards or shaped by community storytelling?
Speaker 4:Global health AI dashboards with community health storytelling.
Speaker 1:Another hybrid. Another hybrid. Can I elaborate? Yeah, yeah, of course.
Speaker 4:So, during the pandemic, a lot of Aboriginal communities in Canada and elsewhere lost many of their elders to COVID. And in those communities, storytelling plays a major role in their history. It's how not only traditional healing methods but also the history of ancestors is passed down. Losing that completely is a loss across many generations. So capturing these stories so that they can be passed down, digitally or otherwise, is important.
Speaker 1:Yeah. So it holds a wealth of information that we're losing, every time we lose a community leader or elder who has seen so much, knows so much, and has also learned from their own elders. This kind of information, especially in Indigenous peoples' communities, where they have such a deep understanding, far beyond what we can grasp, of life, of health, of everyday knowledge as well, environment, agriculture; the information they have is limitless. So it's important not to lose it and not to let it just slip away.
Speaker 4:And if the digital communications or others can help capture that, that would be a way to preserve.
Speaker 1:Yeah. Always with their consent, of course. Yes, of course. Then, next question: a universal health algorithm or millions of micro-local ones?
Speaker 4:Micro-local ones for the context.
Speaker 1:Faster breakthroughs or longer-term resilience?
Speaker 4:Risk versus benefit. Okay. So, for example, during the pandemic, when I was at Gavi, you had to get vaccines out fast. And then you weigh the risk of getting a vaccine out as soon as possible against the short-term and long-term side effects. But these are considerations that are taken into account during the decision-making, and regulatory bodies also help ensure that risks are minimized.
Speaker 1:And I think in that aspect, there's no easy answer, because there's always going to be a loss. Yes. And you need to weigh which one you are willing to go ahead and live with. Yeah. So it's an impossible situation. Yeah, very tough question. Okay, last flash question: could AI train on clinical trials or on lived experiences?
Speaker 4:Both. I mean, the clinical trials, yes, but there's no way you can get all of the information during a clinical trial. The real-world evidence once a drug is on the market, in a not-so-controlled setting, is also important. So losing any of that information would be tragic.
Speaker 1:Yeah, because then, once it's out, you have the whole world that can tell you how it works.
Speaker 4:And maybe you haven't considered a specific patient group in your clinical trial. For example, pregnant mothers are often excluded; not to say that the drug should be given to them afterward, but there may be different minority populations that you haven't included, whether racial or disease-specific minorities. So tracking the information on how it's affecting these different groups is very important, because in healthcare now, after having years and years of data, we are seeing that drugs might affect different minority groups differently. So you should try to capture them in your clinical trials to begin with, but then also after.
Speaker 1:Yeah, there are so many factors. There's the medicine combination as well: a person who has another underlying condition and is taking medicine for something else that interacts with this new medicine can have a reaction from that; also a genetic predisposition, even the health habits of the person. How likely is the person to eat well? The food combination matters; there are even kinds of food that can have a bad reaction with medicine.
Speaker 3:Yes.
Speaker 1:So there are so many factors to take into account that, yeah, nothing trumps real life.
Speaker 4:Agreed, and we recently saw that with the obesity drugs, which were originally designed not for obesity but for other diseases, and then the post-market information showed that they affect obesity as well, and now they've become blockbuster drugs. There are a few; I shouldn't show any biases.
Speaker 1:Yeah, I'm just asking for a friend.
unknown:Of course.
Speaker 1:Yes, and also the way a medicine interacts with you can be so different. Yes. I have a very low threshold for any medicine in general, because I hate taking medicine. So if I can avoid it, I will endure a headache or a stomachache; I prefer to make a tea, drink some water. I try not to take a lot of medicine, but when I do, I'm down. It really is like you're at your last. It's crazy, and I think it's because my body is not used to having them, so once it's in my system, it's like, okay, I'm going to grab every single drop of every molecule, and you're going to get it. And something funny happened, to lighten the mood. On the first date I had with my husband, we were supposed to meet for tea, and I fell asleep because I took a pill for a very, very bad cold I had, and I never take anything. So I decided to take it, and I fell completely asleep. I didn't know, but it had a little bit of antihistamine in it, and I was down. The poor guy waited for me for over an hour.
Speaker 4:Oh wow, but we met.
Speaker 1:He's a keeper, and he's Swiss, so yeah, being on time is very important. It's something funny that I say all the time: I could be a very interesting specimen. I would be in this minority group of people; you would be like, what is wrong with her body, that medicines take her down so completely?
Speaker 4:See, this is why we should track post-market: falling asleep for an hour.
Speaker 1:There you go, from just a common cold medicine. Okay, now the next game. You take the pile there, and I will read you a statement. You will tell me if it's true, meaning that you believe it's happening right now or is about to happen, okay, or futuristic, with the tiny robot, meaning it's something sci-fi that is going to happen way in the future, or may never happen.
Speaker 4:Okay.
Speaker 1:Okay, ready? Ready. Health systems will use AI to detect disease outbreaks before symptoms start to appear. Patients will have digital health twins that simulate their care outcomes. Groups of patients. Okay, not individuals; groups, yes. Okay. You'll be able to pause your digital health footprint.
Speaker 4:Just no. Neither true nor futuristic. Just no. No pausing.
Speaker 1:No pausing. Public health policies will be co-drafted by humans and AI. Governments will use AI to predict where healthcare workers should be sent before crises happen. Diagnoses will include an emotional impact score. Possible. Yeah, and it would be nice. You have the right to delete your medical prediction history.
Speaker 4:It it depends on the legislation where you are.
Speaker 1:And what purpose it's going to serve. Yes. Because they need information to treat you.
Speaker 4:And if you don't have it, then yes, but it's still your information, so you should have a right to it.
Speaker 1:To be forgotten.
Speaker 4:Yeah.
Speaker 1:Yeah. Tricky, tricky. Yes, I'll just leave it here. No question, no answer. No answer. AI will be able to detect bias in global health funding models.
unknown:True.
Speaker 1:True. Health insurance will be partially determined by your microbiome.
Speaker 4:Depends on legislation.
Speaker 1:So no answer. Yes. And lastly, every hospital AI will have a built-in system that checks if its decisions are fair and ethical before taking action.
Speaker 4:So I mean, even between you and I, we might have a different interpretation of what's fair and ethical.
Speaker 1:Yes.
Speaker 4:So it would be hard to code that into an algorithm.
Speaker 1:So no answer. No answer. Love that, love that. Yeah, neutrality. Yes.
Speaker 4:And Switzerland.
Speaker 1:Yeah, yeah, we are in Switzerland neutral. So, final question. If you could embed one invisible value into every health innovation being built right now, not a feature, not a gadget, but a principle. What would it be and why?
Speaker 4:The principle would be: if you can identify both the positives and the harmful behaviors a particular technology can cause, you do everything possible to prevent the harmful behavior from happening. So, just some examples. Historically, during World War II, the first early computers were used to take census information to quickly identify ethnic minority groups and round everyone up. The latest AI technology for images is being used for lots of interesting things, but it's also being used for child pornography. So if these harmful outcomes can be identified by the tech developers before the technology is released, then putting precautionary measures into the technology, I think, would be a principle that is the responsibility of the developer. On the other hand, it's not all doom and gloom. There are also lots of beautiful things technology is being used for, for example, to save the Amazon forest and identify deforestation, or to rehouse 60,000 families after the Nepal earthquake. So there are a lot of beautiful things tech is being used for, but for the harmful things, just like Einstein's relativity theory, which led to the atomic bomb as well as everything else, I think they should be identified, and taking every measure that can be taken to prevent the harmful behavior from happening is the responsibility of the developer.
Speaker 1:Perfect. So, to put the fences in the right places. Yes, where possible. Thank you so much. Thank you for your time, and for coming and talking with us. It's been quite a pleasure to learn about the work that you do, and about the potential that technology can have in health. And thank you for reminding us that digital health is not about faster systems or smarter tools; it's about people, it's about justice, it's about asking better questions before we scale solutions. Because health is not just a data point, it's a human right.
Speaker 4:Thank you for having me. It's been a pleasure.
Speaker:Thank you for listening to Intangiblia, the podcast of Intangible Law. Plain talk about intellectual property. Did you like what we talked about today? Please share it with your network. Do you want to learn more about intellectual property? Subscribe now on your favorite podcast player. Follow us on Instagram, Facebook, LinkedIn, and Twitter. Visit our website www.intangibilia.com. Copyright Leticia Caminero 2020. All rights reserved. This podcast is provided for information purposes only.