Intangiblia™

Vipin Saroha - Beyond the Dashboard: How Data and AI Are Rewiring Public Value

Leticia Caminero Season 6 Episode 4

Systems should make life easier, not more complicated. That idea runs through our conversation with technology strategist Vipin Saroha, whose journey from SAP in India to Geneva to advising global institutions shaped a simple practice: start with the problem, then use data and AI to serve people with clarity and care.

We dig into what most teams get wrong about data—confusing volume with insight and falling into confirmation bias. Instead of chasing clever dashboards, we map a workflow where hypotheses are tested, methods are transparent, and systems explain themselves in plain language. The result is trust. And trust is what unlocks adoption, the critical moment when data actually changes a decision. From HR policy Q&A to legal discovery, we show how AI can strip away repetitive labor so humans focus on context, tradeoffs, and fairness.

Designing for the public means building for real settings: clinics with noise, fields with poor connectivity, and city services that must be accessible, secure, and easy to use. We explore digital twins, predictive maintenance, and crowdsourced reporting—and why each only works when the loop closes and action is visible. Along the way, we share a framework for people-first AI strategy: educate users, co-design with business owners, choose use cases where automation is safe and useful, and require explainability where stakes are high. The through line is constant: human judgment at the end of the loop, with AI as the force multiplier.

If you care about ethical AI, public sector innovation, and data that leads to better outcomes—not just faster reports—you’ll find practical steps you can apply today. Subscribe, share with a colleague who wrangles dashboards for a living, and leave a review with one question you want AI to help your community answer next.


Check out "Protection for the Inventive Mind" – available now on Amazon in print and Kindle formats.


The views and opinions expressed by the host and guest(s) in this podcast are strictly their own and do not necessarily reflect the official policy or position of the entities with which they may be affiliated. This podcast should in no way be construed as promoting or criticizing any particular government policy, institutional position, private interest, or commercial entity. Any content provided is for informational and educational purposes only.

SPEAKER_05:

How I can very easily visualize it, understand it, and use it in the right context as well. So that's where the whole idea of data analysis comes in. And then AI is a layer on top of it, which makes it much easier for us to understand that data. I don't need to be an Excel guru to figure out how to do a calculation; I can use an AI tool to do that for me.

SPEAKER_02:

We're joined today by Vipin. He is a technology strategist who knows that behind every AI model or data dashboard there's a bigger mission: delivering real public value. Vipin's work spans from India to the US to Switzerland, from SAP to the UN, and from product innovation to digital transformation. He has helped governments, global organizations, and major consultancies rethink how they use data and AI, not just to become smarter, but to become more human-centered, more ethical, and more effective. In this episode, we explore how AI and data can do more than just optimize: they can align a system with purpose, drive smarter decisions, and unlock the kind of innovation that benefits everyone, not just the bottom line. Welcome. Thank you so much. Please introduce yourself and what led you to be here in Geneva, Switzerland.

SPEAKER_05:

No, thank you. I think you already made a good introduction; just a little more context on that. I'm a software engineer by training, but then I dabbled in the area of the public sector. That's when I decided to do a master's in public administration as well, still focusing on how data is used for decision making in public sector organizations. That's been the trajectory, and now I'm in a situation where I advise those agencies, and occasionally governments, on how they can effectively use data. And now with AI, how they can use AI, how they can even identify what use cases they have for it. Because it's very easy to say AI will solve a problem, but if we don't have the right data, none of it will work. And then it's very easy to blame the technology and not the inherent issues we have within the structure of the organization or the data that we have. So that's what I'm doing. What led me to Geneva: I started my career in India with SAP, a pure technology play, working with Fortune 500 companies on how to use SAP products. That's when I realized I wanted to do something in the public sector. So I went to the US and was there for three years, finishing my master's while working with the government there, in the state of Georgia. That then landed me at the UN in Geneva over ten years ago, in 2015. First I was with a humanitarian organization, helping them with information management: how they manage their data and information in times of crisis.
And then after that I spent around seven years at WIPO, the World Intellectual Property Organization, also helping them work with governments to identify where we could bring in operational efficiencies by building digital solutions and new products. I did that for seven years and then decided it was time to move back to the private sector, but not to leave behind all the experience I had gained. So I'm part of the international development team at EY, where we work with UN agencies and big international NGOs, advising them on digital solutions, AI, and their ERP systems. I haven't completely left behind what I've learned, but now I'm coming in from an external perspective to help them be more efficient and not lose ground with this whole AI wave.

SPEAKER_02:

Yeah, so not everything can be done with AI.

SPEAKER_05:

Not everything should be done with AI.

SPEAKER_02:

That's also the thing. It's interesting, the perspective that you have, because you've been inside so many different ecosystems and organizations, and the way they approach issues is quite different. So you have a very unique view, and you can also understand your counterparts: why they decide the things they decide, and how they make those decisions. It's very privileged insight that you have, now working on the other side. Yeah.

SPEAKER_05:

Yeah, for sure. I think that's the whole idea as well: it's very easy to say how an organization should function if you've never worked in that context.

SPEAKER_04:

Yeah.

SPEAKER_05:

The true value comes once you've been in there and seen the problems firsthand. That's what makes the big difference in even saying whether a certain solution will or will not work in the context of that organization.

SPEAKER_02:

Yeah, it's true. It's easier once you've walked a mile in their shoes; then you can say, okay, I know how to address this issue and I know how I can help you.

SPEAKER_01:

Exactly, exactly.

SPEAKER_02:

So, from this intersection: you work in AI, data, and public sector strategy. What originally drew you to this space?

SPEAKER_05:

As I said, I have a software engineering background, but I was never the kind of person who'll sit behind a computer and just write code all day long.

SPEAKER_01:

Sounds boring.

SPEAKER_05:

Yes, it is. But then again, there are people who love it. My brother does that day in, day out; he loves it. I didn't. I wanted to talk to people and understand their problems. But at the same time, I didn't want to lose everything I'd learned. I'm not a salesperson; I want to help organizations using what I've learned, whether through my education or through my experience. And that was a good intersection: me wanting to talk to people and understand their problems, and using technology and data to solve those problems, so that whatever we're trying to do is fact-based. It's very easy to say something is fact-based because I think it's a fact; it's another thing to actually show it is fact-based because there are facts behind it. That's where I wanted to be, and I'm still working on it, because there's something new I learn every day, whether it's AI or data analytics: how we can effectively use that data for decision making, to make the organization better and more effective. That's the idea behind what I want to do and what I aspire to keep doing.

SPEAKER_02:

Okay, so it's making sure the information is accurate, in order to then use the technology based on that information.

SPEAKER_05:

Exactly. And then, how do we even use it? Because everybody has data lying around; I have it on my laptop, you have it as well. But it's also about how I can very easily visualize it, understand it, and use it in the right context. That's where the whole idea of data analysis comes in, and then AI is a layer on top of it, which makes it much easier for us to understand that data. I don't need to be an Excel guru to figure out how to do a calculation; I can use an AI tool to do that for me.

SPEAKER_02:

Yeah. I'm very bad at Excel. Like a good lawyer, I'm very bad at math.

SPEAKER_01:

So that's okay.

SPEAKER_05:

You're good at other things. And that's why I am very bad when it comes to legal things; I try to avoid them as much as possible.

SPEAKER_02:

That's why everyone has their own strengths. Yeah. You've helped shape digital strategies for large institutions. In your experience, what's the biggest misconception about using AI and data for the public good?

SPEAKER_05:

It's a very interesting question. I think it goes even beyond the places I've worked. The biggest misconception is that just because I have data, I should have the answers. But if I don't know how to use the data, if I don't even know where to look for it, that's the biggest issue: merely having data will not give you the answer. We need to be able to ask the right questions. And what happens a lot of the time, and this goes off on a bit of a tangent from your question, is that we tend to come up with an answer first and then try to find data to justify that answer. That happens more often than not, unfortunately. So I would say the biggest misconception is: I have a decision, I have the data to back it, but what order did I go through? Did I look at my data first with a hypothesis, prove or disprove that hypothesis with the data, and then arrive at an answer? Or did I go the other way, the wrong way?

SPEAKER_02:

Yeah, and also the lens you're looking through: if you're only looking for data that confirms your hypothesis, you're never even going to acknowledge the data that doesn't.

SPEAKER_05:

Exactly, exactly. And that leads to problems, because then we're only looking into the same little bubble we sit in. And when it comes to the public good, the data is there; it's a question of how we use it. That's the point. More often than not, we tend to just look at, as you said, things that justify and verify what we already believe.

SPEAKER_02:

And then what do we need the data for, if we're going to make the same decision anyway?

SPEAKER_05:

Yeah, exactly. That's why I'm saying people then tend to work towards finding data to justify that answer. And you might find some outliers, but an outlier is not the norm. It's an outlier for a reason; it happens maybe 2% of the time.

SPEAKER_02:

Perfect. So you've implemented AI for public sector digital transformation and even IP modernization. What's a success story that reminds you AI can deliver long-term value, not just efficiency?

SPEAKER_05:

I think long-term value from AI is about how we use it, whether it's a solution or a feature, and there are a lot of use cases around that. For me, an AI solution that delivers long-term value is one that relieves people from doing mundane, repetitive tasks. I can't go into the details of projects I've worked on, for confidentiality reasons, but to give you an example: we've all worked with HR departments, and we all have questions about HR policies. I've been on paternity leave; I've looked at the HR policy and never been able to figure it out, because it's all written from a legal lens. With an AI solution, you're able to easily understand what that policy means for you. So I no longer need a group of people just sitting there answering questions that a solution can answer; those people are freed up to do actual innovative work. That's the long-term strategy. There is an efficiency gain, but that efficiency gain only gets you a certain distance. After that, it's the human who drives the conversation. The long-term value is in freeing up my people from repetitive tasks. If it's a repetitive task, a machine can do it and should do it, so my people can think.

SPEAKER_02:

Yeah. For example, something I've done in the past: going through thousands of pages because you need to find specific information. It takes a person weeks, if not months, to do it, but an AI can do it right away. There's no added value in wasting weeks of your time just looking for one very specific piece of information in a sea of documents.

SPEAKER_05:

Exactly. And you're a lawyer, you said, so you would know better than anyone: for any legal case, you have paralegals sitting there identifying that one case, going through books. With AI, it's a matter of a few seconds to find it. But at the end of the day, and that's where human intelligence comes in and where the long-term value is generated, now I have people who can use AI solutions to be more productive. They use the solution and then apply their intelligence: it'll give you 20 cases, but you still need to find the two that exactly match. The technology is moving so fast that it'll even give you a very closely written answer in the right form, but it still needs to be validated. You can't go to a judge and say, this case was written by an AI, so give me a ruling on it.

SPEAKER_02:

Yeah, exactly. The human factor is very important, because then, as the lawyer in this example, you decide whether the case is actually relevant, whether it serves what you need, and whether it highlights what you want to highlight. Because sometimes it's a great case, but it's great for the other party, not for the argument you want to build.

SPEAKER_05:

And the human has to be at the end of this loop, be it data analysis, be it AI. There's a certain level of trust that comes with the fact that we humans are the ones deciding, and then there's the level of efficiency gain you get out of it. It's all about how I can be more productive using it. So I don't need to fill out a form that can very easily be filled out automatically because the format is the same every time. It's there to enhance human productivity, and intelligence at the same time, because we humans have a certain capacity in terms of brain and focus.

SPEAKER_01:

Energy.

SPEAKER_05:

Energy, exactly. I can't keep spending 24 hours just to find one piece of case law, but if I have a good starting point, knowing these are the only 20 I need to look at, I can focus on those 20. And even if those don't make sense, I can ask for more and identify more. I have a companion who can help me go through that data more quickly.

SPEAKER_02:

Yeah, it's guiding you through the field of information. Exactly. Governments are increasingly adopting AI tools, but trust and transparency are still concerns. What's one step we should take to make AI strategies more inclusive and people-first?

SPEAKER_05:

It's a very interesting question. It has been there, I think, since the whole AI thing started: how we build trust.

SPEAKER_01:

Um do no harm.

SPEAKER_05:

Yeah, exactly. But then what is the definition of "do no harm"? And that's the key thing. It's not one step, it's a series of steps. It's educating people, telling them what AI even is; you have to take them through the whole journey, because AI just blew up on the scene quickly, and suddenly everybody was expected to know everything about it inside out within a year or two. With a transformational technology like this, there's a lot of work that still needs to be done to educate people, not just in schools and universities but at work too, so that they know what AI is, what it can do, and what it cannot do. That's the key thing. Then work with those people to identify, within their processes, the tasks that can be done by AI, where you can build a solution. If my task is talking to people day in, day out, talking to governments to figure out what their problems are, AI will not, at least not in the near future, be able to replicate that. But the back-end analysis is where AI could come in. So it's a bunch of steps: educate, then figure out where in our process an AI solution can be placed, and then develop that solution. And the development of those solutions should not be a technology decision; it needs to be a business decision. If you're a lawyer, you know best where in your analysis process you spend the most time, and that tells you where you can save the most time. The solution has to be for you, not because I, as a technology person, told you it will work. It's about involvement, and that's the tenet for any kind of transformation.
AI just brought it to the forefront. Even if you do a big ERP transformation or a big data analytics transformation in any organization, those are the steps you need to follow. You need to involve people at the right stage, you need to communicate effectively, and then own up to the risks, because there will be risks. But at the same time, there are risks with a person's analysis as well. Whatever analysis I give is based on my knowledge; my knowledge is limited, I can't give you an analysis beyond it, and it could be wrong too.

SPEAKER_02:

Yeah. And I like your approach that it's not a technology solution, it's a business solution. It has to serve the purpose of the business, of the organization, what you're looking to achieve. It's not technology for the sake of technology; it's technology because you need it for X or Y.

SPEAKER_05:

And we're seeing more and more of that: governments wanting to use it to make people's lives better, in the field of education, or in forecasting, be it the weather or the best time for cultivating your fields. The solutions are out there. It's all about educating people and making sure they know how to use them to the best of their abilities. It shouldn't be just another app on my phone that I use once in a while and then forget about.

SPEAKER_01:

Yeah, or one that drives me crazy every time I need to use it.

SPEAKER_05:

Yeah, exactly, exactly. Agreed, it has to be easy to use. And that's why AI has come up on the stage: it talks to you the way a human would talk, or at least it mimics that. We need to make sure people don't lose sight of that in the process: it may talk like a human, but it's not human, so I need to apply my own judgment on top of it. It can be human-like, but it's not a human. Obviously it can do a lot of things; it's all about finding the right solution. Do I need an AI solution for this piece of work or not? And if everybody is convinced of that from the very beginning, the adoption of any solution, and I'm not even talking about AI at this point, is much higher, whether it's by masses of people at a national or subnational level, or within an organization.

SPEAKER_02:

Yeah, it's more than just technology. So, your background combines strategy, product design, and data storytelling. When designing tech for public use, what elements should never be overlooked?

SPEAKER_05:

I think the biggest tenet, when it's for public use, or the public good for that matter, is that we should not lose focus on the main objective. As we were saying earlier, we're doing it to achieve a certain outcome, not for the sake of doing it, not just because we have the mandate and I have the skill set and the financial resources. If it's going to help somebody, that's when we do it.

SPEAKER_02:

Okay.

SPEAKER_05:

And if we don't do that, then as I said, it'll be one of those solutions where, now I have 20 dashboards to look at the data, but did I ever ask for that?

SPEAKER_01:

Was it really necessary? Yeah. So many circles.

SPEAKER_05:

Exactly. So it's all about: do I need it, what's the outcome of it, and does it serve the wider public good?

SPEAKER_02:

And the user experience is so vital. That's been a common trend for about a decade now: how are we navigating through this technology, how are we using it? Is it obvious enough that anyone can use it right away, or do you need a learning curve? It's important to keep in mind who you're making the technology for and the real-life interactions they're going to have with it. Are they going to have time to sit down and use it, or are they going to do it while walking in a field or in a busy hospital? You need to understand the different realities they will have.

SPEAKER_05:

Yeah, yeah. And also the people who will be impacted by it. If it's a solution that has a wide impact and is very critical, you mentioned health for that matter, there need to be stronger guardrails around it. The data security needs to be much stronger than for, say, an app that shows me what events are on in my locality. It's about the purpose of whatever solution I'm going to develop and who it's going to serve, because that's what will define how I need to structure everything around it.

SPEAKER_02:

Yeah, because it comes from the ground up.

SPEAKER_05:

Exactly.

SPEAKER_02:

How do we avoid building smart systems that feel cold or distant from real human needs?

SPEAKER_05:

That's a tough one. How do we avoid it?

SPEAKER_02:

Technically, it's warm and fuzzy. Yeah.

SPEAKER_05:

I think it's not even about feeling cold or distant. I can have a conversation with a GPT tool, ChatGPT or Gemini, and it'll feel human-like, it'll feel warm. But it goes back to the education part: if I know what's behind it, I'll have a certain level of trust and knowledge of how to use it. I wouldn't even say it's anything to do with cold or distant; it's just about how I use it.

SPEAKER_03:

How I use it.

SPEAKER_05:

I can use Google Search to find a hundred things; there, I know for sure there's nobody behind it. I can do the same thing with ChatGPT, ask the same question, and probably get even more detailed results. So it's all about how we use those solutions. There are solutions out there helping people, young adults, get some sort of psychological support, because there's somebody to listen to them without judging. So if we take that definition, systems are neither cold nor distant, warm nor fuzzy. Systems are systems; it's how I use them and how I feel about them.

SPEAKER_02:

So it's the human impression of the system.

SPEAKER_05:

Exactly, exactly. Yeah, my car is my car; it'll drive the way I drive it, and that's how I will feel about it.

SPEAKER_02:

Yeah, it's true. It's either just a means of transportation, or it's something that you love and think about.

SPEAKER_05:

And you're attached to it as well. So it's just us as people, we humans, and how we want to use it. But I don't think technology should or can be seen in this black and white of cold, distant, warm, fuzzy. It'll do a job if you ask the right questions.

SPEAKER_04:

Okay. Okay.

SPEAKER_05:

And sorry, I'm not sure if that was the answer you were looking for. I don't really have an answer for that question.

SPEAKER_02:

No, because you already gave the answer: it's how we use the technology. A tool can be used in a way that feels cold and distant, or in a way that enlightens your life or keeps you company. So, as you said in your answer, it's how you use it and how you make it a part of your life. Again, it's a decision, not necessarily of the designer of the technology, but of the user. So, speaking of design: if we could redesign public digital services from scratch today, what principle would you put at the core?

SPEAKER_05:

That it is there to serve the wider public good. If we're talking about public digital services, what I'm thinking is: if I have to get my passport renewed, or get changes made to my documents, it should be kept in mind how I'm going to use it, and that it serves that purpose without making my life complicated by filling out 20 different forms.

SPEAKER_02:

Of information you already have about me.

SPEAKER_05:

Exactly, exactly. So it's all about who's at the center and who's going to be served by that service, and we need to keep that in mind when we design anything.

SPEAKER_02:

So, people-centered.

SPEAKER_05:

And this is nothing new, the human-centric approach. It's a concept as old as technological innovation. And right now it's more important than ever that whatever approach we take is human-centric, be it at an organization or a government level. If we don't have that, we miss the point of developing something altogether.

SPEAKER_02:

Makes no sense. Yeah. Okay, so what's the most important human mistake you've seen an AI system mimic? And were you surprised?

SPEAKER_05:

I am sure you've come across people who, with one hundred percent confidence, say that what they're saying is correct.

SPEAKER_01:

Yes.

SPEAKER_05:

And when you ask for facts, and then you look at those facts, you realize it's all wrong. But it is mimicking human nature, human behavior, because it was trained that way: to give answers in a confident manner, and to always give an answer.

SPEAKER_03:

Yeah.

SPEAKER_05:

You have to tell it explicitly, and now this is being incorporated, that if it doesn't know something, it should say it doesn't know. But it was always trained to give you an answer, no matter where the answer was coming from. Any answer. Even now, and especially when it all first came out, you could ask for references and it would give you references to things that didn't exist. But it all goes back to the fact that humans trained it; it's mimicking human behavior. This is how we expected it to work, and we didn't have proper data in place for it to be trained on. So it's a garbage-in, garbage-out situation: if my data is not good, my results won't be good either.

SPEAKER_02:

So it's the overconfidence, the not knowing when you're right or wrong. You've designed dashboards, systems, and integrations. What have you seen that's more powerful than data?

SPEAKER_05:

The usage of data: how we use it for decision making. I can give you all the data and all the analysis behind it, but you can still refuse to use it.

SPEAKER_01:

Yeah.

SPEAKER_05:

It's the human who's going to accept it, and the human who's going to reject it.

SPEAKER_02:

It's a decision there.

SPEAKER_05:

It's the decision makers in that whole situation. Systems are there to serve that, but I have to be a willing recipient who wants to use it.

SPEAKER_02:

Because then all the work is for nothing. Yeah. So now we move to the flash section.

SPEAKER_04:

Okay.

SPEAKER_02:

Data dashboards that show what's happening, or ones that ask why it's happening?

SPEAKER_05:

Ones that ask why it's happening.

SPEAKER_02:

Deleting 10 million data points to protect one person, or keeping them for the greater good?

SPEAKER_05:

I don't know who that one person would be, but I would keep them for the time being, for further analysis.

SPEAKER_02:

A system that learns from users, or one that teaches them how it works?

SPEAKER_05:

A bit of both, actually. It has to be both.

SPEAKER_02:

Okay.

SPEAKER_05:

Otherwise it's not evolving.

SPEAKER_02:

Okay. It doesn't make sense.

SPEAKER_05:

No, you can't have a system that cannot learn but at the same time teaches you how it works, because then it'll get stuck in the same loop.

SPEAKER_02:

Yeah. Public infrastructure run by predictive AI, or by human intuition?

SPEAKER_05:

Can I say neither? No, again, it has to be a mix of both. We've seen examples of... sorry, I know it's a flash section, so I'm not gonna expand.

SPEAKER_01:

Don't worry about it. I will say: expand.

SPEAKER_05:

I think it's both. We've seen examples where predictive algorithms provided public services and failed miserably because they were biased, because the data was biased. But if you just rely on human intuition, there is bias there as well. So we need to make sure they both work together, to be as unbiased as possible.

SPEAKER_02:

Yeah. So it's a combination.

SPEAKER_05:

It's a combination.

SPEAKER_02:

A button that slows a system down, or a button that makes it forget?

SPEAKER_05:

If it's about making it forget my data, if I don't want to be part of it, then yeah, I would say the button that makes it forget. And I think we already have that with GDPR in place: I have the right to request all my data and have it deleted as well. So yeah, I would say the second one.
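The two GDPR rights he mentions, access and erasure, can be sketched in a few lines. This is a toy in-memory store for illustration only; a real deployment has to chase the data across many databases, backups, and third-party processors.

```python
# Minimal sketch of GDPR-style data-subject rights (access + erasure)
# over a toy in-memory store. Names and records are made up.
records = {
    "user_42": {"name": "Ada", "reports": ["pothole on 5th Ave"]},
    "user_7": {"name": "Lin", "reports": []},
}

def export_user_data(user_id):
    """Right of access: return a copy of everything held about the user."""
    return dict(records.get(user_id, {}))

def erase_user(user_id):
    """Right to erasure: remove the user's records; True if anything was deleted."""
    return records.pop(user_id, None) is not None
```

Returning a copy from `export_user_data` matters: the exported record should not become a live handle back into the store.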

SPEAKER_02:

Okay. A silent algorithm or one required to explain itself in plain language.

SPEAKER_05:

Yeah, it should explain itself. We have that in place. I've worked on projects where we developed solutions, and this is all about building trust as well. For a simple task, asking a question, you may not need to know how it came up with an answer. But when you're doing a big, complex analysis, I want the system to tell me what the thought process behind it was. And these capabilities exist. If you're working on an enterprise-level solution, you can very easily embed that, so you always see how it came up with the solution, what it searched for, what it reviewed, and get all that analysis on top of just the answer.
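The "what it searched for, what it reviewed" idea can be sketched as a wrapper that returns a trace alongside the answer. Everything below, the `answer_with_trace` function, the keyword matching, the toy policy corpus, is hypothetical, not any real enterprise API.

```python
# Hypothetical sketch: return not just an answer but a trace of what
# the system searched and what it actually reviewed, so the reasoning
# can be inspected and challenged.
CORPUS = {
    "leave policy": "Staff may carry over 10 days of annual leave.",
    "travel policy": "Economy class for trips under 6 hours.",
}

def answer_with_trace(question):
    searched = list(CORPUS)  # every document considered
    # crude relevance test: first word of the key appears in the question
    reviewed = [k for k in CORPUS if k.split()[0] in question.lower()]
    answer = CORPUS[reviewed[0]] if reviewed else "No matching policy found."
    return {"answer": answer, "searched": searched, "reviewed": reviewed}
```

The point is the shape of the return value: the answer travels with its provenance, so a reviewer can spot when the "thought process" went wrong.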

SPEAKER_02:

So it's not only the answers themselves, but also the process.

SPEAKER_05:

The process of reaching an answer. Because that can also lead you to another part, exactly, and then you can fine-tune that process. Maybe the thought process is wrong. It's the same with humans, right? If somebody tells you, "this is how I approached this problem," you can say, "okay, there are two mistakes in your approach."

SPEAKER_02:

You started wrong here. Yeah.

SPEAKER_05:

So, you know, you just need to shift course. That's the whole idea behind it: it needs to be able to explain how it got to that answer. And it exists; it's just about how we want to showcase it, and who controls the technology to be able to showcase that to you.

SPEAKER_02:

Now we move to true or futuristic. You have to choose: true means it's happening right now, it's everyday life or it's about to happen; futuristic is way in the future, it's not gonna happen for now, it's almost sci-fi. Okay, first question: public dashboards will come with a bias alert that pulses when the data tilts.

SPEAKER_05:

I think futuristic, for the fact that I don't think anybody wants to develop it.

SPEAKER_02:

A global open source data set will be declared a digital commons.

SPEAKER_05:

Futuristic.

SPEAKER_01:

Okay.

SPEAKER_05:

Because it requires a lot of collaboration. And which data set are we even talking about? For what?

SPEAKER_01:

So who owns it?

SPEAKER_05:

Who owns it, where it's coming from? Parts of it are probably digital commons already.

SPEAKER_01:

Okay.

SPEAKER_05:

But when you look at the whole thing, it may not be, and it may never be. When you're talking about, you know, patent applications, there's a lot of data there that has to be kept safe and protected. So that will never make it to the public eye until it's accepted or rejected.

SPEAKER_02:

So no.

SPEAKER_05:

No, so no.

SPEAKER_02:

Citizens will be able to request edits to their government profile the way that they request Wikipedia changes.

SPEAKER_05:

Yeah, it's true. I've seen that happening as well.

SPEAKER_02:

Governments will auction unused citizen data back to the public with royalties.

SPEAKER_05:

Ah, I hope that never happens. So I'll stick with futuristic, and not even futuristic: I hope it never happens.

SPEAKER_01:

Um no, no, a big no sign.

SPEAKER_05:

They might want to sell it back to the public sector or the private sector for royalties. I don't think the public's gonna buy that.

SPEAKER_02:

Every new digital law will be tested by an AI trained on citizen behavior.

SPEAKER_05:

Yeah, I would say futuristic. I've not seen things moving in that direction as of yet.

SPEAKER_02:

Okay. Public service bots will be evaluated on empathy, not just efficiency.

SPEAKER_05:

That's true. Yeah, I agree. It should happen, and I think there are some side ventures that are training it to be more empathetic.

SPEAKER_01:

Okay.

SPEAKER_05:

As well. And you can also ask it to be empathetic.

SPEAKER_01:

Like be kind. Yeah.

SPEAKER_05:

So you can; it's all about how you frame the question. If you go into any AI solution and say, "look, your persona is somebody who's very empathetic, listens, and XYZ things," it will take on that persona. And it will give you the answer in that empathetic manner.
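Framing the persona up front, as he describes, is typically done with a system message. The sketch below follows the common chat-API message shape; the exact wording, and any model call you would pass these messages to, are assumptions.

```python
# Sketch of persona framing via a system message (common chat-API shape).
# The persona text is illustrative, not a recommended production prompt.
def build_empathetic_chat(user_question):
    return [
        {"role": "system",
         "content": ("You are a public-service assistant. Be empathetic, "
                     "listen first, acknowledge the person's situation, "
                     "and answer in plain language.")},
        {"role": "user", "content": user_question},
    ]
```

The system message rides along with every request, so the assistant keeps the empathetic framing across the whole conversation rather than only on the first turn.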

SPEAKER_02:

Okay. Cities will have digital twins that negotiate resources in real time with other cities' twins.

SPEAKER_05:

So there's two parts to it.

SPEAKER_01:

Okay.

SPEAKER_05:

Cities with digital twins, that's true. It's happening, not at a very large scale, but it is happening in a lot of countries: Asia, Africa, Europe. The negotiating part, though, that's not what digital twins will ever do. That's not their role. Digital twins are effectively there for us to analyze how services should be rendered; it's a real-time visualization of how a city functions. So negotiating resources, no. Yes to the twins for understanding better how the city functions, and for seeing what services you need to provide where, but no to the negotiation part.

SPEAKER_02:

I think for that you need a human.

SPEAKER_05:

That's way too futuristic, if it ever happens.

SPEAKER_02:

Okay. So there will be a constitutional right not to be optimized. You should have the legal right to be left imperfect, inefficient, unprofiled, and unquantified by AI.

SPEAKER_05:

I don't know. No, I don't know. I think it has to be futuristic. Yeah. But at the same time, there are discussions around this topic, about who should own the data. But nothing's coming in the near future.

SPEAKER_02:

Okay. Public infrastructure will run predictive self-diagnosis and call for maintenance before anyone notices a problem.

SPEAKER_05:

Yeah, that's true. That's true. That's already in place. And it's not even just AI; it's just predictive algorithms. There are services that run like that.

SPEAKER_02:

So there's a pothole here.

SPEAKER_05:

Yeah, I wouldn't even go that far. You have nuclear reactors, for example, which can tell you when the next maintenance is due, and things like that. So this is already there. Is it as widespread as it should be? No, because it requires a lot of other things to fall into place before you have that in public infrastructure. But that's then connected to digital twins as well. Yeah.
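Predictive maintenance of the kind he describes often starts as nothing fancier than a trend extrapolation over sensor readings. The readings, the wear limit, and the linear model below are all made up for illustration; real systems use far richer degradation models.

```python
# Toy predictive-maintenance check: fit a linear trend to daily wear
# readings and estimate how many days remain before the wear limit.
def days_until_maintenance(readings, limit):
    n = len(readings)
    xs = range(n)
    mean_x = sum(xs) / n
    mean_y = sum(readings) / n
    # ordinary least-squares slope of wear vs. day
    slope = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, readings)) \
            / sum((x - mean_x) ** 2 for x in xs)
    if slope <= 0:
        return None  # no degradation trend detected
    return max(0.0, (limit - readings[-1]) / slope)

# Wear rising ~1 unit/day, limit at 20: maintenance due in about 6 days.
print(days_until_maintenance([10, 11, 12, 13, 14], limit=20))  # → 6.0
```

The payoff is the scheduling flip: instead of reacting to a failure, the operator gets a lead time and can book the maintenance window before anyone notices a problem.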

SPEAKER_02:

So they need to work in tandem to have that proper predictivity. There were also a lot of apps coming out at some point for citizens to report what needs to be done in the city.

SPEAKER_05:

Yeah, that's crowdsourcing. It's a good concept, and it's been there for a long time. I think UN agencies have also leveraged it, and there have been a lot of open-source solutions built around crowdsourcing as well. But yeah, that's in place. The thing is, a lot of these are city-level initiatives. So if a city is willing to invest in it, willing to report on it, and willing to take action on it, that's when you can make it effective. I'm from India. I've seen cities where people report and the municipal services take action fairly quickly, but there are other cities where they don't.

SPEAKER_02:

Okay.

SPEAKER_05:

Not just in India; even here, or in the US, for that matter. And then nothing happens. Okay. And then people don't report.

SPEAKER_02:

Yeah, because they feel they're not being listened to. Exactly. So that's lost. Yeah.

SPEAKER_05:

Yeah, exactly. So the crowdsourcing concept is not new; it's been there, and a lot of good products have been built on it. To stretch it a little bit, there's the whole concept of open source: any kind of open-source technology is community driven. Anybody can go in, log in, suggest improvements, and the community will review those improvements and then accept them. From the technology field: everyone, for everyone. Exactly. So those communities exist as well.

SPEAKER_02:

Yeah. But now, final question, the long view, to reflect. If you had the chance to influence the next generation of public AI and data systems with a single idea, not a feature but a foundation, what principle would you build into the core of every design?

SPEAKER_05:

I think it's what we've discussed earlier as well. I would make sure that we have the human at the center of it, and that we are very clear from day one why we are doing it and what the objective is. That has to be the core tenet. Whatever solution is being developed needs to be for the public good, needs to be for the people, and somebody needs to have talked to them and listened to them before designing it. So you need to have that human-centric approach to the whole thing. Again, as I said, it's not a new concept, but it's something we have to keep honing and keep reminding ourselves of so that we don't forget.

SPEAKER_02:

Pushing it in every second.

SPEAKER_05:

Exactly. It's the power of repetition.

SPEAKER_02:

Yes, that's how you learn. Thank you so much for your time. Thank you for being with us. Thank you for having me and talking about how AI can be used for the public good, and how we cannot lose the human. That's the main takeaway, I think, from this episode: the human is the key to every technology. Technology doesn't just shape what we do; it reflects what we believe, what we protect, and what we dare to imagine. Thank you for reminding us that data and AI are not just about systems or speed. They're about choices, values, and the invisible architecture behind every decision that affects our lives. Because building for the public good means asking better questions, designing with care, and always making space for what doesn't fit neatly into the model. The future is not something we wait for; it's something we shape, one decision, one system, one conversation at a time.

SPEAKER_05:

I think it was a great conversation. It's an important topic, and let's hope people keep remembering that and continue doing the good work.

SPEAKER_01:

Thank you, thank you so much. Thank you.

SPEAKER_00:

Thank you for listening to Intangiblia, the podcast of intangible law: plain talk about intellectual property. Did you like what we talked about today? Please share it with your network. Do you want to learn more about intellectual property? Subscribe now on your favorite podcast player. Follow us on Instagram, Facebook, LinkedIn, and Twitter. Visit our website www.intangibia.com. Copyright Leticia Caminero 2020. All rights reserved. This podcast is provided for information purposes only.