Intangiblia™
#1 Podcast on Goodpods - Intellectual Property Indie Podcasts
#3 Podcast on Goodpods - Intellectual Property Podcast
Plain talk about Intellectual Property. Podcast of Intangible Law™
Anna Aseeva - Sustainable by Code: Rethinking Tech Governance from IP to AI
What if the rules we write today could make tomorrow’s technology more human, safer, and genuinely worth wanting? We sit down with Anna Aseeva, a legal strategist working at the intersection of sustainability, intellectual property, and AI, to map a smarter path for digital innovation that starts with design and ends with systems people trust.
We dig into the significant shifts shaping tech governance right now. Anna explains a practical model for aligning IP and sustainability: protect early to nurture fragile ideas through sandboxes and investment, then open up mature solutions with licensing that shares benefits and safeguards intent.
This conversation is equally about culture and code. We talk about legal design that reads like plain talk, citizen participation that turns evidence into policy input, and civic apps that could let communities steer platform rules. We cover digital sustainability beyond emissions—lighter websites, greener hosting, and product decisions that fight digital obesity and planned obsolescence. And we don’t shy away from the realities of AI: hallucinated footnotes, invented coauthors, and the simple fixes that come from a careful human in the loop.
If you’re a builder or curious listener who wants technology to serve people and planet, you’ll find clear takeaways: design for sustainability from day one, keep humans in charge of final decisions, protect what’s fragile, open what’s ready, and invite people into the process.
Subscribe, share with a friend, and tell us: where should human review be non-negotiable?
Check out "Protection for the Inventive Mind" – available now on Amazon in print and Kindle formats.
The views and opinions expressed (by the host and guest(s)) in this podcast are strictly their own and do not necessarily reflect the official policy or position of the entities with which they may be affiliated. This podcast should in no way be construed as promoting or criticizing any particular government policy, institutional position, private interest or commercial entity. Any content provided is for informational and educational purposes only.
The more people are aware, the more information and knowledge they have, and the more they can relate to policy, to law, to a technology, the less fear there is, and the more there is not only acceptance but also desirability.
SPEAKER_03:Welcome to Intangiblia. Today we have a very interesting episode: we're going to be talking about "Sustainable by Code: Rethinking Tech Governance from IP to AI." We are joined by Anna, who is helping shape the legal foundation of our digital future. She works at the crossroads of sustainability, technology, and law, asking not just how innovation moves forward, but how it can move wisely, fairly, and with intention. Anna's trajectory is as global as it is interdisciplinary. She has advised on responsible AI governance, helped build international frameworks on sustainability and innovation, and explored how legal systems can adapt to the realities of blockchain, Web3, and generative AI. Her work spans institutions, sectors, and continents, always guided by a clear ethical compass and a deep understanding of the systems that connect us. From corporate responsibility to legal design, from dispute resolution to future-facing IP, Anna brings together rigor and imagination. In this episode, we explore how law can be more than just a boundary: it can be a tool for possibility, protection, and purpose. Because the rules we design today will shape the tech we live with tomorrow. Welcome. Thank you. Thanks, Leticia.
unknown:Hi.
SPEAKER_03:So, Anna, thank you so much for joining us and for talking with us. I'd like to start with an ice-breaker question: tell us about yourself and how you landed here in Geneva.
SPEAKER_01:All right, thank you. You've already introduced me quite well, so I won't repeat what you've said. Instead, I'll give you, let's say, three figures: four, 16, and 20. Four is the number of fields I'm specialized in; they're all quite interconnected. I started here in Geneva with a degree in international relations at the Graduate Institute; it was HEI de Genève back in the day, and I think people know it by both names. So: international relations and political science, then area studies at the University of Geneva, then law, including a PhD in law from Sciences Po Paris, and I also studied design and business communication, partially web design and web development, and now more the business psychology side. Sixteen, actually 16 and a half, but let's stick with 16, is the number of startups that I've helped in one way or another: I advised them legally, administratively, or on sustainability. And 20 is the number of years of experience, as you mentioned, from the corporate sector to the United Nations to law firms in Brussels, a lot of consulting, NGOs, and now I work for Digital for Planet, an NGO in Zurich, where we work on digital sustainability.
SPEAKER_03:That's a beautiful career. So you have walked many paths.
SPEAKER_01:Yes, I've seen a lot of, let's say, continents and fields and people.
SPEAKER_03:That's amazing. I'm very happy to have you here. So, let's dive in. You move comfortably between policy, ethics, innovation, and law. What's one thing you believe the legal field should embrace more boldly as we shape the digital age?
SPEAKER_01:Yes. The digital age is this transformation from, let's say, a commodities economy to a fully digital economy, and the leap happened much faster than we expected, because of, or thanks to, COVID. During the pandemic we were all locked down and all online, so we are now fully in the digital economy. And what the legal field, like other fields, really needs in this digital age is interdisciplinarity, but the real kind. Before coming to consulting and NGOs, I worked for 10 years in academia, from PhD to an associate professor position. There, in the legal field, whenever one researcher does social studies or data analysis for you, a qualitative or quantitative analysis of legal decisions, of case law, people say, "oh, it's interdisciplinary." I don't think so. Really bringing together people from the hard sciences, the natural sciences, the social sciences and humanities, including law: that's being stronger together, more aware, and more sustainable. I think this is really important. The people who create tech should absolutely work together with people from the social sciences and humanities, with lawyers, economists, sociologists, natural scientists, and the other way around.
SPEAKER_03:So that you can build from their strengths. Everyone comes with their own expertise and knowledge, and then you can build something better together.
SPEAKER_01:Exactly, as you say, from the strengths. Difference is not a weakness; it's complementarity. You can complement each other, and that's only a strength, as you say.
SPEAKER_03:Yeah, perfect. I love that. So, you also work on frameworks for responsible AI, platform governance, and emerging tech. What do you think is the most exciting shift happening right now in how we regulate innovation?
SPEAKER_01:How we regulate innovation. I'd say the most important shift, and I don't know if it's the most exciting, but it's definitely a needed one, is human participation and value-based participation. By human participation I mean human oversight, because we obviously already have human participation: humans create AI. Human oversight, once everything is created, is relevant for AI and relevant for IP. I'm sorry, but if your AI wrote a book on your behalf, or an article, or a case analysis, there could be plagiarism, and that's also an IP issue, right? So there should be human oversight of the final content of anything. If it's just text revision or a spell check, less so; but if it's more than a spell check, human oversight and value-based oversight, meaning that when we use the work of others, we cite them. It's called intellectual honesty. So let's continue with that in the age of AI and platforms, with the same basic values. That's our principal difference. Exactly.
SPEAKER_03:We just bring them online. Yeah, of course, it makes sense, because that's how everyone else grows as well. You acknowledge other people's work and then you build on it, and everyone keeps growing together; no one is left behind.
SPEAKER_01:Exactly, it's beneficial for everyone.
SPEAKER_03:Yes, yes. In your experience, how can intellectual property and sustainability work hand in hand, especially as more creators, coders, and entrepreneurs want to build with purpose?
SPEAKER_01:You're right, more and more creators, social entrepreneurs, and startups build with purpose, because one of the core differences between a startup and just a small or medium enterprise is exactly that: having a purpose, a social purpose, a sustainability purpose. So at this more micro level, I'd say it's about incentivizing through investment and through regulation, but also protecting what startups offer, including IP protection. It's a way of nurturing these small but super smart, unique ideas with purpose: sandboxing. While they are in the sandbox, we incentivize and protect. However, and this is the paradox, once they grow, I would go further, including regulation promoting open licensing for shared solutions. Once the solution is mature enough, grown up enough, we should share it. It sounds contradictory, but it's not; it's complementary. We protect and incentivize when it's being built. When it's grown, you share it with humanity: open licensing, open source.
SPEAKER_03:Yeah, and the beauty of these kinds of frameworks is that it's not just "freely do whatever you want." There are rules in open source, rules about how you let other people into your technology, in ways that don't lose the purpose, aren't deceiving to other people, and don't let it be used for something it was never intended for.
SPEAKER_01:Exactly.
SPEAKER_03:So it's about being open, but also keeping in mind that there are rules in place that you should follow.
SPEAKER_01:That's the other side of innovation with purpose. Yeah. We need to protect and incentivize where protection is needed, and open up where opening is needed. For example, at Digital for Planet we work a lot with Horizon Europe funding, specifically the Pillar II funding for research and innovation. There are a lot of projects there on open source and open licensing, and a lot of rules. The European Union is the champion of regulation, and, as you said, it makes sense: you need rules and limits to protect solutions and to share them, but also to protect against uses that can harm other people. That's why we have a legal system; that's why we have lawyers.
SPEAKER_04:Exactly, we're not that bad.
SPEAKER_01:Let's not kill all the lawyers.
SPEAKER_03:So, you advise across jurisdictions. Have you seen any standout examples of countries or institutions integrating sustainability and digital innovation into law in a forward-thinking way?
SPEAKER_01:That's an excellent question. Maybe you know it; if not, it's worth reading. Stanford University has a center on AI, and it publishes an annual report, the Stanford AI Index. It came out about ten days or two weeks ago; I saw it ten days ago. There's an overview, and then there are points four and five. Point four is about innovation in 2025: there is a graph, and the US is still number one, it innovates, while China is catching up and is super close to the US in that graphic. Point five is about AI regulation, and there, obviously, you have the OECD and the European Union, and for a reason. The GDPR, the EU's General Data Protection Regulation, has now affected law in more than 100 countries. It's absolutely extraterritorial, and it protects, I think, the right values: privacy, consent, personal data, trust, security. I think the EU AI Act will do something similar and have a quite extraterritorial effect, and there are other regulations too: the Cyber Resilience Act, the Cybersecurity Act. We'll see how it goes. But the EU regulates a lot in a quite forward-thinking way, especially regarding social sustainability, meaning trust, security, human-centered solutions, and the protection of privacy. So "Europe regulates" still holds true.
SPEAKER_03:So, from what I'm getting, every country has its own approach. One is running the race of innovation, developing the technology itself, while another is innovating in policymaking and regulation, so that when the technology arrives, the legal system is already in place.
SPEAKER_01:It is already in place, yes, in the European Union. And the thing is the famous Brussels effect: quite a few laws, such as the GDPR, are quite extraterritorial, and I expect more or less the same from the EU AI Act.
SPEAKER_03:So yeah, because of the way it has impacted all business. Businesses are now global, or at least they have a presence in more than one country, so it's fairly easy to be touched by the GDPR.
SPEAKER_01:Yes, you're totally right, and that's a great example. Before, as we discussed, we had a more, let's say, extractive or commodities economy. We had global value chains, which were physical. Now, in the digital economy, everything is online, so it's even more global than the global value chains were. Let's say it's global value chains 2.0.
SPEAKER_03:I like that. So, you've championed legal design and systems thinking. What would a truly user-centered, future-proof legal system look like, whether for tech contracts, IP, or AI?
SPEAKER_01:This question has a lot of elements: user-centered, future-proof, and a legal system for tech in general, because it covers AI, contracts, platforms. When we speak about a good legal system, future-proof and user-centered, we speak a lot about policy salience. But before we assess, or even dream of, this or that policy or legal system being salient, we should think about societal desirability. For me, user-centered means end users, and the end users are citizens: you and I. Being socially acceptable, or accepted, is one thing. There have been a lot of socially accepted things since 2020, right? During the pandemic, a lot of measures came from governments and society had to accept them; we didn't have much choice. Social desirability is something slightly different, and I think a user-centered, future-proof legal system should be socially desirable. Let me give two elements of this very quickly, with a simple example, because I don't want to be super theoretical. First, social acceptance. Imagine a balance: on one scale you have knowledge, on the other you have fear, and in between you have information. The more information you have, the more knowledge you have; the less information you have, the more fear. And this isn't only about a policy or a legal system; it's about anything, like a forest or a sunrise, right? So the goal, the thing about social acceptance, is to have more real, reliable, simply accessible information about anything: about the technology, about the policy, about the whole legal system. The more information, the more knowledge, the less fear. You would avoid those situations of burning masks or destroying 5G antennas, because people would have more knowledge, so less fear.
That's the first step. The next step is the right of citizens, a genuinely legally enforceable right, to information, including the right to participate actively in the making of that information. I know it's a bit more complicated than the example of the balance. Take, for example, sustainability information used in lawmaking and in courts. Citizens can provide evidence. That evidence, handled in a particular way, becomes data; then we apply the GDPR and similar safeguards so the data is ethically sound. And then you can use this evidence in courts, in litigation, in lawmaking. So citizens participate quite directly in lawmaking, not only through votes; we know the limits of the democratic and voting systems, correct? Due to recent events. Here we have a completely different mechanism for how citizens can participate in the making of information. And coming back a step: the more information, the more knowledge, the less fear, and the more the system is future-proof, user-centered, and citizen-centered.
SPEAKER_03:Interesting. And I like the balance; it's so true in everything. The more you understand about a situation, a technology, a process, the more you demystify it. It becomes something you can fully take advantage of, put into your everyday life, and it can also help you surpass the bias against new developments or against technology.
SPEAKER_01:Exactly. You can relate to it directly instead of having fear: "oh, this is lawyer stuff, this is legal stuff, I don't understand anything because it's complicated." No, there is nothing complicated, especially if you participated in making it.
SPEAKER_03:Yeah, of course. Just join in. If we want digital tools, platforms, and IP systems to support long-term sustainability, where do we begin? Is it education, policy design, or something else?
SPEAKER_01:Actually, everything you mentioned is super relevant: education, design, policy, something else. All of these are important, but in my opinion we should start with design, you know, the famous "sustainability by design." If we really embed this at the pre-production, conceptual level, if we embed sustainability in the design, then of course we proceed with education, with policy, and also with IP. But yes, starting from design.
SPEAKER_03:Design, because that's what shapes the entire system, process, or technology: what you take into consideration when you're making it.
SPEAKER_01:Yeah, from the very beginning, from upstream. But then the rest is super important as well. In education, starting from school, for example, to avoid digital obesity, all these digital habits, and also to make kids think about circularity. Changing phones every year, changing laptops every year, always a new one: is that a good thing? So starting from education, from very young ages.
SPEAKER_03:When they're still open to having their minds changed, because, you know, the older you get, the less likely you are to change your mind.
SPEAKER_01:Yeah: "we were used to that with my parents; I received a new iPhone for every birthday." If it's embedded in your childhood memories and they're happy ones, you will never change it. Yeah, it's hard.
SPEAKER_03:Because it becomes like a tradition for them, or part of a happy memory that they can recreate by buying something.
SPEAKER_02:Exactly.
SPEAKER_03:Many tech leaders talk about building responsibly. From your view, what does it really mean to embed responsibility into innovation from the start?
SPEAKER_01:It's very much a continuation of the previous question and answer, but let's develop it further, because the answer here is also design on the one hand and education from very young ages on the other. Since we already discussed that, let's add something; let's take an example. You can use this example in any field, but I'll take writing, let's say research and innovation writing: a scientific article, a case note, or a research and innovation proposal, like the ones we write for Horizon Europe funding or tenders. In this kind of writing, everyone is using AI. There are at least three different stages, or let's say three different strands. Strand one is when it's ethically okay-ish and even recommended: for example, using AI for spell check, grammar, or even translation. That would be ethically okay.
SPEAKER_02:Okay.
SPEAKER_01:Then the second tier is when it's ethically okay-ish but really subject to control. And the third tier is ethically not okay: you can still use it, but it's not recommended at all. I'll also explain the human role and, let's say, risk mitigation. That's an example about writing, but it applies to anything you create with AI, and soon there will be news created with AI. We had a discussion on Wednesday with friends who work for a television channel; I won't tell you more than that. Five, or at most ten, years from now, AI will be involved much more in news-making at every stage. Anyway, coming back to my example: we have these three tiers, and human oversight, meaning proofreading, control, and the decision whether to publish or use the result, is important even for the first tier, where AI just proofreads. So: human oversight for all of them, and risk mitigation even for the first tier; just check that everything is okay. For the second tier, take generating footnotes: you give references, just links, and ask it to generate the footnotes. Recently I asked AI to follow a particular footnote format for an article. I already had the footnotes; they were done, but in a different format. It was Oxford style, and I needed something like Harvard, I don't remember exactly.
SPEAKER_03:You needed to change the citation style. Yes, that was it.
SPEAKER_01:An ACM conference, and they have a particular citation style. I asked AI to do that, and maybe just because it was my book, a single-authored monograph that I refer to once in the article, I noticed it: AI added another name, like, I don't know, John Smith. I was puzzled. I wrote to the AI: who is John Smith? "Oh, it's a co-author of Anna Aseeva." I was like, really? I am Anna Aseeva. So I really argued with the AI, and it was shocking, because it wasn't actually John Smith, it was another name. Apparently the AI found maybe another Anna Aseeva publishing with that person and decided to add the name to my book for no reason. I have no clue. It's only because it was my own book that I noticed. And then I had to reread every footnote to check that the AI hadn't added anything, second-guessing everything. I had to go through all the footnotes and check every word in the titles, because AI could add something to the title of an article you already cited, just in a different format.
SPEAKER_03:Oh wow, that's scary, because it can damage the whole piece of research.
SPEAKER_01:Yes, and that was just footnotes. Can you imagine if you ask it to summarize something, or to develop something?
SPEAKER_02:Okay.
SPEAKER_01:And here the risk mitigation is very different: either you go through every word afterwards, or you simply use AI a bit less. And then, obviously, the non-recommendable third strand is asking AI to write something from scratch. I would not recommend it. Because we're speaking about writing, here it's writing from scratch, but the same goes for creating any news, any content, any good, any service, because we are now in the age of digital goods and digital services. Don't ask AI to create it from scratch. Again, coming back to one of your questions: in the EU we will soon have all these tight regulations. Even the EU AI Act speaks a little about this, not too much, and the Cyber Resilience Act and Cybersecurity Act as well, but there will be more. Imagine people start asking AI to create things from scratch: you'll get what I had with the footnotes, where everything was already there and I just needed it reformatted.
SPEAKER_03:Yeah, that's one of the things we have been seeing, especially when we talk about copyright in AI-generated content.
SPEAKER_01:There you go, then you can have plagiarism easily.
SPEAKER_03:And so far, the different authorities have said you can protect your copyright if there is human input, a human guiding the whole process. It can't be just the AI doing the whole thing.
SPEAKER_01:AI cannot be the author.
SPEAKER_03:Exactly. So when the human uses AI as a tool to create, then okay, it is protected. But if you just enter a general prompt and take whatever the AI gives you, then there is no authorship on the human side, so there is no copyright to be granted. That's the development we're seeing right now. And it's interesting because it connects with this: the human input is crucial to make sure the output is accurate, that there's no hallucination, that the AI isn't going off and doing unrelated things, inventing footnotes, or making references that are out of place. Or plagiarism, for example. Plagiarism, exactly.
SPEAKER_01:You ask AI: okay, please write me a paragraph or a page on this and that. AI writes it without footnotes, as if it had written it by itself from a gazillion websites, when in reality it may have taken the whole page word for word from somebody else's work that happens to be openly available. That is plagiarism, and you will be the one accused of it.
SPEAKER_03:Yeah, that's a scary thing. I think no technology should be used unchecked; you always need a watchful eye. I use AI a lot, especially in the creation of my podcast, and a lot of the tools really help you do the groundwork faster. But you have to be vigilant; you cannot just let it run free.
SPEAKER_01:Oversight, revision, and decision-making: the final decision should be human. Actually, there is an interesting case; I've only read the one-pager. There is now a claim under the preliminary ruling procedure, a specific procedure in the European Union.
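The footnote audit Anna describes, rereading every citation to catch invented co-authors or altered titles, is essentially a comparison of the AI's output against the original text. As a rough illustration (a hypothetical sketch, not a tool mentioned in the episode, and it assumes citations are plain strings rather than structured records), a few lines of Python can flag any word a reformatting step introduced:

```python
# Sketch: flag words an AI-reformatted citation contains that the
# original citation did not, so a human can review them.
import re

def tokens(citation: str) -> set[str]:
    """Lowercased word tokens, ignoring punctuation and formatting."""
    return set(re.findall(r"[a-z0-9]+", citation.lower()))

def check_reformat(original: str, reformatted: str) -> list[str]:
    """Return words present in the reformatted citation but absent
    from the original; an empty list means nothing was invented."""
    return sorted(tokens(reformatted) - tokens(original))

orig = "Aseeva, A., The Book Title (Oxford style) 2023"
good = "A. Aseeva. 2023. The Book Title."
bad = "A. Aseeva and John Smith. 2023. The Book Title."

print(check_reformat(orig, good))  # [] : nothing invented
print(check_reformat(orig, bad))   # ['and', 'john', 'smith'] : review needed
```

Anything the check flags still goes to a human; the point is to direct attention during review, not to replace the oversight the conversation calls for.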
SPEAKER_03:Okay, so now we go into the flash section. I will ask you to pick one, so you have to think fast. You can explain yourself if you want; the idea is to give a quick picture of what you think about these specific choices. Okay, let's start: rewrite the legal system from scratch, or debug the one we've got?
SPEAKER_01:Debug the one we've got. For sure.
SPEAKER_03:A future where every citizen has an AI lawyer, or where no one needs one?
SPEAKER_01:I don't choose the first one. An AI lawyer would be too expensive: you would have the AI lawyer and then a physical lawyer overseeing what the AI does. If the AI lawyer is free because it's the free version, it's still the same cost, just the physical lawyer, so it depends. But I am strongly against an AI-only lawyer, for all the reasons we just discussed. And no lawyers at all? No, I don't think that's a good idea; we would lose jobs, and all my students from different countries, from my 10 years teaching in academia, would lose jobs. I'd rather say no one needs a lawyer for a different reason: there is no crime, nothing but happiness. But yeah, that's a bit futuristic.
SPEAKER_03:Yeah, hopefully there will be a moment when we all live in harmony and 100% at peace.
SPEAKER_01:Let's hope for that.
SPEAKER_03:So, public policy made in parliaments, or prototyped in hackathons?
SPEAKER_01:I like prototyped in hackathons, but I don't think it's democratic enough.
SPEAKER_02:Okay.
SPEAKER_01:But it very much mirrors the idea; remember the citizen participation in the production of information, including the evidence used in lawmaking. So prototyped in hackathons, I think, is stage one, and then still debated in parliaments as stage two, because parliaments are still the most democratic institutions, especially in the European Union. So step one, prototyped in hackathons; step two, made in parliaments. Well, not made, let's say approved, finalized in parliaments. Yes.
SPEAKER_03:Okay, interesting. A world where every startup has a sustainability clause, or where no startup needs one?
SPEAKER_01:Hmm. The second one is nice, but I think it's really very futuristic, while the first one is already the reality. So every startup having a sustainability clause is better.
SPEAKER_03:Teach ethics to machines, or teach systems to listen to people?
SPEAKER_01:Oh, I'd say teach systems to listen to people. Ethics to machines is very futuristic, and it's also very relative, extremely relative. There is no absolute in ethics; something that was shocking or horrible 100 years ago is just normal, or obsolete, today, right?
SPEAKER_03:Yeah, exactly. Copyrights that expire with cultural relevance, or that last as long as human memory?
SPEAKER_01:I'd say in maybe five percent of cases it might be good for copyright to last long, and in all other cases it should expire when it's no longer culturally relevant. I'm quite in favor of open licenses and shared solutions, but again with some limits, with some protection.
SPEAKER_03:Climate lawsuits argued by AI or peace treaties negotiated with virtual reality?
SPEAKER_01:Well, peace treaties negotiated in virtual reality would mean virtual treaties for virtual peace; the whole war-and-peace situation would be virtual, and that's not real. So I'd say climate lawsuits argued by AI, with human oversight, always. More expensive climate lawsuits than now, but yeah. Since I have to choose one of the two, I prefer the first one.
SPEAKER_03:I don't know, that's very VR. Legal reforms crowdsourced by global citizens or co-drafted with machines trained on centuries of case law?
SPEAKER_01:Well, here it's the same again. As you said, you use AI a lot for basic tasks, and, sorry to say for legal trainees and interns, a machine can be much faster at reviewing centuries of case law, tons of case law, right? So it can help, but it should again be step one, and then crowdsourcing by global citizens is awesome. I love it. So I think both: step one, co-drafted with machines, then crowdsourced by global citizens. I really like it.
SPEAKER_03:Okay, final flash question: data treated as a public good or as personal property with royalties?
SPEAKER_02:Hmm.
SPEAKER_01:Yeah, there was a discussion at a higher level. I remember at least Katharina Pistor discussed data as a public good in her Code of Capital, and maybe people before her; I just remember that work. To be honest, that was quite a theoretical work. In practice, I'd say personal data raises more than property issues; there's also security and so on. I think personal and sensitive data should be protected as personal property, maybe without royalties. But non-personal and non-sensitive data could, and maybe should, be a public good, or even commons. Data as commons: again, I don't remember if it's Katharina Pistor or somebody else, but there are a lot of writings and discussions now about data as the commons. Why not?
SPEAKER_03:Okay, interesting. And now we go to the game: true or futuristic. True, meaning it's going to happen soon, it's about to happen, or it's already happening. Or futuristic: not yet, maybe never, you're too far ahead. Okay, let's start. Every digital license will include sustainability terms by default.
SPEAKER_01:Yes. I think it's already sort of started happening.
SPEAKER_03:Yeah, yeah, it's something that we're seeing, and people are more aware of the footprint of technology.
SPEAKER_01:But also sustainability by design, and not only environmental but also social and economic sustainability. Exactly. I think ten years from now it could be quite true.
SPEAKER_03:Yeah. AI agents will negotiate copyright and licensing for creators.
SPEAKER_01:Yes, it could happen. Not now, but yes.
SPEAKER_03:Platforms will be legally required to report on their algorithm impact.
SPEAKER_01:True. I think with the Cyber Resilience Act, the Cybersecurity Act, and other acts, including the Digital Services Act in the EU, at least in the EU this is already becoming true in a way, partially. Yeah.
SPEAKER_03:IP enforcement will happen through decentralized, community-led systems.
SPEAKER_01:Well, the courts. Futuristic. I mean, you mean DAOs, decentralized autonomous organizations?
SPEAKER_03:Yeah, I mean that they will replace the court system. Futuristic.
SPEAKER_04:I need to make some outrageous statement, right? You're giving a lot of ideas to PhD students; you can send them these questions, maybe they will get inspired.
SPEAKER_01:Yes, totally.
SPEAKER_03:Open source AI will carry a trust label to ensure ethical deployment.
SPEAKER_01:Again, it's already happening, in a way, very partially. We're not there yet, but in some sense and in some fields, yes. For example, at Digital for Planet we have one project called Certain, about certification and regulation of AI. It's not exactly about that, but it's becoming true.
SPEAKER_03:Okay, legal user experience will become a standard field in both tech and law schools.
SPEAKER_01:For law schools, I really hope so. It's already a reality in some schools; I won't name names, but yes, totally true, and it's already becoming reality in some progressive law schools. For tech schools I have no idea, but I hope so as well. Very good question. I love it.
SPEAKER_03:Digital sustainability will be measured and taxed like carbon emissions.
SPEAKER_01:Yes, again, it's already happening in a way, but not that clear-cut. There are all these calculators for websites; you have calculators that can show you how much CO2 a website emits. Then you can tell me, okay, there are a lot of calculators, and what is the methodology behind them? Of course, if you want to go that way, you first study the methodology behind the calculator and then you trust it. But we are already there.
SPEAKER_03:Okay, so it's already in the mindset of people. And not only the mindset: these calculators are in use.
SPEAKER_01:They're tools that are available, yeah. Calculators on one hand, and on the other end of the spectrum, eco-design of websites, which I worked with a lot when I was more of a freelance consultant. It's been here for at least the last five years: sober websites. It depends which interface you use, all these pop-up windows. It's not very complicated, but you can design a lightweight, ecologically friendly website. And then there are hosts and platforms that use only green electricity, all these things.
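[Editor's note: the website carbon calculators mentioned above typically turn page weight into an emissions estimate with simple arithmetic. The sketch below illustrates that kind of back-of-the-envelope calculation; the energy-per-gigabyte and grid-intensity figures are illustrative assumptions, not the methodology of any particular calculator.]

```python
# Rough website CO2 estimate from bytes transferred.
# Both constants below are assumed, illustrative values; real
# calculators document their own methodology and figures.

KWH_PER_GB = 0.81          # assumed energy per GB transferred, end to end
GRID_G_CO2_PER_KWH = 442   # assumed average grid intensity (grams CO2/kWh)

def page_co2_grams(page_bytes: int, views: int = 1) -> float:
    """Estimated grams of CO2 for serving a page `views` times."""
    gigabytes = page_bytes / 1e9
    return gigabytes * KWH_PER_GB * GRID_G_CO2_PER_KWH * views

# A 2 MB page viewed 10,000 times: roughly 7.2 kg of CO2 under
# these assumptions, which is why "sober", lighter pages matter.
print(round(page_co2_grams(2_000_000, 10_000), 1))  # → 7160.4
```

Halving the page weight halves the estimate, which is the intuition behind the eco-design practices described above.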
SPEAKER_03:Okay, interesting. A global IP commons will emerge to encourage innovation with shared benefits.
SPEAKER_01:I hope. Yes. That's a bit of what we discussed regarding data, also commons, yeah.
SPEAKER_03:Yeah. Consumers will vote directly on platform policy through civic apps. Ah, I like this.
SPEAKER_01:That would be great, right? And it's also a bit like the citizens participating in the making of information, and then finally of policy and lawmaking. Totally.
SPEAKER_03:And finally, inventorship will be redefined to include collaborative networks, human and machines.
SPEAKER_01:I think it's already happening. It's already the case. Inventorship, though: as we discussed, in IP only a human being can be the author. That's very important, but let's see how it develops.
SPEAKER_03:Yeah, there's a lot of room there to grow. So, one final question to close the episode. If you could share one message with everyone shaping the future of law, tech, or innovation today, what's the one principle or mindset you hope they carry forward?
SPEAKER_01:I'd say collective, bottom-up co-creation, something we already discussed. And a user-centered, meaning citizen-centered, legal system, which is such because it's also co-created by citizens. Here we touched upon citizens giving evidence, participating through evidence in lawmaking and policymaking, and also consumers voting directly on platform policy through civic apps; it completely rejoins the message. The more people are aware, the more information and knowledge they have, and the more they can relate to policy, to law, to a technology, the less fear there is, and the more there is not only acceptance but also desirability.
SPEAKER_03:Of course. Thank you so much. It's been lovely talking with you. It's been a very interesting, fulfilling experience to meet someone who is working in the midst of all these technologies but who also has a very inclusive, collaborative, community-minded sense of how technology can develop and how we can shape our society with technology.
SPEAKER_01:Yes, that's the message, because in digital sustainability, when you say sustainability for tech or digital or ICT, people immediately think about emissions and the environment. But it's also social and economic sustainability, and they're not separate; they're all together. You need social acceptability, social desirability, socioeconomic rights and equality, all these socioeconomic and environmental values together.
SPEAKER_03:Yeah. So the future of technology doesn't have to be cold or chaotic, it can be thoughtful, fair, and full of opportunity if we design it that way. Thank you, Anna, thank you so much, for reminding us that law is not just a set of restrictions, it can be a platform for vision, collaboration, and global progress. Because when purpose meets innovation, we don't just imagine better systems, we build them.
SPEAKER_01:Yes, absolutely. Thank you so much, Leticia. Let's make it happen.
SPEAKER_00:Thank you for listening to Intangiblia, the podcast of Intangible Law. Plain talk about intellectual property. Did you like what we talked about today? Please share it with your network. Do you want to learn more about intellectual property? Subscribe now on your favorite podcast player. Follow us on Instagram, Facebook, LinkedIn, and Twitter. Visit our website, www.intangiblia.com. Copyright Leticia Caminero 2020. All rights reserved. This podcast is provided for information purposes only.