Intangiblia™

Plug, Play, or Pay: The Legal Code Behind AI Interoperability

Leticia Caminero Season 5 Episode 19

The invisible legal architecture behind AI systems, either talking to each other or failing spectacularly, takes center stage in this deep dive into interoperability. Far more than technical specifications, the ability of AI models to connect and share data represents a battlefield where intellectual property rights, competition law, and global governance clash to determine who controls the digital ecosystem.

Starting with IBM's mainframe antitrust case, we trace how European regulators forced a tech giant to provide third parties with technical documentation needed for maintenance. This early precedent established that when your system becomes essential infrastructure, monopolizing access raises legal red flags. The SAS v. World Programming Limited ruling further clarified that functionality, programming languages, and data formats cannot be protected by copyright, giving developers freedom to create compatible systems without infringement concerns.

Patent battles reveal another dimension of interoperability politics. Cases like Huawei v. ZTE established detailed protocols for negotiating Standard Essential Patents, preventing companies from weaponizing their intellectual property to block competitors. The Microsoft v. Motorola judgment defined what "reasonable" licensing fees actually look like, protecting the principle that interoperability shouldn't bankrupt smaller players.

Google's decade-long fight with Oracle over Java API copyright culminated in a Supreme Court victory validating that reimplementing interfaces for compatibility constitutes fair use, a landmark decision protecting the ability to build systems that communicate with existing platforms without permission. Meanwhile, the Oracle v. Rimini ruling reinforced that third-party software support isn't derivative copyright infringement, even when designed exclusively for another company's ecosystem.

Beyond courtrooms, international frameworks increasingly shape AI interoperability standards. From UNESCO's ethics recommendation to ISO/IEC 42001 certification, from the G7 Hiroshima AI Process to regional initiatives like the African Union's Data Policy Framework, these governance mechanisms are establishing a global language for compatible, trustworthy AI development.

Whether you're building AI systems, crafting policy, or simply trying to understand why your tools won't work together, these legal precedents reveal that interoperability isn't just about good coding. It's about who controls the playground, the rulebook, and ultimately, the future of AI innovation.


Speaker 1:

Is your AI system playing well with others, or locking the playground gate? Today we're exposing the hidden legal wiring behind AI interoperability: from patent showdowns and copyright drama to antitrust standoffs and a global tug of war over tech standards. We're serving 15 years of regulatory tea, global court rulings and policy power plays. Leticia brings the IP brains, I bring the sass. Let's find out what it really takes to make AI systems speak the same language, or whether someone's always trying to pull the plug. This is Intangiblia.

Speaker 2:

You are listening to Intangiblia, the podcast of intangible law: plain talk about intellectual property. Please welcome your host, Leticia Caminero.

Speaker 3:

Today we're diving into a topic that might sound technical but touches every single AI innovation out there: interoperability. If you've ever wished your AI tools could talk to each other, share data or just play nice, that's interoperability.

Speaker 1:

But getting there isn't just about good coding. It's a battlefield of intellectual property, competition law, global power plays and very long acronyms, from SAS to ZTE, from GDPR to GPAI. We're decoding how countries and companies have fought over who controls the plug, the socket and the instruction manual.

Speaker 3:

This episode was built in the AI playroom: my voice, AI clone Artemisa, fully artificial and a bit too good at this game. What you're about to hear is for informational purposes only. No legal advice, just rules of play, courtroom style. You know the drill.

Speaker 1:

Three, two, one, tag, you're sued!

Speaker 3:

Just kidding, let's go. So, before we start dropping case law, Artemisa, what is interoperability?

Speaker 1:

It's the legal and technical magic that lets different AI systems connect, share data and function together without a meltdown. Think of it like universal chargers for your algorithms, except the law gets involved when someone tries to lock the socket.

Speaker 3:

And when we talk about AI, interoperability isn't just convenience, it's power. Whoever controls the standard controls the ecosystem. It's about market access, competition and, in some cases, fundamental rights.

Speaker 1:

Exactly and surprise, it's also about IP, copyright on APIs, patents on standards, brand promises, fair use. Interoperability is where law meets code and egos meet lawsuits.

Speaker 3:

Our first stop: Brussels, year 2011. Scene: a very unsexy, very powerful piece of tech, the mainframe computer. IBM practically invented this space and for decades it ran the show. Think banking systems, airlines, insurance servers. This wasn't about casual use, this was mission-critical infrastructure.

Speaker 1:

And IBM had the nerve to play both host and gatekeeper. They sold the machines and they insisted on being the only ones allowed to service them. Not cute.

Speaker 3:

Third-party maintenance companies, think of them like the indie repair shops of the tech world, said: hey, IBM's not giving us the spare parts or the technical info we need to fix these machines. Without access...

Speaker 1:

Their businesses were toast. So they took it to the European Commission. Cue the antitrust alarms, because when you control the whole ecosystem, you don't just have market power, you have market responsibility. Exactly.

Speaker 3:

The commission opened two formal investigations, one following a complaint from T3 Technologies and another from TurboHercules, a company trying to run IBM-compatible systems on non-IBM hardware. TurboHercules, that name sounds like a protein shake, and yet they were trying to shake up the mainframe market, not their biceps. They argued that IBM was using its dominance over mainframe hardware to lock customers into its software and services, and refusing to license interfaces or share vital info with competitors.

Speaker 1:

Translation: IBM was being stingy with its toys after inviting everyone to play. Not a great look in a post-Microsoft-antitrust world.

Speaker 3:

Under EU competition law, Article 102 of the TFEU to be exact, a dominant company that locks others out has a problem, especially when its product becomes the de facto standard. If everyone builds on you, you don't get to pull up the ladder. So what did IBM do? They settled. No fine, but a legally binding commitment. IBM agreed to supply third-party maintenance providers with mainframe spare parts and technical documentation on fair, reasonable and non-discriminatory, FRAND, terms for five years.

Speaker 1:

Aka, you built the mainframe, but now it's time to share the manual.

Speaker 3:

This was one of the earliest moments where interoperability became a legal obligation, not just a technical preference. It's also one of the first times the EU used antitrust to guarantee access to IP-dependent systems in the tech sector.

Speaker 1:

Honestly, it set the tone, because from here on out, anyone dominating a system that others depend on had better have a FRAND strategy, or prepare for EU therapy sessions.

Speaker 3:

And for AI, this case taught us: if your system becomes the infrastructure, your IP rights might bend to the public interest. Because sometimes the line between invention and monopoly is just a locked port away.

Speaker 1:

All right, case number two takes us deep into the logic of software and the limits of copyright. The year is 2012, and the Court of Justice of the European Union is about to answer a burning question: can you copyright how a program works? Not the code itself, but its functions, language and data format. Enter SAS Institute, a big-shot analytics company. They had a system where users could write scripts in a language called the SAS language to analyze massive data sets.

Speaker 3:

Now along comes World Programming Limited, a cheeky UK-based startup that builds a software system that does exactly what SAS's platform does, not by copying the code, but by observing how it behaves and creating a compatible product.

Speaker 1:

Basically they cloned the vibes, the language, the outputs, the functionality, but not the source code. SAS lost their minds.

Speaker 3:

And they sued for copyright infringement, saying that, even though the code wasn't copied, the look, logic and feel of the software were protected.

Speaker 1:

And here's where it gets hot. The CJEU ruled that functionality, programming languages and data file formats are not protected by copyright.

Speaker 3:

That means you can't stop someone from building software that does the same thing in the same way, using the same command language, as long as they don't copy your actual code.

Speaker 1:

And this is huge for interoperability because it says, if you want to make your AI model or system compatible with a dominant platform, you can study its behavior and build around it without violating copyright.

Speaker 3:

The court made it clear: ideas and principles behind software, including interfaces and languages, are not protectable. Only the expression of those ideas, the code, can be copyrighted.

Speaker 1:

Which is why this case became a holy grail for reverse engineers, open source rebels and AI developers who need to plug into proprietary ecosystems without getting sued.

Speaker 3:

It gave legal backing to clean-room re-implementation, the idea that you can build interoperable software by analyzing how something works from the outside, without stealing the original code.
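As a toy illustration of that clean-room idea, here's a short Python sketch, entirely our own invented example: one phase observes a "black box" program's behavior from the outside, and a separate phase writes new code that matches only the observed behavior, never the source.

```python
# Clean-room sketch: observe a black box's behavior, then reimplement it
# without ever seeing its source. Everything here is an invented example.

def black_box(x):
    """Stand-in for proprietary software we can run but not read."""
    return x * x + 1

# Phase 1: an "observation" team records input/output behavior only.
observations = {x: black_box(x) for x in range(-5, 6)}

# Phase 2: a separate team, seeing only the observations, writes
# independent code that satisfies the same behavioral spec.
def compatible(x):
    return x ** 2 + 1

# Verify behavioral compatibility on the recorded cases.
print(all(compatible(x) == y for x, y in observations.items()))  # True
```

The point of the two-phase split is evidentiary: the reimplementing team can show it never had access to the original expression, only to observed behavior.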

Speaker 1:

Let's say it together: interoperability is not infringement. Unless you're copying code, then it's a different episode.

Speaker 3:

And for AI, this case supports the idea that if you want to train your model to work with someone else's formats, scripts or inputs, you can, legally, as long as you're not copying the code base.

Speaker 1:

It's a green light for compatibility innovation and a red flag for vendors who think copyright gives them monopoly powers over ecosystems. Sorry, the law says functionality isn't protected.

Speaker 3:

Now we enter the world of standards, and patents that play both hero and villain. It's 2014 and the European Commission is taking on not one but two tech giants, Samsung and Motorola. Their alleged crime? Weaponizing patents that were supposed to be shared fairly.

Speaker 1:

Oh yes, the juicy world of standard essential patents, or SEPs. These are patents on tech that everyone uses, like Wi-Fi, 3G or compression formats. If your phone speaks the same wireless language as mine, thank a SEP, probably several.

Speaker 3:

But here's the catch: if your patent becomes part of a standard, you're expected to license it on FRAND terms: fair, reasonable and non-discriminatory.

Speaker 1:

Which is legal code for: you don't get to become a troll after joining the standards party.

Speaker 3:

In the Samsung case, the commission said, hold up. Samsung had joined the standard-setting process for mobile communications and agreed to FRAND licensing. But later they went after Apple, seeking injunctions to block iPhone sales in Europe unless Apple agreed to their licensing terms.

Speaker 1:

Sounds a lot like: give us what we want or your iPhones get banned. Not exactly the spirit of FRAND.

Speaker 3:

Meanwhile, Motorola was doing something similar, fighting for injunctions in Germany over SEPs against Apple, even after Apple offered to take a license and submit to a court-decided royalty.

Speaker 1:

So what did the EU do? They pulled the Article 102 lever, abuse of dominance, and said: if someone's willing to negotiate, you can't use your patent as a nuclear button.

Speaker 3:

In Motorola's case, they were formally found to be in breach of competition law. In Samsung's case, the commission accepted a legally binding settlement. Samsung promised not to seek injunctions on SEPs in Europe for five years against any company that agreed to a pre-established licensing framework.

Speaker 1:

You can fight over price, but not by threatening to shut people down. That's extortion, not negotiation.

Speaker 3:

These cases were huge because they clarified something crucial: even if you have a legitimate patent, you can't use it to block market access when you've promised to license it fairly, especially if your patent sits inside a standard that everyone has to use to even enter the game. And for AI, this is where it gets really relevant. As AI matures, standards are forming for model architectures, for data labeling, for interoperability between agents, and these cases set expectations for how patents embedded in those standards get licensed, so they don't become traps.

Speaker 1:

So if you build the rules, don't flip the board.

Speaker 3:

FRAND means you license, you don't extort. And this case gave implementers, startups, challengers, small innovators some protection from being bullied by giants who suddenly remember they own the socket everyone's using.

Speaker 1:

So yes, patents matter, but once you sign the FRAND handshake, you can't turn it into a chokehold.

Speaker 3:

Next up, the CJEU again, this time in 2015. You've heard of Huawei, right? The telecom titan with a lot of patents. Well, this case, Huawei v. ZTE, gave us the first full legal choreography for negotiating standard essential patents. Spoiler: it's more tango than TikTok.

Speaker 1:

Two tech giants, one patent and a big question: if someone's using your SEP, your standard essential patent, how exactly are you supposed to handle it?

Speaker 3:

Because, remember, when your patent becomes part of a standard, you're expected to license it under FRAND terms. But what if the other side is using your tech without a license? Can you seek an injunction, or are you required to play nice first?

Speaker 1:

Huawei said: they're infringing, we want an injunction. ZTE said: we're willing to negotiate and we'll take a license. Cue the European Court of Justice stepping in like the referee at a patent wrestling match.

Speaker 3:

And the CJEU ruled, with all the elegance of a rules committee at a ballroom dance contest. They said: yes, you can go to court, but not before following the negotiation protocol. Let's break it down.

Speaker 1:

The court laid out the FRAND dance steps.

Speaker 3:

Step one: the SEP holder must alert the user with all the details: what patent, how it's being used and why it's essential.

Speaker 1:

Step two: the implementer must clearly express willingness to take a license.

Speaker 3:

Step three: the SEP holder makes a specific FRAND offer.

Speaker 1:

Step four: if the implementer doesn't like it, they must respond promptly and reasonably, maybe with a counteroffer.

Speaker 3:

Step five: while all this is happening, no injunctions, unless there's clear bad faith. That's right. You don't get to sue just because you're frustrated. You need to negotiate like an adult. No holding the industry hostage while shouting: but it's mine!
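For listeners who think in code, the court's choreography reads almost like a protocol spec. Here's a minimal Python sketch of that sequence; the step names and the `injunction_allowed` function are our own illustration of the ruling's logic, not anything from the judgment itself.

```python
# Hypothetical encoding of the Huawei v. ZTE negotiation sequence.
# Step names and this logic are our illustration, not legal advice.

STEPS = [
    "holder_alerts_implementer",        # step 1: patent, use, essentiality
    "implementer_signals_willingness",  # step 2: clear willingness to license
    "holder_makes_frand_offer",         # step 3: specific FRAND terms
    "implementer_counteroffers",        # step 4: prompt, reasonable response
]

def injunction_allowed(completed_steps, bad_faith=False):
    """Injunctions are only on the table if the protocol broke down:
    the implementer acted in bad faith, or steps were skipped."""
    if bad_faith:
        return True
    # While the steps are followed in order, no injunctions (step 5).
    return completed_steps != STEPS[:len(completed_steps)]

# Both sides following the choreography: no injunction.
print(injunction_allowed(["holder_alerts_implementer",
                          "implementer_signals_willingness"]))  # False
# Holder jumped straight to an offer without the required notice.
print(injunction_allowed(["holder_makes_frand_offer"]))  # True
```

The design choice mirrors the ruling: the question is never "who owns the patent" but "was the sequence honored before anyone ran to court."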

Speaker 1:

What's beautiful about this decision is that it balanced patent rights and market access. It said you can protect your inventions, but not weaponize them to block out competition once you've pledged FRAND.

Speaker 3:

For AI, this sets a precedent, because if a model architecture, data set structure or API becomes a standard, the owners of those building blocks may face the same obligations.

Speaker 1:

And this case is still the gold standard in the EU for SEP disputes. You don't just get to scream infringement, you have to dance the FRAND dance, formally, gracefully and with good faith.

Speaker 3:

So what did Huawei learn? That a dominant position in the market comes with responsibility and paperwork.

Speaker 1:

And what did ZTE learn? That playing fair means showing up to negotiate, not stalling while you profit off the tech.

Speaker 3:

In other words, if you want to use a standard, be ready to license, and if you own the patent, be ready to offer fair terms before you sue. It's not a vibe, it's a process.

Speaker 1:

And the court just handed you the choreography.

Speaker 3:

Next up, another standard essential battle, or, as I like to call them, the ticket booth at the entrance to the playground.

Speaker 1:

Because in tech, standards are how everyone agrees on the shape of the slide and the width of the swing chains. And if your patent ends up in a standard, you're not the gatekeeper anymore, you're the host.

Speaker 3:

Enter Motorola, who held key patents for Wi-Fi and video encoding standards, the kind of tech that makes devices like Microsoft's Xbox or Windows PCs just work.

Speaker 1:

Microsoft needed access to those patents to make their products compatible. So Motorola says sure, but only if you pay us 2.25% of the product's price.

Speaker 3:

Wait, wait. 2.25% per device? That's like charging a playground entry fee based on the price of the child's sneakers.

Speaker 1:

Exactly. Microsoft cried foul and sued, claiming Motorola had breached its promise to license those patents on FRAND terms: fair, reasonable and non-discriminatory.

Speaker 3:

And here's the kicker: FRAND is not just a feel-good acronym. It's a binding promise companies make when their patents become part of a global standard.

Speaker 1:

In 2013, Judge James Robart held a first-of-its-kind trial to determine what a fair royalty actually looks like. And spoiler: it was a lot less than what Motorola wanted.

Speaker 3:

We're talking fractions of a cent per unit for some patents, because just being part of the playground doesn't mean you get to charge by the bounce.
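To see why the gap was so dramatic, here's a quick back-of-the-envelope comparison in Python. Only the 2.25% figure comes from the case as described here; the device price, unit count and the per-unit rate are illustrative assumptions, not numbers from the judgment.

```python
# Back-of-the-envelope royalty comparison. The 2.25% rate is from the
# case; the $400 device price, 50M units, and half-a-cent-per-unit
# rate are purely illustrative assumptions.
device_price = 400.00      # hypothetical console price, USD
units_sold = 50_000_000    # hypothetical lifetime units

percent_of_price = 0.0225 * device_price * units_sold
per_unit_rate = 0.005 * units_sold   # half a cent per unit

print(f"2.25% of price: ${percent_of_price:,.0f}")  # $450,000,000
print(f"per-unit rate:  ${per_unit_rate:,.0f}")     # $250,000
```

Under these made-up volumes, the percentage-of-price demand comes out roughly three orders of magnitude larger than a fractions-of-a-cent rate, which is exactly the kind of spread the trial was convened to resolve.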

Speaker 1:

Then came the 2015 Ninth Circuit decision. It upheld the verdict and awarded Microsoft $14.5 million in damages.

Speaker 3:

The court made it crystal clear: if you promise to license your tech fairly, you can't backpedal and hold it hostage once everyone's built around it.

Speaker 1:

This case was a wake-up call for SEP holders. You want your patent in the standard? Cool, but that comes with global responsibility, and a receipt book, not a ransom note.

Speaker 3:

And it protected the idea that interoperability shouldn't bankrupt you. Just because your swing set uses patented bolts doesn't mean you get to tax the whole playground. Okay, buckle up, because this one's got billion-dollar stakes, chips in everything and a roller coaster of legal opinions. It's FTC v. Qualcomm, and it ran from 2017 to 2020 like a three-season legal thriller.

Speaker 1:

Let me set the scene. Qualcomm, king of mobile chipsets, not just building the hardware, but licensing the technology that makes smartphones actually talk to each other. And they weren't just licensing to manufacturers, they were refusing to license to competitors.

Speaker 3:

Right. They had a no license, no chips policy. If you didn't pay up, you didn't get the chips, End of story. And the US Federal Trade Commission said hold up, that smells like abuse of dominance.

Speaker 1:

So the FTC sued in 2017. They argued that Qualcomm used its standard essential patents SEPs to squeeze unfair royalties out of phone makers and block competitors from entering the market.

Speaker 3:

In 2019, the district court, Judge Lucy Koh presiding, agreed. She ruled that Qualcomm had violated antitrust law and ordered sweeping remedies: they had to renegotiate licenses, stop threatening to cut off chips, and even license their patents to rival chipmakers.

Speaker 1:

But wait, enter the appeals court. In 2020, the Ninth Circuit totally reversed the ruling. They said Qualcomm's practices may be aggressive but not anticompetitive. AKA: this isn't illegal, it's just capitalism.

Speaker 3:

And more than that, the court emphasized: FRAND disputes belong in patent or contract law, not in antitrust. They warned that turning every royalty negotiation into an antitrust issue could chill innovation.

Speaker 1:

Basically, don't bring a Sherman Act to a licensing fight. Now here's where this gets spicy for interoperability and AI. If courts won't step in when SEP holders deny access to standards, even under a FRAND pledge, then standard setting might not be enough to ensure access. Which is scary, because imagine an AI company that invents a new model interface, gets it adopted as a standard and then refuses to license it to rivals. If Qualcomm's logic applies, that's just business, not antitrust.

Speaker 3:

So this case sent shockwaves. In the EU, refusing to license an SEP could be abuse of dominance, but in the US the courts just said: not our lane.

Speaker 1:

And the global message don't count on antitrust law to guarantee interoperability, at least not in American courts.

Speaker 3:

Exactly. If you want access to a tech standard, you'd better make sure the licensing terms are crystal clear and enforceable through contract law, not wishful thinking.

Speaker 1:

Because if Qualcomm can walk away from FRAND like it's a breakup text, so can future AI platform owners. So this case reminds us: standards are only as open as their enforcement, and sometimes innovation doesn't need more rules, it needs better referees.

Speaker 2:

Intangiblia, the podcast of intangible law. Plain talk about intellectual property.

Speaker 3:

Now let's head over to Cupertino, Apple's playground. It's not just fancy, it's fenced off, padlocked and guarded by a sleek privacy policy.

Speaker 1:

Enter Corellium, a cybersecurity firm with a very different mission. They didn't want to compete with iPhones. They wanted to clone the iOS operating system in a virtual sandbox so researchers could stress test it without breaking real devices.

Speaker 3:

So what did Corellium do? They built a virtual iPhone, a simulated version of iOS. You could pause, dissect, rewind and break without actually breaking a phone.

Speaker 1:

To Apple that sounded like digital trespassing. They sued for copyright infringement and violating the DMCA, claiming Corellium copied iOS and bypassed its technical protection measures to build this virtual lab.

Speaker 3:

Corellium, on the other hand, argued fair use. They weren't reselling iPhones or making user-friendly clones. They were giving researchers tools to find bugs, study behavior and improve security across the ecosystem.

Speaker 1:

Think of it this way Apple built the playground and said no one touches the blueprints. Corellium said we built a scale model in a lab to test which slide breaks under pressure.

Speaker 3:

In 2020, the district court sided with Corellium on fair use. It ruled that Corellium's tool was transformative: it didn't just copy iOS, it added new layers, pause functions, memory inspection, crash diagnostics. The court also pointed out.

Speaker 1:

Corellium removed most consumer-facing features: no camera, no app store, no phone calls. This wasn't a consumer product, it was a research utility.

Speaker 3:

And that made all the difference. The use was scientific, not commercial in nature, at least not in the same market Apple operated in, which meant no copyright infringement.

Speaker 1:

Later, the twist. Of course there is one. The court didn't throw out Apple's DMCA anti-circumvention claim. It said: fair use may let you copy the playground for research, but hopping the fence without permission might still be illegal. So what did we learn here? That reverse engineering for interoperability and research can be lawful, especially when it adds value and avoids direct competition. But if you pick the lock on someone's tech, the DMCA might still have something to say.

Speaker 3:

Apple and Corellium later settled out of court, but this case still stands as a landmark, especially for anyone developing compatibility tools, security software or virtual testing platforms.

Speaker 1:

In short, if you want to open up someone else's playground for safety checks, bring your own tools and maybe a good lawyer.

Speaker 3:

All right, let's talk about the API showdown of the century. It's Google v. Oracle, and the US Supreme Court had to answer a deceptively simple question: is an API, that's an application programming interface, copyrightable?

Speaker 1:

Let's break it down. APIs are like menus: they tell developers what's on offer, what functions they can call, what data types go in and out. In this case, Google copied about 11,500 lines of Java API declarations so Android developers could keep writing in Java without reinventing the wheel.
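A quick sketch of the declarations-versus-implementation distinction. The case was about Java method signatures, but the same idea can be shown in Python; `max_of` here is an invented stand-in, not an actual Java API.

```python
# Toy illustration of reimplementing a "declaration" with fresh code.
# max_of is a made-up stand-in for an API method like Java's Math.max.

# The declaration is the part developers memorize: the name, the
# parameters, and the promised behavior.
def max_of(a, b):
    # The body is rewritten from scratch; only the interface is kept,
    # so existing callers keep working unchanged.
    return a if a >= b else b

# Code written against the familiar interface keeps working.
print(max_of(3, 7))  # 7
```

What Google copied was the equivalent of the `def max_of(a, b):` lines across thousands of methods; the bodies underneath were its own.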

Speaker 3:

And Oracle, who had acquired Java through its purchase of Sun Microsystems, said: excuse me, that's our intellectual property. Google said: calm down, it's just the structure, the function headers. We rewrote everything else.

Speaker 1:

And then it escalated into a full-on, multi-year legal drama. Lower courts flip-flopped, billions were at stake. The whole tech industry held its breath.

Speaker 3:

And in 2021, the Supreme Court finally ruled. They assumed that APIs were copyrightable, though that's still debated, but said Google's use was fair use.

Speaker 1:

That's right. The court didn't settle the copyrightability question directly, but it said that, even if API declarations are protected, copying them for the sake of interoperability, to enable developers to build new programs in a familiar language, is fair game. In their words, Google's use was transformative.

Speaker 3:

They were building something new, Android, not just copying Java. And preventing that reuse would risk harm to the public, especially to innovation.

Speaker 1:

And the stakes were high. If Oracle had won, it could have opened the door to copyright claims over any interface, command structure or software protocol. Imagine the mess: AI systems blocked from reusing training APIs, data structures or model calls.

Speaker 3:

Instead, the court sent a strong message: interoperability matters, and re-implementing APIs is a form of creative, transformative use, not piracy. Also, fun fact: the court cited the public interest in creativity incentives.

Speaker 1:

They knew this wasn't just about Google or Oracle. It was about how software ecosystems grow.

Speaker 3:

So for the AI world this really is massive, because if we want models to talk to each other, tools to be compatible and code to be remixable, then APIs need breathing room and this case gave them that room.

Speaker 1:

It didn't say APIs are free real estate, but it said use them thoughtfully, transformatively, and the law might just have your back.

Speaker 3:

So this case a copyright win for interoperability and a reminder that law can evolve to keep software innovation moving forward, even when it takes 11 years and 11,000 lines of code to get there.

Speaker 1:

Picture this.

Speaker 3:

Oracle builds a huge software playground: custom jungle gyms, secure tunnels and a user base locked into its system. Then Rimini Street shows up with its tools and says: hey, we'll maintain your equipment, patch your software, even add a new ladder or two.

Speaker 1:

Oracle was not amused. They sued Rimini for copyright infringement, claiming Rimini's updates and patches were just unauthorized derivatives of Oracle's code.

Speaker 3:

Here's where it gets messy.

Speaker 1:

Rimini wasn't copying code line for line. They were just building software that worked with Oracle's systems, like crafting new ropes for Oracle's monkey bars without using Oracle's actual rope. And, let's be honest, Oracle didn't want a third party doing maintenance on its playground. They argued Rimini's software, even if built from scratch, was illegal because it only made sense in Oracle's sandbox. The case bounced through the courts for over a decade.

Speaker 3:

Then, in 2024, the Ninth Circuit said: stop the tantrum.

Speaker 1:

The ruling: interoperability does not equal infringement. The court clarified that just because a product works exclusively with another doesn't make it a derivative work under copyright law. Think about that. If Rimini writes code to fit Oracle's platform but doesn't copy protected code, that's legal. That's like designing a seesaw that bolts into Oracle's playground, but using your own materials. Oracle also tried to say Rimini's updates made them look bad, that it was a branding issue, a quality issue, but the court wasn't buying it.

Speaker 3:

This case is huge for developers, support vendors and anyone making add-ons, plugins or third-party tools. It says you don't have to license the whole jungle gym if you're just building a better slide.

Speaker 1:

It also reinforces that software interoperability is legal territory, not a monopoly loophole.

Speaker 3:

Oracle may own the playground, but the ruling says others can still show up, bring their own tools and make the swings safer without getting sued. Not every legal turning point comes with a judge's gavel. Some of the most powerful forces shaping AI interoperability come from soft law, standards and international coordination.

Speaker 1:

Exactly If the courtroom is where legal drama plays out, this is the writer's room, the place where global agendas, ethical codes and interoperability frameworks are drafted.

Speaker 3:

We call this segment the infrastructure of influence, because what happens here doesn't always make headlines, but it quietly defines how AI systems behave, connect and scale worldwide. So take a seat. Now, 2021: 193 countries agreed on something monumental, a shared vision for how AI should respect human rights, diversity and dignity.

Speaker 1:

That's right. The UNESCO recommendation on the ethics of artificial intelligence was the first global instrument of its kind. Not just nice words. It laid out clear principles for transparency, fairness, accountability and yes, interoperability.

Speaker 3:

It recognized that AI isn't just about code. It's about how systems impact real people in real places, across cultures, borders and power structures, and it called for governments to invest in shared infrastructure, open standards and ethical oversight.

Speaker 1:

What makes it so powerful is that it's not binding, but it's everywhere. It shows up in national AI strategies and procurement rules and data policies, and in how countries shape their AI ambitions.

Speaker 3:

UNESCO set the tone. If we want AI that's globally trusted, it needs to be globally guided and somehow the world actually agreed. Now, while governments were writing ethical principles into global declarations, the engineers were busy drafting a manifesto of their own. Enter the IEEE's Ethically Aligned Design Initiative.

Speaker 1:

This was no vague gesture. It's a comprehensive framework, Hundreds of pages, created by technologists, ethicists, philosophers and policy wonks from around the world. Their mission to ensure that AI and autonomous systems are designed with human values at the core.

Speaker 3:

The IEEE didn't just ask what machines can do. They asked what machines should do, especially when those machines are making decisions that affect lives, rights or democratic processes.

Speaker 1:

And here's where it gets juicy. They didn't just stop at ethics. They linked ethics to design architecture. That means fairness, explainability, transparency and, yes, interoperability. All start in the system design phase, not as a patch after launch.

Speaker 3:

Ethically Aligned Design influenced not only developers, but governments, standardization bodies and companies trying to bake values into their AI from day one.

Speaker 1:

It's like the instruction manual for responsible innovation, only with actual chapters on bias, agency, sustainability and social justice.

Speaker 3:

And while it's technically voluntary, this framework shapes ISO working groups and even corporate codes of conduct. If your AI startup says it's ethically designed, there's a good chance it's following this playbook.

Speaker 1:

So, yes, it came from engineers, but it landed everywhere.

Speaker 3:

Proof that ethics isn't a soft science when it's built into the source code. Next up, we're headed to the OECD, where ethics meets taxonomy, because before you regulate something, you've got to agree on what it actually is.

Speaker 1:

Right. One country's chatbot might be another's automated decision maker, with human consequences. Without a shared language, trying to govern AI globally is like playing chess with mismatched rule books.

Speaker 3:

So the OECD said let's fix that. They created a classification framework to help policymakers, regulators and developers describe AI systems based on what they do, how risky they are and how humans interact with them.

Speaker 1:

This is semantic interoperability at its finest. It's not about code. It's about making sure that, whether you're in Tokyo, Toronto or Tunis, when someone says high-risk AI, everyone knows what that means.

Speaker 3:

The framework breaks down systems by input type, learning method, autonomy level and social impact, and it's designed to work across regulatory models, whether you're enforcing the EU AI Act, designing a sandbox in Kenya or building an ethics board in Brazil.

Speaker 1:

And the best part? It's not just for governments. Developers, civil society, auditors and even startups can use it to explain their systems in plain language and make their work compatible with international standards.

Speaker 3:

So while it won't send you to court, it might just get you through customs. This is the kind of quiet policy tool that enables trust, transparency and collaboration, especially when the stakes are high and the systems are opaque.

Speaker 1:

The OECD didn't tell the world what to regulate. They gave the world a map to navigate what's already here.

Speaker 3:

So far we've seen frameworks and declarations that set the tone, but now we're entering standards territory, where things get certified, auditable and officially laminated.

Speaker 1:

That's right. In December 2023, the International Organization for Standardization and the International Electrotechnical Commission dropped something big: ISO/IEC 42001, the first global management system standard for AI.

Speaker 3:

This isn't about whether your AI is cool or scary. It's about whether your entire organization knows what it's doing. The standard covers governance structure, risk controls, documentation, audit trails, stakeholder engagement and continuous improvement, and here's the plot twist.

Speaker 1:

It's sector-neutral. Health tech, finance, government tools, doesn't matter. If you build AI, ISO/IEC 42001 gives you a common governance language.

Speaker 3:

But what makes it a superstar in our theme today is this: ISO/IEC 42001 is built for interoperability. It's designed to help your AI system play nice across platforms, borders and supply chains without sacrificing trust or accountability.

Speaker 1:

And let's not forget the real-world flex: you can get certified. This isn't just guidance, it's something you can be audited on, show off and wave around in a funding pitch or government bid.

Speaker 3:

So now, if you say your AI is ethical, secure and transparent, you can prove it with a stamp.

Speaker 1:

ISO/IEC 42001 is basically the gold standard for grown-up AI governance. It's how you tell the world: we don't just build smart tech, we run it responsibly.

Speaker 3:

All right. So the ink was barely dry on ISO/IEC 42001, and within a year the certification wave began.

Speaker 1:

Because nothing says trust me, like an internationally recognized badge, and companies were lining up fast.

Speaker 3:

By early 2024, we saw early adopters in sectors like healthcare, banking and autonomous vehicles get certified. Why? Because those are the industries where a bad AI day could land you in court, or in the news.

Speaker 1:

Governments took notice too. Japan encouraged domestic AI companies to get certified, India's Bureau of Standards launched training programs, and Canada is already tying certification benchmarks into federal procurement.

Speaker 3:

Even here in Switzerland, the early certification pilots started rolling out, blending the ISO standard with national ethical AI guidelines.

Speaker 1:

It's not just compliance, it's positioning. And for global businesses, ISO/IEC 42001 certification became a passport to credibility. It tells partners, regulators and customers: we're interoperable, explainable and not here to break things.

Speaker 3:

Because, let's be honest, a lot of AI companies talk a big game, but this, this is the audit trail to back it up and, from a global strategy perspective, this kind of rollout helps reduce regulatory chaos.

Speaker 1:

If everyone certifies against a shared standard, cross-border AI suddenly gets way less messy.

Speaker 3:

So if you're wondering where AI governance is going, look for the logos. The certification wave isn't just a trend, it's a signal of market maturity.

Speaker 1:

ISO/IEC 42001: because in 2025, interoperable and accountable isn't a tagline, it's a competitive edge. Up next, we're shifting from standards bodies to diplomacy rooms, and this one's a two-parter: the Global Partnership on AI, aka GPAI, and the G7 Hiroshima Process.

Speaker 3:

Let's start with GPAI. Launched in 2020, this was the moment when 15 countries, from the EU to India to the US, said: we need to work together on AI that's safe, fair and globally aligned.

Speaker 1:

GPAI isn't a treaty. It's more like a global co-working space where researchers, civil society and policymakers tackle issues like data governance, responsible innovation and, yes, interoperability.

Speaker 3:

Think of it as the R&D wing of global AI policy. The reports and recommendations that emerge from GPAI often feed directly into national strategies and standard setting.

Speaker 1:

Then fast forward to 2023, when Japan held the presidency of the G7 and launched the Hiroshima AI Process. This was a huge moment.

Speaker 3:

For the first time, G7 countries committed to a code of conduct for advanced AI developers, with core principles like transparency, risk management and international alignment.

Speaker 1:

And what was at the center of it all? Interoperability. Technical, legal, institutional, you name it. The G7 agreed that their systems, safeguards and definitions had to be mutually compatible or we'd all be stuck in a global AI traffic jam.

Speaker 3:

This wasn't just diplomacy. It was a blueprint for harmonizing national strategies so AI systems could scale without fragmenting global trust.

Speaker 1:

So, yes, GPAI and the Hiroshima Process may not show up in court, but they're shaping the vocabulary, the priorities and the roadmaps that countries follow next. In short, you don't need a binding treaty when you've already aligned the strategy. And if your AI ambitions go beyond borders, this is the coordination that lets your system travel legally and ethically.

Speaker 2:

Intangiblia, the podcast of intangible law. Plain talk about intellectual property.

Speaker 3:

Now let's zoom in on a region that's quietly building one of the most ambitious digital ecosystems on the planet: Africa.

Speaker 1:

In 2022, the African Union launched its Data Policy Framework, a bold plan not just for AI, but for digital transformation across the entire continent.

Speaker 3:

The vision: a harmonized, cross-border digital space where data can flow, systems can talk and innovation doesn't get stuck at every national firewall.

Speaker 1:

And here's where it gets exciting for us: interoperability isn't a footnote, it's a foundational pillar.

Speaker 3:

The framework calls for shared technical standards, open protocols and institutional coordination between member states. That includes AI tools used in agriculture, healthcare, finance and public services.

Speaker 1:

It also pushes for regional cloud infrastructure, local data processing and multilingual AI interfaces, so that systems can serve people where they are, in the languages they speak. And this isn't just about tech.

Speaker 3:

It's about sovereignty, development and equity: making sure that African nations aren't just adopting global AI systems, but shaping and exporting their own. And in a world where regulatory chaos often holds AI back...

Speaker 1:

...the AU said: what if we made interoperability the default, not the afterthought?

Speaker 3:

So, while others debate compliance, the African Union is designing an ecosystem that scales responsibly and cooperatively.

Speaker 1:

If you're looking for regional leadership in AI governance, this isn't just an example, it's a blueprint with backbone.

Speaker 3:

So far, we've talked frameworks, declarations and voluntary standards, but now we're entering binding territory.

Speaker 1:

That's right. The Council of Europe, home of the European Court of Human Rights and one of the OGs of international law, is drafting a treaty that could become the first legally binding AI convention in the world.

Speaker 3:

It's called the Convention on Artificial Intelligence, Human Rights, Democracy and the Rule of Law, or CAI for short, and yes, that title is doing a lot of work.

Speaker 1:

CAI goes beyond ethics. It's about rights, remedies and rules that stick. If adopted, countries that sign on would need to align their AI systems with core democratic values and legal guarantees. And here's where it hits our theme.

Speaker 3:

CAI includes provisions on interoperability at the institutional level, ensuring AI oversight, audits and legal processes can actually coordinate across borders.

Speaker 1:

Because, let's face it, an algorithm doesn't care which legal system it's deployed in, but humans definitely do.

Speaker 3:

The treaty is still under negotiation, but it's already influencing national laws and global debates, especially in Europe, Latin America and parts of Africa, regions that often align with Council of Europe principles.

Speaker 1:

And don't forget, the CAI would be open to non-European countries too. So we're talking about a potential global governance mechanism, not just a regional one.

Speaker 3:

So while others are still drafting ethical charters, the Council of Europe is setting up legal consequences.

Speaker 1:

And, let's be honest, sometimes the only thing stronger than a standard is a signature on a treaty with real accountability.

Speaker 3:

Let's take a moment to bow to the Queen. The General Data Protection Regulation, better known as GDPR, didn't just change Europe. It changed how the entire world thinks about data.

Speaker 1:

It came into force in 2018 and is still causing compliance panic in 2025. GDPR was one of the first laws to say: if your system processes people's data, it better be transparent, fair and, wait for it, interoperable.

Speaker 3:

That's right. GDPR baked in rights like data portability and access, which don't work unless systems can actually talk to each other.

Speaker 1:

If you've ever downloaded your data from a platform or transferred it to a new service, that's GDPR's interoperability clause working behind the scenes.

Speaker 3:

And even though it's European law, GDPR became the de facto global benchmark, because if you want to do business in the EU, and let's face it, everyone does, you play by these rules.

Speaker 1:

AI systems, especially those trained or operating on personal data, are now being retrofitted to fit GDPR's expectations: explainability, user control, legal basis. Check, check and check.

Speaker 3:

And while GDPR isn't an AI law per se, it paved the way Because, once we accepted that data rights were real, we had to figure out how to build systems that respect them by design.

Speaker 1:

So if AI wants to be global, it has to be GDPR ready. And if it wants to be GDPR ready, it better know how to interoperate across borders, platforms and preferences.

Speaker 3:

Still the gold standard, still bossing the tech industry around, and still the reason every privacy notice you've ever read sounds slightly panicked. If GDPR was the EU's privacy mic drop, the Digital Decade Declaration was the strategy remix: less law, more mission. And that mission? Make the EU a digitally sovereign, interoperable and resilient beast by 2030.

Speaker 1:

This isn't a regulation, it's a commitment statement, but it comes with measurable targets for digital infrastructure, digital skills, digital business and digital public services.

Speaker 3:

And underneath it all, you guessed it interoperability, because you can't build a pan-European digital ecosystem if every country is still coding in isolation.

Speaker 1:

The declaration pushes for cross-border services, common data spaces and standards that make public systems plug and play across all 27 member states.

Speaker 3:

So, whether it's ID, health systems or justice platforms, Europe wants them to be secure, portable and legally compatible.

Speaker 1:

This isn't just convenience, it's policy infrastructure. And yes, AI is in the mix too. As the EU rolls out the AI Act, the Digital Decade goals are what give those rules a place to live: shared infrastructure, aligned standards and digital trust across borders.

Speaker 3:

So, while the declaration doesn't come with fines or enforcement, it sets the strategic tempo. If GDPR was the what, this is the how, the how fast and the with whom.

Speaker 1:

And as the world watches Europe build its interoperable digital machine, don't be surprised if these benchmarks start popping up in other countries' national plans too.

Speaker 3:

It's the final reminder in our tour interoperability isn't just technical, it's strategic and deeply intentional. All right, five things to carry with you, whether you're coding a chatbot, writing procurement policy or just trying not to get sued by 2030.

Speaker 1:

One: interoperability is both a tech feature and a legal battlefield. If your AI can't connect, it might not just be bad design. It could be a standards war, a patent trap or the start of an antitrust complaint.

Speaker 3:

Two: FRAND is the name of the game in standard setting. Fair, reasonable and non-discriminatory isn't just polite, it's enforceable. If you've pledged to license your tech fairly, the courts will hold you to it.

Speaker 1:

No take-backs. Three: copyright doesn't cover APIs or languages, yet. Reverse engineering for compatibility is still legal in many places, but tread lightly. Not all jurisdictions speak the same IP dialect.

Speaker 3:

Four: policy is moving faster than ever. From ISO to UNESCO, from the G7 to the AU, standards and ethical frameworks are popping up like cookies on a news site. Read before you click accept.

Speaker 1:

Five: the future will be interoperable, or it won't work at all. Despite the noise, we're witnessing a slow but steady global convergence. If your AI wants to scale, partner or govern anything, you'll need a passport of compliance, auditable systems and a really good lawyer. That's a wrap on Plug, Play or Pay. If today's episode gave you legal whiplash, good. That's what happens when the whole world races to set the rules. And remember: it's not about building the smartest AI. It's about building one that can collaborate, comply and scale without taking down the ecosystem, or your company. Thanks for listening. Until next time, stay connected, stay compliant and stay clever.

Speaker 2:

Thank you for listening to Intangiblia, the podcast of intangible law. Plain talk about intellectual property. Did you like what we talked about today? Please share with your network. Do you want to learn more about intellectual property? Subscribe now on your favorite podcast player. Follow us on Instagram, Facebook, LinkedIn and Twitter. Visit our website www.intangiblia.com. Copyright Leticia Caminero 2020. All rights reserved. This podcast is provided for information purposes only.
