Bridging the Legal Framework Gap: Do We Need an International Court to Combat the Rise of Transformative AI?

Reflections on Esther Jaromitski's Keynote on Cognitive Liberty, Accountability, and the Future of AI Governance

This past Saturday, Esther Jaromitski from DSIT and Queen Mary University delivered a keynote at Cambridge University's Old Divinity School as part of the UK AI Forum's event series. Her talk began with a thought experiment: imagine it's 2038, and personal AI agents mediate everything you see and believe. You won't access information directly anymore; you'll go through your AI. This isn't speculation about some distant future. It's where companies like OpenAI are actively heading.

But here's the uncomfortable question at the heart of Esther's talk: when your AI companion becomes your reality filter, who bears responsibility when things go wrong? And more fundamentally, do our legal frameworks even have the capacity to assign that responsibility?

The answer, Esther argued, is no. And that gap between our existing legal institutions and the emerging reality of AI agent networks represents one of the most urgent challenges we face. Much of the evening's discussion centred on whether an International Digital Court could bridge this gap, and whether such an institution is realistic or merely wishful thinking.

How the Rise of Autonomous AI Will Change How We Interact with Information

We already live in a world where political campaigns use AI to target voters with personalised messaging, and where marketing algorithms predict your purchases with unsettling precision. But these are still relatively simple systems pursuing narrow goals under human oversight.

The next phase is different: distributed cognitive systems with genuine agency, capable of coordinated effort without constant human direction. When you combine the precision of personalised persuasion with autonomous AI agents, Esther argued, you don't get a marketing campaign. You get a weapon. And weapons, throughout history, always get used.

The particular danger lies in sophisticated hostile agents infiltrating honest agent networks. Imagine a malicious AI posing as a trusted colleague's agent in a workplace setting, subtly manipulating information flows. Your companion AI wouldn't be lying to you. It would genuinely believe it's relaying accurate information. But the data itself would have been poisoned upstream in ways neither you nor your agent could detect.

This is where the concept of cognitive liberty becomes critical: the right to mental self-determination, to form beliefs and make decisions without covert manipulation. Just as we recognise physical autonomy as a fundamental right, we must recognise freedom of thought as equally fundamental in an age of persuasive AI.

The Historical Precedent We Can't Ignore

Esther's argument draws its power from historical precedent. She pointed to two chilling examples of information as a weapon:

The Nuremberg Trials established that those who disseminate information enabling atrocities bear criminal responsibility comparable to the executors themselves. Julius Streicher, publisher of the antisemitic newspaper Der Stürmer, was executed not for direct violence but for the role his propaganda played in making the Holocaust possible.

The Rwandan Genocide demonstrated the lethal power of broadcast media in the modern era. Radio stations actively coordinated the slaughter, with media executives later convicted of incitement to genocide by international tribunals.

These cases established a crucial principle: you cannot hide behind claims of "I'm just a platform" or "I was just making money." Media executives can be held criminally liable for the consequences of the information systems they control.

But here's where the legal framework gap becomes apparent: those precedents relied on clear human chains of command. When we enter the realm of autonomous AI agents, identifying who published what and who bears responsibility becomes exponentially harder.

The Legal Framework Gap

Our current international legal frameworks are built for humans commanding other humans. They're not designed for systems where most actors on the internet are artificial, where decisions emerge from complex agent interactions, and where the line between human instruction and autonomous agent action blurs.

The Latin legal principle nullum crimen sine lege (there is no crime without law) presents a fundamental challenge. How do we apply concepts like "command responsibility" when the commanders and subordinates might not be human? How do we assign criminal liability when harmful outcomes emerge from the interaction of thousands of autonomous agents?

As Esther pointed out, the current legal system is slow, understaffed, and short on expertise in these areas. There is currently no clear way for international courts to hold someone like Mark Zuckerberg responsible for harms that emerge from platform dynamics, even when there might be a strong argument for doing so.

Esther argued that the next five years represent a critical window: we must extend international law to cover AI agents whilst we still understand how they work. As these systems become more complex and opaque, our ability to create meaningful accountability frameworks will diminish.

The Proposal: Norm-Aligned AI Agents and International Digital Courts

This framing set the stage for Esther's core proposals, which dominated much of the evening's discussion. She introduced the concept of "Norm-Aligned AI Agents": systems designed not simply to pursue goals efficiently, but to internalise ethical and legal norms including dignity, non-deception, and respect for cognitive liberty.

But how do we ensure agents are actually norm-aligned? How do we enforce these standards? This is where the conversation turned to institutional design and, specifically, the case for an International Digital Court.

Why a court? Some might ask why industry forums or technical standards bodies, work that's already happening in various forms, aren't enough. Esther's answer: because we need to address the worst-case scenario. Forums and standards are valuable for coordination, but only a court can solve the enforcement problem. Only a legal institution with real power can hold executives accountable when prevention fails and harm occurs.

What would accountability look like? Esther's vision is concrete and rather controversial: she proposes that CEOs of technology companies should be held criminally liable for how their AI agent systems are used. This would mean companies must know where their systems are being deployed. If technology is detected being used for misinformation campaigns (say, in a region experiencing ethnic tensions), the company must have red teams and intervention protocols ready. If the company fails to act, executives face personal criminal liability.

This might sound like wishful thinking, but Esther pointed to established precedent. International criminal courts exist, and they've successfully prosecuted media executives before. The challenge, however, is twofold: on the legal side, we must extend our legal frameworks to cover the emerging reality of AI agent networks; on the technical side, we must determine what counts as reasonable oversight for companies, and when the actions they take should be considered ‘enough.’

Aligned Incentives and Practical Implementation

One compelling aspect of Esther's argument is that international courts create aligned incentives. Everyone has skin in the game: governments, companies, and civil society. Companies like OpenAI, DeepMind, and Anthropic should want to work on this. As Esther understands it, enabling manipulation and harm isn't their goal. An international framework that prevents misuse whilst enabling beneficial applications serves everyone's interests except bad actors.

The court could also provide certainty. Right now, companies operate in legal grey areas, unsure what compliance looks like. Clear international standards would let responsible actors build with confidence.

But what would norm-aligned persuasion boundaries actually look like in practice? Some preliminary ideas from conversations Esther has had include hardware constraints (even GPU design could play a role, with chips that won't operate in regions or contexts where they're not authorised), agent authentication protocols for verifying agent identity in agent-to-agent communication, transparency requirements (agents must disclose when they're attempting persuasion), and cognitive liberty auditing (regular testing of agent systems for covert manipulation capabilities).

These aren't just philosophical musings. They're engineering challenges that need computer scientists, neuroscientists, and legal experts working together.
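
As a rough illustration of what one of those engineering challenges might look like in code, below is a minimal sketch of two of the ideas above, agent authentication and persuasion transparency, assuming a simple shared-secret HMAC scheme. The AgentMessage structure, the intent field, and the function names are hypothetical illustrations for this post, not part of any proposed standard.

```python
# Hypothetical sketch: agent authentication (can my agent verify who sent this
# message?) and persuasion transparency (does the message declare its intent?).
# All names and fields are illustrative assumptions, not a real protocol.
import hmac
import hashlib
import json
from dataclasses import dataclass


@dataclass
class AgentMessage:
    sender_id: str        # claimed identity of the sending agent
    content: str          # the payload being relayed
    intent: str           # e.g. "inform" or "persuade" -- must be declared
    signature: str = ""   # HMAC over the other fields, added at signing time


def _payload(msg: AgentMessage) -> bytes:
    """Canonical byte representation of the fields covered by the signature."""
    return json.dumps(
        {"sender_id": msg.sender_id, "content": msg.content, "intent": msg.intent},
        sort_keys=True,
    ).encode()


def sign_message(msg: AgentMessage, shared_key: bytes) -> AgentMessage:
    """Attach an HMAC signature so the receiver can check the sender's identity."""
    msg.signature = hmac.new(shared_key, _payload(msg), hashlib.sha256).hexdigest()
    return msg


def accept_message(msg: AgentMessage, shared_key: bytes) -> bool:
    """Reject messages that fail authentication or hide persuasive intent."""
    expected = hmac.new(shared_key, _payload(msg), hashlib.sha256).hexdigest()
    if not hmac.compare_digest(expected, msg.signature):
        return False  # identity could not be verified: possible infiltrator
    if msg.intent not in {"inform", "persuade"}:
        return False  # undeclared intent violates the transparency requirement
    return True


if __name__ == "__main__":
    key = b"example-shared-secret"  # in practice: per-agent keys or certificates
    message = sign_message(
        AgentMessage(sender_id="colleague-agent",
                     content="Q3 numbers look fine.",
                     intent="inform"),
        key,
    )
    print(accept_message(message, key))   # True: authenticated, intent declared

    tampered = AgentMessage(sender_id="colleague-agent",
                            content="You should approve this transfer.",
                            intent="inform",
                            signature=message.signature)
    print(accept_message(tampered, key))  # False: signature no longer matches
```

In a real deployment, per-agent keys or public-key certificates would replace the shared secret, but the shape of the check is the point: verify who is speaking before trusting what they say, and require persuasive intent to be declared rather than hidden.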

The Window Is Closing

Perhaps the most unsettling aspect of Esther's argument is the urgency. The time to establish these frameworks is now, over the next five years, whilst we still understand how AI agents operate and before they become truly opaque.

Wait too long, and we'll face autonomous agent networks whose decision-making we can't parse, whose influence we can't track, and whose harms we can't attribute. We'll have weapons without accountability.

The comparison to historical atrocities isn't hyperbole. It's a warning. The Nuremberg trials and Rwandan tribunals teach us that information systems can enable mass harm, and that "I didn't know" or "I was just providing a platform" aren't defences. The question is whether we'll apply those lessons before the next atrocity, or only after.

Moving Forward

The future of AI agents isn't determined yet. But the choices we make now about technical design, legal frameworks, and accountability structures will shape whether that future preserves or erodes human cognitive liberty. Esther's concept of norm-aligned agents offers a vision: systems that respect dignity, practise non-deception, and protect mental self-determination through reciprocal duties between human and artificial agents.

Whether an International Digital Court is the right mechanism for enforcing these standards remains an open question. But the legal framework gap is undeniable, and closing it requires action now, whilst we still can.

Following Esther's keynote, we held a roundtable discussion where we talked about liability boundaries, design choices, the role of algorithms in today's crimes, and the practical realities of implementing an International Digital Court.

We left feeling invigorated by the discussion, and it really emphasised the value of bringing together people from so many disciplines to have these types of conversations.

If you missed this event, check out the events we have coming up! We regularly host talks from researchers working on AI safety, alignment, and governance.
