As artificial agents become more autonomous and embedded in decision-making, their growing capacity to persuade and influence humans raises profound questions about agency, autonomy, and freedom of thought. My proposed talk introduces the concept of “Norm-Aligned AI Agents”: agents designed not only to pursue goals but also to internalise ethical and legal norms, including dignity, non-deception, and respect for cognitive liberty. Drawing on international criminal law, digital rights, and my research on “Systemic Command Responsibility,” I will outline how established legal principles can inform technical constraints on the persuasive capacities of agentic systems.
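To make “technical constraints” concrete, here is a minimal sketch, assuming a Python agent stack, of how non-deception and transparency norms might be encoded as hard checks at an agent's output boundary. Every name in it (ProposedMessage, NormViolation, the check functions) is hypothetical and invented for illustration; it is not an existing library API or the talk's definitive design.

```python
from dataclasses import dataclass

# Hypothetical illustration: each norm is a predicate over a proposed
# agent action, able to veto the message before it reaches a human.

@dataclass
class ProposedMessage:
    content: str
    claims: list[str]          # factual claims the message relies on
    verified: dict[str, bool]  # which claims the agent can substantiate
    discloses_ai_origin: bool  # transparency: is the sender identified as an AI?

class NormViolation(Exception):
    """Raised when a proposed action breaches an internalised norm."""

def check_non_deception(msg: ProposedMessage) -> None:
    # Non-deception: every claim the message relies on must be substantiated.
    unverified = [c for c in msg.claims if not msg.verified.get(c, False)]
    if unverified:
        raise NormViolation(f"unsubstantiated claims: {unverified}")

def check_transparency(msg: ProposedMessage) -> None:
    # Transparency: the recipient must know they are addressing an AI.
    if not msg.discloses_ai_origin:
        raise NormViolation("AI origin not disclosed")

NORM_CHECKS = [check_non_deception, check_transparency]

def release(msg: ProposedMessage) -> str:
    """Run every norm check before the message leaves the agent."""
    for check in NORM_CHECKS:
        check(msg)
    return msg.content
```

The design point, under these assumptions, is that the norms act as vetoes at the output boundary rather than as soft preferences inside the agent's objective, mirroring how legal prohibitions constrain conduct regardless of an actor's goals.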
The talk will show how AI agents' ability to shape beliefs, emotions, and decisions makes safeguarding cognitive liberty an urgent priority. I will propose that recognising reciprocal duties between human and artificial agents, such as honesty, transparency, and respect for mental self-determination, is key to maintaining trust and autonomy in an environment of increasingly powerful persuasion engines.
This vision bridges regulation and technical design, offering practical pathways for implementing norm-aligned persuasion boundaries within multi-agent settings. Participants will be invited to debate how such standards could be operationalised, which safeguards are most urgent, and whether norm-aligned AI agents represent a realistic path toward trustworthy, human-aligned agency that protects cognitive liberty.
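As one possible reading of a “persuasion boundary” in a multi-agent setting, here is a hedged sketch: a gate between agents and human recipients that enforces disclosure of persuasive intent, caps repeated persuasion attempts aimed at the same person, and keeps an audit trail so responsibility can later be attributed, echoing the command-responsibility theme. Every identifier and threshold here (PersuasionBoundary, MAX_ATTEMPTS_PER_HOUR, the one-hour window) is an assumption made for illustration, not an existing standard.

```python
import time
from collections import defaultdict

# Hypothetical sketch: all outbound agent-to-human messages pass through
# one boundary object that enforces norms and records an audit log.

MAX_ATTEMPTS_PER_HOUR = 3  # assumed cap on repeat persuasion attempts

class PersuasionBoundary:
    def __init__(self):
        self.audit_log = []                # (time, agent, human, content, delivered)
        self.attempts = defaultdict(list)  # (agent, human) -> attempt timestamps

    def send(self, agent_id: str, human_id: str, content: str,
             is_persuasive: bool) -> bool:
        now = time.time()
        key = (agent_id, human_id)
        if is_persuasive:
            recent = [t for t in self.attempts[key] if now - t < 3600]
            if len(recent) >= MAX_ATTEMPTS_PER_HOUR:
                # Boundary refuses: repeated pressure on one person is blocked,
                # and the refusal itself is logged for later review.
                self._log(now, agent_id, human_id, content, delivered=False)
                return False
            self.attempts[key] = recent + [now]
            # Transparency: persuasive content is labelled as such.
            content = "[persuasive content from an AI agent] " + content
        self._log(now, agent_id, human_id, content, delivered=True)
        return True

    def _log(self, ts, agent_id, human_id, content, delivered):
        # Audit trail supports after-the-fact attribution of responsibility.
        self.audit_log.append((ts, agent_id, human_id, content, delivered))
```

In this sketch, a call like boundary.send("agent-7", "user-42", "You should switch plans.", is_persuasive=True) succeeds, labelled, three times within an hour and returns False afterwards, with every delivery and refusal preserved in the log.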