SAFER AGENTIC AI FOUNDATIONS: A Framework for Responsible Governance of Agentic AI

by The Agentic AI Safety Community of Practice
(Nell Watson, Chair; Prof. Ali Hessami, Process Architect)

  • This framework provides a multidisciplinary foundation for governing Agentic AI systems—prioritizing human meaning, autonomy, and collective well-being.
  • It empowers AI systems to act with initiative while ensuring their goals remain transparent, corrigible, and accountable to legitimate human oversight.
  • Intended for policymakers, developers, ethicists, and oversight bodies shaping the future of powerful machine agents.

ABOUT THE SAFER AGENTIC AI COMMUNITY

The Safer Agentic AI Community is a global network of experts working to build practical safety guidelines for AI systems with independent decision-making capabilities. As AI becomes more autonomous, ensuring these systems remain aligned with human values is more important than ever.

WHAT IS AGENTIC AI?

Agentic AI systems can set and pursue goals, adapt to new situations, and make complex decisions on their own. They operate within defined boundaries but show initiative—breaking down tasks, experimenting, and adjusting to feedback. Examples include self-driving cars and AI systems managing logistics in real time.

OUR WORK

Our 25-member Working Group has released the Safer Agentic AI Foundations—a living set of recommended practices. The second volume introduces a structured framework for understanding what makes these systems safe or unsafe. Developed using a rigorous weighted-factors approach, this framework builds on years of experience creating standards and certifications for ethical AI.

KEY FOCUS AREAS

Goal Alignment

Ensuring robust alignment between operational goals and human values.

Value Alignment

Identifying, codifying, and maintaining human values in AI systems.

Safe Operations

Ensuring safe operations throughout the system lifecycle.

Epistemic Hygiene

Maintaining cognitive clarity and accurate information management.

Transparency

Creating clear, interpretable rationales for AI reasoning processes.

Goal Termination

Implementing proper protocols for task completion and system sunsetting.

Security

Implementing comprehensive protection against threats and vulnerabilities.

Contextual Understanding

Ensuring systems correctly interpret their operational context and remain controllable across contexts.

EXPLORE THE FULL FRAMEWORK

OUR EXPERTS

Nell Watson – Chair, Agentic AI Safety Experts Focus Group

Nell Watson is a respected expert in AI ethics and safety, with a longstanding focus on aligning emerging technologies to human values. As Chair of our initiative, she applies her deep interdisciplinary background—spanning engineering, philosophy, and social sciences—to shape responsible innovation strategies. Nell has contributed to multiple international standards efforts, including the IEEE 7000 series, and regularly advises organizations on trustworthy AI development and policy.

Prof. Ali Hessami – Process Architect, Agentic AI Safety Experts Focus Group

Ali Hessami is a leading authority in systems engineering and risk management. Serving as our Process Architect, he draws on decades of experience in safety and security engineering, assurance, and certification to ensure robust governance frameworks for advanced AI. Ali has played a key role in global standardization and ethics certification initiatives, helping to create transparent, secure, and ethically informed processes for responsible technology adoption.

IDEATION PARTICIPATION AND SUPPORT

Experts from diverse fields, including AI, technology, law, ethics, social sciences, safety engineering, systems engineering, assurance, and certification, have volunteered their time and expertise to support our ongoing ideation sessions. These contributors broadly fall into two groups: regular contributors and those who have participated less frequently. We are deeply grateful to both groups for their engagement, ideas, and contributions to the debates and to concept creation, development, and articulation. This process, which we term 'Concept Harvesting,' has resulted in the insights shared in this release.



Our community unites specialists from AI, technology, ethics, law, social sciences, and beyond. Together, we focus on designing future-ready frameworks and criteria that uphold ethical principles and practical safety measures in real-world deployments.

Our experts have significantly influenced internationally recognized standards and frameworks—such as the IEEE 7000 series and ECPAIS Transparency, Accountability, Fairness, Privacy, and Algorithmic Bias Certification—while also advancing new AI ethics initiatives.

SUBSCRIBE FOR UPDATES

Stay informed about the latest developments in our Safer Agentic AI research and frameworks.

GET INVOLVED

Join our growing community of practitioners committed to ensuring the safe and beneficial development of agentic AI systems.

JOIN THE COMMUNITY

FORTHCOMING BOOK


Coming: January 2026

Safer Agentic AI: Principles & Responsible Practices

This essential guide, authored by Nell Watson and Ali Hessami, builds upon our framework to provide practical strategies for implementing safety measures and aligning AI with human values.

The book offers cutting-edge insights into the unique challenges posed by agentic AI, along with actionable guidelines for policymakers, business leaders, developers, and concerned citizens navigating this complex landscape.

HAVE ANY QUESTIONS OR IDEAS?

Use the form below to get in touch with us.
We welcome feedback, questions, and collaborative opportunities.