
A Comprehensive Guide to
Contemporary AI Safety

AVAILABLE NOW, WORLDWIDE

Safer Agentic AI

Principles and Responsible Practices

A practical guide to governing, aligning, and securing autonomous AI systems. Written for policymakers, developers, and leaders navigating the challenges of increasingly capable AI.

Nell Watson & Prof. Ali Hessami, leading experts in AI ethics and safety engineering
Available at: Kogan Page. FLASH SALE: Save 40% at koganpage.com with code SALE40. Ends 1 May.

What Experts Are Saying

Praise from leaders in AI, ethics, and technology governance

WHAT IS AGENTIC AI?

Agentic AI systems set and pursue goals, adapt to situations, and make decisions autonomously. Unlike AI that merely recommends, these systems act directly in the world: breaking down tasks, experimenting, and adjusting to feedback. This shift from advice to action is why safety becomes critical: mistakes can compound before humans notice or intervene. Examples include self-driving cars and AI systems managing logistics in real time.
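The plan-act-observe-adapt cycle described above can be sketched in a few lines. This is a purely illustrative toy, not from the book: the names `run_agent`, `goal_steps`, and `execute` are hypothetical, and the "escalate to a human" break stands in for the oversight mechanisms the book discusses.

```python
def run_agent(goal_steps, execute):
    """Pursue a goal step by step, retrying once on failure.

    goal_steps: an ordered list of sub-tasks (the broken-down goal).
    execute:    a callable that acts in the world and reports success.
    """
    log = []
    for step in goal_steps:
        ok = execute(step)          # act in the world
        log.append((step, ok))      # observe the outcome
        if not ok:                  # adapt: retry the failed step once
            ok = execute(step)
            log.append((step, ok))
        if not ok:
            break                   # stop and defer to a human rather than
                                    # letting errors compound downstream
    return log
```

The final `break` is the safety-relevant design choice: an autonomous loop without such a stopping condition keeps acting on a faulty premise, which is exactly the error-compounding risk the passage above describes.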

Look Inside the Book

A comprehensive guide spanning AI fundamentals to practical governance frameworks

Table of Contents

From the Foreword

"This timely volume addresses one of our field's most daunting challenges: ensuring the safe and beneficial development of increasingly autonomous AI systems. The authors combine technical expertise and ethical insight in a comprehensive framework... This book is invaluable for AI researchers, developers, policymakers, business leaders and anyone invested in the responsible future of artificial intelligence."
Dame Wendy Hall
DBE, FRS, FREng
Regius Professor of Computer Science, University of Southampton
BOOK OVERVIEW

Safer Agentic AI

Key concepts and insights from the book


KEY FOCUS AREAS

Our framework addresses nine critical dimensions for building safe and beneficial agentic AI systems.

Goal Alignment

Ensuring robust alignment between operational goals and human values.

Value Alignment

Identifying, codifying, and maintaining human values in AI systems.

Safe Operations

Ensuring safe operations throughout the system lifecycle.

Epistemic Hygiene

Maintaining cognitive clarity and accurate information management.

Transparency

Creating clear, interpretable rationales for AI reasoning processes.

Goal Termination

Implementing proper protocols for task completion and system sunsetting.

Security

Implementing comprehensive protection against threats and vulnerabilities.

Contextual Understanding

Establishing robust control mechanisms across operational contexts.

Responsible Governance

Establishing accountability, compliance, and oversight frameworks for responsible deployment.

Built on Rigorous Research

The book expands on a comprehensive framework developed by our Community of Practice of international experts in AI, ethics, law, and safety engineering.

  • 7 Inhibitors: forces that push AI systems toward unsafe behavior

balanced by

  • 9 Drivers: principles that keep AI systems aligned and safe

About the Authors

Leaders in AI safety, ethics, and governance

Nell Watson


Chair, Safer Agentic AI Community of Practice

Engineer, ethicist, and author of Taming the Machine. Chair of IEEE's Agentic AI Expert Focus Group and President of the European Responsible AI Office. Fellow of the British Computer Society and the Royal Statistical Society. Listed as an Icon by the Royal Academy of Engineering.

Prof. Ali Hessami


Process Architect, Safer Agentic AI Community of Practice

Director of R&D at Vega Systems and Chair of the IEEE P7000 Technology Ethics Standard. Vice Chair and Process Architect of IEEE ECPAIS. Fellow of the IET and the Royal Society of Arts; Chartered Engineer. Visiting Professor at City, University of London and Beijing Jiaotong University.

OUR WORKING GROUP

Experts from diverse fields—AI, ethics, law, social sciences, and safety engineering—have contributed their time and expertise to develop this framework. We are deeply grateful for their engagement, ideas, and contributions.

Regular Contributors

  • Ali Hessami
  • Matthew Newman
  • Sara El-Deeb
  • Farhad Fassihi
  • Mert Cuhadaroglu
  • Scott David
  • Hamid Jahankhani
  • Nell Watson
  • Sean Moriarty
  • Isabel Caetano
  • Roland Pihlakas
  • Vassil Tashev
  • Keeley Crockett
  • Safae Essafi
  • Zvikomborero Murahwi
  • Lubna Dajani
  • Salma Abbasi

Occasional Contributors

  • Aisha Gurung
  • Leonie Koessler
  • Pramod Misra
  • Aleksander Jevtic
  • McKenna Fitzgerald
  • Pranav Gade
  • Alina Holcroft
  • Michael O'Grady
  • Rebecca Hawkins
  • Md Atiqur R. Ahad
  • Mrinal Karvir
  • Sai Joseph
  • Chantell Murphy
  • Nikita Tiwari
  • Tim Schreier
  • Katherine Evans
  • Patricia Shaw

JOIN OUR COMMUNITY

The Safer Agentic AI Community of Practice is a global network of experts building practical safety guidelines for autonomous AI systems. Subscribe for updates or join our LinkedIn group to connect with fellow practitioners.

Stay Updated

Get the latest on our research, framework updates, and book news.

Connect With Us

Join our LinkedIn group to engage with AI safety experts, share insights, and stay connected with the community.

HAVE ANY QUESTIONS OR IDEAS?

Use the form below to get in touch with us.
We welcome feedback, questions, and collaborative opportunities.