Whatever domain you work in, your autonomous system must be safe. Everyone who uses it, designs features for it, regulates it, or might be impacted by it needs confidence that the system will behave safely and as expected.
On this site, you will find expert, impartial guidance to help you to create a credible and compelling assurance case for your autonomous system.
The guidance is written by the Assuring Autonomy International Programme at the University of York and is peer-reviewed by industry experts. The guidance covers the core technical issues that must be considered for the safe development and introduction of an autonomous system.
Our latest guidance supports system developers, safety engineers, and assessors in designing and introducing safe autonomous systems.
The Safety Assurance of autonomous systems in Complex Environments (SACE) methodology defines a detailed process for creating a safety case for autonomous systems. It covers system-level assurance activities, defining a safety process and corresponding safety case patterns.
Browse SACE

We have developed the first methodology for the Assurance of Machine Learning for use in Autonomous Systems (AMLAS).
AMLAS has six stages, which complement the machine learning (ML) development process. It provides a set of safety case patterns and a process for systematically integrating safety assurance into the development of ML components.
Browse AMLAS