Helping you assure the safety of your autonomous system

Whatever domain you work in, your autonomous system must be safe. Whoever uses it, designs new features for it, regulates it, or might be impacted by it needs confidence that the system will behave safely and as expected.

On this site, you will find expert, impartial guidance to help you to create a credible and compelling assurance case for your autonomous system.

The guidance is written by the Assuring Autonomy International Programme at the University of York and peer-reviewed by industry experts. It covers the core technical issues that must be considered for the safe development and introduction of an autonomous system.

Our methodology for the Assurance of Machine Learning for use in Autonomous Systems (AMLAS) is the first of its kind and can help you justify the safety of your machine-learnt components.

[Figure: overview of the AMLAS methodology]

Safe machine learning

We have developed the first methodology for the Assurance of Machine Learning for use in Autonomous Systems (AMLAS).

AMLAS has six stages, which complement the machine learning (ML) development process. It incorporates a set of safety case patterns and a process for systematically integrating safety assurance into your development of ML components.

Browse AMLAS
[Figure: overview of the future guidance]

Future guidance

Our guidance provides you with proven, accessible methodologies and processes for assuring the safety of your autonomous system.

We are writing five essential pieces of guidance, all of which will be published on this website and will be free to use. These will help you build a credible and compelling assurance case for your autonomous system.

