The safety requirements are generated from the system safety assessment process, which covers hazard identification and risk analysis. Importantly, this process shall determine the contribution (i.e. the concrete failure conditions) that the output of the machine learning component makes to potential system hazards. A simplified linear chain of events linking a machine learning failure to a hazard is illustrated in Figure 3 below.
Figure 3: Simplified Chain of Failure Events (Adapted from )
Note 3 - Risk acceptance criteria
It is important for the System Safety Requirements to explicitly capture risk acceptance criteria. Such criteria can generally be derived from the following sources:
An existing system against which the proposed system can be compared. This may include an acceptance threshold set above (or below) current performance, depending on the metric.
Existing standards for the acceptance of safety-critical systems.
The views of stakeholders, including domain experts and users, who have a deep understanding of the context into which the system is to be deployed. These views may be founded on:
A scientific understanding of the processes at play, e.g. vehicle dynamics.
An understanding of the legal and ethical frameworks which govern the context.
A study of similar systems and the lessons learnt from failures within these systems and operational contexts.
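As an illustration of the first source above (comparison against an existing system), a comparative risk acceptance check could be sketched as follows. This is a minimal sketch only; the function name, the use of failure rate as the metric, and the margin parameter are illustrative assumptions, not part of any cited standard.

```python
# Minimal sketch of a comparative risk acceptance criterion.
# All names and values are illustrative assumptions.

def meets_acceptance_criterion(candidate_failure_rate: float,
                               baseline_failure_rate: float,
                               required_improvement: float = 0.0) -> bool:
    """Accept the proposed system only if its failure rate does not exceed
    that of the existing (baseline) system, optionally demanding a margin
    of improvement over current performance."""
    return candidate_failure_rate <= baseline_failure_rate - required_improvement

# Example: a component with a 1.5% failure rate compared against a 2%
# baseline, requiring at least a 0.1 percentage-point improvement.
accepted = meets_acceptance_criterion(0.015, 0.020, required_improvement=0.001)
```

Note that for metrics where higher is better (e.g. accuracy rather than failure rate), the comparison direction would be reversed, which is why the criterion in the text is stated as a threshold "above (or below)" current performance.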