The argument pattern relating to this stage of the AMLAS process is shown in Figure 7 below, and the key elements are described in the following sections.
The top claim in this argument is that system safety requirements that have been allocated to the ML component ([E]) are satisfied by the model that is developed. This is demonstrated through considering explicit ML safety requirements defined for the ML model.
The argument approach is a refinement strategy that justifies the translation of the allocated safety requirements into concrete ML safety requirements ([H]), as described in Activity 3. Justification J2.1 is explicitly provided to explain the issues involved in translating complex real‐world concepts and cognitive decisions into formats that are amenable to ML implementation. This should also explain and justify the scope of the ML safety requirements and whether any of the allocated system safety requirements were not fully specified as part of the ML safety requirements. Any such allocated requirements must be addressed as part of the system safety process. For example, allocated system safety requirements with real‐time targets, which require consideration of the performance of the underlying hardware, cannot be fully specified and tested by the ML model alone. As such, these can only be meaningfully considered by also testing the integrated ML component (i.e. Stage 5). To support this strategy, two subclaims are provided in the argument: one demonstrating that the ML safety requirements are valid, and one concerning the satisfaction of those requirements.
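AMLAS does not prescribe any tooling for this refinement, but the traceability it requires can be sketched programmatically. The following minimal Python sketch (all requirement IDs, class names and the helper function are hypothetical, not part of AMLAS) records which allocated system safety requirements each ML safety requirement refines, and flags any allocated requirement that is neither refined nor explicitly deferred to the system safety process:

```python
from dataclasses import dataclass

@dataclass
class MLSafetyRequirement:
    """An ML safety requirement derived from allocated system safety requirements."""
    req_id: str
    text: str
    derived_from: list  # IDs of the allocated system safety requirements it refines

def coverage_gaps(allocated_ids, ml_requirements, deferred_ids):
    """Return allocated requirements that are neither refined into ML safety
    requirements nor explicitly deferred to the system safety process."""
    covered = {src for r in ml_requirements for src in r.derived_from}
    return sorted(set(allocated_ids) - covered - set(deferred_ids))

# Illustrative example: a real-time allocated requirement (SSR-3) cannot be
# fully specified for the ML model alone, so it is deferred to integrated
# component testing (Stage 5) rather than translated into an ML requirement.
allocated = ["SSR-1", "SSR-2", "SSR-3"]
ml_reqs = [
    MLSafetyRequirement("MLSR-1", "Top-1 accuracy >= 0.98 on in-scope inputs", ["SSR-1"]),
    MLSafetyRequirement("MLSR-2", "Robust to sensor noise within defined bounds", ["SSR-2"]),
]
deferred = ["SSR-3"]  # addressed by the system safety process (Stage 5)

print(coverage_gaps(allocated, ml_reqs, deferred))  # -> []
```

Such a record is one possible input to Justification J2.1, since any non-empty gap list would need to be explained and justified.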
The validity claim is provided to demonstrate that the ML safety requirements are a valid development of the allocated system safety requirements. Evidence from the validation results ([J]) obtained in Activity 4 is used to support the validity claim. Justification J2.2 provides rationale for the validation strategy that was adopted for Activity 4.
This claim focuses exclusively on the ML safety requirements. The claim states that the ML safety requirements are satisfied by the ML model. The claim is made in the context of the ML model ([V]) that is generated and the data ([N], [O] and [P]) that is used to create the model. Although the satisfaction of the ML safety requirements is demonstrated through verification evidence, it is also important, as for more traditional software, to provide assurance regarding the processes used for development. The ML Learning Argument Pattern ([W]) and the ML Data Argument Pattern ([R]) are therefore used to provide argument and evidence that the model (and learning process) and the data (and data management process) are sufficient, and are discussed in detail in Stage 4 and Stage 3 respectively. The link with assurance in these stages is established using Assurance Claim Points (ACPs) (indicated by the black squares). These represent points in the argument at which further assurance is required, focusing specifically here on how confidence in data management and model learning can be demonstrated. These ACPs can be supported through instantiation of the ML Learning Argument Pattern ([W]) and the ML Data Argument Pattern ([R]) respectively.
This is a decomposition strategy based on the different types of ML safety requirements. As shown in Figure 7 above, this will include claims regarding performance and robustness requirements, but may also include other types of ML requirements such as interpretability where these requirements are relevant to the system safety requirements. This is indicated by the ‘to be developed’ symbol (i.e. diamond) under the strategy.
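The decomposition strategy above can be illustrated with a short Python sketch (the requirement IDs and type labels are hypothetical; AMLAS itself defines no such data format). It groups ML safety requirements by type so that a satisfaction subclaim can be instantiated for each group, including further types such as interpretability where relevant:

```python
from collections import defaultdict

def decompose_by_type(ml_requirements):
    """Group ML safety requirements by type so that a satisfaction subclaim
    can be instantiated for each group (performance, robustness, ...)."""
    groups = defaultdict(list)
    for req_id, req_type in ml_requirements:
        groups[req_type].append(req_id)
    return dict(groups)

# Illustrative requirements: performance and robustness are always expected,
# other types appear only where relevant to the system safety requirements.
reqs = [("MLSR-1", "performance"), ("MLSR-2", "robustness"),
        ("MLSR-3", "robustness"), ("MLSR-4", "interpretability")]
print(decompose_by_type(reqs))
```

The open-ended grouping mirrors the 'to be developed' symbol in the pattern: new requirement types simply yield additional subclaims.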
This claim focuses on the ML safety requirements that consider ML performance with respect to safety‐related outputs. The defined ML safety requirements that relate to performance are provided as context to the claim. The argument considers each of these requirements in turn and provides a claim regarding the satisfaction of each requirement (G5.1 in the ML verification argument pattern [BB]). The satisfaction of each requirement will be demonstrated through verification activities. These are discussed in more detail in Stage 5.
This claim focuses on the ML safety requirements that consider ML robustness with respect to safety‐related outputs. The defined ML safety requirements that relate to robustness are provided as context to the claim. The argument considers each of these requirements in turn and provides a claim regarding the satisfaction of each requirement (G5.1 in the ML verification argument pattern [BB]). The satisfaction of each requirement will be demonstrated through verification activities. These are discussed in more detail in Stage 5.
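The per-requirement structure of both the performance and robustness claims can be sketched as follows in Python (the claim wording, requirement IDs and helper name are illustrative assumptions, not defined by AMLAS). For each ML safety requirement, a G5.1-style satisfaction claim is instantiated and paired with whatever verification verdict is available to support it:

```python
def satisfaction_claims(requirements, verification_results):
    """For each ML safety requirement, pair a G5.1-style satisfaction claim
    with the verification verdict supporting it (True = requirement met)."""
    claims = []
    for req_id in requirements:
        verdict = verification_results.get(req_id)
        claims.append((f"G5.1: {req_id} is satisfied by the ML model",
                       verdict if verdict is not None else "no evidence"))
    return claims

# Hypothetical robustness requirements and verification evidence (Stage 5).
robustness_reqs = ["MLSR-2", "MLSR-3"]
results = {"MLSR-2": True}  # MLSR-3 not yet verified
for claim, verdict in satisfaction_claims(robustness_reqs, results):
    print(claim, "->", verdict)
```

A claim left with "no evidence" would signal that the verification activities of Stage 5 are incomplete for that requirement.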