ML verification argument pattern

The argument pattern for this stage of the AMLAS process is shown in Figure 14; its key elements are described below.

Figure 14: Assurance Argument Pattern for ML Verification

The top claim in the verification argument pattern corresponds to the bottom claim in the safety requirements argument pattern ([I]); it is at this point that each ML safety requirement that has been established must be shown to be met. Satisfaction of the requirement is demonstrated through the verification activities that are performed, as discussed in Activity 6. This claim is supported by strategy S5.1, which reasons over the verification activities undertaken, and by claim G5.2, which provides evidence from the verification log ([AA]) that the verification activities have been performed independently of the development of the ML model.
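
As an illustration only, the sketch below captures these elements of the pattern as a small data structure, as might be used by tooling that records the assurance argument. The Node class, its fields and the "G5.x" placeholder identifier for the top claim are assumptions made for this sketch; S5.1, G5.2 and the verification log artefact [AA] are taken from the pattern.

```python
from __future__ import annotations
from dataclasses import dataclass, field


@dataclass
class Node:
    """One element of the assurance argument (GSN-style)."""
    identifier: str                      # e.g. "S5.1", "G5.2"
    kind: str                            # "goal", "strategy", "solution", ...
    statement: str
    supported_by: list[Node] = field(default_factory=list)


# Top claim: the ML model satisfies the ML safety requirement ([I]).
top_claim = Node(
    identifier="G5.x",  # placeholder; the actual identifier is given in Figure 14
    kind="goal",
    statement="The ML model satisfies the ML safety requirement [I]",
    supported_by=[
        Node("S5.1", "strategy",
             "Argument over the verification activities undertaken (Activity 6)"),
        Node("G5.2", "goal",
             "Verification activities were performed independently of the "
             "development of the ML model, evidenced by the verification log [AA]"),
    ],
)
```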

To demonstrate that the ML safety requirement is sufficiently satisfied, the pattern provides a choice over how the claim can be supported. As discussed in Activity 6, the evidence may come from any combination of testing and formal verification. The choice in the argument should be interpreted as "at-least-1", allowing for multiple legs of argumentation. The combination of verification approaches used should be justified in J5.1. The "requires development" adornment on strategy S5.1 indicates that other verification approaches may also be adopted where this is judged necessary; an argument and evidence for any such approaches must be included in the assurance argument.
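
Continuing the illustrative representation above, the following sketch shows how the "at-least-1" choice and the J5.1 justification might be checked mechanically. The dictionary keys, the check_verification_legs function and the example justification text are hypothetical, not part of AMLAS.

```python
def check_verification_legs(argument: dict) -> None:
    """Check the 'at-least-1' choice under S5.1 for one ML safety requirement."""
    legs = [leg for leg in ("testing", "formal_verification", "other")
            if argument.get(leg)]
    if not legs:
        raise ValueError("S5.1 requires at least one verification leg")
    if not argument.get("J5.1"):
        raise ValueError("J5.1 must justify the combination of approaches used")
    if "other" in legs and not argument.get("other_leg_developed"):
        raise ValueError("Any additional approach needs a developed argument "
                         "and evidence in the assurance case")


check_verification_legs({
    "testing": True,
    "formal_verification": True,
    "J5.1": "Testing covers the operating domain; formal proof covers robustness",
})
```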

When the verification strategy includes test-based verification, it must be demonstrated that the ML model satisfies the ML safety requirement when the verification data is applied. The testing claim is supported through evidence from the test results ([Z]). For any ML safety requirement, the test data used will be a subset of the verification data samples ([P]) generated from Activity 7. The test data must demonstrate that the ML safety requirement is satisfied across a sufficient range of inputs representing the operating environment that are not included in the data used in the model learning stage. The sufficiency of the test data is justified in the verification log ([AA]). It is also necessary to consider the way in which the test results were obtained; this is particularly important where testing is not performed on the target system. This is considered in G5.6, where evidence must be provided to demonstrate that the test platform and test environment used to carry out the verification testing are sufficiently representative of the operational platform of the system to which the ML component will be deployed. G5.6 is not developed further as part of this guidance.
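
A minimal sketch, under assumed interfaces, of what test-based verification of a single ML safety requirement might look like: the test data is checked to be disjoint from the data used for model learning, and the requirement is evaluated over each verification sample to produce the test results ([Z]). The function name, the predicate form of the requirement and the data layout are all assumptions for illustration.

```python
def run_test_based_verification(model, test_data, training_data, requirement):
    """Illustrative test harness for one ML safety requirement.

    `model` maps an input to a prediction, `requirement` is assumed to be a
    predicate over (input, prediction, expected), and the data sets are lists
    of (input, expected) pairs -- all placeholders for this sketch.
    """
    # Test inputs ([P]) must not have been used during model learning.
    learned_on = {inputs for inputs, _ in training_data}
    if any(inputs in learned_on for inputs, _ in test_data):
        raise ValueError("Test data must be disjoint from the model learning data")

    # Apply each verification sample and record pass/fail as the test results ([Z]).
    results = []
    for inputs, expected in test_data:
        prediction = model(inputs)
        results.append({
            "input": inputs,
            "expected": expected,
            "prediction": prediction,
            "passed": requirement(inputs, prediction, expected),
        })
    return results


# Example use with a trivial stand-in model and requirement (both invented):
results = run_test_based_verification(
    model=lambda xs: sum(xs),
    test_data=[((1.0, 2.0), 3.0), ((2.0, 2.0), 4.0)],
    training_data=[((0.0, 1.0), 1.0)],
    requirement=lambda xs, pred, expected: abs(pred - expected) <= 0.1,
)
```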

When the verification strategy includes formal verification, a claim is made that the ML model satisfies formally specified properties. The formally specified properties should be a sufficient formal representation of the intent of the ML safety requirement that is being verified. A justification should be provided in J5.2 to explain the sufficiency of the translation from the ML safety requirement to the formally specified properties. The formal verification claim is supported through evidence from the formal verification results ([Z]). For those results to be valid, it must be demonstrated that the formal model created to perform the verification is sufficiently representative of the behaviour of the learnt model, and that all assumptions made as part of the verification about the system and operating environment are valid. This argument is made under G5.8, which is not developed further as part of this guidance.
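
As an illustrative sketch only, the following checks a formally specified property against a formal model of a tiny, hypothetical learnt model using interval bound propagation, one sound over-approximation technique. The network weights, input region and output bound are invented for the example; in practice the formal model, tooling and the translation of the requirement into properties (justified in J5.2) would determine how this is done.

```python
import numpy as np

# Formal model of the learnt model: a tiny two-layer ReLU network whose
# weights, input region and output bound are all invented for this sketch.
W1 = np.array([[1.0, -1.0], [0.5, 0.5]])
b1 = np.array([0.0, 0.1])
W2 = np.array([[1.0, 1.0]])
b2 = np.array([-0.2])

lo = np.array([0.0, 0.0])   # assumed operating-environment input region
hi = np.array([1.0, 1.0])


def interval_affine(W, b, lower, upper):
    """Soundly propagate an axis-aligned box through y = W x + b."""
    centre = (lower + upper) / 2.0
    radius = (upper - lower) / 2.0
    y_centre = W @ centre + b
    y_radius = np.abs(W) @ radius
    return y_centre - y_radius, y_centre + y_radius


# Interval bound propagation: affine -> ReLU (monotone) -> affine.
l1, u1 = interval_affine(W1, b1, lo, hi)
l1, u1 = np.maximum(l1, 0.0), np.maximum(u1, 0.0)
l2, u2 = interval_affine(W2, b2, l1, u1)

# Formally specified property (an illustrative translation of an ML safety
# requirement, cf. J5.2): for every input in the region, the output is <= 2.5.
print("Property proven" if (u2 <= 2.5).all() else "Property not proven by these bounds")
```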

