The ML model needs to be deployed onto the intended hardware platform and integrated into the broader system of which it is a part. Deploying the component may be a multi‐stage process in which the component is first deployed to computational hardware, which is then integrated at a subsystem level before being integrated into the final hardware platform. The deployment process will include connecting the component’s inputs to sensing devices (or equivalent components) and providing its output to the wider system. This activity takes as inputs the system safety requirements ([A]), the environment description ([B]), the system description ([C]) and the ML model ([V]) defined in the previous stages and integrates the model into the overall system.
The development of the ML model is undertaken in the context of assumptions that are made about the system into which the ML model will be integrated ([C]) and the operating environment of that system ([B]). These will include key assumptions that, if they do not hold during operation of the system, may result in the ML model not behaving in the manner established through the development and verification activities.
When an ML component used for object classification is developed, an assumption may be made that the component will only be used in good lighting conditions. This may be based on the capabilities of the sensors, historic use cases, and the data on which the component is trained and verified. It is crucial to recognise and record that this is a key assumption upon which the assurance of the ML component is based. If a system containing the component is subsequently used at low light levels then the classifications generated by the ML component may not meet its safety requirements.
Potential violations of assumptions should be linked to the system safety analysis process so that their impact on system hazards and the associated risks can be identified.
Measures shall be put in place to monitor and check the validity of the key system and environmental assumptions throughout the operation of the system. Mechanisms shall be put in place to mitigate the risk posed if any of these assumptions are violated. Further guidance on the deployment of components to autonomous systems may be found in  .
There is an assumption that the ML component for pedestrian detection deployed in a self-driving car will be used only in daylight conditions. The system monitors the light level; if it drops below the level defined in the operating environment description, the car shall hand back control to a human driver.
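The runtime assumption check described in this example can be sketched as follows. This is an illustrative sketch only: the lux threshold, the sensor reading, and the handover callback are assumptions introduced for the example, not values taken from an operating environment description.

```python
# Illustrative runtime monitor for a key environmental assumption.
# MIN_LIGHT_LUX stands in for the minimum light level that the operating
# environment description would define; the value here is assumed.
MIN_LIGHT_LUX = 400.0

events = []  # records handover requests for audit

def check_light_assumption(light_lux, request_handover):
    """Return True if the daylight assumption holds; otherwise request
    a handover to the human driver and return False."""
    if light_lux < MIN_LIGHT_LUX:
        request_handover("light level below operating environment minimum")
        return False
    return True

check_light_assumption(500.0, events.append)  # assumption holds, no event
check_light_assumption(120.0, events.append)  # assumption violated, handover requested
```

In practice the callback would trigger the system's defined risk-mitigation mechanism (here, handing control back to the driver) rather than simply recording an event.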
Monitoring the system is an ongoing runtime activity. Inputs can be monitored with appropriate statistical techniques to confirm that they remain close to the training data distributions. In some cases the model itself can provide the system with a value representing its confidence in the output. A human can also be added to the feedback loop to help audit model inputs and the environment.
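One simple statistical technique of the kind referred to above is to compare each input feature against the mean and standard deviation recorded from the training data, flagging inputs that fall too many standard deviations away. This is a minimal sketch under assumed statistics and an assumed threshold; real deployments would typically use richer distribution tests.

```python
# Minimal sketch of statistical input monitoring against training statistics.
# The training mean/std and the z-score threshold are illustrative assumptions.
class InputMonitor:
    def __init__(self, train_mean, train_std, z_threshold=3.0):
        self.mean = train_mean
        self.std = train_std
        self.z = z_threshold

    def in_distribution(self, x):
        """True if every feature of x lies within z_threshold standard
        deviations of the corresponding training-data mean."""
        return all(
            abs(xi - m) <= self.z * s
            for xi, m, s in zip(x, self.mean, self.std)
        )

monitor = InputMonitor(train_mean=[0.0, 10.0], train_std=[1.0, 2.0])
monitor.in_distribution([0.5, 11.0])  # close to training data: accepted
monitor.in_distribution([9.0, 10.0])  # first feature is 9 sigma out: flagged
```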
There will always be some level of uncertainty associated with the outputs produced by any ML model that is created. This uncertainty can lead to erroneous outputs from the model. The system shall monitor the outputs of the ML model during operation, as well as the internal states of the model, in order to identify when erroneous behaviour occurs. These erroneous outputs, and model states, shall be documented in the erroneous behaviour log ([DD]).
As well as considering how the system can tolerate erroneous outputs from the ML model, integration shall consider erroneous inputs to the model. These may arise from noise and uncertainties in other system components; as a result of the complexity of the operating environment; or due to adversarial behaviours. These erroneous inputs shall be documented in the erroneous behaviour log ([DD]).
Due to occlusion by other objects in the environment, a pedestrian may briefly not be detected in an image. Since in the real world a human does not disappear, the system can use this knowledge to ignore non‐detections of previously identified pedestrians that last for only a small number of frames.
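The persistence reasoning in this example can be sketched as a simple track that tolerates brief non-detections. The class name and the maximum tolerated gap of three frames are assumptions introduced for the example.

```python
# Illustrative persistence filter: a previously detected pedestrian is
# retained for up to MAX_GAP consecutive frames of non-detection before
# the track is dropped. MAX_GAP is an assumed value.
MAX_GAP = 3

class PedestrianTrack:
    def __init__(self):
        self.missed = 0
        self.present = False

    def update(self, detected):
        """Return whether a pedestrian should be considered present
        after this frame's detection result."""
        if detected:
            self.missed = 0
            self.present = True
        elif self.present:
            self.missed += 1
            if self.missed > MAX_GAP:
                self.present = False
        return self.present

track = PedestrianTrack()
# Two brief occlusions are bridged; four consecutive misses drop the track.
results = [track.update(d)
           for d in [True, False, False, True, False, False, False, False]]
# results -> [True, True, True, True, True, True, True, False]
```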
When integrating the model into the system, the suitability of the target hardware platform shall be considered. During the development of the model, assumptions are made about the target hardware, and the validity of those assumptions shall be checked during integration. If the target hardware is unsuitable for the ML model then a new model may need to be developed.
It may be possible to create a complex deep neural network that provides excellent performance. However, such a model might require significant computational power to execute. If the hardware on which the model will be deployed does not have sufficient computational power, then a different model may need to be created in order to reduce the computational power required.
It is important to evaluate the latency associated with accessing input data. Relying on sensing data from other systems, accessed via external networks, may unacceptably slow down the output of the ML component.
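End-to-end latency, including input acquisition, can be measured with a sketch along the following lines. The acquisition and inference functions and the 100 ms budget are illustrative assumptions, not values drawn from the guidance.

```python
# Minimal sketch for measuring end-to-end latency of one acquire-then-infer
# cycle of the ML component. The latency budget is an assumed value.
import time

LATENCY_BUDGET_S = 0.100  # assumed deadline for one inference cycle

def measure_cycle(acquire_input, run_inference):
    """Return (output, elapsed seconds) for one full cycle, so that the
    cost of input acquisition is included in the measurement."""
    start = time.perf_counter()
    data = acquire_input()
    output = run_inference(data)
    elapsed = time.perf_counter() - start
    return output, elapsed

# Stand-in acquisition and inference functions for illustration.
output, elapsed = measure_cycle(lambda: [0.1, 0.2], lambda d: sum(d))
within_budget = elapsed <= LATENCY_BUDGET_S
```

In an integrated system the acquisition function would wrap the real sensor or network read, so that slow external data sources show up directly in the measured cycle time.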
The system in which the ML model is deployed shall be designed such that the system maintains an acceptable level of safety even in the face of the predicted erroneous outputs that the model may provide.
The ML model for pedestrian detection deployed in a self-driving car has a performance requirement of 80% accuracy. Due to uncertainty in the model, this performance cannot be achieved for every frame. The model uses as inputs a series of multiple images derived from consecutive image frames obtained from a camera. The presence of a pedestrian is determined by considering the result in the majority of the frames in the series. In this way the system compensates for the possible error of the model on any single image.
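The majority-vote scheme in this example can be sketched as follows; the five-frame window is an illustrative assumption.

```python
# Sketch of the majority-vote scheme: a pedestrian is reported present
# only if detected in more than half of a series of consecutive frames.
def majority_vote(frame_detections):
    """True if a pedestrian is detected in the majority of the frames."""
    return sum(frame_detections) > len(frame_detections) / 2

majority_vote([True, True, False, True, False])   # True: 3 of 5 frames
majority_vote([False, True, False, False, True])  # False: 2 of 5 frames
```

A single misclassified frame thus cannot change the reported result, which is how the system tolerates per-frame errors while still meeting its overall accuracy requirement.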