Define and validate a minimum risk strategy for AS outside ODM

Based upon the assessments of the hazardous scenarios outside the ODM ([GG]) and the transitions across the ODM boundary ([KK]) undertaken in activities 22 and 24, a minimum risk strategy for the operation of the AS outside the ODM shall be defined. A minimum risk strategy determines how the AS should respond when moving outside the defined ODM in order to minimise safety risk. The definition of the minimum risk strategies shall take account of the risk acceptance of the relevant stakeholders, as documented in ([LL]).

The minimum risk strategy will vary depending upon the particular AS and its operation. In many cases the safest strategy may simply be for the AS to return within the ODM as quickly as possible (for instance, if an autonomous submersible vehicle sinks to a depth outside its ODM, it should rise back to a depth within the ODM). In other cases, however, this may be difficult or even impossible for the AS to achieve (for instance, if an autonomous car encounters extreme weather conditions outside of the ODM, the car cannot change those conditions and an alternative way of minimising the risk under them must be defined). The strategies employed will depend upon the state of the AS and the state of the environment, and a set of principles or heuristics may need to be employed to cover all situations. In some cases the minimum risk strategy will be a set of behaviours rather than a specific action (akin to the self‐preservation behaviours called ‘Safe Modes’ in spacecraft). Where possible, the minimum risk strategy may include the AS handing control over to a human operator.
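
As a minimal sketch of how such state-dependent strategy selection might be structured (the strategy names, fields and rules below are hypothetical illustrations, not part of this guidance), a rule-based selector could map assessed ODM exit conditions to minimum risk responses:

```python
# Illustrative sketch only: a rule-based minimum risk strategy selector.
# Strategy names, fields and rules are hypothetical examples.
from dataclasses import dataclass
from enum import Enum, auto


class Strategy(Enum):
    RETURN_TO_ODM = auto()       # e.g. a submersible rises back to a permitted depth
    DEGRADED_OPERATION = auto()  # e.g. reduced speed under unexpected conditions
    CONTROLLED_STOP = auto()     # e.g. pull over and stop in a safe location
    HUMAN_HANDOVER = auto()      # only where a competent operator is available


@dataclass
class ODMExit:
    condition: str            # which ODM dimension was breached, e.g. "depth", "weather"
    reversible: bool          # can the AS itself restore the ODM condition?
    operator_available: bool  # is a handover to a human operator feasible?


def select_strategy(exit_event: ODMExit) -> Strategy:
    """Apply simple heuristics; a real AS would need validated rules covering
    every hazardous scenario outside the ODM assessed in [GG]."""
    if exit_event.reversible:
        return Strategy.RETURN_TO_ODM   # restore the ODM as quickly as possible
    if exit_event.operator_available:
        return Strategy.HUMAN_HANDOVER  # subject to the cautions in Note 28
    if exit_event.condition == "weather":
        return Strategy.DEGRADED_OPERATION
    return Strategy.CONTROLLED_STOP     # conservative default for unanticipated exits
```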

Example 33 - Satellite ‘safe modes’

A satellite may have to cope with extended periods of loss of communications with ground stations, e.g. during eclipses. In these periods, it must be able to make decisions about situations and failures that occur on board, from loss of power to loss of orientation. These may require reverting to ‘safe modes’: a series of basic self‐preservation modes that protect key functions (e.g. orientate the solar panels towards the sun, point the antenna at the ground and listen for commands). Analogous behaviours need to be defined for many autonomous systems, e.g. robotic underwater vehicles, which must also not endanger other vessels or humans while in safe mode. A particular issue arises for autonomous road vehicles, where there are occupants and third parties to protect and the vehicle may have to cope with failures as well as unexpected operating conditions.
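
As a minimal sketch (the triggers, mode names and behaviours below are hypothetical, not drawn from any flight system), such self-preservation behaviours can be represented as a prioritised table of triggers and responses, with a conservative fallback for unanticipated situations:

```python
# Illustrative sketch only: a prioritised safe-mode table in the spirit of
# satellite 'safe modes'. Triggers, modes and behaviours are hypothetical.
SAFE_MODES = [
    # (trigger, mode, self-preservation behaviour)
    ("loss_of_power", "SUN_POINTING", "orientate solar panels towards the sun"),
    ("loss_of_orientation", "EARTH_POINTING", "point antenna at the ground and listen for commands"),
    ("loss_of_comms", "COMMS_LISTEN", "hold attitude, conserve power, listen for commands"),
]

# Most conservative mode, used when no specific trigger matches.
FALLBACK = ("MINIMAL_LOAD", "shed non-essential loads and await ground contact")


def enter_safe_mode(trigger: str) -> tuple[str, str]:
    """Select the first safe mode whose trigger matches the on-board fault;
    fall back to the most conservative mode for unanticipated situations."""
    for t, mode, behaviour in SAFE_MODES:
        if t == trigger:
            return mode, behaviour
    return FALLBACK
```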

Note 28 - Handover

Although in many situations handover of control to a human operator can be an appropriate strategy to adopt, there are many challenges that must be considered to ensure this is done safely.

  • Handover time ‐ It is crucial to assess whether a human operator can react quickly enough to take over safely when expected to do so. Experiments have shown that it can take a considerable time for a human operator to take control of an AS when required; for example, it has been seen to take around 35 seconds for a human driver to stabilise the lateral control of an autonomous car (see also [15] and [33]). The consideration here is not just the reaction time of the operator, but how long it takes until the operator is in a position to operate the system safely. This includes the need for the operator to gain sufficient situational and contextual awareness.
  • Monitoring ‐ Successful handover may require the human to monitor the state of the AS. Monitoring is not a stimulating task, and humans are generally poor at it. Consideration should therefore be given to whether it is reasonable to expect a human to monitor the behaviour of an AS effectively over long periods of time, particularly if the human operator is rarely asked to do anything more engaging. Even when a human is monitoring an AS, they are likely to miss occasions when intervention is required. There can also be problems in ensuring that all of the required information is available to the human monitor in time for them to intervene safely. This is even more challenging if the operator is remote from the AS.
  • Competency ‐ Since handovers to human operators may be quite rare, operators will often have little opportunity to practise and can become de‐skilled as a result. In many situations it is unrealistic to expect the humans overseeing autonomous functions to maintain up-to-date skills, with the associated formal certification and competence assessment (as commercial pilots must do today). The effects of this are exacerbated because the requirement to hand over will generally arise in difficult or unusual situations for which higher competency may be required.
  • Unsafe override ‐ The AS may be behaving in a safe manner, but if an operator monitoring the system does not understand what the system is doing and why, they may incorrectly decide to override it and take control. Because the operator has misunderstood the situation, the override may actually put the AS into a less safe state. It may be inherently difficult for a human operator to interpret, at any point in time, which decisions the AS is taking and why.
  • Authority ‐ Safety issues can arise when there is ambiguity over which agent has explicit authority, or when authority is transitioned between the human operator and the AS. This ambiguity can lead to unexpected actions that could be unsafe, or to mode confusion, since it becomes unclear which agent is in control of the AS at any point in time. A protocol for keeping authority explicit during handover is sketched after this list.
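
As a minimal sketch of how these concerns might be addressed in software (the class names, the 60-second readiness budget, and the confirmation callback below are all illustrative assumptions, not requirements drawn from this guidance), a handover protocol can keep authority explicit and refuse to transfer control until the operator has confirmed situational awareness:

```python
# Illustrative sketch only: a handover protocol that keeps authority explicit
# and refuses to transfer control before the operator confirms readiness.
# Names, timings and checks are hypothetical, not taken from this guidance.
import time
from enum import Enum, auto


class Authority(Enum):
    AS = auto()
    OPERATOR = auto()


class HandoverController:
    # Hypothetical readiness budget, chosen to comfortably exceed the tens of
    # seconds operators have been observed to need to stabilise control.
    READINESS_TIMEOUT_S = 60.0

    def __init__(self, confirm_ready):
        self.authority = Authority.AS       # exactly one agent in control at a time
        self.confirm_ready = confirm_ready  # callable polled for explicit confirmation

    def request_handover(self) -> Authority:
        """Keep the AS's minimum risk behaviour active until the operator
        explicitly confirms situational awareness, or the window expires."""
        deadline = time.monotonic() + self.READINESS_TIMEOUT_S
        while time.monotonic() < deadline:
            if self.confirm_ready():
                self.authority = Authority.OPERATOR
                break
            time.sleep(0.5)
        # If no confirmation arrives in time, the AS retains authority and
        # continues executing its minimum risk strategy rather than
        # abandoning control.
        return self.authority
```

Polling for an explicit confirmation (e.g. a button press supplied as confirm_ready) rather than assuming operator readiness addresses both the handover time and authority concerns above: the AS continues its minimum risk behaviour until a single, unambiguous transfer of control occurs.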

The selected minimum risk strategies ([NN]) and a justification for their sufficiency shall be documented ([MM]).

Continue to: Artefact LL. Stakeholder risk acceptance definition
