Ozzie Paez

The increasingly confusing language of automation


One of the barriers to understanding emerging technologies is language. Innovative technologies introduce new words that are initially poorly understood, including technical terms, contractions and acronyms that quickly join the techno-lexicon. The result can be confusing, imprecise language that undermines learning and shared understanding. That’s the case with the expanding vocabulary of autonomous technologies like self-driving vehicles.

This document is intended to help our readers cope with the growing language of automation. I have interpreted some of the terminology in layman’s language to make it easier for non-engineers to understand. I will reference these terms in future articles, studies and posts involving smart technologies and automation. Some definitions will evolve as we gain experience with smart, autonomous systems and equipment, including self-driving vehicles. We will revisit and update them as required. We welcome reader comments, feedback and clarifications.

Autonomous systems are systems capable of performing a wide range of tasks with very limited human involvement. These systems use a range of technologies, particularly software, to operate without external direction. Examples include flight management systems on modern aircraft and smart control systems in self-driving vehicles. This definition is different from the one used in an Internet context, where autonomous system refers to the control of routed IP (Internet Protocol) traffic.

Human factors (HF) engineering is a discipline that focuses on the interactions between systems and their human operators. HF engineers analyze how well system designs account for human strengths and limitations. They consider design factors like interfaces, responsiveness, ease of use, skill demands and cognitive workloads. Human factors focuses on how systems work in practice, with fallible human beings at the controls. Its objectives are to improve ease of use and reduce the probability of mistakes, particularly those that undermine safety[1].

Designs that don’t effectively apply human factors design principles make it difficult for operators (drivers, pilots, etc.) to do their jobs. They often contribute to the loss of situational awareness and make it more difficult to re-establish during emergencies. The BEA accident report on the loss of Air France flight 447 identified human factors shortfalls that contributed to the accident and subsequent loss of life.

Digital Human Factors engineering is an emerging discipline that focuses on the special characteristics of digital systems and their interactions with the people who operate them. It is distinct in focusing on characteristics unique to the digital domain. Specifically, it considers how smart systems, artificial intelligence, machine learning and automation account for human cognition, behavior and culture.

It also considers the scope, timing and coping mechanisms related to system failures, modifications and fixes. For example, many digital control and display systems are updated via patches delivered without operator involvement or a deep understanding of the implications. In this environment, millions of systems such as autonomous and semi-autonomous vehicles can be changed within a period of hours, so a software bug can affect millions of vehicles and drivers operating under different environmental conditions. Understanding how people will be affected, and how they will respond, is an emerging challenge for digital operations and business models.

Situational Awareness is our awareness of what is happening around us: where we are, where we are headed, how fast we are moving, and what is nearby. It includes our perception of things that might hit us (other cars) and that we might hit (cars, people, animals, etc.). Establishing situational awareness is necessary to interpret our environment and decide on a course of action such as braking, accelerating, swerving or stopping[2].

The same applies in other operational contexts, such as flying and operating power plants. Pilots with situational awareness know what is happening in their environment, including airspeed, altitude, heading and control configurations. Power plant operators with situational awareness know how much electric power is being generated, fuel and cooling status, systems status and what maintenance is being done. In its absence, pilots and operators must first re-establish situational awareness before taking effective action. The process can delay an effective response by seconds, minutes, even hours.

Loss of situational awareness has contributed to major catastrophes, including the loss of Air France 447 and the Three Mile Island and Chernobyl nuclear power plant accidents. It is a condition that is particularly dangerous when human beings suddenly have to take over highly automated operations and functions.

Operator–Mode is a type of functional control in which operators (drivers, pilots, plant operators, etc.) are actively engaged in controlling the vehicle, aircraft, power plant or system. This is the traditional operations mode prior to the introduction of autonomous systems.

Caretaker–Mode is a type of functional control in which operators monitor the automated systems that control equipment, vehicles, plants, etc. In self-driving vehicles, for example, smart controllers running sophisticated software control vehicle functions, including velocity, acceleration, direction and braking[3].

Failure detection and response in automated systems can be broadly grouped into two categories: system-detected and operator-detected. When failures are detected by monitoring systems, they can issue Warning–Takeover–Requests that instruct human operators to take control of the vehicle. When failures are detected by operators, the operators have to evaluate the situation and decide to assume control of the vehicle.
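
To make the distinction concrete, here is a minimal sketch in Python of the two pathways described above. The names and example failures are hypothetical illustrations, not drawn from any particular vehicle system.

```python
from dataclasses import dataclass

@dataclass
class Failure:
    description: str
    detected_by_system: bool  # True if the monitoring system caught the failure

def handle_failure(failure: Failure) -> str:
    if failure.detected_by_system:
        # System-detected pathway: the monitoring system issues a
        # Warning-Takeover-Request instructing the human operator to take control.
        return f"Warning-Takeover-Request issued: {failure.description}"
    # Operator-detected pathway: no warning is issued; the operator must notice
    # the failure, evaluate the situation, and decide to assume control.
    return f"No warning issued; operator must detect and take over: {failure.description}"

print(handle_failure(Failure("sensors degraded by heavy rain", detected_by_system=True)))
print(handle_failure(Failure("vehicle failed to brake at intersection", detected_by_system=False)))
```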

Warning–Takeover–Request–Response is the time it takes an operator in caretaker mode to react to a Warning–Takeover–Request issued by the autonomous system’s monitoring system and assume effective control. For example, a self-driving car’s monitoring system issues a warning that it is not detecting all objects under prevailing conditions such as rain or fog[4].

Failure–Detection–Takeover–Response is the time it takes an operator of an autonomous system to recognize that some aspect of autonomous control, such as braking, has failed and assume effective control. For example, a backup driver realizes that the self-driving vehicle failed to stop at an intersection and responds by assuming control.
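
Both response times can be thought of as simple elapsed-time measurements between a triggering event and the moment the operator assumes effective control. The sketch below uses hypothetical timestamps to illustrate the difference; the only distinction is which event starts the clock, the system’s warning or the onset of an unannounced failure.

```python
def takeover_response_seconds(trigger_time: float, control_assumed_time: float) -> float:
    """Elapsed time from the triggering event to the operator assuming effective control."""
    return control_assumed_time - trigger_time

# Warning-Takeover-Request-Response: clock starts when the system issues its warning.
warning_issued_at = 100.0   # seconds (hypothetical log timestamp)
control_assumed_at = 103.5  # operator assumes effective control
print("Warning-Takeover-Request-Response:",
      takeover_response_seconds(warning_issued_at, control_assumed_at), "s")

# Failure-Detection-Takeover-Response: clock starts at the onset of the unannounced
# failure; the operator must first notice it, so this interval is typically longer.
failure_onset_at = 200.0    # e.g., vehicle fails to brake at an intersection
control_assumed_at = 206.5  # backup driver notices and takes over
print("Failure-Detection-Takeover-Response:",
      takeover_response_seconds(failure_onset_at, control_assumed_at), "s")
```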

References

 

[1] Human Factors Engineering, PSNet, US Department of Health and Human Services, accessed April 2018, https://psnet.ahrq.gov/primers/primer/20

[2] Knowing what is going on around you (Situational Awareness), Leadership and Workers Engagement Forum, accessed April 2018, http://www.hse.gov.uk/construction/lwit/assets/downloads/situational-awareness.pdf

[3] Dean Macris, Ozzie Paez, Automation and the unaware caretakers, May 1, 2018, Ozzie Paez Research, https://www.ozziepaezresearch.com/single-post/2018/04/30/Automation-and-unaware-caretakers

[4] Vivien Melcher, Stefan Rauh, Frederik Diederichs, Harald Widlroither, Wilhelm Bauer, Take-Over requests for automated driving, Procedia Manufacturing, Volume 3, 2015, https://www.sciencedirect.com/science/article/pii/S2351978915007891
