Ozzie Paez, Dean Macris

Automation and the unaware caretakers

Updated: Oct 25, 2023

We are increasingly relying on technology to automate tasks and make decisions. Artificial Intelligence (AI), analytics, and machine learning are improving performance, productivity, and quality. It’s important to recognize, however, that these gains come at a price. For example, there is growing evidence that automation is changing the roles of operators from engaged controllers to detached caretakers. This shift has major implications for human performance and safety. Unfortunately, we sometimes ignore these trade-offs until an unexpected tragedy happens.

The Air France Airbus A330-200 that crashed on June 1, 2009, photographed during an earlier visit to Charles de Gaulle Airport in Paris.

The crash of Air France Flight 447 demonstrated how automation can undermine human performance, sometimes with tragic consequences. The trade-offs in this case were made real by the resulting loss of life. The following summary describes the accident and discusses its implications. It is based on the accident report published by the Bureau d’Enquêtes et d’Analyses pour la sécurité de l’aviation civile (BEA), the French equivalent of the American National Transportation Safety Board[1].

A tragedy unfolds

On June 1, 2009, Air France Flight 447, an Airbus A330-200, took off from Rio de Janeiro, Brazil at 7:09 PM local time. It was due to arrive at Charles de Gaulle Airport 10.5 hours later, at 11:03 AM local time. The aircraft disappeared over the Atlantic and was reported missing when it failed to report as prescribed in its flight plan. The next day, June 2, a Brazilian Air Force plane found evidence of the wreckage, including bodies, parts, and fuel floating across a three-mile stretch of ocean. The accident killed all 228 people on board.

Flight 447's debris field measured approximately 600 by 200 meters. Finding and recovering the black boxes took almost two years.

(Image released by BEA in news conference)

The plane's underwater debris field was identified ten months later at a depth of over 13,000 feet. Nearly two years after the crash, a submersible robot found and recovered the plane's black boxes. The BEA used the Cockpit Voice Recorder and Flight Data Recorder to reconstruct the terrifying last minutes of the flight. The following summary is based on their report:

1. Flight 447 encountered bad weather while flying over the Atlantic at 38,000 feet and 185 knots. Ice crystals obstructed the plane's pitot tubes, which measure airspeed. As a result, conflicting airspeed readings were sent to the plane's Flight Management System (FMS).

Location of the air sensors (pitot probes) on the Airbus A330.

(BEA Report)

2. The FMS could not resolve the conflicting airspeed readings. It responded by switching from Normal law to Alternate law, a degraded configuration in which most automated functions, including automatic throttle and autopilot, were unavailable. (A simplified sketch of this kind of voting-and-degradation logic appears after this list.)

3. The pilots were suddenly forced to take over flight functions normally controlled by the FMS. They also had to resolve the conflicting airspeed readings and determine throttle and other control settings.

4. The pilots struggled to establish situational awareness. Their actions unintentionally placed the aircraft in a near-stall condition, which they failed to recognize. The pilot flying (PF) made the situation worse by pitching the plane's nose up, which raised the angle of attack and further reduced airspeed.

5. The aircraft stalled and began falling at a rate of 10,000 feet per minute. The stall warning alarm could be heard on the Cockpit Voice Recorder. The pilots struggled to regain control of the plane, apparently unaware of the dangerous stall condition.

6. The plane fell for approximately three minutes while stalled, before impacting the ocean and killing all on board.
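To make the failure mode in items 1 through 3 concrete, here is a minimal sketch, in Python, of how a redundant air-data system might vote conflicting sensor readings against each other and degrade to a manual mode when the vote fails. The names, the 20-knot tolerance, and the voting scheme itself are simplifying assumptions made for illustration; they are not the A330's actual flight-control logic, which the BEA report describes in far greater detail.

```python
# Illustrative sketch only: invented names and thresholds, not Airbus logic.
from dataclasses import dataclass
from enum import Enum
from statistics import median


class ControlLaw(Enum):
    NORMAL = "Normal law"        # automation available, full envelope protections
    ALTERNATE = "Alternate law"  # degraded mode: pilots hand-fly with fewer protections


@dataclass
class AirDataResult:
    law: ControlLaw
    airspeed_kt: float | None  # voted airspeed, or None if unresolved


def vote_airspeed(readings_kt: list[float], tolerance_kt: float = 20.0) -> AirDataResult:
    """Compare redundant pitot-probe airspeeds against their median.

    If all readings agree within tolerance, keep Normal law and use the
    median value. If any reading disagrees (e.g., a probe iced over), the
    system cannot tell which source is valid, so it rejects the airspeed
    and degrades to Alternate law, handing control back to the pilots.
    """
    mid = median(readings_kt)
    if any(abs(r - mid) > tolerance_kt for r in readings_kt):
        return AirDataResult(ControlLaw.ALTERNATE, None)
    return AirDataResult(ControlLaw.NORMAL, mid)


print(vote_airspeed([275.0, 273.0, 276.0]))  # consistent probes -> Normal law
print(vote_airspeed([275.0, 140.0, 60.0]))   # conflicting probes -> Alternate law
```

The essential point survives the simplification: when redundant sensors disagree, the automation cannot determine the truth on its own, so it abruptly hands an ambiguous, degraded aircraft back to its human operators.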

The Cockpit Voice Recorder (CVR) captured the crew's confusion as they attempted to regain control. Their exchanges were reported by Aviation Week[2]:

[As the aircraft descended in a steep stall] … the pilot flying reacted by selecting takeoff/go-around, but CVR recordings indicate fundamental confusion. “But we’ve got the engines, what is happening?” the pilot non-flying (PNF) said. And a few seconds later the pilot flying said “I have no more control of the aircraft. I have absolutely no control of the aircraft.” At about the same time the captain came back into the cockpit, the PNF asked, “What is happening? I don’t know… I don’t know what is happening.”

The Captain eventually realized that the pilot flying was pitching the plane's nose up. He tried to intervene, but there was not enough time to recover from the stall before impact. The BEA report cited poor control ergonomics and inadequate pilot training as contributors to the crew's confusion; it did not attribute the accident to simple pilot error. The report's conclusions and the CVR transcripts delivered a chilling account of the events in the cockpit.

Implications

William R. Voss, the president of the Flight Safety Foundation, later commented on the implications of the accident and the BEA report: “We are seeing a situation where we have pilots that can’t understand what the airplane is doing unless a computer interprets it for them. This isn’t a problem that is unique to Airbus or unique to Air France. It’s a new training challenge that the whole industry has to face[3].”

Training may not be enough. Researchers have identified cognitive effects associated with people working with automated systems. Complacency and automation bias, for example, can affect operators such as pilots in modern cockpits, and when the two occur together their effects compound, undermining timely, effective decision-making:

"Automation complacency occurs under conditions of multiple-task load, when manual tasks compete with the automated task for the operator's attention. Automation complacency is found in both naive and expert participants and cannot be overcome with simple practice. Automation bias results in making both omission and commission errors when decision aids are imperfect. Automation bias occurs in both naive and expert participants, cannot be prevented by training or instructions, and can affect decision-making in individuals as well as in teams [emphasis added]. While automation bias has been conceived of as a special case of decision bias, our analysis suggests that it also depends on attentional processes similar to those involved in automation-related complacency.[4]"

Operators functioning in "caretaker mode" are subject to distractions that undermine their situational awareness. When an unexpected incident occurs, they must snap back to attention, regain situational awareness, diagnose the problem, and take effective action within a short period of time. It's a process fraught with risks of confusion, indecision, and paralysis. Effects like automation complacency appear to be wired into our cognitive processes and are thus resistant to traditional corrective measures, including more and different training[5].

Conclusion

We are making enormous advances in artificial intelligence, robotics, and automation. Modern planes can fly themselves with little human intervention. Self-driving cars are quickly gaining acceptance. Power plants, factories, ships, trains, and stock trading are increasingly run by smart systems. Computers can analyze huge quantities of data, make decisions, and take actions in a fraction of the time people require.

Smart systems can be efficient, reliable, and competent, but not wise. That's why we insist on having humans in cockpits, behind the wheel, and in control rooms, just in case. Research and growing experience suggest these expectations are unrealistic. Operators in caretaker mode can lose situational awareness and fall prey to automation complacency, and their ability to intervene during unexpected events cannot be assured, no matter how much training they receive. These are sobering thoughts at a time when emerging strategies and business models rely on automation to deliver competitive advantage.


Image references

Aircraft image courtesy of Wikipedia and Pawel Kierzkowski, under the Creative Commons Attribution-Share Alike 3.0 Unported license: https://commons.wikimedia.org/w/index.php?curid=7937860

References

 

This blog post is based on an earlier post on the Ozzie Paez Decisions blog, "The Case of the Unaware Caretakers," August 28, 2012.


[1] Bureau d’Enquêtes et d’Analyses pour la sécurité de l’aviation civile (BEA). “Final Report on the Accident on 1st June 2009 to the Airbus A330-203, Registered F-GZCP, Operated by Air France, Flight AF 447 Rio de Janeiro – Paris.” July 2012. http://www.bea.aero/en/enquetes/flight.af.447/rapport.final.en.php

[2] Wall, Robert; Flottau, Jens. “Lessons of Air France 447 Start to Emerge.” Aviation Week. August 8, 2011. http://www.aviationweek.com/aw/generic/story_generic.jsp?channel=awst&id=news/awst/2011/08/08/AW_08_08_2011_p39-355091.xml

[3] Clark, Nicola. “Report on ’09 Air France Crash Cites Conflicting Data in Cockpit.” New York Times. July 5, 2012. http://www.nytimes.com/2012/07/06/world/europe/air-france-flight-447-report-cites-confusion-in-cockpit.html

[4] Parasuraman, Raja; Manzey, Dietrich H. “Complacency and Bias in Human Use of Automation: An Attentional Integration.” Human Factors: The Journal of the Human Factors and Ergonomics Society. June 2010, p. 381. https://www.ncbi.nlm.nih.gov/pubmed/21077562

[5] Charette, Robert N. “Air France Flight 447 Crash Causes in Part Point to Automation Paradox.” The Risk Factor Blog, IEEE Spectrum. July 10, 2012. http://spectrum.ieee.org/riskfactor/aerospace/aviation/air-france-flight-447-crash-caused-by-a-combination-of-factors
