You might recall last year’s fatal accident involving a self-driving Tesla. It reinforced the need to stay vigilant even as we are assured (or assure ourselves) that “the machine will handle it.” This is commonly identified as “automation complacency.” An older but equally applicable term is “Titanic Syndrome,” as in, “It’s been well-engineered and well-tested. Nothing can go wrong, and if it did, well, the machine would handle it. Right?”
Robots and accidents
I first encountered this issue in a significant way in the early 1980s, while working with an industry oversight group, the Motor Vehicle Manufacturers Association (MVMA). There had been a rash of serious accidents, including fatalities, in the auto industry’s fabrication and assembly plants, and the MVMA was intent on addressing the problem. As we interviewed workers and formed our own impressions of the problem, we learned that a number of the accidents involved unsafe acts in the presence of production robots, which were just then becoming increasingly common in the industry.
While robotic arms, for instance, were designed to reduce manual labor and expedite production, workers (skilled tradesmen as well as production operators) had to learn new habits in order to work safely around them. When running, an arm was going to do what it was programmed to do, whether or not a worker was in the way. Workers had to watch out for the robotic arm as it swept through its programmed cycle, but the early-generation robot could not watch out for them. Workers sometimes acted as though it could.
Various engineering changes were made to make such accidents less likely. Increasing emphasis was put on consistently using lockout/tagout before entering a potentially lethal situation. But none of the environmental “fixes” eliminated the problem entirely. It was still possible for a worker to violate the safety-engineered system (e.g., forget to lock out, choose not to, or reach over or around a machine guard into a protected area). Vigilance was still necessary.
In the control room
My next encounter with the issue occurred around the same time, in my work with nuclear power production. Control room operators can, to some extent, “babysit the technology.” But the engineering is never perfect (witness the Three Mile Island incident), and unanticipated “rare events” can happen. Operators must keep their brains engaged at all times. The need for them to communicate with each other effectively (work as a team) and to diagnose problems effectively does not go away just because the computers are running the plant, and doing so quite well… almost all of the time.
In the cockpit
The third encounter occurred later, when I spent a sabbatical year at the University of Texas at Austin, working with a research team focused on aircraft safety. This team laid the foundation for what came to be called “crew resource management” (CRM) training. A central outcome of our research was the finding, now accepted as commonplace, that human error was the primary contributing cause in the vast majority of aircraft incidents. Hence the focus on such cognitive and interactive competencies as situational awareness (mindfulness/vigilance), communication and feedback, workload distribution, group problem solving, and stress awareness.
A second finding was that as automation had become enhanced in new, next-generation aircraft (the so-called “glass cockpit”), the number of incidents had indeed decreased, but certainly not to zero, and the pattern of accident-producing errors had shifted. Now, failing to activate the “correct” navigation system, or even a keystroke error, could have severe consequences. A stark and dramatic example is KAL 007. In September 1983, a Boeing 747 jumbo jet was shot down by a Soviet fighter as it unintentionally flew well off course on its way from Anchorage to Seoul. The crew had made a series of uncorrected errors: they programmed the route incorrectly and failed to notice that they were far off their intended flight plan, ultimately straying into Soviet airspace.
Airborne automation
There are other, less well-known examples of automation complacency in the era of the glass cockpit. Modern aircraft can do virtually all of the flying once the crew, on takeoff, makes the decision and takes the action to lift the nose wheel and go airborne. Everything else, including the landing, can be programmed into the computers. The good news is that most pilots (and certainly those with whom I have been privileged to fly up front) like to fly, so they stay hands-on and mainly use the automation at altitude. But again, the heavy automation of the cockpit does not eliminate the need for vigilance; on the contrary, not only does it fail to guarantee a successful outcome independent of operator vigilance, it may lull a crew into a false sense of security.
The bottom line is that no automation is foolproof. Automation can make our jobs much easier. But it can also encourage us to be less vigilant (automation complacency), and we relax our vigilance at our peril. The machine does what it is programmed to do (by humans, of course, who can make mistakes). It is not mindful. It does not know our intent. There are limits to how self-correcting it can be. No matter how sophisticated the technology, the humans using it must maintain vigilance.