Human-centered automation

How will we manage the transition to increased automation? The problem of human-centered design and automation, examined through the lens of the AF447 crash.

Self-driving cars have long held a place in the popular imagination, promising restful commutes and magically efficient, swarm-like traffic management. But the ultimate end-state is much more easily envisaged than the process of transition, which will be long and incremental. Vehicle automation is not a binary quality; many of today’s car features already sit on a spectrum of automation, including antilock brakes, power steering, parking assist, and adaptive cruise control.

Adopting and integrating each of these features requires drivers to adjust their behavior as operational responsibilities are gradually ceded to computers. Within this transition lies an inflection point, where people must relinquish their sense of control yet maintain a degree of responsibility. This would mark a fundamental reconfiguration of our relationship with automobiles, which have always stood for individual agency. It is also where a new set of safety concerns arises: humans must be able to recover from the failures of imperfect computer control.

Fortunately, a precedent for such a transition can be found in commercial aircraft autopilot systems. In “The Human Factor,” written for Vanity Fair, William Langewiesche tells the story of Air France Flight 447, and how the fuzzy relationship between the pilots and the safety systems led to its crash in the Atlantic. He describes the phenomenon of pilots “de-skilling” as their role has shifted from active flying to an almost purely monitoring function. The scenarios that require active control by pilots are defined by their unpredictability, which also makes them impossible to train for:

Boeing’s Delmar Fadden explained, “We say, ‘Well, I’m going to cover the 98 percent of situations I can predict, and the pilots will have to cover the 2 percent I can’t predict.’ This poses a significant problem. I’m going to have them do something only 2 percent of the time. Look at the burden that places on them. First they have to recognize that it’s time to intervene, when 98 percent of the time they’re not intervening. Then they’re expected to handle the 2 percent we couldn’t predict. What’s the data? How are we going to provide the training? How are we going to provide the supplementary information that will help them make the decisions?”

The podcast 99% Invisible also covered the crash of Flight 447, introducing the phrase “mode confusion” to describe the pilots’ lack of information about the state and parameters of the fly-by-wire system. Pilots don’t yearn for the days of cockpits full of an endless series of toggles and levers that fly-by-wire replaced, but for the sense of active awareness that such immersion imparts.

Ultimately, automation raises societal and ethical concerns. A systems failure triggered by severe storm conditions at 35,000 feet presents an ethical vacuum of sorts: there is little at stake beyond the aircraft and its occupants. A car, by contrast, has far more opportunity to collide with other vehicles, bicycles, and pedestrians, and otherwise interact with the outside world. The MIT Technology Review touched upon the consequent dilemmas in “Why Self-Driving Cars Must Be Programmed to Kill”:

How should the car be programmed to act in the event of an unavoidable accident? Should it minimize the loss of life, even if it means sacrificing the occupants, or should it protect the occupants at all costs? Should it choose between these extremes at random?

The answers to these ethical questions are important because they could have a big impact on the way self-driving cars are accepted in society. Who would buy a car programmed to sacrifice the owner?
