Tim Harford, in the Guardian, on how computers are setting us up for disaster:
We fail to see that a computer that is a hundred times more accurate than a human, and a million times faster, will make 10,000 times as many mistakes […]
As someone terrified of flying, I have been morbidly fascinated by the 2009 crash of Air France Flight 447 for half a decade. The Airbus was smart enough to stop the pilots from crashing it in some ways, but not others. When the autopilot shut off, the pilots were unprepared to take control of the plane; one of them made a fairly basic mistake and stalled the plane into the Atlantic Ocean.
Harford points out that we might have this arrangement backwards.
An alternative solution is to reverse the role of computer and human. Rather than letting the computer fly the plane with the human poised to take over when the computer cannot cope, perhaps it would be better to have the human fly the plane with the computer monitoring the situation, ready to intervene. Computers, after all, are tireless, patient and do not need practice. Why, then, do we ask people to monitor machines and not the other way round?
When humans are asked to babysit computers, for example, in the operation of drones, the computers themselves should be programmed to serve up occasional brief diversions. Even better might be an automated system that demanded more input, more often, from the human – even when that input is not strictly needed. If you occasionally need human skill at short notice to navigate a hugely messy situation, it may make sense to artificially create smaller messes, just to keep people on their toes.
See also my favorite law firm, Robot Robot & Hwang.