Last year, I became fairly obsessed with superintelligent artificial intelligences. I dipped a toe into Iain M. Banks's Culture series, science fiction set in a distant future where humanity has created thousands of godlike AIs to fly its ships and terraform its worlds. I do recommend it.
The next book I read was “Superintelligence: Paths, Dangers, Strategies” by the philosopher Nick Bostrom. Bostrom actually gets paid to think (and write nonfiction!) about artificial intelligence, what it might look like, and when it might arrive. We’ve all seen The Terminator and The Matrix, so you get the gist of how scary the “what” could be.
Raffi Khatchadourian, writing in The New Yorker, has a great review of the book and interview with Bostrom. It’s called “The Doomsday Invention,” and it covers the “when” of AI. Note that the expert consensus on AI is that we’re about twenty years away from being able to create it, and that we’ve been twenty years away for about sixty years. Here’s a representative passage:
For decades, researchers, hampered by the limits of their hardware, struggled to get the technique to work well. But, beginning in 2010, the increasing availability of Big Data and cheap, powerful video-game processors had a dramatic effect on performance. Without any profound theoretical breakthrough, deep learning suddenly offered breathtaking advances. “I have been talking to quite a few contemporaries,” Stuart Russell told me. “Pretty much everyone sees examples of progress they just didn’t expect.” He cited a YouTube clip of a four-legged robot: one of its designers tries to kick it over, but it quickly regains its balance, scrambling with uncanny naturalness. “A problem that had been viewed as very difficult, where progress was slow and incremental, was all of a sudden done. Locomotion: done.”
In an array of fields—speech processing, face recognition, language translation—the approach was ascendant. Researchers working on computer vision had spent years trying to get systems to identify objects. In almost no time, the deep-learning networks crushed their records. In one common test, using a database called ImageNet, humans identify photographs with a five-per-cent error rate; Google’s network operates at 4.8 per cent. A.I. systems can differentiate a Pembroke Welsh Corgi from a Cardigan Welsh Corgi.
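That corgi example is easy to try yourself these days. Here’s a minimal sketch of ImageNet-style classification with a pretrained network from torchvision; the ResNet-50 model and the corgi.jpg filename are my own illustrative choices, not the actual system behind the Google benchmark quoted above.

```python
# Minimal sketch: classify an image against the 1,000 ImageNet categories
# using a pretrained ResNet-50 from torchvision (requires torch, torchvision,
# and Pillow). "corgi.jpg" is a hypothetical input image.
import torch
from torchvision import models
from torchvision.models import ResNet50_Weights
from PIL import Image

weights = ResNet50_Weights.IMAGENET1K_V2
model = models.resnet50(weights=weights)
model.eval()

# The weights object bundles the preprocessing (resize, crop, normalize)
# that this particular model expects.
preprocess = weights.transforms()

img = Image.open("corgi.jpg").convert("RGB")
batch = preprocess(img).unsqueeze(0)  # add a batch dimension

with torch.no_grad():
    probs = model(batch).softmax(dim=1)

# Print the five most likely categories with their probabilities.
top5 = probs.topk(5)
categories = weights.meta["categories"]
for p, idx in zip(top5.values[0], top5.indices[0]):
    print(f"{categories[idx]}: {p.item():.3f}")
```

Fittingly, the ImageNet label set includes separate classes for the Pembroke and Cardigan Welsh corgis, so a network that scores well on this benchmark really does have to tell them apart.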
We’re not going to go extinct tomorrow, next year, or in ten years, but these systems are getting smarter at a startling rate. It’s exciting, and only a little scary.