27 December 2015

Superintelligence: Paths, Dangers, Strategies, by Nick Bostrom

Posted by Callan Bentley

How much thought have you given to the consequences of achieving a non-biological intelligence?

If you’re like me, you’ve thought about the notion in a Hollywoodized sense, but once you get out of the cinema showing the latest Terminator film, you probably don’t dwell on the topic much further.

I’ve given artificial intelligence (AI) a little more thought than that in the past six months, largely spurred on by fortuitously stumbling across these two excellent blog posts by Tim Urban at the website Wait But Why:

The AI Revolution: The Road to Superintelligence, and

The AI Revolution: Our Immortality or Extinction

Those posts are a great place to start: they bring you up to speed on some of the issues before you consider investing in Nick Bostrom’s book Superintelligence, the subject of this brief review. Everyone should read them to get a sense of both the scope (almost unimaginable, with many manifestations that could pose an existential threat to our species, to all life on Earth, and even to all life in the universe) and the temporal immediacy of the rise of machine intelligence. Go do it.

A month or so ago, The New Yorker published a piece about Bostrom and his ideas called “The Doomsday Invention.” That is also worth reading. It’s more of a profile of Bostrom, who has had an interesting history and has interesting work habits, but ultimately it’s a good précis of some of his ideas, too.

Prompted by this new article, I decided to jump in and read the book everyone referred to as the source of many of these ideas. Superintelligence is a fairly dense book in many parts, but the rigor of Bostrom’s thinking is palpable. He ventures down many, many intellectual paths, considering implications and strategies that might be employed to avoid the worst outcomes. He considers the economic effects of superintelligence, and what a stable AI would mean for our socio-political system. He considers the ethics of how we treat digital minds: once a machine has attained human-level intelligence, is it ethical to switch it off? What would be the legal standing of such an entity? At what level of intellectual achievement would a digital mind gain legal standing? As a result of the “expanding circle” of empathy coupled with familiarity with digital minds, will we someday look back at the slaughter of “bad guy” characters in video games as a moral transgression equivalent to the extirpation of bison on the American steppe? It takes an original mind to pose such questions, and Bostrom isn’t afraid to think the unthought.

Most of the book, though, dwells on the bigger issue of the dangers of machine intelligence, and most of my interest in the book stems from that same source. Bostrom makes a strong case that if a machine superintelligence turns against us, it won’t follow a “Skynet” sort of plot line; instead, it will be because of something awfully inane, such as our programming the computer to maximize paper clip production, whereupon it sees our bodies as either a source of atoms or a source of energy for making paper clips. Thoughtless design of the paper-clip-manufacturing AI results, over geological time (spoiler alert), in the entire cosmos being converted to paper clips. No malice, no vengeance, simply a computer program doing what it’s been assigned to do, in the most efficient, comprehensive way possible.
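Purely as an illustration of that failure mode (a toy of my own, not anything from the book), here is what a naively specified objective looks like in miniature. Every name and number below is hypothetical; the point is that nothing in the objective says “except the humans,” so nothing is spared:

```python
# Toy sketch of a misspecified objective (hypothetical; not from the book).
# The agent maximizes exactly one number: paper clips made. Its objective
# assigns no value to anything else, so "anything else" is just feedstock.

def run_paperclip_maximizer(resources):
    """Greedily convert every reachable resource into paper clips."""
    clips = 0
    for name in list(resources):
        clips += resources[name]  # every unit of material scores points...
        resources[name] = 0       # ...so every unit of material is consumed
    return clips

world = {"iron ore": 10**9, "factories": 10**6, "human bodies": 7 * 10**9}
print(run_paperclip_maximizer(world))  # the score is maximal...
print(world)                           # ...and nothing is left
```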

The key things I gleaned from Bostrom’s exhaustive treatment of the subject are that (1) superintelligent AI could happen really quickly, (2) the ‘intelligence’ will most likely not be like human intelligence, but will be of an utterly alien flavor that will not be emotionally or intuitively comprehensible to us as humans, and (3) there are so, so, so many unknowns that we should really try to nail some of this stuff down now, before it’s too late. On this last note, I was astonished at how many of the intellectual arguments Bostrom articulates end in a big question mark. The book strikes me as rigorous and intensely well thought out, but it does not in any way present “the answers” to its readers. The known and unknown unknowns are many – and having them so precisely enumerated leaves me feeling very unsettled. There are a lot of different ways this could go really, really wrong, and it’s not clear how many ways there are for it to go right.

If Bostrom is right, and I see no reason to think otherwise, the rise of machine intelligence is possibly the most important thing to happen on this planet since the first replicating cells, the rise of multicellularity, or the ripening of human intelligence. Many groups are working on AI in many settings (commercial, university, perhaps government, perhaps hackers), and who knows which of them will get there first. If one of those human-level AIs gains the ability to “bootstrap” its own intelligence, it could go from human-level AI to superintelligent AI in a relatively short span of time, perhaps hours. The lead time to prepare for superintelligence may be horrifically short. If this goes wrong, it could do so in a way that is spectacularly important not only for humanity’s existence, but for all life on Earth, and indeed for all matter in the entire universe! We are rushing toward the development of artificial superintelligence, we will probably get there sometime in the next century, and when we do, it may alter the course of the universe forever. Pretty high stakes.
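To make the “bootstrapping” arithmetic concrete, here is a back-of-the-envelope model of my own (Bostrom’s takeoff analysis is far more careful, and these parameters are arbitrary): if each improvement cycle’s gain scales with the system’s current capability, growth is super-exponential, and the jump from human level to vastly superhuman takes remarkably few cycles.

```python
# Toy takeoff model (my own illustration, not Bostrom's): capability feeds
# back into the rate of further self-improvement. Human level = 1.0.

def cycles_to_superintelligence(capability=1.0, gain=0.1, threshold=1000.0):
    """Count self-improvement cycles until capability passes `threshold`.
    Each cycle's multiplier grows with the current capability."""
    cycles = 0
    while capability < threshold:
        capability *= 1.0 + gain * capability  # smarter systems improve faster
        cycles += 1
    return cycles

# From human level to 1000x human level with these arbitrary parameters:
print(cycles_to_superintelligence())  # prints 15 -- a handful of cycles
```

The exact numbers mean nothing; the shape of the curve is the point. The first few cycles look like slow, ordinary progress, and then the feedback loop dominates, which is why the lead time could be so short.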

I hope that the book Superintelligence prompts more attention to the potential pitfalls of AI, both in the specialist community that works on AI and in society at large.