Superintelligence is a book about the future of artificial intelligence. AI – that’s the useful abbreviation for artificial intelligence – refers to the braininess of computers. Engineers and programmers are making computers smarter all the time and Nick Bostrom’s book is partly about ways to develop a superintelligent computer. But mostly it’s about the horrific dangers such a brilliant thinking machine would pose to us humans, and the different ways to defend ourselves against extermination or enslavement by the machine.
Superintelligence isn’t light reading. Quite the contrary; it’s dense and long. The paperback edition here at hand has 415 pages, including 91 pages of notes, glossary, bibliography and index, all in rather small type. Nevertheless, it was a bestseller when it came out in 2014, was reviewed favorably, and has become a standard work in the field.
It’s always good to be prepared for whatever disaster might come along, but a 400-page book about why we should get ready to defend ourselves against a smart computer seems an unlikely bestseller. On the other hand, robots are taking jobs from factory workers, and self-driving cars are about to displace truckers and chauffeurs, so a scenario featuring a sinister AI machine that dominates the world may not appear far-fetched.
Nick Bostrom’s book is a closely woven tapestry of speculations. In that regard it’s a lot like science fiction, but the author is careful to avoid sounding that way. Bostrom uses the vocabulary and prose style of science, plus occasional diagrams and charts, to compose a fantastical, intricately ornamented thesis. Sometimes the text reads like a parody of scientific prose. Here’s the author writing about a superintelligent computer which is seeking more and more resources:
The likely manifestation of this would be the superintelligence’s initiation of a colonization process that would expand in all directions using von Neumann probes. This would result in an approximate sphere of expanding infrastructure centered on the originating planet and growing in radius at some fraction of the speed of light; and the colonization of the universe would continue in this manner until the accelerating speed of cosmic expansion (a consequence of the positive cosmological constant) makes further procurements impossible as remoter regions drift permanently out of reach (this happens on a timescale of billions of years).
You get the point. To be fair, most of the writing in Superintelligence is comparatively plain science-speak and reaches that baroque height only now and then.
Early in Superintelligence Bostrom says “The availability of the brain as a template provides strong support for the claim that machine intelligence is ultimately feasible.” The word “ultimately” does a lot of work in that sentence. The fact that brains give us our intelligence doesn’t mean that a machine with true intelligence is feasible now or in the foreseeable future. Our brains are a stunningly complex electrochemical stew, the result of a blind evolutionary process that began maybe six million years ago, give or take a few million. Humans have around a hundred billion neurons with many more connections between them. For several decades biologists have been studying a one-millimeter roundworm, the nematode C. elegans, which has only 302 neurons. They mapped out its brain years ago. Though several have tried, no one has been able to simulate the brain of C. elegans on a computer. Nick Bostrom suggests that if AI continues on its current path and creates a superintelligent computer in ten years, then one built by emulating the human brain is, maybe, fifteen years away.
The author’s definition of intelligence comes as an interruption in a sentence about the long-delayed arrival of intelligent machines. He says, “Machines matching humans in general intelligence – that is, possessing common sense and an effective ability to learn, reason, and plan to meet complex information-processing challenges across a wide range of natural and abstract domains – have been expected since the invention of computers in the 1940s.” Apart from the opaque phrase about meeting complex information-processing challenges et cetera, Bostrom’s definition is straightforward. Other workers in the AI field have come up with other definitions, but they’re all more or less the same. Anyway, for Superintelligence it’s enough to say that an intelligent machine will have common sense and an ability to learn, to reason, and to adapt to changing circumstances.
Some of the things that computers already do – understanding simple speech, recognizing faces, reading handwritten addresses on mail – though not done perfectly, are called intelligent actions. A bundle of such systems runs on a typical mobile phone and, naturally enough, we call those gadgets smartphones. Bostrom doesn’t get into how these systems work, but reading even a little of the technical material elsewhere will give you an idea of how difficult it can be to teach a machine to do these things. DeepText, which has been developed by Facebook, is a very advanced system that can help the social media giant analyze several thousand messages per second in 20 languages well enough to get the job done. It’s as impressive as anything else in the civilian world, and there may be even more exotic intelligent machines at work for certain government departments.
One of the premises of Superintelligence is that as the field of AI continues to grow and move ahead it will produce ever more intelligent systems, and sooner or later these will lead to a device so bright it will help humans make a superintelligent machine or, maybe, it will – all by itself – use its smarts to improve itself to a superintelligent level.
Shortly after saying that the existence of our brains indicates the ultimate feasibility of making an intelligent machine, Nick Bostrom gives us a warning: “Before we end this subsection, there is one more thing that we should emphasize, which is that an artificial intelligence need not resemble much human intelligence. AIs could be – indeed, it is likely that most will be – extremely alien.” Those are among the most interesting and provocative sentences in this work. Of course he’s right. If a machine were to have something that we recognized as intelligence, we would see it as utterly different from our own.
The same paragraph that warns us about the alien intelligence of computers also tells us, “Furthermore, the goal system of AIs could diverge radically from those of human beings.” That’s interesting and scary. The “goal system” of computers today is created by humans when they make specialized computers or give a general-purpose computer a set of instructions, a program or algorithm, to carry out certain carefully and narrowly defined tasks, such as monitoring speed and distance or, with more complexity, driving a car. In a story published in The Magazine of Fantasy & Science Fiction, a crowd of self-driving cars got together, formed a kind of neural network and began to have goals of their own, holding passengers as hostages until the cars’ demands were met.
That a superintelligent computer could develop its own goals is exactly why Nick Bostrom wrote Superintelligence. And what gives the book its length is the author’s astonishingly energetic imagination as he describes in exquisite detail the maniacal and lethal things a really brainy computer could do – and would likely do – if not stopped by us. Furthermore, the superintelligent computer not only has an “intelligence system” which is alien to our way of thinking, and a “goal system” which most likely doesn’t match our goals, it also has ways of achieving its goals. But the ways of a superintelligent computer – or “agent” – are mysterious, and Bostrom never describes any plausible ways the machine could physically act.
About a third of the way through the book is a chapter that begins “Suppose that a digital superintelligent agent came into being, and that for some reason it wanted to take control of the world: would it be able to do so?” Nick Bostrom’s short answer is Yes. But you guessed that already, right? His long answer is a series of fantastic scenarios such as you might find in a dark dystopian science-fiction anthology. Under the heading Life in an Algorithmic Economy we find a world where “As our numbers increase and our average income declines further, we might degenerate into whatever minimal structure still qualifies to receive a pension – perhaps minimally conscious brains in vats, oxygenized and nourished by machines, slowly saving up enough money to reproduce by having a robot technician develop a clone of them.” Well, you get the picture.
In Bostrom’s vocabulary, on the day a superintelligent AI takes over the world we will have what he calls a “singleton.” At that point the superintelligent machine will be all knowing and all powerful, with a mind so far beyond our own we cannot comprehend it, and a goal we can only guess at. Our radiantly brilliant machine’s reach will be an infinite sphere whose center is everywhere and whose circumference is nowhere. That last phrase isn’t by Bostrom, but from an anonymous 13th century author describing God – Deus est sphaera infinita, cuius centrum est ubique, circumferentia nusquam. Bostrom’s superintelligent agent and the God of the Book of 24 Philosophers are pretty much alike.
As for us humans, our brains give us whatever we have in the way of common sense or the ability to learn and to reason. Our memory, our intelligence, our reasoning power – these aren’t impressive when compared to Nick Bostrom’s god-like superintelligent computer. But they were good enough for us to get by on, and we’ve become the dominant species here on earth.
Our brains give us not only our learning and reasoning abilities, they also give us our emotional lives. Furthermore, our rational and emotional selves are deeply enmeshed, and it’s not clear how well they can be separated, or whether we can solve certain problems without bringing the two to work together. Finally, brains give us an awareness not only of our environment and our bodily self, they also make us aware of an inner thinking-and-feeling self. In other words, they make us more fully conscious than anything else we know about.
An old-fashioned hand-cranked adding machine could add a column of big numbers and get it right every time, but nobody called it a smart machine or thought it intelligent. A machine is a machine, and Nick Bostrom is scrupulous to remind us that the machine’s intelligence is alien to human intelligence. You have to be reasonably bright and have a certain schooling to solve differential equations. Starting in 1927 Vannevar Bush made a wonderful machine to solve differential equations, but nobody regarded that room-size assemblage of rods, gears and electrical circuits as intelligent. You could see it was a machine. No one ever thought a hand-cranked adding machine understood what it was doing. Nor did anyone think the differential analyzer understood differential equations, even though it was, after all, a big analog computer.
PC Magazine recently reviewed robotic floor cleaners; they’re helpful little machines that scoot along the floor, cleaning as they go. PC Magazine calls them intelligent – maybe we all do – but none of those devices knows it’s cleaning the floor. Self-driving cars are far more complicated than floor vacuums, but a self-driving car isn’t aware that it’s chauffeuring us around. None of these intelligent systems has thoughts about anything. They don’t think. Facebook’s DeepText program doesn’t understand what it reads the way you do. When you type an English sentence into Google Translate, you’re the only one who understands what you’ve typed. Google Translate doesn’t understand it. But if you ask it to translate the sentence into French or Italian, or any other language, it will do a pretty good job of translating, even though it doesn’t understand French or Italian, either.
AI people are usually careful not to ascribe emotions to intelligent systems. Instead of saying that the floor cleaning robot wants to sweep the floor, the AI engineer will say that the goal of sweeping the floor is built into the machine. Bostrom is scrupulous to write about the goals of his impressively intelligent agents, but sometimes it’s just not a very effective way of writing, especially when the agent is developing goals of its own. So we find the author saying, “Suppose that a digital superintelligent agent came into being, and that for some reason it wanted to take control of the world…” Thus the machine has desires, wishes and wants, just like you and me.
Or maybe not. The deepest questions in artificial intelligence lie beyond the pages of this book. We know that natural evolution brought about the human brain, a physical artifact that gives rise to the impalpable mind, consciousness and the sense of self. None of our intelligent systems are intelligent in the way we are, none of them know what they’re doing, none of them have consciousness. One of the deep questions is whether or not we can do what blind and wasteful nature did: make something that thinks and has consciousness as we do.
Nick Bostrom studied philosophy while getting a BA at the University of Gothenburg and a master’s degree at Stockholm University, and he has a PhD in philosophy from the London School of Economics. The full title of his book is Superintelligence: Paths, Dangers, Strategies, and it lives up to its title. Regrettably, the bulk of the book matches its prose style – inflated, overloaded and burdened with a baroque fastidiousness for detail. It’s too long by half, but it’s provocative and, in surprising ways, interesting. Getting into the chapter endnotes is like exploring the mazy depths of a cave and coming upon a striking cave painting, and the bibliography is a treasure if you’re interested in reading around in AI.
Superintelligence: Paths, Dangers, Strategies by Nick Bostrom. Oxford University Press, University of Oxford, 2014. Paperbound edition Oxford University Press, New York, 2016.