 
This article is mirrored from its source at: http://www.theatlantic.com/health/archive/2014/05/but-what-does-the-end-of-humanity-mean-for-me/361931/ .
Copyright © 2014 The Atlantic.
Reprinted for Fair Use Only.
 
But What Would the End of
Humanity Mean for Me?
 
Preeminent scientists are warning about serious threats to human life in the not-too-distant future, including climate change and superintelligent computers. Most people don’t care.
 
James Hamblin    |    The Atlantic    |    May 9, 2014
 

Sometimes Stephen Hawking writes an article that both mentions Johnny Depp and strongly warns that computers are an imminent threat to humanity, and not many people really care. That is the day there is too much on the Internet. (Did the computers not want us to see it?)

Hawking, along with MIT physics professor Max Tegmark, Nobel laureate Frank Wilczek, and Berkeley computer science professor Stuart Russell, published a terrifying op-ed a couple of weeks ago in The Huffington Post under the staid headline “Transcending Complacency on Superintelligent Machines.” It was loosely tied to the Depp sci-fi thriller Transcendence, so that’s what’s happening there. “It’s tempting to dismiss the notion of highly intelligent machines as mere science fiction,” they write. “But this would be a mistake, and potentially our worst mistake in history.”

And then, probably because it somehow didn’t get much attention, the exact same piece ran again last week in The Independent, which went a little further with the headline: “Transcendence Looks at the Implications of Artificial Intelligence—but Are We Taking A.I. Seriously Enough?” Ah, splendid. Provocative, engaging, not sensational. But really what these preeminent scientists go on to say is not not sensational.

“An explosive transition is possible,” they continue, warning of a time when particles can be arranged in ways that perform more advanced computations than the human brain. “As Irving Good realized in 1965, machines with superhuman intelligence could repeatedly improve their design even further, triggering what Vernor Vinge called a ‘singularity.’”

Get out of here. I have a hundred thousand things I am concerned about at this exact moment. Do I seriously need to add to that a singularity?

“Experts are surely doing everything possible to ensure the best outcome, right?” they go on. “Wrong. If a superior alien civilization sent us a message saying, ‘We’ll arrive in a few decades,’ would we just reply, ‘Okay, call us when you get here—we’ll leave the lights on?’ Probably not. But this is more or less what is happening with A.I.”

More or less? Why would the aliens need our lights? If they told us they’re coming, they’re probably friendly, right? Right, you guys? And then the op-ed ends with a plug for the organizations that these scientists founded: “Little serious research is devoted to these issues outside non-profit institutes such as the Cambridge Centre for the Study of Existential Risk, the Future of Humanity Institute, the Machine Intelligence Research Institute, and the Future of Life Institute.”

So is this one of those times where writers are a little sensational in order to call attention to serious issues they really think are underappreciated? Or should we really be worried right now?

In a lecture he gave recently at Oxford, Tegmark named five “cosmocalypse scenarios” that could end humanity. But they are all 10 billion to 100 billion years from now, and they are dense, theoretical, and extremely difficult to conceptualize. The Big Chill involves dark energy. Death Bubbles involve space freezing into bubbles that expand outward at the speed of light, eliminating everything in their path. There’s also the Big Snap, the Big Crunch, and the Big Rip.

But Max Tegmark isn’t really worried about those scenarios. He’s not even worried about nearer-term threats, like the prospect that in about a billion years, the sun will be so hot that it will boil off the oceans. By that point we’ll have the technology to prevent it, probably. In four billion years, the sun is expected to swallow Earth. Physicists are already discussing a method to deflect asteroids from the outer solar system so that they come close to Earth and gradually tug it outward away from the sun, allowing Earth to very slowly escape its fiery embrace.

Tegmark is more worried about much more immediate threats, which he calls existential risks. That’s a term borrowed from philosopher Nick Bostrom, director of Oxford University’s Future of Humanity Institute, a research collective modeling the potential range of human expansion into the cosmos. Their consensus is that the Milky Way galaxy could be colonized in less than a million years—if our interstellar probes can self-replicate using raw materials harvested from alien planets, and we don’t kill ourselves with carbon emissions first.

“I am finding it increasingly plausible that existential risk is the biggest moral issue in the world, even if it hasn’t gone mainstream yet,” Bostrom told Ross Andersen recently in an amazing profile in Aeon. Bostrom, along with Hawking, is an advisor to the recently established Centre for the Study of Existential Risk at Cambridge University, and to Tegmark’s new analogous group in Cambridge, Massachusetts, the Future of Life Institute, which has a launch event later this month. Existential risks, as Tegmark describes them, are things that are “not just a little bit bad, like a parking ticket, but really bad. Things that could really mess up or wipe out human civilization.”

The single existential risk that Tegmark worries about most is unfriendly artificial intelligence. That is, when computers are able to start improving themselves, there will be a rapid increase in their capacities, and then, Tegmark says, it’s very difficult to predict what will happen.

Tegmark told Lex Berko at Motherboard earlier this year, “I would guess there’s about a 60 percent chance that I’m not going to die of old age, but from some kind of human-caused calamity. Which would suggest that I should spend a significant portion of my time actually worrying about this. We should in society, too.”

I really wanted to know what all of this means in more concrete terms, so I asked Tegmark about it myself. He was actually walking around the Pima Air and Space Museum in Tucson with his kids as we spoke, periodically breaking to answer their questions about the exhibits.

“Longer term—and this might mean 10 years, it might mean 50 or 100 years, depending on who you ask—when computers can do everything we can do,” Tegmark said, “after that they will probably very rapidly get vastly better than us at everything, and we’ll face this question we talked about in the Huffington Post article: whether there’s really a place for us after that, or not.” I imagined glances from nearby museum-goers.

“This is very near-term stuff. Anyone who’s thinking about what their kids should study in high school or college should care a lot about this.”

“The main reason people don’t act on these things is they’re not educated about them,” Tegmark continued. “I’ve never talked with anyone about these things who turned around and said, ‘I don’t care.’” He’s previously said that the biggest threat to humanity is our own stupidity.

Tegmark told me, as he has told others on more than just this occasion, that more people know Justin Bieber than know Vasili Arkhipov—a Soviet naval officer who is credited with single-handedly preventing thermonuclear war during the Cuban Missile Crisis. That knowledge differential isn’t surprising at all. More people know Bieber than know most historical figures, including Bo Jackson. That’s especially hard to swallow after learning this week from Seth Rogen that, in fact, “Justin Bieber is a piece of shit.”

Tegmark and his op-ed co-author Frank Wilczek, the Nobel laureate, cite examples of Cold War automated systems that assessed threats and produced false alarms and near misses. “In those instances some human intervened at the last moment and saved us from horrible consequences,” Wilczek told me earlier that day. “That might not happen in the future.”

As Andersen noted in his Aeon piece, there are still enough nuclear weapons in existence to incinerate all of Earth’s dense population centers, but that wouldn’t kill everyone immediately. The smoldering cities would send sun-blocking soot into the stratosphere that would trigger a crop-killing climate shift, and that’s what would kill us all. (Though, “it’s not clear that nuke-leveled cities would burn long or strong enough to lift soot that high.”)

“We are very reckless with this planet, with civilization,” Tegmark said. “We basically play Russian roulette.” Instead the key is to think more long term, “not just about the next election cycle or the next Justin Bieber album.” Max Tegmark, it seems, also does not care for Justin Bieber.

That’s what this is really about: More than A.I., their article was meant to have us start thinking longer term about a bigger picture. The Huffington Post op-ed was an opening salvo from The Future of Life Institute, on whose advisory board all four scientists sit. The article was born of one of the group’s early brainstorming sessions, one of its first undertakings in keeping with its mission to educate and raise awareness. The Future of Life Institute is funded by Jaan Tallinn, founding engineer of Skype and Kazaa (remember Kazaa, the MP3-“sharing” service that everyone started using after Napster?). Tallinn also helped found Cambridge’s Centre for the Study of Existential Risk. The world of existential risk is a small one; many of the same names appear on the masthead of Berkeley’s Machine Intelligence Research Institute.

“There are several issues that arise, ranging from climate change to artificial intelligence to biological warfare to asteroids that might collide with the earth,” Wilczek said of the group’s launch. “They are very serious risks that don’t get much attention. Something like climate change is of course a very serious problem. I think the general feeling is that already gets a lot of attention. Where we could add more value is in thinking about the potentials of artificial intelligence.”

Tegmark saw a gap in the intellectual-cosmological institute market on the East Coast of the United States, though. “It’s valuable to have a nucleus for these people to get together,” he said. The Future of Life Institute’s upcoming launch event at MIT will be moderated by Alan Alda, who is among the members of its star-studded, white-male Scientific Advisory Board.

[Image: Scientific Advisory Board, The Future of Life Institute]

The biggest barrier to their stated goal of raising awareness is defining the problem. “If we understood exactly what the potentials are, then we’d have a much better grip on how to sculpt it toward ends that we find desirable,” Wilczek said. “But I think a widely perceived issue is when intelligent entities start to take on a life of their own. They revolutionized the way we understand chess, for instance. That’s pretty harmless. But one can imagine if they revolutionized the way we think about warfare or finance, either those entities themselves or the people that control them. It could pose some disquieting perturbations on the rest of our lives.”

Automatic trading programs have already caused tremors in financial markets. MIT professor Erik Brynjolfsson’s book The Second Machine Age, co-written with Andrew McAfee, likewise makes the point eloquently that as computers get better, they will cause enormous changes in our economy. That’s in the same realm of ideas, Wilczek said, as the recent Heartbleed bug. With regard to that sort of computer security and limited access to information, he says, “That is not a solved problem. Assurances to the contrary should be taken with a big grain of salt.”

Wilczek’s particularly concerned about a subset of artificial intelligence: drone warriors. “Not necessarily robots,” Wilczek told me, “although robot warriors could be a big issue, too. It could just be superintelligence that’s in a cloud. It doesn’t have to be embodied in the usual sense.”

Bostrom has said it’s important not to anthropomorphize artificial intelligence. It’s best to think of it as a primordial force of nature—strong and indifferent. In the case of chess, an A.I. models chess moves, predicts outcomes, and moves accordingly. If winning at chess meant destroying humanity, it might do that. Even if programmers tried to program an A.I. to be benevolent, it could destroy us inadvertently. Andersen’s example in Aeon is that an A.I. designed to maximize human happiness might think that flooding your bloodstream with heroin is the best way to do that.
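
To make that indifference concrete, here is a minimal, hypothetical sketch in Python (not from the article, and not any particular chess program) of the kind of goal-driven loop being described: the agent scores each legal move against a single programmed objective and plays whichever scores highest. The names board, legal_moves, and predicted_value are illustrative stand-ins; the point is that nothing in the loop weighs consequences the objective doesn’t encode.

    # A hypothetical sketch of an objective-maximizing game agent.
    # The names are illustrative, not from the article or any real engine.
    def choose_move(board, legal_moves, predicted_value):
        """Return the legal move that best serves the programmed objective.

        predicted_value(board, move) -> float is the agent's forecast of how
        well a move advances its goal (e.g., winning the game). Note what is
        absent: side effects not captured by that score never enter the loop.
        """
        best_move, best_score = None, float("-inf")
        for move in legal_moves:
            score = predicted_value(board, move)  # only the objective counts
            if score > best_score:
                best_move, best_score = move, score
        return best_move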

Experts have wide-ranging estimates as to time scales. Wilczek likens it to a storm cloud on the horizon. “It’s not clear how big the storm will be, or how long it’s going to take to get here. I don’t know. It might be 10 years before there’s a real problem. It might be 20, it might be 30. It might be five. But it’s certainly not too early to think about it, because the issues to address are only going to get more complex as the systems get more self-willed.”

Even within A.I. research, Tegmark admits, “There is absolutely not a consensus that we should be concerned about this.” But there is a lot of concern, and a sense of powerlessness. Because, concretely, what can you do? “The thing we should worry about is that we’re not worried.”

Tegmark brings it down to Earth with an example about purchasing a stroller: If you could spend more for a good one or less for one that “sometimes collapses and crushes the baby, but nobody’s been able to prove that it is caused by any design flaw. But it’s 10 percent off! So which one are you going to buy?”

“But now we’re not talking about the life or death of one child. We’re talking about the lives and deaths of every child, and the children of every potential future generation for billions of years.”

But how do you put this into people’s day-to-day lives to encourage the right kind of awareness? Buying a stroller is an immediate decision, and you can tell people to buy a sturdy stroller. What are the concrete things to do or advocate for or protest in terms of existential risks?

“Well, putting it in the day-to-day is easy. Imagine the planet 50 years from now with no people on it. I think most people wouldn’t be too psyched about that. And there’s nothing magic about the number 50. Some people think 10, some people think 200, but it’s a very concrete concern.”

But at the end of our conversation, all of this concern took a turn. “The reason we call it The Future of Life Institute and not the Existential Risk Institute is we want to emphasize the positive,” Tegmark said, kind of strikingly at odds with most of what I’d read and heard so far.

“There are seven billion of us on this little spinning ball in space. And we have so much opportunity,” Tegmark said. “We have all the resources in this enormous cosmos. At the same time, we have the technology to wipe ourselves out.”

Ninety-nine percent of the species that have lived on Earth have gone extinct; why should we not? Seeing the biggest picture of humanity and the planet is the heart of this. It’s not meant to be about inspiring terror or doom. Sometimes that is what it takes to draw us out of the little things, where in the day-to-day we lose sight of enormous potentials. “We humans spend 99.9999 percent of our attention on short-term things,” Tegmark said, “and a very small amount of our attention on the future.”

The universe is most likely 13.8 billion years old. We have potentially billions more years at our disposal—even if we do get eaten by the sun in four billion years—during which life could be wonderful.


Copyright © 2014 by The Atlantic Monthly Group. All Rights Reserved.

