The Washington Post

Stephen Hawking just gave humanity a due date for finding another planet

November 17, 2016 at 6:15 a.m. EST
Stephen Hawking attends the launch of the Leverhulme Center for the Future of Intelligence at the University of Cambridge. (Niklas Halle'n/Agence France-Presse via Getty Images)

If humanity survives the rise of artificial intelligence, the ravages of climate change and the threat of nuclear terrorism in the next century, it doesn't mean we're home free, according to Stephen Hawking.

The renowned theoretical physicist has gone so far as to give humanity a deadline for finding another planet to colonize: We have 1,000 years.

Remaining on Earth any longer, Hawking believes, places humanity at great risk of encountering another mass extinction.

“We must ... continue to go into space for the future of humanity,” the 74-year-old Cambridge professor said during a speech Tuesday at the Oxford Union, according to the Daily Express.

“I don’t think we will survive another 1,000 years without escaping beyond our fragile planet,” he added.

During his hour-long speech, Hawking told the audience that Earth's cataclysmic end may be hastened by humankind, which will continue to devour the planet’s resources at unsustainable rates, the Express reported.

His wide-ranging talk touched upon the origins of the universe and Einstein's theory of relativity, as well as humanity's creation myths and God. Hawking also discussed “M-theory,” which Leron Borsten of PhysicsWorld.com describes as “a proposal for a unified quantum theory of the fundamental constituents and forces of nature.”

Though the challenges ahead are immense, Hawking said, it is a “glorious time to be alive and doing research into theoretical physics.”

“Our picture of the universe has changed a great deal in the last 50 years, and I am happy if I have made a small contribution,” he added.

Speaking to audience members in a public Q&A session ahead of the annual BBC Reith Lectures, Hawking also said that leaving the planet behind was our best hope for survival.

The key, he noted, was surviving the precarious century ahead.

“Although the chance of a disaster to planet Earth in a given year may be quite low, it adds up over time, and becomes a near certainty in the next thousand or ten thousand years. By that time we should have spread out into space, and to other stars, so a disaster on Earth would not mean the end of the human race.”

Since 2009, NASA has been hunting for Earthlike planets with the potential for human colonization.

In a July 2015 speech, Stephen Hawking explained "Breakthrough Listen," an initiative aimed at discovering intelligent extraterrestrial life. (Video: Breakthrough Initiatives)

Researchers have identified more than 4,600 planet “candidates” and confirmed roughly 2,300 planets, according to the agency.

“The first exoplanet orbiting another star like our sun was discovered in 1995,” according to NASA. “Exoplanets, especially small Earth-size worlds, belonged within the realm of science fiction just 21 years ago. Today, and thousands of discoveries later, astronomers are on the cusp of finding something people have dreamt about for thousands of years.”

Before we have a chance to relocate, Hawking says, we'll first need to address the potential threats posed by our own technology.

While Hawking thinks technology has the capacity to ensure mankind's survival, previous statements suggest the cosmologist is simultaneously grappling with the potential threat it poses. When it comes to discussing that threat, Hawking is unmistakably blunt.

“I think the development of full artificial intelligence could spell the end of the human race,” Hawking told the BBC in a 2014 interview that touched upon everything from online privacy to his affinity for his robotic-sounding voice.

Despite its current usefulness, he cautioned, further developing A.I. could prove a fatal mistake.

“Once humans develop artificial intelligence, it will take off on its own and redesign itself at an ever-increasing rate,” Hawking warned in recent months. “Humans, who are limited by slow biological evolution, couldn't compete and would be superseded.”
