Is Stephen Hawking right? Could AI lead to the end of humankind?


Renowned theoretical physicist Stephen Hawking has reignited the question of whether our pursuit of increasingly sophisticated artificial intelligence may eventually produce thinking machines that replace humans.

The British scientist made the claim in a wide-ranging interview with the BBC. Hawking has the motor neuron disease amyotrophic lateral sclerosis (ALS), and during the conversation he discussed the new technology he uses to help him speak.

Like the predictive texting feature on many smartphones, it uses a model of his past word usage to anticipate the words he will use next. However, Professor Hawking also expressed his worry about the advancement of machines that could outperform humans.
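To make the idea concrete, here is a minimal sketch of such predictive text, assuming a simple bigram model: it counts which word most often follows which in a user's past writing and suggests the likeliest successors. The function names and toy corpus are my own illustrations; the system Hawking actually uses is far more sophisticated.

```python
# A minimal, illustrative bigram model of predictive text.
# Assumption: the names and toy corpus below are hypothetical;
# real predictive-text systems are far more sophisticated.
from collections import Counter, defaultdict

def train_bigram_model(corpus: str):
    """Count which word follows which in the user's past writing."""
    words = corpus.lower().split()
    follows = defaultdict(Counter)
    for prev, nxt in zip(words, words[1:]):
        follows[prev][nxt] += 1
    return follows

def predict_next(model, prev_word: str, k: int = 3):
    """Suggest the k words most often seen after prev_word."""
    return [word for word, _ in model[prev_word.lower()].most_common(k)]

model = train_bigram_model(
    "the universe began with the big bang and the universe keeps expanding"
)
print(predict_next(model, "the"))  # ['universe', 'big']
```

Even this toy version shows the core idea: the machine's suggestions are nothing more than statistics over past usage.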


“Once humans develop artificial intelligence, it would take off on its own and re-design itself at an ever-increasing rate,” he told the BBC. “The development of full artificial intelligence could spell the end of the human race.” Prof Hawking is a prominent and capable scientist, so I welcome his raising the prospect of computers taking over (and possibly ending humankind), and I think it merits a prompt response.
The question of machine intelligence dates back at least to 1950, when the British code-breaker and father of computer science Alan Turing asked, “Can machines think?” The possibility of such intelligent machines taking over has been explored in many forms of popular culture and media. To name a few, consider the films Colossus: The Forbin Project (1970) and Westworld (1973), and, more recently, Skynet in The Terminator (1984) and its sequels.
What unites all of these is the question of handing responsibility over to machines. The notion of the technological singularity, or machine super-intelligence, goes back at least to the artificial intelligence pioneer Ray Solomonoff, who warned in 1967 that even though highly intelligent machines were unlikely to appear in the near future, the dangers they posed were serious and the problems difficult, and that it would be well if many thoughtful people gave these issues considerable thought before they arose.
Solomonoff also predicted that artificial intelligence would arrive suddenly: until quite late in the research we will have had no serious practical experience with machine intelligence, and then, a month or so later, we will have an extremely intelligent machine, along with all the hazards and issues that come with our inexperience. Besides issuing this warning in 1967, Solomonoff went on, in 1985, to consider the societal ramifications and to propose a timeline for the technological singularity.


I agree with Hawking, Solomonoff and others that faster and smarter machines will bring negative effects; however, the American inventor, author and computer scientist Ray Kurzweil is among the many who foresee positive ones. Either camp may prove correct, but, as Solomonoff urged in 1967, unless some other hazard destroys our planet first, we would do well to give this a lot of careful thought. In the meantime, we are witnessing a growing delegation of responsibility to machines. Handheld calculators, routine mathematical computations and global positioning systems (GPS) are a few everyday examples.
At the other end of the scale are air traffic control systems, guided missiles, autonomous trucks on mine sites and the driverless cars recently trialled on our roads. Humans delegate tasks to machines for efficiency, economy and precision. But among the nightmare scenarios, the damage done by, say, an autonomous car would raise thorny questions of legal liability and insurance. One speculation is that computers may take control once they surpass human intelligence; even short of that, however, this transfer of responsibility carries hazards of its own.
Some argue that computer trading played a major role in the 1987 stock market crash, and computer errors have shut down power grids. On a smaller scale, my intrusive spell checker occasionally “corrects” my writing into something potentially offensive. Computer error? Even without hackers or malicious intent, hardware faults and software bugs can cause chaos in large-scale systems, and arguably pose the greater danger. So how far can we really trust machines carrying heavy responsibilities to do a better job than we would?
Even without computers deliberately taking over, I can envisage ways in which computer systems could spin out of control. Such systems could be hard to repair, or even to switch off, because of the sheer speed at which they operate and the minuteness of their components. Somewhat in the spirit of Solomonoff's 1967 paper, I would like to see scriptwriters and artificial intelligence researchers work together to set out such scenarios and further encourage public discussion.


As one hypothetical scenario, some speech-to-text conversion could go horribly wrong, be compounded by an automatic translation, and then subtly corrupt machine instructions, causing mayhem. A perhaps related can of worms could be opened by ever faster statistical and machine-learning analysis of big data on human brains. (And, as some might dare to ask, are we humans really the guardians of all that is moral, decent and just?) As Solomonoff said in 1967, we need this public conversation, and, given the stakes, I believe we need it soon.
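To make the first scenario above concrete, here is a toy sketch of compounding error. Every function in it is hypothetical, standing in for real speech-recognition, translation and control stages; no real system works this crudely.

```python
# Toy pipeline (entirely hypothetical) showing how one mis-heard word
# can survive "translation" and invert a machine instruction.

def transcribe(audio_text: str) -> str:
    # Hypothetical speech-to-text stage that drops a leading sound.
    return audio_text.replace("halt", "alt")

def translate(text: str) -> str:
    # Hypothetical translation stage with a crude word-for-word lexicon.
    lexicon = {"alt": "alternate"}
    return " ".join(lexicon.get(word, word) for word in text.split())

def execute(instruction: str) -> str:
    # Hypothetical controller acting on whatever text reaches it.
    if instruction.startswith("halt"):
        return "STOPPING pumps"
    return f"RUNNING: {instruction}"

command = transcribe("halt pumps")   # -> "alt pumps"
command = translate(command)         # -> "alternate pumps"
print(execute(command))              # RUNNING: alternate pumps
```

A stop order has quietly become a run order, not through malice but through two small, individually plausible errors. That is precisely the kind of failure the public conversation should weigh.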