Noreen Herzfeld on “Artificial Intelligence: An Existential Risk?”

Tesla CEO Elon Musk recently issued a warning about the future of artificial intelligence: “AI is a fundamental existential risk for human civilization, and I don’t think people fully appreciate that.” Claiming access to cutting-edge AI technology, Musk called for proactive government regulation, noting that while such regulation is generally “irksome, . . . by the time we are reactive in AI regulation, it’s too late.”


“I think people should be really concerned about it,” Musk said. “I keep sounding the alarm bell.” Musk is not alone in sounding this alarm. Several years ago, physicist Stephen Hawking told the BBC: “The development of full artificial intelligence could spell the end of the human race.”

Really? How might this happen? According to Hawking, AI could “take off on its own, and re-design itself at an ever-increasing rate. . . . Humans, who are limited by slow biological evolution, couldn’t compete, and would be superseded.”

This concern has been a staple of science fiction for decades (see The Terminator or 2001).  However, those with a more intimate knowledge of AI disagree.  As MIT computer scientist Rodney Brooks has wryly pointed out, Musk and Hawking “don’t work in AI themselves. For those who do work in AI, we know how hard it is to get anything to actually work through product level.”  Virtual reality pioneer and Microsoft guru Jaron Lanier says anyone with experience of modern software should know not to worry about our future robotic overlords. “Just as some newborn race of superintelligent robots is about to consume all humanity, our dear old species will likely be saved by a Windows crash. The poor robots will linger pathetically, begging us to reboot them, even though they’ll know it would do no good.”

So does AI pose an existential risk? Not for the reasons Hawking and Musk imagine. We are unlikely to have “strong AI,” AI that thinks the way humans think, with the versatility of the human brain, for many years, if ever. Our brains are vastly more complex than our present technology. However, that doesn’t mean we are out of the woods.

“Weak AI,” programs that do only one thing and do that thing very well (think Deep Blue), is progressing by leaps and bounds and stands to undermine, or at least drastically change, our economy, our politics, and our personal lives. In fact, as several studies published in just the last few months show, such programs are already doing so.

First, the economy. A study from the National Bureau of Economic Research estimates that hundreds of thousands of jobs in the US have been taken over by automation since the 1990s. Only one new job in the computer industry is created for every three jobs lost. It is automation, far more than government regulation or offshoring, that has decimated industrial-sector employment. No matter what President Trump says, jobs in coal or manufacturing are not coming back. Moreover, automated vehicles and Amazon are poised to take over transportation and retail.

Nor are blue-collar workers the only ones who should worry.  A 2013 University of Oxford report estimated that 47 percent of American jobs will likely be threatened by automation in the coming decades, including many white-collar jobs in the legal, health, and educational sectors. A report from the World Bank estimates that this proportion is even higher in developing countries.  AI has begun to shake the foundation of Western capitalism.

Obviously, this has ramifications for our political systems, and we have seen the first of these in the election of Donald Trump in the US and the vote for Brexit in the UK. Beyond the restiveness of the working class, AI also played a role in our last election through the spread of fake news on social media by bots. Artificial intelligence makes the development of fake evidence remarkably easy. A recent study published in the journal Cognitive Research: Principles and Implications found that people could not tell whether a photo had been Photoshopped any better than if they had simply guessed. Author Sophie Nightingale warns, “Photos are incredibly powerful. They influence how we see the world. They can even influence our memory of things. If we can’t tell the fake ones from the real ones, the fakes are going to be powerful, too.”

And it is not only Photoshop.  A recent article in Wired, entitled “AI Will Make Forging Anything Entirely Too Easy,” notes that video and audio are subject to similar falsification.  “In the future, realistic-looking and -sounding fakes will constantly confront people. Awash in audio, video, images, and documents, many real but some fake, people will struggle to know whom and what to trust.”  This has led to a new form of espionage, one the Russians pioneered in our last election.  While in the past espionage was about obtaining information, in the future it will also be about inserting information wherever one can.  AI has begun to shake the foundation of our trust in our media and our political campaigns.

Our private lives stand to be shaken as well.  Meet Roxxy.

A 2016 study from the University of Duisburg-Essen found that 50% of men surveyed said they could imagine purchasing a sex robot within the next five years. Sex robots are already selling well, particularly in Japan, where their use has been linked to a decline in human-human sexual encounters. While I will save an examination of the ramifications of this for a future post, here is one threat to humanity that might truly fall under the rubric of “existential.”

AI is unlikely to threaten human existence, as Hawking fears. There will be no super-intelligent robot apocalypse. But AI has already begun to upend our economy, our politics, and even our sex lives. And this is only the beginning. While no threat to “the human race,” AI does pose a threat to the structure of “human civilization” as we know it. Perhaps Elon Musk got his terms right.