Actually, AI Dangers Are Real: Three Specific Warnings From Serious Thinkers

Posted: Mar 05, 2018 11:09 AM

Artificial intelligence (AI) is cool.

I often have to write in different languages and was shocked to find that overnight Google Translate went from embarrassing users to “knowing” even my best foreign language better than I do. The reason: an AI technique called “deep learning.” An Echo device with Alexa software has been a godsend to my extremely elderly parents who, like most people their age, are technophobic. (Plus, my father has very poor eyesight.) I’ve written extensively on vehicle safety, and I’m convinced that self-driving vehicles can eliminate the overwhelming majority of the roughly 37,000 annual vehicular deaths in the U.S.

Yet there are those who say AI can be, in the words of Silicon Valley mogul Elon Musk, “summoning the demon. In all those stories where there’s the guy with the pentagram and the holy water, it’s like – yeah, he’s sure he can control the demon. Doesn’t work out.”

Other very smart and tech-savvy people such as Stephen Hawking and Apple co-founder Steve Wozniak have also expressed concern.

The dark side of AI has essentially been the stuff of movies such as Ex Machina, 2001: A Space Odyssey (my greatest claim to fame is that HAL 9000 and I were “born” in the same city), and especially The Terminator franchise. And the fact is, the day will almost certainly come when machines go beyond beating us in extremely difficult games like Go to becoming smarter than us in all ways. Nobody’s quite sure what will happen then. (Hint: If they decide to wipe us out they won’t use human-looking machines with sunglasses, but probably microbes.)

Still, that day could be 30 years away. (Estimates vary; nobody knows how fast both processing power and AI will progress; and the concept of computers becoming “self-aware” is yet a different issue.) But the Future of Humanity Institute has just released a disturbing 100-page report on the potential for AI-enabled harm within the next five years.


Prepared by a group of 26 leading AI researchers, the report—“The Malicious Use of Artificial Intelligence”—discusses the threat of AI and the related concepts of machine learning and “deep learning.” But it also offers strategies aimed at mitigating the risks in three broad areas: personal safety, digital security, and invasions of privacy—including government-sponsored snooping and control.

Personal safety. Consider, say, a cleaning robot that goes about its autonomous duties until it identifies the minister of finance, whom it then approaches and assassinates by detonating itself. A Roomba that goes boomba. Autonomous flying drones (as opposed to remotely piloted ones such as the Unmanned Aerial Vehicles the U.S. military routinely uses) could be used to track and attack specific people, aided in part by that nifty facial recognition of the sort the iPhone X uses.

As household items as innocuous as coffee pots are connected to the “Internet of Things,” we can see how a hacker might command those otherwise ultra-safe autonomous vehicles to drive through a crowd. In fact, attackers could wreak absolute havoc by commandeering numerous vehicles at once in what’s called “swarming.” (So much for stopping them by shooting the driver.)

At the close of the Fortune Global Forum in Guangzhou on Dec. 7, the event’s hosts released a swarm of over 1,000 small autonomous drones that danced and flashed through the air for nine minutes without bumping into each other.

So no, that five-year horizon is not exaggerated. Transistor counts on the chips in your computer double roughly every two years, and the computing power being thrown at AI appears to be growing even faster. Throw in progress with quantum computing, which could make today’s supercomputers look like pocket calculators, and you can imagine that we cannot imagine where AI is going. (Don’t worry about affording such machines; they can be accessed via the cloud.)
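The compounding at work here is easy to make concrete with a bit of arithmetic. A minimal Python sketch, using purely illustrative doubling periods: the two-year figure echoes Moore’s Law, while the one-year figure is a hypothetical faster pace, not a measured one.

```python
# Illustrative only: how capability compounds under different doubling
# periods. The periods below are assumptions, not measurements.

def fold_increase(doubling_period_years: float, horizon_years: float) -> float:
    """Return the total multiplicative growth over the horizon."""
    return 2 ** (horizon_years / doubling_period_years)

# Chip-style doubling (~every 2 years) over a 10-year horizon:
print(fold_increase(2.0, 10.0))   # 32.0 (a 32-fold increase)

# A hypothetical faster pace (doubling every year), same horizon:
print(fold_increase(1.0, 10.0))   # 1024.0 (a 1,024-fold increase)
```

Halving the doubling period doesn’t double the result over a decade; it squares it, which is why even modest-sounding differences in growth rate matter so much.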

Digital security. Computers holding sensitive information, from bank accounts to embarrassing selfies, seem to be breached almost routinely these days, and it’s just going to get worse. Current phishing messages tend to be pretty simple, if not idiotic; I don’t think I’ve ever received one without spelling errors. Yet as with the DNC breach (which was in fact idiotic), we’ve seen how effective they can still be.

AI can convince you you’re actually communicating back and forth with a human being through emails, texts, and even voice. Recent breakthroughs have made computer voices almost indistinguishable from human ones. The next step will be chatbots that convince us we’re speaking to, and even viewing, a live person.

Attacks on privacy. Autocratic governments will spend fortunes on AI to identify “troublemaker” targets for surveillance and to discredit or disappear them. China is building a social credit system that uses AI (along with low-tech methods) to minutely control what benefits and punishments are meted out to its citizens. And we’ve already seen how one such government, Russia’s, has tried to influence key elections in the U.S. and elsewhere.


If you saw Rogue One: A Star Wars Story, you may have been a tad surprised to find Peter Cushing in a major, if supporting, role. (We should all look so good after being dead 22 years.) Mind, that took some real computing heft, although the price will keep dropping.

Meanwhile, the latest craze seems to be using a very simple program to paste female celebrities’ faces over those of women in porn videos. (“So Gal Gadot, what’s a nice Jewish girl like you doing in a business like that?”) More sophisticated programs can alter mouth movements to match any words inserted, such that the best lip-reader wouldn’t know Barack Obama or Donald Trump hadn’t actually said them.

And given that one-half the American population is of below-average intelligence, expect many people to consider these renderings real and spread them all over social media in minutes – helped by bots of course.

Yet we don’t want to give up what we’re getting and will get from AI. A just-released report by the cybersecurity firm McAfee and the Center for Strategic and International Studies estimates that cybercrime cost the global economy $600 billion last year. That’s bad. Yet another report predicts that AI will contribute as much as $15.7 trillion to the world economy by 2030. That’s good.

So we very much want ever-improving AI, even as we want effective countermeasures against the bad aspects.

Among the report’s major recommendations are:

  1. Policymakers should collaborate closely with technical researchers to investigate, prevent, and mitigate potential malicious uses of AI.
  2. Researchers and engineers in artificial intelligence need to acknowledge that the good things they’re developing can be used for ill, and actively reach out to people who may be affected rather than simply waiting for that harm to show up.
  3. Best practices should be identified in research areas with more mature methods for addressing dual-use concerns, such as computer security.
  4. Development of autonomous weapons should be banned.

Most of this is easier said than done, and it doesn’t remedy actions by governments, whether Russia’s, China’s, or the U.S. government’s against its own citizens. As for banning autonomous weapons, tough luck. It’s not like the above-ground nuclear test ban, in which a violation would be rather obvious. Anyway, such weapons already exist, depending on the definition. There’s little hope of putting that genie back in the bottle.

But perhaps the greatest value of the report is in reminding us that much new computer technology has already become a double-edged sword. After many years of declining U.S. vehicle fatality rates, they’re now going up, even as cars keep getting safer; the only reasonable explanation is driver cell-phone use. As much as 15% of Internet bandwidth is used for pornography, while social media seems to a great extent to be replacing face-to-face and voice-to-voice interaction, which in turn seems to be reducing our ability to really connect and empathize with other human beings. As an article in Psychology Today put it, “As screen time goes up, empathy goes down.”

Maybe nothing can be done even to slow the development of “bad” AI. But traditionally conservatives have led the way in preaching caution over new developments that can cause wrenching changes in society, and it seems in recent years we’ve been seduced into abandoning that role. Let’s take our eyes off the cell phone and our ears away from Alexa long enough to ponder what Brave New World we could be ushering in.