The problem with AI scare stories is that AI is too stupid to hurt you

Richard Wallace

While you sleep, robots are taking over your job and shady tech corporations are developing Skynet-style AI networks that will eventually drag us all into a terrifying singularity or, worse, unleash uncontrollable hordes of super-smart cyborg dogs.

Or at least, that’s what the news would have you believe. Consider the alarmist notes in these headlines: “AI Is Inventing Languages Humans Can’t Understand. Should We Stop It?” from Fast Company, and “AI will displace 40 percent of world’s jobs in as soon as 15 years” from CBS News [note: the headline has been softened as of Jan 13th, but the original remains visible in the slug attached to this link].

The articles are interesting, and their underlying concerns are genuinely relevant – but they may well overstate the immediate danger. In fact, The Guardian reports that while the research behind the Fast Company story was of real interest to experts, the way it was framed for the public was misleading. Or, as Zachary Lipton, an assistant professor in the machine learning department at Carnegie Mellon University, more succinctly puts it, journalists took the findings from “potentially interesting research to sensationalised crap.”

What the papers aren’t telling us – or are burying below provocative headlines – is that while AI is a hugely powerful tool that already opens up revolutionary possibilities for businesses and users alike, the sci-fi dystopia version of AI that we’re all imagining is still a long way off.

Yes, Stephen Hawking may have considered AI an existential threat to humanity. But to that, we say: here’s a blooper reel of robots falling over at the DARPA Robotics Challenge a few years back, set to jaunty ragtime piano. Ooooh, we’re so scared.

Okay, to be fair, these robotic quadrupeds from Boston Dynamics are pretty unsettling, but unless you’re a door handle, you’re probably safe for now. And as for other forms of AI? Well, the CEO of Waymo, John Krafcik, said earlier this month that true ‘level five’ driverless cars will simply never happen – and as he’s the CEO of a driverless car company, we should probably listen. (Level five involves cars navigating all types of road without human input – currently we’re somewhere between levels three and four, both of which require human intervention.)

The driverless car issue highlights one of the fundamental barriers between where we are now and full AI capabilities. AI may be able to reliably perform predesignated tasks and even learn behaviours, but machines learn through observation – when confronted with a new and unfamiliar situation, such as a challenging road or an unusual obstacle, they tend simply to fail, where (most) human judgement would adapt.
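
To make that concrete, here’s a deliberately toy sketch in Python – the driving scenario and the data are invented for illustration, not drawn from any real system. A model that “learns by observation” just matches a new situation against the closest one it has seen before, and it will return a confident answer even when the situation is utterly unlike anything in its training data:

# A toy "driving policy" learned purely from three observed situations.
# All data is made up for illustration.
training = {
    ("clear", "straight"): "drive",
    ("clear", "curve"): "slow",
    ("rain", "straight"): "slow",
}

def predict(situation):
    # Crude nearest-neighbour matching: pick the remembered situation
    # that shares the most features with the new one.
    best = max(training, key=lambda seen: sum(a == b for a, b in zip(seen, situation)))
    return training[best]

print(predict(("rain", "curve")))         # "slow" – a sensible blend of what it has seen
print(predict(("sinkhole", "kangaroo")))  # "drive" – nothing matches at all,
                                          # but it answers confidently anyway

The point isn’t the (deliberately silly) algorithm; it’s that nothing in the system can say “I’ve never seen this before” – it just produces the nearest familiar answer.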

So why is the press so keen on selling us visions of a dystopian future? Well, for one thing, readers love scare stories, and papers love readers. It’s especially easy to sensationalise in an environment where many of us don’t understand the complexities of the technology. But another reason is that journalists often don’t fully understand the technology themselves: researchers inflate the importance of their findings, and the complexity of the field makes it nearly impossible for laypeople to report on it accurately.

As the Guardian notes, this kind of breakdown in communication between experts, scribes and readers is hardly a novel feature of tech reporting.

“It was a similar story in the United States after Frank Rosenblatt, an engineer at Cornell Aeronautical Laboratory, presented a rudimentary machine-learning algorithm called the ‘perceptron’ to the press in 1958. While the ‘perceptron’ could only be trained to recognise a limited range of patterns, the New York Times published an article claiming that the algorithm was an ‘electronic brain’ that could ‘teach itself’, and would one day soon ‘be able to walk, talk, see, write, reproduce itself and be conscious of its own existence’.”

Even now, sixty years after the perceptron, we haven’t reached the point of machine consciousness — and we’re not even close. 
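
For the curious, here’s roughly what a single perceptron amounts to – a minimal sketch in Python, with toy AND/XOR data of our own choosing rather than anything from Rosenblatt’s work. It learns a linearly separable pattern like AND without trouble, but no amount of training will let it learn XOR, a limitation famously spelled out by Minsky and Papert in 1969:

def train_perceptron(samples, epochs=100, lr=0.1):
    # Rosenblatt's rule: nudge the weights whenever the prediction is wrong.
    w, b = [0.0, 0.0], 0.0
    for _ in range(epochs):
        for (x1, x2), target in samples:
            pred = 1 if w[0] * x1 + w[1] * x2 + b > 0 else 0
            err = target - pred
            w[0] += lr * err * x1
            w[1] += lr * err * x2
            b += lr * err
    return w, b

def accuracy(samples, w, b):
    return sum((1 if w[0] * x1 + w[1] * x2 + b > 0 else 0) == t
               for (x1, x2), t in samples) / len(samples)

AND = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
XOR = [((0, 0), 0), ((0, 1), 1), ((1, 0), 1), ((1, 1), 0)]

for name, data in [("AND", AND), ("XOR", XOR)]:
    w, b = train_perceptron(data)
    print(name, "accuracy:", accuracy(data, w, b))
    # AND reaches 1.0; XOR never does, because no single straight line
    # can separate its two classes.

Sixty years of hype, and the original “electronic brain” can’t even manage an exclusive-or.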

Should we be worried?

Well, yes — a little bit. But not for the reasons we might think.

Automation is creeping into the public space all the time, most notably in the self-service outlets we see at supermarkets and fast food restaurants. Some industries – notably those that rely on repetitive manual tasks or long-distance driving – are more at risk than others. But we’re still a long way from fully-automated luxury communism, and many experts predict that automation will create new job opportunities as we move towards a new type of labour market. In any case, the pros and cons of automation are more complicated than “Robots are taking our jobs”, as Forbes reported last September (after running an article in March titled “Robots are not taking our jobs”. What changed in six months?!)

The more immediate worries around AI concern data security. Last year Amazon’s Alexa transmitted a private conversation between the couple who owned the device and a random contact in their address book – even though Amazon stresses that Alexa does not record conversations. And, perhaps ironically, there are reports of security fears around the Amazon Ring, a smart doorbell camera. TechCrunch notes that captured videos of people approaching customers’ properties were being shared unnecessarily widely within the company, and were reviewed by human employees rather than by algorithm. Perhaps this is necessary in order to build a database for future improvements, but the way this is communicated to customers is far from clear. The issue here is not necessarily the technology itself, but the fact that tech companies feel so comfortable taking liberties with our privacy.

Then there are the concerns around the weaponisation of data. With ongoing scandals around fake news and the effects of platforms like Facebook on political health at home and abroad, there are legitimate fears about how artificial intelligence could be used to spread disinformation and sow mistrust.

These are very real worries, and still very scary – but the way we currently talk about AI risks obscuring these issues in favour of far less plausible scenarios, and marginalising a technology with many potentially revolutionary applications. We should be focussing more on how to protect ourselves in terms of data security, how to understand the technology well enough to hold companies to account, and how to regulate new technologies for safety and efficiency – and less on pretending that Black Mirror is just around the corner.