Monday, 24 December 2018

The truth about AI: A secular ghost story

Some of Facebook’s AIs invented their own language, one incomprehensible to humans, at which point Facebook’s researchers panicked and felt compelled to pull the plug. At least, this was the story I heard on a Vanity Fair podcast. The host seemed deeply disturbed by the thought of these alien, almost Lovecraftian beings taking shape under the blithe gaze of an amoral tech giant.

I thought it was probably nonsense; scientists spin the truth all the time. I guessed that the underlying reality was that Facebook scientists had designed a program to evolve some kind of communication protocol which, for whatever reason, became hard to understand; that, seeking attention, they’d played up the drama for an in-house publicist by glossing over the technical details; and that the publicist had over-interpreted it to journalists, whose stories drifted still farther from the facts, until the emerging narrative ended up frightening an innocent podcast host.

As it turned out, I was right about the technology, but wrong about how the story got inflated. The Facebook scientists had written a sober and unassuming blog post about their research, which journalists took up and embellished without further encouragement. This is one of the fundamental mechanisms of the so-called AI Renaissance, which is essentially a cycle of money, hype and fear.

Here’s what’s actually going on with AI technology: deep learning has come into its own. Deep learning is a technology that learns to recognize categories from exemplars; it’s had noteworthy successes in learning what is and isn’t a picture of a cat, for instance, or what is and isn’t a winning chess position. Deep learning is an enhancement of neural networks, a learning technology that’s been around in some form for sixty-odd years, and it is now benefiting from faster computers, better networking infrastructure, and vastly more data.
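
For the curious, here is a minimal sketch of the idea in Python with NumPy: a tiny two-layer neural network that learns a category from a handful of labelled exemplars. The data, the network size and the learning rate are invented for illustration; real deep learning systems stack many more layers and train on millions of examples, but the basic move of nudging weights to reduce prediction error is the same.

    import numpy as np

    # Toy exemplars: each row holds two made-up features, and the label says
    # whether the example belongs to the category (1) or not (0).
    X = np.array([[0.9, 0.8], [0.8, 0.9], [0.1, 0.2], [0.2, 0.1]])
    y = np.array([[1.0], [1.0], [0.0], [0.0]])

    rng = np.random.default_rng(0)
    W1, b1 = rng.normal(scale=0.5, size=(2, 4)), np.zeros(4)   # input -> hidden
    W2, b2 = rng.normal(scale=0.5, size=(4, 1)), np.zeros(1)   # hidden -> output

    def sigmoid(z):
        return 1.0 / (1.0 + np.exp(-z))

    lr = 1.0
    for _ in range(2000):
        # Forward pass: two layers of weighted sums squashed by nonlinearities.
        h = sigmoid(X @ W1 + b1)
        p = sigmoid(h @ W2 + b2)
        # Backward pass: push the prediction error back through the layers
        # and nudge every weight a little downhill.
        grad_out = (p - y) * p * (1 - p)
        grad_hid = (grad_out @ W2.T) * h * (1 - h)
        W2 -= lr * (h.T @ grad_out)
        b2 -= lr * grad_out.sum(axis=0)
        W1 -= lr * (X.T @ grad_hid)
        b1 -= lr * grad_hid.sum(axis=0)

    print(p.round(2))   # predictions drift toward the 1/0 labels as training runs

Everything grander in modern deep learning (convolutional layers, enormous datasets, warehouses of GPUs) is an elaboration of that loop.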

Note that neural networks have essentially nothing to do with neurons. Both are structured as networks. “So maybe they’re the same!” neural network enthusiasts have sometimes reasoned. This is the sole basis of the name “neural network,” but a superficial similarity doesn’t imply a deep affinity.

There are threads in AI unrelated to deep learning, but none of them have ever really worked. Consider machine translation, as implemented in Google Translate. It’s good enough for translating simple things, and can convey the general sense of a text, but with anything nuanced or complicated it immediately falls apart: translated e-commerce websites are more or less usable, translated literature fails, translated poetry is unintentionally funny.

The state of the art in machine translation is to use statistical techniques to find roughly equivalent chunks of text in the source and target languages and, lately, to blend in deep learning to find higher-order equivalences. There’s no real understanding or representation of the meaning of the text.
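
To see how little meaning is involved, here is a deliberately crude caricature of the chunk-matching idea, in Python. The miniature phrase table is invented for illustration, and real systems add probability scores, reordering models and a language model over the target side, but the basic move is the same: match chunks of text, not meanings.

    # A made-up miniature "phrase table": chunks of French mapped to chunks of
    # English, with no representation of what either chunk means.
    phrase_table = {
        ("la", "maison"): "the house",
        ("maison", "bleue"): "blue house",
        ("la",): "the",
        ("maison",): "house",
        ("bleue",): "blue",
    }

    def translate(words):
        """Greedily cover the source sentence with the longest known chunks."""
        out, i = [], 0
        while i < len(words):
            for length in range(len(words) - i, 0, -1):
                chunk = tuple(words[i:i + length])
                if chunk in phrase_table:
                    out.append(phrase_table[chunk])
                    i += length
                    break
            else:
                out.append(words[i])   # unknown word: pass it through untouched
                i += 1
        return " ".join(out)

    print(translate("la maison bleue".split()))   # prints "the house blue"

The output, “the house blue,” is exactly the kind of near-miss the approach produces: every chunk matched, and nothing in the system knows the result is wrong.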

These limitations are inarguable and, one would think, obvious, but many techies seem to be in a haze of futurist denial. I’ve spoken with Googlers who have the glazed intensity of the newly converted. They say things like, “Google has solved machine translation.” Such statements convey no useful information about the technology, but they do speak to how, especially for the younger employees, affiliation with the company is a primary engine of meaning in their lives. “Working at Google is like making science fiction!” I’ve heard many Googlers enthuse.

Historically, AI researchers have been prone to self-pity. They complain that when a problem is unsolved it’s seen as an AI problem, but once a solution is found people say, “Oh, that’s not AI, that’s just an algorithm.” Fair enough, but that dismissal is at root insincere: cognition clearly has a computational essence.

I once went to a Google AI Night where a Google researcher posited that maybe computer intelligence was fundamentally different from human intelligence. The best chess programs approach chess differently than the best human players do: they rely on much more computation and far deeper search rather than on a human’s nuanced pattern recognition (or something like that; no one really knows how human chess masters think about chess, or how anyone thinks about anything).
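
The flavor of that brute-force style is easy to convey with a toy. Below is a sketch, in Python, of exhaustive game-tree search on a simple take-away game; the game and the scoring are invented for illustration, and real chess engines add evaluation functions, pruning and, lately, learned components. The point is that the program plays perfectly simply by trying every continuation, with no pattern recognition whatsoever.

    # Take-away game: players alternate removing 1, 2 or 3 stones from a pile,
    # and whoever takes the last stone wins. The score is from the point of
    # view of the player about to move: +1 for a win, -1 for a loss.
    def best_score(stones):
        if stones == 0:
            return -1   # no stones left: the player to move has already lost
        # Try every legal move, recurse on the opponent's reply, keep the best.
        return max(-best_score(stones - take)
                   for take in (1, 2, 3) if take <= stones)

    print(best_score(12))   # -1: twelve stones is a lost position for the mover
    print(best_score(13))   #  1: thirteen stones is a winning position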

AI’s prominence in the general culture has been growing, partly because of the noticeable technical developments but also because of the way we interpret that technology through the lens of science fiction. The two primary AI narratives are Pinocchio (Lt. Cmdr. Data longs to be a real boy!) and the Golem (the Terminator movies, the Matrix movies, and every movie that uses the phrase “it’s gone rogue!”). Both narratives tacitly assume that an AI’s deep motivations would strongly resemble a human’s: it would either cherish the prospect of a genuine emotional life or relish the chance to crush humanity and build an empire. The Pinocchio narrative succeeds because it reassures audiences that, no matter the technological advances, their humanity has intrinsic and enviable value. The Golem narrative offers an implacable, superhuman and amoral antagonist that the human heroes can destroy without the least moral qualm.

Even if there is someday real AI, neither narrative is likely to play out. An actual AI would probably regard human beings as utterly alien, and perhaps interesting on that basis, but not as obvious objects of emulation. There’s also no clear reason why an AI would want an empire: we hierarchical social primates hunger for political and military power as an outgrowth of our hard-coded impulse to be top monkey, but an AI is unlikely to be engaged by that particular ladder.

Robotics is finicky, and using programs to understand images is hard. Some years ago the head of research and development at a big tech company told me that, in programming autonomous cars, it turned out to be prohibitively difficult to determine whether a given image contained a stop-light at all, but, if the machine already knew where the stop-light was, it could easily determine whether the light was red, yellow or green. In the near-to-medium term, any full-fledged AIs are likely to be disembodied, existing in largely virtual and informational worlds, and to regard the real world as a sort of phantom realm, present but hidden, and hard to reach directly, much as most people regard the distant servers which hold, say, their social media posts (which are… where, exactly? As long as it works, who cares?)
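
That asymmetry is easy to make concrete. The easy half, classifying a light someone has already located for you, really is a few lines of arithmetic; the sketch below is a toy version in Python, with thresholds invented for illustration (a real system would use a learned classifier on much messier input). The hard half is finding the stop-light anywhere in a cluttered street scene in the first place.

    def light_colour(avg_red, avg_green):
        """Classify an already-located stop-light from the average red and
        green channel values (0-255) of its pixels. Thresholds are made up."""
        if avg_red > 150 and avg_green > 150:
            return "yellow"   # strong red and green together read as amber
        if avg_red > 150:
            return "red"
        if avg_green > 150:
            return "green"
        return "unknown"

    print(light_colour(210, 60))    # red
    print(light_colour(220, 200))   # yellow
    print(light_colour(50, 210))    # green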

And yet, the media continues to worry about the threat of AI. To some extent, worrying about world-conquering AIs is a kind of ghost story for a secular age. It’s fun to be frightened, and in 2018 the nebulous malevolence in the dark reaches of the internet is more credible than dybbuks and djinni. And apocalyptic predictions get more clicks than realistic headlines such as “It’s hard to say anything definite about AI,” “AI will probably be reasonably benign” or “Real AI is probably a long way off.”

There have even been calls for legislative limits on AI research, and for such research to be approached with thoughtfulness and caution. It’s hard to argue against thoughtfulness and caution. But as for legislation: I once asked my brother the yacht-broker what the difference was between a yacht and a mere boat. He said that if your mother-in-law asks, it’s a yacht, but if the IRS asks, it’s just a little old boat.  Similarly, if a venture capitalist asks about one’s project, then it’s definitely AI, but if the AI-police ask, then it’s just a regular old computer program.  It might come down to a modern Epimenides paradox: any program smart enough to contrive to be judged legal is too smart and thus illegal.

The mention of AI makes podcast hosts nervous, but real AI remains chimerical. People say it’s ten years away, but then again, they’ve been saying that for decades. There have been cycles of hype, exuberance and disappointment around AI before, and this is probably another. But one day, sometime, real AI will arrive, and then we may know what the mind is, what thought is, and who we are.

(Source: The Paris Review)
