
There’s no such thing as AI

By Parmy Olson

No one sells the future more masterfully than the tech industry. According to its proponents, we will all live in the “metaverse,” build our financial infrastructure on “web3” and power our lives with “artificial intelligence.” All three of these terms are mirages that have raked in billions of dollars, despite repeated pushback from reality.

Artificial intelligence in particular conjures the notion of thinking machines. But no machine can think, and no software is truly intelligent. The phrase alone may be one of the most successful marketing terms of all time.

Last week OpenAI announced GPT-4, a major upgrade to the technology underpinning ChatGPT. The system sounds even more humanlike than its predecessor, naturally reinforcing notions of its intelligence. But GPT-4 and other large language models like it are mirroring databases of text. Helped along by an army of humans reprogramming them with corrections, the models glom words together based on probability. That is not intelligence.
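To see how uninspiring "gloming words together based on probability" is at its core, here is a deliberately toy sketch in Python. It is not how GPT-4 actually works (real models use neural networks over billions of parameters, not bigram counts, and this tiny corpus is invented for illustration), but the underlying principle is the same: predict a plausible next word from statistics of past text.

```python
from collections import Counter, defaultdict

# Toy corpus, purely illustrative.
corpus = "the cat sat on the mat the cat ate the fish".split()

# Count which word tends to follow which.
follows = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev][nxt] += 1

def generate(start, length=5):
    """Emit `length` words by always choosing the most
    frequent continuation seen in the corpus (greedy decoding)."""
    word, out = start, [start]
    for _ in range(length):
        options = follows.get(word)
        if not options:
            break
        word = options.most_common(1)[0][0]
        out.append(word)
    return " ".join(out)

print(generate("the"))  # fluent-looking, but no understanding involved
```

The output looks superficially like language because the statistics of the corpus are baked in, yet nothing in the program knows what a cat is. That gap between fluency and understanding is the point.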

These systems are trained to generate text that sounds plausible, yet they are marketed as new oracles of knowledge that can be plugged into search engines. That is foolhardy when GPT-4 continues to make errors, and it was only a few weeks ago that Microsoft Corp. and Alphabet Inc.’s Google both suffered embarrassing demos in which their new search engines glitched on facts.

Not helping matters: Terms like “neural networks” and “deep learning” only bolster the idea that these programs are humanlike. Neural networks aren’t copies of the human brain in any way; they are only loosely inspired by its workings. Long-running efforts to replicate the human brain, with its roughly 85 billion neurons, have all failed. The closest scientists have come is emulating the brain of a worm, with 302 neurons.

We need a different lexicon that doesn’t propagate magical thinking about computers, and doesn’t absolve the people designing those systems of responsibility. What is a better alternative? Reasonable technologists have tried for years to replace “AI” with “machine learning systems,” but that doesn’t trip off the tongue in the same way.

Stefano Quintarelli, a former Italian politician and technologist, came up with another alternative, “Systemic Approaches to Learning Algorithms and Machine Inferences” or SALAMI, to underscore the ridiculousness of the questions people have been posing about AI: Is SALAMI sentient? Will SALAMI ever have supremacy over humans?

The most hopeless attempt at a semantic alternative is probably the most accurate: “software.” “But,” I hear you ask, “what is wrong with using a little metaphorical shorthand to describe technology that seems so magical?”

The answer is that ascribing intelligence to machines gives them undeserved independence from humans, and it absolves their creators of responsibility for their impact. If we see ChatGPT as “intelligent,” then we are less inclined to try to hold San Francisco startup OpenAI LP, its creator, to account for its inaccuracies and biases. It also creates a fatalistic complacency among humans who suffer technology’s damaging effects: “AI” will not take your job or plagiarize your artistic creations — other humans will.

The issue is ever more pressing now that companies from Meta Platforms to Snap to Morgan Stanley are plugging chatbots and text and image generators into their systems.

“[AI is] one of those labels that expresses a kind of utopian hope rather than present reality, somewhat as the rise of the phrase ‘smart weapons’ during the first Gulf War implied a bloodless vision of totally precise targeting that still isn’t possible,” says Steven Poole, author of the book Unspeak, about the dangerous power of words and labels.

Opinion


2023-03-30


https://ktimes.pressreader.com/article/281809993155553

The Korea Times Co.