Artificial intelligence touches almost every aspect of our lives, from mobile banking and online shopping to social media and real-time traffic maps. But what happens when artificial intelligence is biased? What if it makes mistakes on important decisions, from who gets a job interview or a mortgage to who gets arrested and how much time they ultimately serve for a crime?
“These everyday decisions can greatly affect the trajectories of our lives and increasingly, they’re being made not by people, but by machines,” said UC Davis computer science professor Ian Davidson.
A growing body of research, including Davidson’s, indicates that bias in artificial intelligence can lead to discriminatory outcomes, especially for minority populations and women.
Facial recognition technologies, for example, have come under increasing scrutiny because they’ve been shown to detect white faces better than the faces of people with darker skin. They also do a better job of detecting men’s faces than women’s. Mistakes by these systems have been implicated in a number of false arrests due to mistaken identity.
In fact, concern about bias in facial recognition technologies led to a number of bans on their use. In June 2019, San Francisco was among the first cities in the nation to ban the use of facial recognition technologies by the police and other city departments. The State of California followed suit in January 2020, imposing a three-year moratorium on the use of facial recognition technology in police body cameras.
Has racial profiling gone digital?
A number of technologies take facial recognition a step further, analyzing and interpreting facial attributes and other data to make risk assessments and identify threats or unusual behavior. In the world of data surveillance, this is called “anomaly detection.” Increasingly, these technologies are deployed by law enforcement, airport security, and retail and event security firms.
In a recent study, Davidson and Ph.D. student Hongjing Zhang demonstrated that these types of anomaly detection algorithms are more likely to flag African Americans or darker-skinned males as anomalies.
“Since anomaly detection is often applied to people who are then suspected of unusual behavior, ensuring fairness becomes paramount,” Davidson said. “If one of these algorithms is used for surveillance purposes, it’s much more likely to identify people of color. If a white person walks in, it would not be likely to trigger an anomalous event. If a black person walks in, it would.”
That sounds a lot like computer-aided racial profiling.
“But it’s completely unintentional,” Davidson said. “The machine is not biased. It has no moral compass. It’s just seen more white faces in the data it was trained on before, and so it’s learned to associate that with normality.”
Ensuring that AI is fair and free from bias is complex, he explained. His work shows that adding more people of color to the data the machine learns from helps, but it does not eliminate the issue.
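Davidson’s explanation can be seen in a small experiment. The sketch below is not Davidson and Zhang’s actual method; it is a minimal illustration using scikit-learn’s off-the-shelf IsolationForest on synthetic data, where a 95/5 training split stands in for over- and under-represented groups.

```python
# Minimal sketch (synthetic data, hypothetical "embeddings"):
# an anomaly detector trained on a skewed mix flags the
# under-represented group far more often.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(0)

# Two equally "normal" clusters; group B is only 5% of training data.
group_a = rng.normal(loc=0.0, scale=1.0, size=(950, 2))
group_b = rng.normal(loc=4.0, scale=1.0, size=(50, 2))

detector = IsolationForest(contamination=0.05, random_state=0)
detector.fit(np.vstack([group_a, group_b]))

# Score fresh samples drawn from the same two distributions.
test_a = rng.normal(loc=0.0, scale=1.0, size=(500, 2))
test_b = rng.normal(loc=4.0, scale=1.0, size=(500, 2))

flagged_a = (detector.predict(test_a) == -1).mean()  # -1 means "anomaly"
flagged_b = (detector.predict(test_b) == -1).mean()
print(f"Group A flagged as anomalous: {flagged_a:.1%}")
print(f"Group B flagged as anomalous: {flagged_b:.1%}")
# Group B is flagged far more often, not because it behaves
# differently, but because the model saw fewer examples of it.
```

Rebalancing the training mix narrows the gap in this toy setting, consistent with Davidson’s finding that adding more representative data helps but does not eliminate the issue.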
Bias reflects an unjust world
Examples abound of technologies that don’t perform accurately for people with darker skin. In February 2021, the FDA warned that the pulse oximeters used to monitor oxygen saturation levels in COVID-19 patients may be less accurate for people with darker skin pigmentation. A study also found that people of color were more likely to be hit by driverless cars because the object detection systems used to recognize pedestrians don’t work as well on people with darker skin.
“Bias reflects a world that is unjust,” said Computer Science Professor Patrice Koehl. He teaches an ethics course that’s required for all UC Davis students majoring in computer science and engineering. Part of the course focuses specifically on bias in AI and other technologies.
“I want students to have an awareness of the problem and to understand why there are biases in our decisions,” Koehl said. “For example, if all your colleagues are white males, you’re unlikely to discuss problems associated with machine recognition of dark skin.”
This lack of diversity in the workforce has been a persistent problem in the technology sector. Nearly 80 percent of employees at Apple, Facebook, Google and Microsoft are male, and there’s been little growth in Black, Latinx and Native representation since 2014, according to Mozilla’s 2020 Internet Health Report.
“The danger with bias comes from the fact that we consider AI as a system that can make decisions,” Koehl said. “You want that decision to be as informed as possible. If the information you provide is wrong or biased, the decision will be wrong.”
When it comes to developing future technologies, Koehl is optimistic that today’s students will do a better job. “The problem associated with AI was created in the last 20 years, partly by software engineers. If those engineers were able to create such a big problem, my hope is that the next generation of engineers will spend just as much time looking at the problems and identifying solutions,” he said.
Unraveling the tangled roots of bias
While there’s growing awareness of bias in artificial intelligence, there’s no simple solution. Bias can be introduced in a number of ways, beyond the software engineer developing a new technology. Artificial intelligence and machine learning algorithms rely on data, which is not always representative of minority populations and women. That’s because behind the data, the decisions about which data to collect and how to use it are still made by people.
“We cannot address bias and unfairness in AI without addressing the unfairness of the whole data pipeline system,” said Thomas Strohmer, director of UC Davis’ Center for Data Science and Artificial Intelligence Research, or CeDAR.
CeDAR is a hub for research activity focused on using AI for social good, from better healthcare to precision agriculture and combating climate change. Fighting bias and standing up for privacy are a natural part of that mission, Strohmer said.
“Things like racial profiling existed before these tools,” he said. “AI just enhances an existing bias. If you feed a biased data set into an algorithm, the result will be a biased algorithm.”
Because new technologies are often adopted at scale, Strohmer noted, biases can quickly become widespread, and they’re not always easy to detect. To determine if there’s bias in a data set or an algorithm, you need access to the data and the algorithm.
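Strohmer’s point is concrete: even the simplest bias audit needs the ground truth, the algorithm’s decisions and the group labels side by side. A minimal sketch, with entirely hypothetical data, comparing false positive rates across two groups:

```python
# Minimal bias-audit sketch: compare how often each group is
# wrongly flagged. All labels and predictions are hypothetical.
import numpy as np

def false_positive_rate(y_true, y_pred):
    negatives = y_true == 0          # people who should not be flagged
    return (y_pred[negatives] == 1).mean()

y_true = np.array([0, 0, 1, 0, 1, 0, 0, 1, 0, 0])  # ground truth
y_pred = np.array([0, 1, 1, 0, 1, 1, 0, 1, 1, 0])  # algorithm's decisions
group  = np.array([0, 0, 0, 0, 0, 1, 1, 1, 1, 1])  # protected attribute

for g in (0, 1):
    fpr = false_positive_rate(y_true[group == g], y_pred[group == g])
    print(f"Group {g} false positive rate: {fpr:.0%}")
# A large gap between the two rates is one common signal of
# disparate impact -- and computing it requires exactly the
# access to the data and the algorithm that hidden deployments deny.
```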
Facial recognition, video analytics, anomaly detection and other kinds of pattern matching are being used in law enforcement, often out of public view.
AI in the shadows
Elizabeth Joh, a professor at the UC Davis School of Law, says this hidden nature of AI is a major concern.
“If there are problems in law enforcement, they are increasingly difficult for people to see,” said Joh, who has written extensively about technology, policing and bias. “Most people understand the utility of a firearm and a badge. If someone experiences excessive force by the police, we intuitively understand that. With technology, we might not even recognize that the problem exists. You might never know unless you become the target of that interaction.”
For this reason, she said, accountability is crucial. She points to increasing experimentation with AI tools by police departments in towns and cities across the country, often with little consideration of the long-term consequences.
“We need to realize that these tools can quickly get out of hand or can be used in ways that are unexpected and have socially harmful consequences or disparate consequences,” Joh said. “A certain amount of bias has always existed in policing. Hidden technologies can exacerbate the problem immensely.”
Joh added that it’s not too late for police departments and other organizations to take a step back and ask the most fundamental question about the use of AI: Should we be using these tools at all?
Harnessing AI鈥檚 power for good
“Education, training and promoting diversity are key to addressing how technology is perpetuating bias,” said Pamela Reynolds, associate director of the UC Davis DataLab: Data Science and Informatics. “Just as AI is contributing to these persistent societal problems, it can also be empowering for uncovering bias and finding solutions.”
In this regard, the DataLab leads by example. It supports a diverse faculty and affiliates program and hosts events for women and underrepresented individuals in data science.
WHAT DO DATING TECHNOLOGY AND ALZHEIMER’S HAVE IN COMMON?
UC Davis neuropathologist Brittany Dugger, along with researchers at UC San Francisco, has found a way to teach a computer to precisely detect one of the hallmarks of Alzheimer’s disease in human brain tissue, delivering a proof of concept for a machine-learning approach capable of automating a key component of Alzheimer’s research.
“In data science, we are working with large sets of information and these inevitably reflect the structural inequalities of our society,” said Reynolds, who is an experimental ecologist by training. “Without a critical and inclusive approach to data and the AI tools it enables, we run the risk of reproducing past injustices.”
The DataLab helps students and researchers understand the complexity of large data systems and how technologies and computational methods work. This has added value to a number of research projects, including work to improve early diagnosis of Alzheimer’s in women and communities of color being conducted by Brittany Dugger, assistant professor of pathology and laboratory medicine, and her team at UC Davis Health.
Trust will require transparency
The trend toward greater use of artificial intelligence shows no signs of abating, whether it’s to improve healthcare, recommend movies via a streaming service, conduct surveillance, or any of a myriad of other uses.
For that reason, transparency in AI is more important than ever. According to Davidson, that will require fairness, explainability and privacy.
“As machines replace humans and make more decisions, we need to be able to trust them,” he said. “That includes understanding exactly how machine algorithms are processing information and how decisions are being made.”
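What that understanding looks like depends on the model. As one minimal, hypothetical sketch (the feature names and data below are invented for illustration): for a linear model, each feature’s contribution to a single decision can be read directly from its coefficients.

```python
# Minimal explainability sketch for a linear model: a single
# decision decomposes into per-feature contributions
# (coefficient * feature value). Data and names are hypothetical.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)
feature_names = ["income", "debt_ratio", "years_employed"]
X = rng.normal(size=(200, 3))
y = (X[:, 0] - X[:, 1] + rng.normal(scale=0.5, size=200) > 0).astype(int)

model = LogisticRegression().fit(X, y)

applicant = X[0]
for name, c in zip(feature_names, model.coef_[0] * applicant):
    print(f"{name:>15}: {c:+.2f}")
decision = model.predict(applicant.reshape(1, -1))[0]
print("decision:", "approve" if decision == 1 else "deny")
# Surfacing per-feature contributions like this is one small
# step toward the transparency Davidson describes; richer
# model-agnostic tools exist for non-linear models.
```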
Media Resources
Catherine Kenny, UC Davis News and Media Relations, 530-752-3140, cmkenny@ucdavis.edu