STEMINISM

Are machines inherently racist?

How a professor is studying how to fight biases in machine learning

Editor’s note: Steminism is a biweekly feature column where Ash Shah highlights work and research being done by womxn in STEM at the UW. 

In 2014, Amazon developed a program to screen job applicants’ resumes and rank them by hireability, using the resumes of previous employees as its guide for vetting applicants. 

What started as an idea to design a system that could sort resumes faster than a recruiter almost worked.

The resumes the AI used to build its model reflected Amazon’s demographics, and because Amazon’s workforce was heavily male-dominated, the algorithm learned to favor men over women. 

Dr. Anna Lauren Hoffmann, an assistant professor at the UW Information School, studies fairness and justice in technological systems. 

After her undergraduate studies as an English major, Hoffmann decided to go to graduate school to study library and information sciences. On her very first day at orientation, she met a professor who was talking about the ethics of doing research on people’s MySpace pages. Hoffmann casually joined in the conversation and by the end of it, the professor offered Hoffmann a position in her lab.


“From that day forward, I no longer wanted to be a librarian,” Hoffmann said. “I was like, ‘I want to do whatever this is.’” 

Over the next couple of years, through her postdoc, Hoffmann studied information systems and the ethics of engaging people in that space. 

A group of researchers from the University of Virginia, the UW, and the Allen Institute for Artificial Intelligence began looking into a phenomenon they noticed in image-recognition technology, in which the system recognized women more easily when they were pictured in kitchens. 

“The algorithm was trained, in part, on large caches of old magazine advertisements because that was easily available, public domain data,” Hoffmann said. “So, the system learned this information.”

As the name suggests, the field of machine learning seeks to have systems understand data and identify patterns in it to help make decisions, minimizing the need for human intervention. 

In theory, this sounds great. However, when the data a system is fed reflects years of biases and stereotypes, the machine learns those patterns, and the same ideas are reinforced. 
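To make that concrete, here is a minimal sketch in Python (not Amazon’s system; the features, numbers, and hiring rule are invented for illustration). It trains a simple classifier on synthetic “historical hiring” data in which equally skilled men were hired more often, and the model ends up treating gender as a predictive signal.

# Minimal sketch: a classifier trained on skewed historical decisions reproduces the skew.
# All data here is synthetic; nothing below comes from Amazon or any real hiring system.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 2000

# Features: a skill score (drawn identically for everyone) and a gender flag.
skill = rng.normal(size=n)
is_male = rng.integers(0, 2, size=n)

# Historical labels: skill mattered, but past decisions also favored men.
hired = (skill + 1.5 * is_male + rng.normal(scale=0.5, size=n)) > 1.0

X = np.column_stack([skill, is_male])
model = LogisticRegression().fit(X, hired)

# Two candidates with identical skill, differing only in the gender flag.
woman = np.array([[0.8, 0]])
man = np.array([[0.8, 1]])
print("P(hire | woman):", model.predict_proba(woman)[0, 1])
print("P(hire | man):  ", model.predict_proba(man)[0, 1])

The two probabilities diverge even though the skill scores are identical, because the only way to explain the historical labels is to give weight to gender; that, in miniature, is the pattern the Amazon system learned. 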

Hoffmann grounds her research in social theory, combining close reading, argumentation, and discourse analysis, which examines the language people use to talk about technologies and tries to get at their social meanings.

She talked about how students may not realize the degree to which their information is being used: many universities rely on similar screening systems to read personal statements for college applications. These algorithms play a large role in determining who makes it past the first cut in the admissions process, and one university has been accused of tracking applicants’ website activity to gauge how much tuition they could afford. 

Dating apps have a similar history of reinforcing biases through how they let users categorize themselves and how that information is then displayed and used. A Cornell University study drawing on OkCupid data found that Black women and Asian men were substantially more likely to be ranked lower than other users. 

Hoffmann also talked about how dating apps make users classify themselves by gender and the ethics of such a classification. 

“This can expose trans or non-binary folk to particular forms of scrutiny or violence,” Hoffmann said. 

Unfortunately, there isn’t a clear-cut solution. We’re chasing this idea of fairness that might not even exist. 

“There’s not always a right answer,” Hoffmann said. 

According to Hoffmann, there is no one tool or piece of policy that is going to get us where we want to be. Instead, she described justice, in the context of technology, as iterative and ongoing.
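One way to see why no single tool settles the question: common statistical definitions of fairness can contradict one another on the very same set of decisions. The toy Python example below uses invented numbers tied to no real system; it shows outcomes where two groups are selected at identical rates (demographic parity holds) while deserving members of one group are approved less often (equal opportunity fails).

# Toy illustration with invented numbers: the same decisions can satisfy one
# fairness criterion and violate another at the same time.

def rates(tp, fp, fn, tn):
    """Return (selection rate, true-positive rate) for a 2x2 confusion matrix."""
    total = tp + fp + fn + tn
    selection_rate = (tp + fp) / total       # how often the system says "yes"
    true_positive_rate = tp / (tp + fn)      # how often deserving cases get "yes"
    return selection_rate, true_positive_rate

# Hypothetical outcomes for two demographic groups of 100 people each.
sel_a, tpr_a = rates(tp=40, fp=10, fn=10, tn=40)   # Group A
sel_b, tpr_b = rates(tp=15, fp=35, fn=10, tn=40)   # Group B

print(f"Selection rates:     A={sel_a:.2f}, B={sel_b:.2f}")    # 0.50 vs 0.50
print(f"True-positive rates: A={tpr_a:.2f}, B={tpr_b:.2f}")    # 0.80 vs 0.60

Push the numbers toward equal true-positive rates and the selection rates drift apart, which is one reason the work reads as iterative and ongoing rather than solvable once and for all. 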

“The most central work happening in this phase are efforts to resist and refuse the passive adoption of these systems,” she said.

People across the country are fighting back. Whether it be against the use of new data-driven policing methods or against governmental use of facial recognition technology right here in Washington, people are resisting. While there is no one thing that will miraculously clear all algorithms of bias, this is a start. 

Reach Assistant Science Editor Ash Shah at science@dailyuw.com. Twitter: @itsashshah

