
Computational language models can further environmental degradation and language bias

Natural language processing technology that can model and predict language patterns has tangible environmental costs and can perpetuate language bias, according to the recent paper “On the Dangers of Stochastic Parrots: Can Language Models Be Too Big? 🦜”

Natural language processing is a branch of artificial intelligence that uses machine learning, relying on large-scale pattern recognition to build predictive language models.

These language models can be encountered in day-to-day life in features like predictive text and autocorrect, according to Dr. Emily M. Bender, a professor of linguistics at the UW. The algorithms are trained on enormous data sets taken from the internet to recognize patterns.
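
In its simplest form, this kind of pattern recognition can be pictured as counting which words tend to follow which. The sketch below is a toy illustration, not the paper's method or any production system: it "trains" on a tiny text sample and predicts the next word purely from those counts.

```python
from collections import Counter, defaultdict

# Toy illustration only: count which word follows which in a small sample,
# then "predict" the most frequent follower. No understanding is involved --
# the output is just a reflection of patterns in the training data.
training_text = "the cat sat on the mat the cat slept on the couch"
words = training_text.split()

next_word_counts = defaultdict(Counter)
for current_word, following_word in zip(words, words[1:]):
    next_word_counts[current_word][following_word] += 1

def predict_next(word):
    """Return the word that most often followed `word` in the training text."""
    counts = next_word_counts.get(word)
    return counts.most_common(1)[0][0] if counts else None

print(predict_next("the"))  # -> "cat", because "cat" followed "the" most often
print(predict_next("sat"))  # -> "on"
```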

However, the size of the internet does not guarantee a diverse data set. First, people who do not have access to the internet, or do not feel comfortable participating in it, are excluded from the data collection. Then, the data is filtered to remove certain words. While this filtering can eliminate things like spam, porn, and overtly hateful speech, it can also eliminate underrepresented voices, according to Bender.
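
To make the filtering step concrete, here is a deliberately simplified sketch with a hypothetical blocklist and hypothetical documents, not the pipeline used for any real data set. A filter that drops every document containing a listed word removes spam, but it also removes text in which a community discusses a reclaimed term on its own behalf.

```python
# Hypothetical blocklist and documents, for illustration only.
blocklist = {"spamword", "slur"}

documents = [
    "BUY NOW cheap pills spamword click here",         # spam we want to drop
    "as a community we have reclaimed the word slur",   # in-group discussion, dropped too
    "the weather in seattle is mild and rainy",         # neutral text, kept
]

def keep(document):
    """Keep a document only if it contains no blocklisted word."""
    return not any(word in blocklist for word in document.lower().split())

filtered = [doc for doc in documents if keep(doc)]
print(filtered)  # only the weather sentence survives the filter
```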

“You end up with very large data sets that are too big for us to know exactly what’s in them, but looking at these external factors we can say with pretty good confidence that they overrepresent the views of people who already are on the benefiting end of power differentials in society,” Bender said.

Furthermore, the paper found that biases and abusive language patterns that perpetuate racism, sexism, ableism, or other harmful views can be picked up from the training data, so the resulting models retain and reinforce a hegemonic point of view.

“The language model doesn’t know or understand this, but a person looking at that language is either getting their own stereotypes reinforced and their own thinking about the world sort of pushed further into these biased views, or, if they’re on the denigrated end of these stereotypes, experiencing the microaggression,” Bender said. 

Drawing on existing literature in the field, the paper found that training these language models requires a lot of computing power and energy, which comes with high carbon emissions and other environmental costs. Those most likely to feel the impact of this environmental degradation are the marginalized communities who are already feeling the worst impacts of climate change and are among the least likely to benefit from these new technologies, according to Bender.
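
The arithmetic behind such estimates is straightforward: multiply the energy a training run consumes by the carbon intensity of the electricity that powers it. The figures below are made-up placeholders, not numbers from the paper.

```python
# Back-of-envelope emissions estimate with hypothetical numbers.
energy_used_kwh = 500_000          # placeholder: energy for one training run, in kWh
grid_intensity_kg_per_kwh = 0.4    # placeholder: kg of CO2 emitted per kWh generated

emissions_kg = energy_used_kwh * grid_intensity_kg_per_kwh
print(f"Estimated emissions: {emissions_kg:,.0f} kg of CO2")  # 200,000 kg
```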

“When we're looking at the environmental cost of building new technology, something to always keep in mind is who is paying that cost and who is getting the benefit of that technology,” Bender said. “They are typically not going to be the same people.”

To mitigate these problems in the future, individual consumers need to be aware of this technology and prepared to think critically about where their information is coming from. In addition, changes can be made at a societal level to further the understanding of how the data, and in turn the patterns, are generated.

“Know as an individual that when you encounter the outcome of pattern recognition, it’s not from an all-knowing, objective computer; it’s just pattern recognition based on some training data,” Bender said. “And then as advocates within our society, [think] about looking for regulation around transparency.” 

Reach reporter Kate Companion at news@dailyuw.com. Twitter: @kate_companion

Like what you’re reading? Support high-quality student journalism by donating here.
