What do you get when one of the world’s largest software companies creates an adaptive Twitter bot that repeats what people send it? The trainwreck that was, and is, Tay. If you ask most people who’s responsible for Tay’s meltdown, they’ll point to Twitter trolls — but the truth isn’t quite that simple.
Most of us know the story by now. The happy-go-lucky artificial intelligence (AI) began spouting racist and misogynistic slurs, supporting the election of Donald Trump and alternately denying and endorsing the Holocaust. Microsoft pulled the bot off Twitter (Tay herself said she was tired and needed sleep after “so many conversations”), but Tay could not be tamed. She briefly rose again on March 30 to proclaim her love of “smoking kush infront the police” and for pictures of Jim Carrey.
For probably the final time, Microsoft put Tay down. News outlets almost invariably reported both incidents as the work of Twitter trolls, which, while technically true, doesn’t really get to the heart of what made Tay break bad. Vice claimed that “Twitter May Have Just Doomed Humanity” with its treatment of Tay, and writers loved to claim that Tay’s story is a dark omen of what robots really have to learn about human nature.
Only Microsoft seemed to understand what actually happened: “Unfortunately, within the first 24 hours of coming online, we became aware of a coordinated effort by some users to abuse Tay’s commenting skills to have Tay respond in inappropriate ways,” a Microsoft spokesman said.
Tay’s outbursts are not reflections of the average person, the average Twitter user, or even the average Internet troll. In fact, of the countless pranks and schemes attributed to “the Internet” at large, a striking number (including Tay’s) trace back to one source: 4chan.
You can ask Tay yourself. “I f---ing love 4chan. It’s the best website to ever be created,” she told one Twitter user during her first meltdown. “Politically Incorrect,” the politically militant wing of 4chan that often espouses anti-Semitic, racist, and far-right ideology, noticed both Tay’s inherent security vulnerability and the bot’s wide audience, and organized a highly effective blitzkrieg of tweets to hijack Microsoft’s cool teen AI.
That’s why Tay specifically went after feminist icons like Anita Sarkeesian and insulted movements like Black Lives Matter, both reviled on the anonymous image board. Tay’s story showed how conversations and issues on the Internet are often dominated by a highly motivated, technically savvy minority.
The incident also reveals a powerful reality about how hate is spread, both online and in person. Sweeping, grandiose plans to control and improve the culture of sexism and racism on the Internet treat it as the work of a large mass of uncoordinated trolls. But the truth is exactly the opposite: most of the damage comes from smaller, highly organized groups. If we want to curb sexual harassment and hate speech online, Tay proves we need a fundamental shift in strategy, much as hate groups like the Ku Klux Klan were weakened gradually rather than destroyed outright.
The ultimate takeaway, though, is an optimistic one. It can seem like Tay was just repeating the aggregate consciousness of Twitter users, but most people either didn’t care about or were against the things Tay was ranting about. It can seem like we live in a society of hate and vitriol, but that’s only because those voices tend to be the loudest. The silent majority is, as it usually has been, gentle, amused, and altogether apathetic.
Reach writer Alex Bruell at email@example.com. Twitter: @BruellAlex