UW researchers in the Department of Human-Centered Design & Engineering (HCDE) recently published a paper outlining their work on disinformation campaigns, how they spread, and their broader implications for policy and future research.
The researchers are part of the Emerging Capacities of Mass Participation (emCOMP) Lab, which is directed by associate professor Kate Starbird. Their work focuses on mass communication and interaction online, a phenomenon made possible by the rise of the internet.
emCOMP’s work analyzes sociotechnical systems, examining the intersection of people, technology, and policy. Although housed in the College of Engineering, the nature of their work requires a firm grounding in social science.
“To study this space we also have to understand the social science perspective and the policy perspective, and how people act, why people are susceptible to this type of stuff,” Melinda McClure Haughey, an HCDE Ph.D. student and researcher at emCOMP, said.
One area of study researchers at the lab have focused on in the past few years is online disinformation campaigns, which are distinct from misinformation.
“Misinformation might be things that are half true and half false, or sharing things that aren’t true but don’t intend to mislead people, whereas disinformation, as we know it in our lab, is the intention to deceive or mislead people,” McClure Haughey said.
In 2018, Starbird, along with researchers Ahmer Arif and Leo G. Stewart, published research on how Russia’s Internet Research Agency (RU-IRA) engaged in a disinformation campaign in the United States in an attempt to sow further division between Americans on the political right and left.
Their research found that the professional trolls hired by RU-IRA created fake Twitter profiles resembling Americans on both sides of the political spectrum, specifically targeting discourse surrounding the Black Lives Matter movement.
These fake profiles fit into four categories: personal accounts on the left, often portrayed as “proud African Americans,” and personal accounts on the right, often portrayed as “proud white conservatives,” as well as accounts for fake organizations on both the left and right.
By creating fake profiles of both individuals and organizations, RU-IRA was able to reach audiences through different points of access and therefore promote and deepen the intensity of their message.
Accounts on both sides tweeted about the Black Lives Matter movement, positively on the left and negatively on the right, and used inflammatory language toward the other side to intensify political divisions among Americans.
“By tapping into this larger reservoir of antagonistic discourses proliferating in American politics, these accounts amplified toxicity in public discussions,” the report explained.
Starbird’s new report, co-written by Arif and Tom Wilson, a Ph.D. student in HCDE, argues that although disinformation campaigns like RU-IRA’s Black Lives Matter operation may start as orchestrated efforts to spread false information, they can also be spread by, or even originate from, the organic behaviors of online crowds.
They argue that disinformation goes beyond “bots and trolls” as is often thought, and also relies upon average people to believe disinformation and spread it at a grassroots level to reach a larger audience.
The organic behaviors of individuals online complicate the question of how disinformation campaigns can be addressed and prevented.
The authors of the paper suggest that further research on information operations should be cross-disciplinary, involving a social science perspective and utilizing qualitative data as well as big data analysis in order to best understand how people come to believe and engage with disinformation.
Reach reporter Emily Young at email@example.com. Twitter: @emilymyoung7