What does my computer know about me?
This is not an entirely new question. As our phones have gotten smarter and assistive technology has infiltrated almost every aspect of daily life, people have begun to question how these technologies seem to know us so well.
Chirag Shah, an associate professor at the UW and founder of the InfoSeeking Lab, is looking at just that. With a grant from the National Science Foundation (NSF), Shah is investigating how companies like Google can improve the transparency of their search engines.
Right now, there is no good solution to the problem of search engine transparency; existing approaches are either too complex, presenting users with entire algorithms, or too simplistic. For instance, a search engine might say, “You are seeing this because other people like you like it,” which flattens the full picture.
Shah wants to change this. He wants to find a level of transparency that makes users aware of how their search results are produced without going into so much detail that the information becomes overwhelming.
He said there are two important parts to creating a more transparent system. The first is figuring out how to present the information behind a system in a way the average person can understand. The second requires understanding how much people want to know, and when, about how recommendation systems produce their results.
To build such a system, Shah and fellow researcher Yongfeng Zhang of Rutgers University are using their NSF grant to develop new machine learning methods whose results can be explained and presented in a way users can easily understand.
Shah compares this to the experience of eating at a restaurant.
Much like the varying amounts of transparency that can be associated with the food served in restaurants, there are different levels of transparency that can be applied to search engine results.
“Some people would probably care all the way to the farm where the ingredients you know were produced,” Shah said. “But other people … sometimes you just want a quick meal. You don't want any of those things.”
One of the big reasons that transparency in recommender systems is important is that people trust these systems. We interact with recommender systems every day, on mobile devices, computers, and voice-activated assistants, like Alexa.
Because our interactions with recommender systems are so ingrained in daily life, they become less noticeable. People rarely stop to think about how search results and recommendations are presented to them; they simply trust the technology they interact with.
This trust can be dangerous as it causes users to become blind to the issues that the lack of transparency can cause, Shah explained.
“You hear things like, ‘If they didn't find it on Google, it must not exist’ or ‘Because they found it at the top of Google result it must be true,’” he said. “And they're not seeing why it is at the top of the list on Google, why they couldn't find it [on] Google.”
Certain results appear over others because of bias that can be involved in generating them.
Bias in recommendation system results can come from the agenda of the company behind the system or from feedback loops based on user response. Either way, the outcome is the same: users are presented with biased results whose origins they do not question.
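Such a feedback loop can be surprisingly easy to reproduce. The sketch below is a hypothetical toy simulation, not anything from Shah's research: five equally appealing items start with the same click count, but the ranker always shows the most-clicked item first, and most simulated users click whatever is on top. The early leader's advantage compounds.

```python
import random

random.seed(0)

# Five items that are, by construction, equally appealing.
clicks = {item: 1 for item in ["A", "B", "C", "D", "E"]}

for _ in range(1000):
    # The "recommender" ranks items by accumulated clicks.
    ranking = sorted(clicks, key=clicks.get, reverse=True)
    # 80% of users click the top result regardless of quality;
    # the rest pick something at random.
    if random.random() < 0.8:
        chosen = ranking[0]
    else:
        chosen = random.choice(ranking)
    clicks[chosen] += 1

# One item dominates, even though none was intrinsically better.
print(sorted(clicks.items(), key=lambda kv: -kv[1]))
```

The popularity gap here comes entirely from presentation order, which is exactly why the origins of a ranking matter even when no company agenda is involved.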
Improving the transparency of recommendation systems is a long-term process, but Shah is hopeful. Even though change will be slow, Shah recognizes the importance of the educational aspect of the process.
Reach contributing writer Teresa Bonilla at firstname.lastname@example.org. Twitter: @toomuchteresa