
UW artificial intelligence project receives $125 million

AI at the UW

Seattle continues to make strides in artificial intelligence (AI) as Project Alexandria at the Allen Institute for Artificial Intelligence (AI2) just received a $125 million grant over three years from Paul Allen, the co-founder of Microsoft and founder of AI2. 

AI2 is a nonprofit research institution that has existed in Seattle since 2013. The organization’s goal is to push the limits of AI while keeping everything open and free for the public to reuse. AI2 publishes its research papers and datasets on its website.

Project Alexandria, one of many research subsets at AI2, has a team that focuses on incorporating common sense reasoning and everyday logic into technology. Other projects include Aristo, which focuses on machine learning and reasoning; Euclid, which focuses on natural language understanding; and PRIOR, which focuses on computer vision. 

UW professor Yejin Choi is taking the lead on Project Alexandria. She will help shift how machines and technology currently work by incorporating common sense reasoning into the capabilities of artificial intelligence.

“While we have seen dramatic advancements with deep learning models for some of these tasks, they do not yet have the capabilities to abstract away generalizable knowledge [about the world] that can be re-used for other tasks,” Choi told GeekWire. “Project Alexandria aims to create universal representations of commonsense that can be shared and used by other AI systems.”

Choi also brings prior experience to Project Alexandria in an area that is still very new. Because the research is at an early stage, the team is focusing on the underlying technical problems that must be solved before true progress can happen.

Current AI systems do not have much common sense, so the Project Alexandria team is researching how much common sense AI has today and how it can be used in tasks. Many machine learning errors look funny to humans precisely because humans have common sense and AI does not.

Many examples of absurd machine learning errors occur in translation. For instance, the French word for avocado, “avocat,” is also the word for lawyer, which can confuse machines. A system translating the French equivalent of “I had an avocado for dinner” might render it as “I had a lawyer for dinner” because it doesn’t have the common sense to tell the difference.
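The failure mode is easy to reproduce with a toy word-for-word translator. The sketch below is purely illustrative: the mini-dictionary and example sentence are made up (accents are dropped for simplicity), and this is not how modern translation systems or AI2’s software actually work. The point is only that without context or common sense, the ambiguous entry for “avocat” gets resolved by a blind guess.

```python
# Toy word-for-word French-to-English translator (illustrative only).
# The dictionary and sentence are made up; real systems use learned models.

FR_TO_EN = {
    "j'ai": "I have",
    "un": "a",
    "avocat": ["lawyer", "avocado"],  # ambiguous: profession vs. food
    "pour": "for",
    "diner": "dinner",
}

def naive_translate(sentence):
    out = []
    for token in sentence.lower().split():
        entry = FR_TO_EN.get(token, token)
        # No context, no common sense: just take the first listed sense.
        out.append(entry[0] if isinstance(entry, list) else entry)
    return " ".join(out)

print(naive_translate("j'ai un avocat pour diner"))
# -> "I have a lawyer for dinner"  (the system can't tell which sense fits)
```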

“One of the fields of research at AI2 is natural language processing which deals with how to get computers to understand text,” fourth-year UW Ph.D. student Antoine Bosselut said. “One of the big problems there is that there is this very strong reporting bias where the words actually mentioned in text are not actually the ones that give us this common sense. One example of something we see happen is that every machine learning model we train on text ends up learning surface patterns between words.”

For example, the word “teacher” often co-occurs with the word “student.” The two words are strongly correlated, but that correlation does not capture the relationship between a teacher and a student; it only tells us that “teacher” and “student” appear in the same environments.
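A rough sketch of what such a surface pattern looks like: the snippet below counts which words show up in the same sentences of a made-up toy corpus (not AI2’s data or code). “Teacher” and “student” come out strongly associated, but nothing in the counts says who teaches whom.

```python
from collections import Counter
from itertools import combinations

# Toy corpus; each sentence is just a list of tokens (illustrative only).
corpus = [
    "the teacher asked the student a question".split(),
    "the student thanked the teacher".split(),
    "the teacher graded every student".split(),
]

# Count pairs of distinct words that appear in the same sentence.
cooccurrence = Counter()
for sentence in corpus:
    for a, b in combinations(sorted(set(sentence)), 2):
        cooccurrence[(a, b)] += 1

print(cooccurrence[("student", "teacher")])  # 3: strongly associated ...
# ... but the count carries no relation (teaches? grades? learns from?),
# only the fact that the two words occur in the same environment.
```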

Another difficulty researchers face is that common sense itself is uncertain. Many common sense facts that are true in most contexts are not always true. For example, people know that birds fly, but there are exceptions such as penguins and ostriches. Exceptions like this are part of a large body of knowledge that computers must learn to represent and handle.
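One classic way to model such exceptions is default reasoning: assume the typical fact unless something more specific overrides it. The sketch below is a deliberately simplified illustration with made-up rules and categories; it is not how Project Alexandria represents knowledge.

```python
# Default knowledge with exceptions (illustrative only).
DEFAULTS = {"bird": {"can_fly": True}}            # what is typically true
EXCEPTIONS = {"penguin": {"can_fly": False},      # specific overrides
              "ostrich": {"can_fly": False}}

def can_fly(kind, category="bird"):
    """Use the specific exception if there is one, otherwise the default."""
    if kind in EXCEPTIONS and "can_fly" in EXCEPTIONS[kind]:
        return EXCEPTIONS[kind]["can_fly"]
    return DEFAULTS.get(category, {}).get("can_fly", False)

print(can_fly("sparrow"))  # True  -- falls back to the bird default
print(can_fly("penguin"))  # False -- the exception wins
```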

“An example is that you are given a statement and asked whether another statement is inferred,” AI2 research scientist Chandra Bhagavatula said. “From the statement ‘two men are playing football,’ we can infer that they are outdoors because typically people play football outdoors.”

This type of reasoning is called abductive reasoning or abductive inference. It begins with taking an incomplete observation and trying to figure out the likeliest explanation for that observation, and it’s something that AI typically struggles with. 

“Let’s say you go to bed in the night and in the morning you see a jar of peanut butter and a jar of jam open,” Bhagavatula said. “You are not going to infer that someone broke into your house. The abductive inference you are going to make is that your roommate made a sandwich.”
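One simple way to frame Bhagavatula’s example computationally is to score a handful of candidate explanations by how plausible they are and how much of the observation they account for, then pick the winner. The sketch below is a toy formulation with made-up hypotheses and numbers, not the scoring AI2 actually uses.

```python
# Toy abductive inference: pick the likeliest explanation for observations.
observations = {"peanut_butter_open", "jam_open"}

# Candidate explanations with made-up prior plausibilities.
hypotheses = {
    "roommate made a sandwich": {"explains": {"peanut_butter_open", "jam_open"}, "prior": 0.6},
    "someone broke in": {"explains": {"peanut_butter_open", "jam_open"}, "prior": 0.01},
    "you left the peanut butter out": {"explains": {"peanut_butter_open"}, "prior": 0.3},
}

def best_explanation(obs, hyps):
    """Score = prior plausibility * fraction of observations explained."""
    def score(name):
        covered = len(obs & hyps[name]["explains"]) / len(obs)
        return hyps[name]["prior"] * covered
    return max(hyps, key=score)

print(best_explanation(observations, hypotheses))  # roommate made a sandwich
```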

Common sense is both an old and a new part of AI. Researchers have worked on it for a long time, but it is still early days in the sense that the recent boom in AI research, driven by a technology called deep neural networks, has raised a new set of questions.

Deep learning is a methodology for building very large, very expressive, and very flexible models of different kinds of data. It lets researchers use their intuitions to express knowledge at a high level while remaining computational and scalable. For example, the information in images is largely translation-invariant: if you shift an image to the side, it still depicts the same thing. Deep neural networks can encode intuitions like this and be trained with millions of parameters to capture complicated functions that were very difficult to approximate before. However, there are still many open questions about how to use common sense in deep learning architectures and models.

“In terms of what we’re doing in Alexandria right now, in some ways common sense has been around for a long time but it’s still early for common sense in that there is this new set of challenges,” AI2 research engineer Nicholas Lourie said. “There is this deep learning technology that we want to work with and there is still a lot of open questions about how to use common sense or endow common sense to these deep learning architectures and these deep learning models.”
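The translation-invariance point above can be seen in a few lines: a convolution responds to a pattern the same way no matter where it sits in the input, which is one reason convolutional networks suit images. The snippet below uses a tiny one-dimensional signal and a hand-written filter purely for illustration; it is not code from AI2 or a real vision model.

```python
import numpy as np

# A small "bump" in a 1-D signal, and the same bump shifted three steps right.
signal = np.array([0, 0, 1, 2, 1, 0, 0, 0, 0, 0], dtype=float)
shifted = np.roll(signal, 3)

# A tiny hand-written filter (a crude edge detector).
edge_filter = np.array([1, -1], dtype=float)

out = np.convolve(signal, edge_filter, mode="same")
out_shifted = np.convolve(shifted, edge_filter, mode="same")

# Shifting the input just shifts the output: the filter finds the bump
# wherever it is, so the same weights generalize across positions.
print(np.allclose(np.roll(out, 3), out_shifted))  # True
```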

For more information and a list of current and past AI2 interns, check out the AI2 website.

Reach reporter Monica Mursch at science@dailyuw.com. Twitter: @MonicaMursch
