Professor Daniel S. Weld

CSE professor Daniel S. Weld leads the UW Crowdlab, which focuses on research in artificial intelligence (AI). He thinks the potential danger of AI is not as critical as some believe.

In a Reddit “Ask Me Anything” at the end of January, Bill Gates fielded a question about the possible dangers of super-intelligent machines.

“I am in the camp that is concerned about artificial intelligence,” he wrote. 

Gates said artificial intelligence (AI) might be a positive tool for humanity at first, but he is concerned about what might happen a few decades from now, as machines grow ever smarter.

Elon Musk, an investor in AI research and CEO of Tesla, went further, telling a group of MIT students that artificial intelligence might be the greatest existential threat facing humanity.

“We need to be super careful with AI,” he tweeted in August 2014. “Potentially more dangerous than nukes.”

The UW claims its computer science and engineering department is one of the world’s foremost centers for artificial intelligence research. 

Dan Weld, professor of computer science and engineering (CSE) at the UW and founding editor of the Journal of AI Research, said the threat of super-intelligent machines is exaggerated, partly because of the stark challenges of creating one.

Though progress in AI research has been rapid, he said it would be difficult to create an artificial general intelligence like the human mind, capable of responding to a range of situations. 

“No one’s come even remotely close to that,” Weld said.

Weld said computers can exceed human capabilities only in specific domains. Calculators, faster and more accurate than the human mind at arithmetic, are a good example of this domain-specific intelligence. In the same way, chess programs can beat grandmasters because they search far more of the game’s possible moves, far faster, than any human could.
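
As a rough, hypothetical sketch of that kind of domain-specific intelligence (not any engine Weld described), the short Python program below plays the simple game of Nim perfectly by searching every possible line of play. The game, the scoring, and the function names are illustrative assumptions; real chess engines add pruning and heuristics, but the principle is similar: the machine’s skill is search within one narrow game, nothing more.

import functools

# Nim: players alternate removing 1-3 stones; whoever takes the last stone wins.
# The program "masters" this game purely by searching the full game tree.

@functools.lru_cache(maxsize=None)
def best_move(stones, maximizing=True):
    """Return (score, move): score is +1 if the maximizing player wins with
    perfect play, -1 otherwise; move is the number of stones to take."""
    if stones == 0:
        # No stones left: the previous player took the last one and won.
        return (-1, 0) if maximizing else (1, 0)
    best = None
    for take in (1, 2, 3):
        if take > stones:
            break
        score, _ = best_move(stones - take, not maximizing)
        if (best is None
                or (maximizing and score > best[0])
                or (not maximizing and score < best[0])):
            best = (score, take)
    return best

score, move = best_move(10)
print(f"With 10 stones, perfect search says: take {move} (score {score})")

Ask the same program to drive a car or fold laundry and it is helpless; its competence begins and ends at the game tree it was built to search.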

So far, no machine possesses the common sense and multipurpose intelligence of human beings, Weld said. For example, a self-driving car may seem advanced, but a plastic bag blowing across its sensors can confuse it.

Pedro Domingos, a UW CSE professor, agreed we are far from solving the puzzle of multipurpose intelligence, or “strong” AI. 

Domingos said AI does pose some concerns, but a Skynet-style takeover is not one of them, a view he believes most AI experts share.

“The way most AI systems work is that they have the goals, and the goals are defined from the outside,” Domingos said.

AI is only smart in the sense that it can figure out how to reach the goals humans give it, he said; it doesn’t get to remake those goals. Programming a machine to decide for itself which goals to pursue would be an extraordinarily foolish mistake, but one easily avoided.
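
A minimal sketch may make the distinction concrete. In the toy Python optimizer below, invented for this article rather than drawn from Domingos’s work, the goal is an objective function handed in from the outside; the program can get arbitrarily good at pursuing that goal, but nothing in it can rewrite what the goal is.

import random

def hill_climb(objective, x=0.0, steps=10_000):
    """Maximize a human-supplied objective by simple random local search."""
    random.seed(0)  # fixed seed so the example is reproducible
    for _ in range(steps):
        candidate = x + random.uniform(-0.1, 0.1)
        if objective(candidate) > objective(x):  # the goal is consulted...
            x = candidate                        # ...never edited
    return x

# The human defines the goal; the machine only chases it.
goal = lambda x: -(x - 3.0) ** 2  # maximized at x = 3
print(hill_climb(goal))           # converges to roughly 3.0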

“We may be able to have something that’s much smarter than us but is still under our control,” Domingos said.

Weld said that if and when strong AI comes about, it’s likely to be a gradual process, and there would be safeguards in place to prevent machine intelligence from causing harm. In fact, Weld has worked on these kinds of safeguards.

He and a team of researchers tried to find a way to transform the Three Laws of Robotics, protective rules first imagined by science fiction author Isaac Asimov, into workable code.

Asimov’s Three Laws state that a robot may not injure a human being or, through inaction, allow a human being to come to harm; a robot must obey orders given to it by human beings, unless doing so conflicts with the first law; and lastly, a robot must protect its own existence, as long as this does not conflict with the first two laws.

Weld said translating the laws into working software is a real challenge.

“A big question is, how do you define harm? How does a robot know when it’s actually causing harm?” Weld said. 
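
To see why, consider a deliberately naive Python sketch. Every name in it (Action, estimated_harm, first_law_permits) is hypothetical, invented for illustration rather than taken from Weld’s research. The control flow of “do no harm” is trivial to write; the open problem is the estimated_harm function itself, which would have to recognize every way a real-world action could hurt someone.

from dataclasses import dataclass, field

@dataclass
class Action:
    name: str
    # A real robot would need a rich predictive model of the world here.
    predicted_effects: list = field(default_factory=list)

def estimated_harm(action):
    """Placeholder: scoring harm is the unsolved part. Is blocking a doorway
    harmful? Is refusing to act? A keyword list cannot capture any of this."""
    harm_words = {"injury", "collision", "burn"}
    return sum(1.0 for effect in action.predicted_effects
               if effect in harm_words)

def first_law_permits(action, threshold=0.0):
    # The First Law as a predicate is one line; its inputs are the hard part.
    return estimated_harm(action) <= threshold

print(first_law_permits(Action("hand over hot coffee", ["burn"])))  # False
print(first_law_permits(Action("wave hello")))                      # True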

David Nixon, a philosophy professor at the UW who has taught classes on the philosophy of artificial intelligence, said that if super-intelligent machines do arise, there’s no reason to think they would want to cause harm to humanity.

“If they’re going to be intelligent, why are they not able to make a moral argument?” Nixon said. “Why are they not able to see that suffering matters? Why are they super smart but still too dumb to see that?”

Nixon said super-intelligent machines might even have a better-developed morality than humans. If so, that morality would surely include treating other sentient beings kindly.

This isn’t to imply that AI poses no dangers. 

Weld pointed to cyber attacks on today’s computers as a more immediate concern. By strengthening safeguards against them, he said, we can simultaneously guard against any future rogue AI.

To this concern, Domingos added two others: unemployment brought about by machines superseding human jobs, and computers carrying out the directions given to them in unintended, destructive ways.

Society will need to find a way to make the most of AI so it benefits everyone, Domingos said. 

The other problem, ironically, is AI not being smart enough. 

“People worry that computers will get too smart and take over the world,” Domingos said. “But the real problem is that they’re too stupid and they’ve already taken over the world.”

Reach reporter Chetanya Robinson at science@dailyuw.com. Twitter: @chetanyarobins
