Teaching AI To Be ‘Smarter’ By Doubting Itself

An interesting article suggesting that, in deep-learning systems, a built-in capacity for self-doubt may lead to higher-quality conclusions.

Researchers at Uber and Google are working on modifications to the two most popular deep-learning frameworks that will enable them to handle probability. This will provide a way for the smartest AI programs to measure their confidence in a prediction or a decision—essentially, to know when they should doubt themselves.
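
The article doesn't show how such a confidence measure is actually produced, so here is a minimal sketch of one common stand-in technique, Monte Carlo dropout: run several stochastic forward passes and treat the spread of the outputs as an uncertainty estimate. To be clear, this is not the probabilistic framework work the article alludes to; it is a simpler proxy, and the model shape, `predict_with_uncertainty`, and all parameters are illustrative assumptions.

```python
# Minimal sketch: Monte Carlo dropout as a cheap uncertainty estimate.
# NOT the probabilistic framework extensions the article describes --
# just one well-known stand-in. All names and values are illustrative.
import torch
import torch.nn as nn

# A small regression network with a dropout layer.
model = nn.Sequential(
    nn.Linear(10, 64),
    nn.ReLU(),
    nn.Dropout(p=0.2),
    nn.Linear(64, 1),
)

def predict_with_uncertainty(model, x, n_samples=100):
    """Run multiple stochastic forward passes and summarize the spread."""
    model.train()  # keep dropout active at inference time
    with torch.no_grad():
        samples = torch.stack([model(x) for _ in range(n_samples)])
    mean = samples.mean(dim=0)  # the prediction
    std = samples.std(dim=0)    # rough confidence: larger = less certain
    return mean, std

x = torch.randn(1, 10)  # a dummy input
mean, std = predict_with_uncertainty(model, x)
print(f"prediction: {mean.item():.3f} ± {std.item():.3f}")
```

In practice the spread would need calibration before it could be trusted as a probability, but even a rough number lets a system know when it should doubt itself.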

Deep learning, which involves feeding example data to a large and powerful neural network, has been an enormous success over the past few years, enabling machines to recognize objects in images or transcribe speech almost perfectly. But it requires lots of training data and computing power, and it can be surprisingly brittle.

Somewhat counterintuitively, this capacity for self-doubt offers one fix for that brittleness. The new approach could be useful in critical scenarios involving self-driving cars and other autonomous machines.

“You would like a system that gives you a measure of how certain it is,” says Dustin Tran, who is working on this problem at Google. “If a self-driving car doesn’t know its level of uncertainty, it can make a fatal error, and that can be catastrophic.”
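
Tran's point suggests the practical pattern: act on a prediction only when its reported uncertainty is low, and fall back to safe behavior otherwise. A hedged sketch of that gate follows; the threshold and return values are made up for illustration and come from nothing in the article.

```python
# Illustrative decision gate on an uncertainty estimate (e.g. the std
# returned by predict_with_uncertainty above). Threshold is a made-up value.
UNCERTAINTY_THRESHOLD = 0.5

def decide(prediction: float, uncertainty: float) -> str:
    """Use the prediction only when the model is confident enough."""
    if uncertainty < UNCERTAINTY_THRESHOLD:
        return f"act on prediction {prediction:.3f}"
    return "defer: uncertainty too high, fall back to a safe behavior"

print(decide(0.42, 0.10))  # confident -> act
print(decide(0.42, 0.90))  # uncertain -> defer
```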
