Deep learning systems to explain their decisions

31 October 2016

Researchers from MIT’s Computer Science and Artificial Intelligence Laboratory have found a new way to train neural networks so that they not only provide predictions and classifications but also rationales for their decisions.

“In real-world applications, sometimes people want to know why the model makes the predictions it does,” said graduate student Tao Lei. “One major reason that doctors don’t trust machine-learning methods is that there’s no evidence.”

“You may not want to just verify that the model is making the prediction in the right way; you might also want to exert some influence in terms of the types of predictions that it should make,” commented Tommi Jaakkola, an MIT professor of electrical engineering and computer science.

The researchers address neural nets trained on textual data. To enable…
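To give a rough sense of the idea, the sketch below is a hypothetical illustration (not the MIT group’s code or exact method): a text classifier split into a "selector" that soft-gates individual words and a "predictor" that only sees the gated words. The kept words can then be read off as a rationale for the prediction. The researchers’ actual approach samples a discrete rationale; this sketch uses a differentiable soft gate with a sparsity penalty purely for simplicity, and all names (`RationaleClassifier`, `selector`, `predictor`) are invented for illustration.

```python
# Hypothetical sketch of a "predict plus rationale" text classifier.
import torch
import torch.nn as nn

class RationaleClassifier(nn.Module):
    def __init__(self, vocab_size, embed_dim=64, num_classes=2):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, embed_dim)
        # Selector: scores each word; sigmoid turns the score into a soft keep/drop gate.
        self.selector = nn.Linear(embed_dim, 1)
        # Predictor: classifies from the gated (rationale-only) representation.
        self.predictor = nn.Linear(embed_dim, num_classes)

    def forward(self, token_ids):
        emb = self.embed(token_ids)                 # (batch, seq, embed_dim)
        gates = torch.sigmoid(self.selector(emb))   # (batch, seq, 1): soft rationale mask
        pooled = (gates * emb).mean(dim=1)          # predictor only sees gated words
        logits = self.predictor(pooled)
        return logits, gates.squeeze(-1)

# Training objective: prediction loss plus a sparsity penalty on the gates,
# so the model is rewarded for justifying its decision with only a few words.
model = RationaleClassifier(vocab_size=10_000)
tokens = torch.randint(0, 10_000, (8, 20))          # toy batch: 8 texts, 20 tokens each
labels = torch.randint(0, 2, (8,))
logits, gates = model(tokens)
loss = nn.functional.cross_entropy(logits, labels) + 0.01 * gates.mean()
loss.backward()
```

After training, inspecting which words receive high gate values on a given input provides the kind of evidence the researchers describe: a short excerpt of the text that supports the model’s prediction.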


Link to Full Article: Deep learning systems to explain their decisions
