Author: Dao, Giang; Lee, Minwoo
Title: Demystifying Deep Neural Networks Through Interpretation: A Survey
ID: 8it8myds
Document date: 2020-12-13
Snippet: Modern deep learning algorithms tend to optimize an objective metric, such as minimizing a cross-entropy loss on a training dataset, in order to learn. The problem is that a single metric is an incomplete description of the real-world task and cannot explain why the algorithm learns. When an error occurs, the lack of interpretability makes it hard to understand and fix the mistake. Recently, work has been done to tackle the problem of interpretability and to provide insights …
Document: Modern deep learning algorithms tend to optimize an objective metric, such as minimizing a cross-entropy loss on a training dataset, in order to learn. The problem is that a single metric is an incomplete description of the real-world task, and it cannot explain why the algorithm learns. When an error occurs, the lack of interpretability makes it hard to understand and fix the mistake. Recently, work has been done to tackle the problem of interpretability and to provide insights into a neural network's behavior and thought process. Such work is important for identifying potential bias and for ensuring algorithmic fairness as well as expected performance.
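For reference, the cross-entropy objective mentioned above is commonly written in the standard form below (a textbook definition, not a formula quoted from this paper):

\mathcal{L}_{\mathrm{CE}} = -\frac{1}{N}\sum_{i=1}^{N}\sum_{c=1}^{C} y_{i,c}\,\log \hat{y}_{i,c}

where y_{i,c} is the ground-truth label indicator and \hat{y}_{i,c} is the model's predicted probability for example i and class c. Minimizing this single scalar over the training dataset is the "single metric" that the abstract argues is an incomplete description of the real-world task.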