Author: Ozyegen, Ozan; Ilic, Igor; Cevik, Mucahit
Title: Evaluation of interpretability methods for multivariate time series forecasting
Cord-id: of1b5x7a
Document date: 2021-07-27
Snippet: Being able to interpret a model’s predictions is a crucial task in many machine learning applications. Specifically, local interpretability is important in determining why a model makes particular predictions. Despite the recent focus on interpretable Artificial Intelligence (AI), there have been few studies on local interpretability methods for time series forecasting, while existing approaches mainly focus on time series classification tasks. In this study, we propose two novel evaluation metrics for time series forecasting: Area Over the Perturbation Curve for Regression and Ablation Percentage Threshold.
Document: Being able to interpret a model’s predictions is a crucial task in many machine learning applications. Specifically, local interpretability is important in determining why a model makes particular predictions. Despite the recent focus on interpretable Artificial Intelligence (AI), there have been few studies on local interpretability methods for time series forecasting, while existing approaches mainly focus on time series classification tasks. In this study, we propose two novel evaluation metrics for time series forecasting: Area Over the Perturbation Curve for Regression and Ablation Percentage Threshold. These two metrics measure the local fidelity of local explanation methods. We build on this theoretical foundation to collect experimental results on four popular datasets. Both metrics enable a comprehensive comparison of numerous local explanation methods and an intuitive approach to interpreting model predictions. Lastly, we provide heuristic reasoning for this analysis through an extensive numerical study.
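The abstract describes perturbation-based fidelity metrics: features ranked most important by an explanation are ablated first, and a faithful explanation should degrade the regressor's output quickly. The sketch below illustrates that general idea only; the function name `aopcr`, the zero-replacement ablation, and the toy linear model are illustrative assumptions, not the paper's exact formulation.

```python
import numpy as np

def aopcr(model, x, relevance, steps=5, replace_value=0.0):
    # Illustrative AOPC-for-regression-style score (assumption, not the
    # paper's exact definition): ablate features in decreasing order of
    # relevance and average the resulting change in the model's output.
    baseline = model(x)
    order = np.argsort(relevance)[::-1]      # most relevant feature first
    deltas = []
    x_pert = x.copy()
    for k in range(min(steps, x.size)):
        x_pert[order[k]] = replace_value     # ablate the next feature
        deltas.append(abs(baseline - model(x_pert)))
    return float(np.mean(deltas))

# Toy regressor that depends almost entirely on feature 0.
weights = np.array([5.0, 0.1, 0.1, 0.1])
model = lambda x: float(weights @ x)
x = np.ones(4)

# An explanation that correctly ranks feature 0 highest scores better
# (larger output change per ablation) than one that ranks it lowest.
good = aopcr(model, x, relevance=np.array([0.9, 0.1, 0.1, 0.1]))
bad = aopcr(model, x, relevance=np.array([0.1, 0.1, 0.1, 0.9]))
assert good > bad
```

Under this reading, a higher score means the explanation identified the features the model actually relies on, which is the "local fidelity" the two proposed metrics are said to measure.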