Selected article for: "machine learning and propose framework"

Author: Tsang, Michael; Rambhatla, Sirisha; Liu, Yan
Title: How does this interaction affect me? Interpretable attribution for feature interactions
  • Cord-id: szdzw6tc
  • Document date: 2020-06-19
  • ID: szdzw6tc
    Snippet: Machine learning transparency calls for interpretable explanations of how inputs relate to predictions. Feature attribution is a way to analyze the impact of features on predictions. Feature interactions are the contextual dependence between features that jointly impact predictions. There are a number of methods that extract feature interactions in prediction models; however, the methods that assign attributions to interactions are either uninterpretable, model-specific, or non-axiomatic. …
    Document: Machine learning transparency calls for interpretable explanations of how inputs relate to predictions. Feature attribution is a way to analyze the impact of features on predictions. Feature interactions are the contextual dependence between features that jointly impact predictions. There are a number of methods that extract feature interactions in prediction models; however, the methods that assign attributions to interactions are either uninterpretable, model-specific, or non-axiomatic. We propose an interaction attribution and detection framework called Archipelago which addresses these problems and is also scalable in real-world settings. Our experiments on standard annotation labels indicate our approach provides significantly more interpretable explanations than comparable methods, which is important for analyzing the impact of interactions on predictions. We also provide accompanying visualizations of our approach that give new insights into deep neural networks.
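
    Note on interaction attribution (illustrative): the abstract above defines a feature interaction as contextual dependence between features that jointly impact predictions. Below is a minimal sketch of that idea using a generic mixed-difference interaction score, not the paper's own Archipelago/ArchAttribute method; the function names, the zero baseline, and the example model are assumptions for illustration only.

        import numpy as np

        def pairwise_interaction(f, x, baseline, i, j):
            """Generic mixed-difference interaction score for features i and j.

            Compares f with both features set to their input values against f
            with each feature set alone; a near-zero score means i and j
            contribute additively (no interaction). Illustrative only, not the
            paper's ArchAttribute.
            """
            x_ij = baseline.copy(); x_ij[i] = x[i]; x_ij[j] = x[j]  # both features set
            x_i = baseline.copy(); x_i[i] = x[i]                    # only feature i set
            x_j = baseline.copy(); x_j[j] = x[j]                    # only feature j set
            return f(x_ij) - f(x_i) - f(x_j) + f(baseline)

        # Toy model: features 0 and 1 multiply (they interact); feature 2 is additive.
        f = lambda v: v[0] * v[1] + v[2]
        x, base = np.array([2.0, 3.0, 5.0]), np.zeros(3)
        print(pairwise_interaction(f, x, base, 0, 1))  # 6.0 -> strong interaction
        print(pairwise_interaction(f, x, base, 0, 2))  # 0.0 -> no interaction

    A nonzero score flags a joint effect that per-feature attribution alone would miss, which is the gap the abstract says interaction attribution methods address.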

    Search related documents (co-phrase search):
    • ablation study and machine learning: 1, 2
    • accurate prediction model and machine learning: 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11