Thursday, November 17, 2016

Nothing Else Matters: Model-Agnostic Explanations By Identifying Prediction Invariance. (arXiv:1611.05817v1 [stat.ML])

At the core of interpretable machine learning is the question of whether humans are able to make accurate predictions about a model's behavior. Implicit in this question are three properties of the interpretable output: coverage, precision, and effort. Coverage refers to how often humans think they can predict the model's behavior, precision to how accurate those predictions are, and effort to either the up-front work of interpreting the model or the work of making predictions about its behavior.
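
To make coverage and precision concrete, here is a minimal, hypothetical sketch (not taken from the paper): coverage is estimated as the fraction of instances a candidate rule applies to, and precision as the fraction of those covered instances on which the rule's prediction agrees with the model's. The data, model, and rule below are toy assumptions.

```python
# Hypothetical illustration of empirical coverage and precision for a
# rule-based explanation; the data, model, and rule are toy assumptions.
import numpy as np

rng = np.random.default_rng(0)

# Toy tabular data: two features; the "black-box model" predicts 1 when x0 > 0.
X = rng.normal(size=(1000, 2))
model_pred = (X[:, 0] > 0).astype(int)

# Candidate explanation rule: "predict 1 whenever x0 > -0.2".
applies = X[:, 0] > -0.2                        # instances the rule covers
coverage = applies.mean()                       # how often the rule applies
precision = (model_pred[applies] == 1).mean()   # how often it is correct there

print(f"coverage  = {coverage:.2f}")
print(f"precision = {precision:.2f}")
```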

In this work, we propose anchor-LIME (aLIME), a model-agnostic technique that produces high-precision rule-based explanations for which the coverage boundaries are very clear. We compare aLIME to linear LIME with simulated experiments, and demonstrate the flexibility of aLIME with qualitative examples from a variety of domains and tasks.
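
The abstract does not spell out how aLIME finds its rules, so the sketch below is only a plausible illustration of the general idea of a high-precision rule with explicit coverage: starting from one instance, greedily fix feature conditions until the rule's precision on perturbed samples passes a threshold, then report the rule's coverage. The model, perturbation distribution, threshold, and helper names are all assumptions, not the paper's method.

```python
# Hypothetical greedy search for a high-precision rule ("anchor") around
# one instance.  Everything here (model, Gaussian perturbations, the 0.95
# precision threshold) is an illustrative assumption, not aLIME itself.
import numpy as np

rng = np.random.default_rng(1)

def model(X):
    # Stand-in black-box classifier: predicts 1 when both features are positive.
    return ((X[:, 0] > 0) & (X[:, 1] > 0)).astype(int)

def explain(instance, n_samples=5000, target_precision=0.95):
    label = model(instance[None, :])[0]
    # Perturbation distribution around the instance (assumption: Gaussian).
    samples = instance + rng.normal(scale=1.0, size=(n_samples, instance.size))
    anchor = []                       # features whose sign is fixed to the instance's
    candidates = list(range(instance.size))
    while candidates:
        best, best_prec = None, -1.0
        for j in candidates:
            trial = anchor + [j]
            # Rule: a perturbed sample matches the instance's sign on all anchored features.
            mask = np.all(np.sign(samples[:, trial]) == np.sign(instance[trial]), axis=1)
            prec = (model(samples[mask]) == label).mean() if mask.any() else 0.0
            if prec > best_prec:
                best, best_prec, best_mask = j, prec, mask
        anchor.append(best)
        candidates.remove(best)
        if best_prec >= target_precision:
            break
    coverage = best_mask.mean()       # fraction of perturbed samples the rule covers
    return anchor, best_prec, coverage

anchor, precision, coverage = explain(np.array([1.0, 2.0]))
print("anchored features:", anchor,
      "precision:", round(precision, 2),
      "coverage:", round(coverage, 2))
```

Note the trade-off this sketch makes explicit: each added condition tends to raise precision but shrink coverage, which is exactly the coverage boundary the abstract says should be very clear to the user.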



from cs.AI updates on arXiv.org http://ift.tt/2eLJsME
via IFTTT