Tuesday, November 22, 2016

Interpreting Finite Automata for Sequential Data. (arXiv:1611.07100v1 [stat.ML])

Automaton models are often seen as interpretable. Interpretability itself, however, is not well defined: it remains unclear what it means without first explicitly specifying objectives or desired attributes. In this paper, we identify the key properties used to interpret automata and propose a modification of a state-merging approach to learn variants of finite state automata. We apply the approach to problems beyond typical grammar-inference tasks. Additionally, we present several use cases for prediction, classification, and clustering of sequential data, in both supervised and unsupervised settings, to show that the identified key properties apply in a wide range of contexts.
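For readers unfamiliar with the technique, the sketch below illustrates the generic state-merging idea the abstract builds on: construct a prefix tree acceptor from the example sequences, then greedily merge states whose classes can be joined without conflict. This is only a minimal Python illustration, not the modified algorithm proposed in the paper; the function names, the acceptance-agreement compatibility test, and the demo data are assumptions made for the example.

```python
from collections import defaultdict


def build_prefix_tree(sequences):
    """Prefix tree acceptor: trans[state][symbol] -> state, plus the set of
    states at which some training sequence ends."""
    trans, accepting, fresh = defaultdict(dict), set(), 1
    for seq in sequences:
        state = 0
        for sym in seq:
            if sym not in trans[state]:
                trans[state][sym] = fresh
                fresh += 1
            state = trans[state][sym]
        accepting.add(state)
    return dict(trans), accepting, fresh


def merge_states(trans, accepting, n_states):
    """Greedily merge states of the prefix tree. Each merge cascades the extra
    merges needed to keep the quotient automaton deterministic, and the whole
    attempt is discarded if it would fuse an accepting class with a
    non-accepting one -- a crude stand-in for the evidence-based tests
    (negative examples, frequency statistics) used in practice."""
    parent = list(range(n_states))                    # union-find over states
    ctrans = {s: dict(m) for s, m in trans.items()}   # class-level transitions
    acc = set(accepting)                              # accepting class roots

    def find(p, s):
        while p[s] != s:
            s = p[s]
        return s

    def attempt(a, b):
        p = parent[:]
        ct = {r: dict(m) for r, m in ctrans.items()}
        ac = set(acc)
        stack = [(a, b)]
        while stack:
            x, y = stack.pop()
            ra, rb = find(p, x), find(p, y)
            if ra == rb:
                continue
            if (ra in ac) != (rb in ac):
                return False                          # incompatible: give up
            p[rb] = ra
            if rb in ac:
                ac.add(ra)
            for sym, tgt in ct.pop(rb, {}).items():
                if sym in ct.setdefault(ra, {}):
                    stack.append((ct[ra][sym], tgt))  # keep it deterministic
                else:
                    ct[ra][sym] = tgt
        parent[:] = p                                 # commit the trial
        ctrans.clear()
        ctrans.update(ct)
        acc.clear()
        acc.update(ac)
        return True

    for b in range(1, n_states):
        if find(parent, b) != b:
            continue                  # already absorbed by an earlier merge
        for a in range(b):
            if find(parent, a) == a and attempt(a, b):
                break
    return parent, ctrans, acc


if __name__ == "__main__":
    data = [list("ab"), list("abab"), list("ababab")]
    parent, class_trans, accepting = merge_states(*build_prefix_tree(data))
    print(parent, class_trans, accepting)
```

With positive examples only and such a weak compatibility test, the learner over-generalizes quickly; practical state-merging algorithms therefore rely on stronger evidence, such as negative examples or frequency statistics, when deciding which states to merge.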



from cs.AI updates on arXiv.org http://ift.tt/2f4KEuA
via IFTTT
