Thursday, March 9, 2017

A Structured Self-attentive Sentence Embedding. (arXiv:1703.03130v1 [cs.CL])

This paper proposes a new model for extracting an interpretable sentence embedding by introducing self-attention. Instead of using a vector, we use a 2-D matrix to represent the embedding, with each row of the matrix attending to a different part of the sentence. We also propose a self-attention mechanism and a special regularization term for the model. As a side effect, the embedding comes with an easy way of visualizing which specific parts of the sentence are encoded into the embedding. We evaluate our model on three different tasks: author profiling, sentiment classification, and textual entailment. Results show that our model yields a significant performance gain over other sentence embedding methods on all three tasks.
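The abstract does not spell out the equations, so the following is only a minimal NumPy sketch of one way such a matrix-valued self-attention could look: an r-row annotation matrix is computed from token-level hidden states, each row of the resulting embedding matrix summarizes a different part of the sentence, and a Frobenius-norm penalty discourages the rows from attending to the same tokens. All dimensions, weight names, and the random inputs are illustrative assumptions, not values from the paper.

```python
import numpy as np

# Hypothetical sizes (not given in the abstract): n tokens, hidden size 2u,
# d_a attention units, r attention rows ("hops").
n, two_u, d_a, r = 6, 8, 10, 3
rng = np.random.default_rng(0)

H = rng.standard_normal((n, two_u))       # token-level hidden states, one row per word
W_s1 = rng.standard_normal((d_a, two_u))  # first attention projection (assumed name)
w_s2 = rng.standard_normal((r, d_a))      # one weight vector per attention row (assumed name)

def softmax(x, axis=-1):
    x = x - x.max(axis=axis, keepdims=True)
    e = np.exp(x)
    return e / e.sum(axis=axis, keepdims=True)

# r x n annotation matrix A: each row is a distribution over the sentence's tokens.
A = softmax(w_s2 @ np.tanh(W_s1 @ H.T), axis=1)

# r x 2u sentence embedding matrix M: each row attends to a different part of the sentence.
M = A @ H

# Redundancy penalty on A: pushes the r attention distributions apart,
# which is the role the abstract's "special regularization term" plays.
P = np.linalg.norm(A @ A.T - np.eye(r), ord="fro") ** 2

print(M.shape, round(P, 3))
```

Because A is an explicit distribution over tokens for every row, it can be plotted as a heat map over the sentence, which is the interpretability the abstract refers to.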



from cs.AI updates on arXiv.org http://ift.tt/2mGQip9
via IFTTT