We present a new reading comprehension dataset, SQuAD, consisting of 100,000+ questions posed by crowdworkers on a set of Wikipedia articles, where the answer to each question is a segment of text from the corresponding reading passage. We analyze the dataset, both manually and automatically, to understand the types of reasoning required to answer the questions, leaning heavily on dependency and constituency trees. We build a strong logistic regression model, which achieves an F1 score of 51.0%, a significant improvement over a simple baseline (20%). However, human performance (86.8%) is much higher, indicating that the dataset presents a good challenge problem for future research.
from cs.AI updates on arXiv.org http://ift.tt/1UYUfeT
via IFTTT
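
The F1 numbers above are span-overlap scores: a prediction is compared to the gold answer at the token level rather than by exact string match. Below is a minimal sketch of that kind of metric in Python; the function name, the simple lowercase-and-split tokenization, and the example strings are illustrative assumptions, not the paper's official evaluation script.

```python
from collections import Counter

def token_f1(prediction: str, gold: str) -> float:
    """Token-overlap F1 between a predicted answer span and a gold span.

    Illustrative sketch only: tokens shared between the two spans are
    counted via multiset intersection, then precision, recall, and F1
    follow in the usual way.
    """
    pred_tokens = prediction.lower().split()
    gold_tokens = gold.lower().split()
    common = Counter(pred_tokens) & Counter(gold_tokens)
    num_same = sum(common.values())
    if num_same == 0:
        return 0.0
    precision = num_same / len(pred_tokens)
    recall = num_same / len(gold_tokens)
    return 2 * precision * recall / (precision + recall)

# Partial overlap still earns partial credit (hypothetical example).
print(token_f1("the French Revolution", "French Revolution"))  # 0.8
```

This partial-credit behavior is why F1 is reported alongside (and sits above) exact-match accuracy in span-extraction evaluations.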