Quoref

AllenNLP • 2019
Quoref is a QA dataset that tests the coreferential reasoning capability of reading comprehension systems. In this span-selection benchmark of 24K questions over 4.7K paragraphs from Wikipedia, a system must resolve hard coreferences before selecting the appropriate span(s) in the paragraphs to answer the questions.
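As a span-selection benchmark, each question is answered by one or more spans copied from its paragraph, with answers recorded as text plus a character start offset. A minimal sketch of iterating such a file, assuming a SQuAD-style JSON layout (check the downloaded file for the exact schema):

```python
import json

def iter_examples(path):
    """Yield (question, context, answers) triples from a SQuAD-style JSON file.

    Each answer is a (text, answer_start) pair, where answer_start is the
    character offset of the answer span within the context paragraph.
    """
    with open(path, encoding="utf-8") as f:
        dataset = json.load(f)
    for article in dataset["data"]:
        for paragraph in article["paragraphs"]:
            context = paragraph["context"]
            for qa in paragraph["qas"]:
                answers = [(a["text"], a["answer_start"]) for a in qa["answers"]]
                yield qa["question"], context, answers
```

The field names here (`data`, `paragraphs`, `qas`, `answer_start`) follow the common SQuAD convention and should be verified against the actual release.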
License: CC BY

Current Version: 0.2

Clicking Download provides a link to the training and development sets of the latest version of the dataset.

Changes from v0.1

We discovered that the start indices for a small number of answers (less than 2%) in the training, development, and test sets of v0.1 of the dataset were slightly off due to Unicode processing issues. We fixed those issues in v0.2. Note that the leaderboard results should not be affected by this fix, since the metrics are computed over strings, not spans.
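A hypothetical illustration (not the actual Quoref pipeline) of how a Unicode processing difference can shift character start offsets while leaving string-based Exact Match untouched: normalizing text between composed (NFC) and decomposed (NFD) forms changes the number of code points before an answer, so an offset computed on one form can be stale on the other.

```python
import unicodedata

nfc = "Ana Marasovi\u0107 won."            # 'ć' stored as one code point (NFC)
nfd = unicodedata.normalize("NFD", nfc)    # 'c' + combining acute accent (NFD)

answer = "won"
start_nfc = nfc.index(answer)  # offset in the NFC string
start_nfd = nfd.index(answer)  # one larger: decomposition added a code point

# Slicing the NFD text with the stale NFC offset misses the answer span...
span_with_stale_offset = nfd[start_nfc:start_nfc + len(answer)]
# ...but comparing answer *strings*, as the leaderboard metrics do, still matches.
print(start_nfc, start_nfd, span_with_stale_offset == answer)
```

This is why fixing the offsets changes the released spans but not the reported scores.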

If you need v0.1 of the dataset for any reason, you can get it here.

Leaderboard

Top Public Submissions
Rank | Details | Created | Exact Match
1 | anonymous (anonymous) | 11/25/2022 | 83%
2 | SpanQualifier (CorefRoBERTa Large), Nanjing University (Zixian Huang, Jiaying Zhou, Chenxu Niu, Gong Cheng) | 10/12/2022 | 81%
3 | SpanQualifier (Roberta Large), Nanjing University (Zixian Huang, Jiaying Zhou, Chenxu Niu, Gong Cheng) | 10/12/2022 | 81%
4 | TASE - CorefRoBERTa, Tsinghua University (Deming Ye, Yankai Lin, Jiaju Du, Zhenghao Liu, Maosong Sun, Zhiyuan Liu) | 5/15/2020 | 81%
5 | TASE-CoNLL-joint-qgen, Mingzhu Wu, Nafise Sadat Moosavi, Dan Roth, Iryna Gurevych | 12/14/2020 | 80%

Authors

Pradeep Dasigi, Nelson F. Liu, Ana Marasović, Noah A. Smith, Matt Gardner