Papers

Viewing 781-790 of 1022 papers
  • From Recognition to Cognition: Visual Commonsense Reasoning

    Rowan Zellers, Yonatan Bisk, Ali Farhadi, Yejin Choi (CVPR 2019): Visual understanding goes well beyond object recognition. With one glance at an image, we can effortlessly imagine the world beyond the pixels: for instance, we can infer people’s actions, goals, and mental states. While this task is easy for humans, it is…
  • Learning to Learn How to Learn: Self-Adaptive Visual Navigation Using Meta-Learning

    Mitchell Wortsman, Kiana Ehsani, Mohammad Rastegari, Ali Farhadi, Roozbeh Mottaghi (CVPR 2019): Learning is an inherently continuous phenomenon. When humans learn a new task there is no explicit distinction between training and inference. After we learn a task, we keep learning about it while performing the task. What we learn and how we learn it varies…
  • OK-VQA: A Visual Question Answering Benchmark Requiring External Knowledge

    Kenneth Marino, Mohammad Rastegari, Ali Farhadi, Roozbeh Mottaghi (CVPR 2019): Visual Question Answering (VQA) in its ideal form lets us study reasoning in the joint space of vision and language and serves as a proxy for the AI task of scene understanding. However, most VQA benchmarks to date are focused on questions such as simple…
  • Two Body Problem: Collaborative Visual Task Completion

    Unnat Jain, Luca Weihs, Eric Kolve, Mohammad Rastegari, Svetlana Lazebnik, Ali Farhadi, Alexander Schwing, Aniruddha Kembhavi (CVPR 2019): Collaboration is a necessary skill to perform tasks that are beyond one agent's capabilities. Addressed extensively in both conventional and modern AI, multi-agent collaboration has often been studied in the context of simple grid worlds. We argue that there…
  • Video Relationship Reasoning using Gated Spatio-Temporal Energy Graph

    Yao-Hung Tsai, Santosh Divvala, Louis-Philippe Morency, Ruslan Salakhutdinov, Ali Farhadi (CVPR 2019): Visual relationship reasoning is a crucial yet challenging task for understanding rich interactions across visual concepts. For example, a relationship {man, open, door} involves a complex relation {open} between concrete entities {man, door}. While…
  • Barack's Wife Hillary: Using Knowledge Graphs for Fact-Aware Language Modeling

    Robert L. Logan IV, Nelson F. Liu, Matthew E. Peters, Matt Gardner, Sameer Singh (ACL 2019): Modeling human language requires the ability to not only generate fluent text but also encode factual knowledge. However, traditional language models are only capable of remembering facts seen at training time, and often have difficulty recalling them. To…
  • Sentence Mover's Similarity: Automatic Evaluation for Multi-Sentence Texts

    Elizabeth Clark, Asli Çelikyilmaz, Noah A. Smith (ACL 2019): For evaluating machine-generated texts, automatic methods hold the promise of avoiding collection of human judgments, which can be expensive and time-consuming. The most common automatic metrics, like BLEU and ROUGE, depend on exact word matching, an…
  • Is Attention Interpretable?

    Sofia Serrano, Noah A. Smith (ACL 2019): Attention mechanisms have recently boosted performance on a range of NLP tasks. Because attention layers explicitly weight input components' representations, it is also often assumed that attention can be used to identify information that models found…
  • Conversing by Reading: Contentful Neural Conversation with On-demand Machine Reading

    Lianhui Qin, Michel Galley, Chris Brockett, Xiaodong Liu, Xiang Gao, W. Dolan, Yejin Choi, Jianfeng Gao (ACL 2019): Although neural conversational models are effective in learning how to produce fluent responses, their primary challenge lies in knowing what to say to make the conversation contentful and non-vacuous. We present a new end-to-end approach to contentful neural…
  • SemEval-2019 Task 10: Math Question Answering

    Mark Hopkins, Ronan Le Bras, Cristian Petrescu-Prahova, Gabriel Stanovsky, Hannaneh Hajishirzi, Rik Koncel-Kedziorski (SemEval 2019): We report on the SemEval 2019 task on math question answering. We provided a question set derived from Math SAT practice exams, including 2778 training questions and 1082 test questions. For a significant subset of these questions, we also provided SMT-LIB…