Allen AI Predoctoral Young Investigators

About the Program

Allen AI Predoctoral Young Investigators is a program that offers predoctoral candidates the opportunity to prepare for graduate-level research through partnership with strong mentors, participation in cutting-edge research, and co-authorship of papers at top venues. This program aims to position candidates for successful entry into a top graduate program. For potential candidates who are already enrolled in graduate studies, this program offers the opportunity to work with researchers at AI2 on collaborative projects that benefit both AI2’s initiatives and their thesis work.

Program Details

  • Duration: 1-3 years
  • Start Date: Flexible (rolling application with no deadline)
  • Candidates: PYI Program candidates have graduated with a bachelor’s or master’s degree in a relevant field and are preparing to enter a PhD program in the next 1-2 years. We also welcome applications from candidates already enrolled in a PhD program who wish to align their work with AI2’s endeavors, and we are happy to work on a case-by-case basis with regard to work arrangements if there is a mutual fit.

Benefits

  • Gain strong research experience through direct collaboration with AI2 researchers
  • Co-author papers with AI2 researchers and get published in top venues
  • Learn about and leverage AI2’s data, engineering infrastructure, and research tools
  • Gain industry experience in a supportive, agile, mission-driven environment
  • No grant writing, teaching, or administrative responsibilities required
  • Assistance with graduate school vetting and application
  • Generous travel budget
  • Financial and legal support for visa acquisition
See our current Predoctoral Young Investigators on our Team Page!

PYI Program Alumni

  • Nishant Subramani

    Nishant Subramani is a PhD student at CMU in the Language Technologies Institute (LTI), advised by Mona Diab. He is interested in understanding and manipulating the internals of language models to efficiently and responsibly control models’ generations. During his time at AI2, Nishant contributed to papers on a variety of topics, including Extracting Latent Steering Vectors from Pretrained Language Models, Detecting Personal Information in Training Corpora: an Analysis, Don’t Say What You Don’t Know: Improving the Consistency of Abstractive Summarization by Constraining Beam Search, Quality at a Glance: An Audit of Web-Crawled Multilingual Datasets, Data Governance in the Age of Large-Scale Data-Driven Language Technology, GEMv2: Multilingual NLG Benchmarking in a Single Line of Code, Natural Adversarial Objects, and BLOOM: A 176B-Parameter Open-Access Multilingual Language Model, and also contributed to both the data and evaluation teams for OLMo. Before AI2, Nishant worked at several startups in industry and holds degrees in statistics and computer science from Northwestern and NYU.

  • Hamish Ivison

    Hamish Ivison is currently a PhD student at the University of Washington, advised by Hannaneh Hajishirzi. He is broadly interested in methods for making language models more adaptable for end users and in better understanding the impact of data on existing systems. Previously, he worked on meta-learning and instruction tuning as a PYI on the AllenNLP team, after graduating from the University of Sydney, where he studied Computer Science, Linguistics, and Ancient Greek.

  • Ben Newman

    Benjamin Newman is a PhD student at the University of Washington, advised by Yejin Choi. He is broadly interested in understanding and building reliable language understanding systems. At AI2, he was a PYI on the Semantic Scholar team, where he worked on Scientific Decontextualization and Document Processing. Previously, he graduated from Stanford with a BS/MS in Computer Science.

  • Matthew Finlayson

    Matthew Finlayson is a PhD student at the University of Southern California, advised by Swabha Swayamdipta and Xiang Ren. He graduated from Harvard with a BA in computer science and linguistics before joining AI2 as a PYI on Aristo, where he studied in-context learning, mathematical reasoning, and sampling algorithms for language models.

  • Kunal Pratap Singh

    Kunal is now a Ph.D. student at EPFL in Switzerland, working with Prof. Amir Zamir. At AI2, his work revolved around problems in Embodied AI. Specifically, he worked on projects in Human-AI collaboration and representation learning for embodied tasks with Luca Weihs and Ani Kembhavi. Kunal is originally from India and completed his undergraduate degree at the Indian Institute of Technology, Roorkee.

  • Apoorv Khandelwal

    Apoorv graduated from Cornell University in 2020 with a BS in Computer Science. He then spent two years as a PYI on AI2’s PRIOR team, where he worked on projects such as investigating the effects of visual encoders for Embodied AI tasks and a knowledge-based visual question answering dataset. Apoorv is now a PhD student in Computer Science at Brown University, where he is advised by Chen Sun and Ellie Pavlick.

  • Amita Kamath

    Amita graduated from Stanford University in 2020 with an MS in Computer Science. She was a PYI on the PRIOR team from October 2020 to October 2022, during which time she worked on creating General Purpose Vision Systems, and on using Web search data to greatly expand their capabilities. She is broadly interested in improving the robustness and interpretability of vision-language and NLP systems. Starting Fall 2022, Amita will be a PhD student in Computer Science at UCLA.

  • Jon Saad-Falcon

    Jon Saad-Falcon graduated from Georgia Tech in 2021 with a B.S./M.S. in Computer Science. He worked as a PYI on the Semantic Scholar team from August 2021 to August 2022. During his time at AI2, he worked on developing techniques for recycling sequence representations in language models. Starting in September 2022, Jon will be a research assistant in the NLP group at Stanford University.

  • Zhaofeng Wu

    Zhaofeng graduated from the University of Washington in 2021 with an M.S. in computer science, and in 2019 with a B.S. in computer science and a B.A. in linguistics. During his one year as a PYI at AI2, he worked on projects including Transparency Helps Reveal When Language Models Learn Meaning, Continued Pretraining for Better Zero- and Few-Shot Promptability, ABC: Attention with Bounded-memory Control, and Learning with Latent Structures in Natural Language Processing: A Survey. Zhaofeng is joining the MIT NLP group in August 2022.

  • Sonia Murthy

    Sonia Murthy is a PhD student at Harvard University, advised by Tomer Ullman and Elena Glassman. She is interested in combining methods from cognitive science, natural language processing, and human-AI interaction to advance our models of human-machine communication and design systems that better align with humans’ goals and intentions. As a PYI on the Semantic Scholar team, Sonia worked on a multi-document approach to generating diverse descriptions of scientific concepts with the goal of providing readers with personalized concept descriptions. Prior to joining AI2, she graduated from Princeton University as an Independent Concentrator in Intelligent Systems, where she developed cognitive models of human word-color associations and the mental representations that allow people to flexibly communicate.

  • Alexis Ross

    Alexis graduated from Harvard University in 2020 with a BA in Computer Science and Philosophy. She worked as a PYI on the AllenNLP team from July 2020 to July 2022. During her time at AI2, she worked on topics such as explaining NLP systems, leveraging controlled generation for human-in-the-loop data collection, and removing dataset artifacts. Beginning in August 2022, Alexis will be a PhD student in Computer Science at MIT.

  • Jenny Liang

    Jenny Liang is a first-year PhD student at Carnegie Mellon University. Inspired by her time at AI2, she is currently researching ways for programmers to work with code-generating AI such as GitHub Copilot to produce customized code.

  • Will Merrill

    Will graduated from Yale in 2019 with a B.S. in computer science and linguistics. He is broadly interested in the linguistic abilities, robustness, and interpretability of neural networks for NLP, often through the lens of theoretical tools like formal languages. At AI2, Will contributed to papers including A Formal Hierarchy of RNN Architectures, On the Linguistic Capacity of Real-Time Counter Automata, Effects of Parameter Norm Growth During Transformer Training: Inductive Bias from Gradient Descent, Provable Limitations of Acquiring Meaning from Ungrounded Form: What Will Future Language Models Understand?, Competency Problems: On Finding and Removing Artifacts in Language Data, and CORD-19: The COVID-19 Open Research Dataset. After leaving AI2, Will is joining New York University as a PhD student advised by Tal Linzen.

  • Sanjay Subramanian

    Sanjay earned a B.S.E. in computer and information science and a B.S. in economics from the University of Pennsylvania in 2019, and was a predoctoral young investigator on AI2’s AllenNLP team from June 2019 to July 2021. His research interests are in NLP and its intersection with computer vision. Through his work at AI2, he co-authored papers on linguistic analysis of pre-trained vision and language models, the faithfulness of neural module networks, a dataset of images and associated text from medical research papers, contrast sets for NLP evaluation, and a latent tree model for visual question answering. Beginning in August 2021, Sanjay will be a PhD student in computer science at UC Berkeley.

  • Haokun Liu

    Haokun is an NYU graduate. He worked as a PYI on the Semantic Scholar team from January 2021 to July 2021. Haokun has authored around ten papers on natural language understanding evaluation, transfer learning, and probing the linguistic knowledge of pretrained models. He will start his PhD at UNC-Chapel Hill in fall 2021 and is interested in transfer learning and empirical analysis in NLP.

  • Suchin Gururangan

    Suchin graduated from the University of Chicago in 2014 with a bachelor’s in Mathematics, and earned a Master’s in Computational Linguistics at the University of Washington in 2018 after a few years in industry as a data scientist. He is now a PhD student at the University of Washington, advised by Noah Smith and Luke Zettlemoyer. At AI2, Suchin had the opportunity to do research in a number of different areas of NLP, including domain adaptation, safety, efficiency, and reproducibility.

  • Vivek Ramanujan

    Vivek graduated from Brown University in 2018 with a BSc in Math and Computer Science, and joined the University of Washington Computer Science PhD program in fall 2020. His publications include What’s Hidden in a Randomly Weighted Neural Network? (CVPR 2020), Soft Threshold Weight Reparameterization for Learnable Sparsity (ICML 2020), Supermasks in Superposition (ICML 2020 Workshop spotlight; in submission to NeurIPS 2020), and Parameter Norm Growth during Training Transformers (in submission to NeurIPS 2020; no preprint available).

  • Sarah Pratt

    Sarah Pratt completed her undergraduate studies at Brown University in Applied Mathematics and Computer Science before joining the PRIOR team at AI2. While at AI2, she worked on grounded situation recognition, which was published at ECCV 2020. She is now a PhD student at the University of Washington where she works with Prof. Ali Farhadi on computer vision and deep learning.

  • Chaitanya Malaviya

    Chaitanya graduated with a master’s degree from Carnegie Mellon University and joined the Mosaic team at AI2. At AI2, he worked on commonsense knowledge graph construction, generative data augmentation, and the curation of a dataset for abductive reasoning. He will join the University of Pennsylvania as a PhD student in fall 2020.

  • Isabel Cachola

    Isabel is currently attending Johns Hopkins University as a PhD student in Computer Science, and completed her undergraduate degree at the University of Texas at Austin. Her research interests include Natural Language Processing and Computational Social Science. Her published papers with AI2 include TLDR: Extreme Summarization of Scientific Documents, Citation Text Generation, and Improving the Accessibility of Scientific Documents: Current State, User Needs, and a System Solution to Enhance Scientific PDF Accessibility for Blind and Low Vision Users.

  • Jaemin Cho

    Jaemin graduated from Seoul National University in 2018 with a BS in Industrial Engineering. He will join a PhD program in Computer Science at UNC-Chapel Hill in the fall of 2020. While at AI2, he published Mixture Content Selection for Diverse Sequence Generation at EMNLP 2019 and X-LXMERT: Paint, Caption and Answer Questions with Multi-Modal Transformers at EMNLP 2020.

  • Mitchell Wortsman

    Mitchell is now a PhD student at the University of Washington working on neural networks with Ali Farhadi. At AI2, Mitchell explored self-adaptive navigation with Kiana Ehsani and Roozbeh Mottaghi, and learning neural network wirings with Mohammad Rastegari. Mitchell is from Toronto and completed his undergraduate studies at Brown University in Applied Mathematics and Computer Science before joining AI2.

  • Kevin Lin

    Kevin Lin joined the AllenNLP team at AI2 after graduating with a BS in Computer Science from Columbia University. At AI2 he worked on structured models for code generation and reading comprehension, as well as benchmarks for situated and qualitative reasoning. He started his PhD at UC Berkeley in 2019.

  • Noah Siegel

    Noah Siegel was a Research Engineer at AI2 after receiving a BS in Computer Science and Economics from the University of Washington in 2015. He worked on computer vision and machine learning, specifically on applications to information extraction. Noah is currently a research engineer at DeepMind. He is first author on the papers FigureSeer: Parsing Result-Figures in Research Papers (ECCV 2016) and Extracting Scientific Figures with Distantly Supervised Neural Networks (JCDL 2018).

  • Roie Levin

    Roie Levin graduated from Brown University in 2015 with a BS in Applied Math/Computer Science and Math. He worked as a Research Engineer on the Euclid project at AI2. Roie accepted an offer of enrollment from CMU beginning in 2017. His publications with AI2 include FigureSeer: Parsing Result-Figures in Research Papers (ECCV 2016) and Beyond Sentential Semantic Parsing: Tackling the Math SAT with a Cascade of Tree Transducers (EMNLP 2017).

  • Chris Clark

    Chris Clark received a BS in Computer Science and a BA in Philosophy from the University of Washington, and went on to receive an MS in Artificial Intelligence from the University of Edinburgh where he studied machine learning and completed a thesis on using deep neural networks to play Go. Chris’s research at AI2 focused on information extraction, and his work was deployed as a key feature of the Semantic Scholar project. Chris enrolled in a PhD program at UW in 2015. He is first author on the papers PDFFigures 2.0: Mining Figures from Research Papers (JCDL 2016) and Looking Beyond Text: Extracting Figures, Tables and Captions from Computer Science Papers (AAAI Workshop on Scholarly Big Data 2015).

Applying

To apply, please submit an application through the link below.

APPLY NOW

Other AI2 Programs

Internships
“Please join us to tackle an extraordinary set of scientific and engineering challenges. Let’s make history together.”
Oren Etzioni, Founding CEO