RuleTaker: Transformers as Soft Reasoners over Language

Aristo • 2020
Can transformers be trained to reason (or emulate reasoning) over rules expressed in language? In the associated paper and demo we provide evidence that they can. Our models, which we call RuleTakers, are trained on datasets of synthetic rule bases plus derived conclusions, provided here. The resulting models provide the first demonstration that this kind of soft reasoning over language is indeed learnable.
License: Apache 2.0

To try a live demo of RuleTaker, click on “View Website” above.

Beginning with McCarthy’s Advice Taker (1959), AI has pursued the goal of providing a system with explicit, general knowledge and having the system reason over that knowledge. However, expressing the knowledge in a formal (logical or probabilistic) representation has been a major obstacle to this research. This work investigates a modern approach to this problem where the facts and rules are provided as natural language sentences, thus bypassing a formal representation. Our RuleTaker models provide the first demonstration that this is possible. For example, given the rules:

Metals conduct electricity. 
Insulators do not conduct electricity. 
If something is made of iron then it is metal. 
Nails are made of iron.

RuleTaker will (correctly) predict:

Nails conduct electricity? TRUE

However, if we change the rules to include, e.g., the counterfactual statement that iron is an insulator:

Metals conduct electricity. 
Insulators do not conduct electricity. 
If something is made of iron then it is an insulator.
Nails are made of iron.

RuleTaker will (correctly) change its answer to FALSE:

Nails conduct electricity? FALSE

The ability of transformers to emulate rule-based reasoning like this opens up significant new opportunities for explainability, correctability, and counterfactual reasoning in question-answering.
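For readers who want to experiment programmatically, the sketch below shows one way a RuleTaker-style model could be queried with the Hugging Face transformers library, treating the task as binary classification over a (rule base, question) pair. This is a minimal illustration under stated assumptions: the model id is a placeholder rather than a released RuleTaker checkpoint, and the label ordering (0 = FALSE, 1 = TRUE) is an assumption that should be checked against the model's config.

```python
# Minimal sketch: query a RuleTaker-style model as sentence-pair classification.
# The checkpoint name is a placeholder, not a real released model id, and the
# label ordering below is an assumption.
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

MODEL_NAME = "your-org/ruletaker-checkpoint"  # placeholder model id

tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
model = AutoModelForSequenceClassification.from_pretrained(MODEL_NAME)

# The rule base is given as plain natural-language sentences.
rules = (
    "Metals conduct electricity. "
    "Insulators do not conduct electricity. "
    "If something is made of iron then it is metal. "
    "Nails are made of iron."
)
question = "Nails conduct electricity?"

# Encode the rule base and the question as a sentence pair.
inputs = tokenizer(rules, question, return_tensors="pt", truncation=True)
with torch.no_grad():
    logits = model(**inputs).logits

# Assumes label 1 = TRUE and label 0 = FALSE; verify against the model config.
prediction = "TRUE" if logits.argmax(dim=-1).item() == 1 else "FALSE"
print(f"{question} {prediction}")
```

Swapping the third rule for its counterfactual ("If something is made of iron then it is an insulator.") and re-running the same query should flip the prediction, mirroring the example above.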

Authors

Peter Clark, Oyvind Tafjord, Kyle Richardson