Additive deductive embeddings for planning of natural language proofs




Sprague, Zayne


Constructing a logical argument to support a claim entails selecting relevant evidence from a collection of facts and formulating a sequence of reasoning steps. Current natural language systems designed for claim validation employ large language models to reason and plan deductions. In this paper, we investigate whether embedding spaces can adequately represent natural language deductions for use in proof-generating systems. We introduce Additive Deduction, a novel method for planning natural language proof generation that relies exclusively on simple arithmetic operations in an embedding space. We explore multiple sources of off-the-shelf dense embeddings as well as sparse embeddings from BM25. We devise three experiments to demonstrate the effectiveness of embedding models for natural language planning. The first consists of two intrinsic evaluations of Additive Deduction using two off-the-shelf sentence encoders. The second incorporates an embedding-based heuristic into a planner for natural language proof generation and evaluates it on two datasets, EntailmentBank and Everyday Norms: Why Not?. Lastly, we create a dataset that benchmarks various reasoning categories and common reasoning failures. Our findings suggest that while standard embedding methods frequently embed conclusions near the sums of their premises, they fail to fully capture certain categories of reasoning, which hurts their proof generation performance.
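The core idea described above — that a conclusion's embedding should lie near the sum of its premises' embeddings — can be sketched as a simple ranking heuristic. The snippet below is an illustrative reconstruction, not the thesis's actual implementation: it assumes premise and candidate-conclusion sentences have already been encoded into vectors (the hypothetical `rank_conclusions` helper and the toy vectors are our own), and scores each candidate by cosine similarity to the premise sum.

```python
import numpy as np

def cosine(a, b):
    """Cosine similarity between two vectors."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def rank_conclusions(premise_embs, candidate_embs):
    """Additive-deduction-style heuristic: score each candidate
    conclusion embedding by its cosine similarity to the sum of the
    premise embeddings, and return candidate indices best-first."""
    target = np.sum(premise_embs, axis=0)
    scores = [cosine(target, c) for c in candidate_embs]
    order = sorted(range(len(candidate_embs)), key=lambda i: -scores[i])
    return order, scores

# Toy example with hand-made 3-d "embeddings": the first candidate
# points in the same direction as the premise sum, the second does not.
premises = [np.array([1.0, 0.0, 0.0]), np.array([0.0, 1.0, 0.0])]
candidates = [np.array([1.0, 1.0, 0.0]), np.array([0.0, 0.0, 1.0])]
order, scores = rank_conclusions(premises, candidates)
# order[0] == 0: the additive candidate is ranked first.
```

In a planner, such scores would serve as a search heuristic over which pairs of facts to combine next; here the toy vectors only demonstrate the arithmetic.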

