Additive deductive embeddings for planning of natural language proofs
dc.contributor.advisor | Durrett, Greg | |
dc.creator | Sprague, Zayne | |
dc.creator.orcid | 0009-0000-6887-8029 | |
dc.date.accessioned | 2023-08-25T17:12:53Z | |
dc.date.available | 2023-08-25T17:12:53Z | |
dc.date.created | 2023-05 | |
dc.date.issued | 2023-04-21 | |
dc.date.submitted | May 2023 | |
dc.date.updated | 2023-08-25T17:12:54Z | |
dc.description.abstract | Constructing a logical argument to support a claim entails selecting relevant evidence from a collection of facts and formulating a sequence of reasoning steps. Current natural language systems designed for claim validation employ large language models to reason and plan deductions. In this thesis, we investigate whether embedding spaces can adequately represent natural language deductions for use in proof-generating systems. We introduce a novel method for planning natural language proof generation called Additive Deduction, which leverages simple arithmetic operations performed exclusively in an embedding space. We explore multiple sources of off-the-shelf dense embeddings in addition to sparse embeddings from BM25. We devise three experiments to evaluate the effectiveness of embedding models for natural language planning. The first consists of two intrinsic evaluations of Additive Deduction using two off-the-shelf sentence encoders. The second incorporates an embedding-based heuristic into planning for proof generation on two natural language proof datasets, EntailmentBank and Everyday Norms: Why Not?. Lastly, we create a dataset that benchmarks various reasoning categories and common reasoning failures. Our findings suggest that while standard embedding methods frequently embed conclusions near the sums of their premises, they cannot fully express certain categories of reasoning, which hurts their proof generation performance. | |
dc.description.department | Computer Science | |
dc.format.mimetype | application/pdf | |
dc.identifier.uri | https://hdl.handle.net/2152/121237 | |
dc.identifier.uri | http://dx.doi.org/10.26153/tsw/48065 | |
dc.language.iso | en | |
dc.subject | NLP | |
dc.subject | Natural Language Inference | |
dc.subject | Textual reasoning | |
dc.subject | Natural Language Deduction | |
dc.subject | NLI | |
dc.subject | Reasoning in text | |
dc.title | Additive deductive embeddings for planning of natural language proofs | |
dc.type | Thesis | |
dc.type.material | text | |
thesis.degree.department | Computer Sciences | |
thesis.degree.discipline | Computer Science | |
thesis.degree.grantor | The University of Texas at Austin | |
thesis.degree.level | Masters | |
thesis.degree.name | Master of Science in Computer Sciences |
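The abstract above describes Additive Deduction as planning by simple arithmetic over sentence embeddings, with conclusions expected to lie near the sums of their premise embeddings. The sketch below illustrates only that additive intuition; the encoder choice (all-mpnet-base-v2), the use of cosine similarity, and the candidate-ranking setup are illustrative assumptions, not the thesis's actual configuration.

# Minimal sketch of an additive embedding heuristic, under the assumptions
# stated above (encoder, cosine similarity, and example sentences are all
# illustrative and not taken from the thesis).
import numpy as np
from sentence_transformers import SentenceTransformer

model = SentenceTransformer("all-mpnet-base-v2")  # assumed off-the-shelf encoder

def additive_score(premises, conclusion):
    """Score a candidate conclusion by the cosine similarity between the
    sum of its premise embeddings and the conclusion embedding."""
    prem_vecs = model.encode(premises)         # shape: (n_premises, d)
    concl_vec = model.encode([conclusion])[0]  # shape: (d,)
    summed = prem_vecs.sum(axis=0)             # additive combination of premises
    return float(np.dot(summed, concl_vec) /
                 (np.linalg.norm(summed) * np.linalg.norm(concl_vec)))

# Toy usage: rank candidate conclusions for one deduction step.
premises = ["All metals conduct electricity.", "Copper is a metal."]
candidates = ["Copper conducts electricity.", "Copper is magnetic."]
print(max(candidates, key=lambda c: additive_score(premises, c)))

Used as a planning heuristic, a score of this kind would let a proof-generation system prioritize which premise combinations to expand, without invoking a large language model for every candidate step.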