Improving Causal Reasoning in Large Language Models: A Survey
Main Authors | |
---|---|
Format | Journal Article |
Language | English |
Published | 22.10.2024 |
Summary: Causal reasoning (CR) is a crucial aspect of intelligence, essential for problem-solving, decision-making, and understanding the world. While large language models (LLMs) can generate rationales for their outputs, their ability to reliably perform causal reasoning remains uncertain, often falling short in tasks requiring a deep understanding of causality. In this survey, we provide a comprehensive review of research aimed at enhancing LLMs for causal reasoning. We categorize existing methods based on the role of LLMs: either as reasoning engines or as helpers providing knowledge or data to traditional CR methods, followed by a detailed discussion of the methodologies in each category. We then evaluate the performance of LLMs on various causal reasoning tasks, providing key findings and in-depth analysis. Finally, we provide insights from current studies and highlight promising directions for future research. We aim for this work to serve as a comprehensive resource, fostering further advancements in causal reasoning with LLMs. Resources are available at https://github.com/chendl02/Awesome-LLM-causal-reasoning.
DOI: 10.48550/arxiv.2410.16676