Detecting hallucinations in large language models using semantic entropy
Large language model (LLM) systems, such as ChatGPT [1] or Gemini [2], can show impressive reasoning and question-answering capabilities but often ‘hallucinate’ false outputs and unsubstantiated answers [3,4]. Answering unreliably or without the necessary information prevents adoption in diverse field...
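The semantic-entropy idea named in the title measures uncertainty over meanings rather than over token sequences: sample several answers to the same question, group answers that mutually entail one another into semantic clusters, and compute the entropy of the cluster distribution; high entropy flags a likely confabulation. The sketch below is a minimal illustration of that idea under stated assumptions, not the authors' implementation: `cluster_by_bidirectional_entailment`, `semantic_entropy`, and the string-match stand-in for an NLI entailment model are all hypothetical names introduced here.

```python
import math
from collections import Counter
from typing import Callable, List

def cluster_by_bidirectional_entailment(
    answers: List[str],
    entails: Callable[[str, str], bool],
) -> List[int]:
    """Assign each sampled answer a semantic-cluster id.

    Two answers share a cluster when each entails the other
    (bidirectional entailment). In the paper this judgement comes
    from an entailment model; here `entails` is an injected predicate.
    """
    cluster_ids: List[int] = []
    representatives: List[str] = []  # one exemplar per cluster
    for answer in answers:
        for cid, rep in enumerate(representatives):
            if entails(answer, rep) and entails(rep, answer):
                cluster_ids.append(cid)
                break
        else:  # no existing cluster matched: start a new one
            representatives.append(answer)
            cluster_ids.append(len(representatives) - 1)
    return cluster_ids

def semantic_entropy(cluster_ids: List[int]) -> float:
    """Shannon entropy of the empirical distribution over semantic clusters."""
    counts = Counter(cluster_ids)
    total = len(cluster_ids)
    return -sum((c / total) * math.log(c / total) for c in counts.values())

# Toy usage: normalized string equality stands in for a real NLI model.
samples = ["Paris", "paris", "Lyon", "Paris", "Marseille"]
ids = cluster_by_bidirectional_entailment(
    samples, lambda a, b: a.strip().lower() == b.strip().lower()
)
print(f"semantic entropy = {semantic_entropy(ids):.3f}")  # higher => less semantically consistent
```

Because entropy is taken over clusters of meaning, paraphrases of the same answer ("Paris" vs. "paris") do not inflate the uncertainty estimate the way naive token-level entropy would.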
| Published in | Nature (London), Vol. 630, No. 8017, pp. 625-630 |
|---|---|
| Main Authors | Sebastian Farquhar, Jannik Kossen, Lorenz Kuhn, Yarin Gal |
| Format | Journal Article |
| Language | English |
| Published | London: Nature Publishing Group UK, 20.06.2024 |