The Two Sides of the Coin: Hallucination Generation and Detection with LLMs as Evaluators for LLMs

Bibliographic Details
Main Authors: Bui, Anh Thu Maria, Brech, Saskia Felizitas, Hußfeldt, Natalie, Jennert, Tobias, Ullrich, Melanie, Breuer, Timo, Khasmakhi, Narjes Nikzad, Schaer, Philipp
Format: Journal Article
Language: English
Published: 12.07.2024
Summary: Hallucination detection in Large Language Models (LLMs) is crucial for ensuring their reliability. This work presents our participation in the CLEF ELOQUENT HalluciGen shared task, where the goal is to develop evaluators for both generating and detecting hallucinated content. We explored the capabilities of four LLMs for this purpose: Llama 3, Gemma, GPT-3.5 Turbo, and GPT-4. We also employed ensemble majority voting to combine all four models for the detection task. The results provide valuable insights into the strengths and weaknesses of these LLMs in handling hallucination generation and detection tasks.
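To illustrate the ensemble step mentioned in the summary, below is a minimal sketch of majority voting over per-model hallucination verdicts. It assumes each detector returns a binary label per instance; the model names, labels, and the tie-break rule are illustrative stand-ins, not the authors' actual implementation.

```python
from collections import Counter

def majority_vote(labels: list[str]) -> str:
    """Return the label chosen by most voters.

    With an even number of voters (four models here), ties are possible;
    this sketch falls back to 'hallucination' so borderline cases are
    flagged, which is an assumed tie-break, not necessarily the paper's.
    """
    counts = Counter(labels).most_common()
    if len(counts) > 1 and counts[0][1] == counts[1][1]:
        return "hallucination"
    return counts[0][0]

# Hypothetical per-model verdicts for one (source, hypothesis) pair.
votes = {
    "llama3": "hallucination",
    "gemma": "faithful",
    "gpt-3.5-turbo": "hallucination",
    "gpt-4": "hallucination",
}
print(majority_vote(list(votes.values())))  # -> "hallucination"
```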
DOI: 10.48550/arxiv.2407.09152