The Two Sides of the Coin: Hallucination Generation and Detection with LLMs as Evaluators for LLMs
Main Authors | |
---|---|
Format | Journal Article |
Language | English |
Published | 12.07.2024 |
Summary: | Hallucination detection in Large Language Models (LLMs) is crucial for ensuring their reliability. This work presents our participation in the CLEF ELOQUENT HalluciGen shared task, where the goal is to develop evaluators for both generating and detecting hallucinated content. We explored the capabilities of four LLMs for this purpose: Llama 3, Gemma, GPT-3.5 Turbo, and GPT-4. We also employed ensemble majority voting over all four models for the detection task. The results provide valuable insights into the strengths and weaknesses of these LLMs in handling hallucination generation and detection tasks. |
---|---|
DOI: | 10.48550/arxiv.2407.09152 |
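
The record does not include the authors' implementation of the ensemble step. As an illustration only, the sketch below shows one way majority voting over the four detectors' verdicts could look in a pairwise detection setting; the model names mirror the abstract, while the label scheme ("hyp1"/"hyp2"), the example verdicts, and the tie-breaking rule are assumptions made here, not details from the paper.

```python
from collections import Counter

def majority_vote(predictions: dict[str, str]) -> str:
    """Return the label chosen by most detectors.

    Ties fall back to whichever tied label was encountered first in
    insertion order -- an arbitrary, illustrative choice, not the
    paper's documented behaviour.
    """
    counts = Counter(predictions.values())
    label, _count = counts.most_common(1)[0]
    return label

# Hypothetical per-model verdicts for one detection instance, e.g. which of
# two candidate hypotheses ("hyp1"/"hyp2") is the hallucinated one.
example_predictions = {
    "llama-3": "hyp2",
    "gemma": "hyp1",
    "gpt-3.5-turbo": "hyp2",
    "gpt-4": "hyp2",
}

if __name__ == "__main__":
    print(majority_vote(example_predictions))  # -> hyp2
```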