Vision-Language and Large Language Model Performance in Gastroenterology: GPT, Claude, Llama, Phi, Mistral, Gemma, and Quantized Models
Format | Journal Article |
Language | English |
Published | 25.08.2024 |
Summary: Background and Aims: This study evaluates the medical reasoning performance
of large language models (LLMs) and vision language models (VLMs) in
gastroenterology.
Methods: We used 300 gastroenterology board exam-style multiple-choice
questions, 138 of which contain images, to systematically assess the impact of
model configurations, parameters, and prompt-engineering strategies using
GPT-3.5. Next, we assessed the performance of proprietary and open-source LLMs
(versions), including GPT (3.5, 4, 4o, 4omini), Claude (3, 3.5), Gemini (1.0),
Mistral, Llama (2, 3, 3.1), Mixtral, and Phi (3), across different interfaces
(web and API), computing environments (cloud and local), and model precisions
(with and without quantization). Finally, we assessed accuracy using a
semiautomated pipeline.
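As an illustration of how such a semiautomated scoring pipeline can work (a minimal sketch, not the authors' implementation: the model name, prompt wording, answer-letter regex, and review rule are assumptions), the snippet below sends one multiple-choice question to a chat model over an API and grades the reply, deferring ambiguous outputs to manual review:

```python
# Hypothetical sketch: query a chat model with a board-style MCQ and score the
# reply semiautomatically (answer letter extracted by regex, ambiguous replies
# flagged for human review).
import re
from openai import OpenAI  # assumes the openai>=1.x Python client

client = OpenAI()  # reads OPENAI_API_KEY from the environment


def ask(question: str, options: dict[str, str], model: str = "gpt-3.5-turbo") -> str:
    """Send one multiple-choice question and return the raw model reply."""
    prompt = question + "\n" + "\n".join(f"{k}. {v}" for k, v in options.items())
    prompt += "\nAnswer with the single letter of the best option."
    resp = client.chat.completions.create(
        model=model,
        temperature=0,  # deterministic decoding for reproducible scoring
        messages=[{"role": "user", "content": prompt}],
    )
    return resp.choices[0].message.content


def score(reply: str, answer_key: str) -> bool | None:
    """Return True/False if exactly one option letter is found, None if ambiguous."""
    letters = re.findall(r"\b([A-E])\b", reply.upper())
    if len(set(letters)) != 1:
        return None  # route to human review (the "semi" in semiautomated)
    return letters[0] == answer_key.upper()
```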
Results: Among the proprietary models, GPT-4o (73.7%) and Claude3.5-Sonnet
(74.0%) achieved the highest accuracy, outperforming the top open-source
models: Llama3.1-405b (64%), Llama3.1-70b (58.3%), and Mixtral-8x7b (54.3%).
Among the quantized open-source models, the 6-bit quantized Phi3-14b (48.7%)
performed best. The scores of the quantized models were comparable to those of
the full-precision models Llama2-7b, Llama2-13b, and Gemma2-9b. Notably, VLM
performance on image-containing questions did not improve when the images were
provided and worsened when LLM-generated captions were provided. In contrast, a
10% increase in accuracy was observed when images were accompanied by
human-crafted image descriptions.
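For the local, quantized setting, a 6-bit model can be evaluated on the same questions along the following lines (a hypothetical sketch assuming llama-cpp-python and Q6_K GGUF weights; the filename, context size, and prompt are placeholders, not values from the study):

```python
# Hypothetical local run of a 6-bit quantized model with llama-cpp-python.
# The GGUF filename, context size, and question text are illustrative placeholders.
from llama_cpp import Llama

llm = Llama(
    model_path="phi3-14b.Q6_K.gguf",  # assumed 6-bit (Q6_K) quantized weights
    n_ctx=4096,                        # context window large enough for a full MCQ
    n_gpu_layers=-1,                   # offload all layers to GPU when available
)

mcq_prompt = (
    "A 45-year-old patient presents with ...\n"
    "A. Option one\nB. Option two\nC. Option three\nD. Option four\n"
    "Answer with the single letter of the best option."
)

reply = llm.create_chat_completion(
    messages=[{"role": "user", "content": mcq_prompt}],
    temperature=0,  # deterministic decoding, matching the API-based runs
)
print(reply["choices"][0]["message"]["content"])
```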
Conclusion: While LLMs exhibit robust zero-shot performance in medical
reasoning, the integration of visual data remains a challenge for VLMs.
Effective deployment requires careful selection of model configurations,
weighing the high performance of proprietary models against the flexible
adaptability of open-source models.
DOI: | 10.48550/arxiv.2409.00084 |