Effectiveness assessment of recent large vision-language models
Published in: Visual Intelligence, Vol. 2, No. 1
Main Authors:
Format: Journal Article
Language: English
Published: Singapore: Springer Nature Singapore, 28 June 2024
Summary: The advent of large vision-language models (LVLMs) represents a remarkable advance in the quest for artificial general intelligence. However, the models’ effectiveness in both specialized and general tasks warrants further investigation. This paper evaluates the competency of popular LVLMs in specialized and general tasks, aiming to offer a comprehensive understanding of these novel models. To gauge their effectiveness in specialized tasks, we employ six challenging tasks in three application scenarios: natural, healthcare, and industrial. These six tasks are salient, camouflaged, and transparent object detection, as well as polyp detection, skin lesion detection, and industrial anomaly detection. We examine the performance of three recent open-source LVLMs, namely MiniGPT-v2, LLaVA-1.5, and Shikra, on both visual recognition and localization in these tasks. Moreover, we conduct empirical investigations with these LVLMs together with GPT-4V, assessing their multimodal understanding capabilities in general tasks, including object counting, absurd question answering, affordance reasoning, attribute recognition, and spatial relation reasoning. Our investigations reveal that these LVLMs demonstrate limited proficiency not only in specialized tasks but also in general tasks. We examine this inadequacy in depth and uncover several potential factors, including limited cognition in specialized tasks, object hallucination, text-to-image interference, and reduced robustness on complex problems. We hope that this study provides useful insights for the future development of LVLMs, helping researchers improve them for both general and specialized applications.
ISSN: 2731-9008
DOI: 10.1007/s44267-024-00050-1