VisionGPT: LLM-Assisted Real-Time Anomaly Detection for Safe Visual Navigation

Bibliographic Details
Published in: arXiv.org
Main Authors: Wang, Hao; Qin, Jiayou; Bastola, Ashish; Chen, Xiwen; Suchanek, John; Gong, Zihao; Razi, Abolfazl
Format: Paper
Language: English
Published: Ithaca: Cornell University Library, arXiv.org, 19.03.2024

Summary: This paper explores the potential of large language models (LLMs) in zero-shot anomaly detection for safe visual navigation. Assisted by the state-of-the-art real-time open-world object detection model Yolo-World and specialized prompts, the proposed framework identifies anomalies, including any possible obstacles, within camera-captured frames, then generates concise, audio-delivered descriptions that emphasize the abnormalities, assisting safe visual navigation in complex circumstances. Moreover, the framework leverages the advantages of LLMs and the open-vocabulary object detection model to achieve dynamic scenario switching, which allows users to transition smoothly from scene to scene and addresses a limitation of traditional visual navigation. Furthermore, the paper examines the performance contribution of different prompt components, offers a vision for future improvements in visual accessibility, and paves the way for LLMs in video anomaly detection and vision-language understanding.
ISSN:2331-8422
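
The summary describes a pipeline in which open-vocabulary detections from each camera frame are checked against what the current scene is expected to contain, and any unexpected objects are condensed into a short prompt for an LLM to verbalize. The sketch below illustrates that flow only; the function names, detection format, and expected-object lists are illustrative assumptions, not the paper's actual API, and the real framework would feed the prompt to an LLM and use Yolo-World for detection.

```python
# Illustrative sketch of the anomaly-flagging and description step implied
# by the abstract. Detection dicts stand in for real Yolo-World output;
# all names here are hypothetical.

def flag_anomalies(detections, expected_objects):
    """Return detections whose labels are not expected in this scene."""
    expected = {label.lower() for label in expected_objects}
    return [d for d in detections if d["label"].lower() not in expected]

def build_description_prompt(anomalies):
    """Compose a concise prompt asking an LLM for an audio-friendly warning."""
    if not anomalies:
        return "Path clear. No obstacles detected."
    items = ", ".join(f'{d["label"]} ({d["position"]})' for d in anomalies)
    return ("In one short sentence suitable for audio playback, warn a "
            "pedestrian about: " + items)

# Example: on a sidewalk scene, a dog and a manhole are unexpected obstacles.
detections = [
    {"label": "person", "position": "left"},
    {"label": "dog", "position": "center"},
    {"label": "manhole", "position": "ahead"},
]
anomalies = flag_anomalies(detections,
                           expected_objects=["person", "tree", "bench"])
prompt = build_description_prompt(anomalies)
```

Switching `expected_objects` per scene is one simple way to mirror the dynamic scenario switching the summary mentions: the same detector output is re-interpreted against a different expected set when the user moves, say, from a sidewalk to a subway platform.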