Video saliency detection based on color-contrast, motion and texture distinctiveness model

Bibliographic Details
Published in: Proceedings / ACS/IEEE International Conference on Computer Systems and Applications, pp. 1 - 8
Main Authors: Talbi, Amal; Abdellaoui, Mehrez
Format: Conference Proceeding
Language: English
Published: IEEE, 04.12.2023
ISSN: 2161-5330
DOI: 10.1109/AICCSA59173.2023.10479291

More Information
Summary: Salient object detection in still images has been studied for a long time, but video saliency detection has not been as thoroughly investigated, and the set of methods for computing salient objects in videos remains limited. Video saliency requires researching how attention is allocated not only in space but also over time, which is particularly important for tracking moving objects or understanding attention shifts in dynamic scenes. In this publication, we contribute a new salient object detection approach that integrates texture characteristics into an existing method combining color-contrast and motion. In particular, we first compute the spatial saliency map from visual models based on color-contrast and texture features, and then detect the temporal saliency map via motion. Finally, we propose a new CMT saliency fusion method that merges three features (color, motion, and texture) to integrate the spatial and temporal maps. Determining which regions or objects in an image or scene attract human attention is a fundamental problem, and various algorithms have been developed to predict salient regions. Experiments on three video saliency datasets show that the proposed CMT method improves the saliency estimation model and achieves the best precision by including texture, which minimizes the impact of a busy, noisy background on the saliency calculation.
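The abstract names the pipeline stages (spatial saliency from color-contrast and texture, temporal saliency from motion, then fusion) without giving formulas. As an illustration only, a minimal sketch of such a color/motion/texture fusion could look like the following; every function here (global color contrast, gradient-based texture, frame differencing, fixed-weight fusion) is a common stand-in technique and an assumption on my part, not the authors' actual CMT method:

```python
import numpy as np

def color_contrast_saliency(frame):
    """Per-pixel color contrast: distance of each pixel's color from
    the frame's mean color (a common global-contrast heuristic)."""
    mean_color = frame.reshape(-1, 3).mean(axis=0)
    return np.linalg.norm(frame - mean_color, axis=2)

def texture_saliency(gray):
    """Texture distinctiveness approximated by local gradient
    magnitude (a stand-in for the paper's texture features)."""
    gy, gx = np.gradient(gray.astype(float))
    return np.hypot(gx, gy)

def motion_saliency(prev_gray, gray):
    """Temporal saliency from simple frame differencing
    (the paper's motion model is likely richer)."""
    return np.abs(gray.astype(float) - prev_gray.astype(float))

def normalize(s):
    """Scale a saliency map to [0, 1]."""
    rng = s.max() - s.min()
    return (s - s.min()) / rng if rng > 0 else np.zeros_like(s)

def cmt_fusion(frame, prev_gray, gray, w=(0.4, 0.4, 0.2)):
    """Fuse color, motion, and texture maps with fixed weights.
    The weights are illustrative assumptions, not from the paper."""
    c = normalize(color_contrast_saliency(frame))
    m = normalize(motion_saliency(prev_gray, gray))
    t = normalize(texture_saliency(gray))
    return normalize(w[0] * c + w[1] * m + w[2] * t)
```

In this sketch the spatial cues (color, texture) come from the current frame alone, while the temporal cue needs the previous frame; a weighted sum is the simplest way to merge the three normalized maps into one.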