Locating robust patterns based on invariant of LTP-based features

Bibliographic Details
Published in: Pattern Recognition Letters, Vol. 165, pp. 9–16
Main Authors: Nguyen, Thanh Tuan; Nguyen, Thanh Phuong; Thirion-Moreau, Nadège
Format: Journal Article
Language: English
Published: Elsevier B.V., 01.01.2023

Summary:
• A novel concept of robust patterns (called RLTP) is introduced by considering the invariance of LTP-based features.
• A crucial derivative of CLTP (called RCLTP) is proposed in accordance with RLTP.
• An efficient application of RCLTP to local DT representation is presented.
• Local optical-flow-based features of DT motions are taken into account to boost the discrimination.
• By addressing only the robust patterns, our RCLTP descriptor obtains very good performance compared to the state of the art.

Efficiently representing Dynamic Textures (DTs) based on salient features is one of the considerable challenges in computer vision. Locating these features can be obstructed by the impact of encoding factors. In this article, a novel concept of Robust Local Ternary Patterns (RLTP) is introduced by considering the invariance of Local Ternary Patterns (LTP) subject to deviations of the threshold. Our locating process simultaneously encapsulates the discrimination of local features and deals with the noise sensitivity caused by small gray-scale changes of local neighbors. RLTP is then adapted to the completed LTP model to form an efficient operator that fully captures the properties of RLTP. Finally, RLTP is taken into account for DT description, where the robust patterns of spatio-temporal features and optical-flow-based motions are exploited to improve the performance. Experiments have clearly corroborated the efficacy of our proposed approach.
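For reference, the summary hinges on how LTP thresholding behaves when the threshold deviates. The sketch below shows only the standard LTP encoding that RLTP reasons about; the robust-pattern selection (RLTP) and the completed RCLTP operator are specific to the paper and are not reproduced here. The function name ltp_codes, the 8-neighbor ordering, and the threshold value 5 are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def ltp_codes(image, threshold=5):
    """Standard Local Ternary Pattern (LTP) codes over an 8-neighborhood.

    Each neighbor p of a center pixel c is mapped to
      +1 if p >= c + threshold, -1 if p <= c - threshold, 0 otherwise,
    and the ternary code is split into an 'upper' and a 'lower' binary
    pattern, as in the classic LTP formulation the paper builds on.
    """
    img = np.asarray(image, dtype=np.int32)
    h, w = img.shape
    center = img[1:-1, 1:-1]
    # Offsets of the 8 neighbors, taken in a fixed circular order.
    offsets = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
               (1, 1), (1, 0), (1, -1), (0, -1)]
    upper = np.zeros_like(center, dtype=np.uint8)
    lower = np.zeros_like(center, dtype=np.uint8)
    for bit, (dy, dx) in enumerate(offsets):
        neighbor = img[1 + dy:h - 1 + dy, 1 + dx:w - 1 + dx]
        upper |= ((neighbor >= center + threshold).astype(np.uint8) << bit)
        lower |= ((neighbor <= center - threshold).astype(np.uint8) << bit)
    return upper, lower

if __name__ == "__main__":
    # Toy usage: concatenated histograms of the two binary maps give a
    # basic per-frame LTP descriptor (512 bins for 8 neighbors).
    rng = np.random.default_rng(0)
    frame = rng.integers(0, 256, size=(64, 64), dtype=np.uint8)
    up, lo = ltp_codes(frame, threshold=5)
    descriptor = np.concatenate([np.bincount(up.ravel(), minlength=256),
                                 np.bincount(lo.ravel(), minlength=256)])
    print(descriptor.shape)  # (512,)
```

A small gray-scale perturbation of a neighbor near c ± threshold flips the corresponding bit in this plain encoding; the paper's RLTP concept targets exactly those patterns that remain stable under such threshold deviations.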
ISSN: 0167-8655
DOI: 10.1016/j.patrec.2022.11.008