Fusion YOLO: Fusion Module Assisted Network in Detection for Automatic Target Scoring
Published in | 2024 7th International Conference on Algorithms, Computing and Artificial Intelligence (ACAI), pp. 1 - 6 |
---|---|
Main Authors | , , |
Format | Conference Proceeding |
Language | English |
Published | IEEE, 20.12.2024 |
DOI | 10.1109/ACAI63924.2024.10899598 |
Summary: | The processing, analysis, and understanding of chest bitmaps with bullet holes are crucial for automatic target scoring. With the development of technology, computer vision-based techniques have shown significant advantages in this task. By obtaining the size, shape, position, and distribution of bullet holes, as well as the spatial relationships between bullet holes and target rings, it is possible to provide precise, real-time feedback on the shooter's performance and to offer corrections and assistance. However, bullet holes in chest bitmaps are small and have few appearance features, making it difficult for existing object detection techniques to accurately extract their features and often leading to low detection accuracy. To address this issue, this paper proposes a multi-fusion network called Fusion YOLO for the detection of bullet holes in chest bitmaps. Specifically, an MdF Module (Multi-domain Fusion Module) is first inserted at the front of the overall network to integrate information from different transformation domains, using high-pass filters to fuse spatial-domain visual information with frequency-domain details and edge information. Secondly, a simple super-resolution reconstruction module is constructed that computes a loss from the feature maps extracted by the backbone network, thereby controlling the precision of the information the backbone extracts. Additionally, to better fuse high-resolution, semantically weak feature maps with low-resolution, semantically strong ones, and to give small-object information more influence in the global context, this paper proposes the MFPAN (Multi-feature Fusion Path Aggregation Network) feature fusion network. Experimental results show that, compared to existing methods, Fusion YOLO achieves superior performance, reaching 71.7% mAP while reducing the number of parameters. This research provides an advanced method for automatic target scoring. |
---|---|
DOI: | 10.1109/ACAI63924.2024.10899598 |
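The abstract above describes the MdF Module as fusing spatial-domain visual information with frequency-domain detail and edge information via high-pass filtering. The paper's actual implementation is not reproduced in this record, so the following is only a minimal PyTorch sketch of that general idea, assuming an FFT-based high-pass filter and a simple additive fusion; the function name, cutoff size, and fusion weight are illustrative assumptions, not the authors' design.

```python
# Hedged sketch: frequency-domain high-pass detail extraction fused back into
# the spatial-domain image, loosely following the MdF Module description.
import torch
import torch.fft


def high_pass_fuse(image: torch.Tensor, cutoff: int = 8, alpha: float = 0.5) -> torch.Tensor:
    """Fuse an image with its high-frequency (edge/detail) component.

    image:  (B, C, H, W) float tensor
    cutoff: half-width of the low-frequency block suppressed around the DC term
    alpha:  weight of the high-frequency component in the fused output
    (all values here are illustrative, not taken from the paper)
    """
    B, C, H, W = image.shape

    # 2-D FFT per channel, shifted so the DC component sits at the center.
    freq = torch.fft.fftshift(torch.fft.fft2(image), dim=(-2, -1))

    # High-pass mask: zero out a small square of low frequencies.
    mask = torch.ones(H, W, device=image.device)
    cy, cx = H // 2, W // 2
    mask[cy - cutoff:cy + cutoff, cx - cutoff:cx + cutoff] = 0.0

    # Back to the spatial domain: this keeps edges and fine detail such as
    # small bullet-hole boundaries.
    high = torch.fft.ifft2(torch.fft.ifftshift(freq * mask, dim=(-2, -1))).real

    # Simple additive fusion of spatial-domain content and frequency-domain detail.
    return image + alpha * high


if __name__ == "__main__":
    x = torch.rand(1, 3, 640, 640)      # dummy chest-bitmap image batch
    fused = high_pass_fuse(x)
    print(fused.shape)                  # torch.Size([1, 3, 640, 640])
```

In a network like the one described, the fused tensor would then be passed to the detection backbone; the sketch covers only the domain-fusion step, not the super-resolution loss or the MFPAN fusion network.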