YOLOv11-based multi-task learning for enhanced bone fracture detection and classification in X-ray images
Published in | Journal of Radiation Research and Applied Sciences, Vol. 18, No. 1, p. 101309
---|---
Format | Journal Article
Language | English
Published | Elsevier B.V., 01.03.2025
ISSN | 1687-8507
DOI | 10.1016/j.jrras.2025.101309
Summary:
This study presents a multi-task learning framework based on the YOLOv11 architecture to improve both fracture detection and localization in X-ray images, with the goal of providing an efficient solution for clinical applications.
The dataset comprised X-ray images of fractured and non-fractured upper and lower extremities, divided into training (70%), validation (15%), and test (15%) sets. The training set contained 10,966 cases (5,778 normal, 5,188 with fractures), while the validation and test sets each contained 2,350 cases (1,238 normal, 1,112 with fractures). A multi-task learning model based on YOLOv11 was trained jointly for fracture classification and localization, with data augmentation applied to reduce overfitting and improve generalization. Training used the Adam optimizer with a learning rate of 0.001 and a batch size of 16. Model performance was evaluated with mean Average Precision (mAP) and Intersection over Union (IoU) at different thresholds, and benchmarked against Faster R-CNN and SSD.
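A minimal training sketch of this setup, assuming the Ultralytics YOLO11 implementation (the abstract does not name a specific codebase); the dataset YAML path, model variant, epoch count, image size, and augmentation settings are illustrative assumptions, while the learning rate, batch size, and optimizer follow the values reported above:

```python
# Sketch only: Ultralytics YOLO11 training with the hyperparameters reported
# in the abstract (Adam, lr 0.001, batch 16). Dataset config, epochs, image
# size, and augmentation values are assumptions, not values from the paper.
from ultralytics import YOLO

model = YOLO("yolo11n.pt")  # pretrained YOLO11 weights (variant is an assumption)

model.train(
    data="fracture_xray.yaml",  # hypothetical dataset config: splits and class names
    epochs=100,                 # assumed; not stated in the abstract
    imgsz=640,                  # assumed input resolution
    batch=16,                   # batch size from the abstract
    lr0=0.001,                  # initial learning rate from the abstract
    optimizer="Adam",           # optimizer from the abstract
    fliplr=0.5,                 # example augmentation settings; the paper's exact
    degrees=10.0,               # augmentation pipeline is not described
)

# Evaluate on the held-out split; box.map50 corresponds to mAP at IoU 0.5
metrics = model.val(split="test")
print(f"mAP@0.5: {metrics.box.map50:.3f}")
```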
The YOLOv11 model achieved a mAP of 96.8% at an IoU threshold of 0.5 and an IoU of 92.5%, outperforming Faster R-CNN (mAP: 87.5%, IoU: 85.23%) and SSD (mAP: 82.9%, IoU: 80.12%) in both fracture detection and localization. This improvement highlights the model's accuracy and efficiency for real-time use.
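For reference, IoU measures the overlap between a predicted and a ground-truth bounding box; a detection typically counts as correct for mAP at the 0.5 threshold when this overlap reaches 0.5. A small, generic sketch (not code from the paper; the example coordinates are made up):

```python
def iou(box_a, box_b):
    """Intersection over Union for two boxes given as (x1, y1, x2, y2)."""
    # Intersection rectangle
    ix1, iy1 = max(box_a[0], box_b[0]), max(box_a[1], box_b[1])
    ix2, iy2 = min(box_a[2], box_b[2]), min(box_a[3], box_b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)

    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    union = area_a + area_b - inter
    return inter / union if union > 0 else 0.0

# Example: a predicted fracture box vs. an annotated one (illustrative values)
pred, gt = (48, 120, 210, 260), (50, 118, 200, 255)
print(f"IoU = {iou(pred, gt):.2f}")  # a true positive at the 0.5 threshold if >= 0.5
```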
The YOLOv11-based multi-task learning framework significantly outperforms the traditional methods evaluated, offering high accuracy and real-time fracture localization. The model shows strong potential for clinical use, improving diagnostic accuracy, increasing productivity, and streamlining radiologists' workflow.