EASeg: Environmental adaptation for weakly-supervised autonomous driving semantic segmentation
Published in: Information Processing & Management, Vol. 63, No. 1, p. 104349
Format: Journal Article
Language: English
Published: Elsevier Ltd, 01.01.2026
Summary: Weakly supervised semantic segmentation (WSSS) offers a promising solution to reduce annotation costs in autonomous driving perception systems. However, existing methods struggle with the complex environmental conditions inherent to real-world driving scenarios, including adverse weather, variable lighting, and challenging visibility conditions. To address these limitations, we introduce EASeg, a novel framework that enhances segmentation robustness across diverse environmental conditions while requiring only image-level supervision. Our approach introduces three key innovations: (1) a multi-scale feature module that captures objects at varying scales, followed by a boundary-aware enhancement component for precise delineation; (2) a dual-stream environmental adaptation mechanism that separately models global weather patterns and local illumination variations; and (3) a reliability-guided feature integration strategy that dynamically combines backbone features with foundation-model features based on their estimated reliability. Extensive experiments demonstrate that EASeg outperforms previous best methods, increasing mIoU by 24.5% on Cityscapes, 27.5% on CamVid, and 22.5% on WildDash2. Ablation studies confirm the contribution of each component, and our work represents a significant advancement toward practical, all-weather autonomous driving systems that enhance safety through improved segmentation of small objects and precise boundary delineation, while minimizing annotation requirements.
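The reliability-guided integration described in point (3) can be pictured as a learned, per-pixel convex combination of the two feature streams. The paper's exact formulation is not given in this record, so the following PyTorch sketch is only an illustration under that assumption; the module and parameter names (ReliabilityGuidedFusion, reliability_head, fused_dim) are hypothetical.

```python
# Minimal sketch of reliability-guided feature fusion (hypothetical design,
# not the authors' exact module). Backbone and foundation-model features are
# combined per pixel, weighted by an estimated reliability map.
import torch
import torch.nn as nn
import torch.nn.functional as F


class ReliabilityGuidedFusion(nn.Module):
    def __init__(self, backbone_dim: int, foundation_dim: int, fused_dim: int = 256):
        super().__init__()
        # Project both feature streams into a common channel width.
        self.proj_backbone = nn.Conv2d(backbone_dim, fused_dim, kernel_size=1)
        self.proj_foundation = nn.Conv2d(foundation_dim, fused_dim, kernel_size=1)
        # Small head that predicts a per-pixel reliability weight in [0, 1]
        # for the foundation-model stream (1 - weight goes to the backbone).
        self.reliability_head = nn.Sequential(
            nn.Conv2d(2 * fused_dim, fused_dim, kernel_size=3, padding=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(fused_dim, 1, kernel_size=1),
            nn.Sigmoid(),
        )

    def forward(self, backbone_feat: torch.Tensor, foundation_feat: torch.Tensor) -> torch.Tensor:
        fb = self.proj_backbone(backbone_feat)
        # Resize the foundation-model features to the backbone resolution.
        ff = self.proj_foundation(
            F.interpolate(foundation_feat, size=fb.shape[-2:], mode="bilinear", align_corners=False)
        )
        reliability = self.reliability_head(torch.cat([fb, ff], dim=1))  # (B, 1, H, W)
        # Convex combination: lean on the foundation features where they are
        # judged reliable, fall back to the backbone elsewhere.
        return reliability * ff + (1.0 - reliability) * fb


if __name__ == "__main__":
    fuse = ReliabilityGuidedFusion(backbone_dim=512, foundation_dim=768)
    out = fuse(torch.randn(2, 512, 64, 128), torch.randn(2, 768, 32, 64))
    print(out.shape)  # torch.Size([2, 256, 64, 128])
```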
Highlights:
• Weakly-supervised framework for adaptive autonomous driving segmentation.
• Dual-stream network models global weather and local illumination variations.
• Dynamic feature fusion with reliability-aware cross-condition optimization.
• Achieves 76.6%, 83.2%, and 54.7% mIoU on Cityscapes, CamVid, and WildDash2, respectively.
• Near fully-supervised performance with image-level labels only.
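The dual-stream adaptation mentioned in the summary and highlights separates image-level (weather-like) context from spatially varying (illumination-like) variation. The sketch below is a rough illustration only, not the authors' actual architecture: it modulates a feature map with a global scale/shift stream and a local spatial map, and all names (DualStreamAdaptation, global_stream, local_stream) are assumptions.

```python
# Minimal sketch of a dual-stream environmental adaptation block (hypothetical
# layout). A global stream derives image-level scale/shift parameters
# (weather-like context), while a local stream predicts a spatial modulation
# map (illumination-like variation); both modulate the incoming features.
import torch
import torch.nn as nn


class DualStreamAdaptation(nn.Module):
    def __init__(self, channels: int):
        super().__init__()
        # Global stream: pooled context -> per-channel scale and shift.
        self.global_stream = nn.Sequential(
            nn.AdaptiveAvgPool2d(1),
            nn.Conv2d(channels, channels // 4, kernel_size=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels // 4, 2 * channels, kernel_size=1),
        )
        # Local stream: spatially varying multiplicative map.
        self.local_stream = nn.Sequential(
            nn.Conv2d(channels, channels // 4, kernel_size=3, padding=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels // 4, channels, kernel_size=3, padding=1),
            nn.Sigmoid(),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        scale, shift = self.global_stream(x).chunk(2, dim=1)  # (B, C, 1, 1) each
        x_global = x * (1.0 + scale) + shift                  # channel-wise modulation
        local_map = self.local_stream(x)                      # (B, C, H, W)
        return x_global * local_map                           # spatially varying modulation


if __name__ == "__main__":
    block = DualStreamAdaptation(channels=256)
    print(block(torch.randn(1, 256, 64, 128)).shape)  # torch.Size([1, 256, 64, 128])
```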
ISSN: 0306-4573
DOI: 10.1016/j.ipm.2025.104349