Fine-tuning vision foundation model for crack segmentation in civil infrastructures
Main Authors | , , , , , |
---|---|
Format | Journal Article |
Language | English |
Published | 07.12.2023 |
Subjects | |
Summary: | Large-scale foundation models have become the mainstream approach in deep learning, whereas in civil engineering the scale of AI models remains strictly limited. In this work, a vision foundation model is introduced for crack segmentation. Two parameter-efficient fine-tuning methods, adapter and low-rank adaptation (LoRA), are adopted to adapt the foundation model, the Segment Anything Model (SAM), to semantic segmentation. The fine-tuned CrackSAM performs strongly across different scenes and materials. To test the zero-shot performance of the proposed method, two datasets of road and exterior-wall cracks, totaling 810 images, are collected, annotated, and open-sourced. Comparative experiments are conducted against twelve mature semantic segmentation models. On datasets with artificial noise and on previously unseen datasets, CrackSAM far exceeds all state-of-the-art models, and it is particularly superior under challenging conditions such as dim lighting, shadows, road markings, construction joints, and other interference factors. These cross-scenario results demonstrate the strong zero-shot capability of foundation models and suggest new directions for developing vision models in civil engineering. |
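The low-rank adaptation the abstract refers to can be sketched in a few lines. The following is a minimal NumPy illustration of the general LoRA idea (a frozen pretrained weight plus a trainable low-rank update), not the authors' CrackSAM code; all variable names and dimensions here are hypothetical.

```python
import numpy as np

def lora_forward(x, W, A, B, alpha=1.0):
    """Adapted layer: frozen weight W plus low-rank update B @ A, scaled by alpha/r.

    x: (d_in,) input vector
    W: (d_out, d_in) frozen pretrained weight
    A: (r, d_in) trainable down-projection, B: (d_out, r) trainable up-projection,
    with rank r << min(d_in, d_out).
    """
    r = A.shape[0]
    return W @ x + (alpha / r) * (B @ (A @ x))

rng = np.random.default_rng(0)
d_in, d_out, r = 8, 4, 2
W = rng.normal(size=(d_out, d_in))   # frozen pretrained weight (not updated)
A = rng.normal(size=(r, d_in))       # trainable low-rank factor
B = np.zeros((d_out, r))             # trainable factor, initialized to zero
x = rng.normal(size=d_in)

# With B initialized to zero, the adapted layer reproduces the frozen layer exactly,
# so fine-tuning starts from the pretrained model's behavior.
assert np.allclose(lora_forward(x, W, A, B), W @ x)

# Trainable parameters: r*(d_in + d_out) for LoRA vs d_in*d_out for full fine-tuning.
print(r * (d_in + d_out), d_in * d_out)  # prints: 24 32
```

In a real vision transformer the same pattern is applied per attention projection, where the savings are large because d_in and d_out are in the hundreds or thousands while r stays small.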
DOI: | 10.48550/arxiv.2312.04233 |