Lane Departure Identification for Advanced Driver Assistance
Published in | IEEE Transactions on Intelligent Transportation Systems, Vol. 16, no. 2, pp. 910-918 |
Main Authors | |
Format | Journal Article |
Language | English |
Published | New York: The Institute of Electrical and Electronics Engineers, Inc. (IEEE), 01.04.2015 |
Summary: | In this paper, a technique for identifying the unwanted lane departure of a traveling vehicle on a road is proposed. A piecewise linear stretching function (PLSF) is used to improve the contrast level of the region of interest (ROI). Lane markings on the road are detected by dividing the ROI into two subregions and applying the Hough transform in each subregion independently. This segmentation approach reduces the computational time required for lane detection. For lane departure identification, a distance-based departure measure is computed at each frame, and a warning message is issued to the driver when this measure exceeds a threshold. The novelty of the proposed algorithm is the identification of lane departure using only three lane-related parameters, with the Euclidean distance transform used to estimate the departure measure. The use of the Euclidean distance transform in combination with the PLSF keeps the false-alarm rate around 3% and the lane detection rate above 97% under various lighting conditions. Experimental results indicate that the proposed system can detect lane boundaries in the presence of several image artifacts, such as lighting changes, poor lane markings, and occlusions by a vehicle, and that it issues an accurate lane departure warning within a short time interval. The efficiency of the proposed technique is demonstrated on several real video sequences. |
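The abstract outlines a three-stage pipeline: PLSF contrast stretching of the ROI, independent Hough-transform line detection in two subregions, and a distance-based departure measure checked against a threshold. The sketch below illustrates that pipeline in Python with OpenCV under stated assumptions; it is not the authors' implementation. The PLSF breakpoints, Hough parameters, ROI choice, and threshold are hypothetical stand-ins, and a simple lane-centre offset is used in place of the paper's Euclidean-distance-transform-based measure.

```python
# Illustrative sketch of the pipeline described in the abstract.
# All numeric parameters below are assumed values, not the paper's.
import cv2
import numpy as np

def plsf(gray, r1=80, s1=40, r2=170, s2=220):
    """Piecewise linear stretching function: maps [0, r1] -> [0, s1],
    (r1, r2] -> (s1, s2], (r2, 255] -> (s2, 255]. Breakpoints are
    hypothetical; the paper does not publish them in the abstract."""
    lut = np.empty(256, dtype=np.uint8)
    for r in range(256):
        if r <= r1:
            s = s1 * r / r1
        elif r <= r2:
            s = s1 + (s2 - s1) * (r - r1) / (r2 - r1)
        else:
            s = s2 + (255 - s2) * (r - r2) / (255 - r2)
        lut[r] = int(round(s))
    return cv2.LUT(gray, lut)

def detect_lane_lines(frame):
    """Stretch the ROI with the PLSF, split it into left/right halves,
    and run the probabilistic Hough transform in each half independently."""
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    h, w = gray.shape
    roi = plsf(gray[h // 2:, :])          # lower half as ROI (assumed)
    edges = cv2.Canny(roi, 50, 150)
    halves = (edges[:, :w // 2], edges[:, w // 2:])   # two subregions
    return [cv2.HoughLinesP(sub, 1, np.pi / 180, 40,
                            minLineLength=30, maxLineGap=10)
            for sub in halves]

def departure_measure(left_x, right_x, vehicle_x):
    """Distance-based stand-in for the paper's EDT-derived measure:
    offset of the vehicle centre from the lane centre, normalised by
    half the detected lane width."""
    lane_centre = 0.5 * (left_x + right_x)
    half_width = 0.5 * (right_x - left_x)
    return abs(vehicle_x - lane_centre) / half_width

THRESHOLD = 0.8  # hypothetical; the paper selects its own threshold
if departure_measure(left_x=200, right_x=440, vehicle_x=430) > THRESHOLD:
    print("Lane departure warning")
```

Splitting the ROI into two subregions mirrors the abstract's point that running the Hough transform on two smaller images reduces computational time compared with a single full-width search.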
ISSN: | 1524-9050, 1558-0016 |
DOI: | 10.1109/TITS.2014.2347400 |