Original Article

International Journal of Fuzzy Logic and Intelligent Systems 2024; 24(2): 105-113

Published online June 25, 2024

https://doi.org/10.5391/IJFIS.2024.24.2.105

© The Korean Institute of Intelligent Systems

PolyLaneDet: Lane Detection with Free-Form Polyline

Jeongmin Kim1 and Hyukdoo Choi2

1AI Technology Development Team 2, Autonomous A2Z, Anyang, Korea
2Department of Electronic Materials, Devices, and Equipment Engineering, Soonchunhyang University, Asan, Korea

Correspondence to: Hyukdoo Choi (hyukdoo.choi@sch.ac.kr)

Received: August 1, 2023; Revised: May 4, 2024; Accepted: May 27, 2024

This is an Open Access article distributed under the terms of the Creative Commons Attribution Non-Commercial License (http://creativecommons.org/licenses/by-nc/3.0/) which permits unrestricted noncommercial use, distribution, and reproduction in any medium, provided the original work is properly cited.

Abstract

Lane detection is a critical component of autonomous driving technology, but it faces challenges such as varied road conditions and diverse lane orientations. In this study, we address these challenges by proposing PolyLaneDet, a novel lane detection model that utilizes a free-form polyline, termed a ‘polylane,’ which adapts to both vertical and horizontal lane orientations without the need for post-processing. Our method builds on the YOLOv4 architecture to avoid restricting the number of detectable lanes. The model regresses both vertical and horizontal coordinates, thereby improving the adaptability and accuracy of lane detection in various scenarios. We conducted extensive experiments on the CULane benchmark and a custom dataset to validate the effectiveness of the proposed approach. The results demonstrate that PolyLaneDet achieves competitive performance, particularly in detecting horizontal lane markings and stop lines, which are often omitted by traditional models. In conclusion, PolyLaneDet advances lane detection technology by combining a flexible lane representation with robust detection capabilities, making it suitable for real-world applications with diverse road geometries.

Keywords: Lane detection, CNN, Deep learning
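
To make the ‘polylane’ representation concrete, below is a minimal sketch (not the authors' code) of how a free-form polyline with a fixed number of vertices could be decoded from normalized network outputs. The vertex count of five follows Table 2; the normalization and output layout are assumptions.

```python
# Minimal sketch (not the authors' code): a free-form "polylane" stores both x and y
# for every vertex, unlike row-anchor methods that fix y to preset rows and regress x only.
import numpy as np

N_VERTICES = 5  # best-performing setting in Table 2; vertex ordering is an assumption

def decode_polylane(raw, img_w, img_h):
    """Map a raw output vector of length 2 * N_VERTICES (values assumed in [0, 1])
    to pixel coordinates; the paper's exact normalization may differ."""
    pts = np.asarray(raw, dtype=np.float32).reshape(N_VERTICES, 2)
    pts[:, 0] *= img_w  # x is regressed freely
    pts[:, 1] *= img_h  # y is also regressed, so horizontal lanes and stop lines fit the format
    return pts          # (N_VERTICES, 2) polyline vertices

# Example: a nearly horizontal stop line, which fixed-row representations cannot express
raw = [0.10, 0.80, 0.30, 0.81, 0.50, 0.82, 0.70, 0.83, 0.90, 0.84]
print(decode_polylane(raw, img_w=1640, img_h=590))
```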

This study was supported by the Soonchunhyang University Research Fund.

The authors declare no conflicts of interest regarding this study.

Jeongmin Kim received his B.S. and M.S. degrees in electronics and information engineering from Soonchunhyang University in 2013 and 2023, respectively. He has been working at Autonomous A2Z since 2023. His research interests include computer vision, object detection, deep learning, and unsupervised learning.

Hyukdoo Choi received the B.S. and Ph.D. degrees in electrical and electronics engineering from Yonsei University, Seoul, Republic of Korea, in 2009 and 2014, respectively. From 2014 to 2017, he worked for LG Electronics as a senior research engineer. Since 2018, he has been an assistant professor with the Department of Electronics and Information Engineering at Soonchunhyang University. His research interests include simultaneous localization and mapping (SLAM), visual odometry, computer vision, object detection, deep learning, and unsupervised learning.

Figure 1. PolyLaneDet architecture. CSPDarkNet53 is used as the backbone network, and PAN is employed as the neck network. A couple of convolutions are applied to the middle-level PAN feature in the head.
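
As an illustration of the head described in Figure 1, the following is a hedged sketch of a small convolutional head that maps a mid-level PAN feature to per-cell polylane predictions; the channel sizes, activation, and output layout (objectness plus 2N coordinates) are assumptions rather than the paper's exact configuration.

```python
# Rough sketch of the head in Figure 1 (assumed shapes): a couple of convolutions map the
# mid-level PAN feature to a YOLO-style per-cell output of objectness + 2 * N_VERTICES coords.
import torch
import torch.nn as nn

class PolyLaneHead(nn.Module):
    def __init__(self, in_ch=256, n_vertices=5):
        super().__init__()
        out_ch = 1 + 2 * n_vertices  # objectness + (x, y) per vertex
        self.head = nn.Sequential(
            nn.Conv2d(in_ch, in_ch, kernel_size=3, padding=1),
            nn.BatchNorm2d(in_ch),
            nn.SiLU(inplace=True),
            nn.Conv2d(in_ch, out_ch, kernel_size=1),
        )

    def forward(self, pan_mid_feature):
        # (B, C, H, W) mid-level PAN feature -> (B, 1 + 2*N, H, W) per-cell predictions
        return self.head(pan_mid_feature)

head = PolyLaneHead()
pred = head(torch.randn(1, 256, 36, 100))  # e.g., a mid-level feature map
print(pred.shape)  # torch.Size([1, 11, 36, 100])
```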


Figure 2. Polylane NMS. (a) A polylane is represented by a polyline and is simplified to a line segment before the NMS process. (b) The orthogonal distances from the endpoints of one line segment to the other line are used to compute the distance between two polylanes.
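
The polylane NMS in Figure 2 can be sketched as follows, under stated assumptions: each polyline is reduced to the segment between its first and last vertices, and the distance between two polylanes is the mean orthogonal distance from one segment's endpoints to the line through the other. The actual simplification and threshold used in the paper may differ.

```python
# Hedged sketch of the polylane NMS in Figure 2. Assumptions: each polyline is simplified
# to the segment joining its first and last vertices, the distance between two polylanes is
# the mean orthogonal distance from one segment's endpoints to the line through the other,
# and the pixel threshold is illustrative only.
import numpy as np

def to_segment(polyline):
    pts = np.asarray(polyline, dtype=np.float32)
    return pts[0], pts[-1]  # endpoints of the simplified line segment

def point_to_line_distance(p, a, b):
    # orthogonal distance from point p to the infinite line through a and b
    d = b - a
    cross = d[0] * (p[1] - a[1]) - d[1] * (p[0] - a[0])
    return abs(cross) / (np.linalg.norm(d) + 1e-9)

def polylane_distance(poly1, poly2):
    a1, b1 = to_segment(poly1)
    a2, b2 = to_segment(poly2)
    return 0.5 * (point_to_line_distance(a1, a2, b2) + point_to_line_distance(b1, a2, b2))

def polylane_nms(polylanes, scores, dist_thresh=20.0):
    # keep the highest-scoring polylane and drop any other that lies closer than dist_thresh
    order = np.argsort(scores)[::-1]
    keep = []
    for i in order:
        if all(polylane_distance(polylanes[i], polylanes[j]) > dist_thresh for j in keep):
            keep.append(i)
    return keep
```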


Figure 3. Qualitative results from CULane. There are eight grid cells, and each cell shows a stack of four images: from the top, the source image, GT label, CLRNet result, and our PolyLaneDet result.


Figure 4. Qualitative results from the custom dataset. There are four samples, and each sample consists of three rows: from the top, the source image, GT label, and our PolyLaneDet result.


Table 1. Quantitative results from the CULane benchmark

Method         Backbone        F1@50
SCNN [10]      VGG16           90.60
RESA [11]      ResNet50        92.10
UFLD [15]      ResNet34        90.70
CondLane [5]   ResNet101       93.47
CLRNet [6]     DLA34           93.73
PolyLaneDet    CSPDarkNet53    84.29

The metric is F1@50 of the Normal category in the CULane benchmark.


Table 2. F1@50 metrics for the Normal category with different numbers of polylane vertices

Parameter   F1@50
n = 5       84.29
n = 10      61.54
n = 20      53.43

Table 3. Quantitative results (%) from the custom road dataset

Class       Recall    Precision    F1@50
Lane        68.78     68.94        68.86
Stop line   62.90     45.24        52.63
