International Journal of Fuzzy Logic and Intelligent Systems 2024; 24(2): 105-113
Published online June 25, 2024
https://doi.org/10.5391/IJFIS.2024.24.2.105
© The Korean Institute of Intelligent Systems
Jeongmin Kim1 and Hyukdoo Choi2
1AI Technology Development Team 2, Autonomous A2Z, Anyang, Korea
2Department of Electronic Materials, Devices, and Equipment Engineering, Soonchunhyang University, Asan, Korea
Correspondence to: Hyukdoo Choi (hyukdoo.choi@sch.ac.kr)
This is an Open Access article distributed under the terms of the Creative Commons Attribution Non-Commercial License (http://creativecommons.org/licenses/by-nc/3.0/) which permits unrestricted noncommercial use, distribution, and reproduction in any medium, provided the original work is properly cited.
Lane detection is a critical component of autonomous driving technologies that face challenges such as varied road conditions and diverse lane orientations. In this study, we aim to address these challenges by proposing PolyLaneDet, a novel lane detection model that utilizes a freeform polyline, termed ‘polylane,’ which adapts to both vertical and horizontal lane orientations without the need for post-processing. Our method builds on the YOLOv4 architecture to avoid restricting the number of detectable lanes. This model can regress both vertical and horizontal coordinates, thereby improving the adaptability and accuracy of lane detection in various scenarios. We conducted extensive experiments using the CULane benchmark and a custom dataset to validate the effectiveness of the proposed approach. The results demonstrate that PolyLaneDet achieves a competitive performance, particularly in detecting horizontal lane markings and stop lines, which are often omitted in traditional models. In conclusion, PolyLaneDet advances lane detection technology by combining flexible lane representation with robust detection capabilities, making it suitable for real-world applications with diverse road geometries.
Keywords: Lane detection, CNN, Deep learning
The authors declare no conflicts of interest regarding this study.
PolyLaneDet architecture. CSPDarkNet53 is used as the backbone network, and PAN is employed as the neck network. A couple of convolutions are applied to the middle-level PAN feature in the head.
PolyLane NMS. (a) A polylane is represented by a polyline and is simplified to a line segment before the NMS process. (b) The orthogonal distances from a segment's endpoints to the other line are used to compute the distance between polylanes.
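The caption above describes the two steps of PolyLane NMS: each polylane is simplified to the segment joining its endpoints, and the distance between two polylanes is taken from the orthogonal distances between one segment's endpoints and the other segment's line. A minimal sketch of this idea follows; the function names, the symmetric mean aggregation of the four endpoint distances, and the threshold value are assumptions for illustration, not the paper's exact formulation.

```python
import numpy as np

def orthogonal_distance(point, seg_start, seg_end):
    """Perpendicular distance from a point to the infinite line through a segment."""
    d = seg_end - seg_start
    n = np.array([-d[1], d[0]]) / np.linalg.norm(d)  # unit normal to the segment
    return abs(np.dot(point - seg_start, n))

def polylane_distance(poly_a, poly_b):
    """Distance between two polylanes, each simplified to the segment joining
    its first and last vertex. Here we average the orthogonal distances from
    each segment's endpoints to the other line (aggregation is an assumption)."""
    a0, a1 = poly_a[0], poly_a[-1]
    b0, b1 = poly_b[0], poly_b[-1]
    d_ab = [orthogonal_distance(p, b0, b1) for p in (a0, a1)]
    d_ba = [orthogonal_distance(p, a0, a1) for p in (b0, b1)]
    return np.mean(d_ab + d_ba)

def polylane_nms(polylanes, scores, dist_thresh=20.0):
    """Greedy NMS: keep the highest-scoring polylanes and suppress any
    candidate closer than dist_thresh (pixels) to an already-kept one."""
    order = np.argsort(scores)[::-1]
    keep = []
    for i in order:
        if all(polylane_distance(polylanes[i], polylanes[j]) > dist_thresh
               for j in keep):
            keep.append(i)
    return keep
```

For example, two near-parallel polylanes 5 pixels apart collapse to the higher-scoring one, while a lane 100 pixels away survives.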
Qualitative results from CULane. There are 8 grid cells, and each cell shows a stack of 4 images: from the top, a source image, the GT label, the CLRNet result, and our PolyLaneDet result.
Qualitative results from the custom dataset. There are four samples, and each sample consists of three rows: from the top, a source image, the GT label, and our PolyLaneDet result.
Table 2. F1@50 metrics of the Normal category with different numbers of polylane vertices.

| Parameter | F1@50 |
|---|---|
| | 84.29 |
| | 61.54 |
| | 53.43 |
Table 3. Quantitative results (%) on the custom road dataset.

| Class | Recall | Precision | F1@50 |
|---|---|---|---|
| Lane | 68.78 | 68.94 | 68.86 |
| Stop line | 62.90 | 45.24 | 52.63 |
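The F1@50 column in Table 3 is the harmonic mean of precision and recall at an IoU threshold of 50%, which can be checked directly from the other two columns. A small sketch (the helper name is ours):

```python
def f1_score(precision, recall):
    """Harmonic mean of precision and recall (both given in percent)."""
    return 2 * precision * recall / (precision + recall)

# Reproduce the F1@50 values in Table 3 from precision and recall.
lane_f1 = f1_score(68.94, 68.78)       # ≈ 68.86 (Lane row)
stop_line_f1 = f1_score(45.24, 62.90)  # ≈ 52.63 (Stop line row)
```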