Original Article
Int. J. Fuzzy Log. Intell. Syst. 2017; 17(3): 162-169

Published online September 30, 2017

https://doi.org/10.5391/IJFIS.2017.17.3.162

© The Korean Institute of Intelligent Systems

Hough Transform-based Road Boundary Localization

Beomseong Kim1,2, Seongkeun Park3, and Euntai Kim2

1Intelligence Lab., LG Electronics, Seoul, Korea, 2School of Electrical and Electronic Engineering, Yonsei University, Seoul, Korea, 3Department of Smart Automobile, Soonchunhyang University, Asan, Korea

Correspondence to: Euntai Kim (etkim@yonsei.ac.kr)

Received: August 18, 2017; Revised: September 13, 2017; Accepted: September 24, 2017

This is an Open Access article distributed under the terms of the Creative Commons Attribution Non-Commercial License (http://creativecommons.org/licenses/by-nc/3.0/) which permits unrestricted noncommercial use, distribution, and reproduction in any medium, provided the original work is properly cited.

Abstract

Advanced driver-assistance systems (ADASs) are designed to help drivers while they are driving. To do so, an ADAS first comprehends the situation by analyzing data obtained from the road surroundings. In this process, the road boundary is one of the most important targets to detect for safe driving, but it is frequently misdetected on crowded roads. Therefore, a new method for robustly detecting road boundaries on crowded roads is presented in this paper. First, road-boundary detection using the standard Hough transform is described, and its limitations are shown. Second, the cause of these limitations is explained using the measurement model of a laser scanner. Then, the standard Hough transform is modified to reflect this measurement model, which reduces the effect of close obstacles. Finally, the proposed method is tested in a real-world environment, where it outperforms previous works in crowded environments.

Keywords: Road boundary detection, Laser scanner, Hough transform

1. Introduction

Road-boundary detection systems divide the surrounding area into the road area and other areas. Such systems are applicable to detecting drivable areas, building maps, localization, and so on. Vision, radar, and laser scanners are the best-known sensors in the field of intelligent vehicles, and these sensors are also used for detecting road boundaries.

Road-boundary detection by camera is frequently used by researchers because of its low cost. Vision-based methods use lanes on the road [1–3] and the color difference between the road and other areas [4] for road-boundary detection. Yu and Jain [2] proposed a road-boundary detection method using a multiresolution Hough transform. Wang et al. [3] designed a road model using B-splines for detecting road boundaries. Thorpe et al. [4] applied a Monte Carlo simulation for detecting road boundaries.

However, detection using a camera is difficult in situations with worn roads, unmarked boundaries, and low-resolution cameras. Therefore, Wen et al. [1] proposed a novel method for overcoming low-resolution problems, and Tsai and Sun [5] proposed a fuzzy-rule-based method for shadowy road conditions.

Radar is also widely used in the field of intelligent vehicles. Lundquist et al. [6] proposed a method to detect road boundaries using occupancy grid mapping and outlier rejection. Lundquist et al. [7] used a Gaussian mixture probability hypothesis density filter to detect road boundaries from sequential radar measurements. However, a radar sensor has low resolution and a small number of measurements per scan, so measurements must be accumulated over time. For this reason, road-boundary detection using radar is more suitable for map-building applications than for ADAS or autonomous driving systems.

In contrast, a laser scanner has high resolution and a large number of measurements per scan. In addition, laser-scanner measurements have less measurement error than those of a camera or radar. Therefore, laser scanners are widely used for detecting road boundaries.

Methods for detecting road boundaries can be divided into two types according to the measuring angle of the laser scanner. First, using a laser scanner that is aimed downward at the road in front of the car is the easiest way to detect road boundaries. Such a scanner can track road lanes using the reflected signal power, and distinguishes road boundaries using the height difference between the road and the curb. However, this configuration requires an additional laser scanner dedicated to road-boundary detection [8].

The second method uses a laser scanner installed parallel to the road, i.e., horizontally. This scanner detects guardrails, fences, and walls as parts of road boundaries. These kinds of obstacles exist on well-constructed roads, such as highways, which are currently the main locations where ADAS and autonomous driving operate. Such a laser scanner is already widely used for detecting vehicles, pedestrians, and other obstacles, so its data can be shared with the vehicle- and pedestrian-detection systems, which is an advantage of this method.

Kirchner and Heinrich [9] used Kalman-filter tracking for road-boundary detection. Sparbert et al. [10] set up a region of interest and detected road boundaries within it. Garcia et al. [11] extracted road boundaries using segmentation and histogram compensation. However, these papers did not consider situations in which other vehicles are adjacent to the host vehicle.

In [12–17], algorithms are designed to overcome this limitation for cases in which objects occlude one another. In [12–14], specific models are proposed for estimating objects in the occluded area. Koch [15] and Ahmed et al. [16] used data fusion to solve the problem of tracking an occluded object with a single laser scanner. An et al. [17] eliminated measurements from moving objects on the grid map to estimate an exact road model. However, such research is only effective for tracking, not detection.

Therefore, we propose a novel method for road-boundary detection using a horizontally installed laser scanner and a modified Hough transform. The proposed method deals with the occlusion problem and thus provides stable performance in crowded situations. A Hough transform was already used for road-boundary detection in [2], but with a camera; that method is unsuitable for laser scanners and is quite different from the method proposed in this study.

2. Hough Transform for Road-Boundary Detection

2.1 Standard Hough Transform

Detecting specific shapes, such as lines and circles, in a given data set has been researched extensively. Detecting a line from a data set consisting of point measurements is called point-to-line mapping (PTLM). The Hough transform (HT) is the most popular method for this problem and is widely used in the field of vision. HT extracts certain shapes by a voting procedure.

To execute the voting process, a mathematical model of the shape and parameters representing that model are needed. Duda and Hart [18] proposed the Hesse normal form to resolve the unbounded-parameter problem of the slope–intercept line equation. Eq. (1) is the Hesse normal form; its parameter space is commonly called Hough space.

ρ = x cos α + y sin α.    (1)

Here, ρ is the distance between the origin and the closest point on the line, and α is the angle between the x-axis and the line from the origin to that closest point. When ρmax is the maximum range of the laser scanner, the parameter ranges can be ρ ∈ [−ρmax, ρmax], α ∈ [0, π), or ρ ∈ [0, ρmax], α ∈ [0, 2π). In this paper, the former range is used. A single point in the xy plane is transformed into a sinusoidal curve in Hough space.
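This mapping can be sketched by evaluating Eq. (1) directly; the helper name and angular sampling below are illustrative assumptions, not from the paper.

```python
import numpy as np

# Eq. (1), Hesse normal form: rho = x*cos(alpha) + y*sin(alpha).
# A single (x, y) point traces a sinusoidal curve over alpha in [0, pi).
def point_to_sinusoid(x, y, n_alpha=180):
    alphas = np.linspace(0.0, np.pi, n_alpha, endpoint=False)
    rhos = x * np.cos(alphas) + y * np.sin(alphas)
    return alphas, rhos

alphas, rhos = point_to_sinusoid(3.0, 4.0)
# At alpha = 0 the curve passes through rho = x; its amplitude is sqrt(x^2 + y^2).
```

Every point of the xy plane contributes one such curve, and curves of collinear points intersect at the (ρ, α) of their common line.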

2.2 Limitation of Standard Hough Transform

To implement HT, Hough space must be quantized; the quantized Hough space forms a two-dimensional array called an accumulator. Each bin of the accumulator has a predefined size Δρ × Δα, represents one pair (ρ, α), and is incremented whenever a measurement transforms into that pair. Thus, the accumulator of a measurement set can be calculated by accumulating the votes of each measurement point, as in Eq. (3). A higher bin value means a higher probability that the line represented by the bin's parameters exists.

HoughSpace(Z_t) ≜ Accumulator(Z_t) = Σ_{i=1}^{N} Accumulator(p_i) = Σ_{i=1}^{N} Accumulator(r_i, θ_i).    (3)

Using this voting process, the line containing the largest number of points can be detected; however, this line is not always the road boundary.
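The voting procedure of Eq. (3) can be sketched as follows; the function name, bin sizes, and rounding scheme are illustrative assumptions.

```python
import numpy as np

def standard_ht_accumulator(points, rho_max, d_rho=0.5, d_alpha=np.pi / 180):
    """Vote each (x, y) measurement into a quantized (rho, alpha) accumulator,
    as in Eq. (3); rho in [-rho_max, rho_max], alpha in [0, pi)."""
    n_alpha = int(round(np.pi / d_alpha))
    n_rho = int(round(2 * rho_max / d_rho))
    acc = np.zeros((n_rho, n_alpha), dtype=np.int32)
    alphas = np.arange(n_alpha) * d_alpha
    cols = np.arange(n_alpha)
    for x, y in points:
        rhos = x * np.cos(alphas) + y * np.sin(alphas)   # sinusoid of this point
        rho_idx = np.round((rhos + rho_max) / d_rho).astype(int)
        valid = (rho_idx >= 0) & (rho_idx < n_rho)
        acc[rho_idx[valid], cols[valid]] += 1            # one vote per alpha column
    return acc

# 21 collinear points on the line y = 2 cast 21 votes into the bin for
# alpha = pi/2 (column 90), rho = 2 (row 24).
pts = [(x, 2.0) for x in np.linspace(-5.0, 5.0, 21)]
acc = standard_ht_accumulator(pts, rho_max=10.0)
```

Peak bins of `acc` then correspond to candidate lines, which is exactly where the limitation above appears: the tallest peak need not be the road boundary.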

3. Proposed Method

3.1 Measurement Model of Laser Scanner

A laser scanner uses a laser beam to measure the distance between the sensor and its closest obstacle. The scanner emits a beam, the beam reaches the obstacle, and the reflection returns to the sensor. The scanner measures the time of flight (return time), which can be converted to distance because the speed of the laser beam is constant.
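For illustration only (the constant and function name are ours, not the paper's), the time-of-flight relation is:

```python
C = 299_792_458.0  # speed of light in m/s

def tof_to_distance(t_return_s):
    """Convert a round-trip time of flight into a one-way distance:
    the beam travels to the obstacle and back, hence the factor 1/2."""
    return C * t_return_s / 2.0
```

For example, an obstacle at 200 m produces a return time of roughly 1.33 microseconds.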

Standard HT increases the likelihood of line existence at each measurement point through the voting process, and does nothing in empty areas. From the perspective of standard HT, an area containing a measurement is occupied, and all other areas are unknown.

From the perspective of a laser scanner, however, the area where the measurement was taken is occupied, the area along the beam before the measurement is empty, and the area beyond the measurement is unknown. This difference can cause errors in road-boundary detection.

Figure 1 shows the occupancy grid map of the laser scanner measurements. The black points denote the measurements and the occupied area, the gray area denotes unknown space, and the white area denotes empty space. If these measurements are obtained on a road, a human can estimate the road boundary as the blue dashed line in Figure 1 by using the information about the empty areas. The red dashed line selected by standard HT cannot be the road boundary, because it passes through the empty area.
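The three-way labeling in Figure 1 can be sketched for a single beam as follows; the grid layout, cell size, and names are illustrative assumptions, and the beam is assumed to point into the first quadrant.

```python
import numpy as np

UNKNOWN, EMPTY, OCCUPIED = 0, 1, 2

def update_grid_along_beam(grid, angle, r_hit, cell=1.0):
    """Label a beam fired from the grid origin: cells before the hit become
    EMPTY, the hit cell becomes OCCUPIED, cells beyond the hit stay UNKNOWN."""
    for r in np.arange(0.0, r_hit, cell):
        ix = int(r * np.cos(angle) / cell)
        iy = int(r * np.sin(angle) / cell)
        grid[iy, ix] = EMPTY
    grid[int(r_hit * np.sin(angle) / cell), int(r_hit * np.cos(angle) / cell)] = OCCUPIED
    return grid

grid = np.zeros((10, 10), dtype=int)
update_grid_along_beam(grid, angle=0.0, r_hit=5.0)
# Row 0: cells 0-4 EMPTY, cell 5 OCCUPIED, cells 6-9 still UNKNOWN.
```

Standard HT uses only the OCCUPIED cells; the proposed method also exploits the EMPTY ones.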

In the road environment, the road boundary is the farthest obstacle. Therefore, a human can estimate the road boundary despite the presence of closer obstacles. Likewise, the proposed method provides stable performance in crowded environments by using the information about empty areas.

3.2 Modified HT Considering the Laser Scanner Measurement Model

For this reason, a negative voting process is added to the standard HT. In the proposed method, positive voting is executed in the occupied area, as in standard HT. In the empty area, however, the accumulator is also calculated as in standard HT and is then subtracted during the overlapping process, as in Eq. (4).

Accumulator*(Z_t) = Σ_{i=1}^{N} ( Accumulator(r_i, θ_i) − Σ_{j=1}^{⌊r_i⌋−1} Accumulator(r_i − j, θ_i) ).    (4)

In Eq. (4), Accumulator*(Z_t) denotes the overlapped Hough space calculated by the proposed method. Accumulator(r_i, θ_i) is the same as in standard HT, and Σ_{j=1}^{⌊r_i⌋−1} Accumulator(r_i − j, θ_i) denotes the accumulator votes from the empty area. In this paper, the sampling interval of the empty area is 1, and j is an integer index running from 1 to ⌊r_i⌋ − 1. Therefore, r_i − j denotes distances before the measurement point at a uniform interval. When the accumulators are overlapped, the accumulator from the empty area is subtracted.
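A minimal sketch of Eq. (4)'s negative voting follows; the scan format, function names, and bin sizes are our assumptions. Each hit (r_i, θ_i) votes +1 along its sinusoid, and each sampled empty range r_i − j votes −1.

```python
import numpy as np

def _vote(acc, x, y, rho_max, d_rho, alphas, weight):
    """Standard HT vote: add `weight` along the sinusoid of one (x, y) point."""
    rhos = x * np.cos(alphas) + y * np.sin(alphas)
    idx = np.round((rhos + rho_max) / d_rho).astype(int)
    valid = (idx >= 0) & (idx < acc.shape[0])
    acc[idx[valid], np.arange(len(alphas))[valid]] += weight

def modified_ht_accumulator(scan, rho_max, d_rho=0.5, d_alpha=np.pi / 180):
    """Eq. (4): positive votes at each hit (r_i, theta_i), negative votes at
    the empty ranges r_i - j, j = 1, ..., floor(r_i) - 1 (sampling interval 1)."""
    alphas = np.arange(int(round(np.pi / d_alpha))) * d_alpha
    acc = np.zeros((int(round(2 * rho_max / d_rho)), len(alphas)), dtype=np.int32)
    for r, theta in scan:
        _vote(acc, r * np.cos(theta), r * np.sin(theta), rho_max, d_rho, alphas, +1)
        for j in range(1, int(np.floor(r))):          # empty space before the hit
            r_empty = r - j
            _vote(acc, r_empty * np.cos(theta), r_empty * np.sin(theta),
                  rho_max, d_rho, alphas, -1)
    return acc

# One hit at range 3, bearing pi/2 (the point (0, 3)): the bin of the line
# y = 3 gains +1, while the bins of y = 2 and y = 1 (empty space) get -1.
acc = modified_ht_accumulator([(3.0, np.pi / 2)], rho_max=10.0)
```

Lines that pass through empty space thus lose votes, so a line along the occluded road boundary can outscore a line fitted to nearer obstacles.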

Figure 2 depicts the result of the proposed method. In Figure 2(a), the blue box denotes the overlapped result of standard HT and the red box denotes the overlapped result of additional terms in the proposed method. Figure 2(b) denotes the final accumulator, and the red star is a peak point. Figure 2(c) is the same measurement as Figure 1, and the red line is the line according to the parameters of the red star in Figure 2(b).

3.3 Road Boundary Selection

As mentioned before, standard HT is usually applied to vision images. In many cases, the number of lines in an image is unknown, so a threshold is used to choose lines: all lines whose Hough-space value exceeds the threshold are selected. However, the goal of this paper is to detect road boundaries. Road boundaries exist independently on the left and right sides, and only one boundary exists in each region.

Figure 3 shows the left and right road-boundary regions in the accumulator. The right road boundary is located in r ∈ [0, r_max], θ ∈ [0, π/2) or r ∈ [−r_max, 0), θ ∈ [π/2, π). The left road boundary is located in r ∈ [0, r_max], θ ∈ [π/2, π) or r ∈ [−r_max, 0), θ ∈ [0, π/2).
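The region split of Figure 3 can be sketched as follows; the array layout and names are our assumptions, with rows indexing r ∈ [−r_max, r_max] and columns indexing θ ∈ [0, π).

```python
import numpy as np

def select_road_boundaries(acc, r_max, d_r, d_theta):
    """Pick one accumulator peak per side, following the regions of Figure 3:
    right boundary in {r >= 0, theta < pi/2} U {r < 0, theta >= pi/2},
    left boundary in the two complementary quadrants."""
    row0 = int(round(r_max / d_r))             # row of r = 0
    col90 = int(round((np.pi / 2) / d_theta))  # column of theta = pi/2
    right = np.full(acc.shape, -np.inf)
    right[row0:, :col90] = acc[row0:, :col90]
    right[:row0, col90:] = acc[:row0, col90:]
    left = np.full(acc.shape, -np.inf)
    left[row0:, col90:] = acc[row0:, col90:]
    left[:row0, :col90] = acc[:row0, :col90]

    def peak(masked):
        i, j = np.unravel_index(np.argmax(masked), masked.shape)
        return i * d_r - r_max, j * d_theta    # back to (r, theta)

    return peak(left), peak(right)

acc = np.zeros((40, 180))
acc[25, 30] = 7    # r = 2.5, theta = 30 deg  -> right-side region
acc[30, 120] = 5   # r = 5.0, theta = 120 deg -> left-side region
(left_r, left_t), (right_r, right_t) = select_road_boundaries(
    acc, r_max=10.0, d_r=0.5, d_theta=np.pi / 180)
```

Taking a single argmax per region enforces the one-boundary-per-side constraint without any threshold.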

4. Experiments

In this experiment, we set up an IBEO LUX2010 laser scanner on the front bumper of a Kia K900. The LUX2010 has four layers, and its horizontal and vertical resolutions are 0.125° and 0.8°, respectively. Its minimum and maximum measuring distances are 0.3 m and 200 m, respectively. A camera installed on the same vehicle is used to obtain reference images of the actual environment. This sensor configuration is the same as in our previous works [19, 20].

Figure 4 shows road-boundary detection results in a common road environment. Figures 4(a) and 4(b) are the results of standard HT and the proposed method, respectively. The black points are the laser-scanner measurements, and the blue and red lines are the left and right road boundaries, respectively. In this situation, the road boundaries have the largest number of measurements on both the left and right sides, so the results of standard HT and the proposed method are similar. Figure 4(c) is the camera image taken at the same time, used for checking the surroundings.

Figure 5 shows road-boundary detection results in a crowded road environment. Figures 5(a) and 5(b) are the results of standard HT and the proposed method, respectively. The black points are the laser-scanner measurements, and the blue and red lines are the left and right road boundaries, respectively. In this situation, the road boundary has the largest number of measurements only on the left side: vehicles in front cause occlusion, and the road boundary yields fewer measurement points than the vehicles do. Therefore, standard HT estimates the road boundary at an incorrect position, whereas the proposed method works well.

Tables 1 and 2 give the quantitative results of standard HT and the proposed method: Table 1 shows the error between the estimated and true values, and Table 2 the corresponding standard deviation. DL and DR are the distances from the origin to the x-intercepts of the left and right road boundaries, and αL and αR are the angles of the left and right road boundaries. The errors of the proposed method are considerably smaller than those of standard HT for all parameters.

During the experiment, the host vehicle was driven on the left side of the road; therefore, more vehicles were on the right side of the host vehicle than on the left, and occlusions occurred mainly at the right-side road boundary. For this reason, the right-side distance and angular errors, in particular, are greatly reduced. These effects are also confirmed by the standard deviations in Table 2.

5. Conclusion

In this paper, a novel method for detecting road boundaries was proposed. The method overcomes occlusion problems by modifying the standard HT for road-boundary detection. It was implemented on a vehicle and showed outstanding performance in a crowded environment.

Such perception methods are necessary for ADAS and autonomous driving vehicles. By applying the proposed method, ADAS and autonomous driving systems can operate more stably and safely.

Fig. 1.

Occupancy grid map of laser scanner measurement.


Fig. 2.

Result of proposed Hough transform (HT). (a) Standard HT and the proposed method. (b) Final accumulator. (c) The same measurement as Figure 1.


Fig. 3.

Road boundary areas of the left and right side in the accumulator.


Fig. 4.

(a) Result of standard HT in a common road environment. (b) Result of the proposed method in a common road environment. (c) Corresponding image.


Fig. 5.

(a) Result of the standard HT in a crowded road environment. (b) Result of the proposed method in a crowded road environment. (c) Corresponding image.


Table 1. Error of standard HT and the proposed method.

Algorithm       | DL    | αL    | DR     | αR
----------------|-------|-------|--------|------
Standard HT     | 0.286 | 0.024 | 12.353 | 0.564
Proposed method | 0.137 | 0.021 | 0.906  | 0.030

Table 2. Standard deviation of standard HT and the proposed method.

Algorithm       | DL    | αL    | DR     | αR
----------------|-------|-------|--------|------
Standard HT     | 0.338 | 0.025 | 14.600 | 0.603
Proposed method | 0.099 | 0.012 | 1.393  | 0.045

  1. Wen, Q, Yang, Z, Song, Y, and Jia, P (2008). Road boundary detection in complex urban environment based on low-resolution vision. Proceedings of the 11th Joint International Conference on Information Sciences, Shenzhen, China, pp. 1-7.
  2. Yu, B, and Jain, AK (1997). Lane boundary detection using a multiresolution Hough transform. Proceedings of the International Conference on Image Processing, Santa Barbara, CA, pp. 748-751.
  3. Wang, Y, Teoh, EK, and Shen, D (2004). Lane detection and tracking using B-Snake. Image and Vision Computing. 22, 269-280.
  4. Thorpe, C, Hebert, MH, Kanade, T, and Shafer, SA (1988). Vision and navigation for the Carnegie-Mellon Navlab. IEEE Transactions on Pattern Analysis and Machine Intelligence. 10, 362-373.
  5. Tsai, SJ, and Sun, TY (2005). The robust and fast approach for vision-based shadowy road boundary detection. Proceedings of IEEE Intelligent Transportation Systems, Vienna, Austria, pp. 486-491.
  6. Lundquist, C, Schon, TB, and Orguner, U (2009). Estimation of the free space in front of a moving vehicle. Linköping, Sweden: Linköping University.
  7. Lundquist, C, Hammarstrand, L, and Gustafsson, F (2011). Road intensity based mapping using radar measurements with a probability hypothesis density filter. IEEE Transactions on Signal Processing. 59, 1397-1408.
  8. Zhang, W (2010). LIDAR-based road and road-edge detection. Proceedings of the IEEE Intelligent Vehicles Symposium, San Diego, CA, pp. 845-848.
  9. Kirchner, A, and Heinrich, T (1998). Model based detection of road boundaries with a laser scanner. Proceedings of the IEEE International Conference on Intelligent Vehicles, Stuttgart, Germany, pp. 93-98.
  10. Sparbert, J, Dietmayer, K, and Streller, D (2001). Lane detection and street type classification using laser range images. Proceedings of IEEE Intelligent Transportation Systems, Oakland, CA, pp. 454-459.
  11. Garcia, F, Jimenez, F, Naranjo, JE, Zato, JG, Aparicio, F, Armingol, JM, and Escalera, A (2012). Environment perception based on LIDAR sensors for real road applications. Robotica. 30, 185-193.
  12. Wyffels, K, and Campbell, M (2015). Negative information for occlusion reasoning in dynamic extended multiobject tracking. IEEE Transactions on Robotics. 31, 425-442.
  13. Petrovskaya, A, and Thrun, S (2009). Model based vehicle detection and tracking for autonomous urban driving. Autonomous Robots. 26, 123-139.
  14. Bishop, AN, and Ristic, B (2011). Fusion of natural language propositions: Bayesian random set framework. Proceedings of the 14th International Conference on Information Fusion, Chicago, IL, pp. 1-8.
  15. Koch, W (2004). On 'negative' information in tracking and sensor data fusion: discussion of selected examples. Proceedings of the 7th International Conference on Information Fusion, Piscataway, NJ, pp. 91-98.
  16. Ahmed, NR, Sample, EM, and Campbell, M (2013). Bayesian multicategorical soft data fusion for human–robot collaboration. IEEE Transactions on Robotics. 29, 189-206.
  17. An, J, Choi, B, Sim, KB, and Kim, E (2016). Novel intersection type recognition for autonomous vehicles using a multilayer laser scanner. Sensors. 16.
  18. Duda, RO, and Hart, PE (1972). Use of the Hough transformation to detect lines and curves in pictures. Communications of the ACM. 15, 11-15.
  19. Kim, B, Choi, B, Yoo, M, Kim, H, and Kim, E (2014). Robust object segmentation using a multi-layer laser scanner. Sensors. 14, 20400-20418.
  20. Kim, B, Choi, B, Park, S, Kim, H, and Kim, E (2016). Pedestrian/vehicle detection using a 2.5-D multi-layer laser scanner. IEEE Sensors Journal. 16, 400-408.

Beomseong Kim was born in Seoul, Korea, in 1987. He received his B.S. and Ph.D. degrees in electrical and electronic engineering from Yonsei University, Seoul, Korea, in 2009 and 2015, respectively. He is currently a senior engineer in the R&D Division of LG Electronics. His current research interests include advanced driver-assistance systems (ADAS) and autonomous driving vehicle systems.


Seongkeun Park was born in Seoul, Korea, in 1981. He received his B.S. and Ph.D. degrees in electrical and electronic engineering from Yonsei University, Seoul, Korea, in 2004 and 2011, respectively. He is currently an assistant professor in the Department of Smart Automobile, Soonchunhyang University. Before joining Soonchunhyang University, he was a senior research engineer in the ADAS Recognition Development Team, R&D Division, Hyundai Motor Company. He was also a visiting researcher in the driving research group at Stanford University, Palo Alto, CA, USA, in 2012. His research interests include machine learning and its application to autonomous vehicles and ADAS perception systems, including sensor signal processing, sensor fusion, and object path prediction.


Euntai Kim was born in Seoul, Korea, in 1970. He received B.S., M.S., and Ph.D. degrees in Electronic Engineering, all from Yonsei University, Seoul, Korea, in 1992, 1994, and 1999, respectively. From 1999 to 2002, he was a Full-Time Lecturer in the Department of Control and Instrumentation Engineering, Hankyong National University, Kyonggi-do, Korea. Since 2002, he has been with the faculty of the School of Electrical and Electronic Engineering, Yonsei University, where he is currently a Professor. He was a Visiting Scholar at the University of Alberta, Edmonton, AB, Canada, in 2003, and also was a Visiting Researcher at the Berkeley Initiative in Soft Computing, University of California, Berkeley, CA, USA, in 2008. His current research interests include computational intelligence and statistical machine learning and their application to intelligent robotics, unmanned vehicles, and robot vision.


Article

Original Article

Int. J. Fuzzy Log. Intell. Syst. 2017; 17(3): 162-169

Published online September 30, 2017 https://doi.org/10.5391/IJFIS.2017.17.3.162

Copyright © The Korean Institute of Intelligent Systems.

Hough Transform-based Road Boundary Localization

Beomseong Kim1,2, Seongkeun Park3, and Euntai Kim2

1Intelligence Lab., LG Electronics, Seoul, Korea, 2School of Electrical and Electronic Engineering, Yonsei University, Seoul, Korea, 3Department of Smart Automobile, Soonchunhyang University, Asan, Korea

Correspondence to: Euntai Kim (etkim@yonsei.ac.kr)

Received: August 18, 2017; Revised: September 13, 2017; Accepted: September 24, 2017

This is an Open Access article distributed under the terms of the Creative Commons Attribution Non-Commercial License (http://creativecommons.org/licenses/by-nc/3.0/) which permits unrestricted noncommercial use, distribution, and reproduction in any medium, provided the original work is properly cited.

Abstract

The advanced driver-assistance system (ADAS) is designed to help drivers while they are driving. To help the drivers, ADAS first comprehends the situation by analyzing the data obtained from the road surroundings. In this process, the road boundary is one of the most important targets to detect for safe driving, but is frequently misdetected on crowded roads. Therefore, a new method for robustly detecting road boundaries on crowded roads is presented in this paper. First, road-boundary detection using a standard Hough transform is described, and its limitations are shown. Second, the cause of the limitations is explained by the measurement model of a laser scanner. Then, the standard Hough transform is modified to reflect the measurement model of the laser scanner; this change reduces the effect of closed obstacles. Finally, the proposed method is tested in the real-world environment, and it shows better performance than previous works in crowded environments.

Keywords: Road boundary detection, Laser scanner, Hough transform

1. Introduction

Road-boundary detection systems divide surroundings areas into the road area and other areas. This system is applicable for detecting drivable areas, building a map, localizing, and so on. The vision, radar, and laser scanner are the most well-known sensors in the field of intelligent vehicles, and these kinds of sensors are also used for detecting road boundaries.

Road-boundary detection by camera is frequently used by researchers for its economic cost. The methods with vision image use lanes on the road [13] and the color difference between the road and the other areas [4] for road-boundary detection. In [2], Jain proposed a road-boundary detection method using a multi-resolution Hough transform. Wang et al. [3] designed the road model using B-spline for detecting road boundaries. Thorpe et al. [4] applied a Monte Carlo simulation for detecting road boundaries.

However, detection using a camera is difficult in situations with worn roads, unmarked boundaries, and low-resolution cameras. Therefore, Wen et al. [1] proposed a novel method for overcoming low resolution problems. Tsai and Sun [5] proposed a method for shadowy road conditions by using a fuzzy rule.

Radar is also widely used in the field of intelligent vehicles. Lundquist et al. [6] proposed a method to detect road boundaries by using occupancy grid mapping and the outlier rejection method. Lundquist et al. [7] used a Gaussian mixture probability hypothesis density filter to detect road boundaries in sequential radar measurement. However, the radar sensor has a low resolution and small number of measurements per scan. Therefore, sequential measurement is required, i.e., sequential time is needed. For this reason, road-boundary detection using radar is more suitable for map building applications than for ADAS or autonomous driving systems.

The laser scanner has a high resolution and large number of measurement per scan. In addition, measurements from laser scanners have less measurement error than the image of a camera or radar. Therefore, laser scanners are widely used for detecting road boundaries.

The methods for detecting road boundaries can be divided into two types based on the measuring angle of a laser scanner. First, using a laser scanner that is installed below the car is the easiest way to detect road boundaries. This laser scanner can track road lanes using signal power from the reflected beam, and distinguishes road boundaries using the height difference between road and curb. However, this method requires an additional laser scanner to detect road boundaries [8].

Using a laser scanner that is installed parallel to the road, i.e., horizontally, is the second method. This laser scanner detects guardrails, fences, and walls as parts of road boundaries. These kinds of obstacles exist on well-constructed roads, like highways, which are the main locations where ADAS and autonomous driving works, currently. This laser scanner is already widely used for detecting vehicles, pedestrians, and other obstacles. Therefore, sharing the laser scanner data of the vehicle-and pedestrian-detection system could be possible, which is an advantage of this method.

Kirchner and Heinrich [9] used the Kalman-filter tracking method for road-boundary detection. Sparbert et al. [10] set up a region of interest and detected road boundaries at the most interesting area. Garcia et al. [11] extracts road boundaries using segmentation and histogram compensation methods. However, in these papers, the situation in which the other vehicles are adjacent to the host vehicle was not considered.

In [1217], the algorithms are designed to overcome the limitation for the cases of object alignment. In [1214], a specific model is proposed for estimating objects in the occluded area. Koch [15] and Ahmed et al. [16] used data fusion to solve the problem of tracking the occluded object using a single laser scanner. An et al. [17] eliminated measurements from moving objects on the grid map to estimate the exact road model. However, such research is only effective for tracking, and not detection.

Therefore, we proposed a novel method for road-boundary detection using a laser scanner that is installed horizontally, by applying a modified Hough transform. The proposed method deals with the occlusion problem, and then provides stable performance in crowded situations. The Hough transform method is already used in [12] for road-boundary detection, but a camera is used in [12]. This method is unsuitable for use in laser scanners and is quite different from the method proposed in this study.

2. Hough Transfrom for Road-Boundary Detection

2.1 Standard Hough Transform

In a given data set, detecting specific shapes like lines and circles has been researched previously. Detecting a line from a data set that consists of point measurement called point-to-line mapping (PTLM). Hough transform (HT) is the most popular method, and is widely used for solving this problem in the field of vision. HT extracts certain shapes by a voting procedure.

To execute the voting process, a mathematical model for the shape, and parameters to represent this mathematical model are needed. Duda and Hart [18] proposed the Hesse normal form to resolve unbounded problem of linear equations. Eq. (1) is the Hesse normal form. In general, the parameter space of Hesse normal form is called Hough space.

ρ=xcos α+ysin α.

ρ is the distance between the origin and the closest point on the line, and α is the angle between the x-axis and the closest point. When ρmax is the maximum distance of the laser scanner, the range of parameters could be ρ ∈ [−ρmax,ρmax], α ∈ [0,π), or ρ ∈ [0,ρmax], α ∈ [0, 2π). In this paper, the former range is used. A single point in the xy plane is transformed into a sinusoidal curve in Hough space.

2.2 Limitation of Standard Hough Transform

To implement HT, Hough space must be quantized, and this quantized Hough space forms a 2-dimensional array called an accumulator. Each bin in the accumulator has predefined sizes Δρ, Δα, represents each set of ρ, α, and increases as measurements transform the same set. This means the accumulator of measurement could be calculated by accumulating each measurement point by using Eq. (3). The higher value of bin means the higher probability of existence of the line that the corresponding parameter of the bin represents.

HoughSpace(Zt)Accumulator(Zt),Accumulator(Zt)=i=1NAccumulator(pi)         for i=1,,N=i=1NAccumulator(ri,θi)         for i=1,,N.

Using this voting process, the line that contains the largest point could be detected, but this line is not always the road boundary.

3. Proposed Method

3.1 Measurement Model of Laser Scanner

The laser scanner uses the laser beam for measuring the distance between a sensor and its closest obstacle. The laser scanner emits a laser beam, the beam reaches the obstacle, and then it returns to its original position. The laser scanner measures the time of flight (return time) and this time can be tuned to the distance owing to the constant speed of the laser beam.

Standard HT increases the possibility of line existence according to the measurement point by voting process, and does nothing in the empty areas. From the perspective of standard HT, the area that contains a measurement is occupied, and the other areas are unknown.

However, from the perspective of a laser scanner, the area where the measurement was taken is occupied, the areas before the measurement along the beam are empty, and the areas after the measurement are unknown. This difference can cause errors in road-boundary detection.

Figure 1 shows the occupancy grid map of the laser scanner measurement. The black points denote the measurements and the occupied areas, the gray area denotes unknown areas, and the white area denotes the empty area. If these measurements are obtained on the road, a human can estimate the road boundary as the blue dashed line in Figure 1 by using the information about empty areas. The red dashed line selected by standard HT cannot be the road boundary, because it passes through the empty area.

In the road environment, the road boundary is the farthest obstacle; therefore, a human can estimate the road boundary despite the presence of closer obstacles. Likewise, the proposed method achieves stable performance in crowded environments by using the information about empty areas.

3.2 Modified HT Considering the Assumption of Laser Scanner

For this reason, in this paper, a negative voting process is added to standard HT. In the proposed method, positive voting is executed in the occupied area, as in standard HT. In the empty area, however, the accumulator is calculated in the same way and then subtracted during the overlapping process, as in Eq. (4).

$$\mathrm{Accumulator}^{*}(Z_t) = \sum_{i=1}^{N}\left(\mathrm{Accumulator}(r_i,\theta_i) - \sum_{j=1}^{\lfloor r_i\rfloor - 1}\mathrm{Accumulator}(r_i - j,\,\theta_i)\right). \tag{4}$$

In Eq. (4), Accumulator*(Zt) denotes the overlapped Hough space computed by the proposed method. Accumulator(ri, θi) is the same as in standard HT, and the inner sum over Accumulator(ri − j, θi) denotes the accumulator contribution from the empty area. In this paper, the sampling interval of the empty area is 1, and j runs from 1 to ⌊ri⌋ − 1; therefore, ri − j denotes uniformly spaced distances before the measurement point. When the accumulators are overlapped, the accumulator from the empty area is subtracted.
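A sketch of this negative-voting scheme follows (function names, bin sizes, and the unit sampling interval along the beam are illustrative assumptions consistent with Eq. (4), not the authors' implementation):

```python
import numpy as np

def vote(acc, x, y, weight, rho_max, d_rho, alphas):
    """Add `weight` to every (rho, alpha) bin whose line passes through (x, y)."""
    rhos = x * np.cos(alphas) + y * np.sin(alphas)
    idx = np.floor((rhos + rho_max) / d_rho).astype(int)
    valid = (idx >= 0) & (idx < acc.shape[0])
    acc[idx[valid], np.arange(len(alphas))[valid]] += weight

def modified_hough(polar_points, rho_max, d_rho=0.5, d_alpha=np.pi / 180):
    """Negative-voting HT of Eq. (4): each measurement (r_i, theta_i) votes +1,
    and each sampled empty cell (r_i - j, theta_i), j = 1..floor(r_i)-1, votes -1."""
    n_alpha = int(round(np.pi / d_alpha))
    acc = np.zeros((int(round(2 * rho_max / d_rho)), n_alpha))
    alphas = np.arange(n_alpha) * d_alpha
    for r, theta in polar_points:
        vote(acc, r * np.cos(theta), r * np.sin(theta), +1.0, rho_max, d_rho, alphas)
        for j in range(1, int(np.floor(r))):       # empty cells before the hit
            re = r - j
            vote(acc, re * np.cos(theta), re * np.sin(theta), -1.0,
                 rho_max, d_rho, alphas)
    return acc
```

Lines that cross the empty space in front of closer obstacles thus accumulate negative votes, which suppresses the false peaks that mislead standard HT.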

Figure 2 depicts the result of the proposed method. In Figure 2(a), the blue box denotes the overlapped result of standard HT and the red box denotes the overlapped result of the additional terms in the proposed method. Figure 2(b) shows the final accumulator, and the red star marks the peak point. Figure 2(c) shows the same measurement as Figure 1, and the red line is the line corresponding to the parameters of the red star in Figure 2(b).

3.3 Road Boundary Selection

As mentioned before, standard HT is usually applied to vision images. In many cases, the number of lines in an image is unknown, so a threshold is used for choosing lines: every line whose Hough-space value exceeds the threshold is selected. However, the goal of this paper is to detect road boundaries. Road boundaries exist independently on the left and right sides, and only one boundary exists in each region.

Figure 3 shows the road-boundary areas of the left and right sides in the accumulator. The right road boundary is located in ρ ∈ [0, ρmax], α ∈ [0, π/2) or ρ ∈ [−ρmax, 0), α ∈ [π/2, π). The left road boundary is located in ρ ∈ [0, ρmax], α ∈ [π/2, π) or ρ ∈ [−ρmax, 0), α ∈ [0, π/2).
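A sketch of this per-side peak selection (function name and bin layout are assumptions matching the accumulator sketches above, not the authors' code):

```python
import numpy as np

def select_boundaries(acc, rho_max, d_rho=0.5, d_alpha=np.pi / 180):
    """Pick one peak per side. Right-boundary region:
    (rho >= 0 and alpha < pi/2) or (rho < 0 and alpha >= pi/2);
    the left-boundary region is its complement."""
    n_rho, n_alpha = acc.shape
    rho = (np.arange(n_rho) + 0.5) * d_rho - rho_max   # bin centers
    alpha = np.arange(n_alpha) * d_alpha
    R, A = np.meshgrid(rho, alpha, indexing="ij")
    right = ((R >= 0) & (A < np.pi / 2)) | ((R < 0) & (A >= np.pi / 2))

    def peak(mask):
        # argmax restricted to one region of the accumulator
        i, j = np.unravel_index(np.argmax(np.where(mask, acc, -np.inf)), acc.shape)
        return rho[i], alpha[j]

    return peak(~right), peak(right)                    # (left, right)
```

Because exactly one boundary is assumed per side, a single masked argmax per region replaces the usual global thresholding.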

4. Experiment

In this experiment, we mounted an IBEO LUX2010 laser scanner on the front bumper of a Kia K900. The LUX2010 has four layers, and its horizontal and vertical resolutions are 0.125° and 0.8°, respectively; its minimum and maximum measuring distances are 0.3 m and 200 m. A camera installed on the same vehicle is used to obtain images of the actual environment. This sensor configuration is the same as that in our previous works [19, 20].

Figure 4 shows the result of road-boundary detection in a common road environment. Figures 4(a) and 4(b) are the results of standard HT and the proposed method, respectively. Black points are the measurements of the laser scanner; the blue line and the red line are the road boundaries of the left and right sides, respectively. In this situation, the road boundaries have the largest number of measurements on both the left and right sides, so the results of standard HT and the proposed method are similar. Figure 4(c) is the camera image captured at the same time; it is used for checking the surroundings.

Figure 5 shows the result of road-boundary detection in a crowded road environment. Figures 5(a) and 5(b) are the results of standard HT and the proposed method, respectively. The black points are the measurements of the laser scanner; the blue line and the red line are the road boundaries of the left and right sides, respectively. In this situation, the road boundary has the largest number of measurements on the left side. On the right side, however, vehicles in front cause occlusion, and fewer measurement points come from the road boundary than from the vehicles. Therefore, standard HT estimates the road boundary at an incorrect position, whereas the proposed method works well in this situation.

Tables 1 and 2 give the quantitative results of standard HT and the proposed method: Table 1 shows the error between the estimated and true values, and Table 2 the corresponding standard deviation. DL and DR are the distances from the origin to the x-intercepts of the left and right road boundaries, and αL and αR are the angles of the left and right road boundaries. The errors of the proposed method are considerably smaller than those of standard HT for all parameters.

During the experiment, the host vehicle was located on the left side of the road; therefore, more vehicles were on the right side of the host vehicle than on the left. As a result, occlusions mainly occurred at the right-side road boundary. For this reason, the distance and angular errors of the right side, in particular, are greatly reduced. These effects are also confirmed by the standard deviations in Table 2.

5. Conclusion

In this paper, a novel method for detecting road boundaries was proposed. The proposed method was designed to overcome occlusion problems by modifying standard HT for detecting road boundaries. The proposed method was implemented on a vehicle, and showed outstanding performance in a crowded environment.

Such perception methods are necessary for ADAS and autonomous driving vehicles. By applying the proposed method to ADAS and autonomous driving systems, these systems can operate more stably and safely.

Acknowledgements

This work was supported by the Hyundai Motor Company.

Figure 1. Occupancy grid map of laser scanner measurement.

The International Journal of Fuzzy Logic and Intelligent Systems 2017; 17: 162-169https://doi.org/10.5391/IJFIS.2017.17.3.162

Figure 2. Result of the proposed Hough transform (HT). (a) Standard HT and the proposed method. (b) Final accumulator. (c) The same measurement as Figure 1.


Figure 3. Road-boundary areas of the left and right sides in the accumulator.


Figure 4. (a) Result of standard HT in a common road environment. (b) Result of the proposed method in a common road environment. (c) Corresponding camera image.


Figure 5. (a) Result of standard HT in a crowded road environment. (b) Result of the proposed method in a crowded road environment. (c) Corresponding camera image.


Table 1. Error of standard HT and the proposed method

Algorithm        DL (m)   αL (rad)   DR (m)   αR (rad)
Standard HT      0.286    0.024      12.353   0.564
Proposed method  0.137    0.021      0.906    0.030

Table 2. Standard deviation of standard HT and the proposed method

Algorithm        DL (m)   αL (rad)   DR (m)   αR (rad)
Standard HT      0.338    0.025      14.600   0.603
Proposed method  0.099    0.012      1.393    0.045

References

1. Wen, Q, Yang, Z, Song, Y, and Jia, P (2008). Road boundary detection in complex urban environment based on low-resolution vision. Proceedings of the 11th Joint International Conference on Information Sciences, Shenzhen, China, pp. 1-7.
2. Yu, B, and Jain, AK (1997). Lane boundary detection using a multiresolution Hough transform. Proceedings of the International Conference on Image Processing, Santa Barbara, CA, pp. 748-751.
3. Wang, Y, Teoh, EK, and Shen, D (2004). Lane detection and tracking using B-Snake. Image and Vision Computing, 22, 269-280.
4. Thorpe, C, Hebert, MH, Kanade, T, and Shafer, SA (1988). Vision and navigation for the Carnegie-Mellon Navlab. IEEE Transactions on Pattern Analysis and Machine Intelligence, 10, 362-373.
5. Tsai, SJ, and Sun, TY (2005). The robust and fast approach for vision-based shadowy road boundary detection. Proceedings of IEEE Intelligent Transportation Systems, Vienna, Austria, pp. 486-491.
6. Lundquist, C, Schon, TB, and Orguner, U (2009). Estimation of the free space in front of a moving vehicle. Linköping, Sweden: Linköping University.
7. Lundquist, C, Hammarstrand, L, and Gustafsson, F (2011). Road intensity based mapping using radar measurements with a probability hypothesis density filter. IEEE Transactions on Signal Processing, 59, 1397-1408.
8. Zhang, W (2010). LIDAR-based road and road-edge detection. Proceedings of the IEEE Intelligent Vehicles Symposium, San Diego, CA, pp. 845-848.
9. Kirchner, A, and Heinrich, T (1998). Model based detection of road boundaries with a laser scanner. Proceedings of the IEEE International Conference on Intelligent Vehicles, Stuttgart, Germany, pp. 93-98.
10. Sparbert, J, Dietmayer, K, and Streller, D (2001). Lane detection and street type classification using laser range images. Proceedings of IEEE Intelligent Transportation Systems, Oakland, CA, pp. 454-459.
11. Garcia, F, Jimenez, F, Naranjo, JE, Zato, JG, Aparicio, F, Armingol, JM, and Escalera, A (2012). Environment perception based on LIDAR sensors for real road applications. Robotica, 30, 185-193.
12. Wyffels, K, and Campbell, M (2015). Negative information for occlusion reasoning in dynamic extended multiobject tracking. IEEE Transactions on Robotics, 31, 425-442.
13. Petrovskaya, A, and Thrun, S (2009). Model based vehicle detection and tracking for autonomous urban driving. Autonomous Robots, 26, 123-139.
14. Bishop, AN, and Ristic, B (2011). Fusion of natural language propositions: Bayesian random set framework. Proceedings of the 14th International Conference on Information Fusion, Chicago, IL, pp. 1-8.
15. Koch, W (2004). On 'negative' information in tracking and sensor data fusion: discussion of selected examples. Proceedings of the 7th International Conference on Information Fusion, Piscataway, NJ, pp. 91-98.
16. Ahmed, NR, Sample, EM, and Campbell, M (2013). Bayesian multicategorical soft data fusion for human-robot collaboration. IEEE Transactions on Robotics, 29, 189-206.
17. An, J, Choi, B, Sim, KB, and Kim, E (2016). Novel intersection type recognition for autonomous vehicles using a multilayer laser scanner. Sensors, 16.
18. Duda, RO, and Hart, PE (1972). Use of the Hough transformation to detect lines and curves in pictures. Communications of the ACM, 15, 11-15.
19. Kim, B, Choi, B, Yoo, M, Kim, H, and Kim, E (2014). Robust object segmentation using a multi-layer laser scanner. Sensors, 14, 20400-20418.
20. Kim, B, Choi, B, Park, S, Kim, H, and Kim, E (2016). Pedestrian/vehicle detection using a 2.5-D multi-layer laser scanner. IEEE Sensors Journal, 16, 400-408.
