
Vision-Based Vehicle Detection System Applying Hypothesis Fitting

Minkyu Cheon1, and Heesung Lee2

1Department of Electrical and Control Engineering, Gyeonggi College of Science and Technology, Siheung, Korea, 2Department of Railroad Electrical and Electronics Engineering, Korea National University of Transportation, Chungju, Korea
Correspondence to: Heesung Lee (hslee0717@ut.ac.kr)
Received March 30, 2017; Revised June 23, 2017; Accepted June 23, 2017.
This is an Open Access article distributed under the terms of the Creative Commons Attribution Non-Commercial License (http://creativecommons.org/licenses/by-nc/3.0) which permits unrestricted non-commercial use, distribution, and reproduction in any medium, provided the original work is properly cited.
Abstract

In this paper, we propose an improved vision-based vehicle detection system that adds a hypothesis fitting (HF) step to the typical vehicle detection system consisting of the hypothesis generation (HG) and hypothesis verification (HV) steps. In the HG step, the system generates hypotheses using the shadow regions appearing under vehicles. In the HV step, the system verifies whether a hypothesis is a vehicle or not by applying a classifier to feature vectors extracted from the hypothesis. The proposed HF step is conducted between the HG and HV steps and helps to improve the performance of vehicle verification by adjusting the regions of hypotheses. We verify the performance of the proposed HF method on a data set of 4,797 hypotheses: 1,606 positive and 3,191 negative.

Keywords : Vehicle detection, Hypothesis generation, Hypothesis verification, Hypothesis fitting, Histogram of oriented gradients, HOG symmetry, TER-RM
1. Introduction

A vision-based vehicle detection system is a component of automatic driving systems and plays a very important role in reducing traffic accidents. Many vision-based vehicle detection systems using a single camera have been proposed; a typical system consists of two steps, i.e., the hypothesis generation (HG) and hypothesis verification (HV) steps, in order to reduce computational time and achieve real-time processing [1, 2].

The purpose of the HG step is to extract vehicle hypotheses from road images. The results of the HG step can have a considerable effect on the verification accuracy of the HV step. To extract hypotheses more accurately, many hypothesis generation methods have been studied. The HG step generally uses one of the following three methods: the knowledge-based method, the stereo vision-based method, or the motion-based method. To detect objects and hypothesize their locations, the knowledge-based method uses characteristic information such as symmetry [3–6], corners [7, 8], shadows [9], edges [10–12], textures [13, 14], and vehicle lights [15–17]. There are two types of stereo vision-based methods for detecting vehicles: one uses the disparity map [18], and the other uses the inverse perspective mapping method [19, 20]. The motion-based method extracts hypotheses using optical flow by obtaining the relative motions of objects: approaching vehicles produce diverging flow, while overtaking vehicles produce converging flow [21, 22].

The input of the HV step is the hypothesized vehicles obtained from the HG step. In the HV step, hypotheses are examined to determine whether these candidates are vehicles or not. Typical HV methods are the template-based and the appearance-based methods. The template-based methods calculate the correlation between hypotheses and a pre-defined pattern [23–25]. In contrast, the appearance-based methods regard HV as a classification problem of distinguishing vehicle (positive) and non-vehicle (negative) data. To generate the classifier parameters, these methods extract features from training images containing both vehicle and non-vehicle data [26–28].

In this paper, we improve the vision-based vehicle detection system proposed in [29]. The system applies the knowledge-based HG method, which extracts hypotheses using shadow regions [9, 29]. In the HV step, the system applies an appearance-based method that uses the histogram of oriented gradients (HOG) [30, 31] and HOG symmetry vectors [29] as feature vectors and applies total error rate minimization using the reduced model (TER-RM) [29, 32] as a classifier.

To improve the performance of the system, we propose a hypothesis fitting (HF) step in this paper. If hypotheses are incorrectly extracted in the HG step, classification accuracy decreases, and the HOG symmetry extracted from such hypotheses cannot be a good feature for verifying them. The HF step, conducted between the HG and HV steps, adjusts the region of each hypothesis generated in the HG step and helps to enhance the verification accuracy of the HV step.

We introduce our system briefly in Section 2, describe the HG and HV steps in Sections 3 and 4, respectively, and present our proposed HF step in Section 5. In Section 6, we assess the experimental performance of the proposed system.

2. Overview of the Proposed System

In this paper, we propose an improved vision-based vehicle detection system using a single camera, which adds an HF step to the typical vehicle detection system consisting of the HG and HV steps.

In the HG step, hypotheses, i.e., candidates of vehicles are extracted from a road image. In the HV step, hypotheses extracted in the HG step are verified to determine whether or not they are vehicles.

As shown in Figure 1, the HF step is conducted between the HG and HV steps and helps to improve the performance of vehicle verification by adjusting the regions of hypotheses.

The proposed system applies the HG and HV steps introduced in [29].

3. Hypothesis Generation Step

The proposed system extracts shadow regions to generate hypotheses of vehicle locations from road images. This idea is based on the fact that the shaded area under a vehicle is always darker than the road area surrounding the shadow. Therefore, if the gray level of the paved road is roughly estimated, we can expect the gray level of the shaded area to be below this level.

The system extracts the road area, i.e., the driving space, using the edge-based method introduced in [9]. This method regards the driving space as the lowest central homogeneous area in the image demarcated by edges. After extracting the road area, dark regions, including shaded areas under vehicles, are defined as regions whose intensity is smaller than a threshold value m − 3σ, where m and σ are the average and standard deviation of the frequency distribution of the road pixels. To obtain the shadow regions underneath vehicles, horizontal edges between dark areas and road regions are extracted, and hypotheses are determined based on the locations of those edges [9]. Figure 2 briefly shows the hypothesis generation step.
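The m − 3σ dark-region rule can be sketched as follows. This is a minimal illustration assuming a grayscale NumPy image and a precomputed boolean road mask; the function and variable names are ours, and the road-area extraction of [9] itself is not reproduced here.

```python
import numpy as np

def dark_region_mask(image, road_mask):
    """Flag pixels darker than m - 3*sigma of the road pixels.

    image: 2-D grayscale array; road_mask: boolean array of the same shape
    marking the extracted driving space (assumed given).
    """
    road_pixels = image[road_mask].astype(np.float64)
    m, sigma = road_pixels.mean(), road_pixels.std()
    threshold = m - 3.0 * sigma          # the m - 3*sigma rule from the text
    return image < threshold             # candidate shadow pixels

# toy example: a uniform "road" of gray level 120 with one very dark patch
img = np.full((10, 10), 120.0)
img[7:9, 3:6] = 10.0                     # shadow-like dark region
road = np.ones_like(img, dtype=bool)
mask = dark_region_mask(img, road)
```

Only the dark patch survives the threshold; horizontal edges between this mask and the road region would then seed the hypotheses.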

4. Hypothesis Verification Step

In the HV step, our proposed system verifies whether the hypotheses extracted in the HG step are vehicles or not. To verify a hypothesis, the system extracts feature vectors from the hypothesis image and classifies those vectors.

### 4.1 Feature

In the HV step, the proposed system uses two kinds of feature vectors: HOG [30, 31] and HOG symmetry [29]. The HOG is one of the most popular features in vision-based target object detection, especially in human detection systems. The HOG symmetry vector is composed of numerical elements that represent symmetrical characteristics.

The HOG vectors are extracted in a three-step sequence: gradient computation, orientation binning, and histogram generation [30]. In the gradient computation step, the gradient of an image is obtained by applying two 1-D filters: (−1 0 1) for the horizontal direction and (−1 0 1)T for the vertical direction. Applying these two filters yields the orientation and magnitude of the gradient at each pixel. In the orientation binning step, the orientation range is evenly divided into a predefined number of bins, e.g., 6 bins of π/3 each. After the number of orientation bins is determined, a histogram is generated by accumulating the magnitude value of each pixel in the histogram generation step: if the orientation value of a pixel falls within the angle range of a specific bin, the magnitude value of that pixel is accumulated in the corresponding bin. In the proposed system, hypothesis images are resized to 64×64 pixels and divided into four blocks of 32×32 pixels, as shown in Figure 3.
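The three steps above can be sketched for a single block. This is an illustrative simplification rather than the exact implementation of [30]: we assume unsigned orientations in [0, π) and simply drop border pixels instead of padding.

```python
import numpy as np

def hog_block(block, n_bins=8):
    """Minimal HOG for one block: gradients, orientation binning, histogram."""
    g = block.astype(np.float64)
    gx = g[1:-1, 2:] - g[1:-1, :-2]      # (-1 0 1) horizontal filter
    gy = g[2:, 1:-1] - g[:-2, 1:-1]      # (-1 0 1)^T vertical filter
    mag = np.hypot(gx, gy)               # gradient magnitude per pixel
    ang = np.arctan2(gy, gx) % np.pi     # unsigned orientation (assumption)
    bins = np.minimum((ang / (np.pi / n_bins)).astype(int), n_bins - 1)
    hist = np.zeros(n_bins)
    np.add.at(hist, bins.ravel(), mag.ravel())   # accumulate magnitudes
    return hist

# a vertical step edge: all gradient energy falls into the 0-radian bin
block = np.tile(np.repeat([0.0, 100.0], 8), (16, 1))  # 16x16, edge mid-width
h = hog_block(block)
```

In the system, this per-block histogram would be computed for each of the four 32×32 blocks of a resized hypothesis image.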

The HOG symmetry vectors represent symmetrical characteristics numerically using two different HOG vectors [29]. The HOG vectors extracted from the four blocks of a vehicle hypothesis image show symmetry characteristics: HOG1 and HOG2 are symmetric to each other, as are HOG3 and HOG4, as shown in Figure 4. Therefore, two HOG symmetry vectors are extracted from the upper and lower parts of a hypothesis image, and each element of a HOG symmetry vector represents how similar the corresponding elements of the two HOG vectors are.

Consider a symmetrical image divided into two blocks, with two 8-dimensional HOG vectors extracted from the left and right blocks, as shown in Figure 5.

The HOG symmetry vector $C = [c_1\, c_2\, c_3\, c_4\, c_5\, c_6\, c_7\, c_8]^T$, obtained from the two HOGs $H_1 = [h_{11}\, h_{12}\, h_{13}\, h_{14}\, h_{15}\, h_{16}\, h_{17}\, h_{18}]^T$ and $H_2 = [h_{21}\, h_{22}\, h_{23}\, h_{24}\, h_{25}\, h_{26}\, h_{27}\, h_{28}]^T$, is given by [29]

$c_i = \begin{cases} \dfrac{h_{1i}/\sum_{k=1}^{8} h_{1k}}{h'_{2i}/\sum_{k=1}^{8} h'_{2k}}, & \text{if } \dfrac{h'_{2i}}{\sum_{k=1}^{8} h'_{2k}} \ge \dfrac{h_{1i}}{\sum_{k=1}^{8} h_{1k}},\\[3mm] \dfrac{h'_{2i}/\sum_{k=1}^{8} h'_{2k}}{h_{1i}/\sum_{k=1}^{8} h_{1k}}, & \text{otherwise}, \end{cases}$ (1)

where $H'_2$ is the rearranged $H_2$:

$H'_2 = [h'_{21}\, h'_{22}\, h'_{23}\, h'_{24}\, h'_{25}\, h'_{26}\, h'_{27}\, h'_{28}]^T = [h_{25}\, h_{24}\, h_{23}\, h_{22}\, h_{21}\, h_{28}\, h_{27}\, h_{26}]^T.$ (2)
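The definition of $c_i$ above and the rearrangement in (2) can be sketched directly. This is a minimal sketch; the function names are ours. Each $c_i$ is the ratio of the smaller to the larger normalized bin, so a perfectly mirrored pair of HOGs yields a vector of ones.

```python
import numpy as np

def rearrange(h2):
    """Eq. (2): reorder H2 so mirrored bins line up with H1."""
    idx = [4, 3, 2, 1, 0, 7, 6, 5]       # h25 h24 h23 h22 h21 h28 h27 h26
    return h2[idx]

def hog_symmetry(h1, h2r):
    """HOG symmetry vector C: per-bin ratio of the two normalized
    histograms, ordered so that each c_i lies in (0, 1].

    h2r must already be rearranged as in Eq. (2); both are 8-D arrays.
    """
    n1 = h1 / h1.sum()                   # normalized left/upper HOG
    n2 = h2r / h2r.sum()                 # normalized (rearranged) HOG
    return np.minimum(n1, n2) / np.maximum(n1, n2)

h1 = np.array([1., 2, 3, 4, 5, 6, 7, 8])
h2 = h1[[4, 3, 2, 1, 0, 7, 6, 5]]        # a perfectly mirrored histogram
c = hog_symmetry(h1, rearrange(h2))      # all elements equal 1
```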

### 4.2 Classification

In the HV step, the system verifies the hypotheses extracted in the HG step. The system extracts the feature vectors, i.e., HOG and HOG symmetry, and these vectors are fed to the TER-RM with data importance classifier introduced in [29].

TER-RM is a classification method that uses the reduced model [33] to minimize the total error rate (TER) of the training data, which is the sum of the false positive (FP) and false negative (FN) rates [32]. TER-RM with data importance applies an importance value of each training sample to TER-RM, and it minimizes the sum of the FP and FN rates multiplied by the importance values [29].

The reduced model is used to produce more sophisticated separating surfaces than linear discriminant functions and the model is given by [33]

$\hat{f}_{RM}(\alpha, x) = \alpha_0 + \sum_{k=1}^{r}\sum_{j=1}^{l}\alpha_{kj}\,x_j^k + \sum_{j=1}^{r}\alpha_{rl+j}\,(x_1 + x_2 + \cdots + x_l)^j + \sum_{j=2}^{r}(\alpha_j^T \cdot x)(x_1 + x_2 + \cdots + x_l)^{j-1},$ (3)

where $x = [x_1\, x_2\, \cdots\, x_l]^T$ is an l-dimensional feature vector and $\alpha = [\alpha_0\, \alpha_1\, \cdots\, \alpha_k]^T$ is the weight parameter vector.
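The reduced model can be read as a fixed polynomial feature expansion p(x) whose inner product with α gives the model output. A minimal sketch follows; the function name and the ordering of features are our own choices, while the grouping of terms follows the formula above.

```python
import numpy as np

def reduced_model(x, r=3):
    """Polynomial expansion p(x) such that f_RM(alpha, x) = p(x) . alpha."""
    x = np.asarray(x, dtype=float)
    s = x.sum()
    feats = [np.ones(1)]                          # bias term alpha_0
    for k in range(1, r + 1):
        feats.append(x ** k)                      # alpha_kj * x_j^k terms
    feats.append(np.array([s ** j for j in range(1, r + 1)]))  # (sum x)^j
    for j in range(2, r + 1):
        feats.append(x * s ** (j - 1))            # x_j * (sum x)^(j-1) terms
    return np.concatenate(feats)

# for an l-dimensional input the expansion has 1 + r*l + r + (r-1)*l terms
p = reduced_model([1.0, 2.0], r=2)                # l = 2, r = 2 -> 9 terms
```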

The goal of TER-RM with importance value can be expressed as follows [29]:

$\arg\min_{\alpha} \mathrm{TER}(\alpha, x^+, x^-) = \arg\min_{\alpha}\left\{\frac{1}{m^-}\sum_{j=1}^{m^-} C_1 s_j^-\, L(\varepsilon(\alpha, x_j^-)) + \frac{1}{m^+}\sum_{i=1}^{m^+} C_2 s_i^+\, L(\varepsilon(\alpha, x_i^+))\right\},$ (4)

where $\varepsilon(\alpha, x_j^-) = g(\alpha, x_j^-) - \tau = p(x_j^-)\cdot\alpha - \tau$ and $\varepsilon(\alpha, x_i^+) = \tau - g(\alpha, x_i^+) = \tau - p(x_i^+)\cdot\alpha$, and $p(x)$ is the reduced model of vector x. Here, $x^+$ and $x^-$ are the positive and negative feature vectors, $m^+$ and $m^-$ are the numbers of positive and negative data, τ is the threshold value for classification, $C_1$ and $C_2$ are the penalty values for FN and FP, and L is a 0–1 loss function such that L(ɛ) = 1 when ɛ ≥ 0 and L(ɛ) = 0 otherwise. The importance value s is determined by the size (z) and horizontal position (x) of the hypothesis as follows [29]:

$s = \dfrac{\mu_x(x) + \mu_{size}(z)}{2},$ (5)

where

$\mu_x(x) = \begin{cases}\dfrac{x - \min(x)}{\mathrm{average}(x) - \min(x)}, & \text{if } x \le \mathrm{average}(x),\\[3mm] \dfrac{\max(x) - x}{\max(x) - \mathrm{average}(x)}, & \text{otherwise},\end{cases} \qquad \mu_{size}(z) = \dfrac{\max(z) - z}{\max(z) - \min(z)}.$
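Under these definitions, the importance of one hypothesis can be computed as sketched below. Note the assumptions: the falling branch of μ_x for x > average(x) is an assumed mirrored form (only the rising branch is fully specified in the text), and all names are ours.

```python
import numpy as np

def importance(x, z, xs, zs):
    """Importance s for one hypothesis with position x and size z.

    xs, zs: positions and sizes over the data set (for min/max/average).
    The branch of mu_x for x > average(x) is an assumed mirrored form.
    """
    lo, hi, avg = xs.min(), xs.max(), xs.mean()
    if x <= avg:
        mu_x = (x - lo) / (avg - lo)      # rises toward the average position
    else:
        mu_x = (hi - x) / (hi - avg)      # assumed falling branch
    mu_size = (zs.max() - z) / (zs.max() - zs.min())  # smaller size -> larger
    return 0.5 * (mu_x + mu_size)

xs = np.array([0., 10, 20, 30, 40])       # toy positions (average 20)
zs = np.array([10., 20, 30, 40, 50])      # toy sizes
s = importance(20.0, 10.0, xs, zs)        # centered and small -> s = 1
```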

The optimization problem in (4) can be approximated using a quadratic function, and (4) can then be rewritten as [29, 32]

$\mathrm{TER}(\alpha, x^+, x^-) = \dfrac{b}{2}\|\alpha\|_2^2 + \dfrac{1}{2m^-}\sum_{j=1}^{m^-} C_1 s_j^-\,[\varepsilon(\alpha, x_j^-) + \eta]^2 + \dfrac{1}{2m^+}\sum_{i=1}^{m^+} C_2 s_i^+\,[\varepsilon(\alpha, x_i^+) + \eta]^2,$ (6)

where η is the positive offset value.

The solution of the minimization condition for (6) can be written as [29]

$\alpha = \left[bI + \dfrac{1}{m^-}\sum_{j=1}^{m^-} C_1 s_j^-\, p_j^T p_j + \dfrac{1}{m^+}\sum_{i=1}^{m^+} C_2 s_i^+\, p_i^T p_i\right]^{-1} \times \left[\dfrac{\tau - \eta}{m^-}\sum_{j=1}^{m^-} C_1 s_j^-\, p_j^T + \dfrac{\tau + \eta}{m^+}\sum_{i=1}^{m^+} C_2 s_i^+\, p_i^T\right],$ (7)

where $pj=p(xj-)$ and $pi=p(xi+)$.
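The closed-form solution above is a regularized weighted least-squares problem and can be sketched as follows. This is a minimal sketch under stated assumptions: all names are ours, and the reduced-model expansion p(x) is replaced by a trivial linear one for the demo.

```python
import numpy as np

def ter_rm_weights(P_neg, P_pos, s_neg, s_pos, b=1e-3, C1=1.0, C2=1.0,
                   tau=0.5, eta=0.1):
    """Closed-form TER-RM solution (weighted regularized least squares).

    P_neg, P_pos: rows are reduced-model expansions p(x) of negative and
    positive training vectors; s_neg, s_pos: their importance values.
    """
    m_neg, d = P_neg.shape
    m_pos = P_pos.shape[0]
    w_neg = C1 * s_neg / m_neg            # per-sample weights, negatives
    w_pos = C2 * s_pos / m_pos            # per-sample weights, positives
    A = (b * np.eye(d)
         + (P_neg * w_neg[:, None]).T @ P_neg
         + (P_pos * w_pos[:, None]).T @ P_pos)
    rhs = ((tau - eta) * (w_neg @ P_neg)  # negatives pulled below tau
           + (tau + eta) * (w_pos @ P_pos))  # positives pushed above tau
    return np.linalg.solve(A, rhs)

# toy 2-D separable data with an identity "reduced model" p(x) = [1, x]
rng = np.random.default_rng(0)
P_pos = np.hstack([np.ones((50, 1)), rng.normal(2.0, 0.3, (50, 2))])
P_neg = np.hstack([np.ones((50, 1)), rng.normal(-2.0, 0.3, (50, 2))])
alpha = ter_rm_weights(P_neg, P_pos, np.ones(50), np.ones(50))
scores_pos = P_pos @ alpha                # cluster above the threshold tau
scores_neg = P_neg @ alpha                # cluster below the threshold tau
```

Classification then compares g(α, x) = p(x)·α against the threshold τ.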

5. Hypothesis Fitting Step

In this section, we introduce a hypothesis fitting (HF) method.

In the late afternoon or early morning, vehicles cast relatively long shadows compared to their size. In this situation, the proposed system, which uses a shaded-area-based HG method, cannot extract hypotheses correctly, as shown in Figure 6, and the classification accuracy can decrease. To reduce the classification error caused by such incorrectly extracted hypotheses, we apply the HF step to the system. The proposed HF method is conducted between the HG and HV steps and adjusts the region of a hypothesis generated in the HG step using the symmetry characteristics of vehicle images.

Generally, edge information is used to extract the symmetry characteristics of vehicles. Representatively, Kuehnle [3] and Zielke et al. [4] proposed vehicle detection systems using edge-based symmetry.

However, the proposed HF method finds the symmetry characteristics of vehicles using the HOG features already used in the system, instead of edge information.

This reduces the running time of the system: instead of generating the orientation and magnitude maps of the gradients twice, the system shares the maps between the hypothesis fitting step and the feature extraction step.

Figure 7 briefly shows the process of the proposed HF method. First, the method applies the smallest-sized window to the left end of a hypothesis image and extracts a HOG from the window. At the same time, the same-sized window is applied immediately to the right of the first window, and a HOG is also extracted from it.

After the two HOGs are obtained from the left and right windows, the numerical symmetry of those HOGs is calculated. The HOG vector obtained from the right window is rearranged as in (2), and the numerical symmetry is defined as the squared magnitude of the HOG symmetry of the two HOGs. In other words, the symmetry S of two HOGs is obtained as follows:

$S = \|\text{HOG symmetry of } H_1 \text{ and } H'_2\|_2^2,$

where $H_1$ is the HOG extracted from the left window, and $H'_2$ is the rearranged HOG extracted from the right window. Then, the symmetry axis (the right side of the left window, i.e., the left side of the right window) moves to the right as the size of the windows increases, as shown in Figure 7(a). The minimum width of the window is a quarter of the hypothesis width, and the maximum width is half of the hypothesis width. The height of the window is half of the hypothesis height. These maximum and minimum window dimensions were determined by trial and error. After the window width has increased to the maximum and all the numerical symmetry values have been obtained, the method applies the window of minimum width to the right end of the hypothesis and repeats the process described above, as shown in Figure 7(b).

After the process is completed, the method finds the location of the symmetry axis with the maximum S and adjusts the hypothesis image according to that location.

If the system corrected hypotheses depending only on the maximum symmetry values, without any further conditions, some hypotheses would not be correctly adjusted, which can worsen the classification accuracy because of a decrease in the resolution of the hypotheses. Therefore, a threshold value is defined, and the system adjusts a hypothesis only when its maximum symmetry value is larger than the threshold. If the maximum symmetry value is smaller than the threshold, the hypothesis is not adjusted, and the system uses the unadjusted hypothesis (the original hypothesis image) in the HV step.
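The axis scan described above can be sketched as follows. This is a minimal sketch with several assumptions: the HOG descriptor is replaced by a simple per-column intensity profile for the demo (for which mirroring a window simply reverses the vector), the scanned band is assumed to be the lower half of the image (the text only fixes the window height as half the hypothesis height), and all function names are ours.

```python
import numpy as np

def symmetry_score(h1, h2r):
    """S: squared 2-norm of the symmetry vector of two descriptors."""
    n1, n2 = h1 / h1.sum(), h2r / h2r.sum()
    c = np.minimum(n1, n2) / np.maximum(n1, n2)
    return float(c @ c)

def find_symmetry_axis(hyp, descr, rearr):
    """Scan axis candidates as in Figure 7 and return the best axis.

    hyp: 2-D grayscale hypothesis image; descr(block) -> descriptor vector
    (HOG in the paper); rearr mirrors the right-window descriptor.
    Window pairs are anchored at the left and right image ends, with
    widths from a quarter to half of the hypothesis width.
    """
    H, W = hyp.shape
    band = hyp[H // 2:, :]                # assumed lower half of the image
    best = (-1.0, None)                   # (score, axis column)
    for w in range(W // 4, W // 2 + 1):
        for axis in (w, W - w):           # left-anchored, then right-anchored
            s = symmetry_score(descr(band[:, axis - w:axis]),
                               rearr(descr(band[:, axis:axis + w])))
            if s > best[0]:
                best = (s, axis)
    return best[1], best[0]

# 16x16 image whose column profile is symmetric about column 8
profile = np.array([1., 2, 3, 4, 5, 6, 7, 8, 8, 7, 6, 5, 4, 3, 2, 1])
hyp = np.tile(profile, (16, 1))
axis, score = find_symmetry_axis(hyp, lambda b: b.sum(axis=0),
                                 lambda v: v[::-1])
```

The recovered axis would then be compared against the symmetry threshold before the hypothesis region is re-centered around it.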

In addition, if a hypothesis has a lower resolution than a specific threshold, the proposed HF step is not applied, because the low resolution of hypothesis images can also decrease the verification accuracy. These threshold values were also determined by trial and error. Figure 8 shows the result of the proposed HF method.

6. Experiment

### 6.1 Dataset

For the experiments, a total of 4,797 hypotheses (1,606 positive and 3,191 negative) were extracted from road images collected during real on-road driving tests in Seoul, Korea. Gray-level hypothesis images are used, and each image is divided into four blocks (upper left, upper right, lower left, and lower right). A HOG vector is extracted from each block, and two HOG symmetry vectors are obtained from the upper and lower blocks, respectively. Then, TER-RM with importance values is applied to verify the hypotheses.

Figure 9 shows positive and negative hypotheses extracted by the proposed system. Most of the negative data are guard rails; the rest include structures and buildings near roads, signs on roads, parts of vehicles, and so on.

### 6.2 Selection of Dimension

In this section, the classification results according to the dimension of the HOG are compared. 4-, 6-, 8-, and 10-dimensional HOG vectors are extracted from the obtained hypothesis images, and those vectors are applied to the TER-RM with importance classification method.

As mentioned in Section 4, four HOG vectors and two HOG symmetry vectors are obtained from each hypothesis. Table 1 shows the dimension of the feature vectors according to the dimension of the HOG.

The 24-, 32-, 48-, and 60-dimensional feature vectors are applied to the TER-RM with importance (RM order 3), and the classification results of these features are compared using 10-fold cross validation.

Figure 10 and Table 2 show the classification results, i.e., the error rates according to the dimension of the HOG (4, 6, 8, and 10). Figure 11 and Table 3 show the TER according to the dimension of the HOG.

As shown in Figures 10 and 11 and Tables 2 and 3, when the dimension of the HOG is 8, i.e., when 48-dimensional feature vectors are applied, the classification result is better than in the other cases.

Generally, as the dimension of the feature vectors increases, the accuracy of the classification increases. However, too large a feature dimension can sometimes worsen the classification test results because of overfitting. For this reason, the error rate of the classification with 10-dimensional HOG is worse than with 8-dimensional HOG. Based on these results, the 8-dimensional HOG, i.e., 48-dimensional feature vectors, is applied to the system for hypothesis verification.

### 6.3 Performance of Proposed Hypothesis Fitting Method

In this section, the performance of the proposed HF method introduced in Section 5 is verified. To assess the performance of the method, two data sets are created from the hypotheses described in Section 6.1: one with the hypothesis fitting method applied, and one without.

Four 8-dimensional HOG vectors and two 8-dimensional HOG symmetry vectors are extracted from each data set, and the vectors are applied to the TER-RM with importance (RM order 3) to classify the data. The classification performance on each data set is assessed by 10-fold cross validation.

Figures 12 and 13 and Tables 4 and 5 show numerical performances of the 10-fold cross validation (error rate and TER) when the two data sets are applied to the classification method.

As Figure 12 and Table 4 show, when the hypothesis fitting method is applied to the data set, the average accuracy is approximately 1.2% better than when the unfitted data set is used. As shown in Figure 13 and Table 5, the hypothesis fitting method also yields a 0.054 lower TER.

7. Conclusions

In this paper, we propose an improved vision-based vehicle detection system using a single camera, which adds an HF step to the typical vehicle detection system consisting of the HG and HV steps. The proposed HF step is conducted between the HG and HV steps and helps to improve the performance of vehicle verification by adjusting the regions of hypotheses.

We demonstrate the performance of the proposed system on 4,797 images, including 1,606 positive and 3,191 negative hypotheses.

According to the classification results in Section 6, the proposed hypothesis fitting method leads to approximately 1.2% higher accuracy.

Acknowledgements

This work was supported by the National Research Foundation of Korea (NRF) grant funded by the Korea government (Ministry of Science, ICT & Future Planning [MSIP]) (No. 2017R1C1B5018408).

Conflict of Interest

Figures
Fig. 1.

Overview of our proposed system.

Fig. 2.

Process of the HG step: (a) road region, (b) dark region, (c) edges of shadows underneath vehicles, (d) extracted hypotheses based on Figure 2(c).

Fig. 3.

Four HOGs from a hypothesis.

Fig. 4.

Four HOG vectors are generated from four blocks and two HOG symmetry vectors (C1 and C2) are obtained from upper blocks and lower blocks, respectively.

Fig. 5.

The HOG symmetry of an octagon: bins 4, 5, and 6 of HOG1 and bins 2, 1, and 8 of HOG2 are symmetric to each other, and the HOG symmetry is generated from HOG1 and the rearrangement of HOG2, i.e., HOG2′.

Fig. 6.

Examples of hypotheses which are incorrectly extracted due to long shadows.

Fig. 7.

Hypothesis fitting method.

Fig. 8.

Original hypothesis images and adjusted images by the proposed method.

Fig. 9.

(a) Positive (vehicle) and (b) negative (non-vehicle) images.

Fig. 10.

Ten-fold cross-validation result (error rate) according to the dimensions of HOG.

Fig. 11.

Ten-fold cross-validation result (TER) according to the dimensions of HOG.

Fig. 12.

Ten-fold cross-validation result (error rate) for the adjusted data set and the unadjusted data set.

Fig. 13.

Ten-fold cross-validation result (TER) for the adjusted data set and the unadjusted data set.

TABLES

### Table 1

Dimensions of feature vectors according to the dimensions of HOG

| Dimension of HOG | Number of HOGs | Dimension of HOG symmetry | Number of HOG symmetries | Dimension of feature vector |
|---|---|---|---|---|
| 4 | 4 | 4 | 2 | 24 |
| 6 | 4 | 6 | 2 | 32 |
| 8 | 4 | 8 | 2 | 48 |
| 10 | 4 | 10 | 2 | 60 |

### Table 2

Average, the best, and the worst error rate according to the dimensions of HOG

| Dimension of HOG | AVG±SD | Best | Worst |
|---|---|---|---|
| 4 | 0.1277±0.0278 | 0.1021 | 0.2021 |
| 6 | 0.0827±0.0324 | 0.0563 | 0.1750 |
| 8 | 0.0575±0.0116 | 0.0417 | 0.0792 |
| 10 | 0.0777±0.0370 | 0.0458 | 0.1750 |

### Table 3

Average, the best, and the worst TER according to the dimensions of HOG

| Dimension of HOG | AVG±SD | Best | Worst |
|---|---|---|---|
| 4 | 0.2680±0.0264 | 0.2284 | 0.3150 |
| 6 | 0.1818±0.0467 | 0.1296 | 0.2688 |
| 8 | 0.1212±0.0231 | 0.0863 | 0.1573 |
| 10 | 0.1407±0.0487 | 0.0931 | 0.2658 |

### Table 4

Average, the best, and the worst error rate for the adjusted data set and the unadjusted data set

Data setAVG±SDBestWorst

### Table 5

Average, the best, and the worst TER for the adjusted data set and the unadjusted data set

Data setAVG±SDBestWorst

References
1. Sun, Z, Bebis, G, and Miller, R (2006). On-road vehicle detection: a review. IEEE Transactions on Pattern Analysis and Machine Intelligence. 28, 694-711.
2. Truong, QB, Geon, HN, and Lee, BR 2009. Vehicle detection and recognition for automated guided vehicle., Proceedings of 2009 ICROS-SICE International Joint Conference (ICCAS-SICE), Fukuoka, Japan, pp.671-676.
3. Kuehnle, A (1991). Symmetry-based recognition of vehicle rears. Pattern Recognition Letters. 12, 249-258.
4. Zielke, T, Brauckmann, M, and Vonseelen, W (1993). Intensity and edge-based symmetry detection with an application to car-following. CVGIP: Image Understanding. 58, 177-190.
5. Alessandretti, G, Broggi, A, and Cerri, P (2007). Vehicle and guard rail detection using radar and vision data fusion. IEEE Transactions on Intelligent Transportation Systems. 8, 95-105.
6. Cheng, G, and Chen, X 2011. A vehicle detection approach based on multi-features fusion in the fisheye images., Proceedings of 3rd International Conference on Computer Research and Development, Shanghai, China, Array, pp.1-5.
7. Bertozzi, M, Broggi, A, and Castelluccio, S (1997). A real-time oriented system for vehicle detection. Journal of Systems Architecture. 43, 317-325.
8. Jazayeri, A, Cai, H, Zheng, JY, and Tuceryan, M (2011). Vehicle detection and tracking in car video based on motion model. IEEE Transactions on Intelligent Transportation Systems. 12, 583-595.
9. Tzomakas, C, and von Seelen, W (1998). Vehicle Detection in Traffic Scenes Using Shadows: Internal Report 98–06. Bochum, Germany: Institute for Neuroinformatics, Ruhr-University Bochum
10. Matthews, ND, An, PE, Charnley, D, and Harris, CJ (1996). Vehicle detection and recognition in greyscale imagery. Control Engineering Practice. 4, 473-479.
11. Boumediene, M, Ouamri, A, and Keche, M 2011. Vehicle detection algorithm based on horizontal/vertical edges., Proceedings of 7th International Workshop on Systems, Signal Processing and Their Applications, Tipaza, Algeria, Array, pp.396-399.
12. Pirzada, SJH, Haq, EU, and Shin, H 2011. A multi feature based on-road vehicle recognition., Proceedings of 6th International Conference on Computer Sciences and Convergence Information Technology, Seogwipo, Korea, pp.173-178.
13. Kalinke, T, Tzomakas, C, and von Seelen, W 1998. A texture-based object detection and an adaptive model-based classification., Proceedings of 1998 IEEE International Conference on Intelligent Vehicles, Stuttgart, Germany, pp.143-148.
14. Haralick, RM, Shanmugam, K, and Dinstein, I (1973). Textural features for image classification. IEEE Transactions on Systems, Man, and Cybernetics. 3, 610-621.
15. Cucchiara, R, and Piccardi, M 1999. Vehicle detection under day and night illumination., Proceedings of 3rd International ICSC Symposium on Intelligent Industrial Automation, Genova, Italy.
16. Fossati, A, Schonmann, P, and Fua, P (2011). Real-time vehicle tracking for driving assistance. Machine Vision and Applications. 22, 439-448.
17. Zhang, W, Wu, QMJ, Wang, G, and You, X (2012). Tracking and pairing vehicle headlight in night scenes. IEEE Transactions on Intelligent Transportation Systems. 13, 140-153.
18. Mandelbaum, R, McDowell, L, Bogoni, L, Reich, B, and Hansen, M 1998. Real-time stereo processing, obstacle detection, and terrain estimation from vehicle-mounted stereo cameras., Proceedings of 4th IEEE Workshop on Applications of Computer Vision, Princeton, NJ, Array, pp.288-289.
19. Mallot, HA, Bulthoff, HH, Little, JJ, and Bohrer, S (1991). Inverse perspective mapping simplifies optical flow computation and obstacle detection. Biological Cybernetics. 64, 177-185.
20. Arrospide, J, and Salgado, L (2012). On-road visual vehicle tracking using Markov chain Monte Carlo particle filtering with metropolis sampling. International Journal of Automotive Technology. 13, 955-961.
21. Giachetti, A, Campani, M, and Torre, V (1998). The use of optical flow for road navigation. IEEE Transactions on Robotics and Automation. 14, 34-48.
22. Kruger, W, Enkelmann, W, and Rossle, S 1995. Real-time estimation and tracking of optical flow vectors for obstacle detection., Proceedings of Intelligent Vehicles ’95 Symposium, Detroit, MI, Array, pp.304-309.
23. Ito, T, Yamada, K, and Nishioka, K 1995. Understanding driving situations using a network model., Proceedings of Intelligent Vehicles ’95 Symposium, Detroit, MI, Array, pp.48-53.
24. Regensburger, U, and Graefe, V 1994. Visual recognition of obstacles on roads., Proceedings of 1994 International Conference on Intelligent Robots and Systems, Munich, Germany, Array, pp.73-86.
25. Bensrhair, A, Bertozzi, M, Broggi, A, Miche, P, Mousset, S, and Toulminet, G 2001. A cooperative approach to vision-based vehicle detection., Proceedings of 2001 IEEE Intelligent Transportation Systems, Oakland, CA, Array, pp.209-214.
26. Wu, J, and Zhang, X 2001. A PCA classifier and its application in vehicle detection., Proceedings of 2001 International Joint Conference on Neural Networks, Washington, DC, Array, pp.600-604.
27. Sun, Z, Bebis, G, and Miller, R 2002. On-road vehicle detection using Gabor filters and support vector machines., Proceedings of 14th International Conference on Digital Signal Processing, Santorini, Greece, Array, pp.1019-1022.
28. Cheon, M, Yoon, C, Kim, E, and Park, M 2008. Vehicle detection using fuzzy twin support vector machines., Proceedings of Joint 4th International Conference on Soft Computing and Intelligent Systems and 9th International Symposium on Advanced Intelligent Systems (SCIS & ISIS 2008), Nagoya, Japan, Array, pp.2043-2048.
29. Cheon, M, Lee, W, Yoon, C, and Park, M (2012). Vision-based vehicle detection system with consideration of the detecting location. IEEE Transactions on Intelligent Transportation Systems. 13, 1243-1252.
30. Dalal, N, and Triggs, B 2005. Histograms of oriented gradients for human detection., Proceedings of 2005 IEEE Computer Society Conference on Computer Vision and Pattern Recognition, San Diego, CA, Array, pp.886-893.
31. Zhu, Q, Yeh, MC, Cheng, KT, and Avidan, S 2006. Fast human detection using a cascade of histograms of oriented gradients., Proceedings of 2006 IEEE Computer Society Conference on Computer Vision and Pattern Recognition, New York, NY, Array, pp.1491-1498.
32. Toh, KA, and Eng, HL (2008). Between classification-error approximation and weighted least-squares learning. IEEE Transactions on Pattern Analysis and Machine Intelligence. 30, 658-669.
33. Toh, KA, Tran, QL, and Srinivasan, D (2004). Benchmarking a reduced multivariate polynomial pattern classifier. IEEE Transactions on Pattern Analysis and Machine Intelligence. 26, 740-755.
Biographies

Minkyu Cheon received the B.S. and Ph.D. degree in Electrical and Electronic Engineering from Yonsei University, Seoul, Korea, in 2006 and 2013, respectively. From 2013 to 2015, he was a Managing Researcher with the Image & Video Research Group, S1 Corporation. He is currently an assistant professor with the Department of Electrical and Control Engineering, Gyeonggi College of Science and Technology. His main research interests include machine learning, pattern recognition, and computer vision.

E-mail: cheonmk@gtec.ac.kr

Heesung Lee received the B.S., M.S., and Ph.D. degrees in Electrical and Electronic Engineering from Yonsei University, Seoul, Korea, in 2003, 2005, and 2010, respectively. From 2011 to 2014, he was a managing researcher with the S1 Corporation, Seoul, Korea. Since 2015, he has been with the Railroad Electrical and Electronics Engineering at Korea National University of Transportation Gyeonggi-do, Korea, where he is currently an assistant professor. His current research interests include computational intelligence, biometrics, and intelligent railroad system.

E-mail: hslee0717@ut.ac.kr

December 2017, 17 (4)