Viewpoint Classification for the Bus-Waiting Blinds in Congested Traffic Environment
International Journal of Fuzzy Logic and Intelligent Systems 2019;19(1):48-58
Published online March 25, 2019
© 2019 Korean Institute of Intelligent Systems.

Watcharin Tangsuksant, Masashi Noda, Kodai Kitagawa, and Chikamune Wada

Graduate School of Life Science and Systems Engineering, Kyushu Institute of Technology, Fukuoka, Japan
Correspondence to: Watcharin Tangsuksant, (w.tangsuksant.m@hotmail.com)
Received August 13, 2018; Revised March 11, 2019; Accepted March 13, 2019.
This is an Open Access article distributed under the terms of the Creative Commons Attribution Non-Commercial License (http://creativecommons.org/licenses/by-nc/3.0) which permits unrestricted non-commercial use, distribution, and reproduction in any medium, provided the original work is properly cited.
Abstract

To provide an effective notification service for blind people awaiting the bus, it is crucial to have a viewpoint-classification technique, where a viewpoint is defined by the tilt and panning of the camera. This paper proposes a viewpoint-classification method using car-distribution information in a congested traffic environment. The proposed method takes four steps for classification. First, the YOLO algorithm is used to detect car positions in the images. Second, the car positions are normalized for feature computation. Third, nineteen simple features are extracted, and finally, the viewpoint classification is conducted. The proposed method uses the information-gain measure to select relevant features from those extracted, and uses the Random Forest algorithm as a classifier. In the experiments, the proposed method was tested on various roadside scenarios of congested traffic during day and night. The accuracies for car detection and viewpoint classification were 79.90% and 86.00%, respectively, which are improved compared to the prior work.

Keywords : Viewpoint classification, Car distribution, Blind people, Congested traffic environment
1. Introduction

The Ministry of Social Development and Human Security of Thailand reported the official number of visually impaired people in Thailand to be 204,012 in 2017 [1]. However, according to the Thailand Association of the Blind, an estimated 680,000 visually impaired people not included in the official count, including completely blind individuals, also live in Thailand [2]. In daily travel, blind individuals face many problems because they cannot see the obstructions in their way. Some of these problems, such as navigation assistance and obstacle detection, have been addressed; for example, the tactile graphics at railway stations in Japan and Poland [3, 4] offer important information to blind travelers. Nevertheless, well-designed support systems for blind people are still lacking in many countries, including Thailand.

Blind people face major problems, especially in daily-life activities. This research focuses on the problem of bus use for blind individuals who travel independently. Public transportation in Thailand is quite poor, and support systems for disabled people are not provided. Moreover, the arrival times of oncoming buses cannot be estimated, and drivers make no announcements concerning the bus number or destination. Passengers must observe the bus number or destination themselves at the bus stop and then signal the driver by waving their hands. With help from people nearby, some blind people can wait for and board the bus; unfortunately, most blind people cannot take the bus because no such help is available.

The existing bus-identification systems for blind people are divided into two major types. The first type uses transceiver-based communication such as RFID, Bluetooth, ZigBee, and GPS [5–9]. For example, Al-Kalbani et al. [5] presented a bus-detection system for the blind using RFID. Their system involved cooperation among bus modules, bus-station systems, and blind tags, with each part of the system communicating via a database. Similarly, Santos [6] proposed an interactive system for city-bus transport using a Bluetooth module to connect buses, bus stops, and blind users, who could select bus lines via smartphone. Other types of transceiver have been used in similar systems, such as ZigBee [7] and wireless sensor networks (WSNs) [8]. In addition, Thai developers presented the ViaBus smartphone application [9], which applies the GPS of a smartphone to search for bus locations and estimate the arrival times of buses. However, this application can only find buses that have a GPS module installed. While transceiver systems seem like a good idea for identifying buses for blind people, it is impractical to install modules in all buses and bus stations and to provide them to all blind people. Furthermore, when some part of the system malfunctions, maintenance is quite difficult.

The second type uses computer-vision techniques [10–14]. Computer vision, or image processing, is useful and widely used in various applications, including bus identification for blind individuals. For instance, an automatic Thai bus-number-recognition system [10] was proposed in 2016, which can segment and recognize bus-number digits with an accuracy of 73.47%. The problem of bus identification for the blind has also attracted researchers outside Thailand. For example, Chun-Ming et al. in Taiwan [11, 12] presented an image-based method comprising three main processes, namely moving-object detection, bus-panel extraction, and bus-route-number detection. Similarly, the system of Lee et al. [13] used gradient-based features for bus-number recognition with a support-vector machine (SVM) classifier; blind individuals were notified of the bus-identification result by a text-to-speech program via a speaker embedded in a tablet PC. In addition, bus detection and recognition has also been studied in New York, USA: Pan et al. [14] proposed a general two-step traveling-assistant system for visually impaired people, in which HOG-based features were applied to detect buses in images and optical-character recognition was performed to read the bus number. In practice, image-processing techniques are simpler than transceiver-based systems.

Figure 1 shows the concept of our system, which comprises four main steps. In Step 1, users turn the application on or off by voice command. Then (Step 2), users hold their smartphone to take a video while awaiting an oncoming bus. When a bus arrives at the bus stop (Step 3), the application informs users via a voice announcement through the smartphone speaker (Step 4). Although our concept is similar to previous studies [10–14], those methods had only two main steps, namely bus detection and bus-number recognition, corresponding to Step 2 of Figure 1. In real situations, blind users will hold their smartphones freely while taking videos, and the problem is that they may not know the viewpoint of the image shown on the smartphone screen. For instance, if they face obstacles such as electric poles or other individuals on the roadside, blind users might capture an image of those obstacles instead of the oncoming bus. Although existing methods have good algorithms for recognizing bus numbers, they cannot do so in such a case. Moreover, even some obstacle-free viewpoints are unsuitable for capturing bus data or make recognition impossible, and this problem needs to be considered. Therefore, this paper proposes viewpoint classification in order to complement existing methods.

The final goal of our algorithm is to let blind users know how to adjust the camera to obtain a suitable viewpoint of the oncoming bus. Figure 2 shows a four-step algorithm for obtaining such viewpoints. First, images are acquired by a smartphone camera; second, obstacle detection is performed. When obstacles appear, the algorithm notifies users to adjust their camera position by translation, panning leftward/rightward, or tilting the camera upward/downward. Viewpoint classification then starts once no obstacles appear in the image. Moreover, road situations are categorized into two main cases: congested and non-congested traffic. Since different clues are used for the two situations, viewpoint classification must be considered separately for each. For the non-congested situation, our prior research [15] proposed using road-area segmentation and the vanishing point in the image for viewpoint classification. However, that method is difficult to apply to a congested-traffic situation because vehicles obscure the road area. Therefore, in order to achieve the final goal of viewpoint classification in a daily-use system, this research proposes classification of viewpoints in congested traffic.

This paper consists of six sections. In Section 2, a suitable viewpoint for bus waiting is defined, because no definition has been presented in previous works. In Section 3, the viewpoint-classification method is proposed, using the car distribution in the images as a clue. The experiments and results of the proposed method are presented in Section 4. The discussion is presented in Section 5, and Section 6 presents the conclusion together with future work.

2. Suitable Viewpoint Definition for Bus Waiting

Generally, the viewpoint of a photo depends on two main factors: the tilt and panning of the camera. Figure 3 shows the relationship between the viewpoint of each image and the camera's tilt and panning. For example, Viewpoints 1, 2, and 3 are unsuitable. Viewpoints 1 and 2 were taken at unsuitable tilts; namely, they are very-low-angle and very-high-angle shots, respectively. Viewpoint 3 was taken with a suitable tilt but unsuitable panning, from which the bus number might be detected, but only when the oncoming bus is already very near the user. In contrast, Viewpoint 4 is suitable, as it was taken with both suitable tilt and suitable panning. In practice, a suitable tilt can be achieved, and is defined, by holding the smartphone camera vertically. On the other hand, the definition of suitable panning is more complex because the possibility of bus-number detection must be considered. To define suitable panning, this paper proposes a method that estimates the bus position in the image.

In order to estimate the oncoming-bus position in the case where no bus appears in the image, a horizontal line (D) between the vanishing point and the bus position is calculated, as shown in Figure 4(a). First, we consider the longest possible distance for bus-number recognition. Bus numbers in Thailand were simulated with a size of 15 cm × 15 cm, and all bus-route numbers and characters, consisting of digits, English letters, and Thai characters, were tested for recognition at different distances. In the experiment, these characters were recognized in images of 800 px × 600 px under ideal day and night conditions with illuminance of 210.00–230.00 lux and 0.80–1.00 lux, respectively. According to the experiment, 15 meters is the longest distance at which bus numbers can be recognized. Next, a photo of the oncoming bus within 15 m of where the user is standing is captured, as shown in Figure 4(a). The length of D in the image is then measured between the vanishing point (Vp) of the perspective image and the square boundary of the detected bus; D measures 8.70% of the original image width (5.63 in). Finally, the D line is applied to estimate the position of the oncoming bus in the case where it does not appear, as shown in Figure 4(b). Figure 4(b) shows the perfect camera panning for recognizing the bus number, because with this viewpoint the bus number can be recognized at the farthest right-hand side of the image, which is the first position in the image at which the bus is seen. However, it would be impractical to define a single exact viewpoint as the only suitable one for users holding smartphones.

This research therefore defines a suitable viewpoint as one in which Vp falls within 25% of the range between the Vp position of the perfect camera panning and the farthest left side of the image, as shown in Figure 4(b). The value of 25% is set arbitrarily for this proposed application. As this paper addresses viewpoint classification for congested traffic, Figure 4(c) and 4(d) show examples of suitable and unsuitable viewpoints, respectively, according to this definition.

3. Proposed Method

This paper aims to classify the viewpoints of blind individuals waiting for buses at the roadside, especially in congested traffic. Moreover, in order to classify the suitable panning of the camera, this research assumes that all images are taken while holding the smartphone vertically. According to the suitable-viewpoint definition in Section 2, the vanishing point (Vp) is determined by two convergent lines on the road; however, these lines are difficult to find in congested traffic. Consequently, our method uses the car distribution in an image as a clue related to the definition in Section 2. The four main steps of the proposed method are car detection using the YOLO technique, data normalization, feature extraction, and viewpoint classification, as shown in Figure 5.

3.1 Car Detection using YOLO

Although there are many vision-based methods for detecting cars on the road [16], our proposed method applies a convolutional-network technique that has been used widely in several applications [17]. The first step of the proposed method is to find the car distribution in the image under both daytime and nighttime illumination conditions. Figure 6(a) to (d) show example images from real congested-traffic situations. To realize a real-time system for viewpoint classification, this research applies the You Only Look Once (YOLO) technique [18] for car detection. YOLO is a real-time object-detection system based on the convolutional layers of a neural network. There are many versions of YOLO; this research uses YOLOv2 [19], which is faster and more accurate than the original YOLO. Furthermore, the Microsoft COCO dataset is used for the model, with which 80 kinds of object can be detected. Generally, the output of the YOLO technique provides four parameters: (1) the object labels, such as human, dog, horse, and car; (2) a confidence value for each labeled object; (3) the x, y coordinates of the top-left boundary; and (4) the x, y coordinates of the bottom-right boundary. Although many kinds of object can be detected, this paper only needs to detect cars in the images. Thus, objects labeled as car, truck, or bus are considered, with a confidence threshold of 0.38 in this research. Since distant cars are unnecessary, 150 px was set as the minimum distance between the top-left and bottom-right coordinate points of the bounding box.

After setting all parameters, YOLO was applied to prepared images showing various real situations under congested-traffic conditions. Although the cars detected using YOLO are given as square boundaries, as shown in Figure 6(a)–(h), the proposed method only uses the x, y coordinates of the center of each boundary, drawn as blue points in Figure 6(a)–(h). All center points were passed to the feature-extraction process.
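As a concrete illustration of the parameter setting described above, the following sketch (not the authors' code) filters raw detections and keeps only the center points used later. It assumes a YOLO wrapper returning tuples of (label, confidence, top-left corner, bottom-right corner); the function and constant names are illustrative.

```python
import math

CONF_THRESHOLD = 0.38      # confidence threshold used in this paper
MIN_DIAGONAL_PX = 150      # distance criterion between the two corner points
VEHICLE_LABELS = {"car", "truck", "bus"}

def car_center_points(detections):
    """Return the (x, y) center of every vehicle box passing both criteria."""
    centers = []
    for label, confidence, (x1, y1), (x2, y2) in detections:
        if label not in VEHICLE_LABELS or confidence < CONF_THRESHOLD:
            continue
        # discard distant cars whose top-left/bottom-right corners are
        # closer than 150 px to each other
        if math.hypot(x2 - x1, y2 - y1) < MIN_DIAGONAL_PX:
            continue
        centers.append(((x1 + x2) / 2.0, (y1 + y2) / 2.0))
    return centers
```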

3.2 Data Normalization

In order to provide the data for feature extraction, data normalization was necessary because the original input images had different sizes. Each center point (xn, yn) of a detected car was normalized as a percentage. Eqs. (1) and (2) show the normalization for x′n and y′n, and Figure 7 shows the outcome of this step:

$$x'_n = \frac{x_n}{\text{number of columns of the original image}} \times 100, \quad (1)$$

$$y'_n = \frac{y_n}{\text{number of rows of the original image}} \times 100. \quad (2)$$
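A minimal sketch of this normalization step, assuming the center points from the detection step are available as (x, y) pixel tuples:

```python
def normalize_centers(centers, image_width, image_height):
    """Eqs. (1) and (2): convert pixel coordinates to percentages of image size."""
    return [(x / image_width * 100.0, y / image_height * 100.0)
            for (x, y) in centers]

# Example: a center at (400, 300) in an 800 x 600 image becomes (50.0, 50.0).
```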

3.3 Feature Extraction

The feature-extraction process is an important step, involving various calculations such as those in [20]. A previous study [21] extracted thirteen features, but some possibly useful ones may have been neglected. Moreover, the car distribution in each image appears random, depending on the congested-traffic situation, as shown in Figure 7, and sometimes only a few data points are available for the feature-extraction process, as shown in Figure 7(c). Thus, a feature vector was computed for each image, consisting of statistical values, linear-regression coefficients, and geometric data. This research calculated nineteen possible features from the data points, as shown in Table 1. Since the number of data points depends on the number of detected cars in the image, some features cannot always be calculated. When only two cars are detected, the biggest-triangle-area feature is set to zero because there are not enough data points for the calculation. When only one car is detected, the standard deviations, ranges, and centers of the ranges for x and y, as well as the slope, y-intercept, and R2 of the linear regression, are also set to zero. These nineteen features are all the feasible feature values from the car distribution; Section 4 presents the experiment on feature selection that provides the best results for the classification process.
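As an illustration of Table 1, the following hedged sketch (not the authors' implementation) computes the nineteen features from the normalized center points, including the zero-filling rules for one or two detected cars. The feature names are illustrative, and at least one detected car is assumed.

```python
import itertools
import numpy as np

def extract_features(centers):
    """Compute the 19 Table 1 features from normalized car centers."""
    pts = np.asarray(centers, dtype=float)          # shape (n, 2)
    n = len(pts)
    x, y = pts[:, 0], pts[:, 1]

    feats = {"n_cars": n,
             "x_max": x.max(), "y_max": y.max(),
             "x_min": x.min(), "y_min": y.min(),
             "x_mean": x.mean(), "y_mean": y.mean(),
             "x_median": float(np.median(x)), "y_median": float(np.median(y))}

    if n >= 2:
        feats.update({"x_sd": x.std(ddof=1), "y_sd": y.std(ddof=1),
                      "x_range": x.max() - x.min(), "y_range": y.max() - y.min()})
        feats["x_cen_range"] = feats["x_range"] / 2.0   # R_x / 2, as in Table 1
        feats["y_cen_range"] = feats["y_range"] / 2.0
        denom = n * (x * x).sum() - x.sum() ** 2        # regression denominator
        m = (n * (x * y).sum() - x.sum() * y.sum()) / denom if denom else 0.0
        c = y.mean() - m * x.mean()
        ss_res = float(((y - (m * x + c)) ** 2).sum())
        ss_tot = float(((y - y.mean()) ** 2).sum())
        r2 = 1.0 - ss_res / ss_tot if ss_tot else 0.0
        feats.update({"slope": m, "intercept": c, "r2": r2})
    else:
        # only one car detected: these features are set to zero, as in the text
        for k in ("x_sd", "y_sd", "x_range", "y_range", "x_cen_range",
                  "y_cen_range", "slope", "intercept", "r2"):
            feats[k] = 0.0

    # biggest triangle area (Heron's formula); zero when fewer than 3 cars
    area = 0.0
    for p, q, r in itertools.combinations(pts, 3):
        a, b, c3 = (np.linalg.norm(p - q), np.linalg.norm(q - r),
                    np.linalg.norm(r - p))
        s = (a + b + c3) / 2.0
        area = max(area, float(np.sqrt(max(s * (s - a) * (s - b) * (s - c3), 0.0))))
    feats["max_triangle_area"] = area
    return feats
```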

Up to this point, feature vectors have been provided for the classification process. For this process, this research used supervised machine learning. Five different types of supervised classifier were selected, namely the simple decision tree, Random Forest, Naïve Bayes, multi-layer perceptron, and SVM. Moreover, each classifier was tested with different numbers of features in order to find the best combination of classifier and feature-selection method, as shown in Section 4.

Generally, there are two main steps of supervised learning, namely training and testing. All feature vectors were separated into two classes as suitable and unsuitable viewpoints, following the definition in Section 2. In Figure 8, a feature matrix shows N selected features. In addition, M represents the number of datapoints, which was four hundred in this paper. Each datapoint was labeled as 1 or 0 for suitable and unsuitable viewpoints, respectively.

Ten-fold cross-validation was applied for classification. Cross-validation is widely used for data classification; the whole dataset is repeatedly divided into training and evaluation sets. In 10-fold cross-validation, the dataset is partitioned into 10 equal subsamples. One partition is then evaluated while the others are used for training. The process is repeated ten times, and the average of the 10 results is used as the final accuracy of the classification step.
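The classification step itself was run in WEKA; the following sketch shows an equivalent setup in scikit-learn (an assumption, not the authors' tooling), where X is the M × N feature matrix of Figure 8 and y holds the 1/0 viewpoint labels.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

def evaluate_viewpoint_classifier(X, y):
    """10-fold cross-validated accuracy of a Random Forest, as in the text."""
    clf = RandomForestClassifier(n_estimators=100, random_state=0)
    scores = cross_val_score(clf, X, y, cv=10, scoring="accuracy")
    return scores.mean()          # averaged over the ten folds

# Illustration with a synthetic 400 x 17 feature matrix and binary labels:
# X = np.random.rand(400, 17); y = np.random.randint(0, 2, 400)
# print(evaluate_viewpoint_classifier(X, y))
```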

4. Experiments and Results

There were two experiments: car detection and viewpoint classification. In addition, 400 images under both day and night conditions were taken in real congested-traffic situations. The original images were in RGB color format with different sizes because three smartphones were used to collect them.

4.1 Car-Detection Performance

The car-detection process is a crucial step of the proposed system. YOLOv2 was used for car detection with the parameter settings described in Section 3. The performance was tested on the 400 day and night images. Table 2 shows the results of the detection process, with 79.90% accuracy compared to the actual number of cars counted by humans. However, the accuracy in the nighttime situation was lower than in the daytime (76.74% vs. 83.72%), because visibility is reduced at night due to darkness and the headlights of cars.

4.2 Viewpoint-Classification Performance

Although 19 features were provided, as shown in Section 3, this experiment was performed to find the combination of feature set and classifier giving the highest accuracy. The experiment used the WEKA 3.8 software [22] developed by the University of Waikato, New Zealand, because it offers useful feature-selection and classification tools.

This section shows results for six different feature-selection methods and five general supervised classifiers. Table 3 lists the feature-selection methods: no selection, CfsSubsetEval, correlation, information gain (InfoGain), OneRAttribute, and principal component analysis (PCA). The following five classifiers were used: simple decision tree (J48), Random Forest, Naïve Bayes, multi-layer perceptron, and SVM with sequential minimal optimization (SMO). Each classifier was tested with each feature-selection method, and the highest accuracy was obtained by the Random Forest classifier with either no selection or InfoGain. Although these two cases both showed 86.00% accuracy, InfoGain selected fewer features (17 rather than 19). Therefore, to attain the best performance for the proposed method, this research selected the Random Forest classifier and the seventeen features shown in Table 1, excluding the mean of y and the y-intercept features.
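The InfoGain ranking above was produced with WEKA's attribute evaluators; a rough scikit-learn analogue (mutual information rather than WEKA's exact InfoGain computation, with an illustrative function name) might look as follows.

```python
from sklearn.feature_selection import mutual_info_classif

def rank_features_by_information_gain(X, y, feature_names, keep=17):
    """Rank features by estimated mutual information with the class label."""
    scores = mutual_info_classif(X, y, random_state=0)
    ranked = sorted(zip(feature_names, scores), key=lambda p: p[1], reverse=True)
    return [name for name, _ in ranked[:keep]]
```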

After the classifier and features were selected, the confusion matrix of viewpoint classification was obtained, as shown in Table 4. Since there are two classes of label, namely suitable and unsuitable viewpoints, the confusion matrix provides true-positive (TP), false-positive (FP), false-negative (FN), and true-negative (TN) counts. Based on this information, the recall, precision, and F-measure can be calculated by Eqs. (3), (4), and (5), respectively:

$$\text{recall} = \frac{TP}{\text{actual suitable viewpoints}} = 0.89, \quad (3)$$

$$\text{precision} = \frac{TP}{\text{predicted suitable viewpoints}} = 0.84, \quad (4)$$

$$\text{F-measure} = \frac{2 \times \text{recall} \times \text{precision}}{\text{recall} + \text{precision}} = 0.86. \quad (5)$$
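These values can be reproduced directly from the confusion-matrix counts in Table 4, as in this short check:

```python
# Sanity check of Eqs. (3)-(5) using the Table 4 counts.
TP, FP, FN, TN = 178, 34, 22, 166
recall = TP / (TP + FN)                                    # 178 / 200 = 0.89
precision = TP / (TP + FP)                                 # 178 / 212 ~ 0.84
f_measure = 2 * recall * precision / (recall + precision)  # ~ 0.86
print(round(recall, 2), round(precision, 2), round(f_measure, 2))
```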

All outcomes were quite high, at 0.89, 0.84, and 0.86 for recall, precision, and F-measure, respectively. According to these results, the proposed method performed better than that of prior research [18].

5. Discussion

Based on the results in Section 4, the performance of car detection and viewpoint classification was tested under daytime and nighttime illumination. The average accuracy of car detection was 79.90%, as shown in Table 2, and was lower at nighttime than at daytime due to darkness and glare from car headlights. Furthermore, in order to remove cars located in other lanes of the road, the square-size parameter was set to 150 px; as a result, the number of detected cars was smaller than the number of actual cars, as seen in Table 2. However, the detection accuracy may be improved by adjusting some parameters of the YOLOv2 algorithm, such as the confidence value.

For viewpoint classification, the Random Forest classifier showed the highest performance with 86% accuracy. Moreover, seventeen features were chosen, as shown in Table 1, excluding the mean of y and the y-intercept of the linear regression. The recall and precision were 0.89 and 0.84, respectively, which are both quite high. The F-measure, which combines recall and precision, was also high at 0.86. These results show that suitable extracted features combined with an appropriate classifier can improve the performance of the proposed application, although further optimization is necessary to obtain the best result.

The final goal of the real-time application is to assist blind individuals waiting for the bus. The application's performance should ideally be perfect (i.e., 100% accuracy), because blind users cannot see and judge viewpoints by themselves. For future work, we plan to reduce the classification error for real-time implementation, taking into consideration the obstacles that obscure suitable viewpoints.

6. Conclusion

This paper proposed a novel viewpoint-classification application using computer-vision techniques for assisting blind individuals while they wait for buses. Classification under congested traffic was considered. The algorithm has four main steps, namely car detection, data normalization, feature extraction, and viewpoint classification. YOLOv2 was used for car detection because it is fast enough for real-time implementation. The centers of all detected cars were normalized, and 19 features were extracted. In order to classify the viewpoints, this research applied supervised learning. From the experimental results, the car-detection performance showed 79.90% accuracy. The classification accuracy was compared between different feature-selection methods and classifiers; 17 features and the Random Forest classifier provided the highest accuracy (86.00%). Additionally, the recall, precision, and F-measure were 0.89, 0.84, and 0.86, respectively. The proposed method is considered feasible for high-performance real-time implementation in future work.

Conflict of Interest

No potential conflict of interest relevant to this article was reported.


Figures
Fig. 1.

Overview of system design for bus identification using a smartphone camera.


Fig. 2.

Block diagram of viewpoint classification and the proposed method.


Fig. 3.

Examples of viewpoints with different camera positions.


Fig. 4.

Estimation for an oncoming bus: (a) oncoming bus appearing at a distance of 15 m from camera; (b) viewpoint estimation without an oncoming bus using the D line (perfect viewpoint), (c) an example of a suitable viewpoint for congested traffic, and (d) an example of an unsuitable viewpoint for congested traffic.


Fig. 5.

Main processes of the proposed method.


Fig. 6.

Original images and detected cars: (a)–(d) original images at day and night under congested-traffic conditions; (e)–(h) outcomes of car detection using the YOLO technique.


Fig. 7.

Data normalization: (a) and (d) example of data normalization for suitable viewpoints; (b) and (c) example of data normalization for unsuitable viewpoints.


Fig. 8.

Feature-matrix arrangement and its labels.


TABLES

Table 1

List of features and their descriptions

Feature | Description
Number of cars | $n$, where $X = \{x_1, \ldots, x_n\}$ and $Y = \{y_1, \ldots, y_n\}$
Maximum value of x and y | $x_{\max} = \max(X)$, $y_{\max} = \max(Y)$
Minimum value of x and y | $x_{\min} = \min(X)$, $y_{\min} = \min(Y)$
Mean of x and y | $\bar{x} = \frac{1}{n}\sum_{i=1}^{n} x_i$, $\bar{y} = \frac{1}{n}\sum_{i=1}^{n} y_i$
Median of x and y | $\mathrm{med}_x = X_{\left(\frac{n+1}{2}\right)}$, $\mathrm{med}_y = Y_{\left(\frac{n+1}{2}\right)}$
Standard deviation of x and y | $sd_x = \sqrt{\frac{1}{n-1}\sum_{i=1}^{n}(x_i - \bar{x})^2}$, $sd_y = \sqrt{\frac{1}{n-1}\sum_{i=1}^{n}(y_i - \bar{y})^2}$
Range of x and y | $R_x = x_{\max} - x_{\min}$, $R_y = y_{\max} - y_{\min}$
Center of range for x and y | $CenR_x = \frac{R_x}{2}$, $CenR_y = \frac{R_y}{2}$
Slope (m) and y-intercept (c) of linear regression | $m = \frac{n\sum_{i=1}^{n} x_i y_i - \sum_{i=1}^{n} x_i \sum_{i=1}^{n} y_i}{n\sum_{i=1}^{n} x_i^2 - \left(\sum_{i=1}^{n} x_i\right)^2}$, $c = \bar{y} - m\bar{x}$
$R^2$ of linear regression | $R^2 = 1 - \frac{\sum_{i=1}^{n}(y_i - \hat{y}_i)^2}{\sum_{i=1}^{n}(y_i - \bar{y})^2}$, where $\hat{y}_i$ is the y value of the linear regression
Biggest triangle area | $\mathit{Area} = \sqrt{S(S-A)(S-B)(S-C)}$, $S = \frac{A+B+C}{2}$, where A, B, and C are the side lengths of the triangle

Table 2

Car-detection performance using YOLOv2 and the proposed parameter setting

Conditions | Actual number of cars | Detected number of cars | Accuracy (%)
Day time | 860 | 720 | 83.72
Night time | 1,106 | 851 | 76.74
Day and night times | 1,966 | 1,571 | 79.90

Table 3

Comparison of accuracy with different feature selections and classifiers

Feature-selection method | Number of selected features | J48 | Random Forest | Naïve Bayes | Multi-layer perceptron | SMO
No selection | 19 | 81.00 | 86.00 | 78.00 | 85.00 | 78.75
CfsSubsetEval | 6 | 76.75 | 85.75 | 82.50 | 83.25 | 75.50
Correlation | 13 | 78.25 | 85.25 | 77.25 | 81.50 | 77.25
InfoGain | 17 | 81.50 | 86.00 | 78.50 | 83.00 | 77.00
OneRAttribute | 10 | 80.00 | 84.75 | 77.00 | 80.50 | 76.25
PCA | 6 | 75.00 | 81.25 | 76.75 | 85.50 | 79.25

(Classifier columns show accuracy in %.)

Table 4

Confusion matrix for viewpoint classification

Predicted \ Actual | Suitable viewpoints | Unsuitable viewpoints | Total
Suitable viewpoints | TP = 178 | FP = 34 | 212
Unsuitable viewpoints | FN = 22 | TN = 166 | 188

References
  1. Ministry of Social Development and Human Security. (2018) . Statistics for hearing impaired people. Available http://nadt.or.th/pages/stat61.html
  2. Thailand Association of The Blind. (2016) . The Association of the Blind in Thailand and the Thai Blind Foundation invites to support the development of the quality of life of the Thai blind. Available http://tabgroup.tab.or.th/node/124
  3. Polinski, J, and Ochocinski, K (2017). Tactile graphics at railway stations–an important source of information for blind and visually impaired travellers. Problemy Kolejnictwa. 175, 63-69.
  4. Wiener, WR, Welsh, RL, and Blasch, BB (2010). Instructional Strategies and Practical Applications. New York, NY: AFB Press
  5. Al-Kalbani, J, Suwailam, RB, Al-Yafai, A, Al-Abri, D, and Awadalla, M 2015. Bus detection system for blind people using RFID., Proceedings of 2015 IEEE 8th GCC Conference & Exhibition, Muscat, Oman, Array, pp.1-6. https://doi.org/10.1109/IEEEGCC.2015.7060038
  6. Santos, EAB 2015. Design of an interactive system for city bus transport and visually impaired people using wireless communication, smartphone and embedded system., Proceedings of 2015 SBMO/IEEE MTT-S International Microwave and Optoelectronics Conference (IMOC), Porto de Galinhas, Brazil, Array, pp.1-5. https://doi.org/10.1109/IMOC.2015.7369087
  7. Lavanya, G, Preethy, W, Shameem, A, and Sushmitha, R 2013. Passenger bus alert system for easy navigation of blind., Proceedings of 2013 International Conference on Circuits, Power and Computing Technologies (ICCPCT), Nagercoil, India, Array, pp.798-802. https://doi.org/10.1109/ICCPCT.2013.6529043
  8. Quoc, TP, Kim, MC, Lee, HK, and Eom, KH (2010). Wireless sensor network apply for the blind U-bus system. International Journal of u-and e-Service, Science and Technology. 3, 13-24.
  9. Wongta, P, Kobchaisawat, T, and Chalidabhongse, TH 2016. An automatic bus route number recognition., Proceedings of 2016 13th International Joint Conference on Computer Science and Software Engineering (JCSSE), Khon Kaen, Thailand, Array, pp.1-6. https://doi.org/10.1109/JCSSE.2016.7748910
  10. Tsai, CM, and Yeh, ZM (2014). Detection of bus routes number in bus panel via learning approach. Intelligent Information and Database Systems. Cham: Springer, pp. 302-311 https://doi.org/10.1007/978-3-319-05458-232
  11. Cheng, CC, Tsai, CM, and Yeh, ZM 2014. Detection of bus route number via motion and YCbCr features., Proceedings of 2014 International Symposium on Computer, Consumer and Control, Taichung, Taiwan, Array, pp.31-34. https://doi.org/10.1109/IS3C.2014.21
  12. Lee, D, Yoon, H, Park, C, Kim, J, and Park, CH 2013. Automatic number recognition for bus route information aid for the visually-impaired., Proceedings of 2013 10th International Conference on Ubiquitous Robots and Ambient Intelligence (URAI), Jeju, Korea, Array, pp.280-284. https://doi.org/10.1109/URAI.2013.6677367
  13. Pan, H, Yi, C, and Tian, Y 2013. A primary travelling assistant system of bus detection and recognition for visually impaired people., Proceedings of 2013 IEEE International Conference on Multimedia and Expo Workshops (ICMEW), San Jose, CA, Array, pp.1-6. https://doi.org/10.1109/ICMEW.2013.6618346
  14. Tangsuksant, W, and Wada, C (2018). Classification of viewpoints related to bus-waiting for the assistance of blind people. International Journal of New Technology and Research. 4, 43-52.
  15. Cheon, M, and Lee, H (2017). Vision-based vehicle detection system applying hypothesis fitting. International Journal of Fuzzy Logic and Intelligent Systems. 17, 58-67. https://doi.org/10.5391/IJFIS.2017.17.2.58
  16. Kim, B, and Lee, J (2018). A deep-learning based model for emotional evaluation of video clips. International Journal of Fuzzy Logic and Intelligent Systems. 18, 245-253. https://doi.org/10.5391/IJFIS.2018.18.4.245
  17. Redmon, J, Divvala, S, Girshick, R, and Farhadi, A 2016. You only look once: unified, real-time object detection., Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Las Vegas, NV, Array, pp.779-788. https://doi.org/10.1109/CVPR.2016.91
  18. Redmon, J, and Farhadi, A. (2016) . YOLO9000: better, faster, stronger. Available https://arxiv.org/abs/1612.08242
  19. Lee, H (2018). Combining locality preserving projection with global information for efficient recognition. International Journal of Fuzzy Logic and Intelligent Systems. 18, 120-125. https://doi.org/10.5391/ijfis.2018.18.2.120
  20. Noda, M, Miyamoto, H, Tangsuksant, W, Kitagawa, K, and Wada, C 2018. Basic study on viewpoints classification method using car distribution on the road., Proceedings of 2018 International Conference on Information and Communication Technology Robotics (ICT-ROBOT), Busan, Korea, Array, pp.1-3. https://doi.org/10.1109/ICTROBOT.2018.8549907
  21. Weka 3: Data Mining Software in Java. Available https://www.cs.waikato.ac.nz/ml/weka/
Biographies

Watcharin Tangsuksant received his B.Eng. degree in Biomedical Engineering from Srinakharinwirot University, Bangkok, Thailand, in 2013 and his M.Eng. degree in Biomedical Engineering from King Mongkut’s Institute of Technology Ladkrabang, Bangkok, Thailand, in 2015. He was a Lecturer with Rangsit University, Pathum Thani, Thailand, in 2016. He is currently a Ph.D. student of the Graduate School of Life Science and System Engineering, Kyushu Institute of Technology, Japan. His research interests include the image processing, signal processing, and assistive technology for disabled people.

E-mail: w.tangsuksant.m@hotmail.com


Masashi Noda received his B.Eng. degree in Department of Mechanical and Control Engineering from the Kyushu Institute of Technology in 2018. He is currently a master student of the Graduate School of Life Science and System Engineering, Kyushu Institute of Technology, Japan. His current research interests include computer vision and artificial intelligence.

E-mail: noda.masashi430@mail.kyutech.jp


Kodai Kitagawa received his B.Eng. degree from the National Institution for Academic Degrees and Quality Enhancement of Higher Education, Japan, in 2017. He is currently a master student of the Graduate School of Life Science and Systems Engineering, Kyushu Institute of Technology, Japan. His current research interests are biomedical engineering, physical therapy science, wearable sensing, and occupational health. He is a member of Society of Physical Therapy Science (SPTS).

E-mail: kitagawakitagawa156@gmail.com


Chikamune Wada received the B.Eng. degree in mechanical engineering from the Osaka University, Japan, in 1990 and the Ph.D. degree in biomedical engineering from Hokkaido University, Japan, in 1996. From 1996 to 2001, he was an Assistant Professor with the Sensory Information Laboratory, in Hokkaido University. In 2001, he became an Associate Professor with Human-function Substitution System Laboratory, Kyushu Institute of Technology. Since 2016, he has been a Professor with Human-function Substitution System Laboratory. His research interests include assistive technology, especially measuring human motion and informing the disabled people of the necessary information to improve their QOLs. He is a senior member of the Institute of Electronics, Information and Communication Engineers (IEICE).

E-mail: wada@brain.kyutech.ac.jp