A New Automatic Gait Cycle Partitioning Method and Its Application to Human Identification
International Journal of Fuzzy Logic and Intelligent Systems 2017;17(2):51-57
Published online July 1, 2017
© 2017 Korean Institute of Intelligent Systems.

Sungjun Hong, and Euntai Kim

School of Electrical and Electronic Engineering, Yonsei University, Seoul, Korea
Correspondence to: Euntai Kim (etkim@yonsei.ac.kr)
Received May 21, 2017; Revised June 23, 2017; Accepted June 23, 2017.
This is an Open Access article distributed under the terms of the Creative Commons Attribution Non-Commercial License (http://creativecommons.org/licenses/by-nc/3.0) which permits unrestricted non-commercial use, distribution, and reproduction in any medium, provided the original work is properly cited.
Abstract

Gait cycle partitioning is an important prerequisite for human gait analysis tasks such as gait modeling, gait recognition, and gait feature analysis. In this paper, we propose a new automatic gait cycle partitioning method based on two simple gait representations. The proposed method is then applied to find the keyframe corresponding to the rest position in a gait-based human identification system. To demonstrate the validity of the proposed method, the CASIA gait dataset A and the SOTON gait database are used to evaluate the recognition performance of a gait recognition system that identifies subjects using decision-level fusion based on majority voting.

Keywords : Gait cycle partitioning, Gait recognition, Human identification, Width vector, Mass vector
1. Introduction

Human gait has attracted attention in the biometrics research field over recent years. For human identification, a unique advantage of human gait is its ability to operate at a distance. Further, it can be measured non-intrusively and captured at low resolution. Since most biometrics such as fingerprint, face, and iris are restricted to controlled environments, gait can be used in situations where other biometrics might not be applicable. Research related to human gait includes gait modeling [1], model-based [2–5] and appearance-based [6–13] gait recognition, gait feature analysis [4, 14], silhouette extraction and refinement [3, 7, 15], etc. It has great potential in human identification and visual surveillance systems.

However, human gait is a complex and cyclical process requiring the synergy of muscles, bones, and the nervous system [16]. In particular, periodicity is a characteristic that distinguishes gait from other biometrics. Precise period estimation and gait cycle partitioning for a given gait sequence are without doubt essential to any gait recognition system. The autocorrelation method [17, 18] is a common way to estimate the average walking period from the foreground sum signal. In [19], a sinusoidal signal was fitted to the foreground sum curve so that the period could be read off directly. Minima of the sum signal were detected in [5] and used to divide cycles. In our previous work [14], to find the keyframe corresponding to the rest position, we detected minima of the norm of the width vector as a function of time for a given gait sequence.

The limitations of previous gait cycle partitioning methods lie in two aspects. The first problem is badly segmented silhouettes caused by complex backgrounds; the foreground sum signal is usually noisy, so preprocessing is required prior to analysis. The other problem is that there is no systematic algorithm to find local minima of the foreground sum signal or of the norm as a function of time. Consequently, some previous works have simply assumed that a complete gait partitioning is available to the gait recognition system.

In this paper, in order to divide a given gait sequence into complete rest-to-rest gait cycles, we introduce a new automatic gait cycle partitioning method based on two kinds of gait features: the width vector and the mass vector. Since the method is simple and precise, no preprocessing filter is required. The proposed gait partitioning method is adopted to detect the keyframe silhouette in the gait recognition system of [14] and is evaluated on the CASIA gait dataset A and the SOTON gait database.

The rest of the paper is organized as follows. Section 2 describes two kinds of gait representation. Section 3 presents an automatic gait cycle partitioning technique and gait feature extraction. Section 4 presents a gait recognition system for human identification. Section 5 reports the experimental results on two gait databases. Finally, conclusions are drawn in Section 6.

2. Gait Representation

Studies in medical science and psychophysics indicate that there exists distinct information in gait to discriminate subjects. For example, joint angles may be sufficient to identify humans by their manner of walking, but it is difficult to recover joint angles reliably from a video sequence. It is thus reasonable to extract discriminative features from appearance. Kale and his associates [9, 10] proposed the width of the silhouette, called the width vector, as a suitable gait representation for human identification. The width vector is defined by the horizontal distance between the x-coordinates of the leftmost and rightmost nonzero pixels in each row of a silhouette image. Thus, for a given W × H silhouette image, the width vector can be written as

$$\mathbf{w}(t) = [w_1(t), w_2(t), \ldots, w_H(t)]^\top \in \mathbb{R}^H, \tag{1}$$

where $w_h(t) = x_h^{R}(t) - x_h^{L}(t) \ge 0$ for $h = 1, 2, \ldots, H$ is the difference in location between the rightmost boundary pixel $x_h^{R}(t)$ and the leftmost boundary pixel $x_h^{L}(t)$ in row $h$ of the $t$th frame.
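As a concrete illustration, the following minimal NumPy sketch computes the width vector of a binary silhouette; the function name and the convention that nonzero pixels belong to the foreground are our own assumptions, not part of the paper.

```python
import numpy as np

def width_vector(silhouette):
    """Width vector of an H x W binary silhouette (nonzero pixels = foreground).

    For each row h, w_h(t) = x_h^R(t) - x_h^L(t): the horizontal distance between
    the rightmost and leftmost foreground pixels; empty rows contribute 0.
    """
    H = silhouette.shape[0]
    w = np.zeros(H)
    for h in range(H):
        cols = np.flatnonzero(silhouette[h])  # x-coordinates of foreground pixels in row h
        if cols.size > 0:
            w[h] = cols[-1] - cols[0]         # rightmost minus leftmost
    return w
```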

Instead of using the width vector directly as in [9], we employ the mean of the width vectors as the gait feature vector to avoid the high cost of frame-by-frame matching such as dynamic time warping. Explicitly, we calculate the mean of the width vectors as

$$\bar{\mathbf{w}} = \frac{1}{T}\sum_{t=1}^{T} \mathbf{w}(t) \in \mathbb{R}^H \tag{2}$$

with T being the number of frames in a complete rest-to-rest gait cycle. Samples of the normalized silhouette images in a complete gait cycle for a single person and their corresponding width vectors are shown in Figures 1(a) and 1(b). The rightmost image in Figure 1(b) is the mean of the width vectors over a complete gait cycle.

On the other hand, in our previous work [12] we proposed the projection of a silhouette image as a gait feature called the mass vector. The mass vector is defined as the number of nonzero pixels in each row of a silhouette image. Thus, for a given preprocessed binary silhouette image I(t) at time t, the mass vector can be written mathematically as

$$\mathbf{m}(t) = [m_1(t), m_2(t), \ldots, m_H(t)]^\top \in \mathbb{R}^H, \tag{3}$$

where $m_y(t) = \sum_{x=1}^{W} I_{x,y}(t)$ for $y = 1, 2, \ldots, H$, with $t$ being the frame number in a gait sequence, $x$ and $y$ being the indices in the 2D image coordinates, and $W$ and $H$ being the width and height of a silhouette image, respectively. Similar to the width vector, the arithmetic mean of the mass vectors extracted from the $T$ frames,

$$\bar{\mathbf{m}} = \frac{1}{T}\sum_{t=1}^{T} \mathbf{m}(t) \in \mathbb{R}^H, \tag{4}$$

is also employed as a feature vector. Some samples of the mass vector are shown in Figure 1(c). The rightmost image in Figure 1(c) is the mean of the mass vectors over a complete gait cycle.
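Analogously, a sketch of the mass vector and of the per-cycle means in Eqs. (2) and (4) could look as follows; `cycle_means` and its input layout (a list of silhouettes forming one rest-to-rest cycle) are illustrative assumptions.

```python
def mass_vector(silhouette):
    """Mass vector: number of foreground pixels in each row of the silhouette (Eq. (3))."""
    return (np.asarray(silhouette) > 0).sum(axis=1).astype(float)

def cycle_means(cycle_silhouettes):
    """Mean width and mass vectors over the T frames of one rest-to-rest cycle."""
    w_bar = np.mean([width_vector(s) for s in cycle_silhouettes], axis=0)
    m_bar = np.mean([mass_vector(s) for s in cycle_silhouettes], axis=0)
    return w_bar, m_bar
```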

3. Gait Cycle Partitioning Method

The partitioning of a gait sequence into cycles, each depicting a complete walking period, is an important issue for any gait recognition system. In most previous works, gait cycles are detected using a function of time corresponding to a measure extracted from the gait sequence. Such a time series is obtained from the sum of foreground pixels of the silhouettes [5, 20], the norm of the width vector as a function of time [14], and so on. However, the former is usually very noisy and the latter is sensitive to spurious pixels in a silhouette image. Although Boulgouris et al. take the autocorrelation of the foreground sum signal to estimate the walking period and compute the coefficients of an optimal filter to reduce the noise of the sum signal [20], complete gait cycles still cannot be obtained. Consequently, some previous works have simply assumed that a complete gait partitioning is available to the gait recognition system.

Note that a complete rest-to-rest gait cycle can be obtained by detecting an exact keyframe corresponding to the rest position. Here, to divide a gait sequence into complete gait cycles exactly, we present a keyframe detection algorithm that uses the periodic characteristics of the width vector and the mass vector. Figure 2 shows the norms of the width and mass vectors as functions of time for a given gait sequence. The valleys, or local minima, of each signal correspond to the rest positions during the gait cycle. However, the valleys of the waveform obtained from the mass vector are too ambiguous to locate, and the norm of the width vector is easily corrupted by spurious pixels in a silhouette image.

On the other hand, as shown in Figure 1, the width vector and the mass vector of a silhouette image corresponding to the rest position are almost the same. This means that the norm of the difference between the width vector and the mass vector approaches zero at the rest position, which inspires our new automatic gait cycle partitioning method. Given a width vector w(t) and a mass vector m(t) at time t, the norm of the difference vector can be written as

$$d(t) = \lVert \mathbf{w}(t) - \mathbf{m}(t) \rVert_2, \tag{5}$$

where $\lVert \cdot \rVert_2$ denotes the $l_2$ norm. Consecutive frame indices $t$ whose $d(t)$ falls below a specified threshold $\delta$ are collected into a candidate set $S$. When no further consecutive frame has a difference norm below the threshold, the keyframe index is given by

$$\tau = \arg\min_{t \in S} d(t), \tag{6}$$

and the candidate set is reinitialized as the empty set to detect the next keyframe. This procedure is repeated over the whole gait sequence, yielding M keyframe indices and M − 1 complete gait cycles. Finally, the means of the width vectors and the mass vectors are generated from each complete rest-to-rest gait cycle according to Eqs. (2) and (4). The whole algorithm is listed in Table 1. Figure 3 shows the silhouette images corresponding to the keyframe indices detected from the waveform in Figure 2 by the proposed keyframe detection algorithm.
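A compact sketch of the keyframe detection loop described above (and listed in Table 1) is given below; the default threshold value and the function interface are assumptions for illustration only.

```python
def detect_keyframes(silhouettes, delta=5.0):
    """Detect keyframe indices where the width and mass vectors nearly coincide.

    Frames with d(t) = ||w(t) - m(t)||_2 below the threshold delta are collected
    into a candidate set S; when such a run of frames ends, the frame in S with
    the smallest d(t) becomes a keyframe (rest position) and S is reset.
    """
    d = [np.linalg.norm(width_vector(s) - mass_vector(s)) for s in silhouettes]
    keyframes, S = [], []
    for t, dt in enumerate(d):
        if dt < delta:
            S.append(t)                                   # candidate frame below threshold
        elif S:
            keyframes.append(min(S, key=lambda i: d[i]))  # tau = arg min over the run
            S = []
    if S:                                                 # close a run ending at the last frame
        keyframes.append(min(S, key=lambda i: d[i]))
    return keyframes                                      # M indices -> M - 1 complete cycles
```

Consecutive keyframe indices τ(m) and τ(m + 1) then delimit the mth complete cycle from which the per-cycle means are computed.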

4. Gait Recognition

For gait-based human identification, we adopt the gait recognition system proposed in our previous work [14], as shown in Figure 4. First, a given silhouette sequence is divided into gait cycles such that each cycle corresponds to one complete rest-to-foot-forward-to-rest cycle, using the automatic gait cycle partitioning introduced in Section 3. We then generate the width vector and the mass vector from each complete cycle and classify them separately, adopting the nearest neighbor (NN) classifier for a simple classification process. The feature vector x̄(m) obtained from the mth complete gait cycle is classified as identity ID(m) by minimizing the Euclidean distance:

$$\operatorname{ID}(m) = \arg\min_{c} \min_{n} E\!\left(\bar{\mathbf{x}}(m), \bar{\mathbf{x}}_c^{n}\right) \tag{7}$$

for $n = 1, 2, \ldots, N_c$ and $c = 1, 2, \ldots, C$, where $E(\cdot,\cdot)$ denotes the Euclidean distance, $\bar{\mathbf{x}}_c^{n}$ is the $n$th training feature vector of class $c$, and $N_c$ is the number of gait feature vectors in the training data of class $c$. Finally, majority voting over the per-cycle decisions is used to obtain the final decision for a gait sequence, as shown in Figure 4.
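A minimal sketch of the per-cycle nearest-neighbor decision and the majority vote over a sequence is shown below; the training-set layout (a list of (feature vector, class label) pairs) is an assumption for illustration.

```python
from collections import Counter

def classify_cycle(x_bar, training):
    """Nearest-neighbor class label for one per-cycle feature vector.

    training: list of (feature_vector, class_label) pairs covering all classes.
    """
    return min(training, key=lambda pair: np.linalg.norm(x_bar - pair[0]))[1]

def identify_sequence(cycle_features, training):
    """Majority vote over the per-cycle decisions of one gait sequence."""
    votes = [classify_cycle(x, training) for x in cycle_features]
    return Counter(votes).most_common(1)[0][0]
```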

5. Experiments

The gait sequences used in our experiments come from two gait databases: the CASIA gait dataset A (formerly the NLPR gait database) [8] and the SOTON gait database [20]. The CASIA gait dataset A was captured on two different days in an outdoor environment. All subjects walk along a straight-line path under three different camera views with respect to the image plane, namely the canonical, oblique, and frontal views. It consists of 20 subjects and four sequences per camera view per subject, leading to a total of 80 sequences per view. In our experiments, only the canonical view of the CASIA gait dataset A is considered. The larger SOTON database from the University of Southampton contains over 100 subjects; it includes 113 different subjects with 4 sequences per subject, all captured from one canonical view.

In preprocessing, a foreground image containing the silhouette should be extracted from the original color image captured by the camera. Since both databases provide foreground images, no additional background subtraction is necessary. Next, all silhouette images are normalized to 150 × 120 pixels using the silhouette normalization technique from [11].
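The normalization of [11] is not reproduced here; as a generic stand-in, a bounding-box crop followed by a nearest-neighbor resize to 150 × 120, sketched below with OpenCV, is one common way to perform such normalization.

```python
import cv2
import numpy as np

def normalize_silhouette(silhouette, out_h=150, out_w=120):
    """Crop a binary silhouette to its foreground bounding box and resize it.

    Nearest-neighbor interpolation keeps the result binary. This is a generic
    stand-in for the normalization technique of [11], not its exact procedure.
    """
    ys, xs = np.nonzero(silhouette)
    if ys.size == 0:                                   # empty frame: return a blank image
        return np.zeros((out_h, out_w), dtype=np.uint8)
    crop = silhouette[ys.min():ys.max() + 1, xs.min():xs.max() + 1].astype(np.uint8)
    return cv2.resize(crop, (out_w, out_h), interpolation=cv2.INTER_NEAREST)
```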

To evaluate the gait recognition performance, leave-one-out cross-validation is applied to the small-subject CASIA gait dataset A and 4-fold cross-validation is used for the large-subject SOTON gait database. We first conduct three groups of experiments to examine the effectiveness of the width vector and the mass vector for gait-based human identification: the width vector (W), the mass vector (M), and both of them (W/M) are used as the gait feature. Tables 2 and 3 list the correct classification rate (CCR) and false acceptance rate (FAR) for the CASIA gait dataset A and the SOTON gait database when the different gait features are used for recognition. Using the mass vector as the feature vector yields a significant improvement over the width vector regardless of the database, and using both representations improves the performance slightly on the CASIA gait dataset A and maintains it on the SOTON gait database. From these results, we see that the mass vector provides powerful discrimination as a gait feature and the width vector gives additional information that further improves the performance.
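For completeness, the sequence-level leave-one-out protocol could be sketched as follows, reusing the functions above; the data layout (one entry per gait sequence holding its per-cycle feature vectors and subject label) is assumed for illustration.

```python
def leave_one_out_ccr(sequences):
    """Correct classification rate under sequence-level leave-one-out cross-validation.

    sequences: list of (cycle_feature_vectors, subject_label) tuples, one per gait sequence.
    """
    correct = 0
    for i, (cycles, label) in enumerate(sequences):
        # Train on the per-cycle features of every other sequence
        training = [(x, lab) for j, (cys, lab) in enumerate(sequences) if j != i for x in cys]
        if identify_sequence(cycles, training) == label:
            correct += 1
    return correct / len(sequences)
```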

6. Conclusions

In this paper, we propose a new automatic gait cycle partitioning method that divides a given gait sequence into complete rest-to-rest gait cycles. The proposed method utilizes the periodic characteristics of the width vector and the mass vector and is applied to find the keyframe corresponding to the rest position in a gait-based human identification system. Both a small-subject and a large-subject gait database are used to evaluate the gait recognition performance. As the proposed cycle partitioning method is simple and precise, it can be applied to various areas of human gait research, including gait modeling, gait recognition, and gait feature analysis.

Acknowledgements

This work was supported by the National Research Foundation of Korea (NRF) through the Biometrics Engineering Research Center (BERC) at Yonsei University in 2010 (Grant No. R11-2002-105-09002-0).

Conflict of Interest

No potential conflict of interest relevant to this article was reported.


Figures
Fig. 1.

(a) Examples of normalized silhouette images in a complete rest-to-rest gait cycle for a single person, (b) the corresponding width vectors w(t), (c) the corresponding mass vectors m(t). The rightmost images in bottom two rows show the mean of width vectors and mass vectors, respectively.


Fig. 2.

The norm of the width vector (red line), the mass vector (blue line), and their difference vector (magenta line) as a function of time.


Fig. 3.

Silhouette images selected as a keyframe by the proposed keyframe detection algorithm.


Fig. 4.

Procedure of the gait recognition system using gait cycle partitioning and decision fusion.


TABLES

Table 1

Keyframe detection and feature generation algorithm

Step 1. Keyframe detection: find the indices of the frames whose silhouettes correspond to the rest position.
1: Initialize M = 0 and S = φ
2: for t = 1 to T
3:   Extract w(t) and m(t) from the silhouette image of frame t
4:   Calculate d(t) = ||w(t) − m(t)||2
5:   if d(t) < δ then
6:     Add t into S
7:   else if S ≠ φ then
8:     M = M + 1
9:     τ(M) = arg min_{t′∈S} d(t′)
10:    Reinitialize S as φ
11:  end
12: end
Step 2. Feature generation: generate the two kinds of gait feature vectors from the M − 1 complete rest-to-rest gait cycles within a gait sequence.
13: for m = 1 to M − 1
14:   $\bar{\mathbf{w}}(m) = \frac{1}{\tau(m+1)-\tau(m)+1}\sum_{t=\tau(m)}^{\tau(m+1)} \mathbf{w}(t)$
15:   $\bar{\mathbf{m}}(m) = \frac{1}{\tau(m+1)-\tau(m)+1}\sum_{t=\tau(m)}^{\tau(m+1)} \mathbf{m}(t)$
16: end

Table 2

Correct classification rates

DB        W        M        W/M
CASIA A   0.7125   0.9125   0.9250
SOTON     0.8053   0.9602   0.9602

Table 3

False acceptance rates

DB        W        M        W/M
CASIA A   0.0151   0.0046   0.0039
SOTON     0.0017   0.0004   0.0004

References
  1. Lu, H, Plataniotis, KN, and Venetsanopoulos, AN (2006). A layered deformable model for gait analysis. Proceedings of the 7th International Conference on Automatic Face and Gesture Recognition, Southampton, UK, pp. 249-256.
  2. Lee, L, and Grimson, WEL (2002). Gait analysis for recognition and classification. Proceedings of the 5th IEEE International Conference on Automatic Face and Gesture Recognition, Washington, DC, pp. 155-162.
  3. Lee, L, Dalley, G, and Tieu, K (2003). Learning pedestrian models for silhouette refinement. Proceedings of the 9th IEEE International Conference on Computer Vision, pp. 663-670.
  4. Wagg, DK, and Nixon, MS (2004). On automated model-based extraction and analysis of gait. Proceedings of the 6th IEEE International Conference on Automatic Face and Gesture Recognition, Seoul, Korea, pp. 11-16.
  5. Sundaresan, A, RoyChowdhury, A, and Chellappa, R (2003). A hidden Markov model based framework for recognition of humans from gait sequences. Proceedings of the 2003 International Conference on Image Processing, Barcelona, Spain, pp. 14-17.
  6. Man, J, and Bhanu, B (2006). Individual recognition using gait energy image. IEEE Transactions on Pattern Analysis and Machine Intelligence. 28, 316-322.
  7. Sarkar, S, Phillips, PJ, Liu, Z, Vega, IR, Grother, P, and Bowyer, KW (2005). The humanID gait challenge problem: data sets, performance, and analysis. IEEE Transactions on Pattern Analysis and Machine Intelligence. 27, 162-177.
  8. Wang, L, Tan, T, Ning, H, and Hu, W (2003). Silhouette analysis-based gait recognition for human identification. IEEE Transactions on Pattern Analysis and Machine Intelligence. 25, 1505-1518.
  9. Kale, A, Cuntoor, N, Yegnanarayana, B, Rajagopalan, AN, and Chellappa, R (2003). Gait analysis for human identification. Proceedings of the International Conference on Audio- and Video-Based Biometric Person Authentication, Guildford, UK, pp. 706-714.
  10. Kale, A, Sundaresan, A, Rajagopalan, AN, Cuntoor, NP, Roy-Chowdhury, AK, Kruger, V, and Chellappa, R (2004). Identification of humans using gait. IEEE Transactions on Image Processing. 13, 1163-1173.
  11. Hong, S, Lee, H, Toh, KA, and Kim, E (2009). Gait recognition using multi-bipolarized contour vector. International Journal of Control, Automation and Systems. 7, 799-808.
  12. Hong, S, Lee, H, Nizami, IF, and Kim, E (2007). A new gait representation for human identification: mass vector. Proceedings of the 2nd IEEE Conference on Industrial Electronics and Applications, Harbin, China, pp. 669-673.
  13. Hong, S, Lee, H, and Kim, E (2013). Probabilistic gait modeling and recognition. IET Computer Vision. 7, 56-70.
  14. Hong, S, and Kim, E (2016). Sensitivity analysis of width representation for gait recognition. International Journal of Fuzzy Logic and Intelligent Systems. 16, 87-94.
  15. Lee, H, Hong, S, and Kim, E (2009). An efficient gait recognition with backpack removal. EURASIP Journal on Advances in Signal Processing. 2009.
  16. Saunders, JB, Inman, VT, and Eberhart, HD (1953). The major determinants in normal and pathological gait. Journal of Bone & Joint Surgery. 35, 543-558.
  17. Boulgouris, NV, Hatzinakos, D, and Plataniotis, KN (2005). Gait recognition: a challenging signal processing technology for biometric identification. IEEE Signal Processing Magazine. 22, 78-90.
  18. Boulgouris, NV, Plataniotis, KN, and Hatzinakos, D (2004). Gait recognition using dynamic time warping. Proceedings of the 2004 IEEE 6th Workshop on Multimedia Signal Processing, Siena, Italy, pp. 263-266.
  19. Little, JJ, and Boyd, JE (1998). Recognizing people by their gait: the shape of motion. Videre: Journal of Computer Vision Research. 1, 1-32.
  20. Shutler, J, Grant, M, Nixon, MS, and Carter, JN (2002). On a large sequence-based human gait database. Proceedings of the 4th International Conference on Recent Advances in Soft Computing, Nottingham, UK, pp. 66-71.
Biographies

Sungjun Hong is a research professor in the School of Electrical and Electronic Engineering at Yonsei University, Seoul, Korea. He received the B.S. degree in electrical and electronic engineering and computer science, and the Ph.D. degree in electrical and electronic engineering from Yonsei University in 2005 and 2012, respectively. Upon his graduation, he worked in the connected car industry in LG Electronics as a senior researcher from 2012 to 2013. He worked for three years as a lead software engineer in SMARTSTUDY from 2013 to 2015 prior to his current appointment. He received the IET computer vision premium (best paper) award from the Institution of Engineering and Technology (IET), UK in 2015. His research interests include machine learning, deep learning, computer vision and their various applications.

E-mail: imjune@yonsei.ac.kr


Euntai Kim received his B.S., M.S., and Ph.D. degrees in Electronic Engineering from Yonsei University, Seoul, Korea, in 1992, 1994, and 1999, respectively. He was a full-time lecturer with the Department of Control and Instrumentation Engineering, Hankyong National University, Anseong, Korea, from 1999 to 2002. Since 2002, he has been with the faculty of the School of Electrical and Electronic Engineering, Yonsei University, where he is currently a Professor. He was a visiting researcher at the Berkeley Initiative in Soft Computing, University of California, Berkeley, CA, USA. His current research interests include computational intelligence and statistical machine learning and their applications to intelligent robots, vehicles, and machine vision.

Tel: +82-2-2123-2863

Fax: +82-2-313-2875

E-mail: etkim@yonsei.ac.kr