
A Scalable Feature Based Clustering Algorithm for Sequences with Many Distinct Items

Sangheum Hwang, and Dohyun Kim

1Department of Industrial & Information Systems Engineering, Seoul National University of Science and Technology, Seoul, Korea, 2Department of Industrial and Management Engineering, Myongji University, Yongin, Korea
Correspondence to: Dohyun Kim (ftgog@mju.ac.kr)
Received November 14, 2018; Revised December 15, 2018; Accepted December 21, 2018.
This is an Open Access article distributed under the terms of the Creative Commons Attribution Non-Commercial License (http://creativecommons.org/licenses/by-nc/3.0) which permits unrestricted non-commercial use, distribution, and reproduction in any medium, provided the original work is properly cited.
Abstract

Various kinds of sequence data have grown explosively in recent years. As more such data become available, clustering is needed to understand the structure of sequence data. However, the existing clustering algorithms for sequence data are computationally demanding. To avoid this problem, a feature-based clustering algorithm has been proposed. That algorithm, however, uses only a subset of all possible frequent sequential patterns as features, which may distort the similarities between sequences in practice, especially when dealing with sequence data with a large number of distinct items such as customer transaction data. This article develops a feature-based clustering algorithm that uses the complete set of frequent sequential patterns as features and handles sequences of sets of items as well as sequences of single items with many distinct items. The proposed algorithm projects the sequence data into a feature space whose dimensions consist of the complete set of frequent sequential patterns, and then employs the K-means clustering algorithm. Experimental results show that the proposed algorithm generates more meaningful clusters than the compared algorithms regardless of the dataset and of parameters such as the minimum support of frequent sequential patterns and the number of clusters. Moreover, the proposed algorithm can be applied to large sequence databases since it scales linearly with the number of sequences.

Keywords : Sequence data, Feature-based clustering, Frequent sequential patterns
1. Introduction

The amount of sequence data has grown explosively in recent years and is expected to grow more rapidly than ever. Understanding and clustering such sequence data are invaluable for most companies in identifying different groups of customers and thereby developing a marketing strategy tailored to each group. In biological science, grouping protein sequences that share similar structures is important because such sequences are likely to have similar functionality.

Commonly employed algorithms for clustering sequence data are based on sequence alignment methods [1–3], which have been mainly studied in bioinformatics and computational biology to deal with biological data such as DNA and protein sequences. Most existing sequence alignment methods focus on developing effective similarity measures between sequences and efficient algorithms to compute those measures. With such computed similarity measures, any traditional hierarchical or partitional clustering algorithm can then be employed. In these methods, the calculation of the similarity or distance between two sequences can be reformulated as an optimal sequence alignment problem (see Section 2.1), which fits well in the framework of dynamic programming [3]. However, the quadratic computational complexity of pairwise comparison in the dynamic programming algorithm requires much computational time to obtain clustering results, which makes it impractical for most applications that require clustering a large sequence dataset [4].

Other widely used algorithms for clustering sequence data are model-based methods. In these methods, analytical or statistical models are constructed to describe the nature of each cluster of sequences, and for that purpose suffix trees and Markov models [5–9] are frequently used. However, these methods are not without drawbacks: they also generally require much computational time since they are implemented iteratively [10].

To overcome the drawbacks of the existing approaches, a feature-based clustering approach has been proposed. For instance, Guralnik and Karypis [4] suggested a feature-based approach that finds a set of independent frequent sequential patterns called features, and projects the sequences into a new space whose dimensions consist of those features. Then, a vector-space clustering algorithm based on K-means is applied. To determine the set of features, they suggested both a global and a local approach. In the global approach, two frequent sequential patterns are defined as dependent if one is a sub-pattern of the other or if the number of sequences supporting both of them is greater than a predefined threshold. In the local approach, two frequent sequential patterns are regarded as dependent if they partially overlap in a particular sequence. With these criteria for determining the dependency between two frequent sequential patterns, they extracted a set of independent patterns from the mined frequent sequential patterns to minimize the distortion of similarity.

The feature-based clustering approach has several advantages over the others. First, the high computational complexity of computing pairwise similarities (as in the dynamic programming approach) can be avoided, and therefore a reduction in the computational time for clustering can be achieved, especially for large-scale sequence databases [3]. Second, the feature-based approach incorporates the global information of the whole sequence dataset into the clustering process: only sequences containing frequently occurring items are clustered, while sequences containing only rarely occurring items are excluded from the clustering process even if they are very similar to each other. Consequently, it is more robust to outliers. However, the existing feature-based clustering algorithm of Guralnik and Karypis [4] uses only a subset of all possible frequent sequential patterns as features, which may distort the similarities between sequences in practice, especially for sequence data with a large number of distinct items such as customer transaction data. Moreover, it has difficulty dealing with sequences of sets of many distinct items. A sequence of sets of many distinct items differs from a sequence of single items in that the former consists of an ordered list of sets of many distinct items, as appears in a sequence of customer transactions over time.

To overcome such drawbacks, a new feature-based clustering algorithm is developed for clustering sequences with many distinct items. It is designed to handle sequences of sets of items as well as sequences of single items. The proposed algorithm regards each sequence as a binary vector in a feature space whose orthogonal basis is the complete set of frequent sequential patterns, and employs the K-means clustering algorithm to cluster the binary vectors. The proposed approach is similar to that of Guralnik and Karypis [4] in that each sequence is projected into a new space and treated as a binary vector. In Guralnik and Karypis [4], however, only a subset of all possible frequent sequential patterns is used as features, so as to handle sequences over a small alphabet of items (e.g., biological sequences), which may have too many possible frequent sequential patterns. Using a subset of all possible sequential patterns may distort similarity, as can be seen in Figure 3 in Section 2.1. To mitigate this problem, Guralnik and Karypis [4] require an additional scan of the original sequence data to determine the independency among features. In the proposed approach, no such additional scan is required since the complete set of sequential patterns is used to minimize the distortion of similarity, and thereby the computational time can be significantly reduced.

The rest of this paper is organized as follows: in Section 2, the proposed clustering algorithm is described in detail, and Section 3 presents the experimental and computational results. Finally, conclusions are provided in Section 4.

2. Proposed Method

The proposed algorithm for clustering sequence data consists of: 1) mining the complete set of frequent sequential patterns and simultaneously representing the sequences as binary vectors; and 2) clustering binary vectors by employing the K-means clustering algorithm. In this section, clustering procedures are discussed in detail, and then these are illustrated with an example.

### 2.1 Mining Frequent Sequential Patterns

Sequential pattern mining is widely used in various areas for analyzing biological sequences, customer purchase behaviors, web usage patterns, intrusion detection, etc. [11, 12]. Frequent sequential patterns obtained from the mining process can explain the nature of sequence data, and therefore can be used as a set of features for clustering [4, 13, 14]. Several methods have been developed for efficient mining of frequent sequential patterns (e.g., see [15–18]). Agrawal and Srikant [15] may be the first to have dealt with the frequent sequential pattern mining problem. Given a large database of customer transactions, where each transaction consists of a customer ID, a transaction time, and the items purchased, all the transactions of a specific customer can be represented as a sequence of sets of items. Given a user-defined minimum support threshold, the problem of mining frequent sequential patterns is to find all subsequences that appear in the database at least as frequently as the threshold.

Let E = {e1, e2, …, eT} be a set of T distinct items, where et = t for t = 1, 2, …, T. Suppose that a database D of l distinct sequences is given. A sequence Si for i = 1, 2, …, l is an ordered list of itemsets sij for j = 1, 2, …, mi, denoted by Si = [si1, si2, …, simi]. An itemset sij is a set of items, denoted by sij = (eijk, k = 1, 2, …, nij), where eijk ∈ E. Without loss of generality, it is assumed that the elements in an itemset are sorted in ascending order. If nij = 1 for all j = 1, 2, …, mi, the sequence Si is simply a sequence of single items. For example, sequence S1 in Figure 1(a) has five itemsets: s11 = (1), s12 = (2), s13 = (3), s14 = (4), and s15 = (2, 5, 6), where the elements of each itemset represent items. A sequence Si = [si1, si2, …, simi] is said to support a sequence Si′ = [si′1, si′2, …, si′mi′] if there exist positive integers c1 < c2 < ⋯ < cmi′ such that si′1 ⊆ sic1, si′2 ⊆ sic2, …, si′mi′ ⊆ sicmi′; this relationship is denoted by Si ⊇ Si′. The support value of a sequence S* (e.g., a sequential pattern) is defined as the percentage of the sequences in D that support S*. That is,

$Support(S^*)=\frac{\left|\{S \mid S\in D,\ S\supseteq S^*\}\right|}{|D|}\times 100,$

where |X| represents the number of elements in a set X. In Figure 1, sequences S1 = [s11, s12, s13, s14, s15] = [(1)(2)(3)(4) (2, 2, 5, 6)] and S3 = [s31, s32, s33, s34] = [(3)(7)(3, 4)(2)] support p10 = [p10,1p10,2] = [(3)(2)] because p10,1s13, p10,2s15 and p10,1s31, p10,2s34, respectively. Therefore, the support value of p10 is 50%.

Given a user-defined minimum support threshold α, the set of all sequences whose support value is greater than or equal to α is called the complete set of frequent sequential patterns. The complete set of frequent sequential patterns implies that there are no constraints in the mining process except for the minimum support threshold. Obviously, the complete set includes all possible subsequences of any mined frequent sequential pattern. For example, as can be seen in Figure 1, if a sequential pattern p15 = [(3)(3, 4)] is obtained as a frequent sequential pattern, its subsequences p3 = [(3)], p4 = [(4)], p11 = [(3)(3)], and p12 = [(3, 4)] are also obtained.

In the present study, the PrefixSpan [16] algorithm is employed to obtain the complete set of frequent sequential patterns, which is used as the set of features for clustering sequence data. A binary matrix of the original sequences can be constructed simultaneously with the mining process. The PrefixSpan algorithm mines frequent sequential patterns by examining only prefixes of sequence data and projecting their corresponding postfixes into smaller projected databases. Frequent sequential patterns are obtained by recursively building and scanning the projected databases, and the process is repeated until no frequent itemset is found in a projected database. This algorithm mines all possible frequent sequential patterns (i.e., the complete set of sequential patterns) and is linearly scalable. For a detailed description of the algorithm, the reader is referred to [16].
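
To illustrate the prefix-projection idea, the following is a simplified sketch of PrefixSpan restricted to sequences of single items; the actual algorithm [16] also handles itemset sequences and uses more efficient pseudo-projection, so this is an assumption-laden illustration rather than the implementation used in the experiments.

```python
from collections import defaultdict

def prefixspan(db, min_count):
    """Mine the complete set of frequent sequential patterns from sequences
    of single items (simplified sketch of PrefixSpan)."""
    results = []

    def mine(prefix, projected):
        # Count each item once per postfix in the projected database.
        counts = defaultdict(int)
        for postfix in projected:
            for item in set(postfix):
                counts[item] += 1
        for item in sorted(counts):
            if counts[item] >= min_count:
                pattern = prefix + [item]
                results.append(pattern)
                # Project: keep the postfix after the first occurrence of `item`.
                new_proj = [postfix[postfix.index(item) + 1:]
                            for postfix in projected if item in postfix]
                mine(pattern, new_proj)

    mine([], db)
    return results
```

For example, `prefixspan([[1, 2, 3], [1, 3], [2, 3]], min_count=2)` yields the complete set `[[1], [1, 3], [2], [2, 3], [3]]`; the pattern `[1, 2]` is absent because only one sequence contains item 2 after item 1.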

Example 2.1

Assume that the purchase records of four customers are given. In Figure 1(a), a total of 23 sequential patterns are obtained when the minimum support value is set to 50%, and they compose the complete set of frequent sequential patterns. Figure 1(b) shows a binary matrix that is constructed simultaneously with the pattern mining process.

Guralnik and Karypis [4] argued that the similarity distortion problem arises from high dependency and partial overlap between patterns. High dependency means that if a sequential pattern (e.g., p14 in Figure 1) is supported by a sequence (e.g., S3 in Figure 1), then any sub-pattern of the former (e.g., p3 in Figure 1) is also supported by the latter; such a pair of patterns (e.g., p14 and p3) is called dependent. Partial overlap means that two patterns share a duplicated region in a particular sequence; for example, in Figure 1, the two sequential patterns p14 and p23 supported by sequence S3 are partially overlapped because of the duplicated region (item 7) in S3. In their method, the original sequence data must therefore be scanned to determine the independency or partial overlap among sequential patterns, and additional constraints (e.g., the minimum or maximum length of sequential patterns) are inserted into the mining process to obtain an independent subset of the complete set of frequent sequential patterns. The original sequences are then projected into a new space whose dimensions consist of the selected sequential patterns, after pruning one of each pair of dependent or partially overlapped patterns. In Figure 1, sequential patterns p6 to p23 are obtained under the additional constraint that the minimum pattern length be greater than 1.

However, the similarity distortion problem is actually caused by using a subset of all possible sequential patterns (i.e., a subset of the complete set of frequent sequential patterns) as features, rather than by high dependency and partial overlap between patterns. In other words, the problem mainly stems from a lack of information that sufficiently describes the sequential characteristics of the whole dataset. In Figure 1, the approach of Guralnik and Karypis [4] fails to utilize sequential patterns p1 to p5 because of the additional constraint, even though these patterns carry useful information.

Especially in the case of sequences with many distinct items, the distortion of similarity can be minimized by using the complete set of frequent sequential patterns to construct a basis of the feature space, and the unnecessary data scan process described above can thus be avoided. Moreover, there is little difference between the computational times needed to obtain the subset (e.g., p6 to p23 in Figure 1) and the complete set of frequent sequential patterns, because most sequential pattern mining algorithms are based on the Apriori property [19]; in other words, shorter sequential patterns must be obtained first to get longer ones. Therefore, the whole computational time can be significantly reduced by eliminating the unnecessary data scan process.

Before discussing the similarity distortion issue, the sequence alignment method for directly calculating the similarity between sequences is introduced. Let S1 and S2 be two sequences of m1 and m2 itemsets, respectively. First, the similarity between two itemsets s1i for i = 1, 2, …, m1 and s2j for j = 1, 2, …, m2 is defined as the ratio of the number of common items between the two itemsets to the average number of items of the two itemsets. Then, an alignment score between S1 and S2 can be calculated as the sum of similarities of itemsets for a possible alignment of S1 against S2. The optimal alignment score is defined as the maximum alignment score between two sequences, and the similarity between the two sequences is determined as the ratio of the optimal alignment score to the average length (defined as the average number of itemsets) of the two sequences. For example, the similarity between S1 = [s11, s12] = [(1, 2, 3)(1, 2, 4, 5)] and S2 = [s21] = [(1, 2)] can be calculated as follows. There are two possible alignments between S1 and S2, as depicted in Figure 2.

In the case of Figure 2(a), the alignment score is 0.8 because the similarity between the two itemsets s11 and s21 is 0.8. On the other hand, in the case of Figure 2(b), the alignment score is 0.67. Therefore, the optimal alignment score is 0.8, and the similarity between S1 and S2 is the optimal alignment score divided by the average number of itemsets of the two sequences, i.e., 0.8/1.5 = 0.53.
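
The alignment procedure described above can be sketched with a standard dynamic program over itemset similarities. The following is a minimal illustration under the definitions given in the text, not necessarily the exact implementation used in the experiments.

```python
def itemset_sim(a, b):
    """Ratio of the number of common items to the average number of items."""
    common = len(set(a) & set(b))
    return common / ((len(a) + len(b)) / 2)

def sequence_sim(s1, s2):
    """Optimal alignment score divided by the average length (in itemsets)."""
    m, n = len(s1), len(s2)
    # score[i][j]: best alignment score of s1[:i] against s2[:j];
    # unmatched itemsets may be skipped without penalty.
    score = [[0.0] * (n + 1) for _ in range(m + 1)]
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            score[i][j] = max(score[i - 1][j],
                              score[i][j - 1],
                              score[i - 1][j - 1]
                              + itemset_sim(s1[i - 1], s2[j - 1]))
    return score[m][n] / ((m + n) / 2)
```

For the worked example, `sequence_sim([(1, 2, 3), (1, 2, 4, 5)], [(1, 2)])` returns 0.8 / 1.5 ≈ 0.53, matching the optimal alignment of Figure 2(a).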

In Figure 3, the similarity distortion issue is illustrated with the previous example. The elements of the matrices represent the similarities between sequences. Figure 3(b) shows the similarity matrix obtained by the sequence alignment method, which provides the optimal sequence similarity score based on dynamic programming. Figure 3(c) shows the similarity matrix obtained by the proposed method based on the complete set of frequent sequential patterns, and Figures 3(d) and 3(e) show the similarity matrices of the projected sequences (e.g., see Figure 1(b)) computed by cosine similarity under different constraints on the minimum length of sequential patterns. The cosine similarity is defined as follows. Let X and Y be vectors of n dimensions. The cosine similarity between X and Y is

$\frac{\langle X\cdot Y\rangle}{\sqrt{\langle X\cdot X\rangle\,\langle Y\cdot Y\rangle}},$

where 〈X · Y〉 represents an inner product between X and Y. For example, similarity between projected sequence data S1 and S3 in Figure 1(b) can be calculated as follows:

$sim(S_1,S_3)=\frac{\langle S_1\cdot S_3\rangle}{\sqrt{\langle S_1\cdot S_1\rangle\,\langle S_3\cdot S_3\rangle}}=\frac{7}{\sqrt{12\times 18}}=0.48.$
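
This computation can be sketched as follows. The vectors x and y below are hypothetical binary vectors (Figure 1(b) is not reproduced here) chosen only so that the inner products match the example: 7 common ones, with squared norms 12 and 18.

```python
from math import sqrt

def cosine_sim(x, y):
    """Cosine similarity between two binary feature vectors."""
    dot = sum(a * b for a, b in zip(x, y))
    return dot / sqrt(sum(a * a for a in x) * sum(b * b for b in y))

# Hypothetical 23-dimensional binary vectors: x has 12 ones, y has 18 ones,
# and they overlap in exactly 7 positions, so <x,y> = 7 as in the example.
x = [1 if i < 12 else 0 for i in range(23)]
y = [1 if i >= 5 else 0 for i in range(23)]
```

Here `cosine_sim(x, y)` evaluates to 7 / √(12 × 18) ≈ 0.48, agreeing with the value above.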

To measure the distortion of similarity in Figure 3, various matrix norms, including the 1-norm, 2-norm, infinity norm, and Frobenius norm, were used. Matrix norms are generally used to quantify the difference between two matrices; for a detailed description of the matrix norms, the reader is referred to Kim et al. [20, 21]. The comparisons using the matrix norms (Table 1) show that using the complete set of frequent sequential patterns as features (C in Table 1) yields the smallest difference from the optimal alignment score based on dynamic programming (B in Table 1), regardless of the choice of norm. That is, the proposed method causes less distortion of similarity than the existing feature-based method using a subset of frequent sequential patterns (D and E in Table 1).

### 2.2 K-Means Clustering

The binary matrix obtained from the mining process is used for clustering the original sequences. By treating original sequences as binary vectors, the computation time for calculating similarity between sequences can be significantly reduced compared to the sequence alignment method [3].

Rows of the binary matrix represent the original sequences in the feature space. The proposed method employs the K-means clustering algorithm with the cosine distance, commonly used for document clustering [22]. Detailed procedures are as follows:

1. Randomly select K rows of the binary matrix as initial cluster centroids.

2. Assign each point to its closest centroid by computing the cosine distance between the point and each cluster centroid.

3. Compute a new centroid for each cluster as the mean vector of the points in that cluster, after normalizing each point to unit Euclidean length.

4. Repeat Steps 2 and 3 until cluster centroids converge.
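
The four steps above can be sketched as follows. This is a minimal illustration with random initialization and a fixed iteration cap, not a production implementation; function names are illustrative.

```python
import random
from math import sqrt

def _unit(v):
    """Normalize a vector to unit Euclidean length (zero vectors unchanged)."""
    n = sqrt(sum(a * a for a in v))
    return [a / n for a in v] if n else list(v)

def cosine_distance(x, y):
    return 1.0 - sum(a * b for a, b in zip(_unit(x), _unit(y)))

def kmeans_cosine(points, k, max_iter=100, seed=0):
    rng = random.Random(seed)
    # Step 1: randomly select k rows as initial centroids.
    centroids = [list(p) for p in rng.sample(points, k)]
    labels = None
    for _ in range(max_iter):
        # Step 2: assign each point to its closest centroid (cosine distance).
        new_labels = [min(range(k), key=lambda c: cosine_distance(p, centroids[c]))
                      for p in points]
        if new_labels == labels:   # Step 4: assignments (hence centroids) converged.
            break
        labels = new_labels
        # Step 3: new centroid = mean of the unit-normalized member points.
        for c in range(k):
            members = [_unit(p) for p, lab in zip(points, labels) if lab == c]
            if members:
                centroids[c] = [sum(col) / len(members) for col in zip(*members)]
    return labels, centroids
```

For well-separated binary vectors such as `[[1, 1, 0], [1, 0, 0], [0, 1, 1], [0, 0, 1]]` with k = 2, the first two and the last two points end up in the same clusters.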

The time complexity of the K-means clustering algorithm is O(lKI), where l is the number of cases (i.e., sequences in the present problem), K is the number of clusters, and I is the number of iterations until convergence. Since l is usually much larger than both K and I, the time complexity is nearly linear in l [23].

3. Experimental Results

In order to verify the effectiveness and accuracy of the proposed method (i.e., the feature-based approach plus K-means clustering), computational experiments were conducted using a real dataset and a synthetic dataset. There are some difficulties in directly comparing the proposed method with Guralnik and Karypis [4] because of ambiguities in choosing parameters such as the minimum and maximum length of sequential patterns, or the predefined threshold in the global approach. Therefore, the proposed method was compared to: 1) the sequence alignment method combined with K-medoids clustering; 2) the sequence alignment method combined with hierarchical clustering; and 3) the sequence alignment method combined with density peaks clustering.

The K-medoids clustering algorithm [24] is similar to the K-means clustering algorithm except for the step of calculating cluster centers. The K-medoids algorithm takes as the center of each cluster the most centrally located of its existing points, whereas the K-means algorithm computes the center of each cluster as the mean vector of the points in that cluster. Since the mean of a set of sequences cannot be computed directly, the K-medoids clustering algorithm, in which cluster centers need not be computed, is compared with the proposed method.

The hierarchical clustering algorithms can be classified into three categories according to how the distance between clusters is defined. They include single-linkage, average-linkage, and complete-linkage methods [25]. Sometimes, the single-linkage method has a tendency to group a large number of sequences into a single large cluster. This phenomenon is called the “chain effect”. The single-linkage method was not considered since both datasets show a high “chain effect” when the method was used. Accordingly, the average-linkage and complete-linkage methods are compared with the proposed method.

The density peaks clustering algorithm assumes that cluster centers have higher densities than their neighbors and are at a relatively large distance from points with higher densities [26]. This approach has the advantage that the cluster centers can be selected intuitively and the clusters are recognized easily regardless of their shape and of the dimensionality of the data. Moreover, this algorithm can be run from the similarity matrix alone. For this comparison, the cluster centers are selected using γ = ρ × δ, where ρ (the density) and δ (the minimum distance between a sequence and any other sequence with higher density) are computed on the basis of the similarity between sequences. For a detailed description of the density peaks clustering method, the reader is referred to Rodriguez and Laio [26].
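
The selection statistic γ = ρ × δ can be computed directly from a similarity matrix. The sketch below is an illustration under simplifying assumptions (distance taken as one minus similarity, and a hard cutoff density count); the exact density estimate in [26] may differ.

```python
def density_peaks_gamma(sim, cutoff):
    """Compute gamma = rho * delta for each point from a similarity matrix.
    sim is a symmetric n x n matrix of similarities in [0, 1]."""
    n = len(sim)
    dist = [[1.0 - sim[i][j] for j in range(n)] for i in range(n)]
    # rho: number of other points within the cutoff distance (hard cutoff).
    rho = [sum(1 for j in range(n) if j != i and dist[i][j] < cutoff)
           for i in range(n)]
    delta = []
    for i in range(n):
        higher = [dist[i][j] for j in range(n) if rho[j] > rho[i]]
        # For the highest-density point, delta is its maximum distance to any point.
        delta.append(min(higher) if higher else max(dist[i]))
    return [r * d for r, d in zip(rho, delta)]
```

On a toy matrix where one point is similar to two others and a fourth point is isolated, the densest point attains the largest γ, so it would be selected as a cluster center.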

### 3.1 Measures of Cluster Quality

There are two types of measures for validating clustering results. The external quality measure compares the discovered clusters to a priori known clusters, whereas the internal quality measure evaluates whether the discovered clusters are inherently appropriate for the data, without reference to external knowledge. Assuming no a priori knowledge of the clusters in the two experimental datasets, we adopt an internal quality measure in the present study.

The internal measure of cluster quality used in the present study is a weighted average similarity where the similarity between sequences is based on the optimal alignment score discussed in Section 2.1.

Let C = {C1, C2, …, CK} be a set of clusters. The weighted average similarity (WAS) is calculated as follows; a higher WAS value represents a better clustering solution.

1. Compute the average similarity (AS) for each cluster Ck for 1 ≤ kK.

$AS(C_k)=\frac{\sum sim(S_i,S_{i'})}{|C_k|\,(|C_k|-1)}\quad\text{for all }S_i,S_{i'}\in C_k\text{ and }i<i',$

where sim(Si, Si′) is the similarity between sequences Si and Si′.

2. Calculate the weighted average similarity (WAS) of clusters C as follows.

$WAS(C)=\sum_{k=1}^{K}|C_k|\cdot AS(C_k).$
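
The two steps above can be sketched as follows; `sim` stands for any pairwise similarity function, such as the optimal alignment score of Section 2.1, and clusters are represented simply as lists of sequence identifiers.

```python
def average_similarity(cluster, sim):
    """AS(C_k): sum of pairwise similarities over i < i', divided by
    |C_k|(|C_k| - 1), following the formula above."""
    n = len(cluster)
    total = sum(sim(cluster[i], cluster[j])
                for i in range(n) for j in range(i + 1, n))
    return total / (n * (n - 1))

def weighted_average_similarity(clusters, sim):
    """WAS(C) = sum over clusters of |C_k| * AS(C_k)."""
    return sum(len(c) * average_similarity(c, sim) for c in clusters)
```

For instance, with a constant similarity of 1.0 between all sequences, two clusters of size two each contribute 2 × 0.5, giving a WAS of 2.0.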

### 3.2 Retail Market Dataset

The retail market dataset used in the experiment was obtained from an e-commerce company. The dataset contains 57,686 transactions on 305 items made by 5,001 customers during a six-month period; only customers who made three or more transactions were considered. As a preprocessing step, we generated for each customer a sequence listing that customer's transactions in ascending order of transaction time. This yielded 5,001 sequences of sets of items, so the data can be categorized as "sequences of sets of items with many distinct items".

To analyze the sensitivity of the clustering results to the minimum support threshold, two support threshold values, 1% and 3%, were considered. In this dataset, we found 657 and 110 frequent sequential patterns with the minimum support of 1% and 3%, respectively. To compare the five clustering algorithms under the same conditions, only the sequences that support at least one of the mined frequent sequential patterns were clustered; sequences that support none of the patterns are treated as outliers. As a result, 4 original sequences were excluded with the minimum support of 1%, and 96 sequences with 3%. The WASs were then calculated when the number of clusters is 10, 20, and 30.

Figures 4(a) and 4(b) show the WASs for the five clustering algorithms. The experimental results show that the proposed clustering method performs better than the sequence alignment methods combined with the other traditional and recent clustering algorithms, regardless of the support value and the number of clusters. It is interesting to note that the WAS of the proposed feature-based clustering combined with K-means is even higher than that of the recent clustering algorithm based on the optimal alignment score. This implies that the clustering task is readily accomplished in the vector space spanned by the binary vectors obtained from the complete set of frequent sequential patterns, and that the proposed feature-based approach better reflects the global nature of the sequences by using the complete set of frequent sequential patterns.

### 3.3 Synthetic Dataset

The synthetic dataset T10I4D100K.dat [27] generated by IBM Almaden Quest research group was used in this study. This dataset contains 100,000 transactional data with 1,000 items. We randomly chose 5,000 transactional data to reduce the time for calculating the weighted average similarity. Actually, this dataset is not a sequential one, but just a transactional one. That is, the order of items is not important. Without loss of generality, we assumed that each sequence represents a sequence of items, and treated the dataset as “sequences of items with many distinct items”.

In a similar manner to the previous experiment, we considered two support threshold values, 0.5% and 1%, and found 1,406 and 419 frequent sequential patterns, respectively. Every sequence supported at least one of the patterns at the 0.5% support level, whereas four sequences supported none of the patterns at the 1% level. As mentioned earlier, sequences that support none of the sequential patterns are treated as outliers and thus excluded.

Figures 5(a) and 5(b) show the weighted average similarities for the five clustering algorithms. The experimental results show that the proposed clustering method performs better than the sequence alignment methods combined with the other clustering algorithms, regardless of the support value and the number of clusters. Together with the previous experiment on sequences of sets of items, these results show that the quality of the clusters obtained by the proposed method is much better than that of the other approaches, implying that it can serve as a useful alternative for clustering sequences with many distinct items.

4. Conclusion

In this paper, a feature-based clustering algorithm for sequences with many distinct items is developed. Given two parameters, a minimum support threshold and the number of clusters, the proposed method finds meaningful clusters of sequences. It uses the complete set of frequent sequential patterns as features for clustering and employs K-means clustering with the cosine distance to obtain meaningful clusters. Unlike the existing approaches, which have difficulty dealing with sequences of sets of items, the proposed method can handle sequences of sets of items as well as sequences of single items when there are many distinct items. Moreover, it can be applied to large-scale databases since it is linearly scalable.

To verify the effectiveness of the proposed method, it is compared to other traditional and recent clustering algorithms in terms of the weighted average similarity. Two datasets were used in the experiment. One is a retail market dataset which consists of sequences of sets of items with many distinct items, and the other is a synthetic dataset of sequences of single items with many distinct items. Experimental results show that a projection of sequences into a binary vector space helps to find better clusters, and thereby the proposed method outperforms the sequence alignment based clustering approaches regardless of the support threshold value and the number of clusters considered.

The above encouraging results depend on a proper choice of the minimum support threshold for sequential patterns. Therefore, developing an efficient method for determining a proper support value may be a fruitful area of future research. In addition, to reduce the computational time, a direct clustering method that merges the two processes of mining frequent sequential patterns and clustering the projected binary vectors could be developed in further study.

Acknowledgments

This study was supported by the Research fund for a new professor by the Seoul National University of Science and Technology (SeoulTech).

Conflict of Interest

No potential conflict of interest relevant to this article was reported.

Figures
Fig. 1.

Example of constructing the binary matrix. (a) Sequence data and the complete set of frequent sequential patterns with support of 50%. (b) Constructed binary matrix.

Fig. 2.

Example of two possible alignments: (a) first alignment and (b) second alignment.

Fig. 3.

Similarity distortion problem with constrained patterns. (a) Sequence data. (b) Similarity based on the optimal alignment score. (c) Similarity based on the complete set (constraints: support = 50%). (d) Similarity based on a subset of the complete set (constraints: support = 50%, min length = 2). (e) Similarity based on a subset of the complete set (constraints: support = 50%, min length = 3).

Fig. 4.

Experimental results for the retail market dataset with the support of 1% (a) and 3% (b).

Fig. 5.

Experimental results for the synthetic dataset with the support of 0.5% (a) and 1% (b).

TABLES

### Table 1

Comparison of similarity matrices using matrix norms

| Norm | Optimal alignment score (B) | Complete set (C) | Difference (B–C) | Subset, length = 2 (D) | Difference (B–D) | Subset, length = 3 (E) | Difference (B–E) |
|---|---|---|---|---|---|---|---|
| 1-norm | 2.780 | 2.730 | 0.050 | 2.490 | 0.290 | 2.200 | 0.580 |
| 2-norm | 2.649 | 2.484 | 0.166 | 2.242 | 0.408 | 1.999 | 0.650 |
| Infinity norm | 2.780 | 2.730 | 0.050 | 2.490 | 0.290 | 2.200 | 0.580 |
| Frobenius norm | 2.804 | 2.728 | 0.076 | 2.606 | 0.198 | 2.614 | 0.190 |
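For reference, the norms in Table 1 are the standard matrix norms: the 1-norm is the maximum absolute column sum, the infinity norm the maximum absolute row sum, the Frobenius norm the square root of the sum of squared entries, and the 2-norm the largest singular value. Because the similarity matrices are symmetric, the 1-norm and infinity norm coincide, which is why the first and third rows of the table match. A stdlib-only sketch for small dense matrices (function names are ours):

```python
import math

def one_norm(A):
    """Maximum absolute column sum."""
    return max(sum(abs(A[i][j]) for i in range(len(A))) for j in range(len(A[0])))

def inf_norm(A):
    """Maximum absolute row sum."""
    return max(sum(abs(x) for x in row) for row in A)

def fro_norm(A):
    """Square root of the sum of squared entries."""
    return math.sqrt(sum(x * x for row in A for x in row))

def two_norm(A, iters=200):
    """Largest singular value, approximated by power iteration on A^T A."""
    m, n = len(A), len(A[0])
    v = [1.0] * n
    for _ in range(iters):
        w = [sum(A[i][j] * v[j] for j in range(n)) for i in range(m)]  # w = A v
        u = [sum(A[i][j] * w[i] for i in range(m)) for j in range(n)]  # u = A^T w
        scale = math.sqrt(sum(x * x for x in u))
        v = [x / scale for x in u]
    w = [sum(A[i][j] * v[j] for j in range(n)) for i in range(m)]
    return math.sqrt(sum(x * x for x in w))
```

In practice these would be computed with a numerical library (e.g., a dense SVD for the 2-norm); the power iteration above is only a compact approximation for well-separated singular values.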

References
1. Noh, SK, Kim, YM, Kim, DK, and Noh, BN (2006). An efficient similarity measure for clustering of categorical sequences. AI 2006: Advances in Artificial Intelligence. Heidelberg: Springer, pp. 372-382 https://doi.org/10.1007/11941439_41
2. Oh, SJ, and Kim, JY (2004). A hierarchical clustering algorithm for categorical sequence data. Information Processing Letters. 91, 135-140. https://doi.org/10.1016/j.ipl.2004.04.002
3. Xu, R, and Wunsch, D (2005). Survey of clustering algorithms. IEEE Transactions on Neural Networks. 16, 645-678. https://doi.org/10.1109/TNN.2005.845141
4. Guralnik, V, and Karypis, G 2001. A scalable algorithm for clustering sequential data. Proceedings of the IEEE International Conference on Data Mining, San Jose, CA, pp. 179-186. https://doi.org/10.1109/ICDM.2001.989516
5. Bicego, M, Murino, V, and Figueiredo, MAT (2003). Similarity-based clustering of sequences using hidden Markov models. Machine Learning and Data Mining in Pattern Recognition. Heidelberg: Springer, pp. 86-95 https://doi.org/10.1007/3-540-45065-3_8
6. de Angelis, L, and Dias, JG (2014). Mining categorical sequences from data using a hybrid clustering method. European Journal of Operational Research. 234, 720-730. https://doi.org/10.1016/j.ejor.2013.11.002
7. Smyth, P (1997). Clustering sequences with hidden Markov models. Advances in Neural Information Processing Systems. 9, 648-654.
8. Xiong, T, Wang, S, Jiang, Q, and Huang, JZ (2014). A novel variable-order Markov model for clustering categorical sequences. IEEE Transactions on Knowledge and Data Engineering. 26, 2339-2353. https://doi.org/10.1109/TKDE.2013.104
9. Yang, J, and Wang, W 2003. CLUSEQ: efficient and effective sequence clustering. Proceedings of the 19th International Conference on Data Engineering, Bangalore, India, pp. 101-112. https://doi.org/10.1109/ICDE.2003.1260785
10. Enright, AJ, and Ouzounis, CA (2000). GeneRAGE: a robust algorithm for sequence clustering and domain detection. Bioinformatics. 16, 451-457. https://doi.org/10.1093/bioinformatics/16.5.451
11. Kim, M (2017). Simultaneous learning of sentence clustering and class prediction for improved document classification. International Journal of Fuzzy Logic and Intelligent Systems. 17, 35-42. https://doi.org/10.5391/IJFIS.2017.17.1.35
12. Laxman, S, and Sastry, PS (2006). A survey of temporal data mining. Sadhana. 31, 173-198. https://doi.org/10.1007/BF02719780
13. Morzy, T, Wojciechowski, M, and Zakrzewicz, M (2001). Scalable hierarchical clustering method for sequences of categorical values. Advances in Knowledge Discovery and Data Mining, 282-293. https://doi.org/10.1007/3-540-45357-1_31
14. Yang, Y, and Padmanabhan, B 2003. Segmenting customer transactions using a pattern-based clustering approach. Proceedings of the 3rd IEEE International Conference on Data Mining, Melbourne, FL, pp. 411-418. https://doi.org/10.1109/ICDM.2003.1250947
15. Agrawal, R, and Srikant, R 1995. Mining sequential patterns. Proceedings of the 11th International Conference on Data Engineering, Taipei, Taiwan, pp. 3-14. https://doi.org/10.1109/ICDE.1995.380415
16. Pei, J, Han, J, Mortazavi-Asl, B, Pinto, H, Chen, Q, Dayal, U, and Hsu, M-C 2001. PrefixSpan: mining sequential patterns efficiently by prefix-projected pattern growth. Proceedings of the 17th International Conference on Data Engineering, Heidelberg, Germany, pp. 215-224. https://doi.org/10.1109/ICDE.2001.914830
17. Zaki, MJ (2001). SPADE: an efficient algorithm for mining frequent sequences. Machine Learning. 42, 31-60. https://doi.org/10.1023/A:1007652502315
18. Zhang, Z, Huang, J, and Wei, Y (2005). FI-FG: frequent item sets mining from datasets with high number of transactions by granular computing and fuzzy set theory. Mathematical Problems in Engineering. 2005, Article ID 623240.
19. Zhao, Q, and Bhowmick, SS (2003). Sequential pattern mining: a survey. Technical Report No. 2003118. Singapore: Nanyang Technological University.
20. Kim, D, Lee, B, Lee, HJ, Lee, SP, Moon, YH, and Jeong, MK (2012). Automated detection of influential patents using singular values. IEEE Transactions on Automation Science and Engineering. 9, 723-733. https://doi.org/10.1109/TASE.2012.2210214
21. Seber, GA (2008). A Matrix Handbook for Statisticians. Hoboken, NJ: John Wiley & Sons
22. Zhao, Y, and Karypis, G (2001). Criterion functions for document clustering: experiments and analysis. Technical Report No. 01-040. Minneapolis, MN: Department of Computer Science, University of Minnesota
23. Kantardzic, M (2002). Data Mining: Concepts, Models, Methods and Algorithms. Hoboken, NJ: John Wiley & Sons
24. Hastie, T, Tibshirani, R, and Friedman, J (2001). The Elements of Statistical Learning: Data Mining, Inference, and Prediction. New York, NY: Springer
25. Dong, G, and Pei, J (2007). Sequence Data Mining. New York, NY: Springer
26. Rodriguez, A, and Laio, A (2014). Clustering by fast search and find of density peaks. Science. 344, 1492-1496. https://doi.org/10.1126/science.1242072
27. Frequent Itemset Mining Dataset Repository (FIMI Repository). http://fimi.cs.helsinki.fi/data/
Biographies

Sangheum Hwang received the Ph.D. degree in industrial and systems engineering from KAIST, Korea, in 2012. Currently, he is an Assistant Professor with the Department of Industrial and Information Systems Engineering, Seoul National University of Science and Technology. His research interests are in the areas of statistical learning methods, kernel machines and deep learning.

E-mail: shwang@seoultech.ac.kr

Dohyun Kim received the M.S. and Ph.D. degrees in industrial engineering from KAIST, Korea, in 2002 and 2007, respectively. Currently, he is an Associate Professor with the Department of Industrial and Management Engineering, Myongji University. His research interests include statistical data mining, deep learning and graph data analysis.

E-mail: ftgog@mju.ac.kr

December 2018, 18 (4)