Original Article

International Journal of Fuzzy Logic and Intelligent Systems 2021; 21(3): 205-212

Published online September 25, 2021

https://doi.org/10.5391/IJFIS.2021.21.3.205

© The Korean Institute of Intelligent Systems

Similarity based Deep Neural Networks

Seungyeon Lee1*, Eunji Jo1*, Sangheum Hwang2, Gyeong Bok Jung3, and Dohyun Kim1

1Department of Industrial and Management Engineering, Myongji University, Yongin, Korea
2Department of Industrial and Information Systems Engineering, Seoul National University of Science and Technology, Seoul, Korea
3Department of Physics Education, Chosun University, Gwangju, Korea

Correspondence to: Dohyun Kim (ftgog@mju.ac.kr)
*These authors contributed equally to this work.

Received: February 10, 2021; Accepted: September 2, 2021

This is an Open Access article distributed under the terms of the Creative Commons Attribution Non-Commercial License (http://creativecommons.org/licenses/by-nc/3.0/) which permits unrestricted noncommercial use, distribution, and reproduction in any medium, provided the original work is properly cited.

Deep neural networks (DNNs) have recently attracted attention in various areas. Their hierarchical architecture is used to model complex nonlinear relationships in high-dimensional data. DNNs generally require large amounts of data to train their millions of parameters, and training a DNN on a small number of high-dimensional observations can therefore result in overfitting. To alleviate this problem, we propose a similarity-based DNN that effectively reduces the dimensionality of the data. The proposed method uses a kernel function to compute pairwise similarities between observations, which serve as the input, and the nonlinearity in these similarities is then explored using a DNN. Experimental results show that the proposed method performs effectively regardless of the dataset used, implying that it can serve as an alternative when learning from a small number of high-dimensional observations.

Keywords: Deep neural networks (DNNs), Kernel method, Feature extraction, High-dimensional data
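
As a reading aid, the following is a minimal sketch of the similarity-based input construction described in the abstract: each observation is represented by its vector of kernel similarities to the n training observations, shrinking the input dimensionality from d features to n similarities before a standard feed-forward network is trained. The RBF kernel, the gamma value, the network sizes, and the toy data shapes below are illustrative assumptions, not the authors' implementation.

# Minimal sketch of similarity-based DNN input (illustrative assumptions only).
import numpy as np
from sklearn.neural_network import MLPClassifier

def rbf_kernel_matrix(A, B, gamma=1e-3):
    """Pairwise RBF similarities k(a, b) = exp(-gamma * ||a - b||^2)."""
    sq_dists = (np.sum(A**2, axis=1)[:, None]
                + np.sum(B**2, axis=1)[None, :]
                - 2.0 * A @ B.T)
    return np.exp(-gamma * np.maximum(sq_dists, 0.0))

# Toy high-dimensional data: 30 observations with 2,500 features (hypothetical).
rng = np.random.default_rng(0)
X_train = rng.normal(size=(30, 2500))
y_train = rng.integers(0, 3, size=30)
X_test = rng.normal(size=(10, 2500))

# Similarity features: rows are observations, columns are similarities to the
# 30 training observations, so the network input has 30 nodes instead of 2,500.
K_train = rbf_kernel_matrix(X_train, X_train)
K_test = rbf_kernel_matrix(X_test, X_train)

# A small feed-forward network trained on the similarity representation.
clf = MLPClassifier(hidden_layer_sizes=(16, 16), max_iter=2000, random_state=0)
clf.fit(K_train, y_train)
print(clf.predict(K_test))

The sparse variant reported in Table 2 additionally removes input nodes, i.e., drops columns of the similarity matrix corresponding to unimportant training observations; that selection step is not shown in this sketch.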

This research was supported by the Basic Science Research Program through the National Research Foundation of Korea (NRF) funded by the Ministry of Science, ICT & Future Planning (No. NRF-2017R1E1A1A01077375).

No potential conflict of interest relevant to this article was reported.

Seungyeon Lee received her B.S. and M.S. degrees in industrial and management engineering from Myongji University, Korea, in 2018 and 2020, respectively. Her research interests include statistical data mining, deep learning, recommender systems, and graph data analysis.

E-mail: sylee1@mju.ac.kr

Eunji Jo received her B.S. and M.S. degrees in industrial and management engineering from Myongji University, Korea, in 2018 and 2020, respectively. Her research interests include statistical data mining, machine learning, deep learning, and series data analysis.

E-mail: goodji@mju.ac.kr

Sangheum Hwang received his Ph.D. degree in industrial and systems engineering from Korea Advanced Institute of Science and Technology (KAIST), Korea, in 2012. He is currently an assistant professor at the Department of Industrial and Information Systems Engineering, Seoul National University of Science and Technology. His research interests are in the areas of statistical learning methods, kernel machines, and deep learning.

E-mail: shwang@seoultech.ac.kr

Gyeong Bok Jung received her Ph.D. degree from the Department of Applied Physics at the School of Engineering at the University of Tokyo, Japan. She is currently an assistant professor with the Department of Physics Education at Chosun University in Korea. Her research interests include surface-enhanced Raman scattering and biomedical applications.

E-mail: gbjung@chosun.ac.kr

Dohyun Kim received his M.S. and Ph.D. degrees in industrial engineering from Korea Advanced Institute of Science and Technology (KAIST), Korea, in 2002 and 2007, respectively. He is currently an associate professor with the Department of Industrial and Management Engineering, Myongji University. His research interests include statistical data mining, deep learning, and graph data analysis.

E-mail: ftgog@mju.ac.kr


Figure 1. Input layer of proposed method and basic DNN.

Figure 2. Flowchart of proposed method.

Figure 3. Selection of important observations.

Figure 4. Plot of prediction accuracy.

Table 1. Experimental data.

Dataset | Observations | Features | Classes
Brain   | 30           | 2,501    | 3
BV      | 30           | 2,301    | 3

Table 2. Classification results for each dataset.

Dataset | Method                              | Accuracy      | Number of parameters | Number of input nodes removed
Brain   | DNNs                                | 0.979 ± 0.064 | 50,880               | 0
Brain   | Proposed method (linear kernel)     | 0.967 ± 0.000 | 2,490                | 0
Brain   | Proposed method (RBF kernel)        | 0.985 ± 0.017 | 22,300               | 0
Brain   | Proposed sparse method (RBF kernel) | 0.912 ± 0.059 | 1,260                | 4 (20%)
BV      | DNNs                                | 0.868 ± 0.030 | 4,304,000            | 0
BV      | Proposed method (linear kernel)     | 0.767 ± 0.020 | 96,900               | 0
BV      | Proposed method (RBF kernel)        | 0.889 ± 0.025 | 24,470               | 0
BV      | Proposed sparse method (RBF kernel) | 0.866 ± 0.025 | 3,670                | 3 (15%)
