International Journal of Fuzzy Logic and Intelligent Systems 2021; 21(3): 205-212
Published online September 25, 2021
https://doi.org/10.5391/IJFIS.2021.21.3.205
© The Korean Institute of Intelligent Systems
Seungyeon Lee1*, Eunji Jo1*, Sangheum Hwang2, Gyeong Bok Jung3, and Dohyun Kim1
1Department of Industrial and Management Engineering, Myongji University, Yongin, Korea
2Department of Industrial and Information Systems Engineering, Seoul National University of Science and Technology, Seoul, Korea
3Department of Physics Education, Chosun University, Gwangju, Korea
Correspondence to: Dohyun Kim (ftgog@mju.ac.kr)
*These authors contributed equally to this work.
This is an Open Access article distributed under the terms of the Creative Commons Attribution Non-Commercial License (http://creativecommons.org/licenses/by-nc/3.0/) which permits unrestricted noncommercial use, distribution, and reproduction in any medium, provided the original work is properly cited.
Deep neural networks (DNNs) have recently attracted attention in various areas. Their hierarchical architecture is used to model complex nonlinear relationships in high-dimensional data. DNNs generally require large amounts of data to train their millions of parameters, so training a DNN on a small number of high-dimensional observations can result in overfitting. To alleviate this problem, we propose a similarity-based DNN that effectively reduces the dimensionality of the data. The proposed method uses a kernel function to compute pairwise similarities between observations, which serve as the network's input; the nonlinearity underlying these similarities is then explored by a DNN. Experimental results show that the proposed method performs effectively regardless of the dataset used, implying that it can serve as an alternative when learning from a small number of high-dimensional observations.
Keywords: Deep neural networks (DNNs), Kernel method, Feature extraction, High-dimensional data
No potential conflict of interest relevant to this article was reported.
E-mail: Seungyeon Lee (sylee1@mju.ac.kr), Eunji Jo (goodji@mju.ac.kr), Sangheum Hwang (shwang@seoultech.ac.kr), Gyeong Bok Jung (gbjung@chosun.ac.kr), Dohyun Kim (ftgog@mju.ac.kr)
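As a concrete illustration of the approach summarized in the abstract, the sketch below builds the similarity-based input with an RBF kernel and trains a small feed-forward network on it. This is a minimal sketch under stated assumptions: the kernel width, layer sizes, and helper names (e.g., `rbf_kernel_matrix`) are illustrative, and scikit-learn's `MLPClassifier` stands in for the paper's DNN rather than reproducing its actual architecture.

```python
# Minimal sketch of the similarity-based DNN idea: represent each observation
# by its kernel similarities to the training set, then train a small network.
# All names and sizes are illustrative assumptions, not the paper's code.
import numpy as np
from sklearn.neural_network import MLPClassifier

def rbf_kernel_matrix(A, B, gamma=1e-3):
    """Pairwise RBF similarities: K[i, j] = exp(-gamma * ||A_i - B_j||^2)."""
    sq_dists = (
        np.sum(A**2, axis=1)[:, None]
        + np.sum(B**2, axis=1)[None, :]
        - 2.0 * A @ B.T
    )
    return np.exp(-gamma * sq_dists)

# Toy data with few observations and many features, mirroring the Brain
# dataset's shape in Table 1 (n = 30 observations, p = 2,501 features).
rng = np.random.default_rng(0)
X_train = rng.normal(size=(30, 2501))
y_train = rng.integers(0, 3, size=30)          # 3 classes, as in Table 1
X_test = rng.normal(size=(10, 2501))

# Each observation is re-represented by its similarity to every training
# point, so the input dimension drops from p = 2,501 to n = 30.
K_train = rbf_kernel_matrix(X_train, X_train)  # shape (30, 30)
K_test = rbf_kernel_matrix(X_test, X_train)    # shape (10, 30)

# A small feed-forward network on the similarity features.
clf = MLPClassifier(hidden_layer_sizes=(16, 8), max_iter=2000, random_state=0)
clf.fit(K_train, y_train)
predictions = clf.predict(K_test)
```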
Figure 1. Input layer of the proposed method and the basic DNN.
Figure 2. Flowchart of the proposed method.
Figure 3. Selection of important observations.
Figure 4. Plot of prediction accuracy.
Table 1. Experimental data.

| Dataset | Observations | Features | Classes |
|---|---|---|---|
| Brain | 30 | 2,501 | 3 |
| BV | 30 | 2,301 | 3 |
Table 2. Classification results for each dataset.

| Dataset | Method | Accuracy | Number of parameters | Number of input nodes removed |
|---|---|---|---|---|
| Brain | DNN | 0.979 ± 0.064 | 50,880 | 0 |
| | Proposed method (linear kernel) | 0.967 ± 0.000 | 2,490 | 0 |
| | Proposed method (RBF kernel) | 0.985 ± 0.017 | 22,300 | 0 |
| | Proposed sparse method (RBF kernel) | 0.912 ± 0.059 | 1,260 | 4 (20%) |
| BV | DNN | 0.868 ± 0.030 | 4,304,000 | 0 |
| | Proposed method (linear kernel) | 0.767 ± 0.020 | 96,900 | 0 |
| | Proposed method (RBF kernel) | 0.889 ± 0.025 | 24,470 | 0 |
| | Proposed sparse method (RBF kernel) | 0.866 ± 0.025 | 3,670 | 3 (15%) |
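The parameter savings in Table 2 follow directly from the input dimensionality. For a first hidden layer of width h, a network fed the raw features needs about (p + 1) × h first-layer weights, while the similarity representation needs only (n + 1) × h. As a rough illustration (assuming a single hidden layer, since the exact architectures are not listed here), the Brain data with p = 2,501 features and n = 30 observations gives roughly an 80-fold reduction in first-layer weights.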