Architecture of a conventional SCNN. The network is trained by contrastive loss in the training stage, whereas a distance function is used to compute the similarity metric in the testing stage.
The proposed ESCNN architecture, which consists of three parts: (a) the Siamese part, (b) the extension part, and (c) the decision part. The feature dimensions are denoted as
Visualization of the features learned by the ESCNN: (a) positive and (b) negative samples.
Training strategy of the proposed network. The network is optimized by a combination of two loss functions: 1) contrastive loss for the Siamese part and 2) cross-entropy loss for all parts, including the extension and decision parts.
Examples from the iLIDS-VID dataset.
Some example results: (a) positive and (b) negative samples.
ROC curves for the methods under consideration.
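The captions above describe the conventional SCNN, trained with a contrastive loss and tested with a distance function, and the proposed ESCNN, trained with a combination of contrastive and cross-entropy losses. The following is a minimal PyTorch-style sketch of such a combined objective; the layer sizes, margin, loss weight alpha, and module names are illustrative assumptions, not details taken from the article.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class ESCNNSketch(nn.Module):
    def __init__(self, feat_dim=128):
        super().__init__()
        # (a) Siamese part: a shared-weight encoder applied to both inputs.
        self.encoder = nn.Sequential(
            nn.Conv2d(3, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(64, feat_dim),
        )
        # (b) extension and (c) decision parts: operate on the feature pair
        # and output a two-class (same / different) prediction.
        self.decision = nn.Sequential(
            nn.Linear(2 * feat_dim, 64), nn.ReLU(),
            nn.Linear(64, 2),
        )

    def forward(self, x1, x2):
        f1, f2 = self.encoder(x1), self.encoder(x2)
        logits = self.decision(torch.cat([f1, f2], dim=1))
        return f1, f2, logits

def contrastive_loss(f1, f2, label, margin=1.0):
    # label: 1 for a positive (same-identity) pair, 0 for a negative pair.
    d = F.pairwise_distance(f1, f2)
    return torch.mean(label * d.pow(2)
                      + (1 - label) * F.relu(margin - d).pow(2))

def training_step(model, x1, x2, label, alpha=0.5):
    # Combined objective: contrastive loss on the Siamese features plus
    # cross-entropy loss on the decision output; alpha is a hypothetical weight.
    f1, f2, logits = model(x1, x2)
    loss_con = contrastive_loss(f1, f2, label.float())
    loss_ce = F.cross_entropy(logits, label.long())
    return loss_con + alpha * loss_ce
```

At test time, following the first caption, the similarity of a pair can be computed directly from the learned features, e.g. `F.pairwise_distance(f1, f2)`, without the decision part.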