International Journal of Fuzzy Logic and Intelligent Systems 2019; 19(4): 315-322
Published online December 25, 2019
https://doi.org/10.5391/IJFIS.2019.19.4.315
© The Korean Institute of Intelligent Systems
Nishant Chauhan and Byung-Jae Choi
Department of Electronic Engineering, Daegu University, Gyeongsan, Korea
Correspondence to:
Byung-Jae Choi (bjchoi@daegu.ac.kr)
This is an Open Access article distributed under the terms of the Creative Commons Attribution Non-Commercial License (http://creativecommons.org/licenses/by-nc/3.0/) which permits unrestricted noncommercial use, distribution, and reproduction in any medium, provided the original work is properly cited.
Artificial intelligence enhances the boundaries and capabilities of medical imaging. Hence, researchers are continuously attempting to develop efficient and automated diagnosis systems to increase the accuracy and performance of diagnosing brain abnormalities. A suitable method is therefore required to diagnose and classify brain-related diseases such as Alzheimer disease, cancer, and dementia. Magnetic resonance imaging (MRI) is a powerful imaging technique in neuroscience for studying brain images. In past years, many brain MRI classification techniques have been proposed, and machine learning and deep learning have demonstrated excellent performance in classification tasks. In this paper, a study of various brain MRI classification techniques is provided. The aim of this study is to help doctors and neurologists select an appropriate classification method based on several parameters, such as accuracy, computational complexity, and low training data availability. We also analyze and compare the performance of different classification methods based on several evaluation metrics.
Keywords: Machine learning, Deep learning, Support vector machine, Discrete wavelet transformation, Convolutional neural network, Autoencoder
The advancement of research in medical image processing reflects its significance for human life. The capture and storage of digital medical images have developed satisfactorily, but the analysis of these images has always been challenging and time-consuming. Hence, there has always been an urgent need for accurate, robust, reliable, and user-friendly techniques for early detection and diagnosis of deadly diseases in order to reduce the death rate. Brain magnetic resonance imaging is a well-known medical imaging technique used to analyze and diagnose many neurological diseases, such as brain tumor, cancer, dementia, Alzheimer disease, epilepsy, and sclerosis [1, 2]. Due to their high resolution and balanced contrast, MRI images provide useful information about the brain structure and abnormalities in the brain tissues. The human brain is mainly composed of three materials: white matter, gray matter, and cerebrospinal fluid. Doctors and radiologists can analyze brain abnormalities based on these materials and classify the disease for the treatment process [3]. When a cell in the body grows and divides in an uncontrollable manner, it is called cancer; if this happens in the brain, it is called a brain tumor. As shown in Figure 1, the abnormality in the brain (tumor) can be identified by visual inspection. Another severe brain disease is Alzheimer disease, which causes memory loss and dementia (a brain disorder). Generally, it is observed in elderly individuals, but it can also be caused by brain injuries or concussions. Studies have found that Alzheimer disease travels along white matter fibers in the brain from one region to another. Alzheimer disease destroys neurons and their connections in parts of the brain involved in memory, and it later also affects areas in the cerebral cortex that are responsible for language, reasoning, and social behaviors. However, due to the complex brain cell structure, magnetic resonance imaging (MRI) images alone are sometimes not enough to detect and identify brain abnormalities. In such cases, brain MRI classification techniques play a very important role [4].
In recent years, tremendous research has been carried out on the classification of brain MRI images [5]. Machine learning (ML) and deep learning (DL) have shown promising results in classification tasks [6–8]. ML algorithms are developed to learn from labeled data and produce an output, whereas DL algorithms learn from unlabeled data using artificial neural networks (ANNs) and network layers. In practice, DL is a subset of ML that functions in a similar way, but the two have different capabilities. In ML, trained models become progressively better at their function, but they still need some guidance (supervised learning). For example, if an algorithm returns an inaccurate prediction, the user or developer needs to step in and make adjustments to improve the functionality. In DL, however, the algorithm can assess on its own whether a prediction is accurate through its own neural network. DL algorithms can learn and make intelligent decisions on their own by creating their own ANNs. Both ML and DL are subsets of artificial intelligence (AI). In both learning paradigms, the data is the most important factor determining the quality and accuracy of the result.
An advanced kernel-based approach using a support vector machine (SVM) was described in [9]. A discrete wavelet transformation (DWT) algorithm was used to extract features from MRI images, and the proposed method was designed to classify input brain MRI images as normal or abnormal. The accuracy of this method was good for the small amount of data (65%); however, due to limited data precision and computational complexity, the SVM classifier with the advanced kernel can only work with a small amount of data.
In [10], the features extracted by DWT were reduced using principal component analysis (PCA), and the reduced feature vector was used as input to an SVM classifier. A dimensionally reduced feature dataset yields more effective results, which motivates the use of feature reduction techniques such as PCA.
In recent times, DL, a remarkable ML methodology, has expanded the detection and classification of image patterns. Deep neural networks (DNNs) have attracted the interest of many researchers in the past few years. The concept of an ANN is inspired by the biological neurons in the brain, and its goal is to solve problems in a manner similar to the human brain. It consists of input layers, hidden layers, and output layers; an ANN is an interconnected group of these layers. A deep wavelet autoencoder (DWA) based DNN approach was introduced in [11]. An autoencoder (AE) is a type of ANN that learns efficient data representations in an unsupervised manner. In this approach, the AE and DWT were combined, and their fusion with a DNN provided better accuracy and performance.
There are many DL architectures. The convolutional neural network (CNN) is one of the most popular and commonly used architectures; it can efficiently perform complex operations using convolutional filters. CNNs have many applications, such as image recognition, object detection, face recognition, and fingerprint pattern classification [12]. A standard CNN architecture consists of a combination of convolution layers (feed-forward layers) and pooling layers; after the last pooling layer, the network is connected to a fully connected layer. The role of the fully connected layers is to convert the 2D feature maps coming from the previous layers into a 1D vector for classification. The CNN architecture does not require a separate feature extraction process, because the convolution layers extract the features and the pooling layers retain the important ones. However, training a CNN is a complicated and time-consuming task that requires a large labeled dataset, which is sometimes not easily available. A typical CNN architecture is shown in Figure 2.
In [13], a CNN architecture was proposed for the detection of Alzheimer disease. This architecture has 5 convolution layers and 5 pooling layers followed by fully connected layers.
This model achieved high accuracy because of the powerful feature extraction properties of the convolutional layers. However, the model performs less well in cases where the feature properties and their interrelationships are used to determine the output.
In this paper, we implement three brain MRI classification methods and analyze their performance on the same dataset.
These methods were used to determine brain abnormalities, such as a brain tumor, or structural differences in brain tissues in diseases such as Alzheimer disease. This paper provides a comparative description of brain MRI classification methods based on their statistical measures and built-in capabilities, in order to reduce the manual effort of doctors and radiologists in selecting an appropriate classification method.
The performance comparison is based on model accuracy (%) and sensitivity (%). The rest of the paper is organized as follows. Section 2 gives a brief introduction to the three classification methods and their methodology. The results obtained from implementing the three methods are discussed in Section 3. Lastly, the conclusion is presented in Section 4.
The objective of any classification algorithm is to group together items that have similar features. A classifier makes the classification decision based on a linear or nonlinear combination of the evaluated features. Feature extraction from an image is a very important task in classification methods. MRI images sometimes contain noise introduced during the transmission and digitization processes. In this method [9], the DWT approach was applied to MRI images to remove the noise: a wavelet transform eliminates the noisy points from the image, and the inverse wavelet transform recovers the denoised image. DWT is also an effective tool for feature extraction, because it allows a detailed analysis of images at different resolution levels. The Daubechies-4 (DAUB4) wavelet transform was used to extract the wavelet approximation coefficients of the brain MRI image, and these were used as the feature vector in the classification process. In this process, the MRI image was decomposed into second-level approximation and detail components. Here, the main purpose of feature extraction was to reduce the original dataset by evaluating certain properties or features that differentiate one input pattern from another. The extracted feature vector becomes the input vector for the classifier, which examines the relevant properties in the feature space.
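As an illustration of this feature extraction step, the following is a minimal sketch (not the paper's MATLAB code) using the PyWavelets library, assuming the MRI slice is already loaded as a 2D NumPy array; the function name `dwt_features` and the input size in the usage comment are hypothetical.

```python
import numpy as np
import pywt

def dwt_features(image):
    """Two-level Daubechies-4 (DAUB4) decomposition of a 2D MRI slice.
    The level-2 approximation coefficients are flattened into a feature vector."""
    coeffs = pywt.wavedec2(image, 'db4', level=2)  # [cA2, (cH2, cV2, cD2), (cH1, cV1, cD1)]
    cA2 = coeffs[0]                                # second-level approximation sub-band
    return cA2.flatten()                           # feature vector for the classifier

# usage with a hypothetical 256 x 256 slice:
# features = dwt_features(np.random.rand(256, 256))
```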
The goal of classification is to group together items with similar feature values from a set of mixed feature-value items. The SVM is a binary classifier that takes labeled data from two classes as input and assigns new data to one of the two classes. Like other ML techniques, the SVM has two steps: training and testing. In the training step, known data along with previously known decision values are fed to the SVM; the training data enables the SVM to classify unknown images. For the two-class classification problem, the SVM maps the input data into a higher-dimensional space using a radial basis function (RBF) kernel. A linear hyperplane classifier is then applied in this transformed space, utilizing the pattern vectors that are closest to the decision boundary (the support vectors). Let $\{(\mathbf{x}_i, y_i)\}_{i=1}^{N}$ denote the training set, where $\mathbf{x}_i$ is the feature vector of the $i$-th image and $y_i \in \{+1, -1\}$ is its class label.

The decision function of the SVM is

$$f(\mathbf{x}) = \operatorname{sgn}\!\left(\sum_{i=1}^{N} \alpha_i y_i K(\mathbf{x}_i, \mathbf{x}) + b\right),$$

where $K(\mathbf{x}_i, \mathbf{x}) = \exp\!\left(-\|\mathbf{x}_i - \mathbf{x}\|^2 / (2\sigma^2)\right)$ is the RBF kernel, $\alpha_i$ are the coefficients obtained during training (nonzero only for the support vectors), and $b$ is the bias term.
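A minimal training/testing sketch of such an RBF-kernel SVM is shown below, written with scikit-learn rather than the MATLAB toolbox used in the paper; the arrays `X` and `y` (DWT feature vectors and their normal/abnormal labels) and the 70/30 split are assumptions for illustration.

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

def train_rbf_svm(X, y):
    """Train an SVM with an RBF kernel on DWT feature vectors and report test accuracy."""
    X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=0)
    clf = SVC(kernel='rbf', C=1.0, gamma='scale')   # RBF kernel with default-style settings
    clf.fit(X_train, y_train)                       # training step
    return clf, clf.score(X_test, y_test)           # testing step: classification accuracy
```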
Figure 3 represents the architecture of the DWA-DNN model for brain MRI image classification and disease detection [11]. An image dataset is loaded for training the network. All images are preprocessed to reduce the risk of feeding bad or distorted data. After preprocessing, the images are flattened into a 2D array format so that the dataset is represented as a 2D dataset. For better performance, all 2D arrays are split into several small sub-arrays. These image sub-arrays are then processed through the DWA to obtain encoded images. Finally, only the encoded approximation images are used for training and testing the DNN. For more details about this method, refer to [11].
The proposed DWA layer is a combination of DWT and an AE. In this layer, the image is encoded using the AE and then processed through DWT with a Daubechies mother wavelet of 2nd order, collecting the approximation and detail coefficients by passing the image through low-pass and high-pass filters, respectively. The approximation coefficients are then used for classification with the DNN model.
The parameter setup for DWA-DNN follows [11]. In the AE, a total of 5 layers were used with 64 × 64 encoded units. The weight decay parameter, the weight of the sparsity penalty term, and the sparsity parameter were set to 0.002, 6, and 0.01, respectively.
The complete DWA-DNN algorithm is described below (a minimal code sketch of the encoding step follows the listed steps):
1. Processing of DICOM images to extract the specific image matrix only.
2. Flattening of image matrices to construct the image dataset.
3. Splitting of the dataset into sub-arrays.
4. For each sub-array of the dataset, repeat Steps 5 to 9.
5. Input the image sub-array to the DWA for encoding.
6. Send the encoded image through low-pass and high-pass filters using DWT for decomposition.
7. Apply the inverse wavelet transform to combine and decode the images to get the original image.
8. Run the autoencoder for a number of epochs to get optimized weight and bias values.
9. Extract the approximation coefficients from the hidden layer, combine them, and provide them as input to a deep neural network for classification.
10. Train the DNN with the inputs provided by Step 9 and test the network for different metric measurements.
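A minimal sketch of the encoding part of Steps 5–9 is given below, assuming a tf.keras-style dense autoencoder and PyWavelets for the db2 decomposition; the sub-array size (64 × 64 = 4,096 values), the code dimension, and the use of the Adam optimizer inside the AE are illustrative assumptions and do not reproduce the exact setup of [11].

```python
import numpy as np
import pywt
import tensorflow as tf

INPUT_DIM, CODE_DIM = 64 * 64, 1024   # hypothetical sub-array and code sizes

# Simple dense autoencoder (a stand-in for the 5-layer AE described above).
inputs = tf.keras.Input(shape=(INPUT_DIM,))
code = tf.keras.layers.Dense(CODE_DIM, activation='sigmoid')(inputs)
recon = tf.keras.layers.Dense(INPUT_DIM, activation='sigmoid')(code)
autoencoder = tf.keras.Model(inputs, recon)
encoder = tf.keras.Model(inputs, code)
autoencoder.compile(optimizer='adam', loss='mse')    # BFGS is not built into Keras

def dwa_features(sub_arrays, epochs=10):
    """Encode the image sub-arrays with the AE, then keep only the Daubechies-2
    approximation coefficients as input features for the DNN classifier."""
    autoencoder.fit(sub_arrays, sub_arrays, epochs=epochs, batch_size=16, verbose=0)
    encoded = encoder.predict(sub_arrays)            # Step 5: AE encoding
    cA, cD = pywt.dwt(encoded, 'db2', axis=1)        # Step 6: low-/high-pass decomposition
    return cA                                        # Step 9: approximation coefficients
```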
A CNN is inspired by the human visual system and is similar to a conventional neural network. Small portions of the image (effective receptive fields) are treated as inputs. As discussed earlier, a CNN has multiple functional layers, such as the pooling layer, which is inserted between two consecutive convolutional layers; its role is to reduce the sample size and to control overfitting. Next is the fully connected (FC) layer, where each neuron is connected to all neurons in the next layer. The features collected by the convolutional layers are then fed to the FC layer for classification. The model architecture [13] is shown in Figure 4. In this approach [13], the model has 5 convolution layers.
The first 3 convolution layers use filter sizes of 32, 64, and 128, respectively, followed by 2 convolution layers with filter sizes of 64 and 32, respectively. Each convolution layer is followed by a max-pooling layer (5 pooling layers in total). After the final convolution layer, the model propagates the results through a fully connected layer of 1,024 nodes with a dropout rate of 0.8. ReLU was used as the primary activation function, and the activation function of the final layer, consisting of 2 nodes, was set to softmax in order to facilitate binary classification. Finally, the model was trained with the Adam optimizer and back-propagation.
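The following is a minimal tf.keras sketch of a CNN along these lines; the filter counts, the 1,024-node fully connected layer, the dropout rate, and the two-node softmax output follow the description above, whereas the 3 × 3 kernels, 2 × 2 pooling windows, 64 × 64 grayscale input, and loss function are assumptions.

```python
import tensorflow as tf
from tensorflow.keras import layers, models

def build_cnn(input_shape=(64, 64, 1)):
    """Five convolution + max-pooling blocks, FC layer, dropout, and softmax output."""
    model = models.Sequential()
    for i, filters in enumerate([32, 64, 128, 64, 32]):
        if i == 0:
            model.add(layers.Conv2D(filters, (3, 3), activation='relu',
                                    padding='same', input_shape=input_shape))
        else:
            model.add(layers.Conv2D(filters, (3, 3), activation='relu', padding='same'))
        model.add(layers.MaxPooling2D((2, 2)))        # one max-pooling layer per conv layer
    model.add(layers.Flatten())
    model.add(layers.Dense(1024, activation='relu'))  # fully connected layer of 1,024 nodes
    model.add(layers.Dropout(0.8))                    # dropout rate of 0.8
    model.add(layers.Dense(2, activation='softmax'))  # binary classification, 2 nodes
    model.compile(optimizer='adam',                   # Adam with backpropagation
                  loss='sparse_categorical_crossentropy',
                  metrics=['accuracy'])
    return model
```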
All the above-mentioned classification methods have been implemented and their performance has been analyzed. The human brain MRI datasets used in this experiment are available on the e-health laboratory webpage of the Department of Computer Science, University of Cyprus [14], and in the Cancer Imaging Archive [15]. The e-health laboratory dataset contains 1,600 normal and abnormal human brain MRI images. The data consist of MS lesions and normal-appearing white matter (NAWM) from MS patients, and normal white matter (NWM) from healthy volunteers, obtained from serial brain MR imaging scans (0 and 6–12 months). The brain tumor dataset is taken from 20 subjects and contains T1-weighted and T2-weighted images of normal and glioblastoma (tumor) cases in DICOM (Digital Imaging and Communications in Medicine) format.
The DWT and SVM based classification method has been implemented in MATLAB 2014b using the Image Processing Toolbox and Wavelet Toolbox [16]. The benefit of wavelets is that they provide localized frequency information about a signal, which is well suited for classification. In feature extraction using DWT, the coefficient matrix for every MRI image is a 133 × 133 double-precision matrix. These data are used to classify whether a given test image is normal or abnormal. The SVM classifier is used with an RBF kernel and default settings. Two classes are used in the experiment, normal and abnormal, labeled 1 and 0, respectively. A set of 70 brain MRI images, consisting of normal and abnormal (brain tumor) images, was used for this experiment.
The DWA-DNN and CNN based brain MRI classification models have been implemented on a Samsung PC (Intel Core i5-4590, 16 GB RAM, NVIDIA GeForce GTX 1070 Ti GPU). Python 3.6 and TensorFlow 1.5 libraries with CUDA 10.0 were used to implement these models. A set of 500 brain MRI images was used for training and a set of 50 images for testing.
For DWA-DNN, the batch size was 16 and the model was trained with a learning rate of 0.01 for 100 epochs. The sigmoid activation function and the Broyden–Fletcher–Goldfarb–Shanno (BFGS) optimization algorithm were used. The BFGS method belongs to the quasi-Newton methods, a class of hill-climbing optimization techniques that search for a stationary point of a function. The BFGS algorithm is described below:
1. Let $\mathbf{x}_0$ be an initial guess of the minimizer and $B_0$ an initial approximation of the Hessian matrix (e.g., the identity matrix).
2. For $k = 0, 1, 2, \ldots$, repeat Steps 3 to 8.
3. If $\|\nabla f(\mathbf{x}_k)\| < \varepsilon$, exit.
4. Obtain a search direction $\mathbf{p}_k$ by solving $B_k \mathbf{p}_k = -\nabla f(\mathbf{x}_k)$.
5. Find an acceptable step size $\alpha_k$ by a line search along $\mathbf{p}_k$, and set $\mathbf{s}_k = \alpha_k \mathbf{p}_k$.
6. Update the iterate: $\mathbf{x}_{k+1} = \mathbf{x}_k + \mathbf{s}_k$.
7. Compute the gradient difference $\mathbf{y}_k = \nabla f(\mathbf{x}_{k+1}) - \nabla f(\mathbf{x}_k)$.
8. Update the approximate Hessian: $B_{k+1} = B_k + \dfrac{\mathbf{y}_k \mathbf{y}_k^{\top}}{\mathbf{y}_k^{\top} \mathbf{s}_k} - \dfrac{B_k \mathbf{s}_k \mathbf{s}_k^{\top} B_k}{\mathbf{s}_k^{\top} B_k \mathbf{s}_k}$.
The BFGS method is a widely used quasi-Newton method. The algorithm above shows how the approximated Hessian is obtained: in the BFGS algorithm, the Hessian is never computed explicitly; instead, the approximation $B_k$ is updated at every iteration from the step $\mathbf{s}_k$ and the gradient difference $\mathbf{y}_k$.
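For illustration, a compact NumPy implementation of the BFGS steps listed above is sketched below; it is a generic minimizer (not the optimizer actually used to train the DWA-DNN) with a simple backtracking line search.

```python
import numpy as np

def bfgs(f, grad, x0, tol=1e-6, max_iter=100):
    """Minimal BFGS sketch following the steps above (illustrative only)."""
    x = np.asarray(x0, dtype=float)
    B = np.eye(x.size)                  # Step 1: B_0 = identity
    for _ in range(max_iter):
        g = grad(x)
        if np.linalg.norm(g) < tol:     # Step 3: convergence test -> exit
            break
        p = np.linalg.solve(B, -g)      # Step 4: search direction B_k p_k = -grad f(x_k)
        alpha = 1.0                     # Step 5: backtracking line search
        while f(x + alpha * p) > f(x) + 1e-4 * alpha * g.dot(p):
            alpha *= 0.5
        s = alpha * p                   # s_k = alpha_k p_k
        x_new = x + s                   # Step 6: x_{k+1} = x_k + s_k
        y = grad(x_new) - g             # Step 7: gradient difference y_k
        Bs = B.dot(s)                   # Step 8: BFGS update of the approximate Hessian
        B = B + np.outer(y, y) / y.dot(s) - np.outer(Bs, Bs) / s.dot(Bs)
        x = x_new
    return x

# usage: minimize f(x) = ||x||^2 starting from (3, -2)
# x_star = bfgs(lambda x: (x ** 2).sum(), lambda x: 2 * x, np.array([3.0, -2.0]))
```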
In the preprocessing, the DICOM-format dataset has been processed to extract the specific image matrices. Python libraries such as pydicom, OpenCV, Pillow, and pandas are used to extract the image dataset from the DICOM files.
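A minimal sketch of this preprocessing step using pydicom and OpenCV is shown below; the 64 × 64 target size and the file name in the usage comment are hypothetical.

```python
import numpy as np
import pydicom
import cv2

def load_dicom_image(path, size=(64, 64)):
    """Read a single DICOM file and return a normalized, resized image matrix."""
    ds = pydicom.dcmread(path)                                 # parse the DICOM file
    img = ds.pixel_array.astype(np.float32)                    # extract the image matrix
    img = (img - img.min()) / (img.max() - img.min() + 1e-8)   # scale intensities to [0, 1]
    return cv2.resize(img, size)

# usage: matrix = load_dicom_image('subject01_slice12.dcm')    # hypothetical file name
```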
In the CNN classification model, all the parameters were the same as for DWA-DNN, except that ReLU was used as the primary activation function in the convolution layers and the softmax activation function was applied at the final output layer to perform the binary classification. The Adam optimizer and the backpropagation algorithm were used during training.
To analyze the performance of all three methods, we used two evaluation metrics: accuracy and sensitivity.
In binary classification under supervised learning, a confusion matrix consisting of four outcomes is formed [18]. The outcomes produced after classification are as follows:
Correct positive prediction - true positive (TP);
Correct negative prediction - true negative (TN);
Incorrect positive prediction - false positive (FP);
Incorrect negative prediction - false negative (FN).
Basic measures for the performance evaluation of a classification model are derived from the confusion matrix.
The confusion matrix for the DWT-SVM method is shown in Figure 5.
Accuracy ($A$) is calculated as the number of correct predictions divided by the total number of predictions:
$$A = \frac{TP + TN}{TP + TN + FP + FN}.$$
The best accuracy value is 1.0 (100%) and worst is 0.0 (0%).
The sensitivity ($S$), also known as the true positive rate (TPR), is calculated as the number of correct positive predictions divided by the total number of positives:
$$S = \frac{TP}{TP + FN}.$$
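As a small worked example (with hypothetical confusion-matrix counts), the two metrics can be computed as follows.

```python
def accuracy_and_sensitivity(tp, tn, fp, fn):
    """Accuracy and sensitivity (true positive rate) from confusion-matrix counts."""
    accuracy = (tp + tn) / (tp + tn + fp + fn)
    sensitivity = tp / (tp + fn)
    return accuracy, sensitivity

# example with hypothetical counts: 40 TP, 20 TN, 5 FP, 5 FN
# accuracy_and_sensitivity(40, 20, 5, 5)  # -> (0.857..., 0.888...)
```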
The accuracy and sensitivity of the implemented models are shown in Figure 6.
As shown in Figure 6, DWA-DNN achieved the highest accuracy. This method learns the properties of features and uses the relationships between features for classification. Although CNN is also a powerful method for image classification, it has the limitation that neurons are activated only when there is a chance to detect a feature; they do not consider the properties of and relationships between features. Therefore, a good-quality dataset is required to achieve better results. The DWT-SVM method achieved an accuracy of approximately 72% because of its limited data-precision capability.
The analysis of medical images has always been a challenging and time-consuming task. In this paper, a performance analysis of brain MRI classification methods has been carried out. The results show that DWA-DNN is more accurate than the other two methods and can handle large dataset volumes; although the training time is long, the results are more satisfactory. The accuracy of the CNN model is close to that of DWA-DNN, but the CNN is not as efficient. This implies that the classification accuracy improves when the extracted features are accurate. From Figure 6, we can conclude that DWA-DNN is better than the other two classification methods for human brain MRI images. For future work, it would be interesting to develop an AI algorithm that can classify structural as well as neurological differences in the brain simultaneously by combining a DNN with powerful feature extraction and selection techniques.
No potential conflict of interest relevant to this article was reported.
This research was supported (in part) by the Daegu University Research Grant.
E-mail: nishantsep1090@daegu.ac.kr.
E-mail: bjchoi@daegu.ac.kr
Figure 1. Sample human brain MRI images: (a) normal and (b) abnormal.
Figure 2. A typical architecture of CNN.
Figure 3. The architecture of DWA-DNN.
Figure 4. CNN architecture for brain MRI classification.
Figure 5. Confusion matrix of DWT-SVM.
Figure 6. Performance comparison based on accuracy and sensitivity of classification methods.