Original Article

International Journal of Fuzzy Logic and Intelligent Systems 2021; 21(4): 349-357

Published online December 25, 2021

https://doi.org/10.5391/IJFIS.2021.21.4.349

© The Korean Institute of Intelligent Systems

DNN-Based Brain MRI Classification Using Fuzzy Clustering and Autoencoder Features

Nishant Chauhan* and Byung-Jae Choi*

Department of Electronic Engineering, Daegu University, Gyeongsan, Korea

Correspondence to: Byung-Jae Choi (bjchoi@daegu.ac.kr)
*These authors contributed equally to this work.

Received: June 16, 2021; Revised: October 6, 2021; Accepted: December 22, 2021

This is an Open Access article distributed under the terms of the Creative Commons Attribution Non-Commercial License (http://creativecommons.org/licenses/by-nc/3.0/) which permits unrestricted noncommercial use, distribution, and reproduction in any medium, provided the original work is properly cited.

Worldwide interest has grown in medical image analysis and classification using machine learning techniques. Magnetic resonance imaging (MRI) is a safe and painless procedure for scanning the human brain. During an MRI procedure, magnetic fields and radio waves are used to scan and map brain tissues for subsequent pathological analysis. For qualitative and quantitative MRI analysis, the manual capacity of radiologists and doctors is limited, and complex, group-level diagnoses are time-consuming. Hence, the development of an intelligent, robust, and reliable support system for the diagnosis of brain-related diseases is a top priority. In this paper, a new deep neural network-based MRI image classification approach is proposed that uses fuzzy c-means (FCM) clustering and an autoencoder to classify brain MRI scans as normal or abnormal, reducing human error during diagnosis. Here, FCM is used to segment abnormal tissue from brain MRI images, followed by an autoencoder for feature extraction and dimensionality reduction. Finally, a deep neural network trained on the FCM-extracted features and sample data classifies the brain MRI images. Given the limited availability of raw MRI data, data augmentation techniques were also used to increase the amount of data available for training the deep neural network. The experiments achieved 96% classification accuracy and a 95% sensitivity rate. These results demonstrate that the proposed well-trained deep learning model has the potential to make solid predictions regarding brain abnormalities; therefore, it can serve as a prominent tool in clinical practice.

Keywords: Magnetic resonance imaging (MRI), Fuzzy c-means (FCM) clustering, Deep neural network, Autoencoder, Classification

The rapid growth of imaging technology has made it an important tool in the medical industry for diagnosis and therapy. Early diagnosis and detection of diseases in medical imaging can save millions of lives. The human brain is a complicated system, and a great deal of meticulous data is needed to comprehend its pathological processes [1]. Alzheimer’s disease, Parkinson’s disease, neuro-infections, neurological illnesses, and brain cancers have varying impacts on the structure and function of the brain [2]. Tumors are abnormal tissue growths, and when they occur in the brain, they are known as brain tumors. Brain tumors are of two types: benign (non-cancerous) and malignant (cancerous). Some malignant brain tumors are metastatic cancers that spread to the brain from another region of the body. Early diagnosis and treatment of a brain tumor or any other type of brain disease are crucial for a successful outcome [3]. Magnetic resonance imaging (MRI) is a medical imaging technique that produces high-quality images of body structures such as the brain, shoulders, and ankles. MRI differs from computed axial tomography in that it does not involve the use of ionizing radiation [4]. MRI makes it easier to identify several brain disorders, such as tumors, cysts, internal bleeding, edema, developmental and structural abnormalities of tissues, infections, inflammatory diseases, and blood vessel disorders. MRI can also reveal damage to brain tissue or structural abnormalities caused by an accident or stroke. By delivering clear and detailed images, MRI has shown its usefulness and significance. Based on visual interpretation, doctors and radiologists examine brain MRIs to detect abnormal tissues or deficits in brain function.
However, owing to the complex anatomy of the human brain, MRI scans are not always sufficient to diagnose and detect abnormalities by visual or manual examination [1]. Under such circumstances, brain MRI classification techniques are useful.

Segmentation is one of the most helpful approaches for extracting significant information from medical images [5]. Brain tumors and other brain illnesses, such as Alzheimer’s disease, alter the basic anatomy of the human brain (Figure 1). Figure 1(b) depicts a human brain tumor with aberrant growth, and Figure 1(c) depicts abnormal accumulation of beta-amyloid protein in a brain affected by Alzheimer’s disease [6].

A key benefit of MRI is that it produces precise images of soft tissues such as the brain, and it can be used to evaluate nearly every region of the body. The following are the main types of brain MRI.

1.1 Fluid-Attenuated Inversion Recovery (FLAIR)

FLAIR is used to assess white matter abnormalities in the brain.

1.2 T1-Weighted

This is a general MRI that reveals the anatomy and structure of the brain.

1.3 T2-Weighted

T2-weighted images are similar to other typical MRI types; however, unlike T1-weighted images, they highlight fluid content in white. In traumatic brain injury (TBI), for example, T2-weighted imaging enables the visualization of severe diffuse axonal injury.

1.4 Diffusion-Weighted Imaging (DWI)

This emphasizes the integrity of brain tissue. During a stroke, when blood cannot reach parts of the brain, brain cells die as a result of a chemical process that increases the sodium and water content of the tissues. This mechanism alters tissue integrity, which can be observed through DWI.

1.5 Functional Magnetic Resonance Imaging (fMRI)

This is a newer type of MRI that captures images using the iron in the blood. When one neuron signals another to perform a task, such as moving the right hand, blood flow increases in the parts of the brain involved in that task. As a result, the neurons involved in moving the right hand show increased signals. To explain how the brain performs activities, doctors use fMRI to visualize these signal shifts through images.

The MRI images listed above have their own benefits and properties. Figure 2 shows examples of images obtained from several MRI scans.

Although the visualization of MRI images can reveal brain tumors, it is not always sufficient to detect and diagnose brain abnormalities [7] (Figure 1). Image processing techniques are certainly useful in these situations. Image segmentation is a technique that separates an image into relevant sections during image processing [8]. Clustering is a multivariate approach for segregating and grouping unlabelled patterns in a dataset. K-means clustering is a basic unsupervised clustering technique in which k centroids are defined, one for each cluster [9]. With this method, each pixel in an image is assigned to exactly one cluster. It is popular in medical imaging because of its simple, efficient, and self-organizing nature. However, predicting the value of k is difficult, and the method does not perform well on globally distributed or overlapping clusters. The fuzzy c-means (FCM) technique is an extension of k-means clustering [10]. In FCM, a data unit may belong to one or more clusters, with a membership value linking it to each cluster [8]. FCM is a well-known unsupervised data-clustering technique; the use of fuzzy sets and degrees of membership are the reasons for its success. FCM allows a single data point to belong to multiple clusters. Medical images with complex structures require proper segmentation for clinical diagnostics [11]. Although there are several strategies for segmenting medical images, most fail owing to poor contrast, unknown noise, and weak borders [12]. FCM is attractive for medical images because it retains a large amount of information from the image by allowing pixels to belong to several clusters [13]. In this study, FCM was used to segment brain MRI images.

Recent research has focused on categorizing brain MRI images using machine learning (ML) and deep learning (DL) [14]. Researchers worldwide have been drawn to this topic because of its potential outcomes. Classical ML algorithms learn from labeled data to produce an output, whereas DL-based algorithms use artificial neural networks (ANNs) and can also learn from unlabeled data. Although DL is a subset of ML, the differences in capability and performance [15] are significant. ML algorithms typically require guidance to improve, through modified features and additional labeled data (supervised learning). By contrast, DL algorithms use deep neural networks to enhance their accuracy and reduce error with respect to the expected output, and can do so without labels (unsupervised learning). The most crucial component of any learning approach is the data, which help the model collect the most relevant information, known as features. Based on these features, the DL model learns and predicts the output. An autoencoder (AE) is an artificial neural network that is trained to reconstruct its input. It learns the underlying structure of the data in order to recreate it as accurately as possible. The AE retains crucial and relevant information while intelligently removing redundancy; in simple terms, an AE learns a compressed form of the data. In the case of an image, it learns to retain the most informative parts of the image and discard the rest. This also makes the AE a feature extraction and dimensionality reduction tool [16].

A support vector machine (SVM) is an enhanced kernel-based classification technique [17]. In earlier work, a discrete wavelet transform (DWT) was used to extract features from MRI images, and a classification model was developed to determine whether an input brain MRI image was normal or pathological. However, owing to the limitations of the SVM classifier, such as restricted data precision and computational cost, the model can only function with a limited quantity of data. The features retrieved from MRI images using the DWT have also been reduced using principal component analysis [18], with the reduced feature vector fed into an SVM classifier. Subsequently, a deep wavelet autoencoder-based deep neural network (DNN) technique was presented [19]. An AE attempts to learn efficient features of the data in an unsupervised manner. The DWT and AE were merged through this technique, and their fusion with the DNN resulted in improved accuracy and performance.

In this study, we propose a new classification approach for brain MRI images. The classification is binary: images are divided into two categories, normal and abnormal. The data used in this study contain healthy brain MRIs, tumorous and non-tumorous MRIs, and MRIs showing Alzheimer’s disease. The proposed method employs data augmentation techniques to expand the amount of data. FCM is used to segment the brain MRI images: the tumorous or abnormal portion of the brain is segmented from the MRI image using FCM, exposing the abnormalities in the scans. Thereafter, the entire dataset and the segmented data are fed into an AE to extract high-level features of typical brain anatomy. An AE is widely used to extract features while simultaneously reducing their dimensionality; here it serves both as an image compression approach and as a feature selection approach. Subsequently, for the final classification, a DNN is trained on these features. The observed results of the proposed method outperform existing techniques owing to its ability to cope with less raw data while achieving better classification accuracy.

The remainder of this paper is organized as follows: Section 3 describes the materials and methods employed in the proposed approach. Section 4 discusses the experimental findings. Finally, concluding remarks are presented in Section 5.

For brain MRI classification, a DL strategy based on FCM and AE approaches was developed. FCM is used to extract useful information (features) from MRI scans. The AE reduces the dimensionality of the data [20], and the feature data retrieved through FCM were reduced using this AE characteristic. The data were then separated into training and test sets for classification by a DNN. The proposed model segments the MRI image using FCM, which separates brain tissues such as gray matter (GM), white matter (WM), cerebrospinal fluid (CSF), and the skull from tumor tissues. These data were utilized as an abnormality feature set, which is useful for training and final classification. The AE, a strong approach for feature extraction, is also utilized to remove irrelevant features.

The architecture of the proposed model for brain MRI classification based on FCM clustering and an AE is shown in Figure 3. The brain MRI image dataset was obtained from the e-health laboratory homepage of the Department of Computer Science, University of Cyprus (http://www.ehealthlab.cs.ucy.ac.cy), which provides free access to high-quality brain MRI images (e.g., normal, tumor, Alzheimer’s disease, and dementia). It includes T1-weighted and T2-weighted MRI images.

3.1 Brain MRI Dataset

T1-weighted and T2-weighted datasets from the e-health laboratory homepage, Department of Computer Science, University of Cyprus, were utilized in this study. The MRI images had a pixel resolution of 256 × 256 and were acquired using a T2-weighted turbo spin-echo pulse sequence (repetition time = 4,406 ms, echo time = 100 ms, and echo spacing = 10.6 ms).

3.2 Brain MRI Data Augmentation

In general, an available dataset does not necessarily include all of the variety that helps a model learn essential and distinguishing features. Image augmentation is the process of creating several images from a single image by modifying it in certain ways, such as rotation and inversion. It expands the quantity of data available for training and allows feature-extraction algorithms to acquire more information than a single image provides. This method is beneficial when the amount of training data is extremely restricted. Figure 4 shows several examples of augmented images.

As shown in Figure 4, intensity and spatial augmentation are utilized to induce variety in a dataset by flipping left or right, applying random deformations, zooming in or out, and applying random contrast modifications, among other approaches. Spatial augmentation prevents the network from focusing on features found mostly in a specific spatial region and helps the model learn spatially invariant features. In medical imaging, where images are acquired using various equipment in different locations, pixel intensities and saturation may be heterogeneous. Under this scenario, intensity augmentation helps the model learn features during training by providing augmented images with different intensities. Furthermore, the features learned by the model become more comprehensive, which reduces the risk of overfitting.
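As a concrete illustration, the flips, rotations, and contrast changes described above can be sketched with a few NumPy operations. This is a minimal, hypothetical example (the paper does not publish its augmentation code); the `augment` helper and the random 256 × 256 stand-in image are our own illustrative choices:

```python
import numpy as np

def augment(image, rng):
    """Return simple spatial and intensity variants of one image.

    Hypothetical helper: the paper lists flips, rotations, zooms, and
    contrast changes but does not give its exact augmentation code.
    """
    return [
        np.fliplr(image),                                  # left-right flip (spatial)
        np.flipud(image),                                  # up-down flip (spatial)
        np.rot90(image, k=int(rng.integers(1, 4))),        # random 90-degree rotation
        np.clip(image * rng.uniform(0.8, 1.2), 0.0, 1.0),  # random contrast gain (intensity)
    ]

rng = np.random.default_rng(0)
slice_img = rng.random((256, 256))   # stand-in for a 256 x 256 MRI slice
augmented = augment(slice_img, rng)  # four new training images from one original
```

Each source image yields four additional samples here; in practice, the transforms would be sampled on the fly at each epoch.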

3.3 MRI Image Segmentation using FCM Clustering

FCM is one of the most successful, efficient, and credible data-clustering techniques. The use of fuzzy sets and degrees of membership are the reasons for its success. FCM allows a single data element to belong to two or more clusters with associated membership values. It works by minimizing the following objective function:

$$J_m = \sum_{i=1}^{N} \sum_{j=1}^{C} u_{ij}^{m} \left\| x_i - c_j \right\|^2. \tag{1}$$

Here, 1 ≤ m < ∞. In addition, u_{ij} is the degree of membership of x_i in cluster j, x_i is the i-th of the N d-dimensional data points, c_j is the center of cluster j, and ||x_i − c_j|| is the expression used to assess the similarity between a measured data point and the center.

The objective function is minimized through an iterative optimization process that alternately updates the memberships and the cluster centers using the following equations:

$$u_{ij} = \frac{1}{\sum_{k=1}^{C} \left( \frac{\| x_i - c_j \|}{\| x_i - c_k \|} \right)^{\frac{2}{m-1}}}, \tag{2}$$

where k indexes the clusters, and the d-dimensional center of cluster j can be represented as

$$c_j = \frac{\sum_{i=1}^{N} u_{ij}^{m} \, x_i}{\sum_{i=1}^{N} u_{ij}^{m}}. \tag{3}$$

The minimization of the FCM objective function is determined based on the criterion in which high membership values are allocated to pixels closer to the centroids and low membership values are allocated to pixels farther away from the centroids.
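The alternating updates above can be implemented directly in a few lines of NumPy. The following is a minimal sketch, not the authors' implementation; the two-group one-dimensional toy data stand in for MRI pixel intensities:

```python
import numpy as np

def fcm(X, C=2, m=2.0, n_iter=50, seed=0):
    """Minimal fuzzy c-means: alternately update memberships and centers.

    X: (N, d) data. Returns memberships U (N, C) and centers (C, d).
    Illustrative sketch only; the paper applies FCM to full MRI images.
    """
    rng = np.random.default_rng(seed)
    U = rng.random((X.shape[0], C))
    U /= U.sum(axis=1, keepdims=True)               # memberships sum to 1 per point
    for _ in range(n_iter):
        Um = U ** m
        centers = (Um.T @ X) / Um.sum(axis=0)[:, None]   # center update
        dist = np.linalg.norm(X[:, None, :] - centers[None], axis=2)
        dist = np.fmax(dist, 1e-12)                  # guard against division by zero
        ratio = (dist[:, :, None] / dist[:, None, :]) ** (2.0 / (m - 1.0))
        U = 1.0 / ratio.sum(axis=2)                  # membership update
    return U, centers

# Toy one-dimensional "intensities" with two obvious groups
X = np.array([[0.10], [0.12], [0.90], [0.95], [0.11], [0.92]])
U, centers = fcm(X, C=2)
labels = U.argmax(axis=1)   # hard assignment from soft memberships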

The augmented data from the previous stage, as well as other brain MRI data, are then segmented using FCM, exposing all features of a healthy brain and abnormalities in the brain MRI. These datasets were used as training data for our primary purpose and classification. Figure 5 shows example images as well as segmented images.

The primary goal of MRI segmentation is to divide an image into well-defined areas. Each area is composed of pixels with comparable intensities, texture qualities, or neighbors. For healthy brain images, the features revealed during this stage are comparable in pattern and appearance, which has a significant influence on DNN classification learning.

3.4 Feature Extraction and Reduction using Autoencoder

An AE is an optimization approach that can be used to extract and learn the main components of a large data distribution. Following the segmentation procedure, the AE extracts and learns the distinctive properties of normal and abnormal brain structures (tumorous or other illnesses). An AE compresses the image data into a smaller dimension and then reproduces the input from the compressed data. The compressed data are a collection of image properties known as the latent-space representation, which is later used to rebuild the image. In this step, the AE extracts and reduces features from the segmented image. Feature reduction occurs because a layer with fewer dimensions than the input layer is placed between the encoder and decoder. Because the input image size is large, we employ an additional hidden layer for both encoding and decoding. The AE architecture is shown in Figure 6; the middle (code) layer encodes the image at a resolution of 64 × 64 pixels.
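The compress-then-reconstruct idea can be sketched with a single-code-layer autoencoder trained by plain gradient descent. This is a deliberately tiny NumPy stand-in (the paper uses a deeper AE on real MRI data); the 64-dimensional toy vectors, layer sizes, and learning rate are all our own illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy data: 200 samples of 64-dim vectors lying on a 4-dim subspace,
# so an 8-unit code layer can compress them with little loss.
basis = rng.standard_normal((4, 64)) / 8.0
X = rng.standard_normal((200, 4)) @ basis

# 64 -> 8 (code) -> 64: tanh encoder, linear decoder.
W_enc = rng.standard_normal((64, 8)) * 0.1
W_dec = rng.standard_normal((8, 64)) * 0.1
lr = 0.05

def forward(X):
    code = np.tanh(X @ W_enc)    # latent-space representation
    return code, code @ W_dec    # reconstruction from the code

_, recon = forward(X)
err_before = np.mean((X - recon) ** 2)

for _ in range(1000):
    code, recon = forward(X)
    g_recon = 2.0 * (recon - X) / X.shape[0]        # d(MSE)/d(recon)
    gW_dec = code.T @ g_recon
    g_code = g_recon @ W_dec.T * (1.0 - code ** 2)  # back through tanh
    gW_enc = X.T @ g_code
    W_dec -= lr * gW_dec
    W_enc -= lr * gW_enc

code, recon = forward(X)
err_after = np.mean((X - recon) ** 2)   # reconstruction error shrinks
```

The 8-dimensional `code` is the compressed feature vector; in the paper's pipeline, such vectors (rather than raw pixels) are what the DNN classifier consumes.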

The generated feature vectors are then utilized to train the DNN. Figure 7 shows several feature vector patterns.

The features retrieved from the AE and original dataset were merged and fed into the DNN classifier for training and testing.

3.5 Deep Neural Network Classification

The DNN was used for classification after feature extraction and reduction. Classification was accomplished by constructing and training a DNN with seven hidden layers using an 11-fold cross-validation approach. The proposed technique classifies the brain MRI data into two categories: normal and abnormal.

Figure 8 depicts the DNN classifier model. The DNN classifier was trained using these two classes, as well as the features retrieved from the AE, to identify a test image as normal or abnormal (tumor or Alzheimer’s disease).
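The final classification stage can be sketched as a small feed-forward network with a softmax output. The sketch below uses NumPy, toy two-class feature vectors, one hidden layer instead of the paper's seven, and hand-rolled gradient descent; these are illustrative assumptions, not the authors' implementation:

```python
import numpy as np

rng = np.random.default_rng(2)

# Toy stand-in for AE feature vectors: two Gaussian blobs = two classes.
n = 200
X = np.vstack([rng.standard_normal((n, 8)) + 2.0,    # "normal" class
               rng.standard_normal((n, 8)) - 2.0])   # "abnormal" class
y = np.array([0] * n + [1] * n)
Y = np.eye(2)[y]                                     # one-hot targets

def softmax(z):
    e = np.exp(z - z.max(axis=1, keepdims=True))     # numerically stable
    return e / e.sum(axis=1, keepdims=True)

# One hidden ReLU layer + softmax output (the paper's DNN is deeper).
W1 = rng.standard_normal((8, 16)) * 0.1; b1 = np.zeros(16)
W2 = rng.standard_normal((16, 2)) * 0.1; b2 = np.zeros(2)
lr = 0.05

for _ in range(300):
    H = np.maximum(0.0, X @ W1 + b1)     # hidden activations
    P = softmax(H @ W2 + b2)             # class probabilities
    dZ2 = (P - Y) / len(X)               # softmax + cross-entropy gradient
    dW2, db2 = H.T @ dZ2, dZ2.sum(axis=0)
    dH = (dZ2 @ W2.T) * (H > 0)          # back through ReLU
    dW1, db1 = X.T @ dH, dH.sum(axis=0)
    W2 -= lr * dW2; b2 -= lr * db2
    W1 -= lr * dW1; b1 -= lr * db1

pred = softmax(np.maximum(0.0, X @ W1 + b1) @ W2 + b2).argmax(axis=1)
accuracy = float((pred == y).mean())
```

On such cleanly separated toy features, the training accuracy quickly approaches 1.0; real AE features would of course be harder to separate.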

This section addresses the experimental results and parametric configuration of the proposed brain MRI classification model, which is validated through various experiments. The experiments were set up on the Python 3.6 platform, with fundamental packages such as SciPy, NumPy, and Matplotlib, and machine learning packages such as Keras and scikit-learn. The hardware configuration was a Samsung PC with an Intel Core i5-4590 CPU, 16 GB of RAM, and an NVIDIA GeForce GTX 1070 Ti GPU. The proposed model was implemented using TensorFlow 1.5 libraries and CUDA 10.0. A total of 500 brain MRI images were used for training, and 50 images were used for testing.

The model was trained for 400 epochs with a learning rate of 0.01. For comparison, the activation function and optimization method in DWA-DNN [21] are the sigmoid function and the Broyden–Fletcher–Goldfarb–Shanno algorithm, respectively, with a batch size of 20. The softmax function provides the range of probabilities for each class, with the target class having the highest probability; each probability lies between 0 and 1, and softmax ensures that the probabilities of the output classes sum to 1. It is often the final layer of a classification model. The proposed technique employs the Adam optimizer, and binary classification is accomplished using a softmax output layer.
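The softmax property described above (probabilities in [0, 1] that sum to 1, with the target class dominating) is easy to verify directly. The logit values below are hypothetical:

```python
import numpy as np

def softmax(z):
    e = np.exp(z - np.max(z))   # subtract the max for numerical stability
    return e / e.sum()

logits = np.array([2.0, 0.5])   # hypothetical raw scores: [abnormal, normal]
probs = softmax(logits)         # roughly [0.82, 0.18]; sums to 1
```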

A confusion matrix consisting of four outcomes was generated for the binary classification. The four outcomes are defined as follows:

  • Correct positive prediction - true positive (TP),

  • Incorrect positive prediction - false positive (FP),

  • Correct negative prediction - true negative (TN),

  • Incorrect negative prediction - false negative (FN).

Fundamental metrics obtained from the confusion matrix were used to evaluate the performance of the classification model.

4.1 Accuracy

The accuracy was calculated by dividing the total number of correct predictions by the total number of predictions. Eq. (4) represents the accuracy of the model.

$$\text{Accuracy} = \frac{TP + TN}{TP + TN + FP + FN}. \tag{4}$$

The highest possible accuracy is 1.0 (100%), and the lowest is 0.0 (0%).

4.2 Sensitivity

Sensitivity is determined by dividing the number of correct positive predictions by the total number of actual positives. It is sometimes referred to as the true positive rate (TPR).

$$S = \frac{TP}{TP + FN}. \tag{5}$$
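Both metrics follow directly from the confusion-matrix counts. A small worked example with hypothetical labels (1 = abnormal, 0 = normal):

```python
import numpy as np

y_true = np.array([1, 1, 1, 1, 0, 0, 0, 0, 1, 0])   # hypothetical ground truth
y_pred = np.array([1, 1, 1, 0, 0, 0, 1, 0, 1, 0])   # hypothetical predictions

TP = int(np.sum((y_pred == 1) & (y_true == 1)))   # correct positive predictions
TN = int(np.sum((y_pred == 0) & (y_true == 0)))   # correct negative predictions
FP = int(np.sum((y_pred == 1) & (y_true == 0)))   # incorrect positive predictions
FN = int(np.sum((y_pred == 0) & (y_true == 1)))   # incorrect negative predictions

accuracy = (TP + TN) / (TP + TN + FP + FN)   # Eq. (4): 8 / 10 = 0.8
sensitivity = TP / (TP + FN)                 # true positive rate: 4 / 5 = 0.8
```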

As shown in Figure 9, the accuracy of the proposed model (96%) was greater than that of the existing models. The model attained greater accuracy because of its self-learning and generalization capabilities. The AE recognizes and collects the most essential features during the feature extraction and reduction stage, which helps the model learn the major patterns of abnormal and normal MRI images. The high sensitivity rate (95%) also demonstrates the suitability of the proposed model for medical imaging. Sensitivity assesses how well a test detects a disease, whereas accuracy assesses the overall proportion of cases, diseased and healthy, that a diagnostic test identifies correctly. In the medical field, a high sensitivity rate indicates robust model performance and reduces the chance of missed diagnoses. In our previous research [22], we compared several brain MRI classification models, such as DWT-SVM-, DWA-DNN-, and CNN-based models; refer to that study for further statistical comparisons of model performance. In this study, we compared the performance of the DWT-SVM- and DWA-DNN-based models with that of our proposed model. These two methods were chosen because they follow a pattern (feature extraction + classification) similar to that of our proposed model.

As shown in Figure 10, the training curve closely tracked the validation curve, indicating that the proposed model generalized well without overfitting.

The proposed method employs a data augmentation strategy that allows the model to work efficiently with minimal training data. Once FCM has exposed the significant features, the AE reduces the feature maps and retains only the most significant ones. The DNN learns the most important features and distinguishes normal from abnormal data in its final layer. Because no external classifier was employed, the computational complexity is reduced. In addition, the DNN classifier outperformed conventional classifiers in terms of accuracy. The high sensitivity rate also demonstrates the robustness and reliability of the proposed model for medical imaging.

This research was partly supported by a Daegu University and Korea Institute of Advancement of Technology (KIAT) grant funded by the Korean government (MOTIE) (P0012724) and the Competency Development Program for Industry Specialists.
Fig. 1.

Brain MRI scans: (a) normal brain, (b) brain tumor, and (c) brain damaged by Alzheimer’s disease.


Fig. 2.

Types of brain MRI images: (a) FLAIR, (b) T1, (c) T2, (d) DWI, and (e) fMRI.


Fig. 3.

Proposed model architecture based on deep learning approach for brain MRI classification using FCM and AE features.


Fig. 4.

Augmented images were obtained through spatial and intensity augmentation.


Fig. 5.

Brain MRI segmentation using FCM.


Fig. 6.

AE architecture.


Fig. 7.

Features extracted using AE.


Fig. 8.

DNN classifier.


Fig. 9.

Performance comparison.


Fig. 10.

The training and validation accuracy curves of the proposed method.


  1. Reardon, S (2019). Rise of robot radiologists. Nature. 576, S54. https://doi.org/10.1038/d41586-019-03847-z
  2. Chauhan, N, and Choi, BJ (2018). Performance analysis of denoising algorithms for human brain image. International Journal of Fuzzy Logic and Intelligent Systems. 18, 175-181. https://doi.org/10.5391/IJFIS.2018.18.3.175
  3. Alzheimer’s Association (2017). 2017 Alzheimer’s disease facts and figures. Alzheimer’s & Dementia. 13, 325-373. https://doi.org/10.1016/j.jalz.2017.02.001
  4. Sauwen, N, Acou, M, Van Cauter, S, Sima, DM, Veraart, J, Maes, F, Himmelreich, U, Achten, E, and Van Huffel, S (2016). Comparison of unsupervised classification methods for brain tumor segmentation using multi-parametric MRI. NeuroImage: Clinical. 12, 753-764. https://doi.org/10.1016/j.nicl.2016.09.021
  5. Chauhan, N, and Choi, BJ (2019). Denoising approaches using fuzzy logic and convolutional autoencoders for human brain MRI image. International Journal of Fuzzy Logic and Intelligent Systems. 19, 135-139. https://doi.org/10.5391/IJFIS.2019.19.3.135
  6. Despotovic, I, Goossens, B, and Philips, W (2015). MRI segmentation of the human brain: challenges, methods, and applications. Computational and Mathematical Methods in Medicine. 2015, article no. 450341
  7. Backstrom, K, Nazari, M, Gu, IYH, and Jakola, AS. An efficient 3D deep convolutional network for Alzheimer’s disease diagnosis using MR images. Proceedings of 2018 IEEE 15th International Symposium on Biomedical Imaging (ISBI), 2018, Washington, DC, pp. 149-153. https://doi.org/10.1109/ISBI.2018.8363543
  8. Kannan, SR, Devi, R, Ramathilagam, S, and Takezawa, K (2013). Effective FCM noise clustering algorithms in medical images. Computers in Biology and Medicine. 43, 73-83. https://doi.org/10.1016/j.compbiomed.2012.10.002
  9. Elazab, A, AbdulAzeem, YM, Wu, S, and Hu, Q (2016). Robust kernelized local information fuzzy C-means clustering for brain magnetic resonance image segmentation. Journal of X-ray Science and Technology. 24, 489-507. https://doi.org/10.3233/XST-160563
  10. Saradhi, AV, and Srinivas, L (2018). An efficient method k-means clustering for detection of tumour volume in brain MRI scans. International Journal of Trend in Scientific Research and Development. 2, 723-732. https://doi.org/10.31142/ijtsrd13027
  11. Ji, Z, Liu, J, Cao, G, Sun, Q, and Chen, Q (2014). Robust spatially constrained fuzzy c-means algorithm for brain MR image segmentation. Pattern Recognition. 47, 2454-2466. https://doi.org/10.1016/j.patcog.2014.01.017
  12. Pohle, R, and Toennies, KD (2001). Segmentation of medical images using adaptive region growing. Proceedings of SPIE. 4322, 1337-1346. https://doi.org/10.1117/12.431013
  13. Chang, PL, and Teng, WG. Exploiting the self-organizing map for medical image segmentation. Proceedings of the 20th IEEE International Symposium on Computer-Based Medical Systems (CBMS), 2007, Maribor, Slovenia, pp. 281-288. https://doi.org/10.1109/CBMS.2007.48
  14. Kannan, SR (2008). A new segmentation system for brain MR images based on fuzzy techniques. Applied Soft Computing. 8, 1599-1606. https://doi.org/10.1016/j.asoc.2007.10.025
  15. Lundervold, AS, and Lundervold, A (2019). An overview of deep learning in medical imaging focusing on MRI. Zeitschrift für Medizinische Physik. 29, 102-127. https://doi.org/10.1016/j.zemedi.2018.11.002
  16. Dokur, Z (2008). A unified framework for image compression and segmentation by using an incremental neural network. Expert Systems with Applications. 34, 611-619. https://doi.org/10.1016/j.eswa.2006.09.017
  17. Xiao, Y, Wu, J, Lin, Z, and Zhao, X (2018). A semi-supervised deep learning method based on stacked sparse auto-encoder for cancer prediction using RNA-seq data. Computer Methods and Programs in Biomedicine. 166, 99-105. https://doi.org/10.1016/j.cmpb.2018.10.004
  18. Othman, MFB, Abdullah, NB, and Kamal, NFB. MRI brain classification using support vector machine. Proceedings of 2011 4th International Conference on Modeling, Simulation and Applied Optimization, 2011, Kuala Lumpur, Malaysia, pp. 1-4. https://doi.org/10.1109/ICMSAO.2011.5775605
  19. Abdullah, N, Chuen, LW, Ngah, UK, and Ahmad, KA. Improvement of MRI brain classification using principal component analysis. Proceedings of 2011 IEEE International Conference on Control System, Computing and Engineering, 2011, Penang, Malaysia, pp. 557-561. https://doi.org/10.1109/ICCSCE.2011.6190588
  20. Petscharnig, S, Lux, M, and Chatzichristofis, S. Dimensionality reduction for image features using deep learning and autoencoders. Proceedings of the 15th International Workshop on Content-Based Multimedia Indexing, 2017, Florence, Italy, pp. 1-6. https://doi.org/10.1145/3095713.3095737
  21. Mallick, PK, Ryu, SH, Satapathy, SK, Mishra, S, Nguyen, GN, and Tiwari, P (2019). Brain MRI image classification for cancer detection using deep wavelet autoencoder-based deep neural network. IEEE Access. 7, 46278-46287. https://doi.org/10.1109/ACCESS.2019.2902252
  22. Chauhan, N, and Choi, BJ (2019). Performance analysis of classification techniques of human brain MRI images. International Journal of Fuzzy Logic and Intelligent Systems. 19, 315-322. https://doi.org/10.5391/IJFIS.2019.19.4.315

Nishant Chauhan received his B.S. degree in computer science from AKTU, India in 2012, and an M.S. degree in control and instrumentation from Daegu University, Korea in 2020. He is currently a full-time Ph.D. student in control and instrumentation at Daegu University, Department of Electronic Engineering (Graduate School). His research interests include intelligent control, fuzzy logic, image processing, machine learning, and deep learning for medical images, brain MRI, and fMRI analysis.

E-mail: nishantsep1090@daegu.ac.kr


Byung-Jae Choi received his B.S. degree in electronic engineering in 1987 from Kyungpook National University, Daegu. He received his M.S. and Ph.D. degrees in electrical and electronic engineering in 1989 and 1998, respectively, from KAIST, Daejeon, Korea. He has been a professor at the School of Electronic and Electrical Engineering, Daegu University, Daegu, Korea, since 1999. His current research interests include intelligent control and applications.

E-mail: bjchoi@daegu.ac.kr



Keywords: Magnetic resonance imaging (MRI), Fuzzy c-mean (FCM) clustering, Deep neural network, Autoencoder, Classification

1. Introduction

The rapid growth in imaging technology has made it an important tool in the medical industry for diagnosis and therapy. The early diagnosis and detection of diseases in medical imaging can save millions of lives. The human brain is a complicated system that requires a great deal of meticulous data to comprehend its pathological processes [1]. Alzheimer’s disease, Parkinson’s disease, neuro-infections, neurological illnesses, and brain cancers have varying impacts on the structure and function of the brain [2]. Tumors are abnormal tissue growths, and when they occur in the brain, they are known as brain tumors. Brain tumors are either benign (non-cancerous) or malignant (cancerous); malignant brain tumors may also be metastatic, spreading to the brain from another region of the body. Early diagnosis of a brain tumor or any other type of brain disease is crucial for successful treatment [3]. Magnetic resonance imaging (MRI) is a medical imaging technique that produces high-quality pictures of body parts such as the brain, shoulders, and ankles. MRI differs from computed axial tomography in that it does not involve the use of ionizing radiation [4]. MRI makes it easier to identify several brain disorders, such as tumors, cysts, internal bleeding, edema, developmental and structural abnormalities in tissues, infections, inflammatory diseases, and blood vessel disorders. MRI can also reveal damage to brain tissues or structural abnormalities caused by an accident or stroke. With the benefit of delivering clear and detailed information in images, MRI has proven its usefulness and significance. Based on visual interpretation, doctors and radiologists examine brain MRIs and detect the existence of abnormal tissues or deficits in brain function. However, owing to the complex anatomy of the human brain, visual or manual examination of MRI scans is not always sufficient to diagnose and detect abnormalities [1]. Under such circumstances, brain MRI classification techniques are useful.

Segmentation is one of the most helpful approaches for extracting significant information from medical images [5]. Brain tumors and other brain illnesses, such as Alzheimer’s disease, alter the basic anatomy of the human brain (Figure 1). Figure 1(b) depicts an example of a brain tumor with aberrant tissue growth, and Figure 1(c) depicts aberrant amounts of beta-amyloid protein in a brain with Alzheimer’s disease [6].

Because MRI produces precise images of soft tissues such as the brain, it can be used to evaluate nearly every region of the body. The following are some common types of brain MRI.

1.1 Fluid-Attenuated Inversion Recovery (FLAIR)

FLAIR is used to assess white matter abnormalities in the brain.

1.2 T1-Weighted

This is a general MRI that reveals the anatomy and structure of the brain.

1.3 T2-Weighted

T2-weighted images are similar to typical MRI types; however, unlike T1-weighted images, they render fluid content bright (white). In traumatic brain injury (TBI), for example, this enables the visualization of severe diffuse axonal injury.

1.4 Diffusion-Weighted Imaging (DWI)

This emphasizes the integrity of brain tissue. During a brain stroke, when blood cannot reach all parts of the brain, brain cells die as a result of a chemical process that increases the sodium and water content of the tissues. This mechanism alters tissue integrity, which can be observed through DWI.

1.5 Functional Magnetic Resonance Imaging (fMRI)

This is a newer type of MRI that captures images using the iron in the blood. When one neuron sends a signal to another neuron to perform a task, such as moving the right hand, blood flow increases in the parts of the brain involved in the task. As a result, the neurons involved in moving the right hand will show increased signals. To explain how the brain performs activities, doctors use fMRI to visualize these signal changes as images.

The MRI images listed above have their own benefits and properties. Figure 2 shows examples of images obtained from several MRI scans.

2. Literature Review

Although the visualization of MRI images can reveal brain tumors, it is not always sufficient to detect and diagnose brain abnormalities [7] (Figure 1). Image processing techniques are certainly useful in these situations. Image segmentation is a technique that separates an image into relevant sections during image processing [8]. Clustering is a multivariate approach for segregating and grouping unlabelled patterns in a dataset. K-means clustering is a basic, unsupervised clustering technique in which k centroids are defined, one for each cluster [9]. With this method, each pixel in an image is assigned to a cluster. It is popular in medical imaging because of its simple, efficient, and self-organizing nature. However, choosing the value of k is difficult, and the method does not perform well with global clusters. The fuzzy c-means (FCM) technique is an extension of k-means clustering [10]. In FCM, a data unit may belong to one or more clusters, and a membership value links it to each of those clusters [8]. FCM is a well-known unsupervised data-clustering technique; the use of fuzzy sets and degrees of membership is the reason for its success. FCM allows a single data element to belong to multiple clusters. Medical images with complex structures require proper segmentation for clinical diagnostics [11]. Although there are several strategies for segmenting medical images, most fail owing to poor contrast, unknown noise, and weak borders [12]. FCM is attractive for medical images because it retains a large amount of information from the image by allowing pixels to belong to several clusters [13]. In this study, FCM was used to segment the brain MRI images.

Recent research has focused on categorizing brain MRI images using machine learning (ML) and deep learning (DL) [14]. Researchers worldwide have been drawn to this topic because of its potential outcomes. Classical ML algorithms are designed to learn from labeled data, whereas DL-based algorithms use artificial neural networks (ANNs) that can also learn from unlabeled data. Although DL is a subset of ML, the differences in capabilities and performance [15] are significant. ML algorithms require guidance to increase their performance, through modified functions and additional labeled data (supervised learning). By contrast, DL algorithms use neural networks to enhance their accuracy and reduce the error from the expected output, and can do so without labels (unsupervised learning). The most crucial component of all learning approaches is the data, which help the model collect the most relevant information, also known as features. Based on these features, a DL model learns and predicts the output. An autoencoder (AE) is an artificial neural network that is trained to reconstruct its input. It learns the underlying structure of the data to recreate the input as accurately as possible. The AE retains crucial and relevant information while intelligently removing redundancy; in simple terms, an AE learns a compressed form of the data. For an image, it learns to retain the most informative parts and eliminate the rest. This also makes an AE a feature extraction and dimensionality reduction tool [16].

A support vector machine (SVM) is an enhanced kernel-based technique [17]. A discrete wavelet transform (DWT) technique was used to extract features from the MRI images, and the resulting classification model determined whether the input brain MRI images were normal or pathological. However, owing to the limitations of the SVM classifier, such as restricted data precision and computational cost, the model can only function with a limited quantity of data. The features retrieved from MRI images using DWT have been reduced using principal component analysis [18], and the reduced feature vector was fed into the SVM classifier. Subsequently, a deep wavelet transform and AE-based deep neural network (DNN) technique was presented [19]. An AE attempts to learn an efficient representation of the data in an unsupervised manner. The DWT and AE were merged in this technique, and their fusion with the DNN resulted in improved accuracy and performance.

In this study, we propose a new classification approach for brain MRI images. The classification is binary: images are divided into two categories, normal and abnormal. The data used in this study contain healthy brain MRIs, tumorous and non-tumorous MRIs, and MRIs showing Alzheimer’s disease. The proposed method employs data augmentation techniques to expand the amount of data. The FCM algorithm is used to segment the brain MRI images: the tumorous or abnormal portion of the brain is segmented from the MRI image using FCM, exposing the abnormalities in the scan. Thereafter, the entire dataset and the segmented data are fed into an AE to extract high-level features of typical brain anatomy. An AE is widely used to extract features while simultaneously reducing the feature dimensions; this serves as both an image compression approach and a feature selection approach. Subsequently, for the final classification, a DNN is trained on these features. The observed results of the proposed method outperform existing techniques owing to its ability to cope with less raw data while achieving better classification accuracy.

The remainder of this paper is organized as follows: Section 3 focuses on the materials employed and the technique used in the proposed method. Section 4 discusses the findings of the proposed method. Finally, some concluding remarks are presented in Section 5.

3. Materials and Methods

For brain MRI classification, a DL strategy based on FCM and AE approaches was developed. FCM is used to extract useful information (features) from MRI scans. An AE attempts to reduce the dimensionality of the data [20], and the feature data retrieved from FCM were reduced using this AE characteristic. The data were later separated into training and test sets for a DNN classifier. The proposed model segments the MRI image using FCM, which separates the various brain tissues, such as gray matter (GM), white matter (WM), cerebrospinal fluid (CSF), and the skull, from the tumor tissues. These data were utilized as an abnormality feature set, which is useful for training and final classification. The AE, a powerful approach, is utilized for feature extraction and the reduction of irrelevant features.

The architecture of the proposed model for brain MRI classification based on FCM clustering and an AE is shown in Figure 3. The brain MRI image dataset was obtained from the e-health laboratory homepage of the Department of Computer Science, University of Cyprus (http://www.ehealthlab.cs.ucy.ac.cy), which provides free access to high-quality brain MRI images (e.g., normal, tumor, Alzheimer’s disease, and dementia). It includes T1-weighted and T2-weighted MRI images.

3.1 Brain MRI Dataset

T1-weighted and T2-weighted datasets from the e-health laboratory homepage, Department of Computer Science, University of Cyprus, were utilized in this study. The MRI images had a pixel resolution of 256 × 256 and were acquired using a T2-weighted turbo spin-echo pulse sequence (repetition time = 4406 ms, echo time = 100 ms, and echo spacing = 10.6 ms).

3.2 Brain MRI Data Augmentation

In general, an available dataset does not necessarily include all the variety in the data that helps a model learn essential and unique features. Image augmentation is the process of creating several images from a single image by modifying it in a certain manner, such as by rotation and inversion. It expands the quantity of data available for training and allows feature-extraction algorithms to acquire more information than a single image provides. This method is beneficial when the number of training images is extremely limited. Figure 4 shows several examples of augmented images.

As shown in Figure 4, intensity and spatial augmentation are utilized to induce variety in a dataset by flipping left or right, applying random deformations, zooming in or out, and applying random contrast modifications, among other approaches. Spatial augmentation prevents the network from focusing on features found mostly in a specific spatial region and helps the model learn spatially invariant features. In medical imaging, where images are acquired using various equipment in different locations, the pixel intensities and saturation may be heterogeneous. Under this scenario, intensity augmentation helps the model learn features during training by providing augmented images with different intensities. Furthermore, the features learned by the model become more general, reducing the risk of overfitting.
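As an illustration, the flips, rotations, and contrast changes described above can be sketched in plain NumPy (a minimal sketch; the paper does not report its augmentation implementation, and the `augment` helper below is hypothetical):

```python
import numpy as np

def augment(img, rng):
    """Return simple spatial and intensity variants of a square 2-D MRI slice
    whose values are assumed to be scaled to [0, 1]."""
    variants = [
        np.fliplr(img),                       # left-right flip (spatial)
        np.flipud(img),                       # up-down flip (spatial)
        np.rot90(img, k=rng.integers(1, 4)),  # random 90/180/270-degree rotation
        np.clip(img * rng.uniform(0.8, 1.2), 0.0, 1.0),  # random contrast (intensity)
    ]
    return variants

rng = np.random.default_rng(0)
slice_ = rng.random((256, 256))      # stand-in for a 256 x 256 MRI slice
augmented = augment(slice_, rng)     # four extra training images per input
```

Each call yields four additional training images, which is how augmentation expands a small MRI dataset before segmentation and feature extraction.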

3.3 MRI Image Segmentation using FCM Clustering

FCM is one of the most successful, efficient, and credible data-clustering techniques. The use of fuzzy sets and degrees of membership is the reason for its success. FCM allows a single data element to belong to two or more clusters with associated membership values. It works by minimizing the following objective function:

$$J_m = \sum_{i=1}^{N} \sum_{j=1}^{C} u_{ij}^{m} \left\| x_i - c_j \right\|^2.$$

Here, $1 \le m < \infty$, $u_{ij}$ is the degree of membership of $x_i$ in cluster $j$, $x_i$ is the $i$-th of the $N$ $d$-dimensional data points, $c_j$ is the center of cluster $j$, and $\|x_i - c_j\|$ measures the similarity (distance) between a data point and the cluster center.

The aforementioned objective function is utilized to quantify fuzzy partitioning through the optimization process, which employs the following membership and center function equations:

$$u_{ij} = \frac{1}{\sum_{k=1}^{C} \left( \dfrac{\|x_i - c_j\|}{\|x_i - c_k\|} \right)^{\frac{2}{m-1}}},$$

where $k$ indexes the clusters ($k = 1, \ldots, C$), and the cluster center in $d$ dimensions can be represented as

$$c_j = \frac{\sum_{i=1}^{N} u_{ij}^{m} \, x_i}{\sum_{i=1}^{N} u_{ij}^{m}}.$$

The minimization of the FCM objective function is determined based on the criterion in which high membership values are allocated to pixels closer to the centroids and low membership values are allocated to pixels farther away from the centroids.
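The two update equations can be sketched directly in NumPy (a minimal illustration; the fuzzifier m = 2, the iteration count, and the random initialization below are assumptions rather than the paper's settings):

```python
import numpy as np

def fcm(X, C, m=2.0, n_iter=50, seed=0):
    """Minimal fuzzy c-means sketch following the equations above.
    X: (N, d) data (e.g., flattened pixel intensities); C: number of clusters."""
    rng = np.random.default_rng(seed)
    u = rng.random((X.shape[0], C))
    u /= u.sum(axis=1, keepdims=True)            # memberships of each point sum to 1
    for _ in range(n_iter):
        um = u ** m
        # c_j = sum_i u_ij^m x_i / sum_i u_ij^m
        centers = (um.T @ X) / um.sum(axis=0)[:, None]
        # d_ij = ||x_i - c_j|| (small epsilon avoids division by zero)
        d = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2) + 1e-12
        # u_ij = 1 / sum_k (d_ij / d_ik)^(2/(m-1))
        u = 1.0 / ((d[:, :, None] / d[:, None, :]) ** (2.0 / (m - 1.0))).sum(axis=2)
    return u, centers

# Two well-separated groups of points end up in different clusters.
X = np.vstack([np.zeros((20, 2)), 10.0 * np.ones((20, 2))])
u, centers = fcm(X, C=2)
labels = u.argmax(axis=1)
```

For image segmentation, X would hold the flattened pixel intensities (or intensity feature vectors) of an MRI slice, and the maximum-membership cluster of each pixel yields the segmented regions.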

The augmented data from the previous stage, as well as other brain MRI data, are then segmented using FCM, exposing all features of a healthy brain and abnormalities in the brain MRI. These datasets were used as training data for our primary purpose and classification. Figure 5 shows example images as well as segmented images.

The primary goal of MRI segmentation is to segment an image into well-defined areas. Each area is composed of pixels with comparable intensities, texture qualities, or neighbors. For healthy brain images, the features revealed during this stage are comparable in terms of pattern and look, which has a significant influence on DNN classification learning.

3.4 Feature Extraction and Reduction using Autoencoder

An AE is an optimization approach that can be used to extract and learn the main components of a large data distribution. Following the segmentation procedure, the AE extracts and learns the distinctive properties of normal and abnormal brain structures (tumorous or other illnesses). An AE compresses the image data into a smaller dimension and then reproduces the result from the compressed data. The compressed data are a collection of image properties known as the latent-space representation, which is later used to rebuild the image. In this step, the AE extracts and reduces the features of the segmented image. Feature reduction occurs because a layer with fewer dimensions than the input layer is placed between the encoder and decoder. Because the input image size is large, we employ an additional hidden layer for both encoding and decoding. The AE architecture is shown in Figure 6; the middle (bottleneck) layer produces a compressed representation that is decoded to an image with a pixel resolution of 64 × 64.
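The compress-then-reconstruct idea can be illustrated with a one-hidden-layer autoencoder in plain NumPy (a toy sketch only: the paper's AE in Figure 6 is deeper and was built with a DL framework, and all dimensions and hyperparameters below are assumptions):

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def train_autoencoder(X, latent_dim=16, epochs=500, lr=0.1, seed=0):
    """Toy bottleneck autoencoder trained by gradient descent on the
    mean-squared reconstruction error (no bias terms, for brevity)."""
    rng = np.random.default_rng(seed)
    n, d = X.shape
    W1 = rng.normal(0.0, 0.1, (d, latent_dim))   # encoder weights
    W2 = rng.normal(0.0, 0.1, (latent_dim, d))   # decoder weights
    for _ in range(epochs):
        H = sigmoid(X @ W1)                      # latent-space representation
        err = H @ W2 - X                         # reconstruction error
        gW2 = H.T @ err / n                      # gradient w.r.t. decoder weights
        gW1 = X.T @ (err @ W2.T * H * (1 - H)) / n  # gradient w.r.t. encoder weights
        W1 -= lr * gW1
        W2 -= lr * gW2
    return W1, W2

X = np.random.default_rng(1).random((64, 256))   # 64 samples, 256-dim inputs
W1, W2 = train_autoencoder(X)
features = sigmoid(X @ W1)                       # compressed 16-dim feature vectors
loss = np.mean((features @ W2 - X) ** 2)         # reconstruction quality
```

The `features` matrix is what a downstream classifier would consume: each 256-dimensional input is summarized by 16 learned values, which is the feature extraction and dimensionality reduction role described above.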

The generated feature vectors are then utilized to train the DNN. Figure 7 shows several feature vector patterns.

The features retrieved from the AE and original dataset were merged and fed into the DNN classifier for training and testing.

3.5 Deep Neural Network Classification

The DNN was used for classification after feature extraction and reduction. Classification was accomplished by constructing and training a DNN with seven hidden layers using an 11-fold cross-validation approach. The proposed technique classifies the brain MRI data into two categories: normal and abnormal.

Figure 8 depicts the DNN classifier model. The DNN classifier was trained using these two classes and the features retrieved from the AE to identify a test image as normal or abnormal (tumor or Alzheimer’s disease).
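A sketch of such a classifier using the Keras stack named in Section 4 is shown below. The paper reports only the depth (seven hidden layers), the two-way softmax output, and the Adam optimizer; the layer widths, ReLU hidden activations, and input size here are illustrative assumptions:

```python
import numpy as np
from tensorflow import keras
from tensorflow.keras import layers

def build_classifier(n_features):
    """Seven-hidden-layer DNN ending in a softmax over the two classes
    (normal vs. abnormal)."""
    model = keras.Sequential([keras.Input(shape=(n_features,))])
    for units in (1024, 512, 256, 128, 64, 32, 16):  # seven hidden layers (widths assumed)
        model.add(layers.Dense(units, activation="relu"))
    model.add(layers.Dense(2, activation="softmax"))  # class probabilities sum to 1
    model.compile(optimizer="adam", loss="categorical_crossentropy",
                  metrics=["accuracy"])
    return model

model = build_classifier(64 * 64)                    # e.g., a flattened 64 x 64 AE output
probs = model.predict(np.random.rand(4, 64 * 64), verbose=0)
```

Calling `model.fit` on the merged AE features and class labels would then reproduce the training stage; `probs` holds one probability pair per input image.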

4. Results and Discussion

This section addresses the experimental results and the parametric configuration of the proposed brain MRI classification model, which was validated through various experiments. Python 3.6 was used for the experiment setup, along with fundamental packages such as SciPy, NumPy, and Matplotlib, and machine learning packages such as Keras and scikit-learn. The hardware configuration for the experiment was a Samsung PC with an Intel Core i5-4590 CPU, 16 GB of RAM, and an NVIDIA GeForce GTX 1070 Ti GPU. The proposed model was implemented using TensorFlow 1.5 libraries and CUDA 10.0. A total of 500 brain MRI images were used for training, and 50 images were used for testing.

The learning rate was set to 0.01, and the model was trained for 400 epochs. For comparison, the activation function and optimization method in DWA-DNN [21] are sigmoid and Broyden–Fletcher–Goldfarb–Shanno, respectively, with a batch size of 20. The softmax function provides the range of probabilities for each class, with the target class having the highest probability; each probability lies between 0 and 1, and softmax ensures that the probabilities of the output classes sum to 1. It is often the final layer of a classification model. The proposed technique employs the Adam optimizer for optimization, and binary classification is accomplished using the softmax activation function.

In the binary classification, a confusion matrix consisting of four outcomes was generated, defined as follows:

  • Correct positive prediction: True Positive (TP),

  • Incorrect positive prediction: False Positive (FP),

  • Correct negative prediction: True Negative (TN),

  • Incorrect negative prediction: False Negative (FN).

A fundamental metric obtained from the confusion matrix was used to evaluate the performance of the classification model.

4.1 Accuracy

The accuracy was calculated by dividing the total number of correct predictions by the total number of predictions. Eq. (4) represents the accuracy of the model.

$$\text{Accuracy} = \frac{TP + TN}{TP + TN + FP + FN}.$$

The highest accuracy value is 1.0 (100%), and the lowest is 0.0 (0%).

4.2 Sensitivity

The sensitivity is determined by dividing the number of correct positive predictions by the total number of actual positives. It is sometimes referred to as the true positive rate (TPR).

$$S = \frac{TP}{TP + FN}.$$
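Both metrics follow directly from the confusion-matrix counts; a minimal sketch (the counts below are hypothetical, chosen only to illustrate the arithmetic):

```python
def accuracy(tp, tn, fp, fn):
    # Eq. (4): correct predictions over all predictions
    return (tp + tn) / (tp + tn + fp + fn)

def sensitivity(tp, fn):
    # True positive rate: correct positives over all actual positives
    return tp / (tp + fn)

# Hypothetical counts for a 50-image test set:
acc = accuracy(tp=19, tn=29, fp=1, fn=1)   # (19 + 29) / 50 = 0.96
sens = sensitivity(tp=19, fn=1)            # 19 / 20 = 0.95
```

Note that sensitivity ignores the negatives entirely, which is why a model can have high accuracy yet still miss diseased cases; reporting both, as done here, guards against that.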

As shown in Figure 9, the accuracy of the proposed model (96%) was greater than that of the existing models. The model attained greater accuracy because of its self-learning and generalization capabilities. The AE recognizes and collects the most essential features during the feature extraction and reduction stage, which assists the model in learning the major patterns of abnormal and normal MRI images. The high sensitivity rate (95%) also demonstrates the suitability of the proposed model for medical imaging. Sensitivity assesses how well a test detects a disease, whereas accuracy assesses how well a diagnostic test correctly identifies and rules out a particular condition. In the medical field, a high sensitivity rate indicates robust model performance and reduces the chance of errors in disease diagnosis. In our previous research [22], we compared several brain MRI classification models, such as the DWT-SVM-, DWA-DNN-, and CNN-based models; refer to that study for detailed statistical comparisons of model performance. In this study, we compared the performance of the DWT-SVM- and DWA-DNN-based models with that of our proposed model. These two methods were chosen because they follow a pattern (feature extraction + classification) similar to that of our proposed model.

As shown in Figure 10, the training curve closely tracked the validation curve, indicating that the proposed model performed superbly.

5. Conclusion

The proposed method employs a data augmentation strategy that allows the model to work efficiently with minimal training data. Once FCM has exposed the significant features, the AE reduces the feature maps and retains only the most significant ones. The DNN learns the most important features and distinguishes normal from abnormal data in the final layer. Because no external classifier technique was employed, the computational complexity was reduced. In addition, the DNN classifier outperformed conventional classifiers in terms of accuracy. The high sensitivity rate also demonstrates the robustness and reliability of the proposed model for medical imaging.

Figure 1. Brain MRI scans: (a) normal brain, (b) brain tumor, and (c) brain damaged by Alzheimer’s disease.

Figure 2. Types of brain MRI images: (a) FLAIR, (b) T1, (c) T2, (d) DWI, and (e) fMRI.

Figure 3. Proposed model architecture based on a deep learning approach for brain MRI classification using FCM and AE features.

Figure 4. Augmented images obtained through spatial and intensity augmentation.

Figure 5. Brain MRI segmentation using FCM.

Figure 6. AE architecture.

Figure 7. Features extracted using AE.

Figure 8. DNN classifier.

Figure 9. Performance comparison.

Figure 10. Training and validation accuracy curves of the proposed method.

References

  1. Reardon, S (2019). Rise of robot radiologists. Nature. 576, S54-S54. https://doi.org/10.1038/d41586-019-03847-z
  2. Chauhan, N, and Choi, BJ (2018). Performance analysis of denoising algorithms for human brain image. International Journal of Fuzzy Logic and Intelligent Systems. 18, 175-181. https://doi.org/10.5391/IJFIS.2018.18.3.175
  3. Alzheimer’s Association (2017). 2017 Alzheimer’s disease facts and figures. Alzheimer’s & Dementia. 13, 325-373. https://doi.org/10.1016/j.jalz.2017.02.001
  4. Sauwen, N, Acou, M, Van Cauter, S, Sima, DM, Veraart, J, Maes, F, Himmelreich, U, Achten, E, and Van Huffel, S (2016). Comparison of unsupervised classification methods for brain tumor segmentation using multi-parametric MRI. NeuroImage: Clinical. 12, 753-764. https://doi.org/10.1016/j.nicl.2016.09.021
  5. Chauhan, N, and Choi, BJ (2019). Denoising approaches using fuzzy logic and convolutional autoencoders for human brain MRI image. International Journal of Fuzzy Logic and Intelligent Systems. 19, 135-139. https://doi.org/10.5391/IJFIS.2019.19.3.135
  6. Despotovic, I, Goossens, B, and Philips, W (2015). MRI segmentation of the human brain: challenges, methods, and applications. Computational and Mathematical Methods in Medicine. 2015. article no. 450341
  7. Backstrom, K, Nazari, M, Gu, IYH, and Jakola, AS. An efficient 3D deep convolutional network for Alzheimer’s disease diagnosis using MR images. Proceedings of 2018 IEEE 15th International Symposium on Biomedical Imaging (ISBI), 2018, Washington, DC, pp. 149-153. https://doi.org/10.1109/ISBI.2018.8363543
  8. Kannan, SR, Devi, R, Ramathilagam, S, and Takezawa, K (2013). Effective FCM noise clustering algorithms in medical images. Computers in Biology and Medicine. 43, 73-83. https://doi.org/10.1016/j.compbiomed.2012.10.002
  9. Elazab, A, AbdulAzeem, YM, Wu, S, and Hu, Q (2016). Robust kernelized local information fuzzy C-means clustering for brain magnetic resonance image segmentation. Journal of X-ray Science and Technology. 24, 489-507. https://doi.org/10.3233/XST-160563
  10. Saradhi, AV, and Srinivas, L (2018). An efficient method k-means clustering for detection of tumour volume in brain MRI scans. International Journal of Trend in Scientific Research and Development. 2, 723-732. https://doi.org/10.31142/ijtsrd13027
  11. Ji, Z, Liu, J, Cao, G, Sun, Q, and Chen, Q (2014). Robust spatially constrained fuzzy c-means algorithm for brain MR image segmentation. Pattern Recognition. 47, 2454-2466. https://doi.org/10.1016/j.patcog.2014.01.017
  12. Pohle, R, and Toennies, KD (2001). Segmentation of medical images using adaptive region growing. Proceedings of SPIE. 4322, 1337-1346. https://doi.org/10.1117/12.431013
  13. Chang, PL, and Teng, WG. Exploiting the self-organizing map for medical image segmentation. Proceedings of the 20th IEEE International Symposium on Computer-Based Medical Systems (CBMS), 2007, Maribor, Slovenia, pp. 281-288. https://doi.org/10.1109/CBMS.2007.48
  14. Kannan, SR (2008). A new segmentation system for brain MR images based on fuzzy techniques. Applied Soft Computing. 8, 1599-1606. https://doi.org/10.1016/j.asoc.2007.10.025
  15. Lundervold, AS, and Lundervold, A (2019). An overview of deep learning in medical imaging focusing on MRI. Zeitschrift für Medizinische Physik. 29, 102-127. https://doi.org/10.1016/j.zemedi.2018.11.002
  16. Dokur, Z (2008). A unified framework for image compression and segmentation by using an incremental neural network. Expert Systems with Applications. 34, 611-619. https://doi.org/10.1016/j.eswa.2006.09.017
  17. Xiao, Y, Wu, J, Lin, Z, and Zhao, X (2018). A semi-supervised deep learning method based on stacked sparse auto-encoder for cancer prediction using RNA-seq data. Computer Methods and Programs in Biomedicine. 166, 99-105. https://doi.org/10.1016/j.cmpb.2018.10.004
  18. Othman, MFB, Abdullah, NB, and Kamal, NFB. MRI brain classification using support vector machine. Proceedings of 2011 4th International Conference on Modeling, Simulation and Applied Optimization, 2011, Kuala Lumpur, Malaysia, pp. 1-4. https://doi.org/10.1109/ICMSAO.2011.5775605
  19. Abdullah, N, Chuen, LW, Ngah, UK, and Ahmad, KA. Improvement of MRI brain classification using principal component analysis. Proceedings of 2011 IEEE International Conference on Control System, Computing and Engineering, 2011, Penang, Malaysia, pp. 557-561. https://doi.org/10.1109/ICCSCE.2011.6190588
  20. Petscharnig, S, Lux, M, and Chatzichristofis, S. Dimensionality reduction for image features using deep learning and autoencoders. Proceedings of the 15th International Workshop on Content-Based Multimedia Indexing, 2017, Florence, Italy, pp. 1-6. https://doi.org/10.1145/3095713.3095737
  21. Mallick, PK, Ryu, SH, Satapathy, SK, Mishra, S, Nguyen, GN, and Tiwari, P (2019). Brain MRI image classification for cancer detection using deep wavelet autoencoder-based deep neural network. IEEE Access. 7, 46278-46287. https://doi.org/10.1109/ACCESS.2019.2902252
  22. Chauhan, N, and Choi, BJ (2019). Performance analysis of classification techniques of human brain MRI images. International Journal of Fuzzy Logic and Intelligent Systems. 19, 315-322. https://doi.org/10.5391/IJFIS.2019.19.4.315
