Research Article | Open Access

Brain medical image diagnosis based on corners with importance-values

Abstract

Background

Brain disorders are among the top causes of human death. Generally, neurologists analyze brain medical images for diagnosis. In the image analysis field, corners are one of the most important features, which makes corner detection and matching essential. However, existing corner detection studies do not consider the domain information of the brain. This leads to many useless corners and the loss of significant information. Regarding corner matching, the uncertainty and structure of the brain are not employed in existing methods. Moreover, most corner matching studies are designed for 3D image registration; because of the different mechanisms, they are inapplicable to 2D brain image diagnosis. To address these problems, we propose a novel corner-based brain medical image classification method. Specifically, we automatically extract multilayer texture images (MTIs), which embody diagnostic information from neurologists. Moreover, we present a corner matching method utilizing the uncertainty and structure of brain medical images and a bipartite graph model. Finally, we propose a similarity calculation method for diagnosis.

Results

Brain CT and MRI image sets are used to evaluate the proposed method. First, classifiers are trained with N-fold cross-validation to determine the best θ and K. Then independent brain image sets are tested to evaluate the classifiers. The classifiers are also compared with advanced brain image classification studies. For the brain CT image set, the proposed classifier outperforms the comparison methods by at least 8% in accuracy and 2.4% in F1-score. For the brain MRI image set, the proposed classifier is superior to the comparison methods by more than 7.3% in accuracy and 4.9% in F1-score. The results also demonstrate that the proposed method is robust to different intensity ranges of brain medical images.

Conclusions

In this study, we develop a robust corner-based brain medical image classifier. Specifically, we propose a corner detection method utilizing the diagnostic information from neurologists and a corner matching method based on the uncertainty and structure of brain medical images. Additionally, we present a similarity calculation method for brain image classification. Experimental results on two brain image sets show that the proposed corner-based brain medical image classifier outperforms state-of-the-art studies.

Background

Image-based brain disorder diagnosis has attracted increasing interest in computer-assisted interventions [1]. The task has been handled efficiently with machine learning methods [2]. Among them, deep learning has shown state-of-the-art performance in recent years and has been applied in many fields of computer vision, natural language processing, and medical image analysis. However, deep learning is still limited by its vast number of network parameters, which must be learned from a large amount of data, whereas data sets collected in brain imaging studies are commonly very small. Thus deep learning remains challenging for brain image analysis. In the prevalent machine learning framework for brain imaging diagnosis, feature extraction is an essential step [1]. Rong et al. proposed a symmetry-based brain CT image classification method consisting of three stages [3]. First, the weak symmetry of the histograms of the two hemispheres is used to classify brain CT images as normal or abnormal. Second, the strong symmetry of the textures between the two hemispheres in abnormal images is used to locate lesions. Finally, the abnormal images are classified as benign or malignant based on the features of the lesions. This method has caveats: for abnormal images with similar lesions in both hemispheres or with lesions in the middle of the brain, the weak or strong symmetry features are insufficient to distinguish normal from abnormal images. Ding et al. proposed a joint feature selection method combining voxel-based morphometry (VBM) and texture analysis to distinguish Alzheimer’s disease (AD) from normal controls [4]. However, this method is not stable for diagnosis on brain medical images with different intensity ranges.

Corners are small points of interest with variation in two dimensions [5]. They are essential image features and play a critical role in grasping objects. Tremendous progress has been made in corner research [5,6,7,8], but many fundamental problems are still open, such as corner detection, matching, and corner-based image classification.

As a fundamental and important step in many vision tasks such as image matching, recognition and tracking [5, 6, 9], corner detection has been well studied for many years. Existing methods fall into three main categories: gray intensity-based, contour-based, and model-based methods [10]. One of the most classical gray intensity-based methods is the Harris method [11]. It is broadly applied because of its rotation, translation, and illumination invariance [10]. Nevertheless, Harris is prone to produce numerous useless corners when applied to a whole grayscale image. In medical image analysis, doctors generally pay considerable attention to the morphology of ROIs (regions of interest), so corners detected from uninteresting regions tend to be useless. Moreover, Harris only stores the coordinates of corners, which loses important medical domain knowledge; concretely, these corners do not preserve the uncertainty and structure of pixels in brain medical images [12]. To address this inadequacy, contour-based corner detection methods [13,14,15] might be considered. These methods first extract curves from images using edge detectors and then search for the curvature maxima along those curves as corners. Although they detect corners on image contours to reduce useless corners, the extracted contours are all treated equally, which does not integrate doctors’ diagnostic information, so the detected corners still lose essential domain knowledge. In terms of model-based methods, existing approaches require fitting images to a predefined model and are often limited to specific types of points [16]. Moreover, these methods are time-consuming, and the detected corners do not carry significant diagnostic information either.

Corner matching is to realize the correspondences between two corner sets by estimating the transformation from one corner set to the other [17]. Existing corner matching studies mainly fall into two categories: rigid and non-rigid. Rigid methods are mainly developed from the classical iterative closest point (ICP) algorithm [18], which uses affine transformations to find the closest corresponding points between two surfaces. Nonetheless, ICP-based methods do not fully capture anatomical variability, especially when comparing shapes with significant differences [19]. Thus, rigid methods are less applicable to real-world problems [20]. Non-rigid corner matching methods generally first produce an initial matched corner pair sequence and then estimate the transformation between initial matched corner pair sequences to generate a final matched corner pair sequence. Yi et al. [17] used the l1-norm to formalize the transformation between initial matched corner pair sequences. However, they did not address the generation of initial matched corner pair sequences, so the uncertainty of corners is not taken into account in the matching process. Myronenko and Song [20] proposed a probabilistic method to align corner sets. They regard corners as a group and move the group coherently to keep the topological structure among corners. Since deformations of human brains usually happen in local regions, coherent movement causes improper matching among corners from other regions, which can prevent the best correspondence from being generated. Thus, this method is inappropriate for corner matching in brain medical images. Moreover, the above corner matching methods are used for 3D image registration: they match corners of images from the same object or person, unlike the matching of brain medical images from different patients.

To address these problems, an automatic multilayer texture image (MTI) extraction method is first proposed for corner detection. The method integrates diagnostic knowledge from neurologists and assigns each corner an importance-value representing the uncertainty of a pixel. Second, a corner matching method is presented that combines the uncertainty and structure of brain medical images. It generates an initial matched corner pair sequence based on the definition of matched corner pairs and utilizes a bipartite graph model to reduce the redundancy of the initial matched corner pair sequence. Finally, a similarity function is proposed based on the matched and unmatched corner pairs to make diagnoses for brain medical images. Experimental results on brain CT and MRI image sets show that the proposed corner-based brain medical image diagnosis method outperforms three state-of-the-art methods in accuracy and F1-score, and is more robust to different intensity ranges of brain medical images.

Methods

Workflow

The workflow is schematically illustrated in Fig. 1. It comprises two parts. The first part trains a classifier, with blue arrows as guidance. A set of labeled brain medical images is divided into training and validation images to train the classifier. First, each image is normalized to a uniform size. Second, a MTI is generated from each normalized image, and corners with importance-values are detected on the MTI under a specific θ. Third, each pair of corner sequences, one from the validation images and the other from the training images, is matched to generate an initial matched corner pair sequence. The initial matched corner pair sequence is transformed into a bipartite graph to yield the final matched corner pair sequence. After that, the similarity between validation and training images is computed based on their final matched corner pair sequence. The labels of the validation images are predicted with the KNN model to validate the values of θ and K. Finally, the best values θc of θ and Kc of K are used to construct the corner-based classifier. In the second part (with black arrows as guidance), given a test image set, after preprocessing, corners are detected and matched with the corners of the training images. Then the labels of the test images are predicted based on the trained classifier. Table 1 lists the frequently used symbols.

Fig. 1 Workflow of the analysis. The sequence with the blue arrows depicts the training of the classifier with the corner response threshold θ in corner detection and with the K in the K-nearest neighbor model. The other sequence with the black arrows aims to test the trained classifier.

Table 1 Frequently used symbols

Brain medical image sets

Two brain medical image sets are used in this study. The first one consists of 500 brain CT images and is denoted as Dct (the images are detailed in Additional file 1). It is made up of 3 types of images: 380 normal, 92 cerebral infarction (CI), and 28 cerebral hemorrhage (CH) images. These images are divided into two categories: “Normal” and “Abnormal”. The 380 normal images belong to the “Normal” category and the others comprise the “Abnormal” one. Each image in Dct has 256 grayscale values and is labeled by specialists from a neurological department. The second image set is made up of 850 brain MRI images and is denoted as Dmri. These images are downloaded from ADNI (http://adni.loni.usc.edu/). The ADNI was launched in 2003 as a public-private partnership, led by Principal Investigator Michael W. Weiner, MD. The primary goal of ADNI has been to test whether serial MRI, positron emission tomography (PET), other biological markers, and clinical and neuropsychological assessment can be combined to measure the progression of mild cognitive impairment (MCI) and early Alzheimer’s disease (AD). For up-to-date information, see www.adni-info.org. The downloaded Dmri contains two categories of MRI images: 430 normal and 420 AD images. Each image in Dmri has 65536 grayscale values. Additionally, all of the images in Dct and Dmri contain the ventricle portion. Figure 2 shows some “Normal” and “Abnormal” image examples.

Fig. 2 Examples of “Normal” and “Abnormal” images. Brain MRI images in the first row belong to the “Normal” category and those in the second row are “Abnormal” ones. Brain CT images in the third row are “Normal” and those in the fourth row are “Abnormal”.

Brain medical image normalization

Original brain medical images are usually produced in different sizes and at different angles and also contain useless information. They consist of three portions: background, skull, and the intracranial portion. The background is useless for doctors’ diagnosis and always contains noise, and when doctors make diagnoses they usually focus on the intracranial portion. Thus, the original images are preprocessed as follows: the intracranial portion is extracted using the Canny operator [21], exploiting the clear disparity of grayscale values between the skull and the intracranial portion; the intracranial portion is rotated into the vertical direction using the method in [22]; the image is cropped based on the vertical bounding rectangle of its intracranial portion; and the image size is unified to Row × Column, where Row/Column is the mean of all row/column numbers of the brain medical image set. Counting the image sizes of the two brain medical image sets separately, we normalize all brain CT images to 285×260 and all brain MRI images to 175×141.
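For concreteness, the sketch below illustrates this preprocessing chain in Python with OpenCV (assuming an OpenCV 4 `findContours` signature and an 8-bit grayscale input); the intracranial portion is approximated by the largest Canny-derived contour, the rotation step of [22] is omitted, and the Canny thresholds (50, 150) and the helper name `normalize_brain_image` are illustrative choices rather than the authors' settings.

```python
import cv2
import numpy as np

def normalize_brain_image(img, row=285, col=260):
    """Rough sketch of the normalization step: find the intracranial region via
    Canny edges, crop to its bounding box, and resize to a uniform Row x Column
    size (285x260 is the CT setting from this section). The midline-rectification
    (rotation) step of [22] is omitted here."""
    edges = cv2.Canny(img, 50, 150)                  # skull boundary is high-contrast
    contours, _ = cv2.findContours(edges, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)
    largest = max(contours, key=cv2.contourArea)     # assume the largest contour bounds the head
    x, y, w, h = cv2.boundingRect(largest)
    cropped = img[y:y + h, x:x + w]                  # drop background outside the bounding box
    return cv2.resize(cropped, (col, row))           # cv2.resize expects (width, height)
```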

Figure 3 shows an example of normalizing a brain CT image. Figure 3a is an original brain CT image with background and skull. Its intracranial portion is extracted, as shown in Fig. 3b. Then its angle is rectified into a roughly vertical direction, as shown in Fig. 3c. After that, the rectified image is cropped and unified to Row×Column size. The final normalized grayscale image GI is displayed in Fig. 3e, where GI(m, n) is the grayscale value of the pixel located at (m, n).

Fig. 3 Example of normalizing a brain CT image. a Original image I. b Image with the extracted intracranial portion. c Rotated image. d Image with its vertical bounding rectangle. e Normalized grayscale image GI.

Corner detection based on multilayer texture images

In the diagnostic process of brain medical images, doctors generally pay more attention to the morphology of hypodense (dark) and hyperdense (bright) regions than to that of other regions [23]. That is, different regions in a brain medical image embody diverse diagnostic information, so corners detected from hypodense and hyperdense regions are more essential than those detected from other regions. Under this guidance, a multilayer texture image (MTI) is first proposed to distinguish the significance of regions in brain medical images. Then, corners are detected on the MTI and are assigned different importance-values to reflect their significance. Thus, corners detected from MTIs carry certain diagnostic information.

Given a Row×Column GI, its MTI is comprised of multiple layers of textures. Each layer of textures has the same pixel value. The pixel values denote the significance of pixels, and the pixel value and the significance of pixels are directly correlated. The MTI extraction method is similar to the Canny method [21]: it involves smoothing an image, computing its intensity gradient, determining hysteresis double thresholds, and generating edges based on the pair of double thresholds. However, the Canny method can only generate a binary texture image by using one pair of double thresholds, and a binary texture image cannot describe the significance of different regions. To handle this, we propose an automatic MTI generation method that uses multiple pairs of double thresholds based on the intensity gradient of a GI.

The morphology of the hypodense and hyperdense regions is reflected by the pixels with larger variations in a GI, i.e., the pixels with larger intensity gradient values. Moreover, the clearness of the morphology of a region is positively related to the intensity gradient values of its pixels. Thus, the intensity gradient values of a GI are used to generate its MTI. First, the pixels of the GI are ordered in descending order of intensity gradient value. Then, the number of these ordered pixels is accumulated to generate a sequence of pairs of double thresholds. Specifically, the histogram H of the intensity gradient values of the GI is calculated with Bn bins, where H(j) is the number of pixels in the j-th bin. If

$$ \underset{s^i \in \{1, 2, \dots, Bn\}}{\arg\min} \; \sum_{j=1}^{s^i} H(j) \ge R^i \times Row \times Column, $$
(1)

then

$$ highTh^i = s^i / Bn, \qquad lowTh^i = rTh \times highTh^i. $$
(2)

[highTh^i, lowTh^i] is the i-th pair of double thresholds and is used to extract the textures in the i-th layer.

In Eqs. 1 and 2, s^i is an index into H(j) and its maximum value is Bn. The histogram value of each bin is the number of pixels covering several consecutive intensity gradient values; provided the maximum intensity gradient value of an image is M, each bin counts the number of pixels within M/Bn consecutive intensity gradient values. R^i is a decimal iterating from R^1 down to 0 in steps of 0.1. It controls the fraction of pixels, ordered by descending intensity gradient value, that is covered. For each R^i, Eq. 1 returns an s^i value, indicating that the pixels in the first s^i bins of H(j) have clearer edges than those in the rear bins. Thus, s^i divided by the maximum bin number Bn is used to generate the thresholds, as shown in Eq. 2.

In our study, Bn and rTh are set to 64 and 0.3, respectively. Generally, the morphology of the ventricles is the clearest part of a brain medical image. Based on experiments, R^1 is set to 0.8 to produce the first pair of double thresholds [highTh^1, lowTh^1], which is used to extract the first layer of textures containing the edges of the ventricles. Then R^i is decreased by 0.1 each time until 0 to produce the remaining pairs of double thresholds, which are used to extract the remaining (N-1) layers of textures.

It is notable that the pairs of double thresholds are generated in descending order, and the preceding thresholds are used to extract the textures of clearer regions. Based on this, a MTI is created by setting MTI(m, n) = 1/i when textures in the i-th layer pass the pixel (m, n) but textures in the preceding (i-1) layers do not. When no texture passes a pixel, the pixel carries no specific diagnostic information and is set to 0 as background. By construction, the textures in preceding layers have larger importance-values than those in rear layers, indicating that they embody more essential diagnostic information. Hence MTIs not only describe the edges of ROIs but also distinguish the significance of these ROIs.
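A simplified sketch of this MTI construction, assuming the GI is an 8-bit NumPy array: the Sobel gradient, the rescaling of highTh^i and lowTh^i to Canny's 8-bit threshold range, and the helper name `extract_mti` are assumptions, while Bn=64, rTh=0.3, R^1=0.8 and the 1/i importance-values follow this section.

```python
import cv2
import numpy as np

def extract_mti(gi, bn=64, r_th=0.3, r1=0.8):
    """Sketch of multilayer texture image (MTI) extraction (Eqs. 1-2)."""
    gx = cv2.Sobel(gi, cv2.CV_64F, 1, 0)
    gy = cv2.Sobel(gi, cv2.CV_64F, 0, 1)
    grad = np.sqrt(gx ** 2 + gy ** 2)                # intensity gradient magnitude
    hist, _ = np.histogram(grad, bins=bn)            # histogram H with Bn bins
    cum = np.cumsum(hist)
    rows, cols = gi.shape
    mti = np.zeros_like(gi, dtype=float)
    r_values = np.arange(r1, 0.0, -0.1)              # R^i: 0.8, 0.7, ..., 0.1
    for i, r in enumerate(r_values, start=1):
        s_i = np.argmax(cum >= r * rows * cols) + 1  # smallest s^i satisfying Eq. 1
        high = s_i / bn                              # highTh^i (fraction of the gradient range)
        low = r_th * high                            # lowTh^i
        # cv2.Canny is a stand-in for per-layer edge extraction; the fractional
        # thresholds are rescaled to the 8-bit range as an assumption.
        edges = cv2.Canny(gi, int(low * 255), int(high * 255))
        layer = (edges > 0) & (mti == 0)             # keep pixels not claimed by earlier layers
        mti[layer] = 1.0 / i                         # importance-value of the i-th layer
    return mti
```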

After a MTI is generated, corners are detected on the MTI using the Harris algorithm [11], where the corner response threshold θ is discussed in the experiments. Besides the coordinates (m, n) of each corner, we also preserve MTI(m, n) as the importance-value of (m, n).

The details of corner detection on a MTI are outlined in Algorithm 1. Lines 1-13 describe MTI extraction, and lines 14-21 describe corner detection on the MTI. F(m, n) is the corner response function that determines whether (m, n) is a corner [11]. Through Algorithm 1, a brain medical image can be represented by a corner sequence C. Since the numbers of iterations over R^i, m, and n are constant and s^i can also be solved in a constant number of steps (i.e., fewer than 64), the time complexity of Algorithm 1 is O(1). An example of corner detection on MTIs is demonstrated in Fig. 4.
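The corner-detection step might look as follows, using scikit-image's Harris response as a stand-in for the Harris implementation of [11]; treating θ as the relative threshold of `corner_peaks` is a loose approximation of the corner response threshold, not the authors' exact criterion.

```python
import numpy as np
from skimage.feature import corner_harris, corner_peaks

def detect_corners(mti, theta=0.04):
    """Detect corners on a MTI and attach importance-values (sketch)."""
    response = corner_harris(mti)                      # Harris corner response F(m, n)
    # corner_peaks keeps local maxima whose response exceeds a relative threshold;
    # using theta here loosely mirrors the corner response threshold of the paper.
    coords = corner_peaks(response, threshold_rel=theta, min_distance=1)
    # Each corner c = ((m, n), MTI(m, n)): coordinates plus importance-value.
    return [((int(m), int(n)), float(mti[m, n])) for m, n in coords]
```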

Fig. 4 Example of corner detection on a MTI. a GI. b MTI of the GI. c Detected corners mapped to the GI.

Corner matching based on the uncertainty and structure of corners

The pixels of brain medical images exhibit uncertainty and structure [23]. Thus, corners detected from MTIs also retain this uncertainty and structure. In detail, corners from the same regions of similar brain medical images have similar locations, i.e., the locations of these corners have a certain mobility. Moreover, the mobility of a corner is inversely proportional to its importance-value: a low importance-value indicates high mobility and vice versa. Thus, we define the mobility Mov(m, n) of a corner (m, n) as below:

$$ Mov\left(m,n\right)=\frac{1}{MTI\left(m,n\right)} $$
(3)

Based on the mobility of corners, matched corner pairs are generated.

Definition 1

Given two corner sequences C and C′, c = ((m, n), MTI(m, n)) ∈ C and c′ = ((m′, n′), MTI′(m′, n′)) ∈ C′, c and c′ are defined as a matched corner pair [c, c′] if they satisfy

$$ dis\left(c,{c}^{\prime}\right)\le Mov(c)+ Mov\left({c}^{\prime}\right), $$
(4)

where dis(c, c′) = ||(m, n) − (m′, n′)||_2 is the Euclidean distance between the corner coordinates.
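Eqs. 3-4 translate directly into a matching predicate; the sketch below reuses the ((m, n), MTI(m, n)) corner representation from the detection sketch above and is only an illustration of Definition 1.

```python
import math

def mobility(corner):
    """Eq. 3: Mov(m, n) = 1 / MTI(m, n)."""
    (_, _), importance = corner
    return 1.0 / importance

def is_matched_pair(c, c_prime):
    """Eq. 4: corners match when their distance is within their combined mobility."""
    (m, n), _ = c
    (mp, np_), _ = c_prime
    dist = math.hypot(m - mp, n - np_)   # dis(c, c'), Euclidean distance
    return dist <= mobility(c) + mobility(c_prime)
```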

Initial matched corner pair sequences are generated based on Definition 1. Nevertheless, these sequences contain redundant matched corner pairs. In order to remove the redundancy while preserving information to the maximum extent, we define a final matched corner pair sequence.

Definition 2

Given an initial matched corner pair sequence CI = {([c_Ii, c_Ij′], dis(c_Ii, c_Ij′)) | c_Ii ∈ C, c_Ij′ ∈ C′}, CM = {([c_Mi, c_Mj′], dis(c_Mi, c_Mj′)) | c_Mi ∈ C, c_Mj′ ∈ C′} is the corresponding final matched corner pair sequence satisfying: (1) each corner pair [c_Mi, c_Mj′] is one-to-one; (2) |CM| is the maximum; (3) ∑_{[c_Mi, c_Mj′] ∈ CM} dis(c_Mi, c_Mj′) is the minimum.

Algorithm 1 Corner detection based on a MTI
Algorithm 2 Corner matching based on a bipartite graph model

The first condition in Definition 2 eliminates redundant matched corner pairs. The second and third conditions ensure that the maximum number of matched corner pairs is preserved with the minimum total distance.

In order to realize CM, a bipartite graph G=(V, E, We) is generated based on CI. The problem of generating the final matched corner pair sequence is thereby transformed into solving the maximum matching of G, which can be solved using the Hungarian method [24, 25].

Algorithm 2 outlines the main ideas of the proposed corner matching method. The vertex number of G is (|C|+|C′|), the maximum edge number of G is |C|×|C′|, and the time complexity of the Hungarian method is Ο(|C|×|C′|×(|C|+|C′|)), thus the time complexity of our corner matching is Ο(|C|×|C′|× (|C|+|C′|)).
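A hedged sketch of this step, using SciPy's `linear_sum_assignment` (a Hungarian-style solver) instead of the authors' implementation; assigning a large penalty cost to pairs outside CI and filtering them from the result approximates conditions (2) and (3) of Definition 2.

```python
import math
import numpy as np
from scipy.optimize import linear_sum_assignment

BIG = 1e9   # penalty for pairs that are not in the initial matched sequence CI

def final_matched_pairs(corners_a, corners_b):
    """Sketch: build a cost matrix from the initial matched pairs (Definition 1)
    and solve a minimum-cost assignment to approximate CM (Definition 2)."""
    cost = np.full((len(corners_a), len(corners_b)), BIG)
    for i, c in enumerate(corners_a):
        for j, cp in enumerate(corners_b):
            if is_matched_pair(c, cp):                     # predicate from the Definition 1 sketch
                (m, n), _ = c
                (mp, np_), _ = cp
                cost[i, j] = math.hypot(m - mp, n - np_)   # dis(c, c')
    rows, cols = linear_sum_assignment(cost)               # one-to-one, minimum total cost
    return [(i, j, cost[i, j]) for i, j in zip(rows, cols) if cost[i, j] < BIG]
```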

Figure 5 is a corner matching example. Figure 5a and b show two corner sequences C and C′ in a normalized coordinate system, where |C|=12 and |C′|=9, and the dotted circles around the corners reflect their mobility. Figure 5c overlays C and C′ in the same normalized coordinate system. If the circles around two corners overlap, i.e., the two corners satisfy Eq. 4, then they form a matched corner pair. Thus, we obtain an initial matched corner pair sequence CI={([c1, c1′], dis(c1, c1′)), ([c2, c1′], dis(c2, c1′)), ([c3, c2′], dis(c3, c2′)), ([c4, c3′], dis(c4, c3′)), ([c6, c5′], dis(c6, c5′)), ([c7, c5′], dis(c7, c5′)), ([c8, c6′], dis(c8, c6′)), ([c9, c7′], dis(c9, c7′)), ([c10, c7′], dis(c10, c7′)), ([c10, c8′], dis(c10, c8′)), ([c11, c8′], dis(c11, c8′)), ([c12, c9′], dis(c12, c9′))}. We can see that CI has redundant matched corner pairs, i.e., one-to-many matched corner pairs, for instance [c1, c1′] and [c2, c1′], or [c6, c5′] and [c7, c5′]. Then a bipartite graph G=(V, E, We) is constructed based on CI, as shown in Fig. 5d. In G, V=V1∪V2, V1={v1, v2, v3, v4, v6, v7, v8, v9, v10, v11, v12}, V2={v1′, v2′, v3′, v5′, v6′, v7′, v8′, v9′}, E={(v1, v1′), (v2, v1′), (v3, v2′), (v4, v3′), (v6, v5′), (v7, v5′), (v8, v6′), (v9, v7′), (v10, v7′), (v10, v8′), (v11, v8′), (v12, v9′)} and We={we1, we2, ..., we12}. Based on the Hungarian method, Fig. 5e shows the maximum matching result of G, i.e., the vertex pairs connected by edges. Thus, the final matched corner pair sequence CM = {[c1, c1′], [c3, c2′], [c4, c3′], [c6, c5′], [c8, c6′], [c9, c7′], [c11, c8′], [c12, c9′]}.

Fig. 5 Corner matching example. a Corner sequence C. b Corner sequence C′. c Overlay of C and C′. d Bipartite graph G based on C and C′. e Final matching result of G.

Corner-based brain image classification

For two brain medical images, when the corners in each matched corner pair are close together and the number of unmatched corners is small, the two images tend to be more similar. Based on this, the similarity between brain medical images I and I′ is calculated from their final matched corner pair sequence CM = {([c_Mi, c_Mj′], dis(c_Mi, c_Mj′)) | c_Mi ∈ C, c_Mj′ ∈ C′}, as shown in Eq. 5.

$$ SIM(I, I') = W(I, I') \times \sum_{[c_{Mi},\, c_{Mj}'] \in CM} MD(c_{Mi}, c_{Mj}') $$
(5)
$$ MD(c_{Mi}, c_{Mj}') = \frac{1}{dis(c_{Mi}, c_{Mj}')} $$
$$ W(I, I') = \begin{cases} 0, & \text{if } |C| = |C'| \\ \dfrac{1}{\log\left|(|C| - |CM|) - (|C'| - |CM|)\right|} = \dfrac{1}{\log\left||C| - |C'|\right|}, & \text{otherwise} \end{cases} $$

In Eq. 5, MD(c_Mi, c_Mj′) is the matched degree of [c_Mi, c_Mj′]: the smaller the distance of [c_Mi, c_Mj′], the larger its matched degree. W(I, I′) is a weight, where |C| and |C′| are the numbers of corners detected from I and I′, respectively, and |(|C|-|CM|)-(|C′|-|CM|)| is the difference in the number of unmatched corners between I and I′. The form of W(I, I′) indicates that a small difference in the number of unmatched corners between I and I′ has a large effect on the similarity between the images.
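A sketch of Eq. 5 on top of the matching sketch above; the small-distance guard and the treatment of the case ||C| − |C′|| = 1 (where the logarithm vanishes) are assumptions added here, since the paper does not specify them.

```python
import math

def similarity(corners_a, corners_b, matched):
    """Eq. 5: SIM(I, I') = W(I, I') * sum of matched degrees MD = 1/dis."""
    md_sum = sum(1.0 / max(d, 1e-9) for _, _, d in matched)   # guard against zero distance
    diff = abs(len(corners_a) - len(corners_b))               # ||C| - |C'||
    if diff == 0:
        return 0.0            # the |C| = |C'| branch of W in Eq. 5
    if diff == 1:
        return md_sum         # assumption: avoid dividing by log(1) = 0; use weight 1
    return md_sum / math.log(diff)
```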

Based on Eq. 5, an image I can be classified using the KNN model. Specifically, the K training images most similar to I are selected first. Then I is assigned the label that occurs most frequently among these K images.
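The classification step then reduces to a plain KNN loop over the training corner sequences; `final_matched_pairs` and `similarity` refer to the earlier sketches, and K would be the validated value (7 for CT and 3 for MRI in this study).

```python
from collections import Counter

def knn_predict(test_corners, training_set, k=7):
    """training_set: list of (corner_sequence, label) pairs.
    Returns the majority label among the K most similar training images."""
    scored = []
    for train_corners, label in training_set:
        matched = final_matched_pairs(test_corners, train_corners)
        scored.append((similarity(test_corners, train_corners, matched), label))
    scored.sort(key=lambda pair: pair[0], reverse=True)   # most similar first
    top_labels = [label for _, label in scored[:k]]
    return Counter(top_labels).most_common(1)[0][0]
```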

Results

Experimental environment

We evaluated the proposed corner-based brain medical image classification method (denoted as the Corner-based method) on the two brain medical image sets, Dct and Dmri. All experiments were run on a computer with an Intel Core i7 processor at 2.60 GHz and 16 GB of RAM. The development environment includes Eclipse 4.6 and Matlab 2015a.

Evaluation of the corner-based brain medical image classification

The proposed Corner-based method is evaluated on Dct and Dmri separately. First, Dct is randomly divided into two groups of images to generate the training and test sets. Concretely, 70% of the images are selected from each of the three types (normal, CI, and CH) to form the training set; the remaining 30% comprise the test set. In both the training and test sets, normal images are labeled “Normal” and the other two types are labeled “Abnormal”. In this study, the “Normal” category is regarded as the positive class and the “Abnormal” category as the negative class. Second, the images in the training set are used to construct a classifier. In this process, 5-fold cross-validation is employed to measure the performance of the classifier with different θ and K values, and the θ and K with the best performance are chosen to generate the final classifier. Finally, the final classifier is evaluated by predicting the labels of the independent test set. The above steps are repeated on Dmri; different from Dct, Dmri uses 88% of the images as the training set and the rest as the test set. Detailed information about the two image sets is summarized in Table 2.

Table 2 All brain medical image sets

The performance of the classifiers is evaluated with four standard indicators: accuracy, precision, recall and F1-score [26]. They are formalized as follows, where TP, FP, TN and FN stand for True Positive, False Positive, True Negative, and False Negative, respectively.

$$ Accuracy = \frac{TP + TN}{TP + TN + FP + FN} $$
$$ Precision = \frac{TP}{TP + FP} $$
$$ Recall = \frac{TP}{TP + FN} $$
$$ F1\text{-}score = \frac{2 \times Precision \times Recall}{Precision + Recall} $$
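Computed directly from the confusion-matrix counts, these indicators reduce to a few lines (a generic helper, not the authors' evaluation code):

```python
def evaluate(tp, tn, fp, fn):
    """Accuracy, precision, recall and F1-score from the four counts above."""
    accuracy = (tp + tn) / (tp + tn + fp + fn)
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    f1 = 2 * precision * recall / (precision + recall)
    return accuracy, precision, recall, f1
```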

Two parameters, θ and K, need to be trained. Specifically, the best θ and K values are selected based on the accuracy on the validation images to construct the classifiers. Figure 6 displays the accuracy on the validation images in Dct and Dmri using the proposed method. In Dct, θ iterates from 0.01 to 0.1 in steps of 0.01 and K increases from 3 to 21 in steps of 2. When θ=0.04 and K=7, the accuracy reaches its highest point of 0.826 (the red × in the figure). In Dmri, the range of K is the same as in Dct, while θ ranges from 0.03 to 0.12 in steps of 0.01; θ=0.05 and K=3 attain the highest accuracy of 0.828. These two groups of θ and K are used to construct our two classifiers for Dct and Dmri classification, respectively.
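The θ/K selection can be sketched as a simple grid search over the ranges reported here (the CT setting is shown); `cross_val_accuracy` is a hypothetical helper standing in for one 5-fold cross-validation run.

```python
def select_theta_k(train_images, train_labels, cross_val_accuracy):
    """Grid search over the corner response threshold theta and the KNN parameter K.
    cross_val_accuracy(images, labels, theta, k) is assumed to return the mean
    5-fold cross-validation accuracy for one (theta, K) setting."""
    best_theta, best_k, best_acc = None, None, -1.0
    for theta in [round(0.01 * i, 2) for i in range(1, 11)]:   # 0.01 ... 0.10 (Dct range)
        for k in range(3, 22, 2):                              # 3, 5, ..., 21
            acc = cross_val_accuracy(train_images, train_labels, theta, k)
            if acc > best_acc:
                best_theta, best_k, best_acc = theta, k, acc
    return best_theta, best_k, best_acc   # reported best for Dct: theta=0.04, K=7, accuracy 0.826
```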

Fig. 6 Accuracy of the validation images using the proposed method. K is the parameter of the KNN model and θ is the corner response threshold for corner detection. a Dct. b Dmri.

To validate the effectiveness of the proposed method, it is compared with three baseline brain medical image classification methods: the Harris-based method, the Texture-based method, and the Symmetry-based method.

The procedure of the Harris-based method is similar to that of the proposed method: it also includes corner detection, corner matching, and θ and K selection for the classifiers. However, its corners are detected using Harris [11] directly. Since Harris only preserves the coordinates of corners, the importance-values of the detected corners are set to 0 for corner matching. Figure 7 shows the accuracy of the validation images in Dct and Dmri using the Harris-based method. The range of K is kept the same. In Dct, θ varies from 0.001 to 0.01 in steps of 0.001; the accuracy reaches the highest point of 0.758 (the red × in the figure) when θ=0.004 and K=5. In Dmri, θ varies from 0.0001 to 0.001 in steps of 0.0001; the highest accuracy is 0.721 when θ is set to 0.0006 and K is set to 5. These two groups of θ and K are used to construct two Harris-based classifiers for Dct and Dmri classification, respectively.

Fig. 7 Accuracy of the validation images using the Harris-based method. K is the parameter of the KNN model and θ is the corner response threshold for corner detection. a Dct. b Dmri.

The other two comparison methods are based on the methods in [3, 4]. In this study, all the images are 2-D. Since the VBM features in [4] are 3-D, only the texture features are used in the comparison experiment; this method is referred to as the Texture-based method. According to [4], the Texture-based method has no threshold. In [3], brain medical images are classified into “Normal” or “Abnormal” categories in the first stage by using the symmetry of brain medical images. The method applied in this first stage of [3] is used as another comparison method, denoted as the Symmetry-based method. Based on [3], the threshold of the Symmetry-based method is set to 0.88 for brain CT image classification.

We compare the performance of the proposed method with that of the three comparison methods on the test set of Dct. The results are summarized in Table 3. The proposed method is evaluated with θ=0.04 and K=7, and the Harris-based method with θ=0.004 and K=5. As can be seen from the table, the proposed method achieves the highest accuracy and F1-score among the four methods, indicating the best overall performance and a good balance of precision and recall. The Harris-based method obtains the best recall but the lowest precision. The Symmetry-based method and the Texture-based method have the leading precision, yet their accuracy, recall and F1-score are all much lower than those of the proposed method.

Table 3 Performance of the four comparison methods on Dct

The performance of the four comparison methods on the test set of Dmri is compared in Table 4. Note that θ=0.05 and K=3 in the proposed method and θ=0.0006 and K=5 in the Harris-based method. Since the Symmetry-based method was not evaluated on Dmri in [3], we iterated its threshold from 0.01 to 1.0 in steps of 0.01; its accuracy reaches the maximum when the threshold is between 0.5 and 1.0. The table shows that the accuracy and F1-score of the proposed method again exceed those of the three comparison methods, and its recall is also the highest. Although the precision of the Symmetry-based method is larger than that of the proposed method, its other measurements fall far behind those of the proposed method. These results indicate that the proposed method achieves the best overall performance on Dmri.

Table 4 Performance of the four comparison methods on Dmri

Table 5 displays the runtime of the four comparison methods on the test sets of Dct and Dmri. The number before the slash denotes the total runtime on the Dct or Dmri test set, and the number after it denotes the average runtime per image. In both Dct and Dmri, the trends in runtime among the four methods are the same: the proposed method is less efficient than the Symmetry-based method yet more efficient than both the Harris-based method and the Texture-based method. Although its efficiency is not as good as that of the Symmetry-based method, its average runtime for classifying each brain CT and MRI image is 1.73 s and 0.61 s, respectively, which is still acceptable.

Table 5 Runtime(s) of the four comparison methods on Dct and Dmri

Discussion

In this study, we develop a robust corner-based brain medical image classifier. First, we propose a corner detection method combining diagnostic information from neurologists; it consists of MTI extraction and corner detection on MTIs. Second, we present the definition of matched corner pairs and put forward a bipartite graph model to generate the final matched corner pair sequence. Finally, we propose a similarity calculation method based on the matched and unmatched corner pairs for brain medical image diagnosis. We demonstrate that the proposed corner-based classifier efficiently achieves the highest accuracy and F1-score for brain medical image classification and is robust to the intensity ranges of brain medical images caused by different brain imaging modalities.

Based on the prior diagnostic knowledge that different regions in brain medical images carry diverse diagnostic information, we extract MTIs. Specifically, we utilize multiple pairs of thresholds to separately extract multiple layers of textures from different regions and assign these textures different significance to form MTIs. Then, corners are detected on the MTIs and are assigned importance-values based on the corresponding significance in the MTIs. Thus, the detected corners not only describe the morphology of brain medical images but also convey their specific significance. Moreover, since brain medical images exhibit uncertainty and structure, corners inherit these characteristics. The uncertainty gives corners mobility, while the structure limits their movable ranges: the movable ranges of significant corners are relatively small, whereas those of insignificant corners are comparatively large. Based on the movable ranges and importance-values of corners, we define matched corner pairs to produce an initial matched corner pair sequence. We further use a bipartite graph to reduce the redundancy of the initial matched corner pair sequence and generate the final matched corner pair sequence for classification. The proposed method takes advantage of the prior diagnostic information and the characteristics of brain medical images, which allows it to achieve strong performance and to be robust to different intensity ranges of brain medical images.

Different from the Texture-based method [4] and the Symmetry-based method [3], both the proposed method and the Harris-based method utilize corners as features to classify brain medical images. Moreover, compared with the Harris-based method, the proposed method assigns corners importance-values by employing the prior diagnostic information. On Dct, the proposed method outperforms the three comparison methods in terms of accuracy and F1-score, while the Harris-based method achieves better accuracy and F1-score than the Texture-based method and attains competitive accuracy and F1-score compared with the Symmetry-based method. On Dmri, both the proposed method and the Harris-based method gain higher accuracy and F1-score than the Texture-based method and the Symmetry-based method, and the proposed method still outperforms the Harris-based method. These results indicate that (1) corners are discriminative features for both brain CT and brain MRI image classification; and (2) employing the diagnostic information and the characteristics of brain medical images enhances the performance of the proposed method. This is because corners are key points of the morphology of images, and the morphology of organs or lesions in brain medical images is the main evidence for doctors’ diagnoses. Although textures can also reflect the morphology of images, the Texture-based method regards the textures of different regions equally, whereas the proposed method distinguishes the significance of the textures of different regions.

Moreover, the accuracy and F1-score of the proposed method on Dmri decrease by 4.4% and 8.3% compared with those on Dct. For the Harris-based method, accuracy and F1-score on Dmri decrease by 4.9% and 10.1%; for the Symmetry-based method, by 11.1% and 21.4%. For the Texture-based method, the F1-score on Dmri decreases by 66.7%; although its accuracy stays the same, the accuracy values on both Dct and Dmri are 50%, which is the chance level for binary classification. These results indicate that (1) the corner-based methods (i.e., the proposed method and the Harris-based method) are more robust to the different intensity ranges of brain medical images caused by the different imaging modalities in this study (256 grayscale values for Dct and 65536 for Dmri); and (2) employing the diagnostic information in the proposed method enhances its robustness compared with the Harris-based method. The reason is that, during MTI extraction based on the diagnostic information, different layers of textures are assigned different importance-values, and these importance-values are calculated from the histogram of an image’s gradient values rather than its intensity values. Since corners are detected on MTIs, this alleviates the dependence of the proposed corner detection method on the intensity values of images. Thus, the proposed method can tolerate the variability of the intensity ranges caused by the different brain imaging modalities in this study.

Furthermore, the proposed method is more efficient than the Harris-based method and the Texture-based method yet less efficient than the Symmetry-based method. However, the proposed method does not use any optimization, which will be handled in our future work.

Conclusion

In this study, we first propose a corner detection method combining the diagnostic information of brain medical images. It consists of two steps: MTI extraction and corner detection based on MTIs. These detected corners are assigned different importance-values to distinguish their significance. Second, we put forward a corner matching method. It produces an initial matched corner pair sequence based on the uncertainty and structure of brain medical images and further generates a final matched corner pair sequence via a bipartite graph model. Finally, a similarity function is proposed based on the final matched corner pair sequence and is used to make diagnoses for brain medical images. Experimental results on brain CT and MRI medical image sets show that the proposed corner-based brain medical image diagnosis method outperforms the three comparison methods, and is more robust to the intensity ranges of brain medical images caused by different brain imaging modalities.

Recent studies have shown that multimodal medical images can provide complementary information to enhance diagnostic accuracy. In the future, we will focus on corner detection and matching based on multimodal brain medical images. Moreover, the proposed method does not take parallel computing into account; we will boost its efficiency in a parallel way.

References

1. Suk HI, Lee SW, Shen D, et al. Deep ensemble learning of sparse regression models for brain disease diagnosis. Med Image Anal. 2017;37:101–13.

2. Davatzikos C, Fan Y, Wu X, et al. Detection of prodromal Alzheimer's disease via pattern classification of magnetic resonance imaging. Neurobiol Aging. 2008;29(4):514–23.

3. Jing Sh R, Hai WP, Peng YL, et al. Symmetry theory based classification algorithm in brain computed tomography image database. J Med Imaging Health Inform. 2016;6(1):22–33.

4. Ding Y, et al. Classification of Alzheimer's disease based on the combination of morphometric feature and texture feature. In: Proc IEEE Int Conf on Bioinformatics and Biomedicine (BIBM); 2015.

5. Rosten E, Porter R, Drummond T. Faster and better: a machine learning approach to corner detection. IEEE Trans PAMI. 2010;32(1):105–19.

6. Shi J, Tomasi C. Good features to track. In: Proc IEEE Computer Society Conf on Computer Vision and Pattern Recognition (CVPR); 1994. p. 593–600.

7. Bengoechea JJ, Cerrolaza JJ, Villanueva A, et al. Evaluation of accurate eye corner detection methods for gaze estimation. J Eye Mov Res. 2014;7(3).

8. Lederman D. Texture-based continuous probabilistic framework for medical image representation and classification. In: Proc 2012 Sixth UKSim/AMSS European Symposium (EMS); 2012. p. 148–52.

9. Yilmaz A, Javed O, Shah M. Object tracking: a survey. ACM Computing Surveys (CSUR). 2006;38(4):13.

10. Awrangjeb M, Lu G. An improved curvature scale-space corner detector and a robust corner matching approach for transformed image identification. IEEE Trans Image Process. 2008;17(12):2425–41.

11. Harris CG, Stephens MJ. A combined corner and edge detector. In: Proc of the Fourth Alvey Vision Conference; 1988. p. 147–51.

12. Pengyuan L, Haiwei P, et al. A novel model for medical image similarity retrieval. In: Web-Age Information Management; 2013. p. 595–606.

13. Rattarangsi A, Chin RT. Scale-based detection of corners of planar curves. IEEE Trans PAMI. 1992;14(4):430–49.

14. Mokhtarian F, Bober M. Robust image corner detection through curvature scale space. IEEE Trans PAMI. 1998;20(12):1376–81.

15. He XC, Yung NHC. Curvature scale space corner detector with adaptive threshold and dynamic region of support. In: Proc IEEE Int Conf on Pattern Recognition (ICPR); 2004;2:23–6.

16. Schmid C, Mohr R, Bauckhage C. Evaluation of interest point detectors. Int J Comput Vis. 2000;37(2):151–72.

17. Yi J, Li YR, Yang X, et al. Robust point matching by l1 regularization. In: Proc IEEE Int Conf on Bioinformatics and Biomedicine (BIBM); 2015. p. 369–74.

18. Besl PJ, McKay ND. A method for registration of 3-D shapes. IEEE Trans PAMI. 1992;14(2):239–56.

19. Guo J, Mei X, Tang K. Automatic landmark annotation and dense correspondence registration for 3D human facial images. BMC Bioinformatics. 2013;14(1):1.

20. Myronenko A, Song X. Point set registration: coherent point drift. IEEE Trans PAMI. 2010;32(12):2262–75.

21. Canny J. A computational approach to edge detection. IEEE Trans PAMI. 1986;8(6):679–98.

22. Liao CC, Xiao F, Wong JM, et al. Automatic recognition of midline shift on brain CT images. Comput Biol Med. 2010;40(3):331–9.

23. Pan H, Li P, Li Q, et al. Brain CT image similarity retrieval method based on uncertain location graph. IEEE J Biomed Health Inform. 2014;18(2):574–84.

24. Lovász L, Plummer MD. Matching theory. Ann Discrete Math. 1986;29.

25. Kuhn HW. The Hungarian method for the assignment problem. Naval Res Logistics Q. 1955;2(2):83–97.

26. Jones DT, Cozzetto D. DISOPRED3: precise disordered region predictions with annotated protein-binding activity. Bioinformatics. 2015;31(6):857–63.


Acknowledgments

The authors would like to thank the ADNI database (http://adni.loni.usc.edu/) for the collection of brain MRI images.

The Alzheimer’s Disease Neuroimaging Initiative*

*Data used in preparation of this article were obtained from the Alzheimer’s Disease Neuroimaging Initiative (ADNI) database (adni.loni.usc.edu). As such, the investigators within the ADNI contributed to the design and implementation of ADNI and/or provided data but did not participate in analysis or writing of this report. A complete listing of ADNI investigators can be found at: http://adni.loni.usc.edu/wp-content/uploads/how_to_apply/ADNI_Acknowledgement_List.pdf

Funding

This work is partly supported by the National Natural Science Foundation of China under Grants No. 61672181 and No. 51679058, the Natural Science Foundation of Heilongjiang Province under Grant No. F2016005, and the Council Scholarship of China. Data collection and sharing for this study was funded by the ADNI (National Institutes of Health Grant U01 AG024904) and DOD ADNI (Department of Defense award number W81XWH-12-2-0012). ADNI is funded by the National Institute on Aging, the National Institute of Biomedical Imaging and Bioengineering, and through generous contributions from the following: AbbVie; Alzheimer’s Association; Alzheimer’s Drug Discovery Foundation; Araclon Biotech; BioClinica, Inc.; Biogen; Bristol-Myers Squibb Company; CereSpir, Inc.; Cogstate; Eisai Inc.; Elan Pharmaceuticals, Inc.; Eli Lilly and Company; EuroImmun; F. Hoffmann-La Roche Ltd and its affiliated company Genentech, Inc.; Fujirebio; GE Healthcare; IXICO Ltd.; Janssen Alzheimer Immunotherapy Research & Development, LLC.; Johnson & Johnson Pharmaceutical Research & Development LLC.; Lumosity; Lundbeck; Merck & Co., Inc.; Meso Scale Diagnostics, LLC.; NeuroRx Research; Neurotrack Technologies; Novartis Pharmaceuticals Corporation; Pfizer Inc.; Piramal Imaging; Servier; Takeda Pharmaceutical Company; and Transition Therapeutics. The Canadian Institutes of Health Research is providing funds to support ADNI clinical sites in Canada. Private sector contributions are facilitated by the Foundation for the National Institutes of Health (www.fnih.org). The grantee organization is the Northern California Institute for Research and Education, and the study is coordinated by the Alzheimer’s Therapeutic Research Institute at the University of Southern California. ADNI data are disseminated by the Laboratory for Neuro Imaging at the University of Southern California.

Availability of data and materials

The Dmri dataset supporting the conclusions of this article is available in the ADNI repository [http://adni.loni.usc.edu/].

The Dct dataset supporting the conclusions of this article is available from the corresponding author upon reasonable request.

Author information


Contributions

LG, HP, QL, XX and ZZ conceived the study. LG, HP and JH performed the collection and label annotation of brain CT images. LG and XZ carried out the experiments. LG, HP and JH analyzed the results. All authors read and approved the final manuscript.

Corresponding author

Correspondence to Haiwei Pan.

Ethics declarations

Ethics approval and consent to participate

For Dmri, ethics approval is not required as the human data were publicly available by ADNI website, and all the data are not identifiable. For Dct, informed consent was obtained in accordance with institutional policies in use in each country.

Consent for publication

Not applicable

Competing interests

The authors declare that they have no competing interests.

Publisher’s Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Rights and permissions

Open Access This article is distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made. The Creative Commons Public Domain Dedication waiver (http://creativecommons.org/publicdomain/zero/1.0/) applies to the data made available in this article, unless otherwise stated.


About this article


Cite this article

Gao, L., Pan, H., Li, Q. et al. Brain medical image diagnosis based on corners with importance-values. BMC Bioinformatics 18, 505 (2017). https://doi.org/10.1186/s12859-017-1903-6


Keywords