Learning to detect boundary information for brain image segmentation

Abstract

MRI brain images typically have low contrast, which makes it difficult to identify to which brain region the information at the boundary belongs. This makes the extraction of features at the boundary more challenging, since those features can be misleading as they might mix properties of different brain regions. Hence, to alleviate such a problem, image boundary detection plays a vital role in medical image segmentation, and brain segmentation in particular, as unclear boundaries can worsen brain segmentation results. Yet, given the low quality of brain images, boundary detection in the context of brain image segmentation remains challenging. Despite the research invested in improving boundary detection and brain segmentation, these two problems have been addressed independently, i.e., little attention has been paid to applying boundary detection to brain segmentation tasks. Therefore, in this paper, we propose a boundary detection-based model for brain image segmentation. To this end, we first design a boundary segmentation network for detecting and segmenting brain tissues. Then, we design a boundary information module (BIM) to distinguish the boundaries of the three different brain tissues. After that, we add a boundary attention gate (BAG) to the encoder output layers of our transformer to capture more informative local details. We evaluate our proposed model on two datasets of brain tissue images, covering infant and adult brains. Extensive evaluation experiments show that our model achieves better performance (an improvement in Dice Coefficient (DC) of up to \(5.3\%\) over the state-of-the-art models) in detecting and segmenting brain tissue images.

Introduction

MRI brain images typically have low contrast, which makes it difficult to identify to which brain region the information at the boundary belongs. To alleviate such a problem, image boundary detection plays a vital role in medical image segmentation [1, 2], as unclear boundaries can worsen brain segmentation results. Yet, given the low quality of brain images and their blurry boundaries, boundary detection in the context of brain image segmentation remains a research challenge. The results of existing segmentation models can be degraded by blurry image boundaries, owing to poor differentiation of boundary pixels [3]. In brain segmentation, a boundary refers to the area that divides brain regions. For example, the dividing area between the white matter (WM) and grey matter (GM) of the brain is considered a boundary. Boundaries are crucial in brain segmentation: if a boundary is unclear, the information separating WM from GM is also unclear.

Despite the research invested in improving boundary detection and brain segmentation, these two problems have been addressed independently. Moreover, extracting features at the image boundary remains challenging, since those features can be misleading as they might mix properties of different brain regions [4]. Many models have been proposed to detect or segment human brain tissues [5,6,7]. Despite the highly reported performance of these models, they suffer from a serious problem concerning the extraction of local details in ambiguous boundaries [8,9,10]. Much research has addressed this problem [8, 11, 12]. Traditional atlas-based methods are neither accurate nor robust [13]. Deep learning models have also been introduced to address this problem, yet ambiguous boundaries have not been sufficiently resolved. What complicates the detection of image boundaries for brain tissue segmentation is the low contrast and unclear boundaries between WM and GM. Figure 1 shows an example of ambiguous boundaries between WM and GM.

Fig. 1 Examples showing the ambiguous boundaries between WM and GM

Therefore, in this paper, we propose a boundary detection-based model for brain image segmentation. In particular, we focus on the boundary information between WM and GM, especially for low-contrast images. First, we design a boundary segmentation network for detecting and segmenting brain tissues. Second, we design a boundary information module (BIM) to help distinguish between the boundaries of the three different brain tissues. Finally, we add a boundary attention gate (BAG) to each output layer of the encoder of our transformer to capture more informative local details. We evaluate our proposed model on two datasets of brain tissue images: infant and adult brains. Our model achieves higher results (i.e., an improvement in Dice Coefficient (DC) of up to \(5.3\%\)) compared to the state-of-the-art models. In addition, our model is less complex and runs faster than the state-of-the-art models. In summary, this paper makes the following contributions:

  • We design a network model that performs both boundary detection and brain tissue segmentation to improve segmentation accuracy.

  • We design a boundary information module (BIM) to distinguish the boundaries of different brain tissues.

  • We design a boundary attention gate (BAG) to capture more local details about brain tissues.

The rest of this paper is organized as follows. Section 2 presents the prior models related to boundary detection and brain segmentation. Section 3 presents the design of our proposed model. Section 4 presents our experimental design and evaluation. Section 5 presents our evaluation results and discusses the strengths and limitations of our model. Finally, Section 6 concludes the paper and discusses future work.

Related work

This section reviews the state-of-the-art techniques for boundary detection and brain segmentation. In Table 1, we provide a summary of the recent works in medical imaging.

Boundary detection

Boundary detection has recently been an active research problem, for which many techniques have been proposed to extract boundary information and thus mitigate the problem of ambiguous boundaries [14,15,16]. However, the problem of unclear boundaries between WM and GM remains challenging due to the low contrast of MRI images. This problem has also been studied extensively [17,18,19]. The main focus of these studies was on mixed features between WM and GM, where the boundary information of the two regions is unclear and hard to identify. Specifically, the research conducted in [12, 20,21,22] focused on segmenting skin lesions from dermoscopy images, in which the contrast between the lesion and normal skin is fairly low. The features used in [12, 21, 22] to detect boundaries achieved a significant improvement over the state-of-the-art techniques. Blackmon et al. [8] proposed a model that exploits the global context to help segment lesions from normal skin, whereas Andrews et al. [9] proposed a novel unsupervised pre-training framework using boundary-aware preserving learning to improve boundary detection results.

Despite the effort invested in boundary detection, little attention was paid to applying it to brain tissues segmentation, which is usually affected by unclear boundary areas.

Table 1 Summary of the state-of-the-art techniques in medical imaging

Brain segmentation

Many models (e.g., [38, 39]) have been proposed for brain tissue segmentation. These models divide the brain image into multiple regions. For example, [40, 41] divided the brain into eight regions, whereas [42, 43] divided the brain into three regions. Dolz et al. [44] proposed a 3D fully convolutional network (CNN) for the segmentation of subcortical brain structures. Later on, Bao and Chung [7] improved the model proposed by Dolz et al. using a multi-scale structured CNN with label consistency. Jin et al. [45] proposed CNN models with residual connections to segment white matter hyperintensities from T1 and FLAIR images; their models outperformed previous models, with an overall Dice coefficient of 0.75 and an H95 average surface distance of 27.26. Fechter et al. [6] used a 3D fully convolutional network combined with a random walker to segment the esophagus in CT images, obtaining Dice coefficients ranging between 0.82 and 0.91 across five datasets. de Brebisson and Montana [46] proposed deep neural networks for the anatomical segmentation of different brain tissue classes. Ma et al. [47] proposed the visual detection of cells in brain tissue slices for a patch clamp system.

Khaled et al. proposed two brain tissue segmentation models, one using FCN + MIL + G + K [17] and another using a multi-stage GAN model [38]. They evaluated their models on infant and adult brain images and obtained good segmentation results, with Dice coefficients of up to 94% for segmenting GM and WM.

Despite the effort invested in brain tissue segmentation, segmentation results still suffer from mixed tissue information caused by unclear image boundaries, which confuses models when precisely identifying which features belong to which region of the brain.

Highlights on related work

Unlike previous work, our objective in this paper is to solve the problem of unclear boundaries in brain segmentation. The state-of-the-art techniques performed either boundary detection or image segmentation independently, thus not considering the fusion of detection and segmentation in one model. Hence, in this paper, we design a boundary segmentation network for detecting and segmenting brain tissues. Then, we design a boundary information module (BIM) to distinguish the boundaries of the three different brain tissues. After that, we add a boundary attention gate (BAG) to the encoder output layers of our transformer to capture more informative local details.

Method

We propose a model that takes advantage of the connection between boundary detection and brain segmentation. To this end, we design a boundary segmentation network for the detection and segmentation of brain tissues. Then, we design the boundary information module (BIM) to distinguish the boundaries of the three different brain tissues. Figure 2 gives an overview of the architecture of our proposed model. We use the ResNet50 network [48] to extract feature maps from input images. Inspired by the success of region proposal networks (RPN), we use an RPN in our model to generate the bounding-box (bbox) detector and the mask detector. The model then has two branches: one for detection, which is followed by non-maximum suppression (NMS), and another for segmentation, which is followed by the transformer whose architecture is shown in detail in Fig. 3. Table 2 lists all the symbols we refer to in this paper. A code sketch of this overall topology is given below.
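To make the topology concrete, the following is a minimal tf.keras sketch of the pipeline, under stated assumptions: the RPN/NMS detection pipeline is reduced to per-location objectness and box-offset heads, the boundary-aware transformer decoder is replaced by a plain upsampling head, and the input size and class count are hypothetical.

```python
import tensorflow as tf
from tensorflow.keras import layers, Model

def build_model(input_shape=(256, 256, 1), num_classes=4):
    # ResNet50 backbone (as in the paper) used as a feature extractor;
    # weights=None since MRI slices differ from ImageNet images.
    inputs = layers.Input(shape=input_shape)
    backbone = tf.keras.applications.ResNet50(
        include_top=False, weights=None, input_tensor=inputs)
    features = backbone.output  # (8, 8, 2048) for a 256x256 input

    # Detection branch: a simplified stand-in for the RPN + NMS pipeline.
    det = layers.Conv2D(256, 3, padding="same", activation="relu")(features)
    objectness = layers.Conv2D(1, 1, activation="sigmoid", name="objectness")(det)
    boxes = layers.Conv2D(4, 1, name="box_offsets")(det)

    # Segmentation branch: upsampling head standing in for the
    # boundary-aware transformer described later (Fig. 3).
    seg = features
    for filters in (256, 128, 64, 32, 16):
        seg = layers.Conv2DTranspose(filters, 3, strides=2,
                                     padding="same", activation="relu")(seg)
    masks = layers.Conv2D(num_classes, 1, activation="softmax", name="masks")(seg)

    return Model(inputs, [objectness, boxes, masks], name="boundary_seg_model")

model = build_model()
model.summary()
```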

Fig. 2 An overview of the proposed model

Fig. 3 The architecture of our model’s transformer

Table 2 List of symbols referred to in this paper

Boundary information module (BIM)

Feature maps with C channels are obtained from the segmentation branch and the detection branch. The feature maps are divided into G groups, where each group maintains a vector at every one of the s spatial positions.

$$\begin{aligned} X = \{x_{1}^{cls}, \ldots , x_{s}^{cls}\}, \; x_{i}^{cls} \in \mathbb {R}^{C/G} \end{aligned}$$
(1)

A global statistical feature g is computed to approximate these vectors, using the spatial averaging function \(F_{\text {gp}}\), as follows.

$$\begin{aligned} g = F_{\text {gp}} = \frac{1}{s} \sum \limits _{i=1}^{s} x_{i}^{mask}, \end{aligned}$$
(2)

To measure the similarity between the global feature g and each vector, we compute the correlation coefficient \(c_{\text {i}}\) as follows.

$$\begin{aligned} c_{i} = ||g|| \; ||x_{i}^{cls}|| \; \cos (\theta _{i}) \end{aligned}$$
(3)

Normalization is then used to avoid the biased magnitude of \(c_{\text {i}}\), as follows.

$$\begin{aligned} \bar{c}_{i} = \frac{c_{i} - \mu _{c}}{\sigma _{c} + \epsilon }, \end{aligned}$$
(4)

where \(\epsilon = 10^{-6}\).

Two learnable parameters, \(\alpha\) and \(\beta\), are used to scale and shift the normalized coefficient, supporting the identification and localization of features, as follows.

$$\begin{aligned} a_{i} = \alpha \bar{c}_{i} + \beta , \end{aligned}$$
(5)
$$\begin{aligned} X_{i}^{mask} = x_{i}^{mask} \cdot \sigma (a_{i}), \end{aligned}$$
(6)

where \(x_{\text {i}}^{mask}\) denotes the segmentation feature vector and \(\sigma\) denotes the sigmoid function.

The output of BIM is represented as follows.

$$\begin{aligned} X = \{X_{1}^{mask}, \ldots , X_{s}^{mask}\}, \; X_{i}^{mask} \in \mathbb {R}^{C} \end{aligned}$$
(7)
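The following is a minimal sketch of BIM as a Keras layer, following Eqs. (1)-(7) under stated assumptions: it operates on a single input tensor (whereas the paper correlates detection and segmentation features), the group count is a hypothetical hyper-parameter, and the normalization of Eq. (4) is taken over spatial positions.

```python
import tensorflow as tf

class BoundaryInformationModule(tf.keras.layers.Layer):
    """Sketch of BIM following Eqs. (1)-(7)."""

    def __init__(self, groups=8, eps=1e-6, **kwargs):
        super().__init__(**kwargs)
        self.groups = groups
        self.eps = eps  # epsilon of Eq. (4)

    def build(self, input_shape):
        # alpha and beta of Eq. (5), one pair per group.
        self.alpha = self.add_weight(name="alpha", shape=(1, 1, self.groups, 1),
                                     initializer="ones")
        self.beta = self.add_weight(name="beta", shape=(1, 1, self.groups, 1),
                                    initializer="zeros")

    def call(self, x):
        # Eq. (1): split (B, H, W, C) into G groups of C/G-dim vectors,
        # one vector per spatial position (s = H*W positions).
        b, h, w, c = tf.unstack(tf.shape(x))
        xg = tf.reshape(x, [b, h * w, self.groups, c // self.groups])

        # Eq. (2): global statistical feature via spatial averaging.
        g = tf.reduce_mean(xg, axis=1, keepdims=True)

        # Eq. (3): correlation c_i = <g, x_i> = ||g|| ||x_i|| cos(theta_i).
        ci = tf.reduce_sum(g * xg, axis=-1, keepdims=True)

        # Eq. (4): normalize c_i over the spatial positions.
        mu, var = tf.nn.moments(ci, axes=[1], keepdims=True)
        ci_bar = (ci - mu) / (tf.sqrt(var) + self.eps)

        # Eqs. (5)-(6): gate each vector with sigmoid(alpha * c_bar + beta).
        out = xg * tf.sigmoid(self.alpha * ci_bar + self.beta)

        # Eq. (7): restore the (B, H, W, C) layout.
        return tf.reshape(out, tf.shape(x))
```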

Loss functions

Our loss function has two parts, corresponding to boundary detection and segmentation. A Dice loss \((\Phi _{DICE})\) is used to reduce the difference between the ground truth and the segmentation map (loss \(L_{\text {seg}}\)). A cross-entropy loss \((\Phi _{CE})\) is used to minimize the difference between the ground truth and the predicted key patch map (loss \(L_{\text {Map}}\)).

$$\begin{aligned} L_{\text {seg}} = \Phi _{DICE}(S_{\text {GT}},S_{\text {pred}}), \end{aligned}$$
(8)
$$\begin{aligned} L_{\text {Map}}^{i} = \Phi _{CE}(M_{\text {GT}},M_{\text {pred}}), \end{aligned}$$
(9)

where \(S_{\text {GT}}\) is the ground truth and \(S_{\text {pred}}\) is the predicted segmentation map.

$$\begin{aligned} L_{\text {whole}} = \sum \limits _{i=1}^{n+1} L_{\text {Map}}^{i} + L_{\text {seg}}, \end{aligned}$$
(10)

where \(M_{\text {GT}}\) is the ground truth key patch map and \(M_{\text {pred}}\) is the predicted key patch map.
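As a sketch of how Eqs. (8)-(10) combine, assuming one-hot ground-truth masks, softmax predictions, and one binary key patch map per encoder layer (n+1 maps in total):

```python
import tensorflow as tf

def dice_loss(y_true, y_pred, eps=1e-6):
    # Phi_DICE of Eq. (8): 1 - DC between one-hot masks and soft predictions.
    inter = tf.reduce_sum(y_true * y_pred, axis=(1, 2))
    denom = tf.reduce_sum(y_true, axis=(1, 2)) + tf.reduce_sum(y_pred, axis=(1, 2))
    return 1.0 - tf.reduce_mean((2.0 * inter + eps) / (denom + eps))

def whole_loss(s_gt, s_pred, m_gts, m_preds):
    # Eq. (10): L_whole = sum_i Phi_CE(M_GT, M_pred) + Phi_DICE(S_GT, S_pred),
    # where m_gts / m_preds hold the n+1 key patch maps.
    bce = tf.keras.losses.BinaryCrossentropy()
    l_seg = dice_loss(s_gt, s_pred)
    l_map = tf.add_n([bce(gt, pred) for gt, pred in zip(m_gts, m_preds)])
    return l_map + l_seg
```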

Boundary aware transformer

To improve boundary detection and the extraction of boundary information in brain segmentation with ambiguous boundaries, we use a transformer in which a BAG is added at the end of each encoder layer. As shown in Fig. 2, the BAG consists of a key patch map generator, which takes the transformed feature as input and generates a binary patch map as output. The boundary-aware transformed feature is represented as follows.

$$\begin{aligned} V^{i-1} = MSA(Z^{i-1}) + MLP(MSA(Z^{i-1})), \end{aligned}$$
(11)
$$\begin{aligned} Z^{i} = V^{i-1} + (V^{i-1} * \hat{M}^{i-1}), \end{aligned}$$
(12)

where \(+\) and \(*\) denote element-wise addition and channel-wise multiplication, respectively, and \(\hat{M}^{i-1}\) is the key patch map produced by the BAG.
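A minimal sketch of one such encoder layer is given below, under stated assumptions: the key patch map generator is taken to be a single sigmoid head over tokens (the paper does not fix its form here), layer normalization is omitted to mirror Eq. (11) literally, and the head/width sizes are hypothetical.

```python
import tensorflow as tf
from tensorflow.keras import layers

class BoundaryAwareEncoderLayer(tf.keras.layers.Layer):
    """Sketch of a transformer encoder layer with a BAG, per Eqs. (11)-(12)."""

    def __init__(self, dim=256, heads=8, mlp_dim=512, **kwargs):
        super().__init__(**kwargs)
        self.msa = layers.MultiHeadAttention(num_heads=heads, key_dim=dim // heads)
        self.mlp = tf.keras.Sequential([
            layers.Dense(mlp_dim, activation="gelu"),
            layers.Dense(dim),
        ])
        # Assumed key patch map generator: one boundary score per token.
        self.map_head = layers.Dense(1, activation="sigmoid")

    def call(self, z):
        # Eq. (11): V = MSA(Z) + MLP(MSA(Z)).
        attn = self.msa(z, z)
        v = attn + self.mlp(attn)

        # BAG: a (soft) binary key patch map from the transformed feature.
        m_hat = self.map_head(v)  # (batch, tokens, 1)

        # Eq. (12): Z' = V + V * M_hat, emphasizing boundary tokens.
        return v + v * m_hat, m_hat  # m_hat is also supervised via Eq. (9)
```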

Experiments

This section presents our experimental design and evaluation. First, we describe the datasets used in our experiments. Then, we describe the Dice Coefficient (DC) metric used to evaluate segmentation. Finally, we describe our experimental setup.

Overview of the datasets

In our experiments, we use two datasets to evaluate our model: the \(MICCAI\ iSEG\) infant dataset and the MRBrainS adult dataset. The MICCAI iSEG-2017 dataset contains training and testing data of 6-month-old infants, whereas the MRBrainS-2013 dataset contains training and testing data of adults. The two datasets were obtained from different organizations, and there are significant differences between the images in the infant dataset and the adult dataset in terms of image characteristics, such as the number of images and the available modalities. In addition, both datasets were used to evaluate the previous models in this context.

The MICCAI iSEG-2017 dataset

The aim of the evaluation framework (Footnote 1) introduced by the MICCAI iSEG organizers is to compare the segmentation of WM, GM, and CSF on T1 and T2 images. The training dataset contains 10 subjects, named T1-1 through T1-10 and T2-1 through T2-10, together with the ground truth. The testing dataset contains 13 subjects, named T-11 through T-23. Figure 4 shows an example of the \(MICCAI\ iSEG\) dataset. Table 3 shows the parameters used to create T1 and T2, which are generated using two different relaxation times: the longitudinal relaxation time (T1) and the transverse relaxation time (T2).

The MRBrainS-2013 dataset

The MRBrainS dataset (Footnote 2) contains 20 subjects with T1, T2, and FLAIR scans: five subjects as a training set and 15 subjects as a testing set. In this dataset, adult brain images have multiple regions to segment, including (a) white matter lesions, (b) basal ganglia, (c) lateral ventricles, (d) cortical gray matter, (e) peripheral cerebrospinal fluid, (f) white matter, (g) cerebellum, and (h) brain stem.

Fig. 4 An example of the \(MICCAI\ iSEG\) dataset (T1, T2, manual reference contour)

Table 3 Parameters used to generate T1 and T2

Dice coefficient (DC)

We use the Dice Coefficient (DC) metric for evaluating our model. This metric assesses how effective and robust the model is. DC has been widely used as a benchmark in the literature to compare brain segmentation models. The DC is given by the following equation (defined in [49]):

$$\begin{aligned} {DC}(V_{\text {ref}}, V_{\text {auto}}) = \frac{2 |V_{\text {ref}} \cap V_{\text {auto}}|}{|V_{\text {ref}}| + |V_{\text {auto}}|} \end{aligned}$$
(13)

where \(V_{\text {ref}}\) denotes the reference segmentation and \(V_{\text {auto}}\) denotes the automated segmentation. DC values lie in the range [0, 1], where 1 denotes a perfect overlap and 0 denotes a complete mismatch.
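For binary masks, Eq. (13) amounts to the following computation (a small self-contained example with hypothetical masks):

```python
import numpy as np

def dice_coefficient(v_ref, v_auto):
    # Eq. (13): DC = 2 |V_ref ∩ V_auto| / (|V_ref| + |V_auto|) for binary masks.
    v_ref = v_ref.astype(bool)
    v_auto = v_auto.astype(bool)
    intersection = np.logical_and(v_ref, v_auto).sum()
    total = v_ref.sum() + v_auto.sum()
    return 2.0 * intersection / total if total > 0 else 1.0

# Example: an 8-voxel reference and a 4-voxel prediction overlapping in 4 voxels.
ref = np.array([[1, 1, 0, 0]] * 4)
auto = np.array([[1, 0, 0, 0]] * 4)
print(dice_coefficient(ref, auto))  # 2*4 / (8 + 4) = 0.666...
```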

Experiment environment

We implement our proposed model using Python TensorFlow on a computer with an NVIDIA GPU running the Ubuntu 16.04 operating system. We train and test our model on each of the two datasets independently.

Results and discussion

This section discusses the evaluation results of our model compared to the state-of-the-art models.

Analysis of the results

Table 4 shows the performance of our model on the MICCAI iSEG dataset, compared to the state-of-the-art models. Our model achieves better results than the state-of-the-art models. In particular, we observe an increase in the accuracy of segmenting GM using our model. This result suggests that BIM has contributed to improving the distinction of GM boundaries. However, for segmenting CSF and WM, the results of our model were \(1\%\) lower than those of the models proposed in [17] and [38], which is likely due to the inclusion of some irrelevant GM information in CSF and WM. This encourages us to further improve the boundary detection to carefully account for the features missed by our current model. Besides, we plan in the future to apply boundary detection to multi-stage segmentation models, given their current high accuracy even when no boundary detection is adopted.

Table 4 Segmentation performance in Dice Coefficient (DC) obtained on the \(MICCAI \ iSEG\) dataset achieved by our model (with and without BIM), compared to the state-of-the-art models

Table 5 shows the performance of our model on the MRBrainS dataset, compared to the state-of-the-art models. We observe an increase in the accuracy of segmenting both GM and WM using our model. This result suggests that BIM has contributed to improving the distinction of GM and WM boundaries. Once again, we observe that our model performs \(1\%\) lower than the multi-stage model in segmenting CSF, suggesting a limitation of our boundary detection in that region of the brain. Figure 5 visualizes the results of our model on the images used as a validation set. As we can see, the segmentation results achieved by our model are fairly close to the manual reference contour (i.e., ground truth). Additionally, we observe an improvement in segmentation accuracy between WM and GM.

Table 5 Segmentation performance in Dice Coefficient (DC) obtained on the MRBrainS dataset achieved by our model (with and without BIM), compared to the state-of-the-art models
Fig. 5 Visualization results on the MRBrainS dataset

Ablation experiment

In deep learning research, an ablation experiment is important for describing a model and better understanding what drives its performance. Our ablation study reveals the effectiveness of BIM in our model.

Effectiveness of BIM To demonstrate the effectiveness of BIM, we run our model without BIM on both datasets and compare the results with the state-of-the-art models in the last rows of Tables 4 and 5. We observe that BIM helps our model distinguish between the boundaries of the three brain tissues. In particular, BIM improves segmentation accuracy by 4.0–5.3%.

Execution time

Table 6 shows the execution time (in minutes) and the standard deviation (SD) of our model on the MRBrainS dataset, compared to the state-of-the-art models. We observe that our model is faster than all the state-of-the-art models except one, against which our model took a few minutes longer. We conjecture that this longer execution time is likely due to the additional steps required for boundary detection, which add some complexity to the proposed model. Still, given the better segmentation results of our model, accuracy should be given preference over efficiency, since the gap in execution time is not considerably large.

Table 6 Average execution time (in minutes) and standard deviation (SD) on the MRBrainS dataset

Highlights of our model

Boundary detection for brain segmentation To the best of our knowledge, our proposed model is the first attempt to apply boundary detection to the segmentation of brain tissues, which has significantly improved segmentation results. Our model outperformed previous models not only in terms of segmentation accuracy, especially for segmenting GM and WM, but also in terms of execution time.

BIM+BAG Our model adopts the BIM and BAG mechanisms to focus on boundaries while performing the segmentation tasks. The \(BIM+BAG\) addition has a positive effect on the effectiveness of our model. These two mechanisms may have introduced some complexity into our model, yet it still performs faster than all the state-of-the-art models except one. Nevertheless, we believe that more preference should be given to producing better segmentation results regardless of execution time. Hence, sacrificing efficiency for better accuracy is a viable option.

Accuracy on two different datasets Our model is evaluated on two completely different datasets of brain images, one for infants and one for adults. Each of these datasets contains a limited number of low-contrast images. Yet, our model achieves high accuracy in segmenting brain tissues, most particularly GM and WM, outperforming the state-of-the-art models in this context.

Limitations and future work

Limited dataset Our model is evaluated on datasets of infant and adult images. However, these images are limited in number and of poor quality, which could have influenced the performance of our model. Future research should consider extending the evaluation of boundary detection+segmentation to additional, more realistic datasets.

Network design Our model employs ResNet50 to extract feature maps from input images and an RPN to generate the bbox detector and mask detector. However, these networks might not be the best alternatives for this particular problem. Future work should investigate other networks (e.g., other CNNs, RNNs, U-Net).

Further improvement of boundary detection Our model achieved higher performance, compared to the state-of-the-art models, in segmenting GM and WM. However, the performance of our model was lower than that of the multi-stage model on CSF. This indicates that there is still room to improve segmentation accuracy by considering more sophisticated boundary detection and/or applying it to other segmentation models.

Model complexity It can be argued that our model has become more complex with the additional networks and layers employed to perform boundary detection followed by tissue segmentation. However, our model shows good efficiency, expressed by faster execution times compared to almost all the state-of-the-art models. Still, we aim in the future to optimize our model further and mitigate the accuracy-versus-efficiency trade-off by reducing any unnecessary complexity.

Conclusion

In this paper, we proposed a boundary detection-based model for brain image segmentation. To this end, we designed a boundary segmentation network for detecting and segmenting brain tissues. Then, we designed a boundary information module (BIM) to distinguish the boundaries of the three different brain tissues. After that, we added a boundary attention gate (BAG) to the encoder output layers to capture more informative local details. We evaluated our proposed model on two datasets of brain tissue images, covering infant and adult brains. Our evaluation results show better performance (an improvement in Dice Coefficient (DC) of up to \(5.3\%\) over the state-of-the-art models) in detecting and segmenting brain tissue images, which demonstrates the importance of boundary detection for brain segmentation tasks.

We plan in the future to expand the evaluation of our model to additional datasets with more brain images and tissues. We also plan to extend our model to perform segmentation of pathological brain and skin lesion dermoscopy images. Moreover, we plan to investigate networks other than the RPN (e.g., Cascade Mask \(R-CNN\) networks) to further improve segmentation accuracy. Finally, we plan to develop a framework to support boundary detection in other segmentation models.

Availability of data and materials

The data that support the findings of this study are publicly available at the MICCAI Grand Challenge on 6-month infant brain MRI segmentation (http://iseg2017.web.unc.edu) and MRBrainS (https://mrbrains13.isi.uu.nl/results.php).

Notes

  1. http://iseg2017.web.unc.edu.

  2. https://mrbrains13.isi.uu.nl/results.php.

Abbreviations

G: Generator
D: Discriminator
z: Noise
G(z): Generated data
x: Real data
WM: White matter
GM: Gray matter
CSF: Cerebrospinal fluid
Conv: Convolutional
LeReLU: Leaky ReLU activation function
GAN: Generative adversarial network
E: Expected value
DC: Dice Coefficient
MRI: Magnetic resonance imaging
T1: T1-weighted image (longitudinal relaxation time)
T2: T2-weighted image (transverse relaxation time)
Vauto: Automated segmentation
Vref: Reference segmentation

References

  1. Wang W, Li Q, Xiao C, Zhang D, Miao L, Wang L. An improved boundary-aware u-net for ore image semantic segmentation. Sensors. 2021;21(8):2615.

  2. Kim M, Lee B-D. A simple generic method for effective boundary extraction in medical image segmentation. IEEE Access. 2021;9:103875–84.

  3. Huang H, Lin L, Tong R, Hu H, Zhang Q, Iwamoto Y, Han X, Chen YW, Wu J. Unet 3+: a full-scale connected unet for medical image segmentation. In: ICASSP 2020-2020 IEEE international conference on acoustics, speech and signal processing (ICASSP). IEEE; 2020. p. 1055–59.

  4. Wang B, Wei W, Qiu S, Wang S, Li D, He H. Boundary aware u-net for retinal layers segmentation in optical coherence tomography images. IEEE J Biomed Health Inform. 2021;25(8):3029–40.

  5. Liu X, Yang L, Chen J, Yu S, Li K. Region-to-boundary deep learning model with multi-scale feature fusion for medical image segmentation. Biomed Signal Process Control. 2022;71: 103165.

  6. Fechter T, Adebahr S, Baltas D, Ben Ayed I, Desrosiers C, Dolz J. Esophagus segmentation in CT via 3D fully convolutional neural network and random walk. Med Phys. 2017;44(12):6341–52.

  7. Bao S, Chung AC. Multi-scale structured CNN with label consistency for brain MR image segmentation. Comput Methods Biomech Biomed Eng Imaging Vis. 2018;6(1):113–7.

  8. Blackmon K, Halgren E, Barr WB, Carlson C, Devinsky O, DuBois J, Quinn BT, French J, Kuzniecky R, Thesen T. Individual differences in verbal abilities associated with regional blurring of the left gray and white matter boundary. J Neurosci. 2011;31(43):15257–63.

  9. Andrews DS, Avino TA, Gudbrandsen M, Daly E, Marquand A, Murphy CM, Lai M-C, Lombardo MV, Ruigrok AN, Williams SC, et al. In vivo evidence of reduced integrity of the gray-white matter boundary in autism spectrum disorder. Cereb Cortex. 2017;27(2):877–87.

  10. Godel M, Andrews D, Amaral D, Ozonoff S, Young G, Lee J, Nordahl C, Schaer M. Altered gray-white matter boundary in toddlers at risk for autism relates to later diagnosis of autism spectrum disorder. PhD thesis, Universite de Geneve; 2020.

  11. Murphy D, Ecker C. The effect of age on vertex-based measures of the grey-white matter tissue contrast in autism spectrum disorder; 2018.

  12. Goyal M, Oakley A, Bansal P, Dancey D, Yap MH. Skin lesion segmentation in dermoscopic images with ensemble deep learning methods. IEEE Access. 2019;8:4171–81.

  13. Yaakub SN, Heckemann RA, Keller SS, McGinnity CJ, Weber B, Hammers A. On brain atlas choice and automatic segmentation methods: a comparison of MAPER & FreeSurfer using three atlas databases. Sci Rep. 2020;10(1):1–15.

  14. Hatamizadeh A, Terzopoulos D, Myronenko A. End-to-end boundary aware networks for medical image segmentation. In: International workshop on machine learning in medical imaging. Springer; 2019. p. 187–94.

  15. Lee HJ, Kim JU, Lee S, Kim HG, Ro YM. Structure boundary preserving segmentation for medical image with ambiguous boundary. In: Proceedings of the IEEE/CVF conference on computer vision and pattern recognition; 2020. p. 4817–826.

  16. Wang R, Chen S, Ji C, Fan J, Li Y. Boundary-aware context neural network for medical image segmentation. Med Image Anal. 2022;78: 102395.

  17. Khaled A, Own CM, Tao W, Ghaleb TA. Improved brain segmentation using pixel separation and additional segmentation features. In: Asia-Pacific Web (APWeb) and Web-Age Information Management (WAIM) joint international conference on web and big data. Springer; 2020. p. 85–100.

  18. Liang W, Shunbo H, Changchun L. MR brain segmentation based on DE-ResUnet combining texture features and background knowledge. Biomed Signal Process Control. 2022;75: 103541.

  19. Pulkit K, Pravin N, Chetan A, Anubha G. U-segnet: fully convolutional neural network based automated brain tissue segmentation tool; 2018.

  20. Zafar K, Gilani SO, Waris A, Ahmed A, Jamil M, Khan MN, Sohail Kashif A. Skin lesion segmentation from dermoscopic images using convolutional neural network. Sensors. 2020;20(6):1601.

  21. Goyal M, Oakley A, Bansal P, Dancey D, Yap MH. Skin lesion segmentation in dermoscopic images with ensemble deep learning methods. IEEE Access. 2019;8:4171–81.

  22. Al-Masni MA, Al-Antari MA, Choi M-T, Han S-M, Kim T-S. Skin lesion segmentation in dermoscopy images via deep full resolution convolutional networks. Comput Methods Programs Biomed. 2018;162:221–31.

  23. Guoqiang W, Dongxue W. Segmentation of brain MRI image with GVF snake model. In: First international conference on pervasive computing, signal processing and applications; 2010. p. 711–14. https://doi.org/10.1109/PCSPA.2010.177.

  24. Wang L, Ji H, Gao X. MR brain image segmentation using a possibilistic entropy based clustering method. In: Proceedings of 7th international conference on signal processing, ICSP ’04, vol. 3; 2004. p. 2241–443. https://doi.org/10.1109/ICOSP.2004.1442225.

  25. Jiao F, Fu D, Bi S. Brain image segmentation based on bilateral symmetry information. In: 2008 2nd International conference on bioinformatics and biomedical engineering; 2008. p. 1951–54. https://doi.org/10.1109/ICBBE.2008.817.

  26. Zanjani FG, Zinger S, Bejnordi BE, van der Laak J. Histopathology stain-color normalization using deep generative models; 2018.

  27. Jimenez-Alaniz JR, Medina-Banuelos V, Yanez-Suarez O. Data-driven brain MRI segmentation supported on edge confidence and a priori tissue information. IEEE Trans Med Imaging. 2006;25(1):74–83. https://doi.org/10.1109/TMI.2005.860999.

  28. Ou T, Chunguang J, Huilong D, Weixue L. Automatic segmentation and classification of human brain images based on TT atlas. In: Proceedings of the 20th annual international conference of the IEEE engineering in medicine and biology society, vol. 20. Biomedical engineering towards the year 2000 and beyond (Cat. No.98CH36286), vol. 2; 1998. p. 700–2. https://doi.org/10.1109/IEMBS.1998.745517.

  29. Guibas JT, Virdi TS, Li PS. Synthetic medical images from dual generative adversarial networks. CoRR abs/1709.01872. arXiv:1709.01872 (2017).

  30. Yao Y, Cheng Y. High effective medical image segmentation with model adjustable method. In: IEEE international symposium on circuits and systems (ISCAS), 2013. p. 1512–15. https://doi.org/10.1109/ISCAS.2013.6572145.

  31. Zhang S, Huang J, Uzunbas M, Shen T, Delis F, Huang X, Volkow N, Thanos P, Metaxas D. 3d segmentation of rodent brain structures using active volume model with shape priors. In: IEEE international symposium on biomedical imaging: from nano to macro, 2011. p. 433–6. https://doi.org/10.1109/ISBI.2011.5872439.

  32. Wang L, Li X, Fang K. Object detection based on feature extraction and morphological operations. In: International conference on neural networks and brain, vol. 2; 2005. p. 1001–3. https://doi.org/10.1109/ICNNB.2005.1614787.

  33. Mallick PK, Satapathy BS, Mohanty MN, Kumar SS. Intelligent technique for CY brain image segmentation. In: 2nd International conference on electronics and communication systems (ICECS), 2015. p. 1269–77. https://doi.org/10.1109/ECS.2015.7124789.

  34. Zhou S, Nie D, Adeli E, Yin J, Lian J, Shen D. High-resolution encoder-decoder networks for low-contrast medical image segmentation. IEEE Trans Image Process. 2020;29:461–75. https://doi.org/10.1109/TIP.2019.2919937.

  35. Qu X, Platisa L, Despotovic I, Kumcu A, Bai T, Deblaere K, Philips W. Estimating blur at the brain gray-white matter boundary for FCD detection in MRI. In: 36th Annual international conference of the IEEE engineering in medicine and biology society, 2014. p. 3321–24. https://doi.org/10.1109/EMBC.2014.6944333.

  36. Shen W, Wang B, Jiang Y, Wang Y, Yuille A. Multi-stage multi-recursive-input fully convolutional networks for neuronal boundary detection. In: IEEE international conference on computer vision (ICCV), 2017. p. 2410–19. https://doi.org/10.1109/ICCV.2017.262.

  37. Chakraborty A, Staib LH, Duncan JS. An integrated approach to boundary finding in medical images. In: Proceedings of IEEE workshop on biomedical image analysis, 1994. p. 13–22. https://doi.org/10.1109/BIA.1994.315870.

  38. Khaled A, Han J-J, Ghaleb TA. Multi-model medical image segmentation using multi-stage generative adversarial network. IEEE Access. 2022;10:28590–9.

  39. Dolz J, Gopinath K, Yuan J, Lombaert H, Desrosiers C, Ayed IB. HyperDense-Net: a hyper-densely connected CNN for multi-modal image segmentation. IEEE Trans Med Imaging. 2018;38(5):1116–26.

  40. Anbeek P, Išgum I, van Kooij BJ, Mol CP, Kersbergen KJ, Groenendaal F, Viergever MA, de Vries LS, Benders MJ. Automatic segmentation of eight tissue classes in neonatal brain MRI. PLoS ONE. 2013;8(12):e81895.

  41. Veluchamy M, Subramani B. Brain tissue segmentation for medical decision support systems. J Ambient Intell Humaniz Comput. 2021;12(2):1851–68.

  42. Roy S, Bandyopadhyay SK. A new method of brain tissues segmentation from MRI with accuracy estimation. Procedia Comput Sci. 2016;85:362–9.

  43. Kong Y, Chen X, Wu J, Zhang P, Chen Y, Shu H. Automatic brain tissue segmentation based on graph filter. BMC Med Imaging. 2018;18(1):1–8.

  44. Dolz J, Desrosiers C, Ayed IB. 3D fully convolutional networks for subcortical segmentation in MRI: a large-scale study. Neuroimage. 2018;170:456–70.

  45. Jin D, Xu Z, Harrison AP, Mollura DJ. White matter hyperintensity segmentation from t1 and flair images using fully convolutional neural networks enhanced with residual connections. In: IEEE 15th international symposium on biomedical imaging (ISBI 2018). IEEE; 2018. p. 1060–64.

  46. de Brebisson A, Montana G. Deep neural networks for anatomical brain segmentation. In: Proceedings of the IEEE conference on computer vision and pattern recognition workshops, 2015. p. 20–28.

  47. Ma Y, Cai Y, Wang Z, Sun M, Zhao X. Visual detection of cells in brain tissue slice for patch clamp system. In: IEEE 11th annual international conference on cyber technology in automation, control, and intelligent systems (CYBER), 2021. p. 521–26. https://doi.org/10.1109/CYBER53097.2021.9588141.

  48. Celano GGA. A ResNet-50-based convolutional neural network model for language ID identification from speech recordings. In: Proceedings of the third workshop on computational typology and multilingual NLP. Association for Computational Linguistics; 2021. p. 136–44. https://doi.org/10.18653/v1/2021.sigtyp-1.13.

  49. Wang L, Nie D, Li G, Puybareau É, Dolz J, Zhang Q, Wang F, Xia J, Wu Z, Chen J-W, et al. Benchmark on automatic six-month-old infant brain segmentation algorithms: the iSeg-2017 challenge. IEEE Trans Med Imaging. 2019;38(9):2219–30.

  50. Çiçek Ö, Abdulkadir A, Lienkamp SS, Brox T, Ronneberger O. 3D U-Net: learning dense volumetric segmentation from sparse annotation. CoRR abs/1606.06650. arXiv:1606.06650 (2016).

  51. Nie D, Wang L, Gao Y, Shen D. Fully convolutional networks for multi-modality isointense infant brain image segmentation. In: IEEE 13th international symposium on biomedical imaging (ISBI), 2016. p. 1342–45. https://doi.org/10.1109/ISBI.2016.7493515.

  52. Mahbod A, Chowdhury M, Smedby Ö, Wang C. Automatic brain segmentation using artificial neural networks with shape context. Pattern Recognit Lett. 2018;101:74–9.

  53. Stollenga MF, Byeon W, Liwicki M, Schmidhuber J. Parallel multi-dimensional LSTM, with application to fast biomedical volumetric image segmentation. CoRR abs/1506.07452. arXiv:1506.07452 (2015).

Acknowledgements

We thank the National Science Foundation of China (NSFC) for supporting this work.

Funding

This work is supported in part by the National Science Foundation of China (NSFC), awards 61872411 and 61472150.

Author information

Contributions

AK: conceptualization, idea, coding, writing-original-draft, writing, and editing. J-JH: resources, supervision, project administration, funding acquisition, reviewing, and editing. TAG: guidance, results analysis, reviewing, and editing. All authors read and approved the final manuscript.

Author information

Afifa Khaled is currently a Ph.D. student in computer science at Huazhong University of Science and Technology (HUST), Wuhan, China. She obtained her M.Sc. in Software Engineering from Tianjin University (2020). Her research interests include machine learning and deep learning methods for medical image segmentation.

Jian-Jun Han received the Ph.D. degree in computer science and engineering from the Huazhong University of Science and Technology (HUST) in 2005. He is now a professor with the School of Computer Science and Technology at HUST. He worked with the University of California, Irvine as a visiting scholar between 2008 and 2009, and with Seoul National University between 2009 and 2010. His research interests include AI algorithms, real-time systems, and parallel computing.

Taher A. Ghaleb is a Postdoctoral Research Fellow at the School of EECS at the University of Ottawa, Canada. Taher obtained his Ph.D. in Computing from Queen’s University, Canada (2021). During his Ph.D., Taher held an Ontario Trillium Scholarship, a highly prestigious award for doctoral students. He has worked as a research/teaching assistant since obtaining his B.Sc. in Information Technology from Taiz University, Yemen (2008), and holds an M.Sc. in Computer Science from King Fahd University of Petroleum and Minerals, Saudi Arabia (2016). His research interests include continuous integration, software testing, mining software repositories, applied machine learning, program analysis, and empirical software engineering.

Corresponding author

Correspondence to Afifa Khaled.

Ethics declarations

Ethics approval and consent to participate

Not applicable.

Consent for publication

Not applicable.

Competing interests

The authors declare that they have no known competing financial interests.

Additional information

Publisher's Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Rights and permissions

Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/. The Creative Commons Public Domain Dedication waiver (http://creativecommons.org/publicdomain/zero/1.0/) applies to the data made available in this article, unless otherwise stated in a credit line to the data.

About this article

Cite this article

Khaled, A., Han, JJ. & Ghaleb, T.A. Learning to detect boundary information for brain image segmentation. BMC Bioinformatics 23, 332 (2022). https://doi.org/10.1186/s12859-022-04882-w
