 Methodology
Semi-supervised COVID-19 CT image segmentation using deep generative models
BMC Bioinformatics volume 23, Article number: 343 (2022)
Abstract
Background
A recurring problem in image segmentation is a lack of labelled data. This problem is especially acute in the segmentation of lung computed tomography (CT) scans of patients with Coronavirus Disease 2019 (COVID-19). The reason for this is simple: the disease has not been prevalent long enough to generate a great number of labels. Semi-supervised learning promises a way to learn from unlabelled data and has seen tremendous advancements in recent years. However, due to the complexity of its label space, those advancements cannot be applied to image segmentation. That being said, it is this same complexity that makes it extremely expensive to obtain pixel-level labels, making semi-supervised learning all the more appealing. This study seeks to bridge this gap by proposing a novel model that utilizes the image segmentation abilities of deep convolutional networks and the semi-supervised learning abilities of generative models for chest CT images of patients with COVID-19.
Results
We propose a novel generative model called the shared variational autoencoder (SVAE). The SVAE utilizes a five-layer deep hierarchy of latent variables and deep convolutional mappings between them, resulting in a generative model that is well suited for lung CT images. Then, we add a novel component to the final layer of the SVAE which forces the model to reconstruct the input image using a segmentation that must match the ground truth segmentation whenever it is present. We name this final model StitchNet.
Conclusion
We compare StitchNet to other image segmentation models on a high-quality dataset of CT images from COVID-19 patients. We show that our model has comparable performance to the other segmentation models. We also explore the potential limitations and advantages of our proposed algorithm and propose some potential future research directions for this challenging issue.
Background
Modern deep learning-based image segmentation techniques tend to require vast amounts of pixel-level labels to be effective. However, to obtain these labels, it is necessary to have someone sit down and categorize every pixel in an image. This requires a massive amount of human effort. In the case of biomedical images, this is made worse by the fact that it is often necessary to have a panel of experts do the labelling. Therefore, any technique that has the potential to reduce the number of labelled images needed has immense value.
This issue can be seen in the diagnosis and prognosis of patients suspected to have the Coronavirus Disease 2019 (COVID-19), using computed tomography (CT) scans of their lungs. A pixel-wise segmentation of these scans, identifying healthy tissue as well as parts of the lungs affected by either common pneumonia or novel coronavirus pneumonia, can be a powerful tool for diagnosis as well as for assessing how much risk the patient is in, or will be in. Obtaining these segmentations, however, is immensely time-consuming for medical professionals to do by hand. In response to this, there has been work [1,2,3] in using deep learning models for image segmentation to automate this process. Despite there being massive datasets of CT images, these models can only be trained on CT images that have been hand-labelled by skilled radiologists, severely limiting the amount of usable data.
Semi-supervised learning has the potential to alleviate this issue. A semi-supervised model has the ability to learn from both unlabelled and labelled images simultaneously, drastically reducing the number of labelled images needed to achieve satisfactory performance. For this reason, there has been a surge of research into semi-supervised learning in recent years. We will discuss some notable previous work in image segmentation, starting with several fully supervised models and following with several semi-supervised models.
U-Net [4] is a deep learning-based image segmentation model that has seen great success on medical imagery tasks. It utilizes an encoder-decoder style architecture with skip connections between the encoder and the decoder. SegNet [5] has a similar encoder-decoder style architecture; however, instead of skip connections, it uses max unpooling layers in the decoder. The MobileNetV2 [6], an image classification network, can be used as the U-Net's encoder. When compared to much larger encoder networks, the MobileNetV2 achieves only slightly worse performance while being much faster. Zhang et al. [1] have used several deep learning-based, supervised segmentation models [4, 5, 7] to predict a segmentation for a CT image of a patient's lungs. Fan et al. [3] and Chen et al. [8] both propose novel supervised segmentation models that have been handcrafted to perform well on chest CT images. Though impressive, these models are still limited by the number of CT images with pixel-level labels.
Moving away from fully supervised models, there is a plethora of papers proposing deep learning models that use image-level labels as a supervisory signal for the task of image segmentation. These models, however, do not utilize completely unlabelled images. Learning from completely unlabelled images is sometimes referred to as pure or true semi-supervision, and there are precious few published papers that tackle it [9,10,11,12,13,14,15,16,17,18,19].
The above purely semi-supervised models tend to tackle the problem using some form of adversarial training, self-training, clustering or multi-view training. Many papers [10, 13, 16, 18, 19] use an adversarially trained discriminator (a deep convolutional network) to ensure that the prediction of some segmentation model is realistic. This scheme allows them to train their network on unlabelled images by leveraging the fact that, even if you do not know the ground truth, the prediction should at least belong to the same distribution as the ground truth for the labelled images. The main drawback of this technique is that it can be very difficult to get a model with an adversarial component to converge to a solution.
Other papers [9, 14] use the fact that images, labelled or unlabelled, that have been determined to be similar by some deep learning-based, unsupervised clustering algorithm should also be close to each other in various latent and feature spaces. These techniques depend on how you define "close", which can be quite difficult for data as high-dimensional as images, causing the performance of these models to be underwhelming.
Pseudo-labelling [20] is a commonly used semi-supervised learning technique where a fully supervised deep network is trained and then used to make predictions on some unlabelled data. The network is then retrained using the model's most confident predictions as labels. However, if these predictions are of low quality, then this scheme will continuously reinforce the bad behaviour to disastrous effect. As a result, pseudo-labelling is typically considered the least effective, but simplest, semi-supervised technique. There are several papers [3, 12] that use this general scheme with some significant modifications.
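The core of this scheme, keeping only the model's most confident predictions as pseudo-labels, can be sketched as follows. This is a minimal illustration assuming generic class-probability outputs; the function name and threshold are our own, not the procedure of any specific paper cited above:

```python
import numpy as np

def select_pseudo_labels(probs, threshold=0.95):
    """Keep only predictions whose top-class probability exceeds the
    confidence threshold; return (kept indices, hard pseudo-labels)."""
    confidence = probs.max(axis=1)   # most confident class per sample
    labels = probs.argmax(axis=1)    # hard pseudo-label
    keep = confidence >= threshold   # filter out low-quality predictions
    return np.flatnonzero(keep), labels[keep]

# toy predictions on three unlabelled samples
probs = np.array([[0.98, 0.02],   # confident -> kept
                  [0.60, 0.40],   # uncertain -> discarded
                  [0.01, 0.99]])  # confident -> kept
idx, pseudo = select_pseudo_labels(probs)
```

The network would then be retrained on the kept samples with `pseudo` standing in for the missing ground truth, and the loop repeated.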
As with this paper, many papers [2, 3] seek to utilize unlabelled CT images from COVID-19 patients. Shan et al. [2] use an intriguing human-in-the-loop strategy. This strategy entails training a deep learning-based segmentation network on a small dataset of pixel-wise labelled data, then using this network to make predictions on a large unlabelled dataset. These predictions are then refined by a skilled radiologist and included in the pixel-level labelled dataset. The network is retrained, and this process repeats until satisfactory performance is achieved. Though the labelling effort is significantly reduced, this technique still requires some manual labelling effort, and many research groups will simply not have access to a radiologist.
Fan et al. [3] use pseudo-labelling in its most rudimentary form. Despite this, they achieved a sizable increase in segmentation performance compared to their fully supervised baseline. This makes it quite motivating to employ a more sophisticated semi-supervised technique, as pseudo-labelling is far from capable of making full use of these unlabelled images. Fung et al. [21] proposed a model that does just this. They add a self-supervised pretraining step to Fan et al.'s Inf-Net model. During this step, the CT images are obscured with a black rectangle and the model is trained to reconstruct the full CT image. Though this method was able to improve on the Inf-Net, it is trained in two separate steps, and a semi-supervised technique that can be trained end-to-end may improve the performance even further.
Deep generative models offer an elegant framework for semi-supervised models. In essence, they treat the image and label as two random variables in a graphical model and seek to model both using recent advances in variational inference. Some notable examples are the M2 variational autoencoder [22] and the auxiliary deep generative model [23]. Unfortunately, the vast majority of this research has been in the domain of image classification, where the label is simply a single category for each image. The assumption can be made that each of these categories is equally likely to occur. Even though this assumption rarely holds exactly, it allows for easy-to-compute, closed-form calculations. Modern deep generative-based, semi-supervised techniques rely heavily on this fact.
A similar assumption cannot be made in the case of image segmentation. This is for several critical reasons. First, due to the fact that each pixel has a label, the number of unique segmentations is exponentially larger than the number of unique image-level labels. Furthermore, very few of these unique segmentations are realistic. For example, a set of pixels that have been given the dog label but are in the shape of a human is not a realistic segmentation. This is important because it completely removes our ability to assume that each unique segmentation is equally likely to occur. Finally, the label for each pixel is heavily dependent on the labels of the other pixels in the image, removing the possibility of making any independence assumptions. For these reasons, modern semi-supervised techniques tend to fall flat when used for image segmentation.
Though problematic, the issues mentioned above are not at all new. The same issues are encountered while trying to find a distribution capable of modelling images. The variational approach [24, 25] handles this by finding a latent representation of the image as well as a deep learning-based, functional mapping between the image and its latent representation. You are then free to make simplifying assumptions about the latent space's distribution without making any assumptions about the images' distribution. In this study, we utilize this approach by finding a latent representation of both the original CT image and its segmentation. Because we want our model to learn to segment images even when the ground-truth segmentation is not present, we assume that the original CT image is dependent on the segmentation. By doing this, in the absence of the ground-truth segmentation, the model learns to predict a segmentation that is useful for the reconstruction of the original CT image.
Though the original variational autoencoder (VAE) [24] could be used for this task, it lacks the expressivity to sufficiently model large datasets. This is particularly true when the dataset is one of images. The ladder variational autoencoder (LVAE) [26] greatly increases the expressivity of the VAE by introducing a hierarchy of latent variables and a novel way of training such a hierarchy. In this study, we modify the LVAE by sharing several key weights across the inference and generative network. Additionally, we replace all functional mappings in the LVAE with deep convolutional networks that have been handcrafted to work well on CT images. We name the resultant model the shared variational autoencoder (SVAE).
In their original forms, the VAE, LVAE and SVAE are designed to take an image as input and reconstruct that image as its output. We modify the SVAE to output both a segmentation mask and four CT images, one for each of the segmentation labels. Then, we reconstruct the original CT image by stitching together the four CT images based on the segmentation. We name the resultant model StitchNet. In summary, we develop a novel deep generative model called the SVAE. Then, using the SVAE, we create a semi-supervised model called StitchNet and test it on a high-quality dataset of CT images from COVID-19 patients.
Results
Dataset
For the evaluation of StitchNet, Zhang et al.'s [1] China Consortium of Chest CT Image Investigation (CC-CCII) dataset, with several modifications, was used. CC-CCII contains CT images from 2,778 patients, totalling 444,034 images. Eighty-five percent of the patients were from the Chinese cities of Yichang, Hefei or Guangzhou, and the remainder were from an international cohort. The patients either had common pneumonia (CP), novel coronavirus pneumonia (NCP), or were part of the control group. In our dataset, we excluded the patients with CP.
CC-CCII contained 750 segmentation masks, which correspond to 150 patients with NCP. The segmentation was completed by five senior radiologists with over 25 years of experience. They segmented three labels: healthy lung field, ground-glass opacity and consolidation.
We will now discuss the data preprocessing and cleaning procedure we employed. We segmented the lung field in each CT image using the U-Net semantic segmentation model. The opening and closing morphological transformations were used for noise reduction. The images were then cropped to include only the lung field. The result is shown in Fig. 1. Before being used in our models, all images are resized to a resolution of 352×352 and the pixel values are scaled to be between zero and one. We randomly separate 60% of the labelled data into the training set, 20% into the test set and 20% into the validation set. We do this by patient, not by image, so that all of a single patient's CT images will be in exactly one of the three sets, thus avoiding data leakage.
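The patient-wise split described above can be sketched as follows. The 60/20/20 fractions match the text; the function name, seed and patient-ID format are illustrative assumptions:

```python
import random

def split_by_patient(image_patient_ids, seed=0, frac=(0.6, 0.2, 0.2)):
    """Assign each image to train/test/val by patient, so every image from
    a given patient lands in exactly one set (avoiding data leakage)."""
    patients = sorted(set(image_patient_ids))
    rng = random.Random(seed)
    rng.shuffle(patients)
    n_train = int(frac[0] * len(patients))
    n_test = int(frac[1] * len(patients))
    train_p = set(patients[:n_train])
    test_p = set(patients[n_train:n_train + n_test])
    splits = []
    for pid in image_patient_ids:
        if pid in train_p:
            splits.append("train")
        elif pid in test_p:
            splits.append("test")
        else:
            splits.append("val")
    return splits

# ten hypothetical patients with three CT slices each
ids = [f"p{i}" for i in range(10) for _ in range(3)]
splits = split_by_patient(ids)
```

Shuffling the patient list rather than the image list is what guarantees that no patient straddles two sets.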
Evaluation metrics
For each image, we employ the following four evaluation metrics: the Intersection-over-Union, F1-Score, Recall and Precision.
(1) The Intersection-over-Union (IoU): The Intersection-over-Union was used to measure the overlap between the ground-truth infected region (T) and the predicted infected region (P) in a way that controls for the size of the infected region.
(2) The F1-Score (F1): The F1-Score, also called the Dice Coefficient, was used to measure the overlap between the ground-truth infected region (T) and the predicted infected region (P).
(3) Recall (Rec.): The Recall, also called Sensitivity or True Positive Rate, was used to measure what proportion of the ground-truth infected region (T) was present in the predicted infected region (P).
(4) Precision (Prec.): The Precision was used to measure what proportion of the predicted infected region (P) was present in the ground-truth infected region (T). These four metrics are defined in Eq. (1).

\[{\text{IoU}} = \frac{|T \cap P|}{|T \cup P|}, \quad {\text{F1}} = \frac{2|T \cap P|}{|T| + |P|}, \quad {\text{Rec.}} = \frac{|T \cap P|}{|T|}, \quad {\text{Prec.}} = \frac{|T \cap P|}{|P|} \qquad (1)\]

where \(|\cdot|\) is the operator that calculates the number of pixels in the given region, \(\cap\) is the intersection operator, and \(\cup\) is the union operator.
We calculate the above metrics for each CT image and average the results. The Mean and Standard Deviation (STD) are defined as follows:
Let \(M((x, y))\) be the value of the relevant evaluation metric calculated for the data point (x, y). Then let \({\text{Metric}} = \{M((x_i,y_i)) \mid (x_i,y_i) \in D_{Val}\}\), where \(D_{Val}\) is the validation dataset.
(1) Mean: The Mean is simply \(\frac{1}{|{\text{Metric}}|}\sum _{m \in {\text{Metric}}} m\).
(2) STD: The STD is \(\sqrt{\frac{1}{|{\text{Metric}}|}\sum _{m \in {\text{Metric}}} (m - \mu )^2}\), where \(\mu\) is the Mean.
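A minimal sketch of how these per-image metrics could be computed from boolean masks of the ground-truth region T and the predicted region P (the function name is our own, not the paper's code):

```python
import numpy as np

def region_metrics(t, p):
    """IoU, F1, Recall and Precision between a ground-truth region t and a
    predicted region p, given as masks of the same shape."""
    t, p = np.asarray(t, bool), np.asarray(p, bool)
    inter = np.logical_and(t, p).sum()   # |T intersect P|
    union = np.logical_or(t, p).sum()    # |T union P|
    iou = inter / union
    f1 = 2 * inter / (t.sum() + p.sum())
    rec = inter / t.sum()
    prec = inter / p.sum()
    return iou, f1, rec, prec

# toy 2x2 masks: one overlapping pixel, one pixel unique to each region
t = np.array([[1, 1], [0, 0]])
p = np.array([[1, 0], [1, 0]])
iou, f1, rec, prec = region_metrics(t, p)
```

Averaging each metric over the validation set, as above, then gives the reported Mean and STD.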
Performance comparison
We compared StitchNet to SegNet and to U-Net with a MobileNetV2 encoder. The hyperparameters used to train StitchNet can be found in Additional file 1: Table S1. The results on the test set are shown in Table 1, with some example predictions shown in Fig. 2. The results on the validation and training sets can be found in Additional file 1: Tables S2 and S3. Although the performance of StitchNet and U-Net is comparable when predicting the ground-glass opacity label, StitchNet's precision is higher whereas U-Net's recall is higher. This seems to indicate that StitchNet makes more conservative predictions than U-Net. SegNet fails to predict any lesions, predicting only the background class. This is likely because its backbone is based on the outdated VGG16 network [27], whereas StitchNet's and U-Net's backbones use the more sophisticated MobileNetV2.
Discussion
The performance of StitchNet is comparable to that of the fully supervised U-Net model on the ground-glass opacity and consolidation lesions. This indicates that StitchNet is learning from the labelled data, but not the unlabelled data. When trained on only the labelled data, StitchNet predicts styles that are clearly associated with the appropriate lesions, effectively allowing you to see what a CT image would look like if it were entirely filled with the associated lesion. Because of this, StitchNet seems to perform exactly as expected on the labelled data. When trained on only the unlabelled data, StitchNet learns unique and meaningful styles, learning a meaningful clustering of the data. This is exactly what we would expect from training with no labelled data.
Based on these two observations, when trained on both labelled and unlabelled data, StitchNet should learn to predict styles that are associated with lesions, for both the labelled data and the unlabelled data. StitchNet achieves this on the labelled data; however, on the unlabelled data, all the styles are identical, and the segmentations are exactly the same for every image. This seems to indicate that the fundamental idea is sound, but that further work needs to be done before StitchNet can outperform the supervised network.
Furthermore, we note that the standard deviation of the StitchNet model is consistently lower than that of the U-Net model. This is due to the fact that the model is able to reinforce the predictions it makes on the labelled data by training on the unlabelled data, resulting in a model that performs more consistently on unseen data.
Conclusion
In conclusion, we proposed a novel generative model called the shared variational autoencoder (SVAE), making a theoretical contribution to the field of generative modelling by introducing shared weights between the encoder and the decoder. We used this model to propose StitchNet, a model capable of tackling the challenging task of semi-supervised CT image segmentation. While the theoretical foundation of StitchNet is sound, further work will be needed before it can make full use of unlabelled data.
Methods
Shared variational autoencoder
In this section we will introduce the theory, implementation and optimization of the SVAE. Suppose we have a dataset, \(D = \{x^{(i)}\}_{i = 0}^{N-1}\), of N images. Assuming that these images are independent and identically distributed (i.i.d.) samples from some ground-truth distribution, p(x), we wish to approximate that ground-truth distribution. This allows us to sample from our approximation, synthesizing new images. The Ladder Variational Autoencoder (LVAE) [26] is a recently proposed model that has been shown to be highly effective at modelling such distributions. Here we will briefly summarize their work and discuss some potential issues. The LVAE assumes that the data is generated in a hierarchical sampling process. Specifically, it assumes that, to generate an image, we first take a sample from a unit-Gaussian-distributed latent variable, \(z_n\). A function is then applied to that sample, outputting the parameters of a diagonal-Gaussian-distributed latent variable, \(z_{n-1}\). This repeats for n levels, with the last output distribution being a distribution for each pixel in the image (their work used a Bernoulli distribution). This allows them to assume independence between the pixels when conditioned on the latent variables. This model is denoted by \(p_\theta\).
The LVAE uses variational inference to learn both the model \(p_\theta\) and an approximate posterior to \(p_\theta\), \(q_\phi\). In previous work, \(q_\phi\) infers the value for \(z_1\) from x and \(z_2\) from \(z_1\). The LVAE differs from this in that its \(q_\phi\) completes a deterministic down pass, and then each z is inferred from the intermediate layers of this down pass. The dependencies between latent variables are recovered by combining the inferred distributions' parameters for the latent variables with the generative model's predicted distributions' parameters. This is depicted in Fig. 3a.
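The combination step can be illustrated with the precision-weighted merge used in the LVAE paper [26]: the inferred Gaussian and the generative model's Gaussian are merged by weighting each mean by its precision (inverse variance). A scalar sketch, with our own function name:

```python
def combine_gaussians(mu_q, var_q, mu_p, var_p):
    """Precision-weighted combination of the inference network's Gaussian
    (mu_q, var_q) with the generative model's Gaussian (mu_p, var_p),
    recovering dependencies between latent layers as in the LVAE."""
    prec_q, prec_p = 1.0 / var_q, 1.0 / var_p
    var = 1.0 / (prec_q + prec_p)               # combined variance
    mu = var * (mu_q * prec_q + mu_p * prec_p)  # precision-weighted mean
    return mu, var

# equal variances: the combined mean is halfway between the two means
mu, var = combine_gaussians(mu_q=2.0, var_q=1.0, mu_p=0.0, var_p=1.0)
```

In practice this is applied element-wise to the mean and variance maps of each latent layer.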
Though the LVAE is quite interesting, it was not designed to work well on large, complex datasets such as datasets of CT images. In this study, we seek to modify the LVAE so that it will work well on such a dataset. We do this by replacing the mappings between latent variables in the LVAE with deep convolutional layers that have been handcrafted to work well on CT images. Now, to find \(z_1\) given \(z_2\), we apply a deconvolutional layer to \(d_2\) to get \(d_1\) and then apply many convolutional layers to \(d_1\) to get \(z_1\). We note that, in both \(p_\theta\) and \(q_\phi\), we have a mapping between \(d_n\) and \(z_n\). We hypothesize that this mapping serves the exact same purpose in both, and that having the two networks share these weights would increase performance. With this final change, we arrive at the shared variational autoencoder (depicted in Fig. 3b).
Here, we will describe the deep convolutional layers used in the SVAE model. Several building blocks of the model are described in Fig. 4. When the number of filters is greater than 64, linear bottleneck convolutions [6] are used instead of the traditional convolution. Batch Normalization [29] followed by the ReLU [30] activation follows every convolutional layer and is suppressed for clarity. The SVAE has five layers of latent variables, as opposed to the two depicted in Fig. 3b. The dimensionality of these latent variables and their deterministic expansion is shown in Table 2. We use the intermediate and output layers of MobileNetV2 [6], with the image, x, as input, to obtain \(d_1, d_2, \ldots, d_5\). The mappings between variables are depicted in Fig. 5.
StitchNet
In this section we will introduce the theory, implementation and optimization of StitchNet. Suppose we have a dataset, \(D_{UN} = \{(x^{(i)})\}_{i=0}^{N-1}\), of N CT images, where \(x^{(i)}\) denotes the \(i^{th}\) CT image in the dataset. We will assume that \(x^{(i)}\) is a high-dimensional vector with entries ranging from zero to one. Suppose also that we have a dataset, \(D_{LAB} = \{(x^{(i)},y^{(i)})\}_{i=0}^{M-1}\), of M CT images along with their associated segmentations, \(y^{(i)}\). We will assume that \(y^{(i)}\) is of the same dimension as \(x^{(i)}\) and has entries that belong to the set \(\{0,1,2,3\}\). Here, if the \(n^{th}\) entry of \(y^{(i)}\) is equal to zero, then this indicates that the \(n^{th}\) entry of \(x^{(i)}\) is part of the background of the CT image. Furthermore, one, two and three correspond to the healthy tissue, ground-glass opacity and consolidation classes, respectively. This is depicted in Fig. 6a.
We wish to obtain a model capable of taking a CT image (x) and outputting an accurate segmentation (y). In other words, we wish to approximate the ground-truth conditional distribution \(p(y \mid x)\). Though not typically phrased in these terms, supervised deep-learning techniques do this by introducing the following approximation to this distribution:

\[p_\theta (y \mid x) = {\text{CAT}}\left(y \mid f_\theta (x)\right)\]

where \({\text{CAT}}\) is the Categorical distribution and \(f_\theta\) is some complex function. Due to their tremendous success on image data, \(f_\theta\) is typically chosen to be a convolutional neural network.
These supervised techniques then aim to find the parameters \(\theta\) that best explain the data we are given. This is done by maximizing the following objective using a numerical approximation algorithm such as gradient descent:

\[\theta ^{*} = \mathop{\mathrm{arg\,max}}_{\theta } \sum _{i = 0}^{M-1} \log p_\theta \left(y^{(i)} \mid x^{(i)}\right)\]
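As a concrete illustration of this objective, the log likelihood of a ground-truth segmentation under the pixel-wise Categorical distribution can be computed from the network's logits; maximizing it is equivalent to minimizing the familiar pixel-wise cross-entropy loss. The function below is a hypothetical sketch, not the paper's code:

```python
import numpy as np

def pixelwise_log_likelihood(logits, y):
    """Log likelihood of ground-truth labels y under CAT(y | f_theta(x));
    logits has shape (H, W, K) and y has shape (H, W) with integer labels."""
    # numerically stable softmax over the class axis gives the CAT parameters
    z = logits - logits.max(axis=-1, keepdims=True)
    probs = np.exp(z) / np.exp(z).sum(axis=-1, keepdims=True)
    h, w = y.shape
    # pick out the probability assigned to the true class at each pixel
    return np.log(probs[np.arange(h)[:, None], np.arange(w)[None, :], y]).sum()

logits = np.zeros((2, 2, 4))          # uniform prediction over 4 classes
y = np.array([[0, 1], [2, 3]])
ll = pixelwise_log_likelihood(logits, y)   # 4 pixels, each contributing log(1/4)
```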
Phrased in this way, the drawback of these supervised techniques is obvious: they can only use the labelled dataset \(D_{LAB}\). To remedy this, instead of approximating \(p(y \mid x)\), we can model the joint distribution p(x, y) and derive the conditional distribution \(p(y \mid x)\) from it. This allows us to use both \(D_{LAB}\) and \(D_{UN}\), by treating y as a latent variable in the latter case.
To effectively model p(x, y), we will assume that x and y are dependent on the hierarchy of latent variables from the SVAE, which here we will simply denote as z. Now we will model p(x, y, z). Furthermore, we will assume that each of the data points, \((x^{(i)},y^{(i)},z^{(i)})\), was generated in the following way:

\[z^{(i)} \sim p(z), \quad y^{(i)} \sim p_\theta \left(y \mid z^{(i)}\right), \quad x^{(i)} \sim p_\theta \left(x \mid y^{(i)}, z^{(i)}\right)\]

where p(z) is assumed to follow the distribution from the SVAE and \(p_\theta (\cdot )\) is assumed to be some distribution parameterized by \(\theta\) (depicted in Fig. 7).
To generate x, we will first generate four stylistic representations, referred to as b, h, g and c, of x. These stylistic representations of x show what the image would look like if the entire lung were background, healthy, ground-glass opacity and consolidation, respectively. We then reconstruct x by choosing each pixel from the style associated with the label predicted by y. Examples of these styles are shown in Fig. 6b.
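The stitching operation itself amounts to a per-pixel gather: at every pixel, the reconstruction copies the value from whichever style the segmentation selects. A minimal numpy sketch (the function name is ours):

```python
import numpy as np

def stitch(styles, y):
    """Reconstruct a CT image by picking, at every pixel, the value from the
    style selected by the segmentation y. `styles` has shape (4, H, W) for
    the background, healthy, ground-glass and consolidation styles; y has
    shape (H, W) with labels in {0, 1, 2, 3}."""
    return np.take_along_axis(styles, y[None, ...], axis=0)[0]

# four constant toy styles and a segmentation that uses each label once
styles = np.stack([np.full((2, 2), v) for v in (0.0, 0.25, 0.5, 0.75)])
y = np.array([[0, 1], [2, 3]])
x_hat = stitch(styles, y)
```

In the differentiable model this hard gather is what the segmentation distribution must explain: a segmentation that picks the wrong style yields a poor reconstruction of x.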
We will use this, as well as the following definitions, to define \(p_\theta\):

\[b \sim {\text{BETA}}\left(\alpha _\theta (z), \beta _\theta (z)\right)\]

with h, g and c defined analogously, where BETA denotes the Beta distribution and \(\alpha _\theta (z)\) and \(\beta _\theta (z)\) are some complex functions parameterized by \(\theta\) which output the parameters of the Beta distribution.
Finally, we define \(p_\theta\) as

\[p_\theta (y \mid z) = {\text{CAT}}\left(y \mid \pi _\theta (z)\right)\]

where \(\pi _\theta (z)\) is some complex function parameterized by \(\theta\). We now have a generative model that is well suited to CT image segmentation (depicted in Fig. 7b, c). What remains is outlining an effective means of finding the values of \(\theta\) that best explain our observed data. Concretely, we wish to solve

\[\theta ^{*} = \mathop{\mathrm{arg\,max}}_{\theta } \left[\sum _{(x,y) \in D_{LAB}} \log p_\theta (x, y) + \sum _{x \in D_{UN}} \log p_\theta (x)\right]\]
The existence of latent variables, and, by extension, the need to integrate over them, makes this objective completely intractable. We instead optimize a variational lower bound on the log likelihood of \(p_\theta\). Concretely, we optimize

\[\log p_\theta (x, y) \ge {\mathbb{E}}_{q_\phi (z \mid x, y)}\left[\log p_\theta (x, y, z) - \log q_\phi (z \mid x, y)\right]\]
Though \(q_\phi\) can be any function of the latent variables, this lower bound is exactly equal to the true log likelihood when \(q_\phi\) is equal to \(p_\theta\)'s posterior, \(p_\theta (z \mid x, y)\). Therefore, \(q_\phi\) has the interpretation of being an approximation to the posterior. When we implement \(q_\phi\), we will keep this fact in mind.
We can further increase tractability by approximating the calculation of the expectation over \(q_\phi\). We do this by taking a Monte Carlo sample from \(q_\phi\) and evaluating the expectation with just this sample. This approximation can be made more precise by taking multiple samples and averaging, but, for our work, we used only one. With this, we arrive at our final objective, which can be optimized via any gradient descent algorithm.
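Under a diagonal Gaussian \(q_\phi\), the single-sample estimate can be sketched as follows, using the reparameterization trick [24] so that the sample stays differentiable with respect to the variational parameters. The function and the toy joint density are illustrative assumptions, not the paper's implementation:

```python
import numpy as np

def elbo_sample(mu, log_var, log_joint, rng):
    """Single-sample Monte Carlo estimate of the variational lower bound:
    draw z ~ q_phi via the reparameterization trick, then evaluate
    log p_theta(x, y, z) - log q_phi(z)."""
    eps = rng.standard_normal(mu.shape)
    z = mu + np.exp(0.5 * log_var) * eps          # reparameterized sample
    # log density of the diagonal Gaussian q_phi evaluated at z
    log_q = -0.5 * np.sum(log_var + np.log(2 * np.pi)
                          + (z - mu) ** 2 / np.exp(log_var))
    return log_joint(z) - log_q

rng = np.random.default_rng(0)
mu, log_var = np.zeros(3), np.zeros(3)
# toy joint: a standard-normal prior plus a constant likelihood term
log_joint = lambda z: -0.5 * np.sum(z ** 2 + np.log(2 * np.pi)) - 1.0
est = elbo_sample(mu, log_var, log_joint, rng)
```

Averaging several such samples would reduce the variance of the estimate; as stated above, a single sample suffices for an unbiased gradient signal.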
Availability of data and materials
The CCCCII dataset used for our analysis can be found at http://ncovai.big.ac.cn/download?lang=en. All code necessary for the implementation of StitchNet and the replication of our results can be found at https://github.com/JudahZammit/stitchnet.
Abbreviations
COVID-19: Coronavirus disease 2019
CT: Computed tomography
SVAE: Shared variational autoencoder
LVAE: Ladder variational autoencoder
VAE: Variational autoencoder
CC-CCII: China Consortium of Chest CT Image Investigation
CP: Common pneumonia
NCP: Novel coronavirus pneumonia
References
...Zhang K, Liu X, Shen J, Li Z, Sang Y, Wu X, Zha Y, Liang W, Wang C, Wang K, Ye L, Gao M, Zhou Z, Li L, Wang J, Yang Z, Cai H, Xu J, Yang L, Cai W, Xu W, Wu S, Zhang W, Jiang S, Zheng L, Zhang X, Wang L, Lu L, Li J, Yin H, Wang W, Li O, Zhang C, Liang L, Wu T, Deng R, Wei K, Zhou Y, Chen T, Lau JYN, Fok M, He J, Lin T, Li W, Wang G. Clinically applicable AI system for accurate diagnosis, quantitative measurements, and prognosis of COVID19 pneumonia using computed tomography. Cell. 2020;181(6):1423–143311. https://doi.org/10.1016/j.cell.2020.04.045.
Shan F, Gao Y, Wang J, Shi W, Shi N, Han M, Xue Z, Shen D, Shi Y. Abnormal lung quantification in chest CT images of COVID19 patients with deep learning and its application to severity prediction. Med Phys. 2021;48(4):1633–45. https://doi.org/10.1002/mp.14609
Fan DP, Zhou T, Ji GP, Zhou Y, Chen G, Fu H, Shen J, Shao L. InfNet: automatic COVID19 lung infection segmentation from CT images. IEEE Trans Med Imaging. 2020;39(8):262637. https://doi.org/10.1109/TMI.2020.2996645
Ronneberger O, Fischer P, Brox T. UNet: convolutional networks for biomedical image segmentation. In: International conference on medical image computing and computerassisted intervention (MICCAI). Springer; 2015. p. 234–41.
Badrinarayanan V, Kendall A, Cipolla R. SegNet: a deep convolutional encoderdecoder architecture for image segmentation. IEEE Trans Pattern Anal Mach Intell. 2017;39(12):2481–95. https://doi.org/10.1109/TPAMI.2016.2644615.
Sandler M, Howard A, Zhu M, Zhmoginov A, Chen LC. MobileNetV2: inverted residuals and linear bottlenecks. In: Proceedings of the IEEE conference on computer vision and pattern recognition (CVPR). 2018. p. 451020
Chen LC, Zhu Y, Papandreou G, Schroff F, Adam H. Encoderdecoder with atrous separable convolution for semantic image segmentation. In: Proceedings of the European conference on computer vision (ECCV). 2018. p. 801–18
Chen X, Yao L, Zhang Y. Residual attention unet for automated multiclass segmentation of COVID19 chest CT images. arXiv preprint. arXiv:2004.05645. 2020.
AlDmour H, AlAni A. MR brain image segmentation based on unsupervised and semisupervised fuzzy clustering methods. In: International conference on digital image computing: techniques and applications (DICTA). IEEE; 2016. p. 631–7.
Mondal AK, Agarwal A, Dolz J, Desrosiers C. Revisiting cyclegan for semisupervised segmentation. arXiv preprint. arXiv:1908.11569. 2019.
Bai W, Oktay O, Sinclair M, Suzuki H, Rajchl M, Tarroni G, Glocker B, King A, Matthews PM, Rueckert D. Semisupervised learning for networkbased cardiac MR image segmentation. In: International conference on medical image computing and computerassisted intervention (MICCAI). Springer; 2017. p. 253–60.
Radosavovic I, Dollár P, Girshick R, Gkioxari G, He K. Data distillation: towards omnisupervised learning. In: Proceedings of the IEEE conference on computer vision and pattern recognition (CVPR). 2018. p. 4119–28.
Zhang Y, Yang L, Chen J, Fredericksen M, Hughes DP, Chen DZ. Deep adversarial networks for biomedical image segmentation utilizing unannotated images. In: International Conference on Medical Image Computing and ComputerAssisted Intervention (MICCAI). Springer; 2017. p. 408–16.
Baur C, Albarqouni S, Navab N. Semisupervised deep learning for fully convolutional networks. In: International conference on medical image computing and computerassisted intervention (MICCAI). Springer; 2017. p. 311–19.
Kalluri T, Varma G, Chandraker M, Jawahar C. Universal semisupervised semantic segmentation. In: Proceedings of the IEEE international conference on computer vision (ICCV). 2019. p. 5259–70.
Mittal S, Tatarchenko M, Brox T. Semisupervised semantic segmentation with highand lowlevel consistency. IEEE Trans Pattern Anal Mach Intell. 2019;43:1369–79.
Peng J, Estrada G, Pedersoli M, Desrosiers C. Deep cotraining for semisupervised image segmentation. Pattern Recognit. 2020;107: 107269.
Hung WC, Tsai YH, Liou YT, Lin YY, Yang MH. Adversarial learning for semi-supervised semantic segmentation. In: 29th British machine vision conference (BMVC). 2018. p. 65:1–65:12.
Souly N, Spampinato C, Shah M. Semi-supervised semantic segmentation using generative adversarial network. In: Proceedings of the IEEE international conference on computer vision (ICCV). 2017. p. 5688–96.
Lee DH. Pseudo-label: the simple and efficient semi-supervised learning method for deep neural networks. In: ICML 2013 workshop: challenges in representation learning (WREPL). 2013.
Fung DLX, Liu Q, Zammit J, Leung CKS, Hu P. Self-supervised deep learning model for COVID-19 lung CT image segmentation highlighting putative causal relationship among age, underlying disease and COVID-19. J Transl Med. 2021;19(1):318. https://doi.org/10.1186/s12967-021-02992-2.
Kingma DP, Mohamed S, Jimenez Rezende D, Welling M. Semi-supervised learning with deep generative models. In: Ghahramani Z, Welling M, Cortes C, Lawrence N, Weinberger KQ, editors. Advances in neural information processing systems (NIPS), vol. 27. Red Hook: Curran Associates Inc.; 2014. p. 3581–9.
Maaløe L, Sønderby CK, Sønderby SK, Winther O. Auxiliary deep generative models. In: Balcan MF, Weinberger KQ, editors. Proceedings of the 33rd international conference on machine learning. Proceedings of machine learning research, vol. 48. New York: PMLR; 2016. p. 1445–53.
Kingma DP, Welling M. Auto-encoding variational Bayes. In: 2nd international conference on learning representations (ICLR 2014), conference track proceedings. arXiv:1312.6114.
Rezende DJ, Mohamed S, Wierstra D. Stochastic backpropagation and approximate inference in deep generative models. In: Xing EP, Jebara T, editors. Proceedings of machine learning research, vol. 32. Beijing: PMLR; 2014. p. 1278–86. http://proceedings.mlr.press/v32/rezende14.html.
Sønderby CK, Raiko T, Maaløe L, Sønderby SK, Winther O. Ladder variational autoencoders. In: Advances in neural information processing systems (NIPS). 2016. p. 3738–46.
Simonyan K, Zisserman A. Very deep convolutional networks for large-scale image recognition. In: 3rd International conference on learning representations, ICLR 2015—conference track proceedings. https://doi.org/10.48550/arxiv.1409.1556.
Sandler M, Howard A, Zhu M, Zhmoginov A, Chen LC. MobileNetV2: inverted residuals and linear bottlenecks. In: Proceedings of the IEEE conference on computer vision and pattern recognition (CVPR). 2018. p. 4510–20.
Ioffe S, Szegedy C. Batch normalization: accelerating deep network training by reducing internal covariate shift. In: Bach F, Blei D, editors. Proceedings of the 32nd international conference on machine learning. Proceedings of machine learning research, vol. 37. Lille: PMLR; 2015. p. 448–56. http://proceedings.mlr.press/v37/ioffe15.html.
Nair V, Hinton GE. Rectified linear units improve restricted Boltzmann machines. In: Proceedings of the 27th international conference on international conference on machine learning. ICML'10. Madison: Omnipress; 2010. p. 807–14.
Kingma DP, Ba JL. Adam: a method for stochastic optimization. In: 3rd International conference on learning representations, ICLR 2015—conference track proceedings. arXiv:1412.6980.
Acknowledgements
We greatly thank Dr. Guangyu Wang for creating, and making publicly available, the high-quality CC-CCII dataset.
About this supplement
This article has been published as part of BMC Bioinformatics Volume 23 Supplement 7, 2022: Selected articles from the 20th Asia Pacific Bioinformatics Conference (APBC 2022): bioinformatics. The full contents of the supplement are available online at https://bmcbioinformatics.biomedcentral.com/articles/supplements/volume-23-supplement-7.
Funding
Not applicable.
Author information
Authors and Affiliations
Contributions
JZ: designed and implemented the algorithm and drafted the manuscript; DLXF and QL: performed data analysis and participated in algorithm design; CKSL and PH: supervised the project and revised the manuscript. All authors read and approved the manuscript.
Corresponding author
Ethics declarations
Ethics approval and consent to participate
Not applicable.
Consent for publication
Not applicable.
Competing interests
The authors declare that they have no competing interests.
Additional information
Publisher's Note
Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.
Supplementary Information
Additional file 1
: Table S1. The chosen hyperparameters used to train StitchNet. Table S2. Quantitative results for ground-glass opacity (GGO), consolidation (CON), background, and the overall average on the validation dataset. Table S3. Quantitative results for ground-glass opacity (GGO), consolidation (CON), background, and the overall average on the training dataset.
Rights and permissions
Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/. The Creative Commons Public Domain Dedication waiver (http://creativecommons.org/publicdomain/zero/1.0/) applies to the data made available in this article, unless otherwise stated in a credit line to the data.
About this article
Cite this article
Zammit, J., Fung, D.L.X., Liu, Q. et al. Semi-supervised COVID-19 CT image segmentation using deep generative models. BMC Bioinformatics 23 (Suppl 7), 343 (2022). https://doi.org/10.1186/s12859-022-04878-6
Received:
Accepted:
Published:
DOI: https://doi.org/10.1186/s12859-022-04878-6