
Region-based progressive localization of cell nuclei in microscopic images with data adaptive modeling

Abstract

Background

Segmenting cell nuclei in microscopic images has become one of the most important routines in modern biological applications. With the vast amount of data, automatic localization, i.e. detection and segmentation, of cell nuclei is highly desirable compared to time-consuming manual processes. However, automated segmentation is challenging due to large intensity inhomogeneities in the cell nuclei and the background.

Results

We present a new method for automated progressive localization of cell nuclei using data-adaptive models that can better handle the inhomogeneity problem. We perform localization in a three-stage approach: first identify all interest regions with contrast-enhanced salient region detection, then process the clusters to identify true cell nuclei with probability estimation via feature-distance profiles of reference regions, and finally refine the contours of detected regions with a regional contrast-based graphical model. The proposed region-based progressive localization (RPL) method is evaluated on three different datasets: the first two contain grayscale images, and the third comprises color images with cytoplasm in addition to cell nuclei. We demonstrate performance improvement over the state-of-the-art. For example, compared to the second best approach, on the first dataset our method achieves reductions of 2.8 in Hausdorff distance and 3.7 in false negatives; on the second dataset, which has larger intensity inhomogeneity, our method achieves a 5% increase in Dice coefficient and Rand index; on the third dataset, our method achieves a 4% increase in object-level accuracy.

Conclusions

To tackle the intensity inhomogeneities in cell nuclei and background, a region-based progressive localization method is proposed for cell nuclei localization in fluorescence microscopy images. The RPL method is demonstrated to be highly effective on three different public datasets, with on average 3.5% and 7% improvements in region- and contour-based segmentation performance over the state-of-the-art.

Background

Microscopic image analysis is becoming an enabling technology for modern systems-biology research, and cell nucleus segmentation is often the first step in the pipeline. Despite recent advances, the segmentation performance remains unsatisfactory in many cases. For example, on the popular public databases [1], the state-of-the-art segmentation accuracies are just around 85%.

The challenges of automated cell nucleus segmentation mainly arise from two imaging artifacts, as shown in Figure 1. First, the cell nuclei regions are inhomogeneous: the pixels of a cell nucleus exhibit non-uniform intensities, and different cell nuclei also display varying patterns. Second, the background is also inhomogeneous, and certain regions might have very similar appearance to the cell nuclei. These problems imply that: (1) precise delineation of boundaries between cell nuclei and the background is difficult; (2) some background areas could be mistaken for cell nuclei; and (3) certain cell nuclei could be missed. The segmentation problem can thus be characterized as a localization issue that includes both object detection and pixel-wise segmentation.

Figure 1

An example image (left) and corresponding segmentation ground truth (right) from dataset [1]. It can be seen that besides the pixel-wise inhomogeneity within a cell nucleus, some cell nuclei exhibit much lower intensities than the others; and although the background looks generally dark, it is indeed highly inhomogeneous, with some fairly bright areas and also a few noisy regions displaying very high intensities.

Related work

Numerous studies have addressed the segmentation of various structures in cell images [2, 3], and unsupervised approaches appear to dominate. For example, morphological methods based on thresholding, k-means clustering or watershed [4-8] can be quite effective, as long as the objects exhibit good contrast with the background. Watershed methods are also effective in separating touching cells, although the results might deviate slightly from the actual contours. A popular trend in unsupervised segmentation is energy-based deformable models, based on active contours [9] or level sets [10-15]. Compared with modeling contours explicitly, level sets have the advantage of being non-parametric and free from topology constraints. It is also relatively easy to incorporate continuous object-level regularization into level sets, such as shape priors. Another type of energy-based model is based on graph search [16, 17], graph cuts [18, 19] or normalized cuts [20]. Such methods attempt to derive the segmentation with global constraints, using well-defined graphical structures to represent the spatial relationships between regions. Many of these methods require good initial seeds or contours. However, the usual initialization techniques, such as thresholding and watershed, would not handle images with high inhomogeneities well, hence causing extra or missing detection of cell regions. Such detection errors during initialization could propagate into the final segmentation outputs.

It has been shown that intensity inhomogeneities can be tackled by integrating a convex Bayesian functional with the Chan-Vese model [14], and by discrete region competition [15] based on the piecewise-smooth Mumford-Shah model [21]. However, without performing cell detection explicitly, the deformable models might become very complicated in order to filter background regions with cell-like features while keeping cell regions with background-like features. To detect cells from an inhomogeneous background, one way is to reconstruct the ideal image [22, 23], which, however, requires specific imaging modeling. Reconstruction can also be built into active contours with constrained iterative deconvolution without explicitly computing the inverse problem [24, 25]; however, this requires the point-spread function of the microscope, which is measured or modeled. Another way is to enhance the objects using the h-dome transformation [26]; however, it might have difficulties with inhomogeneous foreground. The inhomogeneity can also be reduced with reference-based intensity normalization [27]; however, image-level normalization would not handle intra-image variation well. In addition, shape-based nucleus detection has been proposed, with the Laplacian of Gaussian (LoG) [28] or the sliding band filter (SBF) [29]. While the latter method is less sensitive to low contrast and better represents irregular shapes, its detection accuracy partially relies on validation from the corresponding cytoplasm image, which is not always available.

Different from the unsupervised approaches, classification-based methods have also been proposed to incorporate prior information from labeled images. These classifiers include Bayesian [30], k-nearest neighbor (kNN) [31], support vector machine (SVM) [32], and atlas-based approaches [33]. Since the apparent difference between the foreground and background is their intensities, simple intensity-based features, such as histograms [30], have been widely used. On the other hand, the effectiveness of classification-based methods depends highly on the separation of feature spaces between foreground and background. Therefore, approaches based on a bag of local classifiers [30], and more complex features such as the local Fourier transform (LFT) [31], spatial information [33], and combination of appearance, shape and context features [32], have been proposed.

While most such supervised approaches describe pixel- or region-level features, there are methods that tackle intensity inhomogeneity by explicitly modeling the inter-cell variations as more structural features. One way is to perform color standardization within pixelwise classification [34] to account for inter-image intensity variations. To also address intra-image variations, contrast information between an image region and the global foreground and local interest regions can be computed [35]. A similar approach is to estimate foreground probabilities based on intensity distributions derived from global images and local detection outputs [36]. While both approaches introduce cell-adaptive features, their methods for global feature representation and local region detection might not work well when features of the foreground and the background overlap heavily. An additional false positive reduction step has also been proposed to remove bright background regions that are misidentified as cell nuclei [37]; however, this approach requires a learned classifier, whose performance could be affected by inter-image feature variation. A registration-based approach has also been studied, which creates a template set from training images and segments the testing image based on best matches [38]; such templates, however, might have difficulty capturing large varieties of object shapes and textures.

Our contribution

The contribution of our work is to localize cell nuclei in images with high intensity inhomogeneity using various data-adaptive modeling techniques in a progressive manner. Specifically, we design a three-stage cell nucleus localization method in which: (1) salient regions representing cell nuclei and cell clusters are extracted with image-adaptive contrast enhancement; (2) the clusters are further processed to identify true cell nuclei based on feature-distance profiles of reference regions with cluster-adaptive probability estimates; and (3) the contours of detected cell nuclei are refined in a graphical model with region-adaptive contrast information. Figure 2 gives an overview of the proposed method.

Figure 2

The high-level flow chart of our proposed region-based progressive localization method. In this example, the cell nuclei and clusters are first extracted during initial segmentation with the contrast-enhanced salient region detector; then missed or falsely detected cell nuclei are further processed in decluster processing, with classification-based candidate identification and probability estimation via distance profile for candidate validation; and better contour delineation is finally achieved with the regional contrast-based graphical model. For easier viewing, a quadrant of the original image is shown here, and similarly for Figures 3 and 4. The meaning of the color coding is described in Figures 3, 4 and 5.

We also design distinctive data-adaptive priors that can be categorized by the level of generalization: (1) global-level features modeled as support vectors from training images; (2) image-level features representing the distribution of varying appearances of the nearby cell nuclei; and (3) region-level features computed at all three stages of the method for interest region detection, candidate validation and contour refinement. Being adaptive to the specific image or interest region, the image- and region-level features are especially effective in accommodating the intensity inhomogeneities.

Compared to localization methods based on global criteria (e.g. thresholding or feature-based classification), our approach is more capable of accommodating (1) intensity variations between cell nuclei (intra- and inter-images) and (2) feature overlapping between cell nuclei and background areas. Compared to the energy-based techniques that target pixel-wise segmentation (e.g. level sets and graph cuts), our method has a stronger focus on cell nuclei detection with explicit modeling of cell-specific characteristics, to effectively filter cell-like background regions and identify obscure cell nuclei.

We suggest that the proposed region-based progressive localization (RPL) method can be potentially extended to other localization problems, if the objects of interest can be modeled as regions with distinct features from the surrounding background. A similar three-stage approach would be used, and the application-specific modifications would mainly focus on the feature design. One example could be tumor localization in functional images.

Methods

Initial segmentation

While cell nuclei might appear similar to the background, there is always some degree of contrast between them. Such an observation motivates us to localize the cell nuclei by extracting salient regions. During initial segmentation, we do not have strict requirements about the extracted regions. In particular, if multiple cell nuclei are tightly connected, or cell nuclei are surrounded by high-intensity background and difficult to differentiate, identifying them as a single region is acceptable. We design a contrast-enhanced salient region detector for initial segmentation.

Specifically, an iterative approach is developed based on the maximally stable extremal region (MSER) method [39]. Since MSER does not require any initial contour and the region stability is constrained by local regional information, it is easy to use and able to accommodate large intra-image variations. However, the effectiveness of MSER depends highly on the intensity contrast between the foreground and background. If the contrast is low, some regions would not be detected (e.g. Figure 3c). It is intuitive to add contrast enhancement; however, basic approaches such as intensity stretching would not work due to the large intensity span. Instead, we design an iterative approach alternating between the following two steps. First, interest regions {R} are detected using MSER, as shown in Figure 3c. Second, based on the detection result, the image is enhanced (Figure 3d) by:

$$ I := \frac{I}{0.5\,(\{R\}_0 + \{R\}_2)} \cdot C \qquad (1) $$
Figure 3

Illustration of initial segmentation. (a) The original image. (b) The segmentation ground truth. (c) The interest regions detected without iterative contrast enhancement, and darker blue denotes upper-level regions. (d) The image after iterative enhancement. (e) The final interest regions detected. After the initial segmentation, decluster processing is performed, with outputs shown in (f) and dark gray indicating the detected cell nuclei.

where {R}_0 and {R}_2 denote the minimum and mean intensities of the detected interest regions in I, and C is a scaling constant. The normalization factor 0.5({R}_0 + {R}_2) is chosen because: (1) it should normally be smaller than C, so that all pixels in I are scaled up and the contrast between pixels increases proportionally; and (2) it should not be so small that the image becomes distorted from the original patterns, with intensities capped at 255 for grayscale images. The iteration stops when the number of regions detected does not change any further. With such a contrast-enhanced approach, better region detection output can be seen in Figure 3e. The resultant regions are either single-level, or form a hierarchy of lower- and upper-level regions.

It is also observed that during each iteration, the parameter MaxVariation in MSER (using the VLFeat library [40]) needs to vary for individual images to better accommodate the inter-image variations. Therefore, the parameter value is determined at runtime by first setting MaxVariation to v1 and then gradually reducing it by a step Δv until it reaches v2 or the number of region levels is larger than one. Furthermore, while the resultant single- and upper-level regions are mostly cell nuclei, occasionally under-segmentation happens. In other words, a single cell nucleus could be divided into two nested regions, and the upper-level region would become an under-segmented portion of the cell nucleus. To reduce such under-segmentation, we find that if the combined area of two nested regions is roughly elliptical with a suitable size, they can be merged into a single region. The shape and size criteria are determined using a linear-kernel binary SVM obtained from the training data. The overall process of initial segmentation is listed in Algorithm 1.

Algorithm 1: Initial segmentation
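
For illustration, the sketch below gives a minimal version of this loop, assuming OpenCV's MSER as a stand-in for the VLFeat implementation used in the paper; the runtime MaxVariation schedule and the SVM-based merging of nested regions are omitted, and the loop bound is only a safeguard.

```python
import cv2
import numpy as np

def iterative_salient_regions(image, C=128, max_iters=10):
    """Sketch of contrast-enhanced salient region detection (Eq. 1),
    assuming OpenCV's MSER as a stand-in for VLFeat; the MaxVariation
    schedule and SVM-based merging of nested regions are omitted."""
    I = image.astype(np.float64)
    regions, prev_count = [], -1
    for _ in range(max_iters):
        mser = cv2.MSER_create()
        regions, _ = mser.detectRegions(np.clip(I, 0, 255).astype(np.uint8))
        if len(regions) == prev_count or not regions:
            break  # stop when the number of regions no longer changes
        prev_count = len(regions)
        # Pool intensities of all detected interest regions {R}
        vals = np.concatenate([I[pts[:, 1], pts[:, 0]] for pts in regions])
        r_min, r_mean = vals.min(), vals.mean()   # {R}_0 and {R}_2
        I = I / (0.5 * (r_min + r_mean)) * C      # Eq. (1)
    return regions, np.clip(I, 0, 255).astype(np.uint8)
```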

Decluster processing

As seen in Figure 3e, the detected single- and upper-level regions usually represent the cell nuclei, and the lower-level regions usually represent background with elevated intensities. However, the upper-level regions could contain false positives caused by bright background, and the lower-level regions could include undetected cell nuclei. It is also observed that such incorrect detections are mainly present among the two-level nested regions (i.e. clusters), while the single-level regions are normally true cell nuclei. Therefore, in the second stage, we focus on further processing of the detected clusters, with two objectives. First, we expect to identify any cell nucleus that has not been detected after the initial segmentation. Such cell nuclei typically exhibit similar intensities to the surrounding background, and hence would not be highlighted as salient regions. Second, we need to filter out high-intensity background regions, which usually have rounded or irregular shapes and could easily be confused with cell nuclei. A two-step approach is designed, with candidate identification followed by candidate validation. An example output is shown in Figure 3f.

Formally, let U = {u_i : i = 1, ..., N_U} be a detected cluster with N_U pixels u_i. Define the set of labels {F, B} representing the foreground (i.e. cell nuclei) and background respectively, and a foreground region as a connected component G_x ⊂ U with ∀u_i ∈ G_x : l_i = F. The problem is to label each pixel u_i ∈ U as l_i ∈ {F, B}, with the object-level constraint that any detected foreground region G_x should have suitable characteristics as a cell nucleus.

Candidate identification

In the first step, we try to identify a set of non-overlapping candidate foreground regions {G_x} from each cluster U by labeling each pixel u_i as foreground or background. We specify that any upper-level region enclosed in a cluster U is a candidate region G_x. To identify more candidates from the cluster U itself, we observe that texture features in a local patch are more discriminative than pixel intensities for differentiating between F and B pixels. For example, compared with cell nuclei, the background usually has more homogeneous texture, which might be dark or bright. In this work, we choose the scale-invariant feature transform (SIFT) descriptor [41], which describes the gradient distribution within a local patch and is invariant to scale, translation and rotation. The SIFT descriptor of each pixel u_i is computed and then labeled using a binary SVM. The SVM kernel is polynomial, with the other default settings in LIBSVM [42]. A connected component of F pixels is identified as a candidate region G_x.

While most of the candidate regions are true cell nuclei, some are actually bright background areas with round shapes (e.g. the first example in Figure 4). To filter out the false detections, we pass them to the candidate validation stage.
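
As a rough illustration of the identification step, the sketch below labels cluster pixels from their SIFT descriptors; it assumes OpenCV for SIFT and scikit-learn's SVC (a LIBSVM wrapper) with a polynomial kernel, and all names and parameters are illustrative.

```python
import cv2
import numpy as np
from sklearn.svm import SVC

def label_cluster_pixels(image, cluster_pixels, clf, patch_size=16.0):
    """Sketch of candidate identification: label each cluster pixel as
    F or B from the SIFT descriptor of its local patch. Assumes OpenCV
    for SIFT and a polynomial-kernel SVC trained beforehand."""
    sift = cv2.SIFT_create()
    # One keypoint per cluster pixel: (row, col) -> KeyPoint(x, y, size)
    kps = [cv2.KeyPoint(float(c), float(r), patch_size)
           for r, c in cluster_pixels]
    _, desc = sift.compute(image, kps)
    return clf.predict(desc)  # e.g. 1 = foreground (F), 0 = background (B)

# Hypothetical usage, with descriptors/labels from training images:
# clf = SVC(kernel='poly').fit(train_descriptors, train_labels)
# labels = label_cluster_pixels(gray, np.argwhere(cluster_mask), clf)
```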

Figure 4

Illustration of decluster processing. (a) The example image (after iterative enhancement). (b) Newly identified candidate regions are shown in yellow, with purple indicating the ones detected during initial segmentation, and both gray and purple denoting the reference regions; here, to illustrate the probability inference, the light blue circle highlights one reference G_k, and the pink and orange circles indicate two candidate regions G_x. (c) The validated candidates shown in yellow. (d) The KDE plot generated for G_k, in which the pink and orange lines represent p̂_0(δ_{k,x}) for the pink and orange circled candidates. (e) The probabilities Q(G_x, F) derived for all candidate regions, with P_1 and Y_4 corresponding to the orange and pink circled candidate regions, the red vertical line separating the two clusters, and the green horizontal lines indicating the thresholds for l(G_x) = F. (f)-(h): A second example with the same annotations, showing that unlike the first example, the real cell nuclei here are bright while the filtered candidate region is darker.

Candidate validation

In the second step, we validate whether the identified candidate region G_x in image I is a cell nucleus. Two reasons motivate this step. First, there might be misclassification during candidate identification due to inter-image intensity variations (e.g. the different appearances of the cell nuclei between the two examples in Figure 4). The labeling performance can be improved based on reference information gathered from the testing image itself. Second, pixel-level labeling based on SIFT features has limited spatial information and often does not represent the overall region G_x. We design a probability estimation via distance profile method to derive the probability Q(G_x, F) of G_x being a valid cell nucleus based on the feature-distance profiles of other reference cell nuclei, as detailed below.

Probability inference

Although cell nuclei in an image could have varying characteristics, we expect that G_x, if representing a true cell nucleus, should have similar features to the other cell nuclei in the same image, especially those spatially adjacent to G_x, as can be seen from the examples in Figure 4. Therefore, if we have a set of determined cell nuclei in I, we can use them as references to validate G_x. To cope with inter-image variations, we only select references from the image I in which G_x resides. This means we cannot use the ground truths for reference construction. Instead, we use the single- and upper-level regions detected during initial segmentation as references.

We use these references by first creating a distance profile per reference, then computing the probability of G_x being a cell nucleus based on its feature distance to each reference. Specifically, assume that within an area near G_x there are K reference regions G = {G_k : k = 1, ..., K}. Here near is defined as both G_x and G_k being in the same quadrant of image I. Let f_x describe the region-level feature of G_x, and δ(f_k, f_x) the feature distance between G_x and G_k (details of f and δ in the next two subsections). Intuitively, the more similar G_x and G_k are, the more likely G_x is a cell nucleus. However, since G_k may be a false positive detection, a decision based on the direct feature distance δ(f_k, f_x) might be error prone. Therefore, we devise an alternative hypothesis: if δ(f_k, f_x) is comparable with {δ(f_k, f_{k'}) : k' = 1, ..., K, k' ≠ k}, then G_x is likely a cell nucleus.

To measure whether δ(f_k, f_x) is comparable with {∀k' : δ(f_k, f_{k'})}, we use non-parametric kernel density estimation (KDE):

$$ \hat{p}_0(\delta_{k,x}) = \frac{1}{K-1} \sum_{k' \neq k} \frac{1}{h_k} K\!\left(\frac{\delta_{k,x} - \delta_{k,k'}}{h_k}\right) \qquad (2) $$

where δ_{k,x} is short for δ(f_k, f_x), K(·) is the Gaussian kernel, and h_k is the bandwidth approximated assuming a normal distribution of the data samples {∀k' : δ(f_k, f_{k'})}. The density value p̂_0(δ_{k,x}) is then normalized by the maximum density of the distribution to obtain the comparability measure as a probability p̂(δ_{k,x}) ∈ [0, 1]:

$$ \hat{p}(\delta_{k,x}) = \hat{p}_0(\delta_{k,x}) \,/\, \max_{k'} \{\hat{p}_0(\delta_{k,k'})\} \qquad (3) $$

With this model, p̂(δ_{k,x}) is larger when δ_{k,x} approaches the Gaussian mean of the samples, which means that G_x is more likely a cell nucleus if the distance between G_x and G_k is similar to how the other references {G_{k'}} vary from G_k.

Next, by combining the estimates p̂(δ_{k,x}) from all references {G_k}, the final probability of G_x being a cell nucleus is derived:

$$ Q(G_x, F) = \frac{1}{K} \sum_k \hat{p}(\delta_{k,x}) \qquad (4) $$

The averaging operation helps to ensure that a single reference G_k with very different features from G_x does not significantly affect the overall probability Q(G_x, F).

Then, based on Q(G_x, F), we define a thresholding rule to determine whether G_x is a valid cell nucleus:

$$ l(G_x) = F, \quad \text{if } \begin{cases} Q(G_x, F) > \alpha_1 \max_{x'} Q(G_{x'}, F), & \\ Q(G_x, F) > \alpha_2, & \text{for } \{G_{x'}\} = \emptyset \end{cases} \qquad (5) $$

where G_{x'} denotes the other candidate regions within the same cluster U as G_x, with x' ≠ x; α1 and α2 are predefined thresholds. Examples of the density computation and probability derivation are shown in Figure 4, and the overall process of candidate validation is listed in Algorithm 2.

Algorithm 2: Candidate validation
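
As a companion to Algorithm 2, the sketch below illustrates the probability inference of Eqs. (2)-(4). The Silverman bandwidth rule and the sample-based approximation of the maximum density in Eq. (3) are assumptions, not the authors' exact implementation.

```python
import numpy as np

def nucleus_probability(f_x, ref_feats, dist):
    """Sketch of Eqs. (2)-(4): probability Q(G_x, F) that candidate G_x
    is a cell nucleus, given features of K reference regions. `dist` is
    any feature distance (the paper uses diffusion distance)."""
    K = len(ref_feats)
    probs = []
    for k in range(K):
        # Distance profile of reference k against the other references
        d_ref = np.array([dist(ref_feats[k], ref_feats[j])
                          for j in range(K) if j != k])
        h = 1.06 * d_ref.std() * len(d_ref) ** (-0.2) + 1e-12

        def kde(t):  # Gaussian KDE of the profile, Eq. (2)
            return np.mean(np.exp(-0.5 * ((t - d_ref) / h) ** 2)
                           / (h * np.sqrt(2 * np.pi)))

        p0_x = kde(dist(ref_feats[k], f_x))
        p0_max = max(kde(t) for t in d_ref)     # approximate peak density
        probs.append(min(p0_x / p0_max, 1.0))   # Eq. (3)
    return float(np.mean(probs))                # Eq. (4)
```
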
Appearance feature

We observe that a region tends to comprise patches of similar textures and repetitive patterns. Therefore, we represent G_x with a bag-of-features. First, the image I that contains G_x is divided into a grid of patches {P}. Then for each patch, we represent its texture by the minimum, maximum and mean intensity, the standard deviation, and a histogram of intensity differences between each pair of pixels. Each patch-wise feature is then assigned a feature word. A histogram summarizing the occurrence frequencies of such feature words in G_x is defined as f_x. Each feature vector is normalized by the size of G_x, so that it represents the percentages of various intensities and feature words in G_x.

Note that if G_x is a newly identified candidate during decluster processing, G_x might represent only a small under-segmented portion of the actual cell nucleus due to the pixel-level labeling. Therefore, to obtain a good summary of the actual candidate feature, we first estimate an elliptical region G_x^o as the minimum volume ellipsoid covering G_x [43]. To avoid including many background pixels in G_x^o, we ensure G_x^o is part of the cluster U in which G_x is detected: G_x^o := G_x^o ∩ U. G_x^o is then used in place of G_x as the detected candidate, from which f_x is computed. The refined elliptical regions are shown in Figure 4c.
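
A rough sketch of the appearance feature computation follows; the k-means codebook and the subsampling of pixel pairs within a patch are assumptions, since the paper does not specify how feature words are formed or how pairs are enumerated.

```python
import numpy as np
from sklearn.cluster import KMeans

def patch_feature(patch, bins=64, pairs=200, seed=0):
    """Per-patch texture feature: min, max, mean, std, plus a histogram
    of intensity differences between pixel pairs (subsampled here)."""
    p = patch.ravel().astype(np.float64)
    idx = np.random.default_rng(seed).choice(len(p), min(len(p), pairs),
                                             replace=False)
    diffs = np.abs(p[idx, None] - p[None, idx]).ravel()
    hist, _ = np.histogram(diffs, bins=bins, range=(0, 255))
    return np.concatenate([[p.min(), p.max(), p.mean(), p.std()], hist])

def region_feature(image, region_mask, codebook, patch=8, words=12):
    """Bag-of-features f_x: occurrence frequencies of feature words over
    the patches covering the region, normalized by the region size."""
    counts = np.zeros(words)
    h, w = image.shape
    for r in range(0, h - patch + 1, patch):
        for c in range(0, w - patch + 1, patch):
            if region_mask[r:r + patch, c:c + patch].any():
                fv = patch_feature(image[r:r + patch, c:c + patch])
                counts[codebook.predict(fv[None, :])[0]] += 1
    return counts / max(region_mask.sum(), 1)

# Hypothetical codebook: k-means over patch features from training images
# codebook = KMeans(n_clusters=12, n_init=10).fit(training_patch_features)
```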

Appearance distance

To compute the distances δ(f_k, f_x) between two histogram features, the diffusion distance [44] is used. The diffusion distance models the distance between histogram-based descriptors as a heat diffusion process on a temperature field. Compared to bin-to-bin histogram distances such as the Euclidean distance, the diffusion distance is able to measure cross-bin distances, avoiding explicit computation of histogram alignment. While the earth mover's distance (EMD) [45] has similar advantages, the computation of the diffusion distance is much faster, with only O(H) complexity, where H is the number of histogram bins.
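
For 1-D histograms, the diffusion distance can be sketched as follows, following the pyramid formulation of [44]; sigma is illustrative.

```python
import numpy as np
from scipy.ndimage import gaussian_filter1d

def diffusion_distance(h1, h2, sigma=1.0):
    """Sketch of the diffusion distance [44] for 1-D histograms: the
    accumulated L1 norm of the histogram difference over a Gaussian
    smoothing-and-downsampling pyramid."""
    d = np.asarray(h1, dtype=np.float64) - np.asarray(h2, dtype=np.float64)
    total = np.abs(d).sum()
    while len(d) > 1:
        d = gaussian_filter1d(d, sigma)[::2]  # diffuse, then downsample
        total += np.abs(d).sum()
    return total
```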

Contour refinement

At this stage, a detected region could contain a single or multiple cell nuclei, which could be under-segmented or include extra background. We thus expect to achieve better contour delineation of cell nuclei. Our idea is that, while the foreground and background are often inhomogeneous, there is always relatively good contrast between them in a local area. Therefore, by performing contour refinement for each detected cell region G individually, the foreground and background can be better differentiated by analyzing the localized contrast information. We employ a regional contrast-based graphical model for the contour refinement.

Specifically, a conditional random field (CRF) [46] with the following energy function is designed:

$$ E(L \mid \bar{G}) = \sum_i \eta(l_i) + \eta(l_G) + 0.5 \left\{ \sum_i \varphi(l_i, l_G) + \sum_{i,i'} \phi(l_i, l_{i'}) \right\} \qquad (6) $$

where Ḡ denotes the detected region G plus its surrounding area of a fixed width (half of the short axis of G) (Figure 5b), and L denotes the labeling vector of all pixels in Ḡ. The model attempts to refine the contour of G by relabeling each pixel u_i ∈ Ḡ as l_i ∈ {F, B}. Here η(l_i) is the unary contrast-based intensity term; η(l_G), combined with φ(·), is the contrast-based detection term, with l_G representing the detected region G; and ϕ(·) is the spatial term associating neighboring pixels u_i and u_{i'}. The constant 0.5 is set to obtain equal contributions from the unary costs (∑_i η(l_i) + η(l_G)) and the combined pairwise costs (∑_i φ(l_i, l_G) + ∑_{i,i'} ϕ(l_i, l_{i'})). The graph cut method [47] is used to derive the most probable labeling L that minimizes the energy function, producing the final segmentation of cell nuclei from Ḡ. Our customized definition of the intensity term and the inclusion of the detection term are the main distinctions from other CRF constructs [35, 48, 49].

Figure 5

Illustration of contour refinement. (a) Segmentation output after the decluster processing, shown with yellow contours. (b) Ḡ indicated with yellow contours. (c) Visualization of the graphical model, with blue nodes representing l_i and the green node l_G, and the blue and green edges denoting the pairwise relationships. (d) Results of contour refinement with orange contours.

The contrast-based intensity term η(l_i) describes the unary costs of pixel u_i labeled as l_i ∈ {F, B}. The costs of l_i = F and l_i = B represent the inverse probabilities, and the probability pr(u_i, F) of l_i = F is computed by:

$$ pr(u_i, F) = \left(1 + \exp(-2(f_i - \lambda_G))\right)^{-1} \qquad (7) $$
$$ f_i = I_i / I_G \qquad (8) $$
$$ \lambda_G = \bar{f}_G - \gamma_G (\bar{f}_G - \underline{f}_G) \qquad (9) $$

where I_G denotes the mean intensity of G, and pr(u_i, F) follows a sigmoid probability distribution based on the contrast feature f_i. We expect pixels with f_i > λ_G to more likely represent the foreground. λ_G is computed from the mean and minimum of all feature values {f_i : u_i ∈ G}, denoted \bar{f}_G and \underline{f}_G in Eq. (9), and is adjusted by γ_G to balance foreground and background partitioning in G. The parameter γ_G is calculated at runtime by gradually increasing it from γ1 to γ2 with a step Δγ, and choosing the smallest γ_G ∈ [γ1, γ2] that does not cause the entire G to be labeled as B. With pr(u_i, B) = 1 - pr(u_i, F), the cost values for both labels are:

$$ \eta(l_i) = 1 - pr(u_i, l_i) \qquad (10) $$

Note that since λ_G would be close to \bar{f}_G in most cases with small γ_G, portions of G would have pr(u_i, F) < 0.5 (i.e. u_i labeled as background), resulting in possible under-segmentation. It is, however, not advisable to lower λ_G, due to the considerable overlap between low-intensity areas in G and the background. Therefore, we introduce a second, contrast-based detection term to encourage labeling of l_i = F. An auxiliary node l_G is first added to the graph with the following unary costs:

$$ \eta(l_G) = \begin{cases} 0 & \text{if } l_G = F \\ N_{\bar{G}} & \text{otherwise} \end{cases} \qquad (11) $$

where N_Ḡ is the number of pixels in Ḡ, and such a large cost for l_G = B ensures that l_G is assigned F. Then for each pixel u_i, a pairwise cost φ(l_i, l_G) is computed based on the contrast ν(I_i, I_G) between I_i and the mean intensity of G:

$$ \varphi(l_i, l_G) = \delta(l_i - l_G) \cdot \nu(I_i, I_G) \qquad (12) $$

with δ(l_i - l_G) = 1 if l_i ≠ l_G and 0 otherwise, and ν(I_i, I_G) = 1 if I_i > I_G, or otherwise:

$$ \nu(I_i, I_G) = \exp\!\left(-\frac{\|I_i - I_G\|^2}{2\,\langle \|I_i - I_G\|^2 \rangle}\right) \qquad (13) $$

where ⟨·⟩ denotes the average of all such pairwise Euclidean distances in Ḡ. In this way, pixels with pr(u_i, F) ≈ 0.5 can be better labeled with the additional cost factor, while obvious background pixels still obtain the correct B label, since φ(l_i, l_G) is much lower than η(l_i).

The spatial term ϕ(l_i, l_{i'}) further enhances the delineation by encouraging labeling consistency between neighboring pixels u_i and u_{i'}. A pairwise cost for l_i ≠ l_{i'} is thus defined as:

$$ \phi(l_i, l_{i'}) = \delta(l_i - l_{i'}) \cdot \nu(I_i, I_{i'}) \qquad (14) $$

where δ(·) and ν(·) follow Eq. (12). Such a cost function implies that pixels with more similar intensities are penalized more if they take different labels.
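
A simplified sketch of this energy minimization is given below, assuming the PyMaxflow library for the graph cut. Because the large cost in Eq. (11) effectively fixes l_G = F, the detection term is folded into the unary cost of l_i = B; the spatial term is approximated with a uniform neighborhood weight instead of Eq. (14), and the runtime search for γ_G is omitted.

```python
import numpy as np
import maxflow  # PyMaxflow

def refine_contour(I, G_mask, gamma_G=0.25):
    """Sketch of contour refinement (Eqs. 6-14) on a crop of the image
    around the detected region G (i.e. the area Ḡ); simplified as
    described in the lead-in, not the authors' exact implementation."""
    I = I.astype(np.float64)
    I_G = I[G_mask].mean()
    f = I / I_G                                        # Eq. (8)
    f_mean, f_min = f[G_mask].mean(), f[G_mask].min()
    lam = f_mean - gamma_G * (f_mean - f_min)          # Eq. (9)
    pr_F = 1.0 / (1.0 + np.exp(-2.0 * (f - lam)))      # Eq. (7)

    diff2 = (I - I_G) ** 2
    nu = np.where(I > I_G, 1.0,
                  np.exp(-diff2 / (2.0 * diff2.mean())))  # Eq. (13)

    g = maxflow.Graph[float]()
    nodes = g.add_grid_nodes(I.shape)
    g.add_grid_edges(nodes, 0.5)  # spatial term, uniform approximation
    # Unary costs: a node in the source segment (label F) pays the sink
    # capacity, so sink = cost(F) = 1 - pr_F, and source = cost(B) with
    # the folded detection term 0.5 * nu added.
    g.add_grid_tedges(nodes, pr_F + 0.5 * nu, 1.0 - pr_F)
    g.maxflow()
    return ~g.get_grid_segments(nodes)  # True = foreground (source side)
```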

Materials and evaluation methods

Three publicly available datasets with segmentation ground truth are used in this study. Their main properties are summarized in Table 1. The images in the first two datasets were acquired with nuclear markers, whereas the third dataset also includes the cytoplasm. Detailed information can be found in [1, 50]. Among the three, dataset 1 has higher contrast between cell nuclei and background. Datasets 2 and 3 have large intensity inhomogeneity and a considerable degree of intensity overlap between the cell nuclei and the background. The inclusion of cytoplasm in dataset 3 poses additional challenges. The images in dataset 3 are preprocessed to remove the pink areas and converted to grayscale. Figure 6 shows an example image after the preprocessing.

Table 1 Summary of the datasets used
Figure 6

Example localization results. The top row is from dataset 1, the middle row from dataset 2, and the bottom row from dataset 3. (a) Image with ground truth contours in orange. (b) Results using our proposed RPL method. (c) Results using OT. (d) Results using LS.

Most parameters used in this study are set to the same values for all three datasets: (1) in Initial Segmentation, v1 = 0.7, v2 = 0.4 and Δv = 0.1; (2) in Probability Inference, α1 = 0.6 and α2 = 0.4; (3) in Appearance Feature, the number of histogram bins is 64 and the number of feature words is 12; and (4) in Contour Refinement, γ1 = 0.25, γ2 = 1 and Δγ = 0.25. While these settings are chosen empirically, using a common setting for all three datasets suggests that the method is robust to different image acquisition conditions and that manual parameter tuning can be minimal. There are only two dataset-specific parameters. One is the patch size in Appearance Feature, which is 8×8 pixels for datasets 1 and 2, and 4×4 pixels for dataset 3; the smaller size for dataset 3 is chosen because its cell nuclei are smaller than those in datasets 1 and 2. The other is C in Initial Segmentation, which is set to 128 for datasets 1 and 2, and 64 for dataset 3; this ensures the contrast-enhanced images in dataset 3 do not become too bright and cause distortion.

For dataset 1, four representative images are selected to train two SVM classifiers, for cell-cluster differentiation and candidate identification. While testing is performed on all images to make the results directly comparable with the state-of-the-art [14, 38], we note that the results are not sensitive to the selection of training data: very similar testing results are observed with different training sets. Similar procedures are performed for dataset 2. For dataset 3, in order to have a performance evaluation comparable with [32, 35], half of the images are used for training (images #2, 3, 4, 5, and 7) and the rest for testing.

We evaluate the localization of cell nuclei by two measures. First, performance of object-level detection is evaluated by recall (R), precision (P), and accuracy (A):

$$ R = TP / (TP + FN) \qquad (15) $$
$$ P = TP / (TP + FP) \qquad (16) $$
$$ A = TP / (TP + FN + FP) \qquad (17) $$

where TP, FN, and FP are the numbers of true positive, false negative and false positive detections of cell nuclei. Given a detected region O_d and the ground truth mask O_gt, if the overlap ratio R(O_d) is at least 0.5:

$$ R(O_d) = |O_d \cap O_{gt}| \,/\, |O_d \cup O_{gt}| \qquad (18) $$

then the detection is considered TP [51]; and correspondingly FN and FP are determined.

Second, the segmentation performance is evaluated by both region- and contour-based measures, including Dice, normalized sum of distances (NSD) and Hausdorff distance (HD):

$$ Dice = 2|F \cap M| \,/\, (|F| + |M|) \qquad (19) $$
$$ NSD = \sum_{u_i \in (F \triangle M)} D(u_i) \Big/ \sum_{u_i \in (F \cup M)} D(u_i) \qquad (20) $$
$$ HD = \max_{u_i \in \partial F} D(u_i) \qquad (21) $$

Here F represents the identified foreground pixels, M is the ground truth mask, and D(u_i) is the minimal Euclidean distance of pixel u_i to ∂M of the corresponding reference nucleus, with ∂ indicating the contour.
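
These measures can be computed directly from binary masks; the sketch below approximates D(u_i) with a Euclidean distance transform to the ground-truth contour and uses a simple boundary extraction for ∂F.

```python
import numpy as np
from scipy.ndimage import distance_transform_edt, binary_erosion

def overlap_ratio(O_d, O_gt):
    """Eq. (18): intersection over union of a detected region and the
    ground truth mask; a detection counts as TP when this is >= 0.5."""
    O_d, O_gt = O_d.astype(bool), O_gt.astype(bool)
    return (O_d & O_gt).sum() / (O_d | O_gt).sum()

def segmentation_measures(F, M):
    """Dice, NSD and HD (Eqs. 19-21) for predicted mask F and ground
    truth M; D(u_i) is a distance-transform approximation."""
    F, M = F.astype(bool), M.astype(bool)
    dice = 2.0 * (F & M).sum() / (F.sum() + M.sum())
    # Distance of every pixel to the contour of M
    D = np.where(M, distance_transform_edt(M), distance_transform_edt(~M))
    nsd = D[F ^ M].sum() / D[F | M].sum()
    contour_F = F & ~binary_erosion(F)  # boundary pixels of F
    hd = D[contour_F].max() if contour_F.any() else 0.0
    return dice, nsd, hd
```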

We have compared with popular cell image segmentation techniques, including Otsu thresholding, k-means clustering and watershed [8]. Furthermore, in view of the popularity of level sets for cell imaging and our design focus on tackling intensity inhomogeneities, we have experimented with a level set method that has a similar focus, using the authors' released code [52], with initial contours generated using the watershed method. For all methods, post-processing is conducted to remove isolated segments smaller than 1/10 of the average size of the foreground regions detected in the image. In addition, we report direct performance comparisons with the state-of-the-art results reported on the same datasets [14, 32, 35, 38], by including the same performance measures as used in these works.

Results and discussion

Cell detection

We report the object-level detection results in Table 2. Comparing the results at various stages of the methodology, the improvement is larger on dataset 2 than on dataset 1, e.g. an 8.3% increase in detection accuracy on dataset 2 vs a 0.8% increase on dataset 1. This is because inhomogeneity is more prominent on dataset 2, while dataset 1 exhibits clearer contrast between the cell nuclei and the background in most images. In our evaluation, a detection is only considered a TP if the overlap ratio in Eq. (18) is at least 0.5. Therefore, a largely over- or under-segmented object would be counted as an FN at the second stage, and corrected after the contour refinement. This explains why, although cell nuclei are detected after the decluster processing, the recall results only improve significantly after the third stage. On dataset 3, the presence of cytoplasm causes many cell nuclei to cluster into one region during the initial segmentation, leading to FNs. The third stage better differentiates the cell nuclei and cytoplasm, and the improvement is significant, with increases of 18.7% and 3.4% in detection recall and precision. Figure 7a gives a better overview of the overlap ratios obtained from the final localization outputs. While most cell nuclei exhibit ratios of at least 0.5, less optimal results are observed on dataset 3, again due to the influence of the cytoplasm.

Table 2 Detection results
Figure 7

Cell detection results. Histograms with the y-axis as counts per 100 cell nuclei, and the x-axis as (a) the object-level overlap ratio and (b) the Hausdorff distance, both between the segmented foreground and the ground truth.

The performance improvement introduced by the iterative process of interleaving interest region extraction and image enhancement is shown in Figure 8a. The higher recall (on average a 3.6% increase) suggests that such an approach is especially useful for identifying foreground regions that originally display low contrast with the background. The benefits of candidate validation are shown in Figure 8b. By filtering out interest regions that are very different from the reference regions, the detection precision improves by 2% on average. The recall improves by only 0.7% on average, mainly because of the same constraints imposed by the overlap ratio.

Figure 8

Cell detection results. Improvement on detection from (a) iterative image enhancement and interest-region extraction, and (b) candidate validation.

To evaluate the effect of the default threshold setting α1 for candidate validation, receiver operating characteristic (ROC) curves are plotted by varying the threshold value. The probability estimates Q(G_x, F) / max_{x'} Q(G_{x'}, F) from all candidate regions are included in the plot, and candidate regions with at least 0.5 overlap ratio with the ground truth are marked as the foreground class and the rest as the background class. As shown in Figure 9, the 0.6 threshold setting provides a good balance between the TP and FP detections, with close to maximum TP rates. Note that the numbers of true negatives here are small (about 1/5 of the positive samples), hence the FP rates appear relatively high.

Figure 9

ROC curves of candidate validation. The purple dots indicate the position with the decision threshold α1 = 0.6.

Nucleus segmentation

Table 3 summarizes the region- and contour-based segmentation results. On datasets 2 and 3, the decluster processing improves the Dice measure by about 3% and 4%, due to better object-level labeling of candidate regions. The contour-based measures, however, are mainly enhanced at the third stage of the methodology, with on average more than a 50% reduction in NSD and HD. This is attributed to better contour delineation based on the detection results from the first two stages. Besides the mean values listed in the table, the distributions of Hausdorff distances on the final localization results are shown in Figure 7b.

Table 3 Segmentation results

To further evaluate the design of the graphical model for contour refinement, the foreground probabilities for all pixels of interest are computed with the intensity term, as summarized in Figure 10. While many pixels exhibit suitable probabilities, some background pixels, especially those in datasets 2 and 3, have larger foreground probabilities and would lead to misclassification. The pixel-level classification is improved by introducing the contrast-based detection and spatial terms, as shown in Figure 11.

Figure 10

Nucleus segmentation results. Histograms with the x-axis as the foreground probability derived from the intensity term, and the y-axis as counts per 100 pixels in Ḡ, for (a) the real cell nuclei and (b) the background.

Figure 11

Nucleus segmentation results. Improvement on segmentation from (a) contrast-based detection term and (b) spatial term.

Performance comparison

The localization results using the standard approaches are listed in Table 4, with example outputs shown in Figure 6. Compared to our proposed method, the level set and watershed techniques produce the second best results on dataset 1, especially with good contour-based measures. However, without explicitly handling high-intensity background regions, both methods result in about 3% lower detection precision. On dataset 2, our proposed method demonstrates stronger advantages, with an 8.5% increase in detection accuracy, a 10.2% increase in Dice coefficient and a 10.8 decrease in HD over the second best approach (i.e. level set). Both the level set and watershed approaches face the following challenges: (1) difficulty separating cell nuclei from surrounding background areas with low contrast, and (2) inability to classify background regions that resemble cell nuclei. On dataset 3, the intensity inhomogeneities within the cell nuclei and the cytoplasm make it particularly difficult to achieve good segmentation. As a result, the watershed method tends to largely over-segment the cell nuclei, generating many clusters and causing low detection recall and more errors in contour delineation. The level set method based on localized energy optimization is quite effective in splitting the clusters, but is less optimal for areas with high similarity between the cell nuclei and cytoplasm. The thresholding method does not perform as well as the level set or watershed approaches, but it does outperform the clustering-based approach. Compared to level set, our method achieves a 7% increase in detection accuracy, a 2.7% increase in Dice coefficient and a 1.8 decrease in HD. Tables 2, 3 and 4 show that our proposed method delivers better localization even using only the initial segmentation step. Higher performance margins are obtained with decluster processing and contour refinement, especially on datasets 2 and 3.

Table 4 Comparison of localization results

A comparison with the state-of-the-art results reported for the same datasets is summarized in Table 5. Our method achieves better results in most measures, as bold-faced in the table. On dataset 1, 0.93 more FP cell nuclei are detected compared to the level set method [14]. It is possible that such false detections are caused by accidental highlighting of background regions during the iterative image enhancement of the initial segmentation stage. However, our method exhibits much better overall detection performance, with minimal numbers of FNs (3.68 fewer than [14]) and only 1.43 FPs. The accuracy of pixel-level segmentation on dataset 2 improves significantly, as indicated by the 5% increase in Dice and Rand indices over [14]. A 4.1% improvement in object-level accuracy over [35] on dataset 3 is also obtained. These observations suggest that our method is indeed quite effective in handling the intensity inhomogeneity issue that is the major cause hindering satisfactory segmentation on datasets 2 and 3. The improvement in the contour-based measures, i.e. on average a 0.03 NSD decrease and a 1.45 HD decrease over [14], also demonstrates the suitability of boundary delineation using region-based designs, i.e. the salient region extraction and graphical model-based contour refinement.

Table 5 Comparison with the state-of-the-art results

Our method is currently implemented in Matlab, running on a standard PC with a 2.66-GHz dual core CPU and 3.6 GB RAM. The computation time depends on the number and size of cells in an image. On a 1344×1024 pixel image with about 40 cell nuclei, an average of 35 s is needed for the entire localization process. This is faster than applying the level set method [52], which requires about 45 s with 10 iterations.

Conclusions

A fully automatic localization method for cell nuclei in microscopic images is presented in this paper. Intensity inhomogeneities in cell nuclei and the background often cause unsatisfactory localization performance, and few works have addressed this problem in a robust manner. We propose a method that exploits data-adaptive information at various scales to tackle the intensity inhomogeneity. First, the regions of interest, i.e. cell nuclei or clusters, are extracted as salient regions with iterative contrast enhancement. Then, with feature-based classification and reference-based probability inference, the clusters are further processed to detect more cell nuclei and filter out spurious regions. Lastly, based on regional contrast information encoded in a graphical model, the pixel-level segmentation is enhanced to create the final contours. This region-based progressive localization (RPL) method has been successfully applied to three publicly available datasets, showing good object-level detection and region- and contour-based segmentation results. Compared to popular approaches in this problem domain such as level sets, our method achieved consistently better performance, with on average a 5.2% increase in Dice coefficient and a 6% increase in object-level detection accuracy. Our method also outperformed the state-of-the-art, with on average 3.5% and 7% improvements in region- and contour-based segmentation measures. We also suggest that the proposed method is general in nature and can be applied to other localization problems, as long as the objects of interest can be modeled as salient regions with measurable contrast from the background.

As future work, we will investigate improving the graphical model for better contour delineation. A potential approach is to incorporate an additional term for the cost of difference between a model image and the measured image, as inspired by [24]. The model image could be derived as a convolution of a point-spread function of the microscope with an object intensity function defined from the pixel labels. We will also investigate replacing the pixel-wise labeling with region-level processing for computational efficiency while maintaining segmentation accuracy. Other future work could explore the applicability of the proposed method to other types of images. Images with a nuclear membrane marker and different nuclear markers such as green fluorescent protein (GFP), and those with higher resolution or dimensionality, are of particular interest. To accommodate the specific characteristics of these images, possible changes to the method are to design more comprehensive intensity and texture features to differentiate among cell structures and background, and to enhance the contour refinement with boundary constraints.

References

1. Coelho LP, Shariff A, Murphy RF: Nuclear segmentation in microscope cell images: a hand-segmented dataset and comparison of algorithms. IEEE International Symposium on Biomedical Imaging. 2009, 518-521.
2. Peng H: Bioimage informatics: a new area of engineering biology. Bioinformatics. 2008, 24 (17): 1827-1836.
3. Meijering E: Cell segmentation: 50 years down the road [life sciences]. IEEE Signal Process Mag. 2012, 29 (5): 140-145.
4. Long F, Peng H, Myers E: Automatic segmentation of nuclei in 3D microscopy images of C. elegans. IEEE International Symposium on Biomedical Imaging. 2007, 536-539.
5. Yan P, Zhou X, Shah M, Wong STC: Automatic segmentation of high-throughput RNAi fluorescent cellular images. IEEE Trans Inf Technol Biomed. 2008, 12: 109-117.
6. Chang H, DeFilippis RA, Tlsty TD, Parvin B: Graphical methods for quantifying macromolecules through bright field imaging. Bioinformatics. 2009, 25 (8): 1070-1075.
7. Li F, Zhou X, Ma J, Wong STC: Multiple nuclei tracking using integer programming for quantitative cancer cell cycle analysis. IEEE Trans Med Imag. 2010, 29: 96-105.
8. Hagwood C, Bernal J, Halter M, Elliott J: Evaluation of segmentation algorithms on cell populations using CDF curves. IEEE Trans Med Imag. 2012, 31 (2): 380-390.
9. Yang L, Meer P, Foran D: Unsupervised segmentation based on robust estimation and color active contour models. IEEE Trans Inf Technol Biomed. 2005, 9 (3): 475-486.
10. Mosaliganti K, Gelas A, Gouaillard A, Noche R, Obholzer N, Megason S: Detection of spatially correlated objects in 3D images using appearance models and coupled active contours. International Conference on Medical Image Computing and Computer Assisted Intervention. 2009, 641-648.
11. Dzyubachyk O, van Cappellen WA, Essers J, Niessen WJ, Meijering E: Advanced level-set-based cell tracking in time-lapse fluorescence microscopy. IEEE Trans Med Imag. 2010, 29 (3): 852-867.
12. Bergeest JP, Rohr K: Fast globally optimal segmentation of cells in fluorescence microscopy images. International Conference on Medical Image Computing and Computer Assisted Intervention. 2011, 645-652.
13. Ali S, Madabhushi A: An integrated region, boundary, shape based active contour for multiple object overlap resolution in histological imagery. IEEE Trans Med Imag. 2012, 31 (7): 1-14.
14. Bergeest JP, Rohr K: Efficient globally optimal segmentation of cells in fluorescence microscopy images using level sets and convex energy functionals. Med Image Anal. 2012, 16: 1436-1444.
15. Cardinale J, Paul G, Sbalzarini IF: Discrete region competition for unknown numbers of connected regions. IEEE Trans Image Process. 2012, 21 (8): 3531-3545.
16. Wahlby C, Raviv TR, Ljosa V, Conery AL, Golland P, Ausubel FM, Carpenter AE: Resolving clustered worms via probabilistic shape models. IEEE International Symposium on Biomedical Imaging. 2010, 552-555.
17. Raviv TR, Ljosa V, Conery AL, Ausubel FM, Carpenter AE, Golland P, Wahlby C: Morphology-guided graph search for untangling objects: C. elegans analysis. International Conference on Medical Image Computing and Computer Assisted Intervention. 2010, 635-642.
18. Yang HF, Choe Y: Cell tracking and segmentation in electron microscopy images using graph cuts. IEEE International Symposium on Biomedical Imaging. 2009, 306-309.
19. Lou X, Koethe U, Wittbrodt J, Hamprecht FA: Learning to segment dense cell nuclei with shape prior. IEEE Conference on Computer Vision and Pattern Recognition. 2012, 1012-1018.
20. Bernardis E, Yu SX: Pop out many small structures from a very large microscopic image. Med Image Anal. 2011, 15: 690-707.
21. Mumford D, Shah J: Optimal approximations by piecewise smooth functions and associated variational problems. Comm Pure Appl Math. 1989, 42: 577-685.
22. Li K, Kanade T: Nonnegative mixed-norm preconditioning for microscopy image segmentation. International Conference on Information Processing in Medical Imaging. 2009, 362-373.
23. Yin Z, Kanade T, Chen M: Understanding the phase contrast optics to restore artifact-free microscopy images for segmentation. Med Image Anal. 2012, 16: 1047-1062.
24. Helmuth JA, Sbalzarini IF: Deconvolving active contours for fluorescence microscopy images. International Symposium on Visual Computing. 2009, 544-553.
25. Helmuth JA, Burckhardt CJ, Greber UF, Sbalzarini IF: Shape reconstruction of subcellular structures from live cell fluorescence microscopy images. J Struct Biol. 2009, 167: 1-10.
26. Rezatofighi SH, Hartley R, Hughes WE: A new approach for spot detection in total internal reflection fluorescence microscopy. IEEE International Symposium on Biomedical Imaging. 2012, 860-863.
27. Song Y, Cai W, Feng DD: Microscopic image segmentation with two-level enhancement of feature discriminability. International Conference on Digital Image Computing Techniques and Applications. 2012, 1-6.
28. Smith K, Carleton A, Lepetit V: General constraints for batch multiple-target tracking applied to large-scale videomicroscopy. IEEE Conference on Computer Vision and Pattern Recognition. 2008, 1-8.
29. Quelhas P, Marcuzzo M, Mendonca AM, Campilho A: Cell nuclei and cytoplasm joint segmentation using the sliding band filter. IEEE Trans Med Imag. 2010, 29 (8): 1463-1473.
30. Yin Z, Bise R, Chen M, Kanade T: Cell segmentation in microscopy imagery using a bag of local Bayesian classifiers. IEEE International Symposium on Biomedical Imaging. 2010, 125-128.
31. Kong H, Gurcan M, Belkacem-Boussaid K: Partitioning histopathological images: an integrated framework for supervised color-texture segmentation and cell splitting. IEEE Trans Med Imag. 2011, 30 (9): 1661-1677.
32. Cheng L, Ye N, Yu W, Cheah A: Discriminative segmentation of microscopic cellular images. International Conference on Medical Image Computing and Computer Assisted Intervention. 2011, 637-644.
33. Qu L, Long F, Liu X, Kim S, Myers E, Peng H: Simultaneous recognition and segmentation of cells: application in C. elegans. Bioinformatics. 2011, 27 (20): 2895-2902.
34. Monaco J, Raess P, Chawla R, Bagg A, Weiss M, Choi J, Madabhushi A: Image segmentation with implicit color standardization using cascaded EM: detection of myelodysplastic syndromes. IEEE International Symposium on Biomedical Imaging. 2012, 740-743.
35. Song Y, Cai W, Huang H, Wang Y, Feng DD: Object localization in medical images based on graphical model with contrast and interest-region terms. IEEE Computer Society Conference on Computer Vision and Pattern Recognition Workshops. 2012, 1-7.
36. Chang H, Loss LA, Spellman PT, Borowsky A, Parvin B: Batch-invariant nuclear segmentation in whole mount histology sections. IEEE International Symposium on Biomedical Imaging. 2012, 856-859.
37. Song Y, Cai W, Feng DD, Chen M: Cell nuclei segmentation in fluorescence microscopy images using inter- and intra-region discriminative information. International Conference of the IEEE Engineering in Medicine and Biology Society. 2013, 1-4.
38. Chen C, Wang W, Ozolek JA, Lages N, Altschuler SJ, Wu LF, Rohde GK: A template matching approach for segmenting microscopy images. IEEE International Symposium on Biomedical Imaging. 2012, 768-771.
39. Matas J, Chum O, Urban M, Pajdla T: Robust wide baseline stereo from maximally stable extremal regions. British Machine Vision Conference. 2002, 384-393.
40. Vedaldi A, Fulkerson B: VLFeat: an open and portable library of computer vision algorithms. ACM International Conference on Multimedia. 2010, 1469-1472.
41. Lowe DG: Distinctive image features from scale-invariant keypoints. Int J Comput Vis. 2004, 60 (2): 91-110.
42. Chang CC, Lin CJ: LIBSVM: a library for support vector machines. ACM Trans Intell Syst Technol. 2011, 2: 1-27.
43. Boyd S, Vandenberghe L: Convex Optimization. 2004, Cambridge University Press.
44. Ling H, Okada K: Diffusion distance for histogram comparison. IEEE Conference on Computer Vision and Pattern Recognition. 2006, 246-253.
45. Rubner Y, Tomasi C, Guibas LJ: The earth mover's distance as a metric for image retrieval. Int J Comput Vis. 2000, 40 (2): 99-121.
46. Lafferty J, McCallum A, Pereira F: Conditional random fields: probabilistic models for segmenting and labeling sequence data. International Conference on Machine Learning. 2001, 282-289.
47. Kolmogorov V, Zabih R: What energy functions can be minimized via graph cuts? IEEE Trans Pattern Anal Mach Intell. 2004, 26 (2): 147-159.
48. Huh S, Ker DFE, Bise R, Chen M, Kanade T: Automated mitosis detection of stem cell populations in phase-contrast microscopy images. IEEE Trans Med Imag. 2011, 30 (3): 586-596.
49. Song Y, Cai W, Kim J, Feng DD: A multistage discriminative model for tumor and lymph node detection in thoracic images. IEEE Trans Med Imag. 2012, 31 (5): 1061-1075.
50. Lezoray O, Cardot H: Cooperation of color pixel classification schemes and color watershed: a study for microscopic images. IEEE Trans Image Proc. 2002, 11 (7): 783-789.
51. Everingham M, Gool L, Williams C, Winn J, Zisserman A: The PASCAL visual object classes (VOC) challenge. Int J Comput Vis. 2010, 88 (2): 303-338.
52. Li C, Huang R, Ding Z, Gatenby JC, Metaxas DN, Gore JC: A level set method for image segmentation in the presence of intensity inhomogeneities with application to MRI. IEEE Trans Image Proc. 2011, 20 (7): 2007-2016.


Acknowledgements

This work was supported in part by ARC grants.

Author information


Corresponding author

Correspondence to Yang Song.

Additional information

Competing interests

The authors declare that they have no competing interests.

Authors’ contributions

YS designed and carried out research and drafted the manuscript. WC discussed and helped to design the methodology and draft the manuscript. HH, YW and MC helped with the draft manuscript. DF coordinated the designed research. All authors read and approved the final manuscript.


Rights and permissions

This article is published under license to BioMed Central Ltd. This is an Open Access article distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/by/2.0), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.



Cite this article

Song, Y., Cai, W., Huang, H. et al. Region-based progressive localization of cell nuclei in microscopic images with data adaptive modeling. BMC Bioinformatics 14, 173 (2013). https://doi.org/10.1186/1471-2105-14-173
