A hybrid blob-slice model for accurate and efficient detection of fluorescence labeled nuclei in 3D
© Santella et al; licensee BioMed Central Ltd. 2010
Received: 16 June 2010
Accepted: 29 November 2010
Published: 29 November 2010
To exploit the flood of data from advances in high throughput imaging of optically sectioned nuclei, image analysis methods need to correctly detect thousands of nuclei, ideally in real time. Variability in nuclear appearance and undersampled volumetric data make this a challenge.
We present a novel 3D nuclear identification method, which subdivides the problem, first segmenting nuclear slices within each 2D image plane, then using a shape model to assemble these slices into 3D nuclei. This hybrid 2D/3D approach allows accurate accounting for nuclear shape but exploits the clear 2D nuclear boundaries present in sectional slices to avoid the computational burden of fitting a complex shape model to volume data. When tested on C. elegans, Drosophila, zebrafish and mouse data, our method yielded 0 to 3.7% error, up to six times more accurate, and 30 times faster, than previously published performance. We demonstrate our method's potential by reconstructing the morphogenesis of the C. elegans pharynx, an important and much studied developmental process that could not previously be followed at this single cell level of detail.
Because our approach is specialized for the characteristics of optically sectioned nuclear images, it can achieve superior accuracy in significantly less time than other approaches. Both of these characteristics are necessary for practical analysis of overwhelmingly large data sets where processing must be scalable to hundreds of thousands of cells and where the time cost of manual error correction makes it impossible to use data with high error rates. Our approach is fast, accurate, available as open source software and its learned shape model is easy to retrain. As our pharynx development example shows, these characteristics make single cell analysis relatively easy and will enable novel experimental methods utilizing complex data sets.
Time-lapse imaging of optically sectioned nuclear images has provided an unprecedented opportunity to observe biological processes as they unfold in space and time. Using fluorescent proteins such as GFP to label nuclei, one can image the embryogenesis of diverse organisms such as C. elegans, Drosophila, zebrafish [3, 4] and mouse [5, 6] with single cell resolution over an extended period of time. Given sufficient temporal resolution, individual nuclei can be followed over time, providing a virtually contiguous record of proliferation, differentiation and morphogenesis [2, 5, 7–9]. However, exploiting this source of information is not trivial: such data sets can record thousands of cells over hundreds of frames, adding up to terabytes of files. The data becomes manageable only when tracks of the behavior of individual cells are generated from the raw images.
In response to this, a variety of computational techniques and software packages have been developed to aid in quantitative analysis of images [10–15]. The largest class of image analysis methods focuses on segmenting contiguous regions of pixels, implicitly detecting nuclear locations in the process. The simplest technique, thresholding image intensity, has been supplemented with image processing techniques like smoothing, adaptive thresholds [16, 17], morphological operators, mode finding [18, 19], watershed [20, 21] and level set methods to increase robustness in the face of noise, uneven contrast and touching nuclei. A second class of methods uses matched filters to detect the centers of nuclei. A predefined template of object appearance is compared to every location in an image and local maxima of similarity are assumed to be object centers [1, 23, 24].
The success of these methods is mixed. When nuclei are widely spaced, many methods perform well, with error rates of one percent or less [16, 25, 26]. However, as nuclear density increases and nuclei start to touch, which is typical of late embryonic development and adult tissues, error rates rise from two to more than ten percent [1, 3, 18, 20, 22]. Errors are caused by image characteristics that violate the rules or models underlying a detection method, and these have their origin in both biology and the imaging process. Variation in nuclear fluorescence and shape is the primary biological complication. Uneven signal within a nucleus (Additional file 1, Figure S1a) can result in over-segmentation: separate, redundant detections of a nucleus, one on each mode of intensity. Elongated and irregular nuclear shapes can also contribute to over-segmentation (Additional file 1, Figure S1c); even if expression is uniform, each end or bump may independently match some computational definitions of a nucleus. Biology also contributes to under-segmentation, or missed nuclei. When nuclei are crowded, nuclei with weaker expression may be masked by brighter neighbors and remain undetected (see Additional file 1, Figure S1b for an illustration of a weaker nucleus).
Imaging distortions and resolution limitations further complicate nuclear detection. An optically sectioned sample yields a series of 2D cross sections, so that nuclei are recorded as slices through bright globular shapes against a dark background. Coordinates within the 2D planes are referred to as x and y while z refers to the direction orthogonal to the image planes. Optical and physical constraints make resolution along the z axis lower than that within the x,y plane. Typically, phototoxicity and image acquisition time also limit the number of optical sections, further reducing the z resolution. In turn, this limits the information available to resolve closely packed nuclei along the z axis (Additional file 1, Figure S1d illustrates two nuclei whose images merge along the z axis). Systematic distortions such as fading of signal with depth and stretching of fluorescent signal along the light path (Additional file 1, Figure S1e) further distort the picture, contributing to error.
All errors are problematic because the quality of information extracted during image analysis limits the kinds of biological questions that can be answered. A low percentage of error, say three or five percent, allows reliable assessment of statistical trends in the behaviors of a homogeneous group of cells, such as drug response in cell culture, or the shape and overall migration of a tissue. On the other hand, this level of accuracy is not always sufficient for detailed analysis of embryogenesis, where tracing the behavior of single cells is often necessary. Many critical developmental processes, such as neural crest cell dispersion or convergent extension, can be most effectively investigated with this kind of single cell record of development. Though the desired information may be theoretically present in images, it is large-scale annotation of cell behavior that makes systematic investigation possible. Tracing the paths of cells as they move and divide demands virtually 100% accuracy, as a handful of misidentified cells per time point can lead to a thoroughly fragmented and scrambled lineage. To make use of error-filled data, biologists must spend a significant amount of time manually editing the automatically constructed result. Even for simple organisms with a few hundred cells, such as C. elegans, this has previously taken up to days. In an organism like zebrafish this amount of editing would be an impossible task, with correction of even a single sublineage of interest requiring weeks or months of effort. Errors quickly make human curation a bottleneck, undermining the premise of high throughput imaging. Often, when combined with time constraints, this inability to accurately find cells forces one to detour around a key question, missing an opportunity for a clean experimental design.
To address this challenge we present a detection and segmentation method that is accurate enough to allow high fidelity analysis over a variety of images while remaining fast enough to run in real time, making it practical for use with large data sets.
This basic shape model is sufficient because nuclei have relatively simple, largely convex shapes. Pixel-level segmentation is done within 2D image planes, where a shape model is not necessary because resolution is high (Figure 1c). In contrast, the final definition of 3D nuclear extent, which must be done under the constraint of limited resolution along the z axis, is based on pre-segmented slices, allowing computationally efficient use of a model of nuclear shape (Figure 1d). This approach avoids the prohibitively high computational cost of fitting complex shape models, like the active shape models commonly used to segment noisy volumetric data, to hundreds of thousands of nuclei.
Our approach begins with image filtering to find an initial set of seed locations for nuclei. Volume data is filtered with a 3D Difference of Gaussians (DoG) filter, which can be viewed as a matched filter representing a blurred sphere against a dark background. As in typical use of such filters, 3D maxima of intensity in the filtered image serve as candidate nuclear centers for further analysis. At the same time, the filtering process also highlights information about the individual nuclear slices: 2D local maxima within each image plane represent the centers of nuclear slices, and the edges of nuclei are highlighted as zero crossings of intensity (Figures 1b and 3). See Additional File 1: Supplemental Methods section 1.1 for more details.
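This seeding step can be sketched as follows. The sketch is illustrative Python (the published implementation is Matlab), and the function name, parameter values and toy volume are assumptions for the example, not the authors' code:

```python
import numpy as np
from scipy.ndimage import gaussian_filter, maximum_filter

def dog_seeds(volume, sigma_small, sigma_large, threshold):
    """Band-pass the volume with a Difference of Gaussians filter and return
    3D local maxima above a noise threshold as candidate nuclear centers."""
    v = volume.astype(float)
    dog = gaussian_filter(v, sigma_small) - gaussian_filter(v, sigma_large)
    # a voxel is a 3D local maximum if it equals the max of its 3x3x3 neighbourhood
    is_max = (dog == maximum_filter(dog, size=3)) & (dog > threshold)
    return dog, np.argwhere(is_max)

# toy volume: two blurred bright spots against a dark background
vol = np.zeros((20, 20, 20))
vol[5, 5, 5] = vol[14, 14, 14] = 100.0
vol = gaussian_filter(vol, 1.5)

dog, seeds = dog_seeds(vol, sigma_small=1.0, sigma_large=3.0, threshold=0.01)
print(len(seeds))  # both spot centers are recovered as candidate nuclei
```

In practice the filter scales would be tied to the expected nuclear diameter and the threshold to the image noise level, as the parameter discussion below describes.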
Based on the filtered volume data, nuclear slices are extracted as polygonal regions. For every 2D intensity maximum above a base threshold, a 2D shape representing that slice through the nucleus is segmented. Following our goal of reducing computational complexity, 2D segmentation is reduced to a set of 1D boundary detection problems. Sixteen evenly spaced rays are sent out from the 2D maximum; these rays terminate when they encounter a zero crossing (illustrated in Figure 1c). Ray endpoints are then post processed to discard rays that are unusually longer or shorter than their neighbours. The final ray endpoints define a polygonal segmentation boundary that can capture detailed shape but is computationally cheap to compute. See Additional File 1: Supplemental Methods section 1.3 for more details.
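The ray-based boundary search can be sketched in the same illustrative style. The disk-shaped test plane and the half-median outlier rule below are assumptions for the example; the paper's pruning compares each ray to its neighbours:

```python
import numpy as np

def segment_slice(dog2d, center, n_rays=16, max_len=30):
    """Cast evenly spaced rays from a 2D maximum of the DoG-filtered plane and
    stop each at the first zero crossing (filtered value <= 0); rays whose
    length disagrees strongly with the median are discarded as outliers."""
    cy, cx = center
    angles = np.linspace(0, 2 * np.pi, n_rays, endpoint=False)
    lengths = []
    for a in angles:
        dy, dx = np.sin(a), np.cos(a)
        r = 1
        while r < max_len:
            y, x = int(round(cy + r * dy)), int(round(cx + r * dx))
            outside = not (0 <= y < dog2d.shape[0] and 0 <= x < dog2d.shape[1])
            if outside or dog2d[y, x] <= 0:  # zero crossing reached
                break
            r += 1
        lengths.append(r)
    lengths = np.array(lengths, float)
    med = np.median(lengths)
    keep = np.abs(lengths - med) <= 0.5 * med  # simple outlier rule
    return angles[keep], lengths[keep]

# toy plane: a positive disk of radius 5 at (20, 20), negative elsewhere,
# mimicking one nuclear slice in the filtered image
yy, xx = np.mgrid[0:40, 0:40]
plane = np.where((yy - 20) ** 2 + (xx - 20) ** 2 <= 25, 1.0, -1.0)
ang, lens = segment_slice(plane, (20, 20))
print(lens)  # all ray lengths land near the disk radius of 5
```

The surviving (angle, length) pairs are the polygon vertices; because each ray is a 1D search, the whole slice costs only a few hundred pixel lookups.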
We then combine the extracted slices and the subset of these slices corresponding to 3D maxima to segment nuclei. The slice corresponding to each 3D maximum attempts to claim slices above and below it based on a trained probabilistic model of nuclear shape. The underlying shape model is a set of 7-dimensional Gaussian distributions, generated from labeled training data, representing the expected properties of slices within a nucleus and of 'distractor' slices originating from other nearby nuclei. Dimensions of this distribution include the relative size, intensity and position of slices in relation to the center slice and the closest intervening slice (shown in Figure 1d). Training the model is simple: a superset of slices that might be part of the nucleus is generated automatically, and extra slices are deleted to create a corrected result (see the readme in Additional File 2: source code for detailed instructions). This model is used in a simple maximum likelihood classifier that assesses whether each nearby slice along the imaging axis is more likely to belong to the nucleus or to another nearby 'distractor' nucleus. This shape model, though simple, is flexible enough to capture different levels of shape variation, nuclear separation and optical distortion, making it adaptable to different organisms and microscopy techniques. See Additional File 1: Supplemental Methods section 1.4-1.6 for more details.
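The maximum likelihood decision at the heart of slice claiming can be illustrated as below. The 2D features and synthetic training data are toy stand-ins for the paper's 7-dimensional model, and the class and method names are hypothetical:

```python
import numpy as np

class SliceClassifier:
    """Maximum likelihood classifier over slice features: one Gaussian for
    slices belonging to the nucleus, one for 'distractor' slices from
    neighbouring nuclei. The real model is 7-dimensional; the 2D features here
    (e.g. relative size and intensity versus the center slice) are toy stand-ins."""

    def fit(self, member_feats, distractor_feats):
        self.models = [self._estimate(member_feats), self._estimate(distractor_feats)]
        return self

    @staticmethod
    def _estimate(feats):
        feats = np.asarray(feats, float)
        mean = feats.mean(axis=0)
        cov = np.cov(feats, rowvar=False) + 1e-6 * np.eye(feats.shape[1])  # regularize
        return mean, cov

    @staticmethod
    def _log_likelihood(x, mean, cov):
        d = x - mean
        _, logdet = np.linalg.slogdet(cov)
        return -0.5 * (d @ np.linalg.solve(cov, d) + logdet + len(x) * np.log(2 * np.pi))

    def claims(self, x):
        """True if the slice is more likely a member than a distractor."""
        x = np.asarray(x, float)
        member, distractor = (self._log_likelihood(x, *m) for m in self.models)
        return bool(member > distractor)

# toy training data: member slices resemble the center slice, distractors do not
rng = np.random.default_rng(0)
members = rng.normal([1.0, 1.0], 0.1, size=(50, 2))
distractors = rng.normal([0.3, 0.4], 0.1, size=(50, 2))
clf = SliceClassifier().fit(members, distractors)
print(clf.claims([0.95, 1.05]), clf.claims([0.3, 0.35]))  # True False
```

Retraining for new organisms or microscopes amounts to re-estimating the two Gaussians from a freshly labeled slice list, which is why adaptation requires no hand tracing.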
Finding Overlooked Nuclei
When crowded, nuclei with weaker fluorescence intensity are often overshadowed by brighter neighbours and so are not 3D maxima. Conceptually, we can imagine masking the signal from known, detected nuclei; when this is done, dimmer nuclei become local maxima. The segmentation of all nuclear signal into slices provides a practical way to achieve this. All slices claimed by at least one nucleus are marked as accounted for and the remainder are examined. In any 3D spherical neighbourhood where multiple unclaimed slices exist, an overlooked nuclear center is seeded at the locally brightest slice. This works because the shape model is accurate enough to prevent the brighter, initially detected nuclei from claiming the slices corresponding to these undiscovered dimmer nuclei. These nuclei are extracted from the full set of slices (including claimed ones) subject to the same nuclear shape model described above, and the process is repeated iteratively until no unclaimed clusters of slices remain. See Additional File 1: Supplemental Methods section 1.7 for more details.
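A minimal sketch of this recovery loop, assuming slices are summarized as positions with a brightness value and using a plain spherical-neighbourhood test (the real method re-applies the trained shape model to claim slices for each new seed):

```python
import numpy as np

def recover_overlooked(slices, claimed, radius):
    """Iteratively seed overlooked nuclei: wherever several unclaimed slices
    cluster within a spherical neighbourhood, the locally brightest one becomes
    a new nuclear center. Rows of `slices` are (x, y, z, brightness). Marking
    the whole neighbourhood claimed is a stand-in for shape-model claiming."""
    slices = np.asarray(slices, float)
    pos, bright = slices[:, :3], slices[:, 3]
    claimed = np.asarray(claimed, bool).copy()
    seeds = []
    while True:
        unclaimed = np.flatnonzero(~claimed)
        seeded = False
        # visit unclaimed slices from brightest to dimmest
        for i in unclaimed[np.argsort(-bright[unclaimed])]:
            near = unclaimed[np.linalg.norm(pos[unclaimed] - pos[i], axis=1) <= radius]
            if near.size >= 2:  # a cluster: slice i plus at least one neighbour
                seeds.append(int(i))
                claimed[near] = True
                seeded = True
                break
        if not seeded:
            return seeds

# two dim nuclei whose slices were never claimed by a brighter neighbour
slices = [[0, 0, 0, 5.0], [0, 0, 1, 4.0],
          [10, 10, 0, 3.0], [10, 10, 1, 2.0],
          [30, 30, 30, 1.0]]                  # an isolated stray slice is ignored
print(recover_overlooked(slices, [False] * 5, radius=3.0))  # [0, 2]
```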
False positives come largely from multiple detections of the same nucleus caused by variation of intensity within the nucleus; false positives from noise are rare because of the strong smoothing provided by filtering. Most false positive cases are revealed by significantly overlapping claims on slices from multiple (equivalent) nuclei. These cases are judged by a conflict resolution step. Whenever two extracted nuclei both claim a slice, two possibilities are considered: merging the two nuclei, or splitting their overlap between them. These two configurations are scored against the shape model and the option with the best shape score is picked. Merging is scored by using the shape model to calculate the total score of all slices in a nucleus formed from the union of all slices in both nuclei, assuming its center to be the slice closest to the geometric middle of the merged set of slices. The second possibility, splitting, is scored by assigning each slice to the nucleus with the strongest claim on it, subject to the constraint that each nucleus be contiguous, and adding up the total of the two nuclei's claims on their slices. See Additional File 1: Supplemental Methods section 1.8 for more details.
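The merge-versus-split decision structure can be outlined as below. The scoring callables are placeholders for the shape-model scores described above, and the toy 'compactness' score exists only to exercise the decision; the split branch is also simplified relative to the strongest-claim assignment in the text:

```python
def resolve_overlap(nuc_a, nuc_b, merge_score, split_score):
    """Decide between merging two nuclei that claim the same slice or keeping
    them separate with the overlap split off, returning whichever configuration
    scores higher under the supplied (stand-in) shape-model scores."""
    union = sorted(set(nuc_a) | set(nuc_b))
    if merge_score(union) >= split_score(nuc_a, nuc_b):
        return [union]                       # one nucleus from the merged slices
    overlap = set(nuc_a) & set(nuc_b)
    # sketch only: drop the overlap from the second nucleus; the real method
    # assigns each contested slice to the nucleus with the stronger claim
    return [list(nuc_a), [s for s in nuc_b if s not in overlap]]

# toy scores that simply favour nuclei spanning few z planes; slices are z indices
compactness = lambda sl: -(max(sl) - min(sl))
split_total = lambda a, b: compactness(a) + compactness(b)
print(resolve_overlap([0, 1, 2], [2, 3, 4], compactness, split_total))
# the scores tie, so the two detections are merged into one nucleus
```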
Image analysis software
Matlab source of the image analysis software is available as Additional File 2: source code and is distributed under the GNU GPL. The source will be actively maintained; the most recent version is available for download at sourceforge (starrynite.sourceforge.net under the file subheading blob-slice cell detection).
Though the algorithm contains a significant number of parameters, the set of key parameters is relatively small. It is typically possible to get good results by setting only two intuitive parameters: the diameter of the nucleus in the first frame, used to set the filter size, and the noise threshold used to discard filtered maxima corresponding to image noise. A full catalogue of all parameters (Additional Tables S3 and S4) and advice on tuning them for new images is provided in Additional File 1, section 2.1. An example parameter file along with a script and instructions for retraining the 3D shape model is provided in Additional File 2: source code.
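To make the tuning concrete, a hypothetical setup of the two key parameters might look like the following sketch; the key names do not match the actual parameter file, and the rule tying the filter scales to nuclear diameter is an illustrative assumption, not the paper's exact formula:

```python
# Hypothetical parameter setup illustrating the two settings described above.
params = {
    "nuclear_diameter": 25,   # pixels, measured in the first frame; sets filter size
    "noise_threshold": 0.05,  # filtered maxima below this are discarded as noise
}

# one common way to tie a DoG matched filter to object size (an assumption):
# small sigma ~ diameter / 4, large sigma twice the small one
sigma_small = params["nuclear_diameter"] / 4.0
sigma_large = 2.0 * sigma_small
print(sigma_small, sigma_large)  # 6.25 12.5
```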
Evaluation of Accuracy
Error rates at each stage of the algorithm for the 350- to 500-cell C. elegans embryo (columns: False Negatives (%), False Positives (%), Total error (%)):
- Initial DoG 3D maxima detection
- Overlooked nuclei added
- Final error after overlap resolution
Additional file 8: Movie 1: Mouse Reconstruction. 3D movie illustrating somites in the mouse embryo based on our system's analysis of a full volume of mouse test data. (AVI 13 MB)
Additional file 9: Movie 2: Zebrafish Reconstruction. 3D movie showing the internal structure of a zebrafish embryo based on our system's analysis of a full volume of zebrafish test data. (AVI 9 MB)
To assess running times and memory loads, we tested our method on a whole volume of each data set on a 2.13 GHz Intel Core 2 PC using a lightly optimized, single-threaded Matlab implementation (Additional file 1, Table S2). The runtime consists of several components, with disk access and image filtering scaling with image size, while slice and nuclear extraction scale with the number of nuclei present. In larger images computational time is strongly dominated by image filtering, which accounts for 83% of runtime for the zebrafish data set. For the C. elegans, Drosophila and mouse data, the processing speed is about 3 seconds per megapixel of image data. For the zebrafish data, the speed drops to approximately 6 seconds per megapixel, likely due to memory management inefficiencies resulting from the need to divide the image filtering into multiple parts that fit within addressable memory on a 32-bit architecture.
It is worth noting that, even in this unoptimized implementation on limited hardware, our algorithm is fast and efficient enough for real time analysis of all data sets but zebrafish. As the core image filtering is highly parallelizable and memory management seems to take a large toll in our tests, a parallelized implementation on a fairly typical multicore 64-bit workstation with sufficient memory would be qualitatively faster. This should allow the processing of larger volumes such as the zebrafish data in real time with a few CPUs. The computational cost of our method compares favourably with previous work. Our method takes ~23 seconds per volume to process C. elegans data, well below the data sampling rate of one volume per minute. In contrast, mode finding takes twice as long on a volume with less than half the x,y resolution using comparable hardware. Our detection, combined with our previous tracking approach, takes about 37 seconds per volume to detect and track through the 180 cell stage, compared to up to 20 minutes per volume at the 180 cell stage for combined tracking and segmentation with a graph cut approach. Our method is also efficient in comparison with previous zebrafish analysis methods, both of which would require around two hours [3, 22] to process a volume of the size that our method can segment in approximately 20 minutes.
3D reconstruction of pharynx development
We demonstrate the potential of our algorithm by analyzing the organogenesis of the pharynx in C. elegans. The pharynx is a prime model for organogenesis and has been widely studied. However, a detailed single cell level record of its formation has never been made, because of the image analysis challenge presented by small cells in a crowded configuration during later embryogenesis. With our previous cell detection method, high error rates made this analysis impractical. More accurate detection opens the door not only to this record of wild type development, but also to novel experimental investigations of late pharynx morphogenesis at the single cell level.
The pharynx is a feeding apparatus that ingests and grinds bacteria. It is made of 80 cells, derived from different lineages and including multiple tissues such as muscles, neurons and glands [33, 34] [WormAtlas.org]. This complex set of tissues built from a small number of cells has made the pharynx a powerful model for studying organogenesis. Genetic and functional genomic analyses have identified the key signaling pathways, master regulators and the molecular cascade underlying pharyngeal development, leading to a substantial understanding of how the cell lineage generates the particular set of 80 differentiated cells (see Mango for a detailed review). However, as in most models of complex organogenesis, how the differentiated cells give rise to the structure of a functioning organ is poorly understood.
Additional file 13: Movie 6: Structure of pharynx and contributions of different sublineages. 3D rotation of the pharynx at time 337 makes the 3D structure clear and allows comparison of the final time points in the coloring schemes of movies 1, 2 and 3. (AVI 9 MB)
More detailed temporal and spatial information is available in movies corresponding to different coloring schemes in Figure 5 (Additional files 10, 11, 12 and 13: Movies 3 to 6). As individual cells within the organ can be analyzed based on our accurate nuclear identification method, morphogenic behaviours can be dissected at single-cell resolution using mutants, gene expression mapping and other approaches. Such studies will ultimately extend the molecular cascade of pharyngeal development from differentiation to morphogenesis, and pave the way for a comprehensive understanding of organogenesis.
Discussion and Conclusion
Our method is a reliable tool for nuclear identification that, by respecting the structure of the underlying image data, achieves robust and fast nuclear detection and segmentation. Three key ideas underlie the strength of our design. First, we separate the problem of 3D nuclear segmentation into 2D slice segmentation and 3D slice grouping. We can achieve reliable 2D segmentation using simple methods because of the high resolution within image planes. We can then efficiently solve the hard problem of 3D segmentation by grouping the relatively small number of slices in the image (typically 3 to 4 per nucleus at late developmental stages). Second, we employ a trainable, probabilistic shape model based on slices, which allows us to consider the variability of nuclear shape and intensity within an easily manageable framework. Third, we use the 3D maxima generated by the DoG filter to guide nuclear segmentation. As 3D maxima typically provide >95% accuracy in nuclear detection, they provide a powerful guide for a greedy slice grouping approach. The strategy of segmenting individual slices of volume data is not unknown [22, 36, 37], but our method is unique in its computational strategy and in being general, fully automatic and capable of good results in the crowded images typical of late embryogenesis. The algorithm remains efficient and fast enough for a typical computer to achieve real-time analysis of in vivo imaging of metazoan embryogenesis, in models ranging from C. elegans to mouse.
Aside from accuracy and speed, our method has the advantages of easy adaptability and an extensible modular framework. Retraining the probabilistic nuclear model for new images provides increased flexibility with minimal effort. Since slices can typically be segmented with >95% accuracy after setting only 2 simple base parameters, manual labeling involves only pruning a list of nearby potential slices. No hand tracing of image regions is necessary. Detailed tuning and training procedures are given in Additional File 7 section 2. Apart from the adaptability of the probabilistic model, our method has the added flexibility of a modular algorithmic design. Each sub task is largely independent of the others and can be replaced by other methods independently of the general strategy. For example, if very differently scaled or shaped nuclei were a concern, the DoG filter could be replaced by a (more computationally expensive) battery of oriented or multi-scale filters while leaving the rest of the framework untouched. Additional specializations can also be added at different stages to address particular concerns, such as false detections outside the boundary of an embryo or adaptation to fading signal.
Currently, our method still produces 2-3% error in the more challenging cases of late embryogenesis. The vast majority of remaining false negatives are crowded nuclei for which secondary recovery failed because a brighter neighbor not only prevented detection as a 3D maximum but also mistakenly claimed all of the dimmer nucleus's slices. Similarly, the majority of remaining false positives are nuclei split into upper and lower halves along the z axis; these were detected twice but did not claim each other's slices sufficiently to be merged. This suggests that the classifier for slice inclusion is likely the weakest link in our system and could be improved, especially for the elongated nuclei with varying spatial orientation that are frequently seen in the Drosophila and mouse embryo. However, at least half of these error cases are ambiguous to the eye and require examination of adjoining time points to determine whether they represent one or two nuclei. This suggests that the performance of our algorithm may be close to the bound set by the information in a single image, and large future improvements may be more easily achieved through improvements to imaging and the use of temporal tracking.
As our analysis of the relationship between error rates and nuclear separation (Figure 4) suggests, the most direct imaging improvement for detection in most situations would be an increase in z resolution. Additional resolution in the x,y plane or an improved signal to noise ratio is always useful, but if x,y resolution is already sufficient for segmentation this will not significantly reduce error and might be counterproductive if, for example, it results in increased phototoxicity due to greater magnification or laser power. Our results provide a practical guide for optimizing imaging parameters: ensure a z sample spacing no greater than the typical separation between nuclei. Though acquisition speed and other constraints will not always allow sufficient z resolution, the curve in Figure 4 allows these factors to be traded off against error in an informed manner. Furthermore, because nuclear separation is a universal metric for fluorescence labeled nuclear images, it is useful in comparing image analysis techniques across different image data. For example, this metric highlights the counterintuitive fact that the ~3 hpf zebrafish embryo, with over 1500 cells, is about as easy to analyze as the twenty-four-cell C. elegans embryo.
Zebrafish and Drosophila data are published data sets captured with DSLM and DSLM-SI microscopy techniques, respectively. Imaging resolution and temporal sampling information for these and all other data sets are detailed in Additional file 1, Table S2 and Table S1.
C. elegans 4-D confocal images were recorded with a Zeiss Axio Observer.Z1 with a 491-nm laser at a temporal resolution of one minute for embryos between the two-cell and ~540-cell stages (320 minutes). To normalize for the loss of fluorescence in lower focal planes and the increase in fluorescence later in development, adjustments were made to laser power (ranging from 5% to 30%) and exposure time (from 85 to 120 ms) according to slice and developmental stage.
Mouse embryos expressing a H2B-GFP reporter specifically within non-axial mesoderm were dissected and imaged as detailed in .
Pharynx Development Analysis
Nuclei were detected using our method, then tracked using StarryNite with errors corrected manually using AceTree. Nuclei were followed until approximately 340 minutes pfc. Pharyngeal cells were identified by their lineage identity, which, given the invariant cell lineage of C. elegans, equates with fate.
Image analysis algorithm details are available in Additional File 1, section 1.
We thank Philipp Keller for supplying zebrafish and Drosophila data as well as advice and critical reading of the manuscript. Thanks to Julia Moore for extensive help with data visualization and to Heather Katzoff for comments on the manuscript. This work is supported by an NIH grant (HG004643) to Z.B. A.S. is supported by NIH grant F32 GM091874-01.
- Bao Z, Murray JI, Boyle T, Ooi SL, Sandel MJ, Waterston RH: Automated cell lineage tracing in Caenorhabditis elegans. Proc Natl Acad Sci USA 2006, 103: 2707–2712. 10.1073/pnas.0511111103View ArticlePubMedPubMed CentralGoogle Scholar
- McMahon A, Supatto W, Fraser SE, Stathopoulos A: Dynamic analyses of Drosophila gastrulation provide insights into collective cell migration. Science 2008, 322: 1546–1550. 10.1126/science.1167094View ArticlePubMedPubMed CentralGoogle Scholar
- Keller PJ, Schmidt AD, Wittbrodt J, Stelzer EH: Reconstruction of zebrafish early embryonic development by scanned light sheet microscopy. Science 2008, 322: 1065–1069. 10.1126/science.1162493View ArticlePubMedGoogle Scholar
- England SJ, Blanchard GB, Mahadevan L, Adams RJ: A dynamic fate map of the forebrain shows how vertebrate eyes form and explains two causes of cyclopia. Development 2006, 133: 4613–4617. 10.1242/dev.02678View ArticlePubMedGoogle Scholar
- Meilhac SM, Adams RJ, Morris SA, Danckaert A, Le Garrec JF, Zernicka-Goetz M: Active cell movements coupled to positional induction are involved in lineage segregation in the mouse blastocyst. Developmental Biology 2009, 331: 210–221. 10.1016/j.ydbio.2009.04.036View ArticlePubMedPubMed CentralGoogle Scholar
- Nowotschin S, Hadjantonakis AK: Use of KikGR a photoconvertible green-to-red fluorescent protein for cell labeling and lineage analysis in ES cells and mouse embryos. Bmc Developmental Biology 2009., 9: 10.1186/1471-213X-9-49Google Scholar
- Murray JI, Bao Z, Boyle TJ, Waterston RH: The lineaging of fluorescently-labeled Caenorhabditis elegans embryos with StarryNite and AceTree. Nat Protoc 2006, 1: 1468–1476. 10.1038/nprot.2006.222View ArticlePubMedGoogle Scholar
- Rembold M, Loosli F, Adams RJ, Wittbrodt J: Individual cell migration serves as the driving force for optic vesicle evagination. Science 2006, 313: 1130–1134. 10.1126/science.1127144View ArticlePubMedGoogle Scholar
- Bischoff M, Parfitt DE, Zernicka-Goetz M: Formation of the embryonic-abembryonic axis of the mouse blastocyst: relationships between orientation of early cleavage divisions and pattern of symmetric/asymmetric divisions. Development 2008, 135: 953–962. 10.1242/dev.014316View ArticlePubMedPubMed CentralGoogle Scholar
- Swedlow JR, Eliceiri KW: Open source bioimage informatics for cell biology. Trends Cell Biol 2009, 19: 656–660. 10.1016/j.tcb.2009.08.007View ArticlePubMedPubMed CentralGoogle Scholar
- Carpenter AE, Jones TR, Lamprecht MR, Clarke C, Kang IH, Friman O, Guertin DA, Chang JH, Lindquist RA, Moffat J, et al.: CellProfiler: image analysis software for identifying and quantifying cell phenotypes. Genome Biol 2006, 7: R100. 10.1186/gb-2006-7-10-r100View ArticlePubMedPubMed CentralGoogle Scholar
- Lee JY, Colinas J, Wang JY, Mace D, Ohler U, Benfey PN: Transcriptional and posttranscriptional regulation of transcription factor expression in Arabidopsis roots. Proc Natl Acad Sci USA 2006, 103: 6055–6060. 10.1073/pnas.0510607103View ArticlePubMedPubMed CentralGoogle Scholar
- Schnabel R, Hutter H, Moerman D, Schnabel H: Assessing normal embryogenesis in Caenorhabditis elegans using a 4D microscope: variability of development and regional specification. Developmental Biology 1997, 184: 234–265. 10.1006/dbio.1997.8509View ArticlePubMedGoogle Scholar
- Glory E, Murphy RF: Automated subcellular location determination and high-throughput microscopy. Dev Cell 2007, 12: 7–16. 10.1016/j.devcel.2006.12.007View ArticlePubMedGoogle Scholar
- Neumann B, Walter T, Heriche JK, Bulkescher J, Erfle H, Conrad C, Rogers P, Poser I, Held M, Liebel U, et al.: Phenotypic profiling of the human genome by time-lapse microscopy reveals cell division genes. Nature 2010, 464: 721–727. 10.1038/nature08869View ArticlePubMedPubMed CentralGoogle Scholar
- Harder N, Mora-Bermudez F, Godinez WJ, Wunsche A, Eils R, Ellenberg J, Rohr K: Automatic analysis of dividing cells in live cell movies to detect mitotic delays and correlate phenotypes in time. Genome Res 2009, 19: 2113–2124. 10.1101/gr.092494.109View ArticlePubMedPubMed CentralGoogle Scholar
- Long F, Peng H, Liu X, Kim SK, Myers E: A 3D digital atlas of C. elegans and its application to single-cell analyses. Nat Methods 2009, 6: 667–672. 10.1038/nmeth.1366View ArticlePubMedPubMed CentralGoogle Scholar
- Li G, Liu T, Tarokh A, Nie J, Guo L, Mara A, Holley S, Wong ST: 3D cell nuclei segmentation based on gradient flow tracking. BMC Cell Biol 2007, 8: 40. 10.1186/1471-2121-8-40
- Chen Y, Ladi E, Herzmark P, Robey E, Roysam B: Automated 5-D analysis of cell migration and interaction in the thymic cortex from time-lapse sequences of 3-D multi-channel multi-photon images. J Immunol Methods 2009, 340: 65–80. 10.1016/j.jim.2008.09.024
- Lin G, Adiga U, Olson K, Guzowski JF, Barnes CA, Roysam B: A hybrid 3D watershed algorithm incorporating gradient cues and object models for automatic segmentation of nuclei in confocal image stacks. Cytometry Part A 2003, 56A: 23–36. 10.1002/cyto.a.10079
- Yan P, Zhou X, Shah M, Wong ST: Automatic segmentation of high-throughput RNAi fluorescent cellular images. IEEE Trans Inf Technol Biomed 2008, 12: 109–117. 10.1109/TITB.2007.902173
- Mosaliganti K, Cooper L, Sharp R, Machiraju R, Leone G, Huang K, Saltz J: Reconstruction of cellular biological structures from optical microscopy data. IEEE Trans Vis Comput Graph 2008, 14: 863–876. 10.1109/TVCG.2008.30
- Byun J, Verardo MR, Sumengen B, Lewis GP, Manjunath BS, Fisher SK: Automated tool for the detection of cell nuclei in digital microscopic images: application to retinal images. Mol Vis 2006, 12: 949–960.
- Jaqaman K, Loerke D, Mettlen M, Kuwata H, Grinstein S, Schmid SL, Danuser G: Robust single-particle tracking in live-cell time-lapse sequences. Nat Methods 2008, 5: 695–702. 10.1038/nmeth.1237
- Wang M, Zhou X, Li F, Huckins J, King RW, Wong ST: Novel cell segmentation and online SVM for cell cycle phase identification in automated microscopy. Bioinformatics 2008, 24: 94–101. 10.1093/bioinformatics/btm530
- Dzyubachyk O, Jelier R, Lehner B, Niessen W, Meijering E: Model-based approach for tracking embryogenesis in Caenorhabditis elegans fluorescence microscopy data. Conf Proc IEEE Eng Med Biol Soc 2009, 1: 5356–5359.
- Huang X, Saint-Jeannet JP: Induction of the neural crest and the opportunities of life on the edge. Developmental Biology 2004, 275: 1–11. 10.1016/j.ydbio.2004.07.033
- Wallingford JB, Fraser SE, Harland RM: Convergent extension: the molecular control of polarized cell movement during embryonic development. Dev Cell 2002, 2: 695–706. 10.1016/S1534-5807(02)00197-1
- Boyle TJ, Bao Z, Murray JI, Araya CL, Waterston RH: AceTree: a tool for visual analysis of Caenorhabditis elegans embryogenesis. BMC Bioinformatics 2006, 7: 275. 10.1186/1471-2105-7-275
- Cootes TF, Taylor CJ, Cooper DH, Graham J: Active shape models - their training and application. Computer Vision and Image Understanding 1995, 61: 38–59. 10.1006/cviu.1995.1004
- Marr D, Hildreth E: Theory of edge detection. Proceedings of the Royal Society of London. Series B, Biological Sciences 1980, 207: 187–217. 10.1098/rspb.1980.0020
- Keller PJ, Schmidt AD, Santella A, Khairy K, Bao Z, Wittbrodt J, Stelzer EH: Fast, high-contrast imaging of animal development with scanned light sheet-based structured-illumination microscopy. Nat Methods 2010, 7(8): 637–642. 10.1038/nmeth.1476
- Sulston JE, Schierenberg E, White JG, Thomson JN: The embryonic cell lineage of the nematode Caenorhabditis elegans. Developmental Biology 1983, 100: 64–119. 10.1016/0012-1606(83)90201-4
- Albertson DG, Thomson JN: The pharynx of Caenorhabditis elegans. Philosophical Transactions of the Royal Society of London. Series B, Biological Sciences 1976, 275: 299. 10.1098/rstb.1976.0085
- Mango SE: The molecular basis of organ formation: insights from the C. elegans foregut. Annual Review of Cell and Developmental Biology 2009, 25: 597–628. 10.1146/annurev.cellbio.24.110707.175411
- Hamahashi S, Onami S, Kitano H: Detection of nuclei in 4D Nomarski DIC microscope images of early Caenorhabditis elegans embryos using local image entropy and object tracking. BMC Bioinformatics 2005, 6: 125. 10.1186/1471-2105-6-125
- Gouaillard A, Brown T, Bronner-Fraser M, Fraser SE, Megason SG: GoFigure and The Digital Fish Project: open tools and open data for an imaging-based approach to systems biology. 2007.
- Nowotschin S, Ferrer-Vaquer A, Hadjantonakis AK: Imaging of mouse embryogenesis with confocal time-lapse microscopy. Methods Enzymol 2010, 476: 351–377.
This article is published under license to BioMed Central Ltd. This is an Open Access article distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/by/2.0), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.