 Research article
 Open Access
A distance-field based automatic neuron tracing method
 Jinzhu Yang†^{1, 2},
 Paloma T Gonzalez-Bellido^{2, 3} and
 Hanchuan Peng†^{2, 4}
https://doi.org/10.1186/1471-2105-14-93
© Yang et al.; licensee BioMed Central Ltd. 2013
Received: 25 May 2012
Accepted: 22 February 2013
Published: 12 March 2013
Abstract
Background
Automatic 3D digital reconstruction (tracing) of neurons embedded in noisy microscopic images is challenging, especially when the cell morphology is complex.
Results
We have developed a novel approach, named DF-Tracing, to tackle this challenge. This method first extracts the neurite signal (foreground) from a noisy image by using anisotropic filtering and automated thresholding. Then, DF-Tracing executes a coupled distance-field (DF) algorithm on the extracted foreground neurite signal and reconstructs the neuron morphology automatically. Two distance-transform based “force” fields are used: one for “pressure”, which is the distance transform field of foreground pixels (voxels) to the background, and another for “thrust”, which is the distance transform field of the foreground pixels to an automatically determined seed point. The coupling of these two force fields can “push” a “rolling ball” quickly along the skeleton of a neuron, reconstructing the 3D cell morphology.
Conclusion
We have used DF-Tracing to reconstruct the intricate neuron structures found in noisy image stacks, obtained with 3D laser microscopy, of dragonfly thoracic ganglia. Compared to several previous methods, DF-Tracing produces better reconstructions.
Keywords
 Neuron Structure
 Foreground Pixel
 Neuron Region
 Distance Field
 Boundary Pixel
Background
In neuroscience it is important to accurately trace, or reconstruct, a neuron’s 3D morphology. Current neuron tracing methods can be classified, according to the manual input required, as manual, semi-automatic or fully automatic. Neurolucida (MBF Bioscience), a largely manual technique, uses straight line segments to connect manually determined neuron skeleton locations drawn from the 2D cross-sectional views of a 3D image stack. In contrast, semi-automatic methods need some prior information, such as the termini of a neuron, for the automated process to find the neuron skeleton. For example, the semi-automatic Vaa3D-Neuron 1.0 system (previously called “V3D-Neuron”) [1, 2] has been used in systematic and large-scale reconstructions of single neurons/neurite tracts from mouse and fruit fly [3, 4].
However, for very complicated neuron structures and/or massive amounts of image data, semi-automatic methods are still time-consuming. Thus, a fully automated tracing method is highly desirable. Early fully automated methods used image thinning to extract skeletons from binary images [5-7].
These methods iteratively remove voxels from the surface of the segmented foreground region of an image. In addition, neuron-tracing approaches based on pattern recognition have also been developed ([8-13]); however, when image quality is low, their tracing accuracy may be greatly compromised. Model-based approaches, such as those that use a 3D line, sphere or cylinder to identify and trace the morphological structures of neurons, are relatively more successful ([14-17]). These methods can also be guided using both global prior information and local salient image features ([2, 18, 19]). While most existing methods grow a neuron structure from a predefined or automatically selected “seed” location, the all-path pruning method [20], which iteratively removes redundant structural elements, was recently proposed as a powerful alternative.
Method
An intuitive way to trace a neuron is to start from a predefined or automatically computed location, called the “seed”, and grow the neuron morphology until it covers all visible signal in the image. If the image foreground region that corresponds to a neuron can be well extracted, the problem reduces to determining the complete set of reconstruction nodes, i.e. skeletonizing the neuron using the foreground mask of the image, followed by estimating the cross-sectional diameter at every reconstruction node.
We follow this intuition and design the following three-stage neuron-tracing algorithm:
(Step 1) Enhance line-like structures in the image, followed by adaptive thresholding to remove non-neuron background and noise. (2.1)
(Step 2) Skeletonize the neuron using coupled distance fields. (2.2)
(Step 3) Assemble multiple spatially disconnected pieces of the traced neuron into the final result. (2.3)
Preprocessing: extraction of neuron signal
Since a neuron segment looks like a line or a tube (Figure 2), anisotropic filtering of image noise can enhance the neuron signal in an image. We use nonlinear anisotropic filtering for signal enhancement, followed by automated thresholding.
The function f(u) is defined using the Hessian matrix of each image pixel u,
$$H=\begin{bmatrix} u_{xx} & u_{xy} & u_{xz} \\ u_{yx} & u_{yy} & u_{yz} \\ u_{zx} & u_{zy} & u_{zz} \end{bmatrix},$$
where H is symmetric. Of note, the Hessian method has been widely used in medical image computing, especially for vessel enhancement and segmentation ([24, 25]).
where α_{i} (i = 1, 2, 3) are predefined coefficients (α_{1} = 0.5, α_{2} = 0.5, α_{3} = 25) and $k_i = \exp\left(\lambda_i^2 / \sum_{i=1}^{3} \lambda_i^2\right)$.
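The Hessian eigen-analysis behind this weighting can be sketched as follows. This is a minimal illustration, not the paper's code: the function name `hessian_weights` is ours, second derivatives are approximated with central finite differences via `np.gradient`, and the denominator is clipped to avoid division by zero in flat regions.

```python
import numpy as np

def hessian_weights(u):
    """For each voxel of a 3D image u, compute the Hessian eigenvalues
    lambda_1..3 and the weights k_i = exp(lambda_i^2 / sum_j lambda_j^2)."""
    grads = np.gradient(u)                      # u_x, u_y, u_z
    H = np.empty(u.shape + (3, 3))
    for i, gi in enumerate(grads):
        for j, gij in enumerate(np.gradient(gi)):
            H[..., i, j] = gij                  # u_{ij}, second derivatives
    H = 0.5 * (H + np.swapaxes(H, -1, -2))      # enforce symmetry numerically
    lam = np.linalg.eigvalsh(H)                 # eigenvalues, shape (..., 3)
    denom = np.maximum((lam ** 2).sum(axis=-1, keepdims=True), 1e-12)
    k = np.exp(lam ** 2 / denom)                # the k_i weights of the text
    return lam, k
```

Since each ratio λ_i²/Σλ_j² lies in [0, 1], the weights k_i fall in [1, e].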
After signal enhancement, extracting the neurite foreground using global thresholding is straightforward. We determine the threshold with the following iterative process. First, the average image intensity is taken as a candidate threshold. We use this candidate threshold to divide the image into two portions: pixels with higher or lower intensity. We then calculate the average of the two mean intensity values of these two portions and use it as a new candidate threshold. This process is iterated until the candidate threshold value no longer changes. This converging threshold is used as the final global threshold: any image pixel with intensity higher than this threshold is part of the so-called “image foreground”, which is assumed to contain the neuron signal.
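The iterative process above can be sketched in a few lines (a plain-Python illustration; the function name and the `eps` convergence tolerance are our additions for non-integer images):

```python
def isodata_threshold(pixels, eps=0.5):
    """Iterative mean-based threshold described in the text: start from the
    global mean, then repeatedly average the means of the two classes the
    current threshold induces, until the candidate stops changing."""
    vals = list(pixels)
    t = sum(vals) / len(vals)                 # first candidate: mean intensity
    while True:
        lo = [v for v in vals if v <= t]
        hi = [v for v in vals if v > t]
        if not lo or not hi:                  # degenerate split: keep t
            return t
        t_new = 0.5 * (sum(lo) / len(lo) + sum(hi) / len(hi))
        if abs(t_new - t) < eps:              # converged
            return t_new
        t = t_new
```

Pixels brighter than the returned value form the image foreground.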
Lastly, since the extracted 3D image foreground could contain multiple neurite areas and noise, neuron reconstruction is carried out in individual areas, which are “stitched” together via post-processing. Discarding the very small pieces (smaller than 10 pixels) removes the noise still present in the image foreground.
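Discarding small pieces amounts to a connected-component filter. A minimal sketch, assuming the foreground is held as a set of voxel coordinates and using the 26-neighborhood that the tracing algorithm below also uses (the function name and representation are illustrative):

```python
from collections import deque
from itertools import product

def remove_small_pieces(foreground, min_size=10):
    """Drop connected components (26-neighborhood) smaller than min_size
    voxels. `foreground` is a set of (x, y, z) tuples; returns the kept set."""
    offsets = [d for d in product((-1, 0, 1), repeat=3) if d != (0, 0, 0)]
    unseen = set(foreground)
    kept = set()
    while unseen:
        seed = unseen.pop()
        comp = {seed}
        queue = deque([seed])
        while queue:                           # flood-fill one component
            x, y, z = queue.popleft()
            for dx, dy, dz in offsets:
                n = (x + dx, y + dy, z + dz)
                if n in unseen:
                    unseen.discard(n)
                    comp.add(n)
                    queue.append(n)
        if len(comp) >= min_size:              # keep only sizable pieces
            kept |= comp
    return kept
```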
Neuron tracing using coupled distance fields
Figure 2 shows that the skeleton of a neuron segment is essentially the medial axis of this segment. How can the medial axis be recovered? Possibly the simplest way is image thinning, which unfortunately has two well-known problems: (a) sensitivity to the orientation of the image region of interest, and (b) unnecessary forks of the skeleton at the ends of the image region. Another intuitive approach is tube fitting (e.g. [17]) or rolling-ball fitting. Below, we present a simple method that works robustly without parameters.
The distance transform of an image region R with respect to another image region T is defined as follows: for each image pixel in R, replace its intensity with the shortest distance to T. Typically T is selected as the image background, but it can also be any specific image pixel. We make the following observations about Figure 2.
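The definition can be stated directly in code. This brute-force sketch is for illustration only (O(|R|·|T|); practical implementations use linear-time passes, as discussed later for the city-block metric):

```python
import math

def distance_transform(region, targets):
    """Distance transform per the text's definition: each pixel in `region`
    is assigned its shortest (Euclidean) distance to the set `targets`.
    Pixels are coordinate tuples; returns a dict pixel -> distance."""
    return {p: min(math.dist(p, t) for t in targets) for p in region}
```

With `targets` set to the background boundary this yields the “pressure” field; with `targets = {seed}` it yields the “thrust” field of the next section.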

In the distance transform of a neuron segment with respect to an arbitrarily selected seed location, s, at a tip point of the neuron segment the distance-transformed pixel intensity has a larger value than those of the nearby foreground pixels. In other words, any tip/terminal point in a neuron structure will be a local maximum among the boundary pixels in this distance field.

In the distance transform of a neuron segment to the image background, the boundary pixels will have value 1 and the skeleton points will form a ridge of local maximal values, compared to all other image pixel locations orthogonal to the tangent direction of the skeleton. Extracting this ridge will skeletonize the neuron segment.

Assume we have a rolling ball that is pushed forward by these two coupled distance “force” fields. This ball will move toward the skeleton from any starting location and then along the ridge curve. Following its trajectory, we can extract the neuron skeleton quickly. For this reason, in Figure 2 we call these two force fields the “thrust” (F_{1}, the distance transform field of the foreground pixels to an automatically determined seed point) and “pressure” (F_{2}, the distance transform field of foreground pixels (voxels) to the background) fields, respectively.

Multiple traced neuron segments can be merged at their convergence point to reconstruct the treelike structure of a neuron.
 (1)
Detect neuron region(s) in 3D using the method in 2.1. Multiple spatially non-connected neuron regions may be produced.
 (2)
For each neuron region R, find the set of boundary pixels B, defined as the set of pixels having at least one neighboring pixel (26-neighbors in 3D) in the background.
 (3)
Arbitrarily select a seed location, s, from the boundary pixel set B.
 (4)
Compute both the thrust and pressure distance fields with respect to s and B, using the entire image.
 (5)
In the thrust field, detect the set of all local maxima locations, M.
 (6)
For each point t ∈ M, initialize a skeleton C(t) = {t}. Denote the last added skeleton point as p (initially p = t).
 (7)
Denote π(p) = R ∩ Ω(p), where Ω(p) is the 26-neighborhood of pixel p in 3D. Among all pixels q ∈ π(p), find the q* that has the largest pressure-field value.
 (8)
If q* also has a lower thrust-field value than p (note that we are back-tracing a path), then add q* to the skeleton set.
 (9)
Assign q* to p, and repeat Steps 7 and 8 until the seed location s is reached. This completes the skeleton of a neuron segment C(t).
 (10)
Merge the common portions of multiple skeletons.
 (11)
Use the pressure-field value of each skeleton point as its radius estimate.
 (12)
Assemble the neurite reconstructions of all neuron regions using the post-processing method in 2.3.
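The back-tracing core (Steps 6-9) can be sketched as below. This is one plausible reading of those steps, not the authors' code: we represent the two fields as dicts over foreground voxels, and we restrict the move to neighbors strictly downhill in thrust before taking the one with maximal pressure, which guarantees termination.

```python
from itertools import product

def trace_skeleton(terminal, seed, thrust, pressure):
    """Back-trace one skeleton curve from a terminal point to the seed:
    at each step move to the 26-neighbor with the largest pressure value
    among those with a lower thrust value (i.e. closer to the seed).
    `thrust` and `pressure` map foreground voxels (x, y, z) to field values."""
    offsets = [d for d in product((-1, 0, 1), repeat=3) if d != (0, 0, 0)]
    skeleton = [terminal]
    p = terminal
    while p != seed:
        x, y, z = p
        cands = [(x + dx, y + dy, z + dz) for dx, dy, dz in offsets]
        cands = [q for q in cands if q in thrust and thrust[q] < thrust[p]]
        if not cands:              # dead end: should not occur on a clean field
            break
        p = max(cands, key=lambda q: pressure[q])   # ride the pressure ridge
        skeleton.append(p)
    return skeleton
```

On a straight tube the pressure ridge is the center line, so the traced path follows the medial axis back to the seed.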
In its implementation, our DF-Tracing algorithm can be further optimized. Indeed, Steps 6-9 can be parallelized. Instead of finding the complete skeleton for each terminal point t ∈ M sequentially, we can grow every skeleton one step at a time in parallel. A skeleton stops growing when either the seed location or any other skeleton pixel has been reached. This process iterates until all skeletons stop growing. The parallelized algorithm also avoids Step 10, i.e. merging the common portions of skeletons. In addition, to save computational time when calculating the distance fields, we use the city-block distance [26] instead of the Euclidean distance.
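The speed advantage of the city-block metric comes from the fact that it can be computed with a single multi-source breadth-first search over the grid, since with axis-only moves the BFS depth equals the L1 distance. A minimal sketch (our function name; it computes the unconstrained L1 distance to the target set, which is the metric variant):

```python
from collections import deque

def cityblock_distance_field(shape, targets):
    """Linear-time city-block (L1) distance transform of a 3D grid with the
    given shape to a set of target voxels, via multi-source BFS over the
    6-neighborhood. Returns a dict voxel -> distance."""
    X, Y, Z = shape
    dist = {t: 0 for t in targets}
    queue = deque(targets)
    while queue:
        x, y, z = queue.popleft()
        for n in ((x - 1, y, z), (x + 1, y, z), (x, y - 1, z),
                  (x, y + 1, z), (x, y, z - 1), (x, y, z + 1)):
            nx, ny, nz = n
            if 0 <= nx < X and 0 <= ny < Y and 0 <= nz < Z and n not in dist:
                dist[n] = dist[(x, y, z)] + 1   # one axis step = distance 1
                queue.append(n)
    return dist
```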
Post-processing: produce the complete reconstruction
The neuron-tracing algorithm in 2.2 returns a tree-like structure for a single 3D connected neuron region. In case the neuron foreground extraction (2.1) produces multiple spatially disconnected neuron regions, we will have multiple neuron trees. Often these pieces need to be assembled into a full reconstruction. Since a gap between two disconnected neuron regions is typically small, only the nearby pieces with a separation smaller than two times the radius of the nearest nodes are connected. Then, an arbitrarily selected “root” location (usually the first leaf node in the SWC representation) is used to sort the order (i.e. parent-child relationships) of all neuron reconstruction nodes. Finally, pruning the very short branches whose lengths are less than two pixels completes the reconstruction.
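The stitching criterion can be sketched as follows. The function name is ours, and "two times the radius of the nearest nodes" is read here as twice the larger of the two nearest nodes' radii; the paper may intend a slightly different combination.

```python
import math

def should_stitch(tree_a, tree_b):
    """Decide whether two traced pieces should be connected: find the closest
    pair of reconstruction nodes and connect only if their gap is smaller
    than twice the (larger) node radius. Each tree is a list of
    ((x, y, z), radius) nodes. Returns (connect?, node_a_xyz, node_b_xyz)."""
    best = None
    for pa, ra in tree_a:
        for pb, rb in tree_b:
            gap = math.dist(pa, pb)
            if best is None or gap < best[0]:
                best = (gap, (pa, ra), (pb, rb))
    gap, (pa, ra), (pb, rb) = best
    return gap < 2.0 * max(ra, rb), pa, pb
```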
Pros and cons of DF-Tracing, and comparison to other methods
DF-Tracing is an efficient, deterministic, and essentially parameter-free method (for the core coupled-distance-fields part). Compared to many previous neuron-tracing methods, this new method avoids the complications introduced by the need to select parameters. In addition, DF-Tracing is a local search method, like the majority of existing neuron-tracing techniques. While local search cannot guarantee the global correctness of the final reconstruction, compared to methods that use global guiding information (e.g. Vaa3D-Neuron 1.0), DF-Tracing has the advantage of using a smaller amount of computer memory. Moreover, we note that the termini produced in the thrust field (Step 5 of the DF-Tracing algorithm) could be used as global guiding prior information for the Vaa3D-Neuron system. In that sense, Steps 6-10 of DF-Tracing are basically equivalent to the shortest-path algorithm in the graph-augmented deformable model [2]. However, DF-Tracing does not need to literally produce the graph of image pixels and thus uses less computer memory.
DF-Tracing has several caveats that deserve further improvement. First, it is based on the distance transform, which may be sensitive to the neuron boundary and to anisotropic neuron structure in 3D images. This can be refined in the future by (1) replacing the distance transform with a more robust statistical-test method, similar to the diameter-estimation method used in Vaa3D-Neuron, and (2) smoothing the contour/edge of the extracted image foreground to make the distance transform more robust to image noise. Second, the post-processing of DF-Tracing can be further improved by adding machine learning methods (e.g. [27]). Third, the preprocessing step can be further improved using a multiscale anisotropic filtering approach [23].
Results and discussion
To assess the DF-Tracing method, we consider several datasets, especially one consisting of 3D confocal images of dragonfly neurons (species L. luctuosa), which have very complicated arborization patterns, heavy noise and uneven image background (Figure 1(a) and (b)). We also tested DF-Tracing using neuron images from other organisms, such as the fruit fly (species D. melanogaster). We compared DF-Tracing with existing automatic approaches, especially NeuronStudio [28] and iTube [17], and with the semi-automatic approach Vaa3D-Neuron 1.0 ([1]).
We tested the neuron-tracing algorithm on 3D confocal image stacks of neurons in dragonfly (data set obtained from [29]) and Drosophila (data from the Digital Reconstruction of Axonal and Dendritic Morphology (DIADEM) competition, http://www.diademchallenge.org).
Neuron signal enhancement and neuron tracing
Neuron tracing: comparison with other methods
 (1)
Vaa3D-Neuron 1.0 semi-automatic tracing ([2]);
 (2)
NeuronStudio automatic tracing (0.9.92 version, http://research.mssm.edu/cnic/toolsns.html);
 (3)
iTube automatic tracing [17].
To compare the key tracing modules of the different methods, we used one confocal image of a fruit fly olfactory projection neuron, which has a very high signal-to-noise ratio and has been used in several previous studies ([31]). We binarized this image for testing.
Neuron tracing: quantitative analysis
Number of branches of different methods

Method                V3D-Neuron 1.0   iTube   NeuronStudio   DF-Tracing
Number of branches    17               15      9              17
Number of branches of different methods

Method                V3D-Neuron 1.0   iTube   NeuronStudio   DF-Tracing
Number of branches    27               6       3              27
Neuron tracing: robustness
Automatic tracing of complicated morphology of many neurons
Conclusion
We have developed an automatic neuron tracing method, DF-Tracing, that outperformed several previous automatic and semi-automatic methods on a very challenging set of dragonfly neurons with complex morphology and high noise levels. The method is efficient and essentially parameter-free. DF-Tracing has application potential in large-scale neuron reconstruction and anatomy projects.
Notes
Declarations
Acknowledgements
In DF-Tracing we use a neuron-sorting module developed by Yinan Wan and Hanchuan Peng for post-stitching multiple neuron segments. We thank Anthony Leonardo for providing the equipment necessary for neuron labeling and image data collection, and Ting Zhao for assistance in comparing with the iTube software. We thank Erhan Bas and Margaret Jefferies for proofreading this manuscript and for their suggestions. This research is supported by the Howard Hughes Medical Institute. This work was also supported by the Chinese National Natural Science Foundation under grants No. 61001047 and 61172002.
Authors’ Affiliations
References
 Peng H: V3D enables real-time 3D visualization and quantitative analysis of large-scale biological image data sets. Nature Biotechnol 2010, 28: 348-353. 10.1038/nbt.1612
 Peng H: Automatic reconstruction of 3D neuron structures using a graph-augmented deformable model. Bioinformatics 2010, 26: i38-i46. 10.1093/bioinformatics/btq212
 Li A: Micro-optical sectioning tomography to obtain a high-resolution atlas of the mouse brain. Science 2010, 330: 1404-1408. 10.1126/science.1191776
 Peng H: BrainAligner: 3D registration atlases of Drosophila brains. Nat Methods 2011, 8(6): 493-498. 10.1038/nmeth.1602
 Lee TC: Building skeleton models via 3-D medial surface/axis thinning algorithms. CVGIP: Graph Model Image Process 1994, 56: 462-78. 10.1006/cgip.1994.1042
 Palagyi K: A 3D 6-subiteration thinning algorithm for extracting medial lines. Pattern Recognit Lett 1998, 19: 613-27. 10.1016/S0167-8655(98)00031-2
 Borgefors G: Computing skeletons in three dimensions. Pattern Recognition 1999, 32: 1225-36. 10.1016/S0031-3203(98)00082-X
 Al-Kofahi KA: Rapid automated three-dimensional tracing of neurons from confocal image stacks. IEEE Trans Inf Technol Biomed 2002, 6: 171-87. 10.1109/TITB.2002.1006304
 Streekstra GJ: Analysis of tubular structures in three-dimensional confocal images. Network 2002, 13: 381-95. 10.1088/0954-898X/13/3/308
 Schmitt S: New methods for the computer-assisted 3-D reconstruction of neurons from confocal image stacks. NeuroImage 2004, 23: 1283-98. 10.1016/j.neuroimage.2004.06.047
 Santamaría-Pang A: Automatic centerline extraction of irregular tubular structures using probability volumes from multiphoton imaging. Proc Med Image Comput Comput-Assist Interv 2007, 4792: 486-94.
 Myatt DR: Towards the automatic reconstruction of dendritic trees using particle filters. Nonlinear Stat Signal Process Workshop 2006, 2006: 193-6.
 Evers JF: Progress in functional neuroanatomy: precise automatic geometric reconstruction of neuronal morphology from confocal image stacks. J Neurophysiol 2005, 93(4): 2331-2342.
 Meijering E: Design and validation of a tool for neurite tracing and analysis in fluorescence microscopy images. Cytometry A 2004, 58(2): 167-76.
 Weaver CM: Automated algorithms for multiscale morphometry of neuronal dendrites. Neural Computation 2004, 16(7): 1353-1383. 10.1162/089976604323057425
 Wearne S: New techniques for imaging, digitization and analysis of three-dimensional neural morphology on multiple scales. Neuroscience 2005, 136: 661-680. 10.1016/j.neuroscience.2005.05.053
 Zhao T: Automated reconstruction of neuronal morphology based on local geometrical and global structural models. Neuroinformatics 2011, 9(2-3): 247-61.
 Zhang Y: 3D axon structure extraction and analysis in confocal fluorescence microscopy images. Neural Comput 2008, 20: 1899-1927.
 Xie J: Automatic neuron tracing in volumetric microscopy images with anisotropic path searching. Med Image Comput Comput-Assist Interv - MICCAI 2010, Pt II 2010, 6362: 472-479. 10.1007/978-3-642-15745-5_58
 Peng H: Automatic 3D neuron tracing using all-path pruning. Bioinformatics 2011, 28: i239-i247.
 Donohue DE: Automated reconstruction of neuronal morphology: an overview. Brain Res Rev 2011, 67(1-2): 94-102.
 Cannon RC: An on-line archive of reconstructed hippocampal neurons. J Neurosci Methods 1998, 84: 49-54.
 Sato Y: 3D multi-scale line filter for segmentation and visualization of curvilinear structures in medical images. Med Image Anal 1998, 2(2): 143-168. 10.1016/S1361-8415(98)80009-1
 Descoteaux M: A multiscale geometric flow for segmenting vasculature in MRI. Computer Vision and Mathematical Methods in Medical and Biomedical Image Analysis, Lecture Notes Comput Sci 2004, 3117: 169-180. 10.1007/978-3-540-27816-0_15
 Yuan Y: Multiscale model-based vessel enhancement using local line integrals. 30th Annu Int IEEE EMBS Conf, Vancouver 2008, 2008: 2225-8.
 Borgefors G: Distance transformations on digital images. Comput Vis Graph Image Process 1986, 34: 344-371. 10.1016/S0734-189X(86)80047-0
 Peng H: Proof-editing is the bottleneck of 3D neuron reconstruction: the problem and solutions. Neuroinformatics 2011, 9(2-3): 103-105.
 Rodriguez A: Rayburst sampling, an algorithm for automated three-dimensional shape analysis from laser-scanning microscopy images. Nat Protoc 2006, 1(4): 2156-2161.
 Gonzalez-Bellido P: Eight pairs of descending visual neurons in the dragonfly give wing motor centers accurate population vector of prey direction. PNAS 2012. In press.
 Huynh-Thu Q: Scope of validity of PSNR in image/video quality assessment. Electron Lett 2008, 44(13): 800-801. 10.1049/el:20080522
 Jefferis G: Comprehensive maps of Drosophila higher olfactory centers: spatially segregated fruit and pheromone representation. Cell 2007, 128(6): 1187-1203. 10.1016/j.cell.2007.01.040
Copyright
This article is published under license to BioMed Central Ltd. This is an Open Access article distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/by/2.0), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.