Tracing retinal vessel trees by transductive inference
 Jaydeep De^{1, 2},
 Huiqi Li^{3} and
 Li Cheng^{1, 4}
https://doi.org/10.1186/1471-2105-15-20
© De et al.; licensee BioMed Central Ltd. 2014
Received: 22 March 2013
Accepted: 13 January 2014
Published: 18 January 2014
Abstract
Background
Structural study of retinal blood vessels provides an early indication of diseases such as diabetic retinopathy, glaucoma, and hypertensive retinopathy. These studies require accurate tracing of the retinal vessel tree structure from fundus images in an automated manner. However, existing work encounters great difficulties when dealing with the crossover issue commonly seen in vessel networks.
Results
In this paper, we consider a novel graph-based approach to address this tracing-with-crossover problem: After initial steps of segmentation and skeleton extraction, a graph representation can be established, where each segment in the skeleton map becomes a node, and a direct contact between two adjacent segments is translated to an undirected edge between the two corresponding nodes. The segments in the skeleton map touching the optical disk area are considered as root nodes. This determines the number of trees to be found in the vessel network, which is always equal to the number of root nodes. Based on this undirected graph representation, the tracing problem is further connected to the well-studied transductive inference in machine learning, where the goal becomes that of properly propagating the tree labels from the known root nodes to the rest of the graph, such that the graph is partitioned into disjoint subgraphs, or equivalently, each of the trees is traced and separated from the rest of the vessel network. This connection enables us to address the tracing problem by exploiting established developments in transductive inference. Empirical experiments on publicly available fundus image datasets demonstrate the applicability of our approach.
Conclusions
We provide a novel and systematic approach to tracing retinal vessel trees in the presence of crossovers by solving a transductive learning problem on induced undirected graphs.
Background
Topological and geometrical properties of retinal blood vessels from fundus images can provide valuable clinical information in diagnosing diseases. In particular, vascular anomaly in the retina is one of the clinical manifestations of retinal diseases such as diabetic retinopathy, glaucoma, and hypertensive retinopathy. Take diabetic retinopathy as an example: it is a leading cause of blindness in the working-age population of most developed countries. Diabetic retinopathy is the result of progressive damage to the network of tiny blood vessels that supply blood to the retina. It is classified into two major groups in clinics according to the severity of the disease: non-proliferative and proliferative. Proliferative diabetic retinopathy is characterized by the formation of newly formed vessels in the retina, while non-proliferative diabetic retinopathy refers to the absence of abnormal new blood vessels [1]. The description of blood vessel tree structure is therefore essential in the clinical diagnosis of eye diseases such as diabetic retinopathy. Unfortunately, commercial software still largely relies on manual tracing of the blood vessel trees. This is tedious and time-consuming due to the highly variable structure of these retinal vessels, and is not sustainable for high-throughput analysis in clinical settings.
Existing efforts in retinal vessel analysis can be roughly categorized into two groups, namely, segmentation-based and tracking-based. The segmentation-based methods often use pixel classification [2–9] to produce a binary segmentation, where a pixel is classified as vessel or non-vessel. Ricci et al. [10] work with orthogonal line operators and a support vector machine to perform pixel-wise segmentation. Mendonca et al. [2] use four directional differential operators to detect the vessel centerlines, which are then engaged for morphologically reconstructing the vessels. An unsupervised curvature-based vessel segmentation method is proposed by Garg et al. [3]. Meanwhile, a deformable contour model is adopted by Espona et al. [4], by incorporating the snake method with domain-specific knowledge such as the topological properties of blood vessels. Soares et al. [6] adopt an eighteen-dimensional Gabor response feature to train two Gaussian mixture models (GMMs), which are further employed to produce a binary probability map for a test image. The tracking-based methods [11–21], on the other hand, usually start with a seed and track the intended vessel based on local intensity or texture information. The authors of [12] divide the image into a non-overlapping grid and consider each grid cell separately for seed finding, followed by a tracking procedure to uncover the vessel network structure. In a series of research efforts [18–21], the authors extract a tubularity measure for all image pixels and connect those pixels having high tubularity values. Then, an optimal set of trees over these tubular pixels is selected by minimizing a global objective function with prior geometric constraints, such as orientation and width of the vessel structure.
In a recent work [22], the tracking problem is formulated as a constrained optimization problem based on vessel orientation and topology, and a candidate enumeration algorithm is devised for the proposed optimization problem, which prunes the search space by maintaining a lower bound on the objective function. Meanwhile, there are also research efforts from the related neural tracing community that utilize graph-based methods and achieve promising results, such as the all-path pruning methods of [23, 24].
It has been observed that segmentation-based methods tend to produce many disconnected and isolated segments [10], which are less favourable towards retaining the important topological properties of vessel networks [25]. Vessel tracking methods, on the other hand, often preserve the connectivity structure of vessel segments. Nonetheless they encounter great difficulties with the occurrence of crossovers [15] at the junction points. Current methods often fail to trace properly, as it is non-trivial to predict whether the vessel segments touching at a junction point belong to one tree, or to two or more trees, and in the latter case, to which tree each segment belongs. In this paper, we dedicate our attention to addressing this bottleneck, which is referred to as the crossover issue. One important observation is that local and global contextual information is crucial to resolve the crossover issue. For example, at a junction point, it is very helpful to go beyond the current vessel branch and examine the angular properties of the other vessel branches of the junction. This information is unfortunately ignored by current tracing methods. This observation inspires us to consider a different tracing approach that can take into account both local and global contextual information of the vessel network: After initial steps of pixel-based segmentation and skeleton extraction, a novel graph representation is formed, where each segment in the skeleton map becomes a node, and a direct contact between two adjacent segments is translated to an edge between the two corresponding nodes. The segments in the skeleton map touching the optical disk area are considered as the root nodes. The number of trees to be found in the vessel network thus equals the number of root nodes. This graph representation is further simplified using a modified version of segment ordering [26–28].
Based on the graph representation, the tracing problem is further formulated as a transductive inference problem in machine learning, where the goal becomes that of propagating the tree labels from the known root nodes to the rest of the graph, such that the graph is split into disjoint subgraphs, each of which corresponds to a tree of the vessel network.
The main contributions of this paper are threefold. First, our approach offers a principled way of addressing the crossover issue. By connecting to the well-established transductive inference in machine learning [29], both local and global contextual information can be explicitly considered. Second, a novel graph representation is proposed, which can be regarded as an equivalent dual representation of the original vessel network, and is essential for establishing the machine learning connection. Third, the graph representation is simplified using a modified version of segment ordering that is meaningful for tracing purposes. We expect the graph representation and the transductive-inference connection to open the door to an insightful understanding of the characteristics of crossover sections in vessel networks.
Methods
Segmentation
The goal of the segmentation step in our context is to extract vessel skeletons while maintaining their structural connectivity, as well as the corresponding point-wise thickness along the skeletons, based on which the retinal vessels can be faithfully reconstructed. This differs notably from the usual aim of most existing segmentation work, where the emphasis is on achieving a high classification accuracy. As the number of vessel pixels is much smaller than the number of background pixels, a high accuracy is often achieved by missing many vessel pixels, a situation we try to avoid. In fact, our goal can be better described as segmentation with a high recall. In other words, it is critical for us to retain the vessel pixels that keep the local vein and artery branches from being broken or entirely missing. To achieve this, based on two existing methods [6, 8], our segmenter is formed by merging their results in a sequential manner, while emphasizing the retention of the right network connectivity. Since our main focus is the tracing step, we discuss here only the quantitative analysis of the segmentation results, and leave the detailed description of our segmentation step to the Appendix.
Comparison of vessel segmentation performance in DRIVE
At the final stage of the segmentation step, the binary segmentation result is converted into a skeleton map (of one-pixel thickness) by the standard medial-axis transform. Meanwhile the optical disk region is identified and removed by applying a simple smoothing and thresholding step.
Tracing
Let us start by setting down a few definitions. In the skeleton map, pixels can be partitioned into the following three categories (also illustrated in Figure 3):

Body Points: Pixels having two neighbours.

Terminal Points: Pixels having one neighbour.

Branching Points: Pixels having three or more neighbours.
In particular, the terminal points residing inside the removed optical disk area are called the root points, and the remaining terminal points are end points. A segment is thus defined as a continuous subset of skeleton pixels that starts with either a root or a branching point, and ends with a branching or an end point.
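The three pixel categories follow directly from 8-neighbour counts on the binary skeleton. Below is a minimal sketch of that classification; the function name and the NumPy-based neighbour counting are our own illustration, not the paper's code:

```python
import numpy as np

def classify_skeleton_pixels(skel):
    """Classify skeleton pixels by their 8-neighbour count:
    body (2 neighbours), terminal (1 neighbour), branching (>= 3).
    `skel` is a binary 2-D array holding a one-pixel-thick skeleton."""
    skel = skel.astype(bool)
    padded = np.pad(skel, 1)  # zero border so shifts never wrap real pixels
    # Sum the eight shifted copies to count neighbours at every pixel.
    nbrs = sum(np.roll(np.roll(padded, dy, 0), dx, 1)
               for dy in (-1, 0, 1) for dx in (-1, 0, 1)
               if (dy, dx) != (0, 0))[1:-1, 1:-1]
    body = skel & (nbrs == 2)
    terminal = skel & (nbrs == 1)
    branching = skel & (nbrs >= 3)
    return body, terminal, branching
```

On a small T-shaped skeleton this separates the free end from the junction pixels, which is all the tracing step needs to delimit segments.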
It is commonly assumed that for tracing, we always start from the optical disk where the root points of the retinal vessel skeleton are present. As a result, the retinal vessels can always be separated into a disjoint set of rooted trees, with each tree possessing a unique label stemming from its root point (and thus the segment it resides in). This is illustrated as the red-coloured segments of the vessel skeleton in Figure 1 (A). Clearly the number of trees is always known a priori at this stage.
From skeleton map to graph representation
The skeleton map of the segmented image is converted into an undirected graph, G=(V,E), where the nodes V are the vessel segments of the image and there is an edge between two nodes if the corresponding vessel segments are connected in the image. Figure 1 (A) and (B) provide an illustrative example of transforming a small fraction of a skeleton. The segments containing the root points are each regarded as the root node of a distinct tree (Figure 1A, B). The number of possible labels per node equals the number of root nodes in the graph. As the labels of the root nodes are known, they are regarded as labelled nodes (nodes in red color).
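As an illustration of this dual construction, the sketch below builds the node set, the edge set, and the root labels from a mapping of segment ids to the junction points they touch. The data layout and names are assumptions for illustration; the paper derives the adjacency from the skeleton map itself:

```python
from itertools import combinations

def build_segment_graph(segment_junctions, root_segments):
    """Build the undirected dual graph: one node per vessel segment, and
    an edge whenever two segments meet at a common junction point.
    `segment_junctions` maps segment id -> set of junction coordinates;
    `root_segments` are the segments touching the optical disk area."""
    nodes = set(segment_junctions)
    edges = set()
    for a, b in combinations(sorted(nodes), 2):
        if segment_junctions[a] & segment_junctions[b]:  # shared junction
            edges.add((a, b))
    # Each root segment seeds one tree label; all other nodes start unlabelled.
    labels = {seg: i for i, seg in enumerate(sorted(root_segments))}
    return nodes, edges, labels
```

Three segments meeting at one junction thus form a 3-clique in the dual graph, exactly the structure analysed in the weight-matrix rules later on.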
Graph simplification based on segment ordering
Shreve’s ordering
Shreve’s ordering is defined on trees as follows: 1) Each terminal node is assigned an order of 1. 2) If two segments of order μ_{1} and μ_{2} meet, then the resulting segment obtains an order of μ_{1}+μ_{2}. Figure 4 illustrates an example of Shreve’s ordering. Shreve’s ordering assumes that the network structure is a tree with a single root and no crossovers, which is true for river networks. However, we need to handle crossover situations and multiple roots in our context, for which we propose a modified version of Shreve’s ordering.
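The two rules above can be stated as a short recursion. A minimal sketch, assuming the tree is given as a child-list dictionary (our own representation, not the paper's):

```python
def shreve_order(children, node):
    """Shreve stream order on a rooted tree: leaves get order 1, and an
    internal segment's order is the sum of its children's orders."""
    kids = children.get(node, [])
    if not kids:
        return 1  # terminal segment
    return sum(shreve_order(children, c) for c in kids)
```

For example, a root whose two children are a leaf and a two-leaf branch receives order 3, i.e. the number of terminal segments below it.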
Modified Shreve’s ordering
Graph simplification
Tracing as transductive inference
Transductive inference was introduced in the mid-70s and has since become popular in machine learning. It is also closely related to semi-supervised learning^{1}. Compared to inductive learning, which aims at estimating an unknown function over the entire space, transductive inference focuses on estimating the values of an unknown function at only a few points of interest. This fits precisely into our context. By leveraging this insight, a number of learning algorithms, notably the label propagation algorithm of [30], have been developed to address real-life applications.
Formally, assume there are n nodes in our graph representation of the vessel skeleton. Y_{l} = (y_{1},y_{2},…,y_{l}) denotes the l root nodes (red nodes) that are observed, while Y_{U} = (y_{l+1},y_{l+2},…,y_{n}) denotes the remaining unobserved nodes (blue nodes). Given this graph representation, the tracing problem can be reformulated as a problem of transductive inference, by propagating the labels from the known root nodes (red nodes) to the remaining nodes (blue nodes) in the current graph. Clearly it is an easy problem when the subgraphs are isolated from each other (i.e. without crossovers in the skeleton map), and becomes increasingly difficult as there are more and more crossovers in the skeleton map. The label propagation algorithm of [30] is adapted to our context as illustrated in Algorithm 1. By starting with the initial guess ${\hat{Y}}^{(0)}$, it is not difficult to show that this algorithm always converges to ${\hat{Y}}^{(\infty)}=(1-\alpha){(I-\alpha L)}^{-1}{\hat{Y}}^{(0)}$, and the rate of convergence depends on the eigenvalues of the graph Laplacian [30].
Algorithm 1 Label Propagation (Zhou et al. [30])
In practice, the output label ${\hat{Y}}^{\left(\infty \right)}\triangleq {\hat{Y}}^{\left(T\right)}$ is obtained in finitely many steps, once the change of labels satisfies $\parallel {\hat{Y}}^{\left(T\right)}-{\hat{Y}}^{(T-1)}\parallel \le \epsilon$, with ε set to 10^{−5}. We also empirically fix α to 0.9 in our experiments.
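The iteration of Algorithm 1 can be sketched as follows, using the symmetrically normalised affinity S = D^{-1/2} W D^{-1/2} as in Zhou et al. [30] and the stopping rule above with α = 0.9. The function name and the isolated-node guard are our additions:

```python
import numpy as np

def label_propagation(W, Y0, alpha=0.9, eps=1e-5, max_iter=1000):
    """Iterate Y <- alpha * S @ Y + (1 - alpha) * Y0 until the change of
    labels falls below eps. W is the n x n affinity matrix; Y0 is the
    n x c initial label matrix (rows of root nodes are one-hot)."""
    d = W.sum(axis=1).astype(float)
    d[d == 0] = 1.0                    # guard: isolated nodes keep Y0
    s = 1.0 / np.sqrt(d)
    S = W * np.outer(s, s)             # D^{-1/2} W D^{-1/2}
    Y = Y0.copy()
    for _ in range(max_iter):
        Y_new = alpha * (S @ Y) + (1.0 - alpha) * Y0
        if np.abs(Y_new - Y).sum() <= eps:
            return Y_new
        Y = Y_new
    return Y
```

On a toy graph with two disconnected pairs of nodes and one labelled root per pair, each unlabelled node ends up with the label of its own component, which is the behaviour the tracing step relies on.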
Computing the weight matrix W
The weight matrix W is a real symmetric matrix of size n × n, which is sometimes referred to as the affinity matrix in graph theory. Clearly W is of central importance in our approach, as it is assumed to encode sufficient information from the input image data. In this paper, orientation-based features are proposed as the sufficient statistics for computing W, as below:

Segment orientation and angle between segments: For each skeleton point, the first eigenvector of the Hessian matrix of the local image patch determines an orientation. A (usually curved) vessel segment comes with two ends, and thus has two local orientations. For each end, its orientation is computed by averaging the eigenvectors of the last ten skeleton points from that end. It is then used to compute θ ∈ [0,π), the angle between two adjacent segments.
where the parameters k and θ_{ c } are experimentally fixed as k = 5 and θ_{ c } = 80°.
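A simplified sketch of the end-orientation and angle computation: here each end's orientation is approximated by the chord direction over its last ten skeleton points rather than the averaged Hessian eigenvectors used in the paper, so this is a stand-in rather than the exact feature:

```python
import numpy as np

def end_orientation(points):
    """Orientation of a segment end, approximated by the chord direction
    over its last ten skeleton points (a stand-in for averaging the
    Hessian eigenvectors as in the paper)."""
    pts = np.asarray(points[-10:], dtype=float)
    d = pts[-1] - pts[0]
    return d / (np.linalg.norm(d) + 1e-12)  # unit direction vector

def angle_between(u, v):
    """Angle theta in [0, pi] between the orientations of two adjacent
    segment ends."""
    return float(np.arccos(np.clip(np.dot(u, v), -1.0, 1.0)))
```

Two perpendicular segment ends yield θ ≈ π/2, the kind of value that the weight functions f_1, f_2 and f_3 then map to edge affinities.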
The diagonal elements of the weight matrix W are always zero, W_{ii} = 0, ∀i. The computation of the weight matrix elements W_{ij} for two different segments i ≠ j involves the following rules:

3Clique: i.e. there are three adjacent segments in the junction.

In this scenario we use the following equation: ${W}_{ij}=\exp(f_{1}(\theta)).$ (4)

The rationale behind choosing this function comes from the intuition that we want to encourage small changes of local curvature between two connected segments, while penalizing larger curvature changes. Figure 8(A) and (C) show the effects of varying θ in f_{1} and W.

Figure 9 presents three exemplar 3-cliques: subplot (A) (case A) is the standard branching situation where the segments (marked as i, j and k) belong to the same label. In subplot (B) (case B), the red segment terminates on a blue segment and creates a 3-clique. Here the segment i should have a label different from that of segments j and k. In subplot (C) (case C), a crossover between the blue and the red branches is converted into two 3-cliques due to an error in skeletonization. We differentiate case A from cases B and C with the help of the segment ordering algorithm explained previously. Cases B and C are further differentiated by checking the length of segment k (C pixels). If C ≤ C_{critical} then we consider it as case C, otherwise it is case B. The value of C_{critical} is experimentally set as C_{critical} = 10.

Then we find the orientation difference for the two segment pairs (i,k) and (j,k). These two angles are ψ_{2} and ψ_{1} (shown in Figure 9(B)). We then calculate W_{ik} and W_{jk} as follows: If ψ_{1} ≥ ψ_{2} then $W_{jk}=\exp(f_{1}(\theta)), \quad W_{ik}=\exp(-f_{1}(\theta))$ (5)

else $W_{ik}=\exp(f_{1}(\theta)), \quad W_{jk}=\exp(-f_{1}(\theta))$ (6)

4Clique: i.e. there are four fullyconnected segments in the junction.

As displayed in Figure 10, if A, B, C and D are the four pixels connecting the four segments X, Y, W and Z in a crossover setting, intuitively only the lines $\overline{\mathit{\text{AC}}}$ and $\overline{\mathit{\text{BD}}}$ should intersect with each other. The other pairs $(\overline{\mathit{\text{AB}}},\overline{\mathit{\text{CD}}})$ and $(\overline{\mathit{\text{AD}}},\overline{\mathit{\text{BC}}})$ cannot intersect within the convex hull of the four points (A,B,C,D). Hence, from the set of feasible line-segment pairs $\{(\overline{\mathit{\text{AB}}},\overline{\mathit{\text{CD}}}),(\overline{\mathit{\text{AD}}},\overline{\mathit{\text{BC}}}),(\overline{\mathit{\text{AC}}},\overline{\mathit{\text{BD}}})\}$ we can easily identify $(\overline{\mathit{\text{AC}}},\overline{\mathit{\text{BD}}})$ as the pair that is able to cross over. As a result, a higher weight should be assigned to the segment pairs (X,Z) (the segments containing the points A and C) and (Y,W) (the segments containing the points B and D). The subplots of Figure 8(B) and (D) suggest defining the function form ${W}_{ij}=\exp(f_{2}(\theta))$ (7)

for these pairs of interest, as well as the function form ${W}_{ij}=\exp(f_{3}(\theta))$ (8)

for the remaining, less favourable pairs.
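The chord-intersection reasoning above is a standard computational-geometry test. The sketch below (our own helper names) checks which of the three pairings produces properly intersecting chords:

```python
def _orient(p, q, r):
    """Sign of the cross product (q - p) x (r - p): >0 left turn, <0 right."""
    return (q[0] - p[0]) * (r[1] - p[1]) - (q[1] - p[1]) * (r[0] - p[0])

def segments_intersect(a, b, c, d):
    """True if line segments ab and cd properly intersect (general position:
    the endpoints must lie strictly on opposite sides of the other chord)."""
    return (_orient(a, b, c) * _orient(a, b, d) < 0 and
            _orient(c, d, a) * _orient(c, d, b) < 0)

def crossover_pairing(junction_pixels):
    """Given the four pixels (A, B, C, D) where the segments of a 4-clique
    meet, return the pairing whose connecting chords intersect; these two
    chords identify the segment pairs that actually cross over."""
    A, B, C, D = junction_pixels
    for (p, q), (r, s) in (((A, B), (C, D)), ((A, C), (B, D)), ((A, D), (B, C))):
        if segments_intersect(p, q, r, s):
            return (p, q), (r, s)
    return None
```

For four pixels at the corners of a square listed in cyclic order, only the two diagonals intersect, so the pairing (AC, BD) is returned, matching the argument above.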

5/6-Clique: As presented in Figure 11(A), in the 5-clique scenario one segment (the blue one) crosses over another segment (the red one) at the branching point. Here the goal is to divide the segments into two groups, one having 2 segments (j and k in subplot (A)) and the other having 3 segments (i, l and m). From segment ordering we know that (i,j) are assigned larger values than the rest, so we already have one member from each group. The remaining members can be assigned by the usual “smooth curve around the junction point” assumption, employing f_{1}(θ) in the same way as in the 3-clique case.

As presented in Figure 11(B), in the 6-clique case two crossovers happen at the same location. The target here is also to divide the segments into two groups: one having 3 segments (i, l and m) and the other having 3 segments (j, n and k). Similarly, from the segment ordering we already know that the two nodes with large index values, (i,j), belong to different groups, so we employ f_{1}(θ) in the same way as in the 5-clique case.
Removing spurious segments
This type of tiny spur can be identified and further removed by checking the average angle β and the length C of the segment: a segment will be removed if β ≤ β_{critical} and C ≤ C_{critical}. In practice, β_{critical} = 70° and C_{critical} = 10.
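The spur test reduces to two thresholds. A trivial sketch (the function name is hypothetical, the thresholds are those from the paper):

```python
def is_spurious(avg_angle_deg, length_px,
                beta_critical=70.0, c_critical=10):
    """Spur test: remove a tiny side branch when its average angle beta to
    the parent is at most beta_critical degrees AND its length is at most
    c_critical pixels (both conditions must hold)."""
    return avg_angle_deg <= beta_critical and length_px <= c_critical
```

Applied as a filter over all candidate side branches, this keeps long branches and sharply diverging branches, removing only short, near-tangential artifacts of skeletonization.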
Results and discussion
Synthetic datasets

Dataset 1: In this dataset, the tree complexity varies while the other two parameters are fixed, namely, the number of trees is fixed to 8 and the spread angle is set to 30°. This gives rise to 5 subsets of images within this dataset: (1) all trees are of low complexity; (2) four of the eight trees are of low complexity, and the remaining four are of medium complexity; (3) all trees are of medium complexity; (4) four of the eight trees are of medium complexity, and the remaining four are of high complexity; (5) all trees are of high complexity. Two examples of this dataset are shown in Figure 15. 1,000 images are produced for each subset; in all, 5,000 images are generated for this dataset.

Dataset 2: In this dataset, the number of trees varies over the set {3,5,7,9,11,12}, while the other two parameters are fixed: the tree complexities are drawn 1/3 from each complexity group, and the spread angle is set to 30°. Two examples are displayed in Figure 16. Each subset contains 1,000 images; in all, we have 6,000 images for this dataset.

Dataset 3: In this dataset, the spread angle varies over the set {360°,300°,240°,180°,120°,60°}, while the number of trees is set to 8, and for the tree complexity the same strategy is used as in dataset 2 (1/3 from each complexity group). Two examples are presented in Figure 17. Each subset contains 1,000 images; in all, we have 6,000 images for this dataset.
Preparation for computing the DIADEM score
Experimental results
Throughout this paper, the DIADEM score is utilized to measure the performance of a method on a particular dataset, obtained by averaging the scores over all images of the current dataset. It has been extensively used in the neural tracing community as a standard evaluation metric [33].
Synthetic datasets
Reallife testbeds: DRIVE and STARE
Comparison of DIADEM score (DS) for DRIVE dataset with other methods
Method             Minimum DS   Maximum DS
Our method         0.703        0.765
Engin et al. [19]  0.15         0.71
Figure 13 presents the visual results of our work. The first column shows the original image, while the second and third columns display the corresponding tracing results without and with graph simplification, respectively. Rows one, two and three show the tracing results for an exemplar image from the synthetic dataset, the DRIVE dataset, and the STARE dataset, respectively. In the subplots, a white square denotes a wrong tracing result while a green circle denotes a correct result. From subplot (B) we can see that the 5-cliques C_{1}, C_{2} and C_{3} are incorrectly traced without the graph simplification module, but become correct when our full model (i.e. with graph simplification) is employed. In both cases the 4-cliques C_{4} and C_{5} are correctly traced. In subplot (E), for DRIVE, we can see that the 5-cliques C_{1} and C_{2} are incorrectly traced, while in subplot (F) they are traced correctly. Note that due to topological errors induced by the segmentation and skeleton extraction steps, a few incorrect tracing results such as C_{4} and C_{5} persist in both subplots (E) and (F) (i.e. without and with graph simplification). Similar patterns can also be observed in the STARE dataset, as shown in the third row (namely subplots (G)-(I)).
Conclusion
In this paper, we propose a novel approach for tracing retinal blood vessels from fundus images, where we formulate the tracing problem as an equivalent transductive learning problem. Our tracing approach performs very well in resolving many crossover scenarios and various complex situations. It sometimes fails due to imperfect segmentation, or in complex scenarios with more than five segments at a junction point. Current results suggest that orientation features are important but might not be sufficient for solving very complex scenarios. As a future direction, we are currently investigating vessel thickness and texture information for resolving these complex scenarios.
Endnotes
^{1} More details on transductive inference can be found in Chapter 11 and 24 of [34].
Appendix
Details of our segmentation step
To facilitate the tracing step of our pipeline, the goal of the segmentation step is to extract the vessel skeleton while maintaining its structural connections, as well as the corresponding point-wise radii along the skeleton (a point's radius measures the thickness at a skeleton point in the orthogonal direction), based on which the retinal vessels can be faithfully reconstructed. This differs notably from the usual aim of most existing segmentation work, where the emphasis is on achieving a high classification accuracy. As the number of vessel pixels is much smaller than the number of background pixels, a high accuracy is often achieved by missing many vessel pixels, a situation we try to avoid. In fact, our goal can be better described as segmentation with a high recall. It is critical for us to retain the vessel pixels that keep the local vein and artery branches from being broken or entirely missing. To achieve this, we resort to a cascade of two segmentation modules for producing our final segments. The first module in the cascade is a supervised segmenter, as described next, and the second is an unsupervised segmenter that specializes in recovering parallel thin branches, which often tend to be merged into one thick branch by the first module.
The first segmentation module: supervised segmenter
In the first module, we implement a supervised segmenter using Gabor filters and GMMs, in line with existing supervised methods for segmenting retinal vessels [6]. For each pixel in the training set, Gabor response features in 18 directions are computed and normalized to form the input features [6]. Two GMMs, each having 20 Gaussian components, one for vessel and the other for non-vessel background pixels, are trained on these features as a pixel classifier. Then, for a test image, applying the trained GMMs gives the probability of a pixel being vessel or not. A probability map of the image is produced by maximizing over these two probabilities for each pixel.
Unfortunately, the result after merging the highest-F1 and the highest-recall outputs is still not satisfactory: a characteristic of supervised methods seems to be that they tend to merge very close parallel branches into one branch, which is undesirable for our tracing purpose. So we need to consider a second module based on unsupervised segmentation.
The second module: unsupervised segmenter
We have experimented with a few existing methods and observed that the segmenter of [8], which uses the Isotropic Undecimated Wavelet Transform (IUWT), empirically produces the best segmentation for close and parallel branches, as illustrated in Figure 21(D) and (G). As a second add-on module of the cascade, based on the current partial result from supervised segmentation, the wrongly merged thick branches are identified and replaced by the parallel branches from the second module.
Combining supervised and unsupervised method of segmentation
For combining the images from the supervised and unsupervised methods of segmentation (three images in total) we follow these steps.

We have used the binary segmented images and extracted the skeleton from them.

Depending on the number of neighbours, we mark the skeleton pixels as body pixels (those with 2 neighbours), branching pixels (those with 3 or more neighbours) or terminal pixels (those with one neighbour). We define a vessel segment as a group of body pixels that are connected together.

We calculate the median diameter of each vessel segment by estimating the diameter at each point of the vessel segment's skeleton, following the method described in [8]. Then we calculate the mean diameter (d_{m}) over all the vessel segments of a particular image.

We replace the segments whose diameter is less than d_{m} in Figure 21B by the same segments from Figure 21C. While replacing, we take care to preserve the continuity of the connected segments, and we always prefer thin and longer segments over thick and shorter segments.

Then we take those segments from Figure 22B whose diameter is more than one standard deviation above d_{m} and replace them with the segments from Figure 21D.
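The selection rule in the steps above can be summarised as a small decision function; 'B', 'C' and 'D' are illustrative stand-ins for the three binary maps of Figures 21B-D, and the function name is our own:

```python
def choose_source(median_diam, d_m, sd):
    """Pick which binary map supplies a segment during the merge
    (a simplified restatement of the steps above): thin segments are
    replaced from map 'C', very thick ones from the IUWT map 'D',
    and the rest keep the supervised result 'B'."""
    if median_diam < d_m:
        return 'C'          # thinner than the mean: take the 'C' version
    if median_diam > d_m + sd:
        return 'D'          # more than one s.d. above the mean: take 'D'
    return 'B'              # otherwise keep the supervised segment
```

This is only the per-segment routing logic; the continuity checks and the preference for thin, long segments described above operate on top of it.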
Resolving the disconnection issue
So far the partial result is able to retain the small branches, and it works well with close and parallel branches, as displayed in Figure 22(A). Still, some small branches remain disconnected, as in Figure 22(C)-(D). This is resolved by first fitting a 3rd-order curve to the skeleton of each disconnected branch, and second, reconnecting it by incrementally and carefully extending the fitted curve from both ends in parallel until it makes contact with a main branch. The radius of each point on the extended curve is estimated as a convex combination of the radii of its neighbouring points, with the weights proportional to the inverse distance between the point and its neighbouring points. This finally produces a well-connected structure suitable and ready for tracing (Figure 22(E)-(F)). Note that the segmentation output of an image is represented as the skeleton plus the corresponding point-wise radii.
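The curve-fitting-and-extension idea can be sketched with a least-squares cubic fit and extrapolation. For illustration we assume the branch is parameterisable along the x axis; the incremental contact test against the main branch and the radius interpolation are omitted:

```python
import numpy as np

def extend_branch(skeleton_points, step=1.0, n_steps=5):
    """Fit a 3rd-order polynomial y = p(x) to a disconnected branch's
    skeleton points and extrapolate n_steps points beyond its last point
    (a sketch of the reconnection step; assumes x-parameterisable input)."""
    pts = np.asarray(skeleton_points, dtype=float)
    coeffs = np.polyfit(pts[:, 0], pts[:, 1], deg=3)  # least-squares cubic
    xs = pts[-1, 0] + step * np.arange(1, n_steps + 1)
    return np.column_stack([xs, np.polyval(coeffs, xs)])
```

In the full pipeline this extension would be grown point by point from both ends, stopping as soon as an extended point touches a main-branch pixel.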
Declarations
Acknowledgements
We would like to thank Dr. Weimiao Yu and Mr. Tengfei Ma for discussions, and thank Dr. Sohail Ahmed, Dr. Hwee Kuan Lee, and Dr. Manoranjan Dash for their support. JD and LC are partially supported by JCO grants (1231BFG040, 1231BEG034, 12302FG010, and 1231BFG047). HL is partially supported by NSFC (No. 81271650) and NCET100041.
Authors’ Affiliations
References
 Viswanath K, McGavin D: Diabetic retinopathy: clinical findings and management. Commun Eye Health. 2003, 16 (46): 21-24.
 Mendonca A, Campilho A: Segmentation of retinal blood vessels by combining the detection of centerlines and morphological reconstruction. IEEE Trans Med Imag. 2006, 25 (9): 1200-1213.
 Garg S, Sivaswamy J, Chandra S: Unsupervised curvature-based retinal vessel segmentation. IEEE International Symposium on Biomedical Imaging (ISBI). 2007, USA: IEEE.
 Espona L, Carreira M, Penedo M, Ortega M: Retinal vessel tree segmentation using a deformable contour model. ICPR. 2008, International Association of Pattern Recognition.
 Martinez-Perez M, Hughes A, Thom S, Bharath A, Parker K: Segmentation of blood vessels from red-free and fluorescein retinal images. Med Image Anal. 2007, 11: 47-61. 10.1016/j.media.2006.11.004.
 Soares J, Leandro J, Cesar R, Jelinek H, Cree M: Retinal vessel segmentation using the 2-D Gabor wavelet and supervised classification. IEEE Trans Med Imag. 2007, 25 (9): 1214-1222.
 Marin D, Aquino A, Gegundez-Arias M, Bravo J: A new supervised method for blood vessel segmentation in retinal images by using gray-level and moment invariants-based features. IEEE Trans Med Imag. 2011, 30: 146-158.
 Bankhead P, Scholfield C, McGeown J, Curtis T: Fast retinal vessel detection and measurement using wavelets and edge location refinement. PLoS ONE. 2012, 7 (3): e32435. 10.1371/journal.pone.0032435.
 Wang L, Bhalerao A: Model based segmentation for retinal fundus images. Scandinavian Conference on Image Analysis. 2003, 422-429.
 Ricci E, Perfetti R: Retinal blood vessel segmentation using line operators and support vector classification. IEEE Trans Med Imag. 2007, 26 (10): 1357-1365.
 Xu X, Niemeijer M, Song Q, Sonka M, Garvin M, Reinhardt J, Abramoff M: Vessel boundary delineation on fundus images using graph-based approach. IEEE Trans Med Imag. 2011, 1184-1191.
 Can A, Shen H, Turner J, Tanenbaum H, Roysam B: Rapid automated tracing and feature extraction from retinal fundus images using direct exploratory algorithms. IEEE Trans Med Imag. 1999, 125-138.
 Grisan E, Pesce A, Giani A, Foracchia M, Ruggeri A: A new tracking system for the robust extraction of retinal vessel structure. IEEE EMBS. 2004, USA: IEEE.
 Bekkers E, Duits R, ter Haar Romeny B, Berendschot T: A new retinal vessel tracking method based on orientation scores. CoRR. 2012, abs/1212.3530.
 Al-Diri B, Hunter A, Steel D: An active contour model for segmenting and measuring retinal vessels. IEEE Trans Med Imaging. 2009, 28 (9): 1488-1497.
 Chothani P, Mehta V, Stepanyants A: Automated tracing of neurites from light microscopy stacks of images. Neuroinformatics. 2011, 9 (2-3): 263-278.
 Tolias Y, Panas S: A fuzzy vessel tracking algorithm for retinal images based on fuzzy clustering. IEEE Trans Med Imaging. 1998, 17 (2): 263-273. 10.1109/42.700738.
 Turetken E, Benmansour F, Fua P: Automated reconstruction of tree structures using path classifiers and mixed integer programming. Conference on Computer Vision and Pattern Recognition (CVPR). 2012, USA: IEEE.
 Turetken E, Gonzalez G, Blum C, Fua P: Automated reconstruction of dendritic and axonal trees by global optimization with geometric priors. Neuroinformatics. 2011, 9: 279-302. 10.1007/s12021-011-9122-1.
 Turetken E, Blum C, Gonzalez G, Fua P: Reconstructing geometrically consistent tree structures from noisy images. International Conference on Medical Image Computing and Computer Assisted Intervention (MICCAI). 2010, Beijing, China.
 Gonzalez G, Turetken E, Fleuret F, Fua P: Delineating trees in noisy 2D images and 3D image stacks. Conference on Computer Vision and Pattern Recognition (CVPR). 2010, USA: IEEE.
 Lau Q, Lee M, Hsu W, Wong T: Simultaneously identifying all true vessels from segmented retinal images. IEEE Trans Biomed Eng. 2013, 60 (7): 1851-1858.
 Peng H, Long F, Myers EW: Automatic 3D neuron tracing using all-path pruning. Bioinformatics. 2011, 27 (13): 239-247. 10.1093/bioinformatics/btr237.
 Xiao H, Peng H: APP2: automatic tracing of 3D neuron morphology based on hierarchical pruning of gray-weighted image distance-trees. Bioinformatics. 2013, 29 (11): 1448-1454. 10.1093/bioinformatics/btt170.
 Martinez-Perez M, Hughes AD, Stanton AV, Thom S, Chapman N, Bharath A, Parker K: Retinal vascular tree morphology: a semi-automatic quantification. IEEE Trans Biomed Eng. 2002, 49 (8): 912-917. 10.1109/TBME.2002.800789.
 Horton RE: Erosional development of streams and their drainage basins. Geol Soc Am Bull. 1945, 56 (3): 275-370. 10.1130/0016-7606(1945)56[275:EDOSAT]2.0.CO;2.
 Strahler AN: Quantitative analysis of watershed geomorphology. Trans Am Geophys Union. 1957, 38 (6): 913-920. 10.1029/TR038i006p00913.
 Shreve RL: Infinite topologically random channel networks. J Geol. 1967, 75 (2): 178-186. 10.1086/627245.
 Vapnik V: Statistical Learning Theory. 1998, USA: Wiley.
 Zhou D, Bousquet O, Lal T, Weston J, Scholkopf B: Learning with local and global consistency. Advances in Neural Information Processing Systems (NIPS). 2004.
 Abramoff M, Niemeijer M, Viergever M, van Ginneken B: Ridge-based vessel segmentation in color images of the retina. IEEE Trans Med Imag. 2004, 23 (4): 501-509. 10.1109/TMI.2004.825627.
 Hoover A, Kouznetsova V, Goldbaum M: Locating blood vessels in retinal images by piecewise threshold probing of a matched filter response. IEEE Trans Med Imaging. 2000, 19 (3): 203-210. 10.1109/42.845178.
 Gillette TA, Brown KM, Ascoli GA: The DIADEM metric: comparing multiple reconstructions of the same neuron. Neuroinformatics. 2011, 9 (2-3): 233-245.
 Chapelle O, Scholkopf B, Zien A: Semi-Supervised Learning. 2006, USA: MIT Press.
Copyright
This article is published under license to BioMed Central Ltd. This is an Open Access article distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/by/2.0), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.