Local edge-enhanced active contour for accurate skin lesion border detection

Abstract

Background

Dermoscopy is one of the most common and effective imaging techniques in the diagnosis of skin cancer, especially for pigmented lesions. Accurate skin lesion border detection is the key to extracting important dermoscopic features of the skin lesion. In current clinical settings, border delineation is performed manually by dermatologists. Operator-based assessments lead to intra- and inter-observer variations due to their subjective nature. Moreover, manual delineation is a tedious process. Because of these hurdles, automation of lesion boundary detection in dermoscopic images is necessary. In this study, we address this problem by developing a novel skin lesion border detection method with a robust edge indicator function, which is based on a meshless method.

Result

Our results are compared with other image segmentation methods. Our skin lesion border detection algorithm outperforms other state-of-the-art methods. Based on dermatologist-drawn ground truth skin lesion borders, the results indicate that our method generates more accurate boundaries than other prominent methods, with a Dice score of 0.886 ± 0.094 and a Jaccard score of 0.807 ± 0.133.

Conclusion

We prove that smoothed particle hydrodynamics (SPH) kernels can be used as edge features in active contour segmentation and that a probability map can be employed to prevent the evolving contour from leaking through the boundary of the object of interest.

Background

Image segmentation is the process of finding meaningful regions in an image. Many image processing and analysis methods rely on the accuracy of a proper image segmentation method. In dermoscopic image processing and analysis, image segmentation corresponds to precisely detecting the lesion border. Accuracy of skin lesion border detection in dermoscopic images is critical [1] to extract important structural features, such as irregularity, symmetry, and abrupt border cutoff; and dermoscopic features, such as globules, blue-white areas, and atypical pigment network. However, automated border detection is a challenging task, especially among lesions with a) fuzzy borders, b) low contrast between lesion boundary and surrounding skin, c) low color and texture variations, and d) existence of artifacts such as sweat, hair, and blood vessels.

In the USA, approximately 3.5 million people are diagnosed with skin cancers in a year. Skin cancer is rarely fatal except for melanoma, which is a malignancy of melanocytes [2]. In its January 2017 report, the American Cancer Society estimated that in the U.S. 87,110 adults will be diagnosed with melanoma, and approximately 9730 cases are expected to be fatal [2]. Since melanoma develops in melanocytes, which are specialized cells in the epidermis, it can be detected by visual inspection of the skin. Early diagnosis and treatment of melanoma are key to increasing the chances of survival [3]. However, the high rate of false-negative diagnoses in melanoma cases poses a challenge for early treatment [3].

Dermoscopy is an effective and noninvasive imaging modality in the diagnosis of skin cancers, especially for pigmented lesions. It enables clinicians to closely examine predefined diagnostic features that are not visible otherwise. For this very reason, accurate skin lesion border detection is key to extracting important dermoscopic features of the lesion. These features are evaluated to detect melanoma and other skin diseases [4-7]. It has been shown that dermoscopy increases the accuracy of clinicians' naked eye examination [8]. There are various methods used to segment skin lesions [9]. One of these methods is the active contour algorithm.

Active contour based methods (a.k.a. snakes) are widely used in image segmentation. These methods are also used in lesion segmentation [10-13]. Active contours can be categorized into two main groups: edge-based methods [14] and region-based methods [15]. The former employs edge information [14] while the latter selects a region feature to adjust the movement of the active contour toward the boundary of the object(s) to be segmented [16, 17]. Active contour methods start with a curve around the region of interest (ROI) to be detected; the curve moves toward its interior normals and has to stop on the boundary of the ROI. While some parameters control the smoothness of the contour, others attract the contour toward the center of the ROI. The optimum state of the contour is reached through an iterative process in which internal and external energy functions come to equilibrium and further iterations stop. Edge-based active contours use level sets and have the advantage of handling complicated shapes. However, their parameters are not naturally connected to visual features; therefore, they are very difficult for naive users to tune. Edge-based active contours have been found more suitable for lesion boundary detection [10, 11]. On the other hand, for border detection of skin lesions, active contours were reported [13] to have slower computation times since they require solving an underlying optimization problem. In general, for an active contour method to achieve high accuracy in skin lesion detection, the lesion is expected to have strong edges at which the contour can stop.

Edge-based active contour methods suffer from poorly defined edges, whereas region-based methods are sensitive to inhomogeneity of image intensities. For images with weakly formed object boundaries (e.g., skin lesions with fuzzy borders), the edge-stop function (ESF) fails to stop the curve evolution and, as a result, the contour leaks through the object border [18]. Thus, these methods suffer in skin lesion segmentation when morphological and color variations exist. Specifically, for cases where the skin lesion does not have a strong border (e.g., fuzzy borders, or insufficient contrast between the lesion boundary and the surrounding skin), active contour methods fail to find lesion borders accurately. One of the main contributions of this study is to overcome this failing point. The proposed segmentation method starts with a novel local edge extraction algorithm using smoothed particle hydrodynamics (SPH). Using the edge information coming from SPH, the object border is strengthened using geodesic distances that involve pixel probabilities (whether pixels are foreground, background, or border pixels). This edge information is then passed to active contours to accurately detect skin lesion borders. With this additional edge information, the active contours become robust to leaks.

The process flow of skin lesion border detection with Local Edge-Enhanced Active Contour (LEEAC) is as follows (see Fig. 1). We first apply intermeans thresholding [19] on the given dermoscopy image. This allows us to coarsely locate lesion pixels and background pixels and to extract sample patches which will be used for the background/foreground probability map. Then we perform image filtering using the Perona-Malik [20] denoising method to prevent the active contour from being trapped at relatively strong edges among the background pixels. Later, SPH kernels are calculated to find local edges of the image. We incorporate probability maps into the SPH kernels in order to make the lesion edges even stronger. In this study, it is proven that probability maps incorporated with SPH kernels are robust edge indicator functions that eliminate unwanted leakage problems [18] encountered in active contours. This novel SPH based robust edge indicator function is then solved using level sets [14], which in turn generates accurate skin lesion border detection even for lesions with fuzzy borders.

Fig. 1

LEEAC takes the original dermoscopy image and generates the segmented lesion as shown in the second row. It represents the computational pipeline where LEEAC takes the image, creates background and foreground patches, denoises the image, extracts local edge features with SPH, generates probability maps to further eliminate the leaking problem, applies the active contour, and finally generates the final segmented image using level sets

Methods

This section reviews the developed computational core for lesion segmentation. Figure 1 shows its processing steps. Each of these steps is detailed in the following subsections.

Filtering

We use the Perona-Malik filtering [20] method, which aims to smooth noise while preserving significant information, in our case edges. Perona-Malik filtering is chosen precisely because it preserves edges. The formal representation of this filtering method is as follows:

$$ g(\nabla(I))=\frac{1}{1+\sqrt{1+\frac{{\nabla(I)}^{2}}{\gamma^{2}}}} $$
(1)

where g(∇(I)) represents the diffusion coefficient and ∇(I) the gradient map of the image I. As can be inferred from Eq. 1, ∇(I) and g are inversely proportional, which maintains the notion of the Perona-Malik method. γ is a constant that controls the sensitivity to gradients on the image domain. The diffusion process declines in regions where ∇(I) ≫ γ. Without smoothing, the initial contour is trapped by noise (weak edges) and cannot delineate the lesion border.
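To make the filtering step concrete, the following is a minimal NumPy sketch of Perona-Malik-style anisotropic diffusion using the diffusion coefficient of Eq. 1. It is an illustration under our own assumptions, not the authors' implementation; the parameter names (gamma, n_iter, dt) and the explicit four-neighbor discretization are illustrative choices.

```python
import numpy as np

def perona_malik(img, n_iter=20, gamma=15.0, dt=0.15):
    """Edge-preserving smoothing (illustrative sketch of Eq. 1).

    img   : 2-D float array of grayscale intensities
    gamma : gradient-sensitivity constant from Eq. 1 (assumed value)
    dt    : time step of the explicit diffusion scheme (assumed value)
    """
    u = img.astype(np.float64).copy()
    g = lambda d: 1.0 / (1.0 + np.sqrt(1.0 + (d / gamma) ** 2))  # Eq. 1
    for _ in range(n_iter):
        # finite differences toward the four neighbours
        dN = np.roll(u, -1, axis=0) - u
        dS = np.roll(u, 1, axis=0) - u
        dE = np.roll(u, -1, axis=1) - u
        dW = np.roll(u, 1, axis=1) - u
        # diffuse less where the local gradient is large, so edges are preserved
        u += dt * (g(dN) * dN + g(dS) * dS + g(dE) * dE + g(dW) * dW)
    return u
```

After denoising is completed, the SPH kernel is used to overcome active contour leaking problems, especially for the fuzzy borders of skin lesions.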

A new local edge extraction method: SPH

SPH is an interpolation method which is used for numerically solving diffusion equations. It is used in various applications such as simulations of highly deformable bodies, lava flows, and computational fluid dynamics. Its principle is based on dividing the fluid (medium) into a set of discrete elements called particles. Equation 2 describes this interpolation, where h is the smoothing length of the kernel function W, r is the position of the medium entity of interest, and r′ denotes the positions of adjacent entities within the range of h. There are many kernel functions defined in the literature [21, 22]. One of them is Monaghan's cubic spline [21]; the interpolation in Eq. 2 provides the temperature (in its specific case) A at position r, relying on the temperatures of all particles within a radial distance h.

$$ A(r)= \int \mathrm{A}(r^{\prime})\, \mathrm{W}\left( \left|r-r^{\prime}\right|,h\right) \mathrm{d}r^{\prime} $$
(2)

In Eq. 2, the contribution of a particle to a physical property is weighted by its proximity to the particle of interest and by its density. Commonly used kernel functions deploy cubic spline and Gaussian functions. The cubic spline is exactly zero for particles located at a distance equal to twice the smoothing length, 2h. This decreases the computational cost by discarding the particles' minor contributions to the interpolation.

To expand the representation of the SPH kernel in physics, let us take another particle j and associate it with a fixed lump-shaped volume ΔVj, which leads to determining the computational domain with a finite number of particles. Using the mass and density of the particle, the lump volume can be rewritten as the ratio of mass to density, mj/ρj [23]. The mathematical representation is given in the following equation,

$$ \mathrm{A}(r)= \sum\limits_{j} m_{j} \frac{A_{j}}{\rho_{j}}W\left(\left| r-r_{j} \right|,h\right) $$
(3)

where A is any quantity at r; mj is the mass of particle j; Aj is the value of the quantity A for particle j; ρj is the density of particle j; r is the spatial location; and W is the kernel function. The density of particle i, ρi, can be expressed as in Eq. 4.

$$ {\begin{aligned} \rho_{i}= \rho(r_{i})&=\sum\limits_{j} m_{j} \frac{\rho_{j}}{\rho_{j}}W\left(\left| r_{i}-r_{j} \right|,h\right)\\&= \sum\limits_{j} m_{j} W\left(\left| r_{i}-r_{j} \right|,h\right) \end{aligned}} $$
(4)

where the summation over j covers all particles. Since mj is a scalar, the gradient of a quantity can be found easily by taking the derivative of the kernel, as seen in Eq. 5.

$$ \nabla \mathrm{A} (r)= \sum\limits_{j} m_{j} \frac{A_{j}}{\rho_{j}} \nabla W\left(\left| r-r_{j} \right|,h\right) $$
(5)
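To illustrate the discrete SPH sums in Eqs. 3-5, the short NumPy sketch below evaluates them for a toy one-dimensional particle set. The Gaussian kernel, the particle positions, masses, and the sampled quantity are all illustrative assumptions; any kernel satisfying the conditions of the next subsection could be substituted.

```python
import numpy as np

def kernel(d, h):
    # illustrative 1-D Gaussian smoothing kernel W(|r - r_j|, h)
    return np.exp(-(d / h) ** 2) / (h * np.sqrt(np.pi))

def kernel_grad(d, h):
    # derivative of the kernel with respect to the (signed) distance d
    return -2.0 * d / h ** 2 * kernel(d, h)

# toy particle system: positions r_j, masses m_j, carried quantity A_j
r_j = np.linspace(0.0, 1.0, 50)
m_j = np.full(50, 1.0 / 50)
A_j = np.sin(2 * np.pi * r_j)
h = 0.1                                   # smoothing length (assumed)

signed = r_j[:, None] - r_j[None, :]      # r_i - r_j for all pairs
W = kernel(np.abs(signed), h)

rho = (m_j * W).sum(axis=1)                                      # Eq. 4: densities
A_interp = (m_j * A_j / rho * W).sum(axis=1)                     # Eq. 3: interpolated A
A_grad = (m_j * A_j / rho * kernel_grad(signed, h)).sum(axis=1)  # Eq. 5: gradient of A
```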

Kernel approximation

A feasible kernel must have the following two properties:

$$ \int\limits_{\Omega}\mathrm{W}(r,h)\, \mathrm{d}r = 1 $$
(6)

and

$$ {\lim}_{h \to 0} \mathrm{W} (r,h)=\delta(r) $$
(7)

where δ is the Dirac Delta function.

$$ \delta(r)= \left\{\begin{array}{ll} \infty, & \,\, \text{if}\ r=0 \\ 0, & \,\, \text{otherwise} \end{array}\right. $$
(8)

The kernel must be an even function and non-negative at all times [23]. These conditions are expressed formally as follows:

$$ \mathrm{W(r,h)} \geq 0 \hspace{1 cm}\text{and}\hspace{1 cm} \mathrm{W(r,h)}=\mathrm{W(-r,h)} $$
(9)

Several kernels for SPH have been proposed, including Gaussian, B-spline, and Q-spline [21, 22, 24]. Even though the Q-spline is considered the best in [24] in terms of accuracy, it is computationally expensive due to its square root computations. We use the 6th-degree polynomial kernel suggested by [24] as the default kernel, which is expressed below:

$$ W_{default}(r,h)=\frac{315}{64\,\pi\, h^{9}}{\left(h^{2}- \left| r \right|^{2}\right)^{3}} $$
(10)

with the gradient,

$$ \nabla W_{default}(r,h)=-\frac{945}{32\,\pi\, h^{9}}\, r\,{\left(h^{2}- \left| r \right|^{2}\right)^{2}} $$
(11)

Once SPH is applied to dermoscopy images, it generates all local edge features. Figure 2 illustrates edge features derived by SPH on a dermoscopy image. In our experiments, we empirically selected h = 1 and the 6th-degree kernel for interpolation. The obtained SPH map is used as the edge indicator function in the lesion border segmentation. The formal representation of the edge indicator function is given in Eq. 12,

$$ g=\frac{1}{1+\mid \nabla (G_{\alpha}*I) \mid^{p}}, p=1,2 $$
(12)
Fig. 2

a A dermoscopy image; as can be seen in b, blue lines represent normals of edges on the image, which are later used for lesion border segmentation

where I is the image, G_α is a denoising (smoothing) function, and |∇(G_α ∗ I)|^p is the edge map produced by image gradients. In this paper, we use SPH formulations to calculate surface normals instead of image gradients. Edge indicator functions are commonly represented by g, as shown in Eq. 12.
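The sketch below shows one way such an SPH-based edge indicator could be assembled on a pixel grid: pixels act as particles of unit mass, discrete stencils of the kernel gradient in Eq. 11 estimate the x/y gradient components by convolution, and Eq. 12 converts their magnitude into g. This is our own illustration rather than the authors' code; in particular, h is set to 2 pixel units here only so that the discrete stencil is non-trivial (the study reports h = 1 in its own parameterization), and the use of scipy.ndimage.convolve is an assumed implementation detail.

```python
import numpy as np
from scipy.ndimage import convolve

def poly6_grad_stencils(h=2.0, radius=2):
    """Discrete x/y stencils of the poly6 kernel gradient (Eq. 11)."""
    y, x = np.mgrid[-radius:radius + 1, -radius:radius + 1].astype(float)
    r2 = x ** 2 + y ** 2
    support = r2 <= h ** 2                    # compact support |r| <= h
    coeff = -945.0 / (32.0 * np.pi * h ** 9)
    gx = coeff * x * (h ** 2 - r2) ** 2 * support
    gy = coeff * y * (h ** 2 - r2) ** 2 * support
    return gx, gy

def sph_edge_indicator(img, h=2.0, p=2):
    """Edge indicator g of Eq. 12 built from SPH gradient estimates."""
    gx, gy = poly6_grad_stencils(h=h, radius=int(np.ceil(h)))
    dx = convolve(img.astype(float), gx, mode='nearest')
    dy = convolve(img.astype(float), gy, mode='nearest')
    grad_mag = np.hypot(dx, dy)               # SPH estimate of |grad I|
    return 1.0 / (1.0 + grad_mag ** p)        # Eq. 12
```

The next subsection reviews the mathematical pipeline that robustly minimizes the obtained g function.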

Probability map for stronger edges

Probability map

To address the drawback seen in traditional ESFs in edge-based AC, this study introduces a computational pipeline based on constructing a robust ESF that utilizes probability scores (between 0 and 1) rather than the predicted class labels provided by a classifier as given in [18]. Probability scores indicate whether a pixel is a foreground or background pixel. These scores are computed in O(n), where n is the number of pixels, whereas Pratondo et al. [18] use fuzzy KNN or SVM classifiers to predict whether a pixel is a foreground or background pixel in O(n²). Classifier scores (between 0 and 1) at boundary pixels tend to be close to zero. So far, a considerable amount of work has been done to have the ESF collaborate with the likelihood of pixels (whether a pixel belongs to the background, foreground, or edge) to avoid contour leakages through the border. Pratondo et al. [18] extended the methods of [25, 26], which rely only on class probability using Bayes' rule, by utilizing the probability scores from fuzzy KNN and SVM classifiers.

We adopted the image segmentation approach studied in [27], which combines pixels' Gaussian probability distributions (in terms of being a foreground or background pixel) with their geodesic distances to patches selected on the foreground and background. Even though this method fails on dermoscopy images displaying lesions with weak or fuzzy edges, it provides reliable results for mapping pixel probabilities that estimate whether pixels are background or foreground. We exploit this feature of [27] by minimizing the probability matrix where lesion edges are located. Then, we multiply the minimized matrix with the edge indicator function generated by SPH to obtain a more robust edge indicator function for the segmentation. In our case, the object (foreground) is the skin lesion and the background is healthy tissue. Figure 3 shows a comparison of segmentation results of methods which use conventional gradients [28], the approach proposed in [18], and our approach to form the edge indicator function, respectively.

Fig. 3

The contribution of attaining a robust edge indicator function is shown. The blue rectangle in a marks the automatically placed initial contour for each segmentation method. The red line represents the dermatologist-drawn ground truth lesion border. Results for Li et al. [28], Pratondo et al. [18], and our method are displayed in b, c, and d, respectively, where it can be seen that LEEAC outperforms the others

The first step of the probability map generation is to obtain regions (boxes) from the foreground and background. Boxes (patches) of average size 70×90 pixels collect pixel samples from the foreground (lesion) and background (healthy tissue) to create the color models. Pixels of an image take a value from 0 to 255 in any channel. In the probability computation, each of these intensity values is assigned a probability between 0 and 1, and the sum of these probabilities over all intensity values is 1. The formal representation is shown in Eq. 13,

$$ p(I(x,y)=k) $$
(13)

where p represents the probability that the pixel I(x,y) has an intensity of k. Hence, summing these probabilities over all intensity values gives Eq. 14.

$$ \sum\limits_{k} p(I(x,y)=k)=1 $$
(14)

To perform background subtraction, let us label the boxes as l1 and l2, respectively, where l1 is from the background Ω1 and l2 is from the foreground Ω2 of the image I. We can approximate the probability density function (PDF) using the Gaussian fitting function shown in Eq. 15,

$$ p(x)=\frac{1}{\sqrt{2\pi}\sigma}e^{-\frac{(x-\mu)^{2}}{2\sigma^{2}}} $$
(15)

where μ and σ represent the mean and standard deviation, respectively, estimated on the histograms of the data stored in l1 and l2. Figure 4 shows the histograms of the background and foreground patches obtained from the image displayed in Fig. 5. The likelihood (in terms of being foreground or background) of a pixel x on channel Ci is given in Eq. 16,

$$ P^{i}_{1|2}(x)= \frac{p^{i}_{1}\left(C_{i}(x)\right)}{p^{i}_{1}\left(C_{i}(x)\right)+p^{i}_{2}\left(C_{i}(x)\right)} $$
(16)
Fig. 4

Red and blue curves represent the paths from pixel X to the foreground and background boxes

Fig. 5

A dermoscopy image displaying a lesion with fuzzy borders whose interior (as can be seen at pixel X) has color features similar to the background. Border detection for these kinds of lesions is very challenging for most methods

where p_j^i represents the PDF of Ωj (label lj) on channel Ci, i is the channel index, and in our case j = 1 and 2 since we have only two labels (foreground and background). Additionally, a weight can be assigned to each channel; then the probability of a pixel x being assigned to l1 can be computed as in Eq. 17.

$$ P_{1|2}(x)= P_{r}\left(x \in l_{1}\right)=\sum\limits_{i=1}^{N_{c}} w^{i}P^{i}_{1|2}(x) $$
(17)

where w^i represents the weight that imposes the capacity of channel i (i ≤ Nc) in terms of separating the foreground from the background, and Nc represents the number of channels.
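A compact sketch of Eqs. 15-17 is given below: per-channel Gaussians are fitted to the foreground and background patches, and the weighted per-channel likelihoods are summed into a foreground probability map. The patch coordinate format, the uniform channel weights, and the small stabilizing constants are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def channel_gaussian(samples):
    """Fit the Gaussian PDF of Eq. 15 to one channel of a patch."""
    mu, sigma = samples.mean(), samples.std() + 1e-6
    return lambda x: np.exp(-((x - mu) ** 2) / (2 * sigma ** 2)) / (np.sqrt(2 * np.pi) * sigma)

def foreground_probability(img, fg_box, bg_box, weights=None):
    """Per-pixel foreground probability (Eqs. 16-17).

    img            : H x W x C float image
    fg_box, bg_box : (row0, row1, col0, col1) patch coordinates (illustrative format)
    weights        : per-channel weights w^i; uniform if None
    """
    H, W, C = img.shape
    weights = np.full(C, 1.0 / C) if weights is None else np.asarray(weights)
    prob = np.zeros((H, W))
    for i in range(C):
        fg = img[fg_box[0]:fg_box[1], fg_box[2]:fg_box[3], i].ravel()
        bg = img[bg_box[0]:bg_box[1], bg_box[2]:bg_box[3], i].ravel()
        p_fg, p_bg = channel_gaussian(fg), channel_gaussian(bg)
        num = p_fg(img[:, :, i])
        den = p_fg(img[:, :, i]) + p_bg(img[:, :, i]) + 1e-12
        prob += weights[i] * num / den        # Eq. 16, weighted by w^i as in Eq. 17
    return prob
```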

However, image segmentation cannot rely merely on the PDF, since the probability map is prone to fail. As seen in Fig. 5, pixel X, which is located inside the object of interest, has intensity features similar to the background. In order to address this problem, [27] combined the PDF with the geodesic distances of each pixel to these boxes. The following subsection reviews the geodesic distance concept offered by [27].

Geodesic distances

Geodesic distances are weighted distances. To illustrate, assume that going from city A to city B takes two hours, and the distance between A and B is 100 km, whereas going from city A to city C takes four hours even though the distance between A and C is only 50 km. Since C is a city in another country, traveling from A to C requires more effort; the weight of passing a country border increases the travel time from city A to city C.

Likewise, the weighted geodesic distance of each pixel to the background and foreground boxes can be computed by Eq. 18, where W is the weight, s1 and s2 represent the boxes, and d represents all possible distances from pixel X to the background and foreground boxes (see Fig. 4). If the weight W is high, then the distance d will be high.

$$\begin{array}{*{20}l} d(s_{1},s_{2}) = \underset{C_{s_{1},s_{2}}}{\min} \int_{C_{s_{1},s_{2}}}{W\,\mathrm{d}s} \end{array} $$
(18)

Here, pixel assignment to background or foreground is performed by comparing the minimum distance to the background with the minimum distance to the foreground. Let us select a pixel X; if this pixel's minimum distance to the background is less than its minimum distance to the foreground, then this pixel is assigned to the background, and vice versa.

Geodesic image segmentation selects the gradient of the probability map, ∇P_F,B(x), as the weight W. That means spatial connectivity between observed pixels and the pixels of the boxes is constrained by the change of probability (see Fig. 4), as given in Eqs. 19 and 20.

$$\begin{array}{*{20}l} W = \left|\nabla P_{F,B}(x)\cdot\overset{\rightarrow}{C^{\prime}}_{s_{1},s_{2}}(x) \right| \end{array} $$
(19)
$$\begin{array}{*{20}l} D_{l}(x) = \underset{s\in\Omega_{l}}{\min}\hspace{0.1 cm} d(s,x),\quad l \in\{F,B\} \end{array} $$
(20)

For instance, if ∇P_F,B(x) applies more weight in Eq. 19, which means more probability change along the path, the distance increases and the possibility of the pixel being foreground decreases. Consequently, pixel labeling is conducted by comparing the minimum distances D_F(x) (foreground) and D_B(x) (background) represented in Eq. 20. All of these computations toward generating the probability map are performed in linear time. Interested readers are referred to [27] for more details.
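The sketch below approximates the weighted geodesic distance of Eqs. 18-20 with a standard priority-queue (Dijkstra) pass over a 4-connected pixel grid, using the probability change between neighbouring pixels as the edge weight of Eq. 19. Reference [27] describes a linear-time scheme; the Dijkstra formulation here is an illustrative stand-in chosen for clarity, and the small eps term is our own assumption.

```python
import heapq
import numpy as np

def geodesic_distance(prob_map, seed_mask, eps=1e-4):
    """Approximate D_l(x) of Eq. 20 from the seed box given by seed_mask.

    prob_map  : H x W foreground probability P_F,B(x)
    seed_mask : boolean H x W mask of the seed patch (foreground or background)
    """
    H, W = prob_map.shape
    dist = np.full((H, W), np.inf)
    heap = []
    for r, c in zip(*np.nonzero(seed_mask)):
        dist[r, c] = 0.0
        heap.append((0.0, int(r), int(c)))
    heapq.heapify(heap)
    while heap:
        d, r, c = heapq.heappop(heap)
        if d > dist[r, c]:
            continue
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            rr, cc = r + dr, c + dc
            if 0 <= rr < H and 0 <= cc < W:
                # Eq. 19: the weight is the probability change along the step
                w = abs(prob_map[rr, cc] - prob_map[r, c]) + eps
                if d + w < dist[rr, cc]:
                    dist[rr, cc] = d + w
                    heapq.heappush(heap, (d + w, rr, cc))
    return dist

# Pixel labeling then compares D_F(x) and D_B(x): foreground where D_F(x) < D_B(x).
```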

Note that the resulting probability maps are used to further strengthen the edge indicator function using the formulations given in [18]. Therefore, the edge indicator function becomes more robust for active contour guided image segmentation. The next subsection reviews how we minimize the obtained pixel probability matrix.

Minimizing probability matrix

In the pixel probability matrix, probability values close to 0 and 1 represent background and foreground, respectively, whereas probability values close to 0.5 represent edges according to [18]. Pratondo et al. [18] also try to obtain a more robust edge indicator function to avoid leaks. They applied three different formulas to the probability matrix to minimize the values that represent the edges. Finally, they multiplied the minimized probability matrix with their edge indicator function generated from image gradients. To minimize the probability matrix at the edge pixels, we adopt the formula expressed in Eq. 21 from [18] and replace 0.5 with 0.7 based on our experiments on dermoscopy images.

$$ prob(s)= 2(s-0.7)^{2} $$
(21)

where s represents the probability matrix for the foreground. The new edge indicator function, g_new, can then be obtained as in Eq. 22,

$$ g_{new}=g*prob $$
(22)

Equations 21 and 22 help us minimize the edge indicator function even at poorly defined object boundaries for active contour guided segmentation. Therefore, contour evolution terminates at the desired boundaries.
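The combination of Eqs. 21 and 22 amounts to a two-line computation; the sketch below is an illustrative rendering with our own function and argument names.

```python
import numpy as np

def enhanced_edge_indicator(g, fg_prob, center=0.7):
    """Strengthen the edge indicator with the probability map (Eqs. 21-22).

    g       : SPH-based edge indicator of Eq. 12
    fg_prob : foreground probability matrix s (geodesic probability map)
    center  : probability value treated as 'edge'; 0.7 in this study
    """
    prob = 2.0 * (fg_prob - center) ** 2   # Eq. 21: small near edge probabilities
    return g * prob                        # Eq. 22: elementwise product g_new
```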

Figure 6 illustrates the obtained edge map, which is a matrix of the image size and will be passed to the energy function in the active contour configuration. Note that Fig. 6e and f show more robust edges compared to Fig. 6c and d. Figure 6e and f are created using the geodesic probability map, while Fig. 6b is the conventional edge map that relies solely on image gradients. The enhancement provided by the geodesic probability map can be observed qualitatively by the naked eye; we also proved it quantitatively using 100 images and displayed the results in the results section. Moreover, training the KNN and SVM for the results displayed in Fig. 6c and d is computationally heavy, while in our case the geodesic probability map is obtained in linear time. The average time for generating the new edge indicator function for the used data is also discussed in the results section. One may ask why we continued segmentation after obtaining the outcome shown in Fig. 6e and f. Notably, it is not a binary image that can be used as a segmentation result, and it still needs to be processed to separate the lesion from the background. To address this challenge, we solved the segmentation function (in the spirit of active contours) using level sets. The next subsection reviews the level set evolution.

Fig. 6

a A dermoscopy image with a skin lesion, b represents a g map obtained using only image gradients, c represents a g map obtained using SVM and 0.5 in Eq. 21, d represents a g map obtained using KNN and 0.5 in Eq. 21, e represents a g map obtained using the geodesic probability map and 0.5 in Eq. 21, f represents the edge map obtained by the geodesic probability map and 0.7 in Eq. 21. Additionally, we applied thresholding on e to make edges stronger

Level set configuration

A level set function (LSF) ϕ(x,y,t) can represent a planar closed curve as in the implicit fashion given in Equation 23,

$$ CR(t)=\left\{(x,y)\,|\,\phi(x,y,t)=0\right\}. $$
(23)

where CR is the curve; t is time; and x and y are the spatial coordinates of the given curve. The evolution of the implicit function ϕ can be written as in Eq. 24.

$$ \frac{\partial \phi}{\partial t} + F\mid\nabla\phi\mid = 0 $$
(24)

where F represents the evolution speed and ∇ is the gradient operator.

Caselles et al. [14] employed the curve evolution expressed in Eq. 24 for image segmentation. The notion of Caselles' method relies on constraining curve propagation by image features (commonly gradients). Equation 25 governs the formulation offered by Caselles et al. [14]:

$$ \frac{\partial \phi}{\partial t} = \mid\nabla\phi\mid \mathrm{div}\left(g\frac{\nabla \phi}{\mid \nabla \phi \mid}\right)+vg\mid \nabla \phi \mid $$
(25)

where g is the edge indicator function, and v represents a constant coefficient used to adjust the curve speed. Recall that the traditional representation of the g function is given in Eq. 12, and that we obtained g_new using the geodesic probability map. Consequently, we can now plug our new edge indicator function into the level set formulation. Equation 26 is offered by [29] as

$$ \begin{aligned} \frac{\partial \phi}{\partial t} = \mu\, \mathrm{div} \left(d_{p}(\mid \nabla \phi \mid)\nabla \phi\right) + \\ \lambda\, \delta_{\varepsilon}(\phi)\,\mathrm{div}\left(g_{new}\frac{\nabla \phi}{\mid \nabla \phi \mid} \right) +\beta\, g_{new}\, \delta_{\varepsilon} (\phi) \end{aligned} $$
(26)

where d_p is obtained from a potential function p, derived as d_p(s)=p′(s)/s; δε is the Dirac delta function; and μ, λ, and β are constants that weight the terms in Eq. 26. The first term on the right side of Eq. 26 is the distance regularization term, the second term represents the length term, and the third term is the area term.

Even though Eq. 26 is able to handle topology changes, it requires re-initialization of the level sets. We therefore adopted Reaction Diffusion (RD) based Level Set Evolution (LSE) [31] to tackle the re-initialization problem [30] of the level set method. This method consists of two steps: it first iterates the LSE equation and then solves a diffusion equation. The second step regularizes the level set function obtained in the first step, removing the computationally expensive re-initialization procedure from LSE and ensuring numerical stability while solving Eq. 26. Equation 27 formulates RD in discrete form as

$$ \begin{aligned} \phi^{n+\frac{1}{2}}&= \phi^{n} + \tau_{1} \left(\kappa + g_{new}v\right) \left|\nabla \phi^{n}\right| \\ \phi^{n+1}&=\phi^{n} + \tau_{2}\Delta\phi^{n} \end{aligned} $$
(27)

where ϕ represents the level set function, ϕ^n equals \(\phi ^{n+\frac {1}{2}}\) in the second row, τ1 and τ2 are the time steps of the gradient descent used to solve Eq. 27, κ is the curvature of the level set function, |∇ϕ^n| is the magnitude of the gradient of the level set function, Δϕ^n is the Laplacian of the level set function, g_new is the new edge indicator function, and v is a constant that adjusts the propagation velocity of the level set function.
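For illustration, the sketch below runs the two-step update of Eq. 27 on a pixel grid with central-difference curvature and a five-point Laplacian. The discretization, the curvature helper, and the assumption that ϕ is negative inside the contour are our own choices; the parameter values and the area-based stopping rule follow the settings reported in the next section.

```python
import numpy as np

def curvature(phi, eps=1e-8):
    """div(grad phi / |grad phi|) with central differences."""
    py, px = np.gradient(phi)
    norm = np.sqrt(px ** 2 + py ** 2) + eps
    nyy, _ = np.gradient(py / norm)
    _, nxx = np.gradient(px / norm)
    return nxx + nyy

def rd_level_set(phi0, g_new, tau1=0.3, tau2=0.01, v=0.7, n_iter=500, area_tol=10):
    """Reaction-diffusion level set evolution (sketch of Eq. 27)."""
    phi = phi0.astype(float).copy()
    prev_area = np.count_nonzero(phi < 0)          # phi < 0 assumed inside the contour
    for _ in range(n_iter):
        gy, gx = np.gradient(phi)
        grad_mag = np.sqrt(gx ** 2 + gy ** 2)
        # step 1: one explicit LSE iteration driven by the enhanced edge indicator
        phi = phi + tau1 * (curvature(phi) + g_new * v) * grad_mag
        # step 2: diffusion regularization with a five-point Laplacian
        lap = (np.roll(phi, 1, 0) + np.roll(phi, -1, 0) +
               np.roll(phi, 1, 1) + np.roll(phi, -1, 1) - 4 * phi)
        phi = phi + tau2 * lap
        # stop when the enclosed area changes by less than area_tol pixels
        area = np.count_nonzero(phi < 0)
        if abs(area - prev_area) < area_tol:
            break
        prev_area = area
    return phi
```

The next section presents the results of our segmentation method, including comparisons with state-of-the-art methods.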

Results and discussion

We tested our novel skin lesion border detection method (LEEAC) on a data set of 100 dermoscopy images provided by [32]. Note that we kept the images in their original sizes to avoid data losses due to down-sampling. We utilized the level set implementation given in [31]. To segment an image, our method requires two patches (boxes): one for the background and one for the foreground. Then the probability map is generated based on the pixels bounded by these rectangular areas. Other algorithms [18, 31, 33] used Gaussian filtering to denoise the data set in their applications, which requires adjusting the standard deviation of the Gaussian filter in order to avoid leakages (small values of the standard deviation) and long delays in border detection (large values of the standard deviation). In this context, delay refers to the initial contour being trapped by artifacts in dermoscopy images, such as hairs, sweat/bubbles, and ruler markings, which cast strong edges on the images.

For the SPH map, we used the 6th-degree polynomial kernel and selected the smoothing range h as 1. The default parameter values for the level set scheme are set as τ1 = 0.3, which is the time step for the level set evolution equation; τ2 = 0.01, which is the time step of the diffusion regularization equation; and v = 0.7, which adjusts the speed of curve evolution toward the skin lesion boundary. The iteration loop is stopped when the polygon area of the closed contour changes by less than 10 units compared to the area of the contour in the previous iteration. In order to conduct a fair comparison, we changed the segmentation configuration given in [18] from [28] to [14]; otherwise, the nested iterations in the implementation of [28] increase the computational time drastically. The algorithm of [28] generates inaccurate segmentation results. Note that, in region based segmentation [31, 33], the output may contain regions which are not part of the lesion because these regions have intensity features similar to the skin lesion. This ultimately decreases their segmentation accuracy. While evaluating our method, we did not take any post-segmentation action, such as removing irrelevant connected components (dilation & erosion) far from the lesion, to abstract the lesion alone.
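Putting the pieces together, the following sketch composes the sketch functions defined earlier in this section with the parameter values reported above. The module name leeac_sketch, the use of skimage.filters.threshold_isodata for intermeans thresholding [19], and the assumption that the lesion is darker than the surrounding skin are all illustrative; this is not the authors' released code.

```python
import numpy as np
from skimage.filters import threshold_isodata   # Ridler-Calvard intermeans thresholding [19]

# Hypothetical module collecting the sketch functions shown earlier in this section.
from leeac_sketch import (perona_malik, sph_edge_indicator, foreground_probability,
                          enhanced_edge_indicator, rd_level_set)

def segment_lesion(img_gray, img_rgb, fg_box, bg_box):
    """Illustrative end-to-end composition of the LEEAC sketches."""
    smooth = perona_malik(img_gray)                          # edge-preserving denoising
    g = sph_edge_indicator(smooth)                           # SPH edge indicator (Eq. 12)
    prob = foreground_probability(img_rgb, fg_box, bg_box)   # probability map (Eqs. 13-17)
    g_new = enhanced_edge_indicator(g, prob, center=0.7)     # robust ESF (Eqs. 21-22)
    # coarse initialization from intermeans thresholding, assuming a darker lesion
    coarse = img_gray < threshold_isodata(img_gray)
    phi0 = np.where(coarse, -1.0, 1.0)                       # negative inside the contour
    phi = rd_level_set(phi0, g_new, tau1=0.3, tau2=0.01, v=0.7, area_tol=10)
    return phi < 0                                           # final lesion mask
```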

To perform the evaluation, we adopted commonly used quantitative measurements, i.e., the Dice Coefficient (DC), Jaccard Index (JI), and Border Error (BE). Let O and G be the segmentation result and the ground truth, respectively; DC is calculated as 2|O ∩ G|/(|O| + |G|), and JI as |O ∩ G|/|O ∪ G|. BE is calculated as in Eq. 28.

$$ BE=\frac{\text{False Negative}+\text{False Positive}}{\text{True Negative}+\text{True Positive}} $$
(28)

FN pixels refer to pixels falsely detected as background, FP pixels refer to pixels falsely segmented as foreground (lesion), TN pixels refer to pixels correctly detected as background, and TP pixels refer to pixels correctly segmented as foreground. Figure 7 shows the BE evaluations in box-plot representation. Table 1 shows comparisons between the prominent segmentation methods [13, 18, 31, 33] and ours. Note that [31, 33] fall into the category of region based active contours, and their segmentation functions are governed in the spirit of local binary fitting. Figure 8 includes a gallery that displays qualitative results of our method and the competitor methods. Since Pratondo et al. [18] is the only competitor method which is also edge based like ours, our results are especially compared against that method (see Table 1).
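The three measures reduce to simple counts over the binary masks; a minimal sketch (with our own function name) is shown below.

```python
import numpy as np

def segmentation_scores(seg, gt):
    """Dice, Jaccard and Border Error (Eq. 28) from boolean lesion masks."""
    seg, gt = seg.astype(bool), gt.astype(bool)
    tp = np.count_nonzero(seg & gt)       # lesion pixels found by both
    fp = np.count_nonzero(seg & ~gt)      # background wrongly segmented as lesion
    fn = np.count_nonzero(~seg & gt)      # lesion pixels missed
    tn = np.count_nonzero(~seg & ~gt)     # background correctly rejected
    dice = 2 * tp / (2 * tp + fp + fn)
    jaccard = tp / (tp + fp + fn)
    border_error = (fn + fp) / (tn + tp)  # Eq. 28
    return dice, jaccard, border_error
```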

Fig. 7

In the border error assessment, our method again outperformed the others with a score of (mean ± standard deviation) 0.1989 ± 0.1428, while Mete et al. [13] reads 0.2273 ± 0.09, Zhang et al. [31] reads 0.406 ± 0.2726, Li et al. [33] reads 0.4061 ± 0.2726, Pratondo et al. [18] reads 0.4079 ± 0.6417 for SVM, and Pratondo et al. [18] reads 0.6276 ± 0.6288 for KNN

Fig. 8

a The blue line shows the segmentation result of our method; b shows the segmentation result of Mete et al. [13] in yellow. In both (a) and (b), the red line represents the ground truth border of the skin lesion

Table 1 Evaluation of Segmentation Methods with respect to the Ground Truth

Mete et al. [13] is a clustering based segmentation method. It is parameter dependent (radius of evolving clusters) and does not incorporate local information while finding the lesion border. Thus, its segmentation result cannot precisely separate non-lesion patches from the lesion if both have similar color values (e.g., fuzzy borders). Figures 8 and 9 display qualitative comparisons of our method to the competitors [18, 31, 33]. Figure 7 shows the border error evaluations in box-plot representation.

Fig. 9

a A dermoscopy image of a skin lesion, b represents our method (in blue) vs. the ground truth lesion border drawn by an expert dermatologist (always in red in this figure), c represents Pratondo et al. [18] using SVM (in yellow) vs. ground truth, d represents Pratondo et al. [18] using KNN (in yellow) vs. ground truth, e represents Zhang et al. [31] (in magenta) vs. ground truth, and f represents Li et al. [33] vs. ground truth

We have observed that the selection of the noise filtering technique has a tremendous impact on the duration of segmentation. If we consider images in their actual sizes, the methods of [18, 31, 33], which use merely Gaussian filtering for denoising, take more than 10 min on average to segment an image of size 484×737. The average time for training the KNN and SVM in the approach of [18] is almost an hour. The segmentation method of Mete et al. [13] involves density based clustering; hence, it is very sensitive to parameters and computationally expensive.

Conclusions

This study introduces an accurate skin lesion border detection method based on active contours. One of the main problems of active contours is the leaking problem. This problem becomes especially visible in dermoscopy images when there are fuzzy lesion borders and/or dermoscopic artifacts, such as hair and water. When such features exist, the active contour is not able to properly find the skin lesion or region of interest. We overcome these problems by introducing SPH kernels and probability maps into active contours (called LEEAC). This in turn removed the leaking problem and increased the accuracy of segmentation. We tested our approach on 100 dermoscopy images and compared our results with state-of-the-art methods. LEEAC outperforms other prominent methods, as reported in the "Results and discussion" section.

Abbreviations

AC:

Active contour

ESF:

Edge stop function

FN:

False negative

FP:

False positive

KNN:

K-nearest neighbors algorithm

LEEAC:

Local Edge-Enhanced Active Contour

LSE:

Level set evolution

LSF:

Level set function

PDF:

Probability density function

RD:

Reaction Diffusion

ROI:

Region of interest

SPH:

Smoothed particle hydrodynamics

SVM:

Support Vector Machines

TN:

True negative

TP:

True positive

References

  1. Celebi ME, Iyatomi H, Schaefer G, Stoecker WV. Lesion border detection in dermoscopy images. CoRR. 2010; abs/1011.0640. http://arxiv.org/abs/1011.0640.

  2. American Cancer Society’s. Cancer Facts. Am Cancer Soc. 2018:1–76. Accessed: 2017-11-21. https://www.cancer.org/content/dam/cancer-org/research/cancer-facts-and-statistics/annual-cancer-facts-and-figures/2018/cancer-facts-and-figures-2018.pdf.

  3. David B T. Trends in pathology malpractice claims. Am J Surg Pathol. 2012; 36(1):1–5. https://doi.org/10.1097/PAS.0b013e31823836bb.

  4. di Meo N, Stinco G, Bonin S, Gatti A, Trevisini S, Damiani G, Vichi S, Trevisan G. Cash algorithm versus 3-point checklist and its modified version in evaluation of melanocytic pigmented skin lesions: The 4-point checklist. J Dermatol. 2016; 43(6):682–5. https://doi.org/10.1111/1346-8138.13201.

  5. Walter F, Provest A, Vasconcelos J, Burrows PHN, Morris H, Kinmonth A, Emery J. Using the 7-point checklist as a diagnostic aid for pigmented skin lesions in general practice: a diagnostic validation study. Br J Gen Pract. 2013; 63(610):345–53. https://doi.org/10.3399/bjgp13X667213.

  6. Nachbar F, Stolz W, Merkle T. The abcd rule of dermatoscopy. J Am Acad Dermatol. 1994; 30:551–9.

  7. Stolz W, Riemann A, Cognetta A, Pillet L, Abmayr W. Abcd rule of dermatoscopy: a new practical method for early recognition of malignant melanoma. Eur J Dermatol. 1994; 4:521–7.

  8. Vestergaard M, Macaskill P, Holt P, Menzies S. Dermoscopy compared with naked eye examination for the diagnosis of primary melanoma: a meta-analysis of studies performed in a clinical settings. Br J Dermatology. 2008; 159(3):669–76. https://doi.org/10.1111/j1365-2133.2008.08713.x.

  9. Celebi EM, Wen Q, Iyatomi H, Shimizu K, Zhou H, Schaefer G. A state-of-the-art survey on lesion border detection in dermoscopy images. In: Celebi E, Mendoca T, Marquez JS, editors. Dermoscopy Image Analysis. Chap. 4. Boca Raton, FL: CRC Press; 2015. p. 97–130.

  10. Erkol B, Moss H, Stanley J, William S, Hvatum E. Automatic lesion boundary detection in dermoscopy images using gradient vector flow snakes. Skin Res Technology. 2005; 1(1):17–26. https://doi.org/10.1111/j.1600-0846.2005.00092.x.

  11. Abbas Q, Fondon I, Sarmiento A, Emre MC. An improved segmentation method for non-melanoma skin lesions using active contour model; 2014, pp. 193–200. https://doi.org/10.1007/978-3-319-1175-3_22.

  12. Kass M, Witkin A, Terzopoulos D. Snakes: Active contour models. Int J Comput Vis. 1988; 1(4):321–31.

  13. Mete M, Sirakov NM. Lesion detection in dermoscopy images with novel density-based and active contour approaches. BMC Bioinformatics. 2010; 11(Suppl 6):S23. https://doi.org/10.1186/1471-2105-11-S6-S23.

  14. Caselles V, Kimmel R, Sapiro G. Geodesic active contours. Int J Comput Vis. 1997; 22(1):61–79. https://doi.org/10.1023/A:1007979827043.

  15. Vese LA, Chan TF. A multiphase level set framework for image segmentation using the Mumford and Shah model. Int J Comput Vis. 2002; 50(3):271–93. https://doi.org/10.1023/A:1020874308076.

  16. Chan T, Vese L. An Active Contour Model without Edges. In: Nielsen M, Johansen P, Olsen OF, Weickert J, editors. Berlin, Heidelberg: Springer; 1999. p. 141–51. https://doi.org/10.1007/3-540-48236-9_13.

  17. Mumford D, Shah J. Optimal approximations by piecewise smooth functions and associated variational problems. Commun Pur Appl Math. 1989; 42(5):577–685. https://doi.org/10.1002/cpa.3160420503.

  18. Pratondo A, Chui CK, Ong SH. Robust edge-stop functions for edge-based active contour models in medical image segmentation. IEEE Signal Proc Lett. 2016; 23(2):222–6. https://doi.org/10.1109/LSP.2015.2508039.

  19. Ridler TW, Calvard S. Picture thresholding using an iterative selection method. IEEE Trans Syst Man Cybern. 1978; 8(8):630–2. https://doi.org/10.1109/TSMC.1978.4310039.

  20. Perona P, Malik J. Scale-space and edge detection using anisotropic diffusion. IEEE Trans Pattern Anal Mach Intell. 1990; 12(7):629–39. https://doi.org/10.1109/34.56205.

  21. Monaghan JJ. Smoothed particle hydrodynamics. Rep Prog Phys. 2005; 68(8):1703–59. https://doi.org/10.1088/0034-4885/68/8/R01.

  22. Kelager M. Lagrangian fluid dynamics using smoothed particle hydrodynamics. University of Copenhagen: Department of Computer Science; 2006. http://image.diku.dk/projects/media/kelager.06.pdf.

  23. Liu MB, Liu GR. Smoothed particle hydrodynamics (sph): an overview and recent developments. Arch Comput Methods Eng. 2010; 17(1):25–76. https://doi.org/10.1007/s11831-010-9040-7.

  24. Müller M, Charypar D, Gross M. Particle-based fluid simulation for interactive applications; 2003, pp. 154–9.

  25. Smeets D, Loeckx D, Stijnen B, Dobbelaer BD, Vandermeulen D, Suetens P. Semi-automatic level set segmentation of liver tumors combining a spiral-scanning technique with supervised fuzzy pixel classification. Med Image Anal. 2010; 14(1):13–20. https://doi.org/10.1016/j.media.2009.09.002.

  26. Wu J, Yin Z, Xiong Y. The fast multilevel fuzzy edge detection of blurry images. IEEE Signal Proc Lett. 2007; 14(5):344–7. https://doi.org/10.1109/LSP.2006.888087.

  27. Protiere A, Sapiro G. Interactive image segmentation via adaptive weighted distances. IEEE Trans Image Process. 2007; 16(4):1046–57. https://doi.org/10.1109/TIP.2007.891796.

  28. Li C, Xu C, Gui C, Fox MD. Distance regularized level set evolution and its application to image segmentation. IEEE Trans Image Process. 2010; 19(12):3243–54. https://doi.org/10.1109/TIP.2010.2069690.

  29. Chunming L, Chenyang X, Changfeng G, Fox MD. Level set evolution without reinitialization: a new variational formulation. In: Proc. IEEE Computer Soc. Conf. Computer Vision and Pattern Recognition CVPR’05: 2005. p. 430–436.

  30. Malladi R, Sethian JA, Vemuri BC. Shape modeling with front propagation: a level set approach. IEEE Trans Pattern Anal Mach Intell. 1995; 17(2):158–75. https://doi.org/10.1109/34.368173.

  31. Zhang K, Zhang L, Song H, Zhang D. Reinitialization-free level set evolution via reaction diffusion. IEEE Trans Image Process. 2013; 22(1):258–71. https://doi.org/10.1109/TIP.2012.2214046.

  32. Argenziano G, Soyer HP, De Giorgio V, Piccolo D, Carli P, Delfino M, Ferrari A, Hofmann-Wellenhof R, Massi D, Mazzocchetti G, Scalvenzi M, Wolf IH. Interactive Atlas of Dermoscopy. Edra Medical Publishing & New Media; 1999. p. 208. ISBN: 8886457308. (Book & CD/Web Resource). https://espace.library.uq.edu.au/view/UQ:229410.

  33. Li C, Kao C, Gore JC, Ding Z. Implicit active contours driven by local binary fitting energy; 2007. https://doi.org/10.1109/CVPR.2007.383014.

Acknowledgements

We would like to thank Dr. Sreekanth Arikatla from Kitware, Inc. for providing insights on smoothed particle hydrodynamics. We also would like to thank Dr. Deepak Chittajallu from Kitware, Inc. for validating accuracy of our results and checking mathematical notations in this study.

Funding

This study is supported by the Arkansas INBRE program, with an award# P20 GM103429 from the National Institutes of Health/the National Institute of General Medical Sciences (NIGMS).

Availability of data and material

Not applicable.

About this supplement

This article has been published as part of BMC Bioinformatics Volume 20 Supplement 2, 2019: Proceedings of the 15th Annual MCBIOS Conference. The full contents of the supplement are available online at https://bmcbioinformatics.biomedcentral.com/articles/supplements/volume-20-supplement-2.

Author information

Authors and Affiliations

Authors

Contributions

SK, TH, and MM conceived the study. MB did all experiments and implementations. HKW guided SK in skin lesion understanding. KI helped MB in the development of the active contour algorithms. All authors read and approved the manuscript.

Corresponding author

Correspondence to Sinan Kockara.

Ethics declarations

Ethics approval and consent to participate

Not applicable.

Consent for publication

Not applicable.

Competing interests

The authors declare that they have no competing interests.

Publisher’s Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Rights and permissions

Open Access This article is distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made. The Creative Commons Public Domain Dedication waiver (http://creativecommons.org/publicdomain/zero/1.0/) applies to the data made available in this article, unless otherwise stated.

About this article

Cite this article

Bayraktar, M., Kockara, S., Halic, T. et al. Local edge-enhanced active contour for accurate skin lesion border detection. BMC Bioinformatics 20 (Suppl 2), 91 (2019). https://doi.org/10.1186/s12859-019-2625-8
