Local edge-enhanced active contour for accurate skin lesion border detection
BMC Bioinformatics volume 20, Article number: 91 (2019)
Abstract
Background
Dermoscopy is one of the most common and effective imaging techniques in the diagnosis of skin cancer, especially for pigmented lesions. Accurate skin lesion border detection is the key to extracting important dermoscopic features of the skin lesion. In current clinical settings, border delineation is performed manually by dermatologists. Operator-based assessments lead to intra- and inter-observer variations due to their subjective nature. Moreover, it is a tedious process. Because of these hurdles, automation of lesion boundary detection in dermoscopic images is necessary. In this study, we address this problem by developing a novel skin lesion border detection method with a robust edge indicator function, which is based on a meshless method.
Result
Our results are compared with those of other image segmentation methods. Our skin lesion border detection algorithm outperforms other state-of-the-art methods. Based on dermatologist-drawn ground truth skin lesion borders, the results indicate that our method generates more accurate boundaries than other prominent methods, with a Dice score of 0.886 ± 0.094 and a Jaccard score of 0.807 ± 0.133.
Conclusion
We prove that smoothed particle hydrodynamics (SPH) kernels can be used as edge features in active contour segmentation and that a probability map can be employed to prevent the evolving contour from leaking into the object of interest.
Background
Image segmentation is the process of finding meaningful regions in an image. Many image processing and analysis methods rely on the accuracy of a proper image segmentation method. In dermoscopic image processing and analysis, image segmentation corresponds to detecting the lesion border precisely. Accuracy of skin lesion border detection in dermoscopic images is critical [1] to extract important structural features, such as irregularity, symmetry, and abrupt border cutoff, and dermoscopic features, such as globules, blue-white areas, and atypical pigment network. However, automated border detection is a challenging task, especially among lesions with a) fuzzy borders, b) low contrast between the lesion boundary and surrounding skin, c) low color and texture variations, and d) artifacts such as sweat, hair, and blood vessels.
In the USA, approximately 3.5 million people are diagnosed with skin cancer each year. Skin cancer is rarely fatal except for melanoma, which is a malignancy of melanocytes [2]. In its January 2017 report, the American Cancer Society estimated that 87,110 adults in the U.S. would be diagnosed with melanoma, and approximately 9730 cases were expected to be fatal [2]. Since melanoma develops in melanocytes, which are specialized cells in the epidermis, it can be detected by visual inspection of the skin. Early diagnosis and treatment of melanoma are key to increasing the chances of survival [3]. However, the high rate of false-negative diagnoses in melanoma cases poses a challenge for early treatment [3].
Dermoscopy is an effective and noninvasive imaging modality in the diagnosis of skin cancers, especially for pigmented lesions. It enables clinicians to closely examine predefined diagnostic features that are not visible otherwise. For this very reason, accurate skin lesion border detection is key to extracting important dermoscopic features of the lesion. These features are evaluated to detect melanoma and other skin diseases [4–7]. It has been shown that dermoscopy increases the accuracy of clinicians' naked-eye examination [8]. There are various methods used to segment skin lesions [9]. One of these is the active contour algorithm.
Active contour based methods (a.k.a. snakes) are widely used in image segmentation, including lesion segmentation [10–13]. Active contours can be categorized into two main groups: edge-based methods [14] and region-based methods [15]. The former employ edge information [14], while the latter select a region feature to adjust the movement of the active contour toward the boundary of the object(s) to be segmented [16, 17]. Active contour methods start with a curve around the region of interest (ROI); the curve moves along its interior normals and has to stop on the boundary of the ROI. While some parameters control the smoothness of the contour, others attract the contour toward the center of the ROI. The optimal state of the contour is found through an iterative process, in which internal and external energy functions reach equilibrium and stop further iterations. Edge-based active contours use level sets and have the advantage of handling complicated shapes. However, their parameters are not naturally connected to visual features and are therefore very difficult for naive users to tune. Edge-based active contours have been found more suitable for lesion boundary detection [10, 11]. On the other hand, for border detection of skin lesions, active contours were reported [13] to have slower computation times, since they require solving the underlying optimization problem. In general, for an active contour method to achieve high accuracy in skin lesion detection, the lesion is expected to have strong edges at which the contour can stop.
Edge-based active contour methods suffer from poorly defined edges, whereas region-based methods are sensitive to inhomogeneity of image intensities. For images with weakly formed object boundaries (e.g., skin lesions with fuzzy borders), the edge-stop function (ESF) fails to stop the curve's evolution, and as a result the contour leaks through the object border [18]. Thus, these methods suffer in skin lesion segmentation when morphological and color variations exist. Specifically, in cases where the skin lesion does not have a strong border (e.g., fuzzy borders, or insufficient contrast between the lesion boundary and surrounding skin), active contour methods fail to find lesion borders accurately. One of the main contributions of this study is to overcome this failure point. The proposed segmentation method starts with a novel local edge extraction algorithm based on smoothed particle hydrodynamics (SPH). Using the edge information coming from SPH, the object border is strengthened using geodesic distances that involve pixel probabilities (whether pixels are foreground, background, or border pixels). We then feed this object edge information into active contours to accurately detect skin lesion borders. Thanks to the additional edge information, the active contours become robust to leaks.
The process flow of skin lesion border detection with Local Edge-Enhanced Active Contour (LEEAC) is as follows (see Fig. 1). We first apply inter-means thresholding [19] to the given dermoscopy image. This allows us to coarsely locate lesion and background pixels in order to extract sample patches, which will be used for the background/foreground probability map. Then we filter the image using the Perona-Malik [20] denoising method to prevent active contours from being trapped at relatively strong edges among the background pixels. Next, SPH kernels are calculated to find local edges of the image. We incorporate probability maps into the SPH kernels in order to make the lesion's edges even stronger. In this study, it is proven that probability maps incorporated with SPH kernels are robust edge indicator functions that eliminate the unwanted leakage problems [18] encountered in active contours. This novel SPH based robust edge indicator function is then solved using level sets [14], which in turn produces accurate skin lesion border detection even for lesions with fuzzy borders.
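The first step, inter-means thresholding [19], can be sketched as an iterative threshold selection that alternates between splitting the histogram and recentering the threshold between the two class means. The Python sketch below is illustrative only; the function and parameter names are our assumptions, not the paper's implementation.

```python
import numpy as np

def intermeans_threshold(img, tol=0.5, max_iter=100):
    """Iterative selection (inter-means) thresholding: split intensities at t,
    then reset t to the midpoint of the two class means until convergence."""
    t = img.mean()  # initial guess: global mean intensity
    for _ in range(max_iter):
        fg = img[img > t]
        bg = img[img <= t]
        if fg.size == 0 or bg.size == 0:
            break
        t_new = 0.5 * (fg.mean() + bg.mean())
        if abs(t_new - t) < tol:
            return t_new
        t = t_new
    return t

# toy bimodal "image": dark lesion pixels vs. bright skin pixels
rng = np.random.default_rng(0)
pixels = np.concatenate([rng.normal(60, 10, 5000), rng.normal(180, 10, 5000)])
t = intermeans_threshold(pixels)  # lands between the two modes
```

On a dermoscopy image, pixels below the returned threshold would be taken as coarse lesion candidates from which the foreground patch is sampled.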
Methods
This section reviews the computational core developed for lesion segmentation. Figure 1 shows its processing steps. Each of these steps is detailed in the following subsections.
Filtering
We use the Perona-Malik filtering [20] method, which aims to smooth noise while preserving significant information, in our case edges. Perona-Malik filtering is chosen precisely because it preserves edges. The formal representation of this filtering method is as follows:
where g(∇I) represents the diffusion coefficient and ∇I the gradient map of the image I. As can be inferred from Eq. 1, ∇I and g are inversely proportional, which maintains the notion of the Perona-Malik method. γ is a constant that controls the sensitivity to gradients on the image domain. The diffusion process declines in regions where ∣∇I∣≫γ. Without smoothing, the initial contour is trapped by noise (weak edges) and cannot delineate the lesion border. After denoising is completed, an SPH kernel is used to overcome active contour leaking problems, especially for fuzzy skin lesion borders.
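The filtering step can be sketched with an explicit four-neighbour scheme. The exponential diffusion coefficient (one of the two functions proposed by Perona and Malik), the time step, and the periodic boundary handling via np.roll are our simplifications for brevity, not the paper's exact discretisation.

```python
import numpy as np

def perona_malik(img, n_iter=20, gamma=15.0, dt=0.2):
    """Anisotropic diffusion: diffuse strongly where |grad I| << gamma
    (flat/noisy regions) and weakly where |grad I| >> gamma (edges)."""
    u = img.astype(float).copy()
    g = lambda d: np.exp(-(d / gamma) ** 2)  # edge-stopping coefficient
    for _ in range(n_iter):
        # one-sided differences toward the four neighbours
        # (np.roll wraps at the borders; fine for a sketch)
        dn = np.roll(u, -1, axis=0) - u
        ds = np.roll(u,  1, axis=0) - u
        de = np.roll(u, -1, axis=1) - u
        dw = np.roll(u,  1, axis=1) - u
        u += dt * (g(dn) * dn + g(ds) * ds + g(de) * de + g(dw) * dw)
    return u
```

With gamma around the noise amplitude, speckle is smoothed while a strong lesion/skin step edge (gradient far above gamma) is left nearly untouched.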
A new local edge extraction method: SPH
SPH is an interpolation method used for numerically solving diffusion equations. It is used in various applications such as simulations of highly deformable bodies, lava flows, and computational fluid dynamics. Its principle is based on dividing the fluid (medium) into a set of discrete elements called particles. Equation 2 describes this interpolation, where h is the length of the smoothing function W, r is the descriptor of the medium entity, and r′ ranges over the adjacent entities within h. Many kernel functions have been defined in the literature [21, 22]. One of them is Monaghan's cubic spline [21], given in Eq. 2, which provides the temperature (in its specific case) A at position r relying on the temperatures of all particles within a radial distance h.
In Eq. 2, the contribution of a particle to a physical property is weighted by its proximity to the particle of interest and its density. Commonly used kernel functions deploy cubic spline and Gaussian functions. The cubic spline is exactly zero for particles located at a distance equal to twice the smoothing length, 2h. This decreases the computational cost by discarding particles' minor contributions to the interpolation.
To expand the representation of the SPH kernel in physics, let us take another particle j and associate it with a fixed lump-shaped volume ΔV_{j}, which leads to determining the computational domain with a finite number of particles. Concerning the mass and density of the particle, the lump volume can be rewritten as the ratio of mass to density, m_{j}/ρ_{j} [23]. The mathematical representation is given in the following equation,
where A is any quantity at r; m_{j} is the mass of particle j; A_{j} is the value of the quantity A for particle j; ρ_{j} is the density of particle j; r is the spatial location; and W is the kernel function. The density of particle i, ρ_{i}, can be expressed as in Eq. 4,
where the summation over j covers all particles. Since m is a scalar, the gradient of a quantity can be found easily by applying the derivative operator ∇, as seen in Eq. 5.
Kernel approximation
A feasible kernel must have the following two properties,
and
where δ is the Dirac Delta function.
The kernel must also be an even function and greater than zero everywhere [23]. These conditions are expressed formally as follows:
Several kernels for SPH have been proposed, including Gaussian, B-spline, and Q-spline [21, 22, 24]. Even though the Q-spline is considered the most accurate in [24], it is computationally expensive due to the square root computations. We propose to use the 6th degree polynomial kernel suggested by [24] as the default kernel, which is expressed below,
with the gradient,
Once SPH is applied to dermoscopy images, it generates all local edge features. Figure 2 illustrates edge features derived by SPH on a dermoscopy image. In our experiments, we empirically selected h=1 and the 6th degree kernel for interpolation. The obtained SPH map is used as the edge indicator function in the lesion border segmentation. The formal representation of the edge indicator function is given in Eq. 12,
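The idea of treating pixels as SPH particles and reading the kernel-gradient sum as a local edge strength can be sketched as below. Everything here is our reconstruction: the integer-offset discretisation, the 3-D normalisation constant of the 6th degree ("poly6") kernel, and the choice h=2 for the demonstration (on an integer pixel grid, h=1 puts every neighbour outside the kernel support, so the paper's h=1 presumably refers to a different spatial scaling).

```python
import numpy as np

def poly6_grad_kernel(dx, dy, h):
    """Gradient of the 6th degree polynomial (poly6) kernel:
    -945/(32*pi*h^9) * (h^2 - r^2)^2 * (dx, dy), zero beyond r = h.
    (Constant is the common 3-D normalisation; only relative
    magnitudes matter for an edge map.)"""
    r2 = dx * dx + dy * dy
    coef = -945.0 / (32.0 * np.pi * h**9) * max(h * h - r2, 0.0) ** 2
    return coef * dx, coef * dy

def sph_edge_map(img, h=2):
    """Treat each pixel as a particle carrying its intensity; sum the
    kernel-gradient-weighted contributions of neighbours within radius h.
    The magnitude of the resulting vector acts as local edge strength."""
    u = img.astype(float)
    gx = np.zeros_like(u)
    gy = np.zeros_like(u)
    rad = int(np.ceil(h))
    for dy_ in range(-rad, rad + 1):
        for dx_ in range(-rad, rad + 1):
            if dx_ == 0 and dy_ == 0:
                continue
            wx, wy = poly6_grad_kernel(float(dx_), float(dy_), h)
            shifted = np.roll(np.roll(u, dy_, axis=0), dx_, axis=1)
            gx += shifted * wx
            gy += shifted * wy
    return np.hypot(gx, gy)
```

In a flat region the antisymmetric kernel gradient cancels exactly, so the map is zero there and responds only where intensity changes, which is the behaviour the edge indicator function needs.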
where I is the image, G is the denoising function, and ∣∇(G_{α}∗I)∣^{p} is the edge map produced by image gradients. In this paper, we use the SPH formulations to calculate surface normals instead of image gradients. Edge indicator functions are commonly denoted by g, as shown in Eq. 12. The next subsection reviews the mathematical pipeline that robustly minimizes the obtained g function.
Probability map for stronger edges
Probability map
To address the drawback of traditional ESFs in edge-based active contours, this study introduces a computational pipeline based on constructing a robust ESF that utilizes probability scores (between 0 and 1) rather than the predicted class labels provided by a classifier as in [18]. Probability scores indicate whether a pixel is a foreground or background pixel. These scores are computed in O(n), where n is the number of pixels, whereas Pratondo et al. [18] use fuzzy KNN or SVM classifiers to predict whether a pixel is a foreground or background pixel in O(n^{2}). Classifier scores (between 0 and 1) at boundary pixels tend to be close to zero. Considerable work has been done to have the ESF collaborate with the likelihood of pixels (whether a pixel belongs to background, foreground, or edge) to avoid contour leakage through the border. Pratondo et al. [18] extended the methods of [25, 26], which rely only on class probability via Bayes' rule, by utilizing the probability scores from fuzzy KNN and SVM classifiers.
We adopted the image segmentation approach studied in [27], which combines pixels' Gaussian probability distributions (in terms of being a foreground or background pixel) with their geodesic distances to patches selected on the foreground and background. Even though this method fails on dermoscopy images displaying lesions with weak or fuzzy edges, it provides reliable results for mapping the probability that pixels belong to the background or foreground. We exploit this feature of [27] by minimizing the probability matrix where lesion edges are located. Then, we multiply the minimized matrix with the edge indicator function generated by SPH to obtain a more robust edge indicator function for segmentation. In our case, the object (foreground) is the skin lesion and the background is healthy tissue. Figure 3 shows a comparison of segmentation results of methods which use conventional gradients [28], the approach proposed in [18], and our approach to form the edge indicator function, respectively.
The first step of probability map generation is to obtain regions (boxes) from the foreground and background. Boxes (patches) of average size 70×90 collect pixel samples from the foreground (lesion) and background (healthy tissue) to create the color models. Pixels of an image take a value from 0 to 255 in any channel. In the probability computation, each of these values is assigned a probability between 0 and 1, and these probabilities sum to 1. The formal representation is shown in Eq. 13,
where p represents the probability that a pixel has an intensity of k. Hence, all of these can be expressed as a sum, as in Eq. 14.
To perform background subtraction, let us label the boxes as l_{1} and l_{2}, where l_{1} is from the background Ω_{1} and l_{2} is from the foreground Ω_{2} of the image I. We can approximate the probability density function (PDF) using a Gaussian fitting function, as shown in Eq. 15,
where μ and σ represent the mean and standard deviation, respectively, estimated on the histogram of the data stored in l_{1} and l_{2}. Figure 4 shows histograms of the background and foreground patches obtained from the image displayed in Fig. 5. The likelihood (of being foreground or background) of a pixel x on channel C is given in Eq. 16,
where p_{i} represents the PDF of ω_{j} on channel C, i is the channel number, and in our case j=1 and 2 since we have only two labels (foreground and background). Additionally, a weight can be assigned to each channel; then the probability of a pixel x being assigned to l_{1} can be computed as in Eq. 17,
where w^{i} represents the weights, which impose each channel's (i∈N_{c}) capacity for separating the foreground from the background, and N_{c} represents the number of channels.
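The patch-based Gaussian color model of Eqs. 15-17 can be sketched as follows. The helper names, the per-channel normalised likelihood, and the uniform default channel weights are our assumptions; the paper's channel weighting may differ.

```python
import numpy as np

def fit_gaussian(patch):
    """Per-channel Gaussian fit (mean, std) to the pixels of a sampled patch."""
    pix = patch.reshape(-1, patch.shape[-1]).astype(float)
    return pix.mean(axis=0), pix.std(axis=0) + 1e-6  # avoid zero std

def gaussian_pdf(x, mu, sigma):
    return np.exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * np.sqrt(2 * np.pi))

def foreground_probability(img, fg_patch, bg_patch, w=None):
    """Per-pixel probability of belonging to the foreground color model,
    normalised against the background model and combined over channels
    with weights w (uniform by default), in the spirit of Eqs. 16-17."""
    mu_f, sd_f = fit_gaussian(fg_patch)
    mu_b, sd_b = fit_gaussian(bg_patch)
    x = img.astype(float)
    pf = gaussian_pdf(x, mu_f, sd_f)
    pb = gaussian_pdf(x, mu_b, sd_b)
    if w is None:
        w = np.ones(img.shape[-1]) / img.shape[-1]
    post = pf / (pf + pb + 1e-12)           # normalised per-channel likelihood
    return np.tensordot(post, w, axes=([-1], [0]))  # weighted channel sum
```

For a dermoscopy image, fg_patch and bg_patch would be the two rectangular boxes sampled on the lesion and on healthy skin; the output is the per-pixel foreground probability map.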
However, image segmentation cannot rely merely on the PDF, since the probability map is prone to fail. As seen in Fig. 5, pixel X, which is located inside the object of interest, has intensity features similar to the background. To address this problem, [27] combined the PDF with the geodesic distances of each pixel to these boxes. The following subsection reviews the geodesic distance concept offered by [27].
Geodesic distances
Geodesic distances are weighted distances. To illustrate, assume that traveling from city A to city B takes two hours, and the distance between A and B is 100 km, whereas traveling from city A to city C takes four hours although the distance between A and C is only 50 km. Since C is a city in another country, traveling from A to C requires more effort: the weight of crossing a country border increases the travel time from city A to city C.
Likewise, the weighted geodesic distance of each pixel to the background and foreground boxes can be computed by Eq. 18, where W is the weight, s_{1} and s_{2} represent the boxes, and d represents all possible distances from pixel X to the background and foreground boxes (see Fig. 4). If the weight W is high, then the distance d will be high.
Here, pixel assignment to background or foreground is performed by comparing the minimum distance to the background with the minimum distance to the foreground. Consider a pixel X; if its minimum distance to the background is less than its minimum distance to the foreground, then the pixel is assigned to the background, and vice versa.
Geodesic image segmentation selects the gradient of the PDF, ∇P_{F,B}(x), as the weight W. That means the spatial connectivity between observed pixels and the pixels of the boxes is constrained by the change in probability (see Fig. 4), as given in Eqs. 19 and 20.
For instance, if ∇P_{F,B}(x) applies more weight in Eq. 19, meaning more probability change along the path, the distance increases and the possibility of being foreground decreases. Consequently, pixel labeling is conducted by comparing the minima of the distances D_{F}(x) (foreground) and D_{B}(x) (background), which are represented in Eq. 20. All of these computations toward generating the probability map are performed in linear time. Interested readers are referred to [27] for more details.
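The probability-weighted geodesic distance of Eqs. 18-19 can be illustrated with a plain Dijkstra search on the 4-connected pixel grid, where stepping between two pixels costs the local change in probability (the discrete analogue of the ∇P weight). Note this sketch is ours for clarity; [27] computes these distances with a linear-time approximation rather than Dijkstra.

```python
import heapq
import numpy as np

def geodesic_distance(prob, seeds):
    """Dijkstra on the 4-connected grid. A step between pixels a and b
    costs |P(a) - P(b)|, so paths that cross probability boundaries
    accumulate distance while paths inside a homogeneous region are free."""
    h_, w_ = prob.shape
    dist = np.full((h_, w_), np.inf)
    pq = []
    for (r, c) in seeds:          # seed pixels, e.g. the foreground box
        dist[r, c] = 0.0
        heapq.heappush(pq, (0.0, r, c))
    while pq:
        d, r, c = heapq.heappop(pq)
        if d > dist[r, c]:
            continue              # stale queue entry
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = r + dr, c + dc
            if 0 <= nr < h_ and 0 <= nc < w_:
                nd = d + abs(prob[nr, nc] - prob[r, c])
                if nd < dist[nr, nc]:
                    dist[nr, nc] = nd
                    heapq.heappush(pq, (nd, nr, nc))
    return dist
```

Running this once with foreground seeds and once with background seeds gives D_F and D_B of Eq. 20; each pixel is then labeled by whichever distance is smaller.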
Note that the resulting probability maps are used to further strengthen the edge indicator function using the formulations given in [18]. Therefore, the edge indicator function becomes more robust for active contour guided image segmentation. The next subsection reviews how we minimize the obtained pixel probability matrix.
Minimizing probability matrix
In the pixel probability matrix, probability values close to 0 and 1 represent background and foreground, respectively, whereas values close to 0.5 represent edges, according to [18]. Pratondo et al. [18] also aim for a more robust edge indicator function to avoid leaks. They applied three different formulas to the probability matrix to minimize the values representing edges. Finally, they multiplied the minimized probability matrix with their edge indicator function generated from image gradients. To minimize the probability matrix at the edge pixels, we adopt the formula expressed in Eq. 21 from [18] and replace 0.5 with 0.7, based on our experiments on dermoscopy images,
where s represents the foreground probability matrix. The new edge indicator function, g_{new}, is then obtained as in Eq. 22,
The equations given in Eqs. 21 and 22 help us minimize the edge indicator function even at poorly defined object boundaries for active contour guided segmentation. Therefore, contour evolution terminates at the desired boundaries.
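The structure of Eqs. 21-22 can be sketched as below. The exact functional form of Eq. 21 in [18] is not reproduced here; the |s - c| ramp is an illustrative stand-in we chose, with c = 0.7 as the crossover point this paper uses in place of 0.5. Only the qualitative behaviour matters: probabilities near c (likely edges) are pushed toward 0, and multiplying the SPH edge indicator by this map drives g_new toward 0 exactly where edges are probable, so the contour stops there.

```python
import numpy as np

def edge_suppression(s, c=0.7):
    """Illustrative stand-in for Eq. 21 (assumed form, not the exact one
    from [18]): map foreground probabilities s near the crossover c to 0
    and probabilities far from c toward 1."""
    return np.minimum(np.abs(s - c) / max(c, 1 - c), 1.0)

def g_new(g_sph, s, c=0.7):
    """Eq. 22: multiply the SPH-based edge indicator by the suppressed
    probability so the combined indicator vanishes at probable edges."""
    return g_sph * edge_suppression(s, c)
```

Since the edge indicator g is what weights the contour's speed, forcing it to zero at probability crossovers is precisely what prevents leakage through weak borders.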
Figure 6 illustrates the obtained edge map, a matrix of the image size that is passed to the energy function in the active contour configuration. Note that Fig. 6e and f show more robust edges compared to Fig. 6c and d. Figure 6e and f are created using the geodesic probability map. Figure 6b is the conventional edge map that relies solely on image gradients. The enhancement provided by the geodesic probability map can be appreciated qualitatively by the naked eye; we also proved it quantitatively using 100 images and display the results in the results section. Moreover, training the KNN and SVM for the results displayed in Fig. 6c and d is computationally heavy, while in our case the geodesic probability map is obtained in linear time. The average time for generating the new edge indicator function on the used data is also discussed in the results section. One may ask why we continued the segmentation after obtaining the outcome shown in Fig. 6e and f. Notably, it is not a binary image that can be used as a segmentation result, and it still needs to be processed to separate the lesion from the background. To address this, we solved the segmentation function (in the spirit of active contours) using level sets. The next subsection reviews level set evolution.
Level set configuration
A level set function (LSF) ϕ(x,y,t) can represent a planar closed curve implicitly, as given in Eq. 23,
where C is the curve; t is time; and x and y are the spatial coordinates of the given curve. The evolution of the implicit function ϕ can be written as in Eq. 24,
where F represents evolution speed and ∇ is gradient operator.
Caselles et al. [14] employed the curve evolution expressed in Eq. 24 for image segmentation. The notion behind Caselles' method is to constrain curve propagation by image features (commonly gradients). Equation 25 gives the formulation offered by Caselles et al. [14],
where g is the edge indicator function and v represents a constant coefficient used to adjust the curve speed. Recall that the traditional representation of the g function is given in Eq. 12, and that we obtained g_{new} using the geodesic probability map. Consequently, we can now plug our new edge indicator function into the level set formulation. Equation 26 is offered by [29] as
where d_{p} is obtained from a potential function derived as d_{p}(s)=p^{′}(s)/s, δ_{ε} is the Dirac delta function, and μ, λ, and β are constants weighting the data terms in Eq. 26. The first term on the right side of Eq. 26 is the distance regularization term, the second is the length term, and the third is the area term.
Even though Eq. 26 is able to handle topology changes, it requires re-initialization of the level sets. We have adopted a way to tackle the re-initialization problem [30] of the level set method by using Reaction Diffusion (RD) based Level Set Evolution (LSE) [31]. This method consists of two steps: it first iterates the LSE equation, and then solves a diffusion equation. The second step regularizes the level set function obtained in the first step, removing the computationally expensive re-initialization procedure from the LSE and ensuring numerical stability while solving Eq. 26. Equation 27 formulates RD in discrete form as
where ϕ represents the level set function, ϕ^{n} equals \(\phi ^{n+\frac {1}{2}}\) (in the second row), τ_{1} and τ_{2} are the time steps of the gradient descent used to solve Eq. 27, κ is the curvature of the level set function, ∣∇ϕ^{n}∣ is the magnitude of the gradient of the level set function, Δϕ^{n} is the Laplacian of the level set function, g_{new} is the new edge indicator function, and v is a constant that adjusts the propagation velocity of the level set function. The next section presents the results of our segmentation method, including comparisons with state-of-the-art methods.
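The two-step structure of the RD scheme can be sketched as follows. This is a heavily simplified explicit discretisation of our own: the sign conventions, the exact potential d_p, the handling of the Dirac delta, and the number of diffusion sub-steps in [31] all differ, so the sketch illustrates only the "evolve, then regularise" pattern of Eq. 27.

```python
import numpy as np

def grad(phi):
    """Central-difference gradient, returned as (d/dx, d/dy)."""
    gy, gx = np.gradient(phi)   # np.gradient orders axes (rows, cols)
    return gx, gy

def laplacian(phi):
    """5-point Laplacian with periodic boundaries (np.roll), for brevity."""
    return (np.roll(phi, 1, 0) + np.roll(phi, -1, 0)
            + np.roll(phi, 1, 1) + np.roll(phi, -1, 1) - 4 * phi)

def rd_lse_step(phi, g_new, v=0.7, tau1=0.3, tau2=0.01, n_diff=5):
    """One reaction-diffusion iteration: (1) an edge-weighted
    curvature/advection/balloon update of phi; (2) a few explicit
    diffusion sub-steps that regularise phi in place of re-initialisation."""
    gx, gy = grad(phi)
    mag = np.sqrt(gx**2 + gy**2) + 1e-8
    # curvature kappa = div(grad(phi)/|grad(phi)|)
    kxx, _ = grad(gx / mag)
    _, kyy = grad(gy / mag)
    kappa = kxx + kyy
    ggx, ggy = grad(g_new)
    # step 1: level set evolution driven by the edge indicator
    phi = phi + tau1 * (g_new * kappa * mag + ggx * gx + ggy * gy
                        + v * g_new * mag)
    # step 2: diffusion regularisation of the level set function
    for _ in range(n_diff):
        phi = phi + tau2 * laplacian(phi)
    return phi
```

Wherever g_new has been driven to zero by the probability map, every term of the step-1 update vanishes, which is how the contour freezes on the lesion border.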
Results and discussion
We tested our novel skin lesion border detection method (LEEAC) on the data set of 100 dermoscopy images provided by [32]. Note that we kept the images at their original sizes to avoid data loss due to downsampling. We utilized the level set implementation given in [31]. To segment an image, our method requires two patches (boxes): one for the background and one for the foreground. The probability map is then generated from the pixels bounded by these rectangular areas. Other algorithms [18, 31, 33] used Gaussian filtering to denoise the data set in their applications, which requires adjusting the standard deviation of the Gaussian filter to avoid leakages (small standard deviations) and long delays in border detection (large standard deviations). In this context, delay refers to the initial contour being trapped by artifacts in dermoscopy images, such as hairs, sweat/bubbles, and ruler markings, which cast strong edges on the images.
In the SPH map, we used the 6th degree polynomial kernel and selected the smoothing range h as 1. The default parameter values for the level set scheme are set as τ_{1}=0.3, the time step for the level set evolution equation; τ_{2}=0.01, the time step of the diffusion regularization equation; and v=0.7, which adjusts the speed of curve evolution toward the skin lesion boundary. The iteration loop is stopped when the polygon area of the closed contour changes by no more than 10 units compared to the contour area in the previous iteration. To conduct a fair comparison, we changed the segmentation configuration given in [18] from [28] to [14]; otherwise, the nested iterations in the implementation of [28] increase computational time drastically. The algorithm of [28] also generates inaccurate segmentation results. Note that in region based segmentation [31, 33], the output may contain regions which are not part of the lesion but are represented by intensity features similar to the skin lesion. This ultimately decreases their segmentation accuracy. While evaluating our method, we did not take any post-segmentation action, such as removing irrelevant connected components (dilation and erosion) far from the lesion, to isolate the lesion alone.
To perform the evaluation, we adopted commonly used quantitative measurements, i.e., the Dice Coefficient (DC), Jaccard Index (JI), and Border Error (BE). Let O and G be the segmentation result and the ground truth, respectively. DC is calculated as (2∣O∩G∣)/(∣O∣+∣G∣), and JI as ∣O∩G∣/∣O∪G∣. BE is calculated as in Eq. 28.
FN pixels refer to pixels falsely detected as background, FP pixels to pixels falsely segmented as foreground (lesion), TN pixels to pixels correctly detected as background, and TP pixels to pixels correctly segmented as foreground. Figure 7 shows the BE evaluations as boxplots. Table 1 shows comparisons between the prominent segmentation methods [13, 18, 31, 33] and ours. Note that [31, 33] fall into the category of region based active contours, with segmentation functions governed in the spirit of local binary fitting. Figure 8 includes a gallery that displays qualitative results of our method and the competitor methods. Since Pratondo et al. [18] is the only competitor method which is also edge based like ours, our results are especially compared against that method (see Table 1).
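These evaluation measures can be sketched for binary masks as follows. The border error form below, XOR area over ground truth area (equivalently (FP+FN)/(TP+FN)), is the common definition and our assumption for Eq. 28, whose exact expression is not reproduced in this text.

```python
import numpy as np

def segmentation_scores(O, G):
    """Dice, Jaccard, and border error for binary masks O (segmentation
    output) and G (ground truth). Border error is taken here as the
    XOR (FP + FN) area divided by the ground truth area."""
    O = O.astype(bool)
    G = G.astype(bool)
    inter = np.logical_and(O, G).sum()
    union = np.logical_or(O, G).sum()
    dice = 2.0 * inter / (O.sum() + G.sum())
    jaccard = inter / union
    border_error = np.logical_xor(O, G).sum() / G.sum()
    return dice, jaccard, border_error
```

A perfect segmentation yields Dice = Jaccard = 1 and border error 0; under-segmenting half of the lesion drops Dice to 2/3 and Jaccard to 1/2.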
Mete et al. [13] is a clustering based segmentation method. It is parameter dependent (radius of evolving clusters) and does not incorporate local information while finding the lesion border. Thus, its segmentation result cannot precisely separate non-lesion patches from the lesion if both have similar color values (e.g., fuzzy borders). Figures 8 and 9 display qualitative comparisons of our method to the competitors [18, 31, 33]. Figure 7 shows the border error evaluations as boxplots.
We have observed that the choice of noise filtering technique has a tremendous impact on segmentation time. Considering images at their actual sizes, the methods of [18, 31, 33] use merely Gaussian filtering for denoising, and the average time for segmenting an image of size 484×737 is more than 10 min. The average time for training the KNN and SVM in the approach of [18] is almost an hour. The segmentation method of Mete et al. [13] involves density based clustering; hence it is very sensitive to parameters and computationally expensive.
Conclusions
This study introduces an accurate skin lesion border detection method based on active contours. One of the main problems of active contours is leaking. This problem becomes especially visible in dermoscopy images when there are fuzzy lesion borders and/or dermoscopic artifacts, such as hair and water. When such features exist, an active contour is not able to properly find the skin lesion or region of interest. We overcome these problems by introducing SPH kernels and probability maps into active contours (LEEAC). This in turn removes the leaking problems and increases segmentation accuracy. We tested our approach on 100 dermoscopy images and compared our results with state-of-the-art methods. LEEAC outperforms other prominent methods, as reported in the "Results and discussion" section.
Abbreviations
AC: Active contour
ESF: Edge stop function
FN: False negative
FP: False positive
KNN: K-nearest neighbors algorithm
LEEAC: Local Edge-Enhanced Active Contour
LSE: Level set evolution
LSF: Level set function
PDF: Probability density function
RD: Reaction diffusion
ROI: Region of interest
SPH: Smoothed particle hydrodynamics
SVM: Support vector machines
TN: True negative
TP: True positive
References
 1
Celebi ME, Iyatomi H, Schaefer G, Stoecker WV. Lesion border detection in dermoscopy images. CoRR. 2010; abs/1011.0640. http://arxiv.org/abs/1011.0640.
 2
American Cancer Society. Cancer Facts. Am Cancer Soc. 2018:1–76. Accessed 2017-11-21. https://www.cancer.org/content/dam/cancerorg/research/cancerfactsandstatistics/annualcancerfactsandfigures/2018/cancerfactsandfigures2018.pdf.
 3
David B T. Trends in pathology malpractice claims. Am J Surg Pathol. 2012; 36(1):1–5. https://doi.org/10.1097/PAS.0b013e31823836bb.
 4
di Meo N, Stinco G, Bonin S, Gatti A, Trevisini S, Damiani G, Vichi S, Trevisan G. CASH algorithm versus 3-point checklist and its modified version in evaluation of melanocytic pigmented skin lesions: the 4-point checklist. J Dermatol. 2016; 43(6):682–5. https://doi.org/10.1111/13468138.13201.
 5
Walter F, Provest A, Vasconcelos J, Burrows PHN, Morris H, Kinmonth A, Emery J. Using the 7-point checklist as a diagnostic aid for pigmented skin lesions in general practice: a diagnostic validation study. Br J Gen Pract. 2013; 63(610):345–53. https://doi.org/10.3399/bjgp13X667213.
 6
Nachbar F, Stolz W, Merkle T. The ABCD rule of dermatoscopy. J Am Acad Dermatol. 1994; 30:551–9.
 7
Stolz W, Riemann A, Cognetta A, Pillet L, Abmayr W. ABCD rule of dermatoscopy: a new practical method for early recognition of malignant melanoma. Eur J Dermatol. 1994; 4:521–7.
 8
Vestergaard M, Macaskill P, Holt P, Menzies S. Dermoscopy compared with naked eye examination for the diagnosis of primary melanoma: a meta-analysis of studies performed in a clinical setting. Br J Dermatol. 2008; 159(3):669–76. https://doi.org/10.1111/j13652133.2008.08713.x.
 9
Celebi EM, Wen Q, Iyatomi H, Shimizu K, Zhou H, Schaefer G. A state-of-the-art survey on lesion border detection in dermoscopy images. In: Celebi E, Mendoca T, Marquez JS, editors. Dermoscopy Image Analysis. Chap. 4. Boca Raton, FL: CRC Press: 2015. p. 97–130.
 10
Erkol B, Moss H, Stanley J, William S, Hvatum E. Automatic lesion boundary detection in dermoscopy images using gradient vector flow snakes. Skin Res Technology. 2005; 1(1):17–26. https://doi.org/10.1111/j.16000846.2005.00092.x.
 11
Abbas Q, Fondon I, Sarmiento A, Emre MC. An improved segmentation method for nonmelanoma skin lesions using active contour model; 2014, pp. 193–200. https://doi.org/10.1007/978331911753_22.
12. Kass M, Witkin A, Terzopoulos D. Snakes: active contour models. Int J Comput Vis. 1988; 1(4):321–31.
13. Mete M, Sirakov NM. Lesion detection in dermoscopy images with novel density-based and active contour approaches. BMC Bioinformatics. 2010; 11(Suppl 6):S23. https://doi.org/10.1186/1471-2105-11-S6-S23.
14. Caselles V, Kimmel R, Sapiro G. Geodesic active contours. Int J Comput Vis. 1997; 22(1):61–79. https://doi.org/10.1023/A:1007979827043.
15. Vese LA, Chan TF. A multiphase level set framework for image segmentation using the Mumford and Shah model. Int J Comput Vis. 2002; 50(3):271–93. https://doi.org/10.1023/A:1020874308076.
16. Chan T, Vese L. An active contour model without edges. In: Nielsen M, Johansen P, Olsen OF, Weickert J, editors. Scale-Space Theories in Computer Vision. Berlin, Heidelberg: Springer; 1999. p. 141–51. https://doi.org/10.1007/3-540-48236-9_13.
17. Mumford D, Shah J. Optimal approximations by piecewise smooth functions and associated variational problems. Commun Pure Appl Math. 1989; 42(5):577–685. https://doi.org/10.1002/cpa.3160420503.
18. Pratondo A, Chui CK, Ong SH. Robust edge-stop functions for edge-based active contour models in medical image segmentation. IEEE Signal Process Lett. 2016; 23(2):222–6. https://doi.org/10.1109/LSP.2015.2508039.
19. Ridler TW, Calvard S. Picture thresholding using an iterative selection method. IEEE Trans Syst Man Cybern. 1978; 8(8):630–2. https://doi.org/10.1109/TSMC.1978.4310039.
20. Perona P, Malik J. Scale-space and edge detection using anisotropic diffusion. IEEE Trans Pattern Anal Mach Intell. 1990; 12(7):629–39. https://doi.org/10.1109/34.56205.
21. Monaghan JJ. Smoothed particle hydrodynamics. Rep Prog Phys. 2005; 68(8):1703–59. https://doi.org/10.1088/0034-4885/68/8/R01.
22. Kelager M. Lagrangian fluid dynamics using smoothed particle hydrodynamics. Copenhagen: Department of Computer Science, University of Copenhagen; 2006. http://image.diku.dk/projects/media/kelager.06.pdf.
23. Liu MB, Liu GR. Smoothed particle hydrodynamics (SPH): an overview and recent developments. Arch Comput Methods Eng. 2010; 17(1):25–76. https://doi.org/10.1007/s11831-010-9040-7.
24. Müller M, Charypar D, Gross M. Particle-based fluid simulation for interactive applications. In: Proceedings of the 2003 ACM SIGGRAPH/Eurographics Symposium on Computer Animation; 2003. p. 154–9.
25. Smeets D, Loeckx D, Stijnen B, De Dobbelaer B, Vandermeulen D, Suetens P. Semi-automatic level set segmentation of liver tumors combining a spiral-scanning technique with supervised fuzzy pixel classification. Med Image Anal. 2010; 14(1):13–20. https://doi.org/10.1016/j.media.2009.09.002.
26. Wu J, Yin Z, Xiong Y. The fast multilevel fuzzy edge detection of blurry images. IEEE Signal Process Lett. 2007; 14(5):344–7. https://doi.org/10.1109/LSP.2006.888087.
27. Protiere A, Sapiro G. Interactive image segmentation via adaptive weighted distances. IEEE Trans Image Process. 2007; 16(4):1046–57. https://doi.org/10.1109/TIP.2007.891796.
28. Li C, Xu C, Gui C, Fox MD. Distance regularized level set evolution and its application to image segmentation. IEEE Trans Image Process. 2010; 19(12):3243–54. https://doi.org/10.1109/TIP.2010.2069690.
29. Li C, Xu C, Gui C, Fox MD. Level set evolution without re-initialization: a new variational formulation. In: Proc IEEE Computer Society Conf on Computer Vision and Pattern Recognition (CVPR'05); 2005. p. 430–6.
30. Malladi R, Sethian JA, Vemuri BC. Shape modeling with front propagation: a level set approach. IEEE Trans Pattern Anal Mach Intell. 1995; 17(2):158–75. https://doi.org/10.1109/34.368173.
31. Zhang K, Zhang L, Song H, Zhang D. Reinitialization-free level set evolution via reaction diffusion. IEEE Trans Image Process. 2013; 22(1):258–71. https://doi.org/10.1109/TIP.2012.2214046.
32. Argenziano G, Soyer HP, De Giorgi V, Piccolo D, Carli P, Delfino M, Ferrari A, Hofmann-Wellenhof R, Massi D, Mazzocchetti G, Scalvenzi M, Wolf IH. Interactive Atlas of Dermoscopy. Edra Medical Publishing & New Media; 1999. 208 p. ISBN 8886457308 (book and CD/web resource). https://espace.library.uq.edu.au/view/UQ:229410.
33. Li C, Kao C, Gore JC, Ding Z. Implicit active contours driven by local binary fitting energy. In: Proc IEEE Conf on Computer Vision and Pattern Recognition (CVPR); 2007. https://doi.org/10.1109/CVPR.2007.383014.
Acknowledgements
We would like to thank Dr. Sreekanth Arikatla from Kitware, Inc. for providing insights on smoothed particle hydrodynamics. We also would like to thank Dr. Deepak Chittajallu from Kitware, Inc. for validating the accuracy of our results and checking the mathematical notation in this study.
Funding
This study is supported by the Arkansas INBRE program, under award #P20 GM103429 from the National Institutes of Health/National Institute of General Medical Sciences (NIGMS).
Availability of data and material
Not applicable.
About this supplement
This article has been published as part of BMC Bioinformatics Volume 20 Supplement 2, 2019: Proceedings of the 15th Annual MCBIOS Conference. The full contents of the supplement are available online at https://bmcbioinformatics.biomedcentral.com/articles/supplements/volume20supplement2.
Author information
Affiliations
Contributions
SK, TH, and MM conceived the study. MB performed all experiments and implementations. HKW guided SK in skin lesion understanding. KI helped MB with the development of the active contour algorithms. All authors read and approved the manuscript.
Corresponding author
Correspondence to Sinan Kockara.
Ethics declarations
Ethics approval and consent to participate
Not applicable.
Consent for publication
Not applicable.
Competing interests
The authors declare that they have no competing interests.
Publisher’s Note
Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.
Rights and permissions
Open Access This article is distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made. The Creative Commons Public Domain Dedication waiver (http://creativecommons.org/publicdomain/zero/1.0/) applies to the data made available in this article, unless otherwise stated.
About this article
Cite this article
Bayraktar, M., Kockara, S., Halic, T. et al. Local edge-enhanced active contour for accurate skin lesion border detection. BMC Bioinformatics 20, 91 (2019). https://doi.org/10.1186/s12859-019-2625-8
Keywords
 Dermoscopy
 Skin lesion segmentation
 Skin lesion border detection