Cascaded discrimination of normal, abnormal, and confounder classes in histopathology: Gleason grading of prostate cancer
- Scott Doyle^{1},
- Michael D Feldman^{2},
- Natalie Shih^{2},
- John Tomaszewski^{3} and
- Anant Madabhushi^{4}
DOI: 10.1186/1471-2105-13-282
© Doyle et al.; licensee BioMed Central Ltd. 2012
Received: 6 January 2012
Accepted: 3 September 2012
Published: 30 October 2012
Abstract
Background
Automated classification of histopathology involves identification of multiple classes, including benign, cancerous, and confounder categories. The confounder tissue classes can often mimic and share attributes with both the diseased and normal tissue classes, and can be particularly difficult to identify, both manually and by automated classifiers. In the case of prostate cancer, there may be several confounding tissue types present in a biopsy sample, posing major sources of diagnostic error for pathologists. Two common multi-class approaches are one-shot classification (OSC), where all classes are identified simultaneously, and one-versus-all (OVA), where a "target" class is distinguished from all "non-target" classes. OSC is typically unable to handle discrimination of classes of varying similarity (e.g. distinguishing images of prostate atrophy from high-grade cancer), while OVA forces several heterogeneous classes into a single "non-target" class. In this work, we present a cascaded (CAS) approach to classifying prostate biopsy tissue samples, where images from different classes are grouped to maximize intra-group homogeneity and inter-group heterogeneity.
Results
We apply the CAS approach to categorize 2000 tissue samples taken from 214 patient studies into seven classes: epithelium, stroma, atrophy, prostatic intraepithelial neoplasia (PIN), and prostate cancer Gleason grades 3, 4, and 5. A series of increasingly granular binary classifiers is used to split the different tissue classes until the images have been categorized into a single unique class. Our automatically extracted image feature set includes architectural features based on the location of nuclei within the tissue sample as well as texture features extracted on a per-pixel level. The CAS strategy yields a positive predictive value (PPV) of 0.86 in classifying the 2000 tissue images into one of the seven classes, compared with the OVA (PPV 0.77) and OSC (PPV 0.76) approaches.
Conclusions
Use of the CAS strategy increases the PPV for a multi-category classification system over two common alternative strategies. In classification problems such as histopathology, where multiple class groups exist with varying degrees of heterogeneity, the CAS system can intelligently assign class labels to objects by performing multiple binary classifications according to domain knowledge.
Background
Digital pathology (DP) has allowed for the development of computerized image-based classification algorithms that can be applied to digitized tissue samples. Recent research has focused on developing computer-aided diagnostic (CAD) tools that classify tissues into one of two classes, such as identifying "cancer" vs. "non-cancer" tissue [1–5]. However, in the case of prostate cancer (CaP), the "non-cancer" class includes heterogeneous tissue types such as epithelium and stroma as well as confounding classes such as atrophy, PIN, and perineural invasion. Ideally, one would employ a multi-class approach to distinguish between several different tissue types at once.
Automated, computerized image analysis of histopathology has the potential to greatly reduce the inter-observer variability in diagnosis of biopsy samples [12–15], and algorithms have been developed for detecting neuroblastoma [15], quantifying lymphocytic infiltration in breast biopsy tissue [16], and grading astrocytomas in brain tissue [17], to name a few. In the context of CaP, researchers have analyzed tissue with features ranging from low-level image attributes (color, texture, wavelets) [18] and second-order co-occurrence features [19] to morphometric attributes [1]. Farjam et al. [20] employed gland morphology to determine the malignancy of biopsy tissues, while Diamond et al. [21] used morphological and texture features to identify 100-by-100 pixel tissue regions as stroma, epithelium, or cancerous tissue (a three-class problem). Tabesh et al. [1] developed a CAD system that employs texture, color, and morphometry on tissue microarrays to distinguish between cancer and non-cancer regions, as well as between high and low Gleason grade prostate cancers (both binary classifications).
In this work, we apply the cascaded classifier to the problem of classifying regions of interest (ROIs) of prostate tissue into one of seven classes: Gleason grades 3, 4, and 5 (abbreviated G3, G4, and G5, respectively), benign epithelium (BE), benign stroma (BS), tissue atrophy (AT), and PIN. From each ROI, a set of novel image features is extracted that quantifies the architecture (location and arrangement of cell nuclei) and texture of the region. These feature vectors are used in a cascaded classification approach whereby similar classes are grouped according to domain knowledge, and binary classification is performed at increasing levels of granularity. We test our algorithm by comparing the cascaded approach with two traditional multi-class approaches: the OSC approach, where classification algorithms attempt to distinguish all classes simultaneously, and the OVA approach, where individual classes are classified independently of all other classes. We show that by incorporating domain knowledge and utilizing the cascaded classifier, we can more accurately identify nested subclasses.
This work is an extension of our previous work on identifying regions of cancer vs. non-cancer in prostate biopsy images on a pixel-by-pixel basis using a hierarchical classifier [5]. Our previous approach was developed to identify suspicious regions on very large images, using pyramidal decomposition until individual pixels could be classified as cancer or non-cancer. The major differences in the current work are the following: (1) we classify tissue regions as opposed to individual pixels, so our analysis and feature extraction are necessarily different, and (2) we deal with multiple categories of tissue types instead of the binary "cancer" vs. "non-cancer" question. Additionally, an important objective of this work is to illustrate the performance increase obtained by the CAS approach compared with OVA and OSC.
Methods
Cascaded multi-category classification
Notation and definitions used
An example of an annotated digital biopsy sample is shown in Figure 1, with zoomed-in exemplars of each tissue class. Mathematically, we denote an ROI as $\mathcal{R}=(R,g)$, where R is a 2D set of pixels r ∈ R and g(r) is an intensity function that assigns a triplet of intensity values to each pixel (corresponding to the red, green, and blue color channels of the image). The class of $\mathcal{R}$ is denoted as $\omega_i$ for i ∈ {1,⋯,k} classes, and we use the notation $\mathcal{R} \hookrightarrow \omega_i$ to indicate that $\mathcal{R}$ belongs to class $\omega_i$. In this work, k=7.
Class groupings in cascaded classifier
To classify $\mathcal{R}$, we employ the cascaded approach illustrated in Figure 3. The cascaded setup consists of a series of binary classifications, which divides the multi-category classification into multiple two-category problems. Each bifurcation in Figure 3 represents a separate task with an independently trained classifier, amounting to six binary divisions. The motivation for the chosen class groups is based on domain knowledge. The first bifurcation handles all the samples in the database, classifying them as "cancer" or "non-cancer" images. Within the cancer group, we further classify samples into either G5 or a class group containing G3 plus G4; this is done because, within the cancer group, G3 and G4 are more similar to one another than either is to G5. (Note that in this paper, when we refer to "Gleason grades 3+4", we mean the group of images belonging to either primary Gleason grade 3 or primary grade 4 CaP, as opposed to images representing a Gleason pattern of 3+4, i.e. Gleason sum 7. All ROIs are considered to be homogeneous regions of a single tissue pattern.) Similarly, non-cancer samples are identified as either "confounder" classes, which contain abnormal but non-cancerous tissue, or "normal" class groups. Finally, each of the remaining class groups is further classified to obtain the final classification for all samples: the Gleason grade 3+4 group is separated into G3 and G4, the confounder images are classified as AT or PIN, and normal tissues are classified as BE or BS.
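To make the routing concrete, below is a minimal sketch of the class groupings and cascade traversal. The node names, the `CASCADE` dictionary, and the assumption that each bifurcation exposes a trained binary callable `classifiers[node]` returning 0 or 1 are all illustrative, not the authors' original implementation.

```python
# Class groups at each bifurcation of Figure 3 (node -> (group A, group B));
# a sample receives its final label when its group contains a single class.
CASCADE = {
    'all':       ({'G3', 'G4', 'G5'},  {'BE', 'BS', 'AT', 'PIN'}),
    'cancer':    ({'G3', 'G4'},        {'G5'}),
    'G3+G4':     ({'G3'},              {'G4'}),
    'noncancer': ({'AT', 'PIN'},       {'BE', 'BS'}),
    'confound':  ({'AT'},              {'PIN'}),
    'normal':    ({'BE'},              {'BS'}),
}

# Map each multi-class group back to the cascade node that splits it
NODE_OF = {frozenset(a | b): node for node, (a, b) in CASCADE.items()}

def classify(classifiers, x, node='all'):
    """Route sample x down the cascade; classifiers[node](x) returns 0 or 1."""
    group = CASCADE[node][classifiers[node](x)]
    if len(group) == 1:
        return next(iter(group))        # unique class label reached
    return classify(classifiers, x, NODE_OF[frozenset(group)])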
Cascaded decision tree classifier
For each binary classification task in the cascade, we use a decision tree classifier [22]. Decision trees use a training set of labeled samples to learn a series of rules or "branches" based on feature values. These rules attempt to optimally distinguish between the class labels, which are represented by "leaves" at the end of the tree. Classification can then be performed on a testing set, using the features of each testing sample to traverse the tree and arrive at the leaf representing the correct class of the sample. While any classification algorithm may be used within the cascaded framework, we chose decision trees for two reasons: (1) decision trees can inherently deal with several classes by creating multiple class leaves, allowing us to implement the OSC classification strategy directly for comparison; (2) the structure of the tree can be examined to determine which features appear closest to the top of the tree, which are typically the most discriminating features for that classification task. Additionally, these features are selected independently for each classification task, allowing us to use an optimal set of features at each level of the cascade.
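As a sketch of how the top-of-tree features could be inspected, the snippet below trains a single decision tree on toy data and reads off the root split and an impurity-based ranking. The use of scikit-learn and the synthetic data are assumptions for illustration only.

```python
import numpy as np
from sklearn.tree import DecisionTreeClassifier

# Toy stand-ins for one binary task's feature matrix and labels
rng = np.random.default_rng(0)
X_train = rng.normal(size=(200, 50))
y_train = (X_train[:, 3] + 0.1 * rng.normal(size=200) > 0).astype(int)

tree = DecisionTreeClassifier(max_depth=5).fit(X_train, y_train)

# The feature tested at the root (node 0) sits "closest to the top" of the
# tree and is typically the most discriminating one for this task
print('root split on feature', tree.tree_.feature[0],
      'at threshold', tree.tree_.threshold[0])

# Impurity-based importances give a per-task feature ranking
top10 = np.argsort(tree.feature_importances_)[::-1][:10]
```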
Detection and segmentation of nuclei
Color deconvolution for nuclei region detection
Finding nuclear centroids via watershed segmentation
1. Binarize the image using Otsu's thresholding method [28] to yield the set of pixels within the nuclear region, denoted N.
2. The set of pixels immediately adjacent to the boundary of N is denoted C, where N ∩ C = ∅.
3. The Euclidean distance transform is applied to the binarized image to generate a distance map $\mathcal{D}=(R,d)$, where d(r) is the distance from pixel r to the closest point on C.
4. Local maxima in $\mathcal{D}$ are identified as the start points for the watershed algorithm, which iterates until all pixels in N are segmented.
Shown in Figure 4 are examples of the watershed algorithm’s steps, including the binarized image (Figures 4(c) and 4(h)), the distance map $\mathcal{D}$ (Figures 4(d) and 4(i)), and the resulting watershed contours (Figures 4(e) and 4(j)). Different colors in Figures 4(e) and 4(j) indicate different pools or segmentations, and black dots indicate the centroids of the detected regions.
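A minimal sketch of steps 1–4, assuming the hematoxylin channel from the color deconvolution step is available as a 2D array; the SciPy/scikit-image functions and the `min_distance` value are illustrative stand-ins for the authors' implementation.

```python
import numpy as np
from scipy import ndimage as ndi
from skimage.filters import threshold_otsu
from skimage.feature import peak_local_max
from skimage.segmentation import watershed
from skimage.measure import regionprops

def detect_nuclear_centroids(hematoxylin):
    # Step 1: Otsu threshold to obtain the nuclear region N
    mask = hematoxylin > threshold_otsu(hematoxylin)
    # Step 3: Euclidean distance transform D (distance to the boundary C)
    dist = ndi.distance_transform_edt(mask)
    # Step 4: local maxima of D seed the watershed
    peaks = peak_local_max(dist, min_distance=5, labels=mask)
    markers = np.zeros_like(dist, dtype=int)
    markers[tuple(peaks.T)] = np.arange(1, len(peaks) + 1)
    labels = watershed(-dist, markers, mask=mask)
    # Centroids of the segmented regions become the vertex set V
    return np.array([p.centroid for p in regionprops(labels)])
```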
Quantitative image feature extraction
List of features
| Feature type | Feature subtype | Features | Total |
|---|---|---|---|
| Architecture | Voronoi Diagram | Area, chord length, perimeter | 12 |
| | Delaunay Triangulation | Area, perimeter | 8 |
| | Minimum Spanning Tree | Branch length | 4 |
| | Nuclear Density | Nearest neighbors, distance to neighbors | 24 |
| Texture | First-Order | Statistics, Sobel and Kirsch filters, gradients | 135 |
| | Co-occurrence | Autocorrelation, contrast, correlation, cluster prominence, cluster shade, dissimilarity, energy, entropy, homogeneity, maximum probability, variance, sum average, sum variance, sum entropy, difference variance, difference entropy, two information measures of correlation, inverse difference, normalized inverse difference, inverse difference moment | 189 |
| | Steerable Filter | Frequency and orientation parameters | 216 |
Nuclear architecture feature extraction
Voronoi diagram (${\mathcal{G}}_{\text{Vor}}$)
The Voronoi diagram ${\mathcal{G}}_{\text{Vor}}$ assigns each pixel to the polygon $P_a$ of its nearest nuclear centroid $v_a \in V$:

$$P_a = \{r \in R : \|r - v_a\| \le \|r - v_j\|,\ \forall j \ne a\},$$

where a, j ∈ {1,2,⋯,m} (m being the number of detected centroids) and ||·|| is the Euclidean distance between two points. That is, pixels are assigned to the polygon of the nearest centroid. This yields a tessellation of the image, as shown in Figure 5(a). Pixels that are equidistant from exactly two centroids make up E (edges of the graph, shown in red), while pixels equidistant from three or more centroids make up the intersections of multiple edges. Note that in this case V are not the endpoints of the edges in the graph, but the centroids around which the polygons are constructed. The perimeter, area, and chord lengths of each polygon in ${\mathcal{G}}_{\text{Vor}}$ are computed, and the average, standard deviation, disorder^{a}, and minimum to maximum ratio of each are calculated for a total of 12 Voronoi-based features per $\mathcal{R}$.
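Below is a sketch of the area and perimeter statistics (8 of the 12 Voronoi features; chord lengths are omitted), using SciPy's `Voronoi` as an assumed stand-in and dropping the unbounded polygons at the image border.

```python
import numpy as np
from scipy.spatial import Voronoi

def summarize(v):
    """Average, std. dev., disorder (endnote a), and min/max ratio."""
    v = np.asarray(v, dtype=float)
    mean, std = v.mean(), v.std()
    return [mean, std, 1 - 1 / (1 + std / mean), v.min() / v.max()]

def voronoi_features(centroids):
    vor = Voronoi(centroids)
    areas, perims = [], []
    for region_idx in vor.point_region:
        region = vor.regions[region_idx]
        if -1 in region or len(region) == 0:
            continue  # skip unbounded border polygons
        poly = vor.vertices[region]
        x, y = poly[:, 0], poly[:, 1]
        # Shoelace formula for polygon area
        areas.append(0.5 * abs(np.dot(x, np.roll(y, 1)) - np.dot(y, np.roll(x, 1))))
        perims.append(np.sum(np.linalg.norm(poly - np.roll(poly, 1, axis=0), axis=1)))
    return summarize(areas) + summarize(perims)
```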
Delaunay triangulation (${\mathcal{G}}_{\text{Del}}$)
The Delaunay triangulation is a triangulation of the vertices V such that the circumcircle of each triangle contains no other vertices. It is the dual graph of the Voronoi diagram, meaning that centroid points v_{a} and v_{b} are connected in ${\mathcal{G}}_{\text{Del}}$ if and only if polygons P_{a} and P_{b} share an edge in ${\mathcal{G}}_{\text{Vor}}$. An example of ${\mathcal{G}}_{\text{Del}}$ is given in Figure 5(b); ${\mathcal{G}}_{\text{Vor}}$ is shown faded to illustrate the relationship between the two. In this graph, the vertices V constitute the endpoints of the edges E. From this graph, we compute the area and perimeter of each triangle, and the average, standard deviation, disorder, and minimum to maximum ratio of these are calculated to yield 8 Delaunay-based features per $\mathcal{R}$.
Minimum spanning tree (${\mathcal{G}}_{\text{MST}}$)
A spanning tree is a connected, acyclic, undirected subgraph that includes every vertex in V. The weight W of the tree is the sum of the lengths of all of its edges E, and the minimum spanning tree (MST), denoted ${\mathcal{G}}_{\text{MST}}$, is the spanning tree with the lowest overall W; it is a subgraph of ${\mathcal{G}}_{\text{Del}}$. An example of ${\mathcal{G}}_{\text{MST}}$ is given in Figure 5(c); again, we superimpose ${\mathcal{G}}_{\text{Del}}$ to show the relationship between the two. We calculate the average, standard deviation, disorder, and minimum to maximum ratio of the edge weights to yield 4 MST-based features per $\mathcal{R}$.
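Because the Euclidean MST is a subgraph of the Delaunay triangulation, both graphs can be computed together; a sketch with SciPy (an assumed toolchain, not the authors') follows.

```python
import numpy as np
from scipy.spatial import Delaunay
from scipy.sparse import coo_matrix
from scipy.sparse.csgraph import minimum_spanning_tree

def mst_edge_weights(centroids):
    tri = Delaunay(centroids)
    # Collect unique Delaunay edges and their Euclidean lengths
    edges = set()
    for simplex in tri.simplices:
        for i in range(3):
            a, b = sorted((simplex[i], simplex[(i + 1) % 3]))
            edges.add((a, b))
    rows, cols, w = zip(*[(a, b, np.linalg.norm(centroids[a] - centroids[b]))
                          for a, b in edges])
    n = len(centroids)
    graph = coo_matrix((w, (rows, cols)), shape=(n, n))
    # MST of the Delaunay graph equals the Euclidean MST of the point set
    return minimum_spanning_tree(graph).data   # edge weights W
```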
Nuclear density
Finally, we calculate a set of features that quantify the density of the nuclei without reliance on graph structures. Nuclear density features are calculated in two different ways: (1) We construct a circle around each point in V with a fixed radius (black circle in Figure 5(d)), and count the number of neighboring points in V that fall within that circle. This is done for radii of 10, 20, 30, 40, and 50 pixels, and for each point in V . The average, standard deviation, and disorder is computed across all points in V to yield 15 features for each $\left(\right)close="">\mathcal{R}$. (2) We calculate the distance from a point in V to the nearest 3, 5, and 7 neighbors (red lines in Figure 5(d)). This is done for each point in V , and the average, standard deviation, and disorder is computed to yield 9 additional features, for a total of 24 features based on nuclear density.
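A sketch of both density feature families using a k-d tree is shown below. Whether "distance to the nearest k neighbors" means the mean of the k distances or the distance to the k-th neighbor is ambiguous in the text, so taking the mean is an assumption here.

```python
import numpy as np
from scipy.spatial import cKDTree

def density_summary(v):
    """Average, std. dev., and disorder of a per-nucleus measurement."""
    mean, std = np.mean(v), np.std(v)
    return [mean, std, 1 - 1 / (1 + std / mean)]

def nuclear_density_features(centroids):
    tree = cKDTree(centroids)
    feats = []
    # (1) Neighbor counts within fixed radii (5 radii x 3 stats = 15)
    for radius in (10, 20, 30, 40, 50):
        counts = [len(tree.query_ball_point(c, radius)) - 1  # exclude self
                  for c in centroids]
        feats += density_summary(counts)
    # (2) Mean distance to the k nearest neighbors (3 values x 3 stats = 9)
    for k in (3, 5, 7):
        d, _ = tree.query(centroids, k=k + 1)  # first hit is the point itself
        feats += density_summary(d[:, 1:].mean(axis=1))
    return feats   # 24 features total
```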
Image texture feature extraction
First-order statistics
We calculate 15 first-order statistics from each image: the average, median, standard deviation, and range of the image intensities within a sliding neighborhood; Sobel filter responses along the vertical, horizontal, and both diagonal axes; 3 Kirsch filter features; gradients along the vertical and horizontal axes; the difference of gradients; and the diagonal derivative. Calculating these 15 features for each channel in the image, and then taking the mean, standard deviation, and mode of each feature image, yields a total of 135 first-order statistics for $\mathcal{R}$. An example of the average hue feature image is shown in Figure 8(a).
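A sketch of a few of the 15 per-channel feature images follows (the Kirsch filters and the mode summary statistic are omitted for brevity); the window size is illustrative.

```python
import numpy as np
from scipy import ndimage as ndi

def first_order_features(channel, window=3):
    """A subset of the sliding-window first-order feature images."""
    channel = channel.astype(float)
    feature_images = {
        'mean':    ndi.uniform_filter(channel, size=window),
        'median':  ndi.median_filter(channel, size=window),
        'range':   ndi.maximum_filter(channel, size=window)
                   - ndi.minimum_filter(channel, size=window),
        'sobel_v': ndi.sobel(channel, axis=0),
        'sobel_h': ndi.sobel(channel, axis=1),
    }
    # Each feature image is reduced to summary statistics per channel
    feats = []
    for img in feature_images.values():
        feats += [img.mean(), img.std()]
    return feats
```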
Co-occurrence features
Co-occurrence features [29] are computed by constructing a symmetric 256×256 co-occurrence matrix that describes the frequency with which two different pixel intensities appear together within a fixed neighborhood. The number of rows and columns in the matrix is determined by the number of possible intensity values in a channel of $\mathcal{R}$; for 8-bit images, this corresponds to 2^{8}=256. Element (a, b) in the matrix is equal to the number of times pixel value a occurs adjacent to pixel value b in $\mathcal{R}$. From the co-occurrence matrix, a set of 21 features is calculated: autocorrelation, contrast, correlation, cluster prominence, cluster shade, dissimilarity, energy, entropy, homogeneity, maximum probability, variance, sum average, sum variance, sum entropy, difference variance, difference entropy, two information measures of correlation, inverse difference, normalized inverse difference, and inverse difference moment [29, 30]. Extracting these values from each channel and taking the mean, standard deviation, and mode of each feature image yields a total of 189 co-occurrence features. An example of the contrast entropy image is shown in Figure 8(b).
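A sketch using scikit-image's GLCM utilities, which cover only a subset of the 21 descriptors listed above; the single-offset configuration is an assumption.

```python
from skimage.feature import graycomatrix, graycoprops

def cooccurrence_features(channel):
    """channel: 8-bit single-channel image (values 0-255)."""
    # Symmetric, normalized 256x256 co-occurrence matrix for one offset
    glcm = graycomatrix(channel, distances=[1], angles=[0], levels=256,
                        symmetric=True, normed=True)
    # graycoprops implements a subset of the Haralick-style descriptors
    props = ('contrast', 'dissimilarity', 'homogeneity',
             'energy', 'correlation', 'ASM')
    return {p: graycoprops(glcm, p)[0, 0] for p in props}
```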
Steerable filters
Each steerable filter is a Gabor filter of the form

$$h(x,y) = \exp\left[-\frac{1}{2}\left(\frac{x'^2}{\sigma_x^2} + \frac{y'^2}{\sigma_y^2}\right)\right] \exp\left(2\pi i \kappa x'\right),$$

where x′ = x cos(θ) + y sin(θ), y′ = y cos(θ) − x sin(θ), κ is the filter’s frequency shift, θ is the filter phase, and σ_{x} and σ_{y} are the standard deviations along the horizontal and vertical axes. We utilize a filter bank consisting of two frequency-shift values, κ ∈ {5,9}, and six orientation parameter values, θ = nπ/6 for n ∈ {0,1,⋯,5}, generating 12 different filters. Each filter yields a real and an imaginary response, which are calculated for each of the three channels. An example of two Gabor-filtered images is shown in Figures 8(c) and 8(d), illustrating the real and imaginary responses, respectively, for a filter with κ=5 and θ = 5π/6. Taking the mean, standard deviation, and mode of each feature image yields a total of 216 steerable filter texture features.
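A sketch of the 12-filter bank with scikit-image's Gabor implementation; mapping the paper's frequency shift κ to `frequency=1/κ` is an assumption, as is omitting the mode statistic.

```python
import numpy as np
from skimage.filters import gabor

def gabor_bank_features(channel):
    """Real and imaginary Gabor responses for one image channel."""
    feats = []
    for kappa in (5, 9):                 # frequency-shift values from the text
        for n in range(6):               # orientations theta = n*pi/6
            real, imag = gabor(channel, frequency=1.0 / kappa,
                               theta=n * np.pi / 6)
            for resp in (real, imag):
                feats += [resp.mean(), resp.std()]
    return feats
```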
Experimental setup
Prostate biopsy tissue preparation, digitization, and ROI identification
Prostate biopsy samples were acquired from 214 patients at the Department of Surgical Pathology at the University of Pennsylvania in the course of normal clinical treatment. Tissue samples were stained with hematoxylin and eosin (H&E) to highlight nuclear and cytoplasmic material in the cells. Following fixation, the slides were scanned into a computer workstation at 40x optical magnification using an Aperio whole-slide digital scanner (Aperio, Vista, CA). The acquisition was performed following an automated focusing procedure as per the recommended software settings, and the resulting files were saved in the ScanScope Virtual Slide (SVS) file format, which is similar to a multi-image tagged image file format (TIFF) file. Each patient study resulted in a single image (214 images total), each containing two to three tissue samples. Each image measures from 10,000 to 100,000 pixels per dimension, depending on the amount of tissue on the slide; uncompressed, the images range from 1 gigabyte (GB) to over 20 GB on disk. At the time of scanning, images were compressed using the JPEG standard at a quality setting of 70 (a compression ratio of approximately 1:15); at the captured magnification, this compression did not result in a significant loss of image quality.
ROIs corresponding to each class of interest were manually delineated by an expert pathologist, with the goal of obtaining relatively homogeneous tissue patches (i.e. patches that express only a single tissue type). Due to the widely varied presentation of the target classes on patient biopsy, the number of ROIs obtained per patient varied greatly (between a minimum of 5 and a maximum of 30). It should be noted that the annotation of individual tissue types is not a common practice within clinical diagnosis and prognosis of prostate biopsy samples; thus, there are no generally accepted guidelines for drawing exact boundaries around regions of cancer, PIN, or atrophy. The annotating pathologists were asked only to ensure that the majority of each ROI belonged to a single tissue class. Following annotation, the images were down-sampled to a resolution equivalent to 20x optical magnification. A total of 2,256 ROIs were obtained.
Experiment 1: classifier comparison
Our main hypothesis is that for multi-category classification, the CAS methodology will provide increased performance compared with the OSC and OVA strategies. The differences between the three strategies are summarized below:

- Cascade (CAS): The cascaded strategy is our proposed method, described in the Methods section above.
- One-Shot Classification (OSC): The entire dataset is classified into seven classes simultaneously. This is handled implicitly by the decision tree construction, where rule branches terminate at several different class labels.
- One-Versus-All (OVA): A binary classifier is used to identify a single target class apart from a single non-target class made up of the remaining classes. Each class is classified independently of the others, meaning that errors in one class do not affect the performance of the others.
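For comparison, the OSC and OVA baselines are only a few lines each with scikit-learn (an assumed toolchain; the toy data are placeholders), while the CAS strategy reuses the cascade sketch from the Methods section.

```python
import numpy as np
from sklearn.tree import DecisionTreeClassifier
from sklearn.multiclass import OneVsRestClassifier

rng = np.random.default_rng(1)
X = rng.normal(size=(300, 20))
y = rng.integers(0, 7, size=300)   # stand-in for the seven tissue labels

# OSC: one tree whose leaves carry all seven class labels simultaneously
osc = DecisionTreeClassifier().fit(X, y)

# OVA: seven independent binary trees, each separating one target class
# from the pooled "non-target" remainder
ova = OneVsRestClassifier(DecisionTreeClassifier()).fit(X, y)
```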
Evaluation is performed on a per-class basis to ensure that comparisons between the different classification strategies are standardized.
Experiment 2: feature ranking
1. At iteration t, each feature is evaluated in terms of its discriminative power for the current classification task.
2. The feature that provides the highest accuracy is selected at the t-th iteration of the algorithm.
3. A weight α_{t} is assigned to the selected feature, proportional to the feature’s discriminative power.
4. α_{t} is used to re-weight the training samples in subsequent iterations, forcing the algorithm to select features that focus on correctly classifying difficult samples.
5. When t = T, the algorithm returns the set of selected features and their corresponding weights.
As the algorithm progresses, learners are selected that correctly classify samples misclassified by previously selected learners. Based on the weights, we obtain a ranking of the ten most discriminating weak learners for each task, with α_{t} > α_{t+1}. The weights are summed across the twenty trials to obtain a final weight and ranking for each learner, as sketched below.
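A sketch of the ranking procedure with AdaBoost over depth-1 stumps, whose root features and weights α_t provide a per-task feature ranking; the toy data and ten rounds are illustrative, not the study's configuration.

```python
import numpy as np
from sklearn.ensemble import AdaBoostClassifier

rng = np.random.default_rng(2)
X = rng.normal(size=(200, 50))
y = (X[:, 7] + 0.5 * rng.normal(size=200) > 0).astype(int)  # toy binary task

# The default weak learner is a depth-1 decision stump, so each boosting
# round effectively selects a single feature and assigns it a weight alpha_t
booster = AdaBoostClassifier(n_estimators=10).fit(X, y)

# Rank (feature index, alpha) pairs by their boosting weight
ranking = sorted(((est.tree_.feature[0], alpha)
                  for est, alpha in zip(booster.estimators_,
                                        booster.estimator_weights_)),
                 key=lambda pair: -pair[1])
```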
Experiment 3: evaluation of automated nuclei detection algorithm
Our final experiment determines whether our automated nuclear detection algorithm accurately identifies nuclear centroids. We note that we are not interested in a perfect segmentation of nuclei, but rather in a segmentation accurate enough to generate useful and descriptive feature values. Since exact delineation of each nucleus is not our main goal, traditional methods of segmentation evaluation (such as percentage overlap, Hausdorff distance, and Dice coefficients) are not appropriate for this task. To ensure that our feature extraction performs appropriately, nuclear centroids were manually annotated in a subset of images from five classes (epithelium, stroma, and Gleason grades 3, 4, and 5). We compared the features obtained through our automated detection algorithm, using color deconvolution and watershed segmentation, with the features obtained using the manual annotations. The comparison was performed using a Student’s t-test to determine how many features had no statistically significant difference between the two sets of feature values.
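A sketch of the per-feature comparison; the array shapes and effect size are placeholders, not the study's data.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(3)
auto_feats   = rng.normal(0.00, 1.0, size=(30, 48))  # automated pipeline
manual_feats = rng.normal(0.05, 1.0, size=(30, 48))  # manual annotation

# Per-feature two-sample t-test; count the features showing no
# statistically significant difference between the two pipelines
_, pvals = stats.ttest_ind(auto_feats, manual_feats, axis=0)
print('features with p > 0.05:', int((pvals > 0.05).sum()))
print('features with p > 0.01:', int((pvals > 0.01).sum()))
```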
The research was conducted with approval from the Institutional Review Boards at both the University of Pennsylvania and Rutgers University.
Results and discussion
Experiment 1: classifier comparison
Quantitative classification results
| Metric | Strategy | BE | BS | G3 | G4 | G5 | AT | PN |
|---|---|---|---|---|---|---|---|---|
| Accuracy | OVA | 0.90 | 0.98 | 0.74 | 0.83 | 0.97 | 0.92 | 0.96 |
| | OSC | 0.89 | 0.97 | 0.75 | 0.83 | 0.97 | 0.92 | 0.97 |
| | CAS | 0.98 | 0.98 | 0.77 | 0.76 | 0.95 | 0.88 | 0.89 |
| PPV | OVA | 0.71 | 0.90 | 0.68 | 0.73 | 0.78 | 0.79 | 0.79 |
| | OSC | 0.67 | 0.87 | 0.69 | 0.71 | 0.81 | 0.76 | 0.81 |
| | CAS | 0.99 | 0.97 | 0.79 | 0.73 | 0.76 | 0.91 | 0.88 |
| NPV | OVA | 0.92 | 0.98 | 0.78 | 0.84 | 0.97 | 0.93 | 0.96 |
| | OSC | 0.93 | 0.98 | 0.79 | 0.85 | 0.97 | 0.93 | 0.97 |
| | CAS | 0.96 | 0.98 | 0.72 | 0.78 | 0.96 | 0.85 | 0.89 |
The CAS strategy does not outperform the OSC or OVA strategies with respect to accuracy (ACC) or NPV, but it shows a modest improvement in PPV. The majority of errors under the CAS approach are false positives; that is, the classifier is more likely to assign a target label to a non-target image. This leads to a tradeoff in NPV, which is slightly lower for CAS than for the alternative strategies.
In terms of PPV, there are only two classes for which CAS is not the top-performing strategy: G4, where it ties the OVA strategy, and G5, where it under-performs both alternatives. These represent two very similar classes of CaP on the grading scale that are difficult to distinguish automatically [8, 10]. Even so, the PPV difference for the G5 class between CAS and OSC (the top-performing strategy) is only 0.05.
Experiment 2: feature ranking
In distinguishing different grades of cancer (G5 vs. G3/G4 and G3 vs. G4), all of the top five selected features are texture-based. The subtle differences between Gleason grades of prostate tissue are not captured by the quantitative architectural features, as biological variation likely eliminates whatever discriminating power these features have. The more granular texture features, however, are capable of identifying the subtle changes in nuclear proliferation and lumen area that are major indicators of progressing disease.
For the non-cancer tasks – BE vs. BS, and AT vs. PN – we find that both architectural and textural features are in the top-ranked features. This can be appreciated by referring to the examples of tissue shown in Figure 1 as well as the architectural heat map in Figure 7. In both sets of non-cancer classification tasks, the target classes have either large, well-organized glandular structures (BE and AT) or sparse, less-structured tissue types with fewer arranged nuclei (BS and PN). Architectural features are well-suited to quantify the differences represented by these large structures, and so we see these features receiving higher weight than they do when distinguishing Gleason grades.
Experiment 3: evaluation of automated nuclei detection algorithm
Statistical differences between automatic and manual architectural features
| Class | Features with p > 0.05 | Features with p > 0.01 |
|---|---|---|
| Epithelium | 13 | 17 |
| Stroma | 24 | 26 |
| Grade 3 | 6 | 9 |
| Grade 4 | 11 | 15 |
| Grade 5 | 28 | 29 |
Conclusions
In this work, we have presented a cascaded multi-class system that incorporates domain knowledge to accurately classify cancer, non-cancer, and confounder tissue classes on H&E stained prostate biopsy samples. By dividing the classification into multiple class groups and performing increasingly granular classifications, we can utilize a priori domain knowledge to help tackle difficult classification problems. This cascaded approach can be generalized to any multi-class problem that involves classes which can be grouped in a way that maximizes intra-group homogeneity while maximizing inter-group heterogeneity. We have developed a set of quantitative features that can accurately characterize the architecture and texture of prostate biopsy tissues, and use this information to discriminate between different tissue classes. We have shown that our automated nuclei detection algorithm generates feature values which are comparable to those obtained by manual delineation of nuclei, a more appropriate evaluation of detection than a point-by-point comparison between the two methods. Finally, we analyzed the discriminating power of each of our features with respect to each classification task in the cascade, and we found that for class groups with highly structured tissues, architecture plays an important role; however, in cases where tissue types are very similar (i.e. distinguishing Gleason grade), texture is more important to capture the subtle differences in tissue structure.
In our current implementation of the CAS approach, we assumed that domain knowledge should be the driving force behind the ordering of the cascaded classifiers. However, this may not be optimal, and other cascaded setups could be used. For example, one could calculate an image metric from the training data that divides the data into homogeneous groups based on feature values, thus further separating the classes in each task. Using a proper distance metric to drive the initial design of the system might increase the classifier’s overall performance. In addition, we would like to investigate alternative classification algorithms capable of performing one-shot classification, such as neural networks.
Endnotes
^{a}For a feature with standard deviation A and mean B, the disorder is calculated as $1-\frac{1}{1+\frac{A}{B}}$.
Declarations
Acknowledgements
This work was made possible by the Wallace H. Coulter Foundation, New Jersey Commission on Cancer Research, National Cancer Institute (R01CA136535-01, R01CA140772-01, and R03CA143991-01), US Dept. of Defense (W81XWH-08-1-0145), and The Cancer Institute of New Jersey.
Authors’ Affiliations
References
- Tabesh A, Teverovskiy M, Pang HY, Kumar VP, Verbel D, Kotsianti A, Saidi O: Multifeature prostate cancer diagnosis and Gleason grading of histological images. IEEE Trans Med Imaging 2007, 26(10):1366–1378.
- Kong J, Shimada H, Boyer K, Saltz J, Gurcan M: Image analysis for automated assessment of grade of neuroblastic differentiation. In Proc 4th IEEE International Symposium on Biomedical Imaging: From Nano to Macro (ISBI 2007). 2007, 61–64.
- Petushi S, Katsinis C, Coward C, Garcia F, Tozeren A: Automated identification of microstructures on histology slides. Proc IEEE International Symposium on Biomedical Imaging: Macro to Nano 2004, 424–427.
- Weyn B, Wouwer G, Daele A, Scheunders P, Dyck D, Marck E, Jacob W: Automated breast tumor diagnosis and grading based on wavelet chromatin texture description. Cytometry 1998, 33:32–40. doi:10.1002/(SICI)1097-0320(19980901)33:1<32::AID-CYTO4>3.0.CO;2-D
- Doyle S, Feldman M, Tomaszewski J, Madabhushi A: A boosted Bayesian multi-resolution classifier for prostate cancer detection from digitized needle biopsies. IEEE Trans Biomed Eng 2012, 59(5):1205–1218.
- ACS: Cancer Facts and Figures. Atlanta: American Cancer Society; 2011.
- Gleason D: Classification of prostatic carcinomas. Cancer Chemother Rep 1966, 50(3):125–128.
- Epstein J, Allsbrook W, Amin M, Egevad L: The 2005 International Society of Urological Pathology (ISUP) Consensus Conference on Gleason grading of prostatic carcinoma. Am J Surg Pathol 2005, 29(9):1228–1242. doi:10.1097/01.pas.0000173646.99337.b1
- Epstein J, Walsh P, Sanfilippo F: Clinical and cost impact of second-opinion pathology. Review of prostate biopsies prior to radical prostatectomy. Am J Surg Pathol 1996, 20(7):851–857. doi:10.1097/00000478-199607000-00008
- Oppenheimer J, Wills M, Epstein J: Partial atrophy in prostate needle cores: another diagnostic pitfall for the surgical pathologist. Am J Surg Pathol 1998, 22(4):440–445. doi:10.1097/00000478-199804000-00008
- Bostwick D, Meiers I: Prostate biopsy and optimization of cancer yield. Eur Urol 2006, 49(3):415–417. doi:10.1016/j.eururo.2005.12.052
- Allsbrook W, Mangold K, Johnson M, Lane R, Lane C, Epstein J: Interobserver reproducibility of Gleason grading of prostatic carcinoma: general pathologist. Hum Pathol 2001, 32:81–88. doi:10.1053/hupa.2001.21135
- Madabhushi A, Doyle S, Lee G, Basavanhally A, Monaco J, Masters S, Tomaszewski J, Feldman MD: Integrated diagnostics: a conceptual framework with examples. Clin Chem Lab Med 2010, 48(7):989–998.
- Hipp J, Flotte T, Monaco J, Cheng J, Madabhushi A, Yagi Y, Rodriguez-Canales J, Emmert-Buck M, Dugan M, Hewitt S, Toner M, Tompkins R, Lucas D, Gilbertson J, Balis U: Computer aided diagnostic tools aim to empower rather than replace pathologists: lessons learned from computational chess. J Pathol Inform 2011, 2:25. doi:10.4103/2153-3539.82050
- Gurcan MN, Boucheron LE, Can A, Madabhushi A, Rajpoot NM, Yener B: Histopathological image analysis: a review. IEEE Rev Biomed Eng 2009, 2:147–171.
- Basavanhally A, Ganesan S, Agner S, Monaco J, Feldman M, Tomaszewski J, Bhanot G, Madabhushi A: Computerized image-based detection and grading of lymphocytic infiltration in HER2+ breast cancer histopathology. IEEE Trans Biomed Eng 2010, 57(3):642–653.
- Glotsos D, Kalatzis I, Spyridonos P, Kostopoulos S, Daskalakis A, Athanasiadis E, Ravazoula P, Nikiforidis G, Cavouras D: Improving accuracy in astrocytomas grading by integrating a robust least squares mapping driven support vector machine classifier into a two level grade classification scheme. Comput Methods Programs Biomed 2008, 90(3):251–261. doi:10.1016/j.cmpb.2008.01.006
- Wetzel A, Crowley R, Kim S, Dawson R, Zheng L, Joo Y, Yagi Y, Gilbertson J, Gadd C, Deerfield D, Becich M: Evaluation of prostate tumor grades by content based image retrieval. Proc SPIE 1999, 3584:244–252. doi:10.1117/12.339826
- Esgiar A, Naguib R, Sharif B, Bennett M, Murray A: Fractal analysis in the detection of colonic cancer images. IEEE Trans Inf Technol Biomed 2002, 6:54–58. doi:10.1109/4233.992163
- Farjam R, Soltanian-Zadeh H, Jafari-Khouzani K, Zoroofi R: An image analysis approach for automatic malignancy determination of prostate pathological images. Cytometry Part B (Clinical Cytometry) 2007, 72(B):227–240.
- Diamond J, Anderson N, Bartels P, Montironi R, Hamilton P: The use of morphological characteristics and texture analysis in the identification of tissue composition in prostatic neoplasia. Hum Pathol 2004, 35(9):1121–1131. doi:10.1016/j.humpath.2004.05.010
- Quinlan JR: Decision trees and decision-making. IEEE Trans Syst Man Cybern 1990, 20(2):339–346. doi:10.1109/21.52545
- Jacobs RA, Jordan MI, Nowlan SJ, Hinton GE: Adaptive mixtures of local experts. Neural Comput 1991, 3:79–87. doi:10.1162/neco.1991.3.1.79
- Ruifrok A, Johnston D: Quantification of histochemical staining by color deconvolution. Anal Quant Cytol Histol 2001, 23:291–299.
- McNaught AD, Wilkinson A (Eds): Compendium of Chemical Terminology. Oxford: Blackwell Science; 1997.
- Meyer F: Topographic distance and watershed lines. Signal Process 1994, 38:113–125. doi:10.1016/0165-1684(94)90060-4
- Beucher S, Lantuejoul C: Use of watersheds in contour detection. International Workshop on Image Processing, Real-time Edge and Motion Detection 1979.
- Otsu N: A threshold selection method from gray-level histograms. IEEE Trans Syst Man Cybern 1979, 9:62–66.
- Haralick R, Shanmugam K, Dinstein I: Textural features for image classification. IEEE Trans Syst Man Cybern 1973, 3(6):610–621.
- Soh L, Tsatsoulis C: Texture analysis of SAR sea ice imagery using gray level co-occurrence matrices. IEEE Trans Geosci Remote Sens 1999, 37(2):780–795. doi:10.1109/36.752194
- Jain A, Farrokhnia F: Unsupervised texture segmentation using Gabor filters. Pattern Recognit 1991, 24(12):1167–1186. doi:10.1016/0031-3203(91)90143-S
- Manjunath B, Ma W: Texture features for browsing and retrieval of image data. IEEE Trans Pattern Anal Mach Intell 1996, 18(8):837–842. doi:10.1109/34.531803
- Cortes C, Vapnik V: Support-vector networks. Mach Learn 1995, 20(3):273–297.
- Tu Z: Probabilistic boosting-tree: learning discriminative models for classification, recognition, and clustering. Proc IEEE Int Conf Computer Vision 2005, 2:1589–1596.
- Freund Y, Schapire R: Experiments with a new boosting algorithm. Machine Learning: Proceedings of the Thirteenth International Conference 1996, 148–156.
Copyright
This article is published under license to BioMed Central Ltd. This is an Open Access article distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/by/2.0), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.