Data sets used
In the current research work, six publicly available miRNA expression data sets, with accession numbers GSE17681, GSE17846, GSE21036, GSE24709, GSE28700, and GSE31408, are used. They were downloaded from the Gene Expression Omnibus (http://www.ncbi.nlm.nih.gov/geo/).
GSE17681
This data set was generated to detect specific patterns of miRNAs in peripheral blood samples of lung cancer patients. As controls, blood samples of donors without any known disease were tested. The numbers of miRNAs, samples, and classes in this data set are 866, 36, and 2, respectively [38].
GSE17846
This data set represents the analysis of miRNA profiling in peripheral blood samples of multiple sclerosis patients and of normal donors. It contains 864 miRNAs, 41 samples, and 2 classes [39].
GSE21036
This data set contains miRNA expression profiles from a cohort of 218 primary and metastatic prostate cancer tumors with a median of 5 years of clinical follow-up. The numbers of miRNAs and samples are 373 and 141, respectively [40].
GSE24709
It analyzes peripheral blood miRNA profiles of patients with lung diseases. The miRNA expression profiling was done for patients with lung cancer, patients with chronic obstructive pulmonary disease, and normal controls. It contains in total 863 miRNAs, 71 samples, and 3 classes.
GSE28700
This data set contains expression profiles of miRNAs from 22 paired gastric cancer and normal tissues. It contains in total 44 samples and 470 miRNAs. The samples are grouped into 2 classes [41].
GSE31408
It analyzes miRNA expression profiles of cutaneous T-cell lymphomas and benign inflammatory skin diseases. It contains in total 705 miRNAs, 148 samples, and 2 classes [42].
Method
Hypercuboid equivalence partition matrix
Let $\mathbb{U} = \{x_1, \dots, x_n\}$ be the set of n objects or samples and $\mathbb{C} = \{\mathcal{A}_1, \dots, \mathcal{A}_m\}$ denote the set of m attributes or miRNAs of a given microarray data set $\mathcal{T} = [w_{ij}]_{m \times n}$, where $w_{ij} \in \Re$ is the measured expression value of the miRNA $\mathcal{A}_i$ in the sample $x_j$. Let $\mathbb{D} = \{\beta_1, \dots, \beta_c\}$ be the set of class labels or sample categories of the n samples. In rough set theory, the attribute sets $\mathbb{C}$ and $\mathbb{D}$ are termed the condition and decision attribute sets of $\mathbb{U}$, respectively.
If $\mathbb{U}/\mathbb{D} = \{\beta_1, \dots, \beta_c\}$ denotes the c equivalence classes or information granules of $\mathbb{U}$ generated by the equivalence relation induced from the decision attribute set $\mathbb{D}$, then c equivalence classes of $\mathbb{U}$ can also be generated by the equivalence relation induced from each condition attribute $\mathcal{A} \in \mathbb{C}$. If $\mathbb{U}/\mathcal{A}$ denotes the c equivalence classes or information granules of $\mathbb{U}$ induced by the condition attribute $\mathcal{A}$ and n is the number of objects in $\mathbb{U}$, then the c-partitions of $\mathbb{U}$ are the sets of (cn) values $\{m_{ij}\}$ that can be conveniently arrayed as a (c×n) matrix $\mathbb{M}_{\mathcal{A}} = [m_{ij}]$. The matrix is denoted by
$$\mathbb{M}_{\mathcal{A}} = \begin{bmatrix} m_{11} & m_{12} & \cdots & m_{1n} \\ m_{21} & m_{22} & \cdots & m_{2n} \\ \vdots & \vdots & \ddots & \vdots \\ m_{c1} & m_{c2} & \cdots & m_{cn} \end{bmatrix} \qquad (1)$$
$$m_{ij} = \begin{cases} 1 & \text{if } L_i \le w_j \le U_i \\ 0 & \text{otherwise} \end{cases} \qquad (2)$$
where $w_j$ is the value of the condition attribute $\mathcal{A}$ in the sample $x_j$.
The tuple $[L_i, U_i]$ represents the interval of the ith class $\beta_i$ according to the decision attribute set $\mathbb{D}$. The interval $[L_i, U_i]$ is the value range of the condition attribute $\mathcal{A}$ with respect to class $\beta_i$. It is spanned by the objects with the same class label $\beta_i$; that is, the value of each object $x_j$ with class label $\beta_i$ falls within the interval $[L_i, U_i]$. This can be viewed as a supervised granulation process, which utilizes class information.
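To make the granulation concrete, the following minimal Python sketch (our own illustration, assuming numpy and integer-coded class labels; it is not the authors' implementation) computes the class intervals $[L_i, U_i]$ and the hypercuboid equivalence partition matrix of a single miRNA according to (1) and (2):

```python
import numpy as np

def equivalence_partition_matrix(w, y, n_classes):
    """Hypercuboid equivalence partition matrix of one miRNA, per (1) and (2).

    w : (n,) array of expression values of a single miRNA over n samples
    y : (n,) array of integer class labels in {0, ..., n_classes - 1}
    Returns M of shape (c, n) with M[i, j] = 1 iff L_i <= w[j] <= U_i.
    """
    M = np.zeros((n_classes, w.shape[0]), dtype=int)
    for i in range(n_classes):
        # the interval [L_i, U_i] is spanned by the samples labelled with class i
        L_i, U_i = w[y == i].min(), w[y == i].max()
        M[i] = (w >= L_i) & (w <= U_i)
    return M
```

Each row of the returned matrix is one class hypercuboid, here a one-dimensional interval; overlapping rows give rise to the implicit hypercuboids discussed below.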
Generally, an m-dimensional hypercuboid or hyperrectangle is defined in the m-dimensional Euclidean space, where the space is spanned by the m variables measured for each sample or object. In geometry, a hypercuboid or hyperrectangle is the generalization of a rectangle to higher dimensions, formally defined as the Cartesian product of orthogonal intervals. Accordingly, a d-dimensional hypercuboid, with d attributes as its dimensions, is the Cartesian product of d orthogonal intervals. It encloses a region of the d-dimensional space, where each dimension corresponds to a certain attribute and the value domain of each dimension is the value range or interval that corresponds to a particular class.
The c×n matrix $\mathbb{M}_{\mathcal{A}}$ is termed the hypercuboid equivalence partition matrix of the condition attribute $\mathcal{A}$. It represents the c-hypercuboid equivalence partitions of the universe generated by an equivalence relation. Each row of the matrix $\mathbb{M}_{\mathcal{A}}$ is a hypercuboid equivalence partition or class. Here $m_{ij} \in \{0, 1\}$ represents the membership of object $x_j$ in the ith equivalence partition or class $\beta_i$, satisfying the following two conditions:
$$0 < \sum_{j=1}^{n} m_{ij} \le n, \quad \forall i \qquad (3)$$
$$\sum_{i=1}^{c} m_{ij} \ge 1, \quad \forall j \qquad (4)$$
The above axioms should hold for every equivalence partition; they correspond to the requirement that an equivalence class is non-empty. However, in real data analysis, uncertainty arises due to overlapping class boundaries. Hence, such a granulation process does not necessarily result in a compatible granulation, in the sense that two class hypercuboids or intervals may intersect with each other. The intersection of two hypercuboids also forms a hypercuboid, which is referred to as an implicit hypercuboid. The implicit hypercuboids encompass the misclassified samples or objects that belong to more than one class. The degree of dependency of the decision attribute set or class label on the condition attribute set depends on the cardinality of the implicit hypercuboids: the degree of dependency increases as this cardinality decreases. Hence, the degree of dependency of the decision attribute on a condition attribute set is evaluated by finding the implicit hypercuboids that encompass misclassified objects. Using the concept of the hypercuboid equivalence partition matrix, the misclassified objects of the implicit hypercuboids can be identified based on the confusion vector defined next:
$$\mathbb{V}_{\mathcal{A}} = [v_1, v_2, \dots, v_n] \qquad (5)$$
$$\text{where } v_j = \min\left\{1, \sum_{i=1}^{c} m_{ij} - 1\right\} \qquad (6)$$
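Continuing the sketch, the confusion vector of (5) and (6) is a one-liner over the columns of the matrix (again an illustrative fragment, not the authors' code):

```python
import numpy as np

def confusion_vector(M):
    """Confusion vector per (5) and (6): v_j = min{1, sum_i m_ij - 1}.

    v_j = 0 if sample j falls in exactly one class hypercuboid, and
    v_j = 1 if it falls inside an implicit (overlap) hypercuboid.
    """
    return np.minimum(1, M.sum(axis=0) - 1)
```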
According to rough set theory, if an object $x_j$ belongs to the lower approximation of any class $\beta_i$, then it does not belong to the lower or upper approximations of any other class; in this case $m_{ij} = 1$ and $v_j = 0$. On the other hand, if the object $x_j$ belongs to the boundary regions of more than one class, then it is encompassed by an implicit hypercuboid; in this case $m_{ij} = 1$ and $v_j = 1$. Hence, the hypercuboid equivalence partition matrix $\mathbb{M}_{\mathcal{A}}$ and the corresponding confusion vector $\mathbb{V}_{\mathcal{A}}$ of the condition attribute $\mathcal{A}$ can be used to define the lower and upper approximations of the ith class $\beta_i$ of the decision attribute set $\mathbb{D}$.
Let $\mathcal{A} \in \mathbb{C}$. The class $\beta_i$ can be approximated using only the information contained within $\mathcal{A}$ by constructing the A-lower and A-upper approximations of $\beta_i$:
$$\underline{A}(\beta_i) = \{x_j \mid m_{ij} = 1 \text{ and } v_j = 0\} \qquad (7)$$
$$\overline{A}(\beta_i) = \{x_j \mid m_{ij} = 1\} \qquad (8)$$
where the equivalence relation A is induced from the attribute $\mathcal{A}$. The boundary region of $\beta_i$ is then defined as
$$B(\beta_i) = \overline{A}(\beta_i) \setminus \underline{A}(\beta_i) = \{x_j \mid m_{ij} = 1 \text{ and } v_j = 1\} \qquad (9)$$
Dependency
Combining (1), (5), and (7), the dependency of the decision attribute $\mathbb{D}$ on the condition attribute $\mathcal{A}$ can be defined as follows:
$$\gamma_{\mathcal{A}}(\mathbb{D}) = \frac{1}{n} \sum_{i=1}^{c} |\underline{A}(\beta_i)| \qquad (10)$$
$$\gamma_{\mathcal{A}}(\mathbb{D}) = 1 - \frac{1}{n} \sum_{j=1}^{n} v_j \qquad (11)$$
where $0 \le \gamma_{\mathcal{A}}(\mathbb{D}) \le 1$. If $\gamma_{\mathcal{A}}(\mathbb{D}) = 1$, $\mathbb{D}$ depends totally on $\mathcal{A}$; if $0 < \gamma_{\mathcal{A}}(\mathbb{D}) < 1$, $\mathbb{D}$ depends partially on $\mathcal{A}$; and if $\gamma_{\mathcal{A}}(\mathbb{D}) = 0$, then $\mathbb{D}$ does not depend on $\mathcal{A}$. The dependency $\gamma_{\mathcal{A}}(\mathbb{D})$ is also termed the relevance of the attribute $\mathcal{A}$ with respect to the class labels $\mathbb{D}$.
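Under the same assumptions as the earlier fragments, the approximations (7)-(9) and the dependency (11) can be read off directly from the matrix and the confusion vector; the sketch below returns sample indices for compactness:

```python
import numpy as np

def approximations(M, v, i):
    """A-lower and A-upper approximations and boundary of class i, per (7)-(9)."""
    lower = np.where((M[i] == 1) & (v == 0))[0]      # (7): unambiguous members
    upper = np.where(M[i] == 1)[0]                   # (8): all possible members
    boundary = np.where((M[i] == 1) & (v == 1))[0]   # (9): overlap region
    return lower, upper, boundary

def dependency(v):
    """Dependency, i.e., relevance, of D on the attribute, per (11)."""
    return 1.0 - v.mean()
```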
Significance
Given two condition attributes $\mathcal{A}_p$ and $\mathcal{A}_q$, the c×n hypercuboid equivalence partition matrix $\mathbb{M}_{\{\mathcal{A}_p,\mathcal{A}_q\}}$ corresponding to the set $\{\mathcal{A}_p, \mathcal{A}_q\}$ can be calculated from the two c×n hypercuboid equivalence partition matrices $\mathbb{M}_{\mathcal{A}_p}$ and $\mathbb{M}_{\mathcal{A}_q}$ as follows:
$$\mathbb{M}_{\{\mathcal{A}_p,\mathcal{A}_q\}} = \mathbb{M}_{\mathcal{A}_p} \cap \mathbb{M}_{\mathcal{A}_q} \qquad (12)$$
$$\text{where } m_{ij}(\{\mathcal{A}_p,\mathcal{A}_q\}) = m_{ij}(\mathcal{A}_p) \wedge m_{ij}(\mathcal{A}_q) \qquad (13)$$
The change in dependency when an attribute is removed from the set of condition attributes is a measure of the significance of that attribute: it quantifies the extent to which the attribute contributes to the dependency on the decision attribute. The significance of the attribute $\mathcal{A}_q$ with respect to the condition attribute set $\{\mathcal{A}_p, \mathcal{A}_q\}$ is given by
$$\sigma_{\{\mathcal{A}_p,\mathcal{A}_q\}}(\mathbb{D}, \mathcal{A}_q) = \gamma_{\{\mathcal{A}_p,\mathcal{A}_q\}}(\mathbb{D}) - \gamma_{\mathcal{A}_p}(\mathbb{D}) \qquad (14)$$
where $0 \le \sigma_{\{\mathcal{A}_p,\mathcal{A}_q\}}(\mathbb{D}, \mathcal{A}_q) \le 1$. Hence, the higher the change in dependency, the more significant the attribute $\mathcal{A}_q$ is. If the significance is 0, then the attribute is dispensable.
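The joint matrix of (12) and (13) is an element-wise conjunction, so the significance (14) can be sketched as follows (illustrative code under the same assumptions as above):

```python
import numpy as np

def significance(Mp, Mq):
    """Significance of attribute q with respect to the pair {p, q}, per (12)-(14)."""
    M_joint = Mp & Mq  # (12)-(13): element-wise intersection of the hypercuboids
    gamma_joint = 1.0 - np.minimum(1, M_joint.sum(axis=0) - 1).mean()  # (11)
    gamma_p = 1.0 - np.minimum(1, Mp.sum(axis=0) - 1).mean()
    return gamma_joint - gamma_p  # (14)
```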
μHEM: proposed miRNA selection method
Let $\gamma_{\mathcal{A}_i}(\mathbb{D})$ be the relevance of the miRNA $\mathcal{A}_i$ with respect to the class labels $\mathbb{D}$ and $\sigma_{\{\mathcal{A}_i,\mathcal{A}_j\}}(\mathbb{D}, \mathcal{A}_i)$ be the significance of the miRNA $\mathcal{A}_i$ with respect to another miRNA $\mathcal{A}_j$, where $\mathbb{S}$ is the set of selected miRNAs. The average relevance of all selected miRNAs is, therefore, given by
$$\hat{\mathcal{R}} = \frac{1}{|\mathbb{S}|} \sum_{\mathcal{A}_i \in \mathbb{S}} \gamma_{\mathcal{A}_i}(\mathbb{D}) \qquad (15)$$
while the average significance among the selected miRNAs is as follows:
$$\hat{\mathcal{S}} = \frac{1}{|\mathbb{S}|(|\mathbb{S}| - 1)} \sum_{\mathcal{A}_i \neq \mathcal{A}_j \in \mathbb{S}} \sigma_{\{\mathcal{A}_i,\mathcal{A}_j\}}(\mathbb{D}, \mathcal{A}_i) \qquad (16)$$
Therefore, the problem of selecting a set $\mathbb{S}$ of relevant and significant miRNAs from the whole miRNA set $\mathbb{C}$ is equivalent to maximizing both $\hat{\mathcal{R}}$ and $\hat{\mathcal{S}}$, that is, to maximizing the objective function $\mathcal{J}$, where
$$\mathcal{J} = \hat{\mathcal{R}} + \omega \hat{\mathcal{S}} \qquad (17)$$
where ω is a weight parameter. To solve the above problem, the following greedy algorithm is used; a consolidated Python sketch is given after the steps below.
1. Initialize $\mathbb{C} \leftarrow \{\mathcal{A}_1, \dots, \mathcal{A}_m\}$ and $\mathbb{S} \leftarrow \emptyset$.
2. Generate the hypercuboid equivalence partition matrix $\mathbb{M}_{\mathcal{A}_i}$ and the corresponding confusion vector $\mathbb{V}_{\mathcal{A}_i}$ for each miRNA $\mathcal{A}_i \in \mathbb{C}$ using (1) and (5), respectively.
3. Calculate the relevance $\gamma_{\mathcal{A}_i}(\mathbb{D})$ of each miRNA $\mathcal{A}_i \in \mathbb{C}$ using (11).
4. Select the miRNA $\mathcal{A}_i$ with the highest relevance value $\gamma_{\mathcal{A}_i}(\mathbb{D})$ as the most relevant miRNA. In effect, $\mathbb{S} = \{\mathcal{A}_i\}$ and $\mathbb{C} = \mathbb{C} \setminus \{\mathcal{A}_i\}$.
5. Repeat the following two steps until $\mathbb{C} = \emptyset$ or the desired number of miRNAs is selected.
6. Repeat the following four steps for each of the remaining miRNAs $\mathcal{A}_i$ of $\mathbb{C}$:
(a) Generate the hypercuboid equivalence partition matrix $\mathbb{M}_{\{\mathcal{A}_i,\mathcal{A}_j\}}$ using (12) between each selected miRNA $\mathcal{A}_j \in \mathbb{S}$ and the miRNA $\mathcal{A}_i \in \mathbb{C}$.
(b) Generate the corresponding confusion vector $\mathbb{V}_{\{\mathcal{A}_i,\mathcal{A}_j\}}$ for the two miRNAs $\mathcal{A}_i$ and $\mathcal{A}_j$ using (5).
(c) Calculate the significance $\sigma_{\{\mathcal{A}_i,\mathcal{A}_j\}}(\mathbb{D}, \mathcal{A}_i)$ of the miRNA $\mathcal{A}_i$ with respect to each of the already selected miRNAs $\mathcal{A}_j$ of $\mathbb{S}$ using (14).
(d) Remove $\mathcal{A}_i$ from $\mathbb{C}$ if it has a zero significance value with respect to any one of the selected miRNAs. In effect, $\mathbb{C} = \mathbb{C} \setminus \{\mathcal{A}_i\}$.
7. From the remaining miRNAs of $\mathbb{C}$, select the miRNA $\mathcal{A}_i$ that maximizes the following condition:
$$\gamma_{\mathcal{A}_i}(\mathbb{D}) + \frac{\omega}{|\mathbb{S}|} \sum_{\mathcal{A}_j \in \mathbb{S}} \sigma_{\{\mathcal{A}_i,\mathcal{A}_j\}}(\mathbb{D}, \mathcal{A}_i) \qquad (18)$$
As a result, $\mathbb{S} = \mathbb{S} \cup \{\mathcal{A}_i\}$ and $\mathbb{C} = \mathbb{C} \setminus \{\mathcal{A}_i\}$.
8. Stop.
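Putting the pieces together, the whole greedy procedure can be condensed into the following self-contained Python sketch. It is an illustration under the assumptions of the earlier fragments, not the authors' implementation; names such as `mu_hem`, `n_select`, and `omega` are our own, and (18) is implemented as the relevance plus the ω-weighted mean significance over the already selected set:

```python
import numpy as np

def partition_matrix(w, y, c):
    """Hypercuboid equivalence partition matrix of one miRNA, per (1)-(2)."""
    M = np.zeros((c, w.shape[0]), dtype=int)
    for i in range(c):
        L, U = w[y == i].min(), w[y == i].max()
        M[i] = (w >= L) & (w <= U)
    return M

def gamma(M):
    """Dependency (relevance), per (5), (6), and (11)."""
    return 1.0 - np.minimum(1, M.sum(axis=0) - 1).mean()

def mu_hem(T, y, n_select, omega=0.5):
    """Greedy selection of n_select miRNAs from an (m x n) expression matrix T."""
    m = T.shape[0]
    c = len(np.unique(y))
    Ms = [partition_matrix(T[i], y, c) for i in range(m)]      # Step 2
    relevance = np.array([gamma(M) for M in Ms])               # Step 3
    best = int(np.argmax(relevance))                           # Step 4
    S, C = [best], set(range(m)) - {best}
    while C and len(S) < n_select:                             # Step 5
        scores = {}
        for i in list(C):                                      # Step 6
            sig = [gamma(Ms[i] & Ms[j]) - gamma(Ms[j]) for j in S]  # 6(a)-(c)
            if min(sig) == 0.0:                                # 6(d): dispensable
                C.discard(i)
                continue
            scores[i] = relevance[i] + omega * np.mean(sig)    # Step 7, per (18)
        if not scores:
            break
        best = max(scores, key=scores.get)
        S.append(best)
        C.discard(best)
    return S
```

For instance, `mu_hem(T, y, 50)` would return the indices of fifty selected miRNAs from an expression matrix with miRNAs in rows and samples in columns.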
Computational complexity
The proposed μHEM method has low computational complexity with respect to the number of miRNAs, samples, and classes. Prior to computing the relevance or significance of a miRNA, the hypercuboid equivalence partition matrix and the confusion vector of each miRNA have to be generated, which is carried out in Step 2 of the proposed algorithm. The computational complexity to generate a (c×n) hypercuboid equivalence partition matrix is $\mathcal{O}(cn)$, where c and n represent the number of classes and objects in the data set, respectively, while the generation of the confusion vector also has $\mathcal{O}(cn)$ time complexity. In effect, the computation of the relevance of a miRNA has $\mathcal{O}(cn)$ time complexity. Hence, the total complexity to compute the relevance of m miRNAs, which is carried out in Step 3 of the proposed algorithm, is $\mathcal{O}(cnm)$. The selection of the most relevant miRNA from the set of m miRNAs, which is carried out in Step 4, has a complexity of $\mathcal{O}(m)$.
There is only one loop in Step 5 of the proposed miRNA selection method, which is executed (d−1) times, where d represents the number of selected miRNAs. The computation of the significance of a candidate miRNA with respect to another miRNA also has $\mathcal{O}(cn)$ complexity. Since the search is first order incremental, in each iteration the significance of a candidate miRNA has to be computed only with respect to the most recently selected miRNA. Hence, if $\tilde{m}$ ($\tilde{m} < m$) represents the cardinality of the candidate miRNA set, the total complexity to compute the significance of the candidate miRNAs, which is carried out in Step 6, is $\mathcal{O}(cn\tilde{m})$. The selection of a miRNA from the candidate miRNAs by maximizing both relevance and significance, which is carried out in Step 7, has a complexity of $\mathcal{O}(\tilde{m})$. Hence, the total complexity to execute the loop (d−1) times is $\mathcal{O}(cn\tilde{m}d)$.
In effect, the selection of a set of d relevant and significant miRNAs from the whole set of m miRNAs using the proposed hypercuboid equivalence partition matrix based first order incremental search method has an overall computational complexity of $\mathcal{O}(cnmd)$, as $\tilde{m} < m$.
B.632+ error rate
In order to minimize the variability and bias of the derived results, the so-called B.632+ bootstrap approach [37] is used, which is defined as follows:
$$E_{.632+} = (1 - \hat{w}) \cdot AE + \hat{w} \cdot B1 \qquad (19)$$
where AE denotes the proportion of the original training samples misclassified, termed the apparent error rate, and B1 is the bootstrap error, defined as follows:
$$B1 = \frac{1}{n} \sum_{j=1}^{n} \frac{\sum_{k=1}^{M} I_{jk} Q_{jk}}{\sum_{k=1}^{M} I_{jk}} \qquad (20)$$
where n is the number of original samples and M is the number of bootstrap samples. If the sample $x_j$ is not contained in the kth bootstrap sample, then $I_{jk} = 1$, otherwise $I_{jk} = 0$. Similarly, if $x_j$ is misclassified, $Q_{jk} = 1$, otherwise $Q_{jk} = 0$. The weight parameter $\hat{w}$ is given by
$$\hat{w} = \frac{0.632}{1 - 0.368 \cdot R} \qquad (21)$$
$$\text{where } R = \frac{B1 - AE}{\gamma - AE} \qquad (22)$$
$$\text{and } \gamma = \sum_{i=1}^{c} p_i (1 - q_i) \qquad (23)$$
where c is the number of classes, $p_i$ is the proportion of samples from the ith class, and $q_i$ is the proportion of samples assigned to the ith class. Also, γ is termed the no-information error rate, which would apply if the distribution of the class-membership label of a sample $x_j$ did not depend on its feature vector.
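As an illustration of how (19)-(23) combine, the following sketch (our own; the clipping of R to [0, 1] follows the standard .632+ definition of Efron and Tibshirani) assembles the estimate from quantities assumed to be already computed:

```python
import numpy as np

def b632_plus(AE, B1, y, y_pred_apparent):
    """B.632+ error estimate, per (19)-(23).

    AE : apparent error rate; B1 : leave-one-out bootstrap error per (20);
    y  : true class labels; y_pred_apparent : labels predicted on the training data.
    """
    classes = np.unique(y)
    p = np.array([(y == k).mean() for k in classes])               # class priors
    q = np.array([(y_pred_apparent == k).mean() for k in classes]) # assignment rates
    g = np.sum(p * (1.0 - q))                                      # (23): gamma
    R = (B1 - AE) / (g - AE) if g > AE else 0.0                    # (22)
    R = min(max(R, 0.0), 1.0)   # clip, as in the standard .632+ definition
    w = 0.632 / (1.0 - 0.368 * R)                                  # (21)
    return (1.0 - w) * AE + w * B1                                 # (19)
```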
Support vector machine
In the current study, the support vector machine (SVM) [43] is used to evaluate the performance of the proposed μHEM algorithm as well as of several other feature selection algorithms. The SVM is a margin classifier that draws an optimal hyperplane in the feature vector space; this defines a boundary that maximizes the margin between data samples of different classes, thereby leading to good generalization properties. A key feature of the SVM is the use of kernels to construct a nonlinear decision boundary. In the present work, linear kernels are used. The source code of the SVM has been downloaded from the Library for Support Vector Machines (LIBSVM, http://www.csie.ntu.edu.tw/~cjlin/libsvm/).
To compute the different types of error rates obtained using the SVM, the bootstrap approach is performed on each miRNA expression data set. For each training set, a set of differential miRNAs is first generated, and then the SVM is trained with the selected miRNAs. After the training, the miRNAs selected for the training set are used to generate the corresponding test set, and the class labels of the test samples are predicted using the SVM. For each data set, the fifty top-ranked miRNAs are selected for the analysis.
In order to calculate the B.632+ error rate, the apparent error (AE) is calculated first. This error is obtained when the same original data set is used to train and test a classifier. After that, the B1 error is computed from M bootstrap samples. Next, the no-information error (γ) is calculated by randomly permuting the class labels of a given data set: the permuted data set is used for miRNA selection, the selected miRNA set is used to build the SVM, and the trained SVM is then used to classify the original data set. The error generated by this procedure is known as the γ rate. Finally, the B.632+ error rate is computed from the AE, B1, and γ errors using (19).
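The evaluation loop itself can be sketched as below. This is a simplified stand-in, not the authors' pipeline: it uses scikit-learn's linear-kernel SVC in place of the LIBSVM binary, and `select_mirnas` denotes any selection routine with the signature of the `mu_hem` sketch above.

```python
import numpy as np
from sklearn.svm import SVC

def bootstrap_b1(T, y, select_mirnas, n_boot=50, n_top=50):
    """Leave-one-out bootstrap error B1, per (20), with in-loop miRNA selection.

    T : (m, n) miRNA expression matrix; y : (n,) class labels.
    """
    n = y.shape[0]
    num = np.zeros(n)            # sum over k of I_jk * Q_jk for each sample j
    den = np.zeros(n)            # sum over k of I_jk for each sample j
    rng = np.random.default_rng(0)
    for _ in range(n_boot):
        boot = rng.integers(0, n, size=n)            # k-th bootstrap sample
        out = np.setdiff1d(np.arange(n), boot)       # held-out samples (I_jk = 1)
        if out.size == 0:
            continue
        genes = select_mirnas(T[:, boot], y[boot], n_top)  # select on training data only
        clf = SVC(kernel='linear').fit(T[np.ix_(genes, boot)].T, y[boot])
        wrong = clf.predict(T[np.ix_(genes, out)].T) != y[out]   # Q_jk
        num[out] += wrong
        den[out] += 1
    valid = den > 0
    return (num[valid] / den[valid]).mean()
```

Note that the miRNA selection is repeated inside every bootstrap replicate, so that the selection step itself is included in the estimated error, as the text above prescribes.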