### Datasets

The datasets were obtained from SwissProt [81], except for TC8.A, which was downloaded from the Transport Classification Database (TCDB) [41]. These datasets were chosen for their functional diversity, sample size and the range of reported family member prediction accuracies. As SVM is essentially a statistical method, the datasets cannot be too small; yet for the purposes of this study it was also convenient that they not be so large as to be computationally unwieldy. The downloaded datasets were used to construct the positive dataset for the corresponding SVM classification system. A negative dataset, representing non-class members, was generated by a well-established procedure [2, 3, 21, 30]: all proteins were grouped into domain families [82] in the PFAM database, and representative proteins of the families unrelated to the protein family being studied were chosen as negative samples.

These proteins, positive and negative, were further divided into separate training, testing and independent evaluation sets by the following procedure. First, proteins were converted into descriptor vectors and then clustered hierarchically into groups in the structural and physicochemical feature space [83], where more homologous sequences have shorter distances between them, and the largest separation between clusters was capped at 20. One representative protein was randomly selected from each group to form a training set that is sufficiently diverse and broadly distributed in the feature space. Another protein within each group was randomly selected to form the testing set. The selected proteins from each group were further checked to ensure that they are distinct from the proteins in other groups. The remaining proteins were then designated as the independent evaluation set, which was also checked for a reasonable level of diversity. Fragments, defined as sequences shorter than 60 residues, were discarded. This selection process ensures that the training, testing and evaluation sets are sufficiently diverse and broadly distributed in the feature space. Although an analysis of the 'similar' proteins in each cluster showed that the majority of the proteins in a cluster are quite non-homologous, the program CD-HIT (Cluster Database at High Identity with Tolerance) [62–64] was further used, after the SVM model was trained, to remove redundancy at both 90% and 70% sequence identity, so as to avoid bias as far as possible. CD-HIT removes homologous sequences by clustering the protein dataset at a user-defined sequence identity threshold, for example 90%, and then generating a database of only the cluster representatives, thus eliminating sequences with greater than 90% identity. The statistical details are given in Tables 2 and 3.
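The representative-selection step described above can be sketched as follows. This is a minimal illustration under stated assumptions: the clusters are taken as given (the study derived them by hierarchical clustering of descriptor vectors, which is not reproduced here), and the function name `split_by_cluster` is ours, not the authors'.

```python
import random

def split_by_cluster(clusters, seed=0):
    """One random representative per cluster -> training set; a second member
    (when the cluster has one) -> testing set; all remaining members -> the
    independent evaluation set."""
    rng = random.Random(seed)
    train, testing, evaluation = [], [], []
    for members in clusters:
        members = list(members)
        rng.shuffle(members)
        train.append(members[0])
        if len(members) > 1:
            testing.append(members[1])
        evaluation.extend(members[2:])
    return train, testing, evaluation
```

Fixing the random seed makes the split reproducible, which matters when the same partition must be reused across descriptor sets.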

### Algorithms for generating protein descriptors

Ten sets of commonly used composition and physicochemical descriptors were generated from the protein sequence (see Table 1). These descriptors can be computed via the PROFEAT server [22].

Amino acid composition (Set D1) is defined as the fraction of each amino acid type in a sequence:

*f*(*r*) = *N*_{r}/*N*,

where *r* = 1, 2, ..., 20, *N*_{r} is the number of amino acids of type *r*, and *N* is the length of the sequence. Dipeptide composition (Set D2) is defined as

*f*(*r*, *s*) = *N*_{rs}/(*N* - 1),

where *r*, *s* = 1, 2, ..., 20, and *N*_{rs} is the number of dipeptides composed of amino acid types *r* and *s*.
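As a concrete illustration, both composition descriptors can be computed directly from a sequence string. This is a minimal sketch (the study itself used the PROFEAT server); the function names here are ours.

```python
from itertools import product

AMINO_ACIDS = "ACDEFGHIKLMNPQRSTVWY"

def aa_composition(seq):
    """Set D1: fraction f(r) = N_r / N of each of the 20 amino acid types."""
    n = len(seq)
    return {aa: seq.count(aa) / n for aa in AMINO_ACIDS}

def dipeptide_composition(seq):
    """Set D2: fraction f(r, s) = N_rs / (N - 1) of each of the 400 dipeptides."""
    n_dipep = len(seq) - 1
    counts = {a + b: 0 for a, b in product(AMINO_ACIDS, repeat=2)}
    for i in range(n_dipep):
        counts[seq[i:i + 2]] += 1
    return {dp: c / n_dipep for dp, c in counts.items()}
```

The two dictionaries each sum to 1, giving 20-dimensional and 400-dimensional feature vectors respectively.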

Autocorrelation descriptors are a class of topological descriptors, also known as molecular connectivity indices, that describe the level of correlation between two objects (protein or peptide sequences) in terms of a specific structural or physicochemical property [84]; they are defined based on the distribution of amino acid properties along the sequence [85]. Eight amino acid properties are used for deriving the autocorrelation descriptors: hydrophobicity scale [86]; average flexibility index [87]; polarizability parameter [88]; free energy of amino acid solution in water [88]; residue accessible surface areas [89]; amino acid residue volumes [90]; steric parameters [91]; and relative mutability [92].

These autocorrelation properties are normalized and standardized such that

*P*_{j} = (*P*^{0}_{j} - *P̄*^{0})/σ^{0}, *j* = 1, 2, ..., 20,

where *P*^{0}_{j} is the original value of a particular property for the *j*th amino acid type and *P̄*^{0} is the average value of that property over the 20 amino acids. *P̄*^{0} and σ^{0} are given by

*P̄*^{0} = (1/20) Σ_{j=1}^{20} *P*^{0}_{j},  σ^{0} = [(1/20) Σ_{j=1}^{20} (*P*^{0}_{j} - *P̄*^{0})^{2}]^{1/2}.

Moreau-Broto autocorrelation descriptors (Set D3) [84, 93] are defined as

*AC*(*d*) = Σ_{i=1}^{N-d} *P*_{i} *P*_{i+d},

where *d* = 1, 2, ..., 30 is the lag of the autocorrelation, and *P*_{i} and *P*_{i+d} are the properties of the amino acids at positions *i* and *i+d*, respectively. After applying normalization, we get

*ATS*(*d*) = *AC*(*d*)/(*N* - *d*).

Moran autocorrelation descriptors (Set D4) [94] are calculated as

*I*(*d*) = [(1/(*N*-*d*)) Σ_{i=1}^{N-d} (*P*_{i} - *P̄*)(*P*_{i+d} - *P̄*)] / [(1/*N*) Σ_{i=1}^{N} (*P*_{i} - *P̄*)^{2}],

where *d*, *P*_{i} and *P*_{i+d} are defined in the same way as for the Moreau-Broto autocorrelation, and *P̄* is the average of the considered property *P* along the sequence:

*P̄* = (1/*N*) Σ_{i=1}^{N} *P*_{i}.

Geary autocorrelation descriptors (Set D5) [95] are written as

*C*(*d*) = [(1/(2(*N*-*d*))) Σ_{i=1}^{N-d} (*P*_{i} - *P*_{i+d})^{2}] / [(1/(*N*-1)) Σ_{i=1}^{N} (*P*_{i} - *P̄*)^{2}],

where *d*, *P̄*, *P*_{i} and *P*_{i+d} are defined as above. Comparing the three autocorrelation descriptors: while Moreau-Broto autocorrelation uses the property values as the basis for measurement, Moran autocorrelation utilizes property deviations from the average values, and Geary autocorrelation utilizes the squared difference of property values instead of vector products (of property values or deviations). The Moran and Geary autocorrelation descriptors measure spatial autocorrelation, which is the correlation of a variable with itself through space.
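The three autocorrelation variants above differ only in their numerator and normalization, which a short sketch makes explicit. The input `p` is the list of (already standardized) property values along the sequence; the function names are ours.

```python
import math

def standardize(scale):
    """Normalize a 20-value property scale to zero mean and unit s.d.
    (computed over the 20 amino acid types)."""
    vals = list(scale.values())
    mean = sum(vals) / 20
    sd = math.sqrt(sum((v - mean) ** 2 for v in vals) / 20)
    return {aa: (v - mean) / sd for aa, v in scale.items()}

def moreau_broto(p, d):
    """Normalized Moreau-Broto: ATS(d) = sum_i P_i * P_{i+d} / (N - d)."""
    n = len(p)
    return sum(p[i] * p[i + d] for i in range(n - d)) / (n - d)

def moran(p, d):
    """Moran I(d): products of deviations from the sequence average."""
    n = len(p)
    pbar = sum(p) / n
    num = sum((p[i] - pbar) * (p[i + d] - pbar) for i in range(n - d)) / (n - d)
    den = sum((x - pbar) ** 2 for x in p) / n
    return num / den

def geary(p, d):
    """Geary C(d): squared differences instead of products."""
    n = len(p)
    pbar = sum(p) / n
    num = sum((p[i] - p[i + d]) ** 2 for i in range(n - d)) / (2 * (n - d))
    den = sum((x - pbar) ** 2 for x in p) / (n - 1)
    return num / den
```

For a sequence `seq` and a standardized scale `s`, one would set `p = [s[aa] for aa in seq]` and evaluate each function for lags `d = 1, ..., 30`.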

The descriptors in Set D6 comprise the composition (*C*), transition (*T*) and distribution (*D*) features of seven structural or physicochemical properties along a protein or peptide sequence [5, 29]. The seven physicochemical properties [2, 5, 29] are hydrophobicity; normalized Van der Waals volume; polarity; polarizability; charge; secondary structure; and solvent accessibility. For each of these properties, the amino acids are divided into three groups such that those in a particular group are regarded as having approximately the same property. For instance, residues can be divided into hydrophobic (CVLIMFW), neutral (GASTPHY), and polar (RKEDQN) groups. *C* is defined as the number of residues with a particular property divided by the total number of residues in a protein sequence. *T* characterizes the percent frequency with which residues with a particular property are followed by residues of a different property. *D* measures the chain length within which the first, 25%, 50%, 75% and 100% of the amino acids with a particular property are located, respectively. There are 21 elements representing these three descriptors: 3 for *C*, 3 for *T* and 15 for *D*, and the protein feature vector is constructed by sequentially combining the 21 elements for all of these properties and the 20 residues, resulting in a total of 188 dimensions.
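The *C*, *T* and *D* features can be sketched for a single property, using the hydrophobicity grouping quoted above. This is an illustrative implementation under stated assumptions: the group labels and function names are ours, and the distribution percentile indexing is one common convention, not necessarily the exact one used by PROFEAT.

```python
import math

# Hydrophobicity grouping from the text: hydrophobic / neutral / polar
GROUPS = {"1": set("CVLIMFW"), "2": set("GASTPHY"), "3": set("RKEDQN")}

def encode(seq):
    """Map each residue to its group label ('1', '2' or '3')."""
    label = {aa: g for g, members in GROUPS.items() for aa in members}
    return "".join(label[aa] for aa in seq)

def ctd(seq):
    e = encode(seq)
    n = len(e)
    # C: fraction of residues in each group
    comp = {g: e.count(g) / n for g in "123"}
    # T: frequency of adjacent residues from two different groups (both orders)
    pairs = [(e[i], e[i + 1]) for i in range(n - 1)]
    trans = {a + b: sum(1 for x, y in pairs if {x, y} == {a, b}) / (n - 1)
             for a, b in (("1", "2"), ("1", "3"), ("2", "3"))}
    # D: relative positions of the first, 25%, 50%, 75% and 100% occurrences
    dist = {}
    for g in "123":
        pos = [i + 1 for i, c in enumerate(e) if c == g]
        dist[g] = [pos[max(0, math.ceil(f * len(pos)) - 1)] / n if pos else 0.0
                   for f in (0.0, 0.25, 0.5, 0.75, 1.0)]
    return comp, trans, dist
```

Repeating this for each of the properties and concatenating the resulting 21 elements yields the full Set D6 vector.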

The quasi-sequence order descriptors (Set D7) [96] are derived from both the Schneider-Wrede physicochemical distance matrix [10, 18, 97] and the Grantham chemical distance matrix [31], computed between each pair of the 20 amino acids. The physicochemical properties computed include hydrophobicity, hydrophilicity, polarity, and side-chain volume. Similar to the descriptors in Set D6, sequence order descriptors can also be used for representing amino acid distribution patterns of a specific physicochemical property along a protein or peptide sequence [18, 31]. For a protein chain of *N* amino acid residues R_{1}R_{2}...R_{N}, the sequence order effect can be approximately reflected through a set of sequence order coupling numbers

τ_{d} = Σ_{i=1}^{N-d} (*d*_{i,i+d})^{2},

where τ_{d} is the *d*th rank sequence order coupling number (*d* = 1, 2, ..., 30) that reflects the coupling mode between all of the *d*-most contiguous residues along a protein sequence, and *d*_{i,i+d} is the distance between the two amino acids at positions *i* and *i+d*. For each amino acid type, the type 1 quasi-sequence order descriptor is defined as

*X*_{r} = *f*_{r} / (Σ_{s=1}^{20} *f*_{s} + *w* Σ_{d=1}^{30} τ_{d}),

where *r* = 1, 2, ..., 20, *f*_{r} is the normalized occurrence of amino acid type *r*, and *w* is a weighting factor (*w* = 0.1). The type 2 quasi-sequence order descriptor is defined as

*X*_{d} = *w* τ_{d-20} / (Σ_{s=1}^{20} *f*_{s} + *w* Σ_{d=1}^{30} τ_{d}),

where *d* = 21, 22, ..., 50. The combination of these two equations gives us a vector that describes a protein: the first 20 components reflect the effect of the amino acid composition, while the components from 21 to 50 reflect the effect of sequence order.
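The construction above can be sketched as follows. Note the heavy hedging: the pairwise distance matrix is passed in as a parameter because the actual Schneider-Wrede and Grantham values are not reproduced here, and the toy matrix in the usage example below is a placeholder, not real physicochemical data.

```python
AMINO_ACIDS = "ACDEFGHIKLMNPQRSTVWY"

def qso(seq, dist, max_lag=30, w=0.1):
    """Quasi-sequence-order vector: 20 type-1 components followed by max_lag
    type-2 components. `dist[(a, b)]` is a pairwise amino acid distance
    (Schneider-Wrede or Grantham in the paper; any symmetric matrix works)."""
    n = len(seq)
    # coupling numbers: tau_d = sum over i of squared distance at lag d
    tau = [sum(dist[(seq[i], seq[i + d])] ** 2 for i in range(n - d))
           for d in range(1, max_lag + 1)]
    f = {aa: seq.count(aa) / n for aa in AMINO_ACIDS}  # normalized occurrences
    denom = 1.0 + w * sum(tau)                         # sum(f.values()) == 1
    type1 = [f[aa] / denom for aa in AMINO_ACIDS]      # components 1..20
    type2 = [w * t / denom for t in tau]               # components 21..20+max_lag
    return type1 + type2
```

By construction the full vector sums to 1, mirroring how the common denominator couples the composition and sequence order parts.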

Similar to the quasi-sequence order descriptor, the pseudo amino acid descriptor (Set D8) is made up of a 50-dimensional vector in which the first 20 components reflect the effect of the amino acid composition and the remaining 30 components reflect the effect of sequence order, except that the coupling number τ_{d} is replaced by the sequence order correlation factor θ_{λ} [32]. The set of sequence order correlation factors is defined as follows:

θ_{λ} = (1/(*N*-λ)) Σ_{i=1}^{N-λ} Θ(R_{i}, R_{i+λ}),

where θ_{λ} is the first-tier correlation factor that reflects the sequence order correlation between all of the λ-most contiguous residues along a protein chain (λ = 1, 2, ..., 30) and *N* is the number of amino acid residues. Θ(R_{i}, R_{j}) is the correlation factor and is given by

Θ(R_{i}, R_{j}) = (1/3){[*H*_{1}(R_{j}) - *H*_{1}(R_{i})]^{2} + [*H*_{2}(R_{j}) - *H*_{2}(R_{i})]^{2} + [*M*(R_{j}) - *M*(R_{i})]^{2}},

where *H*_{1}(R_{i}), *H*_{2}(R_{i}) and *M*(R_{i}) are the hydrophobicity [98], hydrophilicity [99] and side-chain mass of amino acid R_{i}, respectively. Before being substituted into the above equation, the various physicochemical properties *P*(*i*) are subjected to a standard conversion, i.e., each is standardized to zero mean and unit standard deviation over the 20 amino acids:

*P*(*i*) = [*P*^{0}(*i*) - *P̄*^{0}] / σ(*P*^{0}),

where *P*^{0} denotes the original property values and *P̄*^{0} and σ(*P*^{0}) are their mean and standard deviation over the 20 amino acids. This definition of sequence order correlation introduces more correlation factors of physicochemical effects than the coupling number τ_{d}, and has been shown to improve the way sequence order effect information is represented [32, 35, 100]. Thus, for each amino acid type, the first part of the vector is defined as

*X*_{r} = *f*_{r} / (Σ_{s=1}^{20} *f*_{s} + *w* Σ_{u=1}^{30} θ_{u}),

where *r* = 1, 2, ..., 20, *f*_{r} is the normalized occurrence of amino acid type *r*, and *w* is a weighting factor (*w* = 0.1), and the second part is defined as

*X*_{20+λ} = *w* θ_{λ} / (Σ_{s=1}^{20} *f*_{s} + *w* Σ_{u=1}^{30} θ_{u}),

where λ = 1, 2, ..., 30.
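A sketch of the pseudo amino acid construction follows. The standardized property scales are passed in as parameters because the actual hydrophobicity, hydrophilicity and side-chain mass values from refs [98, 99] are not reproduced here; the scales in the usage example are placeholders, and the function names are ours.

```python
AMINO_ACIDS = "ACDEFGHIKLMNPQRSTVWY"

def theta_factor(seq, scales, lam):
    """theta_lambda: average of Theta(R_i, R_{i+lam}) along the sequence, where
    Theta averages squared property differences over the supplied scales
    (three scales in the paper: H1, H2, M)."""
    n = len(seq)
    def big_theta(ri, rj):
        return sum((s[rj] - s[ri]) ** 2 for s in scales) / len(scales)
    return sum(big_theta(seq[i], seq[i + lam]) for i in range(n - lam)) / (n - lam)

def pseaac(seq, scales, max_lam=30, w=0.1):
    """Pseudo amino acid composition: 20 composition components followed by
    max_lam sequence order components (50-D when max_lam = 30)."""
    f = {aa: seq.count(aa) / len(seq) for aa in AMINO_ACIDS}
    thetas = [theta_factor(seq, scales, lam) for lam in range(1, max_lam + 1)]
    denom = 1.0 + w * sum(thetas)                  # sum of f_r is 1
    first = [f[aa] / denom for aa in AMINO_ACIDS]  # components 1..20
    second = [w * t / denom for t in thetas]       # components 21..20+max_lam
    return first + second
```

The only structural difference from the quasi-sequence order sketch is that the squared-distance coupling number is replaced by the averaged squared-property-difference correlation factor.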

### Support Vector Machines (SVM)

As the SVM algorithms have been extensively described in the literature [2, 3, 101], only a brief description is given here. In the case of a linear SVM, a hyperplane that separates two different classes of feature vectors with a maximum margin is constructed. One class represents positive samples, for example EC2.4 proteins, and the other the negative samples. This hyperplane is constructed by finding a vector **w** and a parameter *b* that minimize ||**w**||^{2} subject to the following conditions: **w**·**x**_{i} + *b* ≥ 1, for *y*_{i} = 1 (positive class) and **w**·**x**_{i} + *b* ≤ -1, for *y*_{i} = -1 (negative class). Here **x**_{i} is a feature vector, *y*_{i} is the group index, **w** is a vector normal to the hyperplane, |*b*|/||**w**|| is the perpendicular distance from the hyperplane to the origin, and ||**w**||^{2} is the Euclidean norm of **w**. In the case of a nonlinear SVM, feature vectors are projected into a high-dimensional feature space by using a kernel function such as the Gaussian radial basis function *K*(**x**_{i}, **x**_{j}) = exp(-||**x**_{i} - **x**_{j}||^{2}/(2σ^{2})). The linear SVM procedure is then applied to the feature vectors in this feature space. After the determination of **w** and *b*, a given vector **x** can be classified by computing *sign*[(**w**·**x**) + *b*]; a positive or negative value indicates that the vector **x** belongs to the positive or negative class, respectively.
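The kernelized decision rule can be sketched as follows. This is only the prediction step, not the training (quadratic programming) step: the support vectors and their signed coefficients are assumed to come from an already-trained model, and the toy values in the usage example below are illustrative, not a fitted classifier.

```python
import math

def rbf_kernel(x, y, sigma=1.0):
    """Gaussian radial basis function: exp(-||x - y||^2 / (2 sigma^2))."""
    sq = sum((a - b) ** 2 for a, b in zip(x, y))
    return math.exp(-sq / (2 * sigma ** 2))

def svm_predict(x, support_vectors, coeffs, b, sigma=1.0):
    """Classify x by the sign of the kernel expansion
    sum_i (alpha_i * y_i) * K(s_i, x) + b, where coeffs[i] = alpha_i * y_i."""
    val = sum(c * rbf_kernel(sv, x, sigma)
              for sv, c in zip(support_vectors, coeffs)) + b
    return 1 if val >= 0 else -1
```

With a linear kernel this reduces exactly to *sign*[(**w**·**x**) + *b*], since the weighted sum of support vectors collapses into a single normal vector **w**.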

As a discriminative method, the performance of SVM classification can be assessed by measuring the true positives *TP* (correctly predicted positive samples), false negatives *FN* (positive samples incorrectly predicted as negative), true negatives *TN* (correctly predicted negative samples), and false positives *FP* (negative samples incorrectly predicted as positive) [4, 102, 103]. As the numbers of positive and negative samples are imbalanced, the positive prediction accuracy or sensitivity *Q*_{p} = *TP*/(*TP*+*FN*) and the negative prediction accuracy or specificity *Q*_{n} = *TN*/(*TN*+*FP*) [101] are also introduced. The overall accuracy is defined as *Q* = (*TP*+*TN*)/(*TP*+*FN*+*TN*+*FP*). However, in some cases, *Q*, *Q*_{p}, and *Q*_{n} are insufficient to provide a complete assessment of the performance of a discriminative method [102, 104]. Thus the Matthews correlation coefficient (MCC) was used in this work to evaluate the randomness of the prediction:

MCC = (*TP*·*TN* - *FP*·*FN*) / [(*TP*+*FP*)(*TP*+*FN*)(*TN*+*FP*)(*TN*+*FN*)]^{1/2},

where MCC ∈ [-1,1], with a negative value indicating disagreement of the prediction and a positive value indicating agreement. A zero value means the prediction is completely random. The MCC utilizes all four basic elements of the accuracy and provides a better summary of the prediction performance than the overall accuracy.
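The four performance measures above map directly to code; a minimal sketch (the function name is ours):

```python
import math

def svm_metrics(tp, fn, tn, fp):
    """Sensitivity Q_p, specificity Q_n, overall accuracy Q, and MCC
    from the four confusion-matrix counts."""
    q_p = tp / (tp + fn)                      # sensitivity
    q_n = tn / (tn + fp)                      # specificity
    q = (tp + tn) / (tp + fn + tn + fp)       # overall accuracy
    denom = math.sqrt((tp + fp) * (tp + fn) * (tn + fp) * (tn + fn))
    mcc = (tp * tn - fp * fn) / denom if denom else 0.0
    return q_p, q_n, q, mcc
```

The `denom` guard handles the degenerate case where a confusion-matrix margin is zero, in which the MCC is conventionally reported as 0.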