
Table 4 Optimal-performing MLMs and preprocessing, together with the tuned hyper-parameters that yielded the highest AAC

From: Prediction of diabetes disease using an ensemble of machine learning multi-classifier models

| MLMs | Optimal-performing preprocessing | Hyper-parameter tuning method | Optimal hyper-parameters | Performance (AAC) |
|------|----------------------------------|-------------------------------|--------------------------|-------------------|
| k-NN | I + N, MRMR = 6 | Grid search | algorithm = auto, leaf_size = 5, n_neighbors = 25, weight = uniform, \({L}_{2}\)-norm (Euclidean) | \(0.971\pm 0.003\) |
| SVM | I + N | Bayesian optimization | C = 1, gamma = 0.1, kernel = RBF, decision_function_shape = OVO | \(0.948\pm 0.003\) |
| DT | I + Z, MRMR = 10 | Bayesian optimization | criterion = gini, bootstrap = True, min_samples_leaf = 1, max_depth = 8, max_features = auto, min_samples_leaf = 2, min_samples_split = 0.2 | \(0.968\pm 0.003\) |
| RF | I + N, MRMR = 6, 8, 10 | Bayesian optimization | criterion = gini, n_estimators = 150, bootstrap = True, min_samples_leaf = 1, max_depth = 8, max_features = sqrt | \(0.988\pm 0.003\) |
| GNB | I + Z + PCA = 12 | Grid search | var_smoothing = 08112 | \(0.926\pm 0.006\) |
| AdaBoost | I + MRMR = 10 | Grid search | boosting algorithm = AdaBoost.MH, n_estimators = 150, learning_rate = 0.1 | \(0.961\pm 0.003\) |
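The hyper-parameter names in Table 4 follow scikit-learn conventions, so the tabulated optima can be plugged directly into the corresponding estimators. Below is a minimal sketch for the k-NN and RF rows; the synthetic data, the variable names `X` and `y`, and the 10-fold cross-validation setup are illustrative assumptions, not the paper's actual pipeline (which applies imputation, normalization, and MRMR feature selection first).

```python
# Minimal sketch: mapping Table 4's optimal hyper-parameters onto scikit-learn
# estimators. Data and CV setup below are assumptions for illustration only.
from sklearn.datasets import make_classification
from sklearn.model_selection import cross_val_score
from sklearn.neighbors import KNeighborsClassifier
from sklearn.ensemble import RandomForestClassifier

# Placeholder data standing in for the preprocessed (imputed, normalized,
# MRMR-selected) diabetes features used in the paper.
X, y = make_classification(n_samples=500, n_features=6, random_state=0)

# k-NN with the tabulated optimum (found via grid search): auto algorithm,
# leaf_size 5, 25 neighbours, uniform weights, Euclidean (L2) distance.
knn = KNeighborsClassifier(
    algorithm="auto", leaf_size=5, n_neighbors=25,
    weights="uniform", metric="euclidean",
)

# RF with the tabulated optimum (found via Bayesian optimization): Gini
# criterion, 150 trees, bootstrapping, max depth 8, sqrt feature subsampling.
rf = RandomForestClassifier(
    criterion="gini", n_estimators=150, bootstrap=True,
    min_samples_leaf=1, max_depth=8, max_features="sqrt", random_state=0,
)

# Report mean accuracy ± standard deviation, matching the format of the
# Performance (AAC) column; the 10-fold split here is an assumption.
for name, model in [("k-NN", knn), ("RF", rf)]:
    scores = cross_val_score(model, X, y, cv=10, scoring="accuracy")
    print(f"{name}: {scores.mean():.3f} ± {scores.std():.3f}")
```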