Table 3 Parameter settings of classifiers

From: Identifying tweets of personal health experience through word embedding and LSTM neural network

Classifier: Parameter settings

Logistic Regression: penalty = 'l2', tol = 1e-4, C = 1.0, solver = 'liblinear', max_iter = 100
Decision Tree (J48): criterion = 'entropy', max_depth = 30, min_samples_split = 2, min_samples_leaf = 1
KNN: n_neighbors = 1, p = 2, metric = 'minkowski', algorithm = 'auto'
SVM: C = 1.0, kernel = 'rbf', tol = 1e-4, gamma = 0.001
BoW + Logistic Regression: C = 1000, random_state = 0
Word Embedding + LSTM: LSTM layer input and output dimensions of 128; L2 regularization on the LSTM layer with coefficient 0.01; 30% of the training set held out as a validation set; class weight of 6547/2650 for the PET class and 2650/6547 for the non-PET class
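As a rough illustration (not the authors' released code), the baseline classifiers in the table could be instantiated with scikit-learn as follows; feature extraction and data handling are omitted.

```python
# Minimal sketch of the Table 3 baseline classifiers using scikit-learn.
from sklearn.linear_model import LogisticRegression
from sklearn.tree import DecisionTreeClassifier
from sklearn.neighbors import KNeighborsClassifier
from sklearn.svm import SVC

log_reg = LogisticRegression(penalty='l2', tol=1e-4, C=1.0,
                             solver='liblinear', max_iter=100)

decision_tree = DecisionTreeClassifier(criterion='entropy', max_depth=30,
                                       min_samples_split=2, min_samples_leaf=1)

knn = KNeighborsClassifier(n_neighbors=1, p=2, metric='minkowski',
                           algorithm='auto')

svm = SVC(C=1.0, kernel='rbf', tol=1e-4, gamma=0.001)

# "BoW + Logistic Regression" row: same estimator family with a larger C
bow_log_reg = LogisticRegression(C=1000, random_state=0)
```

The "Word Embedding + LSTM" row could be sketched in Keras along the lines below. The 128-dimensional LSTM, the L2 coefficient of 0.01, the 30% validation split, and the class weights come from the table; the vocabulary size, optimizer, loss, and output layer are assumptions made only for illustration.

```python
# Hedged sketch of the Word Embedding + LSTM setting (assumptions marked).
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Embedding, LSTM, Dense
from tensorflow.keras.regularizers import l2

vocab_size = 20000  # assumed vocabulary size; not specified in Table 3

model = Sequential([
    Embedding(input_dim=vocab_size, output_dim=128),  # 128-dim word embeddings
    LSTM(128, kernel_regularizer=l2(0.01)),           # 128 units, L2 coefficient 0.01
    Dense(1, activation="sigmoid"),                    # binary PET / non-PET output (assumed)
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])

# Class weights from Table 3 (label 1 = PET, label 0 = non-PET)
class_weight = {1: 6547 / 2650, 0: 2650 / 6547}

# 30% of the training data is held out for validation during fitting, e.g.:
# model.fit(X_train, y_train, validation_split=0.3, class_weight=class_weight)
```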