From: Identifying tweets of personal health experience through word embedding and LSTM neural network
Classifier | Parameter Settings |
---|---|
Logistic Regression | penalty='l2', tol=1e-4, C=1.0, solver='liblinear', max_iter=100 |
Decision Tree (J48) | criterion='entropy', max_depth=30, min_samples_split=2, min_samples_leaf=1 |
KNN | n_neighbors=1, p=2, metric='minkowski', algorithm='auto' |
SVM | C=1.0, kernel='rbf', tol=1e-4, gamma=0.001 |
BoW + Logistic Regr. | C=1000, random_state=0 |
Word Embedding + LSTM | LSTM layer input and output dimensions: 128; L2 regularization with coefficient 0.01; 30% of the training set held out as a validation set; class weight 6547/2650 for the PET class and 2650/6547 for the non-PET class |
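The parameter names in the table match scikit-learn's API, so the baseline classifiers above can be instantiated as sketched below. This is an illustrative reconstruction, not code from the paper: the paper does not name its toolkit, and the `class_weight` dict for the LSTM assumes a Keras-style label-to-weight mapping, with class sizes (2650 PET vs. 6547 non-PET tweets) inferred from the inverse-frequency ratios in the table.

```python
# Sketch: the table's baseline classifiers instantiated in scikit-learn
# (assumed library; parameter names in the table match its API).
from sklearn.linear_model import LogisticRegression
from sklearn.neighbors import KNeighborsClassifier
from sklearn.svm import SVC
from sklearn.tree import DecisionTreeClassifier

classifiers = {
    "Logistic Regression": LogisticRegression(
        penalty="l2", tol=1e-4, C=1.0, solver="liblinear", max_iter=100
    ),
    "Decision Tree (J48)": DecisionTreeClassifier(
        criterion="entropy", max_depth=30, min_samples_split=2, min_samples_leaf=1
    ),
    "KNN": KNeighborsClassifier(
        n_neighbors=1, p=2, metric="minkowski", algorithm="auto"
    ),
    "SVM": SVC(C=1.0, kernel="rbf", tol=1e-4, gamma=0.001),
    "BoW + Logistic Regr.": LogisticRegression(C=1000, random_state=0),
}

# Inverse-frequency class weights for the LSTM, as given in the table
# (Keras-style dict is an assumption; counts inferred from the ratios).
class_weight = {"PET": 6547 / 2650, "non-PET": 2650 / 6547}

for name, clf in classifiers.items():
    print(name, clf.get_params())
```

With a `class_weight` of roughly 2.47, each misclassified PET tweet contributes about 2.47 times as much to the loss as a misclassified non-PET tweet, counteracting the class imbalance.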