
Table 6 XGBoost classification algorithm hyperparameters and hyperparameter ranges used in grid-search tuning

From: Predicting chemotherapy response using a variational autoencoder approach

| Hyperparameter name | Hyperparameter description | Hyperparameter range |
| --- | --- | --- |
| n_estimators | Number of trees to fit | (1, 2, 3, \(\ldots\), 40) |
| max_depth | Maximum tree depth | (1, 2, 3, \(\ldots\), 10) |
| learning_rate | Boosting learning rate | (0.05, 0.1, 0.2, 0.4, 0.6, 0.8) |
| min_child_weight | Minimum sum of instance weight needed in a child | (1, 2, 3, \(\ldots\), 10) |
| subsample | Subsample ratio of the training instances | (0.1, 0.2, 0.3, \(\ldots\), 1.0) |
| colsample_bytree | Subsample ratio of columns when constructing each tree | (0.1, 0.2, 0.3, \(\ldots\), 1.0) |
| reg_alpha | Coefficient of L1 regularization on the node weights | (0, 1, 2, 3) |
| reg_lambda | Coefficient of L2 regularization on the node weights | (1, 2, \(\ldots\), 100) |
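For reference, the grid above can be written as a plain Python dictionary of the kind typically passed to scikit-learn's `GridSearchCV` together with an `xgboost.XGBClassifier`. Only the value ranges below are taken from the table; the search wiring itself is not specified there, so treat the surrounding setup as a sketch.

```python
from math import prod

# Hyperparameter grid transcribed from Table 6. The keys match the
# XGBoost parameter names; only the ranges come from the table.
param_grid = {
    "n_estimators": list(range(1, 41)),                      # 1, 2, ..., 40
    "max_depth": list(range(1, 11)),                         # 1, 2, ..., 10
    "learning_rate": [0.05, 0.1, 0.2, 0.4, 0.6, 0.8],
    "min_child_weight": list(range(1, 11)),                  # 1, 2, ..., 10
    "subsample": [round(0.1 * i, 1) for i in range(1, 11)],  # 0.1, ..., 1.0
    "colsample_bytree": [round(0.1 * i, 1) for i in range(1, 11)],
    "reg_alpha": [0, 1, 2, 3],
    "reg_lambda": list(range(1, 101)),                       # 1, 2, ..., 100
}

# Size of the full Cartesian grid (number of hyperparameter combinations).
n_combinations = prod(len(values) for values in param_grid.values())
print(n_combinations)  # 960,000,000 combinations
```

Note that the full Cartesian product is very large (960 million combinations), so in practice an exhaustive grid search over this grid would be tuned one parameter subset at a time or combined with randomized search; the table itself does not state which strategy was used.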