
Figure 1 | BMC Bioinformatics

From: A comparison of random forest and its Gini importance with standard chemometric methods for the feature selection and classification of spectral data

Decision trees separating two classes: a classification problem with uncorrelated features (left) and a distorted version produced by an additive noise process (right). This process induces correlation by adding the same random value to both features, mimicking the acquisition process of many absorption, reflectance, or resonance spectra (see the Methods section). Growing orthogonal (axis-aligned) decision trees on such a data set, shown on the right, yields deeply nested trees with complex decision boundaries. (Neither tree is grown to full depth, for visualization purposes.)
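The correlation-inducing effect described in the caption can be illustrated numerically. The following is a minimal sketch, not taken from the article: it assumes Gaussian features and a Gaussian shared offset (standing in for a baseline shift common to all channels of a spectrum) and measures the sample correlation before and after the additive noise is applied.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 5000

# Two independent ("uncorrelated") features
x1 = rng.normal(size=n)
x2 = rng.normal(size=n)

# Shared additive offset, mimicking a common baseline shift
# across channels during spectral acquisition (hypothetical scale)
s = rng.normal(scale=2.0, size=n)
y1, y2 = x1 + s, x2 + s

r_before = np.corrcoef(x1, x2)[0, 1]
r_after = np.corrcoef(y1, y2)[0, 1]
print(f"correlation before: {r_before:.3f}, after: {r_after:.3f}")
```

With these variances the expected post-noise correlation is var(s) / (var(s) + var(x)) = 4/5, so the features become strongly correlated even though the class-relevant signal is unchanged; axis-aligned splits must then approximate a diagonal boundary with many nested cuts.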
