Decision tree ensembles based on kernel features

Research output: Contribution to journal › Article › peer-review

15 Citations (Scopus)

Abstract

A classifier ensemble is a set of classifiers whose individual decisions are combined to classify new examples. Classifiers that can represent complex decision boundaries are accurate. Kernel functions can also represent complex decision boundaries. In this paper, we study the usefulness of kernel features for decision tree ensembles, as they can improve the representational power of the individual classifiers. We first propose decision tree ensembles based on kernel features and find that the performance of these ensembles depends strongly on the kernel parameters: the selected kernel and the dimension of the kernel feature space. To overcome this problem, we present another approach to creating ensembles that combines existing ensemble methods with the kernel machine philosophy. In this approach, kernel features are created and concatenated with the original features, and the classifiers of an ensemble are trained on these extended feature spaces. Experimental results suggest that this approach is quite robust to the selection of parameters. Experiments also show that different ensemble methods (Random Subspace, Bagging, AdaBoost.M1, and Random Forests) can be improved by using this approach.

Original language: English
Pages (from-to): 855-869
Number of pages: 15
Journal: Applied Intelligence
Volume: 41
Issue number: 3
DOIs
Publication status: Published - Sep 18 2014
Externally published: Yes

Keywords

  • AdaBoost.M1
  • Bagging
  • Classifier ensembles
  • Decision trees
  • Kernel features
  • Random forests
  • Random subspaces

ASJC Scopus subject areas

  • Artificial Intelligence
