The influence of dimensions on the complexity of computing decision trees

A decision tree recursively splits a feature space R^d and then assigns class labels based on the resulting partition. Decision trees have been part of the basic machine-learning toolkit for decades. A large body of work considers heuristic algorithms that compute a decision tree from training data, usually aiming in particular to minimize the size of the resulting tree. In contrast, little is known about the complexity of the underlying computational problem of computing a minimum-size tree for the given training data. We study this problem with respect to the number d of dimensions of the feature space R^d, which contains n training examples. We show that it can be solved in O(n^(2d+1)) time, but under reasonable complexity-theoretic assumptions it is not possible to achieve f(d) · n^(o(d / log d)) running time. The problem is solvable in (dR)^(O(dR)) · n^(1+o(1)) time if there are exactly two classes and R is an upper bound on the number of tree leaves labeled with the first class.
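To make the object of study concrete, the following is a minimal sketch (not the paper's algorithm) of a decision tree over R^2, where internal nodes are axis-aligned splits and tree size is measured by the number of leaves; the tree structure and helper names here are illustrative assumptions:

```python
# Illustrative sketch: a decision tree over the feature space R^2.
# Internal nodes are tuples (dim, thr, left, right) meaning "go left if
# x[dim] <= thr, else right"; leaves are class labels. This representation
# and the helper names are assumptions for illustration only.

def classify(tree, x):
    """Follow axis-aligned splits until a leaf (a class label) is reached."""
    while isinstance(tree, tuple):
        dim, thr, left, right = tree
        tree = left if x[dim] <= thr else right
    return tree

def count_leaves(tree):
    """Size of the tree, measured (as in the paper) by its number of leaves."""
    if not isinstance(tree, tuple):
        return 1
    _, _, left, right = tree
    return count_leaves(left) + count_leaves(right)

# A 3-leaf tree: split on dimension 0 at 0.5; in the right subtree,
# split on dimension 1 at 0.25.
tree = (0, 0.5, "A", (1, 0.25, "B", "A"))

print(classify(tree, (0.3, 0.9)))  # "A": x[0] <= 0.5
print(classify(tree, (0.8, 0.1)))  # "B": x[0] > 0.5 and x[1] <= 0.25
print(count_leaves(tree))          # 3
```

With two classes ("A" and "B"), the parameter R from the abstract would bound the number of leaves labeled with the first class; the tree above has one "B"-leaf.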

Decision trees; Parameterized complexity; Machine learning

Stephen Kobourov, Maarten Loeffler, Fabrizio Montecchiani, Marcin Pilipczuk, Ignaz Rutter, Raimund Seidel, Manuel Sorge, Jules Wulms


Technical University Munich, Department of Computer Science, Germany

Utrecht University, Department of Information and Computing Sciences, the Netherlands

University of Perugia, Department of Engineering, Italy

University of Warsaw, Faculty of Mathematics, Informatics, and Mechanics, Poland

University of Passau, Faculty of Computer Science and Mathematics, Germany

Saarland University, Department of Computer Science, Germany

TU Wien, Institute of Logic and Computation, Austria


2025

Artificial intelligence

SCI
ISSN: 0004-3702
Year, Volume (Issue): 2025, 343 (Jun.)