TY - JOUR
T1 - On Computing Probabilistic Explanations for Decision Trees
AU - Arenas, Marcelo
AU - Barceló, Pablo
AU - Kozachinskiy, Alexander
AU - Romero, Miguel
AU - Subercaseaux, Bernardo
N1 - Publisher Copyright:
© 2025 Copyright held by the owner/author(s).
PY - 2025
Y1 - 2025
N2 - Formal XAI (explainable AI) is a growing area that focuses on computing explanations with mathematical guarantees for the decisions made by ML models. Inside formal XAI, one of the most studied cases is that of explaining the choices taken by decision trees, as they are traditionally deemed as one of the most interpretable classes of models. Recent work has focused on studying the computation of sufficient reasons, a kind of explanation in which, given a decision tree T and an instance x, one explains the decision T(x) by providing a subset y of the features of x such that for any other instance z compatible with y, it holds that T(z) = T(x), intuitively meaning that the features in y are already enough to fully justify the classification of x by T. It has been argued, however, that sufficient reasons constitute a restrictive notion of explanation. For such a reason, the community has started to study their probabilistic counterpart, in which one requires that the probability of T(z) = T(x) must be at least some value δ ∈ (0,1], where z is a random instance that is compatible with y. Our paper settles the computational complexity of δ-sufficient-reasons over decision trees, showing that both (1) finding δ-sufficient-reasons that are minimal in size, and (2) finding δ-sufficient-reasons that are minimal inclusion-wise, are computationally intractable. By doing this, we answer two open problems originally raised by Izza et al. (2021), and extend the hardness of explanations for Boolean circuits presented by Wäldchen et al. (2021) to the more restricted case of decision trees. Furthermore, we present sharp non-approximability results under a widely believed complexity hypothesis. On the positive side, we identify structural restrictions of decision trees that make the problem tractable.
AB - Formal XAI (explainable AI) is a growing area that focuses on computing explanations with mathematical guarantees for the decisions made by ML models. Inside formal XAI, one of the most studied cases is that of explaining the choices taken by decision trees, as they are traditionally deemed as one of the most interpretable classes of models. Recent work has focused on studying the computation of sufficient reasons, a kind of explanation in which, given a decision tree T and an instance x, one explains the decision T(x) by providing a subset y of the features of x such that for any other instance z compatible with y, it holds that T(z) = T(x), intuitively meaning that the features in y are already enough to fully justify the classification of x by T. It has been argued, however, that sufficient reasons constitute a restrictive notion of explanation. For such a reason, the community has started to study their probabilistic counterpart, in which one requires that the probability of T(z) = T(x) must be at least some value δ ∈ (0,1], where z is a random instance that is compatible with y. Our paper settles the computational complexity of δ-sufficient-reasons over decision trees, showing that both (1) finding δ-sufficient-reasons that are minimal in size, and (2) finding δ-sufficient-reasons that are minimal inclusion-wise, are computationally intractable. By doing this, we answer two open problems originally raised by Izza et al. (2021), and extend the hardness of explanations for Boolean circuits presented by Wäldchen et al. (2021) to the more restricted case of decision trees. Furthermore, we present sharp non-approximability results under a widely believed complexity hypothesis. On the positive side, we identify structural restrictions of decision trees that make the problem tractable.
KW - decision trees
KW - machine learning
KW - mathematical foundations
UR - https://www.scopus.com/pages/publications/105019311819
U2 - 10.1613/jair.1.17494
DO - 10.1613/jair.1.17494
M3 - Article
AN - SCOPUS:105019311819
SN - 1076-9757
VL - 83
JO - Journal of Artificial Intelligence Research
JF - Journal of Artificial Intelligence Research
M1 - 34
ER -