Efficient Learning of Interpretable Classification Rules
Bishwamittra Ghosh, Dmitry Malioutov and Kuldeep S. Meel.
Journal of Artificial Intelligence Research (JAIR), 74:1823–1863, September 2022.
Machine learning has become omnipresent, with applications in safety-critical domains such as medicine, law, and transportation. In these domains, the high-stakes decisions produced by machine learning require researchers to design interpretable models, where the prediction is understandable to a human. In interpretable machine learning, rule-based classifiers are particularly effective at representing the decision boundary through a set of rules over the input features. Examples of such classifiers include decision trees, decision lists, and decision sets. The interpretability of rule-based classifiers is generally related to the size of the rules, where smaller rules are considered more interpretable. To learn such a classifier, the brute-force approach is to pose an optimization problem that seeks the smallest classification rule with close to maximum accuracy. This optimization problem is computationally intractable due to its combinatorial nature, and thus does not scale to large datasets. To this end, in this paper we study the triangular relationship among the accuracy, interpretability, and scalability of learning rule-based classifiers. The contribution of this paper is an interpretable learning framework, IMLI, based on maximum satisfiability (MaxSAT), for synthesizing classification rules expressible in propositional logic. IMLI optimizes a joint objective function over the accuracy and the interpretability of classification rules and learns an optimal rule by solving an appropriately designed MaxSAT query. Despite the progress of MaxSAT solving in the last decade, the straightforward MaxSAT-based solution cannot scale to practical classification datasets containing thousands to millions of samples. Therefore, we incorporate an efficient incremental learning technique into the MaxSAT formulation by integrating mini-batch learning and iterative rule-learning.
The resulting framework learns a classifier by iteratively covering the training data, where, in each iteration, it solves a sequence of smaller MaxSAT queries corresponding to each mini-batch. In our experiments, IMLI achieves the best balance among prediction accuracy, interpretability, and scalability. For instance, IMLI attains competitive prediction accuracy and interpretability w.r.t. existing interpretable classifiers and demonstrates impressive scalability on large datasets, where both interpretable and non-interpretable classifiers fail. As an application, we deploy IMLI to learn popular interpretable classifiers such as decision lists and decision sets. The source code is available at https://github.com/meelgroup/mlic.
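The combinatorial optimization the abstract describes as brute force — find the smallest rule attaining close to maximum accuracy — can be sketched in miniature. The dataset and feature encoding below are made up for illustration; a rule is modeled as a single conjunction of binary features, which is a simplification of the formulas IMLI actually learns.

```python
from itertools import combinations

# Toy dataset of binary feature vectors and labels (made up for illustration).
X = [(1, 0, 1), (1, 1, 1), (0, 1, 0), (0, 0, 1), (1, 1, 0)]
y = [1, 1, 0, 0, 1]

def rule_predict(clause, x):
    # A rule is a conjunction over feature indices: it predicts 1 iff
    # every feature in the clause is 1 in the sample.
    return int(all(x[i] for i in clause))

def accuracy(clause):
    return sum(rule_predict(clause, x) == label for x, label in zip(X, y)) / len(y)

# Brute force: enumerate every conjunction from smallest to largest and
# keep the smallest clause attaining the maximum accuracy. The number of
# candidate clauses grows exponentially with the number of features,
# which is why this direct approach does not scale.
n_features = len(X[0])
best_clause, best_acc = None, -1.0
for size in range(n_features + 1):
    for clause in combinations(range(n_features), size):
        acc = accuracy(clause)
        if acc > best_acc:  # strict '>' keeps the smallest clause on ties
            best_clause, best_acc = clause, acc

print(best_clause, best_acc)  # → (0,) 1.0
```

IMLI replaces this enumeration with a MaxSAT query whose soft clauses jointly penalize misclassified samples and rule size, so a MaxSAT solver performs the search instead.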
@article{GMM22,
author={Ghosh, Bishwamittra and Malioutov, Dmitry and Meel, Kuldeep S.},
title={Efficient Learning of Interpretable Classification Rules},
journal={Journal of Artificial Intelligence Research},
volume={74},
pages={1823--1863},
year={2022},
month=sep,
bib2html_rescat={Formal Methods 4 ML},
url={https://doi.org/10.1613/jair.1.13482},
bib2html_pubtype={Journal},
}
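The incremental loop described in the abstract — solve a smaller MaxSAT query per mini-batch and iteratively cover the training data — can also be sketched. The sketch is illustrative only: the dataset is made up, and a brute-force enumerator stands in for the MaxSAT solver that IMLI actually invokes per batch.

```python
from itertools import combinations

# Toy binary dataset (made up for illustration).
X = [(1, 0, 1), (1, 1, 1), (0, 1, 0), (0, 0, 1), (1, 1, 0), (0, 1, 1)]
y = [1, 1, 0, 0, 1, 0]

def fires(clause, x):
    # A clause (conjunction of feature indices) fires iff all its features are 1.
    return all(x[i] for i in clause)

def learn_clause(batch):
    # Stand-in for one MaxSAT query on a mini-batch: return the smallest
    # conjunction with the highest accuracy on that batch.
    n = len(batch[0][0])
    best, best_acc = (), -1.0
    for size in range(n + 1):
        for clause in combinations(range(n), size):
            acc = sum(fires(clause, x) == bool(lab) for x, lab in batch) / len(batch)
            if acc > best_acc:  # strict '>' prefers the smaller clause on ties
                best, best_acc = clause, acc
    return best

def learn_decision_set(X, y, batch_size=3, max_rules=4):
    # Iterative rule-learning: each round learns one clause from a
    # mini-batch of the still-uncovered data, then drops the positive
    # samples that clause covers. The learned clauses form a decision
    # set: predict 1 iff any clause fires.
    data = list(zip(X, y))
    rules = []
    for _ in range(max_rules):
        if not any(lab for _, lab in data):
            break  # every positive sample is covered
        batch = data[:batch_size]
        rules.append(learn_clause(batch))
        data = [(x, lab) for x, lab in data if not (lab and fires(rules[-1], x))]
    return rules

rules = learn_decision_set(X, y)
print(rules)  # → [(0,)]
```

On this toy data a single clause covers all positives, so the loop stops after one iteration; on real datasets, each iteration solves a much smaller query than one monolithic encoding of the full training set would require.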
Generated by bib2html.pl (written by Patrick Riley with layout from Sanjit A. Seshia) on Tue Apr 28, 2026 01:27:21