
IMLI: An Incremental Framework for MaxSAT-Based Learning of Interpretable Classification Rules

IMLI: An Incremental Framework for MaxSAT-Based Learning of Interpretable Classification Rules.
Bishwamittra Ghosh, and Kuldeep S. Meel.
In Proceedings of the AAAI/ACM Conference on AI, Ethics, and Society (AIES), January 2019.

Download

[PDF] 

Abstract

The wide adoption of machine learning in critical domains such as medical diagnosis, law, and education has propelled the need for interpretable techniques, since end users must be able to understand the reasoning behind decisions made by learning systems. The computational intractability of interpretable learning has led practitioners to design heuristic techniques, which fail to provide sound handles to trade off accuracy and interpretability. Motivated by the success of MaxSAT solvers over the past decade, a MaxSAT-based approach, called MLIC, was recently proposed that reduces the problem of learning interpretable rules expressed in Conjunctive Normal Form (CNF) to a MaxSAT query. While MLIC was shown to achieve accuracy similar to that of state-of-the-art black-box classifiers while generating small interpretable CNF formulas, its runtime performance lags significantly and renders the approach unusable in practice. In this context, the authors raised the question: is it possible to achieve the best of both worlds, i.e., a sound framework for interpretable learning that can take advantage of MaxSAT solvers while scaling to real-world instances? In this paper, we take a step towards answering the above question in the affirmative. We propose an incremental approach to the MaxSAT-based framework that achieves scalable runtime performance via a partition-based training methodology. Extensive experiments on benchmarks from the UCI repository demonstrate that IMLI achieves up to three orders of magnitude runtime improvement without loss of accuracy or interpretability.
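The abstract describes two ideas: encoding rule learning as a MaxSAT query whose soft clauses trade off classification error against rule size, and partitioning the training data so that a sequence of smaller queries is solved incrementally, each seeded by the previous rule. The sketch below illustrates both at toy scale; it is not the authors' implementation — the real IMLI hands a CNF encoding to an off-the-shelf MaxSAT solver, whereas here a brute-force search over candidate single-clause (OR) rules stands in for the solver, and all function names and weights are illustrative.

```python
from itertools import product

def learn_or_rule(examples, n_features, sparsity_weight=0.1,
                  prev_rule=None, agree_weight=0.05):
    """Toy stand-in for one MaxSAT query: choose which features to include
    in a single OR-clause rule so that the total soft-clause penalty
    (misclassifications + sparsity + disagreement with the previous rule)
    is minimised. A real solver searches a CNF encoding; we brute-force
    the 2^n_features candidate rules instead."""
    best_cost, best_rule = float("inf"), None
    for incl in product((0, 1), repeat=n_features):
        # soft unit clauses penalising each included feature (sparsity)
        cost = sparsity_weight * sum(incl)
        # incrementality: soft clauses favouring the previous partition's rule
        if prev_rule is not None:
            cost += agree_weight * sum(a != b for a, b in zip(incl, prev_rule))
        # one soft "noise" clause per training example
        for x, label in examples:
            pred = any(b and xi for b, xi in zip(incl, x))
            if pred != bool(label):
                cost += 1.0
        if cost < best_cost:
            best_cost, best_rule = cost, incl
    return best_rule, best_cost

def incremental_learn(examples, n_features, n_partitions=2):
    """Partition-based training: solve one small query per partition,
    seeding each query with the rule learned so far."""
    partitions = [examples[i::n_partitions] for i in range(n_partitions)]
    rule = None
    for part in partitions:
        rule, _ = learn_or_rule(part, n_features, prev_rule=rule)
    return rule

# toy data: the label is True exactly when feature 0 is set
data = [((1, 0, 0), True), ((1, 1, 0), True),
        ((0, 1, 0), False), ((0, 0, 1), False)]
print(incremental_learn(data, n_features=3))  # -> (1, 0, 0): the rule "feature 0"
```

Each partition's query stays small because only that partition's examples contribute noise clauses; the agreement clauses are what makes the process incremental rather than a sequence of independent problems.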

BibTeX

@inproceedings{GM19,
author={Ghosh, Bishwamittra and Meel, Kuldeep S.},
title={IMLI: An Incremental Framework for MaxSAT-Based Learning of Interpretable Classification Rules},
bib2html_dl_pdf={../Papers/aies19-gm.pdf},
code={https://github.com/meelgroup/mlic},
booktitle=AIES,
month=jan,
year={2019},
bib2html_rescat={Formal Methods 4 ML},	
bib2html_pubtype={Refereed Conference},
abstract={The wide adoption of machine learning in critical domains such as medical diagnosis, law, and education has propelled the need for interpretable techniques, since end users must be able to understand the reasoning behind decisions made by learning systems. The computational intractability of interpretable learning has led practitioners to design heuristic techniques, which fail to provide sound handles to trade off accuracy and interpretability.
Motivated by the success of MaxSAT solvers over the past decade, a MaxSAT-based approach, called MLIC, was recently proposed that reduces the problem of learning interpretable rules expressed in Conjunctive Normal Form (CNF) to a MaxSAT query. While MLIC was shown to achieve accuracy similar to that of state-of-the-art black-box classifiers while generating small interpretable CNF formulas, its
runtime performance lags significantly and renders the approach unusable in practice. In this context, the authors raised the question: is it possible to achieve the best of both worlds, i.e., a sound framework for interpretable learning that can take advantage of MaxSAT solvers while scaling to real-world instances?
In this paper, we take a step towards answering the above question in the affirmative. We propose an incremental approach to the MaxSAT-based framework that achieves scalable runtime performance via a partition-based training methodology. Extensive experiments on benchmarks from the UCI repository demonstrate that IMLI achieves up to three orders of magnitude runtime improvement without loss of accuracy or interpretability.},
} 

Generated by bib2html.pl (written by Patrick Riley with layout from Sanjit A. Seshia) on Sun Apr 14, 2024 11:15:51