
Scalable Quantitative Verification For Deep Neural Networks

Scalable Quantitative Verification For Deep Neural Networks.
Teodora Baluta, Zheng Leong Chua, Kuldeep S. Meel, and Prateek Saxena.
In Proceedings of the International Conference on Software Engineering (ICSE), May 2021.

Download

[PDF] https://arxiv.org/abs/2002.06864

Abstract

Verifying security properties of deep neural networks (DNNs) is becoming increasingly important. This paper introduces a new quantitative verification framework for DNNs that can decide, with user-specified confidence, whether a given logical property \psi defined over the space of inputs of the given DNN holds for less than a user-specified threshold, \theta. We present new algorithms that are scalable to large real-world models, as well as proven to be sound. Our approach requires only black-box access to the models. Further, it certifies properties of both deterministic and non-deterministic DNNs. We implement our approach in a tool called PROVERO. We apply PROVERO to the problem of certifying adversarial robustness. In this context, PROVERO provides an attack-agnostic measure of robustness for a given DNN and a test input. First, we find that this metric has a strong statistical correlation with perturbation bounds reported by two of the most prominent white-box attack strategies today. Second, we show that PROVERO can quantitatively certify robustness with high confidence in cases where the state-of-the-art qualitative verification tool (ERAN) fails to produce conclusive results. Thus, quantitative verification scales easily to large DNNs.
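
To illustrate the style of black-box statistical certification the abstract describes, here is a minimal Python sketch: it estimates, with confidence 1 - delta, whether the probability that random inputs violate a property is below the threshold theta. This is not PROVERO's actual algorithm (the paper's tests are more sample-efficient); it uses a simple fixed-sample-size Hoeffding bound, and the names quantitative_verify, sample_input, and property_violated are hypothetical placeholders for the user's model interface.

import numpy as np

def quantitative_verify(model, sample_input, property_violated, theta,
                        eta=0.01, delta=0.01, rng=None):
    # Decide whether P[property violated] < theta, up to slack eta,
    # with confidence at least 1 - delta, using only black-box queries.
    # Hoeffding's inequality: n >= ln(2/delta) / (2 * eta^2) samples make
    # the empirical estimate eta-accurate with probability >= 1 - delta.
    rng = rng if rng is not None else np.random.default_rng()
    n = int(np.ceil(np.log(2.0 / delta) / (2.0 * eta ** 2)))
    violations = sum(property_violated(model, sample_input(rng))
                     for _ in range(n))
    p_hat = violations / n
    if p_hat <= theta - eta:
        return "yes"          # violation probability is below theta
    if p_hat >= theta + eta:
        return "no"           # violation probability is at least theta
    return "borderline"       # estimate too close to theta at this eta

For the paper's adversarial-robustness use case, a hypothetical instantiation would draw perturbations uniformly from an L-infinity ball around a test input x0 and flag a violation whenever the predicted label changes:

# sample_input      = lambda rng: np.clip(x0 + rng.uniform(-eps, eps, x0.shape), 0.0, 1.0)
# property_violated = lambda m, x: np.argmax(m(x)) != np.argmax(m(x0))
# quantitative_verify(model, sample_input, property_violated, theta=0.01)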

BibTeX

@inproceedings{BCMS21,
  title = {Scalable Quantitative Verification For Deep Neural Networks},
  author = {Baluta, Teodora and Chua, Zheng Leong and Meel, Kuldeep S. and Saxena, Prateek},
  booktitle = {Proceedings of the International Conference on Software Engineering (ICSE)},
  bib2html_pubtype = {Refereed Conference},
  year = {2021},
  month = may,
  bib2html_rescat = {Formal Methods 4 ML},
  bib2html_dl_pdf = {https://arxiv.org/abs/2002.06864},
  abstract = {Verifying security properties of deep neural networks (DNNs) is becoming increasingly important. This paper introduces a new quantitative verification framework for DNNs that can decide, with user-specified confidence, whether a given logical property {\psi} defined over the space of inputs of the given DNN holds for less than a user-specified threshold, {\theta}. We present new algorithms that are scalable to large real-world models, as well as proven to be sound. Our approach requires only black-box access to the models. Further, it certifies properties of both deterministic and non-deterministic DNNs. We implement our approach in a tool called PROVERO. We apply PROVERO to the problem of certifying adversarial robustness. In this context, PROVERO provides an attack-agnostic measure of robustness for a given DNN and a test input. First, we find that this metric has a strong statistical correlation with perturbation bounds reported by two of the most prominent white-box attack strategies today. Second, we show that PROVERO can quantitatively certify robustness with high confidence in cases where the state-of-the-art qualitative verification tool (ERAN) fails to produce conclusive results. Thus, quantitative verification scales easily to large DNNs.}
}
