
Quantitative Verification of Neural Networks And its Security Applications

Quantitative Verification of Neural Networks And its Security Applications.
Teodora Baluta, Shiqi Shen, Shweta Shinde, Kuldeep S. Meel and Prateek Saxena.
In Proceedings of ACM Conference on Computer and Communications Security (CCS), November 2019.

Download

[PDF] 

Abstract

Neural networks are increasingly employed in safety-critical domains. This has prompted interest in verifying or certifying logically encoded properties of neural networks. Prior work has largely focused on checking existential properties, wherein the goal is to check whether there exists any input that violates a given property of interest. However, neural network training is a stochastic process, and many questions arising in their analysis require probabilistic and quantitative reasoning, i.e., estimating how many inputs satisfy a given property. To this end, our paper proposes a novel and principled framework for quantitative verification of logical properties specified over neural networks. Our framework is the first to provide PAC-style soundness guarantees, in that its quantitative estimates are within a controllable and bounded error from the true count. We instantiate our algorithmic framework by building a prototype tool called NPAQ that enables checking rich properties over binarized neural networks. We show how emerging security analyses can utilize our framework in 3 concrete point applications: quantifying robustness to adversarial inputs, efficacy of trojan attacks, and fairness/bias of given neural networks.
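To make the notion of a PAC-style (epsilon, delta) quantitative estimate concrete, here is a minimal sketch of a naive sampling baseline. This is not NPAQ's algorithm (the paper reduces the problem to approximate model counting over binarized networks); it only illustrates the guarantee the abstract refers to, using a Hoeffding bound on a toy sign-threshold "network" that stands in for the real property check.

```python
import math
import random

def pac_estimate(property_holds, sample_input, eps=0.05, delta=0.01, seed=0):
    """Estimate the fraction of inputs satisfying a property.

    Hoeffding-style guarantee: with probability >= 1 - delta, the
    returned estimate is within eps (additive) of the true fraction.
    """
    rng = random.Random(seed)
    # Hoeffding bound: n >= ln(2/delta) / (2 * eps^2) samples suffice.
    n = math.ceil(math.log(2 / delta) / (2 * eps ** 2))
    hits = sum(1 for _ in range(n) if property_holds(sample_input(rng)))
    return hits / n

# Toy stand-in "network" (hypothetical, for illustration only):
# sign of a weighted sum over {0,1}^8 inputs.
W = [3, -2, 5, -1, 2, -4, 1, -3]

def net(x):
    return sum(w * xi for w, xi in zip(W, x)) >= 0

def not_robust(x):
    # Crude stand-in for adversarial non-robustness: does flipping
    # any single input bit change the network's decision?
    flips = [x[:i] + (1 - x[i],) + x[i + 1:] for i in range(len(x))]
    return any(net(y) != net(x) for y in flips)

def rand_input(rng):
    return tuple(rng.randint(0, 1) for _ in range(8))

estimate = pac_estimate(not_robust, rand_input, eps=0.05, delta=0.01)
```

Sampling alone scales poorly when the satisfying set is rare; that is one motivation for the model-counting approach the paper takes instead.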

BibTeX

@inproceedings{BSSMS19,
  title={
    Quantitative Verification of Neural Networks And its Security Applications
  },
  author={
    Baluta, Teodora and Shen, Shiqi and Shinde, Shweta and Meel, Kuldeep S. and
    Saxena, Prateek
  },
  booktitle=CCS,
  year={2019},
  month=nov,
  bib2html_dl_pdf={http://arxiv.org/abs/1906.10395},
  code={https://teobaluta.github.io/npaq/},
  bib2html_pubtype={Refereed Conference},
  bib2html_rescat={Formal Methods 4 ML},
  abstract={
    Neural networks are increasingly employed in safety-critical domains. This
    has prompted interest in verifying or certifying logically encoded
    properties of neural networks. Prior work has largely focused on checking
    existential properties, wherein the goal is to check whether there exists
    any input that violates a given property of interest. However, neural
    network training is a stochastic process, and many questions arising in
    their analysis require probabilistic and quantitative reasoning, i.e.,
    estimating how many inputs satisfy a given property. To this end, our paper
    proposes a novel and principled framework for quantitative verification of
    logical properties specified over neural networks. Our framework is the
    first to provide PAC-style soundness guarantees, in that its quantitative
    estimates are within a controllable and bounded error from the true count.
    We instantiate our algorithmic framework by building a prototype tool called
    NPAQ that enables checking rich properties over binarized neural networks.
    We show how emerging security analyses can utilize our framework in 3
    concrete point applications: quantifying robustness to adversarial inputs,
    efficacy of trojan attacks, and fairness/bias of given neural networks.
  },
}

Generated by bib2html.pl (written by Patrick Riley with layout from Sanjit A. Seshia) on Tue Apr 28, 2026 01:27:21