
On Testing of Samplers

On Testing of Samplers.
Kuldeep S. Meel ⓡ Yash Pote ⓡ Sourav Chakraborty.
In Proceedings of Advances in Neural Information Processing Systems (NeurIPS), December 2020.


Abstract

Given a set of items F and a weight function W, the problem of sampling seeks to sample an item proportional to its weight. Sampling is a fundamental problem in machine learning. The daunting computational complexity of sampling with formal guarantees leads designers to propose heuristics-based techniques for which no rigorous theoretical analysis exists to quantify the quality of generated distributions. This poses a challenge in designing a testing methodology to test whether a sampler under test generates samples according to a given distribution. Only recently, Chakraborty and Meel (2019) designed the first scalable verifier, called Barbarik1, for samplers in the special case when the weight function W is constant, that is, when the sampler is supposed to sample uniformly from F. The techniques in Barbarik1, however, fail to handle general weight functions. The primary contribution of this paper is an affirmative answer to the above challenge: motivated by Barbarik1 but using different techniques and analysis, we design Barbarik2, an algorithm to test whether the distribution generated by a sampler is $\varepsilon$-close or $\eta$-far from any target distribution. In contrast to black-box sampling techniques that require a number of samples proportional to |F|, Barbarik2 requires only $\mathcal{O}(\mathrm{tilt}(W,\varphi)^2/\eta(\eta-6\varepsilon)^3)$ samples, where the tilt is the maximum ratio of weights of two satisfying assignments. Barbarik2 can handle any arbitrary weight function. We present a prototype implementation of Barbarik2 and use it to test three state-of-the-art samplers.
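To make the sample-complexity claim concrete, the following minimal sketch evaluates the bound $\mathcal{O}(\mathrm{tilt}(W,\varphi)^2/\eta(\eta-6\varepsilon)^3)$ quoted in the abstract, ignoring the hidden constant. The grouping of the denominator follows the abstract's formula as written, and all parameter values in the example are hypothetical, not taken from the paper.

```python
def barbarik2_sample_bound(tilt: float, eta: float, epsilon: float) -> float:
    """Big-O sample bound for Barbarik2, with the constant factor omitted.

    tilt    -- maximum ratio of weights of two satisfying assignments
    eta     -- farness parameter
    epsilon -- closeness parameter (the bound requires eta > 6*epsilon)
    """
    if eta <= 6 * epsilon:
        raise ValueError("bound requires eta > 6*epsilon")
    return tilt**2 / (eta * (eta - 6 * epsilon) ** 3)

# Hypothetical example: uniform weights (tilt = 1), eta = 0.9, epsilon = 0.1.
# Note the bound is independent of |F|, unlike black-box testing.
print(barbarik2_sample_bound(tilt=1.0, eta=0.9, epsilon=0.1))
```

The key point the bound illustrates is that the number of samples depends on the tilt and the gap between $\eta$ and $6\varepsilon$, but not on the size of the item set F.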

BibTeX

@inproceedings{MPC20,
  author    = { Meel, Kuldeep S. and
                Pote, Yash and
                Chakraborty, Sourav},
  title     = {On Testing of Samplers},
  booktitle = NIPS,
  year      = {2020},
  month     = dec,
  nameorder = {random},
  bib2html_rescat = {Distribution Testing},
  url       = {https://arxiv.org/abs/2010.12918},
  bib2html_pubtype = {Refereed Conference},
  abstract = { Given a set of items F and a weight function W, the problem of sampling seeks to sample an item proportional to its weight. Sampling is a fundamental problem in machine learning. The daunting computational complexity of sampling with formal guarantees leads designers to propose heuristics-based techniques for which no rigorous theoretical analysis exists to quantify the quality of generated distributions.
This poses a challenge in designing a testing methodology to test whether a sampler under test generates samples according to a given distribution. Only recently, Chakraborty and Meel (2019) designed the first scalable verifier, called Barbarik1, for samplers in the special case when the weight function W is constant, that is, when the sampler is supposed to sample uniformly from F. The techniques in Barbarik1, however, fail to handle general weight functions.
The primary contribution of this paper is an affirmative answer to the above challenge: motivated by Barbarik1 but using different techniques and analysis, we design Barbarik2, an algorithm to test whether the distribution generated by a sampler is $\varepsilon$-close or $\eta$-far from any target distribution. In contrast to black-box sampling techniques that require a number of samples proportional to |F|, Barbarik2 requires only $\mathcal{O}(\mathrm{tilt}(W,\varphi)^2/\eta(\eta-6\varepsilon)^3)$ samples, where the tilt is the maximum ratio of weights of two satisfying assignments. Barbarik2 can handle any arbitrary weight function. We present a prototype implementation of Barbarik2 and use it to test three state-of-the-art samplers. },
}

Generated by bib2html.pl (written by Patrick Riley with layout from Sanjit A. Seshia ) on Sun Apr 14, 2024 11:15:51