
Equivalence Testing: The Power of Bounded Adaptivity

Equivalence Testing: The Power of Bounded Adaptivity.
Diptarka Chakraborty, Sourav Chakraborty, Gunjan Kumar and Kuldeep S. Meel.
In Proceedings of the International Conference on Artificial Intelligence and Statistics (AISTATS), April 2024.

Download

[PDF] 

Abstract

Equivalence testing, a fundamental problem in the field of distribution testing, seeks to infer if two unknown distributions on $[n]$ are the same or far apart in the total variation distance. Conditional sampling has emerged as a powerful query model and has been investigated by theoreticians and practitioners alike, leading to the design of optimal algorithms albeit in a sequential setting (also referred to as adaptive tester). Given the profound impact of parallel computing over the past decades, there has been a strong desire to design algorithms that enable high parallelization. Despite significant algorithmic advancements over the last decade, parallelizable techniques (also termed non-adaptive testers) have $\Tilde{O}(\log^{12}n)$ query complexity, a prohibitively large complexity to be of practical usage. Therefore, the primary challenge is whether it is possible to design algorithms that enable high parallelization while achieving efficient query complexity. Our work provides an affirmative answer to the aforementioned challenge: we present a highly parallelizable tester with a query complexity of $\Tilde{O}(\log n)$, achieved through a single round of adaptivity, marking a significant stride towards harmonizing parallelizability and efficiency in equivalence testing.
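
To make the setup concrete, the following is a minimal illustrative sketch (in Python, not taken from the paper) of the two ingredients the abstract refers to: the total variation distance between two distributions on $[n]$, and a conditional-sampling oracle that, given a subset S of the domain, returns a draw from the distribution restricted to S. The representation (explicit probability vectors) and the function names are assumptions made purely for illustration; the paper's tester only accesses the distributions through such conditional queries.

    # Illustrative sketch only; not the algorithm from the paper.
    import random

    def total_variation(p, q):
        # p, q: probability vectors over the domain [n].
        # TV distance = (1/2) * sum_i |p_i - q_i|.
        return 0.5 * sum(abs(pi - qi) for pi, qi in zip(p, q))

    def cond_sample(p, S):
        # Conditional-sampling oracle: draw one element of the subset S,
        # with probability proportional to p restricted to S.
        S = list(S)
        weights = [p[i] for i in S]
        return random.choices(S, weights=weights, k=1)[0]

A non-adaptive (parallelizable) tester must fix all such query sets S in advance, whereas an adaptive tester may choose each S based on earlier answers; the paper's result uses a single round of adaptivity.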

BibTeX

@inproceedings{CCKM24,
	author={Chakraborty, Diptarka and Chakraborty, Sourav and Kumar, Gunjan and Meel, Kuldeep S.},
	title={Equivalence Testing: The Power of Bounded Adaptivity},
	abstract={  Equivalence testing, a fundamental problem 
	in the field of distribution testing, 
	seeks to infer if two unknown distributions
	on $[n]$ are the same or far apart in the
	total variation distance. Conditional
	sampling has emerged as a powerful query
	model and has been investigated by
	theoreticians and practitioners alike,
	leading to the design of optimal algorithms
	albeit in a sequential setting (also
	referred to as adaptive tester). 
	Given the profound impact of parallel
	computing over the past decades, there has been a
	strong desire to design algorithms that
	enable high parallelization. Despite
	significant algorithmic advancements over
	the last decade, parallelizable techniques
	(also termed non-adaptive testers) have
	$\Tilde{O}(\log^{12}n)$ query complexity, a
	prohibitively large complexity to be of
	practical usage. 
	Therefore, the primary challenge is whether
	it is possible to design algorithms that
	enable high parallelization while achieving
	efficient query complexity. 
	Our work provides an affirmative answer to
	the aforementioned challenge: we present a
	highly parallelizable tester with a query
	complexity of $\Tilde{O}(\log n)$, achieved
	through a single round of adaptivity,
	marking a significant stride towards
	harmonizing parallelizability and
	efficiency in equivalence testing.},
	year={2024},
	month=apr,
	booktitle=AISTATS,
	bib2html_pubtype={Refereed Conference},
	bib2html_rescat={Distribution Testing},
	bib2html_dl_pdf={https://arxiv.org/pdf/2403.04230.pdf},
}
