Modern AI systems differ significantly from traditional systems in their reliance on probabilistic reasoning. We focus on one core component of probabilistic reasoning: sampling from a discrete distribution specified by a probabilistic model. The widespread use of heuristics in sampling-based techniques creates a gap between theory and practice: in theory, heuristics nullify guarantees, while in practice they seem to work well on problems arising from real-world instances. Statistical tests are often employed to argue for the quality of the generated distributions, but such tests are usually performed on a tiny number of samples, for which no theoretical accuracy guarantees exist. In contrast to testing deterministic programs, where a single trace suffices to prove the existence of a bug, one sample is typically not sufficient to prove a sampler's non-conformity to the desired distribution.
This raises the question: is it possible to design a testing methodology that checks whether a sampler under test generates samples close to a given distribution? We will discuss, to the best of our knowledge, the first algorithmic framework, Barbarik, to test whether the generated distribution is close to the uniform distribution. In contrast to techniques that require an exponential or sub-exponential number of samples for a sampler whose support can be represented by n bits, Barbarik requires a number of samples independent of n. We present a prototype implementation of Barbarik and use it to test three state-of-the-art uniform samplers over supports defined by combinatorial constraints. Barbarik is able to provide a certificate of uniformity for one sampler and demonstrate non-uniformity for the other two. (Joint work with Sourav Chakraborty.)
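To see why naive statistical testing is sample-hungry, consider estimating a sampler's total variation (TV) distance from uniform via empirical frequencies: a reliable estimate needs a number of samples on the order of the support size, i.e. 2^n for an n-bit support. The following is a minimal illustrative sketch of that naive approach, not the Barbarik algorithm; all function names and the toy samplers are hypothetical.

```python
# Naive empirical TV-distance check against uniform (illustrative only,
# NOT Barbarik). With too few samples, even a truly uniform sampler
# looks far from uniform, so small-sample tests carry no guarantee.
import random
from collections import Counter

def empirical_tv_from_uniform(sampler, support_size, num_samples):
    """Estimate TV distance between the sampler's output and uniform."""
    counts = Counter(sampler() for _ in range(num_samples))
    u = 1.0 / support_size
    # Sum |empirical - uniform| over observed outcomes, plus the
    # uniform mass sitting on outcomes never observed at all.
    return 0.5 * (sum(abs(c / num_samples - u) for c in counts.values())
                  + (support_size - len(counts)) * u)

random.seed(0)
n = 10                         # toy support of size 2^10 = 1024
support = 2 ** n
uniform = lambda: random.randrange(support)
biased = lambda: random.randrange(support // 2)   # misses half the support

# Truly uniform, but only 100 samples: the estimate is large and spurious.
print(empirical_tv_from_uniform(uniform, support, 100))
# Genuinely biased, with ~100x the support size in samples: estimate near 0.5.
print(empirical_tv_from_uniform(biased, support, 100_000))
```

The first estimate is misleadingly large only because 100 samples cannot cover a 1024-element support; distinguishing the two cases reliably with this approach forces the sample count to grow with the support size, which is exactly the cost Barbarik avoids.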
Room 602, School of Information Management & Engineering, Shanghai University of Finance & Economics