Thursday, June 23, 2016

Robust Learning of Fixed-Structure Bayesian Networks. (arXiv:1606.07384v1 [cs.DS])

We investigate the problem of learning Bayesian networks in an agnostic model where an $\epsilon$-fraction of the samples are adversarially corrupted. Our agnostic learning model is similar to -- in fact, stronger than -- Huber's contamination model in robust statistics. In this work, we study the fully observable Bernoulli case where the structure of the network is given. Even in this basic setting, previous learning algorithms either run in exponential time or lose dimension-dependent factors in their error guarantees. We provide the first computationally efficient agnostic learning algorithm for this problem with dimension-independent error guarantees. Our algorithm has polynomial sample complexity, runs in polynomial time, and achieves error that scales nearly-linearly with the fraction of adversarially corrupted samples.
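To make the setting concrete, here is a minimal Python sketch of the contamination model and of the naive conditional-frequency estimator that it breaks. The chain structure, the parameter values, and the particular corruption planted by the "adversary" are all illustrative assumptions; this is not the paper's algorithm, only the problem it addresses.

    # Minimal sketch (assumptions throughout): samples from a fixed-structure
    # Bayesian network over Bernoulli variables, an epsilon-fraction of which is
    # replaced by an adversary, and the naive conditional-frequency estimator
    # whose error the corruption can inflate.
    import numpy as np

    rng = np.random.default_rng(0)

    # Fixed structure on three binary variables: X0 -> X1 -> X2 (an arbitrary chain).
    p_x0 = 0.6
    p_x1_given_x0 = {0: 0.2, 1: 0.8}
    p_x2_given_x1 = {0: 0.3, 1: 0.7}

    def sample_clean(n):
        """Draw n i.i.d. samples from the chain network."""
        x0 = rng.random(n) < p_x0
        x1 = rng.random(n) < np.where(x0, p_x1_given_x0[1], p_x1_given_x0[0])
        x2 = rng.random(n) < np.where(x1, p_x2_given_x1[1], p_x2_given_x1[0])
        return np.stack([x0, x1, x2], axis=1).astype(int)

    def corrupt(samples, eps):
        """Replace an eps-fraction of rows with adversarial points.

        Here the adversary simply plants the all-ones configuration, which is
        enough to bias the naive estimator; the model in the paper allows
        arbitrary replacements.
        """
        out = samples.copy()
        m = int(eps * len(out))
        out[:m] = 1
        return out

    def naive_cpt_estimate(samples):
        """Empirical conditional frequencies for the known chain structure."""
        x0, x1, x2 = samples[:, 0], samples[:, 1], samples[:, 2]
        return {
            "P(X0=1)": x0.mean(),
            "P(X1=1|X0=0)": x1[x0 == 0].mean(),
            "P(X1=1|X0=1)": x1[x0 == 1].mean(),
            "P(X2=1|X1=0)": x2[x1 == 0].mean(),
            "P(X2=1|X1=1)": x2[x1 == 1].mean(),
        }

    n, eps = 100_000, 0.1
    clean = sample_clean(n)
    dirty = corrupt(clean, eps)

    print("clean-data estimates:", naive_cpt_estimate(clean))
    print("corrupted estimates: ", naive_cpt_estimate(dirty))

Running this, the corrupted estimates drift by roughly eps in several entries, which is the kind of dimension-dependent accumulation of error the paper's robust algorithm is designed to avoid.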



from cs.AI updates on arXiv.org http://ift.tt/28Qf8Z6
via IFTTT
