While Bayesian methods are praised for their ability to incorporate useful prior knowledge, in practice, priors that allow for computationally convenient or tractable inference are often chosen instead. In this paper, we investigate the following question: for a given model, is it possible to use any convenient prior to infer a false posterior and, afterwards, given some true prior of interest, quickly transform this result into the true posterior?
We present a procedure to carry out this task: given an inferred false posterior and a true prior, our algorithm generates samples from the true posterior. This transformation procedure, which we call "prior swapping," works for arbitrary priors. Notably, its cost is independent of data size. It therefore allows us, in some cases, to apply significantly less costly inference procedures to more sophisticated models than was previously possible. It also lets us quickly perform any additional inferences, such as with updated priors or with many different hyperparameter settings, without touching the data. We prove that our method can generate asymptotically exact samples, and we demonstrate it empirically on a number of models and priors.
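The abstract does not spell out the mechanics, but the stated setup implies the identity p(theta | x) ∝ p_false(theta | x) * prior_true(theta) / prior_false(theta): the likelihood terms cancel, so the correction never touches the data. Below is a minimal sketch of one simple way to exploit this identity, via self-normalized importance reweighting of false-posterior samples. This is illustrative only, not necessarily the paper's actual procedure, and all function and variable names are hypothetical.

```python
import numpy as np
from scipy.stats import norm, laplace

def prior_swap_resample(false_samples, log_prior_true, log_prior_false,
                        n_out=1000, rng=None):
    """Resample false-posterior draws so they target the true posterior.

    false_samples: array of shape (n, d), draws from the false posterior.
    log_prior_true, log_prior_false: callables returning log prior densities.
    """
    rng = np.random.default_rng() if rng is None else rng
    # Log importance weights: true prior over false prior, per sample.
    # The likelihood cancels, so no data is needed at this stage.
    log_w = np.array([log_prior_true(t) - log_prior_false(t)
                      for t in false_samples])
    log_w -= log_w.max()                  # stabilize before exponentiating
    w = np.exp(log_w)
    w /= w.sum()                          # self-normalized weights
    idx = rng.choice(len(false_samples), size=n_out, replace=True, p=w)
    return false_samples[idx]

# Hypothetical usage: false posterior inferred under a flat (improper)
# prior, with a Laplace prior as the true prior of interest. The normal
# draws below stand in for output from some convenient inference routine.
false_samples = norm(loc=1.0, scale=0.5).rvs(size=(5000, 1))
swapped = prior_swap_resample(
    false_samples,
    log_prior_true=lambda t: laplace.logpdf(t, scale=1.0).sum(),
    log_prior_false=lambda t: 0.0,        # flat false prior: constant log density
)
```

Plain importance reweighting like this can suffer weight degeneracy when the two priors differ sharply; the point of the sketch is only the data-independence of the correction, which is what makes the cost independent of data size.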
from cs.AI updates on arXiv.org http://ift.tt/1XUgiZm