Adaptive Strategy for Resetting a Non-stationary Markov Chain during Learning via Joint Stochastic Approximation

In this paper, we tackle the non-stationary kernel problem of the JSA algorithm of Ou and Song (2020), a recent proposal that learns a deep generative model $p_\theta(\mathbf{x},\mathbf{h})$ and a corresponding approximate posterior $q_\phi(\mathbf{h}|\mathbf{x})$ by drawing samples from a non-stationary Markov chain and estimating gradients with these samples. The non-stationary kernel problem refers to the degraded performance of the algorithm caused by the constant change of the chain's transition kernel throughout the run of the algorithm. We present an automatic adaptive strategy that checks, at each gradient-update step, whether this change is significant, and resets the chain with a sample drawn from the current approximate posterior $q_\phi(\mathbf{h}|\mathbf{x})$ whenever it is. In our experiments on binarized MNIST, our strategy gives results comparable with or slightly better than those reported in the original paper on JSA, while avoiding the nontrivial manual intervention required for handling the non-stationary kernel problem in the original JSA algorithm.
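The adaptive reset described above can be sketched in a few lines. The sketch below is purely illustrative: the significance check, the linear-Gaussian stand-in for $q_\phi(\mathbf{h}|\mathbf{x})$, and the threshold `tol` are all assumptions, not the paper's actual criterion or model.

```python
import numpy as np

rng = np.random.default_rng(0)

def kernel_change_significant(phi_prev, phi, tol=1e-2):
    """Hypothetical significance check: treat the kernel change as
    significant when the approximate-posterior parameters phi moved
    by more than `tol` since the last step (the paper's actual test
    is not specified in this abstract)."""
    return np.linalg.norm(phi - phi_prev) > tol

def sample_posterior(phi, x):
    """Stand-in for drawing h ~ q_phi(h | x); here a Gaussian whose
    mean is a linear function of x, purely for illustration."""
    return phi @ x + rng.standard_normal(phi.shape[0])

def training_step(phi, phi_prev, chain_state, x):
    """One gradient-update step with the adaptive reset: if the
    transition kernel (which depends on phi) changed significantly
    since the last step, restart the chain from a fresh sample of
    the current approximate posterior."""
    if kernel_change_significant(phi_prev, phi):
        chain_state = sample_posterior(phi, x)  # reset the chain
    # ... MCMC transition and gradient estimation would follow here ...
    return chain_state
```

The point of the reset is that a chain whose kernel has just shifted may no longer target the current posterior, so restarting it from $q_\phi(\mathbf{h}|\mathbf{x})$ avoids biased gradient estimates without manual tuning.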
Publisher
Organisers of Advances in Approximate Bayesian Inference (AABI)

Venue
The 3rd Symposium on Advances in Approximate Bayesian Inference (AABI 2021)

Appears in Collection
RIMS Conference Papers; CS Conference Papers (conference papers)