arxiv:1711.02679

Neural Variational Inference and Learning in Undirected Graphical Models

Published on Nov 7, 2017

AI-generated summary

Variational inference methods for undirected graphical models use neural networks to approximate log-partition functions and enable efficient learning and sampling across hybrid model architectures.

Abstract

Many problems in machine learning are naturally expressed in the language of undirected graphical models. Here, we propose black-box learning and inference algorithms for undirected models that optimize a variational approximation to the log-likelihood of the model. Central to our approach is an upper bound on the log-partition function parametrized by a function q that we express as a flexible neural network. Our bound makes it possible to track the partition function during learning, to speed up sampling, and to train a broad class of hybrid directed/undirected models via a unified variational inference framework. We empirically demonstrate the effectiveness of our method on several popular generative modeling datasets.
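The abstract centers on a variational upper bound on the log-partition function log Z defined through a distribution q. As a minimal sketch (not the paper's exact method), one can pair the standard importance-weighted lower bound E_q[log(p̃/q)] ≤ log Z with a χ²-style upper bound log Z ≤ ½ log E_q[(p̃/q)²], estimated by Monte Carlo; here a fixed Gaussian q stands in for the flexible neural network, and the unnormalized model p̃ is a toy standard Gaussian whose true log Z is known:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy unnormalized model: p_tilde(x) = exp(-x^2 / 2), so true Z = sqrt(2*pi)
def log_p_tilde(x):
    return -0.5 * x**2

# Proposal q: a fixed Gaussian N(0, sigma^2), standing in for the neural network q
sigma = 1.5
def log_q(x):
    return -0.5 * (x / sigma)**2 - np.log(sigma * np.sqrt(2 * np.pi))

# Monte Carlo samples from q and log importance weights log(p_tilde / q)
x = rng.normal(0.0, sigma, size=200_000)
log_w = log_p_tilde(x) - log_q(x)

# Lower bound on log Z (Jensen's inequality, the usual ELBO direction)
lower = log_w.mean()

# Chi^2-style upper bound: 0.5 * log E_q[w^2], stabilized via log-sum-exp
m = log_w.max()
upper = 0.5 * (np.log(np.mean(np.exp(2 * (log_w - m)))) + 2 * m)

true_log_z = 0.5 * np.log(2 * np.pi)
print(f"lower={lower:.3f}  true={true_log_z:.3f}  upper={upper:.3f}")
```

Tracking both bounds during learning brackets the partition function: as q approaches the normalized model, the gap between them shrinks to zero.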

