The best directed models should always be worse at generating samples than the best undirected models, even if their log likelihoods are similar, for a simple reason.

If we have an undirected model, then it defines a probability distribution by the equation

$$P(x) = \frac{\exp(G_\theta(x))}{Z(\theta)}, \qquad Z(\theta) = \sum_x \exp(G_\theta(x)),$$

where $G_\theta(x)$ is the “goodness” the model assigns to the configuration $x$.

As always, the standard objective of unsupervised learning is to find a distribution $P$ for which the average log probability of the data distribution, $\mathbb{E}_{x\sim D}[\log P(x)]$, is as large as possible.

In theory, if we learn successfully, we should reach a local maximum of the average log probability. Taking the derivative with respect to $\theta$ and setting it to zero yields

$$\mathbb{E}_{x\sim D}\big[\nabla_\theta G_{\theta^*}(x)\big] = \mathbb{E}_{x\sim P_{\theta^*}}\big[\nabla_\theta G_{\theta^*}(x)\big]$$

(here $\theta^*$ are the maximum likelihood parameters). Notice that this equation is a statement about the samples produced by the distribution $P_{\theta^*}$: the gradient of the goodness averaged over the data distribution $D$ is equal to the same gradient averaged over the model’s distribution $P_{\theta^*}$. Therefore, the samples from $P_{\theta^*}$ must somehow be related to the samples from the data distribution $D$. This is a “promise” made to us by the learning objective of unsupervised learning.
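To make this moment-matching promise concrete, here is a small numerical sketch (the feature map, state space, and data distribution are all made up for illustration): for a log-linear model $G_\theta(x) = \theta^\top f(x)$, the gradient of the goodness is just $f(x)$, so at the maximum-likelihood solution the data and model feature expectations must coincide.

```python
import numpy as np
from itertools import product

# Log-linear undirected model over x in {0,1}^3:
#   P(x) = exp(theta . f(x)) / Z(theta),  so  grad_theta G = f(x).
# The feature map below is an illustrative choice: bits + pairwise products.
def features(x):
    x = np.asarray(x, float)
    return np.concatenate([x, [x[0] * x[1], x[0] * x[2], x[1] * x[2]]])

states = list(product([0, 1], repeat=3))
F = np.stack([features(s) for s in states])          # shape (8, 6)

# A made-up empirical data distribution D over the 8 states.
data_p = np.array([4., 1., 1., 2., 1., 2., 2., 3.])
data_p /= data_p.sum()
data_moments = data_p @ F                            # E_D[f(x)]

theta = np.zeros(F.shape[1])
for _ in range(20000):                               # exact gradient ascent
    logits = F @ theta
    model_p = np.exp(logits - logits.max())
    model_p /= model_p.sum()
    theta += 0.5 * (data_moments - model_p @ F)      # E_D[f] - E_P[f]

gap = np.abs(data_moments - model_p @ F).max()
print(gap)  # ~0: at the maximum, data and model moments match
```

Note that the moments match even though the six-feature family cannot represent this $D$ exactly; the promise is about expectations of $\nabla_\theta G$, not about the full distribution.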

However, directed models do not offer such a guarantee; instead, they promise that the conditional distributions of the data distribution will be similar to the conditional distributions of the model’s distribution, when the conditioned-on data is sampled from the data distribution. This is the critical point.

More formally, a directed model defines a distribution $P(x) = \prod_i P(x_i \mid x_{<i})$. Plugging it into the objective of maximizing the average log likelihood of the data distribution $D$, we get the following:

$$\mathbb{E}_{x\sim D}\big[\log P(x)\big] = \sum_i \mathbb{E}_{x\sim D}\big[\log P(x_i \mid x_{<i})\big],$$

which is a sum of independent problems.
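This decomposition is easy to check numerically. The sketch below (the toy dataset and helper function are illustrative, not from the post) fits a fully tabular directed model over binary triples, where maximum likelihood for each position simply copies the data’s empirical conditional, and confirms that the total average log likelihood is exactly the sum of the per-position objectives.

```python
import numpy as np

rng = np.random.default_rng(0)
data = rng.integers(0, 2, size=(1000, 3))   # toy binary triples

# Maximum likelihood for a tabular directed model solves each position
# separately: P(x_i | x_<i) is just the empirical conditional of the data.
def fit_conditional(data, i):
    table = {}
    for row in data:
        counts = table.setdefault(tuple(row[:i]), np.zeros(2))
        counts[row[i]] += 1
    return {p: c / c.sum() for p, c in table.items()}

conds = [fit_conditional(data, i) for i in range(3)]

# Average log likelihood of each per-position problem...
per_pos = [np.mean([np.log(conds[i][tuple(r[:i])][r[i]]) for r in data])
           for i in range(3)]
# ...and of the full model, log P(x) = sum_i log P(x_i | x_<i).
total = np.mean([sum(np.log(conds[i][tuple(r[:i])][r[i]]) for i in range(3))
                 for r in data])

assert np.isclose(total, sum(per_pos))  # the objective splits into 3 problems
print(total, per_pos)
```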

If the $P(x_i \mid x_{<i})$’s don’t share parameters for different $i$’s, then the problems are truly independent and could be solved completely separately. So let’s say we found a $\theta$ that makes all these objectives happy. Then each $\mathbb{E}_{x\sim D}[\log P(x_i \mid x_{<i})]$ will be happy, which means that $P(x_i \mid x_{<i})$ is similar, more or less, to $D(x_i \mid x_{<i})$ for $x_{<i}$ being sampled from $D$; this is the critical implied assumption made by the maximum likelihood objective applied to directed models. Why is it a problem when generating samples? It’s bad because this objective makes no “promises” about the behaviour of $P(x_i \mid x_{<i})$ when $x_{<i}$ is not sampled from $D$. It is easy to imagine that a sampled prefix $x_{<i}$ will be somewhat different from a typical prefix of $D$, since it was sampled from $P$ and not from $D$. Then $P(x_i \mid x_{<i})$ will freak out, having never seen anything like $x_{<i}$, which will make the sample look even less like a sample from $D$. Etc. This “chain reaction” will likely cause the directed model to produce worse-looking samples than an undirected model with a similar log probability.
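Here is a toy simulation of that chain reaction (the dataset and smoothing scheme are entirely made up for illustration): the data contains only constant sequences, so every learned conditional is excellent on prefixes drawn from $D$, yet a tiny per-step error compounds, and a sizeable fraction of ancestral samples wander onto prefixes the model has never seen, where its smoothed conditionals are essentially uniform.

```python
import numpy as np

rng = np.random.default_rng(3)
L, N = 10, 100
# Toy data distribution D: every sequence is all-zeros or all-ones.
data = [np.zeros(L, int)] * (N // 2) + [np.ones(L, int)] * (N // 2)

# Tabular conditional with add-one smoothing. On a prefix never seen in
# the data, the counts stay at the prior and the model outputs a
# near-uniform distribution (this is where it "freaks out").
def cond(prefix):
    counts = np.ones(2)                      # Laplace smoothing
    for row in data:
        if tuple(row[:len(prefix)]) == prefix:
            counts[row[len(prefix)]] += 1
    return counts / counts.sum()

def ancestral_sample():
    x = []
    for _ in range(L):
        x.append(int(rng.random() < cond(tuple(x))[1]))
    return x

samples = [ancestral_sample() for _ in range(500)]
off = np.mean([len(set(s)) > 1 for s in samples])   # non-constant = unlike D
print(off)  # a noticeable fraction, despite near-perfect conditionals on D
```

Each step flips with probability roughly $1/52$, so about $1 - (51/52)^9 \approx 16\%$ of samples leave the data manifold at some point, after which every later conditional is uniform and the rest of the sequence is noise.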

But something should seem odd: after all, any undirected model (or any distribution, for that matter) can be decomposed with the chain rule, $P(x) = \prod_i P(x_i \mid x_{<i})$. Why won’t the above argument apply to an undirected model, which I claim to be superior at sampling? An answer can be given, but it involves lots of handwaving.

If an undirected model is expressed as a directed model using the chain rule, then the conditional probabilities $P(x_i \mid x_{<i})$ will involve massive marginalizations over all the variables after position $i$. What’s more, all the conditional distributions will share parameters in a very complicated way for different values of $i$. In all likelihood (and that’s the weak part of the argument), the parameterization is so complex that it’s not possible to make all the objectives happy for all $i$ simultaneously; that is, the undirected model will not necessarily make $P(x_i \mid x_{<i})$ similar to $D(x_i \mid x_{<i})$ when $x_{<i}$ is sampled from $D$. This is why I assumed above that the little conditionals don’t share parameters.
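A small sketch of what those marginalizations look like (the joint here is an arbitrary positive table standing in for an undirected model): each chain-rule conditional sums the joint over every variable after position $i$, a sum that is trivial on three bits but exponentially large for real models.

```python
import numpy as np
from itertools import product

rng = np.random.default_rng(1)
P = rng.random((2, 2, 2))       # arbitrary positive joint over {0,1}^3
P /= P.sum()

# Chain-rule conditional P(x_i | x_<i): both numerator and denominator
# marginalize over all the remaining variables.
def conditional(prefix, xi):
    return P[prefix + (xi,)].sum() / P[prefix].sum()

for x in product([0, 1], repeat=3):
    chain = np.prod([conditional(x[:i], x[i]) for i in range(3)])
    assert np.isclose(chain, P[x])      # the product recovers the joint
print("chain rule recovers the joint exactly")
```

The point of the handwaving above is that while these conditionals always exist, they inherit a single shared parameterization from the joint, so nothing forces each of them individually to match the data conditionals.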

So to summarize, directed models are worse at sampling because of the sequential nature of their sampling procedure. By sampling in sequence, the directed model is “fed” data which is unlike the training distribution, causing it to freak out. In contrast, sampling from undirected models requires an expensive Markov chain, which ensures the “self-consistency” of the sample. And intuitively, since we invest more work into obtaining the sample, it must be better.

## 2 Comments

Doesn’t Gibbs sampling also converge to the true distribution for infinite steps?

I guess it’s just a tradeoff– if you know a lot about the nature of your problem (distributions and dependencies), directed models let you model it rather precisely. Of course, you are more prone to errors in your assumptions.

It does. And Gibbs would suffer from similar problems if the model were trained with pseudo-likelihood, which learns conditionals that are accurate only on the data distribution.
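For what it’s worth, the convergence claim is easy to check on a toy joint (again an arbitrary positive table, using exact conditionals rather than pseudo-likelihood-learned ones): repeatedly resampling each coordinate from $P(x_i \mid x_{-i})$ makes the chain’s empirical distribution converge to $P$.

```python
import numpy as np

rng = np.random.default_rng(2)
P = rng.random((2, 2, 2))       # arbitrary positive joint over {0,1}^3
P /= P.sum()

def gibbs_sweep(x):
    # Resample each coordinate from its exact conditional P(x_i | x_-i).
    for i in range(3):
        lo, hi = list(x), list(x)
        lo[i], hi[i] = 0, 1
        p1 = P[tuple(hi)] / (P[tuple(lo)] + P[tuple(hi)])
        x[i] = int(rng.random() < p1)
    return x

x, counts = [0, 0, 0], np.zeros((2, 2, 2))
for t in range(200_000):
    x = gibbs_sweep(x)
    if t >= 1_000:                      # discard burn-in
        counts[tuple(x)] += 1

err = np.abs(counts / counts.sum() - P).max()
print(err)  # small: the chain's samples are distributed (almost) as P
```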

And it’s always good to wisely use prior knowledge about the problem, though it can be difficult to “explain” the nature of our prior knowledge to the model.