A generative model is pretty pointless on its own unless the generative structure itself holds intrinsic interest. Hence, papers justify their generative models either by comparing their predictive performance against other models or by extending the model to accommodate standard machine learning tasks such as dimensionality reduction, prediction, or classification.
A prime example of this is the LDA paper, which evaluates the model's usefulness for classification and collaborative filtering in addition to comparing its performance against its ancestor, the PLSA model, whose intention was to use the latent variables for indexing documents. These tasks were performed without modification to the generative model because they only required the evaluation of probability densities: $p(\mathbf{w})$ for the density of a document, $p(q \mid \mathbf{w})$ for the relevance of a query against a document, and $p(w \mid \mathbf{w}_{\text{obs}})$ for recommending an item given a set of observed items.
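To make that last density concrete, here is the standard LDA predictive density for a new item given observed items, written out as a reconstruction (my notation, not a quote from the paper): the topic proportions $\theta$ are inferred from the observed items and then integrated out.

$$p(w \mid \mathbf{w}_{\text{obs}}) = \int \left( \sum_{z} p(w \mid z, \beta)\, p(z \mid \theta) \right) p(\theta \mid \mathbf{w}_{\text{obs}})\, d\theta$$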
Lack of supervision
It was realized early on that supervision would be required to get the most out of a generative model. Consider clustering a set of points, where we let a Gaussian mixture model (GMM) group the points into clusters. The whole idea is that we hope the clusters have some significance to us, yet in most non-trivial circumstances it's quite unlikely that they will make complete sense after a run of the GMM.
So, a logical modification of clustering comes about when the sample points $x_n$ (the independent variables) come along with accompanying labels $y_n$ (the dependent variables). While we may just get lucky and find that the clusters perfectly partition the $x_n$ such that each cluster contains a disjoint subset of the $y_n$'s, we can't expect this to happen unaided. Thus, the way to proceed is to treat the $y_n$ as constraints and to find clusters that do their best to each encompass a disjoint subset of the $y_n$'s.
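Here is a small sketch of that point (my own illustration, not from any paper): fit an unsupervised GMM to labeled points and measure how well the discovered clusters agree with the labels. Nothing in the model forces that agreement to be high; the dataset and parameters below are arbitrary choices.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.mixture import GaussianMixture
from sklearn.metrics import adjusted_rand_score

# Hypothetical labeled data: x_n with accompanying labels y_n.
X, y = make_classification(n_samples=500, n_features=10, n_informative=4,
                           n_classes=3, n_clusters_per_class=2, random_state=0)

# Unsupervised clustering: the GMM never sees y.
gmm = GaussianMixture(n_components=3, random_state=0).fit(X)
clusters = gmm.predict(X)

# A score of 1.0 would mean each cluster contains a disjoint subset of the labels;
# in practice the agreement is usually well below that.
print(adjusted_rand_score(y, clusters))
```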
What I want to look at is one way [1] the LDA model can be augmented to learn topics with the help of document labels such as categories, ratings, or other information.
Regression
The independent variables are taken to be the topic-count vector for each document, $\sum_{n=1}^{N} z_{d,n}$ (where $z_{d,n}$ is the topic indicator of the $n$-th word of document $d$), and the dependent variables are ordered user ratings (such as a score from 1 to 10), $y_d$. The hunch is that the ratings change linearly with respect to the topic counts, i.e., two different ratings are separated by a linear change in the topic counts of the corresponding documents, with possible variation entertained by Gaussian noise.
The independent variables are the normalized topic counts $\bar{z}_d = \frac{1}{N}\sum_{n=1}^{N} z_{d,n}$, and the ratings are modeled as values drawn from a Gaussian distribution with mean given by a linear combination, via coefficients $\eta$, of the above topic counts:

$$y_d \mid z_{d,1:N}, \eta, \sigma^2 \sim \mathcal{N}(\eta^\top \bar{z}_d, \sigma^2).$$
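As a quick sanity check of this response model, here is a simulation sketch (sizes, coefficients, and noise variance are assumed values of my own, not from the paper): given per-document topic assignments, the ratings are just Gaussian draws around the linear predictor.

```python
import numpy as np

rng = np.random.default_rng(0)

K, D, N = 5, 100, 200                  # topics, documents, words per document (assumed)
eta, sigma2 = rng.normal(size=K), 0.5  # regression coefficients and noise variance (assumed)

# Hypothetical per-document topic assignments; in the real model these come from LDA itself.
theta = rng.dirichlet(np.ones(K), size=D)
z_counts = np.array([rng.multinomial(N, theta[d]) for d in range(D)])
z_bar = z_counts / N                   # normalized topic counts, one row per document

# y_d ~ N(eta^T z_bar_d, sigma^2)
ratings = rng.normal(loc=z_bar @ eta, scale=np.sqrt(sigma2))
```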
To keep this post short, I’ll postpone the inference discussion to another post, which will also give me a chance to walk through variational inference.
Generalization
What if we want to entertain other responses such as a binomial or categorical response? For this, we employ the Generalized Linear Model (GLM). Note first the salient components of the Gaussian model above:
- The mean $\mu$ is linearly determined as $\eta^\top \bar{z}$. This linear combination is also called the linear predictor in the paper.
- The dispersion (variance) parameter $\sigma^2$ controls how much the response varies from the linearly determined mean.
The GLM defines the probability of the dependent variable as

$$p(y \mid \zeta, \delta) = h(y, \delta)\, \exp\left\{ \frac{\zeta y - A(\zeta)}{\delta} \right\}$$

where $\zeta$ is the natural parameter, $\delta$ the dispersion parameter, $h(y, \delta)$ is the base measure, and $A(\zeta)$ is the log-normalizer. Let's see if the normal distribution fits this form:

$$p(y \mid \mu, \sigma^2) = \frac{1}{\sqrt{2\pi\sigma^2}} \exp\left\{ -\frac{(y - \mu)^2}{2\sigma^2} \right\} = \frac{1}{\sqrt{2\pi\sigma^2}} \exp\left\{ -\frac{y^2}{2\sigma^2} \right\} \exp\left\{ \frac{\mu y - \mu^2/2}{\sigma^2} \right\}.$$
This fits the bill with $\zeta = \mu = \eta^\top \bar{z}$, $\delta = \sigma^2$, $h(y, \delta) = \frac{1}{\sqrt{2\pi\delta}} \exp\{-y^2 / (2\delta)\}$, and $A(\zeta) = \zeta^2 / 2$. In the next post I'll look at how other response types fit into the Generalized Linear Model format.
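For the skeptical, a quick numerical check (my sketch; the symbols $\zeta$, $\delta$, $h$, and $A$ follow the form above, and the coefficients and topic counts are made up) that the exponential-dispersion parameterization really does reproduce the usual normal density:

```python
import numpy as np
from scipy.stats import norm

def glm_gaussian_pdf(y, zeta, delta):
    """Gaussian density written as h(y, delta) * exp((zeta*y - A(zeta)) / delta)."""
    h = np.exp(-y**2 / (2 * delta)) / np.sqrt(2 * np.pi * delta)  # base measure
    A = zeta**2 / 2                                               # log-normalizer
    return h * np.exp((zeta * y - A) / delta)

eta = np.array([2.0, -1.0, 0.5])    # hypothetical regression coefficients
z_bar = np.array([0.6, 0.3, 0.1])   # hypothetical normalized topic counts
zeta, delta = eta @ z_bar, 0.25     # natural parameter and dispersion (sigma^2)

y = np.linspace(-2, 4, 7)
assert np.allclose(glm_gaussian_pdf(y, zeta, delta),
                   norm.pdf(y, loc=zeta, scale=np.sqrt(delta)))
```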
[1] Jon D. Mcauliffe and David M. Blei. 2008. “Supervised Topic Models.” Advances in Neural Information Processing Systems 20