Title

Carreira-Perpinan, M. A. (2001): Continuous latent variable models for dimensionality reduction and sequential data reconstruction. PhD thesis, Dept. of Computer Science, University of Sheffield, UK.

Abstract

Continuous latent variable models (cLVMs) are probabilistic models that represent a distribution in a high-dimensional Euclidean space using a small number of continuous latent variables. This thesis explores, theoretically and practically, the ability of cLVMs to perform dimensionality reduction and sequential data reconstruction.

The first part of the thesis reviews and extends the theory of cLVMs: definition in terms of a prior distribution in latent space, a mapping to data space and a noise model; maximum likelihood parameter estimation with an expectation-maximisation (EM) algorithm; specific cLVMs (factor analysis, principal component analysis (PCA), independent component analysis, independent factor analysis and the generative topographic mapping (GTM)); mixtures of cLVMs; identifiability, interpretability and visualisation; and derivation of mappings for dimensionality reduction and reconstruction and their properties, such as continuity, for each cLVM. We extend GTM to diagonal noise and give a corresponding EM algorithm.
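As a concrete illustration of this formalism, the sketch below fits one of the simplest cLVMs, factor analysis, by maximum likelihood with EM: a Gaussian prior in latent space, a linear mapping to data space and a diagonal Gaussian noise model. It is a minimal sketch, not the thesis code; the function name, variable names and use of NumPy are assumptions for illustration only.

    # Minimal EM for factor analysis (a linear cLVM): x = W z + mu + noise,
    # with z ~ N(0, I_L) and noise ~ N(0, diag(psi)). X is an (N, D) data matrix.
    import numpy as np

    def fa_em(X, L, n_iter=100, seed=0):
        rng = np.random.default_rng(seed)
        N, D = X.shape
        mu = X.mean(axis=0)
        Xc = X - mu                      # centred data
        W = rng.standard_normal((D, L))  # loadings: the linear mapping to data space
        psi = np.var(Xc, axis=0) + 1e-6  # diagonal noise variances

        for _ in range(n_iter):
            # E-step: posterior moments of the latent variables given each data point
            PsiInvW = W / psi[:, None]                    # Psi^{-1} W
            G = np.linalg.inv(np.eye(L) + W.T @ PsiInvW)  # posterior covariance
            Ez = Xc @ PsiInvW @ G                         # E[z_n | x_n]
            Ezz = N * G + Ez.T @ Ez                       # sum_n E[z_n z_n' | x_n]

            # M-step: re-estimate the mapping and the diagonal noise model
            W = (Xc.T @ Ez) @ np.linalg.inv(Ezz)
            psi = np.maximum(np.mean(Xc**2, axis=0)
                             - np.sum(W * (Xc.T @ Ez), axis=1) / N, 1e-6)
        return mu, W, psi
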

We also describe a discrete LVM for binary data, Bernoulli mixtures, widely used in practice. We show that their log-likelihood surface has no singularities, unlike other mixture models, which makes EM estimation practical; and that their theoretical non-identifiability is rarely realised in actual estimates, which makes them interpretable.
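For concreteness, the following sketch estimates a mixture of multivariate Bernoulli distributions by EM; the absence of log-likelihood singularities is what makes this iteration well behaved in practice. It is a minimal illustration under assumed names (binary data matrix X, M components), not the implementation used in the thesis.

    # EM for a mixture of multivariate Bernoulli distributions on binary data X (N x D).
    import numpy as np

    def bernoulli_mixture_em(X, M, n_iter=100, seed=0, eps=1e-6):
        rng = np.random.default_rng(seed)
        N, D = X.shape
        pi = np.full(M, 1.0 / M)             # mixing proportions
        P = rng.uniform(0.25, 0.75, (M, D))  # component prototypes p_{md}

        for _ in range(n_iter):
            # E-step: responsibilities p(m | x_n), computed in log space for stability
            logp = X @ np.log(P).T + (1 - X) @ np.log(1 - P).T + np.log(pi)
            logp -= logp.max(axis=1, keepdims=True)
            R = np.exp(logp)
            R /= R.sum(axis=1, keepdims=True)

            # M-step: re-estimate proportions and prototypes
            Nm = R.sum(axis=0)
            pi = Nm / N
            P = np.clip((R.T @ X) / Nm[:, None], eps, 1 - eps)
        return pi, P
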

The second part deals with dimensionality reduction. We define the problem and give an extensive, critical review of nonprobabilistic methods for it: linear methods (PCA, projection pursuit), nonlinear autoassociators, kernel methods, local dimensionality reduction, principal curves, vector quantisation methods (elastic net, self-organising map) and multidimensional scaling methods. We then empirically evaluate, in terms of reconstruction error, computation time and visualisation, several latent-variable methods for dimensionality reduction of binary electropalatographic (EPG) data: PCA, factor analysis, mixtures of factor analysers, GTM and Bernoulli mixtures. We compare these methods with earlier, nonadaptive EPG data reduction methods and derive 2D maps of EPG sequences for use in speech research and therapy.
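As an example of the reconstruction-error criterion used in that evaluation, the sketch below projects data onto L principal components and measures the mean squared reconstruction error; it is a generic PCA illustration with assumed names, not the EPG experiment itself.

    # Mean squared reconstruction error of an L-dimensional PCA representation of X (N x D).
    import numpy as np

    def pca_reconstruction_error(X, L):
        mu = X.mean(axis=0)
        Xc = X - mu
        _, _, Vt = np.linalg.svd(Xc, full_matrices=False)  # principal directions
        V = Vt[:L].T                  # D x L basis spanning the reduced space
        Z = Xc @ V                    # L-dimensional representation of each point
        Xhat = Z @ V.T + mu           # reconstruction back in data space
        return np.mean(np.sum((X - Xhat) ** 2, axis=1))
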

The last part of this thesis proposes a new method for missing data reconstruction of sequential data that includes as a particular case the inversion of many-to-one mappings. We define the problem, distinguish it from inverse problems, and show when the two coincide. The method is based on multiple pointwise reconstruction and constraint optimisation. Multiple pointwise reconstruction uses a Gaussian mixture joint density model for the data, conveniently implemented with a nonlinear cLVM (GTM). The modes of the conditional distribution of missing values given present values at each point in the sequence represent local candidate reconstructions. A global sequence reconstruction is obtained by efficiently optimising a constraint, such as continuity or smoothness, with dynamic programming. We give a probabilistic interpretation of the method. We derive two algorithms for exhaustive mode finding in Gaussian mixtures, one based on gradient-quadratic search and the other on fixed-point search, as well as estimates of error bars for each mode and a measure of distribution sparseness. We discuss the advantages of the method over previous work based on the conditional mean or on universal mapping approximators (including ensembles and recurrent networks), conditional distribution estimation, vector quantisation and statistical analysis of missing data. We study the performance of the method with synthetic data (a toy example and an inverse kinematics problem) and real data (the mapping between EPG and acoustic data). We describe the possible application of the method to several well-known reconstruction or inversion problems: decoding of neural population activity for hippocampal place cells; wind field retrieval from scatterometer data; inverse kinematics and dynamics of a redundant manipulator; acoustic-to-articulatory mapping; audiovisual mappings for speech recognition; and recognition of occluded speech.
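The global step of the method can be pictured as a shortest-path problem over the per-frame candidate reconstructions. The sketch below, a minimal illustration rather than the thesis implementation, assumes the local mode-finding step has already produced a list of candidate vectors for each frame, and uses dynamic programming to pick one candidate per frame so that a simple continuity penalty (squared difference between consecutive frames) is minimised.

    # Choose one candidate reconstruction per frame so as to minimise
    # sum_t ||y_t - y_{t-1}||^2, by Viterbi-style dynamic programming.
    # candidates[t] is an array of shape (K_t, d) holding the modes found for frame t.
    import numpy as np

    def reconstruct_sequence(candidates):
        T = len(candidates)
        cost = [np.zeros(len(candidates[0]))]    # accumulated cost per candidate
        back = []                                # backpointers along the optimal path
        for t in range(1, T):
            # pairwise continuity penalties between frames t-1 and t
            d2 = np.sum((candidates[t][None] - candidates[t - 1][:, None]) ** 2, axis=2)
            total = cost[-1][:, None] + d2       # shape (K_{t-1}, K_t)
            back.append(np.argmin(total, axis=0))
            cost.append(np.min(total, axis=0))
        # backtrack the minimum-cost path from the last frame
        k = int(np.argmin(cost[-1]))
        path = [k]
        for t in range(T - 2, -1, -1):
            k = int(back[t][k])
            path.append(k)
        path.reverse()
        return np.stack([candidates[t][path[t]] for t in range(T)])

Other constraints mentioned above, such as smoothness over several consecutive frames, fit the same dynamic-programming scheme with a correspondingly larger state.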

Contents (abridged)

  1. Introduction
  2. The continuous latent variable modelling formalism
  3. Some properties of finite mixtures of multivariate Bernoulli distributions
  4. Dimensionality reduction
  5. Dimensionality reduction of electropalatographic (EPG) data
  6. Inverse problems and mapping inversion
  7. Sequential data reconstruction
  8. Exhaustive mode finding in Gaussian mixtures
  9. Experiments with synthetic data
  10. Experiments with real-world data: the acoustic-to-articulatory mapping problem
  11. Conclusions
  12. Appendices

Keywords

Continuous latent variable models, generative models, generative topographic mapping (GTM), identifiability, Bernoulli mixtures, electropalatography, dimensionality reduction, inverse problems, missing data reconstruction, sequential data reconstruction, mapping inversion, conditional distribution, mode finding, continuity constraints, distribution sparseness, universal mapping approximators, vector quantisation, inverse kinematics, acoustic-to-articulatory mapping.

Download

* You can download the full thesis (333 pages, 130 figures, 24 tables, 445 references) in the following formats:

All files are in European A4 size but should print fine on US letter paper (if not, you can always use "fit to page" in e.g. Acroread to print the PDF file). Although many pages contain colour figures, they should be readable when printed in black and white, except perhaps some of those in chapters 9 and 10.

The following chapters, available separately, contain tutorial material:

* A BibTeX file containing the bibliography for the thesis (.tar.gz, 445 references, 164K). Most of the entries also contain the abstract, keywords, URL (if the reference is available online), related software and other information.

* Matlab implementation of several of the models and algorithms discussed:


Miguel A Carreira-Perpinan