Probabilistic kernel machines based on Gaussian Processes (GPs) are popular in several applied domains due to their flexible modeling capabilities and interpretability.

Standard learning methods to optimize or infer GP covariance parameters require repeatedly calculating the GP marginal likelihood.

This poses a formidable computational challenge: the marginal likelihood can be computed exactly only for GP models with a Gaussian likelihood, and even then the required factorization of the n-by-n covariance matrix costs O(n^3) operations, limiting exact computation to datasets with a few thousand input vectors.
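For a Gaussian likelihood, the log marginal likelihood has the closed form log p(y) = -1/2 y^T K^{-1} y - 1/2 log|K| - (n/2) log 2π, typically evaluated through a Cholesky factorization of the covariance matrix K. A minimal sketch of this O(n^3) computation (the toy data, squared-exponential kernel, and noise level are illustrative choices, not from the talk):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200
X = rng.standard_normal((n, 1))
y = np.sin(X[:, 0]) + 0.1 * rng.standard_normal(n)

# Squared-exponential kernel plus Gaussian noise variance on the diagonal.
d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
K = np.exp(-0.5 * d2) + 0.1**2 * np.eye(n)

# O(n^3) Cholesky factorization: K = L L^T.
L = np.linalg.cholesky(K)
alpha = np.linalg.solve(L.T, np.linalg.solve(L, y))  # K^{-1} y
log_det = 2.0 * np.sum(np.log(np.diag(L)))           # log |K|
log_ml = -0.5 * y @ alpha - 0.5 * log_det - 0.5 * n * np.log(2 * np.pi)
print(log_ml)
```

The cubic cost of the factorization is precisely what makes repeated evaluation of this quantity prohibitive beyond a few thousand points.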

For larger datasets, or for GP models with non-Gaussian likelihoods, the marginal likelihood cannot be computed exactly, and this has motivated the research community to develop a variety of approximations.

While such approximations recover computational tractability, they introduce bias in predictions and in any conclusions drawn from the analysis of GP models, making them unsuitable for applications where quantification of uncertainty is of primary interest.

In this talk, I will present my research activities aimed at developing learning methods for GP models that avoid these approximations altogether while remaining tractable and scalable.

I will focus in particular on approaches that rely exclusively on unbiased estimates of the marginal likelihood, or on stochastic gradients, that is, unbiased estimates of the gradient of the log-marginal likelihood.
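To make the stochastic-gradient idea concrete: for a Gaussian likelihood, the gradient of the log-marginal likelihood with respect to a kernel hyperparameter θ is 1/2 y^T K^{-1} (∂K/∂θ) K^{-1} y - 1/2 tr(K^{-1} ∂K/∂θ), and the trace term admits a classical unbiased Monte Carlo estimate (the Hutchinson estimator) using random Rademacher probe vectors. A sketch under illustrative assumptions; the dense solves below are for clarity only, whereas at scale they would themselves be replaced by iterative linear solvers:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 60
X = rng.standard_normal((n, 1))
y = np.sin(X[:, 0]) + 0.3 * rng.standard_normal(n)

# Squared-exponential kernel with lengthscale ell, plus noise on the diagonal.
ell, noise = 1.0, 0.3
d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
K = np.exp(-0.5 * d2 / ell**2) + noise**2 * np.eye(n)
dK = np.exp(-0.5 * d2 / ell**2) * d2 / ell**3  # dK / d ell

alpha = np.linalg.solve(K, y)          # K^{-1} y
quad = 0.5 * alpha @ dK @ alpha        # data-fit term of the gradient

# Exact gradient needs tr(K^{-1} dK): dense and O(n^3).
trace_exact = np.trace(np.linalg.solve(K, dK))
grad_exact = quad - 0.5 * trace_exact

# Hutchinson estimator: E[r^T K^{-1} dK r] = tr(K^{-1} dK)
# for Rademacher probe vectors r, giving an unbiased stochastic gradient.
m = 4000
R = rng.choice([-1.0, 1.0], size=(n, m))
trace_est = np.mean(np.einsum('im,im->m', R, np.linalg.solve(K, dK @ R)))
grad_stoch = quad - 0.5 * trace_est
print(grad_exact, grad_stoch)
```

The estimator stays unbiased with even a single probe vector, which is what makes it compatible with stochastic-gradient optimization of the hyperparameters.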

I will demonstrate the effectiveness of these “noisy” learning approaches on several benchmark datasets and on a multi-class, multiple-kernel classification problem with neuroimaging data.

# Unbiased computations for tractable and scalable learning of Gaussian processes

Dates:

Monday, April 24, 2017 - 14:00 to 15:15

Location:

Amphi Boda, Bâtiment B, Centrale Lille

Speaker(s):

Maurizio Filippone

Affiliation(s):

Eurecom, Univ. Sophia-Antipolis, France