
Split-and-augmented Gibbs sampler: A divide-and-conquer approach to solve large-scale inference problems


Recently, a new class of Markov chain Monte Carlo (MCMC) algorithms has taken advantage of convex optimization to build fast and efficient schemes for sampling from high-dimensional distributions. Variable splitting methods have become classical in optimization to divide difficult problems into simpler ones, and have proven their efficiency in solving high-dimensional inference problems encountered in machine learning and signal processing. This paper derives two new optimization-driven sampling schemes inspired by variable splitting and data augmentation. In particular, the formulation of one of the proposed approaches closely mirrors the main steps of the alternating direction method of multipliers (ADMM). The proposed framework makes it possible to derive faster and more efficient sampling schemes than current state-of-the-art methods, and can embed the latter. By efficiently sampling both the parameter of interest and the hyperparameters of the problem, the generated samples can be used to approximate the maximum a posteriori (MAP) and minimum mean square error (MMSE) estimators. Additionally, unlike optimization methods, the proposed approach provides confidence intervals at low cost. Simulations on two often-studied signal processing problems illustrate the performance of the two proposed samplers. All results are compared to those obtained by recent state-of-the-art optimization and MCMC algorithms used to solve these problems.
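
To illustrate the splitting-plus-data-augmentation idea described above, here is a minimal sketch, not one of the samplers from the article: it assumes a toy Gaussian likelihood p(y|x) proportional to exp(-||y - A x||^2 / (2*sigma^2)) and a Gaussian prior p(x) proportional to exp(-||x||^2 / (2*tau^2)) with fixed hyperparameters, so that, after introducing a splitting variable z with a coupling parameter rho^2, both conditional distributions are closed-form Gaussians. All names (split_gibbs_toy, rho2, n_iter, ...) are illustrative; the samplers presented in the talk handle more general potentials and also sample the hyperparameters.

import numpy as np

def split_gibbs_toy(y, A, sigma2, tau2, rho2, n_iter=5000, burn_in=1000, seed=None):
    """Toy split Gibbs sampler for p(x | y) ~ exp(-||y - A x||^2/(2 sigma2) - ||x||^2/(2 tau2)).

    The auxiliary variable z splits the prior term from the likelihood term:
        p(x, z | y) ~ exp(-||y - A x||^2/(2 sigma2) - ||z||^2/(2 tau2) - ||x - z||^2/(2 rho2)),
    so the sampler alternates two exact Gaussian draws.
    """
    rng = np.random.default_rng(seed)
    n = A.shape[1]
    # Fixed precision matrix of x | z, y and its Cholesky factor (computed once).
    Q_x = A.T @ A / sigma2 + np.eye(n) / rho2
    L_x = np.linalg.cholesky(Q_x)
    Aty = A.T @ y / sigma2
    # Scalar variance of each coordinate of z | x.
    var_z = 1.0 / (1.0 / tau2 + 1.0 / rho2)

    x = np.zeros(n)
    z = np.zeros(n)
    samples = []
    for it in range(n_iter):
        # Sample x | z, y ~ N(Q_x^{-1} (A^T y / sigma2 + z / rho2), Q_x^{-1}).
        mu_x = np.linalg.solve(Q_x, Aty + z / rho2)
        # Drawing mu_x + L_x^{-T} eps gives covariance Q_x^{-1}.
        x = mu_x + np.linalg.solve(L_x.T, rng.standard_normal(n))
        # Sample z | x ~ N(var_z * x / rho2, var_z * I).
        z = var_z * x / rho2 + np.sqrt(var_z) * rng.standard_normal(n)
        if it >= burn_in:
            samples.append(x.copy())
    samples = np.array(samples)
    return samples.mean(axis=0), samples  # MMSE estimate and the retained chain

Averaging the returned chain approximates the MMSE estimator, and empirical quantiles of the samples give per-coefficient credible intervals, which is the kind of low-cost uncertainty quantification mentioned in the abstract.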

See the associated (submitted) article at: http://arxiv.org/abs/1804.05809

Dates: 
Tuesday, May 15, 2018 - 14:00 to 15:15
Location: 
Amphi Goubet, Ecole centrale
Speaker(s): 
Maxime Vono
Affiliation(s): 
IRIT, Univ. Toulouse, France