Nowadays, plenty of data is available, and many applications need supervised machine learning methods that can take several information sources into account. One natural solution consists in “combining” these sources. Here we focus on a particular combination: the PAC-Bayesian weighted majority vote. The PAC-Bayesian majority vote is an ensemble method where each model is assigned a specific weight. Such approaches are motivated by the idea that a careful combination can compensate for the individual models' errors and thus achieve better robustness and performance on unseen data. In statistical machine learning, the capacity of a model to generalize on a data distribution is measured through generalization bounds. In this talk, after recalling the usual PAC-Bayesian generalization bound (the PAC-Bayesian theorem), we extend it to two transfer learning tasks:

(i) Multiview learning where the objective is to take advantage of different descriptions of the data (i.e. different input spaces);

(ii) Domain adaptation where the objective is to adapt a model from one source data distribution to a different, but related, target distribution.

# When PAC-Bayesian Majority Vote Meets Transfer Learning

Dates:

Tuesday, February 20, 2018 - 11:00 to 12:00

Location:

Inria Lille - Nord Europe, salle A00

Speaker(s):

Emilie Morvant

Affiliation(s):

Laboratoire Hubert Curien, University Jean Monnet, St-Étienne
