Modeling Meaning with Latent Factorization Models

Over the last two decades, significant progress has been made in the automatic extraction of meaning from large-scale text corpora. The most successful models are based on distributional data, computing the meaning of words from the contexts in which those words appear. In this presentation, we'll look at a number of factorization models that are able to induce latent semantics from distributional data. First, we'll look at matrix factorization. Matrix factorization models are well suited to the induction of topical dimensions, which are valuable in a number of applications.
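
As a concrete illustration of this first step, here is a minimal sketch that factorizes a tiny word-by-context co-occurrence matrix: raw counts are reweighted with positive pointwise mutual information (PPMI), a common preprocessing choice, and a truncated SVD then yields low-dimensional word vectors. The vocabulary, the counts, and the choice of two latent dimensions are invented purely for illustration and are not taken from the talk.

```python
import numpy as np

# Tiny word-by-context co-occurrence matrix; the words, contexts,
# and counts are invented purely for illustration.
words = ["cat", "dog", "car", "truck"]
contexts = ["pet", "bark", "engine", "road"]
counts = np.array([
    [8.0, 1.0, 0.0, 0.0],
    [7.0, 9.0, 0.0, 1.0],
    [0.0, 0.0, 6.0, 8.0],
    [0.0, 1.0, 9.0, 7.0],
])

# Reweight raw counts with positive pointwise mutual information (PPMI).
total = counts.sum()
p_wc = counts / total
p_w = counts.sum(axis=1, keepdims=True) / total
p_c = counts.sum(axis=0, keepdims=True) / total
with np.errstate(divide="ignore", invalid="ignore"):
    pmi = np.log(p_wc / (p_w * p_c))
ppmi = np.where(np.isfinite(pmi) & (pmi > 0), pmi, 0.0)

# Truncated SVD: keep the top-k singular triplets; the rows of
# word_vecs are the resulting low-dimensional word representations.
k = 2
U, S, Vt = np.linalg.svd(ppmi, full_matrices=False)
word_vecs = U[:, :k] * S[:k]

for word, vec in zip(words, word_vecs):
    print(word, np.round(vec, 3))
```

With counts like these, words that occur in similar contexts ('cat' and 'dog', 'car' and 'truck') receive similar latent vectors, which is exactly the kind of topical structure the factorization is meant to expose.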
Next, we'll look at a number of tensor factorization models. Where matrices are restricted to two-way data, tensors are able to handle multi-way co-occurrences. Using tensors, combined with appropriate factorization models, we are able to build semantically richer language models that are useful in applications such as selectional preference induction and the modeling of semantic compositionality.
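
To make the multi-way setting concrete, below is a minimal sketch of one standard tensor factorization model: a rank-R CP (PARAFAC) decomposition of a three-way subject-verb-object count tensor, fitted with alternating least squares. The vocabulary, the counts, the rank, and the choice of CP itself are assumptions made for illustration; the abstract does not say which factorization models the talk covers.

```python
import numpy as np

def unfold(X, mode):
    """Matricize a tensor along the given mode."""
    return np.moveaxis(X, mode, 0).reshape(X.shape[mode], -1)

def khatri_rao(U, V):
    """Column-wise Kronecker product of two factor matrices."""
    return np.einsum("ir,jr->ijr", U, V).reshape(-1, U.shape[1])

def cp_als(X, rank, n_iter=200, seed=0):
    """Rank-`rank` CP/PARAFAC decomposition of a 3-way tensor via ALS."""
    rng = np.random.default_rng(seed)
    A = rng.standard_normal((X.shape[0], rank))
    B = rng.standard_normal((X.shape[1], rank))
    C = rng.standard_normal((X.shape[2], rank))
    for _ in range(n_iter):
        # Each factor is updated by a least-squares solve against the
        # matching unfolding; the Hadamard product gives the Gram matrix.
        A = unfold(X, 0) @ khatri_rao(B, C) @ np.linalg.pinv((B.T @ B) * (C.T @ C))
        B = unfold(X, 1) @ khatri_rao(A, C) @ np.linalg.pinv((A.T @ A) * (C.T @ C))
        C = unfold(X, 2) @ khatri_rao(A, B) @ np.linalg.pinv((A.T @ A) * (B.T @ B))
    return A, B, C

# Invented subject x verb x object co-occurrence counts.
subjects = ["man", "dog", "cat"]
verbs = ["eat", "drink"]
objects = ["meat", "water", "milk"]
X = np.array([
    [[5.0, 0.0, 1.0],   # man eats meat / water / milk
     [0.0, 6.0, 1.0]],  # man drinks ...
    [[4.0, 0.0, 0.0],
     [0.0, 3.0, 0.0]],
    [[2.0, 0.0, 1.0],
     [0.0, 1.0, 4.0]],
])

A, B, C = cp_als(X, rank=2)
# Reconstruct the tensor from the factors to check the fit.
X_hat = np.einsum("ir,jr,kr->ijk", A, B, C)
print("reconstruction error:", np.round(np.linalg.norm(X - X_hat), 3))
```

The factor matrices A, B, and C act as latent representations for subjects, verbs, and objects respectively; in a selectional preference setting, verbs that combine with similar arguments end up with similar rows in B.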

Dates: Thursday, April 21, 2016 - 11:00
Location: Inria B21
Speaker(s): Tim Van de Cruys
Affiliation(s): CNRS, IRIT