
Structure theory of Recurrent Neural Networks: a control-theoretic perspective.


One of the challenges in deep learning is to provide a mathematical theory for analyzing learning algorithms and for understanding why they are so successful in practice. Among the models used in deep learning, Recurrent Neural Networks (RNNs) are among the most popular. They can be viewed as non-linear dynamical systems equipped with an internal state, an input and an output. Learning dynamical models from time-series data has long been a standard topic in control theory, where it is known under the name of system identification. In particular, there is a rich literature on the mathematical theory of learning linear dynamical systems, which form a subclass of RNNs. One of the key steps in building this theory was the development of realization theory.

We may think of realization theory as a way to understand the relationship between an observed input/output behavior and the dynamical systems producing this behavior. Realization theory is useful for tackling the issue of identifiability (the non-existence of two different dynamical systems that yield the same observed behavior), for finding canonical parameterizations (parameterizations which are guaranteed to be identifiable), and for proving consistency of learning algorithms.
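As a minimal illustration (the notation below is ours and not taken from the abstract), a discrete-time RNN in this state-space view can be written as

\[
x_{t+1} = \sigma(A x_t + B u_t), \qquad y_t = C x_t,
\]

where \(x_t\) is the internal state, \(u_t\) the input, \(y_t\) the output, and \(\sigma\) a fixed non-linear activation applied componentwise. Taking \(\sigma\) to be the identity map recovers a linear dynamical system, the subclass for which realization theory is classical.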


In this talk we aim at developing a realization theory for RNNs. We present the problem formulation, some existing results, and a new result which links RNNs with a class of non-linear dynamical systems, called rational systems, for which realization theory is fully understood.
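For orientation only (this is our sketch of the usual definition, not part of the abstract), a rational system is a dynamical system whose state dynamics and output map are given by rational functions of the state, e.g. in continuous time

\[
\dot{x}(t) = f(x(t), u(t)), \qquad y(t) = h(x(t)),
\]

where each component of \(f\) and \(h\) is a ratio of polynomials in the state variables.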

Dates: 
Friday, January 18, 2019 - 11:00
Location: 
Inria Lille - Nord Europe, A00
Speaker(s): 
Thibault Defourneau