Classical machine learning approaches are usually treated as monolithic processes in which the same computation is applied to every input datapoint. They do not take into account that the relevant features may differ from one input to another, and that different computations could therefore be applied. This matters particularly for problems constrained by an underlying cost, such as CPU consumption for example, where it is crucial to limit the average number of acquired features or the overall cost of the inference process.
We will present work carried out over recent years on sequential adaptive algorithms that learn which information to acquire, and which computations to perform, depending on the input. These algorithms build on both reinforcement learning and representation learning techniques. In the first part of the seminar, we will present a general formalism for learning to acquire information under budget constraints, and we will focus on a recently proposed cost-sensitive predictive model based on recurrent neural networks. In the second part, we will show how the same ideas have been used to extend classical neural networks, giving them the ability to choose how to process a particular input. We will illustrate these models with applications on several datasets, including text and images.
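To make the sequential acquisition idea concrete, here is a minimal toy sketch, not the model presented in the seminar: a predictor pays a fixed cost for each feature it acquires, and stops either when its accumulated evidence is confident enough or when the acquisition budget is exhausted. All names and thresholds (`acquire_cost`, `budget`, `threshold`) are illustrative assumptions; the actual approach learns such a policy with reinforcement learning rather than using hand-set rules.

```python
def classify_adaptively(features, acquire_cost=1.0, budget=3.0, threshold=2.0):
    """Toy sequential decision process: acquire features one at a time,
    stopping when the running evidence is confident enough or when the
    acquisition budget is spent. Returns (label, features_used, cost_paid)."""
    spent = 0.0
    evidence = 0.0
    used = 0
    for x in features:
        # Stop if acquiring one more feature would exceed the budget,
        # or if the evidence already passes the confidence threshold.
        if spent + acquire_cost > budget or abs(evidence) >= threshold:
            break
        evidence += x          # accumulate signed evidence from this feature
        spent += acquire_cost  # pay the per-feature acquisition cost
        used += 1
    label = 1 if evidence >= 0 else 0
    return label, used, spent

# An "easy" input stops after one strong feature; a "hard" input with weak
# features spends the full budget before predicting.
easy = classify_adaptively([3.0, 1.0, 1.0])    # stops early, low cost
hard = classify_adaptively([0.5, 0.5, 0.5, 0.5])  # uses the whole budget
```

The point of the sketch is only the cost/accuracy trade-off: different inputs consume different amounts of computation, which is exactly what a fixed, monolithic pipeline cannot do.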