[ContinualAI Reading Group] Efficient Continual Learning with Modular Networks and Task-Driven Priors

This Friday, January 15, 2021 at 5:30pm CET, for the ContinualAI Reading Group, Tom Veniat (Sorbonne Université) will present the paper:

Title: Efficient Continual Learning with Modular Networks and Task-Driven Priors

Abstract: Existing literature in Continual Learning (CL) has focused on overcoming catastrophic forgetting, the inability of the learner to recall how to perform tasks observed in the past. There are however other desirable properties of a CL system, such as the ability to transfer knowledge from previous tasks and to scale memory and compute sub-linearly with the number of tasks. Since most current benchmarks focus only on forgetting using short streams of tasks, we first propose a new suite of benchmarks to probe CL algorithms across these new axes. Finally, we introduce a new modular architecture, whose modules represent atomic skills that can be composed to perform a certain task. Learning a task reduces to figuring out which past modules to re-use, and which new modules to instantiate to solve the current task. Our learning algorithm leverages a task-driven prior over the exponential search space of all possible ways to combine modules, enabling efficient learning on long streams of tasks. Our experiments show that this modular architecture and learning algorithm perform competitively on widely used CL benchmarks while yielding superior performance on the more challenging benchmarks we introduce in this work.

The event will be moderated by: Vincenzo Lomonaco

:pushpin: Eventbrite Event (to save it on your calendar): Efficient Continual Learning with Modular Networks and Task-Driven Priors Tickets, Fri, Jan 15, 2021 at 5:30 PM | Eventbrite
:pushpin: Microsoft Teams link: click here to join
:pushpin: YouTube Recording: https://youtu.be/C-st5PH0ayw
:pushpin: Slides: Slides_ContinualAI_RG-CTrL_MNTDP.pdf - Google Drive