[ContinualAI Reading Group] Continual Learning from the Perspective of Compression

[02-10-2020] This Friday at 5pm CEST, for the ContinualAI Reading Group, Min Lin will present the paper:

Title: “Continual Learning from the Perspective of Compression”

Abstract: Connectionist models such as neural networks suffer from catastrophic forgetting. In this work, we study this problem from the perspective of information theory and define forgetting as the increase of description lengths of previous data when they are compressed with a sequentially learned model. In addition, we show that continual learning approaches based on variational posterior approximation and generative replay can be considered as approximations to two prequential coding methods in compression, namely, the Bayesian mixture code and maximum likelihood (ML) plug-in code. We compare these approaches in terms of both compression and forgetting and empirically study the reasons that limit the performance of continual learning methods based on variational posterior approximation. To address these limitations, we propose a new continual learning method that combines ML plug-in and Bayesian mixture codes.
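For context ahead of the talk, here is a minimal sketch of the definition in the abstract: forgetting measured as the increase in description length (in bits) of previously seen data once the model has been updated on new data. This is an illustrative reading of the definition, not code from the paper; the `model.log_prob(x)` interface is a hypothetical placeholder.

```python
import math

def description_length(model, data):
    """Code length of `data` in bits under `model`: -log2 p(data | model).
    Assumes a hypothetical `model.log_prob(x)` returning ln p(x | model)."""
    return -sum(model.log_prob(x) for x in data) / math.log(2)

def forgetting(model_old, model_new, previous_data):
    """Forgetting as defined in the abstract: the extra bits needed to
    compress previously seen data after the model has been sequentially
    trained on new data."""
    return (description_length(model_new, previous_data)
            - description_length(model_old, previous_data))
```

A positive value means the updated model compresses the old data worse than the earlier model did, i.e., it has forgotten.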

The event will be moderated by: Vincenzo Lomonaco

:round_pushpin: Eventbrite event (to save it in your calendar and get reminders): https://www.eventbrite.com/e/continualai-rg-continual-learning-from-the-perspective-of-compression-tickets-124522416901
:round_pushpin: Google Meet Link: https://meet.google.com/sof-dbec-tpi
:round_pushpin: YouTube Recording: https://youtu.be/_LGU5MBjJAQ