[23-10-2020] This Friday 5.30 PM CEST, for the ContinualAI Reading Group, Ahmet Iscen will present the paper "Memory-Efficient Incremental Learning Through Feature Adaptation":
Abstract: We introduce an approach for incremental learning that preserves feature descriptors of training images from previously learned classes, instead of the images themselves, unlike most existing work. Keeping the much lower-dimensional feature embeddings of images reduces the memory footprint significantly. We assume that the model is updated incrementally for new classes as new data becomes available sequentially. This requires adapting the previously stored feature vectors to the updated feature space without having access to the corresponding original training images. Feature adaptation is learned with a multi-layer perceptron, which is trained on feature pairs corresponding to the outputs of the original and updated network on a training image. We validate experimentally that such a transformation generalizes well to the features of the previous set of classes, and maps features to a discriminative subspace in the feature space. As a result, the classifier is optimized jointly over new and old classes without requiring old class images. Experimental results show that our method achieves state-of-the-art classification accuracy in incremental learning benchmarks, while having at least an order of magnitude lower memory footprint compared to image-preserving strategies.
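To make the feature-adaptation idea from the abstract concrete, here is a minimal, hypothetical sketch: an MLP is trained on pairs of features (the outputs of the original and the updated network on the same training images), and can then be applied to stored old-class descriptors without revisiting the images. The dimensions, the single-hidden-layer architecture, and the synthetic "feature drift" below are illustrative assumptions, not the paper's exact setup.

```python
import numpy as np

rng = np.random.default_rng(0)
d_old, d_new, hidden, n = 16, 16, 32, 256

# Stand-ins for feature pairs: a fixed random linear map plus noise plays
# the role of the drift between the original and updated feature spaces.
true_map = rng.normal(size=(d_old, d_new)) / np.sqrt(d_old)
feats_old = rng.normal(size=(n, d_old))            # original-network features
feats_new = feats_old @ true_map + 0.01 * rng.normal(size=(n, d_new))

# One-hidden-layer MLP with ReLU, trained by plain gradient descent on MSE.
W1 = rng.normal(size=(d_old, hidden)) * 0.1
b1 = np.zeros(hidden)
W2 = rng.normal(size=(hidden, d_new)) * 0.1
b2 = np.zeros(d_new)

def forward(x):
    h = np.maximum(x @ W1 + b1, 0.0)
    return h, h @ W2 + b2

lr, losses = 0.05, []
for step in range(500):
    h, pred = forward(feats_old)
    err = pred - feats_new
    losses.append(float((err ** 2).mean()))
    # Backpropagate through both layers.
    g2 = 2 * err / (n * d_new)
    gW2, gb2 = h.T @ g2, g2.sum(0)
    gh = g2 @ W2.T
    gh[h <= 0] = 0.0
    gW1, gb1 = feats_old.T @ gh, gh.sum(0)
    W1 -= lr * gW1; b1 -= lr * gb1
    W2 -= lr * gW2; b2 -= lr * gb2

# The trained MLP maps stored old-class descriptors into the updated
# feature space, so a classifier can be trained jointly on old and new
# classes without keeping the old images.
_, adapted = forward(feats_old)
print(f"MSE: {losses[0]:.4f} -> {losses[-1]:.4f}")
```

Since only the low-dimensional descriptors (plus the small adaptation MLP) are stored, the memory cost is far below that of replaying raw images.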
The event will be moderated by: Vincenzo Lomonaco
Please note that we are moving to Microsoft Teams as Conference Tool. You’ll be able to join via browser as well.
Eventbrite event (to save it in your calendar and get reminders): https://www.eventbrite.com/e/memory-efficient-incremental-learning-through-feature-adaptation-tickets-126047851517
Microsoft Teams Link: click here