[ContinualAI Meetup] Continual Learning with Sequential Streaming Data

We are finally ready for the next ContinualAI meetup this Friday on "Continual Learning with Sequential Streaming Data"!

This will be a great chance to discuss the challenges & opportunities of learning continually from a temporally coherent, high-dimensional stream of data! For this meetup we will have four exceptional speakers:

  • @Qi (Roger) She (Intel Labs): “OpenLORIS-Object: A Robotic Vision Dataset and Benchmark for Lifelong Deep Learning”
  • @Gabriele Graffieti (University of Bologna): “Continual Learning Over Small non I.I.D Batches of Natural Video Streams”
  • @Tyler L. Hayes (Rochester Institute of Technology): “REMIND Your Neural Network to Prevent Catastrophic Forgetting”
  • @German I. Parisi (University of Hamburg): “Rethinking Continual Learning for Robotics”

Save the date on your calendar with the EventBrite event below :point_down:

:round_pushpin: EventBrite link: https://lnkd.in/evTehWT
:round_pushpin: Google Meet link: https://meet.google.com/wta-dvnd-rtx
:round_pushpin: YouTube video recording: https://www.youtube.com/watch?v=Qo2JKIDZz6w&feature=youtu.be


Hi everyone, I hope you enjoyed the meetup.
Here are the slides of my talk:

I'm also sharing the links to our two recent papers on the topic: in the first you can find more details about the CORe50 NICv2 benchmark and the AR1* strategy.

In the second you can find more details about the latent replay approach, along with additional experimental results.
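
If it helps, here is a very rough sketch of the core idea behind latent replay (a simplified illustration with hypothetical layer sizes and a naive unbounded buffer, not our actual implementation): activations of past samples are stored at an intermediate layer and concatenated with the current mini-batch's activations before being fed to the layers above that point.

```python
# Simplified latent-replay sketch (illustration only, hypothetical sizes).
import random
import torch
import torch.nn as nn

# Hypothetical split of a backbone: `lower` is kept frozen here,
# `upper` is trained on current + replayed latent activations.
lower = nn.Sequential(nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(), nn.Flatten())
upper = nn.Sequential(nn.Linear(16 * 32 * 32, 10))

replay_buffer = []  # stores (latent_activation, label) pairs; a real system would cap its size
criterion = nn.CrossEntropyLoss()
optimizer = torch.optim.SGD(upper.parameters(), lr=0.01)

def training_step(x, y, replay_size=32):
    with torch.no_grad():                     # lower layers stay fixed in this sketch
        latent = lower(x)
    # mix current latents with replayed ones from previous batches
    if replay_buffer:
        old = random.sample(replay_buffer, min(replay_size, len(replay_buffer)))
        old_latent = torch.stack([l for l, _ in old])
        old_y = torch.stack([t for _, t in old])
        latent_all = torch.cat([latent, old_latent])
        y_all = torch.cat([y, old_y])
    else:
        latent_all, y_all = latent, y
    loss = criterion(upper(latent_all), y_all)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    # keep some of the new latents around for future replay
    for l, t in zip(latent.detach(), y):
        replay_buffer.append((l, t))
    return loss.item()
```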

Have a nice day, and if you have any questions don’t hesitate to ask!


Hi everyone, thanks for having me speak at the meetup!

Here is a pre-print of our paper on arXiv:
https://arxiv.org/abs/1910.02509

Here are the slides from my talk:
https://github.com/tyler-hayes/tyler-hayes.github.io/blob/master/data/ContinualAI_Meetup_Hayes.pdf


Hi Tyler,

Thanks for the talk, it was very interesting! I particularly like the idea of keeping part of the network (G) frozen. For ImageNet, you mentioned that after training on the first 100 classes, G is frozen for the rest of the experiment. Suppose we then introduce classes from another dataset, e.g. SVHN (resized to match the input dimensions), or any new class whose features have essentially no overlap with the features G was trained on: will the plastic layers (F) have sufficient capacity to learn them, or would G need to be re-trained?

Hi @ssgkirito,

It depends on how much of the network G makes up. If G consists of only a few early layers, then it is certainly possible that F will have sufficient capacity to learn the new dataset.

It also depends on how general the features learned by G are. For example, many people successfully transfer pre-trained ImageNet weights to another dataset; in that case you could imagine G being the entire backbone of the network and F consisting only of the classifier, which has worked well in practice.

That said, I think the transferability of the features from G will depend on: 1) how many layers G consists of, and 2) how general the features learned in G are. Both of these depend on the task/dataset you are trying to learn incrementally.
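
To make the G/F split concrete, here is a rough sketch of what it might look like in PyTorch (a hypothetical split point on a torchvision ResNet-18, not our actual REMIND code, which also quantizes and replays the stored features):

```python
# Rough sketch of a frozen-G / plastic-F split (illustration only).
import torch
import torch.nn as nn
from torchvision import models

backbone = models.resnet18(pretrained=True)
layers = list(backbone.children())            # conv1, bn1, relu, maxpool, layer1..layer4, avgpool, fc

# Hypothetical split point: everything up to layer3 is G (frozen);
# layer4 + pooling + a fresh classifier form F (plastic).
G = nn.Sequential(*layers[:7])
F = nn.Sequential(*layers[7:9], nn.Flatten(), nn.Linear(512, 100))

for p in G.parameters():                      # freeze G for the rest of training
    p.requires_grad = False

optimizer = torch.optim.SGD(F.parameters(), lr=0.01)

def forward(x):
    with torch.no_grad():
        z = G(x)                              # fixed features from the frozen part
    return F(z)                               # only F's parameters receive gradients
```

Where you place the split is exactly the trade-off in your question: the more layers G includes, the less capacity F has left to adapt to data that looks very different from what G was trained on.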

Thanks for the reply!
