[ContinualAI Reading Group]: "Pseudo Rehearsal Using Non-Photo-Realistic Images"

[May 1st, 2020] ContinualAI Online Meetup: Pseudo Rehearsal Using Non-Photo-Realistic Images

Abstract: Deep neural networks forget previously learnt tasks when they are trained on new ones, a phenomenon known as catastrophic forgetting. Rehearsing the network on the training data of the previous task can protect it from catastrophic forgetting, but this requires storing the entire previous dataset. Pseudo-rehearsal was therefore proposed, in which samples resembling the previous data are generated synthetically for rehearsal. In an image classification setting, while current techniques try to generate synthetic data that is photo-realistic, we demonstrated that neural networks can be rehearsed on data that is not photo-realistic and still achieve good retention of the previous task. We also demonstrated that forgoing the constraint of photo-realism in the generated data can significantly reduce the computational and memory resources consumed by pseudo-rehearsal.
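
For readers who want the mechanics, here is a minimal sketch of generic pseudo-rehearsal in PyTorch. This illustrates the general idea only, not the exact pipeline from the paper; `generator`, `old_model`, the soft-labelling scheme, and all shapes and hyperparameters are assumptions for the example.

```python
# Minimal pseudo-rehearsal sketch (illustrative; not the paper's exact pipeline).
# Assumptions: `old_model` is a frozen copy of the network trained on the
# previous task, `generator` maps latent noise to synthetic images (which
# need not be photo-realistic), and `model` is being trained on the new task.
import torch
import torch.nn.functional as F

def pseudo_rehearsal_step(model, old_model, generator, optimizer,
                          new_x, new_y, n_pseudo=64, latent_dim=100):
    model.train()
    old_model.eval()
    with torch.no_grad():
        # Draw synthetic samples from the generator.
        z = torch.randn(n_pseudo, latent_dim)
        pseudo_x = generator(z)
        # Label them with the frozen previous-task network (soft targets).
        pseudo_y = old_model(pseudo_x).softmax(dim=1)

    optimizer.zero_grad()
    # Loss on the new task's real data.
    loss_new = F.cross_entropy(model(new_x), new_y)
    # Rehearsal loss: keep the current model close to the old model's
    # behaviour on the pseudo-samples.
    loss_old = F.kl_div(model(pseudo_x).log_softmax(dim=1),
                        pseudo_y, reduction="batchmean")
    (loss_new + loss_old).backward()
    optimizer.step()
    return loss_new.item(), loss_old.item()
```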

The speaker for this reading group was:

Suri Bhasker Sri Harsha, Indian Institute of Technology Tirupati

:round_pushpin: You can find the recording of this meeting on YouTube: https://www.youtube.com/watch?v=SH7IgdiH1FE
:round_pushpin: Paper pre-print: https://arxiv.org/abs/2004.13414

A clarification: is there an assumption that the weights preceding the latent space (i.e., the encoder) are frozen, so that the initial feature layers do not change with incoming new tasks? If the encoder weights change, is it not likely that the separating hyperplane also changes?

Hello ssgkirito. I am not sure I understand your question. :thinking: Could you please elaborate so that I can answer it better? :smiley: Are you referring to the "Solver" neural network as the encoder in your question?

Hi @BhaskerSriHarsha, nice work!

I am referring to the classifier network that ultimately distinguishes the different classes using the decision boundary. When a new task comes in, the classifier network will have to learn the corresponding features before making the decision, correct? If those feature weights (e.g., the CNN layers) change for new tasks, will the decision boundary still be the same? Thanks!

Thanks. :smiley:

No, when the classifier learns the new task it will definitely change its decision boundary.

However, if we perform some kind of pseudo-rehearsal, the decision boundary will change in such a manner that the neural network can show high performance on both the previous and the new task. I hope that answers your question.
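
To make that intuition concrete, a generic way to write the rehearsal objective (not necessarily the exact loss used in the paper) is:

$$\mathcal{L}(\theta) = \mathcal{L}_{\text{new}}(\theta) + \lambda \, \mathcal{L}_{\text{pseudo}}(\theta)$$

where $\mathcal{L}_{\text{new}}$ is the loss on the new task's data, $\mathcal{L}_{\text{pseudo}}$ is the loss on the generated pseudo-samples labelled by the frozen previous-task network, and $\lambda$ balances the two terms. Minimizing them jointly pulls the decision boundary toward a region that classifies both the old and the new task well.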

I am always open to more questions. :smile: Feel free to drop any.