[ContinualAI Reading Group]: “Explaining How Deep Neural Networks Forget by Deep Visualization”

[June 5th, 2020] ContinualAI Reading Group: “Explaining How Deep Neural Networks Forget by Deep Visualization”

Abstract: Explaining the behaviors of deep neural networks, usually considered black boxes, is critical, especially now that they are being adopted across diverse aspects of human life. Taking advantage of interpretable machine learning (interpretable ML), this paper proposes a novel tool called the Catastrophic Forgetting Dissector (CFD) to explain catastrophic forgetting in continual learning settings. We also introduce a new method called Critical Freezing based on the observations of our tool. Experiments on ResNet articulate how catastrophic forgetting happens, particularly showing which components of this well-known network are forgetting. Our new continual learning algorithm outperforms various recent techniques by a significant margin, demonstrating the value of the investigation. Critical Freezing not only mitigates catastrophic forgetting but also provides explainability.
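
For readers curious what Critical Freezing amounts to in practice, here is a minimal PyTorch sketch of the general idea: freeze the network blocks identified as critical to the old task and fine-tune the rest on the new task. This is not the authors' implementation (see the repository linked below for that); the block names and the choice of which blocks to freeze are placeholder assumptions for illustration.

```python
# Minimal sketch: freeze assumed "critical" blocks of a ResNet before
# continuing training on a new task. Not the authors' code.
import torch
import torch.nn as nn
from torchvision import models


def critical_freezing(model: nn.Module, critical_blocks: list) -> nn.Module:
    """Freeze parameters belonging to the given (assumed) critical blocks;
    leave all other parameters trainable."""
    for name, param in model.named_parameters():
        # A parameter is frozen if its name starts with any critical block prefix.
        param.requires_grad = not any(name.startswith(block) for block in critical_blocks)
    return model


if __name__ == "__main__":
    resnet = models.resnet18(weights=None)  # stand-in backbone
    # Suppose a dissection step flagged the early blocks as critical for the old task.
    resnet = critical_freezing(resnet, critical_blocks=["conv1", "bn1", "layer1", "layer2"])

    # Only the unfrozen parameters are handed to the optimizer for the new task.
    trainable = [p for p in resnet.parameters() if p.requires_grad]
    optimizer = torch.optim.SGD(trainable, lr=0.01, momentum=0.9)
    print(f"Trainable tensors: {len(trainable)} / {len(list(resnet.parameters()))}")
```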

The speaker for this reading group was:

Giang Nguyen

:round_pushpin: Youtube recording: https://www.youtube.com/watch?v=4cqyKoIPa8Q
:round_pushpin: Paper pre-print: https://arxiv.org/abs/2005.01004
:round_pushpin: Slides: https://drive.google.com/file/d/1a0dSuqgoTZl3ezZwdT_yjJ2cO_qD31po/view?usp=sharing


You can see the slides here:


Thanks @luulinh90s! :smiley: And welcome to the forum!


We will release the code soon here: https://github.com/luulinh90s/CFD
Stay tuned for updates!