View in #questions-forum on Slack
@Tobias_Kalb: Hi everyone! I am currently investigating how a neural network changes during incremental learning, in order to understand where different approaches might suffer from forgetting.
Therefore, I am looking into existing work that investigates the similarity of NNs, either at the representation level or in weight space.
I am currently aware of the following methods based on the learned representations:
• Explaining How Deep Neural Networks Forget by Deep Visualization: https://arxiv.org/abs/2005.01004
• I am also currently using CKA as a similarity measure, as proposed here: https://arxiv.org/abs/1905.00414, to check the similarity of the model's representations at each step.
Do you know of any other methods related to this that I am not yet aware of? Maybe even ones that look at similarity in parameter space?
I am looking forward to your suggestions! :man_bowing:
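(For anyone curious what the CKA measure from the Kornblith et al. paper above looks like in practice, here is a minimal sketch of the linear variant; the function name and shapes are my own choices, not from the paper's code.)

```python
import numpy as np

def linear_cka(X, Y):
    """Linear Centered Kernel Alignment between two representation
    matrices of shape (n_examples, n_features). Returns a value in
    [0, 1]; 1 means identical up to orthogonal transform + scaling."""
    # Center each feature dimension
    X = X - X.mean(axis=0, keepdims=True)
    Y = Y - Y.mean(axis=0, keepdims=True)
    # HSIC-style numerator and normalization, linear-kernel form
    hsic = np.linalg.norm(Y.T @ X, ord="fro") ** 2
    norm_x = np.linalg.norm(X.T @ X, ord="fro")
    norm_y = np.linalg.norm(Y.T @ Y, ord="fro")
    return hsic / (norm_x * norm_y)
```

In an incremental-learning setting you would feed it activations of the same probe batch from the model before and after a task, one layer at a time.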
@Martin_Mundt: Hey @Tobias_Kalb ,
that’s quite an interesting research direction!
We are about to publish something ourselves in this direction (don't worry, it has a different flavor for sure: we are focusing more on the commonalities in learning rather than on forgetting), so I can't share it yet, but feel free to ping me if you want to discuss.
But I can link some references (none of which are mine) that should be of interest to you:
• Canonical correlation analysis for representation similarity in NNs: https://papers.nips.cc/paper/2018/file/a7a3d70c6d17a73140918996d03c014f-Paper.pdf
• Convergent Learning: Do different NNs learn the same representations: https://arxiv.org/abs/1511.07543
• Critical Learning Periods in DNNs: https://arxiv.org/abs/1711.08856
• Towards understanding learning representations: To what extent do different neural networks learn the same representation: https://arxiv.org/abs/1810.11750
As you can see from the questions these papers ask (do NNs learn the same representations?), all of them offer techniques for exactly the kind of analysis you are interested in.
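(A rough sketch of the CCA-style comparison that the first link builds on, for intuition; this is my own minimal version, computing mean canonical correlation between two layers' activation matrices, not the papers' reference code.)

```python
import numpy as np

def mean_cca(X, Y, eps=1e-6):
    """Mean canonical correlation between two representation matrices
    X, Y of shape (n_examples, n_features). Invariant to invertible
    linear transforms of either representation."""
    # Center each feature dimension
    X = X - X.mean(axis=0)
    Y = Y - Y.mean(axis=0)
    # Orthonormal bases of the column spaces, dropping near-zero directions
    Ux, Sx, _ = np.linalg.svd(X, full_matrices=False)
    Uy, Sy, _ = np.linalg.svd(Y, full_matrices=False)
    Ux = Ux[:, Sx > eps * Sx.max()]
    Uy = Uy[:, Sy > eps * Sy.max()]
    # Canonical correlations are the singular values of Ux^T Uy
    rho = np.linalg.svd(Ux.T @ Uy, compute_uv=False)
    return rho.mean()
```

The SVCCA paper refines this by first truncating each representation to the singular directions that explain most of the variance before computing the correlations.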
@Tobias_Kalb: Perfect, thank you Martin! This is exactly what I was looking for. I am also really looking forward to your publication on the topic! I will definitely get back to you on the offer to discuss once I am a little deeper into the topic :grinning:
@Aish: Joining the party a little late, but there’s also this work that might be of interest
arXiv.org: Visualizing the PHATE of Neural Networks
@Tobias_Kalb: Thank you Aish! This looks promising, I will definitely check it out as well.
@vlomonaco: this is a very interesting thread, thanks @Tobias_Kalb for starting the discussion