View in #questions-forum on Slack
@Irina_Rish: Dear all, can you please suggest good papers/surveys focusing on continual/lifelong RL in neuroscience? Thank you!
@vlomonaco: @Keiland, @Jeremy_Forest and @Subutai_Ahmad we need your help here!
@Subutai_Ahmad: @Irina_Rish It’s really interesting you ask that. The strange thing is that the concept of batch learning just doesn’t exist in biology. Everything in neuroscience is continuous learning! There are lots of studies on forgetting, memory interference, etc. but the core underlying assumption always is continuous learning.
Here’s a very good recent review of learning in neuroscience that covers a broad spectrum: https://www.annualreviews.org/doi/abs/10.1146/annurev-neuro-090919-022842
@Irina_Rish: @Subutai_Ahmad I agree with you at a high level, of course, but more specifically, in the context of the reinforcement learning literature on nonstationary environments: what biologically inspired ideas have been applied to this setting, are there surveys on that, etc.? Of course you may say that the whole of RL is already bio-inspired, but there is much more biology that can be modeled than what current RL does. For example, we worked on RL with positive vs. negative reward processing streams, which is more bio-inspired than classical Q-learning, and had a recent paper on it that just came out.
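(Editorial aside: the split-stream idea mentioned above can be sketched roughly as follows. This is not Irina's actual model — all names, the max/min combination rule, and the `w_pos`/`w_neg` weights are illustrative assumptions — just classical tabular Q-learning next to a hypothetical variant that keeps separate value streams for positive and negative rewards.)

```python
import numpy as np

n_states, n_actions = 5, 2
alpha, gamma = 0.1, 0.9  # learning rate, discount factor

# Classical Q-learning: one table, one TD update.
Q = np.zeros((n_states, n_actions))

def q_update(s, a, r, s_next):
    td_target = r + gamma * Q[s_next].max()
    Q[s, a] += alpha * (td_target - Q[s, a])

# Hypothetical split variant: Q_pos learns only from positive rewards,
# Q_neg only from negative ones, loosely analogous to separate
# appetitive/aversive processing streams. The weights let reward and
# punishment sensitivity differ when the two streams are recombined.
Q_pos = np.zeros((n_states, n_actions))
Q_neg = np.zeros((n_states, n_actions))
w_pos, w_neg = 1.0, 1.0  # assumed combination weights

def split_update(s, a, r, s_next):
    r_pos, r_neg = max(r, 0.0), min(r, 0.0)
    Q_pos[s, a] += alpha * (r_pos + gamma * Q_pos[s_next].max() - Q_pos[s, a])
    Q_neg[s, a] += alpha * (r_neg + gamma * Q_neg[s_next].max() - Q_neg[s, a])

def split_policy(s):
    # Act greedily on the weighted sum of both streams.
    return int(np.argmax(w_pos * Q_pos[s] + w_neg * Q_neg[s]))
```

How the streams are recombined (and whether each stream bootstraps from its own values or from the joint policy) is a design choice; the sketch uses the simplest option.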
@Subutai_Ahmad: @Irina_Rish Agreed, RL & neuroscience have a rich history, and those specifically are great questions. Ilana Witten gave a great talk at Cosyne a couple of years ago, focused on what we have learned from neuroscience that is different from RL in machine learning. She looked at dopamine projections in the striatum. One of her findings was that the error signals were much more nuanced than simple reward prediction error. She found representations of lots of different types of prediction error. She wasn’t specifically focused on non-stationary environments though.
I’ll see if I can track down some papers. Would be interested in any you find as well.
This paper contains some of what Ilana discussed: https://www.nature.com/articles/s41586-019-1261-9#citeas. It addresses the question “what have we learned from biology that is different from RL?” but probably not your question regarding non-stationary environments.
Nature: Specialized coding of sensory, motor and cognitive variables in VTA dopamine neurons
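(Editorial aside: for readers unfamiliar with the baseline being complicated above — the classical view is that dopamine carries a single scalar temporal-difference reward prediction error. A minimal sketch, with illustrative names, of that textbook quantity:)

```python
gamma = 0.9  # discount factor

def td_error(r, v_s, v_s_next):
    """Scalar TD reward prediction error: delta = r + gamma * V(s') - V(s).

    Positive when the outcome is better than predicted, negative when
    worse; the findings discussed above suggest the biological signal
    is far more heterogeneous than this single number.
    """
    return r + gamma * v_s_next - v_s
```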
@Keiland: Thanks for the wonderful question @Irina_Rish! @Subutai_Ahmad covered a lot of what would have been my initial response.
I’ll add by asking you to clarify what it is you’re looking for.
As Subutai mentioned, continual learning in AI is distinct from that in neuroscience, as the underlying assumption is that the brain is learning from continuous data. (That said, I think there are examples where this assumption and the experimental designs have been mismatched, which I’d be happy to get into.)
Frankly, the push for understanding CL in AI derives from the inability to replicate what the brain can do — primarily from catastrophic forgetting (CF) in neural nets, among other deficits. There are examples of people trying to directly study CF in neuroscience, though they are rare and were historically branded the “stability-plasticity problem”. Most of these are at the circuit and synapse level, and are cited in the EWC approach to overcoming CF, for instance.
Another example, at a higher level, and near and dear to me given it is my own lab working on it, is examining replay as a CL mechanism in the brain. Searching for “hippocampal ripples and replay” or the “complementary learning systems” theory will garner a lot of good results, as the latter is emerging as what might be the central dogma of systems memory consolidation in neuroscience. (Bias disclosure: my PI championed the theory, and I’m happy to refute a few of its points.)
I also highly recommend some of the citations in @German_I._Parisi’s wonderful paper on the subject: https://www.sciencedirect.com/science/article/pii/S0893608019300231
Lastly, my attention has also been towards CL, RL, and neuro, as I think RL is a natural way to reframe many neuroscience issues as CL. A good review I came across that may help is: https://www.sciencedirect.com/science/article/pii/S1878929319303202
I’m excited to talk more and hear what your thoughts are on all of this. I’d be happy to hear what you’re working on and what you’ve found also! Let’s keep this conversation going
Continual lifelong learning with neural networks: A review
Reinforcement learning across development: What insights can we draw from a decade of research?
@Jeremy_Forest: @Irina_Rish I think @Subutai_Ahmad and @Keiland were very thorough on this, so the only thing I’m going to add is that if you search the neuroscience literature on RL, don’t limit yourself to ‘RL’ as a key term. Look for ‘associative learning’, ‘instrumental learning’, or ‘conditioning’, for example. Historically, RL as a term doesn’t really exist in the neuroscience literature, but a lot of the behavioral tasks can be seen through an RL lens. Also, if you have more precise questions, don’t hesitate to ask. This is a very broad topic covering decades of neuroscience literature with differing foci: it ranges from the behavioral to the molecular level, covers different brain regions, and addresses anything between learning, memory, and forgetting, if not all three.
@Irina_Rish: Thanks to all who replied, your feedback is greatly appreciated!