This Friday (December 11, 2020) at 5:30pm CET, for the ContinualAI Reading Group, Arijit Patra (University of Oxford) will present the paper "Learn More, Forget Less: Cues from Human Brain".
Abstract: Humans learn new information incrementally while consolidating old information at every stage in a lifelong learning process. While this appears perfectly natural for humans, the same task has proven to be challenging for learning machines. Deep neural networks are still prone to catastrophic forgetting of previously learnt information when presented with information from a sufficiently new distribution. To address this problem, we present NeoNet, a simple yet effective method that is motivated by recent findings in computational neuroscience on the process of long-term memory consolidation in humans. The network relies on a pseudorehearsal strategy to model the working of relevant sections of the brain that are associated with long-term memory consolidation processes. Experiments on benchmark classification tasks achieve state-of-the-art results that demonstrate the potential of the proposed method, with improvements when adding novel information attained without requiring the storage of exemplars of past classes.
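The pseudorehearsal idea mentioned in the abstract can be illustrated with a toy example: rather than storing past exemplars, random pseudo-inputs are labelled with the old model's own outputs and mixed into training on the new data, so the updated model rehearses the old input-output mapping. This is a minimal generic sketch of pseudorehearsal on a linear model, not the authors' NeoNet method; all data and dimensions here are made up for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy "old" model: a fixed linear map standing in for the network
# trained on past classes (hypothetical, for illustration only).
W_old = rng.normal(size=(4, 2))

def predict(W, X):
    return X @ W  # raw scores

# Pseudorehearsal: draw random pseudo-inputs and label them with the
# OLD model's outputs -- these targets encode the old knowledge
# without storing any real past exemplars.
pseudo_X = rng.normal(size=(32, 4))
pseudo_y = predict(W_old, pseudo_X)

# Hypothetical new-task data the model must also fit.
new_X = rng.normal(size=(32, 4))
new_y = rng.normal(size=(32, 2))

# Fit an updated model on the mixture of new data and pseudo-items,
# so learning the new task also rehearses the old behaviour.
X = np.vstack([new_X, pseudo_X])
Y = np.vstack([new_y, pseudo_y])
W_new, *_ = np.linalg.lstsq(X, Y, rcond=None)

# How far the updated model drifts from the old model on the
# rehearsed pseudo-items (lower = less forgetting of old behaviour).
drift = np.abs(predict(W_new, pseudo_X) - pseudo_y).mean()
print(f"mean drift on rehearsed pseudo-items: {drift:.3f}")
```

The key design point is that the rehearsal set is generated from the model itself, so memory of past classes is preserved without an exemplar buffer.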
The event will be moderated by: Vincenzo Lomonaco.
Eventbrite Event (to save it on your calendar): https://www.eventbrite.it/e/learn-more-forget-less-cues-from-human-brain-tickets-132084924565
Microsoft Teams link: click here to join
YouTube recording: https://www.youtube.com/watch?v=gMHPjdxBfjo