Online Continual Learning

I have been trying to compile a list of Online Continual Learning papers. Here it is:

  • Online Continual Learning with Maximally Interfered Retrieval, NeurIPS 19
  • Gradient based sample selection for online continual learning, NeurIPS 19
  • Efficient Lifelong Learning with A-GEM, ICLR 19
  • Gradient Episodic Memory for Continual Learning (GEM), NeurIPS 17

Is there some other work that I have missed?

  • S.-A. Rebuffi, A. Kolesnikov, G. Sperl, and C. H. Lampert, “iCaRL: Incremental Classifier and Representation Learning,” in The IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2017.
  • R. Aljundi, K. Kelchtermans, and T. Tuytelaars, “Task-Free Continual Learning,” in The IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2019.

Hi @andrea, thank you for your reply.

Is iCaRL online? It takes multiple passes through the data, right?

Hi @Joseph! I think you need to clarify what you mean by Online CL, since there are multiple interpretations of it! :slight_smile:


Hi @vincenzo.lomonaco, by online, I mean that each datapoint is seen only once by the model, and that datapoints arrive in a continual fashion.

Is this an acceptable definition of online continual learning? What other alternatives are there?


Does your definition admit the possibility of storing a fixed buffer of old patterns?

I think it’s okay to have a buffer to rehearse from… I would love to hear whether the community agrees with that. To be concrete about what I mean by a buffer, see the sketch below.
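Here is a minimal sketch of a fixed-size exemplar store filled by reservoir sampling over the stream (purely illustrative, not taken from any of the papers above):

```python
import random

class ReservoirBuffer:
    """Fixed-size exemplar store filled by reservoir sampling over the stream."""

    def __init__(self, capacity):
        self.capacity = capacity
        self.data = []      # stored (x, y) pairs
        self.n_seen = 0     # number of stream items observed so far

    def add(self, x, y):
        self.n_seen += 1
        if len(self.data) < self.capacity:
            self.data.append((x, y))
        else:
            # Each item seen so far remains in the buffer with probability capacity / n_seen.
            idx = random.randrange(self.n_seen)
            if idx < self.capacity:
                self.data[idx] = (x, y)

    def sample(self, k):
        # Rehearsal batch: uniform sample (without replacement) from the store.
        return random.sample(self.data, min(k, len(self.data)))
```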


I agree about the buffer. Even if replay is not elegant, it works super well. The question is, will it scale?

Gonna shamelessly plug some other Online CL papers of mine now :slight_smile:

Online Learned Continual Compression with Stacked Quantization Module

Online Fast Adaptation and Knowledge Accumulation: a New Approach to Continual Learning

I’m sure you can find more on my repo with a quick search for “online”.


Thanks, Massimo! @optimass

Would this be an agreeable definition of an Online Continual Learning algorithm:
“An algorithm that learns from a stream of datapoints drawn from progressively changing data distributions. A datapoint can be seen only once by the learner, except for a small set of points kept in an exemplar store.”

That seems sensible.

Personally, I think an Online method should be able to make predictions on-the-fly as well, i.e. these methods should make a prediction before the next datapoint comes in.
However, I think the exemplar storing is orthogonal. If you can store as much data as you want, why not? As long as you spend your compute wisely.
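To make the “predict before the next datapoint comes in” part concrete, here is a rough sketch of the loop I have in mind (the `model.predict` / `model.update` interface and the buffer are illustrative placeholders, not anyone’s actual API):

```python
def online_continual_loop(stream, model, buffer, rehearsal_size=10):
    """Single pass over the stream: predict on each datapoint before training on it."""
    correct, total = 0, 0
    for x, y in stream:
        # 1) Predict on-the-fly, before the label is used for any update.
        y_hat = model.predict(x)           # assumed model interface
        correct += int(y_hat == y)
        total += 1

        # 2) Update on the incoming datapoint plus a small rehearsal batch.
        batch = [(x, y)] + buffer.sample(rehearsal_size)  # assumed buffer interface
        model.update(batch)

        # 3) Decide whether to keep the datapoint in the exemplar store.
        buffer.add(x, y)

    # Online (prequential) accuracy over the whole stream.
    return correct / max(total, 1)
```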


Recently, @vincenzo.lomonaco and I discussed definitions and approaches for Online Continual Learning in a book chapter, with particular focus on the processing of sequential input:

Online Continual Learning on Sequences: https://arxiv.org/abs/2003.09114


Thank you @optimass @giparisiCL!

If your paper is actually a book chapter, maybe you can add the missing references mentioned in this post :stuck_out_tongue:

@optimass The chapter specifically focuses on approaches that address online continual learning on sequential data, in contrast to most papers that keep using MNIST (and artificially “sequential” datasets generated from it) and CIFAR. It’s not a review paper, so I’m sure many interesting approaches didn’t make it, but we will for sure keep them in mind for future papers. Thanks much!

I see.
I’m curious: why shouldn’t we work on MNIST and CIFAR-10 if they are far from solved?
It seems like the fastest way to iterate on ideas.

(Also, there are ImageNet experiments in my three papers mentioned here. I’m not asking for citations; it just seems like your argument is not quite valid.)

@optimass I didn’t say we shouldn’t work on them at all. We are interested in approaches that do assume input streams are temporally correlated, which is also the case in nature and thus makes the problem interesting in multiple fields and real-world scenarios. If you’re interested in the shortcomings of working ONLY with static datasets (e.g. MNIST, CIFAR, ImageNet), there are a few interesting papers that discuss that (I will post some links in the next few days). However, I didn’t say that the papers you referenced above are all necessarily subject to this limitation. Unfortunately (or fortunately), papers on continual learning pop up so fast that it is hard to catch up and cite all of the relevant ones.


Yes, I agree.
I try not to play citation police!
(This case felt different because it’s a book chapter, but I understand now that the setting is different from that of the other papers.)