View in #questions-forum on Slack
@Markus: Hey all, I am looking for some online learning approaches (rehearsal-free) to compare with on iCIFAR-100. The only paper I can find is the one by you @vlomonaco (Continuous Learning in Single-Incremental-Task Scenarios). There you also compare to other approaches on iCIFAR-100 (LwF, EWC, SI, etc.). Did you implement all these approaches yourselves, or where did you get the different results from? And do you also have more detailed results? I would be very interested in the details behind figure 8. And if any of you knows some online learning papers that also evaluated on iCIFAR-100, I would be very thankful.
@andcos: In my experience, if you try with standard regularization approaches alone like EWC and SI, you will get 0 accuracy on all previous tasks in a split scenario like Split MNIST/CIFAR. The only way out is to use the multi-head approach by providing task labels at both training and test time. However, task labels at test time are a very unrealistic assumption for CL, and I would advise you against it, unless you are interested in studying a very specific behavior in that setting. If you want to learn without rehearsal, you can choose between architectural strategies or a mix of existing strategies. There are also sparse approaches and Bayesian CL, which you may want to look into. It really depends on your objective, though.
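To make the multi-head protocol concrete, here is a minimal sketch (plain Python, no ML framework) of the evaluation setting described above: one output head per task, with the task label routing each test example to its own head. The class and variable names are hypothetical illustrations, not code from any of the papers discussed.

```python
class MultiHeadModel:
    """Toy stand-in for a shared backbone plus one classifier head per task.

    In a real continual-learning setup the backbone weights would be shared
    and updated across tasks; here each head simply stores its own
    input-to-label mapping, which is enough to illustrate the protocol.
    """

    def __init__(self):
        self.heads = {}  # task_id -> per-task classifier state

    def train_task(self, task_id, data):
        # Train (here: memorise) a dedicated head for this task.
        self.heads[task_id] = {x: y for x, y in data}

    def predict(self, task_id, x):
        # The task label at TEST time selects the head -- this is the
        # unrealistic assumption discussed above. Without it, the model
        # would have to choose among all heads/classes on its own
        # (the much harder class-incremental setting).
        return self.heads[task_id].get(x)


# Two "tasks" with disjoint classes, as in Split MNIST/CIFAR:
task0 = [(0, "cat"), (1, "dog")]
task1 = [(0, "car"), (1, "truck")]  # same raw inputs, different classes

model = MultiHeadModel()
model.train_task(0, task0)
model.train_task(1, task1)

# Given the task label, earlier tasks are still answered correctly,
# because each head is untouched by later training:
print(model.predict(0, 0))  # -> "cat"
print(model.predict(1, 0))  # -> "car"
```

The sketch only shows why task labels at test time sidestep forgetting: each head is isolated, so nothing overwrites it. Dropping the `task_id` argument at test time is exactly what makes the single-incremental-task / class-incremental setting hard.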
@Markus: Hey @andcos, thank you for your answer. I am aware of that, and I already implemented a rehearsal-free approach. The next step is to compare it with others. I started with the CORe50 dataset, and now I am looking for other approaches that used iCIFAR-100 to compare with them.
@vlomonaco: you may take a look also at this follow up paper: https://arxiv.org/pdf/1907.03799.pdf
@Markus: Hey @vlomonaco. Yes thanks, I know this one, but there you did not compare on iCIFAR-100.
@vlomonaco: Yes, that’s true… you should move to CORe50, it is more flexible and natural than CIFAR
@Markus: I am already using CORe50, but I need more than one dataset to evaluate my approach
@vlomonaco: ahah I see… internally we decided to move away from MNIST and CIFAR as much as possible… other possible choices:
• Stream-51
• OpenLoris
• iCub-Transformation
@Markus: @vlomonaco do you have the tabular results of figure 8 of your paper “Continuous Learning in Single-Incremental-Task Scenarios”?
Good to know. Okay, I will have a look at those datasets. Thanks.
@vlomonaco: mmh… I don’t think I have them…
I will try to find them later on!
@Markus: Thanks! 