View in #questions-forum on Slack
@Yang: Hi guys, it seems that Average Incremental Accuracy is a widely used metric in class incremental learning problems, but I’m a little bit confused about what it really means… Does it mean: 1) average accuracy of all seen classes at the current task; or 2) the average of the average accuracy over all tasks?
@andcos: Hi @Yang! TL;DR: 2) I suggest you take a look at the GEM paper, where the authors define the Average Accuracy (I think you are referring to this metric). Basically, at the end of training on all tasks (or experiences, as we call them in our CL framework Avalanche), you measure the average accuracy over all classes encountered in all tasks.
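A minimal sketch of the GEM-style Average Accuracy, assuming you keep an accuracy matrix where `acc[i][j]` is the accuracy on task `j` measured after training on task `i` (the numbers below are made up for illustration):

```python
# Hypothetical accuracy matrix: acc[i][j] = accuracy on task j's test set
# after finishing training on task i. Values are illustrative only.
acc = [
    [0.95, 0.00, 0.00],
    [0.80, 0.92, 0.00],
    [0.70, 0.85, 0.90],
]

T = len(acc)
# GEM-style Average Accuracy: mean accuracy over all tasks,
# evaluated once, after training on the final task.
avg_accuracy = sum(acc[T - 1][j] for j in range(T)) / T
print(avg_accuracy)
```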
arXiv.org: Gradient Episodic Memory for Continual Learning
@Adrian_Popescu: Hi @Yang
See also https://arxiv.org/pdf/1807.09536.pdf
for the class-incremental learning scenario.
The measure defined there is the one you describe in (2): the average of average accuracies per incremental state. If the number of classes per incremental state is the same in all states, (1) and (2) are equivalent.
Note that this measure discards the first state of the process since it is not considered incremental.
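A minimal sketch of definition (2), assuming you have already computed the per-state averages (the numbers are made up for illustration):

```python
# per_state_acc[i] = average accuracy over all classes seen so far,
# measured at the end of incremental state i. Hypothetical values.
per_state_acc = [0.95, 0.86, 0.81, 0.78]

# Average incremental accuracy: average of the per-state averages,
# discarding the first state, which is not considered incremental.
incremental_states = per_state_acc[1:]
avg_incremental_acc = sum(incremental_states) / len(incremental_states)
print(avg_incremental_acc)
```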
@Arthur_Douillard: @andcos If I may be pedantic, I see a lot of people citing GEM for that metric, but it was actually introduced earlier by iCaRL: https://arxiv.org/abs/1611.07725
arXiv.org: iCaRL: Incremental Classifier and Representation Learning
@andcos: You can, and you should! I prefer GEM because they provide a formal view of the metric, placing it in a more homogeneous “framework”. But you are completely right: iCaRL introduced the ACC (incremental accuracy) first, preceding GEM by a few months.