I’m trying to find a benchmark in which the highest level of interference can occur between tasks in the task-incremental scenario, such that experience replay may fail because samples from old and new tasks interfere maximally when used together to train the model. In one paper, the authors propose a benchmark called “Regression CIFAR-10,” in which each label is mapped to an angle between 0° and 360°, and the angles for subsequent tasks are chosen so as to cause either maximum or minimum interference:
AFEC: Active Forgetting of Negative Transfer in Continual Learning (arXiv:2110.12187)
This resembles a task-incremental setup in which concept drift occurs at every task boundary.
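To make the kind of interference I mean concrete, here is a minimal sketch of such an angle-regression target scheme. Note this is my own illustration, not the paper's exact protocol: the even class-to-angle spacing, the 180° per-task shift, and the function name `angle_targets` are all assumptions.

```python
import numpy as np

def angle_targets(labels, task_id, num_classes=10, interference="max"):
    """Map integer class labels to regression angles (degrees) for a task.

    Hypothetical sketch in the spirit of "Regression CIFAR-10":
    - "max": shift targets by 180 degrees per task, so the same class demands
      opposite predictions across tasks -- replayed and new samples then pull
      the regressor in conflicting directions.
    - "min": keep the mapping fixed, so all tasks agree on every target.
    """
    base_step = 360.0 / num_classes                # evenly spaced base angles (assumed)
    shift = 180.0 * task_id if interference == "max" else 0.0
    return (np.asarray(labels) * base_step + shift) % 360.0

# Example: class 3 in task 0 vs. task 1 under maximum interference.
t0 = angle_targets([3], task_id=0)   # -> [108.]
t1 = angle_targets([3], task_id=1)   # -> [288.]  (opposite direction on the circle)
```

Under "max", a replay buffer mixes samples whose targets for the same input distribution differ by 180°, which is exactly the worst case for a shared regression head.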
My first question is: do you think such benchmarks are a good way to study maximum interference in incremental scenarios in a controlled way? If not, where do you see the problem?
I would also appreciate pointers to any other benchmark that provides a way to study maximum negative transfer in incremental learning.