I hope this finds you well.
I am approaching the CL framework implemented in Avalanche.
However, I am not totally sure about many aspects and details of my experimental setup.
I have a regression problem with, say, 1 input and 5 outputs. The patterns clearly undergo a change in distribution (both in the input and the output), and I want to shape the problem as a multi-task problem, differentiating the tasks/kinds of experiences that I have identified in the dataset.
Let's say I have 100 and 200 entries for 2 different experiences (all with 1 input and 5 outputs) and want to use 50% of each as a test set.
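For context, the 50% split per experience looks like this (a minimal self-contained sketch; `split_half` is just an illustrative helper name):

```python
import torch

def split_half(x, y, seed=0):
    """Randomly split (x, y) into two equal halves (train/test)."""
    g = torch.Generator().manual_seed(seed)
    perm = torch.randperm(len(x), generator=g)
    half = len(x) // 2
    return (x[perm[:half]], y[perm[:half]]), (x[perm[half:]], y[perm[half:]])

# experience 1: 100 samples, 1 input, 5 outputs
x = torch.rand(100, 1)
y = torch.rand(100, 5)
(train_x, train_y), (test_x, test_y) = split_half(x, y)
print(train_x.shape, test_y.shape)  # torch.Size([50, 1]) torch.Size([50, 5])
```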
1. Is the following code for the generic CL scenario semantically correct?
Dummy initialization of the tensors:

```python
x_train_1e, x_test_1e = torch.Tensor(np.random.rand((1 + 5) * 100)).reshape(2, 50, 6)
y_train_1e, y_test_1e = x_train_1e[:, 1:], x_test_1e[:, 1:]
x_train_1e, x_test_1e = x_train_1e[:, :1], x_test_1e[:, :1]

x_train_2e, x_test_2e = torch.Tensor(np.random.rand((1 + 5) * 200)).reshape(2, 100, 6)
y_train_2e, y_test_2e = x_train_2e[:, 1:], x_test_2e[:, 1:]
x_train_2e, x_test_2e = x_train_2e[:, :1], x_test_2e[:, :1]
```
The initialization of the scenario is:

```python
X_train = [(x_train_1e, y_train_1e), (x_train_2e, y_train_2e)]
X_test = [(x_test_1e, y_test_1e), (x_test_2e, y_test_2e)]

for x, y in X_train + X_test:
    print(x.shape, y.shape)
# torch.Size([50, 1]) torch.Size([50, 5])
# torch.Size([100, 1]) torch.Size([100, 5])
# torch.Size([50, 1]) torch.Size([50, 5])
# torch.Size([100, 1]) torch.Size([100, 5])
```
```python
generic_scenario = tensors_scenario(
    train_tensors=X_train,
    test_tensors=X_test,
    task_labels=[0, 1],
)
```
2. Is this the right implementation of my problem setting?
(With this setting and the naive/vanilla strategy, I obtain an evaluation for each experience in each task, which sounds good. However, I am not totally sure about the differences between tasks and experiences…)
I would like to use Avalanche and start by trying the baselines.
I'm going for Naive/fine-tuning, i.e. simply continuing back-propagation on each new experience.
```python
model = SimpleMLP(input_size=1, num_classes=5)
optimizer = SGD(model.parameters(), lr=0.001, momentum=0.9)
criterion = MSELoss()  # loss for regression
cl_strategy = Naive(
    model, optimizer, criterion,
    train_epochs=2,
    evaluator=eval_plugin,
)
```
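To make sure I understand what Naive does, I think it roughly corresponds to this plain-PyTorch loop (a conceptual sketch with a stand-in linear model and dummy data, not Avalanche's actual implementation):

```python
import torch
from torch import nn
from torch.optim import SGD

model = nn.Linear(1, 5)  # stand-in for SimpleMLP(input_size=1, num_classes=5)
optimizer = SGD(model.parameters(), lr=0.001, momentum=0.9)
criterion = nn.MSELoss()

# two "experiences" with shifted input/output distributions (dummy data)
experiences = [
    (torch.rand(50, 1), torch.rand(50, 5)),
    (torch.rand(100, 1) + 1.0, torch.rand(100, 5) + 1.0),
]

for exp_id, (x, y) in enumerate(experiences):
    for epoch in range(2):  # train_epochs=2
        optimizer.zero_grad()
        loss = criterion(model(x), y)
        loss.backward()
        optimizer.step()  # just continue back-prop on the new experience
    print(f"experience {exp_id}: final loss {loss.item():.4f}")
```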
Assuming this is the right setup for the fine-tuning baseline (i.e. without any smart CL strategy applied, just plain batch learning on each experience, with no plugins):
3. What is the best way to try the JointTraining/offline baseline (training the model on all the data at once) and evaluate it in the same test setting?
Could it be achieved simply by changing the scenario (stacking the different experiences together) while keeping the same model?
```python
xs, ys = zip(*X_train)  # separate the inputs and the targets
new_x = torch.cat(xs)   # concatenate the experiences
new_y = torch.cat(ys)
new_x.shape, new_y.shape
# (torch.Size([150, 1]), torch.Size([150, 5]))
```
```python
scenario2 = tensors_scenario(
    train_tensors=[(new_x, new_y)],
    test_tensors=X_test,
    task_labels=[0, 1],
)
```
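In any case, I can sanity-check the joint baseline by evaluating it on each experience's test set separately; here is a self-contained sketch with a stand-in linear model and dummy test sets shaped like mine:

```python
import torch
from torch import nn

criterion = nn.MSELoss()
model = nn.Linear(1, 5)  # stand-in for the jointly trained model

# dummy per-experience test sets: (inputs, targets) with 50 and 100 samples
X_test = [
    (torch.rand(50, 1), torch.rand(50, 5)),
    (torch.rand(100, 1), torch.rand(100, 5)),
]

with torch.no_grad():
    for exp_id, (x, y) in enumerate(X_test):
        mse = criterion(model(x), y).item()
        print(f"experience {exp_id}: test MSE = {mse:.4f}")
```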
4. Is there an easy way to have Avalanche automatically feed the task label as an input to the model?
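A framework-independent workaround I am considering is appending a one-hot task id to the input myself (illustrative sketch; `with_task_label` is a helper name I made up):

```python
import torch

def with_task_label(x, task_id, num_tasks=2):
    """Concatenate a one-hot task label to each input row."""
    one_hot = torch.zeros(len(x), num_tasks)
    one_hot[:, task_id] = 1.0
    return torch.cat([x, one_hot], dim=1)

x = torch.rand(50, 1)
x_aug = with_task_label(x, task_id=1)
print(x_aug.shape)  # torch.Size([50, 3]) -> the model would then need input_size=3
```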