We are excited to release the code for our latest algorithm, AR1* with Latent Replay, in PyTorch!
This approach works even in the challenging case of almost 400 small non-i.i.d. training batches on CORe50!
Very simple and clean code
Constant computational and memory overhead over time
Running time of just 24m on a single GPU
Check it out at the link below!
I first saw the results of your work in a video of the Core app on YouTube, and they are truly worthy of attention. I have tried to reproduce the experiment on my machine, but I have a much weaker GPU than the one mentioned in the paper (a GTX 1050 with 3 GB versus a GTX 1080 Ti with 11 GB), and the experiment keeps crashing on the first step. Is there any way to customize it to run on weaker machines without sacrificing too much performance? If so, in which direction should I look?
Sorry if the answer is obvious; I don't have much experience with these tools yet, but I hope to learn from more than just my mistakes. :)
Anyway, good luck with your work, and thank you in advance for your reply.
Hi @Ostturm, thanks for your kind words!
Mmmh, that's very strange: the code should work even on low-end hardware with little memory and no GPU. Would you mind sending me a private message on Slack so that I can help you out?
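In the meantime, one direction that often helps on small-memory GPUs is lowering the per-step mini-batch size and compensating with gradient accumulation, so the effective batch size (and hopefully the accuracy) stays the same. A minimal PyTorch sketch of the idea; the names `EFFECTIVE_BATCH` and `MICRO_BATCH` and the toy linear model are illustrative placeholders, not parameters from our repository:

```python
import torch
import torch.nn as nn

EFFECTIVE_BATCH = 128   # batch size assumed by the original training recipe
MICRO_BATCH = 32        # smaller chunk that fits a 3 GB card
ACCUM_STEPS = EFFECTIVE_BATCH // MICRO_BATCH

# Toy stand-in for the real network, just to show the mechanics.
model = nn.Linear(10, 2)
opt = torch.optim.SGD(model.parameters(), lr=0.01)
loss_fn = nn.CrossEntropyLoss()

x = torch.randn(EFFECTIVE_BATCH, 10)
y = torch.randint(0, 2, (EFFECTIVE_BATCH,))

opt.zero_grad()
for i in range(ACCUM_STEPS):
    xb = x[i * MICRO_BATCH:(i + 1) * MICRO_BATCH]
    yb = y[i * MICRO_BATCH:(i + 1) * MICRO_BATCH]
    # Divide by ACCUM_STEPS so the summed gradients match one big batch.
    loss = loss_fn(model(xb), yb) / ACCUM_STEPS
    loss.backward()  # gradients accumulate across micro-batches
opt.step()           # one optimizer step per effective batch
```

Only one micro-batch of activations lives in GPU memory at a time, which is usually the dominant cost; the trade-off is a few extra forward/backward passes per update.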