Multiplicative Weight Updates

How you update the weights in a neural network can have a profound effect on the kind of network you end up with.
Multiplicative weight updates induce sparsity and modularity.
The update is multiplicative rather than additive: a weight is scaled by, say, (1 + alpha) when it is reinforced or by (1 - alpha) when it is weakened, so repeated updates compound exponentially. Weights that have been reinforced a number of times end up with vastly higher magnitude than those that have mostly been decremented, and that is what leads to sparsity. A small sketch of this follows the links below.
https://youtu.be/F8aPV6chfyA
https://arxiv.org/abs/2006.14560
https://journals.plos.org/plosone/article?id=10.1371/journal.pone.0070444
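Here is a minimal sketch (my own illustration, not code from the linked papers) of why compounding multiplicative updates concentrate magnitude in a few weights. The random +/-1 "reinforcement" signal is an assumption standing in for whatever drives the update (gradient sign, Hebbian correlation, fitness):

```python
import numpy as np

rng = np.random.default_rng(0)
alpha = 0.1
w = np.ones(1000)                    # start with equal weights

for step in range(500):
    # +1 or -1 per weight, standing in for the reinforcement signal.
    sign = rng.choice([1.0, -1.0], size=w.shape)
    # Multiplicative update: scale by (1 + alpha) or (1 - alpha).
    w *= np.where(sign > 0, 1 + alpha, 1 - alpha)

# Because log(w) is a random walk, the weights end up log-normally spread:
# a handful dominate while most decay toward zero, i.e. the vector is
# effectively sparse.
small = np.mean(w < 0.01 * w.max())
print(f"{small:.0%} of weights are below 1% of the largest weight")
```

An additive update of the same +/-alpha per step would instead keep the weights in a narrow band around their starting value, which is the contrast the first paper above draws out.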

Can you use similar ideas for continual learning? Instead of exponential growth, the weight magnitude could grow like the logarithm or the square root of the number of updates, so it would take a massive number of increments to reach a large magnitude. That would also make it very difficult to reverse the process and shrink the weight back down again, which is what you want if previously learned weights are to resist being overwritten (see the sketch after the link below).
https://discourse.numenta.org/t/multiplicative-mutations-lead-to-sparse-systems/9177
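A hypothetical sketch of that "hard to reverse" idea: each weight keeps an integer update count, and its magnitude is a sublinear function (sqrt or log) of that count. Reaching a large magnitude takes many increments, and undoing it takes just as many decrements. The class name, the growth functions, and the +/-1 update signal are all assumptions for illustration, not an algorithm from the linked thread:

```python
import numpy as np

class SlowReversalWeights:
    def __init__(self, n, growth="sqrt"):
        self.counts = np.zeros(n)        # signed update counts per weight
        self.growth = growth

    def update(self, delta_sign):
        # delta_sign holds 0 / +1 / -1 per weight (e.g. the sign of a gradient).
        self.counts += delta_sign        # each step moves a count by at most one

    @property
    def weights(self):
        mag = np.abs(self.counts)
        # Sublinear growth: magnitude rises slowly with the number of updates.
        mag = np.sqrt(mag) if self.growth == "sqrt" else np.log1p(mag)
        return np.sign(self.counts) * mag

w = SlowReversalWeights(4)
for _ in range(100):                     # "task A" reinforces weight 0 many times
    w.update(np.array([1, 0, 0, 0]))
print(w.weights)                         # weight 0 is about sqrt(100) = 10
for _ in range(10):                      # "task B" briefly pushes it the other way
    w.update(np.array([-1, 0, 0, 0]))
print(w.weights)                         # about sqrt(90) ~ 9.5: barely moved
```

The asymmetry comes purely from the sublinear mapping: a heavily reinforced weight needs roughly as many opposing updates as it originally received before its magnitude drops appreciably.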