Suppose you have a layer of 1 million ReLU neurons in a conventional artificial neural network, where each neuron has 1 million weights connecting it back to the neurons in the prior layer. The network is trying to fit some complicated curve in high dimensions. Each ReLU neuron marks a place where the fitted curve changes slope, and changes the response of the network there too.
I think such places are sometimes called break points.
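The break-point picture is easiest to see in one dimension. Here is a minimal sketch (my own illustration, not from any particular library): a tiny 1-D ReLU network is exactly piecewise linear, and each hidden neuron contributes one break point at x = -b/w, where its ReLU switches on.

```python
import numpy as np

# Tiny 1-D ReLU network: y(x) = sum_i v_i * relu(w_i * x + b_i).
# Each hidden unit contributes one break point at x = -b_i / w_i;
# between break points the network is exactly linear.
rng = np.random.default_rng(0)
n_hidden = 4
w = rng.standard_normal(n_hidden)  # input weights
b = rng.standard_normal(n_hidden)  # biases
v = rng.standard_normal(n_hidden)  # output weights

def net(x):
    # x is a 1-D array of sample points; returns the network output at each.
    return np.maximum(w * x[:, None] + b, 0.0) @ v

break_points = -b / w  # one change decision per hidden neuron
```

Between any two adjacent break points the output is a straight line, so the slope only changes at those four locations.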
The problem is that you are spending 1 million weights just to get 1 change decision (1 break-point decision). That seems excessive and inefficient for curve fitting.
What is the optimal number of weight parameters per change decision: 1,000? 100? Or is the answer way over at the other extreme, where you should use only 1 or 2 parameters per change decision?
Getting down to 1 or 2 parameters per change decision is possible using fast distributive random projections (or fast distributive transforms) combined with parametric activation functions.
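One concrete way to build such a layer (a sketch of the general idea, with my own choice of transform and activation) is: fix a random sign flip, apply the fast Walsh-Hadamard transform as the weightless "random projection", and make the only trainable parameters the two per-neuron activation slopes. That gives 2 parameters per break point instead of 1 million.

```python
import numpy as np

def fwht(x):
    """Fast Walsh-Hadamard transform, O(n log n); len(x) must be a power of 2."""
    x = x.copy()
    n = len(x)
    h = 1
    while h < n:
        for i in range(0, n, 2 * h):
            top = x[i:i + h].copy()
            bot = x[i + h:i + 2 * h].copy()
            x[i:i + h] = top + bot
            x[i + h:i + 2 * h] = top - bot
        h *= 2
    return x / np.sqrt(n)  # normalized: preserves vector length

rng = np.random.default_rng(1)
n = 8
signs = rng.choice([-1.0, 1.0], size=n)  # fixed random sign flip (not trained)
neg_slope = rng.standard_normal(n)       # trainable slope for h < 0
pos_slope = rng.standard_normal(n)       # trainable slope for h >= 0

def layer(x):
    h = fwht(signs * x)  # fast random projection, no weight matrix at all
    # parametric two-slope activation: 2 parameters per neuron / break point
    return np.where(h < 0, neg_slope * h, pos_slope * h)
```

The sign-flip-plus-transform trick mixes every input into every output with no stored weight matrix; all the adjustable capacity lives in the activation slopes.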
Because of the high connectivity of biological neurons (each connecting to roughly 1,000 to 10,000 others), the biological brain would need only 2 layers of 1 million neurons to do a large, fast random projection. An algorithm on a digital computer, built from pairwise butterfly operations, would need about 20 mathematical layers to reach the same level of connectivity for a fully mixed random projection, since each butterfly layer only doubles the mixing radius and log2 of 1 million is roughly 20.
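The layer count is just the doubling argument made concrete: each butterfly stage doubles how far information spreads, so full mixing of n elements needs ceil(log2 n) stages.

```python
import math

n = 1_000_000
# each butterfly layer doubles the mixing radius, so full mixing
# needs ceil(log2(n)) layers
layers = math.ceil(math.log2(n))
reach = 2 ** layers  # elements mixed together after that many layers
```

For n = 1,000,000 this gives 20 layers, matching the estimate above.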
If there were evolutionary pressure to reduce the number of parameters that must be adjusted to get a behavior or response, then it is possible that nature found such an arrangement.
To get even more elaborate, you could mix parametric activation functions and standard non-parametric activation functions with fast random projections or fast transforms to get a fractional number of parameters per change decision, for example 0.5 parameters per change decision. I haven't tried that yet, but it is certainly possible.
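A rough sketch of how the 0.5 figure could arise (my own construction, with a fixed random orthogonal matrix standing in for a fast transform): stack one layer with a plain parameter-free ReLU and one layer with a one-parameter leaky slope per neuron. Both layers still contribute one break point per neuron, so 2n break points share only n parameters.

```python
import numpy as np

rng = np.random.default_rng(2)
n = 8
# Stand-in for a fast transform: a fixed random orthogonal mix.
# (In practice you'd use an O(n log n) Walsh-Hadamard transform
# with random sign flips instead of storing a matrix.)
Q1, _ = np.linalg.qr(rng.standard_normal((n, n)))
Q2, _ = np.linalg.qr(rng.standard_normal((n, n)))
slope = rng.standard_normal(n)  # the only trainable parameters: n of them

def block(x):
    h = np.maximum(Q1 @ x, 0.0)           # layer 1: plain ReLU, 0 params/neuron
    h = Q2 @ h
    return np.where(h < 0, slope * h, h)  # layer 2: 1 param per neuron

# 2n neurons -> 2n break points, but only n trainable parameters:
params_per_decision = n / (2 * n)  # 0.5
```

Whether training only the slopes gives enough capacity in practice is exactly the open question; this just shows the parameter accounting.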