
Pruning neural networks without any data by iteratively conserving synaptic flow - blopeur
https://arxiv.org/abs/2006.05467
======
seesawtron
Most pruning algorithms depend on iterating over training data to evaluate
which synaptic weights can be pruned/removed without hurting performance on
test data. A newer class of pruning algorithms prunes at initialization,
without ever looking at the data, but these either suffer from catastrophic
layer-collapse or require an impractical amount of computation. In this
study, the authors present a method they claim is (i) data independent,
(ii) computationally efficient, and (iii) better-performing than existing
pruning algorithms.
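
For anyone wondering what "data independent" means in practice: the SynFlow
score comes from a single forward/backward pass on an all-ones input through
the network with its weights replaced by their absolute values, so no
training data is touched. A rough PyTorch sketch (function name is my own,
and for simplicity it scores every parameter, biases included):

    import torch

    def synflow_scores(model, input_shape):
        # Temporarily replace each weight by its absolute value so the
        # network computes a strictly positive "synaptic flow".
        signs = [p.data.sign() for p in model.parameters()]
        for p in model.parameters():
            p.data.abs_()

        # Data-free: one forward pass on an all-ones input, no labels.
        model.zero_grad()
        out = model(torch.ones(1, *input_shape))
        out.sum().backward()

        # SynFlow score: elementwise |dR/dtheta * theta|.
        scores = [(p.grad * p.data).abs() for p in model.parameters()]

        # Restore the original signs of the weights.
        for p, s in zip(model.parameters(), signs):
            p.data.mul_(s)
        return scores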

Key idea of their iterative approach:

"..conservation alone leads to layer-collapse by assigning parameters in the
largest layers with lower scores relative to parameters in smaller layers.
However, if conservation is coupled with iterative pruning, then when the
largest layer is pruned, becoming smaller, then in subsequent iterations the
remaining parameters of this layer will be assigned higher relative scores.
With sufficient iterations, conservation coupled with iteration leads to a
self-balancing pruning strategy allowing IMP to avoid layer-collapse."
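
So the loop is just: score, prune a little, rescore, repeat, with an
exponential compression schedule over (IIRC) 100 iterations. A minimal
sketch reusing synflow_scores from above; the schedule and masking details
here are my approximation, not the authors' code:

    def synflow_prune(model, input_shape, compression, n_iter=100):
        # Exponential schedule: at iteration k, keep a fraction
        # rho**(k/n) of the weights, ending at the target density rho.
        rho = 10 ** (-compression)  # e.g. compression=2 keeps 1% of weights
        masks = [torch.ones_like(p) for p in model.parameters()]

        for k in range(1, n_iter + 1):
            scores = synflow_scores(model, input_shape)
            # Consider only still-unpruned weights when thresholding.
            flat = torch.cat([(s * m).flatten()
                              for s, m in zip(scores, masks)])
            keep = max(1, int(rho ** (k / n_iter) * flat.numel()))
            threshold = torch.topk(flat, keep).values.min()
            for p, s, m in zip(model.parameters(), scores, masks):
                m.copy_((s * m >= threshold).float())
                p.data.mul_(m)  # zero out the newly pruned weights
        return masks

The self-balancing behavior the quote describes falls out of rescoring
inside the loop: once the largest layer shrinks, its surviving weights get
higher relative scores on the next iteration, so no layer is ever pruned
to zero.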

