How can the knowledge of lottery tickets be used to adapt SynFlow to early pruning after pre-training?
Summary
This research focuses on adapting an early-pruning algorithm called SynFlow so that it prunes after pre-training. This adaptation was inspired by work on the Lottery Ticket Hypothesis by Frankle et al. We show that instability analysis can be used to determine the iteration k at which a network becomes stable, after which pruning and training the resulting subnetwork allows it to match the accuracy of the full network. While this approach still reduces complexity and training time during training, as the original SynFlow does, it also improves accuracy significantly. In the current training setup, a full ResNet-20 network achieves accuracies as high as 90.14%. SynFlow reaches roughly 80% accuracy on average, occasionally rising into the low 80s. This research presents AdSynFlow, which consistently achieves higher accuracies than SynFlow, with a high of 85.27%. Though it does not yet match the full network, AdSynFlow promises a bright future for early pruning methods.
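For context, the SynFlow saliency that AdSynFlow builds on can be computed without any data by passing an all-ones input through a copy of the network whose weights have been replaced by their absolute values, then scoring each parameter by |∂R/∂θ ⊙ θ|. The sketch below illustrates this for a generic PyTorch model; it is a minimal, illustrative implementation (function names and the handling of signs are assumptions), not the exact code used in this work.

```python
import torch
import torch.nn as nn


def synflow_scores(model: nn.Module, input_shape):
    """Per-parameter SynFlow saliency |dR/dtheta * theta|, computed data-free.

    A sketch: take absolute values of all parameters, forward an all-ones
    input, sum the output (R), backprop, and score each parameter by
    |grad * param|. Original signs are restored afterwards.
    """
    # Remember signs and switch every parameter to its absolute value.
    signs = {}
    with torch.no_grad():
        for name, p in model.named_parameters():
            signs[name] = torch.sign(p)
            p.abs_()

    model.eval()  # avoid batch-statistics issues with a single example
    model.zero_grad()

    ones = torch.ones(1, *input_shape)   # single all-ones input
    R = model(ones).sum()                # R = 1^T (prod_l |W_l|) 1
    R.backward()

    scores = {
        name: (p.grad * p).detach().abs()
        for name, p in model.named_parameters()
        if p.grad is not None
    }

    # Restore the original parameter signs.
    with torch.no_grad():
        for name, p in model.named_parameters():
            p.mul_(signs[name])

    return scores
```

In the standard SynFlow procedure, these scores are then ranked globally and the lowest-scoring weights are pruned iteratively; AdSynFlow applies this style of scoring after pre-training, at the stable iteration identified by instability analysis.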