Compressing deep learning models: rewinding (Ep.105)

Continuing from the previous episode, in this one I cover compressing deep learning models and explain another simple yet effective approach, rewinding, that can lead to much smaller models that still perform as well as the original.
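To make the idea concrete, here is a minimal sketch of weight rewinding, assuming the setup from the referenced paper: after training, the smallest-magnitude weights are pruned, and instead of fine-tuning the surviving weights from their final values, they are reset ("rewound") to a snapshot saved early in training before retraining. All array names and the toy numbers are illustrative, not from the episode.

```python
import numpy as np

def magnitude_prune_mask(weights, sparsity):
    """Return a binary mask that zeroes out the smallest-magnitude weights."""
    flat = np.abs(weights).ravel()
    k = int(sparsity * flat.size)
    threshold = np.sort(flat)[k]  # k-th smallest magnitude survives
    return (np.abs(weights) >= threshold).astype(weights.dtype)

# Toy stand-ins for real training checkpoints (illustrative only):
rng = np.random.default_rng(0)
w_init = rng.normal(size=(4, 4))                   # weights at initialization
w_early = w_init + 0.1 * rng.normal(size=(4, 4))   # snapshot saved early in training
w_final = w_early + 0.5 * rng.normal(size=(4, 4))  # weights after full training

# Prune 50% of the trained weights by magnitude.
mask = magnitude_prune_mask(w_final, sparsity=0.5)

# Fine-tuning would continue training from the pruned final weights:
w_finetune_start = w_final * mask
# Rewinding instead restarts from the early snapshot, keeping the same mask,
# and then retrains with the original learning-rate schedule from that point:
w_rewind_start = w_early * mask
```

Both variants keep the same sparse structure; the difference is only the starting point (and learning-rate schedule) of the retraining phase.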

Don’t forget to join our Slack channel and discuss previous episodes or propose new ones.

This episode is supported by Pryml.io
Pryml is an enterprise-scale platform to synthesise data and deploy applications built on that data back to a production environment.

References

Comparing Rewinding and Fine-tuning in Neural Network Pruning https://arxiv.org/abs/2003.02389


