Compressing deep learning models: distillation (Ep.104)

Running large deep learning models on limited hardware or edge devices is often prohibitive in terms of memory and compute. However, there are methods that compress large models by orders of magnitude while maintaining similar accuracy at inference time.

In this episode I explain one of the first such methods: knowledge distillation.
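
For listeners who want a concrete picture before pressing play, here is a minimal sketch of the soft-target distillation loss introduced by Hinton et al., written in PyTorch. The function name, the temperature T and the mixing weight alpha are illustrative assumptions, not values from the episode.

```python
import torch
import torch.nn.functional as F

def distillation_loss(student_logits, teacher_logits, labels, T=4.0, alpha=0.5):
    """Hinton-style distillation: soften both output distributions with a
    temperature T, match them with KL divergence, and mix in the usual
    cross-entropy on the true labels. Scaling by T^2 keeps the soft-target
    gradients on the same scale as the hard-target ones."""
    soft_targets = F.softmax(teacher_logits / T, dim=-1)
    soft_loss = F.kl_div(
        F.log_softmax(student_logits / T, dim=-1),
        soft_targets,
        reduction="batchmean",
    ) * (T * T)
    hard_loss = F.cross_entropy(student_logits, labels)
    return alpha * soft_loss + (1 - alpha) * hard_loss

# Toy usage: 8 examples, 10 classes, random logits standing in for real models.
student_logits = torch.randn(8, 10, requires_grad=True)
teacher_logits = torch.randn(8, 10)
labels = torch.randint(0, 10, (8,))
loss = distillation_loss(student_logits, teacher_logits, labels)
loss.backward()  # gradients flow only into the student
print(loss.item())
```

Because the teacher's softened outputs carry information about how similar the classes are to one another, a much smaller student can reach accuracy close to the teacher's with a fraction of the parameters.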

Come join us on Slack to discuss the show, suggest new episodes and chat with other listeners!

