Compressing deep learning models: rewinding (Ep.105)

As a continuation of the previous episode, in this one I cover the topic of compressing deep learning models and explain another simple yet fantastic approach that can lead to much smaller models that still perform as well as the original one. Don't fo...
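
A minimal sketch of the rewinding idea in code, assuming a PyTorch model: prune the smallest-magnitude weights globally, then reset (rewind) the surviving weights to their values from an early-training snapshot before retraining. The function name, the pruning fraction, and the way the snapshot is captured are illustrative choices, not the exact recipe from the episode.

    import torch
    import torch.nn as nn

    def prune_and_rewind(model: nn.Module, rewind_state: dict, fraction: float = 0.2):
        """Globally prune the smallest-magnitude weights, then rewind the
        survivors to their early-training values."""
        # Global magnitude threshold across all weight tensors.
        all_weights = torch.cat([p.detach().abs().flatten()
                                 for name, p in model.named_parameters()
                                 if "weight" in name])
        threshold = torch.quantile(all_weights, fraction)

        masks = {}
        with torch.no_grad():
            for name, param in model.named_parameters():
                if "weight" not in name:
                    continue
                mask = (param.abs() > threshold).float()   # 1 = survivor, 0 = pruned
                param.copy_(rewind_state[name] * mask)     # the rewinding step
                masks[name] = mask
        return masks

    # Typical use: train a few epochs, snapshot the weights, train to
    # convergence, then prune, rewind, and retrain the sparse network.
    model = nn.Sequential(nn.Linear(784, 300), nn.ReLU(), nn.Linear(300, 10))
    rewind_state = {n: p.detach().clone() for n, p in model.named_parameters()}
    masks = prune_and_rewind(model, rewind_state, fraction=0.2)

During retraining, the masks must be re-applied after every optimizer step so that pruned weights stay at zero.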

Data Science

Welcome to DSH Podcast

The show, produced by Amethix Technologies, consists of weekly episodes explaining the latest and most relevant findings in machine learning and artificial intelligence, along with interviews with researchers and influential scientists in the field.

The show is freely available and all episodes can be downloaded via web syndication and many mobile and desktop podcast clients.

Recent Posts

  • Compressing deep learning models: distillation (Ep.104)

    Running large deep learning models on limited hardware or edge devices is often prohibitive. There are methods to compress large models by orders of magnitude while maintaining similar accuracy during inference. In this episode I explain one of the first m... (a minimal distillation-loss sketch follows this list).

  • Pandemics and the risks of collecting data (Ep. 103)

    Covid-19 is an emergency. True. Let's just not set ourselves up for another emergency, this time about privacy violations, once this one is over. Join our new Slack channel. This episode is supported by Proton. You can check them out at protonmail.com or protonvpn.com

  • Why average can get your predictions very wrong (ep. 102)

    Whenever people reason about the probability of events, they tend to consider average values between two extremes. In this episode I explain, with a numerical example, why this way of approximating is wrong and dangerous (a toy version is sketched after this list). We are moving our comm...

  • Activate deep learning neurons faster with Dynamic RELU (ep. 101)

    In this episode I briefly explain the concept behind activation functions in deep learning. One of the most widely used activation functions is the rectified linear unit (ReLU). While there are several flavors of ReLU in the literature, in this episode I ... (a rough sketch of an input-dependent ReLU follows this list).

  • WARNING!! Neural networks can memorize secrets (ep. 100)

    One of the best features of neural networks and machine learning models is their ability to memorize patterns from training data and apply them to unseen observations. That's where the magic is. However, there are scenarios in which the same machine learning models l...

  • Attacks to machine learning model: inferring ownership of training data (Ep. 99)

    In this episode I explain a very effective technique that allows one to infer whether any record at hand belongs to the (private) training dataset used to train the target model. The effectiveness of such a technique is due to the fact that it works on b... (a naive black-box baseline is sketched after this list).

  • Don’t be naive with data anonymization (Ep. 98)

    Masking, obfuscating, stripping, shuffling. All of these techniques try to do one simple thing: keep the data private while sharing it with third parties. Unfortunately, they are not a silver bullet for confidentiality (a toy illustration follows this list). All the players in the synthe...

  • Why sharing real data is dangerous (Ep. 97)

    There are very good reasons why a financial institution should never share its data. Actually, it should never even move its data. Ever. In this episode I explain why.

  • Building reproducible machine learning in production (Ep. 96)

    Building reproducible models is essential in all those scenarios in which the lead developer collaborates with other team members. Reproducibility in machine learning should not be an art; rather, it should be achieved via a methodical approach (a minimal seed-pinning sketch follows this list). In th...
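
For Ep.104 (distillation), here is a minimal sketch of the classic knowledge-distillation loss, assuming PyTorch; the temperature and alpha values are illustrative, and the episode's exact method may differ.

    import torch.nn.functional as F

    def distillation_loss(student_logits, teacher_logits, labels,
                          temperature: float = 4.0, alpha: float = 0.7):
        """Weighted sum of (a) the KL divergence between the temperature-
        softened teacher and student distributions and (b) the ordinary
        cross-entropy on the true labels."""
        soft_teacher = F.softmax(teacher_logits / temperature, dim=-1)
        log_soft_student = F.log_softmax(student_logits / temperature, dim=-1)
        # The T^2 factor keeps gradient magnitudes comparable across temperatures.
        kd = F.kl_div(log_soft_student, soft_teacher,
                      reduction="batchmean") * temperature ** 2
        ce = F.cross_entropy(student_logits, labels)
        return alpha * kd + (1 - alpha) * ce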
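
For ep. 102 (why averages mislead), a toy numerical illustration of the general point, not necessarily the episode's own example: with a nonlinear cost, evaluating the cost at the average of two extremes is not the same as averaging the costs (Jensen's inequality).

    def cost(x):
        return x ** 2  # any nonlinear cost/risk function

    outcomes = [0.0, 100.0]            # two extreme scenarios, equally likely
    avg_input = sum(outcomes) / 2      # 50.0

    print(cost(avg_input))                     # 2500.0 <- naive reasoning on the average
    print(sum(cost(x) for x in outcomes) / 2)  # 5000.0 <- the true expected cost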
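
For ep. 101 (Dynamic ReLU), a rough PyTorch sketch of the idea behind input-dependent activations: instead of the fixed max(x, 0), take the max over K linear pieces whose slopes and intercepts are produced by a tiny hyper-network from global context. The layer sizes, the 0.1 scaling, and K=2 are illustrative choices, not the paper's exact formulation.

    import torch
    import torch.nn as nn

    class DynamicReLU(nn.Module):
        def __init__(self, channels: int, k: int = 2, reduction: int = 4):
            super().__init__()
            self.k = k
            # Hyper-network: global context -> 2*K coefficients
            # (K slopes a_k and K intercepts b_k).
            self.hyper = nn.Sequential(
                nn.Linear(channels, channels // reduction),
                nn.ReLU(),
                nn.Linear(channels // reduction, 2 * k),
            )
            # Start near a plain ReLU: slopes (1, 0, ...), intercepts 0.
            self.register_buffer("base_slopes",
                                 torch.tensor([1.0] + [0.0] * (k - 1)))

        def forward(self, x):  # x: (batch, channels, height, width)
            context = x.mean(dim=(2, 3))             # global average pooling
            coeffs = self.hyper(context)             # (batch, 2K)
            a = coeffs[:, :self.k] * 0.1 + self.base_slopes  # input-dependent slopes
            b = coeffs[:, self.k:] * 0.1                     # input-dependent intercepts
            a = a.view(-1, 1, self.k, 1, 1)          # broadcast over channels/space
            b = b.view(-1, 1, self.k, 1, 1)
            # Activation: max over the K linear pieces a_k * x + b_k.
            return (x.unsqueeze(2) * a + b).max(dim=2).values

    act = DynamicReLU(channels=16)
    y = act(torch.randn(8, 16, 32, 32))  # output shape: (8, 16, 32, 32)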
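
For Ep. 99 (membership inference), a deliberately naive black-box baseline that captures the intuition behind such attacks: models tend to be more confident on records they were trained on. The 0.9 threshold is purely illustrative; real attacks calibrate it, for example with shadow models, and the episode's technique is more refined.

    import numpy as np

    def membership_guess(confidences: np.ndarray, threshold: float = 0.9):
        """Guess that a record was in the training set if the target model's
        confidence on its predicted class exceeds the threshold."""
        return confidences >= threshold

    # Per-record confidence of the target model on its predicted class.
    confidences = np.array([0.99, 0.62, 0.97, 0.55])
    print(membership_guess(confidences))  # [ True False  True False]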
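
For Ep. 98 (naive anonymization), a toy illustration of why masking alone is not a silver bullet; the record and its fields are made up. Hashing the direct identifier does nothing about the quasi-identifiers, which can often be linked against public datasets to re-identify the person.

    import hashlib

    record = {"name": "Jane Doe", "zip": "02139",
              "birth": "1985-03-02", "diagnosis": "flu"}

    # "Anonymize" by hashing the direct identifier.
    masked = dict(record)
    masked["name"] = hashlib.sha256(record["name"].encode()).hexdigest()[:8]

    # The quasi-identifiers (zip code, birth date) are untouched: joining
    # them against a public record, e.g. a voter roll, can re-identify
    # the person, hash or no hash.
    print(masked)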
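
For Ep. 96 (reproducibility), a minimal seed-pinning sketch in PyTorch. This is one small piece of a methodical setup; a complete approach would also pin data ordering, environment, and library versions.

    import os
    import random

    import numpy as np
    import torch

    def make_reproducible(seed: int = 42):
        """Pin the common sources of randomness so that two runs of the
        same training script produce the same results."""
        random.seed(seed)
        np.random.seed(seed)
        torch.manual_seed(seed)
        torch.cuda.manual_seed_all(seed)
        os.environ["PYTHONHASHSEED"] = str(seed)
        # Trade speed for determinism in cuDNN convolutions.
        torch.backends.cudnn.deterministic = True
        torch.backends.cudnn.benchmark = False

    make_reproducible(42)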


Slack community chat

Join our Slack community to discuss the show, suggest new episodes and chat with other listeners!

