Podcast

June 17, 2020

Rust and machine learning #1 (Ep. 107)

This is the first episode of a series about the Rust programming language and the role it can play in the machine learning field. Rust is one of the most beautiful languages I have studied so far. I personally come from the C programming language, t...
June 1, 2020

Compressing deep learning models: rewinding (Ep. 105)

As a continuation of the previous episode, in this one I cover the topic of compressing deep learning models and explain another simple yet fantastic approach that can lead to much smaller models that still perform as well as the original one. Don't fo...
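As a rough illustration of the idea behind pruning with rewinding, here is a minimal NumPy sketch. The weights, the 40% keep ratio, and the use of random values in place of a trained network are all illustrative assumptions, not the exact procedure from the episode:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical stand-ins for real network weights: a snapshot saved
# early in training and the weights at convergence.
early_weights = rng.normal(size=10)
final_weights = rng.normal(size=10)

# Magnitude pruning: keep only the largest-|value| final weights
# (here an assumed 40% of them).
keep = int(0.4 * final_weights.size)
threshold = np.sort(np.abs(final_weights))[-keep]
mask = np.abs(final_weights) >= threshold

# Rewinding: the surviving weights are reset to their *early* values,
# and the sparse network is then retrained with the mask held fixed.
rewound = np.where(mask, early_weights, 0.0)
```

The pruned-and-rewound tensor has the same shape as the original, so the retraining loop is unchanged; only the zeroed entries are excluded from updates.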
May 20, 2020

Compressing deep learning models: distillation (Ep.104)

Using large deep learning models on limited hardware or edge devices is definitely prohibitive. There are methods to compress large models by orders of magnitude and maintain similar accuracy during inference. In this episode I explain one of the first m...
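The core trick in distillation is to soften the teacher's outputs with a temperature so the student can learn from the full distribution over classes, not just the top label. A minimal sketch, where the three logit values and the temperature T=4 are illustrative assumptions:

```python
import numpy as np

def softmax(logits, T=1.0):
    # Temperature-scaled softmax; higher T flattens the distribution.
    z = np.asarray(logits, dtype=float) / T
    e = np.exp(z - z.max())  # subtract max for numerical stability
    return e / e.sum()

# Hypothetical teacher logits for a 3-class problem.
teacher_logits = [5.0, 2.0, 1.0]

hard_targets = softmax(teacher_logits)          # near one-hot
soft_targets = softmax(teacher_logits, T=4.0)   # softened: relative class
                                                # similarities become visible
```

The student is then trained to match `soft_targets` (typically with a cross-entropy or KL term), which carries more information per example than the hard label alone.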
May 8, 2020

Pandemics and the risks of collecting data (Ep. 103)

Covid-19 is an emergency. True. Let's just not prepare for another emergency about privacy violation when this one is over. Join our new Slack channel. This episode is supported by Proton. You can check them out at protonmail.com or protonvpn.com
April 19, 2020

Why average can get your predictions very wrong (Ep. 102)

Whenever people reason about the probability of events, they tend to consider average values between two extremes. In this episode I explain why this way of approximating is wrong and dangerous, with a numerical example. We are moving our comm...
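The episode's own numerical example isn't reproduced here, but the general pitfall (Jensen's inequality: for a nonlinear quantity, the value at the average input is not the average value) can be shown with an illustrative example of my own choosing:

```python
# Two equally likely extreme outcomes and a nonlinear payoff/cost.
# Both the outcomes and f are hypothetical, chosen only to illustrate.
outcomes = [0.0, 100.0]
f = lambda x: x ** 2

avg_input = sum(outcomes) / len(outcomes)               # 50.0
f_of_avg = f(avg_input)                                 # 2500.0
avg_of_f = sum(f(x) for x in outcomes) / len(outcomes)  # 5000.0
```

Plugging the average into the model underestimates the expected value by a factor of two here; the gap grows with the spread of the outcomes and the curvature of `f`.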
April 2, 2020

Activate deep learning neurons faster with Dynamic ReLU (Ep. 101)

In this episode I briefly explain the concept behind activation functions in deep learning. One of the most widely used activation functions is the rectified linear unit (ReLU). While there are several flavors of ReLU in the literature, in this episode I ...
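For concreteness, here is plain ReLU next to a toy sketch of the dynamic variant, which takes the maximum over several linear pieces. In the actual Dynamic ReLU the coefficients (a_k, b_k) are produced per-input by a small hyper-network; the fixed coefficients below are a simplifying assumption:

```python
import numpy as np

def relu(x):
    # Standard rectified linear unit: max(0, x), applied elementwise.
    return np.maximum(0.0, x)

def dynamic_relu(x, coeffs=((1.0, 0.0), (0.2, 0.0))):
    # Toy sketch: y = max_k (a_k * x + b_k). With these assumed
    # coefficients it behaves like a leaky ReLU; in the real method
    # the (a_k, b_k) are input-dependent.
    return np.max([a * x + b for a, b in coeffs], axis=0)

x = np.array([-2.0, -0.5, 0.0, 1.0])
relu(x)           # → [ 0. ,  0. ,  0. ,  1. ]
dynamic_relu(x)   # → [-0.4, -0.1,  0. ,  1. ]
```

Note how the dynamic version keeps a small signal for negative inputs instead of zeroing them out, which is part of why such variants can train faster.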
March 23, 2020

WARNING!! Neural networks can memorize secrets (Ep. 100)

One of the best features of neural networks and machine learning models is their ability to memorize patterns from training data and apply them to unseen observations. That's where the magic is. However, there are scenarios in which the same machine learning models l...
Attacks to machine learning model: inferring ownership of training data (Ep. 99)