June 1, 2020
As a continuation of the previous episode, in this one I cover compressing deep learning models and explain another simple yet fantastic approach that can lead to much smaller models that still perform as well as the original one.
Don't fo...
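The excerpt doesn't name the approach discussed in this episode. Purely as an illustration of what model compression can look like in practice, here is a minimal sketch of post-training dynamic quantization in PyTorch; the toy model and layer sizes are made up:

```python
import torch
import torch.nn as nn

# A toy fully connected network standing in for the "large" model
model = nn.Sequential(
    nn.Linear(784, 512), nn.ReLU(),
    nn.Linear(512, 256), nn.ReLU(),
    nn.Linear(256, 10),
)

# Post-training dynamic quantization: Linear weights are stored as int8,
# activations are quantized on the fly at inference time
quantized = torch.quantization.quantize_dynamic(
    model, {nn.Linear}, dtype=torch.qint8
)

x = torch.randn(1, 784)
print(model(x).shape, quantized(x).shape)  # same interface, smaller weights
```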
May 20, 2020
Running large deep learning models on limited hardware or edge devices is often prohibitive. There are methods that compress large models by orders of magnitude while maintaining similar accuracy during inference.
In this episode I explain one of the first m...
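The specific method covered in the episode is not visible in this excerpt. Assuming something in the spirit of magnitude pruning, one of the earliest compression ideas, a minimal NumPy sketch could look like this (the function name and sparsity level are illustrative):

```python
import numpy as np

def magnitude_prune(weights: np.ndarray, sparsity: float) -> np.ndarray:
    """Zero out the smallest-magnitude weights, removing roughly `sparsity` of them."""
    k = int(sparsity * weights.size)
    if k == 0:
        return weights.copy()
    # Magnitude of the k-th smallest weight becomes the pruning threshold
    threshold = np.partition(np.abs(weights).ravel(), k - 1)[k - 1]
    mask = np.abs(weights) > threshold
    return weights * mask

# Example: prune 90% of a random weight matrix
w = np.random.randn(256, 256)
w_pruned = magnitude_prune(w, sparsity=0.9)
print(f"non-zero weights: {np.count_nonzero(w_pruned) / w.size:.1%}")
```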
May 8, 2020
Covid-19 is an emergency. True. Let's just not set the stage for another emergency, one about privacy violations, once this one is over.
Join our new Slack channel
This episode is supported by Proton. You can check them out at protonmail.com or protonvpn.com
April 19, 2020
Whenever people reason about the probability of events, they tend to consider average values between two extremes. In this episode I explain, with a numerical example, why approximating in this way is wrong and dangerous.
We are moving our comm...
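The episode's own numerical example is not included in this excerpt. As a rough illustration of the general point, assuming it concerns nonlinear outcomes, plugging an average input into a model can give a very different answer than averaging over the actual scenarios (all numbers below are made up):

```python
import numpy as np

rng = np.random.default_rng(0)

# Demand is either low (20 units) or high (180 units), equally likely
demand = rng.choice([20, 180], size=100_000)

def profit(d, stock=100):
    """We stock 100 units; each sold unit earns 1, each unsold unit costs 0.5."""
    sold = np.minimum(d, stock)
    unsold = stock - sold
    return sold * 1.0 - unsold * 0.5

avg_demand = demand.mean()          # ~100
profit_at_avg = profit(avg_demand)  # ~100: looks great on paper
avg_profit = profit(demand).mean()  # ~40: the actual expected profit

print(profit_at_avg, avg_profit)
```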
April 2, 2020
In this episode I briefly explain the concept behind activation functions in deep learning. One of the most widely used activation functions is the rectified linear unit (ReLU). While there are several flavors of ReLU in the literature, in this episode I ...
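For reference, plain ReLU and one common flavor, leaky ReLU, can be sketched in a few lines of NumPy; which variants the episode actually covers is not visible in this excerpt:

```python
import numpy as np

def relu(x):
    """Rectified linear unit: max(0, x)."""
    return np.maximum(0.0, x)

def leaky_relu(x, alpha=0.01):
    """A common flavor: a small slope for negative inputs instead of zero."""
    return np.where(x > 0, x, alpha * x)

x = np.array([-2.0, -0.5, 0.0, 1.5, 3.0])
print(relu(x))        # [0.  0.  0.  1.5 3. ]
print(leaky_relu(x))  # [-0.02  -0.005  0.  1.5  3. ]
```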
March 23, 2020
One of the best features of neural networks and machine learning models is their ability to memorize patterns from training data and apply them to unseen observations. That's where the magic is. However, there are scenarios in which the same machine learning models l...
March 20, 2020
In this episode I explain a very effective technique that allows one to infer whether any record at hand belongs to the (private) training dataset used to train the target model. The effectiveness of such a technique is due to the fact that it works on b...
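The exact technique is not spelled out in this excerpt. A heavily simplified sketch of one black-box membership inference heuristic, thresholding the target model's prediction confidence, might look like this (names, threshold, and numbers are illustrative only):

```python
import numpy as np

def confidence(probs: np.ndarray) -> np.ndarray:
    """Confidence of the target model's prediction for each record."""
    return probs.max(axis=1)

def infer_membership(probs: np.ndarray, threshold: float = 0.9) -> np.ndarray:
    """Guess that a record was in the training set if the model is unusually confident."""
    return confidence(probs) >= threshold

# Hypothetical black-box outputs of the target model on candidate records
candidate_probs = np.array([
    [0.98, 0.01, 0.01],  # very confident -> likely seen during training
    [0.40, 0.35, 0.25],  # uncertain      -> likely not a training member
])
print(infer_membership(candidate_probs))  # [ True False]
```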
March 20, 2020
Masking, obfuscating, stripping, shuffling. All the above techniques try to do one simple thing: keep the data private while sharing it with third parties. Unfortunately, they are not a silver bullet for confidentiality. All the players in the synthe...
March 20, 2020
There are very good reasons why financial institutions should never share their data. Actually, they should never even move their data. Ever. In this episode I explain why.
February 22, 2020
Building reproducible models is essential for all those scenarios in which the lead developer is collaborating with other team members. Reproducibility in machine learning should not be an art; rather, it should be achieved through a methodical approach. In th...
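The methodical approach itself is not detailed in this excerpt. One ingredient most reproducible workflows share is pinning every source of randomness; a minimal sketch, assuming a PyTorch-based project, could be:

```python
import os
import random

import numpy as np
import torch

def set_seed(seed: int = 42) -> None:
    """Pin every common source of randomness so two runs produce the same model."""
    os.environ["PYTHONHASHSEED"] = str(seed)
    random.seed(seed)
    np.random.seed(seed)
    torch.manual_seed(seed)
    torch.cuda.manual_seed_all(seed)
    # Trade some speed for deterministic cuDNN kernels
    torch.backends.cudnn.deterministic = True
    torch.backends.cudnn.benchmark = False

set_seed(42)
```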