Podcast

December 3, 2019

The dark side of AI: social media and the optimization of addiction

Chamath Palihapitiya, former Vice President of User Growth at Facebook, said this during a talk at Stanford University: “I feel tremendous guilt. The short-term, dopamine-driven feedback loops that we have created are destroying how society ...
November 18, 2019

3 best solutions to improve training stability of GANs (Ep. 88)

Generative Adversarial Networks, or GANs, are very powerful tools for generating data. However, training a GAN is not easy. More specifically, GANs suffer from three major issues: instability of the training procedure, mode collapse, and vanishing gradients ...
November 5, 2019

Deep learning is easier when it is illustrated (with Jon Krohn) (Ep. 86)

In this episode I speak with Jon Krohn, author of Deep Learning Illustrated, a book that makes deep learning easier to grasp. We also talk about some important guidelines to take into account whenever you implement a deep learning model, and how to deal with ...
November 4, 2019

How to generate very large images with GANs (Ep. 85)

In this episode I explain how a research group from the University of Lübeck tamed the curse of dimensionality to generate large medical images with GANs. The problem is not as trivial as it seems. ...
October 27, 2019

More powerful deep learning with transformers (Ep. 84)

Some of the most powerful NLP models, like BERT and GPT-2, have one thing in common: they all use the transformer architecture. That architecture is built on top of another important concept already known to the community: self-attention. In this episode I ...
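As a rough illustration of the self-attention mechanism mentioned above (a minimal NumPy sketch with made-up names and shapes, not code from the episode), each token produces queries, keys, and values, and attends to every other token via scaled dot products:

```python
import numpy as np

def self_attention(X, Wq, Wk, Wv):
    """Scaled dot-product self-attention over a sequence X of token embeddings."""
    # Project the input sequence into queries, keys, and values.
    Q, K, V = X @ Wq, X @ Wk, X @ Wv
    d_k = Q.shape[-1]
    # Attention scores: how much each position attends to every other position.
    scores = Q @ K.T / np.sqrt(d_k)
    # Softmax over positions (numerically stabilized).
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)
    # Each output is a weighted mix of the value vectors.
    return weights @ V

rng = np.random.default_rng(0)
X = rng.normal(size=(4, 8))                      # 4 tokens, 8-dim embeddings
Wq, Wk, Wv = (rng.normal(size=(8, 8)) for _ in range(3))
out = self_attention(X, Wq, Wk, Wv)
print(out.shape)  # one 8-dim output per token
```

Transformers stack many such attention layers (with multiple heads) in place of recurrence, which is what lets models like BERT and GPT-2 process all positions in parallel.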