January 1, 2020
In the last episode of 2019 I speak with Filip Piekniewski about some of the most noteworthy findings in AI and machine learning in 2019. As a matter of fact, the entire field of AI has been inflated by hype and claims that are hard to believe. A lot o...
December 28, 2019
This is the fourth and last episode of the mini-series "The dark side of AI". I am your host Francesco and I’m with Chiara Tonini from London. The title of today’s episode is "Bias in the machine".
C: Francesco, today we are starting with an infuriating ...
December 23, 2019
We always hear the word “metadata”, usually in a sentence that goes like this:
Your Honor, ...
December 11, 2019
In 2017 a research group at the University of Washington did a study on the Black Lives Matter movement on Twitter. They constructed what they call a “shared audience graph” to analyse the different groups of audiences participating in the debate, and fo...
December 3, 2019
Chamath Palihapitiya, former Vice President of User Growth at Facebook, was giving a talk at Stanford University, when he said this: “I feel tremendous guilt. The short-term, dopamine-driven feedback loops that we have created are destroying how society ...
November 18, 2019
Generative Adversarial Networks, or GANs, are very powerful tools for generating data. However, training a GAN is not easy. More specifically, GANs suffer from three major issues: instability of the training procedure, mode collapse and vanishing gradien...
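As a rough illustration (an assumed toy setup, not the models discussed in the episode), here is a minimal PyTorch sketch of a vanilla GAN training step; the alternating discriminator/generator updates below are exactly where instability, mode collapse and vanishing gradients tend to appear.

```python
# Hedged sketch of a vanilla GAN training loop on toy 1-D Gaussian data.
# All shapes, learning rates and architectures are illustrative assumptions.
import torch
import torch.nn as nn

latent_dim, data_dim = 8, 2
G = nn.Sequential(nn.Linear(latent_dim, 32), nn.ReLU(), nn.Linear(32, data_dim))
D = nn.Sequential(nn.Linear(data_dim, 32), nn.ReLU(), nn.Linear(32, 1))
opt_G = torch.optim.Adam(G.parameters(), lr=2e-4)
opt_D = torch.optim.Adam(D.parameters(), lr=2e-4)
bce = nn.BCEWithLogitsLoss()

for step in range(1000):
    real = torch.randn(64, data_dim) + 3.0        # toy "real" distribution
    fake = G(torch.randn(64, latent_dim))          # generator samples

    # Discriminator update: push real towards 1, fake towards 0
    d_loss = bce(D(real), torch.ones(64, 1)) + bce(D(fake.detach()), torch.zeros(64, 1))
    opt_D.zero_grad()
    d_loss.backward()
    opt_D.step()

    # Generator update: try to fool D (the non-saturating loss mitigates
    # vanishing gradients when D becomes too confident)
    g_loss = bce(D(fake), torch.ones(64, 1))
    opt_G.zero_grad()
    g_loss.backward()
    opt_G.step()
```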
November 12, 2019
What happens to a neural network trained with random data?
Are massive neural networks just lookup tables or do they truly learn something?
Today’s episode will be about memorisation and generalisation in deep learning, with Stanislaw Jastrzębski from ...
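To make the memorisation question concrete, here is a hedged sketch of the classic random-label test (an assumed setup, not the exact experiments discussed with Stanislaw): shuffle the labels so there is nothing to generalise from, and check whether the network can still fit the training set.

```python
# Sketch of the random-label memorisation test. If training accuracy still
# approaches 1.0 on random labels, the network is memorising, not generalising.
import torch
import torch.nn as nn

torch.manual_seed(0)
X = torch.randn(512, 20)                  # random inputs
y = torch.randint(0, 2, (512,))           # random labels: no true input-label relation

model = nn.Sequential(nn.Linear(20, 256), nn.ReLU(), nn.Linear(256, 2))
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

for epoch in range(500):
    opt.zero_grad()
    loss = loss_fn(model(X), y)
    loss.backward()
    opt.step()

acc = (model(X).argmax(dim=1) == y).float().mean().item()
print(f"training accuracy on random labels: {acc:.2f}")  # near 1.0 => memorisation
```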
November 5, 2019
In this episode I speak with Jon Krohn, author of Deep Learning Illustrated, a book that makes deep learning easier to grasp.
We also talk about some important guidelines to take into account whenever you implement a deep learning model, how to deal with ...
November 4, 2019
In this episode I explain how a research group from the University of Lübeck tamed the curse of dimensionality for the generation of large medical images with GANs. The problem is not as trivial as it seems. ...
October 27, 2019
Some of the most powerful NLP models, like BERT and GPT-2, have one thing in common: they all use the transformer architecture. This architecture is built on top of another important concept already known to the community: self-attention. In this episode I ...
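For readers who want to see the building block in code, here is a minimal NumPy sketch of single-head scaled dot-product self-attention; the dimensions and the single-head setup are illustrative assumptions, not the actual BERT or GPT-2 configuration.

```python
# Minimal sketch of scaled dot-product self-attention (single head).
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def self_attention(X, Wq, Wk, Wv):
    """X: (seq_len, d_model). Returns one attention-weighted vector per token."""
    Q, K, V = X @ Wq, X @ Wk, X @ Wv          # project tokens to queries/keys/values
    scores = Q @ K.T / np.sqrt(K.shape[-1])   # similarity of every token with every other
    weights = softmax(scores, axis=-1)        # each row sums to 1
    return weights @ V                        # mix value vectors by attention weight

rng = np.random.default_rng(0)
seq_len, d_model, d_k = 5, 16, 8
X = rng.normal(size=(seq_len, d_model))
Wq, Wk, Wv = (rng.normal(size=(d_model, d_k)) for _ in range(3))
print(self_attention(X, Wq, Wk, Wv).shape)    # (5, 8)
```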