February 3, 2026
Join the discussion on our Discord server
After reinforcement learning agents excelled at playing Atari video games, mastering Go with AlphaGo, trading financial markets, and modeling language, let me tell you the real story here. In this episode I want to shi...
February 3, 2026
In this episode I explain how a research group from the University of Lübeck tamed the curse of dimensionality in generating large medical images with GANs. The problem is not as trivial as it seems. ...
February 3, 2026
Training neural networks faster usually requires powerful GPUs. In this episode I explain an interesting method from a group of researchers at Google Brain who can train neural networks faster...
February 3, 2026
Some of the most powerful NLP models, like BERT and GPT-2, have one thing in common: they all use the transformer architecture. That architecture is built on top of another important concept already known to the community: self-attention. In this episode I ...
February 3, 2026
The brutal truth about why Silicon Valley is blowing billions on glorified autocomplete while pretending it’s the next iPhone. We’re diving deep into the AI investment […]
February 3, 2026
VortexNet uses actual whirlpools to build neural networks. Seriously. By borrowing equations from fluid dynamics, this new architecture might solve deep learning’s toughest problems—from vanishing gradients […]
February 3, 2026
Also on YouTube. Two AI experts who actually love the technology explain why chasing AGI might be the worst thing for AI’s future—and why the […]
December 23, 2025
Mark Brocato built Mockaroo—the tool that taught millions of developers how to fake data. Now, as Head of Engineering at Tonic.ai, he’s building the AI agent […]
November 26, 2025
Most companies don’t have an AI problem. They have a decision-making problem. Matt Lea, founder of Schematical and CloudWarGames, has spent nearly 20 years helping tech […]
November 12, 2025
LLMs generate text painfully slowly, one low-information token at a time. Researchers just figured out how to compress 4 tokens into smart vectors and cut costs […]