Make Stochastic Gradient Descent Fast Again (Ep. 113)

There is definitely room for improvement in the family of stochastic gradient descent algorithms. In this episode I explain a relatively simple method that has been shown to improve on the Adam optimizer. But watch out! This approach does not generalize well.
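The notes don't name the method discussed in the episode, so as a point of reference here is a minimal NumPy sketch of the two baselines mentioned above, plain SGD and the Adam optimizer. The function names, hyperparameters, and the toy quadratic objective are illustrative assumptions, not code from the episode.

```python
import numpy as np

def sgd_step(w, grad, lr=0.01):
    """Plain stochastic gradient descent: step against the gradient."""
    return w - lr * grad

def adam_step(w, grad, m, v, t, lr=0.001, beta1=0.9, beta2=0.999, eps=1e-8):
    """One Adam update: running averages of the gradient (m) and its
    square (v), with bias correction for the early steps."""
    m = beta1 * m + (1 - beta1) * grad
    v = beta2 * v + (1 - beta2) * grad ** 2
    m_hat = m / (1 - beta1 ** t)   # bias-corrected first moment
    v_hat = v / (1 - beta2 ** t)   # bias-corrected second moment
    return w - lr * m_hat / (np.sqrt(v_hat) + eps), m, v

# Toy comparison on f(w) = w^2 (gradient 2w), starting from w = 5.0.
w_sgd, w_adam = 5.0, 5.0
m, v = 0.0, 0.0
for t in range(1, 201):
    w_sgd = sgd_step(w_sgd, 2 * w_sgd, lr=0.1)
    w_adam, m, v = adam_step(w_adam, 2 * w_adam, m, v, t, lr=0.1)
print(w_sgd, w_adam)  # both end up close to the minimum at w = 0
```

Any method claiming to improve on Adam would replace `adam_step` above while keeping the same loop structure: compute a stochastic gradient, update the optimizer state, take a step.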

Join our Discord channel and chat with us.
