Activate deep learning neurons faster with Dynamic ReLU (ep. 101)

In this episode I briefly explain the concept behind activation functions in deep learning. One of the most widely used activation functions is the rectified linear unit (ReLU). While there are several flavors of ReLU in the literature, here I focus on Dynamic ReLU, a very interesting approach that keeps computational complexity low while improving performance quite consistently.
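For listeners who want to see how the idea translates into code: in Dynamic ReLU the slopes and intercepts of a piecewise-linear activation are computed on the fly from the input by a small hyper-network. Below is a minimal PyTorch sketch of the channel-wise variant described in the paper referenced at the end of these notes; the squeeze-and-excitation-style hyper-network, the reduction factor, the number of pieces K, and the initialization constants are illustrative assumptions, not the authors' reference implementation.

import torch
import torch.nn as nn

class DynamicReLUB(nn.Module):
    """Sketch of a channel-wise Dynamic ReLU.

    The activation is y_c = max_k(a_k,c(x) * x_c + b_k,c(x)), where the K slopes
    and intercepts per channel are produced by a small hyper-network conditioned
    on the globally pooled input. Hyper-parameters here are illustrative.
    """

    def __init__(self, channels, reduction=4, k=2):
        super().__init__()
        self.k = k
        # Hyper-network: global context -> 2*K coefficients per channel
        self.hyper = nn.Sequential(
            nn.Linear(channels, channels // reduction),
            nn.ReLU(inplace=True),
            nn.Linear(channels // reduction, 2 * k * channels),
            nn.Sigmoid(),  # squash residuals to [0, 1]
        )
        # Residual scales and base coefficients (assumed ReLU-like init: a1=1, rest 0)
        self.register_buffer("lambdas", torch.tensor([1.0] * k + [0.5] * k))
        self.register_buffer("init_v", torch.tensor([1.0] + [0.0] * (2 * k - 1)))

    def forward(self, x):  # x: (N, C, H, W)
        n, c, h, w = x.shape
        context = x.mean(dim=(2, 3))            # global average pooling -> (N, C)
        theta = 2 * self.hyper(context) - 1     # map residuals to [-1, 1]
        theta = theta.view(n, c, 2 * self.k)    # per-channel residuals
        coeffs = self.init_v + self.lambdas * theta          # (N, C, 2K)
        a = coeffs[..., : self.k].unsqueeze(-1).unsqueeze(-1)  # slopes (N, C, K, 1, 1)
        b = coeffs[..., self.k :].unsqueeze(-1).unsqueeze(-1)  # intercepts
        # Piecewise-linear activation: element-wise max over the K branches
        return (a * x.unsqueeze(2) + b).max(dim=2).values

if __name__ == "__main__":
    act = DynamicReLUB(channels=16)
    y = act(torch.randn(2, 16, 8, 8))
    print(y.shape)  # torch.Size([2, 16, 8, 8])

Because the hyper-network only sees a globally pooled vector and two small linear layers, the extra compute on top of a plain ReLU stays very small, which is the point made in the episode.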

This episode is supported by pryml.io. At pryml we let companies share confidential data. Visit our website.

Don’t forget to join our Discord channel to propose new episodes or discuss previous ones.

References

Chen et al. (2020), “Dynamic ReLU”, https://arxiv.org/abs/2003.10027


