Autoencoders explained in one slide

Posted by Francesco Gadaleta on April 28, 2015

Autoencoders are a remarkably simple trick for learning the structure of input data. A neural network is trained to reproduce its input at the output, so the weights of the hidden layer are forced to capture the geometry of the data, if there is any. When the number of hidden units is smaller than the dimensionality of the input, an autoencoder resembles Principal Component Analysis (PCA). The main difference is that an autoencoder's non-linear activation function can capture non-linear structure in the data, something PCA, being a linear method, cannot do.
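To make this concrete, here is a minimal sketch of a one-hidden-unit autoencoder written from scratch with NumPy. The toy dataset, network sizes, learning rate and number of iterations are all illustrative choices, not prescriptions: the data lives in 3-D but is generated from a single underlying parameter, so a non-linear bottleneck of one unit can compress it.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy data: 200 points in 3-D generated from one hidden parameter t,
# so the data lies near a curved 1-D manifold inside R^3.
t = rng.uniform(-1, 1, size=(200, 1))
X = np.hstack([t, t ** 2, np.sin(2 * t)]) + 0.01 * rng.normal(size=(200, 3))

n_in, n_hidden = X.shape[1], 1              # bottleneck: 1 hidden unit < 3 inputs
W1 = rng.normal(0, 0.5, (n_in, n_hidden)); b1 = np.zeros(n_hidden)
W2 = rng.normal(0, 0.5, (n_hidden, n_in)); b2 = np.zeros(n_in)

def forward(X):
    H = np.tanh(X @ W1 + b1)                # non-linear encoder
    return H, H @ W2 + b2                   # linear decoder (reconstruction)

lr = 0.1
_, X_hat = forward(X)
loss_start = np.mean((X_hat - X) ** 2)      # reconstruction error before training

for _ in range(2000):                       # plain batch gradient descent
    H, X_hat = forward(X)
    G = 2 * (X_hat - X) / len(X)            # gradient of MSE w.r.t. X_hat
    gW2, gb2 = H.T @ G, G.sum(0)
    GH = (G @ W2.T) * (1 - H ** 2)          # backprop through tanh
    gW1, gb1 = X.T @ GH, GH.sum(0)
    W1 -= lr * gW1; b1 -= lr * gb1
    W2 -= lr * gW2; b2 -= lr * gb2

_, X_hat = forward(X)
loss_end = np.mean((X_hat - X) ** 2)        # reconstruction error after training
print(loss_start, loss_end)
```

Replacing `np.tanh` with the identity function turns the network into a purely linear autoencoder, whose optimal solution spans the same subspace as PCA; the non-linearity is exactly what lets it do better on curved data like this.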


Before you go

If you enjoyed this post, you will love the Data Science at Home newsletter. It's my FREE digest of the best content in artificial intelligence, data science, predictive analytics and computer science. Subscribe!