Word Embedding explained in one slide

How artificial intelligence understands semantics

Posted by Frag on October 30, 2016

Word embeddings are one of the most powerful concepts of deep learning applied to Natural Language Processing. Every word in the dictionary (the set of words recognized for the specific task) is transformed into a numeric vector with a fixed number of dimensions. Everything that follows, classification, semantic analysis, and so on, is performed on those vectors.
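To make the idea concrete, here is a minimal sketch in Python with NumPy. It shows the mechanics only: a toy vocabulary, a randomly initialized embedding matrix standing in for weights that a real model would learn during training, and a cosine similarity function to illustrate that downstream work happens on the vectors. The vocabulary, dimension, and function names are illustrative choices, not taken from the slide.

```python
import numpy as np

# Toy vocabulary mapping each word to a row index (illustrative, not from the slide).
vocab = {"king": 0, "queen": 1, "man": 2, "woman": 3}
embedding_dim = 4

# In a real model the embedding matrix is learned during training;
# here it is random just to demonstrate the lookup mechanics.
rng = np.random.default_rng(0)
embeddings = rng.normal(size=(len(vocab), embedding_dim))

def embed(word: str) -> np.ndarray:
    """Map a word to its vector: a simple row lookup in the embedding matrix."""
    return embeddings[vocab[word]]

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    """Semantic tasks (similarity, analogy, ...) operate on the vectors themselves."""
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

print(embed("king"))                                     # the 4-dimensional vector for "king"
print(cosine_similarity(embed("king"), embed("queen")))  # similarity between two word vectors
```

With trained embeddings (rather than the random ones above), semantically related words end up with high cosine similarity, which is exactly what makes the vectors useful for everything downstream.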

Here is a slide that explains it with a bit of algebra and some user-friendly text.

Feel free to download it, and don't forget to share.


Before you go

If you enjoyed this post, you will love the newsletter of Data Science at Home. It’s my FREE digest of the best content in Artificial Intelligence, data science, predictive analytics and computer science. Subscribe!