A Deep Dive into Word Embeddings for Sentiment Analysis
By Bert Carremans
When applying one-hot encoding to words, we end up with sparse (mostly zeros), high-dimensional vectors: one dimension per word in the vocabulary. On large data sets, this can cause performance issues.
Additionally, one-hot encoding does not take into account the semantics of the words, so similar words such as airplane and aircraft get completely unrelated representations.
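A minimal sketch of both problems, using a tiny hypothetical vocabulary (not from the article): each word becomes a vector as long as the vocabulary with a single 1, and related words end up orthogonal.

```python
# Hypothetical five-word vocabulary; real data sets have tens of thousands.
vocab = ["airplane", "aircraft", "movie", "great", "terrible"]
word_to_index = {word: i for i, word in enumerate(vocab)}

def one_hot(word):
    """Return a sparse vector of length len(vocab) with a single 1."""
    vec = [0] * len(vocab)
    vec[word_to_index[word]] = 1
    return vec

print(one_hot("airplane"))  # [1, 0, 0, 0, 0]
print(one_hot("aircraft"))  # [0, 1, 0, 0, 0]

# Dot product is 0: the encoding carries no hint that the two words are related.
similarity = sum(a * b for a, b in zip(one_hot("airplane"), one_hot("aircraft")))
print(similarity)  # 0
```

With a realistic vocabulary the vectors would have tens of thousands of dimensions, almost all zero, which is exactly the sparsity and dimensionality problem described above.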
Published on freecodecamp.org