78.9% accuracy on sentiment classification of tweets with no neutral class is actually slightly _worse_ than what you get in scikit-learn with plain old bag of words and Logistic Regression: https://github.com/williamsmj/sentiment/blob/master/sentimen....
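For reference, the baseline I mean is roughly the following (a minimal sketch; the toy texts and labels here are made-up stand-ins for the tweet corpus, and the linked repo has the actual code):

```python
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Toy stand-in for the tweet dataset (positive = 1, negative = 0)
texts = [
    "love this movie great acting",
    "great film really enjoyed it",
    "wonderful story loved every minute",
    "terrible plot awful acting",
    "hated it boring and bad",
    "worst film dreadful waste of time",
]
labels = [1, 1, 1, 0, 0, 0]

# Plain bag of words + logistic regression, no embeddings involved
model = make_pipeline(CountVectorizer(), LogisticRegression())
model.fit(texts, labels)

print(model.predict(["great wonderful film", "awful boring plot"]))
```

In practice you'd also hold out a test set and tune the vectorizer (n-grams, min_df, tf-idf weighting), which is usually where most of the remaining accuracy comes from.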
Could you suggest improvements to the DL algorithm I wrote?
I have to point out, though, that it's a bit dangerous to measure classifier accuracy as the percentage of correctly classified samples when you have no idea how the test data is skewed toward one class or the other (for binary classification; the same caveat generalizes to multi-class problems).
It's always much better to report the F1 score, or to just examine a confusion matrix of the predictions.
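To see why accuracy alone can mislead, here's a small illustration with a hypothetical 90/10 class skew (the numbers are made up for the example): a classifier that always predicts the majority class looks accurate but has zero F1.

```python
# Hypothetical skewed test set: 90 negative samples, 10 positive
y_true = [0] * 90 + [1] * 10
# A useless "classifier" that always predicts the majority class
y_pred = [0] * 100

accuracy = sum(t == p for t, p in zip(y_true, y_pred)) / len(y_true)

# F1 for the positive class: harmonic mean of precision and recall
tp = sum(t == 1 and p == 1 for t, p in zip(y_true, y_pred))
fp = sum(t == 0 and p == 1 for t, p in zip(y_true, y_pred))
fn = sum(t == 1 and p == 0 for t, p in zip(y_true, y_pred))
precision = tp / (tp + fp) if tp + fp else 0.0
recall = tp / (tp + fn) if tp + fn else 0.0
f1 = 2 * precision * recall / (precision + recall) if precision + recall else 0.0

print(accuracy)  # 0.9 -- looks great
print(f1)        # 0.0 -- the classifier never finds a positive sample
```

scikit-learn's `f1_score` and `confusion_matrix` in `sklearn.metrics` compute the same quantities directly.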
I haven't had the opportunity to measure the difference in quality, and I've mostly used word2vec until now (with vectors I trained myself after lemmatizing and PoS-tagging a corpus), but the fact that GloVe provides pretrained models from Twitter, Wikipedia and so on is pretty nice.
Word2vec does have a Google News corpus model on its official page, but there are many more pretrained word2vec models in the literature.
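One nice thing about the GloVe downloads is that they're plain text, one word per line followed by its vector, so you don't need any special tooling to poke at them. A minimal sketch of loading that format and comparing words by cosine similarity (the tiny 3-d snippet below is made up; a real file would be one of the Twitter or Wikipedia downloads from the GloVe page):

```python
import math

# GloVe files are plain text: "<word> <v1> <v2> ... <vd>" per line.
# Tiny made-up 3-d snippet standing in for a real download.
glove_text = """\
king 0.5 0.7 0.1
queen 0.5 0.6 0.2
apple -0.4 0.1 0.9
"""

def load_glove(lines):
    """Parse GloVe-format lines into {word: vector}."""
    vectors = {}
    for line in lines:
        word, *values = line.split()
        vectors[word] = [float(v) for v in values]
    return vectors

def cosine(a, b):
    """Cosine similarity between two vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

vecs = load_glove(glove_text.splitlines())
print(cosine(vecs["king"], vecs["queen"]))  # high: nearby vectors
print(cosine(vecs["king"], vecs["apple"]))  # low: unrelated vectors
```

If you're already in the gensim ecosystem, I believe its `KeyedVectors` loader can read both word2vec and GloVe-style text formats, so you can use one API for either set of pretrained vectors.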