This was an era when language modeling was considered only a pretraining step: you were then supposed to fine-tune the model further to get a classifier or some similarly specialized model.
That was in 2015, with RNN LMs; the models in that blog post are all much, much weaker than GPT-1.
And already looking at those examples in 2015, you could maybe see the future potential. But no one was thinking that scaling up would work as effectively as it does.
2015 is also far from the first time we had such LMs. Mikolov had been doing RNN LMs since 2010, and Sutskever in 2011. You might find even earlier examples of NN LMs.
(Before that, the state of the art was mostly N-gram models.)
Thanks for posting some of the history... "You might find even earlier examples" is pretty tongue-in-cheek, though. [1], expanded in 2003 into [2], has 12,466 citations, 299 of them by 2011 (according to Google Scholar, which seems to conflate the two versions). The abstract of [2] mentions that their "large models (with millions of parameters)" "significantly improves on state-of-the-art n-gram models, and... allows to take advantage of longer contexts." Progress between 2000 and 2017 (transformers) was slow, and models barely got bigger.
And what people forget about Mikolov's word2vec (2013) is that it actually took a huge step backwards from NNs like [1] that inspired it, removing all the hidden layers in order to train fast on lots of data (see the sketch below the references).
[1] Yoshua Bengio, Réjean Ducharme, Pascal Vincent, 2000, NIPS, A Neural Probabilistic Language Model
[2] Yoshua Bengio, Réjean Ducharme, Pascal Vincent, Christian Jauvin, 2003, JMLR, A Neural Probabilistic Language Model
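A minimal sketch of that architectural difference, in PyTorch with made-up toy sizes (this is not the original C code of word2vec nor the exact 2000 model): the Bengio-style NPLM keeps a tanh hidden layer between the concatenated context embeddings and the softmax, while skip-gram word2vec drops the hidden layer entirely and scores words with a single dot product between two embedding tables.

```python
import torch
import torch.nn as nn

# Toy sizes, chosen only for illustration.
V, D, H, CTX = 10_000, 100, 500, 4  # vocab, embedding dim, hidden dim, context length

class NPLM(nn.Module):
    """Bengio et al. style neural probabilistic LM:
    concatenated context embeddings -> tanh hidden layer -> softmax over vocab."""
    def __init__(self):
        super().__init__()
        self.emb = nn.Embedding(V, D)
        self.hidden = nn.Linear(CTX * D, H)
        self.out = nn.Linear(H, V)

    def forward(self, context_ids):               # (batch, CTX)
        x = self.emb(context_ids).flatten(1)      # (batch, CTX*D)
        return self.out(torch.tanh(self.hidden(x)))  # logits for the next word

class SkipGram(nn.Module):
    """word2vec skip-gram: no hidden layer at all, just an input embedding and
    an output embedding; the score is a dot product. (The real word2vec also
    avoids the full softmax via negative sampling / hierarchical softmax.)"""
    def __init__(self):
        super().__init__()
        self.emb_in = nn.Embedding(V, D)
        self.emb_out = nn.Embedding(V, D)

    def forward(self, center_ids):                # (batch,)
        return self.emb_in(center_ids) @ self.emb_out.weight.T  # logits over context words
```

Removing the hidden layer is exactly what made it cheap enough to train on billions of tokens, at the cost of any nonlinear interaction between context words.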
N-gram models had been superseded by RNNs by that time. RNNs struggled with long-range dependencies, but useful n-grams were essentially capped at n=5 because of sparsity, and RNNs could do better than that.
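A toy back-of-the-envelope illustration of that sparsity point, with assumed numbers (a 50k-word vocabulary and a billion-token corpus; neither figure is from the thread): the number of possible n-grams grows as V**n, while a corpus can witness at most about one distinct n-gram per token position, so by n=5 almost every n-gram you would need at test time has a count of zero and you are living entirely off smoothing and backoff.

```python
# Rough upper bound on how much of the n-gram space a corpus can even observe.
V = 50_000              # assumed vocabulary size
corpus_tokens = 1e9     # assumed corpus size: one billion tokens

for n in range(1, 8):
    possible = V ** n                                   # distinct n-grams that exist
    seen_at_most = min(1.0, corpus_tokens / possible)   # fraction observable, at best
    print(f"n={n}: possible n-grams ~ {possible:.1e}, "
          f"max fraction observable ~ {seen_at_most:.1e}")
```

For n=5 the possible-n-gram count is on the order of 3e23, so even a billion-token corpus can cover at most a vanishing fraction of them; an RNN sidesteps this by sharing parameters across contexts instead of counting each one separately.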