Personally, the fantastic performance of LLMs and Stable Diffusion has hurt my motivation for reading the documentation of cool libraries or studying another programming language book, because it seems to me all of these "classical" methods are going to be redundant soon.
I mean, how far are we from interactively describing a problem in English and having an AI translate it directly into ASM instructions? Aren't LLMs "transformers" in essence?
Yeah, ChatGPT spits out hallucinations and downright wrong answers to some technical questions, but it's only GPT-3. Ridiculous amounts of funding are being redirected to this space. They're gonna improve these models, they always do [1] [2].
And where do we stand here? Would you have studied a book on "Optimal Design of Vacuum Tubes" after Shockley introduced the transistor? Would you have read the documentation for a steamship while Tesla was demonstrating his coil to the public?
[1]: https://miro.medium.com/v2/resize:fit:651/1*aksFYLAhO-I85DST29svXQ.png
[2]: https://arxiv.org/abs/2302.14045