I'm curious what comes once this "shove an LLM into everything and call it a day" phase is over.
I'm also not exactly convinced when some CEO fires 100 employees in these market conditions and then credits AI for it. Sorry, but these aren't exactly companies moving the needle forward (mostly BPO and low-margin non-tech businesses, or companies that have been around a while and still don't seem to get anywhere). It isn't that sexy to say "we fired 100 people to save $5 million in opex"; I'm sure they'd rather raise more money by slapping "AI" onto their brand.
So what's next?
Are you working on something interesting that pushes the boundaries? Or what are you following that some of us don't know much about?
Then we had RNNs, which feed their output back into the input. This gave the networks some form of memory.
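To make the "output fed back into the input" idea concrete, here's a minimal sketch of a vanilla RNN cell in numpy. The weight names (W_x, W_h, b) are made up for illustration, not from any particular library:

```python
import numpy as np

# Minimal RNN cell: the previous hidden state h is fed back in
# alongside each new input x, which is what gives the network "memory".
rng = np.random.default_rng(0)
input_dim, hidden_dim = 4, 8
W_x = rng.normal(size=(hidden_dim, input_dim)) * 0.1   # input weights
W_h = rng.normal(size=(hidden_dim, hidden_dim)) * 0.1  # recurrent weights
b = np.zeros(hidden_dim)

def rnn_step(h, x):
    # The new state depends on both the current input and the fed-back state.
    return np.tanh(W_x @ x + W_h @ h + b)

h = np.zeros(hidden_dim)
for t in range(5):                 # process a sequence of 5 inputs
    x = rng.normal(size=input_dim)
    h = rnn_step(h, x)             # h carries information across time steps

print(h.shape)  # (8,)
```

Because h is threaded through every step, information from early inputs can still influence later outputs, which is the whole point.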
Then we had Transformers, which are basically parallel processors: project the input into three matrices in parallel (queries, keys, values) and combine them via attention. This was basically just a better form of compression, applicable to everything.
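Here's a rough numpy sketch of that "three projections combined" step, i.e. single-head scaled dot-product attention from "Attention Is All You Need". The weight names are invented for illustration:

```python
import numpy as np

# Scaled dot-product attention: project the input X into queries, keys,
# and values in parallel, then combine them.
rng = np.random.default_rng(0)
seq_len, d_model = 5, 16
X = rng.normal(size=(seq_len, d_model))

W_q, W_k, W_v = (rng.normal(size=(d_model, d_model)) * 0.1 for _ in range(3))
Q, K, V = X @ W_q, X @ W_k, X @ W_v       # the three parallel projections

scores = Q @ K.T / np.sqrt(d_model)       # compare every token to every other
weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
weights /= weights.sum(axis=-1, keepdims=True)   # softmax over each row

out = weights @ V                         # weighted mix of the values
print(out.shape)  # (5, 16)
```

Unlike the RNN above, nothing here is sequential: every token attends to every other token in one shot, which is what makes Transformers so parallelizable on GPUs.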
The general trend here is that someone discovers an architecture that works out nicely, and then everyone builds something around it. That's probably going to be the future too. Google has some neat things with automated robotics, and OpenAI has their rumored Q* stuff that's supposed to be "accurate" instead of probabilistic.
Then there is the hardware piece, which I know much less about, but I'm hoping companies like Tinycorp or Tenstorrent give us a way to reliably run a full-parameter model like GPT-3 at home.