You'll be very efficient at accomplishing old use cases, which is just as well, because you'll need to be - the market for them is most probably commoditized, very competitive, low-margin, and dominated by big actors with economies of scale. Not a comfortable place to be.
You'll probably get into very few new use cases, because by the time your 3-5 year quarantine on their enabling technology passes, they're already explored, if not conquered, by several competitors. The exception is new use cases without a new enabling technology, but those tend to resemble fads; you'll have no tech moat, so again it'll be a very competitive, low-margin market.
New techs create value only when they solve some use case more efficiently for someone. Not all new techs do this - especially in software engineering, people ship new tech for fun and name recognition all the time, and managers allow it as a perk because of the tight labor market. But it's a mistake to put all new techs in this latter category.
New techs are also tech moats, giving you negotiating power to capture some of the created value. Without a tech moat, you'd better have a non-tech one (regulatory, institutional, market structure, economies of scale), because otherwise competition will quickly push the value you capture down to your marginal cost - if that.
Tech stacks very, very, very rarely enable new use cases. They are the equivalent of fashion statements by young developers who haven't learned what it means to support software for 10-20 years.
I think your argument works fine-ish for backends, but it's bananas to suggest that jQuery is the same thing as React or Svelte. I do security for a living, and maybe 100% of all jQuery sites have XSS. If I find a React page I can just grep for dangerouslySetInnerHTML and I'm 80% of the way there. (I am exaggerating, but hopefully my point is clear: from both a development perspective and a safety perspective, React is not just a shiny new version of jQuery.)
I have seen a lot of sites get worse as a result of migrating from server-side rendering to client-side rendering. Things like broken back buttons, massive page sizes, and terrible rendering performance on low-powered hardware.
An example that comes to mind is Salesforce. They introduced a new "Lightning Experience" that seems to use a newer stack. But it's taken them ages to migrate all of the features over, forcing users to switch back and forth between the old and the new. It can also be a real beast to render on something like a Surface tablet. It must be costing them an enormous amount of money to support both UIs, and I have to wonder if it was really worth it versus applying some new CSS and more incremental improvements.
React being 'declarative' tends to produce more stable UX (e.g. complex search forms). It makes integrating third-party components smoother too.
1. Frontend was less mature to begin with
2. Frontend has a unique, definitional state management problem in the DOM
3. Actually, we can make real progress sometimes
4. Really, frontend hasn’t made strides, you’re just ignoring $x
5. Several/none of the above?
(I think real progress is possible, and disillusionment with some new technologies should not prevent us from trying. But also that the DOM's unique state management problem is so overt that functional approaches almost couldn't help but dominate.)
React is a shiny new jQuery - that's all it is. WebAssembly, Canvas, WebRTC, etc. those are something different. Those enable new use cases.
Thought experiment: why does your argument not apply to, say, C? Why bother doing new language or library design? It’s all int 80h eventually.
By the way, I do think the virtual DOM is either a fad or simply an overstatement. By overstatement I mean that batching updates is one of the most ordinary things developers have always done; from that perspective there's nothing new here.
From the fad perspective, there is no reason the regular DOM cannot be fast and schedule batched updates to optimize frame rate (with one less level of indirection). The virtual DOM may actually be a problem in and of itself, because it assumes it knows how to schedule updates better than the actual DOM - even if that is true today, why would it necessarily be true tomorrow?
I'm very hesitant about my assumptions here, and I am confident I'm missing an important point. So if you can clear up my understanding I appreciate it.
jQuery makes XSS more common in several ways, and some of them are really just the influence a jQuery frontend has on how the back end works. Some of those ways are pretty subtle, e.g. CSP bypass gadgets in data attributes (which are very commonplace in jQuery libraries). By contrast, React, by building a DOM, has contextual information that jQuery lacks. Go's HTML templating is unique on the server side in that sense, since it too actually understands whether it's in a text-node context, a script context, an attribute context, or an inline-JS (such as onclick) context, and hence the correct way to treat arbitrary input.
Of course, using React doesn't make you immune. I got XSS against every site that used ZeitJS, for example. But the pattern that led to that (generated code dumped into a script tag) is a common pattern in pre-React frameworks.
Is that boring? It certainly has some issues.