The cycle of picking a tech, jumping ship to it, religiously evangelising it, riding the wave and then jumping ship to the next related tech is typical in my opinion.
I try hard to correct for this bias but sometimes struggle with exactly the same thing. There's just something about wanting to have a uniform "world-view" with fewer explanatory variables that never stops being motivating.
Your resume needs to have lots of fashionable buzzwords rather than pragmatic, good-enough / keep-it-simple choices. You must keep on learning (lots of things rather than mastering any one thing). I can write a really nice site in standard Django with some jQuery, and it will take me half the time that adding React to it will. But adding React will make me much more employable and get me a better wage.
It seems like at some point around five years ago the three-tier architecture with its division of labor vanished overnight. I'm not saying things were perfect back then, but I've never seen any demonstrably objective reasons why it was replaced.
I went from having to be mindful of a few configuration items which arose from deploying my war to different environments to slogging through configuration hell in the Terraform and AWS world. I've been learning way more about Ops than I ever cared to know while at the same time becoming a -10x developer in terms of shipping business value.
Yeah, duh. I render simple HTML templates on the server and serve them as browsers expect them; not with a thousand lines of JS on top.
I'm reminded of a post from a few years ago where someone's website had a table of items with [delete] links and would take database actions based on GET requests to those URLs. Who cares? It looks the same to a human browsing it.
And then it got crawled by a search engine which followed all the links to see where they went.
But if you're not doing anything unusual like that, I don't see how prefetching HTML would cause any problems.
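The crawler story above comes down to HTTP semantics: GET is supposed to be safe, so anything that follows links (crawlers, prefetchers) assumes it can issue GETs freely. A minimal WSGI sketch of the fix, with hypothetical paths and an in-memory "database", refusing to mutate state on anything but POST:

```python
# Hypothetical in-memory "database" of items with [delete] actions.
items = {1: "widget", 2: "gadget"}

def app(environ, start_response):
    path = environ["PATH_INFO"]
    method = environ["REQUEST_METHOD"]
    if path.startswith("/delete/"):
        item_id = int(path.rsplit("/", 1)[1])
        if method != "POST":
            # A crawler following <a href="/delete/1"> lands here, harmlessly.
            start_response("405 Method Not Allowed", [("Allow", "POST")])
            return [b"use POST"]
        # Real deletions only happen via an explicit POST (e.g. a form submit).
        items.pop(item_id, None)
        start_response("200 OK", [("Content-Type", "text/plain")])
        return [b"deleted"]
    start_response("404 Not Found", [("Content-Type", "text/plain")])
    return [b"not found"]
```

With delete links replaced by small POST forms, the page looks the same to a human but a search engine's GETs can no longer empty the table.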
To be fair, it has reduced our server costs a bit (after maybe 6 months of developer time). I am unconvinced it will be worth the hassle.
Are the improvements worth $125,000?
Though one comment I saw about Kubernetes on here a few weeks back concerned an old schooler like me. The guy suggested that if something goes wrong, just kill the pod and let kubernetes bring up another. Apparently that's the way you are supposed to do things. Something seems really wrong with that approach to me. Just throw resources at the problem with very little understanding of why things went wrong.
Tbh, I have no real world experience in this, so it might just be my own delusion. However, I've recently started getting into self-hosting some of the services I use. I'm using a simpler infrastructure than what OP described and while it is the right choice for me and a useful skill to have, I feel like it absolutely won't get me anything in the sysadmin/ops/etc. job space. I've actually considered adding more "enterprisey" tech to it (like Ansible or comparable stuff) just to make it more sexy for recruiters.
It is typical for devs.
Meanwhile ops have to support every half-arsed tyre-fire technology until the end of time, because a dev wanted to try it once, and now it’s in prod with users relying on it.
Kubernetes is in a sense the pushback against that “do what you want, as long as k8s is up, what you run in your pods is your problem, not ours”.
Webdev does seem to pay better than most other stuff, though.
Still happily using JEE/Spring/ASP.NET + VanillaJS as far as webdev projects are concerned.
And while other AOT compiled languages might offer a little bit more performance, they lack in tooling and libraries.
Until somebody cyberattacks those pods and steals all personal data of your users because the devs didn't bother to apply security patches. But hey, it's not your problem. You are not responsible for the pods. k8s is still up.
But that has always been true. If a dev leaves a SQL injection for example in the code and it got penetrated, absolutely no one would blame the sysadmin for that.
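To make the SQL injection example concrete, here's a minimal sketch with stdlib sqlite3 and made-up table/column names: string-built SQL lets the attacker's quote rewrite the query, while a parameterized query treats the input as plain data.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, secret TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', 's3cret')")

user_input = "nobody' OR '1'='1"

# Vulnerable: the quote in user_input closes the string literal,
# turning the WHERE clause into "name = 'nobody' OR '1'='1'".
leaked = conn.execute(
    f"SELECT secret FROM users WHERE name = '{user_input}'"
).fetchall()  # matches every row

# Safe: the driver binds the value; no row has that literal name.
safe = conn.execute(
    "SELECT secret FROM users WHERE name = ?", (user_input,)
).fetchall()  # matches nothing
```

And indeed, no amount of sysadmin work at the infrastructure layer can fix the first query; only the dev can.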
My interpretation of DevOps is that it's one team with shared responsibility and not "shove your stuff in that pod and don't bother me."
E.g. you could imagine some extreme case in which dependency X, version N has a critical vulnerability - but at the same time, the developed software relies on exactly version N being present and will break horribly on any other version.
You'd need Dev and Ops to actively work together to solve this problem and no amount of layering or containerization would get you around that.
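A concrete (hypothetical, `libfoo` is made up) version of that conflict, as it would surface with pip's requirements and constraints files:

```
# requirements.txt -- Dev's pin: the app is known to work only on this release
libfoo==1.4.2

# constraints.txt -- Ops' security policy: 1.4.2 ships a critical CVE
libfoo!=1.4.2
```

`pip install -r requirements.txt -c constraints.txt` cannot satisfy both lines and fails to resolve; no container boundary makes the conflict go away, it just decides whose build breaks.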
Very infuriating mindset to deal with.
Imagine if people could all just get along.
I've been on both sides of this divide myself, but have spent the last fifteen years or so as a developer. In my experience, developers will burn the whole place down if we're given the chance.
We're focused on writing code, and it's boring to write the same code over and over: we want to write new code, in exciting ways, and we are surprised when it fails in exciting ways.
We're focused on delivering features; our incentives are all about getting it done, not about getting it done well (our industry doesn't even have a consistent view of what's good or bad: note that C/C++ are still used in 2019) or supportably. Some organisations really try hard to properly incentivise developers, but I've not seen it really work yet. DevOps is an attempt to incentivise developers by getting us to buy into ops. I've read a lot of success stories, but not seen a lot of success with my own eyes.
I do my best to be diligent, I do my best to wear my Ops hat — yet I still fall down. I don't think that it's unavoidable, but so far I've not avoided it, and I've not seen others avoid it either.
The real problem always starts when they become separate cost centers with separate budgets and have to independently show a 'profit'.
This worked great when several other business systems relied on their vanity toy, and invariably the API would change with every release.
There's a balance to be struck between 'never change anything because it's always worked' and 'new shiny every week'. In my experience it's an absolute nightmare getting people to agree where the line is, and on top of that, getting management to buy in and push back when either side oversteps.
To balance this with a counter example from the quieter group of people not "hot for the latest tech":
I'm a "dev" and I've never had this problem, however I work for a small company, where everything I make and deploy I also have to maintain in some form or other. This gives me a strong bias towards operational simplicity and trying to essentially eliminate dev ops... New tech that is both complex and opaque, without clear-cut advantages, is basically repulsive to me, because trust and reliability without constant attention and tweaking are important.
- zero-downtime deployments (yes, you can have downtime, but every time you deploy an app?!)
- scheduling more than one thing (no company has a single product that is only a single binary, or at least nearly no company; there are some unicorns though)
- some kind of automation (this is complex, no matter what you use)
Oddly, a lot of small companies really don't need this. If your customers are mostly businesses in a limited set of time zones, having a maintenance window outside of their business hours is probably easier.
I also think this has a lot to do with how devs spend our time: with the tech itself. Whether your application is running on Kubernetes or a box in your garage matters to precisely zero customers as long as it performs well, but as developers we spend our whole day dealing with various APIs and technologies, so we develop an outsized sense of the importance of those things.
One quickly learns that the business has a completely different set of priorities, and tending software like little bonsai trees is not one of them.
The best way to get promoted at many companies is to write a framework. The best way to get noticed is to write an open source framework. And so on.
Hip technologies are being used in SV, and they have to pay tons of money just to keep the talent pool large and circulating.
Older technologies are used in other cities, and there the market forces aren't so crazy.
But a good Java dev can make plenty of money in SV, and a Go developer will make a competitive salary by Dallas standards but not by SV standards (and probably have a harder time finding a new job).
For the record, I have been using a JVM language as my primary work language since early 2012.
Range of employment options. Possibly salary, though that's more variable. There are some jobs keeping the lights on with legacy tech long after it is done being the hot thing, but typically, with any particular stack, it's a shrinking number of jobs, often with shrinking average real pay, unless it hits a phase where the decline in people able to do it exceeds the decline in work.
If you are riding out the last few years of your tech-focussed career (whether that's before retirement or before moving out of hands-on tech into, e.g., management) that's maybe not so bad, but if you're planning on being in tech for a longer period it's potentially extremely career-limiting not to adapt to current market focus.
I'm not sure this is true. Most of the shops I've been in don't care about whether you know this or that language or library. You're expected to learn that as you need to. Most of what I've seen cut people from interview loops is missing fundamentals.
Then you'd be wise to stick to stuff like Java or .NET, because there are probably millions of jobs requiring them.
The job might not be as interesting as riding every tech wave, but on the plus side there are plenty of tech waves that you save yourself from riding on.
Plus one gets to rescue projects that ended up betting on the wrong waves, getting back to boring old tech.
It is a bit unfair for Cobol, given that its latest revision is from 2014, and verbose as it might be, it supports most of the nice features of any modern multi-paradigm language.
SOAP was/is a pretty stable technology that did exactly what it promised to do, without too many releases or breaking changes for about 10 years.
Even today it has good utility for the situations it is designed for...
RPC over a well known standard format, for tightly coupled endpoints, that require metadata, enforced schema, security, and perhaps transactions.
The big problem for SOAP is it was the default for web services for a long time, when in reality a big shift happened around 2008 where web services were most likely NOT going to fit into those constraints. Just my 2 cents.
And neither of them were selling "magic" stuff like BPEL and BizTalk.
Confirmation bias (where you've spent some time on k8s or whatever, and now you just want to cash in on your time loss, objective criteria be damned)
Generational churn (where you find yourself in a field where everything has been said and done, and you just need a new buzzword on your resume to start over; this goes hand-in-hand with corporate IT longing for fresh and cheap staff and with their stack needing to look sexy)
Big media (where extremely large infrastructure runs on k8s or whatever, and gets disproportional airtime, because cloud providers want to sell you lots of pods, and people not checking whether the proposed arch is a good fit)
When comparing consumer products where there are lots of choices, I find myself picking an OK option and 'falling in love' with it. When I reflect, it's basically a way of cutting through all the reviews: deciding that one is the best means I've no reason to regret buying it or to do any more trawling through reviews and comparisons. I'll just buy this one and be done with it.
This strategy often works, to be fair!
Management (top bosses) often seem to want the latest thing, e.g. Big Data. It doesn't matter that it'll cost a fortune and you'd get better results on a single server.
And if the devs are out of control and pushing for %tech% and getting it, that's management at fault. To be a good manager you need to understand what your employees are doing. I've met too many that don't.
There is also such a thing as being sleepy old.
And there is a reasonable in-between.
So a developer that doesn't want to deal with already solved problems and who wants to advance their knowledge is incentivized to push for jumping to the newest tech.
However going the opposite way (sticking to one reliable tech stack and refusing to change even when something better comes along) could be just as damaging to a business.
How then, do you build a culture where people are open-minded to new tech without feeling obliged to jump on every bandwagon? I don't think I've ever seen an organisation get the balance quite right.
Most developers just don't want to be left behind, so they pick up whatever is trendy at the moment. It's completely rational, because knowing what is trendy gets you hired.
However, implementing what's trendy, without carefully weighing pros and cons, is what's dangerous.