
I think devs often make bad decision makers because in some sense tech is often an addiction rather than a pragmatic choice.

The cycle of picking a tech, jumping ship to it, religiously evangelising it, riding the wave and then jumping ship to the next related tech is typical in my opinion.

I try hard to correct for this bias but sometimes struggle with exactly the same thing. There's just something about wanting to have a uniform "world-view" with fewer explanatory variables that never stops being motivating.




Part of the problem is the hiring process (plus attitudes seen on here).

Your resume needs to have lots of fashionable buzzwords rather than pragmatic good enough / keep it simple choices. You must keep on learning (lots of things rather than mastering any one thing). I can write a really nice site in standard Django with some jQuery, and it will take me half the time that adding React to it will. But adding React will make me much more employable and get me a better wage.


You've touched on some very key problems here.

It seems like at some point around five years ago the three-tier architecture with its division of labor vanished overnight. I'm not saying things were perfect back then, but I've never seen any demonstrable, objective reasons why it was replaced.

I went from having to be mindful of a few configuration items which arose from deploying my WAR to different environments to slogging through configuration hell in the Terraform and AWS world. I've been learning way more about Ops than I ever cared to know while at the same time becoming a -10x developer in terms of shipping business value.


The real trick is to make your site load so fast people swear it's magic. I use a combination of serving things from RAM and https://instant.page to do this with a fairly boring plain old HTML-rendered-on-the-server app. I even got a Progressive Web App out of it too.
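The commenter doesn't say what stack they use for this, but the serve-from-RAM idea itself is small enough to sketch with just the Python standard library (all names here are hypothetical, purely illustrative):

```python
import http.server
from pathlib import Path

def load_into_ram(root: str) -> dict[str, bytes]:
    """Read every file under `root` into memory once at startup,
    keyed by its URL path, so no request ever touches the disk."""
    root_path = Path(root)
    return {
        "/" + p.relative_to(root_path).as_posix(): p.read_bytes()
        for p in root_path.rglob("*") if p.is_file()
    }

class RamHandler(http.server.BaseHTTPRequestHandler):
    cache: dict[str, bytes] = {}

    def do_GET(self):
        body = self.cache.get(self.path)
        if body is None:
            self.send_error(404)
            return
        self.send_response(200)
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

# Usage (not run here):
# RamHandler.cache = load_into_ram("./static")
# http.server.HTTPServer(("", 8000), RamHandler).serve_forever()
```

In practice a reverse proxy or page cache does the same job, but the principle is identical: every hot page lives in process memory before the first request arrives.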


Honestly, very true. After doing some brief work for a financial services company, the one thing they were consistently surprised at was how fast the application ran!

Yeah, duh. I render simple HTML templates on the server and serve them as browsers expect them; not with a thousand lines of JS on top.
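The approach being described can be as small as this (a stdlib sketch with made-up names, not the commenter's actual stack):

```python
import html
from string import Template

# Plain server-side rendering: fill an HTML template with escaped
# values and send the finished page to the browser; no client JS.
PAGE = Template("""<!doctype html>
<html><body>
  <h1>$title</h1>
  <ul>$items</ul>
</body></html>""")

def render(title: str, items: list[str]) -> str:
    # Escape everything user-supplied before it lands in markup.
    item_html = "".join(f"<li>{html.escape(i)}</li>" for i in items)
    return PAGE.substitute(title=html.escape(title), items=item_html)
```

Django/Rails/Laravel templates are richer versions of exactly this: substitute values into HTML on the server, ship the finished document.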


Are there any side effects from preloading pages when hovering over the link?


Probably? I don't have side effects on hyperlinks though.


The "Pages not preloaded" page notes that it excludes addresses with query strings just in case they run some action that you might not want to trigger on hover. You can override the default behavior if you know it's not an issue.

I'm reminded of a post from a few years ago where someone's website had a table of items with [delete] links and would take database actions based on GET requests to those URLs. Who cares? It looks the same to a human browsing it.

And then it got crawled by a search engine which followed all the links to see where they went.

But if you're not doing anything unusual like that, I don't see how prefetching HTML would cause any problems.
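The crawler story above is the classic argument for keeping destructive actions off GET. A hedged sketch of the fix, with hypothetical names (the real apps in question would use a framework's routing, but the idea is framework-agnostic):

```python
# Destructive URLs respond only to POST, so a crawler (or a hover
# prefetch) issuing GETs can never delete anything.
DB = {1: "widget", 2: "gadget"}

def handle(method: str, path: str) -> tuple[int, str]:
    if path.startswith("/items/delete/"):
        if method != "POST":
            # A crawler or prefetcher following the link lands here.
            return 405, "Method Not Allowed"
        item_id = int(path.rsplit("/", 1)[1])
        DB.pop(item_id, None)
        return 303, "See Other"  # redirect back after the POST
    return 200, "ok"
```

With this in place, the [delete] link becomes a tiny form with a submit button, and prefetching the listing page is harmless.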


What do you use to serve from RAM?


Rails with Turbolinks or Django/Laravel + pjax is good enough for most purposes. When Kubernetes first appeared it was laughable if you used it for anything less than provisioning a massive fleet of servers. Now it's something you sprinkle on your corn flakes.


Yep. We just started implementing it at my place. I had only just started and wanted to say that it seemed like overkill, but it was under way when I started and bringing that up in my first week didn't seem like a good way to start.

To be fair it has reduced our server costs a bit (after maybe 6 months of developer time). I am unconvinced it will be worth the hassle.


FTE dev, fully loaded, is what? $250,000 per year? More?

Are the improvements worth $125,000?


We are in Spain, not San Francisco, so a fair bit lower than that. If the startup goes well and we need to scale, maybe it will be worth it. And it does give us the advantages of high availability.

Though one comment I saw about Kubernetes on here a few weeks back concerned an old schooler like me. The guy suggested that if something goes wrong, just kill the pod and let kubernetes bring up another. Apparently that's the way you are supposed to do things. Something seems really wrong with that approach to me. Just throw resources at the problem with very little understanding of why things went wrong.


Docker + Kubernetes = the death of YAGNI


It seems to me that the requirements for personal infrastructure and professional service-grade infrastructure have drifted so far apart that essentially, if you know one world you don't (automatically) know the other at all.

Tbh, I have no real world experience in this, so it might just be my own delusion. However, I've recently started getting into self-hosting some of the services I use. I'm using a simpler infrastructure than what OP described and while it is the right choice for me and a useful skill to have, I feel like it absolutely won't get me anything in the sysadmin/ops/etc. job space. I've actually considered adding more "enterprisey" tech to it (like Ansible or comparable stuff) just to make it more sexy for recruiters.


> The cycle of picking a tech, jumping ship to it, religiously evangelising it, riding the wave and then jumping ship to the next related tech is typical in my opinion.

It is typical for devs.

Meanwhile ops have to support every half-arsed tyre-fire technology until the end of time, because a dev wanted to try it once, and now it’s in prod with users relying on it.

Kubernetes is in a sense the pushback against that “do what you want, as long as k8s is up, what you run in your pods is your problem, not ours”.


It's typical for web application devs. There is a huge ecosystem of software developers outside of web services who are much less fad-happy and much more focused on using established tools to produce useful, reliable systems.


Webdev is where the money is. It's where people with a CS degree or programming experience are most likely to find a way to put food on the table. Everything else requires more expertise and, aside from the most specialized of applications, pays less money. So as it is, webdev is the center of the universe, and RDD is table stakes for being considered a professional in the field.


That's not quite true; most of my colleagues with CS degrees work for non-tech companies in factories (doing really boring stuff, but still).

Webdev does seem to pay better than most other stuff, though.


DBEs as well. We're constantly getting new database back ends for apps to the point that the Ops DBAs support some 9 backend database solutions. Granted, that probably falls under WebAppDevs for the most part.


Not all of them though.

Still happily using JEE/Spring/ASP.NET + VanillaJS in what concerns webdev projects.


Being old doesn't mean good. From my experience, using J2EE or Spring to make a web app is grossly overcomplicated (I have heard of but not yet used Spring Boot). ASP.NET is fine but anyone who is paying $$ for that is probably a dumbass.


From my experience it still offers more performance than anything based on JavaScript, or scripting language du jour.

And while other AOT compiled languages might offer a little bit more performance, they lack in tooling and libraries.


> “do what you want, as long as k8s is up, what you run in your pods is your problem, not ours”

Until somebody cyberattacks those pods and steals all personal data of your users because the devs didn't bother to apply security patches. But hey, it's not your problem. You are not responsible for the pods. k8s is still up.


True. I own all 24 clusters from a management perspective plus own the core OS container they use. I rebuild the OS container, patch, and upgrade the clusters quarterly. I currently have to manually check to make sure they're not using some third party OS container and reject it if they do. I'm working on a PodSecurityPolicy that enforces that so I don't have to manually do it any more. They are fully aware of this because I'm part of their process, attending their scrums and adding lifecycle bits to their Jira backlog. It was initially a shock to them and pushback happened but since I "own" the environments, and could provide good reasons for it, and showed them it didn't adversely impact their workflow, they seem good with it. I can't say they aren't complaining about it among themselves though :)


> But hey, it's not your problem. You are not responsible for the pods. k8s is still up.

But that has always been true. If a dev leaves a SQL injection in the code, for example, and it gets exploited, absolutely no one would blame the sysadmin for that.
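The SQL injection case is well-defined enough to sketch the dev-side fix (stdlib `sqlite3` here, purely illustrative; any driver's parameter binding works the same way):

```python
import sqlite3

# String-built queries execute attacker-controlled SQL; parameterized
# queries make the driver bind input as data, never as SQL text.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT)")
conn.execute("INSERT INTO users VALUES ('alice')")

def find_user(name: str):
    # Safe: `?` binds `name` as a value, so quotes in it are inert.
    return conn.execute(
        "SELECT name FROM users WHERE name = ?", (name,)
    ).fetchall()
```

An injection attempt like `find_user("' OR '1'='1")` just searches for a user literally named `' OR '1'='1` and matches nothing, instead of returning every row.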


In the case of sql injection the responsibility indeed weighs more on devs. But often it's a grey area. What about upgrading openssl lib for example, or patching Struts framework (see Equifax hack)?

My interpretation of DevOps is that it's one team with shared responsibility and not "shove your stuff in that pod and don't bother me."


I think one root cause is that the two demands Dev usually makes of Ops (keep the system protected and up-to-date, and keep the developed software working in a well-defined environment) are sometimes directly conflicting - and developers don't always seem to realise this can be the case.

E.g. you could imagine some extreme case in which dependency X, version N has a critical vulnerability - but at the same time, the developed software relies on exactly version N being present and will break horribly on any other version.

You'd need Dev and Ops to actively work together to solve this problem and no amount of layering or containerization would get you around that.


I worked for someone who had the mindset that whatever technology developers wanted was always good and ops should just shut up and put up with it, because devs are the ones that make the money for the business.

Very infuriating mindset to deal with.


And I've worked at companies where the devs were expected to know their place and not question ops, because ops was seen as the serious adults in the room keeping things running and devs were seen as easily distracted children chasing after the shiniest thing that most recently caught their attention. Made perfect sense when I was on the ops side and was super annoying when I was on the dev side :)

Imagine if people could all just get along.


Let's make a movement to bridge the fundamental divide between Dev and Ops... we can call it OpsDev.


You are forgetting the Cybersecurity team. Now, that's a fun party. SecOpsDev.


You jest, but a dev chucking an insecurable thing over the fence to ops is very common. I will bet that's how there are so many open MongoDBs out there.


Having worked in both Dev and Ops (or just Ops, as we called it in 2002), there really was a belief that the developers were stupid, and they'd burn the whole place down if we gave them any leeway. As a developer, I've seen DevOps as a frustrating gate at times. The only things I think can fix this divide are communication and built trust. (and probably less assumed malice)


> Having worked in both Dev and Ops (or just Ops, as we called it in 2002), there really was a belief that the developers were stupid, and they'd burn the whole place down if we gave them any leeway.

I've been on both sides of this divide myself, but have spent the last fifteen years or so as a developer. In my experience, developers will burn the whole place down if we're given the chance.

We're focused on writing code, and it's boring to write the same code over and over: we want to write new code, in exciting ways, and we are surprised when it fails in exciting ways.

We're focused on delivering features; our incentives are all about getting it done, not about getting it done well (our industry doesn't even have a consistent view of what's good or bad: note that C/C++ are still used in 2019) or supportably. Some organisations really try hard to properly incentivise developers, but I've not seen it really work yet. DevOps is an attempt to incentivise developers by getting us to buy into ops. I've read a lot of success stories, but not seen a lot of success with my own eyes.

I do my best to be diligent, I do my best to wear my Ops hat — yet I still fall down. I don't think that it's unavoidable, but so far I've not avoided it, and I've not seen others avoid it either.


Smaller companies I've worked for don't really seem to suffer from this problem, although once companies are larger and have separate teams (and, perhaps more importantly, managers who are incentivized in different ways) this problem always seems to arise.


> once companies are larger and have separate teams

The real problem always starts when they become separate cost centers with separate budgets and have to independently show a 'profit'.


I've seen this in a 50-person volunteer group. The devs turned up every year with a proposal to throw away and completely rewrite what they'd done the previous year. No incremental upgrades -- a complete rewrite every time.

This worked great when several other business systems relied on their vanity toy, and invariably the API would change with every release.

There's a balance to be struck between 'never change anything because it's always worked' and 'new shiny every week'. In my experience it's an absolute nightmare getting people to agree where the line is, and on top of that, to get management to buy in and push back when either side oversteps.


The company I work for went even further. Ops doesn't support anything in public cloud past the basic connectivity to the corporate network. Everything created in public cloud is the product/dev team's responsibility. Kubernetes? Not their problem.


This is why developers should own their ops.


> The cycle of picking a tech, jumping ship to it, religiously evangelising it, riding the wave and then jumping ship to the next related tech is typical in my opinion.

To balance this with a counter example from the quieter group of people not "hot for the latest tech":

I'm a "dev" and i've never had this problem, however I work for a small company, where everything I make and deploy I also have to maintain in some form or other. This gives me a strong bias towards operational simplicity and trying to essentially eliminate dev ops... New tech which is both complex and opaque in solution without clear cut advantages is basically repulsive to me, because trust and reliability without constant attention and tweaking is important.


You're not eliminating "dev ops", you're doing it right.


Eliminating dev ops is doing it right! The whole entire point of devops, as it was originally formed, was that making developers bear the load of operations would encourage them to simplify and automate it.


I should have added scare quotes around "eliminating", too, it seems :-)


well even in a small company you might need:

- non downtime deployments (yes you can have a downtime, but every time you deploy an app?!)

- schedule more than one thing (no company has a single product that only has a single binary, or at least nearly no company; there are some unicorns though)

- some kind of automation (this is complex, no matter what you use)


> non downtime deployment

Oddly, a lot of small companies really don't need this. If your customers are mostly businesses in a limited set of time zones, having a maintenance window outside of their business hours is probably easier.


You are in an excellent position, but beyond a certain size things become difficult to manage in this way.


I’m curious which technologies you’ve ended up working with.


> I think devs often make bad decision makers because in some sense tech is often an addiction rather than a pragmatic choice.

I also think this has a lot to do with how devs spend our time: with the tech itself. Whether your application is running on Kubernetes or a box in your garage matters to precisely zero customers as long as it performs well, but as developers we spend our whole day dealing with various APIs and technologies, so we develop an outsized sense of the importance of those things.


Which is why I think it helps a lot to work in domains where shipping software is not the core business, just a cost center to keep the real business going.

One quickly learns that the business has a completely different set of priorities, and dealing with software as little bonsai trees is not one of them.


I think jumping on new tech and marketing yourself is a good decision for a developer, as it's a good way to increase their compensation and market value. If you're a developer stuck maintaining a Java Spring app at some unknown company, the best way to make a shift is to pick up Go or something and move to a startup. Else your career will stagnate.

The best way to get promoted at many companies is to write a framework. The best way to get noticed is to write an open source framework. And so on.


Is this really true? There are plenty of job openings for people who are good at maintaining Java Spring apps!


They also pay a fraction of what jobs at "hip" companies pay, and often come along with developers being treated as second-class citizens.


Is that true, or are you inadvertently comparing the cost of living between SF and other major cities?

Hip technologies are being used in SV, and they have to pay tons of money just to keep the talent pool large and circulating.

Older technologies are used in other cities, and there the market forces aren't so crazy.

But a good Java dev can make plenty of money in SV, and a Go developer will make a competitive salary by Dallas standards but not by SV standards (and probably have a harder time finding a new job).


I am currently working in NYC (living in NJ), with a total comp that is more than 3X what I was making when I left Dallas. Based on the market there, I would still probably be making 40% of what I do now had I stayed, and the company would not have been as good.

For the record, I have been using a JVM language as my primary work language since early 2012.


Lots of hip companies use the JVM, and sometimes, Java.


Not according to itjobswatch.co.uk where Java/Spring roles fetch top rates. Same with Indeed.co.uk so which job market are you referring to?


It's not so simple. For example, if your skill set is in demand you can easily trade up to a better company than if it wasn't. This is true in my own career. Also the bar to entry would be lower. So for this reason, if you're breaking in right now, learning React is better than learning Spring.


Yes exactly my point. Go lookup salaries.


Java will be around forever, and becoming an excellent Java developer will absolutely remain highly lucrative for a very long time. That's its reputation at the companies I've worked for over the last 10 years (all startups). Go is more of an anti-language imho. Among myself and similarly minded colleagues, I would say its main attraction is its lack of features, and it seems a haven for people that are grumpy like me. I always get a chuckle when I see it framed as "hip" because it's just never felt like that to me. Elixir is Hip. Rust is somehow Hip. Go, just not in my experience.


Stagnate by what standard though? There is a certain joy in just maintaining the status quo.


> Stagnate by what standard though?

Range of employment options. Possibly salary, though that's more variable. There are some jobs keeping the lights on with legacy tech long after it is done being the hot thing, but typically with any particular stack it's a shrinking number of jobs, often with shrinking average real pay, unless it hits a phase where the decline in people able to do it exceeds the decline in work.

If you are riding out the last few years of your tech-focussed career (whether that's before retirement or before moving out of hands-on tech into, e.g., management) that's maybe not so bad, but if you're planning on being in tech for a longer period it's potentially extremely career-limiting not to adapt to current market focus.


> Range of employment options. Possibly salary, though that's more variable.

I'm not sure this is true. Most of the shops I've been in don't care about whether you know this or that language or library. You're expected to learn that as you need to. Most of what I've seen cut people from interview loops is missing fundamentals.


From a programming perspective, possibly. As an Ops Engineer, I'm having a hard time shifting jobs. Where I work now, it's heavily siloed so I can't shift into a CI/CD team because it's a different team or the Product Engineering team because they don't do Unix administration, automation, or Kubernetes (other than the deployment aspect). I focus on automation with shell scripts and Ansible plus Tower to get Infrastructure as Code going. I took on the Kubernetes role, and am the single point of failure for the 24 clusters I manage. And now management is asking what support contracts we have for Kubernetes (me, it's just me and asking questions in various places on the 'net). Add in that I'm taking courses for the CI/CD toolset and implementing them on my homelab. But I still can't get a bite on shifting jobs.


> Range of employment options.

Then you'd be wise to stick with stuff like Java or .NET, because there are probably millions of jobs requiring them.


By financial standards :-)


It helps working for big corps.

The job might not be as interesting as riding every tech wave, but on the plus side there are plenty of tech waves that you save yourself from riding on.

Plus one gets to rescue projects that ended up betting on the wrong waves, getting back to boring old tech.


You mean, Kubernetes is the COBOL of 2050?


I mean that Kubernetes is the NoSQL, CoffeeScript, BigData, Grails, SOAP... of 2019.

It is a bit unfair to Cobol, given that its latest revision is from 2014, and, verbose as it might be, it supports most of the nice features of any modern multi-paradigm language.


I think this is a bit unfair to SOAP.

SOAP was/is a pretty stable technology that did exactly what it promised to do, without too many releases or breaking changes for about 10 years.

Even today it has a good utility for the situations it is designed for...

RPC over a well known standard format, for tightly coupled endpoints, that require metadata, enforced schema, security, and perhaps transactions.

The big problem for SOAP is that it was the default for web services for a long time, when in reality a big shift happened in about 2008 where web services were most likely NOT going to fit into those constraints. Just my 2 cents.


CORBA and DCOM certainly felt easier to use than all the headaches I had with SOAP interoperability.

And neither of them were selling "magic" stuff like BPEL and BizTalk.


Additional factors I'd want to add:

Confirmation bias (where you've spent some time on k8s or whatever, and now you just want to cash in on your sunk time, objective criteria be damned)

Generational churn (where you find yourself in a field where everything has been said and done, and you just need a new buzzword on your resume to start over; this goes hand-in-hand with corporate IT longing for fresh and cheap staff and a stack that needs to look sexy)

Big media (where extremely large infrastructure runs on k8s or whatever and gets disproportionate airtime, because cloud providers want to sell you lots of pods, and people don't check whether the proposed arch is a good fit)


Decision fatigue and opportunity cost aversion are big factors too, I think.

When comparing consumer products where there are lots of choices, I find myself picking an OK option and 'falling in love' with it - when I reflect, it's basically a way of cutting through all the reviews and deciding that one is the best, so I have no reason to regret buying it or to do any more trawling through comparisons. I'll just buy this one and be done with it.

This strategy often works, to be fair!


Not just devs, it's really a management problem all round.

Management (top bosses) often seems to want the latest, i.e. Big Data. It doesn't matter that it'll cost a fortune and you'll get better results on a single server.

And if the devs are out of control and pushing for %tech% and getting it, that's management at fault. To be a good manager you need to understand what your employees are doing. I've met too many that don't.


I've been around enough to see a couple iterations of this. Being able to spot when something is about to fade away and something else come into focus is a valuable skill for consultants. I suppose it's necessary for tech progress but, man, a lot of money gets spent chasing the new thing.


I think the larger reason is that devs who don't act like that, or at least don't pretend to, are considered less capable by many. Somehow pragmatic decision making is seen as not being passionate.

There is also such a thing as being stuck in your sleepy old ways.

And there is reasonable in between.


The other side of the coin is that there are very real improvements in newer tech and companies, in my experience, are only willing to support continuing education that is directly related to the tech stack that they are using.

So a developer that doesn't want to deal with already solved problems and who wants to advance their knowledge is incentivized to push for jumping to the newest tech.


I've suspected this to be the case almost everywhere I've worked. Another reason it happens is that anyone questioning the adoption of a new tech risks looking like they don't understand it.

However going the opposite way (sticking to one reliable tech stack and refusing to change even when something better comes along) could be just as damaging to a business.

How then, do you build a culture where people are open-minded to new tech without feeling obliged to jump on every bandwagon? I don't think I've ever seen an organisation get the balance quite right.


Actually trying all promising new technologies is another full-time job, or at least takes 20 hours a week.

Most developers just don't want to be left behind, so they pick up whatever is trendy at the moment. It's completely rational, because knowing what is trendy gets you hired.

However, implementing what's trendy, without carefully weighing pros and cons, is what's dangerous.


You could say that about pretty much any technical abstraction though.


I entirely agree... in fact it applies to me more with maths than it does with tech.


Which maths are you doing? Unless you’re talking about already well defined formalizations.


Bayesian stats.


I think maybe it's easier to blame an external framework in hindsight, than to take the blame for some smaller solution that you personally created in-house.



