Programmers Should Plan for Lower Pay? (jefftk.com)
339 points by luu 3 months ago | 492 comments



Many programmers have a bias toward thinking that what they're doing is easy or simple, because they truly enjoy it and, many times, learnt it as a hobby. Their career choice was never a master plan for winning big; it just happened to turn out that way, so they feel like frauds.

Hobbyist programmers (who usually end up being the best ones) usually put great investment into their careers by doing side projects in their free time. We're talking weekends and nights of trying out stuff, hacking, reading, learning, etc. It's often an invisible investment, because it's a hobby and it doesn't feel like work.

In the end, programmers don't really need to dig that far to reach a point where they're good at building things on top of enough layers of knowledge that what they do looks absolutely obscure to others.

Nobody has a hobby of learning by heart the bones in the human body or the law texts on intellectual property; but programmers most likely know by heart a sizable bunch of Bash commands and their options, HTTP status codes, API interfaces, etc. It seems like highly advanced knowledge to be able to solve a health problem from a set of symptoms, but from the point of view of the layman, solving a computer problem from an error message is about as magic as it gets.


> Many programmers have a bias in thinking that what they're doing is easy or simple

In my experience it's exactly the opposite. People who do nothing but write glue code between major services think they're hot stuff.

One of my personal career goals is to stay as far away from simple glue code as I can. Unfortunately it makes up most of our field right now. But it seems like an especially precarious place to be.


Most doctors are doing simple glue stuff and think they're hot shit. You think I couldn't diagnose that cold, or a machine?

Most lawyers are doing simple glue stuff and think they're hot shit. You think I couldn't refer that case to another law firm, or a machine?

Most businessmen are doing simple glue stuff and think they're hot shit. You think I couldn't randomly guess at the market (without outperforming it, just like them), or a machine?


Doctors have physician assistants and registered nurses, along with the EHR medical databases hospitals subscribe to, to aid in patient diagnosis and medical workup, all following always up-to-date medical coding practices from the insurance industry (for billing patients' health insurance $$$). Any exotic infections/diseases or surgery are farmed out to specialist doctors, who are likewise following medical billing guidelines and a narrow workup to diagnose patients within their specialty.

Lawyers have paralegals that dig through the case studies and case law for them along with law databases like LexisNexis.


>You think I couldn't diagnose that cold

I think you'd get the 90% case due simply to the preponderance of people who have a cold instead of some other, more rare condition with similar symptoms. Problem is you'd cause harm to everyone who doesn't have a simple cold because you have no idea what you're doing.

Comparing the knowledge needed to be a good doctor to what is required to throw up some bloated SPA? C'mon, get over yourself.


Doctors make a lot of mistakes too, just like programmers. But with doctors, it's easy to blame it on the vagaries of the human body. So even when the "cure" doesn't, people expect that because of the differences between individual bodies. This gives them a pass of sorts when they screw up, the medicine doesn't work, the side-effects kill you, you get a fatal infection in the hospital, etc.

With programming, if you are designing the control system for an aircraft and it fails, people will be able to find out exactly what failed and why, and who to blame.

It's hard to compare doctors and programmers, because there are many levels of medical professionals, just like there are many levels of programming.


That's a funny comparison considering doctors must pay outrageous amounts of money for malpractice insurance, but engineers are never held personally responsible for a bug (legally, at least.)


I think you're grossly oversimplifying what goes into making each of these everyday decisions -- though they look simple. Misdiagnosing a cold when it's really a (deadly) brain amoeba infection is easy to do, particularly when the patient presents with vague descriptions of how they feel and 'where it hurts'.

The ability of an experienced professional to quickly decide these things comes after many years of hard-won skill building...


And you really believe your stressed, sick-of-you-already doctor is going to even consider a brain amoeba when you present flu-like symptoms?

They either have a checklist, or they wing it. You give obscenely too much credit.

Medical mistakes are incredibly common. There are thousands of people who DIE from prescription accidents each year.


There aren't a lot of areas left in CS where "clever" solutions are warranted. If you're writing clever code, then chances are you're doing a disservice to anyone who has to maintain the mess you're creating. Code is mostly just plumbing. Boring code is better 99% of the time.


The flip side of this is that because digital plumbing is "boring", the industry is constantly seeking ways to avoid it. Writing assembly language was (usually) really boring, so we developed good compilers to do that for us.

I look forward to the day where I can send data from one place to another and not have to bother with file formats, text encodings, port numbers, network protocols, and so on. IP (and especially TCP/IP) is so prevalent today that we use "IP connection" and "network connection" interchangeably. Someday I hope we have that consistency at some other layers of the stack. I shouldn't have to say "send me that file -- what app did you use to make it?"


I've been thinking this through for a while now, and I've come to the conclusion that the details keep getting more complicated faster than other things can automate them. For example, I remember a time when software served ASCII, and when the Canadian government asked to buy a version of some software we were writing for the Americans, the head of the company said no because we'd have to support French, and that meant non-ASCII characters.

Now, UTF-8 is a given and there is a mess of potential issues. No longer are byte-by-byte comparisons reliable, especially in some languages like Vietnamese.
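As a small stdlib illustration of that point (the Vietnamese string here is just an example I picked): the same visible text can be two different codepoint sequences, so naive byte-by-byte comparison fails until you normalize.

```python
import unicodedata

# "ệ" can be one precomposed codepoint (NFC) or a base letter plus
# combining marks (NFD); both render identically on screen.
nfc = unicodedata.normalize("NFC", "Việt Nam")
nfd = unicodedata.normalize("NFD", "Việt Nam")

print(nfc == nfd)                                # False: byte sequences differ
print(unicodedata.normalize("NFC", nfd) == nfc)  # True once normalized
```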

Your example of IP connection and network connection is true, but overall connections are more complicated. Some clients can't use HTTP2, some clients are behind firewalls, some clients need CORS support, etc.

Plus, there is this accelerating wave of advancements everyone is trying to keep up with, but that comes with real investment. Upgrading OSes, programming languages, frameworks, libraries. Figuring out how to keep data secure.

I really don't see things getting less complicated as time goes on. At least not until we have some real form of AI, but even there I have my doubts because there will always be tradeoffs.


That's why we have libraries and frameworks to do complicated, human-facing bits. If you're rolling your own dateTime, it's a bad idea, but fortunately other people have solutions for this. If you want to get into the nitty gritty you can but you do so at your own peril, similar to how the vast majority of people are not reinventing TCP/IP.
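For a tiny stdlib example of why rolling your own is a bad idea: the leap-year rules alone are easy to botch by hand, and `calendar` already knows them.

```python
import calendar
from datetime import date

# The stdlib encodes the full Gregorian rule: divisible by 4,
# except centuries, except centuries divisible by 400.
print(calendar.isleap(1900))  # False: divisible by 100 but not 400
print(calendar.isleap(2000))  # True
print(date(2000, 2, 29))      # valid; date(1900, 2, 29) would raise ValueError
```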

It'll take a bit of evolution to get there depending on how new the problem is, but it also took a bit of evolution to get where we are today, and nothing is ever perfect.


My argument is that, as a function of time, there are more things to consider, not less. Abstraction works to a point, but it doesn't keep up with the growing complexity.


Except by the time those libraries and frameworks become stable and well documented enough for serious use, we decide to reinvent the wheel and move on to something new.


In fact, that's already been solved for the most boring of digital plumbing. The data is JSON (which requires UTF-8), and the port number is either 80 or 443 - the library automatically handles that depending on whether the URL starts with http or https. `requests.get(URL).json()` will grab that data for you (Python). That only covers a very specific but very prevalent use case, just like TCP/IP covers the most prevalent use case for networking.

What are you doing where that can't be covered by that use case? Actually, I can think of tons. What are you doing that can't be covered by that use case, but is still boring?


Something I find myself doing repeatedly is implementing arcane protocols that were created long before JSON. Bit-packed mainframe protocols. After the second or third time of working with another binary format, there's no new knowledge to gain, and it's not interesting work. But it requires meticulous attention to detail and can't be automated. Most industries don't require you to solve this specific problem but it's all variations of this. Shuffle data from one format to another format. Even if it's JSON, you're still moving it from one business domain to another, or one tech stack to another. Backend to frontend. Etc.
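A sketch of what that work looks like in Python -- the record layout below is made up, not a real mainframe spec, but it's the shape of the problem: fixed-width, big-endian fields unpacked with `struct`.

```python
import struct

# Hypothetical fixed-width, big-endian record: a 4-byte record id,
# a 2-byte flags field, and a 6-byte space-padded code.
RECORD = struct.Struct(">IH6s")

raw = RECORD.pack(1042, 0x0003, b"ABC   ")
rec_id, flags, code = RECORD.unpack(raw)

print(rec_id)             # 1042
print(bool(flags & 0x1))  # True: low bit set
print(code.rstrip(b" "))  # b'ABC'
```

The meticulous part is getting every offset, width, and padding rule right, over and over, for each new feed.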


I think you are coming from a common perspective for people on HN and elsewhere in IT, where you can just install anything and provision any equipment to install it on and direct others to do the same.

A lot of people aren't in a position to do this. I think actual plumbers also have to deal with similar constraints.

It's nice to not be constrained, but it's also more impressive to do a good job under severe constraints, IMO.


There aren't a lot of areas left in CS where highly generic clever solutions are warranted. There are plenty of cases where a specific business has specific needs and the solution to those needs is more interesting than glue code.


Agree 100%. Programmers that love 'clever' code should think twice about using it. Writing code that can be maintained over a long time is many times more important than saving a few bits here and there by using clever coding.


Mostly it's not the clever code that is the problem, but the way people tend to use it. Instead of wrapping that clever piece of code in a structure with an easy-to-understand interface and a lot of documentation explaining what allows it to work properly, it's left without any clues about what it is and how it works. Essentially a puzzle in the middle of the code base for everyone reading it to solve.


I don't think they're talking about clever code, more about challenging solutions. There are plenty of problems with challenging solutions, though they're not often easy to recognize.


The flip side of this is that our industry has a huge affection for laziness, incompetence and anti-intellectualism. The moment they get their first job, a lot of people seem to forget that this field is a profession, where learning things is actually a part of the job. Instead, they freeze their development at junior level and whine at anything that's beyond what they know. Often times when I see actual complaints about clever code, the code isn't really all that clever - it's the complainer who can't be bothered to take two hours off and read up on a feature of the language or library they're using 40 hours a week.

When seeing something one doesn't understand, the default reaction of an intelligent person should be to ask oneself what one should learn to comprehend it. It shouldn't be dismissing it as "clever".


I'm of the opposite opinion: CS is filled with clever things we take for granted. A lot of the stuff in this field is innovation, or clever work already figured out. The remainder is a puzzle-like exercise of fitting it all together, alongside moments of novel innovation and problem solving.


"simple glue code" tends to be the part that has the most value added. We can keep building new frameworks and technologies over and over again but the real value is in using them to solve business problems.


Right. The problem is that it isn't terribly hard to do. It certainly doesn't require a CS degree. Any reasonably smart person can do it after a three-month boot camp. This is why there's a correction on the way.


This is largely true. However, I've been cleaning up and scaling those systems glued together by dilettantes for years. My salary has been increasing, on average, 10-12% yearly for quite a while. I'm in management now and my primary job is to find and manage 1) people who glue together APIs to solve business problems AND 2) people who can fix the problems created by group #1. There is very little overlap between the two.

I welcome more boot camp grads. The amount of work for both groups is growing. I don't see salaries for devs falling. They may flatten out for some business problems/areas, but new and/or underserved ones will always appear.


People have been saying there's a correction (or bubble burst) on the way for the last several years.

It's very much a "correctly predicted nine out of the last five recessions" kind of situation.


Having worked with people coming out of a three-month bootcamp, I strongly disagree. The havoc these people wreak on your codebase for the first 6 months to a year after their boot camp makes them completely counterproductive to have around.

Incredibly thorough code reviews help reduce this, but at that point it slows down the skilled members of the team too much for it to be worth it.


Completely agree. As a boot-camper myself, I would have been an absolute disaster for the first 6 months (really, longer) of my career. It was exactly working with experienced people who had the time to provide rigorous code reviews, and my own desire to learn from these people, that prevented me from totally messing up our codebase and eventually brought me up to speed. But even today, I acknowledge that my knowledge is very shallow compared to anyone with a four year degree. And I think the job security provided by that degree — which comes from the ability to do good work on things besides just web applications — is increasingly valuable as the number of developers increases (even though developers can work to overcome this knowledge gap).


Glueing together code gets complicated fast as you start adding more pieces to the system.

There is a reason why so many projects end up being a giant mess.

Most people aren't smart enough to make it through a bootcamp and those who do still need years of experience to be fully productive.


Sometimes it is very hard to do. I am now responsible for a platform that has over a thousand uniquely configured servers and hundreds of database servers. Putting this together is so easy that it was done by a few guys with no CS degree (not that the degree matters) with a few months of on-the-job training. It is a mess, and it will take about one year to clean it up and get it right (for example, just paying for the licenses we need on the servers we need will save ~60% of the licensing costs). Try leaving a terabyte database in the capable hands of a 3-month boot camp graduate and see how long it will work, and how.


There's a definite bubble at the junior level. I feel it will be self limiting as and when the bootcamps start failing to place people and people stop seeing it as a way to easy wealth.


That sounds good, but ultimately seems pithy. Going back to the recent "my business card runs Linux" post, board bring-up and PCB design are far from "simple glue code" and far more niche than web dev. Except that pay rates for that segment of the industry are lower than what a bootcamp graduate can make in SF.

The precariousness issue is also a bit of a "the grass is greener" sort of view. In 5 or 10 years, when the industry again looks different, bootcamp-level web dev work may be as precarious as you project. But right now, what's more precarious: specializing for one company in a corner of the industry where there are only a small handful of companies, or working for a company that could easily replace you, the employee?


I think, therefore, we should try to use different labels for different kinds of developers as much as we can.

E.g. "software engineer", and "programmer" should be labels for different roles/capabilities, as should be "computer scientist". Using the terminology incorrectly should be frowned upon, similarly to how this goes in other disciplines.


As jefftk's comment says, the field is too... protean and ill-defined to do that. We can barely agree on "frontend" and "backend". There used to be separate "analyst" and "programmer" jobs; enshrining that would have prevented a whole load of efficiencies.

It's also why there's so much difficulty in bringing wages down by just training more programmers: it's much harder than it looks. Teaching people to program is far more unreliable than other disciplines; the degree of self-starter on-demand continuous improvement required is very high. Other professions (e.g. actuary) may have a big hill to climb, but the requirements are written down and there is a clearly marked path up the hill. In software you may wake up to find that someone has moved the entire hill during the night and you have to make your own path, again.


I'm not sure the field is mature enough yet for us to be splitting things up like that. I've worked in performance critical code, research science code, website optimization, ads infrastructure, browser testing, experimentation frameworks, dumbphone mobile payments, and other things, in C, C++, Python, JS, Bash, and Java. Most of my coworkers have a similarly wide range of experience.

These have a lot of core skills in common, both hard and soft, but it's such a mishmash of things that trying to impose precision via labels doesn't seem helpful to me.


Perhaps this is due to the location where you work.


Similarly in the UK I have done bits of everything from IC design, IC design software, ARM assembly, webserver (Zeus), network engineering, sysadmin, web apps, Windows CE, and even some actual soldering. Small companies are great for this.


I'm in Boston, and more than half of the above were while working for a single company (Google)


This! I think Rob Pike said it best in his initial talk about Go: there are two kinds of developers... programmers, and engineers. Programmers want to build things; engineers are constantly monitoring and maintaining existing structures. The terms "builder" and "gardener" are also used in this context, but I always felt that the classic usage of "engineer" was someone who would maintain, monitor, and keep systems alive... whereas a programmer is tasked with building that system in the first place. Not saying that programmers shouldn't be responsible for the correctness of their own work or anything, nor that engineers should never build anything from scratch; it's more a matter of your preference and what you personally like to do in your job.

But what is a "computer scientist", really? Is it just some theorist who sits in an ivory tower and tells us all what to do? In my opinion, a "computer scientist" can actually be either a programmer or an engineer (or maybe have both in their blood? who knows). It's someone who is less concerned about building things for today, and more concerned about how computers are going to work tomorrow. It's a wholly separate role from us "construction workers" who have our boots on the ground and are actually building things...the scientists are concerned with how we're going to build things tomorrow. I don't know if Rob Pike, Robert Griesemer, Ken Thompson, or Russ Cox still work on actual systems code anymore or whether they just figure out how to improve the Go language, but assuming that they only work on Go full-time, that to me is the role of a "computer scientist", though for example Ken Thompson is also a "programmer" because he likes to build things.


...except that people tend to use these words with reversed meanings too :| If a programmer wants to sound fancier, he/she uses the term "software engineer".

Also, separating "building" from "maintaining" (when "maintaining" means more than running / scaling / securing, and also includes adding features) is a recipe for disasters! The builders/creators start creating unmaintainable messes because it's not their job to maintain them.

I don't think these kinds of distinctions help in software... we need different ways to look at the problem...

Also this lack of separation is kinda what makes software dev a cool field to work in... I love the hacker mentality and if it ever fully dries out I'm gonna quit working in this field and move to a fresher/younger one.


Most people with engineering titles are primarily busy creating whatever represents "glue code" for their particular field. Most white collar work is like the saying about dissertations in the humanities: The transfer of bones from one graveyard to another.


> One of my personal career goals is to stay as far away from simple glue code as I can. Unfortunately it makes up most of our field right now. But it seems like an especially precarious place to be.

Integration platforms are getting good. The API integrations that make up a lot of my work might be better done with them in not too long. It's already reached the point where some clients speak with me only after an integration platform hasn't met their use case.


In my opinion if you can write glue code between major services, you're hot stuff.


One of the easiest assignments I ever had was to write code between two services. It was easy as hell but it looked impressive because both services did impressive and complicated things.

In fact, I think it helped me get my current job.


Or maybe it is impressive.

If I'm working on a blind, handicapped user's interface and someone figures out how to connect OK Google's speech recognition in a few lines of code, and it's legit and allowed by their terms, it's still impressive. Who cares if you don't know the first thing about speech recognition?


It's not impressive. The problem is, no one yet realizes that any person who can learn algebra can achieve this level of skill in less time than it took to pick up algebra. That's what I mean by "you can learn it at a bootcamp." You can hire a random homeless person off the street and in a couple months have him be a competent junior programmer. I'd be willing to pay minimum wage for that level of skill, and like you said, that's probably all I need for "glue code."

What's going on right now is a bubble. People perceive these skills to be "hot stuff" when in fact they are just "average" or even "below average".

When the bubble bursts all programmer wages will go down after everyone realizes that programming is a skill akin to driving or algebra. Anyone can do this trivial work.


First of all, please don't use "random homeless person off the street" this way. It's insulting and also many homeless people are highly intelligent.

Try "random walmart shopper" (a box store that everyone visits.) or just random person.

As for your claim that a random person can learn to write glue code, well, we're going to disagree.


>First of all, please don't use "random homeless person off the street" this way. It's insulting and also many homeless people are highly intelligent.

>Try "random walmart shopper" (a box store that everyone visits.) or just random person.

Random Walmart shoppers are people too, and they can be intelligent as well. I use the word homeless not to imply lack of intelligence or to insult, but to imply that even a person with no ability/capability/desire to get a job can, with minimal training, gain those skills.

>As for your claim that a random person can learn to write glue code, well, we're going to disagree.

Can any person learn algebra? Can any person learn arithmetic? Can any person learn to read an English book? All of these things seem trivial to learn, but they are not trivial at all. It is only because of the context in which we learn these things that we find them easy later in life, while failing to remember that at the time of learning, these concepts were just as hard as programming.

Programming is not easy to learn, just like how reading English is not easy to learn at all. But make no mistake: just like basically anyone can learn to read English, anyone can learn how to program. Some people have a talent for learning languages and others don't, but with time, everyone can learn it. That's how cheap this skill is.

The very existence of bootcamps hinges on this concept that anyone can learn. People walk out of bootcamps thinking they have talent or were taught incredible skills. The reality is, everything they learned is available online for free. They paid money to learn because they lack the self-control to do it themselves.


So do you think the average person off the street (the random walmart shopper) can go through a coding bootcamp successfully? I don't think so.

I think the people who can go through a coding boot camp successfully have great arithmetic skills, great reading skills, and can easily solve 2x-7=11 for x in their head in a moment 100% of the time. (Basic algebra 1.) And can read any generic software manual easily, if the manual doesn't assume too much prior knowledge and is written at a basic level.

I don't think the average walmart shopper can do it.

Do you disagree? What do you think the baseline skills are for people who go through a coding bootcamp successfully? Maybe you have more experience with those people than I do. I think of them as intelligent, educated people with above-average SAT scores but without specialized programming training, who decide to go through a boot camp. They're curious, motivated, smart. But they're not programmers.


>I think the people who can go through a coding boot camp successfully have great arithmetic skills, great reading skills, and can easily solve 2x-7=11 for x in their head in a moment 100% of the time. (Basic algebra 1.) And can read any generic software manual easily, if the manual doesn't assume too much prior knowledge and is written at a basic level.

Anybody can do what you just described with practice and training. The typical high school education along with programming is achievable by the typical person.

>I don't think the average walmart shopper can do it.

Stop insulting walmart shoppers.

>Do you disagree? What do you think the baseline skills are for people who go through a coding bootcamp successfully? Maybe you have more experience with those people than I do. I think of them as intelligent, educated people with above-average SAT scores but without specialized programming training, who decide to go through a boot camp. They're curious, motivated, smart. But they're not programmers.

I don't even think you need to go through a bootcamp to get the baseline skills needed for your typical "glue code" job.

Anybody can learn programming. People who go through a bootcamp typically have enough skills to do the job but they likely may not have enough skills to pass a coding interview. Coding interviews are, unfortunately, harder and unrelated to the job.


You said "stop insulting walmart shoppers", but we're talking about intellectual labor. It's not an insult: I don't think given pen and paper but no google, the average person can solve 2x-7=11 for x. (Algebra 1.) I do think the average person knows their multiplication table and can add and subtract small numbers.

Anyway we clearly view people's baselines differently. Maybe you're the one who's right.


It's a biased term. Why would a walmart shopper be stupid? There's no causative connection.

>I don't think given pen and paper but no google, the average person can solve 2x-7=11 for x.

I'm pretty sure everyone can do this.

>Anyway we clearly view people's baselines differently. Maybe you're the one who's right.

I think you're way off. Check your baseline. There is not one person in my high school who couldn't solve that equation in their head.


I disagree, the fact that you can go to a bootcamp to learn this shows that you're not.


Isn't an alternative explanation that bootcamp actually makes a person hot stuff, despite not teaching much?

For example maybe if you went to bootcamp, you (crimsonalucard) would be hot stuff when you came out (even if you've been programming for 20 years), due to exposure to all the external services you'll be able to glue together.

The fact that your glue code will run in O(1) (a single call) instead of O(n^n) -- but only for inputs up to 4, because the poor kids write something like four different if statements for 1 to 4, each containing a nested for loop, and over 4 it silently does nothing -- is less relevant than the exposure to the APIs at all.

In other words, maybe terrible programmers can be hot stuff writing the latest glue code - even if it's brittle and barely works.

With all the services huge companies are exposing, this wouldn't surprise me.


You're just redefining what being "hot stuff" is.

Under that definition, anyone on the face of the earth can be "hot stuff." You don't even need to go to bootcamp. Just read some online tutorials, make a website. Done. You're just calling the average person "hot stuff."

In my opinion, you're only hot when you know how to do something that is rare, very hard to learn and in demand.


I see a difference between a web site and a web site that exposes a service that on the backend connects API's of two different companies that do the heavy lifting.


There is a difference. But the difference is trivial. Making a backend or a frontend that connects to any number of APIs isn't impressive at all. Any fool can learn this concept.

In fact, that's all a modern website is. The old way involved servers rendering pages. Now the frontend interfaces with an API. Efficiency at the cost of complexity.

You want to know what's impressive? Writing a compiler. Writing an OS. Nothing is simpler than interfacing with an API.

If writing glue code is your job and you're getting paid a lot of money to do this stuff then count your lucky stars because truly anyone can do that job.


Welcome to the wonderful world of Dunning-Kruger.

Folks in the middle of the curve are blissfully unaware how much there is to know and how many difficult problems must be solved to put together those backend services they are gluing.


>Nobody has a hobby of learning by heart the bones in the human body or the law texts on intellectual property

You're making a lot of assumptions on the hobbies people have based upon your environment.

About the only way the two fields you listed differ from programming is in actual legal barriers to entry.


s/nobody/insignificant minority/


"programmers most likely know by heart a sizable bunch of Bash commands and their options"

ha. case in point.


I happen to have a hobby of reading law cases on intellectual property that was sparked by my business being affected by IP issues.

I used to read all kinds of cases as a hobby even before that, for fun. I remember reading through much of Ross Ulbricht's trial documents and comparing it to what his advocates were claiming, which was enlightening.


Yep. The SCO UNIX trial and the ongoing Oracle v. Google cases are exceptionally fun reads. Other notable public-narrative-vs-reality cases included OJ Simpson, the Zimmerman guy, and what's happening right now with Trump (the treason word gets thrown around so often, yet no one actually knows what that means legally).


"Treason against the United States, shall consist only in levying War against them, or in adhering to their Enemies, giving them Aid and Comfort." It's in the Constitution.


Undefined in that sentence: almost every word. "Levying"? "War"? "Enemies"? This is why we have both a Constitution and a body of law. The Constitution uses words; the law defines what they mean.


Those words are not difficult to define or understand. Only a nihilist claims nothing is inherently knowable.

It's possible to understand what treason is, just as it's possible for something to skirt the edge of it and require more nuance and thought.


The words Paul Davis used are defined in the constitution, it’s a legal document. Everything has a legal definition. Legal texts are interpreted by the judicial branch which sets the limits of nuance and thought. As Paul Davis showed, however, not everyone reads legal texts as a hobby.. so one should really delegate that work to someone who does (just like I won’t ask my plumber to create some ML models for me)


that's a load of phooey.

Trump got elected by the people, he can be judged by the people.

This argument you're making is the same mindset that causes people to argue that since freedom of speech is only applicable to the government, we shouldn't strive for the ideal in every aspect of our society.

It's looking at the problem exactly backwards.


Excuse me? I didn’t say he couldn’t be judged; I’m saying people are saying he has committed treason when he literally has not. That's the difference between public opinion and reality. I don’t consider my plumber an expert in AI, so I won’t expect him to have the best insight into my job. Likewise, folks who don’t wish to take the time to read legal texts should probably defer to someone a bit more knowledgeable than them. So call Trump crooked, ugly, whatever.. but saying he’s the nicest and happiest president ever would be just as false as saying he committed treason.

Just like OJ being not guilty based on facts (reality) but everyone already judged him as guilty. Same with Zimmerman. Same with Ulbricht.

Please don’t mistake my post for support of any particular view.


So you believe OJ and Ulbricht were innocent, and the respective civil and criminal court cases finding the opposite were mistaken?


Let me reiterate my disclaimer

>Please don’t mistake my post for support of any particular view.

A lot of America did. A lot of America was going off what the general public was thinking rather than following the actual case[0]. This thread was spawned from the love of and desire to read legal texts, which includes the outcomes of cases that could set case/common law. I read all of the case data against Ulbricht, OJ Simpson, Zimmerman etc. As more and more information came to light it was becoming quite obvious what a reasonable jury would respond with. The prosecution showed a clear connection between Ulbricht and his bitcoin wallet, and they caught him red-handed. The LAPD completely fumbled their police duties with OJ. The star witness the prosecution brought against Zimmerman flopped on the stand. There is believing one is innocent, and then there is the actual court judgement of not guilty based on the case presented. The latter is what matters to me as a legal hobbyist.

[0] PS this is why juries are instructed to avoid discussing the case outside of court, reading news on the case, and sometimes are sequestered if it's a high profile case where such communications can't be avoided


You are implying the amount of difficulty / effort put in by programmers is as great as for other highly paid professions. I just don’t think this is true for most cases. I’ve worked at 2 of the FANG companies where compensation is on par with or higher than doctors / lawyers / consultants etc. Sure we all pulled some nights and weekends and did some reading and experimentation on the side, but for the most part it was off to the pub / go home at 6. My evenings and weekends were mostly spent doing “normal” things like playing games, hanging out with friends etc. Outside of startups I don’t know any programmers pulling 80 hour weeks regularly, including side projects and self learning.


I don't know any lawyers or doctors pulling 80 hour weeks regularly without billing extra for it.


I know quite a few lawyers at the larger firms doing 70 - 80 hour weeks on a regular basis. The goal of course is making partner, but few will get there. Consultants and bankers ditto.


You don't know any doctors who have ever done a residency?


I wouldn’t say that “Regularly” means “only during the first 3 years of your career”


and 4 years of medical school. A programmer can earn enough to retire before a doctor finishes their training.


You are implying that effort should be correlated to pay. It isn't. The hardest job I ever had was being a bartender/barback at a very busy bar - it sure didn't pay much.

It's possible that one programmer can generate and capture more economic value by doing one "not very complicated thing" that gets pushed to 1 billion instances[0] than a brain surgeon can by doing one extremely complicated thing on a few dozen or hundred people per year.

'curl' is not complicated, but it clearly has incredible economic value, and has an install base of over 1 billion devices.

[0] https://daniel.haxx.se/blog/2018/09/17/the-worlds-biggest-cu...


I guess you need to get a job like game programming to see hours like that on a regular (career) basis.


> Nobody has a hobby of learning the law texts on intellectual property

Speak for yourself buddy, i enjoy pulling up Cornell law or my local jurisdiction's legislative website to fully understand the laws around me. In college my hobby was reading the laws about public nuisances and noise violations (and none of our parties were ever shut down!). I still do this today to fully educate myself. Along with other nonstandard hobbies like flying planes, studying neuroscience or genetics, and owning a server farm for fun..

> Many programmers have a bias in thinking

I think the odds of finding hobbies outside of software and hardware on HN are probably lower than among the general public, but the very reason we have specialists in other fields is their dedication to their hobbies!


They were exaggerating; there are going to be counterexamples, but their point still stands...


This is true. I had a neighbor cringe hard when I didn't know how much to charge for assistance. The only way for me to put a value on what I do is to compare it with what similar jobs charge. I don't like this because it causes inflation and unworthy motives (and probably harms society), but it's how I keep from giving all my time away for free.

One other way would be to demonstrate true mastery, but with the breadth of computing and its ever changing nature it's borderline not possible.


"Nobody has a hobby of learning by heart the bones in the human body or the law texts on intellectual property;"

I don't know anyone who likes learning about bones, but I have actually met a guy who was really passionate about quirks of IP-related laws (my company hired him as a consultant to clarify some mundane and fuzzy issues we had with usage of fragments of published academic books). He could talk for hours about various cases he worked on, citing paragraphs of some obscure laws and going with true passion into detailed analysis of how a given case looks in view of local Polish, Romanian, EU and international regulations. I would easily compare him to Rust or Lisp advocates.


Wondering if I'm a fraud because I can name about as many bones as bash commands.

chmod's connected to, access permissions

(Maybe I'll come up with the whole song if I ever teach CS in grade school.)


sudo !!


> Hobbyist programmers (who usually end up being the best ones) usually put great investment into their career by doing side projects on their free time.

Who’s to say those who went to school didn’t enjoy the learning process of their degree? They are, by this definition, giving up free time in order to learn something, and are making a great investment not only of time but of money as well.

I think that this needs to be reread as “Programmers who are passionate about what they do usually end up being the best at what they do” since if you think about it, they are dedicating ~40 hrs a week to their hobby. Who’s to say someone can’t excel in a hobby if they can’t dedicate more than that? What about people who have more than 1 hobby?


the amount of programming/hour you get at a job vs on a hobby project is very different, though, and tends to go down as you get more responsibility.

Not that the other work you do instead of programming, like problem decomposition, spec authoring, review, etc., isn’t useful for improving your programming skills, but not having to coordinate with anyone means you can just write code on a hobby project.

I recently led a project, and even with a tight schedule and trying to focus on implementing as much as possible, I spent maybe 30% of my time programming.

Note further that I don’t really do any programming outside of work and don’t think it’s necessary or even useful to spend that much extra time programming. Iterating implementation ideas like a scientist is the only way I’ve improved and that’s fairly independent of time spent.


Most people do not have a habit of reading the law; however, this is a path to becoming a lawyer in a few states, including California. http://www.abajournal.com/news/article/want_to_avoid_the_cos...


> Hobbyist programmers (who usually end up being the best ones)

Cite?

Let me guess, you're a self-trained coder.


There is a reason senior role hiring almost never cares about degrees on your resume.

Instead they’re looking at your community contributions and work experience.


This could be a temporary condition.

The principal value in my CS courses was access to systems and compilers (!) that I wouldn't be able to afford otherwise. Most of the people ostensibly hired to present the content didn't add much themselves (one was great, and we're still friends).

However, I've been really impressed with the depth and breadth of the coursework I've seen by way of close friends and younger relatives lately. A robotics class now seems like it might actually deliver on the promise of providing more knowledge in less time than designing/building/programming/etc a complex autonomous robot on your own (for example). You can still do it, but it's going to take a lot longer than just taking the bloody class, or you're going to miss out on a lot of depth/context.


Entirely temporary. I was rejected by google from 2006-2012 because I didn’t have a degree. Suddenly they need more programmers and they drop that requirement. Never mind I have formal CS training, they required a piece of paper.

I would not be surprised during the next recession that employers fall back to arbitrary demands like degrees or obscene amounts of experience.


Probably some. But competent employers will never fall back to degrees. They will always focus on... competence.


Not always true. For example, the AI/ML craze brought a lot of academics into industry and also brought a lot of their politics. For an AI/ML job at most places, prepare to be judged based on having the right credentials (good degree, school and publications) and looked down upon as scum if you don't have right ones.


Or again relevant experience. No one is gonna say no to a person coming out of e.g. Netflix’s search dept or Facebook or has decades of big data experience, regardless of their education.


What is “formal CS training”, if not a degree?


Possibly attending but not graduating (Bill Gates and Mark Zuckerberg come to mind).


I spent time going to college but never graduated. I finished all the required CS courses and a bunch of optional ones plus a few grad level CS courses then left to migrate a Fortune 500 bank off of COBOL. My resume after that was a who’s who of fortune 500s and deeply technical positions but google said no degree, no job. I believe that was the founders attitude towards hiring which has since changed


Have you seen the actual coursework or just the end result?

To borrow from my graphics programming class from ~2008, we didn't actually do much. Every assignment, we were given an almost-working program, and the assignment was just to fill in a single function - the one I remember offhand was a simple linear interpolation.

Granted on the flipside, with my information retrieval and data mining courses, we weren't given any code to start with and had to implement everything completely from scratch.


that was a shitty graphics programming class.

In mine, which was pre-2000 (I'm old), we were doing matrix multiplication by hand to ensure we understood how the underlying transformations worked, building our own renderers, and so forth.


Yes and no. One of the big problems with coursework at the undergrad level is that you typically aren't practicing idea and model genesis. This is an important skill, and one that those hobbyists have practiced a lot.


People who combine their natural interests with their work are intrinsically motivated. As such they tend to put in more hours and that has a compounding effect over time. Teaching oneself also has that effect.

If John Carmack had a twin who wasn't so inclined and just did programming as a 9-5 career, there's no way they'd be as skilled as he is. Plus they'd probably wait until being taught how to program instead of starting to teach themselves as a kid.


I mean it’s true. I’m honestly not even that good of a coder (I didn’t do CS in college and I tend to “hack” a lot) but I’ve been living and breathing code for hobby projects and then work for the last 25 years, and I’m 35. I spend entire days coding nonstop (working on a side project after my job) and I’m still constantly learning. Boot camps or even college degrees are no substitute for this experience.


I "self-trained" at coding before going to CS school. I don't have a citation for my claim (but it seems like a very reasonable theory to me).


Wage isn’t based on the difficulty of the job.

Wage is based on the opportunity cost of the uncaptured labor value incurred due to employment. For software engineering, this is acute since with the same skills needed for employment one can make a competing enterprise to their employer and capture all the value.

Other professions require you to have large capital to do so, so the opportunity cost is either non-existent if you cannot access that capital or further discounted by the financing cost/risk.

Your end SWE wage ends up being the value you would otherwise be able to capture discounted to the expected value w.r.t your risk.

Companies profit on employment by having a lower risk in enterprising, thus a better EV, than their employees in the same enterprise, for a number of reasons: brand, preexisting customer base, speed, proprietary market analytics, etc.


I think the skillset needed to successfully start a company is quite different from the skillset needed to be a good programmer. I consider myself to be a decent programmer, but I would never be a successful business owner


True but a development machine is a bit cheaper than a steel foundry or an automotive production line. The barrier to entry for a new software company is orders of magnitude lower than other industries.


Also, for the most part, it isn't yet monopolized by lobbyists, corporatists and government working together to use regulatory capture to corner markets. That capture is pretty extreme in some industries, such as telecom with its natural monopolies, but it also happens in markets without natural monopolies. People even get arrested/fined for ridiculous things like braiding hair, giving haircuts (such as to the homeless), selling hot dogs, selling cookies, or selling diet advice, etc.

So in accordance with OP's theory, if this were to happen the salaries for devs would go down.


> Also, for the most part, isn't yet monopolized by lobbyists, corporatists and government working together to use regulatory capture to corner markets

When your flawed government has its own stage in the product life cycle


I'm confused: normally occupational licensing and other barriers to entry are modeled as increasing pay by constraining labor supply. Why do you think I think the opposite?


I was just following a thought from the OP's theory. Although it makes some sense as follows:

Regulatory capture involves not just occupational licensing, but artificial monopolies over products and services. For a current example, Facebook has been lobbying Congress, even publicly stating that it wants to work with them to regulate social media platforms and news media; and even though many can see some benefit to these regulatory laws, the trouble is that critics argue these types of regulations tend to entrench large corporations, which have the legal staff and dev bankrolls to deal with the rules. Indeed, many industries, even historically cottage industries such as agriculture, have in modern times been criticized in this way: as having rules and subsidies written by large industrial, corporate entities that benefit large providers by creating barriers to entry for small competitors.

In short, they create legal barriers to entry to creating the next providers, such as the next Facebook. In accordance with OP's theory, as the opportunities diminish, the value of labor decreases.

But yes, traditionally occupational licensing increases the cost of, and barriers to, providing a product or service, and is modeled as increasing salaries and decreasing jobs as the labor supply is constrained.

This is an interesting comparison of orthodox economics as you have mentioned with a reasonable yet heterodox theory in the wild I've seen. I haven't seen OP's theory stated explicitly before. Have you? Unfortunately in economics, we are dealing with the science of studying human decisions and as such it is grossly impractical to create a true scientific experiment here to determine which theory makes better predictions. This is probably also partly why economics tends to be snobby, pretentious and inflammatory: it is ultimately a war of words and mathematical arguments when it comes down to it.


Uh, I am the OP, and that's not what I was saying?


What? You don't think you said something along the lines of the valuation of labor to a firm being tied to the opportunity cost of the product of that labor?


There is no such thing as local demand for software engineers; instead, you tend to get paid more the bigger the project you work on is. This means that you'd expect software engineer salaries to be highest where you have the highest concentration of engineers, such as San Francisco or New York. If it becomes illegal to gather such high concentrations of developers in one place, then all of those high-paying jobs will have to move elsewhere, as you can't sustain them without high concentrations.


But that's because employees are the Factory Line in a way.


It's still probably a lot easier compared to other professions. Most need their employers to provide them with very expensive equipment to do their work. Programmers need little more than a laptop. That's a big difference, business-making skill notwithstanding.


"Successful business owner" doesn't mean creating Google, it could just mean having a 3-5 person IT consultancy and independent clients.


By that success measure basically any office worker can start a company just as easily as a programmer. The only advantage I can see for programmers is that a relatively small team can serve a very large number of customers, because software scales much better than other kinds of works.


>By that success measure basically any office worker can start a company just as easily as a programmer.

Yes, but nobody needs them. Companies can hire office workers by the ton, and office workers can't reach further than a small load of work. Software can.


That's a huge, huge advantage. Even doctors and lawyers can only serve a small number of clients. While software can be reused and enable large deployments. These margins make all the difference.


A good software developer paired with a talented marketer/salesperson together with a lot of work (400-800 hours each) have a decent chance.

They would probably be odds-on to succeed if they were chasing a niche in an established market.

This doesn't apply to companies like Facebook, Google etc because they're protected by a network effect.


It's likely that your employer will support anything you ask for to be a better programmer, but will support nothing that helps you to be a better business owner.


Dev lead (actual lead, not just a title) and product manager roles would definitely help, and developers can move into those. Granted that’s not everything, but you’ll need some of both of those to make it, especially product management, even if you’re solo.


You never know until you try.


Not with that attitude.


As a previous employer of SWEs, this is somewhat correct.

More accurately, for me, salary was set by the market price.

I needed to maintain a certain level of talent to be competitive, so I recruited and found developers at that level. After enough interviews and salary requests it's easy to ballpark the industry average for a certain level of talent and I'd hire the ones that were a good fit at a reasonable salary.

In my case supply was scarce but just available enough that prices were pretty stable.

As the supply of good developers increases it makes sense that the average salary request will drop.

This is economics 101 that no capitalist industry is immune to.


This is the right answer. Life isn't fair. Though, too, programming isn't as trivial as TFA makes it sound -- working with your mind is not backbreaking like manual labor, but it is still exhausting.

But TFA makes a good point: the level of compensation that programmers can demand may well fall more in line with that of other professions at some point. Thus one should not go into programming simply because current compensation levels make it attractive, nor should current programmers spend like drunken sailors. Caution is warranted here.

Software has been eating the world. We're probably far from peak software. Whether we're far from peak software developer compensation is another story -- I wouldn't know or make any predictions, informed or otherwise.


> Software has been eating the world. We're probably far from peak software. Whether we're far from peak software developer compensation is another story -- I wouldn't know or make any predictions, informed or otherwise.

Plus, we will reach peak software eventually - likely decades out of course


> Plus, we will reach peak software eventually - likely decades out of course

I'm amused by people who think that some tech will erase their tech job. Take ops automation (Ansible, not robots); some people think that it's about cutting tech jobs. Yet there are definitely not fewer SREs today than there were sysadmins 20 years ago; far from it.

Until a major paradigm shift occurs, the need for software / tech can only grow in every economic niche.

A crisis is coming, it's not going to be a problem for those with IT skills / abilities, but for those without.


Excellent comment. The killer feature of software is that it is utterly delocalized. Theoretically, you can capture the whole market whether you are based in SF, Berlin, or Calcutta. That has actually probably been somewhat reduced over time as governments have realized the benefits to protectionism (eg you can’t capture the Chinese or Russian market anymore if you’re a search engine), but it’ll likely remain generally true compared to other enterprises.

This effect is so powerful that even if your business has a strong physical component (Uber, Airbnb, etc) the scalability of software still puts you at a huge advantage.

As a doctor or lawyer, you tend to be limited to a radius around your immediate physical location.

Software is a weird beast that we haven’t really seen any equivalents to in human history.


It hasn't been reduced over time - if anything it has increased. The early online commerce ventures were completely nationalistic in their scope, which is markedly different from today.

Localization in the sense that you mean is a function of a lot more than the software: language, culture and law all play major roles.


This is actually a great argument in support of OP's conclusion. Once starting a software-based business achieves a sufficiently high cost to entry (at some point most of the profitable problems to solve will be out of reach of a handful of programmers), I would expect pay rates to plummet.


I think the surface area of "problems that are possible to solve with a handful of programmers" is so enormous that even if the capabilities of a handful of programmers stagnated, it would take decades or more to run out of profitable problems. Additionally, the capabilities of "a handful of programmers" increase over time as the best tooling available gets more capable, so the surface area of what is possible expands* over time as well.

* I'd argue that as tools improve, the number of solvable problems of a given value increases exponentially e.g. if you have a capability A and you introduce capability B, you can now do things that require only B or A and B, and if you then introduce capability C, that opens up (C), (A, C), (B, C), and (A, B, C) as new problems that are possible to solve.

Edit: For a concrete example, with tools like Stripe you can do payments-related stuff without being an expert in handling credit cards, and with tools like EasyPost likewise for physical mail, and with the combination of the two a single developer can now do any of the things that require either of those individually plus things like taking online payments and managing shipping on the best-cost carrier without any in-house staff and only a couple months of dev work.
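The footnote's counting argument can be sketched in a few lines of Python; the capability names here are purely illustrative, not from the comment. Each non-empty subset of capabilities is treated as a distinct class of newly solvable problems, giving 2^n - 1 classes for n capabilities:

```python
from itertools import combinations

def solvable_problem_classes(capabilities):
    """Enumerate every non-empty subset of the given capabilities.

    Each subset represents a class of problems that becomes solvable
    once you have exactly those capabilities, so adding one capability
    roughly doubles the count (2^n - 1 in total).
    """
    subsets = []
    for r in range(1, len(capabilities) + 1):
        subsets.extend(combinations(capabilities, r))
    return subsets

# Adding a third capability goes from 3 classes to 7.
print(len(solvable_problem_classes(["payments", "shipping"])))            # 3
print(len(solvable_problem_classes(["payments", "shipping", "email"])))   # 7
```

This is only a toy model of the claim: it assumes each capability combination unlocks at least one distinct, valuable problem, which is the optimistic premise of the original comment.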


First of all, the fact that any SWE you don't employ can start their own business is just a boosting factor for SWE wages.

The key reason companies can and do pay SWEs relatively well is that these SWEs generate large profits for them.

That's the "opportunity cost" GP mentioned: yes, you'd have to pay the SWE $400k, but the cost of not employing them is several times that in lost profit.

Moreover, I'm not really seeing how the barrier of entry is going to increase substantially. In fact, it will probably decrease.

As software eats the world, there will be more and more "pure software" work, which means your only toolset is your knowledge. SWEs with large amounts of specialized knowledge and skills will increasingly be able to compete in these areas (security is a pretty good example). It doesn't have to be your typical company with a logo, brand, and office space - we're talking about consultancies and virtual businesses.


The places I've worked have had revenue-per-employee in the $200-300k range. That's total employees, not just developers (so, sales, accounting, HR, as well). Doing some rough math, it's been more like $15-25mm per developer.


Yes, it's the same for FAANG as well. If you just divide total revenue by total headcount, you already get seven figure numbers. But if you divide revenue by the engineers who generate it, you get a big multiple of that.


If you divide the total revenue by the number of HR employees, you will also get a much larger number. (Total revenue / total number of employees) will always be less than (total revenue / subset of employees).

This is not a very good metric.


In a company like Facebook, who is directly responsible for the revenue? And who is working support roles?

If you laid off 90% of HR, how will that affect revenue? What about laying off 90% of the engineers?


HR, Accounting, etc. aren't directly revenue-producing. You could make a good argument that Sales headcount should be included, though.


Divide by number of product managers then, even higher per headcount.


I suspect that at least one cost to entry that might be coming down the pike is a legal requirement to design secure and resilient systems / software liability. This might start in sectors deemed “critical infrastructure” but that line is increasingly blurred.


Most software products must stay relevant, so they need continuous improvement.


Software is poorly explained by economics so I question the validity of economic considerations in hiring or paying developers.

The thing that makes software different is scale due to low reproduction costs. You can easily envision the costs of adding new labor, for example, in building a car and directly map those expenses to increased production quantity. In that case scale of labor to scale of product is not exactly 1 to 1, but it’s close. In software that 1 to 1 scaling does not exist.

Perhaps the big difference is that few people working a car assembly line are engineers. Most of the labor is produced by manual labor and robots. In software the actual engineering is equally as rare but the distinction between engineering and button pressing (the manual labor) is absent.

To really prove that point many developers adamantly fear and oppose producing any form of original code. If you are terrified at writing original solutions to a problem how could the work possibly be considered any form of engineering? https://news.ycombinator.com/item?id=21883670


A software engineer is more like a car designer (or, more generally, the entire R&D department of a car manufacturer). The number of car designers is independent of the number of cars you are producing, but if you want more or higher-quality models you have to hire more of them.


Research (R&D) suggests some amount of intentional, documented knowledge expansion, not mere personal learning. Every job has personal learning, including scrubbing toilets and digging ditches. A person mindlessly smashing buttons will eventually learn to smash them faster with fewer mistakes, but that isn't scientific progress increasing understanding of product use cases. There is real research that occurs, with white papers, to test hypotheses, and that isn't what most developers are doing.


In this context I understand R&D as Research and (product) Development. Sure, that includes actual scientific research (say, materials science for better brake pads) but also the development of that research into actual products (making a new, attractive, profitable brake assembly with the great new material the scientists came up with), as well as mundane product development (let's make a new truck by making slight modifications to last year's model).

Programming can fall anywhere on that spectrum.


Low reproduction costs seem like a '90s thing.

If you are building proprietary, not-for-sale software, then low reproduction costs have nothing to do with excess value. Ditto if you're building cloud-y/online services.

On the cost side, too, software comes with extremely high maintenance costs that are often not accounted for ahead of time. These are so high that there had better be enormous excess value in the production and use of software regardless of the relevance of reproduction costs.

The real value of software lies in making the impossible possible.

Automation -made possible in the extreme by software- is extremely valuable. Even beyond automation, there are things humans simply cannot do, and even things that mechanical devices alone simply cannot do as well as software-aided ones.

It's hard to put a price on making impossible things possible.


Software isn't magic. It is either a product or a service the value of which is either a typical business question of marketplace penetration or internal expense reduction. That's it, but that isn't what this thread is about.


> Wage isn’t based on the difficulty of the job.

If by "difficulty" you mean "skill" then of course it is, in part. Why do baseball players make $30m and teachers make $30k? The demand for watching baseball is huge relative to the supply of people who can perform at a professional level.


Yeah, but speedrunning Mario is also really hard and pays nothing.


Demand.


> Wage isn’t based on the difficulty of the job.

It is though, kind of. It comes down to supply and demand economics -- and the perceived difficulty of the job. People hire lawyers because they don't want to become one themselves. Lawyer wages come down when there is a glut of lawyers and not a corresponding glut of lawyer work.

I'm old enough to remember working in 2001 when the software market contracted. It was very hard to get a job, even for skilled developers because hardly anybody was hiring. It sucked. It also temporarily removed software developers who were just in it for the money -- they literally switched careers to something else.

Incidentally my salary is now higher than it ever has been -- so the market clearly recovered, it just took a few years.


You could say the same about Law. All you need to start a competing practice is an office, if that.


It's true. All you need is an office... and a very expensive degree... and clients. That's all.


The parent comment's whole point was that an already-skilled worker can start a competing business with minimal up-front investment, and that the main benefit an existing company provides is its existing brand and customer relationships. Please read before weighing in.


To simplify, you can say wage is based on supply and demand.


Supply and demand (SD) doesn’t tell you what the equilibrium price is, which is what we are concerned with. That’s what this analysis resolves: in this context, the long-term view of the wage, which over time moves closer to the SD equilibrium. But you’re right that the direction of price is completely subsumed by SD theory.


Exactly. Very, very well explained.


But... we understand EXACTLY why developers are paid so well: economics (the same reason EVERY profession is paid the way it is). Two factors come into play for employee pay. 1. Supply/demand: when demand outstrips supply, prices rise. The demand for software so vastly outstrips the ability to produce it that salaries are driven very high. 2. The marginal value an employee can produce: this is a hard cap on the salary that any profession can charge, and is the primary driver of demand. So long as 2 is higher than the prevailing salary, demand will continue to rise, which will apply upward pressure on salaries.

Of course, high salaries will attract more supply over time, which will put pressure back down on salaries. The current dynamic is SO out of whack though- there is a ton of slack in the system.

This isn't a boom/bust thing either. There is SO MUCH business value that could be had if there were programmers available to build the software. I don't think we're even scratching the surface of everything that could be profitably built yet. I think betting on a big bust in software engineer salaries would be a bad move.


There are more factors than just supply/demand and use value. There's the sensitivity of productivity and employee turnover with respect to pay, and the replacement of costly performance quantification with a significantly cheaper rank-ordering system (usually informal via promoting your "best" few engineers).

The former leads to efficiency wages, and the latter to tournament theory.

Example of the first: there are two jobs with an identical market-clearing wage and expected net present value of productivity. The first is relatively unskilled labor - you train them for maybe a week, and they do pretty much average performance for however long they stay. The second requires a year of training during which they provide zero value, then stay an average of a year after that during which they provide double the value. A savvy employer would pay the market-clearing wage for the first role, and above that for the second. By doing so, employees of the second type would be unable to find a job that pays as well as what they have now (since the market isn't clearing), so they'll tend to stay longer and provide extra value.
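The efficiency-wage trade-off above can be sketched with toy numbers (the role structure follows the example; the specific wage and output figures below are my own illustrative assumptions, not from the comment):

```python
# Toy model of the efficiency-wage example: role A needs no training and
# produces average output immediately; role B needs a year of zero-output
# training, then produces double output. Paying role B above the
# market-clearing wage extends expected tenure, amortizing the training year.

def employer_surplus(yearly_output, wage, training_years, productive_years):
    """Total value produced minus total wages paid over the whole tenure."""
    total_output = yearly_output * productive_years
    total_wages = wage * (training_years + productive_years)
    return total_output - total_wages

MARKET_WAGE = 80_000  # illustrative market-clearing wage

# Role A: average output ($100k/yr), market wage, two-year tenure.
surplus_a = employer_surplus(100_000, MARKET_WAGE, 0, 2)

# Role B at the market wage: double output, 1 year training, 1 productive year.
surplus_b_market = employer_surplus(200_000, MARKET_WAGE, 1, 1)

# Role B at an above-market wage: employees stay longer (say 2 productive
# years), so the unpaid-for training year is spread over more output.
surplus_b_premium = employer_surplus(200_000, 95_000, 1, 2)

print(surplus_a)          # 40000
print(surplus_b_market)   # 40000
print(surplus_b_premium)  # 115000
```

With these toy numbers, paying a ~19% premium for the second role nearly triples the employer's surplus, which is the savvy-employer logic the example describes.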

Example of the second: corporations will pay CEOs much more than vice presidents. The work the two jobs do is largely the same, and the disparity in value added between the two roles is much smaller than the salary difference. The spread is there to make sure that senior executives work really hard to be seen as a better choice than their competitors: now the board doesn't have to evaluate how well executives did in an absolute sense, but needs only to judge who the best candidate is for the top role.


Of course, high salaries will attract more supply over time, which will put pressure back down on salaries. The current dynamic is SO out of whack though- there is a ton of slack in the system.

I understand all of this.

Are there any examples of industries where - for lack of better terminology - a salary "bubble" formed and later popped? For example, X job used to pay 100k and now it pays 50k? Probably better to exclude jobs that aren't around anymore because they were replaced by automation or don't make sense because of modern technology, at least for this question.

I guess I have an inkling that - ignoring things like economic downturns and lowering salaries due to high unemployment - employees will fight back against any downward pressure on pay, and once salaries go up, they tend to stay up. Sort of similar to what you see when an individual gets raises and jumps companies for an income boost - you rarely see someone take a job paying less than their current job.

All that reminds me of how people who graduate during recessions earn lower income over their careers[1].

[1] https://hbr.org/2018/09/people-who-graduate-during-recession...


I've heard this basically happened with Law. 20 years ago lawyers were raking it in and everyone wanted to be a lawyer. Today an entry-level lawyer makes less than half what an entry-level programmer does, and with twice the student loans. There's a glut in the market. My girlfriend was a lawyer for a couple of years before doing a bootcamp, and she almost instantly started making three times what she used to. It's crazy.


I thought a lot of the pay changes with lawyers was due to tort reform and caps on punitive damages, but I am basing that purely on vague memory.

To be fair, first year lawyers at top firms do earn comparable pay to programmers at good companies, maybe not as much as a full compensation package with equity, I’m not sure. But those lawyers who consistently rank at the top of their class and among their peers and stick it out and eventually make partner - they will make much more money than the average software engineer. See this link for some details on what top Texas firms are paying associates [0]. Partners at the big firms make a lot more than 190-300k - probably 2-3x.

The catch is that with law or investment banking you need to be at the top of your class and pedigree matters if you want to get the best jobs. With software engineering these days, a degree is becoming less and less important.

I think at the end of the day the person who enjoys being a lawyer is a very different type than the person who enjoys being a software engineer / coder, and I’d be surprised if there’s much overlap between the two. I’m sure some people can be good at both, and certainly most good developers have the brains to get through law school and pass a test, but the work is so drastically different many people just won’t be happy in the job, and, like what possibly happened to your girlfriend, they move on.

[0] https://www.bizjournals.com/houston/news/2018/06/25/major-te...


For a long time, law was also a thing that a lot of liberal arts majors from good schools sort of fell into when they had trouble turning their English or History degrees into a decent living.

Big Law still pays pretty well (although I assume the associate work life is as punishing as ever), but it's probably not a great career path in general for someone who is mostly chasing the dollars.


We saw this with programmers and the dot-com bubble. There was briefly really high demand and high pay, and then when funding suddenly disappeared there were massive shutdowns and layoffs. Many devs couldn't find work for years, and the market wage was much lower because there were so many extra devs. (College CS enrollment also plummeted then, which is one hypothesis for why CS training has been slow to scale up to current demand.)

Automotive factory work is another example, I think. Something like: there were high wages (with unions) in Detroit factories, and then competition first from overseas and from other US states led to a collapse in pay (along with a general collapse in Detroit).

I haven't looked into it, but I suspect that there was a time when some railroad workers were very well paid, which then went bust when one of the railroad bubbles popped.


I do feel that the dot com bubble was different. In that case, the whole industry was brought into doubt. "Maybe this whole internet thing isn't all it's cracked up to be." That notion is clearly ridiculous in 2019.

The much scarier "bubble" today is "actually most of this stuff isn't all that hard". There are probably also smaller bubbles to be had in AI, blockchain, venture capital, etc., but I don't think those will be nearly as broad-reaching as dot com.


The thing with dot-bomb was that there were cascading effects. All those startups were buying gear from the "Four Horsemen of the Internet" so their collapse brought down a bunch of the large firms as well. (And it was all mixed in with 9/11, etc.)

Today, even if there were a big ad-tech collapse, a bunch of highly paid Bay Area engineers might end up having to move back to Ohio but I'm not sure you have the same overall economic effects. (And, if it's mostly just a drying up of VC capital, the effects are even less.)


Airline pilots' real wages have been decreasing for a long time. A starting pilot out of flight school makes like $28k for a lot of hours a week.


Yeah, I’m slightly familiar with that. Flying is a bit unusual since training is so much more expensive than other professions, and requires a lot of hours, so I guess you can consider the first few years like a medical residency. A low-paid training job.

http://www.aacadetacademy.com/CadetAcademy/career_progressio...

AA has an interesting program here, but I still think the best route for aspiring pilots, if you can get a pilot spot, is probably Air Force Academy. It’s very hard to get a pilot spot, though, and I don’t think it’s ever guaranteed unless you do the Army Street to Seat flight program, which does not require a college degree, but they mainly fly helicopters (maybe only?).

I often wish I had been a pilot... currently reading “Viper Pilot” by Dan Hampton which is a hell of a book.


Yes, and the consequence is that there is a shortage of pilots and wages are rising back up.


It's usually less exaggerated than that - stagnation while inflation or salary increases in other fields are higher.


This is a bit hand wavy... isn’t current comp due to a handful of tech companies that became very profitable very quickly, and a lot of venture capital money? Without either of these two factors I doubt wages would be at the same level.


Those companies became very profitable because technology made them so. It wasn't some accident that is here today and gone tomorrow. Technology is at the core of companies like Google, Netflix, Amazon, Facebook. Without the technology, which is implemented by SWEs, none of these would exist.

Their number isn't declining, it's growing.


> Those companies became very profitable because technology made them so. It wasn't some accident that is here today and gone tomorrow. Technology is at the core of companies like Google, Netflix, Amazon, Facebook. Without the technology, which is implemented by SWEs, none of these would exist.

Yes, but don't assume that people in India, Africa, or Eastern Europe are too dumb to do the same work. Nor do the outposts of the companies you mentioned pay the same US levels in those countries. There must be another component to this.


> Yes, but don't assume that people in India, Africa, or Eastern Europe are too dumb to do the same work.

Where did I ever assume that?!

> Nor do the outposts of the companies you mentioned pay the same US levels in those countries. There must be another component of this.

How about "they pay what they have to"?

In India, if they pay $60k/yr, plenty of talented candidates would work for them. In the US, if they paid that much, nobody would apply.

Simple.


Plenty of talented candidates would work for that price, and some would emigrate for 3x the salary and a visa. It’s a complicated and moving system I don’t think we are modeling properly in this discussion.


Well the question is: why don't they replace the $370k/yr american employees by the $60k/yr indian ones? Why do they pay the premium for US employees? Except for defense projects, it doesn't matter where the code is made, US or India. The German software industry outsources heavily and tech wages don't reach the US levels. Why doesn't Google?


The answer would be: 1) they do, and 2) they can't

1) Google clearly hires plenty of people in India. Here's their new office in India, with allegedly ~10k employees: https://careers.google.com/locations/hyderabad/. The assumption that Google is only hiring "expensive american employees" vs "cheap indian employees" does not hold

2) Talking about "american" and "indian" employees is misleading. Your question could just as well be "Why not hire for $100k from <midwest> instead of $370k in CA?" At that level of salary, you're talking about attracting top talent from around the world, who are often happy to migrate. The salary is a reflection of an arms race between tech companies - the same person isn't going to opt for $60k to stay in India - they will either take a $350k job at Facebook, a $300k job at Amazon, or at worst, a $100k+ remote job to stay at home (numbers are illustrative). You simply cannot attract the same talent due to how global (and competitive) the top end of the programming marketplace is.


> Why not hire for $100k from <midwest> instead of $370k in CA?

That's not a realistic comparison. That's a comparison of an average-ish salary in the midwest to a far above average salary in CA. Realistically, it's more like $100k in the midwest vs $130k in CA or $370k in CA vs not being able to hire anyone in the midwest because there isn't enough volume to be able to find someone who's that much of an outlier.


> The German software industry outsources heavily and tech wages don't reach the US levels.

German SWE work doesn't pay well domestically. It's also struggling to attract talent. Probably a reason why Germany isn't known for its many profitable software companies.

> Well the question is: why don't they replace the $370k/yr american employees by the $60k/yr indian ones?

In the case of an engineer making $400k in the US, I suggest this is a fairly exceptional individual, with a level of skills and abilities that are rare in _any_ country.

Since that individual produces large profits for their employers, employers struggle to hire them at any price that is lower than their productive yield.

Strong SWEs can make companies like Google and Facebook millions of dollars per year. So these companies will gladly pay any six figure salary for these individuals.

Of course, if they can get away with only paying them $60k in India, they will. It's not their goal to pay well; it's a necessity.


They try to but it's not always that easy. When you outsource you often need a lot more people to do the same work since there is more overhead. So the total cost may not be that much lower and the output is less.

Not saying this is always the case but it has been like that at my company several times. IMO IT and business need to work close together to efficiently produce quality stuff that the customers want. This is way harder if you are separated by a sucky phone line.


Additionally, businesses learn after being burned a few times that throwing distant resources at software problems tends to make a bigger mess than they started with (no matter how good their PM may be). Often one good developer embedded within a domain in a company is worth more than an entire outsourced team.

I'm sure there are exceptions, but it's what I've anecdotally observed over my career to date.


Google would have to become a remote management company for that to happen. But Google makes so much money that they're not really under pressure to do that.

Outsourcing is a strategy employed by companies who need software engineers on the cheap and are less concerned about the non-zero difficulty of remote work/management.


I wonder if lack of competition has made them lazy when it comes to driving down costs. Each of the FAANGs is a monopoly or near-monopoly in its niche, and American regulators seem reluctant to deal with that.


> Except for defense projects, it doesn't matter where the code is made, US or India.

Actually, yes it does.


My point is that there are usually quite specific conditions and money flows leading to very good compensation for certain jobs, rather than a 'force' like 'Technology', which seems a bit magical to me. An example is when Ford started offering higher wages to change the labour market dynamics [1]. Although yes, this was brought about by the industrial revolution, saying that doesn't really provide the crucial insight into why it happened and the effect it had.

[1] https://www.forbes.com/sites/timworstall/2012/03/04/the-stor...


Your initial comment talked about "tech companies that became very profitable very quickly" as if that's some sort of accident. I'm reminding you that it is not, but a direct result of their technological focus.

Technology is a force that makes employees more profitable. Work that used to require dozens of employees now requires just one engineer writing code for multiple computing devices. Nothing "magical" about it, it's the very nature and purpose of technology.


I didn't mean to say it was an accident, but at the same time surely you aren't claiming it was an inevitability?


It is an inevitability. Increasing productivity is the goal of technology. Look at technology from the dawn of time: all it has done is increase the productivity of labor, both individually and as a group.

Without getting into a huge topic - it also encourages a "winner takes all" set of conditions, in which companies with the right technology and the best employees control a disproportionate amount of the profits in their field.


Current comp is ultimately because computers and the internet are both the entertainment medium of choice (displacing film, radio, TV, and we know all of those industries and the supporting industries they spawned were huge) and a universal personal and business tool (with an impact as great or greater than the automobile did on both personal and business transportation and, as before, we know how big the auto industry and supporting industries became). Computing and the internet are so fundamental now that I do not expect this wave to stop until every person on the planet, no matter what their local standard of living is, has a personal smartphone within arm's reach 24/7.

We're not guaranteed to have no bumps along the way of course but, over the long term, it's really hard to envision a case where demand for developers, and consequently comp, drops drastically before this global saturation occurs.


I guess that depends on if you're talking about bay area comp or high comp elsewhere, which are on two very different levels, but still high compared to other jobs in their respective regions. For the bay area it might indeed be coupled to extremely profitable tech companies, but elsewhere there is still a very high demand for software engineers from SMBs.

A lot of mid-sized businesses want to gain an edge over their competition via custom software solutions. (The optimistic part of me also thinks that it's still the early days there, where companies in each niche are trying to uncover software-driven upsides with fewer resources; once those are found, everyone in the niche competes over squeezing the most out of that opportunity, which requires a lot more engineering resources.)

On top of that even moderately complex websites (search function + data pipeline leading into that) that a lot of 100+ people companies have require regular attention from a somewhat skilled software engineer.

So in conclusion, I think that high compensations might be driven up by "a handful of tech companies", but a lot of the demand is mostly uncoupled from that.


That handful of tech companies needs a boatload of engineers, restricting the supply.

A restricted supply means you have to pay a higher price.

It doesn't matter what causes the lack of supply; it's the lack of supply that causes prices to rise.


> Of course, high salaries will attract more supply over time, which will put pressure back down on salaries.

It's not clear to me that there aren't soft and hard caps on supply, even if everyone wanted to be a programmer because the compensation was so great. There are certainly aspects of intelligence, temperament, and personality that are needed to crank through information, organization, and abstraction problems as part of a team for decades of one's life.


I've worked with the overseas "talent" and if that is the best they can do, I'm not worried.

Hell, I've worked with a lot of people on the US side of the pond who have experience or are educated, and they can't rub two keyboards together to ship a product.

My observation has been the opposite: no matter how much you offer, a lot of companies STILL cannot hire enough engineering talent.


Yeah, this. I've been in the industry almost 20 years and over that time have noticed a massive shift in recruitment processes.

At the first couple of jobs I had, the interview process was a conversation with barely any technical content (let alone being asked to write a single line of code). Nowadays there is generally a barrage of technical testing.

I think companies are realising the huge difference between having top-20%ile programmers and the rest. Where I am currently, although we get plenty of applicants, we simply can't fill all of the spots.

Agree with the overseas talent comment as well. The amount of technical debt they create is massive, the overhead in communication is too high, and the people aren't invested in the products their code goes into.


> Agree with the overseas talent comment as well. The amount of technical debt they create is massive, the overhead in communication is too high, and the people aren't invested in the products the code they are creating.

Technical debt: as "overseas talent" cleaning up the technical debt created by a (now departed) US employee, I can see that both your experience and mine are anecdotal.

Slack communication: Me: 1. Question A, 2. Question B, 3. Question C. US employee: answers half of question B, general ramblings about A and C.

Zoom communication: Me: let's get on Zoom and figure this out right now. US employee: let's schedule the meeting in 2 days, and throw a PM and two more developers in it, so they can waste time too.

So, I guess, let's agree that generalizations are not ok.


You just made the parent's point: the overhead of dealing with offshore devs is too high to be productive.

Perhaps you’d like to expand on this technical debt you’re cleaning up?


There is a flip side to this: check my other comment. "Overseas talent" is not invested in the product because most companies don't treat "overseas talent" as equals. My experience as "overseas talent" is that US companies bring their own technical debt which they are powerless to fix and they blame us for their incompetence.


Your experience is vastly different from mine.

I've worked at 7 different Silicon Valley companies in my 35 years of employment, though the last 20 have been at two particularly high-flying companies. Approximately half the staff is foreign born, some US educated and most foreign educated. I see no pattern in skill levels between US-born and foreign-born coworkers. Maybe it is because these companies draw top talent, and can afford to pay for it, that I have this experience. You might be cynical and think I'm a "meh" level employee who is easily impressed, but my pay grade and annual reviews say otherwise.


The comment suggested people who still lived overseas. The foreign born talent you work with lives in the US now. If anything it suggests the best talent moves to the US for the better salary and quality of life.


My company has significant teams in many countries. Daily I work with people from all over the world. I've certainly worked with some excellent people who worked in China and in India and had never set foot in the US. But you are right, the majority of the people I work most closely with are in the US and so I know their characteristics the best.


You are both making different points, because I agree with both of you. Some of the best engineers I’ve worked with have been foreign born, and I’ve also seen disasters happen due to outsourcing to foreign talent. These are two vastly different groups we are talking about. The best foreign engineers almost always end up working for well paying product companies. The other end of the spectrum, foreign or US, often end up at contracting companies that compete on price.


I think a lot of the good practices and skills necessary to ship good quality software are not documented well (or formalized). That's why working with good teams and learning from good devs (or profs) is what determines the quality of a developer.

Also, time and again we see posts here lamenting the hiring process in this industry and we haven't cracked it yet. This adds to the demand-supply gap.

Anecdote: I have interacted with some of the 'overseas talent' and they were excellent programmers but lacked communication and hadn't fully figured out the mechanics of working in teams. Others from the same background but who decided to get a higher education from US had better exposure and generally fared better.


Totally agree. Successful software engineering has almost nothing to do with clever programming tricks and everything to do with writing software that's empathetic to being within a commercial and team environment. That means writing software that takes into account that it will be read by others, used by others, and inevitably changed by others.

Interview processes at so many software companies don't do a good job of selecting for it. They still look for algorithmic competence and next to nothing about teamwork and structuring software projects.


I’d argue that many of these practices and skills are in fact non documentable/teachable (see the oft cited problem of teaching recursion/pointers to CS students, and that’s just one of many fundamental required things that are way more formally defined compared to “leading a team to ship consistently good software”). I’m sure many of us have seen very motivated, well intentioned junior developers who struggle very hard to push and improve their skills and still get completely left in the dust compared to the engineers who just seem to have a knack for it.

In this way software engineering resembles traditional craftsmanship more than any other discipline.


Methinks this is a property of a new (~50 yrs) industry, rather than an inherent property of programming.

Whereas programming itself has somewhat solidified, most of the programmer management/learning/etc. state-of-the-art still deals in nasal demons, i.e. much is still seat-of-the-pants experimental.


I think if you play the game of finding the lowest bidder to do the job you'll definitely run into folks who aren't good on both sides of the pond.

If you pay up, you can hire excellent engineers in India, China, Eastern Europe and other parts of the world. No wonder Amazon/Google/Microsoft have a large developer presence in these markets.


I've worked for US companies and I'm amazed how low quality their talent is in general compared to talent in Europe. Then I realized that US companies can afford to buy top 2% talent in the EU, but most of them can't afford to do the same in the US. So this leads to a very frustrating situation where you're smarter than most of your US colleagues, but they still tell you what to do because they hired you.


Just as in other fields, Europeans tend to outpace the U.S. when it comes to book smarts, but lag when it comes to things you can't/don't learn in school. They're great at building exactly to requirements, but less skilled at handling the unexpected or coming up with something novel. There's a place for both skill sets, fortunately.


It turns out that being a good programmer isn't simple or easy. I've seen many teams where everyone on paper looks like they should be a good programmer, yet only a few team members are doing the heavy lifting and the majority are really just in supporting roles and sometimes not even doing that so well or even contributing negative marginal productivity by creating bugs, technical debt, and team dissension.


Yeah, but I think most of the time this happens because some senior developer edges all the other devs out. After a few months of this he's so productive with the code base that the others look like bad programmers.


It's hardly a one dimensional thing though.

One of my coworkers is really good at churning out code in a short amount of time that will do the job. He's not so good at abstracting his code to make it more reusable, even when he has time.

Another coworker is a hard-working cleanup guy, he doesn't mind getting his hands dirty and spend several long days cleaning up a decade of cruft. But he doesn't always appreciate subtle differences in code, so sometimes introduces regressions.

Third guy we have has excellent domain knowledge which is invaluable, yet is an otherwise average programmer.

I've got my set of things that I'm really good at, and the things I'm not particularly great at.

We don't have a rock star dev, but we have people who are good at different things and we take that into account when planning and executing projects.


Sometimes, but usually I think the primary factor is the personality of the junior engineers. I think to be successful you have to like learning independently and have a good work ethic. Most places don't have strong mentoring programs, and as you mentioned, some programmers won't mentor for whatever reason.


Programmer salaries are only especially high in the U.S. and a few other countries. In most countries, programmers don't get paid anywhere close to what SV engineers make. In the U.K. for example, programmers start at ~$32k/yr.

I think the reason for high programmer salaries in the U.S. is due to:

1. Culture that values engineers

In Silicon Valley and in many tech companies, the culture is that high quality engineers are highly valued and respected, and thus command high salaries. Companies like Google, Facebook, and Microsoft were founded by ex-programmers / ex-CS majors, so they understand the value that high quality engineers bring to the table and are willing to pay for them. Whatsapp was bought for $20b and only had 50 employees, 70% of whom were engineers.

In non-tech companies where tech is seen merely as a cost center, programmers don't make that much (eg. fashion companies). For the record there's nothing wrong with a non-tech company not wanting to pay top dollar for engineers; some random ecommerce store probably doesn't need world-class engineers, just like a semiconductor manufacturer doesn't need the best graphic designers.

I'm not very familiar with the UK so I don't know firsthand why UK engineer salaries are so much lower than in the U.S., but if I were to guess I'd say it's largely because their culture just doesn't value engineering talent like in SV.

2. Tech companies have a lot of money.

The big tech companies have very high margins. Facebook's market cap is $20m/employee, profit is $684k/employee and has a profit margin over 40%. Apple's market cap is $9m/employee, revenue is $2m/employee, gross profit margin is 38%, and they have $245b in cash. Even unicorns that bleed money (eg. Uber) are flush with cheap capital from VCs, with Uber having raised $20b to date.

Looking at it from this metric, it would seem that engineers are underpaid relative to the value they create. Not only just engineers, but labor as a whole.


People on HN look at everything not just from a regional perspective, but that of a very small clique out of all of the professional programmers. FAANG companies have plenty of work that is beneath their SWEs, which they outsource, in some cases, to American companies that have a mix of American employees and offshore. They hire people in the US around $40-60K and increase pay very slowly. Nobody gets paid $100K, because (a) if you switch jobs, you only get an incremental bump in salary, due to not having the right kind of title and pedigree, and (b) you're competing with much lower paid offshore employees. Best case is $80-90K after ten years and/or relocating to a major metro area.


Interesting. Any examples of this kind of lower valued work?


I mean, it's not precisely lower valued. It's valued low enough by the in-house SWEs, that they aren't going to do a good enough job to prevent embarrassing headlines.[1] Therefore, the outsourcing company isn't going to charge much less.[2] At the same time, they provide insulation from responsibility if something does go wrong. So they can pay much less and capture the difference. The cultural gap[3] prevents their employees from working directly for companies like their clients so competition doesn't equalize wages.

[1]This is a reference to particular events of which I'm aware.

[2]e.g. an hourly rate equating to $250K/year, maybe more.

[3]I mean nerd culture, not "real" culture.


The UK thing blew my mind. I started at 53k/year in 2004, at lower than market rate because I loved the company and didn't negotiate harder. I'm at almost 4x that now and I live in a stupidly cheap metro with a scant half a million people. When I looked up what "Junior software dev" pays in London, my jaw practically hit the floor.


A third reason: differences in net income, i.e. the money left after taxes, cost of living, and each country's insurance structure. $100k in the US is not comparable to the same amount in Germany without taking all of that into account.


I suspect your number 2 is the most important factor. Outside of big tech companies and those who have to compete with them, salaries are much lower. Possibly not low enough, given the lack of barrier to entry as in the original article, but still.


This isn't hard to understand at all. Programmers are paid well because you need a pretty smart person willing to work at a not very exciting job often offering few interesting challenges that will likely contribute nothing of note to society but has the potential to contribute substantially to the efficiency and profitability of your enterprise.

Most of the people that are smart enough to be good are likely to be either doing something more interesting, contributing more to society, or making even more money than you are willing to pay them to sit in a chair and write code.

It's not that most people aren't suitable. It's that most of the people who are suitable aren't available, and thus you have to pay more money to compete for the ones who are to sit in a cubicle and waste their life helping you sell more ads.


> a not very exciting job often offering few interesting challenges

I think the exact opposite is true. Maybe programming isn't as interesting as being a literal rockstar, but I think most programmers' second choice of job would be "real" engineering which is usually lower paid and in my opinion less interesting.


Yeah, he’s saying I’m either not that smart, or that I’m wasting my life... I’m not sure what to be offended by here.


Maybe you are lucky enough to work on interesting things and don't need to be offended?


Some people like crosswords/sudoku; I don't understand it, except that I like my work just for the sake of solving problems, which is basically the same thing. Some people find satisfaction in "uninteresting things".


No matter what happens, his basic advice is sound: spend a small portion of your income.

My advice is similar: live on your base salary, and save a lot of it, so you will be fine even if your bonuses/RSUs/options end up worth nothing.

In the worst case, you save up a lot of cash before programming compensation drops. But even if that doesn't happen, it's still better to build wealth, save for retirement, and perhaps have the ability to retire early if you choose to.


Managing your life is about balancing risk. Having 6 months in an emergency fund is golden when (not if) you get laid off or something bad happens like a car wreck or illness or whatever. During the course of your career, it will likely happen. When it does, not having monetary worry be part of it is golden.

You can also broaden your opportunities by paying off your house and not borrowing to pay for depreciating assets (cars and other things with wheels or propellers). Getting burned out and want to take some time off? It's a lot easier if you are not having to make a big debt payment or rent payment each month.


I'm curious - what percentage of pre-tax cash per year would you recommend saving? All of my friends are on my case for not "saving enough" because I'm living my best life, but I'm not sure what the appropriate number is if my net worth is increasing.


15% of your pre-tax salary saved for 40 years should get you enough to retire at a comfortable level compared to your current spending. Lots of tech workers do not get 40 year careers. If you plan on a 25 year career, you need to save more like 35%.
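For what it's worth, those two rules of thumb roughly check out arithmetically. A quick back-of-the-envelope sketch; the 5% real return and 4% withdrawal rate are my own assumptions, not anything official:

```python
def fv_of_savings(rate_saved, years, real_return=0.05):
    """Future value, in multiples of annual salary, of saving a fixed
    fraction of salary every year at a constant real return."""
    return rate_saved * ((1 + real_return) ** years - 1) / real_return

def retirement_income(rate_saved, years, withdrawal=0.04, real_return=0.05):
    """Sustainable retirement income, as a fraction of salary,
    assuming a 4%-rule withdrawal from the accumulated nest egg."""
    return fv_of_savings(rate_saved, years, real_return) * withdrawal

# 15% saved for 40 years vs. 35% saved for 25 years:
print(round(retirement_income(0.15, 40), 2))  # 0.72 of salary (vs. 0.85 spent)
print(round(retirement_income(0.35, 25), 2))  # 0.67 of salary (vs. 0.65 spent)
```

Both scenarios end up replacing roughly what you were spending while working, which is where the 15%-vs-35% split comes from.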


I save around 30% pretax... theoretically it'll get easier with a TC increase. Don't know how to save more without drastically reducing quality of life though.

aantix 3 months ago [flagged]

Why?

So you can be old and crippled finally going on the vacation you dreamed of? Barely able to walk out into the ocean, much less surf like you dreamed of?

Nothing is guaranteed. Quit living off rehydrated beans and eggs like you’re winning a merit badge and start financially planning to see the things your heart desires.

If you have left over savings when you die, you did it wrong.


No. So you can be old and crippled and still live.

If you are assuming that you will always be able to work, making the wages you make today, then you will have a rude awakening the day that you cannot. Living below your means has been almost universally good advice since the dawn of civilization.

I do appreciate your sentiment however, in that there is a lot of advice from FIRE types who talk about even the most minor indulgence as some hedonistic waste of money. And it makes sense to remind those who go to that extreme, that spending money now can actually buy a little happiness. But I think you're being slightly disingenuous by taking the parent comment to the extreme of living off rehydrated beans when they're simply reiterating the idea of saving as a virtue.


> old and crippled finally going on the vacation you dreamed of

> living off rehydrated beans and eggs

I'm continually baffled by this kind of reaction.

I spend ~30k USD per year - more than the average american and more than 99% of the world. I live in a nice apartment with a garden, a 10 minute commute from the centre of one of the most expensive cities in the world. I buy organic food from the local supermarket and eat out multiple times per week. This year I vacationed in Tenerife, Greece, Italy, San Francisco and Vancouver.

This is, by any reasonable standards, a life of luxury. And on a typical FAANG salary it would take ~7 years to save for retirement - https://networthify.com/calculator/earlyretirement?income=13...

You would think most people would be interested to learn that their level of wealth opens up the option of complete financial independence in a comfortable middle class lifestyle before the age of 30, but instead it's always straight to yelling about dried beans.
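The ~7 years falls out of simple arithmetic. A sketch, where the ~$135k take-home (the calculator link above is truncated), the 5% real return, and the 4% withdrawal rate are all my own illustrative assumptions:

```python
import math

def years_to_fi(savings_rate, real_return=0.05, withdrawal=0.04):
    """Years (starting from zero savings) until the nest egg supports
    current spending at the withdrawal rate, at a constant real return."""
    spending = 1 - savings_rate            # as a fraction of income
    target = spending / withdrawal         # nest egg needed, in multiples of income
    # solve: savings_rate * ((1+r)^n - 1) / r = target  for n
    return math.log(1 + target * real_return / savings_rate) / math.log(1 + real_return)

# Spending $30k out of a hypothetical ~$135k take-home (savings rate ~78%):
print(round(years_to_fi(1 - 30 / 135), 1))  # ~6.3 years
```

So a ~78% savings rate gives roughly the 7-year figure quoted above, while a conventional 15% savings rate comes out to a 40+ year working career.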


The last 10 years of life get incredibly expensive unless you have children who want to be full-time caregivers... $30K per year + social security likely will not cut it. $30K today will maybe just cover in-home assistance in a medium to low cost city. Unsure how inflation will affect things.

You'd need much more than that....


> So you can be old and crippled finally going on the vacation you dreamed of? Barely able to walk out in to the ocean, much less surf like you dreamed of?

> Nothing is guaranteed. Quit living off rehydrated beans and eggs like you’re winning a merit badge and start financially planning to see the things your heart desires.

I never said you should live on beans and eggs and skip vacations. Most engineers are paid well enough to live well on their base salaries while saving a reasonable amount of money.

What I'm advising against is stuff like spending so much on your mortgage that you will go broke if your RSUs drop in value or your annual bonus is less than you expected; in other words, live a lifestyle you can afford with your base salary.


> My advice is similar: live on your base salary, and save a lot of it

To be fair, that comment doesn't exactly align with advising someone against getting a mortgage they can't afford unless they receive a bonus.

I'm currently reading "Narrative Economics" by Shiller and it mentions propaganda historically being fed to the poor/middle classes in regards to saving as much money as possible (so banks could profit or earn commish off it)...I'm by no means against saving/investing money, but your original comment (and the multitude of others that are similar) makes me wonder how much of that propaganda is alive and well today.


> If you have left over savings when you die, you did it wrong.

which would be a fine statement if we knew exactly what is going to happen.

As we have no idea whether we are going to live till 100 and drop dead in the act of procreation with our 50th life partner, or die at 32 of bone cancer after 5 years of on/off chemo and other invasive procedures, one has to build in slop to cope with all that life might throw at you.

The amount you save is entirely up to you. Some people might be willing to live off beans if they save enough to move to a pacific island in 5-10 years.

Other people are contented to live a bit larger, because they plan to stay where they are.

As with all things, you need to figure out your goal and plan for that, and have enough slop to account for likely occurrences (family, illness, recessions).


There’s a middle ground which is to spend intentionally so that you’re not wasting money. You can set aside a percentage for purely fun stuff. But a lot of us waste money — we spend on stuff we don’t need, things with diminishing marginal utility. How many dinners out a month are really necessary to improve happiness — and at what point does it cease to move the needle? That’s subjective, and will vary from person to person.

It’s about being intentional.


Or retire at 40-50 and live the good life doing all the things you didn't have time for when you were working, without being crippled like people working regular jobs are when they retire.

And have enough left over so your spouse can keep living comfortably after you're gone. Even if they were a SAHP and thus had a gap in their retirement savings.

And have enough left over so your kids and grandkids can get a head start in society.

It's not just about you in this very moment...


This is an extreme, uncharitable interpretation of the parent comment.


I think on the types of salaries this article is referring to, you could happily do everything you want to reasonably do lifestyle-wise while still saving a safety net for the future. Travel, coffee and restaurant meals, expensive hobbies, a nice neighbourhood. All should be achievable.

If you want to play it safe, then you're really just trying not to leverage your wage into insane debt. You could live a decent lifestyle while putting money away and be financially fine if wages went down. But if you get loans for a $1.5m home and a $200,000 car you might find yourself with payments too high.


> So you can be old and crippled finally going on the vacation you dreamed of?

No, so you can retire early, as in your 40s. It's stated clearly in the article and not easy to misinterpret.


And if ageism in the industry is as bad as it seems to be, being able to retire in your 40s might be a pretty important consideration. If nothing else, it gives you a way to turn down work you don't like doing.


By this logic the epitome of efficiency would require my children to die on the same day I die. I can't bring them with me, after all.


I think there's a balance between what you and grandparent are suggesting, but it's hard to strike, and there seems to be a lot more advice about either retiring by 40 or yoloing it than striking that balance.


You seem to be fighting your own personal demon here. This is not what the parent commenter meant. If you can't meet basic needs, there is no point trying to save a lot of money. The advice becomes relevant as your income begins to rise above cost of basic needs.


This definitely flies in the face of what the economist Milton Friedman[1] perceived as our societal values.

I personally hope this kind of thinking isn’t the trend.

[1] https://youtu.be/km9OCw3f5w4


In my experience, people with this argument are often trying to rationalize exceptionally unaffordable hobbies, like leasing a top end Audi or Tesla every 3 years.

