Are We in the Middle of a Programming Bubble? (thinkfaster.co)
386 points by swimorsinka 6 months ago | 397 comments



I think programmers tend to underestimate the difficulties involved with becoming a good programmer because once you're good, you only see the even steeper learning curve ahead of you.

Some of the smartest people I know work in other domains: biology, chemistry, and even physics. They are sometimes baffled by tasks that seem trivial to me, and I'm under no impression that I'm more intelligent than them. I simply specialized and focused only on programming, while they program to accomplish other tasks in their domain of expertise.

Can this last forever? Of course not, nothing lasts forever. But wondering why the wealthiest corporations in the world pay their workers high salaries is perhaps like wondering why water is wet. Software has a low marginal cost, and the rest is basic incentives for the corporations.


> I think programmers tend to underestimate the difficulties involved with becoming a good programmer because once you're good, you only see the even steeper learning curve ahead of you.

Good programmers I know also overestimate the skill needed to earn a high salary in this job. You don't have to go far up the learning curve; these days, you just teach yourself a little JS and go for a webdev job, writing shit code and still earning more than most people in your country.

> But wondering why the wealthiest corporations in the world pay their workers high salaries is perhaps like wondering why water is wet. Software has a low marginal cost, and the rest is basic incentives for the corporations.

Nah, that's like wondering why this ice block sitting on a hot plate is still solid. The answer: it just got put there, and it'll melt in a moment. So too will high salaries end, as most of the low-hanging fruit gets eaten by software built by a mass-produced cohort of programmers.


> Nah, that's like wondering why this ice block sitting on a hot plate is still solid. The answer: it just got put there, and it'll melt in a moment. So too will high salaries end, as most of the low-hanging fruit gets eaten by software built by a mass-produced cohort of programmers.

Our industry has its share of cycles, but this, in my view, is largely wishful thinking. Nothing wrong with optimism, but...

Every 5-10 years there's a "technical shift" that forces everyone to reevaluate how they build software, or more importantly what they build, and the race starts all over again. The ice block is removed from the hot plate and replaced by a bigger, colder block of ice. And when these technical shifts aren't taking place, the bar for what constitutes "good software" inches upward.

If your standards for acceptable software were frozen in time in 1985, then using modern hardware and software toolchains you could accomplish in one day what used to take a small team an entire month. But if I delivered, say, what passed for a "good enough" database in 1985, it would resemble someone's 200-level CS final project rather than a commercially viable piece of software.


I have not noticed these technical shifts per se. What I have noticed is that mature engineers move on to other things while new ones reinvent the wheel with some fancy new language or term, which then becomes the new way of doing things, and the cycle repeats. Sometimes there is a great deal of value when a new level of abstraction arrives, but I wouldn't call that a shift; it's just progression.

Many of the underlying problems and solutions have existed for decades. The database systems you mention are a good example of this.


When I talk about shifts, I'm referring to things like the proliferation of smartphones and tablets, which increased the net demand for software and brought along entirely new specializations of knowledge.

While there are some enduring key concepts in things like databases, the fact remains that your 1985 database would not be considered sufficient in today's world: it would have too many limitations, lack features we now take for granted, fail to scale to modern data requirements, etc. Supporting all that "modern" functionality is non-trivial and requires a huge amount of effort. You can't just say "Well, we figured out space- and computationally-efficient hashing, so relational databases are well on their way to being feature-complete."

There's a reason we haven't stuck with 1.0 on our platforms, and it's not just security or a desire for a bigger version number: New demands required new functionality and new ways of building things.


"When I talk about shifts, I'm referring to things like the proliferation of smartphones and tablets, which increase the net-demand for software and along with an entirely new specializations of knowledge."

iOS was basically AppKit, so anyone already developing for the Mac knew most of what they needed to know to develop for iPhone.

Pretty much every programming innovation is incremental, and doesn't require throwing out all of your previous knowledge and starting over.


> iOS was basically AppKit, so anyone already developing for the Mac knew most of what they needed to know to develop for iPhone.

Maybe. But AppKit was not the Mac Toolbox.

When my career began, being good at memory management was a skill to be proud of. I would say now, being good at concurrency is a skill to be proud of.

I don't really have to worry about memory management any longer, but I didn't have to worry about threading when I started my career either.

As I see the younger generation entering the programming field I wonder in what ways the craft will be different when they've had a few decades under their belt.

Will parameter tuning datasets for machine learning be the coveted skill? Who knows.


>so anyone already developing for the Mac knew most of what they needed to know to develop for iPhone

But the demand for developing in AppKit suddenly increased by orders of magnitude.



MongoDB didn't become a public company through innovations in fundamental distributed database technology or even through good engineering. They became a public company because once Javascript became adequate for building client software, there was a strong incentive to build MVPs using the same data structures from client to server to DB, and once you build an MVP that gets adoption there's a strong incentive not to switch databases.

That's the sort of shift in the environment that the grandparent is talking about. Fundamental CS tech was arguably better in the 1970s and 1980s, because it moved more slowly and you had time to get the details right. That doesn't matter if you're building say a mobile Ethereum wallet in 2018, because you're building for the user expectations of today, they don't care about data integrity or security as long as it doesn't fail during the period where they're deciding which tech to use, and software that solves the problem (poorly) now is better than software that doesn't exist.


> Fundamental CS tech was arguably better in the 1970s and 1980s, because it moved more slowly and you had time to get the details right. That doesn't matter if you're building say a mobile Ethereum wallet in 2018, because you're building for the user expectations of today, they don't care about data integrity or security as long as it doesn't fail during the period where they're deciding which tech to use, and software that solves the problem (poorly) now is better than software that doesn't exist.

I believe you are a victim of survivorship bias.

There was plenty of shitty software in the 70s and 80s. The difference between then and now is that we haven't yet had four decades to see which software of 2018 stands the test of time.


I agree; back then there was not a mindset of move fast and break things. It was foundational research, and a lot of results and learnings from then are still applicable today.


> It was foundational research

In the 1980s there was a lot of "foundational research" (poorly re-inventing the wheel) for microcomputer people who did not know about the work done on large computers in the 1960s. Move fast and break things was also very much a thing for microcomputer manufacturers and most microcomputer software vendors. Look at how many releases software packages went through, and at what rate.


I think you're agreeing to disagree. His viewpoint, to me at least, is opposite to yours. Or at least parallel. He never said that today there's no foundational research.


> software that solves the problem (poorly) now is better than software that doesn't exist.

That has always been the case and is about as good a one-line summary of the software industry as I can think of.


It could be a one-line summary of Richard Gabriel's famous essay "The Rise of 'Worse is Better'":

https://www.jwz.org/doc/worse-is-better.html


Done is better than good!


But on the flip side of that slow movement, you wind up with things like Oracle... for good or bad.


And the mainframes that run our banks, transportation systems, healthcare, public safety, etc. Use the right tool for the job, price it against what the market will bear. Pacemakers and insulin pumps driven by npm updates - shudder -


This predates the formulation of the CAP theorem by ~15 years, so I doubt it contains everything relevant to a modern distributed database.


It's before my time, but I'm pretty sure people had at least an intuitive understanding of what partitioning does to a datastore before Eric Brewer wrote it down.


I'm not sure why you get downvoted, I'll upvote you.

The "modern" database systems are now going back to the exact design principles that the books you refer to solved long time ago. There is tons of research, dissertations,.. that focuses on this from decades ago.

It's just now that the new systems are realizing these problems actually exist.

If you don't know the history of a certain field and what came out of it, you repeat the same mistakes. This seems to apply to software engineering as well.


I can assure you that the major theories behind modern big distributed databases were not written in decades old books.


Yeah, distributed database systems from the late 70s and early 80s actually had certain transactional guarantees that some of these "modern big distributed systems" you refer to still don't have.


Yes, some of those theories were also applied in decades old systems. Look up Tandem Computers and Teradata.


In maturing parts of the software industry you'll often see a desire to stay with the times in order to maintain a competitive edge; re-inventing the wheel often looks like full or partial re-writes of a system for minor marginal gains.

A great example of this is the evolution of FB/Google/Amazon. Portions of their core tech have been completely re-written over the years for marginal gain, but there is a large premium to being the best in tech.

In other parts of the industry every new cycle enables some new area of tech, and those marginal gains become the minimum bar for entry. e.g. Deep Learning and Computer Vision, distributed systems and cloud computing/SaaS.


You overestimate the quality, reliability, and expandability of those old systems. The ten-blue-links tech of 1998 couldn't support the functionality and scale of Google today.


> your standards for acceptable software were frozen in time in 1985

Haha, in 1985 the Amiga had a multitasking GUI desktop in 512 KB at 7 MHz. Now we have millions of times more computational power and struggle to deliver an experience as crisp and satisfying.

I wish people still made software like it was 1985, that actually used the full power of the hardware for something useful...


Yeah but Exec/Kickstart/Workbench was missing some basic niceties that are table stakes today:

- No process or kernel security (processes could rewrite kernel code)

- Processes could effectively disable the scheduler

- Supported only a single, fixed address space: both a massive limitation and a performance hack that made the system bearable to use

- Single-user

- No security model

There are embedded applications these days where not having these features is a deal-breaker. Let me assure you: if you re-implemented the original Amiga OS feature set, it too would be screaming fast. The tricky part is keeping it fast once you start adding protection and additional functionality on top of it.

And largely what happened when you tried to implement more complicated applications on top of these primitive systems is that they would crash the entire system, constantly.


That and the fact that the Amiga was a clean-room design. The IBM PC was already old, and even x86-64 won over Itanium. Backwards compatibility has a cost, but it also has gains. The Amiga wasn't even compatible with the C64.


That and the fact that the Amiga was a clean-room design. The IBM PC was already

Well, the PC wasn't backward compatible with CP/M, so that's an odd critique to level at the Amiga.


Sure but "pc" at the time of amiga was already pc compatibles and at backward compatible with xt and IBM PC. Besides amiga was very optimized for 2D graphics. 3D story has been less rosey. But I would seriously welcome a new amiga. That wouldn't mean a specced up amigaos running pc. That would mean a radical new mobile architecture. Or Vr machine. Some fresh air On a radical, alien architecture. Belt CPU ? Gpu/CPU à la xeon phy ? Ram only? Dunno. Something crazy.


Itanium only lost because Intel made the mistake of allowing AMD to design x86 chips.


That would mean Itanium could only have won through a monopoly, and that with open competition, backwards compatibility won.


Itanium also had an emulation mode; while not perfect, they could eventually have improved it.

Backwards compatibility didn't matter in the mobile world, or for those screaming for ARM laptops.

Sometimes we can't just have nice things.


> more complicated applications on top of these primitive systems is that they would crash the entire system, constantly

That's chicken-and-egg. Why do modern apps with actually quite simple functionality need all this vast power, the GHz and GB? Because it's there. Why does software crash? Because the OS lets it, with nothing more than inconvenience to the user.

Amigas were actually quite usable; they were stable enough for complex software to be developed and real work to be done. Same for STs.


Your whole point is extremely backwards! Software in 1985 was more "complete" than today. Even CD-ROMs shipped with magazines had to support more Windows variations than "good software" today supports browsers! Not to mention that every corporation with a software team also had usability and QA teams. Software quality and resiliency were much better than today. Then in the 90s it took a dive, because being online made "first to market wins" the rule, and it has been downhill from there.

So your software concepts from '85 would be overkill today, not lacking.


In 1985 you targeted one machine with one thread and one level of memory.

Today I'm working on systems with memory latencies from L0 cache all the way to tape storage. And it's getting worse.


Not when coding machines like the Amiga.


Agreed, and the games industry may be the clearest example of this. In 1985 developers had to actually ship a finished game; there wasn't the opportunity to release a half-finished product and update it later. Compare that to Fallout 76, which was very clearly unfinished at launch.

Edit: typo


Games in 1985 shipped with bugs baked in that people were just stuck with. Civilization had a broken Gandhi, and still shipped 4 different bugfix versions.


No game (or any large software project) is ever bug-free, but the standards were higher than they are now. Fallout 76 was literally unplayable at launch: a fairly early mainline quest was broken, so it wasn't possible to reach the endgame. Was Civ unbeatable on release? Sure, Gandhi was bugged, but that was entertaining enough that they kept the bug in the sequels.


There were plenty of bad and buggy games in the 80's; they even crashed the market [1]. We just don't remember them.

Also, modern non-indie games are orders of magnitude larger and more intricate than 80's games, and have about the same distribution of quality/bugs. They're now made by teams of 10-1000 rather than 1-10, though.

[1] https://en.wikipedia.org/wiki/Video_game_crash_of_1983


Neither Civilization nor Windows existed in 1985.


Take a guess when Windows 1.0 was released.


> "software in 1985 was more "complete" than today. Even CD roms shipped with magazines had to support more windows variations than "good software"

There was one variation of Windows in 1985. (And I remember installing it!)


Yes, but in your previous comment, you said it didn't exist. It may as well not have! Almost nobody used Windows 1.0...

Still, in 1985, even DOS apps had to target varied environments: mono, CGA, EGA, Tandy graphics, different memory configurations, 8088 or 286, printer drivers...


The anecdote is they deliberately left in nuclear Gandhi for the lulz.


> inches upward

I wish it inched upward... So much software nowadays is bloated crap.


Preach, brother/sister! My career started in the early '80s and I say this all the time in comments here on HN. You had less to work with, but less was expected of you.


I think your analysis is fair, but the Glassdoor data is a bit off. At my work, I filled out salary data on Glassdoor and it never showed up. Maybe they thought it was too high?


End-user tasks might be automated into dumb programming jobs, but companies have tried for decades to down-source their programming, and it always falls flat on its face.

Even if you have library support to hand-hold your budget coders, even if you use a lot of them, even if you give them all the time in the world, they will produce more complicated, less coherent, less stable, buggier results that are harder to modify, improve, or iterate on than those of better coders who understand the problem better.

That means that no matter how little you pay up front, you end up paying more in the long run, throwing more man-hours and money at fixing the mess that was made. A good mess is easier to maintain and improve and costs less over time. A mediocre or bad mess takes substantial effort to maintain and iterate on.

It's also probably an impossible problem in this domain to remove any coder's ability to write bad code, if for no other reason than that, in any programming environment, you can never stop someone from iterating by index over a binary-searchable tree to find a specific element, and you can't stop someone from turning a float into a string to truncate the decimal part and then reinterpreting the result as an int. But if you don't give them the tools - the integer types, the data structures, access to the bytes in some form or another - you aren't really programming. Someone else did the programming and you are just trying to compose the result. A lot of businesses, like I said, can be sated by that, but it's still not programming unless you are in a Turing-complete environment, and anyone in such an environment can footgun themselves.
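
To make those two footguns concrete, here is a minimal Python sketch (the names and numbers are made up, purely illustrative): the same lookup done by linear scan versus binary search, and the float-to-string truncation trick versus the direct cast.

    import bisect
    import math

    sorted_keys = list(range(1_000_000))  # stands in for any binary-searchable structure

    def find_slow(keys, target):
        # Footgun: O(n) linear scan over data that supports O(log n) lookup
        for i, k in enumerate(keys):
            if k == target:
                return i
        return -1

    def find_fast(keys, target):
        # What the structure actually affords: binary search via bisect
        i = bisect.bisect_left(keys, target)
        return i if i < len(keys) and keys[i] == target else -1

    def truncate_slow(x):
        # Footgun: float -> string -> int round trip to drop the decimal part
        # (also breaks on floats that print in scientific notation)
        return int(str(x).split(".")[0])

    def truncate_fast(x):
        # The direct way
        return math.trunc(x)

    assert find_slow(sorted_keys, 999_999) == find_fast(sorted_keys, 999_999)
    assert truncate_slow(12.75) == truncate_fast(12.75) == 12

Both pairs compute the same answer; nothing in the language stops you from shipping the slow or fragile version.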


> and it always falls flat on its face.

How so? What about outsourcing in general? The place where I work wanted to save money, and now we have 4x as many engineers in India. Most of them suck, and we have to shepherd and correct their mistakes, but the business views it as a success. They are creating features at a higher rate than we were before. Every other large company also has teams in 2nd-world countries because they're cheap. At some point this labor will be both cheaper and qualitatively comparable to US talent.


"any programming environment you can never stop someone from iterating by index over a binary searchable tree for a specific element in it."

SQL solved that problem a long time ago. You write the SELECT; the query optimizer figures out how, using any available indices.


I think that misses the point. SQL does not prevent a junior dev from selecting many more rows than he needs and then iterating over them naively.
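
A toy illustration with sqlite3 from the standard library (the orders table and the numbers are hypothetical): both snippets return the same rows, but only the second gives the optimizer a predicate it can serve from the index.

    import sqlite3

    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE orders (id INTEGER PRIMARY KEY, customer_id INTEGER, total REAL)")
    conn.executemany("INSERT INTO orders (customer_id, total) VALUES (?, ?)",
                     [(i % 1000, i * 0.5) for i in range(100_000)])
    conn.execute("CREATE INDEX idx_orders_customer ON orders (customer_id)")

    # The junior-dev version: drag every row into the application, filter in Python
    naive = [r for r in conn.execute("SELECT * FROM orders")
             if r[1] == 42 and r[2] > 100]

    # Pushing the predicate into SQL: the engine can now use the index
    planned = conn.execute(
        "SELECT * FROM orders WHERE customer_id = ? AND total > ?", (42, 100)
    ).fetchall()

    assert sorted(naive) == sorted(planned)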


just use Rust


I know it's a troll comment, but it really illustrates my point. Rust feels great when you know all the paradigms and complexity involved in writing Rust code, because for a seasoned veteran it does everything sensibly and right. But for your average person, trying to get into Rust is a nightmare of fighting the borrow checker over all your unintended mistakes that would, at best, be silent undefined behavior in other languages. Top it all off with how expressive Rust is relative to most C-like languages (remember, we had an entire generation who thought function objects were alien because they were taught Java), and it's fantastic if you know what you are doing and a totally lost cause when you don't.


Ironically, knowing that I can get a good salary with relatively shit skills makes me want to up my game, because it seems like an inherently unstable situation.

Eventually it has to change (imo), either through companies becoming more scrupulous in their hiring, or through a massive flood of new devs.


If you take a look at most schools, CS is now one of the most popular majors. When I graduated ~3 years ago the class was ~100 per grade. I went back to recruit and the class size was ~600 per grade. Talking to profs I know all over the country, this seems to be the theme. The supply curve is about to shift.


I think it will be a massive flood of new devs (maybe we're in it now?) -> overvalued tech stocks -> crash -> stricter hiring.

It's not that tech, even done by shitty devs, isn't valuable. It's a question of whether the market can control itself, and I'm pretty sure the answer is no.


> stricter hiring

Stricter hiring on the low-end perhaps, because in FAANG and the companies that copy them, hiring could hardly get any stricter...


You will get a massive flood of wannabe new devs. But economic theory would tell us that high salaries attract more talent, and even though the companies will have more people to choose from, they also become more selective, so the riff-raff still won't get a job. So only the good ones will get the plum roles; of course a few bad ones fall through the cracks, but overall devs are smart people.


I remember back in the early aughts, after the .com implosion and the resultant uptake of outsourcing, there was a pervasive attitude that the high-paying software jobs were not going to come back in any meaningful way. Yeah, well...

There will be a day, but when is hard to say. Thinking it's right around the corner is akin to the belief that we're on the cusp of true AI. We're more pessimistic today than we were in the mid 80s. And non-programmers were programming with HyperCard, FileMaker Pro, VBA, etc. back in the 90s.

There are of course formerly well-paying jobs, such as old-school front-end dev (HTML/CSS/a sprinkle of JS), that are largely commoditized, but that's a given considering the low barrier to entry.


Except the high salaries won't end, given the level of productivity found within the technology industry. Tech companies as they stand today are some of the most profitable companies out there, and it's this profit that enables them to compete for talent and pay people more.

The same highly productive nature per employee is found in virtually every other high-paying industry, and most of those have not seen pay for the higher end of the industry fall over time.


> these days, you just teach yourself a little JS and go for a webdev job, writing shit code and still earning more than most people in your country.

With these jobs in particular, what I see is that the definition of seniority has shifted to 'knowing the latest tech'.

So a junior dev who's just got to grips with React becomes a React Developer, and they are now relatively senior in that field. The experience isn't transferable to other parts of the software stack, though; it's too heavily tied up in the browser. So they end up as a super-specialised frontend dev.

It'll pay pretty well until the tech becomes obsolete, unless that kind of person enjoys maintaining legacy code.


I don't see that mass-produced cohort arriving any time soon. My city has 1 developer for every 3 jobs and it's getting worse.


Universities are already producing those like there's no tomorrow, and we now have bootcamps contributing some more. The only reason there isn't a visible glut of programmers is that the industry is growing even faster. But I can't imagine that growth lasting much longer.


What city is that?


Sydney. This statistic was given to me this week; I actually figure that it's 1 person for every 3 vacancies, but who knows.


So then when are doctors going to start making less? And investment bankers, too?


Doctors are already seeing commoditization as healthcare systems consolidate and adopt standardized workflows (aided by EHR implementations). And nurse practitioners and physicians assistants are doing much of the work once done exclusively by doctors, but with less training and lower pay.

It wouldn't surprise me if index funds have caused a decline in earnings for investment bankers.


Yeah, but the relatively poor performance of index funds in 2018 (no gainz) will lead to capital flowing back to ibanks and hedge funds.


I think the comparison to biology/chemistry/physics is interesting. Perhaps even more than in software, there's a huge spread between the value of low and high performers - the best scientists make new discoveries that can be worth billions.

On the other hand, if you think the software industry has a hard time figuring out (at hiring time) who the high performers are... science is driven by serendipity. Nobody can predict who will find the billion dollar discovery. Not even past performance is a reliable indicator.

So it makes sense to me that the salary spread in science is relatively even. If they could reliably figure out who to dump money on, they would. On the other hand, the FAANG companies clearly believe their hiring practices can select out the high performers... and perhaps they are right? If they're paying 3-4X what everyone else does, they expect to get at least 3-4X the value.


I've worked with good people at every company I've been at... but the nice thing about being at a top company is I never have to deal with totally incompetent or helpless people. Nothing frustrates me more than having my job responsibilities include training someone with no initiative.

The selection process seems to do a good job of keeping out the lowest tier at least, although we openly acknowledge that we miss a lot of good people as well.


> Nothing frustrates me more than having my job responsibilities include training someone with no initiative.

Years ago, I worked someplace where a colleague was tasked with working with another developer on project X. After about 15 minutes it was clear the other developer ... wasn't? A web project, and this person had been employed as a "web developer" for at least several months. Questions like "how does this information in this browser get back to the server?" came up.

Colleague goes to manager and says "I can hit the project deadline, or I can make sure other_dev learns the basics enough to be able to contribute and understand projectX, but I can't do both by the deadline. Can we move the deadline back a few weeks?"

No, and no. Train other_dev and hit deadline.

Deadline was hit, other_dev moved to another project afterwards, and was pretty much as ineffective as before, but colleague was then saddled with this reputation of being a 'bad mentor' because the next team learned other_dev didn't know how things worked. Why the hiring manager wasn't tarnished... who knows?


Sadly, reputation comes from results.

He was given 2 tasks and delivered only one result.

I know this sounds... not ideal... but it is what it is.

His manager probably has to operate under the same expectations: given 3 tasks by his manager (or director), either you finish all 3 or you're less dependable.


That just sounds like a place that doesn't have smart people. He was done a favor, because it forced him to move on to somewhere better.


Isn't the hiring manager tasked with "hire someone with basic competence"? And they failed? But their reputation/credentials don't get called in to question?


Science doesn’t reinvent itself every couple of years either: new discoveries build upon a foundation of old discoveries. Software is more like the fashion industry.


I think they expect not to have to face another Google or Facebook as a competitor, by hiring everyone at 3-4x. Then employees just rotate among the big techs. The big techs then decide where they want to compete with each other... e.g. Netflix and Amazon video/prime... so if stocks decline I'd expect a rise in bonus or base pay, or more stock.


Part of the issue is that the metric for "high performers" can be well out of line with skill, at least as measured as a product of profitability.


One way that they may be getting 3-4x the value is in the long term. Although I've only worked as a programmer, I'd expect that the impact of having one negatively valued programmer is far larger than having one negatively valued chemist or physicist. Legacy code can often be years old and far outlive the jobs of the people who wrote it.


Oddly enough, I think learning to program is easy, but only for a few people. And those are the people who are motivated to learn it as an end unto itself.

I was motivated because my older brother, and my mom, had already learned how to program, and they were quite excited about it. After getting past a few familiar conceptual hurdles, it became very easy for me to learn programming myself.

People who are only motivated by the money, or under pressure from others, have a harder time, because their curiosity and drive aren't activated. There's some sort of valve that lets the knowledge into your brain, that has to be opened.

For the most part, the people I know who seem to be motivated by money itself are not so desirous of getting rich per se (many are already rich), but are actually interested and curious about money in the way that I was curious about programming.

I don't program for a living today, but my ability to program is definitely a force multiplier for my work. It has either improved my earnings, or improved the continuity and longevity of my career.


"""I don't program for a living today, but my ability to program is definitely a force multiplier for my work. It has either improved my earnings, or improved the continuity and longevity of my career."""

May I ask what domain you work in? Can you give some examples of how you've slipped programming knowledge into other job tasks? I love to hear people's anecdotal problem/solution approaches. Was the programming side of it actually slipping in some VBA/Chrome-extension/JavaScript work, or was it more of an 'analytical' approach taken to a business decision?


My background is in math and physics. While studying those subjects in college, I learned programming on my own. Today, I develop technology for fancy measurement and control equipment. When I say I don't program for a living, I mean that it's not my job title, and my managers may actually be unaware of the role of programming in my work.

I use programming extensively as a problem solving tool, for things like data analysis, modeling, automation of experiments, and prototyping. Almost all modern equipment is electronic and computerized. To be capable of rolling out an MVP on my own, I program.

You will rarely see my computer without a Jupyter notebook on the desktop. ;-)

In addition to working in a computerized field, program code is just a super powerful way to express ideas. And the disciplines of good programming practices (yes, learn them) provide ways to organize the innards of complex things, so they actually have a fighting chance of working and being right. Plus, it's fun.

People who work as full time programmers may make more money than me, but I'm not sure that I can do their jobs. When thinking of any profession, a person should not only look at the cool, fun stuff, or the money, but what the actual daily grind looks like, because that's what you have to survive.


> I think programmers tend to underestimate the difficulties involved with becoming a good programmer because once you're good, you only see the even steeper learning curve ahead of you.

But that is no definition of a bubble. A bubble, at a very basic level, means there is a lot of capital flowing in; it has little to do with how difficult your job is.


This is... debatable. I was around in 1985 and if I had proposed using "an eventually consistent schemaless DB" to tackle a bank or a manufacturing project I would have been laughed out of the room. And not because it sounded too good to be true, mind you...


Funny thing is, that is what ACH and many other banking systems are today, or at least were back then. "Reconciliation" can take days.


> I was around in 1985 and if I had proposed using "an eventually consistent schemaless DB" to tackle a bank or a manufacturing project I would have been laughed out of the room.

Medical record systems were running on MUMPS (schemaless) and being eventually consistent (records were keyed in from paper forms) long before 1985.


Does anyone do that, at least on the software side? Obviously there are consistency issues with e.g. non-instantaneous bank teller actions, but those are human inconsistencies, not software inconsistencies.


What I mean is that some of the powerful components we can leverage today solve problems that did not exist in 1985, so having these available then would not really help.

I mean: what would you use Node.js for - in 1985 - even assuming you had access to a system to develop and test stuff made with it?


> I mean: what would you use Node.js for - in 1985

In 1985, your hypothetical bank would be running VMS, which had asynchronous IO system calls as the default. There was no need to "invent" something like Node.js.


I agree, but the real bubble that has popped is that non-techies have become more selective in judging the skills of programmers. Gone are the days where being able to stand up an FTP server and display a mysql query result in a web page means you can write your own ticket into any tech job in the world.


Most programmers are not that well compensated. I never made more than 120k a year tops, and I'm almost 50 years old. Not everyone is young and pretty and attractive to the Googles. The number of employers who compensate like this is tiny; you can count them on one or two hands. 98% of programmers never make more than 100k. With nearly 30 years of experience I build things that would bill out at $160,000 a month for an outside team to produce, but I live in an 80-square-foot roach-infested apartment and can't afford a car.


If your work is that valuable then perhaps you're selling yourself short? I'm in my 40s now and have definitely encountered age discrimination, but I found remote work once I updated my skills. I hope you get out of that dump!


Do you live in a really high cost-of-living area? Or are you exaggerating?


Absolutely right, and something we see in data science too. Coding is a domain where we work cross-architecture. I can't be just a useful Python programmer - I can only be a useful Python programmer in marketing automation or some business associated with it.

The fat pay package keeps programmers from realizing this.


As a data scientist who considers himself moderately good with the pydata library set, I realized the other day that if you took those 5 libraries away from me, I'm not sure I'd have that much to offer in terms of my programming abilities. I don't necessarily feel bad about that, although it did give me pause.


I've known people who built lucrative careers on expertise in specialised technologies, and others whose entire careers disappeared because they were too specialised and the technology got deprecated. You might be fine, but I'd definitely recommend diversifying your skill set a bit. Being a generalist has served me well.


I would say every capable programmer should be a "generalist" in the sense of being able to transfer their skills with relatively little effort. If they can't, they are not really that good at the abstract concepts. Still, this doesn't prevent one from focusing on and specializing in one field/language that they love and being as familiar with it as possible.


The libraries have grown nicely over the years, though I am surprised they still lack some basic stuff.

Why in 2019 is there no good way to visualize a decision tree? Having to install, configure, and get graphviz working feels very hokey.

I prefer Python, but R still does some things really well where the Python libraries are just not up to par:

1) anything geospatial, including drawing maps. Here is a list of projects my students did, to give examples: https://pennmusa.github.io/MUSA_801.io/

2) time series

3) linear models - is it so hard to give me a good summary?

If anyone knows of any packages that do these >= R, I'd love to see them :)


> why in 2019 is there no good way to visualize a decision tree?

You should try https://github.com/parrt/dtreeviz, from Terence Parr and Prince Grover, released in Sept 2018.

There is a good background article on the problem space and their design iterations here: https://explained.ai/decision-tree-viz/index.html


Thanks for the info, but as I said above, anything that relies on installing graphviz is extremely hokey, and as you can see from many posts on SO, it often doesn't work.
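
For what it's worth, if the graphviz dependency is the sticking point: newer scikit-learn releases (0.21 and later, if I remember right) added sklearn.tree.plot_tree, which renders with matplotlib alone. A minimal sketch:

    import matplotlib.pyplot as plt
    from sklearn.datasets import load_iris
    from sklearn.tree import DecisionTreeClassifier, plot_tree

    iris = load_iris()
    clf = DecisionTreeClassifier(max_depth=3).fit(iris.data, iris.target)

    # Draws the tree with matplotlib only -- no graphviz binary or bindings
    plt.figure(figsize=(12, 6))
    plot_tree(clf, feature_names=iris.feature_names,
              class_names=list(iris.target_names), filled=True)
    plt.show()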


I came to the pydata libraries after extended time in Matlab/Octave. Learning there first was terrible for programming basics (e.g. polymorphism) but excellent for ensuring I knew what a given algorithm does/should do.

I highly recommend spending some time in C/C++, Go, or Julia to pydata-first folks that ask.


As a data scientist myself, I learned to write code that used arrays and matrices from the most basic library - from cleaning to analysis to machine learning (I suppose numpy was our ML library). Is this the most correct manner in which to write code? I don't know, but learning to do DS without the specialized Python libraries has improved my coding ability. Aside from being able to read in different data formats, I believe I could do a fair portion of my work in C.
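
In that spirit, a surprising amount of day-to-day DS work reduces to plain array math. A minimal sketch (synthetic data, made-up coefficients) of ordinary least squares with nothing but numpy:

    import numpy as np

    np.random.seed(0)

    # Synthetic data: y = 3*x1 - 2*x2 + 1 + noise
    X = np.random.normal(size=(500, 2))
    y = 3 * X[:, 0] - 2 * X[:, 1] + 1 + np.random.normal(scale=0.1, size=500)

    # Append an intercept column and solve the least-squares problem directly
    A = np.column_stack([X, np.ones(len(X))])
    coef, residuals, rank, sv = np.linalg.lstsq(A, y, rcond=None)

    print(coef)  # approximately [ 3. -2.  1.]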


What is the pydata library set? numpy, pandas, matplotlib, sklearn and the like? My google fu is failing me...


Yeah pretty much.


Data scientist is a term similar to Web Master; I don't think it will be a thing in 10 years.


Someone has to make the tools that automate the marketing, analyze the business data, or whatever the task at hand is.


It is like exercise... you are shaping your brain to the problem. The US military actually tests for this with what's called the Army GT score of the ASVAB test.

When I took the ASVAB in high school I scored a 107. That score is too low to become a warrant officer, so I had to retake it a couple of years ago for my officer packet, and I scored a 129 out of a maximum 130. That puts me in the top 0.1% of testers. I am not smarter or more intelligent than I was in high school; I do write software, though. Every couple of years I look back on my software and algorithms and realize how I continue to improve and see the solutions more clearly.

https://en.wikipedia.org/wiki/Armed_Services_Vocational_Apti...


This

The best teacher has been helping colleagues. I have been programming a lot better (fewer errors, sometimes even without running the application when I'm pretty sure), because I think and analyze more upfront than I used to.

Some things come back, but it's rarely related to me (e.g. last-moment spec changes).

I do have to watch out: I notice that basing my code on someone else's is OK, but they always have faulty code in hard-to-test areas. So making testing easier for things that are hard to test is my next motto.

Also, helping others is a pretty huge timesink :(

Ps. Being in the zone does wonders lately

PS2. There was another thread about VideoLAN yesterday, and nobody had heard the entire story about HTTPS. I gave the VLC developers the benefit of the doubt, not knowing everything about their infrastructure. A lot of the comments here on HN disagreed with me (some silently upvoted, though).

Today I saw a blog post about why..

It was infrastructure based...

I can't understand why I was practically the only one with another view on the subject in this community, where developers come together.

FYI:

Comments are in my history. Mostly on the videolan topic. It's all recent


I would say that it's the same for the other trades as well, though. For example, people might imagine that learning a new human language is trivial given the proliferation of misleading advertisements from the likes of Duolingo or Babbel, but to really learn a language the effort needed is tremendous, and it requires constant, repeated practice over a long span of time.

The question the author posed was why programmers are paid that much even when some other paths seem "harder", which seems valid. Sure, not all careers are supposed to be "harder" than programming, but they're not as easy as one would imagine either.

Though yeah, at least for now I don't see the situation abating much. Demand is still going strong. Once the proverbial "flood" of new grads hits the market, things might get worse. But still, if you know what you're doing - you know all the right concepts and skills - you should be able to stay on top of the game. There has always been a saying that the irony of the CS degree is that many people who graduate with the degree can't program, while many who can program didn't need to do a degree at all. I doubt the influx of students trying to study CS will change this situation much. Coding bootcamps have been around for a decade, yet they don't seem to have changed the market equilibrium that much.


> they program to accomplish other tasks in their domain of expertise

My kids have mentioned that they might be interested in a degree in computer science, and I've encouraged them to combine that with a second area of specialization. Programmers are everywhere, but a programmer who also knows chemistry or biology or economics or art history or just about anything stands out.


>Programmers are everywhere, but a programmer who also knows chemistry or biology or economics or art history or just about anything stands out.

Really? I have a physics degree with some experience in rocket science, but my most valuable skillset (measured by how much pay I can fetch for it) is plain old software engineering. I don't think I'd be able to leverage my area of specialization to exceed or even match what I can get from FB/LI/G as a generic software engineer.


You don't think your broader education and experience makes you a better software developer (that encompasses programming, writing, and working with other people)?


That's a good point. Not going through the standard CS track and straight into software engineering has probably given me broader experience and skills (both soft and hard) that I wouldn't have developed as well otherwise.


Maybe. I studied CS and bioinformatics. You end up competing with both pure CS folks and also the bio folks that are bioinformaticians. Still I generally agree that some domain expertise is helpful.


If anything, good programmers are underpaid. If you are making your company a mil, you should be getting half.


But what if another programmer is willing to make me a million for only 25%?

That's how markets work. You get paid what the market will bear, not what you "should" make.


Except if the other side of the market teams up. https://en.wikipedia.org/wiki/High-Tech_Employee_Antitrust_L...


That is hard, because you have to convince many other people to agree. It's easier for me to just undercut other people by taking less pay. I can, for example, go remote and live in a low cost-of-living area.


On the side of the employees, unions have proven themselves a good means of improving the situation for workers. Here in Germany they have definitely helped in many industries.

My comment was more about the companies, though, which may form cartels to drive down employee wages. Companies forming cartels is illegal, while unions are legal in many places.


If unions are the best way to improve compensation, why do un-unionized FAANG employees get paid so much more than unionized developers in Europe?


Because they're in Europe.


You're replying in a thread about how those very same FAANG companies colluded to keep engineer compensation artificially low.


Yeah, but even at that point FAANG compensation (or, well, at that point Apple, MS, and Google - Netflix didn't exist and Facebook broke the cartel) was significantly more than what people in Europe were making.


And with anti-collusion labor protections, engineers stood to make even more.

In the US, union workers make between 10% and 30% more than their non-union peers[1].

You're comparing pay across two different economies and only looking at unionization as a variable. It's like wondering why engineer rates in Omaha, Nebraska aren't on par with those in New York, and concluding that it has something to do with differing fire codes.

[1] https://www.bls.gov/opub/mlr/2013/04/art2full.pdf


> And with anti-collusion labor protections, engineers stood to make even more.

Correct, but these protections exist (and existed at the time) independent from a union.

> In the US, union workers make between 10% and 30% more than their non-union peers

There are very few high-skill jobs that are commonly unionized. In a market where supply is greater than demand, then yes, unions have absolutely been shown to improve worker outcomes[1]. I'm not aware of any evidence for markets where demand outstrips supply (like that for skilled software engineers). It's not immediately clear that union protections would be beneficial.

>You're comparing pay across two different economies and only looking at unionization as a variable.

No, I'm simply pointing out that your flippant response to esoterica doesn't actually address the question. If unions are better for workers, why is it that a non-union area (with a cartel depressing wages, no less!) was still substantially better for workers than a unionized area with no such issue?

Saying "oh the market is different" ignores the question of why the market is different.

[1]: Indeed, that's kind of exactly what happened with this cartel. Facebook wanted to hire skilled engineers, and was willing to pay more, so broke the cartel. That kind of thing won't happen when workers are generally equivalent, but SWEs aren't.


> There are very few high-skill jobs that are commonly unionized.

Sure there are. Doctors and actors, to name just a couple. In both cases the "union" actively works to create barriers to entry.

The AMA colludes with medical schools to set artificially-low student body quotas. If you've ever wondered why teaching "XYZ for pre-meds" is such a miserable experience, this is why. You have to earn straight A's to get into med school because there are so many more qualified candidates than openings (but it's not clear to me how, say, art history or algebra-based physics makes you a better doctor).

SAG (the Screen Actors Guild) requires actors to have already performed in a SAG production as a condition of membership. And they also strictly limit the number of non-SAG performers on SAG productions. That chicken-and-egg problem is very intentional.

If you've ever taken a macroeconomics course, you know what effect these actions have on prices.

> I'm not aware of any evidence for markets where demand outstrips supply (like that for skilled software engineers). It's not immediately clear that union protections would be beneficial.

See above. Unions can create a market where demand outstrips supply.

> If unions are better for workers, why is it that a non-union area (with a cartel depressing wages, no less!) was still substantially better for workers than a unionized area with no such issue? Saying "oh the market is different" ignores the question of why the market is different.

So tell me why professional associations exist, then. Why do doctors form a union to increase wages, if as you say, they would be better off without it?


> Sure there are. Doctors and actors, to name just a couple. In both cases the "union" actively works to create barriers to entry.

Neither SAG nor the AMA is a union in the traditional sense. In many ways, the AMA actively works against worker quality of life (consider the horrible conditions for med students/residents and the high suicide rate among MDs) to artificially reduce supply.

>Why do doctors form a union to increase wages, if as you say, they would be better off without it?

The AMA is mainly a lobbying organization, not a union. Since a significant percentage of doctors are in private or small practices, they don't have representation with the government. So sure, the AMA does collectively bargain with the US Government. But by that same token, since 53% of MDs are self-employed, the AMA can't do "normal" union things like set wages, because there's no one to bargain with except the doctors themselves.

And interestingly, the AMA actually admitted that its intentional supply-reduction is hurting the medical industry as a whole. To answer your question, "because they thought it would be better". But in hindsight, they probably weren't.


I've said that they are a good means, not the best means. And I guess the reason for the pay gap is the higher profit margins of the FAANG companies, as well as the alternative in SV that you can found a startup and make much, much more if you're good (and lucky).


Different culture, and engineers have a lower social status in Europe in general.


Unions are basically legalized price-fixing. What happens is that the union negotiates a "fair" price, and then all companies decide to pay no more than said "fair" price. See for example (the original is in Swedish):

https://translate.google.com/translate?sl=auto&tl=en&u=https...


That just seems to be reporting the wage levels in Sweden, which is one thing unions do - price discovery for workers - by running surveys.

I suspect you're posting in bad faith here; in the USA, SAG minimum rates don't affect the higher rates that successful actors get.


The problem is that the numbers published by unions in Sweden are taken as law by employers. You don't really know what unions are like if you haven't heard your employer say "We can't give you a bigger raise due to our collective agreement". And since basically all other employers follow the same guidelines, you can't get competing offers for significantly more. There is a reason salaries are very flat in Sweden.

Another way to see it: collective bargaining goes both ways, i.e. both workers and employers come to a joint agreement. So if we created a FAANG engineers' union and a joint pay scale for them, that would basically be equivalent to the non-poaching agreement often derided in discussions like this.


Well, they probably shouldn't make it public :-)

Not all union models have sector bargaining, and it certainly doesn't work for professional unions - and I am not saying that European unions really get the needs of M&P (managerial and professional) members; they need to change.


As I said, good programmers are underpaid. They should figure out how much they are making their companies and ask for more. The market can often afford to pay more, if you just negotiate better. You can also unionize to get your employers to pay you closer to what you are worth to them rather than what they are worth to you.


> just negotiate better

In every other aspect of computing, the industry has finally embraced usability as a desirable goal, and not just for end-users.

On my first computer, you had to read a 100-page user manual and learn exactly what commands to type. In my first programming language, you had to manually allocate (and worse, deallocate) memory. With my first database, we used to have to go type VACUUM regularly. None of these is true today.

Yet even though some of the highest-paid people in the world are members of unions and have agents to do their negotiating, programmers seem to have latched onto this idea that if you're not making top dollar or don't have your ideal working conditions, you should "just negotiate better".

Why stop there? Tell programmers they should "just program better", too.

> You can also unionize

Have you ever organized? I don't think you realize how difficult this is, especially without strong support from an existing union. There's a reason unions heap rewards on people who do it.

Existing unions also have great labor lawyers. A common response to even thinking about unionization is getting fired. (That was in the news recently because it happened 4 weeks ago here in Seattle.) Labor laws aren't what they once were, and there's usually no consequence to the company for firing organizers.


> On my first computer, you had to read a 100-page user manual and learn exactly what commands to type.

Flipside: I can still write software for my first computer without looking anything up, over 30 years after reading those 100 pages. I still know the memory layout, opcodes, assembly, etc. by heart, and it is still the best way to program that particular computer (which still works, in my man cave) today. Yes, today it is all simpler, but I count the 100-page manual as a plus, not a negative. Maybe you were referring to something else, but my 100+ page manual covered usage and programming at the same time (using the system beyond the basics was programming), as that was the only way to use it.


People who make enough to have agents negotiating their salary (famous actors, professional athletes, other celebrity types) are usually looking at an order of magnitude higher compensation than even the best software developers get. At the lower end of the spectrum (lesser-known actors, musicians, etc.), agents are known for enriching themselves as much as for helping their clients. They are just sort of accepted parasites on the way compensation is handled.


Suppose I’m working on pulling out some functionality from a large, monolithic application. How much am I making my company?


Depends on what the outcome is. If it makes the site 50% more performant on 25% less hardware, it's pretty easy to swag it. Same if the outcome lets developers on the team ship new functionality 20% faster with 33% fewer bugs.


This seems awfully contrived.

Issue 1: It's very difficult to tell whether your contribution produced the 50% improvement in performance, because there were 10 other devs pushing in features and bug fixes. This is the attribution problem.

Issue 2: This happens over time. It's very unlikely that your 50% improvement happens every year or month, because, think for yourself, this compounds at large rates and grows quickly: a 1.5x improvement over 6 cycles (months or years) is more than 11x (1.5^6 ≈ 11.4). This is essentially the time problem.

Issue 3: Even if you delivered the results you did, in a large company there's a large bureaucracy, and no one person has the ability to increase your salary by that much. This is the control problem.


The problem with this argument is that programmers don’t work alone in a vacuum. How do you account for the support staff? The recruiter that hired you? The cleaning lady? The DevOps people? And so on.

It's actually fairly non-trivial to say with even a modicum of certainty how much value a given developer brings to their company.


This is precisely my point! Thank you for getting it and explaining it.

I currently write software used by millions of people. Partly because I’m a backend engineer, I have no real idea how much more the company is making due to my direct efforts. Since they keep paying me, I’m assuming it’s a decent multiple of my carrying cost, but I have no way to measure it.


It's how markets work when wealth is distributed incredibly unevenly, and a weak social safety net makes it intimidating to work for yourself instead.

In other words, markets work that way because that's how the bosses and capitalists want it to work, and have so far been successful at thwarting attempts to use the government to change things.


The simplest and most accurate statement is that it's an emergent result of the principles of capitalism combined with human nature, not a plot to keep us down.


It's both.


I'm all for programmers getting paid more. However, by your logic, if the company is losing money, should programmers contribute from their own pockets to keep the company afloat?

Starting your own company (not working as a self-employed contractor) gives a really good perspective on what it means to be an owner versus an employee.


How is this his logic at all? The logic is more like a salesperson's. When a company starts losing money, it doesn't try to claw back commissions from its top sales agents to keep afloat. It might lower percentages, do layoffs, or something else, but money paid is money gone.

The logic is that programmers tend to produce far more value than they capture -- so that value gets captured elsewhere, a lot of it typically by management. Except the value can be hard to quantify when the company and its software are old: how much of the value comes from employee #3701 fixing a bug that breaks the product for one customer in one instance, versus employees #107, #85, and #150, who in a past team's life created the original version of that system and made the new customer consider using it in the first place? There's no point in moaning about how much you "should" get paid. Just ask for more if you feel underpaid, but be aware that because of competition, and because people usually want to hear what more you'll do to justify it, you're not always going to get it.


No, they should quit and find another company.


You can't keep a significant portion of the upside without taking on more risk.


> Privatizing profits and socializing losses refers to the practice of treating firms' earnings as the rightful property of their shareholders, while treating losses as a responsibility that society as a whole must shoulder, for example through taxpayer-funded subsidies or bailouts.


which software businesses have been bailed out by the government?


Not really related to what we're talking about here though?


And in tough times are you taking a drastic pay cut, or jumping ship? Sounds like you want a huge share of the good and none of the bad. Also not sure why you value programming so far above all of the other activities it takes to make a successful product.


Define "making". Does the program/service sell itself? If so, your argument may stand. Nine times out of 10, though, it doesn't. There are other people involved in making the sale, and they also require a piece of the pie.


The usual ratio for top-flight talent in the finance industry is 10:1. A trader makes JPMorganStanleyofAmerica $5m, they get $500k in compensation. That seems reasonable because there’s a direct line between talent and profit. Even so, someone had to build the business, capitalise it, create the business opportunities and relationships, train and support the trader, and assume all of the risk.

How much of the business risk of the enterprise is your top-flight programmer assuming? Are her decisions the only ones that make any difference to the increase in profitability resulting from her work? How direct is the line between a back-room engineer, no matter how good, and profit?

The only case where 50% or near it makes sense is for a founder owner who is also the lead talent. Maybe. Because then they are also creating the business opportunity and assuming a big chunk of the risk.


If you solo-develop an app that makes $1 million (usually not the case), then build it yourself if you want that large a cut...


Sales people don't get anywhere near that kind of commission.

Most SWEs I know are not making more than they bring in profit.


Good ones do - the Oracle rep who sold a company-wide licence to a large company I used to work for allegedly retired on that single deal.


It's a team effort. I stopped trying to get software projects off the ground by myself.


Sarcasm works really well in a text-only format!


> Can this last forever? Of course not, nothing lasts forever. But wondering why the wealthiest corporations in the world pay their workers high salaries is perhaps like wondering why water is wet. Software has a low marginal cost, and the rest is basic incentives for the corporations.

Agree


This has been contemplated for many years, and the price has gone almost nowhere but up. In the early-to-mid 2000s it was thought that most work would go to much cheaper India and put highly compensated on-shore workers out of jobs. It was tried to some extent, and it failed. Then it was thought that lots of people would fill the demand by going to university for CS and related fields and saturate the market. That didn't happen either. It has long been known that CS is one of the most valuable majors in terms of salary, and still the percentage of bachelor's degrees awarded in it has stayed between 2-4%. That's not the only way to get into the field, but it is the most traditional way and indicative of the supply coming in.

Consider that some of the most valuable companies in the world did not exist 25 years ago, and they are now making real, non-bubble money with software developers as their biggest asset. This isn't the dot-com boom. Attrition rates in CS programs are very high, and even then not everyone who receives a formal education ends up being a decent developer. Factory workers made good wages in the mid-20th century because companies made lots of money in an industrial boom. We now have a technological boom, and unlike factory work the barrier to entry is much higher. So I don't see why it can't continue. Sure, the $300-400k salaries are high, but they are at top firms that are selective and competing for a finite pool of talented workers in very expensive areas.


CS has been through a couple of salary busts in the US previously though, no? One in the '80s when all the then-new CS programs started to release graduates into the market, then another after the dot-com bubble; or have I been misinformed?


The dot com bubble caused a salary bust because so many companies that were employing lots of people went out of business. This put a glut of programmers on the market with nowhere to work, which predictably pushed salary levels down.

This is nothing like the current environment, where there is a consistent demand for programming talent which outstrips the supply, and the industry as a whole is far more stable.

Of course, a global economic slowdown could put a damper on salary growth, but it will not be a salary "bust" like what happened after the dot com bubble.


I'm certainly not claiming that the future will necessarily resemble the past. But GP was making an argument from the past, so in that context we should at least try to get the past right as much as possible. (And there was, I am told (http://philip.greenspun.com/research/tr1408/lessons-learned.... ), the '80s slump, where the drop in salaries was driven by increased supply rather than a collapse in demand.)


If the current tech bubble bursts, it’s going to absolutely wreck amazon, fb and google—the three companies that actually gain most of the benefit from all these tech companies blowing their funding on ads and AWS spend.

Those are the companies driving up salaries — once that rollercoaster ride stops, I imagine there will be mass layoffs and salary readjustments for years afterwards.


What is the current tech bubble, exactly? Those companies you mentioned have absolutely massive revenues and profits, and plenty of non-tech people and companies are responsible for those revenues and profits. This was not the case in the earlier bubble (lots of overvalued companies with little or no revenues).

What seems more likely is that the little startups will get crushed as VC funding dries up.


I think that's true, but I also suspect that one of the main drivers for the incredible acceleration of big company salaries has been to keep talent at that company rather than seeing the best engineers leave to go take VC money and play the startup lottery. If VC funding really does dry up, the pressure to keep salaries this high may be considerably lower.


amazon, google and facebook are the companies selling shovels during a gold rush, which is a great business -- while the gold rush is still going on.


I graduated from a no-name college shortly after the dotcom bubble burst and it was still easy for me to find a job paying $60k/year, which was plenty to live off as a single person in the SF bay area at the time.


Okay, but that doesn't seem to contradict the conventional wisdom that the hiring market was significantly better (from the developer's POV) a year or two earlier.


I wouldn't label the dot-com bubble as a CS bubble.


How many software engineers are truly making over $300k as rank-and-file? Let's be generous and say there are about a dozen or so companies that routinely pay engineers that well. Each FAANG will have, on average, 10k engineers? So that's around 120,000 out of an estimated ~20 million developers (from a quick google search). That's around 0.6% earning pay in a "bubble" situation; the rest are senior folks or executives who would command high pay anywhere else. So if you're a trained software engineer, you have less than a 1% chance of being in the situation described.

A quick google search finds that law school students have around a 14% chance of making BigLaw (the legal equivalent). The odds of getting into a medical school are between 2% and 5% on average. So no, I don't think we're in a bubble; the majority in the situation described would simply exist as the elite compensation class elsewhere as well, but maybe with even better odds.
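
Making the estimate above explicit (same rough inputs as the comment; both are guesses):

    top_paying_companies = 12          # "a dozen or so"
    engineers_per_company = 10_000     # assumed average per FAANG-ish company
    developers_worldwide = 20_000_000  # from the quick google search above

    share = top_paying_companies * engineers_per_company / developers_worldwide
    print(f"{share:.1%}")              # 0.6%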


Let's put it another way: we are looking at the top 1% of software engineers. Is it surprising that they have incomes in the 1% range?

For comparison, at Amazon, Senior and above engineers account for ~20% of the total, and those are the ones regularly pulling in $300k+. So only the top 20% of one of the top companies are getting such compensation.

And to follow the article, this won't last forever. Whenever the next stock market crash comes, almost half of that compensation (the equity-based portion) will all but vanish. But maybe in the next bull market we will see a similar situation (remember people in the 90s making $250k?).


> Whenever the next stock market crash comes, almost half of that compensation (the equity based one) will almost vanish.

You're saying the value of (for example) Google stock will plummet to 10% or less and then not recover at all over the following few years?


Amazon went down over 90% from peak to valley after the .com bubble. It then took 7 years to recover to peak valuation.

I don't think a 90% drop is common, but I also don't think it's outrageous.


Wouldn't be the first time (see AOL, Yahoo, Pets.com, etc).


> we are looking at the top 1% of software engineers

Working for a big company that pays well does not mean you're at the top 1% of software engineers. It means you're willing to do what it takes to secure that job and maintain it, including moving somewhere many don't want to live.


It means you're in the top 1% of software engineers in the only metric that matters.


Not really; I'd rather be paid less but work remotely and have more freedom over my time and where I work.


To be explicit, you're saying the only metric that matters (when we're ranking software engineers) is how well-paid a software engineer is?


When we're discussing income, that seems reasonable.


people working on new systems and tech are generally better programmers than people hacking away at enterprise spaghetti at a big corp. big corp jobs like that pay better for less work though.


To be fair - that's not the point he was making. As we were discussing income - that's the only metric that matters in the discussion at hand.


"willing to do what it takes to secure that job and maintain it"

Some (most?) of that is innate ability and intelligence. Sure, there are well-paying jobs that are unpleasant. But for many others the company literally gets to choose 1 out of 100 candidates.


No one has "innate ability" for programming. It's a human construct you have to learn.

Getting a job at these companies largely requires studying up on CS101 trivia and CS basics to pass stupid tests. It's practicing how to pass their interview process. It's not "innate ability and intelligence."


Less. Closer to 10%


> Each FAANG will have on average, 10k engineers?

That's... really low. FB has around that many, Google has like 50k, and from what I've heard Amazon has about the same number or more.

Considering that everyone I know got >= $100k base out of undergrad regardless of location I don't think it's as uncommon as you would think anymore.


Well, Google has offices around the world. In their Warsaw office, for example, the normal salary is definitely way below $100k.


What makes you think that?


>= $100K outside of a major city for a green undergrad doesn't seem typical. Do you hang with grads from big name schools?


Agree, in the Midwest USA for example even very experienced developers are barely into six figures.

Of course cost of living is a fraction of what it is in SV. And quality of life is (subjectively) better.


No, but I know a lot of those too.


Nobody I know did, I suspect most people outside of the USA would say the same.


AMZN Vancouver pays fresh-grad $95k base + sign-on (30-50k) + RSUs (20-30 units)


> Google has like 50k

Does this include the now >50% TVC "non-headcount" as recently reported?


No, they have about 100,000 full-time employees, of which about 50% are engineers. They have another ~100,000 TVCs.


TVCs at Google are not writing software. They are making sandwiches.


Personal, professional, and direct experience informs me otherwise.


You should probably narrow that since you're almost exclusively talking about the US market. There are a few pockets outside the US where you can earn that, it's just quite rare comparatively.

The US has anywhere from one to four million software developers, depending on your source. The BLS lists 1.2 million US software developers, with a $103,560 median pay (excludes benefits) in 2017.

You have closer to a 5% to 10% chance of earning $300,000 in total compensation as a software developer in the US, at some point in your career. Frequently high incomes don't last, there's a relatively high turnover because peak earning power only lasts so long, layoffs happen, specialization changes, job changes, et al.

The giant caveat to this, as everyone here knows, is you have a <1% chance of earning that outside of a small group of markets (ie it's very much not evenly distributed; if you're in New Orleans or El Paso you have almost a zero shot at it; if you're in SF or NY you have a legitimate shot at it).


I live in New Orleans (grad school). I have plenty of friends from college who work at FAANGs. The local market here consists largely of DXC, CGI and IBM paying “data scientists” $90k and junior devs $40k and laughing all the way to the bank.


FAANG companies have engineers staying just 2 years because they made enough money. Your statistics don't account for time.

Sure, it's not "easy" to get into those companies, but it isn't an outlier to get into them either.

The simple reality is this:

If you are an engineer at a publicly traded tech company, it is customary to get RSUs and Refresher RSUs. These have compounding effects as their vesting schedules start occurring in parallel. By the end of your second year you will have two series of shares unlocking, and this is in conjunction with your salary increases and bonuses.
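
A minimal sketch of that overlap, assuming a hypothetical $200k initial grant vesting evenly over 4 years plus a $100k refresher granted each subsequent year on the same schedule (all numbers invented for illustration):

    initial_grant, refresher, years = 200_000, 100_000, 4

    vesting = [0.0] * 8                     # dollars vesting in years 1-8
    for y in range(years):                  # initial grant vests in years 1-4
        vesting[y] += initial_grant / years
    for start in range(1, 5):               # refreshers granted in years 2-5
        for y in range(start, start + years):
            vesting[y] += refresher / years

    for year, amount in enumerate(vesting, 1):
        print(f"year {year}: ${amount:,.0f} vesting")
    # year 2 (~$75k) onward beats year 1 (~$50k) as the schedules stack up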

You should expect and negotiate your RSU grants to be proportional to your salary. Competing offers from other publicly traded tech companies ensures this.

If the share price has also increased, which is the only thing that happened over the last decade, this is enough for a lot of people to quit.

The article did not talk about share price increasing.


People have enough money after two years!?!?!


Yes, a lot of people are content with their earnings from 2 years at a FAANG. They aren't thinking it's enough to live off of forever, but it's enough for:

A house somewhere else and never paying rent again

or

A downpayment on a home in a high priced area where demand doesn't seem to stop, which is always a low-interest leveraged investment that has cashflow opportunities

or

"Taking a break" which means years of luxury at every music festival and socialite event while optionally networking and pursuing other fulfilling money-making activities, with almost no opportunity cost of having 'gaps' on your resume

or

Riding off the resume entry of being an "ex-Googler" to secure advisory roles, raise capital easier, get C-level titles at someone else's startup

or

Joining another FAANG company at a higher premium

and/or

Leaving the company so you can write covered calls on all those shares you earned, for passive income


If you're compensated $300-400k annually, you'll have enough to move somewhere else or take risks if you play your cards right (huge savings, etc.).

Or the other option is to jump to a different company that offers more money, more equity, or a higher level.


There is a salary bubble in the Bay Area, but only because of the real estate bubble that has been artificially created there. It will eventually burst as more companies seek to hire in cheaper areas.


Top companies pay similar salaries in Seattle and New York. New York real estate isn't as bubble-like as SV, and Seattle's is possibly bubble-like but still about half the price as SV.


There is nothing artificial about the real estate "bubble" in the Bay Area. Prices are set by supply and demand, both in salaries and real estate.

Real estate prices are high, because the Bay Area is a very desirable place to live, there are people who can and will pay the high price, and the supply of housing is low.

Salaries are high because big companies are fighting for talent (demand), and the supply of talent is low.


The thing that makes it a bubble is that the supply is artificially constrained by local zoning laws. Eventually people will be fed up with the situation and they'll vote for a government that'll allow more development, at which point housing prices will plummet.


This is one of those bits of common wisdom that falls into the "it's more complicated than that" territory. Zoning laws are a contributing factor, but they are far from the only factor -- and it's a stretch to say that they're the deciding factor. Zoning is an issue. Rent control is an issue. Basic supply and demand is a huge issue that arguably outstrips regulatory roadblocks. Regional geography is also much more of an issue than people sometimes think.

Housing prices here are likely to stabilize, but they're not going to start seriously declining unless the job market starts declining, too.


All those other reasons you mentioned are contingent on zoning. Allow highrise apartments anywhere in the Bay Area and just watch how fast the buildings go up and the rents come down.


> Eventually people will be fed up with the situation and they'll vote for a government that'll allow more development, at which point housing prices will plummet.

Clearly you don't live in CA. Homeowners vote more than renters, and they will never intentionally vote for policies that will put them underwater on their own mortgages.


It is artificial in the sense that there really isn't a good reason why the supply shouldn't be going up.


You missed last week - available housing stock has started to grow.


Yes, it is finally leveling off after years of extreme increases in prices. Demand has cooled as mortgage interest rates have crept up.


> Let's be generous and say there's about a dozen or so companies that routinely pay engineers that well

You aren't being generous. The number of companies paying people that well is in the dozens, maybe even 100.

Every single FAANG company, every single unicorn, all the hedge funds, and a few successful non-unicorn companies based in SF and the Bay Area.


I think Simon Peyton Jones has the right idea (paraphrasing) [0]: programming is one of the most difficult, complex engineering efforts undertaken by humans. At the scale we do it, one can't help but be amazed that it works at all, let alone that it works so well.

I think the reason compensation gets that high is that it takes 5-10 years to become good enough to lead a team that manages something so complex, and to do it well enough that the results are reliable and consistent. I think the difference from doctors and lawyers is that we're not licensed to practice; we're not a capital-P Profession. We still have to attend conferences and stay relevant, but the expense and the requirements to do so fall on us or the companies we work for: there's no professional obligation.

I don't think we're in a programming bubble if the author means we're in a compensation bubble and that programming is over-valued.

I think the real bubble is complexity. We're seeing a deluge of security breaches, software running robots in the public sphere built on unregulated and very lean practices, and a lot of what we do is harming the public... though by harm I don't necessarily mean only harm to human life, but harm to property, insurance, people's identities, politics, etc... and we're not accountable yet.

If anything I think we need to up our game as an industry and reach out for new tools and training that will tame some of the complexity I'm talking about... and in order to do that I expect compensation to remain the same or continue to spread further out and become the norm. Being able to synthesize a security authorization protocol from a proof is no simple feat... but it will become quite useful I suspect.

[0] https://www.youtube.com/watch?v=PTSE779n0nI


As someone who hires programmers, no. It is difficult to hire good people. The only way I can successfully hire low-cost programmers is by building a system/machine for creating and releasing software that does not require a high level of intelligence/creativity. The only people who can make high margins in programming-heavy industries are those who can do this or who have some sort of defensible moat.

As a programmer myself, yes. There is no way I can continue to make this much. I'm a dumbass. (I say this having worked on, in the last year, a compiler, trading algorithms, and 3D object analysis)


I am still basically an entry-level developer even though I've been doing it for 30 years (20 for money).

The way I see this playing out is something like behavior-driven development (BDD): the business folks describe the functionality they desire, and programmers write up the backend logic. Then, as AI progresses toward AGI, a higher and higher percentage of that backend code will be generated by machine learning.
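
For context, the BDD idea looks roughly like this: the business side writes a given/when/then spec, and programmers wire each step to backend code. A hand-rolled Python illustration (no particular framework; all names invented):

    # Spec the business folks might write:
    #   Given a customer with $100 of store credit
    #   When they order a $30 item
    #   Then their remaining credit is $70

    def given_customer_with_credit(amount):
        return {"credit": amount}

    def when_they_order_an_item(customer, price):
        customer["credit"] -= price          # the "backend logic" developers supply
        return customer

    def then_remaining_credit_is(customer, expected):
        assert customer["credit"] == expected

    c = given_customer_with_credit(100)
    c = when_they_order_an_item(c, 30)
    then_remaining_credit_is(c, 70)          # passes: 100 - 30 == 70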

So over the next 10 years, I expect to see more specialization, probably whole careers revolving around managing containers like Docker. There will be cookie cutter solutions for most algorithms. So the money will be in refactoring the inevitable deluge of bad code that keeps profitable businesses running.

But in 5 years we'll start to see automated solutions that record all of the inputs, outputs and logs of these containers and reverse engineer the internal logic into something that looks more like a spreadsheet or lisp. At that point people will be hand-tuning the various edge cases and failure modes.

In about 10 years, AI will be powerful enough to pattern match the millions of examples in open source and StackOverflow and extrapolate solutions for these edge cases. At that point, most programmers today will be out of a job unless they can rise to a higher level of abstraction and design the workflows better than the business folks.

Or, we can throw all of this at any of the myriad problems facing society and finally solve them for once. Which calls into question the need for money, or hierarchy, or even authority, which could very well trigger a dystopian backlash to suppress us all. But I digress.


How is this maintainable? What do you use to describe the inputs and the outputs (if it resembles a programming language, then we're basically back to programming, aren't we)? Is the AI supposed to design the interfaces as well as the plumbing?

Let's say, a bug appears. If the internals are produced by machine learning, chances are it's basically un-freakin-fixable from the high mountains of the spreadsheet/lisp interface. So someone has to dive in, and do it by hand. I doubt the business folk will do it, they won't know where to look!

The result, seems to me, is a metric-ton of machine generated code that now someone has to rewrite. Better hire a team to do it...


Your argument is based on AGI, something we have no idea will ever happen, and which is most likely not nearly as close as you think.


Oh, If we're getting to AGI then it's either apocalypse-time or post-scarcity-time, ain't it?


I really love your storytelling even if I'm not sure I believe one whit! You should try your hand at writing books/blog posts/short stories if you haven't already.


Hah thanks, ya I don't even know what's real and what's not anymore. Someday we'll live in a society where grandma married a guy she met on the internet and the grandkids have to fill in their pre-learning questionnaire with all the stuff they've already learned on the internet so that the teacher can move on to the really important stuff that prepares them for getting their degree from Silicon Valley online university, where they'll major in pre-K robot childhood education. The year is 2029.


> But in 5 years we'll start to see automated solutions that record all of the inputs, outputs and logs of these containers and reverse engineer the internal logic into something that looks more like a spreadsheet or lisp.

And that Lisp code will look something like: https://groups.google.com/forum/#!msg/comp.lang.lisp/4nwskBo...

(Unfortunately, Lisp neither makes you smarter, nor a better programmer, which seems to be a very profound, ego-wounding disappointment for a lot of people who try to dabble in Lisp programming).

Now programming-by-spreadsheets, on the other hand, is a real thing, that is almost as old as Lisp, and is called "decision tables." It was a fad that peaked in the mid-1970s. There were several software packages that would translate decision tables to COBOL code, and other packages that would interpret the tables directly. I think decision tables are still interesting for several reasons: they are a good way to do requirements analysis for complex rules; the problem of compiling a decision table to an optimum sequence of conditional statements is interesting to think about and has some interesting algorithmic solutions; and lookup table dispatching can be a good way to simplify and/or speed up certain kinds of code.
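
As a toy illustration of that lookup-table dispatch idea (an invented example, not from any of the 1970s packages): the conditions become a tuple key, and the table maps each combination to an action, replacing a nest of if/else branches.

    def free_shipping(order):  order["shipping"] = 0.0
    def small_discount(order): order["total"] *= 0.95
    def no_discount(order):    pass

    DECISION_TABLE = {
        # (is_member, order_over_100) -> action
        (True,  True):  free_shipping,
        (True,  False): small_discount,
        (False, True):  small_discount,
        (False, False): no_discount,
    }

    def apply_policy(order):
        key = (order["is_member"], order["total"] > 100)
        DECISION_TABLE[key](order)           # dispatch instead of nested ifs

    order = {"is_member": True, "total": 150.0, "shipping": 7.0}
    apply_policy(order)
    print(order)                             # shipping is now 0.0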

What is not interesting at all is the use case of decision tables for "business rules." A few of the 1970s software packages survive in one form or another, and I have not heard anything good about them. And the problem is very simple: the "business folks" generally do not know what they actually want. They have some vague ideas that turn out to be either inconsistent, or underspecified in terms of the "inputs," or in terms of the "outputs," or have "outputs" that on second thought they did not really want, and they (the "business folks") never think about the interactions that several "business processes" of the same "business rule" might have if they take place at the same time, much less the interactions of different "business rules," etc.

AI cannot solve the problem of people not knowing what they want or are talking about. Machine learning on wrong outcomes and faulty assumptions is only going to dumb systems and people down (IMO this is already obvious from the widespread use of recommendation systems).


So are you making the point that this is just imposter syndrome writ large?


Sort of. The only way I know for someone to hire lowly-paid programmers today is by building a system that does not benefit from more highly skilled programmers. Since I do just this for a living (sorry), it's only a matter of time before I am no longer needed.

About a decade ago, I created a piece of software that made a department of 10 people redundant. The company actually tried to make use of them, but they were so happy with the improved productivity, that they basically kept the dead weight on, for the most part.

Last year, I did this at a financial company and eliminated an entire team. They did not keep the dead weight.

I may not be so lucky to avoid a future mercenary like me. Hopefully that explains it! I rely 100% on my creativity and out-of-the-box problem solving ability to solve real problems (note: not imaginary interview riddles). So far, the living I've made doing this is good.


I see, that's an interesting perspective. I'm sure positions like that will be the last to be made redundant.


A few thoughts:

- Just because a value is high and may very well go down, doesn't mean it's a bubble. FAANG are making real money from those workers, not just inflating an asset and selling it to other investors.

- Just because it doesn't involve long hours, doesn't mean it's not hard. A lot of my college colleagues really struggled, and many more didn't even get in. Don't discount natural ability - in the land of the blind, the one-eyed man is king, even if seeing is effortless for him.


I think the article's author is just suffering from imposter syndrome.


I dunno about a bubble, implying it will pop in some catastrophic way.

But will market forces correct the above average salary? I think so.

More young students than ever are learning to code, which is naturally going to increase the labor pool. The supply of software engineers is going to go up in the next 10-20 years (as will demand, though I still think supply will outpace it). This seems like it would mostly affect new hires, as a 15-year veteran is going to have valuable experience that (most) companies will always be willing to pay for.

It feels like finance to me. People who got there early made a killing. Then salaries, while still pretty high, fell considerably as everyone rushed there to get rich.

As a counterpoint to the author's comment on doctors (maybe not lawyers, still plenty of law students in the pipeline). It does appear that the number of people going after medical degrees is decreasing, so I would predict their salaries to jump considerably in the next 20 years.

And last, as a totally aside point: I have exactly one friend who skipped college and over the last 12 years worked his way up through the electricians' union; he now runs his own small business doing residential electrical work. He's making more than most of our social circle. He doesn't know many people his age doing this type of work either, so as the old guard retires, he'll be able to charge whatever he wants.


> It feels like finance to me.

Former quant dev, ie straddling the industries.

The amount of people who study CS or can learn on their own is surprisingly limited. You won't get a job just by knowing how to write some if statements.

By contrast, there are loads of history majors in finance. There are plenty of ways to act like you understand it. Plenty of bullshitters. The role of luck also makes some of them seem smarter than they actually are.

Engineers who can't code will be uncovered sooner or later. It's hard to know beforehand at an interview, but it's a lot easier to discover with that person present for a few weeks.


An acquaintance at University went on to work in the City - I was amazed as he had studied History rather than Physics or Maths or something.

It turns out the firm he went to work at was run by a family friend.

At least Tech has always seemed fairer to me, with less of the 'old boys network' than other fields like Finance and Law.


> More young students than ever are learning to code

More young students than ever are being taught the utmost basics. But is it true that more people than ever are pursuing it in the sense of seeking professional mastery over the craft? A proxy for this might be population-relative CS program enrollment and graduation rates. It would also be interesting to know how this is scaling compared to the overall volume of programming labor demanded, which is surely growing as well.


Anecdotal, but the CS program overtook the business school at the University of Washington, my alma mater, a few years back.

I remember when I was there, the CS classes were for NERDS and now here we are, everyone wants in.


UW is #6 nationally for software engineering and #60 overall... rankings are imprecise, but that might have something to do with it. Top students from all over the country compete for a small number of spots at top 10 schools.


    I remember when I was there, the CS classes were for
    NERDS and now here we are, everyone wants in.
When was that? I went to school in 2002 and CS at the time was already well past the “just for nerds” phase - I would guess that happened some time in the late 90s.


UW CSE has always been supply constrained, not demand constrained. Even back more than 20 years ago when I entered the program.


As far as I can tell, CS enrollment is at an all-time high.


When you start from a very small base, it’s easier to continue to grow and reach “all-time highs”.


Lawyers have been around for a while, there doesn't seem to be a shortage of them, and the average salary is still relatively high. However the distribution of starting salaries has trended towards being bimodal (https://www.nalp.org/salarydistrib).

I could imagine the same thing happening for programmers over time, if it hasn't already.


I think the barrier to entry for lawyers is way higher than for programming.


Can you imagine hiring a self-taught lawyer?


I'd hire Abe Lincoln in a heartbeat.


I believe some states will let you simply take the bar exam and start practicing.

Of course, one would probably get started by clerking or working into a paralegal position, both of which would require independent study and coursework.

Sometimes people just need a cheap lawyer to take a look at a simple will or divorce papers, etc.


No one will let you take the bar exam and start practicing. California will accept four years clerking under the direction of a lawyer and passing the Baby Bar for eligibility to sit the Bar Exam. There are other states with similar programmes but I think they all demand at least one year of law school.

https://en.wikipedia.org/wiki/State_Bar_of_California#.22Bab...

> California State Bar Law Office Study Program: The California State Bar Law Office Study Program allows California residents to become California attorneys without graduating from college or law school, assuming they meet basic pre-legal educational requirements.[19] (If the candidate has no college degree, he or she may take and pass the College Level Examination Program (CLEP).) The Bar candidate must study under a judge or lawyer for four years and must also pass the Baby Bar within three administrations after first becoming eligible to take the examination. They are then eligible to take the California Bar Examination.


I believe Washington is the only state that allows this anymore. Even then, you still need to be an understudy before you are allowed to take the Bar in Washington.


California, Virginia, Vermont, and Washington all seem to allow it. There are caveats, of course.


I'd counsel caution in relying on those charts because, inter alia, they're self-reported salaries and the very nature of the report excludes unemployed bar members.


Big law salaries are widely known because all the top firms pay in lockstep. The big spike to the right is 100% accurate.


How is law anything like programming? The potential is endless in software and tech... law careers have extreme limitations.


I think we in tech get rather full of ourselves.

Lawyers are specialists in negotiation, mediation, organizational procedures, persuasion, analysis of edge-cases of language and rule sets, evaluating the motives and thoughts of other humans and predicting their actions in ambiguous contexts, and quickly consuming and producing textual information under high pressure.

That seems like a pretty impressive toolbox that has broad applications across great swaths of human endeavor.

As it stands, many of my lawyer friends have contributed far more to humanity's greater good than I think any of my fellow techies have.


Having a legal background opens up a lot of opportunities in government-related areas. These simply aren't as easily available to other professions. If you'd like to hold a position of power, doing law is probably the best bet.


I imagine the distribution is already bimodal.


Total med school enrollment is constrained by the number of residencies and supposedly the AMA. Meanwhile, the numbers of grads for other health professionals (Nurse Practitioners and Physician Assistants) that can fill some of the same roles as doctors has skyrocketed.


There is still a drastic shortage of good engineers.

