Some of the smartest people I know work in other domains: biology, chemistry, and even physics. They are sometimes baffled by tasks that seem trivial to me, and I'm under no illusion that I'm more intelligent than them. I simply specialized and focused only on programming, while they program to accomplish other tasks in their domain of expertise.
Can this last forever? Of course not, nothing lasts forever. But wondering why the wealthiest corporations in the world pay their workers high salaries is perhaps like wondering why water is wet. Software has a low marginal cost, and the rest is basic incentives for the corporations.
Good programmers I know also overestimate the skill needed to earn a high salary in this job. You don't have to go far up the learning curve; these days, you just teach yourself a little JS and go for a webdev job, writing shit code and still earning more than most people in a given country.
> But wondering why the wealthiest corporations in the world pay their workers high salaries is perhaps like wondering why water is wet. Software has a low marginal cost, and the rest is basic incentives for the corporations.
Nah, that's like wondering why this ice block sitting on a hot plate is still solid. The answer is: because it just got put there, and it'll melt in a moment. So too will high salaries end, as most of the low-hanging fruit gets eaten by software made by a mass-produced cohort of programmers.
Our industry has its share of cycles, but this, in my view, is largely wishful thinking. Nothing wrong with optimism, but...
Every 5-10 years there's a "technical shift" that forces everyone to reevaluate how they build software, or more importantly what they build, and the race starts all over again. The ice block is removed from the hot plate and replaced by a bigger, colder block of ice. And when these technical shifts aren't taking place, the bar for what constitutes "good software" inches upward.
If your standards for acceptable software were frozen in 1985, then using modern hardware and software toolchains you could accomplish in one day what used to take a small team an entire month. But if I delivered, say, what passed for a "good enough" database in 1985, it would resemble someone's 200-level CS final project rather than a commercially viable piece of software.
Many of the underlying problems and solutions have existed for decades. The database systems you mention are a good example of this.
While there are some key concepts to things like databases, the fact remains that your 1985 database would not be considered sufficient in today's world: it would have too many limitations, lack features we now take for granted, would not scale to modern data requirements, etc. Supporting all that "modern" functionality is non-trivial and requires a huge amount of effort. You can't just say "Well, we figured out space- and computationally-efficient hashing, so relational databases are well on their way to being feature-complete."
There's a reason we haven't stuck with 1.0 on our platforms, and it's not just security or a desire for a bigger version number: New demands required new functionality and new ways of building things.
iOS was basically AppKit, so anyone already developing for the Mac knew most of what they needed to know to develop for iPhone.
Pretty much every programming innovation is incremental, and doesn't require throwing out all of your previous knowledge and starting over.
Maybe. But AppKit was not the Mac Toolbox.
When my career began, being good at memory management was a skill to be proud of. I would say that now, being good at concurrency is a skill to be proud of.
I don't really have to worry about memory management any longer, but I didn't have to worry about threading when I started my career.
As I see the younger generation entering the programming field I wonder in what ways the craft will be different when they've had a few decades under their belt.
Will parameter tuning datasets for machine learning be the coveted skill? Who knows.
But the demand for developing in AppKit suddenly increased by orders of magnitude.
That's the sort of shift in the environment that the grandparent is talking about. Fundamental CS tech was arguably better in the 1970s and 1980s, because it moved more slowly and you had time to get the details right. That doesn't matter if you're building, say, a mobile Ethereum wallet in 2018, because you're building for the user expectations of today: users don't care about data integrity or security as long as nothing fails during the period when they're deciding which tech to use, and software that solves the problem (poorly) now is better than software that doesn't exist.
I believe you are a victim of survivorship bias.
There was plenty of shitty software in the 70s and 80s. The difference between then and now is that we haven't yet waited four decades to see what software of 2018 stood the test of time.
In the 1980s there was a lot of "foundational research" (poorly re-inventing the wheel) for microcomputer people who did not know about the work done on large computers in the 1960s. Move fast and break things was also very much a thing for microcomputer manufacturers and most microcomputer software vendors. Look at how many releases software packages went through, and at what rate.
That has always been the case and is about as good a one-line summary of the software industry as I can think of.
The "modern" database systems are now going back to the exact design principles that the books you refer to solved long time ago. There is tons of research, dissertations,.. that focuses on this from decades ago.
It's just that the new systems are only now realizing these problems actually exist.
If you don't know the history of a field and what came out of it, you repeat the same mistakes. This seems to apply to software engineering as well.
A great example of this is the evolution of FB/Google/Amazon. Portions of their core tech have been completely re-written over the years for marginal gain, but there is a large premium to being the best in tech.
In other parts of the industry every new cycle enables some new area of tech, and those marginal gains become the minimum bar for entry. e.g. Deep Learning and Computer Vision, distributed systems and cloud computing/SaaS.
Haha, in 1985 the Amiga had a multitasking GUI desktop in 512 KB at 7 MHz. Now we have millions of times more computational power and struggle to deliver an experience as crisp and satisfying.
I wish people still made software like it was 1985, that actually used the full power of the hardware for something useful...
- No process or kernel security (processes could rewrite kernel code)
- Processes could effectively disable the scheduler
- Supported only a single, fixed address space: both a massive limitation and a performance hack that made the system bearable to use
- No security model
There are embedded applications these days where not having these features is a deal-breaker. Let me assure you: if you re-implemented the original Amiga OS feature set, it too would be screaming fast. The tricky part is keeping it fast once you start adding protection and additional functionality on top of it.
And largely what happened when you tried to implement more complicated applications on top of these primitive systems is that they would crash the entire system, constantly.
Well the PC wasn’t backward compatible with CP/M so that’s an odd critique to level at the Amiga.
Backwards compatibility didn't matter in the mobile world, or for those screaming for ARM laptops.
Sometimes we can't just have nice things.
That's chicken-and-egg. Why do modern apps with actually quite simple functionality need all this vast power, the GHz and GB? Because it's there. Why does software crash? Because the OS lets it, with nothing more than inconvenience to the user.
Amigas were actually quite usable; they were stable enough for complex software to be developed and real work to be done. Same for STs.
So your software concepts from '85 would be overkill today, not lacking.
Today I'm working on systems with memory latencies from L0 cache all the way to tape storage. And it's getting worse.
Also, modern non-indie games are orders of magnitude larger and more intricate than 80s games, and have about the same distribution of quality/bugs. They're now made by teams of 10-1000 rather than 1-10, though.
There was one variation of Windows in 1985. (And I remember installing it!)
Still, in 1985, even DOS apps had to target varied environments: mono, CGA, EGA, Tandy graphics, different memory configurations, 8088 or 286, printer drivers...
I wish it inched upward... So much software nowadays is bloated crap.
Even if you have library support to hand-hold your budget coders, even if you use a lot of them, even if you give them all the time in the world, they will produce results that are more complicated, less coherent, less stable, buggier, and harder to modify, improve, or iterate on than what better coders who understand the problem would produce.
That means that no matter how little you pay up front you end up paying more in the long run throwing more man hours and money at fixing the mess made. A good mess is easier to maintain and improve and costs less over time. A mediocre / bad mess takes substantial efforts to maintain and iterate on.
It's also probably an impossible problem in this domain to remove the ability for any coder to write bad code. If for no other reason than that in any programming environment you can never stop someone from iterating by index over a binary search tree to find a specific element, and you can't stop someone from turning a float into a string to truncate the decimal part and then reinterpreting the result as an int. But if you don't give them the tools - the integer types, the data structures, access to the bytes in some form or another - you aren't really programming. Someone else did the programming and you are just trying to compose the result. A lot of businesses, like I said, can be sated by that, but it's still not programming unless you are in a Turing-complete environment, and anyone in such an environment can footgun themselves.
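To make those two footguns concrete, here's a minimal Python sketch (a hypothetical example of mine, not from any real codebase), contrasting each anti-pattern with the idiomatic version:

    import bisect

    # Footgun 1: linear scan over data that is kept sorted for binary search.
    sorted_keys = [2, 5, 9, 14, 21, 33]

    def find_linear(keys, target):
        # O(n): inspects every element even though the list is sorted.
        for i, k in enumerate(keys):
            if k == target:
                return i
        return -1

    def find_binary(keys, target):
        # O(log n): actually exploits the sorted order.
        i = bisect.bisect_left(keys, target)
        return i if i < len(keys) and keys[i] == target else -1

    # Footgun 2: truncating a float by round-tripping through a string.
    def truncate_via_string(x):
        return int(str(x).split(".")[0])  # fragile: blows up on 1e20, inf, nan

    def truncate_idiomatic(x):
        return int(x)  # int() already truncates toward zero

    assert find_linear(sorted_keys, 14) == find_binary(sorted_keys, 14) == 3
    assert truncate_via_string(3.75) == truncate_idiomatic(3.75) == 3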
SQL solved that problem a long time ago. You write SELECT, the query optimizer figures out how, using any available indices.
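For instance, here's a minimal sketch with Python's built-in sqlite3 (table and index names made up): the query states only what to fetch, and EXPLAIN QUERY PLAN shows the planner picking the index on its own.

    import sqlite3

    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE users (id INTEGER PRIMARY KEY, email TEXT)")
    conn.execute("CREATE INDEX idx_users_email ON users (email)")

    # Declarative: the query says WHAT to fetch, not HOW to find it.
    plan = conn.execute(
        "EXPLAIN QUERY PLAN SELECT id FROM users WHERE email = ?",
        ("a@example.com",),
    ).fetchall()
    for row in plan:
        # Prints something like:
        # (..., 'SEARCH users USING COVERING INDEX idx_users_email (email=?)')
        print(row)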
Eventually it has to change (imo), either through companies becoming more scrupulous in their hiring, or through a massive flood of new devs.
It's not that tech, even done by shitty devs, isn't valuable. It's a question of whether the market can control itself, which I'm pretty sure is no.
Stricter hiring on the low-end perhaps, because in FAANG and the companies that copy them, hiring could hardly get any stricter...
There will be a day, but when is hard to say. Thinking it's right around the corner is akin to the belief that we're on the cusp of true AI. We're more pessimistic today than we were in the mid-80s. And non-programmers were programming with HyperCard, FileMaker Pro, VBA, etc., back in the 90s.
There are of course former well paying jobs such as old-school front end devs (html/css/sprinkle of js) that are largely commoditized, but that's a given considering the low barrier to entry.
The same high productivity per employee is found in virtually every other high-paying industry, and most of those have not seen pay at the higher end fall over time.
With these jobs in particular, what I see is that the definition of seniority has shifted to 'knowing the latest tech'.
So a junior dev who's just got to grips with React has become a React Developer, and they are now relatively senior in that field. The experience isn't transferable to other parts of the software stack though, it's too heavily tied up in the browser. So they end up as a super-specialised frontend dev.
It'll pay pretty well until the tech becomes obsolete, unless that kind of person enjoys maintaining legacy code.
It wouldn't surprise me if index funds have caused a decline in earnings for investment bankers.
On the other hand, if you think the software industry has a hard time figuring out (at hiring time) who the high performers are... science is driven by serendipity. Nobody can predict who will find the billion dollar discovery. Not even past performance is a reliable indicator.
So it makes sense to me that the salary spread in science is relatively even. If they could reliably figure out who to dump money on, they would. On the other hand, the FAANG companies clearly believe their hiring practices can select out the high performers... and perhaps they are right? If they're paying 3-4X what everyone else does, they expect to get at least 3-4X the value.
The selection process seems to do a good job of keeping out the lowest tier at least, although we openly acknowledge that we miss a lot of good people as well.
Years ago, I worked someplace where a colleague was tasked with working with another developer on project X. After about 15 minutes it was clear the other developer ... wasn't? A web project, and this person had been employed as a "web developer" for at least several months. Questions like "how does this information in this browser get back to the server?" came up.
Colleague goes to manager and says "I can hit the project deadline, or I can make sure other_dev learns the basics enough to be able to contribute and understand projectX, but I can't do both by the deadline. Can we move the deadline back a few weeks?"
No, and no. Train other_dev and hit deadline.
Deadline was hit, other_dev moved to another project afterwards, and was pretty much as ineffective as before, but colleague was then saddled with this reputation of being a 'bad mentor' because the next team learned other_dev didn't know how things worked. Why the hiring manager wasn't tarnished... who knows?
He was given 2 tasks and he only delivered one result.
I know this sounds... not ideal... but it is what it is.
His manager probably has to operate under the same expectations: given 3 tasks by his manager (or director), either he finishes all 3 or he's less dependable.
I was motivated because my older brother, and my mom, had already learned how to program, and they were quite excited about it. After getting past a few familiar conceptual hurdles, it became very easy for me to learn programming myself.
People who are only motivated by the money, or under pressure from others, have a harder time, because their curiosity and drive aren't activated. There's some sort of valve that lets the knowledge into your brain, that has to be opened.
For the most part, the people I know who seem to be motivated by money itself are not so desirous of getting rich per se (many are already rich), but are actually interested and curious about money in the way that I was curious about programming.
I don't program for a living today, but my ability to program is definitely a force multiplier for my work. It has either improved my earnings, or improved the continuity and longevity of my career.
I use programming extensively as a problem solving tool, for things like data analysis, modeling, automation of experiments, and prototyping. Almost all modern equipment is electronic and computerized. To be capable of rolling out an MVP on my own, I program.
You will rarely see my computer without a Jupyter notebook on the desktop. ;-)
In addition to working in a computerized field, program code is just a super powerful way to express ideas. And the disciplines of good programming practices (yes, learn them) provide ways to organize the innards of complex things, so they actually have a fighting chance of working and being right. Plus, it's fun.
People who work as full time programmers may make more money than me, but I'm not sure that I can do their jobs. When thinking of any profession, a person should not only look at the cool, fun stuff, or the money, but what the actual daily grind looks like, because that's what you have to survive.
But that is no definition of a bubble. A bubble, at a very basic level, means there is a lot of capital flowing within; it has little to do with how difficult your job is.
Medical record systems were running on MUMPS (schemaless) and being eventually consistent (records were keyed in from paper forms) long before 1985.
I mean: what would you use Node.js for - in 1985 - even assuming you had access to a system to develop and test stuff made with it?
In 1985, your hypothetical bank would be running VMS, which had asynchronous IO system calls as the default. There was no need to "invent" something like Node.js.
The fat package makes programmers not realize this.
Why, in 2019, is there no good way to visualize a decision tree? Having to install, configure, and get graphviz working feels very hokey.
I prefer Python, but R still does some things really well where the Python libraries are just not up to par:
1) anything geospatial, including drawing maps. Here is a list of projects my students did to give examples:
2) time series
3) linear models, is it so hard to give me a good summary?
If anyone knows of any packages that do these >= R, I'd love to see them :)
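On point 3, at least, statsmodels gets reasonably close to R's summary(lm). A minimal sketch with made-up data:

    import numpy as np
    import statsmodels.api as sm

    rng = np.random.default_rng(0)
    X = rng.normal(size=(100, 2))
    y = 1.0 + 2.0 * X[:, 0] - 0.5 * X[:, 1] + rng.normal(scale=0.3, size=100)

    # Fit OLS with an intercept and print an R-style summary table:
    # coefficients, standard errors, t-stats, R^2, F-statistic.
    model = sm.OLS(y, sm.add_constant(X)).fit()
    print(model.summary())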
You should try https://github.com/parrt/dtreeviz. From Terence Parr and Prince Grover, released in Sept 2018.
There is a good background article on the problem space and their design iterations here: https://explained.ai/decision-tree-viz/index.html
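A minimal usage sketch, assuming the 2018-era API (a dtreeviz() entry point in dtreeviz.trees; later releases may have reorganized it):

    from sklearn.datasets import load_iris
    from sklearn.tree import DecisionTreeClassifier
    from dtreeviz.trees import dtreeviz  # 2018-era import path

    iris = load_iris()
    clf = DecisionTreeClassifier(max_depth=3).fit(iris.data, iris.target)

    # Render the fitted tree with per-node feature distributions.
    viz = dtreeviz(clf, iris.data, iris.target,
                   target_name="species",
                   feature_names=iris.feature_names,
                   class_names=list(iris.target_names))
    viz.save("iris_tree.svg")

Note that it still uses graphviz under the hood, but the output is far richer than a plain exported dot file.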
To pydata-first folks who ask, I highly recommend spending some time in C/C++, Go, or Julia.
When I took the ASVAB in high school I scored a 107. That score is too low to become a warrant officer so I had to retake it a couple of years ago for my officer packet and I scored a 129 out of a maximum 130. That puts me in the top 0.1% of testers. I am not smarter or more intelligent than when I was in high school. I do write software though. Every couple of years I look back on my software and algorithms realizing how I continue to improve and see the solutions more clearly.
The best teacher has been helping colleagues. I have been programming a lot better (fewer errors, sometimes even without running the application when I'm pretty sure), because I think and analyze more upfront than I used to.
Some things come back, but it's rarely related to me (e.g. last-moment spec changes).
I do have to watch out, though. I notice that basing my code on someone else's is OK, but they always have faulty code in hard-to-test areas. So "make testing easier on things that are hard to test" is my next motto.
Also, helping others is a pretty huge timesink :(
PS: Being in the zone does wonders lately.
PS2: There was another thread about VideoLAN yesterday, and nobody had heard the entire story about HTTPS. I gave the VLC developers the benefit of the doubt, not knowing everything about their infrastructure.
A lot of the comments here on HN disagreed with me (some silently upvoted, though).
Today I saw a blog post about why. It was infrastructure-based...
I can't understand why I was practically the only one with another view on the subject in this community, where developers come together.
The comments are in my history, mostly on the VideoLAN topic. It's all recent.
The question the author posed was why programmers are paid that much even when some other paths could seem "harder", which seems valid. Sure, not all careers are supposed to be "harder" than programming, but they're not as easy as one would imagine either.
Though yeah at least for now I don't see the situation abating much. The demand is still going strong. Once the proverbial "flood" of the market happens from new grads, things might get worse. But still if you know what you're doing, you know all the right concepts and skills, you should be able to stay on top of the game. There has always been a saying that the irony of the CS degree is that many people who graduated with the degree can't program, while many who can program didn't need to do a degree at all. I doubt the influx of students trying to study CS would change this situation much. Coding bootcamps have been around for a decade yet they don't seem to change the market equilibrium that much.
My kids have mentioned that they might be interested in a degree in computer science, and I've encouraged them to combine that with a second area of specialization. Programmers are everywhere, but a programmer who also knows chemistry or biology or economics or art history or just about anything stands out.
Really? I have a physics degree with some experience in rocket science, but my most valuable skillset (measured by how much pay I can fetch for it) is plain old software engineering. I don't think I'd be able to leverage my area of specialization to exceed or even match what I can get from FB/LI/G as a generic software engineer.
That's how markets work. You get paid what the market will bear, not what you "should" make.
My comment was more about the companies though which may form cartels to drive down employee wages. Companies forming cartels is illegal, while unions are legal in many places.
In the US, union workers make between 10% to 30% more than their non-union peers.
You're comparing pay across two different economies and only looking at unionization as a variable. It's like wondering why engineer rates in Omaha, Nebraska aren't on par with those in New York, and concluding that it has something to do with differing fire codes.
Correct, but these protections exist (and existed at the time) independent from a union.
> In the US, union workers make between 10% to 30% more than their non-union peers
There are very few high-skill jobs that are commonly unionized. In a market where supply is greater than demand, then yes, unions have absolutely been shown to improve worker outcomes. I'm not aware of any evidence for markets where demand outstrips supply (like that for skilled software engineers). It's not immediately clear that union protections would be beneficial there.
>You're comparing pay across two different economies and only looking at unionization as a variable.
No, I'm simply pointing out that your flippant response to esoterica doesn't actually address the question. If unions are better for workers, why is it that a non-union area *with a cartel depressing wages* was still substantially better for workers than a unionized area with no such issue?
Saying "oh the market is different" ignores the question of why the market is different.
(Indeed, that's kind of exactly what happened with this cartel: Facebook wanted to hire skilled engineers, and was willing to pay more, so it broke the cartel. That kind of thing won't happen when workers are generally equivalent, but SWEs aren't.)
Sure there are. Doctors and actors, to name just a couple. In both cases the "union" actively works to create barriers to entry.
The AMA colludes with medical schools to set artificially-low student body quotas. If you've ever wondered why teaching "XYZ for pre-meds" is such a miserable experience, this is why. You have to earn straight A's to get into med school because there are so many more qualified candidates than openings (but it's not clear to me how, say, art history or algebra-based physics makes you a better doctor).
SAG (the Screen Actors Guild) requires actors to have already performed in a SAG production as a condition of membership. And they also strictly limit the number of non-SAG performers on SAG productions. That chicken-and-egg problem was very intentional.
If you've ever taken a macro economics course, you know what effect these actions have on prices.
> I'm not aware of any evidence for markets where demand outstrips supply (like that for skilled software engineers). It's not immediately clear that union protections would be beneficial.
See above. Unions can create a market where demand outstrips supply.
> If unions are better for workers, why is it that a non-union area !!with a cartel depressing wages!! was still substantially better for workers than a unioned area with no such issue? Saying "oh the market is different" ignores the question of why the market is different.
So tell me why professional associations exist, then. Why do doctors form a union to increase wages, if as you say, they would be better off without it?
Neither SAG nor the AMA is a union in the traditional sense. In many ways, the AMA actively works against worker quality of life (consider the horrible conditions for med students/residents and the high suicide rate among MDs) in order to artificially reduce supply.
>Why do doctors form a union to increase wages, if as you say, they would be better off without it?
The AMA is mainly a lobbying organization, not a union. Since a significant percentage of doctors are in private practices or small practices, they don't have representation with the government. So sure, the AMA does collectively bargain with the US Government. But by that same token, since 53% of MDs are self employed, the AMA can't do "normal" union things like set wages, because there's no one to bargain with except the doctors themselves.
And interestingly, the AMA actually admitted that its intentional supply reduction is hurting the medical industry as a whole. To answer your question: "because they thought it would be better". But in hindsight, it probably wasn't.
I suspect you're posting in bad faith here. And in the USA, SAG minimum rates don't affect the higher rates that successful actors get.
Another way to see it: collective bargaining goes both ways, i.e. both workers and employers come to a joint agreement. So if we created a FAANG engineers' union and created a joint pay scale for them, that would basically be equivalent to the non-poaching agreement often derided in discussions like this.
Not all union models have sector bargaining, and it certainly doesn't work for professional unions - and I am not saying that European unions really get the needs of M&P members and need to change.
In every other aspect of computers, the industry has finally embraced usability as a desirable goal, and not just for end-users.
On my first computer, you had to read a 100-page user manual and learn exactly what commands to type. In my first programming language, you had to manually allocate (and worse, deallocate) memory. With my first database, we used to have to go type VACUUM regularly. None of these is true today.
Yet even though some of the highest paid people in the world are members of unions and have agents to do their negotiating, programmers seem to have latched onto this idea that if you're not making top dollar or have your ideal working conditions, you should "just negotiate better".
Why stop there? Tell programmers they should "just program better", too.
> You can also unionize
Have you ever organized? I don't think you realize how difficult this is, especially without strong support from an existing union. There's a reason unions heap rewards on people who do it.
Existing unions also have great labor lawyers. A common response to even thinking about unionization is getting fired. (That was in the news recently because it happened 4 weeks ago here in Seattle.) Labor laws aren't what they once were, and there's usually no consequence to the company for firing organizers.
Flipside: I can still write software for my first computer without looking anything up, over 30 years after reading those 100 pages. I still know the memory layout, opcodes, assembly, etc. by heart, and it is still the best way to program that particular computer (which still works, in my man cave) today. Yes, today it is all simpler, but I find the 100-page example a plus, not a negative. Maybe you were referring to something else, but my 100+ page manual covered usage and programming at the same time (beyond the basics, using the system was programming), as that was the only way to use it.
Issue 1: It's very difficult to tell whether your contribution produced that 50% performance improvement when there were 10 other devs pushing in features and bug fixes. This is the attribution problem.
Issue 2: This happens over time. It's very unlikely that your 50% improvement happens every year or month, because, think for yourself, this is compounding at large rates, and it grows quickly: a 1.5x improvement over 6 cycles (months or years) is 1.5^6 ≈ 11x. This, essentially, is the time problem.
Issue 3: Even if you delivered the results you did, in a large company there's a large bureaucracy, and no one person has the ability to increase your salary by that much. This is the control problem.
It's actually fairly non-trivial to be able to say with even a modicum of certainty how much value a given developer brings to their company.
I currently write software used by millions of people. Partly because I’m a backend engineer, I have no real idea how much more the company is making due to my direct efforts. Since they keep paying me, I’m assuming it’s a decent multiple of my carrying cost, but I have no way to measure it.
In other words, markets work that way because that's how the bosses and capitalists want it to work, and have so far been successful at thwarting attempts to use the government to change things.
Starting your own company (not self employed contractor) gives a really good perspective on what it means to be owner and employee.
The logic is that programmers tend to produce far more value than they capture -- so the rest gets captured elsewhere, a lot of it typically by management. Except the value can be hard to quantify when the company is old and so is the software: how much of the value comes from employee #3701 fixing a bug that's making the product not work for one customer in one instance, vs. employees #107, #85, and #150, who in their past team's life created the original version of that system, making the new customer even consider using it? There's no point in moaning about how much you "should" get paid. Just ask for more if you feel underpaid, but be aware that because of competition, and because people usually want to hear what more you'll do to justify it, you won't always get it.
How much of the business risk of the enterprise is your top-flight programmer assuming? Are her decisions the only ones that make any difference to the increase in profitability as a result of her work? How direct is the line between the back-room engineer, no matter how good, and profit?
The only case where 50% or near it makes sense is for a founder owner who is also the lead talent. Maybe. Because then they are also creating the business opportunity and assuming a big chunk of the risk.
Most SWEs I know are not making more than they bring in profit.
Consider that some of the most valuable companies in the world did not exist 25 years ago, and now they do, making real, non-bubble money, with software developers as their biggest assets. This isn't the dot-com boom. The attrition rates in CS programs are very high, and even then, not everyone who receives a formal education ends up being a decent developer. Factory workers made good wages in the mid-20th century because companies were making lots of money from an industrial boom. We now have a technological boom, and unlike factory work, the barrier to entry is much higher. So I don't see why it can't continue. Sure, the 300-400k salaries are high, but they are at top firms that are selective and competing for a finite pool of talented workers in very expensive areas.
This is nothing like the current environment, where there is a consistent demand for programming talent which outstrips the supply, and the industry as a whole is far more stable.
Of course, a global economic slowdown could put a damper on salary growth, but it will not be a salary "bust" like what happened after the dot com bubble.
Those are the companies driving up salaries — once that rollercoaster ride stops, I imagine there are going to be mass layoffs and salary readjustments for years afterwards.
What seems more likely is that the little startups will get crushed as VC funding dries up.
A quick Google search finds that law school students have around a 14% chance of making BigLaw (the legal equivalent). The odds of getting into a medical school are between 2% and 5% on average. So no, I don't think we're in a bubble; the majority in the situation described would simply exist as the elite compensation class elsewhere as well, maybe with even better odds.
For comparison, at Amazon, Senior and above engineers account for ~20% of the total, and those are the ones pulling $300k+ regularly. So only the top 20% of one of the top companies is getting such compensation.
And to follow the article, this won't last forever. Whenever the next stock market crash comes, the equity-based portion, almost half of that compensation, will all but vanish. But maybe in the next bull market we will see a similar situation (remember people in the 90s making 250k?).
You're saying the value of (for example) Google stock will plummet to 10% or less and then not recover at all over the following few years?
I don't think a 90% drop is common, but I also don't think it's outrageous.
Working for a big company that pays well does not mean you're at the top 1% of software engineers. It means you're willing to do what it takes to secure that job and maintain it, including moving somewhere many don't want to live.
Some (most?) of that is innate ability and intelligence. Sure, there are well-paying jobs that are unpleasant. But for many others the company literally gets to choose 1 out of 100 candidates.
Getting a job at these companies largely requires studying up on CS101 trivia and CS basics to pass stupid tests. It's practicing how to pass their interview process. It's not "innate ability and intelligence."
That's... really low. FB has around that many, Google has like 50k, and from what I've heard, Amazon has about the same number or more.
Considering that everyone I know got >= $100k base out of undergrad regardless of location I don't think it's as uncommon as you would think anymore.
Of course cost of living is a fraction of what it is in SV. And quality of life is (subjectively) better.
Does this include the now >50% TVC "non-headcount" as recently reported?
The US has anywhere from one to four million software developers, depending on your source. The BLS lists 1.2 million US software developers, with a $103,560 median pay (excludes benefits) in 2017.
You have closer to a 5% to 10% chance of earning $300,000 in total compensation as a software developer in the US, at some point in your career. Frequently high incomes don't last, there's a relatively high turnover because peak earning power only lasts so long, layoffs happen, specialization changes, job changes, et al.
The giant caveat to this, as everyone here knows, is you have a <1% chance of earning that outside of a small group of markets (ie it's very much not evenly distributed; if you're in New Orleans or El Paso you have almost a zero shot at it; if you're in SF or NY you have a legitimate shot at it).
Sure, it's not "easy" to get into those companies, but it isn't an outlier to get into them either.
The simple reality is this:
If you are an engineer at a publicly traded tech company, it is customary to get RSUs and Refresher RSUs. These have compounding effects as their vesting schedules start occurring in parallel. By the end of your second year you will have two series of shares unlocking, and this is in conjunction with your salary increases and bonuses.
You should expect and negotiate your RSU grants to be proportional to your salary. Competing offers from other publicly traded tech companies ensures this.
If the share price has also increased, which is the only thing that happened over the last decade, this is enough for a lot of people to quit.
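A minimal sketch of that compounding, with hypothetical grant sizes and the common 4-year, 25%-per-year schedule:

    # Hypothetical numbers: a $400k initial grant vesting over 4 years,
    # plus a $100k refresher granted each subsequent year on the same schedule.
    YEARS = 6
    initial_grant = 400_000
    refresher = 100_000

    vesting = [0.0] * YEARS
    grants = [(0, initial_grant)] + [(year, refresher) for year in range(1, YEARS)]
    for start, total in grants:
        for year in range(start, min(start + 4, YEARS)):
            vesting[year] += total / 4  # 25% of each grant vests per year

    for year, amount in enumerate(vesting, 1):
        print(f"Year {year}: ${amount:,.0f} vesting")
    # Years 2-4 stack overlapping series: $125k, $150k, $175k on top of salary.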
The article did not talk about share price increasing.
- A house somewhere else and never paying rent again
- A downpayment on a home in a high-priced area where demand doesn't seem to stop, which is always a low-interest leveraged investment that has cashflow opportunities
- "Taking a break", which means years of luxury at every music festival and socialite event while optionally networking and pursuing other fulfilling money-making activities, with almost no opportunity cost of having 'gaps' on your resume
- Riding off the resume entry of being an "ex-Googler" to secure advisory roles, raise capital more easily, get C-level titles at someone else's startup
- Joining another FAANG company at a higher premium
- Leaving the company so you can write covered calls on all those shares you earned, for passive income
Or the other option is to jump to a different company that offers more money, more equity, or a higher level.
Real estate prices are high, because the Bay Area is a very desirable place to live, there are people who can and will pay the high price, and the supply of housing is low.
Salaries are high because big companies are fighting for talent (demand), and the supply of talent is low.
Housing prices here are likely to stabilize, but they're not going to start seriously declining unless the job market starts declining, too.
Clearly you don't live in CA. Homeowners vote more than renters, and they will never intentionally vote for policies that will put them underwater on their own mortgages.
You aren't being generous. The number of companies paying people that well is on the order of dozens, maybe 100.
Every single FAANG company, every single unicorn, all hedge funds, and a few successful non-unicorn companies based out of SF and the Bay Area.
I think the reason these compensations get that high is because it does take 5-10 years to become good enough to lead a team that manages something so complex and to do it well so that the results are reliable and consistent. I think the difference with doctors and lawyers is that we're not licensed to practice. We're not a capital-P profession. However we still have to attend conferences and stay relevant but the expense and requirements to do so are on us or the companies we work for: there's no professional obligation to do so.
I don't think we're in a programming bubble if the author means we're in a compensation bubble and that programming is over-valued.
I think the real bubble is complexity. We're seeing a deluge of security breaches, the cost of software running robots in the public sphere on unregulated and very lean practices, and a lot of what we do is harming the public... though by harm I don't necessarily mean only harm to human life -- but harm to property, insurance, people's identities, politics, etc... and we're not accountable yet.
If anything I think we need to up our game as an industry and reach out for new tools and training that will tame some of the complexity I'm talking about... and in order to do that I expect compensation to remain the same or continue to spread further out and become the norm. Being able to synthesize a security authorization protocol from a proof is no simple feat... but it will become quite useful I suspect.
As a programmer myself, yes. There is no way I can continue to make this much. I'm a dumbass. (I say this having worked on, in the last year, a compiler, trading algorithms, and 3D object analysis)
The way I see this playing out is, something like behavior-driven development (BDD) where the business folks describe the functionality they desire, and programmers write up the backend logic. Then as AI progresses to AGI, a higher and higher percentage of that backend code will be generated by machine learning.
So over the next 10 years, I expect to see more specialization, probably whole careers revolving around managing containers like Docker. There will be cookie cutter solutions for most algorithms. So the money will be in refactoring the inevitable deluge of bad code that keeps profitable businesses running.
But in 5 years we'll start to see automated solutions that record all of the inputs, outputs and logs of these containers and reverse engineer the internal logic into something that looks more like a spreadsheet or lisp. At that point people will be hand-tuning the various edge cases and failure modes.
In about 10 years, AI will be powerful enough to pattern match the millions of examples in open source and StackOverflow and extrapolate solutions for these edge cases. At that point, most programmers today will be out of a job unless they can rise to a higher level of abstraction and design the workflows better than the business folks.
Or, we can throw all of this at any of the myriad problems facing society and finally solve them for once. Which calls into question the need for money, or hierarchy, or even authority, which could very well trigger a dystopian backlash to suppress us all. But I digress.
Let's say, a bug appears.
If the internals are produced by machine learning, chances are it's basically un-freakin-fixable from the high mountains of the spreadsheet/lisp interface. So someone has to dive in, and do it by hand. I doubt the business folk will do it, they won't know where to look!
The result, seems to me, is a metric-ton of machine generated code that now someone has to rewrite. Better hire a team to do it...
And that Lisp code will look something like: https://groups.google.com/forum/#!msg/comp.lang.lisp/4nwskBo...
(Unfortunately, Lisp neither makes you smarter, nor a better programmer, which seems to be a very profound, ego-wounding disappointment for a lot of people who try to dabble in Lisp programming).
Now programming-by-spreadsheets, on the other hand, is a real thing, that is almost as old as Lisp, and is called "decision tables." It was a fad that peaked in the mid-1970s. There were several software packages that would translate decision tables to COBOL code, and other packages that would interpret the tables directly. I think decision tables are still interesting for several reasons: they are a good way to do requirements analysis for complex rules; the problem of compiling a decision table to an optimum sequence of conditional statements is interesting to think about and has some interesting algorithmic solutions; and lookup table dispatching can be a good way to simplify and/or speed up certain kinds of code.
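As a small illustration of that last point, a lookup-table dispatch in the decision-table spirit (hypothetical shipping rules, sketched in Python):

    # Each combination of conditions maps directly to an action,
    # the way a decision table specifies it, instead of nested if/else.
    def ship_ground(order):    return f"ground: {order}"
    def ship_air(order):       return f"air: {order}"
    def ship_intl(order):      return f"international: {order}"
    def ship_intl_air(order):  return f"international air: {order}"

    # (domestic?, expedited?) -> handler
    DECISION_TABLE = {
        (True,  False): ship_ground,
        (True,  True):  ship_air,
        (False, False): ship_intl,
        (False, True):  ship_intl_air,
    }

    def dispatch(domestic, expedited, order):
        return DECISION_TABLE[(domestic, expedited)](order)

    print(dispatch(True, True, "order#42"))  # air: order#42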
What is not interesting at all is the use case of decision tables for "business rules." A few of the 1970s software packages survive in one form of another, and I have not heard anything good about them. And the problem is very simple: the "business folks" generally do not know what they actually want. They have some vague ideas that turn out to be either inconsistent, or underspecified in terms of the "inputs," or in terms of the "outputs," or have "outputs" that on second thought they did not really want, and they (the "business folks") never think about the interactions that several "business processes" of the same "business rule" might have if they take place at the same time, much less the interactions of different "business rules," etc.
AI cannot solve the problem of people not knowing what they want or are talking about. Machine learning on wrong outcomes and faulty assumptions is only going to dumb systems and people down (IMO this is already obvious from the widespread use of recommendation systems).
About a decade ago, I created a piece of software that made a department of 10 people redundant. The company actually tried to make use of them, but they were so happy with the improved productivity, that they basically kept the dead weight on, for the most part.
Last year, I did this at a financial company and eliminated an entire team. They did not keep the dead weight.
I may not be so lucky to avoid a future mercenary like me. Hopefully that explains it! I rely 100% on my creativity and out-of-the-box problem solving ability to solve real problems (note: not imaginary interview riddles). So far, the living I've made doing this is good.
- Just because a value is high and may very well go down, doesn't mean it's a bubble. FAANG are making real money from those workers, not just inflating an asset and selling it to other investors.
- Just because it doesn't involve long hours, doesn't mean it's not hard. A lot of my college colleagues really struggled, and many more didn't even get in. Don't discount natural ability - in the land of the blind, the one-eyed man is king, even if seeing is effortless for him.
But will market forces correct the above average salary? I think so.
More young students than ever are learning to code, which is naturally going to increase the labor pool. The supply of software engineers is going to go up in the next 10-20 years (as will the demand, though! But I still think supply will outpace). This seems like it would mostly affect new hires, as some 15 year veteran is going to have valuable experience that (most) companies will always be willing to pay for.
It feels like finance to me. People who got there early made a killing. Then salaries, while still pretty high, fell considerably as everyone rushed there to get rich.
As a counterpoint to the author's comment on doctors (maybe not lawyers; there are still plenty of law students in the pipeline): it does appear that the number of people going after medical degrees is decreasing, so I would predict their salaries will jump considerably in the next 20 years.
And last, as a total aside, I have exactly one friend who skipped college and over the last 12 years worked his way up the electricians' union; he now runs his own small business doing residential electrical work. He's making more than most of our social circle. He doesn't know many people his age doing this type of work either, so as the old guard retires, he's going to charge whatever he wants.
Former quant dev here, i.e. someone who has straddled both industries.
The number of people who study CS or can learn it on their own is surprisingly limited. You won't get a job just by knowing how to write some if statements.
By contrast, there are loads of history majors in finance. There are plenty of ways to act like you understand it. Plenty of bullshitters. Also, the role of luck makes some of them seem smarter than they actually are.
Engineers who can't code will be uncovered sooner or later. It's hard to know beforehand at an interview, but it's a lot easier to discover with that person present for a few weeks.
It turns out the firm he went to work at was run by a family friend.
At least Tech has always seemed fairer to me, with less of the 'old boys network' than other fields like Finance and Law.
More young students than ever are being taught the utmost basics. But is it true that more people than ever are pursuing it in the sense of seeking professional mastery over the craft? A proxy for this might be population-relative CS program enrollment and graduation rates. It would also be interesting to know how this is scaling compared to the overall volume of programming labor demanded, which is surely growing as well.
I remember when I was there, the CS classes were for NERDS and now here we are, everyone wants in.
> I remember when I was there, the CS classes were for NERDS and now here we are, everyone wants in.
I could imagine the same thing happening for programmers over time, if it hasn't already.
Of course, one would probably get started by clerking or working into a paralegal position, both of which would require independent study and coursework.
Sometimes people just need a cheap lawyer to take a look at a simple will or divorce papers, etc.
> California State Bar Law Office Study Program
The California State Bar Law Office Study Program allows California residents to become California attorneys without graduating from college or law school, assuming they meet basic pre-legal educational requirements. (If the candidate has no college degree, he or she may take and pass the College Level Examination Program (CLEP).) The Bar candidate must study under a judge or lawyer for four years and must also pass the Baby Bar within three administrations after first becoming eligible to take the examination. They are then eligible to take the California Bar Examination.
Lawyers are specialists in negotiation, mediation, organizational procedures, persuasion, analysis of edge-cases of language and rule sets, evaluating the motives and thoughts of other humans and predicting their actions in ambiguous contexts, and quickly consuming and producing textual information under high pressure.
That seems like a pretty impressive toolbox that has broad applications across great swaths of human endeavor.
As it stands, many of my lawyer friends have contributed far more to humanity's greater good than I think any of my fellow techies have.