Uncomfortable Truths in Software Engineering (buttondown.email/hillelwayne)
268 points by BerislavLopac on Dec 15, 2021 | 386 comments


There’s no rigorous academic evidence for this, but a lot of companies have been backporting typescript/mypy/sorbet onto existing dynamic codebases and the case studies have been overwhelmingly positive.

Not to be a buzzkill, but migration reports from any tech A to tech B are always overwhelmingly positive when the industry is in its newlywed period with tech B. Save for a clear, undeniable failure, the stakeholders will always claim success.


Number one, static type checking is hardly a "new tech" for which the industry is in a "newlywed period". If anything, it is the middle-aged wife that the industry's crawling back to, as the passion fades from its dynamic mistress dalliance.

Secondly, the obvious counterexample here is MongoDB. Way too hyped-up during its honeymoon, and then almost immediately crapped on by the entire industry (to the point where we've PROBABLY gone too far and are unfair now). When hype waves don't work out, this industry is pretty quick about discarding them.


Also, the way type checking is done TODAY (and, more importantly, how types are used) is not the same as YESTERDAY's type usage.

Java, C++, and similar languages are a terrible benchmark for it. Back then, types were mostly for taxonomy, with limited help in actually writing CORRECT code, and, more importantly, MODELING the domain was very verbose and offered limited advantages!

Against that, skipping static types made more sense. With or without them the end result was roughly the same, except that without them you removed a lot of noise from the codebase.

Only after ML/OCaml/Haskell-style type systems started to get in, NULL removal became truly feasible, and some of the failures of error-prone software (and how WRONG JS, C++, and to some extent Java got it) became more and more evident, did modern static types start to deliver a greater return on investment.
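
To make that concrete, here is a minimal sketch of the difference (TypeScript used purely as an illustration; the payment example and names are invented):

    // Old style: one shape with nullable fields, so invalid combinations are
    // representable and NULL checks leak everywhere.
    interface PaymentOldStyle {
      kind: string;               // "card" | "wire" | ...? nothing enforces it
      cardNumber: string | null;  // only meaningful for cards
      iban: string | null;        // only meaningful for wire transfers
    }

    // ML-style sum type: each case carries exactly its own data, and illegal
    // states are simply unrepresentable.
    type Payment =
      | { kind: "card"; cardNumber: string }
      | { kind: "wire"; iban: string };

    function describe(p: Payment): string {
      switch (p.kind) {
        case "card": return `card ending ${p.cardNumber.slice(-4)}`;
        case "wire": return `wire to ${p.iban}`;
        // no default needed: under strict settings, adding a new variant
        // without handling it here becomes a compile error
      }
    }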

It also comes together with other improvements in tooling (which make type inference in IDEs much more enjoyable, for example), composability, iterators/generators, etc., and you get a very nice toolset.

This is where everyone is converging, in some way or another.


You may believe that, but the fact is that we haven't been able to find any evidence to support the fact that "new types" significantly increase correctness and have a greater return on investment, either compared to "old types" or in general, and not for lack of trying. At this point, it's okay to believe it, but I'd be very careful about being so sure about it. Large effects are very easy to detect and are hard to hide. The fact that we haven't been able to easily see the effect in any rigorous way, suggests that even if there is some positive effect, the most reasonable assumption we can make right now is that it is most likely small. Having had experience with both "new types" and "old types," my personal impression (which is certainly not rigorous or necessarily trustworthy) is that such claims about a big bottom-line impact are exaggerated. Sometimes people feel they're writing more correct programs more easily, but the bottom line doesn't quite support it.


> we haven't been able to find any evidence to support the fact that "new types" significantly increase correctness and have a greater return on investment

Rust?

A lot of people say Rust improves the game (I'm one of them). I have coded in 12+ languages over my life. Rust totally removes tons of issues (for me) that used to show up just before shipping and even after.

And I ported the same project. The kinds of issues I get in the Rust codebase are a fraction of what I had.

And I think many studies show it?


I was expecting someone would say that. There are very specific situations where a language has an advantage over an exceptionally "bad" language in the same domain, such as Rust vs. C or TypeScript vs. JS (for which we also have evidence of a ~15% improvement). But that doesn't mean that the very concept of "new types" generally has a big impact. E.g., it's easier to write more correct software in Rust than in C, but probably not than in Java or even (the untyped) Clojure.

Rust is a special case because its main contribution is to use typing rules to solve a harmful problem that's well-known and particular to C (or C++).


Clojure is strongly typed and dynamically typed, not "untyped". Much of its core behavior is built on interfaces like ISeq.

It's not uncommon to use a spec library like clojure.spec or malli, whose benefits overlap with those of static typing. I'm not sure whether there is a measured improvement from their use, but they have other advantages, like facilitating generative testing, that do help one write more correct software.


The term "untyped" means anything that isn't statically typed (or just "typed"). This is because (static) types and dynamic "types" are two very different objects, and only the former is called "types" in programming language theory and formal languages in general.

I am well aware of clojure.spec, and it, as well as many other techniques employed in development, are probably among the reasons why types don't actually seem to have a big relative impact on correctness.


Thanks for the explanation. What are some of the other techniques?


Another way to phrase this is that in old languages, static types were added for the compiler, while in modern languages they are added for the developers (specifically, the teammates of the developer).

This is why hybrid typing is so prevalent nowadays: you don't need to satisfy any internal need of the compiler, but you can still keep the documentation of models and interfaces that static typing gives, where desired. Performance is usually also a non-factor nowadays unless you have high demands.

On top of this, the only reason JavaScript grew so large was that there was literally no other way to ship code to a user. IE6 was the universe. Nobody wrote code like this by choice. And those unfortunate souls that did still have scars from it. Remember this was also before CI checking every PR for correctness became as common as it is today.

Because of all this, there will not be any “swing of the pendulum” back to the crazy era of untyped PHP, JavaScript without TS or python without mypy. This is something that is here to stay.


I'd strongly suspect that we'll see a cycle for this over time. Fast Java incremental builds, minimally typed languages like go, and IDEs which actually worked for most statically typed languages ushered in the current trend of favorable static typing views.

I'd bet this lasts for as long as static typing remains fast and comprehensible to the average dev just trying to get something done. I suspect language designers and library builders will add more typing foo until builds are either slow or the code becomes a mess of factories, traits, monads, functors, and other constructs - ushering in a new wave of dynamically typed languages which "just get out of your way".


You are right that there is hardly a tech that hasn't existed in some shape or form since the 1970s. That doesn't change my point (and nobody said newlyweds have to be on their first marriage).


I think the trend is more of a pendulum swinging back and forth than a single "static" => "dynamic" => "oh, dynamic was bad, so static" cycle. Prolog, Smalltalk, Lisp, etc. are likely older than most of the engineers now very enthusiastic about static typing. Static typing is fashionable right now, I suspect, because it's the first time many engineers are stumbling upon it, via TypeScript, Sorbet, or Haskell (or Haskell-derived projects like Elm). I predict that eventually the costs will be realized (and let's be honest: static typing is not free), and dynamic typing will become fashionable again within the next 10 years.


Popular music changes every 5 or so years, because freshmen entering high school want their own identity that's distinct from the seniors who just graduated. This is why they can reject the music wave that came just before, yet also embrace nostalgia for the prior wave that came before that. That's fine, because the previous wave of kids (from which we want a separate identity) weren't into that wave.

This teenage dynamic carries over into adulthood. Academia is a bunch of young adults chasing tenure, by publishing papers arguing that the existing batch of tenured professors are washed up and got it all wrong. And also "rediscovering" the older research that was discarded by the previous generation.

Software development is a bunch of junior devs working on bug tickets, frustrated by the tech debt they inherited and convinced that tech debt comes from the tech rather than from the organization. It's just that, compared to academia, technology has the generational lifespan of fruit flies, so we spin through the cycle a lot faster than other walks of life.


Static typing is net 0 cost. We don't need to be "honest": static typing is obviously not free, since you need to think about types. The real revelation was that the cost of dynamic languages far exceeds the cost of a statically typed language.


You're not wrong, but yet you are.

There were failures with Ada as well, as I recall. Statically typed. Ahead of its time. But in some critical use cases, it failed. It also cost a lot to maintain. Compilers/IDEs were super expensive. Yet there were cases of undefined behavior that you still had to (hopefully) catch in a peer review.

This was the case with the Ariane 5 rocket explosion, caused by software written in statically typed Ada. It's worth a read-through:

https://itsfoss.com/a-floating-point-error-that-caused-a-dam...

Java and the log4j mess are similar. It's a statically typed language, but a poorly reviewed code base. The static typing didn't catch the security hole. And it's cost the US millions of dollars so far to fix it.

Static typing may be nice and all, but it sure as hell ain't a silver bullet.


>Java and the log4j mess are similar. It's a statically typed language, but a poorly reviewed code base. The static typing didn't catch the security hole. And it's cost the US millions of dollars so far to fix it.

This is a ridiculous statement and just really devalues everything you are saying. How is a vulnerability in a library that is completely orthogonal to typing relevant to this discussion? Static typing never claimed to cure programmer stupidity.

Also, when I say static typing I mean languages with good type systems. As much as people like to say TypeScript is bad because it's based on JavaScript, at least null errors are impossible if you are using strict mode, unlike in Java and Go, where any variable could potentially be null and the compiler won't tell you if you have unhandled null cases.
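
For what it's worth, a tiny sketch of what strict mode buys you (the lookup function here is invented):

    // With "strict": true in tsconfig.json (which turns on strictNullChecks),
    // possibly-null values have to be handled before use.
    function findUser(id: string): { name: string } | null {
      return id === "42" ? { name: "Ada" } : null; // hypothetical lookup
    }

    const user = findUser("7");
    // console.log(user.name);  // compile error: 'user' is possibly 'null'
    if (user !== null) {
      console.log(user.name);   // fine: narrowed to non-null here
    }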


You said:

> Static typing is net 0 cost. We don't need to be "honest": static typing is obviously not free, since you need to think about types. The real revelation was that the cost of dynamic languages far exceeds the cost of a statically typed language.

And my point is that the statically typed languages we do have didn't save us anything in terms of cost.

I recommend maybe you read the hackernews posting guidelines before you post again.

https://news.ycombinator.com/newsguidelines.html


> Number one, static type checking is hardly a "new tech"

Static typing is of course not new, but migrations usually happen to relatively new languages: Go, Rust, TypeScript. I haven't seen reports recently of any large migrations to Ada, Pascal or even C++ or other old (20yo+) language.

> MongoDB. Way too hyped-up during its honeymoon, and then almost immediately crapped

MongoDB is still used in new projects but it is no longer getting a lot of up-votes on HN. It is somewhat similar to PHP - HN doesn't like it but the language is still widely used.


> I haven't seen reports recently of any large migrations to Ada, Pascal or even C++ or other old (20yo+) language.

Things are quietly rewritten in Java and C# all the time. I’m sure it happens with C++ as well since the more recent quality of life improvements have landed in that language.


Java and C# are very "corporate languages". Microsoft and Sun/Oracle have done well in their marketing and manipulations. The usual philosophy is that it's easier to find programmers that know those languages, so it should be easier to support and replace programmers as needed.

It creates a kind of "self-fulfilling prophecy". "We can only find Java and C# programmers, so everything needs to be and upgraded to Java or C#. Since everything is in Java or C#, we have no choice but to keep using it. Since everything is written in Java or C#, then those are the languages I better learn." It's very hard for other languages to penetrate that bubble.

Ada, and even more so C++, will hold on relative to the code previously written in them. But clearly, many programmers will "hedge their bets" by also learning Java or C#.

Pascal/Object Pascal/Delphi, from back in the 1980s, was a "problem" toward which big players (like AT&T, Microsoft, Sun, etc.) arguably pushed a lot of hate and disinformation. There is a good argument that these days it's really more for independents and mavericks, and for Pascal programmers who managed to get themselves into positions of power and are calling the shots or can influence what gets used.

Delphi/Embarcadero would have driven the language extinct through their short-sightedness, charging outrageous prices for its IDE/compiler to the enterprise and developing little to no Object Pascal talent, if not for the open source projects Free Pascal/Lazarus and PascalABC. Interestingly, given a few changes in the past or future, Object Pascal could still be a contender. It is a viable alternative to Java or C#, but it doesn't have the corporate push.


But this was specifically in reference to introducing statically-typed tech into something that didn't have it (the original quote was "companies have been backporting typescript/mypy/sorbet onto existing dynamic codebases"), therefore it's new in the sense that it wasn't in that place before and now it is being introduced there. So there could absolutely be a honeymoon period local to that team/project.


Software developers are goldfish, not tortoises. We retain no long-term memories. There is always - oh, look! - a brand new side of the bowl to swim toward. All change is new tech to us.

Except COBOL. We've never seen it but it's a punchline.

Now regarding number one . . .


After using Python and C++ in production for more than 10 years, I'll reach for Python for trivial things, but for something complex give me C++ over Python! Yes, C++ has awful syntax full of footguns, but at least I can change a large project. At 50k lines of code, Python becomes something you can't change for fear of some one-in-a-million path that will only break after it hits production. Static types mean that C++ won't compile in most of those cases. As a result, Python needs a lot more unit tests, and even 100% code coverage isn't assurance that you covered all the cases that can crash.

Don't take the above as a statement that C++ is a great language. I'm interested in Rust and Ada (just to name a few that keep coming up), which might or might not be better for my needs. I'm a C++ expert though, so I know what the limits are.


Python in a project == tech debt. Same for bash scripts.

For us, Golang turned out to be the optimal balance between the (probably imaginary) strictness of C++ and the "everything allowed until the 2AM PD call" philosophy of Python.

Even for simpler things like scripts and little auxiliary services Python is bad, because these little things tend to grow and get more complex.


I too have C++ experience and would never take "it compiles" as a sign that it'll work. I don't dispute that there are languages which have sufficiently powerful and complex type systems and compilers that enable you to model significant amounts of domain constraints such that you feel you don't "need" tests, because it'll probably work if it compiles (though I'd argue that you really ought to still have tests to verify your modelling of the domain in the type system is correct, i.e. a set of programs that must typecheck and a set of programs that mustn't typecheck) -- C++ is not usually one of those.


True. I have more confidence that my C++ works with 50% code coverage in tests than that my Python works with 100% coverage.

I try to model my domain in C++'s types. It isn't easy, but even avoiding raw int and bool helps a lot to ensure my code doesn't make stupid mistakes. Haskell would give me more power, but I'm moving my C++ in that direction.
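
For illustration, here is the same idea sketched in TypeScript rather than C++ (a rough sketch with invented names, not my actual code): wrap raw numbers in "branded" types so values with different meanings can't be mixed up silently.

    type Meters = number & { readonly __brand: "Meters" };
    type Seconds = number & { readonly __brand: "Seconds" };

    // Small constructor helpers; the brand exists only at compile time.
    const meters = (n: number) => n as Meters;
    const seconds = (n: number) => n as Seconds;

    function speed(distance: Meters, time: Seconds): number {
      return distance / time;
    }

    speed(meters(100), seconds(9.58));    // ok
    // speed(seconds(9.58), meters(100)); // compile error: arguments swapped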


i think your expertise in C++ might be affecting your view of python more than expected. i used to feel this way about languages other than my « primary » one and it turned out i just wasn’t as good at others and blamed the language for it


I agree with bluGill and I've done most of my career programming backend infra in Python. Give me C++, Rust, Go, Java, C#, TypeScript—as long as it has a type system. Eliminates a whole class of runtime errors and just makes life easier. When I'm dealing with large projects that have lots of moving parts and other developers, strict types are essential.

It's true that one can write unit tests to firm up dynamically-typed code, but doing so is not only a slog, it makes changes more fragile and deployments a lot scarier. If you aren't perfect the first time you write the tests, instead of a build failing you get restart loops, dead threads, and in the worst cases data corruption from improper silent type coercion.


I have enough experience in python to know where the limits are. I've done more C++ for sure, but I have done a fair amount of python. I still reach for python for small projects where I'm not expecting over 10k lines of code.


> C++ has awful syntax full of footguns

HN never stops delivering new funny insults about this language :-)


I mean, the sentence translates to "there's no evidence this is better, but some people are doing it and liking it".

This is not about being objectively better but only about preference. So that's not a "truth".

Sure people might prefer static over dynamic, but until we have hard evidence that one is actually better than the other, this is only preference which is heavily influenced by the current moment we are in.

Having said all that, I've worked with both and lean a lot more towards typed languages now (more than I did previously). But this is not an uncomfortable truth, because it's not objectively true.


Typing is almost a no-loss addition though. If your function takes an integer argument and someone might pass it a string, that is a bug. It is quite hard to argue that a typing mechanism won't improve code quality, and it is very lopsided in the amount of time it takes to type a function vs. debug something that would have been caught by a type system.
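
A trivial sketch of that point (TypeScript; the function is made up):

    // The mistake is caught at compile time instead of surfacing later as a
    // confusing runtime bug.
    function retryWithBackoff(attempts: number): void {
      for (let i = 0; i < attempts; i++) {
        // ... hypothetical retry logic ...
      }
    }

    retryWithBackoff(3);      // ok
    // retryWithBackoff("3"); // compile error: 'string' is not assignable to 'number'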

There are edge cases that can be argued all day, but if some large codebase is going to be maintained long term it is difficult for me to see how types could be a handicap. Even if a coder doesn't really engage with the type system "properly", as little as capturing information that was already obvious anyway will reduce the bug count.

There are good reasons to expect typed languages to do well.


> If your function takes an integer argument and someone might pass it a string, that is a bug.

What if you pass it a float and that's converted to an integer?


What you’re describing is optional typing. What is usually derided as cumbersome is mandatory and exhaustive typing.


“Mandatory and exhaustive” typing is also good and usually not particularly difficult if you have a decent type system. With OCaml, for instance, you rarely even need to go out of your way to have 100% type coverage.


What if... we had the "best" parts of PHP or JavaScript development but for any traditionally statically-typed and precompiled language?

(sorry, I can't say that with a straight-face)

It could work though! And I don't just mean like Visual Studio's Edit-and-Continue feature...

Consider a language feature that lets you annotate a local/parameter/field's ("thing") type as "Hindley–Milner-esque" which tells the compiler to invert the thing's typing rules from prescriptive to descriptive (i.e. to mostly stop complaining and make the thing behave more like a JavaScript `object`, so the compiler only complains about 100% provably wrong usage of that thing).

Another "feature" of PHP/JS dev that I want to see in compiled projects is a way to run a project with build errors, via a toggle that instructs the compiler to stub out all functions/members that contain build errors (even syntax errors!). This way we can do quick ad-hoc testing without needing the entire project to build. Think about the times when your boss asks you to make one or two "minor" alterations for an unrelated feature that you don't think are worth a whole new git branch and worktree, but you can't test them right away because of unrelated breaking changes you've already made.

Just some ideas...

---------------

Unrelated additional comment: I'm looking forward to when all programming languages fully support generalized ADTs. So many issues with data modelling can be solved with ADTs and GADTs, yet thanks to OOP's legacy from the 1990s we still have to shoehorn simple and clear models into inappropriate applications of inheritance. I want my intersection types, damnit.
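
For what it's worth, TypeScript already has union and intersection types, so the composition style I'm after looks roughly like this (a sketch with invented names):

    // Intersection types compose capabilities directly, instead of a 1990s-style
    // class hierarchy that has to anticipate every combination up front.
    type Identified = { id: string };
    type Timestamped = { createdAt: Date; updatedAt: Date };
    type SoftDeletable = { deletedAt: Date | null };

    // Pick exactly the capabilities a record needs; no inheritance diamond required.
    type UserRecord = Identified & Timestamped & { email: string };
    type PostRecord = Identified & Timestamped & SoftDeletable & { body: string };

    const post: PostRecord = {
      id: "p1",
      createdAt: new Date(),
      updatedAt: new Date(),
      deletedAt: null,
      body: "hello",
    };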


Exactly, that kind of content is literally publication bias[0] on steroids.

[0] https://en.wikipedia.org/wiki/Publication_bias


The biggest problem is talking about 'statically typed' and 'dynamically typed' languages as if Haskell, C, JavaScript, Python and Common Lisp (or Clojure) were all the same.


For Typescript, I think it's safe to say the honeymoon period is over. It's been >9 years, or more than 3 internet generations.

For at least this one case, many people (including myself) believe it's a success. Whether or not the tooling for Sorbet or mypy ever reaches that point, there are compelling reasons to believe in the success of the model.


Adoption curve takes a while. Typescript definitely wasn't common back in 2012, I can tell you that.


This is true, but the adoption has been dramatic.

The question I would have is: what would it take for this adoption to reverse? Where is the dissatisfaction with the model? The rationale for Typescript and Sorbet was precisely long-term maintenance and codebases at scale.

To add to the list, there's also Clojure spec and Elixir spec. All of these tools have a similar philosophy across very different problem domains. All of them have enthusiasm and the model of type annotations for dynamic languages is decades old.

To reference another one of the author's points: people are slow learners in software engineering. The continued reinvention of this concept decade after decade shows its utility, both theoretical and practical.


Well, my point was not that all technologies are equivalent. And it could well be that a new iteration of B is better than an ancient version of A. But the way the industry celebrates new (or, if you wish, rediscovered) paradigms is too detached from reality to be useful as any objective metric.


It's not common now! The software world is huge; a niche of a niche is not some bellwether of the industry.


> It's not common now!

It is [1].

Over 60% of Javascript developers use it somewhere, and over 1/3rd of all developers use it for something. Almost all widely used npm modules have @types annotations as well. Outside of node, it's arguably the most ubiquitous voluntary extension of the JS ecosystem ever.

[1] https://redmonk.com/jgovernor/2019/05/07/typescriptexploding...


The key of my point is "niche of a niche", which is seemingly the exact point you're making in arguing that typescript is common.

Common among JS developers != common among programmers != common among professionals in the software industry


> Common among JS developers != common among programmers != common among professionals in the software industry

Not sure if you're a troll, but as I pointed out, as of 2019 over 1/3rd of all developers in the software industry were using it. It's higher now.

That's common. Find me another technology used by more developers in the software industry.


I think part of it is that there are suddenly a bunch of clear, actionable and measurable goals when doing a migration - so people can commit to progress, deliver on it and feel good about their accomplishments.


Agreed, just because more companies are adopting something and saying it helps doesn't mean it actually helps. There's too much incentive for spin and arguable claims of success.

Additionally for things like static typing, the larger the company the more risk-averse people tend to be within the company so things that give a feeling of safety like type-safety are going to be a much easier sell.


It... depends.

I like Typescript. The Typescript enthusiasts are also glib about what a pain it can be to teach. This is not a minor tradeoff.


Yes, survivorship bias too.


> 6. We don’t have the Alan Kay / Free Software / Hypercard dream of everybody having control over their own computer because integrating with disparate APIs is really fucking hard and takes a lot of work to do. This will never get better, because API design is decentralized and everybody is going to make different decisions with their APIs.

No, it's because most every developer involved has an incentive to make it harder for the user to have control over their own computer. Even though it might be a net win, a proprietary software developer wants to not have competition. They also lobby for copyright, the DMCA, etc. to further lock in their assumed place in the hierarchy.


So software copyright is not a net win?



It's fine until it stifles innovation, one could argue.

https://www.investopedia.com/terms/p/patent-troll.asp


> Pairing is probably better than two solo devs if both pairers can handle it.

Man, am I the only one that absolutely detests pair programming? Not only do I find it socially awkward, it completely kills my productivity.

Software development is often about building a very complex model in your head and then figuring out the changes that need to be made. Having to dedicate half of my mental bandwidth to handling social interaction is just devastating to that process.


No. Lots of programmers hate it. Very common reason for new hires quitting at an XP shop I worked at. I expect that's why the author qualified the statement with "if both pairers can handle it".

Note that your comment also identifies a key value of pairing (if you can stomach it). When the model you are building is undocumented/implicit and can only be constructed in your mind by careful solitary thought, then you have a weak system design. But a pairing session can only succeed if you make the model explicit. So pairing has to start with constructing a shared model of the work (hard). It turns out, once you can share a model between two people verbally, turning that knowledge into documentation and/or refactoring is relatively easy.

To the individual programmer, this can seem like extra (wasteful!) effort. Unfortunately, it also means they are likely to make another inscrutable addition to the Jenga pile that is clear to them in the moment, but will be opaque to everyone next Monday. Mind you, if your management is running a ticket mill and doesn't value design or documentation, that is the right thing to do.


I just want to be able to fart at my desk in peace.

I haven't tried pair programming. It sounds miserable. I figure I'd end up paired with one of those people that can't stand to see you looking for something. You know the type: you're looking through cabinets for something and they demand you tell them what and they demand you go directly to the right place. Sure, I'm looking for something, but I'm also learning where other things are.


The trick to pair programming is for each dev to have their own workstation with two screens.

Then, you simply use a screen sharing tool to share and collaborate on a code editor on each of your primary screens. We used to use Screenhero for this before they got bought out by Slack, but another tool that works well is something like Tuple.app or Drovio.

The beauty of this approach is that each person can "check out" temporarily and do research and look up stuff or check messages on their secondary monitor (without the other person seeing it), but they can easily move their mouse back over to their primary monitor and re-join in on the pair programming.

It makes pair programming a lot more tolerable, and gives you the ability to collaborate, while still having some flexibility and autonomy.

This also works well in both local and remote pairing sessions.


I’m totally not trying to convince you, because I don’t strongly believe in pairing or anything. But, one benefit that I do see with pairing is that code is always shared amongst a team. You’re seeing the social interaction as only a cost, but there is also a benefit: pairing ensures that the code produced matches more than just one person’s brain.

It basically frontloads the communication / knowledge sharing of the structure and reasoning behind the code, and avoids using patterns that only one person enjoys or understands. Which surely is an important part of working on a team.


The problem as I see it is that if you have someone working by themselves, then you only have to worry if that person is a good fit for that problem.

If you have two people working with each other, then you have to worry if each person is a fit for the problem ... and if they're a fit for each other. Or you have to worry if they are each partially a fit for that problem such that the two of them count as being fit for the problem (and they still have to be a fit for each other). And you also have to keep an eye on overall productivity. You're billing at 2x the rate, so you had better be seeing some noticeable benefit to justify it.

I've had good experiences and bad experiences pairing. I suspect that people who like it a lot already have a bunch of experience solo developing and are generally able to get along with a large number of people. Also, they're probably solving easy problems with an overabundance of budget.


You're not the only one. I absolutely detest it (to the extent I would absolutely leave a job if they made it a requirement) and a significant portion of devs I have worked with are lukewarm at best.


The ideal is when you're both able to load that complex model into your heads and have conversations about it. If you are unable to do that then you are a worse team programmer than those that can.


Anecdotal: I find pair programming great for my productivity, and I'm generally a shy/reserved person. Making things explicit by talking them out loud makes programming way safer, imho. I almost never solo program if it's life-and-death code and I have a trusted partner who enjoys pair programming.


Re. 3 & 5: my theory is that a language with gradual typing, gradual error handling enforcement, and "gradual proving" seamlessly embedded in it (see e.g. the https://nim-lang.org/docs/drnim.html experiment) could hopefully actually work here. And as to dynamic typing, I found it useful when prototyping, to quickly PoC/MVP the "happy path" of an idea/design and see whether it is worth investing any bigger effort in, or is just complete crap. Ideally I'd then (gradually) enable (or, "un-disable") static typing, error handling enforcement, and formal proving. Question is, can anyone design & implement a language that would have all of that, and still be readable and understandable to a common programmer (like me).

Re. 6: for API design, I have some (feeble) hopes behind https://aip.dev; as to the rest, I have immense hopes behind https://enso.org


> Question is, can anyone design & implement a language that would have all of that, and still be readable and understandable to a common programmer

No, because exception handling and the answer to "what type are these data?" are product questions. Hard questions that no one really wants to work out because what if our customer base wants something completely different in 6 months?

This is largely reflected in cloud providers being unwilling/unable to provide account spend caps: what is a sane response once the budget goes over? AWS can't tell you; only you can define it. (This isn't to say "there's nothing AWS can do for you here" -- they could but won't for other reasons).

Python is the largest common language that can support gradual checking systems being added in over time, and mypy has outlined a great framework for doing just that. But it's not a silver bullet and can't be, because the problems are cultural, not mathematical. Either spend the time working out what you want your software to do, or don't. But don't argue "strictness" is "correctness".


> But don't argue "strictness" is "correctness"

I feel my mind expanding right now the same way I did when I first heard "concurrency and parallelism aren't the same thing".


Hm; I guess when being in talks with a company using the language, assuming the pragmas idea I elaborated in a cousin comment, I'd then be asking questions like: "Is your codebase any=default or any=forbidden? What percentage of it is any=allowed?" (Instead of the current: "are you using JS, Go, or Rust?")

Notably, in my particular case, what you say is a problem is exactly why I want it: because when writing the PoC, I don't really know what I'm doing yet; and I want to only sketch it. Once I see the sketch and it looks like the general idea has some promise, I want to "grab the pen and start drawing" - i.e. start diving into the nitty-gritty dark corners of ugly interactions, corner cases, & stability, as a way of better understanding & exploring my domain/project/idea (specifically, using the compiler as a tool that is forcing me to do that, by asking me difficult questions). And I may want it graded in various areas of my codebase: some are super unimportant or still being explored, and I'm ok having them explicitly marked as "sketchy"; others are where I really care about extreme security, so want full prover power, and I want to know their exact boundaries too; finally, some are the "more or less know what I'm doing, but not life-critical" to bigger or lesser extent; notably with some small corners marked with an explicit "caution: here be dragons" `unsafe` (I mean, `any=allowed`) yellow tape.


Gradual typing seems like the worst of all worlds. You spend time adding typing but can't depend on it consistently. I was surprised using Dart that it would throw runtime type exceptions even though it looked like the code had explicit types.


So, that makes me notice I didn't clearly explain, that my idea here would be for some kind of guard blocks that enforce static typing (etc.) in some areas of code. Kinda similar to "unsafe" in Rust: ideally, you could start with e.g. fully dynamically typed code, then mark some parts of it with "I want this area typechecked", finally requiring all of it to be. Now that I think of it, seeing that in statically typed languages you can kinda already do this (with "Any" type, or "void*", or "interface{}", or whatsit), you could e.g. imagine marking some code blocks with a pragma "any=default" or "any=allowed" or "any=forbidden". So:

- "any=default" - marking the block as dynamically typed and not needing type annotations everywhere (but maybe allowing them if I want, and typechecking then if possible?) - modulo any inference;

- "any=allowed" - marking a block as "statically typed" with type annotations required, and "Any" allowed ("classical static typing");

- "any=forbidden" - would mark the block as statically typed with "Any" not allowed at all.

I'd then start by coding with whole codebase being "any=default", making it work like in JS/Python/Lua/...; then if happy with the prototype, I'd switch the whole codebase to "any=forbidden" and have the compiler force me to add type annotations. Finally if I know in some areas I really want the "Any", I'd mark some small blocks with "any=allowed", working as kinda "unsafe" marker in Rust. (Obviously, subject to bikeshedding over the specific pragma naming & modes.)
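
As a rough existing analogue: TypeScript can already approximate these three modes today, though per file and per project rather than per block (the mapping to the pragmas above is mine, and only approximate):

    // ≈ "any=default": a whole file can opt out of checking by starting with a
    // "@ts-nocheck" comment (not used here, so the rest of this file stays checked).

    // ≈ "any=allowed": the usual strict-mode setup; annotations are checked,
    // but an explicit escape hatch is still legal.
    function parseConfigLoose(raw: string): any {
      return JSON.parse(raw); // caller gets an unchecked value
    }

    // ≈ "any=forbidden": enforce it per codebase with a lint rule such as
    // @typescript-eslint/no-explicit-any, or lean on 'unknown', which forces a
    // check before use.
    function parseConfigStrict(raw: string): unknown {
      return JSON.parse(raw);
    }

    const cfg = parseConfigStrict("{}");
    // cfg.port;  // compile error: 'cfg' is of type 'unknown'
    if (typeof cfg === "object" && cfg !== null) {
      // narrowed enough to inspect safely from here
    }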

Similarly, I'd like to have another pragma for errors handling strictness (regardless whether actual error handling is done in a Rust-like "Result" way, in Go/Lua-like "soft-Result" way, or in an "exceptions" way), and for proving.


If I'm understanding you correctly, this is basically what we have with Typescript, and it's...fine, but it's not great. The difference between what you're suggesting and what Rust does is that in Rust the unsafety is lexically scoped - it concerns static code, not live data, so it can't escape the unsafe code block. Whereas in Typescript, as soon as you let an `any` flow back into the rest of the system, all bets are off.
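
A small sketch of that escape (the payload shape is invented):

    // Once a value typed 'any' flows back into otherwise well-typed code,
    // the downstream checks are silently defeated.
    const payload: any = JSON.parse('{"count": "three"}'); // shape never verified

    function recordCount(count: number): void {
      console.log(count.toFixed(2)); // blows up at runtime: strings have no toFixed
    }

    recordCount(payload.count); // type-checks, then fails at runtime

    // 'unknown' keeps the uncertainty contained instead:
    const safer: unknown = JSON.parse('{"count": "three"}');
    // recordCount(safer.count);  // compile error: 'safer' is of type 'unknown'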

ETA: If you're suggesting actually preventing the flow of Any-typed data out of the Any-typed blocks, well, that's going to make things pretty hard. Any-typed code wouldn't be able to call any "safe" code, because that safe code might do something like take its (presumably safe!) input and stash it away somewhere. Safe code wouldn't be able to access any unsafe data whatsoever - all calls into unsafe code would have to be completely procedural. It'd be painfully limiting.

I'll also add that it's not trivial at all to statically type a dynamically typed code base if it's exploiting that dynamism at all. "Just go back and add type annotations" could be a multi-engineer-year project requiring refactoring of huge chunks of the system. If you believe in static types, encouraging completely unprincipled dynamism at the language level for any reason besides backwards compatibility with existing dynamic code seems like a mistake.


So, IANAPLT, and I think that must be definitely clear now if it wasn't enough so before... but then, who says I can't dream! :D and in this dream of mine, those things seem to indeed be lexically scoped, apparently... :) As to the next thing, I feel I don't understand completely the details of what you wrote in the ETA section, but wouldn't it be similar like with unsafe in Rust? AFAIU, it's on the writer of the code in the unsafe block to ensure that whatever leaves the unsafe zone, it has its invariants all buttoned up and behaves properly back in the civilised world.

As for not being trivial, I kind of maybe have to understand (I found esp. Hishamhm's presentations about the design and evolution of the Teal language, née TL a.k.a. Typed Lua, enlightening yet approachable in this regard); yet, again, IANAPLT and I can dream :) and who knows, maybe some actual PLT will look at it and say, whaa, hmmm; but is it really for sure impossible? I. AM. CURIOUS..... <cue ominous thunder>


Well, all I mean in the "ETA" is that if you want very robust guarantees about type safety - more robust than Rust's notion of safety - then you're going to be very, very restricted in what you can do.

OTOH, if you're fine with looser guarantees about correctness, I'm not sure if marking blocks as safe/unsafe really gets you much more than Typescript, which does (with strictness turned on) force you to be explicit whenever you're using anys. And it's unclear how you can "button up" dynamic data and pass it back to the civilized world.


What's IANAPLT?

I googled and got nothing.


I Am Not A Programming Language Theorist (Theoretician?...)


With that argument you wouldn’t start adding unit tests either?

If I only add one test, how can I depend on it consistently.


The problem isn't best case scenario, but possibility of abuse.

Gradual typing is an easy way to acquire more tech debt once tickets are closed and no one comes back to add more types.

Corps like a simple plan->code->test->done without any "return" to code considered "done".


Is that enso software related to the software launcher by Aza Raskin?


Noo, I don't think so, from googling what you describe it seems just a clash of names.


Another uncomfortable truth: most software engineers beyond a few years' experience are highly resistant to suggestions for improvement in their craft. They're more likely to use their intellectual/verbal skills to push back than to move forward.


Different people have different things they care about at different times. If I’m currently trying to improve my ability to do effective stakeholder interaction, I’m going to be reasonably resistant to switching to spending a bunch of time learning terraform.


The same could be said about all trades people (and yes I consider most software engineering a trade), there are those who care deeply about the quality of their work and improving that. Then there are those who get really comfortable and are fine with stagnating.


Their craft isn't important. They can just move up the income ladder by getting a new job every ~3 years without really improving.


13. We’re never going to get the broader SE culture to care about things like performance, compatibility, accessibility, security, or privacy, at least not without legal regulations that are enforced. [3] And regulations are written in blood, so we’ll only get enforced legal regulations after a lack of accessibility kills people.

[3] ...I think the killing has to be directly and obviously attributable to the software. Something that would cause a media sensation.

Since we're talking uncomfortable truths: BALLISTIC MISSILE THREAT INBOUND TO HAWAII. SEEK IMMEDIATE SHELTER. THIS IS NOT A DRILL.

That was blamed on a bad UI/UX at the time, and nobody died, but there was never even talk of regulation. Credit Cards on the web got PCI DSS regulation because money, not because of people dying. Money has to be tied to "things like performance, compatibility, accessibility, security, or privacy" not lives.

https://en.wikipedia.org/wiki/Payment_Card_Industry_Data_Sec...

Also, the recent Boeing 737 Max scandal resulted in a payment but no regulation.

https://www.justice.gov/opa/pr/boeing-charged-737-max-fraud-...


Money and focus to improve SE culture can come from just changing civil liability doctrines, instead of gov’t regulation.

For example, guilty-until-proven-innocent (res ipsa loquitur) would make poor IT design and operations a lot more expensive, at the expense of a lot more court cases and a chilling effect on new software. For mature core operations, the price paid by society might be worth it. The IT industry would then be forced to come together and address issues at an industry level, if for no other reason than every player avoiding painful amounts of time and money spent in the civil justice system…which is a lot less fun than even the most torturous code reviews.

My sense is (IANAL) that the liability doctrines for software are stuck in the 1960s with all software viewed as a “best effort” sort of experimental exercise that must be protected from the rigors of normal liability. I think we are past that point for a lot of software components. But who in the software industry dare acknowledge it?


This piece is great overall, but:

> TDD[...] zealots are probably zealots [...]

Because they're sick to death of things breaking in production and having to program against interfaces that were designed from the implementer's point of view rather than from how a caller would view them. There are times and places not to do TDD; I'd say on the backend around 15% of the time, where exploration first is more important (e.g., some data science thing where you don't even know the shape of the data yet). But for run-of-the-mill programming TDD works, yet most people still don't do it even if they say they do.

I had my zealot phase on TDD, and what got me out of it was just resigning to the reality that the world isn't going to change. But that said, when friends / coworkers going on zealot style TDD rants I pour gasoline on that fire because I still enthusiastically agree with them.


What got me out of TDD zealotry is the enormous cost of really testing software well, I mean all branches, all sides of all conditions, large amounts of input space coverage. Then that coupled with the fact that specifications change constantly and relentlessly, invalidating some percentage of all of your cases.

I agree with the sentiment that got me into testing in the first place: software is complicated and hard to modify without breaking something. But, TDD for sure isn’t the answer to that problem.


Is mobbing really better than pairing? I've never done it. Sounds awful. I got into this game to have long stretches of solitary work, that's when I'm at my best. Not entirely obviously, I still work on a team. Anyway, pairing can work well under the right circumstances, curious to hear anyone's take on whether mobbing works or not.


I've done it, it's significantly better in terms of software robustness and having a good time.

You definitely get less code written than you would individually, but at the end of it everyone involved understands how the code works and why it conforms to the requirements which keeps your team's general knowledge high and turbocharges training for juniors.

It's also just pleasant to do a genuine team activity and have a laugh with people.

This is all coming from someone who greatly prefers working on their own to doing pairing etc, btw. It is nothing like the joy of getting your mind spinning at exactly the right speed, becoming one with the problem and smashing out a solution - so don't mentally compare it to that. It's much more like hanging out and brainstorming stupid ideas with your friends on the couch.


This is some of the first I've heard of mobbing.

Is it generally (or in your positive experiences) done remotely, in-person, or could be either?


I've only ever done it in person and I can't imagine it would work particularly well remotely. Maybe you could attempt to recreate the experience in VR but I don't think we have anywhere near the technology level to accurately recreate 7 people physically in the same space as each other constantly moving around, having side conversations and whiteboarding things


I've experienced it working well-enough in groups of 3-4, depending on everyone having decent internet at that time and on the software used. Just like regular mobbing IMO there's some circumstances where it'll feel like the right method, if it's a method you've occasionally practiced.

Here's some free marketing, I am not affiliated with any of these co.s: The best tool I've used for multiple users was CoScreen, because it shows everyone's cursors and allows simultaneous sharing of windows from multiple coworkers. So per that point in the OP that the major benefit is "trivia" knowledge, if you have one coworker who can quickly solve subproblem X from their terminal, and another who can quickly solve subproblem Y from their terminal, both terminals can be shared simultaneously and everyone can gain that context.


I would be willing to try what you're saying, but I don't think it would feel to me like in person feels. But maybe that's OK? It doesn't need to be the same experience as long as the outcome is the same.


Thanks, that's what I suspected!

That makes sense, the "constantly moving around having side-conversations and whiteboarding things" being a good part of it.

After reading your description, I could see myself liking in-person mobbing too, and it really being a different thing than "pairing" (which I don't love)... but trying to do it remote sounds nightmarish to me personally.


Bear in mind it needs to take place in an environment where people trust each other and feel safe. Nothing kills the mood faster than someone saying something like "that's a stupid idea Steve, what's wrong with you?"

I definitely think remote group programming is possible, but I reckon it'd be really hard to get right _starting_ remote, and it'd never be quite as good as in person is.


The way it is promoted is a bunch of people in a room. But, I've done something similar using VSCode Live Share and Live Share Audio and it worked well.


On the teams I've been on that mobbed extensively there were still people who preferred to work alone. Fortunately the other team members respected that everyone has a different preference and work style. Sometimes it's good to stretch one's comfort zone to try new things, and I'd encourage those who haven't tried mobbing to try it occasionally, but I wouldn't recommend it as something to do all the time. If it's a forced thing or something you do without question or exception, it'll easily become one or two people doing all the work with another just watching or joking around, and sometimes it's not easy to recognize who should be splitting off and who should be staying on, so often everyone stays on with different people alternately drifting in and out of an engaged, participatory state.


If you can do long stretches of solitary work without needing to ask questions of anyone else, do that!

But, I often work on features that require the knowledge and input of many people. I lose a lot of time getting blocked on simple things by not knowing what I don't know, figuring out what I'm missing, figuring what to ask, waiting for an answer, trying to implement something based on a brief reply, etc...

The few times I've managed to gather a group to do something approximating mob programming were incredibly productive. The whole time it was 3 people discussing the main thread of code and 1 person off on their own "fixing up some detail". Who was off on their own changed frequently. We were using VSCode Live Share and Live Share Audio.


It depends... if the mobbing is what my former agile consultants were pushing (having everyone from devs to business analysts do it), then no, it doesn't work. The few places where I have seen it work sort of well are when there is some big issue going on, such as a production outage, that can benefit from having a group of devs looking at it together. Aside from that, the level of engagement goes down the more people you add to the session, and it felt like there were diminishing returns on the quality-to-developer-count ratio.


This is my experience as well, for a large issue or for learning new tech (Let's say you have a new lib which everyone will be interacting with) it makes sense. For day-to-day it's at best a waste of resources and at worst an excuse for people to do less while getting the same pay.


I've done it with 4 people on a team where occasionally there would be one blocking story before anything else could move forward, so we'd all just jump on zoom. I was never the one in control. It was really hard to keep focused, but the task at hand got done, and it was easy for people to take a break for a few minutes while things continued to move forward.


Mobbing can be easier for people than pairing, and can be a way to observe the skills required for good pairing before being on the hook for practicing them.


This was the only statement in the article I disagreed with.


This is just an opinion piece labeled "truths", but I see a few things differently than they do.

8. Kinda. Only when the new ideas impact intra-team boundaries.

8.2 - No. Formal methods will one day be mainstream. We're still learning how to agree on what constitutes the optimal syntax for software. We are getting closer, then will come the optimal forms.

10.1 - No. The Unreasonable Man Theory applies here. It's not better for software that they have to encounter nor for the industry in general (not that I agree with any of the specific paradigms mentioned).

10.2 - I see it differently. Probably because developers require evidence that their life will be easier by adopting specific behaviors. That's hard to do over long periods of time, especially when the esoteric practices are both partly codified ad-hoc choices and partly useful. See item 8.

11. No. If your interviews are over 2 hours, your 'loop' is reflecting problems in your organization. When you hire an engineering manager, do they interview every single person they are going to be managing to see if they like them before taking their new job? C'mon.

12. No. Just because it's a problem doesn't mean it's solvable or needs to be solved. Laying effect at the feet of the cause is reasonable. This response is uncharacteristically unreasonable.

13. No. This has already happened (737 Max), so not even then.


> But especially so with me, given I care about weird exotic technology at the boundary of programming.

Smells a bit like a humblebrag. Here's mine:

But especially so with me, given I care about extremely high-Quality software.

> Also also also, most muggles don’t want to program their own computers, much in the same way that most muggles don’t want to do their own taxes or wire their own houses.

This is true, but I'm uncomfortable referring to non-techs as "muggles." Some of the least-computer-tech-savvy people I know, are doctors, lawyers, and scientists (and I actually know quite a few).

> We’re never going to get the broader SE culture to care about things like performance, compatibility, accessibility, security, or privacy, at least not without legal regulations that are enforced.

Unfortunately, I have to agree with this.


> This is true, but I'm uncomfortable referring to non-techs as "muggles." Some of the least-computer-tech-savvy people I know, are doctors, lawyers, and scientists (and I actually know quite a few).

Agreed. As a tech educator, STOP. I've been teaching people to use technology for decades, and, like with math, a huge stumbling block is "I'm not smart enough for this; only 'tech people' can understand this." And no, not everybody wants to develop algorithms or web pages (or fully wire their house), but most people can learn basics of how things work and look for ways to improve their experience.

Also that analogy doesn't work: most people don't want to wire their own house, but being able to do basic problem-solving on your electricity saves the homeowner time and money AND lets electricians spend their time on bigger problems. Same with taxes, as somebody who isn't an accountant but does do my own taxes: Not everybody will do that, but most people could definitely learn how things like tax brackets work.

This guy is a computer nerd. That's fine, but it doesn't make you INHERENTLY BETTER than people who are color theory nerds or whatever.


> > but I'm uncomfortable referring to non-techs as "muggles." Some of the least-computer-tech-savvy people I know, are doctors, lawyers, and scientists (and I actually know quite a few).

> That's fine, but it doesn't make you INHERENTLY BETTER than people who are color theory nerds or whatever.

Consider that this meaning of the word is only used by the antagonists in Harry Potter. Everyone else uses it in a neutral way, and Arthur Weasley even admires muggle ingenuity.


Oh man, a pedantic HP correction. That takes me back!

Unfortunately, that isn't the term's colloquial usage, and as a descriptivist, I fall on the side of how language is used by its native speakers. Rather like (staying in the realm of YA fiction) the Hunger Games as an actual big-screen spectacle was rather... uncomfortable from a thematic standpoint.


I'm not so sure it is. I generally don't see it used like that, and apparently neither does the author:

> This used to say “muggles” but several people pointed out that this has negative connotations. One of these days I should actually read Harry Potter


> We’re never going to get the broader SE culture to care about things like performance, compatibility, accessibility, security, or privacy, at least not without legal regulations that are enforced.

Which is why software as a skill in and of itself will go the way of mathematics. And then SE's might actually start doing real engineering.


Well if we're just gonna bash software guys here....

The arrogance stands out for me. I have worked in many different fields and with a range of engineers (EE, ME, CE); none come close to the general aura of big ego and arrogance around software guys. It's similar to the levels seen in finance guys. It's like there has to be some relationship between compensation and true value, where ego makes up the difference.

Of course there are the humble bunch in there, but god I have no idea how they get along with their colleagues.


This is simply because it is an intellectual field that is not subject to professional norms of cognitive workers. You will likely find, if you read period pieces or biographies of similar situations, e.g. 19th century EM engineers, that they too had an inflated sense of the scope of their understanding of matters.

This is likely (imo) due to the fact that mastery is gained somewhat independently and not subject to any pseudo-objective critical review (beyond 'my device x works!'), and no professional rigors are selecting for different (professional) mindsets. 'Those that don't know that they don't know' and all that.

To amplify, a cognitive worker in an established discipline is buried under domain details and painfully aware of the scope of their field. They specialize, and are certified for that specialization. Their ego is better informed about the realities of their extent of knowledge, and yes, they paid dearly to get schooled and certified for their narrow niche. (So you have the inverse effect of possibly insufferable attitudes of experts regarding novel developments and ideas in their specialization.)


> This is simply because it is an intellectual field that is not subject to professional norms of cognitive workers. You will likely find, if you read period pieces or biographies of similar situations, e.g. 19th century EM engineers, that they too had an inflated sense of the scope of their understanding of matters.

Engineering education in the 19th century was a crucible; only the best and brightest successfully graduated. The following excerpts are from a blog entry[0] on the book The Great Bridge.

In three years’ time he had also to master nearly a hundred different courses, including, among others, Analytical Geometry of Three Dimensions, … Calculus of Variations, Qualitative and Quantitative Analysis, Determinative Mineralogy, Higher Geodesy, …, Orthographic and Spherical Projections, Acoustics, Optics, Thermotics, Geology of Mining, Paleontology, Rational Mechanics of Solids and Fluids, Spherical Astronomy, … Machine Design, Hydraulic Motors, Steam Engines, Stability of Structures, Engineering and Architectural Design and Construction, and Intellectual and Ethical Philosophy.

A century later, D.B. Steinman, a noted bridgebuilder and professor of civil engineering, would write, “Under such a curriculum the average college boy of today would be left reeling and staggering. In that earlier era, before colleges embarked upon mass production, engineering education was a real test and training, an intensive intellectual discipline and professional equipment for a most exacting life work. Only the ablest and the most ambitious could stand the pace and survive the ordeal.”

… Of the sixty-five students who started out in [Roebling’s class], only twelve finished. And among those who did not finish there had been some rather severe breakdowns, it appears, and one suicide.

[0] https://philip.greenspun.com/blog/2006/01/26/favorite-extrac...


I wonder if it was the case in the previous decades.

I assume that a lot of this comes from the fact that computing is now more than trendy, and being knowledgeable on it makes you feel like king of the hill.

Also, lots of people are going into computing knowing only the recent history: strong opinions about things based on the last 10-20 years, choosing their little church, while having next to zero knowledge of earlier history (tech or even business in general) or of the fundamentals of computing. But they have enough data to flood you with their certainty :)


> I wonder if it was the case in the previous decades.

Less so, in my memory. Sure there was hubris. Sure there was conflict. But there was also a recognition that we were all small fish in a big pond. And there were still people around from when computing was an even smaller sliver of the economy, who tended to be more "grounded" than anyone you'll see today. Now that computing companies are among the richest in the world, the mix of personality types within them has changed a lot and not for the better IMO.


I'd add:

- innate difficulty: it's a bit harder to make an application on a 286 machine; you need to think more, and you can't clone a template and tweak a few things to get a fully featured platform

- also, more utility-driven: you can't lean on shiny visuals and presentation; the app has concrete steps toward its goals, and you get done quicker.


"The Unix philosophy of “do one thing well” doesn’t actually work that well. Dan Luu explains this better than I could."

The question is, what works better?

Nothing in software development works particularly well. But some things don't work at all.

"TDD/FP/Agile zealots are probably zealots because adopting TDD/FP/Agile/Whatever made them better programmers than they were before. I mean it would be better if they learned TDD/FP/Agile/Whatever without becoming zealots, and incorporated their ideas into an overall synthesis of a nuanced programming philosophy, but I don’t think most people are interested in doing that anyway. Possibly because “synthesizing a nuanced programming philosophy” only matters / is useful to a very small slice of programmers, because it’s honestly a kinda weird thing to expect tens of millions of people to be that weirdly obsessive about their job."

But it's not kinda weird to expect tens of millions of people to behave like zealots of a thing that doesn't work particularly well? (A non-zealot cannot work with a zealot without at least acting like a zealot. The true zealot won't allow it.)


> We don’t have the Alan Kay / Free Software / Hypercard dream of everybody having control over their own computer because integrating with disparate APIs is really fucking hard and takes a lot of work to do.

APIs and distributed integration are hard. But the modern API landscape is a long long long way away from what Alan and folks were shooting for. Sure, you get the messaging part, but that research team wrote a number of papers, one of my favorite being Design Principles behind Smalltalk (https://www.cs.virginia.edu/~evans/cs655/readings/smalltalk....). When I read through the headlines there, I don’t feel like the modern API space lives up to those principles very well.

To be clear, Alan Kay's entire hypothesis may be flawed. I'm not asserting either way on that issue. I'm just pointing out that claiming Alan's dream didn't work out because APIs are hard and we're miserable doesn't really follow, because there's so much lacking in the rest of the picture.


didn't work out because ... few inventors, lots of implementors.


My Own Addition:

14. Those who code for a living make good money.

I have a lot of friends who went into various fields, and it seems relative to the number of buttons we press daily, anyone who works on software makes great money. We didn't spend a ton of $ to go to medical school and 3 years of residency. We didn't go to law school and take the bar. Most of us went to 4 years, or less, of school, and here we are making great money by utilizing the greatest tool of all time: the computer.

It's uncomfortable because many of us didn't understand this going in. We liked computers for a long time, we had a penchant for working with them, and it turned out that made us great money. Meanwhile, our friends who are teachers, firefighters, child-care workers, etc. are doing tough work daily for less pay, perhaps simply because they didn't use computers much or didn't like them. And yet, their jobs are ESSENTIAL and we need people to do them even though they're grueling and underpaid.


You’re not wrong, but I’ll say the part about not going into it for the money is becoming less and less true all the time.

For developers who are ~35 and up, almost 100% true; once you get younger than that you'll find that there are a large and growing minority of people who are very much in it for the money alone, without any particular interest or affinity for computing. Their parents, advisors, etc. saw how much money was being made and pushed them towards computing as something more like a high-ROI trade.

You see the same done for specialized welding for smart, but not academically smart, kids. It's hard work to get into, but if you get there it can pay very well, regardless of whether you care much about the underlying metallurgic properties that you're dealing with or not.

This is completely fine either way, but it's important to realize that we're not really the profession of child prodigies that it once was.


Agreed. I spend a lot of time on TikTok (my main interests tech-wise are HCI, digital discovery, and social computing so it's like catnip) and a LOT of younger people are going into tech for the money.

I'm 33 and I started programming in the 90s (I was raised by two geek parents a la Larry Page who were supportive), so I'm sort of in-between the two 'generations'. At age 37 and above, pretty much everybody is either a career switcher (in which case they've considered things that aren't money) or started before developing got big. I say 37 because that was old enough to already be coding and to be brought INTO the money with job offers in the dot-com era. (I had a couple of offers from people who didn't believe I was 11/12, and I knew some people who were 16-19 who were leaving school to take REALLY well-paying jobs none of us ever expected.)

After that, it was well known that coding could lead to a lot of money, and as the sector continues to grow and other opportunities to escape poverty shrink, it's pretty inevitable that people end up in coding for the money. Especially since that's how we teach them: All the tech courses are about employment, not playing. Kids don't learn tech as a joy or a tool to make their lives better, it's just another corporate hoop.


What you're seeing is really the same pattern as the late-90s dot-com boom-bust. Then, too, TONS of people (and their sisters and mothers) went into the "Information Superhighway" to become "WebMasters" and develop "Web Sites" for businesses large and small, as more and more money plowed into those companies and a rush for talent (including not-very-much talent) was drawn into the field because of the $. When the next correction happens, and believe me, there will be another one (even if different), many of those same people will exit the field from layoffs or because finding another job in tech has become hard again. They'll migrate to another field that pays OK where they can find their thing.

And then, rinse-and-repeat, we'll probably see the same cycle again in the 2020s thru the 2030s.


Yes and no. I remember the dotcom boom-bust a bit, although I was a child at the time. I think a lot of crypto and web3 stuff is where the dotcoms were in the 90s: a lot of people chasing the money, aware that this is a Big Deal, but we haven't figured out the specifics, so everybody's rushing in because of first-mover advantage.

FAANG and dev positions in general, though, I think people are treating the way they used to treat being a doctor/lawyer/accountant: it's a good, steady living and it will always be around. That's very different from the dot-com boom, which was way more a gold rush, even at the entry level. ("They're willing to pay HOW MUCH for the stuff I was playing around with?") Somebody who started at Google in 1999 (or Amazon, or even FB in 2006) is very different than most people entering those companies now. The dot-com era was high-risk, high-reward like modern web3/crypto, whereas modern SWE positions are low-risk, medium-to-high-reward, which makes them great for people who are more risk-averse for whatever reason.


I’m 31 and at least in my age bracket most of the people I was in CS with weren’t the money crowd.

I’d believe it’s true now (I don’t really know), but it’s likely more recent than the OP thinks.


Post-2008, I'd say. I was a 2010 graduate, and the change from 'do what you love' to 'College is for MONEY' was like a culture bomb going off.


I think the real calculus should be: do something that interests you, but also consider ROI and whether the education cost is worth it.

Most of the people I know with the worst financial outcomes overpaid to study stuff they didn’t care much about (and isn’t marketable or often even valuable) because they thought it was what they were supposed to do.


Eh, I'd go beyond that and say invest in yourself first and foremost, but I've been VERY SCREWED after trying to make decent financial choices.

1.) I was into computers and programming in the 90s but I was too young for the dot-com boom, so by the time I could work we were in the bust. So computers weren't a 'safe' ROI.

2.) So I went into languages and wanted to be a diplomat: I studied rare languages to be more valuable (Arabic and Chinese), went to the Middle East, and planned. Except that it turns out that basically required either law school or a bunch of unpaid internships I couldn't take since this was post 2008 and everything was hell/the competition was insane, so I dropped that plan and continued library work.

3.) I recovered from losing that dream and went to grad school for Library and Information Sciences. I did a focus on assessment/metrics/KPIs + tech (so databases, HCI, and UX research) with the intent of pursuing an academic librarian career and building academic tech tools on the side. My first job out of grad school was at an Ivy. I got what ended up being MS my last semester of grad school and had to quit to see to my health.

4.) I finally managed to find decent employment and get health care (MS drugs cost 300k/yr)... in 2019. Then COVID. Luckily I'm still employed, but it did throw out ALL of my career planning post MS-diagnosis.

At this point, I've just given up on planning. It's worthless. The minute you or someone you love gets sick, or you get a divorce, or a disaster hits your house (which you won't get help for), you're going to lose it all anyway. I'm just picking up as many skills as possible and hoping at least one basket doesn't break.


Plans are worthless, but planning is a useful exercise.

I can't find any agreement on who first said that, but it appears that several great generals have said something like that.


This is a good point. I should say I've given up on OCCUPATIONAL planning. I'm very big into community and skill building plans.


I think to some extent there are always unknowns and there's a risk you'll choose wrong, but it's worth trying to choose correctly. In hindsight, from your example, it was a mistake not to do computers; your ROI calculation turned out to be short-sighted because of proximity to '99. Same thing might have happened to me had I been there at the time, it's hard to make the right choice in that kind of environment.


That's ignoring the cost of the research and planning, though. Even if that cost is only time + brain power that could be used on something else, in opting to engage in occupational planning, that time isn't being spent on something else.

> Same thing might have happened to me had I been there at the time, it's hard to make the right choice in that kind of environment.

I'd say for most people (but not most people at HN because the tech community is pretty unique in terms of labor relations), occupational planning is more akin to tech career planning in '99, where there are numerous big unknowns hitting people constantly (COVID, 08, etc.). Our informational environment is TERRIBLE for building a decent ROI model. Doing so would take the average person so much time to research and build that I think right now it's better to spend the time either:

a.) Planning outside of the current systems and their incentives. For example, I'm supplementing my conventional retirement savings by handing out a LOT of free money locally. Even if the stock market crashes in my 60s and I lose everything (which, based on my lifetime, will probably happen bc it always does...), hopefully there will be at least a few people/families/organizations around that view me kindly and will help (if 2000 people give me a dollar a month, I can make rent). This is also likely to be true even if society completely collapses, provided I'm alive.

b.) Focus on building better heuristics for reacting to sudden change and pivoting. Our culture and society doesn't care about being stable and will always change to chase the latest profit/dopamine hit, so instead of making plans that require and assume stability is possible and that the past is predictive, it's better to accept the future is NOT predictable and spend one's energy on building a great skill set to be flexible. Bending instead of breaking.


You didn't consider TLAs with that language background?


I did! I couldn't force myself to go through with it, honestly, plus I needed a little more experience with the languages which I couldn't afford; I couldn't travel or hire tutors; I was trying to live on 800/mo with no external support. I was sleeping on a mattress I dragged home off the street. I had no office clothes, no non-library work experience, nobody in my family who had a college degree to help me with the cultural expectations. I didn't even have a way to move to DC and back then you pretty much had to do that. I almost did a Master's just for the funding to move/live in DC but I researched that and was like 'that would be financially stupid'. So I didn't even though WITH MY SKILL SET it would have worked, because I read too much generic advice and had nobody to guide me.

That CIA signing bonus was REALLY tempting, though, I'm not going to lie.


Yeah, graduating at a time when even a "useful" degree couldn't get you a job will do that.


Yup. I even planned out my debt at the age of 17. When I selected a college in 2005, I would have graduated with 8k in debt. Quite reasonable.

The crash my junior year and the resulting divorce of two of my parents meant over half my undergrad debt was from my senior year. I had to choose between doubling my debt and dropping out with a year left.

And I also effed myself by working my way through school to try to keep costs down/take out less debt. It meant I had no money when I graduated so I couldn't even afford things like interview clothes for basic office jobs and I had to take the first thing I could get. In retrospect, working on tech projects in my spare time would have been a better FINANCIAL option than working a job for 7.25/hr, especially since this was '06-'10 and I was interested in things like NLP.

I just don't care anymore. I tried to be responsible, it didn't work, and now I just don't care.


I had a very similar path to you. I felt like I had been responsible and made it all the way through college. Then I had $75k in debt and I was going into $8/hour interviews with 40-60 other applicants.

One time I stopped at kinkos to print out some portfolio designs for an interview, and the cost was 6 dollars. I had to leave because I only had 3 dollars.

Things are much, much better now that I transitioned into tech. I kinda did it for money but I'm lucky because I do genuinely love it - I wish I had started it sooner in life.

I'm curious - where did you end up? If you're on here, I assume you're in a STEM field of some sort?


I'm actually in political communications; I leaned heavily into the rise of identity politics. (I'm a gay disabled female. Not great in early 00s tech, especially as a teenaged girl, but VERY good in certain spaces). I do tech stuff for non-techies, and it works pretty well, especially as there's more crossover between politics and tech. (I had to explain rms to a bunch of policy wonks the other day, that was weird).

I just missed talking about tech and was sick of getting blank looks when I rambled about how Page Rank is a mistake.

Oh, and regarding HN, I've been here on and off lurking since it was founded in 2007; I just thought I was too stupid to contribute. Now I'm old, jaded, and realize we're all stupid.


Yep, exactly. I graduated in 2008. The market crashed, I had a useless degree, and by then knew that a very significant portion of my money went to administrative positions at my college.

It definitely left me, and likely most other people, wondering why on earth we shot ourselves in the foot with the monetary equivalent of a mortgage at the dawn of our professional lives.


These two points, yours and the parent, should probably be #0; many of the other "uncomfortable truths" follow from them.


That's true, but they also often leave.

I've taught software to three[0] people a little younger than me who all wanted to do it for the freedom and financial security, and two of the three transferred out of software to something else in tech, like product management. It really bummed me out too, because it was the two women out of the three. Their lives are still markedly better than before, but I want more women in actual software.

[0] Well, more than that, but the others gave up on the first week or two.


I don't understand this weird fetishism with getting X demographic into software/tech. Naturally I don't think that any demographic should be restricted from entering the field, and everyone should be equally encouraged and provided the same opportunities, but more and more I see these pushes for trying to pump the numbers of specific demographics. If someone wants to join the industry, great. If not, also great. I think that to an extent these movements do more harm than good. Women should absolutely be made aware that tech is a viable career path for them as much as anyone else - yet it seems that as of late there's a sort of societal coercion that mimics that historically seen in gender dominated industries like welding for men or nursing for women.


I think it's based on the assumption that if all cultural factors were removed, an equal number of men and women would be in software engineering. And so, by modus tollens, the current gender ratio of the industry is a damning indicator of how much cultural bias still needs to be fought against.

Personally I think that's a flawed assumption. While there are certainly cultural biases at play still, there are also aggregate differences between the genders that most likely means the "natural" gender ratio will still be skewed towards men.

This is unfortunately a taboo topic to make note of, and is absolutely not up for discussion in most workplaces (see the Damore scandal), and so the fetishism you note persists.


It’s definitely a fraught topic, but if you go in using loaded language like “fetishism” to describe a position that many people have rationally arrived at in good faith, then yeah people are going to call you a jerk.

Part of the reason that I don’t think your paragraph 1 assumption is flawed is that there is evidence that men and women were equally interested in programming before the 80’s, but that a combination of cultural and societal factors came into play in that decade that branded computers as a “boy thing” and resulted in women being pressured out of the pipeline from a young age.

But even if you can find some sort of counterargument for that, my overriding reason why I stick with this assumption is as follows: decade after decade, people insisted that women weren’t into $career. And one by one, women have found their way into those careers (factory work in WWII, the executive suite more recently, plenty of other examples in between). In each case, the people saying women weren’t into $career had an argument that seemed to make sense, often relying on some combination of status quo statistics and hand-wavy conclusions drawn from psychology. And they were wrong. So when I see those same arguments brought out again for this career (often with some exhortation that “this time is different”) I generally dismiss them, as the epistemology they’re based on has a bad track record.

With all that being said, when it comes to actions, I’ll be more likely to encourage women to get into programming if I get a sense they might be interested in it. However, when it comes to interviewing candidates, I will not grade people on a curve based on their identity, as it is my belief that men and women can in fact be equally good at programming and should be held to the same standard.


You're correct about my language choice; I could (and should) have used more neutral language to describe my position. While I do believe that there are some fundamental differences in the interests and abilities of the genders (neither for better nor worse, they just are), I also agree that it's a slippery slope that has been abused in the past. Societal pressures and conditioning certainly play a role; however, I think if you look at the lower end of the job market the difference is clearer. Women tend to dominate retail work as well as waitress/hostess positions while men tend to dominate more labor-intensive positions like construction, welding, modern-day factory work, etc. Is this due to some rampant discrimination in those fields or is it due to inherent biological/psychological differences in men and women? I'd be inclined to say the latter. There are no pushes happening at the bottom end of the job market to lessen the gender disparity because those aren't as desirable positions, so no one really cares that more men work in construction and more women are servers at restaurants. However, when the position becomes more prestigious with higher pay, it's suddenly a huge issue that a disparity exists - and obviously it's due to discrimination and has nothing to do with what each gender is naturally inclined to pursue or enjoy.

You're right in that it's important not to automatically ascribe any demographic disparity to fundamental differences between the genders, but it's equally important to not automatically ascribe that difference to discrimination without considering those fundamental differences.


I disagree about the “equally important” stuff, and here’s why: If I’m wrong and the overwhelming majority of women really don’t like programming, I’ll have wasted some of my time, but no one will have been materially harmed[0]. However, if I am right, then there are currently a large number of people who are being pushed away from a career or hobby that could increase their overall happiness and/or standard of living if they were not being pushed out of it, and I see that as a potential tragedy.

Given how slanted these stakes are and how many times society as a whole has underestimated women as a whole, I think the right answer here is to err on the side of trying to get women and other underrepresented minorities interested in tech.

[0] bear in mind what I said about the particular way I’m putting my belief into practice and that I do not believe in “reverse discrimination” to even the scales.


That's fair. I do believe that in an ideal world, everyone's barrier to entry into any profession would be equal, removing all factors except merit. While that's obviously not possible, I do think it's worthwhile to work towards that. I also think that it can be worthwhile to make it known to women that tech and other career paths are viable and not out of reach for them - as I'm sure some women have just never considered it, just as I have never seriously considered becoming a nurse. However, I think that's where it should stop. Make it known that it's an option, make it so that if they choose to pursue it that it's no more difficult for them than it would be if they were a man, and then let it be.

Anything more and it becomes the individual(s) behind the movement projecting their desires onto others, and is no different than the societal molding that's taken place for the past several decades.


I'm only interested in why you believe what you believe. How smart are you? What do you value most? How truth-oriented is your epistemology?

I'm glad you don't discriminate (nothing "reverse" about it) against men, but many do, and since [your views are wrong](https://www.amazon.com/Essential-Difference-Female-Brains-Au...) it's tantamount to sex-based theft and redistribution.


If you're wrong, surely you'll have materially harmed some people by encouraging them to do something they don't like?


The unstated major premise here is that software engineering work is valuable/fulfilling/high-status, while other work is not---so that pushing someone naturally suited to software engineering away causes harm, while pushing people more naturally suited to a different field into software engineering causes less harm.

I don't think your premise is wrong, but I think it does explain why computer science is particularly focused on broadening participation, compared to other disciplines like nursing, teaching, etc. Computer science is currently (for better or ill) a golden ticket to economic mobility.


> there is evidence that men and women were equally interested in programming before the 80’s, but that a combination of cultural and societal factors came into play in that decade that branded computers as a "boy thing" and resulted in women being pressured out of the pipeline from a young age.

The nature of "programming" as an activity also changed quite a bit.


I think the general perception is that women often get turned off not by the work, but by the male-dominated environments. This can be due to subtle issues and sometimes due to non-subtle issues. You want to go out of your way to make it a friendly environment in those cases. Basically, do you want women to have the same shitty experience when they were breaking through various male-dominated fields in the past (e.g. medicine, law, business, etc) or do you want to be a welcoming host? It seems like as a profession we're still in the shitty part of the spectrum based on the horror stories I hear from my female friends in tech.


The issue is the same in reverse. I've heard horrible anecdotes from male friends in female-dominated industries such as teaching, nursing, etc. Why is the same societal push not happening in those industries? I never said we shouldn't strive to be welcoming and to cultivate a friendly environment for all, just that the intentional push for more of any specific demographic seems strange to me. Shitty people are going to exist everywhere regardless. A twitter campaign shouting about the injustices of the industry and pushing for more women in technology isn't going to change that. The people that need to hear the message being conveyed there aren't going to listen, and the people who are listening don't need to hear the message because they likely aren't shitty people anyway.


While there may not be similar pushes in other industries, there is no reason to dismiss the efforts to encourage participation in software based on a lack of effort in other fields.

Implicit and explicit biases exist everywhere. My wife is in teaching, and would love to move into administration some day. The problem with that? Despite women being the dominant sex in the actual teaching position, men continue to fill administrative (i.e. principals, deans, etc.) roles at far higher rates than women. Why are men filling leadership positions more than women, when they have less experience and participation in the field overall?

I guess my rant/point is, each field will have their own unique battles with inequality and/or lack of participation from various demographics. We can't compare apples to oranges.


My point is that societally we should allow anyone to enter any field they desire with minimal barriers halting their progress so long as they are able to do the work - but we shouldn't try to force it just because we feel like there should be more women or more of whomever. Just because there is a disparity in the demographic make-up of a field doesn't automatically mean that its root cause is some negative bias or discrimination. Maybe more men fill administrative roles because more men pursue educations in business administration and leadership, whereas a teacher can be an excellent educator but a not-so-excellent administrator if they have no relevant experience. Does your wife have any experience in administration or any credentials that would show her to be qualified for the job? If not, maybe that's her issue - not sexism. That's like asking why great software engineers sometimes never rise to staff/management positions - because they're good at writing software, not managing people and organizations. Is this bad? Why? Because administrative roles are generally better paid and more prestigious and more women should have that opportunity? Why not encourage more women to get into masonry or welding? Where's that movement? Has it maybe not happened because those aren't well-paying, desirable jobs for most people? Do more men occupy those industries because of the nature of the work or because of some insidious rampant sexism? I'd be inclined to say the former. If this were actually about equality, it would be happening all across the labor market and not just in a certain subset of high-paying fields.

Some of these disparities can be traced back to social conditioning and discrimination, but it's also in large part just due to inherent differences in the interests and abilities between genders. It's been proven time and time again that the genders are often fundamentally different in terms of abilities and innate interest. Neither gender is better than the other, but we are different, and denying that does more harm than good.


(Rereading, I'm not actually sure we disagree, but the tone of the parent post seems to be that I can't change things because people won't change, ...?)

I'm in tech, I work on fixing tech.

> the intentional push for more of any specific demographic seems strange to me

I believe the skewed demographics are a reflection of actions and pressures I don't agree with. I think there is pretty-open discrimination and steering of qualified candidates away from the field. (Read the news, sadly.) I'm morally against that. The demographics are an imperfect measure of our progress, but in my opinion it's a useful indicator.

> Shitty people are going to exist everywhere regardless

Yeah, and in my workplace I'm against shitty behavior.

Without speaking to whether this is true or not, the parent's argument seems to me to be a kind of "whataboutism".

I don't choose to spend my time working to improve the work environment in other fields, not because I don't care but because I can't do everything.


I'm unsure what you're trying to express in this reply. It seems we do agree in general viewpoint, just perhaps we have different ways of expressing that view. Discrimination definitely happens, but I have yet to see one example of "pretty open discrimination" that hasn't resulted in an all-out PR disaster for the org involved; your "Read the news" statement supports that. If discrimination were a common occurrence it wouldn't be newsworthy. If anything it seems that discrimination is beginning to become more frequent in the other direction, where certain demographics who are less qualified are hired over other, more competent, candidates simply for the sake of "diversity". Not to say that's very common either; both are IME exceedingly rare. Glad that you're against shitty behavior, I am as well. Most people are. That doesn't have much to do with my comment though.


My close friend interned at a small embedded devices company and was the only woman besides a part time HR employee. People made comments about the HR employee when she wasn't around.

Is this "pretty open discrimination"? It certainly doesn't make the cut for news. But yes, it's very uncomfortable to be the only woman in that situation; people will look at you for approval for their comments.


People talk about other people all the time - regardless of gender. I would guarantee they talked about male employees as well, but your friend likely didn't mentally catalogue those discussions because they weren't relevant to her. Confirmation bias is very real. Maybe this one specific anecdote you're providing really was an example of rampant discrimination - I don't know - either way it certainly isn't indicative of the entirety of the industry. I could share many anecdotes of my own that I could interpret as people discriminating against me specifically - but I doubt that's the case. Instead it's people just being people. This is another issue: I think that these movements often make women and minorities attribute "normal" negative behavior to being explicitly driven by discrimination or x-ism rather than it just being par for the course.


No, discrimination and x-ism are not necessary to make an environment that women don't want to work at, only sufficient.

People being regular-shitty, combined with context, like making comments about the appearance and intelligence of the only other woman in the office, can create an inhospitable environment that forces women out. Companies that wish to hire and retain women should proactively create a hospitable environment or situations like above will occur.

The bigger thing is that it seems like you don't see a return on investment in creating these better environments. Do you feel that diversity initiatives are wholly unnecessary or just poorly designed/targeted?


As I've said multiple times in this comment section - I'm all for creating an accepting and inclusive environment. Diversity initiatives aren't that. I think that explicit diversity initiatives are indicative of a bad work center that can't attract or keep their desired talent without explicitly targeting specific groups. I don't think that these diversity initiatives do anything to really affect change in an organization's culture, and they are PR stunts to make people feel good. At worst, they're harmful to everyone - including minorities. At best, they're a waste of time and resources. If your organization has actively promoted a toxic culture from its inception and just now wishes to change things, then it has a lot more work to do than a "diversity initiative".

Regardless, my original stance was more aimed at the societal push of "get x demographic into tech". I'm not going to retype my thoughts on that - go read the other comments of mine in this comments section to get a clearer picture.


The “news” can be considered an iceberg situation. What you see regarding discrimination are just the salacious cases that capture headlines. There's so much day-to-day discrimination and sexism that doesn't hit the papers. Consider that it took almost a decade, if not more, before we started hearing about problems with Google execs, and Google is one of the companies where such problems are most likely to grab headlines (executives being involved being the salacious bit that grabs headlines).


Are you sure the current campaigns aren't effective? The demographics are indeed changing. I know causation correlation blah blah, but we're just speculating in this thread, supplying no data anyway.


I'm sure they're effective to some degree in terms of recruitment numbers. What I was alluding to in that comment was that the current campaigns likely won't dramatically reduce the number of "horror stories" we hear - which is the crux of the entire issue. The people that are the antagonists in those horror stories tend not to change their ways based on social media movements. Whereas the people who are interacting with and encouraging those movements would likely never find themselves in the antagonists' shoes anyway. It's a very small subset of the population that would actually see those movements and decide to change as a result, and that's better than none, but it's certainly not what I think most people supporting those movements wish to see as a result.

Edit: grammar


> The demographics are indeed changing.

The gender ratio isn't changing though. If anything it is becoming worse: fewer women go into software engineering today than did decades ago.

The only thing that changed is that now every company tries to put their women software engineers front and center, in their marketing material, to go speak at conferences etc. So it looks like there are more women, but there aren't. Tech companies also try to improve these statistics by bundling non-programmers into this category, like product and project managers.


This page suggests otherwise (although the effect is slight): https://www.zippia.com/software-engineer-jobs/demographics/ Do you have other data?


That page suggests the share of women got lower, yes. The data points with 5-year gaps:

> 2008 Male: 72.33% Female: 27.67%

> 2013 Male: 73.44% Female: 26.56%

> 2018 Male: 73.82%. Female: 26.18%

The year-by-year numbers are too noisy to really mean much. 2018 being 0.3 percentage points higher than 2016 doesn't say much.

Anyway, the main point is that the gender ratio isn't really improving. At best you can say that it has stagnated, rather than getting worse as it did a decade ago.


The gender ratio is changing for the worse. In 1984, women made up something like 37% of CS students. Now it's under 20%.

Reference: https://jaxenter.com/wp-content/uploads/2017/04/women-in-com...


Interesting. Thanks for the data. I wonder how that affects jobs.


It’s hard to know if an environment is hostile to you or not if there are no examples of safety. For example I would hesitate to join a magic the gathering group if I didn’t first check that they’re friendly to newbies. Especially because I know magic the gathering can vary in experience and that a group that isn’t friendly is not an outlier. If I think about it this way I can understand why a woman might not want to join a company with no other women engineers; how is she supposed to know she’ll be treated with respect, and won’t be pushed to something like management or product, if there are no examples to go off of? It likely worsens broadly if a woman knows exactly 0 women who are engineers as she grows up and considers a career for herself. She might not know any engineering is a viable path.


> It likely worsens broadly if a woman knows exactly 0 women who are engineers as she grows up and considers a career for herself. She might not know any engineering is a viable path.

This was why I didn't get into STEM initially. It's not that I thought that engineering culture was toxic per se, but rather because the women I knew growing up didn't have careers, much less careers in technology.


> I can understand why a woman might not want to join a company with no other women engineers; how is she supposed to know she'll be treated with respect

Hate to say it, but 0 female engineers _is_ evidence...

The assumption going in changes from "this is going to be ok" to "what is wrong with this workplace?"

(Of course this depends on team size, etc., but this is my own personal read going in.)


I don't disagree at all with making programming welcoming to newbies, but if Magic the Gathering were something you were really interested in picking up, wouldn't you look around for a newbie-friendly group to join instead of just giving up on the game altogether?

How many of us started tinkering with programming at a young age out of pure curiosity or interest in how cool it is, without any regard for how viable a career path it is later in life? It seems most of us who did just that were boys. At some point you have to consider that there's perhaps an innate aggregate imbalance in how interested each gender is in programming, or in how much each gender responds to financial incentives -- which would explain why countries that are more gender-equal and more prosperous tend to have a greater degree of gender imbalance in STEM.


> I don't understand this weird fetishism with getting X demographic into software/tech.

Perspective is valuable, and like it or not various demographics have inherently different perspectives due to a variety of factors. We need conscious effort to include more demographics because our industry is dominated by a very small number of them and therefore its culture is heavily biased towards them.


This is a valid point, and I don't disagree that perspective is incredibly valuable. I just don't know if we've been going about it the right way. If someone is interested in technology, then we should make sure that it's as accessible to them as it is to anyone else. Yet, along with varying perspectives among varying demographics comes different interests and aspirations. In the process of pushing what we think other demographics should be aspiring to do are we not drowning out their own interests and aspirations? Are we not simultaneously warping the perspective we seek into one that more closely aligns with our own? It's 2021, anyone can google "well paying careers" see "software development" on the list and decide to pursue it. It should happen naturally, we should absolutely facilitate and encourage that natural convergence, but I think the artificial push is just as harmful as it is helpful.


What I always tell people thinking about a software engineering career is that, yes, it's great money but I strongly advise only doing it if you actually like working with computers. Otherwise it's going to be a nightmare. Working as a software developer without a passion for coding is kind of like doing math homework every day for the rest of your life.


> Working as a software developer without a passion for coding is kind of like doing math homework every day for the rest of your life.

I disagree. I coded for fun once, as a kid, and have never felt the need to do it again. If I never coded again, I wouldn't even notice because my passions lie elsewhere. But working as a software developer is still tolerable because I'm just sitting at a desk, typing on a computer. As long as I don't have meetings...


Are you above 35?


Not that poster, but I am over 35, and I hate computers. Fucking around with computers without being paid to do it is just about the last thing I'd want to do. They're miserable time-sinks unless you've got a strong need for something only they can do, or you just like tinkering with them for the sake of it. Which, I did enjoy once upon a time, but not anymore, and I don't have any hobbies that really need computers beyond some light use of existing software, and even those are mostly just nice-to-haves. Everything else I might use a computer for that'd require real work to set up handily fails the XKCD software ROI chart.


I feel lucky that at 44 I still get a thrill when something runs as expected for the first time or when tests go all green. I still enjoy working with software in the general sense and learning new things in my areas of interest. Organizations, however, and the environments that they create (both human and software) are a different story ...


I'm curious to learn how that is relevant to my post and what assumptions you make about people around this topic based on age


The parent comments were talking about those older than 35 getting into the field because they loved it, and those under 35 being more mixed. Wondering where you fit into this worldview.


> doing math homework every day

Sigh... if only there were a way I could get paid for actually doing math homework every day...


become a math teacher and never look in the back of those cheater "teacher's edition" books!


I don’t really agree with this. I don’t love coding but I think I’m pretty good at it (and all the other parts of being a software engineer). I am still very happy I chose this job even though coding is “meh” for me in terms of enjoyment.

I don’t think this is particularly unusual either. Lots of people don’t code outside their day job, even the really good ones, IME. And look at other jobs - I doubt most plumbers, linemen, bricklayers, etc really find their job enjoyable. But they might still be happy with their career because they don’t hate it, they’re good at it, and it pays the bills.


Perhaps this will paradoxically be a good thing. I have for a long time been wondering whether, or to what extent, the reports of ageism in the industry are well founded.

If it turns out that hiring older developers gets you a much higher likelihood of getting someone who cares, this might change matters significantly (assuming of course that there actually is a problem in the first place).


I expect this to change radically as the entry level 6-figure crowd gets older. Speaking for myself, my ability to care about my work is inversely proportional to my retirement account balances, and I want out asap. Software development and the internet in general isn't fun any more and there are things I'd rather do.


I'm in my 40's and my graduating comp sci class was the biggest in history at that time because of the money people were making in the first dot com boom.


Not sure how it is today, but for me in the mid-90s, there was a lot of interest in software and computer degrees, but the curriculum was sufficiently difficult to filter out the posers. I went to a huge state school, and our first year Comp Sci class was, not exaggerating, an auditorium-sized lecture of 1000+ students dreaming about becoming a Microsoft Millionaire. Most of the students didn't make it through hello_world.c. We also had requirements like Linear Algebra, Matrix Theory, MOSFETs, Linear Systems, and Signal Processing. The electives were all very tough, too. By the time we graduated, there were about 30-40 of us. So, there was a lot of interest, but most didn't actually graduate with the major.


There was a bit of a downswing in CS degrees around the first DOTCOM bust; a lot of folks left programs at 2nd-tier colleges and moved to adjacent fields that were less "software" and less "subject to outsourcing", like industrial engineering and electrical engineering.

I didn’t exactly have a broad view at the time, but it seemed like top ~20 Comp Sci programs were largely unaffected and kept growing the whole time.


I see nothing wrong with this.

Very few people truly have a passion for their work, and for most, it's just a means to an end. This has been true for all other "high paying" professions such as law and medicine. One of my best friends is a lawyer and he happened to mention that no one really gets into law to fulfill some higher purpose.

Further, I don't think it really has an impact on the quality of the work they produce; you don't need a spiritual awakening to build good systems. You just need to be vigilant and gain experience.

As an older engineer (is mid thirties old?..) I've grown out of this and now look elsewhere to nourish my need for the numinous, in things like philosophy, art, music, relationships, etc. When I was younger, I used to naively think my identity was a "Software Engineer"; now looking back I chuckle at that outlook on life.

On a final note, I would argue that more people joining our profession is a _good_ thing. It will create more jobs, increase demand and we end up with a stronger workforce.

I don't think gatekeeping a field leads to a healthy outcome.


> For developers who are ~35 and up, almost 100% true

I'll come up with one of the few exceptions to the rule, i.e. me. I'm in my very early 40s and I went into it for the money 20+ years ago; I hadn't even played with a computer as a kid or as a teenager, for the simple reason that my parents couldn't afford to purchase me one (and I wasn't really thinking of becoming a computer programmer back as a teenager, I found the work boring).

Pretty soon though I discovered that I liked this. I even, in a way, avoided a career path that would have brought me into larger companies and so into management (I think it's harder to remain a computer programmer in a larger company once you pass a certain age, though I can't explain why that happens).

Either way, I'm almost 100% sure that no one can spend 15-20-30 years in this job if they don't like it, i.e. for the money alone; at times it can be really draining on one's mental condition.


...once you get younger than that you’ll find that there are a large and growing minority of people who are very much in it for the money alone without any particular interest or affinity for computing

That was true 35 years ago, and without the metric butt-ton of money being paid now (though programming has always paid well within my lifetime). I've worked with those folks. The reason you might not see it is because they either left the field they hated, or worse, they stuck around long enough to become managers of what they don't understand.

it's important to realize that we're not really the profession of child prodigies that it once was.

Mmm, I don't know that it ever "was". Ignore the Carmacks, Wozniaks, et al. of the world and you'll find that most folks holding programming jobs had the standard four-year degree (because you were unlikely to get hired in 1985 without one), working on internal LOB apps.


Unlikely to get hired? Not in my experience; not in 1985. A healthy fraction of the folks working in Silicon Valley were musicians! Certainly not many were degreed in computer science because very few people were back then.

So what brought them to Silicon Valley? For a great many, love of the machine.


> For developers who are ~35 and up, almost 100% true, once you get younger than that you’ll find that there are a large and growing minority of people who are very much in it for the money alone without any particular interest or affinity for computing.

Just to add, during the dot.com boom there was a brief spike of people coming into software purely due to the hype of riches without much interest in software development. Thankfully the dot.com boom was relatively brief and it crashed hard, so from what I saw then most of those people got disenchanted right away (by the time they graduated the crash was happening) and changed careers.

This time the boom has been going on for basically a decade, so it is becoming more common and lasting.


It feels to me like most people no longer find fulfillment in their work, regardless of what it is. It really seems like almost everyone is in it for the money and couldn't care less about the lack of craftsmanship.


A lot of that seems like it comes from the companies or industries not seeming to care about the worker: bad work environments, precarious jobs, stress, working on something commercially useful but not otherwise meaningful.


I liked tinkering with computers all the time before I started working as software engineer. But after over one year of working I started wanting everything to just work so I don't need to do anything with it...


It is hard to imagine e.g. Google devs are enthusiastic about manipulative ads or blocking parts of Linux from developers in a software lock-in scheme.


And that's great. High wages mean the market is signaling a shortage of workers. That more people of lower productivity fill the gap means the market is working in a healthy manner.


>And yet, their jobs are ESSENTIAL and we need people to do them even though they're grueling and underpaid.

I think this touches on an uncomfortable point that many people don’t acknowledge: our current system generally* pays people according to how much they contribute to the economy and not by how much they contribute to society.

The scalable nature of software means developers can disproportionately add value to the economy, even while sometimes simultaneously working on projects that could be arguably a net negative for society.

* yes, we can all point to people who get paid handsomely without adding anything substantive, but general rules are meant to be applied, well…generally


To be more precise, it pays people for their differentiated contribution to the economy. If there's no one to open the office doors in the morning, productivity is 0. Does that make the security guard the highest-paid staff? Of course not, because he's easily replaceable. A developer whose absence will only lose the company $1k a day may be paid $500 a day, because the few people who can bring in that $1k all want more.


At the risk of adding to his number 12... this is capitalism.


I actually think there’s a more nuanced perspective. Capitalism just means we can monetarily reward what we value. Capitalism is a means, not an end. So if the GP has an issue with what is being observed, I think the author’s remark of “this is capitalism” stops short. A better question is what is at the root of one position being valued over another? E.g., if we really value teachers as much as we collectively say we do, why aren’t teaching colleges the most competitive and their positions the most valued?

The U.S. often sends its best and brightest to tech, consulting, law, and medicine. Other societies send theirs to education. Both can occur under capitalism. I would argue it's not "capitalism" but that the economy is what's most valued. Capitalism could just as well make social workers the highest-paid individuals if there were a different value construct.


I've felt for a long time that the reason software is not fulfilling its promise is because it has been tied exclusively to capitalist goals. The incentive structure is completely broken.

I think a balance probably exists, but we are badly missing the mark.


Do you think that is still the case? I can think of all kinds of applications where software can contribute to non-profit driven goals. Healthcare, government, energy/climate, etc.

I think the problem is it becomes hard for some to take those jobs when they could instead work in, say, social media or finance for many multiples of the pay.


The wages for software developers were basically stagnant from 2001 to about 2014 or 2015. The big surge in pay is relatively recent. While this is a complex subject, I have written about a few examples, and can offer more. But take this as one example:

Right now the best devops people in New York City are making between $200 and $300 an hour. That's obviously good money, but in the 1990s I had older friends who were making $200 an hour as Oracle consultants. At the top, the nominal pay has remained the same, but adjusting for inflation, that is a 50% pay cut.

Likewise, lower down, I know lots of mid-level devs who are happy to make $100 an hour right now, but I also had older friends who were slapping together classic VBA apps with Visual Basic back around 1999, and they were also making $100 an hour, so, again, it's a 50% pay cut.

But it is absolutely true that pay has been surging since about 2014/2015, so I imagine the software developer pay is about to catch up to the highs of the 1990s.

It's also true that the picture only looks bad when compared to inflation. To your point, most other professions did even worse, so the benefits of being a computer programmer are more obvious if the comparison is to other professions.

All in all, after the peak in 2000, the USA economy had a difficult 15 years.


>But it is absolutely true that pay has been surging since about 2014/2015

Only partially true:

1. High total compensation (not salaries) is limited to FAANG/Unicorn employees (awash with money)

2. That high compensation has been achieved by stock options multiplying in value, thanks to abnormal run of the stock market (thank you Federal Reserve), which is not going to continue with rising interest rates

3. Salary base seems to be stuck at $150K for mid-senior developers in bigger cities (except for NYC) for many years now


> 1. High total compensation (not salaries) is limited to FAANG/Unicorn employees (awash with money)

Very nearly every publicly-traded tech company, actually. The list of "tech" companies where mid-level engineers aren't breaking 200k TC is much shorter than the opposite.

> 2. That high compensation has been achieved by stock options multiplying in value, thanks to abnormal run of the stock market (thank you Federal Reserve), which is not going to continue with rising interest rates

Initial offers have been increasing; we aren't talking about golden handcuffs that employees have from stock growth. To give a very recent example: Amazon recently bumped up their pay-band for mid-level engineers and they seem to be hitting 350k/year (maybe more; have seen a couple reports of 400k). Their mid-level pay-band previously topped out at ~300k/year.

> 3. Salary base seems to be stuck at $150K for mid-senior developers in bigger cities (except for NYC) for many years now

...you mean "except the Bay Area", right? Though it's definitely not true either way; I live in a large city that isn't in the Bay, Seattle, or NYC, and 180k is a pretty common number for senior engineer base salaries, and I see 200k+ more and more frequently. (Not that I care too much about just base; given the choice why would I work for a company that only pays base salary, i.e. ~half of what other companies pay?)


There has never been a 1:1 relationship between how essential a job is, nor how difficult it is, and the pay. It's mostly about a) the barriers to entry, which can be academic (law, medicine), regulatory (law, medicine, probably finance?), institutional (finance, tier 1 law), or just how grueling it is to get through (definitely medicine outside of some specialties like family med or peds, probably tier 1 finance/IB too); b) leverage/BATNA. A is more systemic - it drives wages up or down as an industry and is why a family med doctor makes $200k/yr but a radiologist makes $600k/yr. B is more localized, if I have a lot of leverage or my BATNA is "sit on a beach and drink" I might pull a $400k TC instead of $300k simply by waiting around for a better offer. The more people in my field have that opportunity, the more of a systemic pressure this has on wages.

Anyone can be a teacher. The certification requirements are low. The academic requirements aren't anything more than most kids are getting anyway (a degree in anything will qualify you in most US states). Most teachers need to work so their BATNA is "work someplace else doing the exact same thing for roughly the exact same amount of money." What's more (and I didn't really mention it above) is that you can't tie teaching directly to income. Not that you should be able to, but if you're a quant at an HFT shop you can probably draw a line with a pen from your code to tens of millions of dollars in profit that's there this year that wasn't there last year. Do that year over year and it's hard to argue against you getting paid $800k-1M/yr in cash and bonuses. Helping one of your students read a little better, or finally understand linear algebra, isn't going to increase your personal bottom line at all.


Imo this is related to value provided. It’s why an excellent tutor can make serious money and be hired by wealthy families to teach kids. Good teaching is a scarce resource.

Many (most?) teachers are bad and the existing incentives don’t require them to be good. It’s hard to fire them and public schools have issues which limit selection and competition.

Perhaps harsh, but some of the dimmest people I knew in highschool became teachers and some of the meanest adults I’ve ever interacted with were teachers too. Obviously great outliers exist, but I’m skeptical they’re the norm.

Most jobs that have high demand, real evaluation of skill, and limited supply of people that can do it well (for whatever reason) pay well in capitalist markets.

The exception I can think of would be something like social work, where demand is high, costs are high and it’s mostly government funded.


I don't think that's an exception, as government funding and capitalist market are more-or-less mutually exclusive, even in capitalist countries. Nobody in a state government makes more than the governor, nobody in the federal government makes more than the President, etc. And for social work specifically, "real evaluation of skill" is going to raise more than a few eyebrows.


Yeah, I think these are all fair points.


I think you forgot what has been the single largest driver of salaries for most Europeans: unions and legislation.


I'd rather be a software developer in the US than a teacher in Europe.


Sure. And I'd rather win the lottery than have socialized healthcare. Obviously it is better to be the beneficiary of income inequality in a country with large disparities of income than it is to work a "normal" job in a more equal society.


For most non-STEM teachers, teaching is the best-paid job they could possibly get once you account for pension, etc.


Yes even in US public schools, and even for relatively new hires, the benefits can be pretty great. Time off, retirement, etc. And it's still possible to get into administration and make what an entry-level SWE makes. Hell, when I was in HS in the very early 00's there was a big scandal in my area because we found out the district superintendent was making $200k/yr. For context, most of my friends' parents worked blue collar and manual labor jobs. Teachers averaged $44k/yr or so back then.


Not to mention that for most teachers you're going to draw a pension for 20 years after you retire at 3/4 of your max salary, so all salaries are 75% higher than the listed number.


I would gladly give up software development for a medical or law degree. I would gladly study for a bar exam or do a medical internship instead of grinding Leetcode and throwing salt over my shoulder or some other good-luck-inducing act, in the hope of passing the often subjective, "are you a fit", "can you design and code Z in the next 38 minutes" type of interview.

The barrier to entry is way too low these days and therefore we have this hazing ritual for an interview process. It's absurd to think that after 12 years in the business, I won't be able to figure out what to do with those requirements. I'm sorry, we need 5 years of Java, not 4.

For those young developers, save up your money over a 10 year period and get out.


Residency isn't the barrier to entry to being a doctor (at least not practically), the real barrier is getting accepted into medical school. The process can also feel like a crapshoot.

Likewise passing the bar isn't the barrier to entry to a good law career (at least not practically), it's getting through the interviews at a top law firm.

Many attractive industries have problems when it comes to deciding who's eligible to reap their rewards.


I know people saving up and getting out of the medical field, as it became extremely toxic in the US. Expensive and mostly run by huge corporate entities seeking profit.

I would much rather click on a computer.


There are quite a few toxic places in the software business, no?

Medical field (being a physician) is not a walk in the park, but:

1. nobody questions your intelligence/knowledge after, say, 20 demonstrably successful years in the business

2. on your way home you can say "tough day, but at least I am making $600K/year"


In the end, everything you do for money will become "work", because if it was only the fun parts people would just do it for free as a hobby (see also: open source).


> I have a lot of friends who went into various fields, and it seems relative to the number of buttons we press daily, anyone who works on software makes great money. We didn't spend a ton of $ to go to medical school and 3 years of residency. We didn't go to law school and take the bar. Most of us went to 4 years, or less, of school, and here we are making great money by utilizing the greatest tool of all time: the computer.

This is one of the main reasons I'm a software engineer and shudder to think about doing anything else for a living, despite not even liking software that much.

> It's uncomfortable because many of us didn't understand this going in.

This is probably not as common as you think. Most of the SWEs I know switched over from other fields because it's an obviously easier gig. Epidemiology, architecture, electrical engineering, biomedical engineering, archaeology, public policy. People are painfully aware of how easy we have it and that's precisely why they're (we're) SWEs, not because of any interest in coding itself.


It feels like we're tradespeople being paid like businesspeople because we're in demand right now and it could change at any moment.


They've been trying to transform programming into blue-collar work since the first developer demanded more than minimum wage for their labour, with 4GLs and "anyone can code" movements and whatnot.

Thank gods that while they're right - anyone can indeed write code - the valuable part of our labor is problem solving, and no amount of better tooling will change that.

Ultimately we are business people, we are accountants, we are tour operators and much more: our labor value is in the ability to transform these jobs into requirements and solutions. Coding is how we describe our understanding of the domain, and its production is inconsequential once all the hard questions get resolved.

Too bad today that gets a stench from the so-called agile crowd, which would prefer us to tackle problems with aggressive randomness in the hope that infinite monkeys can be as cost-effective as a single thinker.


The difference here is usually leverage. A teacher spending an hour to teach one child teaches one child, but a programmer who spends an hour writing code to generate a dollar’s worth of value can have that run thousands of times a second or 24/7 even when they’re sleeping.

Of course teachers who understand this also make money, see Miss Excel ( https://www.instagram.com/miss.excel/?hl=en ), who was/is making $100k a day with Excel instruction videos on TikTok and Instagram. It's all about leverage.


I disagree - the difference isn’t leverage, it’s monetizability.

If teachers were paid a lifetime percentage of their students earnings teachers would make far more than software engineers. Barring some cultural revolution that’s not going to happen.

Software engineers simply make more money because the companies they work for have high margins.


This assumes there’s actually a significant benefit from school on lifetime earnings, which is at best debatable if not actually at odds with all available research.


I’d love to see the research you’re citing. Are you suggesting kids would be better off if schools didn’t exist with respect to lifetime earnings?

Seems easy to prove wrong by trivially looking at high school dropout vs graduate incomes.


It's not that trivial, because the types of people that choose to drop out might be the types of people that earn less, even if there are no causal links at all.

For example, they might be risk averse and a risk averse personality might lead to other decisions that make more money. (or the opposite)

It would be easy for a fad to arise calling education unnecessary and a waste of time, and for people to consider the option to drop out less risky just because it gets talked about more. So even if the correlation held statistically now, in any year it could change just because the subgroup of risk-averse people are more likely to consider it. So you still couldn't use it to make permanent decisions(like creating a school system for hundreds of years).

Homeschooling vs. high school graduates would remove "choice to drop out" from the student, but then "choice to homeschool" on the parents might carry down genetically or culturally and maybe "aversion to government authority" improves earnings on average.

I try to reject all statistics on "easily understandable" social topics. Leave that for scientists only. It's too easy to make a mistake and not get any warning that you made a mistake, and believe wrong things your whole life.


Just because one is homeschooled doesn’t mean they don’t have a teacher.

All observable evidence shows those who complete their schooling in any form make more money


It's illegal not to provide schooling of some sort, so that is yet a 3rd variable("willingness to commit crimes") that could be a better predictor.

I don't care whether schooling causes or doesn't cause a benefit: You called it "trivial" by looking at statistics, and that supreme confidence that there can be no other explanation seems unjustified.


I didn’t mention any statistics so I’m not sure what you’re going on about. I’m saying those who don’t go to school make less money. That is empirically true. Selection bias is irrelevant here to the original comment.


> Are you suggesting kids would be better off if schools didn’t exist with respect to lifetime earnings?

> Seems easy to prove wrong by trivially looking at high school dropout vs graduate incomes.

There is an implied statistical argument here that by plotting "did X drop out" on one axis and their future income on another, that a correlation would refute a statement that "Schools cause students to earn less".

Specifically, you would be:

    1. Gathering many examples of individuals(a "population") with boolean "did they drop out?" V1, some income metric V2, maybe some others.
    2. Comparing and aggregating V1 and V2 across the population
    3. Using it to refute a statement S
That is a statistical argument for ~S because it involves comparing population aggregates. You may not have explicitly used the word "statistics" or realized you did it, but you made a statistical argument.


The teachers who teach 30 kids / year x 20 year avg career = 600 kids, on average 10% go on to become coders, so now that one teacher is responsible for generating 60 coders each of whose code goes on to "run thousands of times a second or 24/7 even when they're sleeping." They are more levered in impact than most coders will ever be and yet remain under compensated. It is a sad state of affairs.


This is nonsense of course. Teachers affect lives every day and those lives affect others. Your code is ephemeral and only exists in a digital realm. It's only as good as the next pull request.

Time to get over ourselves people.


Imagine we added up all the dollars generated as a result of good instruction.


"A teacher spending an hour to teach one child teaches one child"

That's a tautology, not analysis.

A teacher spending an hour in front of 300 kids might create 300 businessmen, so have you accounted for their contribution to the economy?


Classic tech guy response. Teachers teach multiple kids, hundreds a week if not a day, and can inspire many to go on to do great things (money-wise or not).

Why are we tech people so arrogant about our career? We already make good money, no need to think we're better than others.


Nobody suggested greater leverage meant "better than others".

If you write software in any given year that gets used by thousands or millions of others, you may also be enabling great things - either in new capabilities or in reduced friction.

Also, successful software gets used for a long time, with additional work adding value on top of previous value.

So software work is leveragable without fundamental limits across both space and time.

This space-time leverage is true of all automation design.

In contrast, well-done online courses can "automate" learning, but not as completely. The highest-quality teaching involves meeting many students' individual needs. And there is nothing automated about most teaching.


Software engineers get to have leverage 100% of the time.

Poor teachers: when I was teaching it was more like 20% of the time, and the other 80% was breaking rocks. Planning, marking, parents, discipline, pastoral care, sports, and straight up babysitting.

That’s a big part of why the pay is so different, in my experience. Teachers have to put in a lot of time and effort to make a difference, and the rest of the work is completely unskilled labor.

For SWEs, the next pull request is only limited by your ability to focus and type.

I’m a full time SWE these days, and enjoying the 10x pay.


Indeed, I feel extremely fortunate to have wound up with computers as an interest while young because that allowed me to build a well paid career doing something I love despite lacking a degree, but that sense of unfairness you mention makes it a bit bittersweet.

I think the bottom line is that everybody should be able to earn a good living wage, regardless of their line of work. It feels broken that one would need to seek a specific job or set of jobs to have a hope of living comfortably.


To a large extent, it is their individual choices. They went in with their eyes wide open, knowing what the market is like. If you decided to pursue something like librarian studies without a concrete plan for income, then it is hardly the fault of society if you do not get a job. I commented on this yesterday and I will post it here again. Not every job will change the world, but often those boring but well-paying jobs are more important than the "impactful" careers.

https://news.ycombinator.com/item?id=29553341


Individual choice, yes. And it is indeed not the fault of society that some jobs provide less money or security than others. But "they went in knowing what the market is like" is not really true for the vast majority of 18-20 year olds. Most simply do not have enough experience about "adult life" yet to make a rational and informed choice.


15. Some of the highest paying software jobs are at companies that profit from attention/marketing and might not have a net positive contribution to society.


> We didn't spend a ton of $ to go to medical school and 3 years of residency.

There's plenty of suffering on top of that time and money too.

https://web.archive.org/web/20201112012023/http://www.medica...


But at the same point in our careers we do not have the equivalent $ and status of professions like law and medicine.

You are also confusing our, broadly speaking, "professional" jobs with some of those other examples - and firefighters have early retirement and DB (defined-benefit) pensions in the UK.


Wait until you get to your 40s to appreciate other factors. "teachers, firefighters" enjoy:

* structured careers with well-defined benefits that they keep even after retirement

* accumulate experience that's valued by their employers all the way to retirement

* can find a job anywhere; don't need to emigrate/move to SFBA/NYC/London/Berlin, speak foreign languages, be surrounded by aliens from distant lands

* are respected for what they do

Software development is heavily skewed towards the young demographics. We just don't think about it when we are young and still in love with computers. In the immortal words quoted by Steve McConnell over 20 years ago: "Wanted: Young, skinny, wirey fellows not over 18. Must be expert riders willing to risk death daily. Orphans preferred. Wages $25 per week."


> Wait until you get to your 40s to appreciate other factors. "teachers, firefighters" enjoy: * structured careers with well-defined benefits that they keep even after retirement * accumulate experience that's valued by their employers all the way to retirement * can find a job anywhere; don't need to emigrate/move to SFBA/NYC/London/Berlin, speak foreign languages, be surrounded by aliens from distant lands * are respected for what they do

If you get to your 40s as a dev and want that without leaving development, become a public sector (direct government, not contract) software developer.


> structured careers with well-defined benefits that they keep even after retirement

That's why careers have a stock comp. Jobs don't.

> accumulate experience that's valued by their employers all the way to retirement

Real engineers do get that.

> don't need to emigrate/move to SFBA/NYC/London/Berlin, speak foreign languages, be surrounded by aliens from distant lands

I can't help but hear a dog whistle here...


Personally I went into software because it has the highest pay/effort and I think this is becoming even more common for younger generations.


In the US* Canadian checking in :/


> I have a lot of friends who went into various fields, and it seems relative to the number of buttons we press daily, anyone who works on software makes great money.

I don't know about that. Software scales almost infinitely. Impossible to double the number of billable hours or patient load of a cardiologist outside of an herculean effort, but software folks can easily 10x the number of people they bring value to in a short amount of time.

> We didn't spend a ton of $ to go to medical school and 3 years of residency. We didn't go to law school and take the bar. Most of us went to 4 years, or less, of school, and here we are making great money by utilizing the greatest tool of all time: the computer.

Honestly, medical school isn't really the correct example to use because it's not efficient at all.

The big reason medical schooling is so hard is that applicants have to get a completely unrelated undergrad degree before even thinking about applying to med school. That means racking up serious debt before even knowing if you are going to get admitted. You can imagine how that fosters a culture of diversity and inclusion and helps poor, often non-white, applicants reach medicine when the first step is to get into six figures of debt with absolutely no guaranteed income.

Other countries with similar health outcomes for their population simply admit freshmen into med school right away.

Residency is something else. Residents are overworked since the total number of residency slots is capped by the AMA (also, you have to fly in to your residency interviews on your own dime - so much for underrepresented applicants!) and honestly there's a lot of cargo culting. 48-hour shifts with no sleep, because it's believed that it's impossible to correctly hand over patients from one medical professional to another (hint: this is a solved problem in a lot of high-stress cognitive disciplines outside of healthcare). It's a culture of always working harder instead of working smarter, and people take pride in that, strangely enough.

All it really does is create a moat and allow AMA members to charge more because they made themselves scarce.

> Meanwhile, our friends who are teachers, firefighters, child-care workers, etc. are doing tough work daily for less pay, perhaps simply because they didn't use computers much or didn't like them. And yet, their jobs are ESSENTIAL and we need people to do them even though they're grueling and underpaid.

Firefighters tend to have nice pension funds in a lot of cities.

Teachers' pay is incredibly interesting. Engineers are paid on results (this is even more true at serious tech companies where a significant portion of one's earnings will be stock comp) while teachers are paid a government-approved rate no matter the results.

I know a lot of parents that would pay serious money to teachers based on results (if their kid can get into Stanford/MIT for instance) if said teachers could show a track record of having previously mentored alumni for instance. Instead, the premium is now captured by real-estate since parents that care will pay extra to get into a good school district.


No. 13 in this list is absolutely true, but I’m not even sure that a mass casualty event would be enough. Year after year there are credit card / social security / customer data breaches, but companies have done the math on whether to put in place more upfront measures versus a cleanup afterwards. Same with performance - I used to go into companies to fix performance issues. Companies do not care if their staff has to wait minutes for that request to process. They don’t care if customers have to do stupid workarounds. Only when there is indisputable proof that (big) money is being lost will they bother to fix performance problems.


This one from the article is a bit weird:

> Sophisticated DSLs with special syntax are probably a dead-end. Ruby and Scala both leaned hard into this and neither got it to catch on.

Rails is one of the most popular web frameworks around and a combo of Rails and Ruby power some really high traffic / high importance platforms like GitHub, Shopify and Stripe's API.


Ruby had a huge hype, then its growth slowed down and now it's falling behind quickly. Yeah it's big, but other frameworks that chose a different path w.r.t. metaprogramming seem to be the more popular (and growing) choice today.


While that may be true, I'm not sure that's evidence that DSL's are the reason. I would argue the explanation is: JavaScript. I think many people are compelled by the idea that you can learn one language, and handle both back and front-end development. As the language has improved over the past ~10 years, this angle has only become more compelling, and has eaten into the popularity of other "traditional" web frameworks like Django, Rails, Spring, etc.


I myself am one of these people. But even if Ruby worked anywhere and I could share all my code, I would still prefer TypeScript because of its support for typed JSX and its excellent type system overall. The IDE can help me in most cases, while it stays silent a lot with Ruby. I just don't see what more Ruby offers - and I'm saying that as someone who really likes OOP and really wanted to like Ruby.


I think JavaScript benefits from typing a lot more than Ruby does. Ruby is at least strongly typed, so doing something like `[] + {}` or `4 + {foo: "bar"}` will raise an exception instead of returning a seemingly arbitrary value.
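
A quick illustration of the difference (Ruby in irb vs. plain JavaScript; exact error messages may vary slightly by version):

    # Ruby: mixing unrelated types raises immediately
    [] + {}              # TypeError (no implicit conversion of Hash into Array)
    4 + { foo: "bar" }   # TypeError (Hash can't be coerced into Integer)

    # JavaScript: the same expressions silently coerce to strings,
    # yielding "[object Object]" and "4[object Object]" respectively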

As far as what Ruby offers over JavaScript? It's a bit personal preference, but I prefer Ruby for several reasons:

- More stable ecosystem. There are more canonical "best" tools for the job. There's one dominant web framework, one dominant background job processor, one dominant task runner, debugger, testing framework, etc.

- More robust standard library. Enumerable rocks. And rolls.

- Better fits "OO" definitions—everything is an object in Ruby. It supports public/private/protected instance methods/variables.


I was about to say as well: JavaScript and SQL were both niche DSLs that are now mainstream enough that people prep them for general software development interviews.

JavaScript is now a general-purpose language, but maybe the most popular DSLs break out of that niche once they've gained a certain mindshare, at which point they're no longer DSLs.


That's a category error. Ruby is not a framework.


Doesn't matter, almost nobody uses Ruby without Rails. And my statement holds true for any Ruby framework.


> my statement holds true for any Ruby framework

Are you sure all Ruby frameworks use the same "path w.r.t. metaprogramming" as Rails? It's been a while since I looked at them but I recall many priding themselves in the difference back then.


Ruby programmers still use the features, even if their framework doesn't


And you're saying that that was the reason? I'm just trying to divine the relevance of the "other frameworks that chose a different path w.r.t. metaprogramming seem to be the more popular (and growing) choice today" claim.


Sorry, meant languages+frameworks, not just frameworks there. I was thinking of TS+React or C#+ASP.NET in that particular case.


It's hard to compete with JavaScript on client and server. How's the Ruby-to-JS story these days?


Ehh, when I would introduce people to a rails project, I always got to a point where I said basically what the author says. Usually either while explaining rspec (and `let` especially) or the routes.rb file. Rspec and the routes file are still around, but you used to see that sort of thing a lot more.
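
For anyone who hasn't seen it, the kind of rspec code being referred to looks roughly like this (a made-up spec; `Order` and `with_discount` are hypothetical):

    RSpec.describe Order do
      # `let` defines a lazily evaluated, memoized helper available in each example
      let(:order) { Order.new(total: 100) }

      it "applies a 10% discount" do
        expect(order.with_discount(0.1).total).to eq(90)
      end
    end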

My impression is that Ruby libraries have leaned away from DSLs over time. Rails hasn't added a new DSL in a long time, none that I can think of anyway.


I also don't understand the notion that the various bits of ruby that look like a "DSL" even qualify for this rule.

Ruby's syntax allows for writing things in a pretty expressive style, often not requiring parentheses (for parameters) or curly braces (for object/hash literals), and that allows you to write ruby code in a way that sorta looks like a nice DSL. But it's very much not a DSL, it doesn't have "special syntax", it's just... Ruby. Just because Ruby can be written beautifully doesn't mean Ruby "leaned hard into this".
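
For example (a minimal Rails-flavored sketch), the following reads like declarative configuration, but every line is an ordinary method call with the parentheses and hash braces left off:

    class Order < ApplicationRecord
      has_many :line_items                  # i.e. has_many(:line_items)
      validates :customer, presence: true   # i.e. validates(:customer, { presence: true })
    end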


I think he meant creating new DSLs for program code, not so much as a way to make libraries and frameworks more accessible. From my experience with Ruby, I really love well-designed DSLs (like good APIs in general), but when people start to create their own mini-languages in existing codebases, you really have to rein them in, because the quality of that code will be much harder to control (mostly because of the implementation via metaprogramming magic).


Languages that allow the creation of such DSLs do, in the end, seem to be gradually declining in popularity. This is probably because it is difficult to add static typing to a language that flexible. (Even in JavaScript this has proven difficult, as shown by the fact that TypeScript is known to be unsound.)


Also, nothing in Ruby that's called a "DSL" is actually special syntax. It's necessarily all just method calls on objects; Ruby has no actual mechanism to introduce new syntax. So I'm not entirely sure what the OP means.

I don't know about Scala.


You kind of know what he means. Ruby uses methods for things that look like syntax in other languages (eg the plus sign for addition) and lets you define them however you want. AND Ruby has the method_missing magic method, which means anywhere the language was expecting a method call you can put pretty much any tokens of your choice, and use metaprogramming to interpret it at runtime.

Those are things that people leaned into a lot ten years ago, and relatively little now. It might still make sense to, say, define the addition operator for colour spaces in your graphics library, but not to change the look of the language to the extent that rspec does.
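
To make the method_missing point concrete, here is a hedged sketch of the classic dynamic-finder trick (the class and data are made up; ActiveRecord historically used the same technique for finders like find_by_name):

    class FlexibleFinder
      def initialize(records)
        @records = records   # e.g. an array of hashes
      end

      # find_by_name("Ada"), find_by_email("ada@example.com"), ... all land here
      def method_missing(name, *args)
        if name.to_s.start_with?("find_by_")
          attribute = name.to_s.delete_prefix("find_by_").to_sym
          @records.find { |r| r[attribute] == args.first }
        else
          super
        end
      end

      def respond_to_missing?(name, include_private = false)
        name.to_s.start_with?("find_by_") || super
      end
    end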


Hmm. So C++ operator overloading is a DSL, at least in a sense. It lets you define things like + and * for vectors and matrices, so you can write your code using your usual notation for such things.

But it's a fairly limited DSL, because you can't introduce new (non-C++) syntax.

I hadn't thought of it that way, but I can kind of see it.


I would not call Rails “popular” in Europe at all. Some use it, sure.


> 10. TDD/FP/Agile zealots are probably zealots because adopting TDD/FP/Agile/Whatever made them better programmers than they were before.

Being a Zealot also stunts your growth beyond the level that zealotry got you to in the first place.


Pair programming or mobbing are a huge waste of resources.

I know that myself I can't actually get anything done unless I'm in control.

The only use this sort of thing could have is to train juniors or hand over projects.


> Pair programming or mobbing are a huge waste of resources.

Likely for some, but not all cases. Pair programming at my shop had a weird positive multiplier effect on velocity/delivery. It was optional and about half the team did it. Different folks work differently, we aren't all productive in the same ways.


Do you think that this is true of every programmer, or that there are people who do experience increase in productivity from pairing?


We all have our own set of values.

Myself I value independence and self-learning, so I would be inclined towards having a negative opinion of people that require the constant assistance that pair programming provides.


1. Not everything can be declarative. Navigating tree models (file systems, DOM, decentralized relationships) is an inherently imperative task. This doesn't stop developers from forcing declarative abstractions over imperative concerns. It just means a footgun at maintenance time or passing the buck to somebody else.

2. Legacy paradigms will continue to live, no matter how obsolete or harmful, so long as developers reliant upon those paradigms continue to find employment. Such developers will continue to find employment so long as such paradigms remain in formal education criteria.

3. Software hiring is extremely biased. Such bias will continue until software is perceived as generally harmful by people outside of software hiring. Until such time any attempts to limit bias will be met with maximum hostility, as such only benefits experienced and generally confident incumbents.

4. It is common in software to hate on credentials of any kind except a computer science bachelor's degree, which then becomes a credential. This line of thinking is prolific and extreme enough to view credentials as a form of incompetence regardless of what such credentials are or what they mean, thereby increasing bias.

5. Software will continue to swell and slow so long as the goals of the developers are out of alignment with the end product. Claiming product first goals by developers is not a solution if not reflected by the tool chain and processes.

6. The older the web gets the more costly it becomes to add text content to a commercial webpage.


>We don’t have the Alan Kay / Free Software / Hypercard dream of everybody having control over their own computer because integrating with disparate APIs is really fucking hard and takes a lot of work to do.

A couple weeks ago there was a conversation here about software in the 90s where the argument was proposed that, as an industry, software veered off in the wrong direction somewhere along the way. When I read the above from the article, there's a part of me that starts feeling this... we have all this horsepower in both hardware and software, such that it should be as "easy" for people to make software in the same way they can now make pictures, music, and videos without needing hundreds of thousands of dollars invested in a movie or recording studio. These aren't mutually exclusive points - you can have APIs that maybe are a step beyond, but does that mean that all software must be hard to write? I was telling my son just last night about how people did all these cool, unexpected things with Hypercard back in the day. It didn't need to be perfect, it didn't need to be "enterprise grade", because it was easy to make and was good enough for what people wanted to do.


I’d never heard of mobbing. That’s an awesome idea.


"Mobbing" means "bullying" in Swedish, which initially confused me greatly.


It's not really a positive term in English either, I think the name's supposed to be a joke


I didn't know it had a negative connotation.


“Ensemble programming” is a slightly better name for it.


My team calls it Swarming. We Swarm up on a problem like a bunch of hornets, together we hammer that shit from many angles, heavyweight team and then boom, it's done. I think I got it from TPS


I have yet to see pairing work effectively for more than 20-30 minutes at a time, on a very specific problem that really needs shared expertise. "Mobbing" sounds horrible 99% of the time.


I've done it a bit in a professional environment and it worked well. I wouldn't want to do it 100% of the time, but it has its uses.


> ... will never broadly adopt Sphinx/rST or Sphinx/MyST, because everybody already knows how to use markdown.

No. It's because the syntax is, well, §!$?§$. Headings, things like using an image as a link - that takes two lines of its own (three with an alt text):

    .. |LABEL_TO_USE| image:: image.jpg
       :target: http://example.com
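
For comparison, the equivalent image-as-link in Markdown fits on one line:

    [![alt text](image.jpg)](http://example.com)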


> TDD/FP/Agile zealots

So his phrasing is a little hard to follow here, but I take this as being part of the ongoing TDD backlash, which is unfortunate because in my 30 years of experience, the opposite of TDD is "not testing anything at all". Or rather, when somebody pushes back against TDD, they usually push back against any form of unit testing. The net result is that the code itself is difficult or impossible to actually test... so nobody ever does outside of QA who does end-to-end black box testing and just kind of hopes they've covered all of their bases (hint: they didn't).

I guess I'd have to understand better what his definition of a TDD "zealot" is because I've been accused of being one for insisting that non-trivial code that doesn't have unit tests (and doesn't even allow for them) is unacceptable.


> so nobody ever does outside of QA who does end-to-end black box testing and just kind of hopes they've covered all of their bases (hint: they didn't).

You don't need to cover all bases, in 99% of software. (And in the 1% where you do, you should probably be using formal verification.)

Testing is an investment and should be analyzed in terms of ROI. There is a cost to writing tests in advance, a cost to fixing bugs after writing the code, and a cost to shipping software with bugs left unfixed on rare codepaths. Sometimes TDD yields high returns indeed (usually for "library code" that (a) does something very complicated on a complex data structure; (b) is easily isolated from the rest of the system without writing tons of mocking infrastructure; (c) has a stable API that is not frequently refactored.) Often it does not. I've found that a combination of static assertions, black-box systems testing, and incrementally adding regression tests during debugging allows me to develop good code more efficiently than TDD the vast majority of the time. Your mileage may vary, of course.
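
As a sketch of what "incrementally adding regression tests during debugging" can look like in practice (a hypothetical Minitest example; `slugify` and the bug it pins down are made up):

    require "minitest/autorun"

    # Hypothetical helper that once produced stray leading/trailing dashes
    def slugify(title)
      title.downcase.strip.gsub(/[^a-z0-9]+/, "-").gsub(/\A-|-\z/, "")
    end

    class SlugifyRegressionTest < Minitest::Test
      # Added while debugging; pins the fixed behaviour so it can't regress silently
      def test_surrounding_whitespace_and_punctuation_collapse_cleanly
        assert_equal "hello-world", slugify("  Hello, world!  ")
      end
    end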


> While dynamic languages can be the right choice for a large project, you have to make the case for why it’s the right choice, like a killer library, something particular to the domain, etc.

99% of software projects are by no definition "large". Why in the world would it follow that what is right for critical state infrastructure is at all the right way to do something not nearly as complex?

An attempt at rephrasing the argument: "When you go to the moon, you need to concern yourself with the vacuum. If you're not going to the moon, you should put on your space suit because other people are going to the moon."


"Empirical research on software engineering is a trainwreck"

...and here are some truths from my empirical research.


> The open source maintainer problem isn’t going to get much better in the near future. People are gonna keep blaming “Capitalism” for this. People are gonna keep blaming “Capitalism” for a lot of irrelevant things. People are gonna argue with me about blaming “Capitalism”. Goddamn I am so tired of “the problem is CAPITALISM” takes and they will never stop

The entitlement and arrogance are pretty mind-blowing - those damn developers don't fix our bugs, don't always respond fast with security updates - such a big problem, but wait - maybe not such a big problem that we'd actually task a dev from my company to work on the dependency, either by maintaining a local fork or helping upstream. Because oh well - that actually costs money.

Sure blaming capitalism is oversimplifying the problem but it's pretty clear that most companies try to avoid using their resources for improving important open-source components. I don't mean paying the developers - that doesn't always work but instead doing things like upstreaming bugfixes and features, maintaining a local fork if upstream is not responsive or even publishing an improved fork. It happens but not very often.


The problem is that there’s no agreed-upon point when we give FOSS maintainers the negotiating power to get the compensation they actually need - whether that is patches or pesos.


I argue that a healthier approach would be to own the FOSS components you use and to try to build an internal understanding of them - no need to audit every line of code, but at least some high-level understanding of what every component does would already be helpful. Try asking for resources to do so; I guess most of the time you will be laughed at. Also, giving your devs the knowledge and time to upstream fixes until the maintainer is happy would be great - so many interesting forks and pull requests in all kinds of projects are just dumped out there, but the work to integrate them into upstream is something nobody is doing. This requires resources, resources are money, and there is no direct benefit for the corporation. So it's quite a big problem.


You’ve reframed the problem — have you made it larger or smaller? Which problem requires a smaller team to achieve:

A. Training a large enough number of software engineers in the negotiating skills to get the time to develop that understanding. I estimate that training 80,000[1] developers to do this would require 3-6 years from a team of at least 1,000 people with tech-lead-level experience… it would likely also need to be repeated every 2 years as new people join the industry.

B. Establishing a social norm that sets the default choices of debates in lots of companies. Baking that social norm into the structure of tools like npm and github. I estimate this would take 1-5 years from 10-30 people with leadership experience and political capital in developer relations/marketing and sales engineering.

[1] https://insights.stackoverflow.com/survey/2021


> Also also also, most muggles don’t want to program their own computers, much in the same way that most muggles don’t want to do their own taxes or wire their own houses.

I smell a red-herring that reeks of domain incompetence here.

Find me even an FSF/GNU zealot who argues that paradise includes end users fishing around in ALSA or display settings to "program their own computers."

As Keenan's character on SNL's Black Jeopardy said-- "Everybody's got a guy."

Anyone who has ever installed something as simple as Ublock Origin for a muggle gets this.

Anyone who has ever tried to turn off telemetry for good on a muggle's proprietary OS gets this.


> Any technical interview is going to be gameable and exclusionary to some underrepresented group/background.

Bold claim! I agree, and think this isn't talked about enough.

People love to hate leetcode-style tech interviews, but the truth is that it made it more accessible for some people (like myself) to enter the rungs of top tech companies.


Are these uncomfortable?

I don't think they're cynical (as stated); I think most of them are common sense and I had assumed consensus understanding. But I don't spend much time with young whippersnappers.

It's true that experience, and what appears to the irrationally exuberant to be cynicism, are regularly coextensive...


14. Most software “engineering” jobs are dominated by activities other than writing software, and the limited time spent actually writing software is dominated by routine coding tasks that can only be loosely termed “engineering”.


> 10. TDD/FP/Agile zealots are probably zealots because adopting TDD/FP/Agile/Whatever made them better programmers than they were before.

Possibly, but that doesn't mean it made them good programmers.


> ...statically-typed languages are better for large projects than dynamically-typed languages...

> Formal methods will never be mainstream...

Say what? These two statements are written just two lines apart?


What’s confusing about it? In this context, “statically-typed languages” refers to things like Haskell, and “formal methods” refers to things like TLA+. The two are not at all the same, nor are they intended to be.


One would say that static typing has grown out of research in formal methods, and hence is a consequence of formal methods.


#99: Everyone is replaceable, even you.


I really like this list; they all resonate with my own experience. I've been on both sides of many items (zealot became cynic).


>>The open source maintainer problem isn’t going to get much better in the near future. People are gonna keep blaming “Capitalism” for this. People are gonna keep blaming “Capitalism” for a lot of irrelevant things. People are gonna argue with me about blaming “Capitalism”. Goddamn I am so tired of “the problem is CAPITALISM” takes and they will never stop

This is a horribly constructed argument for a great point that I would have loved to see expanded on and communicated better. Feels like the author was just trying to dismiss an argument with some "repetition legitimizes" comedic statement.


9 is a repeat of 5.


I think point 13 misses a nuance- technology has arguably already killed people (see: genocide and Facebook). But I would like to specify that it has to kill people that other folks deem important enough to prevent the further deaths of.


>The open source maintainer problem isn’t going to get much better in the near future.

>People are gonna keep blaming “Capitalism” for this.

Capitalism is great, there really is no need to defend it at every opportunity, especially if no one is attacking it.


As a marxist, I will certainly concede that hand-wavy diagnoses of "capitalism" with no other analysis or consideration being applied to everything from mental illness to the domination of Marvel films is grating and meaningless.


The author spends time on another forum called Lobsters, which has a fairly pronounced anti-capitalist user base. I think this is where that is coming from.


great is a relative term. It's much better than feudalism, and certainly better than the most prominent socialisms that've been tried (USSR, China etc). However, it does not align incentives towards the things we think are "good", like maintaining open source software, securing software, etc - so it certainly isn't ideal.


Yea I mean nothing is perfect. While capitalism doesn’t lend well to maintaining open source software (which I find to be a surprising comment given that GitHub exists) it does lend well to things like having a house, or being able to save money for retirement, or to creating markets where you get products you want. YC for example has enabled lots of founders to create brand new companies and services we all use.

So I would definitely take the bad with the good here. The world’s most capitalist countries also have the highest standards of living (Norway, Sweden, etc.).


> While capitalism doesn’t lend well to maintaining open source software (which I find to be a surprising comment given that GitHub exists)

Github doesn't maintain the open source software they host.

> it does lend well to things like having a house

Capitalism isn't the originating point of houses though. However, it is the originating point of mortgages, and the massive profits banks draw off of them. I'd say it's a double-edged sword on that front.

> The world’s most capitalist countries also have the highest standards of living (Norway, Sweden, etc.).

Those countries are social democracies - they invest a lot into welfare and the state. The US is probably the world's most capitalist country, that or South Korea. Both have terrible standards of living for poorer people.

Fundamentally, the incentives of capital drive towards maximising profits and minimising costs - everything else is surplus to requirements. I'm not on board with the people who think capitalism is the worst thing ever, but its incentives are pretty bad and it's usually better off mixed with taxes, regulation, nationalisation etc into a mixed economy.


> Github doesn't maintain the open source software they host.

Right but they host it. And before Github hosted it and made money doing so we didn't have a great distribution mechanism for open source software.

> Capitalism isn't the originating point of houses though. However, it is the originating point of mortgages, and the massive profits banks draw off of them. I'd say it's a double-edged sword on that front

It's a hard thing because you also can't just go into the woods and cut down trees and build your own house anymore right? There's no "free property" so the times have changed. Banks might make massive profits on home loans, but at the same time without making a profit you won't give someone a loan for a house. Making a profit isn't really a big issue anyway.

The scenario here is not: "get a free house" or "banks make massive profits on houses". So it's a bit of a false dichotomy you've created.

> Those countries are social democracies - they invest a lot into welfare and the state. The US is probably the world's most capitalist country, that or South Korea. Both have terrible standards of living for poorer people.

> Fundamentally, the incentives of capital drive towards maximising profits and minimising costs - everything else is surplus to requirements. I'm not on board with the people who think capitalism is the worst thing ever, but its incentives are pretty bad and it's usually better off mixed with taxes, regulation, nationalisation etc into a mixed economy.

Yea this happens from time to time but people confuse government and economic systems. The US for example is a constitutional republic (or similar idk exactly) and uses the private means of production (capitalism) as its economic system. Take Norway or another country and while they do have democratic socialism as a government system, they still employ capitalism as their economic model. They have a stock market, people own factories or houses, and they sell their labor. And if you do some research (if you're interested) countries like that tend to rate even higher than the US on metrics of things like "the worlds most capitalistic economies", "the most entrepreneurial economies", the most "free economies", etc. I think your comment about regulation and taxes highlights this misunderstanding. Highly capitalist economies (Denmark, Singapore, United Kingdom, United States, etc.) have different levels of taxation and for different items, rules and regulations, etc.

I think the biggest failure of capitalism (if you'll allow) is that it does very bad with things that can't be or haven't been priced appropriately. We see this with pollution and the environment right now.


> Right but they host it. And before Github hosted it and made money doing so we didn't have a great distribution mechanism for open source software.

That's true, and I'm glad they do that, though I'm under no pretence that they do it out of altruism or anything like that. Again, it's not that everything is bad under capitalism, but that incentives are aligned badly, and tend towards rent-seeking, exploitation and so on. Good things do still happen when the incentives work out alright.

> The scenario here is not: "get a free house" or "banks make massive profits on houses". So it's a bit of a false dichotomy you've created.

I didn't set up that dichotomy, I just said that capitalism didn't invent the house. There are alternative models of housing, like housing cooperatives, and historically houses were not built by corporations to put on the market, but by the government, by local workers, by communities etc - there are many historic models of home construction.

> Norway or another country and while they do have democratic socialism as a government system, they still employ capitalism as their economic model.

This is true (except the part where they're democratic socialist - they're actually social democracies, which is to say capitalism plus strong welfare, but close enough) - but your claim that these countries are the "most capitalist" was incorrect. Typically speaking, more versus less capitalism is an axis of laissez-faire capitalism, which is to say "the most capitalist" would be a neoliberal free market, and "the least capitalist", while still being capitalist, would be a strong social democracy like the Scandinavian model, where capitalism is quite heavily constrained. You might have seen other resources that use these terms differently, but that's what I was thinking.

I think there's a few pretty serious issues with capitalism that, left unaddressed, will cause serious material and social harm. Also, I don't think that we can resolve the climate change crisis through pricing in emissions through something like cap and trade. I think we solve it practically through changing how we live, trying to reduce emissions systemically while aiming to maintain or even improve living standards. One example would be mixed land development coupled with better urban transit, to reduce car usage - but both the car lobby and the fossil fuel lobby will block this, due to the nature of regulatory capture (an inherent problem with capitalism). Even switching to EVs doesn't solve the issue of rare earth mining using exploitative labour (eg cobalt mining by child slaves) - because neocolonialism is another problem caused by capitalism.

The problem in my mind is that there are few good alternatives or steps forward - 20th century attempts at socialism were a catastrophe. My assumption is that we're just going to charge through climate change headfirst until the environment is ruined beyond repair.


Wait. Are Norway and Sweden in the group of "most capitalist"? I thought they also had a bunch of socialized programs. I feel like USA and UK are the top two "most capitalist" countries.


Having social programs doesn't preclude you from being a capitalist country. If it did, the U.S. has medicare, medicaid, welfare, social security, food stamps, affordable housing programs, and many others yet we consider it to be highly capitalist. People often confuse political and economic systems (which is fair) because they get tangled together. Both Norway and Sweden are capitalist countries. They have stock markets even. [1][2]

At least for me and I'll speak on behalf of others who can correct me but there is great annoyance with "capitalism bad for everything" slogans and phrases precisely because it's more of a lack of functional government (at least in America) than it is private ownership of property or production. You should be able to start and own a business, the government (largely) shouldn't be telling you what you can and can't spend your money on, and entrepreneurship is very valuable and should be promoted and encouraged, not shamed or something you get executed for.

-edit-

The main difference between Nordic social democracies and a country like the United States is marketing and effectiveness. The U.S. spends an incredible amount of money on social programs. Some groups in the U.S. spend an incredible amount of resources convincing others that government programs are "socialism" and therefore bad. It's a wonder that my older relatives who rail against these programs are the same ones that benefit most from them. I always say, yeah, I'm on board, we should cut Medicare and Social Security (which they receive), and the conversation changes at that point, though their minds don't. That should tell you something.

[1] https://en.wikipedia.org/wiki/Oslo_Stock_Exchange

[2] https://en.wikipedia.org/wiki/Nasdaq_Stockholm


>I always say, yeah, I'm on board, we should cut Medicare and Social Security

You're evil ;)


Ranking countries by "capitalism" is hard and kind of pointless. Sweden, for example, has no legally mandated minimum wage, which should make it more 'capitalist' than the US. It also has stronger unions and worker protection, which some people might argue makes it less capitalist. Sweden has (I believe) a much higher percentage of privately run high schools than the US (more capitalist), but free university education (less capitalist). Also, the UK is in some ways 'less capitalist' than Sweden with its fondness for large government institutions like the NHS, while in Sweden health care is to a much larger extent provided by private healthcare providers.


Government systems don't seem to have any better track record for security. That alone should stop this particular line of argument.


"Capitalism" and "government" aren't opposites.


Anything democratic devolves to oligarchy.

A monarchy with an effective ruler selection process is probably best. There’s a reason most successful companies have a single CEO with absolute power - and they degrade once this is no longer true.

It certainly has a large risk of tyranny though.


You want to live under a dictatorship?

> A monarchy with an effective ruler selection process is probably best.

That's called an election.


No. 13 is gold:

> We’re never going to get the broader SE culture to care about things like performance, compatibility, accessibility, security, or privacy, at least not without legal regulations that are enforced. And regulations are written in blood, so we’ll only get enforced legal regulations after a lack of accessibility kills people.

Sorry, can't help but blame capitalism for saying "fuck performance, compatibility, accessibility, security, and privacy" and making the lives of engineers (and users) miserable.


I'm honestly starting to feel like I transported back in time to my 1980's small town childhood, when the adults around me all blamed "Satan" for everything in the world they didn't like.

"Capitalism" has about as much to do with software performance as actual devil worship had to do with my Dungeons & Dragons or heavy metal cassettes.

And if your life as an engineer or user is "miserable" today, then I wish I could transport YOU to the 1980's. Yeah, existence is suffering... it sucks to be in the top 5% of household incomes, be inundated with open source platforms and tools that didn't exist too long ago, and have a supercomputer in my jeans pocket.


What sucks is having a supercomputer in your jeans' pocket that is significantly less responsive to your input than a Commodore 64 and is mostly used to deliver ads to you.


I realize that this will get some reflexive upvotes from the meme crowd. But neither of those statements are remotely true.


I frequently have to pause while typing and wait for my pocket-supercomputer to catch up and actually give me feedback like putting text on the damn screen. Multi-second non-interactable pauses are not uncommon at all. If that isn't "unresponsive" I'd really like to know what is.

The ad thing I admit might be a touch hyperbolic.


I have never experienced input lag on an iPhone, other than perhaps in a context where the input is traversing an Internet connection.

And I assure you that a C64 with a 300 baud modem had that issue, too.


Yeah, but that 300 baud modem was connected to another computer. I'm just typing on this somewhat laggy phone inside a textarea -- which, while worse than a C64, isn't usually so bad that I regularly notice.


Amen to that!

One time at Sunday school in the 80s we did a craft project, little lanterns out of coffee cans with patterns of holes punched in them. Mine was a star and a simple cross. As a kid I knew I could not play with matches but wanted to try out my new creation. So I snuck behind the garage and lit a bunch of matches and melted the candle. A week later my dad finds it, I’m adamant I have no idea where that thing came from. Later on in the day I overhear him tell mom he thinks there are some kids in the area doing some kind of satanic ritual behind the garage.

Rock and roll, however, that is from Satan.


The problem with blaming capitalism is that these things are by no means unique to capitalism. People are as self-serving in communistic "utopias" as in capitalist ones. Plus, many economies are a mix of capitalist and socialist elements - people cherry-pick things they like/dislike according to what they want to prove; but the US is by no means more or less "capitalistic" than any other nation. Some capitalistic values could just as well be described as conservative.

Fostering co-operative communities has less to do with the political/control system, and more to do with how it is implemented - and with the mindset of the population. I could just as well blame multi-culturalism, globalism, lack of religion, or the metropolitan lifestyle for the lack of social cohesion.


Software is not charity. I will not work hard to deliver all those things out of the goodness of my heart. The ROI on those things isn’t great.

If it’s easy to do I’ll do it, but none of those are trivial things.


Software is not charity, but it is a responsibility. As probably the most innocent example, when you ship an app that drains a user's battery, you betray your users.

But then how about an app that sneakily collects data knowing that users won't even understand what you are doing or why?

You can be a responsible business which unfortunately won't necessarily be appreciated. Or you can be a typical business that exploits information asymmetry to the last bit.


If you don't like our current system, which lets users decide what they can put up with or not, then are you proposing some sort of centralized authority to enforce this? Like the government? The same governments of the world that produce the crappiest, most insecure, user-hating software with some of the most ridiculous rules and regulations, or else 10 layers of subcontractors that make the price per line of code outrageous?


Case in point: the Governor of Missouri in the US claiming that a reporter was hacking[1] when they hit F12 and viewed a website's HTML, which contained Social Security Numbers.

[1] https://www.nytimes.com/2021/10/15/us/missouri-st-louis-post...


You could start with e.g. requiring full legal disclosure of what my data is used for. Apple is trying to move in that direction, but we can't be at the mercy of a private corporation no matter how good the intentions are, when it comes to user data and privacy. But Apple's new requirements on the App Store are an example of what's possible to do.

Performance and quality are a bit different, they are much harder to regulate but still possible to an extent.


I'll go further: I would love nothing more than some well-defined common standards to work towards. Weighty documents I can drop on a product manager's desk to tell them the ways in which our customers and the government will legally have their way with them if we don't meet those requirements.


This is why I like working in highly-regulated industry. Technically correct is almost always what flies at the end of the day, because there certainly exists some audit item tracking the thing.

I absolutely loathe the audit process on one hand, but it's a useful tool for getting everyone to play the same game on the other. It also means your solution is compliant in a lot of places if you can get it signed off in one.


It is unfortunate that the parent latched on to capitalism. Point number 13 from the post resonated with me the most, since the software I write can damage the environment, or damage expensive machines that are difficult to replace, if there are errors. As a professional electrical engineer I sign off that it is built correctly and am liable for it. Therefore I limit the scope so I can manage the workload, document test results that show it works correctly, and put a lot of effort into handling faults so machinery shuts down safely when there is a problem with a sensor or actuator. I also keep it as simple as it can be to get the job done, so it is readable for other people to understand and troubleshoot.

Imagine if you could sue software vendors for lost revenue due to bugs.

The post's point that people won't take responsibility for their software until there are regulations, and that regulations are written in blood, is a good one.


>"Imagine if you could sue software vendors for lost revenue due to bugs"

It does happen when software is made under specific contract. You do pay an exorbitant price for such software though.

For generic software - this is the dumbest idea I've ever heard. Go sue your politicians instead. I am pretty sure you'd find enough cases of lost revenue / income coming as the results of their actions.


Why should software be unlike other industries that are held accountable for their products? Imagine if everything in your life had no warranty, had click-through EULA-type terms, and changed uncontrollably to improve some metric for the manufacturer. I suppose it all comes down to when life and the environment are on the line instead of just revenue; for revenue, as you say, the people who can afford it pay for it and enforce it with their contracts. To me, therein lies the difference between programming and software engineering.


True, but I'd argue that regulation is not part of the free market capitalist paradigm. Regulations are created and enforced by the democratic institutions that are not part of the market paradigm.


Every response seems to have missed the joke by not reading the actual article. No. 12 makes fun of "blame capitalism" and this comment is riffing on that.


Counterpoint: life in communist countries is not known for being "not miserable".


That's the most irrelevant counterpoint that can be made here. "Communist" states were also totalitarian, i.e. undemocratic and that was the main reason for their ultimate failure. With democracy they'd be able to self-correct and likely switch to free[er] markets for example.

Consumer software and the Internet are probably the least regulated areas of engineering today; that's the problem. Compare that to aerospace, construction, etc. Regulation, though, is not part of the free market capitalism paradigm; it's the prerogative of democracy.


> Regulation though is not part of the free market capitalism paradigm

Who told you that? People like to simplify views and say that people are "anti-regulation", but really people are anti-poorly-thought-out-regulation. What famous capitalist do you know that argued theft for example should be unregulated?


Show me a corporation that would wholeheartedly support regulation of the market they're in.


Facebook[0]. Regulation could make it more difficult for competitors to get off the ground so they are in support of regulation to that effect.

[0]https://about.facebook.com/regulations/


You bring one of the most hypocritical pages on the Internet as an example?


Corporations also have the means and the connections to lobby the state, which ensures the regulations are more to their liking. All helping entrench the corporation.

The effect is that small competitors are kept out. They can't afford legal departments and legions of developers to match the regulatory requirements.

For now, the Internet is relatively unregulated, which allows alternative media and platforms to emerge. This is a good thing; imagine having just YouTube and Facebook forever.


> Corporations also have the means and the connections to lobby the state, which ensures the regulations are more to their liking.

I understand but under proper democracies regulations are something the society should benefit from in the first place.


What if they weren't democratic because Communism is incompatible with democracy?


It's not capitalism, but the bad apples exploiting it.


It is a bit naïve to just blame the free market for everything wrong in the world. When I read complaints like that I wonder what "alternative" the author has in mind.

Capitalism in this sense also gave us much of the software and hardware that is praised on HN.

It is more accurate and useful to discuss which motivations, cultural shifts, or even psychological traits have caused the decline in software quality and freedom. That is more practical than proposing mass economic realignment.


It's not about the free market per se, but rather a system that allows turning information asymmetry into money without consequences most of the time.


I'd like to see examples of well known good apples outside of OSS that care about performance, compatibility, accessibility, security, and privacy.


Although they have many faults, Microsoft has cared about compatibility for 30 years (you can still run DOS programs on your Windows 10 computer). They have cared about the security of their users since about 1999, and they cared about their users' privacy for about 10 years before dropping it in favor of advertising revenue a few years ago. People still install Windows on their new computers, so it evidently isn't a big concern for most people either. Their support for accessibility technologies is probably better than that of most operating systems, though it's a bit hard to tell from the outside. Performance is an odd one: it is my understanding that the Windows kernel gets high marks for performance, but aside from that almost nothing does.


Even in OSS, caring about performance, compatibility, security, and privacy is the exception. The biggest projects do tend to care, but the majority don't. And the biggest examples in OSS are mostly funded by capitalism to care about those things.


It's not the fault of a system that rewards unethical behavior by having a massively skewed risk/reward profile for said behavior, it's the fault of people with low-normal levels of empathy who make rational decisions. Sure.


"The system harnesses the power of incentives to make people do the right thing!"

"But here the incentives point in the wrong direction."

"It's not the system's fault! People shouldn't follow those incentives!"


That same line of argument could work for socialism/communism. What country that doesn't use the free trade paradigm for its economy isn't some authoritarian nightmare of poverty and despair? Are they just bad actors using the system wrong, or is it a system that always ends that way?


Corruption isn't a problem of free trade; it's a problem resulting from centralization of authority. A single regulatory gatekeeper (or a small cabal of them) is cheap to pay off compared to satisfying all interested parties. Bribe one person? Check. Bribe 10,000 without having it break the bank or turn into a PR nightmare? Not so much.

Clearly, our "free market" isn't really free, and our regulatory and enforcement strategies are the problem.


Not always the case. There are non-democratic capitalist societies with varying degrees of performance, from great (Singapore) to awful (the entire post-Soviet space).

On the other hand, over-regulated or fully socialist societies seem to be incompatible with democracy, I'd agree with that one.


Bad environments make bad apples.


From a capitalistic POV, it's irresponsible not to exploit ("make full use of and derive benefit from") underlying economic systems, no? This is why a purely capitalist, free market economy would be a societal disaster.


If it's a near-universal problem, it's not "bad apples", it's bad incentives, i.e. capitalism.


The exploitation is a feature of capitalism, not a bug.


Capitalism is a handful of people who own the means of production and the rest of us trying to get by. The bad apples exploiting it are the people who own the means of production, and saying they're the bad part of capitalism might as well be saying "capitalism is only bad because of what makes capitalism what it is".


You replied to that comment using a computer. That computer is the means of production for software. You own the means of production here. What have you been doing with that capability?


> Capitalism is a handful of people who own the means of production and the rest of us trying to get by.

Well, good thing then that the "means of production" for software development consist of a 500€ laptop, which is well within reach of just about every programmer in the Western world.


>Well, good thing then that the "means of production" for software development consist of a 500€ laptop, which is well within reach of just about every programmer in the Western world.

Sure, now tell me your opinion on this idea I have

Today, in music, there are many parasites that suck up the profits, and creators barely get anything. So my idea is to make a cooperative (or a non-parasitic company) to help the creators of content. The first step is to have mobile apps, and here we hit the parasites, Apple and Google: any transaction or subscription will be taxed by these parasites. It seems impossible to make money for the right people without fattening up the parasites.


When people say capitalism is bad, what they're saying is allowing people to choose is bad. Because no one is forcing these products on anyone and nearly everyone has at least heard of the issues and mostly don't care.


>When people say capitalism is bad, what they're saying is allowing people to choose is bad. Because no one is forcing these products on anyone and nearly everyone has at least heard of the issues and mostly don't care.

My issue is with the claim that "free markets" solve everything, when most of the time you don't have free markets (like when you have too few competitors, or when there is collusion not to compete), or when you forget to specify that a free market would solve the problem only after you make illegal all the bad or evil practices companies will abuse. If any economist is reading: is there a theorem/law that proves an imperfect free market will converge to an evil state, so you always need supervision to intervene when you notice the effects and apply a correction/bug fix?

In my example with mobile applications we don't have free markets, where a free market would mean many agents competing to offer you something; we have basically two competitors, and these two are happy to stay in balance and share the big pie rather than aggressively competing to make customers happy and reduce their own profits.


Don't confuse free with unregulated. Not even Adam Smith wanted unregulated markets. Free simply means everyone is equally free to play in the same market created by a system of laws and equal enforcement. And I doubt you'll find anyone defending monopolies or duopolies. All systems are corrupted, we just try to minimize it when we can.


#10 also applies to religion.


Not capitalism, but imperialism & its neo variants.


This is a great essay. Also, the problem is capitalism, and the fact that the argumentation won't stop should probably tell us something.


> 12. The open source maintainer problem

The what problem? The only problem I see is entitled developers who think that putting stuff on GitHub and calling it “open source” is somehow an occupation for which they deserve a salary.



