I’d say the same thing except about 25-30 years ago and definitely not 15, which suggests to me that both of us suffer from observer bias and likely there’s always been a real mix of programmers who were really into it and others who were only after a job. Perhaps the proportion of uninterested programmers has increased though.
A lot of people heard there is money in programming so they did the bare minimum and landed a job.
The managers who hired them were not complete idiots, mind you. They knew what they were getting, but they're playing their own games: they need the bigger head count, they want to hire their cousin to oversee these sub-par programmers, they want to be seen as professional when they inevitably have to ask for external consulting because their underlings crap the bed on a regular basis, etc.
But it's mostly a power play, I've found, almost on a sexual kink level. A lot of people in power out there will hire you if you are meek and can't stand up for yourself. Really weird.
Having hired more than a few people, I'll suggest a much simpler reason. These programmers might just do the bare minimum, but that bare minimum is enough to accomplish the job we need them to do, and we can stop interviewing and get back to work.
It probably follows the booms and busts of the industry. Boom times encourage people to enter the industry whether they are interested or not. During a bust you will only get people who are interested in the subject for its own sake.
Step 1: make everyone an incredible offer
Step 2: get them all hired away from your competitor who is now out of business
Step 3: in a year or two, restructure all these people out (or just fire them if your jurisdiction allows)
Step 4: your competitor is gone, and all it cost was a year or two of salaries.
Seems like a great way to help out budding monopolies.
It seems like you could prevent this just by giving your employees incentives not to get poached. Also, companies that mass-hire and mass-fire would get reputations for doing so, and people wouldn't fall for it. Making it illegal, instead of requiring businesses to actually pay for retention and loyalty in a free-market way, is so silly.
When a mass employment offer is made to steal or destroy another business, it's usually something ridiculous. For developers it might be a million a year each, for example. It's not an amount intended to be paid perpetually, so it can be larger than anything the defending business can afford to pay to retain people.
It is not illegal to do general hiring at good rates and shop for employees at a particular company, but that wouldn't have the same results as buying the company. Plus, you wouldn't own their creations; you'd have to rebuild them or do a clean-room reimplementation.
The 'people wouldn't fall for it' is in error.
People aren't rational actors and don't have complete information.
That's a bold statement, I know, but it's at least as correct as 'people wouldn't fall for it'. I'm pretty sure it's easy to make a case for 'too many people will fall for it'.
I’ve encountered a similar phenomenon with regard to skill as well: people want to ensure that every part of the software system can be understood and operated by the least skilled members of the team (meaning completely inexperienced people).
But similarly to personal responsibility, it’s worth asking what the costs of that approach are, and why it is that we shouldn’t have either baseline expectations of skill or shouldn’t expect that some parts of the software system require higher levels of expertise.
There is a reason Haskell and F# are relatively unpopular while Go has a much wider footprint in the industry: high expertise levels don't scale. You can hire 100 juniors, but not 100 seniors all trained up in the same difficult abstractions.
Conversely, one skilled senior can often outperform a hundred juniors using simpler tools, but management just doesn’t see it that way.
> Conversely, one skilled senior can often outperform a hundred juniors using simpler tools, but management just doesn’t see it that way.
Management is correct, if that's the question.
In some very rare bleeding edge cases it is true. Everyone wants to think their company is working on those areas. But here's the truth: your company (for any "you") is actually not.
If you're writing code that is inventing new techniques and pushing the hardware to limits not before imagined (say, like John Carmack) then yes, a single superstar is going to outperform a hundred juniors who simply won't be able to do it, ever.
Asymptotically close to 100% of software jobs are not like that (unfortunately). They're just applying common patterns and libraries to run-of-the-mill product needs. A superstar can outperform maybe 3-4 juniors, but that's about it. The job isn't that hard, and there are only so many hours in a day.
This is made worse today because neither quality nor performance matter anymore (which is depressing, but true). It used to be that software had to work fast enough on normal hardware, and if it had bugs, that meant shipping new media to all customers, which was expensive. So quality and performance mattered. Today companies test everything in production and continuously push updates, and performance doesn't matter because you just spin up 50 more instances in AWS if one won't do (let the CFO worry about the AWS bill).
Programming doesn't happen in a vacuum, and experience and institutional knowledge can account for many orders of magnitude of performance. A trivial example/recent anecdote:
The other day, two of our juniors came to see me; they had been stumped for 2 hours by the wrong result of a very complex query. I didn't even look at the query, just scrolled through the results for 10 seconds and instantly knew exactly what was wrong. This is not because I'm better at SQL than they are, or a Carmack-level talent. It's because I've known the people in the results listing for basically all my life, so I instantly knew who didn't belong there and, very probably, why he was being wrongly selected.
Trivial, but 10 seconds vs. 4 man-hours is quite the improvement.
> A superstar can outperform maybe 3-4 juniors, but that's about it. The job isn't that hard, and there are only so many hours in a day.
There do exist (I would even claim "quite a few") jobs/programming tasks that superstars are capable of but a junior developer would need at least years of training to do/solve (think, for example, of turning a deep theoretical breakthrough in (constructive) mathematics into a computer program, or of programming involving deep, obscure x86 firmware trivia). But I agree with your other judgement that such programming tasks are not very common in industry.
You don't even need to go to rocket science for this.
3-10 juniors can make a massive, expensive mess of a CRUD app that costs $x0k a month in Amazon spend and barely works, while someone who knows what they're doing could cobble it together on a LAMP stack running under their desk for basically nothing.
Knowledge/skills/experience can have a massive impact.
> 3-10 juniors can make a massive, expensive mess of a CRUD app that costs $x0k a month in Amazon spend and barely works, while someone who knows what they're doing could cobble it together on a LAMP stack running under their desk for basically nothing.
Yes! Absolutely. It will be faster and more reliable and an order of magnitude (or more) cheaper.
Alas, I'm slowly (grudgingly and very slowly) coming to accept that absolutely nobody cares. Companies are happy to pay AWS 100K/mo for that ball of gum that becomes unresponsive four times a day rather than pay for one expert to build a good system.
Indeed, specialist knowledge is a real constraint, but I think it’s possible to at least _orient_ towards building systems that require no baseline level of skill (the fast food model I guess) or towards training your staff so they acquire the necessary level of skills to work with a less accessible system. I suspect that the second pathway results in higher productivity and achievement in the long term.
However, management tends to align with reducing the baseline level of skill, presumably because it’s convenient for various business reasons to have everyone be a replaceable “resource”, and to have new people quickly become productive without requiring expensive training.
Ironically, this is one of the factors that drives ever faster job hopping, which reinforces the need for replaceable “resources”, and on it goes.
Which is why the most important qualification for a manager is to consistently put in way more effort than the average worker, and to be very, very good at doing things that are not the least bit easy.
I'm not sure I understand this position. What I hear is "obscure, hard-to-understand code is good", but as others have said, code will be maintained and modified for years to come, and not by the original author, so making it easy to understand and follow is usually the recommendation. Even the original programmer will find their own code easier to understand months or years later if it's written that way.
Yes, I meant something else, and of course I'm not advocating for hard to understand code. However, as the sibling comment suggests, what's obscure or hard is relative.
The problem with indiscriminate application of "code has to be easy to understand" is that it can be used to make pretty much anything, including most features of your language, off limits. After all, a junior developer may not be familiar with any given feature. Thus, we can establish no reasonable lower bound on allowed complexity using such a guideline.
Conversely, what’s too simple or too difficult is very specific to the person. Somebody who’s coming to a junior developer role from a data science background might have no problem reading 200 lines of SQL. Somebody with FP background might find data transformation pipelines simple to understand but class hierarchies difficult, and so on. So the "easy to understand for anyone" guideline proves less than useful for establishing an upper bound on allowed complexity as well.
Therefore, I find that it’s more useful to talk about a lower and upper bound of what’s required and acceptable. There are things we should reasonably expect a person working on the project to know or learn (such as most language features, basic framework features, how to manipulate data, how to debug etc.) regardless of seniority. On the other hand, we don’t want to have code that’s only understood by one or two people on the team, so perhaps we say that advanced metaprogramming or category theory concepts should be applied very sparingly.
Once that competency band is established, we can work to bring everyone into the band (by providing training and support) rather than trying to stretch the band downwards to suit everyone regardless of experience.
Reminds me of a dev team I once encountered where they stated they wanted to be expert C programmers and that they didn't understand pointers, so they avoided them.
> Once that competency band is established, we can work to bring everyone into the band (by providing training and support) rather than trying to stretch the band downwards to suit everyone regardless of experience.
Great point. This would also apply in the context of DEI hiring initiatives.
Aha! advertises as far away as New Zealand, and I also never heard back when I applied. The excuse they supply (at least they do that much…) is that they receive too many applications to respond to everyone. But then why advertise all over the world? A scummy tactic, when I think about it.
[I am the author of the post.] I know that keyword lists are different from maps, but I think their superficial similarity to maps isn't ideal, and I don't think they're the best possible solution (e.g. pattern matching on keyword lists is order dependent, which isn't intuitive when they're used as named args). They're confusing to newcomers because of these things.
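To make the order dependence concrete, here's a minimal sketch (the module and option names are made up):

    # Destructuring options with a keyword list pattern.
    defmodule Sketch do
      def draw(opts) do
        # Only matches if opts is exactly [color: ..., size: ...], in that order.
        [color: color, size: size] = opts
        {color, size}
      end
    end

    Sketch.draw(color: :red, size: 10)  # => {:red, 10}
    Sketch.draw(size: 10, color: :red)  # ** (MatchError): same data, different order

If these behaved like true named arguments, you'd expect both calls to work.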
Both can happen at the same time: they can be confused and be learning.
It would be best if there was no confusion whatsoever, but given that that's most likely impossible (in any language), it is important to learn where the confusion comes from, so we can continue improving the docs. It is a positive loop to have, as long as we listen. :)
It is unfortunately much trickier because keyword lists are also optional. So what happens if opt2 is not given? We could raise but... since they are optional, we wouldn't want it to raise but rather have a default value.
Of course, we could then say "let's allow a default value to be given when pattern matching keyword lists". The problem is that no other pattern works like this and it would be inconsistent.
I think optional arguments are fundamentally incompatible with pattern matching (in Elixir). I understand why we would want that, but if we did have it, we would be adding tons of complexity and it would stick out like a sore thumb. I am also wary of going in the same direction as Ruby and Python, which have quite complex argument handling (albeit we have slightly different feature sets).
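For completeness, the idiomatic way to handle optional keys with defaults today is Keyword.get/3 rather than pattern matching (a sketch with made-up option names):

    defmodule Server do
      def connect(opts \\ []) do
        # Keyword.get/3 returns the default when the key is absent, so nothing raises.
        host = Keyword.get(opts, :host, "localhost")
        port = Keyword.get(opts, :port, 4000)
        {host, port}
      end
    end

    Server.connect(port: 8080)  # => {"localhost", 8080}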
I think there is a possible design for them that is not too bad, but it would need named parameters. I think the time I encountered that idea was in a longer post about solving a different problem in Rust.
But I doubt it would be an urgent thing to change.
TL;DR: add (optional) named arguments. You can call without names, in which case it is positional, as it is today. If you call using named arguments, you have to use them in the order they were declared, just like with positional arguments, but you _can_ omit some. Those get the default value.
There are of course combinations that could be allowed, like anonymous until the first optional one, whatever.
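A hypothetical sketch of the shape of the idea (to be clear, the named-call syntax below does not exist in Elixir today; `\\` defaults do, but only positionally):

    # Hypothetical: defaults as today, but callable by name.
    def draw(pos, opt1 \\ 1, opt2 \\ 2), do: {pos, opt1, opt2}

    draw(:p, 10, 20)    # positional, exactly as today
    draw(:p, opt1: 10)  # hypothetical: named, in declared order; opt2 omitted => 2
    draw(:p, opt2: 20)  # hypothetical: opt1 omitted => 1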
I get that. But it's built on top of the Erlang VM, and neither Erlang nor its VM supports these. All languages have their own leaky abstractions, and stuff that should in theory work but doesn't.
Keyword lists were certainly confusing as a newcomer. But, FWIW, they existed well before maps. The AST is made up of only tuples and keyword lists on top of atoms and other literals. (Actually, if I'm not mistaken, keyword lists themselves are just lists with tuples of atom/value pairs. If you dig far enough down, it's incredible just how much of the language is built on top of a tiny number of primitives. Almost every bit of syntax you can think of is probably several layers of macros.)
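You're not mistaken; a quick check in iex confirms it:

    iex> [a: 1, b: 2] == [{:a, 1}, {:b, 2}]
    true
    iex> Keyword.keyword?([{:a, 1}, {:b, 2}])
    true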
Keyword lists are really from Erlang's history. Erlang didn't have maps for a long time (and records are just compile-time sugar over tuples), so it used property lists to perform a similar function.
Ah yes, sorry, you did say you understood them in the article, but it wasn't obvious that you understood the cases where one is better suited than the other.
Otherwise a very nice article, and sorry I only focused on the rebuttal points! I absolutely agree about grouped alias/import/require. I know those weren't always in the language, and I always assumed they were a community request (but maybe not). I personally prefer to avoid `alias` and `import` whenever possible! There are a handful of scenarios where I use them, but I love how Elixir is big on locality. Any jump I can avoid, even to the top of a file, is a win in my book.
Otherwise the focus on pragmatism is apt. Possibly my favourite thing about Erlang is that it is the only currently widely adopted general purpose language that was built to solve an actual business problem, and that problem so closely resembles web programming. All of the language decisions were made in support of their business goal as opposed to trying to come up with a beautiful, academically sound, language. Something about that really resonates with me so I kinda enjoy and embrace the "warts". Call it Stockholm Syndrome if you must :)
FWIW, no significant changes to the language are expected in the next couple of years.
Another side of this argument is that the rate of Elm ecosystem change is far more manageable than for JS.
A "stable JS framework" is an oxymoron, I'm afraid. My JS tools and NPM packages continuously change from under me, but somehow that's considered normal and not a problem.
I've just looked up a project I have with a mere 9 dependencies, and 7 of them have gone through multiple major version changes since the end of 2019. The other two have had minor releases. None are unchanged, even though all were already "stable" when I added them! Will the code still work if I update them? I don't know. Do I want to spend the time continuously tracking the changes to these dependencies, testing, making new releases of my project? Not really.
Perhaps I could continue deferring dependency updates, but that might make my job that much more painful when I have a legitimate reason to update (e.g. for some new features I need), as half the package APIs will have changed beyond recognition by then.
I have so much less trouble with going back to Elm projects and picking up where I left off. That should be taken into consideration too.