Skills Poor Programmers Lack (justinmeiners.github.io)
191 points by rspivak 7 months ago | 200 comments



The skills cited in the article are:

1. Understanding how the language works. Additionally understanding how the language infrastructure interfaces with the computer.

2. Anticipating problems. Prefer solid foundations over veneers that appear to get the job done.

3. Organizing and designing systems. Essentially, SOLID.

Two things on this:

First, bad code often results from conflicting goals. Moving goalposts and on-time shipping, for example. The result appears to have been written by a poor programmer, when this may not be the case.

Second, the most valuable skill a programmer can have isn't technical, but rater social: empathy. The best programmers I've seen have it and the worst completely lack it.

Lack of empathy leads to poor communication. If a programmer can't anticipate or read his/her audience's perspective, there's no way s/he can communicate a complex concept to them. The temptation will be to blame the audience when in fact the failure lies squarely with the programmer doing the speaking or writing.

Lack of empathy also leads to disregard for the needs of users and future maintainers. Systems get built that don't need building, and systems that should be built aren't. Should the two happen to coincide, the system is a nightmare to maintain because the programmer simply didn't care about the people who would need to maintain the contraption.

A lot of the 10x programmer discussion focussed on people who lack empathy. For some reason, it's easy to conflate lack of empathy with technical skill.


Yeah, context and empathy are two things that I'm only appreciating more and more as my career goes on.

I once had this small but terribly written module written by an inexperienced developer who wasn't given the kind of feedback and code review that he should have been given. It was still running in production years after that person had left because it was in a corner of the code base that was basically never touched. What made it interesting to me was that it was badly written at almost every level from the high level separation of concerns to low level coding practices, while still basically getting the job done.

I started giving this module as an exercise during interviews for a certain position, with the framing of "This was written by a beginner developer on your team. What kind of feedback would you give them to help them improve?" This sort of thing was actually a major part of the job, as it was a position that would be a kind of consulting resource for other teams and would involve many code reviews and encouragement of best practices -- basically providing subject matter expertise to full stack, cross-functional teams.

The results were fascinating to me because it acted like a Rorschach test of sorts and told me a lot more about the focus of the interviewee than about the code they were criticizing. More junior candidates immediately jumped on the low level issues like the boolean assignment example, naming conventions, or small snippets of code duplication, and spent all their time there. More experienced folks often mentioned the low level issues but spent more time on the higher level problems -- the class should be broken out into two, extending it in the most obvious way would be hard because XYZ, and so on. Some of the best candidates would ask for more context on the situation and overall codebase.

It also helped weed out the jerks who, despite the prompt, decided that the exercise was an opportunity to show off and insult the original author (who was of course anonymous), venting about how stupid something or other was or using it as a springboard to attack their current co-workers. Everyone starts somewhere. It's fine to wince a little at something that's poorly written, but the point is to actually help them improve. The better candidates were there trying to understand what gaps in understanding would cause the author to make certain mistakes. The very best candidate was trying to map out a staged plan of more easily digestible things to work on so that the author wouldn't be overwhelmed all at once -- extrapolating a whole technical mentorship model out of what they could glean from the code review.


bookmarked.

i don’t suppose you could publicly share the module? this is a stellar interview question.

or maybe there’s a library of such code? (yes yes, jquery/openssl or your favorite true but not useful reference comes to mind)


I don't think I can, sorry. Though I imagine that if you have a code base of any size that more than a couple dozen people have touched, you'll be able to find something similar if you ask around.


What was involved in their mentorship plan?


This was years ago, so my recollection is a bit fuzzy on the details. But pair programming was mentioned, and they also thought about what kinds of tasks to assign and how to provide scaffolding.

So for instance, instead of giving a really broad assignment and then going through six rounds of code review because everything is wrong, sit down with them in the beginning and make an outline (while explaining the rationale) of classes, methods, and responsibilities. Then have them fill in the implementations. That way you can use code reviews to focus on a lot of the smaller issues that are more concrete and less hand-wavy, while giving them practice working within sane high level designs. As time goes on, move on to more abstract concepts and give them more design latitude until they're capable of making a module like that by themselves.

There was a bit about walking the fine line in how much direction to give -- too little and they're lost at sea and spending forever to merge changes, too much and you stunt their growth.

I think there was also a book recommendation, but I have completely forgotten what it was.

Anyhow, they said it all a lot better than this, and I really wish they had accepted our offer. :-P


Thank you for sharing your experience. I found this comment very insightful.


> the most valuable skill a programmer can have isn't technical, but rather social: empathy

You should say "a programmer needs social skill as well as technical skills to be valuable". If a programmer doesn't have a basic aptitude for programming-like tasks, they won't be able to understand how the language works, anticipate technical problems, or organize and design systems. Until you've worked in a group that's filled with "programmers" who don't have the aptitude for coding, you won't really understand this. The social skill must complement the technical aptitude and skills, but is not more important than them. In fact, I'm even suspicious of people who say "the most valuable skill a programmer can have isn't technical, but rather social" because they often turn out to be aptitudinally-challenged programmers themselves.


"Social skill" and "empathy" are very different things. An awkward person can be empathetic. A charming person can be callous.

> In fact, I'm even suspicious of people who say "the most valuable skill a programmer can have isn't technical, but rather social" because they often turn out to be aptitudinally-challenged programmers themselves.

In my experience the programmers who are saying that are usually engineering leadership. If you believe the endgame is becoming engineering leadership, then it's absolutely true that soft skills become more important.

While I don't agree that "social skills" are the most valuable skill for a programmer, I do sincerely think it is the most common reason why competent programmers hit an invisible ceiling in their careers. I've seen plenty of programmers who are technically talented and hardworking, but who get stuck in their careers at the junior end of "senior" because nobody wants to work with them no matter how right they are. If person A is right 90% of the time but nobody wants to deal with them, and person B is right 80% of the time and people are willing to listen, I would rather keep B over A because those junior developers who are running at 60% will turn into 80%-ers under B, but they'll stay 60% under A.


I think the difference is that empathy often correlates with teach-ability, and it's much easier to teach someone technical skills than it is to teach someone empathy.


>The temptation will be to blame the audience when in fact the failure lies squarely with the programmer doing the speaking or writing.

This is a popular claim on HN that I don't think is true. Where does that responsibility end? Is it my fault they can't understand my program if they don't know any code at all? What about if they're a Python programmer hired onto a Java project? The point is you should make a good faith effort to write readable code, especially for your team members, but you're a programmer, not a teacher.

>Lack of empathy also leads to disregard for the needs of users and future maintainers.

This I agree with. It's a very underappreciated reason to build empathy as a programmer.


You’re assuming that the only way developers communicate is through code.

But it’s not. If you want your message to be understood, and the listener is making a good-faith effort but doesn’t get what you’re saying, the onus is on you to communicate in a way the listener understands.


> Second, the most valuable skill a programmer can have isn't technical, but rater social: empathy.

couldn’t agree more!

but the term you’re really looking for is perhaps a strongly developed theory of mind. whereas empathy really is referring to emotional awareness.

the worst and also most annoying programmers i know, to a person, consistently fail to see other points of view. their way is the right way, period. they tend to actually be smart, and right about very many things in a small problem domain. i dare say their raw intelligence is a double edged sword.


Maybe empathy comes in when trying to frame criticism in a constructive way that the receiver will internalize and find helpful?

In some cases, it might be better to ask them about their reasoning around a particular bit of code than to offer alternatives right away. In others, a good idea or anecdote about something you found useful in that spot might be better received.

In my experience, there is a good amount of empathy (or emotional awareness) involved in offering advice that will be accepted in a constructive spirit, and not seen as an attack.

After all, we ALL write bad code sometimes. A bit of code might seem like a good idea at the time, but could just be overly "clever" upon later inspection. A good review comment that does not point fingers seems like it would fall under the "empathy" umbrella.


> Second, the most valuable skill a programmer can have isn't technical, but rater social: empathy. The best programmers I've seen have it and the worst completely lack it.

+10

This is missed by 99% of articles and lists. And most programmers never stop to step into someone else's shoes.


Interesting that a one letter typo changes the meaning so much!


I don’t think that social skills and technical ability are at all related. I’ve seen people with great social skills produce great code, and others produce a tangled mess. And I’ve had to deal with people with terrible social skills who produce great code.

One thing you don’t see often in the workplace are people who lack both social and technical skills.


I'd say that point 1 is more De Morgan and Karnaugh than code.


As an addendum perhaps to the Organize and Design Systems section, I'd propose including mise-en-place as it applies to project maintenance and the development environment.

This centers on tooling and documentation, especially the README. I want a README that gets me set up and running as quickly and in as few steps as possible. Recently I needed to test out some stuff one of my teams was working on. It was a somewhat complex Wordpress project. I was able to get set up using our documentation, but a couple of key steps were missing, making the process error-prone and unnecessarily aggravating.

Anthony Bourdain explains it as only he can:

The universe is in order when your station is set up the way you like it: you know where to find everything with your eyes closed, everything you need during the course of the shift is at the ready at arm’s reach, your defenses are deployed. If you let your mise-en-place run down, get dirty and disorganized, you’ll quickly find yourself spinning in place and calling for backup. I worked with a chef who used to step behind the line to a dirty cook’s station in the middle of a rush to explain why the offending cook was falling behind. He’d press his palm down on the cutting board, which was littered with peppercorns, spattered sauce, bits of parsley, bread crumbs and the usual flotsam and jetsam that accumulates quickly on a station if not constantly wiped away with a moist side towel. “You see this?” he’d inquire, raising his palm so that the cook could see the bits of dirt and scraps sticking to his chef’s palm. “That’s what the inside of your head looks like now.”

https://books.google.com/books?id=XAsRYpsX9dEC&lpg=PA65&ots=...


Mise-en-place is so important.

Inexperienced engineers sometimes find themselves in teams which don’t appreciate it. Because confrontation can be nausea-inducing, they sometimes don’t endure the discomfort to insist on it.

———

When a customer walks into a restaurant, does he ask about the temperature of the fridge the chicken is stored in? Does he ask if the kitchen is clean and organized? No. That's the chef's job to insist on. But the customer does care if his food is late or laden with salmonella.

Likewise, it's an engineer's job to care if she has automated tests and well-structured code.


I'd argue that missing on mise en place is a symptom of a team leadership failure, as well. Allowing excessive tribal knowledge vs. repeatable process (docs, dev tools, tests, well organized code, etc.) is an organizational risk. There's the efficiency loss side, but also the possibility that an incapacitated team member becomes a critical business loss – the team becomes unable to meet goals or worst case, to ship at all. Not empowering new team members to efficiently onboard is just one facet of this. With new hires, there are also the power issues that result, particularly blaming new hires for poor onboarding performance.

I "fondly" recall one early job in my career starting with the promoted-from-the-ranks dev lead filling a whiteboard with boxes and acronyms... an overwhelming brain-dump to a newcomer that took hours of time spread over days. By the time I'd sorted out what we really did, months later, I could explain the whole thing to a new hire in fifteen minutes. One easily understood diagram, pointers to the two actually-important directories in our codebase, and some context – and they would be off to the races. Funny enough, I met a number of other people at that gig that thought of their work as "irreducible". One was, to use the recent meme, a "10x engineer" for whom the company maintained a rotating roster[1] of tech writers to follow around everywhere and try in vain to record what the heck they were doing.

[1] "rotating" as in "hired, then fled screaming"


I agree deeply.


It would also help if folks read READMEs. I write a lot of them for my projects at work. Most are detailed but also have a quick setup section in the beginning. Unfortunately, people still don't read them, and if they do, they miss steps specifically spelled out within markdown-formatted code blocks.


Totally agreed. I wrote up my thoughts on this recently: https://blog.0x74696d.com/posts/mise-en-place/


Most programmers do not follow the Golden Rule, which is to write code you'd like to maintain with minimal training. It's a principle, but there are various skills involved in doing it successfully, including writing, automation, design, seeking quality peer review, and a few other things.

Similarly, writing code that can be deleted is an important skill.

Using "good design" and "knowing your language" are fairly nebulous and somewhat tautological goals out of context. What is good design? Design that works well, I suppose?

Most code is actually fairly easy to change in the future. Just slap another "if (isLeapDay(today)) dailyProfit = dailyProfit * 28 / 29;" wherever it will solve the immediate bug. Is this a good idea? Nope. Why is it a bad idea? Well, it's the S in SOLID, probably, but the real reason is that someone would be surprised and chagrined to discover that design choice the hard way. Hence the Golden Rule.


Writing code that is well designed but not over-designed is a great skill.

It is in every engineer to write the gold-plated car, as it is in marketing and sales to always need the future tomorrow to gain a competitive edge.

The number of times I've written beautiful code is quite large, and so is the number of times it is still not used to its potential. In the end I spent too much time over-engineering for a disastrous moment or extension that never happened, or was never requested by a customer.

I have written godforsaken shortcuts that are used so frequently it made me feel proud and ashamed at the same time.

Now, I have come to believe that knowing when to stop is the greatest skill of all.


There really needs to be a blog post about this. (I'm sure there have been, but I feel like this isn't talked about enough.) A good percentage of HN readers probably already know most of the advice about bad design, but once you're at that more experienced level, you run into the inverse problem of knowing too much for your own good and wasting cycles over-designing and over-engineering.

I've definitely been guilty of this, both at the level of high-level system/project/solution design, and at the level of code design and code abstractions. I've blown a ton of time and effort over-engineering things which never ended up getting used or looked at years later, or even just making things look nicer despite no one other than me ever using, reading, or modifying the code.

It can sometimes be hard to really accept YAGNI and be able to just move on even if the code isn't quite as elegant or DRY or abstracted as you might have preferred. When you're creating abstractions upon abstractions, you can lose sight of the original goals.

I'm still learning to know when to stop, but I'm getting better.


While I applaud this and would love this rule to be in production, it tends to go against (the majority of) marketing's golden rule of "I need this done in an unreasonable amount of time with unreasonable requests on functionality," and a management structure that tends to only flex toward its subordinates instead of toward other departments, like it's supposed to... though I might be jaded =) I have seen a few very good managers/CTOs in my time and have left companies when they were forced out.


> writing code that can be deleted is an important skill

Are you saying "I can easily delete this code from the project" is a metric for modularity? That's really interesting; is it your own thought?


I've seen it before. Deletion Driven Development is one of the many names. For me a good rule of thumb is: removal of a feature should result in a diff that deletes a bunch of files (classes in the case of OOP) completely, and only single lines in other files: the call sites to that feature.
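
A minimal sketch of what that rule of thumb looks like in practice (the module and function names here are hypothetical, purely for illustration):

  # reporting/export_csv.py -- the whole feature lives in one module, so
  # removing the feature means deleting this file...
  import csv

  def export_orders_to_csv(orders, path):
      with open(path, "w", newline="") as f:
          writer = csv.writer(f)
          for order in orders:
              writer.writerow([order["id"], order["total"]])

  # app.py -- ...plus this single import and call site.
  from reporting.export_csv import export_orders_to_csv

  export_orders_to_csv([{"id": 1, "total": 9.99}], "orders.csv")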


Write code that is easy to delete, not easy to extend: https://news.ycombinator.com/item?id=11093733


Most programmers work in passes, and don't write everything in one shot. But a common mistake I still see is not doing enough passes. Programmers jump straight to making polished commits they can show off to their peers.

Instead, what should be used more often is what I would call "postit commits". Simple strokes of code that are certainly not final (if any code can be final) but achieve a purpose. Their postit nature makes them very easy to change. Those commits still need to point in the right direction, though, because they may stay around for a while (think 15 years). As long as they aren't hurting the customer or the codebase (like a supposedly-temporary hack, even though those are sometimes required), then it's all fine; a program is never finished anyway.

Working with more passes allows you to think more at every step and shape a solution that's more efficient. For example, take a piece of code that's too slow for the requirements, and for obvious reasons. Don't necessarily jump on optimising it right away. You may later find that you can make the whole subsystem that piece of code belongs to better by using another, more powerful idea. And when you come up with that new idea, the only thing that will stand in your way is "postit commits". You can change things now because you're not facing polished ones. You can change things now, not in 10 years when you find out every one of your competitors has finally implemented that idea, in which case you would take the cost of a full rewrite.

This is essentially how you do things well from scratch (and by extension, how you do anything well with code). Keep postits as long as you can, because it's a better strategic position. Only harden a solution when you can't give it more time or because it just hardened itself with cool ideas. At the end of the day, you will save time, if you worry about that. You will crush your competitors even. They will have 3 times more code with bad solutions; you will have 3 times less code with all the super cool stuff and nice subtleties (it doesn't always play out like that, but often enough).


This forms part of what Alan Kay calls “late binding” [see link below] and describes as part of what makes Lisp and Smalltalk great, which is something of a lost art today, mainly because most contemporary programming languages (the usual suspects in the statically-typed camp but also languages one would not expect, Python / Javascript) go out of their way to de-emphasize it. I feel that Perl was the last, massively popular language that promoted late binding.

https://ovid.github.io/articles/alan-kay-and-oo-programming....


    > Programmers who only work on small temporary 
    > projects (like an agency) may get by without 
    > ever improving how to design programs.
Amen to this. We recently ended our engagement with a very well-known Ruby consulting shop for exactly this reason.

Their engineers were very smart and wrote very pretty code.

However it was not suitable for the real world once even a little bit of scaling was required, and this was an app that needed to store and move a fair bit of data as a Day 1 requirement.

The real tragedy is that they were fairly arrogant about it. They didn't know what they didn't know. It's OK to not know how to build things at scale.... as long as you know that's not one of your core competencies. I don't know how to fly a plane, or create a design system, or write assembly code. That's OK, because I know my limitations there.

However, these folks were arrogant and dismissive about scaling concerns.

Well, they were dismissed as a result of their dismissiveness.

Which is a shame, because they did have some talent there.


This is the problem I've got at my current job. While I try my best to write code in a way that will work many years ahead, the lack of scale requirements and the relative freedom I've got over my own decisions are a double-edged sword most of the time. I'd love to work in a larger shop with more senior people than me.


Both kinds of experience are so valuable. "Consulting shop" experience where you get to play with a lot of stuff, often very cutting edge stuff.

"Long lived legacy app" experience where you really get into the nuts and bolts of engineering some software, but you are often locked into a particular stack and the cutting edge is your enemy and not your friend. Can suck big time when you look for your next job and your skills are 5, 10, whatever years out of date...


Weird, I find this to be more frequently the case for people who work on large, long-lived legacy apps


Which part? The inability to write code that scales, or the arrogance/ignorance about what they don't know?

The "arrogance/ignorance about what they don't know" part most definitely exists everywhere, including people who work on long-lived legacy apps. Definitely agree with you there, if that's what you mean.


Well, where he says

"You may have seen code which misunderstands how expressions work:

  if isDelivered and isNotified:
    isDone = True
  else:
    isDone = false;

Instead of:

isDone = isDelivered and isNotified "

I think that's a matter of style. I prefer the isDone = isDelivered and isNotified style myself, and I think the people who write it the other way have very poor style, but as arrogant as that sounds, I don't think I would be so arrogant as to say they don't know how expressions work.


I agree, and that's a fantastic point (about arrogance in particular). Without asking the author of the code, it's impossible to determine why they wrote it as they did. It's easy to make assumptions about who wrote it, but it's much more difficult for some of us to step outside our own judgmental views long enough to consider that they might have had their reasons. When I was younger, I was certainly guilty of this; e.g. "What kind of idiot wrote this?" It takes a certain degree of mental maturity to recognize you may not have a complete picture of why the code was written the way it was. (Admittedly, it also helped me when I realized the idiot who wrote said code years and years ago was myself, but that's another story!)

This discussion reminds me of the ternary operators. I personally don't like using them except for very specific applications. Not because I don't know how they work, but because I think their overuse can lead to mistakes and maintenance issues as the conditional complexity grows. For simple statements, sure, but once you get into territory where you see 2 or 3 nested ternary statements with complex conditionals, it's easy to lose your mind (and much harder to follow the author's intent).

I'm with the other poster (humanrebar) too, in that one should strive toward writing maintainable code whenever possible. Make it clean and readable. The poor bugger who has to maintain it when you're gone will thank you!


This is a silly example altogether.

First, it doesn't necessarily imply that someone writing code like this doesn't understand how expressions work. The author is trying to psychologically model people who code in this manner. I could similarly claim that the else there is unnecessary since the boolean is false by default. If it's a language that doesn't require variables to be declared first, I could claim that the assignment is itself unnecessary. I could then make a further claim that a person who codes like this doesn't understand expressions.

Second, the problem is taken out of context. What happens next? There'll be an if(else) statement somewhere to determine what happens next, and then the if(else) the author has painstakingly avoided will be back.


In college they taught me that every line of code should do one thing and one thing only. So doing both a test and an assignment on one line would not be preferred over the first example (split over multiple lines). I don't have a lot of experience with programming so for now I do what I was taught. :)


> that every line of code should do one thing and one thing only

Taking that at face value results in something like Asm written in a high-level language: very short lines with not much on each one, and it's a pain to read and understand because you have to scroll two pages to see what could be accomplished in a dozen lines. I've seen code written in that style and it was not easy to work with. (What often goes along with that, "every function 'should do one thing and one thing only'", is even worse when taken to its logical conclusion --- it turns lots of scrolling into lots of jumping around.)

> I don't have a lot of experience with programming so for now I do what I was taught.

Reading the code for lots of other successful open-source software will probably tell you far more useful things about how to write code than the "indoctrination" that passes for teaching these days. I'd recommend older codebases; the newer ones tend to unfortunately show the same bad habits due to the reasons above.


The code is doing one thing: assigning the result of a boolean expression. It's not different in structure from, say, `x = a + b`.


it's sort of an interesting problem in intention - if the reason the expression

"isDone = isDelivered and isNotified" was written is that the programmer saw "if isDelivered and isNotified: isDone = True else: isDone = false;"

and thought "I can improve that", then what they have done is manage to wrap the checking and the assignment into one expression. They are logically now doing two things in the one line - it just so happens that they can do that because one of those things was not really necessary to do.


The most lacking skill that I tend to see is an inability to think in types, and to design software accordingly. Too many software developers never progress beyond primitives and basic control structures - I call them Int, String, and For Loop Developers.

The second biggest issue I see is failing to incorporate our cognitive shortcomings into code design: assuming that you'll remember dozens of little details from today forever, and failing to enforce or make explicit that "today knowledge" in the code.


Can you give a clear example of the "thinking in types" approach?


It's the anti-pattern "Primitive obsession"

https://refactoring.guru/smells/primitive-obsession

Let's say, for example, that we need to handle distances in our code with different units (meters and inches, for example). I've seen code bases that use integers or floating-point numbers for this, and it's always confusing and error-prone to figure out which unit a value is in. In fact, you could accidentally use some other value (money) as a distance.

Instead, if your programming language allows this, you could define a type Distance and make sure that all the code that handles distances only uses this Distance type. Everyone is forced to think about distances when maintaining the code, conversion routines are implemented and instantiated in one place, etc.
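
A minimal sketch of that idea in Python (assuming a meters-based internal representation; the names are just for illustration):

  from dataclasses import dataclass

  @dataclass(frozen=True)
  class Distance:
      meters: float

      @classmethod
      def from_inches(cls, inches: float) -> "Distance":
          # The conversion lives in exactly one place.
          return cls(meters=inches * 0.0254)

      def __add__(self, other: "Distance") -> "Distance":
          if not isinstance(other, Distance):
              return NotImplemented  # adding a bare number raises TypeError
          return Distance(self.meters + other.meters)

  runway = Distance.from_inches(120_000) + Distance(500.0)
  # runway + 3.5 -> TypeError: a bare float (or a money value) can't
  # sneak in where a Distance is expected.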


This is why I like type defs in a language.


Not GP, but the book 'Domain Modeling Made Functional' does explain the advantages of modeling with types quite well.


I can’t speak for the parent, but my take on it is a skilled programmer would define structs or classes, for example:

parent, student, address

While the unskilled would:

parentfirstname, parentlastname, parentaddress1, .... studentpostcode


Yes, I'd call that "grouping related data (and functions, if you're using OOP) together"; no need to obfuscate the matter by saying "thinking in types".

Incidentally, that's another thing about the difference between actually-skilled programmers and "pretenders": the former will always explain something in very simple terms, while the latter will try to use as many abstract and vague technical-sounding terms as possible.


I think that your response is overly accusatory, especially given that I think "grouping related data (and functions, if you're using OOP) together" is only one aspect of thinking in a type-oriented way, and not always necessary.

A perhaps simpler, perhaps better example: Floats are lossy, but transactions in currency cannot be. An inexperienced developer might make the mistake of representing currency values as floats, while a more experienced developer would know to use some sort of BigDecimal. But the more experienced developer is still making a fatal error. Currencies have units, but BigDecimals do not. While BigDecimals can be added together, it is not meaningful to add Dollars to Euros. A developer who is "thinking in types" will define a numeric type for Dollars and a numeric type for Euros that cannot be added together.
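
A rough sketch of that last point (hypothetical names, not any particular library):

  from dataclasses import dataclass
  from decimal import Decimal

  @dataclass(frozen=True)
  class Dollars:
      amount: Decimal

      def __add__(self, other: "Dollars") -> "Dollars":
          if not isinstance(other, Dollars):
              return NotImplemented
          return Dollars(self.amount + other.amount)

  @dataclass(frozen=True)
  class Euros:
      amount: Decimal

      def __add__(self, other: "Euros") -> "Euros":
          if not isinstance(other, Euros):
              return NotImplemented
          return Euros(self.amount + other.amount)

  subtotal = Dollars(Decimal("10.50")) + Dollars(Decimal("4.25"))   # fine
  # Dollars(Decimal("10.50")) + Euros(Decimal("4.25"))  -> TypeError,
  # and a type checker flags it before the code ever runs.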


I'm sorry, for me it's still more of a "how to define data" thing than "thinking in types". Full disclosure, i would create a struct like this: typedef struct Transactions { int value; char currency; } Transaction; (sorry for the naming and the poor c code, i've not done any c since 2016).

If the transaction value can be greater than 21 thousand (or if the code is deployed on a 32-bit system, or if i want my floating point value high), maybe i would use a long, and probably an enum to keep track of the different currencies the second time i pass over the module. Maybe even a field (or a macro outside the struct) with the floating point value (4 to 6 seems good enough tbh).

I'm an "int, char, loop" guy, at least at first, especially if i have to write something from scratch. Thinking in "object" or "type" makes me uncomfortable except when it is for interfaces (but GUI makes me uncomfortable too, so...). I feel this is inefficient, and that well-defined/organized data should not need such complex handlers in most cases.


> (via @ararwhatever) The most lacking skill that I tend to see is an inability to think in types, and to design software accordingly.

> (via @userbinator) Yes, I'd call that "grouping related data (and functions, if you're using OOP) together"; no need to obfuscate the matter by saying "thinking in types".

There's some nuance between what you two are saying. IMO the "related" vector (depending on the definition) is a potential driver of what I refer to as the "Single Class Application". Back to the OP example, the Shipment and Notification are related to an Order. The simple "related" question says it's ok to implement all of those in one class instead of three.

The real questions to ask are about the specific relationship of possession. Does this attribute BELONG to this object or another? Does this object DO this action? These go a little beyond the typical OOP mantra of abstraction and encapsulation.


Could you explain the consequences of not progressing beyond "Int, String, and For Loop" development? Are you suggesting that these engineers don't understand or apply Design Patterns [0]?

[0]: https://en.wikipedia.org/wiki/Design_Patterns


That would be part of it, but since they were talking about an inability to think in types, I suspect they were taking aim more at things like excessively stringly-typed[0] code.

[0] https://devcards.io/stringly-typed
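
To illustrate the "stringly typed" smell (a generic example, not taken from the linked card):

  from enum import Enum

  # Stringly typed: any typo ("actve", "enable") silently becomes a new state.
  def set_status_stringly(user: dict, status: str) -> None:
      user["status"] = status

  # Typed: the set of valid states is closed and checkable.
  class Status(Enum):
      ACTIVE = "active"
      SUSPENDED = "suspended"

  def set_status(user: dict, status: Status) -> None:
      user["status"] = status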


I do a lot of TypeScript these days. It's possible to define a type which equates to one or more literal string values. The TypeScript compiler will complain if you try and supply an invalid string parameter. So --at least for TypeScript-- I think this is less of a concern than in the past (and TypeScript is growing in popularity within the JS community). In a more general sense, I agree with the author that the 2nd code sample offers additional clarity.


That sounds like a type that holds a string, which is just what the article suggested to use.


Which, prior to TypeScript, wouldn't have been an option. The ever-evolving front end ecosystem!


> Using sleep(), cron jobs, or setTimeout is almost always wrong because it typically means you are waiting for a task to finish and don’t know how long it will take. What if it takes longer than you expect? What if the scheduler gives resources to another program? Will that break your program? It may take a little bit of effort to rig a proper event, but it is always worth it.

While I understand the sentiment of this post, which I feel is mostly correct, it's interesting to see how divergent this statement, a viewpoint I used to hold so firmly, is from my experience building "modern" microservice architectures, devops, and distributed systems. Async background tasks that self-heal and are entirely outside of the serving path? Yes, please.


I initially had the same reaction, but I don't think that's the author's intention. He's saying don't use sleep() and cron to constantly poll whether some asynchronous thing has completed; have a proper event fire at the end and handle that. Don't think that's in conflict with what you like about your systems :-)
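
A tiny sketch of the difference in that spirit (a hypothetical worker, using Python's threading primitives):

  import threading

  results = []
  done = threading.Event()

  def worker():
      results.append(sum(range(1_000_000)))  # stand-in for the real work
      done.set()                             # fire the event when finished

  threading.Thread(target=worker).start()

  # Polling would be: sleep(5) and hope the worker finished in time.
  # Event-driven: block exactly as long as needed (with a safety timeout).
  if done.wait(timeout=30):
      print(results[0])
  else:
      print("worker did not finish in time")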


Yes, but in the article the author also argues that proper definitions are important. In the more general case, I'd say that expressing ideas in a non-ambiguous manner is important.


"I believe OOP and relational database get a lot of flack because programmers tend to be bad at design, not because they are broken paradigms". This is very true. Another more charitable interpretation is that sometimes deadlines prevent programmers from thinking their models thoroughly and so an imperfect model ends being used, causing problems in the future.


This article says more to me that the author(s) are inexperienced themselves than it sheds any light on the practice of software development. I'm imagining some recent boot camp graduates attempting to inflate their months of programming experience into something more than that. "Hey old dudes in the company I just joined, I found some things I think are basic so I'm going to write an article to indirectly shame you in hopes our manager will see how valuable I am already." They know something isn't quite impressive about their older co-workers, but this list isn't it, and the authors don't even come close to being experienced enough to put a box around it and be thought leaders of any kind.


This comment is classic bulverism. You need to explain how the author is wrong to have a valid argument.


Eh, I won't be that harsh. The author is clearly getting experienced in software development. I actually like ambitious engineers who write from their own perspective. Most engineers (1) are fairly go-along to get-along types -- team-oriented folks. Someone with the interest and communication skills to improve things should be encouraged and (if needed) mentored.

(1) Most engineers we notice could be blowhards. They tend to be a minority of any particular engineering community.


I don't think these accusations are fair. The author has been programming since at least 2009, and is degreed in math. They also wrote the "think in math, write in code" article that people here seemed to like quite a bit last week.


> The author has been programming since at least 2009

Which is to say, since the author was 12 years old if they followed a standard K-12 + undergrad program (graduated from a Utah Valley University last year it seems).

I think the irony in the article is that the author is very likely 22 or 23 years old and opining about how developers that have been coding for longer than he's been alive still just-don't-get-it. I guess you'd just expect this kind of article from someone with more experience in the field.

I did like this, however:

> Poorly designed software lacks conceptual integrity...It usually looks like a giant Rube Goldberg machine that haphazardly sets state and triggers events.

That is, it seems, the modern web :)


>Which is to say, since the author was 12 years old if they followed a standard K-12 + undergrad program (graduated from a Utah Valley University last year it seems).

Graduated from the university in 3 years (check the dates), which doesn't nullify your proposition, but it makes the other possibilities just as likely, such as that they went to university well into their adult life, applying credits from a previous (unlisted) university career.


Given he has work experience from prior to his university dates, it’s more likely that he got his degree after having already worked for a few years.


Programming in academia has very little to do with software in general, maybe that's the disconnect.


If you look at his resume from a sibling comment of the one you were replying to, you’ll see that his experience is in industry, not academia


He's also arguing that code is math, which is academic bullshit at its finest.


Abstract Algebra makes its way into code sometimes...

https://mikhail.io/2018/07/monads-explained-in-csharp-again/


You can express mathematical abstractions in code, this doesn't imply that one is a subtype of the other. You can express a lot of things in code that have nothing to do with math.


I think it would be more useful if you could provide specific criticism - i.e what you think is wrong and why. Your comment seems rather general. I could imagine copying and pasting it beneath a wide variety of articles and it would be equally applicable.


People new to the industry aren't always aware of how those "good practices" are simply a product of the industry zeitgeist. If you've just now gotten your feet wet, you're fully up-to-date.

For example, a preference towards writing pure functions might have gotten called out during code review as recently as 10 years ago. Immutable programming can result in a larger memory footprint and/or more GC activity, and years ago that was considered a deal-breaker... even in environments with plenty of RAM and CPU headroom.


Amen, I don't even understand why this got upvoted ....


Just looking quickly at his about-me page, it seems he has at least 6 years of programming experience going by GitHub projects; his oldest project, the iOS color wheel, argues for some level of skill/experience.


His CV suggests otherwise. He is experienced.

https://justinmeiners.github.io/files/cv.pdf


If you notice in his CV, he doesn't have long-term experience on single projects (he also doesn't have that much experience). I often see these types of posts, and I remember making similar kinds of comments around the 10-year mark, none of which are really invalid points; it's just that there is more to it. I'm coming up on 40 years of programming, and I'm way less clear on the essence of design and skills (which I have a lot of opinions about), but it is quite nuanced and often boils down to knowing the difference between "not enough," "just enough," and "too much."


Still doesn't get opening quotation marks right though....


I'm curious where you think that he is mistaken? I enjoyed the post so it'd be useful to know why you didn't think it was valuable.


I have a feeling you felt hurt by those points?


Please don't cross into personal attacks on HN.

https://news.ycombinator.com/newsguidelines.html


Not even close


>However, there is a certain level everyone should know

Sadly that's still debated and you can only get piecemeal ideas from blogs and job postings.

There's no central authority to approve developer skill levels, one that says "every programmer should know these skills before joining the profession - after that, it's up to the specific job" and then enforces it legally via certifications and official exams.

Right now, it's up to specific jobs and blogs and you have to figure that out.

It's ambiguous and that frustrates people who like exact and measurable goals when learning things.

I know it seems like one blog isn't a big deal, but I've read many blogs that add things like "10 things every programmer should know" and this language just never ends.

If you took the union of all the things from every "things a programmer should know" article, you'd be studying for about 6-8 years before you got a junior position.

>For example, if a successful login generates a session token and it collides with another token, you could reject the login and have the user try again

Eh, don't even go back to the user. Collisions should be a rare enough occurrence that you can make a second call to the token generator and just replace the bad one. If you get common collisions, you need a better token generator.
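
A rough sketch of that retry-on-collision approach (the session store and its exists() check are hypothetical):

  import secrets

  def new_session_token(store, max_attempts: int = 3) -> str:
      for _ in range(max_attempts):
          token = secrets.token_urlsafe(32)   # 256 bits of randomness
          if not store.exists(token):         # collision check against the store
              return token
      # With tokens this large, getting here should effectively never happen;
      # if it does, the generator (or the store) is broken.
      raise RuntimeError("could not generate a unique session token")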


Kinda frustrating that an article calling out where programmers fall short was a non-responsive document that was almost unreadable on mobile. Mildly ironic?


No, mildly unrelated and orthogonal. Not everybody cares for their website to be read well on mobile (I, for one, don't), and not everybody who runs a blog has/wants control over the posting environment (they just pick an engine and theme and be done with it -- if the readers find their work valuable enough, they can spend some effort to read it).


Actually you very specifically choose your posting environment as an individual, what an odd and misleading statement. It’s a shame that you write content but don’t believe it to be important to care about the largest fraction of readers. I’d hope for your sake that you reconsider.


>Actually you very specifically choose your posting environment as an individual

And most people just choose one that's easy to register for, use, and set up, and could not care less about details like design, typography, special formatting, load speed, and so on. They just want to put their words out, words being the important part.

The rest is more common for designers, fiddlers and pedants...

>what an odd and misleading statement

And also, how true and common...


The 12pt font does make it pretty unreadable on mobile, but thankfully Firefox reader mode fixes the issue by dropping the stylesheet.


Do blocks of text need responsiveness in order to be readable on mobile?

What problem are you experiencing?


As in, the page width stays at full desktop width, and to make the font a readable size you must zoom in, making reading a scrolling-in-two-directions exercise. So yes, blocks of text should be responsive to screen size.


The text is very, very small on a mobile device (XL devices may be easier), requiring anyone with a small or normal-sized device and without near-perfect vision to zoom in and swipe left/right to read a single line of text.

For example I'm on a normal sized device and can't read the text at a zoom level that shows an entire line without my glasses (and my vision isn't too bad, I'm legally allowed to drive without my glasses).


This is what "reader mode" is for.

The text is using good old-fashioned HTML markup (basically just h2, p, code, a, em).

So there's no reason why you should have to use the stylesheet provided by the site, and with a sensible browser you don't have to.



I’d argue that it’s quite on topic, being a post about the skills developers lack. It seems this developer doesn’t regard a11y as an important skill, which is unfortunate or perhaps just an oversight.


An article about bad programmers by someone who is so bad that he can't handle putting up simple static text on a web page sanely is wildly relevant.


Well, I do agree with some points in the article, but I don’t believe even the author really understands how the environment he runs his code on works. There is more to it than just knowing the language: one might argue that you have to know the OS (see the fsync misuse by Postgres), how all the involved drivers work, how the CPU works, and the list never ends.

It is quite an idealistic view; in reality you have to accept that you never know how everything works, otherwise you will spend a lifetime in academia.

On the Software Engineering side (not just programming), the most important skill to me is curiosity about the business, not just the code. Endless devs die on the hill of tabs vs. spaces, when their job is to deliver for the business, not for their ego. Surprisingly, it is a common disease (based on my experience at Google and other companies). They create their own wonderful world of logic and sense, and think that everything that violates it is stupid (like a business person asking for a feature).

The skill poor engineers lack is actually common sense.


In all fields there is a minority who are not very good at what they do. I think, though, that in programming that minority might actually be a majority. The thing is, if you are a poor plumber who causes floods in people's houses, you are not going to be in business for very long. In programming there seems to be no such discipline, because it actually takes somebody good at programming to perceive the difference between good and bad programming. And if the programming project is a disaster right now, the cause might be bad decisions made 5 years ago, and the people who made them are already gone, thereby shielding themselves from the consequences of their actions. In a healthy programming department there should certainly be a small influx of new people and new ideas, but also the stability of people who see the project as their personal responsibility.


Programming is more like the trades than you think. A bad plumber might occasionally cause a flood, just like a bad programmer might occasionally cause a hard crash during an important sales presentation, but actually bad plumbers use subpar materials and install them incorrectly, and you don't know until 5 years later when your first floor is flooding because a pipe joint inside the wall finally burst after the 1000th time water hammer slammed it.

And, like programming, sometimes you'd find someone who actually just didn't know what they were doing because they were inexperienced, and sometimes you'd find someone who was rushed and underfunded, but in either case they are generally long gone, and the cause of the ultimate failure isn't perfectly clear, so the world turns and everyone keeps plumbing, for better or worse.

I can tell a similar story for almost all trades.


I used to work in plumbing. One morning I came to work and saw a waterfall coming from the 3rd floor. The apartments were complete, with kitchen furniture installed. Turns out one idiot didn't solder an elbow, while another didn't test it properly, and a third signed it off and gave the green light to turn on the water for the entire block. It took many people and a lot of time to undo this. Not much of a difference when things go wrong in code.


One of the best clients I had told me something once (she knew IT pretty well and was a very good manager). (paraphrasing) In her experience, only 10-15% of people in IT actually knew what they were doing and were good at it. The next 20-25% had something of an idea and could get stuff done, but you had to pay attention to them. The bottom 60-65% were basically useless.... I think it does take a certain <something> to really grasp this field at a deep level, but as most jobs are CRUD-type ones, that level of skill is not essential - and as always, the money / business side of things always trumps the engineering.


> The thing is, if you are a poor plumber who causes floods in peoples houses you are not going to be in business for a very long time.

That might be true if the pipes leak after repair or installation. But what if the job holds up long enough to allow the plumber to avoid responsibility?

Same with software. It has to just work well enough to appear functional and for long enough to allow the developer to quickly shed responsibility and put distance between them and the customer. Then blame them for the error and force them to pay for support.

Dealt with an ERP system like that. Didn't matter how fragile or crappy their system was, you had to pay tech support to fix their bugs (Windows 7 updates broke it frequently...). Twice it nuked its own DB and had to be restored from backups. We had to pay tech support to tell us that. They're still in business. Once they've got you on the hook and their system becomes the company's lifeblood, you tend to eat the crap sandwich and pay up. Then if you leave a bad review they sue you for defamation.


> it actually takes somebody good at programming to perceive the difference between good and bad programming.

sorry that’s not true. there are plenty of products (let’s limit ourselves to digital products) where consumers can differentiate between good and bad programming. they may not realize it’s the programming that’s the problem, but they can tell the difference.

for products where the user can’t tell the difference, does the difference matter? good enough is literally good enough.


The customer can and will perceive bugs. E.g., I used to own an MP3 player that would crash regularly. Everybody can notice that a crashing MP3 player is lacking in quality. What is more difficult is knowing who in the team contributed to a good or to a bad result and who of the programmers was a dead weight on the team. One really needs to know a lot about programming to say anything useful about that.

I agree that what the user can tell is the most important attribute of quality. There is another important matter, though. This is how easy it is to modify the software. Good code can relatively easily be extended to incorporate new features, and bad code makes this as difficult as possible.


The example is interesting.

  if isDelivered and isNotified:
    isDone = True
  else:
    isDone = false;
is not the same code as

  isDone = isDelivered and isNotified
in ruby, python, js (and more?).

[edit: while I'm nitpicking... breaking webpage text selection is also a clear sign of poor programming]

[Edit2: more unsubstantiated "absolute truth" from OP]:

  Using sleep(), cron jobs [...] is almost always wrong.
false. Any hardware-related code (most of the code humanity has created?) relies on such timing considerations.

  Another common mistake I see is generating a random file or identifier and hoping that collisions will never happen
false. This is a 100% valid approach; the odds of a UUID collision are zero for any practical purpose.


Assuming those variables are all booleans, why wouldn't it be?


Assuming is not something "good developers" do.


They're not booleans in ruby, python, js; they're objects that can be null.


null (or None) is still a valid Falsey value, and in both examples isDone will be a boolean (strict) value, as it is the result of a boolean operation (and).

So the point of the comment is moot...


JS: undefined && undefined -> undefined

Python: None and None -> None


On top of that, truthy values also propagate, so if isDelivered or isNotified are something else that's truthy (such as a 1 from mysql), isDone will not be True either and risk failing in more interesting ways down the line.
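
For instance, in Python (the same propagation happens with && in JS):

  isDelivered = 1   # e.g. a tinyint coming straight from MySQL
  isNotified = 1

  isDone = isDelivered and isNotified
  print(isDone, isDone is True)    # 1 False -- truthy, but not the value True

  isDone = bool(isDelivered and isNotified)
  print(isDone, isDone is True)    # True True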


Not the case in any of the abovementioned languages.


That's what !! is for!! :)


Which is not really needed here. isDone will be boolean, and the other ones are still Truthy / Falsey values...


if the first term is undefined, isDone will be undefined, not false. At least in Javascript.

&& returns the falsy value on the left side of the expression if it evaluates to a falsy value

(undefined && true) evaluates to undefined

correspondingly, || returns the truthy value (which might be an object) on the left-hand side if it evaluates truthy. That's useful for expressions like this:

const a = b || 'default value';

!! turns truthy/falsy values into booleans.


True, !! is still needed to get a True/False instead of the truthy/falsey value...


> Organize and design systems

To extend on that point, I think that better programming happens when you don't simply apply prefab design patterns to every problem or be overzealous with those patterns.

This is where things like object-orientation, MVC, SOLID, DRY, abstraction, inheritance, overly-organized code with lots of categories for different things, decoupling, etc. can be taken too far or be applied to the wrong problems.

Basically, don't believe in any one thing. Just understand different approaches and realize that nearly all of them have a % probability of succeeding. That can even mean writing things in a more procedural way.


> I believe OOP and relational database get a lot of flack

Who gives relational databases flack? The RDBMS and SQL is the cleanest, simplest, and most productive technology stack I've ever used.


I don't think you understand. Maybe this video would help enlighten you on the subject.

https://www.youtube.com/watch?v=b2F-DItXtZs


I've encountered a growing trend of programmers whose experience is mostly on the front-end avoiding dealing with their own database at all and just using Firebase or even DynamoDB for everything.

I personally find an MVC backend framework with Postgres to be the most productive option, but a lot of people disagree.


>Who gives relational databases flack?

Tons of people, it was especially a thing back in 2005-2015 with the ORM craze (and the supposed "impedance mismatch between relational and OO") and then the NoSQL craze (and how schemas and relations are a thing of the past).


My standard for writing "good" code is...

- can my team read & understand it now

- would my team likely be able to read & understand it later

Earlier I wasted a lot of time trying to impress people with clean code, only to later learn those people were never going to care. Now I'm more in the camp of getting it done, which is a heck of a lot easier when you work on a team, and at a company, that shares enough of my values. Getting hung up on "what is good?" is a waste of time, because everyone cares about different things. When you ship stuff, the people who don't care don't speak up, and the people who do care do. When the people who care speak up, that's when you have an opportunity to learn what is important to them.

Don't write complete crap, and don't try to perfect everything. Just try to be productive and not ridiculous; what that means will vary from project to project. Your polished diamond is someone else's stinky turd, and vice versa. For example, I initially learned to write code while hanging out with people who really valued testing everything and aggressive decomposition (e.g. a line of code should only do one thing). I wrote code like that at a new job, and the team hated the number of classes and functions I was asking them to read in a code review.


Some of the bad code cited is produced by high management pressure, in my experience. Be careful how you manage programmers, and try to incentivize optimally.


i call bs on this kind of observation and all advice that claims you need to know A,B,C to be a “real programmer”

imho, you need to 1) be curious 2) continuously learn and want to improve 3) don’t make the same mistakes over and over again 4) share your thought process and be willing to both learn and teach others

yes, sometimes the delta in level of experience is inconvenient but we are all somewhere in our journey. be nice to others.

that is all.


Advising against sleep(), cronjobs, etc. seems insane to me, and I don't understand the rationale here. Wanting to yield until there's more work to do (or poll periodically), or run things on a schedule that need to run on a schedule, etc. are all very common and very valid use cases. Unless the author's recommending using something else?

Maybe that's just because I'm a poor programmer, though :)

EDIT: I guess in the context of waiting for something to asynchronously finish happening, it'd be more ideal to check for an actual indication of success (e.g. via an await, or by listening for a response message) instead of doing a sleep(5) and hoping for the best. Unfortunately, there are disturbingly many scenarios where that ain't exactly possible (or it's "possible" but not practical), especially when interfacing with external systems written by poor programmers :)

Still, the author should probably clarify how the "cronjobs are bad" opinion fits into that context, because without further elaboration it sounds really silly.
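
To make the EDIT concrete, here's a rough JavaScript sketch of the difference between sleeping and actually checking for completion; isJobFinished is a made-up status check for some external system:

    // Hoping for the best: sleep a fixed time and assume the job finished.
    async function waitBlindly() {
      await new Promise(resolve => setTimeout(resolve, 5000));
    }

    // Checking for an actual indication of success: poll a status check
    // with a timeout instead of assuming five seconds is always enough.
    async function waitForJob(isJobFinished, timeoutMs = 60000, intervalMs = 1000) {
      const deadline = Date.now() + timeoutMs;
      while (Date.now() < deadline) {
        if (await isJobFinished()) return;
        await new Promise(resolve => setTimeout(resolve, intervalMs));
      }
      throw new Error("job did not finish before the timeout");
    }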


The question is: if making a schema to distinguish between "good" and "poor/naive", can one do so whilst keeping biases in check, viz. without inadvertently putting one's own understanding (or unknown lack of understanding) in the "good" camp?

Unless you're EWD, the answer is "no". No you cannot.


> You may have seen code which misunderstands how expressions work:

> if isDelivered and isNotified: isDone = True else: isDone = false;

> Instead of:

> isDone = isDelivered and isNotified

Are people actually finding code like this in professional work or is this just an example? I'm self-taught and know I've got some gaps, but this example is so fundamental I find it shocking.


In my 20something years of writing software for a living I've seen both. And I find it very hard to care which one someone on my team uses. It's not something fundamental. Both lines work perfectly well. They do the same thing. The second is more concise and 'better', but if the only improvement you can suggest in a code review is something like shortening a line then the code is basically fine.

A difference between a good programmer and a bad programmer is that the good one writes working, readable, human-understandable code that does the job it's supposed to do. That's it. Someone who writes code that's shorter without being more understandable, or faster, or significantly more efficient, isn't better.


I've worked on codebases where people consistently did things in 3-6 lines that could have been done in one (without it being an overly complex line).

It was massively detrimental to readability, because it meant what should have been a 5 or 6 line function suddenly became 20 or 30 lines. A 20 line function is suddenly 100 lines. And you had to wade through each part to work out what it was doing rather than just glancing at it and it being obvious...


> A 20 line function is suddenly 100 lines.

...and what's worse, that then gets broken up into smaller functions containing equally trivial-but-bloated code, and now you need to jump around even more to understand something that's actually very simple.

At the extreme end of the terseness scale are the APL-family languages, and the people who work with them have no problems reading or writing code like this:

https://code.jsoftware.com/wiki/Essays/Incunabulum

I think it really says something about the state of software development when you consider that a not-insignificant number of people are perfectly fine with that level of density (and they are making $$$$), and yet there are others who will complain that even K&R's style is not verbose enough. IMHO it's a matter of education and attitude.


You have a point but the issue with redundant boolean code like above is that it shows a lack of metalevel thinking.

kinda like this exaggerated Python code:

    import operator

    def sum(a, b):
        return operator.add(a, b)
It can lead to bloat.


That's not really a fair comparison. The first example is a fairly natural way that a reasonable person could model the logic in their head. Knowing that your programming language provides a more concise way is an optimization. A basic one admittedly, but a mere optimization nonetheless.

In contrast, your example is kind of the opposite situation - it's an example of forgoing the most natural, human representation in favour of a highly Python-specific one. In other words, the first one happens when you know the abstract logic, but don't know your tools very well; your example happens when you know lots of useless details about the tool but you're struggling with the abstract logic.

Actually, the example from the article suggests to me that the author misunderstands what I regard as the most fundamental programming skill: knowing the precedence of correctness over all other concerns, including efficiency. It's better to attach the right parts together badly than to attach the wrong parts like a pro.


I've seen it in professional work. If it's actually a single variable conditional, that's irritating but by itself not worth more than a comment in the PR. However, in practice it's rarely this cut and dry. I usually see conditions expanded like this for clarity reasons. For example, I consider this kosher if it's in some business logic... instead of writing:

> return a && !(b || c) || d;

I've seen (pardon the formatting, I'm typing on my phone):

> if (d) {return true}

> else if (a) {return !(b || c);}

> else {return false}

It's usually a choice to be verbose for clarity.


I think that would be clearer as "return a && !b && !c || d". Also, your second example doesn't directly correspond with the first, because condition 'a' is evaluated first.

The major problem with code that's "overly branchy", as in containing a lot of if/else, is that you're forced to go through each case when trying to understand how it works, and it often proliferates into even more branchy code as someone makes a bugfix (with another if/else) in one of the cases, but neglects to see that a similar if not identical change must be made to some of the others.

In other words, if/else cases like your second example optimise for micro-readability when what's often important in debugging and understanding is macro-readability.


While I get your point, several thoughts:

1. I'm talking about acceptable code, as opposed to "the best choice". There are good reasons to go for verbosity. Depending on the problem domain I might agree with your expansion of the parentheses, but when we get to this type of discussion, usually my overwhelming reaction is "this is bikeshedding," because...

2. If it's "overly branchy" code and you're worried about causing macro-readability issues, the answer is to refactor, not to compress. Modifying inline conditions runs just as much of a risk of becoming inscrutable. You choose between following branches and enumerating binary tables. If it's showing up in too many places, you likely want to extract either expression into its own function.

When overly branchy code happens, my experience is that the root cause is underthinking/overthinking the method, not taking time to design the right high-level abstractions, or, as you mentioned, too much repeating yourself. Generally, the fixes for those issues don't have that much to do with your choice of boolean expression vs conditional.


  // a b c d
  // * * * 1  true
  // * 1 * 0  false
  // * * 1 0  false
  // 1 0 0 *  true
  // 0 * * 0  false

  if (d)    {return true }
  if (b||c) {return false}
  if (a)    {return true }
  return false


> return d || a && !(b || c);


I see this in professional work. The former reads more like English, and so it feels like there is an argument to be made that it is more readable, and also an argument that "the compiler will optimize it for you anyway".


Are they wrong? To me, the long version is valid precisely because it is explicit (more so to me than the short version), and it will be optimized anyway.


And you question whether it would be better to pack that readability into a function or not.

Also, why would anyone store a boolean named isDone rather than just returning it as the contract of the function?


Yes, absolutely. Even the biggest Fortune 500 companies hire interns and juniors to code in relatively complicated apps and systems. Sooner or later they hire more mid-to-senior level engineers to clean up the mess. That example isn't even bad, either. At least you can understand it by reading it. If you've never had to spend days deciphering an ouroboros of several nested loops and if-statements in 2000+ line functions in classes with multiple layers of inheritance, written by someone (who has since left the company) with a lack of experience or talent, you are one lucky individual IMO.


Yes.

And a coworker wrote his own string join method in Python (and was mad he wasn’t a senior engineer).


during his job interview or after he was hired? (jk)


Is anyone willing to explain the second line? I get the first means "if isDelivered and isNotified are both true then set isDone to True, if not set it to false". However I've never seen a variable being set "isDone = isDelivered" as part of a logical test.

Personally I think programmers who are "too clever" are cancer inside a codebase. Sure you saved a bunch of keystrokes but your code golf has locked the code forever unless someone is willing to rewrite it. This is not a bad example but in general I think people who strive to do things in less characters are highly problematic.


It feels clever if you aren’t used to it but it is super common.

isDelivered && isNotified

always resolves to a Boolean, either TRUE or FALSE

So you are just saying

isDone = TRUE

or

isDone = FALSE

that’s all


It isn't (isDone = isDelivered) and isNotified, it's isDone = (isDelivered and isNotified). It's just an expression being assigned to a variable.


Ah ok, great, thanks.


I tend to agree that, given that the compiler doesn’t care most of the time, saving characters can’t justify a decrease in readability.


The worst production code I've seen (Java) included its own Object interface and a corresponding ObjectImpl (IIRC it was an abstract class).


I've seen plenty of if(!(var != true) == false) and the like --- some people just seem to not really understand the concept of booleans.


It often goes much further than that.

I was a teacher's assistant in the intro-to-programming courses in college for 3 years, and eventually realized what students were seeing. Now, with some new hires, I'm seeing the same thing:

They think the expression is "if (.. == ..) { .. }"

Not "if (..) { .. }"

The simple explanation of what boolean expressions are, and that any boolean expression can go in those parens - and that it's the same thing as storing the result to a variable and then using that variable - is like a lightbulb suddenly blinking on in their head.
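
A minimal illustration of that lightbulb moment (in JavaScript, with made-up names):

    const count = 3;
    const ship = () => console.log("shipping");

    // What many beginners think the syntax requires:
    if ((count > 0) == true) ship();

    // The parenthesized condition is just a boolean expression, so this is the same:
    if (count > 0) ship();

    // ...and so is storing that expression in a variable and testing the variable:
    const hasItems = count > 0;
    if (hasItems) ship();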


I have that all throughout a codebase.

I felt momentarily bad about it, and myself, and then realized I have a reason:

The codebase I am maintaining is intended to be readable by people who have extremely varying levels of skill, and who have no intention of ever being professional programmers.

When possible I write stuff so you don’t have to “think like a programmer” to understand it. I’m pleased I use the conditional version more often than not, because I believe it is far more intuitive.


Came here to comment on that too... and was surprised how many replies you had where the fundamental point whooshed right over all these skilled programmers' heads.

The shortcoming is failure to recognize your object has more than 2 states, and that the use of flags should be abandoned in favor of a state variable (or a status bit vector, which is just another form of state variable).


Can confirm. Made this mistake myself, actually. However, to make a contradictory point: apart from being better style, what are the other advantages of using boolean algebra instead of if statements?


Verbosity tends to hurt readability, but trying to cram too much into one line tends to create something where nobody remembers how it works either. So it's really a judgement call on which approach reads more naturally.

My personal rule of thumb on this is that single statement conditions aka "a = {condition}" should be using only "and" operators or only "or" operators and should generally avoid nesting unless there's a clear explanatory comment. So "a and b and c and d" is ok but "a and b or c and d" is sketchy. The reason is that "all" and "any" are idiomatic, but "some" takes more mental work to parse.

Also it's always useful to keep in the back of your mind:

!(a || b) == !a && !b

Whenever I spot either side of the equation in a condition, I stop and consider whether the alternate formulation would be more readable.
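
For what it's worth, JavaScript also has direct spellings for the "all"/"any" idioms; a small sketch with made-up values:

    const [a, b, c, d] = [true, false, 1, 0];

    // "all of these" / "any of these" read directly:
    const allTruthy = [a, b, c, d].every(Boolean);  // same truthiness as a && b && c && d
    const anyTruthy = [a, b, c, d].some(Boolean);   // same truthiness as a || b || c || d

    // De Morgan's law, the rewrite mentioned above:
    console.log(!(a || b) === (!a && !b));  // true
    console.log(!(a && b) === (!a || !b));  // true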


Oh, yes, I saw stuff like this in Big Blue Company's source control.

At a different company, I found stuff like:

    ZERO_INDEX = 0
    ONE_INDEX = 1
    return arr[ZERO_INDEX]


That's usually a sign of overly aggressive linters and static analyzers. They probably got a "no magic numbers" diagnostic.

The solution to this problem is to add some sort of `NOLINT` comment to that line or, if possible, the comment that turns off that particular check. Then you link to the design document describing what "arr" is and how its interface involves accessing indices 0 and 1.

Or you could wrap "arr" up in a class or function of some sort so the code is theoretically more self documenting.


It took me a while to understand the whole "no magic numbers" thing.

I was basically just told that most (if not all) numbers should be stored in constants. So I just added int twentyFive and stuff, and it annoyed me, but my teacher was happy.

Much later, I realized the missing part: Name the constant not what it is, but rather what it does. So int rightBoundary instead of twentyFive, and the code suddenly becomes much more readable.
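
A tiny before/after sketch of that lesson (the names and the 25 are made up):

    const x = 30;
    const wrap = () => console.log("wrapping to the next line");

    // Named for what it is -- no more readable than the raw 25:
    const TWENTY_FIVE = 25;
    if (x > TWENTY_FIVE) wrap();

    // Named for what it means -- the intent is now in the code:
    const RIGHT_BOUNDARY = 25;
    if (x > RIGHT_BOUNDARY) wrap();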


Yeah, linters that make "you should usually do X" into "you MUST ALWAYS do X" cause a lot of terrible code.


We had no linters enforcing the "no magic constants" convention. If we did, I'd question the sanity of the engineer who added that linter.

The reason we got code like this was because of outsourced contractors cargo-culting something they saw elsewhere.


Well, maybe it's cargo culted from a place with such a linter. Or ornery peer reviewers who think their job is to be a human linter.

Correcting this sort of stuff is the job of a senior engineer, though. If it's extremely frustrating, I'd become independently wealthy or stay away from technical leadership positions.


I've seen variations of it. Usually along the lines of

   let isBoolean = false
   if (booleanComparisonOrComputation) {
     isBoolean = true
   }

   ...


All the time. There are a lot of reasons people write really bad code and being a bad programmer is only one of those reasons.


yes, and that's a pretty tame example. Don't assume that any code you see is correct. A lot of misunderstandings about how a language works can be carried on for decades within a team.


This is extremely common.


Many, many times.


I read it. To me, it came off as edgy, brash and inexperienced rather than doing what it intended to do - influence people on what skills they should pick up so that they may, according to the author, not be a 'poor programmer'.

Often, there are two factors that lead to very different code than usual:

- Lack of time and hence taking shortcuts

- Simple code is different from clever code

And, the TL;DR that I wish to convey back is:

- All code sucks given the right circumstances

- A skill learnt over time: being a snob is not the way to influence people

- Everyone starts lousy and gets opportunities to learn and become better - even better than you

- The way people think and envision things is very different, and operating philosophies are very different. There are multiple pathways to succeed and the skills you mentioned do not appear in all of them.

Any time you wish to state such an opinion, please think about what would influence 'poor programmers' to do better than status quo.


Poor programmers are optimistic:

* computers will always run my code fast

* infrastructure problems will not happen

* our team will always have plenty of time to understand my code

* users are not malicious and the libraries I depend on are not malicious

* I will always have plenty of time to diagnose and fix problems in this code

The author covers some of that.

But you can care about those, and still be a poor programmer: being too pessimistic is problematic. You need to pick your battles and spend your time mindfully.

Now, I have a problem with this:

> Naive programmers think that design means “don’t make functions or classes too long”. However, the real problem is writing code that mixes unrelated ideas.

How do you enforce that policy using a linter? How do you audit a large project for these issues? You can count characters in a line, you can count lines of code in a function or file, and you can count import statements. All those are indicators of coupling unrelated ideas together.


You don't. (to both questions)

I don't know about you - but I can spot it by glancing through an MR, and I just expect the code author to explain their solution in the description or for it to be "obvious" to me. So the solution to not having these problems is code review.

And if you have this problem then, well, you have a bigger problem (bad CR). But fixing it all at once is a bad idea. Just fix the modules you are working with/at.

EDIT: I spot it by checking imports/exports and API surface, so it could be automated.


You cannot do CR retroactively.


>Poor programmers are optimistic

completely agreed, just had a long conversation with a junior engineer about how adding a cache would not solve a complex architectural problem and just add more complexity.

> All those are indicators of coupling unrelated ideas together.

One useful approach I have found is using automated tools to spot indicators like these. In Java, using Checkstyle's cyclomatic complexity checks and class fan-out is a strong indicator (not proof, though).


That's different. That engineer is just mistaken, but at least they acknowledge that a problem exists; that's a good starting point.


"There are no tricks or rules that you can follow to guarantee you will write good software. As Alex Stepanov said, 'think, and then the code will be good.'"

Excellent conclusion, can't think of anything else to add. I don't know how many times I've seen programmers apply the latest patterns and use trendy libraries only to make a terrible mess out of things.

I literally heard a guy say last week "we should use MongoDB, it is better than SQL". No context, no arguments; he'd just read a blog entry or I don't know what.

There are no silver bullets; think, and then the code will be good :)


> On the other hand, what if you generate storage files with random names and you have a collision? You just lost someone’s data! “This probably won’t happen” is not a strategy for writing reliable code.

If the entropy is high enough, and the likelihood of collision low enough, then this is a very useful tool for certain situations, particularly distributed systems. I suppose IPFS (and even Ethereum) was written by "poor programmers"?


There’s a lot to get right here. I wouldn’t call using a UUID or its equivalent a random name. It has random components - sure, but the format is intentionally structured to reduce collisions.
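
For a sense of scale: a version-4 UUID carries 122 random bits, so by the birthday bound the chance of any collision among n names is roughly n^2 / 2^123. A rough Node sketch (crypto.randomUUID is available in recent Node versions):

    const { randomUUID } = require("crypto");

    // A random, collision-resistant storage name:
    const storageName = `${randomUUID()}.dat`;

    // Birthday-bound estimate of a collision among n such names (122 random bits):
    const collisionProbability = n => (n * n) / 2 ** 123;

    console.log(storageName);
    console.log(collisionProbability(1e9));  // roughly 9.4e-20 for a billion files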


Aren't Ethereum wallet private keys 'completely' random?


Skills are nouns, but the list was all verbs. Weird.


Skills which someone has put a name to already are nouns. But if you are trying to say something meaningful about the skills required in a rapidly-evolving field, it is going to be hard to find prepackaged nouns and so you are going to need to use verb phrases.

——

EDIT: for example, the notion of a language “working” in a mechanical sense has only been around for the past 60 years. If there is a specific noun to refer to understanding those mechanics as you type, it is obscure. This 28-year old software engineer who has worked in both the US and UK has not heard it.


Skills are generally gerunds, which are nouned verbs. "Nouned" is an adjective made out of a verbed noun.


> In JavaScript, this is often indicated by new Promise inside a .then().

I have found myself doing this sometimes, why is it considered bad practice?


I guess because inside of a then() you can simply return the success case and throw the error case; there's no need to create a new Promise and call the resolve/reject functions.
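
A rough sketch of the two shapes, with made-up promise-returning helpers:

    // Made-up promise-returning helpers, stubbed out for the sketch:
    const fetchUser = id => Promise.resolve({ id, active: true });
    const loadProfile = user => Promise.resolve({ user, bio: "..." });

    // The smell: wrapping work that is already promise-based in a new Promise.
    fetchUser(1).then(user => {
      return new Promise((resolve, reject) => {
        loadProfile(user).then(resolve).catch(reject);
      });
    });

    // Inside then() you can just return the next promise (or value) and throw errors:
    fetchUser(1).then(user => {
      if (!user.active) throw new Error("inactive user");
      return loadProfile(user);
    });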


Was trying to think of exceptions to this when I read it, and could only think of one: when you need to wrap a callback API with unusual callback arguments, where a `promisify`-like helper won't work. Then again, I still feel the wrapping function should be defined outside of the `then`, as it feels like this is a separate utility to the work being done in the Promise chain.
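
For that exception, a sketch of what I mean, with a made-up legacy API whose callback signature doesn't match the (err, result) convention util.promisify expects:

    // Made-up legacy API: the callback receives (status, data) rather than
    // (err, data), so util.promisify can't be used directly.
    function legacyLookup(key, callback) {
      setTimeout(() => callback("OK", { key, value: 42 }), 10);
    }

    // Wrap it once, outside any .then() chain, as its own small utility:
    const lookup = key =>
      new Promise((resolve, reject) => {
        legacyLookup(key, (status, data) =>
          status === "OK" ? resolve(data) : reject(new Error(status))
        );
      });

    lookup("answer").then(data => console.log(data.value));  // 42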


This is the context I’ve used it in


Ah ok, they’re saying don’t use a new promise to do async logic not don’t use them for more sync things later down the chain. Thank you!


> Programmers who work only on old programs never learn to write new ones.

Maybe give Linus a shout and ask him to call it a day. ;)


Another skill: making a website readable on mobile.


>I believe OOP and relational database get a lot of flack because programmers tend to be bad at design, not because they are broken paradigms.

OOP has Fundamental and Intrinsic problems that can be described in a very concrete way.

If you believe OOP gets a lot of flack just because programmers are bad at design, then you are the one that is also bad at design.

I will say this: OOP is bad for many, if not most, design problems, and most people who hate OOP don't even know why OOP is bad. They just have this gut feeling and bad experiences, but they can never pinpoint the concrete reason why OOP tends to lead to bad designs. A lot of people who like OOP really like the mind-bending design patterns, but they don't realize that these patterns often offer limited flexibility and that the catharsis of creating a design pattern abstraction is just an illusion.

So really what's going on is nobody knows the true nature and goal of design in programming. There's no theory behind it. To first know why OOP is bad you need to know the fundamental nature and goal of design when it comes to programs.

The goal and nature of design is abstraction. When designing programs we want to start from primitives, then compose those primitives into higher level abstractions, then take those abstractions and form them into even higher level abstractions until we achieve the final level of abstraction that represents the program itself. The key insight here is that because everything starts with a primitive, the way your abstractions are designed depends entirely on your primitives and choice of primitives.

A good primitive must be able to additively compose with other primitives to form every other possible abstraction that the program may possibly need. "Additively" is the keyword here, because if you subtract information from your primitive during composition it means your primitive is not "primitive" enough, and that in actuality the "thing" you're dealing with may be two primitives: what you tried to "subtract" from the primitive, and the remaining part of the original primitive itself.

Bad design often involves dealing with bad primitives. You may find yourself realizing that you have abstractions that cannot be built out of the composition of the primitives that you have. You may find that you have to split up one of your primitives and realize that since it's all encapsulated in a micro-service there's no easy way to do this, so you take on technical debt by making a redundant component that does what you need. You may not even have a notion of what the primitives of your program are, and instead design things at the highest level of abstraction with zero code re-use.

99.999% of all programmers have no notion of what primitives are, and design programs in a way where they have a potpourri of components that are a mishmash of high level logic combined with low level logic, with no notion of forming higher levels of logic from the composition of lower level primitives. Yes, 99% of programmers are like this; literally take a look at yourself and your colleagues and tell me who out of all of them has actually linked the notion of the "design of programs" with the "choice of primitives." In fact, the entire blog post never mentioned the word "primitive" or "axiom" once in the entire write-up on design.

Which brings me back to the initial topic of why OOP is bad:

OOP is bad because the object is a bad primitive.

Objects are actually arbitrary mixtures of lower level primitives: functions and data. It is far easier to compose functions with functions and data with data than it is to compose these arbitrary mixtures called objects with other objects. Object compositions often involve surgical grafts of one object into another object, resulting in a hideous dependency. A lot of people like to use tricks and have this happen at runtime; people tend to call it dependency injection, a very abstract and clever concept but also very very very bad.

Meanwhile composing two arrays:

   [1,2,3] + [4,5,6] = [1,2,3,4,5,6]
Composing two functions:

   function compose(f, g) {
      return function(x){
         return f(g(x))
      }
   }

   b = function(x){return x+1}
   c = function(y){return y*2}
   a = function(j){return j-3}
   d = function(e){return e*e}


   g = compose(b,c) // g(x) = (x*2)+1
   t = compose(a,b) // t(x) = (x+1)-3
   l = compose(d,d) // l(x) = (x*x)*(x*x) = x^4
... you get the picture.


The real idea of OOP is quite similar to the actor model a la Erlang. OOP as implemented in C++ and Java is an abomination. I still don't get how Bjarne Stroustrup did not steal co-routines from Simula.

It's very easy to implement actual actor model style OOP with procedural code. You can get most of the benefits by using a message bus.
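
A very rough sketch of that idea in JavaScript, plain procedural code plus a message bus, with all names invented:

    // Minimal message bus: "actors" register a handler per address and talk
    // only by sending messages, never by calling each other directly.
    const bus = (() => {
      const handlers = new Map();
      return {
        register: (address, handler) => handlers.set(address, handler),
        send: (address, message) =>
          queueMicrotask(() => handlers.get(address)?.(message)),
      };
    })();

    // Counter actor: keeps its state in a closure and reacts to messages.
    (() => {
      let count = 0;
      bus.register("counter", msg => {
        if (msg.type === "increment") count += msg.by;
        if (msg.type === "report") bus.send(msg.replyTo, { type: "count", count });
      });
    })();

    // Logger actor.
    bus.register("logger", msg => console.log("count is", msg.count));

    bus.send("counter", { type: "increment", by: 2 });
    bus.send("counter", { type: "report", replyTo: "logger" });  // logs: count is 2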


>C++ and Java is an abomination.

It's also the OOP version I'm talking about. Most people are talking about this when they talk about OOP, not Smalltalk. Why does everyone take it in this direction... yes, Smalltalk was first, but nowadays the term as traditionally used is not OOP as defined by Smalltalk, it's OOP as defined by Java.


> Objects are actually arbitrary mixtures of lower level primitives: functions and data.

A well-chosen object is a grouping of a set of data items that are closely related to each other, and the functions that act on them. It's almost the exact opposite of arbitrary.

Your programs are going to be a mixture of data and functions. Why should the lower-level building blocks not be the same?


Ideally the choices you make would fit the domain. However for very large projects that have existed for a long time, three things happen.

1. Requirements change and this causes a change in primitives.

2. Small errors in design and scope creep in because objects allow you to mistakenly scope functions with the wrong piece of data; over a long time these small errors accumulate into something called technical debt. I call it inevitable.

3. Correct design principles appear after the project is complete, often because the design solution to the problem domain is unknown. You may have segregated your data and functions in the wrong scope because you simply didn't know what your program would end up doing when you were finished.

By forcing your data and functions into groupings like this you lose flexibility of composition, and you take away the future-proofing of your design. Too often programmers realize that the lines of segregation in their designs are wrong or have changed. They see a function that should have been universal but is trapped on an object mutating data, so they're forced to make a similar method on another object, and they just call it a trade-off between technical debt and time.

This happens all the time on large projects because the concept of the object as a primitive is wrong. The primitives are data and functions, not the two combined into an object.

Without that arbitrary artificial line called an object drawn around data and functions, your code can react instantly to design changes. You can change a function without damaging the entire context the function relies on, because it has no context in the first place; functions are independent and modular from context, while methods are inextricably tangled with state. And therefore when the time comes to inevitably change your program, you create technical debt to get around the immovability of your methods.

I understand the need to place everything under a single primitive type, and while this is ideal it only works if primitives can compose, which objects can't. You will see across mathematics that the quest for an elegant assembly language, a simplified language to describe the field, first began with set theory then transitioned to category theory. In both theories the dichotomy between function and data remains solid; if they couldn't be unified for theoretical math, then it likely says something about the primitives of the universe and your designs as well.


If you separate functions and data, do you keep related data in clusters, or do you keep it as primitives?

If you keep it in clusters, then you have all the same problems. Your functions depend on the structure of the clusters; if they change (for all of the reasons you describe), then the functions are broken just as much as if they were grouped with the data in objects.

On the other hand, if you keep the data as primitives, then you have a lot of primitives scattered all over the place, some of which have to maintain relationships with each other, even though they're not grouped. That gets difficult to manage, no matter how nicely everything composes.

You object that mathematics doesn't group things like this. Well, programming is not a purely mathematical activity - it's an engineering one. So even if your observation is accurate, it is not all that relevant.


>If you separate functions and data, do you keep related data in clusters, or do you keep it as primitives?

The data exists in primitives. And yes, you are correct, you eventually need to group this data, but the data must exist in both forms: as primitives and as organized compositions. But when you start out building your program you start with primitives at the root and you build up the organization. Allow me to elucidate.

Think of the way your program is organized. You have layers of logic from the lowest level that's primitive to the highest level that's organized and closest to the user.

Let's say you build an app that's a phonebook and prints the addresses of people. You can break the data into two primitives at the lowest layer: Person and Address. Then in the next layer you compose Person and Address into PersonWithAddress. Much more organized, but then let's say Companies are added to your design. Now all you have to do is add a Company primitive to your design and create a composition called CompanyWithAddress at the next layer of logic.

If you decided to group your primitives immediately and define a Person with address attributes included, you would get design problems later on. Think about what happens when the company is introduced but no Address primitive exists. You're either going to create the Address primitive and have the Person data structure forever keep redundant attributes, or you have redundant address attributes in Company. How would you then define a function FindAllNeighboringAddresses when addresses exist in a fractured/redundant state in your program?
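
To make that concrete, a rough sketch with plain data and standalone functions (all names invented):

    // Primitives: plain data shapes, kept separate.
    const address = street => ({ street });
    const person = name => ({ name });
    const company = legalName => ({ legalName });

    // Compositions built from the primitives at the next layer up:
    const personWithAddress = (p, a) => ({ ...p, address: a });
    const companyWithAddress = (c, a) => ({ ...c, address: a });

    // Because Address stayed a primitive, address logic stays universal:
    const findAllNeighboringAddresses = (target, entities) =>
      entities.filter(e => e.address && e.address.street === target.street);

    const home = address("12 Elm St");
    const alice = personWithAddress(person("Alice"), home);
    const acme = companyWithAddress(company("Acme"), address("12 Elm St"));
    console.log(findAllNeighboringAddresses(home, [alice, acme]).length);  // 2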

Obviously this is a trivial example, but the same type of problem happens all the time in a more complicated way on projects much more complex than this. When a team encounters such a problem they either add more technical debt, or rewrite a huge portion of the program, or rewrite everything. Choosing the right primitives are a critical part of design.

The entire purpose of primitives is maximum flexibility, so that you can organize your program to do whatever you want it to do. Objects as a primitive will lock your organization into a specific purpose way too early and often incorrectly. Objects are a bad primitive for this and other reasons.

>You object that mathematics doesn't group things like this. Well, programming is not a purely mathematical activity - it's an engineering one. So even if your observation is accurate, it is not all that relevant.

This is a philosophical argument. Either way, historically speaking the foundations of all of software and programming come from the field of mathematics. It is formally defined both algorithmically and computationally, from lambda calculus and von Neumann machines to decidability. One can argue that the entire field is a subfield of mathematics.

Additionally, if the definition of mathematics is simply the creation of axioms and the derivation of theorems and statements from a set of axioms, then programming is the exact same thing. You are literally assigning values to things and deriving new values (theorems) from your initial set of values (axioms).

Unlike other fields labeled with "engineering", the output of a computer is highly, highly deterministic. That is also why, unlike other engineering fields, programming is the subject of formal mathematical analysis more so than almost every other engineering field out there.


you got it wrong :)

> Objects are actually arbitrary mixtures of lower level primitives: functions and data.

Nope, everything is an object: numbers, arrays, functions, etc., are objects.

Objects allow you to describe user-defined data by composing objects

  address1 = Address { street: "..." }
  address2 = Address { street: "..." }
  john = Person {
           name: "john",
           addresses: [address1, address2]
         }
Methods are functions biased toward an object, which allows polymorphism, so you can abstract over data

  function f(data) {
    data.add("foo");
  }  
works for any data that provides an "add" method.

So OOP is complementary to functions.

That said, OOP has several flaws:

- at some point someone decided that OOP was about inheritance or prototype chains, but those mechanisms are more bad than good.

- existing mainstream languages tend to favor mutable objects by default, which leads to bugfests.


>Objects allow you to describe user-defined data by composing objects

No, you got it wrong. :) Re-read what I wrote about dependency injection. You are surgically grafting an address into Person. You must change the nature of Person (the type signature) in order to graft in an address; this is not composition. This is the creation of a dependency. The fact that people call it "composition over inheritance" uses entirely the wrong word. A more accurate, though still not fully correct, term would be "explicit dependencies over inheritance."

Meanwhile examine my function composition. In the composition of two functions into a new function... the nature of either function remained EXACTLY the same. The type signature does not change.

When I talk about COMPOSITION, I am referring to a different sort of composition. My entire piece on the nature of design works on THIS type of composition, which is entirely different from what people talk about when they talk about object composition.

Also, your example was not an Object as defined by OOP. Your example is just data. It's more akin to a dictionary/hashmap/record than it is an "Object" as defined by OOP. An Object has methods that operate on itself, which your example failed to show. Think about how a method on one object would compose with the methods of another object... it becomes a mess. Just letting you know the difference.

:)

>Nope, everything is an object, numbers, array, function, etc, are objects.

This is an arbitrary definition that probably comes from Smalltalk. In foundational mathematics (category theory or the theory of sets), functions and data are different primitives. This makes more sense from a primitive standpoint.

Traditionally, in OOP as popularized by Java and C++, an object is defined by a class syntax definition. On the class you can put methods and data. This class is entirely different from another separate primitive in C++ called an int, where methods don't exist and data and functions are kept distinct. The popular definition of OOP is the one I refer to.

>Methods are functions biased toward an object, which allows polymorphism, so you can abstract over data

The type signature of a function biases a function towards that type. You don't need to "attach" it to the "type." That being said a method has the ability to mutate data while a function does not. This is the true difference between method and function.

:)

>That said, OOP has several flaws: - at some point someone decides that OOP was about inheritance or prototype chains but those mechanisms are more bad than good. - existing mainstream languages tend to favor mutable objects by default which leads to bugfests.

When you create an "object" that is immutable it is no longer object oriented programming. It becomes functional programming. When objects become immutable, then all functions "attached" to the object are simply scoped functions. Whether you define that function in one scope or another scope is not the point as the type signature will control the overall bias of the function, not the scope.

The type of programming described by the definition of OOP as popularized by Java and C++ tends to use accessors and mutators (getters and setters) to expose and change state. With this last paragraph you wrote, you're essentially changing the topic; you're talking about non-traditional OOP. It's a loaded word; you might as well call it what it is: functional programming. Still, under this methodology your scoped methods are scoped in a way where they can never compose with functions outside of the object. It's an arbitrary cut and a bad design for primitives. It forces you to compose data along with methods at the same time.

I'm sorry but you entirely missed the point.

:)


This is just an arbitrary collection of a couple of things. A lot of programmers lack a lot of skills.



