> Most won't care about the craft. Cherish the ones that do, meet the rest where they are
> (…)
> People who stress over code style, linting rules, or other minutia remain insane weirdos to me. Focus on more important things.
What you call “stressing over minutiae” others might call “caring for the craft”. Revered artisans are precisely the ones who care for the details. “Stressing” is your value judgement, not necessarily the ground truth.
What you’re essentially saying is “cherish the people who care up to the level I personally and subjectively think is right, and dismiss everyone who cares more as insane weirdos who cannot prioritise”.
There's another way to look at this: if you consider the school of thought that says that the code is the design, and compilation is the construction process, then stressing over code style is equivalent to stressing over the formatting and conventions of the blueprint (to use a civil engineering metaphor), instead of stressing over load bearing, material costs and utility of the space.
I'm fond of saying that anything that doesn't survive the compilation process is not design but code organization. Design would be: which data structures to use (list, map, array etc.), which data to keep in memory, which data to load/save and when, which algorithms to use, how to handle concurrency etc. Keeping the code organized is useful and is a part of basic hygiene, but it's far from the defining characteristic of the craft.
Some of those formatting conventions are written in blood. The clarity of a blueprint is a big deal when people are using it to convey safety critical information.
I don’t think code formatting rises anywhere close to that level, but it’s also trying to reduce cognitive load, which is a big deal in software development. Nobody wants to look at multiple lines concatenated together; how far beyond that you take things runs into diminishing returns. At a minimum, though, formatting changes shouldn’t regularly complicate doing a diff.
I 100% agree. The problem is that after half a century, the software engineering discipline has been unable to agree on global conventions and standards. I recently had an experience where a repair crew was worried about the odd-looking placement of a concrete beam in my house. I brought over the blueprints, and the technician found the schedule of beams and columns within seconds, pinpointed the beam and said, "Ah, that's why. We just need to <solution I won't go into>". Just then it struck me how we can't do this in software engineering, even when the project is basically a bog-standard business app: a CRUD API backed by an RDBMS.
That’s because the few hard rules you have to comply with have workarounds and rarely matter. In house construction, you have to care about weight, material degradation, the building code, etc… there’s no such limitation on software, so you can get something to work even if it’s born out of an LSD trip.
But we do have some common concepts. They’re theoretical, though, so only the people who have read the books know the jargon.
The rules are what make it flexible. The rules let me understand what the heck is going on in the code you wrote so I can change it. Code that is faster to rewrite from scratch isn’t flexible.
Construction and civil engineering have been unable to agree on global conventions and standards either, and they have a multi-millennia head start over software engineering. The US may claim to follow the "International Building Code", but it's just called that because a couple of small countries in the Americas have adopted it. For all intents and purposes it's a national standard. Globally we can't even agree on a system of units and measurements, never mind anything more consequential than that.
I’d say that globally we have agreed on a system of units and measurements. It’s just the US and a handful of third world countries that don’t follow that system.
> I brought over the blueprints, and the technician found the schedule of beams and columns within seconds
Is that really an example of the standardization you want? It shows that the blueprint was done in a way that the technician expected it to be, but I am not sure that these blueprints are standardized in that way globally. Each country has its standards and language.
If an architect from a different country did that blueprint, I would bet that it would be significantly different from the blueprint you have.
Software Engineering doesn't have a problem with country borders, but different languages would require different standards and conventions. Unless you can convince everyone to use the same language (which would be a bad idea; CRUD apps and rocket systems have different trade-offs), I doubt there could be an industry-wide standard.
But I can't look at the design from my desk-mate and hope to understand it quickly. We all love to invent as much as possible ourselves, and we lack a common design language for the spaces we are problem-solving in. Personally I don't think it's entirely a problem of discipline in software engineering, but a reflection of the fact that the space of possible solutions is so vast for software, and the [opportunity] cost of missing a great solution because it is too different from previous solutions is so high (the difference between a 120-second application start and a 120-millisecond application start, for instance).
> The problem is that after a half a century, software engineering discipline has been unable to agree on global conventions and standards.
It can't, and it won't, as long as we insist on always working directly on the "single source of truth", and representing it as plaintext code. It's just not sufficient to comprehensibly represent all concerns its consumers have at the same time. We're stuck in endless fights about what is cleaner or more readable or more maintainable way of abstracting / passing errors / sizing functions / naming variables... and so on, because the industry still misses the actual answer: it depends on what you're doing at the moment. There is no one best representation, one best function size. In fact, one form may be ideal for you now, and the opposite of it may be ideal for you five minutes later, as you switch from e.g. understanding the module to debugging a specific issue in it.
We've saturated the expressive capability of our programming paradigm. We're sliding back and forth along the Pareto frontier, like a drunkard leaning against a wall, trying to find their way back to the pub without falling over. No, inventing an even more mathematically complex category of magic monads won't help; that's just repackaging complexity to reach a different point on the Pareto frontier.
Hint: in construction, there is never one blueprint everyone works with. Try to fit all information about geometry, structural properties, material properties, interior arrangement, plumbing, electricity, insulation, HVAC, geological conditions, hydrological conditions, and tax conditions onto a single flat image, and... you'll get a good approximation of what source code looks like in programming as it is today.
I think we have to think of software like books and writing. It is about conveying information, and while there are grammatical rules to language and conventions around good and bad writing, we're generally happy to leave it there, because too many rules would be so constricting as to remove the ability to express information in the way we feel we need to. We just have to accept that some are better writers than others, or that we like the style of some authors better.
That would make sense if code was written solely for the enjoyment of its readers, but it isn't.
Code uses text, sometimes even natural-language prose, but isn't like "books and writing". It has to communicate knowledge, not feels. It also ultimately has to be precise enough to give unambiguous instructions to a machine.
In this sense, code is like mathematical proofs and like architectural blueprints, which is a different category to drawings, paintings and literature. One is about shared, objective, precise understanding of a problem. The other is about sharing subjective, emotional experiences; having the audience understand the work, much less everyone understand it the same way, is not required, usually not possible, and often undesirable.
I don't know that we'll ever be able to agree on "global standards" though. Software is too specialized for that.
The only software standard that I'm reasonably familiar with is https://en.wikipedia.org/wiki/IEC_62304 which is specific to Medical Devices. In a 62304-compliant project, we might be able to do something like your example, but it could take a while. OTOH, I'm told that my Aviation counterparts following DO-178 would almost certainly be able to do something comparable.
It is going to be very industry dependent, where "industry" means aviation, or maritime, or railroad, or Healthcare, etc... Just "software" is too general to be meaningful these days.
Yes, of course. My point is that those standards apply only to specific domains where software is applied, not to the software development industry as a whole.
It's telling I think that the discussion becomes about standards, which software people like, rather than say, clear communication to technical stakeholders beyond the original programmer. I have a list somewhere I made of everyone that might eventually need to read an electronics schematic. Particularly to emphasize clarity to younger engineers and (ex-)hobbyists that think schematics are a kind of awkward source code or data entry for layout. It's not short: test, legal, quality control, manufacturing, firmware, sustaining/maintenance, sometimes even purchasing, etc.
Whoever drew your blueprints knew they would be needed beyond getting the house up. What would the equivalent perspective and effort for software engineering be?
That is largely due to a difference in complexity. I would say that the level of complexity of a house blueprint is, to an engineer, roughly what a 20-30 line Python solution to a LeetCode easy exercise is to a programmer. A CRUD app is more like the blueprints for a vacuum cleaner or something like that.
This was such a relief for us.
Looking back, it's unbelievable how much combined time we wasted complaining about and fixing formatting issues in code reviews, and reformatting in general.
With clang-format & co. on Save plus possibly a git hook this all went away.
It might not always be perfect (which is subjective anyway) but it's so worth it.
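For anyone curious what that setup can look like, here is a minimal sketch, assuming VS Code with a clang-format-aware extension (clangd or the Microsoft C/C++ extension) that picks up the repository's .clang-format; other editors have equivalent settings.

```jsonc
// .vscode/settings.json (VS Code accepts comments in this file)
{
  // reformat on every save, using the repository's .clang-format
  "editor.formatOnSave": true
}
```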
Maybe a new paradigm for code formatting could be local-only. Your editor automatically formats files the way you like to see them, and then de-formats back to match the codebase when pushing, making your changes match the codebase style.
It's a decent idea, but it's weird reviewing code you wrote in, say, GitHub, where it looks totally different. Imo not a show stopper, but a side effect you have to get used to.
This is disastrously easy to implement with just a few filters on git (clean & smudge).
I highly recommend it though, especially if you worked for a long time at one company and are used to a specific way of writing code. Or if you like tabs instead of spaces...
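A minimal sketch of that setup, assuming clang-format as the formatter; the filter name "localfmt" is arbitrary:

```sh
# In the repo's .gitattributes (or your local .git/info/attributes):
#   *.c  filter=localfmt
#   *.h  filter=localfmt

# Local git config: "clean" runs when content is staged (back to repo style),
# "smudge" runs on checkout (your personal style).
git config filter.localfmt.clean  'clang-format --style=file'
git config filter.localfmt.smudge 'clang-format --style="{BasedOnStyle: LLVM, IndentWidth: 2}"'
```

Git applies the filters transparently, so the repository only ever sees the canonical style.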
This is pretty common now. At least my Vim/git combo does this, where I always open source code with my preferred formatting but by the time it's pushed to the server it's changed to match the repo preferences.
Software, being arbitrary logic, just doesn't have the same physical properties and constraints that have allowed civil engineering and construction code to converge on standards. It's often not clear what the load bearing properties of software systems are since the applications and consequences of errors are so varied. How often does a doghouse you built in your backyard turn into a skyscraper serving the needs of millions over a few short years?
> However at a minimum formatting changes shouldn’t regularly complicate doing a diff.
If the code needs to be reformatted, this should be done in a separate commit. Fortunately, there are now structural/semantic diff tools available for some languages that can help if someone hasn't properly split their formatting and logic changes.
Reformatting commits still leaves the issue that you deprive yourself of any possibility of reliably diffing across such commits (what changed from here to there?) or attributing a line of code to a specific change (why did we introduce this code?).
What we should have instead is syntax-aware diffs that can ignore meaningless changes like curly braces moving into another line or lines getting wrapped for reasons.
> What we should have instead is syntax-aware diffs that can ignore meaningless changes like curly braces moving into another line or lines getting wrapped for reasons.
These diffs already exist (at least for some languages) but aren't yet integrated into the standard tools. For example, if you want a command line tool, you can use https://github.com/Wilfred/difftastic or if you are interested in a VS Code extension / GitHub App instead, you can give https://semanticdiff.com a try.
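For a quick trial without changing any config, difftastic can be dropped in as git's external diff tool (assuming the difft binary is on your PATH):

```sh
# syntax-aware diff of the working tree
GIT_EXTERNAL_DIFF=difft git diff

# same for git log; external diff has to be enabled explicitly there
GIT_EXTERNAL_DIFF=difft git log -p --ext-diff
```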
This is the best argument I've ever heard for editorconfig, committing a standardised format to the repo, and viewing it in whatever way you want to view it on your own machine.
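For reference, an .editorconfig is just a small INI file at the repo root that supporting editors pick up automatically; a typical sketch (the values are illustrative, not a recommendation):

```ini
# .editorconfig
root = true

[*]
charset = utf-8
end_of_line = lf
insert_final_newline = true
trim_trailing_whitespace = true

[*.go]
indent_style = tab

[*.{py,js,ts}]
indent_style = space
indent_size = 4
```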
The value of software is both what it does now (behavior) and what you can get it to do later (structure). What you described as design, and the compiled artifact, is the behavior.
The craft is what gives you future choices. So when people care about readability, writing tests, and architecture, they’re just making it easy to adjust the current behavior later when requirements change. Software is not a house; it doesn’t get built and then stay a certain way.
Experience gives you some possible ideas for how it will be used in the future, but after a long time I'm coming to the position that you're fooling yourself if you think you can predict where it will go with any accuracy. It's still valuable and important to try, just not as critical to be right as I used to think. Example: I've completely flip-flopped from interface simplicity to implementation simplicity.
I agree that a house is a bad analogy, for your reasons and because you can "live" in software that as a building would not be fit for human habitation.
You have to be pragmatic about it, balancing between the speed of only implementing the now and the flexibility of taking care of the future. It's not predicting, mostly it's about recognizing the consequences of each choice (for later modifications) and either accepting it or ensuring that it will not happen.
> It's not predicting, mostly it's about recognizing the consequences of each choice (for later modifications)
This is the exact trap I'm describing. It sounds very reasonable, but how is it not prediction when you're asking people to "recognize the <future> consequences of each choice"? You have very little to no understanding of the context, environment or application of today's creations. Smart, experienced people got us into the current microservice, frontend JS, "serverless" cloud messes.
Risk management is a fact of all activities. It's not about predicting that something is going to happen, but about evaluating whether we can afford the consequences if it really happens. If we can, let's go ahead with the easy choice. If we cannot, let's make sure that it won't affect us as much.
> Smart, experienced people got us into the current microservice, frontend JS, "serverless" cloud messes.
Those are solutions to real problems. The real issue is the cargo cult, aka "Google is doing it, let's do it too". If you don't have the problem, don't adopt the solution (which always brings its own issues). It's always a balancing act, as there is no silver bullet.
Yea, I've always considered craftsmanship to be about paying attention to the details and making everything high quality--even the things that the end user will never see, but you know are there. The Steve Jobs quote sums it up nicely:
> "When you’re a carpenter making a beautiful chest of drawers, you’re not going to use a piece of plywood on the back, even though it faces the wall and nobody will ever see it. You’ll know it’s there, so you’re going to use a beautiful piece of wood on the back. For you to sleep well at night, the aesthetic, the quality, has to be carried all the way through."
If you look at the back pieces of old classic furniture made during the hand-powered-tools era, they're mostly very roughly finished. Professionals rarely had time to spend dicking around with stuff that isn't visible.
To take it a bit further, the "unseen quality" does surface periodically, like when the proverbial chest of drawers ends its first life, and goes to the dump or gets restored.
The same is true in software when a developer inherits a codebase.
Those signs of quality turn into longevity, even when they face away from the user.
> stressing over the formatting and conventions of the blueprint (to use a civil engineering metaphor)
This is incredibly important.
This is the kind of stuff that prevents shit like half the team using metric and the other half thinking they're imperial, or you coming up with the perfect design, but then the manufacturer makes a mirrored version of it because you didn't have the conventions agreed upon.
Imperial vs Metric is a hard requirement, not a convention or formatting. I have a co-worker who wants everything to be a one-liner, doesn't like if/else statements, thinks exception handling is bad, and will fail a code review over a variable that he feels isn't cased properly.
This makes code reviews super slow and painful, and it also means you aren't focusing on the important stuff: what the code actually does and whether it meets the requirements in functionality. You don't have time for that stuff; you are too busy trying to make this one person happy by turning an if/else into a ternary, and the end of the sprint is a day away.
If a reviewer is regularly rejecting PRs because the variable names have incorrect capitalization then that's a problem with the author, not the reviewer. That is the incredibly basic shit you decide on at the start of a codebase and then follow regardless of your personal thoughts on what scheme is preferable.
If/else vs ternaries is something where consistency is a lot less important, but if you know that a team member has a strong preference for one over the other and you think it's unimportant then you should just write it how they prefer to begin with. Fight over things you think are important, not things that you think don't matter.
I worked with a guy where you would try to predict what he would bitch about next. In this example, you would write it as a ternary so you don't have to hear about it ... and he'd suggest it be an if-else statement.
Nobody fucking cares which one it is; is it readable? That's the real question. Your preference of which version of "readable" it is only applies when you are the author. If you're that picky about it, write it yourself. That's what we eventually did to that guy after the team got sick of it. Anytime he'd complain about something like that, we would invite him to write a PR to our PR, otherwise, get over it. Then, we would merge our PR before he could finish.
He eventually got fired for no longer being able to keep up with his work due to constantly opening refactor PRs against the dev branch.
Sure, but if there are actual rules, agreed to and accepted by the entire team (as opposed to one guy's idiosyncratic preferences) then there should be a commit-hook to run the code through a linter and reject the commit if the rules are violated.
Yes indeed. I'm a fan of getting the team to pick an already existing lint ruleset and then doing this. You can also set to only lint changed files if you want a gradual change over in existing codebase.
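A minimal sketch of such a hook, assuming a JavaScript/TypeScript project with ESLint; the file pattern and linter command are placeholders for whatever ruleset the team has agreed on:

```sh
#!/bin/sh
# .git/hooks/pre-commit (must be executable)
# Lint only the files staged in this commit; reject the commit on failure.
files=$(git diff --cached --name-only --diff-filter=ACM | grep -E '\.(js|ts)$')
[ -z "$files" ] && exit 0

if ! npx eslint $files; then
  echo "Lint failed: fix the issues (or change the agreed ruleset) before committing."
  exit 1
fi
```

Since local hooks are opt-in, most teams back this up with the same check in CI.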
The construction metaphor isn't a very good fit here in my opinion.
No building is expected to have the amount of adaptability that is expected of software. It falls completely apart for interpreted languages. When is a PHP app constructed in this metaphor? On every single request? The metaphor assumes a "design once, build once" approach, which describes basically no software I've ever seen used in real life. Hardware, OS, language, and collaborator/dependency updates all require changes to the application code more often than not. And that's assuming the feature set stays stable, which, in my experience, is also quite rare.
Maintainability is therefore a quality dimension of software, and reduced cognitive load usually results in increased maintainability (curious if someone has a counterexample to that)
That is not to say I'm one of those people who need a specific code style to have their weird brain satisfied. But not using a linter/autoformatter at all [when available] in 2025 sounds like the opposite of "work smart, not hard"
By your own analogy, blueprints have a very strict set of guidelines on stuff like line styling, font selection, displaying measurements, etc. This is absolutely critical as nobody wants to live with the kind of outcomes generated when there's any meaningful ambiguity in the basic conventions of critical design documents. At the same time, having to hack through a thicket of clashing code style adds to the cognitive load of dealing with complex codebases without offering any improvement in functionality to balance the tradeoff. I've literally seen open source developers lose their commit access to projects over style issues because the maintainers correctly concluded that the benefits of maintaining an authoritarian grip on the style of code committed to the project outweighed humoring the minority of fussy individualists who couldn't set aside their preferences long enough to satisfy the needs of the community.
I overall agree. The one thing I will say is that what you call code organization (anything pre-compilation) also includes structuring the code to improve maintainability, extensibility, and testability. I would therefore disagree that code organization is only basic hygiene, not part of design, and not a large part of the “craft” (use of that word is something I’ve changed my opinion on—while it feels good to think of it that way, it leads to exactly the thing we’re discussing; putting too much emphasis on unimportant things).
Code style though, I do agree isn’t worth stressing about. I do think you may as well decide on a linter/style, just so it’s decided and you can give it minimal energy moving forward.
>I'm fond of saying that anything that doesn't survive the compilation process is not design but code organization.
Maybe not at the same level, but code organization is also a design.
I don’t care that much about the exact linter rules applied (but I do prefer blue, hmm, nooo). But getting rid of merge conflicts that come from a lack of common linter rules is a great pipeline process improvement, and that is some kind of code contribution pipeline design.
And some specific people with lots of experience in teaching as well as in developing real life useful systems will tell you, that code is first and foremost written for people to understand, and only incidentally for a computer to run. Human understanding of what is going on is the one most important thing. If we do not have that, everything else will go to shit.
There is definitely something to be said for the idea that a shitslop engineering blueprint which still conveys correct design (maybe; who knows really?) is shitslop, whether or not the design is sound. In the case of software engineering, the blueprint is the implementation, too, so it’s not just shitslop blueprinting, it’s also shitslop brickwork (totally sound bones though I promise!), shitslop drywalling, shitslop concrete finishing and rebar work — and maybe it’s all good under the hood! Totally fine if all you’re building is a shithouse! But I think you get where I’m going with this.
IMO, gofmt doesn't go far enough. It should sort imports and break lines automatically, or you end up with different people with slightly different preferences for breaking up lines, leading to an inconsistent style.
Something like gofumpt + golines + goimports makes more sense to me, but I'm used to ruff in Python (previously black + isort) and rustfmt.
I'd say that if you're manually formatting stuff with line breaks and spaces, that should have been automated by tooling. And that tooling should run both as a pre-commit hook and in CI.
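As a sketch of what that automation can look like for the Go toolchain mentioned above (flags from memory, so double-check each tool's --help):

```sh
# format the module in place
gofumpt -w .
goimports -w .
golines -w .

# in CI: fail the build if formatting produced any diff
git diff --exit-code
```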
Be careful what you wish for. Sometimes leaving a line stupidly long is better for readability than awkwardly trying to break it up.
If you have five similar statements somewhere with only one exceeding the linter's line length, breaking up that one can lead to code that is harder to parse than just leaving it jutting out by way of an exception to the general rule; especially if it contains some predictable bit of code.
Those would be edge cases where formatting can be turned off if needed; that would require justification during code review. Otherwise we'd keep the automatic format, with a strong tendency towards keeping it.
The general benefit of automated formatting outweighs these edge cases.
I have a habit of putting stdlib imports, a line break, golang experimental (x) imports, a line break and external imports.
I just tested: gofmt orders imports within these blocks. If I don't use the line breaks, it'll consider it a single block and sort globally.
golang also aligns variable spacing and inline comments so they look like unified blocks (or a table, if you prefer the term), and it handles indents. The only thing it doesn't do is break lines, which I think is acceptable.
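Concretely, that layout looks like the block below; gofmt sorts alphabetically within each blank-line-separated group but leaves the groups themselves alone (the third-party path is just an example):

```go
import (
	// standard library
	"fmt"
	"net/http"

	// golang.org/x packages
	"golang.org/x/sync/errgroup"

	// external dependencies
	"github.com/example/somelib"
)
```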
I meant that you could do this with isort in Python or rustfmt with `group_imports = StdExternalCrate` (sadly, not the default), where the imports are automatically separated into blocks, which is basically what you seem to be doing manually.
My point is that many things we end up doing manually can be automated, so I don't need to care about unsorted or unused imports at all and can rely on them being fixed without my input, and I don't have to worry about a colleague forgetting to do it either.
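For reference, the rustfmt side of that is a one-line rustfmt.toml; note that group_imports is still an unstable option, so it currently requires nightly rustfmt:

```toml
# rustfmt.toml
# std/core first, then external crates, then crate-local imports
group_imports = "StdExternalCrate"
```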
I didn't know that so many people did that, such that formatters have an option for it. On the other hand, I've been doing this for so long, across so many languages, that I never thought about it. It consumes no effort on my part.
Also, the fact that gofmt keeps the blank lines and sorts each group by itself tells me that gofmt is already aware of this choice and accommodates it without being fussy or needing configuration, which is an elegant approach.
Vehemently disagree. I get that Go's conventions line up with your own, but when they don't, it's irritating. For example, Dart is finally coming around on the "tall" style (https://github.com/dart-lang/dart_style/issues/1253). But why should people be forced to wait for a committee (or some other lofty institution) to change a style convention that then affects everyone using the language? What's the harm in letting people write their code in the "tall" style beforehand? Why must the formatter be so insistent about it?
When you are a solo dev, everything is acceptable. When you're in a group, this can cause friction and cause real problems if people are "so insistent" about their style.
Go is a programming language developed for teams. From its syntax to the core library to the language server to the tooling, Go is made for teams, and to minimize thinking. It's the opposite of Rust. It's devilishly simple, so you can't make mistakes most of the time.
Even the logo's expression is an homage to this.
So yes, Go's formatter should be this prescriptive. The other thing is, you can continue this conversation till the end of time, but you can't argue with the formatter. It'll always do its thing.
You are misunderstanding the argument: I am not against prescriptive formatters. By all means, enforce a style convention for your project, I have nothing against that, nor do I understand how you interpreted that from my comment. I am against prescriptive formatters that cannot be configured. This creates the absurd situation, as previously described, where one must appeal to 'The Committee' who decides the style convention for the entire world. This should not be necessary. Nor should you have to surround your code with formatter-disabling comments (assuming the language even supports that) to, for example, use the "tall" style, as previously mentioned. Nor should you have to literally fork the language or its tools to disable the formatter.
If something can be configured, this opens up infinite possibilities for discussions on how it should be configured. The fact that you even ASK if you should use the tall style means that you are considering the POSSIBILITY of using it that way.
Put it this way: if it were technically impossible to build a car in any color other than red, we would not be discussing or entertaining what color the next car should have. It would be red, end of story. It doesn't matter whether we like red or not; it just has to be. End of story.
Hot take: stop being authoritarian with code styles, perchance? You are not so wise as to determine what is clean code for the entire world, and the ego required to believe that you are is beyond astounding. And we're still having these discussions despite unconfigurably prescriptive formatters. This idea that such formatters end these discussions is just demonstrably false... you are literally taking part in one of these discussions.
As I've said, it's fine for people, projects, and organisations to require and enforce a particular style for their code, to demand consistency for their code. The problem comes when people with inflated egos believe they have the right to dictate how the entire world writes their code. I feel like I need to stress that this is the issue here.
The fact that Dart's formatter FAQ tells developers who are unhappy with the output to change their code to satisfy the formatter (https://github.com/dart-lang/dart_style/wiki/FAQ#i-dont-like...), rather than allowing developers to tweak the formatter to satisfy their code, is such a huge tell about their mindset. And again, these global styles do not "end the discussion" as people in this thread have asserted. The "tall style" PR (https://github.com/dart-lang/dart_style/issues/1253) refutes that. And that PR has since devolved into bickering and heated demands since its adoption into SDK 3.7.0, since it transforms people's code in ways they dislike and/or find less readable.
Just let people, projects, and organisations enforce their own style conventions. It's not hard and it shouldn't be controversial.
I know this sounds insane, but I used to work on a big svn codebase with many developers, without any fancy formatting tools, AND no one cared about consistent style.
One interesting thing that happened was the ability to guess who wrote what code purely based on coding style.
I sometimes care when I want to introduce empty space to visually make parts of code stand out, eg multiple blank lines between functions or classes etc. I think whitespace can be effectively used like paragraphs in a book, basically make different blocks more obvious. Most formatters just squash everything to be one empty line apart, for me it can be annoying.
While Python has some great linters, I don't know of any for C that can correctly and automatically enforce a coding style. Most of them can only indent correctly, but they can't break up long lines over multiple lines, or format array literals or strings. Few or none know how to deal with names or preprocessor macros.
clang-format and clang-tidy are both excellent for C and C++ (and protobuf, if your group uses it). Since they are based on the clang front-end, they naturally have full support for both languages and all of their complexity.
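Both are driven by small config files checked into the repo; a minimal .clang-format sketch (keys and values are illustrative, not a recommendation):

```yaml
# .clang-format
BasedOnStyle: LLVM
IndentWidth: 4
ColumnLimit: 100
BreakBeforeBraces: Linux
```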
Completely agree on all points. I used to have a style that I preferred and really wanted to use everywhere but nowadays I just throw prettier at it and take the defaults for the most part. I’ll take consistent code over mixed styles every day.
100%. Enforcing lint rules is very important. What those lint rules should say is generally very unimportant because the editor should be doing all the work, and most of the time "that's just like, your opinion, man".
In my opinion, I think the author is criticizing bike shedding [1] rather than meaningful decisions. Of course some people will differ on whether a decision is one or the other. But as a whole, not sweating the details is a good quality to have whatever road in life you are on.
Details are important though. Some bikeshed-type ideas spiral into very expensive changes. This is a large part of why US transit construction is so much more expensive: people asking for lots of little details (large monument stations, bike paths done with the project...), which all add up. The important safety details are left to experts, but only after they are told to build something far more expensive than needed. (This isn't a plea for brutalist architecture; there are nice things you can do that are only minimally more expensive.)
To reduce your argument to its essence, you're saying typesetting is part of the craft of writing. I've yet to meet an author who believes this (other than enjoying editing their own work as output from a typewriter), and I think the same broadly applies to code. It's not that everyone thinks these things are unimportant, it's that caring deeply about doing them a particular way is orthogonal to the craft. It's something that has long been lampooned (tabs vs. spaces, braces, etc.) as weird behavior.
More than one writer refuses to use a computer, preferring typewriters. Harlan Ellison learned how to repair typewriters after he could no longer find anyone to fix his. Stephen King wrote Dreamcatcher with a fountain pen.
Authors totally obsess over details that seem irrelevant to people outside that craft.
And that’s totally fine! But there is no correlation whatsoever between writing on a typewriter or using a fountain pen (or the physical experience of writing generally) and the quality of the writing. None.
There is nothing to support this in either writing or programming, at all. For every software craftsman out there obsessing over formatting, editor layout, linters, line length, there is an ancient, horrifyingly laid out C codebase that expertly runs and captures the essence of its domain, serves real traffic and makes real money.
Make your editor work how you like, but if my team lead started to get annoyed after my 4th formatting-only PR I should probably start to think about what I want them to bring up to my manager in my performance review.
> But there is no correlation whatsoever between writing on a typewriter or using a fountain pen (or the physical experience of writing generally) and the quality of the writing
I heard neal stephenson say that he writes using a pen rather than a wordprocessor specifically because it does affect the quality of the writing. Because he handwrites slower than he types, he does more thinking and editing in his head rather than after it's on paper.
And if that works for him then he should do that, and I have no problem with that at all. I just don't think there is anything one can say about writing tools that _generalizes_ to writing quality, and the same applies to the type of conversations programmers often engage in like "functions need to be short!" or "line length MUST be less than 90 or else I will reject this PR!" or "dark mode is objectively better to write code in" etc.
I have no problem whatsoever with people having preferences, I just think people mistake preferences for proof.
Those are tools you use to write, not typesetting. It's equivalent to wanting to code on paper or use a specific editor. Cognitive connection to tools is a real thing, and I know a number of authors who really can't form a mental connection with their writing if not using their tool of choice.
That doesn't mean it's normal for them to only use a certain typewriter because of its typeface, insist a publisher use Garamond to typeset their book for publication, or refuse to write without a certain margin.
To bring it closer to programming, in collaborative writing especially (think manual writing at large corporations), nobody is insisting that everyone indents paragraphs their way because it's better. As long as there's consistency those matters are best left to the printer. When I was younger I knew a lot of technical writers who in fact really disliked the move to Word from traditional word processors, because they didn't want to be distracted by those things.
There is a class of people who refuse to see computer programming as an art.
They try to shoehorn it into being an engineering discipline and comparing it to authoring a book (something you can't give timelines on or T-Shirt size) probably horrifies them.
Ah, the joys of overloading. Do you mean "art" as the high-brow stuff we see in galleries, produced in volumes of dozens per artist-year? Or do you mean "art" as the more common stuff produced by artisans at the rate of dozens per week? Because to me it's more the latter -- I'm an artisan, not an artist.
The prose writing metaphor also falls apart the moment one admits that prose has no need for (and is actually very bad at) being worked on collaboratively, concurrently but not perfectly synchronized, and continuously on the same body of text, while ensuring at the same time that combined changes don't add up to unwanted/wrong semantics, even in the long term.
Are consistent indentations, variable names etc. strictly required for that? No, not logically, but the real world in which our software must be built is resource constrained, so every minute I spend parsing weird formatting inconsistencies is one minute less I can focus on the actual problem that needs solving.
Just use a formatter/linter everyone. And I promise I don't care how it's configured, as long as it's consistent across the codebase
Plenty of writers work collaboratively, fyi. Even fiction authors like my mother routinely deal with multiple drafts and edits from multiple editors who review and suggest changes, from peer authors to professional editors contracted by the publishing house. And non-fiction authors routinely collaborate, too. I personally know some consequential non-fiction books where another author ghost wrote troublesome sections, not taking credit except in private.
And this ignores the collaborative writing many authors do to pay the bills: technical writing at large corporations, like bank manuals and such, or academic writing at universities. While consistency and standards are enforced, nobody's arguing that everyone else should really indent paragraphs their way, because that's the best way.
A better one would probably be accounting and spreadsheets. Having common formatting conventions between spreadsheets (and code files) allows your brain to filter out the noise better. Obviously you can get too far down in the weeds on "what are the best conventions", but the most important part is to have them and stick to them.
I think there are accessibility aspects to formatting. Specifically to different formatting.
Not sure the typesetting analogy is the best, but typesetting absolutely matters for readability. Authors don’t need to care about it because typesetting is easy to change (before printing) and because publishers spend time caring about it, all before it ends up in the hands of readers.
The purpose of writing is to produce something to be read, and typesetting is an important part of making a document readable.
It is incredibly irritating if I need to reformat code to be able to read it clearly before modifying it, then have to either back out all of the formatting changes to create a clean PR that shows the actual change, or create a PR full of formatting changes with the actual logic change buried somewhere within.
If typesetting and a grammar mistake in one sentence were what made a book viable or not, authors would care. I've seen enough (crazy expensive) bugs that could have been caught by linters and bugs introduced through insane formatting and style choices that I can't agree that a book and software are all that comparable.
I'm on team "agree at the beginning and then make it part of CI", and I basically never have to have this conversation more than once or twice per project now. But I also think that the people who are most obsessed with it and dwell on it in their personal daily work are problematic, as are the people who hate any rules whatsoever and want to write complete shit code and just call the job done because "that's the important part".
There is more than one type of people who stress over code style. There's the group who wants to discuss about how to style your code and then there's the group who wants to just use a common code formatter and be done with that.
For example, I have objections to rustfmt's default style. I would never start discussions on rust projects about changing that to another formatter or changing its configuration. I definitely would carefully ask that people should really use rustfmt, though, if they don't do so yet.
> I definitely would carefully ask that people should really use rustfmt, though, if they don't do so yet.
If you don't already, you should run `cargo fmt --check` in CI and block PR merges until it passes. You can also run it in a pre-commit hook, but you can't be sure everyone will set that up.
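A minimal sketch of that CI gate, assuming GitHub Actions and a runner with a Rust toolchain available:

```yaml
# .github/workflows/fmt.yml
name: rustfmt
on: [pull_request]
jobs:
  fmt:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - run: cargo fmt --all -- --check
```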
Automating such types of decision is great but moot if others have to opt into it.
Code formatters are the best of both worlds. I despise Allman-style indentation for reasons I can't explain or justify, but if I have to work on a codebase that uses it, I can simply put hooks to reformat it to whatever the repo style is before commit.
Edit: This comment was based on misreading the parent comment. I've left it up, but I should have been more careful.
You've set yourself up to always be the outlier. To always need to have that discussion or tweak the rules on every project you work with.
You've increased the overhead of onboarding anyone used to the default style. You've increased the overhead of you working on anyone else's projects that is more likely to have the default style.
All of that is friction introduced because you haven't learned to live with the default style.
Do I love that the default C# rules put a new line before the else block? No, but I've learned to live with it so that I can work on all manner of projects without fussing over that style option every time.
By adhering to default rules, you never have to have the endless arguments such as tabs vs spaces again. (Spaces won, and the tabbers got over it fairly quickly once it was the default in almost all formatters.)
What I understood from your parent comment was the exact opposite, i.e. that they’re saying “I disagree with some of the default choices of the formatter (and so would prefer they were different) but I never voice those because it’s not worth it. However, I do think everyone should use something, whatever it is (even if the default style), as opposed to nothing”.
I apologise, I misread your statement as: "I would never start discussions on rust projects *without* changing that to another formatter" which changed the meaning entirely. I should have taken a moment to re-read what you wrote.
They were stressing about details that customers see; details that customers don’t see were where they cut corners.
Sure, but there's two differences between artisans and programmers.
Firstly, most artisans produce sellable products. Once the customer has bought an item, the artisan will likely never see it again. But I'm pretty sure that if there were a minor error on a self-produced table or vase standing in the artisan's own living room, they'd not be able to unsee it, and would still work to correct it.
Secondly and more importantly: code is not just the product that programmers work on, it's also the workshop that programmers work in. And you bet your ass that artisans are very anal about the layout and organization of their workshop. Put away screws in the wrong box, or throw all the dowels of multiple sizes in the same container? The carpenter will fire his apprentice if it happens more than once; place your knives in the wrong place in a kitchen and the chef will eat you alive; not properly wearing or storing safety equipment can be a fireable offense in many places.
To me, a code review is how you close your workshop for the week: tools are cleaned and stored, floors are tidy enough to walk around, and the work area is available so I can come back on monday and be productive again. I shouldn't have to spend monday cleaning glass shards because someone left a hammer standing straight up on a glass table - or chasing down last week's lunch because someone left the fridge open and now the cheese has grown legs.
So no -- making fuss about code style and quality can certainly be artisanal (maybe not about indents specifically, but can certainly be about textual organization). Because the code is the workshop, and you know the next time you will enter this room it will be because of a high-priority demand and you can't afford to spend half your day cleaning up what you couldn't be bothered to do last time.
Everyone wants a particular style, except when they have to use someone else's style.
Pick a style and stick with it. Review it every 6 months to a year to see if anything needs to be tweaked.
If you hear 'we are professionals' you are about to see code that has 20 different styles and design patterns.
I worked with one guy who could not make up his mind and changed the whole style guide about every 2-3 weeks. It royally made him mad that the original style guide fit on a couple of post-it notes. Me and two other engineers had bashed it out in a 1-2 hour meeting at the start of the project (an odd number of people, so we could vote on anything). It came down to the fact that he came in after the fact and had no say in it, and then he proceeded to change everything. One week it was tabs everywhere, then spaces, then tabs again. One day camel case, a week later all lower case, another time partial Hungarian, upper case on random things, etc. A waste of time.
Ideally, pick a style from a different large organization that you have no input in. Because the organization is large, they will have put a lot of effort into it, but since you have no input you can just follow it without thinking. Sometimes an organization will make some really weird choices and you will be forced to change styles (Google has rejected a lot of the latest C++ standard, and thus their C++ style guide is not to be used elsewhere, but there are plenty of other good options).
Second best is to start a large cross company standards organization and only allow one representative per organization. Make sure there is a lot of process standing in the way of changes so that changes are only made when really justified (because most are not justified)
Maybe I'm projecting my own views here but I interpreted those two statements as being about different things: the finished product vs the process that gets you there.
I care deeply about the end result that is presented to the user who has no idea what code even looks like. How we put together the UI. How we load data to minimize delays. That's "the craft" to me.
I care much less about code style, linting etc. that no-one other than a small group of developers will ever see. To a certain extent the latter enables the former. But I've often witnessed the latter being valued over the former and that's where things start to go wrong.
In some ways, it's the point OC makes — that it's subjective. It's a culture problem.
In our profession, our conventional approach to resolve these kinds of differences is to reduce them to a specific set of conditionally applied rules that everyone _has_ to agree on. Differences in opinions are treated as based on top of a more fundamental set of values that _have_ to be universal, modular, and distinct. Why do we do this? Because that's how we culturally approach problem-solving.
Most industries at large train and groom people to absorb structured value systems whose primary function is to promote productivity (as in, delivery of results). That value system, however, ultimately benefits capital most, not necessarily knowledge or completeness.
Roles and positions ultimately encompass and package a set of values and expectations. So we are left with a small group of people who have practiced valuing a few other aspects but feel isolated and burdened with having to voluntarily take on additional work (because they really care about it), and others unnecessarily pressured to mass-adopt values and likewise burdened with taking on what feels like additional work that only a small group of people care about.
In the cultural discourse, we are trying to fix minimum thresholds of some values and value systems and, correspondingly, their expectations. That is never going to be possible. In and of itself, that can be a valid ask. However, time and resources are limited, and values are a continuum. Fixing one requires compromising on another. This is where we are as a professional culture and community in the larger society today.
The Tech industry refuses to break down the role of a "software engineer/developer" further than what it is today and, consequently, refuses to further break down more complex/ambiguous values and value systems into simpler ones, thus reducing the compromises encompassed in and perceived by different sub-groups and increasing overall satisfaction of developers in the industry. Instead, we've expanded on what software developers should be responsible for, which has caused more and more people to burn out trying to meet a broader set of expectations and a diminished set of value systems with more compromises to accommodate that.
Ideally, we need an industry and a professional culture that allows for and respects niche values and acknowledges the necessity of more niche roles to focus on different parts of the larger craft of software development.
PS. As a side note, the phrasing of it in the article is unfair, which OC is pointing out too — there is a false equivalency drawn between "caring for the craft" and "stressing over minutia." This causes, in this context of having a discourse around the article, those who value and want to talk about the value of caring for the craft to be viewed and perceived as the insane weirdos who stress over the minutia that the author was referring to.
To me "craft" is about keeping code efficient, scalable, extensible, well-tested and documented. Code style is more about what naming convention to use, tabs vs spaces etc. - it's nice to have it consistent, but no need to spend more than 5 minutes arguing about it.
Tabs/spaces is a non-issue, agreed - IDEs handle it.
But surely naming convention contributes to keeping code documented, extensible and efficient?
A deviation from the norm leads to people thinking x does not exist in a large code base, leading to them implementing duplicate methods/functionality, leading to one instance evolving differently enough to cause subtle bugs but not enough to be distinct, or leading to one instance getting fixes the other does not etc?
Sample size of 1, but I've seen it happen unfortunately.
Sure some people that care about minutiae are code artisans but what I have seen more often is co-workers weaponizing these discussions to hide their own incompetence.
I have seen so many people going on and on about best practices and coding styles and whatnot, and using big words, just in the hope of keeping discussions going so no one figures out that they don't know how to code.
Thing is, you can care for the craft, but let the code style and linting tools do what they do best and don't stress over them. Code reviews are better now that there's tooling that automatically checks for, fixes, or marks common issues like code style so the reviewer doesn't have to do anything with them.
That is, I'd argue the "stressing" is not about what these tools check, but about the tools and their configuration itself. Just go with all the defaults and focus on bigger issues.
Now we have tooling to make sure (1) code style is consistent and (2) you don't have to stress about it.
Every language has automatic formatters. Use them, configure them if you don't like the defaults (in accordance with your team), and configure your editor so they are applied automatically when you save. And use CI to detect PRs with bad formatting, so devs who haven't configured their editor yet can't break it.
Same with linters, you still have to agree with your co-workers about which rules make sense to be enforced, but with good editor integration you can see it right away and fix it as you code.
That's my stance--formatting and style is important but it's also a job for machines. I think you can still quibble over formatting but the result should be automation, not nit-picky code review comments or chat war threads.
> What you’re essentially saying is “cherish the people who care up to the level I personally and subjectively think is right
No. The two aspects (let's call them the craft and the minutiae) are orthogonal, they're not different level of caring about the same thing. I've seen people obsessed over minutiae and writing buggy, careless and unmaintainable code.
As with other formal aspects (e.g., testing) the quality of the code and the level of adherence to convention or forms are independent of each other; but they do both consume the same limited resource, which is the time and attention of developers.
The idea that style and linting define the “craft” is bizarre. What it sounds like you are saying is that you prefer style over substance. Not a single developer I’ve ever revered cared about styling. It’s a means to an end for large teams because to make it easy to read you need commonality.
It’s kind of like what Bruce Lee said about punches. Before you know how to code, styling is just styling. When you become proficient in coding, styling is more than just styling. But when you’ve mastered coding, styling is just styling again. Be like water my friend.
Code is human readable text. Authors should craft it with care.
The linter people are hypercorrectionalists of the type that would change “to boldly go where no man has gone before!” because it’s a split infinitive.
If what you care about deeply can be automated by a linter, it's trivial, and you ought to just setup the linter rules, and go use all that time you just gained to work on something more meaningful.
There's a correlation-is-not-causation issue here. Yes, most very good developers pay a lot of attention to details, as well as to the bigger picture. But there are a lot of mediocre to downright bad developers who think that paying a lot of attention to code style, linting rules and other minutia will make them better developers.
My experience has been the opposite. Most mediocre to bad developers are so happy that their code works AT ALL that they pay little to no attention to things like sensible variable names and whitespace conventions. We sometimes call these things, "code smell."
I've always thought on the contrary that "code smell" refers much more to the patterns used and not so much how it was formatted. Anyway, choosing a language that has automatic formatting (Go for example) renders about 80% of comments in this thread about "linting" and "formatting" and blah blah blah completely irrelevant.
> What you’re essentially saying is “cherish the people who care up to the level I personally and subjectively think is right, and dismiss everyone who cares more as insane weirdos who cannot prioritise”.
These are bullet points expressing general rules of thumb, not legal treatises. You're reading far too much into them.
It's not even more or less, it's just about different aspects. Consider a woodworker obsessing over sharp chisels - they don't all care beyond 'sharpish', or about the tools used to do so - but to some that's hugely important and how they are then able to do their best work.
To take it further some woodworkers are really tool hobbyists. They obsess over the tools and never really touch the wood they’re supposedly working. Same goes for software devs, I think about this when reading threads obsessing over AWS services, pipelines, build stacks, instead of writing software.
It's a strange assertion to claim that people who care about formatting care about the craft more. IMO, it's the opposite: people who care about formatting prefer to talk and think about the periphery of programming, not the actual act of programming itself. For the record, I reject the usually offered claim that the most important attribute is consistency. I have an easier time reading code written in a different style than understanding somebody with a different accent, but nobody is claiming we should only hire teams from the same region of the same country.
I don't care about style as long as you can give me a document that specifies all the rules. It can be a simple text file, but don't expect me to remember them all from one convo.
At least with Python, I just push everyone to follow PEP 8; it makes things easier.
Having a consistent code style and adhering to a set of linting rules is important, but that they exist is far more important than their particulars. I get bored very fast when people argue about the visual presentation of their code. Use a formatter and be done with it. Same with linting rules; pick a set that your team can work with. If you dial everything up to 11, you’re going to have perfectly compliant code that doesn’t do anything. Recognize that compliance sometimes makes your code worse and pick your rules accordingly.
This may point to the dividing line. Software is required to be functional, not simply "artistic". You can certainly make the argument that there are non-functional considerations in its construction: readability, maintainability, extensibility, etc. And, these absolutely intersect with style, so it's tempting to apply words like "artistry".
But, is there a point where details veer into personal preference and insistence on style for the sake of style? I think so, and many of us have seen this. For those who haven't yet, stick around!
Most of these things really don't matter, and it all just boils down to "that's not how I would have written it". Well, okay, but that doesn't mean that's somehow objectively better.
People, though, tend to spiral down into the bikeshedding abyss. It’s one thing to stand your ground about a linting rule that has proven effective in combatting certain classes of errors that you encountered in the field. It’s another thing to make every discussion be about linting rules.
I can't quite put it into words, but there's usually a vibe you can sense. It will be different coming from an experienced developer who knows what they're talking about from lived experience than from a clueless one who has seen shit but is hell-bent on process and rules.
I could agree with your general sentiment, but don't in this case. There's nothing in code style that is important for the craft, at least not in the way it's usually discussed. It could be important if one style is somehow dangerous, but for most people it's a matter of aesthetics, which after a (too) long time doing this I think is weird. Because there are so many real problems to deal with, and where to place a character certainly ain't one.
> Revered artisans are precisely the ones who care for the details.
Abhorred obstructionists and covert saboteurs are also the ones that care for the details.
You need to care about the right kind of details. Any specific linting rule is not that kind; linting at all is.
An artisan without a proper sense of priorities could spend weeks exquisitely decorating a spoon with a gaping hole in the middle. The state of nearly all software is more or less horrible. Choosing the right linting rules is more often than not putting lipstick on a pig.
There’s a big difference when someone stresses over minutiae and fails to see the bigger picture or why the overall design might be flawed.
Wanting to have code up to a given standard is a laudable goal but as with everything, it can be taken to such an extreme that people lose sight of the real value being delivered.
Code is not the end goal, except for some people it is.
Whether one uses tabs or spaces, Allman or K&R, etc. is largely immaterial.
On your own projects, choose a style, go with it. On someone else's project, go with the style chosen. On a shared project, come up with a style, go with it.
Code _organization_ matters way more than code _style_. Where style refers only to the aesthetic choices that don't impact the structure or flow.
> People who stress over code style, linting rules, or other minutia remain insane weirdos to me.
Until you open a file that has 10 different coding styles from 5 different developers. Just the variations of variable naming schemes alone in the individual code files that I see/edit would drive anyone crazy.
I think the line between minutiae and craft is drawn by the effect it has on the product you're creating. Method length makes your code more manageable, for example. Placement of line breaks, not so much.
At the end of the day, we aren't paid to produce code, but working software that works today and is easy to change tomorrow.
That is a purely commercial take of the matter. I don’t think it’s controversial to argue the artisans who stand out do so because they care for the craft itself. Spending an extra hour or two perfecting the shape of the armrest in the chair may not allow you to earn more money from that one commission, but it might improve your knowledge and skill and be slightly more comfortable to the sitter. If they comment on it and appreciate it, so grows your motivation and pride.
Sometimes the code itself, and not its result, is the product. For example, when making tutorials the clarity and beauty of the code matters more than what it does.
I’m not arguing for obsessing over code formatting, but pointing out the line between “master of the craft with extensive attention to detail” and “insane weirdo with prioritisation deficits focusing on minutiae” is subjective to what each person considers important. Most of us seem to agree that being consistent in a code base is more important than any specific rule, but that being consistent does matter.
At the end of the day, we aren’t paid to eat healthily and taking care of our bodies either. But doing so pays dividends. Same for caring about the quality of your code. Getting in the habit of doing it right primes you to do it like that from the start in every new project. Like most skills, writing quality code becomes easier the more you do it.
That's kind of my point: the end user cannot see, or feel, minutiae. If they can, it's not minutiae.
> Sometimes the code itself, and not its result, is the product. For example, when making tutorials the clarity and beauty of the code matters more than what it does.
The code still isn't the product in that instance. It's the educational process. In many cases, clarity != beauty. This is why the best written tests often duplicate code, rather than being curated exercises in DRY.
> we aren’t paid to eat healthily and taking care of our bodies either
Yet programmers insist on being paid. Obviously taking time to grow on your own, on your own dime, is self-enriching for all the reasons you describe.
This is my biggest beef with linters. I break lines following typography and poetry rules, using the line break to help communicate. I hate it when the linter takes away my thoughtfully chosen line break, because I broke the line there to improve readability. I seem to be the only person in the world who cares about line breaks in code. Other style things I don't really care about, but you can use typography and poetry rules to improve the readability of your code.
I took this as "don't stress about things you can automate."
Yes, you need to care about this. But for the most part you should just follow the conventions of the language/framework, and not reinvent the wheel. Instead, you should put cycles into crafting architectures and algorithms.
Sigh... it really doesn't matter compared to, say, how and what you test, and that you are consistent. He's saying your opinion about where a brace goes, or even spaces vs. tabs, is just not that important compared to crafting simple systems with clear code.
Following non-functional stylistic rules to the letter as if they were on the level of SQL injections or memory leaks isn't "craft". It's cargo cult weirdness.
Sure, have a style and a linter. Be DRY. Don't lose your head over it though.
There are aspects of code that matter and aspects that don't. I lump everything that doesn't matter under "code style", and cherish those who cherish the remaining aspects.
I think most people who care about code have a bit of OCD over these things, but there is a difference between how the code looks vs. how it is structured. I think that's what the author means.
This is bald-faced relativism. It only takes a moment of reflection to understand the absurd consequences of this position. For example, it becomes meaningless to speak of caring about the craft if there is no objective definition of what exactly one should care about, or what is important. It ceases to have intersubjective relevance.
> What you call “stressing over minutiae” others might call “caring for the craft”.
So what? The presence of disagreement is not an argument in favor of relativism and subjectivism. People can be wrong, and they can be wrong about what is valuable. Value is not subjective. The fact-value dichotomy is false.
That's the general principle. As far as this particular example is concerned, the author didn't say things like code style and linting rules have absolutely no value. They have some value. The question is how much, especially in the grand scheme of things, and whether one's concerns, attention, and efforts are proportioned to the objective value of such things. That's how this question should be framed. The author's position, charitably read, is that it is objectively irrational to obsess over such things.
If you wish to rebut, then go ahead and provide an argument, but don't retreat into the bollocks of subjectivism.
Not really. Obsessing over breaking lines at the famous 80 characters in Eclipse was, is, and will be idiotic, to put it politely. A surprisingly large number of people were obsessed with this long after we got much bigger screens, if that was ever an argument (it wasn't for me). 2 spaces vs 4 spaces vs tabs. Cases like these were not that rare, even though it seems better now. That's not a productive focus of one's (or a team's) energy and a proper waste of money for the employer/customer; it brings zero added value to products apart from polishing the ego of a specific individual.
Folks who care about the craft obsess (well within the realm of the realistic) more about architecture, good use of design patterns, using a good modern toolset (but not bleeding edge), not building a monolithic spaghetti monster that can't evolve much further, avoiding quick hacks that end up being hard to remove and work with over time, and so on.
If you don't see a difference between those groups, I don't think you understood the author's points.
I'd say avoiding long lines is one of the most important rules. I regularly have 2-3 files open side by side, I don't want to have to scroll sideways to read the code.
80 characters is a bit on the low end imo but I'd rather have the code be too vertical than too horizontal. Maybe 120-150 is a more reasonable limit. It's not difficult to stay within those bounds as long as you don't do deep nesting which I don't really want to see anyway because it's hardly ever necessary and it makes code more difficult to read.
Reading vertically is much faster than reading horizontally, so I think 80 is a good soft limit. In some contexts it's hard not to go over it sometimes, e.g. in Java, where it's not uncommon to have very long type and method names.
And the laptop excuse is not even valid; I used an 11" MacBook Air for 10 years and even back then 80 always felt extremely limiting to me.
I just tested: even when zooming +1 in VSCode and leaving the minimap open, I can fit 140 chars without any horizontal scroll.
People demanding 80 columns always have some crazy setups, like an IDE where the editor is just a minuscule square in the centre, like an Osborne 1 computer.
Try saying that again when you are 50 and your eyes no longer as good as they used to be. Back when I was 25 I loved the tiny fonts I could fit on my (then incredibly large) 19 inch monitor which I had pushed to the highest resolution. These days even with special computer glasses (magnification and optimized for computer distance) I can't make such tiny text.
And 140 chars aren't enough for two files side by side with 80 chars. With a readable font size and a narrow font about 90 chars is a good limit on a 14" laptop screen. Coincidentally that same limit then allows for three files side by side on the average desktop screen - or a browser window at the side for reference.
If you can live with a single file on screen that's great, but the utility of two is far greater than having a chunk of the screen empty most of the time because of a few long lines.
If you frequently do important work on a 14" screen with 2 files side by side (regardless of its resolution), then you are seriously limiting yourself and your efficiency, plus hurting your eyes, which will inevitably bring regrets later. That's not what 'love for the craft' or efficiency looks like.
One reason I like longer lines, in those very few cases (way less than 1% of code lines): they bundle several logically related, easy-to-read things, e.g. a more complex 'if' or a larger constructor. We're talking about Java here, just to be clear; for more compact languages those numbers can get significantly lower, but the same principles apply.
Doing overly smart, complex one-liners just for the sake of it goes completely against what we write here; I've only seen (otherwise smart) juniors do those. Harder to debug, harder to read, simply a junior show-off move.
200 is entirely too fucking long, and I code on a 43” 4K. I try to stay under 90 in deference to others, and if it looks better breaking at 80, so be it.
Reporting as an Eclipse user of 20+ years, and a person who cares about the craft:
The choice for me is simple:
If I'm going to view the code I'm writing in an 80x24 terminal later on, I'll break those lines, and will try really hard not to even get close to 80 chars per line.
If that code is only going to be seen in Eclipse, and only by me, I won't break those lines.
I omitted your other examples for brevity.
Having bigger screens doesn't make longer lines legit or valid. I may have anything between 4-9 terminals open on my 28" 2K screen anytime, and no, I don't want to see lines going from one side to another like spikes, even if I have written them.
I like 80 columns; I can tolerate 100 or 120. I get really annoyed with formatting standards, JS/TS in particular, that waste a whole line on a closing brace. Standard aspect ratios leave screens more limited vertically than horizontally.
When dealing with tabular data, particularly test data, I find most formatting lacking. I want to be able to specify blocks that align on the decimal point. Especially when dealing with lists of dicts. This makes reading test fixtures much more intuitive than default indentation styles.
Has anyone seen a formatter where you can specify a block be formatted in that manner?
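For what it's worth, Eclipse and IntelliJ can both be told to leave a hand-aligned block alone via formatter off/on tags in comments (the feature usually has to be enabled in the formatter settings, and not every formatter honours it). A minimal sketch in Java, with invented fixture values, of test data aligned on the decimal point and protected that way:

    class RateFixtures {
        // @formatter:off  (hand-aligned so the values line up on the decimal point)
        static final double[][] RATES = {
            // currency        bid          ask
            /* EUR */   {     1.0841,      1.0843 },
            /* JPY */   {   156.7200,    156.7400 },
            /* GBP */   {     0.7891,      0.7893 },
        };
        // @formatter:on
    }

Treat it as a sketch rather than a universal answer; whether the tags are respected depends on the tool and its configuration.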
> Revered artisans are precisely the ones who care for the details.
Well yea, but which detail you care about still matters and reveals a lot about your "craft". Code style is of such little consequence compared to semantics it's always a little eyebrow-raising to see people who are extremely opinionated about it.
I've also noticed that this sort of thing can sort of fade into the background after a couple decades of coding. I know people who go on and on about how "beautiful" code is—to me it's just syntax serving a purpose. Sometimes you can make really elegant code that works well with the language, sometimes you can't. But how the code is presented impacts my reading comprehension very little unless you're doing something very strange (looking at you, early 90s c code with one-letter variable names and heavy macro usage).
Actually, I recant one element of this—Javascript and Typescript are just straight ugly and hard to read.
Heartily disagree. I have had to fight with others over automated formatting because it obscured intent or clarity. For example (in C#): mandating `var`, mandating one and only one blank line between lines of code, or mandating the `=>` expression-body syntax for single-line functions instead of a block body like

    Function(vars)
    {
        ...
    }
I would say, when to use one or the other of each of those options is very much a craft.
Somewhat agree and disagree. I bucket people's styles into two camps, merely "different" and objectively "bad" (more on each below): stressing over the former is largely unproductive, but stressing over the latter is crucial to writing high-quality, maintainable software.
Someone's style can be "different" without being "bad", and you have two basic options to deal with it. One is to authoritatively remove the soul via process (auto-formatters, code review, and to a lesser degree linters, etc. are all designed to create uniformity at the cost of individuality.) The other is to suck it up and deal with it, as this is just an inevitability of creating a team size larger than one: people have different tastes and those have to be reconciled. I somewhat prefer allowing for individuality, and individuals should endeavor to match the style of whatever module they're working in, out of courtesy to its owners/stakeholders if nothing else. However I have only worked independently or on small teams. Most large teams (/ open source projects) have gone the former route of automating all the fun/craftsmanship out of their systems, and even I think that makes sense at a certain scale.
Someone's style can just be objectively "bad", however, and I usually find it's evidence they just don't care about the source artifact that much, and they're focused on the results. (It can also be a sign of an under-performer that spends so much mental capacity just getting the code to work that they have no spare cycles to spend thinking about matters of taste.) If it compiles / works / passes the test-suite that's "good enough" and "their job is done" and they move on to the next task. These people tend to be hyper-literal thinkers that are very micro-task oriented: they see implementing a new feature as a checklist to be conquered, rather than being systems-level thinkers on a journey of discovery & understanding.
If the author is talking about the latter, I have to agree with you that the latter are quite difficult for me to work with; particularly since I know that the source has to be maintained & supported over a much larger time-scale. The source-code is like your house, you live in it, being comfortable to work with/in/on it is the key to success. The deployed artifact may live for only a few weeks, days, or even hours before it gets replaced. The source has evolved over decades. You (the organization) are practically married to it. To further the analogy: I don't mind if somebody wants to hang posters in their room for a band I don't like. (Hell I can even handle if a group of those posters are tastefully hung out-of-level to make some kind of statement.) I do mind if their furniture is blocking a vent, the outlet covers are hanging off, there's a hole in one of the walls, a light has been burnt out for months, and the window-blinds over there are clearly broken but they insist it's fine because daylight still gets through.
> Most programming should be done long before a single line of code is written
Nah.
I (16+ years developer) prefer to iteratively go between coding and designing. It happens way too often that when you're coding, you stumble across something that makes you go "oh f me, that would NEVER work", which forces you to approach a problem entirely differently.
Quite often you also have eureka moments with better solutions that just would not have happened unless you had code in front of you, which again makes you approach the problem entirely differently.
Iterative work is THE way to work in large legacy codebases. The minute you wade into the code, all of your planning is moot. You don't know what's lurking below the surface. No one knows what's lurking under the surface. Except maybe Dave, because he vaguely remembers about 15 years back talking to some guy who wrote some code 30 years back about it.
Greenfield, absolutely design up front you lucky devils, but iterative is the way otherwise.
Greenfield lasts only for at best 2 years or the first public release. After that it is legacy.
I am the "Dave" on my current code, since I was one of the first engineers on the project and the others before me have long since moved to management. There is a lot I don't know about how the code works. There are dark corners we just lifted completely from an earlier project, where the guy who wrote it 30 years back is retired. This is normal.
I'm fighting desperately to keep this code in shape, as I don't want to go to management to ask for $$$ (billions) to rewrite it. I regret many choices. I'm making other choices in response that I fear someone will regret in 15 more years. I'm hoping to retire before then - better talk to me now, because soon the people who have talked to the person who wrote the code 30 years ago will also be a memory. (The guy who wrote the code 30 years ago is still alive and someone has his phone number; they talk once a year about something weird to see if the why is remembered.)
> I'm making other choices in response that I fear someone will regret in 15 more years.
Junior dev made to maintain some code base: "wtf, all this old code suck. People were really bad at their job".
Same dev 5 years later: "wellll, this looks bad but there must be a reason.". Usually the reason is someone sold a new feature without asking the implementers or even checking what the impacts could be. So it has to be ready yesterday and you'll never get approval to refactor or clean-up anything, until it breaks.
As someone who's spent 12 years working on legacy codebases, I strongly disagree with this.
Iterative work in a large legacy codebase is how you end up making your large legacy codebase larger and even less understood.
Your planning should "wade into the code" from the start. I have always gotten better results by charting out flow diagrams and whiteboarding process changes than just "diving in and changing stuff".
Frankly, I'd say it's the opposite for greenfield development. Doing iterative work to build out a new product and making changes as you discover needs you didn't account for makes a lot more sense than flailing around making holes in something you don't fully understand that is tied to active business needs.
> I have always gotten better results by charting out flow diagrams and whiteboarding process changes than just "diving in and changing stuff".
In terms of a broad population, I am not sure there is a meaningful difference, though. You can iterate on your ideas on the whiteboard or you can iterate on your ideas in code, but the intent is the same. Either way you are going to throw it all away once you have settled on what should be the final iteration anyhow.
It just comes down to where you are most comfortable expressing your ideas. Some like the visuals of a diagram, others are more comfortable thinking in code, some prefer to write it out in plain English, and I'm sure others carry out the process in other ways. But at the end of the day it is all the same.
> Either way you are going to throw it all away once you have settled on what should be the final iteration anyhow.
I think this needs to be highlighted, because while I completely agree, I think it's often implicit, taken for granted, and neglected. Far, far too often I've seen code bases bloat because this never takes place. The sentiment at a lot of places seems to be, if the tests pass, ship it. Arguably, it may even be the right decision.
>Everyone has a plan until they get punched in their face (by landmines in legacy code)
Unless you have the old maintainer on call (I rarely did, due to them leaving the company years ago), you definitely need to move slowly. Rely on test suites if you are blessed with them. Submit small changes that pass tests.
> Greenfield, absolutely design up front you lucky devils, but iterative is the way otherwise.
That only works for greenfield projects where you have extensive experience with everything that is going to be used on that project. For all others you still learn as you go and all plans and designs need to be revalidated and updated multiple times.
> I (16+ years developer) prefer to iteratively go between coding and designing
I have an extra ten years on you and couldn't agree more.
There are two jokes:
- A few months of programming can save weeks of design.
- A few months of design can save weeks of programming.
Inexperience is thinking that only one of these jokes is grounded in truth.
Recognizing which kind of situation you're in is an imperfect art, and incremental work that interleaves design with implementation is a hedge against being wrong.
Most programming is actually figuring out what already exists and what (and more importantly: why) the requirements are. This is best done long before a single line of code is written.
I think the author is taking a wider view of "programming" than the actual writing of code as the end product. Some of the most important work I've done is spend the time to argue that something doesn't need to be done at all.
And how do you figure out what the requirements are? In my 10+ professional years, I have never gotten requirements by asking for them. Almost always I had to show my interpretation of what I think the requirements are, and use the feedback I got to define the actual requirements. The quickest way to get there is by iterating.
You don't ask for the requirements. You ask what they're trying to do, or what problem they're trying to solve. Sometimes I have to ask "where is this data going" or "what do you expect the end result of this to be".
Not disagreeing here but whatever question you ask, you will only get the final answer _after_ you have implemented it, almost always after several iterations.
> Most programming is actually figuring out what already exists and what (and more importantly: why) the requirements are. This is best done long before a single line of code is written.
Calling requirements gathering "programming" is just misusing a term for no good reason. By all means, include it in "software development" but it clearly isn't "programming".
> what (and more importantly: why) the requirements are
Maybe in a startup? My experience as an IC in larger, more established companies is the requirements are dictated to you. Someone else has already thought carefully about the customer ask, your job is just to implement, maybe push back a little if the requirements they came up with are particularly unreasonable.
If you dig deep you discover they have figured out some requirements in detail, but there is a lot missing. Is this new feature the last one in that line, or will there be more options in the future? Is this new feature really going to be used? Many times we have put a large effort into features only to discover no customer used them (as evidenced by the critical bug that made the feature unusable outside the test lab, which nobody complained about for 4 years). These things drive how you engineer the thing in the first place.
This makes me think it would be really cool to tie code sections to Slack conversations or emails. There are always commit messages, yes, but most product decisions about why something was done live in Slack, at least where I've worked.
Even an AI tool that takes a slack thread and summarizes how that thread informed the code would be cool to try.
Works great until you're not using it anymore. We're on our third system, all the cases from the first one and most from the second one are long since gone. Meanwhile the commit messages survive it all, even across cvs -> svn and svn -> git migrations.
'Programming as theory-building' is an approach that has grown on me in the past few years.
Your first draft may be qualitatively an MVP, but it's still just a theory of a final product you want, which requires a lot of iterative building before you get to that.
As such, there's no way to not shift between code and design, especially when business requirements are involved and which themselves may change over time.
I'd go one further and say it's an estimate for a bathroom remodel in a house you've never seen that turned out to actually be a garage remodel instead.
Exactly. I did a Ph.D. on software engineering and architecture before embarking on a career practicing what I preach. One thing I realized early is that designs always lag implementations. They are aspirational at best. And people largely stopped using design tools completely when agile became a thing. Some still do. But you'll look in vain for UML diagrams for most software you've ever heard of.
I now have a few decades of experience doing technical work, running startups, teams, doing consultancy, etc. Coding is my way of getting hands on. It's quicker for me to just prototype something quickly in code than it is to do whatever on a whiteboard. I always run out of space on whiteboards and they don't have multi level undo, auto completion, etc. They really suck actually. I avoid using them.
Of course, I sometimes chin stroke extensively before coding; or I restart what I'm doing several times. But once I have an idea of what I'm doing in my head, I stub out a solution and start iteratively refining. And if I'm stuck with the chin stroking, the best way to break through that is to just start coding. Usually, I then discover things I hadn't thought about and realize I don't need to over complicate things quite as much. This progressive insight is something you can only gain through doing the work. The finished code is also the finished design; they co-evolve and don't exist as separate artifacts.
The engineering fallacy is believing that they are separate and that developers are just being lazy by not having designs. Here's a counter argument to that: we don't build bridges, rockets, expensive machines, etc. Our designs compile to executable code. Physical things have extensive post design project phases where stuff gets built/constructed/assembled. Changing the design at that stage is really expensive. For software, this phase is pretty much 100% automated in software. And continuous deployment means having working stuff from pretty much as soon as your builds start passing. Of course refactoring your design/code still is important. You risk making it hard to evolve your software otherwise.
The process of designing a bridge is actually more similar to developing software than the process of constructing one. The difference is that when you are done with the bridge design, you still have to build it. But it's a lengthy/risky process with progressive insights about constraints, physics, legislation, requirements, etc. Like software, it's hard to plan the design. And actually modern day architects use a lot of software tools to try out their designs before they hand them over.
Just some simple insights here. There is no blueprint for the blueprint, for either bridges or software. Not a thing, generally.
You could try writing an RFC or a tech spec sometimes, with different approaches, proposed solutions, and pros/cons. It's basically coding and designing the system in your mind and anticipating issues and challenges. It's a good exercise to do this before writing a line of code. The more you do it, the easier it gets: the mind starts to think about different approaches and pitfalls, you get into a focused state where the brain organizes the logical flow, and then you can write a rough outline without caring about making the compiler happy or what the exact syntax is. Sometimes it also helps to translate this high-level outline into pseudocode in a comment and then fill in the blanks with actual code.
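A minimal sketch (Java, with an invented log-summarizing task) of that last step: the outline goes in first as comments, and the code is then filled in underneath it.

    import java.io.IOException;
    import java.nio.file.Files;
    import java.nio.file.Path;
    import java.util.Map;
    import java.util.TreeMap;
    import java.util.stream.Collectors;

    class LogSummary {
        // The numbered outline was written first as comments; the code underneath
        // was filled in afterwards to match it.
        static Map<String, Long> countBySeverity(Path logFile) throws IOException {
            // 1. stream the log file line by line
            // 2. take the first whitespace-delimited token as the severity (e.g. "ERROR")
            // 3. tally how many lines carry each severity, sorted alphabetically
            try (var lines = Files.lines(logFile)) {
                return lines
                        .map(line -> line.split("\\s+", 2)[0])
                        .collect(Collectors.groupingBy(s -> s, TreeMap::new, Collectors.counting()));
            }
        }
    }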
I've compared it to finding the integral of a function. Unless it's trivial or closely resembles something I've done before, how am I going to have the faintest idea what it's going to be like until I start?
Always think bigger picture than what you're immediately working on. (I don't mean that you can't ever just focus on the problem you're trying to solve for, say, hours. I mean you can't focus like that for the entire time you're in that development phase.)
Think about design and code (and functionality!) before you start coding. Think about design as well as code while you're coding. Think about design, code, and functionality while you're testing.
What I think is a better way to say this is that you need a `design` phase before actually writing the first `real` implementation code.
Something I do a lot, and even more with LLMs, is make `scratch` projects where I sketch code over and over (and maybe make mockups in Keynote or similar, take some notes, etc.), then write it from scratch again in the real codebase.
The OP didn't say what it is they're talking about that should be done before writing any code.
He might have meant design, and I'm not sure about that.
But the other thing I think of is: understanding the problem.
It's hard to do too much of that before you start coding, and easy to do too little.
It overlaps with design to some extent, because once you understand the problem better, some designs will naturally seem inappropriate or better -- without having to spend time allocated to "designing" necessarily, just when you design you're going to come up with things that work a lot better the better you understand the problem you are trying to solve.
How the stakeholders see it, and what's really going on, and why it's a problem, and what would make an acceptable solution, and what the next steps down the road might be.
Right, by your interpretation what they suggested is logically impossible (one can't possibly write any code, let alone most of it, before writing a single line of code), so I understand you think they should have written it differently, but it's clear they meant something else. I would assume they meant programming as a synonym for software development, right?
Agree. Although I increasingly spend this iteration time on types/interfaces/documenting proposal of same. The actual implementation below that is often trivial once the boundaries are settled.
I assume there are people who are able to have those eureka moments before writing any code. I definitely write a lot of code before figuring out the final design but always think I should be designing more.
Absolutely, though sometimes it's more about reading code or 'playing' with code than writing/committing code. I try to always be hopping around my codebase during meetings.
Or am I supposed to be the typical engineering savant, omniscient to all future, past, and present engineering roadblocks, fixed by "Just"(TM) thinking about it beforehand? I expect this from a Bay Area mid-level, not someone with credentials.
Strange, because I agree with so much more of the article.
TBH, I think it's more of a 'manager' attitude. A lot of actual "hacker" type people are very much in the "rough consensus and working code" category where you see what works by doing it.
OK, that sounds bad; you should have the option to go back to design, but how much time you have wasted depends on at what point you find that issue.
It's about defining and solving small problems all the way, and avoiding trying to solve big problems.
If you manage to restrict yourself to only solving small problems (THIS is the true challenge with software engineering, in my humble opinion), then you won't ever have wasted too much time if (when) you need to reset.
Just personal opinions, I guess, I agree with most, but here are some I disagree with:
- There is no pride in managing or understanding complexity
Complexity exists, you can't make it go away, managing it and understanding it is the only thing you can do. Simple systems only displace complexity.
- Java is a great language because it's boring
That is if you write Java the boring way. A lot of Java code (looking at you Spring) is everything but boring, and it is not fun either.
- Most programming should be done long before a single line of code is written
I went to the opposite extreme. That is, if you are not writing code, you are not programming. If you are not writing code on your first day, you are wasting time. It is a personal opinion, but the idea is that without doing something concrete, i.e. writing code, it is too easy to lose track of the reality, the reality being that in the end you will have a program that runs on a machine. It doesn't mean you will have to keep that code.
- Formal modeling and analysis is an essential skill set
Maybe that explains our difference with regard to the last point. Given the opportunity, I prefer try stuff rather than formalize. It is not that formal modeling is useless, it is just less essential to me than experimentation. To quote Don Knuth out of context: "Beware of bugs in the above code; I have only proved it correct, not tried it." ;)
- You literally cannot add too many comments to test code (I challenge anyone to try)
> - There is no pride in managing or understanding complexity
> Complexity exists, you can't make it go away, managing it and understanding it is the only thing you can do. Simple systems only displace complexity.
I interpreted that one as a suggestion to avoid welcoming needless complexity because of the false sense of pride it gives you to successfully manage that complexity.
To give an example, I believe C++'s enduring popularity is mostly because of exactly this false sense of pride. You practically need a doctorate-level understanding of the language to use most of its features without stepping on the dozen landmines the language places in your way (I'm so smart because I: remembered to declare my destructors virtual and understand why; can interpret this 2MB of template errors in the compiler output; can """cleverly""" use operator overloads). It can feel nice to be a master of such a complex tool, but that's a false sense of pride. The complexity of your tooling is not the point; the end product is.
C++'s enduring popularity is mostly inertia from the time it was if not the only game in town, the biggest, baddest game in town, and from being the souped-up (if overly complex) successor to the previous biggest, baddest game in town.
Thousands of companies collectively have billions of lines of code in C++. Millions of programmers know it well enough to get the job done. Entire ecosystems with absolutely huge areas are well defined by C++ (and previously C).
Rewriting all this code would be a gargantuan task. It all mostly works (yes, it has bugs, lots of them, but it is still mostly doing the job). The "R" in "ROI" for rewriting it is extremely low and hard to predict, and the "I" is very high.
And that is why old programming languages live on. Not because people take pride in being geniuses or the ability to code in it, but because inertia is really hard to change.
C++ complexity exists for a reason. It does a lot of things and these things are useful, if not necessary for those who use it. I can't think of any language that can replace C++ completely. Plenty can replace C++ incompletely, but then you would need another language for the leftovers, that's displacing complexity.
There are modern languages trying to eat C++ lunch, like Zig and Rust, but you don't get decades of backward compatibility, and they are not particularly simple either.
Rust in particular is one of the most complex programming languages in use today. It could definitely be simplified by removing the borrow checker and lifetimes and making "unsafe" implicit, leaving memory safety to the programmer. But that makes no sense, because Rust was designed for memory safety and performance, which is a complex problem, and therefore Rust is complex.
IMO Rust is hard until you gain some intuition, then it becomes MUCH easier.
The really frustrating part is claiming to embrace errors by return values but then still panicking all over the place; dependencies piled upon dependencies piled upon dependencies; just about zero documentation on doing anything asynchronous without external dependencies; important features being kept unstable basically forever; one of the highest barriers to contributing to language features; highly questionable leadership processes; and, worst of all:
Openly embracing design complexity. When I learned about extension traits for the first time, I thought "that's awesome", only to find, not much later, crates that seem to have features whose implementation I couldn't find anywhere. It turns out external crates were pulled in, which were then used to extend anything carrying certain marker traits from the previous crate. Like, WHY?
Yeah, because Bjarne was figuring it out as he went. Which is fair, he was treading new ground. But C++ would have turned out a lot better if he'd taken a lot more vacation time.
Yes, C++ developers seem particularly prone to adding unnecessary complexity. I'm not sure why that is, but they feel compelled to only program the most generic, ultra-flexible solutions, even though they'll likely only ever get used in fairly simple ways. But you forever have to pay the cost of that complexity for virtually zero benefit.
and the variable will invariably store milliseconds because someone didn't read the docs on timelib.Now(), or someone will store an int as a counter for a makeshift vector clock :p
Is it ms? seconds? days? weeks? months? How far up do I have to read to figure that out?
When I'm looking at a test case that is broken, I ideally want context IN the actual test that lets me understand what the test author was thinking when they wrote it. Why does this test exist as it does? Why are the expectations that are in place valid? Write the comments for you-in-2-years.
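A sketch of what that can look like (JUnit 5; the scenario and the Session/FakeClock stand-ins are invented for illustration). The point is that the comments carry the "why", which is exactly what you can't reconstruct from the assertions alone two years later:

    import static org.junit.jupiter.api.Assertions.assertFalse;
    import static org.junit.jupiter.api.Assertions.assertTrue;

    import org.junit.jupiter.api.Test;

    class SessionTimeoutTest {

        // Tiny stand-ins so the example is self-contained; invented for illustration.
        static final class FakeClock {
            long nowSeconds = 0;
            void advanceSeconds(long s) { nowSeconds += s; }
        }

        static final class Session {
            static final long TIMEOUT_SECONDS = 30 * 60;
            final FakeClock clock;
            final long startedAt;
            Session(FakeClock clock) { this.clock = clock; this.startedAt = clock.nowSeconds; }
            boolean isActive() { return clock.nowSeconds - startedAt < TIMEOUT_SECONDS; }
        }

        @Test
        void sessionExpiresAfterThirtyMinutesOfInactivity() {
            // Why this test exists (in this invented scenario): a past regression kept
            // sessions alive indefinitely because a page refresh reset the inactivity
            // clock. The incident reference would go here.
            FakeClock clock = new FakeClock();
            Session session = new Session(clock);

            // 29m59s of inactivity is still inside the contractual 30-minute window,
            // so the session must remain valid.
            clock.advanceSeconds(29 * 60 + 59);
            assertTrue(session.isActive());

            // One more second crosses the boundary. The limit is exactly 30 minutes
            // because the (hypothetical) security policy says so, not the code.
            clock.advanceSeconds(1);
            assertFalse(session.isActive());
        }
    }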
That just means the variable isn't named correctly, not that it needs a comment. Just name it 'time_seconds' or whatever and save yourself the extraneous typing.
I tend to be a minimalist when writing comments. If I have to write out a comment to describe what I'm doing (like "advance 1 simulated second"), then I have failed at writing clear code. Sometimes I will write a comment to explain why I am doing something, if it's not clear (like "manually advance time to work around frobbing bug in foobar").
Comments add to your maintenance burden. Writing clearer code can often reduce that burden.
I agree that the time unit should be in the variable name. The code itself should do a good job of explaining "what" is happening, but you generally need comments to explain "why" this code exists. Why is the test advancing the time, and why are we advancing the time at this line of the test?
networkTimeMs++; // Callback occurs after timeout
timeSec++; // Advance time to check whether dependent properties update
utcTime++; // Leap second, DON'T advance ntpTime
In a performance language your "real types" aren't somehow more expensive and so you should only use the built-in primitives when they accurately reflect your intent. So I would write:
time += Duration::from_millis(1);
But I would expect that "time unit should be in the variable name" is a reasonable choice in a language which doesn't have this affordance, and I needn't care about performance because apparently the language doesn't either.
I also wonder why we've named this variable "time". Maybe we're a very abstract piece of software and so we know nothing more specific? I would prefer to name it e.g. "timeout" or "flight" or "exam_finishes" if we know why we care about this.
Same here - unit-specifying variable names like "delaySecs" and "amountBaseCcy" (where any possibility of ambiguity exists) are exactly what I enforce on our projects (when types aren't providing the guarantee). It makes avoiding and detecting mistakes easier, because you can immediately see where logic has gone wrong.
The problem, in this case, is that the correct size of the increment depends on the unit of measurement. If we change the unit of measurement and update your comment on the declaration of the variable, every place that uses the variable is now wrong.
int time; // in seconds
/* thousands of lines away or in another file */
time += 1;
Later we change the time to be in milliseconds. We update the comment on the declaration, but now that code is wrong and we have no reason to know that.
That's a bad choice; languages should do better (and some do - where they do, use the better features and this problem vanishes), but when it's forced upon us it makes sense either to put the unit in the name of the variable or to ensure comments about changes to the variable state the units consistently, even though that's a lot of work. This extra work was dumped on you by the language.
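In Java, for example, the better feature is just java.time: the unit travels with the type instead of living in a name or a comment, so a seconds-vs-milliseconds mix-up can't silently creep in. A minimal sketch:

    import java.time.Duration;
    import java.time.Instant;

    class TimeoutExample {
        public static void main(String[] args) {
            // The unit is part of the value, not a comment on the declaration.
            Duration timeout = Duration.ofSeconds(30);
            timeout = timeout.plus(Duration.ofMillis(1));   // explicit about the unit

            Instant deadline = Instant.now().plus(timeout); // arithmetic stays unit-safe
            System.out.println(deadline);
        }
    }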
It will vary immensely depending on how readable the actual code base is, but here's what comes to mind:
1. What units? I was just caught out by this with a function that took a timeout; I had to look at the docs to find out it was actually in nanoseconds (stuff like this is why I've come around more to verbose parameter names).
2. what's the function of the timer?
3. (Potential code smell.) Do I need to manually increment such a timer for the test? Is the time library a necessary part of the test (or perhaps what we're testing)?
I agree with you; I'm much more on the "try stuff out" end of the scale vs. formal methods. That being said, I've worked with people who are the other way around and are still very effective. I think this one is more of a trade-off or personality thing than something that's "true" or "false".
I agree with you that personality plays a role. But regardless of which way your personality pushes you:
You can never think enough up front to know all you need to know, or even 95%. You're not omniscient enough, and you never will be. Big Design Up Front fails because of this - you have to be able to iterate.
You also have to know what you're trying to build, and at least roughly how you're going to build it. If you don't, no amount of iteration and experimentation will enable you to converge on a solution. You need to experiment and iterate and explore within at least a sketch of a larger picture, not on a blank canvas.
> Complexity exists, you can't make it go away, managing it and understanding it is the only thing you can do. Simple systems only displace complexity.
My thoughts, exactly. And considering that so much unnecessary complexity keeps being added to software (through poor understanding of requirements, technical debt, etc), it's an extremely valuable skill.
I think the "most programming" thing has to be determined according to project type. You should have your architecture and data relationships all figured out long before coding when it comes to safety-critical systems.
Software development should also be seen in light of what stage the org is in.
Software development looks super different when the org is a startup vs when the market fit is established.
When you are pre-PMF, you have to establish trust. You deliver fast, cut corners, and make sure customer needs are met and value is generated. Nothing else matters.
When PMF is established, you have to de-risk everything. All your work is at stake. Then best practices have to be in place to make things scalable, and code quality matters because it is a proxy for enforcing standards.
I don't think software can be seen in isolation. It has to be seen from the org's perspective.
This. Engineers are hired to solve problems. Sometimes beautiful code and clean architecture are part of the requirements to solve the problem at hand. Sometimes… they aren’t.
I equate design patterns with language deficiencies, so not sure what to answer. I know 'beautiful' won't be part of the response, though.
Code is a liability, unless you're selling it. The less you have of it, the better off you are in the long run. That doesn't mean I don't agree that it should be clean, as an engineer. As a customer making a decision whether to buy a product or not, I couldn't care less. If you're hunting for product-market fit, you can do it with clean code; nothing is stopping you, except perhaps whether there's enough runway left to make payroll.
> Java is a great language because it's boring [...] Types are assertions we make about the world
This is less of a mind-was-changed case and more just controversial, but... Checked Exceptions were a fundamentally good idea. They just needed some syntactic sugar to help redirect certain developers into less self-destructive ways of procrastinating on proper error handling.
In brief for non-Java folks: Checked Exceptions are a subset of all Exceptions. To throw them, they must be part of the function's type signature. To call that function, the caller code must make some kind of decision about what to do when that Checked Exception arrives. [0] It's basically another return type for the method, married with the conventions and flow-control features of Exceptions.
[0] Ex: Let it bubble up unimpeded, adding it to your own function signature; catch it and wrap it in your own exception with a type more appropriate to the layer of abstraction; catch it and log it; catch it and ignore it... Alas, many caught it and wrapped it in a generic RuntimeException.
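A minimal sketch of those options in Java, using IOException from the standard library (the method names are invented):

    import java.io.IOException;
    import java.io.UncheckedIOException;
    import java.nio.file.Files;
    import java.nio.file.Path;

    class CheckedExceptionDemo {

        // Throwing a checked exception: it becomes part of the method's signature.
        static String readConfig(Path path) throws IOException {
            return Files.readString(path);
        }

        // Caller option 1: let it bubble up, adding it to your own signature.
        static String loadOrBubble(Path path) throws IOException {
            return readConfig(path);
        }

        // Caller option 2: catch it and wrap it in something layer-appropriate.
        static String loadOrWrap(Path path) {
            try {
                return readConfig(path);
            } catch (IOException e) {
                throw new UncheckedIOException("config unreadable: " + path, e);
            }
        }

        // Caller option 3: catch it and handle it (here: fall back to a default).
        static String loadOrDefault(Path path) {
            try {
                return readConfig(path);
            } catch (IOException e) {
                return "";
            }
        }
    }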
It was botched from the start because there's so many opportunities for unchecked exceptions as well. Without a more sophisticated type system that represented nullability, you can get NullPointerException anywhere. Divide by zero. And so on.
You also have a problem similar to "monads and logging": if you want to log from anywhere in your program, your logging function needs to be exception-tight and deal with all the possible problems such as running out of disk space, otherwise you have to add those everywhere.
The problem there was really that Java confused unrecoverable errors with recoverable errors. NPEs and divide by zero should make the program abort (possibly with another, completely different mechanism to catch these if you really want to, a la Rust's panic handlers).
Recoverable errors should all be checked exceptions, and a part of each function's type signature. This would still be a huge pain to deal with, though, with the existing syntax.
Unchecked exceptions are just Java's weird way of what languages call panicking these days. They suck, but as long as you don't throw them yourself and catch them in a logical place (i.e. at request level so your web server doesn't die, at queue level so your data processing management doesn't lock up, etc.) you can usually pretty much ignore them.
The worst part about them is that for some reason even standard library methods will throw them. Like when you try `list.Add(1)` on a list without checking if said list is read-only. The overhead of having to read every single bit of documentation in the standard library just to be ahead of panicking standard methods is infuriating.
That's got nothing to do with the concept of checked/unchecked exceptions, though, that's just Java's mediocre standard library.
> Without a more sophisticated type system that represented nullability, you can get NullPointerException anywhere.
I started working in Java a few months ago and holy shit does this stick out like a sore thumb. Null checks cascade down from god's domain all the way to hell. But oop, we missed one here and caused an outage, lol, add one more! So much wasted human effort around NPEs, and yet we sit around in a weekly meeting getting yelled at about how stability needs to be taken more seriously. Hrm.
C# half-fixes this with its nullable annotations. I say half-fixes, because the boundary between code that supports them and code that does not is leaky, so you can make a mistake that leaks a null into a non-nullable variable.
If you build an entire program with nullability checking on it's pretty great, though.
Java, or at least Lombok, seems to have a @NonNull annotation that does what I want: cause code that fails the check not to build, and force propagation of the annotation.
Reality does indeed feel exactly like what you mentioned with C#, though. The annotation is going to be missing where it’s needed most unless something forces the whole project to use it.
Check out JSpecify (https://jspecify.dev) - it's the standardised null-annotation package for Java. IntelliJ understands the annotations, so you generally get decent null-checking across your codebase.
Even better, apply at the package level via `package-info.java` (unfortunately sub-packages need to be individually marked as well)
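A small sketch of what that looks like with JSpecify (two files shown in one block; the package and names are invented). The enforcement comes from tooling such as IntelliJ or a separate checker, not from javac itself:

    // File 1: package-info.java - everything in this package defaults to non-null.
    @NullMarked
    package com.example.orders;

    import org.jspecify.annotations.NullMarked;

    // File 2: CustomerLookup.java
    package com.example.orders;

    import org.jspecify.annotations.Nullable;

    class CustomerLookup {
        // The parameter is non-null by default because the package is @NullMarked;
        // the return type is explicitly marked as possibly null.
        @Nullable
        String findPhoneByEmail(String email) {
            return email.endsWith("@example.com") ? "+1-555-0100" : null;
        }
    }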
I've pretty much been stuck on java since 1995, I started right around 1.0. There have been some stints of Python or we have some glue code in C/C++ or whatever but we're talking 95% java.
Some of these things are mildly annoying when you think about them in theoretical terms.
But all the professional teams I've been on (a lot of them) have successfully dealt with exceptions without issue. The teams settled on an exception- and error-handling process early in the design phase, and it rarely caused a major issue.
Yet out on the internet, in any place where programming languages are discussed, it seems this is an insurmountable problem that has caused all Java projects to fail and the language to die an early and unpopular death. It seems to me that if it caused anyone huge problems, it was not Java's issue but that team's issue.
There are/were other much bigger issues over the years. Memory leaks have been issues. Spring was hard for many people to deal with back around 2005 or so. The first iterations were quite bad with XML. XML in general caused a lot of issues. J2EE caused a lot of issues just because it was so badly designed early on. (Some of this was because it was birthed out of CORBA, which was itself pretty horrible.) Plenty of issues were caused by using Collections with mixed objects in them early on before Generics were introduced. Visual J++ caused havoc. Different models of web application caused a lot of havoc before we got to Javascript UIs in the browser driven by Web API back ends. JPA was a big mistake IMO. But exceptions were never really a huge problem anywhere.
So many have blamed a problem on Java when it was actually a problem with a library, component, or framework written in Java that became way more popular than it should have been. And along the way there were a lot of "developer influencer celebrities" who were listened to far, far more than they should have been. Many of these guys (I can't remember one ever being a woman) sold everyone on ultra-complex designs and ways of doing things, and the community almost always bought in to a ridiculous degree.
This is precisely why they are so bad: checked exceptions must not be allowed to be used outside the package (or jar, or whatever; just limit it somehow), otherwise they cause non-local build failures in all dependencies. They're fine if you are developing the artifact that's going to be deployed.
What's specific in checked exceptions is that if you don't handle or silently ignore the new exception, you must change the signature. Then your callers must do the same thing. Then their callers etc. sometimes right down to your public static void main.
And that is extremely good compared to the same function-writer adding or changing an unchecked exception, for which a consumer one or more levels removed gets no warning at all until the pager goes off because the system broke in production.
That same scenario (an emergency version-change to a direct dependency) could also remove a function that your code calls! Yet that does not mean compiler checks are bad, or that the solution is to make a system that lets you yeet it into production anyway.
Look, I get it: Sometimes a Checked Exception defined in a niche spot "infects" higher-level code which adds it to their signatures, because nobody takes the time to convert it into something more layer-appropriate.
But that is the exact same kind of problem you'd also get when a library's NicheCalculationResult class trickles upwards without conversion. However, nobody freaks out over that one; not because it's mechanically different, but because it's familiar.
> That same scenario (an emergency version-change to a direct dependency) could also remove a function that your code calls!
Absolutely, but the catch is it doesn't affect me transitively: the immediate caller must deal with the issue somehow. With exceptions, it is expected that you don't handle the ones you have no business handling, so you should change your signature. This propagates upwards, and there is no layer of abstraction that can handle this problem without breaking the world. The only somewhat sane way is wrapping the new exception in something that you already handle, if that makes logical sense, which it very well might not.
> NicheCalculationResult class is trickling upwards
Yes, and yes, people do freak out; not sure why you think they don't. :)
Consider the standard library case: e.g. there's a new kind of exception because new storage or network technology demands it. You can't add it without breaking the build of everything everywhere, effectively freezing the standard library version for people who don't have the means to fix their build. That's super duper bad.
1. The standard library is special in many ways, particularly because it often isn't shipped along with your product and you can't always control what version is used. Just because something is problematic for those libraries doesn't mean it's a bad idea everywhere else.
2. The difference between altering your un/checked exceptions is not whether consumers will have to react, but how it shows up and how badly you will ruin their day. A checked exception is unambiguously better. It will immediately break their build at the same time they ought to be expecting build-breaks, and the compiler will give them a clear and comprehensive list of cases to address. In contrast, an unchecked exception may let them compile but it will break their business in production, unpredictably.
Re 1) when the standard library becomes a major blocker for the runtime version upgrade many people are seriously angry, or depressed.
Re 2) that's what it must've sounded like in theory in the conference room when they were designing that part of the language. In practice, the upgrade of the library will never happen if it breaks the build. Production should catch all runtime exceptions and be able to restart itself gracefully anyway because cosmic rays don't make sense as checked exceptions.
I agree, although I would like to point out Java usually gets the blame for what was actually an idea being done in CLU, Mesa, Modula-3 and C++, before Oak came to be and turned into Java.
Additionally, the way result types work isn't much different from a type-system-theory point of view.
I really miss them in .NET projects, because no one reads method documentation or bothers to have catch-all clauses, and then the little fellow crashes in production.
> They just needed some syntactic sugar to help redirect certain developers into less self-destructive ways of procrastinating on proper error handling.
The syntactic sugar it needs is an easy way (like a ! prefix) to turn it into a runtime exception.
Procrastinating on exceptions is usually the correct thing to do in your typical business application - crash the current business transaction, log the error, return error response. Not much else to do.
Instead the applications are now littered with layers of try-catch-rethrow (optionally with redundant logging and wrapping into other useless exceptions) which add no benefit.
The try/catch/rethrow model can easily be substituted by just adding a `throws` to the method. If you truly don't care, just make your method `throws Exception` or even `throws Throwable` and let the automatic bubbling take care of making you handle exceptions at top level.
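A minimal sketch of that comparison (method names are hypothetical): the first style repeats a try-catch-rethrow at every layer, the second just declares `throws` and keeps a single top-level handler:

```java
// Hypothetical methods; both halves do nothing useful and only show the shape.
class LayeredRethrow {
    void handleRequest() {
        try {
            doBusinessLogic();
        } catch (Exception e) {
            // This try-catch-rethrow block gets copy-pasted into every layer.
            throw new RuntimeException("request failed", e);
        }
    }

    void doBusinessLogic() throws Exception { /* ... */ }
}

class JustDeclareThrows {
    void handleRequest() {
        try {
            doBusinessLogic();
        } catch (Exception e) {
            // Single top-level handler: log the error, return an error response.
        }
    }

    // Everything below simply declares `throws` and lets the exception bubble up.
    void doBusinessLogic() throws Exception { callRepository(); }
    void callRepository() throws Exception { /* may throw */ }
}
```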
I disagree. The real value of exceptions is you can skip 6 levels of functions that have lines like
status = DoThing();
if(status != allIsWell) {return status;}
C++ embedded has long said don't use exceptions, they are slow. However, recent thinking has changed: it turns out that in trivial code exceptions are slow, but in more real-world code exceptions are faster than all those layers of checks - and better yet, you can't give up on the error handling the way people give up on writing all those if checks. Thus embedded projects are starting to turn exceptions on (often optimized exceptions with static pre-allocated buffers).
The final "print something when wrong" is of little value, but the unwinding is very valuable.
Khalil Estell has some great work on that. https://www.youtube.com/watch?v=bY2FlayomlE is one link - very low level technical of what is really happening. He has other talks and papers if you search his name.
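A rough Java sketch of the control-flow contrast being described (names are hypothetical; the embedded work above is about C++, this only illustrates the shape of the two styles):

```java
import java.io.IOException;

// With status codes, every intermediate layer repeats the same check-and-return
// line; with an exception, the intermediate layers contain no error-handling code.
class StatusCodeStyle {
    enum Status { ALL_IS_WELL, DISK_FULL }

    Status doThing() { return Status.ALL_IS_WELL; }

    Status layer1() {
        Status status = doThing();
        if (status != Status.ALL_IS_WELL) { return status; } // repeated at every level
        return Status.ALL_IS_WELL;
    }
}

class ExceptionStyle {
    void doThing() throws IOException { /* may throw */ }

    // Nothing to write here: a thrown exception unwinds through this layer on its own.
    void layer1() throws IOException { doThing(); }
}
```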
Well, usually you want to handle it at some level - e.g. a common REST exception handler returning a standard 500 response with some details about what went wrong. Or retry the process (sometimes the errors may be intermittent)
The problem with checked exceptions is they are best explained and utilized in a single threaded execution model where the exceptions can be bubbled up to the operator.
This is not, of course, the only way that checked exceptions can be utilized. But, all too often, that is by far the easiest way to reason on them. They represent a decision that needs to be bubbled up to the operator of the overall system.
Worse, the easy way to explain how the system should resume is to go back to where the exception happened and restart with some change in place. Disk was full, continue, but write to a new location. That is, having the entire stack unrolled means you wind up wanting the entire process to be reentrant. But that is an actively hostile way to work for most workflows. Imagine if, on finding a road was closed, a cab driver took you back to the pickup location to ask what you want to do about it.
If it is not something for which you want to unwind the stack, or bubble up to the user, then you go through the effort of wrapping it so that it becomes another value that is being processed.
The most common pattern in languages with explicit error handling, is to simply return the error (possibly with some context added) in every function up to the point where the process was started (e.g. an HTTP endpoint handler, or the CLI's main function) to deal with it.
I'm not saying exceptions are good, but I am saying that they do represent the most common error handling pattern.
Right, this is largely the same idea. For things that have to be bubbled up, you wind up in the simplistic "single thread of execution by an operator" pattern. And, in that scenario, exceptions work exactly the same as just returning it all the way up. It is literally just making it easier to unwind the stack.
My assertion is that actual error handling in workflows doesn't work in that manner. Automated workflows have to either be able to work with the value where it was broken, or generally just mark the entire workflow as busted. In that scenario, you don't bubble up the exception, you instead bubble up an error code stating why it failed so that that can be recorded for later consideration. Along the way of bubbling up, you may take alternative actions.
Checked exceptions as an idea are great (Nim's usage of something similar is excellent) but yeah, Java's particular implementation was annoying and easy to avoid, so most did.
I think your exception model needs to match your problem domain and your solution.
I work on an Inversion of Control system integration framework on top of a herd of business logic passing messages between systems. If I were to do all over again, then I’d have the business logic:
* return success or failure (invalid input)
* throw exception with expectation that it might work in the near future (timeout), with advice on how long to wait to retry, and how many retries before escalating
* throw exception with expectation that a person needs to check things out (authentication failure)
Unless the business logic catches it, unchecked exceptions are a failure. Discussions about which failure is which kind of exception are hard, but the business owners usually have strong opinions, taking me off the hook.
I'm not a fan of checked exceptions because they force you to use a verbose control structure (try-catch) that distracts from the actual logic being expressed. Typically, checked exceptions are also not exceptional, so I prefer working with monadic exceptions, like Rust's Option/Result, because they encourage error recovery code to use normal control flow, keeping it consistent with the rest of the application.
I also find that generally exceptions can't be meaningfully caught until much higher in the call-stack. In which case, a lot of intermediary methods need to be annotated with a checked exception even though it's not something that matters to that particular method. For this reason, I've really come around on the Erlang way of doing things: throwing runtime exceptions in truly exceptional situations and designing top level code so that processes can simply fail then restart if necessary.
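As an illustration only, here's a hand-rolled, minimal Result-style type in Java (sealed interface + records, so Java 17+ is assumed; none of this is a standard library type) showing how error recovery stays in ordinary control flow:

```java
// A hand-rolled sketch only (not a standard Java type). Errors are plain values
// handled with ordinary control flow, instead of an unwinding throw.
sealed interface Result<T, E> permits Ok, Err {}
record Ok<T, E>(T value) implements Result<T, E> {}
record Err<T, E>(E error) implements Result<T, E> {}

class PortParser {
    static Result<Integer, String> parsePort(String raw) {
        try {
            int port = Integer.parseInt(raw);
            return (port >= 1 && port <= 65535)
                    ? new Ok<>(port)
                    : new Err<>("port out of range: " + port);
        } catch (NumberFormatException e) {
            return new Err<>("not a number: " + raw);
        }
    }

    static void demo() {
        Result<Integer, String> r = parsePort("8080");
        if (r instanceof Ok<Integer, String> ok) {
            System.out.println("port = " + ok.value());
        } else if (r instanceof Err<Integer, String> err) {
            System.out.println("invalid: " + err.error());
        }
    }
}
```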
It's already very unlikely that you can 'recover-and-proceed' in the context of any business app exception (Security violation, customer not found, no such payment, etc.).
So what's left in exception handling is logging and/or rethrowing. And the 'nasty hackish way' of doing it (RuntimeException) already passes a complete stack trace up to the caller.
As someone who worked with Haskell and Rust in production this is a nightmare:
- All functions end up returning `Either/Result`
- Stack traces are gone
- Exceptions and panics can still creep in, so it's not even "safer"
- There is no composability of different result types; you need something like `Either<FooError | BarError | ..., A>`, which is not supported in most languages (I think Scala 3 and OCaml have this feature), or you create a "general wrapper" like `SomeException` (Haskell) or use `anyhow` (Rust).
Today I write C# at my job and I could not be happier with the usage of exceptions.
Stack traces are valuable and important, agreed. (In Scala it's pretty normal to use Either/Result with an exception type on the left so that you still get the stack trace.) And it's definitely possible to carry an error state around too far when it's not recoverable and you should have just errored out earlier - ultimately you still need good coding judgement. "All functions end up returning `Either/Result`" is a sign you're doing something wrong.
> Exceptions and panics can still creep in, so it's not even "safer"
This part is just as true for checked exceptions - you still have unchecked exceptions too. Ultimately you can't get away from needing a way to bail out from unrecoverable states. But "secondary state that is recoverable and understandable, but should be handled off the primary codepath" is a very useful tool to have in your vocabulary.
> Today I write C# at my job and I could not be happier with the usage of exceptions.
How do you do e.g. input validation? Things that are going to be 4xx errors not 5xx errors, in HTTP terms. Do you use exceptions for those? (I note that C# doesn't have checked exceptions at all, so there's no direct equivalent)
> This part is just as true for checked exceptions - you still have unchecked exceptions too
That's why I'm against checked exceptions in general: I avoid them in Java as much as possible.
> "All functions end up returnig `Either/Result`" is a sign you're doing something wrong.
I agree, that's why I'm sharing that it's a bad idea. I've had this experience recently in a team of very experienced developers (some of them who even have books written). In the wild you have Go in which essentially all non-trivial functions return `X, error`.
> How do you do e.g. input validation? Things that are going to be 4xx errors not 5xx errors, in HTTP terms
Recently I've been working on a piece of code that deals exactly with that. For such use cases we use the `bool Try(out var X)` pattern, which follows what you have in the standard library (see `int.TryParse(string input, out int result)`); it works, but I'm not a fan of it. For this use case an `Either/Result` would work great, but it's a very localized piece of code, not something that you would expect to see everywhere. Also, you might want to roll (or use) a specialized `Validation` type that does not short-circuit when possible (e.g. in a form you want to check as many fields as possible in a single pass).
In summary, I'm not against the existence of `Either/Result` in general, I think there are great use cases for them (like validation); what I'm against is the usage of them to signal all possible type of errors, in particular when IO is involved.
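A small sketch, in Java rather than C#, of the non-short-circuiting validation idea (types are hypothetical): collect every field error in one pass instead of stopping at the first failure:

```java
import java.util.ArrayList;
import java.util.List;

// Hypothetical form type; the point is only that validation does not
// short-circuit, so the caller gets every problem in one pass.
record SignupForm(String email, String password) {}

class SignupValidator {
    List<String> validate(SignupForm form) {
        List<String> errors = new ArrayList<>();
        if (form.email() == null || !form.email().contains("@")) {
            errors.add("email: must contain '@'");
        }
        if (form.password() == null || form.password().length() < 8) {
            errors.add("password: must be at least 8 characters");
        }
        return errors; // an empty list means the form is valid
    }
}
```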
> In summary, I'm not against the existence of `Either/Result` in general, I think there are great use cases for them (like validation); what I'm against is the usage of them to signal all possible type of errors, in particular when IO is involved.
I agree that a checked error type is a bad way to represent IO errors that most programs won't want to handle (or will handle in a very basic "retry the whole thing at high level" way). I think a lot of functional IO runtimes are converging on a design where IO actions implicitly carry some possibility of error that you don't have to nest in a separate either/result type.
Effect systems such as Bluefin (my own) and effectful allow you to have multiple possible exception effects in scope without having to cram them all into one type (with some sort of "variant" or "open sum" type). It's a very pleasant way to work!
I haven't had the time to play with Bluefin (I'll get to it eventually!) but I did try out effectful. In the latter I've only used the `Fail` effect, since most of the time I just want to bail out when some invariant is broken. In my experience these fine-grained errors don't provide much value in practice, but I understand the appeal for some.
Yes, but then you have to handle every possible error at every call point, and wrap all those you can't handle at the calling site into your own return type... This is well documented.
One should be cautious every time it feels like there is an obviously right way to do something that everybody fails to see :)
> Yes, but then you have to handle every possible errors at every call point, and wrap all those you can't handle at the calling site into your own return type... This is well documented.
What? No you don't. You can just propagate up the errors you can't handle with the types they come with.
What language are you using that will automatically infer this at compilation time?(*)
If I understand properly what you are suggesting, to make it work with my go-to language (OCaml), I believe that I would have to make every function return a polymorphic variant for the error (aka technically equivalent to mandatory try blocks and boxed return types). The only time I had to deal with a library doing that it was not pleasant, but it was too long ago for me to remember the details, probably related to complex compilation error messages.
I mean, on paper they're a good idea, well implemented, etc. However, the two main flaws with exceptions are that one, most exceptions are not exceptional situations, and two, exceptions are too expensive for dealing with issues that aren't exceptional like that.
Because eventually someone will want to add another exception to a new leaf class, and the proper place to handle it is 20 functions down the call tree, and every single one of those 20 functions now needs to add that new exception to its signature even though only the handler function cares, and you've already adjusted it.
I haven't done java in decades, but I imagine this would get really nasty if you are passing a callback to a third party library and now your callback throws the new exception.
Checked exceptions seem like a great idea, but what java did is wrong. I'm not sure if other implementations are better.
The counter point here is that at least with Checked Exceptions you know only those 20 functions are part of the code path that can throw that exception. In the runtime exception case you are unaware of that 21st function elsewhere that now throws it and is no longer in the correct handling path.
You have no way to assert that the error is always correctly handled in the codebase. You are basically crossing your fingers and hoping that over the life of the codebase you don't break the invariant.
What was missing was a good way to convert checked exceptions from code you don't own into your own error domain. So instead java devs just avoided them because it was more work to do the proper error domain modeling.
Like I said, the implementation is wrong. Adding an exception to that 21st function, and then that whole call chain as well ends up being a lot of work. Sure you eventually find the place to handle it, but it was a lot of effort in the mean time.
It gets worse. Sometimes we can prove that the 21st function, because of the way it calls your function, can never trigger that exception, but it will still need code to handle it. And if the 21st function later changes so that it does trigger the exception, the handling now ought to move back down, but since you already handled the exception before, checked exceptions won't tell you that you handled it in the wrong place.
I don't know how to implement checked exceptions right. On paper they have a lot of great arguments for them. However in practice they don't work well in large projects (at least for java)
The Rust Result type with accompanying `?` operator and the `Try*` traits are how you implement exceptions correctly. It makes it easy to model the error domain once in your trait implementations and then the `?` does the rest of the work.
You could imagine something similar with Exceptions where there is simple and ergonomic way to rethrow the exception into your own error domain with little to no extra work on the part of the developer.
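Something like that translation point can already be written in plain Java today (ConfigError and ConfigLoader are hypothetical names): one place wraps the library's checked exception into your own error domain, loosely comparable to implementing `From` for a Rust error type:

```java
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;

// Hypothetical domain exception for illustration.
class ConfigError extends Exception {
    ConfigError(String message, Throwable cause) { super(message, cause); }
}

class ConfigLoader {
    String load(Path path) throws ConfigError {
        try {
            return Files.readString(path);
        } catch (IOException e) {
            // The only place in the codebase that has to know about IOException here.
            throw new ConfigError("could not read config at " + path, e);
        }
    }
}
```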
Because you can't capture the evaluation of a function as a value, or write the type of it. E.g. try to write a generic function that takes a list and a callback, and applies the callback to every element of the list. Now what happens if your callback throws a checked exception? It doesn't work and there's no way to make it work, you just have to write another overload of your function and copy/paste your code. Now what happens if your callback throws two checked exceptions? It doesn't work and there's no way to make it work, you just have to write another overload of your function and copy/paste your code. And you'll never guess what happens if your callback throws three checked exceptions!
Make the signature of your generic callback "throws Throwable". It's generic; it should never care about the specific types that the callback can throw.
(Except that then you have to decide what your generic function is going to do if the callback throws an exception...)
> Make the signature of your generic callback "throws Throwable". It's generic; it should never care about the specific types that the callback can throw.
> (Except that then you have to decide what your generic function is going to do if the callback throws an exception...)
Exactly. Presumably you don't want to handle them and want to throw them up to the caller. But now your function has to be "throws Throwable" rather than throwing the specific exception types that the callback throws.
By doing that you lose all the benefits of checked exceptions. If you have checked exceptions everywhere the compiler will tell you when an exception is not handled and in turn you can ensure you handle it.
Of course in general the manual effort to do that in a large code base ends up too hard and so in the real world nobody does that. Still the ideal is good, just the implementation is flawed.
what is "it doesn't work" ? The exception is part of the type, so it doesn't typecheck unless all callbacks are of type "... throws Exception"? What's the problem with that? It's not generic enough, i.e. the problem is Java generics are too weak to write something like "throws <T extends Exception>"? (Forgive me, it's been 13 years since I wrote java and only briefly, the questions are earnest)
edit, so like `@throws[T <: Exception] def effect[T](): Unit` or something, how is it supposed to work?
Yeah. Ironically the JLS includes a complete specification of what the type E1|E2 is, because if you write a catch block that catches both then that's the type of what you catch, there's just no syntax for it.
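For what it's worth, Java generics do get you part of the way: a functional interface can declare a generic `throws E`, which covers the single-checked-exception case; what's missing is exactly the `E1|E2` union syntax mentioned above. A minimal sketch (names are hypothetical):

```java
import java.util.ArrayList;
import java.util.List;

// Hypothetical functional interface with a generic throws clause.
@FunctionalInterface
interface ThrowingFunction<T, R, E extends Exception> {
    R apply(T t) throws E;
}

class MapEach {
    // The callback's single checked exception type E is propagated generically:
    // if the callback throws IOException, callers see `throws IOException`.
    // A callback that throws two different checked exceptions can only bind E
    // to a common supertype such as Exception, which loses the precision.
    static <T, R, E extends Exception> List<R> map(List<T> items,
                                                   ThrowingFunction<T, R, E> fn) throws E {
        List<R> out = new ArrayList<>();
        for (T item : items) {
            out.add(fn.apply(item));
        }
        return out;
    }
}
```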
There's also the fact that common methods threw exception types that were not final, and in fact overly generic. If I call a method that declares itself to throw NoSuchFileException or DirectoryNotEmptyException, I can have a pretty good idea what I might do about it. If it throws IOException without elaboration, on the other hand...
With regards to I/O, there are generally any number of weird errors you can run into, file not found, directory exists, host unreachable, permission denied, file corrupt, etc etc. Like anything file related will be able to throw any of those exceptions at almost any point in time.
I think IOException (or maybe FileSystemException) is probably the best you can do in a lot of I/O cases unless you can dedicate time to handling each of those specially (and there's often not much more you can do except saying "Access is denied" or "File not found" to the user, or logging it somewhere).
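A small sketch of that trade-off using the JDK's own IOException hierarchy: catch the subtypes you can say something useful about, and fall back to IOException for the rest:

```java
import java.io.IOException;
import java.nio.file.AccessDeniedException;
import java.nio.file.Files;
import java.nio.file.NoSuchFileException;
import java.nio.file.Path;

class ReadFile {
    String readOrExplain(Path path) {
        try {
            return Files.readString(path);
        } catch (NoSuchFileException e) {
            return "File not found: " + path;
        } catch (AccessDeniedException e) {
            return "Access is denied: " + path;
        } catch (IOException e) {
            // Everything else: log it somewhere and report a generic failure.
            return "I/O error reading " + path + ": " + e.getMessage();
        }
    }
}
```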
There are fatal IO errors and non-fatal. Most of us don't care about the non-fatal case, but mainframes have the concept of "file isn't available now, load tape 12345", "file is being restored from backup, try again tomorrow" - things that your code could handle (if nothing else you should inform the user that this will take a while). There is also the "read off the end of the file" exception which in some languages is the idiomatic way to tell when you have read the whole file.
But most IO errors are fatal. It doesn't matter if the filename is not found, or the controller had too many errors talking to the drive and gave up - either way your code can do nothing.
>Frontend development is a nightmare world of Kafkaesque awfulness I no longer enjoy
As a backend/systems engineer I recently had to look at a React + Typescript + MobX app from 2019/2020. It is true that some things, especially the webpack config and Typescript loading, were outdated, but the overall design and architecture of the app was still understandable and modern. With some help from ChatGPT it took very little time to migrate to Vite and update dependencies. By 2019/2020 React Hooks had already been out for some time, but there were still some class components in the app. They were easily migrated to functional components + Hooks using ChatGPT.
5-10 years ago I was just skipping Webpack and Babel, and raw-dogging React.createElement in personal projects in order to be happy and maintain sanity. At work I would just push hard for alternatives like Parcel or Brunch.
Now, with Vite I just don't mind the toolchain anymore, it just helps me instead of getting in the way. Similar to Go.
All the problems I have now are of my own creation.
dumb question from someone re-entering the FE world: why is Vite necessary?
My understanding is it basically strips the type information using Go (IIRC) where your typechecking is then a separate step (without transpilation). So feedback is more rapid.
But, with tools like Deno or ts-node, where the type checking is apparently also off the hot path, why does Vite still exist?
Is it because it connects file monitoring with a dev server? Because it also somehow works non-js artifacts like CSS and image imports?
Ultimately, I've found the world of Vite + ESNext imports to be a world of frustration and pain. And I really want to like Deno, and a lot of it is magical, but ultimately there's just some split-brain stuff going on with Deno's concept of a monorepo, and their inability to commit to their public projects (e.g. Fresh) leaves me concerned that it's risky to build on top of.
(ok, that turned into a rant, but there are some questions in there.)
It is also a bundler and minifier and dead code tree-shaker. It combines all your modules into one file (or a few) for production. In development it doesn’t do the bundling part (for now — with Rolldown[0] replacing Rollup[1] in the future, bundling will be fast enough to do the same in dev and prod).
It also serves as an integration point for other kinds of tooling that involves processing or generating code. For example, the latest versions of React Router[2] (which now integrates Remix's features) and Tailwind[3] are designed primarily to be integrated into projects as Vite plugins.
I'll be the Steve Ballmer saying "I love JavaScript", but man it is great. I have made so many apps in different forms: web, desktop, mobile. If I needed to, I could go into Xcode and work on Swift. I have stuck with one stack though: ReactJS/NodeJS/ReactNative/ElectronJS or PWA. I prefer including SASS styles.
I still think TS is annoying to work with, but I'm coming around since I have to use it at work and libraries like React Native are using it by default. Swift/C++ have typing, but yeah, I like plain JS for speedy development, and typing can get annoying, especially for a personal app.
If there is anyone here who has time to explain to me (or link articles about) why functional components and hooks are considered to be better than class components, please enlighten me.
Up until roughly 4-5 years ago I was doing small front-end React apps on the side (I'm a backend engineer) and was feeling very productive with class components. They made sense to me, concerns were nicely separated, and I felt I could reason pretty well about what was called when and how state was manipulated.
Then hooks came around, I tried them a few times, but I just felt so lost. Suddenly everything is intermingled in one function and we're using side effects to react to changes and manipulate state. I could no longer understand when which code was executed, and especially following and manipulating state became impossible for me.
The projects I already had, I kept with class components. I haven't done any new front-end projects since then.
React's mental model has always been UI = F(data) and in ideal case any component is a pure function. But, of course, in most real apps there are cases when this mental model breaks: internal state, animations and side effects. The old way to provide escape hatches for those cases was to wrap a function into a class, where original function is 'render' method and the cases above are handled with lifecycle methods. The problem was that one piece of functionality often required updating several lifecycle methods, and for a big component that non-locality made things very hard. Hooks are a different way to express non-purity that keep code more local.
Thanks, that makes sense. Interesting perspective on the UI = F(data), I did not know that. I still wish the mechanics were a bit more... intuitive... I guess? Personally, I'm a big fan of The Elm Architecture [1]. I felt that is a very nice way to separate state and logic. But I'm not such a big fan of Elm itself (subjectively).
There was a great interview on ACM Queue [0] where they explain that paradigm. As a rule of thumb: try to keep everything functional, and if you want to have state, keep it contained to its component.
I only knew React until my current job, which uses Vue. I'd strongly recommend trying a framework other than React for your next project. After you're past the learning curve, it's much more intuitive.
> If there is anyone here who has time to explain to me (or link articles about) why functional components and hooks are considered to be better than class components, please enlighten me.
Static evaluation of which instance properties are being used in a class instance is much harder than evaluating which variables are being referenced in a function. There's no need to worry about calling context or binding instance methods. Functions minify much better because you don't have to worry about preserving long property names like componentDidUpdate. With class components, sharing logic involving state between components required either functions taking a state setter and whatever slice of state you needed, or--more commonly--higher order components. With function components and hooks, the code responsible for initializing and updating state isn't tied to an instance. Now you can share that code with a plain function without needing to pass an entire slice of state and an update function into it. Instead of needing to shove all your update-related code into the same componentDidUpdate or componentWillUnmount methods, you now split them into different calls to useEffect.
> Suddenly everything is intermingled in one function and we're using side effects to react to changes and manipulate state. I could no longer understand when which code was executed, and especially following and manipulating state became impossible for me.
If you're talking about using useEffect to respond to changes by setting state: that's almost always a code smell and sounds like trying to sync state with props. This was an anti-pattern long before hooks, and was called out explicitly in the docs.
Having worked on a lot of class components and function components, class components offer a lot more opportunities for bugs which can't be statically prevented. On the other hand, most of these bugs can be caught in function components and hooks by a linter, or are just prevented entirely by the design. A frequent question which came up during the class component era was what code belonged in the class's constructor, componentWillMount, or componentDidMount methods. This always came with caveats, because generally component initialization isn't something developers should be thinking about because it can happen many times before anything appears on screen. Function components offer fewer opportunities for this. The useEffect hook forces people to think purely in terms of running effects in response to changes in variables which have been closed over, and about what things need to be done to clean up after the effect has run. Responding to user events (e.g. onClick) is almost exactly the same as it's always been other than cosmetic changes.
I'm not sure how everything being in a single function offers worse organization than everything being within a class. Instead of instance properties you have variables. Instead of methods you have inner functions.
"frontend" is the wrong word, he should have said "web". Writing desktop UIs or mobile apps tends to be elegant. And probably "SPA frontend" as server-side UIs are also much cleaner.
While I completely understand OP's sentiment, after literal decades of web/front-end development, I never quite grasped the depth of disdain for Javascript or CSS until being on a team where more than the same 2 or 3 people were actively coding in them.
With ChatGPT I am enjoying Typescript development and learning a lot. Unfortunately, with ChatGPT being, in JavaScript terms, decades behind the state of the art, it becomes a little challenging to get it to do what you want. But it gets me 80-90% of the way there 70% of the time, which is a huge win.
Yeah I wonder if his experience is mostly using JavaScript, which is absolutely impossible to maintain at scale. Most of my team comes from primarily backend-dev roles and they've all grown to love TypeScript over Python.
I have done frontend as well as backend and moved to backend only, because the endless hype-train jumping, CV-driven development, and config and library churn was just too much of a comedy. And it is still happening. Now it is people switching "routers", version upgrades for nodejs, version upgrades for typescript, the deployment platform, and ... The list goes on.
This kind of thing is much, much less pronounced in the Python ecosystem, but it exists there too, and many packages are written on bad conceptual foundations, just like many (the majority?) of NPM modules are.
You don't have to follow hype when doing frontend, you can pick whichever technologies you want and stick with them for years.
Part of being a good frontend lead, for me personally, is not falling for the hype and only adding packages I know will a. be supported for the foreseeable future and b. have an exit plan if those technologies aren't supported, and c. keeping an eye on my juniors/seniors to make sure they're writing sensible long-term code.
The gnashing of teeth over the "upgrade cycle" I think speaks more to poor team planning/leadership than it does to the actual tech now.
Yeah, you don't have to follow the hype, unless of course you've got coworkers blinded by the hype and managers who do not know how to discern who actually knows something and who is just jumping on bandwagons. Suddenly you will seem like the backwards guy who does not want to learn the new shiny thing. Then suddenly you do have to follow the hype, even though you warned them.
As a full stack developer, your chances of being or becoming the frontend lead are reduced, as you don't have focus like a frontend-only person, who will subtly play the card that they are the specialist and you are not. And frankly, as a full stack developer, why would I even want to become a frontend lead and sacrifice the part of development that is much saner? For the frontend lead it also pays well to follow the hype, raise the frontend to a "modern level", and get paid a senior salary.
>why would I even want to become a frontend lead and sacrifice the part of development, that is much saner
That's your choice, but myself personally, I like the deep focus on one thing along with guiding my team. I think we perform at a pretty high level and we are a happy productive team by all reports.
Our frontend is sane, and that was because I was given the scope to make it sane. Others will have different experiences, and I agree that full-stack would be less sane by default, due to the lack of time or care for it.
What I am pointing out is that one might have an idea of how to build a FE in a sane way, but as soon as it becomes a team decision and people don't trust your experience or knowledge, it often quickly becomes a game of who can best represent the next modern hyped thing, repeating the arguments heard elsewhere, rather than staying with a maintainable solution. If later they need to hire 2 more FE-only roles just to maintain the thing they built, they will still not see how silly the decision was and will think that this is how it must be. Work that could be done by a single person in a few months turns into work done by 3 people working FE only, just to maintain the thing built with the next hyped framework, while all they needed was static content rendering with tiny sprinkles of JS for some dynamic check boxes.
Besides that, people tend to jump to extremes in FE. Want some component (in terms of a React component)? Immediately they think the whole site(!) has to be a React FE, because they like using sledgehammers. Most people don't even think about serving dynamic widgets only on the very few pages/parts of a platform where they are needed, using frameworks which have had this capability for many years now (for example VueJS), and keeping the rest of the entirely static website just that: static, workable on by any capable software developer or engineer. No. They must build themselves a FE moat! Make things ever more complex, to justify their roles. When you step back and look at what their site actually does, it is laughable how much time it took to get there and how much time still goes into maintaining it.
The examples I wrote about earlier are not just imagined. I have personally witnessed multiple FE people taking 3 weeks to switch out an app router for a pages router (or the other way around) in a nextjs project. This is a problem entirely of their own making. 3 weeks for 3 people, that is 9 work weeks. Someone please transfer 2 months of salary onto my account!
I know how to make completely responsive and well accessible websites, more responsive than 99% of websites that call themselves responsive. More responsive than what most FE people cook up with their JS frameworks and non-standard component systems. That is because I care when I build things, and I inform myself about how to do things in standards-conformant ways and use HTML in the way its elements are meant to be semantically used. I have done FE stuff before. In 20 years my site will probably still work just the same, and it will be simple to maintain, because it uses standard HTML and CSS. Meanwhile their frameworks will be long abandoned for the next shiny thing by then. They will rebuild and rebuild and churn and churn.
Still, management will rather trust a FE(-only) dev to build a convoluted mess, because they label it as "modern" and want to jump on the hype trains. FE devs are often like drug dealers, figuratively speaking. The incentives are just like that: the more complex a thing they build, the safer their job. The same is true for backend, btw. There are lots of grifters out there.
> It's very hard to beat decades of RDBMS research and improvements
> Micro-services require justification (they've increasingly just become assumed)
That is so true. I am still deploying good old .war files to full blown Java app servers backed by simple SQL databases (all clustered and stuff) with some handwritten cli tools and a Jenkins server. Shit is fast, shit scales, shit just works. It is a pleasure to work with
20+ years as a professional developer and I still have to argue with coworkers on why MongoDb is a terrible fit for their data, which is clearly relational, just because said coworkers can't be bothered to learn SQL.
"93%, maybe 95.2%, of project managers, could disappear tomorrow to either no effect or a net gain in efficiency. (this estimate is up from 4 years ago)"
This made me laugh, it is so true. My last big project at "Big Co" (a knee-surgery robot): my small group went through 4 project managers, just for our small team. The entire project had probably 20. While a few were enjoyable to work with, there was very little value added and a lot of time spent filling them in.
You can extend this to all varieties of PM. Product/Project/Program. All should be replaced by people who actually create things. E.g. I prefer to have a UX Designer handle the Product Manager tasks, since they already think about the app from the user's perspective, and can actually create solutions to problems.
At $PREVIOUS_JOB, the team I was on worked the best when we had no manager. Or rather, we had a director (who should not have been managing us directly) meet with us once a week to tell us "here's where we're headed ... good?" and let us go work in quiet until the next meeting (or meet with us sooner if something really important cropped up).
When org charts are from 2 reorgs ago, team members are going straight to members of other teams where collaboration is needed, the backlog is 100% maintained by the developers with no input from managers, I wonder what exactly they do outside of meetings where they usually just relay orders from higher up.
I was really surprised to hear this because I feel the exact opposite! I've worked mainly with project managers who ran all the ceremonies, held people accountable, dealt with planning and doling out tasks, handled stakeholders and generally protected the devs from distractions, and took real leadership and accountability in the project.
Whenever I've worked on a team where a developer is the team lead and has to do all that stuff on top of coding - or worse, it's just a free-for-all with no leader - things in my experience go much worse, communication breaks down, and things slip through the cracks.
Agree - in my 10+ year career, I've run into exactly 2 PMs who have provided enough value to a team or project to justify their inclusion in it. Both were technical enough to understand what the engineers were working on and talking about, and were able to offer genuinely good suggestions.
The rest? At best they were glorified QA/QC with a large stick to hit the engineers with when the spec wasn't met exactly. And when it was, and things still failed, they still hit the engineers with the large stick and were usually promoted for it.
> Typed languages are essential on teams with mixed experience levels
I like this one because it puts this endless dilemma in a human context. Most discussions are technical (static typing ease refactoring and safety, dynamic typing is easier to learn and better for interactive programming etc.) and ignore the users, the programmers.
I'll plug "The Design of Everyday Things" by Don Norman in case anyone in the thread hasn't read it. It's the classic text on design. You will never think about doors the same way again.
I'm kind of wondering where the "mixed experience levels" part comes from. What is it about more homogeneously skilled teams that makes them less susceptible to the productivity boost that statically typed languages give in large code bases?
I would tend to agree with the author's statement there. Though less "necessary for heterogeneous teams" and more "unless your team is entirely senior/skilled".
To sum up some thoughts that have evolved over decades into something more reasonable for a comment--more junior developers are less able to develop a robust mental model of the codebase they're working in. A senior developer can often see further out into the fog of war, while the more junior developer is working in a more local context. Enforcing typing brings the context closer into view and reduces the scope necessary for the developer to make sensible changes to something manageable.
It also makes it much easier to keep contracts in the codebase sane and able to be relied on. With no declared return type on some method, even with checks and code reviews it's possible there's some non-obvious branch or set of conditions where it fails to return something. Somebody might try and take a "shortcut" and return some totally different type of object in some case. In every case, it puts these things front and center and reduces the ability to "throw shit at the wall until it sort of works on the happy path" without making that much more obvious.
And once those types are declared, it saves everyone else time when stupid shit does slip through. You probably have some idea what `getProductAvailability(products)` might do in some eCommerce system. But when the actual function is implemented as `getProductAvailability(InvoiceItem[] products): void`, the foot gun for the rest of the team is... less. (Your guess as to how I know this is correct.)
In teams with good, experienced people the types are still helpful and I'd still question anyone choosing _not_ to be explicit about these sorts of things regardless of team composition. But they're much less _necessary_ in a skilled team in my experience.
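A tiny sketch (hypothetical domain types) of the kind of contract being described - with the signature declared, a branch that returns nothing, or something of a different shape, simply doesn't compile:

```java
import java.util.List;
import java.util.Map;
import java.util.stream.Collectors;

// Hypothetical domain types for illustration only.
record InvoiceItem(String sku, int quantity) {}

class AvailabilityService {
    // The declared signature is the contract: every branch has to produce a
    // Map<String, Boolean>, and returning "some totally different type of
    // object in some case" is rejected at compile time rather than in review.
    Map<String, Boolean> getProductAvailability(List<InvoiceItem> products) {
        return products.stream()
                .collect(Collectors.toMap(InvoiceItem::sku, item -> item.quantity() > 0));
    }
}
```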
IMO it's like scrum: if your team is good and homogeneous, it doesn't really matter much what you do: it just works. Scrum and no scrum, types and no types. It's not about having rockstars or 10x engineers, it's just about having shorthands, shared knowledge, etc.
If your team is varied or too large, you need things to help you out with organisation and communication.
(Whether my examples of Scrum and Types are the answer: depends on the team unfortunately)
In my experience: Too large is any team larger than 10 people or any code base with more than 10,000 lines of code. Both of those would be considered tiny by most in the industry.
The numbers I gave are not exact; they depend on details that I don't think anyone entirely knows. Sometimes 1 person is too many (generally implying a bad programmer), while other times you can get a bit over 10 if you have strong discipline. Likewise, strong discipline can get you to 100k lines. Really, what this is about is how much pain you are willing to put up with. 10 people and 10k lines of code are good round numbers to work with.
Hard disagree on the 10kloc limit. At a previous job, I maintained and enhanced a 50kloc monolith (written by someone else), usually by myself. My productivity was very high. At my current job, we've split a codebase that should be about 50kloc into more than 10 separate repositories; everything is still just as coupled but it's much harder to reason about and refactor. My productivity is much lower.
I'm reading in it that experienced developers (be it overall or in a specific codebase) "know" all the ins and outs, types, conventions etc, whereas less experienced people cannot yet know all of that; being able to lean on good types and / or other types of automated checks helps them make more confident changes.
Less experienced devs iterate on something until it looks like it works, not realizing the footguns they may have embedded. Static typing removes some footguns and provides documentation for the next unfortunate soul to look at this code.
> What is it about more homogeneously skilled teams
They are a strawman example that doesn't exist in the real world.
Companies will be in big trouble in a few years when the team retires, people find new jobs, someone dies... All of them mean that a homogeneously skilled team will exist for at most a few years if you have one. As a company you need to ensure you have a program to train in new people.
I have long believed that when someone retires you should replace them with someone fresh out of school, promoting people all the way down to fill the opening. If someone finds a new job you can replace them with someone else with similar experience, but when someone retires they should be replaced by someone you already have groomed for the job.
I think the size of the code base also matters: bigger size = having types is more important.
There is a contradiction here as: bigger size = compile speed more important AND types slow down compilation. More advanced typing features slow down compilation even more.
> More advanced typing features slow down compilation even more.
C++ is a bit of an outlier here. But really people should think of typechecking as shifting fault detection earlier in the process than runtime. It doesn't matter if your test suite starts slightly quicker if you have to wait for the whole thing to run to find something you could otherwise have found with types.
Even when I am alone, I have "mixed experience levels", for example I can learn something niche, write some algorithm that works in that, then 2 years later I may have forgotten it. Types are essential for me.
You can do it whatever way you want but match the style of the project. I've worked on too many projects where someone decides their way is best and you end up with a mix of everything.
If you want to change the code style, okay, but change it everywhere and don't forget to test everything you've changed.
> Frontend development is a nightmare
But is it weird that I kind of enjoy it every so often?
> Elegance is not a real metric
You're damn right it's not! Next time someone proudly presents a super elegant, refined, and minimalistic solution, I'm going to phone them at 3am on a Saturday and get them to debug it while screaming at them about lost revenue or something.
> DynamoDB is the worst possible choice for general application development
Oh man, the number of times I've seen some form of NoSQL used as a relational database. 9/10 times some rendition of SQL is more than sufficient.
I've grown to be a big fan of opinionated linters like gofmt, rustfmt, black etc. They avoid so much time spent disagreeing about code formatting and personal preferences. Instead engineers can do mutual grumbling sessions about weird formatting choices they see it do, and move on.
You're welcome to your opinions, I don't stop engineers I'm working with from having opinions about the code formatting. It's still going to be formatted by the opinionated formatter.
Having one gets us into the "Well, it's not quite what I want but at least it's consistent", and it gets rid of arguments that don't really provide anywhere near the amount of value engineers feel they do. There are almost always significantly better and more productive things to be spending time figuring out.
Code style is one thing - formatting, that is - but there are others: how features are implemented tends to change over time and with different developers as well. That is difficult to test automatically and hard to keep in line except with good code reviews, but for that to work you already need to instill a culture of consistency, which also means that innovation may be stifled and developers demotivated (e.g. because the better solution requires the 100 existing solutions to be rewritten, which is too expensive, or requires a whole team to be blocked until it's done).
Consistency trumps a lot of things IMO, but not everyone is on board with that... myself included, I'm guilty of breaking with my own consistency all the time.
> REPLs are not useful design tools (though, they are useful exploratory tools)
I disagree with this. I’m a Clojure dev, and most of the time, I use the REPL to iterate on features, fix bugs, and refactor, thanks to the fast feedback loop.
I used to be a Java dev—oh god, restarting the whole app after every change made me want to shoot myself in the head. Now, I use the REPL to build what I want and then move on. This brings joy back to programming.
I’m not saying other languages are bad, but working with Clojure is more enjoyable for me. I’m at least 2-3 times faster than I was with Java. Of course, there are techniques you need to know to write efficient and idiomatic functional code.
I could definitely see how if you work in Java or C# all day, repls feel like a neat curiosity. Picture telling someone who writes scheme in emacs that repls aren't a good design tool.
If you want to argue the point with non lisp people, I'd go to javascript or SQL as great examples where you really use repl's quite a bit.
Well, because Clojure actually has a "proper" REPL. Non-lispy languages don't have such REPLs, at best - they are interactive shells. The blogpost author doesn't seem to have experience with homoiconic languages, otherwise, I'm sure, that sentence would be different.
I'm a heavy REPL/interactive shell user so when I do Java I abuse the testing framework, basically I put my sketches in unit tests and run those. The feedback loop is pretty tight, close to what I get in some other languages, whatever happens behind the scenes in IntelliJ it's much shorter than a full recompile and boot.
Supposedly there are some Java shells around but I haven't tried them out.
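For reference, the "scratchpad test" workflow looks roughly like this (JUnit 5 assumed); and since Java 9 there is also the official jshell REPL, which may be one of the shells alluded to above:

```java
import org.junit.jupiter.api.Test;

// A throwaway "scratchpad" test: edit, run in the IDE, read the output,
// repeat - then delete it when done.
class ScratchpadTest {
    @Test
    void sketch() {
        var parts = "a,b,,c".split(",", -1);
        System.out.println(parts.length);            // 4
        System.out.println(String.join("|", parts)); // a|b||c
    }
}
```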
Java has been able to hot swap code since the beginning? Well, maybe not the beginning. But very early versions.
The standard runtime didn't like some redefinition. But there were alternatives. Eclipse, for example, would purposely let you get otherwise broken code running so that you could breakpoint and replace as you went.
Apparently Graal lets you hotswap. I tried getting it working about a year ago, but couldn't figure out how to install it with Gradle toolchains, so gave up at that point. I should probably look at it again. Anybody using this?
> Most programming should be done long before a single line of code is written
This is the only point I strongly disagree with. I have been doing this for twenty years now and every time we've gone into something with a STRONG plan for how it's going to be built, it's ended up an inflexible nightmare when we inevitably end up having to work around things that were not considered in the design phase.
The plan always ends up bumping into unforeseen realities, and you end up with sunk cost around the planning so instead of pivoting you keep on suboptimal course.
You can spend months planning the smallest feature and there will always be something you did not consider.
Rapid prototyping in my experience is the way. Throw something together that works, see how it can be improved, don't be afraid to throw the entire thing out.
I feel like exploratory and iterative development is more necessary when requirements are unclear or when there is a lack of domain knowledge. Also obviously things like game dev require an iterative style.
For backend web dev though, I can plan it all in advance. I have done it enough times that there are rarely any surprises and I know the pitfalls. I am really just limited by customers not knowing their own requirements.
I really love the iterative style but the problem is that I never worked in a company that allowed for enough time for large scale refactors. They might promise you that but it will never happen. You have to get it right the first time around or you will have to suffer until the system gets rewritten in a decade or two.
Of course plans can absolutely be too rigid but I generally found that more planning results in better products.
I would say it depends on your domain expertise and your expertise in the chosen programming language. Of course, for very large projects with multiple unknown integrations this is not the case.
But if you are an expert in both the domain and the technology itself, you can design it well before starting to code, because you already know the technical issues you are likely to face.
At least I have personally managed to design some projects and implement them without any design changes. But I also read countless blog posts about the limitations of these programming languages every day.
There needs to be a balance of course, but eventually you will have code/interfaces that are too painful to change because of all the users. The more core the code is, the more painful it will be to change, and so the more a well-thought-out, strong design is needed. The less core the code is, the less strong the design needs to be - but often the core code will force some design on you.
Of course there will always be something you didn't consider.
> Most programming should be done long before a single line of code is written
I'd rephrase this to something like: Most programming should be done before 5% of the code is written. Because "no plan survives contact with the enemy". I often develop a plan, work for just a tiny bit, and realize some new constraints. It's after that point that you should construct your grand battle plan.
> #3: “Plan to throw one away; you will, anyhow.” (Fred Brooks, “The Mythical Man-Month”, Chapter 11)
> Or, to put it another way, you often don’t really understand the problem until after the first time you implement a solution. The second time, maybe you know enough to do it right. So if you want to get it right, be ready to start over at least once.
I've made an opposite progression from the op. I was a strong believer of upfront design, but now value iterative approach as you do.
For the first try, hack together something working, you'll learn things along the way, validate/disprove your assumptions. Iterating on this will often bring you to a good solution. Sometimes you find out that your current approach is untenable, go back to the whiteboard and figure out something different.
It's wrong these days: exploratory coding is better for dynamic languages like Python, using a Jupyter notebook, and then doing the proper coding after the exploratory step.
> ORMs are the devil in all languages and all implementations. Just write the damn SQL
It depends on what you're writing. I've seen enough projects writing raw SQL because of aversion to ORMs being bogged down in reinventing a lot of what ORMs offer. Like with other choices it is too often a premature optimization (for perf or DX) and a sign of prioritizing a sense of craftsmanship at the expense of the deliverables and the sanity of other team members.
I've never understood the ORM hate because a good ORM will get out of the way and let you write raw SQL when necessary while still offering all of the benefits you get out of an ORM when working with query results:
1. Mapping result rows back to objects, especially from joins where you will get back multiple rows per "object" that need to be collated.
2. Automatic handling of many-to-many relationships so you don't have to track which ids to add/remove from the join table yourself.
3. Identity mapping so if you query for the same object in different parts of your UI you always get the same underlying instance back.
4. Unit of work tracking so if you modify two properties of one object and one property of another the correct SQL is issued to only update those three particular columns.
5. Object change events so if you fetch a list of objects to display in the UI and some other part of your UI (or a background thread) add/updates/deletes an object, your list is automatically updated.
6. And finally in cases where your SQL is dynamic having a query builder is way cleaner than concatenating strings together.
For those who are against ORMs I am curious how you deal with these problems instead.
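Not an answer to the question, but for concreteness, a minimal JPA sketch of points 3 and 4 from the list above (jakarta.persistence assumed, javax.persistence on older stacks; the Customer entity is hypothetical):

```java
import jakarta.persistence.Entity;
import jakarta.persistence.EntityManager;
import jakarta.persistence.Id;

// Hypothetical entity for illustration.
@Entity
class Customer {
    @Id Long id;
    String name;
}

class OrmFeaturesDemo {
    void rename(EntityManager em, long customerId) {
        em.getTransaction().begin();

        Customer a = em.find(Customer.class, customerId);
        Customer b = em.find(Customer.class, customerId);
        // Point 3 (identity map): within one persistence context, the same row
        // maps to the same instance.
        assert a == b;

        // Point 4 (unit of work): a plain field assignment is enough; no explicit
        // UPDATE statement is written here.
        a.name = "New Name";

        em.getTransaction().commit(); // dirty checking issues the UPDATE
    }
}
```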
You are describing data mapper ORMs, a.k.a. the good ORM. I think all the other ORM-loathing guys here had bad experiences with active record ORMs, a.k.a. the bad ORM.
Also, infrastructure guys and DBA types tend not to like ORMs. But they are not the ones trying to manage the complexity of the business process. They just see that our queries are not optimal, and that is everything to them.
Right! They should really be considered two different things. I've worked a lot with Django (the bad type) which people tend to love, but I've seen the horrors that it can produce. What they seem to love about it is being able to write ridiculously complicated SQL using ridiculously complicated Python. I don't get it. These types of ORMs don't even fully map to objects. The "objects" it gives you are nothing more than database rows, so it's all at the same abstraction level as SQL, but it just looks like Python. It's crazy.
SQLAlchemy is the real deal, but it's more difficult and people prefer easy.
Oh was I enthusiastic when I first got my hands on an active record ORM: "I can use all my usual objects and it'll manage the SQL for me? Wow!". That enthusiasm reached rock bottom rather quickly as soon as I wanted to fine tune things. Turns out I'm not a fan of mutating hierarchical objects and then calling a magical .commit()-method on it, or worse: letting the ORM do it implicitly. That abstraction is just not for me and I'd rather get my hands "dirty" writing SQL, I guess.
Yes I suspect a lot of the ORM hate comes 1) from people using poorly designed ones or 2) from people working on projects that don't really require the features I mentioned. Like if you are generating reports that just issue a bunch of queries and then dump the results to the screen you probably don't care that much about the lifetime of what you've retrieved. But just because an ORM might not be the right tool for your project doesn't make it a bad tool overall, that would be like saying hammers are bad tools because they can't be used to screw in screws.
It's not so much optimization as experience: on any sufficiently large project you're going to run into ORM limitations and end up with a mix of ORM and direct queries. So might as well...
Starting with raw SQL is fun. But at some point you find out you need some caching here, then there, then you have a bunch of custom disconnected caches having bugs with invalidation. Then you need lazy loading and fetch graphs. Step by step you'll build your own (shitty) ORM.
Same thing for people claiming they don't need any frameworks.
Caching is orthogonal to using or not using an ORM. You might opt to have caching with or without an ORM in a consistent manner. You can also opt to add read replicas fronted by, say, pgcat in the Postgres case, without having a separate caching layer.
I guess this is a point where terminology matters. If you work with an SQL database in an OOP language, you pretty much always do some object-relational mapping, no matter whether you have a big framework or just a raw SQL connection.
But this is not what people usually call ORMs. All the "bad kind of ORM" (JPA impls, Entity Framework, SQLAlchemy, Doctrine, Active Record...) have some concept of an entity session which is tracking the entities being processed. To me, this is a central feature of an ORM, one of its major benefits. It is, incidentally, also serving as a transaction-scoped cache.
I won't of course dispute that you can have caching on other levels as well (which may perform differently, for different use cases).
As an SRE who has dealt with more caching errors than I care to count: a lot of caching comes down to YAGNI.
To his point: It's very hard to beat decades of RDBMS research and improvements
Your RDBMS internal caching will likely get you extremely far and speed difference of Redis vs RDBMS call is very unlikely to matter in your standard CRUD App.
There are plenty of libraries/packages for SQL that do all of that for you, too. The choice isn't between a sophisticated ORM and just throwing SQL text at a socket. The fundamental assumption of ORMs is broken, but much of the tooling works well and exists in non-ORM places.
JPA implementations have "managed entities", sometimes called the session or 1st-level cache, which makes sure that every entity is loaded at most once within a transaction. Checking users/user permissions, for example, typically has to be done in several places in the course of a single request - you don't want to keep loading them for every check, and you don't want to keep passing them across 20 layers, so some form of caching is needed. JPA implementations do it for you automatically (assuming you're fine with a transaction-scoped cache) since this is such a core concept to how JPA works (the fact that it's also a cache is a secondary consequence). JPA implementations typically provide more advanced caching capabilities as well: caching query results, distributed caches (with proper invalidation), etc.
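The example above is Java/JPA; the same 1st-level-cache behaviour can be sketched with SQLAlchemy's session, whose identity map acts as the transaction-scoped cache (model and data are illustrative):

```python
# Within one session, repeated lookups of the same entity hit the database
# only once; the second lookup is served from the session's identity map.
from sqlalchemy import create_engine, event, String
from sqlalchemy.orm import DeclarativeBase, Mapped, mapped_column, Session


class Base(DeclarativeBase):
    pass


class Permission(Base):
    __tablename__ = "permissions"
    id: Mapped[int] = mapped_column(primary_key=True)
    role: Mapped[str] = mapped_column(String(30))


engine = create_engine("sqlite://")
Base.metadata.create_all(engine)
statements: list[str] = []
event.listen(
    engine, "before_cursor_execute",
    lambda conn, cur, stmt, params, ctx, many: statements.append(stmt),
)

with Session(engine) as session:
    session.add(Permission(id=1, role="admin"))
    session.commit()
    statements.clear()

    check_1 = session.get(Permission, 1)   # SELECT issued once
    check_2 = session.get(Permission, 1)   # served from the identity map
    assert check_1 is check_2
    assert sum(s.lstrip().upper().startswith("SELECT") for s in statements) == 1
```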
Why is caching not a feature in DB connection pools? I mean, most databases have it on their side, why not have it as an option for the same query sets prior to hitting the db, with configurable invalidations? Or is it, and I've just never thought to look for it.
Integrating cache into connection pools brings little added value since connection pools don't have enough context/information to manage the cache intelligently. You'd have to do all the hard work (like invalidation) yourself anyway.
Example: if you execute "UPDATE orders SET x = 5 WHERE id = 10", the connection pool has no idea what entries to invalidate. ORM knows that since it tracks all managed entities, understands their structure, identity.
I guess I was thinking more of frequently run queries against infrequently modified data or where stale data doesn't matter so much. The sort of things that are ideal cache targets. You'd think you could tag queries like that. Sort of the things that a CDN caches but more granular. Sure if it's stuff that's frequently changed, an ORM could reason about it just like, well, the database does, but then you're back into all the bad things about running your shadow database with a badly fitting model, and you'd be better off just ensuring all or part of the database ran closer to your app with replication, say, in memory on same server.
That would be a result set layer, not a connection pool. It could make sense if you worked with rows, but if you use an ORM, why map the cached rows again and again? ORMs cache hydrated objects, which seems more efficient.
I'm a hardliner on supporting this - I'd go further - all the DB code should be in SQL, with the database being manipulated by stored procedures and the db schema not even being exposed to the common developer.
As far as I know this is a very oldschool view on how to treat dbs, but I still think this is the only correct one.
I hate ORMs with a passion - they're just a potential source of bugs and performance issues, coming from either bugs in the engine, devs not understanding SQL, devs misjudging what query will get generated, leaky abstractions etc.
It's big enough of an ask to understand SQL itself, it's the height of folly to think you can understand SQL when it's being generated by some Rube-Goldberg SQL generator, especially if said generator advertises that you don't need to know SQL to use it.
And not just the ORM, but the way it's used. If you ensure that lazy-loading is turned off from day 1 and stays off, you might be okay. But if you don't pay attention to this and write a bunch of code for N years until all the "select N+1"s you've been unwittingly doing finally force your DB to a crawl... now you're in trouble.
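A hedged sketch of the "select N+1" trap described above and the eager-loading fix, using SQLAlchemy; the `Author`/`Book` models are illustrative only:

```python
from sqlalchemy import create_engine, ForeignKey, String, select
from sqlalchemy.orm import (DeclarativeBase, Mapped, mapped_column,
                            relationship, Session, selectinload)


class Base(DeclarativeBase):
    pass


class Author(Base):
    __tablename__ = "authors"
    id: Mapped[int] = mapped_column(primary_key=True)
    name: Mapped[str] = mapped_column(String(50))
    books: Mapped[list["Book"]] = relationship(back_populates="author")


class Book(Base):
    __tablename__ = "books"
    id: Mapped[int] = mapped_column(primary_key=True)
    title: Mapped[str] = mapped_column(String(100))
    author_id: Mapped[int] = mapped_column(ForeignKey("authors.id"))
    author: Mapped[Author] = relationship(back_populates="books")


engine = create_engine("sqlite://")
Base.metadata.create_all(engine)

with Session(engine) as session:
    session.add(Author(id=1, name="Ursula",
                       books=[Book(id=1, title="A"), Book(id=2, title="B")]))
    session.commit()

    # N+1: one query for the authors, then one lazy query per author's books.
    for author in session.scalars(select(Author)):
        print(author.name, len(author.books))

    # Eager loading: two queries total, regardless of the number of authors.
    for author in session.scalars(select(Author).options(selectinload(Author.books))):
        print(author.name, len(author.books))
```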
Just mentioned to an acquaintance a couple of days ago that Entity Framework is the biggest flip-flop of my personal career. I was a _rabid_ supporter when it was released a decade and a half or so ago, and the appeal has decreased linearly over time, to the present day.
Statically-typed queries written in a mini-DSL within your application language seems like such a joy!
But then the configuration is a hassle. The DbContext lifecycles are a hassle. The DbContext winds up a big ball of mutable state that often gets passed all up and down your call stack, reducing the ability to reason about much of the code locally. Was this instance initialized with or without change tracking? How many and what changes have been applied? Were these navigation properties lazily or eagerly loaded?
And it promises to keep your domain persistence-ignorant with its fluent configuration syntax. But then you have to compromise on that here, then compromise on it there.
Pretty soon, you realize that you started out building a project for domain X or domain Y, only to realize that you're trying to shoehorn domain X/Y behavior into your Entity Framework app.
One job I had, we got handed a code base with at least 4 different reinventions of an ORM in it.
It became clear that each developer who'd worked on the code had written their own helpers to avoid direct SQL. It took a fair bit of persuading leadership, but the first task ended up being a huge refactor of everything SQL-related. Unsurprisingly, lots of bugs got squashed that way.
The way I like to use OO (usually not real OO, but rather class-based languages) is to minimize its mutable state. Often mutability is merely a lack of using builder patterns. Some state can be useful as long as it's easy and makes sense to globally reset or control. It's like writing a process as a construction of monads before any data is passed into it. Similarly a tree of processing objects can be assembled before running it on the input.
I've probably written more Java than any other programming language during my career, and I've seen both good and bad ways of writing Java.
Bad java heavily uses lots of inheritance, seems to think little about minimizing exposed state, and (unrelated to this discussion) uses lots of techniques I like to call 'hidden coupling' (where there are dependencies, but you can't see them through code, as there is runtime magic hooking things up).
Good java almost never uses inheritance (instead composes shared pieces to create variation on a theme), prevents mutability wherever possible, and makes any coupling explicit for all to see.
Good java still has classes and objects, but starts to look pretty 'functional'.
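The comment is about Java, but the same style translates to a short Python sketch: compose immutable pieces with explicit dependencies instead of inheriting (all names here are made up):

```python
from dataclasses import dataclass
from typing import Callable


@dataclass(frozen=True)
class Order:
    subtotal: float
    country: str


# Variations on a theme are composed from small, explicit functions...
TaxRule = Callable[[Order], float]

def flat_tax(rate: float) -> TaxRule:
    return lambda order: order.subtotal * rate

def no_tax(order: Order) -> float:
    return 0.0


@dataclass(frozen=True)
class Pricer:
    tax_rule: TaxRule  # the dependency is visible right here, no runtime magic

    def total(self, order: Order) -> float:
        return order.subtotal + self.tax_rule(order)


print(Pricer(tax_rule=flat_tax(0.19)).total(Order(subtotal=100.0, country="DE")))
print(Pricer(tax_rule=no_tax).total(Order(subtotal=100.0, country="US")))
```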
I begrudgingly have had to enter the world of Spring Boot over the last couple years, and this drives me nuts. Every damn thing needs getters and setters so that the ten thousand magic Spring annotations can mutate things all over the place. If the business logic is complex, I try to make immutable domain models separate from the models that have to interact with Spring, but that can require a lot of boilerplate.
I had a different read of that point. More along the lines of “don’t throw the baby out with the bath water” (might have butchered that saying?).
I’m also more in the FP camp - even wrote a book on the topic of FP. But I also acknowledge OO is not inherently a bad choice for a project, and many languages nowadays do exist along a spectrum of OO and FP rather than being strictly one of the other.
To me a benefit for OO might be the ubiquity - you can generally assume people will understand an OO codebase if they have done a few years of coding. With more strict FP that is just not a given - even if people took a Haskell course in Uni a decade ago :).
> OO is not evil, but it also shouldn't be your default solution to everything.
With Smalltalk and Objective-C both being effectively dead at this point, that really only leaves Ruby (and arguably Erlang) as the only languages that are able to express OO. And neither of those languages are terribly popular either. Chances are it won't be your default solution, even if you want it to be.
Curious about your case for this. I don't know a lot about Erlang other than "it's what Elixir is based on" or whatever technical jargon is more accurate. I thought it was functional.
I was mostly riffing on the time Joe Armstrong, creator of Erlang, said that Erlang might be the only object-oriented language in existence. Although he's not exactly wrong, is he?
> I thought it was functional.
I think that is reasonable. Objects, describing encapsulation of data, are what define functional. Without encapsulation, you merely have procedural. Of course, that still does not imply the objects are oriented...
For that you need message passing. But Erlang has message passing too! So there is a good case to be made that is object-oriented.
Patrick Naughton came from the Smalltalk world, so Java is definitely inspired by Smalltalk, but he didn't bring along the oriented bits. Its object model is a lot closer to C++'s. To have objects does not imply orientation.
> Most won't care about the craft. Cherish the ones that do, meet the rest where they are
This becomes easier when you've transitioned from someone who cares to someone who doesn't. Some of us burn out from the industry and fall out of love with the job.
I understand my curmudgeon-ly ex-colleagues a lot more nowadays.
Functional programming does not prevent you from using objects
Stop listening to functional programming bros. Watch someone like Zoran Horvat. While I can't cosign all of his opinions, he tries to bridge the functional/OOP gap for OOP programmers. OOP programmers urgently need to distance themselves from this binary narrative. Everyone should understand functional programming and how they can utilize functional approaches in every language or paradigm. For me it is a requirement of understanding software development.
Absolutely every institution teaches OOP practices; they also need to teach functional practices. These aren't sports teams.
I wrote a comment echoing this point, about the coexistence of FP and OOP. This ties to the article's point about finding an algebra. Once you have an algebra, you have the basis for implementing a lot of your application in the functional style, manipulating objects that are inputs to and outputs from your algebra's operators.
I think the analogy works. It's all about the tradeoffs.
Double-entry is harder to grasp, but it has certain properties, like being able to sum along rows or columns. With that property, you can then make assertions like "if anything is off by a cent, then there has been a mistake and it needs to be looked at again."
On the other hand, single-entry is much simpler, you can just record a figure for a date with a reason, and be done with it. It widens the pool of employable candidates, it's easier to onboard new employees, and you don't have any elites screaming at you for doing accounting the wrong way.
If you take a hybrid approach and mix the two, then on average you only have to fill in 1.5 entries per transaction, so it's easier and faster than double-entry, but you can still express some transactions with two entries if it's more elegant, on a case-by-case basis.
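A toy sketch of the property the double-entry comment above relies on: every transaction's entries sum to zero, so a one-cent mistake is mechanically detectable (accounts and amounts are made up):

```python
from dataclasses import dataclass


@dataclass(frozen=True)
class Entry:
    account: str
    amount_cents: int  # positive = debit, negative = credit


def is_balanced(transaction: list[Entry]) -> bool:
    # The double-entry invariant: debits and credits cancel out exactly.
    return sum(e.amount_cents for e in transaction) == 0


sale = [Entry("cash", 10_00), Entry("revenue", -10_00)]
typo = [Entry("cash", 10_00), Entry("revenue", -9_99)]

assert is_balanced(sale)
assert not is_balanced(typo)  # the single-cent mistake surfaces immediately
```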
Your team gets a clear and simple high-level goal that everyone at the company understands is important, and you're shielded so you can work on it for a year without interruptions.
You don't know it, but your manager is playing up your team's contributions to the company, arranging happy coordination with other teams, occasionally intervening to resolve intra-team disputes, privately managing egos and careers, and jiu-jitsu'ing any attempts to distract your team.
They trust your competence and decisions, but they understand enough to keep you on-track and provide a valuable outside view perspective.
1. For a non-manager, an indication that there is good management (project, process, etc.) in place is that the management aspect sort of seems to disappear/ moves into the background.
2. Communication becomes efficient or smooth.
How is it achieved?
1. High-level goals and metrics, and incremental upgrades to those. I think people/teams need to get comfortable with one set of them before you try to improve on those metrics. Jira story points and velocity are not good metrics.
2. A manager acts as a buffer. A manager absorbs some shock and filters some data/ emotions which would otherwise flow between one (ideally more) pair of layers: one above them and one below them.
3. One kind of nonsense (among many kinds) is people - junior or senior - 'trying to prove their value'. This is why some people speak unnecessarily in meetings, emails go back and forth, senior management chimes in on low-level issues, etc. A couple of good managers I saw were able to limit that - over a period of time.
Here's the problem with good management: you will not notice it, you only notice bad management.
A good manager is at the service of their team. That means they will do anything to keep the team productive. In most cases this means shielding them from corporate bullshit. In practice this also means that you will barely notice them.
So when you notice your management intervening and getting in the way, it's probably bad. When you barely notice your management and can't see what value they're adding, it's most likely excellent management.
Good managers will help you develop to where you want to be next. Part of this is helping you see where the next place for you is. Then they give you tasks to do that get you there.
They balance the above with the current business needs, of course. Generally the two should be in line, but where they're not, they help you manage that.
This list does resonate, but I’d make some tweaks to express things slightly better. For example:
> Most programming should be done long before a single line of code is written
I would say “most engineering should be done before a single line of production code is written”.
Formalizing a “draft process” is something I’m really trying to sell to my team. We work in an old codebase - like, it’s now older than most of the new hires. Needless to say, there’s a whole world of complexity in just navigating the system. My take: don’t try to predict when the production code will be done, focus on the next draft, and iterate until we can define and measure what the right impact will be.
The problem is that there’s a ton of neanderthal software engineering management thinking that we’re just ticket machines. They think the process is “senior make ticket, anyone implement ticket, unga bunga”. What usually happens here is that we write a bunch of crappy code learning the system, then we’re supposed to just throw that in a PR. Then management is like “it’s done, right” and now there’s a ton of implicit pressure to ship crap. And the technical debt grows.
I haven’t quite codified a draft process, but I think it’s kind of in line with what Chris here is talking about: you shouldn’t worry about starting with writing production code until you’re very confident you know exactly what to do.
Ah well, it’s a fun list of opinions to read. Chris’ WIP book is an interesting read as well
>They think the process is “senior make ticket, anyone implement ticket, unga bunga”.
The fact that you just summarized about an hour worth of argumentation from my last annual planning meeting with that one sentence has just destroyed me. I kneel.
At this precise moment in time, if anybody seriously thought the above was the way the process works or should work, they should be advocating for firing all the juniors and replacing them with LLMs.
> At this precise moment in time, if anybody seriously thought the above was the way the process works or should work, they should be advocating for firing all the juniors and replacing them with LLMs.
Sadly, I think this is happening at some places. Like Salesforce. Sigh
Sort of tangent, but is there a semi automated way to select which engineer would be best to implement a ticket? Something like "we need to modify this API, let's run git blame and see who has the most familiarity" and some form of scheduler that prioritizes the most experienced engineers on the parts of code that only they know?
I do think you could do some analysis to associate code with implementers and create graphs, where you account for additional things like time. I could see LLMs being helpful in maybe doing part of that analysis. But I would use that to see where the biggest "bus factor" is, i.e., finding subsystems where there's really only one active contributor.
For planning or task assignment, it might just help to say "ask X for more detail" when there are no other docs or your LLM is spewing gibberish about a topic.
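A rough sketch of the `git blame`-style analysis mentioned above, using plain `git log` from Python; the file path and the bus-factor heuristic are hypothetical:

```python
import subprocess
from collections import Counter


def authors_for(path: str) -> Counter:
    # Count commit authors for one path to gauge familiarity / bus factor.
    out = subprocess.run(
        ["git", "log", "--follow", "--pretty=format:%an", "--", path],
        capture_output=True, text=True, check=True,
    ).stdout
    return Counter(line for line in out.splitlines() if line)


counts = authors_for("src/api/billing.py")  # hypothetical file
print(counts.most_common(3))                # likely reviewers / implementers
if len(counts) == 1:
    print("bus factor of one:", next(iter(counts)))
```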
> People who stress over code style, linting rules, or other minutia remain insane weirdos to me. Focus on more important things.
I wish he'd say which people are the weirdos: the ones who say let's just pick a standard and rigorously enforce compliance, or the ones who complain about the burdens of compliance and try to undermine enforcement. I'd say the latter. I'm surprised he didn't feel the need to be specific.
Agree. No code standards is a red flag. It betrays a lack of leadership and efficiency. It means the person in charge either doesn't enforce standards, or doesn't see the value in them.
I've been loving edgedb in my typescript side projects for a number of years now. I have always hated ORMs, mostly for performance and a little bit for elegance / ergonomics. Curious if you have thoughts on a solution like that?
I like EdgeDB. The only downside I see is the lock-in, that is less of an issue with SQL.
While I have not used EdgeDB, I have used Hasura, which I really liked.
Wrt the syntax of queries, I know of https://prql-lang.org which is kind of similar to EdgeQL.
Funny that EdgeDB says on its page it tries to solve the ORM problem: I strongly believe ORMs do not solve any problem, they merely create problems (as I argue in the linked article).
Wrt my thoughts: the EdgeQL queries are usually still strings (no compile-time type checking, no IDE tooling). With Hasura the GraphQL interface (which EdgeDB also supports) allows one to generate a client library based on the GraphQL schema: this allows one to have compile-time checks on the queries, which is really neat.
This is days old, but edgedb does allow generating a fully typesafe query builder package that is schema aware. At least for typescript, I haven't tried it with other languages yet.
Edgedb ends up being an extremely performant ORM if used that way. And since your database is fully aware of your object graph, it's just as or more type safe than graphql.
While I agree with this in spirit, I once made the observation that:
"Code quality can be measured by the number of occurrences of 'WTF' uttered by other developers per minute when reading a PR."
Elegance might be subjective, but how easy code is to read and understand probably can be quantified even if we don't have the best tooling for that yet.
Meaning that if 10 engineers less experienced than me all tell me that they think expressing something one way is easier to understand than another, I take that as objective feedback and change course.
>Given a long enough time horizon, you'll deeply regret building on Serverless Functions
What is the regret about? I've been almost exclusively using AWS Lambda (not using the "Serverless" framework), since 1 month after AWS released Lambda in Nov 2014. I've built quite a lot on top of AWS Lambda, and I love it. It just works. It's not difficult to work with, and I don't really have to worry about scaling it. Is 10 years not enough of a "time horizon" to regret it?
At a previous employer I tried to implement a Just Jar where people had to put a dollar in every time they said "Just do...". It didn't last because the boss was the biggest offender.
> You have to actively invest in improving your soft skills (and investments pay back immediately)
Can anyone elaborate on how one does that? I have seen this advice many, many times but never with anything actionable attached to it.
I believe my soft skills are my biggest weak point right now and are holding me back from growing further in my career, and I would like to do something about it. But I have no idea how.
If the person at the top can come down for a coffee with the people who endured some bad management and ask three honest, non-loaded questions, it can be measured qualitatively but with very high accuracy.
The three questions are:
- What should we start doing?
- What should we continue doing?
- What should we stop doing?
This is an immensely powerful tool. Thanks to the awesome person who introduced me to this.
Addendum: "Theory X" is something really bad. If you're working with a team which responds positively to Theory X, you have much bigger problems IMHO.
That's true. The tool I gave above does exactly that. You distribute these three questions to your team, collect answers anonymously, then pile them up and look at what you see to read your team.
Then you can see what's going well and what's not, and plan next steps accordingly.
What insight does an IC have on what the person at the top even does? I can basically only comment on my interactions with direct management and I otherwise don't know how they spend their day.
That's not important - you're not discussing how to do their job, but what your own view is of your part of the work and those around you. The leader can then synthesize all these views and see if there are any surprises.
(another case similar to "you should listen to a customer when they say there's something wrong with the design of your product, but you should absolutely not pay attention to what they think the solution should be")
By having a 5-minute group meeting where only the manager speaks. The manager is not allowed to talk about what direction the team should go in or what the team should do. Anything else will give instant insights into what's going on at the isolated top.
> Very few abstractions exist in general application development. Just write the code you need.
I don't try to foresee abstractions (premature optimization), but I often encounter them when adding to a codebase. A recent example would be a tool that uses property info from a public database. The first version was hard-coded to the database from one jurisdiction. When I added support for a second jurisdiction, abstractions helped avoid code duplication.
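A hedged sketch of the kind of abstraction that tends to emerge in that situation; the `PropertySource` protocol and the two jurisdiction classes are hypothetical names, not the actual tool:

```python
from typing import Protocol


class PropertySource(Protocol):
    def fetch_parcel(self, parcel_id: str) -> dict:
        """Return normalized property info for one parcel."""


class CountyASource:
    def fetch_parcel(self, parcel_id: str) -> dict:
        # County A exposes a CSV download; parsing elided.
        return {"id": parcel_id, "zoning": "R1"}


class CountyBSource:
    def fetch_parcel(self, parcel_id: str) -> dict:
        # County B has a JSON API with different field names; mapping elided.
        return {"id": parcel_id, "zoning": "residential-low"}


def report(source: PropertySource, parcel_id: str) -> str:
    # Shared logic written once, against the abstraction.
    info = source.fetch_parcel(parcel_id)
    return f"{info['id']}: {info['zoning']}"


print(report(CountyASource(), "12-345"))
print(report(CountyBSource(), "98-765"))
```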
Agree. People who join a project and insist on changing the default styling configs are insane weirdos. But people who don't want to use any styling solution on a project with several developers are EVEN MORE insane weirdos.
Hard to have any kind of productive discussion when it's just a set of claims without rationales. I agree with some and disagree with some (RDBMSes are overrated and solve the wrong problem; ORMs are a useful tool and bypassing the ORM because you want to use your l33t SQL skillz is the devil; programming should be done by writing lots of code from the earliest stage on (and throwing most of it away)).
Regarding scaling… one thing that drives me bonkers about Apache Superset is that the maintainers are absolutely adamant that it must be deployed with kubernetes because it’s meant to scale. The project works just fine with a simple and slim docker/compose deployment, which is easier for small teams to manage. The lead maintainers refuse to document this and have unnecessarily sprinkled the docs with warnings.
I have a grey beard and have earned my right to have opinions on things, so:
> Most programming should be done long before a single line of code is written
I think via programming, and I think we should be building more prototypes. Upfront design almost always falls apart, and if you've invested too much into the design, you end up with some nasty frankenstein code. Get a good prototyping process in place instead, and write code and interfaces as early as possible, just make sure you're very willing to throw them away, review what worked, etc. The sooner you prototype, the sooner the "natural" design appears, and the less you've invested in it.
> Frontend development is a nightmare world of Kafkaesque awfulness I no longer enjoy
I like React much more than any of the shit in the run-up to it, and I've been building websites since the late 90s, working my way through server-side includes, hidden refreshing iframes, XMLHttpRequest, Prototype.js, jQuery ... and today's React just Makes Sense, man.
As always on here, a lot of people appear to act as if there are universal truths about our craft.
I would posit that the exact environment you are in changes massively what is correct, what is a good rule of thumb and what is a bad practice.
What applies to a game developer does not apply to a web developer. What applies to a product developer does not apply to a library developer. What applies in a startup does not apply in a legacy org.
I'm sure there are some universal truths, but usually they are principles and not prescriptions.
E.g. Code should probably lean toward readability, but you quickly come up against time constraints, language constraints, performance constraints, interpersonal conflict, etc.
Edit:
To that end, I would ask that when discussing what we find to be useful or not useful, we caveat it with the environments in which we have found it to be true.
> It's very hard to beat decades of RDBMS research and improvements
This is further complicated by the fact that not all RDBMS engines are created equally.
It took me 2 decades to develop appreciation regarding why one would actually want to pay for something like SQL Server when options like SQLite and Postgres are free and easy to use.
> ORMs are the devil in all languages and all implementations. Just write the damn SQL
My take on this would be: ORMs are the devil in all languages and all implementations. Static SQL queries with parameters will have a lot of duplication (lots of similar queries), whereas dynamically generating complex SQL will be hard to debug and maintain (e.g. myBatis). Write some good DB views in SQL. Generate some dumb ORM entities against those views for passable querying/filtering/pagination and do whatever works for altering it. Don’t go fully into the opposite direction either and don’t put 99% of your business logic into the DB, that will be hard to work with because the tooling isn’t very good (e.g. stored procedures).
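A sketch of that middle ground with SQLAlchemy: the query smarts live in a SQL view, and a dumb, read-only entity is mapped against it for querying and paging (schema and names are illustrative):

```python
from sqlalchemy import create_engine, text, String
from sqlalchemy.orm import DeclarativeBase, Mapped, mapped_column, Session

engine = create_engine("sqlite://")
with engine.begin() as conn:
    conn.execute(text(
        "CREATE TABLE orders (id INTEGER PRIMARY KEY, customer TEXT, total_cents INTEGER)"
    ))
    conn.execute(text(
        "CREATE VIEW customer_totals AS "
        "SELECT customer, SUM(total_cents) AS total_cents "
        "FROM orders GROUP BY customer"
    ))
    conn.execute(text(
        "INSERT INTO orders (customer, total_cents) VALUES ('acme', 100), ('acme', 250)"
    ))


class Base(DeclarativeBase):
    pass


class CustomerTotal(Base):
    """Dumb entity mapped onto the view, used only for querying/paging."""
    __tablename__ = "customer_totals"
    customer: Mapped[str] = mapped_column(String, primary_key=True)
    total_cents: Mapped[int]


with Session(engine) as session:
    for row in session.query(CustomerTotal).order_by(CustomerTotal.total_cents.desc()):
        print(row.customer, row.total_cents)
```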
> Typed languages are essential on teams with mixed experience levels
I'm 30 years in now, and on balance, whilst they have clear advantages, I'm still not convinced that typed languages are essential, particularly for low level or module programming.
Unless you can happily deal with '"four" + 1' your language is typed. The question is whether you want type errors reported upfront or you have to wait to find them at runtime (necessitating high test coverage).
One of these days I'd like to see a "typed assembler". It still matters what the contents of registers mean, even if they all look the same to the instruction set.
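A tiny illustration of that point in Python: the language is typed either way; an annotation just moves the error report from runtime to an up-front check such as mypy:

```python
def add_one(x: int) -> int:
    return x + 1


try:
    "four" + 1                 # runtime TypeError in plain Python
except TypeError as exc:
    print("runtime:", exc)

# add_one("four")              # mypy: incompatible type "str"; expected "int"
print(add_one(4))
```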
Interesting take; to me it is not so much about the type of coding (high- vs low-level) but more about the size of the project.
Dynamic languages work great for scripting and rapid prototyping, but if you are working with a team maintaining a large monolith, I would rather have a statically typed language and avoid at least a class of runtime issues due to dynamic typing.
Case in point, I love writing my Jupyter notebooks with Python but am amazed that entire platforms like Dropbox (and instagram?) chose Python as their default language.
One thing that I have never been more sure about after 20 years as a software developer is that Hibernate is awesome! Seriously, it saves me a ton of time and tinkering. It has support for native queries, and it is simple to map your custom query onto JPA entities. Using Hibernate has never been easier since the introduction of LLMs: tell one what you need, and get beautiful example Hibernate code.
In my Spring Boot applications I log every SQL statement that is generated and quickly spot inefficiencies, like lazy loading of entities instead of using a join.
How ORMs behave in other languages and frameworks, like Python, Go, or Rust, is probably another story.
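In Python, for example, the analogous trick with SQLAlchemy is to turn on statement logging and watch for floods of small SELECTs (a lazy-loading smell); a sketch, with an illustrative in-memory database:

```python
import logging

from sqlalchemy import create_engine, text

logging.basicConfig(level=logging.INFO)
logging.getLogger("sqlalchemy.engine").setLevel(logging.INFO)  # log all SQL

engine = create_engine("sqlite://")        # or: create_engine(url, echo=True)
with engine.connect() as conn:
    conn.execute(text("SELECT 1"))         # appears in the log as an INFO line
```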
>People who stress over code style, linting rules, or other minutia remain insane weirdos to me. Focus on more important things.
This sticks out like a sore thumb to me and makes me think you've been coding solo for 10 years. If you lead a team of developers or work with one, you are screwed without linting rules and a standardized code style. Even when they are applied, it takes months to get a team working in harmony - without them it will be a disaster.
- merge conflict hell because of code style differences
- bad for code reviewers
- bad to have to read everyone's different code styles.
He's not saying you shouldn't do it, he's saying you shouldn't stress over it. The "run the language's standard formatter before commit and then get on with your life" approach.
I do think there's value in manually formatting some code sections. For example, very large arrays, or alignment of the semantic parts of a group of mathematical expressions. Ultimately, I think the only thing that matters is that the code is readable and consistent and you don't spend much time on how it looks.
That said, if you have team members who somehow can't or won't copy the surrounding code style, then automatic linting sounds necessary.
See it like this - in 10 years, when there is no original dev working on this, will this project still have the same code style?
With a nice .editorconfig the probability is very high that it's mostly consistent.
I can guarantee you any manual rules will be long forgotten and the IDE style will reign supreme - only your old "manual" formatting will stick out like a sore thumb.
Everyone on the team needs to agree to the linting rules before adopting a linter. Otherwise they will keep writing code according to their habits, and then they'll start modifying whichever rule doesn't fit their bad habits whenever the linter asks them to fix the code.
> run the language's standard formatter
Even a very smart linter like `ruff` cannot fix all of the linting errors. You have to hand-fix many of them.
What linter do you use that can just run and forget?
> Gradual, dependently typed languages are the future
What's that?
Idris is dependently typed.
> Gradual typing is a type system that lies in between static typing and dynamic typing. Some variables and expressions may be given types, and the correctness of the typing is checked at compile time, while some expressions may be left untyped and eventual type errors are reported at runtime.
That sounds miserable. I can't see how one would guarantee anything when types of other things aren't known. And I can't see how to connect that to dependent types either.
Imagine adding types to a legacy JavaScript codebase. You can turn everything to valid TypeScript by annotating `any` everywhere, then you can gradually add types here and there.
Or imagine writing Rust with `Rc` everywhere and then using the borrowing style on the hot path.
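The same migration story reads naturally in Python, where type hints are gradual by design; a small sketch, assuming a checker such as mypy is run separately:

```python
from typing import Any


def legacy_total(order: Any) -> Any:      # step 1: annotated, but unchecked
    return order["subtotal"] + order["tax"]


def total_cents(subtotal: int, tax: int) -> int:   # step 2: fully typed
    return subtotal + tax


untyped = legacy_total({"subtotal": 100, "tax": 19})   # fine at check time,
                                                       # errors only at runtime
typed = total_cents(100, 19)
# total_cents(100, "19")   # mypy: Argument 2 has incompatible type "str"
print(untyped, typed)
```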
I can see where the author is coming from, but sadly a difficulty is that dependently typed languages often require the programmer to prove type equality during type checking. It's hard to do if the information is not complete.
It isn't about "guaranteeing" _everything_, any more than assertions or guard clauses guarantee everything. It is about giving programmers a semantic tool they can use to communicate expected interfaces in an automatically-detected way, without adding unnecessary toil and boilerplate in tight, local contexts where functions are just working on obvious primitives.
The most graceful implementation I've seen is RDoc comments + Jetbrains tooling in Ruby. When it looks like a type system people assume it's going to work like Java, but having build-time checks based on where people bothered to describe the expected interface catches errors without any of the downsides of type systems that keep type systems from measurably boosting productivity.
I can only assume this comes from an observation that once your product matures, the static types become more apparent and you have a better idea how flexible your data modeling should be.
i.e. we are gradually adding more runtime type-checks to our Clojure codebase.
(Runtime checks are even more powerful than dependent types)
Dependent typing is mainly just that feature of TypeScript where `x: number | null` becomes just `x: number` inside an `if (x != null) { ... }` statement.
If you dig deeper into the type theory, it gets really interesting and complicated because types themselves can start to include arbitrary code. And that's where Idris comes in! But after doing a whole project in Idris in college, I agree it is way outside the Overton window. Python and TypeScript rightfully keep it simple.
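For reference, the behaviour being described is usually called narrowing or flow typing; a minimal Python sketch of the same thing under a type checker:

```python
from typing import Optional


def shout(name: Optional[str]) -> str:
    # Here `name` is `str | None`...
    if name is None:
        return "hello, stranger"
    # ...and inside this branch the checker has narrowed it to plain `str`.
    return f"hello, {name.upper()}"


print(shout(None))
print(shout("ada"))
```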
> to C++ to Java to Ruby to JS (also server-side) and Python
I feel that languages are going back to static typing. Newer languages such as Go, Rust, Kotlin, Swift, Dart, Nim etc are all statically typed (I can't remember a language from the last decade that is dynamically typed).
Even some dynamically typed languages are moving towards a more static typed system: for JS we have TS, and for Python we have type hints
> The trouble with functional programming is functional programmers
This hilariously works for a lot of things. Biggest trouble with legalization of drugs is the drug use of the ones that love to talk about why it should be legal. Biggest trouble of prohibition policies are the people that love going on about why they are better for not drinking.
I assert that this is, essentially, "The trouble with THING is the group of people that think THING is a panacea at what it does."
> Distributed locking is still really hard for some reason
I think distributed locking is hard because it only offers certain properties and requires that the application work together to achieve consistency, just like what you said about DynamoDB:
> DynamoDB is a good database (IFF your workload lines up with what it's offering)
Can someone explain the ORM thing to me? I’ve been a developer for 8 years but never really worked on an app that was really database dependent. ORMs for me have always been convenient, and the performance has been fine. I understand there’s obvious tradeoffs I’m making, and in some cases full control is necessary, but I’ve never seen it happen. What level of complexity does an app need to get to before an ORM becomes a nightmare?
* They hide the queries. When your DB or cloud service gives you a printout of your 10 slowest queries, you then have to figure out what object code that relates to. And then is there even a way to fix it, or are you stuck with the ORM?
* LINQ-specific: Love the tech, but it's unclear whether my `.Where()` calls are being sent upstream properly, or if I'm downloading the whole database and filtering it in memory.
* Another LINQ one: we wanted to do "INSERT IF NOT EXISTS" but could not.
* Back in Java land, magic like that tends to be incompatible with basic hygiene like consting all your class fields. Frameworks like being able to construct a Foo in an invalid state, and then perform a bunch of mutations until it's in a good state.
* They make it near impossible to reason about transaction states. If I call two methods under the same open db context, what side-effects can leak out? If I try to do an UPDATE ... SET x = x + 1, that will always increment correctly in SQL. But if I read x from an ORM object and write back x + 1, that looks like I'm just writing a constant, right? (Sketched after this list.)
* Extra magic: if you've read a class from the db, pass it around, and then modify a field in that class, will that perform a db update: now? later? never?
But just in general, I want to look at the data, play with queries in a repl environment until they look right, and then use directly in the code without needing to translate from high-level&declarative down into imperative loops, sets and gets.
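A small sketch of the increment hazard above using plain sqlite3: the atomic UPDATE is safe under concurrency, while the read-modify-write pattern an ORM object encourages can silently lose updates (table and values are made up):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE counters (id INTEGER PRIMARY KEY, x INTEGER)")
conn.execute("INSERT INTO counters VALUES (1, 0)")

# Atomic: the database applies the increment, no lost updates.
conn.execute("UPDATE counters SET x = x + 1 WHERE id = 1")

# ORM-ish read-modify-write: between the SELECT and the UPDATE another
# writer can sneak in, and we overwrite its increment with a stale constant.
(x,) = conn.execute("SELECT x FROM counters WHERE id = 1").fetchone()
conn.execute("UPDATE counters SET x = ? WHERE id = 1", (x + 1,))

print(conn.execute("SELECT x FROM counters WHERE id = 1").fetchone())  # (2,)
```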
>When your DB or cloud service gives you a printout of your 10 slowest queries, you then have to figure out what object code that relates to
Not if you have proper telemetry set up... Tooling like instana was extremely useful for me to diagnose exactly where SQL statements caused issues
>Extra magic: if you've read a class from the db, pass it around, and then modify a field in that class, will that perform a db update: now? later? never?
For hibernate if you understand the concepts of:
- application level repeatable reads
- its dirty checking mechanism
- when the session is flushed / entity lifecycle
That 'magic' isn't magic anymore. But every abstraction is leaky (even SQL)
> If I try to do an UPDATE ... SET x = x + 1, that will always increment correctly in SQL. But if read x from an ORM object and write back x + 1, that looks like I'm just writing a constant, right?
This is not specific to ORMs... you can run into the same problem without one.
> Extra magic: if you've read a class from the db, pass it around, and then modify a field in that class, will that perform a db update: now? later? never?
In every ORM I've used you have specific control over when this happens.
First we have to make sure we're talking about the same thing. There are "active record" ORMs (like Django, Ruby on Rails, etc.), there are "data mapper" ORMs (Hibernate, SQLAlchemy etc.), then there are things like LINQ which are not ORMs at all but merely SQL generators (but you could build an ORM with it if you want).
The arguments against them, I think, are most strong for active record and much less strong for data mapper.
The problem really is complexity, aka coupling. An active record ORM naturally pervades an entire codebase that uses it. People pass around these "objects" that are really just thinly-veiled database rows. But that's all they are. They are at exactly the same abstraction level as the relational database itself, they just look like objects. But in fact they are filled with footguns because accessing those attributes could trigger database requests.
So you'll see business-level code written that has to "know" about the ORM and "know" about N+1 queries and therefore essentially "know" about SQL and the underlying relationships (or, conversely, data access layers that have to "know" about the business logic, e.g. "I know this logic needs to access this bit so I'll prefetch it"). So you're not really gaining anything. These ORMs are the complete opposite of a good software architecture that gives you flexibility and ability to reason about components in isolation.
A good data mapper ORM at least lets you map data from relational tables to real objects. That way you are able to build a new abstraction layer upon which to write business logic etc. A programmer writing those business rules should be able to fully write and test logic with no knowledge of the ORM at all. But in active record projects you'll find each and every developer has to have the full stack in their heads at all times.
I would be interested to know if there are strong reasons to avoid data mapper ORMs too.
In my experience, finding an actually good UI designer is hard.
Obviously they exist, but I don't know how to find them. It seems many — if not most — professional UI designers don't even understand that form inputs need labels.
I agree, it's hard. I'm lucky that I work for a large org with a separate design department. They are the ones that initiate how our products present themselves to the user, both in terms of styling and user flow, and they spend time and effort researching with guinea pigs what works best. It's very much "Design Thinking" in action. The outcome though is a monumental improvement in how easy the product feels to use. It's an artistic activity and is thus empathetic by nature. Being initially skeptical, I was won over very quickly.
For the most part, I agree with a lot of this, but there were a couple points I raised an eyebrow on.
> There is no pride in managing or understanding complexity
To the extent the point is about intentionally creating complexity, or allowing it to continue to exist where it can be simplified, I agree - but some systems are necessarily complex, and having pride in understanding and managing them is a good thing. If no one had pride in this, why would anyone care about the things that aren't simple - or bother to train the ability to simplify the complex in the first place!
> Micro-services require justification
I used to believe this, but have changed my mind on it. Micro-services are great in the current CS ecosystem (at least in the US) because they allow smaller parts of the whole to be more easily refactored when you find your development team with none of the people that were present when design decisions were made and a major feature needs to change.
It's an unpleasant situation to be in, and one that you ideally want to avoid happening, but is common enough that preparing for it in your application landscape is reasonable.
You don't need microservices to accomplish that - a library with a well-designed public API and a language that lets you enforce private functions/methods is good enough.
I strongly agree with almost all of this except the last part about project managers - a good one is invaluable, an ok one can still be helpful. The majority probably are neither (and their ability to be good at managing projects depends a lot on other org functions - not necessarily within their control) but way better than 90%+ that the author suggests.
> You have to actively invest in improving your soft skills (and investments pay back immediately)
I wonder how much of this can be qualified with "after you've been hired".
Between two university graduates with equal skills in technical interviews and splicing linked lists and recursive-descent parsing and whatever, if one has better soft skills, you hire that one.
The question is what you do between one who is better at soft skills and another who is better at doubly linked lists or group-by queries or docker orchestration or whatever tech you're asking about in the interview. Empirical evidence suggests you hire the second person and support and expect them to skill up once they're in.
If you have spare time at uni, working on both soft and techy skills is great, if you have to trade off opportunity costs, the advice I hear a lot is invest on the technical side first.
It's funny: I used to think tests were a PITA, but now I'm arguing for them as I see the problems they solve, even basic ones, like full E2E headless-browser testing vs. 100% unit test coverage.
I am holding back on stressing out regarding syntax like PEP 8. My role right now is great where we get to prototype random things.
> There is no pride in managing or understanding complexity
How does that interact with
> If I think something is easy, that's a sure sign I don't understand it.
? Is it implying that you must understand the irreducible complexity, but mustn't take pride in that understanding? Or is "difficult" the opposite of "easy" here, rather than "complex?"
Both easy and difficult problems can have simple or complex solutions. Two different axes of measurement.
Most problems in software seem to end up being harder than they look, so if I think something is easy I have to suspect I've missed something.
Most solutions in software end up being more complicated than they needed to be, because we don't have the hindsight to realise that until after we've done it and our managers are demanding the next thing.
Sure, but that's not really what the quote says, it doesn't talk about making things more complicated than they really are, it just talks about "complexity."
It's saying you don't need to be proud of your app that has multiple layers of abstractions to be immune to every possible change. There are hard problems, but the most complex solution is rarely good or best.
I don't have a strong opinion about dynamically typed vs statically typed but I have a very strong opinion about interpreted vs compiled.
I regret how neglected interpreted languages have become. I'm much more productive with interpreted languages than compiled (or worse; transpiled) languages. The iteration time is crucial for me. I don't want to wait even 30 seconds to test a change. After 20 seconds waiting for a compilation, my brain is already going into sleep mode. I can't have that.
For most application-building use cases, I will choose an interpreted dynamically typed language over a compiled statically typed one.
When it comes to low-level embedded use cases, or high-performance systems programming I will tolerate a compilation step but I won't pretend like it's not a negative.
> Frontend development is a nightmare world of Kafkaesque awfulness I no longer enjoy
Yep. At some point in React (maybe 5 years in) ‘how do I handle state?’ started to be ‘read this essay about the true nature of FRP from Dan Abramov’ and it just seemed to get worse from there. I worry Svelte will do the same in future.
Good thoughts. I agree with much.
I quite disagree with the following: “Most projects don't need to ‘scale’”
I get that you’re probably referring to theoretically infinite scaling provided by stuff like microservices, containers, and serverless functions, but I do think it’s part of our job to consider the external factors that affect the functionality of our software. Similar to types being assertions about the external world, so too the design of the code and how it handles scale is an assertion about external factors. It’s important to define the bounds of what our software can handle in terms of throughput and what should happen in case those assumptions break or approach their outer bounds.
I think most projects need careful attention to handling unexpectedly large throughput gracefully.
I agree with most to all of this, but I disagree on monoliths and microservices. I am not a fan of microservices per se, but I'm not a fan of monoliths either. I certainly prefer a mix depending on the use cases. However, both designs deserve equal scrutiny.
My monolith horror started when I was working with Ruby developers. The problem with a monolith comes when it becomes too big for itself. Realistically, something between microservices and monoliths is where quality of life resides.
If you need a small service that has to be fast and focused, microservice it. If you need a service that relies heavily on the same logic shared across many things, then as long as it is performant, modular and easy to fix, monolith it.
Either way conforming to one design certainly cripples the potential of services.
You can build everything the wrong way.
In my experience, making a monolith modular is organizationally easier than gluing together microservices, stepping on various toes, and getting rid of needless abstractions.
Microservices, if used sparingly and in the right way, are fantastic: they can offer flexibility for APIs that don't necessarily need to live in the monolith, keeping certain behaviors out of it and making the architecture more secure.
Everything needs to be purposefully built and used with the intention of serving the customer's needs without compromise.
Abstractions are a failure in most libraries too. Invariably something will come up that breaks the abstraction. Either new instances come up that don't quite match, or clients want new functionality that doesn't apply to all instances, or whatever.
I've come to the conclusion that in almost all cases, you should avoid providing any abstraction from your own library / service, and let consumers figure out what abstraction makes sense in their domain. By stipulating the abstraction, you're robbing your consumers of the ability to make things work the way they want, and painting your own thing into a corner.
When you build the abstraction, it seems like you're doing your consumers a favor. But that favor is short lived as the abstraction changes and breaks your consumers.
The best libraries provide both the abstractions and the building blocks those abstractions were built from so consumers can benefit from the abstraction when they can, but aren't blocked just because the library devs didn't consider a specific use case.
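A short sketch of the "abstraction plus building blocks" idea: the convenience function covers the common case, and the pieces it is built from stay public so unforeseen use cases aren't blocked (all names are hypothetical):

```python
import json
import urllib.request


def build_request(url: str, token: str) -> urllib.request.Request:
    # Building block 1: exposed, so callers can add their own headers, etc.
    return urllib.request.Request(url, headers={"Authorization": f"Bearer {token}"})


def parse_payload(raw: bytes) -> dict:
    # Building block 2: exposed, so callers can reuse parsing on cached bodies.
    return json.loads(raw)


def fetch_resource(url: str, token: str) -> dict:
    # The abstraction: fine for the common case, built from the parts above.
    with urllib.request.urlopen(build_request(url, token)) as resp:
        return parse_payload(resp.read())
```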
Half of my team's work right now is changing our abstractions because a couple clients want something slightly different, without breaking existing clients. The other half of my team's work is KLO forced on us by our dependencies needing to change their own abstractions.
> When you build the abstraction, it seems like you're doing your consumers a favor.
> You literally cannot add too many comments to test code (I challenge anyone to try)
I think it is possible, depending on how you write them. If you write long comments interspersed with the code, you have a lot of scrolling to do to follow the control flow. Long block comments should go at the top to "set the stage", with lightly interspersed comments throughout to remind you of the specific steps, where necessary.
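A small sketch of that layout in a Python test: one block comment up top to set the stage, then only light reminders inline (the pricing function and the bug it guards against are made up):

```python
def apply_coupon(total_cents: int, percent_off: int) -> int:
    return total_cents - (total_cents * percent_off) // 100


def test_coupon_is_applied_once():
    # Stage: a returning customer has a 12.00 order and a 10%-off coupon.
    # This guards against the old double-discount bug, where re-pricing the
    # cart applied the same coupon a second time.
    total = apply_coupon(12_00, 10)

    # Reminder: re-pricing must be idempotent with respect to the coupon.
    assert total == 10_80
    assert apply_coupon(total, 0) == total


test_coupon_is_applied_once()  # runnable without pytest, too
```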
> Very few abstractions exist in general application development. Just write the code you need
I think they exist, but they're either not well known or are hard to engineer because of missing context; good abstractions are just hard. Solve the immediate problem and you'll maybe, eventually converge on the abstraction and you'll have your "aha!" moment.
> ORMs are the devil in all languages and all implementations. Just write the damn SQL
I'm re-evaluating this again. I used to be all-in on ORMs, eventually became annoyed with the shortcomings and the performance problems, but am realizing the disadvantages of foregoing them as well.
* ORMs Bad: ORMs often lead to deep object graphs and tight coupling if you don't know what you're doing / don't factor the model properly. This has serious performance implications.
* ORMs Good: you get a unified, consistent view of all of your data on which you can enforce invariants in a holistic way.
* Only SQL Good: you can easily return and operate on only "slices" of the data, which is great for performance and rapidly iterating on features.
* Only SQL Bad: because you're only operating on "slices", you don't get a unified view of the data on which you can declare invariants. You can declare invariants on the slices and then stitch all of the properties together in your head to make sure you covered everything important (and write what tests you can to validate this), but this is error-prone and less necessary with ORMs.
So yeah, there's no single, clear winner here yet.
Good management is life and death, particularly for a startup. If you think you can do without it and you're not a good manager yourself then it's going to be a struggle to execute.
> ...you'll deeply regret building on Serverless Functions
Whatever serverless functions promise in scalability, they'll cost you in terms of complexity. Trying to build a cohesive backend entirely in a serverless environment just isn't worth it when it's trivial to boot up a simple, long-running server. Even running locally is a pain in the ass compared to `rails s` or `fastapi dev`.
God forbid your serverless environment is actually on the edge, at which point most of the stuff you're used to using won't work.
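For contrast, the "simple, long-running server" being referred to can be as small as this FastAPI sketch, run locally with `fastapi dev main.py` or `uvicorn main:app --reload` (routes and handlers are illustrative):

```python
# main.py
from fastapi import FastAPI

app = FastAPI()


@app.get("/health")
def health() -> dict:
    return {"ok": True}


@app.get("/orders/{order_id}")
def get_order(order_id: int) -> dict:
    # Hypothetical handler: in a long-running process this can share DB
    # connection pools, caches and background tasks across requests.
    return {"id": order_id, "status": "shipped"}
```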
Not all backends are web applications. I have built many Step Functions applications for various ETL-type workloads with massive success and little to no complexity. Just because someone doesn't understand the technology doesn't make it "complex".
"If I think something is easy, that's a sure sign I don't understand it."
Feeds fairly strongly into
"project managers, could disappear tomorrow to either no effect or a net gain in efficiency."
I think trying to fit everything into the same lens is usually the real problem. Functional programming is a great methodology for a lot of things, but there are many ways to design things. Chances are really good that something elegantly written in prolog won't be as elegant in scheme or java, and vice versa.
From my experience, functional programmers come in two flavors: the ones who constantly nag others by telling them "everything you do is wrong", and the ones who say "hey, come here, I want to show you something!". Unfortunately, the first camp is way more dominant than the other.
Again, from the same article, there's the observation "People who care about the craft are rare. Cherish the ones who care, meet the rest where they are". This is very true. The problem is, the people in the first camp can't cater to either group. The first group has strong opinions and wants a solid (not hostile, solid) discussion about how they can improve and how the tool works, and the second group doesn't care.
Also, every programmer has an understanding of the machine, and they program on top of that abstraction. For me, the "C virtual machine", or modern hardware, is very easy to grasp. For every line of code I can make an educated guess about how it will run (e.g. how the branch predictor will behave or get inadvertently poisoned), hence imperative languages and primitive memory management are easy for me. For others who think more meta, the functional paradigm is a natural thing to construct in their brains. Not meeting them where they are is, again, off-putting.
All in all, nobody wants to be constantly nagged and made to feel "stupid" about not understanding functional programming, or told how inelegant imperative is, and most functional programmers do this, even without knowing it.
Some of my friends who do functional programming are more reasonable people, who will sit down, look at some code or a problem, and say "oh, this looks like an interesting problem and an interesting solution; can I show you how it's done the functional way?", and we can go from there, and go a very long way from there.
I have never picked up Common LISP because of an academic who praised functional programming like it's the second coming of Christ. I still have the first chapter of the LISP book printed on my desk. I'll start it after 15 years, but now I don't have the time I want to spend on it.
Lastly, from an architecture/elegance perspective, I believe a good imperative program in C/C++/Go sits somewhere between Bauhaus and Brutalist architecture depending on what domain you're targeting (the lower the level, the more Brutalism), and I find that naked nature of imperative code, tapping directly and efficiently into the hardware or the kernel, very elegant and well-designed. For the same majority of functional programmers this is ugly and should be gotten rid of with a flamethrower. Also, an imperative programming language is no lighter on the PLT and mathematics that this same majority finds mouth-wateringly elegant.
TL;DR: If the majority of functional programmers can be a bit less arrogant, meet in the middle and learn why we love what we love, we can build better languages, ecosystems, and a better world to live in, but no, functional elegant, imperative bad.
Robert Martin's talk "What Killed Smalltalk can Kill Ruby, Too" (https://youtu.be/YX3iRjKj7C0) notes a similar mechanic about Smalltalk.
My experience is the opposite, I've only met incredibly friendly and helpful people within the functional world.
Your description of a person who has to put down everyone else in order to raise himself up is just a person with such low self-esteem that it's become toxic. You'll find these people everywhere; it's not exclusive to FP.
I'm personally fairly rigid about implementing business logic as functionally as possible, but I also enjoy game development which is inherently stateful and never without some imperative parts. I do think FP has some advantages over Imperative programming in many instances, but the opposite is also true, and I'd never pretend to be smarter or better than someone like John Carmack, who's made his career almost exclusively in Imperative languages.
Maybe not "most functional programmers", though. Maybe "the functional programmers who post the most" or "post the most stridently" and therefore "the functional programmers that I encounter the most often in ways that let me know that they are functional programmers".
And "Have you considered that your position is lazy" isn't a reply likely to change hearts and minds. (It is also, itself, a pretty lazy reply, compared to the effort the GP put into their answer.)
How is my position lazy? By not learning functional programming? I openly said that I was driven away by these very people, because I don't want to be part of a community that belittles outsiders the moment they speak.
I mean, most (not all) functional programmers I've met (over the last 20 years, no less!) started praising functional programming by bashing imperative languages, and never asked me what I like about programming or why I was so adamant about staying away from the functional paradigm.
When you start selling what you like as an omnipotent silver bullet without listening to what the other party is saying, or call the other party lazy and the root of the problem, you drive people away from the thing you are selling,
like you're doing right now.
Extra points for you for doing this, even after I have politely said that I have left that beef behind and trying to find the time to learn functional programming, and PLT in depth. Chef's kiss, actually.
> How my position is lazy? By not learning Functional Programming?
No, your position is lazy by asserting that most people who do functional programming are arrogant. This is a nonsensical value judgement. You don't know most functional programmers.
I don't know whether or not you're skilled at FP. I don't know you. But, as is appropriate, I assumed that you actually know what you're talking about.
In response to "There is no pride in managing or understanding complexity.", I posit the following based on my experience:
Understanding and managing complexity is one of the first steps required in order to eliminate complexity. Some of the greatest achievements in my software development career have been instances where I carefully pulled apart inscrutable solutions, often short-term and built in desperation, and reconstituted them into well-tested and much less complex solutions that were more understandable by devs and users.
I agree with most of Chris' observations and enjoyed reading his insights. Makes me want to do the same!
People who stress about people who stress about linting… focus on something else. Everyone has their idiosyncrasies.
A lot of great coders have psychological oddities. Some things help them focus, whether it's a neat environment, listening to music, or consistent code formatting.
I listen to them and try to accommodate them even if I disagree. I've changed my mind about several topics multiple times now. I felt strongly about some of them at one point over the past few decades as well.
> People who stress over code style, linting rules, or other minutia remain insane weirdos to me. Focus on more important things.
In my experience when working on and/or leading a team, having good and clear code style/guidelines helps the members of the team understand other people’s code much more easily. It often also serves as much easier to understand historical record when looking at previous contributions. I find it’s best to leverage tooling that enforces team conventions so that programmers don’t have to “stress” about it.
> You literally cannot add too many comments to test code
In my opinion, almost every developer I've worked with who advocated for generous amounts of comments has overestimated their (and/or others') ability to write good-quality comments.
Obvious ones like `a = b; // set a to b`, while useless, are also mostly harmless, but I've been led astray by outright factually wrong comments more times than I can count. I certainly don't feel confident in my own ability to never write factually incorrect comments. So yeah, I'd rather let the code do the talking.
That’s why your comments should be about things you know to be factual, like WHY you made a certain decision. Things like that cannot always be communicated by good variable and function names.
I suspect they mean comments in tests specifically, not comments in general code that hint at testing. A comment block that explains a bit of why a test is doing what it's doing can be amazing.
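For what it's worth, a minimal sketch of the kind of "why" comment being described; the banker's-rounding scenario is hypothetical, not from the article:

```
def test_invoice_rounding_matches_payment_provider():
    # WHY: totals are rounded half-to-even (banker's rounding) because that is
    # what the upstream payment provider does; a naive "round half up"
    # comparison made this test disagree with production on *.50 amounts.
    assert round(2.5) == 2  # Python's built-in round() is already half-to-even
    assert round(3.5) == 4

test_invoice_rounding_matches_payment_provider()
```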
> It's very hard to beat decades of RDBMS research and improvements
13 yoe here, and I don't find this true; in fact I find the opposite. Storing data in binary (to minimise de/serialization overhead) and precomputing indices and relations in a simple K/V store easily and immediately annihilates the best-case performance you could get out of a typical relational DB. I don't work in FAANG, so we don't get millions of reads/writes per second, but there are probably more use-cases like ours than like FAANG's.
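A minimal sketch of that shape, with a plain dict standing in for the K/V store and a hypothetical user record; the point is the fixed-width binary record plus the precomputed secondary key, not the storage engine:

```
import struct

store = {}  # stand-in for Redis/LMDB/RocksDB/etc.
USER_FMT = "<I16sB"  # id: uint32, name: 16 raw bytes (null-padded), age: uint8

def put_user(user_id: int, name: str, age: int) -> None:
    # Fixed-width binary record: no JSON/ORM (de)serialization on the hot path.
    store[f"user:{user_id}"] = struct.pack(USER_FMT, user_id, name.encode(), age)
    # Precomputed "index": attribute value -> primary key, so a lookup by name
    # is a single get instead of a scan or a SQL query.
    store[f"user_by_name:{name}"] = user_id

def get_user_by_name(name: str):
    user_id = store[f"user_by_name:{name}"]
    uid, raw_name, age = struct.unpack(USER_FMT, store[f"user:{user_id}"])
    return uid, raw_name.rstrip(b"\0").decode(), age

put_user(1, "ada", 36)
print(get_user_by_name("ada"))  # (1, 'ada', 36)
```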
It comes from the FP world. It's one of those things like monads that seem obvious when explained but sound opaque if you're not in on it.
Basically it's saying to create a formal set of types and formal rules about how they interact. If you search for permutations of phrases like "functional programming" "algebra" "data types" you'll turn up some hits. For instance [1]
Algebraic structure. It's a fancy way of saying that a system follows certain patterns or rules.
As an easy example, you're probably quite familiar with the algebra of addition over integers. It has rules like associativity and commutativity that describe general transformations guaranteed to always behave predictably. Contrast this with subtraction over integers, which is neither associative nor commutative. Programs in general are neither associative nor commutative either, so sadly no swapping terms or adding parentheses willy-nilly.
A slightly more advanced example that you're probably still familiar with is mapping over functors. A functor is just a thing that contains other things, and that includes collections you already know, like lists and trees. The act of taking stuff out of a container, modifying it, and returning the results to their original position turns out to be an incredibly common and useful pattern!
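A small Python sketch of that shared pattern (the function names here are illustrative): mapping applies a function to the contents while leaving the "shape" of the container alone, whether the container is a list or an optional value.

```
from typing import Callable, Optional, TypeVar

A = TypeVar("A")
B = TypeVar("B")

def map_list(f: Callable[[A], B], xs: list[A]) -> list[B]:
    # Same length, same order; only the elements change.
    return [f(x) for x in xs]

def map_optional(f: Callable[[A], B], x: Optional[A]) -> Optional[B]:
    # "Absent" stays absent; only a present value is transformed.
    return None if x is None else f(x)

print(map_list(len, ["foo", "quux"]))  # [3, 4]
print(map_optional(len, "hello"))      # 5
print(map_optional(len, None))         # None
```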
Only guessing but my first thought was about how you have addition and multiplication and they both work for different types of numbers. But not in the generics/C++ template sense, more like working on Sets and Rings and other algebraic structures in the mathematical sense.
I've been at this a couple of decades, focusing on SaaS, rising up with and through a unicorn, and working at over half a dozen real engineering orgs, and I agree with just about everything in the article.
My two differing thoughts:
1) Gradual, dependently typed languages are the future: no, just go with something typed. We are trying to add types to our Python code base right now. It sucks.
2) People who stress over code style...: I love go fmt. Use tooling and linters. Be consistent in the code base. Inconsistent styles can lead to misreading code.
Interesting. I've been in software development for a little more than 10 years, and I more or less agree with just about every point I've read; it will be interesting to see if that changes over the next 10.
> Code coverage has absolutely nothing to do with code quality (in many cases, it's inversely proportional)
My theory from working at a company that demanded high unit test coverage is that it encourages coding patterns that are easier to create coverage for. Not necessarily easier to genuinely test, but easier to get a high coverage metric.
For example, try/catch blocks are bad if you want coverage. Now you have to do things like inject exceptions. People will try to design things so that's not required, with odd side effects.
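A hypothetical sketch of that pressure: the except branch only counts as covered if some test injects the failure, so a fake dependency ends up existing mostly to feed the metric.

```
def charge(client, amount):
    try:
        return client.charge(amount)
    except TimeoutError:
        return None  # stays "red" on the coverage report unless a test forces it

class AlwaysTimesOut:
    def charge(self, amount):
        raise TimeoutError

def test_charge_times_out():
    # Injected failure whose main job is to light up the except branch.
    assert charge(AlwaysTimesOut(), 10) is None

test_charge_times_out()
```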
After more than 25 years in the industry, and 10 years of retirement, I agree with pretty much everything on this list. A few things that struck me:
- Don't get too fundamentalist about OO or functional programming. Each has its place.
- "Spend time hunting for an algebra" is somewhat obscure, but truly excellent advice, and I have spend a lot of my career doing exactly that. That has never been time wasted. This connects to a couple of other points. 1) Once you have an algebra, you have the basis of a good API for functional programming. (This point is not explicit in the list.) 2) The things manipulated by your algebra are also the basis of a good OO design.
- "Elegance is not a real metric." It is absolutely dangerous as a metric, because it is a subjective goal we all aim for, and can therefore be used to justify anything, no matter how dumb. That said, elegance is something to strive for in your day-to-day programming life. You know it when you see it in someone else's work, and you appreciate it. You see a small set of principles, implemented in one place, that can then combine in many ways to do many useful things. Hmm, sounds a lot like an algebra!
As I understood it, the “algebra” is the domain-specific language/abstraction that you build — and if you’re lucky, discover — that encodes something fundamental about the domain you’re working in. Once you’ve found that, further coding is easier than you’d expect.
I’ve had the pleasure of finding a few of these over my career.
I think it goes beyond that. An algebra has operators that take objects of some type as input and yield objects of the same type as output. E.g. arithmetic (numbers in, numbers out), and relational algebra (tables in, tables out).
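As a toy illustration of "same type in, same type out" (my example, not the article's): a tiny algebra of row filters that is closed under `&` and `|`, so any combination is still a filter you can keep composing.

```
from dataclasses import dataclass
from typing import Callable

@dataclass(frozen=True)
class Filter:
    test: Callable[[dict], bool]

    def __and__(self, other: "Filter") -> "Filter":
        return Filter(lambda row: self.test(row) and other.test(row))

    def __or__(self, other: "Filter") -> "Filter":
        return Filter(lambda row: self.test(row) or other.test(row))

adults = Filter(lambda r: r["age"] >= 18)
admins = Filter(lambda r: r["role"] == "admin")

rows = [{"age": 30, "role": "admin"}, {"age": 15, "role": "admin"}]
combined = adults & admins  # Filter in, Filter out: closed under the operators
print([r for r in rows if combined.test(r)])  # [{'age': 30, 'role': 'admin'}]
```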
> Code coverage has absolutely nothing to do with code quality (in many cases, it's inversely proportional)
I think this one keeps on being viewed as a % metric. You have 50% code coverage, 75%, 99.9%, 100%, etc. In that sense it is useless. Where I think code coverage has an enormous value is in showing what parts of your logic are/are not covered by test code. Being able to eyeball that and see where key parts of logic are not tested is extremely helpful and tends to get lost in these discussions.
> Being able to eyeball that and see where key parts of logic are not tested is extremely helpful and tends to get lost in these discussions.
I've been saying for years code coverage broadly is fine, we're just presenting the metric in a screwed up way that makes people focus on the wrong thing. It should be absolute value and flipped, presented similar to linting: instead of percent of covered lines, it should be raw count of uncovered lines.
Not only does this help you focus on "this code has no tests" instead of "number go up", it removes the painful edge case where refactoring covered code to be shorter makes the percent-based metric worse.
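A rough sketch of that flipped presentation, assuming the JSON report that coverage.py's `coverage json` command produces (a `files` map with `missing_lines` per file); other tools would need a different parser.

```
import json

with open("coverage.json") as f:
    report = json.load(f)

uncovered = {path: len(data["missing_lines"])
             for path, data in report["files"].items()}

# "Number go down" view: raw count of untested lines, worst files first.
for path, count in sorted(uncovered.items(), key=lambda kv: -kv[1]):
    print(f"{count:5d} uncovered lines  {path}")
print(f"total: {sum(uncovered.values())} uncovered lines")
```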
Strong opinions, but merely stating them doesn't tell me much.
Constructive criticism: give the reasoning behind those opinions. I really do believe they are based on facts and experience, so share them! Saying only "after 10 years in the industry" can look like the argumentum ab auctoritate fallacy (https://en.wikipedia.org/wiki/Argument_from_authority).
The presentation is a bit confusing, since it's written comparatively (opinions changed/added/retained) but there isn't a reference list. (Maybe they keep such a list in their personal notes.) Compared to the older post (at 6y), none have been flipped, and some were actually strengthened, e.g. typing in mixed-experience teams going from better to essential.
I agree, but I've been on teams where this slows down PR review or where arguments break out. Delegating to the linter and formatting on save can get the team past this.
I cannot agree with this. I've constantly run into issues where you can insert data with an attribute name such as "status", and then when you query it, you're told you cannot query with a reserved keyword.
There are a myriad of other issues I've found, and when asking around in chat groups and forums, people universally dislike DynamoDB.
> People who stress over code style, linting rules, or other minutia remain insane weirdos to me. Focus on more important things.
Considering anyone who is any good is gonna automate these things… and they are very important for long term maintenance and readability and predictability…
… probably the author is terrible. Sad to waste a decade.
> ORMs are the devil in all languages and all implementations. Just write the damn SQL
What are the main issues people run into with ORMs? I've used the Django ORM for years and written some relatively large applications with it without many problems. Complex queries and aggregations can result in quite hairy code, though.
It joins in code, not in SQL. We realized this way too late because we assumed the ORM was structuring queries optimally. It wasn't.
This, along with the fact that SQL is already a definitive language, made us realize that ORMs are stupid and utterly useless. I'm talking about the ones that try to pretend they're code, and not a query.
There are multiple ways you can design an ORM, though, and one of them is to let the user fully manage how they want the query to run, at which point you're pretty much structuring the query already. Why not go the extra mile and write the SQL query?
I get that ORMs are fine when you don't want to deal with writing queries (e.g. school projects), but for real-world apps just take the extra minute...
I've seen and poked at a lot of the ORM-hating on here, and whenever I can get people to give specific examples instead of generic theory-level stuff, it's either not a problem in Django or Django has a fix you just need to learn about and use. The fixes have sometimes been there for over a decade. It seems to be leaps and bounds ahead of every other ORM out there.
The main problem is that if you are not very careful it leaks its internals (i.e. the relational model) into everything it touches. That's what the N+1 query problem is. Business logic shouldn't have to know that certain attributes will cause database queries. By carefully writing managers, and with a rule to never use querysets anywhere else, you can avoid it, but that requires everyone to understand this, and the Django docs and examples everywhere will just randomly drop a queryset into a view or do a `prefetch_related` because they know that some higher-level function is going to need access to that attribute (i.e. coupling code together).
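A minimal Django-flavoured sketch of that N+1 shape, using hypothetical Author/Book models (this assumes a configured Django project; it's here to show the shape, not to be copy-pasted):

```
# models.py (hypothetical)
from django.db import models

class Author(models.Model):
    name = models.CharField(max_length=100)

class Book(models.Model):
    author = models.ForeignKey(Author, on_delete=models.CASCADE)
    title = models.CharField(max_length=200)

# Somewhere in a view or "business logic":
# 1 query for the authors + 1 more per author when book_set is evaluated (N+1).
for author in Author.objects.all():
    print(author.name, len(author.book_set.all()))

# 2 queries total, but only because the caller knew to ask for the prefetch,
# which is exactly the relational model leaking into the calling code.
for author in Author.objects.prefetch_related("book_set"):
    print(author.name, len(author.book_set.all()))
```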
Ultimately Django just doesn't fully do the object-relational mapping. It maps single rows, but that's it. So it doesn't really support objects that contain lists or sets etc. Things like SQLAlchemy can actually map data from a relational database into plain old objects. Those objects can be instantiated (e.g. in tests) completely independently of the database. Notice how in Django you can't test anything without a database being present? Why do I need to store an object in a db just to test some method on an entity?
>That's what the N+1 query problem is. Business logic shouldn't have to know that certain attributes will cause database queries.
I really don't get these arguments because in some form or another, ALL abstractions are leaky.
Example:
A novice developer might write a @OneToMany in Hibernate without knowing the internals of the abstraction, causing N+1 problems. Two paths forward:
1. blame the abstraction
2. Learn (some) internals of the abstraction in order to use it correctly: don't do eager fetching, use join fetches, ... (there's also tooling that comes to mind, like Hypersistence Optimizer, Digma, or JPA Buddy)
And by that same logic, would you berate somebody writing 'plain SQL' which, when inspected with the query plan, turns out to be a very poorly performing query?
Again, two options:
1. blame the abstraction
2. Learn (some) internals of the abstraction: analyze the query plan, perhaps write some indexes,...
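For the plain-SQL case, option 2 looks roughly like this sketch (SQLite via Python's sqlite3 with a hypothetical table; other databases use EXPLAIN / EXPLAIN ANALYZE instead):

```
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE orders (id INTEGER PRIMARY KEY, customer_id INTEGER, total REAL)")
con.executemany("INSERT INTO orders (customer_id, total) VALUES (?, ?)",
                [(i % 50, float(i)) for i in range(1000)])

query = "SELECT total FROM orders WHERE customer_id = ?"

# Before: the plan typically reports a full table scan.
print(con.execute("EXPLAIN QUERY PLAN " + query, (7,)).fetchall())

# "Write some indexes", then check the plan again: now an index search.
con.execute("CREATE INDEX idx_orders_customer ON orders (customer_id)")
print(con.execute("EXPLAIN QUERY PLAN " + query, (7,)).fetchall())
```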
The default test runner creates the test database automatically yes, but you can create Django model objects without touching the database, just use the class like a constructor and don't save it: Person(name="Foo", age=30)
OK, now try that with my ridiculously simple TodoList example, e.g. `TodoList(items=["item", "item2"])`. You can't do it! Nor can you construct an empty list then add items to it etc.
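For contrast, a sketch of what the parent means by SQLAlchemy actually doing the mapping, using 2.0-style declarative models named after the hypothetical TodoList example; no engine or session is involved:

```
from sqlalchemy import ForeignKey, String
from sqlalchemy.orm import DeclarativeBase, Mapped, mapped_column, relationship

class Base(DeclarativeBase):
    pass

class TodoItem(Base):
    __tablename__ = "todo_item"
    id: Mapped[int] = mapped_column(primary_key=True)
    text: Mapped[str] = mapped_column(String(200))
    list_id: Mapped[int] = mapped_column(ForeignKey("todo_list.id"))

class TodoList(Base):
    __tablename__ = "todo_list"
    id: Mapped[int] = mapped_column(primary_key=True)
    items: Mapped[list[TodoItem]] = relationship()

# Plain in-memory objects, usable in a unit test with no database at all.
todo = TodoList(items=[TodoItem(text="item"), TodoItem(text="item2")])
assert [i.text for i in todo.items] == ["item", "item2"]
```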
Well yeah, but you could have just used a document store at that point. If you start using a JSON field you open yourself up to the problems of document databases (namely, you can't really do many-to-many). In any case, Postgres being awesome is something you get without any ORM!
It feels like the author is in the middle of the bell curve meme[0], especially with regards to "most programming should be done long before a single line of code is written".
> ORMs are the devil in all languages and all implementations. Just write the damn SQL
Agreed, or just use a key/val JSON document database and its get/set APIs for basic operations. All the main ones also provide indexing and SQL APIs if that's needed.
> ORMs are the devil in all languages and all implementations. Just write the damn SQL
I was agreeing with most opinions and then I saw this one. Having used Ecto from the elixir ecosystem completely changed my mind on this. It's an amazing piece of software.
> 3%, maybe 95.2%, of project managers, could disappear tomorrow to either no effect or a net gain in efficiency. (this estimate is up from 4 years ago)
This is true of 100% of the immediate line managers that I have had in my career.
> Frontend development is a nightmare world of Kafkaesque awfulness I no longer enjoy
My feeling is that a lot of negativity towards the frontend stems from assuming that the entire field is like React and its community. It's really not like that.
The much bigger problem is that you not only have to deal with the technical challenges, but also with the visual design challenges, where non-technical people have all kinds of opinions you have to accommodate. Things get really nasty once you have to implement features that emulate non-web functionality like right-click menus, and the UI becomes an inconsistent mess.
Visual design challenges are the ones I hear most frontend developers complain about. I explained the queen's duck (https://bwiggs.com/notebook/queens-duck/) to one of them and he started using it several times in his projects. If you look at his git repo, there are commits called "queen duck" right before each demo, so he could get feedback about the "mistake" he made, revert the commit, and move on with his life.
I think frontend (or maybe, slightly expanded, app development) is a greatly underappreciated skill. It's easy to throw a UI together, but doing it in a way that is obvious to your users, can be maintained, and can move at the speed the business needs is a tough ask.
Anyone have more context on this? I've never thought of a repl as a design tool. Does he mean loading your app in a repl, calling functions manually and then manually swapping them out in real time?
How come a fully typed ORM is the devil, if we agree we want a typesafe codebase for our mixed experience dev team?
I have had positive experiences with Prisma. It just works.
I personally draw a distinction between micro-ORMs and ORMs. A micro-orm will simply take a strongly typed flat struct and map it into the set of parameters for a query, and likewise map a single row out to another flat struct (or enumerate/iterate while doing so). They may even include insert, update, and delete helpers that deal with a single table. This is what 90% of Prisma does and is the Good Parts. Migration generation is also good (but can be dangerous, e.g. deleting columns).
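A rough sketch of that micro-ORM slice (hypothetical `users` table, with sqlite3 and a dataclass standing in for Prisma's generated types): flat struct in, query parameters out; row in, flat struct out, and nothing else.

```
import sqlite3
from dataclasses import dataclass, astuple, fields

@dataclass
class User:
    id: int
    email: str

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE users (id INTEGER PRIMARY KEY, email TEXT)")

def insert_user(user: User) -> None:
    # Column names and placeholders come straight from the dataclass fields.
    cols = ", ".join(f.name for f in fields(user))
    marks = ", ".join("?" for _ in fields(user))
    con.execute(f"INSERT INTO users ({cols}) VALUES ({marks})", astuple(user))

def get_user(user_id: int) -> User:
    row = con.execute("SELECT id, email FROM users WHERE id = ?", (user_id,)).fetchone()
    return User(*row)

insert_user(User(1, "a@example.com"))
print(get_user(1))  # User(id=1, email='a@example.com')
```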
I'll call out query generation separately, as it is a lesser evil (in my opinion). This falls under a larger peeve of mine, which is using data (JSON, YAML, or this[1]) as a programming language. Using data as a programming language sucks. Tooling (compilers, LSP, etc.) is typically absent, meaning that mistakes very much become a runtime issue (not really a problem for Prisma). The deeper problem is that you'll run into limitations and will either have to drop down to SQL (tricky given that you've rarely used it, thanks to using ORMs to generate all your queries), or kludge/hack things up in order to remain in ORM land. There are also lines of code and readability to contend with; here's the second Prisma example converted to SQL (the first is on my shit list for reasons further down):
SELECT u.email FROM users u
WHERE (u.email LIKE '%prisma.io' OR u.email LIKE '%gmail.com')
AND u.email NOT LIKE '%hotmail.com'
Just write the fucking SQL.
The objectively bad parts are one or more of the following:
* Change tracking: magically being able to update a value returned by the ORM and calling `save` to write it back to the DB.
* Object graphs: magically accessing related objects in memory, e.g. `order.orderlines` or `order.address.city.state`.
Relational databases (SQL) are not graph databases (in-memory object hierarchies). They are orthogonal concepts. Just throwing whatever you have as your classes into your database is neglecting to think about how your data is stored. This will come back to haunt you. You might claim that you can do the careful design using classes, and I would believe you; however, those juniors you mentioned aren't going to have the know-how (because they've been shielded from SQL by ORMs their entire career, and so don't understand how to make good databases).
Case in-point, back to that Prisma example shown earlier. Any ideas what's actually wrong with that query, besides the pointless hotmail check? Both the original and my conversion have the same serious issue that stems from not designing the database.
On Serverless since 2018, and we still love it. One thing it has helped us with is deploying and managing customisations for some clients without affecting the wider product.
1) Nix. I finally came around after one too many bricked Linux installs, and learning Docker and being kind of unhappy with it. And while I still haven't completely mastered it, you can learn enough in a reasonable amount of time to maintain a Linux install.
> Blind devotion to functional is dumb.
Except when it's not "blind" and informed by hard experience (years of OOP). Which leads me to...
2) Functional programming/immutable data. For the vast majority of use-cases, these just lead to better code, fewer bugs, and less LOC needed for a given functionality. (I just wish Elixir had the option to compile to a single binary. Roc-lang looks interesting, in that space, if you aren't into Rust.)
3) Typing. I'm coming around to it, and to a general principle of "happy-path strictness". All to achieve determinism.
> Java is a great language because it's boring
No. The people who disparage software devs who have tool preferences are a special bunch and not really "software devs" (with apologies to the No True Scotsman fallacy). If everyone was supposed to be "fine" with Java, then no new languages need be developed!
Elegance is not a metric, right. But the primary drive behind the discovery of non-Euclidean geometries was that the set of axioms did not look elegant.
That you must design the system before you type it down as code. Once you make the wrong assumptions or the wrong abstractions, no amount of skillful coding can save you. You risk solving the wrong problem perfectly.
That makes sense, but if this is the correct interpretation, then I find the wording a little weird. I've always considered "programming" and "writing code" to be the same thing.
Yeah I think programming is both ”thinking about what code to write” as well as actually typing it in. One can’t meaningfully separate any steps like ”design” or ”architecture” from programming.
Been using AWS Lambda for 10 years, since it was first released, so there really is no longer a "time horizon". I have not had any problems that would cause any regret, quite the opposite in fact. I also don't use the "Serverless framework"; I rolled my own AWS Lambda toolchain long before the "Serverless framework" was ever a thing. So I'm not sure if the author is referring to "functions as a service" generally or specifically to the "Serverless framework".
My team did recently try to set up the "Serverless framework" for a new project, and it was quickly apparent that it wasn't going to work for us. It was kind of messy, honestly. I really wanted it to work, but requiring an account on the Serverless framework website just to get Lambdas deployed was the wrong way forward for us. We just ended up using my own toolchain, which is simple and easy and does everything we need it to do, which is build the Lambda function inside AWS Lambda, and deploy the function with some basic configs.
C# is becoming one as well. I recommend Zoran Horvat on YouTube.
And most of the well written react projects I've worked on tend to be overwhelmingly functional in style. Hooks just make sense when thinking about the lifecycle of a UI. Shared state and imperative code with react just feels wrong.
> Typed languages are essential on teams with mixed experience levels
Essential, meaning 'cannot exist without', it's not. I've seen this work in a number of places.
> Blind devotion to functional is dumb.
Managing/limiting state is always a worthwhile pursuit. I'd think he would agree since he seems to value simplicity
> People who stress over code style, linting rules, or other minutia remain insane weirdos to me. Focus on more important things.
That's like saying don't check whether your car's tires still have any tread left, the engine is more important. Until it isn't. Streamlining the smaller details allows the brain to focus on more important stuff, but the details matter, and their value increases exponentially with the size of the project.
Regarding code style etc., I get where he's coming from, as plenty of people have made entire careers out of pursuing the perfect linter configuration. They're a net negative in every project.
"Objects are extremely good at what they're good at. Blind devotion to functional is dumb."
sure, but I think the status quo is to way overuse objects. my preferred style is to bias heavily towards value objects, and to never ever use things like inheritance for control flow
"ORMs are the devil in all languages and all implementations." hard agree lol
"Objects are extremely good at what they're good at. Blind devotion to functional is dumb.
"
Guess this hits home.
But blind anything is bad.
I spent a decade blinded by objects being everything (the original Gang of Four book). Then a decade where everything was functions.
Objects are also used in functional programming. This feels like a reaction to people being functional evangelists.
There's a reason most modern languages are completely blurring the lines and allowing you to think with multiple paradigms.
I'd be more upset with institutions lacking in functional programming coursework. OOP is dominant, I can't see why people get so irritated by a few annoying functional evangelists. And while annoying, I don't think the motivation there is always "Blind devotion to functional"
Mostly good advice, but he's yet another person who seems to think ORMs are primarily about SQL generation. They're not; they're about serialising/deserialising between objects and db columns!
Actually I recently discovered it is, it's just not what people think. Elegance is a function of the probability measure of how much someone likes your idea/solution/thing. There is a curve upon which people will think it is more or less elegant.
The problem is you can't actually see elegance from your own perspective. You might think it's elegant and it later turns out it's not to anyone else. And vice versa. It's like quantum physics: you have to observe it externally and your guess won't always be right. That doesn't mean it's not a metric. It just depends on multiple probabilistic measures.
It's important to gauge the elegance of things because in general, engineers will fight you harder the less elegant your idea is. It might be a better idea, despite its inelegance, but if you're working on a team, the team's acceptance of the idea is more important. (Unless you have total authority, in which case you can do whatever you want)
> Most projects (even inside of AWS!) don't need to "scale" and are damaged by pretending so
Actually it's the opposite: literally all projects need to scale. The problem, again, is perspective: the scale of the scaling varies.
If you have a physical business where you move goods around, you have to choose how to do that. You could buy a bike, or a car, or a truck, container ship, etc to move the goods. You might need just one or you might need many. Your business might increase or stay the same.
If you buy a container ship, probably you will never need to add another container ship again, as very few do that much business. But if you buy a bicycle, probably you will need to at least add more bicycles, if not change completely to a car or truck, or multiple.
If you use the smallest measure of compute on AWS, it's pretty much a guarantee it won't be sufficient for your entire workload. If you buy the biggest measure of compute, it's almost guaranteed it will be more than big enough. The scale of the scale matters.
Scale is therefore, again, a probability measure of whether your workload will match its container. If you could perfectly predict your workload and the capability of the container then this wouldn't be probabilistic.
But not only is your workload usually variable, so is the container. Bicycles/cars/trucks/container ships all can fail, so the scale of your workload may reach 0 at some point. The ability to scale dynamically is important in order to eventually deal with failure. Otherwise if your car died you could never get or rent another one, you'd have to sit there becoming an auto mechanic to get your business working again.
This is more obvious when you self-host. People take it for granted on AWS, where they don't realize that literally everything in AWS is scalable, and so tell themselves scalability doesn't matter... until it does. If we didn't have a way to scale bicycles/cars/trucks etc (by temporarily or permanently getting another one), the world would be much harder to do anything in.
From a use-case perspective, Haskell monads let you restrict where various effects happen. Some code can be allowed to do local mutation, other code can do shared-mutation suitable for (ACID-like) transactions. In Java you can do anything, everywhere (except throw checked exceptions from lambdas for some reason) so the monadic wrappers would give you about as much guarantee as a comment, and be less clear.
From a representational perspective, you need a generic of a generic, something like M<A> where M itself is a type parameter, and Java can't really express that (no higher-kinded types). Or if you go for interfaces rather than type variables, you could have a Monad<A>, but you don't know which one you have (List? Future? Parser? Either?). It's like representing the above as literal `Object`s: you'd be constantly casting.
Because no matter how much Java wants to be like Scala, it's very painful to work with (verbose). There is no good Option type either. But I believe there are third-party libraries out there that do a better job, if you really can't use Scala for some reason.
My issue with Java is not that it's boring (which it is not), but that there is so much that needs to be done to get a simple "Hello World" program that depends on an external library to run. It feels like you first have to build a castle just to put a bed in a room.
yes. main() is the room and print("hello world") the bed.
With the castle I'm referring to the whole myriad of things that must be done to be able to compile this, with, as I said, at least one external library dropped in, possibly via Gradle.