
When applying this article to code, I don't think it has anything to do with spaghetti being good. It could be viewed instead as an argument against "architecture astronautics", the "15 layers of abstraction to print 'hello world'" school of software design.



Exactly. I remember being called in to consult on an accounting system for microfinance; the target audience was small to medium-sized organizations. The code had an absurd amount of layering: one path I traced copied the data 8 times from fetch to render, each time into a different set of objects that had basically the same fields but were, supposedly, conceptually different.
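For anyone who hasn't seen this pattern in the wild, it looked roughly like the sketch below (hypothetical TypeScript, invented names, nothing from the actual codebase, and trimmed to 3 copies instead of 8):

    // Several near-identical shapes, each "layer" copying the same fields
    // into its own type.
    interface LoanRow       { id: string; principal: number; rate: number } // DB driver
    interface LoanEntity    { id: string; principal: number; rate: number } // "domain layer"
    interface LoanDto       { id: string; principal: number; rate: number } // "service layer"
    interface LoanViewModel { id: string; principal: number; rate: number } // "presentation layer"

    const toEntity    = (r: LoanRow):    LoanEntity    => ({ id: r.id, principal: r.principal, rate: r.rate });
    const toDto       = (e: LoanEntity): LoanDto       => ({ id: e.id, principal: e.principal, rate: e.rate });
    const toViewModel = (d: LoanDto):    LoanViewModel => ({ id: d.id, principal: d.principal, rate: d.rate });

    // fetch -> render: the same fields, copied again at every boundary.
    function render(row: LoanRow): LoanViewModel {
      return toViewModel(toDto(toEntity(row)));
    }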

When I asked about this, I was told it was "best practice" and that if they ever needed to scale, there were now many places where they could separate things. I pointed out that for the target audience they probably wouldn't need to scale, and that if they ever did, it would be because they were doing 7x the necessary work.

The code was certainly "tidy" from the perspective of the guy who got paid a lot of money to produce architecture diagrams. But it was a nightmare from the perspective of an individual programmer trying to add a feature. They would have been way better off without a lot of quasi-religious design theory slowing them down.


I used to write code like that - architecting the whole application up front, creating layers upon layers of abstractions. Experience taught me to do the reverse now: start with the simplest version that works, keep going until writing code gets tough, and then switch to the whiteboard and use what I learned along the way to design a proper solution (and if any part of the design process starts getting tough, I switch back to writing the simplest thing that works). Rinse, repeat. It's not about rushing to release a barely working pile of spaghetti, but about recognizing that programming is an exploratory activity, and you don't know enough to do a complete design up front.

And really, it turns out most of the time that not only do you not need the complex abstractions designed early on, you actually need a different set entirely. That's why keeping the design process continuously grounded in reality is important.

My solution for not producing spaghetti code with this method? I don't release the first version that works. I don't mark the ticket as "done", and I don't even push it out of my local repo. Instead, I clean it, or even straight up rewrite it, until it reaches a sleek and acceptably elegant state. It's the responsibility of a programmer to decide when the code is ready, and it doesn't have to be at the first moment it passes all the tests.


Maybe the most astonishing thing I've learned in my career is that it's far better to design your system after the code is written and working!

Because that's when you've actually come to understand the problem, and you already have working code to move around.


This, absolutely. The time to design properly isn't before you write code. It's when the requirements start to become fixed rather than fluid.


While I do wholeheartedly agree with everything you wrote, I still feel the need to point out a limitation I don't see (or feel) solved: how do you scale this approach to n>1 developers? In a large brownfield you can assign quasi-solo projects in different corners, isolated by sufficiently wide buffers of legacy code you won't touch, but a multi-developer greenfield needs architectural structure simply to give the code written in parallel something to integrate with.


In one of my previous jobs we faced this issue, and we approached it like this: we would start with a design meeting for a simple solution, then divide it up into pieces that could be worked on independently, with agreed-on interfaces. Then each of us would work independently on their piece, and we'd revisit the design questions as issues cropped up. Is this the best approach? I don't know. But I don't think it's bad.
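Concretely, the "agreed-on interface" was often nothing fancier than something like this (a made-up TypeScript example, names invented for illustration):

    // One person implements the real gateway; everyone else codes against
    // the interface (or a stub of it) until the pieces are integrated.
    type AuthResult = { authId: string; approved: boolean };

    interface PaymentGateway {
      authorize(accountId: string, amountCents: number): Promise<AuthResult>;
      capture(authId: string): Promise<void>;
    }

    // Throwaway stub so dependent work isn't blocked on the real thing.
    class FakeGateway implements PaymentGateway {
      async authorize(accountId: string, amountCents: number): Promise<AuthResult> {
        return { authId: "fake-" + accountId, approved: amountCents <= 100_00 };
      }
      async capture(authId: string): Promise<void> { /* no-op in the stub */ }
    }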


Sounds similar to what Joe Armstrong reportedly did all the time with his programs: write an implementation, identify the faults, rewrite it entirely, identify the rewrite's faults, and rinse/repeat until it was good enough.

On reflection, I've had a similar process too, though now I'm at least consciously aware of it as an actual development strategy rather than just believing I'm a shitty programmer who's forced to rewrite things because he can't fix his prior horribly broken implementations, lol.


> It could be viewed instead as an argument against "architecture astronautics", the "15 layers of abstraction to print 'hello world'" school of software design.

It's a long way from printf to framebuffer pixels, and there are good reasons for every layer in between, too. (I like C compilers, format strings, and buffered I/O; I like file descriptor semantics; I like having an operating system that provides terminal emulation and framebuffer text rendering!)

So I'm fine with 15 layers of abstraction to print hello world.


Those are 15 layers in the system; I was talking about 15 extra layers in your own code, introduced up front, before any of the meat has been written.

I'm not saying abstraction is bad - just that it's constraining, and prematurely introducing a whole ladder of constraints is going to grind code evolution to a halt.



