Don’t Let Architecture Astronauts Scare You (2001) (joelonsoftware.com)
219 points by tekkertje on Aug 30, 2021 | 148 comments



"When great thinkers ..."

One should give credit to Joel for correctly identifying the issue: great thinkers excel in abstract thought. Many of the comments here, however, seem to misunderstand his point about the propensity of creative minds to continue the refinement process and "not knowing when to stop". One commenter (codeulike) apparently thinks simply being "bored" is what drives abstract thinkers. Let me assure you that is not the case, nor is it an inability to code. Many architecture astronauts started out as whiz coders.

Now let's review the gifts given by said astronauts - I'll limit to 2 examples but there are more:

- LISP is the brainchild of a software astronaut.

- UNIX ("everything is a file") is the product of software space exploration.

My advice to serious young software engineers is to not accept mediocrity and imprecise thinking as acceptable standards for their chosen vocation. This cult of celebrating mediocrity in software design is a transient phase (20 years to date), mostly a side effect of the facile soapboxes introduced by the internet and the blogosphere. When the dust settles, deep thinking and an ability to conceive powerful abstractions will once again take center stage.

Software, after all, is all about abstraction.


I don't interpret Joel as saying 'never do architecture'. I interpret him as saying something more like:

- If your goal is to create great new architecture, that's fine, go for it.

- But if your goal is to ship a product to customers, on a tight schedule – say, you are a startup trying to get an MVP out the door before the end of your runway – then you should probably not be doing more than a fairly minimal amount of architecture, and you should beware of the temptation to do so.

Note that both your examples were created by people employed by large, wealthy organizations (MIT and AT&T respectively) that could afford to give them plenty of time to explore interesting new ideas. That's a great situation to be in, the world would be a better place if we put more people in that situation, and if you are in it, then you should take advantage of it. But if you are unfortunately not in it, then you should be aware of that fact.


Isn't UNIX a clear example of people not engaging in architecture "astronaut" behavior? It was designed to be as simple as possible - in stark contrast to Multics.


I would say Unix is a system designed to be simple and powerful. It takes a lot of thinking to achieve a system that is both simple AND powerful. Things like XML and Multics, by contrast, are powerful, they can do everything, but they are not simple. In the end that limits their power: they are hard to learn, and things built with them are difficult to understand, precisely because they are not simple. And when something is hard to understand it is also error-prone, and such a system is fragile.

It takes "astronauts" who think hard about the architecture to come up with systems that are simple and powerful at the same time.


>Whereas things like XML and Multics are powerful they can do everything but they are not simple.

the original XML Spec is pretty simple https://www.w3.org/TR/xml/

even if you add in namespaces - still simple https://www.w3.org/TR/xml-names/

The difficulty in XML really came with XML Schemas and the need to weave them into every subsequent spec.


“It was designed” - that’s the key. We all know how hard it is to make things simple. That doesn’t come naturally. Hardest stuff I’ve ever done is to go through complexity to reach what is simple. (Definitely paraphrasing someone else there.)

Right now I have a project of potentially infinite complexity and we need to build really fast. A lot of my focus has been finding the very few pieces of bedrock and fixing on those, allowing the rest to vary as our appetite for complexity grows. Start today with an achievable goal, test hypotheses, don't paint ourselves into a corner. All the team's comments are about how simple it is. That's by...design.


Simplicity does not contradict architecture. For some (me included), it is even the essence of it. Leave out the bells and whistles. This is why JSON is so successful, even though we had XML long ago. Technology should be more of that, and I would consider it superior architecture.


"Everything is a file" seems comparable to "everything is a message".


It's the thing with abstractions: changing your angle yields seemingly equivalent replacement candidates, but only the good ones map properly onto physical hardware and human cognition.

eg: "everything is a file" maps properly both to physical storage, with addressable, streamable content, and to the human mind, as data reachable by classification (file name). Maybe a revolutionary OS design could better that with "everything is a message", but I think you can understand why one might have doubts about that translation from files.
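A rough sketch of why the mapping works (just an illustration, not any particular OS's internals): on a POSIX-style system the same generic read loop serves a regular file and a pipe, because both hide behind a file descriptor.

```python
import os
import tempfile

def read_all(fd, chunk=4096):
    """Drain a descriptor to EOF using only the generic file API,
    with no knowledge of what kind of object sits behind it."""
    buf = b""
    while True:
        data = os.read(fd, chunk)
        if not data:  # empty read signals EOF
            break
        buf += data
    return buf

# A regular file on disk...
f = tempfile.NamedTemporaryFile(delete=False)
f.write(b"hello from disk")
f.close()
fd_file = os.open(f.name, os.O_RDONLY)

# ...and an in-memory pipe answer the exact same call.
r, w = os.pipe()
os.write(w, b"hello from a pipe")
os.close(w)  # closing the write end produces EOF on the read end

print(read_all(fd_file))  # b'hello from disk'
print(read_all(r))        # b'hello from a pipe'
```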


The difference between a file and a message is that a file does not prescribe a format - the format is usually application dependent, but is understood to be plain text.

A "message" tends to have formats associated with it (which is what differentiates it from being just a "file" transferred via a protocol). These formats, like JSON, or XML, or whatever newfangled formats kids these days use, now require architectural astronauting: namely, common field names, or standards, so that different programs will parse them similarly. And then you want schemas for those formats, and automatic parsers generated from those formats, and more and more...
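To make the contrast concrete, here is a minimal sketch (the field names and the `parse_message` helper are made up for illustration): a file is opaque bytes, but a message only works once both sides agree on a mini schema of required fields.

```python
import json

# Hypothetical mini "schema": the common field names two programs must
# agree on before a message means anything - coordination a raw file
# never demanded.
REQUIRED_FIELDS = {"type": str, "timestamp": int, "payload": str}

def parse_message(raw: bytes) -> dict:
    msg = json.loads(raw)
    for field, expected_type in REQUIRED_FIELDS.items():
        if field not in msg:
            raise ValueError(f"missing field: {field}")
        if not isinstance(msg[field], expected_type):
            raise ValueError(f"wrong type for field: {field}")
    return msg

msg = parse_message(b'{"type": "greeting", "timestamp": 1630300000, "payload": "hi"}')
```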


> but is understood to be plain text

It's not. Plenty of Unix tools work with binary files.

> A "message" tends to have formats associated with it

It doesn't. At least, not any more than a "file" tends to have formats associated with it.


There is the whole worse-is-better thing too


I thought UNIX was supposed to be an example of worse-is-better philosophy which is in contrast to architecture astronauting?

But you may have a point with "everything is a file", and I wouldn't say that pattern has been entirely successful outside of actual files. I'm inherently skeptical of any "everything is a..." philosophy. They tend to look good on paper but become ugly when they hit the gnarly real world.


Linux seems pretty powerful and successful though, and it's built on "everything is a file" as a foundation.


> One (codeulike) apparently thinks simply being "bored" is what drives abstract thinkers

This is a pretty huge leap from what the commenter actually said, which was:

> I'll tell you what though, if you have a programming task that you find boring, over-engineering it and over-architecting it can make it _so_ much more enjoyable.

It seems you've misinterpreted many of the comments. People aren't upset about any kind of architectural design, only where it crosses a hard-to-define threshold into "over-architected". This is pretty hard to quantify exactly, but it is entirely likely you, codeulike and the rest would have an identical threshold where you'd say "you've gone too far ..." on some design.


Well, Joel first talks about 'great thinkers' and then about 'smart thinkers'. The latter kind seems to me a failed instance of the former kind. The architecture astronaut is a failed kind of thinker who is now inflicting his failure on all of us by actually never getting anything useful done while masquerading as a great thinker. There actually are great thinkers in history but they did not invent RPC protocols while at the same time thinking this would somehow be a technology revolution.


> One should give credit to Joel for correctly identifying the issue: great thinkers excel in abstract thought. Many of the comments here, however, seem to misunderstand his point about the propensity of creative minds to continue the refinement process and "not knowing when to stop". One (codeulike) apparently thinks simply being "bored" is what drives abstract thinkers. Let me assure you that is not the case, nor is it an inability to code. Many architecture astronauts started out as whiz coders.

I think you are just reading something that's not there.

> My advice to serious young software engineers is to not accept mediocrity and imprecise thinking as acceptable standards for their chosen vocation.

Hmm? What straw man are you attacking?


One problem is, many if not most engineers work in corporate environments where they have to deal with expectations and deadlines. Management may perceive reimagining a system as a risky long-term bet, and it's tough to persuade them of the benefits. In their defense, the hesitation may be justified. I hate crappy hacks, but at the end of the day there's a reason they get piled on so much.

So I guess the question is - what's the best way to scope and evangelize a 'moonshot' project? Or should you just shut your trap and go join a startup?


Well but XML was supposed to be the abstraction to rule them all.

Instead we have ended up with an overly bureaucratic format and process that is mostly used for paper-pushing and vendor lock-in.


Can you explain the vendor lock-in bit with regards to XML?


Vendor/group controlled XML standards. See the standards for EHRs for example: https://www.healthit.gov/isa/sites/isa/files/inline-files/20...


Is it just me or did this not age entirely well? I really appreciate Joel's writings, but in a kind of very specific way, .Net did actually turn out to be the big deal it set out to be, along with Java. Modern spreadsheets are just message-passing machines, and somehow messaging companies are gigantically valuable. Yes, astronauts are expensive, perhaps overcelebrated and often wrong… but somebody has to do it, right!? I for one am glad we're not still emailing Master-Spreadsheet-FINAL(1)(1).xls back and forth to get things done. I rather like millennials like Ryan Dahl who saw the potential for Javascript to be more than a browser scripting language. I may have even been something of an astronaut myself (an "idea man"… my team would jeer), but I've come down to earth and dig my hoe into the soil these days. I think we're better off with some good astronauts out there, at least sparingly.


> .Net did actually turn out to be the big deal it set out to be

.Net and C# have evolved into a great general-purpose framework and language, but the original lofty hype around .Net was about turning everything into SOAP web services for B2B messaging with automatic discovery and blah blah blah, and that never happened. JSON and REST waltzed in and (somewhat) did that organically, without the hype.

XML is a great example of what Joel's talking about. It turned out to be moderately useful but was hobbled by all the over-architecting in its early days.

Joel isn't arguing against progress in general here, he's arguing against over-architecting.


I think some of the technologies he lists are just bad, not over-engineered. Usually "complicated" and "bad" are signs of not doing enough engineering; just shipping what you have and bolting on haphazard extensions as people think of them. XML was that, and so were XML-RPC and SOAP.

You can see this happening with the newer technologies. JSON has a lot of quirks, like not having binary or integer types. So people have bolted those on. JSON doesn't have schema definitions (which are nice if you, say, want to generate code to talk to an API), so we're bolting those on (JSON Schema, OpenAPI, etc.) Admittedly, it's all a lot easier to understand this time around.
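The binary case in particular gets bolted on via out-of-band conventions like base64-encoded fields. A tiny sketch (the field name `data_b64` is an illustrative convention, not a standard):

```python
import base64
import json

# JSON has no binary type, so binary payloads ride along as base64 text -
# a convention both sides must agree on out of band.
blob = bytes(range(8))
wire = json.dumps({"name": "logo.png",
                   "data_b64": base64.b64encode(blob).decode("ascii")})

# The receiver must know which fields are "really" binary and decode them.
received = json.loads(wire)
decoded = base64.b64decode(received["data_b64"])
assert decoded == blob
```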

To some extent, I'm not sure if we as a field learn from the past, or just reinvent it every 15 years and do a little better accidentally. (Remember the PC revolution, and how you wouldn't have to connect to a big mainframe to do computing? That revolution died and now we carry around $1500 dumb terminals in our pockets, while all the software runs on a mainframe!)


And Java was supposed to run on toasters.

Which I suppose it sort of does, but that seems like much less of a good thing to me now than it did in the 90's.


Well these days your toaster has a gigahertz processor


That should explain some of the CPU shortages we are now experiencing.


Now I wonder if a 2.4GHz processor would work inside a microwave oven panel.


product idea: toaster and bitcoin miner all in one


Breakfast pays for itself.


> JSON and Rest waltzed in and (somewhat) did that organically without the hype.

This is a perfect description of how it happened, and how practicality trumped the bombast of the people pushing SOAP and what later became the terrible framework that is WCF (whatever happened to Don Box? He was all over the place for a while).

BizTalk also lands nicely in the 'conceived and designed by astronauts' camp.


He isn't talking about all architects, architecture astronauts are specifically the kind of architect who no longer writes code. The guy who wrote node.js clearly still coded when he wrote it, and so on, so your examples aren't architecture astronauts.

Edit: And if we take your examples, the Architecture Astronaut would be the person who says "We should use node.js in our backend, it will solve all our problems!", not the person who wrote node.js.


Fair. The most (semi-recent) controversial astronaut I can think of right now might be Martin Fowler and his work on event sourcing + CQRS. I don’t know if he writes code or not, but I’ve heard a lot of angst from the people who have adopted those patterns. I made a small system using those ideas built on AWS Event Bus and it generally worked quite well, but I can see where making that pattern the law in a larger setting could be excruciating.


That's nothing. If he'd ever published that book of his on domain specific languages, he'd have done billions in damage.


Could you elaborate on your success with AWS Event Bus / event sourcing + CQRS?

As well as the excruciating angst you saw from others or anticipated at larger scale?


The system processes a stream of log events and triggers lambda functions that each handle various API calls in a user provisioning flow. Some of the tasks could run independently of the others, and those that depend on successful completion of a prior task are triggered based on a pattern-matched log emitted from the dependent lambda functions. Failure on any of the functions pushes events to a shared dead letter queue. Everything needs to be idempotent and use the original source timestamp field to be handled correctly. We could have achieved the same business logic with Step Functions (or any number of approaches, with the requirement that no new servers had to be managed) but Event Bus had a ready-made integration with an event source that we needed, and was a lot cheaper to operate. I can't really speak in depth to the gripes of others, but I can say that tracing and debugging was hard until we made it easy. If we had to add a lot more interdependent business logic to the system it would have probably become difficult to maintain.
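For what it's worth, the idempotency requirement described above can be sketched in a few lines (the names and event fields here are illustrative, not the actual system's code): dedupe on a stable event id and ignore events older than the last applied source timestamp.

```python
# Illustrative only: a handler that tolerates the duplicate and out-of-order
# deliveries an event bus can produce. "id" and "source_ts" are assumed
# fields on each event, not any particular service's schema.
seen_ids = set()
state = {"last_ts": 0, "applied": []}

def handle_event(event):
    if event["id"] in seen_ids:                # duplicate delivery: skip
        return
    if event["source_ts"] < state["last_ts"]:  # stale event: skip
        return
    seen_ids.add(event["id"])
    state["last_ts"] = event["source_ts"]
    state["applied"].append(event["action"])

handle_event({"id": "e1", "source_ts": 100, "action": "create-user"})
handle_event({"id": "e1", "source_ts": 100, "action": "create-user"})  # redelivered
handle_event({"id": "e2", "source_ts": 150, "action": "grant-access"})
assert state["applied"] == ["create-user", "grant-access"]
```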

Here is one HN thread about some of the issues others have run into: https://news.ycombinator.com/item?id=19072850


Trying to choreograph dozens of lambda functions is one of those solutions that’s more hype than sense. AWS pitched lambda for one-off video transcoding, then overzealous folks took it way way too far.


Yeah I think the user you are replying to may not be an astronaut yet, but he is at least a pilot :)


The wrong kind of pilot can get you to space but re-entry proves to be a real problem.


That's the best insight ever, made powerfully sharp by being correct.


FWIW, event driven is not event sourced. AWS Event Bus is neither persistent nor ordered which makes it not fit for event sourcing unless the lambdas were only an ingest for Kinesis or the like. For example: how would consistent replay be accomplished?



Thank you for sharing, TIL it will persist the events. Still...

> It processes these as quickly as possible, with no ordering guarantees.


It's so easy to shoot yourself in the foot with CQRS... I have yet to see so much duplication in an otherwise clean codebase as I saw in the single CQRS project I worked on.


On the contrary, I think this has aged really well.

Look at this for example:

> A recent example illustrates this. Your typical architecture astronaut will take a fact like “Napster is a peer-to-peer service for downloading music” and ignore everything but the architecture, thinking it’s interesting because it’s peer to peer, completely missing the point that it’s interesting because you can type the name of a song and listen to it right away.

He was exactly right. Now you can do the same thing on YouTube: type the name of a song and most of the time it will come up right away. And that's what matters the most.


> All they’ll talk about is peer-to-peer this, that, and the other thing. Suddenly you have peer-to-peer conferences, peer-to-peer venture capital funds, and even peer-to-peer backlash with the imbecile business journalists dripping with glee as they copy each other’s stories: “Peer To Peer: Dead!”

Crypto comes to mind. Crypto is astronaut crack: distributed-ledger astronautery for some, weird-economics astronautery for others. Meanwhile, the "listen to a song" feature of crypto is "I can make money with this." Eventually we'll have CB crypto with reversible transactions.


I also appreciate a lot of what Joel has written; without fail, it's always a little insightful and thought-provoking. I don't think it's so much that it didn't age well... more that it started out fairly short-sighted or got lost along the way.

The example of generic file support seems a bit of a stretch, for instance. There's inherently nothing wrong with adding more support, but the contrived example conveniently meant the loss of a feature. Of course backward steps are bad... which is why certain 'contracts' are established.

I think he went into it wanting to make a different point - delivery in the end matters more than design. This, I've [somewhat begrudgingly] learned, I agree with. He's probably gone on to make a post just like this that changed my views.

I don't think he gives credit to these astronauts and what they have to achieve, and the flexibility it demands. These are the dreamers, and sometimes you have to keep them from wandering too far. Other times, you might want to see where it goes.

The real world isn't FIFO. You can go with the short term safe plan while you try to find a clever solution to never have $design_problem again, or bolt on the latest feature.

It's great that he can imagine his new favorite gizmo, but it has to happen somehow. What's more, the lessons learned along the way tend to be useful anecdotes elsewhere.

Like Mr. Savage implied (I won't try to accurately quote), what makes science not screwing around... is writing it down.


> .Net did actually turn out to be the big deal it set out to be

At the time (2001), .Net was the brand for a vaguely defined all-encompassing Microsoft initiative - something about connecting everything. The .Net framework was just one piece of this; there were also SOAP services, a global identity system, and various other stuff. See for example this old article: https://www.computerworld.com/article/2596383/update--micros... :

> For end users, .Net provides a sparse, browser-like user interface without any menu bars. A key concept in the new user interface is the "universal canvas," which Microsoft said eliminates the borders between different applications. For example, spreadsheet and word processing features will be available inside e-mail documents. .Net also will support handwriting and speech recognition, the company said.

It is clear nobody (even at Microsoft) had any idea what .net was supposed to mean!

This branding obviously failed, so .net was narrowed to just describe the development framework.


I tend to agree with you: this did not age well. You know what is just as bad as architecture astronauts? Wunderkinds ranting about how other smart people are stupid. This whole post feels like the aftermath of some late-night argument that he probably should have just deleted in the morning. Not to mention the subtle digs in the pics that he conveniently forgot to caption: is that the MIT campus and two pics from Harvard? If it were revealed that this rant was related to some collegiate affair that only three people in the world remember, I wouldn't be surprised.


if you replace the technologies with the tech-du-jour it's pretty much the same thing - kubernetes solving all of your deployment and system management needs (except for, you know, managing or understanding kubernetes itself), json or yaml being the be-all-end-all for data formats (until you need comments, etc), 'cloud native', 'serverless', etc.

these are all tools not solutions but people keep pretending the tools are the solution. That's the 'astronaut' part - off in space and not grounded to actual user requirements.


> I rather like millennials like Ryan Dahl that saw the potential for Javascript to be more than a browser scripting language.

Not to take away from that achievement, but he wasn't the first one to implement this idea.

The infamous ECMAScript 4 was supposed to be a more general purpose language. Also there were previous attempts at roughly the same in the form of e.g. Rhino[0].

[0] https://en.m.wikipedia.org/wiki/Rhino_(JavaScript_engine)


> I for one am glad we’re not still emailing Master-Spreadsheet-FINAL(1)(1).xls back and forth to get things done.

Oh but we still do, we still do...


> When you go too far up, abstraction-wise, you run out of oxygen.

What happens if you don't go far enough up, abstraction wise?

You end up repeating yourself - everywhere. You end up with globals everywhere. You end up with God Classes. You end up with epic-length functions nested 15 levels deep. You end up with primitives trying to do the job that well-crafted data structures, named using the terminology of the problem domain, should be doing.
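As a small, made-up illustration of that last point, compare a bare tuple with a type named in the problem domain's own terminology (`OrderLine` and its fields are hypothetical):

```python
from dataclasses import dataclass

# Under-abstracted: a bare tuple - callers must remember what each slot means.
order_raw = ("A-1017", 3, 19.99)

# Named in the domain's terms: the structure documents itself and
# gives the domain logic a natural home.
@dataclass
class OrderLine:
    sku: str
    quantity: int
    unit_price: float

    def total(self) -> float:
        return self.quantity * self.unit_price

line = OrderLine(sku="A-1017", quantity=3, unit_price=19.99)
```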

It's odd that the author rails against "astronauts" without addressing the very real motivations that lead to good architectures. Like all design, software development is full of tensions, and navigating the forces pulling you in opposite directions regarding architecture is one of the most important contributions you can make.

It's also odd that the author never calls out specific architectures that are the work of astronauts, just "the stupendous amount of millennial hype that surrounds them." He seems to hint in the introduction that CORBA is (was) one of them, but even that reference is pretty vague.


I would rather work with a codebase that has too little architecture than too much.

When the architecture is underdeveloped, it only takes somebody with a little more experience to fix the problem using incremental refactoring. But when astronauts have run wild with too many abstractions, it can be incredibly difficult to unravel them, especially if you don’t have the original developers’ help.


I entirely disagree with this. The worst codebases I have worked on have been the under-engineered ones - everything you touch affects a multitude of unknown, unrelated things, and unravelling that sometimes borders on impossible. You can't put any unit tests in place because everything depends on everything else, so good luck refactoring with any confidence. Give me an overengineered codebase any day of the week - it's much easier and safer to put multiple little pieces back together if they don't really need to be separate, than it is to disentangle one giant lump.


"The difference between underengineering and overengineering is that underengineering can be fixed."


I used to work with someone (a mechanical engineer) who used to ask whether you would prefer something that's over-engineered or under-engineered. Not sure why anyone would say under-engineered, and since we obviously can't get anything just perfectly engineered, it's gonna be over-engineered. A bridge will take more load than its rated load, an elevator will take more load than its rated load, etc.

Sometimes we deal with under-engineered stuff, that's the stuff that breaks the first time you put it to use. We don't tend to think highly of those things.

Under-engineering can sometimes be fixed. But not always. Sometimes the fix is to throw it all away and re-do it because the issues run so deep the design is not fixable.

Good software has good "bones". The right decisions were made early on to support building more stuff over it... I completely disagree that you can refactor bad software into good software. Though presumably that's not what the parent meant. Perhaps the parent's idea is to defer big decisions that don't have to be made early. That doesn't mean don't make good decisions that do have to be made early.


> since we obviously can't get anything just perfectly engineered it's gonna be over engineered. A bridge will take more load than it's rated load, an elevator will take more load than it's rated load etc. etc.

That's not what overengineering means.

To re-use your bridge example, over-engineering a bridge would be something like using a complex suspended-bridge design with expensive alloys, small tolerances, etc. to cross a 10-meter gap where a simple slab of concrete would have done the job.

And "underengineering" would be to pour more steel and concrete at the bridge until it sticks.


well, for sure there are many axes to over-engineering. I would say that a system that exceeds some specification is "over-engineered" while one that is below the spec is "under-engineered". At least I think that's the common use of the term.

I guess "expensive" might fall under costs, so something that was targeting a cost of $1000 but the design costs a million dollars to make is certainly over engineered.

I'd consider load ratings of an elevator to be similar. We mandate those be over-engineered by some margin. But if the load rating of an elevator is 1 ton and you design an elevator able to carry 100 tons, then it's certainly over-engineered by more than the safety requirements...


There are plenty of cases where exceeding the requirement is cheaper / easier / simpler than trying to match it more closely, because you have common, standard, hence cheaper, parts available.

So IMHO over and under engineering doesn't refer to the requirements, but to the engineering effort deployed.

What developers often refer to as YAGNI and KISS.


I agree the terminology is sometimes used in this sense. Though if there is a bigger engineering effort, it's often because the engineers are attempting to go beyond the requirements. I.e. the requirements may be satisfied with SQLite on a Raspberry Pi, but the engineer builds a large distributed system that runs in the cloud. This could just be because they have no idea what they're doing, or it could be because they're intentionally building something beyond what was asked. It's definitely possible to build a more complicated system that just barely satisfies the same requirement a simpler system would satisfy. I'm tempted to call this "bad engineering" rather than "over-engineering".


> Good software has good "bones". The right decisions were made early on to support building more stuff over it.

I like how you expressed this. It puts into words what I've experienced with various codebases, both my own and others'.

This kind of understanding is difficult to teach and learn, what it means for software to have good bones. It also brings up the thought that it's not just about under- and over-engineered, but the qualitative differences between well- and (for lack of a better word) badly engineered software.

Fortunately there are various concepts that people have developed which help in talking about things like "good bones", such as modularity, encapsulation, functional purity, idempotence..

I suppose over-engineered software goes too far in applying these concepts (especially OOP?) where it starts to go against it being "well-engineered". But in general I'd personally prefer working with such a codebase, in comparison to under-engineered software where things are disorganized, too interdependent, rampant use of globals, etc. It follows your analogy of an "over-engineered bridge".

On the other hand, there is something to be said for software architecture that is tastefully and minimally engineered, with just enough strong bones as the basis - but no more! - to achieve its purpose. It's a quality that is somewhat rare to see, where the code is almost deceptively simple to understand, such as small functions with clear singular purposes, and not being too clever. It takes many years of experience (and perhaps certain kinds of personality) to reach that level of simplicity.

I believe such simplicity cannot be taught, but perhaps can be "absorbed" via osmosis, haha..


This statement is dangerous. Badly designed abstractions have understandable, well-defined caveats and are used consistently across the system. The problems might be catastrophic, but you have something to hang on to in the hope of fixing it. Under-engineered systems grow into chaos, where every moving part can influence the others; the combinatorics are exponential, and so is the volume of work needed to fix it.


overengineering can be fixed as well, usually by undoing things and doing them over again in the underengineered style. I would grant you that overengineering is typically more work and harder work to fix, though.


The definitions of under and over engineered seem at odds. To me the underdeveloped version is the one that is difficult to change without touching everything.


Sorry if I misunderstood, but I have seen this kind of thinking in inexperienced programmers working on greenfield projects, treating any decision as "the end justifies any means".


You're missing the point entirely.

Joel's criticism targets PEOPLE not IDEAS. See the blog post title. There's nothing wrong with abstractions as an idea. The issue is when folks apply them in inappropriate and unnecessary ways.

When a smart, motivated, and caring person programs a computer then sensible abstractions will naturally emerge. A complex proprietary formula will not be copy-pasted because of course that's a bad idea. These judgement calls are important!

Globals are not always bad. Some data is truly global and should not be instanced, so to speak.

God classes are not always bad. Sometimes it's important to isolate experimental behavior from everything else and get it done quickly. Throw it into a class, throw the class into a file, mash a bunch of code in there, and test it out. If it works out then refactor it.

Primitives are not always bad. Data structures are not zero cost! The number of times I've seen a "well crafted" (???) data structure tank performance by orders of magnitude is just sad.

> It's odd that the author rails against "astronauts" without addressing the very real motivations that lead to good architectures. Like all design, there are tensions in software development, and navigating the forces pulling you in opposite directions regarding architecture is one of the most important contribution you can make.

This paragraph is over-engineered.


The context of the early 2000s was peak object-oriented programming, UML, and XML. Technologies like CORBA were just symptoms of all the praying at the altar of software architecture.


> What happens if you don't go far enough up, abstraction wise?

Or you end up with a flat hierarchy of isolated modules that can do one thing and one thing only, but do it well.

Architectures are what can't be changed easily and it is easier to change what is small. Architectures that try hard to accommodate every case always end up getting in the way of everything.


You are conflating architecture with abstraction. Architecture is design, the map to a solution. Abstraction, however, is the distance that the architecture sits from the system's reason for existence.

I've found more usefulness in being pithy than in being prolix.


Distance might be rather a symptom of senseless abstraction. Abstraction is throwing away details. If you throw away the right details, solutions are simpler. If you throw away the wrong ones, you miss the point.


Abstraction is throwing away details, yes, which results in a distance of the design from the goal of the product.


If you don't go far enough up abstraction-wise, you probably either don't have a good product or aren't solving anything profound. These 'high level' solutions are attractive to investors because they want 'napster for everything' or 'the iPod for your kitchen' or 'Sonos for your bathrooms' and so on ad nauseam. They're attractive pitches that make the likes of Mark Cuban's ears perk up. Whether or not you can sell them in the long term, well, that's a different story. But these 'architecture astronauts' are the ones who get all the VC funding and grant money.


You can have great novel products...that aren't very abstract


In general, general solutions are better because they are more reusable. More "adaptable".

Think of the formula for the solution of 2nd order polynomials. You might say it is too abstract, you don't need to solve all 2nd order polynomials, just this one and this one. If that kind of thinking had prevailed we would be missing many things the modern world has to offer.
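The point can be made concrete with a small sketch (function name and structure are mine, just for illustration): one general formula, derived once, handles every concrete instance you will ever meet.

```python
import math

def solve_quadratic(a, b, c):
    """Return the real roots of a*x^2 + b*x + c = 0, smallest first."""
    disc = b * b - 4 * a * c
    if disc < 0:
        return []  # no real roots
    sq = math.sqrt(disc)
    # The general (abstract) solution covers every particular polynomial:
    return sorted([(-b - sq) / (2 * a), (-b + sq) / (2 * a)])

print(solve_quadratic(1, -3, 2))  # roots of x^2 - 3x + 2 -> [1.0, 2.0]
```

Solving "just this one and this one" polynomial by ad-hoc arithmetic would mean redoing the work for every new instance; the abstraction is what makes the solution reusable.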


I remember being invited by IBM to some sort of lunch junket with Grady Booch (IBM had acquired Rational Software). The guy basically spouted the most inane platitudes it has ever been my misfortune to be on the receiving end of, despite my working for a company so large it had not just architects but also IT Urbanists.


What in the everliving Son of Fuck is an IT urbanist


Architect of architects. In other words not just an astronaut, but someone who has transcended into a parallel universe of abstract irrelevance.


Civil engineers with stupid titles put in positions at companies that think a rather lot of themselves.


Guessing OP means people who don't just do the software equivalent of "architect buildings" (solutions), but do so on a scale bordering on town planning i.e. entire software landscapes?


I really agree with a lot of this.

I'll tell you what though, if you have a programming task that you find boring, over-engineering it and over-architecting it can make it _so_ much more enjoyable.


You better have the self-delusive ability to love and cherish that over-engineered system, because you will go from "not enough to do" to "can't keep up with the business because every little change requires 40 hours of heads-down work" in no time flat.


I think if you end up in that situation because of overengineering your engineering is not very good in the first place.

Even overengineered systems should be simple to change.


> Even overengineered systems should be simple to change.

Clearly you've never met a 10x over-engineer

Over-engineering (for example over-generalising or adding too many layers of abstraction) is typically done with certain sorts of potential future changes in mind, so in the resulting system some changes are easy but some types of change (the ones the over-engineer didn't anticipate) can only be done with massive refactoring.


I worked with such a person, briefly. It seems he had never heard the phrase "the shortest distance between two points is a straight line". He had plans for the product stretching out 5 years, and architected accordingly. He knew exactly why each piece was done the way it was done, and could hold it all in his head. But when he was reassigned to a different project after 6 months, the rest of us had to pick up the pieces.


The general problem is that it is hard to estimate the amount of changes needed in the future. You have to make a guess. Sometimes this leads to over-engineering. Sometimes it leads to under-engineering. Depends on the case, the future, and the skill of the engineer.


My favorite designs are the ones that can't even solve current problems, much less anything that might need to change later, because they spent so much time optimizing for non-existent future scenarios that the present ones suffer.

"Sure, our web application might take a minimum of two seconds to handle trivial get requests, and logging in doesn't work half the time, but say the word and I can instantly migrate from Postgres to MySQL."


That's a good example, preparing for a future which probably never comes.

It is not easy to predict the future of a software application. In Louisiana they were building levees to withstand a once-in-100-years flood or something like that. Based on historical weather reports they were able to estimate how high and strong the levees needed to be. But with software it is hard to see how we could estimate what kinds of requirement changes might be needed during the next 100 years for any application.


To me, overengineered means you engineered something with the best intentions (speed of development, malleability, testability, operational resilience, etc.) but you made design choices that carry stiff costs and only pay off at scales that aren't relevant to your actual situation.

So, for example, you architected the system so that a dozen teams with a dozen developers each can hack independently on the system with minimum conflict, but you only have one team with four developers, who would have been able to iterate much faster on a simpler codebase, and who pay a high cost for maintaining and operating such a complex system.

Often the realization that a system is overengineered in one dimension happens simultaneously with the realization that it is underengineered in another. For example, you realize that your system that is engineered to scale to petabytes of traffic per day scales awkwardly to more than three or four customers. It takes weeks to onboard a new customer, and it's impractical to onboard more than one customer at a time. Meanwhile your first four customers are only sending you hundreds of megabytes of traffic per day, and the mechanisms you added to scale to petabytes of traffic are making it really awkward to reengineer the onboarding process. Salespeople are quitting because they have prospects in the pipeline that they aren't allowed to move forward on, and upper management is demanding to know why we need more integration engineers when they already outnumber our existing customers. But by god, if one of these customers wants to send us tens of millions of requests per second, we're ready (for some untested meaning of the word "ready").

People who overengineer systems are often looking for a challenge because the real needs of the business (such as onboarding new customers, making the UI brandable, integrating with a dominant third party ecosystem) don't seem like engineering challenges. They want to do good engineering work, so they pick a challenge that they think of as engineering (tail latency, resilience, "web scale," you know, engineer stuff) and they throw themselves into it. And they run foul of the truth that LOC (and architectural complexity) are liabilities.


> Even overengineered systems should be simple to change.

One of the most significant qualities of a over-engineered system is an unusual resistance to change.


not to nitpick, but "quality" usually means some desirable aspect... here I would have used the word "characteristic" :)


Totally fair. Good nitpick.


I disagree. A well engineered system that 2,000 devs work on will look vastly different from a well engineered system that 10 devs work on. If you try to build the former for the latter you will make a system that is hard for 10 devs to change. And if you build the latter for the former you'll build a system that is impossible for 2,000 devs to change.


one could argue a well engineered system wouldn't require 2000 devs to work on it...


My favourite unpopular opinion: Microservices are a great way to enable 100 engineers to work on a product when otherwise you would have needed at least 10 engineers to achieve the same.


So any system that has required >= 2,000 devs could have been engineered to need less than that many? Seems like a pretty lofty goal: a hard cap on any system you could imagine.


How does it relate to Linux?


You missed the second part: work on it for a year, declare it well scoped and on track then move on to greener pastures, dumping the actual work on some juniors.

Best part about the people that do such things is that they may not even realize they do it!


unless your engineering efforts are spent entirely on making the system highly operable, your code easy to read, document, and test.


And that, my friends, is why the FAANGs need to hire tens of thousands of software engineers to copy protobufs from one place to another...


I'm glad that works for you, but I am also really glad that I don't work with you.


But that makes it difficult for others to maintain it.


Yes, it does. Better get some more head count for that.


Of course. I'm trying to point out one of the causes of over-engineering


Sorry for obvious reaction. Already burned by this. :-)


Yes, it's a widespread problem. Crafting code for maximum maintainability is often really important and often overlooked


Maintainable code can be changed more easily than unmaintainable code. Each change will tend to make code either more or less maintainable. Over time, the probability of maintainable code being refactored into unmaintainable code approaches 1. Most older code bases observed were in fact found to be unmaintainable, supporting the theory.


Pity those that have to maintain it later though.


This is a good one. Joel Spolsky's writing has been a huge influence on me over the past 20 years. For anyone who liked this article but may not be familiar with his other writings: I encourage you to look at his "reading lists" (posted on https://www.joelonsoftware.com/) for some more gems.


What is dubbed an "architecture astronaut" is, in my opinion, a researcher in software engineering, as is the work they do. Largely, software engineering and its practices aren't well defined, and there are a lot of ways to approach a problem. We haven't found the best approaches, but in many cases we've slowly begun converging on some architectural approaches for applications. These are really frameworks for thinking about how to approach a specific type of software system, much in the way research provides foundational theory for thinking about work in its domain.

We don't think of it that way, but I personally believe that's really what it is. It's a reductionist (abstracted) approach of developing a framework around a set of observations. Just like research, it may be too far abstracted to be relevant to the problem at hand and may take years before it becomes useful. That's just my perspective, though.


A good researcher should be an empiricist. They might want to test whether a pet theory applies to a new domain, but they should be challenging that assumption and open to the possibility that it doesn't. The architects who cause problems are those who dogmatically insist on a particular approach in defiance of all evidence.


I think that's a reasonable way of framing it I guess, but the reality is the industry on the whole doesn't have a place to put people like this. Most companies don't have a need for that kind of research. They just have to ship something.

Nor is it the case that everyone who does this kind of "research" is actually qualified to be doing it.


> What is dubbed as an "architecture astronaut" is, in my opinion, a researcher in software engineering as is the work they do.

So they're publishing papers with falsifiable hypotheses and subjecting themselves to peer review, right?

No? Oh.


Isn't all software engineering research peer-reviewed? Like Transactions of the ACM, all the great conferences etc. Or was that in the last millennium only?


That was my point - they aren’t publishing papers, so they aren’t doing research. They’re just standing up at conferences and giving opinions with no data to back them up. That's not what a researcher or 'scientist' (as some call themselves) does.


Ah, got it: you're saying that the so-called "architecture astronauts" are not publishing, unlike serious computer scientists (?)


The person I replied to said 'What is dubbed as an "architecture astronaut" is, in my opinion, a researcher', but that's not true - they're not researchers, for the reasons explained.


“Another common thing Architecture Astronauts like to do is invent some new architecture and claim it solves something. Java, XML, Soap, XmlRpc, Hailstorm, .NET, Jini, oh lord I can’t keep up. And that’s just in the last 12 months!”

Wow, this hasn’t aged well. I always found Joel’s articles to be interesting but pretty limited in actual wisdom. He writes as if making profound observations, but in reality they are pretty narrow-minded and really only apply to a very narrow niche of computing as a whole, and really only to a certain startup type — not technically complex communications, productivity, or organizational software.

His advice is invalid for the development of anything more complex, for example you shouldn’t take his advice if you’re building a biotech company or starting a new kind of tech company that isn’t your run of the mill CRUD app.

I think it’s rather disingenuous and quite arrogant to make generalizations about the types of people who work at big companies (without even knowing what they do or having ever met them). I’m sure the same traits that make one have disdain for corporations are the same drivers that cause one to become an entrepreneur, and then blog loudly about it to the world.


> I think it’s rather disingenuous and quite arrogant to make generalizations about the types of people who work at big companies (without even knowing what they do or having ever met them).

I get where you're coming from, with your reply as a whole and not just this snippet, but I think it's worth pointing out that Joel did work at Microsoft for quite a few years during its historically ascendent and dominant period in the 90s so I think he has at least some idea about the types of people that work at big companies.

If you want to talk about things that haven't aged well, SOAP would be right up there - you may not remember all the fanfare surrounding it, but I certainly do, and where exactly is it now? I know it's still used in some legacy contexts. I've even seen SOAP responses within the last 5 - 7 years, but it's basically dead (or at least stagnated) tech that isn't used for newer projects and systems, and certainly isn't something we'd even consider using.


> His advice is invalid for the development of anything more complex...

I think his advice would be: 'does this actually have to be more complex?' It might superficially look complex, but could it actually be handled in an old-school, simpler manner?


So... I think a lot this is two problems, not one.

One is definitely climbing the abstraction ladder to the point of no oxygen: Architecture Astronautics. But I don't think that alone explains the amount of bombastic, heroic, utopian grandiloquence in those Architecture Astronaut quotes.

People describing their jobs, their companies, their ideas and such seem drawn to grandiose & abstract nonsense. "Solutions-talk." Walk around a lot of business-ey trade shows and read the plaques. 90% of the time, it is impossible to know what they do, who they do it for, or why. They all do "business, people, and technology solutions." Even if you stop with questions, the first answer is always hopelessly abstract. It takes a lot of digging to eventually find out they do custom spreadsheets for dentists.

Meaningful statements are limiting. Who wants to limit themselves?

Also, it works. Saying something specific enough to be meaningful opens you to criticism, being eliminated by process of elimination, etc.

Architecture Astronautary feeds comfortably into sales, marketing, investor relations, recruitment. It's acceptable in boardrooms, AGMs, job descriptions. Media is happy to report on it.

Obscurity by abstraction works.


Be wary of "When I used to work at Facebook we did xyz"


"...and by 'we', of course, I mean a team of developers larger than our entire engineering division, dedicated to developing and maintaining xyz. Why do you ask?"


It's not like the architects at Facebook would do something like scrap Android's gradle build system and replace it with their own custom build system...oh yaa, they did do that.


I mean, Google does this all the time with things, too. And so I suspect it's partially ex-Googlers influencing things to some degree.

From what I hear 'buck' at Facebook is directly modelled on Blaze/Bazel.


Worse yet is when the people who really didn’t make it at a FAANG start the “What we did at X…” adventure. We had a couple of those wander through the woods on their dream quest to the tune of a few million dollars in wasted work we’re throwing out now that they moved on.


> If Napster wasn’t peer-to-peer but it did let you type the name of a song and then listen to it, it would have been just as popular.

This is a very effective statement. Today p2p for music is certainly alive but has largely been surpassed by services like YouTube and Spotify (and Rdio, sadly dead before its time). One is ad supported, the others paid. But they all just let the user search and listen.

Popcorn Time has done this for movies and TV. It’s purely p2p. Disliked by content owners, but they’ve largely failed to provide the same kind of universal search-and-watch functionality that the app provides. Hulu, Netflix, Apple+, Disney+, insert other streaming services: they all basically fail at usable search and recommendation, and have limited overall content.


> They tend to work for really big companies that can afford to have lots of unproductive people with really advanced degrees that don’t contribute to the bottom line.

Is it the really big company that makes you die inside, or the people themselves?

I tend to think it's the company... it's so hard for me to work and feel productive at a large company. Something just oozes that nobody really cares about you or what you're doing. Everyone is just trying to do the bare minimum and go home.


Because doing the bare minimum is a reasonable thing to do. You will not be rewarded for more work unless you play the hierarchy game. The Gervais Principle: https://www.ribbonfarm.com/2009/10/07/the-gervais-principle-...


You can rationalize it as much as you like, but many people will not be happy by spending their working hours doing the bare minimum.


It's not a rationalization, it's a description of reality. Play the hierarchy game or find joy elsewhere, there's no other way really.


Not everybody is like you. Not everybody shares your worldview, or what you find happiness in. People are different, and different people find happiness in different places. I'm telling you, many people feel the need to do more than the bare minimum at work, and will feel happier by doing more. Perhaps you are not like that. You can still take my word that people like that exist.


I know :)

That's the "Clueless" section of that essay.


Premature optimisation is a real pain when it comes to software design and architecture. However, abstraction is a powerful and necessary tool that can make our lives a lot easier by avoiding repetition (DRY - don't repeat yourself).

A good architect worth their salt will be able to find the right balance: to distill and simplify a problem down to the absolute minimum, to the point where it cannot and should not be abstracted any further without losing its core objectives and reason to exist.


A somewhat cheesy aphorism for this is "Dry but not so dry it cracks".

Personally I like the rule of thumb that you let 3 instances of something occur before you try to refactor it into a single design.

Without examples of real usage it's far too easy to come up with bad abstractions. I'd rather have code that's easy to delete later.
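The rule of three might look something like this in practice (the report functions and field names are made up for illustration): tolerate the duplication until the third occurrence, at which point the shape of the right abstraction is actually visible.

```python
# Three near-duplicates accumulate before refactoring (hypothetical names):
def report_daily(rows):
    return "\n".join(f"{r['name']}: {r['daily']}" for r in rows)

def report_weekly(rows):
    return "\n".join(f"{r['name']}: {r['weekly']}" for r in rows)

def report_monthly(rows):
    return "\n".join(f"{r['name']}: {r['monthly']}" for r in rows)

# Only after the third copy is the varying part obvious: the period key.
def report(rows, period):
    return "\n".join(f"{r['name']}: {r[period]}" for r in rows)

rows = [{"name": "api", "daily": 1, "weekly": 7, "monthly": 30}]
assert report(rows, "daily") == report_daily(rows)
```

Had you abstracted after the first copy, you might have parameterized the wrong thing (say, the separator) and been stuck with an abstraction that fights the second and third use cases.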


Much of what Joel says here - especially the conclusion at the end - is also mirrored in what Steve Jobs said when confronted over shutting down OpenDoc: "You have to start with the customer experience and work backwards to the technology. You can't start with the technology and figure out where you're gonna sell it."

https://youtu.be/oeqPrUmVz-o?t=110


The peer to peer part definitely didn't age well. And the Java part too.

Nobody likes bombastic claims, but these more likely come from marketing than from architects. It's important to see beyond the marketing and recognize the engineering thinking and potential behind the technologies.


Joelonsoftware always seems to me to be really, really close to punching someone in the nose.


"All they’ll talk about is peer-to-peer this, that, and the other thing. Suddenly you have peer-to-peer conferences, peer-to-peer venture capital funds, and even peer-to-peer backlash with the imbecile business journalists dripping with glee as they copy each other’s stories: “Peer To Peer: Dead!”"

Strong blockchain vibes in this and the paragraph that follows it.


I think that example was interesting because he basically described BitTorrent (a more general Napster that's not just a push-button music player) as the potential brainchild of astronauts, but that actually was really useful to millions of people. So maybe the astronauts do have a point sometimes.


And yet the _user_ problem he describes Napster solving, “type a song name and hear it play,” is currently solved by Spotify which is WAY more popular than BitTorrent.


> currently solved by Spotify which is WAY more popular than BitTorrent.

Spotify claims they have 356 million users. I would claim that torrent users are greater - you just don't know or hear of them!


I don't know. I have found that those that torrent are very similar to those in crossfit. They won't shut up about being in crossfit.


Yes, it's funny how every person who talks about things loudly is a loud person. Almost like there's some kind of selection bias going on.


Can't we just agree _both_ BitTorrent and Spotify succeeded in their slightly overlapping goals?


Well, the tragedy now, compared to 2001 when Joel wrote this, is that the architecture astronauts are no longer limited to big corps. With VC money, early-stage startups got money to boot. I don't know how many early-stage startups I have encountered, barely out of seed stage, with microservices this, serverless that, all running of course on k8s for unlimited scaling. If frontend was madness, then current trends in backend dev are madness^2. I have completely switched off now from node.js work to saner, boring things.


One of my favorites of all time


joel nailed it, xml was clearly peak internet



