In Pursuit of Production Minimalism (brandur.org)
375 points by grey-area 79 days ago | 82 comments



Fwiw, the SR-71 was not engineering excellence. It was slapped together to do what needed to be done. It literally leaked jet fuel onto the ground when it wasn't in the air, to allow the metal body panels to stretch and close the gaps in its skin when it was in flight (because physics). The airframe couldn't handle a takeoff with a full fuel tank, so it had to be filled up right after taking off. 90% of the parts manufactured didn't work. It was difficult to maintain, and eventually developed a "speedy" one-week turnaround time to get it flying again after a mission, assuming a base had the specialized tools and support staff to do so.

The SR-71 is an example of how the sheer will to get something done can indeed accomplish great feats, if at great cost and complexity. It's also an example of how innovation is no guarantee of success; since the Blackbird was retired, we still have no airplane as fast as one built in the 1960s.

A great plane is just what it needs to be. It may not be the most minimal it could be, but it will be the most effective. Another example, the A-10 Warthog, is not "minimal", but it is an amazing plane.

I think to get the best result based on the desired outcome, you have to pursue craftsmanship and excellence through continuous education, practice, and improvement, and not worry how it looks so much as how well it works. Complexity is a sin, except when it is necessary.


I couldn't disagree with you more. I am presuming you have neither read the history of the SR-71 nor have a good understanding of the trade of engineering. I am an ex-Lockheed employee, and even though the SR-71 was a distant historical project when I was working there, there were internal discussions about the engineering excellence behind the plane's performance. I worked on the stress analysis team, where we frequently joked about how we didn't need to deal with thermal expansion like in the SR-71.

You can choose to call it "slapped together" and brush over the sheer ingenuity of the technologies that made the SR-71 successful, or you could try to read up on the insane amounts of __engineering excellence__ that it took to achieve its mission as a recon aircraft. Its mission wasn't to be a commercial aircraft; its mission was reconnaissance: to go as high, far, and fast as possible and to return safely by literally outrunning missiles. Therefore, refueling and leaking fuel are appropriate trade-offs.

Skunkworks engineers were perfectly aware of the trade-offs you mention - that is integral to __any__ engineering discipline, from semiconductors to aviation. That __does not__ imply a lack of engineering excellence.

Leaking fuel is __the__ most popular fact about the SR-71. But you omit, or are unaware of, the ingenuity that went into the ramjet engine, the materials science, a speed regime that was completely new (nothing had flown at those speeds before), etc. There is just so much to say that I am having a hard time replying coherently :-) I apologize if my response was a bit harsh - I am just passionate about this topic!


If we're defining "engineering excellence" as the craziest, most amazing thing you can possibly build, then the SR-71 absolutely qualifies. So do top fuel dragsters (though obviously the SR-71 was way more difficult to achieve).

What I meant by "not engineering excellence" was in relation to the original post, comparing how to manage production servers with the SR-71. Nobody should regularly operate their servers like an SR-71 or top fuel dragster. It's great to fly really really fast, but most people should probably not be doing that, because it requires making exceptions like leaking fuel. You actually want to run your server like a big slow complicated comfortable redundant jumbo jet on autopilot.

Hopefully you get my meaning. I apologize if I made their accomplishments seem minor. I meant no insult against the Skunkworks engineers or the SR-71, and your response was not harsh at all.


Yes, the SR-71 was a remarkable achievement. Ben Rich's book about the Skunk Works describes the project. It's amazing that thing ever worked. It's not, however, engineering minimalism; it's money-is-no-object R&D. Many exotic technologies used in the SR-71 had never been seen before in aviation and were seldom used again - large parts made of Invar, a near-zero-expansion alloy normally used only for watch springs, for example. The price of all this was a high-cost, high-maintenance aircraft.

A good plane to look at for production minimalism is the Boeing 747. A huge aircraft even today, it was designed as a commercial product to be produced in quantity and used intensively over a long life. About 1500 were built and many are still flying after 40 years. It's reliable, maintainable, safe, and profitable.

It's not exotic. There are some new things in it, but not that many. It's mostly upgrades to technologies known at the time. Better flight controls, better navigation, better autopilot, better hydraulics, better tires, better engines, etc. But little that hadn't flown before at smaller scale. The emphasis was on eliminating the weak points during design. The B-747 has four hydraulic systems, for redundancy.

It's worthwhile for people in software to read the pilot's manual for a large commercial jetliner.[1] They're not that complicated. If operating your system is more complicated than a jetliner, you're probably doing something wrong.

[1] http://www.aviationforall.com/wp-content/uploads/2016/09/AOM...


Engineering is ultimately all about solving problems. Operational consistency at acceptable cost is an important problem, and one that the 777 or a LAMP reference infrastructure solves.

The SR-71 solved a different problem: people at the top of the chain of command lacked information, and the potential impact was devastating. Delivering that information was critically important at any cost, and so the operational cost and the risks associated with operations were high!


> What I meant by "not engineering excellence" was in relation to the original post, comparing how to manage production servers with the SR-71. Nobody should regularly operate their servers like an SR-71 or top fuel dragster. It's great to fly really really fast, but most people should probably not be doing that, because it requires making exceptions like leaking fuel. You actually want to run your server like a big slow complicated comfortable redundant jumbo jet on autopilot.

Engineer here. It appears you've mixed up some pretty basic but unrelated concepts. Engineering projects are based on use case scenarios and design requirements. You've presented your personal and very specific use cases and design requirements and somehow assumed that they were shared by another project, or that they even made sense. If your main requirement is performance and all other requirements are minor in comparison, then the optimal design won't be a workhorse but a racehorse.


Exceptions from known best engineering practices happen all the time on production servers. Heck, the entire concept of a "NoSQL" key-value database violates best practices for database normalization in pursuit of faster performance. Anyone running MySQL in less than an ACID-compliant configuration? Yes? (Like, almost everyone?)
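That tradeoff is easy to sketch. Here's a hedged illustration using SQLite as a stand-in (the rough MySQL analogue would be relaxed settings such as innodb_flush_log_at_trx_commit=2); the table and values are made up for the example:

```python
import os
import sqlite3
import tempfile

# Sketch of the durability-for-speed tradeoff: with synchronous=OFF,
# commits skip the fsync that makes them crash-safe, so writes get
# faster at the cost of weaker durability guarantees.
path = os.path.join(tempfile.mkdtemp(), "demo.db")
conn = sqlite3.connect(path)
conn.execute("PRAGMA synchronous = OFF")  # relax the D in ACID
conn.execute("CREATE TABLE kv (k TEXT PRIMARY KEY, v TEXT)")
with conn:  # one transaction for all 1000 inserts
    conn.executemany("INSERT INTO kv VALUES (?, ?)",
                     [(str(i), "x") for i in range(1000)])
print(conn.execute("SELECT COUNT(*) FROM kv").fetchone()[0])  # 1000
```

The data is all there on a clean run; the exception you've accepted only bites if the machine dies mid-write.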


I'm not sure you and the parent disagree much.

"Engineering excellence" might well describe a product that performs its function reliably with a minimum of intervention or fuss over a long period of time. When something "just works" like this, it's an achievement.

"Engineering excellence" might also describe the "sheer will", combined with expertise and carefully chosen tradeoffs, that makes something as unlikely and amazing as the SR-71 possible at all. The parent seemed to exclude it from excellence, perhaps on the grounds that it doesn't meet the criteria I describe in the previous paragraph. But it seems to me they're talking about the project with respect for what it accomplished, and if so, this is essentially a semantic point about whether every engineering effort worthy of respect fits in the box of "engineering excellence", rather than a question of whether the SR-71 was amazing.


Disagree. The SR-71 represented a pinnacle of engineering, not because it did everything right but because it made appropriate tradeoffs. Leaking fuel on the ground was an appropriate tradeoff for dealing with the constraints of materials science. It traded efficiency for speed. It traded nearly all else for speed, because that's what the mission profile called for, and it did it dang well.


Agreed. I have read a bit about the Skunk Works, and they are really good at building planes quickly, but the SR-71, the U2, and the F-117 are all very difficult to maintain and/or tricky to fly. Not suitable for mass production and use.


It’s funny you mention the U2. If I’m remembering right, the U2 went from idea to flying in 18 months. That is my benchmark for what 18 months of development by a talented team looks like. And yes, you’re right, it’s not an airplane for mass production, but it filled a niche and did a damned good job of it.


To restate the point that a lot of people are making here, engineering is about trade offs. If it wasn't it would be science.

If the requirements for something are highly specialised, you're only going to make a small number of them, and you have a hard deadline to do it by, then you shouldn't be wasting time on mass-production considerations.

That's a trade off and I think they dealt with it correctly.

The AK-47 is built to be mass producible and reliable in the field. It's a great success in those terms.

The U2 and SR-71 are built to different criteria but are also great successes.


The U2 is still flying today, albeit for a different mission, and the USAF is having a hard time fielding anything that comes close in capability.


The Blackbird failed because something else could do its job better and cheaper (satellites). That we still don't have one simply means we still don't need one (at the high cost at which it would be possible).


> we still have no airplane as fast as one built in the 1960's.

Either that, or we got very good at keeping secrets, or at getting stealthy, or the need for this "form factor" was deprecated by the advent of space-based tech.

Heck there’s probably a Robocop-style human brain in that CIA space plane[0].

[0]: https://en.m.wikipedia.org/wiki/Boeing_X-37


> Fwiw, the SR-71 was not engineering excellence. It was slapped together to do what needed to be done.

For whole industries, this is the definition of engineering. Anything that is done but is not needed is frivolous extravagance.


I agree with the thrust, but these need more nuance:

> Avoid custom technology. Software that you write is software that you have to maintain. Forever. Don’t succumb to NIH when there’s a well supported public solution that fits just as well (or even almost as well).

Software you run is software you have to maintain; until you can burn it in the fires of Mordor. If there's a public solution that is a good fit for your problem, and was built along the same lines you would build it, then that's great; but if it's a great fit for the problem and also pulls in 70 dependencies and updates three times a week, that's probably not worth the integration mess.

> Use services. Software that you install is software that you have to operate. From the moment it’s activated, someone will be taking regular time out of their schedule to perform maintenance, troubleshoot problems, and install upgrades. Don’t succumb to NHH (not hosted here) when there’s a public service available that will do the job better.

You don't need to host everything, but you better have a backup plan for when the service you depend on decides it's no longer going to provide the service. Chances are you're going to need to debug the service too, sooner or later, and it's harder to debug a black box. That said, it's great when you can lean on a rock-solid service to do things that are important but not a 'core competency'.


Slapping together a bunch of open source stuff and not bothering to understand any of it is a house of cards waiting to fall down.

It's far better if you have the time and resources to develop your own software from as low a level as possible. Each layer of abstraction that you can shed is an opportunity to tailor your solution more closely to your problem and to have expertise in-house. Big tech companies know this and it's why they do a lot of stuff in-house. The key is in knowing what to develop in-house and what to punt on, and when.


My approach is to slap together a bunch of stuff to get a better feel for what's really needed. Once things solidify, you can often see that you only need a few things, which you can then either custom-develop or rebuild with fewer components.

Starting out from scratch without exactly knowing what's needed seems extremely slow, inflexible and prone to wrong decisions.


For a big company it can make sense to take on a big fixed cost (custom software) to reduce a variable cost (cpu/storage/bandwidth consumption). A small company may not have the economy of scale to justify it.


I agree with the points being made here, but there's also the issue that your custom creation is the next employee's third-party solution.


And any time a dependency breaks, there's a chance that the external fix won't come in a timely fashion, or that whatever fix is right for you won't be compatible with that dependency's overall goals, so you'll end up maintaining an internal fork anyway.

I don't think there's a way around it. Professionals don't shy away from taking responsibility for all the code that goes into their product.


That's why I fab my own chips!


If I had unlimited time and resources that’s exactly what I’d do. Big companies often have both so that’s exactly what they do.


applauds loudly

It might even be bad for the project and/or client, but let's not forget: these TV-dinner people can't even cook!


Yup, also "publicly supported solution" often means not perfectly customizable, and may only be publicly supported for a limited time.

I find that most services get me 80% of the way, 2x faster than a custom solution. But the final 20% is an unknown abyss that will, more often than not, have insurmountable limitations. And so, if the software is in my wheelhouse, I'm almost always better off building it custom.


> Software you run is software you have to maintain; until you can burn it in the fires of Mordor. If there's a public solution that is a good fit for your problem, and was built along the same lines you would build it, then that's great; but if it's a great fit for the problem and also pulls in 70 dependencies and updates three times a week, that's probably not worth the integration mess.

I’ve gotten flak for rolling my own library. Basically, I was trying to get the base clock rate of a server for benchmarking purposes. Instead of using an external library, I simply wrote some inline assembly to read from the timestamp counter.

I was told this would create a maintenance headache, but zero fucks given on my end. I got to write some inline assembly today. :)


"I was told this will create a maintenance headache"

That's such a weird attitude I see often. If you had put your stuff on GitHub under a different name and then downloaded it, everybody would have been fine with it.


I used to work at a place long ago where the on site engineers would write the code, commit it to a “shared source” program at a vendor, and then license the code back at a cost from the vendor.


Let me guess, at some point a manager was upset that some internal code broke and instituted an "only shared libraries from this vendor" policy that was set in stone?


Sometimes I think it's just someone's laziness that somehow mutated into cultural norm. Or maybe CYA mentality, as when a dependency breaks, you can blame third parties and say that it was totally not expected to happen.


> I was told this will create a maintenance headache...

People say things like this as if it's possible to ever get away with not maintaining software. Everything you do will create a headache, you only get to decide the exact location. (And I'm with you on reducing external dependencies where the cost to do so isn't prohibitive.)


That's cool for you but

* How many people in your team/organisation can read and write assembly?

* How difficult is it to hire someone with assembly knowledge where you are?

(not having a go at your solution specifically - not enough project managers think about maintaining solutions after people have left)


Later, when this causes a portability problem, it will be replaced. As long as you, your manager, and your teammates are fine with having to replace your work, go crazy. Write an HTTP server in Pascal. Create a linker in awk. YOLO.


>HTTP server in Pascal

Heyyyy, what's wrong with that? I'm currently writing a dashboard analytics program in Pascal: a Windows service with a fully fledged web server built in!


Totally with you on self-inventing/self-hosting.

However, the cynic in me will make the point that the original advice is spot-on for startups, as most of them these days care not about their product but are just a gamble on getting acquihired. Their expected lifetime is around the same as that of the services they depend on (likely less), so they can get away with it.

A less cynical point can be made that slapping your prototype together pretty much entirely from third-party services and libraries can be a good way to get to market fast, at which point you can secure funding that'll allow you to slowly replace critical parts with your own code, to ensure survival of the product.


The rule of thumb that I go by is that in order to add a new technology to a stack it can't just represent an incremental improvement: it needs to be a 2x improvement in productivity or (even better) it needs to make it possible to build something that isn't possible to build without it.

The cost of introducing new technology (in terms of training, tooling and now-you-have-two) is so high that it's just not worth it if it will only incrementally improve how things work at the moment.


This article mentions Clarence "Kelly" Johnson. If you don't know who he is and care to learn more about one of the best engineers that ever lived (SR-71 is his work), check out his book: https://www.amazon.com/Kelly-More-Than-Share-All/dp/08747449...

His second in command at Lockheed, a man named Ben Rich also wrote a very good book: https://www.amazon.com/Skunk-Works-Personal-Memoir-Lockheed/...


> (SR-71 is his work)

One unfortunate trait of the original Lockheed Skunkworks culture was to make Kelly Johnson (brilliant as he was) the only face of the organization. As just one such example, many know of the clever inlet system on the A-12/SR-71, but it, like many things Skunkworks in the pre-Rich era, is incorrectly ascribed to Johnson. The real person who deserves the credit as the chief for that system is David H. Campbell, but how many people know his name?


The form of the T-38 was apparently determined by Lee Begin at Northrop. Who, you ask? Well, good luck finding information about him on the Internet.


These are not bad. But "Use services" seems like a recipe for long term pain. It's an external dependency you can't control. I would prefer to use the external service to learn what you really need and then try to insource it again.


Great article, classic comment section:

Half of people arguing about the plane analogy

Other half arguing about jQuery


The list of suggestions does not match the headlines.

Introducing new technologies to replace old ones, preferring services to local hosting - I just do not agree!

I use the oldest technology I can get away with, and host everything on dedicated servers as it is often more performant and cheaper.

Simplicity needs bounds. The suggestion of reuse also needs them: if you end up storing binary content in SQL to avoid having to roll your own file hosting, you are taking the simplicity mantra too far.


> Introducing new technologies to replace old ones

I took that to mean that if you must adopt new solutions, you should remove some similar tech from your stack for each one (e.g. replace RabbitMQ with Kafka), so that you don't end up with 50 ways to store data, say. That was certainly the context in the article.

> preferring services to local hosting

There is an argument for using hosted services too though - it reduces the ops workload - the correct solution probably depends on scale, staff etc. I agree this isn't one size fits all.

> If you end up storing binary content on SQL to avoid having to roll a file hosting, you are taking the simplicity mantra too far.

I don't think the article mentions storing binary content in PostgreSQL, does it? They're just talking about all non-ephemeral data. I imagine at one point they started using data stores better suited as caches or message queues for permanent storage, and discovering that when data was lost was the impetus for simplifying their storage. I like that they've moved to their own hosted PostgreSQL to store their customer data: it simplifies their infrastructure and makes sure they feel any pain customers feel too - a good kind of simplicity to strive for.


I really enjoy old, stable tools that are maintained and made incrementally better, day by day. Postgres is one example, Elixir is another (which doesn't re-invent the Erlang VM, but builds on top of it).

I also really like hosted cloud services, but they need to be replaceable with little to no effort. For instance, I don't want a new NoSQL platform, I want high-availability, managed, on-the-fly-scalable Postgres.


Unfashionable I know but I feel strongly like this about Java.

We chose Java maybe 2 decades ago when it was newly minted.

Some decent swathes of code written back then are still running in production.

It's enormously satisfying looking at anything in your stack which has really stood the test of time.

I harbour similar feelings about the Oracle DB but I'm a little more conflicted there.


> need to be replaceable with little to no effort. For instance, I don't want a new NoSQL platform, I want high-availability, managed, on-the-fly-scalable Postgres.

I don't think the latter is easily replaceable, though, either, since what you've described isn't Postgres, but, at best, something custom with a Postgres interface.

For example, the Citus DBaaS offering may fit the bill, but what could easily replace it? A self-hosted version would fail at being "managed" (though this is a bit academic, since, presumably one would hire for that) and at being on-the-fly-scalable [1]. Is Amazon RDS a drop-in replacement, or does it have compatibility and performance edge case caveats?

[1] I read that sort of "wish" often and have always found it borderline naive. That is, what I think they really mean is that they just don't want to worry about scalability. The dominance of expensive, few-sizes-fits-all cloud infrastructure means that custom-building an over-engineered database server is not a practical option, and for most cases, neither short-notice scalability beyond that nor runaway growth is a realistic problem.


Agreed, minimalism and age/maturity do not correlate in any meaningful way. It's much too circumstantial. In some cases, running a time-tested Java app might be much easier than spinning up a modern JS equivalent. In other cases, maybe the legacy options don't provide the functionality you need.

Not to mention contradicting ideas: "Retire old technology" and "Don’t use new technology the day, or even the year, that it’s initially released"

What is old technology? You're on a UNIX system, right? 0_0


I think the "retire old technology" is more in the context of recognizing when something you are thinking about adding could replace something you already have.


I think the "libraries over frameworks" adage definitely should be extended to SAAS as well. Use the tools you can get away with, but with an eye on the future and without painting yourself in the corner. Sometimes convenience makes us give up too much control and know-how.

Of course the article is about a PAAS company, so the advice ought to be taken with that in mind...


Storing binaries in SQL works great for small scale systems; it makes the backup/migration process simple.
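As a minimal sketch of that pattern (SQLite here for portability; in Postgres the column type would be bytea, and the table name and payload are invented for the example), the binary content round-trips through the same database that holds everything else, so one dump covers all state:

```python
import sqlite3

# Store a binary "file" in the same SQL database as the rest of the data,
# so a single backup of the database captures everything.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE files (name TEXT PRIMARY KEY, body BLOB)")
payload = bytes(range(256))  # stand-in for a real file's contents
conn.execute("INSERT INTO files VALUES (?, ?)", ("logo.png", payload))
(body,) = conn.execute("SELECT body FROM files WHERE name = ?",
                       ("logo.png",)).fetchone()
assert body == payload  # round-trips byte-for-byte
```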


> host everything on dedicated servers as it is often more performant and cheaper.

It depends on what you are serving and at what scale as well as the opportunity costs of managing it yourself. If I have a developer spending some hours maintaining a server each month, it isn’t cheaper at all. You can also have issues of under or over capacity. Heroku, for example saves us a ton of time we could be using for shipping features rather than maintaining infrastructure.

It really depends on the application.


Minimizing what code you own is a naive optimization. When you consider the liabilities that third-party library and service dependencies introduce it becomes quite attractive to build your own minimal systems. I'm not sure when programmers became so averse to writing simple code. But leftpad was just the beginning.


Maybe because so often that 'simple code' is only simple because the author is suffering from a combination of naiveté and Dunning-Kruger. Trivializing problems makes you a danger to your team, your company. Not an asset.

Deep object cloning/inspection, URL parsing/generator. Date arithmetic. Locking. Caching. Testing. Templating. I've had to rip so many of these out for being fundamentally unworkable and often wrongheaded solutions to already solved problems that it's become a cliche in my career.


I'm not trivializing problems. And I agree people writing wholly new solutions to complicated things is a hilarious cliche. I'd never say, "go write your own JSON parser." And like you said, they're already solved problems. That means there are many ways to DIY, including forking, or cloning just the parts you need.


>I'm not sure when programmers became so averse to writing simple code.

I know everyone likes to spend a day writing a date/time library, but when you need to develop a full-stack app, you just don't have time to do everything from scratch.

These are decisions you have to make.


Leftpad isn't complex. It just adds spaces.

Datetime libraries are complex because they need to deal with things like the fact that North Korea switched time zones last year, that Fiji is changing their DST dates this year, and other crap. We get about 3 to 10 releases of the TZ database every year, and you'll probably want the updates in your datetime library as quickly as possible to handle time-zoned data correctly; otherwise your customer from Fiji will find that you aren't turning up for the meeting because you left an hour early, believing the customer wasn't turning up at all. And then you have to deal with dates in the past, where there might be changes to the entire calendar, or different calendars such as the Japanese, Chinese, or Jewish ones.

I don't think it's valid to suggest that leftpad and datetime libraries are even on the same order of magnitude of engineering complexity.
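A small illustration of that gap, assuming Python 3.9+ with the system TZ database available (the zone and dates are just examples):

```python
from datetime import datetime
from zoneinfo import ZoneInfo  # stdlib wrapper around the TZ database

# leftpad is a one-liner...
def leftpad(s, width, ch=" "):
    return s.rjust(width, ch)

# ...while even "what is the UTC offset in New York?" already depends on
# the TZ database, because the answer changes twice a year:
ny = ZoneInfo("America/New_York")
winter = datetime(2021, 1, 15, 12, tzinfo=ny).utcoffset()
summer = datetime(2021, 7, 15, 12, tzinfo=ny).utcoffset()
assert winter != summer  # -05:00 (EST) vs -04:00 (EDT)
```

And that offset table is exactly the data that gets re-released several times a year.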


> date/time library

> leftpad

Yes, there is a difference between these.


There's also a difference between "we have a dependency on this versioned library that we have archived and deployed" and "we have a dependency on this library which we refer to dynamically at run-time".

But the second one gets you closer to the mythical perfection called "serverless" because it uses so many different servers you can't keep track of them all.


Wouldn't advocate for total DIY. I recommend leveraging third-party libs and infra especially during experimentation or to buy increased development speed. Just calling out that those are long-term liabilities, and that when appropriate that tech debt should be paid down by writing code that you can control.


Build vs Buy has no universal answer.

Do you build? Or do you buy into the support level, the features, the bugs, the availability, the security, etc.


My rule of thumb is, write it yourself until it's painful, and spend a day looking for a good library to reduce the pain. If you can't find it after a day, or if you found one but it's taking longer than a day to configure it for your needs, try to write one yourself. This rule has not done me wrong yet. It's how I ended up moving away from Mobx and Redux in favor of RxJS and setState (just submitted a post about how to do this), and why I moved from Vue to React. But it is very subjective and subjectivity gets harder the more people you have on a team working on the same code.


I've actually come to prefer the opposite. I'd argue that the liability associated with third-party deps makes them debt. Debt is great for growth and experimentation. It works in a pinch especially if the experiment fails and you can just toss the dependency. So, today I'd rather buy, then build when I know I want to take on the pattern for better ops and long-term ownership.


This is the top comment, but I'm confused about how it relates to the linked article. The article doesn't really advocate outsourcing your software to third parties, just consolidating the infrastructure that you do own (sometimes in clever ways, like recursing your own PaaS) to make it as simple as possible.


From the article:

> Software that you write is software that you have to maintain. Forever. Don’t succumb to NIH when there’s a well supported public solution that fits just as well (or even almost as well).

While this idea has a lot of merit, it's not as black-and-white. (Most things in the world aren't, too.)


And frankly, the "(or even almost as well)" is just wrong. I can't count how many times I've seen projects fail from trying to shoehorn the problem into some existing solution because the fear of NIH was so strong. Really, if at first glance it looks like a perfect fit, it isn't, but you might get it to work. If right off you can see that it is not quite a fit, don't think you can make it work, because you are most probably wrong. It will either fail outright or end up being way more work than if you had just solved it with custom code.


Looking at the SR-71, despite all its engineering "flaws" (documented excellently in a sibling comment: https://news.ycombinator.com/item?id=17675996), I wonder...

Have we really progressed technologically, as in, doing the impossible, since the late 1960s?

The Blackbird is still the fastest jet in the world more than fifty years later.

I'm not even sure we could return to the Moon successfully on the first try if we tried.


Making a plane faster is only one dimension for optimization and it's not actually a goal in itself. It sacrifices other goals. Nobody has done better because they are optimizing for something else.

The purpose of the SR-71 was reconnaissance. Do you really think the U.S. military hasn't gotten better at that, using drones and/or satellites?


The SR-71 was meant to fly over the USSR and China. Few existing USAF drones would have survived 20 minutes there, even in the 1960s.

But satellite reconnaissance has certainly gotten a lot better, to the point of rendering the bird obsolete.


The disparity between progress in the 'world of bits' and progress in the 'world of atoms' really has become quite stark. I would agree with you that there even seems to have been a regression in some domains.

We've lost an engineering culture that connects to the physical world and the real materials we deal with every day. Another good example is probably the kitchen. A mid-20th-century kitchen looked very different from a kitchen at the beginning of the century, but kitchens nowadays really haven't changed much in a very long time.

Dan Wang has written a very good (albeit lengthy) essay on the topic: https://danwang.co/how-technology-grows/


It's interesting how often good-sounding advice is contradictory.

The article advocates standardization. Using fewer programming languages means fewer toolchains to maintain and less fragmentation of engineering knowledge.

On the other hand, the frequent advice that we should "use the correct tool for the job" suggests being familiar with a lot of different, specialist tools. More programming languages?

I guess it all depends. Knowing when to make a move versus when to do the opposite comes down to taste and experience.


I really like managed app hosting like AppEngine and Heroku. I have spent decades doing what we now call devops, or ‘you build it then you own it.’

Now I like to save my time for other things. AppEngine wins on cost while Heroku wins in that if you use standard things like Postgres, Kafka, etc. it is not too bad moving to your own dedicated services.

Edit: I also like the article’s advice on minimizing the number of components in systems. Using Postgres as a ‘Swiss Army Knife’ for information storage is a good start.
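The 'Swiss Army Knife' idea is easy to sketch: one relational store doing double duty as a KV cache and a FIFO job queue. SQLite stands in for Postgres here so the example runs anywhere (in Postgres, concurrent queue workers would add SELECT ... FOR UPDATE SKIP LOCKED); the table names and jobs are invented:

```python
import json
import sqlite3

# One database, several jobs: a KV cache table and a job queue table
# living in the same store, instead of a separate Redis and RabbitMQ.
db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE cache (k TEXT PRIMARY KEY, v TEXT)")
db.execute("CREATE TABLE queue (id INTEGER PRIMARY KEY AUTOINCREMENT, job TEXT)")

# key-value usage
db.execute("INSERT OR REPLACE INTO cache VALUES (?, ?)",
           ("user:42", json.dumps({"name": "alice"})))

# queue usage: push two jobs, then pop the oldest
db.execute("INSERT INTO queue (job) VALUES (?)", ("send_email",))
db.execute("INSERT INTO queue (job) VALUES (?)", ("resize_image",))
oldest = db.execute("SELECT id, job FROM queue ORDER BY id LIMIT 1").fetchone()
db.execute("DELETE FROM queue WHERE id = ?", (oldest[0],))
```

Fewer moving parts to operate, at the cost of a store that's merely adequate (rather than purpose-built) for the cache and queue roles.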


> Use services. Software that you install is software that you have to operate.

On the other hand, software that you use as a service is software you do not control. Therefore, whether to use services depends on how mission-critical the software/functionality is to the whole system.


If you remove everything whose absence doesn't break the widget, then everything left is functional. Pursuing minimalism equals pursuing function. Therefore engineering equals applying minimalism to technology.


That "doesn't break" needs an "under which conditions" clause. A lot of complexity arises from having to handle the imperfections of the environment.

But I agree with the general idea. Good design usually arises after a worse, messy and more complex early design is produced, analyzed and simplified away, often radically. Coming to simplicity by a shorter route is very rare.


Yes. "Perfection is achieved, not when there is nothing left to add, but when there is nothing left to take away." - St Aubrey(?)


Antoine de St Exupery; quoted in the article :)


Fantastic suggestions.

Not only does having fewer technologies in your stack mean there're fewer to maintain, it also means they're used more so your devs are more expert at them.


Some good architecture related quotes here. Added to http://github.com/globalcitizen/taoup


"It seems that perfection is attained not when there is nothing more to add, but when there is nothing more to remove." - Antoine de Saint-Exupéry


Though less literally accurate, I prefer the translation:

> Perfection is achieved, not when there is nothing more to add, but when there is nothing left to take away.


Great article. I always tell my clients: software projects are cheap; what's expensive is the never-ending maintenance.



