The SR-71 is an example of how the sheer will to get something done can indeed accomplish great feats, if at great cost and complexity. It's also an example of how innovation is no guarantee of success; since the Blackbird was retired, we still have no airplane as fast as one built in the 1960s.
A great plane is just what it needs to be. It may not be the most minimal it could be, but it will be the most effective. Another example, the A-10 Warthog, is not "minimal", but it is an amazing plane.
I think to get the best result based on the desired outcome, you have to pursue craftsmanship and excellence through continuous education, practice, and improvement, and not worry how it looks so much as how well it works. Complexity is a sin, except when it is necessary.
You can choose to call it "slapped together" and brush over the sheer ingenuity of the technologies that made the SR-71 successful, or you could try to read up on the insane amount of __engineering excellence__ it took to make it achieve its mission as a recon aircraft. Its mission wasn't to be a commercial aircraft; its mission was reconnaissance: to go as high, as far, and as fast as possible, and to return safely by literally outrunning missiles. Given that, refueling and leaking fuel are appropriate trade-offs.
Skunkworks engineers were perfectly aware of the trade-offs you mention; that is integral to __any__ engineering discipline, from semiconductors to aviation. That __does not__ imply a lack of engineering excellence.
Leaking fuel is __the__ most popular fact about the SR-71. But you omit, or are unaware of, the ingenuity that went into the turbo-ramjet J58 engines, the materials science, and a sustained Mach 3+ flight regime that was completely new; nothing had flown at those speeds before. There is just so much to say that I am having a hard time replying coherently :-) I apologize if my response was a bit harsh; I am just passionate about this topic!
What I meant by "not engineering excellence" was in relation to the original post, comparing how to manage production servers with the SR-71. Nobody should regularly operate their servers like an SR-71 or top fuel dragster. It's great to fly really really fast, but most people should probably not be doing that, because it requires making exceptions like leaking fuel. You actually want to run your server like a big slow complicated comfortable redundant jumbo jet on autopilot.
Hopefully you get my meaning. I apologize if I made their accomplishments seem minor. I meant no insult against the Skunkworks engineers or the SR-71, and your response was not harsh at all.
A good plane to look at for production minimalism is the Boeing 747.
A huge aircraft even today, it was designed as a commercial product to be produced in quantity and used intensively over a long life. About 1500 were built and many are still flying after 40 years. It's reliable, maintainable, safe, and profitable.
It's not exotic. There are some new things in it, but not that many. It's mostly upgrades to technologies known at the time. Better flight controls, better navigation, better autopilot, better hydraulics, better tires, better engines, etc. But little that hadn't flown before at smaller scale. The emphasis was on eliminating the weak points during design. The B-747 has four hydraulic systems, for redundancy.
It's worthwhile for people in software to read the pilot's manual for a large commercial jetliner. They're not that complicated. If operating your system is more complicated than a jetliner, you're probably doing something wrong.
The SR-71 solved a different problem — people at the top of the chain of command lacked information, and the potential impact was devastating. Delivering information at any cost was critically important, and that operational cost and the risks associated with operations were high!
Engineer here. It appears you've mixed up some pretty basic but unrelated concepts. Engineering projects are based on use-case scenarios and design requirements. You've presented your personal and very specific use cases and design requirements and somehow assumed that they were shared by another project, or even that they made sense there. If your main requirement is performance and all other requirements are minor in comparison, then the optimal design won't be a workhorse but a racehorse.
"Engineering excellence" might well describe a product that performs its function reliably, with a minimum of intervention or fuss, over a long period of time. When something "just works" like this, it's an achievement.
"Engineering excellence" might also describe the "sheer will", combined with expertise and carefully chosen trade-offs, that makes something as unlikely and amazing as the SR-71 possible at all. The parent seemed to exclude it from excellence, perhaps on the grounds that it doesn't meet the criteria in the previous paragraph. But it seems to me they're talking about the project with respect for what it accomplished; if so, this is essentially a semantic point about whether every engineering effort worthy of respect fits in the box of "engineering excellence", rather than a question of whether the SR-71 was amazing.
If the requirements for something are highly specialised, you're only going to make a small number of them, and you have a hard deadline to do it by, then you shouldn't be wasting time on mass-production considerations.
That's a trade-off, and I think they dealt with it correctly.
The AK-47 is built to be mass producible and reliable in the field. It's a great success in those terms.
The U-2 and SR-71 are built to different criteria but are also great successes.
Either that, or we got very good at keeping secrets or getting stealthy, or the need for this “form factor” was deprecated by the advent of space-based tech.
Heck there’s probably a Robocop-style human brain in that CIA space plane.
For whole industries, this is the definition of engineering. Anything that is done but is not needed is frivolous extravagance.
> Avoid custom technology. Software that you write is software that you have to maintain. Forever. Don’t succumb to NIH when there’s a well supported public solution that fits just as well (or even almost as well).
Software you run is software you have to maintain; until you can burn it in the fires of Mordor. If there's a public solution that is a good fit for your problem, and was built along the same lines you would build it, then that's great; but if it's a great fit for the problem and also pulls in 70 dependencies and updates three times a week, that's probably not worth the integration mess.
> Use services. Software that you install is software that you have to operate. From the moment it’s activated, someone will be taking regular time out of their schedule to perform maintenance, troubleshoot problems, and install upgrades. Don’t succumb to NHH (not hosted here) when there’s a public service available that will do the job better.
You don't need to host everything, but you'd better have a backup plan for when the service you depend on decides it's no longer going to provide the service. Chances are, you're going to need to debug the service too, sooner or later, and it's harder to debug a black box. That said, it's great when you can lean on a rock-solid service to do things that are important but not a "core competency".
It's far better if you have the time and resources to develop your own software from as low a level as possible. Each layer of abstraction that you can shed is an opportunity to tailor your solution more closely to your problem and to have expertise in-house. Big tech companies know this and it's why they do a lot of stuff in-house. The key is in knowing what to develop in-house and what to punt on, and when.
Starting out from scratch without exactly knowing what's needed seems extremely slow, inflexible and prone to wrong decisions.
I don't think there's a way around it. Professionals don't shy away from taking responsibility for all the code that goes into their product.
It might even be bad for the project and/or the client, but let's not forget: these TV-dinner people can't even cook!
I find that most services get me 80% of the way there, 2x faster than a custom solution. But the final 20% is an unknown abyss that will, more often than not, have insurmountable limitations. And so, if the software is in my wheelhouse, I'm almost always better off building it custom.
I’ve gotten flak for rolling my own library. Basically, I was trying to get the base clock rate of a server for benchmarking purposes. Instead of using an external library, I simply wrote some inline assembly to read from the timestamp counter.
I was told this would create a maintenance headache, but zero fucks given on my end. I got to write some inline assembly today. :)
That's such a weird attitude I see often. If you had put your stuff on Github under a different name and then downloaded it everybody would have been fine with it.
People say things like this as if it's possible to ever get away with not maintaining software. Everything you do will create a headache, you only get to decide the exact location. (And I'm with you on reducing external dependencies where the cost to do so isn't prohibitive.)
* How many people in your team/organisation can read and write assembly?
* How difficult is it to hire someone with assembly knowledge where you are?
(not having a go at your solution specifically - not enough project managers think about maintaining solutions after people have left)
Heyyyy, what's wrong with that? I am currently building a dashboard analytics program, a Windows service with a fully fledged built-in web server, in Pascal!
However, the cynic in me will make a point that original advice is spot-on for startups, as most of them these days care not about their product, but are just a gamble on getting acquihired. Their expected lifetime is around the same as the services they depend on (likely less), so they can get away with it.
A less cynical point can be made that slapping your prototype together pretty much entirely from third-party services and libraries can be a good way to get to market fast, at which point you can secure funding that'll allow you to slowly replace critical parts with your own code, to ensure the survival of the product.
The cost of introducing new technology (in terms of training, tooling and now-you-have-two) is so high that it's just not worth it if it will only incrementally improve how things work at the moment.
His second-in-command at Lockheed, a man named Ben Rich, also wrote a very good book: https://www.amazon.com/Skunk-Works-Personal-Memoir-Lockheed/...
One unfortunate trait of the original Lockheed Skunkworks culture was to make Kelly Johnson (brilliant as he was) the only face of the organization. As just one such example, many know of the clever inlet system on the A-12/SR-71, but it, like many things Skunkworks in the pre-Rich era, is incorrectly ascribed to Johnson. The real person who deserves the credit as the chief for that system is David H. Campbell, but how many people know his name?
Half the people are arguing about the plane analogy.
The other half are arguing about jQuery.
Introducing new technologies to replace old ones, preferring services to local hosting - I just do not agree!
I use the oldest technology I can get away with, and host everything on dedicated servers as it is often more performant and cheaper.
Simplicity needs bounds. The suggestion of reuse needs them too: if you end up storing binary content in SQL to avoid rolling your own file hosting, you are taking the simplicity mantra too far.
I took that to mean if you must start using new solutions, make sure you remove some similar tech from your stack for every new solution you start using (e.g. replace RabbitMQ with Kafka) so that you don't end up with 50 ways to store data say. That was certainly the context in the article.
> preferring services to local hosting
There is an argument for using hosted services too though - it reduces the ops workload - the correct solution probably depends on scale, staff etc. I agree this isn't one size fits all.
> If you end up storing binary content on SQL to avoid having to roll a file hosting, you are taking the simplicity mantra too far.
I don't think the article mentions storing binary content in PostgreSQL, does it? They're just talking about all non-ephemeral data. I imagine at one point they started using data stores better suited as caches or message queues for permanent storage, and discovering lost data was the impetus for simplifying their storage. I like that they've moved to their own hosted PostgreSQL solution for customer data: it simplifies their infrastructure and makes sure they feel any pain their customers feel. A good kind of simplicity to strive for.
I also really like hosted cloud services, but they need to be replaceable with little to no effort. For instance, I don't want a new NoSQL platform, I want high-availability, managed, on-the-fly-scalable Postgres.
We chose Java maybe 2 decades ago when it was newly minted.
Some decent swathes of code written back then are still running in production.
It's enormously satisfying looking at anything in your stack that has really stood the test of time.
I harbour similar feelings about the Oracle DB but I'm a little more conflicted there.
I don't think the latter is easily replaceable either, though, since what you've described isn't Postgres but, at best, something custom with a Postgres interface.
For example, the Citus DBaaS offering may fit the bill, but what could easily replace it? A self-hosted version would fail at being "managed" (though this is a bit academic, since presumably one would hire for that) and at being on-the-fly scalable. Is Amazon RDS a drop-in replacement, or does it have compatibility and performance edge-case caveats?
I read that sort of "wish" often and have always found it borderline naive. That is, what I think they really mean is that they just don't want to worry about scalability. The dominance of expensive, few-sizes-fit-all cloud infrastructure means that custom-building an over-engineered database server is not a practical option, and for most cases, neither short-notice scalability beyond that nor runaway growth is a realistic problem.
Not to mention contradicting ideas:
"Retire old technology" and "Don’t use new technology the day, or even the year, that it’s initially released"
What is old technology? You're on a UNIX system, right? 0_0
Of course, the article is about a PaaS company, so the advice ought to be taken with that in mind...
It depends on what you are serving and at what scale as well as the opportunity costs of managing it yourself. If I have a developer spending some hours maintaining a server each month, it isn’t cheaper at all. You can also have issues of under or over capacity. Heroku, for example saves us a ton of time we could be using for shipping features rather than maintaining infrastructure.
It really depends on the application.
Deep object cloning/inspection, URL parsing/generation, date arithmetic, locking, caching, testing, templating. I've had to rip so many of these out for being fundamentally unworkable, and often wrongheaded, solutions to already-solved problems that it's become a cliché in my career.
I know everyone likes to spend a day writing a date/time library, but when you need to develop a full-stack app, you just don't have time to do everything from scratch.
These are decisions you have to make.
Datetime libraries are complex because they need to deal with things like North Korea switching time zones last year, or Fiji changing its DST dates this year. We get about 3 to 10 releases of the tz database every year, and you'll probably want those updates in your datetime library as quickly as possible to handle zoned data correctly; otherwise your customer from Fiji will find that you aren't turning up for the meeting, because you left an hour earlier believing the customer wasn't turning up at all. And then you have to deal with dates in the past, where there might be changes to the entire calendar, or with different calendars altogether, such as the Japanese, Chinese, or Hebrew ones.
I don't think it's valid to suggest that leftpad and datetime libraries are even on the same order of magnitude of engineering complexity.
Yes, there is a difference between these.
But the second one gets you closer to the mythical perfection called "serverless" because it uses so many different servers you can't keep track of them all.
Do you build? Or do you buy into the support level, the features, the bugs, the availability, the security, etc.
> Software that you write is software that you have to maintain. Forever. Don’t succumb to NIH when there’s a well supported public solution that fits just as well (or even almost as well).
While this idea has a lot of merit, it's not as black-and-white. (Most things in the world aren't, too.)
Have we really progressed technologically, as in, doing the impossible, since the late 1960s?
The Blackbird is still the fastest jet in the world more than fifty years later.
I'm not even sure we could return to the Moon successfully on the first try if we tried.
The purpose of the SR-71 was reconnaissance. Do you really think the U.S. military hasn't gotten better at that, using drones and/or satellites?
But satellite reconnaissance has certainly gotten a lot better, to the point of rendering the Blackbird obsolete.
We've lost an engineering culture that connects to the physical world and the real materials we deal with every day. Another good example is probably the kitchen: a kitchen in the mid-20th century looked very different from a kitchen at the beginning of the century, but kitchens nowadays really haven't changed much in a very long time.
Dan Wang has written a very good (albeit lengthy) essay on the topic: https://danwang.co/how-technology-grows/
The article advocates standardization. Using fewer programming languages means fewer toolchains to maintain and less fragmentation of engineering knowledge.
On the other hand, the frequent advice that we should "use the correct tool for the job" suggests being familiar with a lot of different, specialist tools. More programming languages?
I guess it all depends. Knowing when to make a move versus when to do the opposite comes down to taste and experience.
Now I like to save my time for other things. App Engine wins on cost, while Heroku wins in that, if you use standard things like Postgres, Kafka, etc., it is not too bad to move to your own dedicated services.
Edit: I also like the article’s advice on minimizing the number of components in systems. Using Postgres as a ‘Swiss Army Knife’ for information storage is a good start.
On the other hand, software that you use as a service is software you do not control. So whether to use a service depends on how mission-critical that software or functionality is to the whole system.
But I agree with the general idea. Good design usually arises after a worse, messy and more complex early design is produced, analyzed and simplified away, often radically. Coming to simplicity by a shorter route is very rare.
Not only does having fewer technologies in your stack mean there are fewer to maintain, it also means each one gets used more, so your devs become more expert at them.
> Perfection is achieved, not when there is nothing more to add, but when there is nothing left to take away.