Software should be designed to last (adlrocha.substack.com)
159 points by kiyanwang 8 days ago | 134 comments





I don't think anyone who is serious about software craftsmanship would dispute the basic idea here ("you should be tasteful and minimalist when choosing dependencies").

The problem is that the economics of software development don't really ever require craftsmanship. There are two basic modes of development. 1 - You are a startup, struggling to survive; you have to ship features as fast as possible, with many fewer engineers than you really need to do things right. 2 - You are a huge billion dollar corporation, probably a monopoly, with an enormous moat protecting you from serious competition.

Neither situation prizes craftsmanship. The startup just needs to glue together a hack to raise the next funding round or solve the immediate user problem. The gigacorp has more money than it knows what to do with, so they "solve" every software problem by hiring more engineers. There actually aren't many companies in the middle ground, where it might be important to produce high quality software. I believe this is also the reason we don't see more uptake of high concept languages like Haskell and Lisp.


The vast, vast majority of software written is done at neither a startup struggling for survival nor a tech giant monopoly.

I agree. I guess most of us, me included, do client/project-based work in some form. But the problem is still there, just in a different way: it is hard to convince people to pay extra for invisible results. Robustness, security, and maintenance are often only visible some time after the initial proposal. So it becomes an issue of trust and communication rather than engineering.

The vast majority of software is written at non-software companies. Software there is a cost centre and not a core competency, so craftsmanship is not valued there either.

Imagine if buildings were bought by people who'd only ever see them from the outside.

No inspection. They don't even see the plans, just an exterior render before they greenlight. Imagine how dysfunctional architecture and construction would be in that world.

Software is uniquely intangible. Collectively we spend billions developing software every year, but often the decision maker never sees the codebase, only the UI.

And you're right, the problem is multiplied at companies that consider software a cost.

This is why projects like Jepsen are so powerful. Or Acid3 back in the day. They let a non-technical audience see the underlying implementation quality of different software efforts.

We need a lot more where those two came from. Basic example. Why can't I go to any app in the app store and see the average hours between crashes?

Software craftsmanship will suck until we make it visible.


Welcome to my life. It pays well, but the rewards aren't in the work of writing software; I like mentoring though, so it's not all negative.

If it moves fast and breaks things, get rid of it asap.

(like a 10-year-old kid)

...encourage it to "grow up" and mature.


I think you could argue that there is a third basic mode of development - free and open source software. I'm not saying all FOSS software is artisanal or cared for with excellent craftsmanship, but when I think of software that lasts, FOSS is what comes to mind: Emacs, PostgreSQL, Linux.

I really like the Postgres example because it was initially just a bunch of academics who refused to implement new SQL features until they could do it The Right Way. See how long it took them to get upsert.

I really hope that GNU Guix takes off in this way - just gradually doing things The Right Way until the product is just the default institution.


Some of the FOSS greats really seem to be built to last. I totally expect to still be using Linux, Firefox, Postgres, VLC, and heaps more decades from now. So why are there so many day-to-day tasks which are still just painful using the most popular tools available? IDEs, email clients, calendars, WYSIWYG editors and spreadsheets are still decades behind their closed source counterparts.

> email clients, calendars, WYSIWYG editors and spreadsheets

The web killed serious desktop development of these outside of Microsoft. Google's services are good enough for everyone else, and run pretty much everywhere that has a reasonably modern web browser. As a desktop Linux user for the past ~2 decades, I'm mostly happy about this.


The argument has little to do with software, in actual fact. What it boils down to is long-term strategy and investment.

Building things well so they may serve you for years and years is a form of investment in the long term.

There is a business philosophy, credited to Japanese business but evident elsewhere too, that thinking, planning and strategy should be as long-term as possible, almost to a ridiculous extent.

I read the biography of Akio Morita, the founder of Sony, and while I'm sure it was somewhat biased to laud his achievements, even in the early post-war era, manufacturing recording equipment for high-end use in what could be called "start-up mode", the emphasis was on building things of very high quality that would last.

Truly this long-term planning mindset is a manifestation of DRY, don't repeat yourself, and is crucial to the survival of a nascent business.

The whole disposable-software, "we'll revisit once we ship" attitude sinks you before you ever get off the ground.


Yet Nintendo had resisted a significant online component in their games and platforms for two generations of consoles.

That's actually part of it, my dude. Nintendo is a company more than a century old; they move using old, boring technologies. They only go in when they know they can go in all the way. You know what I'm saying? In their home demographic, people were not ready for online gaming until recent years.

With long-term investment comes a certain kind of risk appetite. It's the same reason banks haven't jumped into crypto, or why so many enterprises continue using Java.

But I'm glad you used Nintendo as an example, it being a software company. How about SNK or Capcom or Konami? Do you know what it's like getting blacklisted for jumping ship from one of these companies?


Edit! With reference to the above comment: I don't mean that these companies themselves would blacklist anyone, but rather that their competitors would not hire anyone who worked there, on principle, to avoid that sort of association.

Like in North America you could work for Microsoft and then Google but that sort of thing doesn't always fly.


Can you go into a bit more detail about the blacklisting? I’ve never heard anything like that, but it fits with something I read recently: the team that built Zelda: Breath of the Wild had also been involved in making Majora's Mask.

It's one of those things which are just understood, but you can't very well rely on having worked for a competitor to land a job at another company.

Here is one article: https://arstechnica.com/information-technology/2017/06/konam... but I have heard of similar stories across industries


I'm writing something designed to last, and here is how I am doing it:

1) I am doing it by myself, without any corporate sponsors or financial pay.

2) I am writing it in such a way that it could have already been working for 25 years.


There is code in Microsoft Outlook that, although updated over the years, handles essentially the same file format as it did in 1994 - and the code is unlikely to have been entirely rewritten in that time.

In 2010, Microsoft published the PST file format, and the latest documentation was updated this year

That code was started in 1992, and first shipped in 1995 - 25 years in the wild!

https://docs.microsoft.com/en-us/openspecs/office_file_forma...


I'm trying something similar since I believe in having a stable core, which is why I'm using Java: I predict the JVM will last a while.

I understand the logic behind the first point, but not the second - could you expand? Does it mean you aren't utilising anything that doesn't already have a track record of sticking around (choose boring technology + tech that has been around for a while?), or writing it in such a way that it is backwards compatible with the last 25 years worth of OS/CPU Architectures?

I like that 2nd criterion. Seems like a smart way to avoid components that are much more likely to suffer software decay.

Have you been programming for 25 years?


> Have you been programming for 25 years?

I have, longer than that, and it is really scary to see that software I wrote in the early 90s is still used. I thought, at the time, it would be rewritten in something cooler every few years. But nope. It does make me pick tech that can or will last; not many dependencies, open source, JVM, .NET Core or C/C++, so it is possible to revisit it in 20 years and not be completely lost (like I imagine it would be with JS frameworks/libs/deps at the moment).


Yes, at least that long, and about 20 of it for the Web.

What language do you choose? Possibly C or Java?

I chose:

Perl as the primary language

HTML/CSS for front-end

JavaScript for optional front-end enhancements

txt as the base data format

SQLite for cache and indexing

PHP for optional server connectors

PGP for the identity system


FORTRAN

Real programmers use Butterflies

https://xkcd.com/378/


Craftsmanship is most visible in mature open source projects that have reached a certain stability when it comes to features. The role of developers on those projects is more akin to that of stewards, watching over it for the current and next generation, rather than just shipping things out the door asap to meet the next milestone.

SQLite is a great example that was recently discussed on HN.


This argument can be made in a lot of industries. Farming, furniture making, automotive, consumer electronics, appliances...

There are things we can do (and I'd argue we should) systemically to incentivise building for the long term but I think the cultural mindset needs to change first. I hesitate to lay all the blame on consumerism, but it certainly seems like a big part of the problem. I hope that projects like The Long Now and Artemis signal a shift toward thinking more about future generations.


Furniture-making actually seems like something you can draw a lot of intuitive analogies from.

You need to make a chair so your client can sit down and work at their desk. While waiting on this chair, your client needs to stand awkwardly at their desk. So they care about how long it takes for you to make this chair, as well as how much it costs and all the normal stuff.

You can build a very fancy premium chair custom fit to your client's body shape, but it's extremely expensive, wouldn't work as well if the client ever gained or lost weight, and takes forever to build, with your unhappy client standing around the whole time yelling at you to make it faster.

Maybe you can build a super cheap chair real fast, but then this chair falls apart all the time and you need to keep on fixing it. Client got to sit down fast, which is great, but they keep on needing to get up so you can glue on more balsa wood to try to hold it together.

You could get a stack of crates for them to sit on. Not really a chair, but it works, and you could then work on a nicer chair that won't fall apart instantly in the meantime. Or maybe they're happy enough with the stack of crates and say don't bother with the chair, I don't want to pay for it.

And then maybe your client moves cross-country every year, and so will just toss the chair when they move, and find someone else to build them a new chair when they get to their new apartment.


The part missing from that analogy is the iterative process. Your client may have asked you to build a chair, but until they sit down on what you've built they don't realize that what they really wanted was a couch.

Furniture rentals (aka FaaS) sounds like it'd suit this client more. :)

The market system is an epic hack; it does fantastic stuff, but it is merely a hack. We won't be able to do everything inside its scope. It is a really large scope, but not large enough.

I think large companies definitely have a lot to gain by building good software designed to last, and the best companies do place value in doing things right (I worked in a couple of monopolistic giants). Especially infrastructure has to be designed to last since they take a long time to build in the first place. If you value craftsmanship, infrastructure teams in large companies are probably a good place to be.

I would argue that the region in a company's lifespan where craftsmanship matters is really between Series B and C. When you're trying to go from about 1M to 10M in revenue, and your engineering team is about 15 to 25 people, you have some interesting dynamics in play:

1. Everybody is swamped. Sales is now scaling up, which means new customers, new demands, and new fires every day.

2. The product's complexity has now grown tremendously. No single person, not even founding engineers, can fit the entire system in their head now. Very few people are equipped to even dig in to the more esoteric pieces and understand how and why they work, because enough time has passed that any context that was undocumented is either forgotten or hidden deep in a random person's head, and very little was documented (not necessarily the wrong choice - the time saved on not documenting things likely contributed to the company moving quickly enough and surviving to this stage).

3. Hiring has been approximately solved, and occurs on an approximately predictable cadence. Your engineering team has a small but predictable stream of newcomers. Every year or so, the team doubles.

In this region, you have some interesting constraints to deal with:

1. Building new features is necessary to increase revenue, but each new feature has rapidly increasing marginal cost (due to overall system complexity). Therefore, we need more throughput.

2. Adding new engineers is necessary to increase throughput, but each additional engineer now provides rapidly diminishing marginal throughput.

There are many tactics for succeeding in this region. The general focus here is "increase marginal throughput per engineer". Some not-writing-code tactics include investing in solid onboarding, developing effective documentation at the system level, and narrowing focus on product initiatives (becoming more deliberate with "we will try experiments A, B, and C in this order" as opposed to "we will throw the kitchen sink at the problem and see what sticks").

From a "writing code" perspective, I think this is where craftsmanship really shines. Constructing abstractions that dramatically increase the productivity of each marginal engineer provides an enormous pay-off in this region. Of course, the correct engineering abstractions must also be coupled with the correct engineering team structure. The effects of Conway's Law in this region are felt very, very strongly.

Unfortunately, I think it is rather unlikely for someone to just be able to drop in to a company in this region and begin working on this kind of neat problem. I think the most likely ways to get to work on this are:

- Be there from early on (arguably, first 5 or 10 engineers). Having domain experience is extraordinarily helpful in understanding which systems will be force multipliers.

- Be very, very experienced. I can see a role for a senior staff engineer to be hired in at this point to help build these force multipliers. I think this person would need to have previous experience in companies of this size to correctly value domain experience and judge the right pieces to build.

- Join in this region, and work on the team of one of the two people above.


> Unfortunately, I think it is rather unlikely for someone to just be able to drop in to a company in this region and begin working on this kind of neat problem.

This is basically what I specialize in, albeit mostly accidentally. It's actually not as hard as you'd think, because almost every company that gets to 2.5M ARR has already gone through at least one and sometimes two complete rewrites that have solved some but not all of their issues. So at this point they already understand that their marginal cost of adding new features is increasing, they know what their problems are, and they understand the value of software architecture.

If you want to do this though then you can't really just apply for a job at the company, the technical co-founder has to kind of invite you to come in to work on this stuff.


The departments of gigacorps that do software engineering well can execute circles around the others. A corporation's internal economy doesn't always or even usually care about that, but sometimes it does.


What this article really boils down to is the usual coder arrogance: Thinking that most problems are inherently simple[1], thinking they are a better coder than average, and thinking that the answer is always to do more yourself, so that more of the codebase will benefit from your superior skills.

The truth is, there are no easy problems in programming. There are always a thousand corner cases and unexpected complexities. And all those buggy, unmaintained libraries you find in your language's package repository are still probably better than anything you're going to roll yourself. Look into the source code of such libraries and alongside the hacks and outdated code, you'll find hundreds of subtle problems avoided, because the author spent a lot of time and effort on the problem you've spent ten minutes thinking about.

The answer to quality problems in software engineering is not fewer libraries, it's more and better libraries. In the same way that the high-quality civil engineering the author admires is not achieved by having one person do everything themselves, but by the successful collaboration of a large number of skilled and specialised subcontractors.

Right now, we're still in the infancy of our industry. This, combined with the continued evolution of hardware capabilities, has meant that software engineering has never had the long-term, stable base on which to build repeatable, reliable routes to success. That's fine. If we're in the same position in a couple of centuries, it's a problem, but right now we should be experimenting and failing.

Even today, the situation is not as bleak as is often portrayed in articles like this. There are a huge number of robust and high-quality libraries and frameworks providing extremely complex capabilities in an easily reusable way. If a programmer from the mid-nineties were to be transported to the present, I suspect they'd be amazed by what off-the-shelf libraries enable even a single programmer to do. The undeniable quality problems that do affect much modern software often have more to do with the hugely increased scope of ambition in what we expect software to do for us.

[1] Outside a handful, like encryption, that are acknowledged as hard for the purpose of being the exception that proves the rule.


Arrogant programmer here.

Simple != easy. It's much more difficult and more time-consuming to identify the simple solution than to implement the easy solution.

I think that a diverse selection of libraries and tools exploring new problem spaces is good - but once the problem space is understood, we should replace them with something closer to the fundamental truth of the problem.


> The answer to quality problems in software engineering is not fewer libraries, it's more and better libraries. In the same way that the high-quality civil engineering the author admires is not achieved by having one person do everything themselves, but by the successful collaboration of a large number of skilled and specialised subcontractors.

I think one of the fundamental problems with our industry is that there is no funding mechanism for those subcontractors. The "base" that we build on is either unpaid volunteers, or tailored to a specific company's needs and goodwill.

If free software was not the norm, I think we'd see a lot of investment and enterprise around producing high quality libraries.


I recently wrote about this problem here [1]. I think "free" software should still be the norm, but companies who use it should have to pay into a fund that distributes the money to where it can be used most productively to make new and more useful libraries. Otherwise there's no sustainable way for most people to contribute to open-source and be paid for it.

[1]: https://blog.andrepopovitch.com/complement-collution-paradox...


And wouldn't those libraries suffer from piracy, as other comments pointed out? Even developers are willing to "not pay" if a library costs a fortune, with "a fortune" being any point above $50.

> The truth is, there are no easy problems in programming.

I've come to think of my work as figuring out ways to think about problems so they become easy.

Not quite the same thing, I know.


I would like to answer the question "Why is software so frequently disappointing and flawed?"

As a developer, there is nothing I would like better than to turn out the highest quality software, as long as I could be compensated for it.

Does anyone really think that the millions of lines of code that went into Windows is worth the piddling $100+ the consumer pays for it?

Let's start with a basic premise: quality is worth money. I'm sure we agree that a Hyundai (or whatever passes for a cheap car in your neighbourhood) costs less than a Porsche or BMW because the Porsche is better designed and better built. Better design means more experienced and brilliant engineers, more talented designers; better built means more skilled and dedicated assembly-line workers. All these people demand more money. Hence the Porsche company commands a higher price for their Carreras, 911s, whatever.

The same should be true of software. If a company invests great care in designing a better operating system, or a better word processor, that never crashes and always has helpful help and meaningful error messages, how much do you think that would be worth? I'll give you a hint: the military do in fact get top-quality software for their jets and rockets. They get software that almost never fails, and does exactly what it is designed to do.

Do you know how much this software costs? $50 per line of code.

Translating into everyday terms, a bullet-proof operating system would cost you, at a rough guess, $5,000 per copy.

Now I have no doubt some people would be happy to pay $5,000 for a stable OS. However, there are many people who couldn't afford this amount.

So what would happen? In any other field (automobiles, stereos, TVs, restaurant meals, housing) people who can't afford quality just put up with less and shut up.

But in the software field... well, they just make a copy of someone else's software, and enjoy the full benefit of top-of-the-line quality, without paying for it. I wager even you couldn't resist obtaining a $5,000 OS for free.

How long do you think a software company would last if their product cost millions to make, and they only sold a few copies at $5,000? Why they would go broke, of course.

This is the crux of the software dilemma: except in a few specialized cases (commercial or embedded software), the maximum price for software is the monetary equivalent of the nuisance value of duplicating it.

In consumer software, this is in the range of $19-$29.

The digital world turns the economics of quality upside-down: in traditional models where quality is supported by price, the market pays the price if it wants the quality.

In the digital model, a perfect copy of the merchandise costs virtually nothing, and undercuts the legitimate market, putting a cap on the maximum that can be charged for a product.

There is a built-in limit to how much time, effort and expense a company can invest into a mass-produced product. This cap is equivalent to the "nuisance value" defined above. It is not reasonable for the consumer to expect warranties and liabilities that go way beyond what the manufacturer receives from sales of the product.

The music and movie industry are wrestling with the consequences of easy digital duplication. They have taken a different route to protecting their intellectual property.

I challenge anyone to come up with a business model where the software developer that invests great expense in building a quality product, can obtain full compensation from the market segment that values his quality.

Whose fault is it anyway? Simple: it's the consumer who copies and pirates software that forces the price down and therefore the quality to remain low. Any analysis that does not take this into account is simplistic.

It is naïve to think that most developers are not struggling to make ends meet and stay in business (Microsoft notwithstanding).


> Whose fault is it anyway? Simple: it's the consumer who copies and pirates software that forces the price down and therefore the quality to remain low.

Software piracy, by the consumer, at scale, is historically negligible. It's even more negligible now.

How can one pirate software like netflix or google docs, where there's a centrally hosted server that runs the actual software?

Apple's App Store is full of software that is incredibly difficult to pirate. How does a consumer duplicate a copy of a paid iPhone app? Yet, the cost of an iPhone app isn't significantly more.

I don't think consumers pirating software has as significant an impact on the price of software as more normal market forces. If I want to manufacture a paperclip and sell it, I can't charge a thousand dollars, even though the factory is pretty expensive to build, because others are able to charge less and profit.

If I want to make an iphone app to share short videos, I can't charge $1000, even if it's perfect code, because my competition can do the same thing for free. And if I'm sending video to other users, well, even perfect software will hit network errors.

The reality is, many software companies are profitable right now while selling software for 10s of dollars, so if I charge more, I won't be competitive.

Consumers don't pay for software that's better than a certain amount. Once it's "good enough" to solve their problems "well enough", that's it.


I think you're ignoring the gigantic industry of B2B/enterprise software, where piracy/illegal copies are essentially non-existent.

Curiously, that is the same industry that is well known for pumping out some of the worst/crappiest software in existence, so I seriously doubt your assertion that software piracy is at the core of the problem.

I think the main reason is that most people don't really care about "bulletproof" software, they'd rather have something with more features, or cheaper, or easier to use or more visually attractive as long as the bugs aren't too annoying.


What are your thoughts on where OSS fits into this?

I want to build software that lasts, but I don't think you really can at this point.

We haven't even fully figured out how we should represent a string or date or number in software, let alone have an enduring language or ABI.

I feel like I am building everything on a foundation of sand and it will need to be rebuilt every 5 years, 10 if you're lucky.

I do think it will change, but it will be a while, and by then it will probably just be a few big players making software anyway, kinda like car companies.


I've got the same exe running in client installs since the late 90's. It's had a tweak or two here and there, but largely unchanged. I last recompiled it about 2yrs ago and had a fresh install of it about 6 months ago.

It's not the prettiest thing and could have certain upgrades in light of new security practices (eg use new encryption algos & go thru an in-depth security audit). But to the clients, it's unbeatable and there's nothing remotely close to migrate to.

Unbelievably, it's a vb6 app that I never bothered porting to dotnet. Even more unbelievably, it's a port of its predecessor, written in turbo pascal. As long as I can continue to find the dev environment installer, it's good to go.

So. I think a large part of the problem is that ppl don't value their dev environment enough to keep an install disk (if one was available in the first place).

A second large problem is that modern development relies heavily on 3rd party library use, which means your software is reliant on more than one company for your binary.

So. Find an environment that you can archive/keep, and ignore the not-invented-here rule to a large extent.


I'm glad for you. I'm aware of a dozen systems in my country built using VB6 or Clipper/Harbour (text mode) which are still in use.

And what I like most about such success stories is that users mostly feel pleased using them -- those systems basically do the job, fast. For me, that says a lot about how wrong some of the philosophies going mainstream are, with their focus on great aesthetics, mobile-first and, in my opinion, a very mistaken sense of user-friendliness.


While we're actively developing our program, we have several core modules that have barely been touched since they were written in 1997. It's not pretty code, but hey, it works, so until we need to, there's no reason to touch it.

A coworker has written a tool that edits the registry such that you can do a fresh install of our compiler/IDE (Delphi), run the tool, and then load up our project and it'll compile. The tool gets updated when needed, usually when we change dependencies. All dependencies are kept alongside the source in the same repo, which makes it easy to ensure you got the right stuff available for compilation, as well as being able to make patches if we really need to fix an issue ourselves.

Need to compile an old version? Check out the right branch, run the tool and start the IDE version used for that branch. Easy.


>>> I've got the same exe running in client installs since the late 90's.

That's on Windows, which has a stable API and ABI. The 90's is probably the limit because older software is 16-bit and doesn't work on current 64-bit Windows.

On Linux, all software breaks with each distro release because "core" libraries are unstable. I've got executables and libraries compiled on RHEL 6 and most of them fail to load on RHEL 7 because something.so is not found.


You can bundle the .so files in Linux just like you would bundle .dll files in Windows. Or statically compile if you don’t want dynamic libraries. Linux and Windows isn’t really any different there aside from Windows has more in the way of novice friendly tools to create redistributables.

It’s also worth noting that you’re comparing apples to oranges in that Visual Basic 6 is a very different language to C++. VB6 has its own warts when it comes to archiving, such as its dependence on OCX components and how they require registering for use (they can’t just exist in the file system like DLL and SO libraries; OCX components require their UUIDs to be loaded into the Windows Registry first).

To further my previous point, if you wanted to use another language on Linux, maybe one that targets the OS ABIs directly (eg Go), then you might find it would live longer without needing recompiling. Contrary to your statement about user space libraries, Linux ABIs don’t break often. Or you could use an interpreted language like Perl or Python. Granted, you are then introducing a dependency (their runtime environment being available on the target machine), but modern Perl 5 is still backwards compatible with earlier versions of Perl 5 released in the 90s (the same timescale as the VB6 example you’d given, except Perl is still maintained whereas VB6 is not).


Linux ABIs might not break often.

But GTK, Qt, libwhatever ? More often than one would like.


I’d discussed that problem in the first paragraph of my post.

For reference, the C++ ABI is changing with every minor version of gcc, making compiled libraries like glibc or Qt incompatible. Major distro releases upgrade gcc so all packages are incompatible.

I agree on compiling statically to avoid DLL hell. However, it is fairly difficult in practice because software rarely documents how to build it statically, and it very often takes a dependency on some libraries on the system. All it takes is one dynamic dependency to break (libstdc++ is not stable, for example).


> For reference, the C++ ABI is changing with every minor version of gcc, making compiled libraries like glibc or Qt incompatible. Major distro releases upgrade gcc so all packages are incompatible.

It’s actually not as dramatic as that, and you can still ship libc as a dependency of your project like I described if you really had to. It’s vaguely equivalent in that regard to a Docker container or chroot, except you’re not sandboxing the application's running directory.

This is something I’ve personally done many times on both Linux and some UNIXes too (because I’ve had a binary but for various different reasons didn’t have access to the source or build tools).

I’ve even run Linux ELFs on FreeBSD using a series of hacks, one of them being the above.

Back before Docker and treating servers like cattle were a thing, us sysadmins would often have some highly creative solutions running on our pet servers.

> I agree on compiling statically to avoid DLL hell. However, it is fairly difficult in practice because software rarely documents how to build it statically, and it very often takes a dependency on some libraries on the system. All it takes is one dynamic dependency to break (libstdc++ is not stable, for example).

There are a couple of commonly used flags but usually reading through the Makefile or configure.sh would give the game away. It has been a while since my build pipelines required me to build the world from source but I don’t recall running into any issues I couldn’t resolve back when I did need to commonly compile stuff from source.


The gcc C++ ABI hasn't changed since v5 (or has changed in backwards-compatible ways). There was a small (but important) change between 4 and 5. Earlier versions did have much more churn, but it's simply not true to say "every minor version of gcc" at this point.

> Or statically compile if you don’t want dynamic libraries.

Indeed. I do not understand why dynamic linking is still routinely used as part of so many software deployments. A lot of the arguments for using it that once made sense are now obsolete; some have been for a very long time. It's not that it's never useful, but it seems to be used almost by default in a lot of situations, even when a simple, static link would result in software that is both more efficient and more reliable.


> I feel like I am building everything on a foundation of sand and it will need to be rebuilt every 5 years, 10 if you're lucky.

This is exactly how I feel. I'm hoping at some point, the amount of churn will decrease, the community will figure out what problems it actually needs to solve, and come up with standardizedish practices for this. Obviously not all software can fit into this mold, but a lot of it probably can.

To do that, I think an important part is to "move fast and break things" to figure out what works and doesn't work in practice (in practice including how to actually manage people to build your software). I do think that for that to happen we need to retain more knowledge between generations; I see a lot of rediscovery and reinvention of the wheel.


Java has been around for 25 years with no significant changes in strings or numbers, and only one major change in dates. That has been good for backwards compatibility and code bases do last. But the 16 bit string character representation doesn't work well with modern Unicode. And the number representations lack a proper type hierarchy so you can't use polymorphism in numeric algorithms.

My understanding is Java went to UTF-16 in J2SE 5.0 in 2004; before that it was UCS-2, which is a fixed 2-byte encoding.

If Java were made today it would probably be UTF-8 instead. Like I said, we are still figuring strings out; I hope UTF-8 is it.


UTF-8, assumed (but not validated/enforced; those are high-level text-processing library details), has been the sane default everywhere that isn't Windows-centric.

> But the 16 bit string character representation doesn't work well with modern Unicode.

What do you mean? UTF-16 works well with all unicode characters. Some characters are encoded by two code units, but that's true for UTF-8 as well. ASCII strings are stored using 1-byte encoding on modern JVMs. I would prefer UTF-8, but hybrid ASCII/UTF-16 string works fine too.


The problem is that the internal character encoding is fixed to UTF-16 (although some JVMs use something else internally and then present a UTF-16 facade). A lot of Java code fails to correctly handle Unicode code points that don't fit into a single char. In retrospect string representation should have been more flexible and decoupled at the API layer from the underlying encoding.

I've never had any issue as long as every I/O had its charset properly declared; do you have some samples to share?

IO isn't the problem. The issue is that a lot of code assumes that each string character is represented by a single char value, which isn't correct.
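A minimal Java sketch of the kind of assumption that breaks (the emoji here is just an arbitrary code point outside the Basic Multilingual Plane):

    public class SurrogateDemo {
        public static void main(String[] args) {
            String s = "a\uD83D\uDE00"; // "a" plus one emoji code point (U+1F600)
            System.out.println(s.length());                       // 3 -- UTF-16 char units, not characters
            System.out.println(s.codePointCount(0, s.length()));  // 2 -- actual Unicode code points
            System.out.println((int) s.charAt(1));                // 55357 -- a lone high surrogate, not a character
        }
    }

Any loop that walks the string char by char and treats each char as a character will mangle the second "character" here.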

You can't do that in Java; string length is computed correctly for you, and character loops give you normalised code points.

You have to go to great lengths to get a broken character out of a valid string, including casting and cutting it intentionally.


Maybe reversing or splitting a Unicode string (with surrogate pairs) may be a problem in Java. I am not familiar with Java, so I am not sure.

The idea that removing external dependencies necessarily simplifies your code and leads to a maintainable system is a fallacy. Use external dependencies appropriately, follow SOLID principles and a clean architecture and you can leverage the benefits of other people’s code without losing the intelligibility of your system.

In the example the author gives, the real mistake is allowing the web framework to define a system that you then try to customise to produce your application. The battle is already lost because your business logic is now a dependency of someone else’s (rotting) generic web app template. This may be appropriate for you if this is a proof of concept or spike but for something with a longer lifetime, any time you save will be paid back with interest when the host framework diverges from your needs.

A JSON parser is not an equivalent class of dependency if you keep it at the periphery of your system. There’s not much point in writing a JSON parser unless you have a particular requirement that cannot be satisfied by a third party library. You should be able to swap it out for a different JSON parser in a few hours.
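As a sketch of what "keeping it at the periphery" can look like (hypothetical names, Gson assumed as the third-party parser; the point is that the rest of the system only ever sees the interface):

    // The application depends only on this interface.
    interface JsonCodec {
        String toJson(Object value);
        <T> T fromJson(String json, Class<T> type);
    }

    // One thin adapter per third-party parser; swapping parsers means writing a new adapter,
    // not touching business logic.
    final class GsonCodec implements JsonCodec {
        private final com.google.gson.Gson gson = new com.google.gson.Gson();
        public String toJson(Object value) { return gson.toJson(value); }
        public <T> T fromJson(String json, Class<T> type) { return gson.fromJson(json, type); }
    }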

80:20 seems like a ratio that was just pulled out of the air; I have no data to hand either but for most user applications I would suspect the real ratio is more like 99:1.

Edited to add: I’m still in agreement with a lot of what the author says here, such as: evaluate your dependencies seriously, understand them, and be mindful of their impact on things like binary size.


For truly long-lasting software, I feel all the code that makes up a system should be treated equally; i.e. not divided into "own code" and "third party dependencies". Basically an extreme form of dependency vendoring.

But it is a very expensive way of doing things, and would not work well with modern, constantly updating libraries. But I do personally consider software maintenance to be an antipattern. I'd rather have my software be correct and eventually replaceable than constantly changing and "maintainable".


In order to have software that lasts, you need environments that last.

The classic 1950s Big Iron software, now run swaddled in layer upon layer of emulation on current mainframes, is software which lasts because people will recreate its environment again and again, and ignore the people who are ill-served by it because they don't fit into that environment neatly. Oh, your name doesn't fit into an all-caps fixed-width EBCDIC form? Go fold, spindle, and/or mutilate yourself, NEXT! (This happens to me. Over and over again.)

On the opposite extreme, unmaintained Internet software rots:

https://utcc.utoronto.ca/~cks/space/blog/tech/InternetSoftwa...

Or, to be more precise, software is built with assumptions underpinning it, and those assumptions change out from under it more quickly on the Internet than in other contexts. Software can go from being secure and completely above reproach to being a major factor in DDoS amplification or spam runs because the world changed around it. Software that lasts is like ships that last: Replace the hull, the mast, the sails, the cabins... same ship, neh?


It's very sad for me to see that software in consumer products isn't that reliable or performant anymore, just because vendors can "push updates", which seems to result in crappier software overall.

Why do you think consumer products at a similar price point used to be more reliable or performant in the past? I don't think that's true at all. In the 90's, software was riddled with bugs, you saved files every five minutes because your word processor regularly crashed, you had to restart your computer multiple times a day when the OS hung, and starting Photoshop easily took a full minute.

Also why do you think the ability to push updates results in crappier software? In the 90's, if your copy of CorelDraw or Windows had a bug, you lived with that bug for years. Today, if it's a common showstopper, it gets fixed quickly.

To my eyes, everything's gotten far better.


I don't think he's talking about the 90s. But the late 2000s were definitely better than now. Computers were powerful enough to do most things we do today, and yet both OSes and applications seemed to be much better. Compare the quality and stability of Windows 7 (or even Vista) and their macOS counterparts to their nowadays' versions.

I guess I'm not seeing that either.

Compared to the late 2000's, today's computers have SSD's and retina displays and play 4K movies. They're vastly faster, with typography you can't see pixels in.

And OS's and applications are basically the same. My macOS is no less stable than it was a decade ago. The main difference is that my Mac is far more secure, so I trust third-party software much more.

I'm just not seeing how the quality of OS's or applications has gone down. I think what would be more accurate to say is that both OS's and applications have added more features, and that otherwise quality has remained basically the same.

Sure, I still have finicky Bluetooth issues today. But I had finicky Bluetooth issues 10 years ago too. It's certainly no worse. But now Bluetooth gives me AirDrop too.


> today's computers have SSD's and retina displays and play 4K movies

And yet, something as basic as a Slack client now requires gigabytes of RAM, microblogging such as Twitter loads a monstrosity of a webapp that immediately makes the fans spin up to display 240 characters in the rare case it actually loads without errors which require to refresh the page. Modern entry-level laptops have the processing power of a decent machine from a decade ago, and yet they appear just as slow to do the same computing tasks we did 10 years ago.

My 2017 Macbook lags and stutters when loading a YouTube video page. YouTube used to load fine and not stutter in 2009 on a laptop with a third of the RAM and CPU that I currently have, and yet the task at hand didn't change at all, it still just needs to display a video player and some text.

Windows 10 broke start menu search. Come on, this problem was solved a decade ago.

Every large website's login flow now involves dozens of redirects through various domains which can break in all kinds of interesting ways leaving the user stranded on a blank page in the middle of the flow. I know the reason behind them (oAuth, OpenID Connect, etc), but as a user I don't care; this is a major UX downgrade and the industry should've done better.

We've replaced offline-first applications with cloud-first. Nowadays even something that should work fine offline will shit itself in all kinds of unexpected ways if the network connection drops or a request fails.


It’s not the software. It’s all the ads that load up in the background, that tracks you, even when you’re browsing incognito.

Slack doesn't have ads.

iPods from the early 2000s still work just fine.

An android phone from 5 years ago is pretty much unusable on the other hand.

One may argue phones are a platform and complex. Fair enough. So then compare to something like a Sonos player.

I can pull up speaker setups from the 80s 90s and even earlier and have no issues with them working. But something like Sonos may not last 3 years.

And talking about pure software, I am not sure what the definition of consumer is, but a lot of software from the 80s and 90s is going strong and is extremely usable.


> An android phone from 5 years ago is pretty much unusable on the other hand.

Not sure what you mean? It might not get updates anymore and you may not be able to install the latest versions of most apps but the same is true about iPods.


But because of that developers were under more pressure to be careful and avoid shipping software with problems in the first place.

Now, as you say, the common showstoppers get fixed quickly, but they're probably a lot more common. Meanwhile the overall cohesion and well-thought-out feeling of a lot of older software is largely gone.


You can see this in console games now too. Day 1 multi-gigabyte patches are common now that it's assumed everyone has a sufficient connection.

Too many layers of abstraction. That's our problem.

The kicker these days is that "standing still" is not enough. Software that "lasts" has to constantly change, because externalities force it to - primarily, security.

Whatever library, language or foundation you built on, you can guarantee whatever version you used is going to stop getting security updates in a couple of years. And then there will be a whole lot of baked in dependencies you didn't know about - URLs (like XML schema URIs that were never meant to be resolved but lots of libraries do), certificates, underlying system libraries etc.

So designing software "to last" now is designing software to be constantly updated in a reliable and systematic way. And the interesting thing about that is from what I can observe, the only way to achieve that currently IS through simplicity - as few dependencies as possible, those that you do have very well understood and robust, etc etc. So it sort of comes back to the author's argument in the end, though maybe through a different route.


It's possible to engineer for this. The mental model: imagine you are equipping a time travel mission decades into the future, to integrate with the future internet, and to return home with acquired results.

A TLS client is not possible to future proof like this of course, but you could have a roster of approaches. For example, try getting your hands on the latest curl / wget builds (try each with a few different approaches), or the latest JVM/Python/Powershell & use those TLS APIs.

A fun game would be to try this with tools available 20 years ago, and then redo it to make a sw time capsule from present day to 20 years into the future...


A better approach is to have the "TLS" library accept either an already connected socket (and host-name to validate) OR return something that behaves like one after dialing and validating an address. That way when the underlying library is re-linked to something modern it'll still have the same API / ABI but the underlying security implementation will belong with the updated library.
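Java's standard TLS API already has roughly this shape, for what it's worth; a rough sketch (error handling omitted, hostname verification enabled explicitly):

    import javax.net.ssl.*;
    import java.io.IOException;
    import java.net.Socket;

    final class TlsUpgrade {
        // Accepts an already-connected socket plus the hostname to validate, and returns
        // a TLS-wrapped socket using whatever implementation the current runtime provides.
        static SSLSocket wrap(Socket plain, String host, int port) throws IOException {
            SSLSocketFactory factory = (SSLSocketFactory) SSLSocketFactory.getDefault();
            SSLSocket tls = (SSLSocket) factory.createSocket(plain, host, port, true);
            SSLParameters params = tls.getSSLParameters();
            params.setEndpointIdentificationAlgorithm("HTTPS"); // validate the certificate against host
            tls.setSSLParameters(params);
            tls.startHandshake();
            return tls;
        }
    }

When the runtime or its security provider is upgraded, callers don't change; only the implementation behind the same API does.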

That could indeed be an idea, if you can assume the SW can be rebuilt/linked against API compatible libs, and you rely on the API existing far into the future. The automatic CLI tool scouring option would allow you to hedge your bets over several implementations because of looser coupling.

As a counter point to the author:

* Half baked in-house implementations of things are often filled with bugs and broken edge cases. Especially when it comes to security and concurrency.

* In a larger code base, you won't understand everything whether it's in-house code or third party code. The in-house code was written by someone who quit 5 years ago and is now a monk in Tibet.


Software should be built to accomplish its purpose. It's not a special snowflake, it's a tool to do some specific job. To think otherwise is to setup yourself and your company/team for disaster.

Kernels, compilers, language runtimes, databases? Sure, build them to last. Web pages that will not be used in 2 weeks? Don't waste your time with "build to last".

The problem is that some of us think that our software should be built to last - but in reality it's just some mediocre thing that needs to get in front of the clients as fast as possible, and some bugs are "OK" to live with.


That would be fair if all those "pages that will not be used in 2 weeks" actually ended up not being used after just 2 weeks. Instead, they will probably continue to be used for 2 months or 2 years.

I try to use no dependencies beyond my database connection. The one area I feel unprepared to handle myself. Anything else I try to code a minimally-viable functionality that meets my needs. As a bonus, I have learned what all these things do and how they work while fiddling with implementations, and if I want to customize their functionality in any way, it's generally pretty easy.

I'd almost always take a pasted tutorial over a robust dependency; the former is more reliable. Any dependency I do take on is something I have to watch the changes and project status of like a hawk.


That doesn't work in most cases though. You are not going to reimplement JPEG encoding, MPEG, etc.

If the language/framework you are using handles it already, you're probably golden: Your language/framework is functionally the one dependency you always have, and features implemented in the platform tend to be fairly stable. Pick a platform that supports the most difficult things core to your application.

I'm most comfortable in .NET or PHP, both of which have extremely large, long-lived built-in function sets that generally are non-breaking over a span of many years. Meanwhile, I generally don't add anything to them via NuGet/Composer/etc. They both operate almost opposite to the JavaScript world, where everything is built by random third parties. If it's a popular enough need, it tends to be added to the base package.


Pure HTML with progressive enhancement has worked fine for 30 years and will continue so for many more. But a good, lasting product is incompatible with the current "take the money and run" business strategy and analytics for management. If it works well for so many people, what can we do?

That's all well and good but go and visit some random websites and see how many of them depend on sites like unpkg or jsdelivr or something.

Every time you try to load a resource from a domain you're introducing a huge risk by letting them be the judge, jury and executioner.

Domain gets hacked and injects malicious scripts or collects user data? Domain fails entirely (no-renew) or in 10 years just stops responding?

A lot of web pages today have built in self-destruct features purely based on the JS/resources they're loading.

For 0 benefit, as well, since the amount of bandwidth saved is so marginal.


Yes, security is the most sensible reason we use mega frameworks, because FB and Google will patch any issues instantly, for free and forever.

Software is not always designed to last. Take video games that use servers, the video game company can take down the servers and the video game no longer works. You'd have to set up your own server to emulate what the video game company's server did.

All those mobile games that require a server, once the server goes down the game no longer works.

In the old days you set up Doom as a server on your PC networked to other PCs for multiplayer. Those can last and be remade for new platforms.


LAN-hosted games are the golden era. Online game servers are a really interesting example of brittle software.

When the server is shut down, the game breaks and the software _is_ unusable.

This often happens because of the misaligned incentives of the business to not release the server code. This disappoints both the original developers and the consumers.

I'd argue that this is not only a software problem, but mostly a problem of copyright distorting the incentives of publishers.

One could imagine a client-hosted game or an alternative architecture that removes the central dependency on the company; that could be the way to build software that lasts longer.

In the online game I'm developing - I make sure that every client can also run as a game server. This way if my servers are shut down players can play without depending on me.

My decision to remove myself as a dependency directly goes against the incentives I have as a publisher. I slightly narrow the possible monetization options for the game.

Mostly I'm trying to explain why developers/publishers decide to make brittle software, even if I don't agree personally.


More often than not, the code I have to write needs to be written fast, and I (mostly) won't have time to look at it again once deployed. I've mostly worked at fast growing companies, so that's been the case everywhere.

What I've learned is that making stuff as simple and stable/low maintenance as possible is the only way to be sane. I can't afford to deploy code that then requires me to hold its hand with a team of 3-4 engineers for the upcoming years. It needs to run itself and be forgettable. And it needs to be written fast.

This has become much harder in recent years with the switch to things like Kubernetes. Kubernetes moves so fast and needs constant nurturing, even for casual users. Running old versions of it is painful and dangerous, so you're forced to update. And the ecosystem is still so early in its life that all the paradigms change and flip on their head every year. Odds are that something you deploy in it today, will need to be looked at in 18 months. And the whole thing will need a team dedicated to keeping it healthy.

Anyhow, that's my angle on the author's rant. Things need to last, IMO, mostly because I don't have time to go back to them later.


I'm using the completely opposite approach. I'm trying to define strict requirements for a given task and then I code assuming those requirements. If any small detail changes, my code will break. So I'm trying hard to break it as soon as possible (using asserts and similar patterns) with enough information to understand the reason for this breakage.

So my code definitely requires maintenance in a changing world. But more than once those breaks actually identified a bug somewhere else, and fixing that bug was essential. If I tried to self-fix wrong data, those kinds of bugs could be missed. Sometimes I need to fix the code to adapt to a changed format or something like that. But that's not a big deal, because the change is simple, and something like an exception stacktrace allows me to instantly find the code that is to be fixed.
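As a toy illustration of the pattern (hypothetical record format and names; the point is failing loudly, early, and with enough context to find the cause):

    final class RecordParser {
        // Assumes records look like "id;name;priceCents"; any deviation fails immediately
        // with enough context to locate the offending input.
        static int parsePriceCents(String record) {
            String[] fields = record.split(";");
            if (fields.length != 3) {
                throw new IllegalStateException("expected 3 fields, got " + fields.length + " in: " + record);
            }
            if (!fields[2].matches("\\d+")) {
                throw new IllegalStateException("price is not a non-negative integer in: " + record);
            }
            return Integer.parseInt(fields[2]);
        }
    }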


If you want your software to last, remember that you also have to have a working and up-to-date development environment, i.e. your code may still work fine, but if you cannot compile it for the correct target it is useless.

I found this out the hard way when I had to recompile some fpga firmware for a $100K piece of equipment which used a programmer not supported by windows, a chip and code not supported by the current IDE and a binary blob which we did not have the source code for and would not link, luckily we winged it by replacing the FPGA with a hardware compatible replacement... We now preserve the whole development environment on a VM and save it with the source code.


If you write your code properly, you will have tests which break on every library incompatibility, and a tiny wrapper around your libraries. (If you do TDD properly, you will have it.) And that's all you need to keep your code up to date. When upgrading a library, you will of course build and test your code with it, and that will show you any problems, which you can fix in the library interface layer before having any chance to go to production.
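A hedged sketch of what that might look like (JUnit 5 assumed; the names are made up): the wrapper is the only place the library is touched, and the test pins the behaviour the rest of the code relies on, so a breaking upgrade fails here first.

    // Library interface layer: the only file that touches the formatting API directly.
    final class Slugs {
        static String forDate(java.time.LocalDate d) {
            return d.format(java.time.format.DateTimeFormatter.ofPattern("yyyyMMdd"));
        }
    }

    // Pins the behaviour the rest of the codebase depends on; an upgrade that changes it
    // breaks this test long before it breaks production.
    class SlugsTest {
        @org.junit.jupiter.api.Test
        void formatsAsEightDigits() {
            org.junit.jupiter.api.Assertions.assertEquals(
                "20200131", Slugs.forDate(java.time.LocalDate.of(2020, 1, 31)));
        }
    }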

> If you write your code properly, you will have tests which break on every library incompatibility, and a tiny wrapper around your libraries. (If you do TDD properly, you will have it.)

In general, I'm not sure it's possible for all of these things to be true.

Many libraries are useful because of their side effects. Depending on what those side effects are, it may simply not be possible to integration test your entire system in a way that would satisfy all of the above claims.

The alternatives tend to involve replacing the parts of the code that cause real side effects with some sort of simulation. However, you're then no longer fully testing the real code that will be doing the job in production. If anything about your simulation is inaccurate, your tests may just give you a false sense of confidence.

All of this assumes you can sensibly write unit tests for whatever the library does anyway. This also is far from guaranteed. For example, what if the purpose of the library is to perform some complicated calculation, and you do not know in advance how to construct a reasonably representative set of inputs to use in a test suite or what the corresponding correct outputs would be?


I think we can all agree that software should be designed to last :)

My experience tells me that when this fails to happen, it's usually not explicitly due to too many dependencies. Rather, it is because the developers have a poor/partial understanding of the problem domain.

External libraries help with this problem a bit because they allow developers to offload tasks onto other, more experienced developers. They can (and occasionally will) misuse those solutions, but that is not the dependencies' fault. On the contrary, skilled developers will produce quality solutions with or without dependencies.

The signal seen (poor software has many dependencies) does not mean that software with dependencies is poor. It means instead that developers who have a poor domain understanding but good business skills are disproportionately successful compared to their peers who have good domain understanding but poor business skills.

Make of that what you will :)


Isn't the inevitable conclusion "you should design your interfaces so that they last" (and/or choose existing interfaces which look likely to last) rather than "you should avoid dependencies"?

It's all an illusion. Software kinda isn't lasting; some things merely have competence behind them.

Look at the GNU coreutils, things you probably think shouldn't have changed in a while: https://github.com/coreutils/coreutils/tree/master/src

There are actually frequent edits. The legendary disk destroyer (dd) was just updated last month. The difference is the competence.


Building things to last has a really high cost and is usually not worth it. The vast majority of the buildings and infrastructure humans have constructed have been torn down, completely rebuilt, or abandoned. What remains are usually extremely expensive monuments to religion, culture, or governmental constructions.

Most software is not worth preserving, the same way most buildings are not worth preserving (except perhaps for the aesthetically pleasing facade).


90-day certs that need renewing via an external service API that can change say no.

Software in hardware that is designed to do a certain job should indeed last. Telecom towers, for example, should work fine "forever".

But software as in APIs and services constantly has to evolve and be expanded. Yeah, that is a harder sell.

We live in a crypto world where each advancement in computing, or in ways of solving problems, could render algorithms "bad", which means we have to update and adapt.


Software that lasts is software that is a joy to use.

IMHO the tech, and how good you are at programming, is not very relevant when the user doesn't like to use your software.

I once wrote a small CMS for a company; it was replaced twice, but each time they ditched the replacement and returned to my old, simple version.

This is just one example, but I think that should be in people's minds when they create software that lasts: people must enjoy using it.

This can even apply to a command line program.


"...bridges and civil engineering projects are designed to last, why can we also design software for it to last?"

Some software has lasted for many decades. This isn't necessarily a good thing. [1]

[1] https://arstechnica.com/tech-policy/2020/04/ibm-scrambles-to...


On the other hand, there’s lots of software that’s been around for decades that still works quite well today. Most GNU software for example, and particularly Emacs.

If somebody is a solo developer working on a huge project, it would be lovely if the software lasted... but the big fear is that it won't get users. You can't really spare the time to worry about 10 years from now when you are thinking about how you'll have to get a regular job if you can't get X users in the next 6 months.

Civil engineers and spacecraft designers build their systems to last because they don't have a choice. You can't deploy fixes to a production bridge ten times a day.

There might be good reasons to build software to last, but "These other engineering fields build their things to last" isn't one of them. I have no doubt that if you could reliably send fixes to bridges and spaceships, people would do it - and that many lives would have been saved.

My personal opinion is you should design the larger ecosystem around your software to last and be robust to the replacement of any individual part. Building on Go's standard library HTTP server would have seemed needlessly cutting-edge about five years ago. Building on Apache and mod_cgi would be considered dated today (even though it would work). The stack you should be using will change. Build your system so that, when change comes, you're not afraid to write new software.

In practice, this means making good designs to avoid hairballs of complexity, having deployment pipelines that let you roll out canary versions (or better yet, send a copy of production traffic against a sandbox environment), keeping track of who's using APIs internally so you can get them to change, having pre-push integration testing per the Not Rocket Science Rule so you can be confident about changes, etc. You should think about your choice of web framework or JSON library up front, sure, but more importantly, if a sufficiently better web framework or JSON library comes along tomorrow, you should be able to switch. As a principle: any time you see something you think you'd be afraid to change in the future, don't ship it if you haven't shipped it yet, and figure out how to build the right abstraction around it if you can.
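One possible shape for that "not afraid to switch" seam, as a rough sketch (hypothetical module name; the stdlib json module stands in for whichever library is in use today):

    # codec.py -- one seam for serialization; callers never import the JSON library directly.
    import json

    def dumps(obj) -> str:
        return json.dumps(obj, separators=(",", ":"), sort_keys=True)

    def loads(text: str):
        return json.loads(text)

    # If a sufficiently better library comes along, only this module and its
    # tests change; the rest of the system keeps calling codec.dumps / codec.loads.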


The software (as in the actual code for an implementation) should be ephemeral, in my opinion.

The data model, protocols, and knowledge base (docs, requirements, etc.) should be designed to last and be extensible.


Reusable abstractions are the foundation of what makes programming productive. This person does give respect to libp2p, which is good, but clearly they've never seen a small abstraction that is still immensely valuable.

Constructive criticism if the author is in here, I find multiple bold sentences per paragraph to be taxing on the eyes.

> If all three are in agreement, they carry out the operation. If even a single command is in disagreement, the controller carries out the command from the processor which had previously been sending the correct commands.

By this definition, wouldn't all three processors be the last ones to send the correct commands?


Actually, the way it usually works is a voting system: if one component disagrees and two agree, the majority vote is upheld.
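A toy sketch of that 2-out-of-3 arrangement (illustrative only, not how any particular flight controller is written):

    from collections import Counter

    def vote(commands):
        # commands: the three redundant processors' outputs for one control cycle.
        winner, count = Counter(commands).most_common(1)[0]
        if count >= 2:
            return winner          # majority (2-of-3 or unanimous) wins
        raise RuntimeError("no majority; fall back to a safe state")

    assert vote(["up", "up", "up"]) == "up"
    assert vote(["up", "up", "down"]) == "up"   # single dissenter is outvoted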

For some reason, when I saw the headline I went over to check up on grc.com. That guy's obsessiveness about coding everything in assembly gives me comfort.

>> Software should be designed to last

Not necessarily true.

If the requirements of the project are to be secure, or fast, or easy to maintain, or built quickly, or have a beautiful user experience, or be designed to be robust and failure-resistant, then that is what it should be.

And if the requirements of the project are "the software must be designed to last", then the software should be designed to be designed to last.

But if it's not a requirement, then no, it does not matter if it lasts.

The point being that blanket statements about what software should be are missing important context about the purpose of the software and the constraints under which it was developed and the specified requirements.


Couldn't you have just said "Not all software must be designed to last"?

Also, the author said software should be designed to last, not that it must.

Lastly, you didn't address the core argument of the article, which is that you should carefully select and manage dependencies. Reading your comment, I wonder if you even read the article...


True. Somewhat long-winded; edited, thanks.

I'm working on a project/product now with no established value or customer yet. Imagine if I told senior management, "No, we still haven't provided value, but we have ensured the design is robust and will last." Lol. I'd be fired so quickly.

The problem is requirements change, and often. There is definitely engineering work in making these trade-offs, but I generally agree with the OP that it's better to err on the side of longevity.

It would be interesting to hear of software that served its purpose well for a year or two and then was no longer needed. I'm sure these projects exist, but I'd imagine they aren't very common given how expensive software development is in the first place.


This is why I like long-established technologies. You can build most anything with typical POSIX utilities and time-worn libraries. And they will be there for you in 25 years. Who knows where Rust and Go and Node will be in a quarter century?

I worry about Go 2.0 and personally will revisit the question based on what the outcome looks like. Rust also seems to have good momentum and, as an outsider glancing in, generally sane architectural decisions. It remains a language I would _like_ to consider for a future toy project if it fulfills the needs of the space.

I feel they will continue to be "supported" for a long time, like Perl 5 surely will be. (Though I'd avoid starting anything new in it; there are stable, mature things written in it that, like the mainframe stuff mentioned elsewhere, might just end up being encapsulated.)


You can apply the Lindy power law :)

Finally looked it up, and I agree. It's why we have steering wheels instead of joysticks (except for people who have a leg-related disability; they've been driving with joysticks for like 100 years).

What’s Node? Is that like Deno?

I mean, like totally, I also very thoughtlessly forgot to call Go by its official name as well. And neglected the registered trademark and copyright symbols!

No, software should be designed to work for as long as necessary, because we're responsible professionals and don't want to squander other people's money on infinite perfection.

The whole bit is nonsensical, including the parallels with civil engineering. Bridges are built with tolerances, a maintenance schedule, and a disposal plan for when maintenance costs become higher than replacing the bridge, because nothing in this world actually lasts.

Heck, our internet is built around operating systems that didn't exist 25 years ago. Imagine investing capital and sinking opportunity costs into delivering the perfect VAX typing program...

Mind, this is not to say that all software should be quick and dirty. But the conscientious engineer knows whether they're building infrastructure to last decades or a temporary bypass to support some time-limited traffic spike.

The important bit is to ask the stakeholder what's more appropriate.

I also want to see people build their own JSON parsers at an acceptable level of performance, possibly as streaming parsers instead of just gobbling everything into memory. Imagine wasting a month on building that and then having to defend your choice every time a bug or a malformed input causes silent data corruption somewhere in the backend. http://seriot.ch/parsing_json.php
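For what it's worth, a rough sketch of incremental parsing without hand-rolling a full parser, using Python's standard library (illustrative only; real streaming parsers also handle buffering, encodings, and the edge cases catalogued at seriot.ch):

    import json

    def iter_json_values(text: str):
        # Decode a stream of concatenated JSON values one at a time,
        # instead of loading and parsing one giant document in memory.
        decoder = json.JSONDecoder()
        idx = 0
        while idx < len(text):
            while idx < len(text) and text[idx].isspace():
                idx += 1            # skip whitespace between values
            if idx >= len(text):
                break
            value, idx = decoder.raw_decode(text, idx)
            yield value

    sample = '{"id": 1} {"id": 2}\n{"id": 3}'
    assert [v["id"] for v in iter_json_values(sample)] == [1, 2, 3]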




Guidelines | FAQ | Support | API | Security | Lists | Bookmarklet | Legal | Apply to YC | Contact

Search: