A computer was to control a new assembly line for a car company. They couldn't get the software to work. They called in an outside insultant. The outsider developed a program that worked. (It was more complex.) The book was about the psychology part. The original programmer asked: "How fast does YOUR program process a punched card?" Answer: "About one card per second." "Ah!" said the original programmer, "but MY program processes ten cards per second!"
The outsider said, "Yes, but MY program ACTUALLY WORKS". If the program doesn't have to work, I could make it read 100 cards per second.
Correctness comes first. Simplicity is highly desirable, adds additional cost, but always comes after correctness.
The problem is not the programs that obviously do not work, or that break in a very visible fashion.
Programs whose deficiencies are known can be fixed or worked around.
The real problem is programs that appear to work correctly but don't.
To put it in the words of Tony Hoare:
There are two ways of constructing a software design:
One way is to make it so simple that there are obviously
no deficiencies, and the other way is to make it so
complicated that there are no obvious deficiencies. The
first method is far more difficult. It demands the same
skill, devotion, insight, and even inspiration as the
discovery of the simple physical laws which underlie the
complex phenomena of nature.
For every problem there is a solution that is simple, neat - and wrong.
I think, however, that the more important part of this quote are the words 'problem' and 'solution'. Until you have an understanding of the problem that is correct, it is unlikely that you will come to a solution at all. Avoiding the introduction of gratuitous complexity is not necessary for reaching that understanding, but it sure helps.
That's... literally what I just said? "Achieving correctness is the whole point of making things simple, after all."
If your solution is not simple, it will not be correct or fast.
I can never say "We are 5% more correct than last week. Keep up the good work!"
Simplicity is a much better goal for the day-to-day work. Because it can be tracked, measured and evaluated for every individual change.
Excellent, so we both agree with the author that correctness is the ultimate point and that simplicity is just a useful tool for achieving correctness. :)
> Simplicity is a much better goal for the day-to-day work. Because it can be tracked, measured and evaluated for every individual change.
How does one purport to measure simplicity?
My thinking was like this. The complexity of software is synonymous with saying we don't know what it will do on given inputs. As complexity goes up, it gets more unpredictable. That's because of the input ranges, branching, feedback loops, etc. So, a decent measure of complexity might be simplifying all that down to the purest form that we can still measure.
ASMs (Abstract State Machines) are a fundamental model of computation, basically representing states, transitions, and the conditionals that trigger them. So, those numbers for individual ASMs and combinations of them might be a good indicator of the complexity of an algorithm. And note that they can model both imperative and functional programming.
What do you think of that idea?
It's the other way around. Correctness is obviously the goal (and likely performance too, depending on your use case), but the way to achieve it is through simplicity. So simplicity should be prioritized - as it allows you to ensure correctness.
> if your solution is not simple, it will not be correct or fast.
The point of the article is that "simple" is a prerequisite of "correct" (and "fast").
>> Simplicity is a much better goal for the day-to-day work. Because it can be tracked, measured and evaluated for every individual change.
>How does one purport to measure simplicity?
There's 40 years of research into that.
And loads of tools to support dev teams.
You can start here: https://en.wikipedia.org/wiki/Cyclomatic_complexity
Also related are costing models: https://en.wikipedia.org/wiki/COCOMO
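To make the first link concrete: cyclomatic complexity is essentially one plus the number of branch points. Here is a rough sketch over Python's `ast` module (a toy approximation; real tools such as radon or lizard count more node types and report per function):

```python
import ast

def cyclomatic_complexity(source: str) -> int:
    """One plus the number of branch points in the parsed source."""
    branch_nodes = (ast.If, ast.For, ast.While, ast.ExceptHandler, ast.BoolOp)
    tree = ast.parse(source)
    return 1 + sum(isinstance(node, branch_nodes) for node in ast.walk(tree))

sample = """
def classify(x):
    if x < 0:
        return "negative"
    elif x == 0:
        return "zero"
    for _ in range(x):
        pass
    return "positive"
"""
print(cyclomatic_complexity(sample))  # 4: the if, the elif, the for, plus one
```

Tracking a number like this per change is one crude way to "measure simplicity" as asked above.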
Have you seen a program that comes with a formal proof of correctness? I have. And boy, they are really simple.
The end result can be complicated. But the program is broken up into small, simple, easy-to-understand pieces that are then composed.
Working, simple, correct, optimized.
My approach is usually sending out a PR as soon as I can to a group of reviewers / users and goes in following stages.
1) POC - proof of concept. It does 90% of things; some parts are ugly and messy, but it validates a hypothesis and the unknown unknowns are discovered. I want to stage this and get it in front of some alpha internal users as soon as I can. First-pass reviewers sign off on the plan of attack. Lots of sub-TODOs are listed in the PR. The goal is to discover edge cases and unknown unknowns.
2) Simple - Go through PR and refactor any existing / new code so it’s readable and DRY. If reviewers don’t understand the “why” of some code, a comment is left. Now 90% of scenarios are covered, probably some edge cases may not work but the edge cases are known. The code is simple and at right layer of abstraction.
3) Correct, Testable - Edge cases are covered, tests are written, internal users have validated that the feature is working as expected.
4) Polish - if it’s slow, then slow internals are swapped out for fast parts. Tests would mostly work as is. Same with UI, css fixes to make it elegant and pretty.
Sometimes the process is a day, sometimes it’s a week.
I think he is. Premature optimisation puts the priorities in the order: fast, simple, correct.
So although the author doesn't explicitly state it, premature optimisation is something that would be avoided if you followed his advice.
This is either a great typo, or a hilarious moniker I have somehow missed (almost 40 years in the business). Either way, it's worth recognizing.
Equal parts hilarious and accurate as "/in/con/sultants" are often brought in to play the part of the court jester -- they can speak the hard truths no-one else could, and survive.
>>"Yes, but MY program ACTUALLY WORKS". If the program doesn't have to work, I could make it read 100 cards per second.
I think I wrote a device driver like that, more than once. :( Fast as hell, to the point of outstripping the bitrate of the device it talked to, and about as useful as a sailboat on the moon.
> Correctness comes first. Simplicity is highly desirable, adds additional cost, but always comes after correctness.
Correctness isn't binary. Roughly no software today is 100% correct, but for most purposes you'd still pick the current version over a highly complex, slower, more-correct version.
Simplicity can save you a lot of cost as you edit the software, which helps you make it correct sooner. Simplicity and correctness go very well together.
"Good enough" is the destination of any piece of software. Sometimes that means correct, but more often it means "oh yeah, sometimes it starts acting funny, just restart it when that happens"
It is slightly better to be simple than correct.
In which case please consider that everyone here is using "correctness" to mean "correctness that is achievable by reasonable human effort". :P It's easy to win any argument by taking one side to its logical extreme and asserting that it is therefore impossible, but that doesn't create a useful discussion. By the same logic we could assert that 100% simplicity is impossible, but that would be just as silly.
I use more-* phrases because correctness is always relative. Even NASA can't claim to have 0 bugs, although people die if they fail.
bit OT: There's a great article about NASA programming: https://www.fastcompany.com/28121/they-write-right-stuff
Because the original author neglected to provide an adequate definition of correctness, thereby inspiring an epic HN flamewar as people now must run around endlessly debating semantics. :P
Plenty of times correctness is binary. In some cases it would be: passes all tests. Or: meets all requirements. Even if it could be "more" correct (or "more" simple), but those aren't part of the tests / requirements.
Maybe it's supposed to move from A to B, maybe it should do it in under x seconds, maybe it should go via Y, maybe it has to be easily understood by a 6-year-old, etc.
But I can't really imagine something that has simplicity as the only requirement ("nothing" is the simplest thing so that requirement would always be met with no action). So as long as the other requirements are met simplicity is usually the nice to have "add-on". And you can have correct and simple, or correct and complex. But correct (does the job) trumps simple. And the world is surrounded by examples that prove this point.
I think the author meant "simple should be part of good design" but couldn't properly convey the message. He focused on making the message simple and ignored the fact that it's not correct.
What about a process so painful nobody has even thought of it?
And you never know if it can be done in an even simpler fashion later.
A good program does only the correct thing in a particular area. It is known to be reliable in that area, sometimes even formally proved to be so.
Outside that area, an ideal program refuses to work, because it detects that it cannot obtain the correct result. This is normally called "error handling".
There's also some gray area where a program may fail to reliably detect whether it can produce the correct result, given the inputs / environment. A reasonably good program would warn the user about that, though.
A "garbage in, garbage out" program is only acceptable in a very narrow set of circumstances (e.g. in cryptography).
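A minimal sketch of that "refuse rather than emit garbage" stance (a hypothetical example, not from the thread): check the input domain up front and raise instead of returning a wrong answer.

```python
import math

def safe_sqrt(x: float) -> float:
    """Square root that refuses to operate outside its known-correct area."""
    if x < 0:
        raise ValueError(f"square root is undefined for negative input: {x!r}")
    return math.sqrt(x)

print(safe_sqrt(9.0))  # 3.0
```

The error handling is the program admitting the boundary of the area where it is known to be reliable.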
Seconded. I'm highly confused at how many upvotes the OP has gotten in such a short time despite appearing to say that implementation details matter more than program output. A beautiful machine that doesn't work is, at best, a statue. I'm all for the existence of pretty things that do not need to demonstrate inherent practicality, but most people are not printing out source code for use as wallpaper.
I think the author made this a little inflammatory to get people to think about it in these terms.
Easier to extend, almost never. Proper design for extensibility carries an extra bit of complexity over the most obvious implementation.
Simplistic implementations tend to be tossed away and are good for unscalable prototypes.
Easier to learn, definitely not. The simplest code comes from deep understanding of the problem domain and algorithms. It is almost exactly like brevity in writing: saying less without losing the point.
It is easy to end up with simplistic instead of simple.
There is that famous quote by Dijkstra which I'd rather not butcher from memory.
What happens when it breaks? What happens when you need to produce doodads as well as gizmos, or a different size gizmo is desired? Who wants to reach inside the silly string and hope for the best?
I'm reminded of that old saying that even a broken clock is right twice a day; an overly complicated piece of software that produces the correct output is only coincidentally correct. Which I think is the point of the article.
I think the author implicitly assumes the software basically works right from the beginning of the article.
Folks are using "simple" and "easy" interchangeably here. That's probably inappropriate.
That said, Coq itself is not the best vehicle for this. There are nicer high order logic languages.
Agreed, see Rich Hickey's "Simplicity Matters" presentation on the difference.
Simple-Complex vs Easy-Hard
Third is performance.
1. Write a working piece of software that does the job.
2. Refactor to make the working piece of software do the job more efficiently and elegantly.
3. Refactor to make the working piece of software do the job as fast as possible.
I've never thought of simplicity adding upfront cost. That's probably true, but also true that it pays dividends later on in the project.
I think of refactoring as a series of SIMPLE transformations that clearly do not have any effect on the correctness (or incorrectness) of the code. That is, there is no possible change in behavior.
And think of the word "factoring" as in high school algebra, or rather "factoring out" something.
I have a dozen instances of this calculation. How about we refactor it into a function and replace all the instances with a function call?
This kind of transformation is precisely what the person who coined the term meant: Taking code which works and turning it into easier-to-read code which works precisely as well, because refactoring never introduces a change in behavior.
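A toy before/after of such a behavior-preserving transformation (the tax calculation and names are made up for illustration):

```python
# Before: the same calculation repeated at each call site.
total_a = 100.0 + 100.0 * 0.08
total_b = 250.0 + 250.0 * 0.08

# After: the duplicated expression factored out into one function;
# every call site becomes a function call with identical behavior.
def with_tax(amount: float, rate: float = 0.08) -> float:
    """Return the amount plus tax at the given rate."""
    return amount + amount * rate

assert with_tax(100.0) == total_a  # no observable change in behavior
assert with_tax(250.0) == total_b
```

The asserts make the defining property explicit: the refactored form computes exactly what the duplicated form did.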
To quote Martin Fowler and Kent Beck:
> A change made to the internal structure of software to make it easier to understand and cheaper to modify without changing its observable behavior… It is a disciplined way to clean up code that minimizes the chances of introducing bugs.
Not a direct quote this time:
> Fixing any bugs that you find along the way is not refactoring. Optimization is not refactoring. Tightening up error handling and adding defensive code is not refactoring. Making the code more testable is not refactoring – although this may happen as the result of refactoring. All of these are good things to do. But they aren’t refactoring.
As code is originally written, people are (or should be) using the most "obviously" simple approach.
A breakthrough in simplicity is often the result of additional thinking and hard work (and cost).
So correctness is generally never satisfied in my mind. At any given moment, the programs I am working on are in some way broken. Even if the other programmers thought that correctness was priority number 1, I will never consider the program correct. I will always suspect there is some snake in the grass.
I suppose you could feel the same way about simplicity. I think the most charitable stance would be to give them the same level of importance. Overly complex code cannot easily be proven to be correct amid changing business requirements. Easily testable, complex code with a full functional test suite is at least less simple in one sense. Patently incorrect code is hardly valuable regardless of how easily one can understand its function.
It is relative preferences, more about what takes precedence over what than an absolute measure. Nothing is ever perfectly correct, nor perfectly simple nor perfectly fast.
One way to achieve greater simplicity is to negotiate for fewer/simpler requirements for the first revision. There's often a core set of functionality that can be implemented correctly in a simpler way, and that gets the work done. Once that's in place it's interesting to see how often people lose interest in what were "hard" requirements before. It's also common that new asks have little to no resemblance to those unimplemented features, and are instead things that they found out they needed after using the new system.
> Under these conditions it's important that the software be amenable to change.
At the same time, under all conditions, it is important that the software actually works (i.e. correctness), which is why it's more important than simplicity. Irate users who come to us telling us that our program doesn't work will find little comfort as we regale them with how simple it is.
First, make it correct. Then, make it simple. If requirements change what correctness means, then make it correct again, then make it simple again.
He also talks more abstractly about the value of software (as opposed to hardware for instance) being primarily in its "soft"-ness, or ease of changing.
Ultimately this comes from his point of view as an architect, who fights more for system design than say, a PM might for user features. I've encountered the opposite school of thought that says: MVP to deliver features, refactor/rewrite later. I think the strategy to use will depend on the project and team (budget, certainty, tolerance for failure, etc)
- John Ousterhout
It is true mainly for one-time contracts where you actually might not care about simplicity at all. Enough is enough.
However, in the case of iterative projects keeping complexity under control has much higher priority including top priority for very big projects. Complexity and high entropy can easily kill everything.
This may be a matter of definitions. It may be worthwhile to distinguish between general correctness and full correctness, i.e. as close to 100% provable correctness as you can get. That distinction lets us dismiss clearly degenerate cases (you can always write a one-statement no-op program that is simple but does nothing).
General correctness is what I want in most cases. Example: voice dictation. It requires a final read & polish, but errors are infrequent enough to save me a lot of time. Full correctness is usually requested for jet avionics, nuke power plant control, etc.
With that addition one should optimize for general correctness and simplicity as a first goal, full correctness and performance as a very distant second.
When I write software (or build systems) what I end up with is usually significantly different from what I started with; not externally, but under the hood. Keeping designs simple (on large teams being almost militant about it) helps large systems morph as it goes from a proof of concept into an actual thing. My 2c.
Which is the root of the endless back-and-forth in this thread: a program has to do what it says on the tin ("general correctness") before anything else, and then probably be as simple and as "fully correct" as possible. But it's easier said than done for us to posit a distinction between general and full correctness than to actually find exactly where the dividing line lies between the two. A blog post to discuss such a dividing line might have been valuable, but the one we've got here unfortunately just handwaves away all the hard questions.
"A complex system that works is invariably found to have evolved from a simple system that worked. A complex system designed from scratch never works and cannot be patched up to make it work. You have to start over, beginning with a working simple system." - John Gall
Yes, but what is "correctness"? It's not usually so binary. Get to "good enough" and move on to the next thing.
I agree with this.
Interestingly, the post is very simple, and not correct. I prefer posts which are slightly more complex but correct, but those don't get as many upvotes.
More info here: https://en.wikiquote.org/wiki/Donald_Knuth
In that sense, simplicity is like insurance against the future, and so at any given moment you don’t solely care about the system’s total correctness or performance right now but also you care about some diversification benefit of investing in simplicity too.
Very much like how you don’t choose stocks based solely on what will have the highest expected return right now, but instead you also incorporate some notion of risk management when optimizing.
Simple correctness is the best way to create beginners that use software to get faster results. Fast isn't all about computation - it's taking the least amount of the user's time as reasonably necessary.
I’m a novice of sorts. Thanks.
Strong disagreement here. A program that isn't kept simple will stop being correct, fast, or any desirable quality over time.
Not always. Have you ever used a SNES emulator? There is one emulator that is more correct than all others combined - it's called BSNES and it's the most true to the original SNES hardware of all the available emulators. Yet it is horrifically memory/cpu hungry - that correctness comes at a huge cost.
So no, correctness does not always come first, especially if you value other things like user experience.
There's no clinical definition of correctness here. Intent matters.
I believe that it does so by attempting to mimic the working circuit logic and chips, the physical hardware, within code alone, hence it requires a powerful computer. This is an incredibly unoptimized way of doing it, especially since it's formed out of incorrect assumptions on what "accurate emulation" is.
It's the effects that we want, not the logic. If you're going to emulate something that, through common sense, shouldn't even require that much power, you're doing it wrong.
The saying goes, "keep it simple, stupid!" Overcomplicating things, like the programmer of BSNES did, results in unwieldy and unoptimized code.
Even Nintendo doesn't use this tactic in their official emulators. Yeah, sure, they're known to be inaccurate at times, but that's only because Nintendo isn't aiming to build a general emulator that handles all case scenarios. Besides, many of the inaccuracies, as far as I can tell, involve undefined behaviors of the system, something only glitches and bugs ever take advantage of.
But I'll assume that you want the software that calculates your paycheck to be correct.
I think you're using a different definition of "correctness" than most other people in this thread. Which is understandable, a lot of folks are using different senses of it. What matters is not, "Does this perfectly and unobservably play hardware" in the definition of correctness for an emulator. What matters is, "Can this emulate the cart I want to play right now with a good experience?" and perhaps, "Will this allow a malware maker to own my entire computer if they run a cleverly crafted fake cart file?"
Another thing: what was an entire program back then, is sometimes a mere function, or maybe a class or code library today.
The conclusion I came to personally was always
Accuracy > Maintainability > Performance
in that order
Consider timezones: it's simpler to pretend there's 24 time zones, one for each hour. But the correct assertion is there's 37 time zones (as of this writing). So, the simple solution results in a third of your potential user base having issues.
Other issues to pick: accessibility, cross-browser compatibility, legacy device compatibility... the list goes on.
That's a flagrant example of "simple and wholly incorrect". If you don't store timezones, your future dates will eventually turn out incorrect when timezone offsets change: create a meeting at 9AM local, store it as UTC, the country decides not to follow DST that year, and bam, your reminder will ping an hour early or late.
Or a day off when the country decides to jump across the international date line (https://en.wikipedia.org/wiki/International_Date_Line#Samoan...).
Unless restrictions are specified I will assume we're talking about the general case, and for the general case it's just plain wrong.
There are very few applications that need to schedule events into the future, and that is literally the only situation where you have to worry about the timezone.
Btw, keeping the timezone is insufficient as well if you're building a calendar/scheduler.
If the user changes the timezone after scheduling the event... do you keep to the old one and alert him whenever, or do you adjust? There are a lot of edge cases with schedulers -- yet as I said before, most applications don't schedule into the future. They're mostly just doing things right now or within the next few minutes and keeping a log of their actions.
> There are very few applications that need to schedule events into the future, and that is literally the only situation where you have to worry about the timezone.
My experience is the exact opposite: there are few applications which only store past dates, and in those said date is usually indicative/barely even relevant and could just as well be part of a freeform comment or removed entirely.
The solution isn't incorrect, it is modular.
So you provide an alarm clock which works as neither a clock nor an alarm.
> The solution isn't incorrect, it is modular.
It's either not correct or not a solution, either way it's useless.
I also like how proponents of "simplicity at all cost" apparently assume/assert the composition of two systems is no more complex than either, and that there is no additional complexity to the composition layer.
Know your data, and most of your problems get easy.
Even "notify me in exactly 24 hours" has its own complications. Leap seconds will screw up your day (as will the vague request of "exactly 24 hours").
Corner cases, the bane of simplicity everywhere.
How about another example? You're building an android application. Let's pretend there's an API in the latest version of Android that reduces a dozen lines of code down to one function call - ShinyNewMethod().
You can use that ShinyNewMethod() call. It's certainly simpler.
But the vast majority of Android devices in use are not running the latest OS. So ShinyNewMethod(), while being simple, will cause your app to not work for them.
Hopefully the framework designers figured out a way to backport this automatically for older devices, but that's not always guaranteed.
Make it work.
Make it work right.
Make it work fast.
In that order.
Now it could be argued that "work right" can be read as "make it (work right)", or "(make it work) right", or both, but I think the point of this saying is that the "fast" part should always come later.
If your software doesn't solve the problem, it's useless, no matter how correct or fast it is. Once it solves the problem, then you can work on making it bug free and elegant. Once you're done with that, only then should you look at making it fast.
Note, of course, that "make it fast" refers to gratuitous optimisation. If it's too slow to solve the problem, then it doesn't work, and that needs to be fixed.
A similar adage states the rules of code optimisation:
1) Don't do it.
2) (For experts only) Do it later.
Once you've got this pretty optimized and it's still taking up the lion's share of your execution time, you have to look elsewhere (probably changing your overall approach or applying some higher level optimisation) to improve things further.
Sometimes it is quicker to start with just that instead of "polishing a turd". You can get it to be shiny but still nowhere near as shiny as gold.
Hope the code is testable and reasonably easy to modify. Otherwise it's going to be a rewrite.
The profile is then useful as a benchmark on real data. If you have enough time, you can turn that into a high level performance test.
By the by, is there more than one kungtotte on the Internet? It took me a minute to think why that name was so familiar, but then I remembered watching a few hundred Beaglerush videos.
I've used this handle for a long time though (20 years or so), so it's all over the internet.
If you're into strategy games, XCOM is good, and Long War is matchless. However, Beaglerush is actually surprisingly entertaining even if you don't care for his subject; the girlfriend is still not much into the game, but after the first couple episodes she insisted on watching the other hundred-thirty-odd videos. It's probably not everyone's cuppa, but it could be a thing.
Way back when a bunch of us put our names into a custom name file for XCOM so people could make campaigns featuring ShackTac people instead of generic dudes. I completely forgot about that until you reminded me. It must be five years since I talked to him :)
I take this to the extreme, I probably wouldn't implement the complicated API / regex chain without 4+ hours of reading documentation and other research. It bothers me that much. If it seems like a simple and common task, I refuse to believe that there isn't a simple API call already to do what I want, I just have to find it. Sometimes, the simple API call really doesn't exist though, and you have to do what you can, with some comments explaining why.
I've noticed some developers will implement the 4 API chain followed by a regex as soon as they find it, and never give a second thought that there might be a simpler way.
The "make it work" implies a level of correctness & performance that is acceptable, which is why any subsequent steps are afterthoughts.
If you skip that, you will relatively quickly reach the point of a full rewrite.
Changes are not necessarily easy or even possible to make safely if things are correct but not simple.
"Simple" is a proxy for "can be changed safely" and so IMO is the most important quality to have.
If simplicity helps achieve correctness, then great.
But correctness is not always simple.
Most people think there is a leap year every four years. They are wrong.
In their minds, when things stop being simple, fast trumps correct.
If I'm writing a file backup program, it absolutely must back up all the files without leaving any out or corrupting the data.
But let's say it has a feature that prints progress indicator percentages on the command line, ranging from 0% to 100%. Maybe under certain circumstances (like files added to a directory after the backup starts), it prints 102%. It's not what I had in mind, nor is it something I'd call correct. But if fixing it complicates the code a lot, maybe leaving it that way is the better choice.
(This is a bit of a contrived example because you could just clip the value at 100, but you get the idea.)
Abstraction allows you to hide complexity and make it a simple, reusable part again.
Plus, a complete leap year implementation is already what I would consider simple and most standard libraries already have an implementation for it you can use directly.
That assumes your user base is evenly distributed across time zones. The US has 6 time zones, but if you only handled 4, you'd cover 99.3% of people.
The problem is always: how do you get enough correctness that your customers are happy, without spending so much time on it that rivals overtake you?
In your example, I would take simple to mean "only use UTC". As soon as you need timezones I would say you've moved into correctness territory and need to do them all properly (you would use a good library, of course).
> The complex problem comes later, and it’ll be better served by the composition of simple solutions than with the application of a complex solution.
Complicated problem domains can be made into simple ones by breaking them down into their constituent components. You can solve time zones by having 5000 Rube-Goldberg-esque lines of if/else-if statements, or you can organize the system into simple components that build on each other.
At any given component or level the problems are clear, simple, and identifiable, and the complexity arises as the components join to form abstractions upon which higher levels operate.
Premature abstraction is a kind of premature optimization, except you're not buying performance.
Bad abstractions tend to stay in for a long time.
What matters is clear delineation between functional components and weak binding, so that internals can change, and that the interfaces are relatively minimal.
OpenBSD is not bug free at all, it is just security oriented in the implementation.
Windows got traction because it had even better hardware support, a bunch of backroom OEM deals, and nice UI features (at the time of 95), then went far on software availability.
In any case, I think it's also worth mentioning that the article is probably talking about "incorrect" as in "accidental bugs", not as in "purposely ignoring the complexity of the problem" -- the idea being to prevent over-engineering rather than to dismiss the specifications of a valid solution.
I meant more for dealing with edge cases like an external API invocation failing. A final “correct” implementation would need to handle failures but an initial simple one would only handle success.
It is not possible to store UTC unambiguously on the db server for all future local wall-clock times. (Previous comment about the erroneous assumption of "UTC everywhere" being a "simple solution".)
Therefore, redefining "correct" to be "store UTC everywhere" achieves the exact opposite: an incorrect and buggy program. That's because the "universal" in Coordinated Universal Time doesn't account for governments changing DST and time zone rules in the future.
Pure UTC doesn't have enough metadata to encode future unknowns. For correct handling with zero loss of data, one must store the user's intended "wall-clock" time in his local TZ in the db.
Congratulations, your emotional refusal to deal with zoned datetimes has led you to a non-standard, ad-hoc reinvention of timezones. Your misguided quest for simplicity and obstinate rejection of reality has thus led you to a system that is definitely more complex, probably less correct, and likely less performant than if you'd just done the right thing in the first place.
I've commented previously that it's not a good idea to change the rows of UTC times in the database.
Designing "system correctness" to depend on the reliability of correctly written SQL statements completing atomic transactions across millions of rows is not a good idea. In addition to being extremely fragile, batch db updates of UTC are also not simple.
(It's fascinating to note that multiple programmers independently arrive at the approach of updating database rows of UTC times. There's something cognitively satisfying about it that attracts repeated reinvention.)
We should store the actual time of an event and update it when the scheduled time changes.
A countdown timer is a runtime concept.
Storing pure UTC and/or intended_localtime_plus_TZ in the database is a static concept of data-at-rest.
A timespan/timer is a different abstraction than a desired point-in-time.
Depending on the use case, the correct timer/timespan value can be derived from pure UTC (e.g. scientific celestial events) -- or -- user_specified_localtime_plus_TZ (recurring yoga class at 5:30pm every Wednesday, or take medication every morning at 7:00am).
For user calendaring and scheduling of social appointments, storing pure UTC will lead to data loss and errors. Instead of complicated mass updates of millions of db rows, it's much more straightforward to take a stored localtimeTZ, and then calculate an up-to-date UTC time at runtime, and then derive a countdown timer from that. The key insight is that the best time to use UTC is when the users need that timer at runtime -- and not when they store the row in the db.
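As a sketch of that approach (the row layout, field names, and dates here are hypothetical; it uses Python's stdlib `zoneinfo`):

```python
from datetime import datetime, timezone
from zoneinfo import ZoneInfo

# Hypothetical stored row: the user's intended wall-clock time plus an IANA zone name.
row = {"local_time": "2025-11-05T17:30", "tz": "America/New_York"}

def utc_instant(row):
    """Derive the UTC instant at runtime, under whatever TZ rules are current."""
    local = datetime.fromisoformat(row["local_time"]).replace(tzinfo=ZoneInfo(row["tz"]))
    return local.astimezone(timezone.utc)

# If a government later changes the rules for America/New_York, no rows need
# rewriting; only this runtime conversion (backed by updated tzdata) changes.
print(utc_instant(row).isoformat())  # → 2025-11-05T22:30:00+00:00
```

The UTC value is computed on demand from the stored intent, so the database never has to be mass-updated when timezone rules change.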
I would love to see some (simple) code which will send a single alert to me at 1:30am and another at 2:30am. My client registered me as MST (-7) when I set these two alarms in February.
Of particular note for corner cases: Nov 3rd and Mar 10th, 2019.
The "scheduled time" will change, for many locations, twice yearly.
EDIT: For added fun, instead consider the registration date as May 10th with the same timezone.
Seriously? That, in your opinion, is simpler?
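For what it's worth, the corner cases above can be probed with a few lines of Python (assuming America/Denver as the MST zone; dates chosen to bracket the 2019 US DST transitions):

```python
from datetime import datetime
from zoneinfo import ZoneInfo

denver = ZoneInfo("America/Denver")  # MST (-7) in winter, MDT (-6) in summer

# Alarms registered in February as local wall-clock times; the UTC offset of
# the same wall-clock alarm shifts across the DST transitions.
for month, day in [(2, 15), (3, 10), (5, 10), (11, 3)]:
    for hour in (1, 2):
        local = datetime(2019, month, day, hour, 30, tzinfo=denver)
        print(local.isoformat())  # offset flips between -07:00 and -06:00

# Note: 2:30am on 2019-03-10 does not exist in Denver (clocks jump 2:00 -> 3:00),
# and 1:30am on the fall-back date occurs twice; zoneinfo silently picks one
# mapping (the `fold` attribute disambiguates), so a one-time "convert to UTC at
# registration" scheme fires these alarms at the wrong instant half the year.
```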
You should probably mention somewhere that you're the author of the blog post under discussion. And it looks like you're going to make a reputation for yourself as the guy who argues that it's more important for software to be simple than for it to function correctly.
Good luck with that.
"The single most important quality in a piece of software is simplicity. It’s more important than doing the task you set out to achieve. It’s more important than performance. The reason is straightforward: if your solution is not simple, it will not be correct or fast."
It praises a quality that is great as an add-on, not really by itself. Pretty sure everyone prefers a complex thing that "does the task" to a simple one that doesn't.
Simple "helps". Simple never "does". I think the author's values are a bit mixed up.
>The single most important quality in a piece of software is simplicity.
How panglossian, imagining the best of all possible worlds. Well, the world is intrinsically complex, as Fred Brooks explained in his No Silver Bullet essay from 1986.
"The complexity of software is an essential property, not an accidental one."
Sure, there is accidental complexity in most software problems, that can be tackled with skill and experience, and maybe reduced to zero. But then you are left with the essential complexity of the world. And you are done reducing the complexity; you can only manage it from then on. The world is very, very complex and it is a pipe dream to imagine that we can eliminate its complexity just by some bold engineering.
In a sense, this post is simply stating the obvious.
The biggest differentiator of skilled software practitioners is the ability to construct simple systems.
To call this claim panglossian or meaningless is to hold the philistine line that this skill set doesn't matter, that any complex system is effectively the same as any ol' simple one -- don't worry about cultivating the skill, it doesn't matter anyway...
But simplicity is the single most important thing that matters in any system that is maintained, other than one-off scripts, hack jobs, etc. -- It's absolute torture to collaborate on a software project with anyone who rejects this premise.
And by that I mean, it has to account for more scenarios or do additional things... unless your software is growing in complexity while you're only removing features...
Bugs in software come from thinking we're simplifying the world in one way through the program, while in reality it receives a slightly different picture.
Faulty data models and system designs -- that aren't fixed -- lead to ever-increasing complexity. But that is the fault of the data model/designer.
I.e., there is a way to build (and grow) systems w/o linear increase in complexity -- but it takes a particular rare skill set.
I would say it's to construct simple enough systems, and the hallmark of skill is a developer's ability to define enough.
_One_ hallmark of developer ability is the discretion/wisdom/experience to know how flexible to make the thing. (How to prioritize and limit feature-creep, etc.)
But this is different than Simplicity. A general purpose programming language or database -- highly flexible/generic systems -- for example, can be built well/simple. But so can highly _specific_ systems.
In both cases though, one can build something that is decoupled and manipulable or one can build something that is coupled and rigid -- and the ability to do so is a function of skill set _not intrinsically_ a function of time. In other words, a skilled developer doesn't have to "take time" to deliver a Simple capability.
And sometimes it is good to have a replaceable head on the hammer.
A Hammer's construction can be Simple. A Knife's construction can be Simple. Or not.
It's possible to have a correct solution that is neither simple nor fast, and it can be worth your while to speed up a correct solution while sacrificing simplicity. So there are trade-offs involved in the relationship between simplicity and speed, but correctness is not negotiable. Acknowledging that all software has bugs is not the same thing as throwing out correctness as your first and primary objective in implementing an algorithm, and accepting that your solution may only be partial or fail with certain inputs is fine if that is acceptably correct for the problem at hand, but ascertaining that still comes first. Preferring simplicity over complexity because it makes debugging, profiling, etc. easier is not a reason to insist that correctness can go out the window in service to simplicity--who cares if you've removed all the bloat from your code if it's wrong?
"Some compilers allow a check during execution that subscripts do not exceed array dimensions. This is a help … many programmers do not use such compilers because “They’re not efficient.” (Presumably this means that it is vital to get the wrong answers quickly.)" (Page 85)
Languages like Ada SPARK or Rust tend to rarely use or need such runtime checks (they are available as an option to check unsafe code).
Others like Python and Java do check and give you traces. Not for free though.
And then you probably want something more powerful, such as a virtual machine like Valgrind; full sanitization of Address Sanitizer, etc.
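As a small illustration of the quote above, a language with mandatory subscript checks turns an out-of-bounds read into a visible error rather than a quickly delivered wrong answer (Python shown as an example):

```python
xs = [10, 20, 30]

# The out-of-bounds subscript is caught at runtime instead of silently
# reading whatever happens to sit past the end of the array.
try:
    print(xs[5])
except IndexError as e:
    print("caught:", e)  # → caught: list index out of range
```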
Sure... but define "correctness".
Suppose my manager comes to me with some incredibly complicated problem. It's going to take six months to solve properly. Suppose in the first three weeks I implement a program that is 98% correct, and let's say it can detect the other 2% and kick it out for a human to solve. But it clearly does not fully and correctly eliminate the problem as brought to me by my manager. Have I solved the problem?
The correct answer is not "no, because your solution is incorrect and there is no such thing as an 'incorrect solution' because all solutions must be correct to even be solutions; you have no professional choice but to spend the next 5 months and a week implementing the correct solution". The correct answer is "the question is underspecified". I need to go to the manager and work with them on the question of what the benefit of just deploying this is, what the benefit of doing it "correctly" according to the original specification is versus the cost, and whether or not there are any other in-between choices. The business may require the full solution, sure. On the other hand, your manager may be inclined to thank you profusely for the 98% solution in a fraction of the time because it was far more than they dreamed possible and is way more than enough to make the remaining 2% nowhere near the largest problem we have now.
"Correctness" is only fully defined in a situation where the spec is completely immutable. Specifications are almost never completely immutable. So for the most part, everyone in this conversation using the word "correct" without being very careful about what they mean are not using a well-defined word.
It's all about costs and benefits, not correctness and incorrectness. For nearly two decades, Python's sort algorithm was technically incorrect: http://envisage-project.eu/proving-android-java-and-python-s... Does this mean that any program that used Python sort was worth "nothing", because it was not correct? Obviously this is absurd (in practice at least), so correctness must be understood in terms of costs & benefits to make any sense. And such an understanding must also be grounded in an understanding of the mutability of requirements as well, to make any sense of the real world.
From this perspective, it honestly isn't even 100% clear to me what prioritizing "correctness" over everything else would even mean. That we are slaves to the first iteration of the spec that comes out, no matter what? (Obviously not, but I can't come up with anything better that it might mean.) Correctness can't be prioritized over everything else because it can only be understood holistically as part of the whole process. There is no way to isolate it and hold it up as the top priority over everything else. And there is no way for the correctness of a bit of software to exceed the scope of the specification itself, almost by definition, which in the real world tends to put a pretty tight cap on how correct your software can even be in theory, honestly.
“Simplicity first” means having a minimal skeleton code with glaring weaknesses, unimplemented features, and bugs, but having a simple design that sets you up well to absorb the inevitable shitstorm of changing priorities, pivots, revised performance constraints, feature wishlists, budget, deadlines, etc., and to manage extensibility, integration, or abstraction needs as they arrive in random, ad hoc ways.
Usually project stakeholders don’t care about absolute functional correctness, meeting performance criteria, or completeness until far far later in a project lifecycle, after those requirements have been thrashed around and whimsically changed several times.
Early on, they care about a tangible demo apparatus and solid documentation about the design and tentative plan of implementation. They want to see steady progress towards correctness & performance, but generally don’t care if intermediate work-in-progress lacks these things (often even for early releases or version 1 of something, they’ll prioritize what bugs or missing features are OK for the sake of delivery).
In terms of interacting successfully with the business people who actually pay you and determine if your project lives on or gets scrapped, “simplicity first” is a total lifesaver, and matters far more than any of the notions of correctness discussed here.
That begs the question (in the original sense) of "unacceptable". If I banged out that 2% solution in an hour, and it lacked other costs that outweighed the benefits, it may still be something we ship! It is unlikely that we'd stop there, just because the numbers as you've given are unlikely to favor it because something else substantial would have to overcome the small amount of the problem we've solved, but to be firmly confident it's "unacceptable" you'd have to define "acceptable" a lot more carefully.
I understand the deep temptation to turn to discussions of the virtues of letting bugs through or something, but the costs/benefits framework completely handles that already. If you ship a buggy piece of "incorrect" shit, well, you've incurred a ton of costs with no benefits. That's wrong, by whatever standards you are measuring costs and benefits by. There isn't a "what if your 98% solution actually has a massive bug in it because you were unconcerned about 'correctness'?" argument to be made, because if it does have a massive bug, it's not a 98% solution.
How about this, for example:
The patient lives.
But I would put $10 down that if I asked you to assert that all medical software in current use that has never killed a patient because of its software issues is therefore "correct", you'd walk back hard. You'd have to be crazy to assert that all such software is "correct".
Unless you are willing to make that assertion, you don't really mean that as a definition.
This is also an example of what I mean in my cousin message about the temptation to turn this into a discussion about attention-grabbing bugs. But my framework already encompasses that. Software that kills patients is software very high on the costs side. There's a complicated discussion to be had about how to exactly quantify probabilities of failure vs. cost, but you can't have that discussion if you're stuck in a "correct or not correct" mindset.
I made no such assertion. That's a straw man.
But I think I can safely assume that if the patient dies as a result of the software's functioning, that software is not "correct".
You may disagree, but I think it's preferable to have a patient kept alive by an overly-complex system than killed by a simple, elegant, incorrectly functioning one.
Not to be intentionally blunt or snarky, but I think Drew DeVault's post was a bunch of rambling, hand-waving nonsense. Until today, I wouldn't have expected anyone to seriously argue that simplicity is more important than correctness. But he comes along and makes that very argument, with a self-assured, authoritative tone, but very little in the way of concrete reasoning, and to my surprise, the number of people on HN who apparently agree with him is non-zero.
The simple code of the present was almost always written by someone who understands the problem domain really well, in one or two tries.
This is one of the reasons why I am suspicious about the long-term saliency of so-called "smart contracts" on the blockchain. The immutability of code, while super amazing for digital assets, seems like a horror-show of a liability for dApps.
In my own area of work (database engines), the common mistake is that inexperienced designers do focus on simplicity first, instead of correctness and performance, not understanding that it is at best difficult and sometimes impossible to add correctness and especially performance later. The fast win of "simple" can turn into nearly insurmountable technical debt when you are asked to deliver scale and performance. People often grossly underestimate the minimum amount of initial implementation complexity required for good architecture.
There are many types of software where "simple, correct, fast" is sound advice but it is far from universal.
My definition of simple software is software that I can validate the correctness of using only equational reasoning and the mathematical tools used to carry it out without any specialized knowledge or verification systems.
If I have to learn a new way to reason about a software system in order to understand it then it is complex.
A priori any system written in C fails this litmus test: one must understand and identify the many ways that undefined behavior can enter into their program and be leveraged by their compiler. One cannot reason about a local expression in the presence of global effects and unchecked side-effects. And if it is possible to write a correct C program it takes considerable effort and the use of very specialized verification tools.
There are many reasons to prefer C however; if we're willing to live within some tolerance of "correct" and "incorrect" then we can leverage a tool-chain that can produce highly performant code... but then we're forced to restrain ourselves from introducing complexity instead of spending that effort on other things.
One of the best clarifications of what it means to be Simple, to put it out there, is Rich Hickey's talk "Simple Made Easy"; but the key point: Simple != Easy.
Simple means minimal coupling, high-cohesion etc etc.
Yet IME many developers do not understand the distinction and mistakenly believe that easy is the same as simple, and are willing to couple the hell out of the world under some false notion of "simplicity"...
As in math, you come up with the "simple" solution of 0.5 only after you've realized that the "complex" solution is, for example, "sin(pi/4) * cos(pi/4)". There might be no other way to discover the simple solution.
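The arithmetic checks out: by the identity sin(2x) = 2 sin(x) cos(x), sin(pi/4) * cos(pi/4) = sin(pi/2) / 2 = 0.5. A quick numerical sanity check:

```python
import math

# The "complex" expression and the "simple" answer agree (up to float rounding).
complex_form = math.sin(math.pi / 4) * math.cos(math.pi / 4)
print(abs(complex_form - 0.5) < 1e-12)  # → True
```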
> A complex system that works is invariably found to have evolved from a simple system that worked. The inverse proposition also appears to be true: A complex system designed from scratch never works and cannot be made to work. You have to start over, beginning with a working simple system.
A complex system with a good workable and testable architecture will work, starting with passing the tests down to satisfying the user...
Such systems are not designed in detail but in general, and usually start with a single, simple but powerful overarching idea, which is actually quite complex to implement, but ends up working evidently well once even halfway done.
Examples would be message passing architecture, event driven programming, time tracking, microservices, reactors, literate APIs, contract programming, Model-View-* and more...
Note how half of those deal with reducing coupling by adding complexity.
I also disagree. I have yet in my life to see any programmer crank out a simple solution on the first try for anything that isn't a trivial requirement. The way I think most of us work is to create a complex solution first and then refactor at least a couple of times before we get to simple and elegant. That doesn't mean the complex version didn't work.
I've made plenty of code that is bug free according to the requirements. I tend to start with tests and I'm pretty good at figuring out edge cases and other ways to break my code before I've even written it, so what I end up with is pretty robust. But the first version is rarely elegant or simple. By the time I'm done with the first version I understand the problem space so much better and might throw out 90% of my original code in the first refactor. Am I the only one doing this? Sometimes it even takes weeks or months to get to simplicity. I keep understanding the requirements better and better and noticing how I could eliminate code, often after I've noticed some code I'm still not happy with and having slept on it. Sleep does wonders for seeing how to simplify.
If that's what he meant, he's flat wrong. Simplicity is neither necessary nor sufficient for correctness.
I have a simple solution and a complex solution. Does the simple solution meet the requirement(s) before me? If so, I prefer it. Let's move on to the next requirement and consider my options again.
The alternative might be to look at your requirements, but choose a complex solution (over a simple one) because you think it might meet other requirements, either ones that have not yet been identified or ones you think are likely to happen in the future.
Are there times that the more complex solution wins? Probably. Consider you want to write a blog. You know that you can create an HTML (text) file and slap it on a web server and your blog has started. But if you've done this before, you might also know that you can throw WordPress on your server for a little more up-front pain. You know you want comments and word clouds and date/time stamps and navigation. So you choose the complex solution. (You also know that you now face potential security implications, upgrades, dealing with users causing trouble with comments, having the PHP/MySQL infrastructure/hosting requirements...) Maybe you just wanted to dump your thoughts to the internet. Maybe the text file approach was better...
It may just be another way to say "avoid gold-plating your software."
But simply meeting requirements is only doing the minimum possible. Now in a government job, that's okay.
But in a real job, if you see where the simple solution is OBVIOUSLY wrong for certain likely cases not considered in the requirement, then THE REQUIREMENTS ARE WRONG, or incomplete and this should be pointed out!
And yet, if you try to compete with Intel with a CPU missing the above optimizations, you will get absolutely creamed in the marketplace. No one, not even those touting the importance of simplicity and correctness, will buy what you're selling.
Today's free market is too complex for these overly simple rules. Choosing between simplicity, correctness and performance, is a complex tradeoff that needs to be made on a case-by-case basis. Trying to find shortcuts to avoid these analyses may feel liberating... but you're ultimately only shooting yourself in the foot.
Also, there's the case of Intel losing to in-order ARMs in mobile. First with XScale, and later on with the in-order Atoms. (https://appleinsider.com/articles/15/01/19/how-intel-lost-th...)
And yet, if someone tried to sell a server CPU today that was not pipelined, not OOO, and didn't have branch prediction, it would absolutely tank in the marketplace.
I never said that performance optimizations should always be implemented. Just that performance optimizations should sometimes take precedence over simplicity.
You could sell it as a niche product for high security applications, since OOO execution is a nasty side-channel.
The first CPUs were simple as heck. They spent 20 years making them reliable (and faster without compromising simplicity, mostly just node shrinks), and they've only really been complicating the architecture in the last 30 years.
If you insist on only hiring chauffeurs who drive at 100mph, you can hardly complain when they get into a few accidents.
That being said, consider me lined up to buy one of these CPUs.
If you can find a CPU that has the same number of non-cache transistors as an Intel/AMD chip, but spends them on a larger number of simple (and preferably independent/non-hyperthreaded) cores, rather than squandering them on speculative execution and ten thousand obscure model-specific registers, I would absolutely buy several of them.
1: and similar amounts of cache, of course.
Very niche products.
For massively parallel number crunching, GPUs are much better in both performance/watt and performance/dollar. That Xeon Phi 7290 delivers up to 3.45TFlops, costs $3200, and consumes 245W. Compare with GeForce 1080Ti 10.6 TFlops, $700, same 250W.
For general-purpose software they don’t work particularly well either. Most IO interfaces are serial (SATA, PCIe), with very few wires going to the CPU. If you’re IO bound and you don’t have enough single-thread performance you’ll struggle to saturate the bandwidth; doable but very hard.
Also for general-purpose software latency matters. Namely, input-to-screen latency for desktops and mobiles, or request-to-response latency for servers. Get a Windows or Android tablet with an Intel Atom Z8300 (available for $80-100), and see how it performs; it has 4 very similar cores (minus AVX-512), and the frequencies are very similar, too.
It isn't simple, it's designed to be incorrect (and even the parts that are supposed to be correct aren't), and I'm not surprised it fails on fast as well.
X86-64, SSE, AVX, AVX-512, AES-NI, etc. Their key selling point is software compatibility.
> Intel does not make [simple cores]
The cores are quite simple by today’s standards; otherwise Intel wouldn’t be able to pack 72 of them on a single chip. IME is unrelated to the cores, it’s a separate piece of silicon.
But if you don’t like the IME and don’t need backward compatibility with x86, maybe you’ll like this: https://www.qualcomm.com/products/qualcomm-centriq-2400-proc... But again, performance benefits of the architecture (48 simple cores) is questionable, GPUs are way faster for parallelizable number crunching, and you need single thread performance for almost everything else.
So it has the ten thousand x86 and x64 registers in addition to the ten thousand PCIe registers?
> The cores are quite simple by today's standards
That's my point; today's CPUs don't have "ultra-simple" as an option (at modern feature densities).
> IME is unrelated to the cores
Fair point, I probably should have added "and doesn't have technicalities like builtin malware" to my original post.
This looks interesting, although I'll need to research a bit more (and "SOC - Features - Integrated management controller" isn't encouraging). Thanks!