Simple, correct, fast: in that order (drewdevault.com)
519 points by Sir_Cmpwn | 344 comments

When I was in school in the '70s (that's NINETEEN seventies), there was this book called The Psychology of Computer Programming. It predates the microcomputer era as we know it; punched cards were still common when the book was written.

A computer was to control a new assembly line for a car company. They couldn't get the software to work. They called in an outside insultant. The outsider developed a program that worked. (It was more complex.) The book was about the psychology part: the original programmer asked, "How fast does YOUR program process a punched card?" Answer: "About one card per second." "Ah!" said the original programmer, "but MY program processes ten cards per second!"

The outsider said, "Yes, but MY program ACTUALLY WORKS". If the program doesn't have to work, I could make it read 100 cards per second.

Correctness comes first. Simplicity is highly desirable, adds additional cost, but always comes after correctness.


Your example is a case of premature optimization. That is not what the author is concerned with.

The problem is not the programs that obviously do not work, or that break in a very visible fashion. Programs whose deficiencies are known can be fixed or worked around.

The real problem is programs that appear to work correctly but aren't.

To say it with the words of Tony Hoare:

    There are two ways of constructing a software design:
    One way is to make it so simple that there are obviously
    no deficiencies, and the other way is to make it so
    complicated that there are no obvious deficiencies. The
    first method is far more difficult. It demands the same
    skill, devotion, insight, and even inspiration as the
    discovery of the simple physical laws which underlie the
    complex phenomena of nature.
Source: 1980 Turing Award Lecture; Communications of the ACM 24 (2), (February 1981): pp. 75-83.

That quote actually refutes the OP by reinforcing that correctness is more important than simplicity. Achieving correctness is the whole point of making things simple, after all. To put simplicity before correctness would be missing the forest for the trees.

In terms of designing solutions, I would say that "correctness" is relative to the problem statement at hand. It's also a matter of degree, not an absolute: something may be correct and incorrect at the same time, depending on the context. From this I would prefer simplicity over correctness, to allow for ease of optimization.

Correctness is achieved by simplicity.

    For every problem there is a solution that is simple, neat - and wrong.

Indeed - for example, you can make things look simple by leaving necessary parts out.

I think, however, that the more important part of this quote is the words 'problem' and 'solution'. Until you have an understanding of the problem that is correct, it is unlikely that you will come to a solution at all. Avoiding the introduction of gratuitous complexity is not necessary for reaching that understanding, but it sure helps.


Clearly, that solution isn't simple enough.

> Correctness is achieved by simplicity.

That's... literally what I just said? "Achieving correctness is the whole point of making things simple, after all."


That's... literally what the author said.

    If your solution is not simple, it will not be correct or fast.
Correctness may be the end goal. But correctness is absolute, so it is a bad performance indicator to set as the goal. Yes, we can track bugs, but the absence of open bugs is no guarantee of correctness.

I can never say "We are 5% more correct than last week. Keep up the good work!"

Simplicity is a much better goal for the day-to-day work. Because it can be tracked, measured and evaluated for every individual change.


> That's... literally what the author said.

Excellent, so we both agree with the author that correctness is the ultimate point and that simplicity is just a useful tool for achieving correctness. :)

> Simplicity is a much better goal for the day-to-day work. Because it can be tracked, measured and evaluated for every individual change.

How does one purport to measure simplicity?


I was considering ASMs (Abstract State Machines) for it:

http://pages.di.unipi.it/boerger/Papers/Methodology/BcsFacs0...

My thinking was like this. The complexity of software is synonymous with us saying we don't know what it will do on given inputs. As complexity goes up, it gets more unpredictable. That's because of the input ranges, branching, feedback loops, etc. So, a decent measure of complexity might come from simplifying all that down to the purest form that we can still measure.

The ASMs are a fundamental model of computation, basically representing states, transitions, and the conditionals making them happen. So, those numbers for individual ASMs and combinations of them might be a good indicator of the complexity of an algorithm. And note that they can do imperative and functional programming.

What do you think of that idea?


> ...reinforcing that correctness is more important than simplicity

It's the other way around. Correctness is obviously the goal (and likely performance too, depending on your use case), but the way to achieve it is through simplicity. So simplicity should be prioritized - as it allows you to ensure correctness.


I'm glad that we can agree that correctness is the goal, though I still take umbrage to the blog post's title, thesis, and conclusion. :P

By that logic, "fast" goes before "correct"; you can't print the answer quickly if you don't have the answer, after all.

> if your solution is not simple, it will not be correct or fast.

The point of the article is that "simple" is a prerequisite of "correct" (and "fast").


Yes but I think OP is saying that, paradoxically, prioritizing correctness over simplicity actually makes correctness more elusive than if simplicity were prioritized.

We reached the maximum thread depth.

>> Simplicity is a much better goal for the day-to-day work. Because it can be tracked, measured and evaluated for every individual change.

>How does one purport to measure simplicity?

There's 40 years of research into that. And loads of tools to support dev teams.

You can start here: https://en.wikipedia.org/wiki/Cyclomatic_complexity

Also related are costing models: https://en.wikipedia.org/wiki/COCOMO


Derek Jones argues that McCabe complexity and COCOMO are scientifically unsupported, with little bandwagons pushing them for reasons of fame and/or funding:

http://shape-of-code.coding-guidelines.com/2018/03/27/mccabe...

http://shape-of-code.coding-guidelines.com/2016/05/19/cocomo...


We also have 40 years of research into improving program correctness, e.g. static analysis, test suites (unit, integration, etc.), fuzzing/mutation testing, and the benefits of code review. The idea that simplicity (which I'm pretty sure that nobody in here is using to specifically mean "the lack of cyclomatic complexity") can be measurably improved but that correctness cannot is incorrect.

> The idea that simplicity (which I'm pretty sure that nobody in here is using to specifically mean "the lack of cyclomatic complexity") can be measurably improved but that correctness cannot is incorrect.

Have you seen a program that comes with a formal proof of correctness? I have. And boy, they are really simple.

The end result can be complicated. But the program is broken up into small, simple, easy-to-understand pieces that are then composed.

http://spinroot.com

https://frama-c.com


I think maybe you mistakenly assumed that response was in opposition to your comment; I read it as a simplification and restatement of what you said.

No, that's just the easiest path to it if your only tool is an unaided human brain.

That doesn’t mean simplicity is more important than correctness. The simplest program ever is an empty file, and it doesn’t solve any problem.

Depending on the interpretation of terms, I'd agree with either simplicity or correctness first. To disambiguate I would say:

  Working, simple, correct, optimized.

Would deffo agree to this.

My approach is usually to send out a PR as soon as I can to a group of reviewers / users, and it goes in the following stages.

1) POC - proof of concept. It does 90% of things; some parts are ugly and messy, but it validates a hypothesis. The unknown unknowns are discovered. I want to stage this and get this in front of some alpha internal users as soon as I can. First-pass reviewers give a thumbs-up on the plan of attack. Lots of sub-TODOs are listed in the PR. The goal is to discover edge cases and unknown unknowns.

2) Simple - Go through PR and refactor any existing / new code so it’s readable and DRY. If reviewers don’t understand the “why” of some code, a comment is left. Now 90% of scenarios are covered, probably some edge cases may not work but the edge cases are known. The code is simple and at right layer of abstraction.

3) Correct, Testable - Edge cases are covered, tests are written, internal users have validated that the feature is working as expected.

4) Polish - if it’s slow, then slow internals are swapped out for fast parts. Tests would mostly work as is. Same with the UI: CSS fixes to make it elegant and pretty.

Sometimes the process is a day, sometimes it’s a week.


> Your example is a case of premature optimization. That is not what the author is concerned with.

I think he is. Premature optimisation is putting the order: fast, simple, correct.

So although the author doesn't explicitly state it, premature optimisation is something that would be avoided if you followed his advice.


>> They called in an outside insultant.

This is either a great typo, or a hilarious moniker I have somehow missed (almost 40 years in the business). Either way, it's worth recognizing.

Equal parts hilarious and accurate as "/in/con/sultants" are often brought in to play the part of the court jester -- they can speak the hard truths no-one else could, and survive.

>>"Yes, but MY program ACTUALLY WORKS". If the program doesn't have to work, I could make it read 100 cards per second.

I think I wrote a device driver like that, more than once. :( Fast as hell, to the point of outstripping the bitrate of the device it talked to, and about as useful as a sailboat on the moon.


There's a great Dilbert where Dogbert wants to both con and insult someone. So he goes to consult for Dilbert's PHB.

> If the program doesn't have to work, I could make it read 100 cards per second.

> Correctness comes first. Simplicity is highly desirable, adds additional cost, but always comes after correctness.

Correctness isn't binary. Roughly no software today is 100% correct, but for most purposes you'd still pick the current version over a highly complex, slower, more-correct version.

Simplicity can save you a lot of cost as you edit the software, which helps you make it correct sooner. Simplicity and correctness go very well together.


Simplicity (however it is vaguely defined) is desirable, but at the end of the day it is a vehicle for correctness, and hence necessarily subordinate to it. Correctness is the destination of any piece of software (ultimately the goal of any piece of software is to work), and simplicity is just one route to it.

> Correctness is the destination of any piece of software

"Good enough" is the destination of any piece of software. Sometimes that means correct, but more often it means "oh yeah, sometimes it starts acting funny, just restart it when that happens"


Seems a bit like the "worse is better" philosophy:

    It is slightly better to be simple than correct.
See https://en.m.wikipedia.org/wiki/Worse_is_better

Agree. And "good enough" depends on your use case.

It never means "correct". Not to mention that 100% correctness is even impossible.

> Not to mention that 100% correctness is even impossible.

In which case please consider that everyone here is using "correctness" to mean "correctness that is achievable by reasonable human effort". :P It's easy to win any argument by taking one side to its logical extreme and asserting that it is therefore impossible, but that doesn't create a useful discussion. By the same logic we could assert that 100% simplicity is impossible, but that would be just as silly.


They said "destination" being correct, with me interpreting "destination" in the sense of "goal." My point was that some software has the goal of being 100% correct, but most software does not.

It depends on the severity of a bug. If it's very severe, you'll favor the complex but more-correct solution. Otherwise, you'll favor the simple but more-often-wrong solution - because it's easier to fix and get progress.

I use more-* phrases because it's always in a relation. Even NASA can't claim to have 0 bugs although people die if they fail.

bit OT: There's a great article about NASA programming: https://www.fastcompany.com/28121/they-write-right-stuff


Why do you equate working software with correctness? Software that works is never correct by any definition of correctness. Because working software is a system that exists in the real world and therefore can never have a specification against it.

> Why do you equate working software with correctness?

Because the original author neglected to provide an adequate definition of correctness, thereby inspiring an epic HN flamewar as people now must run around endlessly debating semantics. :P


Sometimes people say things that might not be literally true or even "true in spirit", not because they are lying liars who love to lie, but because relating an exaggeration or a caricature or some other sort of not-totally-true thing will have a better effect on their audience than the strict truth would have. As we're now up to 17 comments you've made here emphasizing the skepticism you have for TFA's message, it seems that you value the "correct" more than the "simple". It could be that you are in the intended audience for TFA...

If correctness is some kind of continuum rather than a binary choice, then pick whatever trade offs, cost, and other factors you want.

Plenty of times correctness is binary. In some cases it would be: passes all tests. Or: meets all requirements. It could be "more" correct (or "more" simple), but those aren't part of the tests / requirements.


I always thought correctness begins when the result of your work does what it's supposed to do.

Maybe it's supposed to move from A to B, maybe it should do it in under x seconds, maybe it should go via Y, maybe it has to be easily understood by a 6-year-old, etc.

But I can't really imagine something that has simplicity as the only requirement ("nothing" is the simplest thing, so that requirement would always be met with no action). So as long as the other requirements are met, simplicity is usually the nice-to-have "add-on". And you can have correct and simple, or correct and complex. But correct (does the job) trumps simple. And the world is full of examples that prove this point.

I think the author meant "simple should be part of good design" but couldn't properly convey the message. He focused on making the message simple and ignored the fact that it's not correct.


I’ve noticed a pattern where the simplest solution DOES accomplish the goal, but isn’t what a user might consider the “shortest path”. How do you count a workaround where it technically can accomplish the end result, but requires a minor annoyance? What about a major annoyance?

What about a process so painful nobody has even thought of it?


It's nigh impossible to solve any complex issue with the "simplest solution" on the first try. This means that when you're faced with a complex issue you will postpone the fix because it's not the simplest.

And you never know if it can be done in an even simpler fashion later.


This seems to mix correctness and completeness.

A good program does only the correct thing in a particular area. It is known to be reliable in that area, sometimes even formally proved to be so.

Outside that area, an ideal program refuses to work, because it detects that it cannot obtain the correct result. This is normally called "error handling".

There's also some gray area where a program may fail to reliably detect whether it can produce the correct result, given the inputs / environment. A reasonably good program would warn the user about that, though.

A "garbage in, garbage out" program is only acceptable in very narrow set of circumstances (e.g. in cryptography).


Agree. In many (most?) cases there is no formal, verifiable correctness proof. And then you are way better off with the simpler solution once feedback from the real world arrives.

> Correctness comes first. Simplicity is highly desirable, adds additional cost, but always comes after correctness.

Seconded. I'm highly confused at how many upvotes the OP has gotten in such a short time despite appearing to say that implementation details matter more than program output. A beautiful machine that doesn't work is, at best, a statue. I'm all for the existence of pretty things that do not need to demonstrate inherent practicality, but most people are not printing out source code for use as wallpaper.


To defend the idea, I think it starts with the assumption that software is often a moving target so "correctness" is at best a temporary state. If you had to use a codebase at any point of time you would obviously want the correct one, but if you look at the lifespan of software it would be better to have the simpler code. Simpler is (usually) easier to fix, easier to extend and easier to learn.

I think the author made this a little inflammatory to get people to think about it in these terms.


Easier to fix, yes. Tends to get more complex in nasty and ugly ways.

Easier to extend, almost never. Proper design for extensibility carries an extra bit of complexity over the most obvious design. Simplistic implementations tend to be tossed away and are good for unscalable prototypes.

Easier to learn, definitely not. The simplest code comes from a deep understanding of the problem domain and algorithms. It is almost exactly like brevity in writing: condensing without losing the point. It is easy to end up with simplistic instead of simple. There is that famous quote by Dijkstra which I'd rather not butcher from memory.


I think the core consideration is that software isn't static, and a machine that is held together with chewing-gum and silly string can produce the correct output and be a terrible machine at the same time.

What happens when it breaks? What happens when you need to produce doodads as well as gizmos, or a different size gizmo is desired? Who wants to reach inside the silly string and hope for the best?

I'm reminded of that old saying that even a broken clock is right twice a day; an overly complicated piece of software that produces the correct output is only coincidentally correct. Which I think is the point of the article.


That was my first thought as well, but then I realized that by correctness the author means "no bugs", which is rather more ambitious than just making it "work".

I think the author implicitly assumes the software basically works right from the beginning of the article.


If that's the case then the author is attacking a straw man, because nobody (besides Dijkstra) is suggesting that we rewrite all the software in the world in Coq in order to 100% eliminate bugs at the cost of simplicity.

That's not what Coq would do, and a misrepresentation of Dijkstra's position. We certainly could use tools like TLA+ to assist us with existing code.

Folks are using "simple" and "easy" interchangeably here. That's probably inappropriate.


I apologize for using Coq specifically, I just needed a scapegoat for formal verification that people might have actually heard of. :P I'm happy to debate definitions, which the author of the OP has regretfully omitted (and the contentious definition here is probably the OP's notion of correctness, rather than their notion of simplicity).

You'd be surprised how simple a well written proof can be compared to a program implementing an algorithm to do the same.

That said, Coq itself is not the best vehicle for this. There are nicer higher-order logic languages.


To be honest, I'm fine skipping it. I don't understand why this article is so upvoted anyways.

> Folks are using "simple" and "easy" interchangeably here. That's probably inappropriate.

Agreed, see Rich Hickey's "Simplicity Matters" presentation on the difference [0].

Simple-Complex vs Easy-Hard

[0]: https://www.youtube.com/watch?v=rI8tNMsozo0


I agree. What he means is, it should work first, as simply as possible. Then you worry about correctness - correctness here is not referring to working/not working. Correctness means, 'How SHOULD this work?' or 'How should this logic or code be written to be most efficient or effective?'

Third is performance.

1. Write a working piece of software that does the job.

2. Refactor to make the working piece of software do the job more efficiently and elegantly.

3. Refactor to make the working piece of software do the job as fast as possible.


Seen this several times when someone refactors. The code is much simpler and easier to read, but does not actually work for several important test cases anymore.

I've never thought of simplicity adding upfront cost. That's probably true, but also true that it pays dividends later on in the project.


If it no longer works then I don't consider that to be "refactoring" but "rewriting".

I think of refactoring as a series of SIMPLE transformations that clearly do not have any effect on the correctness (or incorrectness) of the code. That is, there is no possible change in behavior.

And think of the word "factoring" as in high school algebra. Or rather, "factoring out" something.

Say I have a dozen instances of the same calculation. How about we refactor it into a function and replace all the instances with a function call?


> I think of refactoring as a series of SIMPLE transformations that clearly do not have any effect on the correctness (or incorrectness) of the code. That is, there is no possible change in behavior.

This kind of transformation is precisely what the person who coined the term meant: Taking code which works and turning it into easier-to-read code which works precisely as well, because refactoring never introduces a change in behavior.

To quote Martin Fowler and Kent Beck:

> A change made to the internal structure of software to make it easier to understand and cheaper to modify without changing its observable behavior… It is a disciplined way to clean up code that minimizes the chances of introducing bugs.

[snip]

Not a direct quote this time:

> Fixing any bugs that you find along the way is not refactoring. Optimization is not refactoring. Tightening up error handling and adding defensive code is not refactoring. Making the code more testable is not refactoring – although this may happen as the result of refactoring. All of these are good things to do. But they aren’t refactoring.

https://dzone.com/articles/what-refactoring-and-what-it-0


Simplicity is ALWAYS something desirable to achieve. Correctness comes first.

As code is originally written, people are (or should be) using the most "obviously" simple approach.

A breakthrough in simplicity is often the result of additional thinking and hard work. (And cost.)


Maybe the test cases / requirements are "wrong"? I think simplicity is the ultimate test that you found a good problem!

How sure are you that a given program is bug free? I feel that only very rarely would I ever assert 100%. In fact, I would generally assert with 100% confidence that there is some overlooked edge case. How many users that bug may affect... well I would generally give that a small percentage, but it still doesn't hit the boolean state of correct.

So correctness is generally never satisfied in my mind. At any given moment, the programs I am working on are in some way broken in my mind. Even if the other programmers thought that correctness was priority number 1, I will never consider the program correct. I will always suspect there is some snake in the grasses.

I suppose you could feel the same way about simplicity. I think the most charitable stance would be to give them the same level of importance. Overly complex code cannot easily be proven to be correct amid changing business requirements. Easily testable, complex code with a full functional test suite is at least less simple in one sense. Patently incorrect code is hardly valuable regardless of how easily one can understand its function.


None of them are absolutes. Just as we do not expect that "simple" before "fast" means "the code must be 100% as simple as it could possibly be before we begin even thinking about speed", we do not mean "the code must be 100% correct in every possible way before we even start thinking about simpleness"

These are relative preferences, more about what takes precedence over what than an absolute measure. Nothing is ever perfectly correct, nor perfectly simple, nor perfectly fast.


I cannot be sure the code was bug-free. It was an anecdote in a book, the focus of which was more about the psychology of those who wrote the code. But it worked, and the first program did not work. The non-working code's author took pride in the speed of his code.

Don't get me wrong, I think it's an excellent anecdote. I just shy away from a focus on correctness, since in my experience people who prioritize correctness above all else usually make a shambles. I feel that people who prioritize simplicity still understand that it needs to work more or less correctly.

Yes, correctness absolutely comes first.

One way to achieve greater simplicity is to negotiate for fewer/simpler requirements for the first revision. There's often a core set of functionality that can be implemented correctly in a simpler way, and that gets the work done. Once that's in place it's interesting to see how often people lose interest in what were "hard" requirements before. It's also common that new asks have little to no resemblance to those unimplemented features, and are instead things that they found out they needed after using the new system.


The thing is, correctness is often a transient property. Requirements are frequently changing or evolving. What's correct on a Tuesday may no longer be correct by Friday. Under these conditions it's important that the software be amenable to change. It's for that reason I believe simplicity is more important than correctness.

Simplicity is also a transient property.

> Under these conditions it's important that the software be amenable to change.

At the same time, under all conditions, it is important that the software actually works (i.e. correctness), which is why it's more important than simplicity. Irate users who come to us telling us that our program doesn't work will find little comfort as we regale them with how simple it is.

First, make it correct. Then, make it simple. If requirements change what correctness means, then make it correct again, then make it simple again.


I encountered a similar argument in Clean Architecture by Robert Martin of correctness vs maintainability, where he argues for maintainability over correctness. The argument goes that if you had to choose between code that did the wrong thing but was easy to make do the right thing, and code that does the right thing but is hard to change, you should always pick the former.

He also talks more abstractly about the value of software (as opposed to hardware for instance) being primarily in its "soft"-ness, or ease of changing.

Ultimately this comes from his point of view as an architect, who fights more for system design than say, a PM might for user features. I've encountered the opposite school of thought that says: MVP to deliver features, refactor/rewrite later. I think the strategy to use will depend on the project and team (budget, certainty, tolerance for failure, etc)


"A program that produces incorrect results twice as fast is infinitely slower."

- John Ousterhout


> Correctness comes first. Simplicity is highly desirable, adds additional cost, but always comes after correctness.

It is true mainly for one-time contracts where you actually might not care about simplicity at all. Enough is enough.

However, in the case of iterative projects, keeping complexity under control has a much higher priority, up to top priority for very big projects. Complexity and high entropy can easily kill everything.


I don't think this is the form of correctness discussed here. I believe this is more the Correct as discussed in the famous "Worse is better" article.

> Correctness comes first. Simplicity is highly desirable, adds additional cost, but always comes after correctness

This may be a matter of definitions. It may be worthwhile to distinguish between general correctness and full correctness, i.e. as close to 100% provable correctness as you can get. That distinction allows us to dismiss clearly degenerate cases (you can always write a one-statement no-op program that will be simple but do nothing).

General correctness is what I want in most cases. Example: voice dictation. It requires a final read & polish, but errors are infrequent enough to save me a lot of time. Full correctness is usually requested for jet avionics, nuke power plant control, etc.

With that addition one should optimize for general correctness and simplicity as a first goal, full correctness and performance as a very distant second.

When I write software (or build systems) what I end up with is usually significantly different from what I started with; not externally, but under the hood. Keeping designs simple (on large teams being almost militant about it) helps large systems morph as it goes from a proof of concept into an actual thing. My 2c.


> It may be worthwhile to distinguish between general correctness and full correctness, i.e. as close to 100% provable correctness as you can get.

Which is the root of the endless back-and-forth in this thread: a program has to do what it says on the tin ("general correctness") before anything else, and then probably be as simple and as "fully correct" as possible. But it's easier said than done for us to posit a distinction between general and full correctness than to actually find exactly where the dividing line lies between the two. A blog post to discuss such a dividing line might have been valuable, but the one we've got here unfortunately just handwaves away all the hard questions.


There is no line between the two. It's something that depends on how much effort and time is put into this, what methods were used, etc. But, the world doesn't actually care about this specific property, as it has no inherent value. Instead we have various levels of assurance of more practical properties, like safety, but not correctness.

I assure you the world cares whether your algorithm is generally correct, passing unit and integration tests, etc. This is programming basics.

You cannot have correctness without simplicity.

"A complex system that works is invariably found to have evolved from a simple system that worked. A complex system designed from scratch never works and cannot be patched up to make it work. You have to start over, beginning with a working simple system." - John Gail


> Correctness comes first. Simplicity is highly desirable, adds additional cost, but always comes after correctness.

Yes, but what is "correctness"? It's not usually so binary. Get to "good enough" and move on to the next thing.


This book, "The Psychology of Computer Programming" is by Gerald Weinberg, an author that really explores the design and complexity of systems. I recommend his other books, esp, "On the Design of Stable Systems : A Companion Volume to an Introduction to General Systems Thinking"

http://www.ppig.org/library/book/psychology-computer-program...


> Correctness comes first.

I agree with this.

Interestingly, the post is very simple, and not correct. I prefer posts which are slightly more complex but correct, but those don't get as many upvotes.


This makes me think of the Donald Knuth quote, "Beware of bugs in the above code; I have only proved it correct, not tried it."

More info here: https://en.wikiquote.org/wiki/Donald_Knuth


Which is why we now have automated theorem provers that can refine proofs into programs.

Works today, but tomorrow the complex solution does not do what is needed and presents a barrier to delivering what is needed now. In a static environment you would be right, but static environments are vanishingly rare for software, almost by definition (because it stops after one iteration!).

What is this comment trying to say? That a simple program that doesn't work today is better than a complex program that works today, but that might not work in some nebulous future? Not all complexity is reducible. The point of software is to work correctly, not to satisfy the author's aesthetic notions, which is what most of the modern hype over simplicity boils down to.

I think you missed the parent comment’s point, which is that a highly complex implementation might have problems very similar to overfitting in statistics. Simplicity in some sense means “room to expand to handle future unseen cases.” If an implementation is very complex, chances are it has some assumptions baked in somewhere and when it hits the wrong corner case or a new requirement is added, it manifests not as some mere refactoring annoyance, but as a complete meltdown where the system is revealed to be incapable entirely, and has to undergo major delays due to huge refactoring that can lead to ripple effect problems in other parts of the system.

In that sense, simplicity is like insurance against the future, and so at any given moment you don’t solely care about the system’s total correctness or performance right now but also you care about some diversification benefit of investing in simplicity too.

Very much like how you don’t choose stocks based solely on what will have the highest expected return right now, but instead you also incorporate some notion of risk management when optimizing.


What I am saying is that a ball of mud that passes all tests is worse than something clear that fails corner cases (genuine corner cases) because sorting that out can be done. Whereas the ball of mud will definitely fail in the future and when it does nothing will help you apart from a complete rebuild. "It passes the tests" simply doesn't cut it.

+1. Simplicity won't work if it isn't correct.

Simple correctness is the best way to create beginners who use software to get faster results. Fast isn't all about computation - it's taking the least amount of the user's time as reasonably necessary.


Gerald Weinberg. A classic; it inspired much of DeMarco and Lister's Peopleware.

https://leanpub.com/thepsychologyofcomputerprogramming


Does the time it takes to write come into play here at all?

I’m a novice of sorts. Thanks.


> Simplicity always comes after correctness.

Strong disagreement here. A program that isn't kept simple will stop being correct, fast, or any desirable quality over time.


Nobody's saying that simplicity is unimportant, but if the failure mode of a loss of simplicity is that the program is no longer correct, then it inherently suggests that correctness is the primary metric to strive for. :P

Ability to change ("simplicity") is the key metric that allows you to maintain, or further, a desirable invariant. E.g., in B2C, "correctness" may be less valued than another trait. Do you prefer to know something, or to be able to learn fast?

And since "fast enough" is a part of "correct", the order should really be "correct, fast enough, simple".

> Correctness comes first.

Not always. Have you ever used a SNES emulator? There is one emulator that is more correct than all others combined - it's called BSNES and it's the most true to the original SNES hardware of all the available emulators. Yet it is horrifically memory/cpu hungry - that correctness comes at a huge cost.

So no, correctness does not always come first, especially if you value other things like user experience.


Your definition of correctness is wrong in this case. If the purpose is to emulate the hardware as accurately as possible, BSNES wins. If the purpose is to make as many games as possible enjoyable for as many people as possible on the lowest common denominator hardware available today, BSNES loses.

There's no clinical definition of correctness here. Intent matters.


Correctness does come first, or else you can't play games the way they're intended to be played, but the way BSNES does it is wrong.

I believe that it does so by attempting to mimic the working circuit logic and chips, the physical hardware, within code alone, hence requiring a powerful computer. This is an incredibly unoptimized way of doing it, especially since it's formed out of incorrect assumptions on what "accurate emulation" is.

It's the effects that we want, not the logic. If you're going to emulate something that, through common sense, shouldn't even require that much power, you're doing it wrong.

The saying goes, "keep it simple, stupid!" To overcomplicate things, like the programmer of BNES did, results in unweildy an unoptimized code.

Even Nintendo doesn't use this tactic with their official emulators. Yeah, sure, they're known to be inaccurate at times, but that's only because Nintendo isn't aiming to build a general emulator to handle every scenario. Besides, many of the inaccuracies, as far as I could understand, deal with undefined behaviors of the system, something only glitches and bugs ever take advantage of.


The follow on from that is “performance is a feature”. If the emulator is supposed to emulate a fun, playable, game, then perf would be a required feature :-)

BSNES trades off performance for emulation accuracy. Other emulators trade off emulation accuracy for performance. No widely-used emulator that I know of has any care for simplicity at all (all of them are chock full of one-off special cases to benefit specific games). This has little to do with the screed in the OP, especially given how little the OP appears to value performance.

Correctness then is a trade off against other factors. Also it seems correctness in this case is a continuum rather than a binary choice. And you would prefer to trade other important factors for "true" correctness.

But I'll assume that you want the software that calculates your paycheck to be correct.


> So no, correctness does not always come first, especially if you value other things like user experience.

I think you're using a different definition of "correctness" than most other people in this thread. Which is understandable; a lot of folks are using different senses of it. What matters in the definition of correctness for an emulator is not, "Does this perfectly and imperceptibly reproduce the hardware?" What matters is, "Can this emulate the cart I want to play right now with a good experience?" and perhaps, "Will this allow a malware maker to own my entire computer if they run a cleverly crafted fake cart file?"


How do you know it was more complex? From what I read it was slower, which is different.

It has been too long. The book described the approaches that both programmers used; I simply no longer remember those details. As I recall, the working program, when explained, gave you the "Aha!" experience, and thus was simple enough. The focus of the entire book was more about the psychology aspects. One chapter was about how programmers come to feel "ownership" of code.

Another thing: what was an entire program back then, is sometimes a mere function, or maybe a class or code library today.


If it's not simple, it might be incorrect and you'd never know until it bites you.

I was going to say the same.

The conclusion I came to personally was always

Accuracy > Maintainability > Performance

in that order


I would argue that correct is more important than simple.

Consider timezones: it's simpler to pretend there's 24 time zones, one for each hour. But the correct assertion is there's 37 time zones (as of this writing). So, the simple solution results in a third of your potential user base having issues.

Other issues to pick: accessibility, cross-browser compatibility, legacy device compatibility... the list goes on.


I think it's more in the spirit of the article to say, forget timezones, use UTC millis everywhere. If the server doesn't speak in timezones, then you eliminated all bugs where the server mishandles timezones.

> I think it's more in the spirit of the article to say, forget timezones, use UTC millis everywhere. If the server doesn't speak in timezones, then you eliminated all bugs where the server mishandles timezones.

That's a flagrant example of "simple and wholly incorrect". If you don't store timezones, your future dates will eventually turn out incorrect when timezone offsets change: e.g., create a meeting at 9 AM local, store it as UTC, the country decides not to follow DST that year, and bam, your reminder will ping an hour early or late.

Or a day off when the country decides to jump across the international date line (https://en.wikipedia.org/wiki/International_Date_Line#Samoan...).


Most applications don't need to schedule events into the future, though, and it's a smart strategy if all you need to worry about is the past.

For very specific cases? Sure, but none of the comments talking about UTC everywhere cares to specify this rather important bit.

Unless restrictions are specified I will assume we're talking about the general case, and for the general case it's just plain wrong.


From my point of view it's the other way around.

There are very few applications that need to schedule events into the future, and that is literally the only situation where you have to worry about the timezone.

Btw, keeping the timezone is insufficient as well if you're building a calendar/scheduler. If the user changes their timezone after scheduling the event, do you keep to the old one and alert them whenever that comes around, or do you adjust? There are a lot of edge cases with schedulers -- yet as I said before, most applications don't schedule into the future. They're mostly just doing things right now or within the next few minutes and keeping a log of their actions.


> From my point of view it's the other way around.

> There are very few applications that need to schedule events into the future, and that is literally the only situation where you have to worry about the timezone.

My experience is the exact opposite: there are few applications which only store past dates, and in those said date is usually indicative/barely even relevant and could just as well be part of a freeform comment or removed entirely.


The UTC example allows the user to translate output using localised tools.

The solution isn't incorrect, it is modular.


> The UTC example allows the user to translate output using localised tools.

So you provide an alarm clock which works as neither a clock nor an alarm.

> The solution isn't incorrect, it is modular.

It's either not correct or not a solution, either way it's useless.

I also like how proponents of "simplicity at all cost" apparently assume/assert the composition of two systems is no more complex than either, and that there is no additional complexity to the composition layer.


In my experience, it's been easier to push UTC time all the way from the DB to the user's JavaScript and operate on time there than to try to manipulate time before sending it to the user. YMMV, and it's only web dev.

There are many kinds of data that get saved as time. For some, yes, it's better to add and remove the TZ at the interface. For some the TZ carries meaning by itself, and it must be stored and keep constant everywhere. For some the TZ carries meaning, but the time must be converted for display.

Know your data, and most of your problems get easy.


Yes, that's the crucial distinction -- whether the TZ, with its coarse encoding of the geography of origin, carries significance to the consumer of the data.

This works for things that have happened. It doesn't work too well for schedules. In those cases, the timezone of the source matters, heavily.

For example, a task that occurs "daily at 15:00" does not always happen every 24 hours. When DST comes into effect, the interval shortens to 23 hours once.

Or, at "2:30 am" in the continental US can occur twice in a day, or not at all. Of course, even that isn't guaranteed, if you're in AZ.

Even "notify me in exactly 24 hours has its own complications. Leap seconds will screw up your day (as will the vague request of "exactly 24 hours").

Corner cases, the bane of simplicity everywhere.


To this day, knock on wood, I have had success at voluntarily avoiding those issues ^^ (aside from school assignments).

I agree - UTC is the way to go. The point of the example was to illustrate how simple could be detrimental to a significant portion of your userbase.

How about another example? You're building an android application. Let's pretend there's an API in the latest version of Android that reduces a dozen lines of code down to one function call - ShinyNewMethod().

You can use that ShinyNewMethod() call. It's certainly simpler.

But the vast majority of Android devices in use are not running the latest OS. So ShinyNewMethod(), while being simple, will cause your app to not work for them.

Hopefully the framework designers figured out a way to have this automatically backported for older devices, but that's not always a guarantee.


This results in funny things like android support library.

But no, what about UTC leap seconds? Use TAI! :)))

The old yarn I'd heard for years is close to this.

Make it work. Make it work right. Make it work fast.

In that order.

Now it could be argued that "work right" can be read as "make it (work right)", or "(make it work) right", or both, but I think the point of this saying is that the "fast" part should always come later.


Yep, this is the version I heard. I think the distinction between "make it work" and "make it work right" is important, though - where "work" means "solve the problem", and "work right" means "solve the problem in a robust and reliable manner."

If your software doesn't solve the problem, it's useless, no matter how correct or fast it is. Once it solves the problem, then you can work on making it bug free and elegant. Once you're done with that, only then should you look at making it fast.

Note, of course, that "make it fast" refers to gratuitous optimisation. If it's too slow to solve the problem, then it doesn't work, and that needs to be fixed.

A similar adage states the rules of code optimisation:

1) Don't.

2) (For experts only) Do it later.


3) Profile before optimization.

4) After thinking about how to get the best bang for the buck when optimizing. Profilers don't always pinpoint the culprit.

I have a hard time seeing how profilers don't at least point you in the right direction. Or are we using different definitions of the word? They tell you how much time is spent in any given section of your program. If 80% of the time is spent in 20% of the code, it's usually a safe bet that's where you should start looking!

Generally I agree, but after you've picked off a few low hanging fruit, you'll end up with the profiler pointing at the function which does most of your grunt work, at which point you'd expect that to take up most of the time.

Once you've got this pretty optimized and it's still taking up the lion's share of your execution time, you have to look elsewhere (probably changing your overall approach or applying some higher level optimisation) to improve things further.


Yes, at that point you typically get to apply far reaching architectural changes or switch out the algorithm.

Sometimes it is quicker to start with just that instead of "polishing a turd". You can get it to be shiny but still nowhere near as shiny as gold.

Hope the code is testable and reasonably easy to modify. Otherwise it's going to be a rewrite.

The profile is then useful as a benchmark on real data. If you have enough time, you can turn that into a high level performance test.


Profilers have limitations like anything else, and it's possible to be pointing the flashlight in the wrong place. I probably wouldn't include that as a list item.

By the by, is there more than one kungtotte on the Internet? It took me a minute to think why that name was so familiar, but then I remembered watching a few hundred Beaglerush videos.


There must be more than one, because I've never heard of Beaglerush.

I've used this handle for a long time though (20 years or so), so it's all over the internet.


Kk, thank you for indulging my curiosity. Beaglerush is a very humorous Aussie who is notable for a video series on the Long War mod for the game XCOM. Long War turns a moderately challenging game of thirty or forty hours into an extremely complex, impossibly difficult ordeal of at least 400 hours per game. Beagle apparently often uses the handles of his friends as character names, and one of the best/worst parts of XCOM is that it's really good at making you care about the little blobs of pixels you order into virtual mortal peril, so to me kungtotte is like, the hero of 100 missions :)

If you're into strategy games, XCOM is good, and Long War is matchless. However, Beaglerush is actually surprisingly entertaining even if you don't care for his subject; the girlfriend is still not much into the game, but after the first couple episodes she insisted on watching the other hundred-thirty-odd videos. It's probably not everyone's cuppa, but it could be a thing.


Oh, that's me then :) I didn't put two and two together because I knew him as just beagle, and it was a long time ago that I hung out with him. We played together in a gaming community known as ShackTac or ShackTactical, playing a game called Armed Assault/ArmA.

Way back when a bunch of us put our names into a custom name file for XCOM so people could make campaigns featuring ShackTac people instead of generic dudes. I completely forgot about that until you reminded me. It must be five years since I talked to him :)


Oh cool :) Yeah, I did get the impression that the name list represented more former acquaintances than current ones, and I think he got pretty burnt out on YouTube generally. He seemed like a pretty great guy, and the series was excellent, so it seems you have a small measure of reflected glory at least :)

Sometimes I try to implement a common task in a new library, and I find that if I chain these 4 API calls and then extract some data with a regex, it will do what I want. I rarely actually implement the task this way though, because I know it's a common task, and there must be a better way. Sure enough, after some more research and reading of documentation I find one simple API call that does what I want.

I take this to the extreme: I probably wouldn't implement the complicated API / regex chain without 4+ hours of reading documentation and other research. It bothers me that much. If it seems like a simple and common task, I refuse to believe that there isn't already a simple API call to do what I want; I just have to find it. Sometimes the simple API call really doesn't exist, though, and you have to do what you can, with some comments explaining why.

I've noticed some developers will implement the 4 API chain followed by a regex as soon as they find it, and never give a second thought that there might be a simpler way.


This is good, because that process now becomes the API's problem and not yours. Their tests cover it, not yours, and it makes your work simpler.

The make it work fast portion isn't controversial. Premature optimization and all that...

The "make it work" implies a level of correctness & performance that is acceptable, which is why any subsequent steps are after thoughts.


The missing part is "make it workable". By that I mean reasonably easy to modify. This may or may not involve simplicity, but it usually involves modularity and a lack of hard interdependencies - weak coupling.

If you skip that, you will relatively quickly reach the point of a full rewrite.


Correctness (and any other kind of change in behaviour) is easier to achieve if things are simple.

Changes are not necessarily easy or even possible to make safely if things are correct but not simple.

"Simple" is a proxy for "can be changed safely" and so IMO is the most important quality to have.


I think you do whatever it takes to achieve correctness.

If simplicity helps achieve correctness, then great.

But correctness is not always simple.

Most people think there is a leap year every four years. They are wrong.


I know a couple of guys that would probably implement isLeap by checking that the lower two bits of the year are not set, then boast about how fast it was, and finally, when told that the logic was incorrect, push back because the additional checks would make it "slower".

In their minds, when things stop being simple, fast trumps correct.


One nuance is that some parts of correctness may be negotiable.

If I'm writing a file backup program, it absolutely must back up all the files without leaving any out or corrupting the data.

But let's say it has a feature that prints progress indicator percentages on the command line, ranging from 0% to 100%. Maybe under certain circumstances (like files added to a directory after the backup starts), it prints 102%. It's not what I had in mind, nor is it something I'd call correct. But if fixing it complicates the code a lot, maybe leaving it that way is the better choice.

(This is a bit of a contrived example because you could just clip the value at 100, but you get the idea.)


The simple thing to do then is to call an `isLeapYear` function that abstracts any complexity.

Abstraction allows you to hide complexity and make it a simple, reusable part again.


But I could write a simpler (incorrect) leap year function than yours!

You missed the point of the article if you interpreted it as correctness being intentionally sacrificed for simplicity.

Plus, a complete leap year implementation is already what I would consider simple and most standard libraries already have an implementation for it you can use directly.


I don't think simple means discard if complex. It means using the simplest and most obvious tools (algorithms, libraries, &c) possible in order to achieve correct-enough software. So it does not mean discarding the intricacies of timekeeping, but implementing them with the clearest, simplest use of abstractions and methods possible.

My calculator watch worked perfectly for 50 years on that "wrong" assumption.

> But the correct assertion is there's 37 time zones (as of this writing). So, the simple solution results in a third of your potential user base having issues.

That assumes your user base is evenly distributed across time zones. The US has 6 time zones, but if you only handled 4, you'd cover 99.3% of people.


The US has more than 6. There's a big difference between Denver and Phoenix, even though they're both "Mountain Time". Grouping them together would be overly simplistic. Things like "weekdays at 10 A.M." mean different things half the year.

Obviously to solve the problem, a certain amount of correctness is implied. But once that threshold is reached, simplicity is a better use of your time. At least, that's the thesis of the article.

True.

The problem is always: how to get enough correctness that your customers are happy, but not spend too much time on it, so that rivals won't overtake you.


All general rules like this are terrible if you apply them blindly, but can still be useful if you apply them loosely.

In your example, I would take simple to mean "only use UTC". As soon as you need timezones I would say you've moved into correctness territory and need to do them all properly (you would use a good library, of course).


Every piece of software that has bugs could be considered not correct, which is the vast majority of it.

Depends on the definition of bug.

I don't think what you're saying is in conflict with the OP:

> The complex problem comes later, and it’ll be better served by the composition of simple solutions than with the application of a complex solution.

Complicated problem domains can be made into simple ones by breaking them down into their constituent components. You can solve time zones by having 5000 Rube-Goldberg-esque lines of if/else-if statements, or you can organize the system into simple components that build on each other.

At any given component or level the problems are clear, simple, and identifiable, and the complexity arises as the components join to form abstractions upon which higher levels operate.
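
As a hedged sketch of what "simple components that build on each other" could look like for time zones (Python 3.9+ zoneinfo assumed; function names are illustrative, and the hard tz rules stay inside the library):

    from datetime import datetime, timezone
    from zoneinfo import ZoneInfo

    def to_local(utc_dt: datetime, tz_name: str) -> datetime:
        # Display-side conversion: aware UTC datetime -> named zone.
        return utc_dt.astimezone(ZoneInfo(tz_name))

    def to_utc(wall_clock: datetime, tz_name: str) -> datetime:
        # Input-side conversion: naive wall-clock time in tz_name -> UTC.
        return wall_clock.replace(tzinfo=ZoneInfo(tz_name)).astimezone(timezone.utc)

    print(to_local(datetime.now(timezone.utc), "Asia/Kolkata"))  # +05:30 handled by tzdata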


Or you can end up with a mishmash of components instead of statements.

Premature abstraction is a kind of premature optimization, except you're not buying performance.

Bad abstractions tend to stay in for a long time.

What matters is clear delineation between functional components and weak binding, so that internals can change, and that the interfaces are relatively minimal.


The point is that it should be as simple as possible to solve the problem but no simpler.

This generic expression indicates why "correct" comes first. "Simple" has it as a dependency. ;)

It's an order of importance, though, rather than an expression of dependencies, and I'd definitely err on the side of wanting my solution to be as simple as possible over merely correct. The idea being that while you are solving a problem you are always striving to solve it simply, and only add complexity as needed.

So long as the software will only be shipped after all three stages, it's a useful metric to go by, I guess. It prevents you from getting overwhelmed by the myriad decisions that writing software involves. And also, so long as you accept the fact that you will need minor or major redesigns by the time you get to the "fast" version that is production ready.

Agreed. Software that does what it's supposed to is better than software that is simple (or fast), but buggy.

Which is why everyone uses OpenBSD instead of making a tradeoff with a different OS that isn't designed as securely.

Users go for usability features first, with an OS that is good enough. Good looks second. They can tolerate a bit of weirdness and a few quirks as long as they can get the job done without cursing the thing. They will even tolerate non-critical crashes sometimes (no or little data loss).

OpenBSD is not bug free at all, it is just security oriented in the implementation.

Windows got traction because it had even better hardware support, a bunch of backroom OEM deals, and nice UI features (at the time of 95), then went far on software availability.


as I understand it, both simplicity and correctness are requirements for most programs, but imo the article talks about the design, the writing process, where you should focus first on writing a program that's not more complex than it needs to be, because otherwise you are likely compromising the other parts too. this doesn't mean incorrect code is acceptable.

in any case, I think it's also worth mentioning that the article is probably talking about "incorrect" as in "accidental bugs", not as in "purposely ignoring the complexity of the problem". with the idea of preventing over-engineering rather than dismissing the specifications of a valid solution


My read of this is that simple means not necessarily handling all possible edge cases yet, while having enough of a skeleton in place that the core functionality is there. Correct is extending (or replacing) that skeleton with coverage for all cases.

Edge cases? You mean we can't test for leap year by simply checking if the year is a multiple of four?

No that’s a silly example.

I meant more for dealing with edge cases like an external API invocation failing. A final “correct” implementation would need to handle failures but an initial simple one would only handle success.
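
A rough sketch of that progression (Python; the endpoint URL and function names are made up):

    import requests

    # First, "simple" cut: success path only.
    def fetch_user_simple(user_id):
        return requests.get(f"https://api.example.com/users/{user_id}").json()

    # Later, "correct" pass: same shape, plus the failure handling the edge cases demand.
    def fetch_user(user_id, retries=3):
        for attempt in range(retries):
            try:
                resp = requests.get(f"https://api.example.com/users/{user_id}", timeout=5)
                resp.raise_for_status()
                return resp.json()
            except requests.RequestException:
                if attempt == retries - 1:
                    return None  # let the caller decide how to degrade
        return None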


Indeed. During my visit to India in February this year, I didn't even realize they were at a half-hour time-zone offset, all thanks to the miracle of modern technology (and hardworking people maintaining it).

But would it be better to account for non-end-of-year leap second additions vs. something simpler that just ignores them but is wrong once every... decade?

Depends on the software.

He made up his mind, don't confuse him with facts.

Came here to say the same thing. OP doesn't know what he's talking about.

If you cannot achieve correct without simple, redefine correct.

Some problems are just hard and complicated. You can try to make code that dissects it into lots of simpler pieces, but then they have to be put together in a complicated way. Someplace in the code there will be some point of irreducible complexity. In my experience.

In my experience you need a paradigm shift then. If your Earth-centric approach is creating a need for epicycles of complex code, then you need to shift paradigms to a heliocentric model, which eliminates the epicycles and replaces them with a simple elliptical model.

I'm willing to believe that there are sometimes problems that are inherently complex, but I find that much of the time the problem is complex only because you've insisted it be complex. If you take a step back and re-examine the problem you often see that redefining the problem makes everything much more simple.

Having software people interact directly with the user/customer (internal or external) can be very useful because minor changes in requirements can make a huge difference in implementation complexity.

> If you cannot achieve correct without simple, redefine correct.

More hand-waving.


"redefine correct" = "find a better version of the problem". Software development starts with requirements analysis.

I disagree. I just can't get more specific without specific cases to examine. Like others have pointed out, take the matter of timezones: if "correct" is defined as "handling timezones", you should instead store time in UTC everywhere and redefine "correct" to be "convert times to local time when displayed and back again when input", which can be accomplished with much smaller, simpler, and focused tools.

>", you should instead store time in UTC everywhere

It is not possible to store UTC unambiguously on the db server for all future local wall-clock times. (Previous comment about the erroneous assumption of "UTC everywhere" being a "simple solution".[1])

Therefore, redefining "correct" to be "store UTC everywhere" achieves the exact opposite: an incorrect and buggy program. That's because the "universal" in Coordinated Universal Time doesn't apply to governments changing DST and time zone rules in the future.

Pure UTC doesn't have enough metadata to encode future unknowns. For correct handling with zero loss of data, one must store the user's intended "wall-clock" time in his local TZ in the db.

There's irreducible complexity when dealing with user-specified appointment times, so an uncompromising fixation on programming a "simple" implementation with pure UTC-on-dbserver and localtime-only-at-browser-JavaScript ... will lead to a broken calendaring program.

[1] https://news.ycombinator.com/item?id=10990240


You're right, future times are more complex and might require more attention to detail. But I think you can still achieve the requirements in a simple way, perhaps by storing it as UTC + lat/long and running a script to update future dates when someone changes their rules.

> But I think you can still achieve the requirements in a simple way, perhaps by storing it as UTC + lat/long and running a script to update future dates when someone changes their rules.

Congratulations, your emotional refusal to deal with zoned datetimes has led you to a non-standard ad-hoc reinvention of timezones; your misguided quest for simplicity and obstinate rejection of reality has thus led you to a system which is definitely more complex, probably less correct, and likely less performant than if you'd just done the right thing in the first place.


>a simple way, [...] and running a script to update future dates

I've commented previously that it's not a good idea to change the rows of UTC times in the database.[1]

Designing "system correctness" to depend on the reliability of correctly written SQL statements completing atomic transactions for millions of rows is not a good idea. In addition to batch db updates of UTC being extremely fragile, it's also not simple.

(It's fascinating to note that multiple programmers independently arrive at this approach of updating database rows of UTC times. There's something about it that's cognitively satisfying that attracts repeated reinvention.)

[1] https://news.ycombinator.com/item?id=10991894


The event's time has changed, though. The local representation of when it will occur has not changed, but if you set a timer today and they change their timezone tomorrow, the timer will expire at the wrong time.

We should store the actual time of an event and update it when the scheduled time changes.


>, the timer will expire at the wrong time.

A countdown timer is a runtime concept.

Storing pure UTC and/or intended_localtime_plus_TZ in the database is a static concept of data-at-rest.

A timespan/timer is a different abstraction than a desired point-in-time.

Depending on the use case, the correct timer/timespan value can be derived from pure UTC (e.g. scientific celestial events) -- or -- user_specified_localtime_plus_TZ (recurring yoga class at 5:30pm every Wednesday, or take medication every morning at 7:00am).

For user calendaring and scheduling of social appointments, storing pure UTC will lead to data loss and errors. Instead of complicated mass updates of millions of db rows, it's much more straightforward to take a stored localtimeTZ, and then calculate an up-to-date UTC time at runtime, and then derive a countdown timer from that. The key insight is that the best time to use UTC is when the users need that timer at runtime -- and not when they store the row in the db.
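
A minimal sketch of that insight (Python with zoneinfo; the field names are illustrative, not from any particular schema):

    from datetime import datetime, timezone
    from zoneinfo import ZoneInfo

    # What goes in the db for "yoga class, Wednesday 2018-07-11 at 5:30pm, Pacific":
    stored = {"wall_clock": "2018-07-11T17:30:00", "tz": "America/Los_Angeles"}

    # At runtime, resolve against the *current* tz rules, then derive UTC and the timer.
    local = datetime.fromisoformat(stored["wall_clock"]).replace(tzinfo=ZoneInfo(stored["tz"]))
    as_utc = local.astimezone(timezone.utc)
    seconds_left = (as_utc - datetime.now(timezone.utc)).total_seconds()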


> We should store the actual time of an event and update it when the scheduled time changes.

I would love to see some (simple) code which will send a single alert to me at 1:30 am and another at 2:30 am. My client registered me as MST (-7) when I set these two alarms in February.

Of particular note for corner cases: Nov 4th and Mar 10, 2019.

The "scheduled time" will change, for many locations, twice yearly.

EDIT: For added fun, instead consider the registration date as May 10th with the same timezone.
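
A hedged illustration of why those dates bite (Python/zoneinfo, assuming the client really meant America/Denver rather than a fixed -7 offset, and reading the Nov 4th as the 2018 fall-back):

    from datetime import datetime, timezone
    from zoneinfo import ZoneInfo

    denver = ZoneInfo("America/Denver")

    # 2019-03-10: 2:00-3:00 am doesn't exist (spring forward); 2:30 silently resolves
    # to an instant the user never actually named.
    print(datetime(2019, 3, 10, 2, 30, tzinfo=denver).astimezone(timezone.utc))

    # 2018-11-04: 1:30 am happens twice (fall back); `fold` selects which occurrence.
    first = datetime(2018, 11, 4, 1, 30, tzinfo=denver, fold=0)
    second = datetime(2018, 11, 4, 1, 30, tzinfo=denver, fold=1)
    print(first.astimezone(timezone.utc), second.astimezone(timezone.utc))  # one hour apart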


>You're right, future times are more complex and might require more attention to detail. But I think you can still achieve the requirements in a simple way, perhaps by storing it as UTC + lat/long and running a script to update future dates when someone changes their rules.

Seriously? That, in your opinion, is simpler?


>I disagree. I just can't get more specific without specific cases to examine.

You should probably mention somewhere that you're the author of the blog post under discussion. And it looks like you're going to make a reputation for yourself as the guy who argues that it's more important for software to be simple than for it to function correctly.

Good luck with that.


Noop is pretty simple...

And if the opening statement of the article is to be considered correct, noop is by design the only way to do things since its simplicity will always win against trying to create more complex code that does the task.

"The single most important quality in a piece of software is simplicity. It’s more important than doing the task you set out to achieve. It’s more important than performance. The reason is straightforward: if your solution is not simple, it will not be correct or fast."

It praises a quality that is great as an add-on, not really by itself. Pretty sure everyone prefers a complex thing that "does the task" to a simple one that doesn't.

Simple "helps". Simple never "does". I think the author's values are a bit mixed up.


Is the software engineering profession doomed to lose its memory every generation? The premise of this post is ridiculous:

>The single most important quality in a piece of software is simplicity.

How panglossian, imagining the best of all possible worlds. Well, the world is intrinsically complex, as Fred Brooks explained in his No Silver Bullet essay from 1986[0].

"The complexity of software is an essential property, not an accidental one."

Sure, there is accidental complexity in most software problems, that can be tackled with skill and experience, and maybe reduced to zero. But then you are left with the essential complexity of the world. And you are done reducing the complexity; you can only manage it from then on. The world is very, very complex and it is a pipe dream to imagine that we can eliminate its complexity just by some bold engineering.

[0] http://worrydream.com/refs/Brooks-NoSilverBullet.pdf


Anyone who has spent any time developing anything but a tiny software system knows that the biggest impediment to productivity (feature delivery, bug fixing, etc) is the complexity of the system at hand.

In a sense, this post is simply stating the obvious.

The biggest differentiator of skilled software practitioners is the ability to construct simple systems.

To call this claim panglossian or meaningless is to hold the philistine line that this skill set doesn't matter, that any complex system is effectively the same as any ol' simple one -- don't worry about cultivating the skill, it doesn't matter anyway...

But simplicity is the single most important thing in maintaining any system other than one-off scripts, hack jobs, etc. -- It's absolute torture to collaborate on a software project with anyone who rejects this premise.


You are making the common mistake of confounding essential complexity with accidental complexity. One you are stuck managing and one you can eliminate with skill. The world isn't getting less complex just because you work harder on your software.

Also it’s not about working harder; it’s about building smarter—building simple systems—which absolutely makes your world simpler. ‘Essential complexity’ is what sophomoric developers decry when they are unable to architect well—and are unwilling to do the hard work it takes to learn to architect well.

The world, from software’s perspective at least, isn’t growing in complexity. What leads one to that?

The world is growing more complex from the software's perspective.

And by that I mean, it has to account for more scenarios or do additional things... unless your software is growing in complexity while you're only removing features...

Bugs in software come from thinking we're simplifying the world in one way through the program, while in reality it receives a slightly different picture.


A well-built software system doesn't have to grow in complexity over time. More features does not equal more complex in the true sense of Simple.

Faulty data models and system designs -- that aren't fixed -- lead to ever-increasing complexity. But that is the fault of the data model/designer.

I.e., there is a way to build (and grow) systems w/o linear increase in complexity -- but it takes a particular rare skill set.


> The biggest differentiator of skilled software practitioners is the ability to construct simple systems.

I would say it's to construct simple enough systems, and the hallmark of skill is a developer's ability to define enough.


I believe you're conflating simplicity/complexity with flexibility.

_One_ hallmark of developer ability is the discretion/wisdom/experience to know how flexible to make the thing. (How to prioritize and limit feature-creep, etc.)

But this is different than Simplicity. A general purpose programming language or database -- highly flexible/generic systems -- for example, can be built well/simple. But so can highly _specific_ systems.

In both cases though, one can build something that is decoupled and manipulable or one can build something that is coupled and rigid -- and the ability to do so is a function of skill set _not intrinsically_ a function of time. In other words, a skilled developer doesn't have to "take time" to deliver a Simple capability.


Sometimes you just want a hammer, not a swiss army knife.

And sometimes it is good to have a replaceable head on the hammer.


Yes! And the property of Simplicity is orthogonal to whether or not you chose to make the Knife or the Hammer.

A Hammer's construction can be Simple. A Knife's construction can be Simple. Or not.


I can't really understand the equivocating tone a lot of folks are taking in response to this, and more importantly I can't wrap my head around how you could make such a statement in the first place: without correctness you've got nothing. Stating authoritatively that correctness comes after...anything is incomprehensible to me.

It's possible to have a correct solution that is neither simple nor fast, and it can be worth your while to speed up a correct solution while sacrificing simplicity. So there are trade-offs involved in the relationship between simplicity and speed, but correctness is not negotiable. Acknowledging that all software has bugs is not the same thing as throwing out correctness as your first and primary objective in implementing an algorithm, and accepting that your solution may only be partial or fail with certain inputs is fine if that is acceptably correct for the problem at hand, but ascertaining that still comes first. Preferring simplicity over complexity because it makes debugging, profiling, etc. easier is not a reason to insist that correctness can go out the window in service to simplicity--who cares if you've removed all the bloat from your code if it's wrong?


I am reminded of this classic snark from The Elements of Programming Style (1974)[0]:

"Some compilers allow a check during execution that subscripts do not exceed array dimensions. This is a help … many programmers do not use such compilers because “They’re not efficient.” (Presumably this means that it is vital to get the wrong answers quickly.)" (Page 85)

[0] https://en.wikipedia.org/wiki/The_Elements_of_Programming_St...


Usually the real reason is that such checks are pointless as they do not pinpoint the bugs. They are too late. You need a real stack trace to begin debugging such issues.

Languages like Ada SPARK or Rust tend to rarely use or need such runtime checks. (They are available as an option to check unsafe code.)

Others like Python and Java do check and give you traces. Not for free though.

And then you probably want something more powerful, such as a virtual machine like Valgrind, the full checking of AddressSanitizer, etc.
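
For what it's worth, a tiny illustrative example of the checked-with-a-trace behaviour in Python:

    def last_of(items):
        return items[len(items)]   # off-by-one bug

    last_of([1, 2, 3])
    # IndexError: list index out of range -- the traceback points at the exact line,
    # unlike a silent out-of-bounds read in unchecked C.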


"without correctness you've got nothing."

Sure... but define "correctness".

Suppose my manager comes to me with some incredibly complicated problem. It's going to take six months to solve properly. Suppose in the first three weeks I implement a program that is 98% correct, and let's say it can detect the other 2% and kick it out for a human to solve. But it clearly does not fully and correctly eliminate the problem as brought to me by my manager. Have I solved the problem?

The correct answer is not "no, because your solution is incorrect and there is no such thing as an 'incorrect solution' because all solutions must be correct to even be solutions; you have no professional choice but to spend the next 5 months and a week implementing the correct solution". The correct answer is "the question is underspecified". I need to go to the manager and work with them on the question of what the benefit of just deploying this is, what the benefit of doing it "correctly" according to the original specification is versus the cost, and whether or not there are any other in-between choices. The business may require the full solution, sure. On the other hand, your manager may be inclined to thank you profusely for the 98% solution in a fraction of the time because it was far more than they dreamed possible and is way more than enough to make the remaining 2% nowhere near the largest problem we have now.

"Correctness" is only fully defined in a situation where the spec is completely immutable. Specifications are almost never completely immutable. So for the most part, everyone in this conversation using the word "correct" without being very careful about what they mean are not using a well-defined word.

It's all about costs and benefits, not correctness and incorrectness. For nearly two decades, Python's sort algorithm was technically incorrect: http://envisage-project.eu/proving-android-java-and-python-s... Does this mean that any program that used Python sort was worth "nothing", because it was not correct? Obviously this is absurd (in practice at least), so correctness must be understood in terms of costs & benefits to make any sense. And such an understanding must also be grounded in an understanding of the mutability of requirements as well, to make any sense of the real world.

From this perspective, it honestly isn't even 100% clear to me what prioritizing "correctness" over everything else would even mean. That we are slaves to the first iteration of the spec that comes out, no matter what? (Obviously not, but I can't come up with anything better that it might mean.) Correctness can't be prioritized everything else because it can only be understood holistically as part of the whole process. There is no way to isolate it and hold it up as the top priority over everything else. And there is no way for the correctness of a bit of software to exceed the scope of the specification itself, almost by definition, which in the real world tends to put a pretty tight cap on how correct your software can even be in theory, honestly.


This is the most important part, and people advocating “correctness first” are missing this point.

“Simplicity first” means having a minimal skeleton code with glaring weaknesses, unimplemented features, and bugs, but having a simple design that sets you up well to absorb the inevitable shitstorm of changing priorities, pivots, revised performance constraints, feature wishlists, budget, deadlines, etc., and to manage extensibility, integration, or abstraction needs as they arrive in random, ad hoc ways.

Usually project stakeholders don’t care about absolute functional correctness, meeting performance criteria, or completeness until far far later in a project lifecycle, after those requirements have been thrashed around and whimsically changed several times.

Early on, they care about a tangible demo apparatus and solid documentation about the design and tentative plan of implementation. They want to see steady progress towards correctness & performance, but generally don’t care if intermediate work-in-progress lacks these things (often even for early releases or version 1 of something, they’ll prioritize what bugs or missing features are OK for the sake of delivery).

In terms of interacting successfully with the business people who actually pay you and determine if your project lives on or gets scrapped, “simplicity first” is a total lifesaver, and matters far more than any of the notions of correctness discussed here.


But this is what ddellacosta is saying. I imagine his interpretation of this argument (and I agree) is that 98% correct IS prioritizing correctness. A very fast, simple solution that is 2% correct is an unacceptable balance.

"A very fast, simple solution that is 2% correct is an unacceptable balance."

That begs the question (in the original sense) of "unacceptable". If I banged out that 2% solution in an hour, and it lacked other costs that outweighed the benefits, it may still be something we ship! It is unlikely that we'd stop there, just because the numbers as you've given are unlikely to favor it because something else substantial would have to overcome the small amount of the problem we've solved, but to be firmly confident it's "unacceptable" you'd have to define "acceptable" a lot more carefully.

I understand the deep temptation to turn to discussions of the virtues of letting bugs through or something, but the costs/benefits framework completely handles that already. If you ship a buggy piece of "incorrect" shit, well, you've incurred a ton of costs with no benefits. That's wrong, by whatever standards you are measuring costs and benefits by. There isn't a "what if your 98% solution actually has a massive bug in it because you were unconcerned about 'correctness'?" argument to be made, because if it does have a massive bug, it's not a 98% solution.


>Sure... but define "correctness".

How about this, for example:

The patient lives.


A nice snarky reply.

But I would put $10 down that if I asked you to assert that all medical software in current use that has never killed a patient because of its software issues is therefore "correct", you'd walk back hard. You'd have to be crazy to assert that all such software is "correct".

Unless you are willing to make that assertion, you don't really mean that as a definition.

This is also an example of what I mean in my cousin message about the temptation to turn this into a discussion about attention-grabbing bugs. But my framework already encompasses that. Software that kills patients is software very high on the costs side. There's a complicated discussion to be had about how to exactly quantify probabilities of failure vs. cost, but you can't have that discussion if you're stuck in a "correct or not correct" mindset.


>But I would put $10 down that if I asked you to assert that all medical software in current use that has never killed a patient because of its software issues is therefore "correct", you'd walk back hard. You'd have to be crazy to assert that all such software is "correct".

I made no such assertion. That's a straw man.

But I think I can safely assume that if the patient dies as a result of the software's functioning, that software is not "correct".

You may disagree, but I think it's preferable to have a patient kept alive by an overly-complex system than killed by a simple, elegant, incorrectly functioning one.

Not to be intentionally blunt or snarky, but I think Drew DeVault's post was a bunch of rambling, hand-waving nonsense. Until today, I wouldn't have expected anyone to seriously argue that simplicity is more important than correctness. But he comes along and makes that very argument, with a self-assured, authoritative tone, but very little in the way of concrete reasoning, and to my surprise, the number of people on HN who apparently agree with him is non-zero.


There's a sweet spot in the neighbourhood of well-defined problems and low complexity, one that sings phrases like "Before even trying to implement it I know it can be done right," or "Even though it can't be done 100% right, that's just the nature of the problem domain. Give the users a good enough solution that they can understand, so they can use it in the way _they_ need."

I think those asserting that correctness comes first are somewhat missing the point. One, a simple solution still has to be a solution; that is, if your code doesn't solve the problem, you can't stop there. I think the author is suggesting that truly _correct_ code (code that produces the correct output under all circumstances) is only attainable iteratively, and if your code is not simple (and let's also remember here: simple ain't easy!) then reaching correctness or performance will, in the main, be quite difficult. Not only will it be increasingly difficult to reach a state of correctness again after a bug is found, and it will be found, but even measuring performance will become increasingly challenging. At least that's the lesson I take.

In many years of practice, I haven't found the iterative approach to produce either simple or easily maintainable code. It tends to grow rings instead. Each layer is relatively simple, but altogether the result is neither performant nor simple.

The simple code of present was almost always written by someone who understands the problem domain really well in one or two tries.


Yeah, maybe "iterative" is too facile a concept to contain what is meant here. Maybe "fractal", or recursive, is better. Maybe though that's the point: it is hard after "multiple rings" to make code simple anymore, so better to start off trying to optimize for simple first. Correctness requires exposure to cases you didn't know you didn't know (unknown unknowns).

This is one of the reasons why I am suspicious about the long-term saliency of so-called "smart contracts" on the blockchain. The immutability of code, while super amazing for digital assets, seems like a horror-show of a liability for dApps.


Performance in most non-trivial software, and especially infrastructure software, is architectural. In many cases an architecture that will allow your software to be performant requires a commitment to a very substantial amount of software complexity upfront to ensure adding performance is much simpler (or even possible) later. There are also rarer cases where correctness is not simple, so there is no trivial path between the simple implementation and a rigorously correct one. While "simple" is easier for the software engineer, customers pay for "correct" and "fast".

In my own area of work (database engines), the common mistake is that inexperienced designers do focus on simplicity first, instead of correctness and performance, not understanding that it is at best difficult and sometimes impossible to add correctness and especially performance later. The fast win of "simple" can turn into nearly insurmountable technical debt when you are asked to deliver scale and performance. People often grossly underestimate the minimum amount of initial implementation complexity required for good architecture.

There are many types of software where "simple, correct, fast" is sound advice but it is far from universal.


I think this is lacking a definition of simple. And where in the problem space do we desire simplicity? Simple in the implementation (and conversely complex in the interface? ie: C-style libraries?) Or complex in the implementation but simple in the interface? (ie: Haskell/FP style libraries?)

My definition of simple software is software that I can validate the correctness of using only equational reasoning and the mathematical tools used to carry it out without any specialized knowledge or verification systems.

If I have to learn a new way to reason about a software system in order to understand it then it is complex.

A priori any system written in C fails this litmus test: one must understand and identify the many ways that undefined behavior can enter into their program and be leveraged by their compiler. One cannot reason about a local expression in the presence of global effects and unchecked side-effects. And if it is possible to write a correct C program it takes considerable effort and the use of very specialized verification tools.

There are many reasons to prefer C however; if we're willing to live within some tolerance of "correct" and "incorrect" then we can leverage a tool-chain that can produce highly performant code... but then we're forced to restrain ourselves from introducing complexity instead of spending that effort on other things.


+1 to acknowledging that when it comes to software, "simplicity" does not have one exact quantitative meaning we all agree on -- for now, simplicity is still in the eye of the beholder in my opinion.

Sounds like you're overcomplicating simplicity

Not convinced. Simplifying a correct implementation can be easier than correcting a simple implementation. Eventually you need to find the correct model and then everything else will follow easily, but a complex implementation that does the right thing will tell you a lot more about what the correct model is than a simple implementation that doesn't do the right thing.

Without defining what "simplicity" and "correctness" are supposed to mean, this article is empty of content. The title appears to be riffing off of the famous saying "First make it work, then make it right, then make it fast" ( http://wiki.c2.com/?MakeItWorkMakeItRightMakeItFast ) which is supposed to be a warning against premature optimization (one that is less often taken out of context than Knuth's famous saying). But by lumping both "making it work" and "making it right" under "correctness", it makes it appear that the author values simple software that doesn't do its job over complex software that does. And the problem is that you can't easily slot simplicity in by drawing a stark dividing line between "making it work" and "making it right", because it's a continuum of correctness. At best, simplicity is more important than performance, much of the time. But at the end of the day the point of software is to perform a specified task, whether or not it is achieved in an aestheically pleasing way underneath.

This is great, but completely lost on the crowd if what Simple means isn't understood.

One of the best clarifications of what it means to be Simple, to put it out there, is [1]; but the key point: Simple != Easy.

Simple means minimal coupling, high-cohesion etc etc.

Yet IME many developers do not understand the distinction and mistakenly believe that easy is the same as simple, and are willing to couple the hell out of the world under some false notion of "simplicity"...

[1] https://www.infoq.com/presentations/Simple-Made-Easy


In a way, simplicity is the end result of reducing the complex and correct solution without affecting its correctness.

As in math, you come up with the "simple" solution of 0.5 only after you've realized that the "complex" solution is, for example, "sin(pi/4) * cos(pi/4)". There might be no other way to discover the simple solution.
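
Spelled out, the reduction in that example is just:

    \sin\frac{\pi}{4}\,\cos\frac{\pi}{4}
      = \frac{\sqrt{2}}{2}\cdot\frac{\sqrt{2}}{2}
      = \frac{2}{4}
      = \frac{1}{2}
      \quad\text{(equivalently, } \tfrac{1}{2}\sin\tfrac{\pi}{2} = \tfrac{1}{2}\text{)}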


That talk transformed the way I think about software development. I highly recommend watching it.

This title is misleading. The post actually says that the reason "simple" comes first is because without it you can't have "correct" (nor "fast", not that that matters so much). So he's not saying simple is most _important_, just that it comes first chronologically, and has the other two as consequences.

e.g. Gall's law

> A complex system that works is invariably found to have evolved from a simple system that worked. The inverse proposition also appears to be true: A complex system designed from scratch never works and cannot be made to work. You have to start over, beginning with a working simple system.


Surprisingly to some, this is neither a law nor true.

A complex system with a good workable and testable architecture will work, starting with passing the tests down to satisfying the user...

Such systems are not designed in detail but in general, and usually start with a single, simple but powerful overarching idea, which is actually quite complex to implement, but ends up working evidently well once even halfway done.

Examples would be message passing architecture, event driven programming, time tracking, microservices, reactors, literate APIs, contract programming, Model-View-* and more... Note how half of those deal with reducing coupling by adding complexity.


That's an excellent clarification since so many people aren't getting his nuanced view.

I also disagree. I have yet in my life to see any programmer crank out a simple solution on the first try for anything that isn't a trivial requirement. The way I think most of us work is to create a complex solution first and to have to refactor at least a couple of times before we get to simple and elegant. That doesn't mean the complex version didn't work.

I've made plenty of code that is bug free according to the requirements. I tend to start with tests and I'm pretty good at figuring out edge cases and other ways to break my code before I've even written it, so what I end up with is pretty robust. But the first version is rarely elegant or simple. By the time I'm done with the first version I understand the problem space so much better and might throw out 90% of my original code in the first refactor. Am I the only one doing this? Sometimes it even takes weeks or months to get to simplicity. I keep understanding the requirements better and better and noticing how I could eliminate code, often after I've noticed some code I'm still not happy with and having slept on it. Sleep does wonders for seeing how to simplify.


>This title is misleading. The post actually says that the reason "simple" comes first is because without it you can't have "correct" (nor "fast", not that that matters so much). So he's not saying simple is most _important_, just that it comes first chronologically, and has the other two as consequences.

If that's what he meant, he's flat wrong. Simplicity is neither necessary nor sufficient for correctness.


Think about it in terms of each choice you make.

I have a simple solution and a complex solution. Does the simple solution meet the requirement(s) before me? If so, I prefer it. Let's move on to the next requirement and consider my options again.

The alternative might be to look at your requirements, but choose a complex solution (over a simple one) because you think it might meet other requirements, either ones that have not yet been identified or ones you think are likely to happen in the future.

Are there times that the more complex solution wins? Probably. Consider you want to write a blog. You know that you can create an HTML (text) file and slap it on a web server and your blog has started. But if you've done this before, you might also know that you can throw WordPress on your server for a little more up-front pain. You know you want comments and word clouds and date/time stamps and navigation. So you choose the complex solution. (You also know that you now face potential security implications, upgrades, dealing with users causing trouble with comments, having the PHP/MySQL infrastructure/hosting requirements...) Maybe you just wanted to dump your thoughts to the internet. Maybe the text file approach was better...

It may just be another way to say "avoid gold-plating your software."


Certainly don't gold plate.

But simply meeting requirements is only doing the minimum possible. Now in a government job, that's okay.

But in a real job, if you see where the simple solution is OBVIOUSLY wrong for certain likely cases not considered in the requirement, then THE REQUIREMENTS ARE WRONG, or incomplete and this should be pointed out!


The OP's advice, if applied in the CPU industry, would be disastrous. Modern desktop/server CPUs are incredibly complex... in order to drive maximum performance. Pipelining, OOO execution, branch prediction and speculative execution: these are all features that introduce a tremendous amount of architectural and design complexity. In many cases, they also harm correctness, because they can lead to functional and security bugs.

And yet, if you try to compete with Intel with a CPU missing the above optimizations, you will get absolutely creamed in the marketplace. No one, not even those touting the importance of simplicity and correctness, will buy what you're selling.

Today's free market is too complex for these overly simple rules. Choosing between simplicity, correctness and performance, is a complex tradeoff that needs to be made on a case-by-case basis. Trying to find shortcuts to avoid these analyses may feel liberating... but you're ultimately only shooting yourself in the foot.


A counter-anecdote: The features you listed started shipping (from Intel & MIPS) in microprocessors in 1996, 22 years ago. Intel's out-of-order Pentium Pro was beaten by the in-order DEC 21164 the same year.

Also, there's the case of Intel losing to in-order ARMs in mobile. First with XScale, and later on with the in-order Atoms. (https://appleinsider.com/articles/15/01/19/how-intel-lost-th...)


Sure, specific optimizations in specific markets may not be worth the cost they incur. Or they may not be valuable enough to overcome other weaknesses in the project.

And yet, if someone tried to sell a server CPU today that was not pipelined, not OOO, and didn't have branch prediction, it would absolutely tank in the marketplace.

I never said that performance optimizations should always be implemented. Just that performance optimizations should sometimes take precedence over simplicity.


> And yet, if someone tried to sell a server CPU today that was not pipelined, not OOO, and didn't have branch prediction, it would absolutely tank in the marketplace.

You could sell it as a niche product for high security applications, since OOO execution is a nasty side-channel.


That's an interesting idea. A "so simple it can't have bugs" design would never win over the mainstream-market, but it might be able to find a niche among extremely security-conscious users. This might be a great project for the open-source community to take on.

They did use OP's strategy in the CPU industry.

The first CPUs were simple as heck. They spent 20 years making them reliable (and faster without compromising simplicity, mostly just node shrinks), and they've only really been complicating the architecture in the last 30 years.


iAPX432

The OP's advice would probably be disappointing if applied to making a curry, too. It's fortunate he regularly used the word software throughout the blog, really.

The attitude of the CPU industry in this regard led to some recent well-publicized, very bad, and nigh-unfixable security vulnerabilities, as you might have heard.

And yet, I don't see you or anyone else committing to buy ultra-simple non-pipelined non-OOO desktop/server CPUs.

If you insist on only hiring chauffeurs who drive at 100mph, you can hardly complain when they get into a few accidents.


The main problem is that most software leans on those misfeatures as a crutch to excuse heaping layers of abstractions. Unfortunately this problem comes from several places, so it's not as easily fixed.

That being said, consider me lined up to buy one of these CPUs.


> And yet, I don't see you or anyone else committing to buy ultra-simple non-pipelined non-OOO desktop/server CPUs.

If you can find a CPU that has the same number of non-cache[1] transistors as a Intel/AMD chip, but spends them on a larger number of simple (and preferably independent/non-hyperthreaded) cores, rather than squandering them on speculative execution and ten thousand obscure model specific registers, I would absolutely buy several of them.

1: and similar amounts of cache, of course.


Intel makes them and you can buy them today, with up to 72 Atom CPU cores, e.g. (1) https://ark.intel.com/products/95830/Intel-Xeon-Phi-Processo...

Very niche products.

For massively parallel number crunching, GPUs are much better in both performance/watt and performance/dollar. That Xeon Phi 7290 delivers up to 3.45TFlops, costs $3200, and consumes 245W. Compare with GeForce 1080Ti 10.6 TFlops, $700, same 250W.

For general purpose software they don’t work particularly well either. Most IO interfaces are serial (SATA, PCI-X); they have very few wires going to the CPU. If you’re IO bound and you don’t have enough single-thread performance, you’ll struggle to saturate the bandwidth; doable, but very hard.

Also, for general-purpose software latency matters. Namely, input-to-screen latency for desktops and mobiles, or request-to-response latency for servers. Get a Windows or Android tablet with an Intel Atom Z8300 (available for $80-100) and see how it performs; it has 4 very similar cores (minus AVX-512), and frequencies are very similar, too.


https://www.intel.com/content/www/us/en/processors/xeon/xeon... shows at least six volumes of datasheets, and I still haven't found an instruction set reference. I have found https://www.intel.com/content/www/us/en/processors/xeon/xeon... (helpfully labeled "Datasheet, volume 2", rather than anything related to its contents) which describes a subset of the aforementioned ten thousand random control registers. So no, Intel does not make [simple cores], it makes heaping piles of shit complete with malware ("Intel® Management Engine") buried at D22:F0 on an internal PCI bus.

It isn't simple, it's designed to be incorrect (and even the parts that are supposed to be correct aren't), and I'm not surprised it fails on fast as well.


> I still haven't found an instruction set reference.

X86-64, SSE, AVX, AVX-512, AES-NI, etc. Their key selling point is software compatibility.

> Intel does not make [simple cores]

The cores are quite simple by today’s standards; otherwise Intel wouldn’t be able to pack 72 of them on a single chip. IME is unrelated to the cores, it’s a separate piece of silicon.

But if you don’t like the IME and don’t need backward compatibility with x86, maybe you’ll like this: https://www.qualcomm.com/products/qualcomm-centriq-2400-proc... But again, performance benefits of the architecture (48 simple cores) is questionable, GPUs are way faster for parallelizable number crunching, and you need single thread performance for almost everything else.


> X86-64, [etc]

So it has the ten thousand x86 and x64 registers in addition to the ten thousand ?PCI registers?

> The cores are quite simple by today's standards

That's my point; today's CPUs don't have "ultra-simple" as a option (at modern feature densities).

> IME is unrelated to the cores

Fair point, I probably should have added "and doesn't have technicalities like builtin malware" to my original post.

> https://www.qualcomm.com/products/qualcomm-centriq-2400-proc...

This looks interesting, although I'll need to research a bit more (and "SOC - Features - Integrated management controller" isn't encouraging). Thanks!


In the free software world, this is not a huge problem. You can make new CPUs and recompile programs with far less effort. All that complexity really comes from monetization.
