
Performance First - nathell
https://tonsky.me/blog/performance-first/
======
buster
I think that's a really bad article. It's (as always) not that simple.

He says "if you want to build a really fast program, pay attention to the
performance from the start." So it's a business requirement from the start
(to build a really fast program). That's fine and should be taken into account
for development. Put it in your definition of done, start with performance and
regression tests or something like that to pay attention to this particular
business requirement. Have checks for your performance requirements and handle
them as any other requirement for your software.

But most projects simply don't have that requirement and that's fine. Sure,
your project will fail if it's unusable and slow but it will also fail if you
put your efforts into unneeded optimizations from the start and slow down
development of other features. I've seen projects fail due to premature
optimizations in the beginning.

~~~
spurdoman77
This is a very good point. In many projects, performance is just not relevant.
Of course it makes sense to follow good practices and not make things slow on
purpose, but thinking a lot about performance when it is not exactly a goal
doesn't make sense.

With startups, I think there is a fine balance. You have to focus on many
things; performance might be critical or might not, depending on the business
case. You have to think about where to spend your focus.

~~~
IshKebab
I disagree. Performance is almost always an implicit goal. People don't state
it very often because it's so obvious. Who wants slow software?

But more importantly this article is a rebuttal to people saying "we can
optimise performance at the end". They aren't saying "We never intended for it
to be fast and we aren't going to bother trying to do that." they're saying
that performance _is_ a goal. Look at the examples he screenshotted. Nobody
says "dude, performance isn't a goal of this project; you just have to live
with it being slow".

~~~
cies
Good enough performance is the goal. If the latency is 100-200ms, then it is
often not a big problem if your request cycle is 30ms instead of 3ms. Unless
you expect heaps of traffic, then the 10x performance improvement means a lot.
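As a back-of-the-envelope sketch of that trade-off (the numbers are illustrative, not from any real system): a worker's throughput is roughly the inverse of its request time, so the 10x only starts to matter once traffic approaches capacity.

```python
def max_requests_per_second(request_ms: float, workers: int = 1) -> float:
    # Each worker can serve at most 1000 / request_ms requests per second
    return workers * 1000.0 / request_ms

# At 30ms per request a single worker tops out around 33 req/s;
# at 3ms it's around 333 req/s. Under light traffic, both are plenty.
print(max_requests_per_second(30))
print(max_requests_per_second(3))
```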

~~~
RL_Quine
Let's be realistic: no "apps", front-end websites etc. are hitting sub-second
latencies for anything. Hacker News, and the linked article are stark
exceptions where the page load itself was under 200ms for me. Posting this
comment of course took about 3 seconds. The amount of latency in the modern
web is fairly astounding if we're being honest.

~~~
cies
> Let's be realistic

Ok.

> no "apps"

Sure?

> Hacker News, and the linked article are stark exceptions where the page load
> itself was under 200ms for me

So let's be realistic (sorry :) ): it is possible, but it needs to be a strong
requirement up front, as it's going to dictate a lot of choices that are hard
to re-evaluate down the road.

> The amount of latency in the modern web is fairly astounding if we're being
> honest.

We can totally high-five on that one. One thing I feel contributes here is
that good devs are expensive, so we prefer them to use tools that help them
deliver quickly. Super low latency stuff usually incurs build times, and this
is not making your devs more efficient. Also, less good devs often find it
harder to produce optimized code "by default".

~~~
RL_Quine
I can’t work out if you’re attempting to be a dramatization of the problem or
not. Low latency developing “incurring build times” in particular doesn’t make
a lot of sense. It just takes considered design, nothing about the result is
necessarily expensive.

~~~
cies
I've programmed web stuff in Rust and in Rails. I can tell you that build
times (and a REPL on error, and a console with DB connectivity, and lots of
pre-existing modules/libs/plugins) do matter.

We can get all that stuff in Rust (C/C++/Zig) too, but it is simply not there
yet.

Only Go is both run time efficient AND compile time efficient.

My point: all is possible at unlimited resources, but that's unrealistic.

~~~
loopz
Go is a delight to work with, _if_ you solve new problems in your own way.
Unfortunately, not so much when solving the same old problems again, i.e.
SOAP/XML and similar.

------
willvarfar
I work on systems where speed is a requirement, and this think-about-speed-
from-the-beginning approach is the proper way to do it.

I've also been parachuted into projects that are having performance problems,
and usually you can find some nice low-hanging fruit. But there's usually some
crucial poor-performing part that can't be made faster without a complete
rewrite that uses an entirely different algorithm and data structure. And then
it becomes clear that doing so would impact all the other code; basically,
it's unfixable because of early design choices.

Most of the slowest programs I've had to crawl into have been 'designed for
web scale' nonsense from the beginning, and die a horrid performance death
through premature distribution and the use of 'scalable' NoSQL technologies
that were much talked about when the project adopted them but now are rather
out of fashion, thankfully. My experience is that IPC is bad for performance
and having a system on as few boxes as possible is good for performance, which
seems to be quite the opposite of the system architectures that were all the
'web scale' rage this past decade.

~~~
hashkb
And speed is always a requirement. There are tons of studies you can look up
about the impact of page load time on e-commerce revenue.

~~~
willvarfar
TBH the specific examples I had in mind when I was writing my post were mostly
backend systems, including batch ones. A lot of the performance improvement I
extract from software comes from dividing up what needs to be fast and what
can be slow, and slowing down the slow stuff to free up resources for the fast
stuff etc. I just chuck this in because I think it's interesting how many
perspectives and directions people on HN have.

------
unpythonic
> Programmers waste enormous amounts of time thinking about, or worrying
> about, the speed of noncritical parts of their programs, and these attempts
> at efficiency actually have a strong negative impact when debugging and
> maintenance are considered. We should forget about small efficiencies, say
> about 97% of the time: premature optimization is the root of all evil. Yet
> we should not pass up our opportunities in that critical 3%.

Knuth, "Structured Programming with Goto Statements". Computing Surveys 6:4
(December 1974), pp. 261–301, §1. doi:10.1145/356635.356640

No one has said to ignore optimization; what the quote is actually telling us
is to avoid falling into the trap of micro-optimizations which don't really
affect the overall performance of the system. Yes, you can spend a week making
the world's fastest substring-in-string function, but if you're only checking
that your email address has an @ in it, that's kinda silly.

Make your programs fast by concentrating on the areas which give you the most
bang for the buck and avoid falling into rat holes where you optimize
something which has no impact on the system overall.
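Finding that critical 3% usually starts with a profiler rather than guessing. A minimal sketch using Python's built-in cProfile (the function names are made up for illustration):

```python
import cProfile
import io
import pstats

def slow_part():
    # Stands in for the genuinely hot code path
    return sum(i * i for i in range(200_000))

def fast_part():
    # Stands in for a noncritical path not worth micro-optimizing
    return 42

def program():
    slow_part()
    fast_part()

profiler = cProfile.Profile()
profiler.enable()
program()
profiler.disable()

# Rank functions by cumulative time; the report points straight at slow_part
stream = io.StringIO()
pstats.Stats(profiler, stream=stream).sort_stats("cumulative").print_stats(5)
print(stream.getvalue())
```

The profile output, not intuition, decides where optimization effort pays off.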

~~~
commandlinefan
> No one has said to ignore optimization

Well, maybe nobody is _saying_ that (Knuth definitely wasn’t) but, as the
author points out, this is how a lot of applications of the “premature
optimization is the root of all evil” rule end up: any discussion of
optimization is forbidden as premature until it’s too late to fix the problems
and then it’s “good enough” because we “have to get to market”.

~~~
loopz
If that's a problem, don't "discuss" it. Just Do!

------
MrBuddyCasino
_> VS Code / Atom eventually became faster versions of their original Electron
prototypes. And I’m not saying it’s impossible to speed up programs after
release. I’m saying these improvements are accidental. It’s sheer luck they
happened._

This is... completely wrong? Those projects have invested considerable work
into making them fast.

He _does_ have a point though: there is a path-dependence when developing
software, and some of the fundamental choices have an influence on the
performance, and they cannot be easily reversed later, when it is time to
_make things fast_. Things like:

\- language (JS vs Rust)

\- framework (Electron vs Native)

\- architecture

------
mastry
In my opinion, the truth is somewhere between what the author is suggesting
and what most developers actually do. I recently came across The Fallacy of
Premature Optimization on ACM [1], which I think is a much better take on this
topic. It clarifies the statement that started this confusion which, in whole,
was "We should forget about small efficiencies, say about 97% of the time:
premature optimization is the root of all evil" [2].

The key words are "small efficiencies", but you seldom hear that part of the
quote.

[1]
[https://ubiquity.acm.org/article.cfm?id=1513451](https://ubiquity.acm.org/article.cfm?id=1513451)
[2] Sir Charles Antony Richard Hoare

------
jmull
Uh, sort of. That quote can be abused of course, but it is still correct.

Actually, this article abuses it -- it's confusing performance and
optimizations.

If you want performance, you design for it. When you optimize you profile and
modify an existing system to improve something (usually speed, but could be
other things like memory or requests, or whatever else might be constrained).

I think the article makes a decent point about performance, but just shouldn't
confuse it with optimizations.

The right time to address the performance potential of your app or system is
at design time. That's when you're going to be able to reduce the overall
amount of work your system needs to do its job, and design for contingencies
in case this or that box or line turns into a bottleneck.

Now, it is best to do high-level design of your essential components early --
you'll not only be able to ensure your system has the performance potential it
needs, but other things as well, e.g., minimizing the overall amount of code
you need to write, making your system more maintainable, etc. So I agree with
the general point of the article -- "performance first" -- but it shouldn't
really get mired in confusion over optimizations.

Rather than "performance first" I'd say "do high-level design early, and don't
forget to design for your performance goals".

------
strictfp
Similarly, as Jeff Atwood put it: "performance is a feature"

[https://blog.codinghorror.com/performance-is-a-
feature/](https://blog.codinghorror.com/performance-is-a-feature/)

~~~
danesparza
Performance takes time and intention. It needs to be planned for _like it's a
feature_ or it won't get worked on. Usually this means working with business
analysts to communicate to your customer that performance needs to be thought
of like a feature -- AKA given its own time and effort and space.

~~~
cies
Often perf reqs dictate the systems/languages/FWs that may be considered. Thus
it's very important to know these reqs up front.

You can add many features later, but some level of perf is hard to reach once
you have 50k lines of app code on top of, for instance, Rails. This is because
it's hard to change language/FW, compared to adding new functionality on top.

------
tpetry
So the story should rather have been: develop a sane architecture which is
fast by design, but do low-level performance optimizations at the end.

~~~
mratsim
Electron out?

But a fast product that doesn't ship will also die.

------
janpot
performance first? accessibility first? mobile first? security first?
correctness first? ...

Only one thing can come first; that's the definition of the word "first". You
know what I do? "Refactorable first". Keep your code in a state where
refactoring is relatively safe, and then you can work on the problems when
they appear.

~~~
fctorial
Look how fast this program is:

    int main() {}

------
drej
I think there are bits of false dichotomy in here. Writing optimised code from
the get go and _thinking_ about it at a later stage are not your only two
options.

You can always think about performance implications of your implementation and
keep them in mind. This is typically the runtime, your infrastructure, your
chosen level of abstraction etc. You know your implementation is not the
fastest, but you know what your limits are and where the low hanging fruit is.

To make it more specific - I implemented some economic model thingy and all I
wanted was for it to compile and run correctly, so I wrote it in a way that I
could swap the data storage for concurrency-safe structures without impacting
the results - but since that added tons of complexity, I just designed it with
a pluggable storage in mind. Then I developed it as a single-threaded program
and wrote tons of tests. Once that and the external API were done and
validated, I plugged in the concurrent storage layer and enjoyed great
speedups.

So no, my first prototype was not performant at all and I was okay with that.
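The pluggable-storage approach described above might look something like this in Python (a hypothetical sketch; the comment doesn't name a language or the actual interface):

```python
import threading
from typing import Protocol

class Storage(Protocol):
    def put(self, key: str, value: float) -> None: ...
    def get(self, key: str) -> float: ...

class DictStorage:
    """Simple single-threaded backend used while developing and testing."""
    def __init__(self):
        self._data = {}
    def put(self, key, value):
        self._data[key] = value
    def get(self, key):
        return self._data[key]

class LockedStorage:
    """Concurrency-safe backend swapped in later, behind the same interface."""
    def __init__(self):
        self._data = {}
        self._lock = threading.Lock()
    def put(self, key, value):
        with self._lock:
            self._data[key] = value
    def get(self, key):
        with self._lock:
            return self._data[key]

def run_model(storage: Storage) -> float:
    # The model only ever sees the Storage interface, so backends are swappable
    # without impacting the results
    storage.put("output", 1.02)
    return storage.get("output")

assert run_model(DictStorage()) == run_model(LockedStorage())
```

The design cost up front is one small interface; the concurrency complexity stays out of the model until it's validated.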

------
jwr
I think of performance in a different way: "don't do stupid things" is my
mantra. In other words, while not focusing on performance right away, I avoid
doing stupid things. This lets me produce software which runs surprisingly
well, spending very little time on improving performance.

------
vorpalhex
All the performance in the world doesn't matter if your software doesn't work,
doesn't do useful things or isn't complete.

Plenty of good, useful software that is well used is kind of slow. I don't
know anyone using software that's very fast but incorrect or unfinished.

~~~
commandlinefan
> is well used is kind of slow

And users hate that the software is slow, and wonder why it has to be that
way, but grit their teeth and deal with it because they feel like they don’t
have an option. And, until software professionals start behaving like actual
professionals, they won’t.

~~~
jdbernard
Clearly from your comment the software is so useful that, despite hating it,
the users deal with the slowness. Which to me only reinforces the parent's
point: correctness and availability are more important than speed. If speed
were the _most important_ thing then they wouldn't just hate it, they wouldn't
use it.

------
jaimebuelta
This article makes a strange assumption: that you know the end result
perfectly before starting the process, which is a rare case in my experience.

Normally, a product will start with a small feature, and all sorts of random
stuff will be added to it to make it grow. Some of them will be abandoned, as
they'll fail commercially, and some will expand as they bring money or
attention. Creating an architecture that can allow that kind of expansion
without crumbling over tech debt is the challenge.

Ultimately, some programs can be reduced to a simple operation, and those can
be designed from the ground up to be really performant. Tools like the silver
searcher
([https://github.com/ggreer/the_silver_searcher](https://github.com/ggreer/the_silver_searcher))
or ripgrep
([https://github.com/BurntSushi/ripgrep](https://github.com/BurntSushi/ripgrep))
have the advantage of laser focus on what they want to do, and are designed to
take advantage of that. Most software has far broader objectives.

The important part of "premature optimisation is the root of all evil" is
PREMATURE. Don't optimise things when it's still uncertain whether they need
to be optimised. When performance is critical, OF COURSE optimisation is a
requirement.

~~~
gwd
> The important part of "premature optimisation is the root of all evil" is
> PREMATURE.

Right -- I think the examples he's arguing against have slightly misunderstood
what "premature" means. The quote is supposed to mean, "Don't optimize until
you know that optimization is necessary." The people in his examples are
interpreting it as, "Don't optimize until your program is feature complete". I
doubt Knuth would agree with that principle. If people are raising tickets
saying that some feature is slow, then you know optimization is necessary, and
it's not premature any more.

That said, as with any proverb, you have to use some common sense. If you know
you're going to be doing key-value lookups in a data structure that's going to
contain hundreds of thousands of pairs, it's OK to start off with a hash
table, rather than starting with flat list + linear search and switching over
only after benchmarking.
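That common-sense choice is cheap to demonstrate; a quick sketch in Python (the sizes and keys are arbitrary):

```python
import timeit

# The same key-value data stored two ways
n = 100_000
pairs = [(f"key{i}", i) for i in range(n)]

flat = list(pairs)   # flat list + linear search
table = dict(pairs)  # hash table

def linear_lookup(k):
    # O(n) scan over the whole list in the worst case
    for key, value in flat:
        if key == k:
            return value

# Looking up a key near the end: O(n) scan vs. O(1) hash lookup
slow = timeit.timeit(lambda: linear_lookup("key99999"), number=20)
fast = timeit.timeit(lambda: table["key99999"], number=20)
assert slow > fast
```

At hundreds of thousands of pairs the gap is orders of magnitude, which is why no benchmarking is needed to justify the hash table here.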

------
fefe23
There might be some confusion of terminology at work here. I tend to agree
with the article that you have to think about performance while designing your
program, but that is not a contradiction to what Knuth is saying.

Knuth talks about optimization, which is a trade-off to make your existing
code faster by sacrificing readability.

This guy talks about proper design, which is a trade-off between time to
market and quality.

You can (and should!) have both. Think about what you do before you do it, and
then optimize the parts that need more work.

BTW: I think there might also be a misunderstanding about why optimization can
be bad. djb used to say "profile, don't speculate". I think this is the core
of the Knuth recommendation as well.

If you optimize before you have written all the code, chances are you don't
even fully understand the problem the code is trying to solve yet, so you
might have to rewrite parts of it, and then it would suck if you had sunk a
few months into optimizing those parts.

Also, only after you have all the code and you have test cases do you have an
opportunity to profile and find out which parts of your code are contributing
the most to its lack of performance. The fundamental insight here is called
Amdahl's Law. Or maybe you find out that your code is already fast enough.

I usually spend a ton of time beforehand thinking about how to write code that
will not need much optimization later, but sometimes you find out your initial
assumptions were wrong.

~~~
commandlinefan
> make your existing code faster by sacrificing readability.

I've seen slow code, and I've seen fast code, and I can't say that I've ever
noticed any significant difference in "readability". If anything, the faster
design choices are easier to read because they make more sense: I can
understand why the original author did what he did.

------
jandrewrogers
Performance is largely architectural for most types of software. The extreme
difficulty of redesigning architecture after the software is feature complete
(it approximates a rewrite) makes it a requirement that any performance and
efficiency considerations be designed into the system from day one.
Optimizations you can add later are always severely constrained by the kinds
of optimizations the architecture allows. In most software I see, the
"performance later" designs tacitly remove most possibility of macro-
optimization.

Not all software needs to care about performance, but I also see a lot of
large multi-server monstrosities that could easily be run simply and cleanly
on a single server with capacity to spare if more thought had been put into
performance and efficiency.

------
terminaljunkid
It seems many comments in this thread don't get the point.

The "premature optimization is the root of all evil" quote is about juvenile
low-level optimizations that sacrifice code readability. They don't give big
perf gains either.

But that doesn't mean "don't think at all about performance in the beginning,
you can optimize later". That's a bullshit notion. Someone will end up paying
the AWS bills for that. For instance, every programmer should know how latency
varies between disk and memory access, and think about the tradeoffs at
design/spec level.

It is true that there are cases where even that doesn't matter. But in most
cases where people think so, it ends up that they pay double in AWS bills to
save two hours of thinking.

~~~
fraktl
No, many DO get the point.

The point is that decision-making isn't binary. If I have a form and UI that
collect data, of course I won't try to use enums and weak maps or "the fastest
client-side framework" to succeed in the actual task: collect and save data.

Before the business logic is known, you can't design for performance. That's
why "premature optimization is the root of all evil" exists. Now, if you
design for performance, that means you KNOW something ahead of time, and
there's less discovery involved.

I'm not sure why people have the desire to create lists or rules for tasks
whose workloads are unknown.

We all want to get beautiful code paired with performance and features. But
before we know what we have to achieve, it's hard and dangerous to focus on
performance only.

I never adhered to hard rules or thoughts from people that think binary-only.
Every situation requires analysis before action can be taken. Therefore, the
right tool for the job is my mantra and this "article" is extremely dangerous
since it's tunnel-visioned.

------
jandrewrogers
As a tangent to this, the widespread lack of performance/efficiency-first
thinking in software design is responsible for an enormous amount of waste
that directly contributes to things like climate change. We've normalized
using 10x more servers than is required to deliver a workload with
performance-conscious architecture because it is expedient. The "throw
hardware at the problem" mentality has gotten out of hand in my opinion.

Many people with very eco-aware personal lifestyles write code that is the
software equivalent of a coal-fired SUV, oblivious to the contradiction.

------
mamcx
What is lost in this kind of conversation can be summed up as:

You pay NOW or pay LATER.

Some stuff is VERY hard to add later, and will cost much more:

\- Security

\- Usability

\- Cross platform

\- Accessibility

\- Type safety/Type hinting

\- Performance

If you delay paying attention to this stuff for too long, the cost later will
increase enormously. And then only a massive refactoring will fix it (or
worse, like in one of my projects, a total change of langs and frameworks!)

------
robjan
I picked up a codebase where people were optimising for performance throughout
the development. It's now very hard to change or refactor because there are
preemptive performance optimisations and caching everywhere (front end and
back end).

It makes far more sense to create an extensible solution and then identify and
fix the bottlenecks as the vision about the "final product" (or version x)
becomes more clear.

------
jssmith
If you’ve used some heavyweight framework then performance will probably be
hard to add later. If you wrote the code yourself then I’m with Knuth, it’s
not a big deal to optimize it later and that’s a much better way to go.

I’ve found the following rubric helpful for building distributed systems: 1/
correctness, 2/ reliability, 3/ performance. Credit: CockroachDB team.

------
darekkay
> If someone says:

>> "We are building programs correct first, performant later. We will optimize
it after it’s feature complete."

> It actually means: performance would stay mostly the same, unless those
> people find some low-hanging fruits that will allow them to make the program
> fast without changing too much of what they’ve already built.

I agree with the premise, but not (always) with the overall conclusion of the
article. If you go performance-first, you should also go security-first,
usability-first, and accessibility-first. All of those factors are harder to
implement afterwards than when done from the beginning. But there's also the
other side: you may come up with a performant, secure, usable and accessible
application that is lacking core features, making the product useless. There's
no silver bullet, but _not_ going performance-first is totally fine in some
cases (especially prototypes / MVPs).

------
asadkn
> I’m saying these improvements are accidental. It’s sheer luck they happened.

No. They're not accidental. They're entirely intentional, done by _sheer_ hard
work.

On a related note, I am surprised a person who claims to write about UI design
has a website with that background color. The contrast is just awful.

------
karmakaze
This is better expressed as "consider performance from the start". Don't make
it your #1 priority and don't ignore and neglect it either. Do sensible
things:

1\. use hash (or ordered) collections rather than scanning large-ish arrays

2\. use reasonable database schemas and indices for queries

3\. similarly consider correctness (e.g. constraints)

4\. one place where earlier versions can skimp is sometimes algorithms, to be
replaced by better ones later. Think of this as 'doing things that don't
scale', which is fine for an MVP, but do have a plan for how this will be
updated if you're fortunate enough to need it.

5\. good forethought is to offload work to the client in the design, which
scales better.

6\. don't do any/much caching early on and make choices that are compatible
with adding caching later
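One way to read point 6, sketched in Python: keep expensive computations pure functions of their inputs, so a cache can be layered on later as a one-line change (the function and its names are illustrative):

```python
from functools import lru_cache

def price_report(product_id: int, day: str) -> str:
    # Pure function of its arguments: no hidden globals or I/O mixed in,
    # so wrapping it in a cache later cannot change its results
    return f"report:{product_id}:{day}"

# Later, once profiling shows this is a hot spot, caching is one line:
cached_report = lru_cache(maxsize=1024)(price_report)

assert cached_report(7, "2020-01-01") == price_report(7, "2020-01-01")
```

No caching is done early on, but the choice to keep the function pure is what keeps the option open.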

------
celeritascelery
> Let me put it this way: “Premature optimization being the root of all evil”
> is the root of all evil.

The disconnect for most people is that optimization does not necessarily mean
“performance”. It can mean optimizing for reusability, compatibility,
object-orientedness. Basically, trying to optimize for a use case that you
don’t have yet. I see this happen all the time with people optimizing to make
their solution more general. It makes the code more complex and it may never
be used in a general case.

If you are trying to build a performant piece of software, then you should
absolutely be optimizing for that from the start. That’s not premature
optimization, that’s meeting your design goals.

------
chacham15
The author here seems to be missing the basic point of the phrase. Others have
correctly noted that performance is a feature, but even with that taken for
granted, it is not often apparent what the constraints of the full system are
until they've been built. You may try and guess, but you're more likely to
guess wrong than right. Look at OSes, ISAs, DBs, etc. All of those have been
rewritten numerous times not because the originals paid no attention to what
performance would be like, but rather because they did not know how to improve
performance until they had something they could test.

------
mtm7
Eh, there's a gray area here. Your priorities really depend on the type of
project you're making.

Let's say you're building a web app for a SaaS business. Wouldn't it be more
important to make your code refactorable and develop features quickly? If you
spend lots of time on [Feature X]'s performance and customers turn out to _not
use_ [Feature X], you've just wasted your time and money — basically your
runway, your livelihood. Basically, if a feature isn't worth making, is it
worth making well? :)

Maybe this is good advice if you're working on a different type of project
though.

------
Quarrelsome
I think it depends on whether your design is conducive to performance or not,
and this is where people are talking at cross purposes.

If you've architected a bunch of end points that are chatty and jump layers in
a weave of a tangled mess or built an app infrastructure that never isolates
expensive operations and just arbitrarily inlines them, then you'll struggle
to optimise it later.

But if your architecture is relatively sane and your data model apt for the
features at hand and those on the horizon then finding hotspots and improving
performance many times fold is totally doable later.

------
seanwilson
High-performance code can often be an order of magnitude longer and more
complex to write than moderately performant code. When you're not even sure
what you're building yet, aiming for performance first is a waste of time and
a good way to ensure you end up not releasing anything at all.

Optimise for what's important for your specific project. Maybe exploring the
design space is the priority to start. Maybe releasing before the end of the
year is more important.

Not all projects are the same so it's pointless to argue about general rules
like this.

------
bigdollopenergy
Improving performance after the fact is a lot harder than you might think.
Depending on the nature of your application and its users' behavior, it can be
a very steep uphill struggle to realise even small gains. Here's an issue I
recently ran into.

When I was tasked with increasing performance of a legacy web app, I thought
there were so many "quick wins" that I could take a VERY slow application
(we're talking 10s-long requests for most pages) and get most pages down to
less than 3s per request, and at MOST 5s for all of them.

One example was permissions being stored in a long "bitmap" string of 1's and
0's that was parsed on each page load (it was hundreds of "bits" long). The
code to handle this was beyond awful complexity-wise and added 2s per page
load to parse. Some simple refactors and 2s were removed from each and every
page load. There were a bunch of things like this too; I was on a roll fixing
these things.
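A hypothetical sketch (not the actual legacy code) of the kind of fix described: parse the bitmap permission string into a set once, then do O(1) membership checks instead of re-walking the string on every page load.

```python
def parse_permissions(bitmap: str) -> frozenset[int]:
    """Convert a '1011...' permission bitmap into the set of granted IDs, once."""
    return frozenset(i for i, bit in enumerate(bitmap) if bit == "1")

def has_permission(granted: frozenset[int], perm_id: int) -> bool:
    # O(1) membership test instead of re-parsing hundreds of characters
    return perm_id in granted

# Bits 0, 2 and 3 are set in this example bitmap
granted = parse_permissions("10110")
assert has_permission(granted, 0)
assert not has_permission(granted, 1)
```

The parsed set can be cached per session, so the string is only walked once per login instead of once per request.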

In my dev environment I set up some performance test harnesses (replaying
real-world traffic from a representative number of users) and got benchmarks
indicating that I'd increased performance by about 4 times on average. It was
still sluggish, but no longer unusable. When I deployed, I found the site was
only 1.5 to 2 times as fast. Not quite enough to hit my very modest goal of
<5s per request. What the hell happened?

Turns out the site was so slow, but had so many active users that needed to
use it, that we were suffering something you'd normally only observe in
real-life automobile traffic, a phenomenon called "induced demand". When a
site is really slow, users won't use it unless they absolutely have to. So as
you make the site faster, they will subconsciously start doing more things
beyond the absolute bare minimum they would normally do. You get stuck
fighting some weird kind of "equilibrium": as the site gets faster, you drop
below the "do what I need and get out" threshold of more and more users, who
push back by adding more traffic until the site reaches a new equilibrium.

So we made the site 4 times as fast, but the amount of requests more than
tripled and negated half our gains. Improving performance on a web app of this
nature after the fact is a LOT harder than you might think.

------
0dmethz
Of course you should consider performance when you start a project - if your
language, framework or core design is slow it seems unlikely you'll be able to
do much to improve it later. You need to at least think about it.

But I don't think "Premature optimization" is synonymous with "performance".
Optimizing or perfecting every detail of a feature that might still go through
significant changes over time seems risky and wasteful.

------
bfung
Slippery slope here is, how far does one take performance first design?

I’ve seen too many devs rely on caching as the upfront solution. That’s
performance first design, right!?

------
intc
I'm not sure if performance has to be first. One could, for example, do
proof-of-concept style work in some higher-level language before starting to
create the actual product, which then should also take the performance aspects
into consideration.

I think we actually waste a lot of energy by not encouraging ourselves to pay
attention to performance. Nature sort of demands efficiency in the long run.
Yes?

------
Nevermark
Avoid “premature optimization” doesn’t mean avoid “optimization”.

What is “premature” is project dependent.

It is not premature to start 99% focused on optimization for a project where
achieving an efficiency threshold is needed to validate project viability.

“Avoid premature anything” is tautological.

“Performance first” violates a tautology. That's all I need to say about that.

We need tautological reminders, because we humans easily forget obviousness in
the face of complexity.

------
sktrdie
Meh, look at Slack. It's a matter of priority. For me, extensibility, important
features and quick bug-fixing are more important than performance. If working
on performance gets in the way of any of those things, do it later.

------
johnwish007
I hope he doesn't build software like that website

~~~
adyavanapalli
What do you dislike about it?

~~~
asadkn
I believe OP is referring to the terrible readability and color contrast, with
that background color.

~~~
lorenzhs
Terrible contrast? Firefox's accessibility inspector gives the text a contrast
ratio of 15.17, with 4.5 being the minimum for what qualifies as good and 7 as
the recommended minimum for good readability.

I change text colours in the web inspector a lot because plenty of websites
these days are medium gray on light gray with a font-weight of 100, but this
clearly isn't one of them. It's fine to say you don't like the colour, but the
contrast is great.

~~~
asadkn
It's not that the contrast is low (which is another problem on its own, I
agree). It's the other extreme - too high of a contrast [1] by using a fully
saturated yellow and black. The same reason you don't use #fff on #000.

[1] [https://uxmovement.com/content/why-you-should-never-use-pure-black-for-text-or-backgrounds/](https://uxmovement.com/content/why-you-should-never-use-pure-black-for-text-or-backgrounds/)

------
marcinzm
As I see it, performance only matters if you're losing more customers or users
over it than you gain by having spent that effort on features. And hardware is
cheap nowadays. This is also not a linear relationship, performance focused
development generally constrains the system which makes refactoring harder
which makes new features harder (because you almost certainly did not get the
architecture right the first time around).

~~~
jaclaz
>And hardware is cheap nowadays.

Cheap and available (and correctly set up) are not synonyms, however.

Over the years I've observed how - generally speaking - programmers (rightly,
as it is their work) tend to have the best of the best in hardware (and a fast
LAN, and a fast WAN, etc.), so locally, where the program is developed, it
feels fast, responsive, etc.

But the user base may have older processors, less RAM, slower mass storage and
slower WAN/Internet access, so at their end the same program will appear slow,
in some cases so slow to make it unusable.

I've always thought that developers should keep a few reasonably old machines,
with the same services/other tools installed that the client target machines
normally have, on which the program can be tested for excessive slowness
compared to what happens on the development machines.

~~~
mratsim
Thankfully, Travis, Azure Pipelines and AppVeyor are slow as molasses, and if
you are forced to wait for them before merging a commit, you will optimize the
speed so that at least CI is reasonably fast.

------
mekane8
The Knuth quote stands. It has been and continues to be one of my favorites.
It has served me well.

------
tuananh
the prototype should be crazy fast. as soon as features are added, it's going
to slow down.

------
dnautics
I think this is terrible advice. There are plenty of ways to write highly
performant, wrong software that contains bugs. What's worse than a slow
program? One that doesn't run 1/10 of the time. What's worse than a program
that runs 1/10 of the time? One that runs with high uptime and resiliency, but
handles funds with the wrong sign.

~~~
octocop
Is it really terrible advice for anyone to actually think about performance
when programming?

~~~
fatbird
No, but you usually don't know what will be the poorly performing part or in
need of optimization, beyond the macro level. Don't check your brain at the
door, but don't overthink it too early.

Someone else cites old systems with core decisions that are unchangeable
because a data structure or algorithm is central to the entire codebase, as an
example of "should have thought of performance earlier". My reaction to that
was: why is the code structured in such a way that a particular data structure
or algorithm is central to the entire edifice?

I think the error the article makes is in describing later performance
improvements as "accidental" and low hanging fruit, which is nonsense. If your
code can't handle continual refactoring, including to realize significant
performance gains, then you have a deeper problem than performance.

------
zurn
Using performance as a synonym for only runtime speed doesn't feel satisfying.
Shouldn't we include quality of results, such as correctness and fulfillment
of requirements, in the evaluation of whether a program performs well?

~~~
skohan
_Performance_ has a variety of contextual meanings. Market performance would
be an example. When talking about execution performance, we're usually talking
about speed/smoothness, under the assumption that a program is already correct
and complete. If we overload this to include basic correctness, we would need
another term to capture these other qualities.

~~~
zurn
But we are using other, more precise terms already, no? Speed, fast, slow.

------
jerome-jh
JavaScript has been made fast as a result of huge efforts.

------
HorkHunter
The colours of this page are an unfortunate choice tbh...

------
vezycash
If the author had said, "Performance & functionality over beauty" it would
have made more sense.

------
mdip
While I agree with the overall feeling the author expresses -- that people
misunderstand "Premature Optimization is the root of all evil"[0] and in doing
so treat _optimization_ as something that is "evil" rather than _premature
optimization_ -- there are various misunderstandings here, which lead to
incorrect assumptions, which lead to introducing complexity (likely to be
counterproductive, potentially performance- _killing_ ) too early in the
development process. I consider the following quote to be key:

    
    
    > It’s never too early to start measuring and working on performance.

Personally speaking, when I start feeling this way, I see it as a _strong
signal_ that I'm engaging in optimization too early. I'm guided by the idea
that time spent attempting to improve performance at early stages is more
often wasted, or even self-defeating, than it is useful. Knowing exactly
_when_ is the right time, though, is extremely tricky.

Most languages/frameworks have a set of "don't do this"-es that fall into the
performance category -- knowing them, and weighing the probability that they
will reduce a bottleneck against the risk of overly complicating the
code/implementation, is important. Too often, though, the "performant
implementation" is used because other developers are familiar with it -- not
because it necessarily helps anything (e.g. using StringBuilder in C# for
string concatenation -- in many common cases, the result is _slower_, but
people are so used to seeing it that it becomes "something many developers
just do" even though it is substantially uglier).

Performance shouldn't be an _after-thought_. If I'm expecting that my
application _must_ support 10,000 concurrent users on a single host, and the
work it's performing couldn't be achieved by a single-threaded server, I'm
going to write the implementation thread-safe from the start. That's not
premature optimization, that's designing an application. Provided the hosting
constraints are not as rigid, it may make more sense to single-thread the
back-end on multiple hosts. Neither of these are as easy to reason about as a
single-threaded, single-host, blocking implementation, but the alternative is
"it doesn't work at all" (under load, anyway).
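A minimal sketch of that "thread-safe from the start" design choice, using a
hypothetical per-endpoint request counter (Java for illustration, not the
commenter's code): picking concurrent data structures up front is
architecture, not premature optimization, because retrofitting them onto a
plain HashMap later would mean auditing every caller.

```java
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.atomic.LongAdder;

public class RequestCounter {
    // ConcurrentHashMap + LongAdder stay correct under many concurrent
    // writer threads; a plain HashMap here would silently corrupt state.
    private final ConcurrentHashMap<String, LongAdder> counts =
            new ConcurrentHashMap<>();

    public void record(String endpoint) {
        // computeIfAbsent is atomic, so two threads recording the same
        // new endpoint never clobber each other's counter.
        counts.computeIfAbsent(endpoint, k -> new LongAdder()).increment();
    }

    public long count(String endpoint) {
        LongAdder adder = counts.get(endpoint);
        return adder == null ? 0 : adder.sum();
    }
}
```

Nothing here is micro-optimized; it is simply designed for the stated
concurrency requirement from day one.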

The problem with measuring/working on performance too early is that the
measurements are measurements of code in isolation. You can't measure what
isn't written and once the various dependencies are introduced, your initial
measurements/adjustments for performance may be pointless. Yes, you sped up
the acquisition of a small blob of JSON that takes a notoriously long amount
of time to produce, but because you hadn't completed the rest of the app, you
neglected the fact that another request with 250MB of JSON ran at exactly the
same time. You traded off memory for CPU to speed up the first implementation,
but now you need a lot more memory for the one running in parallel and the
lack of resources caused swapping, slowing _everything down_ to the point that
the original, slower, less-memory-consuming option for your first request was
a better choice.

The "Premature Optimization is the Root of All Evil" isn't a phrase meant to
say "Optimization is unimportant and you shouldn't waste time on it". It's a
recognition that optimization is _extremely important_ and will almost
certainly _never happen_ due to resource exhaustion. The confusion is about
what resource is being exhausted -- the resource is _time_. We're all
constrained by having to write functionality and make it work. If you want to
make it perform well, you're going to burn a lot of time on optimization. You
will burn _far less time_ there if you optimize your code when those
optimizations are going to have the greatest benefit. Optimizing _too soon_
means you risk spending a lot of time measuring/tweaking with a result that
will not improve performance, or worse, has to be undone. "Optimized" code is
usually more complex/more difficult to reason about, which makes completing
the underlying feature-set more difficult.

Measuring and optimizing _at the right times_ (and identifying the correct
"right times") maximizes the positive effects of the optimizations and reduces
the likelihood that those optimizations will make writing the rest of the code
more difficult. Every program (and kind of program[1]) has a different "right
time". Determining that "right time" can simply be a matter of planning your
hardware / code architecture early enough to know what is actually _needed_
[2] and identifying when one should start reviewing the application for
bottlenecks.

[0] I have much the same problem here as I have with the misunderstanding of
the biblical roots of that phrase. People will often repeat "Money is the root
of all evil" but miss the first part of that phrase: "... _the love of_ money
is the root of all evil." With appropriate context, its meaning changes
dramatically.

[1] Library code vs. a complete program -- My "general libraries" are often
micro-optimized to seek the greatest performance while maintaining a balance
between memory usage and CPU. This makes them maximally useful to the vast
majority of my consumers. I might end up ripping those out of an
implementation if I find a better trade-off given a complete program's
function, but most of the time it reduces the need for me to optimize around
common bottlenecks.

[2] Regular check-points / proper planning are as close to a cure for "the
roots of all evil" as you can get. Performance is _always a feature_ in that
you know in advance what your application's constraints and requirements are.
Designing code that meets neither isn't abiding by the "Premature
Optimization" advice; the point is avoiding writing complex implementations
that eke out a few extra milliseconds of performance on a code path that is
blocked by dependent data anyway, which is often the end result of premature
optimization.

------
dziulius
I think the author also optimized, from his side, the time it takes for users
to hit `ctrl+w`. That yellow background is terrible; it gave me a headache.

