
Rules of optimization - benaadams
http://www.humus.name/index.php?page=News&ID=383
======
blt
> _But if performance work has been neglected for most of the product
> development cycle, chances are that when you finally fire up the profiler
> it’ll be a uniformly slow mess with deep systemic issues._

Totally true, and I have observed it IRL on large, old projects whose
architecture was hostile to performance. And these products were doing
computationally intense stuff.

> _Performance is everyone’s responsibility and it needs to be part of the
> process along the way._

Yes.

Everyone repeats "only optimize after profiling". It's true: if your program
is running too slowly, you should always profile. But that rule only applies
when your code is already slow. It doesn't actually say anything about the
rest of the development process.

Developers should have a solid grasp of computer hardware, compilers,
interpreters, operating systems, and architecture of high performance
software. Obliviousness to these is the source of performance-hostile program
designs. If you know these things, you will write better performing code
without consciously thinking about it.

If this knowledge were weighted more heavily in the software dev community,
people would put in the effort to learn it. It's not that complicated. If the
average dev redirected half of the effort they spend learning languages,
libraries, and design patterns into learning these fundamentals, the field
would be in a much better place.

~~~
hinkley
We need a name for this. The old advice is to ignore performance because it’s
not important. When it becomes important there is now very little you can do.
You get the low hanging fruit ( _terrible_ analogy. Ask a fruit grower how
stupid this is), or the tall tent poles.

Eventually you have an unending field of nearly identically tall poles, and
everyone claims victory (we’ve done all we can!). But your sales team is still
mad because your product is several times slower than the competing products.

What do we call this? Picket fence? Blanket of doom? Death shroud? Forest for
the trees? Stupidity?

~~~
roel_v
Terribly OT, but the 'low hanging fruit' comes from a time when full-sized
fruit trees were the norm, which is something today's (professional) fruit
grower wouldn't know anything about. In old-style orchards (i.e., with full
size trees), it's actually quite common to just leave 'high hanging fruit'
because there's no somewhat safe or practical way to get to it. Unless you're
talking about something else, which I'd be interested in hearing about, as this
is quite relevant to me at this time of the year.

~~~
hinkley
It’s not a simple problem to be sure.

I rub elbows with landscapers in one of my hobbies and the general consensus
seems to be that leaving fruit or leaves under your tree as pathogen vectors
is a bad plan. That it’s better to hot compost everything you can’t grab
before it hits the ground. We have orchard ladders (the ones with the flared
base) and telescoping handheld pickers and dwarf varieties (trees with better
modularity?) that keep most of the fruit within picking distance.

Also leaving ripe fruit in the trees just means the wildlife or storms have a
chance to get it before you can.

I don’t have any mature fruit trees now but my plan is similar to my friend’s,
who has five apple trees: harvest as much as I can process at a go (she makes
cider). And then whatever the animals don’t get after that is a bonus.

~~~
roel_v
Sure, fallen fruit can cause problems; which is why in 'neat' orchards (only
short grass under trees, very little other vegetation that can house predators
that eat the insects that are attracted by fallen fruit) you pick as much as
possible and pick up and discard fallen fruit. And 'production' orchards use
dwarf or semi dwarf varieties anyway, along with planting patterns that let
them harvest with telescopic lifts.

My point was, the analogy is still valid. In full size orchards, you're not
going to be able to get any of the non-low hanging fruit. I know of cherry
trees 25+ m high with kilos and kilos of cherries in absolutely unreachable
places. And on many fruit trees that I know of, only the low hanging fruit is
picked at all most years.

My own apple trees are 6ish years old now, I'm hoping I'll get my first proper
(if somewhat modest) harvest this year. But even now, I already know I won't
be able to reach much of it.

Either way thanks for the comment, I think we're on the same page here - I was
just wondering if there was something I missed in your original comment. I
always like learning about peculiarities or practices of people in other
places, although I think in this case there isn't much difference after all.

------
Animats
1999 called. It wants its yellow text on a blue background back. (On the other
hand, today's hipster medium grey text on a light grey background isn't an
improvement.)

Knuth made his comment about optimization back when computing was about inner
loops. Speed up the inner loop, and you're done. That's no longer the case.
Today, operational performance problems seem to come from doing too much
stuff, some of which may be unnecessary. Especially for web work. So today,
the first big question is often "do we really need to do that?" Followed by,
"if we really need to do that, do we need to make the user wait for it?" Or
"do we need to do that stuff every time, or just some of the time?"

These are big-scale questions, not staring at FOR statements.

------
keithnz
A lot of old optimizing advice misses some of the realities of its day. Things
generally ran slow. Performance problems were much more in your face and very
common. There was a wealth of performance "tricks", and people made code
difficult and brittle in the name of them. So there was a bit of push-back,
and a lot more focus was put on getting things correct / simple / well
designed. But because things were slow, you often didn't need to profile: the
problems were right in your face. You still needed to make it fast, and while
you may not have jumped to tricks too quickly, you were still quite aware of
designing for performance. You had to consider data structures carefully.

These days, problems are often just not so in your face, and there's a lot of
"don't worry about it" advice. Which is often not bad advice in terms of
getting things working. But eventually you do find you have to worry about it,
and sometimes those worries come far too late in bigger projects. So I think
these Rules are pretty good. I'd also add: benchmark very simple aspects of
the toolset you are using, to get an expectation of how fast things should be
able to go. Often I have found (especially in the web world) that someone's
expectation of how fast something can go is way too low, because they have
already performance-bloated their project and think it's approximately normal.
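
For example, a toolset baseline can be as small as this (a minimal Go sketch;
the payload, names, and struct are invented for illustration):

    // baseline_test.go - run with: go test -bench=. -benchmem
    package baseline

    import (
        "encoding/json"
        "testing"
    )

    // A stand-in payload; in practice, benchmark the real shapes your
    // service handles.
    var payload = []byte(`{"id":42,"name":"widget","tags":["a","b"]}`)

    type record struct {
        ID   int      `json:"id"`
        Name string   `json:"name"`
        Tags []string `json:"tags"`
    }

    // BenchmarkUnmarshal measures how fast the stack's JSON decoding can
    // go at all. Once that ceiling is a number you know, a handler that
    // is 100x slower is recognizable as bloat rather than "normal".
    func BenchmarkUnmarshal(b *testing.B) {
        for i := 0; i < b.N; i++ {
            var r record
            if err := json.Unmarshal(payload, &r); err != nil {
                b.Fatal(err)
            }
        }
    }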

------
souprock
This matters. I write emulators. (join me:
[https://news.ycombinator.com/item?id=17442484](https://news.ycombinator.com/item?id=17442484))

If you want to run something like Windows 10 in a full-system emulator that is
something like valgrind, performance matters. For each instruction emulated,
you might need to run a few thousand instructions, and you can't get magical
hardware that runs at 4 THz. Right from the start, you're deeply in the hole
for performance.

Consider the problem of emulating a current-generation game console or
smartphone. You can't give an inch on performance. You need to fight for
performance every step of the way.

Just recently, I did a rewrite of an ARM emulator. It was all object-oriented
nonsense. I haven't even gotten to profiling yet and the new code is 20x
faster. Used well, C99 with typical compiler extensions can be mighty good.

It's important to have a feel for how the processor works, such as what might
make it mispredict or otherwise stall. It's important to have a feel for what
the compiler can do, though you need not know how it works inside. You can
treat the compiler as a black box, but it must be a black box whose behavior
you are familiar with. When I write code, I commonly disassemble it to
see what the compiler is doing. I get a good feel for what the compiler is
capable of, even without knowing much about compiler internals.
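
For example, the same habit in Go terms (a minimal sketch; with C99 the
equivalent commands would be "gcc -O2 -S file.c" or "objdump -d"):

    // sum.go - to see what the compiler actually emits:
    //   go build -gcflags=-S .                  (compiler's own assembly dump)
    //   go build -o app . && go tool objdump -s main.sum app
    package main

    // sum is deliberately trivial; the disassembly shows exactly what the
    // loop compiles to, which is the whole point of the exercise.
    func sum(xs []int64) int64 {
        var total int64
        for _, x := range xs {
            total += x
        }
        return total
    }

    func main() {
        println(sum([]int64{1, 2, 3}))
    }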

~~~
onion2k
_Just recently, I did a rewrite of an ARM emulator. It was all object-oriented
nonsense. I haven't even gotten to profiling yet and the new code is 20x
faster._

The key thing in optimization is summed up by the old adage "you can't make a
computer run faster; you can only make it do less." All of those things that
make the dev experience nicer use CPU cycles. The more cycles that are needed,
the longer your app takes to do things, and the slower it seems.

For most apps that doesn't actually matter because your app is idle 99.9% of
the time, but it's good to be able to fix things when it does.

~~~
Pete_D
I always remember this write-up from the original author of GNU grep:
[https://lists.freebsd.org/pipermail/freebsd-current/2010-August/019310.html](https://lists.freebsd.org/pipermail/freebsd-current/2010-August/019310.html).
"The key to making programs fast is to make them do practically nothing. ;-)"

------
jrochkind1
> Not every piece of software needs a lot of performance work. Most of my
> tweet should be interpreted in the context of a team within an organization,
> and my perspective comes from a rendering engineer working in game
> development.

Tempers his universals a bit.

In general, when working on web apps, which is mostly what I do, you don't
gotta be quite that ambitious I think. On the other hand, you can't be
_totally blind_ either, I've worked on some web apps that were disasters
performance-wise.

But in general, while I gotta keep performance in mind all the time (okay
you're right OP), I don't really gotta be _measuring_ all the time. Profiling
and fixing the top 3% of hot spots usually totally gets it.

But, when I worked on an ETL project -- performance performance performance
all the way. Dealing with millions of records and some expensive
transformations, the difference between an end-to-end taking 6 hours and
taking 4 hours (or taking one hour! or less!) is _huge_ to how useful or
frustrating the software is. And I had to start at the beginning thinking
about how basic architectural choices (that would be hard to change later)
affected performance -- or it would have been doomed from the start.

Certainly a game engine renderer is also more the latter.

But I don't know if you need _that_ level of performance focus on every
project.

~~~
nicoburns
I also work mainly on web stuff. I always think it's good to have a plan for
how you would improve performance if necessary, but not necessarily actually
do it. I'm currently working on a Laravel app, and beyond writing sensible
queries, I've done little performance optimisation. But I have a good idea how
I would if I needed to (probably rewrite the only 2 APIs that are
performance-sensitive in a faster language/framework!)

Interestingly it was working on an ETL project that drove me to learning Rust
(my first low-level language). I tried to implement it in node, and it wasn't
anywhere close to being fast enough.

------
kodablah
The real problem succinctly: people think that quality and quantity are
mutually exclusive. Or to go further, that those are also mutually exclusive
with inexpensive. That's why the industry is flooded with bugs and low
quality, cheap labor. Note, I did not say "junior devs" because I've often
found the attributes of a high quality developer are more innate (albeit
possibly dormant) than taught. If I can write code twice as quickly, it
performs twice as well, and it's much more maintainable, I have a real hard
time empathizing with anyone's legacy/performance woes. It's like people
forget the word premature in the common optimization quote. It's not premature
to develop with an optimized mindset because it rarely costs anything more
than your more expensive salary. You can have a reasonably optimized mindset
without needing empirical proof on all but the most nuanced problems.

~~~
hinkley
People who don’t like the word optimization carve out large tracts of land
from that territory and call it Design or Architecture or even capacity
planning.

And I agree with you on quantity ‘vs’ quality. A team that is faking it the
whole time will never make it. Building the capacity of the team to deliver
code at a certain quality bar (that is, to stop faking it) keeps the project
from grinding to a halt in year three or four.

------
andrepd
A refreshing and sane way to think about performance, in a culture of
"performance and optimisation are evil and useless; now let us ship our 12MB
webpage please".

~~~
Chyzwar
I am not sure how picking a random metric is going to help. It is better to
work towards performance metrics that matter to the end user.

~~~
TeMPOraL
This is not "random metric", and I find that talking too much about "metrics
that matters to end user" in abstract is muddying the waters. Here are the
important performance metrics for end users:

\- It's slower than my speed of thinking / inputting. If I can press keys
faster than they appear on the screen, it's completely broken.

\- I can no longer have all the applications I need open simultaneously,
because together they exhaust all the system resources - even though I have a
faster computer now, and an equivalent set of applications worked together
just fine 5 years ago.

\- My laptop/phone battery seems to be down to 50% in an hour.

\- Your webpage takes 5+ seconds to load, even though it has just a bunch of
text on it.

\- Your webpage is slowing down / hanging my browser.

\- I'm on a train / in a shopping mall, and have spotty connection. I can't
load your webpage even though if I serialized the DOM, it would be 2kb of HTML
+ some small images, because of all the JS nonsense it's loading and doing.

~~~
Chyzwar
Most users cannot distinguish when websites are hogging resources. Metrics
like the first paint, first meaningful paint and time to interactive are
usually more important than memory usage.

Reducing memory usage will surely help, but it muddies the water more than
more specific user-centric metrics do. Sometimes, for business applications
(Slack), feature richness and in-app performance are more important than
memory usage. It all depends on the website/application, and throwing a
tantrum about memory usage does not help.

~~~
TeMPOraL
> _Most users cannot distinguish when websites are hogging resources._

Of course they can. The user happily works on their computer. They open your
site. Their browser (or entire computer) starts to lag shortly after. Doesn't
take a genius to connect the dots, especially after this happening several
times.

> _Metrics like the first paint, first meaningful paint and time to
> interactive_

Those are metrics for maximizing the number of people who don't close your
site immediately; not for minimizing the misery they have to suffer through.

~~~
Chyzwar
> Of course they can. The user happily works on their computer. They open your
> site. Their browser (or entire computer) starts to lag shortly after.
> Doesn't take a genius to connect the dots, especially after this happening
> several times.

Most users have 20 or more websites/tabs open at the same time. Most users do
not even know what memory is. Funnily enough, for most users a JavaScript-rich
SPA will be perceived as snappier after the initial load than a server-side
rendered page, but it will use more resources on the client side.

> Those are metrics for maximizing the number of people who don't close your
> site immediately; not for minimizing the misery they have to suffer through.

These metrics are what is important for users. Users do not care how many
resources your application consumes. What matters is the performance perceived
by the users.

[https://developers.google.com/web/fundamentals/performance/user-centric-performance-metrics](https://developers.google.com/web/fundamentals/performance/user-centric-performance-metrics)

------
mpweiher
Yes! Very much this. This is a lesson that, for example, Apple learned the
hard way with Tiger. They now have dedicated performance teams that look at
_everything_ throughout the release cycle.

I'd like to refine the advice given a little bit, an approach I like to call
"mature optimization". What you need to do ahead of time is primarily to make
sure your code is _optimizable_ , which is largely an architectural affair. If
you've done that, you will be able to (a) identify bottlenecks and (b) do
something about them when the time comes.
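
To make "optimizable" a bit more concrete, a hedged sketch (Go here; the
interface and names are invented for illustration, not taken from the book or
talks). The point is a narrow seam that lets the naive first implementation
be swapped once profiling justifies it, without touching callers:

    // index.go - the seam that keeps the design optimizable.
    package store

    type Entity struct {
        ID   string
        X, Y float64
    }

    // EntityIndex is the architectural boundary: callers depend on it,
    // not on the data structure behind it, so the naive version below
    // can become a hash map or a spatial grid when measurements demand.
    type EntityIndex interface {
        Insert(e Entity)
        Lookup(id string) (Entity, bool)
    }

    // sliceIndex is the obvious first version: O(n) lookups, trivially
    // correct, cheap to write, and easy to throw away later.
    type sliceIndex struct{ entities []Entity }

    func (s *sliceIndex) Insert(e Entity) {
        s.entities = append(s.entities, e)
    }

    func (s *sliceIndex) Lookup(id string) (Entity, bool) {
        for _, e := range s.entities {
            if e.ID == id {
                return e, true
            }
        }
        return Entity{}, false
    }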

Coming back to the Knuth quote for a second, not only does he go on to stress
the importance of optimizing that 3% when found, he also specifies that "We
should forget about _small_ efficiencies, say about 97% of the time". He is
speaking specifically about micro-optimizations; those are the ones that we
should delay.

In fact the entire paper _Structured Programming with goto Statements_ [1] is
an ode to optimization in general and micro-optimization in particular. Here
is another quote from that same paper:

“The conventional wisdom [..] calls for ignoring efficiency in the small; but
I believe this is simply an overreaction [..] In established engineering
disciplines a 12% improvement, easily obtained, is never considered marginal;
and I believe the same viewpoint should prevail in software engineering."

That said, modern hardware is fast. Really fast. And the problems we try to
solve with it tend towards the simple (JSON viewers come to mind). You can
typically get away with layering several stupid things on top of each other,
and the hardware will still bail you out. So most of the performance work I do
for clients is removing 3 of the 6 layers of stupid things and they're good to
go. It's rare that I have to go to the metal.

Anyway, if you're interested in this stuff, I've given talks[2] and written a
book[3] about it.

[1]
[http://sbel.wisc.edu/Courses/ME964/Literature/knuthProgrammi...](http://sbel.wisc.edu/Courses/ME964/Literature/knuthProgramming1974.pdf)

[2]
[https://www.youtube.com/watch?v=kHG_zw75SjE&feature=youtu.be](https://www.youtube.com/watch?v=kHG_zw75SjE&feature=youtu.be)

[3] [https://www.amazon.com/iOS-macOS-Performance-Tuning-Objective-C/dp/0321842847](https://www.amazon.com/iOS-macOS-Performance-Tuning-Objective-C/dp/0321842847)

~~~
molotovbliss
Having optimized others' code before:
[https://magento.stackexchange.com/a/13992/69](https://magento.stackexchange.com/a/13992/69)

To those who slam spaghetti code, a posit: lasagna is just spaghetti-flavored
cake. Too many abstracted layers to fulfill a request; down the layers, & back
up, just to fulfill a simple request.

Autoloading can be expensive as well for I/O until cache is involved,
especially with large code pools. Caching isn't optimization.

Trim the fat, reduce layers, use leaner pasta &/or less special sauce.

If you start by tweaking costs/time at the ingredients & cooking process
(lower level: OS, daemons, etc.) and then move on to frameworks/libraries,
it's less to consume & healthier. E.g., cache invalidation becomes much
faster; caching should come at the end of your optimization journey.

Fundamentals are being abstracted away daily; while that's great for rapid
prototyping & easier maintenance of code, it's imperative to understand
problem spaces before delivering a solution.

A very highly recommended book is:

The Elements of Computing Systems: Building a Modern Computer from First
Principles

[https://www.amazon.com/dp/B004HHORGA/ref=cm_sw_r_cp_awdb_t1_...](https://www.amazon.com/dp/B004HHORGA/ref=cm_sw_r_cp_awdb_t1_RP5oBbNNR5SM3)

The demoscene is a very good example of a culture of optimization to mimic;
it's been about bragging rights since way early on.
[https://www.pouet.net/](https://www.pouet.net/) Granted, hardware is much
faster than it was back in the olden days, but just from admiring their
creations you get a taste for older hardware & milking every cycle you can.
Maybe forcing younger up-&-coming devs to use older hardware means they'll
appreciate it more, & you'll know that on current hardware it'll fly.

Another thing to think of is the von Neumann bottleneck & how it was solved
(partially) with cache levels or Harvard architecture. I/O is a basis for
optimization.

"There's no sense in being precise when you don't even know what you're
talking about."

[https://en.wikipedia.org/wiki/Von_Neumann_architecture](https://en.wikipedia.org/wiki/Von_Neumann_architecture)

With all of the above said, I'm agreeing wholeheartedly with the article &
yourself, as it's nice to see others focused on this; it seems to be a dying
breed.

I'll leave this here for web developers:
[http://motherfuckingwebsite.com](http://motherfuckingwebsite.com)

------
_sh
"Fast is my favourite feature" \--Someone, maybe from Google? Not sure.

~~~
stcredzero
If you look at data relating to user conversion, and users staying on and
revisiting websites, fast would seem to be just about everyone's favorite
feature!

~~~
djake
Can you recommend where I can see such data? It would be a very useful
reference.

~~~
stcredzero
I just googled for a few seconds:

[https://iarapakis.github.io/papers/TOIS17.pdf](https://iarapakis.github.io/papers/TOIS17.pdf)

[https://skipjaq.com/assets/files/Why%20Fast%20Matters%20-%20...](https://skipjaq.com/assets/files/Why%20Fast%20Matters%20-%20A%20SKIPJAQ%20Whitepaper.pdf)

------
theprotocol
"Premature optimization is the root of all evil" is an amusing quote with some
truth to it, but it's brought up as some kind of absolute law these days.

I've seen it given as an answer on StackOverflow, even when the question is
not "should I optimize this?" but more like "is X faster than Y?"

We need to stop parroting these valuable, but not absolute, mantras and use
common sense.

~~~
ajnin
Crucially, people miss the word "premature", which I interpret as meaning that
you should not optimize before you've measured that the piece of code you're
considering is actually causing performance issues in your program; but if you
have the data, then go for it.

~~~
theprotocol
Agreed. They're going by the heuristic that every case they encounter is
probably premature.

------
karmakaze
Both sides have merit. The trick is to find a point in between that works for
you. What I tend to do after having to optimize after the fact on numerous
projects amounts to:

    
    
      - write for clarity with an architecture that doesn't greatly impede performance
      - have good habits: always use data structures that work well at both small and larger
        scales whenever readily available (e.g. hash tables, preallocating if size is known;
        see the sketch after this list)
      - think longer about larger decisions (e.g. choice of datastore and schema,
        communication between major parts)
      - have some plans in mind if performance becomes an issue (e.g. upgrade instance sizes,
        number of instances) and be aware if you are currently at a limit where there isn't
        a quick throw-money-at-the-problem next level
      - measure and rewrite code only as necessary, taking every opportunity to share both
        why and how with as many team members as feasible
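
A minimal sketch of the "good habits" bullet above (Go here; the function
names and data shapes are invented for illustration):

    // habits.go - defaults that stay cheap at both small and large n.
    package habits

    // index builds a name -> position map. Preallocating with the known
    // size avoids repeated rehashing as the map grows, and lookups stay
    // O(1) whether there are ten records or ten million.
    func index(names []string) map[string]int {
        byName := make(map[string]int, len(names))
        for i, name := range names {
            byName[name] = i
        }
        return byName
    }

    // doubled shows the same habit for slices: size the backing array up
    // front instead of letting append grow it repeatedly.
    func doubled(xs []int) []int {
        out := make([]int, 0, len(xs))
        for _, x := range xs {
            out = append(out, x*2)
        }
        return out
    }

Neither habit costs any readability, which is why they can be defaults rather
than optimizations.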

~~~
StillBored
"write for clarity with an architecture that doesn't greatly impede
performance"

I came here basically to say something similar to this. The most important
thing is to have a design at the beginning that attempts to identify where
the problems (critical paths, at a high level) are going to be, and avoids them.
That doesn't necessarily mean the initial versions are written to the final
architecture, but that there is a plan for mutating the design along the way
to minimize the overhead for the critical portions.

Nearly every application I've ever worked on has had some portion that we knew
upfront was going to be the performance bottleneck. So we wrote those pieces in
low-level C/C++, generally really near (or in) the OS/kernel, and then all the
non-performance-critical portions in a higher-level language/toolkit. This
avoided many of the pitfalls I see in other projects that wrote everything in
Java (or whatever), where the overhead of all the non-critical portions
interfered with the critical portions. In networking/storage it's the split
between the "data path" and the "control path"; some other products I worked
on had a "transactional path" and a control/reporting path.

Combined with unit tests to validate algorithmic sections, frequently the
behavior of the critical path could be predicted by a couple of the unit
tests, or other simple metrics (inter-cluster message latency, etc.).

------
gwbas1c
I find that the code that looks slow often isn't, and the really slow code is
always a surprise.

I work on something that uses a lot of immutables with copy-modify operations.
They never show up in a profiler as a hot spot. The most surprising hot spot
was a default logger configuration setting that we didn't need. Other hot
spots were file API calls that we didn't know were slow.

I think what's more important is to use common sense in the beginning, and
optimize for your budget. Meaning: know how to use your libraries / APIs,
don't pick stupid patterns, and only do as much optimization as you have time
for.

Sometimes an extra server or shard is cheaper than an extra programmer, or gets
you to market on time. Sometimes no one will notice that your operation takes
an extra 20ms. Those are tradeoffs, and if you don't understand when to
optimize for your budget, you'll either ship garbage or never ship at all.

~~~
TeMPOraL
I strongly agree.

> _Sometimes an extra server or shard is cheaper than an extra programmer, or
> gets you to market on time._

Sure, you're bailing yourself out by burning money. That's fine if it's all
contained on your backend. My problem starts when a similar situation happens
on the frontend - in the JS code of the website, or in the mobile app. Too
often developers (or their bosses) will say "fuck it, who cares", as if their
application were the _only_ thing their customers were using on their machines.
The problem with user-end performance is that suddenly I can only run 3
programs in parallel instead of 30, even though I just bought a new
computer/smartphone - because each of those 3 programs thinks it has the
machine to itself.

~~~
gwbas1c
An extra 20ms, or an extra server, is very different from sloppy JavaScript
bogging down someone's computer. That's crap code.

------
taeric
I actually saw this tweet the other day. Amusing how often performance is
neglected until it kills something.

I have also felt it would be a fun bingo game, over a year, to see when a
famous quote would come up. This Knuth quote would definitely be on there.

------
hamilyon2
Not everyone is building a web browser, a compiler, or even an e-commerce
site. Even on a commerce-related website, only the pages on the customer
acquisition and buying paths really matter.

Most of those pages will bubble up when you do your first profiling session
anyway.

You can get away with good data structures, good SQL queries, and a little
big-O analysis almost everywhere.

Premature optimization, premature nines of stability, premature architecture
and abstraction are as evil as ever. They all distract you from moving forward
and shipping.

Of course, if your product is a BLAS library, database, compiler, web browser,
operating system, or AAA video game, that does not apply. I mean, for most of
us, "profile often" is terrible advice.

(edit: spelling, clarifications)

~~~
kurtisc
>You can get away with good data structures, good SQL queries, and a little
big-O analysis almost everywhere.

So design databases and algorithms for performance from day 1 & understand how
the data needs to be structured? You'd have to verify the assumptions you made
about the data were correct and profile often if you didn't want to worry
about performance regressions. Bad enough regressions are a failed product and
you can't ship something that could never get fast enough without a rewrite.

I don't think everyone in this thread has the same concept of what
optimisation means. It doesn't all have to be obfuscating bit-fiddling. Even
identifying the important paths is optimisation.

------
squirrelicus
This article reminds me of something I read a couple years back that stuck
with me.

The 10x dev is the dev that creates 10% of the problems other devs create.

Thinking ahead is a skill the industry at large unfortunately seems to lack.

------
chrisbennet
Good performance comes down to using suitable algorithms - not optimizations
after the fact. Thoughtful algorithm choices are never premature.

There are also a lot of times when it doesn’t matter, possibly the majority of
the time in some domains. I’m working on a project now where the answer takes
a couple of seconds to generate but isn't needed for minutes, so spending
time to make it faster would be a waste of my client's money.

------
ebikelaw
> But if performance work has been neglected for most of the product
> development cycle, chances are that when you finally fire up the profiler
> it’ll be a uniformly slow mess with deep systemic issues.

Hrmm. In my experience very good programs also have very flat profiles. I
don’t think a flat profile is indicative of bad performance culture.

------
vbezhenar
While I think that writing simple code is preferable to writing optimized code
given a choice, I just hate writing obviously non-optimal code; it leaves a
bad taste in my mouth. I'm trying to find some middle ground, even if I'm sure
that those optimization efforts won't yield any observable gains.

------
chrisbennet
Something I find helpful is to performance and memory profile your projects on
a semi-regular basis and establish a baseline. When things suddenly deviate
(esp. memory usage) you catch it early before the problem has time to grow.

------
shmerl
Nice article, but my eyes hurt from the colors :) Switching to reader mode in
Firefox helps a lot.

------
DeathArrow
Rules of optimization:

1\. Optimize only if needed.

2\. Premature optimization is the root of all evil.

~~~
croo
I don't know why you are being downvoted, as these are famous rules and they
contribute to this thread.

I saw rule 2 ignored almost daily at a job - putting caches where they don't
matter, taking days on a feature because of analysis paralysis, colleagues
agonizing over using int instead of Integer, or not creating another function
because of 'overhead'.

I think these are wise rules - of course not when you use a 2 MB js library
for an uppercase() function. That is madness.

But the decision whether to use an ArrayList or a LinkedList is unimportant
almost every time. Of course there is that 1% when it matters, and these rules
are about the other 99%.

~~~
n4r9
I downvoted it because it adds nothing. If you've read the article, you'd know
that those classic arguments are addressed very early on. If you haven't read
the article, you learn nothing about what's actually in the article or why it
might interest you.

------
andrewmcwatters
Replace optimization with security, good design, or any other important facet
of software engineering, and you have the same story.

Good software is a multifaceted effort, and great software takes care of the
important parts with attention to detail where relevant: great game libraries
don't add significant overhead to frame time, great filesystem libraries don't
increase read time, great security libraries don't expose users to common
usage pitfalls creating less secure environments than had you used nothing at
all.

It happens that optimization gets deprioritized in favor of other things,
where "other things" in this context is some category I fail to pin down,
because PMs don't give a shit about what that other category could be, and
instead just care that whatever you're working on is shipped to begin with.

Great software developers will respect the important parts, and still ship.
And yes, it's always easier to do this from the start than it is to figure it
out later. Many things in life are this way.

I have a soft spot for performance, though, so I care about this message. One
day hardware will reach spatial, thermal, and economic limits, and when that
day comes, software engineers will have to give a shit, because too many sure
as hell don't seem to give a shit now.

~~~
stcredzero
The _Rule of Everything_ in Software Development:

Everything should be subject to cost/benefit!

Corollaries: (...don't do that!)

    
    
        - If you paint yourself into an untenable corner, you lose! 
        - If you plant a time bomb that explodes and kills you, you lose!
        - If you sink yourself too deeply into technical debt, you die! You lose!
    

Avoiding pre-optimization is just the flip side of the intelligent
cost/benefit analysis coin. Don't start writing hyper-optimized code
everywhere. However, also don't architect your real-time game system such that
making string comparisons is inextricably the most common operation.

Here's a solution that I've found. It's not foolproof, but it will get you out
of trouble quite often. Code such that you can change your mind. HN user
jerf's "microtyping" technique in Golang let me do that with my side project.
I committed the egregious error I mentioned above, of using string identifiers
for the UUID of my MMO game entities. But because I followed the "microtyping"
technique and did not directly use type string, but rather used ShipID, I
could simply redefine:

    
    
        //type ShipID string // old bad way
        type ShipID struct { // new way
            A int64
            B int64
        }
    

Where possible, your code changes should be bijections. Renamings should be
bijections and never surjections. The bigger your changes, and the more
automation is involved, the more you should hew to bijective changes.

~~~
perfmode
what do you mean by bijection/bijective in the context of refactoring and
iteration?

~~~
jessaustin
Presumably GP meant that the mapping from old ShipID to new ShipID _must_ be
_injective_. _Bijective_ would also work, but if the mapping were merely
_surjective_ there could be multiple entities in the old code mapped to a
single entity in the new code. That seems sure to cause problems.
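
To make that concrete, a small Go sketch (the helper and the mapping are
invented for illustration, echoing the ShipID example upthread): an
injectivity check over the old-to-new ID mapping catches exactly the collapse
described above.

    // migrate.go - verify an ID migration didn't merge distinct entities.
    package migrate

    import "fmt"

    // NewShipID mirrors the two-int64 struct form from upthread.
    type NewShipID struct {
        A, B int64
    }

    // checkInjective fails if two distinct old string IDs map to the same
    // new ID. A merely surjective mapping would still cover every new ID
    // while silently collapsing two ships into one record.
    func checkInjective(mapping map[string]NewShipID) error {
        seen := make(map[NewShipID]string, len(mapping))
        for oldID, newID := range mapping {
            if prev, ok := seen[newID]; ok {
                return fmt.Errorf("collision: %q and %q both map to %+v", prev, oldID, newID)
            }
            seen[newID] = oldID
        }
        return nil
    }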

~~~
stcredzero
Actually, a surjection might result in a refactoring in the pedantic sense.
Also, some refactorings work out to be surjections.

------
stockkid
The article had interesting points, but I decided to stop reading because the
yellow wall of text on a black background was not super readable.

~~~
slededit
It's a dark blue. I think your color balance may be way off. With a more
accurate color balance it's readable.

~~~
tom_
Some people just don’t like light text on dark background. It’s often not
something you can fix with a minor tweak.

~~~
slededit
Agreed, but calling the blue black indicates a seriously misconfigured
monitor.

------
bcheung
Rule #9. Optimize webpage colors so it doesn't hurt people's eyes.

------
fold_left
Interesting title, but the text was far too small to read on my Pixel 2 and it
wouldn't let me pinch-zoom in. Optimization of some other metric, perhaps?

------
DeathArrow
I don't give a fuck, I tell people to buy faster hardware.

------
mehrdadn
Rules of _software_ (or _code_ ) optimization maybe? I clicked on this
thinking it was going to be about gradient methods.

------
kyleperik
It's interesting to me that people's opinions on this subject are much like
politics. You either make optimization a priority, or you don't optimize
prematurely. You write clever, well-written, and commented imperative code, or
clear, concise functional code.

They're two completely different schools of thought and may work well in
either scenario. It depends a lot on your background and your current context
which way you are going to write code.

~~~
TeMPOraL
I find your comment to be much like politics here. Presenting a false
dichotomy. People who are obsessed over optimization vs. people who aren't.
People who would prefer writing "clever well written and commented imperative
code" vs. "clear concise functional code".

The truth is, you have at least three distinctive groups on the performance
spectrum: people who are obsessed, people who treat it as one of the things to
prioritize, and people who never do it (and tell you it's never the time to do
it). The truth is, a lot of performance problems come from imperative code,
and functional doesn't mean slow if you know what you're doing. The truth is,
most big performance issues can be solved by not doing stupid shit and taking
an occasional look at the resource usage. The former requires a little
knowledge of how computers and your tools work. The latter requires _care_
about end users, which seems to be in short supply.

~~~
kyleperik
I'm not sure quite what you're getting at. I actually agree with you, I'm one
of those who does not like imperative code. I think it's completely
counterintuitive.

I was making an effort to be unbiased by making clear the "good attributes" of
each side.

I wasn't at all trying to make that harsh of a distinction. In fact, I was
trying to point out the ridiculousness of it. Rather than hinting at it with
irony, I should have said clearly what my opinion is:

If you're too concerned with speed, and you optimize early, I think your code
style is less than ideal. Essentially your code is messed up to the point that
it's too difficult to go back to make optimizations if necessary.

I think the reason functional programmers don't talk about optimization as
much is that they don't need to. It's a completely different paradigm.

So much like politics, if you only listen to one side, you're going to get
messed up ideas of the way the world works.

I hope this offers a better explanation.

