
Performance Matters - MattGrommes
https://www.hillelwayne.com/post/performance-matters/
======
JshWright
Not sure where the author is from, but I'm not aware of any states that still
allow paper PCRs. ePCRs are just a fact of life.

I pretty much never bother opening the laptop until I'm at the hospital. There
are a lot of reasons for that, but a laggy UX isn't really one of them
(despite the fact that the UX is indeed super laggy on any ePCR I've ever
used). Instead I use a pen and paper (or a pen and a strip of white medical
tape on my thigh, for a more critical patient).

The reason I don't bother is because the whole UI is just too much of a
hassle. Not the application frontend, the whole thing. The laptop, the
keyboard, etc. With a pen and paper I don't have to keep taking my gloves off
and putting new ones on each time I need to switch from documenting something
to caring for the patient (a pen is trivial to wipe down, or just toss if
needed). With a pen and paper I can jot simple notes in a shorthand I'm
familiar with, rather than having to navigate to the right page to enter the
information I want to record at the moment.

I know very few providers that bother touching the laptop prior to arrival at
the hospital. Generally we write notes on paper, then bring the laptop into
the ER and start typing while we're waiting for a bed for our patient (it's
not uncommon that this takes long enough to get the whole chart written). At
my agency we have two hours from when we transfer care to when the chart has
to be signed off and locked.

I also chuckled at the suggestion that "0.1% of PCRs have errors that waste an
hour of a doctor's time". In my experience 0.1% is pretty much how frequently
the PCR gets reviewed at all in the ER, and anything important enough to spend
an _hour_ on would have been mentioned up front during the transfer of care
(never mind the fact that an hour is an absurd amount of time for a doctor to
be spending on a single patient).

~~~
joshstrange
My boyfriend is a nurse and said a very similar thing. He said he very rarely
looks at a PCR, as everything important is conveyed verbally during the handoff
from EMT to hospital.

~~~
JshWright
I'm glad I omitted the ER nurse targeted snark in that case... ;)

------
Ensorceled
When I was writing software for medical devices in the '90s, we had very
clear policies for dealing with cognitive drift that included performance
ceilings, testing for drift in beta testing, etc. In our testing, 10 seconds
was the absolute maximum time that a surgeon could "idle" and stay on task in
surgery.

In addition, we found that including spinners, progress bars, etc. would not
necessarily reduce the cognitive drift but would reduce emotional frustration.

The worst thing for frustration was UI elements being slow; buttons that
responded slowly, scrolls that lagged, pull-downs that didn't pull down. Exactly
as this author notes. There was very little tolerance for those kinds of
delays.

I presume that, like website response rate tolerance thresholds that have
dropped from 5 seconds to less than 2 seconds now, the medical industry is
probably using much lower times now.

~~~
phkahler
>> The worst thing for frustration was UI elements being slow; buttons that
responded slowly, scrolls that lagged, pull-downs that didn't pull down. Exactly
as this author notes. There was very little tolerance for those kinds of
delays.

IMHO there is NO excuse for those kinds of delays. None. If you have those
issues you're doing something terribly wrong.

~~~
penagwin
I don't know the details, but many embedded interfaces/controllers don't have
very fast processors/microcontrollers, and UI is really expensive compared to
everything else they have to do (take some inputs, and submit some data).

Think about how laggy the Raspberry Pi 2 or even 3 is in the desktop interface.
Sure they can be optimized etc, but now imagine what they would've used 10+
years ago and how slow it would be.

EDIT: I have to agree with you guys: even given the tech they had, they
should've put a higher priority on responsiveness. I don't know what the
status is today, but I know even the 2015 Prius's touch screen feels too
unresponsive for my liking.

~~~
dahart
Respectfully, that is a common and tempting argument, but no, no, and no. The
Raspberry Pi 3 has a freaking quad-core 64-bit 1.4GHz processor that is more
than fast enough to run full-screen video games at 60Hz.

It's more than just an "optimization" that is making UIs on these plenty-fast
processors so slow; it's a large-scale failure of software engineering,
building slow layers on top of slow layers.

The processor is not the reason for slowness, and we've had absolutely
responsive UIs not just 10 years ago, but 50 years ago too. The actual reason
is economic: cheap engineering. Using freely available components not well
suited to embedded devices, writing layers using scripting languages because
it's easy, treating the problem as solved when the functionality is there
without regard to the speed.

~~~
Faark
On the other hand, the times when the CPU was polling keyboard resistances and
then directly controlling your CRT's electron beam accordingly are long over.
Today's systems are complicated, and we try to hide that complexity behind
abstractions. All of those with buffers. Input buffers, OS message buffers,
triple image buffers, and so on. All of it adding time.

Yes, we won't again have the responsiveness we had 30 years ago. Because the
tradeoffs are just not worth it. At the same time, stories like the article
are a necessary reminder of there being worth in performance.

~~~
dahart
There's unnecessary complexity and abstractions -- especially in poorly
designed systems, yes, that's part of my point, so I agree there.

> we won't again have the responsiveness we had 30 years ago.

Not sure I can agree with that. For the best systems, we absolutely have
better responsiveness now than we've ever had, with response rates in the
hundreds, sometimes thousands, of hertz, and orders of magnitude more compute
we can put in between input and response.

Maybe the worst systems are getting worse, but I think on the whole
responsiveness has been monotonically improving since the invention of the
microchip.

> the times when the CPU was polling keyboard resistances and then directly
> controlling your CRT's electron beam accordingly are long over.

I don't know every system ever made, but I don't think there was any long
period of time when application software on digital computers controlled the
electron beam in a CRT directly or polled keyboard resistances directly, those
were abstracted by the hardware via DACs & buffers more or less from the
start. Maybe some of the old vector display video games did, I'm not certain,
but in any case, it's been abstracted that way for more than 30 years. The
first framebuffers happened almost 70 years ago, before 1950.

~~~
mntmoss
The raw throughputs are definitely better but there is concern regarding
latency even at the lowest levels, since today's CPUs have complex cache,
power state and frequency throttling mechanisms. You cannot guarantee that
something will perform with identical runtime in all expected use-cases unless
you take care to use hardware that is optimized in that direction. And because
the software environments are more complex, a lot of capability is just dropped
on the floor as the intermediate layers get in the way.

W/r/t buffers in front of things, a decent number of the micro systems of the
'70s were so memory starved that they would make these tradeoffs to retain
video sync - that describes the Atari 2600 ("Racing the Beam" is the name of a
book about the console), and Cinematronics vector games (if a game did not
complete drawing at 60Hz, it reset). Most early arcade games did work with
DACs (or rather ADCs) but ran their own calibration and button debounce code -
and even with layers of abstraction that's still basically true today.

With graphics, the move towards desktop environments doing GPU compositing
impacts graphics coders, since they now often have window manager buffers in
front of them instead of direct access to a rectangle of pixels.

Web browsers are a more extreme example of this. Because the graphics model
revolves around document presentation, things that aren't really documents, or
are extremely customized documents, often get burdened by latency that they
wouldn't have if it weren't for the browser.

~~~
dahart
Ah, I've been meaning to read Racing the Beam for years!

That's all true, and thanks for an insightful comment!

I would just add though that it's easy to frame things in a way that makes it
seem harder than it really is to maintain responsiveness. For example, while
yes caching makes guaranteeing exactly repeatable timing a problem, that's an
issue at the nanosecond/microsecond level, and not really a problem at the
millisecond (human perception) level at all. Today's hardware doesn't have
issues maintaining 60hz unless the software isn't even trying.

Another counter-example would be that while yes, browsers and desktop UIs are
doing compositing and don't have direct access to the pixel buffer, the
compositing is actually done on the GPU via low-latency commands. Direct
access to the pixel buffer would actually be much slower than what we have
right now. The browser has no trouble responding and rendering at 60hz unless
you do things that cause more than the ~16ms of compute you have time for.
Triggering page reflows will do that, but compositing an animated WebGL canvas
over something else on the page is plenty fast.
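
To make the reflow point concrete, here's a minimal sketch (TypeScript,
hypothetical markup): interleaving DOM reads and writes forces a synchronous
layout pass on every iteration, while batching them keeps you well inside the
~16ms budget.

    const rows = Array.from(document.querySelectorAll<HTMLElement>('.row'));
    
    // Slow: each offsetHeight read forces a reflow, because the previous
    // iteration's style write invalidated the layout.
    for (const el of rows) {
      const h = el.offsetHeight;        // read (forces layout)
      el.style.height = `${h + 10}px`;  // write (invalidates layout)
    }
    
    // Fast: batch all the reads, then all the writes -- one layout pass.
    const heights = rows.map((el) => el.offsetHeight);
    rows.forEach((el, i) => {
      el.style.height = `${heights[i] + 10}px`;
    });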

------
mfer
Something is missing in this conversation. Do the people who are creating the
applications that deal with medical data have motivation to make them better or
more useful?

I've spoken with numerous people in my life who deal with these types of
systems. Everyone complains. Fields for information are in unexpected places.
There are too many things to click on. All of the people I've talked with
agree on the same things.

It's as if no UX people were involved. The experience doesn't fit the need
quite right.

What would motivate the makers of these things to improve on the situation?
How are the contracts won? Do the people who have to use this stuff have a say
in it? Do the hospitals and doctors offices measure stats to see what's
working and what's not?

This space is ripe for disruption simply based on user experience.

~~~
Jwarder
I worked on EMS data entry software for a while (deepest apologies to any EMTs
reading) and the big issue we ran into was needing to fulfill
doctor/administrator pet projects to get a sale.

The county wants to collect data on whether a drowning incident happened at a pool
without a locking gate. Now all services need to update their software to
include a random checkbox. An EMT was caught without gloves and now the parent
hospital wants EMTs to record what levels of PPE they wear on every run. The
customer might get a grant if they can help track infectious diseases so now
there is an extra box where the EMT can guess if the patient has the flu.

In the abstract that is good data to want to know, but it ends up being more
junk on the screen for the EMT to skip over while dealing with the patient or
sitting in the hospital parking lot after a run.

~~~
JshWright
If I had a dollar for every time I swore at NEMSIS, I could quit my day job...

~~~
Jwarder
NEMSIS is ugly, but it is vastly easier to work with than HL7. That was an
evil XML standard.

~~~
ScottFree
Was? Has the US medical system finally moved on from HL7?

~~~
JshWright
Newer versions of HL7 (and FHIR) aren't (exclusively) XML.

------
Gene_Parmesan
People who quote "premature optimization is etc." never provide the full
quote. The full quote has a significant degree of highly important nuance.

> "Programmers waste enormous amounts of time thinking about, or worrying
> about, the speed of noncritical parts of their programs, and these attempts
> at efficiency actually have a strong negative impact when debugging and
> maintenance are considered. We should forget about small efficiencies, say
> about 97% of the time: premature optimization is the root of all evil. Yet
> we should not pass up our opportunities in that critical 3%."

"Premature optimization is the root of all evil" is incomplete without the
surrounding context, and people unfortunately take it to mean "never worry
about how efficient your code is," which is not at all what Knuth intended.

~~~
anaphor
Basically it can be summed up as "don't optimize until you've identified the
critical paths".

~~~
defined
That’s oversimplifying it. Knuth was writing about micro-optimizations, like
writing a section of code in assembly language to shave off some milliseconds
from a critical section of code.

What many people today mean by optimization is a far cry from that.

Here’s a real-life example. Some code was super-slow and nobody could figure
out why. It took only a minute to see the code was passing a string (in C++)
by value, causing unnecessary copy construction and destruction (in a tight
loop, to boot).

Writing that code by passing a const reference in the first place is Good
Coding 101, but many people might consider that “premature optimization”.
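
The same class of mistake shows up outside C++, too. A hypothetical TypeScript
sketch of the equivalent bug - a needless deep copy in a tight loop - next to
the "Good Coding 101" version:

    interface Reading { samples: number[] }
    
    // Slow: a pointless deep copy of every record, on every iteration.
    function slowSum(readings: Reading[]): number {
      let total = 0;
      for (const r of readings) {
        const copy: Reading = JSON.parse(JSON.stringify(r)); // unnecessary copy
        total += copy.samples.reduce((a, b) => a + b, 0);
      }
      return total;
    }
    
    // Fine: nothing here mutates the record, so just read it in place.
    function fastSum(readings: Reading[]): number {
      let total = 0;
      for (const r of readings) {
        total += r.samples.reduce((a, b) => a + b, 0);
      }
      return total;
    }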

------
agentultra
I believe performance does matter and should factor into the design of a
system. There is an ISO guideline on software system requirements and
specifications that has a section for performance requirements. It does matter
even for banal, non-critical or non-life-threatening software systems.

Another often-overlooked requirement: environment. Does the human using your
software system have to pay attention to more important situations around
them? Is the environment they're using your software in noisy, stressful,
dangerous or dirty? Should you expect frequent wireless interference or
degraded wireless performance?

Reading the ISO specs is pretty boring, as is the SWEBOK guide, but it's
rather interesting to think about software in terms of the whole system and
not as an artifact unto itself.

~~~
nickpsecurity
You'll probably like Nancy Leveson's model and work if you haven't seen it:

[http://sunnyday.mit.edu/accidents/safetyscience-single.pdf](http://sunnyday.mit.edu/accidents/safetyscience-single.pdf)

[http://sunnyday.mit.edu/](http://sunnyday.mit.edu/)

------
whistlerbrk
As a former EMT and someone who built their own software to create run
reports, I'm a bit skeptical as to that being the only reason.

Paper can be edited, even after the point at which you've given a carbon copy
to a hospital, if you're friendly enough. This probably doesn't happen often,
or at all, but there is a psychological safety there.

People make small mistakes all the time on the ambulance in the rush to get
them to the ER. We live in an extremely litigious society.

~~~
ben509
That's a good insight. Never dealt with any medical stuff, but I dealt with
paperwork in the military as it was going through a (long painful) transition
to electronic documents back in the '00s.

And I don't think it's just that society is litigious, but that databases
actually (mostly?) work in terms of enforcing the rules you give them.

One anecdote of how things changed was that you used to be able to arrive on
post, and if you didn't like your orders, talk to a sergeant major of another
unit and quietly get your orders changed, which no doubt drove PERSCOM crazy.

When things were done with paper, there was a certain degree of flexibility in
the rules that gave people some amount of local autonomy. When those rules are
enforced by a computer, there's much less possibility of someone being able to
override the rules when it makes sense.

Normal human social interaction tends to create unwritten rules that are
simply more flexible and realistic than the rules we're willing or able to
write down.

~~~
abraae
We have a platform that amongst other things automates filling in a fairly
complex government form, including online signature.

We did a demo the other day to a large new customer. I learned that when they
filled in the existing paper form, they didn't just sign it; they also stamped
the signature box with a little rubber stamp that said "signed on behalf of the
CEO".

It wasn't immediately clear whether anyone handling the form was looking for
that stamp, nor what they did when they saw it. As a result, it's not clear to
us at this stage whether our pdf generation now needs a new feature to allow a
user-uploadable image file to be imprinted over the form.

That's the sort of thing you get with the joyous "flexibility" of paper.

------
rossdavidh
So, interesting and all, but it ignores the main reason for the advice to
build it first, then optimize for performance, which is that if you build
everything for performance from the beginning, you end up with a lot of code
that is optimizing for performance of something that isn't the bottleneck. In
other words, performance that doesn't show up in the user's experience,
because something else is the main delay.

Now, if all that meant was that the programmer has to do some more work, and
you aren't worried about paying more for the software, this may not matter,
but that is far from the only (or even most important) result of optimizing
everything for performance. Instead, what you get is code that is much longer,
and more complex, and therefore harder to update, and more likely to be buggy.

For example, one common thing you have to do to get performance, is to cache
values in multiple places, instead of looking them up every time. This can
result in big improvements to software performance if done in the places which
are the current limiting step, but now you have to make sure you invalidate
the cache correctly. In particular, if you don't, you get stale values in the
cache, which is to say false data.
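
Here's a minimal sketch of what I mean (TypeScript, hypothetical names). The
cache itself is the easy part; remembering to invalidate it on every write
path is what gets missed:

    class CachedLookup {
      private cache = new Map<string, string>();
      constructor(private load: (key: string) => string) {}
    
      get(key: string): string {
        let value = this.cache.get(key);
        if (value === undefined) {
          value = this.load(key);     // the expensive lookup, done once
          this.cache.set(key, value);
        }
        return value;
      }
    
      // Every write path must remember to call this, or readers see stale
      // values -- which is to say, false data.
      invalidate(key: string): void {
        this.cache.delete(key);
      }
    }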

If you make a system with caching all over the place, it will be very hard to
make changes correctly, which means sometimes they will get made incorrectly.
In mission-critical systems, this is even less acceptable than elsewhere.

It's not just "I don't feel like optimizing". More often, it's that the cost
of optimizing is not worth the benefit for this particular part of the code.
How do you know which spots in the code are worth it? You don't optimize, at
first, and then see which two or three spots in the system are rate-limiting.

~~~
TeMPOraL
> _Instead, what you get is code that is much longer, and more complex, and
> therefore harder to update, and more likely to be buggy._

I'm gonna repost a chart I made previously[0][1].

    
    
      Spectrum of performance:
      LO |---*-------*--------*------------*-------| HI
             ^       ^        ^            ^
             |       |        |            |_root of all evil if premature
             |       |        |_you should be here
             |       |_you can be here if you don't do stupid things
             |_you are here
    

Tricks like denormalizing your data model ("cache values in multiple places")
start somewhere around the "you should be here" point. But in any typical
program there's plenty of stuff to be done left of that point, and those
things don't make your code more complex or longer. They only require care and
willingness from the software vendor.

--

[0] -
[https://news.ycombinator.com/item?id=20389856](https://news.ycombinator.com/item?id=20389856)

[1] -
[https://news.ycombinator.com/item?id=20520605](https://news.ycombinator.com/item?id=20520605)

~~~
unrealhoang
Usually, it's more like this:

    
    
      LO |---*------------------*----*--*-| HI
             ^                  ^    ^  ^
             |                  |    |  |_root of all evil if premature
             |                  |    |_you should be here
             |                  |_you can be here if you don't do stupid things
             |_you are here

------
jancsika
Welcome to the world of soft realtime applications. Sounds like this one is
just so bad that a) the designers didn't realize that was what they were
designing and b) consequently it failed to meet any of its deadlines in the UI
domain.

One detail I've noticed is that even if you do a shitty job of hitting the
deadlines, users will often find workarounds. As long as a critical mass of
features respond consistently within the deadlines, users will fiddle around
to make the rest work-- even if you tell them directly that it isn't possible.

Say the dropdown menus here performed fast enough, but going to the "next
page" required a bunch of DOM mutation or something that grinds to a halt for
two seconds. I'd bet EMTs would have trained themselves to do branch
prediction-- hit the button to change pages before setting it down, hit it
again based on what the patient is communicating to the other EMT, and so
forth. I've seen users do weirder stuff.

The fact that they discarded it altogether tells you how bad the UI was.

------
ecpottinger
I used to use Amiga computers, and I presently use Haiku - I am not trying to
say those are the best OSes. But when I use Windows I am surprised at how slow
some functions are.

Worse, I can use two different programs in Windows and one will be dog-slow
compared to the other.

I blame the use of pre-written libraries that in turn call more libraries that
in turn call still more. There was a site on Windows bloatware; it showed, for
example, programs having multiple versions of the same library linked into the
code even though the program only ever used one.

Also the screen layout is run through a layout program/library that slows
things down. The layout system is great when you are designing the system, but
once you reach the point of a final and fixed design there are faster ways to
display it.

On the other hand, management is too often cheap, says "It works, don't
it?", and does not want to pay for the extra coding that would make it so much
faster.

~~~
gameswithgo
It isn't just management. Most software ecosystems (Rust, C, and C++ excluded)
discourage any thinking about or tinkering with performance. You see
evidence of this in questions on Stack Overflow, Reddit, and Twitter.
Young people or new programmers curious about the fastest way to do things are
always lectured about how this is not good to do. It is quite unfortunate
because building some experience for how to do things fast is very useful. It
isn't always correct to optimize the hell out of things, of course, but we
should encourage the curiosity behind it, so that more of us have the tools to
do it when it is correct to do so.

~~~
mark-r
I have often found that doing things efficiently isn't really harder than
doing them slowly. So why not aim for software that's reasonably performant from
the start? Sure you can micro-optimize to get every last nanosecond, but
that's not what I'm talking about.

------
anderspitman
My philosophical rule of thumb is that it is morally wrong for humans to ever
wait on computers, period. It's an unattainable standard but results in much
better software IMO.

------
Kaiyou
Always remember the Doherty threshold.

"Productivity soars when a computer and its users interact at a pace (<400ms)
that ensures that neither has to wait on the other."

[https://lawsofux.com/doherty-threshold](https://lawsofux.com/doherty-threshold)

------
ChrisSD
> Something like a quarter-second lag when you opened a dropdown or clicked a
> button

So the title would more accurately be "UI responsiveness matters"? I doubt
anyone would think this kind of lag doesn't matter. It sounds like the
software was designed (and tested) for better hardware than it is running on,
or at least different hardware. But that's just a guess.

~~~
ziddoap
Genuinely curious, what is the difference between performance and
responsiveness in this context, and why would it be better as
"responsiveness"?

~~~
hoorayimhelping
You can make a UI feel responsive even if the underlying system is not
performant.

Let's say you have a client app with a checkbox that toggles some boolean
variable that communicates with a server. If you're on a 3G connection in an
ambulance, it might take a few seconds to send that request to the server and
get a response back.

An unresponsive UI would sit there and make you wait for the request/response
to complete. You click the checkbox, then nothing happens, then suddenly, some
time later, the checkbox is checked and the page refreshes, or a state change
happens. It's jarring and weird. A responsive UI would update the state of the
app based on the user's actions even if the request hasn't completed yet.

Tightly related to this is the concept of optimistic UIs - the UI acts
optimistically and updates the state of the app under the assumption that
server-backed state changes will work. The iPhone's messages app sending
messages is the prototypical optimistic UI interaction. You send a message,
the blue / green speech bubble shows up on your messages app while the message
is in flight. If it succeeds, nothing changes and you're none the wiser. If it
fails, you get an option to delete the message or resend.
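
A rough sketch of the checkbox case (TypeScript; the endpoint and names are
made up): the box flips on screen immediately, we reconcile with the server in
the background, and we roll back if it fails.

    // Runs on the checkbox's 'change' event: the box has already flipped on
    // screen (that's the optimistic part); now reconcile with the server.
    async function onToggle(checkbox: HTMLInputElement): Promise<void> {
      const desired = checkbox.checked;
      try {
        const res = await fetch('/api/flag', {
          method: 'PUT',
          headers: { 'Content-Type': 'application/json' },
          body: JSON.stringify({ value: desired }),
        });
        if (!res.ok) throw new Error(`HTTP ${res.status}`);
      } catch {
        checkbox.checked = !desired;  // roll back and offer a retry, Messages-style
      }
    }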

~~~
wool_gather
Some (subtle) confirmation is nice, too, when the user is actively aware that
the process is fallible. The Messages example you chose has this: a small gray
"Delivered" tag is added below the message when the sending device hears back
from the server.

------
shay_ker
_Did that quarter-second lag kill anyone? Was there someone who wouldn’t have
died if the ePCR was just a little bit faster, fast enough to be usable? And
the people who built it: did they ask the same questions? Did they say
“premature optimization is bad” and not think about performance until it was
too late? Did they decide another feature for the client was more important
than making the existing features faster? Did they even think about
performance at all?_

This seems like a lot of imagining of scenarios that may or may not have
happened. Perhaps another way of looking at this is - did the company hired to
build this actually care whether EMTs used their software, or were they
looking to get a paycheck?

So many health tech firms fall in the latter bucket. Such is the reality of
the business - with huge enterprise sales funnels and hospital networks that
don't understand the value of UX & great product (but care a lot about their
bottom line), this result isn't unexpected.

~~~
JshWright
If someone is sick enough that a "quarter second" matters, there's zero chance
the tablet is out in the first place. In reality if a quarter second is enough
to kill someone, they're gonna die anyway...

~~~
bananocurrency
I feel the same about my EKG. What's a quarter second when it comes to
measuring a heart? Who cares about precision.

~~~
JshWright
A quarter of a second in measurement resolution of an EKG is huge. An extra
quarter of a second interpreting or documenting that EKG couldn't be any more
meaningless.

------
l0b0
This sounds like a clear case of consultant software. _The people building it
won't be dogfooding the result_ - at most they'll be clicking around the UI
on their powerful desktop machine and patting themselves on the back because
it responds in less than half a second. _The requirements will be vague,_
because getting performance requirements anywhere beyond stupidly vague
"performant on modern hardware" or some shit requires _real expertise_ in UI
design and will cost _real money_ to build, and UI designers won't be involved
in the details until the contract is signed.

"Premature optimization" in reality is a vacuous phrase, because premature
_anything_ is bad - it's right there in the word! (Insert joke here.) The
problem is that most developers (myself included in many cases) are just not
qualified to say when it is premature to optimize, _because the requirements
do not state anything meaningful about performance._

If you think this is overly pessimistic, I would encourage running pretty much
any Android app or opening any major website on a 2018 or older smartphone.
It's pretty obvious developers don't know how to build software for the
probably 50% or more of customers who don't buy a new top-of-the-line phone
every year.

As for performance guidelines, have a look at Jakob Nielsen's amazing
_evidence-backed_ UX guidelines such as Response Times: The 3 Important
Limits[0]. The ones which should stick are that 0.1s feels instantaneous
(probably not when using a scroll bar, but that's another matter) and 1s is
the limit for not interrupting the user's flow of thought. In other words, if
_anything_ your program does takes more than 0.1s to respond to user input on
the target hardware that should at least be acknowledged and prioritized
(maybe at the end of the backlog, but at least then it's a known issue).

[0] [https://www.nngroup.com/articles/response-times-3-important-limits/](https://www.nngroup.com/articles/response-times-3-important-limits/)
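
Acknowledging those overruns can be as cheap as wrapping your event handlers.
A minimal sketch (TypeScript; the helper and the names it wraps are
hypothetical):

    // Stand-ins for whatever your app already has.
    declare const saveButton: HTMLButtonElement;
    declare function onSave(): void;
    
    // Warn whenever a handler's synchronous work blows the 0.1s budget.
    function instrumented<A extends unknown[]>(
      name: string,
      handler: (...args: A) => void,
    ): (...args: A) => void {
      return (...args: A) => {
        const start = performance.now();
        handler(...args);
        const elapsed = performance.now() - start;
        if (elapsed > 100) {
          console.warn(`${name}: ${elapsed.toFixed(1)}ms (budget 100ms)`);
        }
      };
    }
    
    saveButton.addEventListener('click', instrumented('save-chart', onSave));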

------
hinkley
I have a theory on the dynamic being described, based on my experiences as a
software developer.

When you are juggling and prioritizing a list of tasks, they are occupying
short term memory. If you can perform the current task from muscle memory, you
don’t upset working memory. But as soon as you start having to think about it,
and worse once you start editorializing in your head, things start to drop
off.

The worst cases of this phenomenon occur when you finish a hard task and have
to ask yourself, “what was I doing? why was I doing this?”

So you want your tools to be unobtrusive. People who love hand tools know
this. Somehow we do not.

~~~
desc
Indeed.

"If I have to pay nearly as much or more attention to the use of this software
than it saves me, it is by definition worthless."

Intellisense in VS/Resharper went through a nice period a few years back where
it was smart enough to match what was meant, fast enough that it didn't
(usually) drop keypresses, and (crucially) stupid enough that typing the same
thing twice would get the same results. Sadly, all of the above have been lost
in the hail of features...

Context sensitivity is not always a good thing in a tool. Muscle memory wins.

------
zentiggr
Sounds like this PCR software should be structured around letting the operator
enter information in as freeform a way as possible, basically collecting
timestamped notes that are reviewed and formalized afterward.

The UI up front can be very fast capture, with an async layer to parse and
formalize constructs, and a third stage of prefilling the final form (if a
paper layout needs to be maintained, etc.) and allowing very flexible revision.

Last step would be archiving locally and then transmitting to whatever outside
system.
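
A rough sketch of the capture layer (TypeScript; the shapes and the shorthand
are hypothetical): stage one is a bare append, so there's nothing to lag, and
parsing into structured fields happens later.

    interface RawNote { at: number; text: string }
    
    const log: RawNote[] = [];
    
    // Stage 1: capture. Just an append -- nothing to navigate, nothing to lag.
    function capture(text: string): void {
      log.push({ at: Date.now(), text });
    }
    
    // Stage 2 (async, later): parse shorthand like "bp 120/80" into
    // structured fields for the review/prefill stages.
    function parseBloodPressure(note: RawNote) {
      const m = /bp (\d+)\/(\d+)/.exec(note.text);
      return m ? { systolic: Number(m[1]), diastolic: Number(m[2]) } : null;
    }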

~~~
smacktoward
It'd be even better if, say, your insurance card had your basic medical info
stored on it in some encrypted form, and then the EMTs could just scan the
card and have half the form populated with accurate information automatically.
Or if it just stored some kind of UID that the PCR system could use to pull
that information from the insurance company's database.

------
tibbon
I saw this myself in an ambulance in the Boston area. The PCR system they were
using was on some Toughbook-style computer, and I could tell that it was
lagging and slow to input basic stuff on. Apparently whoever had put it
together cared more about durability (important) than performance. Or maybe it
worked great when it was deployed, but the newer software updates weren't
built for such a slow machine. They used it, but it was like inputting data
on a slow ATM.

~~~
TeMPOraL
Or maybe they made some stupid decisions like making each update query
something over the Internet, which works fine when testing, but not when
you're on the streets, in close proximity to a lot of other cellular Internet
users.

------
elevation
I can't dispute that fast software is a desirable trait for users. I would
love to spec low latencies into the requirements documents for my GUI
projects. But from a business standpoint, UI latency requirements are a good
way to sabotage a project.

Slow solutions have a dramatic business advantage: using libraries, you can
shave months off your delivery schedule by adding milliseconds to your UI
delays. The months add up; so do the milliseconds. By using someone else's
crufty libraries, you can write a calculator app over the weekend. Libraries
will handle the double buffering for screen painting, unpacking fonts and
rendering them into bitmaps, how to reflow text onto a screen, etc. It might
take seconds for the app to load all its assets and display the first user
activity, but it was cheap to create, easy to iterate, and it gets the job
done. These are hallmarks of a valuable startup engineering effort.

But the development process for speedy applications is anathema to businesses
of any size. You have to throw away Electron and hire systems engineers with
knowledge of the entire platform. After some profiling, they may determine
that the required speed is only possible once the system event handler has
been replaced with a pared down routine, or when the framework's audio
interface is bypassed so the sound card can be accessed through low-latency
DMA mode. You have to repeat this for every platform you intend to support.
You have to throw away the wheel that everyone else is using and pay to
reinvent one that spins a little faster.

To paraphrase Joel Spolsky's comment on rewriting software: are you sure you
want to give that kind of a gift to your competitor? For most companies it
isn't worth it.

Are there exceptions? Yes: a motivated engineer like John Carmack may work
months to eliminate another 3ms frame latency out of a VR headset simply
because of personal passion. Google can amortize additional billions by
wringing 100ms out of page load times by funding a browser project so complex
that you also have to design a new build system for it. But if you're not
Carmack and you're not Google, you probably can't afford to reinvent the
wheel.

~~~
djmips
John Carmack doesn't work hard to eliminate 3ms of frame latency simply
because of personal passion. In VR applications, latency kills and will make
people sick. Those hard fought ms are needed. You only have 11 ms per frame at
90 Hz.

As someone who works in the performance arena myself, I feel like a lot of the
work I do is undoing the mistakes that come from the attitude that it's OK to
not care about performance and just do it the 'easy' way. It's pretty much a
form of technical debt that is somehow culturally acceptable. For years we've
had Moore's law covering everyone's asses, but perhaps there will be a
reckoning yet.

~~~
elevation
I don't mean to belittle performance work, but the value of Carmack's
successful optimizations in the VR industry doesn't translate to most
businesses.

Let's say we have a $1M/year business opportunity with a requirement that a
user be able to take 1MB photos on a phone and have them appear on a PC/laptop
which are both connected to an 802.11G access point and on the same internet-
connected network segment. The time requirement on this phone->pc transfer
will dramatically affect the software development effort in terms of time,
money, and skills.

With a requirement spec of 1 photo every 10 seconds, commodity software can
already accomplish this. With a cloud account from their smartphone vendor, a
user can monitor their photo album from their PC browser, and photos taken on
their phone will be sync'd to the cloud, then visible on the laptop. Assuming
both devices are on wifi with a typical cable modem connection, the transfer
of the photo from phone->cloud->PC-browser would occur within 10 seconds. No
software engineering required; it supports Mac/Windows/Linux and iOS/Android
out of the box.

Now let's spec a 1 second transfer time. Cloud syncing is out; we'll need the
phone and the PC to communicate directly over the wifi link; we'll have to
write a phone app that sends the photo file over a socket, and a PC app to
receive it and display it on screen. 1MB will take 500ms across a healthy
802.11G link, so we have 500ms leftover to allow the phone's camera app to
store the photo as a file, for our app to establish a TCP connection, for our
PC app to accept the incoming file, and display it in an ImageBox control. We
can still meet the 1 second spec on a poor wifi connection... Wait, how does
the phone know the address of the PC it's going to send the picture to? We'll
need to write our own discovery service to allow the phone to learn the IP of
the PC... Should we do layer 2, or cloud-assisted? Or maybe we'll have the
phone scan a QR code on the PC in order to link the phone to the PC... We've
only imposed a 1-frame-per-second requirement, but we're going to require a
software team familiar with PC app development, phone app development, layer 2
and layer 3 network protocols. You could hard code the IP addresses for a
quick demo, but you're looking at a small team coding in several languages for
several months just to support a single platform (say, Windows PC with Android
phone.) Additional platforms will require a repeated effort.
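
For flavor, the core of that 1-second path can be sketched in a few lines
(hypothetical Node/TypeScript; discovery and the receiving app are where the
real cost lives):

    import * as net from 'net';
    import * as fs from 'fs';
    
    const PC_HOST = '192.168.1.50'; // hard-coded, as in the quick-demo version
    const PC_PORT = 9000;
    
    // Stream a ~1MB photo straight to the PC over TCP; roughly 500ms of the
    // budget goes to the wire on a healthy 802.11G link.
    function sendPhoto(path: string): void {
      const socket = net.connect(PC_PORT, PC_HOST, () => {
        fs.createReadStream(path).pipe(socket);
      });
      socket.on('error', (err) => console.error('transfer failed:', err.message));
    }
    
    sendPhoto('latest-photo.jpg');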

But Carmack would never be happy with 1FPS. What if we spec a 40ms transfer
time (24FPS), such that the user could use the laptop/PC screen as a
hollywood-movie-grade viewfinder. Now we have to throw away the phone's camera
app and write our own that bypasses the save-to-file step and streams the
image data directly to the PC over UDP, perhaps RTP. Since the raw camera data
takes too long to send over the network, we may also need to visit image
compression techniques. Some techniques will be faster on certain phones, so
we'll need a heuristic compression algorithm selector to make sure we're
correctly optimizing for spending the least amount of time in the combination
of compression+transport. On the PC side we'll need an RTP client to receive
the data, but also a method to paint the frame buffer data directly into the
viewport (no more loading files into an ImageBox control). We may need to
explore the gains from painting the image as an OpenGL texture. At this point,
our engineering budget has exploded! We're talking about hiring telecom
engineers with experience tuning VoIP stacks, digital signal processing and
codecs, and video game engine experts to help with low level graphics in
OpenGL/DirectX. Perhaps we'd need to retain Carmack himself to advise.

If you have a business opportunity worth $1M/year that relies on moving photos
from your iPhone to your Macbook, spec'ing 10s is a great way to get your
business started leveraging existing technology. Spec'ing 1s might be
acceptable to give your app some polish once you're established, and if you're
not FAANG and not Carmack himself, spec'ing 40ms would be a great way to kill
your business regardless of size.

------
api
"Premature optimization is the root of all evil" is one of those programming
sayings that's very often misunderstood.

It _does not_ mean that performance doesn't matter or that you shouldn't think
about performance.

What it means is two things:

(1) It's usually a waste of time to tinker with small optimizations until
you've made things work and until you've profiled.

(2) Don't compromise your design or write unmaintainable code out of premature
concern over performance.

The second is the more important and subtle meaning. It means don't let
concern for performance hobble your thinking about a problem.

------
wawawiwa
Why does this post have so many points? How is that article useful to most
readers? I sometimes don't understand HN.

Performance matters... the article talks about a case that most of us aren't
confronted with.

Besides, it concludes that in spite of paper being slower, it's still preferred,
so how does that prove that "performance matters"?!

Should an article be upvoted just because it is about life and death and has a
catchy (pretentiously) laconic title?

~~~
Kaiyou
Paper is preferred because it has better performance than the software, which
is the thing that people should think about. Writing software that can
outperform paper shouldn't be hard to do, but apparently it is.

------
grumpy8
It's about trade-offs. You improve performance at the cost of something else
you're not working on. Without the context, it's hard to tell if that cost was
justified. E.g., maybe there were 20 critical bugs and the team made the call
that perf was good enough and it was better to stabilize the software. More
often than not, you can always improve performance, and drawing the line is
hard.

------
arendtio
My first tablet was kinda slow. At least slow enough that when it got too old,
I wondered if I even wanted to buy another tablet, because I rarely used the
first one.

Then I bought a new one, which was faster and more responsive and my tablet
usage completely changed ;-)

So I can personally testify that performance is a most relevant part of the
user experience.

------
wickerman
I don't think the word performance is key, but rather /user experience/.

------
yen223
Sometimes you need to take a step back and consider the bigger picture. Is it
really necessary to make an EMT fill out a 100-field form?

------
aflam
B

------
tabtab
Quote: _It wasn’t even that slow. Something like a quarter-second lag when you
opened a dropdown or clicked a button. But it made things so unpleasant that
nobody wanted to touch it._

I'm skeptical that's the reason they didn't use it unless it's a high-volume
application. Like everything in IT, it depends.

A high-volume app definitely needs attention to response and performance
because employees will be wasting tons of hours if it's not. That's why
primary applications should probably be desktop client-server applications,
where it's easier to control the UI. But that's typically only roughly 1/4 of
all apps used by an org.

For something used only occasionally, such as for customers with special
conditions, a somewhat sluggish web app is probably fine. If it's easy to make
it snappy, great, but not everything is, due to our annoying web (non)
standards[1]. If it takes 100 hours of programming to reduce 50 hours of UI
waits over say a 10 year period, then the org doesn't benefit. 10 years is a
good default rule of thumb duration for software lifetime. Your org may vary.
Remember to include program maintenance, not just initial coding.

In short, use the right tool for the job, and do some analysis math before
starting.

[1] My "favorite" rant is that web-standards suck for typical work-oriented
productivity applications, and need a big overhaul or supplement. Nobody wants
to solve this.

~~~
mcguire
I'm sorry; are you saying that electronic patient care reports are not a "high
volume application", that they should be "desktop client-server applications,"
or that the statements by the users about why they don't use the system should
be disregarded?

~~~
tabtab
I don't have the usage & programming labor stats to make a definitive
statement on that _particular_ application including the size of the org using
it. My main point was that "it depends" and should be subject to further
tradeoff and business analysis rather than "X is _always_ bad".

A quarter-second lag does not sound like a significant problem to me, and may
require a lot of IT funds to remove. Often such is only justified if it's a
high-use application. Would you agree that 100 hours of extra programming to
save 50 hours of data entry time is usually not worth it (all else being
equal)? If so, let's start there and slice into further factors.

Please don't give me bad scores for trying to be logical and rational, people!
Resources are limited, orgs have to spend with care. Sure, it's nice job
security for us IT people if orgs spend big bucks to make all apps snappy, but
from a business and/or accounting standpoint, it could be the wrong decision.
You view the world differently when _your_ money is on the line.

If you know a way to make all apps cheap, good, AND snappy at the same time,
let's here it! I'm all Vulcan ears.

Note that one can make web apps snappy, but there are often maintenance and/or
creation-time tradeoffs for doing so. A few well-run shops do it well, but
most shops are average-run.

~~~
tabtab
Correction: "let's hear it" instead of "let's here it". Modnays.

