
The Ethics of Web Performance - gmays
https://timkadlec.com/remembers/2019-01-09-the-ethics-of-performance/
======
guggle
The article seems to put the burden of performance on the developers... but
I've been in situations where, no matter what I did in favor of performance,
the efforts were always negated by the installation of third-party marketing
tools. Things like retargeting platforms, tracking scripts or even
recommendation systems. And let's not talk about the unreasonable client
expectations about features per page...

~~~
KaiserPro
Sorry, but no.

You can't just blame it on the marketing department.

Yes, it is a business decision, but you have the power to demonstrate that
slower pages lead to fewer clicks and hurt the bottom line.

If the FT's developers can manage to sell this, then so can you. We, as
developers, can't live in a bubble. We are part of the business and we need to
steer the business as we see fit.

~~~
nextlevelwizard
OK, you go to your boss and tell him that because of the marketing team's
scripts the app takes twice as long to load. Your boss, on the other hand,
says: "Who cares? This is how we make money". You try to explain that
lower-income people with slower/lower-end devices can't access the content.
The boss asks how much money we are losing because of this. You have no
answer. What do you do?

The almost-standard HN answer to moral programming questions is to stand by
your principles and resign immediately, but this usually comes from people who
haven't had to deal with paying bills or the risk of becoming homeless. And
even then, if you don't do it, someone else who needs the money will.

What would you, KaiserPro, do in this situation?

~~~
brlewis
> OK, you go to your boss and tell him that because of the marketing team's
> scripts the app takes twice as long to load. Your boss, on the other hand,
> says: "Who cares? This is how we make money".

What company do you work for that makes more money by making your app load
twice as slowly? I think it's reasonable to expect a developer to measure and
present the correlation between performance and revenue.
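
(For what it's worth, the measurement itself is a few lines once you log a
load time and a revenue figure per session. A minimal sketch with a made-up
log schema, using Python 3.10's statistics.correlation:)

    import statistics

    def load_time_revenue_correlation(sessions):
        """Pearson correlation between load time and per-session revenue.

        `sessions` is an iterable of (load_time_seconds, revenue) pairs -
        a hypothetical schema; pull the real fields from your analytics.
        """
        times = [load_time for load_time, _ in sessions]
        revenues = [revenue for _, revenue in sessions]
        return statistics.correlation(times, revenues)

    # Slower sessions earning less shows up as a clearly negative value.
    print(load_time_revenue_correlation([(1.2, 5.0), (3.8, 1.5), (7.5, 0.0)]))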

~~~
mbesto
> I think it's reasonable to expect a developer to measure and present the
> correlation between performance and revenue.

I'll bite. Do you really do this? If so, I'm seriously curious how you could
conclude straight-line causation (not just correlation) between an increase in
performance and an increase in revenue. Potentially for a large-scale site
like Twitter or Facebook, where poor performance means fewer eyeballs, but
even then it's just a correlation.

~~~
KaiserPro
Graphs.

I was there
[https://medium.com/ft-product-technology/a-faster-ft-com-10e...](https://medium.com/ft-product-technology/a-faster-ft-com-10e7c077dc1c)
when they did this.

It's easy to find graphs that support your conclusion.

------
bem94
I recently emailed the Guardian (the UK newspaper) to tell them that an
element of their website caused one of my CPU cores to spike to 100%
utilisation while it was in view. It happened on my laptop, my work desktop
and my phone. If you were a visitor, it was the graphic next to the podcast
banner that looked like a sound wave.

They've removed the element now and sent a very nice message saying they'd
investigate.

I really think/hope that the next "big thing" in software engineering will be
energy efficiency in some form or another.

~~~
YawningAngel
Oh praise be. That's been screwing with me for months and I couldn't find a
bug report page. Thank you anonymous Internet person!

------
chrisdone
I think the lumbering hogs we call publication websites these days are making
sure that you, as a reader, pay a cost to use their otherwise free site;
paying a subscription would be better, but no one will do that. People on
mobiles pay a tax that people on desktops don't, and people on bad connections
in poor areas pay a tax that people in rich fibre-wonderlands don't, and yet
energy and money are wasted on transferring garbage that nobody wants.

I seem to get dismissed whenever I suggest this, but these web sites could
instead try another approach: remove all the garbage, have a beautiful, clean
web site, and implement artificial rate limiting on all connections to it. If
you want a super fast experience, pay a subscription! What are the upsides?

* Less bandwidth and energy and mobile phone battery is wasted.

* The site will still load in the same time that people are normally used to anyway.

* Rich people and poor people get the same experience; you don't pay a poor people tax if you are poor. Equity!

* Mobile phone users don't pay a tax, either.

* Meanwhile, rich people have disposable income; they should be spending money on the magazines and newspapers that people are producing anyway, to show their support, but they don't, so here's an incentive. They also indirectly support the poor people with news/media - those who can't afford to pay a subscription (or wouldn't benefit anyway).

Isn't anyone trying this model?
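
(The mechanical part of this is trivial, for what it's worth. A minimal sketch
of artificial rate limiting in Python; the tier check and the numbers are made
up:)

    import time

    CHUNK_BYTES = 1400        # roughly one TCP segment of payload
    FREE_TIER_BPS = 50_000    # arbitrary free-tier cap: ~50 kB/s

    def send_throttled(write, payload: bytes, is_subscriber: bool) -> None:
        """Write `payload` through `write` (any bytes-accepting callable,
        e.g. a socket's sendall), pacing free-tier users to FREE_TIER_BPS."""
        if is_subscriber:
            write(payload)        # subscribers get the full-speed path
            return
        for i in range(0, len(payload), CHUNK_BYTES):
            write(payload[i:i + CHUNK_BYTES])
            time.sleep(CHUNK_BYTES / FREE_TIER_BPS)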

~~~
leetcrew
If you make the website simple enough that it loads quickly on a bad
connection, and it loads the same content on a fast connection, why would
anyone pay extra for it to be even faster?

~~~
dodobirdlord
I think the idea suggested is that the website _could_ load quickly but
doesn't, instead artificially loading very slowly regardless of connection
speed.

------
pflenker
I know it's comparing apples and oranges, but: "War and Peace" by Tolstoy is
1.9 MB. A Twitter profile page is 2.5 MB, and that is already optimized (posts
below the fold are not loaded).

Websites are bloated - so I am glad the ethical aspect is coming more and more
into focus.

~~~
okaleniuk
I have a rule of keeping every page on wordsandbuttons.online under 64 KB.
And no dependencies, so, apart from occasional pictures from Wikimedia, there
are no other hidden costs for the visitor.

The number is of course arbitrary, but surprisingly it's usually quite enough
for a short tutorial or an interactive explanation of something. And I don't
even compress the code or the data. Every page source is human-readable.

So it is possible to have a leaner Web; it just requires a little bit of
effort.
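
(A budget like that is also easy to enforce mechanically before publishing. A
rough sketch, assuming pages live as .html files under a local directory; the
"site/" path is made up:)

    import pathlib
    import sys

    BUDGET = 64 * 1024  # 64 KB per page - arbitrary, but it forces discipline

    def pages_over_budget(root: str) -> int:
        over = 0
        for page in pathlib.Path(root).rglob("*.html"):
            size = page.stat().st_size
            if size > BUDGET:
                print(f"{page}: {size} bytes exceeds the {BUDGET}-byte budget")
                over += 1
        return over

    if __name__ == "__main__":
        sys.exit(1 if pages_over_budget("site/") else 0)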

~~~
Nyandalized
With the help of transport compression like gzip, the total size can still be
reduced by almost the same amount even if you don't minify.
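
(Easy to check with the standard library; the HTML snippet here is made up,
but the pattern holds for most pages:)

    import gzip

    readable = b'<div class="note">\n    <p>Hello, reader!</p>\n</div>\n' * 200
    minified = b'<div class="note"><p>Hello, reader!</p></div>' * 200

    # The raw sizes differ a lot; the gzipped sizes land much closer together.
    print(len(readable), len(minified))
    print(len(gzip.compress(readable)), len(gzip.compress(minified)))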

~~~
vardump
I once wrote (maybe 15-20 years ago?) an HTML output processor that tried to
make the markup more compressible while still producing the exact same
rendered output. It did things like removing comments, transforming all tag
names to lower case, sorting tag attributes and canonicalizing values, and
collapsing whitespace (including line feeds).

And some more tricks I've forgotten (some DOM tree tricks, I think), mainly to
introduce more repeated strings for LZ and a more unbalanced symbol
distribution (= fewer output bits) for Huffman. In other words, things that
help gzip compress even further.

Output was really small: most pages went from gzipped sizes of 10-15 kB down
to 2-5 kB without graphics.

The pages loaded _fast_, pretty much instantly, because they could fit in the
TCP initial window, avoiding extra round trips. The browser sent its request
and the server sent all the HTML in the initial window even before the first
ACK arrived! I might have tweaked the initial window to 10 packets or
something (= enough for 14 kB or so); I don't remember these TCP details by
heart anymore.

I wonder if anyone is building this kind of HTML/CSS compressibility optimizer
anymore, other than JavaScript minifiers.
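
(A toy reconstruction of the idea in Python, using only the standard library -
not the original tool, just an illustration of the canonicalization step; the
example page is made up:)

    import gzip
    from html.parser import HTMLParser

    class Canonicalizer(HTMLParser):
        """Rewrite HTML so gzip sees more repeated strings: lower-cased tags
        (html.parser lower-cases them for us), sorted attributes, collapsed
        whitespace, comments dropped."""

        def __init__(self):
            super().__init__()
            self.out = []

        def handle_starttag(self, tag, attrs):
            attr_str = "".join(f' {k}="{v or ""}"' for k, v in sorted(attrs))
            self.out.append(f"<{tag}{attr_str}>")

        def handle_endtag(self, tag):
            self.out.append(f"</{tag}>")

        def handle_data(self, data):
            self.out.append(" ".join(data.split()))  # collapse whitespace runs

        # handle_comment is deliberately not overridden: comments just vanish.

    def canonicalize(html: str) -> bytes:
        parser = Canonicalizer()
        parser.feed(html)
        return "".join(parser.out).encode()

    page = '<DIV   Class="x" ID="a">  Hello <!-- noise -->  world  </DIV>' * 50
    print(len(gzip.compress(page.encode())), len(gzip.compress(canonicalize(page))))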

~~~
barryvan
They are! Around five years ago I wrote a CSS minifier (creatively called
CSSMin, available on GitHub, and still in use at the company I work for) which
rewrote the CSS to optimise gzip compression. Although it never really took
off, I think that some of the lessons from it have been rolled into some of
the more modern CSS optimisation tools.

~~~
vardump
It's important to understand that minifying does not necessarily produce the
most compressible result. You want to give LZ as many repeated strings as
possible, while using as few distinct ASCII characters as possible, with as
unbalanced a frequency distribution as possible.

~~~
hinkley
I wrote (well, expanded) a similar tool for compressing Java class files. I
had a theory that suffix sorting would work slightly better because of the
separators between fields, and it turned out to be worth another 1% of final
size versus prefix sorting.

~~~
vbezhenar
I've found a cheap trick to compress Java software: extract every .jar file
(those are zip archives) and compress the whole thing with a proper archiver
(e.g. 7-Zip). One example from my current project: original jar files, 18 MB;
expanded jar files, 37 MB; compressed with WinRAR, 10 MB.

And that's just a little project. For big projects there can be hundreds of
megabytes of dependencies. Nobody really cares about that...
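
(The trick scripts easily. A sketch using only Python's standard library, with
LZMA standing in for 7-Zip/WinRAR; "lib/" is a made-up path, and note it
destroys the original jars:)

    import pathlib
    import tarfile
    import zipfile

    def repack(src: str, dest: str = "deps.tar.xz") -> None:
        """Expand every .jar (they are just zip archives), then compress the
        whole tree as one solid LZMA tarball - one big compression window
        beats many small, independently-deflated archives."""
        root = pathlib.Path(src)
        for jar in sorted(root.rglob("*.jar")):
            with zipfile.ZipFile(jar) as zf:
                zf.extractall(jar.with_suffix(".unpacked"))
            jar.unlink()  # remove the original, already-deflated archive
        with tarfile.open(dest, "w:xz") as tar:
            tar.add(root, arcname=root.name)

    repack("lib/")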

~~~
Cthulhu_
It's a tradeoff; in a lot of cases the size of a .jar doesn't really matter,
because it ends up on big web containers.

It does matter for e.g. Android apps, though. But at the same time, the size
of the eventual .jar is something that could be optimized by Google / the
Android store as well, using what you just described for starters.

I know Apple's App Store will optimize an app and its resources for the device
that downloads it. As a developer you have to provide all image resources in
three sizes / pixel densities for their classes of devices. They also support
modular apps now, which download (and offload) resources on demand (e.g. have
people get past level 1 of a game before downloading level 2 and beyond).

~~~
hinkley
It's true, but this was brought up as an anecdote/parallel.

Attributes in HTML have no fixed order, and neither do constants in a class
file. There are multiple ways to reorder them that help or hinder DEFLATE.

And also I was compressing the hell out of JAR files because they were going
onto an embedded device, so 2k actually meant I could squeeze a few more
features in.

------
lewiscollard
I think this is a very thoughtful article. As web devs we are much more likely
to be developing and testing on beefy machines and beefy Internet connections
than the general population is. Let's remember that we're building this stuff
for other people, not just for us. :)

I'll take one issue with one thing in the article (emphasis added):

> The cost of that data itself can be a barrier, making the web prohibitively
> expensive to use for many without the assistance of some sort of proxy
> technology to reduce the data size for them— _an increasingly difficult task
> in a web that has moved to HTTPS everywhere_.

But: HTTP/2! In practice it's _only_ available over HTTPS, it gives really
significant performance improvements, and it's supported by Chrome on those
low-end Android devices the article mentions (and by basically every browser
other than IE11 on Windows < 10 and Opera Mini).

~~~
pjc50
The "proxy technology" sounds more like Opera: it's not simply compressing the
existing stream, it does things like aggressively recompress your images at
lower quality or even size. HTTP/2 will deliver your 2MB hero image using 2MB
of data. A compressing proxy will deliver a slightly uglier 100kb version.

Oh, and save a lot of data by never downloading the ads.
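
(The image half of that is a few lines with Pillow, assuming it's installed; a
sketch of the kind of recompression such a proxy applies, with made-up
quality and size numbers:)

    import io
    from PIL import Image  # pip install Pillow

    def recompress(image_bytes: bytes, quality: int = 30, max_width: int = 480) -> bytes:
        """Downscale and re-encode an image the way a compressing proxy might."""
        img = Image.open(io.BytesIO(image_bytes))
        if img.width > max_width:
            scale = max_width / img.width
            img = img.resize((max_width, round(img.height * scale)))
        out = io.BytesIO()
        img.convert("RGB").save(out, "JPEG", quality=quality)
        return out.getvalue()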

~~~
Crinus
I remember many years ago, when I had a 3G internet connection for my laptop
(which sometimes even had to work over 2G networks), the service would
automatically recompress images to lower bandwidth usage and speed up load
times. To enable the connection I had to use a custom tool (essentially a
fancy terminal - AFAIK it communicated with the 3G dongle using AT commands)
which also had an option to adjust the (ISP-side) compression: I could disable
it, leave it at the default, or make everything look like garbage but load
fast...ish :-P

------
Nyandalized
Anything that takes 19 seconds to become interactive on a flagship phone
should be classified as faulty. It's not just unethical towards people, but
towards the environment. It's badly written code that is basically turning
your CPU into a space heater.

~~~
ajsnigrutin
Not just that. Anything that takes 19 seconds will probably be refreshed after
10 seconds and closed after 15.

~~~
cameronbrown
That's being incredibly generous too.

Google: "53% of mobile users abandon sites that take over 3 seconds to load"

[https://www.thinkwithgoogle.com/marketing-resources/data-mea...](https://www.thinkwithgoogle.com/marketing-resources/data-measurement/mobile-page-speed-new-industry-benchmarks/)

------
pacaro
Similar logic applies server-side too. If your backend is inefficient, then
you may well be consuming orders of magnitude more power per query. While I
take these numbers with more than a pinch of salt, I've seen comparisons
suggesting that a backend written in Ruby/Python/PHP may be over 10x less
efficient than one written in Java/C++.

~~~
sieabahlpark
When you can be just as productive a web developer in C++, let me know.

~~~
okaleniuk
Let's skip this and go right to web development in assembly:

[https://board.asm32.info/](https://board.asm32.info/)

The author claims that writing in assembly is only half as productive as
writing in high-level languages. But the performance gains are overwhelming,
so it might very well pay back the effort.

~~~
mr__y
Assembly? That's for amateurs, we should move right to FPGAs.

~~~
vardump
To cut this short (own silicon, sand, etc.), we should just create a universe
first.

FPGAs are fun, though. Like saving ~12 microseconds of latency (assuming
gigabit and "standard" frame sizes) by sending Ethernet frames (which carry a
CRC over the frame data!) _before_ you even have all of the data to send, and
then _modifying the data at the end of the frame_ to match whatever CRC we
sent ~12,000 bit times (= 12 microseconds) earlier.

~~~
mr__y
That seems extremely interesting to me! Would you care to share any details
about it, or is it covered by an NDA? I'm guessing it's something HFT-related?

~~~
vardump
This was just for fun (tm).

I know HFT guys pull off tricks like these, but no, this is neither difficult
to pull off on an FPGA nor covered by any NDA. It's easy if you send UDP with
checksums off, or raw Ethernet frames. The receiving party will of course need
to ignore the last 4 bytes, which are only there to make the CRC computation
come out correct.

It would be interesting if anyone managed to do this with TCP - if it's even
possible to get both the TCP checksum and the Ethernet frame FCS to match in
real time, and ideally somehow even mask the patch data from the recipient.
Probably not possible, but... who knows.
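
(The CRC-patching half of this can be demonstrated in plain software.
Ethernet's FCS is a CRC-32, and the 4 trailing "ignore these" bytes can be
computed with the standard table-walking trick: the table's high bytes are
unique, so the CRC can be run backwards. A sketch in Python - not the FPGA
code, just the same math:)

    import zlib

    # Standard reflected CRC-32 table (same polynomial as Ethernet's FCS).
    TABLE = []
    for n in range(256):
        c = n
        for _ in range(8):
            c = (c >> 1) ^ 0xEDB88320 if c & 1 else c >> 1
        TABLE.append(c)

    def forge_tail(data: bytes, target: int) -> bytes:
        """Return 4 bytes `tail` such that crc32(data + tail) == target."""
        # Backward pass: recover the four table indices from the target
        # register alone (possible because TABLE's high bytes are unique).
        reg = target ^ 0xFFFFFFFF
        ks = []
        for _ in range(4):
            k = next(i for i in range(256) if TABLE[i] >> 24 == reg >> 24)
            ks.append(k)
            reg = ((reg ^ TABLE[k]) << 8) & 0xFFFFFFFF
        # Forward pass: turn those indices into bytes, starting from the
        # CRC register of the data already sent.
        s = zlib.crc32(data) ^ 0xFFFFFFFF
        tail = bytearray()
        for k in reversed(ks):
            tail.append((s ^ k) & 0xFF)
            s = (s >> 8) ^ TABLE[k]
        return bytes(tail)

    frame = b"payload that was still arriving when the CRC was committed"
    tail = forge_tail(frame, 0xDEADBEEF)
    assert zlib.crc32(frame + tail) == 0xDEADBEEF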

------
okaleniuk
I'm really glad this is becoming a part of public discourse.

Of course poor performance is an ethical and an economic issue. As for the
latter, we largely ignored it in the PC era (as a software guy, you don't pay
for the hardware you're wasting, so why should you care?) and now it's
starting to be a concern with the cloud.

------
commandlinefan
Thank God _somebody_ is talking about this instead of shutting down every
suggestion of speeding things up with (their misunderstanding of) Knuth's
famous "premature optimization is the root of all evil" quote.

------
KaiserPro
I think positioning this as an ethics issue will not have the impact it
should.

We should really be making the argument on a capitalistic basis. (Stay with me
here.)

I was at the FT when we were redesigning the site. The page was lighter than
the Daily Mail's (but then most things are) but not as fast as Sky News (it
loaded within a second, with pictures, on a slow desktop).

The marketing team really wanted more tracking, and the advertisers were also
insisting on inserting _another_ three tracking systems.

So the developers (and product owners) pushed back. The writeup is here:
[https://medium.com/ft-product-technology/a-faster-ft-com-10e...](https://medium.com/ft-product-technology/a-faster-ft-com-10e7c077dc1c)

There was a clear correlation between load time and dwell time, which led to
the conclusion that the tracking stuff cost the business more money than the
extra info brought in.

The takeaway from this is that whenever you are making a case for business
change, you have to argue that _your_ change will make the business better,
using the metrics that the business understands.

The skill is in making your moral choice look and smell like a
money-making/target-hitting opportunity.

------
dangerface
> 17.6 million kWh of energy to use the web every single day.

17 million sounds like a lot, but it's nothing.

In Ireland there are 2,878 wind turbines producing 30% of our electricity -
47 million kWh of energy every single day, more than enough to power the
world's internet even if you use the larger numbers in this article.

This "wasted energy" is a non-issue.

------
miki123211
Performance is part of accessibility. I hadn't realized this earlier, but now
it's clear to me. I think accessibility is much more than what we usually take
it to be: it's about allowing as many people as possible to access our
services. This, in my opinion, is a moral issue, and I think not doing it is
just plain unethical. Making a website that's not WCAG-compliant, i.e. doesn't
work well for disabled people, or one that is not available in specific
countries or for specific age groups, is bad - but making a website that
wastes resources is bad too. If you're doing one kind of accessibility for
moral reasons, you should be doing the other kinds too.

------
donohoe
The state of web page performance is pretty bad and could be so much better.

In 275 tests of 75 news articles, the average page was 3.6 MB, made 345
requests, and took 46 seconds to load (on a '3G' connection using
WebPageTest.org).

More info and data: [https://webperf.xyz/](https://webperf.xyz/)

Some publishers can load in seconds (with advertising etc.), so there is
little excuse. We know how to fix this.

~~~
pradn
Investopedia, the fastest site profiled at webperf.xyz, still places at least
4 banner ads. So it's possible to support yourself with web advertising and
still be fast.

------
anilgulecha
The cost numbers are suspect. The paper states:

> Our major finding is that the Internet uses an average of about 5 kWh to
> support the utilization of every GB of data, which equates to about $0.51 of
> energy costs.

Which I read as referring to what's on storage - a GB of data in a datacenter.
Somehow the article runs with this as a GB downloaded by a user, which is so
bad it's hilarious.

------
waylandsmithers
I get it and I agree, but my assumption has always been that we're all trying
to write the most efficient code anyway, right? (Maybe with a few exceptions,
like intentionally slowing down auth to prevent brute-forcing of passwords.)
But this article is saying that if I don't, I'm not just a bad programmer, I'm
a bad _person_ too. Hm.

~~~
tkadlec
Hopefully that's not the way it comes off! The entire last section of the
article is my attempt to make it clear that this _doesn't_ happen because
we're bad people.

> So clearly, folks who have built a heavy site are bad, unethical people,
> right?

> Here’s the thing. I have never in my career met a single person who set out
> to make a site perform poorly. Not once.

> People want to do good work. But a lot of folks are in situations where
> that’s very difficult.

> The business models that support much of the content on the web don’t favor
> better performance. Nor does the culture of many organizations who end up
> prioritizing the next feature over improving things like performance or
> accessibility or security.

------
NohatCoder
Overall a good article, but please note that the 5 kWh/GB figure is plainly
wrong. The source attributes all power consumption of all connected devices to
data transfer, and it simply makes up the base figures.

The actual marginal power cost of data transfer is probably around 1/1000 of
the cited figure. That still doesn't mean wasting data is a good idea.

------
wazoox
_Or, for our specific purposes, why would I need an expensive device with
higher-powered CPU if the sites and applications run well on a lower-powered
device?_

Precisely that. We need more human work (optimizing code) and less brainless
energy spending, i.e. more jobs and less consumption of non-renewable
resources.

------
jmull
It seems weird to me to belabor this point.

The arguments make sense, but these are little teacup arguments when we
already have multiple barrel-sized arguments in favor of good performance. (I
mean generally -- there could be specific situations with different dynamics
-- but we're talking generally here.)

------
tempodox
Putting the words “Ethics” and “Web” in the same sentence would already make
for a perfect April Fools' joke, but also adding “Performance” is really
overdoing it. If I had what the author smoked for breakfast, I might actually
die laughing.

------
thethirdone
> When you stop to consider all the implications of poor performance, it’s
> hard not to come to the conclusion that poor performance is an ethical
> issue.

I don't think it's clear-cut, and the article does not make a good argument
that "poor performance is an ethical issue".

The two supporting points are roughly:

1\. Poor performance makes sites unusable for people without good CPUs /
internet connections.

2\. Poor performance wastes energy and the lifespan of devices.

Point #2 is very weak, because the alternative to slow sites is spending time
optimizing them, which may waste human time. It's not obvious what the optimal
ratio is to minimize total waste.

In order for #1 to be an ethical issue, I would require it to only or
disproportionately affect those without high-speed devices. However, web
performance seems to scale decently, so improving a site's performance on
high-end machines by 2x probably also improves it on low-end machines by ~2x.

~~~
TeMPOraL
Point #2 isn't weak, because it scales with the number of users. Slow sites
waste not only energy and money (through electricity bills and device wear) ×
the number of users, but also users' time, again × the number of users. If you
want to count human time, then that extra second or three that thousands of
your users have to spend waiting for your site to load easily adds up to
justify getting an engineer to muck around your site with a profiler for a
couple of hours to identify hot spots, and then more hours (over the next
days) to fix them.
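
(Back-of-the-envelope, with made-up but plausible numbers:)

    users_per_day = 10_000
    seconds_saved_per_load = 2
    days = 30
    user_hours = users_per_day * seconds_saved_per_load * days / 3600
    print(user_hours)  # ~167 hours of aggregate waiting saved per month

That dwarfs the day or two of engineering time the profiling work costs.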

~~~
thethirdone
The argument for users' time is a good point, but I don't think that's what
the author intended. My point was just that it was not sufficiently argued to
be obvious.

However, even that does not make the case that web performance is an ethical
issue. It would only do so when the number of users is much greater than the
amount of effort it takes to make a site.

If I make a site that I expect 1000 people to look at, then the performance of
that site isn't really significant. And not an ethical issue.

~~~
TeMPOraL
There's a categorical imperative aspect to this: _a_ site with 1000 daily
visitors (not necessarily unique) is not big enough to make its performance a
significant issue. But if _all_ such sites think like this, then it suddenly
becomes an issue.

------
kmlx
disagree with the premise of the article.

performance is a technical issue, not a moral or ethical one.

the experience that results from poor performance might be classified as a
moral/ethical issue (emphasis on might - a lot of apps use 100% of the cpu but
your experience isn't affected, so it doesn't matter; in other cases your
experience is affected, so it matters), but poor performance by itself, i
think, is not.

i might be wrong on this, but i didn't find the article compelling enough to
change my mind. any other opinions that might change my mind?

~~~
KaiserPro
The ethics come from exclusion.

If a reasonable device (i.e. cheap & modern) is unable to load said website,
then its owner is stopped from using that site. If you are "poor", to use a
blunt term, it might be that your phone is your only way of interacting with
said service (libraries and other places are not an option, due to the cost of
getting there or lack of time).

You are effectively saying: you must be this rich to use our stuff.

Now, if you're Bentley, Rolex or similar, that's basically your whole way of
life. But if you're a utility with a monopoly, then it's morally dubious -
especially as things like paying bills over the phone or by post incur an
extra cost.

~~~
kmlx
my argument is that performance doesn't matter if user experience isn't
affected.

if it is, then sure, performance becomes a moral issue.

but if there is no effect, then poor performance simply becomes a technical
issue.

~~~
kdmccormick
Sure, but the entire article is about how experience is widely being affected
by poor performance.

I'm not sure what point you're trying to make outside of pedantics.

~~~
kmlx
poor performance is not a prerequisite of a bad experience. in reality it's
the combination of UI & UX that contributes the most to bad experiences, not
poor performance.

so don't focus on poor performance where there is no effect. and even where
there is a marginal effect, UI & UX matter a lot more and can even negate the
poor performance aspect.

so my point: UI & UX > performance, almost every single time.

all of this is based on various analyses i have done for a whole slew of
clients, dating back 10 years.

------
baalimago
This is not an ethical issue, this is a capitalistic opportunity. The company
that starts writing efficient, clean and fast websites will gain more traffic
= more money.

There's nothing more frustrating than waiting for a website to load, only to
have the button you know you want to press suddenly get pushed down 5 cm by
some asynchronously loaded DOM element.

------
Nyandalized
There are lots of other problems piled on web development that are strictly
not its fault. The aging network infrastructure sometimes hasn't seen updates
in decades, despite billions and billions in tax money poured into it.

You really should be able to get more than 3 Mbit/s in 2019. Pictures should
be allowed to be more than a handful of kilobytes in size.

Data caps are a ginormous money grab with no basis in technology. Abolishing
corrupt corporate brotherhoods would do much more good than adding yet another
band-aid compression/surveillance point to the mix.

~~~
vbezhenar
You can't easily improve latency. My latency to the EU is 100 ms, my latency
to the US is 250 ms, so a request-answer is 500 ms. Make two request-answer
cycles and that's already 1 second. And TCP already has some cycles of its
own. Yes, I have 100 megabits (which is not easy to utilize, because of the
TCP algorithm), but that does not help. And it's not only about size.

~~~
def_true_false
The number reported by tools like ping is round trip time -- it already
includes the return trip.

