
Front-end performance for web designers and front-end developers - peterkchen
http://csswizardry.com/2013/01/front-end-performance-for-web-designers-and-front-end-developers/
======
Silhouette
There is some decent material here, but contrary to all the posts saying
"great article", I would like to sound a note of caution. There are also quite
a few half-truths, implicit assumptions and significant omissions in this
article, and I would advise anyone who isn't already familiar with the field
to seek out other material that is more comprehensive as well.

I hate it when people criticise without giving specifics, so here are some of
the more obvious examples:

- The article mentions image sprites, complete with the trendy but
semantically dubious hijacking of the <i> element for icons, but does not
discuss data: URLs (see the sketch after this list).

- The article talks about the performance implications of using subdomains,
but doesn't mention anything about cookies.

- The article makes many assumptions about how assets are loaded that may be
true in the major browsers right now but tend to change very fast, and even
then it doesn't mention that there are actual standards for how browsers are
supposed to behave in this respect (which, in turn, not all browsers follow).

- The article promotes techniques that will achieve lower-quality results,
such as serving low-res assets to high-res devices and using CSS
approximations of effects to avoid using image files at all, without
considering techniques for serving some but not all assets depending on the
nature of the requesting device (again, see the sketch after this list).
Crucially, there is also no hard data at all about what the relative costs
actually are, and therefore whether these implied optimisations actually give
worthwhile savings.
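
To make the first and last points concrete, here is a rough sketch of the
kind of alternatives I mean; the selectors, file names and the truncated
base64 payload are placeholders, not anything taken from the article:

    /* Inline a small icon as a data: URL instead of positioning it out of a
       sprite sheet (the base64 payload is truncated here as a placeholder). */
    .icon-search {
      background-image: url("data:image/png;base64,iVBORw0KGgo...");
      background-repeat: no-repeat;
    }

    /* Send the heavier 2x asset only to devices that can actually use it,
       rather than always serving low-res or always serving high-res. */
    .logo {
      background-image: url("logo.png");
      background-size: 120px 40px;
    }

    @media (-webkit-min-device-pixel-ratio: 2), (min-resolution: 192dpi) {
      .logo {
        background-image: url("logo@2x.png");
      }
    }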

In short, this article is a decent thought-provoker if you're new to front-end
development and aren't necessarily aware of these kinds of issues, but it's
not comprehensive (and there are other resources that _are_) and it does
advocate some questionable practices.

------
seriocomic
Great article; however, I feel this important information was made more
appetising by the folks who made <http://browserdiet.com/>.

~~~
wqfeng
This site looks great! I'd like to submit it again so I can bookmark it in my
HN account.

------
leeoniya
i think most, if not all, of this advice (and more) can be gathered just by
running your page through Google's PageSpeed Insights (or their Chrome
plugin). not only that, but it comes with the huge bonus of giving you exact
direction for your specific case. less reading, same results.
<https://developers.google.com/speed/pagespeed/insights>

~~~
sreyaNotfilc
Thanks leeoniya!

Just tried out my site on Insights. Wow! I scored horribly, lol. This is good
news though, and I will learn a lot from this.

One of my favorite quotes comes from my favorite pitcher, Greg Maddux (I'm
paraphrasing, of course): "You learn more when you have a game where things go
wrong as opposed to a game where things go right" (something like that).

------
just2n
Plenty of good advice here. Not to disagree with industry-standard methods of
getting around browser connection limits by parallelizing requests over many
domain names, but this is something I've long felt uneasy about. It feels like
one of the biggest, ugliest hacks that exist in basic website performance
optimization techniques.

SPDY and now HTTP/2 solve this problem with multiplexing and pipelining, but
it seems like an obvious thing to have lumped in with chunked encoding and
persistent connections. We have the benefit of hindsight now, but it still
seems like a clear oversight.

This mistake and its workarounds have led to websites being much more latent
and to DNS lookups becoming more apparent in page-loading performance (forcing
a meta hack around that problem, too), and now that connection speed is rarely
the limiting factor, this is one of the most glaring perf failures in the
design of the spec. When I look at how we get around browser connection
limits, and at the fact that HTTP is inherently serial in its requests, I
can't help but feel overwhelmed with disgust at how massive a hack this is.
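
For anyone who hasn't run into it, here's a minimal sketch of the pattern I'm
describing; the shard hostnames are made up, and the dns-prefetch hints are
the usual way of papering over the extra lookups:

    <!-- Spread static assets over several hostnames so the browser opens more
         parallel connections; every extra hostname also costs a DNS lookup,
         hence the dns-prefetch hints. -->
    <link rel="dns-prefetch" href="//static1.example.com">
    <link rel="dns-prefetch" href="//static2.example.com">

    <link rel="stylesheet" href="//static1.example.com/css/site.css">
    <script src="//static2.example.com/js/app.js"></script>
    <img src="//static1.example.com/img/hero.jpg" alt="">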

~~~
bradleyjg
The browser connection limits per domain weren't baked into any standards, so
they're something that has been dramatically improved over time. Most browsers
maxed out at two connections five years ago; now six is the most common limit.

Edit: turns out the limit of two connections is in the RFC for HTTP 1.1, but
browsers starting with IE 8 have ignored that part of the spec.

~~~
eurleif
The two-connection limit is a guideline in the HTTP standard
([http://www.w3.org/Protocols/rfc2616/rfc2616-sec8.html#sec8.1...](http://www.w3.org/Protocols/rfc2616/rfc2616-sec8.html#sec8.1.4)):

>Clients that use persistent connections SHOULD limit the number of
simultaneous connections that they maintain to a given server. A single-user
client SHOULD NOT maintain more than 2 connections with any server or proxy.

------
nasmorn
Along those lines, I found that CDNs (I only tried CloudFront) are actually
slower than my dedicated hardware for regions reasonably close to my server. I
anticipated this for cold edge locations, but unfortunately it is also true
for cached files. Thus, unless you have a global audience and/or performance
issues because of traffic spikes, CDNs might not be worth the trouble. Also,
the download speed from CloudFront was only half that from S3, but latency was
much better.

edit: specify CDN used

------
kcthota
Nice article. Usually we profile all the pages using YSlow and strive to score
>90 on all of them.

------
anonfunction
The most interesting part for me was the suggested use of SVG in lieu of
retina-optimized images. As someone doing a project in D3.js, this is
eye-opening.
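
A trivial sketch of what that swap looks like in practice (the file name and
dimensions are made up): one resolution-independent vector file replaces the
usual 1x/2x raster pair and stays crisp at any device pixel ratio.

    <!-- One vector asset instead of icon.png + icon@2x.png; no media queries
         or pixel-ratio switching needed. -->
    <img src="chart-icon.svg" width="120" height="40" alt="Example icon">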

------
dsego
These are easy-peasy, low-hanging fruit. How do you solve something like this:
[http://stackoverflow.com/questions/15323228/css-transition-i...](http://stackoverflow.com/questions/15323228/css-transition-in-chrome-stops-or-is-jerky-while-an-image-is-being-rendered)?
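
One workaround that often gets suggested for that kind of jank (assuming the
stall comes from the image being decoded and painted on the main thread; the
selector here is hypothetical) is to force the animated element onto its own
compositor layer:

    /* The "null transform" hack promotes the element to its own GPU layer so
       the transition isn't stalled by image decode/paint on the main thread. */
    .sliding-panel {
      -webkit-transition: -webkit-transform 0.3s ease-out;
      transition: transform 0.3s ease-out;
      -webkit-transform: translateZ(0);
      transform: translateZ(0);
    }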

