
Tips for Application Performance (2015) - yonnybejarano
https://www.nginx.com/blog/10-tips-for-10x-application-performance/
======
mozumder
I went from 20-second response times to 250us response times (yes, almost a
100,000x speedup) in Django by doing the following:

1\. Moved from a shared hosting server to my own custom server. This alone
took my latencies from 20s (sleep wake-up latency) or 5s (awake response
latency) down to about 0.5s-1s.

2\. Use a streaming HTTP response to send the HTTP header & initial HTML data
immediately, before the database is even accessed.
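
In Django terms this is the `StreamingHttpResponse` pattern: hand the response a generator that yields the static top of the page before any database work runs. A minimal stdlib-only sketch of the idea (the view and fetch-function names are made up; the actual Django wrapper is shown in a comment):

```python
def render_page(fetch_articles):
    """Yield static header HTML immediately; hit the DB only afterwards."""
    # First chunk goes out before any database query runs
    yield "<!doctype html><html><head><title>Blog</title></head><body>"
    # Database access happens here, after the browser already has bytes
    for article in fetch_articles():
        yield f"<article>{article}</article>"
    yield "</body></html>"

# In an actual Django view:
#   from django.http import StreamingHttpResponse
#   return StreamingHttpResponse(render_page(fetch_articles))
```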

3\. Reduce the number of database lookups with
select_related()/prefetch_related(). Each cached page fragment should need at
most one SQL query.
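
The point of `select_related()`/`prefetch_related()` is avoiding the N+1 query pattern: batch the related-object lookup instead of issuing one query per row. A toy sketch of the difference (the counters stand in for SQL queries; the real Django calls are in the comments):

```python
# Naive:   Article.objects.all(), then article.author per row -> 1 + N queries
# Batched: Article.objects.select_related("author")           -> one JOIN query
#          (prefetch_related() similarly batches many-to-many lookups)

def fetch_naive(articles, get_author, counter):
    """One author lookup per article: N extra 'queries'."""
    out = []
    for a in articles:
        counter[0] += 1                      # one query per row
        out.append((a["title"], get_author(a["author_id"])))
    return out

def fetch_batched(articles, get_authors_bulk, counter):
    """One bulk author lookup for all articles: a single extra 'query'."""
    counter[0] += 1                          # one query total
    authors = get_authors_bulk({a["author_id"] for a in articles})
    return [(a["title"], authors[a["author_id"]]) for a in articles]
```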

4\. Don't even bother with the Django ORM; use Postgres prepared SQL
statements directly.
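
Prepared statements let Postgres parse and plan a query once per connection and then execute it with just the parameters. A sketch of the raw SQL involved (the table and statement names are invented; in Django you'd run these through a raw cursor, as in the comment):

```python
# Plan once per connection...
PREPARE_SQL = """
PREPARE get_article (int) AS
  SELECT id, title, body FROM articles WHERE id = $1;
"""
# ...then each request pays only for execution, not parsing/planning
EXECUTE_SQL = "EXECUTE get_article(%s);"

# e.g. with Django's raw cursor:
#   from django.db import connection
#   with connection.cursor() as cur:
#       cur.execute(PREPARE_SQL)
#       cur.execute(EXECUTE_SQL, [article_id])
```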

5\. Use database materialized views.
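
A materialized view precomputes and stores a query's result, so reads become a cheap scan of a real table; you refresh it when the underlying data changes. A hypothetical example for a front-page listing (all table and column names invented):

```python
# Computed once, read many times
CREATE_VIEW = """
CREATE MATERIALIZED VIEW front_page AS
  SELECT a.id, a.title, u.username
  FROM articles a JOIN users u ON u.id = a.author_id
  ORDER BY a.published_at DESC
  LIMIT 50;
"""
# Re-run the stored query when data changes (CONCURRENTLY avoids blocking
# readers, and requires a unique index on the view)
REFRESH_VIEW = "REFRESH MATERIALIZED VIEW CONCURRENTLY front_page;"
```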

6\. GZIP cached data & serve the GZIPed response (side benefit: it effectively
makes the cache 10x bigger).
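
Storing the gzipped bytes means the cache holds roughly 10x more pages, and for clients that send `Accept-Encoding: gzip` the blob can go out as-is, with zero compression work per request. A stdlib sketch (the plain dict stands in for memcached/Redis):

```python
import gzip

def cache_store(cache, key, html):
    # Compress once, at cache-fill time
    cache[key] = gzip.compress(html.encode("utf-8"))

def cache_serve(cache, key, client_accepts_gzip):
    blob = cache[key]
    if client_accepts_gzip:
        # Send as-is with "Content-Encoding: gzip" -- no CPU per request
        return blob
    # Fallback for the rare client without gzip support
    return gzip.decompress(blob).decode("utf-8")
```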

7\. Move from NGINX to the H2O web server for HTTP/2 (an awesome little server
that does HTTP/2 cache-aware server push; see
[https://h2o.examp1e.net](https://h2o.examp1e.net)).

8\. Build a simple JavaScript single-page-app framework.

9\. Use Postgres JSON data types for API calls
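
The win here is letting Postgres assemble the JSON itself (`row_to_json`/`json_agg`), so the API view passes the string straight through instead of fetching rows and re-serializing them in Python. A hypothetical query (names invented):

```python
# Postgres builds the whole API payload server-side
API_SQL = """
SELECT json_agg(row_to_json(t)) FROM (
  SELECT id, title, published_at FROM articles
  ORDER BY published_at DESC LIMIT 20
) t;
"""
# The view returns cursor.fetchone()[0] verbatim as the response body,
# skipping Python-side serialization entirely.
```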

10\. Run a separate Python logging process outside of the Django response
path.
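
The stdlib's `QueueHandler`/`QueueListener` pair gives the same effect: the request thread only enqueues the log record, and a background thread (or, with `multiprocessing.Queue`, a separate process) does the slow formatting and IO. A minimal sketch:

```python
import logging
import logging.handlers
import queue

log_queue = queue.Queue(-1)

# Request path: enqueue only (microseconds, no IO)
logger = logging.getLogger("app")
logger.addHandler(logging.handlers.QueueHandler(log_queue))
logger.setLevel(logging.INFO)

# Background thread: dequeue and do the actual (slow) writing
listener = logging.handlers.QueueListener(
    log_queue, logging.StreamHandler())
listener.start()

logger.info("request served")
listener.stop()  # flushes anything still queued
```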

11\. Further optimizations to speed up HTML (cut down repaints), CSS (GPU
animations), JS, DNS, and TLS/SSL.

It's amazing how much crap & inefficiency there is in web services. A CPU can
do trillions of calculations a second... why should a web site take trillions
of calculations to send a single page?

~~~
chubot
That's interesting, but #1 seems like a bug in your shared hosting service. 20
seconds for a single request isn't even usable, and an unoptimized Django
config doesn't start anywhere near that slow.

A $5 Linode and stock Django should be able to get you to the 500-1000ms
range.

Also, I'm not sure how you're measuring 250 us, but I doubt it's a meaningful
number, because in practice either of these two numbers will be greater than
that:

1) The network latency to send a single packet from the user to your server.
This is usually more like 10 ms, or 40x what you're quoting.

Great link I saw on lobste.rs -- a website in a SINGLE PACKET:

[https://github.com/diracdeltas/FastestWebsiteEver](https://github.com/diracdeltas/FastestWebsiteEver)

2) The time it takes the browser to render the page (e.g. loaded from the
memory of a local server). Static HTML might render in 250 microseconds, but I
doubt that anything with JavaScript or even CSS will.

In other words, I highly doubt you have 250 us end-user latency; it's
basically impossible with the web and "normal conditions". You can choose to
measure it in some weird way, but it doesn't reflect what your users are
experiencing.

EDIT: I guess if you throw out the connection time, 250 us is possible. But I
don't think it's meaningful to throw out the connection time -- you at least
have to average it over many requests.

~~~
mozumder
It was on a cheap Bluehost account. These shared hosts usually don't receive a
lot of traffic and often go to sleep, which takes 20 seconds to wake from. The
system wasn't scalable at all, and on the occasional high-traffic days when it
mattered, the site would become completely inaccessible.

I'm measuring response times at the server, with no network latencies. My
optimization process focused on reducing TTFB first, then on complete page
generation in the later steps.

~~~
chubot
OK understood, thanks for sharing your experience!

------
hamandcheese
Solid advice, but also nothing particularly noteworthy. Feels more like a
marketing piece for nginx.

~~~
carlmr
It also seems very application-specific, not as generally useful as the title
implies.

------
seeekr
(2015)

~~~
dang
Thanks, added.

