Latency Numbers Every Programmer Should Know (gist.github.com)
107 points by ashitlerferad on Jan 31, 2017 | 22 comments



Relevant: Grace Hopper explains Nanoseconds

[1] https://www.youtube.com/watch?v=JEpsKnWZrJ8


Agreed. I love that bit, especially how well it plays to her target audience.


brilliant


This is an exercise I recommend every developer take. https://computers-are-fast.github.io/

Through a series of questions, it shows you how fast code runs: the speed of RAM vs. cache, RAM vs. HDD, reading from a file vs. from SQL.

Highly recommended.


This is a great list. The one that I see people ignore the most is the fact that spinning disk access is 20x the cost of a datacenter roundtrip. If you need low-latency persistence, something like Apache Kafka is a much better solution than a disk (as long as your scale justifies it).
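
For reference, that 20x comes straight from the list's own figures:

    disk seek:               ~10,000,000 ns  (10 ms)
    datacenter round trip:      ~500,000 ns  (0.5 ms)
    10 ms / 0.5 ms = 20x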


There's networking overhead, handshaking, latency, etc. that I don't think are included in that.


A big thing that is missing is dropped packets. Our IT director moved everything offsite with the same justification, and now we constantly have issues because our front-end software does not handle dropped packets well.

Edit: Also, in 10,000 ns, light travels about 3 km in a vacuum. I know latency is addressed a bit in there, but our closest DC is about 70 km away from our main office. That's a pretty substantial hit on latency, especially considering our servers still have spinning discs in them.
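
Rough numbers for that hop, assuming a signal speed in fiber of ~200,000 km/s (about two thirds of c):

    70 km / 200,000 km/s  ≈ 0.35 ms one way, ~0.7 ms round trip
    vs. ~0.5 ms for a round trip inside a single datacenter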


Kafka is backed by spinning disks though. Am I missing something?


Generally Kafka takes advantage of the OS page cache and the fact that its data is sequential. The reason you can back Kafka effectively with spinning disks is because Kafka is just an immutable log.

OTOH you won't always have sequential access, so services that store data in memory cover some of those other cases (e.g. a KV store like Redis).
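
To make the "immutable log" point concrete, here is a minimal append-only write loop in C; this is an illustration of the pattern, not Kafka's actual code:

    #include <fcntl.h>
    #include <string.h>
    #include <unistd.h>

    /* Minimal append-only log: every write goes to the end of the
       file, so the disk only ever sees sequential I/O, and recent
       data stays hot in the OS page cache for readers. */

    int main(void) {
        int fd = open("log.dat", O_WRONLY | O_CREAT | O_APPEND, 0644);
        if (fd < 0)
            return 1;

        const char *record = "key=42 value=hello\n";
        int i;
        for (i = 0; i < 1000; i++)
            write(fd, record, strlen(record));  /* always appends; no seeks */

        close(fd);
        return 0;
    }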


I'm not familiar with Kafka specifically, but I reckon caching in memory is the way to work around that.


I think it's assuming that Kafka caches a lot more than your local machine, and therefore is less likely to need to access disk.

Note that it also assumes low network latency, though; if you have congestion within the data center, things change.


This is apparently from 2012.

How outdated are some of these numbers?

According to this[1] source (linked in the gist comments), ping is down to 144ms from LA to Amsterdam.

[1]: https://wondernetwork.com/pings/


I believe that the gist is not really meant to provide accurate numbers in every case. Especially with things like network latency, it is very difficult to say that any time is the 'correct' time, because it will be different in every setup.

The real purpose of the gist, which is posted here at regular intervals, is to illustrate the order-of-magnitude differences between e.g. L1 and L2 cache latency. It is meant to get a programmer to think about cache locality. If I can keep something in L1/L2 cache, it will be orders of magnitude faster than something that needs to make regular round trips to main memory, which in turn is orders of magnitude faster than something that needs regular round trips to disk, and so on.
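
A quick C sketch of that cache-locality effect; the 64 MB buffer and 4 KB stride are arbitrary choices, and absolute timings will vary by machine, but the gap is the point:

    #include <stdio.h>
    #include <stdlib.h>
    #include <time.h>

    /* Touch every byte of a 64 MB buffer twice: once sequentially
       (cache- and prefetcher-friendly), once with a 4 KB stride
       (roughly one cache miss per access). Same total work, very
       different wall-clock time on typical hardware. (POSIX clock.) */

    #define N (64L * 1024 * 1024)

    static double now_sec(void) {
        struct timespec ts;
        clock_gettime(CLOCK_MONOTONIC, &ts);
        return ts.tv_sec + ts.tv_nsec / 1e9;
    }

    int main(void) {
        unsigned char *buf = malloc(N);
        long i, s, sum = 0;
        double t;

        for (i = 0; i < N; i++)
            buf[i] = (unsigned char)i;      /* initialize so reads are defined */

        t = now_sec();
        for (i = 0; i < N; i++)             /* sequential pass */
            sum += buf[i];
        printf("sequential: %.3f s\n", now_sec() - t);

        t = now_sec();
        for (s = 0; s < 4096; s++)          /* strided pass */
            for (i = s; i < N; i += 4096)
                sum += buf[i];
        printf("strided:    %.3f s\n", now_sec() - t);

        free(buf);
        return (int)(sum & 1);  /* keep the compiler from eliding the loops */
    }

Compile with something like gcc -O2 and compare the two timings.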


I guess I took the title too literally. I would expect every competent programmer to know what you explained, and that is probably what the title was alluding to.

Thanks for clarifying.


That sequential-read figure for an HDD varies a lot; real drives can be anywhere from the listed value to somewhat faster than an SSD (seek time, however, doesn't change much).

The datacenter round trip and sequential reading from memory sometimes change places now, and sending 1 MB on some networks is in the same ballpark as reading 1 MB from memory. (There have been some articles about this change here on HN.)
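
Back-of-the-envelope, using the list's ~250 µs figure for reading 1 MB sequentially from memory:

    1 MB over 10 Gbps:   8,000,000 bits / 10^10 bits/s = 0.8 ms
    1 MB over 40 Gbps:   0.2 ms
    1 MB from RAM:       ~0.25 ms (per the list)

So on a fast link the network transfer really does land in the same ballpark.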


I linked to this interactive chart by year from my blog [1]:

https://people.eecs.berkeley.edu/~rcs/research/interactive_l...

(Use the slider)

It shows disk seek and sequential SSD read going down a lot. However, it also extrapolates out to 2020.

The code behind the chart does cite a lot of sources and shows how the extrapolation was done, though I didn't look into it in great detail.

[1] http://www.oilshell.org/blog/2016/12/23.html


The data isn't outdated, it just had 1.6 x 10^17 ns latency :)


It's worth noting that on consumer machines, opening a file can have much longer latency than the disk-read number in this table, due to two factors: a) the power policy spins the hard drive down, and spin-up takes on the order of seconds; b) a user-installed virus scanner hooks your process's file open or read to perform a scan, which typically takes on the order of hundreds of milliseconds.
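
A small C probe for this, in case anyone wants to see it on their own machine ("test.txt" is just a placeholder path):

    #include <fcntl.h>
    #include <stdio.h>
    #include <time.h>
    #include <unistd.h>

    /* Time a single open()+close(). On a consumer machine, a
       spun-down disk or a virus-scanner hook can push this from
       microseconds into hundreds of milliseconds, or seconds. */

    int main(int argc, char **argv) {
        const char *path = argc > 1 ? argv[1] : "test.txt";
        struct timespec a, b;

        clock_gettime(CLOCK_MONOTONIC, &a);
        int fd = open(path, O_RDONLY);
        clock_gettime(CLOCK_MONOTONIC, &b);
        if (fd >= 0)
            close(fd);

        double ms = (b.tv_sec - a.tv_sec) * 1e3
                  + (b.tv_nsec - a.tv_nsec) / 1e6;
        printf("open() took %.3f ms\n", ms);
        return 0;
    }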

These numbers are also important when writing software that runs on consumer PCs where real-time performance is a feature (e.g. most PC games).


So I should store everything in a mutex. Gotcha.


Not sure the numbers make much sense...

    Main memory reference - 100ns
    Compress 1K bytes with Zippy - 3,000ns
So it takes 30 main memory accesses to zip up 1K? Not likely.


If you have 32-byte cache lines and no hardware prefetch, then yes.
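
Spelling that out with the list's numbers:

    1 KB / 32-byte lines  = 32 cache misses
    32 misses x 100 ns    = 3,200 ns  (vs. the 3,000 ns Zippy figure)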


Previous discussion on Hacker News: https://news.ycombinator.com/item?id=4047623



