100ms in additional latency costs you 1% revenue? (niels-ole.com)
59 points by nielsole on Oct 27, 2018 | 23 comments



I work in mobile ads, and for fun, I injected 50ms, 100ms, 250ms, and 500ms of latency in an experiment. The results were not as dramatic. 500ms had less than 1% impact. 50ms looked almost positive (I didn't wait long enough for it to be statistically significant). This is 2018. Results are rather nonlinear, and different formats with different latency requirements had different results (formats lending themselves to advance preloading were not impacted at all even at 500ms, but others with tighter UX requirements were).

I realize results are context-dependent, which is why everyone should draw their own conclusions from their own analysis. You might have bigger wins available than shaving off some latency.
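For illustration, here is a minimal sketch of how latency injection in such an experiment might look, assuming a Python/asyncio serving path; the arm list mirrors the delays above, but the hashing scheme and handler are hypothetical, not the actual setup:

    import asyncio
    import hashlib

    LATENCY_ARMS_MS = [0, 50, 100, 250, 500]  # treatment arms (ms)

    def assign_arm(user_id: str) -> int:
        # Hash-based bucketing: each user consistently lands in the
        # same arm, so repeat visits see the same injected delay.
        h = int(hashlib.sha256(user_id.encode()).hexdigest(), 16)
        return LATENCY_ARMS_MS[h % len(LATENCY_ARMS_MS)]

    async def real_ad_handler(request):
        return {"ad": "creative-123"}  # stand-in for actual ad selection

    async def serve_ad(user_id: str, request):
        delay_ms = assign_arm(user_id)
        await asyncio.sleep(delay_ms / 1000)  # inject artificial latency
        return await real_ad_handler(request)

    print(asyncio.run(serve_ad("user-42", {"slot": "banner"})))

The consistent per-user bucketing matters: re-randomising per request would dilute any behavioural effect of repeatedly slow experiences.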


> "50ms looked almost positive (didn't wait enough to have it statistically significant)"

Did you test the other delays (100ms, 250ms, and 500ms) for long enough to get statistical significance in this context?
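For a sense of scale, sub-1% effects need enormous samples. A rough back-of-the-envelope using the standard two-proportion z-test power formula (the baseline rate and lift here are purely illustrative, not numbers from the experiment):

    from math import ceil, sqrt
    from statistics import NormalDist

    def n_per_arm(p_base, rel_lift, alpha=0.05, power=0.8):
        # Approximate users per arm to detect a relative lift in a
        # conversion rate with a two-sided two-proportion z-test.
        p2 = p_base * (1 + rel_lift)
        z_a = NormalDist().inv_cdf(1 - alpha / 2)
        z_b = NormalDist().inv_cdf(power)
        pbar = (p_base + p2) / 2
        n = (z_a * sqrt(2 * pbar * (1 - pbar))
             + z_b * sqrt(p_base * (1 - p_base) + p2 * (1 - p2))) ** 2 \
            / (p2 - p_base) ** 2
        return ceil(n)

    # Detecting a 1% relative lift on a 2% baseline conversion rate:
    print(n_per_arm(0.02, 0.01))  # ~7.7 million users per arm

Required sample size grows with the inverse square of the effect size, which is why the smaller delays are so hard to measure.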


> This is 2018.

Yeah, people are used to slow web sites.


Ads aren't the first thing people look at, though. If the rest of the page loads and the content is visible, then I won't leave because that ad I just had to see didn't become visible.


Exactly, but the point was all of this is context dependent.

For example, if you are involved in third-party bidding, where you bid on some other network's impressions, you'll have a deadline to respond with a bid. Let's say that deadline is 500ms (it's not). A 100ms increase in median latency would lose a _significant_ number of impressions in that case.
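As a toy illustration (the latency distribution and all numbers are assumptions, not data from any real exchange): model bid response times as lognormal and see how a 100ms shift in the median pushes responses past a hard deadline:

    from math import log
    from statistics import NormalDist

    def timeout_rate(median_ms, log_sigma, deadline_ms):
        # P(response > deadline) for a lognormal latency distribution
        # parameterised by its median and log-space sigma.
        mu = log(median_ms)
        return 1 - NormalDist(mu, log_sigma).cdf(log(deadline_ms))

    before = timeout_rate(200, 0.5, 500)  # median 200ms
    after = timeout_rate(300, 0.5, 500)   # median +100ms
    print(f"bids lost to timeout: {before:.1%} -> {after:.1%}")  # ~3% -> ~15%

Because the tail of the distribution sits near the deadline, a modest median shift multiplies the timeout rate several times over.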


A 500ms delay actually led to 1.2% lost revenue per affected user for Bing, in this case a decade ago.

http://radar.oreilly.com/2009/06/bing-and-google-agree-slow-...

https://conferences.oreilly.com/velocity/velocity2009/public...


500ms is a lot of time for both customer and vendor. In questioning whether these 'millisecond timings' make a difference to revenue, it helps to think about how it feels at the other end of the transaction, as a customer.

500ms sadly is a lot of time if you are trying to buy tickets for Glastonbury. Click a fraction of a second too late and your name is not down. Same with eBay, click a fraction of a second too late and someone else gets that vital bit of junk you thought you needed.

Last year, when crypto Ponzi schemes were all the rage, the pump-and-dump schemes gave people who were in on the scam different levels of access.

In one scheme, if you brought a few 'friends' in to share the wealth, then you would get the notification of when to buy a little bit earlier. If you brought 150 of your 'friends' along, then you would get the message a whole three seconds earlier. If you brought 500 to the party, then you would get the supreme advantage of getting the message an incredible 3.5 seconds earlier.

500ms really determined whether you ended up 'lambo to the moon' or a 'bag holder'. Funny. Most of the people in on this were just kids in their mum's basement on whatever internet connection was available. Nonetheless, there was a considerable premium on 500ms, which you would not think would be that big a deal given the other latencies in the system.

https://theoutline.com/post/3074/inside-the-group-chats-wher...


Shopzilla reported similar results, with intentionally slower page loads materially impacting revenue, though the study was only revealed during the talk and not in the abstract: https://conferences.oreilly.com/velocity/velocity2009/public...


This statistic is trotted out constantly, and I just don't understand why. A 100ms delay could be a 100% or a 1% regression in performance depending on which site you're looking at. Are we meant to believe that we'd see the same impact on both?

My other bugbear is the classic case study where some company rebuilt their stack and product, saw a perf improvement, and saw a corresponding revenue lift. No consideration of confounding effects (like the massive redesign that accompanied the changes), or of the fact that no one shares the disastrous effects of their rebuild at a tech conference (selection bias, anyone?).

Ironically, it's usually professionals in the performance community who do this. Big tech company developer advocates seem to drift a little too far to the "sales" side of the picture and drop any rigour around what they're selling.


Even if you did lose 1% of customers because of a 100ms delay, that could be a good thing.

The kind of customer who would change their mind about purchasing something because of a 100ms delay may not be the kind of customer you'd want to do business with anyway.

If 100ms is all it takes to put this customer off, maybe they didn't really want to buy that thing to begin with... That kind of customer is probably more likely to get buyer's remorse, file a complaint, require follow-up, and demand a refund.

My point here is that it's silly to just look at statistics as black-and-white good or bad and pretend that they tell a complete story.


It's 100ms of additional delay, not a 100ms delay in total. Someone could be patiently waiting for 2 seconds and hit back just as the page was about to load, 100ms later.


I was assuming that. 100ms is barely enough time for the request to reach the server, depending on the location.
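For scale, light in fibre travels at roughly two-thirds of c, about 200 km per millisecond, so distance alone sets a hard floor on round-trip time. A back-of-the-envelope that ignores routing detours and processing delays (the example distance is illustrative):

    C_FIBRE_KM_PER_MS = 300_000 / 1.5 / 1000  # ~200 km per ms in fibre

    def min_rtt_ms(distance_km: float) -> float:
        # Theoretical minimum round trip over a straight fibre path;
        # real routes are longer and add switching/queueing delay.
        return 2 * distance_km / C_FIBRE_KM_PER_MS

    print(min_rtt_ms(8000))  # ~8000 km (roughly transatlantic+): 80ms minimum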


In my experience 10% of the customers cause 90% of the trouble, and they're usually the cheapest ones.


For a moment I thought the title was talking about my line of work in trading. 100ms would cost more than 100% of revenue.


I find that hard to believe. If your prediction is wrong any of the time, then surely microfluctuations could have a chance to revert or move back the other way.

I guess if all you're doing is responding to a value before others have a chance to, then in theory it's possible you lose 100%; but that seems to require all traders to be lock-stepped on a trading tick.

Can you explain further what I'm missing?

Why aren't trades locked to ticks? Does finer than, say, 1s resolution really increase liquidity or make more capital available, or is it just a means to extract money from the wealth-generating activities of others?


You obviously have no idea how HFT works. Arrive 100ms later than your peers and you're out of business. It's a real-time auction; if you react too late, the price will most likely have moved against you. If you were trying to buy, prices will have climbed higher than your models projected to be profitable; if you were trying to sell, prices will have dropped.
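A toy Monte Carlo (purely illustrative; not how any real trading system works) shows the adverse-selection effect: act on a bullish signal some ticks late and, on average, you pay the drift the market has already made toward that signal:

    import random

    def avg_slippage(delay_ticks, trials=10_000, drift=0.01, vol=0.05):
        # Average extra cost of buying `delay_ticks` after a bullish
        # signal, vs. acting instantly, on a toy drifting random walk.
        total = 0.0
        for _ in range(trials):
            price = 100.0
            for _ in range(delay_ticks):
                price += drift + random.gauss(0, vol)  # market drifts toward the signal
            total += price - 100.0
        return total / trials

    for d in (0, 1, 10, 100):
        print(d, round(avg_slippage(d), 3))  # slippage grows ~linearly with delay

Expected slippage is roughly drift × delay, so being consistently slower than competitors doesn't just shave profit, it flips trades from profitable to losing.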


In general terms, what is the average delay that is tolerable for trading?


Depends on the type of trading; typically <<1ms for software, <<1µs for hardware.


I couldn’t imagine citing that statistic and thinking it was meaningful for every business.


100ms of additional latency will cost me my mind if it's some interactive piece of UX like typing, scrolling, or pwning noobs in battlegrounds...


I play too much CS:GO, and I found that a ping above 120ms makes play too hard. Thankfully a change in ISP/technology dropped my ping by about 80ms.


Were you switching from dialup to high-speed internet? The latency numbers sound about right for dialup to a local server - if memory serves from the early 2000s.


No, ADSL to FTTC. Same POTS twisted copper pair over telegraph poles from cabinet to internal master socket.





