I work in mobile ads, and for fun, I injected 50ms, 100ms, 250ms, and 500ms of latency in an experiment. The results were not as dramatic as you might expect: 500ms had less than a 1% impact, and 50ms even looked slightly positive (I didn't wait long enough for that to be statistically significant). This was in 2018. The results are rather nonlinear, and different formats with different latency requirements saw different results (formats that lend themselves to preloading in advance were not impacted at all even at 500ms, while others with tighter UX requirements were).
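For anyone who wants to run a similar experiment, a minimal sketch of that kind of bucketed latency injection might look like the following; the bucket values, hashing scheme, and function names are illustrative assumptions, not our actual setup:

    import hashlib
    import time

    # Sketch of a bucketed latency-injection experiment (illustrative only).
    LATENCY_BUCKETS_MS = [0, 50, 100, 250, 500]  # 0 ms acts as the control group

    def assign_bucket(user_id: str) -> int:
        """Deterministically hash a user into one of the latency buckets."""
        digest = hashlib.sha256(user_id.encode()).hexdigest()
        return LATENCY_BUCKETS_MS[int(digest, 16) % len(LATENCY_BUCKETS_MS)]

    def serve_ad(user_id: str, render_ad) -> None:
        """Add the user's bucket delay before handing off to the real ad render."""
        delay_ms = assign_bucket(user_id)
        if delay_ms:
            time.sleep(delay_ms / 1000.0)
        render_ad()

    # Every request for the same user gets the same injected delay.
    serve_ad("user-123", lambda: print("ad rendered"))

Hashing the user ID keeps a given user in the same bucket across requests, so you can compare revenue per bucket over the life of the experiment.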
I realize results are context-dependent, which is why everyone should draw their own conclusions from their own analysis. You might get a bigger bang for your buck elsewhere than from shaving off some latency.
Ads aren't the first thing people look at, though. If the rest of the page loads and the content is visible, then I won't leave just because that ad I just had to see didn't become visible.
Exactly, but the point was that all of this is context-dependent.
For example, if you are involved in third-party bidding, where you bid on some other network's impressions, you'll have a deadline to respond with a bid. Let's say that deadline is 500ms (it's not). A 100ms increase in median latency would lose you a _significant_ number of impressions in that case.
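To make the deadline point concrete, here is a toy simulation; the 500ms deadline is the hypothetical figure above, and the lognormal latency model, sigma, and medians are made-up assumptions rather than real numbers:

    import math
    import random

    # Toy model: how many bid responses miss a fixed deadline when the
    # median response time shifts by 100 ms. All numbers are illustrative.
    DEADLINE_MS = 500.0
    N = 100_000

    def missed_share(median_ms: float, sigma: float = 0.5) -> float:
        """Fraction of simulated responses slower than the deadline."""
        mu = math.log(median_ms)  # for a lognormal, exp(mu) is the median
        late = sum(1 for _ in range(N)
                   if random.lognormvariate(mu, sigma) > DEADLINE_MS)
        return late / N

    for median_ms in (250, 350):  # before and after a 100 ms median increase
        print(f"median {median_ms} ms -> "
              f"{missed_share(median_ms):.1%} of bids miss the deadline")

Because the slow tail of the distribution sits near the deadline, a modest shift in the median can disproportionately inflate the share of bids that arrive too late.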
500ms is a lot of time for both the customer and the vendor. In questioning whether these 'millisecond timings' make a difference to revenue, it helps to think about how it feels at the other end of the transaction, as a customer.
500ms, sadly, is a lot of time if you are trying to buy tickets for Glastonbury. Click a fraction of a second too late and your name is not down. Same with eBay: click a fraction of a second too late and someone else gets that vital bit of junk you thought you needed.
Last year, when crypto Ponzi schemes were all the rage, the pump-and-dump schemes gave the people who were in on the scam different levels of access.
In one scheme, if you brought a few 'friends' in to share the wealth, you would get the notification of when to buy a little bit earlier. If you brought 150 of your 'friends' along, you would get the message a whole three seconds earlier. If you brought 500 to the party, you would get the supreme advantage of receiving the message an incredible 3.5 seconds earlier.
500ms really determined whether you ended up 'lambo to the moon' or a 'bag holder'. Funny. Most of the people in on this were just kids in their mum's basement on whatever internet connection was available. Nonetheless, there was a considerable premium on 500ms, which you would not think would be that big a deal given the other latencies in the system.
This statistic is trotted out constantly, and I just don't understand why. A 100ms delay could be a 100% or a 1% regression in performance, depending on which site you're looking at. Are we meant to believe that we'd see the same impact on both?
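Spelling out the arithmetic with made-up baselines:

    # The same absolute delay is a very different relative regression
    # depending on the baseline response time (baselines are made up).
    DELAY_MS = 100
    for baseline_ms in (100, 1_000, 10_000):
        print(f"{baseline_ms} ms baseline: +{DELAY_MS} ms is a "
              f"{DELAY_MS / baseline_ms:.0%} slowdown")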
My other bugbear is the classic case study where some company rebuilt their stack and product, saw a perf improvement, and saw a corresponding revenue lift. There is no consideration of confounding effects (like the massive redesign that accompanied the changes) or of the fact that no one shares the disastrous effects of their rebuild at a tech conference (selection bias, anyone?).
Ironically, it's usually professionals in the performance community who do this. Big-tech developer advocates seem to drift a little too far toward the "sales" side of the picture and drop any rigour around what they're selling.
Even if you did lose 1% of customers because of a 100ms delay, that could be a good thing.
The kind of customer who would change their mind about purchasing something because of a 100ms delay may not be the kind of customer you'd want to do business with anyway.
If 100ms is all it takes to put this customer off, maybe they didn't really want to buy that thing to begin with... That kind of customer is probably more likely to get buyer's remorse, file a complaint, require follow-up, and demand a refund.
My point here is that it's silly to just look at statistics as black-and-white good or bad and pretend that they tell a complete story.
I find that hard to believe. If your prediction is wrong any of the time, then surely microfluctuations would have a chance to revert or move back the other way.
I guess if all you're doing is responding to a value before others have a chance to, then in theory it's possible you lose 100%; but that seems to require all traders to be lock-stepped on a trading tick.
Can you explain further what I'm missing?
Why aren't trades locked to a tick? Is finer-than-1s resolution really increasing liquidity or making more capital available, or is it just a means to extract money from the wealth-generating activities of others?
You obviously have no idea how HFT works. Be 100ms later than your peers and you're out of business. It's a real-time auction: if you react too late, the price will most likely have moved against you. If you were trying to buy, prices would have climbed higher than your models projected to be profitable; if you were trying to sell, they would have dropped.
Were you switching from dialup to high-speed internet? The latency numbers sound about right for dialup to a local server - if memory serves from the early 2000s.