I realize results are context-dependent, which is why everyone should draw their own conclusions from their own analysis. You might have bigger wins to chase than shaving off some latency.
Did you test the other delays (100ms, 250ms, and 500ms) for long enough to get statistical significance in this context?
Yeah, people are used to slow web sites.
For example, if you are involved in third-party bidding, where you bid on another network's impression, you'll have a deadline to respond with a bid. Let's say that deadline is 500ms (it's not). A 100ms increase in median latency would lose a _significant_ number of impressions in that case.
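A quick back-of-the-envelope sketch of that effect, with entirely made-up numbers: the 500ms deadline is the hypothetical one from above, and the lognormal latency distribution and its spread are my assumptions, not real ad-exchange figures.

```python
import math
import random

random.seed(0)

DEADLINE_MS = 500  # hypothetical deadline, per the example above

def on_time_rate(median_ms, n=100_000, sigma=0.5):
    """Fraction of bids that come back before the deadline, assuming
    response latency follows a lognormal distribution (a common
    heavy-tailed model; sigma here is an invented value)."""
    on_time = sum(
        1 for _ in range(n)
        if random.lognormvariate(math.log(median_ms), sigma) <= DEADLINE_MS
    )
    return on_time / n

fast = on_time_rate(300)   # median latency 300ms
slow = on_time_rate(400)   # same system, median shifted up by 100ms
print(f"300ms median: {fast:.1%} of bids make the deadline")
print(f"400ms median: {slow:.1%} of bids make the deadline")
```

With these made-up parameters the 100ms median shift costs double-digit percentage points of on-time bids, and every late bid is an impression you never had a chance to win.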
500ms sadly is a lot of time if you are trying to buy tickets for Glastonbury. Click a fraction of a second too late and your name is not down. Same with eBay, click a fraction of a second too late and someone else gets that vital bit of junk you thought you needed.
Last year, when crypto Ponzi schemes were all the rage, the pump-and-dump schemes gave people who were in on the scam different levels of access.
In one scheme if you brought a few 'friends' in to share the wealth then you would get the notification of when to buy a little bit earlier. If you brought 150 of your 'friends' along then you would get the message a whole three seconds earlier. If you brought 500 to the party then you would get the supreme advantage of getting the message an incredible 3.5 seconds earlier.
500ms really determined if you ended up 'lambo to the moon' or a 'bag holder'. Funny. Most of the people in on this were just kids in their mum's basement on whatever internet connection was available. Nonetheless there was a considerable premium on 500ms which you would not think would be that big a deal given the other latencies in the system.
My other bugbear is the classic case study in which some company rebuilt their stack and product, saw a perf improvement, and a corresponding revenue lift. No consideration of confounding effects (like the massive redesign that accompanied the changes) or the fact that no one shares the disastrous effects of their rebuild at a tech conference (selection bias, anyone?).
Ironically, it’s usually professionals in the performance community who do this. Big tech company developer advocates seem to drift a little too far to the “sales” side of the picture and drop any rigour around what they’re selling.
The kind of customer who would change their mind about purchasing something because of 100ms delay may not be the kind of customer that you'd want to do business with anyway.
If 100ms is all it takes to put this customer off, maybe it means that they didn't really want to buy that thing to begin with... That kind of customer is probably more likely to get buyer's remorse, file a complaint, require follow-up, and demand a refund.
My point here is that it's silly to just look at statistics as black-and-white good or bad and pretend that they tell a complete story.
I guess if all you're doing is responding to a value before others have a chance to, then in theory it's possible you lose 100%; but that seems to require all traders to be lock-stepped on a trading tick.
Can you explain further what I'm missing?
Why aren't trades locked? Does more than, say, 1s resolution really increase liquidity or make more capital available, or is it just a means to extract money from the wealth-generating activities of others?