
Google and Nasdaq Pursuing Nano-Second Precision in Network Time Protocol - Osiris30
https://www.nytimes.com/2018/06/29/technology/computer-networks-speed-nasdaq.html
======
chaboud
"So-called high frequency trading firms place trades in a fraction of a
second, sometimes in a bet that they can move faster than bigger competitors."

First off: no. Big money plays in high frequency trading (roughly half of all
trading activity), and the smaller traders without instantaneous access are
the losers in this game.

Secondly, NASDAQ's obsession with precise global sequencing is A) misguided
and B) effectively impossible to do right 100% of the time. Given this, I
would argue that the appropriate thing to do is change the market
requirements. And I'd argue that like this:

1) Temporally quantize the market. Orders come in during an open temporal
window that is long enough to account for the global latency of
non-pathological communication (sorry, Tor users) plus a bit of computation
time. Everyone gets to swim in the same pool. Maybe one second, maybe more.
Nobody gets to see the order book until it's resolved. Write-only.

2) Lock the book and fulfill orders from the set of satisfiable orders. If
there is contention for a trade (there will always be some), fulfill the
contentious trades randomly, using a random seed generated from a
pre-announced salt and a hash of some or all of the order book for the
window (a toy sketch of this tie-break follows below).

3) Return the results and the hashes of the order book, the next salt, etc.,
for verifiability and prep.

4) Re-open the order window.
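
To make step 2 concrete, here is a minimal sketch in Python. Everything in
it is illustrative: the order format, the salt, and the use of SHA-256 over
a canonical dump of the book are assumptions, not a spec. The point is that
the shuffle is random yet reproducible by anyone holding the book and salt.

    import hashlib
    import random

    def resolve_contention(orders, salt):
        # Seed a deterministic shuffle from the pre-announced salt and a
        # hash of the locked order book, so the tie-break is random but
        # verifiable after the fact.
        book_hash = hashlib.sha256(repr(sorted(orders)).encode()).digest()
        rng = random.Random(hashlib.sha256(salt + book_hash).hexdigest())
        shuffled = list(orders)
        rng.shuffle(shuffled)
        return shuffled  # fill contentious orders in this order

    # Two orders contending for the same fill in one window:
    window = [("alice", "BUY", 100), ("bob", "BUY", 100)]
    print(resolve_contention(window, salt=b"salt-for-window-42"))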

High frequency traders would hate this, because they wouldn't be able to
pounce on quick movements, even without front-running slower traders.

It would, naturally, increase latency for trades by virtue of having to wait
for market resolution. However, mere sequencing doesn't solve the problem of
having to resolve and confirm trades (the speed of light is so cruel), so I'm
left utterly unsold on the market-efficiency benefit of ultra-precise order
resolution. Wealthy high frequency traders want to use time to buy an
advantage, and the liquidity support they provide to the markets is dubious
at best, since they pull the plug as soon as things get crazy.

Quantize the markets.

~~~
qeternity
Most HFT shops are relatively small. HFT is all about latency and turnover.
Big quant shops might have HFT elements but lean far more towards
systematic/algo strategies that can be relatively high latency (still super
low latency, but not HFT), because those are the only strategies you can
deploy serious VaR with. The guys crushing HFT are not huge hedge funds, and
they are solving more engineering problems than building trading models.

Also, no, small traders don't lose. Retail traders et al. get much tighter
spreads, cheaper execution by routing to internalizers, etc. It's big
institutions with size to trade that get front-run and have to worry about
HFT killing their shortfall. On the institutional side, it's about lit venues
favoring HFT with special order types and thin top of book. On the retail
side, the issue mostly comes down to direct feeds vs. SIP/CQS, thanks to the
NBBO rules that opened the door to latency arb courtesy of yet more
regulation. Blame your regulators, folks. This is why dark pools became a
thing.

> and the liquidity support they provide to the markets is dubious, at best,
> since they pull the plug as soon as things get crazy

This bit is certainly true.

Source: hedge fund trader who hates HFT not on principle but because they are
good at what they do

~~~
MR4D
Of course the retail guy gets hurt. Who do you think buys the mutual funds
that pay a higher price?

Seriously, I hear this all the time, but it's only one step removed. Why does
everybody keep repeating this lie?

~~~
physguy1123
Vanguard states that HFT activity has lowered their trading costs, so I
don't think this claim is necessarily true.

~~~
qeternity
HFT has reduced costs under 99.99% of market environments. My direct cost and
slippage is still so much lower than it would have been 30 years ago. Hell,
even 10 years ago.

~~~
arcticfox
Where would the profits that go to HFT outfits go if HFT wasn't a thing?
Genuine question, I have no idea how that works.

~~~
mrchicity
They would simply vaporize. At the margin, people trade because the frictional
costs (spreads, fees, pricing/tracking error, risk) of trading are low. Fewer
people would trade.

HFTs basically play an intermediary role: risking capital to buffer
supply/demand imbalances, aiming to buy things at a discount or sell at a
premium to their perceived value. The more transactions an intermediary does,
the smaller his margins per transaction can be. Low margins fuel even more
transactions in a virtuous cycle, and competition drives margins down.

Take this thought experiment to an extreme. What would happen if short-term
speculation were banned and all stocks traded on January 1 and had to be held
for a year? Only very wealthy people with high risk tolerance could
participate in the market, since they couldn't sell companies at will to fund
personal expenses or if the business underperformed.

Volumes would plummet. Exchange/brokerage fees would be a significant
percentage of the deal size, similar to what real estate agents charge, since
they can only do a few transactions. Intermediaries would be something akin to
a private equity fund, bidding 10-20%+ under value to cover the risk of
holding for a year.

Even with trading reduced to once a minute/hour/day, many trades that HFTs
take the other side of now--say, a medium-frequency quant fund believes a
company is underpriced by 0.1%--simply would not exist anymore, because
spreads and fees would increase. Most ETFs would disappear. The marginal cost
for an HFT to make markets in some small ETF is basically 0, but a human
would make more at McDonald's than market-making an ETF that trades a few
hundred thousand shares a day.

~~~
ISL
As noted elsewhere in this thread, I suspect new markets would spring up. If
the underlying could only trade once a year, the options market would be huge.

------
geofft
This looks like a paper about the Huygens system, with comparison to other
protocols like PTP:
[https://www.usenix.org/system/files/conference/nsdi18/nsdi18...](https://www.usenix.org/system/files/conference/nsdi18/nsdi18-geng.pdf)

Also these slides:
[https://platformlab.stanford.edu/Seminar%20Talks/retreat-201...](https://platformlab.stanford.edu/Seminar%20Talks/retreat-2017/Yilong%20Geng.pdf)

~~~
mlichvar
I maintain an NTP implementation. That comparison doesn't seem fair to me. It
looks like they are comparing the old reference NTP implementation and not
really the protocol itself. An NTP implementation can certainly synchronize
clocks with better accuracy than 1 millisecond, or even 1 microsecond with
hardware timestamping and good network switches.

There are some interesting ideas in the Huygens paper, but I don't see
anything that couldn't also be done with NTP.

~~~
geofft
The other thing I don't understand is that this paper argues against hardware
timestamping on the grounds that users won't want to buy expensive hardware,
and that Huygens is for "standard hardware ... in current data centers".
Expensive, niche hardware is _normal_ for the HFT folks that care about
nanosecond precision.

~~~
justicezyx
The paper did not have HFT as its primary use case, but you can't stop
reporters' tendency to cherry-pick eye-catching aspects.

~~~
acqq
Yes, it seems that the designers tried to make something that would work in
current Google data centers, which is not surprising, as some of the authors
work at Google.

What I'd rather know is: why does Google need even better synchronization of
timestamps than what it has now?

------
bcaa7f3a8bbc
It is great news if this algorithm can work directly on the public Internet,
without requiring a specialized network - many scientific and engineering
applications would be able to get their time reference directly from the
Internet!

NTP and the other protocols currently in use are unauthenticated (there is
NTP Autokey, etc., but its security properties are not ideal, and it is
mostly not deployed), which is a big security hole, especially as more and
more cryptographic programs are being put online. Since this protocol is
meant for financial applications, hopefully the security issues can also be
solved by using digital signatures.

I guess it's a rare scenario in which the finance industry makes a _direct_
contribution to technology.

~~~
simias
I can't imagine how you could hope to reach nanosecond-precision over the
internet without changing all the routing hardware in use today. With PTP you
can already reach sub-microsecond synchronization but you need full hardware
support for every network element on the route if you want to achieve that.
The timestamping is done on the network PHY itself, which is how you can
remove all jitter introduced by the kernel stack. Anything receiving and re-
transmitting the packet along the way must update the timestamps to account
for the processing delay.
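
To illustrate that last point with a toy model (the numbers and the
structure are invented; this is not the actual PTP message format): each
switch adds the time the packet spent inside it to a correction field, so
the endpoints can subtract all queuing delay and recover the pure wire
delay.

    def traverse(t_tx_ns, link_delays_ns, residence_ns):
        # One link before and after each switch; every switch adds its
        # residence time to the correction field, transparent-clock style.
        assert len(link_delays_ns) == len(residence_ns) + 1
        t = t_tx_ns + link_delays_ns[0]
        correction = 0
        for link, residence in zip(link_delays_ns[1:], residence_ns):
            correction += residence
            t += residence + link
        wire_delay = (t - t_tx_ns) - correction  # queuing removed
        return t, correction, wire_delay

    # Three links, two switches; one switch queues the packet for 2us:
    print(traverse(0, [500, 480, 510], [2000, 150]))
    # -> (3640, 2150, 1490): the 1490ns of wire delay is recovered exactly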

Huygens is a little more clever and uses a statistical approach to sample
multiple clocks and correlate them (if I understand correctly). However, it
still seems designed to work within a datacenter; I assume that over the
internet the signal-to-noise ratio for measurements would worsen very
significantly and lower the precision dramatically. It could well still
outperform NTP, however.

~~~
makomk
The signal-to-noise ratio would be a nuisance, naturally, but there's a more
fundamental problem that's stopped anyone from trying this. Internet routing
and latency is asymmetric - your request to the time server will, in general,
go via a different route that takes a different amount of time compared to the
message back. This introduces a bias that is indistinguishable from local
clock error. Without precisely synchronised clocks at both ends, you can only
determine the overall round-trip time and not the time each leg took, and
therefore you cannot calculate the exact difference between your local clock
and the server. This is an inherent limit on how accurate NTP can be over
the Internet. (The other reason not to bother is that GPS is cheap and can
provide a very accurate synchronized clock source over long distances.)
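
You can see the bias fall straight out of the standard NTP offset formula.
A quick sketch (arbitrary time units, made-up delays):

    def ntp_offset_and_delay(true_offset, fwd, ret, proc=0.1):
        t0 = 1000.0                           # client clock: request sent
        t1 = t0 + true_offset + fwd           # server clock: request received
        t2 = t1 + proc                        # server clock: reply sent
        t3 = t0 + fwd + proc + ret            # client clock: reply received
        offset = ((t1 - t0) + (t2 - t3)) / 2  # standard NTP estimate
        delay = (t3 - t0) - (t2 - t1)         # measured round-trip time
        return offset, delay

    # The clocks are actually in sync, but the forward path is 3 units slower:
    print(ntp_offset_and_delay(true_offset=0.0, fwd=5.0, ret=2.0))
    # -> (1.5, 7.0): half the asymmetry shows up as phantom clock offset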

------
jrockway
This technology will also be excellent for games. Right now, the server has
to decide how it's going to break ties, and that results in confusing
moments. For example, Overwatch favors the shooter, so you might see someone
in a position to shoot you, use a defensive ability, and die even though you
used it in time, because the server decides that if two events are close in
time, the shooter wins. With a trusted time reference, you can actually order
events from the perspective of the client instead of the server; so if you
actually used that defensive ability before the shooter clicked the mouse,
you don't die.
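
In sketch form (hypothetical Python, not any game's actual netcode): the
server buffers the inputs that arrive during one tick and applies them in
client-timestamp order rather than arrival order.

    def resolve_tick(events):
        # events: (client_timestamp_ns, player, action) tuples that arrived
        # during one server tick, possibly out of order. With trusted,
        # tightly synced clocks, sorting by client timestamp reproduces the
        # order in which the actions really happened.
        for ts, player, action in sorted(events):
            apply_action(player, action)  # hypothetical game-state update

    def apply_action(player, action):
        print(player, action)

    resolve_tick([
        (1_000_000_200, "shooter", "fire"),
        (1_000_000_050, "target", "defensive_ability"),  # earlier: wins
    ])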

I actually think NTP might be good enough for this (despite what people say
their reaction times are, milliseconds don't actually matter), but I guess
game developers don't think of games as globally-distributed transaction
processing systems (which is what they are, just a lot more write conflicts to
resolve than your average database), and haven't experimented with ideas that
are still only a few years old in that field. (The game industry also doesn't
reward experimentation. If you're Google and you try to replace Bigtable with
Spanner and it fails, it doesn't matter, you just keep using Bigtable. If
you're a game company and your netcode is janky, you launch late, a competitor
releases a similar game before you, and all the money you spent on development
is gone.)

With games there is always the trust issue; can someone write a client that
lies about the time they took an action? The answer is yes. But if we have
technology that relies on similar client trust working in high-frequency
trading, it should be safe enough for games. The stakes are a lot lower in a
computer game than the financial markets. So I think good things are on their
way.

~~~
throwaway2048
You still can't trust the client here, no matter how precisely synced your
clocks are, because it could maliciously reorder events/tamper with time
(always claim you activated first, with a certain fudge factor to prevent
detection) and it would be undetectable within the bounds of internet latency.

~~~
jrockway
I'm not completely convinced that cheating would not be detectable after
aggregating some statistics over time. (Similar to how you can extract
encryption keys from server processes, simply by timing how long certain
operations take.)
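
Something like this sketch, with all numbers invented: track the residual
between each claimed timestamp and the server receive time minus the
player's measured latency, and look for a persistent bias.

    from statistics import mean

    def mean_residual(samples):
        # samples: (claimed_ts, server_rx_ts, measured_latency) in ms.
        # Honest clients' residuals jitter around zero; a client that
        # systematically back-dates its actions drifts negative.
        return mean(c - (rx - lat) for c, rx, lat in samples)

    honest = [(100, 130, 30), (205, 235, 30), (310, 340, 30)]
    cheat = [(90, 130, 30), (195, 235, 30), (300, 340, 30)]  # 10ms early
    print(mean_residual(honest), mean_residual(cheat))  # 0 vs -10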

~~~
throwaway2048
The problem is that a statistical approach to anti-cheat like that results in
banning tons of innocent players, which is awful PR.

------
NelsonMinar
HFT aside, this also has interesting applications in distributed databases.
Spanner, for instance, has consistency guarantees directly related to how
synchronized the clocks are (keyword: TrueTime). It seems plausible that
Huygens could significantly improve the performance of this kind of
distributed database.
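
For intuition, a toy version of the commit-wait idea. The epsilon value is
an assumption (the Spanner paper reports TrueTime bounds of a few
milliseconds), and this is a sketch of the mechanism, not Spanner's code:

    import time

    EPSILON_NS = 4_000_000  # assumed clock-uncertainty bound (~4ms)

    def tt_now():
        # TrueTime-style interval: true time lies in [earliest, latest].
        t = time.time_ns()
        return t - EPSILON_NS, t + EPSILON_NS

    def commit_wait(ts):
        # Block until ts is guaranteed to be in the past on every clock.
        # The wait scales with the uncertainty bound, so nanosecond-level
        # sync would shrink it by several orders of magnitude.
        while tt_now()[0] <= ts:
            time.sleep(0)

    s = tt_now()[1]  # choose a commit timestamp
    commit_wait(s)   # blocks for roughly 2 * EPSILON_NS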

This looks like a very impressive result. NTP has been doing its thing well
for years, but a factor-of-100 improvement in time accuracy would be amazing.

------
SEJeff
IEEE 1588 v2 + rubidium grandmaster time sources do this quite nicely.

Source: have worked as a software / Linux monkey for two of the biggest US
HFT firms for the past 10 years.

------
red75prime
It's interesting that time synchronization in a rotating reference frame is
path-dependent ([0], for example).

[0]
[https://tycho.usno.navy.mil/ptti/1974papers/Vol%2006_26.pdf](https://tycho.usno.navy.mil/ptti/1974papers/Vol%2006_26.pdf)

------
amenod
From [0]:

> In this paper, we present HUYGENS, a software clock synchronization system
> that uses a synchronization network and leverages three key ideas. First,
> coded probes identify and reject impure probe data—data captured by probes
> which suffer queuing delays, random jitter, and NIC timestamp noise. Next,
> HUYGENS processes the purified data with Support Vector Machines, a widely-
> used and powerful classifier, to accurately estimate one-way propagation
> times and achieve clock synchronization to within 100 nanoseconds. Finally,
> HUYGENS exploits a natural network effect—the idea that a group of pair-wise
> synchronized clocks must be transitively synchronized— to detect and correct
> synchronization errors even further.

Not an expert, but this seems like quite a complex system. Since HF traders
have a huge incentive to game the system, my fear is that the next headline
about Huygens will be about a new exploit found in the wild.

[0]
[https://www.usenix.org/conference/nsdi18/presentation/geng](https://www.usenix.org/conference/nsdi18/presentation/geng)
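
The coded-probe step, at least, is simple to picture. A toy version of the
filter (the spacing and tolerance values are invented):

    def filter_coded_probes(rx_pairs, spacing_ns, tol_ns):
        # Probes are sent in pairs a known spacing apart. If a pair's
        # spacing changed in flight, at least one probe hit a queue or
        # jitter, so the pair is rejected as "impure" per the abstract.
        return [(a, b) for a, b in rx_pairs
                if abs((b - a) - spacing_ns) <= tol_ns]

    pairs = [(1000, 1100), (2000, 2150), (3000, 3101)]  # sent 100ns apart
    print(filter_coded_probes(pairs, spacing_ns=100, tol_ns=5))
    # keeps (1000, 1100) and (3000, 3101); the queued pair is dropped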

------
amelius
How would you use that nano-second precision on a CPU/OS that supports at most
microsecond precision?

~~~
dirkgently
Only if the article mentioned it!

Oh wait, it did!

[https://www.usenix.org/conference/nsdi18/presentation/geng](https://www.usenix.org/conference/nsdi18/presentation/geng)

~~~
geofft
I don't think the article answers amelius's question, which is not about how
you would _achieve_ that precision but how you would _use_ it. On an OS that
only supports recording timestamps up to microsecond accuracy, what is the
point of synchronizing your clocks within nanoseconds of each other, even if
you could?

~~~
egwor
On Linux you can get nanosecond timestamps, and so you can record them. This
could be useful for capturing certain events. Is that what is meant? IIRC
Windows XP didn't support resolving timestamps smaller than 8ms, but we've
moved beyond that.
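
For example, in Python on Linux (note this is resolution, not accuracy: the
clock reports nanoseconds, but how true they are depends on your sync):

    import time

    # clock_gettime(CLOCK_REALTIME) reports nanoseconds on Linux, and
    # Python 3.7+ exposes it directly:
    t1 = time.clock_gettime_ns(time.CLOCK_REALTIME)
    t2 = time.clock_gettime_ns(time.CLOCK_REALTIME)
    print(t1, t2 - t1)  # nanosecond-resolution stamps and their delta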

------
drumttocs8
Confusing title. Perhaps better to say "successor to Network Time Protocol",
since NTP, like IRIG or PTP, is a pretty common timekeeping protocol.

------
aswanson
It seems sad for so much technical effort to be exerted on something so
venal, trivial, and fleeting.

~~~
txdv
If there is money in it, it will get solved!

------
partycoder
I think this is no longer about NTP but about the whole stack below it,
including hardware.

------
cordite
What value does this add to the world?

------
jfoutz
I wonder if the next wave of HFT will be a geostationary space station over
New York, to get the occasional extra nanosecond.

~~~
chaboud
Uh, I know it's a joke, but isn't geostationary orbit almost 36K km over the
equator? The round trip time to NYC would be over a quarter of a second...

Hence, drones. With lasers... Laser drones...

------
downer68
An interesting side effect of this is that it would enable a standard of
synchronization across geographic regions, such that one could treat a set of
virtual machines as one ultra-wide-bus CPU with a 1 GHz clock speed.

All of the local overhead of real system resources and network
synchronization could be handled by the remainder of the real CPU clock
available to the bare metal, while each machine contributes to the
computation of a segment of a virtual bit field, at speed.

So now maybe we get a commodity 4096-bit 1 GHz CPU as a service. Which is
maybe comparable to a 64-core processor, but without the overhead of chunking
down to a width of 64 bits.

~~~
dagenix
Are you saying that 64-bit CPU + 64-bit CPU = 128-bit CPU (as long as they
are time synced)?

1. It doesn't work this way. 2. Why would you want a 4096-bit CPU?

~~~
tonysdg
For financial transactions, it would certainly allow for fast high-precision
floating point math. Imagine IEEE 754 4096-bit floats. Not sure anyone would
_actually_ use this, and you'd still have to standardize the rounding
precision, but it might be an interesting vein of research.

Still, I agree with you -- what the OP described is _not_ a 4096-bit
processor.

Now highly-synchronized VMs -- that's an entirely different matter. Probably a
boatload of use cases for those.

~~~
fjsolwmv
Why would you use floating point math for finance?

~~~
danbruc
A floating point representation is not really the issue; the issue is not
using base 10, and IEEE 754 specifies both base-2 and base-10 floating point
formats and operations. But I am of course not sure whether the original
comment referred to base 2 or base 10, and given how common the mistake of
using base-2 floating point numbers for financial calculations is, you may be
correct about the intention of your comment.

~~~
tonysdg
I'm aware of the fact that you don't use floating point math for finance --
for exactly the reason you described -- but the academic in me wonders if you
could formally specify a high-enough degree of precision -- and all the corner
cases -- to allow FP math for even just a subset of transactions. This would
(in theory) allow programmers to bypass the Decimal classes in your favorite
OO language (or GMP if you're a C fan).

Again, purely an academic inquiry :-)

~~~
danbruc
My point was more that it is wrong to say that financial calculations should
not be done using floating point formats, for example Decimal in .NET and
BigDecimal in Java are floating point formats and they are the types you
should use for financial calculations. The important difference as compared to
formats like IEEE 754 binary32 (formerly single) and binary64 (formerly
double) is that the representation is based on base 10 instead of base 2.
Fixed point or floating point and base 2 or base 10 are two orthogonal
choices.

So when you initially mentioned high-precision floating point numbers for
financial calculations, that was not necessarily a bad idea, because you
might have been thinking of base-10 floating point numbers. The comment I
replied to, however, assumed you meant base 2, which of course most people do
mean if they say floating point numbers without specifying the base, and
which of course is a bad idea for financial calculations more often than not.
I just pointed out that assuming base 2 is usually, but not necessarily,
correct.

And you can of course use base-2 floating point numbers for financial
calculations - 32-bit, 64-bit, or 4096-bit - you just have to keep track of
the accumulated error and stop or correct the result before the error grows
into the digits you are interested in. But why would one want to do this? The
only reason I can really think of is that you need maximum performance and
there is no hardware support for base-10 floating point numbers - and just
using integers as base-10 fixed point numbers, which would often be an even
better solution, would also have to be off the table.
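
To make the base-2 vs. base-10 point concrete in Python (its Decimal is a
base-10 floating point type, in the same family as the .NET and Java types
mentioned above):

    from decimal import Decimal

    # Base-2 floating point cannot represent 0.10 exactly:
    print(0.10 + 0.20)                        # 0.30000000000000004
    # Base-10 floating point represents decimal amounts exactly:
    print(Decimal("0.10") + Decimal("0.20"))  # 0.30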

------
viburnum
This is a huge waste of effort. HFT should be banned so smart people work on
solving real problems instead.

~~~
JumpCrisscross
> _so smart people work on solving real problems instead_

Like making people click on ads?

------
wpowiertowski
Why not just switch to PTP (IEEE 1588)?

