Roughtime: a protocol for secure, auditable time synchronisation (imperialviolet.org)
120 points by bruo on Sept 20, 2016 | 16 comments



This is awfully similar to Ben Laurie's good ideas about how to run a distributed time stamping service for a cryptocurrency. http://www.links.org/files/distributed-currency.pdf

Great to see these ideas used for general security applications.


(Ben was involved in the design of Roughtime.)


I just intended to point out that this is a great example of the general applicability of cryptocurrency-inspired research.


I'm curious why the clients chain requests but the servers don't chain replies. It would be straightforward for servers to emit a CT-style log as they go, thus forcing servers to prove that all their timestamps are in order. With a bit of refinement, it would also allow servers to prove that they didn't generate, say, Thursday's timestamp prior to learning a nonce that was sent to them on Wednesday. If nothing else, this would substantially strengthen its use as a timestamping service.
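A sketch of what that server-side chain could look like (entirely hypothetical — Roughtime servers don't do this, and the field names are made up; signing is omitted for brevity):

```python
import hashlib

GENESIS = b"\x00" * 64  # hash value that starts the chain

def _encode(midpoint_us: int, nonce: bytes, prev: bytes) -> bytes:
    return str(midpoint_us).encode() + b"|" + nonce + b"|" + prev

def make_reply(prev_hash: bytes, client_nonce: bytes, now_us: int) -> dict:
    # Each reply commits to the previous reply's hash, forming a local log.
    body = _encode(now_us, client_nonce, prev_hash)
    return {"midpoint_us": now_us, "nonce": client_nonce,
            "prev": prev_hash, "hash": hashlib.sha512(body).digest()}

def verify_chain(replies) -> bool:
    # An auditor replays the log: hashes must link up and timestamps must
    # be monotonic, proving the server issued its replies in order.
    prev, last_ts = GENESIS, 0
    for r in replies:
        body = _encode(r["midpoint_us"], r["nonce"], r["prev"])
        if r["prev"] != prev or hashlib.sha512(body).digest() != r["hash"]:
            return False
        if r["midpoint_us"] < last_ts:
            return False
        prev, last_ts = r["hash"], r["midpoint_us"]
    return True
```

Because each reply also embeds the client's nonce, a nonce received on Wednesday pins everything after it in the chain to Wednesday or later.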

Also:

> It is the case that the signature, even assuming one request per batch, will add some number of microseconds of latency to the reply. Roughtime is not going to displace PTP for people who care about microseconds.

I'm not convinced that this should prevent extremely precise timestamping a la PTP. The server just needs to indicate, outside the signature, how long its processing took. Sure, this prevents authentication of the fine-grained time, but a client could easily bound the amount that a server can cheat.
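For instance (a hypothetical sketch, not anything in the Roughtime spec): if the server appended an unsigned processing-time field, the client could clamp it to its own observed round trip, so lying about it can't move the estimate outside the RTT window:

```python
def refine_time(t_send_us: int, t_recv_us: int,
                signed_time_us: int, claimed_proc_us: int):
    """Combine a signed coarse timestamp with an unsigned processing-time
    claim, bounding how much a dishonest server can shift the result."""
    rtt_us = t_recv_us - t_send_us
    # An unsigned claim can't be trusted beyond the observed round trip.
    proc_us = min(max(claimed_proc_us, 0), rtt_us)
    # Fine-grained estimate: signed coarse time plus the claimed delay.
    estimate_us = signed_time_us + proc_us
    # Worst-case cheating is bounded by the round trip itself.
    uncertainty_us = rtt_us
    return estimate_us, uncertainty_us
```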


> I'm curious why the clients chain requests but the servers don't chain replies.

One challenge would be around server scale -- for the servers to maintain a strict chain, they would need to coordinate amongst themselves, which can be costly. The proposed approach doesn't introduce that requirement.

But they could get close by maintaining a branching history, like what you might see in a git history, or a vector clock. Neither of those approaches provides the same causality assurances as your proposal, but either would provide something close without incurring any blocking server-side state management.


I just meant for servers to maintain a local chain. Cross-server checking would be free, sort of, as clients that talk to multiple servers would inject the chain state from the first server into the second via their nonces.
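Roughly how that injection works on the client side (a simplified sketch of the nonce chaining Roughtime clients already do; the 64-byte sizes match the protocol, the rest is illustrative):

```python
import hashlib
import os

def next_nonce(prev_reply: bytes) -> tuple:
    """Derive the nonce for the next server from the previous server's
    reply plus a random blind, tying the two servers' clocks together."""
    blind = os.urandom(64)  # keeps the nonce unpredictable to the server
    nonce = hashlib.sha512(prev_reply + blind).digest()
    return nonce, blind
```

If the second server also logged incoming nonces in a local chain (hypothetical), its log would then contain a commitment to the first server's signed reply, with no server-to-server coordination needed.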


At scale, a time server's DNS address will likely either be a reverse proxy or a multi-valued A record. The coordination amongst those separate physical servers that are serving requests for the same DNS name would become a scale challenge if they needed to share state.


But they wouldn't need to share state. They could run independently, and any client that wanted to tie their clocks together could request the signed time from one and then send it to the other, causing it to get logged in the other server's chain.


SpiderOak might be interested in running a Roughtime service / participating in a Roughtime community to support Semaphor.

As it is, Semaphor clients include a local timestamp in the signed content of most actions, and the server rejects actions that aren't within some tolerance (strict ordering, however, is accomplished via a hash chain). Roughtime would allow us to improve this situation from several angles.
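A minimal sketch of that tolerance check (illustrative only, not SpiderOak's actual implementation; the window value is made up):

```python
TOLERANCE_S = 300  # hypothetical acceptance window, in seconds

def accept_action(client_ts: float, server_now: float,
                  tolerance_s: float = TOLERANCE_S) -> bool:
    """Accept a signed action only if its embedded client timestamp is
    within the tolerance window of the server's own clock."""
    return abs(server_now - client_ts) <= tolerance_s
```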

One of the logistical challenges is client network traffic footprint. Enterprises that deploy a collaboration solution often want to have a strict definition of which upstream servers it can be expected to talk to. Would it be possible for a single Roughtime service to incorporate verifiable information from a larger Roughtime community, such that end user clients don't have to communicate with additional addresses?


At the bottom of the page, they suggest getting in touch either via the mailing list (https://groups.google.com/a/chromium.org/forum/#!forum/proto...) or directly.


> There have been efforts to augment NTP with authentication, but they still assume a world where each client trusts one or more time servers absolutely.

OpenNTPD has "constraints" where it makes HTTP requests (using TLS) to webservers and checks that the time provided by the NTP server is within a certain threshold of the time returned in the HTTP Date header.

Much simpler and doesn't require dedicated servers.
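The core of the constraint check could be sketched like this (a Python illustration of the idea, not OpenNTPD's actual C code; the margin is an assumed value):

```python
from email.utils import parsedate_to_datetime

CONSTRAINT_MARGIN_S = 15  # hypothetical tolerance, in seconds

def within_constraint(ntp_epoch_s: float, date_header: str,
                      margin_s: float = CONSTRAINT_MARGIN_S) -> bool:
    """Check an NTP-provided time against the Date header of an HTTPS
    response: the TLS connection authenticates the header's origin."""
    https_epoch_s = parsedate_to_datetime(date_header).timestamp()
    return abs(ntp_epoch_s - https_epoch_s) <= margin_s
```

In practice the Date header would come from an HTTPS request to a configured constraint URL; the comparison above is the part that gates acceptance of the NTP time.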


tlsdate is a much cleaner implementation of this idea, taking the time from the handshake. TLS 1.3 as it stands makes sending the server time optional.

The 'Date' header is tricky because it is a timestamp of when the document was generated, not when it was served. Caching proxies have no obligation to (and in most cases shouldn't) update the value.


Some TLS implementations return a randomised date in the handshake anyway, which is why the constraints feature works the way it does. TLS 1.3 killing it is just gravy.

If you're worried about a caching proxy you can set the constraint to a URL that returns something dynamic. Although it would be interesting to see what % of the top TLS-enabled webservers don't return something recent for HEAD / HTTP/1.1


This is great. I work in distributed systems and am always dealing with wall clocks. Most people ignore or forget about clock sync, or reject using wall clocks entirely because of it. But you can get practically reasonable results by thinking these things through and doing the sync. I'm glad to see other people doing more work on improving and validating NTP servers.


Seems like this might be useful for proving that something happened at a certain time, no? Like using the hash of something as a nonce to a Roughtime service?
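A sketch of that idea (the 64-byte nonce size matches Roughtime; everything else is illustrative):

```python
import hashlib

def timestamp_request_nonce(document: bytes) -> bytes:
    """Derive a Roughtime nonce from a document. Roughtime nonces are
    64 bytes, so a SHA-512 digest of the document fits directly."""
    return hashlib.sha512(document).digest()
```

The server never sees the document, only its hash; anyone holding the document later can recompute the nonce and check that it appears in the server's signed reply, showing the document existed no later than the signed timestamp.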


Reads a bit like PGP's trust model applied to time synchronization.



