Trickles - Stateless High Performance Networking (cornell.edu)
38 points by andrewflnr on Feb 3, 2013 | 15 comments



I just noticed that this work got linked from HN today. This is my PhD student Alan Shieh's work, jointly with my colleague Andrew Myers. I'm surprised to see it here, as we did the work back in '05 or so. But it was a lot of fun and I think it still represents the extreme point in pushing state out of the server to the client side.

So, I'll try to answer some of the questions here and provide some of the insights and background that do not appear in academic papers.

The intuition behind Trickles is that the packets act like continuations, the same continuations you may be familiar with from programming languages like Scheme. Instead of the server holding state, it pushes that state to the client. To get service, the client presents the continuation to any server host, which can reinstate the state and provide the service.
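
To make that concrete, here's a toy sketch of the idea (Python pseudocode, not our actual kernel implementation; the field names and key handling are invented): the server serializes the state it would normally keep in memory, MACs it, and ships it to the client; any server holding the same key can later verify the blob and pick up where things left off.

    import hmac, hashlib, json

    SERVER_KEY = b'shared only among the server replicas'   # hypothetical key

    def make_continuation(conn_state):
        # Serialize whatever the server would otherwise hold in memory
        # (congestion window, sequence numbers, an inode hint, ...).
        blob = json.dumps(conn_state, sort_keys=True).encode()
        tag = hmac.new(SERVER_KEY, blob, hashlib.sha256).digest()
        return blob + tag                      # rides inside the packet to the client

    def reinstate(continuation):
        blob, tag = continuation[:-32], continuation[-32:]
        expected = hmac.new(SERVER_KEY, blob, hashlib.sha256).digest()
        if not hmac.compare_digest(tag, expected):
            raise ValueError('tampered continuation')
        return json.loads(blob)                # state is back, on any replica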

Since the servers are stateless, you can direct a client request to any host. This kind of handoff is, let's just say, not easy to do when you have stateful TCP connections to maintain.

Not every state machine can be converted into a format where it can run over Trickles. But I think the community was surprised to see that something as complicated as TCP could be trickle-ized.


I just stumbled on it in a lambda-the-ultimate thread and thought it looked neat.


Interesting notion, but I'd be leery of the security implications of running code on my server that the client had its hands on. Which is not to say it couldn't be secured properly.


* The state they're pushing off to the client is analogous to the state a server would maintain for e.g. a TLS session and its associated TCP TCB; it's sensitive in the context of transport security but probably not otherwise.

* They're encrypting and MAC'ing the client-held state, the same way IIS/ASP.NET does with ViewState.

It's not "more" secure than just holding the state serverside, but if it's implemented correctly, it's asymptotically close. (FWIW: the code is pretty messy [the relevant stuff is grafted on to the Linux ipv4 stack], I'm finding it challenging to reason about, and it's using memcmp to check the MAC, and the HMAC key is a charstar --- but this code isn't really the point).


Yeah, it's totally possible to secure it properly, just leery until I've heard that those with some chops in the area have looked at it. In principle, signing it does the job, but the devil's always in the details. Sounds like what they're doing may well be sensible.


With their use of a MAC to prevent tampering and encryption to add confidentiality, it seems like they have that covered rather well.

Their replay attack prevention mechanism (http://www.cs.cornell.edu/~ashieh/trickles/security.php) seems a little weaker at first glance. They're preventing replays by enforcing freshness and keeping short-term state about packets that have been seen recently. That seems strong enough, but it also seems like exploiting the limited size of the short-term state store may be a DoS attack vector, or a vector that allows replay attacks during high-traffic periods. I haven't read their proposal in enough detail yet, though.
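
My rough mental model of what they're describing (a toy Python sketch with invented parameters, not their code): stamp each continuation, reject anything outside a small freshness window, and remember what you've already served in a Bloom filter that only has to cover that window.

    import hashlib, time

    WINDOW = 2.0                    # seconds of freshness (made-up value)
    seen = bytearray(1 << 20)       # 1 MB bitmap; a real implementation would rotate it every WINDOW

    def bloom_positions(data, k=4):
        h = hashlib.sha256(data).digest()
        return [int.from_bytes(h[4*i:4*i+4], 'big') % (len(seen) * 8) for i in range(k)]

    def accept(continuation, timestamp):
        if time.time() - timestamp > WINDOW:
            return False                                   # stale: fails freshness
        pos = bloom_positions(continuation)
        if all(seen[p // 8] & (1 << (p % 8)) for p in pos):
            return False                                   # (probably) a replay
        for p in pos:
            seen[p // 8] |= 1 << (p % 8)                   # remember it for the window
        return True

The DoS angle would presumably be saturating the filter so false positives knock out legitimate clients, or racing the rotation to sneak replays through; sizing the filter to the pipe is the obvious knob.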


Regarding the general concept of pushing state out to clients and using cryptography to protect it, I agree, and further think the DoS vector you're considering isn't very attractive.

Regarding this particular implementation: it's not documented well enough to tell, is it? It does use an HMAC. It does use AES. In the few minutes I gave myself to go through the code, I can't even figure out what mode they're running AES in, though, or what order the operations are applied in. How are keys generated? What keys are shared between client and server, and what keys are held serverside?

The latter questions are irrelevant to the academic project of proving the Trickles concept, so I'm not criticizing the team. I'm just saying, there's a lot of detail you'd want to have before saying they've got things covered.


The prototype did not use AES to encrypt continuations. As you said, the details are tricky. For instance, CTR mode may well be insecure. In a stateless environment, it's quite likely that the client could trick the server into encrypting multiple plaintexts with the same counter value.

The prototype did use AES to compute and check nonces -- AES is faster than HMAC for such small input sizes.

Since the server is effectively sending encrypted data and MACs to itself, there is no need for key distribution. Only the server holds keys; the client treats the encrypted data and MACs as opaque data.
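
In rough pseudocode, the nonce path looks something like this (a Python sketch using the third-party pycryptodome package; the real prototype lives in the kernel and the key handling here is simplified):

    import hmac
    from Crypto.Cipher import AES          # pycryptodome (third-party)

    NONCE_CIPHER = AES.new(b'server-only key!', AES.MODE_ECB)   # 16-byte server-side key

    def nonce_for(packet_id):
        # One AES block over a small fixed-size input -- cheaper than HMAC here.
        return NONCE_CIPHER.encrypt(packet_id.to_bytes(16, 'big'))

    def check_nonce(packet_id, nonce):
        return hmac.compare_digest(nonce, nonce_for(packet_id))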


Thanks for your interest in chatting about our work.

The replay protection is intended to prevent clients from tricking the server into sending more than permitted by TCP congestion control. I.e., it protects the network and other clients from DoS.

Protecting TCP congestion control prevents a couple of attacks:

* a selfish client may consume more than its fair share of the bandwidth

* a malicious client may use the server to amplify a DoS attack. E.g., in the absence of replay protection, an attacker limited to only 1 Mb/s of bandwidth could cause the server to consume 10-20 Mb/s (other examples of amplification include smurf attacks; some TCP implementations from the 1990s could also be tricked). With replay protection, the attacker would actually have to have the full 10-20 Mb/s of bandwidth to cause that level of damage.

Note that the HMAC protects against related attacks that are based on spoofing the client IP.

The amount of state stored at the server is proportional to the bandwidth. E.g., a server with a fatter pipe will need larger Bloom filters to handle the higher packet rate. In Trickles, the amount of bandwidth-proportional state is mathematically clean to compute -- typical Bloom filter collision equations. By comparison, TCP will hold some fixed overhead per connection, consisting of the TCB (TCP control block, for congestion control state) and socket/fd structs. Every TCP connection also buffers a variable amount of sent but unacknowledged data (proportional to window size).
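
For the curious, the sizing math is just the standard Bloom filter false-positive estimate; a back-of-the-envelope Python sketch with illustrative numbers (not figures from the paper):

    import math

    def bloom_fp_rate(m_bits, n_items, k_hashes):
        # Standard approximation: p ~ (1 - e^(-k*n/m))^k
        return (1 - math.exp(-k_hashes * n_items / m_bits)) ** k_hashes

    # e.g. ~100k packets/s on a ~1 Gb/s pipe and a 2-second replay window
    # means remembering ~200k recent packets (illustrative numbers only)
    n = 200_000
    m = 8 * (1 << 20)        # a 1 MB filter
    k = 4
    print(bloom_fp_rate(m, n, k))   # about 7e-5; m scales with bandwidth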

The fixed per-connection overhead of TCP alone requires asymptotically more server-side state than Trickles. Since the size of TCP send buffers varies according to window size, which is determined by protocol dynamics, it’s tricky to construct a model of how much server state will be consumed by socket buffers. In our experiments, the socket buffers dominated server-side memory consumption.

- Alan Shieh


So they're preventing replays by maintaining state? How's that stateless, then?

Regardless, in a truly stateless protocol, a replay attack shouldn't be a threat - who cares that someone can regenerate the same response from the same request? If it's really stateless, that replay can't _do_ anything. It turns into a threat when you put something stateful on top of this stateless machinery, which simply means that applications using this would need their own replay prevention. Isn't the right place to put the responsibility for preventing potentially dangerous erroneous state transitions where you administer state in the first place - and not here?


> How's that stateless, then?

The protocol is stateless, not necessarily the application and/or session. The state is serialized to/from the client as needed.

> If it's really stateless, that replay can't _do_ anything.

A stateless pipeline may front a stateful backend. An application commit point may be reached which should invalidate all previously valid requests. So replay prevention is definitely a necessity.


This was my first thought as well.

"... shortcut information for finding a particular object, such as a file system inode."

How do you guarantee that the client isn't handing you something malicious?


Everything that the server pushes to the client is MAC'ed. If a client were to tamper with, say, the inode number, the MAC check would fail. The client continuation can optionally be encrypted and timestamped as well.


In this case too, you would need to maintain a reference to the inode on the server side to keep it from being unlinked. Perhaps that works if you're actually passing the fd to the client to hold on to, but that fd represents an awful lot of state.


To be fair, what webserver guarantees that a url it generates will be valid later on? Perhaps the inode example isn't the best, but in principle, in the framework of a stateless protocol, I think this risk is not problematic. And at the end of the day, it's an impossible problem to solve perfectly - you can't keep something alive serverside just because a client wants it to. The lifespan of an inode is something you'll need to manage externally to the protocol.



