* The "nacltai" protocol driver uses C VLAs to store attacker-controlled numbers of bytes; a VLA is morally the same as an "alloca" call. I have no clue if this is exploitable but reading it gives me a painful burning itching sensation.
* I sort of wish the authors didn't parse strings to recover crypto parameters from the packets, but instead just used a straightforward binary encoding that would require less use of C's terrible string functions.
I'm still a lot more comfortable with this codebase than with OpenVPN. :)
Some compilers already support the spec, even if it isn't going to be in C++14.
VLAs here too - but there are many situations where it's necessary to allocate dynamically based on message contents ... just not at this level. When it is necessary, alloca is no riskier than most other allocation methods - at least in the worst case
(I have no idea if it's an issue here and sort of doubt it, and would not call this "rank amateur bs").
Or maybe because OpenVPN is less auditable than something like SigmaVPN?
Maybe the subtext to this submission is "OpenSSL totally sucks so bad you need to evacuate NOW NOW NOW!!!!" If so, I'd point out that now is the time to upgrade OpenSSL, ditch all of your old keys, and keep an eye on the future.
Today is not the day to forklift in a whole new chunk of infrastructure.
Instead, set up a lab, play with it, get a feeling for it.
What I see: when a packet is decrypted, the time-based nonce is unpacked here, and this function seems to be called from here, but I can't find where this nonce is verified not to be too old. What am I missing?
From a quick skim of the code, it does appear that they might be doing this. They are calling the right functions for it (crypto_box_curve25519xsalsa20poly1305_afternm; see "C precomputation interface" in http://nacl.cr.yp.to/box.html).
If we assume every packet contains the client's (random, generated at process start) public key then the server could (in theory) perform key-agreement for every packet, decrypt it and handle it. In practice, it would cache the result so that it only does one key-agreement operation per client.
If the server crashes (did you have the server in mind when you said "other side"?) then it loses its cache, but the first packet that the new instance of the server sees will have the client's public key in it, so it just refills its cache immediately.
It's still OpenVPN but might be useful to those that care about this story too.
/full disclosure: I wrote this.
Also, please use something like Twitter's Bootstrap and buy a nice theme; your site and logo currently look very untrustworthy. I couldn't get my boss to sign off on buying something from a website that looks like that.
I'm not sure what you mean by diagrams though. This doesn't work that way. It creates virtual networks that look completely flat, as if you and every other peer were plugged into the same Ethernet switch.
One of my difficulties has been getting this across... you don't have to think about topology. It's easier than that. It's a virtual switch. Install, join network, done. Everything is automatic.
You can peek at the new design here (but don't try to use it): https://test.zerotier.com/
But a network in which every peer is connected to the same Ethernet switch isn't flat, it's a star. And a star network doesn't scale as well as you claim.
Behind the scenes, to make it scalable, perhaps you build p2p connections; this would make the diagram look more like a complete graph, which has its own scaling problems.
To solve that, perhaps you have stateless or on-demand connections, so the actual number of connections is lower than the worst case. Or perhaps you promote peers to super-peers and build a star-of-stars network.
A couple of simple diagrams answer these questions and quickly let me decide whether your solution is suitable.
I'm not sure why that wouldn't scale. Even if you had millions of users on a network you'd only be connecting to those with whom you're communicating.
It is also, as you mention, a star-of-stars network. The supernodes are at very high-bandwidth sites, and their number and size can be increased in short order. They only relay data if P2P/NAT-t fails, which only happens for about 1-2% of users. Otherwise they just shepherd NAT-t. They're geographically distributed for high performance: Singapore, Tokyo, San Francisco, New York, Amsterdam, and (soon) Sydney. If a supernode fails it takes 10-30 seconds to fail over.
I have plans to further decentralize and add automatic promotion of nodes at some point in the future, but that's a hard problem that requires more study.
Edit: this isn't just another VPN. This is the result of over four years of work, including a huge amount of research into networks and cryptography. It's a completely new system.
Who said it needed to scale?
How many Facebook friends does the average user have?
Do these networks need to be larger than that?
If users want larger networks they can bridge their VLANs.
"I don't like your logo!"
I'm using a text-only browser so I really don't care what your logo looks like.
Keep up the good work and ignore the critics.
Suggestion: Make the crypto fungible, so if a user wants to use a different library, e.g., NaCl, they can.
Actually, thank you to the previous poster. I didn't mind the criticism. His point -- "your existing site doesn't look good enough to convince my boss" -- is very valid. The new site looks a lot better and it's not done yet.
It does in fact scale pretty darn well, mostly due to the fact that it's connectionless, stateless, and opportunistic. If you're on a network with ten million people but are only talking to ten, you'll only be sending packets to/from ten.
The supernodes have to know about all ten million, but last I checked that wasn't very much memory... maybe a few gigs tops? So that's what, $20-$30/month per node? Or I could add the ability to put a real database under it and use SSD cloud nodes and handle billions of users with sub-10-ms lookup latency.
Of course if I get that many users that'll all be in the good problem to have category and I'll have plenty of money to scale out and if necessary improve the protocol/architecture. There are many directions I could explore: M:N supernodes with load balancing, various other sharding techniques, moving to beefier cloud providers, further decentralization in the protocol, all of the above, etc. I could set up big labs, run simulations, do all sorts of cool stuff. I've done enough so far to convince me that the problem of monstrously scaling this thing is very solvable. Just have to do the work.
I'm not making the crypto fungible. The protocol does have flags that could be used to indicate new algorithms if upgrading the crypto becomes necessary, but I have been an absolute simplicity nazi with this thing so far and will continue to be.
Any thoughts of how it compares with i2p? (http://geti2p.net/en/)
Compared to I2P and Tor: it's neither of those. This is about network virtualization and making it easy to set up ad-hoc networks across physical boundaries. It's not a privacy tool per se, though it is end-to-end encrypted so the content of your data is hidden. My goal isn't to duplicate the work of Tor or I2P-- if you want strong anonymity, use those. (You could use ZeroTier One through Tor, though it would be slow.)
There is an incomplete beginning to a technical FAQ here that answers some of these questions in more detail:
More importantly, QuickTun might have some usability issues. Port numbers are hard-coded, and most configuration options seem to be set as environment variables. This should be expected though, as a result of trying to keep the code as small as possible.
I'd recommend it, but only to someone who's willing to read the source to figure out what's going on. If someone wanted to write their own VPN, it would be an excellent reference. It's definitely not for everyone.
SigmaVPN looks awesome, but lacks Windows support. I'd give ZeroTier One a shot, but that lacks Android support. Any other solutions for a "VPN of Things", if you will?
PPTP is completely broken (MS-CHAPv2 especially), and OpenVPN is hard to set up and maintain.
I've been using ssh as an impromptu VPN-like thing but I'd really, really like an actual VPN solution.
That's not true. OpenVPN is the easiest, most straightforward solution when it comes to setup and configuration (routing, firewalling, client auth, etc.). Try to set up Openswan and you'll see what "hard to set up" really means. I don't know about newer software like SigmaVPN.
Or maybe I am just too used to it.
There's a usage howto in the comments, and it should be short enough to fully grasp what it does. No third-party requirements, just Ruby core + OpenSSL.
It generates client and server configurations and creates and manages the CA and CRL.
The VPN uses tun mode over UDP.
Required changes on the server are written down in comments at the beginning of the server configuration.
If there is sufficient interest, I can make it a real repo so it can get issues and pull requests.
After this thread I'll be looking at fastd and zerotierone, though.
Do you have something else to create L2 overlays that is more secure?
Someone has to run an OpenVPN server. Everyone on the network has to trust that server.
And connections between network participants are not peer to peer.
With OpenVPN and most other VPNs, if I'm not mistaken, each person's traffic passes through a central point: some VPN server/appliance.
This is a major difference and has its own set of security implications.
I was looking them up the other day; they seem very nice.