Hacker News
Vita: simple and fast VPN gateway (github.com)
161 points by yarapavan 6 months ago | 59 comments

I have used tinc [0] for this scenario successfully for 15 years. It not only supports full mesh but automatic full mesh. It will use UDP for the data stream when possible, supports RSA and Ed25519, and supports transporting either IP or Ethernet frames.

I used the mode where it transports Ethernet frames to merge the VLANs of two datacenters across the WAN (with some additional ebtables rules to block certain kinds of frames), which added the ability to migrate systems from one datacenter to the other during a partial outage.
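For anyone curious what that looks like in practice, here is a rough sketch. The netname "dcnet", the host names, and the bridge name are all made up for illustration, and the ebtables rule is only an example of the kind of frame filtering the parent mentions; see tinc.conf(5) for the real option names.

```shell
# /etc/tinc/dcnet/tinc.conf on the "dc1" side (hypothetical names):
#   Name = dc1
#   ConnectTo = dc2
#   Mode = switch        # carry Ethernet frames instead of routing IP packets

# Bridge the tinc interface (named after the netname here, via the
# Interface option) into the local VLAN trunk bridge:
ip link set dev dcnet master br0

# Example ebtables rule in the spirit of the parent comment: keep local
# DHCP traffic from leaking out across the inter-datacenter link.
ebtables -A FORWARD -o dcnet -p IPv4 --ip-proto udp --ip-dport 67 -j DROP
```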

Is tinc okay security-wise? IIRC, last time I looked the older crypto was really iffy-looking, and the newer crypto (Ed25519 rocks) was only in the dev/unstable version.

There were some issues in 1.0 that are fixed in 1.1, whose protocol is not yet finalized, although the beta releases are stable. The lack of a final release is annoying: because the protocol isn't finalized, there is no guarantee that upgrades will interoperate with older clients.

They are documented towards the bottom of http://tinc-vpn.org/security/

Why 15 years to finish a protocol?


Good things take time.

All good, I've been using it for 15 years too. Unsung hero.

Bummer it's single-threaded...

I've seen some people experiment with a separate async worker thread and achieve more Mbit/s.


Since it's ESP-based, what are the improvements here over a conventional IPsec VPN?

It would also help to know how different it is from OpenVPN, for example, and other VPN options including WireGuard.

I mean it's nice to see another VPN, but it'd be useful to know right off the bat how it compares to existing options.

This is polling-based, so it's probably faster. But if you really need the performance, I would lean towards VPP, since it seems to be more tested.

From the readme:

“~2.5 Mpps (or ~5 Gbps of IMIX traffic) per core on a modern CPU”

Note the CPU is unspecified above.

A Xeon D-1541 (2.1 GHz) will do 2.44 Mpps (simple IMIX), and an i7-6950X will do 3.27 Mpps, running AES-GCM-128 and VPP.

A kernel-based Linux implementation will do a bit under 500 Kpps running the same algorithm.
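As a sanity check on how Mpps and Gbps relate, here is a small conversion sketch. The IMIX weights below are the classic "simple IMIX" mix (7x64 B, 4x594 B, 1x1518 B); the README does not state which average packet size its "~2.5 Mpps (~5 Gbps)" figures assume, so the last line only shows what average those numbers would imply.

```python
def imix_avg_bytes(mix=((64, 7), (594, 4), (1518, 1))):
    """Weighted average packet size in bytes for a traffic mix of
    (packet_size, count) pairs."""
    total = sum(size * count for size, count in mix)
    packets = sum(count for _, count in mix)
    return total / packets

def gbps(pps, avg_bytes):
    """Convert packets per second to gigabits per second."""
    return pps * avg_bytes * 8 / 1e9

avg = imix_avg_bytes()      # ~361.8 bytes for the classic simple IMIX
print(gbps(2.5e6, avg))     # ~7.2 Gbps at that average
print(gbps(2.5e6, 250))     # exactly 5 Gbps implies a ~250-byte average
```

So the README's two numbers are consistent only for an average packet size around 250 bytes, somewhat below the classic simple-IMIX average.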

The keying algorithm seems interesting, and is vaguely Wireguard-like (would be more if it used a public-private keypair).

I was curious about that too. I know there were some WireGuard/OpenVPN benchmarks a while back. It'd be interesting to see these technologies compared for speed.

The published WireGuard results are questionable: the reported 1011 Mbps on a documented 1 Gbps NIC is impossible even before considering the protocol/framing overhead, which is within 2 bytes of IPsec ESP using AES-GCM-128 with the standard nonce.
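For reference, a rough tally of the fixed per-packet framing overheads being compared. The WireGuard numbers come from its transport-data message layout; the ESP numbers assume tunnel mode with AES-GCM-128, ignore block padding, and the "within 2 bytes" figure appears to assume ESP with UDP (NAT-T) encapsulation.

```python
# Fixed header overhead in bytes, assuming an IPv4 outer header.
OUTER_IPV4 = 20
UDP = 8

# WireGuard transport data message: type/reserved (4), receiver index (4),
# nonce counter (8), Poly1305 tag (16), always carried over UDP.
wireguard = OUTER_IPV4 + UDP + 4 + 4 + 8 + 16    # 60 bytes

# ESP tunnel mode with AES-GCM-128: SPI (4), sequence number (4),
# explicit IV (8), pad-length + next-header (2), 16-byte ICV.
esp = OUTER_IPV4 + 4 + 4 + 8 + 2 + 16            # 54 bytes
esp_natt = esp + UDP                             # 62 bytes with UDP encapsulation

print(wireguard, esp, esp_natt)                  # 60 54 62
```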

OpenVPN is slow.

ZeroTier has been filling this gap for us for a few years now.

Yeah, ZeroTier is awesome! Works great on every platform, simple authentication scheme, and it's always connected. I use it for access to remote servers (have the zt subnet set to bypass firewall for accessing various debug servers) for a nonprofit project and for all kinds of personal uses as well. The free hosted version has up to 100 devices on a network and you can self host as well.

Fantastic product that still surprises me with its ease of use.

Indeed... somehow it's not getting any love.

Is it possible to use cjdns for that kind of thing? I'm genuinely asking as I don't understand it.

Once I tried to set up a VPN using some odd Windows software (OpenVPN, maybe?) and the results were disastrous. I didn't understand any of the jargon the program used, and I don't think I understood its main use case (it certainly wasn't what I was trying to do, which was creating a local subnet between two computers or two LANs).

Then some months later I tried ZeroTier and was able to understand everything, it seemed a perfect fit.

But still, people call ZeroTier a VPN. So why is it so different? And why does it use such different jargon?

I think ZeroTier is meant to be easy, whereas OpenVPN is versatile and can be configured in many ways.

One use case of a VPN is to connect two local networks together. Another is to have your traffic appear to be coming from a different geographical area.

In some cases you wouldn't want the local networks to be able to speak, like commercial VPN services, for security reasons.

Shouldn't these two cases/features have different names? They're completely different. Calling both "VPN" is wrong.

I ran cjdns for 5 years but recently switched off it. For one, it seemed to have too much traffic over my limited outbound link when things should have been more idle. Also, the commit logs were a little unnerving to me for code I'm trusting with my network security:

All kinds of "oops" edits: https://github.com/cjdelisle/cjdns/commit/3abe50b8e744f72696... https://github.com/cjdelisle/cjdns/commit/feabb1970fbaecf65c... https://github.com/cjdelisle/cjdns/commit/c51d89431f1fa42955... https://github.com/cjdelisle/cjdns/commit/1b0c999bd2e5988c3f... https://github.com/cjdelisle/cjdns/commit/355d7d77cc82c52bf1... (function args passed in the wrong order)

Some are about extra traffic: https://github.com/cjdelisle/cjdns/commit/8bcfbf227a87020931... https://github.com/cjdelisle/cjdns/commit/0ee9cb7f45b466232b...

Also, it's IPv6-only, which took some fussing. Performance was fine, even on an rpi. Reestablishing links was slow and sometimes required restarting the daemon. Daemons would occasionally get wedged. Log output is unconventional and uses a special reader tool (a la adb for Android). Occasionally routes wouldn't be chosen right (two home computers would route via an external cloud box), which I'd fix by restarting the right daemons.

I laboriously switched to openvpn (two networks) and haven't worked out all the routing and hotswitching for my roaming phone+laptop yet. Now I'm considering vita, tinc, zerotier, or wireguard. Probably I'll try out zerotier to see if it works out of the box, then try wireguard if I'm going to have to configure it a lot, since it seems like WG is the most unix-toolbox-do-one-thing out of all of them.

Check out Yggdrasil - https://yggdrasil-network.github.io/ - we've tried very hard to solve the problems that cjdns has, seem to be much more reliable in real world conditions and we send/receive much less idle traffic to do it. We also have Wireguard-like crypto-key routing for both IPv4 and IPv6. (I am one of the developers.)

Wow, that's nice. I'm going to start using that now.

cjdns always seems so intimidating when I look at it, Yggdrasil looks super friendly and easy.

Thank you very much.

Actually, I don't know what people are trying to achieve when they run these things.

Did you have a problem cjdns was solving?

Yes, some machines on different networks in my house plus a roaming laptop all got to talk to each other without hassle. They got random-looking ipv6 addresses, so I don't know if 'subnet' is the right word here.

Before, I had to act differently on the laptop based on where I was, and the raspi nodes on my guest wifi couldn't reach the influxdb server that was not exposed to the wifi net.

Why not just wireguard?

IPsec is pretty much universal in networking hardware and cloud provider networks nowadays. There's a better chance it'll work for you if you can't or don't want to control both ends of the connection.


* Cisco: https://www.cisco.com/c/en/us/td/docs/net_mgmt/vpn_solutions...

* Juniper: https://www.juniper.net/documentation/en_US/junos/topics/top...


* AWS: https://docs.aws.amazon.com/vpc/latest/userguide/vpn-connect...

* Azure: https://docs.microsoft.com/en-us/azure/vpn-gateway/vpn-gatew...

* gcloud: https://cloud.google.com/vpn/docs/concepts/overview

Also, some software environments have better support for IPsec than Wireguard; a glance at the Algo docs (https://github.com/trailofbits/algo) suggests that Windows and OpenWRT are both in this category today.

FWIW, I work for Google, I haven't configured IPsec in forever, and I'll probably reach for Algo first the next time I think I need IPsec; I don't think I have enough endpoints in my home network to need hardware offloading :)

Last time I configured IPsec it was so horrible, really, really horrible, that I will never touch it again with a ten-foot pole. The software was hard to configure, and it was just as hard to find working (current) configuration examples, let alone secure ones. It never felt right after setting it up, and I did not want to spend any more time on it; WireGuard has been a blessing in that respect.

Right, but this is a solution in which you're meant to control both ends (IPSEC runs from Vita to Vita). That's an ideal environment for WireGuard.

Having alternatives is not a bad thing.

Except in this case, it's a VPN library built in Lua on top of a Lua network library, with nearly 6 times as many lines of code to hide bugs in:

  $ cloc WireGuard/
       214 text files.
       199 unique files.
        73 files ignored.

  http://cloc.sourceforge.net v 1.58  T=0.5 s (358.0 files/s, 127864.0 lines/s)
  -----------------------------------------------------------------------------
  Language                    files        blank      comment         code
  -----------------------------------------------------------------------------
  C                              71         2632         1397        33397
  Perl                            7         1598         1084        10797
  Assembly                        5          215          218         3608
  C/C++ Header                   45          569          544         3159
  Bourne Again Shell              6          183           52         1570
  Bourne Shell                   12           81           97          652
  make                            7           94           11          576
  ASP.Net                        14            1            0          254
  Go                              1           11            5          171
  Haskell                         3           50            6          170
  Javascript                      1           15            4          165
  HTML                            1           20            0          158
  Rust                            1           13            1          123
  C++                             1            3            0           87
  Python                          1           22            8           64
  YAML                            2            5            0           37
  IDL                             1            0            0            5
  -----------------------------------------------------------------------------
  SUM:                          179         5512         3427        54993
  -----------------------------------------------------------------------------

  $ cloc vita
      2309 text files.
      2150 unique files.
       777 files ignored.

  http://cloc.sourceforge.net v 1.58  T=4.0 s (387.5 files/s, 87054.8 lines/s)
  -----------------------------------------------------------------------------
  Language                    files        blank      comment         code
  -----------------------------------------------------------------------------
  Lua                           730        13869        10822       126870
  C                             265         6868         5197        72704
  Bourne Shell                  107         5310         5226        31016
  C/C++ Header                  218         3439         5976        22485
  m4                             16         1167          110        10958
  Assembly                       10          304           14         6802
  HTML                           14          334            8         5709
  Pascal                         25          961           54         3519
  make                           23          367          198         1835
  Expect                         56            4            0         1042
  Python                         13          219          174          919
  Bourne Again Shell             43          131           72          761
  XML                             4            0            0          756
  CSS                             3           21           25          567
  Teamcenter def                  1            0            0          558
  SQL                             6           36           54          176
  DOS Batch                       5           16            4          138
  C++                             1           14            0          115
  ASP.Net                         2           25            0          110
  YAML                            6           22            2           97
  Perl                            1            5            3           19
  Visual Basic                    1            1            0           11
  -----------------------------------------------------------------------------
  SUM:                         1550        33113        27939       287167
  -----------------------------------------------------------------------------

Fair enough. My answer was mostly that it's far from the first time I see this "Why bother developing X, Y exists" argument, and it's rarely a good one. Alternatives drive innovation. Relying too much on a single solution can be dangerous.

Yes, but security requires near perfection to maintain its guarantees. It is very difficult to have both perfection and diversity, as resources get spread thin.

When you see that "argument" made in HN comments, I do not think it is for the reasons others in this thread are suggesting.

These commenters are potential or existing users of X, not the authors of X. The people writing X would never try to argue that people should not write Y.

As for the psychology that drives these sort of statements from HN commenters, that is left as an exercise for the reader. One theory is that some people do not like to make choices. They would rather be told what to do. Alternatives may mean choices must be made.

In this case, Snabb discloses that Vita was funded by NLnet. NLnet once wrote an alternative DNS library and various programs (nsd, drill, unbound, etc.) that I would bet many former BIND users are now happily using. OpenBSD made the switch years ago and NetBSD is making the transition as well.

I think these "Why not use X" comments are a sign of users who dislike decision-making and want to be told what to do. I would bet some commenter was asking "Why not use IPsec?" when Wireguard was being introduced.

Wireguard is of course very Linux-specific. As of today, I still cannot use it reliably on BSD. What that means is that in order to experiment with Wireguard, 3xblah's router has to run Linux not BSD, 3xblah's preferred router OS. I am not anti-Linux, but that choice, being made for me, is significant.

I interpret the configuration choices that GNU/Linux distributions make as being told what to do. With BSD, NetBSD for example, programs that interact with the network are generally off by default. It is up to the user to decide which ones to start. IME, this is different from the Linux distros where programs are pre-configured to start without any input from the user.

Alternatives that work on a variety of OS are helpful for some users.

I think the "Why bother..." questions come from users who cannot be bothered with decision-making. More alternatives may mean more decision-making.

The reaction of these users might be "Why bother developing..." in which case they are upset that now with the arrival of new software they feel there is a choice to be made, or it could be "Why should I switch to..." which signifies they want someone else to make the decision for them.

I can't really agree with you. I understand the premise, and I certainly agree that some users genuinely don't like choice. But I think a much more reasonable explanation of that attitude fits the following two points.

1. "Why not use [X]" is a perfectly valid question for gaining insight into the merits of new tech. If I have problems that I've solved with [X] and you're now implying that I should use [Y], I want to know what makes [Y] different or better than [X] in your opinion. I want to know what problems you're solving, so I can compare them to mine. I want to know what benefits you're getting that I may not have considered. That's a totally valid approach to understanding a new product.

2. Assuming you aren't solving new problems with [Y] and you're really just directly competing with [X] and I already use [X]... I may be inclined to think it's a waste of effort because I won't get any benefit from your work (I'm already using [X]!). Worse, you could have been making [X] better instead of just competing with [Y]!

Of the two, I think both play a role in the mindset of people making that argument. I'm not very sympathetic to the second, but the first is fine.

"... and you're now implying that I should use [Y]..."

You perceive someone is "implying that [you] should use" [Y].

It is as if you believe the mere publication of software is somehow didactic.

As if the author by the mere act of writing and sharing a program is telling you what to do.

If the question in 1. were phrased as something like "How does Y compare with X" then that is not what I am addressing.

I am addressing this idea that someone (who?) is "implying" that you should use Y. What if that was not actually the case and it was just your interpretation?

What if the authors are not telling you what software to use.

What if it was simply a case of a person or group writing some software, e.g., maybe to scratch their own itch, and then publishing it in case others may want to use it.

Keep in mind I am now referring to the general case, e.g., each program source published on Github, not Vita.

It would be interesting if users, without any financial contribution, could tell software authors what programs to write or refrain from writing, but that is not what I see when I look at the large amount of software published on the internet.

If the authors of Vita were getting paid to write it, then I doubt they would view it as "wasted effort".

Dude, you are so incredibly far off in strawman land right now I can't even...

You got stuck on the word "imply" and just couldn't come back.

Even assuming they're just sharing their work with the world entirely out of generosity (and they're not; it's sitting in a company-owned repo, and the tagline is literally "Software Bureau. Hire us to work on code.")

Then yes, I'd still say espousing the merits of your product publicly is a pretty strong implication that I should use it. That's also why I expect you to have shared it.

Have you tried FreeBSD? I run wireguard on FreeBSD. Use it every day, rock solid.

I was not aware it runs well on FreeBSD. Have been using it on OpenWRT. Thanks for the tip!

Lots of vying alternatives result in development being spread out and no single solution getting all the features and clean-up. Plus, "more" does not equal "better" when it comes to security development; not that many people can write secure apps well.

> nearly 6 times as many lines of code

SQLite has 711 times more lines of tests than it does of code.

Does that make it a worse software product?

No, but if it has more lines of code (not tests) than a competing product, then it is more complex, and has a higher probability of defects.

In security, defects have serious consequences, so it is in everyone's best interest to have the lowest possible complexity, much more so than for other types of software. The only "stricter" category would be software related to the operation of equipment whose failure could cause direct physical harm.

Based on empirical observation, the number of bugs goes up with lines of code on average. There's also some point where something is too big for the human mind to understand in its entirety, whether developer's or reviewer's. To say something is secure, you have to understand everything it will and won't do. Those analyses grow exponentially with size due to input/output ranges and combinations of paths (i.e., combinatorial explosion). So, smaller is better. Ideally, small like Wireguard, so that the most thorough analysis is possible.

Let's look at your counterexample. It's smaller than most databases, as I advise. Following the other rule, it was untrustworthy by default, with developers adding piles of tests to uncover bugs, increase confidence, and make changes with less breakage. Nonetheless, CompSci papers I read on static analysis and fuzzing often test SQLite and find bugs. Even such a well-tested application still had plenty of bugs over time due to its complexity, most of which might be just intrinsic to the kind of features they're developing.

Back to VPNs: you want to know it will maintain security policy (esp. correct info flows) under all inputs in all states, normal and failure. And if no progress can be made, you want it to fail safe. So, making it six times larger without a need to is definitely worse than not doing so. The larger product will, based on empirical data like SQLite's, have more bugs, with more code injections or data leaks following from that. I advise a minimal, careful, rigorously evaluated implementation of a formally verified protocol. Wireguard is closest to that right now.

Reminds me of tinc, which is under appreciated.

Wireguard just makes more sense to me.

For now tinc makes more sense to me. I plan on fully switching everything to WireGuard once it actually enters the mainline kernel.

Tinc could still make sense as a control plane for the WireGuard VPN though. There have been talks about WireGuard as a backend for tinc [0], hope that sees some progress.

[0] http://www.tinc-vpn.org/pipermail/tinc/2017-February/004755....

Sadly, using wireguard would come with some notable drawbacks since the protocol isn't as flexible as tinc's.

First, while the control plane would be TCP (since it's low-traffic), the data plane would be UDP-only, leading to situations where the data plane does not work even though the control plane does. tinc currently starts the data plane over TCP, migrates it to UDP if it finds that UDP works, and later migrates it back if it discovers it has stopped working.

Second, wireguard only supports IP packets, while tinc supports Ethernet frames or IP packets depending on the mode. The Ethernet mode is useful for, for example, sending a bunch of IEEE 802.1Q VLAN-tagged traffic over the VPN interface. That use case could be migrated to VXLANs, but it would break the existing tinc contract with its users.

Third, RSA keys are not supported, and those are the primary mechanism used in tinc 1.0. It would be a breaking upgrade for all users, or would require a long migration period in which RSA keys were replaced with wireguard-compatible ones, with both supported but wireguard not yet used.
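The TCP-to-UDP migration described in the first point can be sketched as a small state machine; the class and the failure threshold below are hypothetical illustrations, not tinc's actual code.

```python
class DataPlane:
    """Start on TCP (which works whenever the control plane works),
    migrate to UDP once probes prove it viable, and migrate back after
    repeated probe failures."""

    def __init__(self, max_failures=3):
        self.transport = "tcp"
        self.failures = 0
        self.max_failures = max_failures

    def on_udp_probe(self, succeeded):
        """Called after each periodic UDP reachability probe."""
        if succeeded:
            self.failures = 0
            self.transport = "udp"       # UDP proven to work: use it
        else:
            self.failures += 1
            if self.failures >= self.max_failures:
                self.transport = "tcp"   # UDP stopped working: fall back

dp = DataPlane()
dp.on_udp_probe(True)      # probe succeeds -> data plane moves to UDP
for _ in range(3):
    dp.on_udp_probe(False) # repeated failures -> back to TCP
print(dp.transport)        # tcp
```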

Is this similar to what Hamachi used to do?

I remember when Hamachi came out. It felt magical. It just worked™.

Have you tried ZeroTier? https://www.zerotier.com/

I have but had a lot of trouble getting it working. (Although I only spent about 30 mins trying)

Used to?

So, can I easily host the server?

ZeroTier is great, but hosting the server is very complicated. (I'm not going to stop using it now, since it's so easy and already set up, but it's good to know of alternatives.)

It's based on Snabb, a user-space networking platform, so it'll need direct hardware access and supports only a few specific NICs. But in exchange for that, it should be really fast.

> Each route uses a pre-shared super key that is installed on both ends of the route.

Woof, not asymmetric? Is that normal for IPSec? I realize it's never sent over wire, but still makes me nervous.

Static symmetric keys are one of the ways to authenticate an IPsec tunnel. It also supports certificates for authentication, or fully unauthenticated (but still encrypted) connections.
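On Linux, a manually keyed ESP security association of this kind can be installed with iproute2's ip-xfrm; the addresses, SPI, and key below are placeholders, and the mirror-image state has to be installed on the peer.

```shell
# Statically keyed ESP SA, tunnel mode, AES-GCM-128 (rfc4106).
# The key material is 20 bytes: 16 key bytes + 4 salt bytes; the trailing
# 128 is the ICV length in bits. All values here are fake examples.
ip xfrm state add src 192.0.2.1 dst 198.51.100.1 \
    proto esp spi 0x100 mode tunnel \
    aead 'rfc4106(gcm(aes))' 0x0102030405060708090a0b0c0d0e0f1011121314 128
```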

Their key exchange implementation looks scary, like a last-minute scramble to not be shit. No confidence here.

Been using WireGuard. How's Vita different from it?
