WireGuard on K8s: road-warrior-style VPN server (levine.sh)
165 points by sclevine 73 days ago | 52 comments



I think we all understand the usefulness of a road-warrior-style VPN. But what exactly is k8s adding here?

Anyway, on the topic of scalable UDP services, does anyone have experience load balancing a UDP service? Because UDP is connectionless, there's no obvious way to make UDP packets "sticky". Are there any established practices that could help scale this k8s WireGuard service to 2 or more containers?


I'm just using K8s (specifically: K3s) for configuration management in this case. This post hits the nail on the head: https://news.ycombinator.com/item?id=23006114

That said, NGINX can do UDP load balancing and WireGuard is stateless, so it should be possible to use this with a Service + NGINX ingress controller at scale: https://kubernetes.github.io/ingress-nginx/user-guide/exposi...
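
Roughly, assuming the WireGuard Service is named "wireguard" in the default namespace, exposing it through the controller comes down to a udp-services ConfigMap like this (the controller also needs --udp-services-configmap pointed at it and UDP 51820 opened on its own Service):

    kubectl -n ingress-nginx create configmap udp-services \
      --from-literal=51820="default/wireguard:51820"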

I have not tried it though.


Load balancing UDP isn't too difficult. However, that's not the hard part here; the hard part is ensuring the routing happens correctly.

A client currently must hard-code its IP address, which means that if it can connect to more than one node, it's unclear which path a response from a server should take to get back to that client. Each VPN instance could run NAT, but then users would never be able to talk to each other.

WireGuard makes this significantly harder than, say, IPsec. WG has nothing to indicate when a client connects, and there is no dead peer detection, so you cannot tell when a client disconnects. I.e. scripting something to update a global routing table recording which server has which client is near impossible.
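
The closest thing you can script is polling handshake timestamps and calling a peer "connected" if it handshook recently, which is exactly the kind of fragile hack I mean. A sketch (interface name and threshold assumed):

    # peers whose last handshake was within ~3 minutes -- very rough liveness
    wg show wg0 latest-handshakes \
      | awk -v now="$(date +%s)" 'now - $2 < 180 { print $1 }'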

I use WireGuard daily for personal stuff. However, I can't think of how I would make it work in an active-active situation other than with NAT, which I don't want.


WireGuard proponents would probably tell you to run BGP or some routing protocol over the VPN, maybe GRE too?

I agree with you, WireGuard makes this significantly harder than it needs to be. Other protocols do better in this respect.


Well, if you make it a DaemonSet, you could technically use the container as the network interface of other containers throughout the whole cluster. That said, I'm very happy that his example k8s deployment uses secrets.

I didn't know Ubuntu 20.04 backported WG into its 5.4 kernel. I spent a few hours yesterday fixing a node after breaking ZFS because I upgraded to 5.6 for WG support. I feel rather silly now...

edit: rektide mentioned 'kilo' which actually does exactly what I said (https://github.com/squat/kilo).


That's an interesting idea about using a unified network interface. Do you know how you might then get the right packets to the right containers/processes? Does that even matter with Wireguard?


You can use a different container's network namespace in Docker: '--net=container:<name>' routes the container's traffic through the specified container.

Example vpn container:

  docker run --name foo --cap-add=NET_ADMIN ...
Other container:

  docker run --net=container:foo ...
Now you'd need to specify the respective routing rules [1] in the container.

[1] i.e. https://github.com/bubuntux/nordvpn/blob/master/start_vpn.sh...


This has been done for 2 decades or more by hashing the connection tuple somehow, e.g. hash(src ip | src port) % number-of-replicas, etc.
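
With nftables, for instance, the whole trick is roughly this (a sketch; the port, addresses, and replica count are made up):

    nft add table ip nat
    nft add chain ip nat prerouting '{ type nat hook prerouting priority -100 ; }'
    nft add rule ip nat prerouting udp dport 51820 \
      dnat to jhash ip saddr . udp sport mod 2 map '{ 0 : 10.0.0.1, 1 : 10.0.0.2 }'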

Naturally it breaks if replica count changes.

The other option is conntrack, but then you have another stateful component that doesn't scale.


That makes sense, though I suppose for a road warrior setup the source IP might change every so often right?


WireGuard, inspired by Mosh, handles reconnections especially well. I guess TCP flows tunneled through UDP might be reset depending on which server (behind the load balancer) is handling them?

Cloudflare shared, in some detail, how they load-balance wireguard traffic for roaming-ip and ports: https://news.ycombinator.com/item?id=21070315

Usually I've seen UDP client affinity set on the (source IP, destination IP) tuple to handle port changes, but that doesn't help clients with roaming IPs.


As for what k8s adds here, I don't know, but this adds one interesting fact to my k8s knowledge: it can be useful to run a container that doesn't contain any process doing useful work ;)


Worth mentioning Kilo, which is an add-on or a CNI (container network interface) provider that does WireGuard for Kubernetes.

https://github.com/squat/kilo


Yes! When you think of Wireguard and Kubernetes, you should think of Lucas! He spends a lot of his free time experimenting with the combination of these two technologies. At KubeCon EU Barcelona, he gave a talk about cross-cluster networking using Wireguard: https://www.youtube.com/watch?v=iPz_DAOOCKA


GitHub has several projects that automate setting up a wireguard VPN on various cloud VMs without K8s: https://github.com/topics/wireguard. There's also this tutorial that sets up a VPN along with proper DNS configuration so that DNS doesn't leak: https://www.ckn.io/blog/2017/11/14/wireguard-vpn-typical-set....


You can install only the WireGuard tools, without the kernel module etc., with:

    apt-get install -y --no-install-recommends wireguard-tools
This is all you need with the server flavour of 20.04. For the minimal one, you need a couple more.

So there's no need to use a builder image.


A few people seem to be confused about why K8s is needed when you can just run this on the OS itself. I think they miss the point that this is not a guide to setting up WireGuard using K8s, but to setting up WireGuard if you only have/want a K8s environment.

As the author notes: "you can run a road-warrior-style Wireguard server in K8s without making changes to the node."

Which makes this guide ideal for me. I run a lightweight K8s flavor (K3s, https://k3s.io/) as "configuration management" on my home server and home automation Raspberry Pis because I don't want to mess with OS/userland configuration or the associated tools (Puppet, Ansible, hacked-together scripts, etc.), or to maintain any OS state manually.

For my setup I just flash K3s to disk or SD card and let it join the cluster. Everything else is configured in Kubernetes and stored nicely as configuration files on my laptop, so I have an overview of everything and can modify/rebuild whenever I want.
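
Getting a node into the cluster really is just the K3s quick start (server address and token are placeholders; the token comes from /var/lib/rancher/k3s/server/node-token on the server):

    # first machine (server)
    curl -sfL https://get.k3s.io | sh -
    # every additional machine (agent)
    curl -sfL https://get.k3s.io | K3S_URL=https://<server>:6443 K3S_TOKEN=<token> sh -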


You say you don't want to use Puppet or Ansible, but you are basically using Kubernetes manifests for the exact same reason: configuration management. I know it can be funny, and I totally support it, but I thought it should be pointed out anyway.


The problem I have with traditional configuration management is that in the end, even if it's declarative, you are still modifying an imperative OS/userland, so it will collect state at some point. Undoing changes with those tools is not that trivial: you have to actively reverse them in your configuration, which turns nice CM code into a mess. Want to try out something quick? You'd better not be afraid it messes up your OS/userland, as there is no simple undo.

So since I'm already doing isolation with containers/Docker, it's a small step to a lightweight Kubernetes. What Kubernetes gives me on top of that is that I can treat everything below the application layer as a declarative API.


Not really true anymore with systemd portable services, or package managers.


k8s manifests are declarative though, not imperative config mgmt like the other tools.


Both Puppet and Ansible are declarative.

Why do you think people use them rather than shell scripts ?


That may be the theory, but in reality the only thing Ansible hopefully is, is idempotent between playbook runs - and there are no guarantees there at all. Only in very simple setups can things be fully declarative in their totality.

Don't have much Puppet experience, but I can't count the number of times I've had to add steps to playbooks just to determine values used in one of the following steps. The other option was to write a snowflake Ansible module. The individual steps/plays might be declarative; the playbooks are not.


They look declarative but every Ansible playbook I have ever read or written has involved some imperative code. And even if you only use it in a declarative fashion, it doesn't change the fact that it's very much a step-by-step ordered list of things to install.

The declarative syntax is certainly a step up from shell scripts, but it's not as pure as K8s.


Ansible is only declarative at the action level. At the playbook level it's imperative: you can install and remove the same package within a playbook, and the outcome will depend on the order.

Puppet is fully declarative, but for me it lacks an easy way to undo changes. It would be nice if it could work like Terraform, which keeps a 'state' of all the changes it made in the past, so that when you remove a resource from your config it can 'undo' the change.

I still use Puppet (mostly with Bolt nowadays) for systems that don't fit Kubernetes, but they're becoming fewer and fewer.


This was indeed the motivation for my write-up :)


Thanks, I was meaning to look into this but your post will save me some research work.


Well, there's road-warrior, and then there's road-warrior.

I've been trying out Glorytun; it does multi-path VPN with a wire format relatively similar to WireGuard's. Being mostly indoors due to the microbial boogaloo, I've not been trying it with the most interesting applications.


Would like to use something like this to aggregate a few DSL connections. Any idea how well it works for that use case?


It seems to work well when the connections are of roughly equal speed and stability, so that sounds like a rather ideal use case. :)

I think it'll need work for connections with varied performance.


Nice! Thanks for the reply, may have to give this a go.


I looked into aggregating DSL connections in the past; a few years ago I think you had to get your own router for that as well as a VPS. OVH launched a service, "over the box", that does just that: they provide a router and a VPS where a VPN runs. They claimed you'd get total bandwidth equal to the sum of all connections, and I think the connections aren't required to be similar in bandwidth.


Worth taking a look at http://tailscale.com - their tagline: "Private networks made easy." No affiliation -- I just like their product.


My main annoyance with Tailscale is the reliance on Google. I need to refresh my memory, but I think this makes a VLAN shared with other people impossible.

This is why I'm still using https://zerotier.com -- also no affiliation.


Honestly, that's the least of Tailscale's problems. You must have 1000% confidence in the security of their servers: if the public keys published on their servers have been tampered with, then the entire network is compromised. Also, if their service is down, you will be unable to connect to your network even if it is completely fine and working.


Tailscale is open source; it should be possible to set up your own server.

The hosted Tailscale product is meant for GSuite customers who want a peer-to-peer VPN with corporate SSO. Yes, you have to trust them - SSO login is inherently centralized. My company uses it, and it works great.


I am not really sure you understand how it works. There is no hosted vs. not-hosted version of it: you must connect your "open source" client/agent through the coordination servers they host in order to publish your public key to the other devices in your network, and you cannot skip their service. So Tailscale is effectively as open source as any commercial "open source" VPN client: it's entirely useless when not used with their commercial service, and users have zero control over the software except when it's used with their servers. The "open source" thing is great from a marketing and business perspective, because you get the open source marketing and community goodwill from unsuspecting users and enthusiastic pros without giving away literally anything.


The backend (minus the web UI?) is open source as well.


What's the more-secure alternative?


Exchange your keys ahead of time, preferably offline, and just run wireguard yourself. You may need a service discovery solution depending on your networking situation.
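
The whole manual setup is only a handful of commands. A rough sketch (keys, addresses, and endpoints are illustrative):

    # generate a keypair on each peer and swap public keys out of band
    wg genkey | tee privatekey | wg pubkey > publickey
    # bring up the tunnel interface
    ip link add dev wg0 type wireguard
    ip addr add 10.0.0.1/24 dev wg0
    wg set wg0 listen-port 51820 private-key ./privatekey
    # add the other peer; omit the endpoint on the side behind NAT
    wg set wg0 peer <their-public-key> allowed-ips 10.0.0.2/32 endpoint peer.example.org:51820
    ip link set wg0 up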


"You may need a service discovery solution"

You mean... like tailscale does? (e.g. They have devices registered with a name and you can access them. They're all given static IPs so an internal DNS server could simply resolve their names... kind of like service discovery)


Right, and Tailscale is a fine product for a variety of cases, but there are cases where Tailscale may not be a fit for you, whether due to the GSuite integration, different privacy constraints, or just not wanting to trust someone else with your VPN.


ZeroTier doesn't use WireGuard though, which makes a difference. I have a private mesh of my family's computers on different networks, and Tailscale/WireGuard was blazingly fast. I ended up using ZeroTier though, because it had an Android client and availability was more important to me than speed at this point.


For me, WireGuard isn't really a viable option because I want functional mDNS name resolution.

As a test, I did set up a VXLAN tunnel through a WireGuard tunnel (Linux to Linux) to prove that it is possible to get that working. However, I can't do that on something like a mobile Android client.
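
Roughly what that looked like on one end, in case it's useful (the other side mirrors it; the VNI and addresses are illustrative):

    # L2 tunnel over the existing wg0 interface, so mDNS multicast gets through
    ip link add vxlan0 type vxlan id 42 dev wg0 remote 10.0.0.2 dstport 4789
    ip addr add 192.168.100.1/24 dev vxlan0
    ip link set vxlan0 up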


What advantages does tailscale have over zerotier?


Wow, good for you. No luck at all with ZT in mainland China.


A quick search for "overlay mesh network wireguard" turned up https://github.com/costela/wesher. Anyone have experience with wesher or the like?


This example uses K3s, the k8s distribution by the Rancher guys. Really cool distro - simple UX. Runs equally well on a Raspberry Pi or in the cloud.


I'm not sure I understand the use case. Is the goal to replace things like Flannel? Or to route all your traffic through a single gateway?


Is there any reason why the OP installs iproute2 and iptables in the final container image rather than in the builder together with the wireguard package?


The packages installed in the builder are essentially never used, since the builder itself never runs. The builder produces the files to install into the final container during the build phase, and then gets thrown away.


Oh, of course. Brainfart there.


While we are discussing this, I see NordVPN has also released support for WireGuard and named it NordLynx.




