
Fantastic news. I deploy WireGuard to provide a private network (mesh) between VPS servers. Each VPS instance has every other VPS as a peer, so there is no single point of failure. I run PostgreSQL with Patroni and GlusterFS over this mesh with no issues. When I add or destroy a VPS with Ansible, all VPS nodes get an updated config and reload. This way I don't rely on a single cloud provider, because I do not use their private network service.
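
The "every node peers with every other node" part can be sketched as a config generator. In a real setup an Ansible template would do this and the keys would come from `wg genkey` / `wg pubkey`; the node names, IPs, and key strings below are placeholders:

```python
# Sketch: render a full-mesh WireGuard wg0.conf for each node.
# All values here are illustrative placeholders, not real keys.

NODES = [
    {"name": "vps1", "ip": "10.0.0.1", "endpoint": "203.0.113.1:51820", "pubkey": "PUB1"},
    {"name": "vps2", "ip": "10.0.0.2", "endpoint": "203.0.113.2:51820", "pubkey": "PUB2"},
    {"name": "vps3", "ip": "10.0.0.3", "endpoint": "203.0.113.3:51820", "pubkey": "PUB3"},
]

def render_config(node, nodes, private_key="PRIVATE_KEY_HERE"):
    """Render a wg0.conf for `node` with every other node as a [Peer]."""
    lines = [
        "[Interface]",
        f"Address = {node['ip']}/24",
        f"PrivateKey = {private_key}",
        "ListenPort = 51820",
    ]
    for peer in nodes:
        if peer["name"] == node["name"]:
            continue  # a node does not peer with itself
        lines += [
            "",
            "[Peer]",
            f"PublicKey = {peer['pubkey']}",
            f"Endpoint = {peer['endpoint']}",
            # /32 so each peer only claims its own mesh address
            f"AllowedIPs = {peer['ip']}/32",
        ]
    return "\n".join(lines)

print(render_config(NODES[0], NODES))
```

Adding or destroying a node then just means re-rendering and reloading this file on every host, which is exactly why every change touches all nodes.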

I used to do the same thing, but adding one node meant I had to reprovision all other nodes so each had an updated config file written and reloaded. I decided I wanted something akin to DHCP, which seems to be worked on here: https://github.com/WireGuard/wg-dynamic It's still a WIP though.

I use tinc for this. It does the mesh dynamically, so I have a few nodes that are fixed and the others connect directly or indirectly automatically. I deploy it using Puppet and there's no need to update all nodes to add a new one. The cryptography and performance are probably not as good as WireGuard, but more than good enough for my uses. I think there's been some consideration given to using WireGuard as the transport instead of doing everything in userspace.

Yeah, tinc is great in that scenario. I use it similarly.

This problem has prevented me from rolling out WireGuard all this time. But how does wg-dynamic help? It seems to be just a DHCP-over-WireGuard implementation. You still have to send every existing node an updated configuration because you provisioned a new one.

An overlay network on top of WireGuard would be really nice. For example, say you are running a WireGuard network on some subnet. Every peer assigned an IP address within this range is, by configuration of the network, allowed to forward packets to another peer in the network. So the only things that would need to be implemented are: * an internal routing system to forward packets somehow to the destination * a concept for how peers are found and how they build a secure channel (pre-shared key?)

Edit: A better way would be to have multiple shared secrets for every server, so you could basically assign roles to every server. If a server has the keys "db" and "middleware", it can communicate with every peer in the network for forwarding, but the final destination can only be a server which also has one of the keys "db" or "middleware". Maybe such a server would have 2 virtual IPs within the subnet, one for its db role and one for middleware.

While WG is pretty cool, you're starting to describe a simple version of ZeroTier. You can achieve exactly what you say with it, along with multiple networks, chosen/assigned ips, p2p routing, shared keys for authentication to the network, etc. You can put extra filtering or routing rules on top of each of the networks.

Do you maybe know when WireGuard would be better than ZeroTier? I've been using ZeroTier for months for p2p (Hamachi-like) and for access to the internet like a VPN service. It seems to be the most versatile since it works everywhere, even behind the deepest NAT jungle, and with blazing fast speeds (compared to OpenVPN; haven't tried WireGuard yet).

If you have a stable (network) configuration with no roaming machines, and you want as few dependencies as possible, wg sounds perfect. If you want features, don't mind an extra daemon, and don't know what NATs/firewalls are in the way, ZeroTier rules.

If you want an open source ZeroTier, try Slack's Nebula.


I thought about doing something similar, but with Slack's Nebula or with ZeroTier (v2, which is not released yet). They're specifically designed for this kind of overlay network if I'm not mistaken, taking care of node additions and removals automatically. Nebula with fixed "lighthouses", ZeroTier with a decentralized KV store.

Did you look into these as alternatives?



Didn't know about nebula, definitely interesting. I looked into ZeroTier but I believe it has a central control server for connection initiation, and I read some comments about slow connections if I remember correctly.

OP is referring to ZeroTier 2.x which makes it easier to run your own control servers (called root servers in ZT).

Any connection flakiness is probably due to NAT or firewall issues and is going to occur in any P2P network layer since they all use a toolbox of common techniques such as UDP hole punching.

That's really interesting. So you essentially implemented a Virtual Private Cloud (VPC) on top of the "PHY" network of your hosts?

Does that mean that all your nodes have to be accessible to the public internet?

In my case yes and yes, but mostly because I spread out over two cloud providers.

But it only needs to be accessible on the port WireGuard uses for communications, and WireGuard also has a nice property where it stays silent in response to non-WireGuard packets.

So someone on the internet doesn't necessarily know the node is reachable from the internet if they try and scan it for example.

Edit: IIRC only one end of the connection needs a stable endpoint; WireGuard supports mobility (changing IP addresses) for the other end.

afaik, both ends can move, they just send packets to the latest IP they received a valid packet from.

Not necessarily, no. All hosts talk to each other on their private (v)NIC.

I'm also doing this with internet-connected VMs, but I have closed all other ports using iptables.
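
A minimal sketch of that idea, assuming WireGuard on the default UDP port 51820 and a `wg0` device (both are assumptions; adjust to your setup):

```shell
# Allow loopback and already-established traffic.
iptables -A INPUT -i lo -j ACCEPT
iptables -A INPUT -m conntrack --ctstate ESTABLISHED,RELATED -j ACCEPT
# Allow WireGuard handshakes/data on the public interface.
iptables -A INPUT -p udp --dport 51820 -j ACCEPT
# Trust traffic that arrives over the encrypted mesh.
iptables -A INPUT -i wg0 -j ACCEPT
# Drop everything else.
iptables -P INPUT DROP
```

That way services like Postgres only listen to peers that are already inside the mesh.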

Did you follow a particular tutorial or do you have any resources you'd recommend to help replicate this setup?

I've been interested in setting up a private network similar to what you describe and your comment has piqued my interest in finally building it

I'm considering writing a blog post about it as I documented most steps. Plus having everything in Ansible is more or less a guide / tutorial in itself.

Oh awesome, well if you need someone to test out a draft of the post let me know. I'd be glad to help

Can you talk a bit more about your setup with Patroni, PostgreSQL and GlusterFS? Are you running Postgres on GlusterFS? How well does that work?

In my experience, file locking on a distributed filesystem is either not implemented correctly or has piss-poor performance -- and databases rely on it.

I run a distributed Node.js Express app that has a media folder mounted (FUSE) across all node instances. GlusterFS runs with replicated volumes. As GlusterFS is not super fast, I use an Nginx cache in front of all media files. (I should probably use a CDN.) PostgreSQL does not run over GlusterFS, for the reasons you mentioned. For ease of configuration, each node has an HAProxy instance that knows where the master PostgreSQL instance lives. Both Patroni and the Node.js (TypeScript) app use Consul for leader election.
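
The "HAProxy knows where the master lives" part is commonly done by health-checking Patroni's REST API, which answers 200 only on the current leader, so writes always land on the primary. A sketch of that pattern (the IPs are placeholders, and the health-check path varies across Patroni versions, e.g. /primary vs. /master):

```
listen postgres_primary
    bind *:5000
    option httpchk GET /primary
    http-check expect status 200
    default-server inter 3s fall 3 rise 2 on-marked-down shutdown-sessions
    server vps1 10.0.0.1:5432 check port 8008
    server vps2 10.0.0.2:5432 check port 8008
    server vps3 10.0.0.3:5432 check port 8008
```

The app then just connects to localhost:5000 and failover is handled by Patroni flipping which node's health check succeeds.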

Is there a way to implement a mesh using only the public IP addresses?

Someone else linked https://tailscale.com in this thread. Could that be similar to what you're looking for?

I haven't fully dug into it, but I definitely will later today.

It seems like it isn't free, and I would prefer not handing my network over to anyone. Since ZeroTier only runs over UDP (which can be problematic) and doesn't route over other peers if p2p isn't possible between nodes, I've been thinking about building the same admin experience and ease of deployment around tinc, which can fall back to TCP 443 if necessary.
