Though admittedly, mine doesn't have SOCKS support, and the code is not as lean as yours!
- receives connections and assigns a random internal port to each
- wraps the data in a transport (TCP/UDP) packet routed from the internal port to the remote
- wraps the transport packet in an IP packet routed from the address assigned to the proxy, to the remote WireGuard address
- wraps that with WireGuard's protocol (encryption)
- sends off the encrypted packet to the public WireGuard UDP endpoint
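The nesting described above ends up looking like this (a simplified picture, not onetun's exact frame layout):

```
UDP datagram to the public WireGuard endpoint
└── WireGuard encryption
    └── IP packet: proxy's WireGuard address → remote WireGuard address
        └── TCP/UDP segment: internal port → remote port
            └── application data
```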
The packet wrapping and the TCP state machine are implemented using smoltcp in Rust, which is similar to Go's netstack.
The WireGuard encapsulation and state machine is implemented with boringtun, Cloudflare's implementation of the WireGuard client in Rust.
I do have a more thorough architecture explanation in the Readme: https://github.com/aramperes/onetun#architecture
Basically a kitchen sink for this sort of thing, using lwIP for its IP stack
This plays well with proxychains to make proxy-naive programs use SOCKS5 proxies.
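For reference, pointing proxychains at a local SOCKS5 listener takes one line in its config (127.0.0.1:1080 is a placeholder here, not necessarily the port your proxy actually listens on):

```ini
# /etc/proxychains.conf (fragment)
[ProxyList]
socks5 127.0.0.1 1080
```

After that, `proxychains4 some-program` sends that program's TCP connections through the proxy without it knowing.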
> If you configure your browser, for example, to use the SOCKS5 proxy, it will direct all of your internet access via the proxy which is only accessible through Mullvad. So if you haven't turned on the app, your browser will prevent all internet access and therefore won't leak any information.
AllowedIPs = 10.64.0.1/32
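In context, that line sits in a peer config whose only allowed (and, under wg-quick, only routed) destination is the in-tunnel SOCKS5 proxy. A sketch, with everything else a placeholder:

```ini
# wg-quick config sketch: only the proxy's address goes through the tunnel,
# so ordinary traffic never touches the VPN.
[Interface]
PrivateKey = <client private key>
Address = <address assigned by the provider>/32

[Peer]
PublicKey = <server public key>
Endpoint = <server hostname>:51820
AllowedIPs = 10.64.0.1/32   # just the in-tunnel SOCKS5 proxy
```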
I hope someday wireguard addresses this issue and makes itself fully transparent as a data link layer.
This is a common misconception, stemming from the way wg-quick works (unfortunately, IMO; presumably it was done to make things easier, and I guess wg-quick was never meant for people with advanced needs). At a lower level, AllowedIPs really is just "allowed IPs", and does no routing. You can have multiple active peers with overlapping AllowedIPs.
If you set up the tunnel through other means, you can make your own routes.
For example in systemd-networkd, see `RouteTable` under the `[WireguardPeer]` section of systemd.netdev(5).
(This was unfortunately broken for a brief while in systemd in Jan, but should now be fixed again: https://github.com/systemd/systemd/pull/22136. If it's not clear from the link, old and current behavior are that no routes are added unless RouteTable is explicitly set)
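A sketch of what that looks like in a .netdev file (names, keys, and the table number are placeholders; see systemd.netdev(5) for the authoritative options):

```ini
# /etc/systemd/network/wg0.netdev (sketch)
[NetDev]
Name=wg0
Kind=wireguard

[WireGuard]
PrivateKeyFile=/etc/systemd/network/wg0.key

[WireGuardPeer]
PublicKey=<peer public key>
Endpoint=<server>:51820
AllowedIPs=0.0.0.0/0
# Install the AllowedIPs route into table 1000 instead of main;
# without RouteTable= set, no route is added at all.
RouteTable=1000
```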
You should also be able to set it up manually and then add routes, policies and rules manually however you would otherwise.
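A manual bring-up sketch with iproute2 and wg(8) (addresses are placeholders; needs root). No wg-quick is involved, so no routes exist until you add them:

```shell
ip link add wg0 type wireguard
wg setconf wg0 /etc/wireguard/wg0.conf   # peers/keys only, no route magic
ip address add 10.0.0.2/32 dev wg0
ip link set wg0 up
# route only what you actually want through the tunnel:
ip route add 10.64.0.0/16 dev wg0
```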
(You're of course right on the protocol layer, but that is not the cause of the problem you want to solve)
> In other words, when sending packets, the list of allowed IPs behaves as a sort of routing table, and when receiving packets, the list of allowed IPs behaves as a sort of access control list.
> This is what we call a Cryptokey Routing Table: the simple association of public keys and allowed IPs.
Which is, I believe, also why zx2c4 called to revert the whole systemd-networkd feature.
I would really want it to work as it would simplify my network configuration by a bit. Please share a working example if you are able to make it work, thanks!
Last time I tried that, I used iproute2 to manually set up the interfaces and wg setconf to load the WireGuard configurations. So I don't think my tooling is to blame.
If we're talking simply about decoupling routing from AllowedIPs: yes, I'm using that right now and have set it up several times. For redundant routers, see below.
> > In other words, when sending packets, the list of allowed IPs behaves as a sort of routing table
This does seem to conflict with my understanding... depending on exactly which devilish details go into that "sort of", of course. I'm not deep enough into it to say, though.
> Which is, I believe, also why zx2c4 called to revert the whole systemd-networkd feature.
Rather the opposite, AIUI: to allow for setting routes explicitly (which introducing the wg-quick behavior broke).
What started making it click for me was this ArchWiki section. The discussion under this GH issue may also provide some pointers. IIRC I did get multiple outbound redundant routers with failover working in the end. There may be WG-specific gremlins I glossed over, but at the time I ascribed the issues to not fully grokking the Linux IP stack and to problems with *tables in general; the goal I had is hairy enough without WireGuard in the mix. Please report back here on your progress if you have the time :)
EDIT: Went back to take a look and I never did get proper HA routing sorted - ended up "solving" it with a script regularly checking reachability and bringing routes up/down accordingly. No need to bring the actual WG interfaces or IP assignments up/down for that, though.
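A sketch of that kind of reachability script (addresses, interfaces, and the table number are made up for illustration; needs root):

```shell
#!/bin/sh
# Ping each router's in-tunnel address and point the default route
# at whichever WG interface is currently reachable.
if ping -c 1 -W 2 10.0.1.1 >/dev/null 2>&1; then
    ip route replace default dev wg0 table 100
elif ping -c 1 -W 2 10.0.2.1 >/dev/null 2>&1; then
    ip route replace default dev wg1 table 100
fi
```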
> On a lower level, AllowedIPs is really just "allowed IPs", and does no routing.
This is contrary to what the official documentation says https://www.wireguard.com/#cryptokey-routing
> You can have multiple active peers with overlapping AllowedIPs.
You can, but the most specific CIDR wins route selection, which is exactly what *routing* does.
> Cryptokey Routing, which works by associating public keys with a list of tunnel IP addresses that are allowed inside the tunnel.
> In the server configuration, when the network interface wants to send a packet to a peer (a client), it looks at that packet's destination IP and compares it to each peer's list of allowed IPs to see which peer to send it to.
> the list of allowed IPs behaves as a sort of routing table
> This is what we call a Cryptokey Routing Table
You can just set your peers on separate wg interfaces. At least on Linux and BSD, you have tables to control routing before packets reach the interface.
So you can have two wg interfaces, each with a single but distinct peer both with CIDR 0.0.0.0/0 (or what have you), and use ip-route/nftables as usual to pick the appropriate outgoing interface.
It makes sense if you think of each wg interface as a NIC connected to an L3 switch, and each peer to a host connected to another port on the same switch. AllowedIPs would correspond to the table+ACL in the switch.
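For instance (addresses and table numbers are made up; needs root):

```shell
# wg0 and wg1 each have a single peer with AllowedIPs = 0.0.0.0/0
ip route add default dev wg0 table 100
ip route add default dev wg1 table 200
# pick the tunnel by source address (or fwmark, uid range, etc.)
ip rule add from 192.168.1.0/24 lookup 100
ip rule add from 192.168.2.0/24 lookup 200
```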
But yeah, me saying it "does no routing" was not really correct. That routing happens after the (rest of the) Linux kernel's routing, though, without overlapping with, replacing, or conflicting with it.
While this understanding does come from a decent amount of experience, in case I'm wildly misrepresenting things, do set it straight.
I don’t have to do this with a normal data link layer, that’s the entirety of the complaint.
AllowedIPs can be disabled if you want; just set it to 0.0.0.0/0. AllowedIPs is needed because netfilter can't "see" which public key an inbound packet is coming from, so by the time a packet gets to netfilter it's too late to accept/reject based on which peer sent it to us.
Yes, but UI-wise it presents itself as one, since it acts as an interface. The fact that it is not a true data link layer is the basis of my comment.
> AllowedIPs can be disabled if you want; just set it to 0.0.0.0/0.
Only one peer is allowed to use 0.0.0.0/0 for AllowedIPs
This is simply incorrect. You can have two peers with the same AllowedIP; you just have to put them on separate interfaces (wg0 and wg1 for example). This is for exactly the same reason that a routing table can only have one default entry. If you want two default entries, make two routing tables.
> Yes but UI wise it presents itself as one
No, it doesn't present itself as one.
> since it acts as an interface
So does /dev/net/tun, which is definitely not a layer 2 interface either.
I don’t have to do this with normal data link layers; that’s the point of the complaint. WireGuard is not a true data link layer. Having to manually configure multiple interfaces, for something a normal data link layer lets me do at runtime with a single interface, is an extra inconvenience.
> This is for exactly the same reason that a routing table can only have one default entry. If you want two default entries, make two routing tables.
Using nftables I can specify different routers based on arbitrarily complex packet rules, using just one interface. I can’t do this with WireGuard; it will only let me route arbitrary packets to a single peer per interface. This is an inconvenience.
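What the parent describes, on an ordinary interface: mark packets in nftables, then route by fwmark (values and the next-hop address are illustrative; needs root):

```shell
nft add table inet mangle
# 'type route' on the output hook makes the kernel re-route locally
# generated packets after the mark is set
nft 'add chain inet mangle output { type route hook output priority mangle ; }'
nft add rule inet mangle output tcp dport 443 meta mark set 1
ip rule add fwmark 1 lookup 101
ip route add default via 192.0.2.1 table 101
```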
I have tailscale running on a lot of devices.
Servers, workstations, raspberry pi, various other appliances, SFPs.
This SFP has an embedded ARM processor running Linux. It’s pretty meta, but one could imagine a wireguard control network for these. The article even describes using wireguard-go on the embedded side.
(The below is meant to be tongue-in-cheek:)
>>> APT for your network... in a box! Market now ripe for someone to use this to deliver APT4UaaS.
E.g. you could easily have a bittorrent client use a certain VPN without routing all your traffic over it, or you could have a tab container in firefox use one connection, and another container another connection.
The websocket approach is a lot easier to configure, so I'm definitely going to look in to this.
I used to run OpenVPN in a Docker container together with a SOCKS proxy for this exact use case (using a commercial VPN provider that doesn't offer SOCKS with different endpoints on a per-site/per-tab basis, without wanting to change my default route or non-browser traffic), but this is much more efficient (and safer).
wireproxy's wg only forwards TCP and UDP. I am not sure how ICMP is handled. Other transports, though rarely used, won't be tunneled (and may leak, if not dropped).
And it's fast/low overhead.
And yeah, it's surprisingly easy; it "just works".
I lurk on their mailing list; they seem like a nice group.