For routing all your internet traffic, it's as simple as this (on the client only, no server setup needed):
sshuttle -r firstname.lastname@example.org 0/0
It's also far more powerful for slicing up and mixing subnets, or for routing only specific targets... for example, to unblock a specific site without re-routing other traffic:
sshuttle -r email@example.com sci-hub.tw
A minor issue worth mentioning, so as not to disappoint people trying this out: on Linux it's currently necessary to use the -x option to exclude the server itself from being routed. I think this is due to a kernel bug, which is a little annoying; hopefully it will go away eventually. This isn't relevant to BSD or Mac, although on Mac you have other kernel bugs to worry about in XNU's network stack.
sshuttle -r firstname.lastname@example.org -x 22.214.171.124 0/0
As "icelancer" has pointed out bellow, please note that using your own server ties your activity to your identity more definitively if you are the only one using the server and you pay for the server in your name. Not being a purpose built consumer VPN makes it a less likely target through significant obscurity, however in the event it IS targeted, it's uniqueness will make it easier to associate activity with you via the VPS provider.
> This also ties your identity to a provider definitively. That's fine, as long as you tell people that's what is happening. A good consumer VPN that isn't a garbage one offers plausible deniability.
"Generate a private and public key pair for the WireGuard server:"
"wg genkey | tee privatekey | wg pubkey > publickey"
"This will save both the private and public keys to your home directory; they can be viewed with cat privatekey and cat publickey respectively."
"Create the file /etc/wireguard/wg0.conf and add the contents indicated below. You’ll need to enter your server’s private key in the PrivateKey field, and its IP addresses in the Address field."
That's not within reach of your average computer user.
Same for sshuttle.
Given that WireGuard is headed for inclusion into Linux mainline soon, it probably would be a good idea for folks to take a few minutes to learn how to use a technology that is going to be part of core Linux.
But you're quite right that if you already have a config that you know works, WireGuard has no significant advantage in this area (in terms of ease-of-configuration -- though the keys being quite short is nice for SSH-like key distribution). But if you're starting from scratch then you need to first figure out what is the right configuration to use (or you need to pick from the many dozens of "set up OpenVPN quickly" scripts) and then you need to hope that your configuration is not insecure.
WireGuard can be set up and work just as well as any other configuration without a script in a couple of minutes (or less than a minute with a script). The script that was linked in a sister comment to "set up OpenVPN quickly" also sets up Apache for god's sake...
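For a sense of scale, this is roughly the entire server-side procedure, assuming a wg0.conf along the lines sketched further up (the paths and the wg-quick step are the conventional ones, not taken from that script):

umask 077
wg genkey | tee /etc/wireguard/privatekey | wg pubkey > /etc/wireguard/publickey
# write /etc/wireguard/wg0.conf (PrivateKey, Address, ListenPort, one [Peer] per client)
wg-quick up wg0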
sshuttle uses ssh, which in turn is not wedded to any one cipher. How does WireGuard improve on this?
Among many other things, you cannot do a port scan for WireGuard servers. You can do a port scan for SSH. This is because the WireGuard handshake was designed such that there is no response to unauthenticated packets (the first packet is authenticated by the client knowing the server's public key -- something port scanners won't know).
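A rough illustration, with a hypothetical host and abridged output: a UDP scan can't tell a WireGuard endpoint from a dead port, while sshd announces itself to anyone who connects.

nmap -sU -p 51820 vpn.example.org   # reports 51820/udp open|filtered (no reply either way)
nc vpn.example.org 22               # prints e.g. SSH-2.0-OpenSSH_8.9, identifying the service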
Jason Donenfeld has a few talks that explain why the cryptographic design is the way it is, and it has several very clear improvements over SSH (as a VPN protocol).
I really can't overstate how awesome WireGuard is. I really would suggest you take a look at it.
The "agility is bad" crew have a decade or two to wait before they can show anything at all meaningful beyond "my new thing is newer than your old thing".
That doesn't make them wrong, but it makes their position unproven in practice.
By having cipher agility, both clients and servers are incentivised to support the widest possible set of ciphers (because nobody can agree on what cipher to use). This means that it's hard for a known-bad cipher to stop being used (see: the entire history of RC4 usage in TLS) and any downgrade attacks become catastrophic (see: the entire history of SSL/TLS). It also ends up adding complexity to the protocol -- which is always a good thing to have in cryptographic protocols (see again: SSL/TLS)!
Most importantly, if all currently-known ciphers are broken tomorrow, then all servers and clients will have to be upgraded in order to be secure. So cipher agility doesn't help you with the doomsday scenario (everyone needs to upgrade anyway); instead, it just ensures that older (completely insecure) clients will still be able to communicate with servers. Why is that seen as a feature? If you really want an insecure fallback mechanism you can implement it with non-agile systems by supporting the two most recent versions of the protocol (I expect this is what WireGuard will do once it's upstreamed). But not everyone wants the "feature" that some clients will silently become insecure.
I don't understand what you're saying with this point:
> The "agility is bad" crew have a decade or two to wait before they can show anything at all meaningful beyond "my new thing is newer than your old thing".
How can the "agility is bad crew" prove their point in a few decades if you're arguing that we shouldn't use such protocols? If they followed your advice, there wouldn't be any zero-agility protocols to compare against in a few decades...
I'm arguing that the case for them is weaker than is often put, but that's not the same as saying nobody should use them. If a flag day is fine for your use case, there's very little reason not to choose this design approach; it is simpler, and simpler is good. But you'll notice that the example cited (including by you) for why agility is bad is almost invariably TLS, and clearly a flag day isn't practical for TLS because it's far too broadly used.
TLS illustrates my other main concern with "agility is bad". You describe RC4 as "known bad" and the downgrade attacks as "catastrophic", and this sort of apocalyptic thinking is very popular in the "agility is bad" crowd, but it doesn't truly reflect the ground reality for actual users, which is that things went from "It's definitely fine" to "It's probably fine, but to be sure we should upgrade". Grey areas are a real thing.
There were protocols that didn't exhibit any cipher agility before by the way. Lots of them. What happened was that they broke, and so agility was added to them retrospectively in new versions that fixed the brokenness. The arguably new thing in the latest round of "no agility" protocols is a supposed determination never to do this. To see how that works out, as I said, you'll have to wait a decade or two.
For those of you who are thinking "eh, I like my `ssh -D8080 email@example.com` solution", sshuttle has the following two advantages (quick comparison below):
1. no need to configure your SOCKS proxy in your applications
2. it works even when dynamic forwarding is disabled on the host you're connecting to
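To make the contrast concrete (the port and URL are just placeholders):

# SOCKS approach: start the proxy, then point each application at it
ssh -D 8080 email@example.com
curl --socks5-hostname localhost:8080 https://news.ycombinator.com/

# sshuttle: transparent for all TCP traffic, nothing to configure per application
sshuttle -r email@example.com 0/0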
There's a reason VPN providers have exploded in popularity: mobile internet devices have been mainstream for 5-10 years, and they are system-locked, but you can install apps.
If you are using ssh keys you can at least use a bash while loop without incurring any password prompts:
while ! sshuttle -r firstname.lastname@example.org 0/0; do sleep 1; done
Any server you have a login to, right? So in some respects wouldn't a commercial VPN be simpler?
It's almost as simple, faster, and, importantly, far more obscure... vs consumer VPNs, which are almost honeypots.
It's also more powerful, you can selectively route things through different servers simultaneously.
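A rough sketch of what I mean, with made-up hosts and subnets (run each in its own terminal; whatever subnets you hand an instance go out through that server):

sshuttle -r user@server-a.example.org 10.1.0.0/16
sshuttle -r user@server-b.example.org 192.168.50.0/24

If the two instances fight over the same local listen port, sshuttle's -l option lets you give each its own.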
I suppose that negates my point about its obscurity, since you only care about that if you are evading prying eyes of some sort.
I've updated my original comment to include your point.
Note that sshuttle deconstructs the TCP packets before sending them over SSH (which already uses TCP); it also performs differently to `ssh -D` and manages its buffers to prevent blocking behaviour over bandwidth-limited connections:
Sacrifice latency to improve bandwidth benchmarks. ssh uses
really big socket buffers, which can overload the connection if
you start doing large file transfers, thus making all your other
sessions inside the same tunnel go slowly. Normally, sshuttle
tries to avoid this problem using a “fullness check” that allows
only a certain amount of outstanding data to be buffered at a
time. But on high-bandwidth links, this can leave a lot of your
bandwidth underutilized. It also makes sshuttle seem slow in
bandwidth benchmarks (benchmarks rarely test ping latency, which
is what sshuttle is trying to control). This option disables
the latency control feature, maximizing bandwidth usage. Use at
your own risk.