AutoSSH + GatewayPorts on a public server has been solid for me. Setup is a service file on the private machine and "GatewayPorts yes" in the sshd_config of the public computer. Plus there's no keep-alive traffic. Combine it with dynamic DNS, a web proxy and Let's Encrypt for access to a website running on the remote computer.
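For anyone wanting to try this, the moving parts are roughly these (host names, ports and the tunnel user are placeholders, not my actual setup):

    # /etc/ssh/sshd_config on the public server
    GatewayPorts yes

    # on the private machine (what the service file runs):
    # publish local port 443 as port 8443 on the public server
    $ autossh -M 0 -N -R 8443:localhost:443 tunnel@public.example.com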
Quite neat, and solves a big problem I had (devices at customers' sites). However, for me it would be very hard to trust you (security- and availability-wise).
I opted (some months ago, when the problem arose) to simply set up a private VPN server, and every device I have at a customer's site connects to it. That way, I can simply `ssh device.cust.vpn.mycompany.tld` and I'm in.
This is easy to set up self-hosted - there would be no value in them providing that. The value is that it is cloud hosted so you don't have to do anything.
I manage a few network devices as a hobby and recently implemented just this.
I have not yet set up the DNS part, as I'm still trying to decide if I want to go IPv6-only on this network (which makes unique, determinable addresses a lot easier).
The current implementation is a hybrid of Puppet (OpenVPN, firewalling) and Docker (radvd, LibreNMS, other monitoring). Don't know if it will be useful for you, but I might share it once it's done.
Actually, I have a hook on the OpenVPN server (on Docker) that talks to the DNS server when a new device connects to it. The DNS and reverse DNS are added automatically.
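Roughly along these lines, if anyone's curious (a simplified sketch; the zone, server and key path are placeholders, and it assumes a DNS server that accepts nsupdate/TSIG dynamic updates):

    # server.conf excerpt
    script-security 2
    client-connect /etc/openvpn/register-dns.sh

    # /etc/openvpn/register-dns.sh
    #!/bin/sh
    # OpenVPN exports the client's CN and assigned VPN IP in the environment
    # (the PTR record can be handled the same way)
    nsupdate -k /etc/openvpn/ddns.key <<EOF
    server ns.vpn.example.com
    update delete ${common_name}.cust.vpn.example.com A
    update add ${common_name}.cust.vpn.example.com 300 A ${ifconfig_pool_remote_ip}
    send
    EOF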
I use the 11.0.0.0/16 zone. It's not good practice at all, but in this case it's very useful: it doesn't conflict with any customer's network plan, and it's only routed inside my devices, which only talk to each other.
But I would love to take a look at it if you share it ;-)
Of course it's large enough, but it's also widely used. If you use 10.11.0.0/16, you can be sure that you'll run into a company that also uses it. And when there's a conflict between your VPN device space and your customer's IP space, … well, it's really annoying.
Sure, you could use the 192.168 space or 172.16.0.0/12, but you'll run into the same problem. One time it'll be a hotel that uses that space, another time it'll be a factory.
So that's why I use 11.0.0.0/16. Yes, it's allocated, but my devices will never need to talk to it (they only talk with me, and I'll never install devices inside the DoD, so there won't be a conflict).
Possible, but improbable (the private address space is fairly large, and most companies rather unimaginatively stick to 10.[0-9].x.x; I have yet to see a 10.11.x.x anywhere. 192.168 is indeed everywhere, but 172.16 is by far the rarest). You're setting yourself up for the exact same issue Hamachi had when it used the 5.0.0.0/8 space for its VPN: once that block was allocated, its owner started using those addresses on the public internet.
Well, sorry, but off the top of my head I have a client that uses 10.96.0.0/16 and another that uses the whole 172.16.0.0/12 range...
I don't really see how I'm setting myself up for these kinds of problems, since these devices only talk to my infra; if anything, I'm avoiding a lot of problems this way.
Currently my implementation only has one VPN endpoint per 'location', so the other devices still need to get their IPs some other way (DHCP/router advertisements). Since my network is not that large, layer 2 is doable, and I'm trying to see where I hit practical limits.
A "private VPN server" is meaningless, could be IPsec or PPTP or OpenVPN or SSH or ...
If you're already using SSH you can use it to forward ports. You can SSH into that SSH server (with Mosh you'd get lower latency), craft an SSH config (at ~/.ssh/config or /etc/ssh/ssh_config) and make a bunch of shell scripts (including a ProxyCommand if needed). You could even run your SSH server over Tor, or, as I briefly suggested above, you could go with an SSH server as VPN. One disadvantage of that is that on Android it will not work without root. Tor also won't work with Mosh (Mosh uses UDP), so you'd have higher latency.
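For example, something like this in ~/.ssh/config lets you type `ssh device` and have the hop through the public box happen automatically (names and the device's address are placeholders):

    Host relay
        HostName relay.example.com
        User me

    Host device
        # address of the device as the relay sees it
        HostName 10.8.0.12
        User root
        ProxyCommand ssh -W %h:%p relay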
See also this recent HN post "SSH vs. OpenVPN for Tunneling" [1]
SSH tunnels add hassle to deploy and maintain. With OpenVPN, once your server is up you just need to deploy the software and install the certs, and you're done. It's my preferred solution for devices since it takes 5 minutes to deploy, and you have instant access to all ports of the device(s).
1) you have to know which ports you want to forward before you ssh into the remote host and
2) you have to change configurations (hostname and ports) in all the programs you use (for example, instead of pointing your browser to http://foo, you're going to have to go and type http://localhost:8888 - not to mention all the scripts or other pieces of software that "just work" using the canonical name and standardized port to connect, and would require customizations to change that)
With sshuttle you get access to all TCP ports. You can add hosts in /etc/hosts, ~/.ssh/config, /etc/ssh/ssh_config, dnsmasq, or as aliases in your favourite shell. If you prefer port 80 and it is already being used on localhost, you can manually do what ZeroTier does: add a /32 or /128 to an interface and route it.
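For reference, the sshuttle invocation is about as simple as it gets (host and subnet are placeholders):

    # route all TCP traffic for the remote subnet through the SSH host
    # (add --dns to also tunnel DNS lookups)
    $ sshuttle -r me@relay.example.com 10.8.0.0/16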
I just tried ZeroTier yesterday and it is nice indeed, basically an advanced, open source Hamachi. But it is not rocket science. What I especially like is that you can completely self host.
I don't think that the implementation in this case is relevant. The underlying idea is fairly similar as far as this use case goes.
And while your method works, I think it's also more manual and involved than a simple VPN setup. At the minimum, you just set it up, connect all your devices to the server and write down the IPs.
Very neat, but SSH already has capabilities built-in to handle this scenario without harming security or increasing complexity.
To add to irq-1's excellent response, another autossh method is to autossh from the device into a remote relay server (i.e. a jumpbox), forwarding a port on the jumpbox back to the local ssh server running on the device, which can now listen only on localhost.
You can try this out in literally ONE COMMAND LINE (below) and automate it quickly without installing any additional software (except perhaps autossh, which is usually in your distribution's repositories).
device (autossh) -> jumpbox <- you
You can also use Userify (https://userify.com) or similar to keep keys synchronized on the device and jumpbox in this scenario. (Userify only needs outbound https.)
Use RemoteForward (-R) on the autossh command line for this. See man page for ssh(1) and especially the RemoteForward section under ssh_config(5) for details.
Example:
# on the device:
$ ssh jumpbox -R 22001:localhost:22
Now you can log into the jumpbox and from there reach the device via port 22001, or use SSH's built-in tun support (-w) in your SSH client (or forward your agent by passing -A when logging into the jumpbox, but a forwarded agent could be hijacked by an attacker who'd compromised the jumpbox, so do -w instead).
That's all. You can automate this with ssh_config, autossh, etc, and also lock down the remote host authorized_keys file and use a restricted shell.
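A sketch of the automated version (names and ports are placeholders; ExitOnForwardFailure makes autossh restart the tunnel if the remote port can't be bound):

    # ~/.ssh/config on the device
    Host jumpbox
        HostName jumpbox.example.com
        User tunnel
        RemoteForward 22001 localhost:22
        ExitOnForwardFailure yes
        ServerAliveInterval 30

    # then keep it up with:
    $ autossh -M 0 -N jumpbox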
You have to log into the jumpbox (presumably still running SSH on port 22) in order to log into the device, whose SSH listening port has been forwarded from port 22 on the device itself, across the encrypted tunnel, to port 22001 on the jumpbox's localhost.
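If you'd rather do it in one hop from your laptop without exposing anything, ssh's -J/ProxyJump works too (user names are placeholders):

    # "localhost" is resolved on the jumpbox, so this hits the forwarded port 22001
    laptop$ ssh -J you@jumpbox -p 22001 deviceuser@localhost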
If instead you prefer to expose port 22001 to the entire world (so you don't need to log into the jumpbox first), try this (note that binding to * requires GatewayPorts to be enabled in the jumpbox's sshd_config):
device$ ssh jumpbox -R 22001:*:22
Then, to log into the device, you can just
laptop$ ssh jumpbox -p 22001
which will pass your encrypted packets securely through from your laptop to the end device (and the jumpbox can NOT see them.)
Keep in mind that, with this method, any potential attacker can now directly try to break into the device itself through the jumpbox rather than having to break into the jumpbox first, so make sure you harden both the jumpbox and the end device (which is always a good idea anyway: don't deploy an IoT device that isn't hardened!)
You don't seem to mention pricing at all beyond the "5 devices and 5Gb/mo free". That would be useful to what I expect your main audience to be (people for whom the other obvious alternative is a cheap VPS and either OpenVPN or more manually setup SSH tunnelling).
Momentum mainly I suspect: it has been around and stable for quite some time so a lot of people have good experience with it so it is their go-to when thinking about VPN options (at least F/OSS ones).
It has also been audited by third parties (example: https://www.theregister.co.uk/2017/05/16/openvpn_security_au...), which is reassuring, as it passed with only minor issues, which were fixed quickly. I don't know if that is the case for the other options you've listed.
I set up an OpenVPN server on DigitalOcean from scratch and it wasn't nearly as easy to use as I expected. Connecting from Ubuntu, connections would randomly hang and I'd have to restart the client, and iPhone didn't work without some third-party app. Honestly, if I weren't in Ukraine I would have given up completely, but paranoia is justified in Kiev.
Use IKEv2 instead -- it's natively supported in iOS/macOS.
I'm using [this](https://github.com/gaomd/docker-ikev2-vpn-server) Docker recipe on a $5 DigitalOcean instance and it works great. Setup was also much easier than with OpenVPN.
I have no experience with iDevices, but when I did use OpenVPN on Android I found it to be reliable.
If you find third-party apps unreliable, perhaps you could try setting up a server that uses one of the protocols that are officially supported out of the box (https://support.apple.com/en-us/HT201533) and see if they are less troublesome?
It takes more configuration, and does not seem to have some of the nice properties, like a direct connection between two VPN-connected hosts when they're on the same LAN. That's a killer feature for me: being able to sync a gig or two between my laptop and my home server when I'm at home, without routing it through a box outside the LAN.
> like direct connection between two VPN-connected hosts when on the same LAN
If they are on the same LAN then they should be able to see each other through the local interface.
If connections between them using local addresses are going through the VPN then you have a routing issue - even with the OpenVPN interface set as your default gateway the gateway for the local subnet should still be the local, presumably physical, network interface.
If you are using VPN provided addresses (or public addresses) for the connections then that is the issue. It shouldn't be the VPN's job to say "I think this should be routed locally instead".
If you are using non-local local addresses, i.e. if two subnets on your physical network are talking to each other, then you need to set the gateways for those subnets appropriately. The VPN will see them as separate networks (they are, addressing-wise) and again it is not its job to decide routing apart from for its own virtual network.
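A quick way to check which path a given address will actually take (addresses here are made up):

    $ ip route get 192.168.1.50
    192.168.1.50 dev eth0 src 192.168.1.23
    # if this shows tun0 instead of your physical interface, the routing table is the problem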
I want a private network which has its traffic encrypted while routed over public internet, and just always use the names bound to that private network, not caring about the local physical addresses I receive from a particular point of connection.
I also don't want to pass the traffic over a remote gateway if a local link is available.
I don't know and don't care if it's a "VPN proper" or something else, as long as it gives me an IP interface with these properties, and is open source.
ZeroTier provides that. Tinc provides that. OpenVPN provides something more narrow, which I happen to prefer less.
If you use local addresses (or names that map to local addresses) and your data is going over the VPN then you have a routing problem that is not OpenVPN's (or any other VPN's) fault. If you set the VPN as the default gateway it should only end up being the default for non-local addresses.
Either that, or the names are not getting mapped to local addresses as you expect, so the second possibility is a name resolution problem. If you have a split-DNS setup, this could be because DNS requests are getting sent out through the VPN, so the DNS servers see you as external and give out public addresses instead of local ones.
I really wish people would stop doing this. It gets tiresome having to check every stupid little install script before running it, when there is a PERFECTLY GOOD one-liner that does the same damn thing without hiding anything from the user:
Your one-liner only works on Debian-derived Linux distros. Usually the whole point of a curl|bash one-liner is that it works on every Unix-like system.
Fair enough. I don't know enough about other package managers to know if they can be sequenced like this, but there's nothing wrong with providing another listing like OP with the script link.
On a side note, I was (pleasantly) surprised to recently learn that if you have Macs set up with Back To My Mac, you'll get an "iCloud BTMM" IPv6 encrypted tunnel where all your devices appear via dns-sd, even across the internet: https://apple.stackexchange.com/a/53776
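You can poke at what's being advertised with the dns-sd tool, e.g. (for the BTMM devices you may need to pass your iCloud member domain as a second argument):

    $ dns-sd -B _ssh._tcp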
The Teredo protocol (implemented by miredo-client on GNU/Linux) provides a simple way to get a public dynamic IPv6 address on a host behind NAT. Combined with dynamic DNS this solves the problem of accessing my devices from anywhere I need to.
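On a Debian-ish box that's roughly (sketch; "teredo" is the tunnel interface miredo creates by default):

    $ sudo apt-get install miredo
    $ ip -6 addr show dev teredo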
A while ago I found out that you could do this by running a Tor hidden service on the device. If I needed it, I think I'd rather use that, or a reverse tunnel, than go through a third party.
via reverse tunnel: `ssh -NT -R 1234:localhost:22 youruser@yourmachine.com`
Now if you connect to youruser@yourmachine.com, you can then ssh to your inaccessible machine with `ssh -p 1234 inaccessibleuser@localhost`
After the HN comment on this a few weeks ago I implemented the Tor ssh method. The bottom of this script is what I run on my remote servers to get Tor ssh access, if you want to see an example of working code - https://github.com/riazarbi/serversetup/blob/master/remote_d...
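The core of it is just a couple of torrc lines on the device plus a Tor-aware ssh on the client (the onion hostname below is a placeholder; the real one ends up in the HiddenServiceDir's hostname file):

    # /etc/tor/torrc on the device
    HiddenServiceDir /var/lib/tor/ssh_hidden_service/
    HiddenServicePort 22 127.0.0.1:22

    # on the client
    $ torsocks ssh user@youronionaddressgoeshere.onion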
Instead of wrapping SSH calls with an additional command, would it be possible to use a ProxyCommand? This way, anything working with "ssh" would work out of the box.
That's what happens internally (so yes, you could for example use `ProxyCommand ondevice pipe %h ssh` for other tools in the ssh ecosystem - e.g. in your `.ssh/config`).
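i.e. something like this in ~/.ssh/config should make a plain `ssh mydevice` (and git, scp, rsync-over-ssh, ...) just work, where "mydevice" is a placeholder for your device name:

    Host mydevice
        ProxyCommand ondevice pipe %h ssh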
ZeroTier seems to be an overlay network with a custom addressing layer on top of virtual L2 networks. There's also an appliance in the works that will presumably work as a "hardware VPN" type of gateway on your network. There's no easy setup for ssh access like ondevice's.
Similar to StrongDM, except it does less. SDM implements the SSH protocol, which allows both session logging (for audit and training purposes) and on-prem deployment. It supports all SSH services, such as shell (for interactive use), exec (for remote scripting, like ansible and scp) and subsystems (like sftp). And on top of that it supports DB connections through the same tunnel (again with query logging for audit purposes).
OTOH, the ondevice setup doesn't require me to trust the service: FWIW, I'm getting a dumb pipe to tunnel SSH through. StrongDM seems to be something quite different IMNSHO.
ngrok is aimed at making applications that are under development available for testing by someone outside of your network, for testing webhooks, etc. It's not for production stuff.
ngrok is useful for those things, but also for more. I don't see how ondevice is more for production than ngrok though, especially since any of ngrok's plans gives you custom subdomains and reserved addresses.
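For plain SSH access the ngrok side is a one-liner (the forwarding host and port it prints are assigned by ngrok):

    $ ngrok tcp 22
    # then connect with:  ssh -p <assigned port> user@<assigned tcp host>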
- ondevice makes use of `ssh`'s `-oProxyCommand`, which makes ssh send its protocol data to a command's stdin and expect the responses on its stdout (you can use that for example to get around certain proxy servers using `nc`)
- `ondevice ssh` basically executes `ssh -oProxyCommand=ondevice pipe %h $@` (there's a bit more to it, but that's the gist)
- internally, `ondevice pipe` creates a websocket connection to the ondevice API servers, which in turn tell the device (where `ondevice daemon` is running) that there's an incoming connection.
pretty much the same goes for `ondevice rsync` (and the - not yet released... - `scp` and `sftp` subcommands)
I recently thought about using websockets for tunneling, too, but found out that "security appliances" which do MitM and virus scanning seem to block websockets by default and even need the newest version to support them at all.
So for some scenarios, websockets alone don't seem to be enough.
Cool, but I don't really understand the need for a replacement script. You can tunnel your ssh connections using the .ssh/config "ProxyCommand" option[1], so they could use this functionality to achieve the same goal and make any command using ssh (e.g. git) usable in a transparent way.
The tor option has already been mentioned. The other useful option if you don't have your own server with static addresses is dynamic DNS and then simply set up a VPN or reverse SSH connections to the dyndns hostname on your DSL or whatever. Certainly more sensible than a cloud service with a hilarious 5GB traffic limit that also unavoidably adds unnecessary latency to the connection.
I've contemplated doing something similar. Except the way I'd do it is statically link OpenSSH and Tor into a single portable binary. Then create an onion service to a local sshd. Then just provide the onion address (still would use traditional auth). Of course lots of configurability. This has lots of benefits of course and can be completely self contained and ephemeral.
At a previous job we had the same setup to SSH to our customers' embedded devices for diagnostics. Incredibly useful. I've been playing with the idea of providing this as a service since then; interesting to see if this is viable.
Instead of being a VPN it is a virtual network device. It creates a private virtual ipv6 network. It acts like being on a local network, so all devices inside that network are visible. Software defined networking and all that. Quite impressive.
You have no idea unless you try it. It is different. For example, this network device is meant to make a network between all your IoT devices, or across a cross-provider cloud infrastructure. It does not route all traffic through a VPN server; nodes connect directly to each other. You do have a fallback OpenVPN mode for iPhone devices, though. You cannot really imagine it if you haven't tried it (and that is a problem for ZeroTier anyway).
Have you tried Hamachi? That's exactly how it works. It's not a VPN that routes all your traffic, it's a virtual LAN. You get a private IP, and can communicate with other devices on the same network over their private IPs. And yes, nodes connect directly to each other.
from the systemd service file:
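A minimal sketch of such a unit; the host, port and "tunnel" user are placeholders:

    [Unit]
    Description=Persistent reverse SSH tunnel (autossh)
    After=network-online.target
    Wants=network-online.target

    [Service]
    User=tunnel
    # don't let autossh treat a quick first failure as fatal
    Environment=AUTOSSH_GATETIME=0
    ExecStart=/usr/bin/autossh -M 0 -N -R 8443:localhost:443 tunnel@public.example.com
    Restart=always
    RestartSec=10

    [Install]
    WantedBy=multi-user.target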