Hacker News
Show HN: I made an SSH tunnel manager to learn Go (github.com/alebeck)
213 points by 0x12A 57 days ago | 67 comments



Well done.

The ease of use and high quality of the Go SSH libraries (golang.org/x/crypto/ssh) is a killer feature of Go, imho.

Also, there is a higher-level abstraction, github.com/gliderlabs/ssh, which makes it completely trivial to embed an SSH server into an application, giving you a nice way to inspect counters and flip feature flags and tunables.
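A minimal sketch of what that looks like (the port, the handler, and the counter it prints are made-up placeholders, not anything from the linked project):

```go
package main

import (
	"fmt"
	"log"
	"sync/atomic"

	"github.com/gliderlabs/ssh"
)

var requestCount atomic.Int64 // hypothetical counter you want to inspect at runtime

func main() {
	ssh.Handle(func(s ssh.Session) {
		// Print some internal state to whoever ssh'es in.
		requestCount.Add(1)
		fmt.Fprintf(s, "user=%s requests=%d\n", s.User(), requestCount.Load())
	})
	// Debug listener on a side port; a real app would also set host keys and auth options.
	log.Fatal(ssh.ListenAndServe("localhost:2222", nil))
}
```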


The only major downside to golang.org/x/crypto/ssh is that open issues seem to linger for years lately, even when people try to submit patches. So it's often necessary to look for third-party solutions.

The knownhosts handling in particular has a bunch of common land-mines. I'm the maintainer of a wrapper package https://github.com/skeema/knownhosts/ which solves some of them, without having to re-implement the core knownhosts logic from x/crypto/ssh.

Just to illustrate how common these land-mines are, my wrapper package is imported by 8000 other repos on GitHub, although most of these are indirect dependencies: https://github.com/skeema/knownhosts/network/dependents


Another thing I want but is completely missing from golang.org/x/crypto/ssh is compression support: https://github.com/golang/go/issues/31369


I think in an ideal world, this would be the normal case. A hierarchy of packages, maintained by many independent parties, that extend useful base functionality, without too much logic being put in any one package. If one thing doesn't work well you can just create a new package to replace the one part. And building on top of simpler, smaller modules allows you to keep code DRY, reduce maintenance burden (like the 1000 open PRs...), and easily extend functionality by simply making a new package.

That was my experience with CPAN, anyway. It's not perfect but it's miles above other language module cultures.


The base functionality isn't always terribly extensible, though. And Go isn't like Perl or Ruby where you can monkey-patch arbitrary logic in a pinch.

I originally created my knownhosts wrapper to solve the problem of populating the list of host key algorithms based on the knownhosts content. Go's x/crypto/ssh provides no straightforward way to do this, as it keeps its host lookup logic largely internal, with no exported host lookup methods or interfaces. I had to find a slightly hacky and very counter-intuitive approach to get x/crypto/ssh to return that information without re-implementing it.
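Roughly, the wrapper gets used like this (a simplified sketch; khPath, hostWithPort and the user are placeholders, and the repo README is the authoritative reference):

```go
package main

import (
	"log"

	"github.com/skeema/knownhosts"
	"golang.org/x/crypto/ssh"
)

func clientConfig(user, khPath, hostWithPort string) (*ssh.ClientConfig, error) {
	kh, err := knownhosts.New(khPath) // wraps x/crypto/ssh/knownhosts parsing
	if err != nil {
		return nil, err
	}
	return &ssh.ClientConfig{
		User:            user,
		HostKeyCallback: kh.HostKeyCallback(),
		// The part x/crypto/ssh doesn't expose directly: advertise only the
		// algorithms for which we actually have known_hosts entries.
		HostKeyAlgorithms: kh.HostKeyAlgorithms(hostWithPort),
	}, nil
}

func main() {
	cfg, err := clientConfig("myuser", "/home/me/.ssh/known_hosts", "example.com:22")
	if err != nil {
		log.Fatal(err)
	}
	_ = cfg // pass cfg to ssh.Dial(...), plus Auth methods
}
```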

And to be clear, re-implementing core logic in x/crypto/ssh is very undesirable because this is security-related code.


Sometimes the hierarchy can be used without directly/perfectly extending the code. For example, in the CPAN world, you might publish your own module as "x/crypto/ssh/knownhosts/client". You don't even have to use the "x/crypto/ssh/knownhosts" code at all, it just looks like a similar namespace. (IIRC, CPAN requires a human in the loop who's moderating what new packages are listed; none of the craziness of PyPI where any insane person can release thousands of typosquatting malware modules)

You would hope a new module would reuse as much of the existing base modules as it can, but sometimes it's enough to just put some new code in that namespace, with the intent that someone will find it more easily and build on top of it. The hierarchy is as much about organization, discovery and distribution as it is about good software development practice, the goal being to improve the overall software development ecosystem.


For critical security-related code, I'd argue that's not a good property at all for module namespacing! Quite the opposite. Even with a human in the loop.

(and I was a professional Perl programmer for the first 5 years of my career, so I'm not asserting this out of lack of familiarity with CPAN!)

That all said: I don't even think what you're saying about CPAN is terribly similar to the situation being discussed here, since Go's x/crypto/ssh (and all other x/ packages) are officially part of the Go Project and are maintained by the Go core maintainers. See https://pkg.go.dev/golang.org/x. Third-party Go developers cannot add new packages to this namespace at all.


I do not mean this as a loaded question, but what happens in this model when maintainers die?

Everything you've said sounds great, with the assumption that the maintainers can maintain their pieces indefinitely and independently. But we're mortal. And I know the independent maintainers in places like CPAN are humans, not companies.

I guess it's a sign you're getting old when you start worrying about this kind of thing


Assuming people want to keep using/maintaining the code, you just prove the original maintainer has either abandoned it or died, and then you contact the repository admins (i.e. CPAN). Make your case that the original maintainer is gone and they'll probably make you the new maintainer.

If nobody wants to maintain the old code, or the design wasn't ideal, oftentimes people will create a "v2" or "-ng" rewrite of it and try to keep backwards compatibility. Then the people who made sub-modules can simply publish their modules on top of the new base module. Old code continues running with the old dependencies until somebody links the old code to the new base module.


How is performance?

We found the native Go TLS libraries (as used natively by, e.g., the http package) to add many milliseconds to web API calls. We eventually substituted OpenSSL (despite not really wanting to). It significantly sped up the app.

YMMV, this is for ARM 32-bit targets.


I highly doubt that claim. Maybe it's an ARM thing, but there is no way that using the TLS package from Go adds milliseconds of processing to requests.

Did you try with GOEXPERIMENT=boringcrypto?


It is pretty good. Most of the CPU is spent on crypto, which is what you'd expect. The overhead is low enough that I've had no problems having rather meager machines handling thousands of concurrent connections.

If you're having performance issues with TLS, I would look at what sort of crypto you're using. At least for SSH, RSA is dog slow. It wouldn't surprise me if you can eke out quite a bit of performance by switching to ed25519.
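To make the gap concrete, here's a rough standard-library micro-benchmark of the signing operation that dominates handshake cost; the loop count and message are arbitrary, and it measures raw crypto rather than a full SSH or TLS handshake:

```go
package main

import (
	"crypto"
	"crypto/ed25519"
	"crypto/rand"
	"crypto/rsa"
	"crypto/sha256"
	"fmt"
	"time"
)

func main() {
	digest := sha256.Sum256([]byte("handshake transcript"))

	rsaKey, _ := rsa.GenerateKey(rand.Reader, 2048)
	start := time.Now()
	for i := 0; i < 100; i++ {
		if _, err := rsa.SignPKCS1v15(rand.Reader, rsaKey, crypto.SHA256, digest[:]); err != nil {
			panic(err)
		}
	}
	fmt.Println("100 RSA-2048 signatures:", time.Since(start))

	_, edKey, _ := ed25519.GenerateKey(rand.Reader)
	start = time.Now()
	for i := 0; i < 100; i++ {
		ed25519.Sign(edKey, digest[:])
	}
	fmt.Println("100 ed25519 signatures:", time.Since(start))
}
```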


Agreed. There's also cool apps you can build with things like https://github.com/charmbracelet/wish


Definitely... I first became roughly aware of it through the doorparty connector service[1], which is a niche fit, but it was cool to see how it worked.

1. https://github.com/echicken/dpc2/


I'm curious: what are some prototypical use cases for embedding an SSH server into an application?


I work for a C++ company but the game we work on has a debug telnet server. It’s super useful to inspect state or even run automation scripts. Also has a bunch of useful debug commands like the ability to live reload shaders or change how various subsystems work.


[redacted for accuracy]


Going through the code, I couldn't find a server, only usage of the SSH client. Maybe I missed it. But I think GP was looking for use cases where it's helpful to run an embedded SSH server in a Go binary.

Ansible facts can probably be a cross-platform way to collect most of the information you need. For the use cases where scp'ing the binary is needed, I think Ansible supports jump host config too. But I agree that for one-off tasks, running a single binary is convenient compared to setting up Ansible.


Oop - you're right, I missed that they wanted server examples specifically. Thanks for the save.


Well done! If you want to extend your CLI UI, check out Bubble Tea (https://github.com/charmbracelet/bubbletea)


Nice project! I would advise using $XDG_CONFIG_HOME instead of $HOME for storing the configuration file, though :)
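Go's standard library already resolves that for you via os.UserConfigDir, so it's a small change; a minimal sketch (the app and file names are placeholders, not the project's actual ones):

```go
package main

import (
	"fmt"
	"os"
	"path/filepath"
)

// configPath returns e.g. $XDG_CONFIG_HOME/myapp/config.yaml,
// falling back to ~/.config/myapp/config.yaml on Unix.
func configPath(app, file string) (string, error) {
	dir, err := os.UserConfigDir() // honors $XDG_CONFIG_HOME
	if err != nil {
		return "", err
	}
	return filepath.Join(dir, app, file), nil
}

func main() {
	p, err := configPath("myapp", "config.yaml")
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	fmt.Println(p)
}
```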


I hate XDG stuff so much. I just wish every app had its own folder in which it can put whatever it wants. If home directory clutter is the issue, then just ~/crap/.{app1,..n} can be standardised.

Basically, I want app/kinds-of-data and not the other way around.


$XDG_CONFIG_HOME is usually "~/.config/{app1,...n}", so it's close? Plus it allows a user to redirect it to a path of their choice, if all apps used it to begin with.

Don't get me wrong -- some of the choices made by the XDG/FreeDesktop folks rub me the wrong way too ...


No, not quite. XDG-compliant programs end up storing stuff in one or more of the following places:

~/.cache and ~/.config and ~/.local/share and ~/.local/state and ~/.local/bin

I used to get annoyed by non-compliance with XDG. Now I wonder if I'd actually prefer apps to reverse the hierarchy (e.g., ~/.apps/nvim/{cache,config,state}).


I find it obnoxious when apps make me hunt for all of their cache directories. Just put all the cache data in one place.

Make it clear what needs to be backed up, what is ephemeral, and so on. Just put everything in ~/.cache. Chromium in particular is bad at this and has many types of cache.


That's where I would probably split things myself... ~/.cache/appname for cache data, and ~/.???/appname/* for everything else.

This is a huge part of why I like docker-compose and docker in general, I can put everything I need to backup in a set of volume maps next to each other.


I would definitely prefer this. I've never wanted to see the "cache" stores for all (XDG-compliant) apps, but often want to see everything for a single app.


It’s less about wanting to see all the caches, and more about excluding all the caches, e.g. from backups. Likewise, there is one directory for machine-independent configuration which you might share, and another for machine-specific state (such as window positions).

Is the spec perfect? No, of course not. But is it thoughtful, and does it address genuine needs? Yes, certainly.


It also enables you to do things like:

a) store caches & libdata on a different disk

b) consistently 'reset' cached data for kiosk-style logins

c) make config read-only, or reset it to a known good state

d) roaming profiles where the cache is excluded from sync across machines

Most computers + home directories are 'personal', where this largely doesn't matter, but there are often sound operational reasons for this separation in cases where you are responsible for a fleet of computers. I too prefer the 'everything related to this app in one dir' approach. Crazy idea: for apps adhering to XDG, you could point all these vars at a directory under a FUSE-style mount, which then remaps the storage any way you'd like. :)


The reasoning behind the historical kinds-of-data/app convention in Unix is that you can partition the disk easily and apply policies based on type (like backing up /etc, mounting tmpfs on /tmp, or mounting /usr read-only).

Although I'll never forgive XDG for renaming etc to config and var to state. It would be so convenient to set PREFIX=~/.local for some things.


As someone who works on 3 different machines regularly and likes to have the same environment on all of them... I would LOVE it if applications would stop cluttering my .config with cache data and other bullshit I keep having to exclude from sync.


`rsync` should have something like `.nosync`, akin to `.nomedia`, where a directory would have to be added explicitly if one wants it synced. Or something like a `--profile` option, where `.nosync` could then contain an allow/disallow filter per profile.

I have the same issue with the scripts which trigger `rsync` getting confusingly complex because of all the include/exclude arguments.


I've been using a .rsync-filter file for something like what you mean for ages in my home dir backups. It's probably a bit tricky to get right the first time, but once it's there it just works.

https://manpages.debian.org/bookworm/rsync/rsync.1.en.html#f...


That's generally what the Cache Directory Specification attempts to cover: https://bford.info/cachedir/

Lots of things like the Rust toolchain now create CACHEDIR.TAG files so that backup tools can ignore that part of the hierarchy. Alas, I believe the rsync folks refuse to implement it.
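Writing the tag is a one-liner, for what it's worth; a hedged sketch (the signature line is the one mandated by the spec, the helper name and paths are made up):

```go
package main

import (
	"os"
	"path/filepath"
)

// Per the Cache Directory Tagging spec, the file must start with this exact signature line.
const cacheDirTag = "Signature: 8a477f597d28d172789f06886806bc55\n" +
	"# This directory is a cache; it can safely be skipped by backups.\n"

func markCacheDir(dir string) error {
	return os.WriteFile(filepath.Join(dir, "CACHEDIR.TAG"), []byte(cacheDirTag), 0o644)
}

func main() {
	dir := filepath.Join(os.TempDir(), "myapp-cache") // placeholder cache dir
	if err := os.MkdirAll(dir, 0o755); err != nil {
		panic(err)
	}
	if err := markCacheDir(dir); err != nil {
		panic(err)
	}
}
```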


XDG is so bad. There was actually a working best practice before those people came around.

Not only did they fragment the ecosystem with their self-defined standards, but their standard also contains a whole search path with priority-hierarchy baggage, which is underspecified enough that every piece of software handles it differently.

Just ignore it and pretend it doesn't exist.


Is XDG_CONFIG_HOME Unix? Isn't it just some Linux convention?


XDG = X (pronounced “cross”) Desktop Group, aka freedesktop.org, promulgator of conventions for desktop apps.

So, neither one really.


Yeah, I'm gonna stick with POSIX. All systems I'm aware of (other than Linux Desktop apps) use $HOME. If you want to extend your functionality to use an OS-specific directory, that's fine, but $HOME is the safest default. (Same for things like $TMPDIR)


None of that is defined in POSIX, hence the perceived need for XDG.



After having spent the last year writing rust, it's a breath of fresh air to clone and read through a concise and straightforward repo like this.


Is Rust still that hard to grok for you, even after a year? This is by no means meant to be disrespectful, but I'm itching to start learning Rust, and having only worked in Python/C#/Go, I'm getting cold feet just looking at a Rust codebase.

Disclaimer: I'm usually very good at hitting the ground running, but just as bad at "keeping the pace", i.e. diving deep into stuff.


I wouldn't say that it's hard to grok. Even a year ago I found that Rust projects lent themselves well to understanding the project structure, since Rust is fairly explicit about most things, and with LSP integration I could follow along fairly easily compared to something like a Python or Ruby project.

Go is just easier to read. You don't have a lot of generics typically to assemble in your mental model, no lifetimes to consider, no explicit interface implementations, and so on. All of those things in Rust are great for what they do, but I think it makes it more difficult to breeze through a codebase compared to Go.


> I'm usually very good at hitting the ground running, but I am just as much bad at "keeping the pace", i.e. diving deep into stuff

At a beginner level, rustlings[1] is an excellent resource for following along with any book/tutorial and doing relevant exercises to apply the concepts from the learning material.

At a higher level, I guess (re)implementing some tool that you use daily is another way to dive deep into Rust. I suspect it's one of the reasons why we see an unusual number of "rewrite of X in Rust" projects.

[1]. https://github.com/rust-lang/rustlings


For me it's not the language concepts that are hard, it's that things are sometimes very different and if you come from other languages it's easy to make wrong assumptions.

One resource I would highly recommend after the basic stuff people always recommend is a book called "Learn Rust With Entirely Too Many Linked Lists".


As someone who jumps between Go, Rust and Scala - Go is by far the worst.

Antiquated and verbose error handling model. The reliance on code generation because of the lack of a decent type system. The fact that you have to carefully read through every function because it's not immutable by default, has pointer arguments, and has no functional operations, e.g. filter.

It's a language that belongs back in the 1990s.


Nice work! SSH tunnels can be a pain, so this looks handy. What was the toughest part of building it in Go? Any features you’re thinking of adding?


I agree! Honestly, Go made building this quite pleasant, as it has nice abstractions for networking and a great concurrency model. I'm planning to keep it minimal for now, but I would like to add Windows support, SSH multiplexing and maybe some form of throughput measurement. But I'm open to ideas :)


Ah, I just started learning Go, and this project looks awesome! I hope I can write something like this in a couple of months too!

Well done!


Thank you. I found that you can get really productive quite fast in Go, so happy learning :)


If you don't mind a few small pieces of advice: don't use global variables that you mutate; prefer structs with methods. Add a main context with signal.NotifyContext to globally handle SIGINT/SIGTERM and get a graceful shutdown. Also use DialContext instead of Dial when available. You could use errgroup to handle multiple goroutines that return errors (rather than iterating on a channel).
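Not your project's code, just a generic sketch of how those three suggestions fit together (the host/port and the forwarding body are placeholders):

```go
package main

import (
	"context"
	"log"
	"net"
	"os/signal"
	"syscall"

	"golang.org/x/sync/errgroup"
)

func run() error {
	// Root context cancelled on Ctrl-C or SIGTERM, giving one place to trigger graceful shutdown.
	ctx, stop := signal.NotifyContext(context.Background(), syscall.SIGINT, syscall.SIGTERM)
	defer stop()

	g, ctx := errgroup.WithContext(ctx)
	g.Go(func() error {
		var d net.Dialer
		conn, err := d.DialContext(ctx, "tcp", "example.com:22") // DialContext, not Dial
		if err != nil {
			return err
		}
		defer conn.Close()
		<-ctx.Done() // ... forward traffic here until shutdown ...
		return nil
	})
	return g.Wait()
}

func main() {
	if err := run(); err != nil {
		log.Fatal(err)
	}
}
```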

Otherwise it looks good, great job!


Great, thanks for the advice!


This looks so good! I have two questions:

1. What happens if the tunnel breaks? Does it retry instantly? Is there any sort of exponential backoff? Just wondering, if the server is down, whether it would spike the CPU or be gentle (while still fast enough).

2. Would you be adding support for a SOCKS proxy? The ssh command for it is quite simple, and it's as useful as regular remote and local tunnels.


Thank you! Yes, there is an exponential backoff strategy for reconnection attempts. Supporting SOCKS sounds like a nice idea, I'll look into it!
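In spirit it's something like the following (a simplified, generic sketch rather than the actual code; the durations are arbitrary):

```go
package main

import (
	"context"
	"errors"
	"log"
	"time"
)

// maintainTunnel re-runs the tunnel whenever it drops, doubling the wait
// between attempts up to a cap so a dead server doesn't spin the CPU.
func maintainTunnel(ctx context.Context, run func(context.Context) error) {
	backoff := time.Second
	const maxBackoff = time.Minute
	for ctx.Err() == nil {
		start := time.Now()
		if err := run(ctx); err != nil {
			log.Printf("tunnel dropped: %v", err)
		}
		if time.Since(start) > maxBackoff {
			backoff = time.Second // the session lasted a while; start over gently
		}
		select {
		case <-ctx.Done():
			return
		case <-time.After(backoff):
		}
		if backoff *= 2; backoff > maxBackoff {
			backoff = maxBackoff
		}
	}
}

func main() {
	ctx, cancel := context.WithTimeout(context.Background(), 5*time.Second)
	defer cancel()
	maintainTunnel(ctx, func(ctx context.Context) error {
		return errors.New("connect failed (placeholder)") // stand-in for the real dial/forward logic
	})
}
```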


I think there are a couple of packages out there for using WebSockets to proxy a TCP connection, and some of them support SOCKS. I think they all overload the Dial function as a generic way of opening connections.


So what do you think of Go after the project? What language(s) did you come from?


IMO, it hits a nice sweet spot between performance and level of abstraction, especially w.r.t. concurrency and networking. Also I found that you get things done incredibly fast. I am mostly doing Python and some C, so Go feels like "somewhere in between".


I've been meaning to learn Go for a while. This looks like a nice project to go through and pick up a few techniques.


What would one do with a command line SSH tunnel manager?


Nice app! I was actually going to make a version of this myself, with a small macOS UI using a menu item.


Any plans for windows support?


Yes, it's in my backlog, but I don't have a concrete timeline as of now.


Oh, sweet. I was planning to do something like this; now I don't have to.


The title was so confusing to me; the reason I opened the link was to understand how you made the SSH tunnel manager learn the Go programming language.


I don't think the title is confusing, if that were the desired meaning then it'd say "I made an SSH tunnel manager learn Go" i.e. no "to".

I don't think "I made X to do Y" ever means "I made X do Y" does it?


Not for native speakers, but I've heard non-native speakers use "I made X to do Y" in that way.


To be fair: it is a "Show HN" title (which I believe is typically used to denote a project being "shown [off]" by the op).



