Show HN: Run SSH and HTTP(S) on the same port (github.com)
165 points by jamescun on Jan 21, 2015 | 65 comments

You can also do this with haproxy. I ran it for a while (I think it was for git). Here is the config for doing it:


  defaults
    timeout connect 5s
    timeout client 50s
    timeout server 20s

  listen ssl :443
    mode tcp
    tcp-request inspect-delay 2s
    acl is_ssl req_ssl_ver 2:3.1
    tcp-request content accept if is_ssl
    use_backend ssh if !is_ssl
    server www-ssl :444
    timeout client 2h

  backend ssh
    mode tcp
    server ssh :22
    timeout server 2h

haproxy is truly a swiss army knife. I recently worked on a geographically distributed 300 server deployment, and our ops team ran haproxy on every node just for ssl termination and the operational insight and flexibility it provided.

AFAIK, it's the only web server that is able to log when a client first connects. Otherwise, attacks a la slowloris go unlogged as the attack is happening.

HAProxy isn't a webserver, it's a TCP connection proxy.

Well, it can be a webserver if you're happy serving only a single file loaded into memory at startup: http://comments.gmane.org/gmane.comp.web.haproxy/17962

Pardon, an HTTP server. It talks HTTP and HTTPS, as well as raw TCP. If you define a web server as something that talks HTTP/HTTPS and is also able to serve static files off the filesystem, then no, HAProxy is not that, but this is really splitting hairs.

No, it's not an HTTP server. Its capability to serve a static file exists almost solely for the purpose of maintenance pages and is severely limited, even to the point of needing to restart the server if you want to update the page.

It speaks HTTP in as much as it needs to to figure out how to forward requests. It doesn't generate return headers for content; it doesn't serve content; it moves streams from A to B.

First off, from the HAProxy docs:

> In HTTP mode, it is possible to rewrite, add or delete some of the request and response headers based on regular expressions.

Second off, it speaks HTTP, and it serves content that it is able to fetch from a content-producing backend. In my book that's an HTTP server.

Third off, the difference is so pedantic that I don't think it makes any difference what we call it. We both know what it is, and what it is used for in the context of hosting web applications.

We don't call Varnish a web server, and it does quite a bit more with HTTP than HAProxy does.

We don't call a car a truck, even if you can haul things around in it.

Pedantry is never a good argument against someone. 1) It's an ad hominem. 2) It doesn't actually prove anything. 3) If everyone knew what it was, they wouldn't call it a web server.

Any insights on using nginx for SSL termination vs haproxy?

The nice thing about SSL termination with haproxy is that, since the backend is then plain HTTP, haproxy can make active HTTP health checks. If such a check determines a backend has failed, it can be taken out of rotation.

With nginx doing SSL termination, haproxy is just TCP passthrough, so it can only do passive health checks (i.e. it notices when the backend doesn't respond properly), but by then the current HTTP request has already failed.
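For illustration, an active HTTP check in haproxy looks roughly like this (the backend name, check URL, and server address are placeholders, not from any config in this thread):

```
  backend web
    mode http
    # active check: haproxy itself requests this URL periodically,
    # independent of client traffic
    option httpchk GET /healthz
    server app1 10.0.0.1:8080 check inter 2s fall 3 rise 2
```

With `check` set, a server that fails 3 consecutive probes is taken out of rotation and brought back after 2 successful ones.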

  > apt-cache search ssh http
  sslh - ssl/ssh multiplexer

I mean, it's cool that you got the exercise of implementing this in Go and all, but I don't see what's new and interesting about it.

(and another implementation in Perl almost 4 years ago: https://news.ycombinator.com/item?id=2395787)

Edit: oh whoops, didn't get to OP's "Why not sslh" section :\ "The result is useful in its own right through use of Go's interfaces for protocol matching (making adding new protocols trivial), and lightweight goroutines (instead of forking, which is more CPU intensive under load)."

Well alright, the first point I'll concede, though I'm wary of the "reinvented wheel" scent; the second point I'm even more uncomfortable with, as I think it makes wild assumptions about the kind of environment this tool could be useful in.

> and another implementation in Perl

[sslh in Net-Proxy](https://metacpan.org/pod/release/BOOK/Net-Proxy-0.03/script/...) predates that even longer - first release in 2006.

The latter part makes me question whether the author read sslh's readme; it clearly notes that there is sslh-fork and sslh-select. If I recall correctly, upstream defaults to sslh-fork because it has lower idle overhead, and is arguably more reliable due to use of less code.

"wild assumptions" ? That's a bit hyperbolic, no?

I wonder if it hides SSH from an nmap scan. Since detecting SSH requires a timeout (the server waits for a bit in case the client sends an HTTP request), scanning for SSH hidden this way would be really time-consuming if it's on a random port behind a fake HTTP server.

I know obscurity can't replace security, but security plus some obfuscation could help you avoid getting hacked instantly by 0-days. It's easier to set up on the client side than port knocking (you just have to set the port), yet it's less detectable than sshd on a random port.

It's interesting that SSH requires the server to respond with a message upon connection, independent of whether the client sends anything - perhaps hiding the service on a different port was not a strong consideration when it was designed.

On the other hand, HTTP and SSL/TLS servers will just wait silently for the client to initiate the conversation.

Nice! I wrote my own version of this in Go about 7 months ago. It's running in production and has been very reliable. You can find it here on my github account: https://github.com/JamesDunne/sslmux

I notice you haven't set any IO timeouts on your protocol sniffer. I had to add a read timeout because PuTTY (a Windows SSH client) waits for a packet from the SSH server first before sending any itself.

Another interesting and enabling lower-level system tool written in Go.

I don't know if it's because I surf HN, but a lot of really cool ops and systems stuff seems to be written in Go lately.

It's generally the sort of thing that would be written in C a few years ago, except C is a horrible language, and Go is the first semi-reasonable replacement. Personally, I dislike coding in Go (far too much of my code ends up being manual error checking), but Rust is on the horizon which might subsume a significant part of this space.

A showdead comment says:

> I'm fairly amused that you want Rust after whining about Go's error handling.

Fact is, Rust's error handling story is quite nice, with plans to get nicer. It has the try! macro which can automatically convert errors into my own error type with the boilerplate outside the function (thus successfully separating success and error paths to a large extent), and chainable Options and Results for when I need those.

With Go, nearly all of my code revolves around handling errors manually, generally repeating how to handle errors several times over, interleaved with the success code, when 99% of the time it's "return an error".

EDIT: Another comment: I'm well aware of Option and try!, etc in Rust. But overall, Rust doesn't mean that you don't have to think about error handling, it just has different mechanisms for reducing the noise in your code.

Indeed, and I said "most of my code is error handling". I didn't suggest I don't want to think about it - I'd just prefer that I can read what a function does without 2/3rds of the lines in that function being error handling, and thus obscuring what the function does in the success case.

I'm sorry you don't like C. I find it to be a great language for the kinds of things it's good at. I always have a lot of fun when I program in C. No, it's not the most productive language to use, but in some cases you'll be hard-pressed to find an alternative; think memory-constrained embedded systems, or even VMs, where you are not allowed to use malloc.

You might want to read [1] :-)

[1] http://blog.golang.org/errors-are-values

That's kind of the segment Go was aiming for (systems/scalable).

Right, except the other comment ("it's trendy") casts some doubt on whether it's just that they needed a systems language. When I made my original comment I was kind of wondering out loud which force was really driving Go adoption...

I'm personally using Go on some big projects and enjoying it, but I've also noticed so much awesome systems stuff being made in Go.

It's trendy.

Or it's handy.

Or both! Trendy and handy are not inherently mutually exclusive.

It doesn't appear to set TCP_NODELAY, as with most of these forwarders. https://github.com/stealth/sshttp does it at the kernel level, which is much better.

I wonder how far you could take this in terms of protocols. I can't think of a good use case yet, but is it at least technically possible to detect and quickly proxy away most common protocols?

sslh [1] (which is packaged and included in many Linux and BSD distributions) has grown to support HTTP, SSL, SSH, OpenVPN, tinc, and XMPP on the same port. It claims it can be extended to anything that can be matched with a regex.

What I find lacking lately is I've mostly wanted to extend the forwarding/routing of ssh connections based on username (or better by identity) to different VMs or hosts, but I have no idea how to achieve that at the moment (without creating dummy users on the sshd server).

[1] http://www.rutschle.net/tech/sslh.shtml

Github: https://github.com/yrutschle/sslh

I had a herp derp moment recently after installing OpenVPN then discovering that Nginx refused to restart because port 443 was 'bound by another process' or something like that. This looks like a pretty easy workaround.

Do note that 443 is typically the web interface for OpenVPN, not the actual VPN port.

I didn't have an iptables rule allowing the VPN traffic (UDP port 1194), so OpenVPN fell back to using 443, which triggered the problem. I looked at the workaround and decided not to bother, as I didn't really need a VPN; I was intending to use the machine as an Nginx server only.

A simple question: how come two programs can run on the same port? How is this implemented?

I haven't read it too carefully but typically something like this would be done by having one program run on the port in question and then it would determine which protocol a given connection is speaking and hand it off to a different port based on config. So your https and ssh servers run on, say, 2000 and 3000 and the protocol recognizer runs on 443 and hands off connections to either 2000 or 3000. See, for example, reverse-proxying with nginx or haproxy.

Yeah, I guess this is the way they would have implemented it.

For the case of SSH versus SSL the detection is trivial: the first message for SSL comes from the client, but for SSH the first message comes from the server. So the server simply waits for a while and assumes the client speaks SSH when it doesn't receive anything.

One can also match whatever is received in the first message from the client against known protocols (this approach makes it possible to, for example, run HTTP and HTTPS on the same port, as is done by IPP servers, particularly CUPS).

What do you mean by:

       first message for ssl comes from client but for ssh first message comes from server

With SSL/TLS, the client connects and sends a ClientHello message first, and then the server responds; with SSH, the server will send a version string upon connection (and expects the client to do the same). With plain HTTP, the client sends the request first. The server can detect which one of these cases it is.

I'm still not clear on what you mean by:

>with SSH, the server will send a version string upon connection (and expects the client to do the same).

Say, for example, I do:

   ssh some.machine.com

I (being the client) am connecting to the SSH port (server) on some.machine.com, right? If that's the case, the client is the one starting the conversation, right?

Correct me if I'm wrong.

Establishing a connection isn't considered 'sending a message'.

Oh OK. Now I understand, thanks.

Very interesting, but it's a real shame that you have to stoop to this level in order to access SSH from all networks. Interestingly, I guess this wouldn't work where HTTPS is MITM'd.

Can this help me SSH to my servers on DO?

I am behind my institute's Squid proxy, which is an HTTP proxy. All ports are blocked; all connections have to go through the proxy. For HTTPS, it uses the CONNECT request.

This is neat. What practical applications does it have?

From the repo the project creator linked in the README (https://github.com/yrutschle/sslh): "A typical use case is to allow serving several services on port 443 (e.g. to connect to SSH from inside a corporate firewall, which almost never block port 443) while still serving HTTPS on that port."

MIT's SSH server pool (athena.dialup.mit.edu) listens on port 443 for both web-based login using a JS terminal and for actual SSH, using sslh as the multiplexer. SSH on port 443 is useful for people who are behind weird guest networks that only allow certain protocols out.

A different use case / protocol selection:

ZNC uses the same port for HTTPS (webadmin) and SSL for IRC.

Sure, those can be different ports, but if you don't need to run them on different ports, why waste the extra port?

I still don't get the practical advantage of this. As someone said below, he/she wrote one and is running a similar setup in production. Just why? Typically you access internal stuff over the intranet or a VPN. How does running both 22 and 443 on 443 in this multiplexed way help security?

That's cool. :-)

I wrote something very similar in Node.js at work recently: a JSON-RPC server that accepts TCP and HTTP messages on the same port. I remember being very excited when I realised this was actually possible!

I wonder if you could do this with Nginx and the Upgrade header.

If you could get SSH to negotiate the upgrade header before initializing its own connection, then you probably could.

I think this could potentially be done via iptables rules as well, since iptables is capable of inspecting and rerouting packets.

A little-known fact is that newer versions of OpenSSH support ProxyUseFdpass, which allows you to do some protocol negotiation before passing the socket on to the regular client. http://www.openssh.com/txt/release-6.5
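For reference, the client-side config looks roughly like this; the helper program is hypothetical (any path/name would do) and must perform its negotiation, then pass the connected socket back to ssh as a file descriptor over stdout via sendmsg() with SCM_RIGHTS:

```
  Host multiplexed.example.com
      ProxyUseFdpass yes
      # hypothetical helper: connects, negotiates the protocol,
      # then passes the open socket fd back to ssh
      ProxyCommand /usr/local/bin/connect-and-pass %h %p
```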

Nice. I'd love to see that in practice. Might have to dig into that myself...

At WebLogic, HTTP, HTTPS, and T3 (a custom protocol) have all been on the same port (since '97). No real need for multiple ports when they all have different negotiation protocols.

Does anyone know how nmap -sV (version detection) would report the open port? HTTP, SSH, or neither?

Edit: Looking at the source code [0], it looks like all probes that this service matches will be reported.

[0] https://svn.nmap.org/nmap/service_scan.cc

I've been using the same trick with OpenVPN for a while. Perfect to go through proxies.

It's a nice idea, but far from the only implementation. sslh is nice as it doesn't need Go, and it's packaged for most distros: http://www.rutschle.net/tech/sslh.shtml

sslh is linked from his page

It could perform better by using the splice() system call.

Liking the Go, not liking the .gitignore.


The .gitignore is a good way to see if a project cares about being precise.
