    defaults
        timeout connect 5s
        timeout client 50s
        timeout server 20s

    listen ssl :443
        tcp-request inspect-delay 2s
        acl is_ssl req_ssl_ver 2:3.1
        tcp-request content accept if is_ssl
        use_backend ssh if !is_ssl
        server www-ssl :444
        timeout client 2h

    backend ssh
        server ssh :22
        timeout server 2h
It speaks HTTP only insofar as it needs to in order to figure out how to forward requests. It doesn't generate response headers for content; it doesn't serve content; it moves streams from A to B.
> In HTTP mode, it is possible to rewrite, add or delete some of the request and
> response headers based on regular expressions.
Second off, it speaks HTTP, and it serves content that it is able to fetch from a content-producing backend. In my book, that's an HTTP server.
Third off, the difference is so pedantic that I don't think it makes any difference what we call it. We both know what it is, and what it is used for in the context of hosting web applications.
We don't call a car a truck, even if you can haul things around in it.
Pedantry is never a good argument against someone. 1) It's an ad hominem. 2) It doesn't actually accomplish anything. 3) If everyone knew what it was, they wouldn't call it a web server.
With nginx doing SSL termination, haproxy is just TCP passthrough, so it can only do passive health checks (i.e. it can notice when the backend doesn't respond properly), but by then the current HTTP request has already failed.
> apt-cache search ssh http
sslh - ssl/ssh multiplexer
I mean, it's cool that you got the exercise of implementing this in Go and all, but I don't see what's new and interesting about it.
(and another implementation in Perl almost 4 years ago: https://news.ycombinator.com/item?id=2395787)
Edit: oh whoops, didn't get to OP's "Why not sslh" section :\ "The result is useful in its own right through use of Go's interfaces for protocol matching (making adding new protocols trivial), and lightweight goroutines (instead of forking, which is more CPU intensive under load)."
Well, alright: the first point I'll concede, though I'm wary of the "reinvented wheel" scent. The second point I'm even more uncomfortable with, as I think it makes wild assumptions about the kind of environment this tool could be useful in.
[sslh in Net-Proxy](https://metacpan.org/pod/release/BOOK/Net-Proxy-0.03/script/...) predates that by even longer - first release in 2006.
I know obscurity can't replace security, but security plus some obfuscation can help you avoid getting hacked instantly by 0-days. It's easier to set up on the client side than port knocking (you just have to set the port), and it's less detectable than sshd on a random port.
On the other hand, HTTP and SSL/TLS servers will just wait silently for the client to initiate the conversation.
I notice you haven't set any I/O timeouts in your protocol sniffer. I had to add a read timeout because PuTTY (a Windows SSH client) waits for a packet from the SSH server before sending any itself.
I don't know if it's because I surf HN, but a lot of really cool "ops" and systems stuff seems to be written in Go lately.
> I'm fairly amused that you want Rust after whining about Go's error handling.
Fact is, Rust's error-handling story is quite nice, with plans to get nicer. It has the try! macro, which can automatically convert errors into my own error type with the boilerplate outside the function (thus largely separating the success and error paths), and chainable Options and Results for when I need those.
With Go, nearly all of my code revolves around handling errors manually, generally repeating how to handle errors several times over, interleaved with the success code, when 99% of the time the answer is just "return the error".
EDIT: Another comment: I'm well aware of Option and try!, etc in Rust. But overall, Rust doesn't mean that you don't have to think about error handling, it just has different mechanisms for reducing the noise in your code.
Indeed, and I said "most of my code is error handling". I didn't suggest I don't want to think about it; I'd just prefer to be able to read what a function does without two-thirds of its lines being error handling, obscuring what the function does in the success case.
I'm personally using Go on some big projects and enjoying it, but I've noticed so much awesome systems stuff being made in Go lately.
What I find lacking lately is that I've mostly wanted to extend the forwarding/routing of SSH connections based on username (or, better, by identity) to different VMs or hosts, but I have no idea how to achieve that at the moment (without creating dummy users on the sshd server).
One can also match whatever is received in the first message from the client against known protocols (this approach makes it possible to, for example, run HTTP and HTTPS on the same port, as is done by IPP servers, particularly CUPS).
The first message for SSL comes from the client, but for SSH the first message comes from the server.
>with SSH, the server will send a version string upon connection (and expects the client to do the same).
Say for example if I do:
Correct me if I'm wrong.
I am behind my institute's Squid proxy, which is an HTTP proxy. All the ports are blocked; all connections have to go through the proxy. For HTTPS, it uses the CONNECT request.
ZNC uses the same port for HTTPS (webadmin) and SSL for IRC.
Sure, those can be different ports, but if you don't need to run them on different ports, why waste the extra port?
I wrote something very similar in Node.js at work recently: a JSON-RPC server that accepts TCP and HTTP messages on the same port. I remember being very excited when I realised this was actually possible!
I think this could also potentially be done via iptables rules, since iptables is capable of inspecting and rerouting packets.