
Is there a technical reason why you would implement HTTPS in an HTTP server? If you ran a separate process on port 443 to terminate SSL connections and then proxied those requests to an HTTP server running locally, there would be better separation of concerns.

For example, this setup would mean that a security flaw in the HTTP server that allowed an attacker to read process memory could not expose any private keys used by the HTTPS process.

I guess the downsides would be some extra latency while the request is proxied, and some extra memory overhead for the second process.
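
Something like this minimal Go sketch is roughly what I have in mind (the file paths and the backend port are just placeholders):

    package main

    import (
        "log"
        "net/http"
        "net/http/httputil"
        "net/url"
    )

    func main() {
        // The local HTTP server; only plain HTTP crosses this hop.
        backend, err := url.Parse("http://127.0.0.1:8080")
        if err != nil {
            log.Fatal(err)
        }
        proxy := httputil.NewSingleHostReverseProxy(backend)

        // Only this small process ever touches the certificate and key.
        log.Fatal(http.ListenAndServeTLS(":443",
            "/etc/ssl/example.crt", "/etc/ssl/example.key", proxy))
    }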

I'm interested in anyone's thoughts on this.



OpenBSD has added support to libressl for privilege-separated processes that hold SSL keys: any operation requiring the private keys, such as creating session keys or signing things, is shuttled off via a small API to a separate process. This is somewhat analogous to what ssh-agent does for OpenSSH clients.

OpenBSD's daemons that consume TLS private keys have moved to this model or are in the process of doing so. This mitigates the problem where access to process memory results in disclosed private keys, and it also removes the requirement that the daemon's user-facing parts have access to the key files.

http://article.gmane.org/gmane.os.openbsd.cvs/139527/
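
Roughly, the shape of the idea looks like this (a Go sketch only, not how libressl actually does it; the socket path and wire format are made up):

    package keyproxy

    import (
        "crypto"
        "crypto/tls"
        "crypto/x509"
        "io"
        "net"
    )

    // remoteSigner satisfies crypto.Signer; crypto/tls calls Sign during the
    // handshake whenever the private key is needed, so the key itself never
    // has to live in this process.
    type remoteSigner struct {
        pub  crypto.PublicKey
        sock string // unix socket of the key-holding process (made up)
    }

    func (s *remoteSigner) Public() crypto.PublicKey { return s.pub }

    func (s *remoteSigner) Sign(rand io.Reader, digest []byte, opts crypto.SignerOpts) ([]byte, error) {
        // Ship the digest to the key holder and read back the signature.
        conn, err := net.Dial("unix", s.sock)
        if err != nil {
            return nil, err
        }
        defer conn.Close()
        if _, err := conn.Write(digest); err != nil {
            return nil, err
        }
        return io.ReadAll(conn)
    }

    // certWithRemoteKey builds a tls.Certificate whose "private key" is just
    // the proxy object above.
    func certWithRemoteKey(leafDER []byte) (tls.Certificate, error) {
        leaf, err := x509.ParseCertificate(leafDER)
        if err != nil {
            return tls.Certificate{}, err
        }
        return tls.Certificate{
            Certificate: [][]byte{leafDER},
            PrivateKey: &remoteSigner{
                pub:  leaf.PublicKey,
                sock: "/var/run/keyd.sock",
            },
        }, nil
    }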


At work, we run stud in a FreeBSD jail to handle SSL termination. It uses the haproxy PROXY protocol (v1) to send the client IP to the HTTP daemon.
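
For reference, the PROXY v1 handoff is just a text line prepended to each connection; here's a Go sketch of how a backend could recover the client address (illustrative only, not what our stack actually runs):

    package proxyproto

    import (
        "bufio"
        "fmt"
        "net"
        "strings"
    )

    // readProxyHeader consumes the PROXY v1 line and returns the original
    // client address. Further reads must go through the returned reader so
    // no buffered bytes are lost.
    func readProxyHeader(conn net.Conn) (string, *bufio.Reader, error) {
        r := bufio.NewReader(conn)
        line, err := r.ReadString('\n')
        if err != nil {
            return "", nil, err
        }
        // Expected: PROXY TCP4 <src-ip> <dst-ip> <src-port> <dst-port>
        fields := strings.Fields(strings.TrimSpace(line))
        if len(fields) != 6 || fields[0] != "PROXY" {
            return "", nil, fmt.Errorf("not a PROXY v1 header: %q", line)
        }
        return net.JoinHostPort(fields[2], fields[4]), r, nil
    }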

Downsides include: three sockets per client connection (this gets problematic around 1M client connections), lack of information about the SSL negotiation in the HTTP context, and the absence of the graceful restart options that are typical of web servers.

On the plus side, stud is a lot less code than an HTTP server, so it's easier to modify things if you need to. I added SHA-1/SHA-2 cert switching, for example. That would have been doable in an HTTPS server too, but with a lot more code to avoid touching.


I'm not deep into the specifics of HTTPS, but I've been writing a lot of gateways for other protocols and it's never as easy as "just forward the message". Protocols sometimes have state that the proxy must be aware of, and sometimes the forwarding is conditional, which means the proxy must understand both protocols and be able to act on information inside them. Say, for example, you want to block all requests to a specific resource: if your HTTPS server knows about this, it might be able to reject the request before decrypting all of it.
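
A rough Go sketch of that last point (the blocked path is made up; assume `next` is whatever handler does the actual forwarding, e.g. a reverse proxy):

    package gateway

    import "net/http"

    // blockPath wraps the forwarding handler and rejects one path before
    // anything is passed on. This only works because the request has
    // already been decrypted and parsed at this layer.
    func blockPath(blocked string, next http.Handler) http.Handler {
        return http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
            if r.URL.Path == blocked {
                http.Error(w, "forbidden", http.StatusForbidden)
                return
            }
            next.ServeHTTP(w, r)
        })
    }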


Proxying typically loses a lot of information, like the client address, client certificate info and so on. If you take care to carry everything over transparently, it would more accurately be called privilege separation (à la sshd).


SNI, for example. If you're running multiple virtual hosts, the proxy would have to be aware of all of them. But yes, SSL termination is not uncommon, especially if you have a frontend/backend architecture.
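
As a sketch in Go's crypto/tls (hostnames and paths made up): the terminating proxy has to hold a certificate for every virtual host and pick one per handshake from the SNI name.

    package sniproxy

    import (
        "crypto/tls"
        "fmt"
    )

    // sniConfig loads one keypair per virtual host and returns a tls.Config
    // that picks the right certificate per handshake from the SNI name.
    func sniConfig(hosts map[string]string) (*tls.Config, error) {
        loaded := make(map[string]*tls.Certificate)
        for host, base := range hosts {
            cert, err := tls.LoadX509KeyPair(base+".crt", base+".key")
            if err != nil {
                return nil, err
            }
            loaded[host] = &cert
        }
        return &tls.Config{
            // Called once per handshake with the name the client asked for.
            GetCertificate: func(hello *tls.ClientHelloInfo) (*tls.Certificate, error) {
                if c, ok := loaded[hello.ServerName]; ok {
                    return c, nil
                }
                return nil, fmt.Errorf("no certificate for %q", hello.ServerName)
            },
        }, nil
    }

The resulting config can then be handed to tls.Listen or an http.Server on the terminating side.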


We run exactly that setup: Apache on the front end, proxying and doing SSL for a mish-mash of Java EE, .NET and native Apache modules.

There's very little added latency, it allows centralised logging, and TBH Apache is a ton more reliable than anything else out there. It does about 2-3 million requests a day.


That's how Plan 9 does it. SSL is a wrapper around whatever the underlying connection is.



