Add SPDY support to your Apache server with mod_spdy (googledevelopers.blogspot.com)
93 points by joedevon 1698 days ago | 33 comments



Can't wait for SPDY to be implemented for nginx. There was discussion about it several months ago, but I haven't heard anything since.

Does anyone who's involved with nginx have more info?


According to the nginx Twitter, they're targeting a May release: http://twitter.com/#!/nginxorg/status/192301063934705665


They're targeting _May_ release.


OK, I removed the joke about software schedules for the sake of not confusing anyone (especially since I realized this thread will probably rank for [nginx spdy] in like two minutes).


A bit OT, but has anyone noticed that Twitter seems to have stopped using SPDY? Any information on that?


I wonder how SPDY would work in a load-balanced environment.

- Should the load balancer implement SPDY externally and use HTTP internally?
- Should internal requests be 100% SPDY-only and the load balancer implement both?


I think currently the most convenient thing is to use HTTP internally and have something on the front end that does SPDY between end-user and server.

Seems like SPDY would be beneficial as a replacement for HTTP everywhere though, because the payload for each request is smaller. However, SPDY seems to be optimized for the longer, higher latency, and slower connection between user and server rather than the faster internal connection between load balancers and servers. Finally, all SPDY communication is encrypted via TLS, so that process might add to a slowdown between internal systems.

Guess we'll need to benchmark it and see! Looks like Apache and Jetty are the only supporters of this so far. And I'd never use Apache as a direct front-end.


It doesn't really matter.

For simplicity put SPDY on the edge and reverse proxy HTTP to whatever you used to use, this gets you 90% of the way there.
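
As a sketch, the "SPDY on the edge, HTTP inside" setup could look something like this in Apache with mod_spdy and mod_proxy (the hostname, cert paths, backend address, and the `SpdyEnabled` spelling are illustrative assumptions, not copied from the mod_spdy docs):

```apache
# Edge Apache: terminates SSL/SPDY, proxies plain HTTP to backends.
LoadModule spdy_module modules/mod_spdy.so
LoadModule proxy_module modules/mod_proxy.so
LoadModule proxy_http_module modules/mod_proxy_http.so

<VirtualHost *:443>
    ServerName www.example.com              # hypothetical hostname
    SSLEngine on
    SSLCertificateFile /etc/ssl/example.crt
    SSLCertificateKeyFile /etc/ssl/example.key
    SpdyEnabled on                          # mod_spdy switch (assumed name)

    # Backends speak ordinary HTTP on the internal network.
    ProxyPass        / http://10.0.0.10:8080/
    ProxyPassReverse / http://10.0.0.10:8080/
</VirtualHost>
```

The backends never need to know SPDY exists, which is exactly the 90% solution described above.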

In the future, I expect we'll see tools like Mongrel2 take over: SPDY on one end, and some type of messaging on the other side. This will become more desirable with technologies like WebSockets that stream data back. HTTP is simple, but hardly optimized for speed, even on a LAN. Thrift, Protocol Buffers, MessagePack, even Erlang's binary protocol (i.e. BERT) are all better fits for the internal RPC protocol.


From the comments on that article, it looks like Google will only implement SPDY for HTTPS, because Google believes that's the future of all HTTP requests.

I'm not cool with that. I feel like a ton of HTTP requests are for very simple html pages that don't require any kind of login/security and the overhead of HTTPS is of no benefit.


I'm completely cool with it. Modern equipment can handle plenty of SSL traffic, and SSL-by-default will protect me (and you) from lazy developers.

The real question is whether or not this will proliferate and to what extent.


Yeah, but you have to pay for SSL certs that work in everyone's browser without a fat "DON'T GO TO THIS EVIL WEBSITE OMGOMG SELFSIGNED TERRORIST" warning.

Meaning you'd have to pay a tax to put your site on the net once everyone uses only SPDY.


You haven't had to pay for SSL certs for several years now - for example http://www.startssl.com/.


There's been some discussion that you could run SPDY with a self-signed cert and it just wouldn't show the lock. People need to make their opinions on this topic known to Google and Mozilla before the security model is finalized.
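
For reference, generating a self-signed cert for that kind of setup is a one-liner with openssl (filenames and the CN are placeholders):

```shell
# Generate a throwaway self-signed certificate and key, valid one year.
# -nodes skips the passphrase so a server can load the key unattended.
openssl req -x509 -newkey rsa:2048 -nodes \
    -keyout selfsigned.key -out selfsigned.crt \
    -days 365 -subj "/CN=localhost"
```

The resulting cert encrypts traffic just fine; the only thing missing is the third-party identity assertion that the lock icon represents.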


Do you have any info on that? Links, references, mailing list messages, anything? That's super interesting and honestly, the way it should be. HTTPS w/out certificate verification is STILL more secure than HTTP and it should be treated as such - not as the black sheep that it currently is.



Pervasive end-to-end encryption is the only way to defeat deep packet inspection-based spying, throttling, and censorship, which is increasingly popular among governments and ISPs (and even end users e.g. Firesheep). The overhead of encryption is actually tiny, contrary to common perception. It should never be optional.


That's not 100% true: you're always going to pay at least one extra round trip up front for an encrypted connection. At NY-Sydney latency that would whack an extra ~250ms onto the first request; HTTP keep-alive mitigates it a bit.
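
The round-trip arithmetic is easy to sketch. Assuming the ~250ms NY-Sydney RTT above and TLS-1.2-era handshakes (a full handshake costs 2 extra round trips on top of TCP's 1; an abbreviated, resumed handshake costs 1):

```python
RTT = 0.250  # seconds, NY <-> Sydney (assumed figure from the comment)

def first_request_delay(tls=False, resumed=False):
    """Round trips before the first HTTP response byte arrives."""
    round_trips = 1                       # TCP three-way handshake
    if tls:
        round_trips += 1 if resumed else 2  # TLS full vs. abbreviated handshake
    round_trips += 1                      # the HTTP request/response itself
    return round_trips * RTT

print(first_request_delay())                        # plain HTTP:   0.5 s
print(first_request_delay(tls=True))                # full TLS:     1.0 s
print(first_request_delay(tls=True, resumed=True))  # resumed TLS:  0.75 s
```

So a cold TLS connection roughly doubles time-to-first-byte on that route, which is why session resumption and keep-alive matter so much.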


Encrypted protocols can prearrange a session key and/or initialization vector to use for the next connection, dramatically reducing the start-up delay.


If the page is just 'simple html', why does the overhead matter?

I would imagine that the overhead of fetching data from a database, talking to the network, etc, would outweigh the cost of doing an SSL handshake if you bundle your resources correctly.


You're contradicting yourself here. If it's plain HTML, there's none of the database/intranet/etc. work going on, so every bit of overhead is noticeable... which is the point the GP was trying to make.

Not that I share that opinion. I think the whole world needs to move to SSL, even though it's kind of broken in the current model where a select few companies make a crazy killing selling SSL certs (although there are cheap alternatives).


My two points were unrelated.

If it's a simple HTML site - who cares? A simple HTML site with < 100KB of content and < 15 resources to fetch isn't a big deal anyway. Two or three seconds to a user on a mobile device isn't unreasonable.

If the site is more complex, the SSL handshake most likely isn't your bottleneck.

I found this[1] to be an interesting read.

[1]: http://www.semicomplete.com/blog/geekery/ssl-latency.html


This is my biggest concern. It means that whenever you want to run a simple HTTP server, you need a signed certificate. Well, that's a huge hassle. So it's likely shortcuts will be adopted to make HTTPS adoption practical for everyday usage, and that could weaken the entire trust-based certificate system under the demand and pressure of scale.

OK, so now you strengthen the trust system and inadvertently untrust large parts of the unregulated internet. Great, you have just implemented a tiered internet: those who qualify for the trust-based system get rewarded with access to HTTP(S) 2.0, and those who don't are condemned to HTTP 1.X. Congrats guys, great foresight.


I don't think browsers will drop HTTP support in your lifetime. They still do FTP for example.


What? Why not Apache 2.4? That's the current stable version.


I'm glad they made 2.2 packages which is the version deployed on virtually all production servers in the last couple of years.


No Linux distributions that matter are shipping Apache 2.4 yet; e.g., Ubuntu 12.04 will include Apache 2.2.22.


That architecture sounds like a DoS nightmare. Now you can run out of Apache processes much faster! I wonder if mod_spdy has any built-in mitigation for this.


Please stop using mpm_prefork... it is 2012: mpm_event just officially went stable (although it has worked fine for years) and mpm_worker has been the correct option for most of Apache 2.x.

(edit: I make this point because I handle thousands of concurrent requests with a small handful of processes; I run out of memory way before I run out of processes.)
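
For comparison, a minimal mpm_worker tuning block looks like this (the numbers are illustrative, not recommendations):

```apache
# Apache worker MPM: a few threaded processes instead of
# hundreds of single-threaded prefork processes.
<IfModule mpm_worker_module>
    StartServers          2
    ServerLimit           8
    ThreadsPerChild      64
    MaxClients          512   # ServerLimit * ThreadsPerChild
    MinSpareThreads      64
    MaxSpareThreads     128
</IfModule>
```

Eight processes serving 512 concurrent connections is exactly the memory-vs-process tradeoff described above.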


It is 2012, and anyone using PHP with Apache 2.2 will be using mpm_prefork.

Much as the popularity of nginx deployments with PHP/fastcgi has grown, I suspect we'll see more mod_php-less Apache deployments as Apache 2.4 grows in popularity.


AFAIK, mod_php has been compatible with mpm_worker for many years now; it's only that a few PHP extensions are incompatible, and people don't want to spend the time documenting which ones those are, so the default recommendation has always been "use mpm_prefork". If you actually care about your deployment, you can just figure out whether your extensions are compatible.


It's not clear from the post: are they implementing their own MPM, i.e. does mod_spdy take the place of mpm_prefork/worker/event?


[deleted]


That doesn't look right to me. Here's how it looks for me: http://www.dropmocks.com/mBiQ4E


" make BUILDTYPE=Release"

Cool, but there's no Makefile in their source.



