The other protocol tunnels an HTTP connection over another HTTP connection in the opposite direction. Tunneling asynchronous messages over HTTP is an old technique which can be implemented in a browser.
Neither protocol enables any kind of novel functionality. They merely add another layer of HTTP cruft.
For the "works in a browser today" case, a.k.a. Comet, it is again better to solve the more general problem of bidirectional tunneling over HTTP (e.g. http://xmpp.org/extensions/xep-0124.html), through which you could make any sort of connection, including a reversed HTTP connection or the gateway request protocol above.
Imposing an extra HTTP layer and/or building on top of HTTP (in the first case) needlessly complicates and significantly restricts both of these protocols without deriving significant value from existing standards or infrastructure.
XEP0124 ("BOSH") is very similar indeed to what I've defined; the differences are (1) BOSH is XML-specific and (2) it only provides a tunnel between the client (browser or not) and the server. What I've been experimenting with is content neutral, and, crucially, not only specifies the tunnel, but also specifies how the gateway server should expose the application at the end of the tunnel to the rest of the world. That's something that I have not seen before anywhere. (Except, as you mention, by SSH in a limited way.)
With regard to leveraging existing infrastructure: this is exactly why restricting ourselves to carrying HTTP over the transport is a good idea. We get to reuse all existing infrastructure such as URLs, proxies, and of course the ubiquitous HTTP client libraries. Raw TCP sockets, even if the public namespace allocation issue were addressed, have no URL-like notion, and caching proxies for them do not exist; further, raw TCP access is not permitted or not available in many environments (e.g. within corporate firewalls, or running within a browser). Using HTTP rather than TCP is a deliberate choice to structure the network by providing not just a transport (packet-based, at that!) but a notion of addressing and a content model. HTTP out of the box is a much richer protocol than TCP.
In conclusion, what I've proposed is in its transport aspect no more complicated than XEP0124, and in its name-registration aspect AFAIK not comparable to anything currently existing. The restriction to HTTP gives us an addressing model already widely supported and understood, and lets us reuse existing infrastructure and avoid needless reimplementation or reinvention.
This is just off the top of my head, and there are surely better approaches, but the point is that it's quite doable and probably as simple as, or simpler than, something at the HTTP layer.
HTTP's "richness" is also what makes it a pain in the ass. It's a megalomaniacal protocol designed for a very specific purpose and when you are forced to use it for any other purpose, you have to carry a lot of baggage, and the baggage is full of rocks.
This gateway service is nearly always going to be used to create some sort of ad-hoc messaging endpoint, rather than a proper web server with web pages, so why force tunneling over TWO layers of HTTP while precluding the use of any existing wire-level protocols?
It's time we buried the "use HTTP for everything" meme. We already have an everything protocol called TCP and if there's going to be a layer after that, it will be a carefully designed, flexible messaging protocol like AMQP. HTTP adds negative value as a general purpose transport and it's not even that great for serving web sites.
It's interesting you mention AMQP; this actually grew out of some of the work I did on RabbitMQ.
I think the URL request protocol itself would be fairly simple, but any use case would be application-specific, so coming up with a general-purpose implementation might be tricky.
I'm just finishing off the Erlang book and for my first project, I was going to either build a general purpose Comet server (improving on Orbited, Meteor, cometd, etc) or flesh out the above protocol and implement it... or a combination of the two. If you want to offer input or be involved: jedediah at silencegreys dawt kom.
This is apparently "ReverseHTTP" and the specification looks about twenty times longer, needlessly. I'd love to hear any good reasons why I should use this "ReverseHTTP" instead of long-polling or Reverse HTTP, the IETF draft.
My current draft is far too long, I agree; it describes not only the use of HTTP to retrieve requests (which is equivalent to Donovan Preston's idea), but also the interactions and headers needed to manage the tunnelled service. The latter is something that Lentczner & Preston haven't addressed yet, I think.
Neither pull nor push solves the problem that SUP tries to address. For that, a layer on top is required -- essentially an embedded message broker with configurable private/shared queues and bindings. One very promising approach, once the transport is sorted out (which is what ReverseHttp is trying for), is to transplant the AMQP model (objects and operations) into the new setting.
I'm sure there are a lot of contrived examples, but are there any good ones? Facebook and Twitter.
Ok, but are there any good and common ones?
I'm failing to see the advantage of reverse http over long polls, even though I am completely sold on the difference that having server push would make...
The only reason I hadn't used XMLHttpRequest extensively before it became popular (and was renamed Ajax) was that it was too much work, too error-prone. The popularization of Ajax led to solid frameworks and libraries, which fixed that.
Or a bug tracking system. Or a newspaper. Or pretty much any site, really.
I don't really see what this is trying to solve either.
What this is doing is twofold: letting HTTP push be used consistently, with long-polling pushed out to the edges of the network, where it belongs; and making it less of a burden to spin up and shut down HTTP-based services.
The registration and management aspect -- enrollment, in short -- is to HTTP as DHCP is to IP, if you like. It lets you avoid the equivalent of manually assigning IP numbers.
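To make the DHCP analogy concrete, here is a toy, purely in-process model of the enrollment idea: an application claims a public name at the gateway, and the gateway then relays requests addressed to that name to the application's handler. All names, method signatures, and the returned URL shape are invented for illustration; this is not the actual ReverseHTTP API.

```python
# Toy model of gateway enrollment: an app claims a name, and the
# gateway routes requests for that name to the app's handler.
# Everything here is illustrative, not the real ReverseHTTP protocol.

class Gateway:
    def __init__(self, domain):
        self.domain = domain
        self.apps = {}                      # name -> handler callable

    def enrol(self, name, handler):
        """Like DHCP for HTTP endpoints: claim a public name."""
        if name in self.apps:
            raise ValueError("name already taken: " + name)
        self.apps[name] = handler
        return "http://%s.%s/" % (name, self.domain)

    def dispatch(self, name, method, path, body=None):
        """A request from 'the rest of the world' is relayed inward."""
        handler = self.apps.get(name)
        if handler is None:
            return (404, "no such application")
        return handler(method, path, body)

def hello_app(method, path, body):
    return (200, "hello from behind the tunnel")

gw = Gateway("example.com")
url = gw.enrol("myapp", hello_app)          # hypothetical public URL
status, payload = gw.dispatch("myapp", "GET", "/")
```

In the real thing the dispatch step would of course travel over a long-polled HTTP connection rather than a function call, but the name-allocation step is the part that has no existing equivalent.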
Comet is generally used to refer to any method that can emulate a raw socket: iframes, XHR, inserted script tags, etc.
All this is doing is proxying HTTP from the server to the browser and back. It's an interesting thing to try, but I can't see any real-life use for it.
The problem is that this isn't supported by any clients that are in wide-spread use, so you can't just load up a webpage and see a demo.
I've used a very similar technique to link compute nodes to a job server where the compute nodes were behind a NAT. This eliminated any long polling required and still allowed the server to query the nodes for their status.
Again, not the type of thing where you're running anything in a browser, but I wanted to use HTTP as the protocol for simplicity, and needed a way for the server to talk to a client behind a NAT.
Now the downside is that you basically have to rewrite a web server for this to work. I'm not sure this could be bolted on. You also need some sort of session management built in, so you can pair incoming (client->server) requests with outgoing (server->client) requests. And then you need a client library that can spin up its own HTTP server and handle its own requests.
In my case, I was able to write everything from scratch. But I doubt my code would scale very well. I'm also not sure that in this case it isn't better to just make a new protocol. There is a lot of hackery required to get this to work, and I doubt you'll see web browsers support anything like this.
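The NAT-traversal pattern described above can be sketched in miniature: the compute node keeps an outbound request open to the job server, the server answers that request with a command, and the node posts the result back. Queues stand in for the two HTTP legs (long-poll response and ordinary POST); the command names are made up for illustration.

```python
# Sketch of a compute node behind NAT being "queried" by a job
# server. The node initiates both connections, so no inbound
# traffic through the NAT is ever required.

import queue
import threading

commands = queue.Queue()   # server -> node, via the long-poll response
results = queue.Queue()    # node -> server, via an ordinary POST

def compute_node():
    while True:
        cmd = commands.get()               # blocks, like "GET /poll"
        if cmd == "shutdown":
            break
        if cmd == "status":
            results.put(("status", "idle"))  # like "POST /result"

node = threading.Thread(target=compute_node)
node.start()

# Server side: ask the node for its status despite the NAT,
# because the node opened the underlying connection.
commands.put("status")
kind, value = results.get(timeout=5)
commands.put("shutdown")
node.join()
```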
HTTP exists so that any browser can access any web server; it doesn't re-implement or otherwise allow the use of TCP/IP.
As a corollary, I don't see why I need to know about your application's communication protocol, let alone adhere to it because it's now a standard.
This proposal would convert HTTP from a client making requests to a server into (effectively) a server making and receiving requests from another server. So your browser would also be a (mini) server, handling requests from the main server.
This is largely for people that want to use HTTP as a message-passing protocol, but use it in a bi-directional manner between possibly NAT'd hosts.
That is exactly it. You've got it.
HTTP makes an almost ideal message-passing protocol: it has a rich and battle-tested addressing model; it is asymmetric in a helpful way (really! the response codes are similar to ICMP messages, while the requests are similar to IP datagrams); it is widely supported and deployed; and it is content-neutral.
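A minimal loopback sketch of HTTP-as-message-passing: a POST carries a datagram-like message to a named endpoint, and the status code plays the ICMP-like control role (202 as a delivery acknowledgement, 404 as "destination unreachable"). The `/mailbox` path and the payload are invented for the example.

```python
# HTTP used purely to pass messages: POST = datagram, status = ack.

import threading
from http.server import BaseHTTPRequestHandler, HTTPServer
from urllib.request import Request, urlopen

inbox = []

class MessageEndpoint(BaseHTTPRequestHandler):
    def do_POST(self):
        if self.path != "/mailbox":
            self.send_response(404)        # ~ "destination unreachable"
            self.end_headers()
            return
        length = int(self.headers.get("Content-Length", 0))
        inbox.append(self.rfile.read(length))
        self.send_response(202)            # accepted ~ delivery ack
        self.end_headers()

    def log_message(self, *args):          # keep the demo quiet
        pass

server = HTTPServer(("127.0.0.1", 0), MessageEndpoint)
port = server.server_address[1]
threading.Thread(target=server.serve_forever, daemon=True).start()

resp = urlopen(Request("http://127.0.0.1:%d/mailbox" % port, data=b"hello"))
ack = resp.getcode()
server.shutdown()
```

Note how the client needed nothing beyond a stock HTTP library, which is the whole point of the "reuse existing infrastructure" argument above.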
It doesn't even have to be inefficient ;-) (http://www.lshift.net/blog/2009/02/27/streamlining-http)
I don't know. I'm sold on using long polls in conjunction with actors...