Hacker News new | comments | show | ask | jobs | submit login

Check out http://www.orbited.org/ for TCP sockets in the browser (though I don't think it lets you act as a server yet). As you point out, nothing yet handles public namespace allocation, and this is key. For the specific case of TCP servers in the browser, port contention could become an issue fairly quickly; in that case, lifting the level of abstraction to something like XMPP or HTTP (as I've done), where the addressing model is more flexible than TCP's, seems the sensible way to avoid it.

XEP-0124 ("BOSH") is indeed very similar to what I've defined; the differences are that (1) BOSH is XML-specific and (2) it only provides a tunnel between the client (browser or not) and the server. What I've been experimenting with is content-neutral and, crucially, specifies not only the tunnel but also how the gateway server should expose the application at the end of the tunnel to the rest of the world. That's something I have not seen anywhere before. (Except, as you mention, by SSH in a limited way.)
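The "expose the application to the rest of the world" half isn't specified in the thread, so here is only a hedged sketch of one way a gateway might work: clients claim a public name for their tunnel, and the gateway routes incoming requests down the matching tunnel. All names here (`Gateway`, `register`, `dispatch`) are invented for illustration, not taken from the actual protocol.

```python
# Hypothetical sketch of a gateway's name registry: public names map
# to tunnel callbacks, and public-side requests are routed down them.

class Gateway:
    def __init__(self):
        self.registry = {}  # public name -> tunnel callback

    def register(self, name, tunnel):
        """A browser/client claims a public name for its tunnel."""
        if name in self.registry:
            raise ValueError("name already taken: " + name)
        self.registry[name] = tunnel

    def dispatch(self, name, request):
        """Route a request from the public side down the tunnel."""
        tunnel = self.registry.get(name)
        if tunnel is None:
            return "404 no such endpoint"
        return tunnel(request)

# Usage: an in-browser "server" registers under a name, then the
# gateway forwards public requests to it.
gw = Gateway()
gw.register("myapp", lambda req: "echo: " + req)
print(gw.dispatch("myapp", "hello"))   # echo: hello
print(gw.dispatch("other", "hello"))   # 404 no such endpoint
```

The registry is exactly the "public namespace allocation" piece the thread says is missing elsewhere; everything else is plumbing.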

With regard to leveraging existing infrastructure: this is exactly why restricting ourselves to carrying HTTP over the transport is a good idea. We get to reuse all the existing infrastructure: URLs, proxies, and of course the ubiquitous HTTP client libraries. Raw TCP sockets, even if the public namespace allocation issue were addressed, have no URL-like notion, and caching proxies for them do not exist; further, raw TCP access is not permitted or not available in many environments (e.g. within corporate firewalls, or running within a browser). Using HTTP rather than TCP is a deliberate choice to structure the network by providing not just a transport (packet-based, at that!) but a notion of addressing and a content model. Out of the box, HTTP is a much richer protocol than TCP.
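A quick illustration of the "reuse the ubiquitous client libraries" point: any stock HTTP client can talk to an app exposed over HTTP, with URLs, headers, and a content model for free. Here the "application at the end of the tunnel" is just a stdlib HTTP server on localhost standing in for a tunneled endpoint; the client side is plain `urllib` with no special code at all.

```python
# Stand-in for a tunneled app: a trivial HTTP server on a free port.
# The point is that the client side needs nothing beyond a URL.

import threading
from http.server import BaseHTTPRequestHandler, HTTPServer
from urllib.request import urlopen

class EchoHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        body = ("you asked for " + self.path).encode()
        self.send_response(200)
        self.send_header("Content-Type", "text/plain")
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

    def log_message(self, *args):  # silence per-request logging
        pass

server = HTTPServer(("127.0.0.1", 0), EchoHandler)  # port 0 = any free port
threading.Thread(target=server.serve_forever, daemon=True).start()

url = "http://127.0.0.1:%d/hello" % server.server_address[1]
print(urlopen(url).read().decode())   # you asked for /hello
```

Nothing equivalent exists for a raw TCP endpoint: there is no standard way to name it, no generic client, and no caching layer to slot in front of it.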

In conclusion, what I've proposed is, in its transport aspect, no more complicated than XEP-0124, and in its name-registration aspect is AFAIK not comparable to anything currently in existence. The restriction to HTTP gives us an addressing model that is already widely supported and understood, and lets us reuse existing infrastructure and avoid needless reimplementation or reinvention.

URLs could be used with a generalized protocol. The client would specify the URL scheme, port and an arbitrary name and the server would generate and return a URL, or an error if it doesn't support the requested scheme (servers could support a very limited set of schemes and ports, perhaps just one). Raw socket endpoints would use "tcp://host:port" and "udp://host:port". Servers that provide raw sockets would probably want to create a subdomain for each endpoint to avoid port contention. Since the server knows the URL scheme, it can transparently do caching/filtering/mangling for particular protocols. Making a request with the "http" scheme would be functionally equivalent to your reverse HTTP.
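The exchange described above can be sketched in a few lines. The JSON wire format below is invented for illustration (the comment doesn't specify one), as are the supported-scheme table and the subdomain-naming convention: the client asks for a scheme, port, and name, and the server answers with a URL or an error.

```python
# Sketch of the generalized allocation protocol: request in, URL (or
# error) out. Schemes, ports, and the JSON format are all made up.

import json

SUPPORTED_SCHEMES = {"http": 80, "tcp": 10000}  # scheme -> default port

def allocate(server_domain, request_json):
    req = json.loads(request_json)
    scheme, name = req["scheme"], req["name"]
    if scheme not in SUPPORTED_SCHEMES:
        return json.dumps({"error": "unsupported scheme: " + scheme})
    # Per-endpoint subdomains sidestep port contention for raw sockets.
    host = "%s.%s" % (name, server_domain)
    port = req.get("port", SUPPORTED_SCHEMES[scheme])
    return json.dumps({"url": "%s://%s:%d" % (scheme, host, port)})

print(allocate("gateway.example", json.dumps({"scheme": "tcp", "name": "myapp", "port": 5222})))
# {"url": "tcp://myapp.gateway.example:5222"}
print(allocate("gateway.example", json.dumps({"scheme": "gopher", "name": "x"})))
# {"error": "unsupported scheme: gopher"}
```

Because the server sees the scheme at allocation time, it knows what it is fronting and can do protocol-specific caching or filtering, as suggested above; an `"http"` request here is the reverse-HTTP case.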

This is just off the top of my head, and there are surely better approaches, but the point is that it's quite doable and probably as simple as, or simpler than, something at the HTTP layer.

HTTP's "richness" is also what makes it a pain in the ass. It's a megalomaniacal protocol designed for a very specific purpose and when you are forced to use it for any other purpose, you have to carry a lot of baggage, and the baggage is full of rocks.

This gateway service is nearly always going to be used to create some sort of ad-hoc messaging endpoint, rather than a proper web server with web pages, so why force tunneling over TWO layers of HTTP while precluding the use of any existing wire-level protocols?

It's time we buried the "use HTTP for everything" meme. We already have an everything protocol called TCP and if there's going to be a layer after that, it will be a carefully designed, flexible messaging protocol like AMQP. HTTP adds negative value as a general purpose transport and it's not even that great for serving web sites.

The generalized protocol/URL idea is a nice one. I'm not sure it'd be simpler, but it'd certainly be useful.

It's interesting you mention AMQP; what I'm doing here actually grew out of some of the work I did on RabbitMQ.

Ah, you work at LShift. A funny coincidence indeed.

I think the URL request protocol itself would be fairly simple, but any use case would be application-specific, so coming up with a general-purpose implementation might be tricky.

I'm just finishing off the Erlang book and for my first project, I was going to either build a general purpose Comet server (improving on Orbited, Meteor, cometd, etc) or flesh out the above protocol and implement it... or a combination of the two. If you want to offer input or be involved: jedediah at silencegreys dawt kom.
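The poster's project would be in Erlang, but the core mechanic any Comet server (Orbited, Meteor, cometd) is built around is long-polling, which is easy to sketch in a few lines of Python. The `CometChannel` class and its method names are invented for illustration: a client's request blocks until a message arrives or a timeout passes, then the client immediately re-polls.

```python
# A minimal long-polling core, the mechanic underneath Comet servers:
# server pushes into a queue, a blocked client poll picks it up.

import queue

class CometChannel:
    def __init__(self):
        self.pending = queue.Queue()

    def publish(self, message):
        """Server side: push a message to a waiting poller."""
        self.pending.put(message)

    def poll(self, timeout=30.0):
        """Client side: block until a message arrives or time out."""
        try:
            return self.pending.get(timeout=timeout)
        except queue.Empty:
            return None  # client immediately issues the next poll

chan = CometChannel()
chan.publish("hello")
print(chan.poll(timeout=0.1))   # hello
print(chan.poll(timeout=0.1))   # None (nothing pending, timed out)
```

In a real server each long-poll would hold an open HTTP response rather than a local call, and Erlang's cheap processes are a natural fit for holding one blocked poll per connected client.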
