Basic idea is this: You put all your real-time stuff in a message queue (MQ) which communicates directly with the browser. For authentication / authorization and various other forms of permission / logging, you have the MQ communicate with the web framework via http callbacks (Webhooks) and a standard REST API. So the architecture is:
User <--Websocket--> MQ :: publish/subscribe
MQ --Webhooks--> PHP/Django/Servlets/etc. :: user signed on, user joined a channel, etc.
PHP --REST--> MQ :: publish(msg), remove(user, channel), etc.
The key is to include cookie information in the callbacks from MQ -> PHP so the callback happens in the context of the user session. Suddenly you can do things like write a chat app in 30 lines of php + js, or a persistent time series in 20, and it really feels magical.
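The cookie-forwarding handshake described above can be sketched in a few lines. This is an illustrative model, not Hookbox's actual API: the session store, function name, and return shape are all hypothetical, but the flow (MQ forwards the user's cookie so the webhook runs in the context of the existing web session) is the one described.

```python
# Hypothetical sketch of the MQ -> web app webhook handshake.
# SESSIONS and webhook_subscribe are illustrative names, not Hookbox's API.

SESSIONS = {"abc123": {"user": "alice"}}  # cookie -> server-side session store

def webhook_subscribe(cookie, channel):
    """Called by the MQ when a browser tries to join a channel.

    Because the MQ forwards the user's cookie, this handler runs in the
    context of the user's existing web session.
    """
    session = SESSIONS.get(cookie)
    if session is None:
        return {"allowed": False, "reason": "not signed in"}
    # Normal app-level authorization logic goes here.
    allowed = not channel.startswith("admin-") or session["user"] == "root"
    return {"allowed": allowed, "user": session["user"]}

print(webhook_subscribe("abc123", "chat"))  # signed-in user may join
print(webhook_subscribe("nope", "chat"))    # unknown cookie is rejected
```

The web framework never touches the socket; it only answers yes/no questions over HTTP, which is why the app-side code stays so small.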
I actually started Hookbox almost as a statement of irony, because I was really frustrated by the major pushback I was getting against sockets in web browsers at the time. I'd just finished writing/submitting the initial proposal for Websocket, and I wrote this tongue-in-cheek piece about the mismatch between typical web development and network server programming: http://svwebbuilder.wordpress.com/2008/10/20/html5-websocket...
So Hookbox started as a 2-3 day project that took on a life of its own for a while and ended up being really useful. It was one of my smaller open source codebases, and to this day I receive tons of interest and requests for maintenance, though I abandoned it years ago for lack of time.
I'm sure there's a huge market for this sort of thing. It's great to see Pushpin, I'll definitely check it out!
On a previous project, we were spinning up complex pieces of infrastructure using Chef integrated with a fronting Rails app. Realtime updates were always tough to orchestrate, with custom RabbitMQ feedback from the Chef clients being pushed out to Rails clients via JS.
I believe a solution like this one would come in very handy for pushing out realtime updates for long-running infrastructure requests from a distributed system. Kudos!
Glad to see more streaming REST implementations coming alive!
Especially if you are a student trying to keep up with all this new stuff. It is becoming really hard to decide what to learn next or what to focus on.
And as others are saying: know your fundamentals, and know your theory. Get experience doing something real, start to finish, as often as possible; in order to do that, you will have to dive into things. That's how to decide what to learn next.
At the startups I've worked at, things move quickly, and unless you have a personal motivation to learn something, you can't keep up.
It's best to get a cursory view of your available options (star them, bookmark them, commit them to memory) and when the situation arises for a specific problem to solve, you'll know of a handful of options to further investigate.
Focus on understanding various modes of thinking, the various key paradigms of programming. Work on understanding basic algorithms, so that given a problem, you can do some rough math and have a general idea of the bounds involved. Spend some effort understanding the low-level stuff.
Long polling has probably been used for over a decade. You should be able to figure out why long polling exists. Think about, for instance, implementing an IRC chat room in the browser without using any special new features, just HTML and old JS. Once you understand that, you should be able to skim this article and understand the point and value of the tool without getting into the details. For me, the point of reading such articles is to see if someone came up with a new way of viewing something, an idea or perspective that might expand how I deal with another problem in the future. Only secondarily do I think I'll use most of the tools I read about (although it's nice to know such a thing exists, for the day you do need it).
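The core mechanic of long polling can be shown without any HTTP at all. The sketch below is a minimal, assumption-laden model in plain Python threads: the "server" holds the poll open until a message arrives (or a timeout expires), so the client receives pushed data over ordinary request/response, which is exactly the trick an old-JS IRC room would rely on.

```python
import threading

# Minimal model of long polling: the poll blocks server-side until
# there is something newer than the caller's position, then returns.

messages = []
cond = threading.Condition()

def server_poll(last_seen, timeout=5.0):
    """Hold the 'request' open until a message newer than last_seen arrives."""
    with cond:
        cond.wait_for(lambda: len(messages) > last_seen, timeout=timeout)
        return messages[last_seen:]

def publish(msg):
    with cond:
        messages.append(msg)
        cond.notify_all()  # wake every held poll

# Client issues a poll; a publisher fires 100 ms later.
threading.Timer(0.1, publish, args=["hello"]).start()
print(server_poll(0))  # blocks briefly, then prints ['hello']
```

In a real deployment the client immediately re-issues the poll after each response, which is why held connections (and proxies that manage them for you) matter at scale.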
Memqueue is a revision-based queue server with a REST API. Multiple consumers can poll the same queue at different paces by using revisions. A revision is a sort of cursor that lets a consumer specify where in the queue to poll from. If a connection drops, it's not a problem: after you reestablish the connection, you continue from the revision you stopped at. Each time a new message arrives in the queue, the revision is bumped by one, and consumers are expected to poll from the new revision.
It also allows you to specify message & queue expiries so you don't have to manage memory growth.
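The revision-cursor behavior described above can be modeled in a few lines. This is an illustrative in-memory model, not Memqueue's real wire API (class and method names are invented): each publish bumps the revision, and a consumer resumes from whatever revision it last saw, which is what makes dropped connections harmless.

```python
# Toy model of a revision-based queue (not Memqueue's actual API).

class RevisionQueue:
    def __init__(self):
        self.items = []  # item i was published at revision i + 1

    @property
    def revision(self):
        return len(self.items)

    def publish(self, msg):
        self.items.append(msg)
        return self.revision  # each publish bumps the revision by one

    def poll(self, since):
        """Return messages after revision `since`, plus the new cursor."""
        return self.items[since:], self.revision

q = RevisionQueue()
q.publish("a"); q.publish("b")
msgs, cursor = q.poll(0)       # fresh consumer reads everything
# ... connection drops; consumer reconnects with its saved cursor ...
q.publish("c")
msgs2, cursor = q.poll(cursor) # only sees "c", nothing is replayed
```

Because the cursor lives with the consumer, two consumers can read the same queue at completely different paces without the server tracking per-consumer state beyond expiries.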
I'm probably just being ignorant lol. Can you point me to an example company offering a service like this?
In general these services are not available through the public internet so routers with clogged buffers are not usually an issue. But it depends on the SLA.
In the documentation I saw that Pushpin sends a response to the client while it's waiting for the response from the web application, right? How is that possible?
I mean, once you send a response to the client, you can't send any more responses after that.
What makes Pushpin special is that you can control what the outside-facing HTTP exchanges look like, which makes it good for implementing APIs, and may also be useful if you're just really anal about how your client/server interactions work. :)
In the pipeline, Pushpin goes at the very front, just behind a load balancer (if any). The reason is that you could put instances of Pushpin in different geographic locations, all fronting an application in a single location. So you want it the furthest out, closest to any users that might be connected to it.
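The respond-while-waiting puzzle raised earlier resolves once you see that the backend's response to Pushpin is an instruction, not the final payload. The sketch below shows the shape of that instruction using GRIP-style hold headers; treat the exact header names and semantics here as assumptions to verify against Pushpin's own documentation, since this is a plain-Python illustration rather than a working Pushpin backend.

```python
# Sketch of the instruct-the-proxy pattern: the backend answers the
# initial request immediately, but the headers tell the proxy to keep
# the *client* connection open and subscribe it to a channel.
# Header names follow the GRIP convention (assumption -- verify).

def handle_stream_request(channel):
    """Backend view: respond once, then let the proxy keep streaming."""
    headers = {
        "Grip-Hold": "stream",    # ask the proxy to hold the connection open
        "Grip-Channel": channel,  # subscribe the connection to this channel
        "Content-Type": "text/plain",
    }
    body = "[stream opened]\n"    # prelude the client sees right away
    return 200, headers, body

status, headers, body = handle_stream_request("events")
```

So from the backend's point of view the HTTP exchange is over instantly; the proxy is what continues pushing channel data down the still-open client connection.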
I'm not aware of such functionality in HAProxy but would love to hear about it (I'm an HAProxy fan :)).
This looks like a band-aid for developers still stuck with Python, PHP, whatever... technology from the 2000s. I see I've already gotten a down-vote from fanboys, but that's just the way HN works: it's hard to have a constructive conversation, but it's easy to hate.
P.S. @jkarneges, this reply was not directed at you, but whoever down-voted my simple question. I mean, how can you down-vote a question? Let's not question anything and spread love, a la Facebook "like-button-only" style. :(
The article does play up the compatibility with legacy frameworks. However, the proxy approach itself was designed independently of this, and the versatility turned out to be a bonus. Some background:
Basically, I'm positing that as a system gets larger, moving the problem to an outer layer is good design, even if all of your backend code is event-driven. You can use Pushpin and Node together with a straight face. :)
Finally, something that every web developer understands immediately, and that really takes the pain out of realtime for a lot of people.
If you are using WordPress without wp-super-cache, your server is going to have a bad time when a page becomes popular.
update: looks like they fixed it
Also, just curious - how come qt is required?
Qt, because... it's a nice C++ event-driven lib. :)
Must look into Qt then sometime. :)