
Polyglot is a distributed web framework for multiple programming languages - sausheong
https://github.com/sausheong/polyglot
======
Xorlev
"Messsage queue - a queue that receives the messages that represent the HTTP
request. the acceptor accepts HTTP requests and converts the requests into
messages that goes into the message queue. The messages then gets picked up by
the next component, the responder. The implementation of the message queue is
a RabbitMQ server."

Alrighty then. Someone has never scaled RabbitMQ vs. a basic HTTP service. If
raw scalability is what you're looking for w/ a polyglot backend, an edge
service that accepts HTTP and turns those requests into Thrift structs (or
similar) to RPC to various polyglot services might be better for you. This is
the model most use.

However, I'm unsure how this'll be more 'performant' than picking the right
technology from the start and architecting wisely. Generally, the more
performant you want something to be, the simpler you build it, compromising
only where necessary. Thrift/RabbitMQ are definitely complexity compromises.

Complexity is the bane of scalability.

Additionally, if you need pure scalability, you generally have purpose-built
services for each "responder", each load balanced. Pretty similar to this,
minus the message queue.

I imagine having a message queue in the middle of your HTTP response path
could lead to some nasty latency spikes too. Much better to drop a request
with a 503 than have the next N spin for minutes while workers chug through
it. Especially if you're taking in 10K req/s.
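The fail-fast behavior described (shed load with a 503 rather than let a backlog build) can be sketched with a bounded in-process queue; the capacity and status codes here are illustrative:

```python
import queue

# Bounded queue standing in for the broker; the capacity is illustrative.
requests = queue.Queue(maxsize=3)

def accept(request):
    """Enqueue the request, or shed load with a 503 when the queue is full."""
    try:
        requests.put_nowait(request)
        return 202  # accepted for processing
    except queue.Full:
        return 503  # fail fast instead of letting latency pile up

codes = [accept(f"req-{i}") for i in range(5)]
print(codes)  # [202, 202, 202, 503, 503]
```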

Last thought: the benchmarks are lacking in detail and could use a more
thorough job.

~~~
sausheong
Thanks for the comments. The detailed benchmark data is in the perf/
directory, I just did some basic analysis.

------
kcorbitt
Switching from a monolithic framework like Rails to a number of independent
communicating services that handle different responsibilities is a classic
step in scaling. However, to the best of my knowledge that transition usually
involves moving to a mostly-custom setup dependent on the app's specific
needs.

It's not clear exactly what functionality Polyglot provides beyond, say, raw
RabbitMQ, but if it can find a way to encode best practices in a service-
oriented architecture it could be a handy tool for developers going through
this process for the first time.

~~~
sausheong
Polyglot as it is at the moment is an experiment, a prototype. I'm still
fleshing things out, so feedback and contributions are always welcome :)

------
buro9
I had to stop and check where this guy works. 2 companies I know of have just
moved to something fairly similar.

A request is turned into a JSON message pumped into a queue with a signature
declaring the type of message it contains; service discovery reads from the
queue and allocates a service to handle it, shuffling it onto another queue
(and, if necessary, spinning up the service). The service picks up the queued
item, processes it, and hands it back, where the new message may be the
response (in which case it gets handed back) or another service call (in which
case the handler is discovered and the message is assigned to a queue).

It's SOA based on messaging and a basic pipeline. Except they don't call it
that.

Thankfully the applications in question do not have low response time as a
core criterion.

------
seguer
If the goal is the ability to have certain routes processed by different
languages/systems, you could achieve this with reverse proxying (from e.g.
nginx) [1].

That way you can leverage any existing language frameworks and run them as
standard HTTP responders. No need to work with a queue (and add it to the
stack).

You can still limit the HTTP methods each proxy responds to as well [2].

[1]: [http://nginx.com/resources/admin-guide/reverse-proxy/](http://nginx.com/resources/admin-guide/reverse-proxy/)

[2]: [http://stackoverflow.com/questions/8591600/nginx-proxy-pass-...](http://stackoverflow.com/questions/8591600/nginx-proxy-pass-based-on-whether-request-method-is-post-put-or-delete)
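For reference, the per-route proxying described above looks roughly like this in nginx; the paths, upstream addresses, and backend languages are purely illustrative:

```nginx
# Route different paths to backends written in different languages.
location /reports {
    proxy_pass http://127.0.0.1:8001;  # e.g. a Python service
}

location /payments {
    # One way to restrict methods: anything other than POST is denied (403).
    limit_except POST {
        deny all;
    }
    proxy_pass http://127.0.0.1:8002;  # e.g. a Go service
}
```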

~~~
sausheong
Thanks for the suggestion, it's a good one. There are a few cases where a
message queue can be advantageous: (1) persistence; (2) several responders can
work on the same request in parallel; (3) adding/removing responders
dynamically according to load.

These are not common/generic use cases but would be useful under particular
circumstances.

* I could be wrong with (3) -- I'm not very experienced in reverse proxies.

~~~
seguer
You're correct on 3; I actually did this in a large system I wrote and worked
on at my last job.

It wasn't quite dynamic (it required an engineer to set new values for how
many workers you wanted..) but we could do this via a GUI.

For (1) what do you do with the persistence? A web request, in general, is not
important after a few seconds.

For (2) how does Polyglot accept multiple responders for a single request, and
how would it join the responses?

~~~
sausheong
Thanks for the confirmation.

The implementation today is a task queue, which removes the request from the
queue once a responder acknowledges it, but it could be a pub-sub model, where
a number of independent responders work on the same message in parallel and
only one responder needs to return a response. In this case, persisting in the
queue is useful.

An alternative is to chain the responders where one responder can leave a
message in the queue for another responder, and the final responder returns
the response.
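The chaining described can be sketched in-process, with `queue.Queue` standing in for the broker queues; the stage names and transformations are illustrative:

```python
import queue

# Two queues standing in for broker queues, one per responder stage.
parse_q, render_q = queue.Queue(), queue.Queue()

def parse_responder():
    # First responder: does its part, then leaves a message for the next stage.
    msg = parse_q.get()
    render_q.put({"parsed": msg["body"].upper()})

def render_responder():
    # Final responder: produces the response that goes back to the acceptor.
    msg = render_q.get()
    return f"<p>{msg['parsed']}</p>"

parse_q.put({"body": "hello"})
parse_responder()
print(render_responder())  # <p>HELLO</p>
```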

Polyglot is still experimental though, and the current implementation is a
prototype.

------
floatboth
So, it's like mongrel2, but a bit worse: AMQP is centralized, unlike ZeroMQ.

~~~
tlrobinson
That was my thought as well, though I never really understood the point of
Mongrel2.

It's basically a reverse proxy that speaks to upstream application servers
using a custom protocol over ZeroMQ instead of HTTP over TCP? Why is this
better than just using HTTP?

Does anyone here use Mongrel2? Do you like it?

~~~
rubiquity
I haven't used Mongrel2, though I have used ZeroMQ, but I can try to answer
this question:

> _Why is this better than just using HTTP?_

In SOA most of your services aren't going to be exposed publicly. HTTP is a
great protocol for public-facing servers, but it is a very clunky protocol.
For private services, it's a pretty big benefit (performance, scalability, and
ease of parsing) to skip HTTP and use something else. ZeroMQ gives you several
messaging patterns that you would never get from HTTP.
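To make the "ease of parsing" point concrete, here is a minimal length-prefixed JSON protocol sketched over a socket pair; the framing (4-byte big-endian length, then payload) is illustrative, not what ZeroMQ or Mongrel2 actually use:

```python
import json
import socket
import struct

def send_msg(sock, obj):
    # Frame: 4-byte big-endian length, then a JSON payload.
    payload = json.dumps(obj).encode()
    sock.sendall(struct.pack(">I", len(payload)) + payload)

def recv_msg(sock):
    # Far less to parse than an HTTP request: one length, one body.
    length = struct.unpack(">I", sock.recv(4))[0]
    data = b""
    while len(data) < length:
        data += sock.recv(length - len(data))
    return json.loads(data)

client, server = socket.socketpair()
send_msg(client, {"method": "user.get", "params": {"id": 7}})
print(recv_msg(server))  # {'method': 'user.get', 'params': {'id': 7}}
```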

~~~
FooBarWidget
That makes sense if your services do not follow the request-response pattern
(e.g. background workers). But what if they do? In Polyglot, the services very
clearly follow a request/response pattern because they're handling web
traffic. What sense, then, does it make to use a message queue?

Persistence? Makes no sense for web traffic. Even if the message is persisted
to disk, it's useful for a few minutes at most before the user gives up and
closes the tab.

Language-independence? You don't need a message queue for that. You can do
that with regular HTTP.

Scaling and load balancing? Ditto.

~~~
rubiquity
I'm not sure what your comment is getting at. I think HTTP is fine for the
Web, and any service exposed and designed for use by the Web is going to have
to use HTTP. But if you're going to use a framework like Polyglot, you're
going to have several services and any of those services not directly
communicating with the Web doesn't need to speak HTTP.

~~~
FooBarWidget
What I'm getting at is why those services _shouldn't_ speak HTTP. I
understand that they don't _need_ to speak HTTP per se, but I get the feeling
that your comment is implying that such services _should_ speak a non-HTTP
protocol, while I think that it's fine even if those services speak HTTP.

------
dbpokorny
"forcing the deliberate use of different programming languages"

...wait, what? I don't see how this solves anything. It's like asking American
schoolchildren to learn English, Russian, and Chinese before doing math. Makes
no sense.

~~~
vdaniuk
I'd like to share my experience. I started to learn programming by taking
Coursera and other MOOCs in Python, Ruby, JavaScript and C. I feel that my
understanding was greatly enhanced by simultaneously getting to know various
languages.
So this may be a nice learning exercise.

------
SEJeff
Also relevant to building microservices in a somewhat similar fashion:
[https://github.com/koding/kite](https://github.com/koding/kite)

------
datashaman
This sounds very similar to tir / mongrel2.

------
shebson
This is cool, but the naming is unfortunate as Polyglot is already a somewhat
popular library for doing internationalization in Javascript:
[https://github.com/airbnb/polyglot.js](https://github.com/airbnb/polyglot.js)

------
ilaksh
Seems like this is pretty much what every single system does that is based on
a web application that scales with a message queue.

With Polyglot are there standard SDKs for responders or acceptors?

I think we should relate this to that OCaml MirageOS thing and the idea of a
common knowledge representation for program generation. I think the pattern
with a queue has a fairly close correspondence with some common OOP patterns.

We are repeating the same patterns over and over in different contexts for
different applications. I think that, given the semantic representations and
programming languages we have, if we created a good common dictionary and
referenced that rather than restating everything in different forms, then we
could get much better code reuse.

------
mcguire
" _1\. Acceptor_

" _2\. Message queue_

" _3\. Responder_ "

So, a SOA?

~~~
hangonhn
It's kind of amazing how often SOA is rediscovered by people. It's not at all
exotic either. It's fairly common in enterprise applications but I guess maybe
most people don't write enterprise apps?

My first thought when I read the article was "Did he just reinvent the wheel?"

It's in Go so that's kind of neat.

------
programminggeek
I actually built something very similar 2 years ago
[http://radial.retromocha.com](http://radial.retromocha.com)

Mine used a node proxy instead of a message queue, but same basic idea. It
makes scaling and changing languages so much easier.

Really, the trick is having a standard message protocol that everything abides
by. Once you have that, building a proxy and frameworks around it is pretty
trivial. I chose something similar to JSON-RPC and for what I wanted/needed it
worked well.
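A JSON-RPC-style envelope like the one described takes only a few lines; the field names below follow JSON-RPC 2.0 for illustration and are not Radial's actual protocol:

```python
import json

def make_request(method, params, req_id):
    # A standard envelope every service agrees on, regardless of language.
    return json.dumps({"jsonrpc": "2.0", "method": method,
                       "params": params, "id": req_id})

def make_response(result, req_id):
    # The response echoes the id so callers can match it to their request.
    return json.dumps({"jsonrpc": "2.0", "result": result, "id": req_id})

req = json.loads(make_request("add", [2, 3], 1))
resp = json.loads(make_response(sum(req["params"]), req["id"]))
print(resp)  # {'jsonrpc': '2.0', 'result': 5, 'id': 1}
```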

It never saw any kind of scale, but it was a fun project.

~~~
FooBarWidget
Why does it make scaling and changing languages easier? How is using an HTTP
reverse proxy/load balancer not just as easy, if not easier?

------
jgill
I thought of Polyglot by AirBnB when I first saw this post,
[https://github.com/airbnb/polyglot.js](https://github.com/airbnb/polyglot.js)

------
webmaven
Acceptors / Responders feel a bit like Python's WSGI model, with the addition
of a queue in the middle for connecting gateways to applications. I suspect
that the similarity extends to JSGI (Javascript), SCGI, and PCGI (perl).

------
EGreg
What is the benefit of having this vs say a monolithic framework?

~~~
pmontra
So you can write the new functionality in the new popular framework and keep
the old one running. Why that's a better choice... well, it depends on the
case.

However this is a bit like a reverse proxy that load balances many different
web apps. You could have different applications written in different languages
serving requests to the same url.

Furthermore, I don't see anything here that load balancers haven't done since
the '90s. Maybe I'm missing something, but maybe that's why everybody is
puzzled.

~~~
sausheong
It's supposed to be more fine-grained than load balancers, and you should be
able to scale up and down different parts of the same web app dynamically.
That probably didn't come across well in the write-up.

------
GUNHED_158
I was wondering: why RabbitMQ and not ZeroMQ? Just for my own knowledge.

~~~
SolarNet
They solve slightly different (architecture) problems and they are on
completely different ends of the message queue library spectrum of "usability
vs. customizability".

RabbitMQ is a "batteries included" solution. ZeroMQ is a roll your own sort of
library. If you just want a message queue use RabbitMQ. If you want to build
your own message queue system (with complex or specific requirements) use
ZeroMQ.

------
mmgutz
vert.x?

~~~
cordite
vert.x is limited to languages with supported implementations and hooks in the
JVM world.

------
CmonDev
Based on examples, it seems to be meant for dynamic scripting languages,
rather than programming languages?

~~~
sausheong
Not really. I started with Ruby, Python and PHP, but it's very much possible
with compiled languages like C/C++. There's a whole bunch of client software
that allows you to communicate with RabbitMQ -
[https://www.rabbitmq.com/devtools.html](https://www.rabbitmq.com/devtools.html)

