Reactive Manifesto (reactivemanifesto.org)
59 points by satyampujari on Dec 11, 2013 | 45 comments



I've been reading to learn about tech for over a decade. It used to be the case that anything that didn't make sense was a sign that I was a dumb baby who needed to learn more. As I progressed and started exploring beyond dense technical tomes for knowledge, I learned the hard way that other people's standards for what they'll publish online in a professional capacity are lower than I assumed. They could actually just be talking out of their asses. If a webpage is full of loftily written buzzwords, the purpose is to dupe me, regardless of what the distilled one-paragraph version would say.


There's nothing like the fiery, strident rhetoric of a manifesto to get one's heart pounding.

"Application requirements have changed dramatically in recent years."

I mean, if that doesn't make you want to lose your chains, I don't know what will.


I like the broad idea (applications should be very responsive). Yes, this is very important, but I can't shake the feeling that the author of the manifesto has very little experience with responsive software (or even with software generally).

For example, the manifesto confuses ends with means. It states a desired end, but then claims that certain means are required to get there (for example, "event-driven"). Maybe event-drivenness should come into play in a given system, maybe it shouldn't; across a broad set of domains it is orthogonal to the concept of responsiveness.

In video games, for example, we do things that are extremely responsive compared to web stuff (last week I worked on something that had to run at 200 frames per second in order to meet requirements). Interactive 3D rendering systems are most certainly not event-driven; they derive their responsiveness from cranking through everything as quickly as possible all the time.
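
To make the polling-versus-events contrast concrete, here is a minimal sketch of the kind of loop I mean (Scala just for illustration; update and render are placeholders, and real engines are far more involved):

    object GameLoop {
      private val FrameNanos = 1000000000L / 200 // budget for 200 frames per second

      def update(): Unit = () // placeholder: advance the simulation state
      def render(): Unit = () // placeholder: redraw everything, every frame

      def main(args: Array[String]): Unit = {
        while (true) {
          val start = System.nanoTime()
          update() // no events: we crank through everything...
          render() // ...on every single frame
          // sleep off whatever remains of the frame budget
          val left = FrameNanos - (System.nanoTime() - start)
          if (left > 0) Thread.sleep(left / 1000000L, (left % 1000000L).toInt)
        }
      }
    }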

There are lots of different domains of software out there, and they have all found different local attractors with regard to what techniques work and produce the best result. Web software is just one of these domains, and frankly, it isn't doing so well in terms of quality compared to some of the others. So I think if one wants to write a manifesto like this, step one should be to get out of the Web bubble for a while and work hard in some other domains, in order to get some breadth and find some real solutions to return with.


I made a new diagram they can add to this "manifesto" if they like - http://i.imgur.com/ll51WJ3.png


Reference for those who don't get it (I had to look it up): http://www.timecube.com/ The original image appears about 1/3 of the way down.


I think...well, I know...I just...

What?


wait wait wait where did you get the internal documentation for express?


My first reaction to reading this? If you follow this design the way they say, you're going to wind up with a confusing mess with no visibility into why your buzzword-compliant application is dog slow.

Very carefully thought out and pervasive monitoring is an under-acknowledged but utterly essential part of Google's recipe for success.


First reactions can be misleading. I'm currently taking the Reactive Programming course by Martin Odersky and Erik Meijer, over at Coursera, and one of their stated goals is to reduce complexity, specifically the complexity of what they call "callback hell".

Not sure if Reactive Programming achieves this lofty goal, and buzzwords are always annoying, but I'd be wary of dismissing RP out of hand.

(I'm still undecided on its merits, by the way)
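
For a taste of what "reducing callback hell" means in practice, here is a minimal sketch (fetchUser and fetchOrders are hypothetical stand-ins for real async I/O): composing Scala Futures keeps the pipeline flat where raw callbacks would nest:

    import scala.concurrent.Future
    import scala.concurrent.ExecutionContext.Implicits.global

    object CallbackFree extends App {
      // Hypothetical async lookups standing in for real I/O.
      def fetchUser(id: Int): Future[String] = Future(s"user-$id")
      def fetchOrders(user: String): Future[List[String]] = Future(List(s"$user-order-1"))

      // Instead of callback-inside-callback, compose a flat pipeline:
      val report: Future[String] =
        for {
          user   <- fetchUser(42)
          orders <- fetchOrders(user)
        } yield s"$user has ${orders.size} order(s)"

      report.foreach(println)
    }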


Personally, I found the outline of the Coursera course you mentioned a lot more interesting than the "manifesto":

https://www.coursera.org/course/reactive


Yes, they aim to reduce the complexity of the code.

The issue is what tools you have for digging in when you're asked, "Why is this page taking 5 seconds to load?" With a traditional single-threaded application there is the inherent simplicity that you can profile it, look at timings, and see where the performance went. With an asynchronous distributed application, you have to do a lot more work before you can even start digging in.

The reason why this matters is that there are always some boneheaded performance mistakes. They would be trivial to fix if you only knew what to change. Without visibility, you won't be able to find where they are - you're just stuck suffering the consequences.

I'm definitely not saying that this is impossible. Far from it - Google succeeds brilliantly. But the kind of behind-the-scenes pervasive visibility that you need is an essential component, and it is not something that happens by accident or is trivially retrofitted on.
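
One standard way to claw some of that visibility back (my own toy sketch, not something the manifesto prescribes; traced and loadPage are made-up names) is to thread a correlation id through every async hop and log per-step timings, so a slow page can be reassembled from the logs:

    import scala.concurrent.Future
    import scala.concurrent.ExecutionContext.Implicits.global

    object Tracing extends App {
      // Tag every async step with the request's correlation id and log
      // its duration, so slow requests can be pieced back together.
      def traced[A](corrId: String, step: String)(work: => Future[A]): Future[A] = {
        val start = System.nanoTime()
        val f = work
        f.onComplete { _ =>
          println(s"[$corrId] $step took ${(System.nanoTime() - start) / 1000000} ms")
        }
        f
      }

      // Hypothetical page assembled from two async calls:
      def loadPage(corrId: String): Future[String] =
        for {
          user <- traced(corrId, "fetchUser") { Future("alice") }
          feed <- traced(corrId, "fetchFeed") { Future(s"feed-for-$user") }
        } yield s"$user: $feed"

      loadPage("req-42")
    }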


They don't just talk the talk, they walk the walk. Typesafe (the company founded by Martin Odersky) offers Typesafe Console as a part of their Reactive Platform.

There you can see lots of data related to an application using Akka, including its performance and possible bottlenecks.

You can use it easily from the web browser; the only downside is the large amount of RAM it uses.


Reduce complexity? In the second week of the course they removed the submission limit and doubled the submission window. The forum was a mass of head scratching.

Edit:

How can you downvote an observation? Feel free to comment instead. I enjoy Scala, which is why I joined the course, but I thought the above added weight to the argument that Scala is too complex/academic/etc.


(I didn't downvote you)

I agree that the exercise from week 2 (the epidemy simulation) was a mess. It was barely about reactive programming though. The mess was introduced mostly because the exercise has lots of mutable state. That the tests cheated, bypassed the API and mutated the internal state of the simulation didn't help either. But I assumed one of the aims of this exercise was to show how mutable state can mess things up, even though this wasn't officially acknowledged by the staff.
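
To illustrate the mutable-state point with a toy of my own (not the course's code): if each simulation step returns a new immutable value, a test can't reach in and corrupt internal state behind the API's back:

    import scala.util.Random

    // Toy immutable disease simulation: stepping produces a *new* World,
    // so nothing outside can mutate the simulation's internals.
    final case class Person(id: Int, infected: Boolean)
    final case class World(people: Vector[Person], day: Int) {
      def step(infectionRate: Double, rng: Random): World = {
        val next = people.map { p =>
          if (!p.infected && rng.nextDouble() < infectionRate) p.copy(infected = true)
          else p
        }
        World(next, day + 1)
      }
    }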

Your observation about Scala is unwarranted though. Nothing about the exercise's difficulty has to do with Scala. It's also not "too complex" or "academic". It's an extremely simplistic discrete event simulation of a disease, almost like a cellular automaton... How can this be "too complex"?

A lot of the head scratching was caused because the course is open to anyone. This is a good thing, but it also means a lot of people taking the course barely have the fundamentals of programming down. If you read the forums, a lot of people don't know how to write test cases or debug code, for example. These people would have found almost any non-trivial exercise in any language difficult.


"It works best if the compartments are structured in a hierarchical fashion, much like a large corporation where a problem is escalated upwards until a level is reached which has the power to deal with it."

Because when I think "effective, responsive, and scalable decision-making", I definitely think "big ol' ORG CHART", amirite? /snark


This talk by Jonas Bonér (one of the Akka devs) covers the motivations behind this well: http://parleys.com/play/51c0c876e4b0d38b54f461f6/chapter0/ab...
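
The concrete mechanism behind that corporate-hierarchy quote is Akka's supervision; a minimal classic-Akka sketch of "handle it locally or escalate upwards" (Worker is a made-up child actor):

    import akka.actor._
    import akka.actor.SupervisorStrategy._

    // A worker whose messages may blow up (e.g. division by zero).
    class Worker extends Actor {
      def receive = { case n: Int => sender() ! 100 / n }
    }

    // The parent decides per failure: deal with it here, or escalate.
    class Supervisor extends Actor {
      override val supervisorStrategy =
        OneForOneStrategy(maxNrOfRetries = 3) {
          case _: ArithmeticException => Resume   // shrug it off at this level
          case _: Exception           => Escalate // kick it up the hierarchy
        }

      private val worker = context.actorOf(Props[Worker], "worker")
      def receive = { case msg => worker forward msg }
    }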


So basically... Tcl. Which got all of that years and years ago. Except of course 'scaling', which doesn't make a lot of sense for the desktop environment it was created in. Actually, come to think of it though, it did get that too, via AOLServer.


I'm really curious about this, do you mind expanding on it?


I've always been fascinated by TCL, so I second girvo's request. I'd like to know more, with possibly some links to in-depth resources, or some insight (as TCL is not that "in" nowadays).


As an example, much of FlightAware is written in Tcl.

http://wiki.tcl.tk/15990


Well, it was a pretty vague "manifesto", but Tcl (not TCL) did the event-driven thing before it was cool, back in the '90s. That made it quite responsive for Tk GUIs.

Erlang does a lot of the stuff in their manifesto too.


Uhhh... were you aware that basically all the GUI OSes - Windows 1.0, MacOS, AmigaOS, etc. - were event driven? Writing desktop applications for any of those OSes involved writing lots of event handlers. And even before that, in the 80s when Lotus 1-2-3 reigned in the PC world, it too was built around an event loop. Same with Microsoft's Word 1.0 for MS-DOS.

TCL did not have anything to do with event loops. Its claim to fame was that it was a simple language with a small footprint that was easy to integrate into any kind of application. Tk was a GUI like all the others, but you were able to write your event handlers in TCL. That doesn't make TCL into an event-driven system. It just shows that when you have an event-driven system you can factor it into two parts: the event-driven core and the event handlers. Then the event handlers can easily be written in a higher-level language to reduce the lines of code and improve productivity.
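
That factoring - an event-driven core plus handlers bolted on in a higher-level language - is easy to sketch; here is a toy version in Scala, with the handler map playing the role Tcl scripts played for Tk (the event names are made up):

    // Toy factoring of an event-driven system: a dumb dispatching core,
    // with handlers registered from "outside" in a higher-level layer.
    object EventLoop {
      type Handler = String => Unit
      private var handlers = Map.empty[String, Handler]

      def bind(event: String)(h: Handler): Unit = handlers += (event -> h)

      def dispatch(event: String, payload: String): Unit =
        handlers.get(event).foreach(_(payload))
    }

    object Demo extends App {
      // The "scripting" layer: handlers written at a higher level.
      EventLoop.bind("click") { where => println(s"clicked at $where") }
      EventLoop.dispatch("click", "10,20")
    }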


I didn't say that Tcl was unique in this approach, just that it did it before it was 'cool'.

Tcl is very much an event-driven system if you want it to be - events are fairly deeply ingrained into how it works.


Horses for courses. Async architectures are great for distributed back-end systems (God I love making streaming data-science applications)... but not so great for on-the-metal embedded devices, where static and limited computational capacity and whole-system predictability requirements drive you in the opposite direction. I do a lot of machine vision stuff for embedded systems, and having everything driven by the drum-beat of a frame interval really does simplify things a lot, particularly when it comes to finding the absolute cheapest hardware that will be able to perform a given function.


I read this awesome book on multithreading embedded systems: "Practical UML Statecharts in C/C++: Event-Driven Programming for Embedded Systems"

The author has an amazing stack for pseudo real-time scheduling on seriously limited hardware: http://www.state-machine.com/ . QP-nano brings async real time to PIC processors! So there really is little overhead for async architectures. I think async squeezes more out of limited hardware by avoiding busy loops waiting for IO or other synchronizations to align.

I agree it's more difficult to develop with async, but I strongly disagree that going non-async saves you money on hardware.


It's not really about the overhead. The intrinsic overhead of async is trivial, anyway. It is more about predictability and how the system is understood. Async architectures let you have simple, easy-to-understand components, but at the expense of a more complex macro control flow. Hardware and embedded guys spend a lot of time looking at storage 'scopes, and like things to be nice and periodic... it just fits in with the world-view a bit better. It is not so much that one is objectively better than the other; it is just about how different people's mentalities work.


Apparently written for the sole purpose of feeling superior to the 95% of all developers whose applications in no way - economical, practical or otherwise - justify such architecture astronautics.

Yes, there are applications that need this, and I love to read about them, and learn from them. From real-world solutions to real-world problems, that is. Not via some arrogant, abstract this-is-the-one-true-path manifesto.


I found out that a build I was doing was utilising http://imgur.com/U70Rcrz. I don't see that very often. Personally I find that kind of parallel-processing CS magic impressive. Similarly, I like the possibility of processing data as it arrives, in a non-blocking fashion, in pipes-and-filters chains of computation. Another thing I like is having arbitrary numbers of stateless, possibly short-lived servers, like you see at PaaSes such as Heroku. Easy load balancing, easy recovery. These are all things that become possible by employing programming paradigms other than the classical thread-based, blocking, stateful ones.

And it's not confined to one tech stack either; .net, javascript, java, scala, ...

To me it's just good computer science for when you want to be near real-time, scale horizontally, utilise your hardware resources, and be able to handle failure.
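
A tiny sketch of the non-blocking pipes-and-filters idea (my illustration, in Scala): with a lazy Iterator, each element flows through the whole chain as it arrives instead of being materialised in batches:

    // Pipes and filters over a lazy Iterator: elements are pulled through
    // the whole chain one at a time, as they arrive.
    object Pipeline extends App {
      val source: Iterator[String] = Iterator("17", "oops", "4", "25")

      val pipeline: Iterator[Int] =
        source
          .flatMap(s => scala.util.Try(s.toInt).toOption) // parse, drop junk
          .filter(_ % 2 == 1)                             // keep odd values
          .map(n => n * n)                                // transform

      pipeline.foreach(println) // prints 289, then 625
    }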


The people behind the reactive manifesto are typesafe.com

Unsurprisingly, they have products to sell that are manifesto-compliant.


Some thoughts about manifestos in general:

"A manifesto is about moral authoritarianism: an absolutist statement of eternal values from which follows (typically) an absolutist ideal of the good life. If there is one thing that most defines a manifesto, it is what it lacks: a central place for uncertainty."

"The problems Haque identifies cannot be solved with manifestos because they are problems, not karmic punishments for espousing false values that will go away through the embrace of the “right” values."

http://www.ribbonfarm.com/2013/11/13/the-gooseberry-fallacy/


'Show your support with a ribbon' that breaks when the site goes responsive.


here is my manifesto: react to everything and you're a slave. create reactions and you're a ruler. balance action and reaction and you're a master.


A nice moment for the pioneers, such as Joe Armstrong. It seems like they are finally understood. There would be no Scala without Erlang and no such paradigm without Martin Odersky. I am happy and grateful that I could read their books and use the products of their efforts.



Hm, the reactive manifesto doesn't say that it's new. It tries to convert people from bloated ol' Java/C# to better practices that fit new common needs. Of course Reactive is not new... Nor realtime... But an ecommerce website, for instance, never had to wonder about this kind of thing before: new needs.


It is not just about Java bloatware; it is already obvious that it is crap, and several "fixes", notably Scala and Clojure, have already matured. It is mostly about understanding and avoiding other broken-by-design things, such as thread-mutex based "concurrency", mutable, non-parallel collections, and imperative programming (all those Java loops doing mutations) in general. It is about the ideas summarized by Joe Armstrong in his thesis: that the world is parallel, and that the actor model and share-nothing architecture, together with fault tolerance and message passing via unreliable channels as the only way of communication, is a more appropriate paradigm than the current imperative-pthread-mutex mess.
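
A minimal Akka sketch of that share-nothing, message-passing style (my illustration, not Armstrong's Erlang): the state lives inside one actor and is only ever touched via messages, with no locks and no shared memory between threads:

    import akka.actor._

    case object Increment
    case object Get

    // All mutable state is private to the actor; the only way in or out
    // is a message, so there is nothing to lock and nothing shared.
    class Counter extends Actor {
      private var count = 0
      def receive = {
        case Increment => count += 1
        case Get       => sender() ! count
      }
    }

    object Main extends App {
      val system  = ActorSystem("demo")
      val counter = system.actorOf(Props[Counter], "counter")
      counter ! Increment
      counter ! Increment
      system.terminate()
    }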


Whoever created this page obviously never tried to resize the viewport to something like 320 by 240.


I did, and I didn't.

Sorry, but I've never seen this resolution in my analytics logs. Next time I'll think about it ;)


Also please target Lynx, Mosaic, and Netscape 2 in the future.



You win, sir! This is the browser of the future.


Yes, please! http://i.imgur.com/yMzh9jQ.png

EDIT: It's nothing more than a joke... What if we had Deluxe Browse II?

But more seriously, who browses in 320x200? Does anybody follow 256-color web-safe palettes anymore?



The Reactive Manifesto page can't react to resizes. Oh the irony.


It does; it's fully responsive. I believe the complaint is that the header sticks to the top, so when you have a tiny screen you can't read much.



