"Application requirements have changed dramatically in recent years."
I mean, if that doesn't make you want to lose your chains, I don't know what will.
For example, the manifesto confuses ends with means. It states a desired end, but then claims that certain means are required to get there (for example, "event-driven"). Maybe event-drivenness can come into play in a given system, maybe it shouldn't; across a broad set of domains this is orthogonal to the concept of responsiveness.
In video games, for example, we do things that are extremely responsive compared to web stuff (last week I worked on something that had to run at 200 frames per second in order to meet requirements). Interactive 3D rendering systems are most certainly not event-driven; they derive their responsiveness from cranking through everything as quickly as possible all the time.
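To make the contrast concrete, here's a minimal sketch of that style (names and numbers hypothetical): a fixed-timestep loop that polls and redraws everything every frame, deriving responsiveness from brute-force iteration rather than from waiting on events.

```java
// A minimal fixed-timestep render loop: responsiveness comes from
// doing all the work every frame, not from reacting to events.
class GameLoop {
    int frame = 0;

    // Hypothetical per-frame work: poll input, step simulation, render.
    void step() { frame++; }

    // Run at a fixed rate (hz) for a bounded number of frames.
    void run(int frames, int hz) throws InterruptedException {
        long frameNanos = 1_000_000_000L / hz;
        long next = System.nanoTime();
        for (int i = 0; i < frames; i++) {
            step();                    // crank through everything, every frame
            next += frameNanos;
            long sleepNanos = next - System.nanoTime();
            if (sleepNanos > 0)        // wait out the remaining frame budget
                Thread.sleep(sleepNanos / 1_000_000L);
        }
    }
}
```

At 200 Hz the frame budget is 5 ms; anything that doesn't fit is simply a dropped deadline, which is why these systems obsess over per-frame cost rather than event latency.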
There are lots of different domains of software out there and they all have found different local attractors with regard to what techniques work and produce the best result. Web software is just one of these domains, and frankly, it isn't doing so well in terms of quality compared to some of the other ones. So I think if one wants to write a manifesto like this, step one should be to get out of the Web bubble for a while and work hard in some other domains in order to get some breadth and find some real solutions to return with.
Very carefully thought out and pervasive monitoring is an under-acknowledged but utterly essential part of Google's recipe for success.
Not sure if Reactive Programming achieves this lofty goal, and buzzwords are always annoying, but I'd be wary of dismissing RP out of hand.
(I'm still undecided on its merits, by the way)
The issue is what tools you have to dig in with when you're asked, "Why is this page taking 5 seconds to load?" With a traditional single-threaded application there is the inherent simplicity that you can profile it, look at timings, and see where the performance went. With an asynchronous distributed application, you have to do a lot more work before you can even start digging in.
The reason why this matters is that there are always some boneheaded performance mistakes. They would be trivial to fix if you only knew what to change. Without visibility, you won't be able to find where they are - you're just stuck suffering the consequences.
I'm definitely not saying that this is impossible. Far from it: Google succeeds brilliantly. But the kind of pervasive, behind-the-scenes visibility you need is an essential component, and it is not something that happens by accident or can be trivially retrofitted.
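As a rough illustration of how cheap that visibility is in the single-threaded case (a sketch, names hypothetical): wall-clock timing around a call is often all it takes, because the work and the clock live on the same thread.

```java
import java.util.function.Supplier;

// A minimal timing wrapper: in a single-threaded app, timing around a
// call is often enough to see where the time went -- no distributed
// tracing machinery required.
class Timing {
    // Run the body, print how long it took, and return its result.
    static <A> A timed(String label, Supplier<A> body) {
        long t0 = System.nanoTime();
        A result = body.get();
        long elapsedMs = (System.nanoTime() - t0) / 1_000_000L;
        System.out.println(label + ": " + elapsedMs + " ms");
        return result;
    }
}
```

Once the same request fans out across async boundaries and machines, no single clock sees the whole path, and this trick stops working — hence the need for the pervasive monitoring described above.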
There you can see lots of data related to an application using Akka, including its performance and possible bottlenecks.
You can use it easily from the web browser; the only downside is the large amount of RAM it uses.
How can you downvote an observation? Feel free to comment instead. I enjoy Scala, which is why I joined the course, but I thought the above added weight to the argument that Scala is too complex/academic/etc.
I agree that the exercise from week 2 (the epidemy simulation) was a mess. It was barely about reactive programming, though. The mess was introduced mostly because the exercise had lots of mutable state. That the tests cheated, bypassing the API and mutating the internal state of the simulation, didn't help either. But I assumed one of the aims of this exercise was to show how mutable state can mess things up, even though this wasn't officially acknowledged by the staff.
Your observation about Scala is unwarranted, though. Nothing about the exercise's difficulty has to do with Scala. It's also not "too complex" or "academic". It's an extremely simplistic discrete-event simulation of a disease, almost like a cellular automaton... How can this be "too complex"?
A lot of the head scratching was caused because the course is open to anyone. This is a good thing, but it also means a lot of people taking the course barely have the fundamentals of programming down. If you read the forums, a lot of people don't know how to write test cases or debug code, for example. These people would have found almost any non-trivial exercise in any language difficult.
Because when I think "effective, responsive, and scalable decision-making", I definitely think "big ol' ORG CHART", amirite? /snark
Erlang does a lot of the stuff in their manifesto too.
Tcl did not have anything to do with event loops. Its claim to fame was that it was a simple language with a small footprint that was easy to integrate into any kind of application. Tk was a GUI toolkit like all the others, but you were able to write your event handlers in Tcl. That doesn't make Tcl an event-driven system. It just shows that when you have an event-driven system you can factor it into two parts: the event-driven core, and the event handlers. Then the event handlers can easily be written in a higher-level language to reduce the lines of code and improve productivity.
Tcl is very much an event-driven system if you want it to be; event loops are fairly deeply ingrained into how it works.
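Either way, the factoring described above is easy to sketch (names hypothetical, not Tk's actual API): a small event-driven core owns the dispatch loop, while higher-level code only registers handlers — the way Tk dispatches to handlers written in Tcl.

```java
import java.util.HashMap;
import java.util.List;
import java.util.Map;
import java.util.function.Consumer;

// A small event-driven core: the core owns dispatch; higher-level
// code only describes reactions by binding handlers to event names.
class EventCore {
    private final Map<String, Consumer<String>> handlers = new HashMap<>();

    // Higher-level code registers a handler for a named event.
    void bind(String event, Consumer<String> handler) {
        handlers.put(event, handler);
    }

    // The core's dispatch loop: look up each event's handler and run it.
    void dispatch(List<String[]> events) {
        for (String[] e : events) {            // e = {name, payload}
            Consumer<String> h = handlers.get(e[0]);
            if (h != null) h.accept(e[1]);
        }
    }
}
```

The split is exactly the productivity win claimed above: the core stays small and fixed, while the handlers — the part that changes per application — live in whatever higher-level language is convenient.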
The author has an amazing stack for pseudo-real-time scheduling on seriously limited hardware, http://www.state-machine.com/ . QP-nano brings async real time to PIC processors! So there really is little overhead for async architectures. I think async squeezes more out of limited hardware by avoiding busy loops that wait for IO or other synchronization to align.
I agree it's more difficult to develop with async, but I strongly disagree that going non-async saves you money on hardware.
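A minimal sketch of the point about busy loops (the "IO" here is hypothetical, simulated with a delay on another thread): instead of spinning until a result is ready, register a continuation and free the thread for other work.

```java
import java.util.concurrent.CompletableFuture;

// Async instead of busy-waiting: no polling loop burns cycles while
// the "IO" is in flight; the continuation runs when the result arrives.
class AsyncIo {
    // Stand-in for a device or socket read that completes later.
    static CompletableFuture<Integer> readLater(int value) {
        CompletableFuture<Integer> f = new CompletableFuture<>();
        new Thread(() -> {
            try { Thread.sleep(10); } catch (InterruptedException ignored) { }
            f.complete(value);     // "IO" done: deliver the value
        }).start();
        return f;
    }

    // Chain work onto the result instead of spinning until it's ready.
    static CompletableFuture<Integer> doubled(int value) {
        return readLater(value).thenApply(v -> v * 2);
    }
}
```

On a PIC-class part with no threads to spare, the same idea shows up as run-to-completion event handlers rather than futures, but the saving is the same: the CPU never idles in a wait loop.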
Yes, there are applications that need this, and I love to read about them and learn from them. From real-world solutions to real-world problems, that is. Not via some arrogant, abstract, this-is-the-one-true-path manifesto.
To me it's just good computer science for when you want to be near real time, scale horizontally, utilise your hardware resources, and be able to handle failure.
Unsurprisingly, they have products to sell that are manifesto-compliant.
"A manifesto is about moral authoritarianism: an absolutist statement of eternal values from which follows (typically) an absolutist ideal of the good life. If there is one thing that most defines a manifesto, it is what it lacks: a central place for uncertainty."
"The problems Haque identifies cannot be solved with manifestos because they are problems, not karmic punishments for espousing false values that will go away through the embrace of the “right” values."
Sorry, but I've never seen this resolution in my analytics logs. Next time I'll think about it ;)
EDIT: It's nothing more than a joke... What if we had Deluxe Browse II?
But more seriously, who browses in 320x200? Does anybody follow 256-color web-safe palettes anymore?