An introduction to reactive programming (recurse.com)
97 points by nicholasjbs on Mar 26, 2015 | 55 comments

Nothing against react or reactive, but it seems like yet another reaction to originally going down the wrong path.

I really don't understand why so many people jump to event-driven systems when genuinely parallel computing is so easy. Specifically, Erlang/Elixir make it easy. EASY I tell you!

Yeah, yeah, Erlang has a weird syntax. Ok, so lose two weeks learning it. But you don't even need to do that -- just learn Elixir, which is pretty comprehensible right off the bat.

Where's the downside? There isn't one. Yet people are going out of their way, doing all these different projects and attempts whose sole purpose is to try and fake your way around a problem that was genuinely solved 20 years ago!

To be honest -- and maybe I'm wrong and missing something here -- it seems like so many programmers are following what's hip and popular. V8 was a neat JavaScript engine, and then Node came along, so let's jump into callback hell! Why put up with that when Erlang was around and well known?

It's like people choose technologies based on fashion. But that seems too... pessimistic so I don't want to believe that.

I spent a total of 11 months consulting with a company that built a large (50kloc) financial system in erlang. They had terrible performance problems that were caused entirely by erlang.

Imagine you have a large amount of data (order books, accounts, etc). You could put it all in one erlang process, but the gc does not cope well with large heaps (multi-second pauses). You could store the data outside the heap (e.g. ets) but then you pay the cost of copying on every access and have to trade off ease of use (more data per key) vs performance (less data per key). You could split the data up into many processes, and then all your simple calculations become asynchronous protocols. Have fun debugging the math or rolling back changes on errors.

I went into that contract with a fondness for erlang. Now I wouldn't touch it ever again. A naive single-threaded blocking server achieved 10x less code, 40x better throughput and 100x better latency. I used clojure, but any sane platform would have worked just as well with that design.

I'm curious whether you brought this problem to the attention of the erlang maintainers, what their reactions were, and whether it will be addressed in the future. GCing large objects on the heap was (and perhaps still is, despite improvements) a significant problem on the .NET platform too. Not sure whether this is the case, but have you considered patching the GC yourself, or perhaps moving the problem to BIFs and managing that memory manually (without copying to ETS)?

The problem was not a few large objects, but large numbers of small objects eg thousands of orders per market. I considered using manual memory management but it would have required rewriting most of the code (it was almost all business logic). I sketched out a prototype in clojure to see whether a better GC would help and found that even with hastily written code and much abuse of persistent data structures, the G1 collector could keep the pauses under 100ms. I expect that carefully tuned code would reduce those much further.

50kloc is considered "large financial system" ? Just curious.

It depends. I worked on a trading platform that had >1M LOC. The overall complexity was lower than another quantitative modelling engine that had only a couple of tens of thousands of LOC. Complexity, work and LOC are correlated, but the correlation is not 1.

In the case of the former system, 95% of the code was connectivity, GUI, persistence, etc., the actual business logic was maybe 5%. And I'm not even talking about stuff written in kdb/q, where I produced maybe 500 LOC in a month. :)

As someone familiar with (and generally quite happy with) reactive programming, what does Erlang/Elixir do that makes it a more general, better solution? And to what types of problems, specifically?

I think the reason that I picked up Rx is that it was a natural extension of LINQ, both literally and figuratively. What made Rx really click for me was thinking of it in terms of an IEnumerable that explicitly acknowledged the time domain. Since then I've come to understand some of the concepts that underlie both of them (monads, mostly), but I wouldn't have made the leap to Rx if it wasn't so easy to get there from existing knowledge. I haven't encountered anything where the next step would be "learn Erlang." You can make the argument that we should all learn about things that are unfamiliar, but there are a lot of things that are unfamiliar and I haven't seen a lot of reasons why Erlang should rise to the top.

There is a confusion of terms here. Rx is reactive programming (aka dataflow). What this article talks about -- isn't (and indeed is better served by Erlang or other similar solutions).

I followed the Coursera course linked from the article and the similar "Programming Paradigms" course from Stanford[1], and from what I recall, dataflow and actors were formally equivalent - i.e. you could write either one using the primitives of the other.

Dataflows can be implemented through message calls to independent actors, but I also saw a stateful actor implemented as a purely functional dataflow in the shape of a sequence of its successive states.

I've recently seen a lot of people from both camps saying "but that's not really what 'reactive' is" to the other, and I think it may be a misunderstanding caused by that equivalence.

[1] https://www.edx.org/course/paradigms-computer-programming-lo...

Erlang has two things that are relevant to this discussion: user level threads (aka green threads), and asynchronous message passing.

User level threads reduce the memory footprint of threads, and possibly the context switch time (depending on implementation) but they are not without drawbacks. Heck, Java 1.1 had green threads but they dropped them for kernel threads in later releases (http://en.wikipedia.org/wiki/Green_threads#Green_threads_in_...).

The main issue is balancing work on multicore machines. With kernel threads the OS will automatically balance them. With green threads you are reliant on the user level scheduler to do this work. It is more difficult to get right without the level of information the OS has. If you want to give the programmer control over scheduling you need to provide some access to kernel level threads.

Other issues include: not all OS IO primitives have non-blocking equivalents you can use (typically you have to use a pool of kernel threads for these operations; the runtime may hide this), and you still pay some context-switch time that you don't pay at all in a simple epoll/kqueue based system.

I'm running out of time, so I'll just note that asynchronous message passing is hard to reason about.

Ok, I know a bit about Erlang and I think Elixir is cool.

And for professional work, going with them is the right thing.

But here's the problem with "just use erlang, it's better", and where the JS folks are doing BETTER: they try to show how to do things, not just say that this or that is better.

And look: I read that post, and because I see some functional stuff (map, filters, blah) I get the impression that it was a good explanation!

I know... a lot of JS code around is a terrible hack, and it's kinda cool/sad/funny how JS keeps re-inventing (and a lot of the time, poorly) what is done better in X... but as marketing, they are winning.


Where is something like this that shows how to build reactive properly? And I mean build, not just use.

And hopefully in a digestible manner. I read a ton of stuff related to languages (because I want to make one, for fun) and it's like this:


So I think some examples of how to do this well, and why, are needed.

P.S.: For example, http://fsharpforfunandprofit.com/fppatterns is probably the best advocacy I have found for practical functional thinking. It's the reason I've been trying F#!

Alan Kay argued [https://queue.acm.org/detail.cfm?id=1039523] that programming is a sort of pop culture. I think this just illustrates that point.

As an experienced developer I find it getting really grating. Why are so many jobs requiring NoSQL "skills" when a relational database will solve 90% of the perceived problems? We have good, mature back-end frameworks. So unless you are writing a chat application, why is everyone looking to Node.js?

Easier, or at least easy, fun, fewer compatibility issues when you live in a pure JS and JSON world on the front and back ends, productive, good libraries, good built-in libraries, easy to deploy.

Converting to JSON is pretty easy in almost any language.

And the framework I currently use is productive, has good libraries, and has good built-in libraries. It has a good security record and is extremely stable. Deployment is one area where it doesn't shine. If that were the best reason for choosing a language, I would be using PHP.

And it doesn't involve JavaScript, which is in my opinion a benefit.

I don't understand the hate for such methodologies (in the comments here) that seem to go down the wrong path. And this is coming from someone who is a strong proponent of functional programming.

The problem as I see it is pure functional programmers scoff at object oriented design being so prevalent while object oriented programmers seem so biased and giddy towards OOP when they haven't even seen the light yet. How come more people don't see that both are correct?

Both are the result of the market and people reacting (heh) to the market. This by definition means that both are correct but serve different purposes. What is fascinating to me is that the web has evolved enough that users are expecting immediate results for their actions. This is an amazing time to live in. Why? Because this means the market is finally very invested in providing a fluid, functional experience for the user. Since the front end is functional and the backend is OO, it's a very exciting time, as we are in the midst of real-time innovation. React is just a tool to help connect these two very different pieces.

Functional programmers need to get off their high horse and ask themselves why frameworks are almost entirely object oriented. Object oriented programmers need to ask themselves why they didn't think of a reactive view sooner. Neither is 100% right, but functional programmers come off as d*cks when they fail to see that FP principles are not all-or-nothing and that in the real world OOP makes sense (say, to connect the pieces between a Scala backend and a React front end). You'd think they would have realized this sooner, though, when noting, for example, that "call by name" is usually more efficient than "call by value".

Can you clarify what you mean with "reactive view"? "Reactive" is such an ill-defined and overloaded term these days.

However, I'd like to point out the long history of constraint systems such as Sketchpad, ThingLab, Garnet/Amulet and all the smaller/more limited variants. See also http://blog.metaobject.com/2014/03/the-siren-call-of-kvo-and...

I don't follow how "the market" validates technical paradigms as correct in any way you describe. Even assuming efficient rational actors, it has always seemed like the industry adjusts to what has low conceptual overhead and high familiarity, good advertising and at least reasonable utility.

Nor do I understand how the real-time web indicates we're in an innovative age just because functional techniques are currently gaining popularity.

For what it's worth, I've never been hostile to OO. I enjoy Smalltalk, I like Erlang most out of the so-called functional languages, and I've had interest in the concatenative, logic and array programming paradigms as well, whereas it seems most academics and HN users are so ruthlessly committed to FP that they sometimes fail to recognize there are other paradigms.

1 - Yes, my definition of the market is actually as simple as part of your reasoning -> "it has always seemed like the industry adjusts to what has low conceptual overhead and high familiarity, good advertising and at least reasonable utility." I'm not sure if you meant it or not, but that's the perfect definition of a market, and I mean it in that sense. As I've learned from pursuing my own startups, the market is not rational in any other sense. Ever.

2 - Because programming has always (in the mainstream sense) been an FP platform + OOP middleware/frameworks + an FP high-level layer. Now, looking back, in most disciplines only 1 or 2 of these are used, but that just means in those situations the other layers are factored out. In enterprise AND startup development THIS is the formula that is ever present nowadays.

3- I've heard very good things about Erlang. I agree there are more paradigms. That is just a continuation of my first point interestingly enough.

I'm pretty new to programming but the benefit of being an unbiased observer is being able to pick up on all these things. Thank you for the response.

So many comments here critical of nonblocking code. What, exactly, is the downside? Seems to me that any downside is purely syntactical (callback hell etc. or whatever people like to say when they can't write good nonblocking code).

So with nonblocking code you have the challenge of a syntax that may be difficult for some. With any variant of threading, you have the actual real programming challenge of dealing with state, locking etc. I would much rather have syntax issues than state issues.

Also the only people who complain about nonblocking code are those who have been writing threads for 15 years. Why is it that something that is supposedly so bad and confusing is actually so easy for new programmers to pick up?

I find myself wondering the same thing. async/await syntaxes make asynchronous functions work pretty much the same as synchronous ones syntactically. The only drawback I can really think of is that you end up with Futures propagating up the call stack. But this, again, seems to me to just be a detail.

On the other hand, the benefits are really obvious control flow with implicit synchronization. Scala has shown that it's pretty easy to integrate asynchronous workflows with actors.

Can we please stop using the term reactive programming wrong? Reactive programming[1], aka dataflow programming, is writing programs that work like a spreadsheet. Microsoft Excel is a great example of an environment for reactive programming. Functional reactive programming, or FRP, is simply reactive programming in a functional style.

What we have here isn't reactive programming at all, but a few patterns (or, rather, anti-patterns) devised to avoid blocking kernel threads (only some of them are related to reactive programming). They manage to avoid blocking but at great cost. If the overhead caused by blocking bothers you, you need to understand why and realize there are far better ways of avoiding it than these anti-patterns.

[1]: http://en.wikipedia.org/wiki/Reactive_programming

> Functional reactive programming, or FRP, is simply reactive programing in a functional style.

Functional Reactive Programming is a programming model invented by Paul Hudak and Conal Elliott. As Conal explains on C2, there are other reactive programming systems that are functional but not FRP.

Also note that people often use the term "reactive programming" to mean "the programming of reactive applications." Obviously, data flow doesn't have to be used to make something reactive, which is a property of the artifact and does not depend on how it was built.

> there are far better ways of avoiding it than these anti-patterns.

Could you describe or at least point to some of them?

Lightweight threads. They make blocking free, and allow using constructs far more suitable for imperative languages. They don't destroy your stack traces, and don't require you to re-invent exception handling and control flow. Instead of promises -- simple blocking calls (or blocking futures in some cases); instead of observables -- blocking queues (aka channels).

Instead of reaching out for ways to avoid blocking, we just make blocking free.

Do you mean co-routines / user threads / green threads? I tend to agree they can give a serious performance boost in some cases. Not sure why you were downvoted.

There is actually a library for Java that adds them via bytecode instrumentation (quasar or something), although I'm not sure it will work for Scala.

But as for saying that the actor model is bad practice, I'm not sure I accept that. Maybe on a single multicore machine, but once you start talking distributed computing (e.g. Spark, which is Akka based), all this "avoid crossing into the kernel" optimization becomes a drop in the ocean.

> all this "avoid crossing to the kernel" optimization is becoming a drop in the ocean

Here is an example of a small single threaded program beating out a number of distributed graph frameworks running on 128 cores, with a 128 billion edge dataset.


Performance matters because it enables simplicity. If your language forces you to pull in multiple machines to solve your problem then it's turned a simple program into a distributed system, and life gets complicated fast. Just throwing more cores at a program without understanding why it's slow will just get you into trouble.

Multithreaded programs and distributed programs should be a scary last resort after making absolutely sure you can't get away with the simple solution.

Yes I saw this, and got a little disillusioned at first, but after looking carefully this is not big data, their entire dataset fits in RAM. When your dataset can't fit in RAM - this is where the last resort comes into play. Sadly most companies, I agree, don't know when data is really big data. Most of the time it's just medium data. And I agree about the overhead costs.

> their entire dataset fits in RAM

128 billion edges. 1 TB of data just to list edges as pairs of integers. 154 GB after cleverly encoding edges as variable length offsets in a Hilbert curve.

Do you have a bigger dataset?

Oh, I was referring to the original posts. Will take a look. Thanks!

> Do you mean co-routines / user threads / green threads? I tend to agree it can have a serious performance boost in some cases.

Yes, but in this context, they provide the same performance benefits as the asynchronous techniques mentioned in the article, without all the drawbacks of the cumbersome asynchronous style.

> But saying that actor model is bad practice

The actor model is a general technique for fault-tolerance, and it's great. How it handles concurrency, though, is an orthogonal concern. Akka has asynchronous (callback based) actors, and callbacks are an anti-pattern. Erlang has synchronous (blocking) actors, which are so much simpler, and don't have all the complications associated with asynchronous code.

Erlang has asynchronous message passing by default, on which synchronous message passing can be built.

I would agree that in both Akka and Erlang message passing works in an asynchronous way.

For me the difference is more that Akka has a push-style approach to message processing in the receiver (the receive method is automatically called) whereas Erlang and also e.g. the F# MailboxProcessor use a pull-style approach (you call receive to fetch a message). I like the latter approach better, because it allows one to also start different asynchronous operations which won't be interrupted by the reception of a new message.

That's not what I meant. Message receive in Erlang is blocking (synchronous), just like it is in Go and Quasar. Meaning, it's basically a blocking queue rather than the asynchronous/push-style/callback approach used in Akka.

Why are you being downvoted for a constructive comment?

"Free blocking" was what attracted me to Erlang's actors in green threads as opposed to Akka's actors that block entire threads when they block.

Yes. And in fact we can even use kernel threads most of the time.

However, there is one thread you don't want to block: the UI thread. We do need ways of getting things to the UI asynchronously.

> And in fact we can even use kernel threads most of the time.

Well, not if you need tens-of-thousands of them or more.

> However, there is one thread you don't want to block: the UI thread.

No problem. You simply multiplex fibers onto the UI thread, where they can block all they want. Here's an example in Quasar (Java):

    FiberScheduler uiScheduler = new FiberExecutorScheduler("UI-scheduler",
        task -> EventQueue.invokeLater(task)); // schedule on the UI thread

    new Fiber(uiScheduler, () -> {
        for (int i = 0;; i++) {
            assert EventQueue.isDispatchThread(); // I'm on the UI thread!
            uiLabel.setText("num: " + i); // I can update the UI
            Fiber.sleep(1000);            // ... yet I can block all I want
        }
    }).start();
"most of the time" we don't need tens-of-thousands of threads.

Yes, I can easily dispatch something onto the UI thread using the technique you describe, but for that a simple HOM is much more convenient:

   uiLabel onMainThread setText:'num: ', i.

Neither this nor your example actually blocks the UI thread, because they are really both just incidentally there, pushing data in, a quick in/out. (And I assume that "Fiber.sleep()" takes the fiber off the main thread.)

However, the more difficult part is if the UI thread actually has control flow and data dependencies, let's say a table view that is requesting data lazily. Taking the blocking computation off the UI thread doesn't work there, because the return value is needed by UI thread to continue.

> "most of the time" we don't need tens-of-thousands of threads.

Well, that depends what you mean by "most of the time". But if you have a server that served tens-of-thousands of concurrent requests/sessions, you'd want to use as many threads as sessions, and probably many more (as each request might fan out to more requests executing in parallel). In that case you can no longer use kernel threads and have two options: switch to an asynchronous programming style, with all its problems (in imperative languages), or keep your code simple, and simply switch from using kernel threads to lightweight threads.

> However, the more difficult part is if the UI thread actually has control flow and data dependencies, let's say a table view that is requesting data lazily. Taking the blocking computation off the UI thread doesn't work there, because the return value is needed by UI thread to continue.

Again, that's not a problem. If you use lightweight threads scheduled onto the UI thread, you can block those fibers as much as you like -- synchronously read from the disk or a database -- the kernel UI thread doesn't notice it (as it's not really blocked), but from the programmer's perspective, you can intersperse UI and blocking operations (including IO) all you want.

> Well, that depends what you mean by "most of the time".

It means "most of the time". As in the majority of cases, in the real world, not in hypotheticals such as...

> But if you have a server that served tens-of-thousands of concurrent requests/sessions,

"If you have..." -- But I do not, that's the point. A server with tens-of-thousands of concurrent requests is the absolute exception and so not "most of the time". Most web-sites or blogs can be happy if they have a thousand visitors per day, and that's already optimistic. They could be served by hand, or even by a Rails app.

For example, I work for Wunderlist (Alexa rank ~1600). We have over 10 million users (of our API), so already an unusually high-load case, yet we get "only" on the order of 10K requests per minute. (Well, that was last summer, so more now :-) )

Considering that most requests take a couple or maybe dozens of milliseconds, the amount of actual concurrency required to handle this throughput is orders of magnitude below what you describe. In order to keep latencies down and not just throughput up, you want to up the concurrency a bit, but to nowhere near your "but if" case. And that's an app with 10 million very active users. The case you describe is simply highly atypical. That doesn't mean it never happens, it's just not very common, even on the server.

That's what I mean when I write "most of the time".

Clients, on the other hand, tend to deal at most with on the order of 100 outstanding I/O requests (that's already pushing it pretty hard). Whether you use kernel threads, user threads or another of these async mechanisms is almost entirely a matter of programmer convenience, performance will be largely indistinguishable. On the client, I have a hard time seeing your case pretty much ever.

So you have none of the clients and a tiny percentage of servers with the need for 10s of thousands of concurrent requests. The other case is what happens "most of the time". That also doesn't mean you can't use a user-thread approach in those cases, you certainly can, it's just not necessary.


I am not sure I am getting through to you with the UI thread. One more try: yes, I understand you can reschedule your fibers (and thus not block the UI thread). I am saying it doesn't help, because you have an actual control flow and data dependencies that are essential, they are not artifacts of technology.

Scenario: You have an iPhone app that displays flickr images. You start the app, there are no images yet, they have to be fetched. But your UICollectionView just came into focus and is asking you, the data source, for the subviews. You know that there are 10 images, so you tell it that. It then asks you for the visible subviews. At this point, you have to return something, because the collection view wants to display something. But you don't have the image yet, it's still in transit. Still the UI has to display something to the user. So you can return empty UIImageViews. Or you can return placeholder views.

No matter what you do, you have to do it now as the UI is being displayed, because you can't de-schedule the user that is looking at the screen.

And later, when those images do trickle in from the network connection, you have to somehow asynchronously update that UI you displayed earlier. You simply cannot do it synchronously because at the time the display is built, the data just isn't there yet.

> That's what I mean when I write "most of the time".

I absolutely agree that in the scenarios you've described, the thread implementation makes no difference (again, Little's Law), and that this is "most of the time" for traditional web apps. But we're working with people who are working on IoT, where there is constant ingest (and queries) from all sorts of devices, where we're easily surpassing 100K concurrent requests, and expect to grow beyond that by several orders of magnitude.

> No matter what you do, you have to do it now as the UI is being displayed, because you can't de-schedule the user that is looking at the screen.

Ah, now I understand what you mean (thanks for being patient)! Still, I think this pseudo code (which gets run in a fiber for every image):

   display stub image
   fetch image
   display image
is easier than any callback-based alternative. You can even make it more interesting:

   Future<Image> image = get image
   while (!image.isDone) {
       display spinner frame
       sleep 20 ms
   }
   display image
This is a lot easier than the async alternative.

By "tens-of-thousands of threads" I think he means something along the lines of how in Erlang/Elixir an object is often a thread, and a library a program. By giving so many threads "for free" you make blocking cost nothing. It's a very different approach from your typical language.

This article only uses a few threads, but it will perhaps quickly give you an impression of how this design works: https://howistart.org/posts/elixir/1

I am fully aware of the approach, especially in languages/systems like Erlang, and the freedom that very cheap threads give you.

My first point was that you actually have much more of this freedom than most people are aware of, even with (comparatively) heavy kernel threads. For example, see the "Replace user threads with ... threads" talk by Paul Turner (Linux Plumbers Conference): https://www.youtube.com/watch?v=KXuZi9aeGTw

More on that point, I see a lot of user-threading/async craziness on clients such as iOS that would have been easily been handled by less than a dozen kernel threads, most of which would be sleeping most of the time anyhow. That's a number of kernel threads that is easily manageable and not particularly resource intensive.

My second point is that there is one thread that this blocking-happy approach mostly doesn't apply to, and that is the UI thread. You really don't want that UI waiting for (network) I/O and therefore must employ some sort of asynchronous mechanism for data flow and/or notifications.

The Paul Turner approach works when you have up to about 10K-20K threads. Beyond that, you lose the ability to model a domain unit-of-concurrency (request/session) as a software unit of concurrency (thread). The kernel-thread approach works as long as you don't hit against the Little's Law wall. Basically, Little's Law tells you exactly when kernel threads become the wrong approach, which depends on the level of concurrency you wish to support and the mean latency of handling each concurrency unit (i.e. request/session).

> My second point is that there is one thread that this blocking-happy approach mostly doesn't apply to, and that is the UI thread.

You're not allowed to block the kernel UI thread, but you can schedule lightweight threads onto the UI thread and block them all you want, so from the programmer's perspective that restriction disappears.

Sorry, I was just trying to help clarify what was being said -- not trying to argue against your points.

So, the actor model is an anti-pattern? Like Akka in Scala?

See my reply to eranation

Like C# (and now ES7's) async/await?

Not quite. async/await are what's known as stackless coroutines, and require explicit usage. Lightweight threads, aka user mode threads, aka fibers, are simply threads that have negligible (or no) cost associated with blocking. Examples include Erlang processes, Go goroutines and Quasar fibers.

It doesn't matter what it really means; it's like "isomorphic applications" and stuff like that. A few hipsters decided it was a good buzzword to sell this or that framework or solution, and will trash the rest. It's marketing. It's unfortunate, but it is how it works. Most devs will never know what it really means; they will just think "this or that framework" when they hear reactive programming.

Reactive programming - you mean reactive as in the reactive manifesto? That's not event driven. See the recently published manifesto (http://www.reactivemanifesto.org/): they describe message-driven paradigms, not event-driven ones (this is a recent revision). But that's only a very small part of it! I recommend the free chapter of "Reactive Design Patterns" from Manning: http://www.manning.com/kuhn/RDP_meap_CH01.pdf

> describe message-driven paradigms, not event-driven ones

Messages are events.

> Reactive programming - you mean reactive as in the reactive manifesto?

That seems to be more about idealized system properties than about a style of programming. (Plus at least one bit of silliness, w.r.t. blocking / non-blocking being inherently unequal.)

Not exactly. Messages and events are dual to each other. Messages are sender centric. The sender determines the links to the receivers. The receiver listens regardless of who's sending.

Events are receiver centric. The receiver explicitly chooses what senders to listen to. The sender fires off events without regard for who's listening.

I'm not sure how much the distinction actually matters in terms of expressive power, but it does seem to impact how the paradigms are used in practice.

> The receiver explicitly chooses what senders to listen to.

Not necessarily. The receiver could just specify what kind of messages to receive instead, and be decoupled from the sender.

Thanks for the links - Reactive Design Patterns is a great read. On the bottom of the first page, they list FP, futures & promises, CSPs, observers and observables (i.e. Rx), and actors as tools of the trade. Then in the third chapter they look at each tool and evaluate it on the basis of "reactive classification" as per the manifesto (including message passing, which is one of the tenets). In my view, it is not presented as a purely reactive/not reactive dichotomy, but more of a sliding scale - a given tool may satisfy some tenets and not the rest.

The actor model is arguably the most robust with respect to the proposed classification, but the authors also point out that it is not the be-all and end-all and that specific problems, or even portions of your system, may call for different patterns.

