> "If this sounds to you a lot like Object-Oriented Programming (OOP), you are right."
This sounds like Alan Kay's aspirational viewpoint of independent "computers" interacting through messages, not the object orientation invented by Nygaard and Dahl in Simula, which introduced objects, classes, subclassing, and virtual functions. Kay's vision is not reflected in modern object-oriented languages, while Nygaard and Dahl's remains paramount.
The argument that the Actor Model is really a form of object-oriented programming is an unnecessary appeal to credibility.
Objective-C takes the message-passing approach to object-oriented programming: all dispatch happens at run time, and an object can delegate any unrecognized messages or decide for itself how to respond to them.
I suppose it's not "modern" anymore though, and it doesn't look like Swift kept this paradigm.
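For anyone who hasn't used it, here's a rough JavaScript analogy of that dispatch style (a Proxy standing in for the forwarding machinery; the names are made up, and this is only the spirit of it, not Objective-C):

const receiver = new Proxy({}, {
  get(target, prop) {
    if (prop in target) return target[prop];
    // "Unrecognized message": decide how to respond at run time instead of failing up front
    return (...args) => `no handler for "${String(prop)}" called with ${JSON.stringify(args)}`;
  },
});

console.log(receiver.greet("world")); // no handler for "greet" called with ["world"]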
The actor model deals with concurrent computation.
The threading models in Node and browsers do not allow concurrent computation, except for Web Workers and the proposed threading additions to JavaScriptCore.
Because of this, the actor model can be overkill when you have no threads. More specifically, you may introduce queuing where no queuing (or other type of synchronization) is necessary. This is because an actor is basically a worker with a work queue (aka mailbox).
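To make that concrete, here's a minimal sketch of an actor as a worker draining a mailbox (all the names, like createActor, are made up for illustration):

function createActor(behavior, initialState = {}) {
  const mailbox = [];
  let state = initialState;
  let scheduled = false;

  function drain() {
    scheduled = false;
    while (mailbox.length > 0) {
      const message = mailbox.shift();
      state = behavior(state, message); // the behavior returns the next state
    }
  }

  return {
    send(message) {
      mailbox.push(message);
      if (!scheduled) {
        scheduled = true;
        queueMicrotask(drain); // asynchronous delivery: this is the queuing being talked about
      }
    },
  };
}

// Usage: a counter actor that understands a single message type
const counter = createActor(
  (state, msg) => (msg.type === "increment" ? { ...state, count: state.count + 1 } : state),
  { count: 0 }
);
counter.send({ type: "increment" });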
Promises can solve most of your async issues.
Event emitting/message passing is valuable, but queuing is optional in many cases.
Hey, article author here, thanks for stopping by!
This is correct but misses the point of the article. The idea is to introduce the Actor Model to people unfamiliar with it, using one of the most popular languages out there.
Please increase the contrast of the main text; it's uncomfortable to read.
Edit: I was reading it in Chrome on Android, and the main text looked gray and had poor contrast. Now I'm on FF 52 on Windows and the text looks black. Weird.
How does this differ from a Redux-type store besides being able to instantiate them on demand instead of having a global state?
> It’s easy to screw up immutability in JavaScript, the actor internal state can be modified externally if users of the library are not extremely careful.
Sure, that's fine too. My point is more that it would've been nice to get a summary of the differences instead of requiring the reader to dig through the article, since Redux is much more prevalent. Even after digging, I still can't tell if there is a difference. If there is none, it would've helped greatly to explicitly say this.
The article is focused on explaining the design pattern, while Redux is influenced by the pattern.
From my understanding, Redux shares the actor/messaging pattern when dispatching events to the reducers, but the state in Redux tends to be centralized (you usually have one main store, though you can create other stores if you want), while in the Actor pattern each object has its own state.
Also, in Redux, information flows only in one direction: component => reducer => state. The Actor pattern seems to allow for actors to message each other back and forth.
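A rough, self-contained sketch of that difference (the names here are invented; this isn't Redux's actual API or the article's code):

// Redux-like: one centralized store; every update funnels through it.
function createTinyStore(reducer, initialState) {
  let state = initialState;
  return {
    dispatch: (action) => { state = reducer(state, action); },
    getState: () => state,
  };
}

// Actor-like: each spawned actor keeps its own private state.
function spawnCounter() {
  let state = { count: 0 }; // private to this actor
  return {
    send(message) { // the "mailbox" is reduced to a direct call here
      if (message.type === "increment") state = { count: state.count + 1 };
    },
    inspect: () => state,
  };
}

const store = createTinyStore(
  (s, a) => (a.type === "increment" ? { count: s.count + 1 } : s),
  { count: 0 }
);
store.dispatch({ type: "increment" }); // the single shared state changes

const a = spawnCounter();
const b = spawnCounter();
a.send({ type: "increment" }); // only actor a's state changes
console.log(store.getState(), a.inspect(), b.inspect()); // { count: 1 } { count: 1 } { count: 0 }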
Thanks for that explanation, that makes a lot of sense. See, if that had been included in the original article, I would've grasped it much more quickly, instead of being left wondering if Redux is implementing this pattern or I'm missing an obvious difference, and I'm sure it would've helped others too.
Actually it's quite the opposite. Redux is a library that was created only two years ago and is used only in JavaScript. The Actor model and similar message-passing patterns have been in use for decades, and not only in JavaScript but in many programming languages. They are popular, for example, in game development for communication between game objects.
There is this weird viewpoint in the JS community that the world revolves around JS and its ecosystem, and that the newest popular library dictates standards rather than decades of CS knowledge ;)
Based on what shows up on the front page, I'd say the average reader is way more likely to be familiar with Redux than the actor model.
I never said Redux dictated the standard; I simply asked for the differences. If it weren't for boobsbr's response, I'd be stuck not knowing how close Redux is to this model, which is quite important for context and understanding.
> There is this weird viewpoint in the JS community that the world revolves around JS and its ecosystem, and that the newest popular library dictates standards rather than decades of CS knowledge ;)
If you criticize people like this for not knowing what came first, then it's no wonder they feel that way, because you never give them the opportunity to learn about the lineage. Instead, people will simply think "oh, this looks like an interesting paradigm, maybe I'll consider using it in my next project" without realizing that they're already using it.
I'd also have liked to see object spread used to set the new state:
return {
  ...state,
  count,
}
I realize that it's superfluous when count is the only property on state, but it makes the code both easier to understand and to maintain as state gains new properties over time.
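For example (with a made-up state shape), once state holds more than count, the spread keeps the other properties intact:

const state = { count: 1, name: "counter", startedAt: 0 };
const next = { ...state, count: state.count + 1 };
// next is { count: 2, name: "counter", startedAt: 0 }; the untouched properties survive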
I think a better language to understand the Actor Model with is Elixir. The Actor Model's main point is concurrency. It is hard to demonstrate that in a language that doesn't support concurrency.
Ah, I have a misunderstanding of the definition then, since when I think of concurrency I think of interleaved execution, irrespective of whether that execution is done in parallel.
I tend to adhere to the dictionary definition of the word, which is two things happening or existing simultaneously. However, I do know that proponents of single-core-bound languages like JavaScript, and others, like to talk about concurrency in a single-threaded way.
The computer science definitions of concurrency and parallelism are different from what you're describing. In CS, a program can only be concurrent; being parallel has to do with how it's run and with the hardware. A concurrent program is still concurrent even if it's run on a single core, whereas to be considered parallel it must be running on multiple cores/processors/machines/etc. at the same time.
One way to think of it:
If I have three plates of food that I can eat in any order, but it's just me eating, then this is concurrent. In practice, this is no faster than if I had to eat plate 1, then 2, then 3; however, it has the potential to be. By adding another person, it becomes parallel.
When I was in computer science classes in college, "concurrency" was defined as multiple processes running at the same time on multiple processors. You could not have concurrency with a single core. If all you had was a single core, then the best you could hope for was "pseudo-concurrency," because it is impossible to run two things simultaneously on one core. When did this definition change?
Your food metaphor doesn't really work in the computer science sense, because you cannot concurrently eat three things at the same time unless you put them all on the fork at the same time, which is something you simply cannot do in computer science.
It never did. A concurrent program has always meant a program decomposed into parts that can be executed out of order while maintaining the same output. Whether or not this is done by some scheduler on a single-core CPU is irrelevant to the theory.
The words "concurrency", "multi-threaded", and "parallel" each have different meanings. You can have a program that is multi-threaded but does not maintain concurrency, i.e. the output depends on race conditions. This is usually, but not always, a bug.
Your confusion stems from the fact that most people do not care about the distinction. You usually don't care about concurrency unless you plan to actually run things in an unordered manner. Thus when the theory says "concurrency" you think "parallel". Technically speaking you're wrong; practically speaking, your mistake usually won't matter.
As a final example, consider a single-core computer running Windows with many processes running a myriad of services and programs. At any one time only a single process can run on that one available core, but in practice they are all running "at the same time", because the system implements a model of concurrency that allows it to schedule and execute the different processes out of order.
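In JavaScript terms, here is a small, self-contained illustration (the names are invented) of concurrency without parallelism: two tasks interleave on the single thread, and the program is written so that the interleaving never changes the result.

const sleep = (ms) => new Promise((resolve) => setTimeout(resolve, ms));

async function task(name, delays) {
  for (const ms of delays) {
    await sleep(ms); // yield the single thread so the other task can make progress
    console.log(`${name} finished a step`);
  }
  return name;
}

async function main() {
  // Both tasks are "in progress" at the same time (concurrent),
  // but only one of them ever executes at any given instant (not parallel).
  const results = await Promise.all([task("A", [30, 10]), task("B", [20, 20])]);
  console.log(results); // always ["A", "B"], regardless of how the steps interleave
}

main();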
You don't understand the food metaphor. If you have 3 plates of food, but they can be eaten in any order, then 3 people can eat them in 1/3 the time that 1 person could eat them. The food/plate is the code/data, the person/fork is the processor. You're never eating the same exact piece of food at the same time.
To be frank, most of my experience with concurrency (rather than parallelism) is from formal theory taught during my master's studies. But I was rusty, so I was unsure whether I was remembering it correctly.
Don't forget fault tolerance! In my particular use cases concurrency is not really necessary, but I love the fact that I don't have to be quite as 'defensive' in my programming. Whenever a particular (Erlang) process/'Actor' fails, its supervisor can simply restart it and I can deal with the cause of failure later while the entire system chugs along (mostly) happily.
Right, Elixir and Erlang's fault tolerance is amazing. I think it is important to point out that to use Elixir's let-it-fail mechanisms, you have to use concurrency: you can't have a supervisor and the supervised code running in the same process, so you need separate processes to benefit from this fault-tolerance mechanism.
It depends on how you set things up, but one possible approach is that the supervisor, after a few retries, would also shut down, which would trigger the supervisor above that to restart the whole thing with a different port. This wouldn't make sense, probably, but maybe it illustrates how you can go about these things.
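Not OTP, but here's the restart-and-escalate idea sketched in JavaScript terms (every name is invented): a supervisor retries a failing worker a few times and then gives up, which is how the failure escalates to whatever supervises it.

// A supervisor restarts its worker on failure; after too many crashes it throws,
// handing the problem to the supervisor above it.
async function supervise(startWorker, maxRestarts = 3) {
  for (let attempt = 1; attempt <= maxRestarts; attempt++) {
    try {
      return await startWorker(); // run the supervised work to completion
    } catch (err) {
      console.error(`worker crashed (attempt ${attempt}): ${err.message}`);
    }
  }
  throw new Error("max restarts exceeded"); // escalate upward
}

// Usage: a flaky worker that fails twice before succeeding
let calls = 0;
supervise(async () => {
  calls += 1;
  if (calls < 3) throw new Error("boom");
  return "ok";
}).then(console.log); // logs "ok" after two restarts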
Jesus! Are there any posts on HN these days that are not JavaScript or blockchain? (or a combination, describing 90% of blockchain projects....)
On topic: I don't see a use case in which I would need to use the actor model and would pick JavaScript as my language of choice. However, I do think your article is well written and explains the actor model quite well.
It's not unique to JavaScript. The equivalent in Ruby would be `state = behavior.init.class == Method ? behavior.init : {}`
After looking it up, I don't disagree that Python looks a lot prettier, though: `state = behavior.init() if callable(behavior.init) else {}`
That made me curious whether Ruby had anything like Python's `callable`, but I couldn't find anything. This is my best effort in Ruby for aesthetics: `state = if behavior.init.respond_to?(:call) then behavior.init.call else {} end`
In the end, I agree, JavaScript looks much worse than the equivalent Python in this case. Unfortunately I don't get a choice of Python on the frontend :p
state = behavior.respond_to?(:init) ? behavior.init : {}
Asking an object if something is "callable" in Ruby is probably unnecessarily verbose, as you are most likely dealing with a method for any attribute access.
Slower, poor portability to Android/iOS/browser, no TypeScript/Flow (no, those things aren't like TypeScript or Flow), async/await is an afterthought instead of a cornerstone... just because you didn't like an obscure one-liner out of context? No thanks.
Realistically, if he weren't opting to be as terse as possible, this could be rewritten as:
var state;
if (typeof behavior.init === "function") {
  state = behavior.init();
} else {
  state = {};
}
I do enjoy terse ternary operations, but they can get very ugly very quickly. I often stick to if/else when I'm writing code for others. As for choosing another language: write Python that gets executed in Chrome without further trans/compilation and I'll happily consider it.
Yes, I like this design, and my system applies it. I feel this theory is a lot like domain-specific languages or systems thinking (The Fifth Discipline).
Best regards