Concurrent JavaScript: It can work (webkit.org)
270 points by stablemap on Aug 30, 2017 | 186 comments



Of course the language can be made to support it. Given the way Javascript is used, the question is whether supporting it will create a big mess.

They have some good ideas. One is that variables can be marked as restricted to one thread. That should be the default. Other languages could benefit from that feature. Python, for example. If you want to get rid of the Global Interpreter Lock, knowing which variables can't be shared between threads is a big help.

Variables should be owned by a thread or owned by a lock. Rust went that way, and it works well. The proposal here is old C-style locking, where there are locks and variables, but the language doesn't know which locks are protecting which variables.
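
To make the distinction concrete, a rough JS-flavored sketch (Lock and GuardedBy are made-up names, not the proposal's actual API):

  // C-style locking, the proposal's shape: the language doesn't know
  // that `lock` guards `balance`.
  let balance = 0;
  const lock = new Lock();
  lock.hold(() => { balance += 100; }); // nothing prevents an unguarded write elsewhere

  // Lock-owns-data, Rust-style: the state is only reachable through the lock,
  // so unguarded access is impossible by construction (GuardedBy is hypothetical).
  const account = new GuardedBy({ balance: 0 });
  account.hold(state => { state.balance += 100; });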


I share your concerns about the usefulness and implementation of this concurrency strategy.

In my opinion, concurrency isn't necessary for JS. If web developers want high-performance, concurrent computing, they should push for adding concurrency to WASM.

I doubt concurrency could hurt JS, but I just don't see any reason for it that concurrent WASM doesn't fulfill.


And what about server developers using Node.js?


I thought the point of Node.js was that threads were too complicated and rendered unnecessary by asynchronous event loops that do message passing between processes. Are the Node people finally giving up on this trope?


Is there any limitation or reason for not implementing WASM on nodejs?


Of course not, and hopefully it will come to V8, but its mere existence does not preclude enhancing JavaScript, any more than the existence of PInvoke or JNI precludes enhancing .NET and Java.

When people say "JS shouldn't support feature X", when feature X is supported in every other serious language, I can't help but guess that their real problem is with JS itself, and they just don't think people should be using it at all. Fine, if that's their opinion, but they should come right out and say it.


Well, for me it adds complexity to JS. One nice thing about JS is that it is single threaded and you don't have to deal with locks or inconsistencies there. An entire class of problems goes away, and since the JS event loop is reasonably fast as it is, it's pretty nice not having to worry about concurrency.


I would think that precisely because JS usually runs in a single-threaded context, concurrency becomes even more vital for maintaining the liveliness of your app. I also think that identifying aspects of concurrency often means free parallelism speedups, and the user doesn't even have to be aware -- no extra mental price.


So true!

Lots of people actually like using JS. I think it makes sense to understand what features can be added to this language.


What's the point? Why not write your server directly in say Python or Java then?

I thought WASM is meant to get around the restriction that the browser only understands JS right now.


> why not use X

Because X doesn't run in a browser


Do you need to run your server in a browser?


No, but it's very nice to be able to share code for things like view rendering or form validation between the server and client without jumping through hoops.


So you use language X. Compile it and run it natively on server, compile it to wasm and run that in the browser.


Why should I use language X that I don't like, instead of a nice dynamic language that I do like, which has no compilation step and is only missing this one tiny feature?


There may or may not be, but that doesn't address that Node.js was created to specifically avoid a threaded concurrency model in the first place.


Is there any limitation or reason for forcing people to rewrite their Node.js apps in something else just to get modern features?


in this case, yes, a feature that breaks backward compatibility.

So someone suggested rewriting in another language, since the new feature won't fit the one advantage Node.js has: the same language as the browser. The alternative is to rewrite in a flavor of JS that won't run in the browser either.


The proposal for concurrency in the post does not break backward compatibility.


No. In fact, AFAIK it's already implemented on Node, behind the --expose-wasm flag: http://thecodebarbarian.com/getting-started-with-webassembly...
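
Usage looks roughly like this (module.wasm and its add export are placeholders; the flag gates V8's 2017-era implementation):

  // node --expose-wasm app.js
  const bytes = require('fs').readFileSync('module.wasm');
  WebAssembly.instantiate(bytes).then(({ instance }) => {
    console.log(instance.exports.add(2, 3)); // 'add' is a placeholder export
  });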


Right! It can't be that all of the cool features are done in WASM, which doesn't even have a GC yet.


WASM is farther along than concurrent JS though...


That's true. But most of the web is written in JS, not wasm.


If you have crazy high performance stuff you can do it in native modules.


Depends on what you're doing, but if you just want to take advantage of multiple cores you can already accomplish that with `cluster`.
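
For instance, a minimal sketch using the stock cluster module, one worker per core:

  const cluster = require('cluster');
  const http = require('http');
  const os = require('os');

  if (cluster.isMaster) {
    // Fork one worker per core; each worker runs its own event loop.
    os.cpus().forEach(() => cluster.fork());
  } else {
    // Incoming connections are distributed across the workers.
    http.createServer((req, res) => res.end('ok')).listen(3000);
  }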


I remember when people used to say "you don't need threads, just use fork"!


What do they say now, out of curiosity?


Now they say use fork-join. ;-)

Point is, some people don’t like threads and probably never will. Other people like threads and probably always will. Some languages have threads and many other programming models and then everyone is happy. I want the people who like threads to be happy in JS.


I need concurrency to parse and prepare streamed 3D data while keeping the main javascript/render thread fast. WebWorkers are already very useful in that regard.
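
The shape of it, for anyone who hasn't tried it (parse-mesh.js, parseMesh, and uploadToGPU are placeholders):

  // main thread: transfer (not copy) the raw bytes to a worker
  const worker = new Worker('parse-mesh.js');
  worker.postMessage(buffer, [buffer]);
  worker.onmessage = (e) => uploadToGPU(e.data); // render loop stays responsive

  // parse-mesh.js:
  onmessage = (e) => {
    const vertices = parseMesh(e.data);       // heavy parsing off the main thread
    postMessage(vertices, [vertices.buffer]); // transfer the result back
  };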


Obligatory plug for Browsix (http://browsix.org), "Unix in your browser", which already provides concurrency via Unix-like processes and IPC. Processes can communicate with sockets & pipes, and run on separate WebWorkers. (Also, you can directly run complex C/C++ applications that expect a Unix environment.)


I think it would be weird if objects were thread-restricted by default. Concurrency is most fun when it's easy for a thread to access an object it wants.


In fact I thought willy-nilly cross-thread access was the bane of multi-threaded programming. And also the main reason that immutability in languages such as Clojure is considered a desirable feature in support of concurrency.


You say fun, but what you mean is twisted mess.


WebKit uses threads. WebKit is not a twisted mess.


I mean, you could say the same thing about assembly language programs. "Either static or dynamic typing is needed to prevent large programs from becoming a twisted mess" is a pretty uncontroversial statement today, despite the fact that there were some very high quality large assembly language programs written back in the DOS days.

Besides, WebKit doesn't do fine-grained parallelism on the level of parallel styling or parallel layout. My colleague Manish came up to me after a long back-and-forth conversation with bz to figure out which objects were thread-safe and told me "the biggest lesson of Stylo is that adding parallelism to a previously large sequential C++ project is basically impossible". That's because large-scale projects invariably fail to properly document the precise synchronization invariants that each object must uphold in order to function properly. They settle for blanket statements like "the entire render tree is single threaded only" which, while conservatively a correct statement, essentially gives up on fine-grained tracking and makes it very hard to add parallelism later.

By contrast, when you lift concurrency guarantees into the type system, the compiler essentially enforces that the documentation on thread safety is always up to date. This makes it way easier to start with sequential code and add parallelism (and/or concurrency) later as real-world profiling indicates fruitful optimization opportunities.

I think you're way too casually dismissing the very real problems that unrestricted shared state has.


WebKit does a lot of fine grained parallelism.

I think you’re way too casually dismissing how awesome shared memory is.


With all due respect, no, it doesn't do fine grained parallelism in the areas I described. Do I think it would be impossible to add parallel styling or layout to WebKit? No. Do I think it would be difficult? Absolutely, primarily because of the difficulty of finding all the concurrency hazards in existing objects and making them thread-safe. It would be easier with a type system that automatically checks for and highlights such problems. Such a system is precisely what I'm advocating.

And I'm not saying not to use shared memory. I'm saying that shared memory should be controlled by some sort of static or dynamic type system that enforces proper locking or immutability as the case may be. Such a system is hardly a far-fetched idea, as we're successfully using one right now in the style system of nightly Firefox.


It would have been a shame if the OSes on which Firefox runs had similarly limited access to shared memory to one particular "safe" style in the spirit of paternalistic caution.

Would Rust/Servo have been possible without access to the raw rope?


Rust takes a relatively hands-off approach to thread-safety: it protects against data races but doesn't try to guarantee things like deadlock freedom that other, more restrictive systems do. The benefits apply when using locks, or message passing, or even raw atomics (and similarly to higher-level libraries like data parallelism in rayon).

However, Rust is also explicitly designed to allow access to the "raw rope" when abstractions don't cut it (or when building the abstractions): that is what `unsafe` allows.


As long as we offer a similar `unsafe` option, a stronger type system around mutability+concurrency might have a range of benefits for JS. Rust's definitely has a number of virtues.

A little hard to imagine grafting onto JS though. :-)


This isn't about "paternalistic caution," it's about purposeful and sensible engineering. JavaScript wasn't designed for threading, and it's a less natural fit for it compared to a language like C, so this is understandably a pretty serious undertaking.

Your point is that this may enable some currently unimplementable application in JavaScript. The assumption is that Firefox couldn't have existed without a free-for-all shared state thread model, which is very likely false.

I ask you, what applications does free-for-all shared state make possible that SharedArrayBuffer doesn't make possible?


Imagine having an app that wants to perform a concurrent search on its model while the view and controller keep chugging on the main thread. SharedArrayBuffer would mean that all of your state has to be in an array of primitives. I’d rather use objects, classes, strings, etc. without having to serialize them all the time.

JS is actually a better fit for threading than C, and in many ways it has similar problems. Unlike C, JS has memory safety and concurrency wouldn’t break that. Concurrent programming is a lot easier if you can’t corrupt the heap. Like C, JS has some “global” variables like RegExp.lastMatch (C has errno) that need to be made into thread-locals. My proposal includes thread locals so it would be easy to make lastMatch into a getter that loads from a thread local.
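
For instance, something along these lines (ThreadLocal is a sketch of the idea, not necessarily the proposal's exact API):

  const lastMatchHolder = new ThreadLocal(); // hypothetical per-thread holder
  Object.defineProperty(RegExp, 'lastMatch', {
    get() { return lastMatchHolder.get(); } // each thread sees its own last match
  });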


For your parallel search example, the data set has to be extremely large for parallel searching to have a significant improvement.

When does a client-side JS app have access to many GBs of local data that would justify a parallel algorithm? It seems exceedingly rare but maybe you can imagine an example.

If you're talking about a server side app, if your goal is speed, why would you choose JS over C++? It seems more sensible to write the parallel database search in C++ in that case.

As for appropriateness of threading for C over JS: I think the fact that JS is garbage collected makes a threading implementation a nightmare. A naive GC implementation otherwise kills performance: imagine running a parallel computation and having to "stop the world." GC at a conceptual level is inherently "single-threaded" and it will always be a bottleneck in one way or another.


Not parallel searching. Concurrent searching.

The data set only has to be large enough that the search takes more than 1/60th of a second. Then it's profitable to do it concurrently.

GC is not single threaded at all. In WebKit, it is concurrent and parallel, and already supports mutator threads accessing the heap (since our JIT threads access the heap). Most of the famous high-performance GC implementations are at least parallel, if not also concurrent. The classic GC algorithms like mark-sweep and semi-space are straightforward to make work with multiple threads, and they both have straightforward extensions that support parallelism in the GC and concurrency to the mutator.


JavaScript can already do concurrent searching. Concurrent is logical, parallel is physical.

Efficient parallel GC is non-trivial to implement. In the most common implementation, you have to pause all threads before you can collect. That will often negate the performance benefits of having independent parallel threads running, especially if they are thrashing the heap with shared objects as you suggest.


Many factors and capabilities went into Firefox's success. While it's easy to enumerate the required primitives in hindsight, I doubt Firefox would have had an easy time if the OSes of the period had taken a restrictive stance based on contemporary ideas of what should be allowed.

This is not hypothetical, consider the present. While Firefox on iOS exists, it's just a branding skin over WebKit, due to a similar flavor of security paternalism around JITing code (only bad people write self modifying code :-). If Firefox had needed to differentiate itself originally in such a market, it's doubtful it would have had much success.

A threading free-for-all may be the wrong abstraction to use for many applications, but it has the virtue of being a decent stand-in for the hardware's actual capabilities. It's also close enough to ground truth that most other abstractions can be built on top of it. Imagine how unpleasant building a browser on top of Workers + ArrayBuffer transfer would be (especially given the lousy multi-millisecond latency of postMessage in most browsers). Also, consider that while there is often loud agreement that raw threads are dangerous, after decades of research, there's little consensus on the "right" option amongst many alternatives.

SharedArrayBuffer is nearly as powerful as the proposal, but not quite. For example, while it allows writing a multi-threaded tree processing library, it would have trouble exposing an idiomatic JS API if the trees in the library live in a single SAB (as JS has no finalizers etc. to enable release of sub-allocations of the SAB). The options are either one SAB per tree (which likely performs badly), an API where JS users need to explicitly release trees when done with them, or leaking memory. With the proposal, each tree node could be represented directly as a JS object. The proposal may not be the best way to fix this problem, but we definitely still have gaps in JS concurrency.
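
Concretely, the explicit-release option would look something like this (TreeLibrary and its methods are invented for illustration):

  // All tree nodes are sub-allocated inside a single SharedArrayBuffer.
  const trees = new TreeLibrary(new SharedArrayBuffer(1 << 20));

  const tree = trees.build(data); // a handle into the SAB, not a JS object graph
  visit(tree);
  trees.release(tree);            // no finalizers in JS, so callers must free by hand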

Agreed this would be a serious undertaking, however, and not to be lightly considered.

The proposal goes a long way to make the case that this can be implemented performantly, but some deep thought should go into how it would alter / constrain future optimizations in JS JITs.


As it stands now, adding threading to JS has a negative expected value. There is more potential downside than potential upside. It's illogical and irrational to undertake the effort under those conditions.

This should be an industry driven decision. Wait for the users of SAB to say it's not meeting their needs, and for them to provide clear reasons why (not hypothetical limitations, not vague falsely-equivalent comparisons to Firefox). Then we can tangibly weigh the pros against the cons.

Right now this is a solution looking for a problem. Your analogy comparing the JS runtime to iOS runtime isn't appropriate, no single company controls the web platform. Mozilla or Google or Apple or Microsoft can push for JS threads if the arguments for it make sense. Compare to WebAssembly.

In fact the evolution of WebAssembly is a good example of how this ought to happen. Imagine if the creator of emscripten opted to instead first propose a new VM/IL for the web? It would never happen because JS was already good enough. It was more natural to use JS first then create the VM with the goal of addressing the limitations encountered with the JS approach.

Let the tangible shortcomings of SAB bubble to the surface. Then we can sensibly design something that effectively addresses those shortcomings. Not a pattern-matched solution looking for a problem.


JavaScript has no type system currently, other than a trivial one with exactly one type. So, adding concurrency via a type system would be like trying to fit a square peg into a round hole.

Also, shared memory with no type system help is a proven technique that is used in shipping software and has been for a long time. This leads me to believe that it would be easier to use our existing full shared memory approach to implementing parallel layout than it would be to use your technique. Just the fact that some research experiment did use a type system and concurrency does not mean that this is actually better than the alternative. It just means that it is an alternative.

As for finding concurrency hazards when making things concurrent, you’re sorta talking to the right guy. That’s my bread and butter. Like every hard thing, it becomes easy with practice. Maybe type system help is for people who don’t practice enough.


> Maybe type system help is for people who don’t practice enough.

That's like advocating that tests are for people who "can't" write bug free code. Is this the attitude of all JSC developers toward engineering abstractions? I would expect designing a system that JITs third-party code from adversarial parties would lead you to be more cautious about code quality, not less.


Tests are great. No harm in tests.

I just don't think that we have sufficient evidence to conclude that using type systems to aid the development of concurrent code actually leads to better concurrent code. Therefore, I err on the side of not using the type system for that purpose so that I can use it for many other things that I think it's really good at.

(Just take a look at JSC's source code if you want to see our extensive reliance on those areas of C++'s type system that we can use to get hard guarantees. It's not that we don't like types. It's that we use types in an evidence-based way in those places where they actually help us catch bugs.)


> I just don't think that we have sufficient evidence to conclude that using type systems to aid the development of concurrent code actually leads to better concurrent code.

This is like saying we don't have evidence that evolution is real.

Type systems are proof systems. If a type system encodes concurrency safety, and a program is valid under that system, then you can be 100% certain that program doesn't have a race condition bug due to shared state between threads.

If a type system can prove buggy concurrent code is invalid code, then by definition it leads to better valid concurrent code.


Lol! You would not be successful as a scientist with this attitude.

A type system is a proof. But the proof in the type system is not a proof that it’s better to use a type system than not. That’s an entirely separate question.

I think that type systems are good at some things. Concurrency isn’t one of them.


I made an argument. What's yours? You just made a statement of opinion without any justification.

Additionally your opinion is wrong, by simple counter example. Rust's type checker already has the ability to prevent data races: https://blog.rust-lang.org/2015/04/10/Fearless-Concurrency.h... This works today.


Any specific parts of JSC one should look into?

Btw, in case others are also wondering: JSC is JavaScriptCore.


All over.

We use smart pointers over raw pointers.

We have probably hundreds of data types defined as classes, which have only very controlled conversions to other data types. See things like DFG::AbstractHeap or B3::ValueRep.


Servo, which pcwalton works on, is largely motivated by the awesomeness of shared memory.


WebKit needs tons of extra effort to keep it from being a twisted mess precisely because it uses threads.


Any well written software requires effort to properly design and maintain. What’s the argument here? “Don’t give them shotguns, they might shoot themselves”?


Don't load the shotguns, turn off the safety, point it at their foot and tell them “please be careful”.


Can you be very specific?


When told that multi-threaded access is hard and messy, "X program uses threads and it's not a mess" is not an argument -- for one because nobody said X program is a mess. Threads being messy is not a transitive property.

Nobody doing threads ever said that they are not messy or that multi-thread access and locking et al are easy.


I do threads and I just said that they are not messy.


Well, nobody sane then if I'm allowed the Scotsman! People also defend all kinds of dangerous constructs or practices blaming other programmers for not being good enough to use them even though statistically even the best programmers suffer from problems derived from them (e.g. buffer overflows, or in the case of threads race conditions, starvation, etc.).

And since we're particularly talking about adding threads to Javascript, here's the opinion of the creator of Javascript itself on threads:

  You must be this tall to hack on threaded systems, and that 
  means most programmers should run away crying. But they 
  don’t. Instead, as with most other sharp tools, the 
  temptation is to show how big one is by picking up the 
  nearest ST code and jamming it into a MT embedding, or 
  tempting race-condition fate otherwise. Occasionally the 
  results are infamous, but too often, with only virtual 
  fingers and limbs lost, no one learns.

  Threads violate abstractions six ways to Sunday. Mainly by 
  creating race conditions, deadlock hazards, and pessimistic 
  locking overhead. And still they don’t scale up to handle 
  the megacore teraflop future.

  https://brendaneich.com/2007/02/threads-suck/
Also, are you the author of TFA? That sure speaks to your skills, but that doesn't mean standard definitions (e.g. on concurrency) or traditional consensus (e.g. on threads being messy) don't apply.


Appealing to authority isn't a valid argument.

Brendan and I have had many discussions about the nature of threading in JS.

The reality is that to get high performance in JS on modern (and, AFAICT, future) hardware, JS will need some lower-cost threading mechanism than that provided by workers.

If you read the article you'll see that there isn't any way to mismatch lock/unlock.

The other massive footgun is that, for whatever reason, people insist on creating standard libraries that are not thread safe (collections; apparently BigInteger in Java?). JS is never going to have completely undefined behavior, but non-determinism already exists.

Is concurrent code easier to screw up than single thread? yes. Is that cost enough to discount the possibility of ever having threads? I don't know. The article goes one way, you clearly go the other.


>Appealing to authority isn't a valid argument.

Calling out "argument from authority" when someone posted actual arguments an authority made is even less valid.

>The reality is that to get high performance in JS on modern (and future afaict) is that JS will need to have some lower cost threading mechanism than that provided by workers.

Perhaps, but this is an orthogonal concern as to the issues with threads themselves (not to mention with an environment that has since forever assumed a single thread of execution).


I think that someone taught you the wrong definition of concurrency. I’m just trying to help you learn the right one.


Oh ffs I feel like I'm back in 1994 or so...


Because there's been some huge breakthrough since then that eliminates threading issues?

(If you answer just use Erlang, CSP or any other variation, well, my point exactly).


The only breakthrough is that some of us learned to use threads. Apparently some of us didn’t.


Apparently, after decades of CS and tons of costly errors, some of us still haven't learned that issues like memory safety, race conditions, etc. are either handled at the language level, or are dangerous for everybody -- and that those who consider themselves immune to them because they "know how to use them" and are "better programmers" are just deluded.


You just called WebKit a twisted mess. I asked you for examples. Looks like you don’t have any.

Like I said, WebKit uses threads and it’s great. Maybe some day you will also learn to use threads.


coldtea literally never said that WebKit was a twisted mess.

robertwhiuk, responding to you, referenced using "twisted mess"

pizlonator:

  I think it would be weird if objects were thread-
  restricted by default. Concurrency is most fun when it's
  easy for a thread to access an object it wants.
robertwhiuk:

  You say fun, but what you mean is twisted mess. 
No reference yet to WebKit until your response:

  WebKit uses threads. WebKit is not a twisted mess.
coldtea chimed in with this:

  WebKit needs tons of extra effort to keep it from being a
  twisted mess precisely because it uses threads.
What is that "extra effort"? Discipline, care, good work. coldtea does not call WebKit a twisted mess. In fact, they seem to be saying the opposite. That it is not a twisted mess despite using threads.

You have some good points in this discussion, generally. But you should at least be honest in your characterization of other people's comments.

  Maybe some day you will also learn to use threads.
Maybe someday you will also learn reading comprehension.


coldtea was unable to come up with a single example of where WebKit is a twisted mess or even needs effort to avoid becoming one. No matter what kinds of semantics you throw at this, I think it’s the case that these arguments against threads lack basis.


>coldtea was unable to come up with a single example of where WebKit is a twisted mess

That's because coldtea (me) never said it IS a twisted mess.

What I said is that with threads, WebKit (and any program for that matter) needs extra effort to not be a twisted mess. Exactly what the parent explained again, and you don't seem to have got even this second time.

Why that is the case (in other words, why parallel programming with threads is harder and requires more effort than without), is CS 101.

Some reasons?

Synchronization and scheduling access to resources. Race conditions -- data races, deadlocks, etc. Starvation. Balancing the number of threads (diminishing returns). Locking subtleties. VM/state/etc. overhead of threading.

In fact you'd be hard pressed to find seasoned programmers that would disagree that threads are problematic and require extra caution.

Heck:

  "Although threads seem to be a small step from sequential computation, in fact, they 
  represent a huge step. They discard the most essential and appealing properties of 
  sequential computation: understandability, predictability, and determinism. Threads, as a 
  model of computation, are wildly nondeterministic, and the job of the programmer becomes one 
  of pruning that nondeterminism" -- https://www2.eecs.berkeley.edu/Pubs/TechRpts/2006/EECS-2006-1.pdf


You won’t weasel out of this that easily. You still have not provided anything other than the usual BS reasons for why threads require extra effort.

You know, drawing stuff to the screen also requires effort. As does file I/O. It’s computing - people get things wrong. The interesting question is: are threads so unusual in this regard that your whining makes any sense?

We have lots of threads, but except when we are in the middle of something big and concurrency-related like concurrent GC, we have very few bugs of the sort you describe. Most of the bugs that bother me right now are in the compiler and runtime and they are not concurrency bugs at all. It’s true that if someone is deliberately trying to increase concurrency, they will fix some concurrency bugs along the way, just like a person writing compiler phases will fix compiler bugs along the way. We haven’t given up on compilers just because they might have bugs.

Finally, you said: “WebKit needs tons of extra effort to keep it from being a twisted mess precisely because it uses threads.” You seem to have now significantly walked back from this statement since all you can come up with are reasons why threads are hard that are not related to WebKit or any specific software package. Sounds like maybe you just don’t know how to use threads so you spread FUD to avoid having to learn them.


>What is that "extra effort"? Discipline, care, good work. coldtea does not call WebKit a twisted mess. In fact, they seem to be saying the opposite. That it is not a twisted mess despite using threads.

Exactly, thanks.


Imagine how much more fun it would be if threads sometimes got access to objects they didn't want!


I think the proposal was to thread-restrict variables, not objects. (Or more precisely, any variable is either restricted to a thread or guarded by a lock.) So it wouldn't actually stop you from accessing any object you want, as long as you can reach it from something in scope. But it also might not be a very useful safety guarantee.


Thread.restrict is about objects.

It’s interesting to also have a function like that for properties and/or variables.


Yeah, if by "fun" you mean "catastrophic"


I'm late to the party but please, for anyone reading this, look into the actor model. Shared mutable state quickly becomes unmaintainable. It used to be slow to do shared immutable state but that's no longer the case. Today there are many better ways to do shared memory with copy-on-write so data is only copied if it's changed. Think back on history - there are only a handful of computation approaches that have stood the test of time. Pipes in unix, spreadsheets, data management systems like MS Access or FileMaker. Had concepts from those programs made it into programming, our lives would be a lot simpler today. We don't need to torture ourselves for imagined efficiency by copying the C threading model that quickly dissolves into spaghetti.


> there are only a handful of computation approaches that have stood the test of time. Pipes in unix, spreadsheets, data management systems like MS Access or FileMaker. Had concepts from those programs made it into programming, our lives would be a lot simpler today.

Can you go into more detail on which concepts from Pipes in unix, spreadsheets, data management systems like MS Access or FileMaker you're thinking of? My familiarity with the tools is leading to blindness about application...


Ya what I was trying to say for each of those is:

Pipes/streams: one way channels between isolated executables

Spreadsheets: the only form of functional programming that's reached mass adoption (equivalent to lisp for the most part)

DBMS: the only form of declarative programming other than the web that's reached mass adoption. In this case the view has two-way binding to the data, but relationships are declarative and the controller has been abstracted into the GUI and event callbacks on buttons etc (controller is the weakest portion of MVC)

All of these concepts could/should make it into mainstream programming languages. Try searching for Redux, Elm, software transactional memory, the Clojure state store, coroutines (which are interchangeable with state machines but without goto).

I believe that we can use all of these techniques without the ugly syntax of functional programming languages like Haskell, Scala, R, etc. For example Elixir is trying to be a kinder, gentler Erlang.


Hi Zach, I'm working on an implementation of the actor model for Node, and since you mentioned it, I was wondering if you would perhaps be able to provide some feedback on it. Still very early days, but making some good progress on a single-node implementation.


Ya sure, I could maybe take a look. I have to be honest though that I haven't used the actor model for a project yet. I just get a lot of leverage from its principles when I do shell scripting.

In Node I would be concerned about shared mutable memory. Just curious how you are isolating the actors? Or is that still in the todo stage?


http://akka-js.org

Akka port for javascript


The actor model is not the panacea it's often made out to be (though it's obviously better than shared-memory models for almost everything). I'm sure you already know this, I'm just pointing it out explicitly.

For one, AFAICT very few implementations provide built-in back-pressure, which is crucial if you want to avoid runaway resource consumption or having to just drop messages on the floor. This may not be critical for one's particular scenario, but it very often is for applications where you'd use threading and blocking queues (rendezvous being a special case).


Please, no. Shared-by-default heap memory is one of the greatest mistakes ever made in language design. We need a better design justification than it's the easiest thing to implement right now.


The justification is that when you have an array of 100,000 elements that can be split into isolated chunks and processed without any synchronisation, having to marshal each chunk into JSON, do the computation, marshal the result into a JSON string, and marshal that back into a JavaScript array is very wasteful. Heck, even when I'm passing immutable messages I'd still implement it via shared memory to avoid the wasteful marshalling step.
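
To make the wasteful path concrete (worker setup elided; a sketch of the pattern being described, not a benchmark):

  // Four copies of the data just to use a second core:
  const chunk = bigArray.slice(0, 25000);         // copy 1: the chunk itself
  worker.postMessage(JSON.stringify(chunk));      // copy 2: serialize
  // worker: const data = JSON.parse(e.data);     // copy 3: parse
  // worker: postMessage(JSON.stringify(result)); // and the same again coming back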


Transferrable objects fit the bill for that perfectly. They are a zero-copy way to transfer data (currently only typed arrays are supported, I believe) to and from web workers without the overhead of serialization.

No need for the mess that is (in my opinion) true shared memory.
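
For example (the second argument to postMessage is the list of transferables; worker is assumed to be a WebWorker handle):

  const buf = new Float64Array(100000).buffer;
  worker.postMessage(buf, [buf]); // zero-copy: ownership moves to the worker
  console.log(buf.byteLength);    // 0 -- the sender's buffer is now detached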


Fork/join on arrays and trees would be an easier to manage model, IMO.


You can implement that on top of threads.


Of course you can. You can implement anything on top of threads, especially non-determinism, race conditions and deadlocks.


Event loops have these too, they just go by different names:

* non-determinism -> order in which setTimeouts run

* race conditions -> callbacks happen in an unexpected order

* deadlock -> broken callback chain


Yeah, but nobody relies on the execution order of disparate callbacks to achieve their results in JS. You're going to get burned in like two seconds if you try to do anything you listed. Lower-level concurrency primitives in other languages allow developers to build fragile solutions which work 99% of the time until they deadlock and everything blows up. The concurrency model in JS is very explicit about being "no guarantees."


There are many reasons why complex software fails in the 1% case. Concurrency is not the only reason. In my experience it is not the top reason. Our top hard-to-repro or no-repro crashes in JavaScriptCore are from non-determinism introduced by the workload itself. Inside our engine we have many sources of non-determinism that manifest even with concurrency features disabled.


Are you sure that setTimeouts are run in a non-deterministic order?

Callbacks can only come back in an unexpected order if you are using I/O.

A broken callback chain is not the traditional definition of "deadlock".


> Are you sure that setTimeouts are run in a non-deterministic order?

Certainly, if they are issued from different callbacks. :-) And in relation to how they may happen to be interleaved with I/O.

Also add to the non-determinism bucket: the order in which messages arrive from separate Workers.

> Callbacks can only come back in an unexpected order if you are using I/O.

Which abounds and takes many forms both in the browser and node.js.

> A broken callback chain is not the traditional definition of "deadlock".

Indeed, but it is the same class of developer hazard at play: waiting for an event that will never come. Admittedly, only a subset of dropped callback chains correspond to the strict definition of deadlock: those where a chain is involved in marking a state/resource as available.

For example: I ignore the next button press on a ui that disables the button pending a reply because I accidentally failed to update the button state when a network request failed.


> Certainly, if they are issued from different callbacks. :-) And in relation to how they may happen to be interleaved with I/O.

That's non determinism from I/O, not from the callback or the setTimeout. setTimeouts are called in order based on schedule time.

And you can't blame non-determinism in JS on I/O. No language in existence is deterministic based on that measure.


This seems rather racy in Chrome, despite the I/O being deferred to the end :-)

  (function() {
    var x = '';
    setTimeout(function() { setTimeout(function() { x += 'a'; }, 6); }, 4);
    setTimeout(function() { setTimeout(function() { x += 'b'; }, 5); }, 5);
    setTimeout(function() { console.log(x); }, 50);
  })();
It's a fair point that generally I/O is the source of the bulk of the non-determinism in most JS programs. But if that I/O non-determinism is present (as it is in most useful programs), event loops aren't a panacea to avoid peril from it.

Concurrency introduces another source of non-determinism. But just as good programs use care with the ideally limited amount of code responding to events, good concurrent programs use similar care with access to shared state.

I'm unsure if the strawman syntax proposed encourages that care, but it is interesting that it can at least be done without breaking basic safety + performance. With SharedArrayBuffer on the way, we'll get all the non-determinism of parallel execution peeping through to JS without a particularly JS friendly syntax, so it's at least worth thinking about if something should be added to JS.


I object to the term "similar care". There's a big difference in what it takes to understand thread issues compared to single-threaded event-based systems.

For instance, if I'm not mistaken, your setTimeout thing is relying on precise timing. But this is documented when you look up setTimeout. It's not difficult to understand.

How to use locks and conditions/monitors correctly IS difficult to understand. You can't just read a few lines in the API documentation and proceed on to write correct code, beyond small toy examples.


setTimeout is obviously a simpler primitive to understand in isolation than monitors/conditionvars, but I would argue that when used in equally complex scenarios, similar challenges emerge.

Something like Dining Philosophers is no less tricky to express + understand with an event loop, and with JS's async await probably would most cleanly be expressed in a style that mirrors a monitors/conditionvars version.

The fact that JS has added async await suggests demand for the convenience of concurrent blocking threads. While async-await manages to separate out non-async code to a degree, once an await happens, the global state can also be arbitrarily mutated no differently than with threads.

  function block() { return new Promise(function(a,b) { setTimeout(a, 0); }); }
  var x = 0;
  (async function foo() {
     x = 1;
     await block();
     console.log(x);
  })().then();
  (async function bar() {
     x = 2;
  })().then();
Concurrency is hard, but event loops aren't a magic wand.


I agree strongly with this. Multithreaded code is super hard to understand and super error-prone when you're dealing with threads and locks yourself.


Sadly, most people involved in language design don't actually do any serious concurrent programming and don't know any better, so the mistake keeps spreading and people keep glorifying the mess of shared-memory multithreading.


It's awesome to see work on concurrency, but I'd really like to see a model that didn't involve shared memory.

JavaScript on the web already has the concept of Transferrables. It'd be great to explore how user-land code could create transferrable objects and graphs of them so they could be sent between threads without threads sharing the same heap.


As someone who writes a lot of concurrent and parallel code, I can tell you it's easier to do it if the language is not imposing artificial constraints.

Also, most kinds of concurrency models will probably require the underlying VM to support some kind of shared memory. That's what this post is about.


User-land Transferrables would obviously need to be implemented with shared memory, otherwise you could just use the existing structured clone operation.

This post isn't just about an underlying shared memory model, concurrent access to objects and variables is exposed to users, and enabled by default. That's a huge footgun that I thought the PL world was moving away from.


Threads are quite successful in the real world. I think it's because the alternatives are either harder, less capable, less scalable, or even more of a footgun.


I personally love the idea of transferable objects; this could work very well in an event-driven architecture.


It'd also be nice if immutable objects could just be shared rather than copied or transferred, instead of the current situation where they are copied and come out mutable on the other side.


> instead of the current situation where they are copied and then made mutable.

Could you share an example? As far as I understand, Immutable JS uses something like a trie for structural sharing and avoids memory leaks.


When you send the result of Object.freeze() to or from a WebWorker the object is still copied and comes out the other side as fully mutable (aka, not frozen).

Immutable JS is orthogonal in this case. That's "just" a fancy wrapper on top of a bunch of regular, mutable JS objects that makes it look immutable. It's not truly immutable to the runtime, and has no special interaction with webworkers as a result. It gets deep copied just like any other object.
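
You can see it directly (assuming worker is a WebWorker handle):

  // main thread:
  worker.postMessage(Object.freeze({ n: 1 }));

  // worker:
  onmessage = (e) => {
    console.log(Object.isFrozen(e.data)); // false: the structured clone is ordinary
    e.data.n = 2;                         // and fully mutable
  };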


Sorry, but we need shared memory for fast sharing of large immutable data structures.


Then add immutable objects to JS, which enable a host of other optimizations and are useful in single-threaded contexts too.

Shared-memory multithreading is just very difficult to use correctly. Actor-style concurrency hasn't really caught on (possibly because of the poor performance of workers and postMessage), but there is probably a middle ground that's easier for developers to get right.


I don't think it is a good idea to stray away from the bare metal in this case. We're trying to bootstrap other languages on top of JS. They can introduce their own abstractions to shield off the programmer. Just my 2ct.


JS already supports immutable objects, by using Object.freeze().


That doesn't really make things immutable. Even ignoring the deep vs shallow issue, consider what effect Object.freeze has on a JS Map (hint: none, really; you can still add/remove things).
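
For example:

  const m = Object.freeze(new Map());
  m.set('key', 1);           // no error: Map entries live in internal slots,
  console.log(m.get('key')); // 1 -- which Object.freeze doesn't touch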


Right, that's where libraries like ImmutableJS come in.


That's not deeply immutable like you'd want for sharing objects across threads, or even for using in contexts like React and Redux.


Not too hard to call Object.freeze() recursively to get the deep immutability you need. And of course there's ImmutableJS for other immutable data structures.
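
E.g. the usual recipe (note it doesn't handle cyclic objects):

  function deepFreeze(obj) {
    for (const name of Object.getOwnPropertyNames(obj)) {
      const value = obj[name];
      if (value !== null && typeof value === 'object') deepFreeze(value);
    }
    return Object.freeze(obj);
  }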


Shared memory could be hidden from the users. Passing objects would really be passing control of objects. If process A creates object a, and passes it to process B, no copying has to happen if the underlying mechanism uses shared memory and access control to effect the transfer (a change of owner).


Yes. Presumably transferrable objects would have to be created on a shared heap. Non-transferrables would live on thread-local heaps.


I find that thinking about multiple heaps is harder than thinking about threads. (Ever programmed with scoped memory? It's a nightmare.)

I find that thinking about transferrables is harder than thinking about threads. It's gross that an object suddenly loses all of its state just to give that object to another thread. Then, other parts of your code that still want access to that data have to do things that are definitely more gross than acquiring a lock.


You don't need locks to access immutable objects (except perhaps for garbage collection).

But I agree that transferring control is a bad idea.


I feel like most of the comments here are ignoring the fact that this is a straw man proposal, as the blog post emphasises several times.

They aren't proposing to add threads to JS in exactly this manner. They're pointing out that it would be technically feasible, for WebKit at least, and wouldn't add any unnecessary overhead. The specific proposal is just a starting point for discussion.


JS already has a proper concurrency model which is the event loop. What it doesn't have is a model for parallelism. It's important to understand the difference! It can be hard to grasp at first, because few people make the proper distinction.

If you're new to the subject, don't take my word on it, but watch the talk "Concurrency is not parallelism" by Rob Pike[0]. It's highly recommended.

Learning a language that makes a proper distinction will help - I got it through Clojure, but there's certainly others like (obviously) Golang.

0. https://youtu.be/cN_DpYBzKso


You've got to love computer scientists:

"Hey, there's this word 'concurrent', which means to happen at the same time, what should it mean in our field?"

"How about, 'happening in independent processes'?"

"Great! There's this other word, 'parallel', that means coexist alongside but be structurally independent. What should that mean?"

"How about, 'happening at the same time'?"

"Fantastic! OK, let's go to the pub."

Hard to believe that people get confused.


An event loop is not concurrency.

Just because concurrency and parallelism are different does not mean that concurrency and event loops are the same.


Creating async tasks that can run concurrently and deliver progress via events is a concurrency model.


How? Explain please. https://en.wikipedia.org/wiki/Concurrency_(computer_science)

"order-independent components", which makes sense for the Event loop.


I draw the line at whether the interleaving granule is under the programmer’s control or not.

For example, I can say that when a task executes on an event loop, no other tasks are executing concurrently to it because the event loop executes tasks sequentially.


Ok you draw the line there, but definitions are supposed to be commonly understood, that's how language works...


Yeah, I agree it would be better if people in the event loop community learned to use the word “concurrency” correctly.


You can build concurrency on the event-loop, though, which is what the OP was probably referring to.


Not quite. Consider a component in your code that does this:

    for (let blah of things)
        foo(blah);
In an event loop, if things has a lot of stuff in it, this can block the event loop. Other things in the event loop won't be able to run until this completes.

With threads, this just works.


That is parallelism, not concurrency.

Concurrency: two functions can run one after the other while they are non blocking for the callers (e.g. hyperthreading).

Parallelism: two functions can run at the same time (e.g. multiple processors).


Your definition of concurrency is not correct. Concurrency usually implies that the work is chopped up into very small interleavable bits. A whole function call, with a loop of unbounded length, being the default granule of interleaving is definitely not part of the definition of concurrency.

The term "concurrency" subsumes both event loop style concurrency and two-cores-running-code-at-the-same time concurrency.


Yes, it is. The JavaScript event loop is a very lazy scheduler that does no preemption. It's a very primitive concurrency model. It's up to the users of the language to break their tasks down into sufficiently small components, or to yield, to allow for interleaving of work.


Events and asynchronous operations are sufficient for a concurrency library to be built on top of JS with minimal changes. I wouldn't consider what JS has now a reasonable concurrency model, but the asynchronous event handling could serve as a starting point for a source of events that run on multiple threads.


That was the state-of-the-art in OS concurrency in the early nineties. Sometimes, you would have to reboot your computer because some app forgot to yield the event loop.

It's not as bad if you have to restart your app because some task forgot to yield the event loop, but it's still bad.

Threads solve this problem comprehensively because threads get to make progress regardless of whether or not other threads are in a loop.


> That was the state-of-the-art in OS concurrency in the early nineties.

Many systems had pre-emptive multitasking in seventies and eighties.

I think Amiga was the first implementation for consumers. Released in 1985. https://en.wikipedia.org/wiki/Amiga_1000


In case anyone is wondering: this is nonsense.

JS the language has about as much to say about concurrency as C does.


The event loop doesn't allow you to do a computation concurrently with handling user events, so it's not at all a proper concurrency model.


It's a proper concurrency model, it's just not an efficient one when it comes to execution.

Concurrency has (at least) two components: design concurrency, execution concurrency. The event loop mechanism in JavaScript allows you to do effective concurrent design. It does not provide for concurrent execution. This is the primary distinction in academic CS between concurrency and parallelism.


The event loop mechanism in JavaScript is not well suited to designing OR executing a computation concurrently. Try replacing a for loop in numeric code with anything that allows for events to be handled. It's extremely awkward.


Surprisingly, that's not what concurrency model means.

What you describe is parallelism.


Parallelism means using multiple hardware resources on some task. Concurrency means multiple tasks may make progress in the same time period - perhaps via interleaving.

What I'm saying is that JS is terrible at allowing multiple tasks to make progress when one of those tasks is computation. Try taking some numerical algorithm and allowing interleaving user events with it - it's super nasty.

I hope I'm wrong because I've struggled with this. How would you perform a JS computation concurrently with handling user events?


I'm not a JS expert, but I do a lot of work with heavy numerical computing that occasionally has to run inside a responsive UI without threading (it's no fun). The solution is "simple", if annoying:

1) you find a small-enough granule of work inside each compute kernel so that one iteration/update/whatever of it is under the target frame duration (for these kinds of UIs 30 FPS is plenty and 15 FPS is tolerable, we're talking business visualization apps and so on)

2) you organize your computation as a resume/yield loop [0]; you run a few iterations of a function like "percent_finished = make_progress(state_of_computation)" then schedule yourself to do another round of progress on the next frame, and return/yield so that the event loop / UI can flush any pending I/O events

Needless to say, this gives up a lot of compute efficiency in return for making it relatively easy to maintain responsiveness. And it doesn't work on problems where you can't find good "yield" points in the computation. But a lot of numerical computing problems have natural/reasonable yield points, e.g. after each data point is processed, or after every 100 elements of a large vector are processed.

[0] A great example of this in a widely used library is the streaming compression interface in zlib, which lets you compress or decompress a nearly-arbitrarily-small granule on each invocation.
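
A minimal sketch of (1) + (2), with makeProgress standing in for a real compute kernel:

  function pump(state) {
    const deadline = performance.now() + 10;  // budget well under one frame
    while (performance.now() < deadline) {
      if (makeProgress(state) >= 100) return; // percent finished
    }
    setTimeout(() => pump(state), 0);         // yield so pending UI events can flush
  }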


>I hope I'm wrong because I've struggled with this. How would you perform a JS computation concurrently with handling user events?

Like everything else -- by breaking the work into small steps, and e.g. doing looping etc asynchronously.

There's this coming officially now too: https://tc39.github.io/proposal-async-iteration/


Concurrency generally means preemption. Event loops have no preemption. It's not correct to use parallelism to mean preemption, as you seem to be doing.


>Concurrency generally means preemption

No, it doesn't. Concurrency is about the logical structure of the code. From Wikipedia:

  "In computer science, concurrency is the decomposability 
  property of a program, algorithm, or problem into order-
  independent or partially-ordered components or units. This 
  means that even if the concurrent units of the program, 
  algorithm, or problem are executed out-of-order or in 
  partial order, the final outcome will remain the same".
And from "Parallel and Concurrent Programming in Haskell":

  In many fields, the words parallel and concurrent are 
  synonyms; not so in programming, where they are used to 
  describe fundamentally different concepts.

  A parallel program is one that uses a multiplicity of 
  computational hardware (e.g. multiple processor cores) in 
  order to perform computation more quickly. Different parts 
  of the computation are delegated to different processors 
  that execute at the same time (in parallel), so that 
  results may be delivered earlier than if the computation 
  had been performed sequentially.

  In contrast, concurrency is a program-structuring technique 
  in which there are multiple threads of control. Notionally 
  the threads of control execute "at the same time"; that is, 
  the user sees their effects interleaved. Whether they 
  actually execute at the same time or not is an 
  implementation detail; a concurrent program can execute on 
  a single processor through interleaved execution, or on 
  multiple physical processors.
Note: "program-structuring technique", "Notionally" and "implementation detail".

Concurrency is NOT "running in parallel" (running simultaneously), as you seem to believe, and not even about your code getting preempted. As long as it's structured concurrently, that is.

But if you have concurrency AND preemption OR different independent processes etc over multiple cpus you can have parallelism. In a sense concurrency is a more generic principle.


In CS, parallelism is simultaneity. Two things are parallel if they occur at the same time.

Concurrency is design, which may allow for parallelism. And may allow for preemption. Preemptive or cooperative concurrency is a design decision, both are concurrency.


"Concurrently" means "at the same time", both in English and in CS. One way to make it appear as if you have concurrency is to timeslice. One really bad way to do this is to have the timeslicing granule be computations of unbounded length with no preemption.

I agree with you that it's valid to refer to event loops as a kind of concurrency. I would call it a very primitive and low-quality kind of concurrency.

But it's not at all valid to claim, as @coldtea seems to be doing, that concurrency excludes the notion of two things happening at the same time. He's implying that if you run things at the same time then it's called parallelism. That's just not true. It's still called concurrency, even though it could also be called parallelism in the right circumstances.

I think it's best to understand this terminology this way:

Concurrency: the phenomenon of two things happening at the same time, or being made to appear that way.

Parallelism: the study of how to make things run to completion in less time by using multiple CPUs or computers.


>"Concurrently" means "at the same time", both in English and in CS. One way to make it appear as if you have concurrency is to timeslice.

For one, the CS literature defines concurrency as not necessarily being at the same time.

Second, timeslicing, by definition, is not at the same time -- hence the need to use quotes there. "Concurrently" does not mean "at the same time" any more than "timeslicing" does.

>But it's not at all valid to claim, as @coldtea seems to be doing, that concurrency excludes the notion of two things happening at the same time.

I never claimed that. I wrote that concurrency is not parallelism (which it isn't), not that it excludes parallelism.

Basic timeslicing (barring any other mechanism), for example, is concurrent but not parallel. Node's evented "cooperative" concurrency is also not parallel.


Timeslicing makes it impossible for a program to accidentally discover that its threads or processes are not really running at the same time. For example, an infinite loop in one thread will not halt all other threads. To me, that’s concurrency.

Event loops are not like that. They run tasks to completion before starting other tasks and indeed the whole point of how you leverage an event loop is to take advantage of the fact that other things don’t happen at the same time. I would prefer to say other things don’t happen concurrently, and you would have understood my meaning.

It’s not correct to have used the word “parallel” in that context since my example only needs one CPU.

I think it’s ok to think of event loop interleaving bugs as being concurrency bugs, but insisting that event loops are a proper instance of concurrency is diluting the terminology too much IMO. Hence I much prefer to use concurrency to mean the case where the interleaving granule is not under the programmer’s control.
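
To make the run-to-completion point concrete, a minimal sketch (plain JS, nothing hypothetical here):

    // Event loops run each task to completion: one long task starves
    // everything else, because nothing preempts it.
    setTimeout(function () { while (true) {} }, 0); // never yields
    setTimeout(function () { console.log("never runs"); }, 0);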


In the context of computing, I prefer to simplify to:

Concurrency: Having more than one linear sequence of instructions logically active at the same time.

Parallelism: Having more than one instruction physically being executed at the same time.


I'll generally agree with your definitions for concurrency and parallelism. Our parallelism definitions were essentially the same.

But what you say about coldtea is not a fair interpretation of their post. millstone said this:

  The event loop doesn't allow you to do a computation
  concurrently with handling user events, so it's not at
  all a proper concurrency model.
By this definition, concurrency is only occurring when two things happen simultaneously. coldtea is correct that that is parallelism.

Parallelism (and coldtea's comment does not exclude this) is a special case of concurrency: the case where a program's components execute simultaneously. Whereas, generally, concurrency in CS does not require simultaneous action (as you state in your definition).


That's not the definition I had in mind. What I meant is that JavaScript lacks an easy way to allow for computation that doesn't block the event loop.

We can imagine an old school cooperative multithreading architecture:

    for (var i = 0; i < arr.length; i++) {
        someCalculation(arr[i]);
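        // hypothetical primitive (not real JavaScript): hand control
        // back to the event loop, resuming here on the next turn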
        yieldToNextEvent();
    }
but JavaScript does not support this. Instead you have to awkwardly re-structure your computation in continuation-passing style (CPS) or an equivalent to do this sort of thing.


I don't think that's the fault of JavaScript - it does not specify the event loop itself. ES6 has generators, so all you need is an enqueue(function* () {}). There's already an enqueue in the browser - setTimeout() - and one in node - process.nextTick() - so all the functionality you need is there.
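
For what it's worth, a minimal sketch of that idea (runCooperatively is a made-up name; arr and someCalculation are borrowed from the snippet above):

    // Drive a generator one step per event-loop turn, so other events
    // can run between steps. setTimeout stands in for the enqueue
    // primitive (process.nextTick in node).
    function runCooperatively(gen) {
        var it = gen();
        (function step() {
            if (!it.next().done) setTimeout(step, 0);
        })();
    }

    runCooperatively(function* () {
        for (var i = 0; i < arr.length; i++) {
            someCalculation(arr[i]);
            yield; // cooperative yield point
        }
    });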


What's not quite clear to me is whether "the cell never moves" is one of the assumptions that actually goes into making this work or not. And if it is, how that plays with a compacting or generational collector, where you do in fact want the cell to move.


From the standpoint of the concurrency scheme I'm proposing, it's OK to move the cell in the GC. Then it becomes a standard GC moving problem.

In JSC we don't move cells for other reasons, and the point of saying that they don't move is that the object model itself does not require the cell to ever move.


Thank you for the explanation!


Genuine question: Aren't locks a quagmire in terms of lock management? Why start there?


All of the alternatives are an even bigger quagmire.


JavaScript is already concurrent. It isn't parallel though.

Is there any real-world application that simply cannot be built without this feature? It seems like a solution looking for a problem.

I can see needing parallel JS for highly parallel, compute-heavy applications, but those already seem serviced by SharedArrayBuffer.

So... why?


Forgive the ignorance, but why do we need threads when we already have the event loop?

And, as a follow-up, why introduce a new concurrency model that's based on Java instead of, say, creating a concurrent "event pool" where execution order isn't guaranteed and callbacks are executed with maximum concurrency (edit: to clarify, I mean maximum parallelism)?


Event loops are great for handling work that can occasionally be handed off to worker threads (like how Node works with libuv), but they're awful for sustained CPU-bound work, such as games, 2D/3D visualizations, finance, and the like.

SharedArrayBuffer is a nice primitive that allows multiple Threads (or WebWorkers) to work on data in parallel. The additional primitives they're proposing offer quite a bit of flexibility. For one, I would really like an easy `new Thread(fn).asyncJoin()` to create a Promise to do expensive CPU-bound work. There are a few npm modules that do some version of this but it can be slow.

As for the concurrency model, it should be possible to delegate tasks to a pool in this one. Map an array of input data and functions to an array of promises, then Promise.all() or Promise.race() on them.
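
A sketch of what I mean, assuming asyncJoin resolves with the thread function's result as the post suggests (chunks, crunch and render are placeholders):

    // Sketch only: Thread/asyncJoin are from the proposal and don't
    // ship anywhere yet; chunks, crunch and render are placeholders.
    var jobs = chunks.map(function (chunk) {
        return new Thread(function () { return crunch(chunk); }).asyncJoin();
    });
    Promise.all(jobs).then(render);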


I think that kind of stuff is much more relevant in the land of WebAssembly. It certainly makes sense there, but in normal JS, I doubt the usefulness.


It depends. It could be very useful for React/React Native, for one.


If you're that concerned about performance, though, isn't doing things like `.asyncJoin()` just going to eat up any potential gains by 1) wrapping the result in a Promise and 2) throwing the Promise back into the event/microtask queue? It seems like the transport would quickly become the bottleneck.

I'm also struggling to understand what tier of performance you're aiming for, since you're hoping to boost performance by using a Thread in JS when, in node, you can already code expensive computation in C++ and expose a binding back to JS (which to me makes more sense in the context of game design, since that's basically what most already do with Lua; this would just be in reverse).


Joins like that wouldn't really be used for coordinating between multiple threads in a fine-grained fashion. For example, maybe you'd kick off two worker threads from your main thread, and each of those is going to do some fine-grained processing on some shared set of data. The main thread needs to know when that process is finished, so it joins both threads.


I think the event loop implements concurrency only, but threads would implement concurrency with parallelism.


I edited my comment to clarify (hopefully). What I was thinking was that a theoretical "task pool" would execute callbacks with maximum parallelism.


One other thing to consider: a SharedArrayBuffer can only carry limited data types; it's not a shared object. I used Flash's equivalent of a SharedArrayBuffer and found it was very limited in what you could use it for.
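
For the JS one, the limitation looks roughly like this:

    // A SharedArrayBuffer is raw bytes viewed through typed arrays:
    // numbers only, no objects, strings or functions.
    var sab = new SharedArrayBuffer(1024);
    var nums = new Float64Array(sab); // fine: shared numeric data
    nums[0] = 3.14;                   // visible to other workers
    // There's no way to put { x: 1 } or "hello" in there directly.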


This is very interesting. If this is indeed implemented, it will remove a serious hurdle from the React Native equation. Still JS, but the ceiling will go up considerably.


JavaScript should just copy the Tcl threading model.


They mean parallelism, right? Concurrency is what JS is already known for.


Yeah, JS definitely has concurrency. He's just twisting the term a bit into his world of shared-memory multithreading to make his model sound superior to other concurrency models.


Or use ClojureScript with core.async - which implements a go-like concurrency model. (Not saying that's fully equivalent, or meant to denigrate this interesting work.)


Yeah, and let's have the gazillion scripts loaded by your usual favourite news site spawn 10 threads each...

For browsers, this is a bad idea. The biggest advantage of the event loop approach is that it is somewhat deterministic. With threads, that determinism goes out of the window. Lock management? No way I'm doing that for browsers. I will quit working on client-side code if that becomes a requirement!

Threads for the server-side? I don't care.


Whether it's a good addition to the language or not is debatable, but as a browser feature, it sounds terrifying. Browsers have a huge attack surface as it is (I mean, WebGL is a thing – exposing GPU drivers to random untrusted code on the internet, what a brilliant idea...), and exposing threading will only make it worse. Every single API would have to be carefully audited and made thread-safe, and I'm sure that many 'fun' bugs would crop up as a result.


Which is why the proposal specifically says that only a few DOM APIs would be exposed to background threads (like console.log()).


For that matter, anyone who'd like to have "fun" with concurrency and lock management in browser-side JS is free to play around with recurring asynchronous data fetching cached in localStorage when multiple browser tabs are open.

Spoiler: locks are the wrong approach and you should be relying on events/callbacks instead. Oddly enough, that's similar to my gut feeling about Threads in JS.
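
For the curious, the event-based version leans on the storage event, which fires in the other tabs of the same origin when a key changes (handleUpdate is a placeholder for your own refresh logic):

    // Instead of locking, each tab reacts to writes made by other tabs.
    window.addEventListener('storage', function (e) {
        if (e.key === 'cachedData') {
            handleUpdate(JSON.parse(e.newValue));
        }
    });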


A browser can easily optimise HTTP or localStorage calls internally, without introducing Threads to the "userland" code.


I'm talking about a userland implementation.


Because of user input and networking, we already don't have determinism. And even without introducing a thread feature, a site can spawn 10 threads by using Web Workers. So this concern isn't specific to introducing threads.
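
Spawning those threads is already a one-liner per worker (message passing only, no shared memory by default):

    // Already possible in every major browser today.
    for (var i = 0; i < 10; i++) {
        new Worker('worker.js'); // each worker runs off the main thread
    }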



