
Threads Are a Bad Idea for Most Purposes (1995) [pdf] - ptx
https://web.stanford.edu/~ouster/cgi-bin/papers/threads.pdf
======
oppositelock
I started doing massively parallel programming on SGI systems around the time
this paper was published. SGIs at the time could have 64 CPUs in a single
system image, which was very novel. Sun was working on its early multicore
workstations, and companies like Cray were pushing different models of
distributed computation.

This paper came at a time when threads were really painful to work with. POSIX
threads were still new and mostly unsupported, so you were stuck with whatever
your OS exposed. On IRIX, you did threads yourself by forking and setting up a
shared memory pool; on Solaris, you had the best early pthreads support; in
Java, you used the native Thread classes, which only really worked on Solaris
at the time. It was a mess!

This mess is now solved. Pthreads are everywhere, C++ has std::thread, Java
threads work everywhere, and we've had many new languages come out which
handle parallelism beautifully - for example, the consumer/producer model
built into channels in Go is very elegant. The odd one out is Win32, but it's
close enough to pthreads for the same concepts to apply.

Event-driven programming has also become threaded; this is how the whole
reactive family of frameworks, node.js, etc., handles parallelism.

As someone who's been doing this for a long time, what I find confusing is
higher-level constructs that try to hide the notion of asynchronous
operations, such as futures and promises. As a concept, they're fine, but
they're difficult to debug, because most developer tools seem not to care
about making debugging threads easier.

~~~
pdimitar
> _This mess is now solved._

If you say so. I can't count the dollars I've made fixing other people's
poorly written multithreaded code in my entire career, including in the last 3
years.

Thread support is [more or less] standardised in all major OSes now, sure.
Doesn't change the fact that it's an extremely bad fit for the human brain to
think about parallelism.

Stuff like actors with a message inbox (Erlang/Elixir's preemptive green
threads) or parallel iterators transparently multiplexing work across all CPU
cores (Rust's `rayon` comes to mind) are much better abstractions for us poor
humans to think in. Golang's goroutines are... okay. Far from amazing. Still a
big improvement over multithreaded code, as you pointed out; I fully agree
with that.
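
To make the `rayon` point concrete, a minimal sketch (assuming the rayon crate
is declared as a dependency):

    // Assumes rayon = "1" in Cargo.toml.
    use rayon::prelude::*;

    fn main() {
        let v: Vec<u64> = (1..=1_000_000).collect();
        // Swapping `iter()` for `par_iter()` transparently spreads the
        // work across all CPU cores; no explicit threads or locks.
        let sum: u64 = v.par_iter().map(|x| x * x).sum();
        println!("{sum}");
    }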

I might be projecting here, and please accept my sincere apologies if so, but
it seems to me you are being a bit elitist in your comment. Multithreaded
programming is still one of the most problematic activities, even for senior
programmers, to this day. Multithreading bugs get written and fixed every day.

At this point I believe we should just move to hardware-enabled preemptive
actors where message passing is optimised at the hardware level, and just end
all parallelism disputes forever, since they are an eternal distraction. (The
overhead of message passing today is of course not acceptable in no small
number of projects. Hence the hardware suggestion.)

~~~
travisgriggs
> Doesn't change the fact that it's an extremely bad fit for the human brain
> to think about parallelism.

I'm curious why you feel this way. Certainly anything with more than one
thread is a lot harder than single-threaded code. Given.

But I think the human experience is rich with real-world counterparts that can
make multithreaded programming "natural" (not necessarily easy; there's a
difference). Basically any process in real life that involves collaborating
with multiple people is a collaborative multi-threaded process. When forced to
grapple with multi-threaded solutions, I often ask myself, "if I had a room
full of people working on this problem, what would have to be in place to make
the operation flow smoothly?" For me, this anthropomorphisation of the process
makes it a "good fit" for how my brain is used to solving lots of real-world
problems.

On the flip side, I haven't had a lot of luck finding real-world experiences
that I can model coroutines/async/etc. on. So they may ultimately be easier,
but if our rubric for "fit for the human brain to think about" is what the
wealth of human experience has evolved our brain to handle well, I'm less
convinced.

But I like learning new things. Maybe I just haven't seen the lightbulb yet.
Help me see your point of view?

~~~
baggy_trough
The greatest programmers in the world cannot write bug-free C-level threading
code. It is a task beyond human capabilities.

~~~
ringzero
I don’t follow - are you asserting that, by contrast, the greatest programmers
in the world can write bug-free single-threaded code?

~~~
pshc
I think the implication is to stop using C.

~~~
pdimitar
I'd interpret it to stop using the pthreads model of the parallel coding, in
general. Because the pthreads model exists and is used in many programming
languages.

But stopping to use C is a good start (for whoever has the choice)!

~~~
nineteen999
Maybe the functional programming/managed code fans should step up to the plate
and rewrite the operating systems, desktop environments, and hundreds of
thousands of command-line tools etc. that have all been implemented in C or
C++.

Not as toy proofs-of-concept. Fully fledged replacements for all that stuff
that can be used in our daily work. Build distros of it so that we don't have
to pick and choose and replace the bits of our systems in a piecemeal fashion.

They could show us how it's done instead of talking endlessly about it in
online forums. But let's face it: it's never going to happen.

~~~
pdimitar
Hm, where did I say anything about FP?

As for managed code, it's being used with huge success in a lot of places
(granted, not in OSes or drivers), but that's quite the huge topic by itself.

~~~
nineteen999
I never implied you did; they are merely one of the two major groups of
programmers I see consistently deriding the C-based infrastructure that
presumably enables their paychecks to a large extent.

I'm not defending C, I'm merely sneering at its noisy detractors who spend
more time complaining about it than supplanting it.

------
yongjik
I'm not exactly sure if events are easier to debug. I use tornado (an
event-based Python web server library) extensively at work - when something
goes wrong you don't get a nice stack trace, you get some random sampling of
callback spaghetti. Also, the _default_ state of affairs is that everything is
serialized in a single thread and everything waits for its predecessor, even
when tasks are totally unrelated, though that probably says more about the
particular framework than about event-based programming in general.

I'd rather use a real multithread-based framework, honestly, though I concede
that it also opens up different ways of making developers' lives miserable.

~~~
maxmalysh
It's time to migrate to async/await. Check out asyncio.Protocols for TCP
servers and aiohttp for HTTP servers. We get beautiful async stack traces
delivered to Sentry. Debugging anything is a joy.

~~~
fjp
I’m always a +1 for aiohttp (and aiopg)’s ease of use

------
davidw
The author, in case anyone didn't recognize the name:

[https://en.wikipedia.org/wiki/John_Ousterhout](https://en.wikipedia.org/wiki/John_Ousterhout)

Specifically, he created the Tcl programming language, which had a nice event
loop way back when.

~~~
Uhhrrr
It's still there! IIRC you could also have arbitrarily many nested event
loops, for better or worse. And some threading support is also available [1],
although it seems to be culturally disapproved of.

1. [https://wiki.tcl-lang.org/page/thread](https://wiki.tcl-lang.org/page/thread)

~~~
davidw
I just meant to say that it was already in place... uh...holy crap... 25 years
ago.

~~~
kevin_thibedeau
Technically it was part of Tk. It got grafted into mainline Tcl 20 years ago.

------
milesvp
A shame that no one has mentioned cache invalidation as a further reason
threaded programming is hard. One of my biggest takeaways from Martin
Thompson's talk on mechanical sympathy is that the first thing he tries when
brought in as a performance consultant is to turn off threading. He mentions
locking as a performance problem, but says that these days cache locality can
be the key to speeding up slow applications.

~~~
zzzcpan
Yeah, it was hard to realize in 1995, but nowadays pretty much everyone who
has tried has experienced performance problems with threads, or rather with
the shared-memory multithreading concurrency model. It doesn't actually scale
if you idiomatically synchronize shared memory access with locks or atomics;
you need some way to batch things and amortize the cost of synchronization
between cores while also preserving locality, which ultimately implies an
asynchronous model where threads are just a low-level implementation detail.
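
A minimal sketch of that batching idea (Rust scoped threads, available since
1.63; illustrative only): reduce each chunk locally and touch the shared lock
once per chunk instead of once per item:

    use std::sync::Mutex;
    use std::thread;

    fn main() {
        let data: Vec<u64> = (0..1_000_000).collect();
        let total = Mutex::new(0u64);
        let total_ref = &total;

        thread::scope(|s| {
            for chunk in data.chunks(250_000) {
                s.spawn(move || {
                    // Batch locally: one lock acquisition per chunk,
                    // not one per item, amortizes the synchronization.
                    let local: u64 = chunk.iter().sum();
                    *total_ref.lock().unwrap() += local;
                });
            }
        });
        println!("total = {}", total.lock().unwrap());
    }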

~~~
milesvp
I've been hearing rumors that AMD's current offerings have been starting to
avoid even shared cache between processors. It boggles my mind that any CPU
designer would think a shared L2 cache is a good idea. Makes me wonder where
my model of memory starts to break down. I always just think of L2 as being
slower, less expensive memory than L1. I'm wondering if there are any benefits
that actually outweigh the cache eviction penalty of multiple processors
accessing it...

~~~
kardos
It's surely an engineering tradeoff resulting from weighing the different pros
and cons. If you had a dedicated cache per core at 1/Nth the size, much of it
would be wasted when you're running at less than full tilt -- e.g., if you're
using 2 of 4 cores, then half of the cache is artificially unavailable instead
of doubling the cache available to those 2 cores.

------
dang
A thread from 2017:
[https://news.ycombinator.com/item?id=14547063](https://news.ycombinator.com/item?id=14547063)

Way back in 2008:
[https://news.ycombinator.com/item?id=399670](https://news.ycombinator.com/item?id=399670)

~~~
zevv
> A thread from 2017

Didn't you read the article?!

~~~
electricityUser
Did you click on the links in the comment above yours? ;-)

~~~
metalliqaz
Whoosh

~~~
birdyrooster
The perniciousness of winky face is on full display here

------
cdoxsey
You have two choices when it comes to utilizing multiple cores: threads and
processes. Threads are hard, but sharing data between processes is hardly any
easier.

Maybe they're a bad idea, but these days you have no choice but to learn how
to use threads. AMD's run-of-the-mill processors have 16 cores, Intel's 8.
Servers have lots more. Heck even your iPhone has 6.

The clock rate on CPUs isn't getting any better. It's just more cores from
here on.

~~~
marcosdumay
It being hard to share memory is a feature, not a bug.

Threads make any single thing in your program mutable without your direct
control. Processes keep the mutability scoped to a few hard-to-extend areas.

~~~
FartyMcFarter
> Threads make any single thing in your program mutable without your direct
> control.

No, they don't. Threads don't mutate random variables by themselves; you need
actual code that does the mutation (whether it's running on a separate thread
or not).

I mean, how is that statement different from "calling other functions makes
any single thing in your program mutable"?

~~~
marcosdumay
> how is that statement different from "calling other functions makes any
> single thing in your program mutable"?

In those languages where functions can mutate things, that's basically true.
But it's much more common that functions can only mutate global variables, and
people keep those few in number, exactly for that reason. Actually, replace
"functions" with "methods" and you will run into one of the largest flaws of
OOP.

But anyway, mutability is much less of a problem outside of concurrent code.

------
awinter-py
> Threads should be used only when true CPU concurrency is needed

> Scalable performance on multiple CPUs

the exceptions to 'when to use threads' in 1995 sound like SOP these days

------
jeffdavis
Threads have some very practical advantages:

* Standard, easy way to get a backtrace

* Standard, easy way to get a list of active things going on

* In many cases, threads make it easy to follow control flow

~~~
chriswarbo
Yet threads make backtraces and control flow much less useful, since they miss
out important context from concurrent threads.

I wouldn't personally say that threads make control flow easier to follow: we
might gain a little by disentangling separate activities into threads, but we
lose _a lot_ when these get interleaved in arbitrary, non-deterministic ways.

~~~
derefr
> Yet threads make backtraces and control flow much less useful, since they
> miss out important context from concurrent threads.

I mean, when a program panics, you get a stacktrace from (a consistent
snapshot of) _all_ the threads, so what's the problem?

~~~
erik_seaberg
If a highly concurrent service gets a query'o'death, it's hard to tell which
threads were working on it and which were merely within the blast radius.
Frameworks tend to roll their own notion of "request context" without it being
strongly typed or pervasive across the language and libraries.

~~~
derefr
Now I’m curious what backtraces from Postgres look like when parallel scans
are enabled. IIRC, you’ve got one isolated fork(2)ed master for the
connection, which then has threads to divide work. Not _too_ bad to debug.

------
anaphor
It's not that threads are necessarily a bad idea (though they can be for
performance reasons), but that programming with most synchronization
primitives is a bad idea. If you program with message passing, then it's not
much different from the "events" model except that in the event-driven model
you're trying to hide the underlying abstraction more (you still have
concurrency, it's just baked into the I/O library).

I honestly think this presentation is confused. "Concurrency is fundamentally
hard; avoid whenever possible" seems to go against their own argument. Event-
driven models (which rely on message passing) are still doing concurrency,
except instead of using locks and semaphores to synchronize things, you're
using mailboxes and channels.

Even CPU interrupts are a form of concurrency that is similar to event-driven
models. Just because you're not spawning a thread and acquiring a lock,
doesn't mean you're not doing concurrency.

~~~
pkolaczk
Event-driven concurrency is not really easier than threads with primitive
blocking synchronisation: races are still possible, instead of deadlocks you
can have livelocks, resource control and backpressure are non-trivial, etc.

~~~
anaphor
That's true, but at least you can handle backpressure at the runtime level and
just choose a predetermined strategy for dealing with it (e.g. start dropping
messages, exponential backoff, etc.).
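
A minimal sketch of such a predetermined strategy - a bounded mailbox that
drops on overflow - using Rust's std sync_channel, purely for illustration:

    use std::sync::mpsc::{sync_channel, TrySendError};

    fn main() {
        // Bounded mailbox: capacity 2, so the sender gets backpressure.
        let (tx, rx) = sync_channel(2);
        for msg in 0..5 {
            match tx.try_send(msg) {
                Ok(()) => {}
                // The predetermined strategy: drop instead of blocking.
                Err(TrySendError::Full(m)) => eprintln!("dropped {m}"),
                Err(TrySendError::Disconnected(_)) => break,
            }
        }
        while let Ok(m) = rx.try_recv() {
            println!("delivered {m}");
        }
    }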

------
TheFiend7
IMHO, one large part of it is synchronization. If you're having to synchronize
things all the time, you're probably misusing threads and should be using a
different execution model.

~~~
desc
Another way to look at it is that only infrastructure should be locking stuff,
and infrastructure should be a very tiny part of the codebase. That
infrastructure should probably be responsible for layering a different
concurrency paradigm on top of threads...

Most platforms these days provide such things as part of the language, or in
the standard library, or as a freely-available package. Writing one's own
concurrency infrastructure is usually unnecessary, but when it is needed, it
needs to be kept as small and as easily-auditable as possible.

A bit like `unsafe` in Rust, in fact.

Locking all over the place generally indicates that someone's trying to
shotgun-debug concurrency bugs. I've had to use libraries which did that, and
wished horrible things upon those responsible.

------
bcrosby95
> Threads should be used only when true CPU concurrency is needed.

Which is basically any program running on any modern processor.

~~~
AnimalMuppet
Not at all. There are multiple cores, sure, but why does my program have to
use them? If it performs adequately using only one core, and if the nature of
the problem doesn't require threads, why should I make it multithreaded just
because the processor has multiple cores?

------
rongenre
Raw threads are really hard to program correctly; however, sinking parallel
code into an executor or queuing framework tends to really reduce complexity
and, in a lot of cases, gets all the cores working.
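
As a sketch of what sinking the parallelism into one small place can look like
(a hypothetical helper using Rust's scoped threads, available since 1.63):

    use std::thread;

    // Hypothetical helper: callers hand over a slice and a function; all
    // of the thread handling lives in this one small, auditable place.
    fn parallel_for_each<T: Sync>(items: &[T], workers: usize, f: impl Fn(&T) + Sync) {
        let workers = workers.max(1);
        let chunk_size = ((items.len() + workers - 1) / workers).max(1);
        let f = &f; // share the function by reference across threads
        thread::scope(|s| {
            for chunk in items.chunks(chunk_size) {
                s.spawn(move || chunk.iter().for_each(f));
            }
        });
    }

    fn main() {
        let data: Vec<u32> = (0..16).collect();
        parallel_for_each(&data, 4, |x| println!("processed {x}"));
    }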

------
bjourne
In my life I must have written at least a few million lines of code. Probably
more. How many of those lines have been explicitly multithreaded? A few
thousand lines, at most.

------
bluejekyll
I think this was truly the most amazing thing about learning Rust. After
having experienced the pain (the issues brought up in the linked slide deck)
of threads in C, then C++ and Java, it was wild to work with a language that
provided some significant safety rails for working with data across threads.

Now async/await gives us even better options on top of that, but it’s truly
what made me enjoy the language so much. This article is what resonated with
me and got me to invest so much spare time over the last 5 years in working
with Rust: [https://blog.rust-lang.org/2015/04/10/Fearless-Concurrency.html](https://blog.rust-lang.org/2015/04/10/Fearless-Concurrency.html)

~~~
gmfawcett
Rust does a great job here, but so do languages that make multithreading safer
at a higher level -- Erlang/OTP and Haskell are two great examples. Immutable
data may not solve every concurrency problem, but it sure goes a long way.

I also have a lot of respect for Ada's task-based concurrency approach
(independent actors, communicating by rendezvous). You don't get the
flexibility to roll your own concurrency strategy in Ada, but the language's
support for its chosen mechanism is truly excellent. Even if you'll never use
Ada, this part of the language is worth studying just as an example of great
engineering design.

~~~
DougBTX
There's a nice Kevlin Henney talk where he lays out this diagram:

    
    
                                     |
           non-shared mutable state  |  shared mutable state
                                     |
         ----------------------------+------------------------
                                     |
          non-shared immutable state | shared immutable state 
                                     |
    

Each quadrant is safe, except the one in the top right: shared mutable state.

Functional languages are great at the bottom two, since they strongly
encourage immutable state. Message-passing based concurrency strongly
encourages non-shared state, the safe two on the left.

Rust is the only language I've seen which encourages all three safe quadrants,
while making the fourth a compile-time error.
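
A minimal sketch of the quadrants in Rust (std only):

    use std::{sync::Arc, thread};

    fn main() {
        // Shared immutable state: fine. Arc hands out read-only views.
        let shared = Arc::new(vec![1, 2, 3]);
        let view = Arc::clone(&shared);
        thread::spawn(move || println!("len = {}", view.len()))
            .join()
            .unwrap();

        // Non-shared mutable state: fine. Ownership moves to one thread.
        let mut mine = vec![1, 2, 3];
        thread::spawn(move || mine.push(4)).join().unwrap();

        // Shared mutable state: a compile-time error.
        // let mut oops = vec![1, 2, 3];
        // thread::spawn(|| oops.push(4)); // error[E0373]: closure may
        // oops.push(5);                   // outlive the current function
    }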

~~~
kccqzy
> Each quadrant is safe, except the one in the top right: shared mutable
> state.

No. Your database is one giant shared mutable state.

How is using a database safe then? Transactions.

Haskell has had software transactional memory for fifteen years now. Microsoft
tried to copy it in .NET but it's nearly impossible to do right in a language
without clear separation of pure and impure code.

~~~
astine
" _> Each quadrant is safe, except the one in the top right: shared mutable
state.

No. Your database is one giant shared mutable state.

How is using a database safe then? Transactions._"

Transactions are less an occasion where shared mutable state is made 'safe'
during concurrency and more a situation where concurrent processes are forced
to temporarily interact and operate in a sequential, non-concurrent manner.
They use locks under the covers. Databases manage to be parallel because
different processes operate on different sets of data at the same time; they
lock when two or more processes attempt to access the same row in a table.
Transactional state access is still vulnerable to deadlocks and other
difficulties of concurrent programming. This means that transactional state is
not 'safe' in the same way that immutable and non-shared state are safe. It's
just a much easier way of managing the kinds of difficulties you have with
shared mutable state than, say, raw locks.

~~~
kccqzy
No. Please read about MVCC:
[https://en.m.wikipedia.org/wiki/Multiversion_concurrency_con...](https://en.m.wikipedia.org/wiki/Multiversion_concurrency_control)

The tl;dr is that

> Locks are known to create contention especially between long read
> transactions and update transactions. MVCC aims at solving the problem by
> keeping multiple copies of each data item. In this way, each user connected
> to the database sees a snapshot of the database at a particular instant in
> time. Any changes made by a writer will not be seen by other users of the
> database until the changes have been completed (or, in database terms: until
> the transaction has been committed.)

In other words, the system uses immutable state under the hood plus some
atomics/locking to present the abstraction of safe shared mutable state.
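
A toy sketch of that idea (illustrative only, nothing like a real engine):
writers append new versions, readers pin a snapshot version:

    use std::collections::BTreeMap;

    // Toy MVCC: each key keeps every (version, value) ever written.
    struct Store {
        current: u64,
        data: BTreeMap<String, Vec<(u64, String)>>,
    }

    impl Store {
        // A writer never mutates in place; it appends a new version.
        fn write(&mut self, key: &str, value: String) -> u64 {
            self.current += 1;
            self.data.entry(key.to_string()).or_default().push((self.current, value));
            self.current
        }
        // A reader sees the latest version at or before its snapshot,
        // so later writers never disturb it.
        fn read(&self, key: &str, snapshot: u64) -> Option<&str> {
            self.data.get(key)?
                .iter()
                .rev()
                .find(|(v, _)| *v <= snapshot)
                .map(|(_, val)| val.as_str())
        }
    }

    fn main() {
        let mut db = Store { current: 0, data: BTreeMap::new() };
        let snapshot = db.write("k", "old".into());
        db.write("k", "new".into());
        // The earlier snapshot still reads the old value.
        assert_eq!(db.read("k", snapshot), Some("old"));
    }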

------
drcode
Part of me thinks the success of JavaScript is due precisely to the fact that
it prevents/discourages multithreaded programming.

------
kunglao
I've found C# to be one of the easier languages to do multithreading with. I
think this is owing to the capabilities of Visual Studio, like the parallel
stacks viewer.

Also, anyone working with an OOP language should really read Java Concurrency
in Practice. That really helps in terms of learning how to think about
multiple threads in an OOP world.

Not sure how events by themselves can solve the threading issues as events can
be multi-threaded too. I've seen people write far worse event driven code than
multi-threaded code. If people want to use events heavily, I think it's better
to use a well known design pattern so others can understand what you are
trying to do.

------
bob1029
I haven't directly interacted with threads in a very long time, but I do use
Microsoft's Task Parallel Library on a daily basis. I feel like once you
understand TPL/async/await and the nuances with execution context and how to
handle CPU vs IO bound operations, things come together really nicely. I do
not really worry about things like cache coherency or the low-level
synchronization primitives involved anymore. I can seamlessly throw in some
locking with my TPL usage (typically ReaderWriterLockSlim) without much
concern for strange behavior across the various tasks. It really does "just
work" once you buy-in 100% (i.e. exclusively use TPL abstractions throughout).

I would say that 99% of the time I am dealing with IO (simply awaiting some
asynchronous database/network operation), with the other 1% of cases being
things that I actually want to explicitly spread across multiple parallel
execution units - e.g. Task.Run() or Parallel.ForEach(). In either case, I am
working with the sugar-coated TPL experience, and all the horrific threading
code is handled automagically. If I still had to work with threads directly, I
would probably have found a different career path by this point.

~~~
chrisseaton
What problems of threads is this actually isolating you from? It seems like
the usual problems of correctly protected shared mutable state and avoiding
deadlock are still there if you're using low-level primitives like
ReaderWriterLockSlim.

~~~
bob1029
I simply cite RWLS as an example of how you can combine other primitives into
the threading model afforded by TPL without much frustration. I make no claims
that it somehow eliminates fundamental concerns like shared state. In
practice, I use RWLS extremely rarely because I prefer to avoid shared mutable
state in the first place. I will spend an entire weekend reworking
architecture in order to get a lock out of a hot path if I need to. 99% of my
task-related code is boring stuff like:

    
    
      var session = await _connection.QueryFirstAsync<Session>(GetSessionSql, some session token);
    

The biggest concern for me is simply the thread lifecycle and # of threads
involved. TPL handles a thread pool for you and automatically schedules Tasks
to run on these threads as appropriate. This is a non-trivial affair which
would be painful to re-implement consistently and reliably in each
application. I, for one, would be far too tempted to waste entire days
screwing with thread pool parameters relative to environmental factors. With
TPL hiding these things from me, I can focus on a level of abstraction that
actually gets business features shipped. I still have not run into a scenario
where I would have rather gotten my hands dirty and implemented the raw
threads myself. Microsoft did a pretty damn good job. Everything "just works"
and it scales very well.

------
pooya13
In C++ it has become way easier to manage some of the issues mentioned since
1995, namely by using std::atomic, std::future, or even locks.

------
bullen
Events are not only an alternative to threads; they are also a complement. If
your language environment has a complex memory model with good concurrency
support and stable non-blocking IO, you can use threads! But for them to be
good you need to make sure that the input and output are compatible. As a
general guide:

1) Make sure your hardware interface is capable of being accessed by the
kernel from multiple threads. Network cards generally allow this, while
graphics cards still don't, at least without a lot of overhead.

2) Make sure your application profits from parallelism, and specifically joint
parallelism, which is my term for computing that allows many threads to work
on the same memory.

Bottom line: in my case, only the Java server for my MMO will use threads in a
"joint parallel" way. In everything else I will avoid them like a jerrycan
full of gas tries to avoid fire.

So I'm transitioning to C with arrays!

------
_wldu
Years ago, an OpenBSD developer told me, "threads are for idiots" in response
to a bug I had submitted. At the time, I was a bit offended (it was a good,
reproducible bug report), but today, I think he was right. They're just too
complicated and 90% of the time they are not needed.

~~~
scottlamb
> Years ago, an OpenBSD developer told me, "threads are for idiots" in
> response to a bug I had submitted. At the time, I was a bit offended (it was
> a good, reproducible bug report), but today, I think he was right.

Whether he was right or not about threads, it's offensive to insult the person
just because you don't like their idea. Likewise for this to be the response
to a bug report - if you don't want to support threads, then don't support
them rather than lashing out at people who notice your support is buggy.

I think in 2020 he's mostly wrong anyway. Sure, there have been many problems
with threads, but...

* This presentation's "Threads should be used only when true CPU concurrency is needed" maybe meant "rarely" in 1995, but it means "commonly" in 2020, when single-core performance has been mostly stalled for a while and core counts have risen dramatically.

* There are safer/easier alternate concurrency primitives than mutexes (channels) and at least partial solutions to major problems with threading. For example, in safe Rust there are no data races (even when synchronizing via mutexes). Other problems (deadlocks, contention, other types of race conditions) still exist of course.

* "Threads" vs "events (event loops + callbacks)" as described in this 1995 presentation isn't the whole world, especially today. What about communicating sequential processes with no shared mutable state (such as Erlang's actors)? So to some extent I disagree with the framing of the problem altogether.

* Callbacks have their own problems beyond what's described in this presentation. Some were widespread even in 1995: a string of operations written as a string of callbacks is a lot harder to understand than one written with the "sequential composition operator" (;) and loops. (Callbacks are basically abandoning structured programming in favor of goto at the macro level.) They're likewise harder to debug: you can't just get a stack trace and understand the current state. And some problems have become more common since then: these days, event loops are usually multithreaded, so for any cross-request state you have the threading problems as well as the event loop problems. Today I'd say callbacks are the advanced option: use them if you need the performance, but be wary of the dangers.

------
aloknnikhil
libuv
([http://docs.libuv.org/en/v1.x/design.html](http://docs.libuv.org/en/v1.x/design.html))
is pretty much this in action. Event loops are not necessarily easy to debug
though and message passing between event loops will force the use of
concurrency primitives such as mutexes and/or memory barriers (lock-free)
anyway. Also, working with event-driven architecture requires a more
functional approach since the handlers are all short-lived. Reminds me of
Erlang. I think the deck downplays the complexity of building such a system.

~~~
davidw
The author of the deck is Dr. John Ousterhout, who wrote Tcl, which had an
event loop way back in the day.

------
cjfd
Sure, only use threads when you actually need them. The cases where you
actually need them are quite numerous, though. E.g., you have a library that
connects to something over the network but the library is blocking. Or you
need to respond to events quickly but for some events you need expensive
calculations. If you can get away with copying data to the threads so shared
state is minimized this will help. The general principle is to make things as
easy as possible instead of as difficult as possible.

------
jimjag
It is amazing, and sad, how many people still refer to this document today
when complaining about how "threads are bad" and "events are the bee's knees"
or when justifying some architectural decision to avoid threads. No one
prototypes anymore. No one does their own real world testing and benchmarking.
They simply Google for some docs which support their already-made decision and
call it a day.

------
marcosdumay
Modern languages with high-level and verified async features and explicit
mutability make threads much more convenient.

That paper is finally becoming dated.

------
bodeadly
Async is superior. I have done processes with locks in shared memory. I have
done threads. But I predict Async will slowly start to take over. Processes
are not suitable for working on shared data. Threads frequently yield race
conditions and deadlocks even for experienced coders. But Async doesn't have
any of these issues. So why isn't it more popular? For two reasons:

1) it completely breaks the sequential programming model that we all learned
as toddlers. Instead of "call A and then, after that's done, call B", async is
"call A, which just installs B as a callback and returns immediately; later an
"event loop" calls B" (see the toy sketch at the end of this comment). Note
that promises and tasks and futures are just "syntactic sugar". Personally I'm
not a fan. I don't use any of that. I just use callbacks.

2) Even though async is great for concurrency, it's not great for parallelism.
Everything runs on one thread. So if you want parallel processing you need
workers.

But I would argue that issue 1 can be overcome. In fact, I find Async to be
quite elegant. I think in the long term people are going to realize that maybe
we've had it backwards all along.

Issue 2 is actually not that big of a deal for most things. It's actually
somewhat unusual to need a CPU-intensive operation running in the background -
maybe image processing, data modelling, etc. But most blocking operations are
just I/O operations, which don't use the CPU that much. If I needed to write
some kind of network server, I would look at using libuv as a portable
runtime.
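
To make the inversion in issue 1 concrete, here's a toy event loop (sketched
in Rust; real runtimes like libuv are vastly more involved):

    use std::collections::VecDeque;

    type Callback = Box<dyn FnOnce(&mut EventLoop)>;

    struct EventLoop {
        queue: VecDeque<Callback>,
    }

    impl EventLoop {
        // "Call A" just installs a callback and returns immediately...
        fn defer(&mut self, cb: Callback) {
            self.queue.push_back(cb);
        }
        // ...and the event loop is what eventually calls it.
        fn run(&mut self) {
            while let Some(cb) = self.queue.pop_front() {
                cb(self);
            }
        }
    }

    fn main() {
        let mut el = EventLoop { queue: VecDeque::new() };
        el.defer(Box::new(|el: &mut EventLoop| {
            println!("A ran; installing B as a callback");
            el.defer(Box::new(|_: &mut EventLoop| println!("B runs later, from the loop")));
        }));
        el.run();
    }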

------
cryptonector
I mean, threads are OK, but you essentially need to artificially isolate them
(because they're not processes) as much as possible by minimizing shared
state, and you want to do async/evented I/O so that you need only as many
threads as CPUs.

------
pedasmith
Hey, I get to tell my own two 1990s-era threads stories.

First: in Windows 3.1, you got exactly one thread. My former company (BBN
Software Products, home of the RS/1 statistical program) managed to get a
version of RS/1 on Windows by splitting it into two pieces, each of which ran
a single thread. One piece (RS/Client) was the UI; it talked to the "server"
using TCP/IP (or a shared memory channel if the client and server were on the
same machine).

Second: I also got to help port a networking program over to an SGI box. At
the time, the SGI GCC-based compiler could either support threads or support
exceptions, but not both. (And by "not support" I mean "generated code that
would crash even if no exception was ever actually thrown".) I couldn't
convince the company to keep the threads and dump the exceptions, so instead I
had to convert the program to spawn new processes with shared memory (!) to
emulate the threads.

TL;DR: actually programming with threads at the time was decidedly
unsupported.

~~~
projektfu
True, but Windows NT 3.1 came with threads and used them throughout the
kernel. They were supported by nonstandard functions in the Microsoft C
runtime as well as through the windows API. Windows included its own
structured exception handling facility that also worked with it.

------
jlevers
This is mildly off-topic, but how does one end up working on problems that are
complex enough for things like this to even be an issue? It sounds incredibly
interesting to me, but most of the software I've worked on has been at least
somewhat web-based.

I know there's so much more out there, and I'm just not sure how to find
relevant problems to solve...it feels like a serious case of "I don't know
what I don't know."

I guess I should probably just pick some non-web concept I find interesting
and start making something.

~~~
yesenadam
Please do that as an Ask HN, I'd be interested in the responses you hopefully
get.

Edit: Ah, I see that you did, an hour ago. Very good!

------
mmphosis
_Concurrency isn’t a “nice layer over pthreads” - the most important thing is
isolation - anything that mucks up isolation is a mistake.

— Joe Armstrong_

------
InterestBazinga
I'm all for getting rid of threads, but what are you going to replace them
with? Traditional functional languages may be the most obvious solution, but
they're also among the most impractical of solutions. Is there anything else
out there that can replace threading needs, without throwing out the book on
programming? It seems like what we need hasn't been invented yet.

