

Memory Models: A Case for Rethinking Parallel Languages and Hardware - adg001
http://lambda-the-ultimate.org/node/4211

======
jswinghammer
I read a paper on transactional memory years ago, presented at OOPSLA in 2007
(I think). It argued that there is an analogy between transactional memory and
garbage collection. It made the topic more understandable for me and might be
helpful:

[http://www.cs.washington.edu/homes/djg/papers/analogy_oopsla...](http://www.cs.washington.edu/homes/djg/papers/analogy_oopsla07.pdf)

I think this is an interesting explanation. The software product my company
makes depends on a multi-threaded program which to date has had only one bug
related to its being multi-threaded. That bug was very serious, mind you, but
it's the only one we've found (so far).

~~~
liuliu
Transactional memory is easier to work with. But as long as you need a strong
consistency guarantee, I doubt any significant performance boost will come
with it.

------
scott_s
Every paper written by Hans Boehm is worth reading:
<http://www.hpl.hp.com/personal/Hans_Boehm/pubs.html>

(If the paper you want to read is behind a pay-wall, google its title and you
will likely find a freely available copy.)

------
stcredzero
_if the programmer writes disciplined (data-race-free) programs_

The idea seems to have merit, but such a phrase (starting with "if the
programmer") also means that it will be diluted once it hits the programming
mainstream. (Just a fact of life.)

~~~
metageek
Yes, that's why they follow with:

> _We discuss why this view is the best we can do with current popular
> languages, and why it is inadequate moving forward. [...] In particular, we
> argue that parallel languages should not only promote high-level disciplined
> models, but they should also enforce the discipline._

------
jfm3
Fine-grained concurrency will motivate the next big paradigm shift. I haven't
seen anything better than actors or STM yet, though.

