
Joe Armstrong: Solving the wrong problem - geoffhill
http://joearms.github.com/2013/03/28/solving-the-wrong-problem.html
======
mncolinlee
I worked in Cray's compiler department for seven years. If we couldn't
dramatically parallelize someone's code, we couldn't sell a vector
supercomputer. Period.

Automatic parallelization is very possible. The problem is that it tends to be
less efficient. A decent developer can often do a better job than the compiler by
performing manual code restructuring. The compiler cannot always determine
which changes are safe without pragmas to guide it. With that said, our top
compiler devs did some amazing work adding automatic parallelization to some
awful code.

We inevitably sold our supercomputers because we had application experts who
would manually restructure the most mission-critical code to fit cache lines
and fill the vectors. Most other problems would perform quite adequately with
the automatically-generated code.

What this article lacks is a description of why Erlang is uniquely suited
to writing parallel code compared with all the other natively parallel languages
like Unified Parallel C, Fortran 2008, Chapel, Go, etc. There are so many
choices, and many have been around for a long, long time.

~~~
seanmcdirmid
Erlang is not designed for parallel programming; it is designed for concurrent
programming. These are two very different programming domains with different
problems.

Every time someone conflates parallelism with concurrency...everyone gets very
confused.
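
To make the distinction concrete, here is a rough sketch in Python (an illustration only; the names `handle_request` and `crunch` are invented for this example): concurrency is about structuring many independent activities that overlap in time, while parallelism is about splitting one computation across cores to make it faster.

```python
import concurrent.futures
import time

def handle_request(i):
    """Concurrency: many independent tasks that mostly wait (I/O-bound)."""
    time.sleep(0.05)          # stands in for network/disk latency
    return f"response-{i}"

def crunch(chunk):
    """Parallelism: one big computation split into chunks (CPU-bound)."""
    return sum(x * x for x in chunk)

if __name__ == "__main__":
    # Concurrency: threads overlap the waiting; the point is coordinating
    # independent activities correctly, not raw speed.
    with concurrent.futures.ThreadPoolExecutor(max_workers=10) as pool:
        responses = list(pool.map(handle_request, range(10)))

    # Data parallelism: processes use multiple cores to make ONE
    # sequential job faster; partial results are combined at the end.
    data = list(range(1_000))
    chunks = [data[i::4] for i in range(4)]
    with concurrent.futures.ProcessPoolExecutor(max_workers=4) as pool:
        total = sum(pool.map(crunch, chunks))

    print(responses[0], total)   # response-0 332833500
```

Erlang's design centers on the first kind of problem; GPGPU/MapReduce systems center on the second.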

~~~
unoti
Isn't it fair to say that it's designed for both? The way it uses
immutable state and something-similar-to-s-expressions to express data makes it
very straightforward (or even transparent) to distribute work between multiple
processes and separate computers, in addition to how it makes it practical and
simple to break work into small chunks that can be interleaved easily within
the same thread. It's really designed for doing both very well, wouldn't you
say?

~~~
seanmcdirmid
Not at all. Erlang isn't useful for modern parallel computing as we know it,
which is usually done as some kind of SIMD program; say MapReduce or GPGPU
using something like CUDA. The benefit doesn't just come from operating on
data all at once, but these systems (or the programmer) also do a lot of work
to optimize the I/O and cache characteristics of the computation.

Actor architectures are only useful for task parallelism which no one really
knows how to get much out of; definitely not the close-to-linear performance
benefits we can get from data parallelism. Task parallelism is much better for
when you have to do multiple things at once (more efficient concurrency), not
for when you want to make a sequential task faster.

Maybe this will help

<http://jlouisramblings.blogspot.com/2011/07/erlangs-parallelism-is-not-parallelism.html>

and

<https://news.ycombinator.com/item?id=2726661>

~~~
peerst
"Modern parallel computing"... well, not everything that can run in parallel on
multi-core CPUs can run very well on a GPU.

I'm using Erlang and GPU programming each for its area of expertise. FWIW I
even use both together via <https://github.com/tonyrog/cl>

Erlang is great at asynchronous concurrency which happens to be able to run in
parallel well because of how the VM is built.

GPUs solve totally different problems.

~~~
seanmcdirmid
Yes, erlang is great for concurrency, GPUs are great for significant scalable
parallelism. They both solve different problems, I agree, and that's my point.

------
konstruktor
> At this point in time, sequential programs started getting slower, year on
> year, and parallel programs started getting faster.

The first part of this statement is plain wrong. Single-thread performance has
improved a lot due to better CPU architecture. Look at
<http://www.cpubenchmark.net/singleThread.html> and compare CPUs with the same
clock rate, e.g. at 2.5 GHz: an April 2012 Intel Core i7-3770T scores 1971
points while a July 2008 Intel Core2 Duo T9400 scores 1005 points. That is
almost double the score in less than four years. Of course, one factor is the
larger cache that the quad core has, but that refutes Armstrong's point that
the multicore age is bad for single-thread performance even more.

For exposure to a more balanced point of view, I would highly recommend Martin
Thompson's blog mechanical-sympathy.blogspot.com. It is a good starting
point on how far single-threaded programs can be pushed and where
multi-threading can even be detrimental.

Also, I think that fault tolerance is where Erlang really shines. More than a
decade after OTP, projects like Akka and Hystrix are finally venturing in the
right direction.

~~~
jd007
Your point is absolutely correct, but your example could be better IMO. It's
not fair to compare a desktop chip with a 45 W TDP with a laptop chip rated at
35 W. Not to mention that the newer i7 actually goes up to 3.7 GHz turbo (vs.
2.5 GHz constant for the C2D) for single-threaded loads, so the clock rate is
not really comparable in that benchmark (even though base clocks are the
same).

A better example would be the C2D E8600 @ 3.33 GHz and the i5 3470S @ 2.90 GHz
(3.6 GHz turbo). They are both 65 W desktop parts, and the single-threaded clock
speed is similar. You can see that the C2D gets 1,376 in the single-threaded
benchmark, while the i5 gets 1,874. The difference is not as drastic (the C2D
launched at a significantly higher price point as an enthusiast-level chip,
while the i5 is a budget chip) but definitely still significant. There are
probably even better comparisons, but I didn't spend too much time picking out
comparable CPUs from different generations.

~~~
konstruktor
True, good point. I tried to find something with a similar nominal clock speed
and forgot about Turbo. But then, Turbo is a good example of why single-threaded
performance is getting better even in the age of multicores.

------
daleharvey
I can't help but read a lot of irony in this.

Erlang solved a problem really well over 20 years ago; it's the sanest language
by far that I have used when dealing with concurrent programming (I haven't
tried Go or Dart yet), and I owe a lot of what I know to the very smart people
building Erlang.

However, it has barely evolved in the last 10 years. Will 2013 be the year of
the structs? (I doubt it.) Every new release comes with some nice-sounding
benchmark about how much faster your programs will run in parallel, and there
is never a mention of what's actually important to programmers: a vibrant
ecosystem and community, language improvements that don't make it feel like
you are programming in the '80s, better constructs for reusing and packaging
code in a sane way.

It's fairly trivial in most languages to get the concurrency you need; I think
Erlang is solving the wrong problem in 2013.

~~~
tieTYT
I was following you until your last sentence. I've never done concurrency in a
FP language before, but I do know that writing it in Java makes it hard to get
right.

~~~
MichaelGG
What prevents you from implementing an actor/message passing system in Java?

Erlang's core concept of concurrency seems like something that'd be better
suited as a library and app server than a whole language and runtime.

I've yet to hear of any Erlang-specific magic that cannot be implemented
inside another language.
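
The library approach is indeed straightforward to sketch. Here is a toy actor with a private mailbox in Python rather than Java (the `Actor` class and its handler are invented for illustration); note that nothing in the host language stops other code from sharing mutable state with the actor, which is exactly the isolation question raised downthread:

```python
import queue
import threading

class Actor:
    """Toy actor: a private mailbox drained by a single worker thread.
    This is the 'library, not language' approach -- the runtime offers
    no guarantee that the actor's state stays private."""

    def __init__(self, handler):
        self.mailbox = queue.Queue()
        self.handler = handler
        self.thread = threading.Thread(target=self._loop, daemon=True)
        self.thread.start()

    def send(self, msg):
        # Asynchronous send: never blocks waiting on the receiver.
        self.mailbox.put(msg)

    def _loop(self):
        while True:
            msg = self.mailbox.get()   # 'receive': one message at a time
            if msg is None:            # poison pill stops the actor
                return
            self.handler(msg)

    def stop(self):
        self.mailbox.put(None)
        self.thread.join()

# Usage: a counter actor that reports its running total on a reply queue.
results = queue.Queue()
counter = {"n": 0}

def handle(msg):
    counter["n"] += msg
    results.put(counter["n"])

a = Actor(handle)
a.send(1)
a.send(2)
a.stop()
print(results.get(), results.get())   # 1 3
```

The same shape is easy to build in Java with a `BlockingQueue` per actor; what a library cannot easily give you is the enforced per-actor heap discussed in the replies below.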

~~~
SoftwareMaven
How would you get per-actor heaps that cannot be violated by other actors?
That is critical to Erlang's ability to recover from processes dying. I spent
a lot of time doing Java and can't think how you could do it (you could in the
JVM if you had language constructs for it, but then we are back to a new
language).

There's a reason Stackless Python's actors aren't just a library on top of
Python.

~~~
reeses
TLABs are a partial step in that direction at the JVM level. You could also
use object pools/factories keyed to the thread.

Those are the first two "ghetto" hack solutions I can think of that wouldn't
require significant code changes on a going-forward basis.

~~~
SoftwareMaven
But those hacks wouldn't provide the same guarantees that language-level
changes provide. Sure, you can try not to impact other threads' heaps, but
nothing is stopping me, which means a simple programming error has the
potential to impact multiple threads. As a result, you can't just "reboot"
that thread (a critical piece of what makes Erlang interesting), because you
have no guarantees its errors didn't impact other threads. You also have no
guarantees that the underlying libraries aren't mucking up all of your
carefully crafted memory management.

It's like the kernel protecting memory so applications can't overwrite each
other. Sure, applications could just write to their own memory, but nobody
actually trusts that model[1]. Instead, they want something below that level
enforcing good behavior.

1\. Obviously, virtual memory adds a wrinkle to this that kind of forces
kernel protection, but even if we had literally unlimited RAM, we would
_still_ implement kernel protections on memory.
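
The "reboot that thread" point can be made concrete with OS processes, which do give the enforced isolation a shared-heap runtime lacks. A minimal supervisor-style sketch in Python (the `worker`/`spawn` names are invented; this is an analogy to Erlang's model, not its implementation):

```python
import multiprocessing as mp

def worker(inbox, outbox):
    """Each worker is an OS process: its heap physically cannot be
    touched by siblings, so a crash here cannot corrupt their state."""
    while True:
        msg = inbox.get()
        if msg == "crash":
            raise RuntimeError("boom")   # simulated programming error
        outbox.put(msg * 2)

def spawn():
    inbox, outbox = mp.Queue(), mp.Queue()
    p = mp.Process(target=worker, args=(inbox, outbox), daemon=True)
    p.start()
    return p, inbox, outbox

if __name__ == "__main__":
    p, inbox, outbox = spawn()
    inbox.put(21)
    print(outbox.get())          # 42
    inbox.put("crash")
    p.join()                     # worker dies; a supervisor sees the exitcode
    assert p.exitcode != 0
    # 'Reboot' it: because no other process shared its heap, a fresh
    # worker restarts from a known-good state, Erlang-supervisor style.
    p, inbox, outbox = spawn()
    inbox.put(5)
    print(outbox.get())          # 10
```

Erlang gets this isolation per lightweight process inside one VM; with OS processes you pay far more per actor, which is part of why the language-level guarantee matters.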

~~~
reeses
Unfortunately, with any native code interaction, whether explicit FFI/NIF or
implicit (RNG, clock, etc.), your threads can still mingle state.

In a single process, you're still limited by the memory virtualization
capabilities of your underlying processor and MMU.

------
dvt
Same old hype. Erlang is good I guess, and I've used it in production a couple
of times. But it's just a language that solves 3 problems but creates another
30. Just like C++11, Dart, Go, etc.

This kind of belligerent rhetoric (we're solving the right problems, everyone
else is dumb) is the kind of drivel that gives momentum to language zealots
that think language X is better than language Y.

I've contributed to Google Go in the early phases and I was naïve and really
believed that Go was the "next big thing." But it turned out to be yet another
general-purpose language with some things that were really interesting
(goroutines, garbage collection, etc.) but some things that were just same-old
same-old. Now, I'm editing a book about Dart and I've since lost my enthusiasm
for new languages; I can already see that Dart solves some problems but often
creates new ones.

And in a lot of ways Erlang sucks, too. The syntax is outdated and stupid
(Prolog lol), it has weird type coercion, memory management isn't handled that
well (and many more issues). Of course, since Facebook uses it, people think
it's a magic bullet (Erlang is to Facebook as Python is to Google).

The article also forces readers to attack a straw man. Oftentimes, algorithms
simply _cannot_ be parallelized. The Fibonacci sequence is a popular example
(unless you use something called a prefix sum -- but that's a special case).
So in many ways, the rhetorical question posed by the article -- "but will
your servers with 24 cores be 24 times faster?" -- is just silly.
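
The Fibonacci example is worth spelling out. The naive recurrence is inherently sequential, while the "special case" works because the step can be phrased as an associative 2x2 matrix product, which a parallel prefix scan (or repeated squaring) can evaluate in logarithmic depth. A sketch (function names invented for illustration):

```python
def fib_seq(n):
    """The naive recurrence: each step depends on the previous two,
    so there is nothing to hand to a second core."""
    a, b = 0, 1
    for _ in range(n):
        a, b = b, a + b
    return a

def mat_mul(x, y):
    """2x2 matrix product -- the associative 'step' operation."""
    return [[x[0][0]*y[0][0] + x[0][1]*y[1][0], x[0][0]*y[0][1] + x[0][1]*y[1][1]],
            [x[1][0]*y[0][0] + x[1][1]*y[1][0], x[1][0]*y[0][1] + x[1][1]*y[1][1]]]

def fib_scan(n):
    """Because [[1,1],[1,0]]^n = [[F(n+1),F(n)],[F(n),F(n-1)]] and matrix
    multiplication is associative, the n steps form a product that a
    prefix scan or repeated squaring evaluates in O(log n) depth."""
    result = [[1, 0], [0, 1]]            # identity
    step = [[1, 1], [1, 0]]
    while n:
        if n & 1:
            result = mat_mul(result, step)
        step = mat_mul(step, step)       # squaring = combining two steps
        n >>= 1
    return result[0][1]

print(fib_seq(10), fib_scan(10))   # 55 55
```

Most loops do not have an associative step like this, which is the sense in which Fibonacci is a special case rather than a counterexample to "some algorithms simply cannot be parallelized".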

~~~
rdtsc
> The syntax is outdated and stupid (Prolog lol),

How is it any more or less stupid than curly brackets?

Show me another production ready language that has the same level of pattern
matching as Erlang.

> Often times, algorithms simply cannot be parallelized.

Who cares? How many people here have implemented individual algorithms and
delivered them as units of execution? Sure, middleware companies may sell a
cool implementation of A* or some patented sort algorithm.

You can think of parallelization at the system level. Can you handle client
connections concurrently (and in parallel)? If yes, that covers a large chunk
of the usage domain for platforms these days.

> memory management isn't handled that well (and many more).

Can you expand on that?

~~~
Peaker
Haskell has nicer pattern matching capabilities.

~~~
peerst
No it hasn't :-) Just to stay on your level of detail

------
djvu9
We used Erlang several years ago. The code base has ~100k lines of code, so it
should be representative. We abandoned it later and switched to C++ because of
performance (mostly in mnesia) and quality issues (some drivers in OTP). We
didn't expect too much from performance, considering it is functional (which
seldom does in-place updates), but it was still below expectations.

It is understandable, though. Just think about how many resources have been put
into the development of the Erlang VM and the runtime/libraries (OTP), and
compare that with the JVM/JDK. There is just no magic in software development.
When talking about high concurrency and performance, the essential things are
data layout, cache locality, CPU scheduling, etc. for your business scenario,
not the language.

~~~
acqq
The tests here confirm your experience: for many algorithms Erlang is
significantly slower, even when more cores are used:

<http://benchmarksgame.alioth.debian.org/u64q/erlang.php>

~~~
digitalzombie
Do not use Erlang for numeric performance. I tried it on Project Euler and
created some prime number generators in Erlang. It is slow as hell.

Use it for what it is built for!

------
eridius
If zlib could be rewritten in Erlang to be lock-free, why not just rewrite it
in C to be lock-free instead of porting it? AFAIK Erlang isn't some magical
language that allows traditionally-locked data structures to become lock-free.

~~~
PixelPusher
Well, zlib is fairly trivial and probably not a good example due to overheads.
However, for an example such as a torrent server this would make much more
sense. That being said, Erlang is basically a scripting language for building
fault-tolerant and parallel applications.

Using C, you might be able to get parallel, but it'll be a lot of work to make
it distributed and fault tolerant.

The underlying data structures have little to nothing to do with what's being
said in the article.

~~~
eridius
I've looked at Erlang before, and I would certainly agree that it's far
simpler to write a concurrent application in Erlang than it would be in C.

I'm just taking issue with the bit at the end, where they're bragging about
removing a serial bottleneck by rewriting zlib in Erlang in order to remove a
lock. Rewriting it in Erlang really doesn't have anything at all to do with
switching to a lock-free data structure.

~~~
PixelPusher
Ah yeah, I had to read it a second time to realize what you meant. That's
true, that it's a bad example and doesn't make much sense.

I _think_ what they meant to say was that they parallelized the image
processing mechanism of the application as a whole.

------
eksith
There's one big problem Erlang couldn't solve that I live with to this day :

Unlike other general-purpose languages (like, say, C++ or C#), it doesn't
allow me to grasp what's happening after staring at it for 30 seconds. This
is the same problem I have with Lisp.

Maybe I'm just dyslexic, but these rhetoric pieces for one language or another
that say it's concurrent (which it is), fast (obviously), more C than C, will
bring the dead to life, create unicorns, and do other wonderful, fantastic
things that I'm sure are all true, just don't seem to be capable of passing
into my grey matter.

You know another thing all these amazing superpower languages haven't been
able to do that even a crappy, broken, in many ways outright _wrong_,
carcinogenic, etc. language like PHP has allowed me to do? Ship in 48 hours.

Before I get flamed, I already tried that with Nitrogen
(<http://nitrogenproject.com>). It didn't end well, but maybe it will work for
someone already familiar with Erlang.

It's like you've written the Mahabharata; it's a masterpiece and one of the
greatest epics of all time. Unfortunately, it's written in Sanskrit.

~~~
rcfox
> Unlike other general-purpose languages (like, say, C++ or C#), it doesn't
> allow me to grasp what's happening after staring at it for 30 seconds. This
> is the same problem I have with Lisp.

I had the same problem with Lisp (Scheme, to be more specific) and I thought
that it'd be impossible to reason about run-times and such. That is, until I
learned the language and the standard libraries. I've never looked at Erlang,
but I'd bet it's the same issue.

A C++ programmer can look at C# code and figure out what it's doing because
they have similar syntax and vocabulary. Just because Erlang isn't immediately
accessible to you, it doesn't mean it isn't any good for shipping in 48 hours.

Perhaps if you spend 24 hours sharpening your axe, you'll chop that tree down
in another 4 hours instead of using the full 48.

~~~
eksith
I used hyperbole, of course, but in fairness, I've spent on the order of
several months getting into Lisp. Maybe it's because I didn't work with it
exclusively (I've been told here on HN that you need to be immersed in it
completely and continuously) that I still didn't become _OK_ in any semblance
of the term.

But here's the thing: I switched to C# from VB.Net. Before that, it was
VBScript (ASP pages) and even before that it was PHP and JavaScript. At no
point did I _stop_ in the language I'm currently working in to start learning
another.

Until that changes, I don't see how it will help me.

~~~
pnathan
Basically, I would say that what you are suffering from is a kind of mental
block syndrome: you think in a procedural/imperative paradigm. All your listed
languages operate in that paradigm. It's a very transferable paradigm, as it
so happens. I can come up to (some approximation of) speed in an imperative
language in under 2 weeks. In order to ship Lisp(Prolog, Haskell...), you
_have_ to break out of that paradigm. I am not condemning you, mind. It is
what it is. Rewiring your head is hard, and often doesn't have direct results.

I can, however, ship with Common Lisp, because I've spent on the order of 5
years learning it and writing it most evenings. I am learning Clojure and am
preparing to ship an (excruciatingly minor) product with that after only maybe
two months of dabbling. This is possible because I've bent my head around into
Lisp shapes.

It's also been said that some people have the shape of Lisp in their head, and
when they learn Lisp, their heads fit it by nature, and other people don't
have that innate meshing. I certainly found Lisp to mesh with my head well.

Oh yes. It can be hard to get started with Common Lisp, just in terms of
getting an environment working. I have a tutorial site to help with that
(_plug plug plug_): <http://articulate-lisp.com/>

~~~
paganel
> you have to break out of that paradigm.

That's why I never regretted trying to learn Common Lisp some time ago, even
though I didn't ship anything into production, and why I really do enjoy doing
the same thing with Erlang right now, i.e. trying to understand it and getting
as comfortable with it as I can get (and preferably this time maybe putting
something out there).

Both these experiences helped me see programming differently, a change of
"paradigm" as you very well put it, so now even when I get back to Python or
PHP I feel like I'm a better programmer. Plus, there's something to be said
for the fact that always trying to learn new and interesting stuff, and not
only focusing on "shipping code", is what keeps one's passion at higher levels.
After almost 10 years in this trade I've found that passion for what you're
doing is a very valuable and at the same time very volatile resource.

------
acdha
zlib is fine as long as you don't give it a non-threadsafe memory allocator --
see <http://www.gzip.org/zlib/zlib_faq.html#faq21>. As far as I can tell, that
means either that the summary was imprecise and the slowdown was in the image
processing code rather than zlib, or that they chose to rewrite (and debug) a
big chunk of code rather than read the zlib documentation.

Ignoring that point, this seems like a poor point of comparison, as it's a
trivially parallelized task: zlib operates on streams and shouldn't have any
thread contention. There's very little information in the description, but
unless there are key details missing, this doesn't sound like a problem
where Erlang has much of interest to add. The most interesting aspect would be
the relative measures of implementation complexity and debugging.
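
The "zlib operates on streams" point is easy to demonstrate: each compression stream is private to its task, so independent chunks compress in parallel with no shared state to lock. A sketch using Python's `zlib` module (the `compress_chunks` helper is invented for illustration; CPython's zlib releases the GIL, so even threads genuinely overlap here):

```python
import zlib
from concurrent.futures import ThreadPoolExecutor

def compress_chunks(chunks, workers=4):
    """Compress independent chunks in parallel. Each zlib stream is
    private to its task, so there is nothing to lock."""
    with ThreadPoolExecutor(max_workers=workers) as pool:
        return list(pool.map(zlib.compress, chunks))

data = [b"hello world " * 1000 for _ in range(8)]
compressed = compress_chunks(data)

# Every chunk round-trips independently.
assert all(zlib.decompress(c) == d for c, d in zip(compressed, data))
print(len(data[0]), "->", len(compressed[0]))
```

Whatever the original application's bottleneck was, it was not zlib's data structures needing a lock.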

------
Uchikoma
Because the lock-free Erlang meme doesn't die:

1. Erlang has locks and semaphores [1]: receive is a lock, actors are
   semaphores. Erlang chose a one-lock/one-semaphore-per-process model.

2. Erlang scales better not because it is lock-free (see above), but because
   it makes async easy compared to other languages.

3. Async prevents deadlocks, not Erlang being lock-free (see above).

Some four-year-old reading:
<http://james-iry.blogspot.de/2009/04/erlang-style-actors-are-all-about_16.html>

[1] <http://en.wikipedia.org/wiki/Semaphore_(programming)>

------
splicer
> _We’re right and the rest of the world is wrong. We (that is Erlang folks)
> are solving the right problem, the rest of the world (non Erlang people) are
> solving the wrong problem._

> _The problem that the rest of the world is solving is how to parallelise
> legacy code._

As a member of the rest of the world, I can assure you that I'm not trying to
solve either of these problems. :p

------
InclinedPlane
People have been thinking this, that it's vastly better to design for
concurrency upfront, for literally decades. And every single time there has
been a big sea change in processor technology, it's always been the _next_
generation in which things like VLIW or Erlang and so forth would come to the
fore, while what I will call "iterative advancements" and "patched solutions"
would turn out to have too many weaknesses to be competitive. In reality the
reverse has happened, and new specialized languages and instruction sets have
been relegated to niches.

It'll be the same over the next 20 years as well.

I predict that we'll see a lot of technological leaps which will serve as much
to maintain the ability to run "old code" in new and interesting ways as to
enable a brave new world of purpose-built languages.

In the next few decades we'll see advances in micro-chip fabrication and
design as well as memory and storage technology (such as memristors) which
will result in even handheld battery powered devices having vastly more
processing power than high-end workstations do today.

Is that an environment in which one seeks to trade programmer effort and
training in order to squeeze out the maximum possible efficiency from hugely
abundant resources? Seems unlikely to me, to be honest.

Indeed, it seems like the trend of relying on even bloatier languages (like
Java) will continue. Do you think anyone is going to seriously consider
rewriting the code for a self-service point-of-sale terminal in Erlang in
order to improve performance? That's not the long pole, it never has been, and
it's becoming a shorter and shorter pole over time.

In the future we'll be drowning in processor cycles. The most important factor
will very much not be figuring out how to use them most efficiently, it'll be
figuring out how to maximize the value of programmer time and figuring out how
to use any amount of cycles to provide value to customers effectively.

(I think that advancements in core, fundamental language design and efficiency
will happen and take hold in the industry, but mostly via back-door means and
blue sky research, rather than being forced into it through some impending
limitation due to architecture.)

~~~
lifeisstillgood
Can I rephrase slightly (and take the odd strawman liberty)?

The "mainstream" has been relying on incremental improvements for decades, and
in doing so has avoided rewriting legacy code until the last possible moment.

Some people have taken on concurrency upfront, anecdotally seen cost/performance
benefits plus more modern codebases, and anecdotally enjoyed competitive
advantages in areas where concurrency makes a difference.

We will never see the average user interface bother with concurrency and
legacy rewrites, because the competitive advantages are low.

There are likely to be areas where the concurrency advantage is great enough;
if you like Erlang, look for those niches.

~~~
InclinedPlane
Yes, precisely.

It's like designing a race car, or a fighter jet. Sure, they are amazing
things. But are people ever going to commute to work in anything resembling a
Bugatti Veyron or an F-22? Of course not. Neither maximum automotive
performance nor air combat effectiveness is the sort of thing that is
normally necessary to optimize for in daily life. Some time in the far future
we're going to have both the tools to write amazingly efficient programs and
to do so with a minimal amount of fuss from the programmer's perspective, but
it'll be a long time getting there. And in the meantime there are going to be
plenty of cycles of figuring out how to produce performance gains with the
least disruption to existing ways of doing things.

~~~
lifeisstillgood
Puuuuhhhleeeeaaaseeee can I commute to work in a Bugatti Veyron??

Please please please :-)

Edit: sorry, unable to resist. However, I am on Joe Armstrong's side: I would
far rather make a decent living doing fun Erlang work than be in a Java shop
making the next generation of POS.

Added to that, I think _not_ using Erlang or some STM-based concurrency
language _must_ be an informed decision. If the CTO of a big bank says "we have
tried two pilot projects rewriting the ATM network in Erlang and the projected
costs do not add up", fine. If he says "I have two hundred Java coders, we
aren't moving", I don't think that's valid.

~~~
InclinedPlane
Of course, most people would like to be working on race cars, or spacecraft,
or fighter jets, but that just isn't an option for everybody. And it's not as
though there's no in-between. The choice isn't just between some soul-sucking
blub job in the enterprise trenches and using Erlang; there are lots of
languages, lots of development patterns, lots of products.

~~~
lifeisstillgood
I would agree, but with the proviso that the spectrum between soul-sucking and
cool-space-tech is not a nice linear graph. In my experience it's step
gradients: some companies are entirely on one level, and then they have to
make a real effort to climb to the next (i.e. from manual deploys to CI).

It's actually a consultancy opportunity (I hope :-0)

------
meshko
The lack of understanding is amazingly widespread. I often have to explain to
people that when they look at their CPU utilization and it is at 10%, it means
"you are throwing money away", not "you are efficient".

~~~
masklinn
That's not really true, though, or at least not for all workloads. Much as you
are not "throwing money away" by not pegging your car engine in the red zone
100% of the time, you're not throwing money away by not being at 100% CPU all
the time. There are other metrics, values, and issues to take into account: a
pegged CPU on an unresponsive computer is useless for a desktop; a CPU that
can't serve requests because it is pegged from swapping like mad is useless
for a server; and so is a server at 100% CPU with no load on it, which will
just keel over when people start trying to actually interact with it.

~~~
jeremyjh
It sounds like you missed the point here. If an eight-core server is at 10%
utilization, it effectively has a single processor nearly pegged, and the
process doing it is thus CPU-bound (and maybe serving responses at high
latency) while the other cores sit idle. Conserving CPU resources and running
under capacity is wise, but that has nothing at all to do with this comment.
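
The arithmetic behind this is worth making explicit. One pegged core on an eight-core box shows up as a low aggregate number, which is what most dashboards report (a small invented sketch; `aggregate_utilization` is not a real monitoring API):

```python
def aggregate_utilization(per_core):
    """Average utilization across cores -- the single number most
    monitoring dashboards show."""
    return sum(per_core) / len(per_core)

# One nearly-pegged core on an 8-core box, seven nearly idle:
cores = [95, 1, 1, 1, 1, 1, 1, 1]
print(f"{aggregate_utilization(cores):.2f}%")   # 12.75%
# ...yet the process on core 0 is CPU-bound and its latency suffers.
# Low aggregate utilization does not mean spare capacity for that process.
```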

------
kamaal
This blog post shows everything that is wrong with languages like Lisp and
Erlang: a total disregard for what the rest of the world considers valuable.

The problem with these languages remains unchanged. The syntax is so strange
and esoteric that learning to do even basic things with them will likely
require months of learning and practice. This fact alone makes them
impractical for 99% of all programmers in the world.

No serious company will ever use a language like Erlang or Lisp until it's
absolutely unavoidable (and the situation becomes completely unworkable
without it), because everyone knows the number of skilled people in the market
who know Erlang is close to zero. Those who can work for you are going to be
crazy expensive, not to mention the nightmare of maintaining code in this
kind of language for years. There is no friendly documentation or easy way for
an ordinary programmer to learn these languages, and the level of reusable
solutions available for them is nowhere near what exists for mainstream
C-based languages.

In short, using these languages invites massive maintenance nightmares.

The concurrency/parallelization problem today is very similar to what memory
management was in the '80s and '90s. Programmers hate doing it themselves.
These are the sorts of things that the underlying technologies
(compilers/VMs) are supposed to do for us.

I bet most of these superpower languages will watch other pragmatic languages
like Perl/Python/Ruby/PHP etc. eat their lunch over the next decade or so,
when they figure out more pragmatic means of achieving these goals.

~~~
alberich
>> I bet most of these super power languages will watch other pragmatic
languages like Perl/Python/Ruby/Php etc eat their lunch over the next decade
or so when they figure out more pragmatic means of achieving these goals.

You know, Lisp's syntax is weird, but it is exactly this that makes it so
flexible. It's easy to manipulate code as data, because the syntax is very
regular. Try to do that with C's syntax...

So, unless someone knows how to solve this in an easy way, I'd say that all
the parentheses are actually a pragmatic decision (i.e. you want easy
macros... so you have to use this uncommon syntax).

If popularity is the goal, then maybe those languages were not pragmatic.
However, it seems the designers of such powerful languages (e.g. Lisp,
Erlang, Haskell) were looking to solve other problems, for which popularity
is really not a concern.
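
The "code as data" contrast can be made concrete. Lisp programs are literally lists, so manipulating them is free; in a C-like syntax you need a whole parsing layer first. Python happens to ship one in the standard `ast` module, which makes the point visible: here (an invented toy, not a macro system) we parse source text, walk its tree, and rewrite every addition into a multiplication:

```python
import ast

source = "result = (1 + 2) + (3 + 4)"
tree = ast.parse(source)          # code becomes a data structure

class AddToMul(ast.NodeTransformer):
    def visit_BinOp(self, node):
        self.generic_visit(node)  # rewrite children first
        if isinstance(node.op, ast.Add):
            node.op = ast.Mult()
        return node

new_tree = ast.fix_missing_locations(AddToMul().visit(tree))
namespace = {}
exec(compile(new_tree, "<ast>", "exec"), namespace)
print(namespace["result"])        # (1 * 2) * (3 * 4) = 24
```

In Lisp the equivalent is a few lines of list surgery with no parser in sight, which is the flexibility being claimed; the regular syntax is what keeps that surgery trivial.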

~~~
coldtea
> _You know, Lisp's syntax is weird but it is exactly this what makes it so
> flexible. It's easy to manipulate code as data, because the syntax is very
> regular. Try to do that with C's syntax..._

Why would you "manipulate code as data"? To write macros? A good template
system can help with that (if you need it) without homoiconicity.

For me, the level of manipulation of "code as data" (and vice versa) you get
with JSON/JS is enough for a lot of use cases.

~~~
alberich
> Why you'd "manipulated code as data"?

To write DSLs? You cannot use JSON to create new syntax for your language. The
whole idea of DSLs is to extend the language for the problem at hand. How
would you do that with JSON? Say... how would you write something like CLOS,
for instance, using the alternative mentioned by you?

Maybe your option is good enough for a lot of use cases. But what about when
it is not good enough? Then you're stuck, and there's nothing you can do
except wait for the language designers to release a new version of your
language with, hopefully, the changes you need.

~~~
coldtea
> _To write DSLs? You cannot use JSON to create new syntax for your language.
> The whole idea of DSLs is to extend the language for the problem at hand._

I'm not that sold on DSLs. I have a language (the base language) that people
know, has certain semantics, etc.

Now I suddenly go and add a new mini-language on top of it, with my ad-hoc
semantics for the "problem domain"? Why multiply the languages used, so that
someone has to reason about and understand both, instead of just one?

I could just use the functionality of the base language, AND its
syntax/semantics, to model the problem, i.e. with objects in an OO design,
with functions in a procedural design, with data and first-class functions in
a functional design, etc.

I don't really like all those Ruby DSLs, for example, like the ones for
testing, where you have to learn each one ON TOP of knowing the core language.

------
zzzeek
> The road to automatic parallelisation of sequential programs is littered
> with corpses. It can’t be done. (not quite true, in some specific
> circumstances it can, but this is by no means easy).

vs three paragraphs later

> Alexander’s talk gave us a glimpse of the future. His company concurix is
> showing us where the future leads. They have tools to automate the detection
> of sequential bottlenecks in Erlang code.

why is that not a contradiction? because an erlang program isn't "sequential"
to start with?

~~~
dustismo
I think you missed the "automatic" part. Completely rewriting a program in a
new language is certainly not automatic.

~~~
zzzeek
both phrases feature the term "automate"...but yes, one is detection, one is
resolution

------
surferbayarea
Have you even used zlib in C++? The largest ecommerce site out there uses zlib
in a multithreaded C++ application (24 cores, 100s of threads, 1000s of
requests/sec/server) and it works just fine! Bet Erlang can't come within a
tenth of the performance of C++...

~~~
coldtea
> _1000s requests/sec/server_

That's not particularly impressive you know.

~~~
surferbayarea
Yes, because it also does other computation. The point was to illustrate that
zlib can be used in a concurrent computing setting with high performance. The
blog writer had claimed that zlib doesn't work in a multithreaded setting.

------
dap
This completely misses the fact that many network services are not compute
bound, and multi-tenancy (as we get from "the cloud") lets "legacy" code make
very efficient use of CPU resources, even across a larger number of relatively
slow cores.

------
alexchamberlain
A very very interesting article, but I was extremely disappointed with the
example they gave.

So, there was an error in someone's code which you rewrote without the error
and it ran faster? Well done detective...

We need more parallel programs, no doubt, but we need more, better
programmers, who are willing to write in compiled languages with low overhead.

------
artsrc
Excel solves the right problem, and Erlang does not.

Erlang allows you to create concurrent programs, i.e. programs where the
result is schedule-dependent.

One right problem is allowing people to write deterministic parallel programs.
This gives you the speed (from parallelism) with the reliability (from
determinism).

------
spenrose
In conventional blocking languages, you can get a start on parallelizing your
programs this way:

    
      - break the program into function calls that match the steps that can happen in parallel
      - wrap the function calls in messages passed over the network
         + i.e. process(thing) -> post(thing)/poll_for_things()
      - split the sender and receiver into different processes

OF COURSE there are big advantages to using a language (Erlang) or a
heavyweight framework (map/reduce) designed for concurrency. Rolling your own
process-centric concurrency is a different set of tradeoffs, not a panacea.
But it's worth considering for some problems.
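Those three steps can be sketched in Python. This is a hedged illustration,
not a recommendation: `multiprocessing.Queue` stands in for the network
transport, and `process`, `worker`, and `run_pipeline` are made-up names
mirroring the pseudocode above.

```python
import json
from multiprocessing import Process, Queue

def process(thing):
    # step 1: the parallelizable unit of work, isolated as a function call
    return thing * 2

def worker(inbox, outbox):
    # step 3: the receiver runs in its own process, polling for things
    for msg in iter(inbox.get, None):            # None = shutdown sentinel
        thing = json.loads(msg)                  # step 2: messages, not calls
        outbox.put(json.dumps(process(thing)))

def run_pipeline(things, n_workers=4):
    inbox, outbox = Queue(), Queue()
    workers = [Process(target=worker, args=(inbox, outbox))
               for _ in range(n_workers)]
    for w in workers:
        w.start()
    for thing in things:                         # the sender just posts
        inbox.put(json.dumps(thing))
    results = [json.loads(outbox.get()) for _ in things]
    for _ in workers:                            # one sentinel per worker
        inbox.put(None)
    for w in workers:
        w.join()
    return results

if __name__ == "__main__":
    print(sorted(run_pipeline(list(range(10)))))
    # [0, 2, 4, 6, 8, 10, 12, 14, 16, 18]
```

Swapping the queues for HTTP posts/polls (or a message broker) gives you the
cross-machine version, with exactly the tradeoffs mentioned above: you now own
serialization, backpressure, and failure handling yourself.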

------
uwiger
To say that Erlang fails to deliver what most programmers need misses the
point. If you have a mainstream problem, use a mainstream language!

I've spent many years developing and reviewing products in the telecoms realm,
and have found that failing to realize _when_ something like Erlang brings
life-saving concepts to your project may well make the difference between
delivering on time and disappearing into a black hole of endless complexity.
It's not for everyone, but when it fits, boy does it help!

------
damian2000
I would hazard a guess that 90%+ of the world's programmers are working on
projects that don't really need to use parallelism to get the job done.

~~~
tunesmith
The question is, will that percentage go up or down as time goes on?

------
ternaryoperator
Joe Armstrong: "The problem that the rest of the world is solving is how to
parallelise legacy code."

Donald Knuth: "During the past 50 years, I’ve written well over a thousand
programs, many of which have substantial size. I can’t think of even five of
those programs that would have been enhanced noticeably by parallelism or
multithreading."

------
jlebrech
Which frameworks can you use?

This? <http://nitrogenproject.com/>

~~~
melchebo
I suppose you mean web framework. There's Chicago Boss:
<http://www.chicagoboss.org/>

Also a recent book using other Erlang web technology:
<http://www.amazon.com/Building-Web-Applications-Erlang-Working/dp/1449309968>

------
fulafel
All true. To add: we are in a local optimum where we have a lot of fast non-
parallel solutions, and the parallel way needs >10x parallelism. We'll see if
we ever get to ubiquitous 100-way parallelism with no need for backward
compatibility.

