
Ask HN: Reasons behind renaissance of functional programming languages - fizwhiz
I've been curious about the renaissance of functional programming languages like Scala and Clojure in the software community. Is this a byproduct of our evolving hardware, where instead of clock speeds the focus has been on pushing out more cores, so that to reap the benefits software developers need better abstractions to make their code easily parallelizable? Is there some Erlang-esque promise of pervasive immutability vaulting these languages into popularity? Or is it simply that these newer languages have learned from other languages' pain points and amplified the good parts by bundling them together neatly? I don't have anything against these languages; I'm just trying to get a deeper understanding of what appears to be a trend in the programming language space.
======
pbiggar
There has always been an interest in these languages. What's changed is that
using a functional language no longer cripples you. That is, the rise of
service-oriented architectures, horizontal scaling, javascript, and REST APIs
mean that it is no longer suicide to build your product using a functional
language.

20 years ago you needed to write desktop apps, so you were limited to C/C++.
Then 15 years ago you needed access to java libraries, and the only real JVM
language was Java.

Now, you just need to build a service that speaks HTTP, so you just need an
HTTP server, an HTTP client library, and a database adapter. That means you can
build your product in any language you like and not lack the ability to
succeed.

CircleCI is written in Clojure. We communicate with all our vendors over
simple REST APIs - it only took a few minutes to write our own client bindings
for Stripe and Intercom.

------
tel
I think Haskell has been growing a lot, fueled by a really great base
language finally getting a pretty comprehensive standard library. This was a
highly intentional group effort that began probably 10 years ago and pushed
the Haskell libraries to a real level of maturity.

I can't truly claim this has driven the adoption of other languages, but
Haskell's (or pick some language from the ML roulette) influence on Clojure
and Scala is undeniable. Clojure's commitment to immutability pales in
comparison to Haskell's, while the latter demonstrates the advantages of such
programming in spades. Scala's type system is a novel extension of the HM
system which forms the basis of Haskell's.

So am I claiming that the increased availability and "practicality" of Haskell
has driven a renaissance in functional programming?

Sort of, yes, but I don't feel it's the core underlying cause. It's more a
symptom in its own right. These ideas are the leading edge of programming
language research. They're smart to learn and to integrate into our next
generation of tools. The evidence for that has long been visible in finding
them at the center of the "Holy Trinity" of category theory, logic, and type
theory. Their "practical" significance is still growing, however.

~~~
NhanH
Additionally, I think it's just a matter of "it's ready".

We just couldn't have practical functional languages 20 years ago, in terms of
both software and hardware. Back then GC was a rarity and the JVM didn't exist
yet. Software had to run on the user's slow-ass computer, speed was
everything, and you didn't care much about concurrency/parallelism problems.
Nowadays, even if clock speeds hadn't stagnated and we had just one super
core in our desktop computers, we would still be writing a lot more distributed
systems on the web than we used to.

Also, we're building more and more complex software every year. I remember
a question for Peter Norvig in "Coders at Work", where he was asked about the
differences between programming now and back then. In the past, programmers
generally had to write (mostly) everything from scratch, and they kept the
whole program in their heads. Nowadays we actually spend most of our time
reading other APIs and trying to plug different components together, and
referential transparency is simply a godsend.
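To illustrate what referential transparency buys you when plugging components together, here is a minimal sketch (Python used as a neutral illustration; the function names are invented):

```python
# Referential transparency: a pure function's call can be replaced by its
# result without changing program behavior, which makes glued-together
# components easier to reason about. Illustrative sketch only.

def parse_price(raw: str) -> float:
    """Pure: the output depends only on the input string."""
    return float(raw.strip().lstrip("$"))

def total(prices: list) -> float:
    """Pure: no hidden state, so total(xs) is interchangeable with its value."""
    return sum(prices)

# Because both functions are referentially transparent, composing them
# requires no knowledge of execution order or shared state:
raw_items = ["$1.50", " $2.25", "$0.75"]
assert total([parse_price(r) for r in raw_items]) == 4.5
```

Either call can be cached, reordered, or re-run freely, which is exactly what helps when wiring third-party components together.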

In a sense, I think the change from imperative programming to functional is
similar to the one from GOTO to structured programming. Maybe we need a
"Mutable state considered harmful" to push functional languages mainstream?
:-)

~~~
chas
We did have practical functional programming languages 20 years ago. PG has
been very vocal about the advantages of using Common Lisp to create Viaweb.
Viaweb started the summer of 1995 and meets my bar for practicality. To the
best of my knowledge, functional programming languages weren't as widely used
then but it was not because no practical languages existed.

------
mbenjaminsmith
As a counterpoint I'll explain why I _don't_ use Haskell.

I like Haskell. I like it in theory and I like playing around with it. I like
the discipline it requires having to re-think "everyday" things in order to
solve them in a new paradigm. I started with Python and then learned half a
dozen other imperative languages so Haskell is utterly alien to me apart from
the most basic concepts like recursion. I liken it somewhat to learning Vim --
you make the simple complicated and gain something in the process. Haskell is
the number one language I wish I was working in at any given time.

So why am I not using Haskell? Most of my time is spent on OS X and iOS. While
I could shoehorn Haskell into a project, it doesn't make business sense to do
so. I couldn't justify the extra time and complexity, and I certainly couldn't
justify it to a client. There simply is no good answer to "what about the next
developer?" and other related questions. (I think this is why shoehorning a
paradigm into an existing language is a much better approach when you have a
tight coupling of language and platform -- I'm thinking of RAC here.)

Give me something to work on where Haskell is the obvious choice and I'd be
all over it. Until then it has to stay in the toy box.

~~~
dj-wonk
But when you see a possible fit for Haskell (I would hesitate to make
'obvious' be the bar), have you tried to persuade or demonstrate the
advantages? Why not find customers/areas that would be open to using Haskell?

To some people, unfamiliar technology is scary, possibly unacceptable, and
often far from 'obvious'.

I don't like the metaphor or idea behind "you make the simple complicated". I
prefer to say this: with functional languages, you name more things: mapping,
folding, and so on, because you use them so often. Then you get comfortable
thinking in higher-level ways. This means you can build on them more
effectively.
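That "naming more things" idea can be made concrete with a short sketch (illustrative Python; the variable names are invented):

```python
from functools import reduce

# The same work written three ways. Naming the pattern ("map", "fold")
# lets you think at the level of the operation rather than the loop
# mechanics. Illustrative only.

numbers = [1, 2, 3, 4]

# Imperative: the pattern is implicit in the loop body.
squares_loop = []
for n in numbers:
    squares_loop.append(n * n)

# "Mapping": the pattern has a name, so the whole expression states the intent.
squares_map = list(map(lambda n: n * n, numbers))

# "Folding": collapsing a sequence into one value, again by name.
summed = reduce(lambda acc, n: acc + n, numbers, 0)

assert squares_loop == squares_map == [1, 4, 9, 16]
assert summed == 10
```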

~~~
mbenjaminsmith
> I don't like the metaphor or idea behind "you make the simple complicated".

When I say simple things I mean what's obvious for someone who programs
primarily in imperative languages, like top-down, line by line execution in
functions or local variables in functions. If you can take the same functions
and understand them as operations on lists of things then _both_ ways of
looking at the problem are valid, though the latter won't be obvious without
having the relevant experience.

> But when you see a possible fit for Haskell (I would hesitate to make
> 'obvious' be the bar), have you tried to persuade or demonstrate the
> advantages?

I have yet to. Again, I work primarily with Obj-C. Integrating Haskell would
mean complicating a fairly elegant collection of tools and libraries. "Hey I
like FP" isn't something I'm willing to sell to clients nor do I think it's
the responsible thing to do. I go to great lengths to ensure that I'm
delivering a product that could be understood by and maintained by developers
with a wide range of experience / skill. Integrating a second language /
paradigm into that without any clear gain would be counter to that.

> Why not find customers/areas that would be open to using Haskell?

For me it's really a chicken and egg thing. It's not easy for me to bring
Haskell into my professional or even personal projects so I'm not experienced
enough to go after primarily Haskell projects. What would make me consider it
would be significant downtime caused by a lack of projects in the language(s)
I'm already familiar with. Until that happens I'm not sure I'll make the jump.

I could possibly see writing libraries in Haskell, but again it's hard for me
to make that jump when I know I could write them faster in C/Obj-C because of
my familiarity. Most of what I like about Haskell (the type system, how
succinct it is, functional purity) I can get in Obj-C. I rarely if ever use
dynamic types, and I tend to write Obj-C (well, more and more these days) in a
functional style by leveraging the relatively new addition of blocks.

~~~
loup-vaillant
> _obvious for someone who programs primarily in imperative languages, like
> top-down, line by line execution in functions or local variables in
> functions._

This makes me angry at "ordinary" programmers. They _think_ it's obvious, but
they never rigorously modelled the sequential execution in the first place.
Instead, they use their instinct. Instinct is efficient, but when it's wrong,
tough luck.

This is probably why proving the correctness of a program is so difficult: you
have to pin everything down mathematically, _including sequences of side
effects_. For most people, that's much more difficult than merely relying on
their guts. As a result, no one proves their programs. Maybe they test them.

With Haskell, you can't even _program_ without pinning it all down
mathematically. As a result, no one uses Haskell… OK, not _no one_ , but still
that's a significant barrier to entry.

I wish programmers were more mathematically aware. I wish they realised
they're doing applied math, whether they like "math" or not. I'm tempted to
blame our education, but I'm not sure it's the only culprit.

~~~
nathan7
The best guess I've come up with is that anyone who is sufficiently
mathematically inclined becomes a mathematician. I'm curious if anyone has any
points/counterpoints in that direction.

~~~
loup-vaillant
Well, we do have computer scientists who have worked on program proof,
cryptography, and type inference. Haskell (and earlier Miranda, and ML) were
_invented_ somehow.

~~~
nathan7
Accepted, but they seem to be on either side of a dividing line — I'm doing my
best to straddle it, but I'm still held back by a wall of language that is too
often entirely impenetrable to me. I'm a very functionally-minded programmer,
and I'm rapidly adapting to Clojure now (coming from rather functional JS, and
having spent a rainy week on Haskell, which taught me basic type theory and
such) — but I still often hit the wall of impenetrably abstract things too :(

~~~
loup-vaillant
It takes time. I used to read many Haskell related papers, and many things
were impenetrable. Then, from time to time, one of them was suddenly obvious.
Then another. And another.

The best advice I can give is: don't be content with words and labels. Search
for the underlying _structure_. Take monads, for instance. Lots of tutorials,
lots of examples. But I didn't really get them until I read the
Typeclassopedia: by the time you reach the word "monad", you realize it has
been explaining the monadic structure in depth from the very start.

Practice also helps. Implementing a Parsec clone in OCaml helped me understand
applicative functors and monads. (I also experienced some of the disadvantages
of strict evaluation.) You may want to try that in Clojure.

Finally, when something seems impenetrably abstract or obscure, that may be
because you lack some basic vocabulary. For instance, you can't understand a
paper on type inference if you don't know how to read type rules:
[https://en.wikipedia.org/wiki/Type_rules](https://en.wikipedia.org/wiki/Type_rules)
Which you can't read if you don't know what an environment is… As I said, it
takes time.

------
thinkpad20
Here are some reasons I can think of:

* Parallelization and concurrency do seem to be easier in functional languages with immutable data. Not easier to write per se, but easier to reason about.

* JavaScript was, as far as I can tell, arguably the first truly mainstream language to embrace first-class functions and closures. The ubiquity of JavaScript has exposed many to the benefits of such an approach. Similarly, Rails and other Ruby-based software make lots of use of functional programming through Ruby's do-blocks. These languages have made people more comfortable with first-class functions.

* Languages like Haskell, Scala and Clojure have shown that functional programming can be high-performance and one need not necessarily trade speed for expressiveness. Similarly, many scripting languages (and/or the hardware they run on) have become fast enough to allow for them to be used for high-performance applications.

* Functional programming is fun, and allows for beautiful and succinct code.
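The first-class functions and closures credited to JavaScript and Ruby blocks in the second point can be sketched in a few lines (Python as a neutral example; the helpers are hypothetical):

```python
# A closure: an inner function that captures state from its enclosing
# scope, the feature JavaScript made mainstream. Illustrative sketch.

def make_counter():
    count = 0
    def increment():
        nonlocal count  # closes over `count` in the enclosing scope
        count += 1
        return count
    return increment

counter = make_counter()
assert counter() == 1
assert counter() == 2

# Functions passed around as values, the other half of "first-class":
def apply_twice(f, x):
    return f(f(x))

assert apply_twice(lambda n: n + 3, 10) == 16
```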

~~~
sigil
> _JavaScript was, as far as I can tell, arguably the first truly mainstream
> language to embrace first-class functions and closures._

This is a nitpick, but Perl has had closures since the early 90s -- before
Perl5 even.

~~~
zoomerang
Not to mention Python and Ruby.

~~~
denibertovic
Ruby doesn't have functions as first class citizens. Python does.

------
eslaught
I'm going to argue against the idea that functional languages are flourishing
as a result of the availability of parallelism. There are two reasons for
this:

* Performance is not that big an issue for many applications. And if performance is not an issue, then neither is parallelism: the only reason to do parallelism is for performance (as opposed to concurrency, which makes sense even on sequential processors). So parallelism only _really_ matters for performance-critical applications, because in most non-performance critical applications, you can squeeze out 10x by just running a profiler and writing better serial code.

* If you do want performance, parallelism alone is not sufficient. On modern processors, data movement is significantly more expensive than computation. And in that regard, most functional languages perform poorly. Imperative languages, for all their faults, give you much more fine grained control over memory usage patterns. So for performance-critical applications (which are again, the only applications where you really care about parallelism), it is not uncommon to see the core of the application written in C/C++, even if the rest of the application is written in something else.

By the way, if you aren't measuring your performance by comparing it against
the peak compute/memory bandwidth of the machine, then you don't really care
about performance, because you don't really have any idea what you're leaving
on the table. This is why it's possible, for many applications, to give the
code to an experienced performance programmer and see speedups in excess of
10x.
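The "squeeze out 10x by running a profiler and writing better serial code" claim can be illustrated with a toy sketch (Python's built-in profiler; the functions are invented for illustration and this is not a real benchmark):

```python
import cProfile
import io
import pstats

# A deliberately slow serial implementation and its improved version:
# the kind of win a profiler typically surfaces without any parallelism.

def slow_unique(xs):
    # O(n^2): membership test scans a list on every iteration
    out = []
    for x in xs:
        if x not in out:
            out.append(x)
    return out

def fast_unique(xs):
    # O(n): membership test against a set
    seen, out = set(), []
    for x in xs:
        if x not in seen:
            seen.add(x)
            out.append(x)
    return out

data = list(range(2000)) * 2

# Profile the slow version to find the hot spot.
profiler = cProfile.Profile()
profiler.enable()
slow_unique(data)
profiler.disable()
stats = pstats.Stats(profiler, stream=io.StringIO())
stats.sort_stats("cumulative")  # inspect with stats.print_stats()

# Both versions agree; only the constant factors (and big-O) differ.
assert slow_unique(data) == fast_unique(data)
```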

~~~
barrkel
You can still care about performance by having acceptance criteria for
throughput and latency, even when it's not the best the machine can do. You
sound like you're arguing for a binary perspective on performance. But
performance is a tradeoff. Often performance comes from de-abstracting
multiple steps in a program's operation, reducing redundancies introduced by
modularity and composability, and increasing complexity by denormalising data
structures.

Design for performance thus often comes at the cost of development agility and
operational correctness. Being correct and agile yet still meeting performance
criteria is a bigger win for most businesses, which usually aren't constrained
by the cost and raw speed of hardware. Functional programming, to the degree
that it lets you exploit machine parallelism, gives you more leeway for good
design before you risk missing your perf requirements.

And that's a reason functional programming is actually given a boost by
hardware parallelism.

(Much bigger is that it's often a better way to structure programs, and now
there are lower integration costs, since you hide behind the - yes -
functional, stateless API of an HTTP request handler, an explicit function of
input to output.)

PS: your perspective on perf is kinda off to me. Almost all applications care
about performance. If they never terminated they'd be useless. Performance
isn't just a priority, it's a requirement. But usually it's just a deadline
and a hardware cost multiplier - and money saved on hardware may not be enough
to pay for extra / more expensive people elsewhere.

~~~
eslaught
> Design for performance thus often comes at the cost of development agility
> and operational correctness. Being correct and agile yet still meeting
> performance criteria is a bigger win for most businesses, which usually
> aren't constrained by cost and raw speed of hardware.

This point illustrates exactly the reason why people don't really care about
performance. Performance is an arbitrarily deep rabbit hole that you can go
down as far as you want. But because performance trades off with other things
that people frequently want more (simplicity and maintainability, among
others), they stop when performance is "good enough". What is "good enough"
for most people? Probably 10x away from what the machine is actually capable
of.

You probably think I'm being flippant by claiming that these sorts of speedups
are possible on real applications. I am not. In high-performance computing
(HPC), this happens surprisingly frequently, perhaps because people are
willing to trade off almost anything for performance. Here's an example of a
paper showing speedups over some 30-year-old Fortran/MPI code:

[http://conferences.computer.org/sc/2012/papers/1000a040.pdf](http://conferences.computer.org/sc/2012/papers/1000a040.pdf)

And for this particular application I happen to know that there is another
paper in submission which shows additional speedups over what this paper got.
People who are willing to squeeze keep finding ways to optimize this
application.

------
pixelmonkey
IMO, it's something of a backlash against the rise of class-focused, object
oriented languages (e.g. Java, C++, Ruby) leading to some complexity in the
last decade.

The pendulum is now swinging back the other way. Functional, data-structure-
oriented programming has many benefits over old-school procedural programming
(e.g. C, Fortran), but it also doesn't have the baggage of mainstream OO
languages. Thus, this feels like an appealing -- and new -- ecosystem.

This -- along with a rediscovery of some functional concepts that seem to have
more importance in a distributed setting, like immutable data, purity, and
higher-order functions -- has led to some renewed interest.

~~~
seanmcdirmid
This has gone on in academia for a while: you are now considered mainstream if
you embrace FP as the way, and you are considered a rebel if you instead
believe that OOP is still the way.

I predict that there will eventually be a FP backlash that reverses the
situation again (hey, look how easy objects are to use vs. this crazy FP
stuff!?).

~~~
pcarbonn
Logic programming, the third programming paradigm, will come to the rescue...

~~~
seanmcdirmid
Sure, the database junkies will never give that up.

------
badsock
People have covered a lot of the rational reasons, so here's an emotional one.

I've been coding a bunch of Rust lately, and I'm happy to discover it has my
favourite thing from Haskell: the wonderful "if it compiles, it's usually
correct" feeling. My software feels rock solid to me, even while it's being
heavily changed.

It almost certainly translates into more reliable software, but even just the
feeling is enough to hook me.

~~~
sousousou
I agree. Once you understand typechecker error messages a little bit,
refactoring becomes very simple (and fun!).

1. change a core type or function

2. fix all the type errors

3. yay

This is wonderful. I think the tradeoff is that when you start a new system,
you have to delay running the code for a while, until you have your core.
(Fortunately there is undefined :: a for such uses.)
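A rough analog of that workflow outside Haskell might look like this (a Python sketch with type hints standing in for the compiler, and a hypothetical todo() playing the role of undefined :: a; all names are invented):

```python
from typing import NoReturn

def todo() -> NoReturn:
    """Typechecks in any position, like Haskell's `undefined :: a`,
    but fails only if actually executed."""
    raise NotImplementedError

# 1. Change a core type...
class Order:
    def __init__(self, total_cents: int):
        self.total_cents = total_cents  # was a float `total`; now integer cents

# 2. ...then fix every call site a checker like mypy flags, stubbing the
#    parts you haven't reached yet:
def format_total(order: Order) -> str:
    return f"${order.total_cents / 100:.2f}"

def apply_discount(order: Order) -> Order:
    return todo()  # typechecks as Order; crashes only if called

# 3. yay
assert format_total(Order(1250)) == "$12.50"
```

The checker here is weaker than GHC, of course, but the refactoring loop is the same shape.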

------
alkonaut
These come to mind (there are probably even more)

1) Complexity: Managing complexity is just easier with compact syntax,
immutability, etc. Applications are becoming larger and more complex, and
languages that help reduce coupling and code size will become more popular
than those that easily allow getting the job done at the expense of high
complexity (e.g. OO/imperative).

2) Concurrency: With concurrent/multi-threaded applications being par for the
course rather than something exotic, functional thinking is now required for
many applications regardless of language.

3) Modularity/Decoupling/Interoperability: SOA and APIs mean you can develop
independent bits of software that communicate in some language-agnostic way
(e.g. HTTP). The CLR and JVM now both support functional languages that work
well with their imperative siblings. This means you don't even have to be
loosely coupled over HTTP: you can write an application where the
logical/computational bits are F# and the UI is C#, all in the same
application.

Apart from these we must not forget that almost all the large imperative
languages (C++/C#/Java) have gained numerous functional constructs in recent
years. Few of us can imagine working in an imperative language without lambdas
these days.

So functional thinking has slowly been sneaking into the imperative
programmer's day job too. This has made more of us interested in these
languages that consist almost _only_ of the bits we find most elegant in our
OO languages. It's simply less scary. In 1998 the difference between Haskell
and Java 1.1 was quite large. In 2014 the difference between C#5 and F#3.1, or
between Java 8 and one of the functional JVM languages, is no longer big
enough to be scary.

------
erikb
I've read a lot of interesting things here, but no post really covered my
perspective, so here it is:

Reason 1: Object-oriented programming is becoming less popular. After the 90s,
people started to see that OOP is not really a solution that scales to
unlimited complexity. It was a step forward, but nothing more. Many different
things were tried, like aspect-oriented programming, but none of them really
stuck.

Reason 2: The moment we had processors with more than one core, we suddenly
had a need for general-purpose parallel processing. While the solution to this
now seems to be using tools/frameworks like ZeroMQ, another reasonable option
was languages with different paradigms, like functional ones.

I think these two things happening at nearly the same time got functional
languages a lot of traction. I have been following this since 2008, though,
and the fact that functional languages still haven't made it to the top of the
popularity charts makes me no longer believe they ever will. Functional
languages have always had this kind of swingy popularity, where a small group
of people were strong-hearted followers while the general opinion swung
between "this is ridiculous" and "maybe we could use it for that use case".

All in all I would say, the question is a great one, now that the current
renaissance of Functional programming is pretty much over.

------
stonemetal
Cynical answer: The consultants have decided that they need a new snake oil to
sell since the Agile bus is running out of steam.

Hopeful answer: FP languages have gotten enough better than the current
mainstream that they can't be ignored.

I believe it is a bit of the hopeful answer but mostly the cynical answer. If
you compare Java and Clojure (or Scala, for that matter) on the programming
language shootout, Clojure is neither more concise nor faster than Java. So
why all the hype? As far as I can tell, it is book sales and the fashion
cycle.

~~~
tensor
Clojure is much more concise than Java, but ironically _high-performance_
Clojure is neither more concise nor as fast as Java.

But that said, high-performance _Java_ is both slower and less concise than C
by the same metrics. So if you think this is just fashion, then nearly every
language outside of C is also just fashion.
language outside of C is also just fashion.

In my view, languages as a whole have been moving more and more towards the
functional space over time. Newer languages, or versions of existing
languages, typically have more functional features. Languages are trending
towards higher order. As a result of this, the existing higher order
languages, the functional languages, are suddenly much closer to mainstream
and so have become more popular as the majority become aware of their
benefits.

~~~
weavejester
If you stick to primitive operations like loop, if, let and data primitives,
the bytecode generated by Clojure is the same as the bytecode generated by
Java.

Clojure's syntax isn't optimal for low-level in-place mutability, but macros
can go some way to resolving that. The few times I've worked on high-
performance Clojure code, it hasn't been too difficult to get performance
comparable to an equivalent Java library.

Java is somewhat slower than C, but it's generally within an order of
magnitude if you stick to primitives.

------
weavejester
I suspect there are three main reasons.

Limitations of hardware mean we're increasingly looking at computing in
parallel, which is a natural fit for languages that emphasise immutability.
Distributed computation also benefits hugely from immutable values, as it
sidesteps much of the problem with cache expiry.
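The natural fit between immutability and parallelism can be sketched like so (standard-library Python; `simulate` is a made-up stand-in for real work):

```python
from concurrent.futures import ProcessPoolExecutor

# Immutable inputs make parallel execution safe: no worker can observe
# another's partial writes, so the map can be split across cores freely.
# Minimal sketch using only the standard library.

def simulate(params: tuple) -> int:
    base, exponent = params  # tuples are immutable; workers share nothing
    return base ** exponent

if __name__ == "__main__":
    jobs = [(2, 10), (3, 5), (5, 3)]
    with ProcessPoolExecutor() as pool:
        # Identical semantics to map(simulate, jobs), just spread over cores.
        results = list(pool.map(simulate, jobs))
    assert results == [1024, 243, 125]
```

The same shape works for distributed maps: because each input value never changes, a lost shard can simply be recomputed, sidestepping the cache-expiry problem mentioned above.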

We're also seeing a demand for more reliable systems. Taking mutability out of
the equation eliminates a significant area of possible bugs, and allows for
more sophisticated type-checking.

The third reason is simply that it's only recently that we've gotten good
functional languages with a comprehensive and mature set of libraries. Haskell
has been around for decades, but when I first started learning it seven or
eight years ago, the set of libraries on offer was very thin compared to
Haskell today. Scala only popped onto the scene 11 years ago, and Clojure only
7 years ago.

------
AYBABTME
I would conjecture that your perception of FP languages making progress is an
echo of your own development, or of the people immediately surrounding you,
and that possibly the numbers would show there is no increase in FP language
adoption.

I'm not taking a stand here, just posing a possible alternative explanation.
At least, I've thought about the OP's question and guessed it was a false
generalization of my own progress. Data in (dis)favor of either would be
welcome.

~~~
AnimalMuppet
FP is quite popular on HN. The more time you spend on HN (and perhaps similar
sites), the more you think FP is popular.

Is there an actual increase in FP language adoption? Probably there is, within
the population that reads HN. Within the programming world at large? I have no
data, but from seeing some places that claim to measure usage, my perception
is that any increase in FP in general usage is fairly small.

------
jesusmichael
Where would this "renaissance" be taking place? The most popular languages are
Java and C...

~~~
revscat
Java just added lambdas.

~~~
mpweiher
Smalltalk has had blocks since at least 1976.

------
drudru11
It is a _side-effect_ of larger memories.

------
patrickg_zill
RAM is no longer a bottleneck. So you can "spend" some RAM in the search for
increased programmer productivity.

CPU is not really a bottleneck either, so you can also spend a few cycles
there.

~~~
yati
These reasons make sense when explaining the rise of, say, dynamic,
interpreted languages like Python/Ruby, which are inherently slow (compared to
compiled, statically typed languages). Scala runs on the JVM, which is super
fast once warmed up. So does Java. But Scala is more functional and more
expressive, without a significant RAM or CPU hit compared to what we are
already paying with Java.

~~~
ShardPhoenix
Idiomatic Scala uses a lot more RAM than idiomatic Java, at least in my
experience.

------
aaron695
How about a boring theory: the Internets.

It's just a subculture that couldn't gain critical mass without the internet.
Like furries, it has a wide but sparse appeal.

It's not a simple subculture, though; it takes a while to get into. Unlike
being a furry, you really need to do it in your job to be good at it, hence
it's only starting to hit its peak.

------
frozenport
There is a lot more demand for tasks that FP is good at, such as data
processing, where state is a means to an end and hence the program can be
stateless.

Contrast this with the 90s, when we needed to write user interfaces where
state was the point, or do C++ gamedev.
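A stateless data-processing pipeline of the kind described might look like this (illustrative Python; the record fields and stage names are invented):

```python
# Each stage is a pure function of its input record, so stages can be
# re-run, reordered across shards, or distributed without coordination.
# State is only a means to each record's end result.

def clean(record: dict) -> dict:
    return {**record, "name": record["name"].strip().lower()}

def enrich(record: dict) -> dict:
    return {**record, "name_length": len(record["name"])}

def pipeline(records):
    # No accumulator survives between records: the pipeline is stateless.
    return [enrich(clean(r)) for r in records]

rows = [{"name": "  Ada "}, {"name": "Grace"}]
assert pipeline(rows) == [
    {"name": "ada", "name_length": 3},
    {"name": "grace", "name_length": 5},
]
```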

------
frik
A bit offtopic:

It would be great if there were a 10GHz CPU (single-core) for some special
domains.

A hardware startup that designs a new CPU that disrupts the status quo would
be great. (3GHz in 2004, 3.8GHz in 2006, ~4GHz in 2014)

Like in the 1990s 3dfx (GPU) and Cyrix (CPU):

[http://en.wikipedia.org/wiki/3dfx](http://en.wikipedia.org/wiki/3dfx)

[http://en.wikipedia.org/wiki/Cyrix](http://en.wikipedia.org/wiki/Cyrix)

~~~
alecdbrooks
In what special domain do you envision such a high clock rate helping?
Performance is only loosely connected to GHz[0]. If there is a case where raw
clock rate is what matters most, I'm genuinely curious to know.

As processor architects are hitting fundamental physical limits when it comes
to increasing clock speeds[1], it would probably take a team of world-class
engineers, not a few disruption-minded startup founders, to create a 10GHz
CPU. Such a chip would have extreme cooling and power demands and wouldn't
necessarily perform well.

Cyrix doesn't seem like a great example, as it mostly competed with Intel on
its budget chips, if Wikipedia is correct. I suspect the greater amount of
low-hanging fruit in processor design made it easier for smaller companies to
compete with Intel in the 90s.

[0]: [http://arstechnica.com/gadgets/2011/04/ask-ars-whats-the-
rel...](http://arstechnica.com/gadgets/2011/04/ask-ars-whats-the-relationship-
between-cpu-clockspeed-and-performance/)

[1]: [https://www.quora.com/Why-havent-CPU-clock-speeds-
increased-...](https://www.quora.com/Why-havent-CPU-clock-speeds-increased-in-
the-last-5-years).

[http://www.reddit.com/r/askscience/comments/ngv50/why_have_c...](http://www.reddit.com/r/askscience/comments/ngv50/why_have_cpus_been_limited_in_frequency_to_around/).

~~~
frik
Higher clock speed, more instructions per cycle and good pipelining
architecture means higher single-core raw performance.

We already had complex superscalar pipeline architectures with the Pentium Pro
and Pentium 4 (NetBurst/Prescott). The current Core architecture has much
simpler pipelining, based on the Pentium 3/M. (see
[http://en.wikipedia.org/wiki/Clock_rate](http://en.wikipedia.org/wiki/Clock_rate)
)

I favor a fast single-core CPU over slow many-core CPUs.

Have you ever coded a many-core application that runs on thousands of CPUs? I
have, using
[http://en.wikipedia.org/wiki/Cilk](http://en.wikipedia.org/wiki/Cilk) and
[http://en.wikipedia.org/wiki/OpenMP](http://en.wikipedia.org/wiki/OpenMP)
(and CUDA and OpenCL on GPUs), as well as traditional operating system
processes and threads.

You need new algorithms that work on massively parallel computers. Converting
algorithms from serial to massively parallel is possible in many cases, but it
is really hard scientific work (I have done it).

Nevertheless, for a specific domain I would need a really high-speed
single-core CPU.

A good book about the topic is "Inside the Machine" from ArsTechnica:
[http://www.amazon.com/Inside-Machine-Introduction-
Microproce...](http://www.amazon.com/Inside-Machine-Introduction-
Microprocessors-Architecture-ebook/dp/B004OEJO0A/) ...and various university
lectures.

~~~
alecdbrooks
I understand the advantages of higher single-core performance and the
difficulties in parallelization. And yes, higher clock speeds mean better
performance, all other things being equal.

But originally you said a 10 GHz, record-breaking clock speed would be
advantageous. I replied to say that I doubt that, given the physical limits
mentioned in the links I shared above. If what you originally meant was that
processors with fewer cores and better raw performance per core are what you
need, then I misunderstood.

>Have you ever coded a many-core application that runs on thousands of CPUs?

Nope. I admit you have more experience in that area than I do, but I don't
really see why you brought it up.

>Nevertheless, for a specific domain I would need a really high speed single-
core CPU.

Out of curiosity, which domain do you have in mind?

