Ask HN: Reasons behind renaissance of functional programming languages
101 points by fizwhiz on Apr 28, 2014 | 89 comments
I've been curious about the renaissance of functional programming languages like Scala and Clojure in the software community. Is this a byproduct of our evolving hardware, where instead of clock speeds the focus has been on pushing out more cores, so to reap the benefits software developers need better abstractions to make their code easily parallelizable? Is there some Erlang-esque promise of pervasive immutability which is vaulting these languages into more popularity? Or is it simply that these newer programming languages have learned from other languages' pain points and amplified the good parts by bundling them together neatly? I don't have anything against these languages; I'm just trying to get a deeper understanding of what appears to be a trend in the programming language space.



There has always been an interest in these languages. What's changed is that using a functional language no longer cripples you. That is, the rise of service-oriented architectures, horizontal scaling, javascript, and REST APIs mean that it is no longer suicide to build your product using a functional language.

20 years ago you needed to write desktop apps, so you were limited to C/C++. Then 15 years ago you needed access to Java libraries, and the only real JVM language was Java.

Now, you just need to build a service that speaks HTTP, so you just need an HTTP server, an HTTP client library, and a database adapter. That means you can build your product in any language you like and not lack the ability to succeed.

CircleCI is written in Clojure. We communicate with all our vendors over simple REST APIs - it only took a few minutes to write our own client bindings for Stripe and Intercom.
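For a sense of how little such a binding can be, here's a hypothetical sketch in Haskell (our real bindings are Clojure; the endpoint and all names here are illustrative, using http-conduit's Network.HTTP.Simple):

  {-# LANGUAGE OverloadedStrings #-}
  import Network.HTTP.Simple
  import qualified Data.ByteString.Char8 as BS

  -- Hypothetical one-endpoint vendor binding: an authenticated GET.
  getCustomer :: String -> String -> IO ()
  getCustomer apiKey custId = do
    req <- parseRequest ("https://api.example.com/v1/customers/" ++ custId)
    let authed = setRequestHeader "Authorization"
                   [BS.pack ("Bearer " ++ apiKey)] req
    resp <- httpLBS authed
    print (getResponseBody resp)

A full client is mostly more of the same: one small function per endpoint.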


I think Haskell's growth has been fueled by a really great base language finally getting a pretty comprehensive standard library. This was a highly intentional group effort that began probably 10 years ago and pushed the Haskell libraries to real maturity.

I can't truly claim this has driven the adoption of other languages, but Haskell's (or pick some language from the ML roulette) influence on Clojure and Scala is undeniable. Clojure's commitment to immutability pales in comparison to Haskell's, while the latter demonstrates the advantages of such programming in spades. Scala's type system is a novel extension of the HM system which forms the basis of Haskell's.

So am I claiming that the increased availability and "practicality" of Haskell has driven a renaissance in functional programming?

Sort of, yes, but I don't feel it's the core underlying cause; it's more a symptom in its own right. These ideas are the leading edge of programming language research. They're smart to learn and to integrate into our next generation of tools. The evidence has long been there in the "Holy Trinity" connecting category theory, logic, and type theory. Its "practical" significance is still growing, however.


Additionally, I think it's just a matter of "it's ready".

We just couldn't have practical functional languages 20 years ago, in terms of both software and hardware. Back then, GC was a rarity and the JVM didn't exist yet. Software had to run on the user's slow-ass computer, speed was everything, and you didn't care much about concurrency/parallelism. Nowadays, even if clock speeds hadn't stagnated and we had just one super core in every desktop computer, we would still be writing a lot more distributed systems on the web than we used to.

Also, we're building more and more complex software over the years. I remember a question for Peter Norvig in "Coders at Work", where he was asked about the differences in programming now and back then. In the past, programmers generally had to write (mostly) everything from scratch, and they kept the whole program in their heads. Nowadays, we actually spend most of our time reading other APIs and trying to plug different components together, so referential transparency is simply a godsend.
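(A tiny Haskell illustration of why: a referentially transparent call can always be replaced by its value, so each component can be understood without reading the internals of the others.)

  -- f is pure, so (f 3, f 3) and the let form mean the same thing,
  -- always. You can plug f in anywhere without tracking hidden state.
  f :: Int -> Int
  f n = n * n + 1

  pair1, pair2 :: (Int, Int)
  pair1 = (f 3, f 3)
  pair2 = let y = f 3 in (y, y)

  main :: IO ()
  main = print (pair1 == pair2)  -- always True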

In a sense, I think the change from imperative programming to functional is similar to the one from GOTO to structured programming. Maybe we need a "Mutable state considered harmful" to push functional languages mainstream? :-)


We did have practical functional programming languages 20 years ago. PG has been very vocal about the advantages of using Common Lisp to create Viaweb. Viaweb started the summer of 1995 and meets my bar for practicality. To the best of my knowledge, functional programming languages weren't as widely used then but it was not because no practical languages existed.


As a counterpoint I'll explain why I don't use Haskell.

I like Haskell. I like it in theory and I like playing around with it. I like the discipline it requires having to re-think "everyday" things in order to solve them in a new paradigm. I started with Python and then learned half a dozen other imperative languages so Haskell is utterly alien to me apart from the most basic concepts like recursion. I liken it somewhat to learning Vim -- you make the simple complicated and gain something in the process. Haskell is the number one language I wish I was working in at any given time.

So why am I not using Haskell? Most of my time is spent on OS X and iOS. While I could shoehorn Haskell into a project, it doesn't make business sense to do so. I couldn't justify the extra time and complexity, and I certainly couldn't justify it to a client. There simply is no good answer to "what about the next developer?" and other related questions. (I think this is why shoehorning a paradigm into an existing language is a much better approach when you have a tight coupling of language and platform -- I'm thinking of RAC here.)

Give me something to work on where Haskell is the obvious choice and I'd be all over it. Until then it has to stay in the toy box.


But when you see a possible fit for Haskell (I would hesitate to make 'obvious' be the bar), have you tried to persuade or demonstrate the advantages? Why not find customers/areas that would be open to using Haskell?

To some people, unfamiliar technology is scary, possibly unacceptable, and often far from 'obvious'.

I don't like the metaphor or idea behind "you make the simple complicated". I prefer to say this: with functional languages, you name more things: mapping, folding, and so on, because you use them so often. Then you get comfortable thinking in higher-level ways. This means you can build on them more effectively.
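A minimal Haskell sketch of what I mean: the recurring loop shapes get names, and the names compose.

  -- "Transform every element" and "combine all elements" are
  -- named patterns rather than hand-rolled loops...
  doubleAll :: [Int] -> [Int]
  doubleAll = map (* 2)

  total :: [Int] -> Int
  total = foldr (+) 0

  -- ...and named patterns compose directly.
  main :: IO ()
  main = print (total (doubleAll [1, 2, 3]))  -- prints 12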


> I don't like the metaphor or idea behind "you make the simple complicated".

When I say simple things I mean what's obvious to someone who programs primarily in imperative languages: top-down, line-by-line execution in functions, or local variables in functions. If you can take the same functions and understand them as operations on lists of things, then both ways of looking at the problem are valid, though the latter won't be obvious without the relevant experience.

> But when you see a possible fit for Haskell (I would hesitate to make 'obvious' be the bar), have you tried to persuade or demonstrate the advantages?

I have yet to. Again, I work primarily with Obj-C. Integrating Haskell would mean complicating a fairly elegant collection of tools and libraries. "Hey I like FP" isn't something I'm willing to sell to clients nor do I think it's the responsible thing to do. I go to great lengths to ensure that I'm delivering a product that could be understood by and maintained by developers with a wide range of experience / skill. Integrating a second language / paradigm into that without any clear gain would be counter to that.

> Why not find customers/areas that would be open to using Haskell?

For me it's really a chicken and egg thing. It's not easy for me to bring Haskell into my professional or even personal projects so I'm not experienced enough to go after primarily Haskell projects. What would make me consider it would be significant downtime caused by a lack of projects in the language(s) I'm already familiar with. Until that happens I'm not sure I'll make the jump.

I could possibly see writing libraries in Haskell, but again it's hard for me to make that jump when I know I could write them faster in C/Obj-C because of my familiarity. Most of what I like about Haskell (the type system, how succinct it is, functional purity) I can approximate in Obj-C. I rarely if ever use dynamic types, and I tend to write Obj-C (well, more and more these days) in a functional style by leveraging the relatively new addition of blocks.


> obvious for someone who programs primarily in imperative languages, like top-down, line by line execution in functions or local variables in functions.

This makes me angry at "ordinary" programmers. They think it's obvious, but they never rigorously modelled the sequential execution in the first place. Instead, they use their instinct. Instinct is efficient, but when it's wrong, tough luck.

This is probably why proving the correctness of a program is so difficult: you have to pin everything down mathematically, including sequences of side effects. For most people, that's much more difficult than merely relying on their guts. As a result, no one proves their programs. Maybe they test them.

With Haskell, you can't even program without pinning it all down mathematically. As a result, no one uses Haskell… OK, not no one, but still that's a significant barrier to entry.

I wish programmers were more mathematically aware. I wish they realised they're doing applied math, whether they like "math" or not. I'm tempted to blame our education, but I'm not sure it's the only culprit.


The best guess I've come up with is that anyone who is sufficiently mathematically inclined becomes a mathematician. I'm curious if anyone has any points/counterpoints in that direction.


Well, we do have computer scientists who have worked on program proof, cryptography, and type inference. Haskell (and earlier Miranda, and ML) were invented somehow.


Accepted, but they seem to be on either side of a dividing line — I'm doing my best to straddle it, but I'm still held back by a wall of language that is too often entirely impenetrable to me. I'm a very functionally-minded programmer, and I'm rapidly adapting to Clojure now (coming from rather functional JS, and having spent a rainy week on Haskell, which taught me basic type theory and such) — but I still often hit the wall of impenetrably abstract things too :(


It takes time. I used to read many Haskell related papers, and many things were impenetrable. Then, from time to time, one of them was suddenly obvious. Then another. And another.

The best advice I can give is, don't be content with words and labels. Search for the underlying structure. Take monads for instance. Lots of tutorials, lots of examples. But I didn't really get them until I read the Typeclassopedia: by the time you reach the word "monad", you realize it has been explaining the monadic structure in depth from the very start.

Practice also helps. Implementing a Parsec clone in OCaml helped me understand applicative functors and monads. (I also experienced some of the disadvantages of strict evaluation.) You may want to try that in Clojure.
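For the curious, here is a minimal sketch of the core of such a library (in Haskell rather than OCaml, since Parsec is the model; the names are illustrative):

  -- A parser consumes input and may return a value plus leftover input.
  newtype Parser a = Parser { runParser :: String -> Maybe (a, String) }

  -- The primitive parser: consume one character.
  item :: Parser Char
  item = Parser $ \s -> case s of
    (c:cs) -> Just (c, cs)
    []     -> Nothing

  instance Functor Parser where
    fmap f (Parser p) = Parser $ \s ->
      fmap (\(a, rest) -> (f a, rest)) (p s)

  -- The Applicative instance is where it clicks: pf <*> pa runs pf,
  -- then runs pa on the leftover input, then applies the results.
  instance Applicative Parser where
    pure a = Parser $ \s -> Just (a, s)
    Parser pf <*> Parser pa = Parser $ \s -> do
      (f, s')  <- pf s
      (a, s'') <- pa s'
      Just (f a, s'')

  -- e.g. (,) <$> item <*> item parses two characters into a pair.

Everything else in a Parsec-like library is built on this shape.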

Finally, when something seems impenetrably abstract or obscure, that may be because you lack some basic vocabulary. For instance, you can't understand a paper on type inference if you don't know how to read type rules (https://en.wikipedia.org/wiki/Type_rules), which you can't read if you don't know what an environment is… As I said, it takes time.


> Why not find customers/areas that would be open to using Haskell?

I don't speak for him, but I don't because I don't want to do wrong by my customers. I don't do long-term support. I can maintain a heavily FP application (in Haskell or whatever else). Most programmers can't.

That it has its advantages is good for me. Those advantages make it even harder for normal people to get good development help.


Here are some reasons I can think of:

* Parallelization and concurrency do seem to be easier in functional languages with immutable data. Not easier to write per se, but easier to reason about (see the sketch after this list).

* JavaScript was, as far as I can tell, arguably the first truly mainstream language to embrace first-class functions and closures. The ubiquity of JavaScript has exposed many to the benefits of such an approach. Similarly, Rails and other Ruby-based software make lots of use of functional programming through Ruby's do-blocks. These languages have made people more comfortable with first-class functions.

* Languages like Haskell, Scala and Clojure have shown that functional programming can be high-performance and one need not necessarily trade speed for expressiveness. Similarly, many scripting languages (and/or the hardware they run on) have become fast enough to allow for them to be used for high-performance applications.

* Functional programming is fun, and allows for beautiful and succinct code.
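On the first point, a minimal Haskell sketch (assuming the parallel package and a -threaded build): because fib is pure and the list immutable, evaluating elements in parallel cannot change the result, only the wall-clock time. That's the "easier to reason about" part.

  import Control.Parallel.Strategies (parMap, rdeepseq)

  -- A deliberately slow pure function.
  fib :: Int -> Integer
  fib n | n < 2     = fromIntegral n
        | otherwise = fib (n - 1) + fib (n - 2)

  -- parMap evaluates the elements in parallel; purity guarantees
  -- it means exactly the same thing as plain map.
  main :: IO ()
  main = print (sum (parMap rdeepseq fib [28 .. 34]))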


> JavaScript was, as far as I can tell, arguably the first truly mainstream language to embrace first-class functions and closures.

This is a nitpick, but Perl has had closures since the early 90s -- before Perl5 even.


It really depends on your definition of what mainstream means when it comes to languages. I think you can both be right depending on the interpretation.


That's not a nitpick--that's getting the facts right.


Not to mention Python and Ruby.


Ruby doesn't have functions as first class citizens. Python does.


And Lisp :-)


I'm going to argue against the idea that functional languages are flourishing as a result of the availability of parallelism. There are two reasons for this:

* Performance is not that big an issue for many applications. And if performance is not an issue, then neither is parallelism: the only reason to do parallelism is for performance (as opposed to concurrency, which makes sense even on sequential processors). So parallelism only really matters for performance-critical applications, because in most non-performance critical applications, you can squeeze out 10x by just running a profiler and writing better serial code.

* If you do want performance, parallelism alone is not sufficient. On modern processors, data movement is significantly more expensive than computation. And in that regard, most functional languages perform poorly. Imperative languages, for all their faults, give you much more fine grained control over memory usage patterns. So for performance-critical applications (which are again, the only applications where you really care about parallelism), it is not uncommon to see the core of the application written in C/C++, even if the rest of the application is written in something else.

By the way, if you aren't measuring your performance by comparing it against the peak compute/memory bandwidth of the machine, then you don't really care about performance, because you don't really have any idea what you're leaving on the table. This is why it's possible, for many applications, to give the code to an experienced performance programmer and see speedups in excess of 10x.


You can still care about performance by having acceptance criteria for throughput and latency, even when it's not the best the machine can do. You sound like you're arguing for a binary perspective on performance. But performance is a tradeoff. Often performance comes from de-abstracting multiple steps in a program's operation, reducing redundancies introduced by modularity and composability, and increasing complexity by denormalising data structures.

Design for performance thus often comes at the cost of development agility and operational correctness. Being correct and agile yet still meeting performance criteria is a bigger win for most businesses, which usually aren't constrained by the cost and raw speed of hardware. Functional programming, to the degree that it lets you exploit machine parallelism, gives you more leeway for good design before you risk missing your perf requirements.

And that's a reason functional programming is actually given a boost by hardware parallelism.

(A much bigger reason is that it's often a better way to structure programs, and now there are lower integration costs, since you hide behind the -- yes -- functional, stateless API of an HTTP request handler: an explicit function of input to output.)

PS: your perspective on perf is kinda off to me. Almost all applications care about performance. If they never terminated they'd be useless. Performance isn't just a priority, it's a requirement. But usually it's just a deadline and a hardware cost multiplier - and money saved on hardware may not be enough to pay for extra / more expensive people elsewhere.


> Design for performance thus often comes at the cost of development agility and operational correctness. Being correct and agile yet still meeting performance criteria is a bigger win for most businesses, which usually aren't constrained by cost and raw speed of hardware.

This point illustrates exactly the reason why people don't really care about performance. Performance is an arbitrarily deep rabbit hole that you can go down as far as you want. But because performance trades off with other things that people frequently want more (simplicity and maintainability, among others), they stop when performance is "good enough". What is "good enough" for most people? Probably 10x away from what the machine is actually capable of.

You probably think I'm being flippant by claiming that these sort of speedups are possible on real applications. I am not. In high performance computing (HPC), this happens surprisingly frequently, perhaps because people are willing to trade off almost anything for performance. Here's an example of a paper showing speedups over some 30 year-old Fortran/MPI code:

http://conferences.computer.org/sc/2012/papers/1000a040.pdf

And for this particular application I happen to know that there is another paper in submission which shows additional speedups over what this paper got. People who are willing to squeeze keep finding ways to optimize this application.


> On modern processors, data movement is significantly more expensive than computation

This is so true, and is really an emperor-has-no-clothes statement with respect to this claim that in the future we're all going to use immutable data structures to leverage multiple cores. Every situation where I've had to write performant code has ended up in one of these two cases:

1. All the time is spent in linear algebra, in which case it's all about planning out the computation so LAPACK can do its thing.

2. There's some funky custom signal manipulation. Some tiny bit of code needs to be in C, with everything in nice packed arrays in the right order so you can sweep straight through.


I agree - for me, immutability is much more about making code easy to reason about than it is about making it parallelizable.


People have covered a lot of the rational reasons, so here's an emotional one.

I've been coding a bunch of Rust lately, and I'm happy to discover it has my favourite thing from Haskell: the "if it compiles it's usually correct" feeling is wonderful. My software feels rock solid to me, even while it's being heavily changed.

It almost certainly translates into more reliable software, but even just the feeling is enough to hook me.


I agree. Once you understand typechecker error messages a little bit, refactoring becomes very simple (and fun!).

1. Change a core type or function.

2. Fix all the type errors.

3. Yay.

This is wonderful. I think the tradeoff is that when you start a new system, you have to delay running the code for a while, until you have your core. (Fortunately there is undefined :: a for such uses.)
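A minimal sketch of that trick (the names here are hypothetical): the module typechecks, so type-driven refactoring works, long before anything can run.

  data Invoice = Invoice { total :: Double, customer :: String }

  -- Stub the implementations; the types still have to line up.
  loadInvoice :: FilePath -> IO Invoice
  loadInvoice = undefined

  render :: Invoice -> String
  render = undefined

  main :: IO ()
  main = putStrLn . render =<< loadInvoice "invoice.json"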


IMO, it's something of a backlash against the rise of class-focused, object oriented languages (e.g. Java, C++, Ruby) leading to some complexity in the last decade.

Pendulum now swinging back the other way. Functional, data-structure-oriented programming has many benefits over old-school procedural programming (e.g. C, Fortran), but it also doesn't have the baggage of mainstream OO languages. Thus, this feels like an appealing -- and new -- ecosystem.

This -- along with a rediscovery of some functional concepts that seem to have more importance in a distributed setting, like immutable data, purity, and higher-order functions -- has led to renewed interest.


Part of the issue with Object Oriented design is how objects maintain state and deal with their dependencies.

A pure function has a very portable interface that makes it suitable for the distributed and social nature of the way we create software together.

These kinds of problems were not envisioned by the creators of the original Object Oriented systems like Smalltalk. Objects were designed for personal computers and building and using software designed for personal computers.

While shared workspace environments were explored in systems like Self, they were always designed around the objects being these visual instances running in a GUI.

Objects detached from a system like Self or Smalltalk lose a lot of their powers of abstraction. They are not and have never been portable, even if they were designed to be reusable.


> A pure function has a very portable interface that makes it suitable for the distributed and social nature of the way we create software together.

Citation needed. I personally don't talk to other programmers in terms of functions, but I use plenty of metaphors and analogies that come with my evolutionary capabilities to use natural language (math, on the other hand, was something I had to learn).


I'm citing my own experience as a professional.

Programmers talk to other programmers in terms of functions all the time. Every API you use exposes functionality. If that API assumes some sort of state, that can cause issues for the consumer of the API. It also forces whatever entity is maintaining state be tightly coupled to the consumer and leads to centralized systems.

If the API does NOT have state and instead uses a token-based system for authentication and access permissions, it is operating in a functional manner. State is maintained outside of the function.

The less coupled two modules are, the more portable they are. You can bundle up something like the "express" npm package and ship that off to whomever. You can easily build modules on top of an express application by passing the entire app to another module. This pattern has been used to build an entire ecosystem of loosely coupled middleware.

These are not Object Oriented designs. They aren't entities that communicate by sending messages to one another while floating around in some global workspace. They are interfaces that pass state around as arguments to function calls.

Global state is very nicely demonstrated by how modules can be shared between Node and browser JavaScript. What is global? All of the I/O. Files, XHR, mouse clicks, sockets... all of these are the inputs and outputs, and all are handled by "window" or "process" or other global state.

Modules can easily be reused between the server and client if they do not interface at all with any sort of global state.
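A minimal sketch of the shape I'm describing (hypothetical types, in Haskell for brevity): the handler owns no state; whatever state it needs arrives as an argument and leaves as a return value.

  data Token   = Token { userId :: Int }            -- decoded from the request
  data Request = Request { tokenOf :: Token, body :: String }
  data Db      = Db [(Int, String)] deriving Show   -- state lives elsewhere

  -- Pure handler: same (db, request) in, same (db, response) out.
  handle :: Db -> Request -> (Db, String)
  handle (Db rows) (Request (Token uid) b) =
    (Db ((uid, b) : rows), "stored for user " ++ show uid)

  main :: IO ()
  main = print (handle (Db []) (Request (Token 7) "hello"))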


> Programmers talk to other programmers in terms of functions all the time.

I seriously doubt it. In order to talk in terms of only functions, you would need to give up identity... because what is identity but a key for encapsulated state? Without identity, you really are talking with just values, but then you have problems referring to things with handles. There is no single Bob, just a new Bob who is reconstituted each time you talk about him. Naming anything is weird, and the pure lambda calculus of course avoids it. Most APIs are not purely functional, but somewhat object-oriented (oriented around identity), if only in a few key aspects that are needed to do anything at all.


You can have identity without encapsulated state; you just need something that's sufficiently unique, such as a UUID or SHA hash.


It is the same thing, actually; either will give you the other.


How does a raw UUID equate to encapsulated state? Adding a UUID doesn't inherently hide or restrict access to data.


Think about it like this: as long as you have an identity, someone can remember something about you...and therefore state!

If you didn't have an identity, then that would be impossible, it is like a superpower where no one could "remember" who you are, and therefore nothing about you.

A UUID is synonymous with state, and in fact, procuring and propagating these IDs would be the main step in implementing state in a pure language.


A signed access token is how identity is provided in a stateless environment.

Take Facebook OAuth for example. A user logs in, is provided with a signed access token by FB, and then uses that token to prove to an application that the owner of the secret key has granted them permission. The owner in this case was Facebook itself.

This can be verified by the server (in Node, where crypto is built in) by computing the expected signature:

  var crypto = require('crypto');
  var encodedData = signedRequest.split('.', 2);  // [signature, payload]

  var expectedSignature = crypto.createHmac('sha256', FACEBOOK_OAUTH_CLIENT_SECRET)
      .update(encodedData[1])        // HMAC the base64url-encoded payload
      .digest('base64')
      .replace(/\+/g, '-')           // convert base64 to base64url
      .replace(/\//g, '_')
      .replace(/=/g, '');            // strip all padding, not just the first '='

comparing it with the signed request signature:

  var signature = encodedData[0];

and making sure that they're the same.

This way the server never needs to maintain sessions, cookies, or a whole bunch of other crappy, centralized, coupled, insecure methods for building networked software. The server only needs the secret keys of whatever access tokens users might be providing. The access token contains the userId or any other relevant information. If the server needs more information about the user, it can provide the access token to another API that returns more information about the user, again, based solely on the access token, just by passing state INTO whatever system/function/service needs it.

If this server is sitting in front of a database all that the server cares about is what the access permissions are of whatever incoming request it is dealing with. It still doesn't need to maintain state. The database is maintaining state, but the server is just verifying that the request can make the changes to the state of the database.

All with the wonders of functions, modules, and APIs that try to operate without managing their own internal state, which again, is the main feature of functional programming languages.


You said "encapsulated state". I was wondering why you thought it was encapsulated, which is usually a term for information hiding.

I'm also not certain why you think this conflicts with the idea of talking "in terms of functions".


This has gone on in academia for a while: you are now considered mainstream if you embrace FP as the way, and a rebel if you instead believe that OOP is still the way.

I predict that there will eventually be a FP backlash that reverses the situation again (hey, look how easy objects are to use vs. this crazy FP stuff!?).


Logic programming, the third programming paradigm, will come to the rescue...


Sure, the database junkies will never give that up.


I guess 2020 will be the year of procedural programming then.


Why? Both OOP and FP are supersets of procedural programming, so in any case, it is the year of procedural programming already.


Question - why didn't you seek help in learning FP?


I worked with Martin Odersky's team for a while, and I saw the way...was to use some FP and a lot of the cool OO stuff that Scala provided (traits!). Scala is one of the most advanced OO languages out there.


I'm very familiar with the PLT field, including OO languages like Scala and the history of its development. I was asking precisely because of your work on scalac. Much of your code had to get thrown out/refactored/cleaned up. The impression was that you didn't understand the FP part. Your recent comments on HN reinforce that.

But that's beside the point. I'd like to find out what gaps can be filled so you no longer find FP difficult to understand or use. What material have you tried so far?


That is complete BS. I didn't drink the Kool-Aid and was able to get the job done in a way that didn't use mega pattern-matching functions, which are particularly loved by some (especially in scalac) but not all. Martin also preferred lazy evaluation over incremental and reactive computation, and that was that. If you look at my recent IDE demos (on YouTube, or just see my papers), you'll see where Scala could have gone vs. where it is now.

The Scala community eventually drifted away from OO, but it could have gone very differently.


What about the baggage of the old-school functional languages?


These come to mind (there are probably even more):

1) Complexity: Managing complexity is just easier with compact syntax, immutability, etc. Applications are becoming larger and more complex, and languages that help reduce coupling and code size will become more popular than those that easily allow getting the job done at the expense of high complexity (e.g. OO/imperative).

2) Concurrency: With concurrent/multi-threaded applications being par for the course rather than something exotic, functional thinking is now required for many applications regardless of language.

3) Modularity/Decoupling/Interoperability: SOA and APIs mean you can develop independent bits of software that communicate in some language-agnostic way (e.g. HTTP). The CLR and JVM now both support functional languages that work well with their imperative siblings. This means you don't even have to be loosely coupled over HTTP; you can write an application where the logical/computational bits are F# and the UI is C#, all in the same application.

Apart from these we must not forget that almost all the large imperative languages (C++/C#/Java) have gained numerous functional constructs in recent years. Few of us can imagine working in an imperative language without lambdas these days.

So functional thinking has slowly been sneaking into the imperative programmer's day job too. This has made more of us interested in these languages that consist almost entirely of the bits we find most elegant in our OO languages. It's simply less scary. In 1998 the difference between Haskell and Java 1.1 was quite large. In 2014 the difference between C# 5 and F# 3.1, or between Java 8 and one of the functional JVM languages, is no longer big enough to be scary.


I've read a lot of interesting things here, but no post really covered my perspective, so here it is:

Reason 1: Object-Oriented Programming is becoming less popular. After the 90s people started to see that OOP is not really the solution that scales to unlimited complexity. It was a step forward, but nothing more. Many different things got tried, like Aspect-Oriented Programming, but none of them really stuck.

Reason 2: The moment we had processors with more than one core, we suddenly had a need for general-purpose parallel processing. While the solution to this now seems to be tools/frameworks like ZeroMQ, another reasonable option was languages with different paradigms, like functional ones.

I think both these things happening at nearly the same time got functional languages a lot of traction. I have been following this since 2008, though. That functional languages still haven't made it to the top of the popularity charts leaves me no longer believing they ever will. Functional languages have always had this kind of swingy popularity, where a small group of people were strong-hearted followers while the general opinion swung between "this is ridiculous" and "maybe we could use it for that use case".

All in all I would say, the question is a great one, now that the current renaissance of Functional programming is pretty much over.


Cynical answer: The consultants have decided that they need a new snake oil to sell since the Agile bus is running out of steam.

Hopeful answer: FP languages have gotten enough better than the current mainstream that they can't be ignored.

I believe it is a bit of the Hopeful answer but mostly the Cynical answer. If you compare Java and Clojure (or Scala, for that matter) on the programming language shootout, Clojure is neither more concise nor faster than Java. So why all the hype? As far as I can tell, it is book sales and the fashion cycle.


Clojure is much more concise than Java, but ironically high-performance Clojure is neither more concise than, nor as fast as, Java.

But that said, high-performance Java is both slower and less concise than C by the same metrics. So if you think this is just fashion, then nearly every language outside of C is also just fashion.

In my view, languages as a whole have been moving more and more towards the functional space over time. Newer languages, or versions of existing languages, typically have more functional features. Languages are trending towards higher order. As a result of this, the existing higher order languages, the functional languages, are suddenly much closer to mainstream and so have become more popular as the majority become aware of their benefits.


If you stick to primitive operations like loop, if, let and data primitives, the bytecode generated by Clojure is the same as the bytecode generated by Java.

Clojure's syntax isn't optimal for low-level in-place mutability, but macros can go some way to resolving that. The few times I've worked on high-performance Clojure code, it hasn't been too difficult to get performance comparable to an equivalent Java library.

Java is somewhat slower than C, but it's generally within an order of magnitude if you stick to primitives.


I would say choosing C is just fashion too. For any software development project, technical merit limits language selection to a very broad category of languages. After you get to that broad selection, all further narrowing is non-technical (fashion-driven).


I suspect there are three main reasons.

Limitations of hardware mean we're increasingly looking at computing in parallel, which is a natural fit for languages that emphasise immutability. Distributed computation also benefits hugely from immutable values, as it sidesteps much of the problem with cache expiry.

We're also seeing a demand for more reliable systems. Taking mutability out of the equation eliminates a significant area of possible bugs, and allows for more sophisticated type-checking.

The third reason is simply that it's only recently that we've gotten good functional languages with a comprehensive and mature set of libraries. Haskell has been around for decades, but when I first started learning it seven or eight years ago, the set of libraries on offer was very thin compared to Haskell today. Scala only popped onto the scene 11 years ago, and Clojure only 7 years ago.


RAM is no longer a bottleneck. So you can "spend" some RAM in the search for increased programmer productivity.

CPU is not really a bottleneck either, so you can spend a few cycles there too.


These reasons make sense when explaining the rise of, say, dynamic, interpreted languages like Python/Ruby, which are inherently slow (compared to compiled, statically typed languages). Scala runs on the JVM, which is super fast once warmed up. So does Java. But Scala is more functional and more expressive, without a significant RAM or CPU hit compared to what we are already paying with Java.


Idiomatic Scala uses a lot more RAM than idiomatic Java, at least in my experience.


I would conjecture that your perception of FP languages making progress is an echo of your own development, or of the people immediately surrounding you, and that possibly the numbers would show there is no increase in FP language adoption.

I'm not taking a stand here, just posing a possible alternative explanation. At least, I've thought about the OP's question and guessed it was a false generalization of my own progress. Data in (dis)favor of either would be welcome.


FP is quite popular on HN. The more time you spend on HN (and perhaps similar sites), the more you think FP is popular.

Is there an actual increase in FP language adoption? Probably there is, within the population that reads HN. Within the programming world at large? I have no data, but from seeing some places that claim to measure usage, my perception is that any increase in FP in general usage is fairly small.


How about a boring theory, Internets.

It's just a subculture that couldn't gain critical mass without the internet. Like furries it has a wide but sparse appeal.

It is a subculture that's not simple, though; it takes a while to get into. Unlike being a furry, you really need to do it in your job to be good at it, hence it's only now starting to hit its peak.


Where would this "renaissance" be taking place? The most popular languages are Java and C...


Renaissance means resurgence, not dominance. Re = again, naissance = birth.

10 years ago I would see the very occasional article about Lisp. Now FP discussion is everywhere. Whether it's being used in prod systems or not, FP is certainly gaining attention.


There's been a discussion of object vs functional for 30 years. I've yet to hear a compelling argument for one or the other, but OOP seems to be easier for more people to grasp, since it strives to replicate the attributes/methods/inheritance of real-world concepts/objects.


Same confusion... I know there are a bunch of open source projects built using FP that became popular recently, like Storm.

Still, as far as I know, the workhorses in some of the biggest firms in the industry are the not-so-FP languages, like Java, C++, and Python.

FP is very cool, but I think it hasn't yet reached the point of being treated as seriously as those more mature languages, which represent not only the language itself but also the ecosystem behind it: tools, libraries, and community.


Java just added lambdas.


Smalltalk has had blocks since at least 1976.


Lambdas in Java extend the object model, not destroy it. So I don't get your point.


JavaScript has both beat for deployment base and developer population, I wager.


In our bubble this may ring true, but pretty much the whole rest of the world is running on Java and C.


Erm, you're ignoring a little thing called "The Internet as Experienced By Millions".


1. Erm, you are ignoring a little thing called "embedded systems".

2. What is the crossover of internet-capable devices also running Java or C? Like 100%?

3. You are only defending your assumption that there is a bigger install base, not a bigger developer base.


I'll give you embedded systems for deployment, but the number of embedded systems programmers is dwarfed by the number of web developers.

The problem with the TIOBE index is that it also seems to factor in courses and services--I wonder if there is a straight-up developer census.

That'd solve the question rather thoroughly. :)


Embedded systems is not a defense that there are more C programmers than JavaScript programmers; it is a defense that JavaScript does not have a larger install base.

Embedded systems + financial/government sectors + academia + video games + mobile apps (compared to simply "web development") is my defense against the notion that there are more JavaScript developers.


I very much doubt that, since Java, C, and Python are the basis of most CS programs.


There is a lot more demand for tasks that FP is good at, such as data processing, where state is a means to an end and hence the program can be stateless.

Contrast this with the 90s, when we needed to write user interfaces where state was the point, or C++ gamedev.


A bit offtopic:

It would be great if there was a 10GHz CPU (single-core), for some special domains.

A hardware startup company that designs a new CPU that disrupts the status quo would be great. (3 GHz in 2004, 3.8 GHz in 2006, ~4 GHz in 2014.)

Like in the 1990s 3dfx (GPU) and Cyrix (CPU):

http://en.wikipedia.org/wiki/3dfx

http://en.wikipedia.org/wiki/Cyrix


In what special domain do you envision such a high clock rate helping? Performance is only loosely connected to GHz[0]. If there is a case where clock rate trumps performance, I'm genuinely curious to know.

As processor architects are hitting fundamental physical limits when it comes to increasing clock speeds[1], it would probably take a team of world-class engineers, not a few disruption-minded startup founders, to create a 10GHz CPU. Such a chip would have extreme cooling and power demands and wouldn't necessarily perform well.

Cyrix doesn't seem like a great example, as it mostly competed with Intel on its budget chips, if Wikipedia is correct. I suspect the greater amount of low-hanging fruit in processor design made it easier for smaller companies to compete with Intel in the 90s.

[0]: http://arstechnica.com/gadgets/2011/04/ask-ars-whats-the-rel...

[1]: https://www.quora.com/Why-havent-CPU-clock-speeds-increased-....

http://www.reddit.com/r/askscience/comments/ngv50/why_have_c....


Higher clock speed, more instructions per cycle, and a good pipeline architecture mean higher single-core raw performance.

We already had complex pipelined superscalar architectures with the Pentium Pro and Pentium 4 (NetBurst/Prescott). The current Core architecture has much simpler pipelining, based on the Pentium 3/M. (See http://en.wikipedia.org/wiki/Clock_rate )

I favor a fast single core over slow many-core CPUs.

Have you ever coded a many-core application that runs on thousands of CPUs? I have, using http://en.wikipedia.org/wiki/Cilk and http://en.wikipedia.org/wiki/OpenMP (and CUDA and OpenCL on GPUs), as well as traditional operating system processes and threads.

You need new algorithms that work on massively parallel computers. Converting algorithms from serial to massively parallel is possible in many cases, but it is really hard science work (I have done it).

Nevertheless, for a specific domain I would need a really high speed single-core CPU.

A good book about the topic is "Inside the Machine" from ArsTechnica: http://www.amazon.com/Inside-Machine-Introduction-Microproce... ...and various university lectures.


I understand the advantages of higher single-core performance and the difficulties in parallelization. And yes, higher clock speeds mean better performance, all other things being equal.

But originally, you said 10 GHz, a record-breaking clock speed, would be advantageous. I replied to say that I doubt that, given the physical limits mentioned in the links I shared above. If what you originally meant was that processors with fewer cores and better raw performance per core is what you need, then I misunderstood.

>Have you ever coded a many-core application that runs on thousands of CPUs?

Nope. I admit you have more experience in that area than I do, but I don't really see why you brought it up.

>Nevertheless, for a specific domain I would need a really high speed single-core CPU.

Out of curiosity, which domain do you have in mind?


If we throw away conventional wisdom about how processors work, we could do something magic: throw away the clock.

Ivan Sutherland has done some work on a new processor architecture called Fleet, which takes this premise and rethinks how we should program from the bottom-up. Instead of instructions performing operations, the programming model becomes one of passing messages between individual units in the processor, and the problems become that of traffic management and routing, which happens to relate to functional programming, the actor model, and data-flow paradigms.

Have a read of "The tyranny of the clock", and other publications by the Asynchronous Research Center:

http://web.cecs.pdx.edu/~mroncken/ARC-2012-is13_TuringCenten...

http://arc.cecs.pdx.edu/publications


> It would be great if there was a 10GHz CPU (single-core), for some special domains.

There's a reason it hasn't happened.


What are the reasons? Moore's law? What are the physical reasons besides heat transfer?

In 2014 we now have the Intel Haswell (22 nm) with 3D tri-gate transistors, yet just a 3.7 GHz single-core clock rate.

A 10 GHz CPU from Intel would be possible today; we would just need dielectric liquid cooling (not water, but pressurized Fluorinert) like the http://en.wikipedia.org/wiki/Cray_2 used: http://en.wikipedia.org/wiki/Fluorinert

The general public cannot buy such equipment, but it does exist.


There are 3 physical reasons besides heat dissipation:

- size: higher frequency means you need smaller circuits, otherwise the electric impedance works against fast switching. There's a limit to how small we can build chip components right now.

- molecular stability: higher frequency requires higher voltage (to provide the energy), which in turn makes it easier for the metal oxide to degrade over time (this effect is additional to degradation due to heat)

- phase divergence: the digital circuits in our processors are heavily synchronized, and require components to react simultaneously to clock edges. However the propagation of the clock signal across the chip introduces errors in the signal phase. As the frequency increases, phase errors accumulate and cause circuits to "lose synchronization". To avoid this, one must increase the complexity of the clock distribution network greatly, which in turn compounds the effects above (and the heat problem).

All in all the current state of affairs is summarized by a rule of thumb (so-called "Pollack's law"): the speed (instructions/sec) of a single core processor increases with the square root of its complexity.

By this rule, clocking up from 3 to 10 GHz would mean roughly an 11-fold increase in complexity ((10/3)^2 ≈ 11), for which we don't have the technology ready just yet.


At 10GHz, a signal can propagate no more than 300 microns in a cycle due to the speed of light. The CPU would have to be designed either asynchronously or with many more clock domains than currently. The performance probably wouldn't scale linearly by any realistic metric, and you'd still have the same memory latency.


Your math is two orders of magnitude off. Using Grace Hopper's "one nanosecond is one foot", one cycle at 10 GHz is 1/10 of a foot, or 3 cm. That's 30,000 microns, not 300.

But yes, the speed of light limitation on signal propagation is a major factor in our current limit of 3 to 4 GHz chips.


Arguably the previous poster only mentioned he wants a 10GHz processor, not fast off-chip memory accesses. Also you can achieve a lot of performance in signal processing applications using a fast-clocked single core with on-chip SRAM scratchpads.


I believe, if anything, we (as in software engineers) would benefit more from faster memory and higher bus frequencies. In many cases fetching from and writing to memory is much more of a choke point than raw processor speed.


It is a side-effect of larger memories.



