Which Programming Languages Are Functional? (jenkster.com)
94 points by ingve 633 days ago | 106 comments



Erlang is currently probably the most practically used functional language. It has been in production for decades. It handles perhaps 50% of the world's smartphone <-> internet traffic. There are many databases built on it, as well as message-processing backends, and it handles hundreds of millions of connected devices (WhatsApp).

Yet most people forget about it when talking about "functional", and basically end up comparing Python to Haskell. Maybe with F# thrown in for good measure.

Erlang has solid closure support, tail call elimination, immutable data structures and variables, and it encourages recursion instead of for loops; in practice folds and maps are common constructs rather than an oddity (as they are in, say, Python).

It even has an optional success typing system:

http://learnyousomeerlang.com/dialyzer

EDIT: Elixir runs on Erlang's VM as well, and almost all of the language features above apply to it too. If you don't want to give Erlang a try, check out Elixir. It also has one of the friendliest communities for newcomers.


It's weird, I always kind of wondered why Erlang gets left out of the FP discussions. It's not as dogmatic as Haskell or something, but functional-ness is kind of its bread and butter.


I think the answer is that Erlang's creators are not academics and they never really went out of their way to tout the functional properties of their language. The functional properties fell out of their requirements for simplicity, fault tolerance, concurrency and reliability. In other words, the goal of Erlang was never to be functional but to be useful for fault-tolerant distributed systems.

Someone down the line probably mentioned to them "Hey, btw, did you know you have a functional language?" And they probably shrugged and said "Oh, cool!"

I would guess Joe, Robert, or Mike never went about reading papers on type theory, thinking about meta-circular evaluators or Y combinators, and said "We need that, let's implement it!". Instead, they started with goals like "let's minimize state", so they made state explicitly passed in. "Let's minimize mutable state", so they made data immutable. That meant they needed recursion instead of for loops, so they thought about tail call elimination. They had played with Prolog, so the language inherited the base Prolog syntax, which has a functional taste to it; that helped as well.

The same thing kind of happened with the actor paradigm. They never started with the goal of implementing an actor / message-passing programming paradigm; it just fell out of the fault-isolation property. Someone later told them "Oh, btw, did you know you've implemented the actor paradigm?" and they probably shrugged, said "Oh, cool!", and went back to doing whatever they were doing.


I've talked with Robert at some length and listened to his talk on LFE, and that's more or less the way of it, plus Robert himself was an old Lisp buff from way back (hence Lisp-flavored Erlang). It was basically a combination of "this is what we need to do what we want" and occasionally bits of "well of course you do it that way".

That was actually how he described tail call elimination in his LFE talk: "of course you have TCE/O, why the hell wouldn't you?"

Which was a refreshing perspective to me, considering how many programmers still seem to have been heavily indoctrinated with the idea that recursion is automatically bad.


There's a tremendous amount of truth to this.


Because the Erlang people don't do 'sexy', they just do stuff that quietly works. They're too busy working to be a part of the social networks around programming languages, they don't care whether you care, they care whether they can get their job done.


One Erlang project that is rather friendly up-front is the Zotonic[1] web framework. As someone new to Erlang, it's great to be able to hit the ground running with such useful resources.

1. https://zotonic.com


Your link is broken; I don't think they use SSL.


Because Haskell is almost a platform for discussing and implementing FP concepts, especially if you want a type system. (Not sure how much ML is discussed these days.)

Erlang, on the other hand, is targeted at soft-real-time distributed systems. How did the quote go? "Every soft-realtime highly available distributed system contains an informally specified, incomplete, bug-ridden implementation of Erlang." Or something like that.


It's also driving almost all of the non-Google ad networks, has a pretty sizable footprint in high-finance, and increasing numbers of online gaming (mobile and console) backends are built with it (or its younger cousin, Elixir).


One of the most powerful things about Erlang is that there is no state other than what you are passed in locally (of course, there are still side-effect dumps, like databases & files, but those can be well-managed behind gen_server).

This property makes Erlang programs incredibly easy to reason about and test.


Agree with that. It makes it very nice when jumping in to understand a complicated system, because it is obvious how state changes just by looking at the current function. In typical state-heavy OO code I find I am lost more often, because when debugging I have no idea what the current state is, or how it got into that state when something is broken.


another plug for Elixir: I'm using it right now w/ the Phoenix framework and love it. The last time I played around w/ Erlang I was frustrated and confused, so it's either a big improvement or I've gotten smarter! (more likely the former)


I think it is probably a little of both. For me, Elixir was the crutch I needed in order to break through the Erlang veil. I still like Elixir's syntax a lot better, so I tend to stick with it, but I definitely understand Erlang more because of Elixir.


I'm not sure I like this definition of "functional". Don't get me wrong, I'm a Haskell guy and think Haskell is fantastic for production use, but this definition of "functional" almost seems to rule out everything except Haskell. Even if I try to maximize my Haskell fanboyism, I can't with a straight face choose a definition of "functional programming language" that, of today's production-viable languages, includes only Haskell. A definition that narrow just isn't very useful. I don't have super strong opinions about how it should be defined, but I tend to be drawn towards a fuzzier definition. One that seems attractive is: "functional programming languages are languages that focus on functions as the basic unit of abstraction".


I tend to think of it in that way as well.

To me, "functional programming" taken back to origins in Lisp and the lambda calculus is a style of programming that focuses on the composition of functions.

The "purity" element is merely the consequence of this focus: in order to be composable, our functions largely need to be free of side effects, and take and return values as mathematical functions do.

I think the emphasis on function composition also makes for a better, more believable sales pitch, once you take into account first-class functions as values and the kinds of abstractions and patterns they allow. I have a version of this pitch which goes like this:

Imagine a function that, given the right inputs, could produce any output in the known universe. To which you might rightly answer "that's pure nonsense": the sheer volume of internal logic would have to be more complicated than everything within that universe.

But if that function can itself receive other functions as inputs, then the potential outputs expand almost infinitely.

And indeed, this is why the lambda calculus itself has proved Turing-complete: simply by composing functions within functions in this way, you can literally recreate the whole of computing and mathematics.
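A tiny sketch of that pitch in Python (the names and examples are mine): a function that accepts another function can produce outputs it knows nothing about.

```python
def apply_twice(f, x):
    """Higher-order: the caller supplies the behavior, not just the data."""
    return f(f(x))

# The same function yields completely different "universes" of outputs
# depending on the function passed in:
print(apply_twice(lambda n: n + 3, 10))      # 16
print(apply_twice(str.upper, "hof"))         # HOF
print(apply_twice(lambda s: s + "!", "hi"))  # hi!!
```

apply_twice never changes, yet its range of possible outputs is whatever the supplied function can compute.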

In practice though, people have their limits, and the day to day operation of a computer does require occasionally breaking the perfect mathematical puzzle for the sake of getting work done, and the vast majority of languages in existence allow for exactly that. A definition that doesn't allow for them might well be "true" in the "pure" sense of the word, but it's not very useful for people other than mathematicians.

And hell, even the Turing-machine uses mutable state.


Yeah, I mean, I for one am over ten years old, so I remember when 'functional' pretty much just meant functions-as-first-class-values and everything-returns-a-value.

Then, because those are pretty reasonable language features, all the languages that had been written in the late '80s to mid '90s and became popular around 2000 had those features.

So then for a while 'functional' was just sort of a way of saying 'not Java'.

Now 'functional' seems to mean strong algebraic typing for some reason.


I explained it like this elsewhere:

The functional programming paradigm models computation as a relation between sets, and is thus inherently declarative. However, in practice, we often think of functions as imperative, i.e. you put in an input value and get out an output value, same as with a procedure. From this point of view, the characteristic property of a function is that it has no side effects. Because of the ambiguity of the terms, we call such a function pure, and a language which only has pure functions would be a purely functional language.

However, not all functional languages are pure: a functional language is a language with syntax and semantics which allow the programmer to use the functional paradigm efficiently. Some of the concepts which make using the paradigm feasible include, among others: lambda expressions with lexical closure, higher-order functions, variant types and pattern matching, lazy evaluation, and type inference (in the case of statically-typed languages).

This is by no means an authoritative list, and a language can very well be functional without providing all or even most of them, but if a language does, i.e. makes them usable without having to jump through major hoops, their presence is a strong indicator that the language should be considered functional.
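For anyone who wants the purity distinction above spelled out, here is a minimal Python sketch (the functions are made-up examples):

```python
counter = 0

def impure_increment(x):
    """Depends on and mutates state outside its arguments."""
    global counter
    counter += 1
    return x + counter

def pure_add(x, y):
    """Output depends only on the arguments; no observable effects."""
    return x + y

# The impure function gives different answers for the same input:
impure_increment(10)  # 11 on the first call
impure_increment(10)  # 12 on the second

# The pure one never does:
pure_add(10, 1) == pure_add(10, 1)  # always True
```

A purely functional language only lets you write the second kind; a functional-but-impure language merely makes the second kind the path of least resistance.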


It's a spectrum really. For example, OCaml is much more functional than JavaScript by having strong typing, inductive types, pattern matching etc. but lets you write functions with side effects unlike Haskell.


Haskell lets you do side effects. The difference is that the type system carefully controls them.


It's a matter of definition. It's perfectly consistent to say Haskell does not allow side-effects as they stop being such once they have been reified.


But then it's equally consistent to say that C doesn't allow side-effects because the whole program is reified.


I was referring more to how OCaml allows mutable references and arrays, and allows IO similar to imperative languages, whereas Haskell isn't so generous.


Haskell absolutely allows mutable references and arrays. See IORef [1] and MVector [2]. Notice how the functions that perform mutation are monadic. That's Haskell controlling the side effects and making sure that they don't pollute pure code. The ST monad stuff [3] that wyager mentioned is even cooler because it allows you to bundle up side-effecting stuff and actually use it in pure code in a completely safe way, which is something that you can't do (without the dubious unsafePerformIO) with the IO monad.

[1] https://downloads.haskell.org/~ghc/latest/docs/html/librarie...

[2] http://hackage.haskell.org/package/vector-0.11.0.0/docs/Data...

[3] https://downloads.haskell.org/~ghc/latest/docs/html/librarie...


Haskell actually does allow mutability in pure code using the ST monad.

It's pretty cool; you can use ST to have mutable variables and arrays and such, but the mutable values aren't allowed to escape the ST "context". So you're still guaranteed immutability outside of the ST monad, because the type system locks all mutable values inside the monad.

This is mostly only used in very-high-performance libraries that use algorithms based on array access.


The wiki definition is on point in my opinion: "a style of building the structure and elements of computer programs—that treats computation as the evaluation of mathematical functions and avoids changing-state and mutable data. It is a declarative programming paradigm, which means programming is done with expressions".


Where "function" means "like in math", right? Not "arbitrary callable block of code". I think it makes sense to think of "functional" as more or less synonymous with "approaching Haskell", even if you want to include many other languages under the umbrella.


> I'm a Haskell guy

Don't ever be a "programming language X guy or girl". Be yourself, know some tools and skills, and be prepared to switch them out for whatever works best in a particular application. If you limit yourself to a certain programming language, no matter how great it seems today, you've built in your own obsolescence.

Better to be a guy or a girl with x years of Haskell experience and a whole bunch of other stuff besides.


On the one hand I think that's reading too much into the phrase "Haskell guy", but on the other I disagree in the sense that being a dilettante is no good either. I think people should be "T-shaped": deep in one to a few languages/platforms, with some experience of many others. Spending a long time with one language/platform will teach you deeper lessons over time than switching every 6 months.


I used that phrase as a more concise way of communicating pretty much what you are suggesting. I assumed that since my choice of X was Haskell, it was pretty much a given that I haven't limited myself to a certain programming language and that I aggressively investigate promising new technologies.


But why would you want a "functional programming" language when you can have a modern multiparadigm language (JavaScript 6, Swift, Java 8, C#)?

The author seems to believe that having no side effects is the key property of a functional language. But most problems I deal with (user interfaces, various business domains, integration with other systems over a network, etc.) are filled with state and procedures. Sure, you can deal with that in a purely functional way, but to me it is a bit like doing object-oriented programming in C with macros and a lot of conventions. Why not simply use a programming language with suitable primitives? In particular now that multiparadigm languages are the most widely used in our industry.


The value of functional programming is not in what it brings, it's in what it takes away.

It is easier to reason about functional programs because they maintain certain properties. These could be types, an absence of side effect, etc. This is the point of the entire article.

You get none of those benefits if your "multi paradigm" language allows you to break those guarantees.

For a multi paradigm language to derive benefits from functional programming, it should allow developers to explicitly write guarantees that can be enforced at compile time. This is a promising direction but, to my knowledge, it's not available in any of the languages you mentioned.


I think this is exactly it.

With a multi-paradigm language, you'd need not only some way to break the rules easily and have that bubble up; you'd also want this rule-breaking behavior to be exposed up the chain from any libraries. You'd also have to enforce the benefits of immutability and such through social pressure in order for users to see large parts of the benefits of side-effect isolation.

A lot of the choices FP languages make that are less about side-effects (like strict typing systems) are also often an all-or-none proposition. Optional typing systems have very limited benefits if libraries aren't well-typed. The static analyzers can only do so much.


Oh, they can do surprisingly much. See how Erlang works with its optional typespecs and dialyzer.


> The value of functional programming is not in what it brings, it's in what it takes away.

That's what I used to think, and in fact I still think like that somewhat, but do note it's explicitly mentioned as a red herring in John Hughes' well-known and often quoted paper "Why Functional Programming Matters".

From the paper:

> Such a catalogue of “advantages” [things that FP lacks or constraints] is all very well, but one must not be surprised if outsiders don’t take it too seriously. It says a lot about what functional programming isn’t (it has no assignment, no side effects, no flow of control) but not much about what it is. The functional programmer sounds rather like a medieval monk, denying himself the pleasures of life in the hope that it will make him virtuous. To those more interested in material benefits, these “advantages” are totally unconvincing.

> [...]

> Functional programmers argue that there are great material benefits — that a functional programmer is an order of magnitude more productive than his or her conventional counterpart, because functional programs are an order of magnitude shorter. Yet why should this be? The only faintly plausible reason one can suggest on the basis of these “advantages” is that conventional programs consist of 90% assignment statements, and in functional programs these can be omitted! This is plainly ridiculous. If omitting assignment statements brought such enormous benefits then Fortran programmers would have been doing it for twenty years. It is a logical impossibility to make a language more powerful by omitting features, no matter how bad they may be.

He then goes on to argue that the real benefit of FP languages is that they excel at modularity, a widely agreed upon trait of good software, and that they have an excellent mechanism for "gluing" stuff together, i.e. lazy evaluation.

(I've since read that some FP devs disagree about the importance Hughes placed on laziness).


I think he's refuting a weak-man argument. Not quite a straw man, because I've heard those arguments actually being made.

Of course it's about modularity, but how do you get modularity? How do you safely abstract the internals of libraries? Types and lack of side effects!

It's also about correctness. Most of the time spent writing any complex bit of software is generally spent finding out what's wrong with it.


Agreed. I find Hughes' paper very interesting, but I disagree with him that "you cannot make something more powerful by removing things".

Even philosophically this position is weak. It's self-evident to me that you can make something more powerful or better by removing things. You can make something better by removing risks. You can make a UI saner by removing the possibility of invalid interactions. You can make art better by removing superfluous details.

I still think that types and control of side effects are essential to (good) FP.


- You can make the Library of Babel better by removing books.[1]

- You can make limbs more useful by removing degrees of freedom, simplifying the control task.[2]

[1] https://en.wikipedia.org/wiki/The_Library_of_Babel

[2] http://www.ncbi.nlm.nih.gov/pubmed/16631583


Code Contracts for .NET, maybe?


I've tried working in an FP style in the "multiparadigm" languages you've mentioned, but it's hard, really hard. Well actually, the irony is that JavaScript is the one that's bearable and that's because (1) you can find some Javascript libraries that are doing FP and (2) it's very flexible.

Static languages such as Java and C# are not OK though. For one, they lack features that are essential for statically typed FP. For example, higher-kinded types are an essential feature; without them you can't build libraries of reusable FP abstractions. Compare with Scala: without higher-kinded types you couldn't build Scalaz, or the newer Cats, or Shapeless [1].

The other problem is that FP programming languages come with FP libraries. You can't overstate the importance of having good implementations of persistent collections be the default.

On having "no side effects": that's not what FP is about. After all, without side effects you couldn't get any output out of a program. FP is about encoding those side effects in types and abstractions you can reason about; in other words, it's about controlling side effects. Haskell developers are fond of their IO monad. FRP is another way of doing that, and is particularly good at dealing with user interfaces. FRP-related libraries that were born by playing to the strengths of the mainstream languages we are dealing with include Facebook's React and Microsoft's Rx.
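A rough Python sketch of what "controlling side effects" can look like even in a multiparadigm language (apply_discount, checkout, and the 10% rate are made-up examples): keep the core pure and inject the effect at the edge.

```python
def apply_discount(order_total, rate):
    """Pure core: trivially testable, no I/O, no hidden state."""
    return round(order_total * (1 - rate), 2)

def checkout(order_total, write=print):
    """Effectful shell: the side effect is injected, so tests can swap it out."""
    total = apply_discount(order_total, 0.10)
    write(f"charged {total}")
    return total
```

In a test you can pass a list's append as write and assert on the captured messages, instead of letting the effect escape to the console.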

BTW, this is one of the best books on functional programming available today: http://www.manning.com/bjarnason/

[1] https://github.com/scalaz/scalaz, https://github.com/non/cats, https://github.com/milessabin/shapeless


I beg to somewhat differ: as far as purely functional languages go, it is about reification of side effects (e.g. via monads or uniqueness types) in such a way that you do get to use pure functions.

My personal definition of 'functional' is based on this lack of side-effects (or, more abstractly, on modelling computation as a declaration of relations between sets, which is what mathematical functions are), but it gets fuzzy once you leave the realm of purely functional languages.

If you prefer another definition for pragmatic reasons, I'm fine with it, it's just not my choice.


+1 for referencing the "Functional Programming in Scala" book. It is, as you rightly state, one of the best books out there regarding FP concepts and written in a very approachable style IMHO.


While HKT are pretty neat, they're hardly essential, and to claim that without them you literally can't build reusable FP libraries is ridiculous.


Perhaps it's more accurate to say there are well-known, pervasive, use cases that are not possible without HKTs.


Controlling state the way functional languages do makes code significantly easier to test. Besides this, functional languages have many great features such as strong typing, pattern matching and inductive types that have yet to be adopted by any mainstream languages.


Don't get me wrong; I think "no side effects" is a very useful way of programming for a large number of problems. But not all problems. In particular problems related to user interface and system integration.

And the languages I listed (JavaScript, C#, Swift and Java) have all stolen large chunks of ideas pioneered by functional programming and put them into their latest releases.


If you take a language like Haskell, you can utilize an FRP framework like Netwire to write pure-ish code whilst having reactive things. I'm currently writing an ncurses application that does exactly that.


Static typing is not a requirement for a functional programming language.


It's a requirement for being able to write library functions that work with custom effects, which is perhaps the biggest advantage of a functional programming language.


> (user interface, various business domains, integration with other systems over network etc.) are filled with state and procedures.

Procedures can be modeled as functions, state can be passed around, and the current user-interface hotness is also trending functional/immutable (Elm, React).

> bit like doing object-orientated programming in C with macros and a lot of conventions.

If you are forced to use C and macros, yeah, it is no fun.

But if you have tail call optimization, a good type system (even an optional one, like Erlang's), immutable data structure support, and monitoring and tools (Erlang VM), you can build large systems that handle lots of mutable state in a sane way.

It doesn't matter what the language or OO-heavy framework is (I've worked in Python, C++, C, C# and Java over the years): if there is a lot of mutable state stored in class instances, it is very hard to jump in and understand a new large code base. In something written in Erlang, by contrast, the state is explicitly passed around and minimized, and by looking at a piece of code it is easy to understand what happens, versus staring at a somethingSomethingManagerInstance with 100 methods and 100 private data attributes and trying to work out, when something breaks, how it got into that broken state.
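A minimal Python sketch of the explicit state passing described above (a stand-in for an Erlang gen_server-style loop; the message names are made up):

```python
def handle(state, msg):
    """Pure step function: (state, message) -> (reply, new_state)."""
    if msg == "incr":
        return None, state + 1
    if msg == "get":
        return state, state
    return None, state  # unknown messages leave state untouched

def run(state, messages):
    """Thread the state through explicitly; nothing is mutated in place."""
    replies = []
    for msg in messages:        # the loop stands in for a recursive server
        reply, state = handle(state, msg)
        replies.append(reply)
    return replies, state

replies, final = run(0, ["incr", "incr", "get"])
# replies == [None, None, 2], final == 2
```

Every state transition is visible in handle's return value, which is what makes this style easy to inspect when debugging.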

That's just my observation based on systems I have worked on over the years.


I keep seeing C# referred to as modern. Would someone please explain what is modern about it?


Why not just call the post: "Which Programming Languages are Pure?". What's the point of redefining Functional Programming in a way that is at best controversial?


A language can be purely functional, purely object-oriented, purely garbage-collected, ...

The sole specifier 'pure' is meaningless without context...


Maybe I'm missing something, but what do implicit "this" arguments have to do with functional programming?

> public String getName() { return this.name; }

> How would we purify this call? Well, this is the hidden input, so all we have to do is lift it up to an argument:

> public String getName(Person this) { return this.name; }

> Now getName is a pure function.

Isn't that just syntactic sugar? I would say that a real functional programming language enforces that functions cannot have side effects, not just that you have the ability to write such functions.


I think this mental exercise from Lambda the Ultimate holds true:

The venerable master Qc Na was walking with his student, Anton. Hoping to prompt the master into a discussion, Anton said "Master, I have heard that objects are a very good thing - is this true?" Qc Na looked pityingly at his student and replied, "Foolish pupil - objects are merely a poor man's closures." Chastised, Anton took his leave from his master and returned to his cell, intent on studying closures. He carefully read the entire "Lambda: The Ultimate..." series of papers and its cousins, and implemented a small Scheme interpreter with a closure-based object system. He learned much, and looked forward to informing his master of his progress. On his next walk with Qc Na, Anton attempted to impress his master by saying "Master, I have diligently studied the matter, and now understand that objects are truly a poor man's closures." Qc Na responded by hitting Anton with his stick, saying "When will you learn? Closures are a poor man's object." At that moment, Anton became enlightened.

Source http://c2.com/cgi/wiki?ClosuresAndObjectsAreEquivalent


Objects are arguably just syntactic sugar for passing a bag of state around (where some of that state might be function values). This becomes very clear if you've ever done OO in C (with a manual virtual function table).

A lot of the argument for "composition over inheritance", SOLID etc. is separating the pieces of state that specific object methods depend on. I don't think the Python approach helps that much in practice (since method calls look exactly the same), but it does make it much clearer where anything referenced in a method body has to be coming from.
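The koan, sketched in Python (make_counter is a made-up example): a closure-based "object" is just captured state plus a function table, much like a struct with a manual vtable in C.

```python
def make_counter(start):
    state = {"n": start}        # the "instance" is the captured bag of state
    def incr():
        state["n"] += 1
        return state["n"]
    def get():
        return state["n"]
    return {"incr": incr, "get": get}   # the "method table"

c = make_counter(10)
c["incr"]()   # 11
c["get"]()    # 11
```

Read it one way and closures are a poor man's objects; read it the other way and objects are a poor man's closures.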


> Maybe I'm missing something, but what does implicit "this" arguments have to do with functional programming?

They're implicit mutable state. Whether you think that has anything to do with functional programming is another matter, I suppose.


How is it syntactic sugar? In the second version the function is now accessing an argument rather than something outside its scope. That's pretty much the definition of a pure function (besides having no side effects).


It's alternate syntax, or, if you want to go there, you can say it's syntactic sugar. Take a few minutes to read the bytecode Java generates for something like person.getName() to see how this is handled at that level.

Further, purity is about expectations, not scope. Expecting something to be pure means I can reliably call it with the same arguments and always get the same result, with the additional expectation that it will change nothing about the world. A function that accepts no arguments can be pure but must always return the same thing. If the person's name is indeed immutable, then person.getName(), provided it performs no side effects, is perfectly pure.

In concept, other than brevity, invoking a constructor on an immutable object is not much different than the function returned by a higher order function such as this:

  (defn name-getter [{name :name}]
    (fn [] name))
  (def get-name (name-getter {:name "John"}))
  (get-name)


I would agree if getName was accessing a global variable but I don't see how getName(person) is any more functional than person.getName(). OCaml features objects with similar syntax for example: http://caml.inria.fr/pub/docs/manual-ocaml/objectexamples.ht...


It's not, and anyone arguing to the contrary is wrong. The article is way off base on this. You could argue it's purely syntactic sugar, but really it's barely even that: it's just alternate syntax. People arguing against this are failing to understand that different languages have different syntaxes.

person.getName() is the same as getName(person). In the former it's an implicit parameter, in the latter it's explicit. Or, looking at it differently, one is a syntax for functional calls that prepends the first argument as a target, and one is a syntax that accepts the target as the first argument.

With Java 8 method references, you can see this a bit more clearly. You could, for instance, do this to get a function without the implicit this:

  Function<Person, String> getName = Person::getName;
  getName.apply(new Person());
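Python happens to make the same equivalence directly visible (Person here is a made-up example): a method is just a function whose first argument is the receiver.

```python
class Person:
    def __init__(self, name):
        self.name = name

    def get_name(self):
        return self.name

p = Person("Ada")
p.get_name()        # implicit receiver:  "Ada"
Person.get_name(p)  # explicit argument:  "Ada" -- the very same call
```

Same function, two syntaxes for supplying the first argument.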


Because you can write person.getName() like this:

  def getName():
    this.setName('foo')
    return this.name

There's nothing in principle preventing you from doing that in the "functional" approach but if your data is immutable then you can't do it.


  def getName(person):
    person.setName('foo')
    return person.name
The problem, from a functional viewpoint, isn't the implicit argument, it is the fact that you can write setName.


Really? I thought implicit arguments were a strict no-no in functional programming.


Implicit arguments are still arguments. It is just syntactic sugar, and I think syntax changes cannot change a functional language into a non-functional one.


Friendly reminder that all programming languages are just tools and it's not productive to get worked up about which one is better than others or which arbitrary label makes them more valuable.

Except PHP. PHP still sucks.


I half agree. A hammer and a screwdriver are both just tools, but they're good at different things. It's good to know about the variety of available tools and the sorts of tasks they're good at.


Sure, but in many situations the tool you already have and understand trumps the theoretically better suited tool you have to pick up at the hardware shop and train your team to use.

The logical extreme would otherwise be to switch to a new tool whenever one comes out that is objectively better at a given task, retrain every team member who was previously using the old tool, and rewrite the code that relies on it.

This balance is called pragmatism and it's more important in the real world than the objective quality of tools you don't already have in your toolbelt.


PHP Lives Matter!


Was with you till the second sentence.


That was the joke.


Standard reminder: please remember OCaml and F#.

OCaml may be a little obscure in the mainstream programming community, but there are a ton of folks using F# out there.


There are a ton of folks using F# out there? Would be nice to hear about them.


Some testimonials: http://fsharp.org/testimonials/

Don't have time to do a full internet report for you, but if you're interested, the way to tell is simply to google various programming problems with F# instead of C#.

Five years ago you could mostly find C# code and convert in your head. Now about 80% of the time you can find what you're looking for in F#. That's a ton of people.


It's worth mentioning Elm when talking about making Javascript functional. To me Elm is like the fun sibling in the Haskell family.

Signals are way easier to understand (for me personally) than monads. I just wish it had better Javascript interop. Making ports for everything to talk to an existing API (like three.js) is a deal breaker.


I feel the same way about Elm. I am curious about your comment on ports, though. I'll admit I haven't used ports for connecting with three.js, but I have used them to tie in with existing Javascript code during our conversion process, and I thought they were pretty nice. What makes the combination of three.js and Elm ports a problem? Is it just the number of them you have to write, or is there some other pain point?


Yeah, essentially it's a matter of scale. three.js is not a small library by any means and I'm using a lot of its features. I tried to use https://github.com/seliopou/elm-d3

as my guide. I think it just comes down to "I don't have time to rewrite bindings for an entire API right now" because ports require you to really understand what's going on. There's no shotgun method of converting an entire API.

Another reason I gave up is that Purescript does have three.js bindings.


Yeah, that makes sense. Thank you for the information.


"Functional" is just a word, and I don't think it's particularly helpful to declare "language X is not functional". map/reduce, lambda, types, and control of effects are all valuable, and no single one absolutely characterizes functional vs. not.


Hmm, I've always felt the true nature of FP is having higher-order functions. That's what starts to make things fun. If you need a long keyword to introduce a lambda, or if partially applying and passing functions around is ugly, then you lose a lot of the benefits and cannot truly embrace HOFs. In a typed language, a lack of generics or other type limitations puts a major cramp on this too.

Yeah, this means C function pointers allow some FP, except they're clunky and have a hard time bundling data with them (no closures). This makes HOFs sort of a pain.

Controlling state, or at least immutability, is just a generally good idea. Even in rather non-FP code I wish I had easy immutability.


I take issue with the critique that Java is not functional. Old-style Java uses mutable objects, and yes, that code is not functional. However, it is now standard (and good) practice to make objects immutable.


IMO the essence of programming is the definition of abstract interfaces and the composition of components within a context according to those interfaces. The essence of functional programming is the composition of components that create their results without modifying their execution context in any way that could be detected by a failure of referential transparency.

With that definition, functional programming is a style, not a class of languages. An assembly routine that computes a cosine is a functional program. I typically write C/C++ code in a functional style, constructing and returning new data structures rather than modifying arguments in place, because it's so much easier to test, use in parallel code, and reuse later.

So I don't think that strong typing, higher kinds, higher order functions, algebraic types, pattern matching, anonymous closures, polymorphism, or garbage collection, e.g., are necessary to functional programming (although they're all wonderful and Haskell seems to have collected just the right mix of features together into a language that I love using whenever I can). It's about whether or not your components have interfaces that are like mathematical functions.


> [Perl] has a magic argument, $_, which means something like, "the return value of the previous call".

What?!


I don't know whether you recognize this as wrong, or are surprised at the thought of such a thing (which doesn't actually exist like that).

In any case, I've suggested a correction here: http://blog.jenkster.com/2015/12/which-programming-languages...


$_ isn't really global though:

    perl -e 'for (1,2,3,4) { for (5,6) { print "  $_\n" } print "$_\n" }'


It may look like it is not in loop constructs, but it is. `for` and friends just apply `local` scoping to `$_` internally.


I recognize this as wrong, and am surprised by how he could have gotten that idea. Good job with the correction.


Well, try to explain $_ in a short sentence without being wrong. "It's the perl equivalent of the pronoun 'it'." OK, now try explaining it in proper engineering terms that describes exactly what it is, how to use it, and in some sense, how to implement it, in about that many words.

At the very least, I suspect honesty will compel you to admit that either A: that took you longer than you expected or B: you left out a lot of corner cases, especially when modifying $_ does or does not change the thing it does or does not reference, and how it interacts with local, and, and, and....


Here you go: the default variable for passing a string argument to many built-in functions and operators, which typically return their result in the same variable.

Yes, it took a little longer. No, it didn't take that much longer to leave out the plainly invalid and misleading description. No, I didn't need to put corner cases in there, in exactly the same way you wouldn't specify corner cases for Yacc to describe what it does.


Ok. $_ isn't trivial to understand, let alone explain. But nobody's claiming that. Well not me at least.

What irked me is that's just about all the author's got to say about Perl in the article. Other languages inherited this "feature" (Ruby anyone?) or equivalents, but no mention of that. Why even mention Perl if he's got nothing (solid) to say about it?


I did in the linked comment in roughly the same space as the original description. The only thing that could maybe have been included is the scope restriction afforded by local, but that's a feature available to all variables and not necessary to mention.

I also did not take longer than expected, nor did I leave out any corner cases that affect day-to-day work, and I am not aware of many of them existing in the first place.


Oh lord... I envy your innocence. This isn't even the whole story. There are like ten more of these (good luck remembering which one does what). They are all something like $., $/, $" etc., and they're all global.


They're all documented and explained in https://metacpan.org/pod/distribution/perl/pod/perlvar.pod, all have English names available, and all but a small handful only exist for backwards compatibility at this point.


I would never use them in my code. The problem is when I'm debugging 5 year old code it becomes such a headache.


I thought that the point of $_ was that you don't reference it directly in your code?


It is!

But sometimes you end up using it to make code more reliable, as in:

    do_thing($_) for @things;
Where $_ is passed in as an argument, as opposed to:

    do_thing for @things;
Where $_ is (due to its global nature) set by `for`, and read in sub do_thing { ... }


> I would never use them in my code.

Most people don't. The only one that is used a lot is @_. :)

As for debugging old code, that's why I pointed out perlvar. It's never far away and it's always easy to look up what a specific var does.


Well I end up doing that of course, but I'm just complaining that it gets really annoying really fast seeing all these dollar signs peppered in the code, making it super unreadable.


> Lisp is the oldest functional programming language, and the oldest dynamic language.

But Lisp is not a functional language! It's based on the concept of CONS cells, which are mutable and allow for shared structure. Lisp is the #1 language that everyone thinks is functional when it's actually not. I'm surprised this article, of all articles, made this mistake.


Well, Lisp is based on the Lambda calculus, which is what we've come to know as the basis for functional programming. Things are much more overtly functional when you look at certain members of the Lisp family such as Scheme.


Would Scheme count as a functional programming language by this definition?

On one hand, the idiomatic emphasis on tail calls (supported by tail call optimization in the language) makes it easy to avoid stateful for and while loops.

On the other hand, most nontrivial code has plenty of `set!`s.


Scheme is a multi-paradigm programming language. Functional is one such paradigm that it supports. It also supports imperative, object-oriented, relational, and probably others that I don't even know about.

In Scheme, one can build functional systems on top of imperative building blocks. For example, memoization can be implemented as a higher-order procedure that uses a mutable hash table as the cache. A memoized function is still pure, despite the use of local state, because it retains the crucial property of referential transparency.

Anyway, I guess my point is that the use of 'set!' or 'hash-table-set!' or whatever else doesn't mean that a program is no longer using the functional paradigm. To quote from a Clojure web page[0], "if a pure function mutates some local data in order to produce an immutable return value, is that ok?"

[0] http://clojure.org/transients


I can find points I disagree with on every language here, but this author seems to have quite the bias against Java, so I'll make a few counterarguments.

"If you write Java code that has no side-effects, that doesn't read or change the local object's state, you'll be called a Bad Programmer."

No you won't.

"Java thinks that localised side-effects are cornerstone of good code"

Java doesn't think anything. It's a tool.

"Unfortunately, in practice Java doesn't just try to encapsulate side-effects; it mandates them"

No, it doesn't.

I will say I've worked in Java shops and seen the language used in this way. I've also worked in Java shops and seen people who have read and internalized the messages in Effective Java, especially with regard to handling state, and the codebase looked nothing like what is described in this article. I recommend the author take some time to read that book and keep reminding himself that Java is a generalized tool that can be used in many ways.


All the ones that aren't dysfunctional.


Argh. One second after clicking downvote, I got the joke...


Jerk :-p



