Functional programming is not popular because it is weird (2016) (probablydance.com)





Languages do not become successful due to their intrinsic qualities. Languages become successful when they are coupled to a successful platform. E.g. C became popular because Unix became popular, JavaScript became popular because of the browser, Objective-C became popular because of the iPhone and so on.

Therefore observing that a language or paradigm is popular or unpopular does not say anything about whether it is good or bad. JavaScript is the most popular language in the world. If Netscape had decided to use a Scheme-like language or a BASIC-like language it would still be the most popular language in the world. So paradigm has nothing to do with it.

Functional programming is less popular because no major successful platform has adopted a functional language as the preferred language.

I don't buy that functional languages are unpopular because they are unintuitive. Lots of stuff in JavaScript is highly unintuitive. It didn't prevent it from becoming the most popular language in the world. People will learn what they need to learn to get the job done.


>Languages do not become successful due to their intrinsic qualities. Languages become successful when they are coupled to a successful platform.

That doesn't seem right. There are plenty of languages that are born from platforms, but I'm skeptical that it's anywhere near the majority. Some platformless examples off the top of my head:

- Java

- Rust

- Python

- C++

- Go

- Lua

In my opinion: C is not popular because of Unix. Unix is popular because it's written in C. The same is arguably true of Kubernetes. Neither Docker nor K8s drove mainstream adoption of Go. Go did that with its own properties, and the same is true of Rust, C, etc.

> Functional programming is less popular because no major successful platform has adopted a functional language as the preferred language.

The question here though is why?

The author argues because FP is unwieldy for the general software case (which I assume is enterprise CRUD apps). And I have to agree: State management is a huge part of CRUD apps.

------

EDIT: I am not presuming K8s is popular because of Go. The argument here is that the success of the platform and the choice of language are related, not directly consequential. K8s could have been written in Rust and would probably be as popular because of its feature set.


Java was the only way to write applets and code for feature phones early on. Nobody thought of writing application servers in Java in the early '90s; Java was supposed to be the way to write code for the internet of things.

C++ was the easiest way to write kinda-OO code and use C libraries. Then it became de-facto standard for gamedev and desktop app programming.

In Linux world it's still the case that if you want to write a desktop app you should write it in C/C++. Use any other language and you will struggle against the dependency management and package managers forever.

Python is the only clear example of succeeding without a platform out of these languages, and the fact that after decades Python is still less popular than PHP shows that historic accidents and having a good platform are more important than any quality of the language.

On the other hand Perl is dead so maybe it's not as bad :)


For a lot of users, Python is its own platform. As a so called "scientific" programmer, when I'm programming, I'm very much "inside" Python. I don't need to know or care very much about the details of the platform that it's running on. I start up my IDE or Jupyter and the rest of the system melts away.

This may be because Python lends itself to being used in integrated environments such as Jupyter.


>Java was the only way to write applets and code for feature-phones early on.

That's not how it got its popularity. Applets died very soon (the novelty lasted like 1-2 years), and feature-phone apps were never a big thing. Unlike e.g. mobile apps that quickly overtook the desktop, feature-phone J2ME apps at the time were a peanuts business (and there were other ways, like native APIs from Palm, BlackBerry, and Windows mobile edition - of yore, not their post-iPhone smartphone OS).

Java, even back in 2000, was big in the enterprise space and remained so (which is why Sun quickly emphasized that and let the applet sdk languish).


> In Linux world it's still the case that if you want to write a desktop app you should write it in C/C++. Use any other language and you will struggle against the dependency management and package managers forever.

I do agree in a very general sense, but there are definitely a few good options out there for desktop software.

For example, Lazarus/FreePascal is one of the best solutions for writing GUI apps even nowadays. It's a shame that it's dead as far as market share is concerned, even if it has a community around it, is open source and still receives regular updates with pretty good platform support.

Though I guess you could say the same about any technology stack that produces executables that are for the most part statically linked and don't complicate dependency management.

Of course, Java with Swing also hasn't disappeared anywhere and is still perfectly capable of producing most desktop software as long as there is any sort of a JDK or JRE on the device. There's also JavaFX/OpenJFX which is supposed to be more modern, but I've experienced more issues with it in comparison to Swing.


For all that, you can still get an awful lot done in a dead language!

The (excellent) libraries are python's platform

I would have avoided python if I could... But I can't avoid numpy/scipy etc etc


Was Python's package management unusually good compared to other languages at the time?

Depends on the time, CPAN was far better than setuptools. It was even better than dealing with RPM, early on.

RubyGems was the package manager that was really amazing and changed everything IMO. pip didn't even exist until almost a decade after that.


Python took the shell scripting platform from Perl just like PHP took the web scripting platform. It then extended to web servers (e.g., Django as an alternative to Ruby on Rails) and ML, but that was later.

For Java, the JVM was the "platform". I still remember learning Java in university - what left an impression on me was that my first ever university homework where I used Java (i.e. a non-trivial project) ran perfectly on first successful compilation [+]. GC is a huge help, especially if you don't use smart pointers (which were far less prevalent at the time).

[+] to clarify, I was a somewhat experienced programmer, this was one of the final year projects, for distributed systems or parallel computing I believe. A simple system without doubt, but still, a system not a "hello world". Something that I wouldn't have expected to work on first try, had it been written in C.


Yes. I like this in Java, and in Rust, and it's annoying for me in languages like Python where you mostly don't have this. I want the compiler to tell me my program is nonsense, so then I can fix it, rather than wait until the program has done most of its work and then, oh, did you notice this needs to be an integer but you provided a string? Sorry, program crash, fix the bug and run the whole thing again.

But of course although it's the default in Java, and not provided out of the box in Python, I also wrote some web framework stuff that adds these runtime errors to Java (by using Reflection to make decisions at runtime instead) and I've seen Python code with more type safety that would tell you earlier that there's a problem. So it's ultimately not only a matter of programming language although that does definitely set the tone.


I think platform in this argument is meant to mean something users obtain and install apps (or websites) into. The JVM is not a platform really, it's a runtime. For a while the JRE was sort of a platform because users downloaded it separately, but that hasn't been true for a long time and yet Java is still popular.

It is almost forgotten now, but the platform which carried Java to critical mass was the browser. Java successfully pivoted to server-side, but initially it was considered a client-side language.

C++ was the preferred application development language on Windows.

Python is popular because of the numeric and ML ecosystem.


I would say that in the enterprise software world Java was an irrelevant toy before the pivot to server side. At that point though, whoosh stratosphere

I'm not sure that's quite right. I feel like there was a decent span of time where many desktop apps were written in Java (with AWT or Swing), particularly enterprise apps where you valued development speed and ease of deployment more than a slick UI or high performance.

My $JOB is maintaining such a desktop CRUD app that's been in use for the past 20 years. It uses Swing (via a proprietary higher-level framework) for its GUI. Over the years it has accreted a few dozen services (also in Java) in its orbit, but the central GUI app and its database still remain and continue to be extended as new requirements arise.

Python is popular because they teach it in US schools and colleges as the next best thing after BASIC in terms of ease. During the rise of ML, Python became a common denominator among mostly US-based scholars.

> It is almost forgotten now, but the platform which carried Java to critical mass was the browser.

I'm still confused by the cycle of

- You can run Java applets in a web browser!

- Ew. Time to stop doing that.

- I've had a brilliant idea! We need a bytecode-based VM in the browser so a web page can run arbitrary code.

- I'll call it "WebAssembly".


Other than security, the other bad thing about applets was that they were very slow to start, and required a Java runtime that did not come bundled with the browser - you had to download an installer for it manually, install it, and then the thing was constantly nagging you about updates (and, IIRC, the updating agent or whatever it was that sat in my tray noticeably degraded the machine's performance even if nothing was using JRE).

Here:

- You can run Java applets in a web browser!

-> Yeah, so? They are all crap, slow to start, big on resource bloat, slow to draw, and don't offer much.

-> Ew. Time to stop doing that.

Then 15+ years passed, during which JS became the dominant way to build CRUD/Enterprise/form-based/and more apps, got big features (from WebRTC to an embedded DB, and from accelerated canvas and 2D graphics to MIDI). And it even hit big on the server too.

Then: - I've had a brilliant idea! We need a bytecode-based VM in the browser so a web page can run arbitrary code.

WebAssembly is faster than Java applets were back then, has easier ties to the DOM for UI (as opposed to being a sandbox), can be used to supplement conventionally written web apps, and is not tied to a single company.

So, quite a lot of differences due to time, and also in the characteristics of the technologies involved.


Well, it’s not hard being faster than 20 years old tech. Also, webassembly still doesn’t have proper bindings to the DOM (and seriously, at the time we could be happy that a static DOM could be displayed), and java is one of the very few languages that actually has a proper specification allowing independent implementations, and it is not just saying that “the spec is whatever code we write”.

So these comparisons frankly make no sense as every such shortcoming could have been easily fixed in 20 years.


>Well, it’s not hard being faster than 20 years old tech.

Yes, but it was even harder to make Java applets run at any acceptable speed in 1997, which is what mattered for their deprecation. Understandably, people didn't just say "let's suffer them for 10 years until the hardware catches up to make them tolerable".

>Also, webassembly still doesn’t have proper bindings to the DOM

Yes, but still better than what applets had :-)

>So these comparisons frankly make no sense as every such shortcoming could have been easily fixed in 20 years.

It would make even less sense for people in 1997-1999 to stick with applets because "such shortcomings will be easily fixed in 20 years". People use what works now, or at least offers a serious advantage despite the shortcomings. Java applets then didn't offer much.


> Understandably, people didn't just say "let's suffer them for 10 years until the hardware catches up to make them tolerable".

It’s funny because that is essentially what happened. Other than Flash (which suffered essentially from the same shortcomings as applets, but at least had a productive environment to create them), there was nothing that replaced these technologies for many years to come. Canvas rendering came much later, and only with much more recent browsers did it have acceptable performance. It is no accident that many people long for the old web, which was strangely more interactive in some cases than what we have today.

Don’t get me wrong, java applets were shitty. But we sort of threw it all away instead of fixing it, which in hindsight seems to have been an easier road (since the JVM has always been the state of the art runtime, and JS engines had to catch up from zero). DOM integration into an object oriented language would have been much easier than what webassembly does through JS bindings, and lack of security was frankly more of a mindset back then, and the jvm could have been sandboxed just as “easily”* as js engines are.

* it is a hard thing to do, but it happened to JS engines because the money was there. Integrating the JVM with proper sandboxing would have required less money/energy.


The problem with Java applets was not the use of bytecode or a VM. The problem was that it was slow to start, so you looked at a grey rectangle for a long time before anything happened. Flash was much snappier, which is why it won out over Java. Flash also had better development tools targeted at multimedia and games.

Correct me if I'm wrong, but I think the problem with applets was that they were too powerful and it took eons for vulnerabilities to be fixed.

Java has all the security logic INSIDE THE VM.

That's the big difference.


- Now you can go back to run Java applets in a web browser!

>It is almost forgotten now, but the platform which carried Java to critical mass was the browser

I think that's backwards. I was there. Aside from a few e.g. banking and government applets one was forced to use and a couple of exceptions, applets never got anywhere and died fast.

Enterprise Java became what Java is all about very very soon. Java landed in 1996. Servlets/Tomcat landed in 1998.

By 2000 there was plenty of enterprise Java development - in fact that was the year MS created its own Java copy, in the form of C#/.NET after having tried to extend their version of Java for the Windows platform.


Back when Flash was called Shockwave, it was competing with Java applets for browser games. It did lose, but it was out there.

Minor nit, Shockwave wasn't the same as Flash. All three plus ActiveX overlapped in their time on the market, which is hair-raising to think about.

Was Shockwave a platform for Flash then, or something?

I don't know any technical details here, but from a user perspective Shockwave did turn into Flash. The Flash file extension "swf" even stands for "ShockWave Flash".


It was two separate products with separate origins. Macromedia had Shockwave and then they purchased Flash (which was called Splash then) and branded it Shockwave Flash.

Shockwave was somewhat similar to Flash, but it was developed for the "multimedia" CD-ROMs, which meant the files were generally too big for online use. Flash was much more compact, which is why it won out.


Shockwave = plugin for playing Macromedia Director content. Director was a GUI builder for interactive multimedia apps which were programmed in a custom language called Lingo.

Flash was originally called FutureSplash, if I recall correctly. When Macromedia bought it they rebranded it to fit their general branding theme, hence the confusion. Flash wasn't really intended to be an app platform, it started out as a vector animation format, but later they added scripting using a dialect of JavaScript called ActionScript.

Most users ended up with the Flash plugin but not the Shockwave plugin. Macromedia Director was huge in its day - my first programming job involved writing Lingo - but it died out pretty quick when the internet started taking off.


> Flash was originally called FutureSplash

So had the renaming taken place now instead of twenty years ago, it would be called F7lash instead of just Flash.


Shockwave was Flash's larger brother, feature-wise. Shockwave stuff could contain Flash files, but Shockwave could do things Flash couldn't, or could do them earlier. (And if I remember correctly Shockwave came first, and when Macromedia acquired it they then built Flash.)

Notably, Linux support for Shockwave was missing.

I don't think ML was what it is today when python started.

Right, definitely not.

From my perspective, Python became popular as the simplest and friendliest general scripting language along the lines of Perl or TCL, and the easiest for somebody familiar with other languages like C or Java to pick up.

If I wanted to do something just a bit too fiddly for a shell script, I'd reach for Python (and still do, although I've recently started using Node too).

Perl initially became popular as the best text-wrangling language for CGI scripts and the like, but (I think) was a bit too weird to be a general-purpose hit for the masses like Python. Likewise Tcl/Tk was great for whipping up quick UIs, but both parts seemed a little too weird to stand on their own.

Ruby was another strong contender; I think Python was already too well-established to be displaced by Ruby, but Ruby found its audience via Rails (which I'm guessing is better than Django).


Rails was the first of that type of web framework. Django and all similar frameworks are Rails clones. At that time Rails was the killer app for Ruby. Unfortunately for Ruby the other languages were willing to put in the effort to copy it, and Rails is no longer a unique advantage.

> Django and all similar frameworks are Rails clones.

Hmm, are you sure? I’m not saying you’re wrong, because I’m not sure myself! I heard of Django before I ever heard of Rails but that doesn’t mean much.

Wikipedia suggests they were released at around the same time -- Django started slightly earlier, Rails open sourced earlier.


Zope was the leader in python long before Django or rails. Since 1999

ColdFusion and JSP and PHP were dominant for a long time before Rails as well.


Aha, yes, I was getting mixed up between Zope and Django!

Just a minor nitpick: Rust still has miles to go before it catches up with Python or Go or Java, so it's not yet in the same league. Most of the Rust crates today depend on C, so it's not going to replace C in the next 2 decades, but will still depend on C.

Hopefully it will start replacing C going forward but in that space Rust is competing with Zig and Nim.


> Zig and Nim

Two languages almost nobody uses. It doesn't seem likely it's actually competing with them in that case.


You hardly hear about Zig or Nim. Are they actually being used in production?

I do not know about Zig, but Nim is used in production at multiple companies - biotech, cryptocurrencies, finance, had an attempted commercial video game app, etc., etc. See here [1] for example.

[1] https://github.com/nim-lang/Nim/wiki/Organizations-using-Nim


Rust isn't competing with Zig and Nim just like C isn't competing with Rust.

> And I have to agree: State management is a huge part of CRUD apps.

I see this point come up often, but it doesn’t really line up with my own experiences. I find the explicit state control in functional languages makes it much easier to write enterprise CRUD.


CRUD is about state, but typically the mutable state is in the database, not the application layer.

UNIX became popular because it was free beer with source tapes.

Had UNIX been written in PL/I, Mesa or BLISS, those languages would be "C" today.


Oh, how I miss BLISS. It was the first language I learned that got rid of the silly distinction between statements and expressions. What a revelation! Everything was an expression. Everything returned a value. I guess a bit like Ruby, but 40 years ago.

https://en.wikipedia.org/wiki/BLISS

https://www2.cs.arizona.edu/classes/cs520/spring06/bliss.pdf


> Everything returned a value. I guess a bit like Ruby

Like C, isn't it? Assignments return values! Still feels weird AF to those of us who started on BASIC and Pascal.


Elixir possibly has some of this flavour. If and switch are both expressions and return values. I dig it.

I think it is that way in almost every FP language (eg. Scala, Haskell, Kotlin, etc)
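To make the "everything is an expression" idea concrete, here is a rough JavaScript sketch (the variable names are made up): assignments yield values, as in C and BLISS, and a ternary plays the role of an if-expression, even though JS has no true block expressions.

    // Assignments are expressions in JavaScript, as in C and BLISS:
    let a, b;
    a = (b = 5) + 1;                        // (b = 5) evaluates to 5, so a becomes 6

    // JS has no if-expression, but a ternary stands in for one,
    // roughly like `if` in Elixir/Scala/Haskell where branches yield values:
    const size = a > 3 ? "big" : "small";
    console.log(a, b, size);                // 6 5 "big"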

> Go did that with it's own properties

Go got popular because it was a Google thing and everyone fanboyed hard over that.


While Google's support was definitely a factor, Go also had some important language features going for it. Most importantly it is targeting a relatively empty niche in the programming language landscape, i.e. that of high performance, close to the metal languages with little performance overhead, while still being easy to write. "Easy to write" for 80% comes down to being memory-safe, unlike C/C++. If you're in that niche, you have few alternatives. The other options for mainstream memory-safe languages are interpreted scripting languages and Java-style jitted languages. Go easily beats both in resource consumption without being much harder to program in. Rust isn't really comparable because while it is memory-safe its memory management system still forces the programmer to think about the memory and resource usage, thus being slower to program in.

I have always used Pascal for that

But no one else does


Go is just shitty Pascal, change my mind.

On a more serious note: I decided to read the Delphi documentation recently because I’m old enough to have heard a lot about it, but not quite old enough to have written anything in it. It had discriminated unions. It did! I can’t imagine my life without them; I write stuff exclusively in OCaml-like languages, so the only question in my mind is “how the hell did we manage to go backwards?” It’s so weird.


Go has some restrictions, but all in all it's a great little pragmatic language, which solves a lot of practical problems (utf-8 strings, concurrency, garbage collection, cross compilation, single binary deployment, performance, readability) and which I can easily keep in my head as opposed to most of the other languages I've used in my career.

In that way it's like a Delphi for the modern world.


No, Free Pascal is Delphi for the modern world.

Of course it had. Turbo Pascal had, AFAICR. (Probably because, at a guess, Wirth-standard Pascal did; though I'm less certain about that.)

How is that different from eg. D? Or the myriad other GCd AOT compiled languages?

Did google still have fans among the technical crowd when golang launched? I'd expect Rob Pike and Ken Thompson to be the fanboy-attractors rather than google.

What happened to Dart, then?

It got betrayed by Google itself when the Chrome team dropped DartVM support.

Then the AdWords team, which had just gone through a GWT => Dart rewrite, kind of rescued the project from closing down after the key designers left.

Finally Flutter team kind of gave a new life to Dart, which made Dart only relevant in the context of Flutter.


It targeted the front-end, which lots of programmers don't give a shit about.

Java and the JVM had considerable marketing investment to become a popular choice among the enterprise segment.

Python was taught in many universities for its flat learning curve, then started to be used in academic research.

Even without an unrelated platform, like JS and the browser, they had something to catapult their adoption.


> Python was taught in many universities for its flat learning curve, then started to be used in academic research.

Python was used in academia long before it was used for teaching. “Courting” scientists was done from extremely early on in the language history (matrix-sig was founded in 1995) and the “extended” indexing added at their request (auto-tupling and slicing predate PEPs as those were introduced for and by Python 2).


The Spark data processing platform was/is popular, and is written in Scala, a.k.a. FP.

Java & Go are sponsored by megacorps; their success or otherwise has little to do with the language's strength in isolation. They are more examples in favour of successful platforms being leveraged to promote a language.

> If Netscape had decided to use a Scheme-like language or a BASIC-like language it would still be the most popular language in the world.

Brendan Eich advanced a Scheme-like scripting language for Netscape, which got a facelift from Java’s popularity.

“Scheme was the bait I went for in joining Netscape. Previously, at SGI, Nick Thompson had turned me on to SICP.”

“The diktat from upper engineering management was that the language must “look like Java”.“

“I’m happy that I chose Scheme-ish first-class functions”

JS actually made a few functional concepts mainstream, such as first-class and anonymous functions, and callback patterns.

Source: https://brendaneich.com/page/5/


Two counter points spring to mind:

Python - which took Perl's lunch over a decade or so and now in more recent history became the obvious language when data scientists and AI/ML practitioners needed to adopt a platform. Its popularity has even led to it expanding into education in a huge way after Python became the most popular scripting language.

Java - the JVM is an utterly amazing piece of engineering, and yet to this day most people still write Java instead of the other, better languages (Kotlin, Scala, Clojure) hosted on that platform, and they deploy only to Linux environments. The platform, with its write-once-run-anywhere and industry-leading garbage collector that gets better than 50% memory utilisation, isn't the thing that attracts people - it's the language.

Functional languages used to be the cool thing before object-oriented design came along and unlocked the ability to build bigger systems. The ultimate programming language (Lisp) has always encouraged functional style since, what, the '60s?


I would argue that python's popularity comes with numpy/pandas, jupyter notebooks, and the recent popularity of AI/ML/data science. Yes it's useful in other areas (flask and django are somewhat popular) but nowhere near as popular as its use for ML.

Scala actually became MUCH more popular with Spark. I think the original point stands - the platform pulls the language.

Java is popular due to the jvm - it was the first jvm language after all. It got popular before others like scala & clojure managed to get off the ground, at all.


> I would argue that python's popularity comes with numpy/pandas, jupyter notebooks, and the recent popularity of AI/ML/data science.

Python was hugely popular long before these were a thing. It was already seen as one of if not the most beginner friendly language at the beginning of the 2000s. It became the language of choice for ML because it was already popular with beginners not the other way round.


Yes, before Django there was Zope.

>> Java is popular due to the jvm

That's not how history played out at all. Java spent much of its life being pitched as a way to make C++ developers more productive.

The jvm was actively hated by many for years - slow startup times, excessive memory usage, it didn't used to enjoy the rich tooling ecosystem it has now and it was seen as opaque and hard to tweak for performance.

To this day it's not that hard to find Java devs who want off the hotspot jvm - whether that's to go onto other JVMs with different tradeoffs (Azul or whatever) or whether to go native compilation (GraalVM) instead.


I wrote another comment where I explained "popular due to jvm" better - the productivity gain was real, but came from the GC which was....JVM. No, the JVM wasn't hated initially - not until it became hugely popular and people started pushing it to the limit. Or well, at least that's how I remember the history, I might be wrong; human brains are notoriously fallible and I'm too lazy to search and validate/confirm my version of it :)

Java nowadays is mostly used for business applications, where developer productivity is more important than runtime performance. Java being memory safe is a huge part of that, which is the JVM platform. Java the language had the major feature of being superficially similar to C++, that helped take over the business market but is now no longer relevant. When Java was first marketed as a business application language there were few competitors. Nowadays there are, but nowadays the major reason to choose Java is because it is entrenched and good enough.

However all the Java shops I hear about are also looking into Kotlin at some level of adoption. From all the new JVM languages Kotlin integrates the best with the existing Java environment and for displacing a language in an existing niche an easy upgrade path is the most important. So I think Kotlin will become an important language in the JVM ecosystem, it will just take a long time because these types of businesses are conservative in their tech choices.


> Java nowadays is mostly used for business applications, where developer productivity is more important than runtime performance

While both of these things are true, they are not connected in the way you imply, since Java is a pretty low-level language by today's standards.

In the Java school of business app engineering, writing the code is rarely a big part of the effort, so it doesn't matter if the language is not very good or expressive. Java wins at having a big commodity-like labour pool of programmers, and there's a lot of inertia and stability in the platform.

There are of course a lot of people who use more expressive and creative tools in making business apps, like eg the many companies using Clojure, Scala, Ruby, Python etc for them, so it's not the only way to skin the cat.


> Java nowadays is mostly used for business applications, where developer productivity is more important than runtime performance

I fail to think of any other platform that could run these monstrous CRUD enterprise apps as fast as the JVM can. Sure, C++ can be written to utilize hardware better, but with all the classes and interfaces around and everything being virtual, a good JIT compiler can skip method lookups in a way AOT-compiled languages cannot.


Kotlin's future is tied to Android.

On the JVM it is like trying to replace C on UNIX.


Both of my last 2 large bank gigs (kind of the last places you'd expect cutting edge tech) were going all in on Kotlin. New projects were Kotlin only, and there was active work on sunsetting/migrating Java applications towards Kotlin. None of these were Android applications.

Sure, this is anecdotal. But I'd say the same of Java's dominance in the JVM space. Java's continued dominance is not a sure thing from my vantage point.


JVM is written in a mix of Java and C++, let me know when they start rewriting it in Kotlin.

Groovy was all the adoption rage across German JUGs back in 2010, then everyone was going to rewrite the JVM in Scala, or was it Clojure?

Now a couple of places are adopting Kotlin outside Android, nice, eventually will migrate back in about 5 years time.

https://trends.google.com/trends/explore?q=%2Fm%2F07sbkfb,%2...


> JVM is written in a mix of Java and C++, let me know when they start rewriting it in Kotlin.

This is less relevant today. The host blessed languages do have an advantage, but I would not say it is insurmountable. It might have been the case in the past, but the modern JVM is a platform, it is no longer a glorified Java language interpreter.

> Now a couple of places are adopting Kotlin outside Android, nice, eventually will migrate back in about 5 years time.

Maybe. Maybe not. Most developers I talked to that have experienced the transition do not want to go back to Java.

This isn't to say Java will die. It will continue to thrive. But Java dominance (on the JVM or as a whole) isn't a sure thing anymore.


There is a functional programming language that is coupled to a platform, that most people on HackerNews wouldn't think of: the M language for PowerQuery. It is used to make ETL pipelines for Excel (and I think other Microsoft apps?). It is very popular, although among people who don't consider themselves "programmers" or "software engineers", but rather analysts.

https://docs.microsoft.com/en-us/powerquery-m/

In fact, Excel itself can be considered a functional, reactive programming environment, and if we grant that, then it is the most popular programming language on the planet.


It has just occurred to me that Excel is an excellent example of a functional language to use to explain to people who don't understand what functional languages are.

> If Netscape had decided to use a Scheme-like language or a BASIC-like language it would still be the most popular language in the world.

If whatever Netscape added as their scripting language had been too weird I think there is a good chance that a competing browser would’ve implemented a less weird language and that such a less weird language would have won over the hypothetical too weird language.

For me, Objective-C is too weird so I never bothered with it but I love Swift and only after Swift came out did I start making apps for iOS. And that’s even though I had been wanting to make mobile apps for iOS for a long time.


Microsoft introduced VBScript support in Internet Explorer. VBScript was a variant of Visual Basic and therefore a lot more familiar than JavaScript. VB was already used in MS Office and a whole generation had learned programming starting with BASIC. But it didn't matter.

Nobody would use a language which wasn't supported by the browser with the largest market share, regardless of the merits of the language.


I disagree, at school we studied both C and LISP during the same semester, writing a lot of similar things in procedural vs functional styles... I can assure you that most people (who didn't have any programming exposure) really preferred C.

This data point is confounded by lisp's weird syntax, I would expect people not used to programming to hate the excessive indentation and nestedness that lisp's syntax does to expressions.

From my memory, it was much more the equational thinking required that was the hard conceptual leap. Most people are really not used to it.

> JavaScript became popular because of the browser

I would also say it is mainly due to Gmail's and Google Maps' extensive use of XMLHttpRequest, and Douglas Crockford's insight that JS has closures and functions as first-class citizens; it is just Scheme in C clothing. Check out his Little JavaScripter. That and of course the intro of JSON.
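A tiny sketch of what that insight means in practice (my own example, not from Crockford): functions are values and close over their environment, which is the Scheme-ish core hiding under the C-style syntax.

    // Functions are first-class values: they can be stored, passed, and returned.
    function makeCounter() {
      let n = 0;                   // captured by the closure below
      return function () {         // the returned function remembers `n`
        n += 1;
        return n;
      };
    }

    const next = makeCounter();
    console.log(next(), next());   // 1 2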


> and Douglas Crockford's insight that JS has closures and functions as first-class citizens

«Insight» is like «RTFM»? :)


> and Douglas Crockford's insight that JS has closures and functions as first-class citizens

That is not an "insight". Anyone who had written an event handler in JavaScript would already know that.


I think that we must always see C in the context of assembly language. Pascal and C really helped move things forward in terms of abstractions. C seems hard now, and it is; however, assembler is way harder, and when things started to evolve from 8-bit to 16- and 32-bit computers, we were all glad that machine language was something of the past. The same goes, for the most part, for BASIC.

Not sure if it's implied, but JavaScript was basically conceived as Scheme with C syntax in the browser. That concept didn't take off for quite some time, but pure functional programming within React actually got quite popular. Even "plain React" borrows a lot of concepts from FP, especially when combined with Flux and other add-ons. During that time Clojure also gained some popularity due to Om (React with Clojure).

One language where FP basically arrived in the mainstream is, by the way, Scala. And there are other, newer languages which allow for FP'ish programming, like Kotlin.

But otherwise it's true, languages like Scheme or Haskell will probably not arrive in the mainstream soon.


> Functional programming is less popular because no major successful platform has adopted a functional language as the preferred language.

WhatsApp backend is written in Erlang. A large part of the world's telecom infra is written in Erlang (2G, 3G, 4G, 5G). It powers the highest availability systems in the world. Still nobody cares about it....


>Languages do not becomes successful due to their intrinsic qualities. Languages become successful when they are coupled to a successful platform.

That's true for system and low-level application languages (Javascript is an exception, as for the web platform, there's no alternative so it's used for everything).

Perl, Python, Java didn't become successful because of platform ties (yes, Java was made by a platform vendor, but most of its programmers used Windows and deployed on Linux or AIX, not Solaris).


How does your theory explain Rust or Python?

Despite Rust being around for quite some time now, highly liked (hyped) by its users and being a good language with many features that make it stand apart while having major commercial support, it's still a very niche language.

I think it rather supports the OPs claim.


People are learning Python because they want to do ML or scientific computing - not the other way around. Python has its current popularity because of NumPy and the whole numeric and ML ecosystem.

I love Python as a language, but realistically it could just as well have been Ruby or some other language.


Python was already quite popular by the time 3.0 came out in 2008, well before the AI summer was in full swing, which is a large part of what drove the popularity of NumPy et al.

You're not wrong now, but that's just the chicken; the egg included web applications and its use as a scripting language.


Python was already the scripting language of choice at CERN when I was there about 20 years ago.

Summer School computing classes even used to teach unit testing in Python.

And the Atlas HLT/DAQ build scripts were based on Python.


The counter example you’d probably want to use is Julia — it was written with scientific computing in mind, but is nowhere near as popular.

Seems that NumPy was written in Python b/c the ML community was using Python, not the other way around.

For Python, the platform is education.

And most educators decided on Python because? You don't think it has anything to do with people liking the language and therefore wanting to use it and teach in it?

> And most educators decided on Python because?

Because there was a huge push by the Python designers towards this direction?

They got funds from DARPA in 1999 for a proposal entitled "Computer Programming for Everybody", the first part of which was "Develop a new computing curriculum suitable for high school and college students".

The Python project did a lot of outreach and marketing specifically targeted at educators to explain why Python was a great teaching language (and did a lot of work to make it so - don't get me wrong). It worked.


It can also be due to tooling -- once a student has got the python binary installed you can get them writing and running code without having to make sure they have the correct version of x, y, z (don't even need conda or pip) or teaching them what a compiler is etc.

Javascript is quite popular as an introductory language too -- students can open up a web browser and type stuff in the console in the middle of a lecture.


I agree. It even features a basic IDE, IDLE, as part of the default install. No need to figure out how to configure your text editor to interact with the Python shell. I remember trying to learn Ruby before Python, several years ago, but then got stuck trying to configure Geany to work with Ruby.

They chose Python since it's essentially a free, open-source alternative to Matlab.

Also, Python is used as a scripting language on top of C, and C is popular.


>They chose Python since it's essentially a free, open-source alternative to Matlab.

Different domains of education had different reasons but some classes had nothing to do with Matlab.

- Professor Gerry Sussman of the famous MIT 6.001 SICP class said they switched from Scheme to Python because (paraphrasing) it's more high-level, with libraries to get immediate work done. (E.g. a class project to control a robot.)

- Peter Norvig teaching Artificial Intelligence classes switched from Lisp to Python because he noticed his students kept getting stuck on Lisp syntax instead of progressing on the more important AI concepts. Switching to Python made teaching the class easier.

One can google for their interviews on why they switched to Python.


Why didn't they use Scheme? That was pushed by a very prestigious institution so had some momentum in the education field. Instead they switched to Python and basically nobody uses Scheme to teach any longer. Python won because people preferred it over lisp, simple as that.

Teaching Scheme etc always got pushback from the outside, because regardless of how well it works for education, people get it in their head that you are teaching something that "industry doesn't use" and thus is bad.

In comparatively few places would Matlab have been the alternative.

It's a really intuitive one for new people to learn - you're far from the machine, but the syntax doesn't get in the way of the concepts much. Of all the languages I have taught people in, Python is the quickest route to understanding/independence.

It also helps that Linux distros usually come with Python

Python is quite popular with the data / machine learning crowd. Think pandas, tensorflow etc.

Python was popular before those libraries came out though. And then people wrote those libraries in Python because it was popular, making it more popular. People using popular languages to create platforms or on their platforms isn't evidence that having a platform is what drives language popularity.

Python came with a platform of sorts, its motto was "batteries included", after all.

I personally find functional languages highly unintuitive.

Intuitive just means familiar. A programming language is intuitive if it is similar to another language you are already familiar with.

I personally find object oriented languages highly unintuitive.

This is exactly the reason why people are trying to push their favorite language down everyone's throat.

Hi, do you have a minute to talk about our lord and saviour, Rust?

Chicken or egg?

Why do you assume the browser would have become as popular as it did if Netscape had gone with Scheme? Or Unix with C?

It seems more likely there is a feedback loop between the popularity of a platform and its anointed language.


Netscape already had total dominance of the browser market when JavaScript was introduced.

> JavaScript became popular because of the browser

That is a very popular perspective. In the past I have seen this sentiment used heavily by people who hate JavaScript as a means to rationalize how JavaScript could have become popular at all.

Unfortunately this sentiment is disqualified by data.

The browser became a popular platform in the late 90s and early 2000s as business and social interest on the web grew. JavaScript has been around cross-platform in a mostly consistent way since 1998 due to the publication of early current standards.

This did not make JavaScript popular though. JavaScript would not become popular for more than a decade later.

In 2008 a couple of things happened. Chrome launched, offering the first JavaScript JIT. Before this JavaScript was hundreds of times slower than it is now. Also, jQuery started gaining popularity at this time, which allowed employers to hire any dumbass off the street to write in it. Douglas Crockford also heavily evangelized the language for its expressive capabilities: functions as first-class citizens, optional OOP (easily ignored/bypassed), native lexical scope.

A little after that around 2009/2010 Node.js launched and GitHub became common knowledge. It’s about this time that JavaScript exploded in popularity. The web was already popular for more than a decade before this.

> Functional programming is less popular because no major successful platform has adopted a functional language as the preferred language.

I would argue functional programming is incredibly popular but less common in the workplace because OOP conventions are what’s taught at school.


> Unfortunately this sentiment is disqualified by data.

Your comment seems to support, rather than disprove, the claim.

JS was around for years without much success before the browser became a popular platform for applications. People used to only write applications for the server or for platforms like Java and Flash that were just embedded as plugins in the browser. They only started using JS after improvements to browsers made it a better platform than the alternatives.


This shows JavaScript as being popular since at least 2004 (beginning of the dataset) https://trends.google.com/trends/explore?date=all&q=javascri... and this is closer to my recollection.

> I would argue functional programming is incredibly popular but less common in the workplace

It would help if someone defined what constitutes "using functional programming".

I mean, this is functional programming:

   function add(a, b) { return a + b }
From my experience, every developer does functional programming to a certain extent, it's just the extent that varies.
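To put "a certain extent" in concrete terms, a small sketch (my own example) of the same add() used in a more FP-flavoured way through higher-order functions:

    function add(a, b) { return a + b }       // the same add() as above

    // Higher-order use: pass add to reduce instead of writing a loop.
    const total = [1, 2, 3, 4].reduce(add, 0);           // 10

    // map/filter follow the same pattern: functions as arguments, no mutation.
    const doubledEvens = [1, 2, 3, 4]
      .filter(n => n % 2 === 0)
      .map(n => n * 2);                                  // [4, 8]
    console.log(total, doubledEvens);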

Functions were invented by functional languages…but they originally were a lot more flexible than the way most procedural languages use them now.

Let’s try rewriting your example this way

add = function(a,b) { + a b }

Why is this better? Because + is a function, and add is an object, so you can then refactor the code to say this

add = +

Try doing that in C.

As I see it, the code for some programs has a high degree of symmetry (i.e. repeated patterns), and the best language is a notation that lets you capture that best.

If you find yourself cutting, pasting and modifying things slightly each time, then your language is holding you back.

Test-script code is the best example of this…particularly hardware tests. You have the same code copied and pasted dozens or hundreds of times.

Thoreau once said that government is best that governs least. That language is best that makes you type the least.
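JavaScript can't do `add = +` either (operators aren't values there), but the surrounding point, that a function is just an object you can rebind and pass around, does hold; a sketch under that assumption, with made-up names:

    // Operators aren't first-class in JavaScript, so + has to be wrapped once...
    const plus = (a, b) => a + b;

    // ...after which "add" is just another name bound to the same function object:
    let add = plus;
    console.log(add(2, 3));      // 5

    add = (a, b) => a - b;       // rebind it, and every later caller of add() changes behaviour
    console.log(add(2, 3));      // -1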


> add = +
> Try doing that in C.

Function pointers. They are abused to no end in C programs. C has the original callback hell, not JS. (And that is mainly because C has no proper way to build abstractions.)


Yes, that's functional programming! I think you're right to say that many programmers use FP much of the time. The modern trend is definitely to borrow many FP techniques and use them inside an imperative shell.

Your example is strict FP, where a and b are fully evaluated before add() executes. There's also non-strict FP, where everything uses lazy evaluation by default. That has some big advantages but also a few pitfalls, and generally makes things significantly weirder when compared to imperative languages.

Pure FP is where you only have functions and expressions, no variables at all. I think that's where most mainstream programmers draw the line -- sometimes you just want to store something somewhere, without being forced to jump through what seem like weird hoops.

The original elevator pitch for Haskell was that it's a "non-strict, purely functional" language (aimed at unifying a bunch of different research languages, plus the proprietary language Miranda).
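A rough illustration of the strict/non-strict difference (my own sketch, in JavaScript for consistency with the snippet above, emulating laziness with thunks):

    // Strict: both arguments are evaluated before the call happens.
    function strictPick(flag, a, b) { return flag ? a : b; }

    // Non-strict, emulated with thunks: arguments are functions, evaluated only on demand.
    function lazyPick(flag, a, b) { return flag ? a() : b(); }

    const result = lazyPick(true, () => 1, () => { throw new Error("never evaluated"); });
    console.log(result);   // 1 -- the second thunk is never forced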


Javascript became mainstream with the Web 2.0/AJAX hype of 2004. It was still Google's "fault", but it was the slick Google Maps site (and also GMail) and not Chrome that was the inciting factor. With an assist from Microsoft's Web Outlook and XMLHttpRequest.

Really, the reason Chrome invested heavily in optimizing Javascript was because it had started to be widely used in more than trivial scripts.


For me OOP is weird. I never understood what people mean when they say it is close to how they think. It feels like a way to obfuscate the flow of code, and it requires you to make decisions about what thing should have which responsibility and which relation to which other thing, and to build weird hierarchies that will bite you back in the long term.

Relations between Objects change, customers don't know what they want, change is always pain with OOP.

Now people say, composition over inheritance. Right, but isn't that the point of functional programming?

Functional programming maps to how I think. The most important question is always: What data structures do I need? Get your data in order and the rest will flow naturally.

I don't use functional programming because I am a math nerd or something; I am not even good at math. I use it because it is composable and easy to understand. I can refactor without any fear of side effects.

Now is pure functional programming practical? For some tasks, sure but yeah not always. Imperative programming gets stuff done. Work with the strengths of both.

The thing you are not used to always feels weird, that is a problem with you not the thing you try to learn.


Oh man. Are you me? Thanks for that comment.

For me functional programming was just like a “missing link”. Wait, I can just code like I think, not in this weird stateful way? It’s just so practical for me. I can deliver 10x the value for 0.1x the effort. It’s just not fair; I know for a fact that there are more people with the same model of thinking, and I just want them to feel as liberated.


Amen!

Same as you and GP.

I'd add that, for me, "functional programming at the edges" is sufficient: I don't need to go full straitjacket / Haskell-style to get a huge lot of the benefits of FP. I can still use an imperative outer shell and still have lots of functional parts in my code which are easy to reason about.


What language do you use? (that gives that nice blend)

JavaScript!

Ok :-) me too (typescript)

I tend to mix OOP and functional together, as I use objects as a way to relate functionality or steps within the process. I almost never use objects as a means to model the actual data. At most, I use objects in this instance as glorified structs. The only real benefit that OOP as implemented in most languages has is the ability to write to an interface, whether you use an explicit interface type or an abstract class which concrete implementations derive from. Like Robert C. Martin said in many lectures, inversion of control is really where OOP shines. Anything else, most other paradigms do better.

I find that OOP languages (as long as they support FP and immutability) have the benefit of using classes as nothing more than first-class parameterised modules (without any mutable state), so I don't have to repeatedly pass arguments from one function to the next but can look them up from a shared context. Haskell doesn't really have that option, although people have told me that OCaml can do that.
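In JavaScript terms, a sketch of that style (the class and names are made up, not from any particular codebase): the constructor captures the shared context once, the methods only read it, and nothing is mutated afterwards.

    // A "parameterised module": shared context goes in once and is never reassigned.
    class PriceFormatter {
      constructor(locale, currency) {
        this.locale = locale;
        this.currency = currency;
      }
      // Methods read the context instead of taking it as an extra argument every time.
      format(cents) {
        return new Intl.NumberFormat(this.locale, {
          style: "currency",
          currency: this.currency,
        }).format(cents / 100);
      }
    }

    const eur = new PriceFormatter("de-DE", "EUR");
    console.log(eur.format(1999));   // "19,99 €"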

You are describing the way I wrote in Scala, and now in Python: using class instances that are initialised with shared contexts like database connections, serialisers, etc.

Yes, classes as glorified Reader monads :)

This is exactly how I roll these days! Avoid classes until I find myself doing too much argument plumbing, or having too many arguments to a function. Also love me some memoization. If I'm honest, I haven't tried aggressively using closures, but I suppose that would have been another way to do it...

This is exactly how I use D!

> I can refactor without any fear of side effects.

Can you? Like say you changed your sum() function to only sum every other element, because for the code you're working on right now that made sense. Well now you've screwed up the other places which relied on sum() summing all the elements.

Dumb example because you'd know better, but surely you can still shoot your foot off like that with FP?

And yeah I know that's not the side effects you were talking about, but when I'm changing my OOP stuff, that's the kind of side effect I'm most worried about.

I've never really used a proper functional language though, so it's certainly possible I'm unenlightened.


> Can you? Like say you changed you sum() function to only sum every other element, because for the code you're working on right now that made sense. Well now you've screwed up the other places which relied on sum() summing all the elements.

I would not change sum but just filter out every other element and feed that new list to the sum function. (If you need it often, write a new helper.)

Your sum function should not decide which elements should or should not be added, that is the callers job. It doesn't even have the context to decide on that.

So yes, I would have the guarantee that nothing would break because I did not change sum in the first place.

In practice you probably wouldn't even start with that sum function but simply implement a function that takes two numbers and adds them together. Then the caller can use higher order function like fold and filter to do whatever it needs.

(Of course we assume it is not literally adding two numbers but some complicated logic, otherwise don't even write a helper for it, just sum your stuff when you need it.)

And yes, you can still have logic bugs in functional programming languages, and they can't really protect you from that when refactoring, but I wanted to note how keeping your design simple and having easy-to-understand, composable functions can help avoid bugs.
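Roughly what the parent describes, sketched in JavaScript (the helper names are mine):

    const sum = xs => xs.reduce((a, b) => a + b, 0);        // stays untouched

    // The new requirement is handled at the call site, not inside sum:
    const everyOther = xs => xs.filter((_, i) => i % 2 === 0);

    console.log(sum([1, 2, 3, 4]));              // 10 -- existing callers unaffected
    console.log(sum(everyOther([1, 2, 3, 4])));  // 4  -- the new behaviour, only where wanted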


> I would not change sum but just filter out every other element and feed that new list to the sum function.

Of course, and I would do the same in my imperative program. A dumb example like I said.

> Then the caller can use higher order function like fold and filter to do whatever it needs.

Right but then at some point you have a chain of ten of these calls, and you have it _all over_, and so you figure hmm lets make a separate function out of that, code duplication is bad after all.

And then you find your new function has a bug and you need to change it... Will all the users of this new function be OK with that bugfix?

If you don't wrap up these chains into new functions, how do you find all the places you need to change once you need to make that bugfix?

> having easy to understand composable functions can help avoid bugs

Sure this I get, which is why my imperative code also contains lots of "do one thing" methods that is used as Lego bricks. And I use a fair bit of functional-ish code, ala LINQ, where it makes sense.

I wish I had discovered functional programming at an earlier stage, where I had more time to experiment. I think it would be very informative to make two non-trivial feature-equal programs in either style so I could compare.


One thing I want to point out is that when I first read "do one thing", I thought I knew what they were saying and I didn't. It took a long time for me to finally grasp that message.

One of the ways I finally learned that was after learning async/await in C#. Every beginner to async methods dreads the moment they realize the "zombification" of the Task<T> return type, where if you call an async function 6 layers deep into your program you need to change the return type all the way up the stack. Almost always now I call the async function at the very top layer. If I need some value or list, I compute that list or value and return it all the way up, then make the async call at the top based on it.

I learned to split my functions into two types, pure functions and impure functions. Async functions are an example of an impure function. The only thing those functions are allowed to do is their impure thing. Make a web call, push a value into a database, whatever. The pure functions are where you actually do your computations and transformations.

If you have a pure function that has a bug and you need to change it, because it's a pure function it is inherently testable. Just run it and see. If you're not sure, it's trivial to create a new function with the bugfix and only call that function from the places that you are sure of. But then try to make sure.
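A small sketch of that pure/impure split (in JavaScript rather than the parent's C#, and with hypothetical names): the computation stays pure and trivially testable, and the await lives only in the thin impure layer at the top.

    // Pure: no I/O, no awaiting -- just data in, data out, easy to test.
    function totalOwed(invoices) {
      return invoices
        .filter(inv => !inv.paid)
        .reduce((sum, inv) => sum + inv.amount, 0);
    }

    // Impure shell: the only place that awaits and touches the outside world.
    async function sendReminder(fetchInvoices, sendEmail, customerId) {
      const invoices = await fetchInvoices(customerId);             // impure: network/DB
      const owed = totalOwed(invoices);                             // pure core
      if (owed > 0) await sendEmail(customerId, `You owe ${owed}`); // impure: side effect
    }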


> I know that's not the side effects you were talking about, but when I'm changing my OOP stuff, that's the kind of side effect I'm most worried about.

That's not a "side effect"[1] at all. It's a logical bug in the implementation that only affects the function's return value. You absolutely still have to worry about those sorts of bugs in FP, which is why you still need tests or a fancy enough type system to prove the bug cannot happen (for your example that is unlikely to be practical).

"Without any fear of side effects" doesn't mean you don't have to fear anything at all, but it's one less thing to worry about.

[1]: https://en.wikipedia.org/wiki/Side_effect_(computer_science)


> That's not a "side effect"[1] at all.

Yes, like I said. Point is, in my imperative OOP code, I'm relatively seldom worried about actual side effects, and more worried about introducing bugs due to me not fully grasping all the interactions of the code I'm changing.

Did someone somewhere make an assumption about how this code works (sums all numbers), and am I breaking that assumption when I'm making this change (summing every other number)?


> I'm relatively seldom worried about actual side effects

I'll give you one concrete example: In a number of ORMs, I have no idea when database calls actually happen and this can lead to a) subtle logic bugs (inconsistent state), or b) bad performance (N+1 queries).

I've had such problems both with Ruby's Active Record and Hibernate, two of the most popular ORMs. I've even deployed code that had subtle bugs, even though I wrote tests, because while my tests verified that the correct state of the object was written to the DB, the updated state was not reflected in the JSON response object. I'm sure similar things happen in other frameworks as well.

A PFP can prevent that sort of craziness from happening.


Fair point, though I'd say such ORMs are broken by design by the sound of things.

I agree that Hibernate in particular is broken by design, but it is what it is...

Confidence in a program's behaviour is not an all-or-nothing proposition. You can statically verify certain properties of your program (through immutability, type systems, linters etc.) without having to write a full formal proof of your program's entire behaviour, and you can gain more confidence in the entire program working correctly.

In the example you just gave (summing over a list of numbers), in almost all programming languages you'd resort to runtime verification instead, i.e. tests, just because it's so much more practical. But if you have a PFP language, you still have the confidence that the behaviour of the inputs and outputs is the only thing you have to test, since the function cannot do anything else.* Even in a non-pure FP language, you know that it cannot mutate other variables, although it might have side effects.

* Ok, it might loop forever or crash, but that's comparatively rare.


> building weird hierarchies

There's a certain kind of personality that really gets into that (the 'architecture astronaut'), specifically because it is counterintuitive and they like convoluted things. I think the idea that really gets them going is "look, it knows to XYZ itself!"

I do scientific programming, and most of the time, functional style makes way more sense. Doesn't stop someone with the OOP bug from turning all the logic inside out and making a brittle mess, though.


There are problems (or parts of problems) that are more about behavior and others that are more about data. The latter of course benefits from an FP mindset/language, but I wager that OOP is a better fit for the former.

What OOP gives is a way to encapsulate some given part of a program and make it a working little building block you can later reuse. It can contain mutable state, but it is promised to stay correct because only that class can touch it; it can be used as a slightly different version of a common interface, etc.

The two are not mutually exclusive at all.


Some of the weirdness is so completely avoidable that it frustrates me, but languages and libraries dig in on ‘you get used to it.’ I’m talking about the f(g(h(x))) vs x | h | g | f. Right to left chains of functions seem more popular than chains of pipes in FP, but even being someone who has drunk the kool-aid deeply, I always prefer pipes.

‘Take the pan out of the oven once it’s been in for 30min, where the pan contains folded-together eggs and previously mixed flour and sugar’ isn’t more FP than ‘Mix the flour with the sugar, then combine with eggs, then fold together, then put it into the oven, then take out after 30min.’ But people new to it so often think it’s an FP thing for no good reason. It’s a new-user issue that ought to be totally fixable as a community.

FP has enough great ideas that I’d recommend everyone learn pure functional solutions just to put into their tool belt, but it’s absolutely true that getting up to speed is harder than it needs to be. My hot take: it won’t be really mainstream until someone figures out how to make dependent types really ergonomic, which seems a long way away.


I'm relatively an FP newbie, and I have a few questions.

1. I've never seen the "pipe" notation you talk about in any lang except bash, which is not an FP right? In which languages does "|" denote function application?

2. Aren't dependent types orthogonal to whether a language is functional? Just to make sure we're on the same page on what dependent types mean, I wanna give an example (a contrived one, sorry) of dependent types in Python.

    from typing import overload, Literal

    @overload
    def str_if_zero(num: Literal[0]) -> str: ...
    @overload
    def str_if_zero(num: int) -> int: ...
    def str_if_zero(num: int) -> int | str:
        if num == 0:
            return "That's a zero"
        return num

Elixir, Elm, OCaml, Haskell, and F# all have a pipe-like operator for this (though the actual operator varies).

There's also a Stage 2 proposal for adding a pipe operator to JS[1]

[1]: https://github.com/tc39/proposal-pipeline-operator


Clojure has the -> macro:

https://clojuredocs.org/clojure.core/-%3E

It does the same thing, but it's much more awkward to use.


As a daily user of -> and ->>, I am curious what you find awkward about them.

You have to plan to use them in advance. With a pipeline operator you can figure out the command you want at one stage and then tack on another part of the pipeline without needing to go back and alter the way you began the line.

R also has it, if you count R as a functional language.

I think libraries are just as relevant as languages, and dplyr’s pipes are wildly popular and (more to the point) even idiomatic R at this point.

Actually base R has a pipe operator now. (I still use the dplyr pipe, just out of familiarity.)

R is a surprisingly functional language!

As does Julia

Got it. Thanks!

> which is not an FP right?

> Aren't dependent types orthogonal to whether a language is functional?

You're correct, and I think that's kinda the parent's point. I think what the OP is saying is that much of the syntax ergonomics is orthogonal to FP, which means it doesn't have to be as weird/frustrating/unusual/insert preference/unergonomic as it is.

A good example is how piping vs nesting are different syntaxes for function composition: x|A|B|C vs C(B(A(x)))

The former might feel more comfortable, familiar, and left-to-right readable for someone new to FP. That said I think a few language do use piping. F# has |> and I think Clojure used >>> maybe? -- signed, humble java programmer.


> piping vs nesting are different syntaxes for function composition: x|A|B|C vs C(B(A(x)))

> The former might feel more comfortable, familiar, and left-to-right readable for someone new to FP.

C(B(A(x))) is the syntax introduced in school, but note that if you continue on in algebra the notation flips from C(B(A(x))) to x_{ABC}. Everyone agrees that it's easier to list the transformations in the order they happen.


Clojure has two primary macros for this, thread-first (->), and thread-last (->>).

They enable you to write pipe transformations and in my opinion make the code much more readable.


Dependent types are not totally orthogonal of FP. They generally require immutability and function purity to work well and become extremely difficult to work with in the absence of those features.

See e.g. https://shuangrimu.com/posts/language-agnostic-intro-to-depe...


Yeah, type checking becomes undecidable otherwise, and the compiler can also become vulnerable, but Rust says they're going to try, so we'll see.

Well, it's luckily a little better than that. Type checking doesn't have to be undecidable, and the compiler doesn't have to be vulnerable, because you don't have to execute any side effects in the compiler even if the code is meant to execute side effects at runtime.

It's more that the utility of dependent types is generally restricted to functions that are pure and deal with immutable values, which means one possibility is to specially mark which of your functions are allowed to be used in a dependent type and which aren't. But the fragment of your program that can be used in dependent types is generally going to be a pure FP fragment.


Side effects aren't the only reason type checking can be undecidable, though. It depends on the type system, not just referential transparency. Look at Cayenne or ATS. Also, don't forget it's still possible to DoS the system (although this is already more than possible with C++ and such, so what are we complaining about).

Yeah, dependent types would be nonsensical for code that isn't functional. Marking which parts are and aren't is a good idea. Idris did something similar for totality checking. I think it'd be nice to have a system which differentiated between functions and procedures though.


> I think it'd be nice to have a system which differentiated between functions and procedures though.

That's essentially what 'IO' types are for (or finer-grained alternatives like 'Eff', etc.). At the value level, we have things like 'do notation', 'for/yield', etc.


F# and Ocaml are good examples from my limited functional experience.

https://stackoverflow.com/questions/2177110/in-f-what-does-p...


Thanks!

  h(x).f().g() 
is also very common, and I would say it counts, especially if the language supports something like extension methods.

> especially if the language support something like extension methods

Extension methods are a bit of a hack (although useful!); a cleaner approach is https://en.wikipedia.org/wiki/Uniform_Function_Call_Syntax although I've not personally used a language which supports it :(


I'm a big fan of UFC myself.

I don’t know if I would say it’s very common(?) but it was introduced as a named concept (“uniform function call syntax”) AFAIK in D and is also a feature of Nim

https://tour.dlang.org/tour/en/gems/uniform-function-call-sy...


(2) No, dependent types are not the same as overloading. Basically it's when the type depends on the values; so like having a type for odd numbers or having an array that knows its length at compile time (for example to avoid array out of bounds exceptions at compile time)

1. Others have answered it well. Tons of languages have pipe operators. I used | in my example because I figured more readers will recognize bash piping than other notations.

2. Yes that's what I mean by dependent types, where types depend on values, like f(0): string and f(1): int. And yes, they totally are orthogonal. However, pure functional programming tends to lean on its type system more than other styles, in my experience. But then without dependent types, something as simple as loading a CSV gets hairy relative to something like pandas's pd.read_csv. Maybe others can go into why pure FP and static typing go so much together, but I gotta get back to work.


The pipeline operator |> exists in Elm[0]. It is available in bash as & and the popular (in my limited experience) package flow[1] also implements it as |> for readability. It is a proposed addendum to Javascript[2] but I have no idea how seriously the JS powers-that-be are taking that.

The example you gave of Python doesn't have a tagged union. I really don't know much at all about Python, so maybe the type checker can tell you that you're handling all the cases, but with tagged union types you have to handle every possible output of `str_if_zero` when calling it. That doesn't have to be an FP thing, but I only find it when doing Elm and Haskell in my experience.

[0] https://elm-lang.org/docs/syntax#operators

[1] https://hackage.haskell.org/package/flow

[2] https://github.com/tc39/proposal-pipeline-operator


For some reason I wrote "bash" when I meant "Haskell" in the second sentence. Sorry about that.

Outside of `|`, there's the threading macro in lisps (among others):

(-> x f g h ...)


Some languages like OCaml have “pipeline” operator like `|>`, others like D allow you to chain function application `f.g.h()` instead of `h(g(f()))`

[0] https://www.cs.cornell.edu/courses/cs3110/2019sp/textbook/ho...

[1] https://tour.dlang.org/tour/en/gems/uniform-function-call-sy...


That sort of chaining seems a bit like the method chains you have in some languages, like Java streams, C# LINQ and Rust iterators. That sort of "functional constructs when it fits" is getting popular, even if pure functional isn't gaining much popularity.

In D, Flix [0] and Nim you can use UFCS [1] to chain functions.

[0] https://flix.dev/principles/

[1] https://en.wikipedia.org/wiki/Uniform_Function_Call_Syntax?w...


D doesn't have pipes, being a C-like (it probably could, but it's untypical), but it has something almost as good: "uniform function call syntax", meaning that a function that takes X as its first parameter can be called as if it was a method of X. `foo(x)` => `x.foo()`. This makes functional programming in D a lot closer to the pipe example, in that you're usually passing a lazy iterator ("range") through a sequence of chained calls.

Every C-like language should copy this idea, imo.


That sounds amazing, but I wonder if it could get unwieldy as well.

Would have absolutely loved it in Go, where over the course of time I had to build up dozens if not hundreds of validators on strings, maps, etc. Ended up resorting to storing them in `companyInitials-dataType` packages, EG `abcStrings.VerifyNameIsCJK()`, but the unified call syntax would be super tidy.



Yep exactly, but without requiring monkeypatching or operator hacks. Basically sorta like what C# does.

edit: The extension method page even links https://en.wikipedia.org/wiki/Uniform_Function_Call_Syntax


Nim has this as well, it's lovely.

This is why every time this stuff comes up I plug the Haskell flow package. Imo, it should be in the Prelude.

https://hackage.haskell.org/package/flow-1.0.22/docs/Flow.ht...


> ‘Take the pan out of the oven once it’s been in for 30min, where the pan contains folded-together eggs and previously mixed flour and sugar’ isn’t more FP than ‘Mix the flour with the sugar, then combine with eggs, then fold together, then put it into the oven, then take out after 30min.’

Oddly enough, the language that put the most thought into this problem is Perl, which provided the pronoun variables to represent "whatever I just computed".

Natural languages always offer multiple strategies for ordering the various parts of a sentence, because the order in which things are mentioned is critical for several purposes, from making it easier for the listener to follow a chain of thoughts to building or defusing tension or correctly positioning the punchline of a joke.


I'm always surprised how unpopular Perl (and even more so its spiritual successor Raku) is with the HN crowd. It does not square with my experience of various languages' ergonomics.

It's not at all surprising that perl put thought into things like this, the entire language concept optimises for elegance of expression.


I can't speak for Perl's general usefulness, but pronoun variables are just not terribly useful, because you can make them up on the spot. For example,

    result = input
    result = transform(result)
    result = transform2(result)
    return result
Perhaps I misunderstood the idea of pronoun variables. A related idea is the idea of `self` or `this`, in that computation is viewed from the perspective of an agent (which is ironically called an object when it is typically a grammatical subject).

Perhaps this is why we like OOP and CSP patterns so much, it meshes well with our social abilities.


I’ve hacked pipes in python many times just because it’s so much better. And no, “a=f(a);\n b=g(a);\n c=h(b);\n a=i(c)” isn’t “more declarative” or “more expressive” than a | f | g | h, and even in python a |pipe| b |pipe(blah)| c |pipe(blah, blah)| d is so much nicer than nesting the parens.
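
For the curious, a minimal sketch of one such hack (the real thing usually grows more bells and whistles):

    class pipe:
        """Wrap a function so that `value | pipe(f, ...)` means `f(value, ...)`."""

        def __init__(self, fn, *args, **kwargs):
            self.fn = fn
            self.args = args
            self.kwargs = kwargs

        def __ror__(self, value):
            # `|` falls back to the right operand's __ror__, so the left-hand
            # value gets threaded in as the first argument
            return self.fn(value, *self.args, **self.kwargs)

    evens_desc = (
        range(10)
        | pipe(lambda xs: [x for x in xs if x % 2 == 0])
        | pipe(sorted, reverse=True)
    )
    assert evens_desc == [8, 6, 4, 2, 0]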

This sounds like a debugging nightmare. Is it not easier to name each intermediate result so you can inspect it in a debugger?

It's easier, but there are several considerations:

- There's workarounds. Eg. in F# you can redefine the pipe operator in debug mode [0] so it can be stepped into and inspected.

- I think some IDE plugins are also working on allowing pipeline contents to be automatically inspected.

- And of course, there's the good old tee operator for printf debugging. FP style rarely mutates variables, so a few print statements are just as good as a live object inspector.

As for why you actively wouldn't want to have those variables... much like comments, sometimes variable names add precious information and should be typed out, but sometimes they're just unnecessary noise, mental overhead, and scope pollution.

E.g. compare:

    let suppliers = getSuppliers()
    let sortedSuppliers = sortBy getSupplierPriority suppliers
    let bestSupplier = first sortedSuppliers
to:

    let bestSupplier =
       getSuppliers()
       |> sortBy getSupplierPriority
       |> first


[0] https://stackoverflow.com/a/4966892

Elixir includes IO.inspect[1], which prints a value and then returns it. It makes it easy to insert this function into a pipeline without disrupting the existing logic[2].

[1] https://hexdocs.pm/elixir/1.12/IO.html#inspect/2

[2] https://elixir-lang.org/getting-started/debugging.html


In terms of debugging inconvenience for intermediate values, `->(x) | a | b | c` is the same as `c(b(a(x)))`.

I find debugging w/ a REPL using functional-esque code much easier. You can play around and exec/inspect lines/snippets at will without changing state & "breaking" the debug session.


Agreed, they are pretty much the same inconvenience level, but for me that level is too much either way. Maybe it's just my workflow, but I've always preferred step-by-step debugging to a REPL, which is why I like to see many intermediate results being named.

When I hack it into Python, it’s slightly annoying. When done properly, no, it’s no worse than f(a+g(b+c())), which is everywhere in code already. And debuggers / error messages should indicate which subexpression caused the problem, which would clean that up.

I think maybe part of it is that most people are introduced to FP through Haskell, if I had to guess. And Haskell (at least the tutorials) is IMO guilty of the top-down approach where the final value is written first, referencing variables that haven't appeared yet. OCaml, on the other hand, has you define your components before the final value, which is more familiar to most programmers.

I think Haskell doesn't always look super readable, but I can appreciate it at least because it reminds me of math papers where the final theorem is introduced first, and then it's broken into lemmas, and each of those lemmas is proved, etc.


I concur, in Haskell you have to import the pipe operator (&) before using it, kinda second class, no snark intended.

A lot of the difficulty in learning FP is in unlearning imperative programming. I hypothesise that for someone who has never programmed before, FP and declarative paradigms are easier to reason about.

> FP has enough great ideas that I’d recommend everyone learn pure functional solutions just to put into their tool belt

Completely agree! I got my first taste of FP writing unreadable nests of python list comprehensions and it's shaped my Java tremendously. I've never spent much time in FP dedicated languages, but it's as good a frame to understand as OOP. I find they even mix and match well.


I’ve “drunk the koolaid” and I mostly went the opposite direction on left-to-right pipes vs. right-to-left compose: with a(b(c(d))), order of evaluation will always be arguments->function call (in most languages), and so d is the first thing evaluated. Pipes make it seem more consistent, but it also introduces an inconsistency.

Isn’t the first thing you read being the first thing evaluated just better? We read left to right, so having to see and remember a before c(d), but then having some of a’s arguments at the very end, is just annoying vs pipes.

It's hard for me to verbalize, but the rtl composition just sort of makes sense. For one thing, it's the way functions already sort of work:

    a(b(c(d))) === (a • b • c)(d) // the functions are in the same order
And, when you think about the typical order of evaluation, a(b(c(d, e, f))) is evaluated:

    - look up the value of d, e and f
    - evaluate c(d, e, f) (call it g)
    - evaluate b(g) (call it h)
    - evaluate a(h) (call it i)
    - return i
Pipes add an inconsistency between the order of evaluation of arguments vs. function calls and the order of evaluation of functions. (Also, completely separately, I've mostly come around to disliking operators and thinking Lisps are right here: precedence and associativity are "intuitive" for basic math operators because we've spent years studying math and getting used to PEMDAS, but anything beyond this is just asking for trouble: aside, maybe, from APL/J's strict right-associativity without precedence)

I have an algebra book on my shelf* that puts function arguments on the left, as in (x)f instead of the usual f(x). It is definitely more “natural”, as functions compose in the normal reading direction. But even so, it is incredibly hard to read, because the break with the usual convention is so radical. Needless to say, it didn’t catch on.

* In my office, but I am at home. Don’t remember the author; sorry. But it was written back in the ‘60s, I think.


What I like about pipes it that it allows two ways to write chains of function calls. For calls that are more branch-like, the nested structure of standard function calls makes the most sense to me:

    article = create(author("John"), Page.article())
For cases where you're modifying data sequentially, the pipe structure makes it really easy to read:

    5th_prime = 0.. | filter(is_prime) | nth(5)
Having a way to write function calls forwards and backwards lets you choose whatever syntax feels most appropriate for the job.

that same logic works for reversed-compose - a|b|c|d = (a)|(b~c~d). It is inconsistent with the normal kind but I think it’s worth it just for the ability to write list |map| func |filter| predicate |group-by| keyfunc |map| .values() |map| sum, which is so nice because you start with the value and as you read the thing that’s happening is precisely the thing you’re reading, and everything’s in one place - it’s

This is only true if your stages don't have arguments: with a | b(something) | c | d, `something` is now evaluated before b(something)

true, although argument evaluation order in a(x,b(y,c(d),z),w) is already somewhat hard to rely on

Java streams are a (limited) version of what you propose. Something like this is very idiomatic:

    myhashmap.entrySet()
        .stream()
        .map(Map.Entry::getValue)
        .sorted()
        .distinct()
        .toList()

Difference with pipe operator would be

1. can't be extended.

    myhashmap.entrySet()
       .stream()
       .flattenBars()
       .toFoo()
2. (As a result of 1) - applies only to streams

> can't be extended

They can be extended arbitrarily by defining collectors. The Stream interface is very well designed. For example your "toFoo" would be .collect(Collector.of(<foo-supplier>, <foo-accumulator>, <foo-combiner>, <foo-finisher>)). All of them can be arbitrary functions working on arbitrary types. That's how the standard library implements Collectors.toList(), in fact!

> applies only to streams

You're exactly correct here, but for legacy reasons Java did not have a choice. I'm the first to admit mylist.stream().map(f).toList() is dumb. A list is a functor, dammit!

It's the best possible solution given the existing constraints, but it's definitely limited in what it can do.


I was going to say C# LINQ, too!

For the order of fn composition, I found the pipe operator found in e.g. Elm really nice.

Most FP-heavy languages also have good support for metaprogramming, and have macros that imitate pipes (e.g., all lisps and Julia).

I recently watched this talk-

Why Isn't Functional Programming the Norm? (https://youtu.be/QyJZzq0v7Z4)

Here's the HN thread- https://news.ycombinator.com/item?id=21280429

I learned in that talk, among other things, that Sun spent $500 million to promote and market Java.

- https://www.theregister.com/2003/06/09/sun_preps_500m_java_b...

- https://www.wsj.com/articles/SB105510454649518400

- https://www.techspot.com/community/topics/sun-preps-500m-jav...


It's way simpler actually.

Functional programming isn't the norm because — while it's extremely good at describing "what things are and how to describe relationships of actions on them" — it sucks at "describing what things do and describing their relationships to each other". Imperative programming has exactly the opposite balance.

I find the latter to just be more valuable and applicable in 80% of real-world business cases, as well as being easier to reason about.

Entity Relationship Diagrams for example are an extremely unnatural match to FP in my eyes, and they're my prime tool to model requirements engineering. Code in FP isn't structured around entities, it's structured in terms of flow. That's both a bug as well as a feature, depending on what you're working on.

Most of the external, real world out there is impure. External services, internal services, time. Same thing for anything that naturally has side effects.

If I ask an imperative programmer to turn on three LEDs one after the other for me, they're like: Sure, boss!

for led in range(3): led.turn_on(); time.sleep(1); led.turn_off()

If I ask an FP guy to turn on three LEDs one after the other for me, first they question whether that's a good idea in the first place, and then they're like... "oh, because time is external to our pure little world, first we need a monad." Whoa, get me outta here!

Obviously with a healthy dose of sarcasm.

Don't get me wrong, for the cases where it makes sense, I use a purely functional language every day: it's called SQL and it's awesome despite looking like FORTRAN 77. I also really like my occasional functional constructs in manipulating sequences and streams.

But for the heavy lifting? Sure, give me something that's as impure and practical as all of the rest of the world out there. I'll be done before the FP connoisseur has managed to adapt her elegant one-liner to that dirty, dirty world out there.


I have my share of gripes about Haskell (which I'm assuming is the language you have in mind when you're talking about a pure FP language), but even with the sarcasm disclaimer, this is a pretty extreme strawman.

This is the equivalent Haskell.

  turnOnThreeLEDs = for_ [1..3] (\i ->
    do
      LEDTurnOn
      threadDelay (10^6)
      LEDTurnOff
  )
or all in one line

  for_ [1..3] (\i -> do { LEDTurnOff; threadDelay (10^6); LEDTurnOff })
It looks basically the same.

EDIT: I would also strongly dispute the idea that FP is structured around flow instead of data structures. In fact I'd say that FP tries to reduce everything to data structures (this is most prominently found in the rhetoric of the Clojure community, but it exists to varying degrees among all FP languages). Nor is SQL an FP language (logic programming à la Prolog, and therefore ultimately SQL, is very different from FP).

FP's biggest drawback is that to really buy into it, you pretty much need a GC. That also puts an attendant performance cap on how fast your FP code can be. So if you really need blazing fast performance, you at least need some imperative core somewhere (although if you prefer to code in a mainly FP style, you can mainly get around this by structuring your app around a single mutable data structure and wrapping everything else in an FP layer around it).


Without knowing Haskell, it looks like there are bugs in the code. Specifically, there's no delay after LEDTurnOff in the first example, and you have the same function name twice in the second example.

If those are bugs, I'd forgive that. If those AREN'T bugs, then keep me far, far away from FP!

Also, what is the point of i? Clearly, each LED should have its own index, but then i is never used again. (I understand this could be pseudocode or there's a lot of other code not included.)

And the ranges are inclusive in Haskell? I feel like a lot of friction between Matlab and Python involves how each language's indexing/slicing/ranges are represented, so it's interesting to see each language's approach (indenting like Python, lower camel case, delays in us, etc.) --- but with every language difference, I'm personally less inclined to want to learn something new without a great reason.


Ah yes you are totally right.

I misread the initial example:

  turnOnThreeLEDs = for_ [1..3] (\led ->
      do
        turnOn led
        threadDelay (10^6)
        turnOff led
  )
It should be the above (i is changed to led), where I thought the original was automatically going to a new led and didn't realize that `led` was actually an integer and `turn_on` and `turn_off` are basically pseudo-methods (or extension methods). (The original code also only sleeps after turning an LED on, not off)

Indeed the second example is a typo that should have on vs off.

  for_ [1..3] (\led -> do { turnOn led; threadDelay (10^6); turnOff led })
The joys of writing code on mobile and too much copy pasting.

`i` is the same thing as `for i in...`.

Also yes ranges are inclusive.


Thanks for all the info. I want to give FP a proper try one day, and there are many different roads, but it's always a rocky start for me with a new language. Having a clear translation from one to the other is important, so I'm glad you updated this.

True that this matches the original example. I guess my mind filled in the second delay automatically when it noticed, "this isn't gonna blink to the naked eye!"


Well, you DID sneak a monad in there :-)

So what?

The point is to make it easy to program imperatively (with effects, where relevant) while simultaneously reclaiming the ability to check for correctness and maintain laziness by default.

What's so good about implicit sequential evaluation? Shouldn't the effect ordering be explicit? Isn't explicit better than implicit?


> Isn't explicit better than implicit?

That's the point, isn't it. No, explicit is not always better than implicit.


Rather unavoidable when turnOnLED's likely type is IO () ...

Rather unavoidable when you have types like IO() in the first place.

Use of that type is easily limited in Haskell code. For instance, in my chess counting project [1] only a few lines in Main.hs use IO (), while the other approximately thousand lines of code have nothing to do with IO ().

[1] https://github.com/tromp/ChessPositionRanking


What you wrote is imperative despite the fact that it’s written in Haskell.

So, you recognize imperative is a subset of functional? :-P

do-blocks have perfectly functional semantics, so if you consider that to be imperative as well, this means that a sequence of instructions changing state is both imperative and functional, as long as you declare where the state is being handled in your code.

And yes, of course functional code can handle state. The good thing about this 'Haskell imperative' style is that it doesn't fall prey to side effects, the bane of imperative programs (uncontrolled side effects are NOT a good thing). In Haskell, you control why and where you allow them.


One could also make a language that has exactly the same visual syntax as C, where ; is specified as a function composition operator instead of a separator of instructions. These kinds of mind games are pointless - if your code is sequencing instructions, it's imperative; if it's denoting, it's functional.

If you do that, then you have to admit different kinds of imperative code: C-imperative style that can modify any state in the application as side effects, and Haskell-imperative where you can only modify state explicitly declared as input to the procedure.

It's not just mind games; the difference has very real implications for the architecture of the whole program and the control you can exert over unpredictable side effects.


I mean if pure FP is enough to write imperative code as well then the distinction between the two doesn't seem all that important to me. What would be a non-imperative equivalent to illustrate your idea?

Small examples of mutating state in Haskell are about as meaningful as small examples of pure functions in Java. Small examples are really easy to do and not that ugly, bigger more complex examples don't look so nice. The whole reason people want to use Haskell is because doing complex state mutations is horribly ugly and unergonomic so people don't do it, that is a feature of the language.

> If I ask an FP guy to turn me on three LEDs after each other, first they question whether that's a good idea in the first place

a proper FP engineer would model the problem of turning on LEDs one after another as a sequence of states. A simple way would be a bit pattern of the LEDs, in an array where each element is the LEDs' on/off state, like ['000', '100', '110', '111'].

Then, the problem decomposes into two, simpler problems: 1) how to create the above representation, and 2) how to turn the above representation into a set of instructions to pipe into hardware (e.g., send signals down a serial cable).

The latter problem is imperative by nature, but the former - the representation of states - is very pure by design! So the FP model solves a bigger, more general problem of driving the LEDs with arbitrary patterns, and this particular task is just one instance of it.

So if your boss asks you in the future to switch the bit patterns to be odd/even (like flashing Christmas lights), you can do it in 1 second, whereas the imperative version will struggle to encode that in a for-loop.
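
A toy sketch of that decomposition in Python (`send_to_hardware` is a stand-in for whatever actually drives the pins):

    def led_states(count):
        # pure: build the sequence of frames, one more LED lit per step,
        # e.g. led_states(3) == ["000", "100", "110", "111"]
        return ["1" * i + "0" * (count - i) for i in range(count + 1)]

    def drive(frames, send_to_hardware):
        # impure by nature: push each frame down to the hardware
        for frame in frames:
            send_to_hardware(frame)

    # swapping the pattern is a change to the pure part only,
    # e.g. an alternating odd/even pattern for the christmas-lights case
    assert led_states(3) == ["000", "100", "110", "111"]
    drive(led_states(3), print)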


I promise you, a "proper" C programmer will do the same thing, faster. And I say this as an FP fan!

A bad C programmer will write horrible spaghetti code, but it will probably be enough to do the job. A bad Haskell programmer will get absolutely nowhere.

If you think pure FP is great for this stuff, I think you need to explain why imperative languages are regularly used to win the ICFP programming contest (https://en.wikipedia.org/wiki/ICFP_Programming_Contest#Prize...).


> A bad Haskell programmer will get absolutely nowhere.

that's a feature, not a bug in my books! Maintaining code written by other bad programmers is the bane of my life (though I get paid to do it, so I can't complain).


Heh, there’s something in that.

Now I’m wondering what the worst mainstream language is for maintaining somebody else’s legacy code. C can definitely be pretty bad... but I’m thinking maybe Perl?


you don't maintain perl - it's a write-once language. Every time you need to change it, you rewrite a new perl program to do exactly what you need ;D

the title of course, goes to javascript imho.


> So if your boss asks you in the future to switch the bit patterns to be odd/even (like flashing christmas lights), you can do it in 1 second, where as the imperative version will struggle to encode that in a for-loop.

I guess you are talking about embedded, so I'll concentrate on the LED example. In embedded, code size and performance matter, so you try to be as straightforward as you can be. And I think applying "your boss might ask you in the future" to every piece of code is what drives some development well past the point of reasonable complexity.

Should I spend a week creating a super-complex infrastructure for turning some LEDs on/off just in case my boss asks me to change the pattern? Should I spend a week thinking about the right code pattern, or trying to "solve a bigger, more general problem"? It's just 3 LEDs blinking... just write the damn for-loop!

At the end of the day, my microcontroller only "digests" sequential instructions. So the simplest thing (for embedded) is to think and feed the microcontroller with sequential instructions. All the rest is just ergonomics for the sake of programmer's comfort or taste.

I'll do the sequence. If my boss asks me to change the sequence, I'll change the sequence. It's not a big deal.

I don't know if in this case one would "struggle" to modify this particular for-loop. And I can think of at least 3 five-minute solutions in C that don't require FP to structure a program so the pattern can be changed quickly if required.


I'm at a very similar place to you at this point. It makes sense for FP to be good at "describing relationships of actions", since the base unit of reasoning is a function, or an action.

The beauty of modern programming is that we don't have to stick to a pure example of either paradigm. We can use FP techniques where it makes sense and turn to imperative otherwise.

In your example, we could have a nice, purely functional model of an LED that enforces the invariants that make sense. We could then "dispatch" the updated LED entity to an imperative shell that actually takes the action. All without using the M-word!

I'm probably - unfairly - treating your example more seriously than you intended, but I think I'm arriving at the same conclusion as you from a slightly different place. I want to have a purely functional domain that I wrap in an imperative shell. Trying to model side effects in a purely functional manner using something like applicative functors just doesn't give the productivity boost that I want.

> I use a purely functional language every day: it's called SQL

This is my favourite way to annoy FP advocates (despite probably being one myself). Everyone is a closet mathematician in FP-land, but no one wants to admit how beautiful relational algebras are.


> I want to have a purely functional domain that I wrap in an imperative shell. Trying to model side-effects in a purely functional manner using something like applicative functors just doesn't give the productivity boost that I want.

Functional Reactive programming is a very good way to create that mix. Web front-end developers have realized that, and that's the reason why most modern frameworks have been slowly veering towards this model, with Promises and Observers everywhere.

When you represent state as an asynchronous stream that you process with pure functional methods, you get a straightforward model with the best of both paradigms.


I like FRP, but prefer to imitate it in a synchronous manner now - I mainly work on the JVM and I've personally found debugging to be too painful when working asynchronously. If I need async then FRP is definitely the first tool in the toolchest that I'd reach for.

Elixir's pipe operator is a brilliant tool that I wish every language had. I mainly use kotlin day-to-day and definitely abuse the `let` keyword to try to get closer.


True, FRP doesn't need to be asynchronous; it's just a very good paradigm to support multi-process computation and module composition.

As I said above, it just happens to also be very good at handling state without fear of side effects.


> I want to have a purely functional domain that I wrap in an imperative shell.

Like you, I too think the ML side of functional programming got it right. Sadly, their most popular language committed the unforgivable sin of not being written by Americans and is therefore condemned to never be as popular as Haskell. I console myself by using F# when I can.


Interestingly, a lot of very popular languages are not authored by Americans:

- Guido of Python fame, is Dutch

- Stroustrup (C++) is Danish

- Lerdorf (PHP) is Danish

- Anders Hejlsberg (author of both C# and TypeScript) is also Danish

- Ruby is not as popular as it used to be, but Matz is Japanese.

- Java's Gosling is Canadian, but I'm not sure if that is the kind of American you had in mind

That covers a big chunk of Tiobe top 10. If anything Denmark is over-represented!

edit:

- Wirth (too many languages to list) is Swiss


It's not about the nationality of the author. It's about where they worked from and with whom. Except for Ruby, which failed, all the languages you are talking about were developed in the USA.

Van Rossum moved to the USA in the 90s, got funds from DARPA and went to work at Google quite quickly. Stroustrup developed C++ while at Bell Labs in New Jersey. Lerdorf moved to Canada as a teenager before going to work in the USA. Hejlsberg made C# and TypeScript at Microsoft in Redmond. Yukihiro Matsumoto could be an exception but as you rightfully pointed Ruby always remained somewhat niche even after its move to Heroku in San Francisco. James Gosling is Canadian but did his PhD in the USA before developing Java at Sun. Wirth did his PhD at Berkeley before moving to Stanford, where he did most of the work on ALGOL W, the precursor of Pascal, and did multiple sabbaticals at Xerox PARC.


> Yukihiro Matsumoto could be an exception but as you rightfully pointed Ruby always remained somewhat niche even after its move to Heroku in San Francisco.

I'm not sure what you mean in terms of "its move to Heroku in San Francisco". Also, Ruby didn't "fail" and it's not niche (GitHub is written in RoR, as well as Discourse). However, I would argue that Ruby remained relatively niche outside Japan until it was discovered by DHH and used for the Ruby on Rails framework (to this day, it's somewhat hard to find work in Ruby outside of RoR). DHH lived in Denmark at the time but moved to the US shortly thereafter.


When I checked, it seemed that Yukihiro Matsumoto moved to San Francisco to work for Heroku but that's after developing Ruby while in Japan.

> Ruby didn't "fail" and it's not niche (GitHub is written in RoR, as well as discourse).

Ruby definitely is a niche language. I have never seen it used outside of the web, and it's pretty much always mentioned together with RoR. That doesn't preclude success stories developed with Ruby from existing.

It failed in the sense that it has little momentum and didn't gain much traction if you compare it to something like Python. In a way, it's somewhat comparable to OCaml, which was the "failure" I was mentioning initially, despite being a nice language itself and seeing interesting development right now.


> When I checked, it seemed that Yukihiro Matsumoto moved to San Francisco to work for Heroku but that's after developing Ruby while in Japan.

I didn't actually know that, so fair enough. Still, I think DHH probably had a larger impact in popularising Ruby in the US (and, by extension, other parts of the world).

> Ruby definitely is a niche language. I have never seen used outside of the web

That's only if you consider the web to be "niche" and if you do that, then JavaScript is "niche" too.

It's true that Ruby outside of Ruby on Rails is somewhat rare, but several other successful technologies are also written in Ruby, for example:

- Homebrew (macOS package manager)

- Chef (server provisioning software)

- Vagrant (VM provisioning software)

- Cocoapods (iOS package manager)

> It failed in the sense that it has little momentum and didn't gain much traction if you compare it to something like Python. In a way, it's somewhat comparable to Ocaml [...]

I think you're way off base.

Yes, Python is extremely popular and Ruby can't compare overall - although I have a feeling that Ruby still overtakes Python when it comes to web dev, but obviously Python is huge in other areas and is also not exactly niche in web either.

But Ruby is #13 on TIOBE, while OCaml doesn't even feature in the top 50. Github and Discourse are only examples, we could also mention Airbnb, Shopify, Kickstarter, Travis CI and many others. I've personally worked at several Ruby companies, in fact I maintain a small Ruby codebase even now at my current company (although it's not our main language), etc.

Ruby had huge momentum in the 2000s and even early 2010s. It didn't catch on in the enterprises much, true, but it was the cool thing back when everyone was annoyed at the complexity of Java EE or the mess that was PHP back then. Ruby was also the language Twitter was originally written in before they migrated to Scala. It lost a significant amount of momentum since then and basically all of the hype (people migrated to Node, then later to Elixir, Clojure and co. and some like me jumped back to statically typed languages once they became more ergonomic), but it's still maintained by quite a sizeable number of companies.

More than that, RoR had an outsized influence on the state of current backend frameworks to the point where I claim that even one of the most heavily used frameworks today, Spring Boot, takes a lot of inspiration from it (while, of course, being also very different in many areas). I would also argue that Ruby inspired Groovy, which in turn inspired Kotlin, and that internal DSLs such as RSpec also were emulated by a number other languages later.


With that I agree. As discussed elsewhere in the thread, this suggests that having a platform or sponsor is a strong contributor to a language's success.

I'd also add Roberto Ierusalimschy (Lua) and José Valim (Elixir) to this list, both from Brazil. But as a fellow commenter points out, place of birth is less important, when compared to how well the author is integrated into the anglophone old boys network of computer science.

My FP experience has mainly been with Scala (with haskell on side projects).

Is OCaml the ML language du jour?


Yes, it's the most actively developed and the most featurful.

F# is another interesting ML. It has fewer of the features which make OCaml interesting, but it runs on .NET so you have access to a ton of libraries.

SML seems more niche. I don't think it sees much use outside of academia.


People need to be familiar with both approaches. And there's silliness on both sides. I've seen influential imperative OOP programmers on Stack Overflow model a bank account using a single mutable variable, even when they surely know accountants use immutable ledgers.
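
A toy version of the ledger approach in Python (names are made up), where the balance is derived from an immutable history instead of stored in a mutable variable:

    from dataclasses import dataclass

    @dataclass(frozen=True)
    class Entry:
        description: str
        amount: int  # cents; positive = deposit, negative = withdrawal

    def balance(ledger):
        # the balance is derived from the append-only history, never mutated in place
        return sum(entry.amount for entry in ledger)

    ledger = [Entry("opening deposit", 10_000), Entry("coffee", -450)]
    assert balance(ledger) == 9_550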

Most imperative languages since Fortran contain declarative elements, otherwise we'd be adding numbers with side effects. Similarly, most FP languages offer imperative programming. But the real power of FP comes from its restrictions, and yes, query languages are one such (excellent) application. Config languages and contract languages are others.


Your LED example is an interesting one. In the basic model of a computer architecture, the screen is abstracted as a pixel array in memory - set those bits and the screen will render the pixels. The rest is hand waved as hardware.

A pixel array can be trivially modelled as a pure datastructure and then you can use the whole corpus of transformations which are the bread and butter of FP.

A screen is as IO as it comes for most consumers of a screen; we aren't peeking into its internals.

And for me, that's the point of FP - it's not that IO is to be avoided, it's about finding ways of separating your IO from the core logic. I loosely see the monad (as used in industry) as a formalised and more generic "functional core imperative shell"

Now when it comes to pure FP languages, they keep you honest and guide you along this paradigm. That said, it's perfectly possible to write very impure imperative Haskell - I've seen it with my own eyes in some of the biggest proprietary Haskell codebases

But imperative languages don't generally help you in the same way: if you want to do functional core / imperative shell, you need a tonne of discipline and a predefined team consensus to commit to it.


> In the basic model of a computer architecture, the screen is abstracted as a pixel array in memory - set those bits and the screen will render the pixels. The rest is hand waved as hardware.

It was. I still remember the days.

It was nice to be able to put pixels on the screen by poking at a 2D array directly. It simplified so much. Unfortunately, it turned out that our CPUs aren't as fast as we'd like at this task, said array having 10^5, and then 10^6, cells - and the architecture evolved in a way that exposes a complex processing API for high-level operations, where the good ol' PutPixel() is one of the most expensive ones.

It's definitely a win for complex 3D games / applications, but if all you want is to draw some pixels on the screen, and think in pixels, it's not so easy these days.


Screen real estate in memory increased by a square law while clock speeds and bus speeds increased only linearly, so it was pretty clear that hardware acceleration was the way forward by the mid-eighties, when the first GDPs became available. I even wrote a driver for one attached to the BBC Micro to allow all of the VDU calls to be transparently routed to the GDP, for a fantastic speed increase.

I don't think you could have made the GP's point any better for them.

I don't know. What was the GP's point? That FP people like to think too much and sometimes you just want to get stuff done?

Or that FP purists don't know how to actually build useful things? Trololol it took Haskell until the mid 90s to figure out how to do Hello World with IO

To be honest FP is a moving target but I see it as one of the mainstream frontiers of PLT crossing over into industry.

I can accept that to some, exploring FP is not a good fit for their business requirements today, but if companies didn't keep pushing the boat out with language adoption, we'd still be stuck writing Fortran, COBOL or even assembly.

Once upon a time lexical scoping was scoffed at as being quaint and infeasible.

Ruby and Python were also once quaint languages.

Java added lambdas in Java 8.

Rust uses HM type inference.

So what was their point? That FP people spend too much time thinking and don't know how to ship? In which case - I'm grateful that there are people out there treading alternative paths in the space of ways to write code in search of improvement.

In any case, their example was pretty spurious. Anyone who's written real code in production knows IO boundaries quickly descend into a mess of exception handling because things fail, and that's where patterns like railway-oriented programming assist developers in containing that complexity.


q.e.d.

Would love to know what has been proved? Very up for an open and honest discussion.

I'm back to writing imperative after years of functional. I think it is a very pragmatic choice today to go with an imperative language, but I find class-oriented programming to be backwards, and I think functional code will yield something more robust and maintainable given how IO and failure are treated explicitly. I'm not quite sure where the balance tips between moving fast but shipping something unmaintainable vs moving slower but having something more robust and maintainable.

Programming in a pure language is quite radical, it's a full paradigm shift so it feels cumbersome especially if you've invested 10+ years in doing something different. I'd liken it to trying to play table tennis with your off hand in terms of discomfort. There are plenty of impure functional languages around - OCaml, Scala, Clojure, Elixir.... And Javascript (!?!?)

FP is relatively new as a discipline and still comparatively untrodden. What if equal amounts of investment occurred in FP - maybe an equivalent of that ease of led.turn_on would surface.

And tbh it probably just looks like a couple of bits - one for each LED and a centralised event loop. Which so happens to have been a pattern which works quite nicely in FP but emerged in industry to build some of the most foundational things we rely on...


I think functional programming is less popular simply because people just aren't good at it.

Functional programming is a good way to describe how a system works: by describing the input, the processing, the output. You describe the whole diagram.

However, in reality, people just suck at thinking systematically. It's likely that the whole education system never taught you how to do it.

Everyone was taught to do things step by step and smash the results together to see if they work, instead of preparing everything before doing any actual work, most of the time. And if something actually needs preparing, there is usually a pre-made checklist for you to do it easily.

That type of thinking process isn't that common in our daily lives. And of course no one is used to it.

But I think people should at least do it once. Even if you are still programming imperatively afterwards, it could benefit you very much and make you a better programmer.


I agree with you but see another relevant reason: in FP, you HAVE to consider side effects 1) from the beginning and 2) completely, which as anyone can guess is quite a task.

In imperative you can just ignore it and produce objectively worse code, as you are not even aware of all side effects possible. And sure, for the LED project it wouldn't even matter, but the decision FP vs imperative is then more of a design / quality criterion in general - the notion of one being better than the other is just wrong.

Also, a monad seems much more complicated if you don't really understand it, which makes judging it a bit unfair.


What is a side effect? Getting the time? Pushing a result to an output channel? A debug printf? Setting a flag to cache a computation as an optimization? Is it not: evaluating a thunk? Implicitly allocating some memory to store the result of a computation?

Haskellers are trained to have a very inflexible view of what a side effect is. It is dictated by the runtime / the type system. In my view, there are lots of things that Haskellers call "side effects" that I would just shrug my shoulders at, and also lots of things that they do not call side effects but that I do care about. It really depends on the situation.

This fixed dichotomy imposed by the language does more harm than good in my experience. NB: I'm aware that a computation that, for example, gets the system time will get a different time each time it runs. That does not mean that I _have_ to consider it a side effect. I usually do not have a good reason to run the procedure multiple times and expect the runs to be totally identical by all means. In an imperative language, I have very precise control over when this procedure runs.


Apart from language-imposed limitations (Haskell is nowhere near the theoretical completeness of category theory, e.g. the bottom type), the "pure" nature of FP forces the use of abstract structures able to handle effects (e.g. monads). Thus, wanting to write code, you first need to think even possible side effects through to be able to write code containing them, which by definition is a stronger criterion for catching unwanted effects compared to imperative, where you can produce whatever you want. And sure, it is in no way a guarantee of producing good code, it is just a stronger condition. It effectively boils down to "assembly is just as good as C", and we see where that took us.

Anyone telling me they think it through just as rigorously in imperative code is practically lying to themselves, unless they're actually verifying their code.


I don't know, I'm positive I'm not part of the sacred circle, but just a data point. The Haskell applications that I've managed to produce were all uniformly slow-compiling and unmaintainable. And I promise it wasn't for lack of thinking about "side effects".

In my view, the problem is that functional languages give you a toolset to compose functions (code) by connecting them in structures. In Haskell, that is made harder by the restrictive type system (a very limited language for doing type computations) that you must champion, including a myriad of extensions, which invariably led me down dead-end paths that I didn't know how to back out of without starting all over.

But Haskell's restrictive type system aside, every programmer that I consider worth their salt has understood that it's not about the code. Good programmers worry about the data, not the code. Composing code is not a problem for me; I just write one piece of code after the other, and there isn't much else that is needed. I just think about aligning the data such that the final thing the machine has to do is as straightforward as possible. Then the code becomes easy to write as a result.

The possibilities for designing data structures in Haskell are obviously limited by its immutability. Which is, quite frankly, hilarious. "State" is almost by definition central to any computation - and Haskell tries to eliminate it (which of course is only an illusion; in practice we're bending over backwards to achieve mutability). For Haskell in particular, which does not even have decent record update syntax, basic straightforward programming is often just not possible, in my perception. I refuse to resort to a hard-to-use library like lenses to do basic operations that should be _easy_ to code.
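
For what it's worth, here is the kind of thing I mean, as a toy sketch of my own (assuming the lens package): a nested field update that would be a one-liner with mutation needs either clumsy record update syntax or a buy-in to lenses.

  {-# LANGUAGE TemplateHaskell #-}
  import Control.Lens

  data Address = Address { _street :: String, _city :: String } deriving Show
  data Person  = Person  { _name :: String, _address :: Address } deriving Show
  makeLenses ''Address
  makeLenses ''Person

  -- Plain record update syntax gets awkward as soon as the field is nested:
  moveRecord :: Person -> Person
  moveRecord p = p { _address = (_address p) { _city = "Berlin" } }

  -- The lens version is shorter, but now you're committed to the library:
  moveLens :: Person -> Person
  moveLens = address . city .~ "Berlin"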

Even though Haskell is popular, and many programmers (including me) go through a Haskell phase, I haven't seen many large, mature Haskell codebases (basically Pandoc, and GHC if a Haskell compiler counts). Why is that?


I think trying to eliminate unnecessary state and replace it with getters is generally a good thing. One of the biggest bug categories programmers encounter, `forgot to sync XXX`, can be eliminated entirely if you don't copy that state in the first place.

But eliminating all of it... just looks silly to me. You need state anyway, so why not write it in a sane way?
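
A small sketch of what I mean (names are made up, in Haskell for the sake of the thread): if the count is derived from the data instead of stored next to it, there is simply nothing to forget to sync.

  -- Storing the count separately invites "forgot to sync badItemCount" bugs:
  data CartBad = CartBad { badItems :: [String], badItemCount :: Int }

  -- Deriving it means there is nothing to keep in sync:
  newtype Cart = Cart { items :: [String] }

  itemCount :: Cart -> Int
  itemCount = length . items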


As a concrete example, here is a video (and GitHub link) of a program that I'm currently working on, and that I think is not a bad program.

https://vimeo.com/605017327

I already have plans for improving it (especially the layout system), but overall it works pretty well and is reasonably featureful with little code. It's not perfect but "state" is certainly not a problem at all.

I can't tell you something like this can't be coded in maintainable Haskell, but I can tell you that _I_ wouldn't have managed, and googling around it doesn't seem like there are a lot of people who can do it.


It's been ages since I've used a "real" functional language, but wouldn't it be nice to parameterise externalities like time, and have events occur "spontaneously" that force application state to update? Kind of like interrupts. Or, now that I think about it... it sounds a bit like React, where DOM events etc. force application state to update (in a pure fashion).

Has this been done before?

So your LED function in pseudocode looks like

  ToggleLeds(leds, t): 
    for each LED
      LED.power = (LED.start + 1s) > t ? ON : OFF
And this is invoked from main() as follows

  main():
    ToggleLeds(this.LEDs, Events.Time)
Where Events.Time is some kind of event stream which allows the runtime to reevaluate main() and any other dependent functions each time it's updated

edit: And to sidestep the obvious performance issue with the function being reevaluated every few microseconds :D you would implement something like this

  main():
    ToggleLeds(this.LEDs, Events.Time(ms=1000))
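
For what it's worth, here is roughly how that pseudocode might look as an actual (if naive) Haskell sketch, assuming the time package and a simple one-second polling loop standing in for Events.Time(ms=1000):

  import Control.Concurrent (threadDelay)
  import Control.Monad (forever)
  import Data.Time.Clock (UTCTime, getCurrentTime, diffUTCTime)

  data Led = Led { startedAt :: UTCTime, power :: Bool }

  -- Pure core: the LED states are just a function of the current time.
  toggleLeds :: UTCTime -> [Led] -> [Led]
  toggleLeds t = map (\led -> led { power = diffUTCTime t (startedAt led) < 1 })

  -- Impure driver: sample the clock once a second and re-evaluate.
  main :: IO ()
  main = do
    t0 <- getCurrentTime
    let leds = [Led t0 True]
    forever $ do
      t <- getCurrentTime
      print (map power (toggleLeds t leds))
      threadDelay 1000000  -- 1s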

There are FP languages that have and embrace side effects; not everything is Haskell.

C++ and C have spent $0 on marketing and are more popular, so I don't think this is a good indicator of success.

Indeed, C came for $0 with UNIX, which AT&T wasn't allowed to sell and instead provided on source tapes for a symbolic price, at a time when comparable systems cost serious money.

C++ came in the same package as C compilers; on some of them it was just a compiler switch away.

Both were picked up by OS vendors trying to build offerings on top of UNIX clones.

Yep, zero marketing.


Never heard of any conference starting with Cpp or the like, nor does it have a website I guess..

Those exist because the language is popular; they weren't created to market the language before it was popular.


The fact that you linked so many different companies is evidence that this wasn't just some push by a single company. Those things happened in parallel because the language was popular, and they are evidence of a vibrant community more than anything else. Being popular means many will want to make things with it, yes, but saying that it got popular because many people did things with it doesn't make sense.

Yes, because as I mentioned in the other comment you ignored: thanks to UNIX and C being born at AT&T, and the $0 cost of UNIX tooling, with source code, up to the mid-80s.

Had C++ been born somewhere else, as Objective-C was, its popularity wouldn't exist.


> Yes, because as I mentioned in the other comment you ignored

I am not the other person you responded to.

Anyway, don't you think the fact that so many others decided to copy the language and implement their own versions of it is a testament to its popularity and not just that it got pushed by a single company?


It was pushed by UNIX's popularity.

It was, and that still proves that a single company marketing it with $500M didn't happen. Are you also going to say the same about Python, or can we end this discussion?

UNIX was just pushed by AT&T, Sun, HP, IBM, Compaq, DEC.

I'll let you add up how much money they invested in selling their UNIX workstations.

As for Python, Zope carried it through the early days, plus the research labs and companies that employed Guido. If you want a list:

- DARPA funding in 1999

- Zope in 2000

- Google in 2005

- Dropbox in 2013

- Microsoft in 2020

You can sum up Guido's salary at each of those corporations.


Exactly! Thanks for proving the point. It was a collective effort of people/companies pushing a good programming language rather than a single company (Oracle) doing it. Not to mention that they license it in certain cases.

Whatever.

My point is that OP's comparison is useless because Java most definitely has marketing costs, mostly associated with reaching those 8+ million Java developers, the same as other popular languages.

How is this relevant when the topic was that Oracle literally spent $500 million marketing Java? The community of anything popular is marketing itself, yes, but that is a very different thing from having an actual marketing budget to push it to popularity.

That you have never heard of things does not mean they do not exist.... https://cppcon.org/

It was sarcasm.

The sarcasm didn't make sense. The conference was started way, way later, by community organizers from many different companies. This completely disproves that it was a major money push from one company.

I thought Java was once the most popular language before Oracle bought it from Sun Microsystems.

They bought the whole of Sun!

And yes, it was, though in the intervening years they've improved the language a lot IMHO.


This is also a good talk in a similar vein; although aimed at Haskell programmers, it's really about any technology looking to grow into the mainstream:

Gabriel Gonzalez – How to market Haskell to a mainstream programmer - https://youtu.be/fNpsgTIpODA

