Hacker News
The case for dynamic, functional programming (onebigfluke.com)
101 points by todsacerdoti 16 days ago | 115 comments

> You can adopt the techniques of functional programming right now in your favorite language (such as Python, used above).

No tail call optimization, no expression orientation, mutability by default, no variable shadowing, single expression lambdas, no do-notation, reference semantics everywhere, dog slow function calls... functional programming in Python doesn't work in my experience.

Now, there is a great case for functional lisps, like Clojure, since they support all the fancy patterns in cutting-edge Haskell etc. but without an extremely complex compiler. Of course you sacrifice some type-safety.

Like the strict MLs (OCaml, Standard ML, F#, ...), Lisp and Scheme can express the Haskell idioms but lack the pervasive laziness that makes them (somewhat) efficient. For instance, one might write `zip [0..] xs` to number the list `xs` in a single pass. This lack of laziness (even in code that notionally works without it) manifests in blown stacks, in Scala at least. Also, the lack of types (type classes) precludes at least some things, such as the `deriving` mechanism.

For all that you're dead right about Python.

> This lack of laziness (even on code that does notionally work without it) manifests in blowing stacks, in Scala at least.

Please don't spread FUD.

The std. lib functions in Scala are stack safe. (Simple, fast while loops under the hood when it comes to collections).

To pick up the `zip` example:

  val r = (1 to 100_000_000).zip(1 to 100_000_000)
won't blow up with a stack overflow.


Also custom recursive functions can be made safe (for example by using trampolining).
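The same trick works in any language with first-class functions; a minimal sketch of trampolining in Python (function names here are illustrative):

```python
def trampoline(f, *args):
    """Drive a 'bouncing' computation: keep calling returned
    thunks until a non-callable value appears."""
    result = f(*args)
    while callable(result):
        result = result()
    return result

def countdown(n):
    # Return a zero-argument thunk for the next step instead of
    # recursing directly, so the stack never grows.
    if n == 0:
        return "done"
    return lambda: countdown(n - 1)

# Runs far past Python's default recursion limit (~1000):
print(trampoline(countdown, 100_000))   # => done
```

The recursion is traded for a constant-depth loop in the driver, which is essentially what trampolining libraries do under the hood.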

You blow the stack in Scala only when you explicitly ask for it by writing unsafe code. (Which, BTW, applies to more or less all languages, as almost nobody ever repeated the mistake of building a lazy-by-default language.)


With SRFI 41 (streams) this can be expressed lazily in Scheme, but it is more verbose than Haskell:

    (import (srfi srfi-41))
    (define xs '(a b c))
    (stream-zip (stream-from 0) (list->stream xs))
This returns a lazy stream whose values can be evaluated later.

Sounds irrational, but it looks much more beautiful to me. My brain says I should like Haskell, mirroring my experience that static languages with a strong type system are much better for big projects. But my heart knows I would always prefer Lisp.

The strict MLs often have a `lazy` keyword, but it's definitely not as tidy as Haskell for this use-case. I'm sure a clever Clojure macro could be devised for the same!

Functions like map and range return lazy lists in Clojure. This is how I'd implement zip:

   (def zip (partial map vector))

   (zip (range) [:a :b :c])
   ;;=> ([0 :a] [1 :b] [2 :c])

Hell, even JS is a better functional language than Python.

Author here: I agree that another dynamic language with all of those attributes would be great.

Elixir. Lets you do stateful declarative stuff (talking to databases, communication, disk) with no fuss. No mutability, single line lambdas, fast function calls.

Plus, ridiculously easy concurrency, customizable failure domains, first-class concurrent testing...

Might as well use Erlang?

No? The syntax and standard library are vastly different in terms of ergonomics. You could argue whether it's better, but it's undeniably more familiar to non-Erlangers.

(Alternatively, you already are using Erlang anyway.)

Well, Erlang has slightly nicer syntax, IMHO.

I would have assumed you can use the same library functions?

(Sorry, I mostly only used Erlang and only briefly looked into Elixir and didn't find anything useful compared to just plain Erlang.)

"didn't find anything useful"

Better documentation, better package management, testing framework, consistent function calls, opinionated standards on how to name things (get vs fetch, ! suffix for crashables, is vs ?, size vs length), Protocols, not having lexical macros, not needing make, Testing Framework, compile environments (test, dev, prod), $callers, custom guards, Task, Registry, DynamicSupervisor, IO.inspect (now dbg), mix tasks, TESTING FRAMEWORK...

Or former Prolog users.

Could Google adopt such a language? They were early on Python, which was great! Perhaps they could "bless" Clojure - or more likely build their own.

I'm not sure what Python has to do with functional programming, even remotely. It basically contains none of the typical building blocks for functional programming (immutable data, real lambdas, pattern matching, sane scoping, etc.). Python has weird scoping, weird lambdas, and fully on mutability.

Regarding static versus dynamic functional programming, I think dynamic typing is probably oversold in that most dynamic programs could be static programs with very little effort. Where dynamic types really shine are at boundaries of programs. Dynamic types are very useful for handling messages, I/O, data formats, etc.

I actually think an ideal language would be one that is at its core a statically typed functional language, but then it has dynamic types allowed at the above stated boundaries. Then there could be some explicit conversion layer or solution that moves values from one side to the other.

Basically all other uses of dynamic typing can be solved in a statically typed language that has anonymous types like anonymous records and anonymous discriminated unions and some sort of "umbrella" or "shared" type such as interfaces, generics, and/or typeclasses, for example.

What do you mean by 'real lambdas'?

The functions you can create with Python's lambda syntax are a bit limited, but that's a purely syntactical limitation: you can create named functions just fine and pass them around.
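A quick sketch of that workaround (names invented for illustration): any multi-statement "lambda" can simply be a local `def` passed by name.

```python
def apply_twice(f, x):
    return f(f(x))

# Impossible as a Python lambda (multiple statements), trivial as a def:
def clamp_and_double(n):
    n = max(0, n)
    return n * 2

print(apply_twice(clamp_and_double, -3))   # => 0
print(apply_twice(clamp_and_double, 3))    # => 12
```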

(I agree with most of the rest of what you are saying.)

They are likely referring to the hoops you have to jump through to get early instead of late binding. The boilerplate becomes automatic after a while, but that doesn't make it pleasant.
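For anyone unfamiliar, the hoops look roughly like this: Python closures capture variables, not values, so the standard boilerplate is the default-argument trick.

```python
# Late binding: every lambda below sees the final value of i.
fns = [lambda: i for i in range(3)]
print([f() for f in fns])        # => [2, 2, 2]

# The boilerplate fix: freeze the current value with a default argument.
fns = [lambda i=i: i for i in range(3)]
print([f() for f in fns])        # => [0, 1, 2]
```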

You get lexical scoping either way.

Of course, closures and mutability everywhere don't interact well. You have to make copies of values manually before they get mutated away, if that's what you refer to by 'early binding'?

Haskell is this. It is static with a dynamic type: https://downloads.haskell.org/~ghc/9.2.4/docs/html/libraries...

I think Typescript is close. It does suffer from JS heritage.

A TS like language that also had a way to runtime check types (for those data boundaries) but mix in some “any” when needed would be a hell of a pragmatic language.

A robust approach for data-boundaries is to write parsers or decoders. The Elm community has done some nice work here.
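As a rough sketch of the decoder idea in Python (the `User`/`decode_user` names are made up for illustration, not from any particular library): untrusted dynamic data is converted into a well-typed value exactly once, at the boundary, or rejected.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class User:
    name: str
    age: int

def decode_user(raw):
    """Turn untrusted dynamic data into a typed value, or fail loudly."""
    if not isinstance(raw, dict):
        raise ValueError("expected an object")
    name, age = raw.get("name"), raw.get("age")
    if not isinstance(name, str):
        raise ValueError("'name' must be a string")
    if not isinstance(age, int):
        raise ValueError("'age' must be an integer")
    return User(name=name, age=age)

print(decode_user({"name": "Ada", "age": 36}))   # => User(name='Ada', age=36)
```

Everything downstream of the decoder can then rely on the shape of `User` without re-checking.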

Yes, I am suggesting building that sort of thing, but going even further and making it a language feature (I know purists prefer fewer language features, which is OK). I feel it is a bit early in its journey, but Mint is a good example:


It is made by someone coming from the Elm community, who made a lot of Elm packages.

Common Lisp is indeed pretty nice.

Indeed, and I would further argue that business data be dynamically typed, because business data is often messy and changeable, whereas the plumbing that carries it can be statically typed to ensure you are connecting the pipes correctly.

> I would further argue that business data be dynamically typed, because business data is often messy and changeable

Huh? Static types make change and refactoring much easier.

It's complicated.

Static typing easily leads to an over-specification of types.

With dynamic typing the least specific type that will make this work is automatically implied. You can often make huge changes to the underlying data structures and things will just work.

Basically: "I need a Car object with these exact fields" vs. "give me a thing with the field 'foobar' that contains something I can concatenate with another thing".
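In Python terms, the contrast looks something like this sketch (class and function names invented for illustration):

```python
# Nominal view: "I need a Car object with these exact fields."
class Car:
    def __init__(self, foobar):
        self.foobar = foobar

# Duck-typed view: "give me anything with a .foobar I can concatenate."
def label(thing, suffix):
    return thing.foobar + suffix

class Boat:                # no shared base class, just the right field
    foobar = "boat-"

print(label(Car("car-"), "42"))   # => car-42
print(label(Boat(), "42"))        # => boat-42
```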

(Though static languages with structural typing like TS muddle the water here as they can represent some of the power of dynamic languages. That might be part of why TS is so popular. The problem is that types in dynamic languages can easily get super complex without feeling complex while for static languages you might need very advanced features (dependent types and other scary stuff) to represent some cases.)

The big downside when working with dynamic languages: you need to have tests, or at least some reliable way to test your program. If not, refactoring can indeed be worse than what you know from static languages.

Also static types make it easier to safely make changes to a program without actually understanding most of the code as you can just follow compiler errors while dynamic languages require you to have some model of what is going on in your head. So for people that jump projects a lot, static typing also allows for easier refactoring while dynamic typing allows for more productivity when one is more intimate with the code.

> Basically: I need a Car Object with these exact fields vs give me a thing with the field "foobar" that contains something that I can concatenate with another things.

Um, that's another matter entirely: polymorphism. Until recently, mainstream statically typed programming languages knew exactly one kind of polymorphism, based on objects, classes and interfaces. It is this object-level polymorphism that lies at the heart of OOP.

This form of polymorphism is easy to implement in a compiler or interpreter, but it is also very limited in expressiveness. So proper functional languages use parametric object-level and function-level polymorphism and allow dictionary passing (both implicit and explicit). This approach has filtered into mainstream programming languages: generics in Java, templates in C++ and so on.

Yes, I am aware of that.

The point is, Polymorphism in dynamic languages is pretty trivial.

There is a reason the golang devs resisted introducing generics for so long, it introduces complexity and can be hard to debug.

If you read my side-note:

> The problem is that types in dynamic languages can easily get super complex without feeling complex while for static languages you might need very advanced features (dependent types and other scary stuff) to represent some cases

If you want a functional example: Elm and its lack of type classes or similar features. There is a reason Elm resists implementing a better way to add ad hoc polymorphism, for better or worse. It is difficult to fit into Elm's goal of being a simple and easy-to-use language.

So yes, you can either have a more complex static type system and pay the price for that, or just bite the bullet and over-specify types. (The problem is also one of ergonomics. Programmers might over-specify types because that is the path of least resistance, or because things like generics make code harder to debug.)

> immutable data

sort-of possible now (frozen*)

> pattern matching

That has been added recently

Neither is foundational in the language, and both have all sorts of quirks. If anything, it makes Python akin to C++'s bloat, an ultimately ironic echo of the complaint at the beginning of the article.

I'd say that Python is uniquely functional not because of features or its authors' intention, but by being somewhat flexible and having users interested in writing in this style. So despite its quirks, for many people it is as functional as they can get, and for some even as much as they need.

Judging by the title, I was expecting to read about the advantages of dynamically-typed functional programming languages like Scheme and Clojure over statically-typed functional languages like Standard ML and Haskell. It seems to me that large segments of the functional programming community love static typing, and the research community has done (and continues to do) a lot of work applying type theory to the design of programming languages. However, there are still many advocates of dynamically-typed functional programming languages, generally from the Lisp family.

I don’t think of functional programming when I use Python, even though it’s possible to program in a functional style. I’m also unaware of any Python development environments that offer Smalltalk or Common Lisp levels of interactivity, though this is less of a criticism of the language and more of a matter of tooling.

Yup, I would love to see a bit more love for Lisp style languages. They're incredibly powerful and it's a shame they're relegated to the fringes of professional programming.

It's because LISP has a "great filter" for high IQs.

Programming has many such filters:

Understanding pointers/references, and the von Neumann architecture behind them, separates the python/script kiddies from the "real" C programmers.

Recursion separates the programming-is-a-job crowd from those with the beginnings of an algorithmic understanding of computation.

LISP is heavy recursion, non-infix notation, lambda calculus, and other concepts that are simply too much for a garden variety programmer.

Lisp evolved into ML and Haskell and these have influenced professional programming enormously over the last few decades. Or are you advocating we ditch all progress and go back to Lisp? Have you tried an ML such as OCaml? It has built-in cons lists, convenient symbol manipulation and macros.

Influenced but didn't evolve into. For one thing ML and Haskell have a totally different approach to types.

ML was designed in the 70s by Robin Milner to replace all the LISP code he had. Just like LISP it was designed for symbolic manipulation. They are far more similar than you are giving them credit for; and usually recognised as part of the same family tree. The System F calculus also has a "totally different approach to types" but it evolved from the Lambda calculus. ML and Haskell are based on System F and its successors.

Frankly, yes, I wouldn't mind going back to boring old lisp. Particularly when confronted with Java or Javascript. I also have no real desire to dive into strongly typed functional languages - see the "dynamic" part of the original title.

And all of those modern functional languages are as fringe (or even more fringe) than Lisp is. I wish this wasn't the case, but let's be honest: programming languages that appeal to mainstream developers and companies will always optimize for development teams of wildly varying skill.

> I also have no real desire to dive into strongly typed functional languages -

That's a big shame. But don't knock it until you've tried it.

> And all of those modern functional languages are as fringe (or even more fringe) than Lisp is.

I'm not convinced Lisp is more popular anymore. I've spent the last 10 years being a full time Haskell and OCaml programmer. The killer application is compilers and associated tooling. Almost no one is choosing Lisp for this today. And why would they?

> but let's be honest: programming languages that appeal to mainstream developers

Where do you think the newer mainstream languages are getting e.g. algebraic datatypes and pattern matching from?

That's interesting. Have you written anything further about ML vs lisp for tooling, or could point to a reference? Compiler dev here, C++ when paid and lisp when not. Interested in SML but haven't spent the time to assess what I'm missing.

Exhaustive pattern matching over AST nodes is a good trick. That might interact better with the type constructors than an ad hoc representation in CLOS. It seems possible I should stop putting everything in a map.

Is the edge over lisp in the compile time rejection of incorrect programs or elsewhere?

Although one point is that many only learn Lisp from FOSS tooling, instead of the Lisp Machine survivors, CCL, Allegro and LispWorks.

One of the talks at this year's HIW is about how Haskell could be better in IDE tooling.

Why does every thread about functional programming and/or lisp have to have this not-so-subtle dig at developers who don't use them?

Do you really think developers who prefer functional programming or lisp(s) are smarter on average than other developers?

No smarter, but they likely care a great deal more about programming.

I doubt that very much.

> programming languages that appeal to mainstream developers and companies will always optimize for development teams of wildly varying skill

Does emacs have a hotkey for this sentence? Or why does it keep coming up in every discussion remotely related to lisp?

Why do you think so many Lisp fans (e.g., Norvig) went on to use Python instead of those ML alternatives?

Norvig is a teacher and author who wants to reach the widest audience possible. So in his case, it was either Java or Python.

Norvig wrote an article or post where he deemed Python to be “an acceptable lisp” (or something like that, I may not have it exactly right).

Hardly a ringing endorsement! Python is certainly succinct and expressive like Lisp (when his alternative is Java). ML was designed to be a better Lisp.

ML wasn't designed as a better Lisp. It was designed as a 'better' (and statically typed) functional programming language for writing proofs in a theorem prover. The theorem prover and then also the first ML version was written in Lisp. That was the motivation for the first version of ML.

I did not mean to imply it was a Lisp. It was designed to solve the same problems as lisp, i.e. symbolic manipulation; and was used to rewrite code originally written in Lisp.

ML was at first a domain specific language for writing/assisting proofs: a) maths-like notation, b) functional, c) statically typed, ...

Lisp did only do b) to some extent. It was and still is common to develop languages as internal or external domain specific languages on top of Lisp. ML was such an example. Often these languages then are/were ported away from Lisp to a specialized (and possibly smaller/more efficient) implementation. Thus Lisp served as a prototyping environment.

100% agree with both of these comments. I'd love to see a resurgence of interest in this area.

What we need are examples of Lisp-style languages leading to big wins. Clojure did that to some extent, but unfortunately in the Common Lisp world successful commercial or technical projects are few and far between.

Our best bet is probably making interesting advances in the open source field, until something takes off, which is largely a factor of luck.

My biggest critique of the Lisp crowd. I mean, look at what gets made in Go, Go is barely 10 years old. Go is a worse Java, not as egregious an example of "worse is better" as Javascript, but still... a backwards step outside of its threading model.

For a language with such purported raw power, it is lacking in databases, operating systems, management software, games, etc. AI used to be its killer app, but it seems the latest AI revolution isn't functional-programming based.

One of the issues is that Lisp makes an individual programmer very powerful. The Paul Graham Lisp essay that gets brought up CONSTANTLY is an example of that. He built an entire ecommerce site by himself, but it was incomprehensible to the people that bought it from him and they rewrote it in some more pedestrian language.

Lisp lends itself to ivory tower constructions of abstractions that are ridiculously powerful to the one person that wrote them: the programmer, but the rest of the world will just end up reinventing their own wheels.

It takes a higher IQ than the person who wrote the code to understand it about as well as the original author did. But if LISP people are all high IQ, then the pool of people capable of reading someone else's Lisp code must be of such high IQ that the whole thing falls apart.

I look at people like the Jepsen guy who uses clojure to test about the most difficult thing to test in the world: concurrent distributed systems, and other ultrasmart people and understand that FP is powerful. It also has advantages in true heavy concurrent programming, although FP seems to be squandering this.

I think it goes back to the operating environment. If Lisp machines were the killer apps they seemed, the lisp people need a novel OS + GUI + IDE + REPL computing environment to visually sell themselves, kind of like what smalltalk did back in the day.

I can't express how much I agree with everything you say.

Often, I've wondered if Lisp attracts, due to its power and flexibility, a certain type of "lone hacker" who builds amazing but unscalable ivory towers of abstractions.

One example of this, often mentioned as a success, but which I consider a failure if you take the longer view, is Naughty Dog using Lisp to develop their game engine compiler and such. This was so specialized that their new owner Sony balked and made them use standard tools.

Alternately, you could say that Lisp does exactly what it was meant to do: enable single and small teams of developers to engage in very productive exploratory programming. Carl Hewitt once told me that back in the heyday of Lisp (the 80's) they believed general AI would be solved by small teams of developers. Current AI efforts are the total opposite.

In this alternate interpretation, perhaps Common Lisp is simply not suited for large scale development or collaborations. Either due to language features or personality types. So maybe prototype in CL, then hire a few dozen Java devs to go to market.

I'm not trying to be down on CL. I love it and have been developing my open-source 3D system in it. But man, I can't believe there isn't a single cross-platform GUI toolkit for CL! Or a decent open-source IDE.

I guess we can do our small part and hope for the best.

> What we need are examples of Lisp-style languages leading to big wins.

How about JPL, used to control Mars probes and landers using ANSI lisp?

The most successful and influential CS course to date?

This website?

Lisp successes are out there. But it is harder to learn and understand than imperative languages, so it will never win out over Java and C# and Go and their ilk.

Due to the demand for developers outstripping the supply, we need languages that are safe for developers with wildly varying levels of skill to use - that limit the blast radius of their efforts in the worst case. One of the early design goals of Golang was exactly this - to make it simple for developers of varying skill to work together on code.

This isn't a knock on mediocre developers individually, but it is an unfortunate side effect of our need for so many of them.

The JPL Lisp story is a fun story, and cool application of Lisp over 20 years ago. But it is by no means a Lisp success story. There is a reason no space probe since then has been controlled using Lisp (as far as I know).

Your links didn't come through, so I don't know what web site you are referring to. Maybe Orbitz? That was one success and still continues to be developed in CL by Google AFAIK. One outlier data point perhaps.

As for the 6.001 course at MIT, I took that course back in the day, and was saddened to hear it now is taught in Python.

You are correct that we need languages which have a ready supply of developers. In a commercial setting that fact alone will trump any language features or technology advantage. Java was meant to be the new Cobol and works hard at limiting programmer flexibility so they don't shoot themselves in the foot.

"This website" referred to Hacker News itself, which is written in a Lisp (Arc).

I don't think there's anything remarkable about this website that could be considered a "big win" for Lisp. It's a totally run-of-the-mill dynamic web app that could have been developed more quickly in any web framework such as Rails or Django.

In fact, the hacky way it was implemented in Lisp had some clear downsides. In the early days (I'm probably going to mess up the details but hopefully the gist will come through) there was a notorious failure mode due to the way entities such as stories and comments were stored in memory using closures. These closures obviously had to be cleaned out periodically, and so if you stayed on a page too long and then clicked a link on the page, it would be invalid. You'd have to go back and refresh the page and click the link again. I don't think it's out of bounds to say this website is basically a rehash of the hacks pg came up with in the mid 90s to implement web apps in Lisp for Viaweb. The fact that he got rich off those hacks may have been a selling point for Lisp 20 years ago when he wrote Beating the Averages, but the world has moved on.

Would have been a stronger case if Julia, Erlang/Elixir, or Clojure/Racket was used.

Python is one of the worst dynamic/FP languages I've ever had the displeasure to utilize.

I think the OP needs to step out of their comfort zone of Python.

The point of the article isn't to say that python is better, all it's saying is that a more functional style leads to cleaner code. This is irrespective of python's technical merits.

With a dynamic, functional language you could enjoy all of this simplicity. But you don't even need that to benefit from this perspective today! You can adopt the techniques of functional programming right now in your favorite language (such as Python, used above). What's important is your mindset and how you approach problems, minimizing state and maximizing clarity in the code you write.

I suspect the point of the article was to be seen again and remind people of his book.

Whether or not that is the case, surely the substance of the article is more worthy of your analysis than something as intangible as the author's internal motivations? Assuming positive intent is a worthwhile frame of reference that keeps my world looking a bit brighter, and something I always hope others do for me.

Fair enough. My comment above was unnecessary.

I did elsewhere in a top level comment go into detail about why I thought TFA wasn't good. https://news.ycombinator.com/item?id=33722399

I agree that another language that's better for functional programming would make this more compelling. Python has a lot of limitations in this area. I think all of those alternatives you listed (Julia, Clojure, etc) include turn-offs for Python programmers that are deal breakers. So I'm seeking another language that appeals to Python programmers but has more of the attributes that are amenable to functional programming.

IMO, Ruby is a better general purpose "scripting" language like Python that has cleaner and more consistent features - including ones that lend themselves to functional approaches. Outside of web development it doesn't have the same appeal as Python due to the larger Python ecosystem... (many companies think they want to and will do "data science!")

Also, in my experience, Pythonistas tend to be more resistant to paradigm changes. So often I have heard, "Why would you want to do that?", or "This works fine as it is." Not that I'm a fan of list comprehensions in Python (I think they're awkward and ugly compared to Ruby collection operations), but the Python codebases I have had the displeasure of working in had lots of nested loops, mutations everywhere, copy-pasted code, and 30+ line methods. Trying to encourage single responsibility, composability, less OOP, and more pure functions is like shoveling water uphill.

With both Ruby and Python you do need to be a bit thoughtful about unnecessary collection copies (which you would tend to favor when writing pure functions); but often you have a good sense of how large a volume of data you are handling and where copies will be particularly slow or heavy to do. When necessary, you can have an impure function that mutates input data, and at least in Ruby you can warn callers of the impending mutation by adding the ! to the function name. update_order!(order).
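Python lacks the `!` convention, but the same pure-vs-mutating split can be sketched there too (function and field names invented for illustration):

```python
# Pure: build and return an updated copy; the caller's dict is untouched.
def apply_discount(order, pct):
    return {**order, "total": order["total"] * (1 - pct)}

# Impure: mutate in place. With no `!` suffix available, the name
# itself has to carry the warning.
def apply_discount_in_place(order, pct):
    order["total"] *= (1 - pct)

order = {"id": 1, "total": 100.0}
new_order = apply_discount(order, 0.1)
print(order["total"], new_order["total"])   # => 100.0 90.0
```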

Unfortunately this toy example isn't even pure functional because it raises an exception as a way of providing an alternative return from the function.

A better example, if one wanted to keep a single function pure and still keep (two) states would be to return a tuple of (current_count, is_expired). Further, since it uses zero as the implicit boundary to determine when the counter is complete by counting down from the provided value, the function should be named countdown_timer() or something more descriptive.
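A minimal sketch of that suggestion, assuming the function counts down by one per call (the name `countdown_timer` follows the suggestion above and is not from the article):

```python
def countdown_timer(count):
    """Pure step: the new state and the 'expired' flag travel in the
    return value; no exceptions, no mutation."""
    if count <= 0:
        return (0, True)
    return (count - 1, False)

state, expired = countdown_timer(3)        # => (2, False)
while not expired:
    state, expired = countdown_timer(state)
print(state, expired)                      # => 0 True
```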

Honestly I'm not sure who this little article was written for. The references to C++ were irrelevant. The OOP Python example, however, was a pretty common example of how Python people write code. It's a shame, really. We should (edit: not) have to deprogram people to get them to choose something more like the latter "functional" approach over the former pointlessly-OOPy example.

Why do you consider raising an exception not being pure functional? In my opinion, it is exactly equivalent to propagating error return value up the call chain until it is handled (but is faster if no exception is thrown, and contains a stack trace of where the original error value was produced for programmer's convenience).

If you take that view, yes, it's functional.

However, if you take that sort of view, everything is functional. Writing to the screen can be viewed as having returned instructions to write to the screen, combined with an interpreter that performs it. Writing to the network is like having returned an instruction to write the network, along with an interpreter to do it. (And a certain laissez faire attitude about laziness, perhaps.)
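That "return instructions plus an interpreter" framing can be made literal with a toy sketch (everything here is illustrative):

```python
# The pure part only *describes* effects as data...
def greet(name):
    return [("print", f"hello, {name}")]

# ...and a separate, impure interpreter performs them.
def run(instructions):
    for op, arg in instructions:
        if op == "print":
            print(arg)

run(greet("world"))   # prints "hello, world"
```

`greet` is trivially testable and referentially transparent; all the impurity is pushed into `run`.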

This view is not necessarily wrong. There is a certain truth to it; for one thing it's not a terrible gateway to understanding how "pure functional" languages interact with the real world in practice, because this is pretty close to how they do it.

However, it is useless, because it means every language is a pure functional language, and the utility of a term is its ability to split the universe under discussion into multiple pieces. Or to put it another way, a decision procedure that puts 100% of the items under decision into the same category conveys exactly zero bits of information.

So for a definition of "pure functional" to have any utility, it needs to exclude some things. Exceptions are borderline. Technically Haskell does indeed have them, but it has a complicated relationship with them. It is certainly the case that we are discussing something that is no longer in the function signature, which people generally consider not "pure functional". But it definitely has that "well, we could pretend we returned a special value and had a special interpreter wrapped around every call" copout feel to it.

I am not trying to argue, rather being curious.

I've tried to look up what "pure functional" means exactly, and failed to find anything better than Wikipedia[0]:

    In computer programming, a pure function is a function that has the following properties:
    1. the function return values are identical for identical arguments (no variation with local static variables, non-local variables, mutable reference arguments or input streams), and
    2. the function has no side effects (no mutation of local static variables, non-local variables, mutable reference arguments or input/output streams).
I don't see how that excludes exceptions. I agree that exceptions kind of "feel" imperative rather than functional, but could you maybe help me find an accepted definition which excludes exceptions?

[0] https://en.wikipedia.org/wiki/Pure_function

I think it excludes exceptions: exceptions are a side effect

Raising an exception is not returning a value; it is essentially a GOTO.

A pure function has no side effects. Maybe it takes an input, and maybe it returns a result. But whatever it does internally has no effect on the outside world. Raising an exception is definitely a side effect.

Worse yet, in TFA example it wasn't even necessary to use an exception in place of a returned state. I would argue that the exception example was, irrespective of functional vs imperative principles, a bad use of exceptions.

Yeah, I agree that in that Python example, use of exceptions is awful (and the entire example is awful, I think).

> But whatever it does internally has no effect on the outside world. Raising an exception is definitely a side effect.

However, would you please elaborate: how do exceptions affect the outside world, in your opinion? As far as I understand, exceptions only "return an exceptional value" up the call stack, to the calling function. That is, barring a total program crash in the case of an unhandled exception, but an unhandled exception is a programmer's error, like integer division by zero or an infinite memory allocation loop.

Raising an exception affects control flow, which is the execution state of the program.

TFA example misuses exceptions as just another return value, requiring the caller to have additional code to handle "normal" and "other" return values. So as we and others seem to agree, that is not even a good example. At this point we kind of get stuck in the whole discussion about exceptions and how to handle them. This is probably a hot area that I should avoid getting into, but here is an _example_ of how to handle errors without having exceptions (and without impacting execution states): https://go.dev/blog/error-handling-and-go
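To make the Go-style approach concrete, here's a minimal sketch in Python, using a hypothetical safe_div helper that returns a (value, error) pair instead of raising:

```python
def safe_div(a, b):
    """Go-style: return (result, error) instead of raising an exception."""
    if b == 0:
        return None, "division by zero"
    return a / b, None

value, err = safe_div(10, 2)
if err is None:
    print(value)  # 5.0
else:
    print("error:", err)
```

The error stays an ordinary return value, so control flow never jumps: the caller handles it (or ignores it) with plain branching.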

Function application, where you apply a function to a collection of data, requires that the applied function returns a value (and does not throw an exception). As a Python example (not the prettiest, but it's Python...), imagine you have a list of strings from which you want to strip the leading and trailing whitespace.

  x = [' a', 'b ', None]
  list(map(str.strip, x))  # => TypeError, because you can't strip None

To work around this, you have to do something like an anonymous function to validate your inputs before calling strip:

  list(map(lambda s: s.strip() if s is not None else None, x))  # => ['a', 'b', None]

In this case it's clear why Python throws an exception, and we were able to avoid it by validating the input during the mapping. But if throwing an exception were a normal behavior of the mapped function, just another type of result, we couldn't prevent it: we would either be unable to apply the timer() function to a list of inputs, or we would need a form of map() that also accepted an exception-handler function and somehow wove its results into the normal map output. Such a thing may exist, but I don't know Python well enough; and if it does exist, it would only make the language even uglier ;). [Edit: on second thought, the solution would be to make a better_timer() wrapper which calls timer(), handles the exception, and more appropriately returns [time_remaining, complete_flag]. Then we would pass better_timer() to map().]
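Here's a hedged sketch of that wrapper idea. Since TFA's timer() isn't reproduced in this thread, the timer() function and the Done exception below are stand-ins, not the article's actual code:

```python
class Done(Exception):
    """Stand-in for the exception TFA's timer raises on completion."""

def timer(t):
    # hypothetical stand-in: raises instead of returning when time is up
    if t <= 0:
        raise Done()
    return t - 1

def better_timer(t):
    """Wrap timer() so the exception becomes an ordinary return value."""
    try:
        return timer(t), False   # (time_remaining, complete_flag)
    except Done:
        return 0, True

print(list(map(better_timer, [3, 1, 0])))  # [(2, False), (0, False), (0, True)]
```

Once the exception is absorbed at the wrapper boundary, the function composes cleanly with map() again.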

There are even nicer benefits to purity if you are using higher order functions that may be called later/elsewhere. Instead of taking data through some series of transformations, you take data and build a series of operations which can later be used in a transformation. Then somewhere more toward the edge, well beyond your core business rules, you execute the series of transformations (functions) on your starting data to get an end result. This approach can't be used all the time, but there are cases where it works well. But like the map() example above, if you have to also handle exceptions as normal results returned from functions, then you have to have additional exception handler functions that you carry around with the other functions in case they are needed. Yuck.
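A small sketch of the "build the transformations now, run them at the edge" idea, using a hypothetical pipeline() helper:

```python
from functools import reduce

def pipeline(*fns):
    """Compose functions left-to-right into one deferred transformation."""
    return lambda x: reduce(lambda acc, f: f(acc), fns, x)

# Core code only describes the transformation; nothing executes yet.
normalize = pipeline(str.strip, str.lower, lambda s: s.replace(" ", "-"))

# At the edge, apply the built-up transformation to actual data.
print(normalize("  Hello World  "))  # hello-world
```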

Lean maps returns into exceptions.


> With Python it's much easier to become proficient and remember everything you need to know. That translates to more time being productive writing code instead of reading documentation

Until your python program reaches a certain size where the lack of type safety and other protections becomes a real liability. Contrast that to something like Rust, where it's easy to have confidence in very large code bases.

As someone who's worked on a huge Python program (10M+ LOC), type safety was the last thing on our mind when maintaining it.

Refactoring might not have been quite as simple, but there are plenty of tools available for refactoring Python code, and we were never afraid of doing it.

Though, working on a project right now in Java, the type system isn't exactly saving us from refactoring failures, because the type system doesn't ensure that the String or Integer or <low level interface> being passed in is the right one.

Of course, I'm sure we're just "doing it wrong".

> As someone who's worked on a huge Python program (10M+ LOC)

Holy moly. I can't imagine working on a codebase like this in Python.

I maintain several small codebases (<10k LOC) at work. Even at this size, I definitely feel the lack of type safety -- specifically, at the boundaries of logical blocks of code. When refactoring, the lack of thorough type-checking means that I need to mentally propagate my changes throughout the codebase. This is exhausting!

So, what was the first thing on your mind when maintaining it? Python gives you plenty else to worry about besides types.

There’s a significant delta between Java types and Haskell types.

How do you know someone's a Haskell programmer? They insert themselves into every discussion about typing to point out how bad all programming languages that aren't Haskell are at types.

I could just as well have said OCaml. Or Idris. Or PureScript. Or Elm.

The point isn't Haskell. The point is not Java.

You picked one of the best dynamic languages and compared it to the literal bottom of the barrel of statically typed languages. And then you complain when someone points this out. This seems to me like you are acting in bad faith.

I don't think the accusation of bad faith is necessary. Don't you think it's more likely the commenter was just missing the point?

I've had a similar experience in a large Python codebase!

> As someone who's worked on a huge Python program (10M+ LOC), type safety was the last thing on our mind when maintaining it.

This is true. Python dictionaries are extremely powerful, and they obviate the need to use elaborate types throughout the code. There is even a special syntax for making dictionaries in Python. In a code base with lots of dictionaries, type errors are a thing of the past.

Everything-as-a-dict results in KeyError instead of type errors. But of course, the values have a type, unless it's dicts all the way down. And I really wonder what you'd be using those 10M LOC for, with that approach...
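For example, with a hypothetical dict, a typo'd key is invisible until the affected code path actually runs:

```python
user = {"name": "Ada", "age": 36}

# The misspelled key fails only at runtime, and only on this code path;
# a static checker can't see that "nmae" isn't a valid field of a plain dict.
try:
    user["nmae"]
except KeyError as e:
    print("KeyError:", e)
```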

> dynamic languages don't require you to "emulate the compiler" in your head

On the contrary. Your ‘functional’ timer function takes an argument and applies the - operator to it.

You must be sure, every time you call the function, that you pass an argument to which - can be applied. If you don't, your user might see a TypeError. So you, the human author of the code, must ‘emulate the compiler’ by performing type inference and type checking in your head.

In my statically-typed language, the compiler ensures that my program only ever passes arguments for which the - operator is applicable. I don't need to ‘emulate the compiler’, because I'm actually using a compiler.
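To illustrate in Python itself: with annotations, a static checker such as mypy can flag the bad call before the program ever runs, while the dynamic version surfaces the mistake only as a runtime TypeError. (The timer here is a stand-in for illustration, not TFA's.)

```python
def timer(seconds: float) -> float:
    # With the annotation, a static checker flags timer("5") before running;
    # without it, the bug surfaces only as a TypeError at runtime.
    return seconds - 1

print(timer(5))   # 4
# timer("5")      # mypy reports an incompatible argument type; at runtime: TypeError
```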

I wouldn't do the type checking in my head, I'd just assume it's going to work as intended. If that's not the case, then an error will happen. What's wrong with the user seeing an error? There's always a chance my program will have a bug and fail. So I have to handle that case no matter what and properly deal with unexpected errors, error reporting, etc.

Sure, you always have to handle the case that an unexpected error occurs, but every unexpected error that occurs in production has a cost. What that cost is depends on what your software does and how your users are using it, but there's always a cost. You may decide in your particular context that there is no amount or severity of errors that is too costly for your users to encounter, but most successful software endeavours have some limits.

Bugs are inevitable but we should strive to have as few as possible. Your program not working right is obviously bad UX (as we've all experienced using buggy software).

The reason we have type checking is the same reason we have other tooling in our editors to do static analysis, like highlighting syntax errors and linting: catching bugs during development is better than finding them in production.

Usually this is "solved" by creating unit tests to emulate what a compiler in a statically typed language would otherwise do. Which is (another) reason that dynamic languages shouldn't be the choice for anything but scripting.

Functional programming can be attempted in any language that offers higher-order functions that can close over free variables. In other words, languages isomorphic to untyped lambda calculus with additions. Python, Perl, JavaScript, and Go all qualify.
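For instance, a minimal Python example of a higher-order function closing over a free variable:

```python
def make_adder(n):
    # the returned lambda closes over the free variable n
    return lambda x: x + n

add5 = make_adder(5)
print(add5(3))  # 8
```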

However, it seems to be generally accepted that you're really only doing baby FP unless you also program with types. Types, and the algebras that can be imposed over them, are where the really interesting FP work is done that yields new control structures (of which monads are just one), that are difficult and cumbersome to express in untyped LC, or in a dynamic programming language.

Author here. Part of my point here is 1) you get a lot of benefit from dynamic languages, and 2) you also get a lot of benefit from functional programming. And 3) you'd get the most benefit from both together. When you add types into the mix then you're taking two steps forward and one step back in terms of simplicity and understandability.

What you said is also perpetuating a common reason that people are turned off to FP in general these days:


"At some point functional programming became equivalent in people's minds to complex type systems (e.g., monads), and that caused the value of Lisp to be lost to history. It's a shame."

> "At some point functional programming became equivalent in people's minds to complex type systems (e.g., monads), and that caused the value of Lisp to be lost to history. It's a shame."

This is nonsense. Types help us name, categorise, discuss and share abstractions. These abstractions are useful in any language, typed or otherwise. Lisp lives on in its descendents ML and Haskell. If someone doesn't want to learn anything new, then they are themselves "lost to history".

I program in both Lisp and SML/OCaml. I don't think Lisp lives in the latter.

For me, Lisp is about power. However, Haskell and other languages that popularized category theory feel like going through the GoF nightmare again.

SML/OCaml has built-in cons lists and is great for symbolic manipulation. Everything is an expression, and there are too many parens.

I don't know what you're getting at. Static typing makes your code easier to understand and reason about. It's pretty much an unmitigated win.

Dynamic languages may have made some sense back when people wanted to cruft something up and get it running without declaring the type of each object before it is used (which declarations can get verbose and cumbersome in the presence of parameterized types). But today we have powerful computers that can quickly check and infer types in ever more elaborate type systems, and the risks of dynamic languages seem less and less worth the development speed gains. You can see this in perennial language popularity contests. Consider how thoroughly the programming community embraced TypeScript over JavaScript because of the guarantees static type checking provides before your code even runs. And while Lisp used to be the smart kids' favored programming language, it was displaced by Haskell and later by Rust once those languages matured to where people were building real world stuff in them. (As it turns out, baking object lifetimes and ownership into your type system means you don't even need a GC to get Lisp levels of power and abstraction!)

I'm a big fan of Clojure (dynamic FP) and have worked with it professionally, but I really don't see why types seem to be so cumbersome.

I hate languages like C#/Java, but not because types are cumbersome. You play around with something like Haskell (and its type system) and it's a whole other story.

Dynamic typing is great for small-to-medium projects or for prototyping, but in bigger codebases it becomes quite messy trying to keep in mind what is what, and can lead to logical errors or use of "bad" data.

Static typing lets the compiler keep tabs on what a variable can contain; you can define safe, fully defined conversions between types, and it removes the mental overhead of keeping types in mind.

I'm working on a language/platform that combines pure core (no direct side-effects) with some procedural techniques (allow internal/local mutations) and no shared mutability (100% pass by value, copy-on-write is used to optimize quick deep snapshots of value trees).

I'm convinced this is the best combination of OOP and FP. I tried to find other languages like this, but I couldn't. Ironically the closest thing is how an SQL database works.

Does my description ring a bell to anyone?

Yes, I agree with your conception. Likewise, I am working on a language with no shared mutability, and something like a pure core and often find myself solving problems that feel like putting a database in the middle of the application.

Any links to your language, perhaps?

Sounds like Haskell, but turning what are awful monad stacks of Reader, Writer, and State (yuck) into sublime language features.

This is what Firefox shows me:

> Warning: Potential Security Risk Ahead

> This site is blocked due to a phishing threat.


Thanks for reporting that — I have no idea how that could be happening. Site is hosted on Blogger so it's a pretty vanilla setup.

Nix is my favorite dynamic functional language. So much so that I'll probably use it (via hnix) as a general scripting language in projects that benefit from that.

This entire post reminds me of that satirical O'Reilly book called "Upsetting Your Coworkers with Python".


Wait where is self.count set, and is it 0 or 1?

Fixed now, thanks!

Agree. But Python is not a good example for me. I think Python developers are geniuses, because they can get things done in a language with only one-line lambdas!

Single-expression lambdas. Lambdas can span multiple lines, but they specifically contain a single expression, no statements. And since Python isn't really an expression-oriented language, this is still pretty constraining.
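For example, this (hypothetical) lambda spans several lines but is still just one conditional expression, with no room for statements:

```python
# Multiple lines, but a single expression: no assignments, loops, or try/except.
classify = lambda n: (
    "negative" if n < 0
    else "zero" if n == 0
    else "positive"
)

print(classify(-3))  # negative
```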
