Can Programming Be Liberated from the von Neumann Style? (1978) [pdf] (cmu.edu)
128 points by tosh on Dec 19, 2016 | hide | past | favorite | 107 comments




Reads like an ode to Haskell/ML.

While Haskell/ML is beautiful, and I'd much prefer it if everyone adopted it, it seems human nature prefers languages such as C, Python, C# and Java, precisely due to their expressiveness and distance from the pure-math side of things.

Maybe FP will be the future, but right now, considering everything is a stack machine with ASM/C under the hood, things have to change at the hardware level to enable deep change in both our hardware and our curricula, which often produce "math is hard". It wouldn't hurt if the larger players put out mobile FP SDKs. (I know we can program Android in Scala and that many folks wrote Haskell hooks for iOS, but it's not all very well supported by the main players.)

If anything, dabbling in Haskell made me understand math better. Our youngsters would benefit greatly from such a change, and it'd boost things beyond what we can imagine today if more than 5% of the population could program.

A low-level, FP-styled, non-GC'ed programming language would be interesting to see rise too. Though Haskell is making its way through the bushes, its nooks and crannies leave a bit to be desired, especially when complex software becomes extremely difficult for others to read, despite its beauty and conciseness.


There is no future in strict adherence to one programming paradigm. Any sufficiently large program will have parts that are expressed best in different paradigms. One part of a program wants to be functional, another imperative, another object-oriented, etc. We need fewer languages that make an opinionated statement about the one true way and the one true paradigm. A paradigm is a tool of expression, and we need to be able to use the right one when needed. We need general-purpose, extensible languages.


Yes. This is the software "engineer" answer. Nothing is sacred; each choice is simply a trade-off. Sometimes an FP lens is the best one to view and tackle the problem with; when it's not, you use something else. You don't strain or unnaturally shift the goal posts--you use something else.


Wholeheartedly agree with this. Right tool for the problem at hand, and sometimes that means using multiple tools (or paradigms). Incidentally, one of the things I like about Python is that it so easily lets you mix paradigms such as FP & OOP. Certain use cases just don't naturally fit a pure FP style, nor a pure OOP style, but sometimes a healthy mix of both is in order. One of the areas where FP really wins is ease of introducing parallelism. If all of your parallel functions are pure, introducing parallelism should be a non-issue: the result of a function depends only on its (non-global, immutable) arguments and, possibly, immutable global state. Shared mutable state is the bane of all parallel programming. It necessitates locking of some form, and even experts get it wrong (and non-experts get it wrong very frequently) - personal experience, no citations.
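A minimal Python sketch of the point about pure functions and parallelism (the `scale` function and the data are made up for illustration):

```python
from concurrent.futures import ThreadPoolExecutor

def scale(point, factor):
    # Pure: the result depends only on the (immutable) arguments,
    # so calls can run in any order, on any worker, with no locking.
    x, y = point
    return (x * factor, y * factor)

points = [(1, 2), (3, 4), (5, 6)]

# No shared mutable state is touched, so parallelizing is a non-issue.
with ThreadPoolExecutor(max_workers=4) as pool:
    scaled = list(pool.map(lambda p: scale(p, 10), points))

print(scaled)  # [(10, 20), (30, 40), (50, 60)]
```

Swapping the executor for a process pool (or running serially) gives the same answer, which is exactly the property purity buys you.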


I completely agree with this. I've been doing professional Scala for about 7 years now and I have long moved past the phase of trying to make all my code "pure" and "functional". While I'd say my code always maintains a more functional feel than Java or C++, many problems are just more elegantly and efficiently solved with OOP and imperative/mutable designs.


I disagree.

Of course we need general-purpose, extensible languages. We also need motherhood and apple pie. But multi-paradigm languages are usually mistakes. They aim to give you the best of multiple worlds but frequently wind up giving you the worst.

We saw this with C++, which was supposed to be a structured / OO hybrid. The trouble was, the mismatch between the structured world and the OO world created huge complexity, starting with functions that might or might not be virtual. Non-virtual destructors anyone? Not to mention the fact that structured programming doesn't use GC, but proper OO programming requires it. See http://www.ipipan.gda.pl/~marek/objects/faq/oo-faq-S-3.12.ht... for the reasons why.

The problems get even bigger when you try to hybridise functional and OO paradigms:

* In OO the core concept is the mutable object. An object has an identity that persists as it mutates, and you can distinguish between two otherwise identical objects because they have different identities. Hence the distinction in all OO languages between reference equality (two references pointing at the same object) and object equality (two distinct objects that are equal in all their fields).

* In the functional paradigm, on the other hand, there are only values. Asking if two strings are the same object, or two employee records, is as meaningless as asking if two "3" values are the same object. Values are also immutable, so the idea of changing a string or employee value is as meaningless as changing "3" to be "4".
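The reference-equality/object-equality distinction from the first bullet can be sketched in Python, where `==` is object equality and `is` is reference (identity) equality:

```python
a = [1, 2, 3]
b = [1, 2, 3]

print(a == b)  # True:  object equality -- equal in all their "fields"
print(a is b)  # False: reference equality -- two distinct identities

a.append(4)    # mutating a leaves b untouched; a's identity persists
print(a == b)  # False
```

In a pure value world there is no meaningful `is`: two equal values are simply the same value.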

So when you create a hybrid functional/OO language you must deal with this impedance mismatch. If objects are mutable then the order of execution matters, and you have to trust the programmer to get the data flow and control flow to agree, thereby losing the big advantages of the functional paradigm. If objects are not mutable then you don't have an OO programming language.

Finally, I do not believe that "Any sufficiently large program will have parts that are expressed best in different paradigms". Claims that some domain is best expressed in one particular paradigm always seem to end up with statements of the form "Yes, but I want to do this my way, and your paradigm makes me do it your way". Sure, if you try to write Haskell in an OO way you are in for a world of pain, but that just means that when using a functional language you should architect your application in a functional way. And similarly for an OO language of course.

I spent a long time in the OO world, and I've used the best OO language ever designed (Eiffel). I moved to Haskell, and have become convinced that for most general purpose programming the functional paradigm is superior to OO. (The exceptions are in hard real-time and resource-bound environments). And the reason is that Backus was right.


> In OO the core concept is the mutable object.

I disagree here. In OOP the core concept is the object (whether mutable or not) understood as a set of data and accompanying functions which operate on it. This is where OOP and FP collide: FP strives for code reuse through universal functions which operate on heterogeneous data.

In practice in FP data has to adhere to a certain structure, which is not very different from OOP's classes/interfaces/traits after all. FP just ditches classes of objects as a construct, opting instead for non-encapsulated interfaces into generic data structures, whether expressed explicitly (e.g. typeclasses) or implicitly (duck typing).

This duality is exposed in languages like JavaScript, where class methods are just regular functions dynamically bound to `this`. A class instance is, after all, equivalent to a bunch of functions closed over its members. I.e. OOP is just syntax sugar to encapsulate data and functions.
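The same equivalence can be shown outside JavaScript; here is a sketch in Python, with both names invented for illustration:

```python
# Class version: encapsulated state plus a method bound to it.
class Counter:
    def __init__(self):
        self.n = 0
    def incr(self):
        self.n += 1
        return self.n

# Closure version: the "method" is a plain function closed over its state.
def make_counter():
    state = {"n": 0}
    def incr():
        state["n"] += 1
        return state["n"]
    return incr

c = Counter()
f = make_counter()
print(c.incr(), f())  # 1 1
print(c.incr(), f())  # 2 2
```

Both expose the same behavior; the class is, as the comment says, largely syntax sugar over functions closed over data.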

Mutability is just a (leaky) implementation detail, completely orthogonal to OOP or FP. A class with no internally-mutating setters is still a class. A method with immutable bindings is still a method.

> If objects are mutable then the order of execution matters

Again, completely orthogonal to mutation. The order of execution always matters:

    (cons :a (cons :b '())) ≠ (cons :b (cons :a '()))
I agree that function composition is nicer than a bunch of statements (in certain cases!), but the burden is still on the programmer to get it right, whether dealing with mutable data structures or not.

What FP languages get right is making side effects (including mutation) a second-class citizen, but that's completely orthogonal to OO.


The code example you give isn't an example of evaluation order, it's an example of different expressions not giving the same result.

Order of evaluation being irrelevant means that when evaluating

    (cons 'a (cons 'b nil))
it doesn't matter whether you first create the outer or inner cons cell. Or, for that matter, whether you do anything at all at this point (you could for instance just suspend the entire expression as a thunk and only evaluate it when you try to access values within the list -- i.e. lazy evaluation).


It's not an example of evaluation order because we weren't discussing evaluation order. GP said:

> If objects are mutable then the order of execution matters, and you have to trust the programmer to get the data flow and control flow to agree

Which is true both for OOP and FP, unless I misunderstood what GP meant with data and control flow.


> (cons :a (cons :b '())) ≠ (cons :b (cons :a '()))

In less idiosyncratic syntax, this says that the list [a, b] is not equal to the list [b, a]. That's the case in all languages (at least, it should be!).

The order of execution is about the same value being calculated along different paths.

For example, we can consider the value [write("foo", /tmp/x), read(/tmp/x)].

Here's one path:

    [write("foo", /tmp/x), read(/tmp/x)]
    [null,                 read(/tmp/x)]
    [null,                 "foo"]
Here's another:

    [write("foo", /tmp/x), read(/tmp/x)]
    [write("foo", /tmp/x), "previous content"]
    [null,                 "previous content"]
These values aren't equal, so the path taken by the interpreter/compiled-code can affect what our program does!

In contrast, we could follow an approach like Haskell and encapsulate IO actions as standalone values. This gives one path:

    [write("foo", /tmp/x), read(/tmp/x)]
    [<Some IO action>,     read(/tmp/x)]
    [<Some IO action>,     <Other IO action>]
Here's another path:

    [write("foo", /tmp/x), read(/tmp/x)]
    [write("foo", /tmp/x), <Other IO action>]
    [<Some IO action>,     <Other IO action>]
These are equal, so we don't have to worry about the path taken affecting our outcome.

We do have to worry whether the chosen path will produce an outcome at all, since it's possible that a program gets caught in an infinite loop even if it's avoidable. We can remove this worry if we use lazy evaluation.


You're nitpicking. GP said:

> If objects are mutable then the order of execution matters, and you have to trust the programmer to get the data flow and control flow to agree

My point is that, even in FP, as soon as you hit sequential operations (in your example, side effects), you're still bound by the execution order and the programmer getting it right (and how that is completely orthogonal to OOP or FP).

Order-dependent sequential operations are the programmer's responsibility regardless of paradigm.

I don't think OOP is inherently sequential. Your post made me realize GP is conflating OOP and imperative programming and his post makes sense if :%s/oop/imperative programming/.


> I disagree here. In OOP the core concept is the object (whether mutable or not)

You've missed OP's deeper point: even if immutable, the duality of object identity & object state remains. (Immutability is not the central issue. The issue is that in FP you can use the term "data" without blushing, whereas in OOP "state" is masquerading as "data".)


No, I didn't miss it. I just don't think that duality is inherent to OOP; it's pretty much a byproduct of OOP languages being imperative, mutable and referential, which makes OP conflate all those properties.

What do you think prevents OOP languages from implementing automagical non-referential equality by data traversal by default like FP languages do? Is it some property of OOP itself? IMHO it's not a property of the paradigm.

The duality is still present in FP languages, it's just hidden under a nice automagical layer.

    (= (cons :a (cons :b '()))
       (cons :a (cons :b '())))
     => true
Both lists are still separate memory regions. They're the same data only in a platonic way and there's nothing that prevents OOP from implementing the exact same abstraction.
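To illustrate that nothing in OOP prevents this: a plain class can opt into FP-style structural equality by data traversal. A sketch in Python (the `Pair` class is made up):

```python
class Pair:
    """A plain OOP class with FP-style structural equality."""
    def __init__(self, a, b):
        self.a, self.b = a, b
    def __eq__(self, other):
        # Equality by data traversal, not by reference/identity.
        return isinstance(other, Pair) and (self.a, self.b) == (other.a, other.b)

p = Pair(1, 2)
q = Pair(1, 2)
print(p == q)  # True:  the same data "in a platonic way"
print(p is q)  # False: still separate objects in memory
```

The identity/state duality is still there underneath (`is` vs `==`); the class just hides it behind the same "automagical layer" FP languages use.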


C++ is a poor counter-example, and I hope not many other people think I'm recommending C++ because it's the last language I would ever recommend. C++ doesn't allow the sort of syntactic abstraction needed for a good multi-paradigm language. The Lisp family of languages is a good example of multi-paradigm programming languages done well. I also don't like this OOP vs. FP dichotomy. There are other useful paradigms, and yet more that need to be invented! I love functional programming, but I also want to be able to do things like implement low-level imperative primitives to use for higher-level functional APIs (a classic example being a 'memoize' function), or to use generic procedures or CLOS-like OOP when it is useful.


We covered multi-paradigm programming with Scheme in 61A at Berkeley. Yeah, that was a strength Scheme has.


An "aha!" moment a long time ago for me was realizing that there's no essential distinction between a (curried) function call with a record type argument in a pure FP language and a const method in an OOP language.
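That equivalence can be sketched in Python, using an immutable record to stand in for the pure-FP side (names are invented for illustration):

```python
from dataclasses import dataclass

@dataclass(frozen=True)   # an immutable record, as in a pure FP language
class Circle:
    radius: float

# Free-function style: the "receiver" is just the first argument...
def area(c: Circle) -> float:
    return 3.14159 * c.radius ** 2

# ...which is what a const method desugars to: a hypothetical c.area()
# is area(c) with different syntax, and neither can mutate the record.
c = Circle(2.0)
print(area(c))  # 12.56636
```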


I think FP is on the up and up. OCaml and Haskell have seen a lot more interest in the last 5-10 years (despite both being 20+ years old), and I've been working in Scala full time for 6+ years now.

There are still gaps and room for improvement to be sure, but I think progress is happening. Idris takes the best parts of Haskell and drops the laziness that was so problematic for performance. At the other end Rust is practically an ML-family language that offers low-level non-GC style. I don't think it's human nature that's kept less-functional languages popular so much as hardware limitations and business effects.


> it seems human nature is preferring languages such as C, Python, C# and Java, exactly due to their expressiveness and distance from the pure math side of things.

People say this, but... I'm really not sure this is the case. When you teach people these other paradigms they take to them quickly and naturally and find them good for some things and bad for others, just like how we swap around all kinds of language and environment decisions.

We keep offering this fiction to ourselves: Environment ______ for programming is special. We do it with editors, we do it with languages, we do it with compilers.

In reality, none of them are. They're incrementally better in some ways, perhaps worse in others. We are past the era of mighty breakthroughs and shocking new approaches in the realm of pure expressions. We're now in the era built on those original successes: where programs train themselves from data.

Which is the endgame of reusable code, of course.

What we're doing now is increasingly mapping back down from very powerful and abstract languages into rather small sub-languages. The current crop of successful languages (Clojure, Scala, C#, F#, Kotlin, JavaScript, Python, Ruby, Haskell, OCaml) all quietly have one thing in common: they have a lot of tools to model domain languages without a lot of syntactic and cognitive overhead.


The problem with ML is that the edit distance between two similar programs is much higher than in C/Python/Java. You have to change a lot more to go from program A to a similar program A' than you would in an imperative language. Empirically, people really don't like that property.


I find this an intriguing proposition. Is there experimental evidence, both for 'the edit distance is larger' and 'people don't like that property'?


Here's some. I used to be extremely comfortable with Haskell, but things didn't go well when I tried using it for the first online programming contest I ever did. It was just too slow to write code in, compared to C++, which I hadn't used for years. Being able to plop down variables in front of a for loop, or print statements, or extra conditionals to handle some corner cases or break out of a loop, and to just casually mutate local variables, makes writing out a solution go a lot faster, with a lot less mental overhead from having to janitor variables between maps, folds, and whatnot.


For me, my preference for languages like C and Go (work and fun) and Java/C# (work only) has nothing to do with "expressiveness" - in fact, I can't even think of a language less expressive than Java - but with the fundamental truth that computers store state and programming is about modifying that state.

And when I play around with Haskell or any other functional language, I love the expressiveness, but I hate how abstracted away the actual state modifications are and how we like to pretend we live in some "pure" mathematical land without state (that sadly doesn't exist).

I love the beauty of Haskell but hate having to muck around with monads - I just want to poke that memory address right there and flip that bit thank you very much.


I think it's more about the underlying machine as opposed to a particular language implemented on top of Von Neumann architecture. For instance, we are constrained by the underlying machine regardless of which higher-level language we use, functional or otherwise.

I started reading this book on functional languages and lambda calculus. It has some interesting notes on how to build language syntax on top of a purely-functional machine architecture (at least conceptually). Pretty interesting stuff:

[An Introduction to Functional Programming Through Lambda Calculus](http://store.doverpublications.com/0486478831.html)


Julian Bigelow, chief engineer of the Institute for Advanced Study computer project (von Neumann's first actual computer), stayed at the IAS after von Neumann left for the AEC (and died in 1957) and believed the architecture was the limiting factor.

"The modern high speed computer, impressive as its performance is from the point of view of absolute accomplishment, is from the point of view of getting the available logical equipment adequately engaged in the computation, very inefficient indeed." The individual components, despite being capable of operating continuously at high speed, "are interconnected in such a way that on the average almost all of them are waiting for one (or a very few of their number) to act. The average duty cycle of each cell is scandalously low."

Julian Bigelow, quoted in George Dyson, "Turing's Cathedral", pg. 276


That's because they're universal, programmable computers. If you have a very specific, fixed task (e.g., matrix*vector multiplication), then you can build a chip in which a much higher number of transistors does useful work in each cycle. You won't be able to browse the web, watch a video, do your taxes, or program other computers with it, though.


I am naive when it comes to computation science, but doesn't the Turing completeness of something essentially mean you are not constrained? Perhaps constrained in efficiency, but not outcome.


As soon as you target an actual user doing stuff on an actual computer, there will always be constraints due to efficiency and performance. You might not run into them, because your program is sufficiently small and the hardware sufficiently fast, but they're always there.


Constraints which are not hit are not very constraining...


> maybe FP will be the future

Backus designed a functional programming language called FP: https://en.wikipedia.org/wiki/FP_(programming_language)

> a low level FP styled non-gc`ed programming language would be interesting to see rise too

There are BitC (https://web.archive.org/web/20130815183427/http://www.bitc-l...) and Rust which are reasonably close to what you want.


> it seems human nature is preferring languages such as C, Python, C# and Java, exactly due to their expressiveness and distance from the pure math side of things

I personally think humans prefer imperative languages because they are closer to the metal, and hence more predictable in terms of performance.


Also closer to how most of us learn and are taught.

Consider, I often tell my kids that they need a cup if they want water. I just as often have to change that to "go get a cup, and I'll get you water."

Declarative seems to be natural at a desire level, but at an action level, imperative seems to be more common.

I believe this is flipped for topics and actions that are well internalized. When my kids are older, I can just say "cup". If cups aren't in the right drawer, though, I have to provide imperatives for where to check.

And yes, all of this is anecdotal. YMMV. I'm highly interested if folks know of studies or counter examples.


Or it could simply be that your kids have wrapped the imperatives in an object called cup.

All this gets me thinking back to the micros and how they booted straight into a BASIC interpreter.

In a sense, booting to a mouse- (or touch-) driven GUI helps with getting now-common tasks done out of the box. But the fact that no mainstream OS ships with something that allows someone to sit down and instruct a computer similarly to how it was done back then seems to have turned modern personal computers into near-appliances.


I believe this is a common line of thinking. More so with phones, which are surprisingly difficult to get self written programs on. Compared to early computers where I literally bought printouts of programs. (Toys, but still...)

Not that it is impossible, but it is surprisingly difficult and not shocking that fewer do it. (Of course, I question myself on that claim. Fewer, but by what measure? And do I have proof?)


Thing is, back around the early days of Android, Google offered an online dev environment that could sideload APKs to phones. I think it was based on Scratch.

But they have since handed the whole project over to MIT.

Frankly, Android could have been a computer in our pockets, and I get the feeling Rubin wanted to take it in that direction.

But around the time of 3.x-4.0 there was pressure from elsewhere in Google to take it in a more iOS-like direction. We see this with the introduction of music and video into the then-named Android Marketplace (later renamed Google Play).

And then there was the whole push towards ChromeOS as the choice of businesses, complete with Citrix being a launch day demo.

Only in the last few years has Google done an about-face regarding Android, both with Android apps coming to ChromeOS and with various functionality being added to Android to make it more business-friendly.


I see I also missed your comment on internalizing concepts. I think that is absolutely true, also at the language level. There are a lot of things "internalized" by higher-level languages.

To the point of Android, though: I am getting my kids a Kano computer, hoping that line will get them interested in the nuts and bolts, as it were. I need to be better about building things with them. :(


Looks like a nifty kit indeed.


I have my hopes up for Nokia and Microsoft - Win10 devices are more or less a pocket PC. When Linux support gets properly implemented into Windows, a brilliant new era will dawn.

Not to mention c# is a beautiful language.


> When Linux Support gets properly implemented into Windows, a brilliant new era will dawn.

Dream on. MS will implement just enough for cloud devs to use Windows as their dev environment, with a clear push towards Azure as the cloud of choice.


We'll see how it'll get bootstrapped when it happens. It takes an optimist and a pessimist for balance :)


Touché, sir.


> All this gets me thinking back to the micros and how they booted straight into a BASIC interpreter.

I grew up in that era, and I'm still nostalgic about it. On the other hand, a tablet PC plus Jupyter/Python comes pretty darn close to satisfying that need.


I dunno. Current tablet offerings seem to be stuck either as an entertainment appliance, or straddling that and desktop Windows.

Microsoft screwed up badly when they decided you can't put a Metro style interface on a Win32 program and get touch support that way.

You either have to make it a Metro app (and ship it via their store), massively curtailing what you can do, or stay with the Win32 WIMP. And even when you tweak the size of various Win32 elements it is still a mess to operate via touch.

Edit: and using Jupyter with a tablet strikes me as accessing a mainframe from a terminal (something a micro could act as, with the right software).


It's certainly not as good as it could be. I have a "convertible" laptop, where the screen detaches from the keyboard to become a tablet. It runs Windows 10, so I can run the Jupyter server and a browser directly on the device. I've also played with Azure Notebooks, but I don't always have Internet access.

This isn't my only computer, and indeed doing anything with code from a touch screen is marginal at best, but at least it's possible. On a bigger computer, of course I get a bigger screen and keyboard.

Admittedly, I had to look up what Metro and WIMP are. I'm not actually trying to create apps that others can use. My use of programming is largely for my own productivity and learning.


What makes jupyter/python closer to this than booting into emacs? (With something like org-mode?)

Just the point/click nature of jupyter?


That's a good question. I've built systems that boot into a command prompt, and booting into emacs wouldn't be too big a step away from that. But what always seems to happen is that after a while the nostalgia wears off and I remember why I like my Jupyter.

I guess it's the REPL-on-steroids nature of Jupyter that I like. Also, having inline matplotlib graphs is pretty killer.


If you haven't, check out org mode in emacs. It can also inline graphs easily. With ready to go bindings for Python, graphviz, gnuplot, and R. (C-c C-x C-v will show charts inline with default bindings.)

My favorite is grabbing logs with a shell block. Reshaping with elisp or Python, and then charting with gnuplot. All in one file with an iterative workflow.


You're right that it's closer to how people are taught, but there's no reason we can't teach from the FP perspective.

Consider the imperative

  cup_of_water get_water (cup_loc, water_loc) {
      cup = get_cup(cup_loc);
      return fill_cup(cup, water_loc);
  }
or something like that, versus

  getWater :: CupLoc -> WaterLoc -> Cup Water
  getWater cupLoc waterLoc = fillCup (getCup cupLoc) waterLoc
It's just a slightly different way of thinking about things.


There's nothing in either example that is intrinsically functional or imperative, they're just different syntax.


This is dodging things by being all symbolic, which is well past the point my kids are at in these examples. I'm talking about my one-year-old.

It is amazing how they can understand well before they can say words. Or read and write them. Sign language can be taught really early. Not sure what data that adds here.

But, I believe that imperative makes sense because so much in life is imperative. Especially directions we give each other.

For things that can be done symbolically, declarative can be a big boon. But it does seem to be the minority.


Now audit each cup-filling in a database.

(Easy in an imperative system, painful in a functional one where you have to start passing monads around and caring how many times the cup is constructed rather than how many cups you end up with.)


I am not sure what can be gained by comparing human languages or speech to computer-language paradigms. I mean, I could easily encode "go get a cup, and I'll get you water" as (OCaml-ish/F#-ish):

    get_a_cup |> Option.map(fill_with_water)


My assertion is that people are more comfortable with imperative programming, because we are learning imperative thinking from a very early age.

That is, there is no intrinsic advantage to imperative, by my assertion. Just that it is something we are much more comfortable with from experience.


What I am trying to get at is that I am not sure this kind of thinking/speech is inherently imperative or declarative (in the computing sense). The idea of modifying and controlling bits of state on which other processes depend is fairly novel to the novice programmer, isn't it? I'm sure it tripped me up a lot.


I'm not sure I follow. Imperative/Declarative in the programming sense is no different from the speech sense, is it? Quickly skimming the wikipedia page [1] for imperative mood makes it look similar enough to me. The page for imperative programming [2] references the grammar page, even.

As for the state, I think in life everything has state. Kids are quickly learning which ones have states that they care about. It is not uncommon to see them learn that cups filled with water need to be held a certain way, otherwise they spill.

Now, like most formalisms, computing and programming are much more strict. But that is a matter of rigor, more so than of form. That is, it is the same form, just not forgiving in syntax/content.

Though, seriously, if I am abusing terms here please let me know. That is not my intent.

[1] https://en.wikipedia.org/wiki/Imperative_mood

[2] https://en.wikipedia.org/wiki/Imperative_programming


I think once someone has done enough programming to understand or care about being closer to the metal, they prefer what they already know, which is usually imperative languages.

I wonder which paradigm a total beginner would prefer? As a beginner they would not be influenced by "how the metal works" because they don't know or don't care. My guess is that beginners would prefer imperative languages as well, even though it has nothing to do with "the metal".


My experience is that most software engineers don't really understand the hardware at all, so the idea that "how the hardware works" is influencing anything is a red-herring in my opinion. I say this as a hardware designer.


Not to mention they're battle-tested. And C is predictable, period; no other language comes close, bar ASM. Even Rust has a long way to go before it matures to C11 levels in terms of industry trust, community adoption, and track record of delivering.

But I'll still have to admit ignorance of any greater understanding of Rust.


> and C is predictable, period.

Really?


If it's written spartan-style, yes. If you go nuts with its dark corners, things do get weird. That's why it's hard labour to write it safely - and a contributing reason for the JVM's success.

It's the mother of all languages, and abusing it in ways that make it unpredictable is more than likely.


If you have to add that qualifier, then is it really a predictable language?


I don't see a problem with proper use of qualifiers.

It has to be said that I'm a "right tool for the job" kind of man and spend less time worrying about the perfectness of a language. I can't think of a better language to write crypto/network-routing code with. Though a colleague did implement some mDNS craziness in Java.

Don't shoot me. Heh.


That's why a hardware change would be a step towards change. If the metal were less von Neumann and more Turing/number-theoretical in approach, it'd be a deep change.

So far the limitations of what's practical were/are the driver of the industry.

And von Neumann did solve a real problem they had back then. These days we're facing a software engineering problem which is more high-level, and most faculties are triaging the situation - but I can't really blame them. Many unis lack the funding for groundbreaking experimental work - and the likes of Turing/von Neumann/Knuth/Dijkstra et al. only come once a generation.


"When Backus studied and publicized his function-level style of programming, his message was mostly misunderstood, as supporting the traditional functional programming style languages instead of his own FP and its successor FL."

You may have fallen into this trap as described on https://en.wikipedia.org/wiki/Function-level_programming.


I'd be very interested in a low-level FP without GC (including reference counting) as well. I believe a major difficulty is how to deal with closures. Obvious solutions include region inference and copying the closure, both with obvious downsides. Another difficulty is how to deal with structure sharing, a technique required for the efficiency of almost all functional data structures.
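For what it's worth, Rust sketches one answer to the closure problem: a capture-by-value closure compiles to a plain struct whose size is known at compile time, so it can live on the stack with no GC; heap allocation is an explicit, deterministic opt-in. A rough sketch (my own example, not anyone's library):

```rust
// Monomorphized closure: the environment (just `n`) is a stack-sized
// struct, no GC or refcount involved.
fn make_adder(n: i32) -> impl Fn(i32) -> i32 {
    // `move` copies `n` into the closure's environment.
    move |x| x + n
}

// When closures need a uniform type (e.g. stored in a Vec), they get
// boxed: one explicit heap allocation, freed deterministically.
fn make_boxed(n: i32) -> Box<dyn Fn(i32) -> i32> {
    Box::new(move |x| x + n)
}

fn main() {
    let add2 = make_adder(2);
    assert_eq!(add2(40), 42);

    let fs: Vec<Box<dyn Fn(i32) -> i32>> = (0..3).map(make_boxed).collect();
    let total: i32 = fs.iter().map(|f| f(10)).sum();
    assert_eq!(total, 33); // 10 + 11 + 12
    println!("ok");
}
```

The copying approach shows its downside as soon as the environment is large or shared, which is exactly where the structure-sharing problem comes back in.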


I have been thinking about this lately and found this last week http://www.ats-lang.org


I am not sure on which side of the flame war you are on :)

How does being more distant from pure math result in things being more expressive (Java more expressive than Haskell)?

People will use whatever is more familiar (lazy); no one really wants to (put in the work and) learn something new. All of us have been learning the imperative way since childhood; that's why each new imperative programming language can be picked up in a week. But then we turn around and say Haskell is hard, forgetting that we've invested a lifetime in learning things like C/Java and gave FP a few weeks.

Hardware does not need to change at all, Haskell works just fine as is.

Complex software in Haskell is hard to read only for people who never got beyond "Learn You a Haskell" (I am also not that far along), though I'll admit there is a tendency among Haskell developers to be more clever than needed with their code and unnecessarily cryptic. The solution to a complex problem is 100 times easier to read in Haskell than in any other language.


I stay away from flame wars and actively attempt to see the whole picture.

My experience is my own, from watching students confront different languages and watching it all grow. I'm biased towards myself, heh.

I certainly agree that Haskell is awesome. And I know what you're trying to say. We're sadly educated to prefer a certain syntax, and many programmers are still bad at math, bar a devoted and lucky core.


I think it's a pity that whenever someone dives into Haskell, in the first few tutorials/pages they are immediately shown how this great beauty comes from math and category theory and how it's all connected, and they get discouraged very fast (if they were not exposed to much college-level math). If you find the strength to ignore that for a bit in the beginning and find a path laid out by a professional developer, you can do amazing, useful stuff without understanding the math behind it, stuff that would be quite hard to accomplish in other languages. Anyway, whoever thinks of diving into Haskell: ignore everyone who says it's hard, don't look at the math stuff in the beginning, and don't try to do too much IO like reading/writing from files/db all over the place.


> don't look at math stuff in the beginning

It's inherently more math-y: No state, just functions that have a given output for a given input. Ignoring things that I would call "math stuff" means ignoring the language, beyond trivial expression evaluation.

> and don't try to do to much IO like reading/writing from files/db all over the place.

Ah, cool. Just don't do too much of the useful stuff, and you're good.

Haskell, and FP in general, has always seemed to me like something that you just have to bite the bullet and dig into. Take it for what it is, and not what someone else tells you about it.


In defence of my advice with which you seem to disagree.

Avoid math stuff: By math stuff I mean "category theory"; you don't need it to do useful stuff, and it's better to avoid it while you learn the language and get comfortable with it. You also seem to imply that (pure) functions are somehow math-y; I am not sure most developers see a function definition and think math.

Don't do too much IO: I didn't say "any", I said too much. It's a way to avoid getting thrown into the deep end of the pool (monad transformers) when taking your first steps. The two most popular Haskell tools/projects, Pandoc and PostgREST, have very little IO in them, and yet they are immensely useful.

About biting the bullet: I think expressions like these propagate the notion that Haskell is somehow harder than other languages and you just have to expect that and get used to it; thus people don't even bother and run away when they hear Haskell. In my personal experience (following the 2 things above), after 2-3 months of learning Haskell, I was able to do this https://github.com/ruslantalpa/skin which 1 month later became the base for the current PostgREST core. Sure, there was refactoring and such (with the help of more experienced Haskell devs), but the fact is, a few months into the language I was able to propose and implement deep changes to useful Haskell tools. My ability to do those things did not come from me being some kind of genius (I mostly did PHP and JS all my career) but precisely because Haskell gives you such powerful tools and is easy once you give it more than 2 weeks of your time.

What was your path to learning Haskell? Why do you feel like you had to "bite the bullet"?


Unfortunately, to do much that's actually useful, you hit IO real fast. That was the most difficult thing that I remember from my undergrad intro to programming class in Haskell.


exactly why we need a change on the level of school curricula - and the lazy part of being human can't be avoided. it's that "maximal result/minimum effort YOLO" that's coded into our genes.


My question is whether there are significant large codebases built in pure functional languages, and whether there are statistics as to which ones have fewer bugs - functional or imperative?


I think the concern is less about bugs and more about productivity.


> reads like an ode to Haskell/ML.

It may seem like that from the intro, and from how the text is frequently summarised, but a closer reading shows that what Backus proposes is almost as different from modern functional programming as it is from imperative programming.

For example, Backus stresses the need for not naming parameters. This is a supported (if not universally liked) coding style in most modern functional languages, but the lambda calculus is fundamentally about naming parameters. The language that Backus proposes, FP, isn't really a lambda calculus-style language. It has more in common with APL or stack-based concatenative languages.
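A loose illustration of that point-free flavor (in Rust for concreteness, and only loosely: Backus's FP had a fixed algebra of combining forms, which a general `compose` helper like this one, my own example, doesn't capture):

```rust
// Build new functions out of functions, never naming the data that
// flows through them.
fn compose<A, B, C>(g: impl Fn(B) -> C, f: impl Fn(A) -> B) -> impl Fn(A) -> C {
    move |x| g(f(x))
}

fn main() {
    let double = |x: i32| x * 2;
    let inc = |x: i32| x + 1;
    // "double, then inc", defined without ever mentioning an argument:
    let double_then_inc = compose(inc, double);
    assert_eq!(double_then_inc(20), 41);
    println!("ok");
}
```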

> maybe FP will be the future, but right now and considering everything is a stack machine that has ASM/C under the hood, things have to change on the hardware level to enable deep change in both our hardware and our curricula

You don't really need anything to change at the hardware level. The hardware should stick to doing computation as well as physics permits, and then we can always come up with programming models that fit. I myself am working on a functional language (in the lambda calculus tradition) and an optimising compiler that generates pretty efficient GPU code[0]. GPUs are nothing like any reasonable mathematical tradition, but it turns out to be, if not easy, then reasonable to map a subset of functional programming (namely, array programming) to GPUs, and probably other parallel machines. This is because the programming model makes very few requirements of the runtime system, giving the compiler a lot of freedom to bridge the rather large conceptual divide between high-level array programming and massively parallel hardware.

Imperative languages don't really grant the compiler this freedom, because they impose very strict ordering requirements on when their component statements are executed. Automatically determining when such ordering can be ignored (to obtain parallelism) is very very hard. At the other extreme, a lazy functional language (like Haskell) requires a lot of flexibility at runtime to deal with thunks and spontaneous allocation, which also inhibits the compiler's ability to bridge the divide and generate efficient code.

Related, but not strictly a language issue, is the dirty secret that many of the beloved functional algorithms and data structures are inherently sequential. It has been said (although I sadly don't remember by whom) that "the linked list is the GOTO of functional programming". What is meant is that the use of linked lists enforces a strict sequential ordering and fiddling around with cons cells, instead of focusing on the overall structure of the computation. For this reason, parallel functional languages generally do not have linked lists as their primitive type, but use arrays instead, and have map/reduce/scan/etc as language primitives rather than library functions.
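A rough sketch of that point using Rust iterators (my own example): a fold threads a single accumulator through every element, while a reduce over an array of an associative operation can be split into independent chunks, which is exactly what a parallel runtime would exploit:

```rust
fn main() {
    let xs: Vec<i64> = (1..=1000).collect();

    // Sequential: every step depends on the previous accumulator,
    // just like consuming a cons list one cell at a time.
    let seq_sum: i64 = xs.iter().fold(0, |acc, &x| acc + x);

    // Chunked: the partial sums are independent of each other, then
    // combined. This split is only valid because + is associative;
    // a parallel scheduler could run the chunks on different cores.
    let par_sum: i64 = xs.chunks(100).map(|c| c.iter().sum::<i64>()).sum();

    assert_eq!(seq_sum, 500500);
    assert_eq!(par_sum, seq_sum);
    println!("ok");
}
```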

[0]: https://futhark-lang.org


I would like to know your opinion on using FP methods with languages like Python


I do it from time to time - and I like it. Works well for small functions/procedures.

But try to use Python beyond the pythonic way at a large scale and you're in for a design challenge. But I'm sure it'll be beautiful, and you'll get much hate, as well as praise, for it, heh.

Usually though, when I write python, I keep it pythonic, mainly for the support and debugging efforts.

The community's large codebase is pythonic and is designed to stitch together pythonically. As long as the API is pythonic, what lies underneath can be functional.


There is some italic text in there. Did you mean

  Haskell*/ML
?

> a low level FP styled non-gc`ed programming language would be interesting to see rise too

Sounds like Rust. If not, what do you think disqualifies it?


> Sounds like Rust. If not, what do you think disqualifies it?

Rust never struck me as a functional language. It's disciplined, in the manner of ML or Haskell, but it is generally (and inextricably) an imperative language at heart. The strong type system is geared towards making imperative, stateful code safe, rather than eliminating it.


Yeah, you're right about the asterisk. I'll change it when I get to a desktop.

As to Rust, it looks awesome, really. I personally haven't used it, but I have been watching it rise through the ranks with greater momentum (and more serious projects) than Haskell. I just don't know enough about it to have made a statement I could back. I've a few future projects that I could use it in.


I've been using Rust for the past year. Lack of tail calls kind of hurts the FP side of it.

Here are 4 esoteric programming language implementations: https://github.com/serprex/oilrs https://github.com/serprex/inliners https://github.com/serprex/NULL https://github.com/serprex/Kelxquoia

I wouldn't call their implementations particularly functional. Linked lists get a bit of a spotlight in FP, but in Rust they're ill-advised. Option<Box<List<T>>> means they can't be shared; Option<Rc<List<T>>> can be overkill. For a while I had some code which wanted to chain references along the stack, but the issue is that in each recursive call the lifetime varies, and one can't put the lifetime of the reference in the type (each level of recursion has to carry an extra lifetime parameter), so I instead had to make a ListTrait<T> and have List<'a> carry an Option<&'a ListTrait>; but now that's a virtual pointer with a virtual table. Commit replacing the code: https://github.com/serprex/lambdaski/commit/138217d5530b93a6... . It was done by changing the code to not create copies (which had this Russian-doll lifetime pattern) but to instead use references to the string being parsed; once every reference had the same lifetime, they could all be packaged into a single Vec<T>.
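For anyone following along, here's roughly what the two list shapes look like (my own minimal sketch, not the linked code):

```rust
use std::rc::Rc;

// Box gives unique ownership: no sharing of tails possible.
#[allow(dead_code)]
enum BoxList<T> {
    Cons(T, Box<BoxList<T>>),
    Nil,
}

// Rc permits shared tails, at the cost of a reference count.
enum RcList<T> {
    Cons(T, Rc<RcList<T>>),
    Nil,
}

// Build two lists that share one tail and report how many owners
// the tail has.
fn shared_tail_count() -> usize {
    use RcList::{Cons, Nil};
    let tail = Rc::new(Cons(2, Rc::new(Nil)));
    let _a = Cons(1, Rc::clone(&tail));
    let _b = Cons(9, Rc::clone(&tail));
    Rc::strong_count(&tail) // `tail` itself, plus _a's and _b's links
}

fn main() {
    assert_eq!(shared_tail_count(), 3);
    println!("ok");
}
```

The structure sharing that comes for free in a GC'd functional language has to be paid for explicitly here, which is the trade-off the parent is describing.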


i definitely commend you on the effort you're putting into this.


No real higher-kinded types means it's really difficult to use in the same way as you could Haskell/Idris/ML.


I think it's fair to assume that most of us have been schooled in imperative then later OO styles, with probably no attention (my case) or little attention given to FP styles. Having tried FP a little bit now, I think it's quite powerful (although not a silver bullet).

  Given function: max x y = if x > y then x else y
  foldr max (minBound::Int) [1,10,-11,1] -- yields 10
  foldr (max.(uncurry max)) (minBound::Int) [(1,-11), (10,1)] -- yields 10
While not always the case, I think much of FP code demonstrates ease-of-compositionality (partial func, composition operator in second ex), ease-of-reuse (of max function in different contexts) in a vein similar as above. I think most of the time maintainability is a positive side-effect.

I wonder how many of commenters here have seriously tried all imperative/OO/Functional styles of programming, and still think FP is not something to invest anymore time/effort in vs the other paradigms.

Personally, I'd hope more energy was spent on eliminating the shortcomings of FP than on keeping a firm foot in the OO camp, which I think is already overloaded with too much attention, with quickly plateauing benefits in return (at the language level, not in terms of the products that come out of it)


Equational reasoning like it is presented in the paper is alive and well in the Haskell community.


Is the von Neumann style not the most efficient way to program a von Neumann architecture? Which, as I understand it, is pretty much all major commercial computers today.


I have this article printed and I read it occasionally, if only to try and reach a deeper understanding of the points made in it.

This isn't about functional programming; ML and Lisp even predate this paper. I shall quote myself from a previous submission: "The paper describes function-level programming, it is different from functional programming; see the languages APL, K, and J."

Don't believe me? Check out John Conway's Game of Life in APL[1] or the J language[2]. Even a high-level language like MATLAB[3] still reads like normal code.

[1] https://www.youtube.com/watch?v=a9xAKttWgP4

[2] https://copy.sh/jlife/

[3] https://rosettacode.org/wiki/Conway%27s_Game_of_Life#MATLAB


To me it is about abstractions. FP cannot abstract over mutable state. In a way that is a good thing, less mutable state is easier to control as you grow a program. But inability to abstract is not a good thing. (Just like how async infects your callers, so does mutability in FP.)

But also note that in this paper, the main trouble pointed out with imperative languages has long been solved. Using objects/classes and generics or runtime typed (dynamic) languages. All these make it easy to extend and compose.


Functional programming is magical. The majority of people who work with magical frameworks tend to agree that having libraries do simple, predictable things is better than magic. At the same time, everybody seems to think functional languages are awesome.


This is the opposite of the truth; the semantics of functional languages are vastly simpler than the semantics of your average imperative language. This is why C(++) is chock full of undefined behaviors, whereas a language like Haskell (built on extremely simple primitives like lambda abstraction) has almost none. The only interesting undefined behaviors in Haskell are weird things with strictness that no one ever really runs into and that typically don't have any impact on your program unless you're doing something strange with IO-based exception handling. And for any such case, there's a library that exports a version of whatever you're trying to do that has well-defined behavior on any compiler.

One thing that I'll agree is more "magical" for functional language is the transformation from functional code to efficient machine code. It's not obvious how Haskell can achieve such high performance when it makes almost no concessions to the underlying architecture of the computer. On the other hand, it's usually pretty clear what kind of assembly your C will turn into. This can make performance estimation more difficult.


> the semantics of functional languages are vastly simpler than the semantics of your average imperative language. This is why C(++) is chock full of undefined behaviors, whereas a language like Haskell (built on extremely simple primitives like lambda-abstraction) have almost none

C(++) has undefined behavior because it's meant to run close to metal, and there's no official standard of CPU design.

If anything, a C compiler could always insert an "if" on every signed addition on a non-standard arch to get it to overflow the "right way".

The thing is that it would ruin performance.

And the proof is that Java, which is quite imperative, doesn't have nearly as much undefined behavior as C.
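To make the overflow point concrete, Rust (to pick one example) defines all of these behaviors and makes you choose per call site, which is roughly the "insert an if" trade-off made explicit rather than undefined:

```rust
fn main() {
    let big: i32 = i32::MAX;
    assert_eq!(big.wrapping_add(1), i32::MIN); // two's-complement wrap
    assert_eq!(big.checked_add(1), None);      // overflow detected, no UB
    assert_eq!(big.saturating_add(1), i32::MAX); // clamp at the bound
    println!("ok");
}
```

The checked variant is exactly the extra branch the parent mentions, and paying for it only where you ask for it is how the performance concern gets addressed.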


> Java, which was quite imperative, doesn't have nearly as much undefined behavior as C.

You're right, but Java is still orders of magnitude less predictable than Haskell. A few good links here.

http://stackoverflow.com/questions/376338/what-are-the-commo...

It essentially boils down to a combination of unsound type systems and underspecified semantics. Java was not created on a sound theoretical basis - it's quite ad-hoc.


fp is the opposite of magical because it is essentially math, 100% predictable (follows laws) and all things composed of simple primitives


I think the OP of this thread means performance is obscured and as a result requires tuning which can be finicky and difficult


I'll jump into the flame war and say that every single programming language we use with computers is math. It may just not be expressed in a way you are comfortable with. And they are all as 100% predictable as FP.


There is a difference though.

In a (strict) functional language, foo(bar) always returns the same result and always has the same effect.

In almost every non-strict language, foo(bar) can do anything, and can always behave differently based on some state not contained in foo or bar, but in the wider scope of the program.

So while mathematically they may both be equally predictable at the physical level, given only a function and its parameters, a strict functional language is predictable while a "traditional" language is not.
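A small sketch of that contrast (in Rust, using a Cell to model the hidden state; the names are mine):

```rust
use std::cell::Cell;

// `pure` depends only on its argument: same input, same output.
fn pure(x: i32) -> i32 {
    x * 2
}

fn main() {
    // `stateful` reads and updates state outside its parameters, so
    // the same call can return different values on each invocation.
    let counter = Cell::new(0);
    let stateful = |x: i32| {
        counter.set(counter.get() + 1);
        x + counter.get()
    };

    assert_eq!(pure(3), pure(3));         // always equal
    assert_ne!(stateful(3), stateful(3)); // 4 the first time, 5 the second
    println!("ok");
}
```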


I don't think this is because of functional vs. other, but because of the assignment operator and mutable primitives (which functional languages tend to omit). Consider

  \x y -> g (f x) (f y)
We have no way of knowing what this function will do, because f and g are free variables with respect to the function. But we assume they are constant, so we're happy. And if you zoom out far enough eventually you'll find the binding locations.

Instance members and globals are just the imperative version of free variables, and if it weren't for that pesky assignment operator or mutable primitives, they'd be constant and hence completely harmless too.

You could imagine an OO language without assignment or mutable primitives that would have all the advantages of "functional" languages but that would still feel different enough in style that we wouldn't call it one.
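A tiny sketch of what such assignment-free "OO" might look like (Rust here, names made up): methods take the old value and return a new one, so there is no mutation to reason about, yet it still reads like method-call style:

```rust
#[derive(Clone, Copy, PartialEq, Debug)]
struct Point {
    x: i32,
    y: i32,
}

impl Point {
    // No &mut self anywhere: "updating" means building a new value.
    fn moved(self, dx: i32, dy: i32) -> Point {
        Point { x: self.x + dx, y: self.y + dy }
    }
}

fn main() {
    let p = Point { x: 0, y: 0 };
    let q = p.moved(1, 2).moved(3, 4);
    assert_eq!(q, Point { x: 4, y: 6 });
    assert_eq!(p, Point { x: 0, y: 0 }); // the original is untouched
    println!("ok");
}
```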


A computer does not work by means of "essentially math". It is a sequence of instructions. How does "pure math" translate into a sequence of instructions? No one knows! Magic!


I'd like to get a copy of a digitized version of this paper. Also, can anyone point to information about the response from CS academics and software engineers to its publication?


I'm quoting a citation-free sentence from Wikipedia (ugh), but it sums up what I've always heard: "this paper did less to garner interest in the FP language than to spark research into functional programming in general."


There's a record around of correspondence between Dijkstra and Backus relating to the lecture. Don't know of many other reviews off hand.

https://medium.com/@acidflask/this-guys-arrogance-takes-your...


Looking at how kinesin just walks along a protein in raw nature, nothing like the von Neumann style, I doubt it.


For me, functional programming and ideas that come with it, like immutability, gain relevance while constructing software for verification (making software testable), as well as when trying to make software more maintainable by discouraging side-effects.

For me, learning those concepts helped me write better software, even while using imperative languages.


Henceforth, I will add "Keywords and phrases" sections to my crawlable writing


my most recent "non" von neumann discovery https://www.infoq.com/presentations/power-144-chip


-"Computer says no"



