
It's rather unbelievable to me that people find ML syntax unfamiliar and off-putting. It makes no sense, and I have always found it disappointing that both Rust and Gleam moved away from it.

People seem to love Python syntax. ML syntax is just more regular, consistent, and typed. How is that worse?




It's not worse. It's just not better in any significant way. So if you're new to this language, and the first thing it makes you learn is this new syntax which isn't better in any significant way, learning that language immediately feels like you're wasting your time learning something that isn't better in any significant way.

Of course, if you started with ML syntax, you'd feel the same way learning a language with C syntax.

But if you're introducing a new language, it makes sense to target the syntax that the most people are already familiar with. Maybe your language really does bring some real syntax improvements, but it's a silly hill to die on, because syntax just isn't very important.

If syntax changes are all your language brings to the table, your language really isn't worth learning. And if your language brings more important ideas to the table, then it would be a shame for people to never make it to those ideas because they got bored learning unimportant syntax ideas.


"Not better in any significant way" is a hot take, and pretty hand-wavy way of dismissing a whole group of languages. With so many people preferring that syntax, it might just be that you're actually wrong..

For instance, I'd say the piping is significantly better than the backwards reading of nested function calls you have to do in Python.

data |> map |> filter |> group |> sum

vs. sum(group(map(a) for a in data if filter(a)))
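
To make that concrete, here is a small F# sketch of the same shape (made-up data and transformations, not taken from any real codebase):

  let result =
      [ 1 .. 10 ]
      |> List.filter (fun x -> x % 2 = 0)   // keep the even numbers
      |> List.map (fun x -> x * x)          // square them
      |> List.sum                           // add them up: 220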


> "Not better in any significant way" is a hot take, and pretty hand-wavy way of dismissing a whole group of languages. With so many people preferring that syntax, it might just be that you're actually wrong..

I can see how it could be interpreted that way if you didn't read anything else in my post.

A lot more people use C syntax than use ML syntax, so your own argument, "[w]ith so many people preferring that syntax", would support C not ML.

> For instance, I'd say the piping is significantly better than the backwards reading of nested function calls you have to do in Python.

> data |> map |> filter |> group |> sum

> vs. sum(group(map(a) for a in data if filter(a)))

That's not coherent Python syntax, but knowing both languages, I'll agree that the ML syntax is definitely better. But significantly better? Reading that sort of code in Python simply isn't a problem I run into. Neither syntax solves any problems I have, because understanding syntax isn't ever the problem I have (in languages that aren't intentionally opaque).

Incidentally, if you want that sort of thing in C-ish syntax, it's not hard to get it in (for example) C#:

  data.map(f).filter(p).groupBy(g).reduce(add);


I first learned programming with C, C++, Java, and Python. I was unimpressed and remain so. It wasn't until I learned about visual dataflow languages and functional languages, in particular functional-first or pragmatic functional languages, that I began to like programming.

But, I was also not indoctrinated by a computer science degree, which probably has a lot to do with it.


> if you started with ML syntax, you'd feel the same way learning a language with C syntax.

I always wonder this but I’ve always found Lisp syntax to be a lot more intimidating and less readable for whatever reason. I should give it a fair shot one of these days.


Lisp syntax is about the vocab, because the grouping (what expression is a child of what other expression, and at which position: third child, fifth child, ...) is clear. If you don't know what the word means, like what foobly is in (foobly ((x 3)) (blorch x)), and don't yet have the intuition to make an accurate guess, you are screwed if you don't look it up. The meaning of everything inside (foobly ...) could depend on the definition of foobly. foobly could be in the language itself, or some extension in the language implementation, or it could be defined in the code base.

Newcomers to some kind of Lisp get confronted with identifiers that they haven't seen anywhere.


Lisp syntax is unreadable because there is not much syntax. Functions, macros, and variables are all in a mix of a gazillion parentheses.

ML syntax is really clean.


A few of the things I hate about ML syntax:

* Infix semicolons instead of paired delimiters to denote code blocks. These cause shift/reduce conflicts and make indentation unintuitive.

* Implicit currying turning what should be obvious argument-count errors into type errors on unrelated lines (see the sketch after this list).

* Implicit currying making parameter order overly significant. This drives languages to introduce infix pipe operators with unintuitive precedence and to bikeshed forward vs backward piping. This could all be done away with by having a dedicated partial application syntax.

* Postfix, of all things, for type constructors. Except for the ones that are, sigh, infix again.

* Inscrutable type signatures on higher-order functions, largely as a result of the above and of the convention of single-letter type variables.
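
To illustrate the second bullet, here's a minimal F# sketch (hypothetical names, purely for illustration). Dropping an argument doesn't fail at the call site; it just produces a partially applied function, and the type error surfaces wherever that value gets used:

  let add x y = x + y            // int -> int -> int

  let total = add 1 2            // fine: 3
  // but if you drop an argument:
  //   let total = add 1         // no arity error; total is now a function (int -> int)
  //   let doubled = total * 2   // the type error is reported here, not at the call to add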


I don't hate ML or its syntax, but this feels like a reasonable set of irritations to me. I may need to introspect a bit more.


These are my really high-level and poorly explained thoughts on this. I have enormous trouble reading functional programming languages.

Python gets away with significant whitespace by reading like pseudocode, and it's fairly clean. It uses words rather than symbols.

C syntax provides visual structure using brackets.

It seems to keep things readable: you get the option of using symbols or significant whitespace for scope.

Functional languages have a tendency to do both. Lisp’s syntax is defined by replacing syntax with parentheses.

You basically get into a situation where you need to be able to read and comprehend math-like lines of code, which doesn't seem to mesh well with the structure of natural language.

It’s kind of like abstraction in programming. Some people kind of get it. Some people really get it. And other people need to consider a different career.

I think there are just too many good programmers who don't work well with functional programming to say it's a problem of lack of familiarity. I think it could be a fundamental lack of talent.


Have you tried F# or OCaml? I would argue that programming in Python is much harder than it is in F#. It is for me.


I tried F# seriously for a few days and the tooling was so annoying to set up it made me crave going back to pip and virtualenvs

I guess if you're a .NET developer that may be a non-issue, but I literally couldn't get all but the simplest examples to work and just gave up even though I loved the language and its design overall


How recent was this?

If not recent, steps would be:

* Download and install .NET 7 SDK (very easy on basically any platform): https://dotnet.microsoft.com/en-us/download

* Already, you have F#. You can run `dotnet fsi` to enter F# Interactive (FSI), the F# REPL. Or you can create a new solution and project with the dotnet CLI: https://learn.microsoft.com/en-us/dotnet/fsharp/get-started/.... Once you have tests, you can just use `dotnet test` and it will find and run all tests.

* You can also just download VS Code and install the Polyglot Notebooks extension to write F# code in a notebook. All you need is the .NET SDK, VS Code, and the Polyglot Notebooks extension. https://marketplace.visualstudio.com/items?itemName=ms-dotne...

* Either in an F# script (.fsx file), FSI (ran by `dotnet fsi` again), or in a notebook, you can install NuGet dependencies by just writing

    #r "nuget: <package name>"
https://learn.microsoft.com/en-us/dotnet/fsharp/tools/fsharp...
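
As a sketch of what that end-to-end workflow can look like (hypothetical script name and package choice; run it with `dotnet fsi script.fsx`):

  // script.fsx -- reference a NuGet package straight from a script, no project file needed
  #r "nuget: Newtonsoft.Json"

  open Newtonsoft.Json

  type Person = { Name: string; Year: int }

  // serialize an F# record using the package pulled in above
  let json = JsonConvert.SerializeObject({ Name = "F#"; Year = 2005 })
  printfn "%s" json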

I'm confused about the tooling being bad. Python doesn't have anything like the solution and project management that F# and .NET have. Visual Studio on Windows and macOS are good, there's JetBrains Rider, and also VS Code with Ionide for IDEs. I've never had issues with the tooling for F#. That cannot be said for pip, in my experience.


> the tooling was so annoying to set up it made me crave going back to pip and virtualenvs

Damn, if you wanted to go back to pip and virtual envs, then that's really saying something about how bad the tooling is, because pip and virtualenv are the bane of my existence when writing Python. Give me cargo or even npm and I'd be happy.


F# tooling is not bad, actually. In fact it is much easier to have project-level dependencies, unlike Python where dependencies are system-wide or virtual-environment-wide. I just think OP is unfamiliar with the kind of tooling that .NET has and got deterred by that. I'm a Python developer who has tried F# casually, and adding a dependency is as easy as copy-pasting a command from the NuGet repository online.


Something I like about boring Java: I hardly ever think about the dependencies management tool.


Java is even worse, to be honest: you basically need an IDE for it, so much so that VSCode literally ships a build with Java preconfigured.

https://code.visualstudio.com/docs/languages/java#_install-v...


I was specifically thinking of dependencies management. VSCode/IDE is kinda out of scope here.

I find the IntelliJ products are good IDEs. I use VSCode for Node stuff and it's fine.

As a daily driver, I personally use vim and build with Maven. It's boring; nothing ever happens. I need two binaries and one edit to my $PATH. Once in a while I do use IntelliJ, to explore a new codebase.


Gradle, Maven etc are also a pain, especially when compared to cargo or other all-in-one package management and build tools.


How so? I hardly have to interact with it. If I need a new dependency, it’s a new line.

Node or pip will find innovative ways to break in that same scenario.


It couldn't possibly be that VSCode ships with a build for a language that is enormously popular preconfigured instead?


So? Lots of languages work well with VSCode that aren't TypeScript.


My point is that the REASON VSCode ships with java support is NOT that Java "needs" it.

But I think you knew that.


You need an IDE for any language honestly. And I don't see that as a negative.


I have never used an IDE for any language. Some languages make it very easy to get set up. Python, Haskell, Rust, and Node are extraordinarily straightforward with basic CLI tooling; others like C++ are more complicated, but the compilers and tooling are extensively documented.


Every company I've worked at has extensive setups regardless of the language. Even supposedly "simple" languages like Python will benefit from IDEs, and it shows. It's especially the case when programs become larger and more complex.


Static types mean some languages benefit from IDEs more than others. Code completion is not very meaningful without types unless you use some machine learning magic.

The nice thing about TypeScript is that it was basically designed as an IDE language. It has just enough typing to make the IDE useful and lets you ignore the typing well enough when you need to. It isn't a safe language, just a toolable one.


F# was one of the languages I was thinking about when I wrote my comment. :)


Very strange. When I write F# code, it almost directly mimics the problem statement. For example, I have implemented a fair amount of The Ray Tracer Challenge in F#, and it feels like I'm literally just writing down the description and tests from the book and have working code.

Do you have a particular example of Python that you feel shows off its pseudocode abilities? Because one can write F# almost identically to Python, so I just can't imagine Python being superior, given that it lacks several domain modeling tools that F# has (records and discriminated unions).


Speaking as a Python dev with some experience in F#, one maybe-minor point is the unfamiliar function call syntax that could throw off programmers not familiar with curried functions. Most other languages have parentheses for function calls, so it can be a bit jarring to have a sequence of strings and not know whether the result is a function or a value. Furthermore, some experienced F# devs love making custom symbols, which can also obfuscate the code.


> so it can be a bit jarring to have a sequence of strings and not know whether the result is a function or a value

I think that the nice thing about F# is it is always a value. Function evaluation and currying can be taught fairly readily, especially given that once you get it, there's no gotchas.
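
A tiny sketch of that point (the names are just for illustration): partially applying a curried function gives you back another value, a function, with no special syntax involved.

  let add x y = x + y                  // int -> int -> int

  let addTen = add 10                  // partially applied: a value of type int -> int
  let answer = addTen 32               // 42

  // pipelines are the same idea: each stage is just a value being passed along
  let total = [ 1; 2; 3 ] |> List.map (add 10) |> List.sum   // 36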

> Furthermore some experienced F# devs love making custom symbols, which can also obfuscate the code

I agree that is painful when it occurs, but that style should absolutely be de-recommended and is so by Don Syme. I haven't really encountered it much in my personal development though (or I have avoided it).


> You basically get into a situation where you need to be able to read and comprehend math-like lines of code. Which doesn’t seem to mesh well with the structure of nature language.

"The Art of Doing Science and Engineering" has a few paragraphs about this. He points out that language has some intentional redundancy, because humans are unreliable and require it. But programming language designers are often skilled in logic, and prefer that to verbose "human" communication.

The language referenced is APL, but it made me think of functional languages which have a lot of custom infix operators, etc.


> But programming language designers are often skilled in logic, and prefer that to verbose "human" communication.

Reminds me of a thing my dad quoted about the problem with Newtonian notation, a.k.a. dot notation or flyspeck notation: with it, a fly can do differential math. As a result, people usually use different notation.

That said, redundancy allows for both readability by humans and better resync by parsers when there are errors. And it makes it easier to code-gen and auto-refactor code.


My way of explaining this is that we have a part of our brains which is intended to handle and recognize grammar. If we offload some of the work to it, it feels easier to deal with code. Even though what we are actually doing is certainly harder.

The tradeoff of this kind of grammar is that metaprogramming becomes harder: by making some of the structure into special grammar, you make it harder to manipulate programmatically.


Familiarity.

The preferred syntax is the one that incurs the least cognitive load. Which in turn depends on what they are already familiar with.


It makes perfect sense. Most programmers learn to program on a C-like, Basic-like or Python-like language. Not ML-like. What's familiar is what's easiest to adopt.


But those languages weren't familiar to begin with. It's like people are saying programmers can only learn to program one time.


They can only learn to program for the first time once. After that, everything is relative to their first language, so learning cost becomes a function of the distance between old languages and the new one.

Moreover, the gain in going from zero programming languages to one is huge. You can make things! Going from one to two is small in comparison. You can make things, but hopefully somewhat better.

So the ROI on learning a very different language isn't great for most. And that's when the technology adoption curve comes in handy: https://en.wikipedia.org/wiki/Technology_adoption_life_cycle

For any technology, some people will like the novelty and will be risk tolerant, so they'll pick it up for fun. Others will be seeking some strong benefit; if the technology provides that they'll adopt it too. If that happens, another chunk of people will pick it up as the coming thing. But circa half of people adopt the new thing only when it becomes dominant or when they're forced to by circumstance.

Max Planck said, “A new scientific truth does not triumph by convincing its opponents and making them see the light, but rather because its opponents eventually die, and a new generation grows up that is familiar with it.” And that's often shortened as "science proceeds one funeral at a time." Software's not quite that bad, but it's certainly heading in that direction.


Given how much effort has been spent in shoving Javascript into places it doesn't belong solely so that programmers don't have to learn something else, that seems to be more true than not.


No, but that's what most people learned, so most people will continue to favor more C-like languages than languages that are not C-like. If you want people to favor ML-like languages, you'd have to start off teaching them ML first (which some colleges do).

Industry-wise as well, C-like langs have built up network effects over the past 30 years, especially in JS land, which accounts for a lot of the professional software engineering work done these days.


This is incorrect. People are born with an innate biological bias towards C like syntax OVER ML syntax or even FP syntax.

It's easy to prove this fact with some natural example outside of programming.

Let's say I have several tasks I need you to do. If I wanted to present those tasks to you in written form, what would those tasks look like?

They would look like a series of procedures. Any human would present imperative instructions to you WITHOUT being taught anything beforehand. Nobody is going to put those tasks into a single run-on sentence, because that's less intuitive to us.

Additionally, if I want you to repeat a task, I'm not going to write something self-referential. I just say repeat this task four times or something. We communicate this way through language and through writing, so OF COURSE C-like syntax is more natural.

(for those who didn't pick up on it, I'm referring to function composition and recursion respectively in the examples I just gave.)

You got it backwards. The industry teaches C-like syntax because that syntax is naturally more intuitive. This is despite the fact that in the long run FP-syntax is overall better.


> People are born with an innate biological bias towards C like syntax OVER ML syntax or even FP syntax.

Are there some studies that back that up? My wife took the How to Code course on edX, and the use of Racket was a non-issue. The use of it was quite literally transparent.

The issue is that humans react to something foreign with huge amounts of bias. So if a person has any introduction at all to some form of a C-type language, it seems to shut down all forms of curiosity, generally speaking. I honestly wonder if that has been studied.

I was first taught C, Python, and Java, and was totally uninterested in programming. Once I found functional-first and visual dataflow languages, I was hooked.

I'm genuinely curious what C and Python people think when they learn something like Prolog. Do they exclaim "this isn't hard enough"?


>Are there some studies that back that up? My wife took the How to Code course on edX, and the use of Racket was a non-issue. The use of it was quite literally transparent.

There's no studies. It's like asking for a study about how people tend to smile when they're happy. Do you really need a study for something so obvious? Do you go around the world thinking nothing is true or real unless someone spent money on some sort of quantitative analysis?

If you read the rest of my post you'll see that what I stated follows from logic. When we have a series of tasks we want written down, we automatically write them down as a list of imperative procedures. This is done automatically. We haven't taken any TODO-list-writing lessons or anything beforehand; we just do it procedurally, meaning we're born to do it this way. It's innate.

This is the obvious default. And thus as a result any programming language similar to that default will be more intuitive.

As for your wife, who's to say procedural programming is not easier? She may have found Racket easy, but that doesn't mean she wouldn't find imperative languages far more intuitive.


> People are born with an innate biological bias towards C like syntax OVER ML syntax or even FP syntax.

I disagree with this, as most people do not use any of these languages. Excel is by far the most popular programming language, and it's a mostly-functional, reactive, lisp-style language. That's very far from C, and the closest we have to end-user programming. So if anything, the evidence says that people have an innate biological bias toward Excel-style programming; whereas we, the small fraction of a fraction of people who can program in C, are the weird ones.


Wrong. Excel is biased towards familiarity with mathematics. It just builds on our pre-existing familiarity with mathematical formulas. The intended usage is ALSO just mathematical formula calculations. Excel is not typically used to execute step-by-step procedures across IO.

To analyze our innate biological biases you must remove all context of mathematics and programming and just frame it from the context of a typical activity:

What do you get when you ask someone to write down a series of tasks?

You get a list of imperative instructions. This happens without ANY additional learning or education.

For Excel you need to be taught Excel, and there's a prerequisite of already being familiar with basic mathematical language.

I mean honestly this is all obvious to me. I can come at it from another angle to help you see why it's so obvious.

What type of universe are humans designed to live in? We live in a universe that moves forward in time; with each incremental step forward in time come changes in the properties of the things around us. State changes and mutations. Our brains are tuned to live in this type of state-changing universe, ALONG with being state-changing entities that are an intrinsic part of the universe itself.

Logically a programming style that imitates how the universe, and how our minds work will be MUCH more intuitive.

What sort of universe is functional programming? It's a place where time doesn't exist and state changes don't occur. Such a universe is actually impossible.

Think about it. You know about von Neumann machines, right? Can you build a lambda calculus machine where no imperative instruction ever occurs? No, it's impossible. In order to even simulate this universe we need access to the time domain to execute an evaluation step.

Theoretically, if you build some sort of adder that can add 50 values at the same time, you can have a 50-operation FP program evaluated in one step. Practically speaking, the evaluation must happen in several steps because adders only add a couple of values at a time and then save the result to memory. We only know time doesn't exist in the FP world because these equations can be evaluated in different orders. That's why Haskell can do lazy evaluation.

Basically functional programming is a sort of clever fantasy universe we made up. We aren't biologically tuned to operate or even analyze such a universe. We have to take extra steps in imagination to make sense of it. It is NOT intuitive.


> What do you get when you ask someone to write down a series of tasks? you get a list of imperative instructions.

I think this is where we differ, and why everything is clear to you and not to me. You're taking this as axiomatic, but I think there's more to it than that.

Of course you get back an imperative list of instructions when you ask someone to write down a series of tasks. It's almost tautological. What you're missing is that not all programming is writing a list of instructions, because indeed not all programming is imperative. I'm sure you know this, but you've been conflating programming paradigms and syntaxes, so I want to make sure we're clear with the terminology. I can agree that the imperative paradigm is indeed very natural for many people to understand, but I don't think that means people are biologically predisposed to C-style syntax.

We can look at how young children learn how to program in order to get a feel for this. Seymour Papert famously studied this with his language Logo, which is a lisp-like language. Its turtle graphics module allows kids to give imperative instructions to a computer which cause it to draw shapes on the screen. But it's a far cry from c-style syntax, so I think the link between a paradigm being "natural" and a predisposition for any particular syntax is tenuous. Indeed many new programmers reach for Python over C, which while imperative, eschews much of the syntax that really gives C its style.

Moreover, it seems like neither the syntax nor the paradigm is what causes young learners to really grok Logo. In his book "Mindstorms", Papert demonstrates how he can use his system to get young students in elementary school to simulate systems of differential equations. Alan Kay does something similar in his Etoys environment, which itself isn't imperative. As an educator, I've never seen anyone get young students to reliably do the same with C.

> This happens without ANY additional learning or education.

It does happen after some brain development. Much easier than coming up with a list of steps to execute is just to ask something else to give you the result you want and allow it to fill in the details, a.k.a. declarative programming. Children will do this far before they gain the ability to formulate a correct step-by-step program.

To test this, ask a child what they want for lunch. They will not give you a list of steps to complete to make the lunch:

  - go to the kitchen.
  - put bread on plate
  - put cheese on bread
  - put turkey on cheese
  - put bread on top
  - return to me with plate 
No. They will say "I want a sandwich with cheese and turkey", and then will expect a sandwich to be delivered that meets the specifications. This is the declarative paradigm, so if we're using the developmental timeline of children to determine to which programming paradigm they have a biological predisposition, it would then have to be declarative, not imperative.

> What type of universe are humans designed to live in? We live in a universe that moves forward in time, with each change in incremental step forward in time comes changes in the properties of things around us.

If our world were a programming language, it would be continuous, distributed, parallel, and asynchronous. It's like that movie: everything is happening everywhere all at once, which is pretty much the exact opposite of imperative "one thing at a time, then move to the next thing" programming. The reactive paradigm exhibited in Excel actually is closer to this reality than the imperative paradigm.

Note I'm not really arguing for the functional paradigm, I'm arguing for Excel as the most natural for people. I still think that's the case after our exchange, especially considering the points you raised.

> That's why haskell can do lazy evaluation.

Haskell can do lazy evaluation because it was explicitly designed by scholars to specifically study lazy evaluation. You can have a functional strictly evaluated language. It's not key to the paradigm.

> Basically functional programming is a sort of clever fantasy universe we made up. We aren't biologically tuned to operate or even analyze such a universe.

Given what I wrote above about being continuous, distributed, and asynchronous, it would seem that imperative programming on discrete logic is also a clever fantasy universe.


>Haskell can do lazy evaluation because it was explicitly designed by scholars to specifically study lazy evaluation. You can have a functional strictly evaluated language. It's not key to the paradigm.

The point I made here is haskell CAN be made to do lazy evaluation. An imperative program CANNOT be made to do lazy evaluation.

    example:

    imperative:
    let mut x = 1
    let mut y = x + 2
    x = 3
    y = x + 2

    functional:
    x1 = 1
    y1 = x1 + 2
    x2 = 3
    y2 = x2 + 2
You will note that in the second example the order of the expressions does NOT matter, while in the former it absolutely matters. Changing the order of the first program can change the meaning of the program itself. It is not a "design decision"; these are distinct and intrinsic properties of the imperative and functional programming paradigms respectively.

>If our world were a programming language, it would be continuous, distributed, parallel, and asynchronous. It's like that movie: everything is happening everywhere all at once, which is pretty much the exact opposite of imperative "one thing at a time, then move to the next thing" programming. The reactive paradigm exhibited in Excel actually is closer to this reality than the imperative paradigm.

So? It's still imperative. Everything is executed procedurally through temporal space. One thing happens after another. This is ORTHOGONAL to parallelism and async, because each thread must still be executed in sequence. The concept of continuous space has been shown by physics to NOT be the case, but this is beside the point, as it's just another orthogonal concept.

>But it's a far cry from c-style syntax, so I think the link between a paradigm being "natural" and a predisposition for any particular syntax is tenuous.

This is a bad example. All this proves is that people are capable of learning functional syntax. It does not say whether functional syntax is more natural than C-like syntax. That is the heart of the question: which one are we more biologically predisposed to? You put people in a situation where they have to learn functional syntax; of course most people can learn it, in the same way people can learn reading, writing, and math even though we aren't biologically predisposed to those. (I didn't choose this example earlier because it suffers from the same problem as Logo: there is a clear bias towards one paradigm that's intrinsic to the language.)

You need an experiment given to a person with no concept of either paradigm to see what choice they make. Like you said, it's "tautological": they make the imperative choice without any prior education or influence.

>To test this, ask a child what they want for lunch? They will not give you a list of steps to complete to make the lunch:

If you ask a child what they want for lunch, they will give you the black-box library definition of "lunch". If f(x) = x + 1, they do not tell you x + 1; instead they tell you f. You asked for f, they gave it to you.

If you asked the child "how do you make lunch" and they told you, "I want a cheese and turkey sandwich", well that would be the wrong answer. You asked for the definition of f, and they didn't give you the definition they gave you f.

In short let me put it this way. Your question was demanding a declarative answer. The child gave it to you because you didn't ask for anything else.

If you reword the question to ask for a temporal answer that flows across the time dimension, where each step is dependent on the previous step, the child will give you the answer in imperative form rather than functional. "How do you make lunch?"

Bending your series of instructions on making a sandwich into the do notation of monadic closures in Haskell is really trippy; no adult thinks this way, let alone children.

This is important to know. Because functional programming is essentially asking the question "How do you make the sandwich in a way that doesn't involve time" This is a question that does not have an intuitive answer. This is the real question that needs to be asked.

That is why FP is much less intuitive than imperative programming.


> An imperative program CANNOT be made to do lazy evaluation.

That's not really true, as the execution strategy is not necessarily connected to the programming paradigm. You can create a strict functional language or a lazy imperative language. These are orthogonal dimensions of the design space.

For example:

  z = expensive_calculation_take_ten_minutes();  // defer evaluation of z
  x = get_user_input();                          // strictly evaluate x
  y = some_other_thing(x + 3);                   // strictly evaluate y
  if y > 10 {
    print(z);                                    // lazily evaluate z, print(z) here
  } 
There's no reason at all this imperative program must be strictly evaluated. Evaluating z can be deferred until it's clear that it's needed in the print call.

> This is ORTHOGONAL to parallelism and async because each thread must be executed in sync.

It's not really orthogonal, it's a dual. Imperative code can be made to run in parallel by running it on multiple machines simultaneously. But because the imperative model clashes with asynchrony and parallelism, it's much harder to express these concepts in imperative code. It's much easier to express asynchrony in an asynchronous-first language, but in those languages it's harder to express an ordered sequence. Many of the problems I encounter with distributed async programming in imperative languages exist at the boundary between asynchrony and synchrony. The best solution we have today to write async code is to enter a special async runtime, which needs to exist in order to explicitly support the distributed/async/parallel semantics that the imperative paradigm cannot model well. This results in a phenomenon called "function coloring", where "async colored" functions can only be used in async code, essentially bifurcating programs into async and synchronous parts.

The notion of "callback hell" should make plain why imperative programming is not the most natural paradigm to model asynchronous communication. Callback hell exists because shoehorning the imperative paradigm into an asynchronous runtime is clunky at best.

> The concept of continuous space has been shown by physics to NOT be the case

But is that understood at an innate biological level? I think not. Human beings, and in particular students of programming, experience their world continuously in time and space. This is why they are surprised that 0.1 + 0.2 != 0.3 in most languages.

> It does not say whether functional syntax is more natural then c-like syntax.

To be clear, Logo is a multi-paradigm Lisp-like language. You can express imperative code cleanly in Logo, and that's how children use turtle graphics. The example is to show you that c-style syntax is not at all connected to an ability to program at a young age without any concept of either paradigm. Insofar as you are claiming that c-style syntax is what makes imperative programming biologically innate, the existence proof of Logo means you must be wrong on that point, as it has a Lisp-like syntax. If anything, my experience teaching students C shows me that the syntax hinders the learning process (e.g. = vs == and imperative c-style for loops are the source of very many bugs for new programmers).

> You asked for f, they gave it to you.

Right, and you asked for a list and they gave it to you.

> If you asked the child "how do you make lunch"

Throughout your posts, you are treating "here is how I make a sandwich" as programming, while you don't seem to consider "I want a sandwich" as programming. These are both programming. The former is imperative, the latter is declarative. If we want to understand whether a young child is capable of imperative programming, we ask them your question. If we want to understand if they are capable of declarative programming, we ask them my question. The question is: which question can they answer first? Child development science tells us it's the declarative program they will be able to express first.

> Your question was demanding a declarative answer. The child gave it to you because you didn't ask for anything else.

Right, indeed. But your question demanded an imperative answer, and that's what you got. My point is that developmentally, children will have the capacity to give you the declarative program long before they can formulate the imperative one. So does that mean we have a biological predisposition to declarative programming? Or maybe there's no such thing as a biological predisposition to any particular paradigm?

> If you reword the question to ask for a temporal answer that flows across the time dimension where each step is dependent on the previous step the child will give you the answer in imperative form rather then functional.

I'll note here that the only temporal notion offered by the imperative paradigm is the notion of an ordered sequence. There are far richer treatments of time in other paradigms, which include temporal operators specifying "as soon as", "before", "after", "until", "whenever", etc. These concepts cannot be expressed directly in imperative paradigms.

The notion of time in most imperative languages I know of is CPU time, or instruction time. It has little to do with wall time, so appeals to how closely imperative programming adheres to a concept of time that's intuitive from human experience fall short. Indeed, I often see students frustrated by the inability to cleanly model these concepts in imperative languages.

> Bending your series of instructions on making a sandwich into the do notation of monadic closures of haskell is really trippy, no adult thinks this way, let alone children.

It's interesting that this is how you phrase it, because in my experience children do indeed naturally ask for sandwiches often before they can even describe how they're made. Have you ever seen a kid get mad that the jelly is on the bottom of the PB&J, only to placate them by turning it over? Such a child has no idea how the sandwich is made.

If that means they are using the notation monadic closures of Haskell naturally, it would seem to cut against your argument that this notion is not natural.

> That is why FP is much less intuitive then imperative programming.

To be clear, I'm not saying it is more intuitive. I'm not even mounting a defense of functional programming here. I'm pushing back against the notion that c-style syntax is innately understood at a biological level. In your various posts here you seem to conflate syntax and semantics, which I don't understand because you seem experienced enough to know the difference. Or maybe I'm misunderstanding you, but I feel like you've offered a pretty full-throated argument, so perhaps I'm just not following. But I think we're talking past one another at this point.


>Right, and you asked for a list and they gave it to you.

No I didn't. I asked how do you make a sandwich. The answer doesn't have to be in list form. The child chooses to give me the answer in list form.

For your question, however, the only possible answer is declarative form. There is no other choice. You structured the question in such a way that it demanded a singular answer: "What do you want for lunch?"

>I'll note here that the only temporal notion offered by the imperative paradigm is the notion of an ordered sequence. There are far richer treatments of time in other paradigms, which include temporal operators specifying "as soon as", "before", "after", "until", "whenever", etc. These concepts cannot be expressed directly in imperative paradigms.

Incorrect. The functional paradigm is a subset of the imperative paradigm. Imperative is simply a series of functional programs with mutation. All "richer" treatments are therefore part of the imperative style.

>The notion of time in most imperative languages I know of is CPU-time, or instruction time. It has little to do with wall time, so appeals to how close imperative programming adheres to an concept of time that's intuitive due to human experience falls short. Indeed, I often experience students frustrated by the inability to cleanly model these concepts in imperative languages.

There is a notion of time that doesn't have to be so technical. The notion is that one thing happened after another thing, or another thing happened before something else. This concept is expressed through change. Mutation. When you remove mutation then this concept of time disappears.

>It's interesting that this is how you phrase it, because in my experience children do indeed naturally ask for sandwiches often before they can even describe how they're made. Have you ever seen a kid get mad that the jelly is on the bottom of the PB&J, only to placate them by turning it over? Such a child has no idea how the sandwich is made.

No, this simply means they know about the sandwich as a blackbox abstraction. This has nothing to do with monadic IO. If a child knew how to make a sandwich, he for sure wouldn't know how to express the construction process as monadic do notation.

>To be clear, I'm not saying it is more intuitive. I'm not even mounting a defense of functional programming here. I'm pushing back against the notion that c-style syntax is innately understood at a biological level. In your various posts here you seem to conflate syntax and semantics, which I don't understand because you seem experienced enough to know the difference. Or maybe I'm misunderstanding you, but I feel like you've offered a pretty full-throated argument, so perhaps I'm just not following. But I think we're talking past one another at this point.

I'm saying that c-style syntax is innately understood at the biological level. C-syntax is equivalent to procedural syntax or semantics or whatever you want to call it.

> There's no reason at all this imperative program must be strictly evaluated. Evaluating z can be deferred until it's clear that it's needed in the print call.

The imperative part of the program must be strictly evaluated. You only lazily evaluated the declarative part of your program where the values are immutable. x must be evaluated before y and before print because those values mutate. They cannot be done in different order.

All the immutable parts of your program can be evaluated out of order. But the mutable parts cannot be.

>To be clear, Logo is an multi-paradigm Lisp-like language.

I will have to see this language to know what you're talking about. But lisp is usually taught functional first. Children searching for documentation have to dig deeper to find the c like syntax. By nature of documentation and usage, children are more likely to bias towards functional over imperative simply because the bias of the manual and users are also heading in the same direction.

>But is that understood at an innate biological level? I think not. Human beings, and in particular students of programming, experience their world continuously in time and space. This is why they are surprised that 0.1 + 0.2 != 0.3 in most languages.

Yes. A student readily understands 1, before understanding 1.0000111111. There's a dual interpretation arising from two separate modules in the brain. There's a part of the brain that's visual and there's another part that's symbolic and handles language.

The language aspect of our brain is by default discrete; the visual part is continuous. Our interpretation of symbols and language flowing through time is not continuous at all. But our interpretation of movement through space is.

>Throughout your posts, you are treating "here is how I make a sandwich" as programming, while you don't seem to consider "I want a sandwich" as programming. These are both programming. The former is imperative, the later is declarative.

No I'm saying here is "how I make a sandwich" can be EITHER declarative OR imperative. I am saying "I want a sandwich" is declarative programming with no way to convert to imperative.


> No I didn't. I asked how do you make a sandwich. The answer doesn't have to be in list form.

This is what you argued originally is proof of your claim:

  Let's say I have several tasks I need you to do. If I wanted to present those tasks to you in written form, what would those tasks look like? They would look like a series of procedures. Any human would present imperative instructions to you WITHOUT being taught anything beforehand. Nobody 
This is a very strong claim. "Any human" includes even the smallest children. "Nobody" precludes any one. All we have to do to prove your claim false is to find a single human who won't do as you said, which is trivial. I've already identified one: small humans whose brains haven't developed enough to comprehend ordered instructions.

Moreover, it's leading. You start with "I have several tasks" (which sounds like a list), and you want them written down (okay, so you want someone to write the list you just told them).

> Incorrect. the functional paradigm is a subset of imperative paradigm.

I must state again that I'm not defending or referencing functional programming in any of my replies. I started with a reference to Excel, and have also mentioned Etoys and Logo. None of these are considered strictly functional. When I mentioned temporal programming, that's another paradigm entirely separate from functional and imperative.

> The notion is that one thing happened after another thing, or another thing happened before something else. This concept is expressed through change.

This is a very indirect way of expressing time. The only thing that needs to mutate for time to pass is the timer. In the imperative model, the world is stopped until the next statement is executed. This is to match the paradigm to the computing hardware, not to the real world as you keep insisting.

> I'm saying that c-style syntax is innately understood at the biological level.

Okay, if you say so. You literally don't have a single study backing this up. The only argument you've offered is "it's obvious", which I assure you, it's not. If it's only obvious to you, then maybe all you're arguing is that you innately understand c-style syntax. Which, sure. But that isn't generalizable to "every human", as you are literally claiming.

Can you answer me this: have you ever taught C to anyone, like a 6-year-old? Can you honestly tell me it was the case that they were presented with C and they immediately understood the syntax? I remember learning C at 6 and it wasn't a smooth process. Then again, maybe that just indicates I'm not a human.

> All the immutable parts of your program can be evaluated out of order.

That doesn't make it not imperative. If you think your imperative code is being executed in the order it's written, you should seriously investigate your compiled code. Optimizing compilers make all kind of decisions to reorder statements when they can.

> I will have to see this language to know what you're talking about. But lisp is usually taught functional first. Children searching for documentation have to dig deeper to find the c like syntax.

The first program all students write in Logo is to draw a square:

  pd
  repeat 4[forward 100 right 90]
  pu
This doesn't look anything like C. It's completely imperative. It's understood and used by 6-year-olds. These same 6-year-olds are lost when it comes to C. There is a mountain of research to support this, which I've already referenced for you. That you don't know about these languages really cuts against your argument that c-style syntax is biologically and innately understood by all humans, as it seems that you're arguing from ignorance.

> Yes. A student readily understands 1, before understanding 1.0000111111.

I'm not disagreeing with that. I'm saying that children understand their world to be continuous even before they understand the number 1. Students then learn about integers, and then about the continuous nature of the number line. That nature is intuitive precisely because they understand the concept from physically interacting with the world. Seymour Papert discusses this idea in Mindstorms.

> No I'm saying here is "how I make a sandwich" can be EITHER declarative OR imperative.

In what way? Declarative programming is all about eliding the how and leaving it up to the compiler. If you answer this question declaratively, it won't contain instructions to make a sandwich. Such details are left to the implementation.


>This is a very strong claim. "Any human" includes even the smallest children. "Nobody" precludes any one. All we have to do to prove your claim false is to find a single human who won't do as you said, which is trivial. I've already identified one: small humans whose brains haven't developed enough to comprehend ordered instructions.

It is a strong claim and I stand by it. In your mind you feel as if you identified a counter example. I plainly already told you that it is NOT the case.

First off lisp like languages are by default geared toward an FP style. Documentation and syntax makes it easier to do the FP style over imperative. The language you chose has clear bias towards the FP style.

Additionally, nothing stops people from doing Python in the FP style. Python is very amenable to that. One could make the argument, according to your flawed logic, that with list comprehensions, reduce, and recursion, one should, if your theory is correct, automatically do FP when using the Python language. (I didn't choose this example earlier because it suffers from the same problem as Logo: there is a clear bias, towards imperative, that's intrinsic to the language.)

But this doesn't occur. People choose the paradigm that fits them better. Imperative.

Those kids may understand FP programming. They may be able to learn Logo. But that doesn't mean that the imperative style wouldn't have been easier and more natural.

>Moreover, it's leading. You start with "I have several tasks" (which sounds like a list), and you want them written down (okay, so you want someone to write the list you just told them).

This is you illustrating my point. I have 5 things, I have 2 tasks, I have several things, I have several tasks. None of these things explicitly point to procedures that have to be in order. It's simply your natural bias. If I say write something down but I don't say write it down in a single sentence or write it down as a numbered list, your bias automatically inserts imperatives in there. The language itself is neutral, but your natural tendency to insert additional meaning into the sentence is proof of my point.

>I must state again that I'm not defending or referencing functional programming in any of my replies. I started with a reference to Excel, and have also mentioned Etoys and Logo. None of these are considered strictly functional. When I mentioned temporal programming, that's another paradigm entirely separate from functional and imperative.

Understood. My point here is just that the additional "features" you mentioned aren't exclusive to the declarative paradigm, because the declarative paradigm is actually a restrictive style. Imperative programming has more features and more freedom than functional programming, but that doesn't necessarily make it better.

>This is a very indirect way of expressing time. The only thing that needs to mutate for time to pass is the timer. In the imperative model, the world is stopped until the next statement is executed. This is to match the paradigm to the computing hardware, not to the real world as you keep insisting.

No. This is the most natural way. For most of human civilization mathematics did not exist. The concept of time as a symbolic measurement did not exist. People measured time through change. When you take symbols and language and strip it down to our base perceptions, time is mutation, time is change. It is the most primitive and fundamental measurement of time.

Again your bias with what you know (timers) influences your view here.

>Okay, if you say so. You literally don't have a single study backing this up. The only argument you've offered is "it's obvious", which I assure you, it's not. If it's only obvious to you, then maybe all you're arguing is that you innately understand c-style syntax. Which, sure. But that isn't generalizable to "every human", as you are literally claiming.

Why do we need studies for everything? It's like I can't have thoughts or strong opinions on anything if there's not some scientific study backing them up? I strongly believe that you are human. Do I need a study to prove such an obvious point? Science is just the definitive, data-driven way of arriving at an accurate conclusion. The downside is that it's slow and expensive. You can arrive at similar conclusions using less accurate pathways like logic and common sense.

> There is a mountain of research to support this, I've already referenced for you.

Links? From what you described, though, the "research" doesn't confirm or deny anything. Like I said, it only shows that children can learn functional. It doesn't show that functional is more natural than imperative.

>This doesn't look anything like C. It's completely imperative.

Do you mean functional?

>In what way? Declarative programming is all about eliding the how and leaving it up to the compiler. If you answer this question declaratively, it won't contain instructions to make a sandwich. Such details are left to the implementation.

No it can. Simply treat the time dimension as a spatial dimension. That's how FP languages do it. The "how" is simply a series of temporal tasks done in a certain order. If you treat time as physical geometry then you simply "declare" the whole procedure in one shot.

>I'm not disagreeing with that. I'm saying that children understand their world to be continuous even before they understand the number 1.

No, this is not necessarily true. The brain is made up of multiple modules that understand things in different ways at the same time. The language module in your brain understands things discretely by default; the spatial part of your brain understands things continuously. There is no "singular" form of understanding these concepts where one concept is understood before the other.


> It is a strong claim and I stand by it. In your mind you feel as if you identified a counter example. I plainly already told you that it is NOT the case.

Strong claims require strong proof, and can be summarily dismissed if said proof isn't provided after a discussion spanning thousands of words. Sorry, but I don't think there's much more to say here until you've familiarized yourself with the literature. You've only just heard of Logo from me, which is decades old and is exactly the kind of programming language that touches on what you're trying to say here. The Logo team backs up their work with a system they implemented to explore how children learn to program, and then studied this system extensively with actual children. Their findings don't support what you're trying to say here. Your entire argument is based on "it just makes sense to me", which is fine if it makes sense to you logically, but that's not what the literature shows. Sometimes reality is counterintuitive.

As proof of your assertion, you state that literally any single human being would give you an answer that comports with your point of view. I find this unconvincing when held up against the body of literature I've referenced here, which studies exactly the kind of person you're looking for to prove your point: people without any preconceived notions about programming. The literature emphatically shows that they are not innately and biologically predisposed to prefer c-style syntax. It's just flat not true according to the literature.

I'm surprised you're so sure of yourself, seeing as you haven't read anything from this body of work. Maybe I'm not the one going off my own preconceived notions? Because I'm here citing sources, and all you can say is "it just makes sense to me, you're wrong because you're biased". When asked for any sources to back up your assertion, you declined to provide one, and stated you didn't even need anything to back you up. This argument is unconvincing in any context, but especially when there's actual research contradicting you.

> First off lisp like languages are by default geared toward an FP style.

Let me stop you right there. Logo is not a functional language; it supports programming in multiple paradigms, including imperative. You just learned about it yesterday, so I don't know how you can keep asserting this. The syntax is Lisp-like, but it's a multiparadigm language. Earlier I claimed that I thought perhaps you knew the difference between syntax and semantics, but now I'm not so sure, as you keep insisting that `lisp syntax == functional language`.

> links? From what you described though, the "research" doesn't confirm or deny anything.

It shows young children can write very sophisticated programs when handed Logo, but are at a complete loss when given C. It's pretty hard to square that research against your assertion that humans are biologically predisposed to c-style syntax. If you were right, they would take to C just as naturally as they take to Logo. Research has found that's not the case.

I've given you all the information you need to find the sources I referenced. You can find "Mindstorms" at your local bookstore or library. The author is Seymour Papert. You can find out about Etoys from Alan Kay's body of work. You can find the Logo language online and code in it yourself to see it's multiparadigm and not strictly functional. It's an example of imperative programming used by kids that is devoid of the c-style syntax. Logo is taught to kids because they take to it better than C. Conventional imperative languages already existed when Logo was created, and its designers specifically made it to fit how children learn. The key insight of Logo (particularly the turtle graphics module) is to frame the program as kinematic actions from the point of view of the child. In this sense you are right that children can understand imperative programming as early as 1st grade. But it severely cuts against your argument that c-style syntax is what is innately familiar to humans at a biological level. It's this notion against which I push in these comments.

> Like I said it only shows that children can learn functional. Doesn't show that functional is more natural then imperative.

That you still call Logo "functional" tells me you haven't read any of the literature, nor used the language, so you don't actually know what it shows. Insofar as Logo offers functional capabilities, they aren't presented to children in the highly imperative turtle graphics module.

And again, I'm not arguing functional is more natural than imperative.

> Do you mean functional?

No, the code I wrote is completely imperative. In C it would look like this:

  pd();                          /* pen down */
  for (int i = 0; i < 4; i++) {
    forward(100);                /* move the turtle forward 100 units */
    right(90);                   /* turn 90 degrees clockwise */
  }
  pu();                          /* pen up */
Children have been shown to regularly take to the Logo program, while the equivalent C program is inscrutable to them. Again, see the body of work produced by Seymour Papert and the Logo team.

Anyway, that's all from me for this thread. Feel free to reply but I'm done here. Cheers!


Sure, I'll agree with that. Imperative code is simply more natural for people.


If you change the order of language introduction, people's preferred languages and paradigms change, based on my experience TA'ing in undergrad. Our program started off with Python, but introduced Lisp and Prolog right after. The output was CS grads who were more varied in their language choices.


I honestly think that ML is too consistent and regular.

Natural languages inevitably build in a lot of redundancy that can seem irrational but that serves a very important purpose in decreasing confusion and increasing the rate of comprehension. The redundancy makes the language harder to describe, but it reduces cognitive load once you've learned it because it's easier to distinguish between different elements, even in less-than-ideal conditions.

I think this is where both Lisp and ML fall short and why they consistently lose to the more convoluted syntaxes: what seems like a flaw is actually an important feature that makes comprehension easier. Sure, you can go overboard (looking at you, Perl), but the irregularities present in C-style languages are valuable extra information channels that increase cognitive bandwidth.


> People seem to love Python syntax.

I sure don't. I'll take explicitly-delimited blocks over having to slap a ruler to my screen to know what scope I'm in any day. It's a prime example of why form over function is a horrible tradeoff.

I use Python in spite of how much I hate its syntax (and myriad other aspects of it) because it's often the path of least resistance.

> ML syntax is just more regular, consistent, and typed.

My sample size is "Elm and Haskell", and of that sample size, my impression is the precise opposite. I encountered enough cases of whitespace that shouldn't have mattered but did anyway that it put me entirely off both languages.

Lisp? Tcl? Forth? Now those are regular and consistent (though admittedly rarely typed).


Neither Elm nor Haskell is really an ML dialect though. They're more ML-adjacent. I honestly rarely have any issue with indentation in F# that isn't immediately fixable by some editor feedback. Even thinking about it, I can't recall any consistency problems I've had at all.

I think Lisp in the flavor of Common Lisp isn't so regular or consistent, but Scheme and Forth certainly are. I basically consider ML as a typed, indentation-sensitive Scheme, at least the way I think and program in them both.


Editor indentation guides have been a thing for a decade or two.


That's cold comfort when I need to make some quick fix in nano or vi on some remote server somewhere.


A practice to avoid. Cattle, not pets.

Not to mention, well-factored functions/methods shouldn't be so large that they are hard to manage. Never had trouble with terminal editors either, but I keep things simple.


If only all of us were so fortunate to live with you in such a utopia where servers are perfectly fungible and automation scripts are written by competent software engineers :)


Penny wise and pound foolish, your org’s decision.

Currently on a project that had 12 years of tech debt. Have taken out about 60% of the garbage in three years. Indentation never a significant issue. Everything else has been.


> People seem to love Python syntax.

Python's syntax is just yet another C-like syntax that goes back to ALGOL. People love it because it's familiar; it's not doing anything special.


Yes, I find this point unconvincing too. I do not think Elm is intimidating; lots of FE devs jumped on its bandwagon and it's considered easy to learn. Even so, the syntax seems like the last thing people would find intimidating about Elm, or Haskell. The intimidating part would be the programming model.

Reason basically started with the premise that syntax is a problem to solve, and we saw how successful that turned out to be.


It's very easy to see why. People are naturally drawn to procedures rather than formulas.

When given a task of several things to do, that task is naturally given and expected as a list of procedures. Nothing needs to be explained here. People intuit this naturally. A list of things to do is a grammar we all universally understand.

But ML syntax and FP?

Who composes all the procedures into a single run-on sentence? And to top it all off, it's some sort of functional grammar that's highly divergent from a list of procedures? That's essentially FP and ML.

This initial barrier is what prevents people from adopting FP.

People who say it's because C-like syntax is the default syntax taught in schools are missing the point. C-like syntax is MORE intuitive, and THAT is why it is taught in schools.

That is not to say C-like syntax is better than FP. I prefer FP, but I'd be lying to myself if I said it was naturally intuitive.


Luckily, the vast majority of FP languages operate on lists of procedures combined with the `.` operator, similar to how C composes things with the `;` and many pythonic languages compose with the newline.

Realistically, most FP programmers program ML-like languages (Haskell and Elm included) in an imperative manner. It's an extremely straightforward translation, and in Haskell it's basically syntactically identical due to do notation.
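To make that concrete, here's a small Haskell sketch (illustrative only, not code from this thread) of how do notation reads as a plain list of steps:

  -- do notation reads like a sequence of steps, even though it
  -- desugars to chained binds under the hood
  main :: IO ()
  main = do
    line <- getLine                -- step 1: read a line
    let shouted = line ++ "!"      -- step 2: build a new value from it
    putStrLn shouted               -- step 3: print it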


I'm not talking about tricky stuff like do-notation monads.

I'm talking about FP at its core.

The '.' compose operator is not a list of procedures. In Haskell it does something completely different.

>Realistically, most FP programmers program ML-like languages (Haskell and Elm included) in an imperative manner. It's an extremely straightforward translation, and in Haskell it's basically syntactically identical due to do notation.

No, Elm doesn't do any tricky stuff, so Elm is not imperative. Haskell can look imperative with do notation; that's about it.

I mean, you can put elements of your equation on different lines, but it's still not imperative.

   1 +
   2 +
   3
I mean... yeah, if you want to call that (1 + 2 + 3) imperative, be my guest. But obviously that's not what I'm talking about.


Function composition is the most basic thing in FP languages and is equivalent in spirit and nature to the semicolon or newline operators of C- and Python-like languages. Not sure why you chose addition as an example; addition is order independent in most languages.

In elm (not 100% familiar with the operators), ordered compute can be expressed as follows:

a >> b >> c

Which says: do a first, then b, then c. Due to how data dependencies work in an FP language, a is always done first here, b next, and c last.

Very imperative. No do notation.

In fact all the major Haskell monad instances are variations of the (.) operator, including IO.

So... if ordered compute extending all the way to IO is not what you're talking about, what are you talking about?
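For what it's worth, here's a tiny Haskell sketch (again just an illustration, not from the thread) of ordered compute extending to IO with plain monadic sequencing and no do notation:

  -- (>>) sequences IO actions left to right, so these
  -- actions run in exactly the order written
  main :: IO ()
  main = putStrLn "pen down"
      >> putStrLn "forward 100"
      >> putStrLn "pen up"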


Composing things with the dot doesn't execute anything.

It creates a new function. In Haskell it's actually composed backwards: when you apply the composed function, the first function to execute is the one on the right.

Additionally, each function in the composition must have inputs and outputs that match its neighbors.

Nobody thinks this way when writing procedures. Each instruction is independent of the others. Composition is about building a pipeline, which is very much not imperative.

Do notation in Haskell is the closest it gets to imperative. It is not composition. It's just a series of endlessly nesting closures, which is technically even harder to reason about. Do notation unravels this nesting in ways that make it hard to understand what's truly going on.

The point is this: Haskell and Elm are not imperative. At times they can imitate imperativeness, but dealing with these languages fully involves thinking differently, in ways that are unnatural. No amount of bending and breaking is going to turn them into something you can call imperative.
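As a rough illustration of those two points (a sketch only, not code from the thread): (.) builds a new function that runs right to left, and do notation desugars into a lambda passed to (>>=):

  -- nothing runs until the composed function is applied,
  -- and the rightmost function is applied first
  addOneThenShow :: Int -> String
  addOneThenShow = show . (+ 1)    -- addOneThenShow 2 == "3"

  -- do notation...
  echo :: IO ()
  echo = do
    line <- getLine
    putStrLn line

  -- ...is the same as a lambda fed to (>>=)
  echo' :: IO ()
  echo' = getLine >>= \line -> putStrLn line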


Yeah, and composing things with a semicolon creates a new C program.

Haskell is an expression language plus a runtime environment. The expression language is more powerful than C's, but C et al. are also expression languages plus environments, just not typically talked about that way, because the expression language is quite simple.

Imperative "do this then that" is exactly function composition where the pipeline is the state. In other words, imperative languages provide that function implicitly. This is most obviously seen with stack-based concatenative languages.
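A tiny Haskell sketch of that idea (the names here are made up purely for illustration): each "statement" is a function from state to state, and "do this then that" is just composing those functions:

  -- the "state" is a stack of Ints; every step maps a stack to a stack
  type Stack = [Int]

  push :: Int -> Stack -> Stack
  push x s = x : s

  addTop :: Stack -> Stack
  addTop (a : b : rest) = (a + b) : rest
  addTop s              = s

  -- "push 1; push 2; add" written as a composed pipeline
  -- (with (.) the rightmost step runs first): program [] == [3]
  program :: Stack -> Stack
  program = addTop . push 2 . push 1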


ML syntax seems to be made for machines, not for the average human mind. Parsing a language that is 'too unnatural' is a bit of a strain when compared to something that looks like average grammar if you squint at it, so it doesn't surprise me at all.

It's also a bit in the realm of scientific/academic programming vs. real-world bulk programming, where it's mostly just CRUD, mapping, and business rules. It's all still important if that's how money is made, but as far as I can tell, most programming isn't really all that involved with 'quality languages' and is more about feature output from a backlog. If you can do that for cheap with a fresh can of Python developers and some extra EC2 instances on AWS, that's an easy choice over someone who might want to do it in Lisp instead, for example.


I learned OCaml in college and I would not liken it to Python at all. Python is very easy to grasp since it's basically pseudocode, but OCaml, for example, is based entirely on recursive constructs (yes, imperative versions exist but only as a last resort). It's an entirely different way of thinking. The syntax therefore also looks unfamiliar, coming from `for` loops in Python.


Whose pseudocode? I have not once written pseudocode that looks like Python, though apparently this is odd. In high school pseudocode was BASIC with structured control flow.

I tend towards pseudo-SML for scribbling things; I once got in trouble for using "language-specific constructs" like map() for university work. They weren't a fan of APL for adjacency matrix munging either, though that's how I went through that part of the course. In either case pseudocode is just ignoring the unpretty parts of any language, and it can hardly be said that any one language is more like pseudocode than another.


> I have not once written pseudocode that looks like Python, though apparently this is odd.

Well, looks like you're an outlier. Most people I know who write pseudocode don't use filters, folds, maps, or APL-like constructs either, even though I totally would when writing the actual code. They write it in a Python-like format with for-loops and if-statements.


F# can be written nearly identically to Python if one so desires.


It's a matter of personal preference, and there are a lot more people who are familiar with C-style syntax. A good number of those people also dislike Python, in fact.



