Courting Haskell (honzajavorek.cz)
84 points by honzajavorek 2 days ago | 68 comments





Though I’m not much of a Haskell programmer (spent some time learning it and writing toy programs), I would use a different motto to characterize Haskell: Stop prescribing (loose) design patterns; make them tight and refactor them into libraries, so they need only be written once.

More than anything else, the language aims at providing modularity (and terseness, on the macro scale) for the programmer (while trying very hard to compile to something efficient). Even laziness is there to enable more modularity.

The “make design patterns tight” part is what ends up needing abstract mathematical reasoning (category theory is basically mathematical pattern reasoning distilled to its essence).

The other consequence of abstracting patterns into libraries is that novice programmers end up with a lower “writing code” -vs- “reading+thinking” ratio compared to other languages. And this can be jarring to folks whose attitude is to learn by writing code (to discover patterns in the process).


"All patterns are anti-patterns."

A pattern is a common expression form that your chosen language is unable to capture in a library. As we get better languages, what had been patterns turn into ordinary library components. Patterns that compose those components with one another and with core language features either become more library components themselves, or challenges for subsequent language design.


I really disagree with this viewpoint. Patterns are just patterns. It doesn't matter if your language is able to express the pattern in a reusable form or not. The whole point of a pattern is that it is a common solution to a class of problems given a set of circumstances. Even if you have a "super awesome pattern" widget that you can use, you still need to know whether or not that pattern is appropriate for the problem you have and the circumstances in which the problem is expressing itself. Even beyond that, most programmers will be dealing with many more than one programming language in their lifetime. Learning well known design patterns is about understanding the abstractions, the problems and the circumstances so that when you are faced with a similar situation in another language you can efficiently see if there are capabilities that will help you out.

Basically, consider the situation where you say, "Here's how I do X in this language. How would I do a similar thing in that language?" A design pattern gives you a name that you can use instead of "X". It also gives you a context where you can realise that this pattern is appropriate here, but not there. When you talk to people you can simply say, "Can I easily implement a functor here? If not, how can I get similar utility in the same circumstances? Are there any caveats that are different from the normal ones?" It seriously speeds up the conversation. It also allows one to think about programming more generally rather than thinking of it only in terms of a specific programming language.

Edit: grammar


You don't use the "sort pattern". You call the sort function, because it exists.

In C, you would code up, in place, an example of the "hash table" pattern, but more expressive languages have hash dictionaries in the library that are as fast as, or faster than, you could afford to code in place.

If your language can't properly express a "maybe" monad, you can cobble something together to use instead, and mention that in a comment. But it's only a pattern because you don't have it.
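
To make that concrete, here is a minimal sketch of how the "maybe" pattern becomes an ordinary library component in Haskell (primed names are used only to avoid clashing with the Prelude's own Maybe):

    -- A minimal re-derivation of the Prelude's Maybe as a library component.
    data Maybe' a = Nothing' | Just' a

    instance Functor Maybe' where
      fmap _ Nothing'  = Nothing'
      fmap f (Just' x) = Just' (f x)

    instance Applicative Maybe' where
      pure = Just'
      Just' f <*> Just' x = Just' (f x)
      _       <*> _       = Nothing'

    instance Monad Maybe' where
      Nothing' >>= _ = Nothing'
      Just' x  >>= f = f x

    -- Callers then chain possibly-missing values with no ad hoc null checks:
    safeDiv :: Int -> Int -> Maybe' Int
    safeDiv _ 0 = Nothing'
    safeDiv x y = Just' (x `div` y)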


I'm afraid I didn't express myself well. Where I think we're not aligning is that you are under the impression that I'm using the same definition of pattern that you are. It was my intent to express that I disagree with your definition.

It's important to understand that a pattern is a solution to a problem along with contexts in which it is both appropriate and inappropriate. To document a pattern you need: a description of a problem, a description of the solution, contexts in which this solution is appropriate, contexts in which this solution is inappropriate. If you read the early literature on patterns and pattern languages (Beck and Cunningham's initial paper, or really any of Coplien's writings), I hope it will be more clear.

"Sort" and "hash table" are not patterns. They are simply solutions without problems. They are also not specific enough to discuss various contexts. When is it inappropriate to sort? That's a meaningless question without a lot more information. Sorting may indeed be a solution to a problem, but it is not in itself a pattern.

We could say that the maybe monad implements a particular pattern for representing optional data. It is, however, not the only method for representing optional data. There are many others. The point is to be able to understand which one you should use in which context. That it can be implemented in a library is fantastic (less code for you to write), but that was never the point of design patterns (hence the word "design"). The point was to give you a vocabulary with which to discuss the merits of various solutions to problems and to pick appropriate ones for your circumstance.


Do you consider the things in the "gang of four book" to be patterns? If so, how do you distinguish them from something like "sort" or "hash table", since they are also solutions without problems? If not, I'd argue that your definition of pattern doesn't match the present meaning.

"Pattern" comes from the Gang of Four book, by way of confusion about an entirely different concept from architecture.

The book has not aged well. Its vocabulary has turned out to be decreasingly useful. I go for many months at a stretch without encountering any reason to mention any of them. The only names that come to mind, at the moment, are the "visitor" and "pimpl" patterns, only the latter of which I have used in the past decade, and that because it is imperfectly supported by the library template std::unique_ptr<>.

That is not for lack of discussion of choices among possible solutions to problems. Notably, most on https://cpppatterns.com/ are just library components.


Pattern comes from here: http://c2.com/doc/oopsla87.html It even says so in the GoF book.

I wouldn't say that Haskell is "trying very hard" to enable efficient compilation. It might be trying hard given what the existing language semantics looks like, but we still don't have e.g. an elegant model that can equally account for both strict and lazy code (even though there's quite a bit of research in this area, based on fairly natural considerations).

Reasoning about what code patterns actually require the use of general GC (because they involve creating references that might outlive their pre-defined scope, as with general closures) and what patterns might dispense with this is another source of potential efficiency improvements.


"[Haskell is], indeed, heavily founded on mathematical theories (category theory and lambda calculus)."

This is a pretty common reason given for why Haskell is "math-ey", but I'd argue it's simultaneously a bit misleading and also a bit boring. The part that's misleading is "Haskell is based on category theory": while a major feature of programming in Haskell (the monad abstraction) was inspired by category theory, the truth is that moment-to-moment programming in Haskell doesn't have much to do with category theory unless you want it to. Even monads require zero knowledge of category theory in order to understand or use! Some people find usefulness in category-theoretic abstractions, but many others—myself included!—don't at all: I've actually improved the performance of Haskell code in the past by cutting out category-theoretic abstractions in favor of simpler code, and by now I steer clear of most Haskell code that wears category theory on its sleeve.
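
To ground the claim that monads need no category theory, here is a sketch of everyday monad use (the association-list config lookup is hypothetical, purely for illustration):

    import Text.Read (readMaybe)

    -- Chain two steps that may fail; `do` sequences the Maybes,
    -- and the first Nothing short-circuits the rest.
    portFromEnv :: [(String, String)] -> Maybe Int
    portFromEnv env = do
      raw  <- lookup "PORT" env   -- Maybe String
      port <- readMaybe raw       -- Maybe Int
      if port > 0 then Just port else Nothing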

The part that's boring is that "Haskell is based on lambda calculus". The lambda calculus is a mathematical description of a simplified model of computation with functions and binding… and that's basically it. Almost every modern programming language uses variable binding as a basic feature, and in that sense, is "based on lambda calculus". Plenty of other languages, for example, are inspired deeply by Lisps (like JavaScript or Ruby) which in turn were directly inspired by the lambda calculus, but I don't think that lineage makes them particularly math-like. Instead, being "based on lambda calculus" usually just means that they have variables and closures—something true of almost every popular language used today!

Now, Haskell-as-practiced can have a fair bit of math-inspired abstraction in it, which can be daunting to someone new to Haskell. Some of that abstraction is useful (e.g. monads), some of it isn't, but the fact that it exists is more about vocal Haskell programmers and bloggers, and much less about Haskell-the-language being "heavily founded on mathematical theories". And you'll also find plenty of programmers and bloggers arguing against the more math-heavy parts of the ecosystem: just in the past month I've seen plenty of Haskell programmers passing around links like https://www.simplehaskell.org/ which advocates for exactly this less-mathy approach to Haskell.


The big problem here is documentation.

Most beginner Haskell resources go into complicated monad explanations very early on, instead of just showing how to use them like a tool. Intermediate resources are even worse.


I disagree about all computer languages being about lambda calculus. Lambda calculus is fundamentally about functions being first-class objects (as opposed to, say, sets, if we are taking a math perspective). While that has become a more popular language feature, plenty of languages do not support it, C being an obvious example.

I'm not a Haskell programmer, but I agree with you on the math aspect being largely overexcited bloggers. Sometimes I feel like half the Haskell articles posted here take some simple concept, wrap it up in heavy-duty mathematical machinery, don't use the machinery for anything, and then proclaim profound insight. The other week there was a Haskell article on HN literally talking about how great it is that strings form a semigroup under concatenation, because I guess nobody realized you could concat strings before that.


I've been learning Haskell by building a standard monolithic web application: Postgres DB, some static content, REST API and Firebase authentication. It's taking longer than it would with a familiar language, but so far I'm really enjoying the experience. I also took a small deviation a couple of weekends ago to build a simple OSM router and was pleased with the ease of development and the performance.

I'd recommend the book Practical Haskell by Alejandro Serrano Mena to get a firm introduction to the language and the ecosystem of web applications. After that, take a look at the libraries developed by FP Complete.


I'm using the same method as you. What are you using as the SQL and REST API libs? I'm using Servant and a library called Squeal for Postgres.

I'm combining Servant and Yesod into a single WAI app so that I can serve a web app and provide an API from the same server. For DB access I'm using persistent.

Over the past two months I've been trying to learn the Haskell programming language. It's vastly different from anything I know, so it also served as a way to empathize with complete beginners to coding. This is a diary from my journey.

" Powerful! It takes just a few lines to implement your own clone of Vim:"

Uh, ok lol.

" If SQL or Python read like an English sentence, then Haskell reads like math. Feels like math. It is math. "

This is basically why there's a learning curve. Any analogy used as a learning example will typically fall short because in order to fully grasp the concepts you just have to learn the math behind it, because that's by design (denotational semantics).


To the whole article I say "ditto." Great summary. Also, didn't Guido say he wanted to eject the functional operators in Python? #hearsay

Slashdot had an interview with him where he talks about it[0]; he basically says Python isn't the language for FP and then complains about readability, though YMMV on that one. I don't know if his opinion has changed in the last seven years.

[0]: https://developers.slashdot.org/story/13/08/25/2115204/inter...


I'd just like to point out that you don't need to know category theory to program in Haskell. You don't need to read or watch Bartosz's videos to program (though they are great fun to watch); that part in the blog is a bit specious.

LYAH isn't a great pedagogical resource. I recommend the Haskell Programming from First Principles book. It's long but rewarding and has frequent exercises.


At last! Somebody else who is prepared to be honest about their fear of maths and mathematical notation, and about people who leap from "it's just like school maths" straight to the intimidating maths you skipped or failed at school and university.

It's just like maths: it requires deep brain-structure wiring you may not have.


Do people actually claim Haskell-ish math (i.e. abstract-algebra-type stuff) is just like school (high school?) math?

If so, I wish I had been in their high school. Sounds much more interesting than mine.


I wonder why there is so much fear of math on HN, which is essentially a STEM community. I would expect it among literary types, but not here.

I think many people have a problem with mathematical notation, not maths itself. Mathematical notation is optimized for terseness and suitability for use on a whiteboard. Code has made great leaps in terms of readability and expressiveness compared to that, and is therefore more accessible to many people even when expressing the same concepts.
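
As a small illustration of the same concept in both notations, here is the set-builder expression { 2x | x ∈ ℕ, x² > 3 } rendered as a Haskell list comprehension (a sketch; the name is arbitrary):

    doubles :: [Integer]
    doubles = [2 * x | x <- [1 ..], x ^ 2 > 3]   -- 4, 6, 8, ...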

For an example see the comments on the map/reduce post from yesterday.


People overstate the connection between math and computer programming. Sure, there are math-heavy areas, but your average programmer isn't into them. At most, programming typically involves the mildest amounts of what's in an intro discrete math course.

Too much drama for no reason. Haskell is a programming language; it has some neat features, and some downsides. If it's the right tool for the job at hand, use it, otherwise use something else.

You keep mistyping Clojure as Closure

No mention of haskellbook.com is surprising!

I read that book for 6 months straight, followed all the exercises, and quit when I realized I couldn't write a simple script that did something useful.

But boy did I sure evangelize Haskell throughout the process! facepalm


HPFFP was the book that finally made the language 'click' for me. It took 13 months to complete, and only towards the very end did I start to get an understanding of how to structure an actual program larger than a LeetCode exercise.

It's a _really_ different language from what I was used to (JavaScript, Python, Ruby, etc.). The book covered a lot of material that felt very basic to me, but interspersed with that was material covering concepts I had never even considered; this was the case in the early chapters as well as later on.

I found that the early chapters set up lots of concrete examples that would later be revealed as simplified cases of highly abstract structures that occur all over the place. Uncovering these is a very magical experience. For that reason alone, I tell people to just grind through the book rather than skipping ahead to the more exciting material later on.

But you don't have to like HPFFP or Haskell, nobody has to like anything. For me, Haskell is a joy to use and provides a seemingly never ending source of learning material to explore.


I started building straight away from day 1, getting a decent REST API together is surprisingly easy. You really don't need to understand most of the advanced type level stuff to stay productive.

Maybe you tried to go for too high an abstraction level straight off the bat? That can turn out very depressing in Haskell.


Yes, for that reason I recommend Will Kurt's Haskell book instead. It's way more practical.

[flagged]


Learning Haskell taught me about a lot of language features that I find myself reaching for in lots of languages. Examples include: destructuring, list comprehensions, currying, immutable data structures, maybe types, and algebraic data types.

The language has a lot to offer. How much you want to take away and incorporate into your programming repertoire is clearly at the discretion of the student; however, to imply that it is devoid of transferable value is a false assertion.
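
A few of those features from the list above in one small sketch (the Shape type is hypothetical, purely for illustration):

    data Shape = Circle Double | Rect Double Double   -- algebraic data type

    area :: Shape -> Double
    area (Circle r) = pi * r * r    -- destructuring via pattern matching
    area (Rect w h) = w * h

    scale :: Double -> Shape -> Shape
    scale k (Circle r) = Circle (k * r)
    scale k (Rect w h) = Rect (k * w) (k * h)

    -- Currying: `scale 2` is itself a function, usable in a comprehension.
    doubledAreas :: [Shape] -> [Double]
    doubledAreas shapes = [area (scale 2 s) | s <- shapes]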


Ironically, Haskell is a pioneer precisely in scientific research on programming, while the rest of the languages focus more on the linguistics than the science.

[flagged]


Ah, the classic - "we are producing REAL software over here".

Always reminds me of this: https://steenschledermann.files.wordpress.com/2014/05/no-tha...


Haskell has crazy amounts of wild evolution and experimentation with new ways to do a programming language. Look at the number of language extensions present in a typical non-toy haskell system. It's not just a single arbitrary set of conditions. I'd consider it more as a highly successful platform for PL research that people happen to build stuff in. If that's not progress, I don't know what is.

Monads were not invented to support the needs of Haskell. The concept predates even the existence of microprocessors.

Haskell simply recognizes the applicability of that existing mathematical concept to an aspect (in fact various unrelated aspects) of programming. This isn't really a feature unique to Haskell.


Walk all you want, meanwhile Haskell is developing ways to fly.

If by "developing ways to fly", you mean "crawling on its knees while flapping its arms", then OK.

When I see an (admittedly admirably concise) Haskell program get ten times longer just to approach the speed of the most obvious expression in C++ or even C, it leaves me uninspired.

I want a language where the cleaner the program is, the faster it is. It seems like only C++ is really even trying. (Rust might step up, once it matures. Or might not.) With Dennard scaling wholly dead, and Moore's Law petering out, there will be lots of room for better ways to use lots of cores, GPU units, and FPGAs (with plenty of macro cells) reconfigured at runtime, to make better use of what logic we can fit. C++ is only taking baby steps into that brave new world, but I don't see anyone else doing even that much.

In the meantime, we have a bunch of cores to keep busy doing actually useful work, and decreasing room for actively wasteful languages.

Haskell has had plenty to teach us, but it will not be the language that flies.


> When I see an (admittedly admirably concise) Haskell program get ten times longer just to approach the speed of the most obvious expression in C++ or even C, it leaves me uninspired.

Well, if the most obvious expression in C++ or C was a hundred times longer then that's still a win for Haskell. Particularly when the performance-critical parts are usually a tiny fraction of any given program. Frankly I don't think the industry should even start worrying about performance yet though, given how much difficulty we still have with producing correct software.


I have not seen any example of a program that would be a hundred, or even ten, or five, times longer in C++.

Not worrying about performance is a luxury that those of us who face stringent requirements are often not afforded. My experience using the WWW indicates that many, many sites demonstrate their contempt for me (and you, if you could perceive that) by ignoring it.


And wait until you discover that the Eskimo languages do not, in fact, have hundreds of words for snow. It's a myth.

http://www.lel.ed.ac.uk/~gpullum/EskimoHoax.pdf


'elaborate vocabulary surrounding snow' != 'hundreds of words for snow'

I went through elementary school in Inuktitut and don't remember anything about an elaborate vocabulary for snow. Learning stuff for Haskell made it much easier to learn similar concepts in Rust, which is a pretty pragmatic language. Sometimes learning things doesn't help but it also never hurts.

This fits my experience. But then I am only an average engineer.

A different way of thinking of abstraction or just a feeling of elegance or a feeling of power is not a sufficient explanation for why Haskell is so great.

People want definitive and theoretically correct answers not talking points for a philosophical debate on programming styles.

Why is typed functional programming measurably better than procedural? Why is it better than OOP? Definitive answers are in demand, not exploratory experiences.


Definitive answer: sum/union types [0] and currying/partial function application [1]. Between the two, almost anything you write becomes very easy to adapt to new requirements. Anyone who has done a medium to large scale fp project can attest to this.

If you want to be pedantic, currying and partial application are separate concepts, but I've never seen an fp lang implement one without the other. Note that I linked to this f# blog because I think its example-forward approach is clearer than most fp documentation I've read.

[0] https://fsharpforfunandprofit.com/posts/discriminated-unions...

[1] https://fsharpforfunandprofit.com/posts/currying/
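
A Haskell-flavored sketch of that claim, showing a sum type plus partial application keeping code easy to adapt (the Payment type is hypothetical):

    data Payment
      = Cash Double
      | Card String Double   -- card number, amount; with warnings on,
                             -- adding a constructor later flags every
                             -- case expression that must change

    describe :: Payment -> String
    describe (Cash amt)   = "cash: " ++ show amt
    describe (Card _ amt) = "card: " ++ show amt

    applyFee :: Double -> Payment -> Payment
    applyFee fee (Cash amt)   = Cash (amt + fee)
    applyFee fee (Card n amt) = Card n (amt + fee)

    -- Partial application: fix the fee once, reuse everywhere.
    withServiceFee :: [Payment] -> [Payment]
    withServiceFee = map (applyFee 1.5)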


Sum and product types are essential for programming. After you have used a language with them you will miss them dearly in any other language you use.

Typeclasses and traits (if you're writing Rust) coupled with ad hoc polymorphism are a succinct and beautiful method of abstraction. You will miss them also in almost any other language.

Those features in a language (functional or not) will let you confidently refactor code, and (IMO) will lead to fewer mistakes and better abstractions.
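
As a sketch of that typeclass-style ad hoc polymorphism (the Pretty class is hypothetical):

    -- One class declares the capability; instances add it per type.
    class Pretty a where
      pretty :: a -> String

    instance Pretty Bool where
      pretty True  = "yes"
      pretty False = "no"

    instance Pretty a => Pretty [a] where
      pretty = unwords . map pretty   -- pretty [True, False] == "yes no"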


Why does "Typed Functional Programming" have to be measurably "better" than OOP or procedural programming in order for Haskell to be considered a good language?

All those questions at the end of your comment can be turned around the other way and are equally valid and unanswerable.


Why use it if it's not measurably better on some metric? It's important because we should only use things if they are better or the same, not if they are worse.

Sure, there are tradeoffs but if you quantify all tradeoffs something usually comes out better.


If you can objectively measure the quality of a program (in a non-BS way), you could probably make millions. There are whole industries around code-health metrics.

I doubt I could make billions. I don't think it's that hard.

> Why is typed functional programming measurably better than procedural? Why is it better than OOP? Definitive answers are in demand not exploratory experiences.

Functional code tends to be shorter than procedural code. This allows you to think in bigger steps. For example, to read whitespace-separated numbers from stdin and print out their sum, in haskell you could write:

    main = interact $ show . sum . map read . words
This uses very generic Haskell functions to put the input into a list of strings, read each string as a number, sum the numbers and show the sum as a string.

It seems to me that the equivalent procedural code would be much longer.

Haskell's type system keeps track of what's going on and alerts you when you try to do something that doesn't make sense. For example, if you leave out the "read" function above, you would be trying to sum strings instead of numbers. Haskell's type checker would complain at compile time. This enables you to program at a high level without having to debug run-time errors because of type mistakes.
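
Concretely, here is the variant without `read`, and roughly what GHC says (exact wording varies by version):

    main = interact $ show . sum . words
    -- rejected at compile time with something like:
    --   No instance for (Num String) arising from a use of 'sum'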


I feel that, to be realistic, you have to show a program with failure scenarios; that is at least where I struggle with FP. What happens if a user enters a non-numeric value and I want to echo back a warning and get a correction before continuing?
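
One common Haskell shape for exactly that, as a sketch (the prompt wording and the name promptNumber are made up): parse with readMaybe, warn, and recurse until the input is valid.

    import Text.Read (readMaybe)

    promptNumber :: IO Int
    promptNumber = do
      line <- getLine
      case readMaybe line of
        Just n  -> pure n
        Nothing -> do
          putStrLn ("'" ++ line ++ "' is not a number, try again")
          promptNumber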

>Functional code tends to be shorter than procedural code

Shorter is one metric that FP "tends" to be better at. But it's not definitive. Who says procedural programs can't be shorter? And also is shorter necessarily better? Also does it come at the cost of readability? Actually let's not get into readability as it's not exactly measure-able.

>Haskell's type system keeps track of what's going on and alerts you when you try to do something that doesn't make sense.

Algebraic type systems are indeed a measurable metric when you measure correctness, the number of errors, or the total space of possible programs you can write. They restrict the code you can compile to what is correct from a typed perspective. Meaning that out of all the possible programs you can write, Haskell allows you to write fewer programs, in the sense that it stops you from writing certain incorrect ones.

However, ADTs can be used in procedural or OOP programs as well. See Rust.

What I want to know is specifically about the functional part. In the functional programming paradigm, what is the quantifiable metric that makes it definitively better?


> However, ADTs can be used in procedural or OOP programs as well. See Rust.

Rust isn't OOP. Funnily, its type system is pretty much the same as Haskell's but stops just before higher-kinded types.

Ignoring type systems and just looking at functional vs imperative, the advantage for functional is immutability making functions easier to reason about. Haskell in particular is also lazy, and therefore enables you to not be concerned with evaluation order.
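
A tiny example of that last point: with laziness you can define an unbounded structure and let demand decide how much of it is ever evaluated.

    evens :: [Int]
    evens = filter even [1 ..]   -- conceptually infinite

    firstThree :: [Int]
    firstThree = take 3 evens    -- [2,4,6]; only a finite prefix is examined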


Thanks for letting me know rust isn't OOP. I never said it was. Funnily.

>Ignoring type systems and just looking at functional vs imperative, the advantage for functional is immutability making functions easier to reason about. Haskell in particular is also lazy, and therefore enables you to not be concerned with evaluation order.

Sans the part about laziness, for which the formal term is "normal order evaluation" FYI, OOP guys say the exact same thing about objects word for word.


I don't see how that could be true, as OOP is inherently about encapsulating mutation not preventing it. The problem with OOP is you're also free to alias mutable objects.

> Sans the part about laziness, for which the formal term is "normal order evaluation"

If you want to be pedantic, Haskell is 'call by need', not 'normal order'.


Sure but this is what they say and what they believe. Literally. Armies of programmers believe this and your statement won't convince them.

It always goes into philosophical mumbo jumbo. I'm looking for quantitative and logical proofs that say definitively why functional is better. You do see how your statement will just trigger a vague retort from an OOP guy which will trigger a bunch of other vague retorts and counter examples from you. The end game is it goes nowhere.

If I want to be pedantic, the official term is normal order. You are wrong on this. See SICP chapter 1.


> If I want to be pedantic, the official term is normal order. You are wrong on this. See SICP chapter 1.

SICP isn't going to help you; we're talking about Haskell's 'call by need' evaluation. It's not the same as 'normal order'; feel free to look up the definition wherever you like.

I don't think a 'logical proof' is even possible, so I'm not really sure what you're asking for. We could definitely use more research on paradigms and their effects. There are a few studies on types providing benefits, but nothing that I'm aware of about FP. I think the problem stems from the fact that "functional programming" is itself a pretty loose definition.

edit: something you can read about evaluation order,

https://en.wikipedia.org/wiki/Evaluation_strategy#Non-strict...


All right I'll give you that on evaluation order. Did not realize that Haskell took it two steps further.

Logical Proofs are possible for most things that have logical definitions.

There are aspects of FP that can be enforced in your proof. You don't have to have a vague definition of good design or a vague definition of FP. Follow a strict but commonly agreed upon definition of both and derive a proof from there. It's very possible.


> You don't have to have a vague definition of good design or a vague definition of FP. Follow a strict but commonly agreed upon definition of both and derive a proof from there

There is no such definition, just like there isn't one for OO. You could of course make one up and then test it qualitatively, but that's a far cry from a proof.

I don't believe a 'proof' for something like this exists, if you disagree, I challenge you to find such a proof, or link to any resource describing one.


I am indeed talking about making something up, similar to how the signal-to-noise ratio for rating signals is made up. It can't be that hard.

Following the analogy of the SNR, you can indeed prove that one signal is "better" than another in terms of SNR, simply by calculating the number.

I get a bit more into something like this in another reply: https://news.ycombinator.com/item?id=22071522

Obviously the "ratio" I describe in my example above is something I pulled out of my ass and has serious flaws, but I'm talking about traveling in a similar direction to find a rigorous "proof." You would need to define a minimal Turing-complete assembly language and do the same for OOP and FP to even begin to proceed with such a thing.

Of course, what I describe can only say a given FP program is "better" than a given OOP program. The way to a general proof would be to find something like the ratios of all possible foundational programs with one or two assembly primitives for OOP and FP; then, by induction, we prove that because all programs are built from these foundational primitives, the paradigm whose foundational primitives have the "better" numbers is indeed better.

There are probably other numbers as well, like the number of references.


> In the functional programming paradigm, what is the quantifiable metric that makes it definitively better?

It seems extremely unlikely that you'll ever find a satisfactory answer to this, because any advantage a programming language paradigm gives you is either going to be in terms of programming language theory (which you rejected up-thread), or in terms of developer experience and productivity (whatever _that_ means). However, any rigorous study of those latter categories is likely going to be seriously confounded by their variability due to things which are _not_ related to language paradigm, such as organizational concerns, the language's tooling and ecosystem, the problem domain, the skill and experience of individual developers, &c &c. None of these things is straightforward to control for, and I'd be extremely skeptical of any quantifiable metric someone shows me that purports to show clear wins in real-world software development based on language paradigm of all things.


> is either going to be in terms of programming language theory

Programming language theory does not study "advantages"; it's simply not a question it asks, let alone answers. The theory is concerned with what properties certain formal systems have. It cannot, nor does it attempt to, assign those properties a value, just as mathematics does not ask or answer whether prime numbers are better than composite numbers.

> None of these things is straightforward to control for, and I'd be extremely skeptical of any quantifiable metric someone shows me that purports to show clear wins in real-world software development based on language paradigm of all things.

Maybe, but that doesn't matter. You cannot claim that you're providing a significant benefit and in the same breath say that it isn't measurable. An advantage is either big or not measurable; it can't be both. If you say that the benefit of the language is offset by the bad tooling, then you're not really providing a big benefit. If and when the tooling catches up, then it's time to evaluate.

But maybe not. I find it very dubious that significant differences are not measurable in an environment with such strong selective pressures for two reasons: 1. it doesn't make sense from a theoretical perspective -- adaptive traits should be detected in a selective environment, and 2. it doesn't fit with observed reality. We observe that technologies that truly provide an adaptive benefit are adopted at a pace commensurate with their relative adaptability; often practically overnight. The simplest explanation from both theory and practice to why a technology does not show a high adoption rate is that its adaptive benefit is small at best.


>Programming language theory does not study "advantages;" it's simply not a question it asks, let alone answers. The theory is concerned with what properties certain formal systems have. It cannot, nor does it attempt, to assign those properties a value, just as mathematics does not ask or answer whether prime numbers are better than composite numbers.

I think we can go deeper than this. There are properties of well designed programs that can be measured to be numerically higher or lower than poorly designed programs. Under this mentality "better" is simply a word with no meaning that is describing a number. It is the human that has the opinion that the higher (or lower) number is a "good design."

The question is what is that number and how do you measure it? For example, one number off the top of my head: lines of text. A better number is the number of functions. Both of these numbers have flaws, so maybe a better number is this: given a high-level language and a program written in assembly language, what is the largest number of high-level-language primitives you can use to recompose an identical program?

(high level language primitives)/(low level language primitives)

As the ratio approaches 1 we achieve maximum flexibility, as the high-level language is injective to the low-level primitives. As the ratio approaches zero we reduce complexity at the cost of flexibility (we reason about fewer primitives). If the ratio exceeds one, then we are creating excess primitives.

Maybe the better-designed language/paradigm has primitives that can be used to drive that ratio back and forth between 0 and 1. A poorly designed language is one with a ratio of 4.5 or 0.0001.

So something more advanced but along the lines of this rudimentary and rough outline is certainly possible in my mind.

>But maybe not. I find it very dubious that significant differences are not measurable in an environment with such strong selective pressures for two reasons: 1. it doesn't make sense from a theoretical perspective -- adaptive traits should be detected in a selective environment, and 2. it doesn't fit with observed reality. We observe that technologies that truly provide an adaptive benefit are adopted at a pace commensurate with their relative adaptability; often practically overnight. The simplest explanation from both theory and practice to why a technology does not show a high adoption rate is that its adaptive benefit is small at best.

Yeah I agree. Additionally we're not dealing with the real world here with billions of variables. This isn't a computer vision problem. Assembly language, FP and OOP have a countable amount of primitives. It is amenable to theory and measurement.


>(which you rejected up-thread)

Where did I reject programming language theory? You mean type theory? I didn't reject it. I said it doesn't apply to functional programming specifically, because ADTs can be used in most programming styles outside of FP.

>None of these things is straightforward to control for, and I'd be extremely skeptical of any quantifiable metric someone shows me that purports to show clear wins in real-world software development based on language paradigm of all things.

Don't account for those things. Account for what can be measured. Cut through all the frosting and get to the main point. What is the exact definition of a well-designed program when we don't factor in opinionated things like readability? At its core there must be fundamental properties of a shitty program and a well-designed one that exist outside of opinion and are pretty much universally agreed upon.


It's shorter in Python:

  sum(int(n) for n in input().split())
I'd argue it's an awful lot clearer, too.

Edit: A more functional version is shorter still, but I think not worth the readability cost:

  sum(map(int, input().split()))


