Seems like your university is doing it wrong. While I do believe that the principles used in FP languages are vastly superior I don't think this is really a matter of debate. The advantages are obvious and if you didn't realize it you probably weren't taught well enough.
On topic:
I sometimes wonder if the wins from FP languages are lost on the people not already scarred by the pitfalls of imperative programming. I wonder the same about someone starting with Python instead of something more wordy and cumbersome. I don't think you really can appreciate the ease of use if you haven't already written tons of boilerplate or type information because the compiler is stupid.
At the same time you don't know the price of the ease you're getting, so there is a warped perspective on both the ease of programming and the performance of languages.
Haskell, I think, exists in a sort of middle ground where it's actually really fast but extremely expressive. Maybe someone uninitiated in programming won't have as many problems formulating functional solutions to problems in Haskell too.
That's exactly the kind of hair-raising and mindless (in the truest sense of the word) nonsense we were presented with: "the superiority of FP is not a matter for debate".
The reality was that the tools were slower and buggier than others, the programs different (and slower) but no simpler, and the bugs different but still just as buggy.
When presented so mindlessly and categorically, any claims of superiority instantly lose credibility. When reality is taken into account, even more so.
The principles used in FP are actual principles you could employ in imperative languages... and people do, because they are obviously good. It's a matter of there actually being principles instead of the alternative: nothing.
This aside, you seem to be talking about the languages themselves, so let's go ahead:
> the tools were slower
Which tools are these? I don't see this as a systematic failure of FP languages.
Which language were you presented with? Did they explain what you're trading your speed of execution for?
I'm asking these questions because it doesn't really seem like you got it, and since this was supposedly 20 years ago, it seems like you're purposefully not getting it. In these discussions I find that a lot of people believe FP proponents to be very unpragmatic, but I find that in most cases, such as this one, there is little to no concession made on the "other side".
Most FP languages are not suitable for performance sensitive tasks and most people will admit this. For most other tasks, though, they're just going to allow you to work faster and get to where you're going with less hassle.
I would like to point out that I'm not saying FP languages are just better overall or that an FP language is always the best choice, but I am saying that the principles they employ and teach undeniably are the best. The alternative isn't as much a principle as it is a lack of principle. Flipping individual switches and modifying memory is just not caring.
Wow, the arrogance and ignorance is just incredible, just the way I remember. :-) Someone who doesn't buy into your highly constrained world-view must obviously be ignorant.
"doesn't seem like you got it" ... "purposefully not getting it" ... "principles [..] undeniably the best".
No, you like those principles best, and that's about the most you can say. We don't even have solid criteria with which to make that call, never mind there being any type of consensus as to which is "best" (except within specific communities, but with each community having a different consensus and the most closed-minded of these communities usually the most sure of themselves).
Just to head off more ignorant comments: I aced those courses, as well as a few more including the elective "foundations of FP" and did my oral exam on Backus's FP calculus, which inspired me to come up with Higher Order Messaging: http://en.wikipedia.org/wiki/Higher_order_message
I also did some courses on algebraic specification and graph grammars (related to category theory), as well as formal specification with Z (yes, aced those as well). While useful in certain very constrained areas, these techniques trade making the smaller problem better (spec -> code) for making the larger problem (actual problem -> spec) infinitely worse.
Of course, functional techniques are a part of my toolbox, as they should be for any professional. However, if they were the only contents or even the majority of my toolbox, I couldn't do what I do and have been doing for the last quarter century.
Bravo! I doubt programmers will escape their quest for the 'right answer' to questions with context specific answers - but it's nice to see someone with the credentials making it clear that we should try.
>It's a matter of there actually being principles instead of the alternative: nothing.
BS principles are not better than no principles.
Or principles that tie your hands behind your back, for that matter. And functional principles do that in a lot of cases, with respect to performance, memory impact, etc.
Functional languages are not some kind of 2x or 10x multiplier of a programmer's abilities. In fact, most of the things that made LISP interesting have long since been ported to modern non-functional languages, from GC to closures, and even macros.
>Most FP languages are not suitable for performance sensitive tasks and most people will admit this. For most other tasks, though, they're just going to allow you to work faster and get to where you're going with less hassle.
Not the case in the real world.
For one, this BS idea forgets all about ecosystems and libraries. Python will let you work faster and get where you're going with less hassle. Functional languages that have 1/10 the libraries available, many in various stages of abandonment for lack of programmers, will not.
>The alternative isn't as much a principle as it is a lack of principle.
Really? For one, what all "FP languages" do when they run, is "flip individual switches and modify memory". The way they model the program is just an abstraction (and a leaky one) from what happens in the machine.
Second, I thought the Church-Turing thesis shows that lambda calculus is equivalent to the Turing machine way.
Third, where did you get the idea that non-FP languages don't have principles? Smalltalk, for one, sure has tons of unifying principles behind it. Prolog. Erlang.
And if Algol was good enough for Dijkstra, modern, 10-times-better Algol derivatives are good enough for me. Especially if they have also adopted tons of functional paradigms, which you can opt to use when it makes sense for your design, and not have them forced upon you.
Python is an amazing language. I have been using it to do all my automation and scripting work for a long time. However, Python programs almost always feel like they were held together with string.
I recently have been putting a lot of time into learning Haskell and it is amazing. The type system gives me so much confidence, and I have now reached a stage where writing Haskell programs is as concise and fast as writing Python programs, with far fewer bugs and a lot more extensibility. The thing is that defaults matter. With Python, sure, you can achieve a lot of this stability by using a lot of assertions, but who wants to do all that work just to write a simple script? With Python, I used to have write-once code that I would use for a task and then chuck away. With Haskell and its abstractions, I reuse almost all of my code. I have completely replaced Python with Haskell as the programming language I use to do my daily work.
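To make the "defaults matter" point concrete, here's a toy sketch (hypothetical names, not from any real project): in Python the check would be a runtime assert you have to remember to write, while here the signature is the check and the compiler runs it for you:

    -- Toy example: wrapping Int so an id can't be confused with any other Int.
    newtype UserId = UserId Int

    lookupUser :: UserId -> Maybe String
    lookupUser (UserId 1) = Just "alice"
    lookupUser _          = Nothing

    -- lookupUser 1            -- rejected at compile time: a bare Int is not a UserId
    -- lookupUser (UserId 1)   -- fine; evaluates to Just "alice"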
Sure, Haskell is very different from conventional programming languages and initially hard to learn, since we all just want to change state. However, I personally found this style surprisingly easy to internalize. Instead of modifying data, you are passing it through a series of pipes, transformers and filters to get your final result. It is very easy to reason about this kind of code, though with laziness, Haskell makes it hard to reason about its exact performance.
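A minimal sketch of what I mean by pipes and filters (made-up task: sum the squares of the even numbers in a list):

    import Data.List (foldl')

    sumOfEvenSquares :: [Int] -> Int
    sumOfEvenSquares = foldl' (+) 0 . map (^ 2) . filter even

    -- ghci> sumOfEvenSquares [1..10]
    -- 220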
> Really? For one, what all "FP languages" do when they run, is "flip individual switches and modify memory". The way they model the program is just an abstraction (and a leaky one) from what happens in the machine.
Well, all Python programs manage pointers in the end, and flip individual switches and modify memory, so functions and objects and all of that jazz are just leaky abstractions. We should all go back to assembly, or even better, flipping switches.
I believe the IO Monad to be a wonderful abstraction for what happens inside the machine. Monads are wonderful abstractions in general, and first learning how normal monads like the list, State and Maybe monads work is an amazing experience, but realizing how IO is just a special case of the State monad is mind blowing.
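For anyone curious, here's a rough hand-rolled sketch of the State monad (the real one lives in Control.Monad.State); the point is that GHC's IO really is, roughly speaking, the same shape with an abstract "real world" token as the state:

    newtype State s a = State { runState :: s -> (a, s) }

    instance Functor (State s) where
      fmap f (State g) = State $ \s -> let (a, s') = g s in (f a, s')

    instance Applicative (State s) where
      pure a = State $ \s -> (a, s)
      sf <*> sa = State $ \s ->
        let (f, s')  = runState sf s
            (a, s'') = runState sa s'
        in (f a, s'')

    instance Monad (State s) where
      sa >>= f = State $ \s -> let (a, s') = runState sa s in runState (f a) s'

    get :: State s s
    get = State $ \s -> (s, s)

    put :: s -> State s ()
    put s' = State $ \_ -> ((), s')

    -- GHC defines IO as (approximately) a state transformer over the outside world:
    --   newtype IO a = IO (State# RealWorld -> (# State# RealWorld, a #))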
> Second, I thought the Church-Turing thesis shows that lambda calculus is equivalent to the Turing machine way.
Yes they are equivalent just as Brainfuck is equivalent to Python, or C to Java, or Fortran to Smalltalk. These are all Turing Complete languages, so obviously they are equivalent!
> And if Algol was good enough for Dijkstra, modern, 10-times-better Algol derivatives are good enough for me. Especially if they have also adopted tons of functional paradigms, which you can opt to use when it makes sense for your design, and not have them forced upon you.
If assembly was good enough for Knuth, why are we not using that? x86 is good enough for me. Why are Algol derivatives forcing me to use the imperative style of code? Why can't I use a modern functional language and use imperative, unsafe, stateful code when it makes sense for my design?
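For what it's worth, you can opt into plain imperative, mutable code in Haskell when it suits the design; a trivial sketch with IORef (hypothetical example):

    import Data.IORef

    main :: IO ()
    main = do
      counter <- newIORef (0 :: Int)                 -- mutable cell
      mapM_ (\_ -> modifyIORef' counter (+ 1)) [1 .. 10 :: Int]
      readIORef counter >>= print                    -- prints 10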
The problem is that defaults matter, and separating unsafe and safe code has a lot of advantages. Also, IMHO functional code is just easier to read and much cleaner. Instead of writing step-by-step instructions to the computer, we tell it what to do in a more declarative manner, because the lower abstractions can easily be separated out. E.g. the canonical quicksort in Haskell [0]:
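Presumably [0] points at the familiar two-liner, something along these lines:

    quicksort :: Ord a => [a] -> [a]
    quicksort []     = []
    quicksort (p:xs) = quicksort [x | x <- xs, x < p]
                       ++ [p]
                       ++ quicksort [x | x <- xs, x >= p]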
The problem with the "canonical" Haskell Quicksort implementation is that it is utter tosh.
The whole point of Quicksort is being a fast and in-place sort. That's its entire reason for being (the name kinda gives it away). An algorithm that's slow and not in place is not Quicksort, and not "the essence" of Quicksort either. It's some other sort ("Slowsort"?) loosely inspired by a cursory reading of Quicksort, without understanding what Quicksort is about and why it is "quick".
The whole thing is somewhat typical though: "We have discovered that the essence of <well-known-thing> is <completely-different-thing>". No, you have discovered a mapping function from <well-known-thing> to <completely-different-thing>. That's not the same as "the essence".
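For contrast, here is a rough sketch of what an actual in-place Quicksort looks like even in Haskell, using mutable arrays in the ST monad (Data.Array.ST, Lomuto partition); the in-place partitioning is exactly what the list version throws away:

    import Control.Monad (when, forM_)
    import Control.Monad.ST (ST, runST)
    import Data.Array.ST (STUArray, newListArray, readArray, writeArray, getElems)
    import Data.STRef (newSTRef, readSTRef, writeSTRef)

    -- Sort the slice [lo..hi] of a mutable array in place.
    qsortST :: STUArray s Int Int -> Int -> Int -> ST s ()
    qsortST arr lo hi = when (lo < hi) $ do
      p <- partition arr lo hi
      qsortST arr lo (p - 1)
      qsortST arr (p + 1) hi

    -- Lomuto partition: the pivot is the last element of the slice.
    partition :: STUArray s Int Int -> Int -> Int -> ST s Int
    partition arr lo hi = do
      pivot <- readArray arr hi
      iRef  <- newSTRef lo
      forM_ [lo .. hi - 1] $ \j -> do
        x <- readArray arr j
        when (x <= pivot) $ do
          i <- readSTRef iRef
          swap arr i j
          writeSTRef iRef (i + 1)
      i <- readSTRef iRef
      swap arr i hi
      return i

    swap :: STUArray s Int Int -> Int -> Int -> ST s ()
    swap arr i j = do
      a <- readArray arr i
      b <- readArray arr j
      writeArray arr i b
      writeArray arr j a

    -- Pure wrapper, copying in and out of the mutable array.
    sortList :: [Int] -> [Int]
    sortList xs = runST $ do
      let n = length xs
      arr <- newListArray (0, n - 1) xs
      qsortST arr 0 (n - 1)
      getElems arr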
> While I do believe that the principles used in FP languages are vastly superior I don't think this is really a matter of debate. The advantages are obvious and if you didn't realize it you probably weren't taught well enough.
No, there are plenty of people who will debate this, programming languages academics even. There is a vocal FP crowd that fawns over elegance, but there are plenty of other people who are more pragmatic about it.
> I sometimes wonder if the wins from FP languages are lost on the people not already scarred by the pitfalls of imperative programming.
Pity the poor programmers who get things done, write huge beautiful systems, all without worshiping lambdas? And by taking shortcuts a la side effects as a way of making things easier.
> Maybe someone uninitiated in programming won't have as many problems formulating functional solutions to problems in Haskell too.
It depends on how much math they have behind them. You take a mathemagician, sure, but a high schooler who is still having trouble with pre-calc?
> It depends on how much math they have behind them.
Very much so, and it also depends very much on the problem at hand. Yes, compilers and yet another RB-Tree implementation can be great in FP. Most real-world problems aren't that. In fact, most real-world problems I have encountered (and see) tend to be massively about manipulating state. Often they do little else than shuffle data around.
I sometimes think "computers" are misnamed, because they do so little actual "computation". "Data around-shufflers" might be more appropriate.