While fiddling around is still somewhat possible in Haskell, the language itself makes it quite difficult. Haskell kind of forces you, right at the beginning, to pause and think: "Well, what is it that I'm actually trying to do here?" It lets you recognize common patterns and implement them in abstract ways, without having to think about what kind of values you actually have at runtime. In that way, Haskell is the most powerful language I know.
Have a tree/list/whatever? Need to apply a function to each of the elements? Make your tree/list/whatever an instance of the Functor type class and you're done. Need to accumulate a result from all the elements? Make it foldable.
Something depends on some state? Make it a Monad.
You either get a result or you don't (in which case any further computations shouldn't apply)? Use the Maybe Monad.
You need to compute different possible results? Use the List Monad.
Need to distinguish three different possible values that are different compositions of elementary types? Make your own type and pattern-match on it to define the behavior of the functions you apply.
Need to output in a certain way? Make it an instance of the Show class.
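To make that concrete, here is a minimal sketch with a home-made tree type (the `Tree` type and its instances are my own illustration, not from any particular library):

```haskell
-- A toy binary tree, defined from scratch for illustration.
data Tree a = Leaf | Node (Tree a) a (Tree a)

-- Functor: apply a function to each of the elements.
instance Functor Tree where
  fmap _ Leaf         = Leaf
  fmap f (Node l x r) = Node (fmap f l) (f x) (fmap f r)

-- Foldable: accumulate a result from all the elements.
instance Foldable Tree where
  foldr _ z Leaf         = z
  foldr f z (Node l x r) = foldr f (f x (foldr f z r)) l

-- Show: output in a certain way.
instance Show a => Show (Tree a) where
  show Leaf         = "."
  show (Node l x r) = "(" ++ show l ++ " " ++ show x ++ " " ++ show r ++ ")"
```

Once the instances exist, `fmap`, `sum`, `length`, `elem` and friends all work on trees with no extra code.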
Most concepts that are used every day have some kind of idea behind them that is abstract and implementation-independent. Haskell kind of forces you to reference those ideas directly. The downside is that you actually have to know about those concepts. However, knowing about such concepts also makes you a better programmer in other languages, so it's not like it's a bad thing.
When trying to build complex software in Haskell, I find myself spending a lot of time commenting/uncommenting swaths of code, just so I can get part of an algorithm to load in GHCi. It sucks. What I wish would happen is that GHCi allowed me to load just the things that type-check and skip the rest, so I can fiddle. This is definitely possible. Refusing to compile is great for production, but not while developing.
Software is built in pieces; if I'm working on one piece, another statically unrelated piece shouldn't prevent me from working. In this regard GHCi (and many static languages) makes developing more complex than dynamic languages do, but again it's not intrinsic.
I also wish that when I run my tests, it listed all the type errors, as well as running the tests on the code that does type-check. Having more safety mechanisms in Haskell helps with writing correct code, but compiling doesn't mean the code works. Automated testing is still more useful for writing software that works. Haskell isn't as safe as many people think.
sort a = a ++ a -- it compiles, so it must sort
Use `-fdefer-type-errors` (it works with both GHC and GHCi): all type errors become warnings, and if you try to call a function that was compiled with an error, you get a runtime error.
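A minimal sketch of what that looks like (the module and names are made up; the flag can also be set via a pragma, as here):

```haskell
{-# OPTIONS_GHC -fdefer-type-errors #-}
-- This file contains a type error but still compiles,
-- because the error is deferred to runtime.

broken :: Int
broken = "not an Int"   -- ill-typed: reported as a warning at compile time

fine :: Int
fine = 42
```

Loading this in GHCi prints the type error as a warning; evaluating `fine` works normally, while evaluating `broken` raises the deferred error at runtime.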
But I must confess, I used it in the past to have a convenient way to print out something like 'Add "x" "y"' as "x + y". I didn't care about using read to turn it back into the internal representation, since expression parsing is kind of difficult; I used Parsec for that instead. So I had show for output and a parser as the inverse.
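Something like this, presumably (my own reconstruction of the idea; the real type was surely richer):

```haskell
-- A tiny expression type whose Show instance pretty-prints,
-- rather than producing Haskell-readable syntax.
data Expr = Var String | Add Expr Expr

instance Show Expr where
  show (Var x)   = x
  show (Add a b) = show a ++ " + " ++ show b
```

So `show (Add (Var "x") (Var "y"))` gives `"x + y"`, and a separate Parsec parser plays the role of the inverse.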
But you're right, as projects get larger, a priori design and static types quickly become essential. And at that point, requirements are usually known and frozen.
I predicted that the natural resolution would be languages with both (i.e. optional static types, especially at important interfaces) - but while this feature exists, it hasn't taken off.
Instead it seems that performance is the main attraction of static types in the mainstream (Java, C#, Objective-C, C, C++), while the ML family and Haskell are popular where provable correctness is wanted.
There's definitely a lot of missing documentation about this folk practice of "fast, loose, shitty Haskell" due to the strong culture of pretty code that's also enabled by Haskell. I remember seeing a video presented at CUFP that went into the merits here, though.
Essentially, this is a "tricky" concept because you want to design your types to be exactly as restrictive as you can afford without having to think too much. It probably requires a good grasp of the Haskell type system applied in full glory in order to bastardize it just right.
I think types are the ultimate fast iteration tool, but this is not a well-documented practice.
It's often said that a Lisp advantage is being able to write sloppily, without fully understanding what's needed. I guess you can model that degree of "fewer constraints" in Haskell as well? Otherwise, being constantly forced to "understand what you're doing" can sometimes be a burden.
Haskell doesn't have a theorem prover; you may want to check out Idris for that. Haskell gives you more assurance that you aren't going to get runtime errors than, say, Java, but not complete assurance. You'll still need automated testing for correctness, i.e. that you're not getting garbage in, garbage out.
After all, programming, like literally everything else, is 99% human and 1% logic, machines, data, "scaling", etc. Programs are written by people, for people (incidentally, they can also be read by a computer), so it's incredibly important that the 99% of that equation (you, the programmer) don't become discouraged at the onset by an extremely elegant, expressive, but rather rapey language before you're ready for it. In that sense, it's absolutely okay to be "seduced" by an easy scripting language in the beginning. Eventually, though, when you start lamenting "undefined is not a function" and how easily that could be avoided with proper type checking, that's your body telling you that you're ready for Haskell now.
Well, no, they literally weren't there for me when I was new to programming. (MATLAB existed then, but I wouldn't actually see it for more than a decade.)
And while I don't think they are bad languages for beginners, I don't see a clear argument presented as to why they are superior for that purpose (just a somewhat vulgar analogy that presumes that people share your subjective opinions about the languages involved.)
Pro tip: if your analogy needs a disclaimer that perpetuates gender stereotypes for it to work, then it's probably sexist.
In other words, it's sexist in the sense that we recognize there is a biological difference between the sexes; we're not applying it to infer that men are automatically rapists, or that women are automatically unable to make executive decisions. So maybe, instead of playing around with labeling terms that carry a lot of negative connotations, you could actually consider the circumstances and context of what is being said before you label.
Now combine that with some data from Christian Rudder's Dataclysm (just a link to an info-pic + summary article here):
Men consume a lot more porn than women and hunt for casual sex a lot more than women. Actually, if you weren't so rustled, you could've just googled "consumption of porn by gender" and gotten a lot more results than the two I put up there. But yeah, way to not walk away and accept that someone else has a valid point. Feel free to keep loudly crying "no, your stats suck" and "give more sources", while hiding behind a throwaway account and throwing out sensational accusations of "sexism!" for the sake of accruing karma on your main one.
"By and large, men prefer images and graphic sex sites; women prefer erotic stories and romance sites." - http://rescuefreedom.org/parallax/wp-content/uploads/2015/01...
Haskell is to programming what bugs are to food. Both are functional, an acquired taste and look scary from the outside.
This may be true for you, but it is not true for everyone. You assume that learning Haskell as a first programming language would be more difficult, but you don't present any evidence to support that claim. People who have done so disagree with you.
Wouldn't scripting languages allow one to gradually build that understanding? Suppose you end up with a lot of complex code? Ditch it and rebuild from scratch. That usually takes around 1/10th of the time it took the first time, with much better results.
So I think that by the time one can think up and build the perfect abstractions in Haskell, one can write 3 or 4 iterations of the program in a dynamic language, each time with better abstractions and neater organization.
In Haskell, I'll often have a problem and just stare at my laptop and think for an hour. Then write a dozen lines of simple, straightforward code. The code is easy to test, and the problem is marked as "solved" instead of "seems to work" as happens in scripting languages.
Edit: auto complete fixes
The final few sketches almost always end up simpler than the original idea seemed!
I also try to follow ESR's paraphrasing of Fred Brooks:
"Show me your code and conceal your data structures, and I shall continue to be mystified. Show me your data structures, and I won't usually need your code; it'll be obvious."
I think this concept is actually more important than the choice of language.
Another thing Haskell gets right is its support for parametric polymorphism (generics). You are forbidden from manipulating generic parameters other than by passing them around, so there is less room for error. This "theorems for free" property is what makes things like monads tick.
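A small illustration of the point (these examples are mine): because a generic parameter can only be passed around, the type alone rules out whole classes of behaviour.

```haskell
-- The only total function of type a -> a is the identity:
-- there is no way to inspect or fabricate a value of unknown type a.
mystery :: a -> a
mystery x = x

-- A function of type [a] -> [a] may reorder, drop, or duplicate
-- elements, but it can never invent new ones.
rearrange :: [a] -> [a]
rearrange = reverse
```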
That said, one thing that is in vogue right now is adding optional type systems and runtime contracts to scripting languages. It's still a bit of a research area, but I think it has a very promising future.
Haskell makes it very safe to change your code, but it adds some initial costs. Scripting languages make it very unsafe to change the code, unless you spend a lot of time writing tests, but then they stop being fast to iterate.
EDIT: Bottom line, I think Haskell also works well for people who "see programming as a cybernetic extension of their mind".
There's that quote (Brooks's, actually, in ESR's paraphrase): "Show me your data structures, and I won't usually need your code; it'll be obvious". For me, a lot of thinking about programs involves thinking about the types of data involved, and there Haskell gives you a language to talk about it. You can start writing down your datatypes, and the function types, directly in your emacs buffer (leaving the function bodies as just "undefined" at first). By contrast, if you are programming in some untyped language like Scheme, you have to do all that work inside comments, e.g. if you write a compiler you might start by writing a huge comment saying "this is the grammar I expect input expressions to follow". Having a type language around helps by providing a notation.
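For example, a compiler front end can start life as nothing but types (all names here are illustrative; the bodies come later):

```haskell
-- A design sketch: the types are real, the implementations are stubs.
data Token = TNum Int | TPlus

data Expr = Num Int | Plus Expr Expr

tokenize :: String -> [Token]
tokenize = undefined

parse :: [Token] -> Maybe Expr
parse = undefined

eval :: Expr -> Int
eval = undefined
```

This already type-checks, so the compiler can criticise the design before a single body is written.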
I guess there is some other kind of exploratory thinking for which untyped languages provide a good notation? But in my life I have mostly worked in typed languages, so I don't have any concrete idea of what it is.
> Show me your flowchart and conceal your tables, and I shall continue to be mystified. Show me your tables, and I won't usually need your flowchart; it'll be obvious.
I don't think it's about exploratory programming. It's more about reading other people's code.
Lately I've been reading an academic paper about End-User Software Engineering. While it's not an easy read, nor a good introduction if you've never read about End-User Development, it delves precisely into methods for building tools that allow just that, while adding opportunistic checks to catch bugs.
For really complex things, an implementation in a scripting language can introduce errors that would be caught in a typed language.
So you code your complex thing and it fails to perform to expectations (but it somehow runs, rather than just dumping a stack trace). Does the source of the fault lie in the complex idea itself, or somewhere in the source code?
My rule of thumb is that I write in Tcl/Python/Bash something that is not longer than 200-300 lines.
I am sorry, haskell is just a huge roadblock to get things done in the real world.
In the real world professionals need to juggle all sorts of models. Haskell just says "Fcuk you ! its my way or the gonadway !".
I need to juggle between json, matrix, html, etc.
Each of them has hundreds of exceptions.
You can say my model is imperfect, but guess what, buddy: every model is. The only model that will work for all cases is prolly Einstein's equations, but even those have exceptions when dealing with black holes!
I tried writing a music library in Haskell, and Haskell makes it really hard to create rules that are exceptions to the model. Apparently the models developed over thousands of years of music theory are not good enough for Haskell!
I cannot even imagine what it must be like to code chemical rules, which have hundreds of exceptions, or biological models, in Haskell. Oh my!
I am sorry, Haskell just makes computation much more difficult. Apparently mutation is a crime, even though god himself thought it was okay as a rule for everything in the universe.
my anecdotal experience.
btw I really like the concepts in Haskell. I read two of its famous books, LYAH and RWH, and I use Haskell concepts almost daily in production. However, the implementation of Haskell is not really ready for production or useful enough for the average developer. It's also not easy for the average developer to put food on the table using Haskell.
Apart from that, I have written many production-grade Haskell applications, and I cannot agree with you that Haskell gets in the way. I admit that when learning Haskell I sometimes had that feeling too, but basically that was just me thinking about the problem in too complicated a way or from the wrong perspective. Now that I am past that point, Haskell is super fun to write, very productive, and results in extremely maintainable code. It is just so easy to refactor anything you can imagine, and when it compiles again you are probably good to go!
Perhaps not, but Haskell is good enough for them:
Functional Generation of Harmony and Melody http://dreixel.net/research/pdf/fghm.pdf
- Type-safety. Correctness. Speed.
- I don't know what this even means.
> I need to juggle between json, matrix, html, etc. Each of them has hundreds of exceptions.
- Haskell has great libraries for each of these.
- It just doesn't let you do it incorrectly.
- The more complex, the better Haskell is suited.
- Immutability doesn't make computation harder.
- You say you like these concepts, but it doesn't sound like you have the slightest idea what those concepts are useful for.
I think it's more accurate to say that Haskell makes you annotate specifically which way you're doing it, and only combine ways when it's okay to do so.
The best a language can do is to fill a niche and to be very good at that particular thing. You should always use the language that is most suited for your problem, whatever that is. But there are many languages that let you get away with being a terrible programmer. Haskell just isn't like that and I want to point out that you can learn a great deal from being forced to think in more abstract ways, just like you did.
In programming, we often encounter the temptation to just mutate everything and use side effects, since it's quite convenient to do so in the short term. In the long term, these things come back and bite us. I argue that it is important for a programmer to have experienced what it is like to simply not have that option. After learning Haskell, I tried to avoid side effects in other languages as much as possible and to use them consciously. That was something I didn't even consider before learning Haskell. And, obviously, the fewer side effects you have, the easier it is to maintain or exchange parts of your program.
I currently use Haskell to calculate probably a hundred analytical derivatives for a large sparse array that is used in a simulator written in Fortran. And it's very good at that. For quickly writing some evaluations of output of this simulator I use Python, because Python is better suited.
Pick a language based on the problem. Don't just use one language because you know it. In my experience, Haskell is very well suited for a lot of mathematical stuff.
By the way, Einstein's field equations will work for gravity in the classical regime, not in all cases. But still, if you simply want to calculate how far you'll throw a ball, you really should take another model based on Newtonian gravity, or simply take a constant gravitational force. Planetary movements are also fine with Newtonian gravity (except when you really need accuracy, e.g. for the precession of the perihelion of Mercury). However, GPS calculations are terribly inaccurate without general relativity (time flows differently close to a big gravitational potential well). So, pick your model based on what you want to do, just like you pick your programming language based on what you want to do.
Examples, please? And why?
Besides that, from a mathematician's point of view, "efficient numerical computation" is a practically necessary and useful, but very ugly, thing. Speaking of C-like types `int`/`long`/`uint32_t`: they are only a crude approximation of the natural numbers; they form a ring modulo 2^n, which we pretend to use as natural/integer numbers, silently failing when the range wraps around.
And that is not the end of the story: for integer numbers we can at least specify what mathematical model describes them (a ring modulo some power of two); for floating-point numbers it is impossible: the set of possible IEEE 754 values is a very weird and irregular finite set (NaN, +Inf, -Inf, +0, -0, exponential distribution of point density, min/max bounds) with complex failure modes. Associative law? Distributive law? The very equality check? Forget about it; floating-point numbers have none of that.
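Concretely (standard GHC `Int` and IEEE 754 `Double`, nothing exotic):

```haskell
-- Fixed-width integers silently wrap around: a ring mod 2^64, not Z.
wraps :: Bool
wraps = (maxBound :: Int) + 1 == minBound

-- Floating-point addition is not associative:
nonAssoc :: Bool
nonAssoc = (0.1 + 0.2) + 0.3 /= 0.1 + (0.2 + 0.3 :: Double)

-- And NaN breaks the equality check itself:
nanWeird :: Bool
nanWeird = let nan = 0 / 0 :: Double in nan /= nan
```

All three of these evaluate to True.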
While quite fast, it is not in the league of low level programming languages. If you need ridiculous speeds, you don't have another choice but to use C, C++ or Fortran.
Python has a lot of very useful modules. If I can solve my problem with basically a few import statements and don't care about performance or anything, I find Python to be better suited.
Erlang's light-weight threads are a boon. Having a webserver written in Erlang and using Erlang as a server-side language, you can support a lot of sessions at once.
there are many lovely mutable data structures in Haskell;
there is one for unboxed C-struct-style arrays, and
http://hackage.haskell.org/package/hashtables is a super mature, stable mutable hashtable library that is used, among other places, in Agda!
..... please fact check your feelings in the future :)
Haskell does support mutation, it just requires it to be controlled. Take a look at Data.Array.ST and Data.Array.IO
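A small sketch of what controlled mutation looks like: inside `runSTUArray` you mutate freely, yet the function callers see is pure (the `squares` example is mine):

```haskell
import Control.Monad (forM_)
import Data.Array.ST (newArray, writeArray, runSTUArray)
import Data.Array.Unboxed (UArray, elems)

-- Build a table of squares by in-place writes; callers only ever see
-- an immutable frozen array, so `squares` is an ordinary pure function.
squares :: Int -> UArray Int Int
squares n = runSTUArray $ do
  arr <- newArray (0, n) 0
  forM_ [0 .. n] $ \i -> writeArray arr i (i * i)
  return arr
```

`elems (squares 4)` is `[0,1,4,9,16]`.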
> Erlang's light-weight threads are a boon. Having a webserver written in Erlang and using Erlang as a server-side language, you can support a lot of sessions at once.
GHC provides a very similar thing in the form of "green threads".
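A minimal sketch of those green threads (`forkIO` threads are multiplexed over a few OS threads, so spawning thousands is cheap; the example is mine):

```haskell
import Control.Concurrent (forkIO)
import Control.Concurrent.MVar (newEmptyMVar, putMVar, takeMVar)
import Control.Monad (forM)

-- Spawn n lightweight threads; each reports back through an MVar.
spawnMany :: Int -> IO Int
spawnMany n = do
  boxes <- forM [1 .. n] $ \i -> do
    box <- newEmptyMVar
    _ <- forkIO (putMVar box i)   -- a real server would do work here
    return box
  results <- mapM takeMVar boxes
  return (sum results)
```

`spawnMany 10000` completes almost instantly, which is what makes the many-sessions-at-once model practical in GHC too.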
Haskell supports mutation, although many short tutorials don't address it and even many longer tutorials don't do much with it.
Not to need them, sure, but not not to use them. (I swear that's the right number of 'not's.) I don't understand why something like Neil Mitchell's `Safe` isn't just the way that things are done by default.
You can always recognize a file where I'm doing list ops because I define `uncons :: [a] -> Maybe (a, [a])` at the top of the file, haha.
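For reference, that definition is two lines (and recent versions of `base` now ship it as `Data.List.uncons`):

```haskell
-- A total head/tail: Nothing on the empty list
-- instead of a runtime error.
uncons :: [a] -> Maybe (a, [a])
uncons []       = Nothing
uncons (x : xs) = Just (x, xs)
```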
Languages tend to suffer from an Iron Triangle: quick to write, quick execution, quick to learn-- pick 2. Haskell takes a long time to learn but it produces very high-quality executables and, once you know it, it's very productive.
While "quick execution" may seem separate from the type safety which is also a major selling point of Haskell-- and, arguably, a bigger one-- they're actually tightly coupled. Safe code can be optimized more aggressively, and it's often for the sake of performance that unsafe things are done... so the fact that Haskell can be robust and generate fast executables is a major win.
> Haskell just says "Fcuk you ! its my way or the gonadway !"
It doesn't, but I am going to start saying this. Thank you for the inspiration.
> Apparently mutation is a crime
Not so. Every program's main function has the type signature IO (), which means that it does perform effects, including mutation. You just want as many functions as possible not to involve mutation, because they're easier to reason about. It's a similar principle to dependency injection, but more robust and clear.
> However the implementation of haskell is not really ready for production or useful enough for the average developer.
I disagree. With Clojure and Scala, I've met people who've used them and moved away. Satisfaction rates seem to be about 60% with Scala (that is, 60% of teams or companies that make a major move to Scala are happy) and 90% with Clojure. I've never heard of anyone who's become unhappy with Haskell or rolled back on it.
One of the dangers of using Scala, for example, is that, if your Scala deployment doesn't work out (or is sound but gets blamed by the business for something unrelated), you can get stuck doing Java. Haskell, at least, doesn't have that problem.
I really like the author's suggestion of mentally translating Functor to Mappable. Are there any other synonyms for other Haskell terms of art?
What I'd really like, I suppose, is a complete overhaul of Haskell syntax to modernise and clarify everything: make it use actual words to describe things (foldl vs foldl'? BIG NO). Put in syntax redundancy and visual space to avoid the word soup effect: typing is cheap, understanding is expensive. Normalise and simplify terminology. Fix the semantic warts which make hacks like seq necessary --- if I need to worry about strictness and order of evaluation, then the language is doing lazy wrong. etc.
Basically I want language X such that X:Haskell like Java:K&R C.
This will never happen, of course; the people who have the knowledge to do such a thing won't do it because they are fully indoctrinated into the Haskell Way Of Life...
I agree that a shared vocabulary is important, but standardizing in a way that makes the mathematical writings on the topic more accessible seems a big win. Moreover, "functor" is a bit more precise than "mappable" - a functor is a mapping that preserves structure. In what sense? The math can guide you. In this case, it means the functor laws.
That's not to say that coming up with other associations to help ease understanding is a problem - I have no problem with saying, "for now, think of Functor as 'mappable'". The equivalent for Monad would probably be "flatMappable", and Monoid would be "appendable".
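The correspondence in code (the toy names are used only as mnemonics):

```haskell
-- "Mappable":     fmap :: Functor f => (a -> b) -> f a -> f b
doubled :: [Int]
doubled = fmap (* 2) [1, 2, 3]        -- [2,4,6]

-- "FlatMappable": (>>=) :: Monad m => m a -> (a -> m b) -> m b
spread :: [Int]
spread = [1, 2, 3] >>= \x -> [x, x]   -- [1,1,2,2,3,3]

-- "Appendable":   (<>) from Semigroup/Monoid, with mempty as identity
joined :: String
joined = "foo" <> mempty <> "bar"     -- "foobar"
```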
Rather a bit more than that: Eilenberg and Mac Lane's original paper defining the basic notions of category theory was published in 1945!
Monad is definitely abnormally difficult to humanize. The trio (T, ∀ a. a -> T a, ∀ a b. (a -> T b) -> (T a -> T b)) is really hard to nail down.
I don't object to "mergeable" for Monoid, but I think I weakly prefer "appendable" since it seems to say a little more about how things merge (and of course the free monoid is exactly that).
Speaking again to the broader context, one thing I really like about Haskell's choice of naming these abstractions after the math is that this type of discussion has no bearing on what types adhere to the abstractions - we're not left arguing over whether Sum and Product are "really" appending, or set intersection is "really" merging. Integers are clearly a monoid under Sum and Product, and set intersection is clearly a semigroup but not a monoid (if our universe is open), because there is no identity.
It's also interesting that pg is such an accomplished writer. I think programmers need to think of code and well-written documents as having the same importance.
Just my two thoughts.
You can't blame that one on Haskell or the functional community - the term was already established before the C++ community decided to use it in spite of pre-existing definitions. They even ignored Prolog's pre-existing abuse of the term functor :-)
A few similar terminological accidents of history come to mind, where the original definition of some term is now obscure and a different definition popular:
- POSIX capabilities (as implemented in e.g. Linux), which are a security mechanism that has nothing to do with what security researchers have been calling capabilities since the 1970s
- Microsoft operating systems using the term "Format" for creating a file system, despite the fact that it is impossible to actually format hard disks at the hardware level since the 1990s
- imperative programming languages abusing the term "function" to mean procedures with side effects
- "thunk" meaning a stub that emulates/bridges different calling conventions, instead of a call-by-name (or lazy) closure
- "Tea Party" used to refer to a fine rock band from Canada
But foldl' is horrible, I agree.
    fold :: (Foldable t, Monoid m) => t m -> m
The Monoid operation mappend is guaranteed to be associative, so the order is irrelevant. Data structures can fold in whatever way is most efficient for their structure.
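To see why, note that any bracketing of an associative operation agrees; for example, with the `Sum` monoid:

```haskell
import Data.Monoid (Sum (..))

xs :: [Sum Int]
xs = map Sum [1 .. 5]

-- Left-nested and right-nested combinations give the same answer,
-- so a container may associate its fold however is most efficient.
leftAssoc, rightAssoc :: Sum Int
leftAssoc  = foldl (<>) mempty xs
rightAssoc = foldr (<>) mempty xs
-- getSum leftAssoc == getSum rightAssoc == 15
```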
It's true that the folds for lists and arrays are implemented as right folds; however, the fold implementation for sets is neither a left nor a right fold:
    fold = go
      where go Tip           = mempty
            go (Bin 1 k _ _) = k
            go (Bin _ k l r) = go l `mappend` (k `mappend` go r)
    -- Here, I reorganized the code of `fold` to have the same shape as
    -- `foldl`/`foldr` so that you can see the difference in structure more easily
    fold2 = fold3 mappend mempty

    fold3 f z = go z
      where go z' Tip           = z'
            go z' (Bin _ x l r) = f (go z' l) (f x (go z' r))
    foldl f z = go z
      where go z' Tip           = z'
            go z' (Bin _ x l r) = go (f (go z' l) x) r

    foldr f z = go z
      where go z' Tip           = z'
            go z' (Bin _ x l r) = go (f x (go z' r)) l
    fold      [[_ 4 _] 3 _]  →  f 4 (f 3 #)
    fold2     [[_ 4 _] 3 _]  →  f (f # (f 4 #)) (f 3 #)
    foldl f # [[_ 4 _] 3 _]  →  f (f # 4) 3
    foldr f # [[_ 4 _] 3 _]  →  f 4 (f 3 #)
    fold [[_ 4 _] 3 _]
    f (go [_ 4 _]) (f 3 (go _))
    f 4 (f 3 (go _))
    f 4 (f 3 #)
    fold2 [[_ 4 _] 3 _]
    fold3 f # [[_ 4 _] 3 _]
    f (go # [_ 4 _]) (f 3 (go # _))
    f (f (go # _) (f 4 (go # _))) (f 3 (go # _))
    f (f # (f 4 #)) (f 3 #)
    foldl f # [[_ 4 _] 3 _]
    go # [[_ 4 _] 3 _]
    go (f (go # [_ 4 _]) 3) _
    f (go # [_ 4 _]) 3
    f (go (f (go # _) 4) _) 3
    f (go (f # 4) _) 3
    f (f # 4) 3
    foldr f # [[_ 4 _] 3 _]
    go # [[_ 4 _] 3 _]
    go (f 3 (go # _)) [_ 4 _]
    go (f 3 #) [_ 4 _]
    go (f 4 (go (f 3 #) _)) _
    go (f 4 (f 3 #)) _
    f 4 (f 3 #)
    * + (* + (* + (* + (* + (* + *)))))  versus
    (((((* + *) + *) + *) + *) + *) + *  versus
    ((* + *) + *) + (* + (* + *))
Really, this should be considered as C++ perverting the existing terminology from category-theory for Functors.
> I really like the author's suggestion of mentally translating Functor to Mappable. Are there any other synonyms for other Haskell terms of art?
I think that there is a great deal to be said for leveraging intuition. But whose intuition? Who was Haskell designed by and for when Functor was first defined in the standard library?
> What I'd really like, I suppose, is a complete overhaul of Haskell syntax to modernise and clarify everything: make it use actual words to describe things (foldl vs foldl'? BIG NO).
The intention is admirable, but what does it cost to do it, and what is gained by doing it? It seems that the implication is that certain functions become immediately intuitive to certain people (which people?) in certain contexts, and that, possibly by analogy, these contexts can be extended (how far?). I'm not saying that this is a bad goal, but rather than try to compromise in this manner, the Haskell community has often adopted terminology that is precise instead of intuitive.
Functors could have been Mappables, but how far would that analogy hold, and who is already familiar with maps in this context? Better to use an accurate term, and when someone unfamiliar with it learns it in this context, they will be able to apply it to many other contexts.
> Put in syntax redundancy and visual space to avoid the word soup effect: typing is cheap, understanding is expensive. Normalise and simplify terminology.
On the surface, I've always supported this, if only because I would always like to be able to pronounce a combinator when I'm talking to someone. The downside would be the combinatorial explosion of different subsets of names that people would learn for even one library. I'm not sure whether it would be a net plus or minus.
> Fix the semantic warts which make hacks like `seq` necessary --- if I need to worry about strictness and order of evaluation, then the language is doing lazy wrong. etc.
I think you will find that this is an unsolved problem. Better to allow people to be explicit when necessary instead of making the language totally unusable.
> Basically I want language X such that X:Haskell like Java:K&R C.
I think I understand the sentiment, but the analogy feels too shallow. For instance, I would make the following predictions from your analogy - Do they hold?
* Runs on a virtual machine instead of being compiled
* Extraordinary measures taken to make the language and binary-formats backwards compatible.
* More type-safe
* Less primitives
* More automated memory-management
> This will never happen, of course; the people who have the knowledge to do such a thing won't do it because they are fully indoctrinated into the Haskell Way Of Life...
Indoctrinated is obviously a loaded term. I think you will find that nearly all Haskell programmers in any position to influence the development of the language are very open-minded when it comes to new ideas. Part of the reason why Haskell looks the way it does today is because it was intended to be a platform for experimentation.
> Really, this should be considered as C++ perverting the existing terminology from category-theory for Functors.
For that matter, category theory borrowed the word from linguistics, and most definitely did not keep the same meaning.
But Haskell is a programming language. Using words like functor to mean something different from what other programming languages mean by the term creates a barrier to understanding for (non-FP) programmers. (The other definition is rather well established in non-FP circles, which is by far the majority of programming.) And when Haskell proponents state that their definition is right because it's the one from category theory, non-FP programmers find that rather arrogant.
That being said, I think anyone would agree that Haskell's terminology choices can only fairly be accused of ignoring PL parlance that existed before the terms were adopted in Haskell...
With that being said, the "Functor" terminology timeline:
~ 1942 - Category Theory - http://en.wikipedia.org/wiki/Category_theory
< 1991 - Haskell - Notions of computation and monads (Moggi)
> 1994 - C++ STL - http://en.wikipedia.org/wiki/Standard_Template_Library
> 1995 - Gang of Four - http://en.wikipedia.org/wiki/Design_Patterns
> 2000 - C# - http://en.wikipedia.org/wiki/C_Sharp_(programming_language)
> 2004 - Java Generics - http://en.wikipedia.org/wiki/Java_version_history
Now clearly we can't accuse Moggi of arrogantly ignoring existing PL terminology, because it didn't exist at the time. So, should we then say that Haskell users should have abandoned the term once it started being used differently?... This seems unfair too, as it was already in use in the Haskell ecosystem by then. I really can't accept that Haskell users are arrogant simply for using a term they adopted very early on consistently and in line with its original definition. Maybe they are arrogant, but certainly not for that reason.
So maybe they are arrogant because they don't play well with others? How do they react to other people using the term in a different fashion? I have never seen a Haskell user complain about someone calling a C++ function-object a functor. Maybe it has happened, but I just don't see it coming up very often.
> And when Haskell proponents state that their definition is right because it's the one from category theory, non-FP programmers find that rather arrogant.
I've never seen a Haskell user bust up a conversation and chastise a bunch of C++ users for talking amongst themselves about function-object "Functors". That would be arrogant, but does that ever happen? Why would they do that? The only time I can see Haskell users forcing definitions down people's throat is in situations like this, where they are being berated for using their own terminology and decide to set the record straight. That being said, who's forcing? The replies could really only be accused of being informative.
What do you suggest Haskell users should do? Stop calling functors functors? Sheepishly demur and say "okay, you're right, we're wrong" when someone says that functors are how C++ does them?
Really I couldn't care less about the terminology point as I don't believe it has ever caused any significant issues in terms of ambiguity. And I'd be surprised if anyone earnestly attempting to learn Haskell was slowed down because of these terms (slowed down more than if Haskell invented totally new terms). The only reason why I'm getting worked up is because of the "arrogant" label.
Now how did this thread of conversation start?
> I really like the author's suggestion of
> mentally translating Functor to Mappable.
> Are there any other synonyms for other Haskell terms of art?
> What I'd really like, I suppose, is a complete overhaul of Haskell syntax
>> As others pointed out, the way Haskell uses
>> the term "functor" is related to the way mathematicians
>> had been using it for at least a decade before cfront.
I guess I just wished that people would make sure that they are at least justified when using inflammatory language.
Sorry about the rant.
Your point about chronology is noted. I have no rebuttal.
> What do you suggest Haskell users should do? Stop calling functors functors? Sheepishly demur and say "okay, you're right, we're wrong" when someone says that functors are how C++ does them?
Stop saying "we're right, you're wrong" when someone says functors are how C++ does them. Accept that C++, C#, Java, and the Gang of Four can use the term to mean what they mean without them being wrong. Ideally, recognize that, within the wider world of programming, the FP use of the term is the minority, and so some effort at translation to the majority terms may be appropriate.
That said, I'm well aware that I'm talking to the wrong person. Comments of the type that I'm complaining about occur on HN, but I don't think they come from you.
The only quarrel I could pick with what you said was your original comment, when you faulted C++ for not adapting the term from category theory. I had my timeline wrong in my first reply to you, but I still think that, since the roots of C++ are very far from category theory, expecting it to go there to find its terminology is a bit unfair.
for-expressions in Scala are monadic comprehensions, and implicit parameters are analogous to typeclass constraints.
OP's article is still a great way of whetting the appetite and sharing insights, but moving on from there is better facilitated by Chris Allen's recommendations.
There is also the IDE issue; FPComplete has a web-based IDE that is good for beginners, and it is possible to set up Emacs as a very helpful IDE (though this is by no means simple). With Haskell an IDE is really helpful: you see errors as you type and can resolve them before running into a wall of compile errors.
Anyway: go Haskell. I'm looking forward to a less buggy future :)
There are Haskell libs of course that are used in these environments, and the companies usually end up fixing them such that they're quite good. Most libs used by pandoc are likely to be great, and there are a few dozen others of the same caliber (it's useful to search around and see what libs are used by the other few companies using Haskell, since those have likely been vetted as well).
The other big issue with actually using Haskell is that all the knowledge your ops team has of running a production system is essentially null and void. All your existing knowledge of how to fix performance issues: null and void. Learning Haskell and becoming productive in it almost starts to look like the easy part compared to effectively running a Haskell system (dealing with space leaks, memory fragmentation issues, and GHC tuning for stack sizes, allocations, etc.).
Also, a lot of the really common libraries like text, attoparsec (parsers), aeson, networking, etc. are highly tuned for low latency and performance. Many use compiler rewrite rules and a technique called stream fusion to compact a lot of the machine code away. Aggressive inlining and the like can be done as well.
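As a simplified sketch of what such rewrite rules look like (this is the classic map/map example, not the actual rules shipped by text or vector, which fuse whole producer/consumer pipelines):

```haskell
module MapFusion where

-- A GHC rewrite rule: when the optimizer sees two list traversals
-- back to back, it collapses them into a single traversal.
{-# RULES
"map/map"  forall f g xs.  map f (map g xs) = map (f . g) xs
  #-}
```

The rule fires at compile time and is invisible at the call site, which is part of why these libraries are fast without their users having to think about it.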
I'm sure there are some memory-heavy or poorly optimized libraries out there but that's certainly not the norm. I've had no problems with the libraries off-the-rack.
The stream fusion stuff is sweet, but not exactly unique to Haskell, since any language with good iterator/generator abstractions has similar constant-memory characteristics.
I found this posting a little more approachable to seeing the various optimizations possible with stream fusion:
There should be warnings all over the Prelude and basic libraries documentation.
The author helped me narrow it down to some issues with how GHC by default allocates stack space that is rarely enough, and once it starts growing the stack the RAM per connection gets pretty ridiculous. Using a higher default stack space helped remedy this somewhat, but the per-connection RAM cost was still way higher than Golang/Python, which I was comparing to.
So... separate project, I write a load-tester in haskell for a websocket server. I need to issue some HTTP requests, and I see Brian O'Sullivan made a nice library, wreq. I use it as described and quickly discover it uses ridiculous amounts of memory because it doesn't mention that you should always re-use the Session (the underlying http-client emphasized the importance of re-using the Manager):
(I am sorry that this issue prolly came off as a bit whiny there, I was very frustrated that such an important detail was omitted from the docs)
So, my program is working pretty nicely, until I discover that it's not actually sending multiple HTTP requests at once (even though the underlying http-client lib has a thread-safe TCP connection pool). After browsing some code, I see the problem:
The solution that was implemented so far seems equally weird to me... letting different requests stomp over the Session's cookie jar. I forked it so that multiple wreq Sessions could share the same Manager, and now it finally works as it should.
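For anyone hitting the same thing, re-using a single Session looks roughly like this (a sketch against the wreq API of that era, which exposed withSession; newer releases use newSession instead):

```haskell
import qualified Network.Wreq.Session as Sess

-- One Session (and thus one connection Manager) is shared across
-- all requests, instead of paying for a fresh Manager per request.
main :: IO ()
main = Sess.withSession $ \sess -> do
  _ <- Sess.get sess "http://example.com/a"
  _ <- Sess.get sess "http://example.com/b"   -- reuses the TCP connection
  return ()
```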
I won't even go into how some of these libs have occasionally wanted conflicting dependencies, which leads to its own 'cabal hell' (googling for that is entertaining unless it's happening to you).
I've only been writing Haskell for a bit over a year now, but every time I write code with it, despite my love of the language, the libraries and runtime end up frustrating me.
For non-trivial applications, we’ll always want to use a Session to efficiently and correctly handle multiple requests.
The Session API provides two important features:
When we issue multiple HTTP requests to the same server, a Session will reuse TCP and TLS connections for us. (The simpler API we’ve discussed so far does not do this.) This greatly improves efficiency.
Also, your bug reports are really solid.
I wish there was a language or library that was willing to take the Haskell functionality and just give it all names like this.
type Mappable = Functor
type NotScaryFluffyThing = Monad
> I wish there was a language or library that was willing
> to take the Haskell functionality and just give it all
> names like this.
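For what it's worth, GHC can already get part of the way there: with the ConstraintKinds extension (GHC 7.4+), a type synonym can rename a class, so lines like the ones above actually compile. A minimal sketch:

```haskell
{-# LANGUAGE ConstraintKinds #-}

-- Rename the class via a constraint synonym.
type Mappable = Functor

-- The synonym is usable anywhere the original constraint would be.
mapTwice :: Mappable f => (a -> a) -> f a -> f a
mapTwice f = fmap (f . f)
```

This only renames constraints, of course; the method is still called fmap, so it's not a full vocabulary swap.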
The article says "A list is a Functor". Now you're saying "Either is a Functor". But those two things don't have the same nature.
Maybe what the author meant was "The list constructor is a Functor"?
I'm not sure what is gained by garbling abstractions and reducing them to a subset of their potential interpretations.
The best way to say it is "The list type 'forms' a functor" or "The Either type 'forms' a functor". The fact that they form a functor implies that their map operation has a fixed set of properties, and these properties are independent of what exactly the data structure does and how it works.
In Haskell, a Functor actually consists of two parts: the type itself (f), which 'transforms' a type a into the type "f a" (i.e. Maybe "applied" to Int gives "Maybe Int", a new simple type; let's handwave kinds away for now). In addition to that, the fmap function is required for Maybe to be a Functor. A Functor is defined by this ability to "add structure" to existing types together with the mapping operation.
Seen this way, Either by itself is clearly not a Functor: applying it to Int gives "Either Int", which is still not a simple type. However, "Either Int" is a functor: "Either Int String" forms a simple type, and you can implement fmap. In fact, this works for any first argument, so "Either a" is the functor as usually written in Haskell.
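To make the partial-application point concrete, here is a minimal sketch, using a local clone of Either so it doesn't clash with the standard instance:

```haskell
-- MyEither takes two type arguments, so it cannot be a Functor on its
-- own; partially applying the first argument (MyEither e) leaves a
-- one-argument type constructor, which can.
data MyEither e a = MyLeft e | MyRight a

instance Functor (MyEither e) where
  fmap _ (MyLeft e)  = MyLeft e        -- the failure value passes through
  fmap f (MyRight a) = MyRight (f a)   -- the function maps over the success side
```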
class Container(val property: Int)

val list = List(new Container(1), new Container(2))
val option = Option(new Container(3))

val mappedList: List[Int] = list.map(x => x.property)
val mappedOption: Option[Int] = option.map(x => x.property)
Two things, in particular, stand out for me when thinking about Haskell this way (as a "tool for thinking" language).
First, unless you're a mathematician, you probably haven't thought very deeply about algebraic data types, and how useful and expressive it is to build up a program representation from a collection of parameterized types. The article touches on this a little bit in noting that Haskell teaches you to think about data types first.
But it's more than just "data first," for me, at least. Grokking Haskell's type system changed how I think about object-oriented programming. Classes in, say, Java or C++ or Python are a sort of weak-sauce version of parameterized abstract types. It's kind of mind-blowing to make that connection and to see how much more to it there is.
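A tiny illustration of what "data first" buys you (hypothetical types, just for flavor):

```haskell
-- An algebraic data type: a closed set of cases, each carrying its own data.
data Shape = Circle Double | Rect Double Double

-- The compiler checks that every case is handled.
area :: Shape -> Double
area (Circle r) = pi * r * r
area (Rect w h) = w * h

-- And a parameterized type: the "abstract types with parameters" that
-- classes in Java/C++/Python only weakly approximate.
data Pair a b = Pair a b
```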
Second, monads are a really, really powerful way of thinking about the general idea of control flow. Again, the most useful analogy might be to object-oriented programming. When you first learn to think with objects, you gain a flexible and useful way of thinking about encapsulation. When you learn to think with monads, you gain a flexible and useful way of thinking about execution sequencing: threads, coroutines, try/catch, generators, continuations -- the whole concurrency bestiary.
I think monads are hard for most of us to wrap our heads around because the languages we are accustomed to are so static in terms of their control flow models, and so similar. We're used to thinking about control flow in a very particular way, so popping up a meta-level feels crazy and confusing. But it's worth it.
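The simplest instance of this "sequencing as a first-class idea" is probably the Maybe monad, where bind encodes early exit:

```haskell
-- Division that can fail.
safeDiv :: Int -> Int -> Maybe Int
safeDiv _ 0 = Nothing
safeDiv x y = Just (x `div` y)

-- The do-block threads results along; the first Nothing short-circuits
-- everything after it. The control flow itself lives in the Monad instance.
calc :: Int -> Maybe Int
calc n = do
  a <- safeDiv 100 n
  b <- safeDiv a 2
  return (a + b)
```

Swap Maybe for another monad and the same do-block shape gives you state threading, nondeterminism, or continuations, which is the "meta-level" the comment is pointing at.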
Speaking of which, I found "Functional Programming in Scala" excellent for teaching someone with an imperative background how to "think functionally". Monads are explained in an easy-to-understand way. I can imagine that, without reading that book, I'd have been looking at a couple of years of coding before I started to see the abstractions. By contrast, "Learn You a Haskell" lost me partway through both times I tried to read it...
Also, companies actively using and recruiting for Haskell are now starting to join the Commercial Haskell SIG, so if you want to poke around, you can find them here: https://github.com/commercialhaskell/commercialhaskell#readm...
Some notable ones include:
* Facebook Haxl, an abstraction around remote data access 
* Microsoft Bond, a cross-platform framework for working with schematized data 
* Google Ganeti, a cluster virtual server management tool 
* Intel Haskell research compiler, a custom Haskell compiler used internally at Intel Labs 
1. It is just a small team or even one person using it and they're doing it because they really want to use that technology badly.
2. The project is some side research thing or trivially small that it could have been done using any technology.
3. It is actually just a tool or sub-system of the main system that was low risk enough.
4. The project is no longer operational, if it ever made it to that stage.
Also he is focusing on large companies who have huge reasons they can't use Haskell, mostly related to internal resources. If you have several hundred Java engineers (for example) you literally cannot just switch to Haskell, it wouldn't work.
Lisp falls into the same category. High quality and very interesting but it will never, ever gain widespread use. Don't believe me? A half century of proof exists. Haskell is already at a quarter century.
Both are very cool and everyone should learn them to some degree because they will make you a better programmer but neither will ever be used widely. They just aren't appropriate for most general purpose programming tasks.
Lisp gives the programmer maximum raw expressive power. This appeals to lone wolves and autodidacts, but it completely punts on the issues of standards, teamwork and maintainability.
Haskell, on the other hand, promises a direct solution to a huge swath of problems that are experienced across the board in software development today. The pitch is essentially an extension of what Sun used to sell Java on in the 90s: it makes your code safer and more maintainable. Except Java only really delivered on that for memory management in a C-dominated world; its type system gives you barely anything in that regard, so you still have just as many NullPointerExceptions as you'd suffer from the lack of types in a language like Ruby. Haskell's type system gives you infinitely more meaningful safety, with suitable state-of-the-art functional abstractions to minimize the pain of acquiring it.
The only catch is the learning curve is steep, but as more and more programmers scale that wall, the benefits to performance and maintainability will become apparent to the pointy hairs. Lisp never really had an equivalent value proposition, except in a few narrow fields where its expressiveness and plasticity were key.
That is: Some languages aren't suitable for general use. Some aren't... and then they are. But popularity probably correlates with how suitable the language was at least two years ago, and maybe more like 10. (Call it 5 as a compromise.)
So popularity doesn't tell you that the language is unsuitable now. But I agree, there is a correlation. Programmers for the most part aren't stupid sheep, afraid to use something new.
Presumably I use server-side applications written in Java, but I've no way of telling. If server-side counts then most people with computers indirectly use Haskell via Facebook's Haxl project.
As a practical note, the fact that educated people use it is an indicator that it is useful.
Possibly. It could also be that they use it because it's interesting and informative rather than useful per se.
It could also be that it's useful in particular contexts in the same way that Feynman diagrams are useful.
My point was that it takes a nontrivial amount of time to learn Haskell, during which you might feel "unproductive" by that measure. I feel that it's during that period that the magic happens :-)
it's comforting -for me- to see that almost everybody is going through the same phases while learning haskell. i believe that should say something to haskell community.
i've recently started learning haskell. it's been 25 days. (so says cabal) i was reading a book and struggling to build a web app. (why web app?) i was so close to quitting. later i decided this is not the way to learn haskell. one simply does not read the book and try stuff. that was not enough. at least for me. so i changed my method.
my new method of learning haskell is:
- read the book.
- find one or more mentors (i have two) that are really good at haskell and can answer all kinds of impatient questions you have.
- watch people doing and explaining haskell stuff.
- join #haskell-beginners on freenode and ask your questions.
- create something small first that you can turn into something big later.
online haskell resources are surprisingly deficient, but the #haskell-beginners community is awesome when it comes to helping n00bs like me, and "learn you a haskell" is an excellent book.
one more resource that i use as reference material is the "haskell from scratch"  screencast series by chris forno (@jekor).
before you begin, make sure you checkout chris allen's (@bitemyapp) "learn haskell"  guide.
we'll get there people, we'll get there. :)
This definitely helped me too. I started out looking at functions and monads as two 'types' of function that could only be mixed in certain ways, and didn't bother with the gory details at first. IME, it's only when you experience monads and their effects that the gory details make perfect sense.
Right now I'm learning Yesod, but I don't feel confident that's really what I want. Which of these are closest to Rails? Which are closest to Sinatra?
Scotty would be closer to Sinatra and Flask. Spock is similar to Scotty but comes with a few more built-in features like type-safe routing, sessions, etc.
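To give a flavor of the Sinatra likeness, a minimal Scotty app looks something like this (a sketch; the exact param API has shifted across Scotty versions):

```haskell
{-# LANGUAGE OverloadedStrings #-}
import Web.Scotty
import Data.Monoid ((<>))

main :: IO ()
main = scotty 3000 $
  get "/hello/:name" $ do
    name <- param "name"        -- captured from the route pattern
    text ("Hello, " <> name)
```

Routes, captures, and handlers in a dozen lines, which is about as close to Sinatra as Haskell gets.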
I recommend Yesod but there are certainly some advanced metaprogramming features (routing, models).
Have you checked out the Yesod scaffold site? https://github.com/yesodweb/yesod-scaffold
Scotty and Spock are both Sinatra-like.
There's a lot of good info here: https://wiki.haskell.org/Web/Frameworks
I don't mean to criticize or anything, just mean to understand. There are so many people who are very passionate about Haskell that it makes me think that it must be worth while to learn. But I just don't get how it would be useful for things that I do most with programming: writing Web/Desktop/Mobile apps in Swift, Python, and PHP.
Also, can you recommend a good book or resource that uses real world examples to teach Haskell?
Out of the things you mentioned, server-side programming is the one where Haskell fits best. Server-side programming is more amenable to unusual languages because you get to choose your own platform, and there are plenty of mature web frameworks you can use (too many of them, I might say). It might be worth a try to experiment with writing code in a more type-safe language. Even simple things like algebraic data types are things I miss a lot when working in other languages.
Yes, and yes.
> Also, can you recommend a good book or resource that uses real world examples to teach Haskell?
The obvious thing to recommend here is Real World Haskell , which directly addresses some of the areas you raise.
Also, Write Yourself a Scheme in 48 Hours is more in-depth and real-world than most tutorials (writing a Scheme interpreter isn't exactly a common real-world application, but it's closer to real-world scale than most tutorials get, and it uses a lot of things that matter in many real-world apps).
Haskell is a general purpose programming language.
RWH is well-written and covers some real-world tasks, but some of its examples are outdated enough that they no longer compile (at least, that's what I encountered a year or so ago), and Haskellers will frequently warn people that parts of it are out of date (see elsewhere in these comments).
Someone else suggested "Write Yourself A Scheme" as a good practical introduction, and that in itself says a lot about who Haskell appeals to -- people who are interested in programming languages. The MLs and Haskell remind me of Brian Eno's line about how the first Velvet Underground album only sold 30,000 copies, but "everyone who bought one of those 30,000 copies started a band".
> "We store memories by attaching them to previously made memories, so there is going to be a tendency for your brain to just shut off if too many of these new, heavy words show up in a sentence or paragraph."
That has always been my belief. I don't have anything else to back it up, only that my own speed of learning seems to increase for new subjects with time. The more I know, the easier new concepts seem. Very few things are completely new, unless I start delving into subjects I'm completely unfamiliar with. Say, Quantum Mechanics.
With most programming languages, I (and probably many here) can learn enough to start creating something useful in a weekend. Haskell always gave me trouble because it seems to take longer than that.
Then again, so does Prolog. I'll try yet again.
I'm missing Visual Studio; are there any really good Haskell IDEs out there? For example, ones which allow debugging.
As I code only in haskell, it's perfect fun for me.
Now, maybe a good way to start is using/practicing their Vimgolf client. 
In emacs, as I didn't use any other modes (except haskell-mode ...), I don't need their wonderful package managers any more.
Vim + ghcmod + syntastic has a useful subset of the functionalities of an IDE.
A minor wording recommendation:
> better in every measure(lower complexity, speed, readability, extensibility)
Apart from a missing space before the parenthesis, this reads like there was lower complexity, lower speed ...
there are only two problems in CS, cache invalidation and naming things - phil karlton
An RSS feed would be great.
Also, does anyone know what colorscheme this is using for the code samples? Looks nice.