An adjective such as "monadic" can be used instead: "In Haskell, IO actions are monadic because the IO type has a flatMap (bind) operator and a unit (return) function which satisfy left identity, right identity and associativity".
All of the above are the properties (traits?) of the IO datatype. Monads (in Haskell) don't "exist" by themselves, they're just a property.
The dynamic may be slightly clearer for semigroups. In mathematics, a semigroup is a pair containing a set and an (associative) operation on that set. In Haskell we say things like "ByteString is a semigroup" but that's short for "there is a semigroup where the set is all possible ByteStrings" and you assume the people you're talking to understand what the operation is because there is only one instance of Semigroup commonly defined for ByteString.
But for learning purposes I agree wholeheartedly!
In math, there are other uses for rings, but if you're using them, it helps to know a little ring theory. Similarly, if you want to use any of the other Haskell monads, it helps to know how monads work.
Maybe, although it is also not type safe because Haskell doesn't have a value restriction for polymorphism. (Or, looked at another way, Haskell's value restriction assumes that function application is a value.)
(cons 'safe x)
But calling this a "safety" is a silly meme.)
Now, how is CL-style type-tagging fundamentally different from what Haskell does?
What Haskell does is no "safer" than if I do tag-checking at runtime. (That's why CL is called a strongly-typed language.) So this is why it is a silly meme.
The idea that compile-time checks are much better than runtime checks is also not obvious (Java's and other commercial languages' nonsense about type safety aside - Java is no safer than CL).
Whether it is better to clutter the code with explicit type annotations or to clutter the runtime code with explicit type-tag checking is another question.)
I personally love Lisp and Erlang with "everything is a pointer to a tagged value" semantics.
The fact that the compiler refuses to compile a type-mismatching expression for simple built-in types doesn't imply that it is "safer". Other languages, notably Lisp and Erlang, will catch and signal the same error later, at runtime.
I am not a Haskell guru, but what I see, especially in the case of monads, is implicit type-tagging with an additional tag - call it State, or IO, or Safe - and then type-checking against those tags.
I cannot see the fundamental difference which gives that "extra safety".
That's not all a statically typed language does.
Regardless, they are safer. The usual counterargument goes that this safety is too much of a burden, or not worth it, etc. But nobody I know argues they are equally safe.
If you mean "dynamically typed" languages, then it differs in that they do not have types.
>The fact that the compiler refuses to compile a type-mismatching expression for simple built-in types doesn't imply that it is "safer"
No, it does not imply that. It means that.
>Other languages, notably Lisp and Erlang, will catch and signal the same error later, at runtime.
Precisely. So if that piece of code was not executed, you do not know that it is incorrect. If that branch is only followed rarely, your incorrect code is out there in production waiting to blow up when it finally does run that branch.
>I am not a Haskell guru, but what I see, especially in the case of monads, is implicit type-tagging with an additional tag
The article in question does a good job of explaining monads.
No. Haskell checks at runtime which tag a value has. But, crucially, the compile-time check guarantees that these tags will always be valid. Assuming you have handled every possible variant of a type (e.g. `Just ...` and `Nothing` for the `Maybe` type), you are guaranteed that you are "covered". Unlike in, say, python, where you might get passed a string, or None, or...?
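To make that concrete, a minimal sketch (describe is an illustrative name): with GHC's incomplete-pattern warnings turned on, forgetting a constructor is caught at compile time, so the runtime tag check can never hit an unhandled case.

describe :: Maybe String -> String
describe (Just name) = "Hello, " ++ name
describe Nothing     = "Hello, stranger"
-- Delete the Nothing case and GHC's incomplete-pattern
-- warning flags this function at compile time.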
> The idea that compile-time checks are much better than runtime checks is also not obvious
Dynamic languages have their place, but there's no contest whatsoever when it comes to safety. What's better: discovering an error before the code has run, or discovering the same error while the code is running, possibly in production? In the huge majority of cases, the former is preferable.
> Whether it is better to clutter the code with explicit type annotations or to clutter the runtime code with explicit type-tag checking is another question.
With type inference, you get the best of both worlds :) Most Haskell code doesn't need any type annotations, and most of what's out there only has explicit type annotations for top-level expressions, for purposes of documentation.
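A toy example of what that looks like (averageWordLength is an illustrative name): only the top-level signature is written down, purely as documentation, and everything inside is inferred.

-- Only the top-level signature is spelled out; the types of
-- ws, n and the mapped function are all inferred by GHC.
-- (Returns NaN on empty input; fine for a sketch.)
averageWordLength :: String -> Double
averageWordLength s =
  let ws = words s
      n  = fromIntegral (length ws)
  in sum (map (fromIntegral . length) ws) / n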
If you want to define safety broadly, then both static (upfront analysis, type checking) and runtime solutions (hypervisors, tag checks, monitoring) are necessary to achieve it.
In practice, and it has also been my experience, as long as you leverage the type system, the debug time necessary to get a correctly-working program in a statically, strongly typed language tends to be significantly shorter than with a dynamic language (or with a statically typed language with a more primitive type system). The size of the code is often comparable as well.
Haskell is also completely unsafe when it comes to resource usage, which is why it is not used often in say avionics.
However, I don't buy the usual argument that follows up (not saying you are saying this now, but I'm sure you've heard it): "therefore, you're not better off using a statically typed language, therefore just use a dynamically typed language".
Just because a statically typed language cannot rule out every kind of runtime flaw is not a good reason to stop using them. Statically typed languages rule out some errors that dynamically typed languages don't. In both cases you also need to perform additional checks, and in both cases, safety critical systems should be formally verified. Not "instead", but "in addition to".
A statically typed language is an additional, automatic safety net on top of everything else. Every bit of safety counts. You don't need to formally verify your generic list algorithm isn't going to try adding two elements (and throw an exception when the elements cannot be meaningfully added) because it can't by definition. That's one less formal verification you need to do.
> In both cases you also need to perform additional checks, and in both cases, safety critical systems should be formally verified. Not "instead", but "in addition to".
In practice, there are a lot of other things going on in Haskell that work against formal verification in safety critical real time systems. When you need to know for sure, non-determinism (say in the form of lazy evaluation) is your enemy, and the type system itself becomes less useful because those paths have to be rigorously explored anyhow. Heck, at that point, you might even want to use assembly since even the compiler is suspect.
Thankfully, most of us don't write that kind of code.
Pedantic: runtime is a thing that runs your program while run-time is a phase.
That's a pretty non-standard view. It's as unhelpful as static typing proponents claiming dynamically typed languages such as Python are "unityped". As unhelpful as Simon Peyton Jones jokingly claiming that Haskell is the world's best imperative language :) Any of those claims may be technically true, but it doesn't help us understand, choose or use those languages. Other similarly unhelpful claims include the old "these languages are all Turing-complete anyway, so it doesn't matter which one you use."
So I disagree with you, philosophically, but since I acknowledge you're a very knowledgeable guy, let me ask you about your opinion:
- Would you say there is no advantage to using a statically typed language like Haskell/ML over a dynamically typed language?
- Or if you think there is an advantage, do you think the costs outweigh the benefits?
- If you were asked to develop a safety critical system, and given the choice of using Haskell (or ML if you dislike lazy evaluation by default, or OCaml if you prefer a more hybrid language) and Python (or a similar dynamic language of your choice), and every other analysis tool you can think of, static or run-time, which language would you pick? Assume you cannot pick anything else, you're given time to become proficient on the language you pick, and you cannot refuse the assignment. I know, this is a fantasy scenario, but indulge me.
Safety critical systems are generally real time so Haskell is off the table. I'll also spend a lot of time manually verifying the code, and using a lot of external analysis and verification tools, so restricted C++ is fine in that case. Now, if you told me that the system was safety critical and not real time, and the system wasn't important enough to merit lots of manual verification, then Haskell would be a great choice because its static type system is better than nothing.
Why do you think Haskell is fine as long as the system "isn't important enough to merit lots of manual verification"? That sounds puzzling to me. I'd say one thing complements the other: automatic and manual verification seems the best option.
Is lazy evaluation your biggest issue with Haskell? How about languages like ML or OCaml, which are arguably safer than "restricted C++" and do not default to lazy evaluation?
Or is GC your main problem with these languages? This would rule out most dynamically typed languages as well.
Anyways, you might want to read the stack overflow article on this subject:
Perform test in the operation:
(/) :: i -> i -> Maybe i
Require test before the operation:
nonzero :: i -> Maybe (NonZero i)
(/) :: i -> NonZero i -> i
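A minimal sketch of how the second style might be implemented (NonZero, nonzero and safeDiv are illustrative names, not from a standard library; division is renamed safeDiv here to avoid shadowing the Prelude):

-- The smart constructor is the only way to obtain a NonZero,
-- so safeDiv never needs to handle a zero divisor.
newtype NonZero a = NonZero a

nonzero :: (Eq a, Num a) => a -> Maybe (NonZero a)
nonzero 0 = Nothing
nonzero x = Just (NonZero x)

safeDiv :: Integral a => a -> NonZero a -> a
safeDiv x (NonZero y) = x `div` y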
What is your opinion of Idris?
This is part of what makes HoTT exciting, I believe, as it provides a formalized way of passing proofs and operations around over different types so long as there's a suitable homotopy.
So while Idris is a very exciting research direction, it's completely fair to state that they have a hefty set of barriers to overcome before they truly demonstrate the value of DT in some larger variety of usecases besides just theorem proving.
Runtime checks will only trip up if you happen to hit a code path that introduces an incorrect type. This may only occur in some extremely rare scenario that you never pick up in testing.
Compile time type checking allows you to prove that your program is definitely type safe, with 100% certainty.
Depends what you mean.
It's safe in the sense that you will never get a type error.
In Haskell terms, it's possible to write a function that's not total (e.g. not implemented for all possible data constructors of a type), which can then crash or fail to terminate. For example, "head" will crash on an empty list (duh).
However, this is easy to avoid, and type safety in Haskell always holds true, as do all the other guarantees the compiler makes (like referential transparency).
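For example, a total replacement for head moves the empty case into the type, so every caller is forced to handle it:

safeHead :: [a] -> Maybe a
safeHead []      = Nothing
safeHead (x : _) = Just x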
Absence of type errors is not safety, it is absence of type errors.)
There's no reason to throw the baby out with the bathwater though, since 90% safety is a hell of a lot better than 0% safety.
Personally I'm keen to see mainstream languages adopt better totality checking for that exact reason - my fantasy language would enforce that `main` is always a total function*
*(For this fantasy language, I'd probably allow infinite recursion to still exist, since the halting problem is theoretically impossible to solve without introducing a lot of pain to prove that your code will actually terminate, and that level of totality checking is often counter-productive for general-purpose code)
However, if you do get a pattern match failure, one of two things is true:
1. You can easily fix it by accounting for all patterns (or adding a default match)
2. Your program model is conceptually broken and you should probably find a new model that accounts for all possible patterns.
Much easier to deal with than a type error :)
Though these days I've been saying "Turing complete" is a bug, not a feature, provided you can accomplish your aims without it.
It does, although for Option and Result, there's .unwrap() which simply exits the program (through fail!()) on None/error. The fact that you can do this is practical, although could potentially train bad habits.
(Only available when the program is compiled for profiling.) When an exception is raised in the program, this option causes a stack trace to be dumped to stderr.
This can be particularly useful for debugging: if your program is complaining about a `head` error and you haven't got a clue which bit of code is causing it, compiling with -prof -fprof-auto and running with +RTS -xc -RTS will give you the exact call stack at the point the error was raised.
You need HKTs to have type-checked generic monads. Almost no languages have them.
Dynamic languages can implement generic, but not type-checked monads. Many static languages can implement type-checked but non-generic monads.
Now you need to show how type-checked generic monads are critical. That's much harder than showing that the monadic pattern is useful.
In practice, 75% of the time that I use monads I am using some kind of monad stack. I'm sure you can argue that this is unnecessary or that I only do so because Haskell affords it... but on the other hand, I believe that transformer stacks do a good job of elaborating the space of effect types with an efficient, expressive language. Without them you either elide side effect reification or build a lot of one-off monad stacks.
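As a rough sketch of the kind of stack I mean (illustrative names, using the standard mtl/transformers machinery): read-only config, mutable state, and IO, each as its own layer.

import Control.Monad.Reader (ReaderT, ask, runReaderT)
import Control.Monad.State (StateT, modify, evalStateT)
import Control.Monad.IO.Class (liftIO)

-- Illustrative stack: read-only String config, Int state, IO at the base.
type App a = ReaderT String (StateT Int IO) a

step :: App ()
step = do
  name <- ask            -- from the Reader layer
  modify (+ 1)           -- from the State layer
  liftIO (putStrLn name) -- from the IO base

runApp :: App a -> IO a
runApp app = evalStateT (runReaderT app "hello") 0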
H out in = out -> (FFIOperation, in)
The AST POV espoused by this article is quite good as well, but a little bit less obvious how to "step" things forward or operate nicely in parallel contexts.
Also, in reality the above perspective is a good way to embed Haskell into other contexts.
The IO type is rather rigid in that there is always a next callback, which implicitly contains the entire program state as part of the function closure. In a reactive programming language like Elm you can have many functions that may be the next callback depending on what the next event is, along with all the callbacks that run as part of the signal graph. Purity is about how constrained the callbacks are, not about the overall structure of the program.
You can write print statements to do debugging (with Debug.Trace), and in practice it's not very hard to work IO into your code when you need it (even if only for temporary debugging or development). Crucially, however, it's much harder to accidentally work IO into your code. The few cases where I really miss print statements "for free" are vastly outweighed by the many cases in impure languages where I'm accidentally mismanaging my mutable state.
Whether it results in more performant code? In some cases yes (the restrictions make it much easier to prove certain compiler optimizations), but that's not really the point. Referential transparency is about making your code more expressive, and easier to reason about, to design, and to safely tweak.
Idiomatic Haskell is not generally as fast as mutable C/Java/etc. Creating/evaluating thunks is not fast and immutable data structures often result in excess object creation. When you need them, there is no real substitute for unboxed mutable arrays, something Haskell does NOT make easy.
Haskell is one of my favorite languages, the performance story just isn't quite what I want it to be. I do, however, think that there is plenty of room for improvement, i.e. there is no principled reason Haskell can't compete.
This argument may seem more abstract than what you mention, but in fact it gets to the very heart of why there aren't good unboxed mutable arrays in haskell. In truth, there are. You can convert immutable Vectors (which behave like lists but with O(1) indexing and no mutation) into mutable Vectors in constant time using unsafeThaw. The problem is that your code is no longer persistent, and you've risked introducing subtle errors. My biggest problem is that the haskell community seems to look at non-persistent data structures as sacrilegious. As a scientific programmer, that makes me feel like maybe learning haskell wasn't such a good investment after all. But on the bright side, functional programming is on the rise, and I'm confident that all my experience with Haskell will transfer well in the future.
There may be other, faster libs that I don't know about, but I couldn't find them. I tried HaXml first (from which HXT is apparently derived), but the parser choked on my document and the author didn't come forward with a fix when I reported the problem (by email, the project isn't on Github). There is one called HXML, but I think it's dead. The TagSoup library might have worked, but I don't think so. It's not easy jumping into a new language and then coming up against library issues that prevent you from finishing your first project.
Hopefully HXT will be updated to use modern string types soon. In the meantime, I believe that xml-conduit (http://hackage.haskell.org/package/xml-conduit) might be what is desired.
edit: Then to make things dead simple, add on dom-selector:
It enables using css selectors like so:
queryT [jq| h2 span.titletext |] root
It's interesting that XML libs have to invent operators and obnoxious syntax (like HXT's arrow usage, or coincidentally the fact that HXT's parser uses the IO type, which is just crazy talk). dom-selector seems to have the same problem. I prefer readable functions, not DSLs where my code suddenly descends into this magic bizarro-world of operator soup for a moment.
Lenses would make tree-based extraction easier, I think, although lenses aren't easy to understand or that easy to read. Tree traversal with lenses and zippers seems unnecessarily complicated to me.
In a scraper you just want to collect items recursively, and return empty/Nothing values for anything that fails a match: Collect every item that contains a <div class="h-sku productinfo">, map its h2 to a title and its <div class="price"> to a price, and then combine those two fields into a record. It's something that should result in eminently readable code, not just because it's a conceptually trivial task, but also because someday you need to go back to the code and remember how it works.
Bizarro world of operator soup? I don't really follow you. That dom selector code just compiles down into functions itself. I don't see how anything could be any clearer than a css selector for selecting an html element.
I know nothing about how the compiler works, and my haskell code still easily outperforms my clojure code. The only optimizations I do are the same as anywhere else: profile and look at functions taking up too much time.
>and know the bytecode they want generated.
Bytecode is not involved. Machine code is, but I don't even know ASM to know what I want generated or if it is being generated that way.
>When you need them, there is no real substitute for unboxed mutable arrays, something Haskell does NOT make easy.
This is simply nonsense. Unboxed mutable vectors are trivial in haskell: https://hackage.haskell.org/package/vector-0.10.11.0/docs/Da... No, there is no substitute for using the right data types. Why do you think haskell or haskellers suggest using the wrong data types?
I didn't say you couldn't do arrays with Haskell, I said Haskell doesn't make it easy. Here are the actual array docs, BTW: http://www.haskell.org/haskellwiki/Arrays
I'm a relative Haskell novice, but was able to write some mutable array code with only a cursory read through the documentation.
Granted it's extremely verbose compared to most imperative languages.
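For what it's worth, here's roughly what that code looks like (a minimal sketch using the vector package):

import qualified Data.Vector.Unboxed as V
import qualified Data.Vector.Unboxed.Mutable as M

main :: IO ()
main = do
  v <- M.replicate 10 (0 :: Int) -- allocate a mutable unboxed vector
  M.write v 3 42                 -- in-place update
  x <- M.read v 3
  print x                        -- 42
  frozen <- V.freeze v           -- copy back to an immutable vector
  print (V.sum frozen)           -- 42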
Enjoy using C then. You suggested that haskell was bad because it was not fast enough. If "not as fast as C" is not fast enough, then virtually every language is not just bad, but much worse than haskell.
>I said Haskell doesn't make it easy
And I showed you that it is in fact trivially easy.
>Here are the actual array docs, BTW
That is a random, user-edited wiki page. I linked to the actual docs.
I agree. The only languages I've used that are remotely competitive for my purposes are static JVM languages (Java and Scala), Ocaml, and Julia for array ops. Haskell comes closer than many others, but just isn't there yet.
The docs you linked to are a 3rd-party package marked "experimental". I'll also suggest that you are glossing over most of the difficulties in using them. It's trivially easy to call `unsafeRead`. It's not so easy to wrap your operations in the appropriate monad, apply all the necessary strictness annotations to avoid thunks, and properly weave this monad with all the others you've got floating around.
(That last bit is fairly important if you plan to write methods like `objectiveGradient dataPoint workArray`.)
Except scala and ocaml are both slower than haskell.
>The docs you linked to are a 3rd-party package marked "experimental".
No it is not. What is the point of just outright lying?
>I'll also suggest that you are glossing over most of the difficulties in using them
I'll suggest that if you want people to believe your claim, then you should back it up. Show me the difficulty. Because my week 1 students have no trouble with it at all.
>It's not so easy to wrap your operations in the appropriate monad
You are literally saying "it is not easy to write code". That is like saying "printf" is hard in C because you have to write code. It makes absolutely no sense. Have you actually ever tried learning haskell much less using it?
>apply all the necessary strictness annotations to avoid thunks
All one of them? Which goes in the exact same place it always does? And which is not necessary at all?
>and properly weave this monad with all the others you've got floating around.
Ah, trolled me softly. Well done.
I also don't know why you are behaving as if I dislike Haskell. I enjoy Haskell a lot, I just find getting very good performance to be difficult. You can browse my comment history to see a generally favorable opinion towards Haskell if you don't believe me.
I also gave you a concrete example of a reasonable and necessary task I found difficult: specifically, numerical functions which need to mutate existing arrays rather than allocating new ones, e.g. gradient descent. Every time I've attempted to implement such things in Haskell, it takes me quite a bit of work to get the same performance that Scala/Java/Julia/C gives me out of the box (or Python after using Numba).
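To make that concrete, the shape of the update I mean is something like this (a sketch with the vector package; the hard part isn't writing it, it's matching tuned Java/C performance):

import qualified Data.Vector.Unboxed as V
import qualified Data.Vector.Unboxed.Mutable as M

-- One in-place gradient-descent step: params[i] -= rate * grad[i].
-- grad is assumed to be a precomputed gradient vector.
step :: Double -> M.IOVector Double -> V.Vector Double -> IO ()
step rate params grad =
  V.imapM_ (\i g -> M.modify params (subtract (rate * g)) i) grad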
This is a bit of a strange convention in the Haskell world. Libraries tend to be marked "experimental" even when they are completely stable and the correct choice for production use. Note that Data.Text is also marked "experimental", and it is perfectly stable and the correct choice for Unicode in Haskell.
> 3rd-party package
Data.Vector is 3rd party in the sense that it is not part of the GHC base package, but so what? It is now considered the correct library for using arrays in Haskell.
I'm not. Given that you can't tell someone's emotional state via text, it doesn't make much sense to assume an emotional state for someone else simply because it will make you feel better.
>The page you linked to explicitly says "Stability experimental"
So does every library. It is the default state of a seldom used feature that still hasn't been removed.
>I also don't know why you are behaving as if I dislike Haskell
I am responding to what you say. You said using a mutable unboxed array is hard. That is not a simple misunderstanding, that is either a complete lack of having ever tried to learn haskell, or a deliberate lie. There's literally no other options. I teach people haskell. They do not use lists for anything other than control. They have absolutely no problem using arrays.
>I also gave you a concrete example of a reasonable and necessary task I found difficult
But you didn't say what made it difficult. So a reader is left to assume you are trolling since that task is trivial.
The Haskell community has historically had a reputation as a welcoming and friendly community. Let's work on preserving that.
If we have the following function:

foo :: Int -> Int
foo x = x `div` 0

we can trace its argument with a guard (using trace from Debug.Trace):

foo :: Int -> Int
foo x
  | trace (show x) False = undefined
  | otherwise = x `div` 0

Or with a small helper:

wtf v x = trace (show x) v
someFunction x = anotherFunction x `wtf` x
I see statements like this all the time from people who fundamentally misunderstand Haskell, and I used to have the same misunderstandings myself. You really don't sacrifice anything by using it.
> the ability to write print statements to do debugging
I can slap a `trace` statement wherever the fuck I want inside my Haskell code for debugging. Even inside a pure function, no IO monad required. If I want to add a logger to my code, a 'Writer' monad is almost completely transparent, or I can cheat and use unsafePerformIO.
> and lose the ability to reason about order of execution.
If I'm writing pure code, then order of execution is irrelevant. It simply does not matter. If I'm writing impure code, then I encode order of execution by writing imperative (looking) code using do-notation, and it looks and works just like it would in any imperative language.
> But does it really result in more performant code
Haskell has really surprised me with its performance. I've only really been using it for a short time, having been on the Java bandwagon for a long time.
One example I had recently involved loading some data from disk, doing some transforms, and spitting out a summary. For shits and giggles, we wrote a few different implementations to compare.
Haskell won, even beating the reference 'C' implementation that we thought would have been the benchmark with which to measure everything else, and the Java version we thought we'd be using in production.
Turns out that laziness, immutability, and referential transparency really helped this particular case.
- Laziness meant that a naively written algorithm was able to stream the data from disk and process it concurrently without blocking. Other implementations had separate buffer and process steps (Even if hidden behind BufferedInputStream) that blocked the CPU while loading the next batch of data
- Immutability meant that the Haskell version could extract sections of the buffer for processing just by returning a new ByteString pointer. Other versions needed to copy the entire section into a new buffer, wasting CPU cycles, memory bandwidth, and cache locality.
- Referential transparency meant that we could trivially run this over multiple cores without additional work.
Naturally, a hand-crafted C version would almost certainly be faster than this - but it would have required a lot more effort and a more complex algorithm to do the same thing. (Explicit multi-threading, a non-standard string library, and a lot of juggling to keep the CPU fed with just the right amount of buffer).
On a per-effort basis, Haskell (From my minimal experience) seems to be one of the more performant languages I've ever used. (That is to say, for a given amount of time and effort, Haskell seems to punch well above its weight. At least for the few things I've used it for so far).
I'm still of the impression that well written C (or Java) will thoroughly trounce Haskell overall, but GHC will really surprise you sometimes.
I haven't used OCaml much - but my understanding is that the GIL makes it quite difficult to write performant multi-threaded code, something that Haskell makes almost effortless.
This has always interested me. I have never gotten an answer, and I suppose I can't seriously expect one now, but I am still compelled to ask:
Why did you put C in quotes up there? Why isn't Haskell in quotes? You didn't put C in quotes in other parts, but that isn't what I'm talking about.
Probably because C is a single letter, and thus potentially needs some differentiation from the surrounding sentence, whereas Haskell is an actual word. But no idea really.
What's "it" - Haskell, or referential transparency? Referential transparency definitely has its victims, and debugging is one of them. Debug.Trace is quite useful, and also violates referential transparency. That Haskell provides it is an admission that strict R.T. is unworkable.
> If I'm writing pure code, then order of execution is irrelevant. It simply does not matter. If I'm writing impure code, then I encode order of execution by writing imperative (looking) code using do-notation, and it looks and works just like it would in any imperative language.
Baloney! Haskell's laziness makes the order of execution highly counter-intuitive. Consider:
import Data.Time (getCurrentTime, diffUTCTime)

main = do
  start <- getCurrentTime
  fact <- return $ product [1..50000]
  end <- getCurrentTime
  putStrLn $ "Computed product " ++ show fact ++
             " in " ++ show (diffUTCTime end start) ++ " seconds"

(The reported time will be near zero: return doesn't force the product, which is only computed later, when show demands it.)
> Turns out that laziness, immutability, and referential transparency really helped this particular case
I don't buy it. In particular, laziness is almost always a performance loss, which is why a big part of optimizing Haskell programs is defeating laziness by inserting strictness annotations.
> Laziness meant that a naively written algorithm was able to stream the data from disk and process it concurrently without blocking
This would seem to imply that Haskell will "read ahead" from a file. Haskell does not do that.
> Immutability meant that the Haskell version could extract sections of the buffer for processing just by returning a new ByteString pointer. Other versions needed to copy the entire section into a new buffer
Haskell returns a new pointer to a buffer, while other versions need to copy into a new buffer? This is nonsense.
Like laziness, immutability is almost always a performance loss. This is why ghc attempts to extract mutable values from immutable expressions, e.g. transform a recursive algorithm into an iterative algorithm that modifies an accumulator. This is also why tail recursive functions are faster than non-tail-recursive functions!
> Referential transparency meant that we could trivially run this over multiple cores without additional work
It is not especially difficult to write a referentially transparent function in C. Haskell gives you more confidence that you have done it right, but that measures correctness, not performance.
Standard C knows nothing of threads, while Haskell has some nice tools to take advantage of multiple threads. So this is definitely a point for Haskell, compared to standard C. But introduce any modern threading support (like GCD, Intel's TBB, etc.), and then the comparison would have been more even.
When it comes to parallelization, it's all about tuning. Haskell gets you part of the way there, but you need more control to achieve the maximum performance that your hardware is capable of. In that sense, Haskell is something like Matlab: a powerful prototyping tool, but you'll run into its limits.
Of course, that's not what the do notation specifies, but I agree that's somewhat subtle. As you say, it's a consequence of laziness. Replacing "return" with "evaluate" fixes this particular example.
In general, if you care about when some particular thing is evaluated - and for non-IO you usually don't - an IO action that you're sequencing needs to depend upon it. That can either be because looking at the thing determines which IO action is used, or it can be added artificially by means of seq (or conceivably deepSeq, if you don't just need WHNF).
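Concretely, the evaluate fix for the example upthread (a sketch; evaluate from Control.Exception forces its argument before the next action runs, so the timing brackets the actual work):

import Control.Exception (evaluate)
import Data.Time (getCurrentTime, diffUTCTime)

main :: IO ()
main = do
  start <- getCurrentTime
  fact <- evaluate (product [1..50000 :: Integer]) -- forced here, not later
  end <- getCurrentTime
  putStrLn $ "Took " ++ show (diffUTCTime end start)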
Perhaps it is, but that doesn't mean it's not immensely valuable as a default. And it's worth noting that in the case of Debug.Trace, the actual program is still referentially transparent, it's just the debugging tools that break the rules, as they often do.
>Haskell's laziness makes the order of execution highly counter-intuitive.
Yes, there are some use cases where do-notation doesn't capture all the side effects (i.e. time/memory) and so a completely naive imperative perspective breaks down. But these cases are rare, and it's not that hard to learn to deal with them.
> What's "it" - Haskell, or referential transparency? Referential transparency definitely has its victims, and debugging is one of them. Debug.Trace is quite useful, and also violates referential transparency. That Haskell provides it is an admission that strict R.T. is unworkable.
I'd disagree that this is any real attack on the merits of referential transparency, since Debug.Trace is not part of application code. It violates referential transparency in the same way an external debugger would. It's an out of band debugging tool that doesn't make it into production.
> Baloney! Haskell's laziness makes the order of execution highly counter-intuitive. Consider
I wouldn't say it makes order of execution highly counter-intuitive, and your above example is pretty intuitive to me. But expanding your point, time and space complexity can be very difficult to reason about - so I'll concede that's really a broader version of your point.
> Haskell returns a new pointer to a buffer, while other versions need to copy into a new buffer? This is nonsense.
C uses null-terminated strings, so in order to extract a substring it must be copied. It also has mutable strings, so standard library functions would need to copy even if the string were properly bounded.
Java uses bounded strings, but still doesn't share characters. If you extract a substring, you're getting another copy in memory.
Haskell, using the default ByteString implementation, can do a 'substring' in O(1) time. This alone was probably a large part of the reason Haskell came out ahead - it wasn't computing faster, it was doing less.
Obviously in Java and C you could write logic around byte arrays directly, but this point was for a naive implementation, not a tuned version.
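For reference, the O(1) slicing being described (slice is just an illustrative helper): take and drop on a strict ByteString only adjust an offset and length into the shared buffer.

import qualified Data.ByteString.Char8 as BS

-- No bytes are copied here: the result shares storage with the input.
slice :: Int -> Int -> BS.ByteString -> BS.ByteString
slice off len bs = BS.take len (BS.drop off bs)

main :: IO ()
main = BS.putStrLn (slice 7 5 (BS.pack "Hello, world!")) -- prints "world"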
> This would seem to imply that Haskell will "read ahead" from a file. Haskell does not do that
It would seem counter-intuitive that the standard library would read one byte at a time. I would put money on the standard file operations buffering more data than needed - and if they didn't, the OS absolutely would.
> Like laziness, immutability is almost always a performance loss.
On immutability -
In a write-heavy algorithm, absolutely. Even Haskell provides mutable data structures for this very reason.
But in a read-heavy algorithm (Such as my example above) immutability allows us to make assumptions about the data - such as the fact that it'll never change. This means that the standard platform library can, for example, implement substring in O(1) time complexity instead of having to make a defensive copy of the relevant data (Lest something else modify it).
On Laziness -
I'm still relatively fresh to getting my head around laziness, so take this with a grain of salt. But my understanding, from what I've been told and from some personal experience:
In completely CPU bound code, laziness is likely going to be a slowdown. But laziness can also make it easier to write code in ways that would be difficult in strict languages, which can lead to faster algorithms with the same effort. In this particular example, it was much easier to write this code using streaming non-blocking IO than it would be in C.
> It is not especially difficult to write a referentially transparent function in C. Haskell gives you more confidence that you have done it right, but that measures correctness, not performance.
Except that GHC can do some clever optimizations with referential transparency that a C compiler (probably) wouldn't - such as running naively written code over multiple cores.
> When it comes to parallelization, it's all about tuning. Haskell gets you part of the way there, but you need more control to achieve the maximum performance that your hardware is capable of. In that sense, Haskell is something like Matlab: a powerful prototyping tool, but you'll run into its limits.
I completely agree. If you need bare to the metal performance, then carefully crafted C is likely to still be the king of the hill for a very long time. Haskell won't even come close.
But in day to day code, we tend to not micro-optimize everything. We tend to just write the most straight forward code and leave it at that. Haskell, from my experience so far, for the kinds of workloads I'm giving it (IO Bound crud apps, mostly) tends to provide surprisingly performant code under these conditions. I'm under no illusion that it would even come close to C if it came down to finely tuning something however.
Do notation is not specifically a line-by-line imperative thing, and complaining that it isn't that doesn't make it bad. Obviously, the goal in Haskell isn't precisely to do imperative coding. It remains true that you can hack imperative code into Haskell in various ways effectively.
>lose the ability to reason about order of execution
>But does it really result in more performant code?
That is not the goal. The goal is being able to reason about the code, and write code that is correct. The fact that it performs very well is due to a high quality compiler, not purity.
> Every benchmark I've ever seen, more practical languages like Ocaml have come out on top.
Doesn't look that way from here: http://benchmarksgame.alioth.debian.org/u32/ocaml.php
How exactly is a language that is unable to handle parallelism "more practical" than one that handles it better than virtually any other language?
> And haskell has extensible records, they are just a library like anything else:
And OCaml has monads, they are just a library like anything else.
Then what? You made the vague statement, make it not vague.
>And OCaml has monads, they are just a library like anything else.
And? I did not claim ocaml lacks monads. You claimed haskell lacks extensible records. You do understand that my post was a direct reply to what you said right? Not just some random things I felt like saying for no particular reason.
Yes, you have to declare your effects. In practice that means that most of your code returns IO, and isn't constrained anymore. I don't know if this is a library feature, or an essential feature of the language, but it would be very interesting for example to put a GUI together by computing events in functions that returned an "Event" monad, widgets in functions that returned a "GUI" monad, database access in functions in a "DB" monad, etc. Instead, all of those operate on IO.
A completely subjective assessment.
I've thought for a short while about how to code that, but didn't get any idea I liked.
Of course, you can do this as a library. In fact, this is an example use case for Safe Haskell which also prevents people from circumventing your types with unsafePerformIO and friends.
Moreover, some existing libraries already take similar approaches. FRP libraries extract reactive systems (like events but also continuously changing signals) into their own types. A button gives you a stream of type Event () rather than an explicit callback system using the IO type. Check out reactive-banana (my favorite FRP library from the current crop) for a nice example.
Similarly, people use custom monads to ensure things get initialized correctly, which has a similar effect to what you're talking about. The Haskell DevIL bindings for Repa come to mind because they have an IL type which lets you load images and ensures the image loader is initialized correctly exactly once.
Sure, in the end, everything will need to be threaded through IO and main to actually run, but you can—and people do—make your intermediate APIs safer by creating additional distinctions between effects.
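A rough sketch of that newtype trick (all names here are illustrative): hide the constructor of a wrapper around IO and export only the operations you want, and a function returning DB can't perform arbitrary IO.

{-# LANGUAGE GeneralizedNewtypeDeriving #-}
module DB (DB, runDB, query) where

-- The DB constructor is not exported, so the only DB actions
-- that can exist are the ones built from query.
newtype DB a = DB (IO a)
  deriving (Functor, Applicative, Monad)

-- Stub standing in for a real database call.
query :: String -> DB [String]
query sql = DB (putStrLn ("running: " ++ sql) >> return [])

-- The one escape hatch back to IO, used at the edge of the program.
runDB :: DB a -> IO a
runDB (DB io) = io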
The thing is IO is generic. So IO (Db) and IO (Gui) are different things.
An example of this (not a very good one, I'm afraid) is the X monad.
The best way to understand IO is to think about working with pure functions in an impure language. Let's say I've given you a promised-pure function which emits commands (re: the "Command pattern" if that's the way you want to see it) and you operate them using side effects. This is a massive inversion of control issue of course, but you can see how it might work.
Further, you might understand that your job is easier due to the purity of the command-emitting function. You explicitly give it all of the inputs you desire and operate it as needed. For instance, you can perhaps run it forwards and backwards as desired. Or weave it in with another "thread" in parallel, knowing that you alone must handle races and shared memory - the threads themselves are pure.
Finally, you might understand that the risk of bad programming is borne on your shoulders primarily—side effects are complex and you're the only one handling them.
In Haskell, "you" are the RTS and the pure threads are Haskell programs. The IO monad is nothing more than what it feels like to be "inside" a useful kind of command pattern. Finally, we compartmentalize all side-effects into the RTS so that we only have to get them right once.
Purity makes it easier to reason about the semantics of your code. This isn't about parallelism, it's about concurrency, including single-threaded concurrency. Case in point, I recently spent quite a while scratching my head over a bug that happened because someone else had written some code to mutate a piece of shared state when I wasn't expecting it.
But the pure functional programming model is a very high level of abstraction (deep down, every interesting thing in computing is a state machine), and it has a tendency to leak like mad. One such case where it does so is I/O. In fact, you can't even do I/O in a 100% pure language - and that's what the I/O monad is really about; it's punching a hole in the language in order to let the big bad ephemeral outside world in. But in a controlled manner, so that the language's fundamental ethos of purity can be maintained, which in turn makes its laziness manageable. In short, the deeper downer reason why Haskell loves its I/O monad so much is because without it the language would be fairly useless. Anyone who tells you the I/O monad's really about making I/O concurrency headaches less of a hassle has been doing more blog-reading than programming.
So why preserve the illusion? Well, ghettoizing all things stateful lets you take advantage of pure semantics everywhere else in your code, which theoretically makes it easier to reason about and maintain.
As for monads themselves, IMHO they're kind of overblown. It's just another design pattern, akin to Visitor or Strategy or Decorator, only functional instead of object-oriented. Super-useful, applicable in all sorts of circumstances, and easily worth knowing. And, just like object-oriented design patterns, easy to make sound way more complicated than it is if you try to explain the idea to someone else without having fully digested it yourself first.
I guess that I've got to read the rest of the article now too...
return :: a -> State a
get :: State Int
set :: Int -> State Int -- should be Int -> State () surely?