State in Haskell (1995) [pdf] (microsoft.com)
54 points by azhenley on April 2, 2021 | 11 comments



The paper notes a limitation this approach has with arrays: once you stop mutating an array and want to make a "pure" version that is usable outside ST, you have to choose between the safe but slow option of copying the array and the aptly named "unsafeFreeze" function:

> The implementation of arrays is straightforward. The only complication lies with freezeArray, which takes a mutable array and returns a frozen, immutable copy. Often, though, we want to construct an array incrementally, and then freeze it, performing no further mutation on the mutable array. In this case it seems rather a waste to copy the entire array, only to discard the mutable version immediately thereafter.

> The right solution is to do a good enough job in the compiler to spot this special case. What we actually do at the moment is to provide a highly dangerous operation unsafeFreezeArray, whose type is the same as freezeArray, but which works without copying the mutable array. Frankly this is a hack, but since we only expect to use it in one or two critical pieces of the standard library, we couldn't work up enough steam to do the job properly just to handle these few occasions.

It seems that the recent addition of linear types to GHC 9.0 will enable "freeze" operations that are both fast and safe: http://hackage.haskell.org/package/linear-base http://hackage.haskell.org/package/linear-base-0.1.0/docs/Da...
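For contrast with the paper's workaround, GHC's standard array libraries now package this pattern behind a safe interface: runSTUArray (and runSTArray) freeze the finished array without a copy, which is safe because the mutable array cannot escape the ST computation. A minimal sketch using only the plain Data.Array.ST API (nothing here is from linear-base):

    import Data.Array.ST (newArray, writeArray, runSTUArray)
    import Data.Array.Unboxed (UArray)

    -- Build an array of squares; runSTUArray freezes the result in place,
    -- which is safe here because the mutable version never escapes ST.
    squares :: Int -> UArray Int Int
    squares n = runSTUArray $ do
      arr <- newArray (0, n - 1) 0
      mapM_ (\i -> writeArray arr i (i * i)) [0 .. n - 1]
      return arr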


I always assumed that do-notation had been introduced in Haskell as soon as monads became a thing, but no: this paper dates from 1995, while do-notation appeared with the Haskell 1.3 report in 1996. So here all the monadic actions are threaded together with thenST (a specialised version of >>=).
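For a feel of the difference, here is the same ST action written both ways (a minimal sketch using today's Data.STRef names; the paper's thenST is just (>>=) specialised to ST):

    import Control.Monad.ST (ST)
    import Data.STRef (STRef, readSTRef, writeSTRef)

    -- The paper's thenST is (>>=) at type ST:
    thenST :: ST s a -> (a -> ST s b) -> ST s b
    thenST = (>>=)

    -- Threading the actions by hand, in the paper's style:
    swap :: STRef s a -> STRef s a -> ST s ()
    swap v w = readSTRef v    `thenST` \a ->
               readSTRef w    `thenST` \b ->
               writeSTRef v b `thenST` \_ ->
               writeSTRef w a

    -- The same action with do-notation:
    swap' :: STRef s a -> STRef s a -> ST s ()
    swap' v w = do
      a <- readSTRef v
      b <- readSTRef w
      writeSTRef v b
      writeSTRef w a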

Skimming this article really made me appreciate the compactness and clarity of do-notation (although hard-core Haskellers will object that it hides what really is going on).


> although hard-core Haskellers will object that it hides what really is going on

I've never seen anybody say this. Many people say that when learning you should avoid it a few times in order to understand it, but I've never seen anybody claim it hides things from experienced developers.

Also, I think the idea is far-fetched: expanding do-notation in your head when reading code is one of the minimum skills needed to make proper use of the language, so it can't hide anything from hard-core Haskellers. Anyway, if you think your program is better structured using do, use it and don't worry about the criticism.



Oh, but notice that all those reported problems either mention newcomers or are cases where your code ends up with a worse structure because of it.

Also, GHC now has features that address the structural problems cited there.

No blanket advice is possible, neither "use do notation" nor "avoid do notation". It's an incredibly handy tool, but you need to decide for yourself whether it improves your code or not.


I believe that negative view of do-notation is not popular nowadays; the tendency is to lean on it more. There are language extensions like ApplicativeDo and the newer QualifiedDo that expand the reach of the notation.
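A minimal example of the former (essentially the GHC user's guide illustration): with ApplicativeDo, a do block whose binds are independent desugars to Applicative combinators, so it only needs an Applicative constraint:

    {-# LANGUAGE ApplicativeDo #-}

    -- The two binds don't depend on each other, so under ApplicativeDo this
    -- desugars to fmap/<*> and works for any Applicative, not just a Monad.
    pairUp :: Applicative f => f a -> f b -> f (a, b)
    pairUp fa fb = do
      a <- fa
      b <- fb
      pure (a, b)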


My "teacher" (a cargo-cult "craftsman") banned the use of do, among other ivory-tower extremisms.


I’ve seen some pretty broken do blocks come out of attempts to use parsec. The applicative syntax just makes more sense for describing what is happening, especially for more complicated parsing logic.


I agree that Applicative syntax is good for parsing. But I'd add that not all parsers can be applicative, so occasionally you do need do notation in your parser.
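A small sketch of that distinction with standard Text.Parsec combinators (illustrative parsers, not anything from the thread): the first has a fixed shape and stays applicative; the second is length-prefixed, so how much input to read depends on an earlier result and genuinely needs a monadic bind:

    import Control.Applicative (liftA2)
    import Text.Parsec (Parsec, anyChar, char, count, digit, many1)

    -- Fixed shape: two digit runs separated by a comma; applicative style suffices.
    pairOfNumbers :: Parsec String () (String, String)
    pairOfNumbers = liftA2 (,) (many1 digit <* char ',') (many1 digit)

    -- Context-sensitive: read a length, then exactly that many characters.
    -- 'count n' depends on the parsed n, so this needs do-notation / (>>=).
    lengthPrefixed :: Parsec String () String
    lengthPrefixed = do
      n <- read <$> many1 digit
      _ <- char ':'
      count n anyChar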


Why is this rendering so terribly in Chrome?


It contains bitmap (Type 3) fonts. Some older documents produced by TeX have this issue (back in the day you'd have Metafont generate bitmaps designed for your specific printer, taking into account ink fill-in, etc.).

Modern outline versions of these fonts have since become available, and this typically isn't a problem with newly generated files. (You can still make it happen, but by default everything uses outline PostScript fonts these days.)

There exists a utility called pkfix that can fix this if you happen to have the original .ps file and it has the right comments from dvips.

There is a second utility called pkfix-helper that will try to guess and inject those comments based on the metrics (character sizes) of the fonts. But it can't really match a lot of the symbol fonts that only pull a couple of characters.



