
How do I modify a variable in Haskell? - MichaelBurge
http://www.michaelburge.us/2017/08/15/how-do-i-modify-a-variable-in-haskell.html
======
jamesbrock
Mr. Burge has written a nicely thought-out blog post showing an evolution of
different ideas about how to write to an array in Haskell.

I don't want people trying to follow the evolution to get the impression that
iterating and writing to an array is really hard in Haskell, so here is a
complete Haskell program which

1. Initializes a _10x10_ mutable array to _0_.

2. Iterates in a for-loop from _0_ to _9_, setting the diagonal to _1_.

3. Freezes the mutable array to an immutable array and prints it.

    
    
      module Main where
      
      import Foreign.C.Types (CInt)
      import Control.Monad (forM_)
      import Numeric.LinearAlgebra (toLists) -- http://hackage.haskell.org/package/hmatrix-0.18.1.0/docs/Numeric-LinearAlgebra-Data.html
      import Numeric.LinearAlgebra.Devel (runSTMatrix, newMatrix, writeMatrix) -- http://hackage.haskell.org/package/hmatrix-0.18.1.0/docs/Numeric-LinearAlgebra-Devel.html
      
      main = do
          putStrLn $ unlines $ fmap unwords $ fmap (fmap show) $ toLists aImmutable
        where
          aImmutable = runSTMatrix $ do
              a <- newMatrix (0::CInt) 10 10
              forM_ [0..9] $ \i -> writeMatrix a i i 1
              return a
    

Output:

    
    
      1 0 0 0 0 0 0 0 0 0
      0 1 0 0 0 0 0 0 0 0
      0 0 1 0 0 0 0 0 0 0
      0 0 0 1 0 0 0 0 0 0
      0 0 0 0 1 0 0 0 0 0
      0 0 0 0 0 1 0 0 0 0
      0 0 0 0 0 0 1 0 0 0
      0 0 0 0 0 0 0 1 0 0
      0 0 0 0 0 0 0 0 1 0
      0 0 0 0 0 0 0 0 0 1

------
jordigh
I've tried to like Haskell, but when simple tasks start to look like
complicated puzzles with many possible solutions, I get really put off. I want
to get stuff done, not feel clever for managing to contort my algorithm into a
different form whose runtime performance I'm uncertain of, all done merely in
the name of avoiding a for loop.

~~~
tome
It's a shame you feel that way. Modifying an element of an array in Haskell is
easy and the blog post gives almost exactly correct code, but for some reason
places it under the title "we run into trouble when we realize there’s no
built-in mutating assignment". But we don't run into trouble! Mutable arrays
are provided in Haskell! Haskell has mutation! Here's the code:

    
    
        import Data.Array.MArray
        import Data.Array.IO
        
        size = 10
        
        (!) = readArray
        
        main :: IO ()
        main = do
          -- 1. Declare the array
          arr <- newArray ((1,1), (size,size)) undefined
          let _ = arr :: IOArray (Int,Int) Integer
        
          -- 2. Initialize the array to 0
          sequence_ $ do
            i <- [1..size]
            j <- [1..size]
            return $ writeArray arr (i, j) 0
        
          -- 3. Set the diagonal to 1
          sequence_ $ do
            i <- [1..size]
            return $ writeArray arr (i, i) 1
        
          -- 4. Print the array
          sequence_ $ do
            i <- [1..size]
            j <- [1..size]
            return $ do
              arr_i_j <- arr ! (i,j)
        
              putChar $ if arr_i_j == 0
                        then '0'
                        else '1'
              if j == size
                then putChar '\n'
                else return () 
    
        > main
        1000000000
        0100000000
        0010000000
        0001000000
        0000100000
        0000010000
        0000001000
        0000000100
        0000000010
        0000000001
    

I've no idea why the blog post is written in such a long-winded style. Mutation
in Haskell is not difficult!

~~~
jstimpfle
But IO (or ST) style is also unidiomatic and simply too cumbersome. You are
just not going to write "writeArray arr (i,j) k" instead of "arr[i][j] = k",
and more importantly,

    
    
        v <- readArray arr (i,j-1)
        w <- readArray arr (i-1,j)
        writeArray arr (i,j) (v+w)
    

instead of

    
    
        arr[i][j] = arr[i][j-1] + arr[i-1][j];
    

No way you're doing that over and over again for any substantial performance-
oriented code (where you absolutely need mutable arrays).

~~~
Peaker
Doing this kind of effect sequencing in Haskell is annoying indeed. A radical
library could mitigate that, but it would be really unidiomatic.

Could be nice to have sugar like expr[!x] that expands to: x >>= \genName -> ..
expr[genName] ..

So you could write: writeArr arr !(readArr ..) !(readArr ..)

Then if you add an operator to do the reading it becomes quite reasonable.
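Something close is already expressible with plain Applicative combinators. Here is a sketch (the helper name `step` and the array setup are illustrative, not from the thread) that fuses the two reads and the write from the earlier example into a single expression:

```haskell
import Control.Applicative (liftA2)
import Data.Array.IO (IOUArray, newListArray, readArray, writeArray, getElems)

-- Sketch: liftA2 sequences the two reads and combines their results,
-- and (=<<) feeds the sum to the write. One expression, three effects.
step :: IOUArray (Int, Int) Int -> (Int, Int) -> IO ()
step arr (i, j) =
  writeArray arr (i, j) =<< liftA2 (+) (readArray arr (i, j - 1))
                                       (readArray arr (i - 1, j))
```

It is still noisier than `arr[i][j] = arr[i][j-1] + arr[i-1][j]`, but it is one expression rather than three bind statements.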

~~~
WarDaft
Well, you can add operators, though you almost certainly should think about
more idiomatic solutions first. It's entirely possible to write operators that
allow a statement like:

    
    
      arr .~ (i,j) .= arr ! (a,b) + arr ! (x,y)
    

You make orphan instances for Indexed and Num, and then .~ is just a slight
tweak on the adjust function that Indexed provides and .= is literally a
direct synonym for $ that just looks better in this context. This is actually
more flexible than most languages, because now (arr .~ (i,j)) or (.~ (i,j))
can be named and applied to multiple things should that be desirable for some
reason. Also note that this code is very weird, as arr is not an array, but an
IO action describing how to produce one. Operations on it actually produce
diffing instructions to be applied elsewhere. I have also not tested the
performance.

These exact things are not in base because _they are discouraged_ and not
supposed to be easy. Note that the lens library provides operators more or
less just like this for a wide variety of data types, in a safe and composable
way.
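As a rough illustration of the shape such operators can take, here is a hypothetical sketch over a pure Data.Map "array" rather than the IO-action version described above; the names `.~` and `.=` are reused from the comment, not taken from lens:

```haskell
import qualified Data.Map as M

-- Hypothetical sketch: (.~) defers an insert at a key, and (.=) is just
-- ($) at low precedence, so an update reads left to right.
(.~) :: Ord k => M.Map k v -> k -> v -> M.Map k v
(.~) m k = \v -> M.insert k v m

(.=) :: (a -> b) -> a -> b
(.=) = ($)

infixl 4 .~
infixr 0 .=
```

With these fixities, `m .~ (i, j) .= m M.! (a, b) + m M.! (x, y)` parses as `(m .~ (i, j)) .= (...)`, and the partial application `(m .~ (i, j))` can be named and reused, as the comment notes.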

~~~
Peaker
Taking IO actions just to do the bind for the caller goes against the benefits
you usually get from purity.

------
beaconstudios
isn't the whole point that you write code completely differently in functional
languages? rather than writing loop code to populate an array, you'd do
something like the following (in JS as I'm not a haskell programmer):

    
    
        const row = (len, onPositions) => {
            return new Array(len).fill(0).map((item, index) => onPositions.includes(index) ? 1 : 0);
        };
        
        const matrixWithDiagonal = (len) => {
            return new Array(len).fill(0).map((r, index) => row(len, [index]));
        };
    

the point being that the code generates the result directly, rather than
making space for values and then looping through toggling the positions that
should be set to on.
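For reference, a Haskell version of the same generate-it-directly idea (my sketch, since the comment above is in JS) is a one-line list comprehension with no mutation anywhere:

```haskell
-- Build the n-by-n identity matrix directly as a nested list:
-- cell (i, j) is 1 on the diagonal and 0 elsewhere.
identity :: Int -> [[Int]]
identity n = [ [ if i == j then 1 else 0 | j <- [1 .. n] ] | i <- [1 .. n] ]
```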

note: you wouldn't really build arrays with the new Array() construct
generally speaking (a bare new Array(len) is sparse, so .map skips its empty
slots until you .fill it), but it's easier than including underscore or ramda
in the example.

~~~
jordigh
The original example is just to try to find something where you need to modify
an array. Sure, you can create identity matrices without modifying an array,
but that's just changing the parameters of the problem. If you really do need
to change an array, how do you do it in Haskell?

Your solution is the typical thing you do in Haskell. You might know how to
perform an algorithm by modifying an array, but now you suddenly have your
hands tied and you can't do that anymore. Now you have to come up with an
alternative way.

It's like someone really wants to implement bubblesort in Haskell but instead
the solution ends up being to implement quicksort because the bubbling is
difficult or unusual in Haskell.

~~~
dxbydt
> If you really do need to change an array, how do you do it in Haskell?
> ...It's like someone really wants to implement bubblesort in Haskell but
> instead the solution ends up being to implement quicksort because the
> bubbling is difficult.

You nailed it! As a math major myself, I completely empathize with this pov.
The vast majority of languages present themselves as minor tools that can be
used to solve your problem. The goal is to solve your problem, not figure out
the tool. If I want to decompose a matrix via Cholesky or Schur or QR or LU or
what have you, I just want to be able to set arrays willy-nilly without
thinking so much about how to accomplish the same thing without setting the
array.

The Haskell way of "let's not really set the array, it's not what you want",
which a bunch of comments have expressed here, focuses the attention on the
tool, not my problem at hand. I honestly don't give 2 shits about your tool,
whether you call it C or python or octave. I care about my problem, not your
tool. So your tool should just stand aside, let me do my thing & not complain.
That's exactly why the vast majority of applied researchers in math stick to
matlab/octave, ml researchers to python, vision/robotics guys to c++, or
insert your speciality here. In each case, the tool is one tiny minor part of
the problem-solving process. As a quant, I would worry about convergence of
pde's and getting the term structure right - yes, the whole thing is in c++,
but I don't spend days and nights thinking about what c++ wants, I just care
about the math.

With Haskell, I do have to stop what I am doing and focus on the tool. The
language says, let's think about a nice function composition that elegantly
yields this transform without dirty mutation. I know such a transform exists,
but looking for it is not my day job. Most applied math & applied ml work is
seriously messy and highly imperative, for which such elegance is a mismatch,
because of time constraints imposed by capitalism. If we were all working with
plenty of free time, yes, I could concoct the perfect elegant transform every
single time, but I don't have that kind of luxury.

Haskellers who advocate pausing a bit and rethinking the problem, solving a
different problem instead, or expressing the computation recursively with
function transforms etc. to suit their language are perfectly honest and
right - it's just that I don't have the fucking time to do those things. So
yeah, the language is great, but I don't get paid enough to rethink every
dirty thing I do with these dirty imperative languages so that it becomes a
neat elegant exercise I can marvel at. There is honestly not enough time in
the world or money to compensate for the amount of time it would take to
rework all the dirty mutable spaghetti that constitutes the majority of
day-to-day work. We all try, but outside of textbook examples and a few
select domains, nice fp doesn't scale. It's not a language issue. It's just
the times we live in.

It's all quite sad, because when I was in school, I actually led a talk using
John Hughes' famous fp advocacy paper, where I expanded his examples into
finmath, but once you work in industry and see the frantic pace and general
messiness of the code base, you realize even the second coming of Jesus Christ
won't get us anywhere.

~~~
beaconstudios
I think you're missing the point. The reason for functional programming isn't
just that it looks pretty - immutable data makes state conflicts and shared
data problems more explicit and forces you to handle them, function
composition encourages designing small composable units that make coding
faster as you go, that sort of thing. Functional programming tries to force
you onto the more long-term-beneficial line on this famous graph:
[https://martinfowler.com/bliki/images/designStaminaGraph.gif](https://martinfowler.com/bliki/images/designStaminaGraph.gif)

The ironic thing about the example you linked is that if you're trying to
implement bubble sort and Haskell pushes you into quicksort, that's a _good
thing_, because bubble sort is objectively worse. The oldest excuse in the
book for bad design is that one does not have time, and while that is often
true, it's also true that good design actually saves you time in the long
run. Awful spaghetti code is the fastest code to write, but in a year's time,
when it takes a month to get a 2-point story completed because of all the
hacks and //TODOs and workarounds, you'll wish you had written it right the
first time.

~~~
jordigh
I regretted using bubblesort as an example because the O(n^2) performance is
not relevant to my point. Let's consider a Knuth-Fisher-Yates shuffle instead.
Pretty easy to describe by mutating an array, and I don't think it's _wrong_
or pigheaded to implement it that way or that I'm just being too stubborn to
see how great it is to implement it in a Haskell way.
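For what it's worth, a Fisher-Yates shuffle does translate fairly directly into Haskell's mutable-array vocabulary. Here is a sketch using STArray; it assumes the `random` package for the generator, and the names (`shuffle`, `go`) are mine, not from the thread:

```haskell
{-# LANGUAGE ScopedTypeVariables #-}
import Control.Monad (forM_)
import Control.Monad.ST (ST, runST)
import Data.Array.ST (STArray, newListArray, readArray, writeArray, getElems)
import Data.STRef (newSTRef, readSTRef, writeSTRef)
import System.Random (StdGen, mkStdGen, randomR)

-- In-place Fisher-Yates: mutate freely inside runST, thread the RNG
-- through an STRef, and return a pure list at the end.
shuffle :: forall a. StdGen -> [a] -> [a]
shuffle gen0 xs = runST go
  where
    n = length xs

    go :: forall s. ST s [a]
    go = do
      arr  <- newListArray (0, n - 1) xs :: ST s (STArray s Int a)
      gRef <- newSTRef gen0
      forM_ [n - 1, n - 2 .. 1] $ \i -> do
        g <- readSTRef gRef
        let (j, g') = randomR (0, i) g  -- uniform index in 0..i
        writeSTRef gRef g'
        vi <- readArray arr i
        vj <- readArray arr j
        writeArray arr i vj             -- swap positions i and j
        writeArray arr j vi
      getElems arr
```

The mutation is as direct as the imperative description; the only extra ceremony is threading the generator, which an imperative language hides in global state.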

~~~
beaconstudios
I think the important point is that functional programming languages push you
to immutability because it's a more sensible default. You can still write
mutable code but the fact that code is always mutable in other languages can
cause problems where none should exist.

------
nihils
If you want immutable data structures that simulate the behavior of mutable
data structures look into Zippers. In this case, a list zipper.

~~~
nihils
For example, this problem of modifying the diagonal of a matrix can be solved
quite easily:

    
    
      type Zipper a = ([a], [a])

      cursorOnDiagonal :: [[Int]] -> [Zipper Int]
      cursorOnDiagonal matrix = map (\(n, x) -> splitAt n x) (zip [0 .. length matrix - 1] matrix)

      flipToOneAtCursor :: [Zipper Int] -> [Zipper Int]
      flipToOneAtCursor = map (\(ys, _:xs) -> (ys, 1:xs))

      backToList :: [Zipper Int] -> [[Int]]
      backToList = map (\(ys, xs) -> ys ++ xs)
    

If matrix == [[0,0,0,0],[0,0,0,0],[0,0,0,0],[0,0,0,0]], then:

    
    
      backToList . flipToOneAtCursor . cursorOnDiagonal $ matrix == [[1,0,0,0],[0,1,0,0],[0,0,1,0],[0,0,0,1]]

------
mrcwinn
For those with access to another language, an alternate solution would be
something like:

a = 1; a = 2;

Kidding. Really interesting write-up!

