How do I modify a variable in Haskell? (michaelburge.us)
47 points by MichaelBurge on Aug 15, 2017 | 69 comments



Mr. Burge has written a nicely thought-out blog post showing an evolution of different ideas about how to write to an array in Haskell.

I don't want people trying to follow the evolution to get the impression that iterating and writing to an array is really hard in Haskell, so here is a complete Haskell program which

1. Initializes a 10x10 mutable array to 0.

2. Iterates in a for-loop from 0 to 9, setting the diagonal to 1.

3. Freezes the mutable array to an immutable array and prints it.

  module Main where
  
  import Foreign.C.Types (CInt)
  import Control.Monad (forM_)
  import Numeric.LinearAlgebra (toLists) -- http://hackage.haskell.org/package/hmatrix-0.18.1.0/docs/Numeric-LinearAlgebra-Data.html
  import Numeric.LinearAlgebra.Devel (runSTMatrix, newMatrix, writeMatrix) -- http://hackage.haskell.org/package/hmatrix-0.18.1.0/docs/Numeric-LinearAlgebra-Devel.html
  
  main = do
      putStrLn $ unlines $ fmap unwords $ fmap (fmap show) $ toLists aImmutable
    where
      aImmutable = runSTMatrix $ do
          a <- newMatrix (0::CInt) 10 10
          forM_ [0..9] $ \i -> writeMatrix a i i 1
          return a
Output:

  1 0 0 0 0 0 0 0 0 0
  0 1 0 0 0 0 0 0 0 0
  0 0 1 0 0 0 0 0 0 0
  0 0 0 1 0 0 0 0 0 0
  0 0 0 0 1 0 0 0 0 0
  0 0 0 0 0 1 0 0 0 0
  0 0 0 0 0 0 1 0 0 0
  0 0 0 0 0 0 0 1 0 0
  0 0 0 0 0 0 0 0 1 0
  0 0 0 0 0 0 0 0 0 1


I've tried to like Haskell, but when simple tasks start to look like complicated puzzles with many possible solutions, I get really put off. I want to get stuff done, not feel clever for managing to contort my algorithm into a different form whose runtime performance I'm uncertain of, all done merely in the name of avoiding a for loop.


It's a shame you feel that way. Modifying an element of an array in Haskell is easy and the blog post gives almost exactly correct code, but for some reason places it under the title "we run into trouble when we realize there’s no built-in mutating assignment". But we don't run into trouble! Mutable arrays are provided in Haskell! Haskell has mutation! Here's the code

    import Data.Array.MArray
    import Data.Array.IO
    
    size = 10
    
    (!) = readArray
    
    main :: IO ()
    main = do
      -- 1. Declare the array
      arr <- newArray ((1,1), (size,size)) undefined
      let _ = arr :: IOArray (Int,Int) Integer
    
      -- 2. Initialize the array to 0
      sequence_ $ do
        i <- [1..size]
        j <- [1..size]
        return $ writeArray arr (i, j) 0
    
      -- 3. Set the diagonal to 1
      sequence_ $ do
        i <- [1..size]
        return $ writeArray arr (i, i) 1
    
      -- 4. Print the array
      sequence_ $ do
        i <- [1..size]
        j <- [1..size]
        return $ do
          arr_i_j <- arr ! (i,j)
    
          putChar $ if arr_i_j == 0
                    then '0'
                    else '1'
          if j == size
            then putChar '\n'
            else return () 

    > main
    1000000000
    0100000000
    0010000000
    0001000000
    0000100000
    0000010000
    0000001000
    0000000100
    0000000010
    0000000001
I've no idea why the blog post is written in such a long-winded style. Mutation in Haskell is not difficult!


> I've no idea why the blog post is written in such a long-winded style. Mutation in Haskell is not difficult!

For a second there, I honestly thought you were being sarcastic. I'd consider the example you just gave to be extremely "long-winded" and "difficult" for what it does. If I ported that code line-for-line to Ruby, for example, the whole thing would look like this:

  size = 10

  # Initialize the array to 0
  arr = Array.new(size) { Array.new(size, 0) }

  # Set the diagonal to 1
  (0...size).each{|i| arr[i][i] = 1 }

  # Print the array
  puts arr.map{|row| row.join}.join("\n")


That's not really a line-for-line port. Or even block-for-block, since yours is never uninitialised. The original code was designed to basically mimic writing bad C in Perl, with an uninitialised array that you then explicitly looped over to initialise to zero, and then modified the diagonal. That sort of thing is difficult on purpose in Haskell.

On the other hand...

  import Data.Bool (bool)
  main = mapM_ (putStrLn . foldMap show) [bool 0 1 . (==x) <$> [1..size] | x <- [1..size]]
Or

  x .* y = replicate y x
  main = mapM_ (putStrLn . foldMap show) [0 .* x ++ 1 : 0 .* (size-x-1) | x <- [0..size-1]]
Or if you really do want mutation shorthand (because you're a crazy person) just build it:

  a .// l = mapM_ (uncurry $ writeArray a) l

  pickFn f g (b,v) | b = g v | otherwise = f v

  import Control.Arrow ((***))
  import Data.Array.IO

  main = do
    arr <- newArray ((1,1), (size,size)) 0
    let _ = arr :: IOArray (Int,Int) Int
    arr .// [((x,x),1)|x<-[1..size]]
    mapM_ (pickFn putStr putStrLn . (((==size).snd) *** show)) =<< getAssocs arr
It's not pretty, but it's not supposed to be. Do you know what is pretty? Doing things the right way.

  sudo apt-get install haskell-stack
  stack setup
  stack ghci --package matrix
  import Data.Matrix
  print $ identity 10
And of course there's similar for Ruby.


Pop quiz:

Do you genuinely believe I wrote the code the way that I did because it's the simplest way of initialising an identity matrix in Haskell?


FWIW. (Haskell doesn't have a convenient built-in function for printing arrays. Guilty as charged!)

    import Control.Monad (forM_, when)
    import Data.Array.IO

    size = 10
    
    main = do
      -- Initialise the array to 0
      arr <- newArray ((1,1), (size, size)) 0
      let _ = arr :: IOArray (Int, Int) Int

      -- Set the diagonal to 1
      forM_ [1..size] $ \i -> writeArray arr (i,i) 1
    
      -- Print the array
      forM_ [1..size] $ \i -> do
        forM_ [1..size] $ \j -> do
          a <- readArray arr (i, j)
          putChar (if a == 0 then '0' else '1')
          when (j == size) (putChar '\n')


Thanks, that looks much nicer and more in line with what I'd expect from an example intended to demonstrate how easy a simple task is in a specific language.

As an aside, Ruby doesn't have a built-in function for printing arrays either (unless you count .inspect or .to_s, which let you print _anything_). The code I gave formatted the array as a string using some simple data manipulation functions before printing it; it was intended to match the output of the program you provided.


But IO (or ST) style is also unidiomatic and simply too cumbersome. You are just not going to write "writeArray arr (i,j) k" instead of "arr[i][j] = k", and more importantly,

    v <- readArray arr (i,j-1)
    w <- readArray arr (i-1,j)
    writeArray arr (i,j) (v+w)
instead of

    arr[i][j] = arr[i][j-1] + arr[i-1][j];
No way you're doing that over and over again for any substantial performance-oriented code (where you absolutely need mutable arrays).


I don't think IO is unidiomatic at all, although 7 years ago I might have agreed. First, if we really need to, we can write Haskell that looks a lot like C. Secondly, if I find myself in a situation where I need to write a little block of code that just doesn't lend itself to being written in Haskell, I have no problem writing it in C. If I were implementing some numerical algorithm, this is what I would do. Indeed, Haskell bindings to things like BLAS do just this. The benefit of marking all side effects with IO is an easy-to-read codebase, easy refactoring, and of course Software Transactional Memory!
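
For the curious, here's a minimal sketch of that escape hatch via the FFI, with cos from the C math library standing in for a real BLAS routine (the c_cos binding name is mine):

    {-# LANGUAGE ForeignFunctionInterface #-}

    import Foreign.C.Types (CDouble)

    -- Bind the C library's cos; a stand-in for a real numerical kernel.
    foreign import ccall unsafe "math.h cos"
        c_cos :: CDouble -> CDouble

    main :: IO ()
    main = print (c_cos 0)  -- 1.0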


Doing this kind of effect sequencing in Haskell is annoying indeed. A radical library can mitigate that but that would be really unidiomatic.

Could be nice to have sugar like expr[!x] that expands to: x >>= \genName -> .. expr[genName] ..

So you could write: writeArr arr !(readArr ..) !(readArr ..)

Then if you add an operator to do the reading it becomes quite reasonable.


Well, you can add operators, though you almost certainly should think about more idiomatic solutions first. It's entirely possible to write operators to allow a statement like:

  arr .~ (i,j) .= arr ! (a,b) + arr ! (x,y)
You make orphan instances for Indexed and Num, and then .~ is just a slight tweak on the adjust function that Indexed provides and .= is literally a direct synonym for $ that just looks better in this context. This is actually more flexible than most languages, because now (arr .~ (i,j)) or (.~ (i,j)) can be named and applied to multiple things should that be desirable for some reason. Also note that this code is very weird, as arr is not an array, but an IO action describing how to produce one. Operations on it actually produce diffing instructions to be applied elsewhere. I have also not tested the performance.

These exact things are not in base because they are discouraged and not supposed to be easy. Note that the lens library provides operators more or less just like this for a wide variety of data types, in a safe and composable way.
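
For a concrete flavour of rolling your own shorthand, here's a much simpler sketch than the diffing scheme described above, over a plain IOArray; the (=:) and (!.) operators are invented for this example, not from base or lens:

  import Data.Array.IO (IOArray, getElems, newArray, readArray, writeArray)

  -- Hypothetical shorthands (not from any library):
  -- (=:) writes a value at an index, (!.) reads one back.
  (=:) :: IOArray (Int, Int) Int -> ((Int, Int), Int) -> IO ()
  arr =: (ix, v) = writeArray arr ix v

  (!.) :: IOArray (Int, Int) Int -> (Int, Int) -> IO Int
  (!.) = readArray

  main :: IO ()
  main = do
    arr <- newArray ((1, 1), (2, 2)) 0 :: IO (IOArray (Int, Int) Int)
    arr =: ((1, 1), 5)
    v <- arr !. (1, 1)
    arr =: ((2, 2), v + 1)   -- arr[2][2] = arr[1][1] + 1
    print =<< getElems arr   -- [5,0,0,6]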


Taking IO actions just to do the bind for the caller goes against the benefits you usually get from purity.


OK, I know this isn't nice but it's already part of the way there

    writeArray arr (i,j) =<< ((+) <$> readArray arr (i,j-1) <*> readArray arr (i-1,j))


This is not only not nice, it's just a bad idea (and I understand it!).


I think you may have misinterpreted my comment.


It's not unidiomatic at all, and if you need to do it, you'll do it.


{caveat: I am pretty sure that there are other ways to write the Haskell code at a higher level. I wrote this not as a rant about Haskell, but because I have been thinking about 'big versus small' languages, recently.}

For me, I want a big language, not more correct assembly or C. An identity matrix in Racket is three lines:

  #lang racket
  (require math/matrix)
  (identity-matrix 10)
Or more iteratively:

  #lang racket
  (require math/matrix)
  (diagonal-matrix 
    (build-list 10 (lambda (x) (values 1))))
Or more generically:

  #lang racket
  (require math/matrix)
  (build-matrix 10 10
    (lambda (x y)
      (if (= x y)
        (values 1)
        (values 0))))
The choice of three different abstractions, in fewer lines of code than the single Haskell abstraction. For all its type safety, the Haskell abstraction is down-and-dirty Von Neumann-style sequential memory access. Looping over the elements of a matrix is pretty far away from the useful abstractions of matrices.

To put it another way, i's and j's are good variable names for loops and bad variable names for matrices...If I have to iterate over elements of a matrix, I really want m's and n's. But that conflicts with the semantics of a conventional looping construct.


> caveat: I am pretty sure that there are other ways to write the Haskell code at a higher level. I wrote this not as a rant about Haskell, but because I have been thinking about 'big versus small' languages, recently

> The choice of three different abstractions in fewer lines of code than the single Haskell abstraction

These two parts of your comment don't seem to mesh :)


In 'big languages' I am thinking more about how much 'talking about' domain-specific topics looks like 'talking about' the topics covered in the core language. Or maybe about how easy it is to make 'talking about' domain topics look like 'talking about' core topics. Maybe it's the degree to which the reductionist process of dividing up code into files is typically reflected when writing code.

Some languages like Python default to explicitly referencing the module; that is, the happy easy path leads to writing "math.floor(x)," not "floor(x)." That's even after I write the mandatory "import math." C is probably close to one end of the spectrum, and at the other end of the spectrum are languages like Wolfram, where:

  GeoDistance
   [Entity["City", {"NewYork", "NewYork", "UnitedStates"}], 
    Entity["City", {"LosAngeles", "California", "UnitedStates"}]]
Racket is probably closer to Wolfram, and Haskell closer to C. I think a language ends up closer to the C end the more it tries to solve problems that programs don't have before they are written...e.g. speed and type errors. The Wolfram end tries to solve problems that programmers have before programs are written...e.g. what do I want to say.

Wishful thinking is often Abelson's starting point in the SICP lectures. That's a bit at odds with the philosophy of some languages.



You're probably right that the presentation here is misleading. Most common things like arrays, dictionaries, sets, and more don't really present any trouble in practice. You only use the techniques in my article when structuring your program more broadly, and arrays were just an example.

I'll make a note that for arrays specifically, someone took the trouble to make a mutable version that's usable in IO. Nobody would ever read and write references to an immutable version over using just MArray, but my post might mislead them into doing just that.


not that this is code golf, and not that the following is even mutation, but I couldn't help trying out the "for" syntax in Elixir, which this seemed like a good job for:

    one_or_zero = fn(x,y) -> if x==y, do: "1", else: "0" end
    for i <- 1..10 do
      for j <- 1..10 do
        one_or_zero.(i,j) |> IO.write
      end
      IO.write("\n")
    end


It depends on the task, of course, but I generally think Haskell is a very productive language for getting simple things done. Let me explain why.

There is indeed a side of the language and culture focused on elegance and there's also a side focused on pragmatism. Elegance and pragmatism aren't mutually exclusive but generally we use it as a type-safe getting things done language.

IMHO its strength is in the type system. Even if that means starting your project out entirely in the IO monad with a heavy let-binding style, you'll still benefit from the type system. You can then, over time, decouple things, test things, etc.

I hope you don't give up on it, because in the realm of pragmatic, unexciting, just-trying-to-do-my-job software, Haskell really shines. It's just hard to see sometimes, because not many production users are talking about that kind of use; you're usually seeing the library author's view, which is going to be very different.

[Edit] I also think Rust is very promising for occasions in which you need the low-level performance. If I can though, I will use Haskell, because its type system does such an amazing job at protecting me from me and six-months-ago me.


> when simple tasks start to look like complicated puzzles with many possible solutions, I get really put off.

Doesn't every language do this? Often I spend more time trying to decipher a C program than I do actually writing C code. With all of the variables, pointers, and mutation happening it can feel like a complicated puzzle and I get really put off.

What often happens is that the Haskell you read about online is far more theoretical than what you actually need to get practical work done using the language. It's often called the "Haskell Pyramid" or some such.

The only way I know of to "learn to like Haskell" is to not try but simply do it. Start from the beginning. Think of yourself like an empty jar. Put your mind back into the place where you first started programming and go from there. It will be frustrating at first, re-learning many concepts you feel that you know, but it's the best way IMO.

I had to do this in martial arts once... unlearn all the ways my body was used to moving and simply accept that I was going to start over and learn a completely new way. From the beginning. No excuses. No short-cuts.

> whose runtime performance I'm uncertain of

Reasoning about the run-time performance of Haskell programs is much trickier, in my experience, than in a strictly-evaluated language.

At least at first.

Like most things in Haskell, once you are comfortable with the fundamentals, you learn how the run-time works and ways to write strictly-evaluated code when performance is a concern.


I think you have to embrace it to appreciate it; Haskell's advantages wouldn't be there without its disadvantages. (Indeed I suspect I'd never have been able to appreciate Haskell if I hadn't learned Scala first, because Scala gave me a path from traditional imperative/OO code to Haskell-like code that doesn't exist in Haskell itself - but now I find myself wishing for a Scala without those things that I at one point needed). Stop worrying about the runtime performance, stop trying to write [previous programming language] in Haskell. Write expressions for the values that you want, then see if the performance is good enough for your use cases (and if not, have a quick go at profiling for common pitfalls rather than immediately jumping to writing it the imperative way); you might be pleasantly surprised (or you might find Haskell is genuinely unsuitable for your use case, though that's pretty rare IME).


If you want to shovel bytes around in memory (or more likely, the memory model provided by your OS) that's your prerogative, but there's nothing necessarily clever or contorted about relegating the responsibility of memory-management to your dependencies (including the compiler). Avoiding for loops is mainstream today, with your stream APIs, map/reduce, LINQ and what have you.

Loops are (in the general case) hard to reason about and provide few run-time guarantees. Predicate transformers with recurrence relations are mostly intractable and a giant PITA. Variable assignments (or register overwrites) are hard to reason about, and loops (well... single entry/exit in lieu of goto, but whatever) were introduced to make it easier on the mind's eye to follow the evaluation of programs, and still they fail regularly.

Looping is not a simple task: it has real externalities wrt. program correctness (however you define it), and programming languages are providing more and more tools to avoid loops and provide more run-time guarantees.
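
To make the loop-avoidance concrete, here's an illustrative sketch (the sumSquares example is invented for illustration): the classic accumulator loop becomes a map plus a strict fold, with no assignments to reason about:

  import Data.List (foldl')

  -- Sum of squares with no explicit loop or mutable accumulator:
  -- traversal and accumulation are delegated to map and foldl'.
  sumSquares :: Int -> Int
  sumSquares n = foldl' (+) 0 (map (^ 2) [1 .. n])

  main :: IO ()
  main = print (sumSquares 10)  -- 385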


Thanks for reading!

The idea behind this article is that somebody who asks the question is probably confused about a lot of the core concepts in Haskell. So I write the same code in 20 different ways so that they can use their understanding of earlier concepts to understand later ones. Which hopefully makes them less confused.

That's a different goal than selling people on Haskell, or giving them the best solution and moving on. There is definitely a learning curve before you can hack out code without thinking too much about it.


I wish there were a canonical, common, simple, straightforward, and nearly unambiguous and universal way to modify an element in an array.

In many languages: arr[idx] = newvalue

In Haskell: a blog post or two, telling me how great all the new ways are because of the type system.

I know I'm sounding very contrarian, but it really is a frustration I have with the language. I've heard several times that once you manage to use it a lot you find these unambiguous universal methods. I just feel like getting there first requires digesting all of these blog posts.


>In many languages: arr[idx] = newvalue

The thing you have to keep in mind is that this kind of naked mutation is probably the main source of bugs in programming today. One of the main advantages of OO was that it demarcated which functions could modify a "global" variable. But if I get a weird value in one of my fields, I still can't trivially tell how it got there; I can only narrow it down to methods of the class (and possibly the inheritance tree, depending on variable visibility).

Due to that, there is more ceremony in modifying things in Haskell. The language works best if you write most of your code in a way that doesn't mutate anything and then limit mutation to a small area using the advanced techniques developed in the language. But doing all this requires a pretty substantial investment, so you probably need something to convince you that the end will be worth it before you start. I don't know what to tell you; what got me interested was something like:

> fibs = 1 : 1 : zipWith (+) fibs (tail fibs)

and

> minimum = head . sort

but I understand that's not going to motivate most people to completely change how they approach programming as a discipline.


arr // [(idx,newvalue)]

(//) is a function that takes an array and a list of pairs of indexes and values and returns an array with those indexes set to those values.


This is reinforcing my impression of Haskell's idiomatic aesthetic as "really bad Perl".


That's a shame, since once you understand it, it really is beautiful.


    Prelude> let x = [1, 2, 3]
    Prelude> x
    [1,2,3]
    Prelude> x // [(2,42)]

    <interactive>:5:3:
        Not in scope: `//'
        Perhaps you meant one of these:
          `/' (imported from Prelude), `/=' (imported from Prelude)
So, how canonical is this if it's not in Prelude?


The Prelude isn't everything in Haskell. Just as you need to import sys or import re in Python to get at less commonly used things, here too you have to do the same.

    λ :m Data.Array
    λ let x = array (1,3) [(1,1), (2,2), (3,3)]
    x :: (Num e, Num i, Ix i) => Array i e
    λ x
    array (1,3) [(1,1),(2,2),(3,3)]
    it :: (Ix i, Num i, Num e) => Array i e
    λ x // [(2,42)]
    array (1,3) [(1,1),(2,42),(3,3)]
    it :: (Num i, Num e, Ix i) => Array i e
    λ x
    array (1,3) [(1,1),(2,2),(3,3)]
    it :: (Ix i, Num i, Num e) => Array i e
Note, however, that this isn't mutating x directly; to do mutation would require State or a similar monad.
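
(If you did want the mutating version of that x // [(2,42)] update, here is a minimal sketch via ST; runSTArray runs the mutation locally and hands back an immutable Array. The bumped name is just for the example.)

    import Data.Array (Array, elems)
    import Data.Array.ST (newListArray, runSTArray, writeArray)

    -- Start from [1,2,3], overwrite index 2 in place, get a pure Array back.
    bumped :: Array Int Int
    bumped = runSTArray $ do
        a <- newListArray (1, 3) [1, 2, 3]
        writeArray a 2 42
        return a

    main :: IO ()
    main = print (elems bumped)  -- [1,42,3]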


Further to the other answers given, `[1, 2, 3]` is not an array, but rather a list.


The Prelude is just what's visible in the default namespace; it's not the whole language.

Data.Array and // are also described in the Haskell 2010 report

https://www.haskell.org/onlinereport/haskell2010/haskellch14...


I understand the source of your question -- not all languages provide arrays in the included-by-default namespace. Python for example requires you to import an array module. These are part of the standard library in both cases.

This makes it a little harder to get started, but considering the kinds of things programmers put up with on a daily basis, it's extremely mild.


I assumed there must be something like this, but that notation is a pain.

What's wrong with something like "set arr idx newvalue"?


> What's wrong with something like "set arr idx newvalue"?

Nothing. See here: https://news.ycombinator.com/item?id=15017853

It's called "writeArray".


I ride a horse. I've tried driving a car, but it seems like a complicated puzzle with many buttons compared to a horse.

You have trained your brain to think about programming in a certain way. Haskell does things largely differently, built on a set of completely different fundamentals and abstractions. For most of my non-CS friends taking a CS class in college, using a loop or basic recursion was a daunting task.

New ideas and abstractions require concentrated effort. I find it easy to forget the harder concepts I needed to grasp as I was learning programming (back when I was 11-12), but it's quite an important thing to remember while trying to understand something as different as Haskell.


I always suggest people learn Common Lisp or OCaml, since they're easier to get into (OCaml is the easiest IMO), if you want to get your head around the basic ideas of functional programming.


I recommend something similar ... Prof. Dan Grossman's Programming Languages course in Coursera, taught using Standard ML: https://www.coursera.org/learn/programming-languages

He starts by teaching statically typed functional programming with an emphasis on understanding semantics and idioms; then in the following modules he covers Racket and Ruby to compare and contrast dynamically typed homoiconic programming and object-oriented programming.


I've had a ball with Elixir as my first functional lang, FWIW.

Took a few seconds to bang out an immutable solution to the original problem: https://news.ycombinator.com/item?id=15019685

Of course, there's no mutation in Elixir (or the BEAM VM)... At all.


Do you recommend F#?


F# is an ML-family language, so it should go well with the "OCaml" recommendation.

Another vote for Common Lisp here. You can do functional programming but you can also "step outside" and do OOP, imperative (and other paradigms) if you need.


isn't the whole point that you write code completely differently in functional languages? rather than writing loop code to populate an array, you'd do something like the following (in JS as I'm not a haskell programmer):

    const row = (len, onPositions) => {
        return new Array(len).fill(0).map((item, index) => onPositions.includes(index) ? 1 : 0);
    };
    
    const matrixWithDiagonal = (len) => {
        return new Array(len).fill(0).map((r, index) => row(len, [index]));
    };
the point being that the code generates the result directly, rather than making space for values and then looping through, toggling the positions that should be set to on.

note: you wouldn't really use the new Array() construct generally speaking (it creates a sparse array, which .map skips over, hence the .fill(0)), but it's easier than including underscore or ramda in the example.


> completely differently

Not completely. Mutating state and I/O are done in a more imperative style via monads.

> the whole point

Depends on who you are I guess. The big wins for me in Haskell are the type system and purity by default. Of course I absolutely love the composability that functional programming offers, but there are other functional programming languages. Haskell I like for those reasons in particular. Disclaimer: I'm still a newbie when it comes to fp and Haskell.


to be clear, by "the whole point" I didn't mean "the primary benefit". Just that as a very fundamental point, you have to write code very differently for that code to be functional.


I realized that after I read your reply to the other comment. I certainly understand that sentiment.


The original example is just to try to find something where you need to modify an array. Sure, you can create identity matrices without modifying an array, but that's just changing the parameters of the problem. If you really do need to change an array, how do you do it in Haskell?

Your solution is the typical thing you do in Haskell. You might know how to perform an algorithm by modifying an array, but now you suddenly have your hands tied and you can't do that anymore. Now you have to come up with an alternative way.

It's like someone really wants to implement bubblesort in Haskell but instead the solution ends up being to implement quicksort because the bubbling is difficult or unusual in Haskell.


In Haskell you usually don't want to modify stuff in memory. That's a manual optimization which is beneath the level of abstraction for typical use. Instead, you write high level code describing how to generate new arrays (or vectors or matrices or what have you) from inputs and let the underlying implementation of the library and compiler deal with optimization.

For example, the Haskell vector package [0] implements a ton of optimizations for looping, including stream fusion. Instead of manually trying to loop over an array the way you would in C, Haskell encourages you to use a library like vector to hide those low-level details and let you focus on only the task you want to accomplish.

This method is extremely powerful because the underlying implementation has been completely hidden. This lets you swap out the underlying implementation often with a single line change to an import statement.

[0] http://hackage.haskell.org/package/vector
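
As a sketch of that style (using Data.Vector boxed vectors for brevity; the identity name is mine), you describe every element and let the library handle allocation and looping:

  import qualified Data.Vector as V

  -- Describe each element instead of mutating one in place; vector's
  -- fusion machinery takes care of the actual allocation and loops.
  identity :: Int -> V.Vector (V.Vector Int)
  identity n = V.generate n (\i -> V.generate n (\j -> if i == j then 1 else 0))

  main :: IO ()
  main = mapM_ (putStrLn . unwords . map show . V.toList) (V.toList (identity 10))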


a fair point, but I think it would at least be worth mentioning that if you're trying to modify a variable in Haskell you need to step back and figure out if what you're doing makes sense.


I've seen C programmers starting C# get very agitated that they can't "just read the underlying bytes" or whatever. Using low-level techniques from another language doesn't always make sense when learning a new language; often you need to take a step back and find the idiomatic solution to what you were actually trying to do.


It's incredibly rare that the problem you are trying to solve requires that you alloc an array and modify values in place, instead of something else that gives you the same result. Haskell is normally not suited for those cases anyway, because of stop-the-world garbage collection and hard to predict performance.

But, if you want to do that anyway, there's IOArray. You will just have to program everything on IO, and will get an executable that inherits the ordering of your code like any imperative language. Yet, way more often than not, people who think they have that problem just want to write C/Java/whatever code in Haskell and have no good reason for dictating every detail of the machine behavior.


> You will just have to program everything on IO

Sort of, but I think this misleads a little bit.

You will have to program your manipulation of the array in IO, and thus anything using the actions manipulating the array. Intermediate computations could be outside IO.

And of course there's also ST.


> If you really do need to change an array, how do you do it in Haskell?

https://news.ycombinator.com/item?id=15017853


> If you really do need to change an array, how do you do it in Haskell? ...It's like someone really wants to implement bubblesort in Haskell but instead the solution ends up being to implement quicksort because the bubbling is difficult.

You nailed it! As a math major myself, I completely empathize with this pov. The vast majority of languages present themselves as minor tools that can be used to solve your problem. The goal is to solve your problem, not figure out the tool. If I want to decompose a matrix via cholesky or schur or QR or LU or what have you, I just want to be able to set arrays willy-nilly without thinking so much about how to accomplish the same thing without setting the array. The Haskell way of "let's not really set the array, it's not what you want", which a bunch of comments have expressed here, focuses the attention on the tool, not my problem at hand. I honestly don't give 2 shits about your tool, whether you call it C or python or octave. I care about my problem, not your tool. So your tool should just stand aside, let me do my thing & not complain.

That's exactly why the vast majority of applied researchers in math stick to matlab/octave, or ml researchers to python, or vision/robotics guys to c++, or insert your speciality here... In each case, the tool is one tiny minor part of the problem solving process. As a quant, I would worry about convergence of pde's and getting the term structure right - yes, the whole thing is in c++, but I don't spend days and nights thinking about what c++ wants, I just care about the math. With Haskell, I do have to stop what I am doing and focus on the tool. The language says: let's think about a nice function composition that elegantly yields this transform without dirty mutation. I know such a transform exists, but looking for it is not my day job.

Most applied math & applied ml work is seriously messy and highly imperative, for which such elegance is a mismatch, because of time constraints imposed by capitalism. If we were all working with plenty of free time, yes, I could concoct the perfect elegant transform every single time, but I don't have that kind of luxury. Haskellers who advocate pausing a bit and rethinking the problem, solving a different problem instead, or expressing the computation recursively with function transforms etc. to suit their language are perfectly honest and right - it's just that I don't have the fucking time to do those things. So yeah, the language is great, but I don't get paid enough to rethink every dirty thing I do with these dirty imperative languages so that it becomes a neat elegant exercise I can marvel at. There is honestly not enough time in the world or money to compensate for the amount of time it would take to rework all the dirty mutable spaghetti that constitutes the majority of day to day work. We all try, but outside of textbook examples and a few select domains, nice fp doesn't scale. It's not a language issue. It's just the times we live in.

It's all quite sad, because when I was in school, I actually led a talk using John Hughes' famous fp advocacy paper, where I expanded his examples into finmath, but once you work in industry and see the frantic pace and general messiness of the code base, you realize even the second coming of Jesus Christ won't get us anywhere.


I think you're missing the point. The reason for functional programming isn't just that it looks pretty - immutable data makes state conflicts and shared data problems more explicit and forces you to handle them, function composition encourages designing small composable units that make coding faster as you go, that sort of thing. Functional programming tries to force you onto the more long-term-beneficial line on this famous graph: https://martinfowler.com/bliki/images/designStaminaGraph.gif

The ironic thing about the example you linked is that if you're trying to implement bubble sort and Haskell pushes you into quick sort, that's a good thing because bubble sort is objectively incorrect. The oldest excuse in the book for bad design is that one does not have time, and while it is often objectively true, it's also true that good design actually saves you time in the long run. Awful spaghetti code is the fastest code to write but in a year's time when it takes a month to get a 2-point story completed because of all the hacks and //TODOs and workarounds you'll wish you wrote it right the first time.


I regretted using bubblesort as an example because the O(n^2) performance is not relevant to my point. Let's consider a Knuth-Fisher-Yates shuffle instead. Pretty easy to describe by mutating an array, and I don't think it's wrong or pigheaded to implement it that way or that I'm just being too stubborn to see how great it is to implement it in a Haskell way.
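
For reference, a sketch of what that shuffle can look like in Haskell when you do just mutate an array, using STArray (the shuffle and thaw names and the mkStdGen seed are illustrative, not from the thread):

  import Control.Monad (forM_)
  import Control.Monad.ST (ST, runST)
  import Data.Array.ST (STArray, getElems, newListArray, readArray, writeArray)
  import Data.STRef (newSTRef, readSTRef, writeSTRef)
  import System.Random (StdGen, mkStdGen, randomR)

  -- Knuth/Fisher-Yates: walk i downward, swapping arr[i] with arr[j]
  -- for a uniformly chosen j from [0..i]. Mutation stays local to ST.
  shuffle :: StdGen -> [a] -> [a]
  shuffle gen0 xs = runST $ do
      arr <- thaw xs
      gen <- newSTRef gen0
      forM_ [length xs - 1, length xs - 2 .. 1] $ \i -> do
          g <- readSTRef gen
          let (j, g') = randomR (0, i) g
          writeSTRef gen g'
          vi <- readArray arr i
          vj <- readArray arr j
          writeArray arr i vj
          writeArray arr j vi
      getElems arr
    where
      thaw :: [b] -> ST s (STArray s Int b)
      thaw ys = newListArray (0, length ys - 1) ys

  main :: IO ()
  main = print (shuffle (mkStdGen 2017) [1 .. 10 :: Int])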


I think the important point is that functional programming languages push you to immutability because it's a more sensible default. You can still write mutable code but the fact that code is always mutable in other languages can cause problems where none should exist.


> but I don't spend days and nights thinking about what does c++ want

You have a magical talent as a C++ programmer then.


> isn't the whole point that you write code completely differently in functional languages?

That's certainly the conventional wisdom, but I think it's becoming less and less the case: as we discover what good styles to program in are, imperative and functional styles are ending up converging somewhat.


I think it's more that people are realising that an imperative syntax on a functional language is both accessible and benefits from functional design. See elm, the functional aspects of modern JS, and probably some other examples that don't come to mind right now.


And functional constructs are appearing in imperative languages: lambdas, iterators, etc.


iterators have been around for a long time and certainly aren't a functional construct. Lambdas are just syntactic sugar for functions in most cases.


If you want immutable data structures that simulate the behavior of mutable data structures, look into Zippers. In this case, a list zipper.


For example, this problem of modifying the diagonal of a matrix can be solved quite easily:

  type Zipper a = ([a],[a])
  
  cursorOnDiagonal :: [[Int]] -> [Zipper Int]
  cursorOnDiagonal matrix = map (\(n,x) -> splitAt n x) (zip [0..(length matrix)-1] matrix) 

  flipToOneAtCursor :: [Zipper Int] -> [Zipper Int]
  flipToOneAtCursor = map (\(ys,x:xs) -> (ys,1:xs))   

  backToList :: [Zipper Int] -> [[Int]] 
  backToList = map (\(ys,xs) -> ys ++ xs)
If matrix == [[0,0,0,0],[0,0,0,0],[0,0,0,0],[0,0,0,0]], then:

  backToList . flipToOneAtCursor . cursorOnDiagonal $ matrix == [[1,0,0,0],[0,1,0,0],[0,0,1,0],[0,0,0,1]]


For those with access to another language, an alternate solution would be something like:

a = 1; a = 2;

Kidding. Really interesting write-up!



