Hacker News

But it seems like it should be relatively simple for the compiler to convert your multiple-assignment code to static single assignment form at compile time, with little trouble. If there is a good reason not to do this, or if it is impossible or difficult, then I would understand single assignment. I just think the reason for it needs to be explained.
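For what it's worth, the renaming step of that conversion really is simple for straight-line code. Here's a minimal Python sketch (my own toy representation, not any real compiler's): each statement is a (target, operand-names) pair, and every write to a variable gets a fresh numbered version, exactly the x, X1, X2 dance the article complains about.

```python
# Minimal sketch of the renaming step an SSA pass performs, for
# straight-line code only. Real compilers also insert phi-nodes
# at control-flow joins, which is where the actual work is.
def to_ssa(stmts):
    version = {}   # current version number of each variable
    out = []
    for target, operand_vars in stmts:
        # read operands at their current versions (unseen vars start at 0)
        renamed = [f"{v}{version.get(v, 0)}" for v in operand_vars]
        # writing a variable bumps its version: x -> x1 -> x2 -> ...
        version[target] = version.get(target, 0) + 1
        out.append((f"{target}{version[target]}", renamed))
    return out

# x = f(x); x = g(x)  becomes  x1 = f(x0); x2 = g(x1)
print(to_ssa([("x", ["x"]), ("x", ["x"])]))
```

The hard part, and plausibly the reason Erlang pushes it onto the programmer instead, is handling branches and joins, not the renaming itself.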

Similarly, if single assignment aims to satisfy a different set of tradeoffs, and somehow grants some advantage that multiple assignment does not, then those advantages should be clearly enumerated.

In Lisp, for example, when new programmers complain about the parens, we usually explain that they allow code to be equivalent to data and make the code directly be a parse tree, which, among other advantages, makes macros far easier and cleaner.

I haven't seen any similar advantage expressed other than some hand waving about concurrency. If single assignment makes concurrency easier, how does it do so?




The reasoning that appeals to me most about single assignment is that it makes it easier to reason about which actions could have altered the state the currently considered code operates on. It makes explicit which code branches a particular expression depends on. Supposedly this simplifies refactoring and reasoning about code. This justification sidesteps the idea of facilitating concurrency; it's similar to the reasoning behind pure functional approaches, where side effects are avoided.

I've personally worked with code where single assignment would have avoided bugs that were introduced because later modifications did not completely account for earlier, possibly state-altering actions. On the other hand, this article shows places where enforcing single assignment on the programmer makes reasoning yet more complicated.

I'm not completely sure what my final opinion on the subject is. Side effects, state, and assignment can all be considered orthogonally and I'm not sure what the best way is to deal with them. Maybe Haskell has the answer with its Monads, but I haven't examined the language closely yet.

One thing I liked about Pascal is the difference between the := (assignment) and = (equality) operators. You also had Procedures with side effects and Functions without. Separating these concepts definitely added to the pedagogical utility of the language.

As I learn more about these concepts I find myself more explicitly managing assignment, state, and side-effects in my code. Even in imperative or OO workaday languages such as PHP or Java. I will attest that it does make a difference in the ease with which I can decouple and maintain an ordinary codebase.


I've also found that avoiding statement-order dependencies (like those introduced by multiple assignment) is good practice even in Python/JavaScript/Java/C. Why make it harder to reason about your program than it already is?

The example in the article could easily be solved if Erlang had a function composition operator, e.g. Haskell's

   baz . bar . fab . foo $ x
or Eve's

   x -> foo -> fab -> bar -> baz


Now I'm no Erlang master, but why not do:

  X1 = baz(bar(foo(X))).
Or, if you want to be fancy:

  X1 = lists:foldl(fun (F, A) -> F(A) end, X, [fun foo/1, fun bar/1, fun baz/1]).
Obviously the fun thing about the second way is that your code could build up an arbitrary list of one-argument functions to apply in succession. If you know ahead of time which functions you are going to use, the first way is much clearer.
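That foldl trick isn't Erlang-specific, by the way. Here's the same shape in Python with functools.reduce, using made-up foo/bar/baz stand-ins just to show the accumulator being threaded through:

```python
from functools import reduce

# Hypothetical one-argument stages standing in for foo/bar/baz.
def foo(x): return x + 1
def bar(x): return x * 2
def baz(x): return x - 3

def pipeline(funcs, x):
    # Same shape as lists:foldl(fun (F, A) -> F(A) end, X, Funcs):
    # apply each function to the running accumulator in turn.
    return reduce(lambda acc, f: f(acc), funcs, x)

print(pipeline([foo, bar, baz], 10))   # baz(bar(foo(10))) == 19
```

Inserting a new stage is then just inserting one element into the list, with no renaming anywhere.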


I have no idea. That's what I would've done.

The only thing I can think of is that the article's author doesn't want to have to move between the parentheses. His initial complaint was that adding a function 'fab' between foo and bar means you have to rename everything. If you chain the functions in one long expression, you instead need to navigate to the spot between bar and foo, type 'fab(', then add a matching closing parenthesis onto the end.

It also gets really ugly when foo, bar, and baz are instead line-long expressions, which they often are:

    X1 = big_long_expression_containing_some_stuff_and_baz(
              big_long_expression_containing_some_stuff_and_bar(
                     big_long_expression_containing_some_stuff_and_foo(X))).
While if you have a sane linebreaking policy (as both Haskell and Eve do), you can do:

   x1 = big_long_expression_containing_some_stuff_and_baz
      . big_long_expression_containing_some_stuff_and_bar
      . big_long_expression_containing_some_stuff_and_foo
      $ x


> It also gets really ugly when foo, bar, and baz are instead line-long expressions, which they often are

I sometimes use variables to "take a breather". You do some heavy, syntax- and/or meaning-laden work, put the result of all of it in 'current_velocity' or whatever, and that conceptually helps you move on to the next thing with the result of what you've just done firmly in mind. Single assignment doesn't prevent that, obviously, but functional code sometimes uses such variables less and just nests a bunch of functions. Those Haskell examples you posted seem like a nice idea, by the way.
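The two styles coexist fine, too: you can keep the "breather" names while still assigning each one exactly once. A contrived Python sketch (the function and its physics are made up for illustration):

```python
# Each intermediate gets a descriptive name that is assigned exactly
# once, so the reader gets a breather without any name being rebound.
def stopping_distance(speed_kmh, reaction_s, decel_ms2):
    speed_ms = speed_kmh / 3.6                       # convert once, name it
    reaction_distance = speed_ms * reaction_s        # travelled before braking
    braking_distance = speed_ms ** 2 / (2 * decel_ms2)
    return reaction_distance + braking_distance
```

This is really just single assignment by convention, which is the discipline the parent comments are describing for PHP/Java as well.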

http://journal.dedasys.com/articles/2007/01/17/clockwork-lan...





