
Nim also has an experimental feature for `a+b * c` to parse as `(a + b) * c`, and for `a -b` to be parsed as `a(-b)`, because of the whitespace. The documentation itself gives an example of `echo (a,b)` getting parsed as `echo((a,b))`.
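To make the whitespace sensitivity concrete, a small sketch (hypothetical proc `f`; I'm going from the manual's whitespace rules, so treat this as my reading of the docs rather than gospel):

    proc f(x: int): int = -x

    let a = 1
    let b = 2

    echo a - b    # symmetric spacing: ordinary subtraction, prints -1
    let r = f -b  # space before `-` but not after: unary minus, parsed as f(-b)
    echo r        # 2
    echo (a, b)   # space before `(`: parsed as echo((a, b)), prints the tuple (1, 2)
    echo(a, b)    # no space: two separate arguments, prints 12

(The `a+b * c` grouping is the separate, experimental 'strongSpaces' switch, so I've left it out of the sketch.)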

Nim also allows passing by non-const reference without any indication of such at the call site. Also it has semantic indentation, which is cute and clean-looking but more effort to safely edit than the popular alternatives.
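For the first point, a minimal sketch (hypothetical `bump` proc; this is just Nim's documented `var` parameter passing):

    # `var int` means x is passed by mutable reference.
    proc bump(x: var int) =
      x += 1

    var n = 41
    bump(n)   # nothing at the call site hints that n may be mutated
    echo n    # 42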

Everything I see in Nim is designed around being clever. Even in the documentation they have clever extensions to BNF syntax that save such precious characters.




> and for `a -b` to be parsed as `a(-b)`

It almost looks like you're suggesting that `a -b` will be parsed as `a*(-b)`. Let me clarify: that is not the case. With the 'strongSpaces' feature, `a -b` results in a compile-time error. Without that feature, it works as expected (`a-b`).

> Nim also allows passing by non-const reference without any indication of such at the call site.

Why is this a problem?

> Also it has semantic indentation, which is cute and clean-looking but more effort to safely edit than the popular alternatives.

Could you give an example of how exactly semantic indentation makes the language less safe? I have been using Nim (and Python) for years and have not found this to be the case.


> With the 'strongSpaces' feature, `a -b` results in a compile-time error.

Unless there's a function named `a`? It doesn't matter; you're cherry-picking your response.

> Why is this a problem?

Go ahead and tell us what benefits and detriments you're already aware of, having thought about the question, and I'll tell you if you've missed anything. Edit: I recommend looking at why C++ has non-const reference parameters and how C# handles the problem, and considering whether the benefits of this feature could be had without its detriments.

> Could you give an example of how exactly semantic indentation makes the language less safe?

I said that it was more effort to safely edit. Your question drops the specificity that we're talking about making edits to the code, and also that we're talking about an effort/safety trade-off. Also, it's specifically the kind of semantic indentation found in Python, not (to such a degree) the kind typically found in Haskell. It's easy to find examples: almost every edit that moves code around carries a higher cognitive load than the equivalent edit in a language where blocks are explicitly delimited.


> Also, it's specifically the kind of semantic indentation found in Python, not (to such a degree) the kind typically found in Haskell. It's easy to find examples: almost every edit that moves code around carries a higher cognitive load than the equivalent edit in a language where blocks are explicitly delimited.

I disagree. Moving code around in Java or C++, pure brace languages, carries at least as much cognitive load as it does in Python, or in most languages. Are the referenced variables defined in the new scope? Do break/continue still function as expected? Are exceptions properly handled and propagated?

The indentation is usually the least of one's worries when moving code around; and at the very least, Python (and I assume Nim, though I hardly have any experience with it) guarantees that the visual and logical code hierarchies match. The fact that they might not is a constant cognitive load in curly-brace languages (and a source of bugs if you ignore it).


> guarantees that the visual and logical code hierarchies match. The fact that they might not is a constant cognitive load in curly-brace languages (and a source of bugs if you ignore it).

Well, it's a cognitive load if you deal with it manually and a source of bugs if you ignore it, but if you use an automatic indenting program, that solves it once and for all, along with a few other problems.


The same is true of Python (and, I assume, Nim).


> Go ahead and tell us what benefits and detriments you're already aware of, having thought about the question, and I'll tell you if you've missed anything.

Just a second, let me put on my Eiffel/Ada hat for this question.

The contract between a caller and callee is a potentially complex thing; it specifies not only which arguments can be mutated (if at all), but also how their abstract state before and after relate to each other, and what other visible side effects may be produced. I refer you to the Eiffel and Ada language specifications for possible ways to specify such contracts, invariants, and pre- and postconditions. Common convention is that functions that return a value do not have visible side effects at all (the Ada '83 rationale specifically found that disallowing all side effects at the language level was too limiting). See: Command-query separation. Adhering to CQS, queries do not mutate arguments, and commands are expected to; hence, you need to know the specific contracts for commands and how they affect their arguments, but can rely on query arguments not changing.
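In Nim terms, the convention might look something like this (a hypothetical Stack type; just a sketch of CQS, not a claim about any particular library):

    type Stack = object
      items: seq[int]

    # Query: returns a value and does not mutate its argument.
    proc depth(s: Stack): int = s.items.len

    # Command: mutates its argument and returns nothing.
    proc push(s: var Stack, x: int) =
      s.items.add(x)

    var st = Stack()
    st.push(3)
    echo st.depth   # 1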

More generally, mutability of arguments is just one aspect of a contract; importantly, in order to be able to optimize or change the internal representation of an object, we may desire the option of mutating the concrete state even where the abstract state remains immutable (e.g. splay trees, self-organizing lists, arguments that have internal caches, etc.). Languages that force knowledge of the concrete state's immutability at the call site can thus violate principles of modularity, in that they limit how the implementation may change.

Call-site declarations that cover only the mutability of the variable's location (as with out/ref parameters in C#) are of even more limited value; while they guard against some clerical errors, it is not clear that this protection justifies the additional syntactic footprint.

Back to Nim: The simple solution to ensure that a variable is not modified by the callee, if you desire that, is to use a "let" rather than a "var" declaration. In fact, using "let" for values that you do not intend to change further, and "var" only for variables that are expected to be mutated over the course of a procedure, is already idiomatic Nim [1] and should guard against the majority of clerical errors of this kind.

[1] https://github.com/Araq/Nim/wiki/Style-Guide-for-Nim-Code
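A minimal sketch of that guard (hypothetical `bump` proc; just the standard `var`-parameter rules):

    proc bump(x: var int) = x += 1

    var a = 1
    let b = 1

    bump(a)   # fine: `a` is a mutable var
    bump(b)   # compile-time error: a "let" binding cannot be passed as a var parameter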


>> Nim also allows passing by non-const reference without any indication of such at the call site.

> Why is this a problem?

I have no experience with Nim. In C++, I've heard people argue that this is poor language design because at the call site, you cannot see that the variable is potentially being mutated; making it explicit that a mutable reference is being passed allows the reader of the call site to understand what mutations might occur without needing to know the signature of every function involved.

I've been learning Rust, which requires notation at the call site that you're passing a mutable reference, and thus far, I quite prefer it; I had been concerned that it might be "too much typing", but it's proven to be rare and unobtrusive.


> I've been learning Rust, which requires notation at the call site that you're passing a mutable reference, and thus far, I quite prefer it; I had been concerned that it might be "too much typing", but it's proven to be rare and unobtrusive.

I'm so glad Rust made this design choice, because it means that a programmer can see very quickly where borrows are happening and understand where new borrows can and cannot be inserted.

Take this example:

    struct Widget {
        name: String
    }
    
    fn jabberwock_the_widget(w: &mut Widget) {
        w.name.push_str(" (modified)");
    }
    
    fn main() {
        let mut jimbo = Widget {
            name: "Jimbo".to_string()
        };
        let name = &jimbo.name;            // immutable borrow of the name
        jabberwock_the_widget(&mut jimbo); // error: cannot borrow `jimbo` as
                                           // mutable because it is also
                                           // borrowed as immutable
        println!("{}", name);              // keeps the immutable borrow alive
    }
If that call were just `jabberwock_the_widget(jimbo)`, and I didn't have `jabberwock_the_widget`'s signature at hand, I would have to run the code through a compiler to see that there is an illegal mutable borrow on `jimbo`. With mutable borrows written so explicitly, it's clear at the call site that one can't happen while the existing borrow on the widget's name is alive.


This has pretty limited scope, though. If a function you call returns an `&mut`, you have no way to tell, thanks to type inference, and when you use that value it looks like you're passing by value.

Similarly, if you call a method, there's no indication at all of how `self` is being passed.


> If a function you call returns an `&mut`, you have no way to tell

Indeed, but also consider that a function can't return a `&mut` unless a `&mut` was passed into it somehow. This means that the only way to implicitly introduce a `&mut` is via a method call like `foo.bar()` that takes `&mut self`; but note that it would be impossible to call such a method unless `foo` were explicitly declared as mutable to begin with. Given that mutable variables in Rust are relatively rare, these factors conspire to broadcast any mutable references quite clearly.


I didn't get the feeling it was about being clever. Nim just seems like a lot of assorted ideas, stuff you might think up while writing in C++ or something and say "it'd be nice if...". There doesn't seem to be a grand overarching theme or design; it doesn't have the elegance some languages do.

But it seems that doesn't matter. Some people really like it, and perhaps some interesting new feature will come out of it and benefit the world.


It's a GetShitDone language.


Or a gets_hit_done one.




