I really want to start using Nimrod for real work.
His name is Andreas Rumpf and he is really impressive.
Since Mesa/Cedar (1978) there have been quite a few systems languages proposing that approach.
I see a thread where you made a similar statement, so in favor of not rehashing that whole discussion, here it is:
GCs are simply not appropriate for every use case.
Reference counting is not a panacea; once you start wanting to break cycles (which history tells us you will), you start having to deal with stopping the world or concurrent collection. If you don't have thread-safe GC, then you have to either copy all data between threads (which limits the concurrent algorithms you can use) or you lose memory safety.
Finally, your implicit claim (that Rust's safe memory management is more vulnerable to leaks than GC) is untrue: the compiler automatically destroys memory once it is no longer reachable, so safe Rust code is no more leak-prone than GC'd code.
I'm sure you're aware that this is quite an unfair/exaggerated statement to make. But yes, I'm all in favor of language features that help prevent memory leaks.
But the reason the smart folks at Mozilla don't just switch to using a GC for all of Firefox (and none of the other major browser vendors do either) is that GC pauses suck for user interaction. If you don't think that's a concern, or have a solution for it, please elaborate.
Also, note that a GC does not automatically mean no memory leaks. For instance, see how leaky Gmail is (it used to be worse, according to their dev team).
I turn off the cycle collector in my realtime apps. I prefer designing a clean, solid system that doesn't rely on cycles without my knowing about them. I guess that's just my inner control freak, though.
Oh, and I said that it has locks, like every other language; I didn't mean that it has shared memory like every other language.
Finally - if you're just being smug about how smart Rust is for having lifetime tracking and all those pointer types/restrictions - I don't think it's all that great; nor did the gaming community when they got their hands on it last; nor do many others who share the opinion that Rust is just too complex while being too restrictive.
Hybrid automatic and unsafe manual memory management (when the unsafe portion is for something really common like shared memory) is not something I'm really a fan of; it gives up safety while retaining the disadvantages of automatic memory management (lack of control, overhead). I think that safe automatic or fully manual schemes are the ones that have won out in practice because they get to fully exploit the advantages of their choices (safety in one case, control in the other).
To reinforce your point, many are unaware that the various forms of reference counting are also GC algorithms in CS speak.
I have actually used both Rust and Nimrod for real work (same project, first in Rust, then rewritten in Nimrod). My experience is that Nimrod is far easier to handle than Rust. It feels like Python with the native speed of C. There are a lot of nice features built into Nimrod, for instance Perl-compatible regular expressions and seamless import of C functions. For me it is the most productive language I have ever encountered, and I know many languages.
I also tend to prefer catching errors early, and having a typed language that warns and errors at compile-time is great.
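To give a flavour of the C interop, a minimal sketch (the c_printf wrapper name is made up; the importc/varargs/header pragmas are the actual mechanism):

    # Declare the C function once, then call it like any other proc;
    # no separate binding generator or glue layer is needed.
    proc c_printf(fmt: cstring) {.importc: "printf", varargs, header: "<stdio.h>".}

    c_printf("hello from C, %d\n", 42)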
1) Everything is (strangely) called a procedure, and then there's syntax to differentiate arguments that will be modified in place (proc myproc(myarg : int, inplacearg : var int)). Kinda weird, and a lost opportunity to have checking for pure functions at compile time.
2) import vs. include. Why have include at all to shoot yourself in the foot if you have cheap namespacing?
3) if vs. when?
4) varargs feels kinda unnecessary
5) Case-insensitive. Oh... why?
On the bright side, I like how OO was implemented.
I disagree that it's a middle ground between C and Python, though. I see it more as an evolution of Pascal: it has the same niceties (ALGOL-like, static types, builds executables) and some things added on top (no VM, but GC'ed; metaprogramming).
If I try to answer your points:
-> 1) The procedure keyword comes from Pascal. I am not shocked by it; when I learned programming, the teacher used to call them procedures too.
The 'var' in procedure arguments could be considered the opposite of C/C++'s 'const'. Everything is const by default in Nimrod, but if you want something modified in place you indicate it (there's a small sketch after this list).
-> 2) Herm... I got nothing. I haven't used include, and I haven't seen a use for it yet.
-> 3) 'when' is compile-time, 'if' is at runtime.
-> 4) Unnecessary? varargs is quite useful. For example, if you use redis, you can write db.del("akey", "anotherone", "otherkey") instead of having to fiddle with an array. varargs makes some calls cleaner (see the sketch below).
-> 5) "The idea behind this is that this allows programmers to use their own preferred spelling style and libraries written by different programmers cannot use incompatible conventions." from the Nimrod manual (http://nimrod-lang.org/manual.html). It forces you not to name functions and variables too closely. So you won't be able to have different things named myStuff and my_stuff because it will refer to the same variable or proc. You enforce your own writing style. That is debatable. You have others enforcing a style, like with gofmt. The case insensivity did not disturb me, though (but I admit it surprised me at first).
If I were starting a new language, I wouldn't pass up the opportunity to disallow mixing these concepts, so there's a way to reason about pure functions.
3) I get that, but it feels like something that could have been optimized away by the compiler; they didn't bother and instead bloated the syntax. Not a fan of the naming either.
4) Just IMHO, but this kind of magic feels out of place in a static language. In something like Python, variable arguments aren't as opaque, since there's an underlying object being passed around (a list or a dict), and your arguments can be of any type.
3) You can't optimize it away. A compile-time conditional statement has to allow for undefined identifiers and such, but for a runtime conditional statement you want to have the compiler signal an error even if it can statically determine that the condition is always true or false.
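Roughly, in code (a minimal sketch):

    when defined(windows):
      # This branch may freely use Windows-only modules and symbols;
      # on other platforms it is discarded before semantic checking.
      import winlean
      const sep = '\\'
    else:
      const sep = '/'

    # A runtime 'if' on the same condition would still require both branches
    # to type-check, even though the condition is a compile-time constant.
    echo sep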
3. if == runtime control structure; when == COMPILE-time control structure: code in a failing when clause is not compiled. Basically the equivalent of an #ifdef preprocessor conditional in C.
4. Eh, I like it.
5. Never been an issue.
You mean Modula-2 (1978) co-routines?
Of course I might be missing something as I'm not that familiar with Go.
This is not a new thing. Structural subtyping has been around since the earliest formal treatments of subtyping in the early 80s. OCaml's subtyping relation is structural.
There's a related notion of row polymorphism that was first formalized in the late 80s. As far as I know, it hasn't been widely adopted, but it is the subject of MLPolyR. Elm's extensible records also seem similar. Row polymorphism is also an important concept when dealing with typed concatenative languages, like Joy and Cat.
Really, Go brings nothing new to the table. It is a synthesis of (mostly) good ideas. Unfortunately, it also forgoes other good ideas (parametric polymorphism, sum types, and pattern matching come to mind). The goodness of exceptions is, of course, debated.
Yeah, what I wanted to say was "the only aspect worth noting" or something similar. I knew about structural typing and vaguely remembered that row polymorphism exists (but I'm not really sure what it is).
Actually, I've wanted to play with Joy a couple of times now, but it seems unmaintained and rather hard to approach. I ended up learning some Forth and a little Factor instead. I think I'll give Cat a shot; I'm not a fan of the CLR, but I'd really like to know how you can type a concatenative language.
Or like object types in OCaml. They also use structural typing.
Contrast with Java where you'd have to both create a new interface and write an adapter class in a separate "glue" library that has hard dependencies on both libraries.
Anyway, I believe this feature is very handy. It's not "new", however. As noted, OCaml objects - and also modules - support structural typing too, and you can't call OCaml a new language. Scala supports it too, in more than one way. And so on.
Also, compared to the powerful and extremely rich type systems those other languages have, Go's seems rather limited. What I meant by interfaces not being a "serious feature" - I should have said it differently, I know - was that compared to other features of modern type systems it's not that significant. I get the feeling it only looks significant in Go because the language lacks those other features.
And BTW, that's a conscious decision by the language designers to keep the language simple. I don't say it's a bad decision, either. I just want to note that Go is indeed simple (at least with regard to types) and not that innovative, and also that using Java as a baseline is not the most ambitious thing to do. ;)
Not very good for large scale programming, I would say.
As for comparing with Java, well, yes it is true.
However, there are plenty of strongly typed languages that offer similar capabilities.
If you want to change a public method and can find all the type's usages, an IDE or search engine can tell you which call sites will break. (Or just compile everything and see what happens.)
If you can't find all the type's usages, you're screwed anyway because any change that would break an interface will also break a call site that calls a method directly, without using an interface. So having all the interfaces declared right there doesn't help that much.
From a large scale application developer point of view it matters a lot.
In code bases developed by 50+ developers across multiple sites, it is important to be able to look at a struct definition and know which interfaces in the code base it supports.
Back when C++ didn't yet have templates, there were several suggestions about what kind of generics to implement and how. g++ implemented "protocols", which are basically the same as Go interfaces. I think this was eventually considered (and refused) in C++ as "concepts", but I might be mistaken. Templates are more general and can be kludged to implement protocols - which is probably the reason they won out in the C++ standardization race.
It can improve readability a good deal. Here's an example I made:
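Something along these lines (a rough Nimrod sketch; Config and parse are just stand-ins, not the original example):

    type Config = object
      raw: string

    proc parse(data: string): Config =
      # made-up parser; raises ValueError on bad input
      if data.len == 0:
        raise newException(ValueError, "empty config")
      result = Config(raw: data)

    proc loadConfig(path: string): Config =
      # readFile raises IOError on failure, so the happy path reads straight
      # through with no per-call error checks
      result = parse(readFile(path))

    try:
      let cfg = loadConfig("app.conf")
      echo cfg.raw.len
    except IOError, ValueError:
      echo "could not load config: ", getCurrentExceptionMsg()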
So you need less explicit error-checking everywhere with a language that supports exceptions.
The most important thing in the programming language is the name. A language will not succeed without a good name. I have recently invented a very good name and now I am looking for a suitable language.
-- D. E. Knuth
But "Nimrod" might not be as good of a name as they hope. The biblical Nimrod was a "mighty hunter", and the name may have that connotation in Europe: the British seem to have always had a warship or a warplane (or both) named Nimrod, for instance. But in the US, due to the ironic use of the name by Bugs Bunny to address Elmer Fudd, we tend to associate "Nimrod" with incompetence and gullibility.
If this connotation is intentional, perhaps as humor, please just tell me I'm not getting the joke.
The fact that the word has a negative connotation in the US is a bad coincidence. It however seems too late to change the name now.
But is it really so bad? Many people use Git, and in Britain 'git' has negative connotations similar to those 'Nimrod' has in the US.
I don't know. It's not like "Gimp", which is viewed by some as insulting to handicapped people. But it wasn't clear to me that the Nimrod developers were even aware of the US connotations, so I brought it up.
Perhaps the "Git" comparison is apt; as far as the US is concerned, that's the sort of name that's been picked.
How is this understanding wrong?
Generally speaking, if you would normally do something using reflection, you can also do it using code generation, which should compile to faster code (since you're not interpreting it at runtime) but possibly at the cost of code bloat and making the code harder to follow.
In a lot of instances, a macro isn't particularly more complicated in intent than a string + eval solution; it's just a much more verbose way of attacking the problem.
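For a rough sense of what that looks like in a statically compiled language, a sketch in Nimrod (the timed macro is made up, not from any library):

    import macros, times

    # The macro receives the code block as an AST and wraps it with timing
    # code at compile time - plain code generation, where a dynamic language
    # might build a string and eval it, or inspect things at runtime.
    macro timed(body: untyped): untyped =
      quote do:
        let start = cpuTime()
        `body`
        echo "took ", cpuTime() - start, " s"

    timed:
      var total = 0
      for i in 1 .. 1_000_000:
        total += i
      echo total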
I'm probably going to post about it soon in more detail, if people are interested.
Personally I didn't find myself wanting all of the Racket syntax to be transformed, but I certainly more than once wanted to have a form which would offer infix syntax for everything inside, like Tcl's `expr` (IIRC). There's https://github.com/marcomaggi/Infix/blob/master/infix/infix.... but I don't know if it works with Racket. Having a `lang` for infix notation seems like a good alternative: everything that would benefit from it (mainly maths in my case) would be in a separate file anyway.
So you have at least one interested person now :)
Do you know how it compares with Dylan's macros?
There is some support in the library for things like map and filter.
It's not the focus of the language, though, which is OK - I love FP, but it's not like it's the only way forward. Diversity is good.
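For example, a quick sketch using the stdlib's sequtils:

    import sequtils

    let xs = @[1, 2, 3, 4, 5]
    echo xs.filter(proc (x: int): bool = x mod 2 == 1)  # @[1, 3, 5]
    echo xs.map(proc (x: int): int = x * x)             # @[1, 4, 9, 16, 25]
    echo xs.foldl(a + b)                                 # 15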
We really do use it as an insult, e.g. "man, that guy is such a nimrod", "you're a nimrod". Very weird choice for a name :)