Hacker News
Gluon – A static, type-inferred and embeddable language written in Rust (github.com)
203 points by Lapz on June 22, 2018 | 94 comments

Looks really interesting. The readme actually got me started with lots of good examples.

Side note though: I won't ever use a symbols-instead-of-words language unless I have to.

    let { (*>), (<*), wrap } = import! std.applicative
This is not my idea of good language syntax. I want junior developers, non-X-language developers, hardware guys, and even graphic designers to be able to understand my code.

Consider simple string concatenation. This is what it looks like in Objective-C:

    NSString *one = @"Hello";
    NSString *two = @"World";
    NSString *three = [[one stringByAppendingString:@" "] stringByAppendingString:two];
Total fail for something so simple.

I'm dyslexic. I haven't looked into it in detail, but someone once told me that dyslexia is some kind of problem processing things that are pronounceable. When I was learning Japanese I was amazed how easy it was to read kanji (Chinese) characters. Apparently processing things that are inherently symbolic (even if you can pronounce them) is done differently. It was like someone lifted a veil. Even now I prefer reading Japanese to English. I suspect if I learned Chinese I would enjoy it even more.

The same goes for languages with symbols. For me, they are an order of magnitude more readable. Obviously people with dyslexia are a very narrow group to design a language around, but I suspect that there are others who also find it easier to think about symbols than words. That's why people design languages in this way (and as someone else pointed out, probably why math has such a large number of symbols).

There are things that are easy to learn and things that are easy to use. Often there isn't as much overlap as you might expect between the two groups. For something like programming languages, I'll take easy to use over easy to learn any day. Having said that, I am absolutely sure there are people who prefer having a big block of prose in their code. I see that kind of code frequently as well, so which one is easier to use almost certainly depends on the person.

Great insight, thank you for sharing. I'll keep this in mind. Maybe in some cases a middle ground could be finding a way to use fewer symbols instead of more. Verbosity isn't always desired with symbols or prose.

The input method for symbols is the other problem. Math notation is used for quickness.

Sometimes you are expressing a specialized concept that, even when translated into words, won't be readily understood by someone without domain knowledge. That's when I believe symbols are very useful.

In this case we are dealing with an applicative functor. If you know what an applicative functor is, you will immediately know what these symbols mean. If you do not, even if we translate `<*` as `sequenceActionsKeepFirstValue`, you still don't know what this is doing.

I know what an applicative functor is and I struggled to map what <* and *> are. These are not standard, even in the niche world of functional programming.

These symbols are part of the definition of the Applicative type class in Haskell http://hackage.haskell.org/package/base-

Same with Idris and Scalaz. I don't know how much more standard it could be.

Standard it might be, but I seriously hope IDEs would give a hint on what to google when a junior sees "<*"...

try googling < or "scala <*"...

I kind of miss the days where you would turn to the manual, read and understand it, before doing serious work in a language.

I've had a junior dev ask me what kind of strange multiplication thing was going on in our C code base. If you don't even know about pointers, maybe you're not "junior" yet and should learn some more.

The asterisk symbol in C is heavily overloaded too; it can mean a pointer type, dereferencing a pointer, multiplication (of ints, floats, ...).

another instance where IDEs can help juniors by providing hover-to-show-name/descriptions

An IDE should just show the pronounceable name of that symbolic alias on hover.

Symbols are fine as long as they're alias names for fully written out names.

I think Scala will support such an IDE feature with an @alias annotation in the future. (I saw that idea on a slide in a talk of M. Odersky some time ago but actually don't know the current status).

Search features for language documentation should recognize special characters as such. For the asterisk that gets very ambiguous, though, unless the search supports regexes or some other standard escape mechanism.

Then use nicknames, or abbreviations. Searching for symbols is very hard on the NLP-dominated Web. And even if Google is getting better (I tried a month ago with Scalaz symbols, and half of them failed), the other search engines (for example Lucene, used by forums, or full-text indexes) will strip out every non-alphanumeric character.

I'm confused by your example, which seems to be the opposite of symbols-instead-of-words. It's words-instead-of-symbols gone berserk.

Cleaning things up:

    NSString one = "Hello";
    NSString two = "World";
    NSString three = one.appendString(" ").appendString(two);

I agree with him: aside from the super verbose method name, the symbols are the problem.

one, two, and three are pointers. @"Hello" is shorthand for creating an NSString instance, which is a heap-allocated object. Both of those are important in the context of Objective-C, so they can't just be left out entirely. (I also think Objective-C is a pretty extreme example here since its syntax is limited by its history as a preprocessor layer over C.)

Setting aside those, the Objective-C code uses these symbols:

   ; = "" [] :
And the example you provide uses these symbols:

   ; = "" () .
It doesn't seem like the perceived improvement is from eliminating symbols, but from switching from Objective-C's Smalltalk-like method invocation syntax to a more familiar (to many devs) C++-style syntax. Also, it seems to me that leaning more on symbols makes the code even easier to read:

   NSString one = "Hello";
   NSString two = "World";
   NSString three = one + " " + two;
Judicious use of overloadable/custom operators can make code easier to read, especially when they express an operation that can't be expressed clearly in a word or two. `<*>`, `*>` and `<*` fall into that category.

I really, really wish people would stop using + for string concatenation in new languages they design. It shares only a tangential relationship to arithmetic addition, and even that's a stretch. It's caused an enormous amount of bugs in numerous different languages. Please, let's just let that concept die for newer languages.

Do you have examples of bugs that it causes when more strictly typed? That is, string + number is an error.

In those languages, it's more a cognitive mismatch (usually covered up as it's what people learned it as).

Even then, people seem to think of + as a special case, and in a language that is otherwise very strictly typed will decide to automatically convert between different numeric types (floating point and integer, for example). That's admittedly a different problem, but it shares a lot in common with what I originally mentioned.
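A quick Python illustration of the mismatch (Python being one of the strictly typed cases mentioned, where string + number is an error): `+` on strings is not commutative, unlike arithmetic addition, and mixing types fails at runtime.

```python
# Arithmetic "+" commutes; string "+" does not, which is one source of
# surprise when the same symbol is reused for concatenation.
assert 3 + 5 == 5 + 3
assert "ab" + "cd" != "cd" + "ab"

# In a strictly typed setting, string + number is simply an error;
# Python raises it at runtime rather than rejecting it at compile time.
mixed_types_rejected = False
try:
    "1" + 2  # type mismatch
except TypeError:
    mixed_types_rejected = True
assert mixed_types_rejected
```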

> It shares only a tangential relationship to arithmetic addition

You’re right—string concatenation is not addition, it’s multiplication. (Contrast “sum types” and “product types” for an intuition, if you don’t have an education in abstract algebra.)

I'd disagree and say that it's value-level addition. Adding the number `3` to the number `5` gives us a single number `8`, not a pair of `3` and `5`. Likewise appending `"hello "` to `"world"` gives us a single string `"hello world"` not a pair of strings.

If we model arithmetic addition with Peano numerals, that would be:

    three = S (S (S Z))
    five  = S (S (S (S (S Z))))
    eight = plus three five
          = S (S (S (S (S (S (S (S Z)))))))
We can model strings as lists of characters (which Haskell does, but is inefficient). It's essentially the same except the second constructor takes a character as argument:

    hello = Cons 'h' (Cons 'e' (Cons 'l' (Cons 'l' (Cons 'o' (Cons ' ' Nil)))))
    world = Cons 'w' (Cons 'o' (Cons 'r' (Cons 'l' (Cons 'd' Nil))))
    greet = append hello world
          = Cons 'h' (Cons 'e' (Cons 'l' (Cons 'l' (Cons 'o' (Cons ' ' (Cons 'w' (Cons 'o' (Cons 'r' (Cons 'l' (Cons 'd' Nil))))))))))
The algorithms for adding Peano numerals and appending lists of characters are identical:

    plus Z     y = y
    plus (S x) y = S (plus x y)

    append Nil         y = y
    append (Cons x xs) y = Cons x (append xs y)
If we replace 'Char' with '()' (i.e. `List ()` or `[()]`) then they're isomorphic.

I wrote up some thoughts about this connection at http://chriswarbo.net/blog/2014-12-04-Nat_like_types.html
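The parallel above can be checked in a few lines of Python, modelling Peano numerals and cons-lists as nested tuples (a loose sketch of the Haskell definitions, not a faithful encoding):

```python
# Peano numerals: Z is zero, ('S', n) is the successor of n.
Z = 'Z'
def S(n): return ('S', n)

def plus(x, y):
    # plus Z y = y; plus (S x) y = S (plus x y)
    return y if x == Z else S(plus(x[1], y))

# Cons-lists: Nil is empty, ('Cons', x, xs) prepends x to xs.
Nil = 'Nil'
def Cons(x, xs): return ('Cons', x, xs)

def append(xs, ys):
    # append Nil y = y; append (Cons x xs) y = Cons x (append xs y)
    return ys if xs == Nil else Cons(xs[1], append(xs[2], ys))

three = S(S(S(Z)))
five = S(S(S(S(S(Z)))))
eight = plus(three, five)  # same shape as S applied eight times to Z

hello = Cons('h', Cons('i', Nil))
world = Cons('!', Nil)
greet = append(hello, world)  # Cons 'h' (Cons 'i' (Cons '!' Nil))
```

The two recursions really are the same algorithm; only the payload carried by the second constructor differs.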

I personally like forms of string interpolation:

NSString three = f"{one} {two}";

Readable and should be efficient.

You'd use `<>` or `++` in Haskell for strings, lists or whatever other instances of Semigroup or Monoid type classes.

I had the same opinion about symbols until I learned some APL. It taught me that symbols can be so useful as they are in math.

It is true that non-math people, hardware guys, or graphic designers may not be able to understand a complex equation and think it is just some kind of hieroglyph, but the right notation can help to make things more clear. Anyway, it is very likely that, for example, a graphic designer would not be able to understand the Maxwell equations even if we wrote "rotational of the electric field" instead of using symbols.

I am not claiming that symbols are always the best solution, but I think they have their place and there is usually a sweet spot for the problem at hand. I'd suggest you try to keep an open mind.

All this said, there are many (way too many) terrible examples.

It is as if symbols, once internalized, could be used as a tool of thought.

> It is true that non-math people, hardware guys, or graphic designers may not be able to understand a complex equation and think it is just some kind of hieroglyph, but the right notation can help to make things more clear.

The semantics of these "hieroglyphs" very much depend on the context, though: "a x B" itself only says "apply some operation x to the named entities a and B".

> The semantics of these "hieroglyphs" very much depend on the context

We need context to understand words too. What is your point?

Words also inform context (bi-directional), symbols derive their meaning entirely from context.

They may give some more context, but you usually need to know the domain up to some point anyway.

For example:

    a = &b;
may just be a weird bunch of symbols if you don't know C. But my point is that, anyway, for someone who doesn't have any idea about what a pointer is:

    a is pointer_to b;
is not going to make it obvious, and it is going to make a simple program with a linked list a nightmare to read for those who know.

I am aware it is also very easy to show an example where symbols are much less helpful than words. Some good ones were given in this thread. I am just suggesting to keep an open mind because I have the feeling symbols can actually make some programs more readable, although finding the right balance is not an easy task.

Ah, Objective-C, the language that seems designed as if on purpose to confuse and confound. I'm glad it's on the way into the garbage bin.

Another way to write that would be:

    NSString *one = @"Hello";
    NSString *two = @"World";
    NSString *three = [NSString stringWithFormat:@"%@ %@", one, two];

Not sure if you were trying to make it simpler, but honestly, this also looks needlessly difficult to understand.

Would you say the same about the operators in Java or C?

Gluon recently added implicit arguments with the latest release (see http://marwes.github.io/2018/06/19/gluon-0.8.html). In spirit it is like modular implicits (https://arxiv.org/pdf/1512.01895.pdf). This is really exciting because modular implicits represent a pragmatic balance between the flexibility of ML modules and the convenient overloading of type classes.

Another cool feature of Gluon is modules are basically just records. This is a strict improvement over ML modules in my opinion. (See http://gluon-lang.org/book/modules.html)

Congrats on the recent release!

> implicit arguments

Because it worked so well in Scala?

What's the problem of implicit arguments in scala? I can see why implicit conversions can be a problem but aren't implicit arguments required for typeclasses and other nice features?

You should read the paper :) Modular implicits are not the same as Scala implicit arguments. Gluon probably just chose the name "implicit arguments" because it corresponds better to their variation on modular implicits.

What's your opinion of Odersky's scala3 implicit plan?

I'm not very familiar with Scala3, but as far as I'm aware implicits are still a core part of the language?

Let me put it like this: I recently applied to two high-profile startups with experienced technical leadership, and they both explicitly asked me if I was a Scala fan. They weren't, and had bad experiences in the past regarding productivity due to complexity, tooling and developer onboarding, which unfortunately matches my experience. There is some good stuff for certain niches (Akka, Spark etc.), but otherwise I feel like Kotlin is winning.

:sigh: I hate function call syntax without parens. It makes code much more difficult to read and, if it supports first class functions, passing around no arg functions is harder than it should be.

I hate function call syntax with parens. It makes code much more difficult to read and, if it supports calling functions, calling functions with args is harder than it should be.

While I agree the parent could have elaborated more, would you like to as well?

I've never worked in a language without parens for function calls, so I share their confusion reading Haskell or Ruby code at a glance.

I don't think a language needs to bend for non-users, but being intelligible without knowing the language can be nice.

As an outsider, the examples:

    A(B(C))  A(B,C)  A(B)(C)

Could all be valid Python but still communicate a little bit about A, B and C. Without parens they'd all look like "A B C". I'm sure it's more obvious if you know those languages.

The interesting thing is that in Haskell, the equivalents of Python's `A(B,C)` and `A(B)(C)` are identical: `a b c`. This is because of currying: you're allowed to supply one argument to a function at a time, and you get back a function that takes one fewer argument. So if `add(x,y) = x + y`, then `add(5)` is a function (let's call it add5) so that `add5(y) = 5 + y`, or in Haskelly notation, if `add x y = x + y` then `(add 5) y = 5 + y`. If you write `add (x,y)` in Haskell, then it means a function that takes a single tuple `(x,y)` as an argument.
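Currying can be emulated in Python with nested single-argument functions, which may make the distinction clearer:

```python
# A curried add: each call takes exactly one argument.
def add(x):
    return lambda y: x + y

add5 = add(5)          # partially applied: a function still awaiting y
assert add5(3) == 8
assert add(5)(3) == 8  # the Haskell `add 5 3`, i.e. `(add 5) 3`

# The uncurried form takes a single tuple, like Haskell's `add (x, y)`.
def add_pair(pair):
    x, y = pair
    return x + y

assert add_pair((5, 3)) == 8
```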

A(B(C)) would be `a (b c)` or `a $ b c` or `(a . b) c`. The first is the most vanilla way, the second is convenience (basically "evaluate everything after $ first, then plug it in") and the third uses the function composition operator `.`.

Nice summary. I would just like to add that `$` and `.` in Haskell aren't part of the language syntax, they're just normal functions/operators, which some people like to use.

We can define `$` ("apply a function to an argument") as:

    f $ x = f x
We can define `.` ("compose two functions") as (where `\x -> ...` is an anonymous function):

    f . g = \x -> f (g x)
Equivalents in, say, Javascript would be:

    function dollar(f, x) { return f(x); }

    function dot(f, g) { return function(x) { return f(g(x)); }; }
People mostly use `$` because of its precedence rules, which cause everything to its left to be treated as its first argument, and everything to the right as its second, i.e. we can use it to remove grouping parentheses like:

    (any complicated thing) (another complicated thing)

    any complicated thing $ another complicated thing
It's also useful for partially applying, e.g. `map ($ x) fs` will apply each function in the list `fs` to the argument `x`.
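A rough Python analogue of `map ($ x) fs`, treating `$` as an ordinary binary function and partially applying its second argument:

```python
# "$" as a plain binary function: apply f to x.
def dollar(f, x):
    return f(x)

fs = [abs, lambda n: n * 2, lambda n: n + 1]
x = -3
# map ($ x) fs: apply every function in fs to the one argument x.
results = [dollar(f, x) for f in fs]
assert results == [3, -6, -2]
```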

> Without parents they'd all look like "A B C"

No, that would just be a(b,c). You would use parentheses to express the others. a(b(c)) would be a (b c). a(b)(c) would be (a b) c. IMO it's still perfectly readable, you just have to know it and get used to it (which, granted, can be difficult if you're only used to paren-ized languages, but not an impossible ask, certainly not, imo, a deal-breaker).

There's an added complication that `a(b, c)` could mean "call `a` with two arguments `b` and `c`" or it could mean "call `,` with two arguments `b` and `c` to form the pair `(b,c)`, then call `a` with that one argument".

The latter would be written `a([b, c])` in other languages. These are all isomorphic (we're calling `a` and giving access to `b` and `c`), but can confuse people who are new to the syntax.

> I hate function call syntax with parens. It makes code much more difficult to read and, if it supports calling functions, calling functions with args is harder than it should be.

The parent has a point; you have no point.

is this a function call or a function value?


    foo

now it's explicit:

    foo()

Your sarcasm just does not work.

> calling functions with args is harder than it should be

care to explain? because it is not harder.

Not taking sides here, but just wanted to point out that in Haskell/Agda/Coq/Idris/etc. your `foo` would denote a function, just like writing `foo` in e.g. Javascript. Your `foo()` would call the `foo` function with the argument `()`, which is a unit value (AKA null, void, etc.).

Interestingly a function which takes a unit value as argument (AKA a "thunk") is isomorphic to its return value, i.e. wrapping the body in a function doesn't change its meaning. Yet it can be useful to prevent side-effects from triggering, even if the effect is as benign as "heat up the CPU by performing lots of calculations".
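A thunk in Python terms, assuming an effect as benign as appending to a list: wrapping a computation in a zero-argument function delays the effect until the thunk is forced.

```python
made = []

def effectful():
    made.append('computed')   # stand-in for a side effect
    return 42

# Wrap the computation in a thunk: nothing happens yet.
thunk = lambda: effectful()
assert made == []

# Forcing the thunk performs the effect and yields the value.
assert thunk() == 42
assert made == ['computed']
```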

Now I will take sides and point out that if we have currying, all of the following are the same:

    f(a, b, c, d)
    f(a, b, c)(d)
    f(a, b)(c, d)
    f(a, b)(c)(d)
    f(a)(b, c, d)
    f(a)(b)(c, d)
    f(a)(b, c)(d)
Not only does this proliferation of forms over-complicate a language, but it's also completely redundant as a no-parens language would write it as `f a b c d`.

> Not only does this proliferation of forms over-complicate a language, but it's also completely redundant as a no-parens language would write it as `f a b c d`.

and I don't see how it makes anything more readable. It doesn't.

I wasn't talking about "readability", since that's subjective and can change based on background, familiarity, what someone's used most recently, etc.

I was giving an objective, measurable example where the `f(a, ...)` syntax is "harder": namely that it causes a proliferation of (curried) function application forms, all of which are equivalent, compared to just one in the `f a ...` form. As it stands, such parenthesised function calls don't even have a normal form! We could add one, by adding a new rule to the language, but even then there's ambiguity about what that rule might be: should we rewrite expressions of the form `f(a)(b)` into `f(a, b)`, thus making `f(a, b, c, d)` the normal form; or should we rewrite expressions of the form `f(a, b)` into `f(a)(b)`, thus making `f(a)(b)(c)(d)` the normal form?

Issues like this would plague anyone trying to manipulate programs programmatically, e.g. writing interpreters, compilers, formatters, linters, documentation generators, HTML/LaTeX renderers, static analysers, (structured) macro expanders, minifiers, obfuscators, deobfuscators, (structured) diff/patch/VCS/etc., auto-completers, template/skeleton code generators, etc.

It’s a ”function value”, since you haven’t applied it to another expression.

Whether that is a partially or fully applied function doesn’t matter in the first syntax, but in the second syntax you have to add ugly parens to disambiguate the partially applied function from the function call.

As soon as you get serious about type-level functions, the first syntax beats the shit outta the second.

I liked how LiveScript had ! instead of (), it made the whole source look like stuff would be done! xD

I mean, why not give invocation its own one character operator, it's used all over the place.

I hate parens call syntax with or without functions. It makes code much more difficult to read, and if it supports functions calling first, arguing with functions and passing is harder than it should be.

> if it supports first class functions, passing around no arg functions is harder than it should be.

Can you expand on that? How does no parens make it harder to pass around first class functions? In Gluon (and in Haskell, PureScript, Idris, etc.), you just pass around the function name. If you have "inc x = 1 + x" and you want to pass that to map, you just pass it by name: "map inc [1, 2, 3]"

What could be simpler?

How does "map inc [1, 2, 3]" know not to do "inc [1, 2, 3]" and then pass the result to "map"? I'm guessing it's a language rule, but it could be the type system. (I last used Haskell in about 1999.)

It's certainly ambiguous to outsiders, which is kind of the point of the discussion. Simpler would be something like Python: map(inc, [1, 2, 3]). No ambiguity.

Function application in most languages in which 'f x' means 'apply the function f to the argument x' is left associative.

So `map inc [1,2,3]` is the same as `(map inc) [1,2,3]`.

This is consistent with the fact that -> is right associative.

So in Haskell, we usually say

map :: (a -> b) -> [a] -> [b]

Which is usually interpreted as 'If you give me a function which can turn a's into b's, and then you give me a list of a's, I can give you a list of b's.'

But it is exactly equivalent if we write the type of map like this:

map :: (a -> b) -> ([a] -> [b])

Which I like to read as 'If you give me a function which can turn a's into b's, I can give you a function which turns a list of a's into a list of b's.'

It's certainly the case that ML-style languages have a syntax that can be unfamiliar to outsiders, but I would disagree that the syntax is ambiguous. Indeed, Standard ML (another functional language) is one of the few languages to be formally specified.

Okay. I know a lot of languages, but not really functional ones (I have used Haskell, OCaml and Lisp (though that's not like this), but not much and long ago).

Maths is more type-dependent in the absence of parentheses. E.g. sin cos⁻¹ x = sin(cos⁻¹(x)), but A cos⁻¹ x = A × cos⁻¹(x).

By "ambiguous" I mean to people looking at the code. Obviously it's unambiguous and well-defined to the computer.

The rule is simple to learn and perfectly consistent once you know it, even to a human programmer. This is in direct contrast to mathematical notation, which remains ambiguous and context-dependent forever.

While it's been over a decade since I first learnt this style of functional languages, I don't remember being confused by application syntax. It also helps that function application binds more tightly than anything else.

I don't think it's any great indictment that it is not immediately understandable to someone who does not know the rule. There will be lots of other things someone will not be able to understand in a Gluon program, unless they know Gluon. It's perfectly reasonable to expect someone to learn a programming language before they write code in it.

> Maths is more type-dependent in the absence of parentheses

Yes, mathematical notation is also heavily overloaded, often abbreviated or "abused" and full of "puns".

There have been some attempts to "fix" this. Lambda calculus (the basis of most functional languages) could be seen as an attempt to do this. A more recent example is https://en.wikipedia.org/wiki/Structure_and_Interpretation_o...

> How does "map inc [1, 2, 3]" know not to do "inc [1, 2, 3]" and then pass the result to "map"? I'm guessing it's a language rule, but it could be the type system.

That is a lot like saying: in the Python expression "5 - 4 - 1", how does the language know to subtract 4 from 5 first, and then subtract 1 from the result, rather than subtracting the 1 first?

The answer is: precedence rules. It's not ambiguous, though you might not know which order applies if you aren't familiar with the language. The same could be said of the syntax of any language, except perhaps for those in the Lisp family.

Functional languages generally don't try to be familiar to programmers who are used to other paradigms, because that compromises the elegance of the language (for example, mandatory parentheses for function calls and currying are an ugly mix). But once you learn a language in the ML family (Haskell, OCaml, etc.), they all have the same kind of familiarity that a Java programmer feels when they look at C#.

That's kind of my point - people know the rules for maths, and Python's function call notation is the same, so it's familiar and unambiguous to outsiders.

I understand that other languages make different choices regarding syntax and readability. This particular choice, while not inherently a bad choice, makes things more difficult for most people, and led to eximius's comment at the start of the thread:

> It makes code much more difficult to read

I think it's only more difficult to read initially, when you aren't used to it. I actually find it more natural than the more mainstream function call syntax after having used it for a while. When I look at code in C-family languages, it feels like there is a lot of unnecessary syntactic noise that distracts me from understanding what the code is doing.

I agree that the "f x y z" syntax can look confusing/unfamiliar to people who haven't seen it before, whereas everyone knows how subtraction works. But it's worth learning because currying is an elegant way to simplify a language and make it more uniform: a function is just a map from a single argument to a single result. To compose functions, you only need the output type of the first function to match the input type of the other function. You can build a general function composition function that takes two functions and returns their composition. There is no arity to worry about. Parentheses should be used for grouping and not be overloaded to also indicate function calls.
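The "no arity to worry about" point can be sketched in Python: a single general composition function suffices when every function maps one argument to one result.

```python
# General composition: feed g's single output into f's single input.
# No arity bookkeeping is needed when every function is unary.
def compose(f, g):
    return lambda x: f(g(x))

inc = lambda n: n + 1
double = lambda n: n * 2

assert compose(inc, double)(5) == 11   # inc(double(5))
assert compose(double, inc)(5) == 12   # double(inc(5))
```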

> people know the rules for maths, and Python's function call notation is the same, so it's familiar and unambiguous to outsiders

Personally I'm really not a fan of relying on operator precedence. Even in that Python example I would write it `(5 - 4) - 1` to disambiguate. I have a few issues:

- Firstly, parsing combinations with varying precedence is something that computers are good at but people not so much, so I see no reason to force readers to juggle this sort of stuff.

- Second, mathematical precedence rules are complicated, and trying to support them can make a language complicated too. Languages like Smalltalk prefer to remain simple, even if that "breaks" standard mathematical expressions https://en.wikipedia.org/wiki/Smalltalk#Expressions

- Finally, arithmetic is a tiny part of a programming language. There's no "standard" way to extend operator precedence to include, say, 3D rendering operations, or parser combinators, or attaching event handlers, etc. All we can do is give each operator a number, at which point we're just exacerbating the "humans aren't good at precedence tables" problem.

This has also been discussed elsewhere, e.g. at http://wiki.c2.com/?OperatorPrecedenceConsideredHarmful

PS: I prefer the `f a b c` function call syntax to `f(a, b, c)`, but prefer s-expressions to both. Scheme is really nice, but I don't want to live without currying and laziness by default. I have high hopes for https://lexi-lambda.github.io/hackett :)

This means you have to rely on precedence, or add your own parens.

For example, how do you read this?

  f -4
In F#, whitespace is significant (and it's a function call); in OCaml, it's a subtraction. The idea of wrapping a negative number with parens is not necessarily obvious.

It's impossible to pass around functions without arguments in Haskell, because there are no functions without arguments in Haskell. It looks like the same is true of Gluon. (So you don't really pass around the function name: you pass around the variable which contains the result of evaluating the expression.)

In languages where there can be functions with no arguments, if referring to the function without parentheses calls it, it can be inconvenient to get a reference to the function itself.

In both Gluon and Haskell, functions without arguments can be represented as functions over the unit type:

  f: () -> SomeType
which can be called via:

  f ()
There are examples of this in the Gluon book (http://gluon-lang.org/book/syntax-and-semantics.html). There are no syntax-related difficulties at all here. This is also how it's done in OCaml and related languages.

(Note: This is (basically) useless in Haskell, since laziness makes this have the same semantics as a constant:

  f :: SomeType
But since Gluon is strict, there's a pretty important difference between the two.)

getChar is a Haskell function without arguments.

In Haskell, getChar isn't a function (something of type `a -> b`); its type is `IO Char`


It represents an interaction with the outside world that results in a Char when you perform it. It isn't a function because in Haskell functions must be pure (with no side effects).

This is false. getChar is a constant.

I am confused :S

From your other post: "In both Gluon and Haskell, functions without arguments can be represented as functions over the unit type: f: () -> SomeType"

Isn't `f` a constant here too? How is this different than getChar? (I get that I don't understand something here, but I'm not sure what.)

`getChar` is not of type `() -> SomeType`, but directly of type `SomeType`.

(Though yes, `f` also is a constant... it's just a constant that happens to be a function, which `getChar` is not.)

I feel the responses to you were somewhat unhelpful. Would they also claim that something of type `MyFun Char` is not a function, where

    data MyFun a = MyFun (() -> a)
Technically they'd be right.

I didn't say it was unambiguous, just hard to read. Literally visually parsing it is more difficult than with parens.

Depending on how dense the code is, it isn't that bad. But it is another barrier to understanding.

    (a b c d)
I agree, it’s much better with parens - the scope is immediately obvious.

Yeah, this is what drove me off Ruby. It makes scanning the code so much harder, because you have to get the context of anything to understand it.

Even Python has a syntactically useless ":" before indented blocks that makes them easier to spot.

Looks like a great language that's a nice mix of OCaml and Haskell!

What's the interop story? If I am embedding this language inside my app, I reasonably want to make data structures in my app available through this language. How would that work?

The support for interop looks strong:



There are some marshalling traits that you can implement or derive.

Now this is what every language landing page should be like. I love the nontrivial code snippet solving a fun little problem front and center.

Wow, I've been wanting something just like this for Rust. And it has row polymorphism!

Arguably, this is Rust.

Not sure what you're trying to say here. Gluon is much more like OCaml than it is like Rust. And Rust _doesn't_ have row polymorphism.

By being hosted in Rust, Gluon has access to the Rust ecosystem. So one could use Gluon and still be in the family depending how seamless the interop story is.

I think we are in agreement. I was saying that I was looking for an embedded programming language like this for Rust, and now I've found it.

It would be great if someone made a UI framework with Rust and made this the scripting language (making the experience Elm-like, but with a better type system and without web technologies)

Too bad it has a different heap per thread. If I want to extend a C (or Rust) POSIX-multithreaded program, it means I can't easily use Gluon to access the state of my program (which requires accessing POSIX-thread-protected variables in the normal heap). Same thing that makes Lua useless for simply extending actual, pre-existing C programs.

Anyway, I'm still happy to see that languages more modern than Algol have gained enough influence that the choice of type system, type inference and non-C-like syntax is not frowned upon anymore.

Looks like F# but has a big design mistake in my opinion (which F# doesn't): the symbol '=' in a functional programming language should be tied to comparison, not assignment. (The approach taken here is using "==" for comparison, which is less ugly than JavaScript's "===" but still ugly.)

As the `in` keyword seems key to understanding the language, could someone elaborate on it?

It isn't obvious after reading the intro on http://gluon-lang.org/book/syntax-and-semantics.html.

Looks like a variant on an ML-like `let`.

    let <variable> = <expression 0> in <expression 1>
will bind the value of expression 0 to variable in expression 1.
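In Python terms, `let x = e0 in e1` behaves like binding via an immediately applied single-argument function (a hypothetical desugaring for intuition, not how Gluon actually implements it):

```python
# let n = 1 in n + 2  ~  (lambda n: n + 2)(1)
result = (lambda n: n + 2)(1)
assert result == 3

# Nested lets: let n = 1 in let m = 2 in n + m
nested = (lambda n: (lambda m: n + m)(2))(1)
assert nested == 3
```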

So many new languages lately. Is anyone risky enough to use such languages in production?

This looks very similar to Nix, with improvements such as real typing and lack of semicolons. That's a very good thing.

Any plans for a purely-functional variant (i.e. a single expression per evaluation)?

According to the manual, Gluon doesn't have statements, but it elides "in":

   let n = 1
   let m = 2
   let nm = n + m
   nm
is sugar for

   let n = 1 in
     let m = 2 in
       let nm = n + m in
         nm

I feel like every day a new programming language is presented on HN. Maybe you didn't realize, but we already know CL/Scheme/Clojure and there's no turning back..
