Grain: A strongly-typed functional programming language for the modern web (grain-lang.org)
259 points by bpierre on July 31, 2018 | 153 comments



The main page doesn't do a great job of showing what's interesting about Grain vs. JavaScript. The only hint is this: "No runtime exceptions, ever. Every bit of Grain you write is thoroughly sifted for type errors, with no need for any type annotations."

Maybe show some examples of errors Grain would catch that JavaScript wouldn't, like Elm does: http://elm-lang.org/


> No runtime exceptions, ever.

This is something I wonder about JS: there are a lot of places that exceptions happen in (say) Python that just return `NaN` or `undefined` in javascript. Is it intentional? Is it a good idea?

Examples: the multiplication operator essentially never throws. Out-of-bounds (or "not found") lookups don't throw.

I suspect the logic is "only throw if you have the wrong type for that operation" -- like calling something that isn't a function, or looking up an attribute on something that isn't an object. (Except multiplication for things that aren't numbers is fine, I guess...)

I'm writing a dynamically typed language and don't know if it's better to go the JS way or the Python way. Or do something horrible like having multiplication return a `Maybe` type...


Yeah in general it’s good to avoid sometimes returning a value and sometimes undefined.

There’s an equivalence you can make between a type and a mathematical set, where you say that the type is the set of all possible values for that type.

In both JavaScript and math, multiplication of numbers is an operation that has a special property called closure, which means that the return type is the same as the input types. Multiply a number by another number and you will ALWAYS get a number back. The inputs and the outputs are in the same set, which means you can go nuts multiplying numbers and you’ll never have to check your values to make sure they’re still numbers; it can be safely assumed.

JavaScript gives NaN a type of “number” as a clever compromise. Instead of throwing an error when multiplying a number by a non-number, JS allows multiplication to still return a “number” (`4 * 'a'` evaluates to `NaN`), and then you never need to worry about accidentally “leaving” the set of numbers (`NaN * 10` is still `NaN`).

More likely than not though, multiplying a number and a string is just not a useful thing to do. With a good type system, a program that could wind up in a situation of multiplying a string and a number can be rejected by the compiler.

So in a roundabout answer to your question, a lot of “what then?!” situations can be entirely avoided with a good type system. For the remaining situations like accessing an element in an array that may not exist, a Maybe can make a lot of sense.

(Maybe Float turns out to be just like JavaScript’s numbers, but instead of NaN you have Nothing.)
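To make that concrete, a small JS/TS sketch (the `as any` cast is only there to get past a type checker, which would otherwise reject the program outright):

    const price = 4 * ('a' as any);  // NaN, yet typeof price === 'number'
    const total = price * 10;        // still NaN: it propagates silently
    Number.isNaN(total);             // true (note that NaN === NaN is false)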


Closure is only good in programming if the answers it gives are always useful. Otherwise, you run into the same non-local error problem that plagues nulls: the error still happens, but now it happens at some distant usage site, far away from the cause.


Good point. This is exactly what JavaScript does with NaN and the set of numbers, and errors are harder to track down because of it.

Computer numbers in general have all sorts of issues like this where clean mathematical theory is broken. NaN, Infinity and -Infinity, for example, are all similar to absorbing elements[1] under multiplication in the sense that once you get that as a result, you can't do the inverse operation (division) to get back to your original inputs.
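For instance, in JS:

    const x = 5 * Infinity;  // Infinity: the 5 is gone for good
    x / Infinity;            // NaN, not 5: division can't invert it
    Infinity - Infinity;     // NaN as well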


The JS way is a mistake, intentional or not.

Here's a little thought experiment: Your program ends up with an "undefined" in some variable 'x'. How did that happen? In JS it can happen in any number of weird and wonderful ways, e.g. you called a function which did a 'v[i]' or it could have just fallen off the end because someone forgot to check a return path, or...

In Python there's much less scope for what the problem could be. Since v[i] will just throw an IndexError, you know that the problem couldn't possibly be a stray index or similar.

That is a much better experience for finding problems in one's code.


But JS programs must execute in uncontrolled environments, e.g. browser engines from the past and future with random extensions installed. A bondage-and-discipline language may help developers find logic bugs, but will make the users' experiences worse.


How is "there was a JavaScript error on this page" informational icon any worse than an order total of "undefined" or that the page just doesn't seem to be reacting to clicks? I'd argue that the former is much better UX -- at least the user isn't confused whether it's them that's doing something wrong or it's the page that just shoddily implemented.


> at least the user isn't confused whether it's them that's doing something wrong or it's the page that just shoddily implemented

Users generally ignore all messages coming from their computers. They routinely blame their computers for doing things that the computers are not doing at all. Like, oh I can’t find the document I wrote three weeks ago -> obviously my computer is being disobedient.

Anyway, I don’t blame the users. To them the computer is entirely uninteresting and they don’t want to care about the sorts of concerns that we technically minded people do.

A broken page is a broken page and it doesn’t matter to the user why the page is broken. The only thing that matters to them is, can they do the thing they wanted to do or not. And like the sibling said, the web browsers are surprisingly good at handling broken code.


Did you read what I said at all?

"Order total: undefined" is hugely different from "There was an error -- maybe just leave the page, yeah?".


Banker vs moon rover. B&D languages panic at the first whiff of deviation between the compile-time and runtime environments. But I loaded cnn.com, and my third-party ad blocker did god-knows-what to dozens of requests, with a bazillion unexpected undefineds, and yet the page still rendered.

The web powers through!


This has nothing to do with undefined vs. exceptions, sorry. You can do exactly the same with a powerful enough sandbox -- e.g. what WebAssembly does to avoid hostile code taking over your computer.

It has everything to do with sandboxing (or not). Ads being served by third parties should not be able to influence the JS running in the page.


Generally, I'd say that the less strict the compiler/interpreter is, the worse code you'll end up with. In my experience, Python needs a lot of external checks (i.e. linting, static analysis) to help the programmer write code that actually works as intended (and JS is probably worse).

As John Carmack said: "...if you have a large enough codebase, any class of error that is syntactically legal probably exists there."


It's generally better to highlight the existence of an error condition sooner rather than later.

Compile time is better than runtime, and failing at evaluation time is better than discovering the problem later via explicit value inspection.

Non-signalling NaNs provide an easy way of distributing an invalid value widely throughout a program's data structures before it's discovered.

The only reason to go the other way, and have silent NaN propagation, is if they're expected to occur very frequently. For example, consider how awkward SQL would be if evaluating over null values triggered errors - the conditionality of expressions would need to become very elaborate.

Monadic approaches to errors (Maybe, Rust's Result, that conditionally apply a function to a value only if it's not an error) blur the line between these two. Because they rely on data flow, they feel a bit like non-signalling NaNs. OTOH, evaluation generally bubbles up using monadic return types, and errors bubble up too, just like exceptions - exceptions, and consistent, universal use of monadic return types, are isomorphic and can have the same evaluation characteristics.


> exceptions, and consistent, universal use of monadic return types, are isomorphic and can have the same evaluation characteristics.

I feel like there is a small but important difference. Encoding exceptions in monadic return types allows the type system to guarantee that all exceptions are handled, or at least considered by the programmer. That is a very strong guarantee, especially as you put up layers and layers of foreign libraries and abstractions.


NaN is a property of the floating point specification.

https://en.wikipedia.org/wiki/NaN

The purpose of silent propagation is for the result to be checked once at the end of the overall expression rather than having every operation itself be checked. It's a speed optimization. It is by far more user-friendly for an invalid operation to throw an error, though.

For example, take the following expression:

    (100 / 2 / 2 / 0 / 2)

Without NaNs, the system must check each of the four operations (100/2, then 50/2, then 25/0, then the final division by 2) for an error. Every arithmetic operation, whether it involves a division by zero or not, has to carry an additional check.

With NaNs the computer just runs through the entire expression with no checks, and the user explicitly does a single check for NaN at the end. The Maybe monad operates under the same concept when used as the output type for a division operation.

The same logic applies for javascript "undefined".
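A sketch of the two styles in JS/TS (`safeDiv` is a hypothetical helper, not a real API):

    // Without silent propagation: every step needs a guard.
    function safeDiv(a: number, b: number): number {
      if (b === 0) throw new RangeError('division by zero');
      return a / b;
    }

    // With IEEE 754 semantics: run the whole chain, check once at the end.
    // (Strictly, 25/0 yields Infinity and only 0/0 yields NaN, so the
    // single check here is for any non-finite result.)
    const result = 100 / 2 / 2 / 0 / 2;
    if (!Number.isFinite(result)) {
      // handle the invalid result in one place
    }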


> This is something I wonder about JS: there are a lot of places that exceptions happen in (say) Python that just return `NaN` or `undefined` in javascript. Is it intentional? Is it a good idea?

Yes and no respectively.

It's intentional in the sense that the original JavaScript was not intended to fault (much) because it was for basic scripting, so if one small script blew up it should not bring down the entire page's scripting. This also resulted in JavaScript's exception facilities being very shitty (exception handling is not very flexible/convenient, and while that may have changed since — not sure — for a long time it was impossible to sub-type the native Error).

However it's not really a good idea for non-trivial software systems: it makes errors much harder to notice, recover from, and debug. In fact "modern" JS APIs tend to fault, e.g. parsing invalid JSON raises a SyntaxError rather than returning null or undefined; and promises and async functions convert exceptions to failures.


Invalid JSON parsing as `null` would be an exciting API decision, haha.

One thing I think I like about JS over Python is that empty containers are "truthy". This, combined with non-raising missed lookups, tends to be pretty ergonomic. For me, at least.

Though it is messier -- in Python `key in d` is well entrenched as an idiom, and in JS `if(d.key)` misfires if the value is zero, and `if(d.key === undefined)` doesn't discriminate between a missed lookup and a stored `undefined`. Though the latter must be discouraged, right?


> Invalid JSON parsing as `null` would be an exciting API decision, haha.

PHP's json_decode does that. And yes `null` is also returned as a PHP NULL.

> Though it is messier -- in Python `key in d` is well entrenched as an idiom, and in JS `if(d.key)` misfires if the value is zero

Or false, or null, or the empty string, or NaN.

FWIW `if (key in d)` also works in JS if d is a native object, though you need `d.has(key)` if d is an actual Map object instead.
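A compact illustration of those lookup pitfalls (the object here is just an example):

    const d: Record<string, unknown> = { zero: 0, stored: undefined };
    if (d.zero) { /* never runs: 0 is falsy, as are false, null, '', NaN */ }
    d.stored === undefined;       // true
    d.missing === undefined;      // also true: a miss looks like a stored undefined
    'stored' in d;                // true -- this does distinguish the two
    new Map([['k', 1]]).has('k'); // true; Maps need .has rather than `in`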


I would argue to throw; there's not much good that comes out of intentionally having an error cascade through the program until it reaches a throwable error, or produces incorrect results (this is true for both Python and JS, but the decision you're making is to allow it more than absolutely necessary for a dynamic language).


I think JavaScript simply got it quite wrong by trying to be way too clever. In theory it is fine to say that dividing by 0 is Infinity, but what can you do with Infinity?

In practice that causes the output of your program to be either Infinity or NaN, and the problem is that that is usually never what you want.

Then you want to know "Why do I get a NaN here?" and you have to study all of your program and single-step its execution to understand why you got an answer you didn't want, which is of little value to anybody.

The problem with Infinity is that you cannot come back from it. It does not cancel out if you subtract it.

Instead IN PRACTICE it is much better to catch errors early with the help of the type-system, unit-tests, and assertions.

It is an interesting challenge: How do I define the type "Any number except zero" and have my type-system then catch divisions by zero?


> I think JavaScript simply got it quite wrong by trying to be way too clever. In theory it is fine to say that dividing by 0 is Infinity but ...

This wasn't javascript's idea. They're just following the spec. https://en.wikipedia.org/wiki/IEEE_754#Exception_handling


That IEEE spec is about exceptions, like the exception for division by zero. As far as I can tell, dividing by zero in JavaScript does not cause an exception, does it?


Yes, it does. The default handling of that exception produces a signed infinity, unless the numerator was also zero, in which case you get NaN. Note that this is different from the language-level Error construct that can be thrown and caught.
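Concretely, in JS:

    25 / 0;   // Infinity
    -25 / 0;  // -Infinity
    0 / 0;    // NaN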


I think the marketing page is not doing a great job. Grain is a language compiled to WebAssembly. So comparing it with either Javascript or Elm is pointless


> Grain is a language compiled to WebAssembly. So comparing it with either Javascript or Elm is pointless

This ability is a tool and nothing precludes a wasm target for Elm (in fact Evan specifically expressed interest in it: https://github.com/WebAssembly/gc/pull/1#discussion_r1115011...).

Either way, comparing Grain to JS or Elm is anything but pointless, at the end of the day it's competing with them.


If we're speaking of competition and JavaScript, then a much stronger competitor would be PureScript.

http://www.purescript.org/

PureScript is a Haskell dialect that was built specifically to run on top of JavaScript engines. Compared with Haskell, it is strictly evaluated (whereas Haskell is non-strict / lazy, well, almost). It eliminates some of Haskell's baggage but retains Haskell's functional purity.

It's all around an awesome programming language for FP that targets JavaScript engines, far surpassing any other attempt.


> So comparing it with either Javascript or Elm is pointless

Why? They all target browsers. What else would you compare it to?


Like comparing it to Rust compiled to WebAssembly, for example?


Does Rust have any DOM integration?


Yes. For a while, you've been able to embed JavaScript wrappers using stdweb (https://github.com/koute/stdweb). There are pre-built libraries covering much of the DOM. But you may need to submit PRs if you get heavily into <video> or something else with non-standard APIs.

The shiny new toy is wasm-bindgen (https://github.com/rustwasm/wasm-bindgen), which will soon allow the use of WebIDL (https://heycam.github.io/webidl/) to wrap the entire browser API, if I understand correctly.

In practice, it all seems to work pretty well. But for public-facing sites, you need to watch the number of dependencies you include and keep the *.wasm size down.


There are vdom/components-based libraries based on that as well e.g. Yew seems popular.


They've even got the ability to query the DOM using a lightweight lib on the JS side. I think this is a more full-fledged framework and they should really talk about it. The marketing page is not up to other frameworks' level, imo.


> No runtime exceptions, ever.

I'm curious how they handle JSON parsing. For example in Scala/Play you define the class you want to parse to, and if the JSON passed in at runtime doesn't match the shape of that class then you get a runtime exception. Obviously it'd be nice if that didn't happen, but I can't think of any alternative that would make sense.


> Obviously it'd be nice if that didn't happen, but I can't think of any alternative that would make sense.

Returning a sum of success + error, as e.g. Rust's Serde[0] or Elm's Json.Decode[1] do.

The developer gets the feedback that the operation has failed at runtime, but they also get the feedback that the operation can fail and they must handle it somehow at compile-time, even ignoring the issue is an explicit decision.

[0] https://docs.serde.rs/serde_json/de/fn.from_str.html

[1] http://package.elm-lang.org/packages/elm-lang/core/latest/Js...
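A minimal sketch of that shape in TypeScript — a hand-rolled Result, not any particular library's API:

    type Result<T, E> = { ok: true; value: T } | { ok: false; error: E };

    function parseJson(text: string): Result<unknown, SyntaxError> {
      try {
        return { ok: true, value: JSON.parse(text) };
      } catch (e) {
        return { ok: false, error: e as SyntaxError };
      }
    }

    const r = parseJson('{"a": 1}');
    if (r.ok) {
      r.value; // only typed as present after the check
    } else {
      r.error; // ignoring the failure is still an explicit decision
    }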


In Scala too, well-grown libraries return errors via sum types, e.g. Either.

In Play this is actually something like JsSuccess/JsError, not an exception, although Play is pragmatic (aka dirty) at times, so it's possible that it has functions throwing exceptions.


You should look into other, saner JSON libraries. It's unacceptable to have a runtime error. Nowadays I recommend circe.


I could be wrong about this, but I think that, like Elm, it's not technically Grain that throws the runtime exception. It's whatever is reading in the JSON.

In more detail:

It's purely functional, so it cannot handle I/O at all on its own. It needs some sort of harness into which it fits which handles that, and which then calls the pure functions in Elm / Grain. In Elm's case that harness is JavaScript, and it seems to be a similar case for Grain. (Maybe for Grain it's WebAssembly? I don't honestly know enough about Web Dev to say).

During I/O (including user input, and sending data over a network), runtime exceptions can still be thrown, but once it's been converted and passed into the functional part, it's in a sense correctly formatted by definition, and the purely functional Elm / Grain parts don't throw runtime exceptions at all.


I wonder how they handle array out-of-bounds accesses...


Likely like Elm, where indexing into an array returns a Maybe. OOB return Nothing, while in-bound returns Just <value>.

Or they don't support indexing at all.
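A sketch of that option-returning style in TypeScript (`T | undefined` standing in for Maybe):

    function get<T>(arr: T[], i: number): T | undefined {
      return i >= 0 && i < arr.length ? arr[i] : undefined;
    }

    get([1, 2, 3], 10); // undefined rather than an exception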


Looks like they're using an indexed Algebraic Data Type.

https://github.com/grain-lang/grain/blob/master/src/grain-st...


There's no indexing in that.


Maybe by making the API of index-based list datatypes return the value inside a container type. In general it's a helpful strategy when creating total functions.


I don't think web languages should have comparisons with JS on their homepage now that we have wasm.


The WASM devs have really been trying to correct the "WASM will replace JS" meme. I don't think any of them believe that it will, or even that it would be a good thing if it did.


I wonder how many people actually believed that. People want to use their favorite language on both the server and the client/browser. Why is so hard to get this?


Pretty much the first sentence on the page:

> No runtime exceptions, ever. Every bit of Grain you write is thoroughly sifted for type errors, with no need for any type annotations.

That's... weird to me. That seems to posit that ALL runtime exceptions are necessarily type errors. Huh?

What about full disks, DB errors, data verification errors, parsing errors, network errors, and tons of other errors that don't appear to be type-system related?

[A] The language's authors don't realize this class of errors exists, which, um, seriously? That can't bode well.

[B] The language uses the type system to propagate such errors, `Either<Result, ErrorCode>` for example. The language is somewhat unique in that it is designed to enforce such a code style for ALL possible error conditions.

[C] The language returns null or blank objects on error conditions, such as how (early) javascript returns 'NaN' when trying to parse "hello!" as a number, instead of raising an error condition. That's a style choice I really wouldn't like, but I guess to each their own.


Grain dev here! Fixed that on the site. That line was just supposed to be about runtime type errors, not all errors.

Grain is definitely still really in its alpha form. There's lots we want to do with it that we're still getting around to! We weren't quite expecting much of any traffic to the site, but that's really my fault for having the site be public before we were at least a bit further along. :)

Even still, it's been awesome to get a lot of early feedback! We'll get a roadmap together so it's easier for people to see where we're trying to take the language.


I would assume case B, which isn't really that unique. Erlang for example follows that pattern (and has no concept of "exception"). After all, I/O errors, parsing errors, etc., aren't exceptional; they very much should be expected. Forcing them to be handled in the normal code path helps create robust systems.

(There are of course also true runtime errors -- e.g. out-of-memory -- and precondition errors -- i.e., coding errors that fall outside the capability of the type system to enforce -- which do not fit this pattern.)


> Erlang for example follows that pattern (and has no concept of "exception").

Erlang absolutely has a concept of exceptions. They're even called exceptions: http://erlang.org/doc/reference_manual/errors.html#exception....


Wow, I used Erlang extensively a few years ago and don't remember exceptions at all. (Exit reasons and process monitors, yes; but I don't think I ever wrote a "catch" expression, or encountered a library that expected me to do so.)


Yeah it's very rare to catch exception within processes, or to raise them explicitly. It's common to write something like

    {ok, Result} = foo()
though, and that raises badmatch if it fails, and you handle it in a supervisor.


You can use throw/catch to great effect when you need non-local returns.


>> The language uses the type system to propagate such errors.

...

> Erlang for example follows that pattern.

Erlang is its own beast; if you characterize pattern matching as part of a type system, which I guess it is, then...maybe if you squint?

Normal errors are indeed typically reflected in the return value, but if the error pattern doesn't match the code's expectations then a runtime exception is thrown...if you're lucky. If the error pattern happens to fit with the pattern match but the code doesn't know about it, eventually some other error will occur.

Anyway, Erlang definitely doesn't fall into the "No runtime exceptions, ever" category, and as such doesn't really fit with the options the OP presented.


> Normal errors are indeed typically reflected in the return value, but if the error pattern doesn't match the code's expectations then a runtime exception is thrown

With a sufficiently complete static (even inferred) type system, you can catch any possibility of such failures AOT and not run into any error patterns that don't match code expectations at runtime. So while the description isn't literally true of Erlang, Erlang does show a way that it could be true of a language with AOT enforcement of an adequate type system.


I think we read the OP differently. I'm just responding to the bit about errors being returned as an "Either" type. You are right that Erlang does not enforce pattern matching to succeed, though its static analyzer (Dialyzer) can catch many instances when it definitely won't.


The likely reason is B. Elm is a frontend web language that also has this guarantee, and it uses Either types for situations that can fail. It's not entirely unique in that regard, there are other languages (mostly) without exceptions; Rust, for example.


Elm goes even further, insisting that you actually handle the Error types in some way.


Errors could presumably be provided as return values (similar to Rust's Option or Haskell's Either), though we know from languages like Go that propagating these manually is a chore and an eyesore.

A harder problem is errors that cannot be handled by a simpler type system. For example, consider Haskell's head function, which is defined like this:

    head :: [a] -> a
In other words, it takes a list, and returns the first element. But if the list is empty, it throws an exception (technically called an "error" in Haskell):

    > head []
    *** Exception: Prelude.head: empty list
Haskell's type system is extremely powerful, but it cannot catch this at compile time.

In order to express constraints such as "list must not be empty", "number must be higher than 1", or "file must be open", the type system needs to understand the semantics of the values it describes. This can be solved by dependent typing, but that's a complicated concept currently implemented by only a handful of languages (e.g. Idris).


> Errors could presumably be provided as return values (similar to Rust's Option or Haskell's Either), though we know from languages like Go that propagating these manually is a chore and an eyesore.

Mostly because Go is terrible.

> Haskell's type system is extremely powerful, but it cannot catch this at compile time.

The problem is not the type system, it's that Haskell's error handling is inconsistent.

Elm, despite having a much simpler type system than Haskell, handles this properly:

    head : List a -> Maybe a
This has nothing to do with the type system itself and everything to do with how the type system is used (or not used). The designers of Haskell's `head` decided to make it a partial function, so inputs that type-check can fail to generate any output at all.


This seems like such an obvious solution that it makes me wonder why the creators of Haskell did not think of it. Lists are a very basic type used all the time; if you don't get that right, a lot of the value of the great type system is lost, I would think.


They did. The elm-style version is in the standard libraries as:

    listToMaybe :: [a] -> Maybe a
(http://hackage.haskell.org/package/base-4.11.1.0/docs/Data-M...)


That's a lengthy name for the simple operation of getting the first element of a list if there is one. I would have preferred 'car' :-)


It's a bit verbose, but this kind of thing is far less prevalent in Haskell anyway; most of the time you destructure a list through pattern matching, not explicit head/tail functions.


I think it's because they weren't very clear on how they wanted to handle partial functions / error reporting, and historically functions like head are partial.

Making them total makes them a bit heavier for the best case, and furthermore can seem unnecessary (as you could just pattern-match the list itself).

You can also see this oddity in Haskell allowing non-exhaustive case, and allowing refutable patterns in bindings.


> A harder problem is errors that cannot be handled by a simpler type system. For example, consider Haskell's head function

Haskell's head function is often pointed to as a wart even given Haskell's type system; the signature really ought to be:

  head :: [a] -> Maybe a
With a definition (equivalent to):

  head [] = Nothing
  head (x:_) = Just x
It's absolutely not a good example of a source of errors that can't be dealt with in a type system like Haskell's.

> In order to express constraints such as "list must not be empty" or "number must be higher than 1" or "file must be open, the type system needs to understand the semantics of the values expressed by a type system.

Sure, but you don't have to express those as type constraints to avoid runtime errors when they occur; if you can express the distinction in normal code you can use distinctions of return type.


You still can't avoid runtime errors because of non-terminating functions: determining whether an arbitrary function in a Turing-complete language is total is undecidable. I don't see any mention of a totality checker (which would make the language not Turing complete) either. Idris, for example, has one.


> This can be solved by dependent typing

Dependent typing isn't needed for some of the use cases you mentioned:

> list must not be empty

You can use NonEmpty: https://hackage.haskell.org/package/base-4.9.0.0/docs/Data-L... We are still encoding this special case and our intent in types. The con is that we need special support for it (a special library), whereas with dependent types we could reason about this use case without a special library.

EDIT: added newlines
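For comparison, a similar non-empty guarantee can be sketched structurally in TypeScript (no special library, though also no dependent types):

    type NonEmpty<T> = [T, ...T[]];

    function head<T>(xs: NonEmpty<T>): T {
      return xs[0]; // total: the type guarantees at least one element
    }

    head([1, 2, 3]); // ok
    // head([]);     // rejected at compile time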


This is actually a really complex problem. Recently in C++, the discussion is happening regarding `nothrow` functions. Can such functions allocate memory (and hence throw "out-of-memory" errors)?

The basic tradeoff is: most programs can't (or don't) handle an "out-of-memory" error anyway, so for those, "nothrow" and "nothrow+allocates" are essentially the same. On the other hand, some (few) programs are written to handle "out-of-memory" errors, and C++ is one of the few languages that targets such software, so in those programs the distinction is meaningful.


I mean, this looks interesting, but why was this posted prematurely? Now I have to remember to look this up again in two months when they actually have a website that isn't 3% done. Basically all of the docs are in the "todo" stage. I always get a bit annoyed when people post their pages way too early.


"If you are not embarrassed by the first version of your product, you've launched too late." -Reid Hoffman

In this case, publicising, and getting more contributors can help fix these problems.


What makes you think that the HN poster has any affiliation with the project, or vice versa?


Yeah, knowing one of the developers personally, they actually didn't post this. Someone else posted this to Twitter/Reddit/HN.


I believe the person you're responding to means posting the public web page, not the submission here.


Maybe they prefer to focus on writing the actual language rather than PR'ish stuff? It's quite refreshing after seeing too much of the opposite (especially in the blockchain space, where in 99% of cases you have just a pretty PR page and zero product/code).

I’m sure they’re open and would be happy to see contributions.

Translating source code into docs is a great way to learn, and it looks like a very pleasant language to read.


Documentation can hardly be called "PR'ish stuff". It is also (ideally) far from some kind of "translation" of code into natural language.


I have a distrust for languages that advertise themselves as "strongly-typed" or "bringing sanity to the front-end" - the former doesn't have a single clear definition, and the latter - while not the case here - is questionable in its own right.


Moreover, they don't really show anything to make their point — I can't see any type annotations on the frontpage.


> it's 2018

> type annotations

https://en.wikipedia.org/wiki/Type_inference


Pretty lazy comment.

There's a reason you still at least annotate the top level in Haskell, Elm, and friends even though the compiler could've inferred it. It's because humans have to read it.


Type annotations can be helpful in writing maximally polymorphic code and in self documentation, independent of type inference. When I write Haskell, I still annotate my code.


Also, throwing out all type annotations makes code pretty hard to read, especially outside your IDE like in a Github repo.

Taken to an extreme, it feels like reading a dynamically-typed language where you have to keep the various variable types in your head when debugging.

There's a middle ground.


How does it handle memory management? If I'm not mistaken, WebAssembly programs use a fixed linear-memory buffer, which means you need some runtime support to manage it, or you apply some kind of technique like this[1].

[1] http://home.pipeline.com/~hbaker1/CheneyMTA.html


It appears to allocate memory but never free it:

https://github.com/grain-lang/grain/blob/master/runtime/src/...

A lot of new languages start out this way, where they just assume memory is infinite and then eventually add the GC later. That's a really nasty ball of technical debt to be sitting on, and I've seen nascent language implementations die because they couldn't get past it.


Since it's pretty ML-flavored, a comparison with Reason [1] would be nice.

I see compiling to WASM as the key differentiator now. This likely means that all the mechanics required for an ML-style language, like garbage collection, must be included in a runtime library.

[1]: https://reasonml.github.io/


Only until the WebAssembly spec includes access to the host GC:

>>> there's already a very efficient, solidly tested, constantly improving garbage collector in your browser that uses all the possible dirty low-level tricks known to mankind, which is the GC being used for JavaScript. What if we could give you access to the garbage collector directly?

https://blog.benj.me/2018/07/04/mozilla-2018-faster-calls-an...


Isn't it still pretty experimental?


Well, Grain is as well so is that really an issue?


How does the example in the Functions section show that functions are first-class citizens?

It is merely calling a function within another function. I would expect the second function to take the first function as a parameter (which JavaScript already has).


I was hoping for currying, but was disappointed when all the functions were fully applied. I don't really see, either, how this tells anything about first-class functions :/


Why do all new languages aimed at the browser seem to be "strongly typed, functional". I have no objection whatsoever, but we've seen Elm, Purescript and now this. Or is my perception skewed by the "HN Echo Chamber"?


Probably because people already have their pick of those languages on the server but not on the browser client where it's even more important to get things right from a UX standpoint.

Static-typing, immutable datastructures, FP... these are things that even the Javascript ecosystem has been moving towards with React, Typescript, Redux, and more.

If you're making a language for the browser, you can offer a lot of value by rolling these into a single tool. For example, it may be much simpler for you to use Elm than to approximate Elm with your own menagerie of JS tools.


"Strong typing" as a term has actually been muddied and doesn't mean much nowadays other than "not C", being re-purposed for marketing reasons. Irregardless of the language, you don't want your language to be weakly typed, although in fact you can say that about most dynamically typed languages.

Leaving the marketing efforts aside, what people want to express is "statically typed". A static type system can prove a lot of things about your program. Traditionally you could rely on a (good) static type system to prove that your program is memory safe, or that it has as few runtime errors as possible, preferably zero. N.B. don't fall into the trap of saying that a language like C is statically typed. When you can pass void pointers around and cast them to anything, that's in fact not static typing.

Type theory has a lot to do with math logic actually, the two being highly interchangeable. Statically typed compilers can prove various properties about your program, freeing the developer from writing certain classes of tests, plus it makes refactoring easier.

Refactoring, correctness, these are things people have always struggled with in JavaScript, especially due to JavaScript's nature. Quick, can you tell me the answer to "Math.min() < Math.max()" in JavaScript?

You know it's a trick question, given this is JavaScript, don't you ;-)
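(Spoiler, since it's a good one: with no arguments you get the identity elements, so the comparison comes out false.)

    Math.min();               // Infinity
    Math.max();               // -Infinity
    Math.min() < Math.max();  // false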

---

Other than ClojureScript, I don't know of any other dynamic language targeting JavaScript engines that's interesting. None. And ClojureScript is only interesting because it is a Lisp and because it has sane conventions and defaults.

Languages providing superficial syntactic sugar, such as CoffeeScript, have been a really bad idea and I'm glad that we've moved on.

Therefore the innovation that happens tends to happen in statically typed languages, because there's a lot there left to explore and because such languages tend to bring value.

That said I always wonder what new languages bring to the table versus already available alternatives like PureScript, Scala.js, ClojureScript, Elm, etc. And from the TFA I don't understand what those advantages are.


I think it's just that JS caters to those features the least, so that's where you see new languages from people that want them.


What is meant by "zero runtime errors?" An example of a "runtime error" is the head of an empty list; how does Grain handle this?


Grain seems way unfinished and the stdlib does not seem to have any such function[0], so here's how Elm handles it instead as it has the same target/ethos of avoiding runtime errors: http://package.elm-lang.org/packages/elm-lang/core/latest/Li...

[0] it's not very useful either if you have pattern matching on lists, you can just match the list directly.


The DOM example further down the page, which does no result checking on queries that can absolutely fail to find an element, makes me very skeptical of this claim.


Presumably the head function returns a value of type Optional/Maybe, that needs to be pattern matched against statically.


IMO a modern web language should have some notion of events, as in my Sciter ( https://sciter.com/event-handling/ ), for example:

    event click $(table > thead > tr > th) {
      // click on table header cell
      // 'this' is that th element
    }

    class Widget : Element 
    {
       function attached() {...}

       event click $(span.up) { this.increment(); }
       event click $(span.down) { this.decrement(); }

       event mousedown { this.state.focus = true; } 
       ...
    }


YES. Every time I see a new language come out without first class event support it is obvious the creator never wrote a lick of front end code in their life.


Can you explain how these differ from ordinary callback functions?

I haven't done any web development to speak of, but I've done a fair bit of GUI stuff, doing things like this snippet appears to in languages that don't have "events" as a separate category of thing.


This

    observable.on("click", observerFunc);
    // or observable.addEventListener(...);
is an executable statement. And this

    class Widget : Element {

      event click { … observer's code … }
    }
is a declaration.

That executable statement needs to be called at some point of time. So the observable can be in two states - with and without that event handler.

While class declaration is an invariant (at least in Sciter) - as soon as element is in DOM it has that class and consistent set of event handlers in place. Such binding is done by, again, CSS declaration:

    widget {
      prototype: Widget url(path); // <widget> controller
      display: block;   
      ...  
    }  
"I've done a fair bit of GUI stuff"

AFAIR VB6 and Delphi allow declaring event handlers without explicit/runtime binding - that's close as a concept to the above.


If it's not too late to change, might I suggest doing away with the 'let rec ... and ...' syntax? The way to group mutually recursive bindings can be determined by finding the strongly connected components of the graph of the references between functions, so users don't need to manually specify it.
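A sketch of that grouping using Tarjan's SCC algorithm; the `Binding` shape and `recGroups` are hypothetical, with TypeScript used just for illustration:

    type Binding = { name: string; refs: string[] };

    // Each strongly connected component of the reference graph becomes
    // one implicit `let rec ... and ...` group.
    function recGroups(bindings: Binding[]): string[][] {
      const byName = new Map(bindings.map(b => [b.name, b] as const));
      const index = new Map<string, number>();
      const low = new Map<string, number>();
      const onStack = new Set<string>();
      const stack: string[] = [];
      const groups: string[][] = [];
      let counter = 0;

      function visit(v: string): void {
        index.set(v, counter);
        low.set(v, counter);
        counter++;
        stack.push(v);
        onStack.add(v);
        for (const w of byName.get(v)!.refs) {
          if (!byName.has(w)) continue; // reference to something external
          if (!index.has(w)) {
            visit(w);
            low.set(v, Math.min(low.get(v)!, low.get(w)!));
          } else if (onStack.has(w)) {
            low.set(v, Math.min(low.get(v)!, index.get(w)!));
          }
        }
        if (low.get(v) === index.get(v)) {
          const group: string[] = [];
          let w: string;
          do {
            w = stack.pop()!;
            onStack.delete(w);
            group.push(w);
          } while (w !== v);
          groups.push(group);
        }
      }

      for (const b of bindings) if (!index.has(b.name)) visit(b.name);
      return groups; // dependencies come out before their dependents
    }

    // isEven and isOdd land in one group; main gets its own.
    recGroups([
      { name: 'isEven', refs: ['isOdd'] },
      { name: 'isOdd', refs: ['isEven'] },
      { name: 'main', refs: ['isEven'] },
    ]);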


I do like the explicitness of marking recursive functions with 'rec', however. What would the syntax look like if it was no longer required?


But then how do you shadow previous bindings?

    let f x = if (x == 0) then 0 else (f x)


Potentially controversial opinion: shadowing bindings is a bad idea.


How does this compare to Elm and F# (Fable)?


It seems as if Grain compiles to WebAssembly vs compiling to JS.


Nothing requires Elm or Fable to compile to js. I don't know about Fable but both Elm's maintainer and its community are interested in eventually targeting wasm.


Why not just compile OCaml to wasm? Why another language?


Poking around in the sources, it looks like this is the OCaml compiler, with the frontend apparently tweaked to accept the new syntax. But this is not mentioned anywhere that I can see. The "Copyright copyright 2017-2018 Philip Blair and Oscar Spencer." line in the README is highly misleading in this context, since most of the actual source files are marked with OCaml's copyright header.


Many files are taken from the OCaml compiler and then adapted, but the changes seem a bit deeper than just a different syntax. It would indeed seem fair for the authors to at least make it clear in the top-level README that the front-end (parsetree representation and type-checking) indeed is / started as a fork of the OCaml code base.

Considering ongoing efforts to create a WebAssembly backend for OCaml (and thus Reason), I wonder what would be the selling point of Grain.


Even syntactically, this looks a lot like if you took ReasonML, disallowed type annotations, did away with its ecosystem, and compiled to WebAssembly.

Aside from the wasm compilation, I don't see why this exists.


That was my question only having recently started looking into ReasonML seriously. Sounds like I'll just stick to Reason!


> Many files are taken from the OCaml compiler and then adapted [...]

I went back and checked, and Grain not only takes OCaml's code without clear attribution but redistributes that code under GPLv3 where the original code is under LGPLv2.1. That's bad.


To have a JavaScript feel with type inference, which I assume they are aiming for given the syntax, I would like to see extensible records (structural typing), which remove the need to declare nominal record types. Purescript and Elm both have this feature.


How does Grain compare with the Reason language from Facebook which is also based on OCaml?


It doesn't encourage immutability, because variables are not read-only by default. I would advise the language designers to require a "mutable" keyword for variables that can be re-assigned. (Similar to F#'s approach.)


I think you're mistaken. It seems to me that you need to create an explicit "box" (a reference) to make things mutable.


ML approach.


The "DOM Interaction" and "DOM Manipulation" sections just show imperative code. So how is this a fundamental improvement over using jQuery? Especially for a language that markets itself as "seriously functional", it feels like a big step backward. I can't see what benefit this actually brings except being easier for OCaml developers to write web apps with. And even then they'd probably have no trouble learning TypeScript which doesn't need a bridge since it's a strict superset of JS.


Is the language incomplete like the docs?


What's the incentive people have to combine type systems with functional programming? At least in broad strokes I think of typing and immutability (which is the goal of functional-style programming) as solving the same problem in two different ways. If you write control flow that returns values which are immutably bound lexically, what is the point of a type system?


Immutability gives guarantees at runtime, a type system gives guarantees at compile time. Or, from a different perspective, immutability guarantees a certain property – the value – of an item, while a type system guarantees different properties for all items of a type. Those two concerns are mostly orthogonal to each other.


Moderators: the title should have WebAssembly in it to avoid confusion. This is not yet another language that compiles to JavaScript.


This looks nice, but it does need more documentation and an online playground/ecosystem, which understandably should come in time.


I'd hope this is indicative of wasm precipitating a wave of creativity. From a purely technical point of view I actually don't care. Elm is fast, complete, elegant and has perhaps the most user-friendly compiler on the planet. The fact it happens to target js and not wasm (at this point) is practically irrelevant.

I can see, though, that wasm might encourage experimentation by a wider group of people who are put off by targeting js.

Coming back to Elm, it'd be great to see others consider the wider architectural questions. The Elm architecture and language are beautifully complementary. That's stark when looking at react - which was copied from (or at least inspired by) the Elm architecture. However, being js-based, it doesn't have the mutual consistency of elm - and so has to resort to syntactic gymnastics.

Anyway, that's getting off topic. Grain is clearly early stage. Anyone motivated enough to conceive and deliver a language deserves some encouragement. Will be interesting to see how it evolves.


How does Grain prevent bottom with static type checking? Totality is undecidable in Turing complete languages, so “no runtime errors” is probably false (maybe they “solved” the halting problem?).


Arch installs some version of dune-project that the make command does not recognize. Not in the mood to edit a package build just to try this programming language out.


Dart anybody?


Grain is quite different. Grain is more functional (no classes or context, tuples). Dart compiles to Javascript. Grain compiles to web assembly. Dart requires you to define types. Grain provides type safety and zero runtime errors without ever defining types manually.


The documentation about types says to look at the README of the compiler, which is very short and doesn't say anything about types. Same thing for many other entries in the side menu. The examples don't have any type declarations. So, type inference and no reuse of the same variable with a different type, even when forcing mutation? Unfortunately the documentation is still too skinny.

A note to language designers: I can understand the { } but if you use them why do you also need the ( ) around the conditionals here?

  if (n <= 1) {
    n == 0
  } else { 
    isOdd(n - 1)
  }
The parser can find where the condition starts (after the if) and ends (at the {) and we won't have to type two useless characters. It's ergonomics.


( ) are only optional if { } are mandatory, which is not true for some languages. e.g. in Haxe, one can write

    static function isEven(n)
        return if (n <= 1)
            n == 0;
        else
            isOdd(n - 1);

    static function isOdd(n)
        return if (n <= 1)
            n == 1;
        else
            isEven(n - 1);
i.e. An if expression consists of other expressions that may or may not be a block expression ({ }).


There is a great deal of variability in languages about that.

Python ends if lines with a : which might help the parser, but maybe not because it was a late addition to help readability. For me it adds work when moving code around because I have to add or remove the :

Ruby does totally without any terminator in that context (in other contexts it has its own {} or do...end). The parser keeps processing successive lines until it understands that what follows can't belong to the conditional. I prefer that approach because it's the language/compiler that has to help me, not the other way around.


Well if you are worried about those 2 characters, you should be stoked about Grain, because it gives you type safety without ever defining types. Think how many characters you will save in an application by never defining a type and at no cost.


Well, the languages I'm using are Ruby, Python, Elixir, JavaScript so I definitely like not to write types. Still, I don't know how types work in Grain. The documentation doesn't say anything about it or it's very well hidden.


Are you asking how they are implemented? You'll have to read the compiler code for that. But the documentation does tell you... "No runtime type errors, ever. Every bit of Grain you write is thoroughly sifted for type errors, with no need for any type annotations." So you never write them, but you get all the benefits as if you did.


Reading the compiler code... well, I think I'll wait they write the documentation.

But how's that possible in general? Example, JSON parsing of data from a HTTP request. This is pseudo code

   # request.data is
   # {"a":1, "b":2, "c":"a string"}
   data = parse(request.data) 
   total = data["a"] + data["b"] # 3
   total = total + data["c"] # ops!
The last line is either a compiler error (how? forbidding input is not an option) or a runtime error (Grain doesn't have that) or what?


I imagine the result here would be the same as in Javascript. This is not an error in Javascript. It would concatenate them and convert to a string. So, you would get the string "3a string". I do see your point though and now agree there should be more documentation covering these cases


So it's more like elm then?


More like Elm than Dart, but Elm also compiles (transpiles?) to JavaScript, while Grain compiles to WebAssembly.


Dart is amazing


Yes, Dart is amazingly broken for a modern language. This compiles:

    List<Animal> animals = [Dog()];
    List<Cat> cats = animals;
What is a static type system worth if it doesn't catch type errors at compile time?


Dart 2 has a sound type system and catches this at compile time.


This is just a prototypal IDEA, right? Motivation? Benchmarks? Demo? Build system?


Has anyone found in the sources if/how they treat garbage collection?


No, I would be interested in that too. I don't know anything about Web Assembly but maybe GC is handled by Web Assembly?


There are plans to eventually integrate GC into wasm, but at the moment the GC has to be implemented and bundled in the wasm file. That is in fact one of Rust's advantages there. And of course wasm supporting a GC does not mean languages targeting wasm will want to use it (and lose the flexibility of providing their own).


Yes. They currently use reference counting since WASM doesn't allow stack inspection.


Do you have a link to the source?


A functional language with readable syntax for mere mortals?

Sign me up :)


isOdd and isEven don't work with negative numbers.


anybody try it?



