> it's a high-level language with static types, so easy to make large scale changes as we figure out what the language/product was
> you mostly model data with sum types, which in my mind are the best way to model data
> it's very similar to the language I wanted to build (in particular, we could reuse built-in immutable data structures for Dark's values)
> it had a reputation for being high-performance, which meant that we could write an interpreter for Dark and not have it be terribly slow (vs writing an interpreter in python, which might be too slow)
> We have struggled to make editor tooling work for us.
> Lack of libraries
> Multicore is coming Any Day Now™, and while this wasn't a huge deal for us, it was annoying.
> we also use ReasonML
> laziness and pure functional-ness I felt were not great
> I plan to keep the frontend in ReasonML
You want a statically typed functional (but not pure) language with sum types, immutable data structures, good performance, editor/IDE support, a broad library ecosystem, multithreading, a C-like syntax (I assume, since you're using ReasonML), and a nice compile-to-Javascript experience to run in the browser.
I don't want to sound too much like a fanboy, but all this really sounds like Scala. You want Scala.
Scala.js generates significantly larger bundles than Rescript — because OCaml's semantics map more closely to those of JS, the runtime dependencies are smaller and there are a lot of tricks to minimize indirection/overhead. Additionally, interop with npm modules is much less painful with Rescript and bindings for popular third party libraries are more mature. Ignoring the merits of Scala-the-language, there are very good reasons to pick Rescript over the Scala.js toolchain.
In the small, Rescript generates smaller code than Scala.js, that is true. It is mostly a question of the interdependencies in the standard Scala collection library, though, not so much the semantics of the language. And Scala.js also has its share of "tricks" (aka optimizations) to reduce indirections and overhead.
Yes, Rescript has more built-in types that map straightforwardly, but that is at the cost of some correctness. For example, using a JavaScript array means that JavaScript can resize it under your feet. Using `undefined` to represent `None` means that you cannot tell the difference between `None` and `Some(None)`. It's a fine trade-off to make. Scala.js happens to make the other trade-off, and offers separate types for JavaScript interop.
I like the concepts and design of Rescript, really. It's very interesting because they have all the essential requirements right (IMO) like comprehensive JavaScript interop, while making all the opposite design decisions on nonessential trade-offs compared to Scala.js.
> ReScript is more correct than scala, ReScript's type system(essentially borrowed from OCaml) is sound while scala is unsound.
Ah, I did not make it clear what I meant by "correct" in this context. I meant "faithful to the original language" (OCaml natively compiled and Scala on the JVM).
You are of course right about the type systems. (I believe many soundness holes are fixed in Scala 3.)
> > Using `undefined` to represent `None` means that you cannot tell the difference between `None` and `Some(None)`
>
> I don't know where you get the impression, of course they are different in ReScript.
I get that impression from the document linked above, which says that `None` is represented as `undefined` and `Some(x)` is represented as `x`, so `Some(None)` must be represented as `undefined` as well. Oh, and `()` too. So at run-time you can't tell which is which.
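As a point of comparison (my aside, not from the linked document): Rust runs into the same nesting question with its null-pointer optimization, and makes the opposite trade-off — the nested `Option` pays for an explicit tag so `None` and `Some(None)` stay distinguishable at run-time:

```rust
use std::mem::size_of;

fn main() {
    // Null-pointer optimization: None for Option<&u8> is represented as the
    // null pointer, so the Option adds no space over the bare reference...
    assert_eq!(size_of::<Option<&u8>>(), size_of::<&u8>());
    // ...but nesting forces an explicit discriminant, so the two Nones differ.
    assert!(size_of::<Option<Option<&u8>>>() > size_of::<Option<&u8>>());
    assert_ne!(Some(None::<&u8>), None::<Option<&u8>>);
}
```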
> One thing missed is that ReScript compiler may be 100 times faster than Scala.js (I am not kidding)
Yes, I believe that's true. It is often mentioned as the biggest weakness of Scala, by advocates and detractors alike.
“Fewer” is a relative term - F#’s type system is noticeably less fancy (basically generics are “weaker” than in OCaml) but having the entire .NET System library is very useful for practical programming, along with many 3rd party options. The author mentioned a lack of GCP support in OCaml as a major stumbling point, and also mentioned how nice it was that F# had a built-in immutable map. There is more to computer science than type theory - System.Collections.Concurrent is particularly cool, and hard to do in OCaml. If you know C#/Java/C++/etc some of that code is also very well-written and informative.
There are use cases where OCaml’s type system leads to elegant type-safe code that, if expressed in F#, involve inelegant unsafe boilerplate. It’s usually not a whole lot and it’s typically routine but it certainly happens. For some teams and products OCaml is a better choice.
But there’s also a reason why many developers abandoned Lisp for Python: some theoretical fanciness and robustness only infrequently outweighs ease-of-use and quality-of-life.
Rust doesn't work well as a purely functional language. You'd be constantly fighting the type checker for memory issues. (Or otherwise, why doesn't Haskell take the same route as Rust and ditch its garbage collector?)
> You'd be constantly fighting the type checker for memory issues.
I think you mean borrow checker for ownership issues...
> why doesn't Haskell take the same route as Rust and ditch its garbage collector?
The Rust language was designed in a way (with ownership) that doesn't need a GC, while Haskell has many different properties (such as lazy evaluation of everything) that I don't think would work well without a GC.
The borrow checker is usually a nuisance when you're writing highly mutable code, which isn't a problem for functional code. If you need to store the same data in multiple structures, the sibling is right, .clone() and .to_owned() will usually fix your problem with a very small overhead.
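A minimal sketch of that point (my example, not from the thread): when the same data needs to live in two structures, cloning sidesteps the borrow checker entirely, at the cost of one copy:

```rust
fn main() {
    let name = String::from("dark");
    // Holding `name` by reference in both Vecs would drag in lifetimes;
    // a .clone() gives each structure its own independent copy instead.
    let by_insertion: Vec<String> = vec![name.clone()]; // independent copy
    let by_length: Vec<String> = vec![name];            // original moved here
    assert_eq!(by_insertion[0], by_length[0]); // equal contents, no shared ownership
}
```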
[1] just looked at it again, and while it doesn't default to mutability, it doesn't default to immutability either (i.e. comparing with F#: Scala has 'val vs var' while F# has 'let vs let mutable'; since the mutable form is much longer in F#, the default/lazy thing for devs to do there is immutability).
Honestly I think the "val vs var" debate is kind of a misdirection. The real difficulty with mutability comes from interior mutability. The Scala standard library has collection.mutable.{Map, Set, ArrayBuffer, etc.} and collection.immutable.{List, Map, Set, etc.}, with collection.immutable being the one in the prelude (imported by default). In Scala, as in any ML-family language that supports mutability, you can do:
val immutableValue = mutable.Map.empty[String, Int]
immutableValue.put("hello", 10) // compiles and mutates, despite the val
In this case the distinction between "val" and "var" is at best a misdirection. The only language I know of that makes interior mutability explicit is Rust, where your map must be behind a `&mut` "type marker" to be able to use the `insert` method, and the only way to get that marker is to have defined the map with `let mut`:
let clearly_immutable: HashMap<String, i32> = HashMap::new();
clearly_immutable.insert(String::from("hello"), 10); // does not compile: the binding is not `mut`
The conclusion here is that if you are not working with Rust, immutable data structures (where no method mutates the object it refers to) are what guarantee you won't run into spooky mutability at a distance.
However, it would be disingenuous not to mention that in the wild you can very quickly run into Scala code that is just "Java without semicolons". You can very deliberately break null safety in Scala. For example, `val x: (Int, Double) = null` does compile. You won't normally run into `null` unless you are interacting with Java libs or with code from programmers who don't understand type safety.
I want to point out: Scala lets you do all those nasty things, but it has to be a deliberate choice. The mutable data structures from the standard library are clearly marked. There are no methods in the standard library that return null (outside of the regex methods, which are thin wrappers around Java regexes). Writing mutable code requires completely different architecture and design choices. If you are on an immutable team, it will be very hard to justify that kind of code; on a mutable team, the reverse might be true.
Your Rust example isn't considered interior mutability, since it needs `&mut`. Since `&mut` references are exclusive, they're quite similar to immutable values in functional languages, despite the API appearing mutable.
In Rust the 'interior mutability' term is used for types like `Mutex`/`RefCell`, `Cell`, or atomics, which are mutable even through shared references. These come with the same problems as interior mutability in Scala.
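A minimal sketch of that distinction (my example, names are illustrative): the binding below is a plain `let` and we only ever hold shared references to it, yet the `RefCell` still lets the contents change underneath:

```rust
use std::cell::RefCell;

fn main() {
    // Immutable binding, and `alias` is only a shared reference...
    let shared: RefCell<Vec<i32>> = RefCell::new(vec![1, 2]);
    let alias: &RefCell<Vec<i32>> = &shared;
    // ...yet RefCell permits mutation through it: interior mutability.
    alias.borrow_mut().push(3);
    assert_eq!(*shared.borrow(), vec![1, 2, 3]);
}
```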