
Rust-style resource management in OCaml - winter_blue
https://arxiv.org/abs/1803.02796
======
spraak
Tangentially related: has anyone been exploring ReasonML? I am enjoying it so
far, but I'm still hesitant because I'm not sure whether its momentum (in the
community, with packages, etc.) will be sustained, and the interop with JS
still feels somewhat clunky (though that is partly BuckleScript)

~~~
parley
I have, and really enjoy it. My previous go-to for frontend was TypeScript,
but it wore me down. I am ridiculously productive in ReasonML with proper
ADTs, (exhaustive!) pattern matching, immutability and other niceties. The
superb type inference is also great for prototyping, and I could go on and on.
Rust for backend and ReasonML for frontend is making me very happy these days.
I think the community will only keep growing. I feel like I’m an extra in the
filming of Revenge Of The MLs, and it feels great.

~~~
spraak
Can you share more about how you're enjoying Rust alongside ReasonML? I don't
have any experience with Rust, but the combination sounds conflicting.

~~~
parley
Sure. I would say that Rust and ReasonML are very alike and very different at
the same time, and I'll explain why I feel that. ReasonML's own docs give Rust
a notable mention: "Close cousin of ours! Not garbage collected, focused on
speed & safety."

They both

- have powerful static type systems with good inference.

- default to immutability, with optional mutability.

- encourage functional constructs over procedural/imperative ones.

- have ADTs and exhaustive pattern matching with great ergonomics, which can
be used for everything from rigorous error management to unambiguous program
state representation.

- have escape hatches for writing "I know what I'm doing and need more wiggle
room" code, but those are clearly visible for linting/auditing/testing, as
opposed to languages that allow foot guns to invisibly permeate entire code
bases because the language has no clearly discernible rigorous subset that one
can easily and naturally keep to.
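
To make the ADT point concrete, here's a tiny sketch (hypothetical names, just
for illustration) of what exhaustive matching buys you in Rust; the ReasonML
version would look nearly identical:

```rust
// Hypothetical ADT modelling connection state. The `match` must be
// exhaustive: adding a new variant later is a compile error until every
// use site handles the new case.
enum Connection {
    Disconnected,
    Connecting { attempt: u32 },
    Connected { session_id: u64 },
}

fn describe(c: &Connection) -> String {
    match c {
        Connection::Disconnected => "offline".to_string(),
        Connection::Connecting { attempt } => format!("retry #{attempt}"),
        Connection::Connected { session_id } => format!("session {session_id}"),
    }
}

fn main() {
    println!("{}", describe(&Connection::Connecting { attempt: 3 }));
}
```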

The list of features that make them alike can be made as long as your arm, and
it's basically a laundry list of features that (once internalized in the
developer's mind) help one build robust, correct, maintainable software. Sure,
they're not carbon copies of each other, but what it comes down to is enabling
and encouraging the same semantic constructs.

So where are they not alike? I would say that the main differences result from
one constraint: ReasonML's automatic memory management is a run-time solution
(garbage collection), whereas Rust's is a compile-time solution (using static
analysis), coupled with Rust's focus on raw performance. Everything else about
the Rust language has (IMHO) been designed with the same sound underlying
values as many other languages that encourage correctness. The differences one
notices when learning Rust are really mainly just the concessions that were
necessary to achieve compile-time automatic memory management and raw
performance.
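
A minimal sketch of what "compile-time automatic memory management" means in
practice (hypothetical example): the compiler tracks ownership statically and
inserts the deallocation where the final owner goes out of scope, so no
collector runs at runtime.

```rust
// Ownership moves are checked at compile time; the free for the heap
// buffer is inserted statically where the final owner goes out of scope.
fn take_ownership(s: String) -> String {
    s // ownership of the heap buffer moves in and back out; no copy, no GC
}

fn main() {
    let s = String::from("hello");
    let t = take_ownership(s);
    // println!("{}", s); // compile error: `s` was moved
    println!("{}", t);
} // `t` is dropped here; deallocation happens at a statically known point
```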

Did that help? And as always, if I'm mistaken about anything, please correct
me. I'm not a PLT person.

~~~
empath75
Are you planning on also looking at Rust on the front end?

~~~
parley
Absolutely. I've been a Rust lurker (and later user/advocate) since
~2012/2013, I think, and the visible strides being made in WASM support right
now, coupled with my trust in the Rust community, give me great hope for Rust
on the front end.

It remains to be seen to what extent, and for which problems, it will be
suitable. But just as Rust, despite the fears of some, is (again, IMHO)
turning out to be ergonomic enough to write all kinds of end-user apps in
(some areas are still lacking, but give it time), so too I think it might
surprise people with how widely applicable it turns out to be on the front
end.

And I'm hopeful and optimistic. I feel like we're seeing some tides turn with
respect to how important software correctness/robustness really is perceived
to be and -- importantly -- what we're prepared to pay for it. We're still a
"young" industry compared to a lot of engineering disciplines, so it's not
surprising that we're still maturing with periodic waves of change. Of course,
many will disagree with the direction, but for me it can't come fast enough.

Sorry, I hadn't ranted in a while, and it _is_ Friday afternoon. =o)

------
steve-chavez
Haskell also has an ongoing implementation of linear types:
[https://github.com/ghc-proposals/ghc-proposals/pull/111](https://github.com/ghc-proposals/ghc-proposals/pull/111).

Does anyone know what the major differences between these two implementations
are? Will linear types work better for OCaml than for Haskell?

------
Ono-Sendai
Why would you want to burden a GC'd functional language with the syntactical
and semantic requirements of Rust's resource management?

~~~
catnaroek
Because Rust-style resource management is not only about performance, but also
about correctness. And the ML crowd cares a lot about the latter - rightly so.

I'd kill for a language with ordinary ML types for _values_ (integers,
strings, etc.), and substructural types for _objects_ (file handles, database
connections, etc.).
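
A tiny sketch of what substructural "object" types buy you, using a
hypothetical `Handle` type in Rust (whose ownership types are affine): if
`close` consumes the handle by value, use-after-close becomes a compile-time
error instead of a runtime one.

```rust
// Hypothetical resource type: `close` takes `self` by value, so the
// compiler rejects any use of the handle after it has been closed.
struct Handle {
    name: String,
}

impl Handle {
    fn open(name: &str) -> Handle {
        Handle { name: name.to_string() }
    }
    fn read(&self) -> String {
        format!("data from {}", self.name)
    }
    fn close(self) {} // consumes the handle
}

fn main() {
    let h = Handle::open("db");
    println!("{}", h.read());
    h.close();
    // h.read(); // compile error: `h` was moved by `close`
}
```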

~~~
zenhack
Worth pointing out, the resource management strategy isn't even necessarily a
net positive for performance. In typical GC'd languages you have a much richer
design space for your memory system. You can move stuff around, you have
flexibility as to when you do collection, you often have allocation as a
built-in primitive, which means the compiler knows about it and can help
optimize.

The end result is that good GC-based memory systems tend to perform better
than malloc/free style APIs (with the call to free() possibly being implicit).
To get good performance in C/C++/Rust, the programmer needs to be
conscientious about allocation. You have more control, but also more
responsibility. In OCaml, allocation is bumping a pointer -- go nuts.

~~~
dpc_pw
> The end result is that good GC-based memory systems tend to perform better
> than malloc/free style APIs

Citation needed.

GCs promise wonders, and yet in practice they are going to eat your memory,
trash caches, and slow everything down. The performance benefits are typically
of the form "in some applications, in some use cases, etc.", not "tend to".

~~~
saurik
> Citation needed.

Oh come on: this is a well-known property of garbage collectors and should
have been covered in an entry-level CS class. Did you even try to find this
before pulling out the "citation needed" trope? Hacker News needs an auto-
response to any comment containing that phrase: "have you tried Google yet?"
:/.

With just ten seconds of Google searching (so surely less time than it likely
took you to type your "citation needed") I was able to find a meta-reference
(a Stack Overflow answer with links to papers looking at this result). The
results were "similar or up to 4% faster" and "much faster with lots of ram".

[https://stackoverflow.com/questions/755878/any-hard-data-on-gc-vs-explicit-memory-management-performance](https://stackoverflow.com/questions/755878/any-hard-data-on-gc-vs-explicit-memory-management-performance)

(At least one of those links is dead, but it is to the same paper that thesz
linked to in a sibling to this comment... a comment which provided a citation
and which someone downvoted, because people care more about opinion than
actually finding citations. I upvoted his comment back to being rendered in
black text :/.)

What is so strange about this being controversial is that it is truly an
_obvious_ result: a real heap allocation is really, really slow, and a copying
collector never has to do one (it just bumps a pointer). Only when you have
memory pressure does it eventually prune objects, and until then it runs
lightning fast: faster than the malloc/free code could possibly dream of.

The advantage of affine types (as in Rust) isn't actually that it is avoiding
GC: it is that it makes it possible to avoid _allocation itself_ by making it
safe to allocate things on the stack (where you get the speed of bumping a
pointer again instead of the painfully slow malloc). That is the kind of
analysis that is often attempted by languages like Java ("escape analysis"),
but is only available in limited circumstances.
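
A minimal sketch of that distinction (hypothetical example): in Rust the
programmer can keep values on the stack (a pointer bump), and heap allocation
is an explicit, visible choice rather than something an escape analysis has to
discover.

```rust
// Stack vs. heap allocation is explicit in Rust: plain values live on
// the stack; `Box::new` is a visible, deliberate heap allocation.
struct Point {
    x: f64,
    y: f64,
}

fn magnitude_squared(p: &Point) -> f64 {
    p.x * p.x + p.y * p.y
}

fn main() {
    let on_stack = Point { x: 3.0, y: 4.0 }; // no heap allocation at all
    let on_heap = Box::new(Point { x: 3.0, y: 4.0 }); // explicit heap allocation
    // Deref coercion lets both be used uniformly behind a reference.
    println!("{} {}", magnitude_squared(&on_stack), magnitude_squared(&on_heap));
}
```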

~~~
dpc_pw
> Oh come on: this is a well-known property of garbage collectors and should
> have been covered in an entry-level CS class.

There's plenty of BS taught in academia. In CS that would include love for
modeling tools (I had classes about IBM Rational tools, bleh), OOP, and other
forms of sophisticated complexity. Practitioners tend to dislike GCs, but
again, that's not a good argument either.

> a real heap allocation is really really slow

There is nothing preventing a heap allocation from being almost as fast, at
the cost of memory fragmentation/overutilization and/or slower deallocation.

And that's where the whole tradeoff is. Manual memory allocation has to track
data explicitly; GC tracks data implicitly. Typically this tradeoff is
expressed as memory over-consumption by the GC to amortize the slowness of GC
deallocation. And it's all dandy on paper, because the argument is "oh, just
take more memory and you're fast again". Which is not such an easy thing to
do. That memory could be used for IO caches or to run other processes. And
over and over, companies rewrite their memory-hogging GC'd services in
explicit-memory-allocation languages, and they run faster and, what's more
important, use less memory and thus make the whole system faster. They can run
on much smaller VM instances, etc.

So the whole "GC is as fast or even faster" claim is dubious in the general
sense, IMO. And most of the papers trying to prove it are of the form "in
these specific circumstances, GC _can_ run the same as or even outperform
manual memory management".

