State of Multicore OCaml [pdf] (kcsrk.info)
160 points by systems 5 months ago | 103 comments



I was tickled to see that the multicore memory allocator they're implementing is based on the multithreaded memory allocator I worked on in grad school (http://www.scott-a-s.com/files/ismm06.pdf; https://github.com/scotts/streamflow/).


The author of the talk here. I am excited about the Multicore OCaml upstreaming plan.

We're going to split it into 3 distinct phases. First, we will upstream the multicore runtime so that it co-exists with the current runtime as a non-default option. This will give us a chance to test and tune the new GC in the wild. Once we are happy with it, it will be made the default.

The second PR would be support for fibers -- linear delimited continuations without the syntax extensions (exposed as primitives in the Obj module). This will help us test the runtime support for fibers and prototype programs.

The last PR would be effect handlers with an effect system. Any unhandled user-defined effects would be caught statically by the type system. The effect system is particularly useful not just for writing correct programs, but also for targeting JavaScript using the js_of_ocaml compiler, where we can selectively CPS-translate the effectful bits à la Koka. As a result, we will retain the performance of direct-style code and pay extra cost only for the code that uses effect handlers. In the future, we will extend it to track built-in OCaml effects such as IO and mutable refs. At that point, we can statically distinguish pure (as Haskell defines it) functions from impure ones without having to use monad transformers for code that mixes multiple effects. I'd recommend Leo White's talk [0] on effect handlers for a preview of what this system would look like.

[0] https://www.janestreet.com/tech-talks/effective-programming/
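
For a rough flavour of the programming model described above, here is a minimal sketch only: the handler API below follows the shape of the Effect module that eventually shipped with the multicore runtime rather than the typed syntax described in the plan, and the Ask effect and all names are illustrative.

    (* Declare a user-defined effect and handle it; unhandled effects are the
       cases the planned effect system would flag statically. *)
    open Effect
    open Effect.Deep

    type _ Effect.t += Ask : int Effect.t

    (* Effectful code just performs the effect; it doesn't decide who answers. *)
    let computation () = perform Ask + perform Ask

    (* The handler interprets Ask and resumes the captured continuation. *)
    let result =
      try_with computation ()
        { effc = fun (type a) (eff : a Effect.t) ->
            match eff with
            | Ask -> Some (fun (k : (a, _) continuation) -> continue k 21)
            | _ -> None }

    let () = Printf.printf "%d\n" result  (* prints 42 *)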


The next two years or so will be really exciting.

Rust is beginning to understand that it needs more stability and LTS versions, and libraries are blossoming nicely.

Ocaml already has a very mature module ecosystem and is now becoming safe and modern.

I think rust will still have an edge in adoption due to its portrayal as C++ unfucked, but ocaml is definitely the easier tool to work with, imo. And maybe that will change.

I don't think even linear types and multicore would be enough for ocaml to make any significant dent in the systems programming world. Rust and C/C++/D/Zig all do memory management too conveniently, and it opens doors too close to the bottom for ocaml to keep up.

Any ocaml hackers: would you want to write system drivers in ocaml? Why/ why not?


I feel like OCaml is a language that has a lot of potential, but is hampered by somewhat gnarly syntax, a toolchain that feels utterly antique (for example, last time I worked with it, the REPL had no Readline support and had to be run with rlwrap), and lack of a good killer app.

In some ways, OCaml's problems mirror those of Erlang. The gnarly syntax is largely being addressed by Reason (much like Elixir does for Erlang), but I don't see it catching on as much as I'd like, and it still has some warts, I think.

As for killer apps, distributed data processing is something that OCaml could be great at, given that it marries Erlang's functional style with a rich type system, and there was a minor wave of libraries (for concurrent, distributed actor programming) way back in 2010 or so, but those libraries are now dead and nothing really happened. Meanwhile, Scala largely leads that story (Akka is very popular, and Scala also seems to be the language of choice for Spark, Beam, etc.) and Haskell now has Cloud Haskell, which is modeled on Erlang. Also, we kind of need multicore for this.

(Distributed data is an area where I hope the Java/Scala world gets competition soon. A lot of people, I suspect, would like this, so there's an opportunity to rapidly gain mindshare. I don't see much happening there. Pony is promising (e.g. Wallaroo), and some people have had success with Go (Pachyderm). I don't know Pony, but without generics, Go is a pretty awkward fit for data pipelines; witness the number of hoops jumped, and resulting limitations, in the Apache Beam SDK for Go. Spark et al rely on distributing bytecode to worker nodes, something you just can't do in Go. Not sure about OCaml.)


I'd say if you ever get the time again, give OCaml another look. The toolchain experience has been improving at a rapid pace! I started to explore OCaml again earlier this year and the day-to-day tooling around it has felt on par with or better than my experience with other languages.

Between Opam [1], Dune [2] and utop [3] the "newcomer experience" has been really good so far. The editor integration with vim/emacs has been top notch, and Visual Studio Code has turned into a really nice option as well for people who don't like vim/emacs. I'm sure there is still room for improvement, but I'd say it's at least heading in the right direction!

[1] https://github.com/ocaml/opam

[2] https://github.com/ocaml/dune

[3] https://github.com/diml/utop

[4] https://github.com/ocaml-ppx/ocamlformat


When I learned Caml Light, OCaml was starting as a project, still called Objective Caml.

I don't remember anyone on those assignments ever complaining about syntax.


compiler writing could well be ocaml's killer app; the problem is that it's a very niche application in terms of the number of people who do it. i was hoping facebook's pfff [https://github.com/facebook/pfff/wiki/Main] would extend that to language analysis tooling in general but it never seemed to catch on.


I don't program in OCaml or any ML for that matter, but what's so gnarly about the syntax?


One common critique is that in the toplevel (REPL) you must terminate your inputs by ;; and sometimes you will see code that puts ;; between function definitions too.

Also, nesting pattern matches is a bit ugly:

    match x with
    | Foo y ->
      match y with
      | Yes -> ok ()
      | No  -> not_ok ()
    | Bar z -> whatever()
does not work, it must be:

    match x with
    | Foo y ->
      begin match y with
      | Yes -> ok ()
      | No  -> not_ok ()
      end
    | Bar z -> whatever()
which is not a big deal but surprisingly easy to forget.

Some people would also really like to write

    let x = foo
    let y = bar
    x + y
but it has to be

    let x = foo in
    let y = bar in
    x + y
and those many "in"s accumulate.

A bunch of these things look a lot cleaner in Haskell, although it achieves that by being indentation-sensitive, which other people love to hate.


Nobody really cares about programming language syntax as long as the grammar is reasonably sane and the semantics are clear. Which is the case with OCaml.

The other criticisms are the ones worth attending.


Well, enough people cared about the syntax to make Reason...


It's a bit of a stretch to say Cloud Haskell is modeled on Erlang.


Hijacking this thread to ask you this.

What should I learn if I want to develop desktop apps and I like functional programming? I feel like most of the cool 'new' programming languages are OOP (rust, go, scala).


If you can afford to exclude Windows and Linux, Swift.

If you like Windows but also want cross-platform, C#/F# and Xamarin.Forms (or whatever it's called). I've heard very positive things about .NET GUI work these days.

I'm spending time with rust to try my hand at game development. As an older developer, it's all a bit foreign to me so I don't really know what's good and what's not.

That having been said, desktop development is something the Rust community at large is dedicated to and interested in. There are already a few libraries, like Conrod for Piston, gtk-rs, etc., but I don't know how these options stack up against industry standards.

C# 7 is a lot more functional and pleasant than the C# 4/5 everybody dismissed years ago.

F# can compile to wherever you need it, and you can write functional-first code and just include it in your NuGet build like you normally would.

If you're adamant about functional, consider dotnet. Of course C# is still very much OOP, that's just what it is. But I find the primary benefits of FP in my own code to be related to correctness, testability (and therefore more correctness), and the convenience of modelling data structures more quickly than with subclassing. Even with generics, having unions and record types makes it just that much more satisfying.

F# actually has a little micro-discipline called domain-driven programming. Search for that term and you can see how different researchers have used F#'s incredibly flexible type modelling to adapt to their domain and make their code easier for people in that domain to understand.

Hope this helps!


To expand on your point, the most interesting library I’ve seen for desktop GUIs in Rust is “relm”. It’s, as the name implies, an FRP-like, Elm-inspired library for Rust.

https://github.com/antoyo/relm/blob/master/README.adoc


For desktop apps specifically + functional programming, F# is probably your best bet. On Windows WPF is king, and F# works great. For desktop apps on Linux and elsewhere, you can use Xamarin, GTKSharp, AvaloniaUI...and the list goes on. Desktop apps are a strength of the .NET ecosystem.

Some links:

https://fsprojects.github.io/FsXaml/

https://github.com/Prolucid/Elmish.WPF

https://github.com/GtkSharp/GtkSharp

https://github.com/AvaloniaUI/Avalonia

https://github.com/xamarin/xamarin-macios


WPF has been stagnant for a while, I think they just have a maintenance team on it at MS. UWP is where the new action is supposed to be, but it is limited to store apps.


F# is not compatible with UWP because .NET Native doesn’t implement the .tail IL opcode, so it’s a no-go. Moreover, WPF allows wider distribution than UWP, which is restricted to Win10.


F# can be used to develop for UWP via ReactNative (Microsoft fork) and Fable compiler. Here's a cross-platform sample: https://github.com/elmish/sample-rn-counter


You can always disable tail calls via a compiler flag. I think the real issue is that .NET Native can't handle very deeply nested generics (although this may have been fixed). There are also issues with reflection in certain cases, but I believe they can be worked around.


True. But WPF is not going to be improved, it is just like WinForms now. UWP has all the modern API goodness (win2d, better rendering) while Microsoft seems to be pushing developers to web for everything else.

Starting a new project in WPF at this time doesn’t make sense.


Except that the main feature on the .NET Core 3.0 roadmap is support for WinForms and WPF, while allowing UWP controls to be mixed onto them via XAML Islands.

So they are getting some newfound love on their way to the store.

Microsoft has realised that the best way to force devs into the store is to merge the Win32 and UWP worlds.


Reason (an alternative syntax for OCaml developed by Facebook) has special support for React and soon React Native (https://github.com/jaredpalmer/reason-react-native-web-examp... and https://github.com/reasonml-community/bs-react-native)


Yes.

Since it's basically OCaml, it has objects, but it's mostly an FP language.


@isakkeyten I would look at a 'crossing' between a functional programming language and a multi-platform GUI toolkit.

Because a significant portion of your learning/cognitive effort will be spent on understanding the UI paradigms offered by a given GUI toolkit.

So take a look at the language bindings available for:

1) the Qt UI toolkit: http://wiki.qt.io/Language_Bindings

2) the wxWidgets UI toolkit: https://en.wikipedia.org/wiki/List_of_language_bindings_for_...

3) and the same for FLTK: http://www.fltk.org/wiki.php?LC+P139+TC+Q

Perhaps you can also investigate whether you can use F# (somewhat equivalent to OCaml) to develop Windows apps with https://github.com/Microsoft/react-native-windows

Same for Nim https://github.com/andreaferretti/react.nim (there is probably a way to package it as an Electron app).

(depending on your desire for adventure/early/small community/small ecosystem interest)


> I feel like most of the cool 'new' programming languages are OOP (rust, go, scala).

For what it's worth, Rust and Go are not OOP languages. Scala and Swift (to add another new, hot language) have inheritance and OOP features for FFI with their platform languages, Java and Objective-C respectively. However, the OOP features are largely discouraged and their type systems have more in common with Haskell and other strongly typed functional languages. Rust has basically the same type system as Haskell and Swift, with its famed borrow checker on top. Scala, Rust and Swift are all mixed-paradigm, with their communities' preference for functional design in roughly that order. Go has a totally different type system from the others and is almost entirely an imperative language.


OO people always find a way to write OO code in any language. Go, for example, is dominated by such people. Something that could be written in a few lines of code with no hidden state often ends up as a bunch of objects with lots of hidden state. I'm not sure about Rust, but I don't see why it would be different there. It starts with a language having a method call syntax (foo.bar()). Once it's there, OOP is inevitable.


Agreed, it's hard to write nice, immutable, functional code in Go. For one, you have to put considerable design effort into making your functions chainable while at the same time supporting error propagation, which tends to necessitate the use of interfaces (e.g. "Iterator"), which without generics leads to interface{} all over the place.

I think Go works best for applications that don't need any "internal plumbing". For example, one of my projects is a query engine. It takes an AST and plans a pipeline of operators that compose together (map, filter, join etc.). It's pretty damn awkward to write in Go, and type safety often goes out the window. It's something that in other languages (Haskell, OCaml, Rust, Scala and Swift would all qualify) could be expressed with an elegant smattering of union types, generic functions and monadic chaining.

But that awkwardness is because the app is 99% framework (various types and functions being used to compose stuff together), most of which is a big, generic factory to produce a small engine. If your app is all imperative "meat", with little framework, then Go is a much better fit.
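
For a concrete (if toy) illustration of what the parent means by composing a pipeline out of union types and plain functions, here is a minimal OCaml sketch; the row/op/plan names are made up for the example, not taken from any real query engine.

    (* Operators are an ordinary variant; planning is just function composition. *)
    type row = (string * string) list

    type op =
      | Map of (row -> row)
      | Filter of (row -> bool)

    (* Fold a list of operators into a single row-stream transformer. *)
    let plan (ops : op list) : row list -> row list =
      List.fold_left
        (fun acc op ->
           match op with
           | Map f    -> fun rows -> List.map f (acc rows)
           | Filter p -> fun rows -> List.filter p (acc rows))
        (fun rows -> rows)
        ops

    let () =
      let pipeline =
        plan [ Filter (fun r -> List.mem_assoc "id" r);
               Map (fun r -> ("seen", "true") :: r) ]
      in
      ignore (pipeline [ [ ("id", "1") ]; [ ("name", "x") ] ])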


Good question. I'd look at Clojure with JavaFX, or even good old Swing using Seesaw.

Or F# with WPF, though F# is also OOP, but for some reason I feel it's less so than Scala, and it forces more functional constructs on you.

ClojureScript with Electron can also be an option if you're okay with Electron apps.

Finally, Haskell with reactive-banana is an option too; that's pretty strongly functional.

I'll also mention Red, red-lang.org; it's still in alpha, but is pretty sweet for simple GUI apps on the desktop.


F# got mentioned already but I would like to add another vote to it. C# is the first choice for most people when building desktop apps for Windows but F# works exceptionally well too. There is a small but robust community and there are a ton of libraries/tools that you can hook into via .NET.


I second this. Also the F# implementation was multiplatform from the beginning and VS Code tooling is available on Mac and Linux.

I code mainly in C#, but I use F# for its REPL for some data manipulation tasks.


after some functional programming experience, you may learn that functions just compose more elegantly than objects. "Objects are a poor man's closures" (NN)


"Objects are a poor man's closures"

Yea but don't forget that closures are a poor man's objects!


"A closure is an object that supports exactly one method: 'apply'."

-- Guy Steele


Too bad it took the Java language committee forever to figure out they can just use lambda syntax to make those objects look like functions.


When you have too many cooks and care about keeping backwards compatibility, things take time.


In some languages they also have methods like 'curry', which further blurs the lines.


I can't do OO anymore after I learned FP!


Keep trying, then.

I found myself in that space for a while, but everything kind of made sense when the "Erlang is the most object-oriented language in the market" meme clicked into place: a (micro-)service, an Erlang-style actor and an object are almost the exact same unit of abstraction, applied in different ways. The model is always the same: you communicate by message passing — and, for the most part, you want your objects to be given orders rather than be asked questions.

When I went back to Java, it became evident how damaging the "everything is an object" mentality of Java really is — it really pushes people towards a whole bunch of anti-patterns. One obvious example is that structs are not objects, and you shouldn't try to make them be. Getters/setters are a clear sign that your entity is a struct rather than an object — if you're at that stage, stop trying to apply OOP principles around its design. Another recurring problem is abusing inheritance, and underusing interfaces. I've written maybe 2-3 abstract classes over the last few years, and, when I code review a PR with any abstract classes (or inheritance of any sort) in it, I can almost always get the original author to factor it away while agreeing that it was a problem in the design.

I suggest giving Kotlin a try. Data classes are as close to structs as you'll ever get in the JVM (until Valhalla ships), delegation is crazy easy to achieve, inheritance is heavily discouraged (by having everything be final by default), bare functions and extension functions help keep interfaces clean, and the lambda situation is much better than Java's (receiver lambdas are the language's killer feature, IMO).


I wonder how many of us managed to use Smalltalk and CLOS then.


Aren't these things orthogonal to each other?


OOP as described by java and C++ involves a graph of objects changing each others' state. This is not an orthogonal pattern to functional programming.

If you're thinking of functions operating on immutable objects and returning new objects then that is not typically what we refer to as a OOP pattern though technically it may fit the definition.


Java and C++ are just one way of doing OOP.

More universities should spend time teaching BETA, Self, CLOS, Smalltalk, and component-based architectures.


I know. The dominant paradigm has changed the definition of the vernacular. Right now the default definition of OOP is the Java and C++ way unless you explicitly state otherwise.


In my career I started off with OO/imperative, then got heavily into functional programming. I came out at the other end strongly preferring immutable objects over struct-plus-procedure functional programming. It's a style I consider deeply OO, and it still has referential transparency, but it usually gets ignored in these discussions.


May I ask for what reason? My guess: encapsulation and/or data hiding? That's the biggest thing I miss when doing FP.


I find it cleaner than having a module M with a type t and functions M.create : unit -> t, M.doStuff : t -> 'a, which is what tends to happen. It just looks like immutable objects for slow learners...
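
Roughly the two styles being compared, in a tiny OCaml sketch (the Counter example and its names are made up for illustration):

    (* Style 1: module with an abstract type plus functions over it. *)
    module Counter : sig
      type t
      val create : unit -> t
      val incr : t -> t
      val value : t -> int
    end = struct
      type t = int
      let create () = 0
      let incr t = t + 1
      let value t = t
    end

    (* Style 2: an immutable object; {< ... >} returns an updated copy. *)
    let counter =
      object
        val n = 0
        method incr = {< n = n + 1 >}
        method value = n
      end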


The web space, I think, is offering a bit more here than the desktop space: Elm, Reason, and ClojureScript are the big ones.

If you don't mind your app working through Electron or some other variation of a web-view, there's a lot of resources for those.


some heropunch community members have expressed their intention to take responsibility for x0[1]. the plan is to build a nice DSL in elixir that binds to nuklear[2]. some of my notes are in the linked repo, but i'm currently focused on other aspects of the toolkit.

really looking forward to seeing where they go with it!

[1]: https://source.heropunch.io/tomo/x0

[2]: https://source.heropunch.io/mirror/nuklear


I wouldn't call Rust and Go object oriented. Scala is, but not mentioning that it also has great support for functional programming is misleading.


I’m not sure where your perception is coming from. Rust and Scala are more functional than they are OOP. Rust for example uses Hindley-Milner type inference, putting it squarely in the functional camp. And Scala is highly functional as well, considering its use of higher order functions and monads.


> Rust for example uses Hindley-Milner type inference, putting it squarely in the functional camp.

That's not a very good way to characterize functional programming languages. Obviously you can do functional programming without type inference (see Lisp), and type inference is similarly applicable to non-functional languages (see OCaml's purely imperative sublanguage).

Also, as far as I can tell, Rust uses (a modification of) the Hindley-Milner algorithm to infer lifetimes, not types. As languages in the functional camp typically don't have lifetimes, this argument is a bit strange.


Rust uses Hindley-Milner to infer types in general, but there is a conscious decision to require explicit types in function signatures etc. Also, lifetimes are part of all of Rust's types; they are just usually elided, since it would be pretty verbose and unhelpful to always write them out.


> Rust uses Hindley-Milner to infer types in general

True, I had forgotten that you can just write let foo = bar. (Not a regular Rust user.)


By reference to HM, I mean to illustrate that Rust has fundamental similarities to the typed lambda calculus, which has been shown more formally by RustBelt. Additionally, traits in Rust are highly similar to type classes in, say, Haskell. Describing Rust as a functional language is reasonable; see e.g. http://science.raphael.poss.name/rust-for-functional-program...


The newest thing in functional languages and UI is haskell + reflex. Reflex is a usable FRP system (not that reactive-banana, et al, were not; just that they were not usability focused at all). Reflex-dom is pretty nice for web stuff, and you can use reflex to build apps with other UI toolkits as well.


Qt is pretty great for cross-platform desktop apps. Tableau is a shining example of a highly-performant app built on Qt.


Would it be heretical to mention PicoLisp?

You can also check out Racket.


Rust is more functional than OOP, and so is Scala.


> Ocaml already has a very mature module ecosystem and is now becoming safe and modern.

It just needs a cargo equivalent. The existing tools weren't sufficient last I checked.


OPAM and Dune (jbuilder) work very nicely IME. You don't need to touch a Makefile, you don't need to build anything by hand outside of the package manager.


i wish there was something like `jbuilder new`, but alas. my current impression is that everything in ocaml-land wants you to spend time writing cryptic configuration files and inventing bespoke project structures.


It is very straightforward to use opam and dune.


OPAM? [1]

(Haven't used cargo, and only briefly touched OPAM. Don't know the feature gap, but am curious to learn more)

[1] https://opam.ocaml.org


Yes, opam is a mess to work with.


> Ocaml already has a very mature module ecosystem and is now becoming safe and modern.

Do you have any resources on this? I am learning Ocaml and am interested to see what is changing.


Are you aware of OPAM? That's what I am primarily referring to there.


Phew, I thought you were talking about the ecosystem as it refers to docs. The docs for OCaml and the Core library are so absolutely terrible. It's like 1996 threw up in an HTML doc and no one has thought to change any of it. Sadly, ReasonML is taking the same road. Elixir on the other hand has some great documentation. I'd love to see some work done on OCaml's standard library docs so that they had more in the way of examples.


OCaml has odoc now, which is what creates the Core web page. Core does have bad documentation, but you can try Containers, which is much lighter and better documented, but less featureful.


That sounds interesting, thanks!



It's a wonderful book. I own a paper copy of the first edition. I know there's a second edition coming. It isn't a substitute for api docs... at all. Take a look at this: https://doc.rust-lang.org/std/vec/index.html THAT is decent documentation.


opam is bad. One particular pain I've had is that it refuses to be installed as root so it refuses to run in containers unless you make a user in the container.

Wercker only just got the ability to run jobs as a non root user. It's been painful.


> it refuses to be installed as root so it refuses to run in containers

Not sure what you are speaking about. I'm using opam as root in containers everyday. It prints a warning, but apart from that works OK. What version are you using?


Yes, although sorry I wasn't very clear - I was wondering what you had in mind saying 'modern and safe'.


I phrased it awkwardly; what I could've written was:

"ocaml has a healthy module system and lots of module authors and maintainers and recent developments suggest ocaml the language continues to move towards more safety and more modern, convenient features."

Linear types are HUGE imo. I'll be playing with OCaml a lot when those come out. I can't wait to see how far OCaml wants to chase other languages into the memory safety/correctness rabbit hole. It's great.


I saw a discussion once (can’t remember where) about the possibility of adding opt-in GC functionality to Rust. If that ever comes to fruition, I wonder if that’d cover many of the current use cases for OCaml?


It was more about integrating with external GCs; imagine writing a Python extension where the Rust code can inform the Python GC about its usage.


Interesting idea. I think it would be useful to have that sort of integration with the Dalvik/ART GC on Android.


> Rust is beginning to understand that they need more stability

i thought rust has been stable since 1.0. am i wrong?


I mean stability in what they are offering. LTS versions are essentially being done through the concept of Editions, where every other year or so a new edition is released and the changes and new features since the previous edition are summarized.

It's not that Rust code is breaking; that's not it. It's that with new things happening every six weeks, if I wanted to write something that 100 developers will work on, how would I do that? Where would I begin? What version should I choose to ensure everybody is on the same page?

Once 2018 edition is polished up, this will be mostly a solved issue, so kudos to rust team.

But yeah. This is an ocaml thread anyways.


Note that the official Rust team position is that an Edition is explicitly not an LTS and should not be considered as such.

Actually, there was a long thread on the Rust internals forum where they worried people might mistake Editions for LTS releases, but the conclusion was that the confusion could be prevented by explicit messaging. Looking at your post, I think the worry was justified.


Isn't OCaml garbage collected, which would be too slow and memory-hungry for system drivers?


I don't think that 'slow' is the problem here: C++ developers use shared_ptr almost everywhere, which is just reference counting and hence slower than if they were to use a garbage collector (yes, there are garbage collectors for C++). I think the problem is that the cost is unpredictable. They want absolute control over _when_ things happen.


> C++ developers use shared_ptr almost everywhere

shared_ptr is rare in idiomatic C++ code, and is only used when you actually have a shared object, or need a cyclic data structure with no clearly identified root. I wrote plenty of modern C++ with not a single shared_ptr in it.


Not only when, but where: a fairly common pattern is to have computations done in one thread, and then, once the computation is done and the thread starts another one, the shared_ptrs (or any other kind of memory owners) are swapped into another thread and cleared there.


Just because a language has a GC, it doesn't mean it is the only way to manage memory in said language.

Computing history has lots of examples, all the way back to Mesa/Cedar workstations at Xerox PARC.

Android Things user space drivers are mostly written in Java.


MirageOS (https://mirage.io/) has everything written in OCaml, and they don't seem to find it too slow.


'Slow' is a loaded term with a lot of different definitions.

In terms of throughput, there's nothing wrong with a garbage collector, but in terms of latency (particularly 95th+ percentile) you're never going to be able to touch 'manually' (I'm including Rust here) managing the heap. Even Azul's 'pauseless' GC only guarantees something like 10ms max pauses. That's an eternity for a lot of use cases.


In a lot of use cases, like the drivers mentioned above, you shouldn't allocate on the heap anyway, whether manually or in a managed way. And if you don't allocate, you never need to garbage collect (since you never need to reclaim memory to make it available for new allocations). In that case, the two are even.

(The above assumes that GC is only triggered by allocations. Many GCs are like that, but others trigger periodically even if there isn't anything to do. I don't know which group OCaml is in.)


You heap allocate all the time in drivers. You just shouldn't do it from interrupt context. Source: I write plenty of drivers for a living.

And even if the GC is only triggered by allocations, they'll still pause the other contexts in order to scan their stacks for live references. So, in the case of a unikernel, you can end up with critical paths being paused by allocations outside the critical path.


> You heap allocate all the time in drivers. You just shouldn't do it from interrupt context. Source: I write plenty of drivers for a living.

Fair enough. I was thinking of low-level things like "shovel a few bytes from some hardware component into a user-provided buffer" without allocating anything, you seem to be working on more high-level drivers.


> And even if the GC is only triggered by allocations, they'll still pause the other contexts in order to scan their stacks for live references.

If the GC is only triggered on allocations when there isn't enough free heap space, then why would stack scans happen at other times?


For a driver, you have many contexts you're running. That's what the windows error "IRQL not less than or equal" blue screen means, that someone called a regular kernel function from within interrupt context. In drivers, you almost always heap allocate in your non interrupt contexts in order to dynamically handle load. This would be even more necessary in a unikernel, where you don't have a user space to defer that kind of work to.

What I'm saying is that the necessary heap allocations happening in other contexts will block progress of your non-allocating critical path (IRQ code) on a GC-based unikernel, because of the need to discover liveness information. There are potential schemes to fix this, but they aren't implemented by MirageOS/OCaml, or most other managed unikernel environments I've seen.


But if I understand correctly (I've never used Mirage), Mirage is a unikernel for OCaml applications. It compiles down to Xen system calls for monitoring and administration. Mirage isn't actually an implementation of an OS in the traditional sense.

OCaml is of course already fast. INRIA has been using OCaml for compiler hacking for several decades. The speed comes as no surprise, but eventually Rust will have inline assembly support; it's already nurturing some basic SIMD features and is already being tuned for lower-level dominance.

I think ocaml could compete, but not the way it is now.

With linear types in the future, perhaps ocaml could follow rust's lead and implement a memory model around move semantics.

I'm not an expert in this area, but I do perceive a nontrivial gap here. Ocaml is not built for writing system drivers. And a good system driver is written in something that was designed to be used for doing so, imo.



Oberon and Bluebottle have drivers written in Oberon and Active Oberon, respectively.

http://www.projectoberon.com

https://svn.inf.ethz.ch/svn/lecturers/a2/trunk/

Mirage TCP/IP drivers

https://github.com/mirage/mirage-tcpip


While OCaml is garbage collected, you can always tell exactly when the garbage collector will run, and it's very feasible to write code in a non-allocating, non-garbage-collecting style when necessary.
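
A small illustration of that kind of control, using only the stdlib Gc and Bytes modules (the buffer size and the printing are arbitrary choices for the sketch):

    let () =
      let buf = Bytes.create 4096 in                     (* allocate once, up front *)
      let before = Gc.minor_words () in
      for i = 0 to Bytes.length buf - 1 do
        Bytes.unsafe_set buf i (Char.chr (i land 0xff))  (* no allocation in the loop *)
      done;
      let after = Gc.minor_words () in
      Printf.printf "words allocated in the loop: %.0f\n" (after -. before);
      Gc.full_major ()    (* trigger a full collection only when a pause is acceptable *)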


There is also a report on discuss.ocaml.org thanks to gasche: https://discuss.ocaml.org/t/ocaml-multicore-report-on-a-june...


What really surprised me is that Facebook, with such active usage of ReasonML, doesn't push/sponsor Multicore OCaml. Surely that would improve performance in their high-load setups.


ReasonML makes only very superficial changes to ML, using the ppx AST extension framework. A multicore GC is not really something Facebook planned on contributing, or at least it has shown no signs of contributing one yet. It is also a bit outside the scope of their project, since they (mainly) market Reason for compiling to JS VMs.


Running more processes is often the solution.


Now, between fibers and effect types, one could implement an Erlang interpreter pretty readily. That’d be interesting!

Or perhaps port or create a system similar to OTP using effects, fibers, and threads, along the lines of the sketch below.
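
As a hint of what that might look like, here is a hedged sketch of cooperative, Erlang-ish lightweight processes on top of effect handlers, written against the Effect API that eventually shipped with the multicore runtime; spawn, yield, and the round-robin queue are illustrative choices, not an OTP port.

    open Effect
    open Effect.Deep

    type _ Effect.t += Yield : unit Effect.t

    (* A single round-robin run queue of suspended processes. *)
    let run_queue : (unit -> unit) Queue.t = Queue.create ()
    let enqueue f = Queue.push f run_queue

    let rec schedule () =
      match Queue.pop run_queue with
      | f -> f (); schedule ()
      | exception Queue.Empty -> ()

    (* Spawning installs a handler that parks the continuation on Yield. *)
    let spawn (body : unit -> unit) =
      enqueue (fun () ->
        try_with body ()
          { effc = fun (type a) (eff : a Effect.t) ->
              match eff with
              | Yield -> Some (fun (k : (a, unit) continuation) ->
                  enqueue (fun () -> continue k ()))
              | _ -> None })

    let yield () = perform Yield

    let () =
      spawn (fun () -> print_endline "A1"; yield (); print_endline "A2");
      spawn (fun () -> print_endline "B1"; yield (); print_endline "B2");
      schedule ()   (* prints A1 B1 A2 B2: cooperative interleaving *)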



