Project started to run Erlang on bare metal (kerlnel.org)
52 points by bandris on Jan 25, 2013 | 47 comments

Don't announce your project until you at least have something going. The empty repo at https://github.com/kerlnel/kerlnel should at least have enough code in it to boot something, even if it's just a "hello world!" that doesn't have anything to do with Erlang. Getting a build system in place and developing some sort of contribution guidelines are also important.

There are major advantages to keeping your grand projects in stealth mode for a while. For one, it's good to wait until you have instructions for even the dumbest ADD teenager to compile and run the thing, or else you'll have to deal with a flood of endless terrible questions. For another, it doesn't exactly inspire confidence to come out and say, "Hey, we want to run Erlang on bare metal, we have README.md, please contribute!"

I fully acknowledge that the creators intended to wait longer and that bandris may have just come across an interesting looking Github repo, but even that proves a point: Your repo should be private until you have something to show. When Linus announced Linux, he already had code running in userspace, with bash and gcc ported!

Edit: I say this having been involved in projects that announced too soon. It's important to establish something concrete first.

It's an exciting idea. Why should server software run on OSes designed to let you play Tetris or compose emails at the same time? It's not like you get all that for free. In theory, specialized apps have a lot to gain from running on more specialized platforms. There ought to be more experimentation in this area.

The name "Kerlnel" is going to be a problem, because it's so awkward to say out loud. You don't want your name to be difficult for people to say; you want them saying it. "Kerlnel" is a clever pun but that doesn't last beyond the first time you read it.

Glad I'm not the only one who immediately balked at the name. I just spent like the last two minutes repeating it in a trance to see if it would become any easier to say. Nope, sounds like I'm trying to say "kernel" while balancing a grape on the back of my tongue. It's a physically uncomfortable word to say, which is an impressive linguistic construction!

Seems kernerl would have been better.

What about erlkernel? That's fun to say.

Or Erlkerlnig, now it's a Goethe reference!

Bravo! And the domain is available.

> specialized apps have a lot to gain from running on more specialized platforms

Guess there could be some gains, but a lot? Enough to bother with?

What about hardware support?

> What about hardware support?

Quite so, hardware is the holy grail. (Alan Kay: "People who are really serious about software should make their own hardware.") I dream of a golden age of experimentation in vertical stacks: specialized hardware designed for specialized classes of application with only so much OS as is needed to support them. Perhaps if the cost of developing hardware falls the way the cost of developing software did, we might see something. Why not an Erlang machine? A Lua machine? A spreadsheet machine?

> Enough to bother with?

Order of magnitude is table stakes for interesting, wouldn't you say? Radical experiments demand radical gains. Surely there is room for an order of magnitude if one is willing to sacrifice general-purpose computing.

Order of magnitude performance improvement isn't going to be possible. That basically requires that over 90% of your cycles are currently being wasted by the OS somehow. Maybe this project could get 20% improvement.

You're talking about the OP's project and I was not – at least not when I brought up orders of magnitude. The confusion is my fault. I implicitly changed the subject to my own fantasy tangent.

My point is that if one is going to build a narrow vertical stack up from specialized hardware, there had better be a 10x advantage over running the application the ordinary way or the experiment becomes a why-bother. Also, the application had better be valuable enough to justify the effort.

This vision of systems design has been alive in the Forth community for a long time – maybe not the "iterating on hardware as part of application development" part, but certainly the specialized vertical stack idea, just in a very austere form. They make the tradeoff of dramatically reducing what the software will do in order to make it feasible to develop that way. That's a tradeoff most of us aren't willing to make. But I have a feeling there are more options if one is talking strictly about servers.

> Order of magnitude performance improvement isn't going to be possible.

I think the term 'order of magnitude' has started taking on a connotation of essentially meaning 'a lot'. It's a fair observation, but I hear it bandied about so often that I rarely actually think the parties are in fact using it literally.

Pretty sure you may assume that people here know what "order of magnitude" means.

In this case I don't even think 2x is possible.

It depends on how efficient or inefficient the OS's network stack and data transfer to user space are. For managed runtimes in a VM, taking advantage of zero-copy APIs is a challenge. I don't think an order of magnitude is possible, but clearly there are a lot of cases where, if implemented correctly, this idea could dramatically improve performance.
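To make the zero-copy point concrete, here is a minimal sketch in Python (illustrative only, not Erlang and not this project's code), using sendfile(2) via os.sendfile, which moves bytes between descriptors inside the kernel instead of round-tripping through a user-space buffer. It assumes Linux, where a regular file is allowed as the destination; the helper name copy_zero_copy is made up for this example.

```python
import os

def copy_zero_copy(src_path, dst_path):
    """Copy a file with sendfile(2): the kernel moves the bytes
    directly, with no user-space read/write buffer in between.
    (Linux-specific: other platforms may require a socket as the
    destination of os.sendfile.)"""
    with open(src_path, "rb") as src, open(dst_path, "wb") as dst:
        size = os.fstat(src.fileno()).st_size
        offset = 0
        while offset < size:
            sent = os.sendfile(dst.fileno(), src.fileno(),
                               offset, size - offset)
            if sent == 0:  # unexpected end of input
                break
            offset += sent
    return offset
```

A plain read()/write() loop would copy every byte into user space and back; whether skipping that copy matters enough depends on the workload, which is the commenter's point.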

A company called Wang took this approach with their word processing workstations. It might have been before your time, but anyway, this approach has been tried before and it didn't really work out. General purpose hardware running a general purpose operating system that abstracts away that hardware's peculiarities won the day for a variety of reasons.

Pendulums swing back the other way, though, when there's a game-changing advantage to be had. And server software that only has to produce well-formed output to be sent over the wire has considerable leeway in how those well-formed outputs get produced. We've seen that leeway be exploited in a major way at the programming language level, not so much at the OS level and not at all at the hardware level, yet. The question is what hidden advantages one might uncover by doing so.

You must be right about Wang; I haven't heard of them.

Targeting Xen instead of bare metal sounds better to me. OpenMirage is doing that with OCaml.

Wow feeling old. Wang was a major supplier of purpose-built word processors for offices. 1970s timeframe. Prior to that they made sophisticated calculators for science and engineering and later finance.

Executives and most managers still had secretaries and dictated letters and memos. The Wang system was revolutionary. A multiuser, networkable word processing system that completely changed the game in terms of the time and effort necessary to produce typewritten documents.

They were supplanted in the 1980s by the more general purpose PC but definitely hold a significant place in the history of business computing.

Well, do you think targeting the Xen API instead of bare metal is a good idea, or would that repeat history to no benefit?

> Order of magnitude

Are general purpose operating systems really that inefficient?

I've thought about "boot into JVM" before, and I think it's enticing for technologists since it's so "clean", but all the projects aiming for this seem to have died from lack of interest (e.g. BEA Virtual JVM/JRockit Virtual Edition).

I'm not asking whether the general-purpose stacks are that inefficient at general computing, but whether there are classes of applications that could gain from a much more specialized stack. "Order of magnitude" comes in only as a way of saying that the gain would have to be large to justify the effort.

Edit: Perhaps I should explain where I'm coming from. I work on a high-performance spreadsheet system. One of the things that makes spreadsheets interesting is that their computational model is powerful enough to be valuable, yet not so powerful as to amount to general-purpose computing. Think of a server that doesn't need to do anything but access spreadsheet data, perform spreadsheet calculations, and serve them over the network to some client. Such a server's responsibilities are so specialized that one can't help but wonder how far down the stack one might push them and what one might gain by doing so. I daydream about this sort of thing.

How would one start thinking about specialized hardware? Any examples of specialized hardware?

There are many examples of hardware currently in use that can be programmed using software rather than a soldering iron (or more modern equivalents), but they tend to be within the realm of electronic rather than software engineering.

At a previous job I wrote software for a manufacturing company, and it was a real eye-opener to see one of the head engineers there - who had never in his life written a program, as we would understand it - modifying the complex ladder logic of a PLC[1] that operated parts of the factory, while I made changes to the software on the controlling PC. I realised that we were doing essentially the same thing, just in completely different spheres of operation.

Another example would be FPGAs: for relatively little money one can get a board with such a chip on it and prototype all sorts of hardware designs essentially by writing software (in VHDL or Verilog). I've not done it personally, but a friend of mine in smartcard research does this all the time, and doesn't call himself a software developer either, even though it's really the same thing, just a different application from the usual general-purpose machine.

[1] http://en.wikipedia.org/wiki/Programmable_logic_controller

In many such projects the hardware support is handled by a hypervisor. But that's more like bare virtual metal.

How about Kerlang? Pronounced by adding a k sound to the front of Erlang.

> The name "Kerlnel" is going to be a problem, because it's so awkward to say out loud

It sounds pretty good out loud if you say it while doing a Stephen Hawking impression.

I'd say "curl-null", more or less. Not too bad I think.

As far as I can tell, it's a vague concept, and two "authors" who haven't authored anything asking people to help without really explaining any of the goals? Help with what, exactly?

I fully support making more languages embeddable, but this project doesn't seem to be real. I hope there's more to it than what's listed on their site right now!

Wouldn't it be more feasible to make the Erlang VM run on Baremetal OS[1], and contribute to it? Thousands of hours will be spent on kernel development and hardware issues for what is mostly duplicated effort.

[1] http://www.returninfinity.com/baremetal.html

Porting the Erlang VM to BMOS is what I am currently working on; LLVM optimisations or compiling Erlang to native code will come later.



Running Erlang on bare metal has been a side project of my thesis for a while. I love Erlang, and I also love getting close to the hardware, which is perhaps why Ian Seyler's ReturnInfinity Baremetal-OS has drawn my attention and efforts over the past 8 months or so.

Over this time, my colleague Andrew has been tinkering also, and we seem to have a fair degree of overlap in our end goals, hence kERLnel came about.

It is great to see a lot of excellent discussion, both positive and negative. I think all open discussion is useful; even negative comments have a positive effect in their own way. Although I admit I am surprised that kERLnel was even mentioned here, particularly as we haven't actually dropped any code publicly yet ;-)

There are other exokernel projects out there that may also be suitable. In fact, not just exokernels, but a number of {nano|pico|micro} kernels too; however, the idea of hand-coded assembler kind of grabbed me completely for some reason.

For me, the project is not about whether X is faster than Y, or anything like that. It is about what I like doing myself and what I find interesting. I think it's great for so many people to comment and have different views, particularly views different from my own - this makes the world a better place!



I prefer language agnosticism. Work being done by Nix-OS, a fork of Plan 9, is moving into similar spaces by merely offering you ways of getting direct access to CPU cores for HPC loads: http://code.google.com/p/nix-os/

It also has the benefit of having actually existing code right now.

That repo is extremely out of date. The lsub.org guys (http://lsub.org/ls/nix.html) decided they'd prefer to roll their own "code review" system, and the nix-os repo was essentially abandoned. Apparently they've recently decided to stop using their own code review thing, and just make changes directly in their local tree while taking emailed diffs from everyone else, so there's that.

Edit: oh, and Erik Quanstrom has started shipping the Nix kernel with his own set of patches in the 9atom distribution, if you can handle the hour+ download time for his ISO. There are some other nice improvements in 9atom, too.

Not to be confused with "NixOS", no dash, a Linux distro based on the Nix package manager. Sheesh, how many annoying names can we get into one thread? http://nixos.org/nixos/

Okay, I understand developers can spend their time doing whatever they want. Nobody can (or even should) tell other people how they're going to spend their spare time.

But... wouldn't the effort be better spent improving the existing interpreter? The vague description makes it sound like they're trying to run Erlang without an OS "getting in the way". Looking at the language shootout benchmarks [1], it seems like there's still a lot of room for improvement before the OS would be slowing them down. Operating system services aren't causing Erlang to take 3-25x as long as the C code.

[1] http://benchmarksgame.alioth.debian.org/u32/benchmark.php?te...

We are completely surrounded by embedded devices, and their numbers are growing exponentially. Everything has a computer in it. You might be shocked to know the power that's in some of these devices... your cable box might be a quad-core 1.2GHz ARM.

Currently, your common options for embedded software are:

* bare-metal with C, C++, or ASM

* load a full OS (i.e. Linux) and run an interpreter to get access to higher-level languages

There are a lot of domains where the problem being solved doesn't require particularly high hardware resources, but requires quite advanced software design. Implementing something like a modern Blu-Ray player's UI and high-bandwidth/low-latency AV in C is rough. You spend a LOT up front on software development costs, and pay more in the maintenance phase. Alternatively, you can save on development and maintenance by running an OS and using high-level languages and existing libraries... but then your board price per unit skyrockets because of the extra processing power, RAM, and flash that you need to support the OS and interpreters.

Access to a modern programming language in a "bare-metal" environment is the holy grail for embedded. It would change the world.

Unfortunately, it has always been practically impossible. And likely still is.

Other options:

FORTH. Icky, but I thought I'd mention it; very fast bringup time... but then it's FORTH.

A commercial RTOS. There are lots of these, some of them are even pretty good.

We went the C-and-ASM route and ported over some USB drivers from another in-house OS. Worked okay. USB is horrible to work with, and that was actually the hardest part to get right on our system.

Erlang is as much a platform as it is a (pretty peculiar) language, and its functionality does overlap with the operating system: for example, thread management (in Erlang you tend to have thousands of threads, as far as I understand) and things like being able to boot up Erlang on ten computers and have them behave like one big Erlang instance. In other words, considering what Erlang does, it seems plausible this is a very reasonable thing to try. I don't think this has to do with "operating system services" in the Windows-ish sense of the word "service"; it's more about conflicting approaches to the same problems between the OS and Erlang.
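The overlap described above - the runtime doing its own scheduling rather than leaning on the OS - can be loosely illustrated outside Erlang. Here is a hedged Python sketch using asyncio tasks as a stand-in for BEAM's lightweight processes; the names worker and spawn_many are invented for this example, and this is an analogy of user-space scheduling, not how BEAM actually works internally.

```python
import asyncio

async def worker(ident, inbox):
    # Each "process" just posts a message to a shared mailbox.
    await inbox.put(ident)

async def spawn_many(n):
    # n cooperative tasks scheduled by the runtime on a single OS
    # thread, loosely analogous to Erlang spawning n lightweight
    # processes without one kernel thread per process.
    inbox = asyncio.Queue()
    await asyncio.gather(*(worker(i, inbox) for i in range(n)))
    return inbox.qsize()

if __name__ == "__main__":
    print(asyncio.run(spawn_many(100_000)))  # prints 100000
```

Spawning 100,000 OS threads would be prohibitive; 100,000 runtime-scheduled tasks are cheap, which is exactly the kind of overlap with OS responsibilities the comment describes.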

I was thinking this also! The difference between Erlang and an OS seems like little more than device drivers. Moving BEAM into kernel-space would improve the speed of the thread switching and IPC. So Erlang is a particularly good language to try this with.

Also I think this is a good idea in general. With cloud hosting, why run a full Linux OS if you are only going to run a webserver or database shard on it? If the OS could be replaced by a file manager and the runtime code of a single application (with some significant speedup), there are probably a bunch of companies that would be interested.

> it seems like there's still a lot of room for improvement before

Room for improvement to do what? Compute n-body simulations, permutations, FASTA searches, spectral norms? You can't be serious.

The only one of consequence is maybe thread-ring, where Erlang comes third, after Go and Haskell.

It would be interesting to see benchmarks indicating whether such a project would give a useful gain in performance, compared to, say, just running Erlang on its own via one of the many available OS kernels.

I have been following this project, and it makes so much more sense to me.

As another poster mentioned, going all the way down to the bare metal does not make sense to me.

Or maybe I have lived through too many hardware upgrades to not want to worry about whether the app will run if we upgrade the RAID controller or something.

When Darren and I were looking at Baremetal OS, we liked it because we could quantify what Xen and other kernels were missing - generic drivers - and we saw we could improve everything by making a system more specialized for what we wanted to use it for (think about what you actually put a commercial system on; there's not too much choice, especially for HPC). An example would be two athletes, say a runner and a shot-putter. Think of their body shapes - they are very different! Yet we treat all operating systems as if they are the same (for compatibility). Being more specialist is obviously not for everyone! Linux (or any other OS) is...

Regarding compatibility - my personal long-term aspiration is to abstract the HW through LLVM, which will eventually allow us to target Xen or other exokernel systems, or run closer to the metal by running/developing a true Erlang kernel through a mixed ERTS/BMOS (Baremetal OS) code-base.

As mentioned in earlier comments, Erlang already has a pretty mean scheduler and memory management system. Personally, as an academic exercise, I'd like to see how this operates at the kernel level, and in the future I'll be spending more time on this. I personally believe all other existing operating systems are just getting in the way of the best possible performance.

Finally, erlangonxen, which you mentioned above, is not open source. That does not make sense to me, and it does not give me what I want... Doing this is a very selfish exercise, as it's what I want. Hopefully other people will want (and make sense of) this too, but atm I'm not fussed.

Cheers Andrew

Would what's proposed be considered an "exokernel"?

EDIT: To add some context, would it be like http://www.openmirage.org/ ?

It has been suggested that running an app directly in a VM with no OS uses the hypervisor like an exokernel.

I was just going to post the Wikipedia link (http://en.wikipedia.org/wiki/Exokernel), but Mirage looks pretty cool. I've liked the idea of an exokernel since I heard of them. Thanks.
