Biscuit: A monolithic, POSIX-subset operating system kernel in Go (csail.mit.edu)



Oh nice, the results are pretty good!

> On a set of kernel-intensive benchmarks (including NGINX and Redis) the fraction of kernel CPU time Biscuit spends on HLL features (primarily garbage collection and thread stack expansion checks) ranges up to 13%. The longest single GC-related pause suffered by NGINX was 115 microseconds; the longest observed sum of GC delays to a complete NGINX client request was 600 microseconds. In experiments comparing nearly identical system call, page fault, and context switch code paths written in Go and C, the Go version was 5% to 15% slower.

10% slowdown in return for memory safety could be a worthwhile tradeoff in some cases. And GC pauses were barely an issue (less than 1 ms in the worst case measured).


It's pretty remarkable and surprising that it's only 5-15% slower. The Go compiler isn't very aggressive about optimizing, even setting aside GC overhead. I'm also curious to understand better how they bootstrapped the dynamic memory and scheduling facilities, since Go obviously relies on an underlying operating system for these things.


I believe the Midori project at Microsoft found similar results when trying to write an OS in C#.


Several posts from Joe Duffy, who led that project, are at [1].

[1]: http://joeduffyblog.com/2015/11/03/blogging-about-midori/


It's because, for most applications, the OS typically isn't much overhead.

If the kernel was only using 1% of the CPU time before, and you rewrite it in QBASIC for a 5x slowdown, your application only sees a ~4% slowdown...
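
Roughly, with those numbers:

    before: 99% app + 1% kernel      = 100
    after:  99% app + 1% kernel * 5  = 104
    overall slowdown: 104/100 - 1    ≈ 4%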


The quoted text says that overhead on the kernel code was 15%, not the application.


But didn't you just explain why our kernels should be in QBASIC, or any other safe language? I feel like this is the next version of finding out Christmas isn't real: that unsafe languages were a dumb scam.


Most of it comes from a few bad decisions in C in particular that we ended up stuck with for a very long time (nulls, a badly designed array type). OSes were written to run on very different hardware from what we use now, and over time the hardware was adapted to running specifically C code well. High-level languages like we expect these days hardly existed at all before the mid 90s or so, and they have been gaining in dominance ever since.


> High-level languages like we expect these days hardly existed at all before the mid 90s or so

I beg to differ.

Lisp has been influential since the 60s, especially in certain niches (AI, symbolic algebra, etc.) and is still going strong.

Smalltalk was used throughout the 70s (although first "released" in 1980), and is far more "dynamic" than the scripting languages widely used today (e.g. Python); the classic example is 'ifTrue' being a method call which we can override (via a built-in live editor, no less!).

Prolog has been around for a while, and saw a lot of attention in the 1980s. It's much more high-level than most languages, since its interpreter runs a search algorithm to calculate results, rather than dumbly stepping through a single execution path.

Most of the cool features popping up in "modern" languages, like pattern-matching, currying, (Hindley-Milner) type inference, etc. came from ML in the 70s.

Scheme is also from the 70s, and has features like tail-call elimination which many languages/platforms are still lacking (spawning a whole sub-genre of blog posts about how to implement it in language X via trampolines). Scheme also brought attention to continuations, giving us things like coroutines. The async/await and yield features added to many languages recently are close cousins of Scheme's call/cc; as are try/catch/throw, for that matter! Although call/cc is undelimited, delimited continuations have been around since the 80s.

That's just off the top of my head, wearing my Programming Language Theory hat. If I put on my Hacker hat I could mention Sh, Snobol, Awk, Icon, ABC (Python's predecessor), etc.


I'm aware of all these facts, but Lisp wasn't practical except on Lisp machines for quite a long time, and most of these languages only started running on microcomputers in the mid-to-late 80s: Caml Light (Xavier Leroy), the standardization of ML, object-oriented scripting languages, etc.

My initial statement was too strong; what I meant to say was that the higher-level languages were not practical for commodity application development and distribution until more recently. They've been around for a very long time, but Smalltalk is an infamously isolated system and it was very slow even when running on an Alto (I've used it!). I admittedly know much less about the practical history of Scheme and Prolog.


I think you are both right.

To jolux's point: Scala, F#, OCaml/Reason, Clojure, Haskell, and Erlang, while not as huge as Java and C#, have billions in value attributable to them.


You will notice that Go shares some authors with C.

They did the same with C back in the day, versus what was being done in systems programming since the 60's.

The language C builds on, B, is a spin-off of BCPL, a language designed to bootstrap CPL.

Thanks to UNIX's success we ended up with a language whose original purpose was to bootstrap compilers, not to be a full stack programming language.

Here, systems programming in 1961, 10 years before C was born, still being sold by Unisys.

https://en.wikipedia.org/wiki/Burroughs_large_systems

Or at US military,

https://en.wikipedia.org/wiki/JOVIAL

Or during the 70's,

https://en.wikipedia.org/wiki/PL/8

https://www.computerhistory.org/revolution/input-output/14/3...

https://en.wikipedia.org/wiki/Interlisp

https://en.wikipedia.org/wiki/BLISS

https://en.wikipedia.org/wiki/Modula-2

https://en.wikipedia.org/wiki/Modula-2%2B

Remember that project Bell Labs stepped away from, which gave C's authors plenty of free time?

https://en.wikipedia.org/wiki/PL/I

https://multicians.org/myths.html

"Thirty Years Later: Lessons from the Multics Security Evaluation"

https://hack.org/mc/texts/classic-multics.pdf


I’m not sure what you’re trying to tell me but I’m certainly aware of these facts already.


Comments in these forums are directed at two populations, the direct participants and everyone else viewing from the bleachers. I don't think the parent is implying that you don't know these things, but added them for context.


This is true. I got a lot of downvotes for my GP.


That was indeed the case.


Future programming historians may indeed look back on Ken Thompson as the Thomas Midgley of the software world, having plagued us more than once with costly and dangerous mistakes: null pointers, C, and the abhorrent POSIX API.


> In 1923, Midgley took a long vacation in Miami, Florida, to cure himself of lead poisoning. He "[found] that my lungs have been affected and that it is necessary to drop all work and get a large supply of fresh air".[9]

Null terminated strings.

In "Trusting Trust", Ken tacitly admitted that C itself was the inside job. Once you align yourself with C's core tenet of performance above all else, the game is up and your mind has been infected against looking at things holistically.


IIRC they modified the Go runtime.


Obviously; a language runtime usually expects an underlying OS.

When one isn't available, the runtime plays the role of an OS, and must be adapted as such.


Yes, the throughput results are pretty great, as is this work in general. It's worth noting the memory usage, though -- "Go’s garbage collector needs a factor of 2 to 3 of heap headroom to run efficiently (see section 8.6)".
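
For context, stock userland Go exposes that same trade-off as a single knob: GOGC (runtime/debug.SetGCPercent), which sets how much the heap may grow past the live data before the next collection. This is plain Go rather than Biscuit's modified runtime, but it shows where a 2-3x figure comes from; a minimal sketch:

    package main

    import (
        "fmt"
        "runtime/debug"
    )

    func main() {
        // Default GOGC=100: the next GC is triggered when the heap
        // reaches roughly 2x the live data of the previous cycle.
        old := debug.SetGCPercent(100)
        fmt.Println("previous GOGC:", old)

        // Trading RAM for CPU: with GOGC=200 the heap may grow to
        // roughly 3x the live set, so the collector runs less often.
        debug.SetGCPercent(200)
    }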


Amortize it over the life of the machine. Go is no gold standard, but what does it mean in terms of total energy cost? Manufacturing cost? A twice-as-large cache and a RAM bus at half the frequency? I dunno what the answer is, but I think being able to run a safe language for the OS gets us into an entirely different regime.


Then use Rust.


Then improve compile times and borrow checker usability.


If you don't mind, what are your grievances with Rust's borrow checker, and do you have any ideas on how it could be improved, or any alternative implementations of the idea of "ownership" and "borrowing" that do some things better?


Here is one example,

https://github.com/rust-lang/rust/issues/63818

As for alternative ideas, here are four that combine the productivity of automatic memory management with ownership:

https://swift.org/blog/swift-5-exclusivity/

https://github.com/apple/swift/blob/master/docs/OwnershipMan...

https://dlang.org/blog/2019/07/15/ownership-and-borrowing-in...

"Linear Haskell", https://arxiv.org/abs/1710.09756

In the context of Rust, maybe showing lifetimes graphically in an IDE would help productivity versus languages that offer automatic memory management alongside ownership.


> 10% slowdown in return for memory safety could be a worthwhile tradeoff in some cases. And GC pauses were barely an issue (less than 1 ms in the worst case measured).

Implementing Nim's bounded-time GC could cut that overhead down and defer work to idle periods. That should be the first priority when implementing an OS in managed code.


We've been discussing the "10% slowdown" for two decades now, since at least the time of Java's first release.

Microsoft even tried to rewrite Windows in C# on the premise.

Meanwhile in 2020 Java programs still suck ass to use. (Sorry for the crude language, but it's true.) Microsoft memoryholed the whole C# thing and went back to promoting C++.


Yet another one that doesn't understand GCs.

And you are mistaken about the "going native" fever at MS: it was a flop that produced UWP, which is now being fixed with Project Reunion.

Most of the System C# features from Midori ended up in C# 7.x and 8.0, and 9.0 brings even more goodies, like naked function pointers and a C ABI for calling into .NET code.


> Meanwhile in 2020 Java programs still suck ass to use

Isn't that due to memory usage, rather than speed?


I agree with the poster below. GC's problem isn't speed, per se; the problem is that GC only works effectively if you assume infinite memory and cache. Once you start approaching memory limits you start feeling pain.


Another problem is that most developers using GC languages keep failing to learn that there are other means of managing memory in those languages; but writing new is just so easy...


What other means are you referring to here? Arenas are all that comes to mind.


Many GC enabled languages offer:

- value types

- some form of RAII

- untraced reference (aka raw pointers)

- stack and global memory segment allocation

Examples of such languages:

Mesa/Cedar, Active Oberon, Component Pascal, D, Modula-3, Nim, Swift, C# (depending on the version; C# 9 is already quite sweet)
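
Go isn't on that list, but even plain Go supports the first and last of those; a minimal illustrative sketch (value types kept inline, plus buffer reuse via sync.Pool, so the hot path barely touches the GC):

    package main

    import "sync"

    // A value type: stored inline in its container, so there is no
    // separate heap object for the collector to trace.
    type point struct{ x, y float64 }

    type path struct {
        pts [64]point // one allocation covers the whole path
        n   int
    }

    // Reuse buffers instead of allocating a fresh one per request.
    var bufPool = sync.Pool{
        New: func() interface{} { return make([]byte, 4096) },
    }

    func handle() {
        buf := bufPool.Get().([]byte)
        defer bufPool.Put(buf)
        _ = buf // ... fill and use buf without new allocations ...
    }

    func main() {
        var p path
        p.pts[p.n] = point{x: 1, y: 2}
        p.n++
        handle()
    }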


All those are just ways to avoid actually using the GC. Which proves the point: GC is ineffective and works best when not used at all.

The future is not garbage collected.


Yes, that is what the anti-GC crowd tends to sell, except that it is the same set of capabilities that languages like C also offer, with the difference that use-after-free bugs and leaks abound.

The future is pretty much GC with affine types; even Rust has to deal with crude Rc<RefCell<>>, without the corresponding tracing-GC performance, when doing GUI applications.


pjmlp's point is that GC can be the default, and manual memory management can be used where necessary.

It's essentially the Pareto principle. Most of your garbage is generated by a minority of your program. So only small bits of your program need to be optimised to not use the GC.


Yeah, Java is the gold standard of GC language design. It sucks, so all GC languages must be horrible.


Excuse me, Java is a horrible design. The gold standard is Common Lisp, and this is wonderful.

Also Go's GC still is terribly naive. They recently fixed the latency problem, but it still needs to be redesigned to match a good Common Lisp GC.


I suspect that higherordermap's comment was sarcasm...


Yes, because GC is a woefully inefficient memory management strategy.

Rust is not just the future of systems programming -- it's the future of all programming. You may not program directly in Rust the language, but borrow-checked static memory management is coming to an application language near you.


What is inefficient are programmers who don't learn to use the value types and deterministic memory management features that many GC languages offer them.


It is faster to write an app with a GC than to fight the borrow checker. For OS kernel development Rust may be worth the fight, but for userland a language with a GC is better in most cases (much easier to learn, use, and maintain, for a relatively small speed and memory penalty).


At a cost of requiring two to three times as much RAM as a non-GC solution to get that performance, as noted elsewhere in this thread. Indeed, it's been known for years[1] that to have comparable performance to an equivalent program without GC, a GC-enabled program needs several times as much memory available.

Yeah, no thank you. The OS community should stick with Rust.

[1] Matthew Hertz, Emery D. Berger. Quantifying the performance of garbage collection. ACM SIGPLAN Notices, volume 40, issue 10, October 2005. https://people.cs.umass.edu/~emery/pubs/gcvsmalloc.pdf


Yeah, once you have an old paper that compares a not-particularly-efficient JVM of its time against manual memory management and concludes GC needs 3x the memory, it is even easier to spread GC misinformation.


The OS community seems to use other tools.

> Swift is intended as a replacement for C-based languages (C, C++, and Objective-C).

-- https://swift.org/about/

Reference Counting, chapter 5 from The Garbage Collection Handbook.


The OS community is almost exclusively C and C++, if by community you mean the people working on operating systems. Or are you only considering a small subset to make your point?


> The longest single GC-related pause suffered by NGINX was 115 microseconds;

Wow!

It is funny that we have these capabilities within arm's (finger's?) reach, but it takes someone daring enough to put symbols in a certain order to show that a different world is possible.


It is impossible to fight a cargo cult with facts; only a generational change helps.

Joe Duffy mentioned in one of his talks (Rust Keynote) that even with Midori running in front of them (it even powered part of Bing for a while), there were people on the Windows team quite sceptical that it was possible at all.

Anti-GC cargo cult goes a long way.


Seeing these results touted as being pretty good makes me wonder why we haven't fully adopted microkernels yet. Their singular disadvantage, which is the need for two IPC calls vs one for a monolithic kernel, is completely irrelevant in comparison to these numbers. The average cost of an L4 IPC call is about 500 cycles, which is about half a microsecond on a 1 GHz processor. Cache locality is a big deal, and L4, which can fit entirely within a core's L2 cache, is king when it comes to cache locality.

And you don't have to worry about memory safety either. Memory safety has been formally proven as part of the seL4 spec.


> Seeing these results touted as being pretty good makes me wonder why we haven't fully adopted microkernels yet. Their singular disadvantage, which is the need for two IPC calls vs one for a monolithic kernel, is completely irrelevant in comparison to these numbers.

And that's even the worst case of synchronous/blocking calls. With different API design - e.g. asynchronous interactions as done with io_uring or IOCP - the latency becomes less important as long as the overall efficiency and thereby throughput is high.


That is 500 cycles without mitigation for CPU bugs, and without considering the non-local performance effects.

Non-local performance effects arise from TLB and other cache invalidation, which is required when changing from one task to another. You can't avoid that without putting everything in the same address space, which would make the system a fake microkernel. Fake microkernels have the less-readable code of microkernel design but without any benefits other than buzzword compliance.

CPU bugs like Meltdown and Spectre mean that every IPC now needs to blow away all sorts of caches and prediction. For example, you wipe out branch history. The cycles spent during the IPC call are only a tiny portion of the cost. The loss of cache content (TLB, branch history, data cache, code cache, etc.) greatly slows down the CPU.


Oh, I wouldn't dismiss "fake microkernels" so completely. One of them used to power a rather popular (proprietary) NAS Filer. I don't have a reference -- I believe this little gem never found its way into a publication (though the file system that ran on top of it did: https://www.cs.princeton.edu/courses/archive/fall04/cos318/d...).


Well, for one, those bugs aren't an issue on non-shared devices, or on anything outside of x86. That is not insignificant... that is still the majority of devices.

Two, the benchmarks for IPC usually include the cost of cache invalidation. This one [0] for seL4 clearly shows the difference on Skylake for kernels compiled with the Meltdown mitigations. Still in the sub-microsecond range.

[0] https://sel4.systems/About/Performance/home.pml


Plenty of non-x86 got hit by those bugs. ARM got both. PowerPC was hit as well.

Meltdown: Intel x86, IBM POWER, ARM Cortex-A75

Spectre: Intel x86, AMD x86, ARM (Cortex-R7, Cortex-R8, Cortex-A8, Cortex-A9, Cortex-A15, Cortex-A17, Cortex-A57, Cortex-A72, Cortex-A73, various Apple-designed cores) and some IBM hardware.

Lots of devices are shared, thanks to things like javascript.


That is the thing: once you put all the mitigations in place, the supposed performance gain is already out the window.

And there is the whole point that many of the performance issues with microkernels have long been solved.


I've also wondered why we haven't seen more industrial effort around using seL4. Seems like it'd be perfect for mission-critical firmware in cars and such, too. For replacing Linux, I guess there is still a lot of effort that would be required, but it seems worth it from my admittedly surface-level viewpoint.


Depends on who "we" is. Apple's OS is largely a microkernel design, and it ships on billions of devices. Building services for that OS at Apple is a dream from a safety/security/performance/battery-life perspective compared to other OSes. Granted, outside of that it's a bit rarer.


> Apple's OS is largely a microkernel design

It's a hybrid design, or some would even call it monolithic. Definitely not a microkernel, although they are moving in that direction with DriverKit and the other Kit initiatives.


True, it's a hybrid, but most drivers lived in user space except for ultra performance-critical stuff (mainly graphics, if I recall correctly, but I never took a complete inventory). IPC was a core part of how drivers, system daemons, and apps communicated with each other (with thin shim APIs to make it friendlier). It was pretty great how consistent a lot of it was.


Hypervisors are the microkernels that are socially acceptable.


How about choosing Rust? You get a 0% slowdown while also getting a safer language in general.


Also never finish implementing it, just blog about it and mention Rust everywhere.

Plaudits to all involved.


Not to hijack the thread, but for the sceptics of Go in systems programming: F-Secure decided to prove them wrong as well and is shipping bare-metal Go for their security solutions.

https://labs.f-secure.com/blog/tamago/

https://www.f-secure.com/en/consulting/foundry


Note that this isn't unmodified Go; it's Go with some features for memory management, in particular having to annotate loops with the number of trips (see section 6.3).


As proven by Oberon, it could be done with unmodified Go as well.

What would be missing, and what was fixed in Active Oberon, is explicit support for untraced references.


> untraced references

Could you explain that a little more? Are references in Oberon bidirectional, meaning there is a list of every reference ever taken and by whom?


Common GC speak.

Traced references are tracked by whatever form of automatic memory management the language offers.

Untraced references are a kind of safe pointer: the memory they point to isn't tracked by the GC/RC infrastructure, it is managed with some form of manual memory management, and they can be used in unsafe code for pointer arithmetic.

The languages then provide means of converting between the two worlds, which naturally always requires some form of unsafe code block.
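
Go has no untraced reference type of its own, but as a rough Unix-only illustration of the same idea (the struct and numbers are just for the sketch): memory obtained straight from the OS is neither traced nor freed by the Go collector, and an unsafe cast is the "conversion between both worlds".

    package main

    import (
        "syscall"
        "unsafe"
    )

    type header struct {
        magic uint32
        size  uint32
    }

    func main() {
        // Untraced memory: the Go GC never scans or reclaims it.
        mem, err := syscall.Mmap(-1, 0, 4096,
            syscall.PROT_READ|syscall.PROT_WRITE,
            syscall.MAP_ANON|syscall.MAP_PRIVATE)
        if err != nil {
            panic(err)
        }

        // The unsafe cast gives a typed view of the raw memory.
        // Storing Go pointers in here would hide them from the GC,
        // which is exactly the hazard untraced references carry.
        h := (*header)(unsafe.Pointer(&mem[0]))
        h.magic = 0xfeedface
        h.size = 4096

        // Manual memory management: the caller must free it.
        if err := syscall.Munmap(mem); err != nil {
            panic(err)
        }
    }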


OK, got it: off-GC-heap memory allocations.

I thought it might be an Oberon-specific concept. I'd like to see something like ref counting, but with an actual ref list: a global list of references to an object.


It would have to be, unless you wanted to write a thin hypervisor in freestanding C, C++ or Rust to handle the syscalls/memory management required for the GC.


That is perfectly doable in Go.

Just like with C, C++ or Rust, you would need a thin layer of assembly help and that is about it.

https://people.inf.ethz.ch/wirth/ProjectOberon/Sources/Kerne...


Freestanding C requires nothing more than the bootloader being written in assembler, and that's an architectural dependency; most I/O-mapped embedded devices simply require the binary to be loaded at a specific location (usually an offset in ROM). CRT0 itself can also be bootstrapped in C.

That's quite different from a low-level kernel bootstrap, which is what you linked. Regardless of whether you do it in assembler or the aforementioned languages, you must build a hoist outside of Golang.


Try to implement malloc/free in freestanding C, or soft floating point for that matter.

Inline assembly doesn't count as C; it isn't part of ISO C, and any language can have such extensions, including Go.


Why would I need to, when it’s already been done?

https://github.com/blanham/liballoc/

It’s not even uncommon, as liballoc is one of the top recommended allocators for hobby OSes.

I never mentioned inline assembler. Whilst any decent OS will most definitely fall back on assembler, and I never claimed otherwise, it’s not a requirement. It, or another of the aforementioned, would be within the realm of standard Golang.

You're jumping through hoops to equate a language used outside its intended use cases with ones used completely within theirs. It's not a knock on Go (or other GC'd languages); it's just the reality of the compromise on specific features. You also couldn't write an OS in a VM'd language without having a VM hypervisor; it doesn't mean they're bad languages.


> There are 4 functions which you need to implement on your system:

    int   liballoc_lock();
    int   liballoc_unlock();
    void* liballoc_alloc(int);
    int   liballoc_free(void*,int);
So where is the ISO C implementation of those?


Note that in addition to the CPU overheads, the paper notes that Go's heap requires a factor of 2 to 3 of headroom to run efficiently:

    > A potential problem with garbage collection is that it
    > consumes a fraction of CPU time proportional to the
    > “headroom ratio” between the amount of live data and
    > the amount of RAM allocated to the heap. This section
    > explores the effect of headroom on collection cost.
    >
    > [...]
    >
    > In summary, while the benchmarks in §8.4 / Figure 7
    > incur modest collection costs, a kernel heap with millions of live
    > objects but limited heap RAM might spend
    > a significant fraction of its time collecting. We expect
    > that decisions about how much RAM to buy for busy
    > machines would include a small multiple (2 or 3) of the
    > expected peak kernel heap live data size.
    >
    > [...]
    >
    > If CPU performance is paramount,
    > then C is the right answer, since it is faster (§8.4, §8.5).
    > If efficient memory use is vital, then C is also the right
    > answer: Go’s garbage collector needs a factor of 2 to 3 of
    > heap headroom to run efficiently (see §8.6).


This is fairly damning, and another data point that indicates that Go (and garbage-collected languages/runtimes in general) may be poor successors to C and C++ for applications where consistent performance and memory efficiency are important.

I'm sure there are tricks you can do in Go (as in Java) to subvert the garbage collector, but I'm not sure I'd want to build a kernel based on them when I could just use another language.

Another data point is Apple adopting automatic reference counting for Objective-C and Swift. Another is Discord switching from Go to Rust ("go did not meet our performance targets.")

Potentially an OS kernel written (for example) in a language like Rust (though not without its own challenges) could have more consistent performance and lower memory overhead.

disclaimer: I am aware of OS kernels written in Rust but have no experience developing or using one


> Potentially an OS kernel written (for example) in a language like Rust (though not without its own challenges) could have more consistent performance and lower memory overhead.

Yeah, potentially, but implementing it in an easier language in less time may be an acceptable trade-off.

Because Go is not the poster child of fast GC'd languages either (a conscious choice of a simpler optimizer for fast builds), and the implementation only showed a 10-15% difference from Linux, the situation is optimistic.

With better escape analysis in GC languages and better compilers, the gap can be reduced further.

> Apple adopting ARC

That's mainly because of legacy interop concerns. Swift's ARC implementation was once horrible, and ARC still seems to be a significant bottleneck in SwiftUI. The blanket statement that ARC is more efficient is a myth mainly spread by Apple fanboys, who might not even have heard the words 'cache' and 'contention'.

The efficient RC methods, like deferred RC, approach a tracing GC.

The benefits of RC are predictability (though not always) and RAII. That's why Rust and many C codebases use explicit refcounting when lifetimes are unknown.


> With better escape analysis in GC languages and better compilers, the gap can be reduced further.

While making the compiler slower, contradicting the upside you mention earlier.


Better escape analysis doesn't take much time.

Not to mention that 70% of the optimization can be done with 10% of the code, and much faster. It is the more exotic optimizations that have diminishing returns relative to their compile-time cost.
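
For what it's worth, you can already watch the Go compiler's escape-analysis decisions with go build -gcflags=-m; a tiny sketch of the difference it makes:

    package main

    // total never outlives the call, so the compiler keeps it on the
    // stack: no allocation, nothing for the GC to do.
    func sum(xs []int) int {
        total := 0
        for _, x := range xs {
            total += x
        }
        return total
    }

    // The returned pointer outlives the call, so n "escapes to heap"
    // and becomes garbage the collector must eventually trace.
    func leak() *int {
        n := 42
        return &n
    }

    func main() {
        _ = sum([]int{1, 2, 3})
        _ = leak()
    }

Values in the first category cost the collector nothing; better analysis moves more of them there.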


Now, I would probably also pick Rust over Go for something like an operating system, but this was written explicitly to research the performance trade-offs of writing it in a high-level GC'd language.

> Go (and garbage-collected languages/runtimes in general) may be poor successors to C and C++ for applications where consistent performance and memory efficiency are important.

I think the latter part here is indeed important. From my POV, Go is a lot better suited for a lot of the stuff I previously used C/C++ for, where a scripting language like Python was sometimes an alternative. I rarely touch C/C++ anymore these days (except for Arduino stuff), and Python, which I used a lot before, has become something I only use when I really need a dynamic language for some hacky stuff.

But languages are just tools; pick the best one for your problem. Go has certainly carved out its niche, and Rust is also going strong; although maybe not as widely accepted, it is also very interesting and promising.


Don't mistake a lack of effort in improving performance for what could be achieved.

Were we having this discussion during the 80s, no one would think C was usable for anything serious, when its compilers couldn't produce better code than junior assembly developers and all the AAA games of the time were 100% assembly.


Matter of implementation.

I read the paper too, but I have also read tons of other papers about systems implemented in other languages.

While the current implementation is quite good, there is lots of room for improvement, including in the way Go deals with value types.

Also, with a small improvement to Go (untraced references), or even a //go: annotation, there would be more room for C-like data structures when one is required to go down that path.

Without trying to diminish the work that went into this thesis: speaking from experience, when it was done it was done; performance improvements were most likely not pursued, as the point of writing an OS in Go as a thesis was already proven.


Regarding untraced reference support: is there something specific that you're looking for? Although Go doesn't have a special type for it, I'm pretty sure you can have regular Go object references / interfaces / arrays point to non-GC memory if you want (although you'll have to cast the result of malloc or something similar). So most library functions should work with unmanaged memory by default (if they're using standard slice/array types), which is not too bad as far as GC languages go.


If you have explicit untraced reference support the GC doesn't have to guess and better algorithms can be taken advantage of.

Without it, some guessing games might happen that bring everything down, or the GC has to be more conservative regarding its guesses.

An example of this is the latest set of restrictions in Go 1.15 regarding pointer conversions:

https://golang.org/doc/go1.15#compiler


Wouldn't the GC be able to just stop scanning memory once it encounters an off-GC-heap pointer (since it knows which address ranges it manages)? I've only allocated non-pointer-containing Go objects manually (which presumably wouldn't be scanned anyway), and it's hard to find the specifics of how things work, although it seems like there are some libraries that allow it. [1]

Since it's possible to use C data structures and pointers directly from Go, that seems like the safest way to do it, although of course you then need a C toolchain installed.

[1] https://github.com/teh-cmc/mmm


It is a bit more complicated than that, which is why conservative GCs are pretty bad at reclaiming memory: they need to play it safe.

One of the reasons they are changing the pointer rules in Go was exactly that there are data races across domains if you nest conversions between GC and non-GC pointers and the GC happens to run at just the moment it thinks the reference is no longer in use, while it was actually stored temporarily in a uintptr.
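
That hazard is the one documented in the unsafe package rules: a uintptr is just an integer, so nothing keeps the object alive while the reference lives only there. A sketch of the broken vs. the sanctioned pattern (field names are purely illustrative):

    package main

    import "unsafe"

    type T struct{ a, b uint64 }

    // Broken: between the two statements the only reference lives in
    // the integer u, so the GC is free to decide *p is unreachable,
    // which is exactly the race described above. go vet's unsafeptr
    // check exists to catch this kind of thing.
    func fieldB(p *T) *uint64 {
        u := uintptr(unsafe.Pointer(p)) + unsafe.Offsetof(p.b)
        return (*uint64)(unsafe.Pointer(u))
    }

    // Sanctioned: conversion and arithmetic in a single expression,
    // so p stays visibly live across the whole computation.
    func fieldBSafe(p *T) *uint64 {
        return (*uint64)(unsafe.Pointer(
            uintptr(unsafe.Pointer(p)) + unsafe.Offsetof(p.b)))
    }

    func main() {
        t := &T{a: 1, b: 2}
        *fieldBSafe(t) = 3
        _ = fieldB(t) // compiles, but violates the unsafe.Pointer rules
    }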


Sure, if you use less GC, then the penalties of GC are minimized. But that just reinforces the point that GC has a real cost.

When you use less GC, you increase performance, but you also get fewer of the safety and convenience benefits of GC. There is no Holy Grail of GC just around the corner that will make this tradeoff obsolete, despite what GC advocates have been telling me for 20 years.


Just like anti-GC advocates don't get that the GC in a language like D is only a replacement for new/delete, with everything else that C++ is capable of still available.

Learn to use the tools and enlightenment will be achieved.


See also, for something surprisingly and alarmingly close to this, gVisor:

https://github.com/google/gvisor


As far as I understand it, the biggest difference is that Biscuit restricts itself to working with a statically allocated amount of memory, reserved for the kernel, and then manages the GC etc. so as not to run out of it. That's the greatest contribution of the paper, IMHO.

gVisor, being a regular userspace program, has no such worries.


That looks more like a container runtime than an OS kernel.


Yes, that's what you'd think from the description. Take a closer look.


But development seems to have stopped. The latest commit was on Jun 8, 2019.

At the time this project showed up, I gave it a try on QEMU. It was cool and I was like "I must join this!" As a result, except for one minor fix, I was not able to contribute more because the development environment was not comfortable. The patched Go tree was not easy to follow, and it also looks impossible for others to just rebase it onto a later Go.

My take on this is that, although Go provides many OS-like features, you still have to draw a clear line between the OS and the language it uses if you want it to be maintainable and able to evolve. I am still obsessed with the idea of making an OS in Go and am somehow trying to do one at a very slow pace.


This is just someone's thesis; most projects like that are abandoned immediately after the defense.


> you still have to draw a clear line between the OS and the language it uses

I'm hoping to avoid this in one of my own OS projects, using Forth, actually; I think using a language where most features are implemented not in the core language implementation but in libraries helps a lot here.

For example, exceptions are implemented in the OS code rather than by compiler magic (though there are some built-ins the exception system needs to call that are only really relevant if you're building an exception system). The actual Forth implementation is under 2000 lines of aarch64 assembly, after removing comments and whitespace.


English is not my native language, so I'm not sure whether it is me misunderstanding you or the reverse. By "no clear line" I mean something like building the OS on top of some of the language's modules. Just like what Biscuit did: the runtime and some other packages were integrated as part of the OS. Such an approach inevitably impacts the project, because there is no longer any easy way to keep up with new features and progress in the language.

On the other hand, you stated

> I think using a language where most features are implemented not in the core language implementation but in libraries helps a lot here.

I don't see how your experience contradicts my previous statement. Would you clarify a bit more?

Also, I didn't know Forth before. Thanks for broadening my view.


Ah, I think I was seeing a slightly different problem: the OS and compiler becoming tied together tightly enough that changes to one make it necessary to change the other, so anyone working on them needs both skillsets. Forth helps here, since the parts of the language that are "baked into" the implementation (i.e. not just standard library code) are extremely minimal; in most of the Forths I implement, even function definition is itself defined in the standard library.

Needing to avoid diverging from an upstream compiler is also somewhat alleviated by (most dialects of) Forth being suitable for OS development out of the box, so patching the compiler isn't often necessary.

Additionally, it's not ridiculous to implement a custom Forth for each project, depending on its needs, which makes tracking an upstream a non-issue. A Forth implementation is far less code than an implementation of C, Go, etc., so it's not very time-consuming, and the way Forth user code is typically written makes it easy to port code between implementations.


See Oberon, Smalltalk, Interlisp-D, Lisp Machines, Mesa/Cedar as well.


This is interesting in that, if you can use channels as described in the CSP book [1], you could build a kernel that is guaranteed to be free of concurrency bugs.

This would be important because even if you have proven the functional correctness of a kernel, that typically excludes the concurrency aspect.

[1] (https://www.cs.cmu.edu/~crary/819-f09/Hoare78.pdf)


But Go doesn't provide channels in the CSP way, nor does it do any model checking on them, right?

One thing it already gets wrong is that you can send mutable pointers around without clear ownership.
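
A minimal made-up example of that loophole, which go run -race should flag as a data race:

    package main

    type msg struct{ n int }

    func main() {
        ch := make(chan *msg)

        go func() {
            m := <-ch
            m.n++ // receiver mutates the message...
            ch <- m
        }()

        m := &msg{n: 1}
        ch <- m
        m.n++ // ...while the sender still holds and mutates it: a data race
        <-ch
    }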


If it's 'wrong' to be able to pass around mutable pointers, is the only language that is 'right' Rust? (and some Lisps maybe?)


I think you can do "real" ownership in ATS; or check ownership with a static analysis tool in many languages, including C; and you should be able to make a hacky version as a library that dies at runtime in any language with parametric polymorphism and modules. "Modern C++" too, ish.

Which Lisps are you thinking of? CL and Scheme both allow having multiple copies of mutable objects.


No. For example, Erlang only allows you to send immutable values around, for very good reason.


If "concurrency bugs" includes deadlocks, no, such a kernel would not be free of concurrency bugs. Any blocking receive operation on a channel can create deadlocks.


It will be free of concurrency bugs, including deadlocks. This is the promise of CSP. The requirement is that data is shared only via blocking IPC and never directly using a lock (and, similarly, one must not share a pointer to private data, as another poster has pointed out).

You can compose small systems, even with multiple parties, prove they cannot deadlock, then make them a 'black box' with defined IO, and build larger, more complex systems with equal properties.

The downside is you must guard every piece of shared data with a separate thread, but there may be ways to reduce the performance penalty.
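
In Go terms, that discipline looks roughly like this hypothetical sketch: a single goroutine owns the data outright, and everything else talks to it only over channels.

    package main

    import "fmt"

    type getReq struct {
        key   string
        reply chan int
    }

    // owner is the only goroutine that ever touches counts.
    func owner(incs <-chan string, gets <-chan getReq) {
        counts := map[string]int{}
        for {
            select {
            case k := <-incs:
                counts[k]++
            case r := <-gets:
                r.reply <- counts[r.key]
            }
        }
    }

    func main() {
        incs := make(chan string)
        gets := make(chan getReq)
        go owner(incs, gets)

        incs <- "syscalls"
        incs <- "syscalls"

        reply := make(chan int)
        gets <- getReq{key: "syscalls", reply: reply}
        fmt.Println(<-reply) // prints 2
    }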


How do you prevent process A from waiting on a receive from process B while process B waits on a receive from process A?
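
For concreteness, that scenario is only a few lines of Go; it compiles fine and dies at runtime with "fatal error: all goroutines are asleep - deadlock!", so whatever prevents it has to come from the design discipline or a model checker rather than the compiler:

    package main

    func main() {
        a := make(chan int)
        b := make(chan int)

        // "Process B": waits to hear from A before it will speak.
        go func() {
            <-a
            b <- 1
        }()

        // "Process A": waits to hear from B before it will speak.
        <-b
        a <- 1
        // fatal error: all goroutines are asleep - deadlock!
    }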


Reminds me of when I was excited about https://web.archive.org/web/20120104065532/http://web.cecs.p... for a high level language kernel.


The best thing to use Rust for is writing garbage collectors :D


Oh lul :) thx


Why don't they distribute an ISO to boot in VirtualBox?


[flagged]


From the abstract:

The longest single GC-related pause suffered by NGINX was 115 microseconds; the longest observed sum of GC delays to a complete NGINX client request was 600 microseconds.


Oh that's alright then


Depends on the use case; I just wanted to point out that TFA (TFP?) considers this a central point and goes into great detail about the question.


They say it's Go, but the source code is full of C...


From the README in the repo:

"This repo is a fork of the Go repo (https://github.com/golang/go). Nearly all of Biscuit's code is in biscuit/."

From one of the linked papers:

"Biscuit has nearly 28 thousand lines of Go, 1546 lines of assembler, and no C."

There is C code in biscuit/user/c, but it appears to be userland test programs, not kernel code.


Part of the bootloader seems to be written in C.


[deleted]


There never was a battle. Rust is the up-and-coming systems language (classic definition) and Go is the up-and-coming systems language (modern, colloquial definition).

They're in completely different spheres with regard to practical production usage.


Yes, having used both I would say there is only a small overlap between them.

Go generally is nicer when you can afford a GC and a runtime. By nicer, I mean you'll get things done faster.

Rust generally is nicer when you can't, or when you need better C interop, more low-level control, or a stronger type system for some reason. It's not as productive as Go, or at least I'm far from reaching that point after months of full-time Rust development, and I'm a pretty experienced polyglot developer.


> Rust generally is nicer when you can't [afford GC.]

Since this doesn't include "OS kernel", what is the state of Rust on deeply embedded devices? Rust for PIC12 any time soon?



That people have done it doesn't make it a good idea.


That people have luddite attitudes regarding OS development doesn't make it better either.


You've misunderstood me entirely.


Just wondering, what parts of Rust do you feel take longer to code out vs doing so in Go or its contemporaries?


Most things take longer.

Dealing with the more expressive type system, additional errors and warnings, figuring out generic functions, types and type constraints.

Having the additional concept of ownership to think about, design around, and run smack into face-first. GC is a joy by comparison, it requires zero effort.

Figuring out which abstraction to use; in Go there is typically only one obvious way to do it.

Dealing with less mature libraries.

In general there is just way more cognitive overhead working in Rust, much more to think about, more options to choose from, and more constraints. Some of this should pay itself back by avoiding certain classes of bugs - but I find on a solo project (I've used Go in teams, but not Rust) I get very little benefit here because all the code is written by me, and I'm experienced enough not to make a lot of the mistakes Rust can protect me from, like race conditions or nil pointers/interfaces most of the time. On a larger team you'd get more benefits here, especially if working with more junior developers. But I don't think you ever get to the same productivity you get with Go.

So my advice is: if you can afford a GC, and you don't need too much low-level C interop or performance optimization, choose Go.

I do want to add that, from a software craft perspective, I find Rust more elegant to read and write, but only really when I'm not pulling my hair out because of the borrow checker, or hairy details around closures, or whatever else. I do take a certain satisfaction in that elegance sometimes, but overall I'd rather get stuff done.


Why not pick D and use a GC when you can afford it and @nogc otherwise? C interop is also great, productivity is good, probably comparable to Go.


+1 for D.

What plays against it is having a tiny community without mega-corp sponsorship, so anyone who wants a feature has to implement it themselves.


[rust intensifies]




