A Linux-compatible kernel under a BSD license is intriguing :)
Jokes aside, this looks great as an educational project, because you can run real Linux software on this kernel, from Vim to Doom.
Things will become really interesting once this gets ported to ARM. There are a serious number of MCUs with 4 MB of RAM where Tilck should run fine, but where even Yocto or OpenWrt won't fit.
[1] You apparently didn't read the link and asked foolish questions about it.
[2] I answered your questions.
[3] You attacked me and questioned its usefulness.
[4] I answered that too.
[5] You attacked me again and mocked me.
[6] And now, it's not really clear to me, but you are blaming me for being unhappy that you attacked me?
I don't know if it is possible to block people on HN, but for you, I will try to find out, because you personally have both wasted my time and made my weekend noticeably less pleasant, and now you are mocking me.
The comparison was due to the licence. Both NetBSD and FreeBSD have monolithic kernels in the style of Linux. (Or, more properly, Linux is in the style of the BSD kernels.)
Unless you want really high-performance networking, you don't need tons of RAM to implement it. ESP32-based MCUs are known to implement Wi-Fi + TCP/IP + TLS [1] in 1-2 MB of RAM. This is, of course, without a Linux-compatible kernel.
You don't need much RAM if most of your code resides in flash and runs directly from there.
When I was your age, I sent 24 people to the actual moon with my software in 4K of RAM, and here I am clicking your button and it takes ten seconds to load a 50 megabyte video ad and then it crashes. - @natecull, on Margaret Hamilton's perspective
You can run IP-based networking on machines with well under 1MB (the smallest machine I've personally run a basic graphical web browser on would have been a 512KB Amiga).
On an NXP K60 with FreeRTOS and lwIP, IP-based networking runs without problems in 256k of flash and 64k of RAM. Of course it depends on the application, because the throughput is not high, but that is not the point here.
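For a sense of why this fits in so little memory: lwIP's raw (callback) API needs no threads and no per-socket buffering. Here is a minimal echo-server sketch in that style (standard lwIP raw API; error handling omitted, and the port number is arbitrary):

    #include "lwip/tcp.h"

    /* Called whenever data arrives; echo the segment back. */
    static err_t on_recv(void *arg, struct tcp_pcb *pcb, struct pbuf *p, err_t err)
    {
        if (p == NULL) {                 /* remote side closed the connection */
            tcp_close(pcb);
            return ERR_OK;
        }
        tcp_write(pcb, p->payload, p->len, TCP_WRITE_FLAG_COPY);
        tcp_recved(pcb, p->tot_len);     /* re-open the receive window */
        pbuf_free(p);
        return ERR_OK;
    }

    static err_t on_accept(void *arg, struct tcp_pcb *newpcb, err_t err)
    {
        tcp_recv(newpcb, on_recv);       /* register the receive callback */
        return ERR_OK;
    }

    void echo_init(void)
    {
        struct tcp_pcb *pcb = tcp_new();
        tcp_bind(pcb, IP_ADDR_ANY, 7);   /* classic echo port */
        pcb = tcp_listen(pcb);
        tcp_accept(pcb, on_accept);
    }

Everything is event-driven from the network interface's input path, which is exactly what makes 64k of RAM workable.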
> A Linux-compatible kernel under a BSD license is intriguing :)
> Jokes aside
If only you were joking. That project does indeed use a BSD license. Doesn't that make it unnecessarily hard on oneself, as one then cannot 'borrow' Linux kernel code, including drivers?
> [..] in kernel mode while retaining the ability to compare how the very same usermode bits run on the Linux kernel as well. That's a unique feature in the realm of educational kernels.
Nice to see managarm mentioned! managarm does have vastly different goals though, as in running Linux software unmodified (i.e. compiled from source, not necessarily binary compatible due to ABI differences) on a fully async microkernel, including desktop apps a user might find useful for general-purpose stuff. Currently, we run weston (the wayland reference compositor) and have support for Xwayland and some graphical applications (as both Qt and GTK are ported). However, there's still a large part of Linux' API surface to be covered, so support will only improve with time.
We’re working on support for the Raspberry Pi 4 at this moment, and on emulated AArch64 targets we can boot into Weston, but our primary platform is x86_64.
It seems more and more of us are taking a shot at kernel dev, I mean a full kernel.
What is really annoying with Linux (and *BSD) is their dependence on gcc/clang. It is not C, it is a dialect of C using bazillions of gcc extensions. It means that in the long run we get hit straight in the face with planned obsolescence from C syntax, whether from ISO tantrums (c11/c17/c7489374893749387) or from some ultra-recent gcc extension ("sorry, you need THIS extension, only available in gcc from 2 weeks ago").
I have been coding assembly lately, and being independent of those compilers gives me an exhilarating feeling of freedom.
Who said a Linux-compatible-enough kernel written in 64-bit RISC-V assembly (with conservative use of a preprocessor) that runs the Steam client, Dota 2/CS:GO, and those horrible "modern" JavaScripted web browsers (I personally use only noscript/basic (x)html browsers)? It would even be fine for a self-hosted mini server (email, personal web/git(mirrors) site, p2p, all IPv6 to avoid that disgusting IPv4 NAT, etc.).
To stay realistic, I would go incrementally: I would start slowly by moving some Linux code back to real C (namely, _NOT_ compiling only with gcc/clang, and this will be hard), then port some code paths to assembly, probably x86_64 at first, with 64-bit RISC-V to follow (and why not ARM64).
I am more than fine with Linux's GPLv2; actually, I would provide the "new" code under the Affero GPLv3 with an exception similar to Linux's one for "normal" programs (but more accurately defined, to target drivers in userspace and stay explicitly OK with closed-source userspace programs using those drivers) _and_ under Linux's GPLv2 (and more, based on the authors' wishes) to be legal with re-used Linux code.
So it used to be possible to compile a (lightly patched?) Linux with tcc, which was part of the really cool "live disk that compiles Linux and then boots it in a matter of seconds" demo[0]. I think the problem is that kernels (need to!) care about extremely precise implementation details that most programs don't: things like "this data structure of exactly this many bytes must be placed at this exact memory address with this exact alignment, then we need to set the CPU registers to this exact list without touching the stack, then run this CPU instruction to make the hardware do the action we just staged", and they care about doing so with good performance. AIUI, spec-based C either can't do all the things that modern OSes want, makes you jump through a lot of unergonomic hoops, or doesn't deliver the performance that people want. Hence, compiler extensions to do what the kernel wants while minimizing undefined behavior but keeping performance. Honestly, the fact that every major OS (I specifically know about Linux, every BSD, and illumos, née OpenSolaris, but I'd be shocked if NT/Darwin were different) needs to extend the compiler is probably a glaring criticism of the C standard.
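To make that concrete, here is a minimal sketch of the kind of thing kernels do constantly, using GCC/Clang extensions (the layout is the real x86_64 IDT format, but the code is illustrative, not taken from any particular kernel):

    #include <stdint.h>

    /* 16-byte interrupt descriptor; the packed layout is a compiler
     * extension, since standard C gives no guarantee about padding. */
    struct __attribute__((packed)) idt_entry {
        uint16_t off_lo;
        uint16_t selector;
        uint8_t  ist;
        uint8_t  type_attr;
        uint16_t off_mid;
        uint32_t off_hi;
        uint32_t zero;
    };

    /* Exact over-alignment of a static object: another extension. */
    static struct idt_entry idt[256] __attribute__((aligned(4096)));

    /* The CPU wants a 10-byte {limit, base} blob, not a padded struct. */
    struct __attribute__((packed)) dt_ptr {
        uint16_t limit;
        uint64_t base;
    };

    static void load_idt(void)
    {
        struct dt_ptr p = { sizeof(idt) - 1, (uint64_t)idt };
        /* Inline asm (also an extension): hand the staged table to the
         * hardware without the compiler spilling or reordering anything. */
        __asm__ volatile("lidt %0" : : "m"(p));
    }

None of the attributes or the asm statement is ISO C; they are exactly the extensions being complained about upthread, and also exactly why kernels use them.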
It means that, in the end, maintaining duplicated assembly code paths for different ISAs would have been cheaper and much easier than the absurdly complex Linux+gcc (or clang) duo. And I would bet that some code paths could be kept in simple, plain C (compiling with _not_ only gcc/clang) without that much loss of performance.
Erm. So that's a language/compiler issue, nothing to do with the ISA? And I might be on board with embedding assembly, but AFAIK that also requires compiler extensions. And I'm not an expert on compilers or C, but I suspect that still doesn't cover all the uses of compiler extensions.
Since RISC-V is meant to be an international standard for assembly-level interoperability, it removes this "compiler/language" issue in a reasonable way... as long as no technically expensive code generators/preprocessors are used, since those would be hardly less bad than an optimizing compiler.
Write RISC-V assembly once, run anywhere.
The cherry on top: RISC-V ISAs do not have toxic IP tied to them, unlike x86_64 or ARM, and that holds worldwide.
I would be ready to pay the price of losing some speed on C code paths if I could compile them with "toy"/small/alternative compilers other than gcc/clang, BUT I cannot even do that, since Linux code is hard-dependent on gcc extensions.
>> OpenMandriva Lx is a unique and independent Linux distribution, a direct descendant of Mandriva Linux and the first Linux distribution to utilise the LLVM compiler
IMHO, experimental OSes benefit greatly from supporting _some_ way of running _some_ Linux binaries - it solves the bootstrap problem and makes it so much easier to explore the system.
OT: I always felt the synchronous system call model is very dated. All the high-performance systems I know of (say, GPUs, NVMe, NFSv4, etc.) use a similar async command-list model: each exchange packages up as much work as possible, so more gets done per context switch. Instead, in Linux we just get an ever-growing list of compound system calls (like pwritev, pwrite64, renameat). There's some hope with io_uring and eBPF, but it really should just be a general mechanism. There's no need for a context switch outside of exceptions (like page faults) or blocking on completion of commands.
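For illustration, this is roughly what the command-list model already looks like from userspace with liburing: stage N operations, hand them to the kernel with one syscall, then reap N completions. A minimal sketch (error handling omitted; the file path is just a placeholder):

    /* build: gcc demo.c -luring */
    #include <liburing.h>
    #include <fcntl.h>
    #include <stdio.h>

    int main(void)
    {
        struct io_uring ring;
        static char buf[2][4096];
        int fd = open("/etc/hostname", O_RDONLY);

        io_uring_queue_init(8, &ring, 0);

        /* Stage two reads in the submission queue... */
        for (int i = 0; i < 2; i++) {
            struct io_uring_sqe *sqe = io_uring_get_sqe(&ring);
            io_uring_prep_read(sqe, fd, buf[i], sizeof(buf[i]), i * 4096);
            io_uring_sqe_set_data(sqe, (void *)(long)i);
        }

        /* ...and submit both with a single syscall. */
        io_uring_submit(&ring);

        /* Reap both completions. */
        for (int i = 0; i < 2; i++) {
            struct io_uring_cqe *cqe;
            io_uring_wait_cqe(&ring, &cqe);
            printf("read #%ld returned %d\n",
                   (long)io_uring_cqe_get_data(cqe), cqe->res);
            io_uring_cqe_seen(&ring, cqe);
        }

        io_uring_queue_exit(&ring);
        return 0;
    }

The comment above is essentially asking for this to be the kernel's native interface, rather than a bolt-on next to a few hundred synchronous syscalls.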
One could probably implement a Linux-compatible kernel that only implements io_uring (and only whatever sync calls are required to set it up). May not run many precompiled executables at the moment but maybe an interesting research direction.
It would be really interesting for me to see, for once, a kernel in a higher-level language than C. Would it be possible to write something like that in Rust, or am I missing something?
That's a whole family of OSes, from tiny text-mode up to SMP, TCP/IP-capable GUI OSes, in the language that followed Modula-2 (which in turn was the successor to Pascal.)
There was TUNIS, a UNIX implemented in a Pascal derivative:
Rust is definitely NOT a higher-level language than C, and there are dozens of OSes like TockOS and R3, plus OS-like frameworks (like embassy-rs and the RTEMS-inspired RTIC), written in it in the embedded world, and quite a few in the non-embedded world (i.e. the world where you have an MMU, not just an MPU or less).
There are lots of kernels written in higher-level languages (including languages with a GC). Really, if you start searching for them, you will find several every day for weeks if not months.
Yeah, but I get the impression that Torvalds was just being realistic; if given the ability to realize a full system I don't think he would have objected. Tilck, on the other hand, is deliberately an educational kernel that seems to reject fancier features because they would undermine its goal of staying small, simple, and understandable to students.
I also thought this was bollocks so have just been running pyperformance on a virtualised arm64. I have some figures from Linux running on the same platform. I've not been very scientific about it.
FreeBSD is consistently slower, between 20-50% across the board. I must admit I'm amazed that Python spends enough time calling into the OS to make a difference this big. Perhaps it's a problem with libc? Maybe there's something going on with memory barriers? My "first unix" was FreeBSD, so I'm amazed and just a bit gutted.
Maybe it is a QEMU/FreeBSD problem, not a FreeBSD/Python one?
I've been using FreeBSD for 20+ years on real hardware and have never seen huge performance differences for common software that isn't using CUDA or the like.