ExectOS – brand new operating system which derives from NT architecture (exectos.eu.org)
254 points by belliash 10 months ago | 224 comments



Hello,

I am excited to introduce ExectOS, a new open-source operating system built on the powerful XT architecture, a direct descendant of the time-tested NT architecture. With ExectOS, you will get full NT compatibility.

As a free, community-driven project, ExectOS not only incorporates clever ideas from other open-source initiatives but also stands out with our own unique innovations that set it apart. We've designed it to support both i686 and x86_64 architectures. It should also be easily portable to other architectures.

Dive into the world of ExectOS and join us in shaping its future. Learn more on our website at https://exectos.eu.org/, where you can explore our vision. If you are ready to be part of the conversation, our Discord server at https://discord.com/invite/zBzJ5qMGX7 is the perfect place to connect, collaborate, and contribute.


If I were going through the trouble of writing an adaptable, minimal μkernel, I would start with a capabilities-secure, efficient IPC, formally-verified one like seL4 and then build an NT-compatibility layer. This has a potential advantage of preventing entire classes of vulnerabilities. Finally, if going through such an exercise, might as well write 99.98% of it in Rust to also eliminate numerous traditional categories of frequently-encountered programming errors that oft repeat themselves.


GitHub.com/cl91/NeptuneOS

Source: I wrote this.


Please tell me you're working on this.


Not him, but multiple such efforts (not necessarily matching your exact description) exist.

LionsOS[0] is an effort by the seL4 foundation itself.

Makatea[1] is trying to implement a Qubes-equivalent on a safer seL4 base.

Genode[2] is an OS framework built around capabilities that supports several microkernels including seL4 itself.

0. https://lionsos.org/

1. https://trustworthy.systems/projects/makatea/

2. https://genode.org/


DOPE

Thanks for sharing!



What are the XT and NT architectures in this context? I couldn't find clear results when searching.


Saw this on their page:

"Unlike the NT™, system does not feature a separate Hardware Abstraction Layer (HAL) between the physical hardware and the rest of the OS. Instead, XT architecture integrates a hardware specific code with the kernel. The user mode is made up of subsystems and it has been designed to run applications written for many different types of operating systems. This allows us to implement any environment subsystem to support applications that are strictly written to the corresponding standard (eg. DOS, or POSIX)."


Modern NT builds haven't really been using the HAL the same way either. It's been a pain because Windows on ARM kernels have been pretty tied to Qualcomm hardware so far.


This is an ARM issue, not a Windows one. Same reason Linux needs device tree overlays.


HAL.dll was intended to solve the exact same problem as device trees. That's why there are custom HAL.dlls for weird x86-but-not-PC platforms like some of the SGI boxes. Sure, it's the same processor arch, but the interrupt controllers, system bus, etc. are completely different and not introspectable via normal PC mechanisms.

The issue is that WoA kernels have moved away from heavily embracing HAL.dll, instead inlining a lot of functions into the kernel that used to be HAL.dll functions, for perf reasons. If they had kept the original architecture it would have been easy, but they've changed it fairly recently to be less portable.


"Instead, XT architecture integrates a hardware specific code with the kernel."

Isn't this a bad idea?


I'm not talking with authority here, but isn't Linux doing it like that, too?

When you're compiling the kernel you're able to toggle various hardware flags to add to the compilation.

And AMD graphics cards generally work better than NVIDIA (on Linux) because the official drivers have been upstreamed, vs. Nvidia's, which haven't.


Sounds like you know more about it than I do!


It's a little hard to follow, but I'm thinking more monolithic kernel than "hybrid"?


You might want to change the title to a Show HN post: https://news.ycombinator.com/show


Kudos for the ambition on taking on such a project!

> "Keep the greatest advantages of the NT™ architecture, while implementing new features and technologies known from other Operating Systems."

Wouldn't hurt to hear what the greatest advantages of the NT architecture are from the author's POV...


There's some great information about IOCP vs. the various methods used on Linux (sans io_uring).

PyParallel: How we removed the GIL and exploited all cores -

https://news.ycombinator.com/item?id=7861942

The speaker's HN profile for additional commentary on this topic -

https://news.ycombinator.com/user?id=trentnelson


I should probably do an updated talk/article/deck on io_uring.

I really do like NT internals though.


I would love it if you did! You're actually one of the few sources that shines a bright spot on what the NT kernel is good at, outside of the ex-Sysinternals folks.


Is there any good article on NT internals (that isn't Russinovich's book) that highlights where/how it is better than Linux and the *BSDs?

When asked people point to IOCP vs epoll, but I'm not sure how relevant it is now that Linux has io_uring.

(They also point to stable ABIs for drivers, but I am more interested in internals)



I know it was slow, but I miss when NT had the graphics drivers outside the kernel. Crash the UI? Just wait a minute for it to restart. No BSOD.


Graphics driver crashes don't result in BSODs anymore. Microsoft fixed that in... Windows 7? Vista? They just trigger a black flash of the screen for a few seconds and then everything is back up as it was - including active GPU contexts. You can have the driver crash & recover in the middle of playing a game and barely even notice, it's incredibly impressive tech.

https://learn.microsoft.com/en-us/windows-hardware/drivers/d...


About a year ago, the aging Nvidia 980 in my aging gaming desktop decided to suddenly die in the middle of playing Valheim. Windows helpfully switched it over to the on-board graphics. It was smooth. Almost a little too smooth because I didn't notice the screen black out, but I did notice I was suddenly at 1024x768. I was terribly confused for a few minutes before I realized what happened.


That's pretty damned impressive.


Unless you have an amd card I guess. Multiple games reliably took down my 5700xt and the system along with it on certain driver versions. While other versions would crash when using obs. Good god what a terrible card that was initially.


I thought the app itself had to support losing the display and safely/sanely recovering from that?


No. The driver model changed in Windows Vista. After that video driver crashes can't take down your system. As of Windows 8 DWM (the compositor) is always on so everyone gets those benefits too.


Really? I knew I liked 7 for a reason.


The worst part, in hindsight, is how they probably could've moved to a work-list system like Vulkan or io_uring and kept it all in separate processes to retain that stability.

Yes, memory wasn't as plentiful back then, and with single-core machines there still would've been more unavoidable context switches, but with queues the total number of context switches could still have been reduced compared to crossing the kernel boundary on every call.


I personally would have a hard time identifying any advantages that have not been superseded by modern OSes.


By "modern OSes" here, are we talking GNU/Linux, which is older than NT and modeled after a system designed in the '60s? Or maybe macOS or iOS, whose foundations are found in FreeBSD, released in the same year as NT and with a direct lineage to that system from the '60s, plus a kernel from 1985? Using the word "modern" to describe Linux and the BSDs but not Windows NT strikes me as odd...

Now I use and like Linux and macOS and iOS, and I strongly dislike Windows. But I don't think I would find it difficult to find advantages to NT's approaches to certain problems over the UNIX-style approach to the same problems. For example, the idea that pipes send structured objects rather than text is interesting and has definitive advantages (and disadvantages) compared to UNIX's text-based pipe model. Its filesystem permissions layer is also way more flexible than UNIX's, with hooks for arbitrary programs to inspect file accesses, which has advantages (and disadvantages). And its GUI-first approach to everything, where everything is primarily configured through some GUI rather than a command line or text file, has obvious advantages (and disadvantages). And although I don't understand it very well (again, not a Windows user), what I hear from HyperV is pretty cool.

NT is super interesting as the only serious alternative to UNIX-style systems. There is value in studying it, even if I find the overall experience provided by Windows to be much, much worse than my Fedora desktop or my macOS laptop.


NT has no notion of pipes that send structured objects, but it does have Unix-like pipes.

Maybe you are thinking about PowerShell. PowerShell is interesting (although in practice I find it not very practical to use), but it is quite a different subject from NT. It's really its own segregated world that relies on .NET, which is itself another platform than NT (although originally implemented on top of it, and of course there are some integrations).

Windows ACLs are powerful in theory but hard to manage in practice. Look at this fine textual representation for example: "O:AOG:DAD:(A;;RPWPCCDCLCSWRCWDWOGA;;;S-1-0-0)". Hum; at least ugo±rwx you can remember, and POSIX ACLs are also easier to remember than Windows ACLs.
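For illustration, that SDDL string can be split mechanically into its parts; this is just a hedged parsing sketch, not a full SDDL grammar. "AO" and "DA" are the SDDL aliases for Account Operators and Domain Admins, and each parenthesized group is one ACE:

```python
import re

# Split the SDDL string quoted above into Owner (O:), Group (G:) and ACEs.
# Compact, but hardly memorable compared to ugo/rwx.
sddl = "O:AOG:DAD:(A;;RPWPCCDCLCSWRCWDWOGA;;;S-1-0-0)"
owner = re.search(r"O:(.*?)(?=G:|D:|S:|$)", sddl).group(1)
group = re.search(r"G:(.*?)(?=D:|S:|$)", sddl).group(1)
aces = re.findall(r"\(([^)]*)\)", sddl)
print(owner, group, aces)  # → AO DA ['A;;RPWPCCDCLCSWRCWDWOGA;;;S-1-0-0']
```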

Windows NT is not even that much GUI-first. There are tons of things that you just can't access through a GUI, let alone a user-friendly GUI. Funny example: ACLs on tasks from the Task Scheduler: no GUI access at all. It would probably not even be too hard for MS to plug in their standard permissions window so that you could access them with the GUI, but they never did. So much for GUI-first. Oh, I'm not even sure it has a command-line interface to set the ACLs there. Maybe just the Win32 API.

I also don't think there is an integrated Windows tool to view, for example, the processes in a tree, even less to show Win32 jobs.

Hyper-V by itself has nothing revolutionary, but there are a few interesting ideas that it can bring when integrated into a few Windows components (some security-related ones sadly reserved for the Enterprise edition, because it is well known that in 2024 making good security architecture unreachable for the general public and SMEs is a brilliant idea). But compared to Qubes OS, for example, it is very little. Oh, there is also no Windows GUI to show Hyper-V state for these integrations (as opposed to regular full-system VMs).

Now I still think there are a few good ideas in NT, but the low-level layers are actually not that far from Unix systems. It's closer than Cutler would admit. (In particular, there are not so many differences between "everything is a file" and "everything is an object", at least when you look at what Linux has done with "everything is a file" -- this is quite ironic because Cutler particularly disliked the "everything is a file" idea.)


Which security features are exclusive to enterprise?

Because any ol’ Surface ships as a Secured-core PC, which utilizes virtualization for memory security etc.:

https://learn.microsoft.com/en-us/windows-hardware/design/de...


macOS and iOS internals are much more closely related to and based on Mach (via NeXTSTEP) than FreeBSD.


Mach is the kernel he mentioned from 1985. The NeXTSTEP userland was BSD 4.something, and macOS modernized some bits to the FreeBSD userland.


NT kernel is IMO pretty good. Here’s a few points.

An ABI for device drivers allows adding support for new hardware without recompiling the kernel.

First-class support for multithreading, Vista even added thread pool to the userland API.

Efficient asynchronous APIs for IO including files, pipes, and everything else. Linux only got this recently with io_uring; NT implemented IOCP decades ago, in version 3.5.

NT security descriptors with these access control lists and nested security groups are better than just 3 roles user/group/root in Linux. This makes launching new processes and opening files more expensive due to the overhead of access checks, but with good multithreading support it’s IMO a reasonable tradeoff.

Related to the above, CreateRestrictedToken kernel call for implementing strong sandboxes.

Good GPU support, Direct3D being a part of the kernel in dxgkrnl.sys. This enables good multimedia support in MediaFoundation framework because it allows applications to easily manipulate textures in VRAM without the complications of dma-buf in Linux.

Related to the above, GPU-centric 2D graphics (Direct2D) and text rendering (DirectWrite) in the userland API.
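The completion-based IO model mentioned above surfaces portably through Python's asyncio: on Windows the default ProactorEventLoop is built on IOCP, while on Linux the selector loop uses epoll (io_uring-backed loops exist as third-party runtimes). A minimal loopback echo round-trip, as a hedged sketch rather than anything tied to this thread's projects:

```python
import asyncio

async def handle(reader, writer):
    data = await reader.read(100)   # resumes when the read completes
    writer.write(data.upper())
    await writer.drain()
    writer.close()

async def main():
    # Ephemeral port on loopback; the server and client share one loop.
    server = await asyncio.start_server(handle, "127.0.0.1", 0)
    port = server.sockets[0].getsockname()[1]
    reader, writer = await asyncio.open_connection("127.0.0.1", port)
    writer.write(b"iocp")
    await writer.drain()
    reply = await reader.read(100)
    writer.close()
    server.close()
    await server.wait_closed()
    return reply

reply = asyncio.run(main())
print(reply.decode())  # → IOCP
```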


> NT security descriptors with these access control lists and nested security groups are better than just 3 roles user/group/root in Linux.

I’ll bite. POSIX permissions are lousy, and NT permissions are mostly worse. It’s way too easy to mess up, and it’s way too hard to specify a sensible policy like “only a specific user can access such-and-such path”. At least NT can restrict directory traversal.

S3 got it right when they deprecated object-level ACLs.

> This makes launching new processes and opening files more expensive due to the overhead of access checks,

fork() is terrible and slow. CreateProcess is overcomplicated, but creating a process directly is a vastly better design IMO.
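The "create a process directly" model is what Python's subprocess module exposes; as a hedged aside, CPython itself uses posix_spawn/vfork under the hood where it can, precisely to avoid fork()'s cost of duplicating the parent:

```python
import subprocess
import sys

# Spawn-style process creation: the child is described and launched in
# one call (CreateProcess-like), rather than fork()-then-exec().
result = subprocess.run(
    [sys.executable, "-c", "print('child says hi')"],
    capture_output=True, text=True, check=True,
)
print(result.stdout.strip())  # → child says hi
```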

> but with good multithreading support it’s IMO a reasonable tradeoff.

Huh? Linux has had proper multithreading support since NPTL, which was a long time ago. Windows, in contrast, didn’t have reasonable support for multithreading on systems with >64 CPUs until Windows 11:

https://learn.microsoft.com/en-us/windows/win32/procthread/p...

I assume this is related to the way that Windows leaks all kinds of bizarre threading details into the user ABI.

I will grant that Linux’s original threading was an abomination.

> Related to the above, CreateRestrictedToken kernel call for implementing strong sandboxes.

Eww. The Windows sandboxing scheme is IMO an overcomplicated mess. Seccomp is not particularly friendly, but it does exactly what it says on the tin, and I would be far more comfortable running untrusted user code under seccomp than under Windows restrictions from token, jobs, integrity, etc.


Linux has ACLs too. And consider SDL/Pango/Cairo the counterparts of DWrite/DDraw.


Linux (and FreeBSD, too) has ACLs for FS.

NT has ACLs for everything. Each handle (read: descriptor) has associated ACLs.

Also, each handle can be waited on ("selected") with the same system call. No select()/epoll() vs wait() distinction. Now Linux has timerfd and pidfd and others, but NT had these from birth.

In some way NT is more UNIX ("everything is a file") than UNIX itself.
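The Linux analog of that unification is that anything which is a file descriptor can go into one readiness call; a hedged sketch multiplexing a socket and a pipe through one selector:

```python
import os
import selectors
import socket

sel = selectors.DefaultSelector()   # epoll-backed on Linux
a, b = socket.socketpair()
r, w = os.pipe()
sel.register(b, selectors.EVENT_READ, "socket")
sel.register(r, selectors.EVENT_READ, "pipe")

a.send(b"x")        # make the socket end readable
os.write(w, b"y")   # make the pipe end readable

# One call waits on both kinds of descriptors, WaitForMultipleObjects-style.
ready = sorted(key.data for key, _ in sel.select(timeout=1))
print(ready)  # → ['pipe', 'socket']

sel.close()
a.close(); b.close()
os.close(r); os.close(w)
```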


Hm which things are protected by ACLs on NT but not on Linux? Even though the "everything is a file" thing breaks down quite quickly on Linux, with lots of drivers just using ioctls for everything, you still have to open pretty much everything through their device node in /dev, which is affected by ACLs AFAIK. The only real exception I can think of is network sockets. But I'm probably thinking in a very UNIX-centric way, so there may be classes of things I'm missing


Here are some of the Windows things that have these ACLs applied, besides obvious ones like files and sockets.

• Disk volumes and physical disks

• Pipes

• Registry keys

• Processes and threads

• Inter-process synchronization primitives like mutexes, semaphores, and mailslots

• Shared memory sections

• Desktops; you need to pass an access check before interacting with a desktop. The OS has multiple of them, used for fast user switching, concurrent remote desktop sessions, the UAC prompt, and the logon screen.

• Other, more exotic things like job objects, window stations, and ALPC ports.

To be fair, some of them are protected with ACLs on Linux because they are mapped into the file system. For example, physical disks are visible in the file system and the kernel does apply these security things to them.


Interesting, thank you.


Well, for NT -almost- everything is an object.


> consider SDL/Pango/Cairo the counterparts for DWrite/DDraw

I’m not sure Cairo is comparable to Direct2D, the ecosystem is too different.

On Windows, Direct3D 11.0 is guaranteed to be available. Even on computers without any GPU (like most cloud VMs) the OS uses a decently performing software emulation called WARP. For this reason, Direct2D is designed from the ground up to use these shaders as much as possible, and it shows because hardware GPUs deliver way more gigaflops than CPUs. For example, on my computer D2D implements anti-aliasing on top of MSAA hardware.

Cairo is cross-platform, and Linux doesn’t have a universally available GPU API. Some Linux computers have GL 4.5+, some have GLES 3.1+ (both GPU APIs have approximate feature parity with D3D11) some others have none of them. For this reason, Cairo renders vector graphics on CPU. Some computers, with slow CPUs and high resolution displays, don’t have the performance to render complicated 2D scenes in realtime on CPU.

This may change some day once Vulkan support on Linux is ubiquitous, but that day is yet to come.


These days, MESA provides llvmpipe as a fallback software implementation of OpenGL. But your point absolutely stands, the various graphics APIs are much less consistently available on Linux than DirectX is on Windows, and the split between OpenGL and OpenGL ES hurts a lot, with a lot of systems (especially ARM Mali based ones) only providing OpenGL ES drivers.


> a lot of systems (especially ARM Mali based ones) only providing OpenGL ES drivers

And sometimes a minor hardware revision of these Mali GPUs changes the internal API between the user-mode and kernel-mode halves of these Linux GLES drivers.

The kernel handled it pretty well, as it supported both hardware revisions. But my user-mode app failed because the new hardware required a different version of that libmali-midgard*.so DLL.

Unlikely to happen on Windows, because the GPU driver installs both the kernel-mode and user-mode halves of the driver in one transaction. Linux doesn't have a stable ABI for drivers; kernel-mode drivers are compiled against a specific kernel.


ARM is a crapshoot, but on x86 once you get the libraries right everything runs.

That's why free software should be a requirement, at least for drivers.

I remember the (fake Intel) GMA500/3000 fiasco. My old GL 2.1-based N270 netbook is still supported, and with a small ~/.drirc it fixes some quirks (Mesa misdetects it as a GL 1.4 device), but overall it's tons better than the PowerVR chipset with an Intel coat.

The GMA500 users are stuck even without EXA.


Linux had XRender and similar.


Linux XRender is functionally similar to Windows DirectComposition

Linux does not have anything similar to Direct2D, even though it's technically possible to make it. Here's a proof of concept for ARMv7 Debian (Raspberry Pi 4), on top of GLES 3.1: https://github.com/Const-me/Vrmac/?tab=readme-ov-file#2d-gra...


> Linux does not have anything similar to Direct2D

Sure it does: WINE. Not only is it similar, but it's the exact same API & ABI.


> modern OSes

Like NT? It is in fact the UNIX-likes that are compelled into a fairly ancient stream-of-bytes model; NT (and Windows atop it) understands that data needs to have structure, and imposes structure on that data at an OS level; everything is a manipulable handle, rather than an opaque block of memory to be written to/read from, arbitrarily.


NT/VMS offers no immediately quantifiable advantage, but rather a different philosophy than Unix, where everything-is-a-file-even-when-it-isn't-really. It's more of a batteries-included system where the high-level and low-level parts combine to form a coherent whole. The HAL, dynamically loadable drivers, the registry, services, API personalities. It's a shame that all the good stuff about the design of NT takes a backseat to the modern Microsoft shenanigans.


But in NT everything is a handle in a much more consistent way than UNIX's everything-is-a-file.

Each handle has a security descriptor/ACLs, not only files, and the format is the same. Each handle can be waited on with the same system call, and you can mix and match file, socket, and process handles in the same call.


ExectOS doesn't implement a HAL.


Yeah. NT used to be so fast even through remote desktop; now it is so slow because of the bloat. Also, I've read somewhere that NT suffers from young developers wanting to rewrite parts in higher-level languages, avoiding the old Win API. But the kernel is fast and nice...


A fair amount of my work is done via remote desktop, via VPN even, and it doesn't strike me as particularly slow. I guess the question is, compared to what? On what hardware and network infrastructure?


It suffers from young developers wanting to ship Chrome alongside each application.

And it doesn't help that UWP and C++/WinRT turned out to be such a mess that even Microsoft's own teams would rather use webviews than native UIs.

https://techcommunity.microsoft.com/t5/microsoft-teams-blog/...

https://blogs.windows.com/windowsdeveloper/2024/06/03/micros...

https://github.com/microsoft/microsoft-ui-xaml/discussions/9...


I used to use the WinAPi directly and then MFC.


I still find NT very fast when using through RDP, especially compared with any FLOSS solution that exist in the GNU/Linux world. I've not tried proprietary graphic remoting solution for GNU/Linux systems though.


Those proprietary solutions just use compression. RDP is ingeniously designed, passing messages/calls efficiently. X also has a ton of messages, but compressing them is not sufficient.


So, what's the current state? Can I already run this and get a usable desktop? Or some terminal with POSIX toolchain (shell etc)? SSHd?

What kind of desktop would this run, actually? The Windows shell, the ReactOS shell, some Unix DE (KDE or so), or its own custom desktop environment? I'm not sure the latter is a good idea, as it would be a whole big project on its own.

What hardware does it currently support? (Either via their own drivers or via the NT compat layer?) E.g. does it support the common things, video output, keyboard input, network?

I think it would be good if the homepage would clarify the current status, and maybe show some high-level changelog of the recent changes, and some roadmap.


“ExectOS is in very early development stage, thus its requirements have been not specified yet.”

I would say you’re a while away from running a desktop.


NT is, like, the only serious modern antithesis to UNIX. A full UNIX toolchain is thus pretty far down on the list of things I would expect from this project :-)

If I had to guess, I would say they are probably going for PowerShell, or something similar; and that would be an incredible achievement already. A desktop environment is way too far out of reach for the time being.


There is already a somewhat-working port of the NT personality on the L4 µkernel (seL4 specifically) called NeptuneOS: https://github.com/cl91/NeptuneOS

If you're into microkernels and/or the NT kernel model, I highly recommend checking it out


Tbh I'm happy to see any new OS that's not just yet another heavily Unix-inspired clone. We desperately need more fresh ideas in the OS world.


Given the execrable state of the Windows UI, I'd be excited to have modern Windows compatibility with a competent GUI on it. I'd be psyched if it looked like XP running with the "classic" UI.

I switched to "classic" mode so automatically on every install that I'd forgotten the Fisher-Price default one... which was still better than the current shitshow.


I thought that NT was also a Unix system. In any case Windows as a whole isn't terribly different from a Unix system. Unlike say Smalltalk.


No, NT was a re-engineering of the kernel, mostly with Mach-microkernel influence, but also with input from VMS.

I think the point is that people who whinge (like here) about Unix are actually complaining about Unix filesystems and user-space. I'm not so sure they care about how exactly privileged execution is partitioned (or not).


Not sure if it's true, but I've heard that WNT is VMS + 1, so to speak, with each letter "incremented". I believe the team behind WNT had previously worked on VMS.
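The letter-increment folklore checks out as a one-position Caesar shift:

```python
# Incrementing each letter of "VMS" by one yields "WNT":
shifted = "".join(chr(ord(c) + 1) for c in "VMS")
print(shifted)  # → WNT
```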


Not really a fresh idea; this is instead a Windows NT clone, but certainly something different from the usual *nix OS projects.


very interesting take.

maybe with AI we can all get bespoke heuristic intelligent operating systems? these could be almost impossible to crack and exploit.


This codebase bears striking resemblance to Minoca, another (much further along) NT-inspired kernel (https://github.com/minoca/os).

There are strong structural similarities in the source tree and there are also source files that seem completely copypasted with comments changed and function contents gutted (such as ke/sysres.c, ke/runlevel.c, among others).

Are you one of the Minoca developers working on a new project or are you defrauding Hacker News for attention?

Edit: This same poster (Rafal "Belliash" Kupiec) has been noted for other plagiarism incidents going back to 2005:

https://github.com/reactos/reactos/pull/2853#issuecomment-65...

https://lists.reactos.org/hyperkitty/list/ros-dev@reactos.or...


The screenshot of their bootloader (https://exectos.eu.org/images/exectos/xtldr_boot_menu.png) bears striking resemblance to GRUB, as well.


There're worse issues, actually. This person tried to commit to the ReactOS repository, and the PR was blocked when the code was allegedly detected as Windows Research Kernel code: https://github.com/reactos/reactos/pull/2853#issuecomment-65...

The decision may or may not be correct, though, as ReactOS maintainers can be paranoid about the first commits of new devs. Mostly, people play with IDA and try to write some code for a small portion, which is against the project's philosophy.


(eu.org is a free DNS provider wholly unrelated to the European Union, so this isn't an EU-funded project or anything like that)


I think they just live in EU. They also mention some EU directives related to reverse engineering.


I love reading the source code of operating systems because I have no idea what is happening there, but I find it fascinating.


I always read the bootloader source code. I think a clean-room OS boot process is fun to read about :)


It’s fun to implement if you haven’t. I read and implemented one from this book:

https://www.cs.bham.ac.uk/~exr/lectures/opsys/10_11/lectures...
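One of the first things such a book has you do is lay down the boot sector signature; as a hedged sketch (for classic BIOS boot, not UEFI), a legacy boot sector is 512 bytes ending in 0x55 0xAA:

```python
import struct

# Build an empty, hypothetical 512-byte sector and stamp the signature.
sector = bytearray(512)
sector[510:512] = b"\x55\xaa"

# Read back as a little-endian 16-bit word, the way the spec quotes it.
(sig,) = struct.unpack_from("<H", sector, 510)
print(hex(sig))  # → 0xaa55
```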


The *.eu.org domain is so misleading. For a brief moment I was expecting a Windows-compatible alternative created by the EU.


But EU.org has been a place for free domains for 30 years now. It can be misleading, but I don't think this is on purpose.


Are you thinking of `.eu.int`? They've had .eu for twenty years or so now.


Wait until you learn about .sh and .io domains.


The EU does not create.


What a strange comment. Ever seen the Open data portal at https://data.europa.eu/? And that’s really just one singular example.


What would be advantages of not having a hardware abstraction layer (HAL)?

Are there any other operating systems without HAL?


Most "full featured" OSes do not use HALs - they are most commonly found in embedded development where you have to wrap a platform that otherwise has no mechanisms to inform the OS what is where.

Which is also related to how the original NT used them. ARC (and its PC BIOS-based emulator, NTLDR) provided early boot ability, the HAL provided a "driver" to access the platform, and NTOSKRNL itself didn't have to worry so long as it was running on the same base ISA and had all the necessary drivers loaded.

So for example on x86, there were HAL.DLLs for PC BIOS, "standard" PC BIOS multiprocessor, ACPI (uniprocessor and multiprocessor variants), Sequent, SGI Visual Workstation, etc.


To add to your point, Microsoft had originally envisioned that vendors would write their own HAL.DLL’s to make Windows compatible with their hardware, but it turned out nobody really had the expertise needed to do that, and Windows started to depend more and more on implementation details of the most common one, and at this point the idea is dead. Hardware vendors target one single windows HAL for x86_64, and porting windows to another platform requires rewriting/rebuilding all sorts of stuff, not just the HAL. It was a cool idea at the time that didn’t really pan out.


There was a certain level of custom HALs being written, but those depended heavily on the "shared development" model that Microsoft did with vendors, and a lot of hardware was shipping bog-standard PCs anyway.

The rare custom HALs were mainly for things like x86 NUMA systems, or from ports to alternate architectures.


To whom it may concern,

I understand that there have been concerns raised regarding the uniqueness of the code in my project. I want to clarify that I have a broad exposure to various open-source projects, including Boron, Carbon, Minoca, Palladium, and NeptuneOS, among others. My interaction with these projects is purely for educational purposes and to stay informed about different coding practices within the open-source community. This does not equate to copying their code for my own projects.

I utilize a programming tool called Codeium, which is equipped with AI capabilities based on the GPT-4 model. This sophisticated tool assists in generating code suggestions and snippets to aid in the development process. It is important to note that while Codeium's AI provides recommendations, it does so from an expansive dataset of open-source, properly licensed public code. However, as a developer, I exercise discretion over which suggestions I implement into my codebase. The AI's suggestions are merely a starting point, and I often enhance or modify them significantly in the context of my work.

In light of the concerns raised, I am committed to conducting a thorough review of the code in question. Should I find any similarities with the Minoca project, I am prepared to take the necessary steps. This may include rewriting or removing the code entirely, or appropriately crediting the original authors as per the licensing agreements and norms of the open-source community.

I take the originality and integrity of my work seriously and appreciate the opportunity to address any issues that may arise. Thank you for bringing this to my attention.


I am writing to address the concerns that have been raised regarding the code I contributed to enhance the Local Procedure Call (LPC) mechanism in ReactOS. I want to categorically deny these allegations and provide an explanation of my development process to clear any misunderstandings.

Firstly, I must emphasize that I have never accessed or reviewed the source code of WRK. My work on ReactOS's LPC improvements was conducted through legitimate reverse engineering practices. Reverse engineering is a recognized method for understanding system behaviors and is protected within the European Union under certain conditions as outlined in Directive 91/250/EEC and the Trade Secrets Directive 2016/943.

The system from which I reverse engineered had reached its End-Of-Life (EOL), and under the original licensing terms, reverse engineering for compatibility purposes is permissible. My intent was to ensure interoperability and enhance the functionality of ReactOS, which is a legally acceptable and common practice.

In the process of enhancing the LPC mechanism, I integrated the improvements with pre-existing code to form a coherent and functional system. The new function that I implemented, despite being short and the subject of criticism, was composed of calls to existing functions and references to global variables that are part of the ReactOS project. This function was necessary for the improved LPC mechanism to operate correctly within the system's context.


But... does it retain Windows's dumb use of backslashes?


very cool concepts. are you taking a modern approach to security? e.g. making everything nice and auditable / traceable etc.? (thinking about EDR etc. here). Am working on an OS myself around that premise, though admittedly it's my first real program and likely will perform like a big turd :D. Just curious on your take on this when designing a new OS in these times.


Thoughtful of them to refer to "NT architecture" as opposed to "NT Technology".


New Technology technology


The 'e' in 'Apple IIe' stood for Enhanced. Then they came out with the 'Apple IIe Enhanced'.


And the "i" in "i-{everything}" never stood for anything, making it even dumber.


This is great. I will keep my eye on it. I really want non-Unix-like operating systems to start offering alternatives.


To what end?

What are you looking for to distinguish from a unix like system?

At what level of the stack are you looking for things to stand apart?

Consider that for many folks today, the Operating System interface is not much more than an HTTP/S socket. Where Unix was "everything is a file", today it's, almost, "everything is a POST" (I appreciate this is a broad brush).

At this level, the underlying OS is mostly shrouded.

The OS doesn't determine the user interface. We conflate the two, but that's not necessary. A unix system would happily accommodate a Windows 3.1 style interface, if that's what you wanted.

Those in the field will kibitz about how one "serialized object" protocol is better/worse than the other. But, in the end, most of it doesn't really matter. Not at a human scale.

Sure, when you have half the planet using your services, and you measure your computer power in hectares, and power budget in Megawatts, small gains are big gains.

But how we, say, talk to a printer, or how the handshake is managed before we dump a 20MB photo into it, doesn't really matter so much.

With our trend of just wrapping execution contexts into more and more context wrappers (threads in processes in containers in vms), there's so many layers to penetrate, we've not quite given up on the security of kernels, but they're certainly less important. Do we really need something like a capability system anymore?


for most people, the browser is the platform. for a somewhat different "most", the phone is the platform.

none of those people care whether it's apache or nginx or something else handling the socket, or what kind of socket it is, or how app routing works, or how that app accesses storage, allocates RAM, etc.


i want an operating system fine-tuned for performance from the ground up. it needs to be at least 10x faster than all the current operating systems. its kernel needs to be written from scratch keeping only modern hardware post 2015 in mind. It should have minimal bloat and absolutely spectacular benchmarks compared to the rest of them


If we follow your reasoning, then why have different programming languages? Why are people still spending time making new programming languages? We already have C. It built UNIX, databases, games.

Nobody cares what something is coded in.

Why do we have more than one database? In fact, just storing data in the file system was more than sufficient. Nobody cares how data is stored.

The answer is that they are all crucially important, and we still have engineers and creators who want to innovate and push the borders of what is possible: invent new and improved ways to accomplish what we are doing and extend what we will be able to do.


> The ExectOS community keeps in touch via Discord.

100% Non-starter, such a shame.


Holy crap, I've been calling for this for years now. NT is a great design but most people studying OS design get intercepted by the Linux users and get the "Windows kernel is old and sucks" misinfo implanted into their brain.


I'm not sure this is the case. Source code availability for current and historic versions is what has made Linux so popular for studying OS design. Linux, the BSDs, the Mach kernel and many other embedded and academic OSes are more interesting than Windows. Incidentally, even in the 80s and 90s you could get access to Unix OS source code. So in short, people study what they are allowed to study.


What's good about it? Not being argumentative, I'm just not clear on what the benefits are.


Can someone add to Distrowatch?


>"Why don’t you help Wine?

Wine implements Win32Api only, while ExectOS is a featureful Operating System, that implements a compatibility layer with NT™. This means, ExectOS will be able to run NT™ drivers as well, not only Windows® software. However, thanks to its modular design, it will be possible to implement Win32 subsystem as well at some point, based on Wine.

Why don’t you help ReactOS?

ExectOS goals are very different from ReactOS, and contrast the project’s core philosophy as being quite on different paths. While ReactOS aims to replicate Windows® NT™, ExectOS is a completely new Operating System implementing the XT architecture which derives from NT™. Although both projects share the goal of being NT™ compatible, they intend to achieve it in different ways. What ReactOS tries to replicate, ExectOS only implements as a compatibility layer. Thanks to that, ExectOS does not need to strictly follow NT™ architecture and is capable of providing modern features."

ExectOS seems highly interesting -- basically it can be thought of as an open-source OS that can run Windows binary closed-source drivers...

If ExectOS is going to do this (and apparently it is!) -- then let me propose the following use-case and corresponding suggestion...

First, the use-case: I think it would be great to run a headless (non-GUI) OS infrastructure capable of hosting Windows binary closed-source drivers on a different networked piece of computer hardware.

For example, let's say I have an old PC.

An old PC with some expansion card or piece of hardware that is not produced anymore, where the vendor went out of business years ago, where there are no open source drivers -- where the only thing that remains is the ancient hardware and the ancient proprietary closed-source driver...

OK, now let's suppose that I can run the ExectOS OS infrastructure headless on that old PC along with the ancient proprietary closed-source driver (I use a second attached modern networked PC with GUI, remote shell, debugger, etc., etc. to connect with and control the old PC...).

Well, that would be awesome, because then ExectOS could be used to isolate, debug, and potentially understand (better) old Windows binary closed-source drivers for old (ancient?) attached hardware devices that are no longer produced, where open-source drivers are not available...

So the suggestion: Via IFDEFs, compilation flags, conditional compilation, etc., create a headless (but still with built-in networking and/or the ability to run a network driver) version of ExectOS -- for the purpose described above.

All I know is that if that version was created, then the old/vintage hardware restoration/documentation community -- would be forever in ExectOS's debt...

Anyway, ExectOS sounds great! (Could it be made to work on old 386's?)


Would I trust an OS which states "Why don’t you use GCC? – Because GCC is a crap." in the project's FAQ [1]? This isn't really inspiring confidence.

Also, it's not at all obvious from your web pages which features already work in your OS. And, IMHO, Discord is definitely not "the perfect place to connect, collaborate, and contribute".

Just my 2ct...

[1] https://exectos.eu.org/faq/


It seems very odd to me that the author would choose to add such an FAQ entry but fail to explain /how/ or /why/ GCC "is a crap." I'm sure there are several valid criticisms of GCC, but to be so forward without a list of examples or even a link to someone else's article does not inspire me to trust your software in my ring 0.


Add to that the very next FAQ entry after that:

> Do you have any kind of tests to check if the code is working as expected?

> We don’t need tests. If it compiles, it is good enough; if it boots up, then it is perfect.


As someone's personal hobby project - more power to that person! It takes a lot more skill and dedication than I possess. I wish nothing but success and good fortune. And you can be as opinionated as you want.

As something to be presented to the wider world, with a desire to have (testing) users and contributions... Hmm.


This is obviously being said at least partially tongue in cheek. Not every website is Microsoft dot com.


I don't want to use tongue in cheek software


Then don’t? It’s not like it’s going to replace iOS any time soon.


It seems like it's some kind of joke software, but it isn't labeled that way. Here is the creator:

https://news.ycombinator.com/item?id=40725452


They're quoting Linus Torvalds.


Then they should link a reference


> "Regression testing"? What's that? If it compiles, it is good; if it boots up, it is perfect.

Source: Torvalds, Linus (1998-04-08). linux-kernel mailing list. https://lkml.iu.edu/hypermail/linux/kernel/9804.1/0149.html


"They" are under no obligation to do anything. It's a hobby project from someone. They owe you nothing.



You left out "In the spirit of lighthearted development" and "As the project matures, implementing a comprehensive suite of automated tests is definitely on our roadmap."


That text was added since the original comment: https://archive.is/4CEWV


Performance art maybe?



Right! Even though that post is almost 20(!!) years old, Theo is respected enough that you could use that as a rationale for crapping on GCC.

Obviously Theo has his detractors, rightfully so. And the language he uses in that post is not what I would use at all. It's downright unacceptable. But at least it's something - a thing from an authority you can point to and say "this is why!"


Anyone with half a drop of technical competence know that GCC is not "a crap". GCC is a fuck, and doubly so if you need D, Ada, JIT or want to find a C++ template error.

> gcc is the worst compiler except all those other compilers

- Winston Churchill


The compiler it uses: https://git.codingworkshop.eu.org/xt-sys/xtchain

"This is a LLVM/Clang/LLD based mingw-w64 toolchain". Ahh LLVM :D


Wow, A++ job on having white text on a white background for their gitea instance. I was curious if it was just some Firefox specific quirk or something but it seems the .css file is 404 (named "theme-arc-green", so maybe it's best it is 404)


That's actually true. While LLVM/Clang is the better compiler, I'd definitely expect some examples here.


Clang isn't better. It just has a more permissive license so it's what Apple uses.


It's also usable as a library in a way GCC isn't. That's why Apple started using it with their OpenGL software drivers back in the day. That's why almost all modern and new programming languages use LLVM, and those that don't do not use GCC.


Even if GCC was easily usable as library, many of those LLVM use cases don't work that well with GCC's license.


That's out of date, libgccjit exists. LLVM is a bit difficult to use as a library because it has no stable API and is in C++ so it doesn't have a stable ABI either.


Hmm.. why is it called "jit"? That seems like it's telling everyone the use case is very different from LLVM. Also the big EXPERIMENTAL banner doesn't instill confidence. Unlike with LLVM which is used by dozens of production languages.


Why would you need it as a library other than to do jit?


LLVM does offer a stable C-based API; even though it isn't as feature-rich as the C++ one, it is good enough for most use cases.

https://llvm.org/doxygen/group__LLVMC.html


Actually, GCC for the Windows target is a pain in the ass. Look at how many problems ReactOS had with it. Also, this would need SEH to be implemented. Clang gives that out of the box.


Don't care for legacy dying OS's, sorry.


Apple started the Clang project buddy, you’ve got your cause and effect backwards.


Yes, because they weren't willing to keep using their GCC fork, for various reasons, namely the license (which already made Steve Jobs unhappy once), and it being less modular (as per design decision).


Only people who were using gcc 20+ years ago will know the pain


> Would I trust an OS which states "Why don’t you use GCC? – Because GCC is a crap."

No.

Two decades ago I could be willing to accept this kind of comment from the 'FLOSS' community if it was about a proprietary vendor (especially Microsoft). Nowadays, this behaviour only invites people to point and say: 'look, they are still acting like teenagers'.


EdgeOS


*EdgyOS


It inspires confidence because GCC is bloated and buggy.


My thoughts exactly.


I've seen this before. Someone starts a hobby project, and then creates this elaborate public website to try to make their hobby project look really serious and professional and important – maybe as a form of public daydreaming, maybe because marketing is more fun than actually writing code, maybe because they hope collaborators will flock to their project as result of said marketing.

Personally, I much prefer those hobby projects which focus on code instead of publicity.


That was my first thought but, checking the repo, the dev has done solo work for 4 years now, and, based on the Internet Archive, the site has only been made recently. So the focus for the most part has been on code.


Git history shows 2 years of development.


ExectOS repo, yes, but XTChain, which is under project's umbrella and seemingly an important part of it, is 4y.


I don't get that vibe at all from this site. It's an extremely concise web-1.0 style hub of simple text and links covering exactly the set of information you would hope to get out of a project like this. Nothing there seems unearned or unwarranted given the scale and history of the project.

Personally I wish more professional projects followed this school of design instead of publishing cookie-cutter landing pages slathered with hero images and feature cards.


I just went and took a look at the site based on this comment, and oh boy.

> this elaborate public website to try to make their hobby project look really serious and professional and important

You evidently have a very low bar for what qualifies as "elaborate" and "really serious".


For anyone wondering, this has no association with the EU or seemingly any funding from there. EU.org is a free (sub)domain service.


They also mention that on the FAQ:

>No. ExectOS is not funded by the European Union; however, it is a project led by Europeans committed to adhering to EU laws and regulations. While the project has its roots in Europe, we wholeheartedly welcome all individuals around the world. Everyone is invited to join, use, and contribute to ExectOS, fostering a truly global community.


Why this over ReactOS, a much more mature open source OS, which is also based on NT? https://reactos.org/


Probably out of frustration from ReactOS's lack of development. ReactOS must take the record for software with the longest gestation in history. The project is decades old and there's little to show for it.

I've never figured out why the inordinate delay given that millions are unhappy with Windows and Microsoft generally. You'd reckon open source developers would be falling over themselves to join such a project but it seems not.

Perhaps there's something about how the ReactOS project is run that I'm unaware of that puts people off. The other problem is trying to find out news of the project, the ReactOS website goes dead for many months at a time and updates are at times years apart. You'd reckon its developers would be much more communicative, and I reckon this is one of the reasons why there isn't more interest in the project.

The fact that ReactOS is going nowhere is most annoying; having no alternative O/S to Windows is a real pain.

BTW, I've tried every release for years and I have been unable to get a stable enough release to run even one dedicated task. (Some of my tasks such as email or word processing could be put on a dedicated machine so it was the only user-installed software; that way an unstable O/S wouldn't be stressed too much, but unfortunately with ReactOS even that idea hasn't worked out.)


> I've never figured out why the inordinate delay given that millions are unhappy with Windows and Microsoft generally. You'd reckon open source developers would be falling over themselves to join such a project but it seems not.

I think there is a strong misconception that there is this massive pool of open-source devs twiddling their thumbs just itching to jump in on some project where they can pour time and effort for nothing more than the "good of the community". I don't have any sources for this, but it is my strong suspicion that the vast majority of "open source" contributions are actually done by contributors who are compensated either by a company that doesn't mind paying their employees to work on open source projects, or by a foundation behind the open source project. Take Go for example, originally created by Google, then opened up. I am sure there are Google employees that still contribute (on Google's dime) to the project. Why would Google do this? Why not keep Go just for themselves? Simple: if they open it up and can get other people/companies to use it, then they can make future hires where they don't have to train everyone on a proprietary language.

ReactOS doesn't have a large foundation behind it, and it doesn't make sense for companies to allow their employees to develop and contribute to it on their dime.

Development is skilled labor, especially for an OS. Devs need to eat, need a home, etc. I don't know a single dev that is itching to give away their skills for zero compensation. The only time devs really do that is when it is a personal passion project.


> I think there is a strong misconception that there is this massive pool of open-source devs twiddling there thumbs just itching to jump in on some project were they can pour time and effort for nothing more than the "good of the community". I don't have any sources for this, but it my strong suspicion that the vast majority of "open source" contributions are actually done by contributors that are compensated either by a company that doesn't mind paying their employees to work on open source projects, or by a foundation behind the open source project.

I actually think this is a wider societal issue. People love calling for work to be done in some abstract sense ('someone should really...', 'they should make it so that..'), but who is this 'they'? Or this 'someone'? You? Because you're either volunteering yourself, or you're 'volunteering' someone else for the job, there's no third option.

There's this general sense that everything will be (or is) documented; every tool made, every itch scratched. But unless the incentives (money, fame, prestige, personal fulfilment, love, curiosity, self-expression, etc.) are there for someone to do it, it won't get done. Most things will never be done.

So if someone says "I don't understand why X hasn't been done", I feel like an appropriate response is to ask why they haven't done it. And generally, whatever reasons they give, those reasons will be a good explanation why anyone else hasn't done it either.


"I feel like an appropriate response is to ask why they haven't done it."

The average user can't program let alone build an operating system. The same way the average driver cannot build a car. Or smartphone users cannot build a smartphone.

Users have requirements of their tech, and the more experienced they become, the more they require of their tech. The trouble is that the monopolies that run Big Tech have little incentive to provide features that benefit users; instead, the new features benefit them.

I could give you numerous examples of software that requires new features but where no attempt has been made in decades to add them. Take Windows: whatever happened to the WinFS file system? It's sorely needed, but MS and Big Tech generally want users to use cloud storage, and that benefits them; WinFS would help sidestep that. Windows and Windows Explorer need major extensions to the file attributes system, Explorer needs major ergonomic enhancements to make file manipulation easier, and that's just for starters.

Without real competition none of this will occur; not even Linux and Apple can fix this because of their differences, they too are moribund in their own ecosystems for the same reason.

Meanwhile, users like me have unfulfilled needs that are quite technically within the means of existing computers and well within the capabilities of tech companies to provide but these needs still remain unfulfilled after decades.

That we are nearly 80 years into the computer revolution and users still cannot perform simple basic tasks on a PC that have been straightforward commonplace operations in a paper-based filing system for hundreds of years just isn't good enough.

The fact is it's impractical for users of modern tech to start from scratch just because Big Tech doesn't fix bugs or add much-needed features. Unfortunately, attitudes like yours do not help.

Marx once said workers need command of production, these days I'd alter that to users need command of production so they can get the necessaries to do what they need to do.


>Marx once said...

Why anyone reads, or cares at all about, Marx is beyond me. The guy was the biggest fucking loser bum to ever exist. He was constantly hitting up family for money, never had a real fucking job, was an absolute slob of a human being, treated his children like dog shit, and in general was too impressed by his own "intelligence".


>ReactOS doesn't have a large foundation behind it, and it doesn't make sense for companies to allow their employees to develop and contribute to it on their dime.

It's also a solution to a problem that largely doesn't exist. The people working on it do so for fun, not because they need Windows but not made by Microsoft. The parent comment says "having no alternative O/S to Windows is a real pain" but it's not a real pain to any significant number of people.


> ReactOS must take the record for software with the longest gestation in history.

May I introduce you to GNU Hurd? First release happened 34 years ago.

https://www.gnu.org/software/hurd/


OK, thanks, I've learned something new, GNU Hurd has the edge over ReactOS by eight years. :-)


At least Hurd is making better progress than TempleOS these days.


I would say that TempleOS was complete considering Terry's goals.


*Google*

https://en.wikipedia.org/wiki/Tandem_Computers: 1974 (50y) (semi-dead)

https://en.wikipedia.org/wiki/QNX: 1982 (42y)

https://en.wikipedia.org/wiki/VxWorks: 1987 (37y)

https://en.wikipedia.org/wiki/GNU_Hurd: 1990 (34y)

https://en.wikipedia.org/wiki/4690_Operating_System: 1993 (30y) (these run the self-service touchscreens at Costco's food section, IIUC)

It's surprisingly hard to query for "graph intersection between earliest first release date and most recent release date" :(


I think the parent’s point was that Hurd is still unfinished. All of the others in your list shipped releases that met their intended goals in a much more compact timeframe.

edit: clarification


"gestation period" != age


To avoid getting cease-and-desisted to hell, great effort is spent making sure every bit of code introduced to ReactOS has been properly clean-room engineered. The Windows source code leaks specifically have done a great job of stalling ReactOS development. The potential legal issues and making an enemy of MS also make it hard to get sponsors. Hence the project is entirely developed by unpaid volunteers.


"The Windows source code leaks specifically have done a great job of stalling ReactOS development."

What you are in effect saying is that Microsoft has essentially killed the ReactOS project off. Are you reasonably sure about this? The reason I ask is that I was under the impression that, to overcome any MS code issues, ReactOS was porting Wine code (this ought to be clean).

A supplementary question: as ReactOS is using Wine code, what parts still have to be coded that might be in conflict with MS's proprietary code? This question relates to my earlier point about how little info developers are providing potential users. The lack of info doesn't sound good, nor does it offer users much hope.

I've been waiting about 20 of ReactOS's 26 years, unfortunately it seems to me I'll be dead before it's ready.


If I remember correctly, Hartmut Birr, a developer from the early days, had suspicions (around the time of v0.2.8/0.2.9) that a new and very productive developer had disassembled MS code. His code used the MS calling convention into the kernel; before that, ReactOS used interrupts to access the kernel.

The other developers disagreed and Hartmut Birr quit the project.


Right, I'd read something like that but without the specifics. If that over-spooked developers then it's a shame. Microsoft has what it wanted: an almost complete halt of the project.


> Perhaps there's something about how the ReactOS project is run that I'm unaware of that puts people off.

My guess is that the intersection of people who don't need bug-for-bug Windows compat (so aren't stuck on real Windows) and those who really care about NT compatibility at the kernel level is about zero.

> having no alternative O/S to Windows is a real pain.

That really depends on what you count as an alternative, doesn't it? As far as regular applications are concerned Linux + Wine is a pretty good alternative.


I think the main issue is that there's just not much money in it.

Let me explain. I don't have hard numbers on this, but I'd venture to guess that a vast, vast majority of funding/code-time-donations towards Linux is specifically for making server infrastructure more stable. Fortunately for the community, these changes get pushed upstream and also fortunately a lot of them end up also benefiting the desktop environment as well.

Windows does have a server presence obviously, but I think if you're using a Windows server, you're not going to drop it and replace it with ReactOS (even if it were less unstable); you'd probably move to Linux with .NET Core. I don't think any company is going to fund the development of ReactOS on server, or as any key part of infrastructure, and so the only thing that React has is consumer desktops.

I don't think there's a lot of funding going towards consumer products; I'm not saying it's zero, but even for Ubuntu and Canonical or Fedora and Red Hat, I always kind of figured that the desktop OSes were effectively loss-leaders for commercial clients. I think the final nail in the coffin is Valve and Proton; for a while Microsoft still basically had a monopoly on games, regular Wine was hit or miss, but Proton keeps getting better and better, to a point where I almost never have to worry about a game not working on my Steam box. Valve can continue to work on SteamOS specifically because they have funding in the form of people using their platform to buy games.

I was rooting for ReactOS for quite a while, but nowadays I'm not really seeing the point of it. Linux driver support is actually pretty decent nowadays (particularly with AMD GPUs), it runs reasonably fast, and most applications have moved to web-based stuff anyway.


It’s because all of the early ReactOS developers now have incredibly successful careers and don’t really have time to support it.


As I said, why then aren't other/new developers rushing in to fill the void? Again, this is still a mystery.


Making a reliable OS from the ground up takes a lot of work from a lot of really skilled people. I don't think people give ReactOS enough credit.

It seems a lot of people look at the success of Linux and *BSD and assume that it's easy once people put their heads together, but what they're missing is:

A. Way more people had direct experience with Unix kernels by the early 1990s due to things like source available licensing and Lions' annotated Unix V6 source code being used as a university textbook since the late 70s. In the case of BSD, the code was mostly written already, they just had to get through the AT&T lawsuit and then remove the 6ish files that were deemed to be AT&T's IP.

B. Specifically for Linux, the project got a lot of financial and manpower support from big established companies like IBM and Oracle once it became clear that Linux could be a commercial Unix killer.

ReactOS doesn't get any of this. While there have been source code leaks, Microsoft remains very secretive about NT's internals and protective of its source code. Unlike with Unix, there's no NT-family of operating systems for people to draw knowledge from. There's NT and there's OpenVMS as a sort of distant cousin, neither of which are open source.

For what it's worth, I do think that ReactOS's goals are orthogonal to what people really want, which is the ability to run Windows software without needing to deal with Microsoft. You really don't need the NT kernel in order to do that, you just need a robust userland compatibility layer. I think this is why Wine has been so much more successful (especially now with Proton and SteamOS) than ReactOS.

I still dream to one day have an OSS Windows replacement that's like a Windows 7/XP/2000 desktop but with modern kernel features, APIs, and security patches. But I think the more likely future is compatibility layers for gamers and the continuing death of desktop computers for anyone who isn't an enterprise or enthusiast.


I really like this answer.

It is cool that ReactOS can run windows drivers natively, though that seems to go against the values of open source


We're so far from having alternative OSes that are able to run software based on Win APIs that any compromise seems reasonable. Essentially, all we have at present is monopolistic, spying, ad-dropping Microsoft or nothing, so any alternative has to be better.

Linux with Wine is fine in its own right and I use the combination, but it isn't a true substitute for the ordinary user who has been used to Windows for decades. Witness the pitifully small number of Linux desktop installations compared with Windows, and the even smaller number of Linux users who use Wine. (Yes, I know Linux's desktop share has increased recently, and that's a good thing, but the numbers are still trivial.)

Seems to me pragmatism has to reign in the way that many users install Nvidia drivers on Linux. Granted, it's not the ideal for open source but the compromise is better than the alternatives.


What would an NT-compatible kernel get you that Wine doesn't already have, other than the drivers? And my point is that having that is cool, but the drivers aren't open source, so a lot of potential volunteers won't care anyway because of that.


Actual complete compatibility, which wine doesn't have.


Because OSS is a thankless job and _free volunteer_ work. The more niche something is, such as Windows kernel clone development, the ridiculously smaller the pool of potential contributors who may even want to contribute _their personal free time_ to it.

And I come from the perspective of other large niche OSS projects.


"Because OSS is a thankless job and _free volunteer_ work. "

Agreed, it's why I've advocated a halfway 'house' to overcome the problem and pay for the project's development.

It goes something like this (but no doubt there are many suitable variations): create a nonprofit cooperative organization/society that is revenue neutral to develop programs and pay developers a reasonable wage. Employment could be flexible, the organization could employ both full-time and part-time developers (this would help those who've a keen interest in the project but whose principal job is too valuable to let go etc.)

In effect, this software has a cost, but it would be very much cheaper than products from Microsoft, Adobe, etc. Also, licensing would be less restrictive—say, make the product still cheaper or even free if one compiles the code oneself. There are ways of releasing the code so someone doesn't redistribute a compiled version (each source could be different, have individual certificates, etc., thus compiled versions would be individually identifiable), but I'd reckon the price would be so reasonable that it wouldn't be worth the effort.

By revenue-neutral I mean the price of the product would not only cover wages and administration but also necessary reserves. I've mentioned this concept on HN and elsewhere previously for software such as GIMP, LibreOffice and so on.

I'm somewhat surprised there aren't any software organizations that use this development model.


Because a lot of niche OSS has very little commercial value to even try and be revenue neutral.

Even take ReactOS: why in the world would an org use it when you could just license Windows properly on pre-built PCs and have the accountants depreciate them as capital equipment?

If someone needs an old Windows kernel for compat, they'll just keep using that old Windows version they have on a box and not waste engineering labor to migrate it.

Combining a bunch of projects like that under a halfway house increases revenue but does not mean it will be revenue neutral. It'll still be in the red.

End of the day, there are reasons why some OSS stays completely free while others have commercial and free operations run in parallel. Either it can bring in money or it can't.


Add to that that finishing the remaining 90% of a project is a lot less exciting than designing the first 90% of a new one.


> ReactOS must take the record for software with the longest gestation in history

I think that would be Project Xanadu, started in (depending upon what you consider as "starting") in 1961 or 1965.


You successfully answered why you would make a project that you haven't made, but that doesn't really answer the question, and the author already answered in the same subthread 30 minutes before your post.


Not answered to my satisfaction.


I'm pretty confident that a core problem is that people who develop OSes realize that Linux or Unix-like systems are plainly superior, and end up just building on those, being well versed in their structure and syntax.

This is great and all, except that the Linux experience is about a 3/10 for people who are trying to leave Windows. Especially when the core of using Linux is still so incredibly CLI-heavy.

It's kinda like having professional race car drivers build cars. They end up being fast, efficient, nimble, and easy to repair. But driving them is difficult as hell: the driver's seat looks like an Apache helicopter cockpit, and the clutch is so stiff and the throttle so sensitive that you almost always stall or lunge. It does have an automatic "beginner" mode that never leaves first gear and turns the throttle to slush, and it will get you from A to B around town, mostly. Great for grandma.


From FAQ:

Why don’t you help ReactOS? ExectOS goals are very different from ReactOS, and contrast the project’s core philosophy as being quite on different paths. While ReactOS aims to replicate Windows® NT™, ExectOS is a completely new Operating System implementing the XT architecture which derives from NT™. Although both projects share the goal of being NT™ compatible, they intend to achieve it in different ways. What ReactOS tries to replicate, ExectOS only implements as a compatibility layer. Thanks to that, ExectOS does not need to strictly follow NT™ architecture and is capable of providing modern features.

Do you intend to cooperate with ReactOS to achieve common goals? No. We share Wine’s opinion on the inappropriate reverse-engineering methods used in the ReactOS project, as well as its association with the TinyKrnl project, which used every possible method to achieve the end result of a 100% compatible system. This especially applies to the so-called ‘dirty’ way.


Can you share a bit more about the inappropriate and dirty reverse-engineering by ReactOS and what the deal with TinyKrnl is?

This response really only answers the question for a select few people familiar with the windows reverse engineering community drama.


I don't know but similar can be found on https://wiki.winehq.org/Clean_Room_Guidelines

"Don't look at ReactOS code either (not even header files). A lot of it was reverse-engineered using methods that are not appropriate for Wine, and it's therefore not a usable source of information for us. [1]"


That looks strange and requires more information. ReactOS devs have in the past submitted patches to Wine, since ReactOS itself uses some parts of Wine.


ReactOS takes from Wine, but Wine does not want to take from ReactOS any longer.


Not surprising when you compare some leaked NT code and some ReactOS code. I'm not even saying they straight copied the source, btw, but some consider that disassembling as a reference is fine (and using MS symbols...). Well, it might even be legal in some cases (more cases than some licence-aware people commonly imagine), but this is a really risky stance to take. For sure, ReactOS is very far from clean-room.


I think there are more reasons. Maybe let's move this discussion here: https://news.ycombinator.com/item?id=40730327 I don't want to turn this thread into a shitpost.


[dead]


Seems like it's a completely different person with the same name, Rafal Kupiec. It's a pretty common Polish name and surname. WinuxOS is not related to ExectOS by any means. The post is from almost 20 years ago.

Also I don't see that level of code similarity you're claiming. KeLowerRunLevel in Minoca is literally a one liner. Most functions behave differently compared to Minoca's. I've asked Belliash on Discord, he was reading Minoca to derive some inspiration. And Codeium AI autocomplete may sometimes generate code similar to other projects. Nothing to worry about.


It goes beyond one-liners. This is especially apparent in sysres.c, where not only are the function names identical, they appear in the same order. There are also non-trivial functions that are obviously copied line by line, with some modifications. See KepGetSystemResource for example.

While I guess there's no proof that WinuxOS and ExectOS share the same authors (aside from identical first and last name), this ReactOS pull request was certainly done by the same person: https://github.com/reactos/reactos/pull/2853#issuecomment-65...


What about this, was this a different person too, Rafal?

https://github.com/reactos/reactos/pull/2853#issuecomment-65...


GPL3, wow, exciting!


Why

    We believe, there is no ideal Operating System on the market. During ExectOS development, we try to bring most useful features known from existing solutions, while keeping compatibility with NT™ architecture at desired level.
    Some of our ideas differ greatly from other projects and it is much easier if we do not have to fight legacy code and ideas.
    We need the freedom to break things when necessary.


... those are reasons why the author is writing it, which I guess is a plausible answer.

But there is no reason to use it as a user or application developer, which are the far more important Whys.

As a tangent, what will really impress me about AI/LLM is if major projects like this gain huge amounts of usable ported/translated code using the LLMs.

So you start a kernel, but we all know you need a desktop environment, graphics subsystems, shell environments, terminal apps, etc.

LLMs seem best suited for breadth-knowledge application. Porting of apps between apis that doesn't involve deep algorithms on a major scale would actually show me they are useful outside of parlor tricks.


It is absolutely mind-boggling to me that anyone would start a new OS project today in C. Even if you're not a fan of Rust (which would be my choice), there are other, safer, better languages to write an OS kernel in.

Even if you're going for a microkernel design, there are still consequences to writing memory-unsafe code outside the core.


I'm not knowledgeable in this low level systems stuff - which other, safer languages could you write a kernel in?


Oh my, so many better options. D, Zig, Nim, to name a few. Even C++, for all its faults, would be a better choice than C!


I've been working full-time in Rust for several years, so I'm quite favourably disposed to Rust, but I don't think Rust would rank highly on my list of languages to start a kernel in.

For me, the big advantages of Rust come from its assumptions about the memory model, and the rigorous checking it does to ensure compliance with those assumptions. But surely those assumptions cease to hold if you're the one defining the memory model, and most of your code involves shuffling memory around inside of `unsafe` blocks. What's the advantage of Rust at that point?

Not to mention, if those guarantees break, Rust will panic and crash, which - in a kernel - would mean bringing down the entire system. In a kernel context, even things like 'let' might fail. Either you let the system panic, and have yourself a kernel that topples over in a slight breeze, or you check for and try to catch these errors, which - well - is no different from C at that point. Linus Torvalds has noted some of his issues with Rust in kernel dev, and he's in a pretty good place to render his opinion.[0][1][2]

Imho, a better choice for a modern greenfield kernel would be one of the strictly functional languages: Haskell, OCaml, etc.

[0] https://lkml.org/lkml/2022/9/19/1105

[1] https://lkml.org/lkml/2022/9/19/1250

[2] https://lkml.org/lkml/2022/9/19/1260


> But surely those assumptions cease to hold if you're the one defining the memory model, and most of your code involves shuffling memory around inside of `unsafe` blocks.

True! But I think your characterization here of an OS kernel is incorrect. Certainly there will be more unsafe in an OS kernel than in your average user space application, but it should still be the minority of code. One thing that Rust documentation/tutorials frequently ram into your head in the chapters about unsafe is that you should be using unsafe only sparingly, to build small, auditable, hopefully-safe abstractions that the rest of your code -- safe code -- can use to get its work done.

That's just as true in a kernel context (and perhaps even more critical to understand and internalize there) as in user space.
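As a tiny illustration of that pattern, here's a hypothetical volatile cell: a safe, auditable wrapper around a couple of `unsafe` operations, of the shape kernel code might use for an MMIO register. (It's backed by ordinary memory here so the sketch is runnable; real kernel code would point it at a device address.)

```rust
// A minimal safe abstraction over unsafe volatile accesses.
// Callers never touch raw pointers; the unsafe surface to audit
// is confined to these two small methods.
pub struct Volatile<T> {
    value: T,
}

impl<T: Copy> Volatile<T> {
    pub fn new(value: T) -> Self {
        Volatile { value }
    }

    pub fn read(&self) -> T {
        // SAFETY: the pointer is derived from a valid, owned T.
        unsafe { std::ptr::read_volatile(&self.value) }
    }

    pub fn write(&mut self, value: T) {
        // SAFETY: the pointer is derived from an exclusively borrowed T.
        unsafe { std::ptr::write_volatile(&mut self.value, value) }
    }
}
```

All code built on top of this uses only the safe `read`/`write` API, which is exactly the "small unsafe core, safe everything else" discipline described above.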

> if those guarantees break, Rust will panic and crash, which - in a kernel - would mean bringing down the entire system [...] Linus Torvalds has noted some of his issues with Rust in kernel dev, and he's in a pretty good place to render his opinion.

And this is why the team working on the Rust-in-Linux project has been working with the Rust maintainers to provide alternate APIs that don't panic, and instead return errors, in these situations. Torvalds' feedback has been instrumental in driving these changes. The fact that Rust is in the kernel now is a testament to the fact that this feedback has been taken to heart, and Rust has seen improvements because of it.

> In a kernel context, even things like 'let' might fail

No, that's absolutely false, and breathless statements like these feel a bit disingenuous. Even `let s = String::new()` cannot fail, as all it does is move a stack pointer. I think perhaps you intend to mean that the expression on the rhs of a let statement can sometimes fail? Sure, and that's just like any other statement or expression. But, again, this is why we now have a bunch of "try_"-prefixed variants of common memory-allocating things that return an error instead of panicking.

> ... or you check for and try to catch these errors, which - well - is no different from C at that point

That's not even a little bit true. C has no built-in error checking mechanisms, or even a way to build safe, ergonomic error checking. If I call kmalloc() from C, I have to explicitly check for NULL. I can very easily forget to do so, and the compiler won't help me. If I call a similar "try_alloc()"-style function from Rust, it will return a Result, and the compiler will not let me do something with that allocated memory without explicitly handling a possible error.
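Concretely, stable Rust already ships fallible-allocation APIs in this style — e.g. `Vec::try_reserve`, which returns a `Result` instead of aborting on allocation failure (the kernel's own crate has analogous `try_`-style methods). A sketch:

```rust
use std::collections::TryReserveError;

// Fallible allocation: the error is a value the compiler forces
// the caller to handle, not a NULL that's easy to forget to check.
fn read_frames(len: usize) -> Result<Vec<u8>, TryReserveError> {
    let mut buf = Vec::new();
    // try_reserve returns Err instead of panicking on failure;
    // `?` propagates it to the caller.
    buf.try_reserve(len)?;
    buf.resize(len, 0);
    Ok(buf)
}
```

There is no path through this code that uses the buffer without the allocation having succeeded, and no way for the caller to ignore the error case silently.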

> a better choice for a modern greenfield kernel would be one of the strictly functional languages: Haskell, OCaml, etc.

I suppose it depends on what your goals are. If you want to build the next Linux (or something more modest, but still quite popular), then probably not; you're not going to find enough people who are competent in or even interested in languages like Haskell or OCaml to build a strong contributor base.

If you're instead doing research or are building something for niche use cases, then sure, that could work.


Some very fair points! I think there's more warts in Rust than just these, but if indeed Rust can be incorporated into Linux sanely and productively, I think it will be a net positive.

I respectfully disagree on the Haskell front. C is a significantly better known language than Rust, but you're advocating for Rust due to certain memory guarantees it provides. I think a similar argument can be made for languages like Haskell, and the formal mathematical guarantees they provide. Interestingly, Haskell was the language used to prototype seL4, which was then rewritten in C for performance, but the logic of the original Haskell implementation was used to formally verify parts of the later C one.[0] Imho, Rust is a (fantastic) attempt to make Algolians safer, but strictly functional languages like Haskell obviate many of the issues of Algolian languages in the first place.

On your last point, I'm unsure there's really much interest in making the next Linux, outside major corps who bristle at the GPL - e.g. Google's Zircon, notably written in C and C++ and first deployed in 2021. I think kernel dev largely divides into two camps: (i) hobbyist / academic kernels, which will be a homecoming parade of languages and designs (and rightly so); and (ii) production kernels, of which there are very few, written in C and C-likes, and conservative by design. Anything too new and unusual should probably prove its mettle in the former group before it graduates into the latter.

[0] https://dl.acm.org/doi/10.1145/1159842.1159850


I see no issues with it.


Why? What is wrong with the Linux kernel?

Along with the web browsers, it's the most sophisticated software in the world. Battle-tested, flexible, secure, powerful, polished, honed and revised for decades by some of the world's best developers.

Why replace it?


It ultimately depends on your requirements, but there are valid reasons why you'd want to use an operating system that isn't Linux in a given project.

Linux for the most part is made up of a large amount of code written in C running in kernel mode. Its trusted computing base includes drivers, file systems and the network stack, which makes for a rather large attack surface. Operating systems running these components in user mode offer a different set of trade-offs between security/reliability and performance.

It's also a Unix-like kernel with a bunch of security features retrofitted in (SELinux, cgroups, namespaces...). It's fine if you want to run Unix-flavored software, but it is not a pure capability-based system like you can find on Fuchsia for example. Unix systems by design confer processes an intrinsic ambient authority (like access to the global filesystem or process table namespaces) and you have to explicitly isolate and confine stuff, whereas on capability-based systems you can't manipulate or even enumerate objects without a handle to something by design.

What I do wish is that people stop putting Unix and POSIX on a pedestal. These are 50 years old designs that keep accumulating cruft. They work, but that doesn't mean we should keep teaching computer engineering students Unix without criticizing its design at the same time, lest they think process forking is a good idea in the modern era.


>>> keep accumulating cruft

Evidence?


There's plenty of literature on the topic, but you can start with "A fork() in the road" [1] that explains why this Unix feature has long passed its best-by date. Another good read is "Dot Dot Considered Harmful" [2]. There are other papers on features that have badly aged like signals for example, but I don't have them on hand.

[1] https://www.microsoft.com/en-us/research/uploads/prod/2019/0...

[2] https://fuchsia.dev/fuchsia-src/concepts/filesystems/dotdot
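For what it's worth, the spawn-style alternative that paper argues for is already the ergonomic default in modern standard libraries; e.g. Rust's `std::process::Command` describes the child declaratively and creates it in one step. A sketch, assuming a Unix-like host with `echo` on the PATH:

```rust
use std::process::Command;

// Spawn-style process creation: program, args, env, and stdio are
// specified up front and the child is created in a single call,
// instead of fork()ing the entire address space and patching it
// up before exec().
fn echo(msg: &str) -> String {
    let output = Command::new("echo")
        .arg(msg)
        .output()
        .expect("failed to spawn echo");
    String::from_utf8_lossy(&output.stdout).trim().to_string()
}
```

On many platforms this is implemented under the hood with posix_spawn or an equivalent, sidestepping the fork() pitfalls the paper catalogues.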


It's interesting, and I've experienced slow forks, which led to using a tiny companion process to execute programs (before spawn arrived).

I have to say I hate CreateProcess more for taking a string rather than an array of string pointers to arguments like argv. This always made it extra difficult to escape special characters in arguments correctly.
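To illustrate the pain: because CreateProcess takes a single string, every caller ends up reimplementing something like the quoting rules that `CommandLineToArgvW`/the MSVCRT expect. Here's a sketch of those rules (in Rust for brevity; treat it as an approximation of the documented conventions, not a reference implementation):

```rust
// Quote one argument for a Windows-style command line.
// With an argv-style API none of this would be necessary.
fn quote_windows_arg(arg: &str) -> String {
    // Simple arguments need no quoting at all.
    if !arg.is_empty() && !arg.chars().any(|c| matches!(c, ' ' | '\t' | '"')) {
        return arg.to_string();
    }
    let mut out = String::from("\"");
    let mut backslashes = 0;
    for c in arg.chars() {
        match c {
            '\\' => backslashes += 1,
            '"' => {
                // Backslashes preceding a quote must be doubled,
                // and the quote itself escaped.
                out.push_str(&"\\".repeat(backslashes * 2 + 1));
                out.push('"');
                backslashes = 0;
            }
            _ => {
                // Backslashes not followed by a quote are literal.
                out.push_str(&"\\".repeat(backslashes));
                backslashes = 0;
                out.push(c);
            }
        }
    }
    // Trailing backslashes must be doubled before the closing quote.
    out.push_str(&"\\".repeat(backslashes * 2));
    out.push('"');
    out
}
```

And even this only covers what CommandLineToArgvW-style parsers expect; cmd.exe layers its own metacharacter escaping on top.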


Another example is the select() API: it's still in use, but its limitations (such as the fixed FD_SETSIZE cap of 1024 descriptors) are no longer adequate.

Another example is ioctl() API for communicating with device drivers. It technically works, but marshaling huge APIs like V4L2 or DRM through a single kernel call is less than ideal: https://lwn.net/Articles/897202/


Speaking of select(), a while ago I got a PR merged into SerenityOS [1] that removed it from the kernel and reimplemented it as a compatibility shim on top of poll() inside the C library.

You can shove some of the minor cruft from Unix out to the side even on a Unix-like system, but you can't get rid of it all this way.

[1] https://github.com/SerenityOS/serenity/pull/11229


poll(2) is also a bad design.


Well, it's the best design that was implemented inside SerenityOS when I contributed this, as mentioned inside the PR. The event loop still used select() at the time, although it was migrated to poll() a couple of months ago [1].

Polling mechanisms that keep track of sets of file descriptors in-kernel are especially useful when there's a large number of them to watch, because with poll() the kernel has to keep copying the sets from userspace at each invocation. Given that SerenityOS is focused on being a Unix-like workstation operating system rather than being a high-performance server operating system, there is usually not a lot of file descriptors to poll at once in that context. It's possible that poll() will adequately serve their needs for a long time.

That PR was an exercise of reducing unnecessary code bloat in the kernel. It wasn't a performance optimization.

[1] https://github.com/SerenityOS/serenity/commit/6836091a215229...


Nobody has to use select(2) and hasn't had to for a long time.

So the alternative is to rip it out. The benefit of that (in the real world) is unclear ...


"Why? What's wrong with the Minix kernel."

– 1991.

Stop being such a dismissive asshole towards projects people find interesting to work on.


> [The Linux kernel is] the most sophisticated software in the world

Do you have any evidence to support that claim?



