A Look into CBL-Mariner, Microsoft’s Internal Linux Distribution (jreypo.io)
145 points by rcarmo on July 11, 2021 | hide | past | favorite | 119 comments



A MS dev writing a blog on building MS’ own distro of Linux on his _Macbook_ must have Ballmer sweating bullets.


Fortunately, nobody has to care what Ballmer thinks anymore.

IMO he, even more so than BillG, was the animus behind the insanely cultish POV that MSFT had for a long, long time. As recently as 2009, working there led otherwise smart people to say really dumb things for, I guess, political reasons.

I'm in the project management space. We have a product that complements Project Server, and early on we did lots of joint deals with MSFT's Proj Server unit. Even well after the introduction of the iPhone -- which, for most phone makers, was a serious wakeup call about what you could do with a phone -- they acted like it was a personal insult every time they saw a professional associate using something other than Windows Mobile. "Get one of these! It's just as good!"

Keep in mind that, in 2009, WinMo was a complete dumpster fire. App availability was awkward (no built-in stores yet), but there were MONSTROUS gaps in capability in the devices out of the box. For example, the native mail client couldn't do IMAP. But sure, it's "just as good".

iPods -- which were really ubiquitous -- set them off, too. And if you mentioned having a PlayStation, they'd want to know why you didn't have an XBox instead. They had "amnesty barrels" you could throw away your non-MSFT tech in. It was really, really, really goofy and rah-rah and honestly creepy.

You notice how, today, MSFT actually understands they're part of a diverse computing landscape? Sharepoint actually works on non-MSFT browsers, for example. Office 365 runs like a champ in pretty much ANY decent browser, on any platform. I can run a real iOS build of Office on my iPad, and the Mac version is really great.

ALL OF THIS is post-Ballmer.


> ALL OF THIS is post-Ballmer.

All of this is post losing the top position. It's not possible to infer that the behavior won't be the same if they get to the top again.


Ballmer was forced out of MS 8 years ago, doubt he cares. And anyway, IIRC, it would've been Sinofsky and co. trying to squash anything that threatened the Windows division's power.


Sinofsky and co. are also to blame for the Longhorn sabotage of using .NET, and most likely for sabotaging the attempts to bring Midori stuff into Windows.

Now Project Reunion, sorry Windows 11, is trying to fix the path started with Windows 8.


Longhorn sabotage, Midori, Project Reunion? Could you give a bit more background for those of us like me who aren't up to date with Microsoft internal projects and politics?


To fully get where I am coming from, you have to go back to when .NET was released.

.NET was supposed to be the great reunification of the VB, C++ and COM runtimes; then a Java touch also got into the mix and .NET happened (initially it was known as Ext-VOS).

https://docs.microsoft.com/en-gb/archive/blogs/dsyme/more-c-...

Hence why the CLR is just like WASM + GC, if you prefer a modern comparison.

If you go back into web archives, when Visual Studio.NET was released, it was going to be .NET everywhere, across the whole stack.

However a big management mistake happened: .NET was part of the DevTools business unit, while C++ was kept under WinDev. Up until Satya started to change the culture, it was pretty much WinDev vs DevTools.

So Managed DirectX comes, eventually gets killed, XNA and Silverlight take over Windows Phone 7, get killed by WinRT and DirectXTK and so on.

Going back to the original statement: if you Google for why Longhorn did not work out, you will find plenty of blame placed on .NET.

https://hackernoon.com/what-really-happened-with-vista-4ca7f...

Yet Android, ChromeOS and Midori are examples of what happens when everyone actually works in the same direction to bring an OS into production.

Joe Duffy makes some remarks in his two talks, where he hints at why it was a failure to fight the Windows culture:

"Systems Programming in C# " - https://www.infoq.com/presentations/csharp-systems-programmi...

"Safe Systems Software and the Future of Computing" - https://www.youtube.com/watch?v=CuD7SCqHB7k

Note that for some time the Asian Bing nodes were actually running on top of Midori as production test.

A big decision in Vista was to replicate the .NET design using COM instead (hello WinDev), hence why all major modern Windows APIs are now COM based.

Windows 8 doubled down on that by introducing WinRT, with AOT-compiled .NET and C++/CX using COM as the future Windows runtime. This was a point of friction, as .NET Native isn't 100% compatible with regular .NET, and many C++ devs disliked the C++/CX extensions (later C++/WinRT replaced C++/CX, but that is another story).

So to sort out all the adoption chaos, Project Reunion was born, which is basically merging the COM improvements brought by WinRT and the app sandbox into Win32, and forgetting the split ever happened.

Even Reunion has had a couple of hiccups. It started as XAML Islands; it eventually became clear that that alone wouldn't do it, thus Project Reunion.

https://blogs.windows.com/windowsdeveloper/2020/05/19/develo...

And now a year later, it was renamed as Windows App SDK.

https://blogs.windows.com/windowsdeveloper/2021/06/24/what-w...

Note that many System C# features now live in C# 7 and later versions, and were also the basis of the C++ Core Guidelines.

Also note, as an example of the internal competition, the plethora of GUIs being done now: Forms, WPF, WinUI, MAUI, Blazor, React Native for Windows.

Maybe if all divisions had worked more together on Longhorn, the project would actually have happened, and Vista wouldn't have been needed, nor the strong emphasis on COM that it started.


Thanks for the context. It's very frustrating as a .NET developer that infighting set back .NET GUI development by 10 years. There's still no supported way to use DirectX from .NET. All the new GUI tech is moving in the right direction but is unfinished to the point that still only WPF and WinForms can meet my requirements. I really wanted to ditch WPF since the DirectX 11 -> DirectX 9 (WPF) interop is so hacky.


Unfortunately we are better off with community efforts; the DirectX team is really deep into a C++ mindset and nothing else, no wonder it belongs to the WinDev side.

https://github.com/microsoft/WindowsAppSDK/issues/14#issueco...


What was wrong with Midori? I wasn't on the team but I played around with it, and thought that the architecture was absolutely beautiful. It's a tragedy that it wasn't open-source. I understand that there wasn't much appetite for "replacing Windows" when we were losing ground fast to mobile, but it's a loss to the academic community at least.


Political feuds, if you read between the lines of statements like "The project included novel “cultural” approaches too, being 100% developers and very code-focused, looking more like the Microsoft of today and hopefully tomorrow, than it did the Microsoft of 8 years ago when the project began.".

Joe Duffy has similar remarks in his posts and in the post-mortem sessions done about the project.



Many people at Microsoft use Macs. It’s not a big deal.


Meanwhile no one at Apple is using Windows unless they have to for very specific software.


That's unfortunately their loss in my opinion. Windows has a pretty awesome development story, far better than Apple's authoritarian hold on what software can be run and distributed for that platform.

Moreover, gaming is great on Windows, and always has been, and WSL2 is extremely slick.


+1

During the past three years I went from being my workplace's Windows hater to ditching Linux in my personal machines in favor of Win + WSL2.


Why did you ditch Linux for WSL2?


> Apple's authoritarian hold on what software can be run and distributed for that platform

What does that mean? I've never needed Apple's permission to run any software on macOS.


Unless you go out of your way to disable code signing using the terminal & a root account, macOS will only run code signed by an Apple-issued certificate [1]. It will also phone home [2] every time a binary is run for the first time.

[1] https://en.wikipedia.org/wiki/Gatekeeper_(macOS)

[2] https://apple.stackexchange.com/a/391399


I had an overnight update install itself on my sole Windows machine the other night. When it rebooted it refused to let me use my own computer until I obtained direct permission from Microsoft giving them my email address. Once I was past that lock it told me "the computer is all yours now" as if it hadn't been earlier.

Does Apple completely deny your use of your own hardware like this until you submit? Asking because I have been used to Linux as a daily driver for over 20 years and haven't used Apple since Jobs banned clones.


> I had an overnight update install itself on my sole Windows machine the other night. When it rebooted it refused to let me use my own computer until I obtained direct permission from Microsoft giving them my email address.

This is not something that Windows does. Ever.

What do you mean by "obtained direct permission from Microsoft giving them my email address"? You're clearly not talking about logging into a Microsoft account so I'm struggling to understand what you're referring to.


1. You can disable auto updates

2. I've never entered an email for using Windows 10, they do try to hide it but you can set up an offline account.


You can literally just right click and press 'open'. Might not be obvious to the layman, but you don't need to do a whole code signing bypass song and dance.


Big Sur has made this intentionally more tedious [1]. You apparently have to right click, click open, close the dialogue and then open it again in order to actually get the option to approve the application.

[1] https://disable-gatekeeper.github.io/


I'd say this is a good thing.

If you can't even figure out how to bypass it, then you probably shouldn't.


I can't edit my comment so I'll just put this here: what is up with HN heavily downvoting factual, useful information? It seems something from the past 1-1.5y or so and it infuriates me to no end.


In aggregate, HN hates Apple. It's that simple, really.


In aggregate, HN loves/hates all of {GOOG, APPL, NFLX, TSLA, MS}.

Except for FB, everyone actually seems to hate them.


Doesn’t Apple provide a system preference option to disable gatekeeper completely (set to running signed applications by default, and it also allows you to limit apps to App Store only).


The system preference option is no longer visible in recent releases of macOS – it can only be enabled via the terminal.


IOW, yes, you can disable it.


Right click -> Open -> Open

That's not needing Apple's permission; that's me giving my permission.


Actually, they had an outage when Catalina was released because Macs phone home before starting 3rd party software.

https://9to5mac.com/2020/11/15/apple-explains-addresses-mac-...


Gaming is, in fact, the only reason to have Windows installed on any computer. Even though Wine and Proton are impressive projects, I'm somewhat pessimistic about their ability to completely replace Windows.

I won't be sad if I'm wrong there.


Right now, it's a bit like using Firefox instead of Chrome. They both render the website but some devs exploit Chrome only tech that Firefox hasn't yet implemented workarounds for and those won't work.

If you're ok with that, you can officially replace windows with Linux.


>far better than Apple's authoritarian hold on what software can be run and distributed for that platform.

No such hold exists on MacOS. You're confusing it with iOS.

Windows is better for gaming, but that doesn't matter to everybody.


You realize there’s no issue with development internally for Apple products at Apple on macOS, right?

And generally if you work at Apple you actually like/prefer/love Apple products?


WSL2 is slick, but the process of getting on Windows Insider Program and updating Windows to the right version is quite clunky.


You no longer have to be a member of the Windows Insider Program. There's a simplified installation method if you are, but if you're not, you can install WSL2 manually.


I was forced to use a Mac when I worked at Apple. Absolutely horrible piece of hardware. The keys sucked, the touch bar was useless. Had to carry around an external keyboard. Luckily I could bring my own external keyboard, which made it somewhat bearable.


I use Linux every day with an Apple keyboard. The way we train our muscle memory and how it becomes our reality is fascinating.


He's probably talking about the butterfly keyboards built into MacBooks that were notoriously bad in the mid-to-late 2010s.

The external keyboards were never so bad.


They also write a lot of software for the Mac...


An Azure MS dev .. for running on their VMware vSphere 7 home lab.

This dev sure is not only looking at their in-house toolset. I'd say that that is a good thing.


> A MS dev writing a blog on building MS’ own distro of Linux on his _Macbook_ must have Ballmer sweating bullets.

Not to mention throwing chairs.


Ballmer just became the 9th person in the "$100Bn net worth club" - https://duckduckgo.com/?q=ballmer+%24100bn&ia=web

he's probably fine with everything.



Why would Ballmer care? He hasn't worked at Microsoft in a decade.


Who's Ballmer?


It's neat to see this! But somehow, an RPM based distro, that solely documents its instructions for Ubuntu feels very something.


Less talked about is CBL-D, a Deb packaging based Microsoft Debian distro that powers Azure Cloud Shell. https://rasmusg.net/2020/12/01/updates-to-cloud-shell-docker...


Microsoft-y


> Of course Mariner is open source and it has its own repo under Microsoft’s GirHub organization.

I was hoping GirHub wasn't actually a typo, but a funny internal name.


Does Mariner include SELinux? If not, what major LSMs are supported? Thanks!


It's not mentioned on their security features page[1].

[1] https://github.com/microsoft/CBL-Mariner/blob/1.0/toolkit/do...


Actually yes.


How is this licensed under MIT and not GPL if it's a distro built on existing Linux code that's GPL-ed?


As indicated here (https://github.com/microsoft/CBL-Mariner/blob/1.0/LICENSES-A...), the MIT license is for source code actually contained in the repo.

The bulk of this repo is spec files which define dependencies and how to build/incorporate, and call out dependency licenses.

The actual output of this build would still have all the proper GPL goodness intact.


GPL does not require that.

E.g. you can develop closed source code that runs on top of the Linux kernel.


They can license their changes and userland as MIT. They would have to dual license any kernel changes with GPL though


They're not distributing any GPL code, so their changes can be under any license, including proprietary. And if they were, they'd only need to license their changes under something GPL-compatible (like MIT, as they've done)—no need to dual-license. Upstream Linux already includes many parts that weren't licensed as GPL, only something compatible.


Can't they just say the code was randomly generated by Copilot so copyright and stuff like that doesn't apply?


It seems almost inevitable that Windows will eventually become a Linux distro. Microsoft could replace incredibly resilient cruft like NTFS with EXT4 or ZFS and the registry with human readable configuration files. On the other hand, they could replace Bash with PowerShell and hopefully pressure various orgs to adopt some sensible subset of configuration formats.


I think it more likely that Microsoft continues the ABI-hosting of Linux like it is currently doing with WSL. It doesn't make sense for them to use a third-party kernel when the one they have works well enough and supports a long legacy of hardware and software, and it's something they can maintain control over.

If Microsoft feels competition from Linux, it makes more sense for them to allow hosting of Linux as a user-space application on their platform, rather than have a user migrate away.


Didn't they switch to hypervisor virtualization of Linux?


I'm not so sure. The Windows driver model is vastly different from Linux's. Linux drivers have to be compiled for every kernel release. Windows drivers that are more than a decade old can still be installed and work correctly.


Linux drivers can work for much longer than a decade once they reach mainline. Out of tree drivers are another story.


Right, but Windows drivers are (all?) out of tree and have been for decades; they'll never get every vendor to change that.


And this story that Windows drivers work forever is a rose-tinted view of the world. Many drivers (my ltc scanner) stopped working going from Win9x to XP. I saw a USB projector driver break when a system was updated from Win 7 to 8. The same device worked as a fb device out of the box on my rockpi4. And Windows drivers are platform dependent: the same driver will not work across 32/64-bit systems or ARM/x86. Linux is entirely another world.


I recently installed XP drivers on a Windows 10 system. Worked flawlessly, even though officially this wasn't supported. 9x to XP/Vista is the last breaking change we've seen and that was over 15 years ago. It's quite impressive actually.


I've got an Asus CP120 USB mini projector. When you plug it into a Windows machine it presents itself as a mass storage device with the Windows 7 (or XP, I don't remember) driver on it. Last time I tested, it didn't work out of the box on a Windows 10 machine; it would probably work if the drivers were downloaded and installed, but I didn't bother.

On my rockpi4, I simply plugged it and instantly I've got a terminal on my wall. It is an ARM machine. That's the HUGE advantage of having a driver in the kernel: it will work on every architecture the code can compile. That was just the same experience with my usb wifi dongle, with my wacom tablet, with an epson multifunctional (I had to apt-get install escpr, but it is a single command), samsung printers... Some of these drivers are in user-space and not in the kernel, but it worked flawlessly in more than a single arch. That is impressive.

But I do admit I envy the number of drivers for desktop gadgets that are compatible with Windows. Of course, binary drivers are only interesting if you use a single arch.


Mainline drivers are maintained and updated. I think the statement you are replying to is shorthand for "Windows binary drivers work for years across kernel versions and minor distribution versions".


The windows driver model is not semantically much different. There is greater cohesion at a higher level, leading to it being easier to create a shim layer for, if anything.


DKMS and driver segmentation enables 10 year old drivers to mostly still work on Linux too.


Yes. Sometimes those old drivers can't be compiled with newer headers.


And sometimes old drivers won't work on Windows 10.


any example of a decade old driver that will install? not even the .cat signature algorithm will be the same, and windows enforces signed drivers these days.


Can NDISwrapper approach be applied to other drivers?


I understand a lot of the benefits of ZFS and PowerShell. But what's wrong with NTFS? Can you talk a little about the relative strengths of NTFS vs EXT4?


NTFS is a more sophisticated filesystem but it has worse performance under certain workloads, specifically the workload underneath `yarn install`.


That'll be due to the file system filter driver[s] (i.e. antivirus). This should impact most/all file systems where the file system filter driver supports said file system.

Disable/remove the file system filter drivers and the performance issues largely disappear.


On current pro versions, you can’t really disable the live protection constantly, but you can add a permanent exclusion for your home folder or the whole drive. This should greatly speed up operations like that. Still a lot faster to do them inside a Docker container/volume on the same Windows host.


Also Docker and installing applications that have lots of small files.

In general NTFS doesn’t do well with lots of small files because opening files is expensive compared to ext4 (dramatic oversimplification). This shows up in random places where very little actual file reading/writing is happening for each file, like yarn, docker, installing video games, etc.
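You can feel this class of workload with a metadata-heavy microbenchmark. The sketch below (illustrative only, not a rigorous benchmark; `create_small_files` is a made-up helper) creates many tiny files, the pattern behind yarn, docker builds and game installs, where per-file open/create cost dominates:

```python
import os
import tempfile
import time

def create_small_files(root, count=1000):
    """Create 'count' one-byte files under 'root' and return elapsed seconds.
    Almost all of the cost here is per-file create/open overhead rather
    than data transfer -- the kind of workload where NTFS (plus any file
    system filter drivers) tends to lag ext4."""
    start = time.perf_counter()
    for i in range(count):
        with open(os.path.join(root, f"f{i}.txt"), "w") as f:
            f.write("x")
    return time.perf_counter() - start

if __name__ == "__main__":
    with tempfile.TemporaryDirectory() as root:
        elapsed = create_small_files(root, count=500)
        print(f"created 500 small files in {elapsed:.3f}s")
```

Running the same loop on the same hardware under ext4 and NTFS is a simple way to see the gap for yourself.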


NTFS also has a 260 character path limit [0], which as the sibling comment says can really jam up node_modules and the current HEAD of the intellij-community repo which currently has some giant filename in a subdirectory of its repo

I'm aware of their claim about using Group Policy to remove the limit but I've never used GP in order to know its sharp edges for my gaming computer

0: https://docs.microsoft.com/en-us/windows/win32/fileio/naming...


NTFS itself has never had a path limit other than 32K characters. MAX_PATH limited paths to 260 characters for the Win32 API (among others). It has always been possible to bypass the 260-character limit, though obviously most applications wouldn't work with a file that exceeded the system-defined MAX_PATH value.

Office does its own thing and doesn't leverage MAX_PATH. No idea why.
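The usual escape hatch is the Win32 extended-length prefix `\\?\`, which tells the wide-character file APIs to skip MAX_PATH normalization. A minimal sketch (the helper name is mine; it only does something on Windows and is a no-op elsewhere):

```python
import os

def extended_length_path(path):
    r"""Return an absolute path carrying the Win32 '\\?\' extended-length
    prefix so wide-char APIs accept paths longer than MAX_PATH (260).
    On non-Windows platforms this just returns the absolute path."""
    path = os.path.abspath(path)
    if os.name == "nt" and not path.startswith("\\\\?\\"):
        return "\\\\?\\" + path
    return path
```

Note that prefixed paths must be absolute and are passed through without normalization, which is exactly why they dodge the limit.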


I love the idea of the Windows registry though, I hope it stays. With Linux, I have to poke around the file system or google "package X config location", and many programs have their own idiosyncratic rules around which config paths take precedence over others. The Windows registry as implemented is not the best, but the idea of something like a central sqlite db for config is great.


Certainly not for me. Registry was one of the worst parts of the Windows experience for me; but I understand it can be subjective.

With separate files, the ability to download and replace them individually is a big win. Also, I can use terminal tools on them which is great for automation.


I've never understood the criticism of the registry. The only times I've ever had an issue with it were when there were RAM or disk problems with the computer. On Windows there's nothing at all stopping developers of standard Windows applications from implementing their own config files and many do.


1. Most of the programs put things all over the place. If you install software and use it for a while, you may find lots of keys in lots of places, all subtly affecting something.

It’s technically the fault of the program, not the registry. But culture and ecosystems matter, and by-and-large, Config in files is usually much more concentrated.

2. Related to (1), it is very hard to just move settings from one machine to another. How do you export your settings related to program X in order to use it on another machine? I keep my Linux config files in a git repository - I can easily track history and clone it to new machines. What’s the registry equivalent?

3. It is incredibly slow. When I still used Windows, if I needed to do some registry editing, I would dump it to a text file, edit it, and then import the edited keys. That took about 1/10 of the time doing the same in RegEdit.


For your first point, that's often the result of using third party libraries via COM. That COM library is its own thing and it wouldn't make a lot of sense to put all of its settings under your app. Plus, some parts of the registry (thinking of CLSID) are basically directories where the system looks when a program says "give me an instance of ThirdParty.Grid".


I agree with the above in a painful way. I don't think of myself as a power user but I have at times had to clean up the registry for stuff that uninstallers missed, because the updated application does not want to install. It's a complete mess of having to dig through a tip to find a broken matchbox so you can destroy it without setting the whole thing on fire.

Meanwhile OS X or Linux don't suffer from this and working with applications is a lot more "streamlined". As I said, this is just a slightly above average skilled end-user perspective.


My experience with GNOME configuration repository, or files scattered around /etc, /opt/etc and /share kind of tells otherwise


it's nicer using PowerShell. `cd hklm:\` and bob's your uncle.


Registry never made much sense on the local system: just use the filesystem. I assume there was once a plan to use it in a networked fashion. But Active Directory replaced it for that purpose.


The filesystem isn't inherently bad, but the lack of a standard config format, location, and APIs is quite terrible. There's no reason a thousand programs should have a thousand different ways to be configured, it's just a legacy of poor design with no standardization.


Actually there is a standard location for user settings: https://specifications.freedesktop.org/basedir-spec/basedir-...

Dconf complements it: https://wiki.gnome.org/Projects/dconf and dconf-editor is even similar to regedit.

For system settings... that is a bit more complicated.
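The per-user lookup rule from that spec is simple enough to sketch: honor `$XDG_CONFIG_HOME` when set, otherwise fall back to `~/.config`. Illustrative only; `user_config_path` is a hypothetical helper name:

```python
import os

def user_config_path(app, filename):
    """Resolve a per-user config file per the XDG Base Directory spec:
    $XDG_CONFIG_HOME if set and non-empty, otherwise ~/.config."""
    base = os.environ.get("XDG_CONFIG_HOME") or os.path.expanduser("~/.config")
    return os.path.join(base, app, filename)
```

Programs that follow this rule are why most modern tools keep their settings tidily under `~/.config/<app>/` instead of littering the home directory with dotfiles.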


It's not poor design because it wasn't really designed in the first place...because it didn't really need to be designed.

There is a standard and it's minimal:

- config files are text files

- # indicates a comment

Why standardize further when the types of programs that require config files run an extremely wide gamut? Types of programs can be as diverse as web servers, graphic editors, kernel modules, networking programs, etc. Each are vastly different. I don't need to wait for, worry about, or try to install an updated registry processor that knows about new object types. Better to build it into the program or rely on a library. Why change the entire system of configuration storage just for one new type of program?

And text files are awesome for another reason: With the ability to comment config files you can understand any format - as well as include notes and conveniently have documentation where you need it.

Also: You can't use git or other versioning to backup your registry keys and rollback to previous versions.
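That minimal standard (text lines, `#` comments) is small enough that a parser fits in a few lines. A sketch assuming a simple `key = value` layout (many real formats differ; `parse_config` is a made-up helper):

```python
def parse_config(text):
    """Parse a minimal '#'-comment, key=value config into a dict.
    Comments (inline or whole-line) and blank lines are ignored."""
    settings = {}
    for line in text.splitlines():
        line = line.split("#", 1)[0].strip()  # drop comments and whitespace
        if not line:
            continue
        key, _, value = line.partition("=")
        settings[key.strip()] = value.strip()
    return settings
```

Preserving the comments on a round-trip write is the hard part, which is where richer formats and libraries come in.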


For Windows, it's faster than accessing the file system (thanks to file system filter drivers).


Editing the registry never results in corrupt unreadable intermediate states, unlike writing to config files. A workaround is "atomic save" (where one app writes to a different filename and renames it over the original). This ensures you'll never get torn reads (I think the Windows registry doesn't support transactional/atomic updates of multiple values at once), but you lose permissions and symlinks or something like that.
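The atomic-save workaround described here is easy to get subtly wrong (temp file on a different filesystem, missing fsync). A minimal sketch of the safe version, assuming `atomic_write` as a hypothetical helper; `os.replace` is the atomic rename on both POSIX and Windows:

```python
import os
import tempfile

def atomic_write(path, data):
    """Write 'data' to 'path' so readers never see a torn/partial file:
    write to a temp file in the SAME directory (so the rename stays on
    one filesystem), fsync it, then rename over the original."""
    dirname = os.path.dirname(os.path.abspath(path))
    fd, tmp = tempfile.mkstemp(dir=dirname)
    try:
        with os.fdopen(fd, "w") as f:
            f.write(data)
            f.flush()
            os.fsync(f.fileno())  # make sure the bytes hit disk first
        os.replace(tmp, path)  # atomic swap; readers see old or new, never both
    except BaseException:
        os.unlink(tmp)
        raise
```

As the comment notes, this swaps whole files atomically but does nothing for multi-file consistency, and the fresh temp file won't inherit the original's permissions or replace a symlink in place.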


Of course it did/does, and depending on how bad the registry corruption was, the whole system could be rendered unbootable.


It could and it did, but honestly, I personally have never seen any Win10 endpoint with any registry issues since it came out.


Have you never experienced an unbootable system because of a corrupted registry? It's unfixable. And I'm not saying it happens when manually editing the registry. It might happen when Windows crashes at the wrong time and the system is in the middle of modifying it. It's quite failure-prone and I'm surprised it hasn't been improved much over the past decades.


Somehow I've never gotten a corrupted registry resulting in a broken account or system. Perhaps I'm lucky or too innocent in this regard. Though it does sound concerning if the Windows registry doesn't perform write-ahead logging like SQLite or a client-server database to enable crash recovery.

I should research what key-value database libraries I can use as a cross-platform registry-like storage format that's more resilient to app or system crashes.


You can use terminal tools on the registry fwiw. Powershell provides a native PSProvider that exposes the registry like a file system so you can cd into it and make changes.


Personally I like the Mac plist system the best. It just works, and it works so well that you never hear about it because no one ever complains about it!


There are at least three different formats for plist files, and the tools are differently broken depending on the format. There are some really odd cases like terminal colour settings which are stored in the plist as a pickled object, so you have to use the gui to adjust a colour rather than (say) using css syntax.


No settings format can force a programmer to make good decisions. A Windows developer could put a .Net binary serialized object into a blob registry key.


> many programs have their own idiosyncratic rules around which config paths take precedence over others.

The place where programs put their config in the Windows registry is just as idiosyncratic, if not more so.


AIX has a registry like thing called the ODM. I think it’s an interesting take on centralizing config on UNIX.

Fairly horrific to use, but I always appreciate unorthodox designs in operating systems. They’re exceedingly rare these days.


On the contrary, having configuration in text files allows using different formats depending on the needs of the application.

Also, it allows the use of git to track changes on config files and even replicate them across hosts.

etckeeper for /etc is wonderful.


> replace Bash with PowerShell

On the contrary I wish Windows shipped with busybox, so that it would be easier to run scripts with some bashisms portably.


I know everyone here really likes Linux, but bash is kind of a joke compared to PowerShell when it comes to usability. Everything from the way piping works to the function names makes PowerShell easier for a beginner to learn and remember. The best way to figure out how to do something in bash is to google it and hope someone answered the question on SO. With PowerShell I just go to the MS docs and scroll through the functions until I see the one function name that is self-descriptive, and after a 2-minute read of the docs I'm usually set.

That’s not to say powershell is perfect but on the whole it’s a lot easier to find the thing you’re looking for, and you don’t usually have to wade through SO snark to do it.


Agree. Make Windows more Linux like where possible, not the other way around. Understand that most of us will never use Windows Server products but we will use Windows 10 if it works well enough as a development platform. Do away with the Teen Titans design aesthetic - carefully study what macOS is doing. Keep the fast, keep the low power consumption and keep the solidness however.


I've looked at the busybox source, and it appears to me that bash compatibility that is added to the Almquist shell is a very thin veneer, not much more than defining [[ as an alias for [ (test).

If you're looking for arrays, you will be sorely disappointed AFAIK.


Microsoft might choose Toybox for its zero-clause BSD license over GPL.


Agreed. By the end of the decade, one expects Windows to be a desktop experience like Apple's Aqua atop a Linux chassis, as the drift continues.

Why put money into the kernel beside that of a Linux Foundation member?


Hopefully not using the Linux kernel though. It changes too much, stuff gets dropped too often, it's buggy, it's insecure as hell, there's no test framework. It releases features quickly, sure. But I kind of hate using it, both on my desktop and in production. I would rather have BSD, but then you don't get the features.


> It changes too much, stuff gets dropped too often,

Stable features that are visible from user space? That would be extremely surprising.

> it's buggy, it's insecure as hell

It's really not. It has bugs, even security bugs, but no more than any other modern kernel.

> there no test framework.

https://www.kernel.org/doc/html/latest/dev-tools/testing-ove... plus assorted fuzzing and integration testing


They can use an LTS kernel, and I hear there is some work in the Android world to give LTS kernels a stable-ish driver interface.



