Microsoft's Linux Kernel (github.com)
801 points by polyomino on June 28, 2019 | 508 comments

This is really Microsoft catching up to the Mac in terms of integration with the open-source ecosystem which, importantly, drives the web.

In the mid 2000's the Mac really took off due to being a good-enough Linux replacement on the command line, while taking care of all the hardware integration and providing a sleek desktop experience. This really helped the Mac gain traction among hackers. The system wasn't as open-source as Linux, but it was good enough: most of us don't want to mess about with the desktop stack. If the stack we care about is open-source and runs across many systems, then we still have the freedom. I find it fairly easy to work on both Mac OS X, Linux and WSL2 -- but not pre-WSL Windows.

The inclusion of a Linux kernel really seems to cause a lot of confusion. No, Microsoft is not about to remove the NT kernel. No, they're not about to supplant Linux. WSL1 did what the guys from Illumos and FreeBSD have also done: since the Linux syscall interface is quite stable, the NT kernel can simply implement these syscalls and handle them appropriately. The problem was that the NT kernel was still sufficiently different that the filesystem performance was quite poor, and complex applications like Docker required more kernel features than were provided by just the syscall API.

Including a Linux kernel is really the nuclear option: rather than trying to fit Linux applications into Windows, they're running in their own world with integrations into this world. This means you can now run Docker on WSL2, for instance, but it's also complicated: kernels really like to think that they own the world; for instance, they keep memory around to use later. WSL2 dynamically grows the amount of RAM allocated to the VM, which is quite clever, but I don't know how/if they're going to handle freeing memory again -- here the NT kernel really needs to communicate with the Linux kernel.

Anyways, the upshot is that no, Microsoft is not going to take over the world, but perhaps it will be easier to use a Windows laptop which supports all the stupid Enterprise software I need but still have a Linux shell for actual work.

> I find it fairly easy to work on both Mac OS X, Linux and WSL2 -- but not pre-WSL Windows

The thing that annoys me most on Windows is Windows itself. After using Linuxes for almost two decades now, the notion of the OS taking tens of minutes to self update when all I wanted was to quickly reboot it is unbearable.

> I don't know how/if they're going to handle freeing memory again -- here the NT kernel really needs to communicate with the Linux kernel

Usually "free" memory only exists right after a Linux machine boots. During normal usage most of the memory is used to cache things, and when some large app frees its own memory (such as when Slack quits) it'll eventually be reused for other things.

Not sure how that would work. Memory doesn't stay unused in Linux for long - whatever programs free will end up being used to cache information that's slow to access.

> The thing that annoys me most on Windows is Windows itself. After using Linuxes for almost two decades now, the notion of the OS taking tens of minutes to self update when all I wanted was to quickly reboot it is unbearable.

This is definitely a problem, especially considering that the shotgun approach of some Windows updates actually hurt the performance of some machines, even breaking Windows on occasion [0]. However, in fairness, I've spent just as much or more time fixing rolling-release issues on Arch/Manjaro, or troubleshooting a package that blocked sudo apt upgrade. The worst experience for me was having to reformat after the upgrade from Ubuntu 14.04 crashed.

What annoys me most about Microsoft's overarching paternalistic philosophy is how creepy their telemetry initiatives are. Granted, plenty of companies do this now, for the same profit-driven motivations, but Microsoft goes above and beyond in terms of disabling or even ignoring opt-out options [1]. I've found the Winaero blog [2] to be a good source of tools to stop some of the Windows telemetry shenanigans.

[0]: https://www.forbes.com/sites/daveywinder/2019/05/19/microsof...

[1]: https://www.reddit.com/r/Windows10/comments/a4lpg0/windows_1...

[2]: https://winaero.com/

> In fairness I would say that I've spent just as much or more time fixing rolling release issues on Arch/Manjaro.

That doesn't even matter much to me. Obviously, less time spent is better, but what really matters is the timing.

When I have to spend time upgrading my linux box, I have set time aside. I decide that I have a bit of time, and if it turns out to be a bigger job than expected, I can decide to abort and continue some other time.

On Windows the upgrade is always inconvenient, because you have no way of controlling it. Microsoft decides when the best time for you to upgrade is, and once it starts you have no means of punting on it.

Although the telemetry is certainly a problem too.

> That doesn't even matter much to me. Obviously, less time spent is better, but what really matters is the timing.

That too. When I update MacPorts, it can, sometimes, take ages. I don't care - I just continue working.

> Obviously, less time spent is better, but what really matters is the timing... On Windows the upgrade is always inconvenient, because you have no way of controlling it.

True, very good point in favor of user control. Linux has the added benefit of teaching the discipline and foresight to understand that "Hey... maybe I shouldn't run sudo pacman -Syu and reboot until I have some time set aside". Though in recent years I've found Arch to be way more stable with very few updates causing major problems.

> Though in recent years I've found Arch to be way more stable with very few updates causing major problems.

Can confirm. I've had the same Arch install for nearly 6 years now. I've borked it only once in that time, and that one was my fault for not reading the news first. Fixed it in < 5 minutes.

And I've made considerable changes in that time, including moving the entire installation from one hard disk to another.

So after a very long hiatus I decided to try installing Linux on a Raspberry Pi - and wow, the results were amazing. Using my decade-old knowledge of fstab I added some drives to /etc/fstab and rebooted. Oooops. Raspbian would fail to boot with the message "root account locked, couldn't open console".

Fantastic - so if any of the fstab entries are missing, the entire system effectively bricks itself. The only way to fix it was to remove the SD card, put it in another machine, and manually remove the offending entries from the fstab file. Fantastic. I don't think I have ever managed to get Windows to brick itself with something so simple.

I think you are being unfairly downvoted. One of the major downsides to Linux is that while you can, and often have to, tinker to make it work, you can break it on many levels. Sometimes it's even hard to know if it is working properly, until you realize it isn't. Many developers use virtualization for those reasons alone. It might even be a major upside to using WSL, if they do it well.

Our tools are either too powerful or too weak. To each their own. I can cope with a tool that is too powerful, but I can't cope with a tool that is too weak.

I don't think that is the general problem. More so inconsistency. It's almost harder to quit vi than to delete your boot record. I am not sure many tools are even that powerful, which often means you have to use many different ones, making it even more confusing. Linux lacks abstraction and makes you do things manually as root; it isn't very robust.

I've had a windows update brick my laptop. It failed to complete an update, would roll back, reboot, then try again. Nothing I did fixed it. I spent almost a week with it in a boot loop, trying various fixes and such before giving up and reformatting. Put linux on it and added it to my pile of linux laptops.

You probably haven't tried changing windows partition and messing with Windows bootloader's registry file...

With great power comes great responsibility, I guess. I agree it would have been nice for Linux to come with a few more sanity checks by default, like maybe a warning flag for rm -rf /* and dd commands...

Standard rm (coreutils) has included rootfs protections since at least 2012:

        --no-preserve-root  do not treat '/' specially
        --preserve-root     do not remove '/' (default)

Mind that if your rm comes from elsewhere (say, Busybox), it may not have that protection.

Very careful use of rm as root is an excellent habit to cultivate.

As are backups.

What about it? I don't think that does anything for "/*"

Interesting, I didn't know that and I stand corrected.

I'm just more offended that the default behaviour on an extremely popular distro is failed boot = brick. You don't even get a basic command prompt to fix something, the default behavior is to lock everything down and forbid access, making it impossible to repair the machine from itself. Windows will reboot a few times and then automatically start in safe mode when this happens, you don't need to extract the drive to manually edit some text files before the system can boot again.

> I'm just more offended that the default behaviour on an extremely popular distro is failed boot = brick

If you can't properly start the OS, that's correct behavior. You don't want the OS to start writing things to /var when it can't mount the filesystem that should be mounted to /var.

In any case, the fix with an RPi is easy - pop out the microSD, mount it on the other computer and fix whatever is broken (which should be in the logs). If the RPi is the only computer, put a microSD with a plain install of the OS, mount the other microSD through an USB dongle and fix it the same way you'd do with a laptop.
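For what it's worth, the original failure mode (boot halting because one fstab entry can't be mounted) can be avoided with mount's nofail option, which lets boot continue if the device is absent; a sketch with a made-up UUID:

```
# /etc/fstab -- 'nofail' skips the mount if the device is missing at boot
UUID=1234-abcd  /mnt/data  ext4  defaults,nofail  0  2
```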

FWIW, you're using the word "brick" incorrectly. Bricking is when the only fix is to throw the device away and buy a new one, which clearly is not the case here.

It's worth distinguishing a hard brick from a soft brick.

I don't think I am. I've been fixing hardware for years and any device that doesn't switch on is "bricked" - its utility has dropped down to zero, it has turned into a literal brick. Just because you can revive it through some arcane procedure doesn't make it any less bricked to the end user.

Flashing a new OS onto the SD card is not "some arcane procedure" for the RPi's target audience.

you're getting downvoted, but as a 19-year Linux fanboi, I actually agree with you. If the system has booted at least once, then it knows how to successfully boot, i.e. it knows of a string of modules, kernel, initrd image, etc. that worked. As you upgrade, the OS should have a courtesy feature where it doesn't simply delete these files (unless you do the equivalent of issuing a "-rf" force command). For instance, we currently have the kernel, initrd, filesystem, and modules that exist on our fs. All we would need to make it (nearly) brick-proof is to add one more thing: a fallback initrd that has all the previous shit that worked last time. This would save so many users' asses at hardly any cost, since storage is so cheap these days.

Use a gui if you don't know your way around the shell. I'm pretty sure Debian provides one.

I wanted to setup a headless Samba server - so after the reboot I didn't even have any way of knowing why the system wouldn't appear on the network anymore. I had to find the right HDMI adapter to see if the system even attempts to start as it was completely dead by all other indications.

And no, I'm sorry but I hate the argument of "you should have known better" - the system first and foremost shouldn't have defaults set up in such a way that not only the SSH server doesn't come up in case of an issue during boot, but the local console is entirely disabled for any access. That's a crazy default.

Check your video card chipset and call support :P

Send a hotplug notification that a region of memory is going to be disconnected. The kernel will stop making allocations there, and eventually it won't be in use. (This could take a while for a heavy system with long-running processes.) Then you send the hotplug physical disconnect notification. If it fails, you'll need to wait longer. If it succeeds, the host system can reclaim the memory.
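The handshake above can be sketched as a toy simulation (all names and numbers here are made up for illustration; the real mechanism is the kernel's memory-hotplug/ballooning machinery, not this code):

```python
class GuestMemory:
    """Toy model of a guest kernel responding to memory hot-unplug requests."""

    def __init__(self, regions):
        # region name -> number of pages still allocated there
        self.in_use = dict(regions)
        self.pending_removal = set()

    def request_offline(self, region):
        """Host asks for a region back; guest stops allocating there."""
        self.pending_removal.add(region)

    def free_pages(self, region, pages):
        """Workloads exit / caches shrink; pages drain out of the region."""
        self.in_use[region] = max(0, self.in_use[region] - pages)

    def try_disconnect(self, region):
        """Physical-disconnect notification: succeeds only once the region is empty."""
        if region in self.pending_removal and self.in_use[region] == 0:
            del self.in_use[region]
            self.pending_removal.discard(region)
            return True   # host may reclaim the memory
        return False      # still in use; host must wait longer


guest = GuestMemory({"mem0": 100, "mem1": 40})
guest.request_offline("mem1")
print(guest.try_disconnect("mem1"))  # → False (40 pages still in use)
guest.free_pages("mem1", 40)
print(guest.try_disconnect("mem1"))  # → True (drained, host reclaims it)
```

The "could take a while" caveat is the whole problem: nothing forces long-running processes out of the region, so the host may wait indefinitely.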

I don't see how that is a specific problem. All operating systems have annoyances. But if you are, for example, an engineer, you won't find a competitive range of software on Linux. For someone needing to do software engineering, a similar situation has long existed on Windows. WSL changes that.

> All operating systems have annoyances

But not all annoyances are created equal

I don't find that annoyance particularly grave or different from other annoyances. I just updated and rebooted a Windows machine I hadn't used for a month, and it took maybe 5 seconds longer than usual. Regardless, even if that happened every day, it isn't something I am going to base my choice of operating system on. This is one of the problems with desktop Linux: caring about some detail (even though it might affect some people a lot) rather than the big picture, which affects everyone.

>The thing that annoys me most on Windows is Windows itself. After using Linuxes for almost two decades now, the notion of the OS taking tens of minutes to self update when all I wanted was to quickly reboot it is unbearable.

I find the same thing with electric cars. Eight hours to charge when I've spent two decades filling up in minutes is absurd.

> Most of us don't want to mess about with the desktop stack. If the stack we care about is open-source and runs across many systems, then we still have the freedom.

A tangential point but a relevant one nonetheless. I owe most of my tech skills to the fact that it was hard to get a Linux distribution running on a cheap PC. I had to read up about hardware, learn how to fix installation issues, sometimes recompile the kernel with different options (and hence study them) and a number of other things simply because it didn't "just work". It gave me an intimate comfort with Linux that I just don't have with any other system (including MacOS, which I used for about 2.5 years).

I completely get the notion that when one is a professional he or she shouldn't need to meddle with system-level quirks to get a productive environment and a desktop, and thankfully modern Linux is more or less there. I haven't had any major trouble with it in a long time. However, during college and my early years, having a system that demands some amount of work to get working has, at least for me, been crucial in honing my skills. I wouldn't change that even if I could. I'd vigorously defend it.

There is a technique called "memory ballooning" http://www.vmwarearena.com/vmware-memory-management-part-3-m..., which is probably also used by HyperV to free memory again.

> rather than trying to fit Linux applications in Windows, they're running in their own world and providing integrations into this world.

It is effectively their way to have an "integrated Linux virtual machine" inside of Windows. The WSL1 on Windows was relatively similar (but not enough for me) to Wine on Linux. The WSL2 is in some ways "more integrated" than the typical VM would be, but otherwise still similar to that.

Not enough: my greatest disappointment up to now is that the "VM-like" behavior was present even in WSL1; if I understood correctly, the files "inside" were more special than files inside Cygwin folders would be.

Another disappointment was that it was not easy to install WSL on previous Windows 10 versions. Specifically, it was "in the app store" but you weren't supposed to just "click to install" -- that resulted in something weird. There were recipes for what to do from the command line, with hoops to take care of, so in the end... I've installed Cygwin, which I knew would work. Does anybody know, did WSL get better now?

(Edit: in the last sentence: s/ it / WSL /)

I think you can do it all from PowerShell if you want, or there are really only 2 GUI steps.

Yesterday I:

1. Enabled the Windows Subsystem for Linux option from the control panel

2. Downloaded Ubuntu from the windows store

3. Opened Ubuntu and it worked.

Good. The last time I tried, it was like "you just click... and then the problems just start".

Regarding the problems between the "different filesystems", it was especially bad: what's the point in having it "there" if you aren't allowed to access the files inside.

Also, there were problems with making a backup of that, etc. In that respect, using a real VM (e.g. VirtualBox) made the situation much clearer.

So from my perspective it was both worse than Cygwin and worse (or at least not better) than real VMs... I would really like to read if/how WSL improved.

I had a similar experience when I tried to "just install an SSH server". The things I expected to "just work" didn't. Again, in the end I just used Cygwin.

MSYS2 can do something better than Cygwin.

I'm disappointed by the "VM-like" behavior too, so I don't expect they would be totally replaced by WSL.

But this is similar in the other direction. Many applications just rely on such "VM-like" behavior, even though they can be (re)built for Win32 and/or do not need a real VM to function. Note WSL1 can already do something more, e.g. hosting programs as X clients working with VcXsrv. I don't think WSL2 will necessarily be better than WSL1 in many such cases. (In particular, when I have to reserve VT-x for some other hypervisor, I have no other choice.)

> I've installed Cygwin, which I knew will work. Does anybody know, did it get better now?

I really like Cygwin. With the MinTTY terminal, it was very nice. Slower than native Linux (or Windows), but enough of an environment to make me happy and productive.

> If the stack we care about is open-source and runs across many systems, then we still have the freedom.

No we do not.

Correct, every spying bureau, organization or company runs open source & that probably takes away freedom.

From the very start, the NT microkernel was designed with multiple OS APIs in mind. They implemented a POSIX subsystem decades ago.

I think it's a little sad that they are going this route (wrapping a running Linux kernel) rather than working to improve Windows disk performance and continuing to improve their WSL 1.x product. It looks like they are missing out on an opportunity to improve the NT kernel for what looks like short-term gain.

From what I understand (which is very, very little!), fundamental NT vs Linux architecture differences wrt their respective file systems prevent much more performance improvement in WSL 1.x.

They do, but NT file performance over many files scales abysmally. The two major reasons are a legacy of bad decisions: deleting files is forbidden while there are open write handles, and filter drivers are allowed to insert themselves between other drivers in the file I/O stack.

There's not much Windows can do about this, which causes tons of issues with e.g. git clone. Erick Smith is ludicrously smart, he just couldn't squeeze better FS perf out of WSL1 due to these issues.

I wouldn't say bad decisions so much as different decisions. Given WSL would abstract out file system access anyway, why didn't they just bridge WSL's FS similarly to how they did with WSL2? Though I'm not sure there's much advantage left at that point.

I don't think they could improve I/O speed without modifying NTFS itself (unlikely). This new approach is certainly easier to maintain and develop

You could implement ext4 on the NT kernel and have WSL1 run on it.

> I find it fairly easy to work on both Mac OS X, Linux and WSL2 -- but not pre-WSL Windows.

Cygwin really helped me with the Windows part of that for many years. I switched to a PowerBook about a year before the Intel switch and haven't looked back since. Linux has never been a good enough desktop all-rounder to make me switch to it as a desktop OS. Maybe I should take another look.

I'm planning a desktop upgrade (now sometime later this year), eyeing an r7-3950x or next gen threadripper. I'll probably go Linux as my primary desktop again, and VM for windows and macos as needed. I feel it's there enough at this point. Probably been 5-6 years since I've tried it as my primary OS. Though I did keep my grandmother on Ubuntu for about a decade before she passed (her old game ran under wine, and no worries about windows viruses/scams).

The Linux Kernel can free memory; the balloon driver included in all kernels allows it to dynamically shrink and grow memory with a variety of urgency levels and the kernel can (optionally) shrink memory itself once it's not needed based on various parameters. It's fairly reliable and works well.

> a good-enough Linux replacement on the command line

nit picking but it was more of a good enough bsd/unix than linux.

Not really a nit pick, considering that OSX is certified full-fledged Unix.

>In the mid 2000's the Mac really took off due to being a good-enough Linux replacement on the command line, while taking care of all the hardware integration and providing a sleek desktop experience. This really helped the Mac take off among hackers. This system wasn't as open-source as Linux, but it was good enough: Most of us don't want to mess about with the desktop stack.

Who are you to claim this?

The closed off nature of Apple is very dangerous to trust, and I don't know any 'hackers' who are willing to risk unchecked/vulnerable OSes.

I have no idea if Apple has a backdoor to the FBI. We do know on Linux there is no hidden backdoor.

As long as you assume that there are no unpublished exploits in Linux. The Heartbleed bug was in open source code for a year and a half before it was found.

Well that is pretty nice. WSL gets that much more exactly like Linux.

What I really appreciate about WSL is that you get the accessibility of a bunch of OSS projects and a machine which has legit drivers for all of its component bits. What this means to me is that searching for a "linux laptop" won't be a chore; if it runs the latest Windows it will run Linux. And I can do development on the Linux side while communicating with senior management in PowerPoint on the Windows side :-). If they come up with a better/credible USB allocation scheme it will be icing on the cake.

I also think they should buy one of the X server vendors and bundle it (comments about X.org from Redhat notwithstanding). I use Xwin32 on my laptop with WSL and it's pretty seamless in terms of things that want to pop up a GUI.

There are a few annoyances with WSL2. For me the most important are:

1. You can't connect to a port listening on WSL localhost like in WSL1, you have to figure out the WSL IP address and use that.

2. From WSL you can't connect to a Windows TCP port on localhost, you must figure out the Windows IP (cat /etc/resolv.conf) and use that.

3. The WSL remote interpreter on PyCharm is not working anymore. The suggested workaround is "use SSH remote interpreter" but given #1 you can't connect to localhost (and the WSL IP changes every time you restart it)
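As a side note on #2, that lookup is easy to script: WSL2 writes the Windows host's address as the nameserver in /etc/resolv.conf. A minimal sketch (the sample file content below is invented):

```python
import re

def windows_host_ip(resolv_conf: str):
    """Return the nameserver address, which WSL2 points at the Windows host."""
    match = re.search(r"^nameserver\s+(\d{1,3}(?:\.\d{1,3}){3})\s*$",
                      resolv_conf, re.M)
    return match.group(1) if match else None

sample = "# This file was automatically generated by WSL\nnameserver 172.22.0.1\n"
print(windows_host_ip(sample))  # → 172.22.0.1
```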

In order to use the SSH remote interpreter in my favorite IDE I'm using this script (on Windows):

    import subprocess
    import re

    HOSTS = r"C:\Windows\System32\drivers\etc\hosts"

    output = subprocess.check_output('wsl.exe ifconfig')
    match = re.search(r"\sinet\s+(\d{1,3}\.\d{1,3}\.\d{1,3}\.\d{1,3})\s", output.decode('utf-8'))
    if match:
        ip = match.group(1)
        with open(HOSTS) as i:
            content = i.read()
        # replace the stale WSL address with the current one
        new_content = re.sub(r"172\.\d+\.\d+\.\d+(\s+wsl)", rf"{ip}\1", content, flags=re.M)
        if new_content != content:
            try:
                with open(HOSTS, "w") as o:
                    o.write(new_content)
            except PermissionError:
                raise SystemExit("run this script with elevated privileges")
First I use the Windows Task Scheduler to start SSH (wsl service ssh start) on logon. Then this script is executed on logon with elevated privileges to update the hosts file. Instead of "localhost" I use "wsl", and it works well enough for me while I wait for the fixes.

Despite the above, the combo Windows 10 + WSL2 is the best "Linux Desktop" for me.

I'll add the separate filesystems problem. I generally need to access files from both windows & linux. WSL 2 may have made file access faster within the vhd, but it seems to be slower to use those files from Windows, or impossible with tools that don't yet support the network file paths provided by the 9p server. VS Code's solution with the remote extension is really nice, but every Win tool would have to do something similar to make the experience truly seamless.

For the record, I was used to SFTP NetDrive for my remote host (mapping an SFTP target to a local drive on Windows), and I just started using that to make sure all my Windows tools can access it just fine.

Doesn't solve the speed issue, but at least you're never blocked by software that only works with a local/non-network path.

That's a pretty nice idea for the net paths issue. I'll borrow it, thanks.

I'd guess all significant Windows developer tools will add support for the wsl paths over time. VS Code obviously leads the pack with its remote extensions. I hope Jetbrains will do something similar with IntelliJ Idea - the piecemeal approach doesn't work as well IMO.

I'll be interested to see what Microsoft can do about the speed of access to the windows file system. It's painfully slow right now.

You should file an issue for the changing IP address problem. Sounds like a serious pain point.

> you must figure out the Windows IP

The existence of c:\Windows\System32\drivers\etc\hosts cracks me up. Supposedly [citation needed] Windows uses/used part of the BSD network stack.

It (like most every OS) used the BSD TCP/IP stack because it worked and didn't have license encumbrances. They switched to their own stack in Vista though.

Ahhh, the eternal dilemma they face. Either reinvent the wheel and get complaints about that, or reuse the wheel and have people "crack up" about you doing that.

What cracks me up is the irony of seeing ...\etc\hosts on Windows, not the entire decision.

Then I don't understand your comment. What's the irony in there ? Or do you mean humor instead of irony / is it not irony at all ?

The irony is that "etc" makes no sense whatsoever in Windows, and particularly in that place. It probably came about because it was easier to have it there than to patch whatever bit of code they borrowed, and it's now likely vestigial, but there is probably too much stuff relying on it to change things. The irony is in an OS trying extremely hard not to be a unix ending up with unixy folder names out of laziness.

I know you probably don't want to give up your editor, but FWIW if you can find an acceptable collection of bindings and python extensions for VSCode the remote development extensions work amazingly well with WSL. I was able to set up a full haskell IDE using haskero in under ten minutes with zero configuration.

4. At least VMWare Workstation is broken when you're using WSL 2. That's pretty annoying.

Has VMWare Workstation ever worked with Hyper-V enabled?

The WSL team knows about this issue and I think they hope to fix it before it ships in Windows stable.

A solution to the IP address problem might be to write the IP address to the Linux file system when Linux boots, then read it from Windows as part of your SSH launch.

For local development in WSL my solution is to simply run a ssh server from WSL and connect to that from Windows, forwarding any ports I need. I've found the Chrome SSH app [0] to work really well (aside from needing to run Chrome), as it can forward ports and supports tmux w/ mouse control and copy paste nicely. But any decent ssh client will work.

[0] https://chrome.google.com/webstore/detail/secure-shell-app/p...
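The forwarding can also live in the ssh client config so any client picks it up automatically; a hypothetical ~/.ssh/config entry (host alias, address and ports are made up):

```
# ~/.ssh/config -- forward a dev server port out of WSL on every connect
Host wsl
    HostName 172.22.0.1              # current WSL address; changes on restart
    User me
    LocalForward 8000 localhost:8000
```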

You don’t need any of that if you use VS Code’s “remote development” extensions. It has a few issues with agent forwarding and suchlike, but is getting there.

> the WSL IP changes every time you restart it

That'd suck! is there no way to nail it down?

Use a GitHub gist to host the script code. Hacker News formatting doesn't handle code well.

So, you buy Windows to get better hardware compatibility for your Linux system on top of whatever Windows ecosystem brings you. I did something like that with non-Windows VM's. Although I couldn't find the link, there was an academic prototype you might have liked that used virtualization to reuse Windows drivers in other OS's or VM's.

Seems like there could be a market for that where it's combined with recent enhancements in efficiency targeted toward people like you that want more compatibility. What do you think?

In general I think it is an interesting idea. There has always been an economic issue with hardware manufacturers who were unwilling to expose too much information in an open source driver, and security issues of running a binary blob at ring level 0 in the kernel.

Microsoft puts itself in the middle of that by certifying a driver through a vendor sign up / certification process, and since it doesn't require the vendor to release source, the vendor can keep that information secret. As a result you have more drivers for more hardware created under the "Made for Windows" driver development plan than you do from enthusiastic reverse engineering efforts in the Linux camp.

To the extent that you can capture the value of Microsoft's oversight without having to pay the 'tax' of a Microsoft OS, you add value for the end users.

There's another potential advantage. Thanks to SLAM, the Windows drivers have some strong verification against that interface. If one implements it well, then they get both benefits of Windows hardware in general and extra robustness of their driver verification. As in, drivers might be better than Linux drivers without such interface verification.

The drawback being that whatever sits in the middle, interfacing the Windows driver interface to the non-Windows VM's, might introduce bugs. It must be correct and compatible.

I always thought someone could try one of these driver-reuse techs with Windows drivers. I recently posted two examples elsewhere:



Well, a mix of Windows and Linux drivers where we use Linux whenever it works well enough, Windows when it doesn't. Might also be able to tie that in with Windows Embedded to trim as much fat out of the driver VM as possible.

That was possible for quite some years if you'd just used any VM solution locally and SSH'd into it. At least VMware has decent file system performance where VirtualBox doesn't.

So how do the drivers work? Is Microsoft implementing and integrating cross platform drivers? Have they somehow made Linux a microkernel that uses windows drivers through a shim?

I wish Microsoft would release their own linux-based OS with a compatibility layer to let me run Windows apps. I'm not overly impressed with the direction Apple is going but I really enjoy a *nix-native environment too much to go back to vanilla Windows. It would change the math a lot for me if it was full-blown linux under the hood.

Disclosure: I work at Microsoft but not on Windows (and my work machine is a MacBook Pro FWIW)

WSL 2 is a lot like that -- it's a VM in the sense that it uses Hyper-V, but it's super lightweight. And in fact, when WSL 2 is enabled, both Windows AND Linux are running in the same type of VM (separate, obviously), so it's near-native performance in a lot of ways. There are still some I/O issues that the team is actively working on, but early benchmarks are quite promising.

I'm nervous about WSL 2. I use WSL at home quite a bit and I'm on the Fast Insider ring. My PC hasn't been able to boot Win10 for over a year with virtualization support enabled in the BIOS. No errors or anything, just hangs at boot. I've left feedback in the hub, but crickets. Don't know what the problem is. It used to work, but at some point an update broke it. Not bleeding-edge hardware, either: about a 4-to-5-year-old PC, Intel i7 CPU, I think it's an X79 chipset (not at the computer, so can't verify). It's just weird...

Microsoft said that the classic WSL will be available alongside WSL2, so you'll be able to continue using WSL1 no problem if WSL2 has issues.

I had a similar problem. With Hyper-V my PC BSODed like every other day. I did some investigation on the dumps and I think the problem was in the Nvidia driver (not entirely sure, though, but the stack traces were from that driver). I don't really know whether it was resolved or not, because I decided not to use Hyper-V ever again; VirtualBox is much better.

I never got BSODs, just hangs at start, so I don't have any dumps to look at. I'm also using Nvidia (1080 GTX) and it has probably been a few months since I updated drivers, but I've been having the issue I think since at least the 1803 update.

I haven't been on the PC for several weeks, so I don't have a link to my feedback. I'll post next time I'm on.

Why is op getting downvoted? This is a legitimate issue he’s facing on a specific hardware configuration

This is a total aside, but I really don't understand voting on comments. I've no idea what it could mean - levels of agreement, interest, offensiveness (or scores of other possibilities)? It seems entirely meaningless, therefore pointless, to me.

Upvoting moves comments (threads) closer to the top. Downvoting does the opposite, and downvoting below 0 makes comments look fainter.

This is basically it. I upvote things I want to go up and downvote things I want to go down. Off-topic meta discussions about votes should always go down.

Sure, I know what it does, I'm just not sure why anyone would use it. I never do normally want a comment to go up or down (other than the odd egregiously abusive or aggressive item).

I may disagree with a comment, but that doesn't remotely generate in me an opinion re where it should sit in the tree. I don't think other people should or are likely to find a comment agreeable/interesting/relevant/insightful just because I do. Different strokes I guess.

A reason to upvote: "this is a good comment, other readers should see it sooner, maybe the first thing".

I see it as a community service.

Fair enough. I honestly suspect your signal there is more than lost amongst the noise of readers reflexively voting for what they happen to agree with. Anyway that's probably enough of this particular dead horse.

A comment more near the top will be seen more and therefore get more discussion, so if you want that, vote it up. If you want it to get less discussion, vote it down.

I'm giving you an upvote. Make of it what you will. :)

Thank you (I think?) for making me aware of that ;)

Can you post a link to your feedback?

I am a network/systems guy so not being able to do full packet captures and such on WSL is the only thing stopping me from using it full time.

>both Windows AND Linux are running in the same type of VM

Is there a way to suspend windows? From what you say, there is nothing technical preventing this.

It’s happening at the hypervisor level, and in this case it’s Type 1, so I guess you could but I don’t know how. My ops colleagues are hanging their heads at me right now.

You can learn more about the architecture of WSL2 in this video from Build https://youtu.be/lwhMThePdIo

> There are still some I/O issues that the team is actively working on, but early benchmarks are quite promising.


Hoping to run my R code on WSL 2. Glad I got the answer. I wish you guys well on I/O issues.

Is there any performance hit for WSL2 in regard to process forking? I recall that how window and unix deal with process is different (my R code uses mclapply parallel package for unix). I'm curious if there will be any performance hit that stands out versus running on a unix system for parallel/concurrency code.
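One rough way to answer this for your own setup is to time raw fork overhead directly and compare the same script across environments (bare-metal Linux, WSL1, WSL2). This is just an illustrative sketch, not a benchmark the thread actually ran; absolute numbers are meaningless, only the relative comparison matters:

```python
# Illustrative micro-benchmark: time N fork/wait cycles, the primitive
# that fork-based parallelism (like R's mclapply) leans on.
import os
import time

def time_forks(n: int = 200) -> float:
    """Fork n children that exit immediately; return elapsed seconds."""
    start = time.perf_counter()
    for _ in range(n):
        pid = os.fork()
        if pid == 0:
            os._exit(0)          # child: exit immediately, no cleanup
        os.waitpid(pid, 0)       # parent: reap the child before continuing
    return time.perf_counter() - start

if __name__ == "__main__":
    print(f"{time_forks():.3f}s for 200 fork/wait cycles")
```

Since WSL2 runs a real Linux kernel, fork here is a genuine Linux fork, unlike WSL1 where it was emulated on top of NT process primitives.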

On Reddit, they had a benchmark for WSL1 vs WSL2 and it showed that the Single core perf of WSL2 was sometimes better than Windows (and even bare metal linux - how that happened I am not sure). The multi-core was not as good, but still better than WSL1.

I can't find that exact thread right now, but here's another one comparing WSL with Windows:


I bet the windows hardware drivers for initialising caches, memory busses, power states, cooling etc. set certain systems up for slightly better performance.

Just out of curiosity: did they let you use the MacBook Pro without problems? Can you guys @ Microsoft decide to use whatever OS is more suitable for what you do?

Microsoft has had Macs in use for a very long time, even pre-dating the famous Gates investment. There are some infamous pics of delivery vans unloading tons of Apple boxes in Redmond. I would expect them to be even more liberal now under Nadella, but to be honest, they probably get great prices on Surfaces, which are very nice machines now.

Just saw this. The answer is it depends on the team, but unless there is a hardware/software reason tied to your job for a specific platform, people can choose what they want.

Many of my colleagues use macOS, some use Windows, some Linux. I have a work-issued Mac and a work-issued Surface Book, because it’s important to test compatibility, especially when it comes to CLI stuff, across different platforms. I have Linux VMs and docker containers and WSL configured too.

do you know if there's hardware pass through for gpu? ie can you do cuda stuff in wsl2?

On Twitter, the team has said [1] it won’t be supported at launch but that this will make it easier to add that support in the future.

[1]: https://twitter.com/tara_msft/status/1125888319974400000?s=2...

I really hope this results in PCIe passthrough on Hyper-V becoming reasonable, it was an absolute nightmare to try to configure and I could never get it to work more than a single VM boot.


> it uses Hyper-V

does this mean there will be issues using VirtualBox if WSL 2 is enabled?

As of VirtualBox 6.0, they should be able to coexist (because VirtualBox effectively piggybacks onto Hyper-V when it's enabled).

In practice it's not really working though. At least not yet. There's a very long thread on virtualbox forums with people trying to get it to work but failing.

I personally have to reboot when I need to use docker or virtualbox. Very annoying.

Yep, this is currently confirmed broken on Windows 10 1903 (Windows Hypervisor Platform extended user-mode APIs). Affects VirtualBox, qemu, etc. We may see a patch, but it's up in the air right now. (<closes eyes and sighs>, I know.)

Qemu? Qemu does processor emulation AFAIK.

But then it is really Hyper-V with a VirtualBox UI. Seems like a recipe for incompatibilities etc.

The team has said its working with VMware and VirtualBox on solutions but I have no idea what those will be.

Hyper-V supports nested virtualization (only Hyper-V nested inside Hyper-V, though), and seeing as VirtualBox supports a Hyper-V virtualization backend, the solution may be to just find and fix all the bugs in that backend code when running under Hyper-V.

Unless VirtualBox utilizes Hyper-V, you'll have to reboot to switch between them.

This is probably an unpopular opinion but.. there is a lot of value in having multiple competing kernels. In the same vein as browser engines, any monoculture is a bad idea. Not to mention there's been a lot of good work done on the NT kernel itself. Under the hood it's pretty advanced, even with all the user-space cruft on top.

Sadly, much like with the web, things have evolved such that it is virtually impossible to create a competitor due to all the accumulated complexity. The driver problem on the PC platform is pretty much insurmountable at this point, so any competitor would have to begin life, and gain massively in popularity somehow, on a much less flexible hardware platform.

It's a shame and makes me sad for modern computing.

I think the same, but maybe TODAY it's not as hard as before:

1- We have 4/5 mainstream OSes today: Win/OSX/iOS/Android + Linux. In the case of iOS/Android, some users already switch between the two.

2- If it has apps, that's almost all people need. Drivers are not THAT significant IMHO.

3- Chromebook is a thing, despite it sounding stupid: an OS that is just a browser.

The web is the 6th truly mainstream "OS".

4- What do people need in drivers? USB (and with it USB-C), HDMI, Bluetooth (maybe?) + WiFi. With this alone you get even better than an iPad: monitors, wireless mice, keyboards, external storage, and by extension you catch the rest.

5- Printers? Going by iOS, they can work fully wireless with no drivers on the device.

So I think a true desktop OS could launch with less worry about legacy hardware.

Where the problem IS, is the apps. They need to be better than a Chromebook's, have very great first-party apps that cover the basics and, hopefully, a VM layer so you could run Linux and catch the rest.

> What people need on drivers? USB (and with usb-c), hdmi, bluetooth (maybe?)

Do you honestly think there is a single USB-C driver that covers literally everything that can connect over USB? And anything with an HDMI port uses the same driver!?

No, but I suspect the array of devices is narrower than assumed?

I think that nailing the software (app + dev experience) is a bigger challenge and priority than worrying about the (external) hardware.

P.S.: One thing I forgot to articulate is the possibility of leveraging Linux as a bridge for drivers (possibly?), so the new OS ships a smallish Linux just to get compatibility.

Then you're just making a Linux distribution, or something like Android I suppose. You may as well say extending Chromium would solve the web monoculture issue. That isn't how it works.

> I think that nailing the software (app + dev experience) is a bigger challenge and priority than worrying about the (external) hardware.

You are ludicrously wrong. Using the Linux kernel as an example, because drivers are in-tree, as of 4.19 (late 2018): https://raw.githubusercontent.com/udoprog/kernelstats/master...

As you can see, drivers are the vast majority of the kernel. And Linux's hardware support is often called out as a reason people use Windows or MacOS!
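If you want to verify that claim against a checkout of your own, a small script can tally source lines per top-level kernel directory. This is a rough sketch (the `linux/` path is an assumption; point it at your own tree, and note it only counts `.c`/`.h` files, not Kconfig, assembly, etc.):

```python
# Tally C source lines per top-level directory of a Linux kernel
# checkout, to see where the bulk of the code lives.
from pathlib import Path

def count_lines(root: Path) -> int:
    """Sum line counts of all .c and .h files under root."""
    total = 0
    for pattern in ("*.c", "*.h"):
        for f in root.rglob(pattern):
            with f.open(errors="ignore") as fh:
                total += sum(1 for _ in fh)
    return total

tree = Path("linux")  # assumed path to a kernel source checkout
for sub in ("drivers", "arch", "fs", "net", "kernel", "mm"):
    d = tree / sub
    if d.is_dir():
        print(f"{sub:10} {count_lines(d):>10}")
```

On recent kernels, `drivers/` alone dwarfs every other directory combined, which is the point the chart above makes.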

I stand corrected :)

Considering that the alternative to “OS in a browser” these days seems to be “every app is a (different) browser” (Electron), the Chromebook concept is not that stupid.

It’s sad that commercial realities are pushing people to throw away 30 years of progress on desktop UIs, but here we are.

> 3- Chromebook is a thing, despite it sounding stupid: an OS that is just a browser.

It's not really "just a browser" these days. Runs Android apps, and regular old GNU/Linux desktop apps.

Only on some selected devices.

I wouldn't be so cynical. As with all things, tech comes and goes in cycles. I think with the limit to Moore's law rapidly approaching for current silicon, there's a good chance that efficiency & simplicity will become front and centre again at some point.

Not sure if this will mean new architectures, operating systems & better tools.. but one can dream!

It is difficult to continue to hope for that while watching people cheer on additional layers of complexity like the garbage fire that is the modern web.

Try roasting marshmallows over it once in a while. See if that helps you feel better about it.

I did. I abused the history API on my website when it was new, and then they went and fixed it. As it turns out, sometimes people care more about the marshmallows than they do about the fire.

I don't have the website anymore, but the basic idea is detailed here: https://grack.com/blog/2011/03/07/abusing-the-html5-history-...

Less flippant answer: To some extent, I feel it is on me to try to access good things via the Internet, figure out how to interact with it reasonably safely and satisfyingly, and not get overly bent out of shape about the inevitable problems.

All things have their downsides. There are no perfect things that are nothing but goodness and light.

As they say: Sunshine all the time makes a desert.

Yes, but if garbage never caught fire nobody would bother inventing better fire suppression techniques.

Also, nobody ever saw a fireman by hiding inside with their window blinds shut. Take a peek outside, what you see might bring you hope.

We don't need NT to have competing kernels. All of the following have their own kernels:

  + FreeBSD  
  + OpenBSD  
  + NetBSD  
  + DragonFlyBSD  
  + Minix  
  + Haiku  
  + Redox  
  + Plan 9  
  + SeL4  
  + Fuchsia
And many more.

Dozens and dozens of UNIX kernels (with Fuchsia being the exception I believe..? Not sure). Yes I know kernels have far outgrown POSIX, but it's still a form of monoculture. Diversity in all dimensions matters imo.

I'm not sure anything below Minix can be considered a UNIX kernel.

Unix kernels could use more cohesion, not less, in terms of API. Even if that API is implemented mostly in userspace

View the kernel API like HTML. Browser diversity is good, but they need to be implementing the same interface if the smaller players want to support any hardware that works with Linux

Not to disagree with your point, but Plan9 is not really a Unix kernel, despite being written by some of the same group at Bell Labs.

Less than half of those kernels are derived from UNIX. Do they replicate the same interfaces? Sure. But so does NT.

Windows Subsystem for Linux is really close to that, though. If you have not tried it yet, you can download a Windows VM for developers from Microsoft that includes WSL and other dev goodies. It's mostly for evaluation but good enough to get started with.


Here is the link to the Microsoft provided VMs:


This is still with Linux running in a VM though, right? I'd like a lightweight VM for legacy Windows and Linux running on metal.

Given Microsoft's zeal for legacy support I'd say there's a snowball's chance in hell for this, but a man can dream.

When you run a type-1 hypervisor, the only thing running on the metal is the hypervisor.

This is what happens in the case of WSL 2; both Linux and Windows (the host OS) run in very (very) lightweight VMs.

Iirc, even the Win32 APIs sit above the subsystem layer. There was the OS/2 subsystem back in the day, Win32, and one more I can’t remember from my training 11 years ago.

posix subsystem?

I think that was newer than my training. This was the Vista timeframe.

The POSIX subsystem was a thing even when XP was around.

Just run Linux and then run a Windows VM in QEMU, seems like you can do this now.

I just want Linux with the ability to use PowerPoint.

(I also want to just be able to plug my HDMI into my laptop and have it work. Literally the only two things I care about)

FWIW, I have given major conference talks years back on my linux laptop with PowerPoint running via Crossover Office (wine), mostly due to my advisor's insistence on using PowerPoint. So that has already been a thing for a long while.

My issue is that with Nvidia laptops the HDMI connection almost never works as expected when I have CUDA installed. I've tried proprietary and nouveau drivers. I've driven my monitor with Intel drivers. I've tried so many things, but across multiple laptops I've just never gotten this to work. I just can't have CUDA and use my laptop for presentations. So I always keep a Windows partition just for presentations. If you do have a suggestion I'd love to hear it.

I've never used crossover. Thanks, I'll look into it.

I tried to do CUDA programming on a laptop with Ubuntu but gave up. It worked fine on a stationary computer.

There has to be something with Cuda that messes everything up on laptops.

I would actually wonder if it could do that via VMs. That way you can keep compatibility, but then change the underlying system.

WSL version 2 uses the Hyper-V Virtualization API but is not a VM.

The first version of WSL is just another process in task manager, with all the syscalls and filesystems emulated.

Well, each WSL1 process is a Windows process. If you’re running vim in bash, for example, you’ll see “vim” and “bash” as separate processes in Task Manager. WSL is all about ELF binaries natively, just as Win32 runs PE executables natively.

"windows process" is misleading, WSL uses a separate "linux subsystem" with its own syscalls, process/thread data structures, etc. It also uses its own file system (albeit with all the state stored in NTFS and affected by filter drivers, which is why its I/O is slow)

They are Windows processes, just as the Win32 subsystem also spawns Windows processes.

Probably best to refer to them as NT processes for the sake of clarity.

True, I thought that a few hours after I wrote my last comment but couldn’t update it by that point and didn’t respond to it. Windows NT is the kernel, Win32 is the Windows subsystem that everyone’s used to (including kernel32.dll), WSL is a subsystem that runs ELF binaries and implements the Linux syscalls. It’s not yet obvious to me how much WSL2 is actually even a Windows subsystem, given that it’s running it as an actually separate OS.

I don't think it's a VM like Docker, no. WSL processes show up under Task Manager. This is more like Wine but Line... in a sense. But a little trickier.

Do Docker people really think it’s a VM? Or has the meaning of “VM” shifted this past year?

I think the OP is probably referring to the fact that running Docker on Windows and macOS is accomplished by running a Linux VM which the Docker containers run in. Not that Docker containers are VMs.

This is what I meant by this. Most people I know using Docker are not on Linux but on Mac and Windows.

I didn't know that macOS users don't run Docker natively! On Windows, it's native:


In practice, the vast majority of docker containers are Linux containers, so most uses of docker for windows need a Linux kernel to do anything useful.

As such, the 'native' Docker for Windows you linked is actually managing a Linux VM (Hyper-V or VBox) for you; the Docker server runs in that VM, and the native Windows docker binary simply connects to that server.

They also do a lot of tricks to connect networking and file shares with that VM. The file shares part is almost completely non-working in my experience, with scary comments in the bugs like 'doesn't work if your windows password contains certain characters' (it sets up CIFS shares between the Linux VM and some parts of your Windows file system).

So, not sure what docker for Mac looks like, but Docker For Windows is only 'native' in a very hand-wavy sort of way, unless (maybe) if you're running Windows docker containers.

Docker for Mac uses xhyve/hyperkit and has really, really slow I/O.

Docker desktop runs a Hyper-V VM. Prior to that, Docker used a Virtualbox VM on Windows. The current plan seems to be to integrate with WSL2 once that ships [1].

[1] https://engineering.docker.com/2019/06/docker-hearts-wsl-2/

That cannot run Linux containers, only Windows containers, no?

Yes. If you're running Windows containers you can be "native" docker.

However, we use VMs managed with Salt for our Centos VMs. I never saw the need for Docker. Maybe if I did "web" stuff.

You can switch between one or the other. Oddly, it can't run both at the same time.

Docker's not a VM.

Someone who uses MacOS might think Docker implies a VM perhaps? (Linux VM needed to get Linux Containers to run).

Why? This is a common ask, but what exactly makes a Linux kernel superior to Windows?

There seems to be an assumption that Linux is the ultimate in OSes. But most of the value comes from the large ecosystem and momentum, rather than specific technical advantages.

The biggest practical advantage I have found is that Linux has dramatically better filesystem I/O performance. Like, a C++ project that builds in 20 seconds on Linux takes several minutes to build on the same hardware in Windows.
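The gap shows up most clearly in metadata-heavy workloads (compiles, `npm install`) that touch thousands of small files. A rough sketch of how one might measure it, to run unchanged on both systems and compare (the workload shape here is my own illustration, not the commenter's actual build):

```python
# Illustrative micro-benchmark: create, stat, and delete many tiny
# files -- the kind of metadata churn a C++ build or npm install does.
import tempfile
import time
from pathlib import Path

def small_file_churn(n: int = 1000) -> float:
    """Create, stat, and remove n tiny files; return elapsed seconds."""
    with tempfile.TemporaryDirectory() as tmp:
        start = time.perf_counter()
        for i in range(n):
            p = Path(tmp) / f"f{i}.txt"
            p.write_text("x")   # create + write
            p.stat()            # metadata lookup
            p.unlink()          # delete
        return time.perf_counter() - start

if __name__ == "__main__":
    print(f"{small_file_churn():.3f}s for 1000 create/stat/unlink cycles")
```

On the same hardware, per-operation filesystem overhead (NTFS filter drivers, antivirus hooks, etc. on Windows) multiplied across thousands of files is what turns a 20-second build into minutes.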

I just want to say, because some posts here have mentioned community but not hit on this directly.

The value of Linux (to many) rests on its FOSS value system.

I don't think this is about technical superiority of one kernel vs. the other. It is about the software environment. Linux is the most actively used Unix derivative at this time, so by running Linux you get a whole software stack. Parts of this stack have been ported to Windows too (e.g. via Cygwin) but you get the most coherent user experience by just installing a Linux distribution, as WSL allows you to.

In the post-SGI, pre-Second-Jobsian-Revolution world of the mid-late 90s, there was a real malaise in workstation computing. The systems were dumbed down, the hardware was homogenized and commoditized into joyless beige, an ocean of "Try AOL Free for 100 days" CDs jostled and slushed amidst the flickering fluorescence of the cube farm, where CRTs and overhead lights would never quite hum along at the same frequency, zapping the air with an eerie, sanitized sick-static. This was the future of computing.

This gloomy world of palpable beige-yellow-purple is the world of Microsoft Windows. As a product, it caters exclusively to that clientele, in that environment. To be blunt, above all other design considerations, Windows was built to accommodate the lackluster office drone -- or, more precisely, their bean-counting overlords who wanted to save on user workstations.

Despite attempts to undo this, the design has proven impossible to build upon. How many legitimate improvements in the cutting edge of either academic or industrial compsci have been built on Windows technology, for any reason other than "MS signs my paycheck so I'm contriving this to appear like I'm happy to be using PowerShell"?

MS was able to paper over the deficiencies for a while through sheer force (pushing .NET as a semi-unified computing environment, Ballmer screaming "DEVELOPERS!", etc.), but Windows was in no way prepared for the revolution brought by virtualization in the mid-aughts, and any shred of a hope that MS would somehow recover was utterly and entirely obliterated by the widespread proliferation of containerization. Good luck getting a real version of that working on Windows.

I've said it before and I'll say it again: in about 5 years, I wouldn't be surprised at all to see the new version of "Windows" plumbed end-to-end atop a nix-like kernel, and driven by a hybrid userland comprised of a spit-polished copy-paste from WINE + a grab bag of snippets from the proprietary MS-internal Win32.

The flexibility of the nix model is the indisputable, indomitable winner here, and it stunned and killed the Goliath in its tracks. This is the ultimate surrender to the open-source model. Windows's top-down, "report to your cube by 8:26am sharp and don't question the men in the fancy suits" approach to computing resulted in a rigid operating system that was unable to keep pace with the technological demands of the many pantheons of loyal corporate drones that comprised their user base. Even strictly-Microsoft shops are forced to give developers MacBooks now, or they're unable to get anyone competent to sign on.

It really couldn't get more poetic than MS desperately integrating Linux into their OS so that people won't switch to the more flexible nix-like systems for their workstations, although I anxiously await MS accidentally linking in a GPL module and thus becoming required to disclose the Windows source code. :)

Meanwhile, per usual, nix-like OSes have been humming for 50-ish years now and show no signs of slowing down. IMO, that's all the objective, borne-out proof one needs to say that for all practical purposes, Windows couldn't stand the test of time.

Please note that this is not about Linux per se, but the overarching design theory and development processes in use in major operating systems, and the massive success it represents for open-source, research-driven systems.

The chapter on "The Windows Way" has been written, and whatever its benefits may be in theory, they don't bear out as sustainable in practice.

It'd be fun to do a macro-scale timeline comparison to Sun. These invincible tech behemoths that cater to their narrow niche become rotting, hollowed-out fossils as the free systems continue to develop and evolve the cutting edge. Microsoft is right on time here, and I fully expect to see them struggle through the next decade or so until Larry Ellison finally puts them out of their misery.

This is about UNIX clones being offered for $0.

No one can fight against free beer; regardless of how bad it might taste, people still go for it.

I think I've read this free-beer explanation of yours in about 100 distinct comments. In most of them you explain why some particular programming language that you hate with a passion came to develop mindshare while it was so unlikely considering what a bad design it is.

For some balance, you could consider how Windows got preinstalled on most computers since the 90s and what huge dominance it had in the consumer and also development domains (also thanks to some evil capitalist practices, some might say!). That there are free versions of Visual Studio and other IDEs and dev tooling for various programming languages for Windows just as well. Also, consider that I can buy Windows 10 Professional for $8.90 with 5 seconds of googling.

So I have found the first person willing to pay for Linux or BSD the same off the shelf price as a Solaris, HP-UX, Aix.... license.

Maybe Coherent would have survived with such willingness.


The file-based system architecture makes it a winner for me.

Totally agree. This is maybe the one thing that could potentially bring me back into the Microsoft ecosystem.

Absolutely. At this point, there isn't much value for Microsoft in hanging on to the legacy of Windows. With most of their revenue coming from Azure/Cloud and developer products, it might be time to say goodbye to that legacy and make it a subsystem on a modern Linux-based system. That might also be a way for them to become more relevant in mobile devices and platforms.

The line item in Microsoft's last annual report which encompasses Windows is "More Personal Computing" and shows $10 billion dollars of operating income on $42 billion dollars of revenue.

Windows revenue grew $925 million. Not to $925 million. By $925 million.


> At this point, there isn't much value for Microsoft to hang on to the legacy of Windows

I would say this move by Microsoft is the exact opposite, because it maintains Windows at the centre of the picture.

Microsoft is basically saying, look you can have all your Linux goodness and best of all you can access all that goodness using your current Windows desktop.

Using WSL2 the lines between Windows/Linux are very blurred.

Well...having driver and app compatibility going back to Windows 7 in 10 is still a massive pro.

Linux's driver architecture and even app distribution is a fucking nightmare.

Windows is a massive business for Microsoft still.

Eh? ‘Most’ of MSFTs revenue comes from Windows and Office. Isn’t Azure/cloud/server less than ⅓? Not that that’s not substantial, but it’s not most.

Primarily Office business (on premises and cloud) server applications such as SQL and Windows Server, Azure, Windows.

I'm convinced there's nothing stopping Microsoft from making their own Linux based OS by porting their desktop and GUI models on top of a Linux kernel + filesystem + drivers, not that far from what Apple did with OSX. Technically it'd be an interesting challenge, but I fear the day we'll see also Linux software asking to be run on that Linux for "better compatibility" because that is what the developers used. In that case, thanks but no thanks.

I wonder whether they'll experiment with Windows compatibility in the WSL2 layer as the Extend[1] phase, to possibly utilize it in a future distro, creating an Extinguish[1] situation.

[1] https://en.m.wikipedia.org/wiki/Embrace,_extend,_and_extingu...

This will make windows more linuxy than macos.

So for all who just want a local *nix environment for dev (and to most this means Linux), Microsoft has actually beaten Apple at this.

... except that (a) WSLv2's kernel will be running in a VM, and (b) macOS's kernel is much more like Linux's -- fork/exec, networking, filesystem, memory, TTY.

I see WSLv2 as more of a concession that a *nix-style interface is what developers want, and that the NT kernel can't deliver that.

NT supports interfaces for supporting a fork model efficiently, it's just not well documented. In fact, you can write an efficient, Windows-native POSIX environment entirely from user space using existing APIs: https://midipix.org/

The only thing that NT really lacks is a Unix-style TTY subsystem. The interplay in Unix between TTYs, process control, and signals is complex and deep, and none of it lends itself toward abstractions--it's all quintessentially Unix. For almost everything else, NT provides interfaces that can be used to efficiently implement Unix semantics.

Microsoft isn't interested in supporting a modern POSIX environment natively as that would merely accelerate the migration of the software ecosystem away from Windows APIs. IMO, it's why they killed WSL1. WSL1 was de facto a modern, native Unix personality on NT. They may not have managed 100% Linux compatibility, but they could've simply relabeled it Windix or something, used a small team to round things out, and received UNIX V7 certification in a jiffy.

WSL is about managing the shift toward a Linux-based cloud world, capturing as much of that business as they can without unnecessarily risking their existing footprint. WSL2 neatly allows them to maintain Windows API lock-in while also providing substantial value for those needing to develop Linux-based software. WSL1 fell short on both accounts, though standing alone I think it was much cooler technology.

> They may not have managed 100% Linux compatibility, but they could've simply relabeled it Windix or something

My take on it was that they were never able to implement all the syscalls (WSL just couldn’t reliably run some production server software), and filesystem calls were horribly slow (try npm install on a reasonably large JS project if you have any doubt).

You may be right — but I’d still like to dream that one day Microsoft will release a true Linux (or OpenSolaris, BSD, or whatever!) OS of their own, which their Office apps will fully support. That’ll be the day I weigh another option besides macOS.

Accessing Windows' NTFS volume from WSL2 Linux is (and will always be) even slower. I have no doubt they could've substantially improved file access, but management pulled the plug. WSL1 was basically just a proof of concept, after all.

Also, don't forget that Windows and NTFS have never been known for performance. Expecting file access to be as fast as ext4 from a Linux kernel was just the wrong set of expectations. If you want Linux performance you need to use Linux, of course, accessing its own block device directly; and people demanding that are going to get it with WSL2. The cost is that integration will be worse, both in terms of ease of use as well as performance. AFAIU WSL2 is using 9P now and in the future probably virtio-fs (https://virtio-fs.gitlab.io/) or something similar.

People keep saying how lightweight the WSL2 VM is. Well, that's how all modern VM architectures are now. If you launch a vanilla Linux, FreeBSD, or OpenBSD kernel inside Linux KVM, FreeBSD bhyve, or OpenBSD VMM there's very little hardware emulation, if any[1]; they all have virtio drivers for block storage, network, balloon paging (equivalent of malloc/free), serial console, etc; for both host and guest modes.

[1] I don't think OpenBSD VMM supports any hardware emulation.

WSL2 does use 9P, here is the BUILD session about it.

"The new Windows subsystem for Linux architecture: a deep dive - BRK3068"


> The only thing that NT really lacks is a Unix-style TTY subsystem.

The long, laborious ConHost / NT Console subsystem rewrite/refactor over the last few years has moved Windows a lot closer to the Unix PTY system for internal plumbing (while not breaking the NT Object side of things). The fancy new Microsoft Terminal in alpha testing now takes advantage of that.

> WSL1 fell short on both accounts, though standing alone I think it was much cooler technology.

I feel like this is an interesting case where version numbers hurt more than help. Rather than a pure 2 > 1, it sounds like WSL1 and WSL2 are going to run side by side for some time (moving distros between WSL1 and WSL2 is supposedly a simple PowerShell operation and can be done in either direction), and Microsoft has hinted that there may still be additional investment in WSL1 depending on user use cases and interest.

I think it's good for the NT Kernel to maintain subsystem diversity, and so I do hope that WSL1 continues to see interesting uses, even if a lot of developers will prefer WSL2 more for day-to-day performance.
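For reference, the conversion really is a one-liner from PowerShell or cmd. The distro name "Ubuntu" below is just an example; check what yours is called with the list command first:

```shell
# Show installed distros and which WSL version each one uses
wsl -l -v
# Convert a distro to WSL2...
wsl --set-version Ubuntu 2
# ...or back to WSL1
wsl --set-version Ubuntu 1
```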

Or just lock down a BSD like Apple did and adopt wine.

Just thinking out loud, because that is just very improbable. But effort-wise, implementing the Win32 layer over Linux is a harder thing than what they did with WSL1, surely? But at the same time, their WSL team was like what, a few guys? 10, 20? Then again, the Linux kernel source is open, so maybe that's the difference.

SQL Server on Linux already uses a thing that resembles a Windows syscall-emulator to be able to run a bunch of Windows user-space unmodified. https://cloudblogs.microsoft.com/sqlserver/2016/12/16/sql-se...

(But it's more aimed at services, so doesn't do GUI etc.)

My brain threw an exception when I read this title. Awesome that this is a reality though. Now that we have Linux kernels running and accessible on both Windows and Chromebooks, I feel like we can finally say: 2019 actually is the year of Linux on the desktop. It's not a meme anymore, it's finally just a true statement.

Edit: To those questioning if this really counts as Linux on the Desktop. Yes, I understand what you're saying; I too am a bit of a blowhard about Linux and prefer to use the real deal, not some kernel-on-a-kernel bs. But I think this is still a huge deal in terms of the accessibility of Linux. Think of how many young programmers will be able to dip their toes in the Linux pool on their gaming rigs without having to go through an Ubuntu install process (I write, thinking in the back of my mind that I had to go through that, and so maybe they should too.) Yes, there is something of an ideological compromise here, because the Linux they're running is sitting on a binary blob and not truly free... but Linux still exists in its gloriously free form for them to use when they decide that's important to them. I see no harm in them reaping some of the other benefits of Linux without the freedom; it will hopefully become an on-ramp to more Linux adopters and programmers.

Despite the wording, I don't think that is the original definition. The idea is to make Linux viable as a desktop operating system, and whatever comes with that, not running Linux as an application on another desktop. Which is almost the opposite: leveraging capabilities outside of Linux.

Maybe more importantly redefining the goal isn't helping Linux much. I remember everyone raving about Android, but now look at the lack of vanilla graphics drivers for embedded platforms.

That said, I have nothing against WSL.

I keep seeing this angst over whether Linux will ever be ready on the desktop. Well, I've been using it exclusively for something like 15 years.

Me too, since 2004 with Fedora 3 back then. And I'm growing more and more frustrated with it the last few years.

Maybe I just have less time and patience nowadays :-)

I was on Linux desktop 99% of the time between 2006 and 2014 (I now use a Windows workspace with GPU passthrough on my main system), and just today my frustration with Windows was peaking to the extent that I'm ready to go back to a *nix desktop.

Maybe people just get fed up with the same grating pain points and need to spread the frustration around. Periodic changes in scenery are healthy. :)

Windows and MacOS have their frustrating points as well, I don't think any of these systems are perfect. I suspect the vast majority of computer using people could get by with a Linux desktop with about the same amount of frustration they suffer already.

At least on Linux I can fix the things that are really making me nuts.

I too have been using it as my primary OS even when gaming for years now too. No issues.

In fact I would recommend it for anyone who only uses their computer for general, basic use.

When people on hacker news say "no problems" what I have found is they generally mean "no problems I don't consider trivial or easy to handle."

We are far, far past the point where the general population is going to spend effort to learn about something as simple as running "apt update" or even a GUI application update manager, ever. Phone operating systems give the user everything they care about (mostly content consumption, a little creation) without having to read or learn anything. If you ask that of people (to read and learn about an operating system) it's a non-starter for >80% of users.

For general, basic use I would recommend a chromebook or ipad. That's what most people are looking for - something that lets them do the handful of things they want to do with as little overhead as possible.

>I keep seeing this angst over whether Linux will ever be ready on the desktop. Well, I've been using it exclusively for something like 15 years.

>For most purposes, it has been ready for years.[...] Most people, however, could switch to one of many Linux distributions and be just as productive if not moreso.

If "most people" includes non-technical users, I doubt they could switch to Linux without difficulties.

E.g. a typical non-geek user might be my friend that runs Windows. Some examples of showstoppers that makes Linux totally a non-option:

- Intuit Quicken, which she's been using for 20 years. Yes, Mint was a possible alternative, but its early releases (inside of your 15-year time period) didn't have reliable online downloads from financial institutions. Mint's later partnership with the Yodlee API for transaction downloads still didn't make it equal to Quicken. Yes, Quicken is terrible and buggy software, but early Mint was even worse for online banking scenarios.

- Netflix streaming was not easy to run on Linux until recently[0]

- AAA games (including recent ones like Fortnite) don't run easily on Linux. Valve Steam Proton is a recent effort.

- iPhone sync with Apple iTunes - running on Linux requires googling for articles of running a Windows vm or Rhythmbox on Ubuntu which may not work with certain iOS updates

- sewing machine embroidery software all runs on Windows and not Linux or even MacOS. The software also requires a dongle for copy-protection and the hardware drivers for the dongles only exist for Windows. Running Windows as a vm inside of a Linux Desktop and exposing the host USB port to the client vm won't fool the dongle software. If the ultimate solution to "Windows in a virtual machine" shortfalls is to dual-boot Windows and Linux, that advanced configuration adds more complexity and it contradicts the ideal of "run Linux desktop exclusively".

For people to run Linux without issue, the person would need to possess technical skills equivalent to you (e.g. a HN poster) -- or the person has a "guardian angel" as on-call tech support (e.g. a son/daughter/friend) to get them over technical issues (like Netflix) with workarounds.

I don't doubt you've been able to run Linux exclusively and there are more examples like you. Nevertheless, it still required a very atypical usage profile to run a Linux desktop exclusively for the last 15 years.

Even today in 2019, I would not recommend the Linux desktop to any of my non-programmer and non-sysadmin type of friends & family unless I was willing to be their on-call tech support to handle their inevitable Windows compatibility issues.

For Linux to work in a mass-consumer-facing situation, it has to be an "appliance" type of installation and "invisible" such that the user doesn't realize they're running Linux. E.g. as the underlying os in Android smartphones, or the os in smart TVs, or the os in Tesla cars.

[0] https://itsfoss.com/watch-netflix-in-ubuntu-linux/

I agree with this sentiment. I'm not a hacker, gamer or coder. I'm an architect who enjoys tech. I've used Ubuntu and other distros. The one thing that stops Linux from becoming mainstream for desktop use is software. Software for enterprises and software for consumers.

The tech community doesn't realize that there is more than just office applications and browsers that people use. I can not install BIM (Revit) software on Ubuntu for example. I can't install Lightroom on Ubuntu. I know that there are alternatives and work arounds to software, but consumers only understand what they understand and is easy and mainstream.

The tech community can't expect consumers to spend time looking for alternative software. I feel that this is why the Windows Phone failed, because there was a lack of mainstream software (apps).

The day that BIM (Revit) is available to install on Ubuntu is the day I switch.

>I can not install BIM (Revit) software on Ubuntu for example.

Yes, a lot of Linux desktop enthusiasts only include "web browsing and email" scenarios in their mental models. Therefore, they are not aware of how the Windows os is an unavoidable platform dependency in many critical workflows. This perspective is why "Linux desktop exclusively" appears totally realistic to them.

A similar scenario to yours just happened to me last month. A land surveyor gave me some 3D laser scan point cloud files. (Trimble RealWorks files which are ".rwcx" files generated by the Trimble SX10.) The software (Trimble Business Center) to import those files only runs on MS Windows. I tried running it on VMware but the Trimble software required DirectX 11 so it crashed with an unrecoverable error[0]. Well, VMware only supports up to DirectX 10[1]. It's another example of "just run Windows in a vm" on Linux Desktop doesn't always solve the problem.

This also highlights another underappreciated and unseen difficulty with Linux desktops: You often don't know you will have a roadblock with Linux until you encounter that roadblock. It's not easy to predict your future incompatibilities!

[0] https://imgur.com/a/PRHnR4r

[1] https://communities.vmware.com/thread/608743

In some fields (3D modeling, PCB CAD) FOSS software could be considered mainstream, in others (audio, photography, vector graphics) just really good.

Yes, unfortunately, architecture CAD and BIM are not among those fields :(

Few of those fields have professional software available for Linux i.e. what is actually used in the industry. It's the students, engineers and artists that are invested enough to switch platforms. Most people are just going to use the FOSS software on Windows instead.

I have been watching netflix on linux for the past couple of years using Chrome. Install. Just works.

Gaming is still very game dependent. Think of it like a console. Some "exclusives" just wont run.

If someone is using their computer to surf, write emails, watch netflix, alongside lite gaming, I find linux to be more enjoyable. I don't have to do any command line oriented stuff at all for general use. The initial install is also very simple. On a desktop :)

We will have to agree to disagree. I think it is more an issue of framing perspective than anything.

So your opinion is that it's ready?

For most purposes, it has been ready for years. There are a handful of proprietary programs that individual people may need for work that aren't available, and you aren't going to be able to play the latest games on Linux. Most people, however, could switch to one of many Linux distributions and be just as productive if not moreso.

As much as I love Linux (I'm typing this from my IBM Thinkpad running Arch with a KDE desktop), there are far too many warts to call it "ready". Every morning, I plug into my widescreen monitor and watch as application windows randomly decide which monitor they'll appear on. Then begins the fight with my bluetooth mouse. I enable bluetooth, then turn on the mouse. It's recognized and "connected", but it does not work. I have to disconnect and reconnect. I can't in good conscience advise my mother or wife that this is a "normal" experience; therefore, it's not "ready".

Now, does using Ubuntu cover some of these glaring warts? Perhaps. But that often opens its own set of problems. Each comes with its own set of workarounds. Where macOS and Windows excel, is their sense of "polish". Most happy path things just work, and work 99% of the time.

All that said, I'm die-hard Linux ALL THE WAY! I just couldn't say that it's ready for most purposes. For the things that I'd use a tablet for? Sure.

Then we disagree. Linux works until it doesn't, which is the problem. It isn't a handful of programs so much as a wide range of capabilities. We can all speculate about the taste of the average user, but do you think companies wouldn't love running desktop Linux instead of paying millions to Microsoft? That is what everyone did when Linux actually became good enough as a server OS.

I guess we do, as I've said elsewhere, it has literally been years since I've encountered anything I couldn't do on Linux just the same as on Windows. I firmly believe the reason more organisations haven't changed is inertia. To paraphrase a cliché, no one was ever fired for choosing Microsoft.

I don't see how inertia would be the reason if you also believe that it has been good enough for 15 years. I would say desktop Linux simply doesn't offer enough unique value for the effort involved in large-scale deployments, unless you are someone like Google. Most things that have improved for users in recent years, like web applications, actually favour Windows, because the primary use case for desktop operating systems is becoming whatever lies beyond web or smartphone applications, leaving desktop Linux with the lowest common denominator.

I said I've used it for 15 years not that it would've necessarily been a good choice for a non-technical user that long ago. On the other hand, non-technical users usually require significant IT support on Windows as well. That's where inertia comes in. Linux may not offer a significant value proposition over Windows, so Windows stays. That doesn't necessarily mean Windows offers a significant value proposition over Linux either. Inertia.

> That doesn't necessarily mean Windows offers a significant value proposition over Linux either.

But as far as I know that is the case: Microsoft's offering is much stronger when it comes to large-scale corporate deployments. Unless you want to make the claim that e.g. Red Hat's offering is on par or better, which isn't something I have heard in the wild.

Given that drivers are Linux's biggest problem, isn't there benefit to leveraging Windows drivers and then running Linux on top of that virtualized hardware?

I don't see why you couldn't run x11 in the window manager with the new Linux kernel in Windows.

Windows has less hardware support than Linux. Most single board computers can run mainline Linux, whereas just a handful support Windows IoT.

AMD, ARM (CPUs & Mali GPUs), Intel, VIA, Qualcomm (Adreno) all have support in kernel 5, whereas Microsoft is still stuck subsetting a small group of ARM GPUs and branding Windows 10's OpenGL ES support as DirectX 11. This is a repeat of the troubles Microsoft had with supporting Windows Phone, but now the userbase is significantly smaller despite a wider array of hardware to support.

There are things that don't work well though, classic example are laptops that dynamically switch between discrete and integrated graphics. You'll probably run everything on the dedicated GPU which hurts battery life.

Still, my old desktop scanner that the manufacturer stopped publishing drivers for during the Windows Vista era? Yeah, Linux runs it like a boss. No looking up drivers or config parameters on the internet, it just works.

> classic example are laptops that dynamically switch between discrete and integrated graphics

YMMV, but as far as I know that's more or less a solved problem by default (for X anyway) with DRI3.

This particular complaint echoes folks (I was one) who booted Ubuntu desktop a decade ago and couldn't get wifi to work, and proceeded to complain about shoddy driver support (to present day, clearly), using only that single outdated* example as an argument. Of course this is compounded by a ~months to ~years delay in most desktops getting those improvements thanks to the glacial pace at which the mainstream desktop distros update their repos.

Was there a point when Optimus/Bumblebee/Prime was a shitshow? Yes. Is that still reality? No.

What this ignores is that Linux driver support is generally fantastic, works out of the box in a way that desktop architects at MS dream about and is infinitely more current in practice since you go to one place to update all your software, including driver software, something MS hasn't been able to get right in a decade of trying.

Regardless, mobile battery life's still worse on Linux. And as much as some things are super convenient compared to the Windows/Apple world, the truism a friend told me as I wrestled with Ubuntu ten years ago remains true today: Linux is for folks that enjoy configuring Linux.

* I have to use a combo of DKMS and an AUR package to get WiFi on my one year old IdeaPad, so outdated may be the wrong word there. Better to say that realtek and broadcom chips have gotten hit hard by Intel's move into consumer networking.

Worth pointing out that 'year of the Linux desktop' probably predates that.

> What this ignores is that Linux driver support is generally fantastic, works out of the box in a way that desktop architects at MS dream about and is infinitely more current in practice since you go to one place to update all your software, including driver software, something MS hasn't been able to get right in a decade of trying.

Generally, yes, it's pretty decent on first go and there's a nice default happy path.

Unfortunately in my own experience, that path isn't particularly wide, and there's a huge number of gaps.

Version compatibility is a major issue, imo. A driver or package that works great on one kernel version is completely broken on the next.

A few hours ago I tried to install the official AMD drivers for Ubuntu. It's not until the install script has already gone and screwed up my system that I get told that they don't support 19.04.

I just don't have that issue on Windows as a rule. I'm not claiming Windows is perfect by any means; it's got its own set of issues.

Version incompatibility is expected, because the Linux kernel changes a lot. You need the correct driver for your kernel. The amdgpu driver is part of the Linux distribution; your best option is to install it from the Ubuntu apt repository, not from the manufacturer's website. Which now explicitly and clearly says the driver is for Ubuntu 18.04 only - did you not see that?

> Version incompatibility is expected, because Linux kernel changes a lot.

Expected by whom, though?

I could understand it if it was major kernel versions or something like that, but it seems that a whole bunch of things are really tightly linked.

> Which now explicitly and clearly says the driver is for Ubuntu 18.04 only - did you not see that?

I honestly didn't. I went back and checked - yep, it does say for 18.04[1].

I have to say though that I wouldn't have automatically assumed it was ONLY for 18.04 without it being more explicit about that. If the official drivers are available within the repo from now on, then it'd be great for AMD to actually say that. (I realise this isn't Ubuntu's fault)

[1] https://www.amd.com/en/support/graphics/amd-radeon-hd/amd-ra...

Of course, not expected by a basic user. But if you follow Linux, it is quite well known that the Linux project intentionally does not promise stable internal api in the kernel, they "run a tight ship" where if a program needs to use kernel api, it has to be maintained in the Linux tree. Given this modus operandi of the Linux project, breakage of the old drivers with the new kernel version is expected.

Advice to Linux users: one would best get their graphics driver and a matching Linux kernel from the same source, either the Linux project or the OS distribution. Mixing versions downloaded from AMD's website with a random kernel is supported by nobody and is testing your luck. That is the Windows model, and it kind of works on Linux only with Nvidia's drivers for their hardware, although it brings a lot of headaches too.
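On Ubuntu that advice boils down to something like the following (a sketch using the usual Ubuntu package names, not AMD's official instructions):

```shell
# Use the in-tree amdgpu kernel driver plus the distro's Mesa userspace,
# all from the Ubuntu repositories so versions match the running kernel:
sudo apt update
sudo apt install xserver-xorg-video-amdgpu mesa-vulkan-drivers
# The kernel side (the amdgpu module) ships with the kernel package itself:
modinfo amdgpu | head -n 3
```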

I agree that the installer should have warned you about the incompatibility at the beginning of the install, not at the end. That sucks.

With graphics, it is usually best to run the newest drivers with Linux, which means the newest kernel possible. The exception is older cards which are not supported by AMD anymore (which sucks), where one can only use old drivers with an appropriately old kernel.

Those are pretty old cards though (I've had amdgpu support for my six year old 7970 since kernel 4.9, and I think they've extended it back to a generation or two older architectures now), and you can use the open source radeon drivers with any kernel.

I think the folks using Catalyst for better 3D acceleration have probably moved on to cards supported by amdgpu by now.

Version compatibility is not an issue for in-kernel drivers, only for the few remaining external ones. On Windows you have this issue much more often if you are trying to use an older device, in particular one that came out before Vista. On Linux, once a driver is in the kernel, it's continuously adjusted to driver API changes and will keep working. You can still run a current kernel on a 386 if you want to.

I'm running a quite current Dell on an essentially unpatched kernel (just includes Gentoo's default patches) with no additional modules involved and everything I tested up until now works, even fancy things like Dell's mini-dock.

> On Windows you have this issue much more often if you are trying to use an older device, in particular if it's one that came out before Vista,

I think that's a bit of a difference though. Vista came out nearly ~13 years ago, and we're talking things breaking a year or less later.

Heck, for more obscure drivers[1], it seems necessary to recompile for every kernel patch. Perhaps that's the fault of that driver's developer for not following the correct way to build kernel drivers, or there's something unique about this particular device - I don't know.

[1] an example: https://github.com/milesp20/intel_nuc_led
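For what it's worth, the rebuild-on-every-kernel dance for an out-of-tree module like that looks roughly like this; DKMS exists precisely to automate it. (The module name matches the linked repo, but the version number here is illustrative.)

```shell
# One-time registration of the module source tree with DKMS
sudo dkms add -m intel_nuc_led -v 1.0
# Build and install against the currently running kernel
sudo dkms build -m intel_nuc_led -v 1.0
sudo dkms install -m intel_nuc_led -v 1.0
# After a kernel upgrade, rebuild everything DKMS knows about
sudo dkms autoinstall
```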

> > On Windows you have this issue much more often if you are trying to use an older device, in particular if it's one that came out before Vista,

> I think that's a bit of a difference though. Vista came out nearly ~13 years ago, and we're talking things breaking a year or less later.

I'm not. I'm talking about the fact that drivers for devices older than me that have been merged into the kernel keep on working today while on Windows for some subsystems (like graphics or sound) you can't expect things to work after "just" 15 years. I admit that this is a long time, it's still a huge difference.

And yes, on Linux you are expected to recompile drivers for every new kernel version, that's intentional (https://github.com/torvalds/linux/blob/master/Documentation/...). Since the driver API is reasonably stable, the code doesn't need to be adjusted for every version, and if the driver has landed in the kernel, this is done while changing the API.

Linux driver support is fantastic because it's not made by manufacturers.

This is not correct.

Many manufacturers contribute drivers to the kernel.

I do not enjoy configuring Linux, but I hate fixing Windows issues. I just reboot, deinstall, reinstall and at the end I have not learned anything and I do not even know if the problem will occur again. When I fix a problem in Linux, it is a journey that makes me discover unknown territories. At the end, I have improved my experience and knowledge.

For Linux, I am in control. For Windows, I am a puppet of Microsoft's will.

The last time I was shopping for a laptop (a year ago), Arch's wiki said it was all broken for the models that interested me. Here's an example, if you have better information maybe update the wiki.


I'm not updating the wiki for a device I don't own, or plan on owning.

Especially when the relevant part of the wiki is correct.[1]


I spent a lot of time trying to get dynamic switching working and after countless hours I gave up.

Optimus is definitely not a "solved problem", unless you know some method I didn't find in my dozens of hours of googling how to get it to work on a system76 laptop (Linux preinstalled) and a 2012 MacBook.

The "solved problem" is using the kernel implementation of muxless hybrid graphics (PRIME), not Nvidia's proprietary one (Optimus).
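Concretely, with the Mesa drivers PRIME render offload is selected per process via an environment variable (this assumes the mesa-utils package for glxinfo):

```shell
# Default: the integrated GPU renders this process
glxinfo | grep "OpenGL renderer"
# DRI_PRIME=1 offloads just this one process to the discrete GPU
DRI_PRIME=1 glxinfo | grep "OpenGL renderer"
```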

Yeah, the "happy path" here is applicable if you don't attempt to install janky/poorly maintained proprietary out of tree drivers. The reason these drivers are out of tree is usually due to either serious hardware flaws, incompetent/inept vendors, or a combination of both. AMD has (in large part) fixed this by reusing the kernel shim from AMDGPU (open source & mainlined in kernel) for their proprietary driver (AMDGPU-Pro).

Nvidia meanwhile has stated they will not support Wayland, and has sandbagged the integration of their Linux Kernel patches for their single board computers (like the ones that are used in Tesla's cars). They don't give a fuck if their clients are stuck on broken, insecure BSPs, and frankly they operate as a malicious vendor: https://www.theregister.co.uk/2018/01/03/nvidia_server_gpus/

Yes, Nvidia provides terrible support for Linux, but it doesn't matter whose fault it is. That is the basis of many, many complaints about drivers for Linux. Nvidia ships in a great many laptops. If Nvidia sucks on linux, linux has a problem with drivers, full stop.

I'm not sure how the bubble you live in came to be but for ordinary desktop/laptop hardware that is as false as can be.

And there is a very simple reason for that: all desktop PCs and laptops are sold with Windows, so if no Windows support exists, the hardware as a whole will not exist.

Meanwhile in linux land I still can't use my three monitors because DisplayPort MST doesn't work with the open source AMD driver. Support has existed for quite a while, but it seems none of the five people on the internet who have actually tried it have gotten it to work. Just one of the many driver issues I currently have on a couple of machines with linux.

> YMMV, but as far as I know that's more or less a solved problem by default (for X anyway) with DRI3.

Just no.

It is not false. Complaining about obscure gpu driver feature not working in Linux kind of proves the parent's point. Most usual hardware works out of the box on Linux now, except for the nvidia cards, but even that can be made to work with their binary driver. If the AMD driver does not support some obscure feature, that is on AMD. They work on it, complain there, but naturally some things have higher priority than driving multiple monitors from a single port.

I'm not blaming linux for manufacturer not supporting linux well enough. But I'm stating drivers are an immense issue for linux - regardless of who is to blame.

Maybe I shouldn't have started with an "obscure" example. Maybe that the driver crashes if displays are awakened from sleep? Mind you - only waking the screens from sleep, not the entire system (that doesn't work either, but I don't know which driver that is to blame for that yet).

Or that Ubuntu LTS is just incapable of turning off most machines I've installed it on? (Machines that had existed for a while when the LTS version came out.)

Yeah, those other issues are real and they suck, especially when one uses the most up-to-date software and they still happen. Point taken.

For what it's worth it is my experience that displayport MST has never worked reliably on any OS or hardware whatsoever. I gave up on it and am thankful I can now just buy a thunderbolt dock with multiple video outputs.

MST on a NUC was pretty seamless for me when I used it.


If you're gonna reference the wiki, cite the right article[1].

Under the 'GPU Offloading' heading (the feature most folks want in dual discrete/integrated laptops):

> Note: This setting is no longer necessary when using the default intel/modesetting driver from the official repos, as they have DRI3 enabled by default and will therefore automatically make these assignments. Explicitly setting them again does no harm, though.

I had to do nothing out of the box to have a working PRIME setup on my work laptop over a year ago, and I'd never used a laptop with discrete graphics prior. My only driver issue was with DisplayLink (proprietary), where I had to pin xorg because it wasn't compatible with 1.19.

[1] https://wiki.archlinux.org/index.php/PRIME

Most people want to use the proprietary drivers, because nouveau's performance is prohibitively low for gaming. And that only supports switching GPUs with a reboot.

It does work, but it's not very nice, which is what we are upset about.

Choose a vendor that isn't actively fighting the OS they sell and support if you want your hardware to work as expected: https://www.theregister.co.uk/2018/01/03/nvidia_server_gpus/

eGPUs in a laptop format is a bit silly, you sacrifice the mobility of the laptop to get a very cut down discrete mobile GPU. AMD has started to eat this market alive due to the performance per watt advantage: https://redd.it/bc5hkg

Consumers are much more likely to bring a sub-5lb HP x360 everywhere than a bulky 8+lb laptop.

Your comment confuses me. Are you talking about external GPUs not making sense? Or dedicated GPUs in laptops?

Anyways, I'm pretty happy with the weight of my Dell XPS 15 which has an Nvidia card, but I regret buying it because of the lackluster Linux support.

I haven't seen any laptops with dedicated AMD cards at all lately, and until recently I had no idea that AMD's integrated cards were so competitive, so that's why I didn't. I plan to buy AMD next time I'm in the market.

Drivers are far from the biggest problem for Linux adoption. The biggest problems are:

  + Network Effects
  + Brand recognition in the general public
  + Indifference to FOSS principles
  + Resistance to change

One more: OEMs like Dell and HP aren't allowed (by Microsoft) to sell PCs that dual-boot between Windows and something else, such as GNU/Linux so the only people that get to try Linux are those that have it pre-installed as the only OS, or those willing and able to install it themselves. Source: read about it somewhere or other; main topic was the history of BeOS.

This (the OEM deals) is really by far the biggest problem that Linux has on the desktop.

Microsoft still plays dirty and has done that for a long time.

Why? As far as I am concerned these are all legacy problems. Drivers mostly work, people are aware of Linux and it can be e.g. run off an external drive. The problem is desktop Linux just isn't good enough. From a macro perspective it takes significant effort to manage, significant effort to develop for and provides significantly less value to users, developers and organizations alike. And exceptions doesn't make that less true. There is no conspiracy against Linux, or at least not an effective one. If there is anything hurting Linux it is its mainstream proponents like Google, who like to take but not give back agency. If desktop Linux was good enough, or more precisely great at what people need, they would be using it.

Drivers are sometimes a bit slow to come to Linux, but it's been years since I've had a problem with any hardware more than about 1 year old. The community is large and talented and has a diverse range of interests. Some truly old and obscure stuff is supported. Most new hardware is supported with a reasonable time after release.

You can run X11 today with Cygwin.

You can run X11 on Windows today without Cygwin, too.
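For instance, with a third-party X server running on the Windows side (VcXsrv, X410, Xming, etc. -- which server you use is up to you), X clients inside WSL2 can display on the Windows desktop. A sketch; the `DISPLAY` derivation assumes WSL2's default setup, where the Windows host's IP appears as the nameserver in `/etc/resolv.conf`:

```shell
# Inside WSL2: point X clients at the X server listening on the Windows host.
# (Assumes the X server was started with access control permitting WSL2's
# virtual network -- check your X server's settings.)
export DISPLAY="$(awk '/^nameserver/ {print $2; exit}' /etc/resolv.conf):0"

# Then any X11 application should render on the Windows desktop:
xeyes &
```

Under plain Cygwin or with a standalone X server, `DISPLAY=localhost:0` is usually enough, since everything runs on the same host.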

Can we just have Linux kernel but without that XXX thing please?

Sure, you can pick between ChromeOS and Android.

Wayland... Try Fedora?

You pinpointed exactly the rub I was experiencing. Reminds me of "Rules as Written" vs "Rules as Intended" dichotomy I see mentioned on the Dungeons and Dragons stack exchange.

If this really is the year of the linux desktop, it's not what I had imagined.

While people nitpick over what "linux desktop" means, don't lose sight of the fact that general purpose computing is slowly dying and being replaced by iOS and Android. Roughly as many iPhones are sold per year as desktops and laptops combined.

One contributing factor to that statistic is that the phone lifecycle is a lot shorter than the desktop/laptop lifecycle. Also, desktops and laptops are more likely to be shared between family members. The point is people still have access to and use traditional computers.

While this must be true for a sizeable part of the population, more and more people also don’t ever touch a desktop/laptop or only use the one at work.

Unsurprisingly, the latter case becomes truer for low-computer-literacy adults, for whom investing in a decent smartphone has a better ROI than buying a meh phone plus a cheap laptop.

Anecdotally, we bought our aging parents an iPad each two years ago.

They were decent but not that proficient at Windows, but got comfortable very fast with viewing/saving photos, printing, email/messaging and browsing on the iPad.

From there they gave up their budget “my carrier gave it for free” phone, moved to the iPhone XR, and started doing all of the above on their phones most of the time. The laptop is now a third-rank device they use to check their savings once a month.

In your anecdote, your parents weren't really using their PC before they got the iPad. I think this is true of a lot of people who are smartphone-only today. Android and iOS are expanding to a market that PCs never really got to.

They were using their PC, but begrudgingly: getting annoyed by the updates, virus warnings, us pestering them to switch browsers, etc.

For most of their lives PCs were the only way to do some tasks, including photo management (and we were sharing tons and tons of photos with them; they also filled SD card after SD card in the summer), and it's only recently (basically the iPad Pro) that, looking at us, they thought they could give it a try.

> While this must be true for a sizeable part of the population, more and more people also don’t ever touch a desktop/laptop or only use the one at work.

I have a hard time believing this, do you have any studies to back this up?

OTOH, considering Termux's popularity, you might say that general-purpose computing on Android is growing rapidly.

I still spend 80% of my workday on a PC running windows. So not sure I agree.

Oh well, if YOU spend 80% of your workday on a PC then it must be true for everyone.

So does every office worker.

Most office workers could get away with a Chromebook, I'm surprised this hasn't caught on in offices yet.

Might be different in the US given popularity of Apple software, but in the rest of the world, Microsoft Office suite is a fundamental tool in almost every office. I don't think it'll run on a Chromebook.

(And I mean the desktop version. O365 is dumbed down half-way to Google's Office Suite level - which is cool for an occasional document or spreadsheet, but is lacking both features and efficiency for professional use.)

Depends on the use case. I work for a MSP and I do almost all my office work in GSuite on the web. The only thing I fire up LibreOffice for is dealing with CSV files which are a pain in GSuite.

99% of my job requires nothing more than a web browser and a terminal with a SSH client. The remainder is mostly Wireshark.

> Depends on the use case.

Indeed it does. At her last-but-one job, my wife could probably do most of the spreadsheet-related stuff in GSuite, except for that one spreadsheet that brought desktop Excel to its knees, and would totally explode the browser if you ever tried it.

My experience from watching other people and occasional summer jobs is mixed. Some of the stuff you technically could manage on GSuite. Others you wouldn't, not because the files were too big/complex, but because the office computers were underpowered.

Oh well if office workers in YOUR company get away with using a chromebook then it must be true for everyone.

Considering the majority of work computing is done on desktops I'm not sure it will be replaced by iOS or Android any time soon.

We need a proper keyboard, and a bigger screen, at a bigger distance. I can't look at my phone too long. My eyes start to tear up and the pain is horrible. A tablet won't be much better if you hold it in your hand - same distance as the phone. I can understand that many people use a tablet for work, but not for 6-8 hours a day.

And just as the Mac Mini is a laptop without a screen and keyboard, the latest MacBooks are more and more like iPads with a good keyboard. Well, whether the keyboards are good is questionable these days...

iOS doesn't use Linux and on Android userspace doesn't actually see it.


So it is meaningless.

There's more than one Android userspace.


Which isn't fully POSIX, breaks down with every Android release that clamps down on security, and most of it is just plain ISO C, ISO C++ and OpenGL, hardly Linux specific.

True for iOS that it isn't based on Linux. But iOS is based on macOS, and macOS is a real certified Unix.

None of them have anything to do with Linux, given that I was replying about "Linux desktop" here.

Second, iOS being based on UNIX hardly matters, no one is doing shell scripts and writing UNIX daemons on it.

Thirdly, no one was talking about macOS in that comment.

Finally, even if you want to bring macOS into the picture, while macOS is a proper UNIX, what matters for Apple community developers lives in C++, Objective-C and Swift libraries, hardly POSIX related.

I was like you when I read that. I remember when Ballmer called Linux cancer:


I'm pretty sure that Microsoft (and Ballmer) still consider Linux a cancer.

It's just progressed very far, and their latest moves are only desperate efforts to "embrace the cancer" before it completely consumes the market.

I was hardwired with that memory.

My hardwired memory is when Ballmer mocked me publicly during an intern Q&A for asking if Apple could become a threat in 2006.

Would you like to elaborate? Always eager to hear such anecdotes:)

I asked whether apple’s user friendly (integrated) experience, iPod halo effect, consumer interest in the new iPhone, and their shift to low cost Mac mini would be a threat. He laughed it off. I asked “what about those who have gotten tired of waiting for the new Windows to come out and bought a Mac?” He asked if that was a personal story & told me to come back; I’d make enough money to buy a real computer. He was right. I bought an iMac as a full time employee and was one of the top bug filers for Win7 compat issues.

It took years for Microsoft to take the iPhone seriously. They never predicted the “BYO phone” era and figured IT pros would insist on enterprise features. Years after iPhones were out, the Windows phone OS code names were bears that ate blackberries.

How could Apple be a threat with such a glorious Zune coming out in 2006?!

Oh wait...

Installing Linux in VirtualBox is not hard. If anything, it's extremely easy. Microsoft does not bring anything substantial: you've been able to run Linux in VirtualBox since forever, and they just bundle it in their OS. Windows is just getting worse and worse. I was amazed when they decided to develop a Linux syscall layer, but now it's just an extremely lazy approach. Hey guys, we don't have enough money to build a proper Linux layer, let's just ship VirtualBox with Windows. LoL. I just don't understand all the hype. There's nothing innovative in their approach.

>Microsoft does not bring anything substantial

Microsoft brings integration not possible in a VM. You can actually make Linux tools part of your complex heterogeneous workflow.

Can you tell me more or provide a link where I can read about it? I know that WSL1 indeed provided that level of integration: you can launch a Linux binary from a Windows program or vice versa, because they are actually executing on the same Windows kernel, and the files are actually on the same NTFS file system. That is truly amazing integration.

How is their WSL2 integration different from something like shared folders?
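For context, the interop described above goes beyond shared folders in both directions. A sketch based on Microsoft's documented interop features (the distro name "Ubuntu" and the paths are illustrative):

```shell
# From a Windows command prompt or PowerShell, Linux binaries can be
# invoked through wsl.exe, with stdout/stdin piped across the boundary:
#
#   wsl ls -la /home/user        # run a Linux command from Windows
#   dir | wsl grep txt           # pipe Windows output into a Linux tool
#
# From inside WSL, Windows executables on PATH launch directly, with
# Linux paths translated for them:
#
#   notepad.exe notes.txt
#   explorer.exe .               # open the current Linux dir in Explorer
#
# In WSL2 the Linux filesystem lives inside the VM's virtual disk, and is
# exposed to Windows applications through a network share rather than NTFS:
#
#   \\wsl$\Ubuntu\home\user
```

This cross-invocation and path translation is what a plain VirtualBox shared folder doesn't give you: processes, not just files, cross the boundary.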

From what I've heard, the change from the syscall-layer approach to the virtualized Linux was mostly about file system performance. Does it have to be innovative if it offers better integration than VirtualBox? Or is it reasonable that they use well-known and working tech?

Also, while I think it is useful, I imagine much of the hype here is also from the weirdness of it all. Would you agree that it is noteworthy that Microsoft is publishing a Linux kernel for Windows? I think it is.

If there's not enough file system performance, they should improve their file system layer. It would benefit all Windows applications as a side effect. Believe it or not, people actually use Git on Windows :) If Linux can pull off much better file system performance, I don't believe it's impossible for Windows to achieve similar levels. I think Microsoft has sent many patches to the Linux kernel before. It might look weird, but it's nothing really new. Their Azure business uses Linux.

Somebody link that one Dropbox comment.

Well, technically yes, but…talk about moving the goalposts! Almost like swapping the goalposts.

Always remember to lift the goalposts with your knees, not your back.

Yep, except in this case it's just one goalpost and everyone wins. Now we can enjoy using software such as MS Office and the userland from Linux distros on the same machine seamlessly.

This is the "embrace" phase. It also kind of encompasses "extend" by adding hardware support via Windows drivers. This alone is not enough to get to "extinguish" though, so allow me to make a prediction.

The next step is fully locked down hardware that requires a signed OS. Linux will not run on it, but MS will pretend to support you by allowing Linux apps to run on their kernel. Then, once no hardware will allow you to run Linux natively, they will deprecate it and get people to migrate to better-supported APIs of their own.

People are just walking right into this as if none of Microsoft's history ever happened.

The only OK scenario is that MS becomes a larger version of Redhat and migrates users the other direction. Is that happening? No. It's all about bringing your toys to their house.

The problem with this line of thinking is the assumption that the operating system kernel still matters. It doesn't anymore. What matters now is whether people are buying your cloud solution. No one cares about desktop users, no one cares about selling a server OS. The real money is in big/medium companies buying your cloud stuff on a nice monthly subscription. Good luck making people pay you every month for an OS; they don't want to upgrade and pay for a new one even after 10 years, so they keep running Win XP.

In that line of thinking you also want the costs of running your own cloud solution to be as low as possible, so it even makes some sense to phase out Windows: all their competitors run their clouds on Linux-based systems, where they get all the upsides of collaborating on Linux. Whereas by trying to run all their stuff on Windows, MS pours money into something they could have for free, or at least for a lot less than what they spend maintaining Windows.

> No one cares about desktop users, no one cares about selling server OS.

I don't see this in reality though. Any kind of corporate office IT is still firmly in the hands of Microsoft and Windows and I don't see any change for that on the horizon. Quite the contrary as IT departments have to double down on applying the Microsoft way with Windows 10 Enterprise and its changed licensing scheme. Where you've been able to dodge having a machine dedicated to license management (Windows Server, of course), you now are forced into a different licensing setup with Windows 10 Enterprise. Why Windows 10? Because Microsoft forbids you to use anything older on new hardware. Why Enterprise? Because you can't reliably make Windows 10 Pro stop phoning home about the documents you are opening. The only offer here is Windows 10 Enterprise.

All this futzing around with open source and Linux subsystems is just for the developer facing part of Windows. All the other areas, those that developers rarely encounter unless they talk to IT, are still shaped by the same Microsoft of 10 or 15 years ago.

Microsoft has seen that there was a huge brain drain towards apps running on Linux servers, developed on Macs (webdev, containers, deep learning, ...).

The new strategy is to get developers back on Windows via Linux. Once it's the new norm to run your apps in Linux containers that run inside Windows Server 2022, developed on Windows 10 machines, they are going to edge Linux out of the equation again. You won't be able to switch your containers away from Windows after that.

Believing that it doesn't matter how your end user devices fit into corporate IT or ignoring what steps Microsoft has taken to make Windows Server the default in the cloud is really dangerous.

That kind of locked down hardware already exists, and microsoft never needed to make any kind of argument about how you can technically run linux in a VM.

In other words, I'm not worried about WSL as a vector.

Exactly this. I still remember M$ playing rough in the browser game. Bad actors never change. You can purge "bad apples" again and again, change masks, pretend to be fluffy and lovable, but you're still the same old evil co. We remember. We see you snooping. We see you seething with hate for FOSS underneath your "let's open-source .NET Core" mask.

And I, personally, will never forgive M$ for the Elopcalypse, the burning platform, and the death of Nokia.

MS is actually walking back from lockdown. New Qualcomm ARM laptops allow you to disable secure boot exactly like on old x86 ones.

They don't need to do this anymore, pushing Windows is not their priority, they want to get people on Azure, Office 365 and Xbox live. Everything is about services these days.

The problem is that the kernel is sitting on top of a binary blob, and that blob mediates access to the real machine. We shouldn't be so worried about the kernel; we should be concerned about what the binary between the kernel and the hardware is doing/not doing, and what degree of control we have over that layer.

As far as I know the binary blob is packed with anti-user features such as ads and mandatory telemetry.

It's not very sane to run anything on Windows.

The same problem exists in VMware, or really any cloud provider...

A cloud provider and your personal machine are different things with different considerations. The advantage of using Linux on a cloud provider is primarily to prevent vendor lock-in and be able to move to a different provider or a physical server under your control as needed.

Using Linux on a personal machine is primarily so you can trust your machine to serve you and no one else. This is defeated if Windows or backdoored firmware is running below Linux.

Note that the no lock-in benefit also exists on the desktop and because of that it makes sense to switch to Linux on a machine that requires proprietary firmware as an intermediate step to moving to a better machine like those from https://puri.sm/

Cloud providers have no profit motive in breaking your trust. I fully believe an OS-vendor for personal computers would, especially if there's almost no alternatives.

Or on any proprietary firmware...

Yes, it’s a gateway drug, like firefox and libreoffice.

Running Linux in a VM under Windows is not a new capability, nor is this convenience development particularly helpful to Linux's long term support of modern desktop hardware if it results in what would be new linux-on-metal users never bothering and instead using Windows as a driver layer for Linux VMs.

Chrome and Windows using the Linux kernel as a POSIX support layer is only going to diminish the Linux desktop community and the quality of its modern hardware support.

I had to laugh when I read about this "Ubuntu install process" - Are there really still people on earth who can screw that up? :-D

Most people don't know what a URL is, and you want them to install an OS ?

No. But I think the people who know how to install Windows can also install Ubuntu.

Yes, many people I know would get stuck on the very first step: formatting and creating the bootable USB drive.

Canonical needs to put USB drives with Ubuntu into retail stores then :D

Aren't there apps to do that for you nowadays? :-D

Yeah, you need to google for a tutorial and install something like Rufus or UNetbootin. You have to have a whole mental model of what ISO files are, what a boot disk is, how to fix the boot order in the BIOS if needed, etc... Very far from trivial for most people.
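And that's the graphical-tool route. The command-line equivalent from an existing Linux or macOS machine is roughly this (a sketch; the ISO filename and device name are placeholders -- double-check the device, because dd will overwrite it without asking):

```shell
# Write an Ubuntu install ISO to a USB stick.
ISO="ubuntu-desktop-amd64.iso"   # path to the downloaded ISO (placeholder)
DEVICE="/dev/sdX"                # the USB stick, e.g. /dev/sdb -- find it
                                 # with `lsblk` (Linux) or `diskutil list` (macOS)

# Copy the ISO raw to the device and flush write caches before returning.
sudo dd if="$ISO" of="$DEVICE" bs=4M status=progress conv=fsync
```

Which is exactly the point being made: "copy a raw disk image onto an unmounted block device" is not a mental model most people have.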

>> on their gaming rigs

Maybe if MS makes an Nvidia driver too. :/

How? Wouldn't it be the year of the "Microsoft emulating Linux behind the scenes" desktop, aka still Microsoft windows?

That's the point: they're not emulating Linux anymore, they're actually running Linux in some kind of lightweight Hyper-V VM.

Emulating? They're literally shipping a kernel and distro with their OS..

What's the GUI going to run? X.org? Wayland? Or in the Windows ecosystem?...

Wine is not an Emulator

Also at one point "Windows Emulator": http://www.faqs.org/faqs/windows-emulation/wine-faq/
