All this is not to say that upstream has any reason to accept bad patches that rely too much on proprietary code. But in this case, I think the analysis of the underlying motivations for that patch was outdated.
EEE was more effective when Microsoft was completely dominant, but it's clearly still effective even when that dominance has diminished.
This move, for instance, will cause Linux users to depend on a proprietary, closed-source API entirely controlled by Microsoft. It's very bad for Linux and very good for Microsoft. It's a lever of Microsoft control, extended into the rival Linux system. It's easy to see how it effectively fragments the Linux ecosystem and allows Microsoft to forcibly pull users who come to depend on DirectX from Linux to Windows by manipulating this lever - reducing Linux support in the future, and so on.
This is classic EEE, and the only way you can dismiss it is if you believe their PR, as apparently you do:
> they are no longer the 500 pound gorilla in the room, they're the "hey, fellow kids" oldtimer trying to fit in and get in on a piece of the action.
As the other comment explained, they are still a huge gorilla in spaces that are absolutely vital to Linux: server, desktop, and laptop.
Linux as we know it lives or dies on the server and desktop / laptop markets. Yes, Android is technically based on the Linux kernel, but it builds an entirely different ecosystem on top of it. The Linux ecosystem is only on the server and PC - exactly where Microsoft is competing.
Finally, the availability of this proprietary API will reduce the incentive for hardware companies like Nvidia to allow development of direct Linux support (drivers etc) for their products - which is what Linux really needs in order to survive, let alone prosper.
No reasonable person at Microsoft considers Linux to be a threat on the desktop/laptop. No reasonable person at Microsoft considers Windows Server to be a threat to Linux on the server.
The on-premises infrastructure market has been and is projected to continue shrinking in favor of cloud spend.
Googling around one can find reports with names like "IDC Worldwide Operating Systems and Subsystems Market Shares report" that show how prevalent Linux is vs. Windows Server even in Enterprise IT.
I agree with your general sentiment, but just wanted to note that high-end embedded systems are also a very Linux-heavy domain. Even though from hardware side it might look similar to the mobile space (ARMs everywhere etc.) it is actually a completely different world.
Said of a company which owns a market-shaping player in every layer of the stack: OS, directory, database, applications, development languages, source code repository, and the second-biggest cloud to host it all in. Given that mobile and web involve about half of that list, I fail to see how they're some sort of non-player, trying to play "catchup" to the rest of the world now.
Perhaps in some countries, definitely not everywhere. At my university, there is only a single Mac across 40 or so laptops.
> because it compiles iOS apps
I doubt students buy Macs for that. iPhones are not nearly as popular outside the US.
> and it's a *nix
Students don’t care about its UNIX certification. Most don’t even know what that is when they start their undergrad.
> it easily runs most of the same tools as a linux webserver
No, it does not. Many important dev tools are not available, not even through brew.
> That's what WSL is about, trying to win back developers
Globally, the majority of developers are on Windows/Linux, not macOS.
> They're trying to make it feasible to target a linux webserver
I’d bet this is about avoiding Linux becoming the major platform for ML/DS/AI, rather than anything about macOS.
So in other words, what you're saying is they are Embracing this FOSS linux-based developer workflow...
And you could almost say they're Extending the linux kernel to support DX12 on WSL...
Am I missing something?
It's not so hard to imagine picking up a legacy project at some point in the future, and pretty much having to develop it on Windows and deploy to Azure because of some WSL-specific dependencies baked into the framework it's built on.
That's just one scenario, but the point is why would you expect that they wouldn't leverage their market-share to disadvantage competitors if they ever got into position to do so? Just because they aren't in that position now doesn't mean we shouldn't be put off by the fact that they're building those levers to fragment the ecosystem now.
So if you want a machine that can play games and run a Linux dev toolchain, that would require a Windows install. If you can incentivize folks who may never have installed Windows to do so, and keep them there as the only place they can both play games and use their preferred dev environment, then you've extinguished that portion of the desktop linux install base.
EDIT: Also, I absolutely love the "I don't think Microsoft thinks so either." Microsoft is a for-profit corporation that's eyeing a way to get users who traditionally avoid them to come to their platform and stay there. They aren't some benevolent font of cool tech; they're trying to sell you products.
Microsoft is taking steps to embrace desktop Linux, we've seen that with WSL. Instead of ditching Windows for a different/better dev platform, just keep using Windows.
They are extending their own desktop Linux by adding the capability to use DX12 in Linux, so long as it's run on Windows. Note that they are not adding this capability to a straight Ubuntu install, let alone any other distribution.
To claim "Microsoft knows they can't Extinguish desktop Linux" is naive at best, and shill-like at worst. They don't need to completely eliminate desktop Linux for EEE to apply here. The "Extinguish" part comes naturally from fewer and fewer devs choosing an alternative to Windows.
Yes we are. I was talking about the motivation behind their action. And I think it's quite obvious that 'get businesses to use Azure over AWS' or 'get web developers to use Windows instead of Macs' is a much, much more likely motivation than 'Extinguish desktop Linux'. Why would they care about that? Desktop Linux is absolutely no threat to them, and there's no money to be made winning that fight more than they already have. Know what, I'll grant you that in their attempt to steal web devs over from MacOS, and getting a better user story for a web dev deploying Linux servers from Windows, they will probably also hurt desktop Linux - if nothing else, simply by creating another viable alternative for devs that isn't MacOS. But I can pretty much guarantee you that extinguishing desktop Linux is not their motivation - it simply makes no sense.
Presumably because we smart and they dumb. Or something, I don't know?
Personally, I think that their thirty years of experience and billions of dollars might make it possible for them to come up with a plan I wouldn't have thought of.
They don't have to kill Linux, and in fact, would not want that IMO, because then someone might bring another pesky antitrust suit against them. But if they can dent it in any way, they will.
For those who say Microsoft has changed, is a different company, is embracing open-source, etc.: if that truly is the case, wouldn't they release DirectX for all Linux distributions, not just theirs?
Microsoft is the dragon here. Even if it appears friendly you still don't cut a deal with it, or you become a pawn in its game.
Apple can be as hardheaded as Microsoft. I have several Apple machines and each of them was bought specifically for developing iOS apps, other uses appeared later. Apple can provide Swift for Windows and Linux, but they will never allow building iOS apps on non-Mac hardware.
The modern form of EEE is being a platinum sponsor of the Linux Foundation and inserting a CoC into the source tree.
Corporations don't want the "angry young OSS men" of the 1990s and early 2000s, who could be an actual threat.
Corporations want submissive, neutered and obedient developers.
First of all, the market share of desktop versus mobile is roughly 43% for the desktop and 57% for mobiles and tablets. More important, however, is that the rise of mobile has been sharp since 2009 but has been stagnating since 2017, and this stagnation trend is clear. Mobile devices too are a commodity; their market isn't growing any bigger, and people aren't replacing their phones as often. The market for the desktop did not shrink. It just reached saturation. And as it will become increasingly clear, mobiles have reached saturation too.
What devices do people use in companies to do their jobs? Laptops. Sometimes tablets, for drawing on them, although a piece of paper would do. Mobile devices have been and remain essential for communications, this meant phone calls in the past, nowadays it means WhatsApp, Slack, email, along with shitposting on Facebook/Instagram/Twitter, but that's about it.
> "webservers (where Linux is massively dominant)"
Sorry to disappoint you, but the market for web servers is actually small, unless you're Amazon. And Microsoft realized at some point that selling Linux boxes on Azure is probably more lucrative than what they were doing. The web server space was never Microsoft's, so they had nothing to lose.
But the enterprise space is dominated by Microsoft, with their dominance only increasing. Exchange, Sharepoint, Office 365, Skype, Microsoft Teams, Azure DevOps, freaking Yammer, MS SQL, soon GitHub, their reach in the enterprise and their adoption, once you're familiar with the space, is actually quite scary.
I have to hand it to them, they became really good at marketing. Otherwise I can't explain this portrayal of them as being the underdog. Or the constant nagging messages I see about them having changed, due to them releasing VS Code or .NET Core.
No, they haven't changed much. The tooling they make for developers has always been top notch, and while .NET was proprietary, they standardized it and never attacked Mono. Office has been available on Macs since 1989. And they no longer target Linux with patent threats, of course; they target Android instead. Until they find some way to sell Android. They adopted Chromium, a masterful move, since now they can cut some development costs, win back some users, and sell them on Bing too. We'll probably see them windowfying Android too.
Note that I do enjoy several Microsoft products. But I'm always skeptical when hearing about their motivation. I don't understand what makes people cheer for these big companies, as if they are sports teams. Use their products for what they are and dump them as soon as you find something better.
I have to emphatically disagree there. Up until recently, Windows didn't even have a decent terminal app. As a developer, I tend to shy away from Windows precisely because the *nix alternatives offer a better development experience, from the CLI to package management and more.
In fact, Mac OS didn't have one either until OS X.
People should be much more concerned about EEE from Google, as we’ve been seeing more of all the time with Chrome.
Not only the US government, but almost every government around the globe is using it. And on macOS, that's still the case.
Most recently with the Live Share extension for VS Code. From skimming the licensing it is only to be used with the Visual Studio family of products. Which is an incredibly disappointing approach.
I think this is simply about ML and GPU compute for WSL. And I think Microsoft is genuine in that it sees business value in doing open source work, integrating well with Linux, etc. But at the end of the day the business interests are there and different parts of the company can be motivated quite differently.
Some skepticism remains warranted.
I don't presume to know if EEE is Microsoft's strategy here, but if it is, do you think a fleeting moment of amusement at Linux's comeuppance is worth the damage done to the larger open source ecosystem in its wake? Linux and FreeBSD are on the same team here.
Why for example are Jails not compatible with Linux? Is that not considered upstream because FreeBSD's "integrated"? How convenient.
I think what you see is happening because these "upstream" projects are actually primarily for Linux and by Linux devs. They just happen to work on BSD as well, but weren't you guys saying this whole time how the BSD approach of having everything integrated into one system is better and the whole GNU/Linux ecosystem of several upstream components is strictly worse?
So what exactly are you complaining about? Us trying to use Linux-specific features to make the Linux ecosystem the best, rather than target the lowest common denominator while you guys take full advantage of FreeBSD's specifics?
Seems less than fair to me.
> Why for example are Jails not compatible with Linux?
What? Jails originated in FreeBSD over 20 years ago. This question doesn't make sense.
The OP was referring to systemd affecting upstream by requiring systemd for things (i.e. GNOME). Are you saying there are pieces of software you have always used, but no longer can, because they directly rely on jails?
I don't even use FreeBSD, but this was a weird argument.
Just because GNOME didn't use systemd in the '90s, when systemd didn't even exist, doesn't mean it should never use it, especially if it makes sense for the GNOME devs to do so now.
And if yes, can you point to an example of *BSD doing something similar while pushing their platform forward?
I administer multiple systemd-based Linux distros, FreeBSD servers, and buildroot-based embedded systems, and I can tell you that systemd still gets in my way regularly, while the sh-based init systems tend to Just Work and are very simple to understand and maintain.
Of course, I know a big reason is that I'm very familiar with un*x system administration and the gotchas of shell scripting, while systemd is probably more approachable for somebody who doesn't want to learn arcane knowledge about chmod, file locking, setuid, and symbolic links. But I think that explains why there's still so much pushback against systemd all these years later: people who care about init systems know enough about them that systemd feels over-engineered and unnecessarily complex while not bringing a lot to the table.
Never once in the past years of running systemd have I thought "oh man, I sure am glad I'm using systemd and not an old SysV/BSD init system!". Not a single time. I did have multiple occurrences of systemd breaking stuff after an update though.
For me systemd based systems allow me to have declarative, portable unit files where init scripts don't. They allow me to reliably monitor and restart services, they shut things down properly instead of just force killing as many init scripts end up doing.
I instantly know how to manage most major distros now that systemd's common among all of them, have no hesitation of writing a proper service file even for minor tasks and I get a ton of functionality 'for free' too.
Init scripts were always a poor-quality mess, non-portable among systems, non-consistent, non-deterministic. If your experience differs there's still plenty of non-systemd choices out there. They're not as prevalent as systemd ones, but that's because the people who sit down and actually write the code we all use find the services systemd provides valuable.
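To make the contrast concrete, a minimal unit file in the declarative style described above might look like this (the service name and paths are hypothetical):

```ini
# /etc/systemd/system/myapp.service -- hypothetical example
[Unit]
Description=Example long-running service
After=network.target

[Service]
ExecStart=/usr/local/bin/myapp --serve
Restart=on-failure          ; systemd monitors the process and restarts it
KillMode=control-group      ; shutdown tears down the whole process tree

[Install]
WantedBy=multi-user.target
```

The same file, unchanged, works on any systemd-based distro, and `systemctl enable --now myapp` replaces what would otherwise be a hand-rolled start/stop/status script.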
And even if portability was the point, I'm not sure I see the big deal. Writing an RC script from scratch if the software you use doesn't provide one is generally trivial, and the vast majority of the time you wouldn't have to, as it ships with your OS's packages anyway. Sure, systemd might be "tidier" with its standard APIs and whatnot, but it's also a lot more complicated and opaque than a bunch of shell scripts running one after another. And if you're serious about sysadmin, you'll have to learn shell scripting one day or another anyway.
>Init scripts were always a poor-quality mess, non-portable among systems, non-consistent, non-deterministic.
They're non-portable, that much is true, but so is systemd; that's a weak argument. If everybody had adopted FreeBSD-style init it would be equally portable - it's a self-fulfilling prophecy. By that logic we should all just ditch un*x and start running Windows, since most people already use that anyway.
The rest is nonsense. It's poor quality, non-consistent, and non-deterministic if you write them that way. Sure, shell scripts being Turing-complete opens the door to a lot of nonsense if people go wild, and gives more latitude for very sloppy code, but it doesn't have to be that way.
>the people who sit down and actually write the code we all use find the services systemd provides valuable
Systemd has been pushed down everybody's throat for a while now; saying retroactively that people use it because they find it valuable is a bit of a stretch. I'm sure many of them use it because that's what's available. I wrote a bunch of systemd unit files myself, and I assure you it wasn't meant as an endorsement.
Besides, it's only one side of the equation. Maybe it's nicer for the people writing the unit files, but that doesn't mean it's a good thing for the people actually having to use them. I'm sure many software maintainers would prefer if everybody ran the same OS on the same hardware with the same use cases, but that's not how the real world works.
It's merely a statement of fact: systemd services accept the same set of commands across distros, which is rather unlike SysV.
> It's poor quality, non-consistent and non-deterministic if you write them that way.
That's a bullshit statement, because everything fits it. Of course everything is great if you make it great. And?
The point is that systemd's declarative nature makes it hard to screw up services and even badly written ones will get enough common functionality for free that they'd be usable.
> Systemd has been pushed down everybody's throat for a while now, saying retroactively that people use it because they find it valuable is a bit of a stretch.
Systemd got adopted because people generally found it valuable enough to adopt over what they had before.
> Maybe it's nicer for the people writing the unit files, doesn't mean that it's a good thing for people actually having to use them
Matter of opinion, but I happen to think that having a uniform set of commands working at work and at home is nicer for users too, over the patchwork of scripts that SysV was across the various distros.
Portable to what?
SysV init scripts simply weren't portable between distros, leading to tons of non-standard, incompatible fragmentation.
Users couldn't simply take their own scripts over to another distro and expect them to just work, given those differences.
With systemd, unit files just work across all systemd distros, given the standardized format.
You're ignoring what the parent said and talking past them.
It's not the 'sets' that are standardized; it's the set of commands applicable to a systemd service, on any systemd Linux distro, that is.
If I was approaching building an init system I'd make a better language for writing init scripts than bash, some kind of interpreter that processes mostly declarative init files, sets things up, and then exits. An incremental improvement that works with existing systems instead of putting a whole bunch of new (generally un-audited) code into PID 1, with all the security implications that implies.
Redhat may contribute the majority of the work but they're also very good at positioning themselves so that outsiders can't really contribute any work, or get any independently developed standards implemented.
Guess what systemd does?
> sets things up, and then exits
And that's the crux of the issue isn't it? Because on modern systems, things need setting up and tearing down all the time.
Well, not really; that's an artifact of systemd's over-engineered design. There's nothing stopping you from tearing something down from an init script, or doing more complicated dependency management using the CLI as your RPC mechanism (but Red Hat needed a reason to use their in-house RPC mechanism).
Honestly something like systemd could be pretty reasonable if it wasn't created with the express intent of "combating fragmentation". Not that there aren't a whole bunch of technical and architectural issues with it, but still.
Except most init scripts I've seen are rather brittle, only "work" if the PID dance is exactly as the author predicted, are not declarative, and are hard to debug.
I don't want to go back to init scripts, for all systemd's faults, the past was worse.
Using the CLI as my RPC mechanism etc. just sounds like I should spend a bunch of time doing work that systemd can do a better job of managing for me.
The problem is less the init system itself, but applications that depend on a specific init system  (gnome used to be a major source of contention in that regard).
Gnome does not depend on systemd, but rather logind.
KDE used to be the same, until someone started properly maintaining ConsoleKit2 again, at which point they were happy to support it.
ConsoleKit was dropped because it just wasn't being maintained, and had various limitations.
Maybe it's simply easier to maintain this way?
elogind exists, if you care. It exposes the DBus interfaces that logind supports for applications to call.
Thus, environments like Gnome can be supported on non-systemd systems if they emulate and/or expose and implement the required DBus interfaces.
See also https://lwn.net/Articles/586141
And as I already mentioned in my previous post... there are other alternatives around than just SysV init.
By not tightly bundling their init with other OS components, by not having GNOME desktop somehow depend on what init system you're using (as opposed to services running under that init system).
Also, you know how shell scripts have `#!/usr/bin/env bash` at the top of them? Well, the reason my hypothetical init system would be compatible with SysVinit is that instead of `#!/usr/bin/env bash` it would have `#!/usr/bin/env new_init_language` at the top. `new_init_language` could implement almost all the same features that systemd unit files do.
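As a sketch of that idea, purely hypothetical (neither `new_init_language` nor any of these keys exist anywhere), such a file might look like:

```
#!/usr/bin/env new_init_language
# declarative init file: the interpreter reads it, sets the service up,
# and exits -- no long-lived daemon sitting in PID 1
service "sshd" {
    exec    = "/usr/sbin/sshd -D"
    after   = ["network"]
    respawn = true
}
```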
BSD init is far better than Sysv. Adopting it would have been a step forward.
There are also other init systems that are better designed, like openrc or runit.
OpenRC is practically speaking a thin wrapper around SysVinit, not much of an upgrade if you ask me.
The other key, which everyone crying "Extend" seems so eager to ignore, is that they don't actually want anyone to use this API: https://lkml.org/lkml/2020/5/19/1139
"I'll explain below why we had to pipe DX12 all the way into the Linux guest, but this is not to introduce DX12 into the Linux world as competition. There is no intent for anyone in the Linux world to start coding for the DX12 API."
They want people to continue to use OpenGL/Vulkan/OpenCL/etc., and this is just their mechanism for getting those APIs GPU access when running under WSL: https://www.collabora.com/news-and-blog/news-and-events/intr...
"We have recently announced work on mapping layers that will bring hardware acceleration for OpenCL and OpenGL on top of DX12. We will be using these layers to provide hardware accelerated OpenGL and OpenCL to WSL through the Mesa library."
I took that to mean that the developers on Linux are the people who won’t switch to DX even if it was ported.
From the email introducing the set of patches:
> The projection is accomplished by exposing the WDDM (Windows Display Driver Model) interface as a set of IOCTL. This allows APIs and user mode driver written against the WDDM GPU abstraction on Windows to be ported to run within a Linux environment. This enables the port of the D3D12 and DirectML APIs as well as their associated user mode driver to [...]
The fact that they went through the effort of porting Mesa to get OpenGL and OpenCL support, and are working on Vulkan support.
Their porting Mesa to use a proprietary library that interacts with the WSL-only kernel interface isn't the same as behaving like a standard Linux GPU.
And part of the discussion on the mailing list is about how to integrate this with DRI/DRM and dma-buf, so it can be used by more Linux clients with less work (although still non-zero, like you'd have with a DRI scheme as well).
Are we reading the same blog post?
BUT I MUST AGREE WITH YOU that, over time, this is likely to fragment the Linux ecosystem and encourage people to develop software that works only in Windows "with the Linux-thingy installed" environments.
That's because sooner or later, people using Windows will publish important AI papers or release must-have software or do something else with code that runs only on Windows "with the Linux-thingy installed." The "Linux-thingy" becomes a dependency that gets installed in the background. It becomes a near-invisible component of Windows, abstracted away by layers of Microsoft APIs, code, standards, etc.
The "Linux-thingy" on Windows, in other words, becomes like an init system on Linux, which can eventually be replaced in whole or in part without most people caring or noticing. Most Linux users don't care if their Linux machine uses systemd, SysV init, or even the dead-ended Upstart. That's the "Extinguish" part.
>With the "benefit" of forcing the Linux kernel developers to pay the maintenance costs of course.
No. Nobody is forced to maintain this. If Microsoft stops maintaining it then Linux is free to kick this to the curb.
Proof that Microsoft really has embraced Open Source development.
people need to stop using windows, entirely. i recently built a gaming machine, pretty high spec, and i've resolved to never install windows on it. there are plenty of games i can't play, a few that i would like to. but i won't give money to developers that won't release linux versions of their games - even when they are utilising engines that have linux ports (PUBG being the most recent example i can think of, but any unreal/unity engine based game). there's no reason to use windows any more (for people like us, at least).
There's a reason Windows is still dominant in the Desktop space despite all its problems.
interesting. i put ubuntu 20.04 on this and it's been plain sailing. now usually i'd say i've got a high tolerance for bullshit when it comes to linux, but i'm not exaggerating when i say i did not have to touch the terminal to:
1. install it without any extra crap
2. switch from nouveau to the nvidia driver - version 440.64, a very recent driver (which isn't very fair to nouveau, i didn't even give it a try, really)
3. install steam and games and have them work flawlessly from the get-go. games i've played:
* counter-strike global offensive (a lot..)
* the binding of isaac (would be bad if this didn't run!)
* deus ex mankind divided (was _very_ surprised to see this available and working perfectly, to be honest)
* black mesa
a short list, but i've only had the machine for a week.
i know it's stupid to say "hey i didn't have to use the terminal", but really, ubuntu has come a long way. i couldn't have said that last year, i don't think.
> nVidia has shit Linux support so that's a no-go.
eh, i'm gonna respectfully disagree there. linux support from nvidia has been a bit patchy (and the nvidia driver on windows isn't exactly a dream at times, either), but i've been using the nvidia driver on linux on various installs on various architectures on various hardware for _years_ and i've never had a significant problem. AMD, on the other hand.. i don't miss fglrx.
> I also have a used Oculus Rift, which again rules out Linux.
well i also won't support facebook, so that's not an option for me :)
honestly, it's _really_ disheartening to see the replies to this comment and see how far the normalisation of deviance has come. the _only_ tool we have as end users to stop companies from being shit is to vote with our wallets. and yet the majority of responses are to just give up and accept that that's how it is.
i think we're doomed as a species :)
Flatpak can do it if you set up an entirely new installation on that disk, but Flatpak has its own issues. Snaps can't do it at all, and while AppImage is really great, relatively few projects distribute that way.
if you have a separate OS on the external drive, you could chroot in and install it.
If you just want that one application to be on the other disk, you could mount it to wherever on the filesystem the program would be installed.
However, you probably don't want to do that on an application-by-application basis. You could mount /usr/bin/ on the external drive, for example, or use LVM to share disk space between the disks.
Flatpak, Snap, AppImage, and other portable formats aren't really preferable. They lead to a software ecosystem that amounts to just downloading and running executable binaries off the internet, each with its own overhead.
Kinda illustrates my point about this being a foreign concept to Linux Desktop people. It's pretty simple: I want to put an application on an external disk and run it from there.
> if you have a separate OS on the external drive, you could chroot in and install it.
No thanks. I'd just like to have the application stored on an external disk, and execute it on the OS I have installed.
> If you just want that one application to be on the other disk, you could mount it to wherever on the filesystem the program would be installed.
Not really, because Linux likes to have an application spread its files all over the hierarchy, so in reality I need to use some form of union-mounting and/or symlinks. Of course, that is only sufficient if there are no library conflicts between what the application wants and what the system uses; otherwise I need to use LD_LIBRARY_PATH and other tricks. In some cases I'll need a launch script that calls a different ld.so.
That's a lot of hoops compared to how sane operating systems do it.
> LVM to share disk space between the disks
System breaks when disk is removed. No good.
> Flatpak, Snap, Appimage and other portable format aren't really preferable. they lead to a software ecosystem that amounts to just downloading and running executable binaries off the internet, each with their own overhead.
Which is pretty much what I want, because the alternative is dealing with the bullshit I mentioned above whenever you step outside the package manager's sandbox.
This is what AppImage gets that the rest of Linux doesn't seem to: if an application is a single file/folder, you don't need anything more complicated than a file manager to manage it.
P.S.: after a quick glance at some documentation, I don't see any mechanism that would allow me to install nix packages to arbitrary locations anyway.
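For what it's worth, the symlink approach criticized above can be sketched like this (all paths are hypothetical, and a temp directory stands in for the external disk):

```shell
#!/bin/sh
# Simulate an application living on an "external disk" (here: a temp dir)
appdir=$(mktemp -d)/myapp
mkdir -p "$appdir/bin"
printf '#!/bin/sh\necho hello from the external disk\n' > "$appdir/bin/myapp"
chmod +x "$appdir/bin/myapp"

# Symlink the binary into a directory on $PATH so the system can find it
mkdir -p "$HOME/.local/bin"
ln -sf "$appdir/bin/myapp" "$HOME/.local/bin/myapp"

"$HOME/.local/bin/myapp"   # prints: hello from the external disk
```

This only covers a self-contained binary; as soon as the application expects files under /usr/share, /etc, and so on, the hoop-jumping described above begins.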
I had a similar problem. I joined two physical disks into one logical volume using LVM and forgot about the problem. :-/
Well, that's still better, in my opinion, than package managers and all their various restrictions. But ideally I want something like AppImage, where an application is a singular entity that can just be moved at will and run from wherever. Most DOS programs and many Windows programs (if you extract them from the installer) will still work like that.
> I had a similar problem. I joined two physical disks into one logical volume using LVM and forgot about the problem. :-/
If you unplug the external disk, your system stops working. This is not what I want, as I mentioned already in the post you're replying to.
Classic Mac OS, RISC OS, and NeXT all had programs that worked like this too. Linux has yet to achieve the flexibility in application management afforded by the OSes of the 1980s.
LVM can make two disks appear like they're one, but of course if you remove any of them they both fail. Doesn't really solve the underlying "I want to install apps outside of the root partition" problem though.
Generally speaking they have the same problems as the mainstream distros, but with the additional caveat of even less chance of googling solutions and worse or no support at all from non-oss software.
> LVM can make two disks appear like they're one, but of course if you remove any of them they both fail. Doesn't really solve the underlying "I want to install apps outside of the root partition" problem though.
Precisely. The very concept is just so foreign to most of the Linux community that even talking about it gets you strange looks. It is actually possible to do with some significant AUFS-fu, but it's a hell of a hoop to jump through for functionality that is pretty natural in every other desktop OS that ever existed.
What you describe is like installing in $HOME. It's not unusual - python virtualenv, ruby rbenv, node_modules. This comes with a trade-off: either the system knows where to search, or one has to define it per project. Under the FHS, the entire Linux tree is one big project. In Windows... configuration is a pain.
│ ├── exe
│ │ └── irb
│ └── lib
│ ├── irb
│ └── irb.rb
│ └── rackup
│ ├── irb
│ └── rackup
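The per-user prefix idea mentioned above can be sketched in a few lines of shell. This is a hypothetical layout (the app name and paths are invented): an application installed entirely under one folder, found only when you extend PATH, and movable afterwards because nothing outside the folder refers to it.

```shell
# A hypothetical per-app prefix, in the spirit of AppImage-style
# "the app is one folder you can move at will". Paths are made up.
PREFIX="$PWD/apps/hello-1.0"
mkdir -p "$PREFIX/bin"

# Stand-in for an installed program.
cat > "$PREFIX/bin/hello" <<'EOF'
#!/bin/sh
echo "hello from a relocatable prefix"
EOF
chmod +x "$PREFIX/bin/hello"

# The trade-off the parent mentions: the system doesn't search here
# unless you tell it to, per user or per project.
PATH="$PREFIX/bin:$PATH"
hello

# The whole install can be moved and still works, because nothing
# outside the folder points into it.
mv "$PWD/apps/hello-1.0" "$PWD/apps/hello-relocated"
"$PWD/apps/hello-relocated/bin/hello"
```

This is essentially what virtualenv, rbenv, and node_modules each reinvent for their own ecosystem.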
I still write code in a Linux environment because that's where the ecosystem is, but for my daily driver I've given up on desktop Linux. And I've found that WSL gets me a Linux based development environment side by side with Windows with less pain than dual booting. I've found that when it comes to using Linux smoothly, it's a lot easier to shoehorn myself into the happy paths than to bend the system to my way of doing things.
And regarding using Windows: company's behavior is important, but product quality, especially when it's a tool you use every day for work, is more important. Linux is ok for certain types of developers, it's still awful for normal users, abysmal for office work and obviously not a choice if you develop Windows software.
What are the main costs of releasing a Linux version of a game when using Unity or Unreal? My limited experience with Unity has been "check the Linux box, click the build button". It even cross-compiles no problem. People I know tell me Unreal is similar.
It's not like they have to do a full run of play-testing for Linux. Just check if it starts, play for 10 minutes and release that. We know we're a minority so we're willing to put up with bugs. Even if there's only 100 people who'd buy it only if it ran on Linux, that should pretty much cover the couple hours of dev time.
Supporting Linux is hard. So is supporting Windows, but if Windows makes up 98% of your customer base, it's justified. 2%? Ehhhhh.
(P.S. I love Linux. See my profile. This is the kind of stuff I deal with every single day.)
You claim I am being "self-centered" but the fact is if you don't provide something that the distros can work with and instead try to route around everybody and ship an untestable blob then that's your fault. The reason everyone tells you to just support Ubuntu LTS is because they are the only ones who have the resources to support all these random binary blobs for X amount of years, and that comes with all the pains associated with it, as you are well aware of.
Alternative view: distros don't provide a stable platform that developers can target.
Expecting every distro to have the exact same release & support cycle is nonsensical.
To contrast, if you take a Win32 game from 2003 that targets DirectX 9, it probably runs fine on the latest Windows releases. (You might have to enable some compatibility mode.)
"BUT BUT BUT if it's packaged with the distro it's no problem." Remember we're talking about proprietary applications (mostly games) here. Having these packaged with distros for years after the devs moved on to the next project is just a pipe dream.
It doesn't mean that there's no such way, but it does mean that attempting to find it is very risky. Making games is already a very high-risk, high-reward industry, so adding this amount of risk to the equation is advice that's insane for all but the most successful companies.
One of the ways these companies can reduce risk is actually by using other proven open source components. If they won't even make an effort to try to expand this, then there will never be a well-understood way.
No, it's not - this is an absurd statement. Cloud services get most of their income from big and medium companies, billing tens of thousands of dollars in privately negotiated agreements. Digital Ocean and AWS don't live on single developers paying them $5. All their real money is in B2B.
Multiplayer games either get a flat subscription from all their playerbase, or rely on micropayments. Either way, they spend much less on account management, where they interact with customers as individuals, and much more on analytics, where they measure and work with their customers in aggregate. These are completely different business models that completely shape the companies operating on them.
> Find the pain points your business customers/partners are having and then charge for solutions.
Games are not solving any problems - they are entertainment products. I've seen people who have tried to reason about games in the terms you're using, and it always yielded hilarious (but also sad) results.
Which might be one of the reasons Linux Desktop is so awful to develop for.
You wonder why people ignore your platform? Saying it isn't actually a single platform and instead is dozens of different platforms just means each of those platforms is even less relevant. Saying you're not just one platform with 10% market share but actually 20 with 0.5% each doesn't make the case for shipping on Linux Desktop any better.
And many proprietary pieces of software license components from other proprietary pieces of software, so that even if they did want to open their code they'd have to strip pieces out of it anyway which doesn't really help the cause of distros integrating it. And even if it did, then the developers are reliant on the maintainers for their relationship to their customers. Have an issue with the product? Oh, it turns out that's because of this patch made by the maintainer of the package for that distro, who now has to be contacted and convinced to fix it, which they may decide not to for arbitrary reasons. Even open source developers have problems with this!
If you need to strip out some parts and there is enough will in the community to replace those pieces that you stripped out, then it will happen. If you have a product that is anywhere near being popular among the FOSS developer communities then I don't think it makes sense for you to claim that this will doesn't exist, or that distro maintainers will lose interest.
For a simple game that uses the entire UE4 stack, you might be able to get away with that if none of your code is Windows-specific and it works exactly the same on the Linux distribution you are targeting.
Once you start using your own middleware and libraries from outside of the default engine, you need to make sure every single one works across all the platforms you're targeting. Many won't have a Linux compatible version, and those that do may only work against specific distributions and hardware. Even then: Have they changed the window manager? What else have they customised?
There have been many issues with anti-cheat solutions over the years on Linux.
> It's not like they have to do a full run of play-testing for Linux. Just check if it starts, play for 10 minutes and release that.
For a simple game, you may get away with this. Anything larger will require significantly more testing. I've been a producer on large games, and the QA process starts very early on. You don't 'finish' a game and then QA starts - It's an ongoing process that consists of significantly more than just launching a game.
>We know we're a minority so we're willing to put up with bugs
Some people don't see it that way though. People who have paid for a product have the right for it to work properly. Support ticket rates from Linux users are often higher, so the time investment against payback can be problematic.
GPU driver support on Linux can also still be problematic. From feature differences against Windows, to crashes. These all take additional development time.
Linux developers are often more difficult to acquire in the game dev world. QA even more so.
Game dev is really hard. Some of it is hidden by engines like UE4 on the surface, but as soon as you start digging down into serious development, it's difficult.
If that were just the money a single developer got for 2-3 months of work, it could still be fine for some regions (not everyone on HN is from the US). But with a studio, you also have to spend money on marketing, payroll taxes, rent, and development of future projects, which most likely sell even worse.
So, no, I really don't see how porting to Linux would pay. Most of the developers I talked to who did it admitted that supporting OSX and Linux turned out to be a giant waste.
Don't want to get into an OS war here but merely stating your opinion as fact without supportive evidence is not in the spirit of HackerNews.
> it's still awful for normal users, abysmal for office work
For office work a lot of workload has moved to the web browser. Creating documents and working on spreadsheets for instance can now all be done online with Microsoft's Office offering and other competing products like Google Docs.
And why is it awful for normal users? ChromeOS for example is just linux, and that whole platform is targeted towards 'normal users'. I use linux on all of my personal devices and I use these devices to do very 'normal user' activities like watching YT, social media, blogging, etc. There are many communities that consist of 'normal users' like /r/Thinkpads or /r/LinuxOnThinkpad that are currently enjoying linux.
Precisely why do you think it's awful for normal users and office work?
These things are pretty much self evident.
> Linux is ok for certain types of developers
Web developers and HPC. Game developers? Haha, no. Embedded? Too many proprietary toolchains that don't run on Linux. Productivity and business applications? See below section on office work. ML? nVidia [apparently I'm incorrect on this one].
> it's still awful for normal users
Alright, this is a bit subjective, but it is supported by the stats that show Linux Desktop still in a distant and well deserved third place. You list a few tasks that you seem to believe are "normal user" tasks; I submit to you that you drastically underestimate the things a normal desktop computer user does with their computer, since most of the tasks you listed are now things that people do on phones and tablets.
> abysmal for office work
In most offices, Microsoft Office is an indispensable productivity tool (which many HNers, having little experience with real office work, will underestimate severely) and it is only supported on Windows (No, the web version is not the same).
It's funny/surprising that you'd say that, given that Linux is the single most dominant OS in the embedded device space. What toolchains are you referring to? Here are some well known ones.
> ML? nVidia [apparently I'm incorrect on this one].
Not only are you wrong about CUDA, but Google's custom-built chip designed for ML runs solely on Linux: https://en.wikipedia.org/wiki/Tensor_processing_unit
ML almost exclusively runs on Linux.
> and it is only supported on Windows
Office also runs on macOS. You can also run Office in Linux using WINE.
> Alright, this is a bit subjective, but it is supported by the stats that show Linux Desktop still in a distant and well deserved third place. You list a few tasks that you seem to believe are "normal user" tasks; I submit to you that you drastically underestimate the things a normal desktop computer user does with their computer, since most of the tasks you listed are now things that people do on phones and tablets.
It's really easy to say anything without providing any supportive evidence to substantiate your claims. Pray tell what tasks you are referring to that "normal users" partake that supposedly can't be done in desktop linux.
No, and no.
The parent was referring to embedded development toolchains - compilers, IDEs, endless small utilities, debuggers, analyzers, etc. etc. Commercial tooling is almost all Windows-exclusive, even if it's using a smattering of open-source bits underneath.
Also no. The "embedded device space" is v a s t. New projects for phones and similar hardware profiles might choose Android or another flavor of embedded Linux, for sure. Even within sexytech consumer products you're as likely targeting something like VxWorks or QNX as not. The physical world, the domain of embedded devices, is not smartphones and SaaS. Unless you're talking about a very specific product category, it's laughable to call Linux dominant here.
I know what the parent was referring to.
> Commercial tooling is almost all Windows-exclusive, even if it's using a smattering of open-source bits underneath.
No it's not. Literally both of the embedded OSes you mentioned (VxWorks/QNX) support Linux as a first-class host.
What Windows exclusive tooling are you talking about?
And you are underestimating the prevalence of Linux as the OS for embedded devices.
A minor nitpick, but cross-platform development is usually painful work regardless of which platform you use.
In the process, they are rapidly regaining developer trust by building productive relations with the same developers they were alienating under Ballmer.
It's smart. They own Github now. Most stuff that happens there is not windows centric. But increasingly the MS developer ecosystem is being untangled from windows in any case. That's necessary to future proof it.
While the windows kernel is nice and the driver ecosystem around it serves MS well, it's actually been a problem for them as well. They've failed in the phone market (repeatedly) because linux was a better fit for OEMs. Also they've had Google and Apple compete with them effectively with the ipad and chrome os in a market where MS was peddling crippled laptops. These are all examples where windows legacy was part of the problem for MS. They weren't able to compete there. Their crippled laptops were too expensive and uncrippling them would kill their high end market. So people bought ipads and chrome os laptops instead.
Increasingly, desktop software that is not web based is getting more and more of a niche. Even Office at this point runs well in a browser, and MS actively supports native applications on the platforms of all their competitors (Android, iOS, OSX, etc.). At this point that's not optional from a revenue point of view. A Windows-only Office would be a problem at this stage. They support it and they actually do a decent job too. This too is something that happened very quickly under Nadella. A lot of the Azure revenue is in fact Office revenue.
IMHO this move to support GPU virtualization and ultimately running linux desktop apps, gets them access to a lot of niche OSS applications for Linux, the entirety of the machine learning ecosystem, and the community of professionals using that. Not all of them will switch to a windows laptop of course. But some will and they tend to be the type that spends money on their tools.
Additionally, they are doing a clever play with APIs, Github, cloud-native stuff in Azure, development tooling, etc., where they don't block non-MS things but merely make the choice to buy into their premium SaaS subscriptions more attractive. All of this stuff is usable without doing that, but if you are using VS Code already, have your code on Github, and are doing some AI stuff in the cloud, it's a small step to buy into a well integrated ecosystem provided by MS where you are using a Windows laptop with their OSS tools, while deploying to Azure, and maybe doing your office stuff using Office 365. The value is no longer in selling the OS but in up-selling SaaS and making sure the choice to buy into that is logical, easy, and natural no matter what you use.
Github codespaces is a great example of that. I bet it will be really easy to setup and come with some nice SAAS subscription. It's all OSS and you are welcome to run it on AWS or your own cloud. But I bet it will be easier to run in Azure. MS tools, MS cloud, MS as the easiest place to run linux developer tools (!!!), etc. They won't force anyone to switch. They don't care about individual developers but they do care about what their bosses sign up for in terms of SAAS services. That's where the money is.
If that is what's happening, then what's the endgame? Switching WSL around so that "Windows host + Linux guest" becomes "Linux host + Windows guest", i.e. Windows becomes a Linux distribution running native Windows apps in a VM with seamless integration? I'm somewhat intrigued by the possibility, but I don't see how it could work given the ubiquity of vendor-supplied device drivers targeting Windows kernel API/ABIs.
I'd expect the importance of the windows kernel for revenue growth to be increasingly less important over time. Of course they won't drop it outright; at least not right away and it's likely to stay relevant for e.g. pcs/laptops and gaming. As for games and vr content, a lot of game studios already use cross platform sdks and Linux support for games with and without emulation is pretty decent these days. Also, Android and IOS are big targets for games.
Most hardware vendors don't target windows exclusively. Some do of course but a lot of hardware works fine on other platforms even if vendors don't actively support that. Anything intended for data centers runs linux primarily. Most laptop vendors have a few linux friendly laptops at least to not lose out on the pro developer market that tends to actually buy their more expensive laptops. Likewise, most graphics card vendors want to support e.g. machine learning and that requires linux driver support.
But I get what you're saying. My observation is that MS is very friendly lately with Ubuntu. I don't think Mark Shuttleworth is interested in selling it outright, but an MS distribution might be a logical next step given their increased dependency on Linux on the desktop and in the cloud.
Disclosure: I work at Microsoft (on the Windows accessibility team). At work, I use Outlook. Outside of work, I use Mozilla Thunderbird, but even there I send HTML emails sometimes.
By stating "HTML email has features that most people actually want. The world has moved on...", you seem to be implying that the primary method of communication of Linux kernel developers is irrelevant and their use-case for email is not important. In fact, the more I think about this, this is a classic example of Embrace, Extend, Extinguish.
This is a step towards moving their API over to Linux so they can dump Windows as an OS and provide it as a docker container service for enterprise.
Or, much worse, M$ now trying to kill Linux as desktop OS, same way as Elop killed Symbian as smartphone OS.
The CUDA support thing from other comments below seems the only sane use case so far, and in that case I don't really see the EEE either, but just good old "make shit work".
> but who the fsck would target their game at this?
It doesn't have to be games. Maybe MS releases an ML visualization library for Linux which requires D3D. It's not so far fetched.
Given the cumulative pain developers have had to deal with supporting IE6 over the years, I think it's warranted to be wary of this kind of thing.
Dollars to donuts this is why Microsoft is implementing this. GPU acceleration is becoming a critical feature for many users (but especially developers) and this will continue. If WSL is to be a serious competitor, this is necessary and I'm glad to see it showing up. This is true of cloud compute, too, and Microsoft is betting big on cloud as its future growth area.
> Only the rendering/compute aspect of the GPU are projected to the virtual machine, no display functionality is exposed.
The Linux gaming folks will be pretty sad about this one. Anyway, this isn't really a Linux port of DirectX. This is GPU compute via DirectX APIs.
So now, I'm just waiting on monitor mode/AF_PACKET for WSL...
> There is a single usecase for this: WSL2 developer who wants to run
> machine learning on his GPU. The developer is working on his laptop,
> which is running Windows and that laptop has a single GPU that Windows
Can't say I can get behind MS trying to shift maintenance for a Windows only "feature" onto the Linux devs here.
Interesting take on the situation. This is effectively a driver they need to get into the kernel (just one that targets a paravirtualization host and not “real” hardware), and Linus has been adamant that the correct way to write a driver in Linux is to upstream it into the kernel.
The perspective that upstreaming a driver into the Linux kernel is a burden for Linux kernel developers is one I haven’t heard before, and seems to clash with Linus’s typical stance. Is this something that has some prior examples? Genuinely curious.
The only time I faced the need for a Linux box was when trying a demo project from OpenAI, which did not use the features it required Linux for on a single machine anyway.
Doing something custom on top of the GPU is also not much different on Windows, than Linux. CUDA is basically the same. OpenCL and Vulkan are available too.
I'd like to hear the perspective of a person who actually does ML specifically on Linux for some reason.
I don't have to spend time explaining or justifying or isolating my setup.
Conversely though, my work is itself off the beaten path enough that I'm likely to run into weird bugs. If I was pushing images through a CNN, that'd be well-trodden enough on every platform that I'd be a lot less fussed about which particular platform I use.
Also, Windows not having a built-in C compiler makes you dependent on the horribly convoluted Visual Studio stack, which has a lot of dependencies and is required by some Python ML libraries. Docker makes it a lot better to run, and I almost always deploy in a Docker container because the ML modules I deliver are often interacted with as a black box with a REST API on top.
You might be right about the C compiler. But something itched when you mentioned Docker. Could getting the Windows SDK installed be harder than installing Docker?
That being said, I had to avoid installing VS 2019 for quite a while because the Node.js native module build chain couldn't work with it. There are complexities.
Even popular libraries like zeromq don't support named pipes on Windows because of how complicated they are and how different they are from everywhere else.
Just determining what visual studio version is installed seems to trip up projects all the time.
My main reason: It is the most convenient way to have Unix tools (grep/sort/cut/sed/less/...) and bash available. Cygwin always was a pain, MinGW / GitBash felt much better, but ultimately WSL just feels best.
These tools are incredibly valuable to my workflow. Sure, stuff like pandas can be nice for small datasets, and some data sits in some DB/Kafka/distributed system. But there have been countless cases where unix tools allowed me to take xxGB zip files of text and do basic examination or even build baseline models within a few hours.
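The kind of quick examination meant here can be sketched in one pipeline. This is illustrative only; the file name and its contents are invented stand-ins for a large log:

```shell
# Invented sample data standing in for a multi-GB log file.
cat > access.log <<'EOF'
alice GET /index
bob GET /about
alice POST /login
carol GET /index
alice GET /about
EOF

# Top requesters: the sort of baseline you'd otherwise spin up
# pandas for, done with cut/sort/uniq in one line.
cut -d' ' -f1 access.log | sort | uniq -c | sort -rn | head -3
```

The same pipeline works unchanged whether the input is five lines or fifty gigabytes, which is the point being made above.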
Sure, there always are alternatives to use these tools and there are many equivalents. But I would always prefer WSL + conda for Linux to a typical "Windows Conda" installation with that weird GUI and the need to install so many different applications to even just look into the first or last few lines of a huge textfile.
EDIT: That said, of course I can/could always just run a jupyter notebook under Windows using Windows CUDA + GPU and share files with a WSL bash where I do my modifications. But again, everything within the same system just feels better (ipython shell magic, no worries about whether paths to the same file are really identical, etc.) and while this is by no means a game-changer, it is just nicer that way.
Not to mention that there are quite a few development environments for more obscure platforms that still only exist for Windows.
Overall, since most development time is spent in an IDE, the OS is really of little relevance to software development. Sure, some people insist on using command line tools, and that is unlikely to be pleasant on Windows, but a lot of other developers don't, and we couldn't care less whether we're running our Emacs on Linux or Windows or Genera or whatever.
I use a Windows laptop at the company I currently work for, because everything is locked down and I wouldn't be able to get my own laptop connected to the network. (Or so I thought; a co-worker managed to use the Windows laptop as a bridge to his own Macbook.)
Now you're right that as long as I stay in the IDE, it's not so bad. But every once in a while I need to do something outside the IDE, and I immediately get slapped in the face by how stupid some things are. And because it's an enterprise environment, some things are even worse than usual; opening a folder, or saving something, can be unreasonably slow because either it's a network drive or it needs to be checked for viruses and malware while I'm trying to use it. Or for some other reason. I don't know, I just experience the extreme slowness.
Also, on top of the old terrible DOS shell, there's now also PowerShell, which is supposed to be better. It apparently has some powerful features I don't really grasp, but it's still not remotely as good as bash. And sometimes the command line really is unavoidable.
But the real pain is at home. When I activate Windows 10 on a new machine, I need to create a Microsoft account. I don't want one, but it takes serious determination to avoid it, because behind every message is another trick to sucker you into an MS account. When you finally do manage to create a local account, you're immediately expected to compromise your security with 3 insecurity questions, and no way to avoid it as far as I can tell. Previous versions of Windows did not have this stupidity.
Also, somehow Windows keeps losing my mic, speakers or camera. Once I've found the right troubleshooter, it immediately figures out how to fix it, which is great, but it also keeps losing them again. And finding the right troubleshooter takes a couple of steps and a bit of searching. I feel like I need to pin several relevant troubleshooters to the taskbar.
And then there's the total lack of access control. To install anything, you need to be admin. I gave my son a restricted account, but he can't do anything with it. I'd like to be able to create an account that can install games, but can't compromise the system. No such option in Windows. If you can do anything, you can do everything. Unless you're in an enterprise environment, in which case you often still can't do anything. So I guess more detailed access control does exist, but only for enterprise users or something.
> And then there's the total lack of access control. To install anything, you need to be admin. I gave my son a restricted account, but he can't do anything with it. I'd like to be able to create an account that can install games, but can't compromise the system. No such option in Windows. If you can do anything, you can do everything. Unless you're in an enterprise environment, in which case you often still can't do anything. So I guess more detailed access control does exist, but only for enterprise users or something.
Here I never understand this point. You can't do anything on a Linux system if you don't have sudo access - it's not like apt or yum have any special magic to allow non-admin users to install stuff. And if you can install software on a system, you can already do anything else. Especially Games, which install drivers to perform DRM and anti-cheat bull.
Now, if you want to look into it and waste quite a bit of time, Windows does allow you to configure access control at a very fine-grained level for access to non-system folders. But as long as the installers want to install things in system folders, there really isn't any solution.
However, if I'm using IntelliJ or Emacs and Firefox, I don't really need to care what OS is running underneath too much.
Edit: of course, Linux and Mac are available for devs that prefer them. It's still much easier for IT to manage 7000 Windows desktops and a couple hundred Linux ones than it would be to manage 7000 Linux desktops.
- Could you push a patch to Linux systems and have it install at the user's convenience (with some end date)?
- Can you do that in waves without manually configuring things?
- Can you remotely wipe a system if required?
- Is there any popular anti-virus software for Linux, to protect company files in home folders from user mistakes?
- Can you help users install some software without giving them full access, but also without requiring IT intervention for every installation?
Why? I have never seen Windows being managed entirely hands-off whereas Linux just works.
Where does the complexity on Linux come from that makes managing them more difficult?
Then there's the question of pushing an update to all managed computers. Maybe it's not a package update, but you want to change some SELinux policy for all users, or update some DNS server or the default search domain and so on.
Never mind the question of how you can instruct one of those Linux computers to delete all data it holds whenever it next connects to the internet (to handle the case of a stolen company laptop).
There are so many things that you need in an enterprise setting that have common (though probably quite expensive) tools available for Windows. Maybe some of these exist for Linux as well (I would expect RedHat to have some), but I'm not sure. Linux admin is usually reserved for servers much more than desktop computers.
Interestingly, apples have to be compared to oranges. On Linux, it is easy to identify the programs that are using a library, so it is easy to restart just the services that are patched. In general, things can be scripted, so there are few ready-made tools available. But this requires somebody who understands the system. From a business perspective, this might be more expensive, or not, if the tools are expensive.
If you have a spare GPU, a VM with PCI passthrough does an even better job, except for some anti-cheat software that artificially discriminates against this setup.
In theory it ought to be possible to switch a single GPU to/from a VM without a reboot. In practice I have no idea how huge a refactoring to the Linux graphics stack that'd require.
This has been true for 16 bit games since long before Windows 10. Ages ago one of my favourite games stopped working on Windows, but Wine had no problems with it. So my impression has always been that Wine is excellent for really old games, but slightly more recent games, it could already be very hit and miss.
> "If you have a spare GPU, a VM with PCI passthrough does an even better job, except for some anti-cheat software that artificially discriminates against this setup."
Doesn't every CPU these days have onboard graphics? My Thinkpad X1E should support hybrid graphics, so it'd be nice if I could give the GPU to a VM and have the desktop use the CPU graphics.
But if a Windows VM does a better job, that means Wine doesn't yet do as good a job as Windows. Though it's certainly true that Steam support for Linux is growing. But I don't think every Steam game already works on Linux.
Official Proton "support" is limited, because it requires certification by Valve and/or the game developers that the game runs well (the equivalent of a "native" rating on winedb/protondb), but if you're willing to go down to "gold" levels of support it still runs 70-80% of all Steam windows games.
Don't bother attempting GPU passthrough on any laptop with an AMD CPU (eg Ryzen 2700U) and Radeon GPU (eg RX 560X).
It turns out the GPU passthrough needs a dump of the Radeon BIOS provided as a file, but no-one can dump the BIOS of discrete AMD laptop GPUs. :( :( :(
Note the complete lack of RX 560X BIOS's here:
On laptops, pretty much.
On Intel desktops, yes, aside from Xeons.
On AMD desktops, only some lower end Ryzens have "G" models.
I've got this working today. I do it through swapping the nvidia driver for the vfio-pci driver (and back again if required). The slight annoyance is that you may need to restart X11 (for me this is not an issue).
I wrote about this some years ago: https://me.m01.eu/blog/2016/05/pci-passthrough-vm-monitor-se...
Check out https://old.reddit.com/r/VFIO/
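For the curious, the sysfs mechanics behind such a driver swap are simple enough to sketch. This is a rough sh sketch, not a polished tool: the PCI address is illustrative (find yours with `lspci -nn`), the writes need root, and the sysfs root is a variable only so the steps can be dry-run against a fake tree.

```shell
#!/bin/sh
# Illustrative device address; override DEV with your GPU's address.
DEV="${DEV:-0000:01:00.0}"
SYSFS="${SYSFS:-/sys/bus/pci}"

rebind_to_vfio() {
    # Detach the device from its current driver (nvidia)...
    echo "$DEV" > "$SYSFS/devices/$DEV/driver/unbind"
    # ...steer it to vfio-pci, then bind it.
    echo vfio-pci > "$SYSFS/devices/$DEV/driver_override"
    echo "$DEV" > "$SYSFS/drivers/vfio-pci/bind"
}

rebind_to_nvidia() {
    # Reverse the steps to hand the GPU back to the nvidia driver.
    echo "$DEV" > "$SYSFS/drivers/vfio-pci/unbind"
    echo nvidia > "$SYSFS/devices/$DEV/driver_override"
    echo "$DEV" > "$SYSFS/drivers/nvidia/bind"
}
```

As the parent says, whatever was using the GPU on the host (X11, for instance) has to let go of it first.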
I guess this will be the standard until we can have nicer graphics drivers for Linux.
That's not the usual situation for people that are trying to use this scheme.
Like the previous response: what does this buy me compared to developing and running natively on Windows, given that there are native compilers that support CUDA et al. on Windows?
(My gaming desktop, of course, runs Windows.)
On laptops, however, I continue to find myself jumping through idiotic hoops to use Linux. The driver support is always just-barely-good-enough. Maybe it's audio, maybe it's graphics, maybe it's power management. Maybe it's something involving networking-after-power-management or some crap involving "don't unplug your headphones while the lid is down". On my current work laptop, a beautiful Thinkpad Carbon X1, I can't get the power management to work properly, so I just have to accept that there's no hibernate. I'm constantly forgetting to shut down, putting it in my backpack, and then pulling it out drained. What a pain in the ass. Could someone fix this problem? Probably. Can I? Not in the dozens of hours I've put into it. I hate doing IT, I hate it I hate it I hate it.
However much Macs make me want to vomit in my mouth, I can see the appeal. The drivers work at least 90% as well as Windows drivers, and the UX is at least half as good as a lightly-tuned Linux machine. "Jack of all trades, master of none, is oftentimes better than master of one"
Anyway, before this lockdown ends, I'm upgrading my laptop distro to this new distro I've heard of out of Redmond, I think it's called "Windows".
How do you configure compiler to use libraries downloaded from vcpkg? CMake? Something else?
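Not the parent, but in case a sketch helps: vcpkg's documented CMake integration goes through its toolchain file, after which ordinary find_package() calls resolve to vcpkg-installed libraries. A minimal sketch, where fmt is just an example package:

```cmake
# Configure with the vcpkg toolchain file, e.g.:
#   cmake -B build -S . \
#     -DCMAKE_TOOLCHAIN_FILE=<vcpkg-root>/scripts/buildsystems/vcpkg.cmake
cmake_minimum_required(VERSION 3.15)
project(demo CXX)

# Assumes `vcpkg install fmt` has been run beforehand.
find_package(fmt CONFIG REQUIRED)

add_executable(demo main.cpp)
target_link_libraries(demo PRIVATE fmt::fmt)
```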
aka wine / proton
Conversely, if you run Windows, it's rare that you need to work hard to run a game.
Not sure why, but I suspect that Valve has a lot to do with it.
But there is another way there - running Windows in a VM with GPU passthrough - works beautifully in my case.
I do not think Windows is purely to blame here, though. It's only quite recently that NVIDIA started fixing their documentation and instructions for getting all the right CUDA/cuDNN pieces running properly on a system.
Imagine if you could run AI/ML apps and tools that are coded to take advantage of DirectML on Windows and/or atop DirectML via WSL.
Now you can run the tools you want and need in whichever environment you like ... on any (capable) GPU you like: You don't have to buy a particular vendor's GPU to run your code.
If you're old like me and remember the dark ol' days when games shipped with specific drivers for (early) GPU cards/chips, but failed to run at all if you didn't have one of the supported cards, you'll understand why this is a big deal.
Maybe I'm not that old, but I'm old enough to remember the days when microsoft was intentionally degrading opengl performance on windows ;).
Those days sucked. Bigtime. If we can avoid making the same mistakes for machine learning, then we should.
Which is still nonsense, since this only affected the OGL driver shipped by Microsoft. In contrast to truly bad actors like Apple, OEMs were free to ship their own OGL drivers from day 1.
So sorry mate, but I have to call BS on that one.
Isn't the linked post saying you have to be running on Windows, though? It seems like it would make way more sense to either port DirectX to Linux, or ditch DirectX and put those resources into supporting Vulkan.
Don't you think the effort to achieve this would be absolutely massive? I don't know what kind of resources are being thrown at this project, but I'd estimate the minimum to be 3 dev teams for 2 years just to get a few variations of ResNet to work "as is". And that's just for regular models that don't require quantization or (auto-)mixed precision for training.
o-O nvidia-docker does not even support Windows.
I think the only thing you need to know is which CUDA version your cuDNN requires, and it was quite clearly stated on the download page. Also the same on Linux. For nvidia-docker you used to need a specific driver version.
Frankly this does just seem like MS wanting to reduce their maintenance burden on what they expect will be a very important part of their WSL offering on Windows. There's nothing inherently wrong with that desire, but the people on the other side need to weigh their maintenance burden with what benefit this will have to the Linux community as a whole, which at first blush seems minimal. Especially considering that the userland pieces that talk to this driver aren't open-source.
There's also the question of whether or not you believe WSL as a whole is good or bad for Linux. If there are people who would run a Linux desktop for development who then decide not to because WSL exists, perhaps that's a bad outcome. If you have people writing more DirectX GPGPU code who would otherwise write to a standard interface like OpenCL, perhaps that's a bad outcome (to be fair, there's also a lot of CUDA out there, which is similarly problematic). Is this the start of MS going back to their "Embrace, Extend, Extinguish" playbook, or is that just a paranoid fear? They've definitely been embracing Linux, and enabling people to write DX12 GPGPU code that targets a Linux environment but will only run under WSL on a Windows install does feel like "extend".
I'm not sure where I personally stand on this issue as I haven't done my research, but I think they're interesting questions to ask.
If this gets rejected, of course it doesn't stop MS from doing any of these things, but it does make it harder for them to maintain their extensions to Linux.
Airlie is even concerned that just by looking at the code, he or other DRI developers could run into IP derived-works trouble when designing future Linux graphics interfaces.
I think if that really matters you should be against anything in the kernel to make it work well as a VM under Windows or MacOS or BSD, and in VMware or VirtualBox too.
From that point of view, Linux in a VM on another host is taking away from Linux running as a desktop on that host. Linux running as a VM on a VMware cluster is taking away from Linux running on those bare metal servers instead of VMware ESXi.
I think the more sane way to look at it is that Linux is an application (and its subcategory is that it is an OS) which is meant to run on various hardware and software platforms, the more the better. This strategy has worked very well over that last couple decades.
Does allowing WSL mean that some people that would install Linux on their hardware just run Windows instead and use Linux on top? Probably. Does it mean that people that already ran Windows and have never used Linux get exposed to it for the first time and get familiar with it through a few click on their existing Windows computer? That's also probable. Does it really matter in the end? Probably not.
Why? I would guess that a very large share of linux kernels run under a hypervisor in some data center, in a public cloud or some OpenStack cluster. Won't those mostly be the same features?
Given that the WSL2 rewrite is essentially this, without even the niceties of a VirtualBox GUI wrapper to control the settings, I keep wondering what all the fuss is about.
>>> I think if that really matters you should be against
Should be interpreted as (and I meant to write as)
>>> I think if that really matters to you you should be against
The rest of the comment should have made that obvious though, especially the last two paragraphs.
Happy to know that we also agree then.
IMO it's a good thing. Given that Windows accounts for 90%+ of the desktop OS share, Windows might very well become the world's most used Linux distro.
I can't even say I wouldn't use it - it might be nice! But I will not use any WSL-only capability, that's for sure.
WSL2 literally runs user-mode distros (and their binaries) in containers atop a shared Linux kernel image (https://github.com/microsoft/WSL2-Linux-Kernel) inside a lightweight VM that can boot an image from cold in < 2s and which aggressively releases resources back to the host when freed.
So when you run a binary/distro on WSL2, you are LITERALLY running on Linux in a VM alongside all your favorite Windows apps and tools.
If some of the tools you run within WSL can take advantage of the machine's available GPUs etc. and integrate well with the Windows desktop & tools, then you benefit. As do the many Windows users who want/need to run Linux apps & tools but cannot dual-boot and/or who can't switch to Linux full-time.
This will result (and already has resulted) in MANY Windows users getting access to Linux for the first time, or the first time in a while, and they are now enjoying the best of both worlds.
So people who use it are married to Windows.
I think folks would be absolutely excited if this was an initiative to allow writing DirectX applications on Linux, and available for Linux on bare metal. But as people realize this marries them to Windows, they go meh.
If DirectX on Linux could also work on bare metal, the conversation here would likely be different.
I'm still piecing it all together, and I definitely feel that "Extend" feeling, but I'm not sure that's what's happening here. Looks more like a few devs at MS are trying to solve the GPU Accel use case for WSL...
Those tools and libraries will then not work on native Linux.
And that is, IMHO, a very real concern and it really should not be merged.
And if it is rejected, Microsoft can still ship it in the kernels for the distros they offer on WSL.
I doubt the Linux devs will ever see WSL as a target they have to maintain themselves.
Using that with wine would mean adding two emulation layers before reaching the actual driver. I fail to see any use case for that.
So yeah, vulkan is neat and opens up a whole lot for the linux world. In the future you'll probably see userspace implementations of opengl on top of vulkan, maybe even CUDA implemented on AMD gpus, although I'm not sure how practical that is. Also a whole lot of exciting GPU sharing tech, accessing GPUs inside of VMs for example.
DirectX will come to Linux, but it won't be thanks to Microsoft. You can thank Valve hedging their bets on the Microsoft Store for that.
> Anyway, this isn't really a Linux port of DirectX
The entire user mode side of Direct3D is ported, in addition to the user mode parts of the Nvidia, AMD, and Intel graphics drivers.
Are we seeing the start of the migration of Windows to linux?
By making the virtualized hardware the "glue", they can avoid the GPL/copyleft infection of their commercial OS, while supporting different kinds of developer experiences.
Please no. Please keep your peanut butter out of my chocolate. Call me a purist, but linux should take nothing from windows, give no ground, make no compromise. One must die for the other to live.
Edit: yep, an online search seems to say that's an actual thing. I guess I'm part of the ten thousand today https://xkcd.com/1053/. I will never understand the US fascination for peanut butter.
And yes, as a sibling notes, Reese's peanut butter cups are actually alarmingly tasty, but.... as with any $1 chocolate bar, that's shitty HFCS-saturated chocolate and shitty palm-oil-laced peanut butter, with way too much sugar in it, so if you're too good for that, well, that's a credit to your tastebuds, good on ya.
So eat real chocolate with real peanut butter. Real peanut butter is nothing but peanuts and salt (it keeps well, but fresh-ground is better). Real chocolate, I trust you can figure out. Milk and dark are both good in this application.
Although, of course, peanuts are not true nuts (no more than macadamia or almond or walnut), they're nonetheless very nutty, and the effect is pretty similar to "almond bark", or hazelnuts with chocolate, or pecans and chocolate. And of course you can just eat peanuts with chocolate, an okay combination. But there's something weirdly perfect about peanut butter with chocolate, better than peanuts with chocolate.
But hey, although I'm not American, I am from America's hat, and I do like peanut butter in a few other formats too.
At one point in history, US farmers were encouraged to grow peanuts as a rotation crop to improve soil quality. That led to a glut of peanuts in the market, so people tried to find uses for them. Peanut butter was invented+ as one of these uses, and has been a staple of American diets ever since.
+Or promoted, I can’t remember
As if describing things in terms of northern hemisphere temperate seasons wasn't bad enough (and, still worse, commonly showing how little you care about any place other than the USA and maybe Canada by using the name "fall"), now we have this: "holiday 2020". I don't know when this is talking about. I'd have guessed northern hemisphere summer school break first, but I guess that's just about finished now, so it can't be that. Christmas time would have been my next guess, but surely you'd describe that as "by the end of 2020"? And then other possibilities occurred to me - Halloween? Thanksgiving? I have no idea at all what Americans would call "holiday" as a time of year.
> I’d have guessed northern hemisphere summer school break first, but I guess that’s just about finished now
Summer break more or less just started for most students here. The fall term will start around August.
They're referring to https://en.m.wikipedia.org/wiki/Christmas_and_holiday_season
Though in principle, there's nothing preventing them from using GBM instead of EGLStreams, and there are some good practical reasons such as having compatibility with the broad base of existing accelerated Wayland windowing libraries and applications.
The people working on the NVIDIA open source driver have no official support and were fighting with signed firmware blobs last I heard. I wish them luck, but even on older cards it is more likely to crash your system than render anything.
My understanding is that even 2015 is too new for nouveau to run with high performance due to something called reclocking, where the card starts up at a minimal clock rate and then it's up to the drivers to reconfigure it for running at the advertised clock.
And yet another piece is a layer to get OpenGL and OpenCL workloads running on DX12 as well, rather similar in scope to how MoltenVK and the gfx-hal Vulkan Portability work are a layer to get Vulkan workloads running on Metal. This is a big effort, and it seems to me their goal is to get things to the point where stuff Just Works and you don't have to think too hard about the various bits of (technically difficult!) infrastructure to get you there.
Huh? Are you sure about that? Regular TensorFlow on Windows uses CUDA, not DirectX-flavored compute.
Just run a Linux Hyper-V VM. That's what WSL2 is doing under the hood anyway. I run it this way and it's great. I have Windows Terminal auto-ssh into it. Performance is great. And using the X server x410 on the Windows side, GUI performance is fantastic (though no hardware acceleration), because instead of ssh tunneling, x410 supports AF_VSOCK for the X socket, which Hyper-V supports, giving performance as good as a domain socket on the same machine.
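For anyone wanting to replicate the AF_VSOCK part: inside the VM, the local X11 socket can be bridged to the host over vsock with socat. A sketch of the session setup - port 6000 is an assumption (use whatever port x410 is configured to listen on), and CID 2 always addresses the host:

```shell
# Inside the Linux VM: bridge the local X11 socket to the host's
# X server over AF_VSOCK. CID 2 is the host; the port is whatever
# x410 listens on (6000 is assumed here).
X410_VSOCK_PORT="${X410_VSOCK_PORT:-6000}"
socat -b65536 \
  "UNIX-LISTEN:/tmp/.X11-unix/X0,fork,mode=777" \
  "VSOCK-CONNECT:2:${X410_VSOCK_PORT}" &

# X clients then talk to the local socket as display :0.
export DISPLAY=:0
```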
* Sparse & light - they only allocate resources from the host when needed, and release them back to the host when freed
* Fast - it can boot a WSL distro from cold in < 2s
* Transitional - these lightweight VMs are designed to run for up to days or weeks at a time
Full Hyper-V VMs aim to (generally) grab all the resources they can and keep hold of those resources as long as possible in case they're needed. Full VMs are designed to run for months or years at a time.
WSL's VMs are MUCH less impactful on the host - FWIW, I run 2-3 WSL distros at a time on my 4 year old 16GB Surface Pro 4 and don't even notice that they're running.
I imagine this will be addressed, but claims of lightweight seem exaggerated?
But even more on my mind is the impact on the windows host. Is it running as a guest under hyper v? What's the overhead?
This leads me to believe that display support is intended in the future. It's a work in progress. They've gone this far why would they stop at compute? Still, it's pretty awesome if you ask me.
My experience is that Linux has significantly worse hardware support than Windows, particularly where newer hardware is concerned.
I was a tad bummed when realizing what this actually was, but still very much impressed.