DirectX is coming to the Windows Subsystem for Linux (microsoft.com)
706 points by modeless 11 days ago | 527 comments





It appears that Microsoft has now initiated the "Extend" phase of their classic Embrace, Extend, Extinguish playbook. The key is that the proprietary, Microsoft-specific API added by this patch is only usable in a WSL environment, as it relies on many pieces of proprietary, closed-source software that Microsoft is unwilling to open source. This patch does nothing but fragment the Linux ecosystem and encourage people to develop software that only works in WSL environments. With the "benefit" of forcing the Linux kernel developers to pay the maintenance costs, of course.

Look, I'm not saying we should "trust" Microsoft not to Embrace, Extend, Extinguish if they could - but they can't and they know it, so that's not their strategy. EEE was hinged entirely on the dominance of Windows and IE for the Extinguish phase. They won on desktop, but today most of the action has moved to mobile (where Windows lost completely to iOS and Android) and webservers (where Linux is massively dominant). In those domains, they are no longer the 500 pound gorilla in the room, they're the "hey, fellow kids" oldtimer trying to fit in and get in on a piece of the action. Look at a comp sci class in the last decade, and you'll see 1/2-2/3rds of them with MacBook Pros in their laps, because it compiles iOS apps and it's a *nix so it easily runs most of the same tools as a linux webserver.

That's what WSL is about, trying to win back developers. That's why they bought Xamarin and GitHub. That's why they released VS Code. They're trying to win back developers by meeting them wherever they are, even if they know they are targeting platforms where Microsoft is not dominant and has no hope of becoming so. They're trying to make it feasible to target a linux webserver while developing on Windows. They're trying to make Azure a serious contender for cloud computing outside of the corporate C# world.

There's not a realistic scenario where they become dominant enough on those platforms to execute any kind of Extinguish move. And they don't need to - there's still tons of money to be made with just a slice of the cloud pie. I think EEE went out the window along with Ballmer and the 'Windows First' doctrine. I'm not saying they wouldn't, but I'm saying they can't and they know it, and they're ok with that.

All this is not to say that there's any reason for upstream to accept any bad patches that rely too much on proprietary code. But in this case, I think the analysis of the underlying motivations for that patch was outdated.


> EEE was hinged entirely on the dominance of Windows and IE for the Extinguish phase.

EEE was more effective when Microsoft was completely dominant, but it's clearly still effective even when that dominance has diminished.

This move, for instance, will cause Linux users to depend on a proprietary, closed-source API entirely controlled by Microsoft. It's very bad for Linux and very good for Microsoft. It's a lever of Microsoft control, extended into the rival Linux system. Very easy to see how it effectively fragments the Linux ecosystem and allows Microsoft to forcibly pull users who come to depend on DirectX from Linux to Windows by manipulating this lever - reducing Linux support in the future, etc.

This is classic EEE, and the only way you can dismiss it is if you believe their PR, as apparently you do:

> they are no longer the 500 pound gorilla in the room, they're the "hey, fellow kids" oldtimer trying to fit in and get in on a piece of the action.

As the other comment explained, they are still a huge gorilla in spaces that are absolutely vital to Linux: server, desktop, and laptop.

Linux as we know it lives or dies on the server and desktop / laptop markets. Yes, Android is technically based on the Linux kernel, but it builds an entirely different ecosystem on top of it. The Linux ecosystem is only on the server and PC - exactly where Microsoft is competing.

Finally, the availability of this proprietary API will reduce the incentive for hardware companies like Nvidia to allow development of direct Linux support (drivers etc) for their products - which is what Linux really needs in order to survive, let alone prosper.


> As the other comment explained, they are still a huge gorilla in spaces that are absolutely vital to Linux: server, desktop, and laptop.

No reasonable person at Microsoft considers Linux to be a threat on the desktop/laptop. No reasonable person at Microsoft considers Windows Server to be a threat to Linux on the server.


Microsoft holds a large and fast-growing share of the enterprise server market with products like Office365, Exchange, and Sharepoint.

O365 is SaaS. Customers that are hosting Exchange on-prem are being encouraged to move to O365 for email. Same for SharePoint. If I had to guess, O365 is helping to reduce, not grow, the market share of Windows server.

What's important is that server-side services that Microsoft offers are replacing Linux servers. Whether it's SaaS like Office365 or more traditional server software like Exchange and Sharepoint is not crucial.

Except that Linux usage on Azure surpassed that of Windows a long time ago...

And I was talking specifically about Microsoft products like Office365, Exchange, and Sharepoint.

Microsoft's interest in on-premises business with these products is waning, just as the market share for them is also waning. They most assuredly aren't trying to win the war against Linux on the server. That war has been won (by Linux) and MS is off chasing other revenue streams that are growing.

Fast growing?

The on-premises infrastructure market has been and is projected to continue shrinking in favor of cloud spend.

Googling around one can find reports with names like "IDC Worldwide Operating Systems and Subsystems Market Shares report" that show how prevalent Linux is vs. Windows Server even in Enterprise IT.


There are enough Fortune 500 using Windows Servers to keep Microsoft in business for a long time.

> As one last point to consider: Linux as we know it lives or dies on the server and desktop / laptop markets

I agree with your general sentiment, but just wanted to note that high-end embedded systems are also a very Linux-heavy domain. Even though from the hardware side it might look similar to the mobile space (ARMs everywhere etc.), it is actually a completely different world.


> In those domains, they are no longer the 500 pound gorilla in the room, they're the "hey, fellow kids" oldtimer trying to fit in and get in on a piece of the action.

Said of a company which owns a market-shaping player in every layer of the stack: OS, directory, database, applications, development languages, source code repository, and the second-biggest cloud to host it all in. Given that mobile and web involve about half of that list, I fail to see how they're some sort of non-player, trying to play "catch-up" to the rest of the world now.


SV distortion field.

Not to mention the biggest public company in the world, alternating with Apple.

> Look at a comp sci class in the last decade, and you'll see 1/2-2/3rds of them with MacBook Pros in their laps,

Perhaps in some countries, definitely not everywhere. At my university, there is only a single Mac across 40 or so laptops.

> because it compiles iOS apps

I doubt students buy Macs for that. iPhones are not nearly as popular outside the US.

> and it's a *nix

Students don’t care about its UNIX certification. Most don’t even know what that is when they start their undergrad.

> it easily runs most of the same tools as a linux webserver

No, it does not. Many important dev tools are not available, not even through brew.

> That's what WSL is about, trying to win back developers

Globally, the majority of developers are on Windows/Linux, not macOS.

> They're trying to make it feasible to target a linux webserver

I’d bet this is about avoiding Linux becoming the major platform for ML/DS/AI, rather than anything about macOS.




Max 10% at my university in Germany. I’d say Windows/Linux was about an even split for the rest.

> That's what WSL is about, trying to win back developers. That's why they bought Xamarin and GitHub. That's why they released VS Code. They're trying to win back developers by meeting them wherever they are, even if they know they are targeting platforms where Microsoft is not dominant and has no hope of becoming so. They're trying to make it feasible to target a linux webserver while developing on Windows. They're trying to make Azure a serious contender for cloud computing outside of the corporate C# world.

So in other words, what you're saying is they are Embracing this FOSS linux-based developer workflow...

And you could almost say they're Extending the linux kernel to support DX12 on WSL...

Am I missing something?


Could you explain to me what you think an Extinguish step would look like in that sequence? Because I don't think there is a realistic one, and I don't think Microsoft thinks so either.

I think it's hyperbolic to imagine that MS will manage to fully extinguish Linux outside of WSL, but that's not the only way they could damage the linux ecosystem by leveraging the foothold they are building in the FOSS world.

It's not so hard to imagine picking up a legacy project at some point in the future, and pretty much having to develop it on Windows and deploy to Azure because of some WSL-specific dependencies baked into the framework it's built on.

That's just one scenario, but the point is why would you expect that they wouldn't leverage their market-share to disadvantage competitors if they ever got into position to do so? Just because they aren't in that position now doesn't mean we shouldn't be put off by the fact that they're building those levers to fragment the ecosystem now.


Correct me if I'm wrong, but my impression here is that Windows Subsystem for Linux will support DX12, but not broader Linux support for DX12.

So if you want a machine that can play games and run a Linux dev toolchain, that would require a Windows install. If you can incentivize folks who may never have installed Windows to do so, and keep them there as the only place they can both play games and use their preferred dev environment, then you've extinguished that portion of the desktop linux install base.

EDIT: Also, I absolutely love the "I don't think Microsoft thinks so either." Microsoft is a for-profit corporation that's eyeing a way to get users who traditionally avoid them to come to their platform and stay there. They aren't some benevolent font of cool tech; they're trying to sell you products.


I agree: when we look at Microsoft's motivation, profit explains their behavior better than anything else. That's why I'm critiquing comments that seem to think Microsoft is driven by some innate desire to destroy Linux and FOSS. They're not; they just want to make money. Back when their money came from Windows, and they thought Linux was an existential threat to it, those two goals coincided. Today, I think Microsoft cares much more about building up Azure to beat AWS and become the biggest cloud platform - they don't care whether the servers are running Linux or Windows, as long as they're running on Azure - and being seen by developers as Linux-hostile runs counter to that goal. So again, I'm not saying they wouldn't go for an Extinguish if they could, but I think they know that they can't, and they have a strategy where they don't need to.

We're not talking about Microsoft's cloud / server strategy though. Linux as a server OS is different from Linux as a development platform. The two have nothing in common except the word "Linux."

Microsoft is taking steps to embrace desktop Linux, we've seen that with WSL. Instead of ditching Windows for a different/better dev platform, just keep using Windows.

They are extending their own desktop Linux by adding the capability to use DX12 in Linux, so long as it's run on Windows. Note that they are not adding this capability to a straight Ubuntu install, let alone any other distribution.

To claim "Microsoft knows they can't Extinguish desktop Linux" is naive at best, and shill-like at worst. They don't need to completely eliminate desktop Linux for EEE to apply here. the "Extinguish" part comes naturally from fewer and fewer devs choosing an alternative to Windows.


> We're not talking about Microsoft's cloud / server strategy though

Yes we are. I was talking about the motivation behind their action. And I think it's quite obvious that 'get businesses to use Azure over AWS' or 'get web developers to use Windows instead of Macs' is a much, much more likely motivation than 'Extinguish desktop Linux'. Why would they care about that? Desktop Linux is absolutely no threat to them, and there's no money to be made winning that fight more than they already have. Know what, I'll grant you that in their attempt to steal web devs over from MacOS, and getting a better user story for a web dev deploying Linux servers from Windows, they will probably also hurt desktop Linux - if nothing else, simply by creating another viable alternative for devs that isn't MacOS. But I can pretty much guarantee you that extinguishing desktop Linux is not their motivation - it simply makes no sense.


What you're saying is essentially: If we can't come up with a way in which Microsoft could proceed to the Extinguish phase, they can't. Because clearly, if we, after spending five minutes thinking about it, can't come up with something, then it is also impossible for this company that has spent three decades perfecting the concept, and literally has billions of dollars to throw at the problem.

Presumably because we smart and they dumb. Or something, I don't know?

Personally, I think that their thirty years of experience and billions of dollars might make it possible for them to come up with a plan I wouldn't have thought of.


I agree with you. Microsoft plays a long game. By giving out promotional/free copies of Excel with 1-2-3 compatibility, they killed Lotus. The same with Word and WordPerfect. The same with Netscape. And those companies completely dominated their markets. As market leaders, maybe they were lazy, taking profits and not enhancing their products. That happens with most market leaders. But the fact is, it is impossible to charge $150 for WordPerfect when Microsoft is giving away a compatible word processor. With WordPerfect and Lotus as their only / primary sources of revenue, those companies could not survive a free competitor.

They don't have to kill Linux, and in fact, would not want that IMO, because then someone might bring another pesky antitrust suit against them. But if they can dent it in any way, they will.

For those who say Microsoft has changed, is a different company, is embracing open-source, etc.: if that truly is the case, wouldn't they release DirectX for all Linux distributions, not just theirs?


I think the Shadowrun adage is applicable here: "Watch your back, shoot straight, and never, ever, cut a deal with a dragon."

Microsoft is the dragon here. Even if it appears friendly you still don't cut a deal with it, or you become a pawn in its game.


The extinguish step is breaking the spirit of the FOSS developers.

They already did that to themselves by throwing away GPL.

> you'll see 1/2-2/3rds of them with MacBook Pros in their laps, because it compiles iOS apps

Apple can be as hardheaded as Microsoft. I have several Apple machines and each of them was bought specifically for developing iOS apps, other uses appeared later. Apple can provide Swift for Windows and Linux, but they will never allow building iOS apps on non-Mac hardware.


The modern form of EEE is buying OSS developers, who increasingly make excuses for Microsoft and proprietary extensions.

The modern form of EEE is being a platinum sponsor of the Linux Foundation and inserting a CoC into the source tree.

Corporations don't want the "angry young OSS men" of the 1990s and early 2000s, who could be an actual threat.

Corporations want submissive, neutered and obedient developers.


The impression that "most of the action has moved to mobile" is just an impression.

First of all, the market share of desktop versus mobile is roughly 43% for the desktop and 57% for mobiles and tablets. More important, however, is that the rise of mobiles was sharp from 2009 on, but has been stagnating since 2017, and this stagnation trend is clear. Mobile devices too are a commodity: their market isn't growing any bigger and people aren't replacing their phones as often. The market for the desktop did not shrink. It just reached saturation. And as it will become increasingly clear, mobiles have reached saturation too.

What devices do people use in companies to do their jobs? Laptops. Sometimes tablets, for drawing on them, although a piece of paper would do. Mobile devices have been and remain essential for communications, this meant phone calls in the past, nowadays it means WhatsApp, Slack, email, along with shitposting on Facebook/Instagram/Twitter, but that's about it.

> "webservers (where Linux is massively dominant)"

Sorry to disappoint you, but the market for web servers is actually small unless you're Amazon, and Microsoft realized at some point that selling Linux boxes on Azure is probably more lucrative than what they were doing. The web servers space was never Microsoft's, so they had nothing to lose.

But the enterprise space is dominated by Microsoft, with their dominance only increasing. Exchange, Sharepoint, Office 365, Skype, Microsoft Teams, Azure DevOps, freaking Yammer, MS SQL, soon GitHub, their reach in the enterprise and their adoption, once you're familiar with the space, is actually quite scary.

I have to hand it to them, they became really good at marketing. Otherwise I can't explain this portrayal of them as being the underdog. Or the constant nagging messages I see about them having changed, due to them releasing VS Code or .NET Core.

No, they haven't changed much. The tooling they make for developers has always been top notch, and while .NET was proprietary, they standardized it and they never attacked Mono. Office has been available on Macs since 1989. And they no longer target Linux with patent threats, of course; they target Android instead. Until they find some way to sell Android. They adopted Chromium, a master move, since now they can cut some development costs, win back some users, and sell them on Bing too. We'll probably see them windowfying Android too.

---

Note that I do enjoy several Microsoft products. But I'm always skeptical when hearing about their motivation. I don't understand what makes people cheer for these big companies, as if they are sports teams. Use their products for what they are and dump them as soon as you find something better.


Yeah, most of the consumption has moved to mobile. But people who make stuff are still predominantly using desktop/notebook computers.

Right, and most of the consumption is being served from linux servers. So Microsoft is trying to use their strength on desktop and in production to stay relevant in that ecosystem, even though it involves other platforms than Windows - hence the focus on cross-compilation and cross-platform tech.

> The tooling they make for developers has always been top notch

I have to emphatically disagree there. Up until recently, Windows didn't even have a decent terminal app. As a developer I tend to shy away from Windows precisely because the *nix alternatives offer a better development experience, from the CLI to package management and more.


Not every developer needs a terminal app to feel like one.

In fact Mac OS also didn't have any until OS X.


Real developers automate. It's hard to automate without a terminal :)

You can automate with scripting languages and REPL.
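For what it's worth, most day-to-day automation needs nothing more than a scripting language, run from a REPL or IDE rather than a terminal emulator. A minimal, hypothetical Python sketch (the function, directory, and suffix are invented for illustration):

```python
from pathlib import Path

def archive_logs(src_dir: str, suffix: str = ".log") -> list[str]:
    """Rename every *.log file in src_dir to *.log.bak and report what moved."""
    moved = []
    for path in sorted(Path(src_dir).glob(f"*{suffix}")):
        target = path.with_name(path.name + ".bak")  # a.log -> a.log.bak
        path.rename(target)
        moved.append(target.name)
    return moved
```

The same logic would work unchanged on Windows, macOS, or Linux, which is part of the point: the shell isn't the only automation surface.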

If you're using a terminal to automate on windows, you're going to have a bad time. You'll want something like AutoHotKey which has been around for a long while.

While you're not wrong, I think that you're referencing a different flavor of automation than GP is.

You can automate just fine in a console with PowerShell.

OS X was released in 2001, which is a pretty long time ago.

Yet classic Mac OS developers were able to do without one for 17 years, and I bet most developers actually targeting Apple devices still do without one, when I look around the office over here and see IntelliJ, Xcode and AppCode.

Doesn't MPW count?

Yeah, but most people were using the GUI tooling anyway, and not everyone was an MPW user.

Windows still doesn't have a decent terminal app! The new one just crashes when I resume from suspend. (I believe it's because when the computer suspends, it forgets about the external monitor. But it's still not evident to me why that should make it crash.)

If we consider third parties, ConEmu is well beyond decent.

So what you're saying is "Developers, Developers, Developers!"?

Watch out for flying penguins.

Very well stated.

People should be much more concerned about EEE from Google, as we’ve been seeing more of all the time with Chrome.


In businesses and government, Microsoft Office continues to have a dominant market share. And that's a lot of money.

Not only the US government, but almost every government around the globe is using it. And on macOS, that's still the case.


Google is already one step ahead of Microsoft with ChromeOS and Android.

I find this a cynical take as far as EEE goes. But even with Microsoft's mostly solid efforts in open source, I find myself a bit skeptical as well.

Most recently with the Live Share extension for VS Code. From skimming the licensing it is only to be used with the Visual Studio family of products. Which is an incredibly disappointing approach.

I think this is simply about ML and GPU compute for WSL. And I think Microsoft is genuine in that it sees business value in doing open source work, integrating well with Linux, etc. But at the end of the day the business interests are there and different parts of the company can be motivated quite differently.

Some skepticism remains warranted.


The license of Live Share as it appears among VS Code plugins indeed says "You may use the software only with Microsoft Visual Studio or Visual Studio Code", but the license of the source code of the extension on GitHub is an MIT license:

https://github.com/MicrosoftDocs/live-share/blob/master/LICE...


This is not the code of the extension, it's just documentation.

I think it is as well. Honestly, the DX12 part of this is DOA as far as I'm concerned, because it's not open source. So that leaves the better support for other APIs as the key delivered feature. Could MS fix that by setting DX12 up as an open-to-implement API and open-sourcing much of this? Yes. But until they do, you're better off using the various translation layers from DX to Vulkan when using Linux, even on WSL.

As a FreeBSD user I find this kind of comment hilarious given I see Linux specific code littering upstream open source projects with no thought for other platforms on a regular basis. I occasionally see resistance to fixing these issues as well.

Frankly, I don't see what's so funny.

I don't presume to know if EEE is Microsoft's strategy here, but if it is, do you think a fleeting moment of amusement at Linux's comeuppance is worth the damage done to the larger open source ecosystem in its wake? Linux and FreeBSD are on the same team here.


As a Linux user I find this kind of comment hilarious given I see FreeBSD specific code littering upstream open source projects with no thought for other platforms on a regular basis.

Why for example are Jails not compatible with Linux? Is that not considered upstream because FreeBSD's "integrated"? How convenient.

I think what you see is happening because these "upstream" projects are actually primarily for Linux and by Linux devs. They just happen to work on BSD as well, but weren't you guys saying this whole time how the BSD approach of having everything integrated into one system is better and the whole GNU/Linux ecosystem of several upstream components is strictly worse?

So what exactly are you complaining about? Us trying to use Linux-specific features to make the Linux ecosystem the best, rather than target the lowest common denominator while you guys take full advantage of FreeBSD's specifics?

Seems less than fair to me.


> I see FreeBSD specific code littering upstream open source projects

Any examples?

> Why for example are Jails not compatible with Linux?

What? Jails originated in FreeBSD over 20 years ago. This question doesn't make sense.

The OP was referring to systemd affecting upstream by requiring systemd for things (i.e. GNOME). Are you saying there are pieces of software you have always used, but no longer can, because they directly rely on jails?

I don't even use FreeBSD, but this was a weird argument.


I am saying GNOME is primarily a Linux desktop environment, developed practically exclusively by Linux users and thus it makes sense for it to make use of Linux specific features, just as it makes sense for FreeBSD technologies to take full advantage of FreeBSD-exclusive features.

Just because GNOME didn't use systemd in the 90s, when systemd didn't even exist, doesn't mean it should never use it, now that it makes sense for GNOME devs to do so.


I think a lot of that is coming from Red Hat/IBM, which is currently doing its own take on EEE. Believe me, I get pissy about that too.

Honest question: is your opinion that we should have been stuck with something like SysVinit just so it's more convenient for the BSDs, while the likes of Red Hat contribute the majority of the work?

And if yes, can you point to an example of *BSD doing something similar while pushing their platform forward?


This is a loaded way to frame the question; some of us were just fine being "stuck" with SysVinit. I try not to get dragged into systemd flamewars because life is too short, but it's disingenuous to claim that before systemd, init systems were some kind of inescapable hellscape.

I administer multiple systemd-based Linux distros, FreeBSD servers and buildroot-based embedded systems, and I can tell you that systemd still gets in my way regularly, while the sh-based init systems tend to Just Work and are very simple to understand and maintain.

Of course, I know that a big reason is that I'm very familiar with un*x system administration and the gotchas of shell scripting, while systemd is probably more approachable for somebody who doesn't want to learn arcane knowledge about chmod and file locking and setuid and symbolic links. But I think that explains why there's still so much pushback against systemd all these years later: people who care about init systems know enough about them that systemd feels over-engineered and unnecessarily complex while not bringing a lot to the table.

Never once in the past years of running systemd have I thought "oh man, I sure am glad I'm using systemd and not an old SysV/BSD init system!". Not a single time. I did have multiple occurrences of systemd breaking stuff after an update though.


> systemd still gets in my way regularly while the sh-based init systems tend to Just Work

For me, systemd-based systems allow me to have declarative, portable unit files where init scripts don't. They allow me to reliably monitor and restart services, and they shut things down properly instead of just force-killing them, as many init scripts end up doing.

I instantly know how to manage most major distros now that systemd is common among all of them, I have no hesitation in writing a proper service file even for minor tasks, and I get a ton of functionality 'for free' too.

Init scripts were always a poor-quality mess: non-portable across systems, inconsistent, non-deterministic. If your experience differs, there are still plenty of non-systemd choices out there. They're not as prevalent as systemd ones, but that's because the people who sit down and actually write the code we all use find the services systemd provides valuable.
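For concreteness, the kind of declarative unit file being described looks roughly like this (the service name and paths are hypothetical, not a real package):

```ini
# /etc/systemd/system/myapp.service -- hypothetical sketch
[Unit]
Description=My hypothetical app
After=network.target

[Service]
ExecStart=/usr/local/bin/myapp --serve
# systemd supervises the process and restarts it if it crashes:
Restart=on-failure
# orderly SIGTERM shutdown of the main process before any SIGKILL:
KillMode=mixed

[Install]
WantedBy=multi-user.target
```

Monitoring, restart-on-failure, and orderly teardown all come from those one-line directives rather than hand-written shell logic.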


Arguing that the advantage of systemd is portability is rather bold!

And even if portability was the point, I'm not sure I see the big deal. Writing an RC script from scratch, if the software you use doesn't provide one, is generally trivial. The vast majority of the time you wouldn't have to do that anyway, as it ships with your OS's packages. Sure, systemd might be "tidier" with its standard APIs and whatnot, but it's also a lot more complicated and opaque than a bunch of shell scripts running one after another. And if you're serious about sysadmin, you'll have to learn shell scripting one day or another anyway.

>Init scripts were always a poor-quality mess, non-portable among systems, non-consistent, non-deterministic.

They're non-portable, that much is true, but so is systemd; that's a weak argument. If everybody had adopted FreeBSD-style init, it would be equally portable; it's a self-fulfilling prophecy. By that logic we should all just ditch un*x and start running Windows, since most people already use that anyway.

The rest is nonsense. It's poor quality, inconsistent and non-deterministic if you write them that way. Sure, shell scripts being Turing-complete opens the door to a lot of nonsense if people go wild, and gives more latitude for very sloppy code, but it doesn't have to be that way.

>the people who sit down and actually write the code we all use find the services systemd provides valuable

Systemd has been pushed down everybody's throat for a while now; saying retroactively that people use it because they find it valuable is a bit of a stretch. I'm sure many of them use it because that's what's available. I wrote a bunch of systemd unit files myself, and I assure you that it wasn't meant as an endorsement.

Besides it's only one side of the equation. Maybe it's nicer for the people writing the unit files, doesn't mean that it's a good thing for people actually having to use them. I'm sure many software maintainers would prefer if everybody ran the same OS on the same hardware with the same use cases but that's not how the real world works.


> Arguing that the advantage of systemd is portability is rather bold!

It's merely a statement of fact: systemd services accept the same set of commands across distros, which is rather unlike SysV.

> It's poor quality, non-consistent and non-deterministic if you write them that way.

That's a bullshit statement, because everything fits it. Of course everything is great if you make it great. And?

The point is that systemd's declarative nature makes it hard to screw up services and even badly written ones will get enough common functionality for free that they'd be usable.

> Systemd has been pushed down everybody's throat for a while now, saying retroactively that people use it because they find it valuable is a bit of a stretch.

Systemd got adopted because people generally found it valuable enough to adopt over what they had before.

> Maybe it's nicer for the people writing the unit files, doesn't mean that it's a good thing for people actually having to use them

Matter of opinion, but I happen to think that having a uniform set of commands working at work and at home is nicer for users too, over the patchwork of scripts that SysV was across the various distros.


> For me systemd based systems allow me to have declarative, portable unit files where init scripts don't.

Portable to what?


Portable to other systemd-using Linux distros.

SysV init scripts simply weren't portable between distros, leading to tons of non-standard, incompatible fragmentation.

Users couldn't take their own scripts over to another distro and expect them to just work, given said differences.

With systemd, unit files just work across all systemd distros, given the standardized format.


Oh. That hasn't been my experience. Where did they standardize the names and set of the services you can depend on?

> Where did they standardize the names and set of the services you can depend on?

You're ignoring what the parent said and talking past them.

It's not the 'sets' that are standardized; it's the set of commands that apply to a systemd service, and those commands work on any Linux distro that uses systemd.


If the names of dependencies are the only unportable thing, we have indeed come a long way.

I don't think we should have stuck with something like SysVinit, there's definitely room for improvement, but saying "it's either SysVinit or systemd" is a false dichotomy.

If I was approaching building an init system I'd make a better language for writing init scripts than bash, some kind of interpreter that processes mostly declarative init files, sets things up, and then exits. An incremental improvement that works with existing systems instead of putting a whole bunch of new (generally un-audited) code into PID 1, with all the security implications that implies.
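A toy sketch of that idea, assuming an invented stanza format with `exec=` and `after=` keys (nothing here corresponds to any real init system): parse a declarative file, launch services in dependency order, then exit instead of staying resident as PID 1.

```python
# Toy one-shot init interpreter: reads a declarative service file,
# launches everything in dependency order, then exits (unlike a
# resident PID-1 daemon). The file format is invented for illustration.
import shlex
import subprocess

def parse(text):
    """Parse '[name]' stanzas with 'exec=' and optional 'after=' keys."""
    services, current = {}, None
    for line in text.splitlines():
        line = line.strip()
        if not line or line.startswith("#"):
            continue
        if line.startswith("[") and line.endswith("]"):
            current = line[1:-1]
            services[current] = {"exec": None, "after": []}
        elif "=" in line and current is not None:
            key, _, value = line.partition("=")
            if key == "exec":
                services[current]["exec"] = value
            elif key == "after":
                services[current]["after"] = value.split()
    return services

def start_order(services):
    """Topologically sort services on their 'after=' dependencies."""
    order, seen = [], set()
    def visit(name, stack=()):
        if name in seen:
            return
        if name in stack:
            raise ValueError(f"dependency cycle at {name}")
        for dep in services[name]["after"]:
            if dep in services:
                visit(dep, stack + (name,))
        seen.add(name)
        order.append(name)
    for name in services:
        visit(name)
    return order

def boot(text):
    """Start every service once, in order, and return the launch order."""
    services = parse(text)
    launched = []
    for name in start_order(services):
        subprocess.Popen(shlex.split(services[name]["exec"]))
        launched.append(name)
    return launched
```

Feed it a file with `[net]` and `[web]` stanzas where `web` has `after=net`, and it launches `net` first, then `web`, then the interpreter exits. A real implementation would obviously need supervision, socket activation, and so on, which is exactly the functionality systemd folds into its resident daemon.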

Red Hat may contribute the majority of the work, but they're also very good at positioning themselves so that outsiders can't really contribute any work, or get any independently developed standards implemented.


> If I was approaching building an init system I'd make a better language for writing init scripts than bash, some kind of interpreter that processes mostly declarative init file

Guess what systemd does?

> sets things up, and then exits

And that's the crux of the issue isn't it? Because on modern systems, things need setting up and tearing down all the time.


> things need setting up and tearing down all the time.

Well not really, that's an artifact of systemd's over-engineered design. There's nothing stopping you from tearing something down from an init script or doing more complicated dependency management using the CLI as your RPC mechanism (but red hat needed a reason to use their in-house RPC mechanism).

Honestly something like systemd could be pretty reasonable if it wasn't created with the express intent of "combating fragmentation". Not that there aren't a whole bunch of technical and architectural issues with it, but still.


> There's nothing stopping you from tearing something down from an init script

Except most init scripts I've seen are rather brittle: they only "work" if the PID dance goes exactly as the author predicted, they are not declarative, and they are hard to debug.

I don't want to go back to init scripts, for all systemd's faults, the past was worse.

Using the CLI as my RPC mechanism etc. just sounds like I should spend a bunch of time doing work that systemd can do a better job of managing for me.


How can people replace SysVinit without "doing their own take on EEE"? Your alternative approach is still not going to be compatible with SysVinit either, not to mention it being strikingly similar to systemd.

Arch Linux had its own non-SysVinit system (which went away in favor of systemd), as do Void Linux (runit-based) and Alpine (OpenRC) today.

The problem is less the init system itself, but applications that depend on a specific init system [1] (gnome used to be a major source of contention in that regard).

[1] https://wiki.gentoo.org/wiki/Hard_dependencies_on_systemd


What apps depend on a specific init system?

Gnome does not depend on systemd, but rather logind.

KDE used to be the same, until someone started properly maintaining ConsoleKit2 again, at which point they were happy to support it.

ConsoleKit was dropped because it just wasn't being maintained, and had various limitations.


Why does logind depend on systemd?

It's part of the systemd project umbrella? It's part of the systemd monorepo? It uses libsystemd?

Maybe it's simply easier to maintain this way?

elogind exists, if you care. It exposes the DBus interfaces that logind supports for applications to call.

Thus, environments like Gnome can be supported on non-systemd systems if they emulate and/or expose and implement the required DBus interfaces.


That's sort of the complaint, why I accused them of "doing their own weird version of EEE". When faced with a choice of "make everybody else do a whole bunch of work so that they're not forced to use your entire project-umbrella" or "make some minor changes to our architecture so things aren't as tightly coupled" they almost always choose the one that forces you to use systemd.

To the extent that you believe free software development should work such that random people on internet forums dictate the architecture to the people doing the work, and that they should comply with these whims instead of writing their software to solve problems actual users have, you had better prepare to be disappointed.

logind has a stable API, it is possible to build alternative implementations not coupled to systemd, yet so far I don't see much on that front.

See also https://lwn.net/Articles/586141


Is applications depending on systemd particularly a bad thing? If we don't want applications to use non-SysVinit features, we might as well keep using SysVinit.

Why should my window manager dictate what programs are allowed to start it?

For the same reason every piece of software 'dictates' its dependencies.

That is, if you don't care about portability at all (regardless of whether it's just within Linux or to *BSD).

And as I already mentioned in my previous post, there are other alternatives around than just SysV init.


>How can people replace SysVinit without "doing their own take on EEE"?

By not tightly bundling their init with other OS components, by not having GNOME desktop somehow depend on what init system you're using (as opposed to services running under that init system).

Also, you know how shell scripts have `#!/usr/bin/env bash` at the top of them? Well, the reason my hypothetical init system would be compatible with SysVinit is that instead of `#!/usr/bin/env bash` it would have `#!/usr/bin/env new_init_language` at the top. `new_init_language` could implement almost all the same features that systemd unit files do.
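What such a script might look like; the interpreter name, directives, and syntax are entirely hypothetical:

```
#!/usr/bin/env new_init_language
# Hypothetical declarative init script; runs anywhere the
# interpreter is installed, just like a bash script would.
description "example daemon"
depends_on  network
exec        /usr/local/bin/mydaemon --foreground
restart     on-failure
```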


> Honest question, is your opinion that we should have been stuck with something like SysVinit just so it's more convenient for the BSDs,

BSD init is far better than SysV. Adopting it would have been a step forward.

There are also other init systems that are better designed, like openrc or runit.


> There are also other init systems that are better designed, like openrc or runit.

OpenRC is practically speaking a thin wrapper around SysVinit, not much of an upgrade if you ask me.


Linux getting first-class support from applications isn't a valid basis for criticism. It's not like Linux is actively doing harm to FreeBSD or anything.

> The key is that the proprietary Microsoft specific API added by this patch is only usable in a WSL environment

The other key, which everyone crying "Extend" seems so eager to ignore, is that they don't actually want anyone to use this API: https://lkml.org/lkml/2020/5/19/1139

"I'll explain below why we had to pipe DX12 all the way into the Linux guest, but this is not to introduce DX12 into the Linux world as competition. There is no intent for anyone in the Linux world to start coding for the DX12 API."

They want people to continue to use OpenGL/Vulkan/OpenCL/etc., and this is just their mechanism for getting those APIs GPU access when running under WSL: https://www.collabora.com/news-and-blog/news-and-events/intr...

"We have recently announced work on mapping layers that will bring hardware acceleration for OpenCL and OpenGL on top of DX12. We will be using these layers to provide hardware accelerated OpenGL and OpenCL to WSL through the Mesa library."


> There is no intent for anyone in the Linux world to start coding for the DX12 API.

I took that to mean that the developers on Linux are the people who won’t switch to DX even if it was ported.


According to the comment by christoph-heiss, this is just a paravirtualisation technique and will appear like a normal Linux GPU driver to applications running on WSL, rather than a special DirectX-for-Linux API that won't work on Linux itself.

I'm not sure where you are getting the idea that this would expose a normal Linux GPU API. The blog post and the thread on the kernel mailing list seem quite clear in that this is explicitly designed with the goal of exposing the Windows driver API and associated client APIs like DirectX.

From the email introducing the set of patches:

> The projection is accomplished by exposing the WDDM (Windows Display Driver Model) interface as a set of IOCTL. This allows APIs and user mode driver written against the WDDM GPU abstraction on Windows to be ported to run within a Linux environment. This enables the port of the D3D12 and DirectML APIs as well as their associated user mode driver to Linux.


> I'm not sure where you are getting the idea that this would expose a normal Linux GPU API.

The fact that they went through the effort of porting Mesa to get OpenGL and OpenCL support, and are working on Vulkan support.


The standard for Linux GPU access is the DRI/DRM kernel interface, not the new WSL-specific API they've added.

Porting Mesa to use a proprietary library that interacts with the WSL-only kernel interface isn't the same as behaving like a standard Linux GPU driver.


It's not really a standard though, each GPU has a totally different set of ioctls and device files to the point that you can't write against a generic kernel interface either way. AFAIK, the only DRI/DRM interface with two used clients is amdgpu (of which one of the clients is proprietary).

And part of the discussion on the mailing list is how to integrate this with DRI/DRM and dma-buf so it can be used by more Linux clients with less work (although still non zero like you'd have on a DRI scheme as well).


> The blog post and the thread on the kernel mailing list seem quite clear in that this is explicitly designed with the goal of exposing the Windows driver API and associated client APIs like DirectX.

Are we reading the same blog post?[1]

[1]: https://devblogs.microsoft.com/directx/wp-content/uploads/si...


Keep in mind, a ton of people and companies are asking for this, because they genuinely want to use hardware-accelerated PyTorch and TensorFlow "through a Linux-thingy" on Windows.

BUT I MUST AGREE WITH YOU that, over time, this is likely to fragment the Linux ecosystem and encourage people to develop software that works only in Windows "with the Linux-thingy installed" environments.

That's because sooner or later, people using Windows will publish important AI papers or release must-have software or do something else with code that runs only on Windows "with the Linux-thingy installed." The "Linux-thingy" becomes a dependency that gets installed in the background. It becomes a near-invisible component of Windows, abstracted away by layers of Microsoft APIs, code, standards, etc.

The "Linux-thingy" on Windows, in other words, becomes like an init system on Linux, which can eventually be replaced in whole or in part without most people caring or noticing. Most Linux users don't care if their Linux machine uses systemd, SysV init, or even the dead-ended Upstart. That's the "Extinguish" part.


I hate how this is constantly brought up. Yes, that was a Microsoft strategy and might still be. With that said, Microsoft has no power to make people switch. If people switch off of desktop Linux in favor of Windows, it's because Windows is either a better product or Linux was too much of an issue. Why shouldn't consumers choose the better product for themselves? Why shouldn't Microsoft or Linux compete for business? Why do you care so much whether someone uses Linux or Windows?

>With the "benefit" of forcing the Linux kernel developers to pay the maintenance costs of course.

No. Nobody is forced to maintain this. If Microsoft stops maintaining it then Linux is free to kick this to the curb.


> This patch does nothing but fragment the Linux ecosystem

Proof that Microsoft really has embraced Open Source development.


i (and others) called this 3 years ago[1].

people need to stop using windows, entirely. i recently built a gaming machine, pretty high spec, and i've resolved to never install windows on it. there are plenty of games i can't play, a few that i would like to. but i won't give money to developers that won't release linux versions of their games - even when they are utilising engines that have linux ports (PUBG being the most recent example i can think of, but any unreal/unity engine based game). there's no reason to use windows any more (for people like us, at least).

[1]: https://news.ycombinator.com/item?id=14320200


I'd love to be using an open source desktop, but sadly Linux Desktop is still a goddamn garbage fire as far as I'm concerned. Not that long ago (months) I would say that 4 out of 5 of my desktop and laptop computers ran Linux, but now that number is 1/5 because I just got sick of dealing with Linux Desktop bullshit on them. My main gaming rig runs nVidia because it is frankly best in class, and nVidia has shit Linux support so that's a no-go. I also have a used Oculus Rift, which again rules out Linux.

There's a reason Windows is still dominant in the Desktop space despite all its problems.


> Not that long ago (months) I would say that 4 out of 5 of my desktop and laptop computers ran Linux, but now that number is 1/5 because I just got sick of dealing with Linux Desktop bullshit on them.

interesting. i put ubuntu 20.04 on this and it's been plain sailing. now usually i'd say i've got a high tolerance for bullshit when it comes to linux, but i'm not exaggerating when i say i did not have to touch the terminal to:

1. install it without any extra crap

2. switch from nouveau to the nvidia driver - version 440.64, a very recent driver (which isn't very fair to nouveau, i didn't even give it a try, really)

3. install steam and games and have them work flawlessly from the get-go. games i've played:

* counter-strike global offensive (a lot..)

* the binding of isaac (would be bad if this didn't run!)

* deus ex mankind divided (was _very_ surprised to see this available and working perfectly, to be honest)

* black mesa

a short list, but i've only had the machine for a week.

i know it's stupid to say "hey i didn't have to use the terminal", but really, ubuntu has come a long way. i couldn't have said that last year, i don't think.

> nVidia has shit Linux support so that's a no-go.

eh, i'm gonna respectfully disagree there. linux support from nvidia has been a bit patchy (and the nvidia driver on windows isn't exactly a dream at times, either), but i've been using the nvidia driver on linux on various installs on various architectures on various hardware for _years_ and i've never had a significant problem. AMD, on the other hand.. i don't miss fglrx.

> I also have a used Oculus Rift, which again rules out Linux.

well i also won't support facebook, so that's not an option for me :)

honestly, it's _really_ disheartening to see the replies to this comment and see how far the normalisation of deviance has come. the _only_ tool we have as end users to stop companies from being shit is to vote with our wallets. and yet the majority of responses is to just give up and accept that that's how it is.

i think we're doomed as a species :)


What kind of linux desktop bullshit? Has it been getting worse?

No, it just isn't getting better. Any time you want to do something even remotely interesting, like install software that is actually up to date and therefore not in the repo, you have to jump through a bunch of hoops. One of my other use cases involved a device with only a 16GB internal disk, so naturally I would like to install applications to an external disk so they actually fit. This is usually trivial on Windows but not possible with any package manager on Linux [0]. I still constantly run into issues with hardware, especially sound and video, that require hours of googling and manually tweaking config files to fix. Oh yeah, and half the time the google results for any issue apply to the way things were done 5 years ago, which in typical Linux fashion means there's an entirely new way to do it now that bears no resemblance to the old one.

[0] Flatpak can do it if you setup an entirely new installation on that disk, but flatpak has its own issues. Snaps can't do it at all, and while AppImage is really great relatively few projects distribute that way.


what do you mean you wanted to install applications to an external disk?

if you have a separate OS on the external drive, you could chroot in and install it. If you just want that one application to be on the other disk, you could mount it to wherever on the filesystem the program would be installed. However you probably don't wanna do that on an application by application basis. you could mount /usr/bin/ on the external drive for example. or use LVM to share disk space between the disks
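The symlink variant of that suggestion can be sketched as below (the directories stand in for the external disk and a directory on $PATH, and it only covers a self-contained application):

```python
# Sketch of the "app on an external disk" symlink approach.
# Paths are simulated with temp directories; a self-contained app
# lives on the "external disk" and is symlinked into a bin directory.
import os
import subprocess
import tempfile

def install_on_external(app_name, script_body, external_root, bin_dir):
    """Place the app under the external disk and symlink it into PATH."""
    app_dir = os.path.join(external_root, app_name)
    os.makedirs(app_dir, exist_ok=True)
    exe = os.path.join(app_dir, app_name)
    with open(exe, "w") as f:
        f.write(script_body)
    os.chmod(exe, 0o755)
    os.makedirs(bin_dir, exist_ok=True)
    link = os.path.join(bin_dir, app_name)
    os.symlink(exe, link)  # the app itself stays on the other disk
    return link

# Simulate the external disk and the local bin directory.
external = tempfile.mkdtemp(prefix="external-disk-")
bindir = tempfile.mkdtemp(prefix="local-bin-")
link = install_on_external("hello", "#!/bin/sh\necho from-external\n",
                           external, bindir)
out = subprocess.run([link], capture_output=True, text=True).stdout.strip()
# out -> "from-external"
```

As the grandparent notes, this breaks down the moment the application expects files under /usr/share, /etc, and so on, rather than living in one directory.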

Flatpak, Snap, AppImage and other portable formats aren't really preferable. They lead to a software ecosystem that amounts to just downloading and running executable binaries off the internet, each with their own overhead.


> what do you mean you wanted to install applications to an external disk?

Kinda illustrates my point about this being a foreign concept to Linux Desktop people. It's pretty simple: I want to put an application on an external disk and run it from there.

> if you have a separate OS on the external drive, you could chroot in and install it.

No thanks. I'd just like to have the application stored on an external disk, and execute it on the OS I have installed.

> If you just want that one application to be on the other disk, you could mount it to wherever on the filesystem the program would be installed.

Not really, because Linux likes to have an application spread its files all over the hierarchy, so in reality I need to use some form of union-mounting and/or symlinks. Of course that is only sufficient if there are no library conflicts between what the application wants and what the system uses; in that case I need to use LD_LIBRARY_PATH and other tricks. In some cases I'll need to use a launch script that calls a different ld.so.

That's a lot of hoops compared to how sane operating systems do it.

> LVM to share disk space between the disks

System breaks when disk is removed. No good.

> Flatpak, Snap, Appimage and other portable format aren't really preferable. they lead to a software ecosystem that amounts to just downloading and running executable binaries off the internet, each with their own overhead.

Which is pretty much what I want, because the alternative is dealing with the bullshit I mentioned above whenever you step outside the package manager's sandbox.


NixOS gets you there, I think. They deviate from the standard hierarchy to keep each application in its own directory, which could be installed on its own drive if need be, with symlinks back to the main tree for legacy purposes. For example, /bin/sh would be a symlink to /nix/store/s/5rnfzla9kcx4mj5zdc7nlnv8na1najvg-bash-4.3.43/bash, but only because there's enough legacy out there that expects /bin/sh to be runnable verbatim.

Now if only I didn't need to learn a new language in addition to all the OS tooling to use NixOS.

This is what AppImage gets that the rest of Linux doesn't seem to: if an application is a single file/folder, you don't need anything more complicated than a file manager to manage it.

P.S.: after a quick glance at some documentation, I don't see any mechanism that would allow me to install nix packages to arbitrary locations anyway.


You mean you want an actual installer, like in DOS or Windows? IMHO, it's crazy.

I had a similar problem. I joined two physical disks into one logical volume using LVM and forgot about the problem. :-/


> You mean you want an actual installer, like in DOS or Windows? IMHO, it's crazy.

Well that's still better, in my opinion, than package managers and all their various restrictions. But ideally I want something like AppImage, where an application is a singular entity that can just be moved at will and run from wherever [0]. Most DOS programs and many Windows programs (if you extract them from the installer) will still work like that.

> I had similar problem. I joined two physical disks into one logical using LVM and forgot about problem. :-/

If you unplug the external disk, your system stops working. This is not what I want, as I mentioned already in the post you're replying to.

[0] classic MacOS, RiscOS, and NeXT all had programs that worked like this too. Linux has yet to achieve the flexibility in application management afforded by the OSs of the 1980s.


Hmm, that's fair. I'd encourage you to try something outside of the standard redhat/ubuntu distros. Manjaro is nice and is pretty much always up to date. Also has access to the AUR, so there are packages for pretty much everything. It's a lot easier to do interesting stuff.

LVM can make two disks appear like they're one, but of course if you remove any of them they both fail. Doesn't really solve the underlying "I want to install apps outside of the root partition" problem though.


> I'd encourage you to try something outside of the standard redhat/ubuntu distros.

Generally speaking they have the same problems as the mainstream distros, but with the additional caveat of even less chance of googling solutions and worse or no support at all from non-oss software.

> LVM can make two disks appear like they're one, but of course if you remove any of them they both fail. Doesn't really solve the underlying "I want to install apps outside of the root partition" problem though.

Precisely. The very concept is just so foreign to most of the Linux community that even talking about it gets you strange looks. It is actually possible to do with some significant AUFS-fu, but it's a hell of a hoop to jump through for functionality that is pretty natural in every other desktop OS that ever existed.


I've heard GoboLinux [0] does not follow Filesystem Hierarchy Standard.

[0] https://en.wikipedia.org/wiki/GoboLinux

----

What you describe is like installing in $HOME. It's not unusual: Python virtualenv, Ruby rbenv, node_modules. This comes with a trade-off: either the system knows where to search, or you have to define it per project. By the FHS, the entire Linux tree is one project. In Windows... configuration is a pain.

Flatpak someday.


Now that I think about it, Ruby's gem layout is not like the FHS:

    └── gems
        ├── irb-1.2.3
        │   ├── exe
        │   │   └── irb
        │   └── lib
        │       ├── irb
        │       └── irb.rb
        └── rack-2.2.2
            ├── bin
            │   └── rackup
            └── lib
                ├── rack
                └── rack.rb
    
vs

    ├── bin
    │   ├── irb
    │   └── rackup
    └── ruby
        ├── irb
        ├── irb.rb
        ├── rack
        └── rack.rb
    
The FHS approach does not support multiple versions; the gems approach requires declaring the version in code or a Gemfile, playing with PATH, and incurs a considerable slowdown: https://github.com/Shopify/bootsnap

I think that sums up my issues with desktop Linux: any user has at least some niche requirements. For mine they are fairly doable on MacOS/Windows, but they require hours of staring at documentation and downloading/compiling source code in Linux.

I still write code in a Linux environment because that's where the ecosystem is, but for my daily driver I've given up on desktop Linux. And I've found that WSL gets me a Linux-based development environment side by side with Windows with less pain than dual booting. I've found that when it comes to using Linux smoothly, it's a lot easier to shoehorn myself into the happy paths than to bend the system to my way of doing things.


As a game developer: for most mid-tier indie titles, releasing a Linux version costs more than it makes, so, no.

And regarding using Windows: a company's behavior is important, but product quality, especially when it's a tool you use every day for work, is more important. Linux is OK for certain types of developers, but it's still awful for normal users, abysmal for office work, and obviously not a choice if you develop Windows software.


> As a game developer, for most mid-tier inside titles releasing Linux version costs more than it makes, so, no.

What are the main costs of releasing a Linux version of a game when using Unity or Unreal? My limited experience with Unity has been "check the Linux box, click the build button". It even cross-compiles no problem. People I know tell me Unreal is similar.

It's not like they have to do a full run of play-testing for Linux. Just check if it starts, play for 10 minutes and release that. We know we're a minority so we're willing to put up with bugs. Even if there's only 100 people who'd buy it only if it ran on Linux, that should pretty much cover the couple hours of dev time.


I can tell you've never released a popular product on Linux. It's a lottttt more effort than you're implying. To do it well, you need to have Linux as a target from the start of your development, it's not something you can tack on to the end. Linux graphics drivers are very different from Windows (the AMD stack isn't even the same codebase). Linux window managers are... wild. You need to build to an ABI that will be supported across distros, which mostly means the Steam Runtime, which means you need to go learn about and understand that. You can't introduce any platform assumptions ("ehh, I'll just hardcode this path with a backslash... oops"). Once you release, you're talking support for dozens of different distros, different versions of distros, different package managers, different filesystems. I know what you're going to say: "Just support Ubuntu LTS and ignore everyone else". Well, now you've just cut your customer base by at least 75%[1], and your customers on other distros are going to demand support anyway. Do you tell a paying customer to F-off because they're not using a distro with only ~20% of the market?
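The hardcoded-backslash pitfall mentioned above is a real one; a minimal illustration (not from any actual game):

```python
# A Windows-only path assumption vs. a portable equivalent.
import os.path

def save_path_bad(home):
    # Hardcoded Windows separator: on Linux the backslash is just an
    # ordinary character, so this produces one junk filename.
    return home + "\\saves\\slot1.dat"

def save_path_good(home):
    # os.path.join uses the platform's separator.
    return os.path.join(home, "saves", "slot1.dat")
```

On Linux, `save_path_bad("/home/p")` yields the single literal filename `/home/p\saves\slot1.dat` instead of a nested path, which is the kind of assumption that has to be hunted down everywhere if Linux wasn't a target from the start.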

Supporting Linux is hard. So is supporting Windows, but if Windows makes up 98% of your customer base, it's justified. 2%? Ehhhhh.

(P.S. I love Linux. See my profile. This is the kind of stuff I deal with every single day.)

[1] https://boilingsteam.com/we-dont-game-on-the-same-distros-no...


Open source your games. Open source is the only practical way that existing distros are able to test and support the majority of packages that they ship. If you do that, the cost of supporting all those other things will fall on the distros, not on you, but you need to actually play ball with them and give them something workable that isn't an opaque binary blob that is illegal to redistribute, modify or reverse-engineer. The feelings you're describing (that you need to shoulder 100% of those costs and have no other options) are a side effect of the game industry's cargo-culting of closed-source business models and refusal to even consider that there are ways to make money from open source.

Yeah, the solution to a platform not making enough money to justify supporting it is... giving away your game for free! The self-centered attitude of your comment is astounding, but all too common in open source communities.

You don't have to give your game away for free. I urge you to look into how prominent open source companies are actually making money and think about how it can be applied to games. It's not an impossible reality.

You claim I am being "self-centered", but the fact is that if you don't provide something the distros can work with and instead try to route around everybody and ship an untestable blob, then that's your fault. The reason everyone tells you to just support Ubuntu LTS is that they are the only ones with the resources to support all these random binary blobs for X amount of years, and that comes with all the pains associated with it, as you are well aware.


> You claim I am being "self-centered" but the fact is if you don't provide something that the distros can work with

Alternative view: distros don't provide a stable platform that developers can target.


Some do provide that, such as Ubuntu LTS and RHEL. Others don't.

Expecting every distro to have the exact same release & support cycle is nonsensical.


Linux LTS releases are not particularly stable from the perspective of someone who wants to distribute applications as binaries. Its API can change with every release, i.e. every two years. It's incredibly unlikely that a binary compiled for Ubuntu 18.04 will be able to run on Ubuntu 20.04 (as in Python etc.).

To contrast, if you take a Win32 game from 2003 that targets DirectX 9, it probably runs fine on the latest Windows releases. (You might have to enable some compatibility mode.)

"BUT BUT BUT if it's packaged with the distro it's no problem." Remember we're talking about proprietary applications (mostly games) here. Having these packaged with distros for years after the devs moved on to the next project is just a pipe dream.


My original statement was that if this is really a problem for you then you need to stop writing and distributing proprietary binaries and open source your game. I know this is a hard pill for game developers to swallow but there is no other way. Open source communities move fast, they are not going to slow down just to support some opaque binary blob that is illegal to redistribute or fix bugs in, and that the original developer doesn't even care about anyway. It is nonsensical to expect these communities to work exactly like Windows. These are not fortune 500 companies with billions in the bank like Microsoft that can afford to keep innovating while also supporting every single legacy program in existence forever. The only way open source communities can provide the same level of support is if you provide source code that other interested parties can keep up-to-date without worry of being sued.

These prominent open source companies that you're talking about make money from consulting or cloud services. There's no well-understood and well-tested way to do that as a game company.

It doesn't mean that there's no such way, but it does mean that attempting to find it is very risky. Making games is already a very high-risk, high-reward industry, so adding this amount of risk to the equation is advice that's insane for all but the most successful companies.


A multiplayer game is just like any other cloud service in terms of economics. And a game that allows a certain degree of customization is analogous to a consultancy. Find the pain points your business customers/partners are having and then charge for solutions. If you don't have any business customers/partners then get some, not having them is a much bigger risk than choosing any specific development model.

One of the ways these companies can reduce risk is actually by using other proven open source components. If they won't even make an effort to try to expand this, then there will never be a well-understood way.


> A multiplayer game is just like any other cloud service in terms of economics.

No, it's not - this is an absurd statement. Cloud services get most of their income from big and medium companies, billing tens of thousands of dollars in privately negotiated agreements. Digital Ocean and AWS don't live on single developers paying them $5. All their real money is in B2B.

Multiplayer games either get a flat subscription from their whole playerbase, or rely on micropayments. Either way, they spend much less on account management, where they interact with customers as individuals, and much more on analytics, where they measure and work with their customers in aggregate. These are completely different business models that completely shape the companies operating on them.

> Find the pain points your business customers/partners are having and then charge for solutions.

Games are not solving any problems - they are an entertainment product. I've seen people who have tried to reason about games in the terms you're using, and it always yielded hilarious (but also sad) results.


Open sourcing doesn't mean you need to provide all the sounds/music/3d models/assets/etc. Those can all be copyrighted separately.

> Open source your games. Open source is the only practical way that existing distros are able to test and support the majority of packages that they ship

Which might be one of the reasons Linux Desktop is so awful to develop for.


The "Linux Desktop" is not and has never been a thing beyond a vague marketing term. What you're thinking of is a loose collection of unrelated projects. Pick a single stack and go with it and things get better.

> ...Which might be one of the reasons Linux Desktop is so awful to develop for.

You wonder why people ignore your platform? Saying it isn't actually a single platform and instead is dozens of different platforms just means each of those platforms is even less relevant. Saying you're not just one platform with 10% market share but actually 20 with 0.5% each doesn't make the case for shipping on Linux Desktop any better.


I'm not making a case for shipping on the "Linux desktop" because that doesn't exist and has never been a real platform. Most smaller distros are not trying to be "relevant" to whatever the gaming community's fad of the day is. They're perfectly fine filling what niche they do. If you want to target those smaller distros as a platform, you can give them some source code to work with, or you can pay all the cost of maintaining things yourself, or you can just ignore those. They will do fine without your game.

So what are you saying? Developers should just choose a major distro and target that, then get all the flak for issues their product has on every distro that isn't supported? Because that's basically what's already happening, and developers hate it, which is why they often don't ship on Linux at all.

And many proprietary pieces of software license components from other proprietary pieces of software, so that even if they did want to open their code they'd have to strip pieces out of it anyway which doesn't really help the cause of distros integrating it. And even if it did, then the developers are reliant on the maintainers for their relationship to their customers. Have an issue with the product? Oh, it turns out that's because of this patch made by the maintainer of the package for that distro, who now has to be contacted and convinced to fix it, which they may decide not to for arbitrary reasons. Even open source developers have problems with this!


In general, distro maintainers can't help you with legal conundrums you created for yourself (you signed restrictive NDAs and proprietary license agreements and didn't consider the fine print until it was too late) or with other unrelated market problems (lack of popularity, lack of developer interest in supporting your game).

If you need to strip out some parts and there is enough will in the community to replace those pieces that you stripped out, then it will happen. If you have a product that is anywhere near being popular among the FOSS developer communities then I don't think it makes sense for you to claim that this will doesn't exist, or that distro maintainers will lose interest.


> What are the main costs of releasing a Linux version of a game when using Unity or Unreal? My limited experience with Unity has been "check the Linux box, click the build button". It even cross-compiles no problem. People I know tell me Unreal is similar.

For a simple game that uses the entire UE4 stack, you might be able to get away with that if none of your code is Windows-specific and works exactly the same on the Linux distribution you are targeting. Once you start using your own middleware and libraries from outside of the default engine, you need to make sure every single one works across all the platforms you're targeting. Many won't have a Linux-compatible version, and those that do may only work against specific distributions and hardware. Even then: have they changed the window manager? What else have they customised?

There have been many issues with anti-cheat solutions over the years on Linux.

> It's not like they have to do a full run of play-testing for Linux. Just check if it starts, play for 10 minutes and release that.

For a simple game, you may get away with this. Anything larger will require significantly more testing. I've been a producer on large games, and the QA process starts very early on. You don't 'finish' a game and then QA starts - It's an ongoing process that consists of significantly more than just launching a game.

>We know we're a minority so we're willing to put up with bugs

Some people don't see it that way though. People who have paid for a product have the right to expect it to work properly. Ticket volumes for Linux support are often higher [1], so the time investment against payback can be problematic.

GPU driver support on Linux can also still be problematic. From feature differences against Windows, to crashes. These all take additional development time. Linux developers are often more difficult to acquire in the game dev world. QA even more so.

Game dev is really hard. Some of it is hidden by engines like UE4 on the surface, but as soon as you start digging down into serious development, it's difficult.

[1] https://twitter.com/bgolus/status/1080213166116597760


Well, I've been working with Unity for 11 years now, but my experience with building for Linux is limited, so I'll go with experience from mobile and console development. Between shaders (which never cease to surprise with platform-specific glitches), the filesystem (although Unity is supposed to abstract it out, once you get serious about resource management this abstraction starts to leak), input (because you want to support gamepads with minimum hassle for the player), and a million unknown unknowns that will hopefully be found by QA, but realistically by very angry players, I'd budget at least a couple of man-months for this. With sales in the hundreds of thousands or even tens of thousands, we can expect about 2% of the audience to come from Linux, which accounts for single-digit thousands, or hundreds, of copies sold. With a price tag of $10-20, after the Steam cut, taxes, regional prices, sales and refunds, it's about $5-10 of revenue per unit, so in total, about $10k.
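The back-of-envelope math above can be sketched as a quick script. All of the numbers are the commenter's rough assumptions (midpoints picked for illustration), not real sales data:

```shell
# Rough Linux-port economics using the commenter's own assumptions.
total_sales=50000        # mid-range "tens of thousands" of copies
linux_pct=2              # ~2% of the audience on Linux
net_cents_per_unit=750   # $5-10 net per unit; midpoint, in cents

linux_units=$(( total_sales * linux_pct / 100 ))
linux_revenue=$(( linux_units * net_cents_per_unit / 100 ))
echo "${linux_units} Linux copies, ~\$${linux_revenue} revenue"
# prints: 1000 Linux copies, ~$7500 revenue
```

Against "a couple of man-months" of porting and QA work, that order of magnitude is the whole argument.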

If that would just be the money a single developer got for 2-3 months of work, it could still be fine for some regions (not everyone on HN is from US). But with a studio, you also have to spend money on marketing, payroll taxes, rent, and development of future projects, which most likely sell even worse.

So, no, I really don't see how porting to Linux would pay. Most of the developers I talked to who did it admitted that supporting OSX and Linux turned out to be a giant waste.


> Linux is ok for certain types of developers, it's still awful for normal users, abysmal for office work and obviously not a choice if you develop Windows software.

Don't want to get into an OS war here, but merely stating your opinion as fact without supporting evidence is not in the spirit of Hacker News.

> it's still awful for normal users, abysmal for office work

For office work a lot of workload has moved to the web browser. Creating documents and working on spreadsheets for instance can now all be done online with Microsoft's Office offering and other competing products like Google Docs.

And why is it awful for normal users? ChromeOS for example is just linux, and that whole platform is targeted towards 'normal users'. I use linux on all of my personal devices and I use these devices to do very 'normal user' activities like watching YT, social media, blogging, etc. There are many communities that consist of 'normal users' like /r/Thinkpads or /r/LinuxOnThinkpad that are currently enjoying linux.

Precisely why do you think it's awful for normal users and office work?


> Don't want to get into an OS war here but merely stating your opinion as fact without supportive evidence is not in the spirit of HackerNews.

These things are pretty much self evident.

> Linux is ok for certain types of developers

Web developers and HPC. Game developers? Haha, no. Embedded? Too many proprietary toolchains that don't run on Linux. Productivity and business applications? See the section on office work below. ML? nVidia [apparently I'm incorrect on this one].

> it's still awful for normal users

Alright, this is a bit subjective, but is supported by the stats that show Linux Desktop still in a distant and well deserved third place. You list a few tasks that you seem to believe are "normal user" tasks, I submit to you that you drastically underestimate the things a normal desktop computer user does with their computer, since most of the tasks you listed are now things that people do on phones and tablets.

> abysmal for office work

In most offices, Microsoft Office is an indispensable productivity tool (which many HNers, having little experience with real office work, will underestimate severely) and it is only supported on Windows (No, the web version is not the same).


> Embedded? Too many proprietary toolchains that don't run on Linux.

It's funny/surprising that you'd say that, given that Linux is the single most dominant OS in the embedded device space. What toolchains are you referring to? Here are some well known ones.

https://www.yoctoproject.org/
https://www.ptxdist.org/
https://openwrt.org/

> ML? nVidia [apparently I'm incorrect on this one].

Not only are you wrong about CUDA, but Google's custom-built chip designed for ML runs solely on Linux: https://en.wikipedia.org/wiki/Tensor_processing_unit

ML almost exclusively runs on Linux.

> and it is only supported on Windows

Office also runs on macOS. You can also run Office in Linux using WINE.

> Alright, this is a bit subjective, but is supported by the stats that show Linux Desktop still in a distant and well deserved third place. You list a few tasks that you seem to believe are "normal user" tasks, I submit to you that you drastically underestimate the things a normal desktop computer user does with their computer, since most of the tasks you listed are now things that people do on phones and tablets.

It's really easy to say anything without providing any supportive evidence to substantiate your claims. Pray tell what tasks you are referring to that "normal users" partake that supposedly can't be done in desktop linux.


>It's funny/surprising that you'd say that, given that Linux is the single most dominant OS in the embedded device space. What toolchains are you referring to? Here are some well known ones.

No, and no.

The parent was referring to embedded development toolchains - compilers, IDEs, endless small utilities, debuggers, analyzers, etc. etc. Commercial tooling is almost all Windows-exclusive, even if it's using a smattering of open-source bits underneath.

Also no. The "embedded device space" is v a s t. New projects for phones and similar hardware profiles might choose Android or another flavor of embedded Linux, for sure. But even within sexy-tech consumer products, you're as likely targeting something like VxWorks or QNX as not. The physical world, the domain of embedded devices, is not smartphones and SaaS. Unless you're talking about a very specific product category, it's laughable to call Linux dominant here.


> The parent was referring to embedded development toolchains - compilers, IDEs, endless small utilities, debuggers, analyzers, etc. etc.

I know what the parent was referring to.

> Commercial tooling is almost all Windows-exclusive, even if it's using a smattering of open-source bits underneath.

No it's not. Literally both of the embedded OS' you mentioned (VxWorks/QNX) support Linux as first class hosts.

https://blackberry.qnx.com/en/software-solutions/embedded-so...

https://www.windriver.com/support/site_configuration/docs/wr...

What Windows exclusive tooling are you talking about?

And you are underestimating the prevalence of Linux as the OS for embedded devices.


ML is actually mostly done on linux. CUDA works absolutely great with nvidia binary drivers. Display performance is irrelevant.

Because when I open a 300-page, media-heavy game design document in Google Docs on my i7, 32GB computer, it still lags when I scroll. And the last time I tried to use OpenOffice (or whatever the fork was called, I mix them up), I couldn't shake the feeling that the UX was created by developers who just wanted to check boxes in a feature list, without any field testing whatsoever.

> not a choice if you develop Windows software

A minor nitpick, but cross-platform development is usually painful work regardless of which platform you use.


I, for one, like to use my computer. I do not have time for my computer to be some battleground of political OSS zealotry.

seriously, the problem is apps wanting unfettered, opaque access to your system and data. If the default Steam install can r/w everywhere, anti-cheat needs kernel modules, and a game can install a Trojan (like recently exploited with Source) - I say: no thanks to that. Have a VM for gaming and all proprietary stuff, and say a big fucking NO to the shitshow the "games" ecosystem as a whole is...

This feels more like a reverse takeover to me. MS actually has a legacy problem in the form of the windows ecosystem. Under Ballmer this was a problem that required defensive moves and rhetoric to not lose people to Linux and OS X and it didn't work. Under Nadella, a bait and switch was initiated where the problem (windows legacy) is slowly switched out for Linux. This is not the embrace and extend policy of last century. They don't care about software licenses; they care about SAAS revenue these days.

In the process, they are rapidly regaining developer trust by building productive relations with the same developers they were alienating under Ballmer.

It's smart. They own Github now. Most stuff that happens there is not windows centric. But increasingly the MS developer ecosystem is being untangled from windows in any case. That's necessary to future proof it.

While the windows kernel is nice and the driver ecosystem around it serves MS well, it's actually been a problem for them as well. They've failed in the phone market (repeatedly) because linux was a better fit for OEMs. Also they've had Google and Apple compete with them effectively with the ipad and chrome os in a market where MS was peddling crippled laptops. These are all examples where windows legacy was part of the problem for MS. They weren't able to compete there. Their crippled laptops were too expensive and uncrippling them would kill their high end market. So people bought ipads and chrome os laptops instead.

Increasingly, desktop software that is not web based is getting more and more of a niche. Even Office at this point runs well in a browser, and MS actively supports native applications on all their competitors' platforms (Android, iOS, OSX, etc.). At this point that's not optional from a revenue point of view. A Windows-only Office would be a problem at this stage. They support it and they actually do a decent job too. This too is something that happened very quickly under Nadella. A lot of the Azure revenue is in fact Office revenue.

IMHO this move to support GPU virtualization and ultimately running linux desktop apps, gets them access to a lot of niche OSS applications for Linux, the entirety of the machine learning ecosystem, and the community of professionals using that. Not all of them will switch to a windows laptop of course. But some will and they tend to be the type that spends money on their tools.

Additionally, they are doing a clever play with APIs, github, cloud native stuff in Azure, development tooling etc. where they are not blocking using non MS things but merely make the choice to buy into their premium SAAS subscriptions more attractive. All of this stuff is usable without doing that but if you are using VS Code already, have your code on Github, and are doing some AI stuff in the cloud, etc; it's a small step to buy into a well integrated ecosystem provided by MS where you are using a windows laptop with their oss tools, while deploying to Azure, and maybe doing your office stuff using office 365. The value is no longer in selling the OS but up-selling SAAS and making sure the choice to buy into that is logical, easy, and natural no matter what you use.

Github codespaces is a great example of that. I bet it will be really easy to setup and come with some nice SAAS subscription. It's all OSS and you are welcome to run it on AWS or your own cloud. But I bet it will be easier to run in Azure. MS tools, MS cloud, MS as the easiest place to run linux developer tools (!!!), etc. They won't force anyone to switch. They don't care about individual developers but they do care about what their bosses sign up for in terms of SAAS services. That's where the money is.


> Under Nadella, a bait and switch was initiated where the problem (windows legacy) is slowly switched out for Linux.

If that is what's happening, then what's the endgame? Switching WSL around so that "Windows host + Linux guest" becomes "Linux host + Windows guest", i.e. Windows becomes a Linux distribution running native Windows apps in a VM with seamless integration? I'm somewhat intrigued by the possibility, but I don't see how it could work given the ubiquity of vendor-supplied device drivers targeting Windows kernel API/ABIs.


A large part of their business is already running on or compatible with Linux. The endgame is that they get your money regardless of what OS you are on.

I'd expect the importance of the windows kernel for revenue growth to be increasingly less important over time. Of course they won't drop it outright; at least not right away and it's likely to stay relevant for e.g. pcs/laptops and gaming. As for games and vr content, a lot of game studios already use cross platform sdks and Linux support for games with and without emulation is pretty decent these days. Also, Android and IOS are big targets for games.

Most hardware vendors don't target windows exclusively. Some do of course but a lot of hardware works fine on other platforms even if vendors don't actively support that. Anything intended for data centers runs linux primarily. Most laptop vendors have a few linux friendly laptops at least to not lose out on the pro developer market that tends to actually buy their more expensive laptops. Likewise, most graphics card vendors want to support e.g. machine learning and that requires linux driver support.

But I get what you're saying. My observation is that MS is very friendly lately with Ubuntu. I don't think Mark Shuttleworth is interested in selling outright, but an MS distribution might be a logical next step given their increased dependency on Linux on the desktop and in the cloud.


It looks like they are already at the "Extinguish" phase for email. Try reading this:

https://lkml.org/lkml/2020/5/19/1527

:(


I think that's a bit hyperbolic. The Outlook team's choice to not do word wrapping in the plain-text version of an email is unfortunate, but defensible IMO, considering that most people use an email client that can display HTML email. And before the defenders of plain-text-only email chime in, HTML email has features that most people actually want. The world has moved on from the time when emails were displayed in fixed-width terminals. So Microsoft has as well.

Disclosure: I work at Microsoft (on the Windows accessibility team). At work, I use Outlook. Outside of work, I use Mozilla Thunderbird, but even there I send HTML emails sometimes.


I don't think it's hyperbolic. If someone has sent you an email in plain text with wrapped lines, your mail client will know this and it would be reasonable to assume that perhaps the sender may be using a mail client not capable of rendering HTML emails nor autowrapping long lines. It would be a sensible choice to format the reply in the same format as that of the email received.

By stating "HTML email has features that most people actually want. The world has moved on...", you seem to be implying that the primary method of communication of Linux kernel developers is irrelevant and their use-case for email is not important. In fact, the more I think about this, this is a classic example of Embrace, Extend, Extinguish.


I think what we have here isn't a deliberate effort to extinguish a standard, but a clash between two different cultures. LKML follows traditional hacker norms, from the days of actual terminals and slow connections, whereas Outlook is built for the world of GUIs, WYSIWYG, and more or less high-speed connections. The latter is what the vast majority of people have chosen, so it's a sensible business decision for Microsoft to not pay much attention to the old ways. It's just unfortunate when those two worlds collide.

I appreciate your sentiment, but I don't know, I'm not convinced. You can have all the GUIs and WYSIWYGs you want, but as I said, email replies should default to the same format as that of those received. Thunderbird (a modern email client) does this, if I recall correctly. It's very easy to avoid this mess.

I even doubt the patch will be accepted upstream. That patch adds nothing for other distros and is not convenient to maintain upstream.

No. They are not that company anymore and it’s been clear they have not been that way for over a decade.

This is a step towards moving their API over to Linux so they can dump Windows as an OS and provide it as a docker container service for enterprise.


> It appears that Microsoft has now initiated the "Extend" phase of their classic Embrace, Extend, Extinguish playbook.

Or, much worse, M$ is now trying to kill Linux as a desktop OS, the same way Elop killed Symbian as a smartphone OS.


How exactly will they extinguish Linux and open source?

While this was my initial thought as well, on second thought it doesn't make too much sense. Well, depending on whether you're wearing your tinfoil hat properly, you could say this is for testing the waters, but who the fsck would target their game at this? Write against native Linux but use DirectX for the graphics? Just why? Sounds like the most stupid thing ever.

The CUDA support thing from other comments below seems the only sane use case so far, and in that case I don't really see the EEE either, but just good old "make shit work".


EEE is incremental by nature. This represents a lever they could use to create an ecosystem of Linux software which only works on WSL.

> but who the fsck would target their game at this?

It doesn't have to be games. Maybe MS releases an ML visualization library for Linux which requires D3D. It's not so far fetched.

Given the cumulative pain developers have had to deal with supporting IE6 over the years, I think it's warranted to be wary of this kind of thing.


Eh, still not convinced. Porting that to OpenGL is half a day of work. Like I said, you might say they're testing the waters, but I'm just not paranoid enough to scream EEE yet.

> This also enables third party APIs, such as the popular NVIDIA Cuda compute API, to be hardware accelerated within a WSL environment.

Dollars to donuts this is why Microsoft is implementing this. GPU acceleration is becoming a critical feature for many users (but especially developers) and this will continue. If WSL is to be a serious competitor, this is necessary and I'm glad to see it showing up. This is true of cloud compute, too, and Microsoft is betting big on cloud as its future growth area.

> Only the rendering/compute aspect of the GPU are projected to the virtual machine, no display functionality is exposed.

The Linux gaming folks will be pretty sad about this one. Anyway, this isn't really a Linux port of DirectX. This is GPU compute via DirectX APIs.

So now, I'm just waiting on monitor mode/AF_PACKET for WSL...


Yup, from one of the MS staff replies further in the thread[1]

> There is a single usecase for this: WSL2 developer who wants to run machine learning on his GPU. The developer is working on his laptop, which is running Windows and that laptop has a single GPU that Windows is using.

Can't say I can get behind MS trying to shift maintenance for a Windows only "feature" onto the Linux devs here.

1. https://lkml.org/lkml/2020/5/19/1139


> Can't say I can get behind MS trying to shift maintenance for a Windows only "feature" onto the Linux devs here.

Interesting take on the situation. This is effectively a driver they need to get into the kernel (just one that targets a paravirtualization host and not “real” hardware), and Linus has been adamant that the correct way to write a driver in Linux is to upstream it into the kernel.

The perspective that upstreaming a driver into the Linux kernel is a burden for Linux kernel developers is one I haven’t heard before, and seems to clash with Linus’s typical stance. Is this something that has some prior examples? Genuinely curious.


Drivers for use by Linux, not for use by antagonists.

To be honest I don't quite understand what stopped those developers from running machine learning on their GPUs under Windows itself. Most frameworks work just fine. I've been doing quite a lot of TensorFlow with both Python and .NET.

The only time I faced the need for Linux box is trying a demo project from OpenAI, which did not use the features it required Linux for on a single machine anyway.


I believe the purpose is for devs who are deploying to Linux: their toolchain may not fully work in Windows, but they want to have a similar dev/test env within Windows... pretty much the whole point of Windows Subsystem for Linux (WSL). It's not that the frameworks don't work on Windows, it's the deployment tech, like Docker, Ansible, their build scripts, etc. Some users may be doing something totally custom on top of the GPU without a framework layer, but that is probably very few users. I'd have to guess that those users would be the type that would get MS to go through the effort, though...

Do people actually use Docker and Ansible a lot for ML???

Doing something custom on top of the GPU is also not much different on Windows, than Linux. CUDA is basically the same. OpenCL and Vulkan are available too.

I'd like to hear perspective of a person, who actually does ML specifically on Linux for some reason.


The reason to use Linux is to follow the crowd. I use Nvidia's docker container because the plurality of devs use it. Over the course of my career I've found that well-trodden paths tend to have a _lot_ fewer bugs thanks to other people finding them first, and when I do get some

https://github.com/pytorch/pytorch/issues/37790

weird

https://github.com/pytorch/pytorch/issues/32575

bug

https://github.com/pytorch/pytorch/issues/25301

I don't have to spend time explaining or justifying or isolating my setup.

Conversely though, my work is itself off the beaten path enough that I'm likely to run into weird bugs. If I was pushing images through a CNN, that'd be well-trodden enough on every platform that I'd be a lot less fussed about which particular platform I use.


Well, CUDA is only one thing. Especially in the Python environment, it's only a matter of time before you run into some weird dependency issue that is Windows-only. For example, getting XGBoost completely up and running on Windows requires you to either build it yourself or download a .dll from a university link. Installing it on Linux is just a proper pip install.

Also, Windows not having a built-in C compiler makes you dependent on the horribly convoluted Visual Studio stack, which is a dependency for some Python ML libraries. Docker makes it a lot better to run, and I almost always deploy in a Docker container because the ML modules I deliver are often interacted with as a black box with a REST API on top.


It looks like Anaconda supports XGBoost on Windows.

You might be right about C compiler. But something itched when you mentioned Docker. Could getting Windows SDK installed be harder, than installing Docker?


I just use chocolatey for that > choco install windows-sdk-10.0

That being said, I had to avoid installing VS 2019 for quite a while because the Node.js native module build chain couldn't work with it. There are complexities.


Things like filename limits and command length limits seem to get hit on windows way more frequently, making all code more fragile in general, and a million other little things.

Even popular libraries like zeromq don't support named pipes on Windows because of how complicated they are and how different from everywhere else.

Just determining what visual studio version is installed seems to trip up projects all the time.


Mostly doing several kinds of NLP: My actual setup is a Windows Laptop to SSH into Linux machines w/ tmux session. However, I really appreciate WSL for working offline, etc.

My main reason: It is the most convenient way to have Unix tools (grep/sort/cut/sed/less/...) and bash available. Cygwin always was a pain, MinGW / GitBash felt much better, but ultimately WSL just feels best.

These tools are incredibly valuable to my workflow. Sure, stuff like pandas can be nice for small datasets, and some data sits in some DB/Kafka/distributed system. But there have been countless cases where unix tools allowed me to take xxGB zipfiles of text and do basic examination or even build baseline models within a few hours.
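A typical quick-look pipeline of the kind described might look like this (the filename and sample contents here are made up for illustration; a real corpus would be the multi-GB zipfile from the comment above):

```shell
# Stand-in for a multi-GB gzipped TSV corpus.
printf 'cat\tx\ndog\ty\ncat\tz\n' | gzip > corpus.tsv.gz

# Peek at the first lines without decompressing the whole file.
zcat corpus.tsv.gz | head -2

# Most frequent first-column values - a crude baseline frequency model,
# no pandas, notebook, or extra installation required.
zcat corpus.tsv.gz | cut -f1 | sort | uniq -c | sort -rn | head -10
```

Because every stage streams, this works the same on a three-line sample and on a corpus that would never fit in memory.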

Sure, there always are alternatives to use these tools and there are many equivalents. But I would always prefer WSL + conda for Linux to a typical "Windows Conda" installation with that weird GUI and the need to install so many different applications to even just look into the first or last few lines of a huge textfile.

EDIT: That said, of course I can/could always just run a jupyter notebook under Windows using Windows CUDA + GPU and share files with a WSL bash where I do my modifications. But again, everything within the same system just feels better (ipython shell magic, no worries about whether paths to the same file are really identical, etc.), and while this is by no means a game-changer, it is just nicer that way.


I do all of my development on linux, if I can, but to be honest the GPU support is generally better on windows because that seems to be the main platform AMD and NVIDIA target - though linux support is not too bad. GPU support is the only potential benefit in using windows that I can think of though. Everything from package management, to build tools, FOSS support, community, troubleshooting, etc. is generally better on linux.

Docker at this point is InstallShield infrastructure for startup production-quality freeware.

It is being used both for training and inference in order to package dependencies and scale up.

Just curious: what kind of models are you training, that require scale-up using Docker/Ansible/Kubernetes?

But Windows is a painful OS to use for anything other than gaming. Ideally, I'd like to see the exact opposite of this: run Linux with a Windows subsystem just for gaming.

Many companies, including the one I work for, have most development happening on Windows. It is far easier to manage for Enterprise use-cases, with all relevant services integrated and obtained from a single vendor.

Not to mention that there are quite a few development environments for more obscure platforms that still only exist for Windows.

Overall, since most development time is spent in an IDE, the OS is really of little relevance to software development. Sure, some people insist on using command line tools, and that is unlikely to be pleasant on Windows, but a lot of other developers don't, and we couldn't care less whether we're running our Emacs on Linux or Windows or Genera or whatever.


I’ve programmed in a professional capacity on macOS, Windows, and Linux. Windows used to be the worst, but with WSL and VS Code, I would say it has overtaken macOS in my mind, because WSL is better than using brew.

Sure, companies may love it, but developers hate it.

I use a Windows laptop at the company I currently work for, because everything is locked down and I wouldn't be able to get my own laptop connected to the network. (Or so I thought; a co-worker managed to use the Windows laptop as a bridge to his own MacBook.)

Now you're right that as long as I stay in the IDE, it's not so bad. But every once in a while I need to do something outside the IDE, and I immediately get slapped in the face by how stupid some things are. And because it's an enterprise environment, some things are even worse than usual; opening a folder, or saving something, can be unreasonably slow because either it's a network drive or it needs to be checked for viruses and malware while I'm trying to use it. Or for some other reason. I don't know; I just experience the extreme slowness.

Also, on top of the old terrible DOS shell, there's now also PowerShell, which is supposed to be better. It apparently has some powerful features I don't really grasp, but it's still not remotely as good as bash. And sometimes the command line really is unavoidable.

But the real pain is at home. When I activate Windows 10 on a new machine, I need to create a Microsoft account. I don't want one, but it takes serious determination to avoid it, because behind every message is another trick to sucker you into an MS account. When you finally do manage to create a local account, you're immediately expected to compromise your security with 3 insecurity questions, and no way to avoid it as far as I can tell. Previous versions of Windows did not have this stupidity.

Also, somehow Windows keeps losing my mic, speakers or camera. Once I've found the right troubleshooter, it immediately figures out how to fix it, which is great, but it also keeps losing them again. And finding the right troubleshooter takes a couple of steps and a bit of searching. I feel like I need to pin several relevant troubleshooters to the taskbar.

And then there's the total lack of access control. To install anything, you need to be admin. I gave my son a restricted account, but he can't do anything with it. I'd like to be able to create an account that can install games, but can't compromise the system. No such option in Windows. If you can do anything, you can do everything. Unless you're in an enterprise environment, in which case you often still can't do anything. So I guess more detailed access control does exist, but only for enterprise users or something.


You generally have some good points, and some points that have more to do with preferences, or luck with hardware. I would note that the lack of anti-virus software is one of the reasons a lot of companies don't want to run Linux on employee systems. Locked-in Windows systems at least have the advantage that even if you Run as Administrator some .exe that you got emailed, there is a good chance that your AV will not allow it to run; if you're running something as sudo on Linux...

> And then there's the total lack of access control. To install anything, you need to be admin. I gave my son a restricted account, but he can't do anything with it. I'd like to be able to create an account that can instal games, but can't compromise the system. No such option in Windows. If you can do anything, you can do everything. Unless you're in an enterprise environment, in which case you often still can't do anything. So I guess more detailed access control does exist, but only for enterprise users or something.

Here I never understand this point. You can't do anything on a Linux system if you don't have sudo access - it's not like apt or yum have any special magic to allow non-admin users to install stuff. And if you can install software on a system, you can already do anything else. Especially Games, which install drivers to perform DRM and anti-cheat bull.

Now, if you want to look into it and waste quite a bit of time, Windows does allow you to configure access control at a very fine-grained level for access to non-system folders. But as long as the installers want to install things in system folders, there really isn't any solution.


I admit I've never really looked into how detailed Linux is in access rights, but on Unix systems, it's very normal to have install rights for specific directories. If Linux doesn't allow that, that would be disappointing, but I strongly suspect Linux allows this just as much as other unixen. So that would mean you can install stuff without sudo rights as long as you get group rights to the right directory. And that's a much safer approach to security than all-or-nothing.

I'm pretty sure neither `apt` nor `yum` nor other common package managers support any way of running as non-root. Of course you can download the sources and compile yourself, or maybe even find a binary distribution with all dependencies included (good luck with that).
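That said, installs that don't need root are routine on Linux if you use a prefix you own instead of the system directories. A minimal sketch (the commented-out package and project lines are placeholders for whatever you're installing):

```shell
# Per-user installs without root: target a prefix under $HOME instead of /usr.
PREFIX="$HOME/.local"
mkdir -p "$PREFIX/bin" "$PREFIX/lib"

# Python packages (no sudo; pip's --user scheme targets ~/.local by default):
#   pip install --user some-package
# Autotools-style source builds:
#   ./configure --prefix="$PREFIX" && make && make install

# Make per-user binaries visible:
export PATH="$PREFIX/bin:$PATH"
```

This doesn't get you distro packages through apt/yum as non-root, but tools like conda take the same own-your-prefix approach further.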

I disagree, Windows is very painful for me to use at a basic level compared to Linux. I would be very unlikely to take a job that forced me to develop under Windows.

The same is mostly true for me when trying to use a Linux desktop.

However, if I'm using IntelliJ or Emacs and Firefox, I don't really need to care what OS is running underneath too much.

Edit: of course, Linux and Mac are available for devs that prefer them. It's still much easier for IT to manage 7000 Windows desktops and a couple hundred Linux ones than it would be to manage 7000 Linux desktops.


I'm not sure I agree with that. 7k "normal" machines for any given value of "normal" seems like it would always be easier than 7k "normal" machines and 200 oddballs. Is the sysadmin tooling for Windows really that much better?

Are there even systems for managing many desktop machines running Linux?

- Could you push a patch to Linux systems and have it install at the user's convenience (with some end date)?

- Can you do that in waves without manually configuring things?

- Can you remotely wipe a system if required?

- Is there any popular anti-virus software for Linux, to protect company files in home folders from user mistakes?

- Can you help users install some software without giving them full access, but also without requiring IT intervention for every installation?


> It's still much easier for IT to manage 7000 Windows desktops and a couple hundred Linux ones than it would be to manage 7000 Linux desktops.

Why? I have never seen Windows being managed entirely hands-off whereas Linux just works.


I don't understand what "being managed entirely hands-off" means. I can assure you there is no manual intervention from IT on every desktop system in the fleet, so I would call that exactly "being managed hands-off". I am certain there are always a few systems that do need some manual intervention for whatever reason, but that seems to just be the way with computers. It's not like Ubuntu updates never break, or no one ever gets to install a broken package before it is retracted.

By 'entirely hands-off' I mean that the computer just keeps on running. On Linux, most packages are part of the repository and update automatically. On Windows, those packages have to be updated manually. Even if everything is automated, the computer will have to be restarted quite regularly. At least to me, Windows seems to be more difficult, especially if you have a fleet of desktop systems that can be chosen to be perfectly Linux compatible.

Where does the complexity on Linux come from that makes managing them more difficult?


Even on Linux, you need restarts if you want the updates to shared libraries to actually happen. You can't just apply a critical security update to OpenSSL for example and hope that the user actually restarts their programs at some point - if you care about the update being applied, you need to know that by some date all of the programs on that system are actually restarted, and a scheduled reboot is by far the simplest way to do this.
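The "which processes still hold the old library" check is easily scriptable; here is a rough sketch of the idea (Debian's `needrestart`/`checkrestart` tools do this properly):

```shell
# List processes that still map files deleted or replaced on disk
# (e.g. a shared library that was updated but whose users were never
# restarted, so the old code is still running).
for maps in /proc/[0-9]*/maps; do
  pid=${maps#/proc/}; pid=${pid%/maps}
  # Unreadable maps (other users' processes, when not root) are skipped.
  if grep -q '(deleted)' "$maps" 2>/dev/null; then
    printf '%s\t%s\n' "$pid" "$(tr '\0' ' ' < "/proc/$pid/cmdline")"
  fi
done
```

This only tells you who needs a restart, though; actually forcing every listed process to restart by a deadline is where the scheduled reboot remains the simplest answer.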

Then there's the question of pushing an update to all managed computers. Maybe it's not a package update, but you want to change some SELinux policy for all users, or update some DNS server or the default search domain and so on.

Never mind the question of how you can instruct one of those Linux computers to delete all data it holds whenever it next connects to the internet (to handle the case of a stolen company laptop).

There are so many things that you need in an enterprise setting that have common (though probably quite expensive) tools available for Windows. Maybe some of these exist for Linux as well (I would expect RedHat to have some), but I'm not sure. Linux admin is usually reserved for servers much more than desktop computers.


Why would you want to delete the data? That's when you open the door for an attacker to access it. If the data is encrypted, it can remain on its partition because it is inaccessible.

Interestingly, apples have to be compared to oranges here. On Linux, it is easy to identify the programs that are using a library, so it is easy to restart just the services that are patched. In general, things can be scripted, so no dedicated tools are needed. But this requires somebody who understands the system. From a business perspective, this might be more expensive, or not, if the tools are expensive.


Wine / Steam's Proton does a decent job, and some older games even work better with Wine than with Windows 10.

If you have a spare GPU, a VM with PCI passthrough does an even better job, except for some anti-cheat software that artificially discriminates against this setup.

In theory it ought to be possible to switch a single GPU to/from a VM without a reboot. In practice I have no idea how huge a refactoring to the Linux graphics stack that'd require.


> "some older games even work better with Wine than with Windows 10."

This has been true for 16 bit games since long before Windows 10. Ages ago one of my favourite games stopped working on Windows, but Wine had no problems with it. So my impression has always been that Wine is excellent for really old games, but slightly more recent games, it could already be very hit and miss.

> "If you have a spare GPU, a VM with PCI passthrough does an even better job, except for some anti-cheat software that artificially discriminates against this setup."

Doesn't every CPU these days have onboard graphics? My Thinkpad X1E should support hybrid graphics, so it'd be nice if I could give the GPU to a VM and have the desktop use the CPU graphics.

But if a Windows VM does a better job, that means Wine doesn't yet do as good a job as Windows. Though it's certainly true that Steam support for Linux is growing. But I don't think every Steam game already works on Linux.


Wine has gotten very good at recent games... or more specifically, Proton has. (Proton is a Steam-maintained fork of Wine, and is built in to the Steam client.)

Official Proton "support" is limited, because it requires certification by Valve and/or the game developers that the game runs well (the equivalent of a "native" rating on winedb/protondb), but if you're willing to go down to "gold" levels of support it still runs 70-80% of all Steam windows games.

See https://www.protondb.com/


> Doesn't every CPU these days have onboard graphics?

Don't bother attempting GPU passthrough on any laptop with an AMD CPU (eg Ryzen 2700U) and Radeon GPU (eg RX 560X).

It turns out the GPU passthrough needs a dump of the Radeon BIOS provided as a file, but no-one can dump the BIOS of discrete AMD laptop GPUs. :( :( :(

Note the complete lack of RX 560X BIOS's here:

https://www.techpowerup.com/vgabios/?architecture=AMD


> Doesn't every CPU these days have onboard graphics?

On laptops, pretty much. On Intel desktops, yes, aside from Xeons. On AMD desktops, only some lower end Ryzens have "G" models.


> In theory it ought to be possible to switch a single GPU to/from a VM without a reboot. In practice I have no idea how huge a refactoring to the Linux graphics stack that'd require.

I've got this working today. I do it through swapping the nvidia driver for the vfio-pci driver (and back again if required). The slight annoyance is that you may need to restart X11 (for me this is not an issue).

I wrote about this some years ago: https://me.m01.eu/blog/2016/05/pci-passthrough-vm-monitor-se...
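For reference, the driver swap described above boils down to a couple of sysfs writes. A sketch with placeholder IDs (the PCI address and the "10de 1b80" vendor/device pair below are examples; find yours with `lspci -nn`), which needs root and a passthrough-capable setup:

```shell
# Detach a GPU from its host driver and hand it to vfio-pci.
GPU=0000:01:00.0                # placeholder PCI address of the GPU
modprobe vfio-pci
echo "$GPU" > "/sys/bus/pci/devices/$GPU/driver/unbind"
echo "10de 1b80" > /sys/bus/pci/drivers/vfio-pci/new_id   # vendor device IDs
# Now boot the VM, e.g. qemu with: -device vfio-pci,host=01:00.0
# To hand the GPU back, unbind it from vfio-pci and rebind the original driver.
```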


You can do that today with Qemu and PCI-passthrough. You just boot a VM, and pass it a physical grapics card.

Check out https://old.reddit.com/r/VFIO/

I guess this will be the standard until we can have nicer graphics drivers for Linux.


That requires two GPUs though, one for the guest and one for the host.


Right, it's easier to have an SR-IOV situation with an integrated GPU that doesn't have its own VRAM.

That's not the usual situation for people that are trying to use this scheme.


Yes, this is true, I use my on-board Intel graphics card for my host, and my Nvidia card for my guest operating system.

Apart from Photoshop, video production, audio, and many other professional uses.

Like the previous response: what does this buy me compared to developing and running natively on Windows, as there are native compilers that support CUDA et al. on Windows?


Windows is a painful OS to use for anything other than gaming and web browsing, I agree. (Well, it was, until WSL.) If I ever again own a work-first non-gaming desktop, I will once again install some Debian GNU/Linux on it.

(My gaming desktop, of course, runs Windows.)

On laptops, however, I continue to find myself jumping through idiotic hoops to use Linux. The driver support is always just-barely-good-enough. Maybe it's audio, maybe it's graphics, maybe it's power management. Maybe it's something involving networking-after-power-management or some crap involving "don't unplug your headphones while the lid is down". On my current work laptop, a beautiful Thinkpad Carbon X1, I can't get the power management to work properly, so I just have to accept that there's no hibernate. I'm constantly forgetting to shut down, put it in my backpack, and then pull it out drained. What a pain in the ass. Could someone fix this problem, probably. Can I? Not in the dozens of hours I've put into it. I hate doing IT, I hate it I hate it I hate it.

However much Macs make me want to vomit in my mouth, I can see the appeal. The drivers work at least 90% as well as Windows drivers, and the UX is at least half as good as a lightly-tuned Linux machine. "Jack of all trades, master of none, is oftentimes better than master of one"

Anyway, before this lockdown ends, I'm upgrading my laptop distro to this new distro I've heard of out of Redmond, I think it's called "Windows".


Painful? What exactly is painful? Apart from the Settings/Control Panel debacle, I don't know any issues you could be facing on Windows, unless you do a lot of C/C++ development which is still problematic due to the lack of a proper package manager.

I must disagree with the C/C++ take. Visual Studio is still, for me, one of the best IDEs out there, and the single best one for C/C++ development. And for the longest time, Windows indeed didn't have a good package manager, but over the past few years we've had vcpkg, which fills the vacuum pretty well when it comes to getting libraries without much hassle.

I still haven't had a good experience with vcpkg, but, admittedly, I don't do a lot of C++.

How do you configure the compiler to use libraries downloaded from vcpkg? CMake? Something else?
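For what it's worth, the documented route is vcpkg's CMake toolchain file; a sketch, using fmt purely as an example package:

```shell
# Fetch and build a library with vcpkg, then point CMake at the toolchain
# file so a plain find_package() call resolves it.
vcpkg install fmt
cmake -B build -S . \
  -DCMAKE_TOOLCHAIN_FILE="$VCPKG_ROOT/scripts/buildsystems/vcpkg.cmake"
cmake --build build
# In CMakeLists.txt:
#   find_package(fmt CONFIG REQUIRED)
#   target_link_libraries(app PRIVATE fmt::fmt)
```

Visual Studio's own CMake support can also pick up vcpkg without the explicit toolchain flag.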


>>run Linux with a Windows subsystem just for gaming.

aka wine / proton


That's exactly what Wine is.

It's exactly what Wine wants to be. Sadly that's not quite the same thing.

What do you mean? Wine has a different architecture because distributing Microsoft binaries is not legal, so technically it's not the same thing, but it still does an amazing job and a lot of apps/games work flawlessly.

And a lot don't. And then you have to spend a lot of time researching why and messing around with config options and maybe even compilers. And the games sometimes stop working after an update.

Conversely, if you run Windows, it's rare that you need to work hard to run a game.


This is definitely not my experience. Of my ~100 games on Steam, GOG, and EGS, only 2 don't work straight out of the box:

- Rocksmith 2014, which can work by switching to ALSA audio, but the audio lag makes the experience subpar

- Bit.Trip Beat, which I know can work by changing something, but I didn't try

It does an amazing job, but a lot of apps and games do not work flawlessly. Or didn't, last time I tried Wine. Maybe this situation has changed a lot since then, which would be awesome.

Every single version has a lot of improvements, especially regarding compatibility. I suggest giving it a try once again.

That directly implies that every single version has had a lot to improve. And this is not to diss Wine, but it is a difficult problem they are tackling.

Holding software to that standard eliminates most of it. "No, I haven't tried Google Docs yet. They're still adding features and fixing bugs. I'm holding out until it's stable."

Context matters a lot. I'm not holding Wine to unreasonable standards; it's a matter of recognizing the reality of the situation: Wine is not as exact a mirror of Windows as WSL is of Linux, and as such it will continue to have significant issues.

Are you familiar with Proton? Steam has put a lot of work into making Wine work flawlessly for many games.

https://www.protondb.com/


It seems like over the last 5 years, Wine has improved a ton. Or at least - it seems way more active.

Not sure why, but I suspect that Valve has a lot to do with it.


I mean, it's good enough for lots of games.

But there is another way there - running Windows in a VM with GPU passthrough - works beautifully in my case.


My experience with GPU-accelerated development is quite horrible on Windows for anything other than the NVIDIA-prepped Docker container. There was always something missing or some driver was incompatible. In the long run I have always regretted developing Python on Windows, also often because whatever was developed was to be deployed on a Linux box.

I do not think it's purely Windows to blame here though. It's only quite recently that NVIDIA started fixing their documentation and instructions on getting all the right CUDA/cuDNN stuff running properly on a system.


Hi. PM on Windows & WSL here.

Imagine if you could run AI/ML apps and tools that are coded to take advantage of DirectML on Windows and/or atop DirectML via WSL.

Now you can run the tools you want and need in whichever environment you like ... on any (capable) GPU you like: You don't have to buy a particular vendor's GPU to run your code.

If you're old like me and remember the dark ol' days when games shipped with specific drivers for (early) GPU cards/chips, but failed to run at all if you didn't have one of the supported cards, you'll understand why this is a big deal.


> If you're old like me and remember the dark ol' days when games shipped with specific drivers for (early) GPU cards/chips, but failed to run at all if you didn't have one of the supported cards, you'll understand why this is a big deal.

Maybe I'm not that old, but I'm old enough to remember the days when microsoft was intentionally degrading opengl performance on windows ;).


This. Some games would have a handful of different renderers for different setups, while other games would only support one specific card type (and if you were lucky, a software renderer).

Those days sucked. Bigtime. If we can avoid doing the same mistakes for machine learning then we should.


> Maybe I'm not that old, but I'm old enough to remember the days when microsoft was intentionally degrading opengl performance on windows ;).

Which is still nonsense, since this only affected the OGL driver shipped by Microsoft. In contrast to truly bad actors like Apple, OEMs were free to ship their own OGL drivers from day 1.

So sorry mate, but I have to call BS on that one.


Prob. different PM though.

>Now you can run the tools you want and need in whichever environment you like

Isn't the linked post saying you have to be running on Windows though? It seems like it would make way more sense to either port DirectX to Linux, or ditch DirectX and put those resources into supporting Vulkan.


Whichever environment you like as long as it's on Windows

Hi!

Don't you think the effort to achieve this would be absolutely massive? I don't know what kind of resources are being thrown at this project, but I'd estimate the minimum to be 3 dev teams for 2 years just to get a few variations of ResNet to work "as is". And that's just for regular models that don't require quantization or (auto-)mixed precision for training.


Neither pytorch nor tensorflow support WinML, so this is going to be a bit of a stretch still, since CUDA is still the toolkit of choice for mainstream ML frameworks.

> horrible on Windows for anything other than the NVIDIA prepped docker container

o-O nvidia-docker does not even support Windows.

I think the only thing you need to know is which CUDA version your cuDNN requires, and it was quite clearly stated on the download page. Also the same on Linux. For nvidia-docker you used to need a specific driver version.


Lots of ML frameworks are built for Linux/UNIX first. The OpenAI projects you mentioned are a good example, but also projects like Ray (ray.io). Even PyTorch - a lot of stuff works fine, but their parallel DataLoaders actually work slower on Windows than the single-threaded ones (see this github issue: https://github.com/pytorch/pytorch/issues/12831 )

If your dev or production stack is Linux based, I think it makes sense to try to bring the GPU to your dev stack instead of the other way around. If you're working with other devs who are on actual Linux stacks, it'd be a pain in the ass to always require for there to be hybrid Windows/Linux tooling.

This is mainly to woo developers who are now working on macOS, or thinking of switching to macOS, to use the Windows and WSL combination instead for ML & AI application development.

I couldn't get torch's cuda stuff working on windows. Runs fine on OSX though.

Is there a possibility Linux upstream won't accept it?

If you read the replies by Dave Airlie and Daniel Vetter, it seems somewhat likely that upstream won't accept it. Perhaps that's just initial skepticism that will evaporate after more discussion, but perhaps not.

Frankly this does just seem like MS wanting to reduce their maintenance burden on what they expect will be a very important part of their WSL offering on Windows. There's nothing inherently wrong with that desire, but the people on the other side need to weigh their maintenance burden[0] with what benefit this will have to the Linux community as a whole, which at first blush seems minimal. Especially considering that the userland pieces that talk to this driver aren't open-source.

There's also the question of whether or not you believe WSL as a whole is good or bad for Linux. If there are people who would run a Linux desktop for development who then decide not to because WSL exists, perhaps that's a bad outcome. If you have people writing more DirectX GPGPU code who would otherwise write to a standard interface like OpenCL, perhaps that's a bad outcome (to be fair, there's also a lot of CUDA out there, which is similarly problematic). Is this the start of MS going back to their "Embrace, Extend, Extinguish" playbook, or is that just a paranoid fear? They've definitely been embracing Linux, and enabling people to write DX12 GPGPU code that targets a Linux environment but will only run under WSL on a Windows install does feel like "extend".

I'm not sure where I personally stand on this issue as I haven't done my research, but I think they're interesting questions to ask.

If this gets rejected, of course it doesn't stop MS from doing any of these things, but it does make it harder for them to maintain their extensions to Linux.

[0] Airlie is even concerned that just by looking at the code, he or other DRI developers could run into future IP derived-works trouble when designing future Linux graphics interfaces.


> There's also the question of whether or not you believe WSL as a whole is good or bad for Linux. If there are people who would run a Linux desktop for development who then decide not to because WSL exists, perhaps that's a bad outcome.

I think if that really matters you should be against anything in the kernel to make it work well as a VM under Windows or MacOS or BSD, and in VMware or VirtualBox too.

From that point of view, Linux in a VM on another host is taking away from Linux running as a desktop on that host. Linux running as a VM on a VMware cluster is taking away from Linux running on those bare metal servers instead of VMware ESXi.

I think the more sane way to look at it is that Linux is an application (and its subcategory is that it is an OS) which is meant to run on various hardware and software platforms, the more the better. This strategy has worked very well over that last couple decades.

Does allowing WSL mean that some people that would install Linux on their hardware just run Windows instead and use Linux on top? Probably. Does it mean that people that already ran Windows and have never used Linux get exposed to it for the first time and get familiar with it through a few click on their existing Windows computer? That's also probable. Does it really matter in the end? Probably not.


> I think if that really matters you should be against anything in the kernel to make it work well as a VM under Windows or MacOS or BSD, and in VMware or VirtualBox too.

Why? I would guess that a very large share of linux kernels run under a hypervisor in some data center, in a public cloud or some OpenStack cluster. Won't those mostly be the same features?


Because WSL doesn't compete with people who would otherwise run a Linux box, it competes with people who would otherwise shove Ubuntu into a VirtualBox and run it on their Windows box in seamless mode. So the idea that it drives people away from Linux proper is nonsense (BTW., WSL is Linux proper), unless you also believe that installing a Linux in a VM on a proprietary system is also driving people away.

> ... shove Ubuntu into a VirtualBox and run it on their Windows box in seamless mode

Given that the WSL2 rewrite is essentially this, without even the niceties of a VirtualBox GUI wrapper to control the settings, I keep wondering what all the fuss is about.


I agree with you completely. I think that the comment I was replying to thinks otherwise.

No, I was just a little more ambiguous than I intended, due to a left-out word or two.

>>> I think if that really matters you should be against

Should be interpreted as (and I meant to write as)

>>> I think if that really matters to you you should be against

The rest of the comment should have made that obvious though, especially the last two paragraphs.


Now I get your intention; the original phrasing made me parse it incorrectly.

Happy to know that we also agree then.


>>>There's also the question of whether or not you believe WSL as a whole is good or bad for Linux.

IMO it's a good thing. Given that windows accounts for 90%+ of the desktop OS share, Windows might very well become the world's most used Linux distro.


It is, of course, decidedly not a Linux distro though. If it was, it wouldn't be an issue. I think there are positives, but it looks a lot like "Extend" to me.

I can't even say I wouldn't use it - it might be nice! But I will not use any WSL-only capability, that's for sure.


Hi. Microsoft PM working on WSL, Terminal and Windows.

WSL2 literally runs user-mode distros (and their binaries) in containers atop a shared Linux kernel image (https://github.com/microsoft/WSL2-Linux-Kernel) inside a lightweight VM that can boot an image from cold in < 2s and which aggressively releases resources back to the host when freed.
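Since that kernel is a real (if Microsoft-built) Linux kernel, you can tell you're inside WSL from its release string; a small check, assuming the usual "microsoft" tag that these kernels carry in /proc/version:

```shell
# WSL's Microsoft-built kernels identify themselves in the kernel
# release string, so this distinguishes WSL from bare-metal Linux.
if grep -qi microsoft /proc/version 2>/dev/null; then
  echo "running under WSL"
else
  echo "not WSL"
fi
```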

So when you run a binary/distro on WSL2, you are LITERALLY running on Linux in a VM alongside all your favorite Windows apps and tools.

If some of the tools you run within WSL can take advantage of the machine's available GPUs etc. and integrate well with the Windows desktop & tools, then you benefit. As do the many Windows users who want/need to run Linux apps & tools but cannot dual-boot and/or who can't switch to Linux full-time.

This will (and already has) resulted in MANY Windows users getting access to Linux for the first time, or first time in a while, and are now enjoying the best of both worlds.


The question isn't asking whether you, a Windows user who runs Windows, benefit. The question is asking what it does to Linux users who don't run Windows even a little. (And I think you know that.)

That's like asking whether Linux users who don't run ESXi benefit from its paravirtualized drivers being upstreamed. No, they don't, but those were accepted with way less brouhaha. And that's despite VMware blatantly violating the GPL for more than a decade.

With DirectX on WSL, you can do new things when Linux is running on Windows (via WSL). But these new things aren't possible when Linux is running another way (e.g. on the bare metal).

So people who use it are married to Windows.

I think folks would be absolutely excited if this was an initiative to allow writing DirectX applications on Linux, and available for Linux on bare metal. But as people realize this marries them to Windows, they go meh.


They're not intending D3D to be the client library; they ported Mesa for OpenGL and OpenCL, and are working on a Vulkan port for all of that.

I think the concern with this DirectX implementation is that it only works for WSL users, not standard Linux users. So, it's a software API that will only work in your ecosystem, not the overall Linux ecosystem.

If DirectX on Linux could also work on bare metal, the conversation here would likely be different.


My understanding is that this is meant to become transparent to the user, that really this is about enablement of hardware acceleration within WSL, and that the typical Linux userland graphics APIs like OpenGL would layer on top. So the goal isn't to get you to link to libdx12 or whatever it is they have here; it's actually a piece that will be used by Mesa to provide accelerated GL for plain old Linux apps when running in WSL. It seems like the easiest bite of the apple was offscreen rendering and GPGPU functionality, but the MS devs seem willing to work with the kernel devs to re-architect it so it fits into the typical Linux graphics stack, i.e. DRI and other lower-level systems. As far as I understand, this would be required for, say, Wayland to have hardware acceleration when running in WSL.

I'm still piecing it all together, and I definitely feel that "Extend" feeling, but I'm not sure that's what's happening here. Looks more like a few devs at MS are trying to solve the GPU Accel use case for WSL...


The DirectML API is not otherwise available on Linux. If it promises cross vendor support (which is a good thing) tools and libraries will use it.

Those tools and libraries will then not work on native Linux.


IMO it's exactly the embrace-extend-extinguish that the parent noted, which can be spun as kicking CUDA out of the game for Linux, which is good in the short term.

> [0] Airlie is even concerned that just by looking at the code, he or other DRI developers could run into future IP derived-works trouble when designing future Linux graphics interfaces.

And that is, IMHO, a very real concern and it really should not be merged.


Yeah, I think MS should address this point. Airlie also asked if everything here is covered by the Open Invention Network (OIN), of which Microsoft is a participant, and indeed I'm pretty sure it all has to be, since that is the point of OIN. But the MS devs will now need legal sign-off that it's all covered by OIN before they can respond in the affirmative. I assume that will take some time, as these things usually do, especially since this seems more like a bottom-up effort than something top-down managed.

That's pretty much always a possibility. On the other hand, they're also generally fairly open to add things as long as the developers react to concerns. I'd guess if this can be neatly stuffed in a corner and treated like any of the other Hyper-V specific drivers, and quality is okay, it has a reasonable chance to be accepted.

And if it is rejected, Microsoft can still ship it in the kernels for the distros they offer on WSL.


Yeah not the end of the world if it doesn't land immediately, based on the original devs responding that they'd rather find the right place for it than the expedient place for it

Sure, but that doesn't really stop Microsoft from achieving their goal with WSL2. They would be happy to upstream it if possible; if not, then oh well.

Why would it? That's functionality that can't be used without Microsoft's proprietary parts. Unless I missed something, I don't think there is an open source implementation of DirectX somewhere?

I doubt the Linux devs will ever see WSL as a target they have to maintain themselves.


Wine has an open source implementation of DirectX, FWIW.

Ok, not my field, but isn't it based on OpenGL? Does it count as an implementation or an emulation layer? My understanding is that Microsoft is trying to give WSL transparent access to the GPU using the regular Linux interfaces and translating them to DirectX.

Using that with wine would mean adding two emulation layers before reaching the actual driver. I fail to see any use case for that.


It's based on Vulkan, which means it has low-enough-level access to the GPU that I think it's fair to call libraries implemented on top of it "native". Most GPU drivers have some kind of translation layer for DirectX already, and there's no inherent reason why the open source DirectX implementation has to perform worse than the GPU vendor's own. I hear that the open source DX11 implementation actually beats AMD's DX11 implementation in some cases.

So yeah, vulkan is neat and opens up a whole lot for the linux world. In the future you'll probably see userspace implementations of opengl on top of vulkan, maybe even CUDA implemented on AMD gpus, although I'm not sure how practical that is. Also a whole lot of exciting GPU sharing tech, accessing GPUs inside of VMs for example.

DirectX will come to Linux, but it won't be thanks to Microsoft. You can thank Valve hedging its bets against the Microsoft Store for that.


What would be the big loss other than MS bundling a PPA? Nvidia drivers and CUDA already need a PPA, so WSL + Nvidia + CUDA as a single PPA isn't that far off.

[flagged]


I don't see it, but if it reads that way it wasn't my intention.

The associated blog post (https://devblogs.microsoft.com/directx/directx-heart-linux/) does have more details on exposing display functionality. There is no swapchain functionality yet, but it's clear they're working on it, and part of the work is "DxCore", which seems to be a cleaned up and simplified version of DXGI. So not now, but soon.

Yeah, this should be the right link. TL;DR Microsoft brings DirectX 12, OpenGL, OpenCL, and CUDA to WSL. Vulkan is under investigation. That's really a piece of exciting news!

We've changed to that from https://lkml.org/lkml/2020/5/19/742. Thanks!

Yes, machine learning is the first priority as they say in the thread. However, the blog post goes on to say that window system integration is coming. This will eventually be a full graphics stack.

> Anyway, this isn't really a Linux port of DirectX

The entire user mode side of Direct3D is ported, in addition to the user mode parts of the Nvidia, AMD, and Intel graphics drivers.


> This will eventually be a full graphics stack

Are we seeing the start of the migration of Windows to linux?


My guess is that MS and Apple are both slowly trying to steer their ship in the same general direction as ChromeOS: a stable, locked down OS that runs applications in dedicated sandboxes/containers/VMs. No longer does the OS need to provide the same "shell" to those applications. You don't need a library-based wrapper to the syscall layer. The paravirtualized hardware is the new syscall layer. You can wrap whatever OS interface you want around that in order to support different kinds of workloads. Games can run as close to the system as possible. Workloads destined for the cloud can run in a Linux environment. Instead of being intermediated by clunky VM kit from third-party vendors, they'll provide a lot of it themselves to optimize performance and ensure adequate security between environments in a user-friendly manner.

By making the virtualized hardware the "glue", they can avoid the GPL/copyleft infection of their commercial OS, while supporting different kinds of developer experiences.


>> the migration of Windows to linux

Please no. Please keep your peanut butter out of my chocolate. Call me a purist, but linux should take nothing from windows, give no ground, make no compromise. One must die for the other to live.

https://bugs.launchpad.net/ubuntu/+bug/1


Chocolate and peanut butter are pretty good together. Just sayin'.

Wait, is chocolate and peanut butter really a thing?! That sounds quite horrible to my non-US ears.

Edit: yep, an online search seems to say that's an actual thing. I guess I'm part of the ten thousand today https://xkcd.com/1053/. I will never understand the US fascination for peanut butter.


Chocolate and peanut butter is amazing.

And yes, as a sibling notes, Reese's peanut butter cups are actually alarmingly tasty, but.... as with any $1 chocolate bar, that's shitty HFCS-saturated chocolate and shitty palm-oil-laced peanut butter, with way too much sugar in it, so if you're too good for that, well, that's a credit to your tastebuds, good on ya.

So eat real chocolate with real peanut butter. Real peanut butter is nothing but peanuts and salt (it keeps well, but fresh-ground is better). Real chocolate, I trust you can figure out. Milk and dark are both good in this application.

Although, of course, peanuts are not true nuts (no more than macadamia or almond or walnut), they're nonetheless very nutty, and the effect is pretty similar to "almond bark", or hazelnuts with chocolate, or pecans and chocolate. And of course you can just eat peanuts with chocolate, an okay combination. But there's something weirdly perfect about peanut butter with chocolate, better than peanuts with chocolate.

But hey, although I'm not American, I am from America's hat, and I do like peanut butter in a few other formats too.


You should try a buckeye! It's basically peanut butter wrapped in chocolate, giving an appearance similar to a buckeye nut.

> I will never understand the US fascination for peanut butter.

At one point in history, US farmers were encouraged to grow peanuts as a rotation crop to improve soil quality. That led to a glut of peanuts in the market, so people tried to find uses for them. Peanut butter was invented+ as one of these uses, and has been a staple of American diets ever since.

+Or promoted, I can’t remember


They promoted soy family members for their self-fertilization attribute.

https://aces.nmsu.edu/pubs/_a/A129/


Reese's peanut butter cups [0] are a fairly popular "candy" item here. Try one if you get a chance, they're pretty tasty.

[0]: https://commons.wikimedia.org/wiki/File:Reeses-PB-Cups.jpg


As a European, a sandwich with Nutella on one slice and peanut butter on the other is awesome (except for your cholesterol).

and sugar...


> These changes are on the WSL’s team roadmap and you can expect to hear more about this work by holiday 2020.

As if describing things in terms of northern hemisphere temperate seasons wasn't bad enough (and, still worse, commonly showing how little you care about any place other than the USA and maybe Canada by using the name "fall"), now we have this: "holiday 2020". I don't know when this is talking about. I'd have guessed northern hemisphere summer school break first, but I guess that's just about finished now, so it can't be that. Christmas time would have been my next guess, but surely you'd describe that as "by the end of 2020"? And then other possibilities occurred to me—Halloween? Thanksgiving? I have no idea at all what Americans would call "holiday" as a time of year.


Holiday == November or December. Thanksgiving and Christmas are the big ones, but we use generic terms like "holiday" to accommodate people who celebrate different holidays around those times. This is pretty common terminology; IIRC the "War on Christmas" has been a thing for a couple decades.

> I’d have guessed northern hemisphere summer school break first, but I guess that’s just about finished now

Summer break more or less just started for most students here. The fall term will start around August.


Northern hemisphere summer school breaks vary but typically run in the May-August time range, so they're just starting rather than ending.

They're referring to https://en.m.wikipedia.org/wiki/Christmas_and_holiday_season


It's a neutered term for Christmas.

Aren't large parts of the wayland stack incompatible with NVIDIA drivers? It would be ironic if Microsoft was the one to bridge that gap.

That's more a matter of buffer management. Though it may turn out, if their work is too tied to NVIDIA, to only support NVIDIA's de facto proprietary selection: EGLStreams.

Though in principle, there's nothing preventing them from using GBM instead of EGLStreams, and there are some good practical reasons such as having compatibility with the broad base of existing accelerated Wayland windowing libraries and applications.


You can use Wayland with NVIDIA drivers. The problem is that NVIDIA and the open source drivers expose different buffer management APIs and Wayland does not abstract over that, so it has to be explicitly handled by every client application. Some Wayland clients refuse to support both, others had NVIDIA support patched in by NVIDIA itself.

Sorry if this question misses the point but how did AMD avoid this issue? Are their drivers open source?

AMD started to support the development of the open source driver several years ago by publishing hardware specs. I am not sure if they switched completely or are still maintaining a closed source driver on the side - I haven't bought AMD cards in years, and my last one only "works" with the binary blob.

The people working on the NVIDIA open source driver have no official support and were fighting with signed firmware blobs last I heard. I wish them luck, but even on older cards it is more likely to crash your system than render anything.


Single data point here but I just switched from an NVIDIA GTX 960 (2015 card) to an AMD RX 570 (2017), both using open source drivers, and the performance improvement in Wayland/Sway is huge.

My understanding is that even 2015 is too new for nouveau to run with high performance due to something called reclocking, where the card starts up at a minimal clock rate and then it's up to the drivers to reconfigure it for running at the advertised clock.


That would be great news - something I've asked for since WSL was first presented. I just hope it uses the new graphics driver infrastructure, as the first implementation mentioned in your link seems to use RDP, which is less efficient.

Do ML people actually care about DirectX? I thought everyone was using CUDA? Anyway, in my university building I have not seen anyone doing machine learning on Windows.

Yes, (almost) everybody doing machine learning is on CUDA. There are multiple pieces to this work, and the DirectX APIs are only part of it. Other pieces get CUDA running as well.

And yet another piece is a layer to get OpenGL and OpenCL workloads running on DX12 as well, rather similar in scope to how MoltenVK and the gfx-hal Vulkan Portability work are a layer to get Vulkan workloads running on Metal. This is a big effort, and it seems to me their goal is to get things to the point where stuff Just Works and you don't have to think too hard about the various bits of (technically difficult!) infrastructure to get you there.


In ML, if you decide not to kiss NVIDIA's ass, you're screwed. Effectively they have 100% market share. Having a major alternative backend, even a proprietary one, will force diversity, and that has some upsides I think.

They typically don't - the team both published a special build of TensorFlow that uses DirectX and worked with Nvidia to get CUDA running against their DX Linux kernel implementation.

> special build of Tensorflow that uses DirectX

Huh? Are you sure about that? Regular TensorFlow on Windows uses CUDA, not DirectX-flavored compute.


I was also surprised, but found this RFC for TensorFlow on top of DirectML: https://github.com/tensorflow/community/pull/243

Oh well, I think that will take several years to land.

CUDA is supported as well.

A bit of a tip: your university development experience is not going to be anything like what you see in production.

> AF_PACKET

Just run a Linux Hyper-V VM. That's what WSL2 is doing under the hood anyway. I run it this way and it's great. I have Windows Terminal auto-SSH into it. Performance is great. And using the X server x410 on the Windows side, GUI performance is fantastic (though no hardware acceleration), because instead of SSH tunneling, x410 supports AF_VSOCK for the X socket, which Hyper-V supports, with performance as good as a domain socket on the same machine.
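To illustrate the mechanism (not x410's actual implementation, whose port number and protocol details I'm making up here): inside the VM you can bridge the standard X11 Unix socket to an AF_VSOCK listener on the host, so X clients never touch TCP or SSH. A minimal Python sketch, assuming a host-side X server is listening on a hypothetical vsock port:

```python
import socket
import threading

HOST_CID = 2        # VMADDR_CID_HOST: the well-known CID of the hypervisor host
VSOCK_PORT = 6000   # hypothetical port where the host-side X server listens

def pump(src: socket.socket, dst: socket.socket) -> None:
    """Copy bytes one way until the sending side closes."""
    try:
        while True:
            data = src.recv(65536)
            if not data:
                break
            dst.sendall(data)
    finally:
        src.close()
        dst.close()

def bridge(client: socket.socket) -> None:
    """Connect one X client to the host's vsock X server and pump both ways."""
    upstream = socket.socket(socket.AF_VSOCK, socket.SOCK_STREAM)
    upstream.connect((HOST_CID, VSOCK_PORT))
    threading.Thread(target=pump, args=(upstream, client), daemon=True).start()
    pump(client, upstream)

def serve(path: str = "/tmp/.X11-unix/X0") -> None:
    """Listen on the usual X11 Unix socket inside the VM and bridge clients."""
    srv = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
    srv.bind(path)
    srv.listen()
    while True:
        client, _ = srv.accept()
        threading.Thread(target=bridge, args=(client,), daemon=True).start()
```

Because vsock bypasses the guest and host network stacks entirely, latency ends up close to a local Unix domain socket, which is why this beats X-over-SSH.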


I've had trouble researching whether WSL2 is in fact a Hyper-V-managed VM. I've seen some documentation referring to WSL2 as a tightly integrated Krypton (scaled-down Hyper-V) VM. It seems to imply the host overhead isn't as high as a guest on full Hyper-V.

WSL uses a Hyper-V-derived virtual machine that is:

* Sparse & light - they only allocate resources from the host when needed, and release them back to the host when freed
* Fast - it can boot a WSL distro from cold in < 2s
* Transitional - these lightweight VMs are designed to run for up to days-weeks at a time

Full Hyper-V VMs (generally) aim to grab all the resources they can and keep hold of those resources as long as possible in case they're needed. Full VMs are designed to run for months to years at a time.

WSL's VMs are MUCH less impactful on the host - FWIW, I run 2-3 WSL distros at a time on my 4 year old 16GB Surface Pro 4 and don't even notice that they're running.


But then you have this thread with people running cron jobs to free cached memory: https://github.com/microsoft/WSL/issues/4166

I imagine this will be addressed, but claims of lightweight seem exaggerated?

But even more on my mind is the impact on the windows host. Is it running as a guest under hyper v? What's the overhead?
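FWIW, the VM's memory appetite can at least be capped from the Windows side with a `.wslconfig` file in your user profile; a minimal sketch, where the keys are the documented `[wsl2]` settings but the values here are arbitrary examples:

```ini
# %UserProfile%\.wslconfig -- applies to the single WSL2 utility VM
[wsl2]
memory=6GB       # cap how much RAM the VM may claim from the host
processors=4     # limit the number of virtual CPUs
swap=2GB         # size of the VM's swap file
```

That caps the symptom rather than fixing the cache-release behavior, but it does bound the impact on the host.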


^ ms pm

AFAICT Krypton is stripped down in the sense that a lot of the management framework is gone, but as far as the guest is concerned, it's running on Hyper-V.

Ah that sucks, for a moment I thought I would be able to run Wine from within WSL2 and get my game on.

if you are already running Windows and using Linux through WSL, why would you want to use WINE to run games?

Because older games run better on wine than they do on windows. Though there's more recent software available to mitigate some of that (wined3d, winevdm, etc).

Possibly parent comment was a joke?

"There is currently no presentation integration with WSL as WSL is a console only experience today. The D3D12 API can be used for offscreen rendering and compute, but there is no swapchain support to copy pixels directly to the screen (yet )."

This leads me to believe that display support is intended in the future. It's a work in progress. They've gone this far why would they stop at compute? Still, it's pretty awesome if you ask me.


They've announced that Linux GUI apps are coming later this year to WSL2; while that is possible without GPU, I imagine MS wants a decent UX for the feature, though, which suggests...

And wouldn't it be faster to simply use native Linux installed on the computer? Instead of running in a VM, with a glue/translation layer from OpenGL/OpenCL/Vulkan/CUDA to DirectX. And don't forget all the bloatware and slowness that Windows 10 has.

But then you lose all the benefits of an established desktop OS, like all hardware pretty much working, the OS pretty much working out of the box, plus software like Microsoft Office just runs without any additional effort, plus you get to profit from years or decades of muscle memory and OS-specific knowledge, and maybe it's even about being allowed to connect your laptop to the company network at all.

Speaking as someone who spent like a dozen hours trying to get his dual monitor setup working in Ubuntu and Fedora: no. It's not.

My experience is that Linux has significantly worse hardware support than Windows, particularly where newer hardware is concerned.


Gaming folks won't be running games in WSL when they can just run them in Windows.

> The Linux gaming folks will be pretty sad about this one.

I was a tad bummed when realizing what this actually was, but still very much impressed.

