WSL2 corrupting Git repositories and shell history (github.com/microsoft)
614 points by delduca on Jan 2, 2021 | 408 comments



WSL2 was promised by my peers as a good, working way to run a Linux dev environment on Windows. Sadly, a whole host of issues specific to my dev environment meant that it was useless, as I'd spend more time fixing it than getting any valuable use out of it. More importantly, this became the very reason I switched to Arch full time, and I haven't looked back since. I still hope it becomes what it was promised to be, although I don't see myself going back to Windows any time soon.


Opposite anecdata - I got a gaming PC, migrated all my development to it, and WSL2 has been a godsend. VS Code has a "WSL Remote" mode that works really well (a VS Code server runs in Linux and the Windows GUI accesses it "remotely"). I even use the Windows GitHub app for the occasional GUI-assisted commit, and apart from being slow it's fine.

I've had only two problems with this set up:

- Occasionally VS Code's TypeScript features slow down, but it fixes itself a few days later (maybe after a restart). I presume this is due to WSL Remote, but I'm not certain.

- The occasional line endings snafu, but this is more of a tooling issue.


Also an opposite anecdote: after over a decade on Linux, I switched from Pop!_OS to Windows after getting into music production as a hobby (MIDI controller driver software doesn't always play well, even with WINE-devel).

I expected to hate it, but I'm asking myself why I didn't do this sooner.

It's the same sort of scenario as before -- I have Windows running the games I occasionally play, and music stuff, and I do all of my code stuff in WSL2/Ubuntu.

But this way, I never have to fiddle with weird WINE patches or google bugs; everything "just works".

I had one big complaint, which was that copying files from Windows to WSL2 would create "Zone.Identifier" copies of every file that had been downloaded from the web, but they patched this recently too.

With the support for Linux GUI apps that launched with Windows Insider Preview recently, I have a hard time making arguments against it now. The taste of crow is a little bitter, eh.

Linux + WINE/Proton basically felt like a shittier, more bug-ridden version of Windows + WSL2, with the one notable exception that, as an OS/host, Windows 10 "feels" a bit slower in general.


I've recently come across the same issue with DAWs. I settled on having multiple machines and swapping drives if I need to reuse a workstation for something else. Most of the time you can dedicate cheap systems to specific uses; it will save you time and effort. E.g. there is no reason for your DAW machine to also be your gaming machine, as the hardware requirements are pretty different.


It seems fairly expensive to have different machines for gaming and music production/recording when the only substantial difference is relative investment into CPU and GPU cycles.


You don't need new computers for everything. Older machines are often super cheap, and saving them from getting scrapped reduces e-waste.


There is no reason to have multiple computers, apart from a laptop and a workstation for people who need to work on the go. Not only the upfront cost, but license costs, update management, hardware replacement, and finally disposal and recycling pile up. I fail to see the upside, honestly.


When you're starting out making music, having just one computer for everything is fine, but once you've reached the point where you spend a lot of your time on it, it quickly becomes a no-brainer to have a dedicated computer for just that. It's about putting yourself in a productive setting where you only focus on creating and have minimal distractions.

Besides, a dedicated computer for music production has a low entry cost, since you basically only need a decent amount of RAM. My studio computer has an old i7 and 32 GB of RAM, it doesn't even have a GPU, and the price was less than $500 a few years ago. It's going to outlast my current gaming rig and probably my next one as well.


I've been making music for a few years, and for at least two years my Windows workstation has been the main piece of gear.

I totally get what you mean by a productive setting. I achieve that with two user profiles - one for music (dark colors, only the DAW and GuitarPro, no social networks or messengers), another for fun (brighter colors, Steam and Battle.net). It is much cheaper - free! - than $500, and I get to save some space at home.

In the end different strokes for different folks I guess. If your setup works for you - keep it up!


I've always solved this issue by keeping Windows on one drive and Linux on another. It does have an upfront cost, but keeping my dev work / Linux environment separate has always worked well for me. Lately I've thought of trying WSL, but I don't see any real benefit over my current setup.


IMHO, dedicated environments encourage mindsets specific to those environments and thus make you more productive. I would not switch to not having them.


Oh, you need to fucking disable Windows Defender though, or add exclusions for your Linux distro/WSL2 folder, because when installing "node_modules" it will attempt realtime threat-protection scans, which absolutely cripple the speed.

https://www.cicoria.com/improving-windows-subsystem-for-linu...

https://www.reddit.com/r/bashonubuntuonwindows/comments/eok7...

"I noticed a significant performance difference after adding exclusions to the Windows Defender. My Rails server and NPM installs sped up dramatically (I mean like 4x faster, no joke)."

Stupid.
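For reference, a minimal sketch of adding exclusions from an elevated PowerShell prompt (the paths below are examples, not from the links above; point them at your actual distro package and project folders):

    # Exclude the distro's package folder from realtime scanning:
    Add-MpPreference -ExclusionPath "$env:LOCALAPPDATA\Packages\CanonicalGroupLimited.UbuntuonWindows_79rhkp1fndgsc"
    # Optionally exclude a heavy project tree, e.g. where node_modules lives:
    Add-MpPreference -ExclusionPath "C:\dev\myproject"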


This is nothing specific to WSL, though; Windows has always had this problem. I expect there are many Windows developers who have no idea how much time they spend waiting for their code to be virus-scanned every day.


To give an idea of how bad it is: building Firefox on a Threadripper 3970X on Windows takes slightly over 12 minutes if you don't disable Defender, and somewhere around 7 minutes if you do.


What a horrible amount of overhead. Why does Defender need to scan the contents of a bunch of non-executable files? I suspect the answer runs deep and is at least somewhat horrifying.


> Why does Defender need to scan the contents of a bunch of non-executable files?

While I agree the overhead is atrocious, buffer overflows and similar bugs can turn an "innocent" non-executable file into a vehicle for code execution, like this[1] JPEG parser issue affecting a large number of applications using GDI+.

That said, I'd be surprised if it couldn't be smarter about this.

[1]: https://us-cert.cisa.gov/ncas/archives/alerts/TA04-260A


because Windows has no concept of an executable bit, so a rename could make it runnable? or a benign-looking program with a built-in interpreter could run code from an external file?


> Windows has no concept of an executable bit

Sure it does. Execution is a permission you can set in the security tab.
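(For illustration: NTFS does track an execute right in its ACLs. A hedged icacls example, with a made-up file name:)

    REM Grant read+execute (RX) on a file; execute is a distinct NTFS right:
    icacls app.exe /grant "Users:(RX)"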

> a rename could make it runnable

This is more likely why.


> because Windows has no concept of an executable bit, so a rename could make it runnable

This much I understand - but why can't something so tightly integrated into the OS instead intercept the file rename event and scan at that point?

I seem to be lacking sufficient information as to what specifically about Windows necessitates scanning everything on the filesystem when macOS and the various common Linux distros seem to do fine without it. It's not as if Windows is the only OS with interpreters, either.


Viruses do abuse non-executable files like .js, .vbs, or other script files. An antivirus can't really tell whether that's the case without opening them.


What is it about Windows that makes this a problem where macOS and Linux seem to be fine? The same mechanisms exist in all three major operating systems, which is to say that technically any executable I choose to run on my computer can load a file and interpret it.

Is it really just that Windows users download and run random junk? It's been so long since I've seen a virus of any kind anywhere at all that I genuinely have no clue how people become infected with them or indeed whether it still really happens at all.

It certainly feels to me that the era of random toolbars and clearly visible desktop malware is over, but at the same time I live in a different bubble than in the past.


It's possible you don't see random toolbars and malware now because Defender works. Microsoft could be smarter and offer default exclusions for some things.


But even file types that you'd think are pretty safe (jpg, wav, and some other media formats) can abuse parser bugs in viewers or explorer.exe to achieve code execution. And how do you prevent malware from just naming its .js as .jpg and executing it with some interpreter? Some interpreters definitely don't care about file extensions.


If that was the default, wouldn’t it be immediately exploited?


> I genuinely have no clue how people become infected with them or indeed whether it still really happens at all.

I'm a bit late to the conversation, but can confirm that yes, people do still get infected.

We almost always turn something up while doing virus scans at the repair shop I work at, and we mostly use off-the-shelf products, along with a couple of other tricks.


Some IDEs have started warning about this performance hit - IntelliJ + Gradle, for example, but I'm guessing other JetBrains products do the same.


That's what made me aware of the problem. Now I need a comfortable GUI for managing just those folders.


Try "every compile." If you wonder why your .exe takes a couple of extra seconds to start up the first time you run it after a build, well, this is why.


Tip: add a separate SSD and format it as ext4. Mount it in WSL2 and do all your work there.

https://docs.microsoft.com/en-us/windows/wsl/wsl2-mount-disk

Currently, the mount is not automatic, so you can create a startup.ps1 script and run it at startup through Group Policy.
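A sketch of the commands from the linked docs (the disk number is an example; run from an elevated PowerShell prompt):

    # List physical disks to find the right DeviceID:
    GET-CimInstance -query "SELECT * from Win32_DiskDrive"
    # Mount the ext4 disk into WSL2; it shows up under /mnt/wsl/:
    wsl --mount \\.\PHYSICALDRIVE2 --type ext4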


Ah this is a great tip, I will have to try this one -- thank you!


It may be a great tip but Windows users shouldn’t have to do this. Microsoft needs to fix their shit.


I see that, and I raise you: the package repository I was mirroring contained an AV module that had a copy of EICAR in its test suite.

AAAAAAAAAAAAAAAAAAAAAAAAAAAARGH.


Oh god yes, I know this pain!


Thanks for this tip! I've already been running into Defender problems myself.

Defender kept trying to quarantine my Windows hosts file, so I had to add it as an exception, but then three weeks later it started doing the same to my WSL2 Ubuntu's unmodified hosts file.

I'm glad I opted for Ubuntu and manually installing the tools I need instead of using the Kali subsystem. I can't imagine the headache that would have created.


Surely this is only a problem if your files live in the Windows filesystem (i.e. inside /mnt/$drive on WSL2)? I can't imagine Windows Defender scanning inside a disk image (/home on WSL2).


I was wondering about the same thing.


Why do you even need a Windows workstation as a Ruby or Node developer? That doesn't make any sense.


You don't need it, but it's 1/3 the price of an Apple workstation and, unlike Linux, it just works with all hardware.

Plus, some people just like Windows and/or don't like macOS.


> unlike Linux, it just works with all hardware

No, it doesn't. Linux runs on more architectures than Windows. Linux supports a wider range of machine specs than Windows does. There has never been an operating system in human history with hardware support as vast as Linux's. The television in your living room might be running Linux. Your refrigerator might be running Linux. Your car might be running Linux. Linux is everywhere.

When it comes to hardware support for laptops, the choice of distro is important. Enterprise Linux distros ship with ancient kernels, and you do not want to install those on a laptop.

Most hardware compatibility issues come from WiFi cards, and there are solutions for that other than looking for a driver and installing it manually. You can tether WiFi from your phone, or you can get a USB WiFi adapter for like $5 on Amazon.


Windows can run on ARM and x86(-64), the two most widely used architectures for desktop computing. I don't want to develop on MIPS or Atmel.

> When it comes to hardware support for laptops, the choice of distro is important.

I don't care. I just want to develop my programs. Each minute I need to invest in searching for compatible hardware or drivers costs me money or free time.

I'm using Linux at home and on servers but for workstations I don't care for it, at all.


The question was specifically about workstations for Ruby and Node development. You're not going to do that on a refrigerator.


I also work mainly from a gaming desktop PC.

WSL2 Linux distros are not equal to bare-metal Linux distros. WSL2 Debian doesn't have systemd installed, and what a fvckery it is trying to get it to work. That means sshd, Docker, and k3s don't work as expected on reboot. Some apps don't install properly because their scripts expect systemd. Windows networking is a PITA too, just to do something simple like SSH into a WSL2 instance.

I don't get why deep Windows integration is so important; it's not something I would use on a production box. I just want Hyper-V to work like VMware and VirtualBox. As you say, VS Code Remote makes it all seamless anyway.

I had to go back to VMware 15.5.2 (the version before the Hyper-V layer) and disable Hyper-V completely.


`code .` is some dark WSL magic that turned learning to develop on Windows from an endless headache into an intermittent headache. I always ran into some weird/inscrutable issue trying to follow tutorials with Windows ports of things.


I don't really know if you can call running a remote VS Code setup on localhost opposite anecdata.

Isn't that just more of the same bullshit hoop jumping?

Every time I hear about how great someone's WSL setup is, I roll my eyes, because it inevitably involves some ridiculously convoluted wrangling to the nth level for something that is relatively basic on native Linux.

And these people rarely have an excuse for why they prefer Windows over, say, Fedora, which has been a better-Windows-than-Windows experience since Windows 8.

Edit: wow, even in this thread people are suggesting the most drastic of things - let's run an entirely separate hard drive, oh, but you need to fiddle with startup scripts because it couldn't be automatic.


> Isn't that just more of the same bullshit hoop jumping?

But it’s not jumping through hoops? I just either run `code .` (like I did on macOS) or select recent folder from within VS Code. It all just works fairly well, identical to ‘natively’ within Windows (except the occasional ts slowdown I mentioned before).

> And these people rarely have an excuse for why they prefer Windows

Not that anyone needs an ‘excuse’ for their personal preference, but I would actually prefer macOS. But I got a Windows PC to use software that only runs on Windows (games) and then the whole wfh thing happened and here we are. I’m not interested in rebooting between different operating systems when I switch tasks, especially when they overlap (developing for games).

Convince Bungie to port their game to macOS and I’ll buy a Mac Pro to be back home on a Mac full time :)


> Not that anyone needs an ‘excuse’ for their personal preference

No kidding. But on this site I’ve been lectured about the “consequences” of my personal preferences (when they in no way make alternatives less viable) and told my conscious tradeoffs are actually ignorant of the things I’m trading off even after I explicitly stated them. Tech communities love to talk about choice and freedom and love to try to Jedi mind trick people out of it in the same breath.


>> convoluted wrangling

With good humor, this describes my experience with Linux distros since I first started using them. Doing just about anything seems to require some convoluted wrangling. :)


I've used WSL2 with a Debian install and it's worked flawlessly. The only issues I've had are with NTFS volumes from the host, e.g. /mnt/c - that's really slow. Otherwise WSL2 has been fine for me.


I'm in the same boat as you. I moved to Linux for similar reasons and haven't really looked back.


> Sadly, a whole host of issues specific to my dev environment meant that it was useless, as I'd spend more time fixing it than getting any valuable use out of it.

Sounds like last time I tried Linux on the desktop.


Agreed. I can't use VirtualBox and WSL2 at the same time. Epic fail.


That is no longer true since VirtualBox 6 (or VMware Workstation 15.5), both of which can work alongside WSL2.


I tried the Hyper-V VirtualBox option and consider it a waste of several hours. The graphics support in that mode is abysmal.


Can you expound on this? Windows doesn't let you use VirtualBox and WSL2 at the same time?


WSL2 uses Hyper-V. You can only run one of Hyper-V or ESXi (VB's hypervisor) at a time.


That's no longer true.

Yes, WSL2 is a type-1 hypervisor setup (which means Linux under WSL2 and Windows are both VMs of a sort, managed by a higher-level, invisible "main" OS), and initially this made running VMs inside Windows impossible (since technically that would be a VM inside a VM). But this has been fixed for months now: originally it was not only impossible to run VirtualBox (or VMware - and ESXi is VMware's type-1 product, by the way, don't mistake it for the free VirtualBox), it was also impossible to run Docker.

Edit: btw, this is why it took so long for WSL2 to be released to the public; they gave most of the VM solutions time to adapt to the fact that Windows would be running under a type-1 hypervisor.


The VirtualBox VM launches, but it is too slow to be usable, kind of like a car that only goes 5 mph. So slow, in fact, that VirtualBox shows the little turtle icon in the status bar.


ESXi is VMware's 'bare-metal' hypervisor. I'm not sure what VirtualBox uses, but I doubt it's ESXi, unless Oracle wants lawsuits (oh, wait...nevermind; they love them.)


I have dev setups in three different languages. In one of them I hit a known bug with ulimit, but the other two worked well. I guess it will get fixed with time; we have to be patient, try it, use it, and report bugs.


Between Apple Silicon's x86 container performance issues from emulation and WSL2 being what sounds like a pretty big regression... drum roll

...maybe 2021 is finally the year of the Linux desktop...


More:

- WSL2 sometimes corrupts .zsh_history and git repos: https://github.com/microsoft/WSL/issues/5026

- WSL2 corrupts the ext4 filesystem: https://github.com/microsoft/WSL/issues/5895


This is probably Hyper-V. I've seen exactly this ext4 corruption in production on Windows Server 2012 R2 with CentOS 7, even to the point that the machine remounts root read-only. Unfortunately our Windows operations guys are severely lacking in diagnostic savvy and just reboot the machine over and over again, or blast it and provision a new one, and don't analyse the problem.

From what I've seen, it's a combination of the storage drivers and the storage virtualisation in Hyper-V rather than one specific issue. I imagine it's something similar in WSL.

I really don't trust it as a platform at all. It's barely better with Windows guests.


I had a serious talk with a Unix manager over a decade ago who was convinced Windows ops didn't require as much expertise as Unix/Linux. It was a common misconception that MS seemed to encourage. As someone who came over from Windows, I knew better. That attitude continues to influence standard practices, hiring, and, most importantly, training and education opportunities for Windows admins -- to the detriment of all. I've also had my collisions with Hyper-V, and have come away with the same impressions as you.


Agreed. I've done both, and if you ask me, Windows ops is vastly more difficult because everything is brittle, inconsistent, unreliable, and rarely repeatable. It requires great skill, determination, and persistence to navigate issues like this. Unfortunately, as you suggest, the outcome is hiring as cheaply as possible and fixing all issues by not changing anything, other than replacing everything every few years. There is rarely any day-to-day admin that I see, other than planning the next major rollout with some vain hope it'll have fewer problems than the last one.


I don't know if I'd call it brittle; I would call it super complex (when you get into wmic & friends), and it's harder to get info online compared to Linux, because everyone has to tinker with Linux while only a minority of power sysadmins dig that deep into Windows.


> Unfortunately, as you suggest, the outcome is hiring as cheaply as possible and fixing all issues by not changing anything, other than replacing everything every few years.

I think you’ve hit the nail on the head there. It seems like every organisation I’ve been part of which has a significant Windows presence is either static or planning a rollout of and migration to some new magical enterprise software that replaces the old enterprise software they purchased and this time it’ll definitely make everything better. It’s amazing to me how much money gets spent on per-seat licensing for what essentially amounts to no noticeable improvement for anyone involved. But sure, I’m sure this company-wide spyware of choice will be the one that finally means we can just stop caring about security or provisioning machines, right folks?

It’s like the folks doing it have mastered the art of finding busywork that’s just complicated enough that folks signing the cheques can’t really tell they’re burning money. In that sense it’s beautiful I suppose.

Just, erm, ignore the fact a dozen different developers have essentially root access to production databases... at least they can’t install software!


It's not that brittle; MS supports a lot of APIs in a backward-compatible way. But... the upper layers on top are just vendor-ware shit 99% of the time. (Even/especially their own config/setup wizards/GUIs.)

I mean, a bash script that handles no errors and outputs nothing just screams madness. The same thing wrapped in an .MSI... well, you'll never know what hit you, and if it's your job to somehow unfuck it, it's virtually a piece of literal hell, slowly rotting and eroding people's souls and minds.


I've worked with good Windows admins.

I have so much respect for them.


MCSE was a punchline 20 years ago so this is a misconception almost as old as the entire profession of Windows Admins.


The only situation where Btrfs broke badly for me was when we ran guests on Hyper-V.


I can confirm that Hyper-V snapshots break ext4 just about every time.


After upgrading to WSL2, I started having issues with a VirtualBox VM. It turned out it didn't play nicely with Hyper-V. I went back to WSL1.


Newer VirtualBox releases can run virtual machines on top of Hyper-V as the virtualization engine. It is slower than VirtualBox's own engine, but overall it is still a better experience than Hyper-V Manager.


Networking doesn't work properly if you do this. It's a mess.


Disagree; it's unusably slow. I had to move to the Hyper-V Vagrant driver, as VirtualBox, while it would boot the VM, was so slow that it defeated any purpose of speeding up development.


That's because Hyper-V is a type-1 hypervisor and VirtualBox is a type-2 hypervisor. They don't mix well :)

The best option is still VirtualBox and PuTTY, IMHO. VS Code will work with it fine over SSH.

Or say fuck it, buy a Mac, and do all your Linux work in the cloud.


> Unfortunately our Windows operations guys are severely lacking in diagnostic savvy and just reboot the machine over and over again

What are you talking about, that is how you diagnose a Windows box...


But why? WSL1 was something like Wine in reverse, but WSL2 is actual Linux.


The problem is likely not in the ext4 code, but in the block I/O driver (which is Hyper-V specific, IIRC) or even in Hyper-V itself. Several reports mention Windows shutdowns, sleep, or hibernation, so it may be a simple unclean shutdown of the VM.

A bigger problem would be if Hyper-V is either ignoring memory barriers, or caching writes to the disk and losing them when the Hyper-V service is shut down. But that would likely affect more than just WSL, so we'd have seen the problem sooner (or so I vehemently hope).


Huh, interesting. I run a variety of Linux-based services at home. For years I ran them in a Hyper-V VM (because my computer was technically my gaming machine); I only recently migrated everything to a cluster of Raspberry Pi devices.

I used to have occasional problems with this setup, and it was always some kind of drive corruption or mounting issue. I wonder if this is related?


I recall ext4 had[1] some issues[2] with data loss due to unclean shutdowns.

I assumed that had all been fixed by now, but yeah, these things can get tricky fast.

[1]: https://lwn.net/Articles/322823/

[2]: https://bugs.launchpad.net/ubuntu/+source/linux/+bug/317781/...


> Hyper-V service is shut down

As far as I know, when WSL2 is activated, Hyper-V runs as type-1 and both Linux and Windows are virtualized.

Thus if it crashes, you get a BSOD.


VirtualBox has (had?) similar issues in certain configurations, where it maintains a small write cache and doesn't honor I/O barriers, which leads to journaled/CoW filesystems reporting an inconsistent state that should have been prevented by journaling.


That's probably the cause.

WSL2 being Linux means that, unlike WSL1, which directly uses the host NTFS filesystem, it's probably using an emulated block device to hold its filesystem. If that emulated block device doesn't correctly honor write-barrier requests from the Linux kernel, that could explain the corruption.


Wild guesses:

* the kernel is not properly shut down (and sometimes some buffers are not flushed)

* the virtual block device and/or its Linux driver has bugs


FreeBSD has native support for Linux binaries by mapping system calls. Adding support for more system calls improves coverage, and since underlying things like the FS aren't virtualized, when it works, it tends to be pretty reliable.


Yeah; Windows had something like that, too. It was WSL1 (or just "WSL"). I also tend to think that was the better approach.


It didn't extend to use cases like containers; that would basically have required MS to rewrite large parts of the Linux kernel's core code for namespaces, mount points, etc.


Sure. Running a Linux VM (WSL2) just to use containers seems to kind of defeat the point, though. You might as well just run your containers in VMs.


The use case is for developers to use their Linux tools with Windows integration. WSL1 only did the latter half well, "traditional" VMs only did the former. WSL2 does both, however that brings both advantages and disadvantages of VMs.


After constantly getting caught in bugs between WSL and Docker, I'm now running Ubuntu Server headless with a Windows VM (with GPU passthrough) on top.

The biggest issue across all these things is filesystems. There is simply no good way to share a filesystem between Windows and Linux that is both 1. fast and 2. fully correct, i.e. doesn't break some code.

SMB/NFS just never work super well: either they are slow, or programs think they are local filesystems and things break (e.g. VS Code won't detect new files created on the Linux side of an SMB mount without a manual refresh). There are plenty of details to get right with permissions too, not to mention extended attributes, which differ slightly and break stuff. Network filesystems in general are just too different from local FSes for programs to work seamlessly.

The two non-ideal solutions I've found are 1. keep all files in Linux and edit over the network using VS Code Remote or IntelliJ's remote support, or 2. keep a copy of the dev workspace in Windows and use something like unison (or manual git) to sync.


While we're on the topic of WSL2 causing issues, I will add one that I noted to the pile. If you have WSL2 installed, then the first bash on PATH is the WSL2 version of bash. For whatever reason, this version of bash has a major impedance mismatch with Emacs and org-mode. From Windows-native Emacs (not an Emacs installed in WSL2), if you try to run an org-babel block that contains bash code, whole commands will simply be ignored. The end result is that if you blindly execute bash blocks in Emacs on Windows without checking which bash is being used, there can be disastrous results, because a seemingly safe script like `pushd some-folder; rm -r ; popd` suddenly becomes `rm -r ` without warning. I'm guessing it has to do with mismatched line endings, since MinGW bash (aka Git Bash) doesn't have these issues. Also, you can't rely on the ordering of your PATH environment variable to protect you, because updates can change it.

tl;dr: WSL2 bash is not bash, but it pretends to be, and there are terrifying changes to the semantics of bash scripts as a result.
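(A quick sanity check - my suggestion, not the original poster's: from a Windows command prompt, `where` lists every bash.exe in PATH order, and the first line is what wins.)

    where bash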


I have experienced this exact issue as well.


WSL2 works until you really need it, and then it starts giving you problems. The networking problems are the worst. I installed a Linux VM under VMware and called it a day.


Yah, or use Linux.


It's like running Linux through a VM, but with a lot more random bugs.


I do prefer Linux to pretty much any other OS.


Reading through the issues, it seems like the problem occurs when the same files are opened both directly inside the WSL2 container and, at the same time, through the network device that exposes them to Windows. I'm not very familiar with how WSL2 exposes files, but that seems to be the problem.


Huh. I've used WSL2 and git extensively and never had any problems.


Me too. I abuse it often as well (hard shutdowns with wsl --shutdown, interacting with files from both the Linux and Windows sides, etc.). The only issue I have is that, because of the virtualization approach, it uses more RAM, as there is no unified RAM pool for Windows and WSL (as was the case with WSL1). But that's perhaps expected behavior.


You can still limit the amount of RAM used by WSL2 by setting it in .wslconfig. I like to limit the number of cores it has access to as well, as I tend to find that these two things are generally prone to spiking.
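A minimal .wslconfig sketch (placed in your Windows user profile folder; the values are just examples):

    [wsl2]
    memory=8GB     # cap the VM's RAM
    processors=4   # cap the number of virtual CPUs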


Same. No issues.


I think calling it WSL2 was a mistake.

The predecessor, WSL, "just worked", and it was more or less a Linux experience for most practical purposes - and certainly better than hoary old Cygwin.

This caused a lot of people to believe they could just transition to WSL2, led on by the promise of an even more performant Linux experience. The documentation didn't say anything about complications from attempting this, so a lot of people tried it as soon as they could, thinking it would go as smoothly as when they tried WSL. But nope... in many cases it doesn't just work out of the box. There are network configuration and gateway issues, snags with VPNs, and now this git repo corruption. When you look at the GitHub issues, it's just people randomly shot-gunning suggestions, some of which work and some of which don't. I think WSL2 was rushed out too early, or at least it's lacking a comprehensive troubleshooting guide to get it up and running.


File system corruption is an unforgivable mistake, but FS work is really hard to get right. Even stuff like VMware corrupts shared folders ... whenever they try to bypass a driver translation layer. Just google “vmware shared folder corruption”.


FS corruption is the worst, as you lose total confidence in the product. This is a mega process escape; the WSL team would need to transparently detail why it happened, what the remedy is, and why it'll never, ever happen again.


My guess is that it's the result of shutting down the virtual machine that runs the Linux kernel too soon, leaving writes to the virtual hard disk in an inconsistent state. This coincides with "shutdown /r /t 0" being able to cause it, as well as blue screens or power loss. And it explains why I've never seen it despite using WSL2 on Insider Preview builds on multiple machines daily: I almost never shut down, only for updates and new builds.


The kernel should and can handle unexpected shutdowns without corruption. Data loss sometimes, but not corruption.


This is data loss. A few object files in the Git directory are truncated.

Which results in a corrupted repository, from Git's point of view.


That's corruption. Data loss would mean an entire fs transaction getting dropped entirely, but without breaking the principle of atomic ops: a write should either happen entirely or not at all.


Corruption is when you write data and read back different data of the same size. Data loss is when you write data and read back correct data of a smaller size, or nothing at all.

I'm not sure what you mean by an entire fs transaction, but Linux doesn't have a transactional fs interface, so open, write, close can be interrupted at any point, with just having a new empty file after the open being one of the valid outcomes after a crash.


“A write should either happen entirely or not at all.”

I think many modern file systems try really hard to make that true, but I don’t think you can count on it. “man write” (https://man7.org/linux/man-pages/man2/write.2.html) still says:

“Note that a successful write() may transfer fewer than count bytes”

It also says

“A successful return from write() does not make any guarantee that data has been committed to disk. On some filesystems, including NFS, it does not even guarantee that space has successfully been reserved for the data”

That should be handled by calling fsync, but of course, if that fails, there's not a lot you can do, even if you know exactly what happened (https://research.cs.wisc.edu/adsl/Publications/atc20-cuttlef...)

I also don't think calling data loss due to writes that do not make it to disk "file system corruption" is correct. For file system corruption, the file system data structures have to be overwritten (e.g. the boot record or directory data structures).


Filesystems don't try to order writes to different files, so you can get HEAD pointing to a truncated commit, or a commit pointing to a truncated blob.

The same happens if you have a power loss or kernel crash on Linux (as a kernel developer, it happened to me several times when testing freshly-committed code).


Does git actually attempt to write all objects atomically?


Good to know. It seems like a design flaw in git then, unless it's using fsync correctly.


> WSL team

I'd say WSL2 team, because it worked well in the original WSL.


Interesting. I basically live on the VMware community forums and could not remember this issue...

FWIW, I am a user moderator down there, and the VMware desktop products (Workstation/Fusion/Player) are my focus; as a result I basically read almost every post that would report this issue.

I just DDG'ed it and see one report from 2014 [0]; OK, some more from around 2008 when using Google.

Looks like this was resolved in 2015, as that was the last time I see it mentioned.

[0] https://communities.vmware.com/thread/485062


Reminds me of an infuriating old bug in VirtualBox where it wouldn't notice that the size of a file had changed. Devs were working in some Windows editor, then using Git within VirtualBox to commit changes, resulting in inexplicable trailing nulls and garbage turning up in the repo.

It took quite some time to figure out what was causing it. "Everyone is somehow corrupting files except for me, wtf?" Magic filesystem translation layers always suck.


I'm now in the habit of doing a git diff before every commit. Even if I don't thoroughly read the diff, I at least skim it for sanity.

I can't recall what made me start doing that, but now it's a habit.


I do git add -up

Same thing, sorta but all in one go.
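(For anyone unfamiliar with the flags, per standard git behavior:)

    # -u: limit staging to files git already tracks
    # -p: review and stage each hunk interactively
    git add -u -p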


Is this file system corruption? The article isn't 100% clear: it 'only' loses some data writes, but it also doesn't explicitly say whether files not being written are affected, whether directory structures are corrupted, etc.

So, to me, it looks like WSL2 not completely flushing writes to the underlying file system. Bad, but not as bad as file system corruption (which could lead to losing all data on the disk).


There's not enough information to know whether all the reported problems are the result of the same defect. But in: https://github.com/microsoft/WSL/issues/5895

The first instance of a problem is:

    [    1.956835] JBD2: Invalid checksum recovering block 97441 in log
And that's corruption that leads to log replay failing, i.e. the replay being rejected, because honoring it in the face of checksum errors could make things much worse. Subsequently, the mount fails:

    [   21.151232] ERROR: MountExt4:1659: mount(/dev/sdb) failed 5
That's good, because the purpose of journal replay is to make the file system consistent following a crash/power fail. If the file system is dirty, replay is called for, but it can't happen due to the corrupt journal, so now an fsck is required; i.e., the filesystem is in an inconsistent (you could say partly broken) state and needs repair.

I haven't seen the syslog/systemd journal for other cases, so I don't know if there are instances where ext4 log replay succeeds but files are missing. That's not file system corruption, even if it leads to an inconsistent state in a git repository (or even a database). But it is still concerning, because a situation where log replay is clean but files are missing suggests an entire transaction was just dropped: it never made it to stable media, and even the metadata was not partially written to the ext4 journal.

qemu-kvm has a (host) cache setting called "unsafe". The default is typically "none" or "writeback". The unsafe mode can result in file system corruption if the host crashes or has a power failure. The guest's I/O is faster with this mode, but the write ordering expected by the file system is not guaranteed if the host crashes, i.e. writes can hit stable media out of order. If the guest crashes, my experience has been that things are fine - subsequent log replay (in the guest) is successful, because the guest writes that made it to the host cache do make it to stable media by the time the guest reboots. The out-of-order writes don't matter... unless the host crashes, and then it's a big problem. The other qemu cache modes have rather different flush/FUA policies that can still keep a guest file system consistent following a host crash. But they are slower.

So it makes me suspicious that, for performance reasons, WSL2 might be using a volatile host-side caching policy. As an additional data point, it might be interesting to try to reproduce this problem using e.g. Btrfs for the guest file system. If write order is honored and writes are flushed to stable media as appropriate for a default out-of-the-box VM configuration, I'd expect Btrfs never to complain, though it might drop up to 30s of writes. But if out-of-order writes are making it to stable media, Btrfs will also complain: I'd expect transid errors, which are also a hallmark of drive firmware not consistently honoring flush/FUA combined with a badly timed crash. (And similar for ZFS, for that matter - nothing is impervious to having its write-order expectations blown up.)
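To illustrate the qemu cache modes mentioned above, a hedged sketch (the disk image name and flags are examples, not anything WSL2 actually uses):

    # "unsafe" ignores guest flush/FUA requests: fast, but write ordering
    # is not preserved if the host crashes or loses power.
    qemu-system-x86_64 -drive file=guest.img,format=raw,cache=unsafe
    # "none" (O_DIRECT on the host) honors flushes, keeping the guest FS
    # consistent after a host crash, at some performance cost.
    qemu-system-x86_64 -drive file=guest.img,format=raw,cache=none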


Thanks. That definitely is file system corruption. And that is very scary, as (assuming you have backups, which you should) losing files you’re working on is not the biggest problem you can have. That will lose you a few days at most.

Silent corruption of parts of the disk that you rarely access but still want to keep is scarier (you might have rotating backups for months or years and still eventually lose data).


So long as the file system is fixed, it should be straightforward to fix the git repository. I'm no git expert, but maybe 'git repair' can deal with it; and if not, then 'rm -rf' and 'git clone'.

To avoid silent corruption requires full metadata and data checksumming, a la Btrfs or ZFS. In those cases, not only is corruption unambiguously detected, it's also not allowed to propagate.
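A rough sketch of that recovery path ('git repair' is a third-party tool, not a git built-in; the repo name and URL below are placeholders):

    # Check the object database for truncated or corrupt objects:
    git fsck --full
    # If objects are damaged and a remote copy exists, recloning is simplest:
    cd .. && rm -rf repo && git clone https://example.com/repo.git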


Not my area, but I seem to remember complaints that Linux lies about fsync. As in, it'll swear up and down that it flushed everything to disk, but it's lying.

Also, over the years, it seems like everyone I've seen who habitually edits files remotely ends up with this sort of pain and butthurt.


My experience was locked files or lost shared folders (the VM doesn't see them until you say the magic words, aka randomly stop and start services until it works). Both happened so (relatively) frequently that I just lost confidence in the feature until version 16, where read-only folders work fine. (I stopped using it in version 9.)


I could swear I ran into that corruption (or a similar one?) almost a decade ago. They still haven't fixed it?! It was quite reproducible, too; it just happened when I transferred a large file...


Is this the reason why they have deprecated shared folders in Workstation 16?


I don't think they deprecated shared folders, only shared VMs (a function that enables Workstation to act as a virtualization server).

Source: https://en.wikipedia.org/wiki/VMware_Workstation#Version_his...


That’s a shame. I have a high-powered desktop and sometimes it’s nice to work from the patio by opening a few VMs on my laptop. I get the oomph of the big box with the mobility of the laptop.

To be fair, this feature always felt... rickety. But it was very nice.


You can still use remote desktop for accessing remote VMs.

You would be missing the remote power operations.

FWIW, I am working on a product called Vimarun [0] that is aimed at replacing most of that missing functionality over time.

No remote power operations yet, but that will come.

[0] https://vimarun.com


I'd also be missing the ability to plug in USB devices, which is very central to the work I do. RDP has some device passthrough support, but it's not even close to being able to replace VMware's USB support.


You are talking about redirecting USB via Workstation as well? E.g. connecting USB devices remotely to your VM? (Asking, as normal USB pass-through doesn't work for a shared VM, see [0].)

I hadn't even considered that. Connecting USB devices remotely works well with vSphere, but I never tried that with Workstation. They are going to completely remove the hostd engine from VMware Workstation. That would indeed also include that part, and it is most likely not easy to get that working without hostd.

[0] http://kb.vmware.com/kb/2005585


Qemu/Spice


Not a bad suggestion, but the host running the VM would have to run Linux in that case, not Windows.


Huh, I had the opposite experience trying to use WSL1 with a Rails app, which required lots of workarounds.

Postgres, for example, never worked natively. Several npm modules would fail when running webpack.

WSL2 worked perfectly for these cases.

I wonder if people are mounting an NTFS volume in WSL2, which is really slow and janky?


> Postgres, for example, never worked natively

We (postgres) did fix an ENOSYS (missing syscall) problem at some point so WSL could run Postgres. The surprising thing for me was how long it took for anyone to tell us it was broken/spewing warnings. That was forced when we changed a warning to a panic.


There are examples of people using the Linux filesystem and having the issue.

From reading the issue and related ones, it sounds like it might be related to some sort of unpredictable unclean VM shutdown.


I develop Rails in WSL1 just fine.

> Postgres never worked for example natively.

That's a minor inconvenience at worst.

I will stay put on WSL1. If I wanted a VM, I would have just installed VMware and run some Linux ISO image in it.


> That's a minor inconvenience at worst.

What a bizarre claim. It's irrelevant if you don't need Postgres, a minor inconvenience if you can easily adopt a workaround, and a showstopper if you were relying on accessing a local Postgres instance.


Have you developed in this environment? I have developed more than 20 sites in WSL1, all with PG as the DB. I have it running on the same machine in Windows for development (which you are running anyway, otherwise you wouldn't be in WSL). Instead of using "localhost" you use "127.0.0.1" in your configuration; that's it.
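To make that concrete, a minimal sketch (the database and user names are examples): from WSL1 you reach the Windows-side server over TCP.

    # Connect from WSL1 to the Windows-native Postgres over TCP:
    psql -h 127.0.0.1 -p 5432 -U postgres mydb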


I have scripts that rely on connecting to PostgreSQL via a UNIX socket, and I couldn't use these scripts on WSL. A workaround wouldn't be too hard, but ideally WSL should be 100% Linux-compatible, in my opinion.


No, I haven't. Thanks for clarifying; that does make it sound like much less of a problem. Though it might still catch some people out, e.g. on a corporate machine where you're permitted to run WSL but not Postgres.


Can't you just run Postgres natively on Windows? It's a database, you can talk to it from WSL over a local socket, no?


Yes, that's what I do, and it is completely transparent. I don't know why people are so dumbfounded, because I wrote that it was a minor inconvenience at worst.


> That's a minor inconvenience at worst.

The level of inconvenience purely depends on your stack and how it's developed. Often things which don't bother me have huge effects on other members of my team, or on people working on other projects.


I mean, why not do that?

I tried WSL because I thought it would be faster than a heavyweight VM. It turned out to be dog slow in comparison.

Honestly don't see a use for it.


WSL1 is much better integrated, which is useful for some things, especially when networking is involved. And it wastes less memory as a consequence.

The root partition is slower, but I'm usually manipulating Windows files anyway, so both versions are similarly slow.

I use WSL2 right now, but only because I need to mount a VHD that's formatted with Btrfs.


You can't compile anything with WSL1 - did you notice that?


Do you mean kernel modules? You can compile programs just fine. There's a whole infrastructure around using visual studio code to run the UI natively and compile things inside the linux environment.


I find the opposite: WSL1 is much faster for pretty much every usage except workloads that involve reading/writing lots of files.


The difference in I/O is enough to make WSL unusable.

I wasted a day trying to figure out if I had a problem with anti-virus or something that was blocking me before realizing that WSL I/O is just... well, slow.

People kept telling me to upgrade to WSL2 to solve that, but the version of Windows I had didn't allow it. Might have been a blessing in disguise, given the data corruption bugs.


What version of Windows doesn’t have WSL2?


It was introduced in Windows 10 release 2004. A lot of users, myself included, are still stuck on the 1909 update because it does not show up in our list of automatic updates. This usually happens if Microsoft Update determines that some of your hardware may cause BSODs with the new update.


You need at least Windows 10 version 1903.


Any version from 1.0 to 10.1809.


Rushing half-finished products out and offering little to zero support is the new Microsoft.

The reason $MSFT loves open source is that they can get press hype over projects that are 75% complete (which is the main goal), and they don't even need to support it, document it, or make it actually work.


Yeah I think you nailed it there actually. That's exactly their modus operandi.

What's even worse is they have managed to abstract most of the support away in this cycle. You can't get enterprise support now because they gutted that entirely. You can't get them to do anything on GitHub because they keep moving all the projects around and erasing them, or auto-closing the tickets, and no one on first-line support knows anything now other than how to reset a Microsoft account password.


> Yeah I think you nailed it there actually. That's exactly their modus operandi.

They're following the "rules" from "The Cathedral and the Bazaar", remember - specifically, the "release early and often" bit, for the purposes of this conversation. Microsoft are considered "good open source citizens" because of the changes they've made to follow the written non-rules as well as the unwritten rules.

If you're going to fault Microsoft for following the rules, fault EVERYONE ELSE that does it as well.

I want people to realize that they crap on Microsoft hard for things that they gladly accept from other developers or other companies. The double standards in the IT community are absolutely insane.


I think Microsoft gets more shit exactly because people expect more of them. They are the giants; they aren't a scrappy startup that needs to release or go bankrupt.

Microsoft are seen as the safe choice, so their stuff has to "just work".

That's obviously a bad place for innovation within Microsoft, but I don't really think anyone cares about the future of Microsoft.


People having high expectations of Microsoft is not Microsoft's problem. Microsoft employees are human beings just like the rest of you, and, just like at any large company, organizational inefficiency handicaps skilled developers a great deal.

Base your expectations on reality, and you'll have a much better time.


It's worth noting that people do crap on Google for doing this style of stuff constantly. It has basically nothing to do with the release-early part, and everything to do with how things are dealt with after that early, feature-light release. The entire point of "release early" is to be able to communicate with users about what direction the project should go. If you don't keep iterating and working with user feedback (as Google, at least, often doesn't), that's why people complain.


The thing is, 99% of what I get elsewhere does actually work properly. Microsoft is just excessively bad at this.


Wow, you're quite lucky, because everything I use has bugs and edge cases.


No luck. I look for things where people aren’t complaining and use those.


I would never use a project without complaints. Just means no one is using it.


>on GitHub because they keep moving all the projects around and erasing them

Who would've thought that managing hundreds of repos with a shitton of issues, dependencies, and people would be difficult to get right on the first attempt.


If it works for Facebook and Google, with their legions of coffee shop developers, why not for others?


Half-finished releases have been Microsoft's SOP since forever. And especially for payware, bugs are only fixed in the next release, so you need to buy the subscription or the new release.


New? Windows 95 comes to mind.


DOS 3.0 comes to mind as well.


Could you name hyped projects that weren't supported or documented?


I don't think calling it WSL2 was a mistake. It wasn't something that was good for the users, but it had a very clear benefit for the people working on WSL: calling it WSL2 allowed them to close WSL1 issues en masse as "fixed in WSL2" and never look at them again.


Did they really do that? MS's official line is that WSL is not deprecated, and can be run alongside WSL2.


Yes they did.

We discussed this here before: I am reasonably certain -- without having any insider info -- that it was the various ptrace types which broke the camel's back, besides the abhorrent file system performance. Both PTRACE_SEIZE and PTRACE_TRACEME were closed as fixed-in-WSL2: https://github.com/microsoft/WSL/issues/2028 https://github.com/microsoft/WSL/issues/3031


Was that incorrect somehow? WSL2 fixed a ton of issues for me.


It's a true statement but it's quite unhelpful to tell people that a problem they have is fixed in a different semi-compatible piece of software.


It probably depends a lot on your specific use patterns, but I expect for most people this was a change that basically fixed a bunch of bugs, introduced negligible new issues, and had an identical interface.


Is WSL really so buggy? I never got that impression myself. All my problems with things getting fussy have been on WSL2.


I tried to build two projects in WSL1: one using the Z3 theorem prover, and one using Chrome for scraping. Both ran into kernel issues. So for me it failed about 100% of the time on anything non-trivial.


Chrome uses almost every single Linux syscall under the sun. So I guess that's not too surprising.

I am a little surprised that Z3 had difficulties. I did not think it used anything exotic.


Z3 had a timer to stop the solve if it takes too long, and that used a specific option of clock_gettime that wasn't supported. I hacked around this, and it otherwise worked fine.


You can say "fixed in <name>" regardless of whether the name is numerically sequential.


True, but if MS hadn't presented WSL2 as a variant of WSL, any issues fixed in WSL2 wouldn't count as fixed in the WSL issue tracker. I would prefer it if their issue tracker marked these, more honestly in my opinion, as "This issue will not be fixed in WSL, you can migrate to HVL instead" (using HVL as a hypothetical name for WSL2), with a separate HVL issue tracker.


Whether they are counted as fixed in the WSL tracker is completely up to them. "Fixed, use HVL" is just as valid a way to close a WSL1 issue ticket as "Fixed, use WSL2".


A poor argument, IMHO. A naming change for the sake of an issue tracker, which appears to be a net negative for users, is not a wise choice. Naming and branding don't exist to serve the project's management tools.


I guess I don't follow your point. The very reason WSL2 exists is that there were countless issues that COULDN'T be fixed with the way WSL1 was implemented. Why would they leave open an issue they fixed, just because the fix required a complete re-implementation? Furthermore, why would they change the name? This is literally how they are carrying forward the functionality of WSL1. It's still Linux on Windows; there is still a custom subsystem to enable the functionality. It is quite literally still the Windows Subsystem for Linux. As documented:

https://i.redd.it/po98dksksjx21.png

Should Mac Office not be called Office because they completely rewrote it?


They aren't carrying forward the functionality of WSL1. Yes, there are issues that cannot be fixed in WSL1. There are also issues that aren't, and quite likely can't be, fixed in WSL2 that do work in WSL1. The file system corruption at issue here in WSL2 is a nice example; it could not possibly ever happen with WSL1, because of the way WSL1 was designed. WSL2 is not and will never be a full replacement for WSL1; WSL1 and WSL2 are two separate products, each with its own advantages and disadvantages, and I wish Microsoft would treat them as such.

> Should Mac Office not be called Office because they completely rewrote it?

I do not know how different Office for Windows and Office for Mac are, but to go with a different example: yes, I do think Visual Studio for Mac and Visual Studio Code should not have carried the Visual Studio name; it causes unnecessary confusion.


I was using WSL to do ESP8266 development so I could use Linux tools. The official ESP8266 Windows toolchain is based on Cygwin. If I'm using something that needs a Unix environment anyway, why use Cygwin when you have WSL?

I upgraded to WSL2 because, well, 2 is bigger than 1, so it must be better. But no, nothing worked. Serial ports are not supported in WSL2.


http://matevarga.github.io/esp32/m5stack/esp-idf/wsl2/2020/0...

Here's how you can flash ESP devices under WSL2.


I literally spent the last couple of days getting an ESP32 to work under WSL. It was not painless.

You can have WSL1 and WSL2 side by side, IIRC. And there are scripts out there to pipe serial into WSL2.


> why use Cygwin when you have WSL?

Well. Has Cygwin ever corrupted git repos?


No, but WSL1 (which they are referring to in that quote) also has never done that.


Why use WSL if you have Cygwin?


I agree. I tried WSL2, and while it's nice, it has issues WSL1 didn't have. For instance, networking almost never worked until I applied the common workaround of resetting the IP stack. WSL1 always worked fine for that. It's just not ready for primetime yet.
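(The usual reset incantation, as far as I understand it - run from an elevated prompt, then reboot:)

    netsh winsock reset
    netsh int ip reset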


Same difference in my experience - WSL1 screwed up git repos for me, broke git-lfs, etc. I guess it's more of the same-ish on WSL2, just different edge cases due to different edges.


Wasn't the main purpose of WSL2 to make WSL finally usable? Before v2 it had really bad I/O perf.


There are other issues, too. At least at some point, absolutely no Haskell-based apps would run, since their stdlib used some syscall which WSL1 did not implement. That broke pandoc for me, and I stopped bothering with WSL1 there and then.

Other issues which colleagues encountered include abysmal performance and broken Python installations, as PATHs and other environment details get wildly mixed inside WSL.

I just don't understand why some people were seemingly happy with WSL1; there were so many rough edges. WSL2 is much, much better in my experience, on virtually all fronts.


It didn't have bad IO performance. It had the same IO performance as Windows.

The problem is badly written software that makes bad assumptions, like assuming that constantly opening and closing files is fine just because it is fine on Linux. That may be true on most UNIX systems, but nobody ever guaranteed it.

I think that WSL2 is a very, very bad idea. You are no longer building a POSIX subsystem for Windows, a way to use the POSIX API on top of the Windows kernel without any emulation (basically the same idea as WINE); you are running a virtual machine.

I would say that WSL2 performance is very bad if you work in the Windows filesystem. Sure, if you work from the WSL home directory, which is mounted on an ext4 virtual filesystem, performance is good; it's a VM.

But this misses the point: the main advantage of WSL over having a VM or a dual boot was integration with Windows, the ability to use bash scripts to manipulate your Windows files, the ability to launch Windows executables and pipe the output into a POSIX executable.

All of that is useful only if there is a strong connection between the two systems, if I can work in WSL in the same home directory as Windows, where I have all my files. How is it useful if, before working on something (even something trivial like running a script to rename a bunch of files), I have to first copy the files I intend to work on into the WSL home, run what I have to run, and copy them back? And what if I want my IDE running in Windows, with the project in Windows, and I want to run bash scripts on that project?

I hope they will not discontinue WSL1! If they do, I will unfortunately have to go back to Cygwin, which was not great but worked mostly fine, since I need integration between Windows and Linux.


Except they moved performance backwards in WSL2 for accessing files shared with Windows: https://github.com/microsoft/WSL/issues/4197


Cygwin was fast and mostly just worked. I did not find WSL to work well, WSL2 seems more usable. Still has warts but definitely an improvement.


If you drop WSL from the name, you get confused branding of what they are. WSL1 and WSL2 make it pretty clear: you're getting the Hyper-V thing with the latter, while the former is a Linux syscall API layer.

I'm actually surprised they can't be used together.


Those names don't imply anything about their implementation.


Tangent: they imply running on Linux; a Wine substitute.


There are a few mutually exclusive portions, like the executable load error handler that triggers ELF binaries to load under the subsystem, and the binding of the 'bash' executable. But mostly it's to prevent a great deal of confusion.


Agreed. Whatever the implementation, the name indicates the next step in this solution. To do otherwise would be like naming Windows 95 something other than Windows after Windows 3.1. Marketing.


Docker support was the main reason to upgrade in my case. IIRC, WSL was missing some key functionality that made Docker unusable for my purposes. Of course I don't remember the details; all I know is I had no real choice.


> and certainly better than hoary old cygwin.

Say what you want about cygwin, but it never did this.


Parent was talking about WSL1, which didn’t do this either.


The last job I had where I had a Windows desktop (about a decade ago, now) I used Cygwin extensively and never had any big issues. That includes running X11 not just shell stuff. It was quite solid.


I won't leave WSL1 for my bash on Windows needs.

For native linux, I have my choice of distro installed on my choice of hypervisor (Debian, VMWare).

I use VSCode and the Remote SSH extension (functions identically to WSL2). The difference is I know exactly what is going on. I know when and why certain network conditions exist (port forwarding, etc).


I only use Windows because of a couple of tools that have no equivalent for Linux. Unfortunately, companies think there would not be enough customers to warrant development for that OS, so I am kind of stuck maintaining a dedicated Windows PC. I have mixed feelings about Microsoft's appropriation of Linux. Whatever they touch turns to excrement, with a few exceptions.


Problem found, added sync() before shutdown. https://github.com/microsoft/WSL2-Linux-Kernel/issues/168#is...
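Until that kernel fix is in a release, a belt-and-braces habit (just a guess at a mitigation, not an official workaround) is to flush the page cache yourself before stopping the VM:

    # inside the distro: force dirty pages out to the virtual disk
    sync
    # then, from Windows, stop the lightweight VM cleanly
    wsl.exe --shutdown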


I experienced this problem, and an ext4 corruption, too. Because my company forces me to use Windows as a matter of internal policy, my previous solution was a VMware VM, accessed "remotely" by my IDE. Then I wanted to try the new sauce. My colleagues working on legacy code in SVN also experienced problems (anyone else?). So I switched back to the VMware VM. The other consideration was that a VM can easily be backed up by copying a directory, and if I need to change computers or stand up an additional environment, migrating the whole environment is extremely easy. Moreover, I use snapshots, so if something goes wrong at the OS level I can go back with a click.


I think it's important to point out that these are (numerous) user reports of files being "corrupted". There isn't, AFAICS, any confirmation yet of exactly what's happening, nor of the underlying cause.



Are you sure this person is MSFT? There is zero information about them on GitHub.


Did MSFT start giving commit/moderation rights to outsiders on their repos? I assumed only MSFT org members can add tags to issues etc.


Apparently. The comment box is enabled for me for example.


I think filing issues and PRs and commenting on them is open to anyone. Assigning issues, labelling them etc. is a member privilege.


the scary part is that git reports an error and that's why people are noticing. what else is going on with files that are not source controlled?
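Right - git notices because every object is content-addressed and checksummed. You can at least audit the repos explicitly; nothing equivalent exists for loose files:

    # verify the integrity and connectivity of all objects in the repo
    git fsck --full
    # list working-tree files that differ from the index
    git status --short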


I will need to monitor this more closely. I have switched to a WIN/WSL2 setup recently from Mac and have not had this problem happen to me.

Nonetheless I hope this gets fixed before it hits me.


Keep the Mac handy. I just went the other way to you because of a thousand paper cuts.


Same here. I want to like Windows but MS just makes it impossible.

The MBP M1 has blown me away, I don't think I'll try to go back any time soon.


Yeah M1 Mini here. Total game changer.


Sorry if this doesn't apply to you but how do you deal with Docker development? I've heard some horror stories regarding Docker for mac and I don't think I'll be able to live without my Docker containers.


I do that on an AWS instance. I have learned over the years to keep my desktop and my tools well apart as there have been some fatalities which have knocked me out for a day at a time before.

I mostly write Go though which is fine on M1 macs.


If you are doing remote work, why does the client matter? You can use an iPhone terminal app and achieve the same functionality. BTW, the new Windows Terminal app is far better than anything I had seen in the Windows world.


Comfort/efficiency..


Three MB Pros in 3 Years. Every single one was repaired before being replaced. Roughly 7k € down the drain.

Combine this with Endpoint Management crapware from my employer and you have a toxic combination, especially as said crapware is primarily targeted at and optimized for Windows machines: the parent company runs > 500k Windows machines and only about 25k-40k Macs.

Sad, as I really loved the pre 2018 MB Pros and it was a blast doing my data analysis work there.


A while back I spent about 2 months mastering WSL2 and came to the conclusion that the default of automatically opening up in the Windows fs share was... unwise. Besides fs permissions being a complete mess, the unstable behavior described here liked to show up at the most inopportune moments. I wound up confining my work to the virtual Linux fs, but that kind of defeated the whole purpose of WSL for me. I'm also back to using a VMware machine on my company-owned laptop, but my personal systems are still Linux on bare metal. As an aside, I struggled with Hyper-V as a replacement for VMware, but it makes the system unstable on the particular Win 10 builds my company provides employees. I'd gone at least 8 years without seeing a blue screen before that.


While using Visual Studio (not Code) with git in WSL2 with Ubuntu 20.04, I saved three open files with changes, committed, and shut down the laptop. On the next start, the files showed up as empty (0 bytes) in Visual Studio, though as far as I recall they were non-zero size in WSL2. No way to recover them from git. I stopped using git repos in WSL2 for now.


Since there’s almost always comments from people saying how they’re happy on Windows whenever there’s news about some fuckup on macOS, I’d like to balance it by saying that shit like this is why I’m never going back to Windows.


I only see 1 user confirming it, and he had a special or odd disk setup (merged disks which also do not show as merged; that might be the source of the problem).

Is this confirmed by anyone else?


Huh? skhaz, 2n3906, PulsarFox, Annih, Champkinz, sidharthramesh, mbrumlow, luigimannoni, and jmfury all confirm the problem in the linked GitHub issue.


My zsh history on WSL2 got corrupted last week. A friend shared a script to fix it, mentioning the same happened to him in the past. Git is OK though.
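For reference, the recovery scripts I've seen for this (a generic sketch, not necessarily that friend's script) boil down to stripping the binary junk and reloading:

    # keep a copy of the corrupt file, salvage the printable lines, reload
    mv ~/.zsh_history ~/.zsh_history.bad
    strings ~/.zsh_history.bad > ~/.zsh_history
    fc -R ~/.zsh_history    # re-read the history file into the running zsh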


I was afraid that WSL2 was old Microsoft's EEE again.

Thanks for keeping it so bad that you can't even push to a repo without corrupting the filesystem or the repo itself.

It might have been supposed to be EEE, but it turned out to be the last push many people needed to switch to a full Linux OS.


Reminded me of a different issue:

If you have 2 different git clients (different git versions) accessing the same shared .git directory, bad things can happen -- incorrect file status, iirc.


I used WSL2 with Debian and found it slow; aside from that, Windows required what seemed like an excessive amount of system resources just to get to WSL2.


Having trouble with WSL2 networking. Suddenly stopped working. Looking into Ubuntu Multipass.


I haven't had a problem so far, but I do use only Linux binaries (including VS code)


I recently upgraded to WSL2. Let's hope this doesn't happen to me


Another funny interaction I found with WSL2 and Windows is file case sensitivity leading to all sorts of weird error messages

I wish I could use a real Linux installation, but their display drivers don't work well with my multiple displays with different resolutions


>their display drivers don't work well with my multiple displays with different resolutions

Unless you're writing through a time warp from 2004, this is not true at all, IME.

But the statement is vague. "Work well" can mean some esoteric DPI scaling stuff that I think is only noticeable with 4K combinations. What distribution? What's "their display driver"? Nouveau, the open-source Nvidia one, is pretty bad, and everyone uses the proprietary Nvidia driver (this may be hard to come by on some distros, like Debian, but don't use those; use Ubuntu or Manjaro). The AMD open-source one is great and everyone on AMD GPUs uses that. I've been running triple monitor, diagonally-aligned 1080p/1440p configurations for years on several distros and DEs with both Nvidia and AMD drivers.


I have a drawing tablet with a display in 1080p while the rest of my monitors are 1440p. For whatever reason, my mouse cursor thinks it's on a 1080p display inside my 1440p monitors resulting in a humongous cursor.

(nvidia's proprietary drivers if it matters)

I doubt this will ever be fixed since it's obscure hardware that's rarely used by other Linux users


I use a 1080p and a 4k display, it's pretty tricky getting consistent scaling per display.


What kind of problem do you have with case sensitivity?


My most recent experience was with typescript-eslint inside WSL. It's a known issue but I'm not sure if anything will be done other than just renaming all my JS projects to use lowercase names

https://github.com/typescript-eslint/typescript-eslint/issue...
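One partial mitigation worth knowing about: NTFS supports per-directory case sensitivity, which helps some WSL-on-Windows-drive setups. Whether it fixes this particular eslint complaint is only my guess, and the path below is a made-up example (run from an elevated prompt):

    :: make an existing project directory case sensitive (not recursive)
    fsutil.exe file setCaseSensitiveInfo C:\src\MyProject enable
    :: inspect the current flag
    fsutil.exe file queryCaseSensitiveInfo C:\src\MyProject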


> Feel like booting Linux on a separate disk because of these issues.

It always seemed strange to me that people would rather use WSL than the real thing when Windows doesn't bring much advantage. What am I missing?


> It always seemed strange to me that people would rather use WSL than the real thing when Windows doesn't bring much advantage.

Windows brings lots of advantage to some things (including interfacing with the large number of people who rely on people having software that works only or best on Windows; in anything other than very tech-focussed firms this probably includes your employer, and even in such firms it often includes your customers, which can matter a lot even in tech roles), and not switching between physical machines or rebooting between different tasks brings advantages.


being able to play video games and alt tab into writing and testing code during queue times is huge for me


I remember when we used to write code and alt tab into a game while it compiled.

How the world has changed!


My team has to support both windows and Linux builds of our product. WSL1 was a godsend for this - I no longer had to ssh into a VM and could build both the Linux and windows versions from the same source, which was amazing. I hit one issue with wsl2 and then I reverted because I learned that this scenario (different builds from same source) would always be slower on wsl2.


I tried to revert from WSL2 to WSL1 and just cannot. I think it is due to Windows Defender which I cannot disable due to a corporate GPO. I gave up and just use both Cygwin and a VMWare image again.


I'm one of those people who is forced to keep Windows around because of my employer. Our VPN only works in Windows.

Some people have made Juniper Pulse Client work in Linux, even a co-worker has posted some instructions. But I already have a setup where I start the VPN client in a Windows VM and tunnel through it.

I feel like any Linux solution would take a lot of time to setup and might not be as robust.

Another reason I still need Windows around is we only support S/MIME encryption in the Outlook client. Part of this is because of how our internal IT configures the cert, there is a way to make it work in the webmail but our IT guys have either opted out of that or not gotten around to it.

That's pretty much it though. I can happily use Linux for 99.99% of my time.


You are missing Office, Outlook, Visual Studio, and any enterprise nonsense IT requires on company computers. All of that requires Windows.


The web version of Outlook has come a long way, and in my opinion has become the superior product (for my use cases, at least).


You’re also missing Adobe products as well as most games.

Using Linux without a reboot is an incredible convenience. Glossing over the value of proprietary software that only runs on Windows is narrow minded.


My comment was really only about Outlook itself (using the web vs the desktop version, I prefer the web version).

that said, I hate Adobe products with a passion. When I build identical computers for my wife and me, the moment I install the Adobe shit on hers, it becomes noticeably slower at everything. I don't truly understand it.


few games, not most games.


The web version does work until you need to sign into more than 2 accounts per day (1 in regular, 1 in private). If you need more than 2 accounts, it does not work anymore.

Edit: it does work, but it will sign you out from all M365 services which requires a lot of logins for nothing.


Firefox container tabs are great for this sort of thing.


FF containers are great. Edge profiles do basically the same thing as well.


Couldn't you just use Firefox Containers for that? Seems to work fine for Google stuff.


Fair, I'm only using it on one account. Also, I have a special Firefox plugin to make notifications louder.


I get most of that running a Windows VM inside Linux. Granted, if there's enterprise IT involved there's no escape.


The desktop experience Windows offers is often a lot better than the desktop experience of Linux. Then there's also the issue of Linux hardware support which often is not optimal. Also a lot of people need to use Windows for work and don't have a choice. And then there's the possibility people actually like using Windows. Having a full Linux terminal in your Windows desktop environment is the best of both worlds for a lot of people.


>The desktop experience Windows offers is often a lot better than the desktop experience of Linux.

Shockingly, this is untrue if using KDE. Almost everything in it is better than a multi-billion-dollar company's monopolistic OS shell somehow, from the taskbar customization to the features (disable compositing, deep customization of effects and behavior, have windows remember size/position, etc.) to the file manager, Dolphin, which has split views, tabs, had a dark theme a dozen years ago, more file metadata to show optionally like date modified and size, thumbnails for even text files, terminal integration, and more (although technically that's an independent package available on any DE). The exceptions are how "smoothly" windows glide around the screen when dragged and that the Windows taskbar looks slightly better.


Yet FOSS GNU/Linux developers prefer to give money to Apple, Google and Microsoft instead of making Desktop Linux a reality.

Quite ironic.


Linux support for touchscreen is measly, and it's a really nice feature of many Windows laptops. IMO Windows GUI has the best out-of-box features, OSX second and Linux third. Now, if you like working from the terminal, that order is reversed.


Windows Terminal is roughly on par with GNOME Terminal, and PowerShell is actually pretty decent once you learn it.


A lot of people out there don't get to choose their workstation OS. Their only options are a linux VM in virtualbox, or a linux VM in WSL2.


This is it for me. We're a 'windows shop'. I think only the IT guy is happy about that.


> What am I missing ?

Visual Studio.

The other thing on my Windows VM is Affinity Photo/Designer (because GIMP is not a realistic Photoshop alternative).


I'd buy affinity designer, photo and publisher again if they ship a Linux version. I have it both for Windows and Mac.


Absolutely, I would too. They are fantastic products.

I did some Googling yesterday, and the last word on the matter is that Photo alone would cost $500k to port, which they couldn't see recouping.[1]

[1]: https://forum.affinity.serif.com/index.php?/topic/626-affini...


I'm pretty sure they could recoup the cost if they made sure their installer could install on ChromeOS's Linux container. The number of Chromebooks out there is staggering, and is starving for great graphics software. This would also be a great way to get ahead of Adobe in the education space, as it appears Chromebooks have taken over education (at least k-12 in the US).


Games and MS Office are a big draw but the deal killer app for me is OneNote because unlike Note-taking options on Linux, it syncs to iOS and screenshots go straight into my current note (no need to copy and paste)


You should look into Zotero. It syncs to WebDAV and Git, enables snagging text, video, PDFs, and images, and has exporters to common bibliography managers. Really amazing and not tied to any particular operating system.

Disclaimer: I use it exclusively on Linux, but there are iOS clients: https://www.zotero.org/support/mobile


It's quite obvious actually. What people want from Unix is the terminal. The GUI experience on Linux is subpar. Random strange bugs and driver issue are still rife in 2021. Windows has the opposite issue. WSL promises to solve this by giving you the best of both worlds.


Just compatibility. I was on Linux for a long while for work and gaming. Constant compatibility and hardware issues.

Even something like Zoom worked like shit on Linux half the time. Or was missing features.

Audio/Video hardware had issues too. Something as simple as a webcam became this giant problem. Configuring mice and keyboards was a nightmare.

This is a lot to do with companies not supporting Linux and having to use community created reverse-engineered stuff, so it's not all Linux's fault, but at the same time, it just became a pain in the ass to maintain and fix constantly.

Also games; as much as Steam/Proton/Wine have improved, there are just some things that don't work.

So, I switched back to Windows, and even though I prefer Linux, I just can't have it as my daily driver. So WSL2 is great for me.


I need to use Windows for Visual Studio, Unity Editor and other things targeting Windows, but I've been a Linux person for 14 years now and would like to still do as much as possible via the tools I'm used to.


I dunno how long it's been available but I installed Unity Hub and Unity on Ubuntu yesterday. Haven't played with it much yet though.


This is why I like macOS so much. I understand that it's not the same as having Ubuntu in WSL, but using the Mac terminal is so close to the experience I have when I SSH into my Ubuntu box that it's nearly 1:1 for me.


macOS is not perfect. Brew is inferior to apt-get (which WSL2 offers), and Docker needs virtualization in macOS as well. Windows + WSL2 is a strong competitor for the Unix experience on Mac, and sometimes even better (because it is actual Linux and not just a POSIX-compliant Unix terminal).


Oh, blessed are those who never had to deal with the assumption that every shell is bash despite using /bin/sh in shebangs of their scripts, or with the assumption that all core utilities (like find or grep) are of the GNU flavor.

Ignorant they are, yet their ignorance is bliss.
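A minimal example of the trap, for the uninitiated: the shebang promises POSIX sh, but the body assumes bash, so it works where sh is bash and breaks where it's dash (e.g. Debian/Ubuntu):

    #!/bin/sh
    # [[ ]] is a bashism, not POSIX; dash reports "[[: not found" here
    if [[ "$1" == foo* ]]; then
        echo "starts with foo"
    fi
    # the portable POSIX spelling of the same test:
    case "$1" in foo*) echo "starts with foo" ;; esac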


Company IT policies. Can't make *nix the main OS, can't easily dualboot, and running the dev workspace in a VM is slow.

The answer, at least in my case, has been using Windows where I must and WSL for whatever I can.


> What am I missing

Being able to use both at the same time. Being able to open windows apps, and have them use the linux filesystem and executables (like IDEs)


My work PC with Office on it.


I don't really understand the use case for WSL. What are the advantages? You could run a full Linux VM and do everything on it on any machine these days. Which is what I do, but I'm just an old school hack. Anyone who uses WSL care to enlighten me? Thanks!


When I launch my WSL2 Ubuntu shell, I have a CLI prompt in < 1 second. The clipboard works perfectly since it is a window similar to PowerShell, my Windows file system is mounted under /mnt, and generally everything is lightning fast.

Getting that same quality of experience under VirtualBox was nightmarish. Longer boot times, shared clipboard/filesystems were always breaking, and random stuff related to hardening made updates difficult.


Other than the startup time (mitigated by starting the VM right after startup) I have no issues with virtualbox. File mounts, copy & paste... everything works more or less fine.

WSL, on the other hand, has been nothing but trouble.


One downside there is the constant, high RAM consumption of such a setup.

EDIT: At least compared to WSL1. However, WSL2 also has some advantages in that it can reclaim unused memory and suspend the VM when not in use.


Why would VM-based WSL2 use any less RAM?


WSL VMs are gen2 (fully paravirtual) Hyper-V VMs, so they use dynamic memory allocation. VirtualBox, even with Hyper-V as its engine, doesn't do this. I don't think any standalone Linux distro offers a Hyper-V paravirtual image that is not a WSL2 image.


The Ubuntu image you can select in the "New VM" dialog in the Hyper-V management tool (or whatever it's called) absolutely does work with dynamic memory allocation and I seem to recall getting it to work on an Arch VM as well.

I believe the "Linux Integration Services" for Hyper-V are actually mainlined at this point so I would expect most things to work. Setting up RDP for enhanced desktop sessions is the only painful thing I remember.


You don't need paravirtualization to do dynamic memory allocation -- all VMs do this.


No. Gen1 images use a fixed allocation pool instead of an oversubscription model. You need a gen2 image with a cooperative memory allocator on the guest in order for this to work properly.


Well, so barring ancient Hyper-V versions, all VMs do it. This is more of a guest thing than a hypervisor thing, anyway. All the VM needs to support is ballooning, and that is completely unrelated to paravirtualization.


>Clipboard works perfectly

Which one? Secondary? And the primary does not work at all?


The Windows clipboard just works, because WSL2 terminals are just regular Windows apps.


At this point, can't you just SSH into a local Linux VM or a remote Linux server?


You can, but to get the same edit in the windows UI experience you would need sshfs, which is another nightmare thing to get running on Windows.


Primary and secondary selections are specific to X11 which is not involved at all when running WSL command line apps in a Windows terminal


I use a mix of WSL2, and VMs. WSL2 has a few nice properties for some use cases:

a) It comes by default with some network binding magic (OK, just configuration) that makes your WSL env almost actually behave like localhost. I know you can configure this yourself, but it's nice out of the box. (On the other hand, sometimes this is exactly the opposite of what you want...)

b) My main machine is a bit RAM constrained, and WSL2 will "magically" reclaim RAM when I close the consoles (unless I left something running...). Once again, not the biggest win, especially if you spec'd out your machine to run VMs, but still nice.

I think WSL2's biggest wins are not for 'full out development'. But it's nice when you want to jump in and poke at something quickly in Linux land. The startup time for the console/VM is fast enough (~1-2 seconds), and the cleanup is good enough that I can quickly jump in and out without having a VM actually hanging around all the time.

I'm sure the file system integration is useful for some usecases... classic one would be if someone emails you data files or something. You download in your browser in Windows and then can access pretty seamlessly from WSL.

I think I would summarize WSL's advantages as being a bunch of little quality of life advantages that are very much suited for Windows dominant workflows that periodically jump into Linux.

There are all sorts of other disadvantages though - I would not recommend that anyone who has a comfortable VM-based workflow ditch it for WSL. An example of an uncomfortably stupid rough edge is periodic time-desyncing: https://github.com/microsoft/WSL/issues/4149 . I shit you not, my workaround is to manually adjust my Linux time forward and back with `sudo date --set "${INCREMENT} seconds"` until it gets close enough that OAuth doesn't shit itself.
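(For completeness: a workaround commonly suggested for that clock-skew issue, assuming your distro ships util-linux, is to resync from the hardware clock instead of nudging the time by hand:)

    # set the Linux system clock from the hardware clock in one shot
    sudo hwclock -s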


Have you tried WSL1? It sounds like it might fit your use cases better:

- WSL1 shares the same network interfaces and addresses as the host, no proxying like WSL2

- Significantly less RAM usage

- No time sync bug


Oh, I use both =P

I use WSL2 for some docker based projects.


Also, I think WSL2 is currently the only way to access the GPU through the CUDA API from the guest Linux.


Hyper-V can also do PCI pass-through if you have a compute card to assign to the guest. If you're just running the one card, though, you need WSL2 to handle the sharing.


Additionally, Windows Pro is needed for Hyper-V, while WSL2 can be used on lower Windows editions.


I used to do all my work on Fedora, dual-booting to Windows for the Windows apps. WSL was just OK when I tried it long back, so I hadn't bothered. But for the last few months I was forced to try WSL (now WSL2) again due to some Windows work, and have been using the Ubuntu app. I've got to say I'm now a believer and don't scoff anymore. I can now do ALL my work without problems on WSL2: mainly Go/C++/Dart, containers (podman/buildah), Flutter, GRPC, etc. The added beauty is that you can go seamlessly between Windows and Linux apps: e.g., you can just run an .exe from the bash command line in WSL2 just as you run a Linux command. It's insane how they did it, and you have to try it to experience it.

There are some issues still: GUI apps are still a pain to set up (but they're fixing this), there are some networking issues (e.g., accessing servers running on Linux), file path issues in some apps (like, I guess, git here), etc. I never personally experienced any of the git issues cited, and I use it regularly from the Ubuntu command line. Overall, I very highly recommend WSL2 now. Microsoft hit a home run with devs with VSCode, and IMO WSL2 is turning into another.
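To illustrate the seamless part (file names and paths below are made up; any .exe on the Windows PATH works the same way):

    # pipe a Windows command's output straight through Linux tools
    ipconfig.exe | grep -i 'ipv4'
    # hand a Linux-side file to a Windows app, translating the path first
    notepad.exe "$(wslpath -w ~/notes.txt)"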


I have been teaching computer-related classes to uni students for some time. Most of them run Windows. Running a virtual machine often means accessing the UEFI to be sure they have virtualization extensions enabled. And many of them have shitty personal laptops, which means running a VM will really slow down the "Linux" experience. Sharing files between the host and the guest is also sometimes tricky, and I don't want to force them to install another OS on their personal laptops. WSL is nice in that it allows them to have a bit of the Linux world without being very intrusive, and it runs better than virtual machines on many of their laptops.

I'm what is called a "vacataire" (an adjunct lecturer), which means I'm also not in a position to ask the school IT department to enable stuff on the computers for the classes…


> Running a virtual machine often means accessing the UEFI to be sure they have virtualization extensions enabled.

Likewise for WSL2.

I hope modern computers with a capable processor all ship with virtualization enabled at the firmware level, because Hyper-V can be used for tons of things in recent Windows.


Enabling hardware virtualisation opens up a significant and deep attack surface. Considering the vanishingly small percentage of users which benefit from it, I hope it stays off by default.


Windows is moving to a model where Windows itself is run as a virtualized OS. I believe this is enabled by default in new installs.

So having a Linux VM in Hyper-V isn't opening up much new attack surface.


It's not enabled by default. Enabling Hyper-V still causes a battery/performance hit that is going to be hard to get rid of.


Virtualization-based security -- a lighter mode of Hyper-V sans real VMs -- is enabled by default on new installs on recent-enough hardware:

https://techcommunity.microsoft.com/t5/virtualization/virtua...


The link you posted only contains one device and it happens to be an ARM device. Seeing the impact it still has on battery life at least on x86, I really doubt they have enabled it by default. It was not enabled by default on x86 in 2020 at least.


That's just a blog post about the feature being deployed, of course it won't have many examples. Take any PC from the past few years and install Windows 10 x64 on it. It will have VBS enabled and hypervisors that do not support Windows Hypervisor Platform won't work. That's been my experience since at least 2017.

If you click through "capable hardware" to here[1], you'll see the list of requirements for VBS, including:

> Virtualization-based security (VBS) requires the Windows hypervisor, which is only supported on 64-bit IA processors with virtualization extensions, including Intel VT-X and AMD-v.

So it will never be the case on x86/IA32

1: https://docs.microsoft.com/en-us/windows-hardware/design/dev...


Well, I'm asking because my experience is exactly the opposite. I've installed Windows countless times on systems with all the requirements and HyperV is not enabled. The day it starts being enabled, I don't even want to imagine the number of support calls.


Do you have a reference to the increased power consumption that running a hypervisor causes? This thread is the first I've heard about it, and I would like to learn more.


No, and I also would like to find some academic test. Try it -- you can even dual boot with the hypervisor on/off. It's not a small effect: around an extra hour of battery in "almost idle" scenarios.


First I hear of this. Source? Googling for this predictably returned unhelpful results.


In VBS environments, the normal NT kernel runs in a virtualized environment called VTL0, while the secure kernel runs in a more secure and isolated environment called VTL1.

https://www.microsoft.com/security/blog/2020/07/08/introduci...

https://docs.microsoft.com/en-us/windows-hardware/design/dev...


Erm, explain? What attack can you do with hardware virtualization enabled that you cannot otherwise do?


I've read[1] that it can make Meltdown/Spectre attacks possible and thought I'd seen many more reports and discussions about it, but seems I was mistaken.

[1] https://nvd.nist.gov/vuln/detail/CVE-2018-3646


Are there significant risks from running virtualization locally like this? If so, can you provide any links or elaborate a bit so I can follow up? Most of what I've seen on such vulnerabilities refer to server infrastructure.


Having searched through my browsing history and some web searches it seems you're right. I do wonder if the move to more virtualisation outside of the server world will open up additional vulnerabilities but it does look like that's where most of the trouble is at the moment.


In Windows, enabling virtualization actually reduces attack surface. E.g. it is used to protect against kernel-level malware:

https://www.techrepublic.com/article/how-virtualisation-is-c...


Yes quite correct. I know some anti-virus engines have been using it for behavioural analysis. I wonder if this will mean more exploits against anti-virus emerge, as with their unpacker routines.


Well, Android emulators do use hardware virtualization. And there are actually many people who use them outside of development.


> Running a virtual machine often means accessing the UEFI to be sure they have virtualization extensions enabled

Note that this benefit applies only to the deprecated WSL1; WSL2 actually runs a VM underneath, so it requires the same hoops that a VM would require.


To sell Windows laptops to the same crowd that buys Apple laptops to develop GNU/Linux software instead of supporting Linux OEMs, and that is unhappy that Apple only cares about developers in the Apple ecosystem.

Microsoft understood that this crowd only cares about having some kind of POSIX support, and that nowadays being Linux compatible is more relevant than straight POSIX, as the BSDs and illumos also found out with their compatibility layers.


Or, alternatively, a crowd whose needs aren’t completely met by Linux.


Then why do they develop GNU/Linux software to start with?


People might get paid money to develop (proprietary!) software that deploys to Linux. That doesn't mean they enjoy using Linux.


Indeed, I have been doing that for years and am yet to install WSL.


Because their needs aren't met by developing Windows or macOS software?


Then again, why don't they support GNU/Linux vendors?


So if someone installs Linux themselves they don't meet this bar you're setting? That seems pretty far from the normal open source ethos.


Ah, so using evil commercial OSes is part of that so called ethos?


No, installing your own OS is very open source.


Which isn't a thing with macOS and Windows, so where is that ethos again?


You were confused why people would do things with GNU/Linux without "supporting GNU/Linux vendors".

But that's restricting it to people that use the preinstalled OS, which is strange, because of how fitting it is to install your own copy of GNU/Linux.


"Yes, you see, you're only free if you do exactly what we tell you."

Also, it's a thing on Windows/PCs.


We are talking about GNU/Linux here.


We are not. You are.


That was the whole point of this thread; naturally, some would rather move goalposts than face the subject: that Linux users would rather give money to proprietary desktop platforms than help desktop Linux ever become a reality.


Yes. Because Linux is not competitive in some cases.


One reason might be that they need to work, in part, with a server-side system that runs on Linux, and it's easier this way.

But maybe they also need to work with stuff thats Windows only. Say they need to produce media assets with the Adobe suite.

Not everything is vimmable.


SSH and X Windows servers also exist for Windows.


It takes me 2 minutes to install WSL Ubuntu. Or I could spend half a day figuring out a hacky solution to a Windows/SSH/XWindows workflow that offers even a comparable level of integration. It would grow into weeks of obsessive tweaking until I felt compelled to write a blog post for HN showing my sick setup and the hours and hours I dumped into it, while meaningful work piled up in my TODOs.

... Or I could just install WSL.


Or just install Hummingbird, which has been working just fine for me since 2000.


Or just install WSL?

What's the problem with Microsoft having an answer to this?


In what concerns Microsoft, I praise them for having acknowledged the strategy error that was not giving the UNIX subsystem the same love as Win32.

Because as proven by macOS and now by WSL adoption, GNU/Linux would never have taken off if PCs had already had a mature set of POSIX toys, given that its users care 0% about GNU/Linux and would be deploying to HP-UX, Solaris, AIX, IRIX, or Tru64 just as well.


Cool. I'll use WSL though.


I guess you'll have to use it to "believe" ;)

Setup is much less hassle, startup is much faster (one or two seconds) and resource usage is much lower than running Linux in a VM.

Also little things like VSCode (running as a Windows UI application) automatically detecting WSL and setting itself up to connect to the WSL side. To do the same for a VM requires at least configuring the SSH ports on the VM.


Being light-weight won it for me. And the hassle free setup to use the same disk on both environments.


Docker for Windows using WSL2 as its engine is also nice. The Docker commands somehow (magically) get added to your Linux distro and you can mix and match how you like.


It's very useful for projects that require (or work better in) a Linux environment. I don't want to start up a full Linux VM; that sounds like a lot of overhead, and I'd be more likely not to bother at all. WSL is much lighter than that (we're not talking VirtualBox here): if I open Windows Terminal, it's up and running, with almost zero startup cost.

With VSCode I can switch between working in WSL and Windows easily, without needing a desktop GUI to write code in. There's no startup cost; I'm immediately in the correct environment for that project. I've found it very useful since it came out.


You've not explained why you would not run a native Linux system in the first place…


I write C# for .Net Core and .Net Framework professionally.

I'm open to suggestions but it simply seems crazy difficult to do this outside of Windows. WSL lets me jump into *nix quickly when I need it for whatever reason. I don't need a desktop machine. Why not just use WSL?


Office, which doesn’t have a good replacement on the web or on Linux.


What is inadequate about Libre Office ?


Calc is really not an adequate substitute for Excel if you need to use VBA macros - which might be coming from your customers! And in the wild I see plenty of older Excel documents (before 97) which I don’t think are compatible with LibreOffice. Obviously this isn’t an ideal situation but historical data around prices/etc from the 90s are often stored as old Excel binaries.

In general I think using Calc (or Google Sheets) instead of Excel is a bad idea for a business. Word processing and spreadsheets - sure (although there might be some things specific to Word/PPT that I am not aware of). But Excel is a very specific piece of software and should not be thought of as a general “spreadsheet tool.”


Calc has several options for macro languages, for instance Python. I'm not certain about VBA support, but it would seem that many industries are slowly migrating to Python anyway:

https://news.ycombinator.com/item?id=25588720


The point is that VBA macros are portable across Excel but not between Excel and Calc. Calc does not support VBA macros. As many businesses need to consume spreadsheets with VBA macros, and since porting VBA to Python isn’t sustainable, Calc is simply not a good enough replacement for enterprise.

I am not sure that one anecdotal blogpost supports the idea that industries are moving from Excel to Python. It is more likely that people are productionizing Excel spreadsheets with Python these days, instead of C++ or Java.


Not sure about Calc, but 'modern' Excel doesn't even seem to be able to deal with text (UTF-8) in CSV properly:

https://news.ycombinator.com/item?id=25015679

(And I know from experience that Excel 2003 and earlier versions don't.)


I am not arguing in favor of Excel! I hate Excel. I don't have it installed on my personal devices. I never use Excel unless I have to. I am saying that Excel does specific things that are widely used in industry (especially finance) which are not supported by Calc, and that Calc is not a good replacement for many businesses.


Because corporate only allows their sanctioned images.


I just like being able to work in a Windows IDE and compile in a (99%) Linux environment without messing with a VM or mounting shares. Like QMK[1] -- develop in Visual Studio, compile in WSL, flash in QMK Toolbox (Windows). Or in some very specific corner cases, have a Linux-based workflow that suddenly calls into a Win32 utility.

In some cases I need to test something that's Linux-only -- an idea or a GitHub project that just doesn't run on Windows -- and it's easier to jump into a WSL console than boot a VM. But on the other hand, I don't do that frequently enough to just keep a VM running all the time. (Plus there's VM idle RAM usage, and I can't use Hyper-V [which can reduce VM RAM] due to host performance concerns)

[1] https://github.com/qmk/qmk_firmware


To chime in. The company I work for has (after being acquired by Accenture) implemented "Endpoint Management" for all machines (read monitoring and spyware).

EPM prohibits us from creating virtual machines, but WSL2 is possible. So a lot of devs at our shop (at least the 5% or so who are using WIN) use this setup.

Personally - I do this on my private machine - I switched because, with the EPM software, I wanted to separate my work machine from my private efforts, as I just do not like Accenture being able to read all my mail, read all my files, and install arbitrary software on a machine that has my private data on it. Before we got acquired, my employer allowed private use, we were admins on our machines, and there was no spyware installed, so using one machine for both work and private stuff was feasible.


I do not use WSL myself, but I see a good reason for it. The benefits for users are superficial; the really big thing is on the business side for Microsoft. With WSL they try to trap more and more Linux developers in their ecosystem. It is all about getting more control over the most talented and fruitful developer minds. In the end, Microsoft and its shareholders will benefit if it keeps its hands on this crowd.


Docker. Docker can use WSL2 as its engine, so it works much more nicely and with fewer problems than when running under the standard Windows backend.


Business. For better or worse, there are plenty of places where the policy and tech allow WSL but installing a hypervisor on your machine is much more challenging.

WSL2 is basically just a Linux VM, though (I think it actually uses Hyper-V containers, which seem to occupy some weird space between Linux containers and VMs)


Hyper-V is VMs, not containers. When you enable Hyper-V, it's actually the 'operating system', and Windows becomes a guest VM, believe it or not.


Sure, but AFAIK the docs originally called the "lightweight" isolation mode Hyper-V Containers, vs. the traditional Hyper-V VM with more features exposed


This is how every type 1 hypervisor works.

https://en.wikipedia.org/wiki/Hypervisor


But it's not how type 2 hypervisors work, and traditionally when you used virtual machines on a desktop it was type 2.


That was a long time ago; most people using VMware also use ESXi, and mainframes use type 1 as well.

Just not those using VirtualBox for free.


> most people using VMware also use ESXi

Most people using it are also using servers. Is ESXi on desktops/workstations anything other than very niche?


Yeah and? The whole point was about knowing what type 1 hypervisors are all about.

Where they are used is orthogonal to having knowledge.

Managing VMware installations requires having that knowledge.


WSL2 is "just a full Linux VM", just easy to install and integrated nicely with other parts of the OS


Lower overhead, and full integration with the host filesystem I believe are the 2 primary reasons many use it


I can pop open an Ubuntu tab in Windows Terminal and use *nix utilities as though they were made for Windows. For example: I can chain some scripts and commands together to process and produce reports on my Ableton Live sets without fussing with host extensions in a VM.


[flagged]


Yes, let's pretend that the 4.19 EXT4 corruption issue in 2018 never happened.


No need to pretend, because it never happened:

> Initially, the problem was thought to be in the ext4 filesystem, since that is what the affected users were using...It took until December 4 for Lukáš Krejčí to correctly bisect the problem down to a block-layer change.

https://lwn.net/Articles/774440/


OK, the 4.19 filesystem corruption issue.

The fact that it was in the block layer doesn't exactly mean it didn't materialize as a FS corruption issue.


I never said the problem was in the EXT4 filesystem. The problem resulted in EXT4 corruption.

Quite sure the Git corruption issue WSL2 has isn't in Git either.


This was reported back in August and, unless the people commenting yesterday just haven't updated Windows in a while, it still isn't fixed.



Not sure how an issue like this doesn’t make it into the test suite.

Probably another symptom of them firing their QA team [1].

https://www.ghacks.net/2019/09/23/former-microsoft-employee-...


Things can’t go into the test suite if they don’t know what causes it


The issue is almost certainly related to WSL2 not being shut down properly and has been occurring in WSL2 for months. See the descriptions in [1] and [2] which are both linked to from the original thread [3].

[1]: https://github.com/microsoft/WSL/issues/5026

[2]: https://github.com/microsoft/WSL/issues/5895

[3]: https://github.com/microsoft/WSL2-Linux-Kernel/issues/168#is...


> Mind that I come from a Ubuntu distro, Windows is the most energy efficient solution at the moment and allows me to run my Dev tools and work on battery for 4 hours straight. Ubuntu (or any Debian-based distro) destroy my battery in 40 minutes and there's no solution or optimization for that.

I know this is not a solution, but the Librem laptops from Purism hold up for hours straight. I also used to run Windows on laptops for the same reason, but the obvious solution to this is to buy a laptop built with Linux in mind from the ground up, and now we have options available.


I quadrupled battery life when I switched from Windows to Ubuntu. No idea why. I'm using a Dell XPS 15, which has factory support for Linux. I didn't do a lot of digging into why... I was just happy to be able to go 4 hours instead of one.


As far as I can tell, and I am not an expert, it has always boiled down to hardware/driver parity. Laptops ship with proprietary hardware and drivers that Windows can tap into to optimize battery life, things like powering down hard drives, but that Linux doesn't have access to. Given hardware/driver parity, a Linux environment should generally be more lightweight and last longer. Maybe that was your experience with the XPS; those also ship with Linux out of the box like you said, so the hardware should lend itself just as well to Linux distributions.


If you were only getting a single hour on a laptop, I’m almost certain your battery management settings weren’t configured correctly (brightness on max, etc)


sounds like torturing yourself or masochism


it's a GPL violation as far as i'm concerned, so anyone who uses it deserves what they get


Not sure what you mean, the source is right here:

https://github.com/microsoft/WSL2-Linux-Kernel


Microsoft should come out with a new Linux distro. That would have been easier than developing WSL.


I'm so glad that I don't use Windows LOL


Jfc that’s some janky shit many of you seem to be into.

I don't quite get why some developers seem hell-bent on those kinds of hybrid development environments. Why not straight-up Linux or macOS, where the user-space is consistent at least?


Reminds me of this e-mail chain at Microsoft, on September 27, 1991:

Brad Silverberg: "drdos has problems running windows today, and I assume will have more problems in the future."

Jim Allchin: "You should make sure it has problems in the future. :-)"


The fact that this can even happen is enough for me to never touch this for doing work.

I stopped developing on Windows after the Windows 8 fiasco and I don't see myself ever coming back.

Both Mac and Linux are faster, more convenient, and more solid in my experience.

I considered advising my son to buy a Surface for his school work and developing on WSL2, but I'm glad the M1 Mac came out with a much better cost/performance, so he got one. At least his git repos won't get corrupted.


I am a macOS user myself (daily driver since circa 2016, Arch/Ubuntu 2013-2016, and Win 7 before that) and I can tell you that the honeymoon is surely over. On top of that, I have my 2 old laptops running as Win10 and Ubuntu 20.04 LTS home servers, and I can say they have not given me any grief. I feel that Windows and Ubuntu LTS are getting more and more stable, while macOS is going towards "move fast and break things" with every release (both still require some group-policy fu and command-line fu respectively, but that's not a big deal). Back in the day, you'd snapshot your Windows XP with Acronis or something similar every time you installed something major, and these days I am contemplating downgrading to Mojave from Catalina. Big Sur is out of the question with things like the firewall bypass and many others.

P.S. You simply cannot make this stuff up: as I was typing this comment in Safari, my input field text became blurred just like that in an instant https://imgur.com/a/2Ae0QpZ



