I've had only two problems with this setup:
- Occasionally VS Code's TypeScript features slow down, but it fixes itself a few days later (maybe after a restart). I presume this is due to the WSL Remote, but I'm not certain.
- The occasional line endings snafu, but this is more of a tooling issue.
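For what it's worth, the line-endings snafu is usually Git doing CRLF translation on one side of the Windows/WSL boundary. A belt-and-braces fix, assuming Git is the tool tripping over it, is to pin the repo to LF:

```
# .gitattributes at the repo root: normalize everything to LF
* text=auto eol=lf
```

Setting `git config --global core.autocrlf input` inside WSL has a similar per-user effect; editors then just follow whatever Git checks out.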
I expected to hate it, but I'm asking myself why I didn't do this sooner.
It's the same sort of scenario as before -- I have Windows running the games I occasionally play, and music stuff, and I do all of my code stuff in WSL2/Ubuntu.
But this way, I never have to fiddle with weird WINE patches or google bugs; everything "just works". Asking myself why I didn't do this sooner, to be honest.
I had one big complaint, which was that copying files from Windows to WSL2 would create "Zone.Identifier" copies of every file that had been downloaded from the web, but they patched this recently too.
With the support for Linux GUI apps that launched with Windows Insider Preview recently, I have a hard time making arguments against it now. The taste of crow is a little bitter, eh.
Linux + WINE/Proton basically felt like shittier, more bug-ridden Windows + WSL2. With the one notable exception that as an OS/host, Windows 10 "feels" a bit slower in general.
Besides, a dedicated computer for music production has a low entry cost since you basically only need a decent amount of ram. My studio computer has an old i7, 32 GB ram and it doesn't even have a GPU and the price was less than $500 a few years ago. It's going to outlast my current gaming rig and probably my next one as well.
I totally get what you mean by a productive setting. I achieve that with two user profiles: one for music (dark colors, only the DAW and GuitarPro, no social networks or messengers), another for fun (brighter colors, Steam and Battle.net). It is much cheaper than $500 (free!), and I get to save some space at home.
In the end different strokes for different folks I guess. If your setup works for you - keep it up!
"I noticed a significant performance difference after adding exclusions to the Windows Defender. My Rails server and NPM installs sped up dramatically (I mean like 4x faster, no joke)."
While I agree the overhead is atrocious, buffer overflows and similar bugs can turn an "innocent" non-executable file into an executable, like this JPEG file parser issue affecting a large number of applications using GDI+.
That said, I'd be surprised if it couldn't be smarter about this.
Sure it does. Execution is a permission you can set in the security tab.
> a rename could make it runnable
This is more likely why.
This much I understand - but why can something so tightly integrated into the OS not instead intercept a file rename event and scan at that point?
I seem to be lacking sufficient information as to what specifically about Windows necessitates scanning everything on the filesystem when macOS and the various common Linux distros seem to do fine without it. It's not as if Windows is the only OS with interpreters, either.
Is it really just that Windows users download and run random junk? It's been so long since I've seen a virus of any kind anywhere that I genuinely have no clue how people become infected with them, or indeed whether it still really happens at all.
It certainly feels to me that the era of random toolbars and clearly visible desktop malware is over, but at the same time I live in a different bubble than in the past.
I'm a bit late to the conversation, but can confirm that yes, people do still get infected.
We almost always turn something up while doing virus scans at the repair shop I work at, and we mostly use off the shelf products, along with a couple other tricks.
Currently, the mount is not automatic, so you can make a startup .ps1 script and run it at boot through Group Policy.
Defender kept trying to quarantine my Windows hosts file, so I had to add it as an exception. Then three weeks later it started doing the same to the unmodified hosts file in my WSL2 Ubuntu.
I'm glad I opted for Ubuntu and manually installing tools that I need instead of using the Kali subsystem. I couldn't imagine the headache that'd have created.
Plus, some people just like Windows and/or don't like macOS.
No, it doesn't. Linux runs on more architectures than Windows. Linux supports a wider range of machine specs than Windows does. There has never been an operating system in human history with hardware support as vast as Linux's. The television in your living room might be running Linux. Your refrigerator might be running Linux. Your car might be running Linux. Linux is everywhere.
When it comes to hardware support for laptops, the choice of distro is important. Enterprise linux distros ship with ancient kernels and you do not want to install that on a laptop.
Most hardware compatibility issues come from WiFi cards, and there are solutions for that other than looking for a driver and installing it manually. You can tether WiFi from your phone, or you can get a USB WiFi adapter for like $5 on Amazon.
> When it comes to hardware support for laptops, the choice of distro is important.
I don't care. I just want to develop my programs. Each minute I need to invest into searching for compatible hardware or drivers is costing me money or my free time.
I'm using Linux at home and on servers but for workstations I don't care for it, at all.
WSL2 Linux distros are not equal to bare-metal Linux distros. WSL2 Debian doesn't have systemd installed, and what a fvckery it is trying to get it to work. That means sshd, docker, and k3s don't work as expected on reboot. Some apps don't install properly because their scripts expect systemd. Windows networking is a PITA too, just to do something simple like SSH into a WSL2 instance.
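One partial workaround, assuming a reasonably recent WSL build (older builds silently ignore the `[boot]` section), is to let WSL itself start the services instead of systemd:

```
# /etc/wsl.conf inside the distro: poor man's init for a systemd-less WSL2
[boot]
command = service ssh start && service docker start
```

It won't satisfy packages whose install scripts talk to systemctl, but it does cover the "sshd and docker aren't up after a reboot" complaint.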
I don't get why deep Windows integration is so important. It's not something I would use on a production box. I just want Hyper-V to work like VMware and VirtualBox. As you say, VSCode remote makes it all seamless anyway.
I had to go back to VMware 15.5.2 (version before hyper-v layer) and disable hyper-v completely.
Isn't that just more of the same bullshit hoop jumping?
Every time I hear about how great someone's WSL setup is, I roll my eyes, because it inevitably involves some ridiculously convoluted wrangling to the nth level for something that is relatively basic on native Linux.
And these people rarely have an excuse for why they prefer Windows over, say, Fedora, which has been a better-than-Windows Windows experience since Windows 8.
Edit: wow, even in this thread people are suggesting the most drastic things. Let's run an entirely separate hard drive! Oh, but you need to fiddle with startup scripts because it couldn't be automatic.
But it’s not jumping through hoops? I just either run `code .` (like I did on macOS) or select recent folder from within VS Code. It all just works fairly well, identical to ‘natively’ within Windows (except the occasional ts slowdown I mentioned before).
> And these people rarely have an excuse for why they prefer windows
Not that anyone needs an ‘excuse’ for their personal preference, but I would actually prefer macOS. But I got a Windows PC to use software that only runs on Windows (games) and then the whole wfh thing happened and here we are. I’m not interested in rebooting between different operating systems when I switch tasks, especially when they overlap (developing for games).
Convince Bungie to port their game to macOS and I’ll buy a Mac Pro to be back home on a Mac full time :)
No kidding. But on this site I’ve been lectured about the “consequences” of my personal preferences (when they in no way make alternatives less viable) and told my conscious tradeoffs are actually ignorant of the things I’m trading off even after I explicitly stated them. Tech communities love to talk about choice and freedom and love to try to Jedi mind trick people out of it in the same breath.
With good humor, this describes my experience with Linux distros since I first started using them. Doing just about anything seems to require some convoluted wrangling. :)
Yes, WSL2 uses a type-1 hypervisor (which means the Linux under WSL2 and Windows itself are both VMs of a sort, managed by a higher-level, invisible "main" layer), and initially this made running VMs inside Windows impossible (since technically that would be a VM inside a VM). But this has been fixed for months now. Originally it was not possible to run VirtualBox or VMware (ESXi is VMware's type-1 hypervisor, btw, don't mistake it for the free VirtualBox), and it was not possible to run Docker either.
Edit: btw, this is why it took so long for WSL2 to be released to the public; they gave most of the VM vendors time to adapt to the fact that Windows would be running under a type-1 hypervisor.
Sounds like last time I tried Linux on the desktop.
...maybe 2021 is finally the year of the linux desktop...
- WSL2 sometimes corrupts .zsh_history and git repos: https://github.com/microsoft/WSL/issues/5026
- WSL2 corrupts the ext4 filesystem: https://github.com/microsoft/WSL/issues/5895
From what I’ve seen it’s a combination of the storage drivers and the storage virtualisation in HyperV rather than a specific issue. I imagine it’s something similar in WSL.
I really don’t trust it as a platform at all. It’s barely better with windows guests.
I think you’ve hit the nail on the head there. It seems like every organisation I’ve been part of which has a significant Windows presence is either static or planning a rollout of and migration to some new magical enterprise software that replaces the old enterprise software they purchased and this time it’ll definitely make everything better. It’s amazing to me how much money gets spent on per-seat licensing for what essentially amounts to no noticeable improvement for anyone involved. But sure, I’m sure this company-wide spyware of choice will be the one that finally means we can just stop caring about security or provisioning machines, right folks?
It’s like the folks doing it have mastered the art of finding busywork that’s just complicated enough that folks signing the cheques can’t really tell they’re burning money. In that sense it’s beautiful I suppose.
Just, erm, ignore the fact a dozen different developers have essentially root access to production databases... at least they can’t install software!
I mean, a bash script that handles no errors and outputs nothing just screams madness. The same thing wrapped in an .MSI... well, you'll never know what hit you, and if it's your job to somehow unfuck this, it's virtually a piece of literal hell itself, slowly rotting and eroding people's souls and minds.
I have so much respect for them.
The best option is still VirtualBox and PuTTY, IMHO. VS Code will work with it over SSH just fine.
Or say fuck it, buy a Mac and do all your Linux work in the cloud.
What are you talking about, that is how you diagnose a Windows box...
A bigger problem would be if Hyper-V is either ignoring memory barriers, or caching writes to the disk and losing them when the Hyper-V service is shutdown. But that would likely affect more than just WSL, so we'd have seen the problem sooner (or so I vehemently hope).
I used to have occasional problems with this setup, and it was always some kind of drive corruption or mounting issue. I wonder if this is related?
I assumed that had all been fixed by now, but yeah, these things can get tricky fast.
As far as I know, when WSL2 is activated, Hyper-V runs as a type-1 hypervisor, and both Linux and Windows run virtualized under it.
Thus if it crashes, you get a BSOD.
WSL2 being Linux means that, unlike WSL1 which directly uses the host NTFS filesystem, it's probably using an emulated block device to hold its filesystem. If that emulated block device doesn't correctly honor write barrier requests from the Linux kernel, it could explain the corruption.
* the kernel is not properly shut down (and sometimes some buffers are not flushed)
* the virtual block device and/or its linux driver has bugs
The biggest issue across all these things is filesystems. There is simply no good way to share a filesystem between Windows and Linux that is both 1. fast and 2. fully correct, i.e. doesn't break some code.
SMB/NFS just never work super well: either they are slow, or programs assume they are local filesystems and things break (e.g. VS Code won't detect new files created on the Linux side of an SMB mount without a manual refresh). There are plenty of details around getting permissions right too, not to mention extended attributes, which differ slightly and break stuff. Network filesystems in general are just too different from local FSes for programs to work seamlessly.
The two non-ideal solutions I've found are: 1. keep all files in Linux and edit over the network using VS Code Remote or IntelliJ's remote support, or 2. keep a copy of the dev workspace in Windows and use something like unison (or manual git) to sync.
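Option 2 boils down to a newer-file-wins copy in one direction (unison adds conflict detection and two-way sync on top). A toy sketch of the idea in Python; `sync_newer` is a made-up helper for illustration, not anything unison ships:

```python
import os
import shutil

def sync_newer(src: str, dst: str) -> list[str]:
    """One-way sync: copy files from src that are missing in dst or newer
    than dst's copy. A toy stand-in for what unison/rsync do properly;
    deletions, conflicts, and checksums are all ignored here."""
    copied = []
    for root, _dirs, files in os.walk(src):
        rel = os.path.relpath(root, src)
        target_dir = os.path.join(dst, rel)
        os.makedirs(target_dir, exist_ok=True)
        for name in files:
            s = os.path.join(root, name)
            d = os.path.join(target_dir, name)
            if not os.path.exists(d) or os.path.getmtime(s) > os.path.getmtime(d):
                shutil.copy2(s, d)  # copy2 preserves mtime, so re-runs are no-ops
                copied.append(os.path.normpath(os.path.join(rel, name)))
    return copied
```

Real unison also handles deletions and detects conflicting edits on both sides, which is exactly the part you don't want to hand-roll.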
tl;dr WSL2 bash is not bash but it pretends to be and there are terrifying changes to the semantics of bash scripts as a result.
The predecessor, WSL, "just worked", and it was more or less a Linux experience for most practical purposes, and certainly better than hoary old cygwin.
This caused a lot of people to believe they could just transition to WSL2, led on by the promise of an even more performant Linux experience. The documentation didn't say anything about complications, so a lot of people tried it as soon as they could, thinking it would go as smoothly as WSL1. But nope: in many cases, it doesn't just work out of the box. There are network configuration and gateway issues, snags with VPNs, and now this git repo corruption. When you look at the GitHub issues, it's just people randomly shotgunning suggestions, some of which work and some of which don't. I think WSL2 was rushed out too early, or at least it's lacking a comprehensive troubleshooting guide to get it up and running.
Which results in a corrupted repository, from Git's point of view.
Not sure what you mean by an entire FS transaction. Linux doesn't have a transactional FS interface, so open, write, close can be interrupted at any point, and ending up with just a new empty file after the open is one of the valid outcomes after a crash.
I think many modern file systems try really hard to make that true, but I don’t think you can count on it. “man write” (https://man7.org/linux/man-pages/man2/write.2.html) still says:
“Note that a successful write() may transfer fewer than count bytes”
It also says
“A successful return from write() does not make any guarantee that data has been committed to disk. On some filesystems, including NFS, it does not even guarantee that space has successfully been reserved for the data”
That should be handled by calling fsync, but of course, if that fails, there’s not a lot you can do (even if you exactly know what happened) (https://research.cs.wisc.edu/adsl/Publications/atc20-cuttlef...)
I also don’t think calling data loss due to writes that do not make it to disk “file system corruption” is correct. For file system corruption, the file system data structures have to be overwritten (e.g. the boot record or directory data structures)
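For completeness, the usual application-side defense against the "new empty file after open" outcome is the write-temp, fsync, rename dance (Git uses a variant of this for refs and loose objects). A minimal sketch; note it only helps if the virtual block device actually honors the flushes, which is precisely the open question with WSL2:

```python
import os

def atomic_write(path: str, data: bytes) -> None:
    """Durable whole-file replace: temp file + fsync + rename + fsync(dir).
    After a crash a reader sees either the old contents or the new ones,
    never a truncated or empty file (assuming the storage honors flushes)."""
    tmp = path + ".tmp"
    fd = os.open(tmp, os.O_WRONLY | os.O_CREAT | os.O_TRUNC, 0o644)
    try:
        view = memoryview(data)
        while view:                      # write() may transfer fewer bytes than asked
            view = view[os.write(fd, view):]
        os.fsync(fd)                     # push the data to stable media
    finally:
        os.close(fd)
    os.rename(tmp, path)                 # atomic replace on POSIX filesystems
    dir_fd = os.open(os.path.dirname(os.path.abspath(path)) or ".", os.O_RDONLY)
    try:
        os.fsync(dir_fd)                 # persist the rename itself
    finally:
        os.close(dir_fd)
```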
The same happens if you have a power loss or kernel crash on Linux (as a kernel developer, it happened to me several times when testing freshly-committed code).
I'd say WSL2 team, because it worked well in the original WSL.
FWIW I am a user moderator down there and VMware desktop products (Workstation/Fusion/Player) has my focus and as a result I basically read almost every post that would report this issue.
I just DDG'ed it and see one report from 2014, OK, some more from around 2008 when using Google.
Looks like this was resolved in 2015 as that was the last time I see it being mentioned.
Took quite some time to figure out what was causing it. "Everyone is somehow corrupting files except for me, wtf?" Magic filesystem translation layers always suck
I can't recall what made me start doing that, but now it's a habit.
Same thing, sorta but all in one go.
So, to me, it looks like WSL2 is not completely flushing writes to the underlying file system. Bad, but not as bad as file system corruption (which could lead to losing all data on the disk).
The first instance of a problem is:
[ 1.956835] JBD2: Invalid checksum recovering block 97441 in log
[ 21.151232] ERROR: MountExt4:1659: mount(/dev/sdb) failed 5
I haven't seen the syslog/systemd journal for other cases, so I don't know if there are instances where the ext4 log replay succeeds but files go missing. That's not file system corruption, even if it leads to an inconsistent state in a git repository (or even a database). But it is still concerning, because a situation where log replay is clean but files are missing suggests an entire transaction was just dropped: it never made it to stable media, and even the metadata was not partially written to the ext4 journal.
qemu-kvm has a (host) cache setting called "unsafe". The default is typically "none" or "writeback". The unsafe mode can result in file system corruption if the host crashes or has a power failure. The guest's IO is faster with this mode, but the write ordering expected by the file system is not guaranteed if the host crashes, i.e. writes can hit stable media out of order. If the guest crashes, my experience has been that things are fine: subsequent log replay (in the guest) is successful, because the guest writes that made it to the host cache do make it to stable media by the time the guest reboots. The out-of-order writes don't matter... unless the host crashes, and then it's a big problem. The other qemu cache modes have rather different flush/FUA policies that can still keep a guest file system consistent following a host crash. But they are slower.
So it makes me suspicious that, for performance reasons, WSL2 might be using a possibly volatile host-side caching policy. Merely as an additional data point, it might be interesting to try to reproduce this problem using e.g. Btrfs for the guest file system. If write order is honored and flushed to stable media as appropriate for the default out-of-the-box configuration of a VM, I'd expect Btrfs never complains, but might drop up to 30s of writes. But if there are out-of-order writes making it to stable media, Btrfs will also complain; I'd expect transid errors, which are also a hallmark of drive firmware not consistently honoring flush/FUA combined with a badly timed crash. (And similarly for ZFS, for that matter; nothing is impervious to having its write order expectations blown up.)
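For reference, the qemu cache mode is just a `-drive` option (libvirt exposes the same knob as `<driver cache="..."/>`); the image name here is a placeholder and the `...` stands for the rest of a normal VM command line:

```
qemu-system-x86_64 ... -drive file=guest.img,format=qcow2,cache=none      # O_DIRECT, honors guest flushes
qemu-system-x86_64 ... -drive file=guest.img,format=qcow2,cache=writeback # host page cache, still honors flushes
qemu-system-x86_64 ... -drive file=guest.img,format=qcow2,cache=unsafe    # ignores guest flushes entirely
```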
Silent corruption of parts of the disk that you rarely access but still want to keep is scarier (you might have rotating backups for months or years and still eventually lose data)
To avoid silent corruption requires full metadata and data checksumming, à la Btrfs or ZFS. In those cases, not only is corruption unambiguously detected, it's also not allowed to propagate.
Also over the years it seems like everyone I've seen that habitually edits files remotely ends up with this sort of pain and butthurt.
To be fair, this feature always felt..rickety. But it was very nice.
You would be missing the remote power operations.
FWIW, I am working on a product called Vimarun that aims to replace most of that missing functionality over time.
No remote power operations yet, but that will come.
I hadn't even considered that. Connecting USB devices remotely works well with vSphere, but I never tried it with Workstation. They are going to completely remove the hostd engine from VMware Workstation. That would indeed also include that part, and it is most likely not easy to get it working without hostd.
Postgres, for example, never worked natively. Several npm modules would fail when running webpack.
wsl2 worked perfectly for these cases.
I wonder if people are mounting an ntfs volume in wsl2 which is really slow and janky?
We (postgres) did fix an ENOSYS (missing syscall) problem at some point so WSL could run Postgres. The surprising thing for me was how long it took for anyone to tell us it was broken/spewing warnings. That was forced when we changed a warning to a panic.
From reading the issue and related ones, it sounds like it might be related to some sort of unpredictable unclean VM shutdown.
> Postgres never worked for example natively.
That's a minor inconvenience at worst.
I will stay put on WSL1. If I wanted a VM, I would have just installed VMware and run some Linux ISO in it.
What a bizarre claim. It's irrelevant if you don't need Postgres, a minor inconvenience if you can easily adopt a workaround, and a show stopper if you were relying on accessing a local Postgres instance.
The level of inconvenience purely depends on your stack and how its developed. Often things which don't bother me have huge effects on other members of my team, or on people working on other projects.
I tried WSL because I thought it would be faster than a heavyweight VM. Turned out it's dog slow in comparison.
Honestly don't see a use for it.
The root partition is slower, but I'm usually manipulating windows files anyway so both versions are similarly slow.
I use WSL2 right now, but only because I need to mount a vhd that's formatted with BTRFS.
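In case it helps anyone searching: that mount goes through `wsl --mount` from an elevated Windows prompt (the disk number is an example; recent builds also accept `--vhd` for VHD files, as I understand it):

```
wsl --mount \\.\PHYSICALDRIVE2 --type btrfs
# detach again when done:
wsl --unmount \\.\PHYSICALDRIVE2
```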
I wasted a day trying to figure out if I had a problem with anti-virus or something that was blocking me before realizing that WSL I/O is just... well, slow.
People kept telling me to upgrade to WSL2 to solve that but the version of windows I had didn't allow it. Might have been a blessing in disguise given the data corruption bugs.
The reason $MSFT loves open source is because they can get press hype over projects that are 75% complete (which is the main goal), and they don't even need to support it, document it, or make it actually work.
What's even worse is they have managed to abstract most of the support away in this cycle. You can't get enterprise support now because they gutted that entirely. You can't get them to do anything on github because they keep moving all the projects around and erasing them all or auto closing the tickets and no one on first line support knows anything now other than how to reset a Microsoft account password.
They're following the "rules" from "The Cathedral and the Bazaar", remember. Specifically, the "release early and often" bit, for the purposes of this conversation. Microsoft are considered "good open source citizens" because of the changes they've made to follow the written non-rules as well as the unwritten rules.
If you're going to fault Microsoft for following the rules, fault EVERYONE ELSE that does it as well.
I want people to realize that they crap on Microsoft hard for things that they gladly accept from other developers or other companies. The double standards in the IT community are absolutely insane.
Microsoft are seen as the safe choice, so their stuff has to "just work".
That's obviously a bad place for innovation within Microsoft, but I don't really think anyone cares about the future of Microsoft.
Base your expectations on reality, and you'll have a much better time.
Who would've thought that managing hundreds of repos with a shitton of issues, dependencies, and people would be difficult to get right on the first attempts.
We discussed this here before: I am reasonably certain, without having any insider info, that it was the various ptrace request types that broke the camel's back, besides the abhorrent file system performance. Both PTRACE_SEIZE and PTRACE_TRACEME were closed as fixed-in-WSL2: https://github.com/microsoft/WSL/issues/2028 https://github.com/microsoft/WSL/issues/3031
I am a little surprised that Z3 had difficulties. I did not think it used anything exotic.
Should Mac Office not be called Office because they completely re-wrote it?
> Should Mac Office not be called Office because they completely re-wrote it?
I do not know how different Office for Windows and Office for Mac are, but to go with a different example, yes, I do think Visual Studio for Mac and Visual Studio Code should not have carried the Visual Studio name, it causes unnecessary confusion.
I upgraded to WSL2 because, well, 2 is bigger than 1, so it must be better. But no, nothing worked. Serial ports are not supported in WSL2.
Here's how you can flash ESP devices under WSL2.
You can have WSL1 and WSL2 side by side IIRC. And there are scripts out there to pipe serial into WSL2.
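My understanding is that the serial-piping scripts have mostly been superseded by usbipd-win, which forwards the whole USB device over USB/IP (the bus ID below is an example; get yours from `usbipd list`):

```
# Windows side, elevated prompt, with usbipd-win installed:
usbipd list
usbipd attach --wsl --busid 4-2   # older releases: usbipd wsl attach --busid 4-2
# Linux side: the adapter then shows up as a normal /dev/ttyUSB0 or /dev/ttyACM0
```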
Well. Has cygwin ever corrupted git repos?
Other issues which colleagues encountered include abysmal performance and broken python installations as PATHs and other environment details are wildly mixed inside WSL.
I just don’t understand why some people were seemingly happy with WSL1, there were so many rough edges. WSL2 is much much better in my experience, on virtually all fronts.
The problem is software that is badly written and makes bad assumptions, like assuming that constantly opening and closing files is fine just because it's fine on Linux. That may be true on most UNIX systems, but nobody guaranteed it.
I think that WSL2 is a very, very bad idea: you are no longer making a POSIX subsystem for Windows, a way to use the POSIX API on the Windows kernel without any emulation (basically the same thing as WINE); you are running a virtual machine.
I would say that WSL2 performance is very bad if you work on the Windows filesystem. Sure, if you work from the WSL home directory, which is mounted on a virtual ext4 filesystem, performance is good; it's a VM.
But this is useless. The main advantage of WSL over having a VM or a dual boot was integration with Windows: the ability to use bash scripts to manipulate your Windows files, the ability to launch Windows executables and pipe their output into a POSIX executable.
All of that is useful only if there is a strong connection between the two systems, if I can work with WSL in the same home directory as Windows, where I have all my files. How is it useful if, before working on something (which could be a trivial thing like running a script to rename a bunch of files), I have to first copy the files into the WSL home, run what I have to run, and copy them back? And what if I want my IDE running in Windows, with the project in Windows, and I want to run bash scripts on the project?
I hope they will not discontinue WSL1! If they do, I will unfortunately have to go back to cygwin, which was not great but worked mostly fine, since I need integration between Windows and Linux.
I'm actually surprised they can't be used together.
Say what you want about cygwin, but it never did this.
For native linux, I have my choice of distro installed on my choice of hypervisor (Debian, VMWare).
I use VSCode and the Remote SSH extension (functions identically to WSL2). The difference is I know exactly what is going on. I know when and why certain network conditions exist (port forwarding, etc).
Nonetheless I hope this gets fixed before it hits me.
The MBP M1 has blown me away, I don't think I'll try to go back any time soon.
I mostly write Go though which is fine on M1 macs.
Combine this with endpoint management crapware from my employer and you have a toxic combination, especially as said crapware is primarily targeted and optimized for Windows machines: the parent company has > 500k Windows machines and only about 25k-40k Macs.
Sad, as I really loved the pre 2018 MB Pros and it was a blast doing my data analysis work there.
Is this confirmed by anyone else?
Thanks for keeping it so bad that you can't even push to a repo without corrupting the filesystem or the repo itself.
It might have supposed to be EEE, but it turned out to be the last push many people needed to switch to a full Linux OS.
If you have 2 different git clients (different git versions) accessing the same shared .git directory, bad things can happen -- incorrect file status, iirc.
I wish I could use a real Linux installation, but the display drivers don't work well with my multiple displays of different resolutions.
Unless you're writing through a time warp from 2004, this is not true at all, IME.
But the statement is vague. "Work well" can mean some esoteric DPI scaling stuff that I think is only noticeable with 4K combinations. What distribution? What's "their display driver"? Nouveau, the open-source Nvidia one, is pretty bad, and everyone uses the proprietary Nvidia driver (this may be hard to come by on some distros, like Debian, but don't use those; use Ubuntu or Manjaro). The AMD open-source one is great and everyone on AMD GPUs uses that. I've been running triple monitor, diagonally-aligned 1080p/1440p configurations for years on several distros and DEs with both Nvidia and AMD drivers.
(nvidia's proprietary drivers if it matters)
I doubt this will ever be fixed since it's an obscure hardware that's rarely used by other Linux users
It always seemed strange to me that people would rather use WSL than the real thing when Windows doesn't bring much advantage. What am I missing ?
Windows brings lots of advantage to some things (including interfacing with the large number of people who rely on people having software that works only or best on Windows; in anything other than very tech-focussed firms this probably includes your employer, and even in such firms it often includes your customers, which can matter a lot even in tech roles), and not switching between physical machines or rebooting between different tasks brings advantages.
How the world has changed!
Some people have made Juniper Pulse Client work in Linux, even a co-worker has posted some instructions. But I already have a setup where I start the VPN client in a Windows VM and tunnel through it.
I feel like any Linux solution would take a lot of time to set up and might not be as robust.
Another reason I still need Windows around is we only support S/MIME encryption in the Outlook client. Part of this is because of how our internal IT configures the cert, there is a way to make it work in the webmail but our IT guys have either opted out of that or not gotten around to it.
That's pretty much it though. I can happily use Linux for 99.99% of my time.
Using Linux without a reboot is an incredible convenience. Glossing over the value of proprietary software that only runs on Windows is narrow minded.
That said, I hate Adobe products with a passion. When I build identical computers for my wife and me, the moment I install the Adobe shit on hers, it becomes noticeably slower at everything. I don't truly understand it.
Edit: it does work, but it will sign you out from all M365 services which requires a lot of logins for nothing.
Shockingly, this is untrue if using KDE. Almost everything in it is better than a multi-billion-dollar company's monopolistic OS shell somehow, from the taskbar customization to the features (disable compositing, deep customization of effects and behavior, have windows remember size/position, etc.) to the file manager, Dolphin, which has split views, tabs, had a dark theme a dozen years ago, more file metadata to show optionally like date modified and size, thumbnails for even text files, terminal integration, and more (although technically that's an independent package available on any DE). The exceptions are how "smoothly" windows glide around the screen when dragged and that the Windows taskbar looks slightly better.
The other thing on my Windows VM is Affinity Photo/Designer (because GIMP is not a realistic Photoshop alternative).
I did some Googling yesterday and the last word on the matter is that Photo would cost $500k alone, which they couldn't see recuperating.
Disclaimer: I use it exclusively on Linux but there are iOS clients https://www.zotero.org/support/mobile
Even something like Zoom worked like shit on Linux half the time. Or was missing features.
Audio/Video hardware had issues too. Something as simple as a webcam became this giant problem. Configuring mice and keyboards was a nightmare.
This is a lot to do with companies not supporting Linux and having to use community created reverse-engineered stuff, so it's not all Linux's fault, but at the same time, it just became a pain in the ass to maintain and fix constantly.
Also games: as much as Steam/Proton/Wine have improved, there are just some things that don't work.
So I switched back to Windows, and even though I prefer Linux, I just can't have it as my daily driver; WSL2 is great for me.
Ignorant they are, yet their ignorance is bliss.
The answer, at least in my case, has been using Windows where I must and WSL for whatever I can.
Being able to use both at the same time. Being able to open windows apps, and have them use the linux filesystem and executables (like IDEs)