But it should definitely be well publicised/documented, because otherwise people won't realise they have a gaping hole in their defences.
I can't agree with this. Everything is running on Windows. The VM runs on Windows and WSL exchanges data with Windows all the time. That the data on the Windows side can leak because I installed a Microsoft-approved product from the Microsoft store on a Windows box with a Microsoft firewall is unacceptable.
This is how you get real linux “on” windows - the on part is an illusion, trickery to make using linux transparent and integrated. By comparison, WSL1, which is still supported, is “just” (it’s actually pretty impressive in its own right) syscalls translated to the NT kernel.
Microsoft could do a better job communicating this, but I don’t think any of their design decisions are bad in this regard.
You know, just like the software inside the Windows VM can launch a separate Linux VM; you're already controlling HyperV from inside that VM.
It makes perfect sense now you say it - I knew hyper-v was a hypervisor, I knew in basic broad strokes what a hypervisor is and where it sits, but for some reason this didn't occur to me.
It could be very alarming to people running containers 'on' a Windows server, but then such people are probably more familiar with hypervisors anyway.
Is hyper-v networking still somehow configurable from the 'host', or is it undesirable for containers unless you don't want to do anything to the network (in software on that machine)?
It is a bit more advanced than Wine, with first class support from NT kernel.
WSL 2, on the other hand, uses Hyper-V, which is a type 1 hypervisor: all OSes run as guests, including Windows itself.
Microsoft should make it more obvious since most Windows 10 users shouldn't be expected to make this distinction by themselves.
As far as I understand, that is not quite right. With WSL2, everything is running on Hyper-V, the VM and Windows both run in parallel on Hyper-V.
Granted, I don't know much about WSL, but that's a very surprising model to me. I would naively assume that anything in userspace is controlled by the Windows OS-level firewall, not that Linux gets to emit raw packets. To say the least, I'm a little more hesitant than you are to call that reasonable.
These VPN authors are just idiots - let's stop over complicating things. Half the time people LIKE that they can use linux firewall features on their linux hosts for stuff.
Maybe work on your reading comprehension?
"How it leaks
WSL2 uses Hyper-V virtual networking and therein lies the problem. The Hyper-V Virtual Ethernet Adapter passes traffic to and from guests without letting the host’s firewall inspect the packets"
So they are complaining that the Linux subsystem and distribution packets are not processed by the Windows firewall. I don't know what to tell you, but the idea that the Windows firewall should be in the mix on a Fedora distro seems a bit ridiculous?
The host OS image, dom0, also routes its network traffic through that VM, to get updates. (It doesn't trust the updates it gets that way; it checks their signatures.)
QubesOS provides another VM as a dedicated firewall just to route untrusted guests' traffic through, first. With enough cores, it all runs fast.
For many users, all guest VMs are untrusted. Dodgy programs like browsers get their own VMs, spun up as needed and discarded. That does take a fair bit of RAM; my maxed-out 16GB laptop notices the strain. But memory is cheap these days, if you have the sockets to put it in.
As an aside, dom0 also mediates access to the UI hardware, including display RAM. Each guest can run X, but its pixels are copied to the real display by dom0. Guest VMs can't see one another's pixels or input traffic. dom0 also mediates access to audio and video streams, and can route them to selected VMs as needed. (In a future release they plan to manage the display in its own VM, because display drivers are a big attack surface of their own.)
It all works astonishingly well.
Incidentally, this model of a hypervisor with all the user-level OSes as VMs, including the host, originated at IBM in the 1960s. That worked in a megabyte or two, which seemed like a lot at the time.
I know of people who run Windows 10 in a Qubes VM. It is dizzying to think of what they are really doing: running a Hyper-V system, with its own VMs, in a VM on a Xen hypervisor.
UPD: The solution may be to have Windows Firewall rules apply to WSL2 or have Mullvad control Linux internet access through on-the-fly UFW settings update or completely disconnect internet (but that likely does not work nicely and is why Mullvad went for the Windows Firewall based solution in the first place).
Linux has lots of options for firewalling. For Windows sysadmins, firewalld with a GUI could be a reasonably familiar option. Failing that, ufw is quick and reasonably easy for simple use cases. If you are feeling macho, then roll your own with iptables or nftables. The last time I did that properly was with ipchains ...
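If someone wants a kill-switch-style ruleset along those lines, here's a minimal ufw sketch. It's a hedged example, not Mullvad's actual setup: the interface name (wg0), endpoint address, and port are all placeholders for whatever your VPN actually uses.

```shell
# Deny everything by default, then only allow the tunnel.
sudo ufw default deny outgoing
sudo ufw default deny incoming
# Let the tunnel itself be established (placeholder VPN server, UDP 51820).
sudo ufw allow out to 203.0.113.1 port 51820 proto udp
# Allow all traffic that leaves via the tunnel interface.
sudo ufw allow out on wg0
sudo ufw enable
```

If the tunnel drops, nothing else matches an allow rule, so traffic stops rather than leaking out of the physical interface.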
UPD: I think it will be resolved in a much neater way soon https://github.com/microsoft/WSL/issues/4277#issuecomment-69...
"How it leaks
WSL2 uses Hyper-V virtual networking and therein lies the problem. The Hyper-V Virtual Ethernet Adapter passes traffic to and from guests without letting the host’s firewall inspect the packets in the same way normal packets are inspected. The forwarded (NATed) packets are seen in the lower layers of WFP (OSI layer 2) as Ethernet frames only. This type of leak can happen to any guest running under Windows Sandbox or Docker as well if they are configured to use Hyper-V for networking."
That is how virtual machines are supposed to work. Hyper-V is a virtualisation thing. Whatever Mullvad is doing is immaterial - they are only worrying about the host. If you use full on virty stuff, you need to treat each VM as a VM, not a container.
Basically, the tunnel doesn't leak under ideal conditions, with non-ideal conditions being trivial to induce.
For example, StrongSwan (IPSec) talks about this in their best practices page here: https://wiki.strongswan.org/projects/strongswan/wiki/Securit...
The StrongSwan process can do some tricks to tell linux to not allow this outbound traffic by creating a kind of dummy/shunt tunnel. Also, iptables should be used to prevent the outbound transmission of non-ipsec traffic to that destination.
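As a hedged sketch of that iptables piece (the subnet is a placeholder), the policy match module can accept traffic that was handled by an IPsec policy and drop anything to the same destination that wasn't:

```shell
# Accept outbound packets to the remote subnet that matched an IPsec
# policy on the way out...
iptables -A OUTPUT -d 192.0.2.0/24 -m policy --dir out --pol ipsec -j ACCEPT
# ...and drop anything headed there that did NOT go through IPsec,
# i.e. what would otherwise leak in cleartext if the tunnel is down.
iptables -A OUTPUT -d 192.0.2.0/24 -m policy --dir out --pol none -j DROP
```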
It's notable that I had a run-in with this issue a year or so ago with Ubiquiti EdgeRouters, which run a fork of Vyatta. They don't allow the "-m policy --pol none --dir out" iptables module to be used in configuration, even though the underlying Linux kernel supports it. They even support its use inbound. Pure stupidity, if not malice.
Yes I am a network engineer.
Can you confirm that WSL is supposed to be dealing with (the nightmare of) the Windows firewall for internet access? How does Fedora / Ubuntu etc. coordinate / know to do this?
As for the parent, if it's a Microsoft product running on Windows and Windows has a firewall, I'd expect it to be an effective firewall, at least for the things Microsoft gives me.
WSL2 uses Hyper-V, so Windows running WSL2 is running on Hyper-V, not bare metal. Being a different VM than Windows “Dom0”, Linux Kernel in WSL2 would have direct connection to Hyper-V virtual ethernet switch. I think that’s what is happening.
OTOH, did anyone ever consider the average pollution of the banking system? Tens of thousands of banks, 200+ central banks, the BIS, IMF, ECB, etc. Millions of employees, millions of desktops & servers, day in, day out. Anyone with a link to a guesstimate?
A Bitcoin transaction uses about 1,005 kWh, while 100,000 VISA transactions use 169 kWh, according to https://www.statista.com/statistics/881541/bitcoin-energy-co...
If a transaction costs half as much power every 4 years that's only 193 years until it's cheaper than visa! Truly the financial instrument of the (distant) future!
These are not alternatives to cash, they're alternatives to checks. The actual cash is held in accounts at centralized third parties (banks) who must be trusted to maintain accurate records, remain solvent, and not interfere with transactions legitimately approved by the account holders. What we see, however, is that the records are not always accurate, and banks do interfere with account holder-approved transactions, based on either their own policies or legal constraints. As for solvency… let's just hope that particular house of cards is never really put to the test.
Bitcoin, like physical cash, does not depend on trusted third parties. There are technological measures in place to guarantee accurate record-keeping, and while the sender of an "illegal" payment may be prosecuted after the fact (if they can be identified) there is little anyone can do either to prevent the payment from going through or to claw back the funds once they have been confirmed by the network.
Not on a per-transaction basis, which is the only relevant measure, because the banking system supports a lot more people than Bitcoin does.
A single bitcoin transaction uses 610.20 kWh right now, which is comparable to the energy consumption of an average US household over 20 days.
Also, for a comparison of scope: Tenpay, Tencent's payment service, processes about 1.2 billion transactions per day; Bitcoin does about 300k. If every financial transaction conducted in China alone consumed as much energy as a Bitcoin transaction does, a single day's transactions would roughly eclipse the energy the country consumes in a year.
And then everyone gets "too cheap to meter" fusion power? There is not a /lot/ of headroom there, we surely can't go to outputting as much waste heat again as the planet gets from The Sun - and before you say "solar", you already said "fusion".
If you have an issue with how the energy is generated take it up with your local government.
I mean we don't really have that in the case of bitcoin, which is predominantly mined in China these days probably precisely because state subsidised energy projects have created a ton of useless energy surplus, on which bitcoin lives.
Which is ironic in and of itself: the libertarian currency du jour runs on the misallocated resources of a state-planned economy, lol.
Just imagine if transactions actually cost as much as their energy consumption suggests, with the environmental damage priced in.
Still, I think that's the proper comparison—human processes are the analogue to keeping a blockchain online and mining.
All the energy in bitcoin is not wasted on keeping and organizing that tiny ledger (barely 300 GB of data!), it's wasted on brute forcing hashes, with the energy required ramping up exponentially with interest in bitcoin.
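To make the "brute forcing hashes" point concrete, here's a toy proof-of-work loop. It is not Bitcoin's actual scheme (which double-SHA-256s a block header against a difficulty target), just an illustration: finding a hash with n leading zero hex digits takes about 16^n attempts on average, and that guessing is where the energy goes.

```shell
# Toy proof-of-work: find a nonce such that
# sha256("block-data" + nonce) starts with two zero hex digits.
# Each extra required zero multiplies the expected work by 16;
# Bitcoin's real difficulty demands vastly more leading zero bits.
nonce=0
while true; do
  hash=$(printf 'block-data%s' "$nonce" | sha256sum | cut -d' ' -f1)
  case "$hash" in
    00*) echo "found nonce=$nonce hash=$hash"; break ;;
  esac
  nonce=$((nonce + 1))
done
```

Bumping the required prefix from "00" to "0000" already multiplies the expected work by 256, which is the exponential ramp the comment above describes.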
As ingenious as bitcoin is, that is a fatal flaw. Using bitcoin is like rolling coal, only worse for the environment.
This isn't a defense of the modern financial system, which is arguably a trash fire for plenty of reasons, but of course it's fairly energy intensive. It's massive. If it were replaced entirely by Bitcoin, it would be even more intensive.
* Bitcoin: 0.1% of all electricity, 7 transactions per second.
* THE ENTIRE REST OF CIVILISATION, FINANCIAL SYSTEM AND ALL: 99.9% of electricity, a heck of a lot more than 6,993 transactions per second.
It's the only way they can reliably prevent abuse like a thousand people using one number - because this way you can just track the number of open connections per account number.
This is superior to tracking IP-addresses to detect fraud for obvious privacy reasons. I do a similar thing for a service I run.
Out of curiosity, how do you even manage to use more than five devices for private use at once? Even just owning that many is unlikely.
For that use case, I can't justify paying double/triple the price as other providers that offer 2/3x the devices for the same price. The provider I use now, Surfshark, offers unlimited devices for about 1/3 of the price, and also recently started offering WireGuard, it would be financially irresponsible for me to choose Mullvad which would effectively 10x what I'm paying right now for the same number of devices.
FWIW I understand that their account number mechanism is superior from a privacy perspective, and that there's no way to support unlimited devices while combating fraud using that mechanism. It's just not the right set of tradeoffs for my use case.
I’m not GP and I certainly don’t take GP’s stance about limiting to 5 devices (I think it makes sense), but claiming it’s unlikely that someone owns more than five devices is silly, especially if someone has a family. My non-tech sister’s family of four has two phones, three iPads, two laptops, etc. As another example, I literally own over an order of magnitude more devices than just five devices for private use (yes, I’m an outlier).
No I specifically said use, not own. You can own more than 5 devices with your mullvad account number, you just can't be connected on all of them at the same time. Also I wasn't expecting people would share their accounts with their family, which is already questionable.
Do families not already share Netflix, iTunes, Spotify, Amazon Prime, etc etc? I’m not sure why it would be such a leap for them to share a VPN, especially if the reason they are using the VPN at all is simply to get around GeoIP restrictions (which I’m not condoning, but obviously many do it).
> No I specifically said use, not own.
These two verbatim quotes from you seem to be in conflict with each other.
> Out of curiosity, how do you even manage to use more than five devices for private use at once? Even just owning that many is unlikely.
One sentence is a question, the other is a statement which I consider to be true (and explains how I arrived at that question).
Also it was quite clear from my argument that I was talking about people singular, and you responded pretending I was saying that an entire family owning more than 5 devices is unlikely.
I can't imagine why you'd be arguing like this, I just hope it's not on purpose.
Seriously? OP never said just me and only me uses all five plus devices. I and others gave you multiple examples of how that could be very possible realistically, and then you shift goal posts and say it’s us being argumentative. I’m done, have a good life!
I expect the distributions on WSL to use their own firewall - that's half of the fun of using WSL.
PLEASE don't push fake news like this that results in distributions on WSL having to deal with / modify the Windows firewall - that would be a total nightmare!
I think the question is whether you consider a VM more like another machine in your network that merely happens to run on the same hardware or a part of the host system.
Personally I really liked the resource-efficient WSL1 approach and I lament that they dropped it. But I know for some use cases (e.g. Docker) a real Linux kernel was needed.
It works just fine. Just tested it.
I don't think users of NordVPN, ExpressVPN, MullvadVPN et al. are as sophisticated as you think.
I think VPNs can be a powerful tool for many people who would normally not be able to find out about their existence, but the predatory nature modern VPN ads have taken is quite sad.
This leads to some cases of Youtube fan bases angrily calling out shitty VPN ads while the video creators just want to pay their bills, a situation nobody wants.
Not sure how much of it is true. I cannot imagine what would happen to some people there were it to be illegal. I would move out.
Using WSL2 though... you kind of have to be tech-savvy to use it, and those people are probably willing to work around the issue.
That way the Linux network config can deal with the Linux side of things and the Windows network config can deal with the Windows VPN routing.
Of course you can just configure OpenVPN inside WSL2 and also run a VPN on the desktop, but that's tunnels in tunnels, and that way madness and network issues lie.
WSL2 is basically a VM and any VM which binds directly to the Adapter (e.g. not NAT mode) will have the same behaviour. In some cases you'd even want it to do this.
WSL2's NAT is close to a standard Hyper-V NAT adapter but there's unexpected differences (like the localhost binding) that make it stand out.
It's tunnels, all the way down :-)
Currently, I run Linux on a Xen domU and configure VPN client inside the guest.
PS: I don't want all my traffic to go through VPN. Especially things like Netflix or Youtube where VPNs are blocked (and VPN BW is lower anyway).
I used to run Linux VM inside HyperV before WSL2 released, and it worked like a charm. WSL2 just adds a lot of hacks to integrate Windows & Linux experience.
One docker image with openvpn:
1. at startup erases all routes except to VPN gateway and 18.104.22.168.
2. before and after connect it only has routes through VPN (no default ones - if vpn goes down, network goes down until re-established)
Start it like:
# ... --name vpn ...
Another docker image with what I want VPNed gets started with the network of the first
# ... --net container:vpn ...
I keep a browser within the second docker image (firefox) and use my main machine to show it. Note: you want to pass '--no-remote' to it and likely split /dev/shm
It can't really leak, since it doesn't have routes to do anything other than through the VPN.
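Spelled out with hypothetical image names (the abbreviated flags in the comment lines above are left as-is), the pattern is to share one network namespace between the VPN container and the app container:

```shell
# Hypothetical images/names, for illustration only. The vpn image runs
# openvpn and installs only routes via the tunnel, so if the VPN drops,
# the namespace simply has no default route and traffic stops.
docker run -d --name vpn \
  --cap-add NET_ADMIN --device /dev/net/tun \
  my-openvpn-image

# The app container joins the vpn container's network namespace, so ALL
# of its traffic uses (and can only use) the vpn container's routes.
docker run -d --name browser \
  --net container:vpn \
  my-firefox-image
```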
So what mullvad would prefer is that Linux traffic to be routed through the adjacent Windows Guest by default, so that the windows software can control the Linux network traffic.
I think a better solution would be to explore creating a VPN solution for HyperV OS itself if possible...
0 - https://github.com/microsoft/WSL/issues/4277
I think that’s a bit different than this, though it’s possibly related. As you said, the situation there is traffic is blocked.
WSL and WSL2 are fundamentally different in how they work. In fact, the poor I/O performance (caused in part by Windows Defender) in WSL is part of what led to the Hyper-V based approach to begin with.
My guess is that something might need to change either in the way VPNs use the firewall rules in Windows when passing on to WSL2, or in WSL2 itself, to allow more granular control over how that stuff is passed on - to address the Mullvad issue. Because as it stands now, the way Mullvad performs under WSL2 seems to be by design (WSL2's design, if not Mullvad's).
Obviously, many users who enable a VPN in Windows will want that connection to persist when they use WSL2 — but I can also think of plenty of scenarios where that might not be the case, which I imagine makes coming up with a solution more difficult.
I will say, the WSL2 team is incredibly responsive to feedback. You can file issues on GitHub and the team is very active on Twitter. If this is something that can be fixed on the WSL2 side, I feel confident the team will work to do it.
Not what's happening here (despite the title).
Root Partition – Manages machine-level functions such as device drivers, power management, and device hot addition/removal. The root (or parent) partition is the only partition that has direct access to physical memory and devices.
It may not automatically send traffic through the Windows firewall, because the networking setup now has traffic on a virtual switch/bridge, but the VPN creators have all the access they would ever need to control the networking from the root partition.
Another interesting note: Docker on Windows does some funky stuff with firewalls too. It puts an any/any exception in the firewall when you install it. So that may also be important to know for VPN stuff.
It does something similar on Linux, actually. Huge pain when trying to firewall servers only to discover that Docker happily bypasses all of your rules.
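For anyone who has hit that on Linux: Docker manipulates iptables directly, and the documented hook for rules that survive its rewriting is the DOCKER-USER chain, which is evaluated before Docker's own forwarding rules. A hedged sketch (the external interface name eth0 is a placeholder):

```shell
# Rules in DOCKER-USER run before Docker's FORWARD rules and are not
# overwritten by the daemon. Here: refuse new inbound connections to
# published container ports arriving on the external interface...
iptables -I DOCKER-USER -i eth0 \
  -m conntrack --ctstate NEW -j DROP
# ...while still allowing replies to connections containers initiated.
# (-I prepends, so this ends up evaluated before the DROP rule.)
iptables -I DOCKER-USER -i eth0 \
  -m conntrack --ctstate ESTABLISHED,RELATED -j RETURN
```

Plain rules added to INPUT or FORWARD don't help, because Docker's own chains are consulted first; that's why servers that look firewalled still expose published container ports.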
Pretty much sums it up.
First, GUI support is coming and the team is working to support both X11 and Wayland.
Second, the Remote Development Extension for VS Code lets you do this seamlessly. It auto-configures to work with WSL or WSL2 and can also connect to a container, a remote machine, or a GitHub Codespaces codespace. It's awesome and all of your files, your terminal, everything is mapped to WSL2, with all the GUI parts from Windows. It's one of my favorite things.
I’m not trying to convince people that WSL2 is the end-all be-all, even though I’m an unabashed fan, but I just want to correct the record a bit (regarding VS Code) and share that X11/Wayland GUI support is coming.
The Windows version can be fully integrated with WSL. Windows handles the GUI, Linux handles the CLI and all that. 
I have not found any need that it does not meet this way, but as I mentioned in another comment, I have a very narrow focus. So would not be surprised if I was missing something.
Probably going to check out this for GUI stuff again soon: https://github.com/cascadium/wsl-windows-toolbar-launcher
I’m not even being rhetorical, I’m genuinely curious if there are games with significantly better performance under Linux (and I’m assuming we would have to be talking about using an AMD card so I’m also curious if that performance under Linux is better than an Nvidia card under either OS), because maybe there are and I’m just totally unaware.
But it's good to clarify a few things to avoid confusion:
1. You can use Nvidia on Linux, including for gaming. Nvidia's problems are related to a lack of support for modern features (Wayland use cases and so on), caused by the fact that their blob driver is not upstreamed. But it's usable otherwise.
2. AMD drivers are open source and upstreamed, which is why they're a common preference for Linux gamers. AMD performance is very good on Linux (amdgpu, radeonsi, radv/aco, etc. all provide very good performance). That stereotype that "Nvidia drivers are faster" has been false for quite a while already. When comparing the same classes of cards, AMD is totally on par with Nvidia, if not better.
3. Besides native games, you can play many Windows-only games using Wine + dxvk / vkd3d, Proton, etc. Performance in such cases is usually slightly lower than on Windows, but not significantly. The only problems remaining are mostly with intrusive, rootkit-style "anti-cheats" that don't work in Wine, but I personally wouldn't even touch such games, so that doesn't bother me.
To sum up - using Linux for gaming is totally doable, as long as you want to use Linux in the first place and don't want to use Windows.
And again, I understand Linux is your chosen OS — I’m happy you’re so happy. My question was why a person who is using WSL2 would want to run a game in Linux instead of inside Windows. I understand you can game in Linux. That’s not the question. The question is: why would a person running Linux side-by-side with Windows run the game in that subsystem instead of just using Windows?
I didn’t know if there was a place where a game would get better performance in Linux, making that a better target.
I just don’t understand the criticism of doing something inside a subsystem that could be done just as well or better outside the subsystem. If you don’t want to use WSL2 or Windows or macOS or anything else, that’s fine. But for people who DO choose to use it, I don’t understand why “games inside Linux are slower” makes much sense as a criticism.
ArcGIS - Windows only, has enough issues as it is, virtualizing it doesn't tend to go well. Though you can do something like VMWare Fusion mostly successfully.
MS Office - Yes, there are alternatives, but we still operate primarily in Office, and the alternatives are not perfectly compatible. That's especially important when collaborating with other companies. Teams / O365 are certainly getting better, but still not there yet.
Steam - Although that is certainly getting better on Linux as well. And my gaming time is pretty limited these days.
And anyway, just the way it lets me manage multiple instances of Linux is far superior to anything I experienced on Mac or Linux itself. By current standards, Wine is just _painful_ to use. Meanwhile, Windows window management and the terminal app have made great strides in the last couple of years.
So if you do need to run something that's Windows-only but works in Wine, I'd totally recommend running it in Wine and ditching Windows for good. For me it's a benefit, not a hindrance.
And you can run multiple VMs on Linux too if you still need actual Windows (KVM, virt-manager, etc. are quite handy).
How's the display scaling these days? Is it still a better experience to run a 4k monitor at a lower resolution?
What's the Nvidia driver situation? Still janky because their drivers are doing their own thing?
Except Spotify, that needs a command line flag to set the scale factor, but that app is well known to be half-assed on Linux (they also don't support input methods, so searching for Japanese songs is a copy and paste exercise) and that's not Linux's fault.
AIUI the nvidia drivers are a lot better these days, but most Linux users, myself included, know to stay away from nvidia unless you have very good reasons not to. AMD cards work beautifully.
Wayland doesn't work on nvidia and is missing some features too. Linux desktop sucks.
I think there are great desktop environments and window managers for Linux.
Linux users don't use Nvidia if they are interested in the modern desktop use case. That's a well known factor. If someone migrates to Linux using Nvidia, chances are high they'll change it to AMD on the next GPU upgrade.
Which rules out anyone who wants to game or do CUDA stuff.
Everyone is welcome to their own opinions and preferences, but if you ask me, if the response to a request to use the most powerful/performant graphics cards is to switch to AMD (and AMD has some good cards but Nvidia’s are better and OpenCL can’t compete with CUDA when it comes to any machine learning work), well, that’s part of why Linux’s modern desktop adoption is still so small.
If the only option is to use an AMD GPU, you might as well just get a Mac and use actual UNIX.
And honestly, to each their own! But you asked why anyone would use WSL2 and you’ve got a good answer: they want to be able to take advantage of their chosen hardware and access the various Linux tools.
AMD is fine for gaming, I'm using 5700XT on Linux for playing games. And AMD will match Nvidia higher end cards next month. So I don't see any reason to use Nvidia for that.
WSL offers nothing for gaming or similar use cases that regular Linux can't. If you need to use CUDA with Nvidia hardware, you can do it on Linux proper just fine, you don't need WSL for it - Nvidia provide support.
I was responding to your response that Nvidia drivers for HiDPI and other display issues are subpar with “well, everyone who is serious about using Linux on the desktop uses AMD.”
First, that’s not true (as evidenced by the many people who do CUDA workloads in Linux). Second, my overarching point is that it strikes me as really dismissive to say “well, just don’t use the hardware you like/want/need if you want a good Linux on the desktop experience.”