There ought to be like a "didn't read the article" link next to the comment that people can click to quickly bury those comments. It would save people from having to reply again and again to say the same thing the commenter could have just learned by reading the article. Especially, you know, this bit:
> Remember: There’s a reason we gave the %localappdata%\lxss\ folder ‘hidden’ & ‘system’ attributes
There are multiple comments in this thread right now with that line in them.
There is: it's the downvote button. The distinction of why a downvote was given is useless.
Edit: yeah, karma is needed.
Actually I think it would be incredibly useful.
You could use it to shift a community to a more useful direction.
A more direct way of eliminating common errors.
If someone starts another bloody global-warming discussion on a story about some cool new renewable tech, being able to downvote it as off-topic could help keep people on topic, rather than having to downvote something you agree with.
There are issues with the idea (it can become censorship), but I find it an interesting one.
You could blend colors too. Reddish-purple = off-topic troll. Dark orange = weird and scary troll. etc.
Edit: another thing I should have mentioned is that it seemed to me that only comments made after I gained this ability were downvotable. The thread my last few comments were in was sprinkled with downvotable comments, which were newer, and non-downvotable comments that were there before mine. I might have been imagining this or be mistaken, though.
EDIT: checking again at 507 and I now have the power to downvote. So there's our answer (plus my moderately shameless play for some upvotes worked).
I think you might be mistaken. You might not have noticed yet that the downvote option disappears on its own a set time (24 hours, I think) after the comment was made. Perhaps it was only the comments older than this threshold that were non-votable?
I think my karma accrued overnight, so when I noticed I'd exceeded the threshold a lot of the thread was perhaps a day old. The only thing I am relatively certain of is that some of the older comments in the thread were not downvotable, and the newer ones were.
That would be quite computationally intensive to implement.
How about a tag system associated with each posting? Then you can have just a scalar "weight" and a set of keywords (string labels), each with a designated meaning. You press a "plus" button or something, and a menu with optional tagging pops up (tag clouds can be used).
Some tags could be "spam", others could be "didn't read" and users can build their own threshold filters for tags. (0) spam; (4) didn't read; (3) incomplete; ...
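A minimal sketch of how such per-user threshold filters could work. Everything here is hypothetical: the data format, the tags, and the thresholds are made up to match the "(0) spam; (4) didn't read; (3) incomplete" example.

```shell
# Hypothetical input: one line per comment: id, tag, accumulated tag weight.
cat > /tmp/tags.txt <<'EOF'
101 spam 5
102 didnt-read 2
103 incomplete 4
EOF

# A user's thresholds: hide a comment once a tag's weight exceeds
# that user's threshold for the tag.
awk 'BEGIN { t["spam"]=0; t["didnt-read"]=4; t["incomplete"]=3 }
     { print $1, (($3 > t[$2]) ? "hidden ("$2")" : "shown") }' /tmp/tags.txt
```

Comment 101 gets hidden (spam weight 5 exceeds threshold 0), 102 stays visible (2 is under 4), 103 gets hidden (4 exceeds 3).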
As for "didn't read": if you're not interested, just move on. If you think the article is inappropriate, flag it and move on. If you see a comment that looks like the poster didn't read the article, you can civilly call out what is in the article, and perhaps downvote the comment.
Yeah I thought about this very hard actually, and came to the conclusion that just a "weight"(+) button is enough.
And, if you care enough to spend your time providing more ... contextual meta-info (about the weight itself), you simply need a pop-up and a quick way to select words from a common dictionary. Certain users could even make content "more spammish" simply by increasing the weight with a click (they agree with the labeling and they are "special" in some way).
This, and the weight itself should provide enough info for a comprehensive "decoration" of the postings.
I can't fault you - slashdot hasn't been relevant for at least a decade.
Lolling hard at suggestions to use Cygwin as a viable alternative to WSL because of this tiny nit.
So basically "there is a gaping hole in the middle of the street but it's ok, I've put a 'road ends here' sign in front of it so now it's the drivers' fault if something happens"
In this case this results in a lot of files not being editable by windows programs, plain and simple - such as the entirety of /etc and /home. That's a pretty bad situation.
What you're describing there is road works, and yes, of course it would be the driver's fault if they drove into the middle of the work. Even aside from the obvious point about the work being signposted, it's the driver's job to be vigilant of hazards, including those that aren't signposted (e.g. potholes).
> In this case this results in a lot of files not being editable by windows programs, plain and simple - such as the entirety of /etc and /home. That's a pretty bad situation.
Think about it more like using the right tool for the job. You wouldn't blame Windows / Microsoft if someone screwed up their registry because they edited that binary blob in Notepad instead of Regedit. Equally, if someone is clued up enough about Linux to install and use the Linux subsystem, then it's reasonable to expect them to use Linux tools to modify the files on the Linux filesystem.
I create a file in my WSL home directory, and want to edit it in Sublime. So I google "where is the WSL home directory stored" and get `%LOCALAPPDATA%\lxss`. I plug that into WIN+R, and get the files in explorer. At no point was I stopped or even warned.
The reason is clear. WSL is grafted on to Windows, and has low priority. The windows shell and filesystem teams are currently not working to improve integration from their side (at least not publicly visible).
The shell people could put up a banner in explorer, like you got in Program Files in previous windows versions, that you shouldn't edit these files (e.g. via desktop.ini). From the filesystem side, it should be possible to make the kernel know about WSL and prohibit writing to those files from windows.
WSL could also have been implemented with an image file, instead of individual files in `%LOCALAPPDATA%\lxss`. The reason they did it this way, I believe, was to keep open the possibility of future two-way-integration. But this would mean change-listeners not only in Linux land, but in Windows-land... which again would need deeper integration from other groups at MS. I suspect once WSL has proven itself a bit, they'll add full transparent two-way-integration, i.e. you'll be able to edit lxss files from windows one day.
And how would the shell team put up a banner in Sublime?
If the kernel prevented these files from being written to, how would we install the distro in the first place?
Could, should, might, may. Yep, there are 100 ways in which some things could, and may yet, be done, but these choices were made for hundreds of sound reasons.
We're still working hard on WSL - lots of improvements in the pipe.
I just wanted to make clear that the parent comment is slightly hyperbolic.
> I have to provide this guidance at least 2-3 times a day so instead I am publishing it here so everyone can find / link-to this guidance.
This may be oversimplifying, but if that's the case, then there's a design problem. Either you should make it work, or you should do some better defense against it up front.
If the solution is "eh, just work out of a Windows folder", this isn't really better than using a VM. Heck, I've been using WSL since the beta, and using Vagrant + Putty gives me better results.
I was hopeful it would get better, but then I saw this post and shook my head. "Silly users, stop trying to do things we didn't plan for you to do!"
That said, Windows Subsystem for Linux is beta and they went out of their way to make that clear, going so far as to put (beta) in the Add Features dialog. And this was the right move, IMO. Such a small percentage of their user base uses this feature; it's targeted at us, and we're used to dealing with beta problems. They hit a large enough number of users to test the feature thoroughly, while managing expectations that it's not ready for prime time yet.
> "Silly users, stop trying to do things we didn't plan for you to do!"
Yep, that's pretty much it. But in the context of a beta product, it's more "Here's something we are getting large numbers of reports about while we're refining the software for production release. In the meantime, don't do this."
Now, if they release this production with that limitation and nothing to prevent you from destroying your data if you modify that folder, that'd be a major oversight, but I have a feeling that'll be resolved. My hope would be that you'd be able to mess with these files using whatever tool you wish, but even adding a dialog to Explorer, and an error in PowerShell/Cmd that catches you trying to do something that'll break the subsystem would be an improvement.
I don't see any indication in the article that this behavior is something they intend to change in order to "fix" a "problem". It looks to me like don't do this, period.
I can understand their frustration, because this is probably really hard to fix.
I guess there would be no problems if Windows apps were not overwriting files by first deleting them and then re-creating them with new contents. It's easy to see why this leads to data loss (file permission metadata is data in the Unix world). You rarely see Unix apps re-creating files: there, you instead specify the behaviour for the case where the file already exists via the flags of the open(2) call.
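That open(2)-flags contract is easy to see from the shell: bash's noclobber option switches `>` redirection from O_CREAT|O_TRUNC to O_CREAT|O_EXCL, so the flags of the open call, not a delete-and-recreate dance, decide what happens to an existing file.

```shell
cd "$(mktemp -d)"
echo first > f.txt

set -o noclobber              # '>' now opens with O_CREAT|O_EXCL
if ! { echo second > f.txt; } 2>/dev/null; then
  echo "refused: f.txt already exists"
fi
cat f.txt                     # still "first": nothing was deleted or truncated
```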
I guess such a contract is absent, or not as strict, in the Windows world (disclaimer: I have never programmed for Windows), which would make reconciling the conflicting APIs almost impossible: if a re-created file inherited a previously deleted synonymous file's permissions, what happens to a program that deletes a file precisely to get rid of its permissions?
I think MS has done pretty impressive work with the Linux subsystem. It is certainly not a trivial task, and I am looking forward to seeing more awesome stuff running on Windows.
I think your lack of knowledge about this problem is betraying you. This is outright wrong. First, look up CreateFile() in Windows; it has flags to specify things like this as well. Second, the reason programs delete files in the first place has nothing to do with the lack of such flags. It has to do with the fact that they want to write new contents but don't want to lose data in the event of an abnormal termination. If you truncate the file that you're opening instead, then you lose that data if something goes wrong. So they create a new file and replace the old one with it once it's written.
Finally, as far as I know, 'nix tools have a tendency to outright replace files MORE than Windows tools do. That's why the underlying Windows kernel API function (NtCreateFile/ZwCreateFile) has a FILE_SUPERSEDE parameter... whose entire intention is to mimic POSIX behavior:
> The CreateDisposition value FILE_SUPERSEDE requires that the caller have DELETE access to an existing file object. If so, a successful call to ZwCreateFile with FILE_SUPERSEDE on an existing file effectively deletes that file, and then recreates it. [...] Note that this type of disposition is consistent with the POSIX style of overwriting files.
The reason: Delete behavior.
In NT, when you delete a file and handles are open, the name sticks around until all handles are closed. Any attempt to re-open (including re-create) with the same name will fail with STATUS_DELETE_PENDING while this is the case.
Combine that with the fact that most operations in NT (including delete or rename) happen via opening handles to the file first and you have lots of chaos and race conditions.
So the delete + recreate pattern is very likely to bite you later as a "re-create fails randomly".
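The write-new-then-rename pattern described above, which avoids both the data-loss and the delete-pending problems, looks like this on the POSIX side (a sketch; careful code would also fsync before the rename):

```shell
cd "$(mktemp -d)"
echo old-contents > config.txt

# Write the new version under a different name, then rename over the old one.
# rename(2) is atomic on POSIX filesystems: readers see either the old file
# or the new one, and the target name never passes through a "deleted" state.
echo new-contents > config.txt.tmp
mv -f config.txt.tmp config.txt

cat config.txt    # -> new-contents
```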
The problems here only occur if your app is modifying files that do not "belong" to it. For apps that store config files, databases, etc., these problems should not arise, because those files should only be touched by those apps, and (a) the app is already aware what metadata the file should have and is the one maintaining control over the files, and (b) the app can make sure that it is never using a file that it is trying to delete. If the user is messing with the app's files, then the user is just as responsible for not preventing their deletion as he/she is responsible for not corrupting their contents. You can't blame the deletion behavior for that; the responsibility for proper care rests entirely on the user, and having a different delete behavior doesn't fix the core problem.
For files that don't belong to the app, the situation is entirely different. First, apps should not be deleting files that don't belong to them without user consent. And if the user is consenting, the user is responsible for ensuring this does not cause problems. Second, those that modify files they do not own are responsible for all aspects of this, including preserving metadata. This is of course difficult and quite a burden, but at this point, it is considered a bug in the application if it does not do this correctly. Again, the app is the one mucking with files that it does not own, so the app and the user together are responsible for maintaining consistency, not the OS.
There isn't even such a thing as "my files" and "not my files". Are you aware that many people run antivirus products that sit between you and the filesystem and might choose to also open "my" files in an asynchronous fashion based on my own actions? That such an antivirus product has been built into Windows for some time now? (msmpeng.exe) Suddenly that file you thought you had safe exclusive access to... you didn't.
Even controlling your own actions is hard and yields unexpected results all the time. Almost any time I set dwShareMode to something other than 0x7, it bit me in some unexpected way due to things my own process was doing that I hadn't thought of ahead of time.
Once you have Windows filesystem code doing this at any appreciable scale you will see this. After seeing sharing violations and delete-pending errors time after time, you realize these are just dangerous patterns on the platform: avoid them and move on.
Regarding antiviruses: yes, of course I am. That's why they should be opening the files with FILE_SHARE_DELETE, and properly handling the edge cases. Unless you're telling me it's impossible (which I strongly suspect isn't the case, but then again I've never tried to write an AV myself), I don't see how it's the OS's fault. This flag and the associated functionality obviously exists for a reason, right? It's not an OS design flaw if people aren't using it, is it? People can refuse to play along with anything... that doesn't mean the OS is flawed.
Also some AV will duplicate your handles, meaning they get the same sharing mode you asked for.
> This flag and the associated functionality obviously exists for a reason, right?
It's the kind of stuff that sounds reasonable when you hear about it, but when you see it put into practice is the source of way too many bugs.
Maybe I misunderstood the problem. But I think my reasoning holds regardless? To be clear, it seems you're talking about a situation where an app is trying to delete and re-create a file, and an AV is scanning the file in between, hence a recreation fails.
But with their file system filter drivers, AVs can detect such recreation attempts, so they should be able to handle it properly. So isn't it their fault for not doing so?
> Nor does it let you delete a directory while children are open.
Right, but same as above: there exists functionality to detect this, right? So if it isn't used, whose fault is it?
Usually I'd say come up with a unique name if you can. Otherwise I'd say try to overwrite (possibly overwrite via rename, rather than CreateFile, to avoid the data loss you mention).
I've even written some code to rename to a unique name first, then delete.
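A sketch of that rename-first pattern. The Windows benefit, that the original name is free for re-creation even while the doomed file still has open handles, obviously can't show up on other platforms, but the shape is the same:

```shell
cd "$(mktemp -d)"
echo data > victim.txt

# Move the file to a unique throwaway name first, then delete that.
tmp="victim.txt.$$.deleting"     # $$ keeps the temp name unique per process
mv victim.txt "$tmp"             # the name victim.txt is reusable immediately
rm -f "$tmp"                     # the real delete can lag without blocking anyone

echo fresh > victim.txt          # re-creation cannot hit "delete pending"
cat victim.txt                   # -> fresh
```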
That makes it sound almost like this could actually be fixed by diving deep enough into the NTFS driver in Windows.
To those who are not well versed in this:
This was a hack for 2 things that exist in the universe:
1. Apps that save their documents by "delete document, re-create document by the same name".
2. The fact that said apps (especially when you consider that they might be written for Win3.1 or DOS) could be accessing the documents by 8.3 names (eg. MICROS~1.TXT)
What ntfs.sys will do is: for some period of time after deleting a file, remember the "long filename" of a corresponding short name, and re-hydrate it when you re-create. So when you delete MICROS~1.TXT and create a new MICROS~1.TXT, ntfs.sys will remember that this file is also called "Microsoft.txt" and re-create that name.
Vim's default behaviour is to save via complete re-creation and atomic rename. I wouldn't be surprised if other classic editors used the same technique.
Completely disagree. Considering that some file systems which have been around for decades don't even have this permissions metadata, I would say that the file contents are the most important to users. MS should absolutely give a warning that the extended metadata can be lost, but the "If WSL can’t find Linux file metadata for a given file, it assumes the file is corrupted/damaged" behaviour noted in the article is completely contrary to user expectations.
The filesystem model in Linux and Windows is not identical, but there is considerable overlap. It therefore makes perfect sense --- and is expected --- for the overlapping parts to work.
The alternative I can think of is spreading hidden metadata files all over the place, à la Apple with their .DS_Store and AppleDouble files that everyone always complains about. That comes with its own set of problems (moving files around loses the metadata once they're no longer in the source directory alongside their metadata files).
They already did that, but as usual "with great power comes great responsibility", so if you ignore the safeguards installed... well... bad things can happen.
So yes, if you decide to grant yourself permission to see and modify those files, you can give yourself a bad time. Or maybe you could develop an application that handles them intelligently, after researching how they work extensively.
Someone's gonna ask: With the right amount of registry and file tampering, yes, you can absolutely render automatic updates and telemetry nonfunctional. Results may vary, but it's definitely doable.
I live in both camps. This is the same attitude I took issue with in the original article. It's condescending. "I have to keep telling you to not do something because we didn't plan for it."
I'm not saying it's an easy problem. But I am saying that blaming users for something doesn't give me confidence.
If it were done via a VM, people would just mount the VHD, edit the files, and still complain.
Show file extensions is disabled by default. I'll always go in and change that. But I can't think of a time where I've had to go into a system file before.
What exactly has caused you to need to unhide those files this decade?
Same for the "drivers" and "etc" sub-folders.
Which has both the hidden and the system flag set. You have to change two explorer settings to even access the directory, and one of those changes gives you a prompt warning you that it might be dangerous to enable it.
*nix: "Go for it, edit what you want, just be aware you can fuck up"
Windows: "what the fuck are you doing in the special area? get out!"
What's the solution? Put up fewer warnings? Then you'd get even more people destroying their data. More warnings? Then you'd get more people complaining that "Windows doesn't let you edit what you want!".
It doesn't really seem like they can win.
Uneducated users should be responded to with "WTF were you doing?", as a computer is a precision machine and it is not unreasonable to expect some basic level of literacy.
vacri@thingy:~$ rm /sbin/init
rm: cannot remove ‘/sbin/init’: Permission denied
If I've elevated myself to be a global administrative user, then I should be able to do global administrative things, without being treated as an 'uneducated user'.
> it is not unreasonable to expect some basic level of literacy
I think it's perfectly reasonable to expect some level of literacy for administrative-level tasks, and that's what *nix does.
One of my favorite Windows v. Linux things is when Linux users try to work in Windows, and try and do things that would be perfectly normal in Linux, and then wonder why their Windows box is so unstable. I once ran across someone who tried putting their Windows\system32 folder into a version control app. When their Windows registry exploded spectacularly, their response was that this was a normal thing to do on a Linux box.
Windows and Linux take very different engineering approaches to things, so skills developed in one don't necessarily translate well to the other.
And, of course, ZFS boot environments are the same sort of idea to an extent.
You might be able to accomplish that with a system service that runs in the context of the lxss shim and a local (in-memory) network connection.
If WSL can't do the equivalent of Vagrant-style vm.synced_folder, how is anyone expected to use it for a real-world workflow?
I've found the combination of Vagrant/Virtualbox with the Git command-line tools (using MSYS2) to be an excellent cross-platform working environment. Really, it's been my daily workhorse for the past two years.
Also, there is a not-free program called SecureCRT which is much more usable than Putty.
"Remember: There’s a reason we gave the %localappdata%\lxss\ folder ‘hidden’ & ‘system’ attributes "
So if you go messing about in hidden system folders, you get what you deserve.
I would have preferred msft to have posted an article explaining that files have attributes and editors must not lose those attributes when making edits.
An editor which just kept the existing attributes would still end up with incorrect time stamps - so now all Windows apps need to know how to handle Linux file attributes? Nope - because these files were never meant to be edited by Windows apps. And it works best that way.
I'm on OSX and every now and then see how apps are making interesting use of file attributes (e.g. tags, or remembering the cursor position of a text file when re-opening). So, I'm honestly surprised Windows isn't doing anything with this useful OS feature.
There is currently no tool that writes the EAs. It should be not too hard to write one, however it won't work transparently - WSL maintains a cache of this metadata, so you'd likely have to quit all WSL applications before you make any changes to avoid corruption.
- the actual project for the decoder
To those who claim this needs to be fixed, I want to ask: how?
I mean if I modify a file in the Linux filesystem as a Windows user, what should the ownership be? Root? Some non-root user? Should we just make those files 777? Each of those solutions would bring more problems. It seems to me that there is no "one size fits all" fix here.
Inherited from the directory, like it is on Windows and Linux (by default)? And 777 is a perfectly fine default, since it's a good match for the Windows default attributes and ACLs.
> It seems to me that there is no "one size fits all" fix here.
There isn't. But there is a sensible default, which is not "assumes the file is corrupted/damaged".
Consider what happens in Linux if you mount a filesystem like FAT which doesn't even support all the extended metadata. No extended metadata does not mean "I can't do anything with the files", unlike what seems to be happening with WSL here.
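For reference, this is how Linux fills the same gap when mounting FAT: since the on-disk format has no owner or mode bits, mount options supply synthesized defaults instead of the files being treated as corrupted. The device and mount point below are placeholders; this is a config sketch, not something to run verbatim.

```shell
# /dev/sdb1 and /mnt/usb are example names; adjust for your system.
mount -t vfat -o uid=1000,gid=1000,fmask=0133,dmask=0022 /dev/sdb1 /mnt/usb
# Every file now appears owned by uid/gid 1000 with mode 0644,
# every directory with 0755; no per-file metadata exists or is needed.
```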
Who lied to you so horribly that under any Unix a newly created file is owned by the owner of the directory? It's not even generally true for group ownership, which has a little more frivolous behaviour!
> And 777 is a perfectly fine default
0777 permissions are never fine as a default. It's asking for security trouble.
There is simply an impedance mismatch between how these operating systems view files. Sure, you can suggest a bunch of guesses about what to do if you edit or drop a file from one context into the other without both operating systems agreeing on how to manage that but you will find yourself arguing about what "sensible default" means until the cows come home.
It is. One can reasonably argue that it's natural for the parts that don't correspond between the two OSs to be lost in the transfer. But the main problem is that according to the article, it treats missing Linux metadata as an error.
> Sure, you can suggest a bunch of guesses about what to do if you edit or drop a file from one context into the other without both operating systems agreeing on how to manage that but you will find yourself arguing about what "sensible default" means until the cows come home.
No doubt at MS there was a large amount of bikeshedding over this, but I don't think the "sensible default" should ever be "give up". Unfortunately, having been in a few design discussions with similar results, I have a good idea of exactly how that decision was reached: no one could agree on the precise behaviour, and "leave it out" or "make it an error" got chosen, simply because anything else would be perceived as supporting one alternative over another. Even randomly choosing one of the options would've been far preferable to having no option at all, to the surprise and disappointment of the users.
Someone brought up a related issue at https://news.ycombinator.com/item?id=12981620 (HN discussion is at https://news.ycombinator.com/item?id=11008449 ) in a different context, but it may have come from a similar (in)decision process: "we can't agree on a default, so let's just leave it as an error" --- nevermind the fact that in that case, it means bricking the user's machine, and imploring systemd to try to work around it.
Cygwin is awful compared to WSL: poor performance, constant gotchas, cygpath handling, etc. Sure, it was a good option back in the day and a good engineering feat.
With WSL I get an environment that behaves exactly like Linux, complete POSIX without gotchas.
I work with WSL all the time on files located on Windows partitions. Works perfectly. Just create symlinks in WSL to Windows folders and you are ready to go.
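A sketch of that setup using temporary stand-in directories so the snippet runs anywhere; on a real WSL install the target would be something like /mnt/c/Users/&lt;you&gt;/projects (path hypothetical).

```shell
win=$(mktemp -d)                 # stands in for /mnt/c/Users/<you>/projects
wslhome=$(mktemp -d)             # stands in for your WSL home directory

ln -s "$win" "$wslhome/projects" # one symlink, created once

# Edits through the link land on the "Windows" side, where both OSes
# can safely touch the files; no hidden lxss folders are involved.
echo 'hello' > "$wslhome/projects/readme.txt"
cat "$win/readme.txt"            # -> hello
```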
Why do people even need to work with files in the hidden system folder? The only files that I maybe want to change from Windows to WSL would be .bashrc et al, but just copy it or edit them from WSL and problem solved.
If I don't want to use WSL and I only need a shell, then MSYS2 is the way to go, handles Windows and POSIX paths interchangeably without the cygpaths hassle.
And no, WSL is not like SFU, that never really worked.
This just seems like a really bad idea compared to simply running a VM and sharing a filesystem using networking or other means... It's not like a modern machine, even a shitty netbook nowadays, can't handle a virtualbox instance, or localhost networking.
But assume you are 100% right that the WSL environment is removed or stops working in Windows 11+: how does that affect my Linux experience with WSL today? Not one bit, because if it changes in the next release I can start a VirtualBox VM when that happens. But why run one now and have an inferior experience when the result is the same? Doesn't make any sense. Or I could even switch to Linux; still the same old shell.
In my case I pulled a git clone into the Linux side of a project that was building using a third party gui tool, ran that from windows and my Bash window got soooo confused.
Seriously though, if they can improve the terminal "behavior" and figure out a work around for USB device access this will be really stellar.
Seriously, is mounting files readonly that hard to do? If you can't deal with linux files, leave them alone.
"Of course it doesn't work that way, idiot! If only you read the obscure documentation and had years of experience like me you'd understand why you're the problem and not the software!"
Remember: There’s a reason we gave the %localappdata%\lxss\ folder ‘hidden’ & ‘system’ attributes
You gotta work your way around a couple mild safeguards to go fucking with things.
Based on that, being hidden and system on Windows doesn't exactly say "we're guarding nuclear codes here" the way `chmod 000` plus `chattr +i` does.
But that's kind of the root of the issue, isn't it?
Very few things are system. In 7+, the things that are system are full of nukes.
I imagine someone with the linux subsystem will be in a similar situation.
You're right, it should be easy.
Changing Linux files that are stored in C: and are accessed in Windows from C: and in Linux from /mnt/c/ is fine.
If one is running NGINX/PHP under WSL would one not even be able to safely edit .php files?
I do have to occasionally edit Linux config files and whatnot, and yes, I do that with emacs or vim (depending on mood), but it's honestly not often and I personally prefer the terminal-based editors in Linux for that anyway (yeah, a preference thing there...).
Anyway, been working like this since before the Anniversary Update on the later insider builds and it's been pretty solid and seamless for me.
Use case would be wanting to use an editor that can take advantage of programming runtimes installed on the Linux side of things.
Think about using Sublime, where you'd want to run a Python linter. In this case Sublime needs to be installed and run in Linux alongside your Python installation.
I've run a few small GUI applications and the performance was pretty much native-level. I use VcXsrv as my X server. I've read that some people run entire Linux tiled window managers through WSL without complaint.
I would have expected VSCode (and Sublime/Atom/etc.) to need to be running on Linux to leverage the run-times installed in Linux.
How does it handle code complete for user written code when:
1. You have VSCode installed in Windows
2. You have your language and all packages installed in Linux
3. You have your source code sitting in a folder on Windows
Since you edited in some info about xserver performance, that's awesome.
What type of specs is your machine?
I'm mostly using it for C/C++ development for an embedded platform, so my experience with code completion, etc., is a bit different. But it does work as well as it does on a pure Linux installation.
I have a pretty powerful desktop but I run the same configuration on my i5/4GB laptop and the performance is still good (and why not, it's all native code). I think there might be some disk performance issues but I haven't compared it too closely. I prefer it over using a VM just because of the integration and convenience.
I guess I just don't get how VSCode that's running on Windows can "see" and use the Linux filesystem. Is this something MS did that only works for VSCode (ie. it wouldn't be available with Sublime/Atom)?
Also it looks like VSCode doesn't work through VcXsrv according to this issue:
It can't. It's the other way around: the Linux file system can see the Windows one. So VSCode's code completion is based on files on the Windows system, but the Linux side can also see those files and compiles/runs them. Linux doesn't know it's a completely different file system; it's just another path.
Then they would have their app level code off in a Windows folder to avoid corruption and having it accessible on both sides (makes 100% sense).
The confusing part is understanding how VSCode running in Windows is able to make sense of all of that.
That would mean VSCode would need to know that the runtime and packages are at "the private Linux FS path from within Windows", while app level code is at "some Windows path of your choosing". You're saying it does this flawlessly, and then there's no need to even mess around with xserver?
I don't use VSCode but I would guess it needs to know things like your Python path or Ruby path, etc.. These are things that would be installed on the Linux side of things. That's where it falls apart in my head.
For my development needs (embedded C/C++) this is not an issue.
> These are things that would be installed on the Linux side of things. That's where it falls apart in my head.
You set up your environment to run Bash.exe with the appropriate parameters to start the Linux application:
bash -c "linux command here"
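To make that bridge concrete, here's a minimal sketch of the pattern; plain bash stands in for the Windows-side bash.exe here, and the echoed string is just a placeholder for a real Linux command:

```shell
# Sketch of driving a Linux-side command through the bash launcher.
# On Windows the binary would be bash.exe; the -c "command string"
# pattern is the same either way.
bash -c "echo compiled-on-linux-side"
```

An editor's build or run task can be pointed at a command line like this, so a Windows-side tool ends up driving the Linux-side toolchain.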
Thanks for the replies, that clears things up.
For myself, an xserver is probably more overhead than I'd want to take on... I'd just drop WSL and go back to my VM at that point (different pain points, but simpler pain points).
Yep, I use a VM now (a seamlessly integrated one with vmware) but sadly vmware support for Unity mode Linux guests has been dropped and VirtualBox has too many problems to consider using.
Was hoping to eventually change to Windows 10 but it's sounding like I might as well stick with what I have now, despite the bugs and being stuck with a ~4 year old Linux distro because nothing else works.
If Windows can solve the problem of running GUI Linux apps through it, I'd be the happiest developer ever.
I had ~3 seconds of typing delay and a lot of graphical tearing on an i5 with an SSD and plenty of RAM (with SSH compression enabled) when I tried it against a locally running vmware VM under Windows 8, but I can't test it on 10.
Is it purely instant on 10, to the point where you would feel happy typing / looking at it all day?
According to MS, vscode doesn't work and perhaps all Electron apps wouldn't work: https://github.com/Microsoft/vscode/issues/13138
Edit: or as hartator says in another thread, store your files somewhere under /mnt/c/.
Microsoft's platform design is generally open enough to allow you to do dumb things. This is a good thing, because you can reg-hack your way around problems with the OS. But like editing the registry, doing so is very much at your own peril; it's really not designed to be tampered with by users.
People who enable visibility on protected system files, and then act upset when messing with protected system files breaks things, are the sort of reason we don't give out admin rights where I come from.
I'm a 20+ year Linux user. I use Linux tools to find things and to navigate places. I guess if I were a Windows user, who does things in the usual Windows way, I might have received a warning of some sort.
I'm just saying it'd be cool if there were a warning somewhere in the docs about this issue. It wasn't something I expected. I mean, I don't have to worry about editing "Windows" files on my Linux system with WINE. And, I'm able to mount my Windows partition from Linux and edit files without fear of harming things. I assumed things worked the same way for WSL. It was a wrong assumption, but I don't feel like it means I'm stupid to have made it, based on the knowledge I had at the time.
If you find any docs about this folder from Microsoft, you will find that they do warn you about exactly this.
"Interoperability between Windows applications and files in VolFs is not supported."
It's no big deal, really. I didn't expect miracles, and it all works much better than I expected. I wish the interop were better, but I'm pleased it's even something Microsoft is trying to do.
But I suspect they specifically wouldn't mention it because it's an implementation detail people aren't supposed to know about or look into. The problem might have been more widespread had they actually said "Don't change anything in this specific, hard-to-find, hidden/system-protected folder."
What, like this bit in the install instructions page: https://msdn.microsoft.com/en-us/commandline/wsl/install_gui...
> After installation your Linux file system components will be located at: C:\Users\<Windows user name>\AppData\Local\lxss\ This directory is marked as a system file which are hidden by default. Accessing this location directly is not advised due to caching between the Linux and Windows file systems. Check out our blog post for more information.
This trips up some Linux tools when you use those files from the Linux subsystem. Best either turn that feature off or only use the Linux git for any projects you plan to touch from there, even if you're saving them under /mnt/... to share with Windows. And of course you'll have to beware of how your Windows editors are saving those files.
I wish the Windows world had just migrated to UTF-8 and \n 10 years ago. It would all be smooth sailing by now.
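Whatever the converting feature is called in a given tool, the symptom is CRLF line endings leaking into files the Linux side reads. A minimal sketch of detecting and stripping them (file paths are illustrative):

```shell
# Create a file with Windows (CRLF) line endings, as a Windows editor might save it
printf 'line one\r\nline two\r\n' > /tmp/crlf_demo.txt

# Linux tools treat the trailing \r as part of each line; strip it:
tr -d '\r' < /tmp/crlf_demo.txt > /tmp/lf_demo.txt

# Count remaining carriage returns; 0 means clean LF endings
tr -dc '\r' < /tmp/lf_demo.txt | wc -c
```

This is only damage control after the fact; turning the conversion off (or keeping such files on one side only) avoids the round trip entirely.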
Or did you mean moving the metadata as well? Transferring all kinds of metadata from file A to file B is heinously non-portable across OSes and even OS releases. The specific metadata this is about isn't even available in the Win32 API, it's only part of the native API (and the kernel API ofc.)
There are APIs in Windows, apparently. A quick Google search reveals functions like CreateDirectoryTransacted, and another quick search reveals that the Linux kernel leaves this kind of thing up to the filesystem. So on Linux, filesystems like ZFS and Btrfs just use the normal POSIX file APIs, and others can optionally provide other functions if they choose to.
So yeah... the whole make-a-file-and-rename-it thing is a dirty hack.
EDIT - Grammar
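For reference, the write-then-rename pattern mentioned above looks like this on POSIX systems, where a rename within one filesystem is atomic (the paths here are illustrative):

```shell
# Write the new contents to a temporary file first...
printf 'new contents\n' > /tmp/target.txt.tmp

# ...then rename over the destination. rename(2) is atomic on POSIX,
# so a concurrent reader sees either the old file or the new one,
# never a partially written file.
mv /tmp/target.txt.tmp /tmp/target.txt

cat /tmp/target.txt
```

Hack or not, it's the standard workaround for the lack of a transactional file API: the data swap is safe, but filesystem metadata on the destination is not part of any transaction.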
Windows has TxF (used in Windows Update and Installer to tame the eeeenormous complexity a tiny bit; TxF and VSS also use similar mechanisms), but no other mainstream OS has anything remotely close.
"if you delete some file "File with long name.txt" and then create a new file with the same name, that new file will have the same short name and the same creation time as the original file."
It might open a security can of worms, but there might be a safe way to do that for those essential Linux attributes, too, for example if the program deleting the file has write permissions on it.
It's funny that this is such Windows-centered advice (regarding the "uninstall & reinstall"). Fixing things in Windows very often boils down to "just do the reinstall".
In Linux you have lots of other tools and methods to recover from such a disaster.
And even on Windows you can fix most issues in different ways than reinstalling, it's just that for some degrees of brokenness it's by far the faster solution (cf. just re-cloning a git repository instead of figuring out what is broken).
People are right that it's mostly just like a VM. However, a VM which doesn't eat up all your memory and is fast (since there's no virtualization layer) is actually a pretty nice thing. I run WSL on my laptop with 8GB of RAM, whereas running VMs is something I generally avoid.
They have started improving the integration a little; you can call bash from Windows and Windows executables from bash, if that's useful to you.
The main questions I have are:
a) Will VS Code support WSL? I don't need to edit the files inside the linux filesystem, but things like running python from an IDE inside WSL don't work atm. I have to treat it like a remote system.
b) Will we ever get OpenGL/CUDA support?
c) Will developers treat WSL as a first class platform the same way they do OSX, so that it's not the end user who has to come up with workarounds?
Having said that, I'm a huge fan and I use it a lot. But I also keep real linux machines around since not everything is handled by WSL.
A) VS Code supports WSL in the newer Windows Insider builds. The underlying support is now available, but it will take time for individual IDEs to implement it.
B) This is something that we are actively looking at but it is not an easy problem and will take time.
C) ¯\_(ツ)_/¯ We certainly hope so.
A) This is only a terminal, which isn't really more useful than opening a WSL window; I want to be able to debug Python easily. I don't really know how the Python debugging protocol works, so I'm not sure if stdin/stdout is enough, but even if it is, there's some glue code missing to make this trivial: I need a binary that takes the same arguments as python and translates Windows paths to WSL paths. And the Python VS Code tools also have some pretty cool Jupyter/IPython integration; I'm not sure that will just work with an executable that pretends to be python. It would be cool if the people working on the VS Code bits were thinking about WSL, rather than users having to come up with individual hacks.
B) Cool, I assumed "backlog" meant "not going to happen"; it will be great if it gets anywhere. Though I'm also interested in OpenGL for graphics: I wanted to run OpenAI's gym project, but that needed OpenGL for rendering. I'm not necessarily expecting you to write an X server, but it would be cool if you could cooperate with or fund a project like Xming or https://github.com/kpocza/LoWe or something.
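The path-translating wrapper described in (A) could start as something like this; `win_to_wsl` is a hypothetical helper, and the mapping assumes the default `/mnt/<drive>` layout WSL uses for Windows drives:

```shell
# Hypothetical sketch: convert a Windows path (C:\Users\me\app.py)
# into the corresponding WSL path (/mnt/c/Users/me/app.py).
win_to_wsl() {
  local p="$1"
  local drive="${p%%:*}"            # drive letter before the colon
  local rest="${p#*:}"              # everything after the colon
  rest="${rest//\\//}"              # backslashes -> forward slashes
  printf '/mnt/%s%s\n' "$(printf '%s' "$drive" | tr 'A-Z' 'a-z')" "$rest"
}

win_to_wsl 'C:\Users\me\proj\app.py'   # -> /mnt/c/Users/me/proj/app.py
```

A real wrapper would also have to pass through the interpreter's other arguments unchanged and translate only those that actually look like Windows paths.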
One annoying thing is that PHP's built-in webserver (which is also used by Laravel for its development server) causes file read errors under WSL. Easy enough to avoid until they fix it though, by setting up Apache in Ubuntu or a local LAMP stack on the Windows side.
It's surprisingly good for an initial implementation, but it's far from perfect.
I do my editing on Windows and compile/test from the command line.
Of the two OSes, the weaker link, security-wise, is definitely Windows 10 rather than Linux. Putting Linux programs under a Windows 10 abstraction layer means that any malware, bugs or "bugs" (remember, this is proprietary software) in Windows 10 have the ability to transparently subvert or surveil the Linux programs.
In this aspect, WSL is just like putting a Linux VM on a Windows host (albeit more performant and convenient, yes).
Also note that this isn't intended as a security feature - it is a convenience to get Linux userland programs on Windows without a resource-hogging VM.
While I appreciate that you're upset about this issue, what you just did is hard to view as anything short of disingenuous. If you're genuinely trying to hide something from government surveillance, then NO simple OS decision will help you, since only a few countries in the world allow you to actually say "no" to a court demand for access.
Please don't raft terms together to make your rhetorical device more seamless when it's strictly wrong to say. Security professionals do not define in-app telemetry as malware, and this community will not either. What's more, you even subtly hint that open source software is more secure and free from bugs. I've got news for you: sourceless security hole propagation is 20-year-old technology, invented by open source people to discuss their concerns with the idea that open source means we don't need security audits.
When on windows I pretty much stick with git bash and windows implementations of my dev stack.
What bizarrely contradictory advice.
"Use Windows tools to modify files used by Linux, but don't use them to modify Linux files".
The NT POSIX subsystem, on the other hand, is a very different beast. It's more of a container or VM-like object. That kind of operation is useful for some tasks, but when I want to use POSIX and Windows tools together on the same task, I don't want a container. I want tight integration, which Cygwin provides.
Personally, though, I'd like Microsoft to make two changes to the _existing_ win32 subsystem:
1) expose as public API the interface between the kernel console subsystem and conhost.exe: this interface will not only allow Cygwin to supply a real pty layer, but also let programs like console2 work much better
2) teach win32 userspace components to tolerate NtCreateProcess-based fork, and make CSRSS understand what's going on. Then regular win32 processes (like Cygwin programs) can fork and take advantage of the copy-on-write semantics that NT already supports
If I ever rejoin Microsoft, I'm going to push to get this functionality implemented. If core Windows ever goes open source, I'll implement it myself.
I understand that what you desire is something like a better Cygwin, instead of following the WSL approach. However, that won't leverage the Ubuntu ecosystem, so it would still have the same problems I described for NT POSIX and SFU: Cygwin is a target completely separate from the mainstream ones, so the tools WSL is trying to provide (and succeeds in providing; they're available today) would not be as well maintained, or even work correctly, if they were available at all.
Now, I agree that an interface is needed to allow alternate console implementations. The current situation in that regard, and the hacks used to make ConEmu or console2 kind of work, are terrible (and unfortunately it is now even worse with WSL, and an absolute mess with the new WSL interop in Insider builds).
> Cygwin already work well enough
Cygwin's fork works, but it has two major problems: 1) it's slow, and 2) it relies on awful hacks to load DLLs at the same virtual offsets in parent and child, and if these hacks fail (which they do if it's the wrong day of the week, or if the moon has the right phase), your fork fails. With proper fork support, this operation becomes fast and reliable.
I agree that vfork is better. Implementation complexities made Cygwin define vfork as fork, though. :-(
> I understand what you desire is to get something like a better Cygwin instead of following the WSL approach, however that won't leverage the Ubuntu ecosystem,
That's true, but Cygwin has enough critical mass that its own ecosystem is usable. FOSS developers of third-party packages usually care to some minimal degree about Cygwin support, for example, and will take patches to help it work better. SFU never achieved that critical mass.
> Now I agree that an interface is needed to provide alternate console implementation
I never understood the difficulty of making this feature happen. It's clearly needed, as evidenced by all the awful hacks out there, and it's not even that hard: I've seen the conhost code. You could make the protocol a public API with minimal changes.
Yes, the resulting interface ends up being a bit more complicated than openpty(3), but that's okay.