Do not change Linux files using Windows apps and tools (microsoft.com)
549 points by nikbackm on Nov 17, 2016 | 262 comments



In this comment thread there are many people who did not read the entire article to its conclusion, and then also failed to read the many comments about the very last and very important bit at the end of the article.

There ought to be like a "didn't read the article" link next to the comment people can click to quickly bury those comments. It would save people having to reply again and again to say the same thing the commenter could have learned simply by reading the article. Especially, you know, this bit:

> Remember: There’s a reason we gave the %localappdata%\lxss\ folder ‘hidden’ & ‘system’ attributes

There are multiple comments in this thread right now with that line in them.


>There ought to be like a "didn't read the article" link next to the comment people can click to quickly bury those comments.

There is: it's the downvote button. Distinguishing why a downvote was given is useless.


I don't see a downvote button. Do you need a base reputation for that, like Stack Overflow?

Edit: yeah, karma is needed.


> Distinguishing why a downvote was given is useless.

Actually I think it would be incredibly useful.

You could use it to shift a community to a more useful direction.

A more direct way of eliminating common errors.

For example, on a story about some cool new renewable-energy tech, if someone starts another bloody global warming discussion, being able to downvote it as off-topic could help keep people on topic, rather than having to downvote something you actually agree with.

There are issues with the idea (it can become censorship), but I find it an interesting one.


They could just make flagging dual-purpose. But flagged comments probably alert the admins. Maybe clicking the flag button could spawn a drop-down, but I think there might be an aversion to introducing complexity and impedance. Requiring multiple clicks and using drop-down elements might be a departure from the deliberate simplicity.


Colorize according to votes. Purple = off-topic. Red = troll. Orange = weird and scary.

You could blend colors too. Reddish-purple = off-topic troll. Dark orange = weird and scary troll. etc.


What are the requirements to be able to down vote?


a karma threshold. 800 or something?


500, I only just gained downvote ability in the past few weeks, when I finally gained more than 500.

Edit: another thing I should have mentioned is that it seemed to me that only those comments made after I gained this ability were downvotable. The thread my last few comments were in was sprinkled with downvotable comments, which were newer, and non-downvotable comments that were there before mine. I might have been imagining this, or be mistaken, though.


As a data point, I am at _exactly_ 500 right now, and cannot downvote. So perhaps you have to exceed 500?

EDIT: checking again at 507 and I now have the power to downvote. So there's our answer (plus my moderately shameless play for some upvotes worked).


Yup. 501. Remember. With great power comes great responsibility.


Indeed - compared with the ability to speak freely, the ability to downvote is not much of a privilege.


the real question is if it goes away if you're downvoted.


I believe it does, but don't have any evidence to back it up. If itp volunteers, we could test by down voting his comments :)


Exactly... This effect might reinforce 'upvote'-grabbing behavior, to either not lose the ability, or regain downvote if lost.


Upvote-grabbing behavior is already rampant here, just as with any forum with point tallies attached to comments. Proof: all of the "edit: why the downvotes?" comments.


Council of 300

https://xkcd.com/1224/


it seemed to me that only those comments made after I gained this ability were downvotable

I think you might be mistaken. You might not have noticed yet that the downvote option disappears on its own a set time (24 hours, I think) after the comment was made. Perhaps it was only the comments older than this threshold that were non-votable?


That's quite possible. I had not realized that the downvote option disappeared so quickly.

I think my karma accrued overnight, so when I noticed I'd exceeded the threshold a lot of the thread was perhaps a day old. The only thing I am relatively certain of is that some of the older comments in the thread were not downvotable, and the newer ones were.


only those comments made after I gained this ability were downvotable

That would be quite computationally intensive to implement.


Not if you store the timestamp of when you gain downvote rights, and compare against that.


>There is: it's the downvote button. Distinguishing why a downvote was given is useless.

How about a tag system associated with each posting? Then you can have just a scalar "weight" and a set of keywords (string labels), each with a designated meaning. You press a "plus" button or something, and a menu with optional tagging pops up (tag clouds can be used).

Some tags could be "spam", others could be "didn't read" and users can build their own threshold filters for tags. (0) spam; (4) didn't read; (3) incomplete; ...


I've thought about this, too. In particular, I'd like to, say, separately up vote "substantive" and down vote "agree". That said, how complex do we really want the UX to be?

As for "didn't read", if you're not interested, just move on. If you think the article is inappropriate and you're able to, flag it and move on. If you see a comment that looks like the poster didn't read the article, you can civilly call out what is in the article, and perhaps downvote the comment.


> how complex?

Yeah I thought about this very hard actually, and came to the conclusion that just a "weight"(+) button is enough.

And if you care enough that you would spend your time providing more contextual meta-info (about the weight itself), you simply need a pop-up and a quick way to select words from a common dictionary. Certain users can even make content "more spammish" by simply increasing the weight with a click (they agree with the labeling and they are "special" in some way).

This, and the weight itself should provide enough info for a comprehensive "decoration" of the postings.


So... slashdot?


Mmm, to be honest, I was never a slashdotter so I have no experience with it at all (strangely enough :))


It has/had exactly that system, but also with positive descriptors.

I can't fault you - slashdot hasn't been relevant for... at least a decade.


Also, isn't the entire new Linux/Windows subsystem still in beta? IIRC it's only in the preview builds and even then you still need to enable it explicitly.


From initial glance, it seems the problem could easily be fixed with a filesystem filter driver ( https://msdn.microsoft.com/en-us/windows/hardware/drivers/if... ) detecting modifications to paths under the Linux root directory. Actually it could be done entirely in userspace, this definitely isn't some insurmountable architectural problem.

Lolling hard at suggestions to use Cygwin as a viable alternative to WSL because of this tiny nit.


Not as simple as that, because some tools create a working copy of a file when it's being modified and then replace the original file once changes are saved. Mapping these changes back to the same file "identity" is likely very error-prone.


It's available in the current public release; you still must enable it explicitly, like other optional Windows features (e.g. Hyper-V).


> Remember: There’s a reason we gave the %localappdata%\lxss\ folder ‘hidden’ & ‘system’ attributes

So basically "there is a gaping hole in the middle of the street but it's ok, I've put a 'road ends here' sign in front of it so now it's the drivers' fault if something happens"

In this case this results in a lot of files not being editable by windows programs, plain and simple - such as the entirety of /etc and /home. That's a pretty bad situation.


> So basically "there is a gaping hole in the middle of the street but it's ok, I've put a 'road ends here' sign in front of it so now it's the drivers' fault if something happens"

What you're describing there is roadworks, and yes, of course it would be the driver's fault if they drove into the middle of the work. Even setting aside the obvious point that the work is signposted, it's the driver's job to be vigilant of hazards - including those that aren't signposted (e.g. potholes).

> In this case this results in a lot of files not being editable by windows programs, plain and simple - such as the entirety of /etc and /home. That's a pretty bad situation.

Think of it more as using the right tool for the job. You wouldn't blame Windows / Microsoft if someone screwed up their registry because they edited that binary blob in Notepad instead of Regedit. Equally, if someone is clued up enough about Linux to install and use the Linux subsystem, then it's reasonable to expect them to use Linux tools to modify the files on the Linux filesystem.


The metaphor is wrong. As a normal user you would never come into contact with these files. The "gaping hole" is actually in a fenced-off area with keep-out signs where nobody would wander in by accident.


But then, a "normal user" wouldn't use WSL. Developers and power users do.

I create a file in my WSL home directory, and want to edit it in Sublime. So I google "where is the WSL home directory stored" and get `%LOCALAPPDATA%\lxss`. I plug that into WIN+R, and get the files in explorer. At no point was I stopped or even warned.

The reason is clear. WSL is grafted on to Windows, and has low priority. The windows shell and filesystem teams are currently not working to improve integration from their side (at least not publicly visible).

The shell people could put up a banner in explorer, like you got in Program Files in previous windows versions, that you shouldn't edit these files (e.g. via desktop.ini). From the filesystem side, it should be possible to make the kernel know about WSL and prohibit writing to those files from windows.

WSL could also have been implemented with an image file, instead of individual files in `%LOCALAPPDATA%\lxss`. The reason they did it this way, I believe, was to keep open the possibility of future two-way-integration. But this would mean change-listeners not only in Linux land, but in Windows-land... which again would need deeper integration from other groups at MS. I suspect once WSL has proven itself a bit, they'll add full transparent two-way-integration, i.e. you'll be able to edit lxss files from windows one day.


"WSL is grafted on to Windows, and has low priority" - not sure the team of ~12+ of us would agree with you on that.

And how would the shell team put up a banner in Sublime?

If the kernel prevented these files from being written to, how would we install the distro in the first place?

Could, should, might, may. Yep, there are 100 ways in which some things could, and may yet, be done, but these choices were made for hundreds of sound reasons.

We're still working hard on WSL - lots of improvements in the pipe.


You say that, but I've seen a few too many people delete random system files because they didn't know what they were and needed space. Granted, they should've read the "keep out" signs and have only themselves to blame.


These files are hidden by default, so not only didn't they read the "keep out" signs, they tore them down and said "See? No sign." and then ran into the hole.


I guess at some point you can't protect people from themselves anymore. One might even argue that people who use WSL (especially in beta) would be tech-savvy enough to understand how it's implemented and works, but apparently not ...


Sure - it is not a good situation. To me it would have sounded perfectly reasonable to edit WSL files with a Windows editor. There could at least be some kind of __README_FIRST_BEFORE_DOING_ANYTHING_IN_THIS_DIRECTORY.TXT.

I just wanted to make clear that the parent comment is slightly hyperbolic.


FTA:

> I have to provide this guidance at least 2-3 times a day so instead I am publishing it here so everyone can find / link-to this guidance.

This may be oversimplifying, but if that's the case, then there's a design problem. Either you should make it work, or you should do some better defense against it up front.

If the solution is "eh, just work out of a Windows folder", this isn't really better than using a VM. Heck, I've been using WSL since the beta, and using Vagrant + Putty gives me better results.

I was hopeful it would get better, but then I saw this post and shook my head. "Silly users, stop trying to do things we didn't plan for you to do!"


I have to agree with you there. I've always used the 80/20 rule for prioritizing immediate fixes: if I have to take a support call from more than 20% of the people I have deployed my software to (and in some cases, this may amount to one customer), then whatever the circumstance is, it's treated as a high-priority bug.

That said, Windows Subsystem for Linux is beta and they went out of their way to make that clear -- going so far as to put (beta) in the Add Features dialog. And this was the right move, IMO. Such a small percentage of their user base uses this feature -- it's targeted at us and we're used to dealing with beta problems. They hit a large enough number of users to test the feature thoroughly, while managing expectations that it's not ready for prime time, yet.

> "Silly users, stop trying to do things we didn't plan for you to do!"

Yep, that's pretty much it. But in the context of a beta product, it's more "Here's something we are getting large numbers of reports about while we're refining the software for production release. In the meantime, don't do this."

Now, if they release this production with that limitation and nothing to prevent you from destroying your data if you modify that folder, that'd be a major oversight, but I have a feeling that'll be resolved. My hope would be that you'd be able to mess with these files using whatever tool you wish, but even adding a dialog to Explorer, and an error in PowerShell/Cmd that catches you trying to do something that'll break the subsystem would be an improvement.


> in the context of a beta product, it's more "Here's something we are getting large numbers of reports about while we're refining the software for production release. In the meantime, don't do this."

I don't see any indication in the article that this behavior is something they intend to change in order to "fix" a "problem". It looks to me like don't do this, period.


If/when this issue is resolved, the guidance will be updated. Until then, don't go futzing with files in hidden system folders.


EDIT: Misinformed, see wfunction's comment below.

I can understand their frustration, because this is probably really hard to fix.

I guess there would not be problems if Windows apps were not overwriting files by first deleting them and then re-creating them with new contents. It's easy to see why this leads to data loss (file permission metadata is data in the Unix world). You rarely see Unix apps re-creating files: there you instead specify the behaviour in case the file exists with the flags of the open(2) call.

I guess such a contract is absent / not as strict in the Windows world (disclaimer: I have never programmed for Windows), which would make reconciling the conflicting APIs almost impossible: if a re-created file inherited the permissions of a previously deleted file of the same name, what happens to a program that is deleting a file precisely to get rid of its permissions?

I think MS has done pretty impressive work with the Linux subsystem. It is certainly not a trivial task, and I am looking forward to seeing more awesome stuff running on Windows.


> You rarely see Unix apps re-creating files: there you instead specify the behaviour in case the file exists with the flags of the open(2) call.

I think your lack of knowledge about this problem is betraying you. This is outright wrong. First, look up CreateFile() in Windows; it has flags to specify things like this as well. Second, the reason programs delete files in the first place has nothing to do with the lack of such flags. It has to do with the fact that they want to write new contents but don't want to lose data in the event of an abnormal termination. If you truncate the file that you're opening instead, then you lose that data if something goes wrong. So they create a new file and replace the old one with it once it's written.

Finally, as far as I know, 'nix tools have a tendency to outright replace files MORE than Windows tools do. That's why the underlying Windows kernel API function (NtCreateFile/ZwCreateFile) has a FILE_SUPERSEDE parameter... whose entire intention is to mimic POSIX behavior:

> The CreateDisposition value FILE_SUPERSEDE requires that the caller have DELETE access to a existing file object. If so, a successful call to ZwCreateFile with FILE_SUPERSEDE on an existing file effectively deletes that file, and then recreates it. [...] Note that this type of disposition is consistent with the POSIX style of overwriting files.

https://msdn.microsoft.com/en-us/library/windows/hardware/ff...


I would go so far as to say (though I've met few people who grok this, as Windows filesystem internals are a pretty rare specialty these days) that delete-and-recreate is a huge antipattern and a recipe for failure on Windows.

The reason: Delete behavior.

In NT, when you delete a file and handles are open, the name sticks around until all handles are closed. Any attempt to re-open (including re-create) with the same name will fail with STATUS_DELETE_PENDING while this is the case.

Combine that with the fact that most operations in NT (including delete or rename) happen via opening handles to the file first and you have lots of chaos and race conditions.

So the delete + recreate pattern is very likely to bite you later as a "re-create fails randomly".


OK, so let me explain why your reasoning is wrong.

The problems here only occur if your app is modifying files that do not "belong" to it. For apps that store config files, databases, etc., these problems should not arise, because those files should only be touched by those apps, and (a) the app is already aware what metadata the file should have and is the one maintaining control over the files, and (b) the app can make sure that it is never using a file that it is trying to delete. If the user is messing with the app's files, then the user is just as responsible for not preventing their deletion as he/she is responsible for not corrupting their contents. You can't blame the deletion behavior for that; the responsibility for proper care rests entirely on the user, and having a different delete behavior doesn't fix the core problem.

For files that don't belong to the app, the situation is entirely different. First, apps should not be deleting files that don't belong to them without user consent. And if the user is consenting, the user is responsible for ensuring this does not cause problems. Second, those that modify files they do not own are responsible for all aspects of this, including preserving metadata. This is of course difficult and quite a burden, but at this point, it is considered a bug in the application if it does not do this correctly. Again, the app is the one mucking with files that it does not own, so the app and the user together are responsible for maintaining consistency, not the OS.


You're funny. Telling me my years of experience of this behavior biting me in the rear end is "wrong".

There isn't even such a thing as "my" files and "not my" files. Are you aware that many people run antivirus products that sit between you and the filesystem and might choose to also open "my" files in an asynchronous fashion based on my own actions? That such an antivirus product has been built into Windows for some time now? (msmpeng.exe) Suddenly that file you thought it was safe to have exclusive access to... it wasn't.

Even controlling your own actions is hard and yields unexpected results all the time. Almost any time I set dwShareMode to something other than 0x7, it bit me in some unexpected way due to things my own process was doing that I hadn't thought of ahead of time.

Once you have Windows filesystem code doing this at any appreciable scale, you will see this. After seeing sharing violations and delete-pending errors time after time, you realize these are just dangerous patterns on the platform: avoid them and move on.


I didn't say your experience was "wrong" though? That doesn't even make sense; your experience is what it is. I only said your reasoning about your experience was wrong, i.e. you should be blaming the applications and not the OS (whether they're made by MS or anyone else).

Regarding antiviruses: yes, of course I am. That's why they should be opening the files with FILE_SHARE_DELETE, and properly handling the edge cases. Unless you're telling me it's impossible (which I strongly suspect isn't the case, but then again I've never tried to write an AV myself), I don't see how it's the OS's fault. This flag and the associated functionality obviously exists for a reason, right? It's not an OS design flaw if people aren't using it, is it? People can refuse to play along with anything... that doesn't mean the OS is flawed.


Sharing modes don't help with the STATUS_DELETE_PENDING issue. Nor does it let you delete a directory while children are open.

Also some AV will duplicate your handles, meaning they get the same sharing mode you asked for.

> This flag and the associated functionality obviously exists for a reason, right?

It's the kind of stuff that sounds reasonable when you hear about it, but when you see it put into practice is the source of way too many bugs.


> Sharing modes don't help with the STATUS_DELETE_PENDING issue.

Maybe I misunderstood the problem. But I think my reasoning holds regardless? To be clear, it seems you're talking about a situation where an app is trying to delete and re-create a file, and an AV is scanning the file in between, hence a recreation fails.

But with their file system filter drivers, AVs can detect such recreation attempts, so they should be able to handle it properly. So isn't it their fault for not doing so?

> Nor does it let you delete a directory while children are open.

Right, but same as above: there exists functionality to detect this, right? So if it isn't used, whose fault is it?


What's your alternate recipe for success?


I don't think I have one unfortunately. It depends greatly on the scenario.

Usually I'd say come up with a unique name if you can. Otherwise I'd say try to overwrite (possibly overwrite via rename, rather than CreateFile, to avoid the data loss you mention).

I've even written some code to rename to a unique name first, then delete.


Exactly. Most file systems have an atomic move that you can leverage into an "atomic write".
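
For anyone unfamiliar with the pattern, here's a minimal shell sketch of it (the transform command and filename are placeholders; the rename is only atomic if the temp file is created on the same filesystem as the target):

    target=config.txt
    tmp=$(mktemp "${target}.XXXXXX")   # temp file next to the target, same filesystem
    do_the_edit < "$target" > "$tmp"   # write the complete new contents (placeholder command)
    mv -f "$tmp" "$target"             # atomic rename: readers see either the old or the new file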


Thanks for correcting.

That makes it sound almost like this could actually be fixed by diving deep enough into the NTFS driver in Windows.


There is such a thing as NTFS tunneling, if that's what you mean. Not sure what you have in mind if you mean otherwise.


NTFS tunneling is a hilarious feature.

To those who are not well versed in this:

This was a hack for 2 things that exist in the universe:

1. Apps that save their documents by "delete document, re-create document by the same name".

2. The fact that said apps (especially when you consider that they might be written for Win3.1 or DOS) could be accessing the documents by 8.3 names (eg. MICROS~1.TXT)

What ntfs.sys will do is: for some period of time after deleting a file, remember the "long filename" of a corresponding short name, and re-hydrate it when you re-create. So when you delete MICROS~1.TXT and create a new MICROS~1.TXT, ntfs.sys will remember that this file is also called "Microsoft.txt" and re-create that name.


> You rarely see Unix apps re-creating files

Vim's default behaviour is to save via complete re-creation and atomic rename. I wouldn't be surprised if other classic editors used the same technique.


That is also what "sed -i" does.


Actually it's fairly common when making changes within a file, especially a text file, to read from one file, write the changed data stream to a new file, then do an atomic rename of the target file to the name of the source file.


It's easy to see why this leads to data loss (file permission metadata is data in the Unix world).

Completely disagree. Considering that some file systems which have been around for decades don't even have this permissions metadata, I would say that the file contents are the most important to users. MS should absolutely give a warning that the extended metadata can be lost, but the "If WSL can’t find Linux file metadata for a given file, it assumes the file is corrupted/damaged" behaviour noted in the article is completely contrary to user expectations.

The filesystem model in Linux and Windows is not identical, but there is considerable overlap. It therefore makes perfect sense --- and is expected --- for the overlapping parts to work.


> because this is probably really hard to fix.

The alternative I can think of is spreading hidden metadata files all over the place, à la Apple with their .DS_Store and AppleDouble files that everyone always complains about. That comes with its own set of problems (moving files around will lose the metadata when they're no longer in the source directory along with the metadata)


> Remember: There’s a reason we gave the %localappdata%\lxss\ folder ‘hidden’ & ‘system’ attributes

They already did that, but as usual "with great power comes great responsibility", so if you ignore the safeguards installed .. well .. bad things can happen.


Indeed, even with my level of tinkering, I am almost never dealing with system-attribute files. The cool thing is that Microsoft ALLOWS you to do anything with your computer; they don't hold the sort of superiority complex other companies do about telling you what you can and can't do on your own PC[0].

So yes, if you decide to grant yourself permission to see and modify those files, you can give yourself a bad time. Or maybe you could develop an application that handles them intelligently, after researching how they work extensively.

[0]Someone's gonna ask: With the right amount of registry and file tampering, yes, you can absolutely render automatic updates and telemetry nonfunctional. Results may vary, but it's definitely doable.


I have to assume you are alluding to Apple when you say "other companies". If that's correct, it's relatively trivial to disable the protections on system files in all versions of OS X / macOS. Furthermore, automatic updates and Apple's less invasive version of telemetry are both opt-in, not opt-out and certainly not requiring the magic mojo Windows requires to do so. Sure, iOS is a police state in comparison, but you explicitly stated "on your own PC", not mobile device.


I was actually thinking of Google at the time. Chrome OS devices, for example, wipe themselves if a system file is changed. The new Pixel needed an exploit to root it. Companies like Google (and yeah, Apple too to some degree) believe that you should only experience your own computer the way they intended it to be experienced.


For Chrome OS, all you need to do is enable developer mode to fix that. It's a supported feature in the OS and documented in detail on Google's website.


I think you are being a bit too harsh here; there is a way in which it does work: keep all of your files on the "Windows" side. For those of us who have lived in both camps (Windows and Linux), the difference in file metadata is extreme - it isn't an "easy" problem at all. Windows has metadata bits for all sorts of things that Linux doesn't, which gives it much greater flexibility but means that the equivalent of your umask on Windows doesn't translate. It does, however, do everything Linux / POSIX needs, so the inverse is doable.


"for those of us who have lived in both camps"

I live in both camps. This is the same attitude I took issue with in the original article. It's condescending. "I have to keep telling you to not do something because we didn't plan for it."

I'm not saying it's an easy problem. But I am saying that blaming users for something doesn't give me confidence.


That's like having users start deleting things out of your /usr/lib and then complaining the fault is clearly with the OS designers for not securing that folder sufficiently.


That's a valid position though. Maybe iOS is for those folks.


The system attribute is that better defense; changing Explorer's options to display such files results in the following warning:

https://i.imgur.com/uUaJEZI.png

if it was done via a VM, people would just mount the VHD, edit the files, and still complain


Microsoft puts so many things behind this flag that you pretty much have to display all system files if you are a power user.


I've left the system files invisible at least since installing Windows 10, and it hasn't once been a problem. I can't remember if it was always this way, or if it's a recent addition, but there's a separate option for showing hidden files and folders. (With this ticked, system files remain hidden, so this one is a much easier tick to recommend, if you're technically-minded. You get to see %USERPROFILE%\AppData, which you'll probably want, but not stuff like hiberfil.sys and boot.ini, which you need much more rarely.)


Hmm... I'm not too sure about that. Any examples?

Show file extensions is disabled by default. I'll always go in and change that. But I can't think of a time where I've had to go into a system file before.


Totally disagree. I enable it on the rare occasions I need to and disable it immediately after I'm done doing what I need. This happens quite rarely (a few times a year) so, basically: no, I beg to differ.


My experience is that this was arguably true in the NT/9x days, wasn't really true in the XP days, and is quite false with Windows 7 onwards.

What exactly has caused you to need to unhide those files this decade?


The registry hives are still system files, and so if you want to edit a particular user's registry, you need to show them.


Isn't Windows' hosts file located in a system folder? Not that I change it so often anymore, but when I did more web development, it was not uncommon for me to add a line there...


No, the "system32" folder is a normal folder from that perspective, i.e. it does not have the SYSTEM and HIDDEN attributes set.

Same for the "drivers" and "etc" sub-folders.


Hidden files, sure. But system files? I can't see any reason to ever display them.


Yes, it's totally a design problem that users are fucking with files in a system hidden folder. I do that all the time, easy mistake to make.


More like editing some config file under the WSL folder with Notepad++ and discovering you destroyed it.


> WSL folder

Which has both the hidden and the system flag set. You have to change two Explorer settings to even access the directory, and one of said changes gives you a prompt warning you that it might be dangerous to enable it.


This is the bullshit I really don't miss from the Windows world.

*nix: "Go for it, edit what you want, just be aware you can fuck up"

Windows: "what the fuck are you doing in the special area? get out!"


Isn't that the whole point of this post though? They've allowed you to edit files marked as "system files" despite a multitude of warnings, and now so many people have fucked up doing so that they've issued a stronger warning.

What's the solution? Put up fewer warnings? Then you'd get even more people destroying their data. Put up more warnings? Then you'd get more people complaining that "Windows doesn't let you edit what you want!".

It doesn't really seem like they can win.


I'm more talking about the culture between the two types of OS than this specific occurrence. The whole attitude of "well, why the fuck were you in there anyway?" is not something I've encountered in *nix.


Windows at least has the decency to warn you that you're about to trash something. Every *nix ever will let me blow /sbin/init away with a carelessly placed file operation without so much as a "hey".

Uneducated users should be responded to with "WTF were you doing?", as a computer is a precision machine and it is not unreasonable to expect some basic level of literacy.


    vacri@thingy:~$ rm /sbin/init
    rm: cannot remove ‘/sbin/init’: Permission denied
I don't seem to be able to blow away /sbin/init.

If I've elevated myself to be a global administrative user, then I should be able to do global administrative things, without being treated as an 'uneducated user'.

> it is not unreasonable to expect some basic level of literacy

I think it's perfectly reasonable to expect some level of literacy for administrative-level tasks, and that's what *nix does.


The only difference in assumption is that Linux users can safely assume most Linux users kinda know what they're doing. This is not true for Windows, where the default assumption is, you probably don't know what you're doing. ;) When you do know what you're doing, nobody complains about you going in the crevices.

One of my favorite Windows v. Linux things is when Linux users try to work in Windows, and try and do things that would be perfectly normal in Linux, and then wonder why their Windows box is so unstable. I once ran across someone who tried putting their Windows\system32 folder into a version control app. When their Windows registry exploded spectacularly, their response was that this was a normal thing to do on a Linux box.

Windows and Linux take very different engineering approaches to things, so skills developed in one don't necessarily translate well to the other.


That sounds obscene even to my Linux ears... why would it be a good idea to version control the system directory in any OS?


Presumably to be able to track changes and undo them if a set of changes causes a problem? I was kinda skeptical. But I certainly wasn't blaming Microsoft, like he was, for his system being unstable. =P


The person may have thought that \Windows\System32 was the moral equivalent of /etc , and expected to be able to do with it what people do to /etc with tools like etckeeper.

And, of course, ZFS boot environments are the same sort of idea to an extent.


Do you remember the hue and cry when deleting things from efivarfs could brick your machine, and it was somehow systemd's fault for making these files available and not locking out end users for their own protection? https://github.com/systemd/systemd/issues/2402


To be fair, that broke the paradigm of your OS not being able to brick your hardware. This had rarely been an issue before, and most people would have assumed it wasn't even possible.


Sure. But I feel like I remember warnings in the mid-'90s about bad modelines in XF86Config being able to cause smoke to come out of your monitor....


Isn't the entire point of the post to say that if you edit these files, be aware that you can fuck up (and here's how you can fuck up)?


I'm responding to mona3000, not the post.


They aren't telling you not to touch them; they're telling you not to touch them FROM THE WINDOWS SIDE. Start a bash instance and use vi/emacs/nano/ed/whatever to edit them from the Linux side.


You might want to understand my point before you start 'shouting' at me.


I'm sorry, I didn't want to appear rude; I just couldn't remember how to italicize text here on HN and tried to emphasize those words. In any case, here they're just telling you to stop messing with those files in a certain way because it isn't a supported scenario in beta software, and they're giving you the reason why it isn't supported. I'd also bet this isn't among their top 10 priorities.


Maybe they're working on that too. Writing a blog post to warn people about a problem doesn't preclude fixing it.


Short of literally removing admin access to the box, how would they go about fixing this?


One way would be to run a user-level nfsd on the Linux side and mount it on the Windows side as an NFS share. Or, if they are morally opposed to that, run a user-level Samba server and mount it as a CIFS share. Both of these approaches let you access Linux files from Windows apps on their native storage layer without corrupting them or the system they are running on.

You might be able to accomplish that with a system service that runs in the context of the lxss shim and a local (in-memory) network connection.
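
A very rough sketch of the CIFS variant (share name, path, and user are made up, and I haven't verified that smbd can actually bind sockets inside WSL or that nothing else is holding the SMB port):

    # ~/wsl-smb.conf -- minimal user-level share definition
    [wslhome]
       path = /home/me
       read only = no
       valid users = me

    # inside WSL: run a user-level smbd against that config
    smbd --interactive --configfile ~/wsl-smb.conf
    # from the Windows side, map the share:
    #   net use W: \\localhost\wslhome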


"I've been using WSL since the beta, and using Vagrant + Putty gives me better results."

If WSL can't do the equivalent of Vagrant-style vm.synced_folder, how is anyone expected to use it for a real-world workflow?

I've found the combination of Vagrant/Virtualbox with the Git command-line tools (using MSYS2) to be an excellent cross-platform working environment. Really, it's been my daily workhorse for the past two years.

Also, there is a not-free program called SecureCRT which is much more usable than Putty.


I like MobaXterm; it is pretty usable and has a free version.


From the article:

"Remember: There’s a reason we gave the %localappdata%\lxss\ folder ‘hidden’ & ‘system’ attributes "

So if you go messing about in hidden system folders, you get what you deserve.


One could also argue that an editor that loses extended file attributes when editing a file is broken.

I would have preferred msft to post an article explaining that files have attributes and that editors must not lose those attributes when making edits.


Files on Windows don't have attributes. Files on an NTFS volume may have extended attributes associated with them but you'll need to go out of your way to find them and doing so has never been a requirement of working with files on Windows. Not supporting an obscure non-mandatory thing hardly constitutes "broken".

An editor which just kept the existing attributes would still end up with incorrect time stamps - so now all Windows apps need to know how to handle Linux file attributes? Nope - because these files were never meant to be edited by Windows apps. And it works best that way.


I didn't realize extended attributes were such an obscure feature. I haven't been on windows for years, yet knew about this feature.

I'm on OSX and every now and then see how apps are making interesting use of file attributes (e.g. tags, or remembering the cursor position of a text file when re-opening). So, I'm honestly surprised Windows isn't doing anything with this useful OS feature.


Fun fact: the linux metadata is stored in a little known NTFS feature called "extended attributes" (EA), which is not the same as alternate data streams. You need to use a pretty obscure API to access them. I started writing a tool to do so, only to find that someone already did, including a decoding of the (binary) EAs (which basically contain Linux uid, gid, permissions, ctime I think):

https://www.reddit.com/r/bashonubuntuonwindows/comments/52w8...

There is currently no tool that writes the EAs. It should be not too hard to write one, however it won't work transparently - WSL maintains a cache of this metadata, so you'd likely have to quit all WSL applications before you make any changes to avoid corruption.


LXSS extended file attributes viewer

https://github.com/dmex/lxssattr

- the actual project for the decoder


Honestly, this should be obvious for anyone who has at least a bit of experience with Linux. It's the same reason why you use tar.[extension] when you create an archive - Tar saves the Linux metadata (ownership, permissions).

To those who claim this needs to be fixed, I want to ask: how? I mean if I modify a file in the Linux filesystem as a Windows user, what should the ownership be? Root? Some non-root user? Should we just make those files 777? Each of those solutions would bring more problems. It seems to me that there is no "one size fits all" fix here.


I mean if I modify a file in the Linux filesystem as a Windows user, what should the ownership be?

Inherited from the directory, like it is on Windows and Linux (by default)? And 777 is a perfectly fine default, since it's a good match for the Windows default attributes and ACLs.

It seems to me that there is no "one size fits all" fix here.

There isn't. But there is a sensible default, which is not "assumes the file is corrupted/damaged".

Consider what happens in Linux if you mount a filesystem like FAT which doesn't even support all the extended metadata. No extended metadata does not mean "I can't do anything with the files", unlike what seems to be happening with WSL here.


> Inherited from the directory, like it is on Windows and Linux (by default)?

Who told you that under any Unix a newly created file is owned by the owner of the directory? It's not even generally true for group ownership, which has somewhat more involved behaviour!

> And 777 is a perfectly fine default

0777 permissions are never fine as a default. It's asking for security vulnerabilities.


This should be the top comment. If you want to preserve non-Linux attributes, you can NOT use tools that are unaware of how to handle the different assumptions made. If you don't mind smashing these attributes, go ahead, but don't cry foul. It is not broken. If you want to just transfer files for their content, there is a method provided in TFA, but that is not the matter at hand here.

There is simply an impedance mismatch between how these operating systems view files. Sure, you can suggest a bunch of guesses about what to do if you edit or drop a file from one context into the other without both operating systems agreeing on how to manage that but you will find yourself arguing about what "sensible default" means until the cows come home.


If you want to just transfer files for their content, there is a method provided in TFA, but that is not the matter at hand here.

It is. One can reasonably argue that it's natural for the parts that don't correspond between the two OSs to be lost in the transfer. But the main problem is that according to the article, it treats missing Linux metadata as an error.

Sure, you can suggest a bunch of guesses about what to do if you edit or drop a file from one context into the other without both operating systems agreeing on how to manage that but you will find yourself arguing about what "sensible default" means until the cows come home.

No doubt at MS there was probably a large amount of bikeshedding over this, but I don't think a "sensible default" should ever be "give up". Unfortunately, having been in a few design discussions with similar results, I have a good idea of exactly how that decision was reached: no one could agree on the precise behaviour, and "leave it out" or "make it an error" got chosen, simply because anything else would be perceived as supporting one alternative over another. Even randomly choosing one of the options would've been far preferable to having no option at all, to the surprise and disappointment of the users.

Someone brought up a related issue at https://news.ycombinator.com/item?id=12981620 (HN discussion is at https://news.ycombinator.com/item?id=11008449 ) in a different context, but it may have come from a similar (in)decision process: "we can't agree on a default, so let's just leave it as an error" --- nevermind the fact that in that case, it means bricking the user's machine, and imploring systemd to try to work around it.


For many years now, I've had one or two NTFS partitions on my dual-boot machine (Win7/Ubuntu), which are regular drives in Windows, and which are volumes mounted with all files set to 777 permissions in Linux. Have never had any problems with corruption, access denied, etc. Works great for volumes that mainly contain documents and media (and some Windows executables). But you can't mount "/" and co like that, and similarly, I can imagine that it would be inadequate for WSL, which stores many system files on NTFS, where many of those files have to use more restrictive permissions (e.g. 600 for many core binaries / configs), otherwise errors result.
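
For reference, the relevant fstab line for that kind of shared data partition looks something like this (device and mount point are placeholders; uid/gid/umask are standard ntfs-3g mount options):

    # /etc/fstab -- mount a shared NTFS data partition with relaxed permissions
    /dev/sda3  /mnt/shared  ntfs-3g  uid=1000,gid=1000,umask=000  0  0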


Recently I tried to cut corners by copying my ~/.ssh from Cygwin into my WSL-user's home directory with the Windows Explorer and wondered why the folder was not visible from within WSL. cp-ing it from within WSL's /mnt/... solved the issue but left me still wondering. At least this post confirms that there be dragons, if I move files manually into the lxss folder.


I added some commands to my .bashrc that sync my config files when I log in.
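
Something roughly like this, in case it's useful (paths are hypothetical; /mnt/c is where WSL exposes the C: drive):

    # in ~/.bashrc -- pull dotfiles kept on the Windows side into the WSL home at login
    for f in .vimrc .gitconfig .tmux.conf; do
        cp -u "/mnt/c/Users/Me/dotfiles/$f" "$HOME/$f"   # -u: only copy if newer
    done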


Small correction: "I move files manually into the lxss folder" USING WINDOWS! ;) Copying or moving files via Bash using cp / mv works as expected.


To be fair cygwin has a similar problem. It attempts to map Unix permissions to NT and gets confused if Windows apps modify the permissions.


But this happens even if Windows tools don't change permissions. That is awful.


So many people in this thread that think that this is the end of WSL and you should use Cygwin instead. Laughing.

Cygwin is awful compared to WSL. Poor performance, always gotchas, cygpath handling etc. Sure it was a good option back in the day and a good engineering feat.

With WSL I get an environment that behaves exactly like Linux, complete POSIX without gotchas.

I work with WSL all the time on files located on Windows partitions. Works perfectly. Just create symlinks in WSL to Windows folders and you are ready to go.
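
For example (the Windows-side path is just an illustration):

    ln -s /mnt/c/Users/Me/projects ~/projects   # WSL-side symlink into a normal Windows folder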

Why do people even need to work with files in the hidden system folder? The only files that I might want to change from Windows would be .bashrc et al., but just copy them or edit them from WSL and problem solved.

If I don't want to use WSL and I only need a shell, then MSYS2 is the way to go, handles Windows and POSIX paths interchangeably without the cygpaths hassle.

And no, WSL is not like SFU, that never really worked.


Honestly, as weird as cygwin can be, I question the judgement of relying on Linux's chief competitor to keep a mock linux kernel interface as well supported as the OS they really care about. Are you really sure that environment will be working in Windows 11... windows 12? etc?

This just seems like a really bad idea compared to simply running a VM and sharing a filesystem using networking or other means... It's not like a modern machine, even a shitty netbook nowadays, can't handle a virtualbox instance, or localhost networking.


Microsoft of today is not a software company, they are a service company. So how their services are delivered, from a competitor for example, is not that relevant anymore.

But assume you are 100% right that the WSL environment is removed or stops working in Windows 11+; how does that affect my Linux experience with WSL today? Not one bit, because if it changes in the next release I can start a VirtualBox VM when that happens. Why run one now and have an inferior experience when the result is the same? It doesn't make any sense. Or I could even switch to Linux - still the same old shell.


You clearly only used the first Windows NT POSIX subsystem, and never used Interix, the second one.

* https://news.ycombinator.com/item?id=11416392

* https://news.ycombinator.com/item?id=12866843


I used the second one, SFU 3.5, with Windows XP.


Your description does not match what I know of it from direct experience, and is more appropriate to the first one.


What have I described?


Hah I wasn't the only one reporting this "mis-feature" I bet. I wonder if Microsoft could license the old NetApp 'unified' protections code that let you export a volume to both CIFS and NFS.

In my case I did a git clone on the Linux side of a project that builds using a third-party GUI tool, ran that tool from Windows, and my Bash window got soooo confused.

Seriously though, if they can improve the terminal "behavior" and figure out a workaround for USB device access, this will be really stellar.


Out of curiosity, how is the Linux integration? Does it feel like native, or is it disjointed like Cygwin always has?


I find it much more useful/natural than cygwin. The places where it reminds me it isn't Linux is when something wants to create a Unix domain socket (screen) or needs access to /dev or something like that. CPAN works fine as does ssh and most things I use from the command line (gcc embedded and make, vim, etc). Nothing that would pop up an xwindow works (as expected).


In more recent Windows Insider builds, Tmux & Screen now work correctly, as does network connection enumeration (e.g. via ifconfig), sshd works as expected, along with MySQL, Apache, nginx, etc.


I am clearly very excited that you seem to be making this a 'real' thing rather than just a talking point. You realize there is a 'USB over Ethernet' spec, right? Using that, plus a Windows service on a local socket on the Windows side, you could arbitrate exclusive access to plugged-in USB devices for the Bash shell user, right?


"Windows deleted my homework…"

Seriously, is mounting files readonly that hard to do? If you can't deal with linux files, leave them alone.


You sound like the kind of engineer that keeps software perpetually stuck in a UX nightmare.

"Of course it doesn't work that way, idiot! If only you read the obscure documentation and had years of experience like me you'd understand why you're the problem and not the software!"


I mean.... the article does point out:

Remember: There’s a reason we gave the %localappdata%\lxss\ folder ‘hidden’ & ‘system’ attributes

You gotta work your way around a couple mild safeguards to go fucking with things.


To be fair, many people know App Data as where their bookmarks get stored.

Based on that, being hidden and system on Windows doesn't exactly say "we're guarding nuclear codes here" the way chmod 000 and chattr +i do.

But that's kind of the root of the issue, isn't it?


AppData is not system.

Very few things are system. In 7+, the things that are system are full of nukes.


I agree with the article, but in all my years in Windows one of the first things I would do is turn on "show hidden folders" in Explorer. There's just so much stuff in hidden folders that you need to access on a day-to-day basis.

I imagine someone with the linux subsystem will be in a similar situation.


I don't know why you're being downvoted. I consider hidden files & folders in windows to act like dot files in linux.


Anything that has the potential to corrupt your filesystem should come with a huge disclaimer.


Like the system flag? You can make any OS unbootable with the same level of permissions. This is particularly mild given what we can do by ignoring these warnings as an administrator.


This doesn't corrupt the FS.


Any app, tool or script on your PC has the potential to corrupt your filesystem.


I think the OP is saying that the software should mount the files read-only, saving the user from pain.


Exactly. Thanks for the clarification. It should be the job of the operating system to make it readonly.


I agree, and you are correct. The thing is, I guess this wouldn't work: Windows users are used to having control of everything, so if MS started mounting system files read-only (which they should), some people would develop - and other people would install - one-click applications to remount things read-write.


Something like a forced check of whether it's a Linux file: open it read-only, and save changes into a dynamically created Windows directory in a relevant location so it's easy to find when you're back on Linux.

You're right, it should be easy.


It sounds bad, but WSL is working really well; just use '/mnt/c' to share files between the two systems. It actually comes naturally.


You have it backwards. The problem is changing linux files with windows editors.


The problem is changing Linux files that are currently stored in %localappdata%\lxss\ using a Windows tool.

Changing Linux files that are stored in C: and are accessed in Windows from C: and in Linux from /mnt/c/ is fine.


This is a pity. One of the things I like with the Mac is being able to use Mac-side editors like BBEdit to be able to make changes to the configuration and files of the Unix-y server tools I need. If I can't reliably use Windows GUI editors that would limit my ability to migrate from macOS.

If one is running NGINX/PHP under WSL would one not even be able to safely edit .php files?


The lxss folder is for things you only want to access from Linux. Things like a .php file you want to access from both sides goes somewhere else. In Windows you go to C:\MyPHPFiles and in Linux you go to /mnt/c/MyPHPFiles.
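
So for the nginx/PHP case above, the docroot would live on the Windows side and the WSL-side server would point at it, roughly like this (paths invented, PHP handler omitted for brevity):

    # /etc/nginx/sites-available/myapp  (inside WSL)
    server {
        listen 8080;
        root /mnt/c/MyPHPFiles;   # the same files you edit from Windows at C:\MyPHPFiles
        index index.php;
    }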


This. I use WSL daily and edit in both Linux & Windows... but the files I'm dealing with are in a directory structure that I set up on the Windows side, and I get to them through /mnt/<drive letter>. Completely painless for me.

I do have to occasionally edit Linux config files and whatnot and, yes, I do that with emacs or vim (depending on mood), but it's honestly not often and I personally prefer the terminal-based editors in Linux for that anyway (yeah, a preference thing there...).

Anyway, been working like this since before the Anniversary Update on the later insider builds and it's been pretty solid and seamless for me.


Have you tried using an xserver to run GUI Linux apps through it? If so, how's the delay? Is it instant and ready for full time development?

Use case would be wanting to use an editor that can take advantage of programming runtimes installed on the Linux side of things.

Think about using Sublime, where you'd want to run a Python linter. In this case Sublime needs to be installed and ran in Linux alongside your Python installation.


You can run bash.exe with command-line options to start specific Linux applications. I set up the Windows VSCode to use Bash as its terminal and the build actions to trigger Linux programs.

I've run a few small GUI applications and the performance was pretty much native-level. I use VcXsrv as my X server. I've read that some people run entire Linux tiled window managers through WSL without complaint.
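
If memory serves, the relevant bits of configuration were roughly these (treat the exact key names as approximate; they've changed across VS Code releases):

    // settings.json -- use Bash on Windows as VS Code's integrated terminal
    "terminal.integrated.shell.windows": "C:\\Windows\\System32\\bash.exe"

    // a build task can then shell out to the Linux side, e.g. via:
    //   bash -c "make"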


Can you go into more detail on VSCode's terminal?

I would have expected VSCode (and Sublime/Atom/etc.) to need to be running on Linux to leverage the run-times installed in Linux.

How does it handle code complete for user written code when:

1. You have VSCode installed in Windows

2. You have your language and all packages installed in Linux

3. You have your source code sitting in a folder on Windows

Edit:

Since you edited in some info about xserver performance, that's awesome.

What type of specs is your machine?


Bash.exe is a Windows console application as far as every other application is concerned, so for VSCode to run it instead of Cmd.exe is really nothing at all. Same with other tools.

I'm mostly using it for C/C++ development on an embedded platform, so my experience with code completion, etc., is a bit different. But it does work as well as it does on a pure Linux installation.

I have a pretty powerful desktop but I run the same configuration on my i5/4GB laptop and the performance is still good (and why not, it's all native code). I think there might be some disk performance issues but I haven't compared it too closely. I prefer it over using a VM just because of the integration and convenience.


Originally it sounded like you had VSCode installed on Windows but you're able to get all of the goodies (context sensitive code complete based on files in Linux) of it running on Linux -- without VcXsrv running on Windows.

I guess I just don't get how VSCode that's running on Windows can "see" and use the Linux filesystem. Is this something MS did that only works for VSCode (ie. it wouldn't be available with Sublime/Atom)?

Also it looks like VSCode doesn't work through VcXsrv according to this issue: https://github.com/Microsoft/vscode/issues/13138


> I guess I just don't get how VSCode that's running on Windows can "see" and use the Linux filesystem.

It can't. It's the other way around: the Linux file system can see the Windows one. So VSCode's code completion is based on files on the Windows system, but the Linux side can also see those files and compiles/runs them. Linux doesn't know it's a completely different file system; it's just another path.


I imagine most web developers would have Python, Ruby, Elixir or whatever they use installed on Linux. That would be both the run time as well as all packages required for their projects.

Then they would have their app-level code off in a Windows folder to avoid corruption and to have it accessible on both sides (makes 100% sense).

The confusing part is understanding how VSCode running in Windows is able to make sense of all of that.

That would mean VSCode would need to know that the runtime and packages are at "the private Linux FS path from within Windows", while app level code is at "some Windows path of your choosing". You're saying it does this flawlessly, and then there's no need to even mess around with xserver?

I don't use VSCode but I would guess it needs to know things like your Python path or Ruby path, etc.. These are things that would be installed on the Linux side of things. That's where it falls apart in my head.


VS Code isn't going to be able to read any files on the Linux side but it can execute Linux applications against your Windows working folder. So having VS code run Linux Python to test will work. If code-completion works by reading package source files installed in the Linux partition, it's not going to be able to do that.

For my development needs (embedded C/C++) this is not an issue.

> These are things that would be installed on the Linux side of things. That's where it falls apart in my head.

You set up your environment to run Bash.exe with the appropriate parameters to start the Linux application.

   bash -c "linux command here"
http://winaero.com/blog/run-linux-commands-from-cmd-exe-prom...
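To sketch what that looks like in practice (the setting name and task format below are from memory of the 2016-era VS Code builds, and the make command is just a placeholder, so treat this as a rough example): the integrated terminal can be pointed at bash.exe, and a build task simply shells out through bash -c.

    // settings.json: use WSL's bash as VS Code's integrated terminal
    "terminal.integrated.shell.windows": "C:\\Windows\\System32\\bash.exe"

    // tasks.json: run a Linux build command via bash -c
    { "version": "0.1.0", "command": "bash", "args": ["-c", "make"] }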


Yeah, it would need to reference package source files from Linux for code complete to properly work.

Thanks for the replies, that clears things up.


I have not. I use Sublime on the Windows side; in those cases where I have issues like linting, I just do it via the Windows equivalent (I'm not working with Python), and having a different source for linting vs. some of the runtime environment hasn't been a problem for me... though I could see where you'd want something more apples-to-apples.

For myself, an xserver is probably more overhead than I'd want to take on... I'd just drop WSL and go back to my VM at that point (different pain points, but simpler pain points).


That's unfortunate. Having to "double install" things like that is very inconvenient.

Yep, I use a VM now (a seamlessly integrated one with VMware), but sadly VMware support for Unity-mode Linux guests has been dropped, and VirtualBox has too many problems to consider using.

Was hoping to eventually change to Windows 10 but it's sounding like I might as well stick with what I have now, despite the bugs and being stuck with a ~4 year old Linux distro because nothing else works.

If Windows can solve the problem of running GUI Linux apps through it, I'd be the happiest developer ever.


I use Xming and it works pretty seamlessly for both local applications and X-forwarded ssh sessions


Nice, is that the free version of xming?

I had ~3 seconds of typing delay and a lot of graphical tearing on an i5 with an SSD and plenty of RAM (with SSH compression enabled) when I tried to use it on a locally running VMware VM under Windows 8, but I can't test it on 10.

Is it purely instant on 10, to the point where you would feel happy typing / looking at it all day?


What editor were you using in xming?

According to MS, vscode doesn't work and perhaps all Electron apps wouldn't work: https://github.com/Microsoft/vscode/issues/13138


Do windows people have any respect for the idea of home folders? You should put your own files in /home/blah/foo or C:\Documents and Settings\blah\foo, not /myfiles or C:\myfiles


That was just for illustrative purposes. Your personal files should indeed be under C:\Users\name\


Like the article says. You point your Nginx to serve files out of the "c" drive mount point.
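For instance, a minimal server block on the WSL side pointing at a folder that lives on the Windows drive might look like this (the port and paths are placeholders):

    server {
        listen 8080;
        # project files live on the Windows side, reached through the drive mount
        root /mnt/c/Users/<name>/sites/myapp;
        index index.html;
    }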


You can work this way beautifully; just do it from the Windows file system as intended and don't delve into the hidden/system files that back the Linux subsystem.


No, at least not from that directory. You need to use tools that know how to update the WSL-specific metadata in the NTFS extended attributes, and Windows programs do not do that. You would need to set up a file server so that the programs could communicate using a standard protocol.

Edit: or as hartator says in another thread, store your files somewhere under /mnt/c/.


I ran into this problem within days of starting to use Bash on Windows. Had several files just disappear, though I didn't need to reinstall anything (so far). They should really document this issue better, like in the docs for installing WSL.


In fairness, as the author points out, the files are hidden and marked as "system". People tinkering with files flagged system are usually going to have a bad time. You have to specifically go digging for this, and then decide you want to mess with it.

Microsoft's platform design is generally open enough to allow you to do dumb things. This is a good thing, because you can reg-hack your way around problems with the OS. But like editing the registry, doing so is very much at your own peril, it's really not designed to be tampered with by users.


I don't think it's fair to blame people for thinking they could edit simple text files under that folder, well-hidden as it is. How many people already knew there would be any issues beyond line endings (not counting those who learned by breaking things)? I sure didn't. It's pure luck that I hadn't tried it yet.


It's absolutely fair. To even SEE these files, there's a warning you must click through that notifies you that you can cause significant harm to your system if you modify them. So when one does so, and then one modifies them... one should definitely expect to cause significant harm.

People who enable visibility on protected system files and then act upset that messing with protected system files breaks things, are the sort of reason we don't give out admin rights where I come from.


Hmm... none of that matches my experience. I opened them from Powershell or git bash and from Atom; maybe vim was involved. I don't remember the process of getting there, but I do remember it being tricky to find the directory where the Linux environment lived; maybe a grep or find was involved. Windows Explorer was not involved.

I'm a 20+ year Linux user. I use Linux tools to find things and to navigate places. I guess if I were a Windows user, who does things in the usual Windows way, I might have received a warning of some sort.

I'm just saying it'd be cool if there were a warning somewhere in the docs about this issue. It wasn't something I expected. I mean, I don't have to worry about editing "Windows" files on my Linux system with WINE. And, I'm able to mount my Windows partition from Linux and edit files without fear of harming things. I assumed things worked the same way for WSL. It was a wrong assumption, but I don't feel like it means I'm stupid to have made it, based on the knowledge I had at the time.


> I'm just saying it'd be cool if there were a warning somewhere in the docs about this issue.

If you find any docs about this folder from Microsoft, you will find that they do warn you about exactly this.

"Interoperability between Windows applications and files in VolFs is not supported." -- Microsoft


I wasn't looking for docs on a folder, I was looking for docs about WSL (of which there were none other than the blog post about how to install and enable it, at the time). It's beta software, and I'm not complaining (my review of WSL at the time commented on this problem, without knowing it was a known issue, and I just said something to the effect of, "I won't do that again!" rather than "I'm gonna burn down Microsoft for doing this to me!").

It's no big deal, really. I didn't expect miracles, and it all works much better than I expected. I wish the interop were better, but I'm pleased it's even something Microsoft is trying to do.


That quote comes from here:

https://blogs.msdn.microsoft.com/wsl/2016/04/22/windows-subs...

But I suspect they specifically wouldn't mention it because it's an implementation detail people aren't supposed to know about or look into. The problem might have been more widespread had they actually said "Don't change anything in this specific hard to find hidden/system protected folder"


Cool. I might have even read that at some point in the process and lacked enough understanding of the system to know what it meant. I found the process of getting WSL installed quite confusing, so maybe I was distracted. But, reading it even now, it doesn't jump out at me that using a Windows app to mess with Linux files would be disastrous (it was not disastrous for me, but had the files been important, it would have been, as they disappeared). It sounds like it's a thing that just wouldn't work, e.g. Windows apps wouldn't see the files at all, rather than something that could potentially destroy your Linux installation, which is what this blog post indicates can happen.


Wouldn't your OS not being able to see files from a component on your computer be even worse?


I don't think it's fair to be upset when editing those files breaks things, either. To be surprised? Yes, definitely. It is fair to expect software labeled "beta" to break in surprising ways, so I hope they're not giving permission to use WSL in production where you come from.


"They should really document this issue better, like in the docs for installing WSL"

What, like this bit in the install instructions page: https://msdn.microsoft.com/en-us/commandline/wsl/install_gui...

> After installation your Linux file system components will be located at: C:\Users\<Windows user name>\AppData\Local\lxss\ This directory is marked as a system file which are hidden by default. Accessing this location directly is not advised due to caching between the Linux and Windows file systems. Check out our blog post for more information.


Same here: I tried to modify some files in my Windows IDE while using them from an Apache on the Linux subsystem - had to change the webserver directory to /mnt/c/...


TL;DR: if you are using BashOnWindows, don't modify files in the lxss folder from Windows, or BashOnWindows will break. Use bash to modify those files.


Another thing to watch out for is how you have git configured to handle newlines. Usually the recommended config under Windows is to have it transparently change them to \r\n when it checks files out, and back to \n when you commit.

This trips up some Linux tools when you use those files from the Linux subsystem. Best either turn that feature off or only use the Linux git for any projects you plan to touch from there, even if you're saving them under /mnt/... to share with Windows. And of course you'll have to beware of how your Windows editors are saving those files.
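Concretely, that means something like this (global settings shown; a per-repository .gitattributes works too):

    # keep LF in the repo: convert CRLF to LF on commit, leave files alone on checkout
    git config --global core.autocrlf input
    # or turn conversion off entirely
    git config --global core.autocrlf false
    # per repository you can pin it in .gitattributes instead, e.g.:  * text=auto eol=lf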

I wish the Windows world had just migrated to UTF-8 and \n 10 years ago. It would all be smooth sailing by now.


Most Windows apps, both third-party and built-in, have supported UTF-8 for many years. Just not by default. You need to specify that your data is UTF-8: http://unicode.org/faq/utf_bom.html


Why should an application need to be aware of the file meta data in most situations anyway? Shouldn't the OS handle the metadata unless the application specifically requests otherwise?


I'm guessing the issue is that many apps use the Windows APIs badly and just recreate files rather than updating them.


Or they use the Windows API well and purposely create new files rather than updating old ones to get atomic updates (the same way that most editors on Unix never ever update existing files and instead create new ones on which they call rename()).
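Roughly, that pattern looks like this (a shell sketch of what an editor's save routine does; the file name is just an example):

    # write the new contents to a temporary file first
    printf '%s\n' "$new_contents" > config.txt.tmp
    # then swap it into place: on the same filesystem mv is a rename(),
    # so readers see either the whole old file or the whole new one.
    # The catch is that the result is a brand-new inode, so extended
    # attributes and other metadata on the original are not carried over.
    mv config.txt.tmp config.txt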


Well they obviously aren't checking for metadata / extended attributes when reading the files.


This issue exists on Linux too :-) ext4 keeps per-file metadata such as the creation time (and supports extended attributes), but most apps blindly write-and-rename, which means that metadata is lost.


I'm glad someone else said it. I was biting my tongue.


In both operating systems, aren't there APIs for this?


For what? Creating a file, writing to it, then renaming it over an existing file?

Or did you mean moving the metadata as well? Transferring all kinds of metadata from file A to file B is heinously non-portable across OSes and even OS releases. The specific metadata this is about isn't even available in the Win32 API, it's only part of the native API (and the kernel API ofc.)


An API for atomic operations on files.

There are APIs in Windows, apparently. A quick Google search reveals functions like CreateDirectoryTransacted, and another quick search reveals that the Linux kernel leaves this kind of thing up to the filesystem. So on Linux, filesystems like ZFA and BTRFS just use the normal POSIX file APIs, and others can optionally provide other functions if they choose to.

So yeah... The whole make-a-file-and-rename-it thing is a dirty hack.

EDIT - Grammar


> [Microsoft strongly recommends developers utilize alternative means to achieve your application’s needs. Many scenarios that TxF was developed for can be achieved through simpler and more readily available techniques. Furthermore, TxF may not be available in future versions of Microsoft Windows. For more information, and alternatives to TxF, please see Alternatives to using Transactional NTFS.]


> An API for Atomic operations with files.

Windows has TxF (used in Windows Update and Installer to tame the eeeenormous complexity a tiny bit; TxF and VSS also use similar mechanisms), but no other mainstream OS has anything remotely close.


I don't know how, but I misspelled "ZFS". In ZFS every single file write is atomic. That is kind of close and supported on Linux, Mac OS X and a few Unix flavors.


That kinda doesn't matter with ZFS or btrfs, though; as real CoW file systems their consistency properties are similar to a fully journaled FS (not only metadata but also data, like ext4 in journal=data mode). In that case you don't need to care anymore as an application developer about file committing patterns, as long as it's only about one file.


According to their UserVoice response, this feature is "in the backlog", so maybe someday it will be available: https://wpdev.uservoice.com/forums/266908-command-prompt-con...


https://blogs.msdn.microsoft.com/oldnewthing/20050715-14/?p=...:

"if you delete some file "File with long name.txt" and then create a new file with the same name, that new file will have the same short name and the same creation time as the original file."

It might open a security can of worms, but there might be a safe way to do that for those essential Linux attributes, too, for example if the program deleting the file has write permissions on it.


I'm a bit out of the loop - why/how would you be doing this? Windows has a directory now for Linux files?


Windows 10, since Anniversary Update, has a new feature (not enabled by default) where you can have an entire (Ubuntu-based) Linux user space running natively (no VM) on your Windows machine, it's called Windows Subsystem for Linux: https://blogs.msdn.microsoft.com/wsl/2016/04/22/windows-subs...


Windows 10 has an entire Linux subsystem now. They've implemented the POSIX APIs, and there are other blog posts where people have gotten things like i3 running natively in Windows. It's only in the preview builds and you have to specifically enable it.


Well, um, I'm not running a preview build, but I have it. It's been there since the Anniversary Update.


> Creating/changing Linux files from Windows will likely result in data corruption and/or damage your Linux environment requiring you to uninstall & reinstall your distro!

It's funny that this is such Windows-centric advice (regarding the "uninstall & reinstall"). Fixing things in Windows very often boils down to "just do a reinstall". On Linux you have lots of other tools and methods to recover from such a disaster.


Well, if you're so inclined, you can re-create the NTFS extended attributes on the file you broke and things should be back to normal. However, the article is for people who clearly don't know what they're doing, or what they broke and how, so I think giving the »Have you tried turning it off and on again« advice is just about right for those folks.

And even on Windows you can fix most issues in different ways than reinstalling, it's just that for some degrees of brokenness it's by far the faster solution (cf. just re-cloning a git repository instead of figuring out what is broken).


The same goes the other way. Modifying Windows files on a POSIX system ends up creating incompatibilities which aren't easily resolvable. As an example, one can easily create a file with a trailing space, but good luck trying to manage it using standard Windows tools. I am not sure why MS thinks this is a good idea.
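To illustrate (the file name is just an example): such a name is trivial to create from the Linux side, while Win32 normally strips trailing spaces from file names, so Explorer and most Windows tools have a hard time opening or deleting the result.

    # from bash/WSL: a perfectly legal file name with a trailing space
    touch 'notes.txt '
    ls -Q    # quoting the names makes the trailing space visible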


What is the general opinion of Linux for Windows so far?


Quite good and improving regularly. Jupyter finally works right out of the box.

People are right that it's mostly just like a VM; however, a VM which doesn't eat up all your memory and is actually fast (since there's no VM layer) is a pretty nice thing. I run WSL on my laptop with 8GB of RAM, whereas running VMs is something I generally avoid.

They have started improving the integration a little; you can call bash from Windows and Windows executables from bash, if that's useful to you.
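For example (interop as of the then-current Insider builds; the paths shown are the defaults):

    REM from cmd.exe: run a Linux command through WSL
    bash -c "uname -a"

    # from inside bash, on builds where interop is enabled: launch a Windows exe
    /mnt/c/Windows/System32/notepad.exe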

The main questions I have are:

a) Will VS Code support WSL? I don't need to edit the files inside the Linux filesystem, but things like running Python from an IDE inside WSL don't work atm. I have to treat it like a remote system.

b) Will we ever get OpenGL/CUDA support?

c) Will developers treat WSL as a first class platform the same way they do OSX, so that it's not the end user who has to come up with workarounds?

Having said that, I'm a huge fan and I use it a lot. But I also keep real linux machines around since not everything is handled by WSL.


I work on WSL.

A) VS Code supports WSL in the newer versions of the Windows Insider Builds.[1] Support for individual IDEs is now available [2] but will take time for individual IDEs to implement.

B) This is something that we are actively looking at but it is not an easy problem and will take time.[3]

C) ¯\_(ツ)_/¯ We certainly hope so.

[1] http://pjdecarlo.com/2016/06/bash-on-windows-as-integrated-t...

[2] https://blogs.msdn.microsoft.com/wsl/2016/10/19/windows-and-...

[3] https://wpdev.uservoice.com/forums/266908-command-prompt-con...


Thanks for the work you're doing, it's pretty exciting! I hope MSFT remains committed to this project.

A) This is only a terminal, which isn't really more useful than opening a WSL window, I want to be able to debug python easily. I don't really know how the python debugging protocol works, so I'm not sure if stdin/out is enough, but even if it is enough there's some glue code missing to make this trivial, I need to have a binary that can take the same arguments as python and translate windows paths to WSL paths. And then the Python VS code tools also have some pretty cool Jupyter/IPython integration; I'm not sure that will just work with an executable that pretends to be python. Would be cool if the people working on VS code bits were thinking about WSL, rather than users having to come up with individual hacks.

2) Cool, I assumed backlog meant "not going to happen", will be great if it gets anywhere. Though I'm also interested in OpenGL for graphics; I wanted to run OpenAI's gym project, but that needed OpenGL for rendering. Not necessarily expecting you to write an X server, but would be cool if you could cooperate with or fund a project like Xming or https://github.com/kpocza/LoWe or something.


I agree with KirinDave. I've been using it since the first insider preview that had it. Back then, most of the workflows I use (mostly frontend development, npm/gulp/webpack/php, that kind of stuff) ran into some sort of problem. Right now, I use Bash for about 90% of my commandline needs, only occasionally reverting to cmd. Bugs are few and far between (though still breaking in a lot of cases, so we're firmly in beta territory yet). In the latest builds it's possible to use windows executables from Bash (for me that's mainly Atom), which has upped my use by a few more percent.

One annoying thing is that PHP's built-in webserver (which is also used by Laravel for it's development server) causes file read errors under WSL. Easy enough to avoid until they fix it though, by setting up Apache in Ubuntu or a local LAMP stack on the Windows side.


It's good, although "beta" is a fair description. Many things work, but there are a few sharp edges. For example, gradle doesn't work well yet, and if you go backwards into lxrun (rather than out via /mnt/c) you run into this bug.

It's surprisingly good for an initial implementation, but it's far from perfect.


You should try-out more recent builds of Bash/WSL - I think you'll find A LOT more things work now than did when we shipped Win10 AU :)


I think fork() issues with Gradle are present in the most recent build.


I will sing its praises to the high heavens for one thing alone: seamless use of command-line git!


I love it; I do Pebble development using it and they have a Linux SDK. The compiler works, all the tools, and even the QEMU Pebble emulator via a Windows X server. The workflow is very seamless.

I do my editing on Windows and compile/test from the command line.


Wrong direction of layering, and thus of limited use imho.

Of the two OSes, the weaker link, security-wise, is definitely Windows 10 rather than Linux. Putting Linux programs under a Windows 10 abstraction layer means that any malware, bugs or "bugs" (remember, this is proprietary software) in Windows 10 have the ability to transparently subvert or surveil the Linux programs.

In this aspect, WSL is just like putting a Linux VM on a Windows host (albeit more performant and convenient, yes).


Evidence? Windows has continuously made improvements in security and there has been a lot of emphasis there in the last ~10 years. If you have some reasoning, I want to see it, because I just can't find any studies/blogs/anything one way or the other.

Also note that this isn't intended as a security feature - it is a convenience to get Linux userland programs on Windows without a resource-hogging VM.


For starters, I'd consider Windows 10's telemetry systems to be a significant regression in security.


This is a very loose definition of "security" you're using. You implied "malware" and other extra-vendor bad actor software then immediately switched back to policy decision w.r.t. the Microsoft store apps.

While I appreciate you're upset about this issue, what you just did is hard to view as anything short of disingenuous. If you're genuinely trying to hide something from government surveillance then NO simple OS decision will help you since only a few countries in the world allow you to actually say "no" to a court demand for access.

Please don't raft terms together to make your rhetorical device more seamless when it's strictly wrong to say. Security professionals do not define in-app telemetry as malware and this community will not, either. What's more, you even subtly hint open source software is more secure and free from bugs. I've got news for you: sourceless security hole propagation is 20 year old technology invented by open source people to discuss their concerns with the idea that open source means we don't need security audits.


WSL is a great feature that could really improve the development workflow on windows. The WSL would benefit from a blogpost with a different perspective, describing how to do web development, mounting into Windows filesystem, -how to setup and use tmux/vim etc.


Great idea - can't wait to see your post :)


Or cryptic error messages because the config file contains a "carriage return".


Pro tip: I use wsl-terminal with WSL

https://github.com/goreliu/wsl-terminal


Addition: Do not modify files which where modifyied in another OS that was send into hibernation. A world of pain awaits you.


I ran into an issue when I first started using WSL. I was able to mitigate it by changing the settings on my text editor to use LF line endings and ensuring that git does not automatically modify line endings upon checkout. I have not had any issues since.


Yeah... I learnt this the hard way a couple of weeks back... Sigh...


Perfectly reasonable but I do suggest that Microsoft should have made the folder invisible to admin (need SYSTEM or higher, etc), so users were not tempted to touch the files in that folder.


Except that you need permissions to those files to, you know, use them in the subsystem for Linux. The WSL, correctly and securely, runs under your user account.




Has Windows harmonized line breaks? Or is it still \r\n rather than \n like in the *nix world?


Is the Pope Hindu? Do bears use the restrooms in the Waldorf Astoria hotel?


I think what's sad about this is the users who will believe that it was a fault of Linux that their files disappeared and as a result they will stay with MS.


This is like some bad teen movie where the bully is forced to live with the nerd.


In general these types of issues are why their implementation doesn't work for me. I'd prefer something with more seamless integration.

When on windows I pretty much stick with git bash and windows implementations of my dev stack.


Microsoft is adding the ubuntu subsystem, which means this is a "problem" that's only going to get worse before it gets better. Thankfully once it gets better it will be a lot easier to use both systems and you won't need to keep such rigid isolation between the systems.


I would expect that something that's marked Hidden and System requires you to think a little harder before you do something to it. You have to jump through hoops to do something to those files.


"What should I do instead?" - use a decent os


1. DO store files in your Windows filesystem that you want to create/modify using Windows tools AND Linux tools 2. DO NOT create / modify Linux files from Windows apps, tools, scripts or consoles

What bizarrely contradictory advice.

"Use Windows tools to modify files used by Linux, but don't use them to modify Linux files".


Use Cygwin. This problem and related issues point toward reasons why: Cygwin is _designed_ to bridge, by hook or by crook, the gap between the Win32 and POSIX worlds. It's not perfect, but it tries to work in a mixed environment --- even ACLs mostly work. Cygwin's Windows integration is so good that you can call CreateWindow from a Cygwin process and have it Just Work.

The NT POSIX subsystem, on the other hand, is a very different beast. It's more of a container or VM-like object. That kind of operation is useful for some tasks, but when I want to use POSIX and Windows tools together on the same task, I don't want a container. I want tight integration, which Cygwin provides.


My experience with cygwin never "Just works". WSL needs to hit a much higher bar if they are going to seduce the people who use macOS because it's a better linux.


WSL is not the NT POSIX subsystem, which was a completely different thing that died a while ago.


It's not SFU either, but it's a reincarnation of the same idea. It's a subsystem distinct from the Windows one. This year's branding is irrelevant.


The new approach is relevant. NT POSIX or SFU was a completely different target from mainly used Unixes of their time. Now GNU/Linux has supplanted everything (Mac Os does not really rely on nor is administered by its Unix layer), Ubuntu is a major and well maintained distro, and even the WSL Beta in the AU has a quite impressive compat -- and the last insider versions are even better. So will WSL become irrelevant? Maybe: either at the same time Ubuntu does, or if MS decides to stop this subsystem project. I hope they don't... :)


It's just a different system call interface though. The big integration problem remains. Microsoft could definitely solve it differently: specifically, there should be a winuser.so loadable in WSL that would do the same job as user32.dll and kernel32.dll do in win32. It could, in principle, talk to win32k.sys.

Personally, though, I'd like Microsoft to make two changes to the _existing_ win32 subsystem:

1) expose as public API the interface between the kernel console subsystem and conhost.exe: this interface will not only allow Cygwin to supply a real pty layer, but also let programs like console2 work much better

2) teach win32 userspace components to tolerate NtCreateProcess-based fork, and make CSRSS understand what's going on. Then regular win32 processes (like Cygwin programs) can fork and take advantage of the copy-on-write semantics that NT already supports

If I ever rejoin Microsoft, I'm going to push to get this functionality implemented. If core Windows ever goes open source, I'll implement it myself.


Forking and copy on write is important for historical reasons but very few tools can take advantage of this model, and it causes its load of problems (ex: if widely used on the system, need for overcommit). Something like CreateProcess / posix_spawn / vfork+exec is actually a better model 99% of the time fork would be used for fork+exec. I don't see what would be achieved in the pure Win32 ecosystem by providing more support for forking+CoW, and Cygwin already work well enough without that kind of official Win32 support of fork+CoW.

I understand what you desire is to get something like a better Cygwin instead of following the WSL approach, however that won't leverage the Ubuntu ecosystem, so that would still have the same problems as what I described for NT Posix and SFU: Cygwin is a target completely separated from mainstream ones -- the tools WSL is trying to provide (and succeeds in doing so -- available today) would not be as well maintained and/or even working correctly (if available at all).

Now I agree that an interface is needed to provide alternate console implementation, and the current situation in that regard, and hacks used to make ConEmu or console2 kind of work, are terrible (and unfortunately it is now even worse with WSL, and an absolute mess with the new WSL interop in Insider builds)


Lots and lots of software does use fork, though, so in lieu of rewriting the world, fork has some utility.

> Cygwin already work well enough

:laughter_to_sobbing:

Cygwin's fork works, but it has two major problems: 1) it's slow, and 2) it relies on awful hacks to load DLLs at the same virtual offsets in parent and child, and if these hacks fail (which they do if it's the wrong day of the week, or if the moon has the right phase), your fork fails. With proper fork support, this operation becomes fast and reliable.

I agree that vfork is better. Implementation complexities made Cygwin define vfork as fork though. :-(

> I understand what you desire is to get something like a better Cygwin instead of following the WSL approach, however that won't leverage the Ubuntu ecosystem,

That's true, but Cygwin has enough critical mass that its own ecosystem is usable. FOSS developers of third-party packages usually care to some minimal degree about Cygwin support, for example, and will take patches to help it work better. SFU never achieved that critical mass.

> Now I agree that an interface is needed to provide alternate console implementation

I never understood the difficulty of making this feature happen. It's clearly needed, as evidenced by all the awful hacks out there, and it's not even that hard: I've seen the conhost code. You could make the protocol a public API with minimal changes.

Yes, the resulting interface ends up being a bit more complicated than openpty(3), but that's okay.

