> Another interesting fact is that ADS names can contain symbols which are normally forbidden for filenames like " or *
OK, this is going to completely mess up all of my (and most likely everyone else's) shell scripts. I always assume these characters cannot legally occur on Windows anywhere in the path...
_________________________________________
And wow, this is quite clever and also quite difficult to foresee:
> The best security vulnerability to explain directory junctions is in my opinion AVGater, where the attacker places a file in folder x. Then he marks the file as a virus and the installed AntiVirus solution will move it into the quarantine. After that the attacker removes the folder x and replaces it with a directory junction named “x” which points to C:\windows\System32\. If the attacker now clicks on the “restore” button the AntiVirus solution will copy the file to the x folder which now points to system32 with SYSTEM privileges (which directly leads to EoP).
For that second one: That sounds like a vulnerability in virus scanners even without junctions, although junctions make it enormously easier to exploit:
* Create a directory in a location you have permission to, but you think a higher-privilege user will create it later.
* Create and quarantine a file in it.
* Delete the directory.
* When the higher privilege user creates the directory, you can restore your file into it.
The fix is for virus scanners to operate at user permission level when restoring the file.
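For reference, the junction swap in the quoted attack needs no special privileges; from cmd.exe it is just (paths and file names are illustrative):

    mkdir x
    rem ...drop the bait file in x and let the AV quarantine it...
    rmdir x
    mklink /J x C:\Windows\System32
    rem anything "restored" into x now lands in System32

Note that mklink /J, unlike mklink /D, does not require administrator rights, which is part of what makes the attack practical.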
> The fix is for virus scanners to operate at user permission level when restoring the file.
"user permission" as such doesn't exist, it is always a specific user. Which user should the restore function run as?
There is no easy solution to this, as the restore must be able to restore the file to locations like the windows directory as well as directories that only specific user have access to, like a home folder on a file server.
> the restore must be able to restore the file to locations like the windows directory
Then the user will need to get the help of someone with an admin account i.e. their IT department (they probably have an admin account themself if it's a home computer). If this happened regularly it would be a problem, but how often do virus scanners false positive on system files and users quarantine them? I would bet not very often.
The problem might be that whoever is clicking restore is likely running that software as an admin, so the user clicking restore is effectively the admin user. I'm not 100% sure, but I think that's the behavior.
Edit:
Found this[0], which clarifies what happens when you run software elevated through UAC. I can imagine an A/V program needs admin privileges to run at all.
That will not solve the problem of writing to places an administrator can't write to:
1. Places where only SYSTEM has write access (e.g. system32)
2. Places where only a specific user has access (a private folder in a file server share). An administrator won't be able to restore the user's file without taking ownership of the folder.
Ok, let me ask: why? What is the absolute worst case scenario here? That a user in your organization gains admin privileges on their workstation?
Nearly every org I've ever worked at already gave their users admin privileges because trying to do their job without it caused far more friction than any imaginary gains from locking them down. So they might screw up their OS, big deal, that's what imaging is for.
It's not like ransomware needs admin privileges to ruin your day anyway. In fact, local admin does absolutely nothing for it since it will still only have permissions to the same network resources that the user does, and of course their local documents.
There are some kiosk-style single-application appliance use cases out there where it makes sense to lock things down just to make sure the user isn't browsing reddit all day or something, but by their nature those are at low risk of being infected and you care even less about them than a workstation if they are.
I honestly can't think of a realistic scenario in which this isn't a complete non-issue.
> Nearly every org I've ever worked at already gave their users admin privileges
I assume you've never worked at a large retail or investment bank, or similar organisation, then? Admin privileges are never granted to ordinary users; to install new programs or perform administrative tasks requires a call to the helpdesk, and there is usually some form of remote management service running to handle updates and so on.
An attacker that gains SYSTEM permissions on a machine can dump the SAM database, which may contain credentials for domain or enterprise administrators. This is very bad.
A user with administrator permissions can modify their DNS settings to avoid DNS-level filtering to block known malware domains. This can be bad.
A malicious or ignorant user can disable or remove antivirus protection. This can be bad.
A user can accidentally make a computer nonfunctional by, for example, deleting system files, perhaps because it was suggested (sarcastically) on a help forum ("LOL delete system32"). This is bad.
I'm far from a security expert, but from a sysadmin perspective, every user having local admin privs is an absolute nightmare.
> An attacker that gains SYSTEM permissions on a machine can dump the SAM database, which may contain credentials for domain or enterprise administrators. This is very bad.
True enough, but unlikely to be a significant risk. If you are concerned about it then it is much easier to have a separate AD account that only has local administrator on normal (non-admin) workstations and use that.
> A user with administrator permissions can modify their DNS settings to avoid DNS-level filtering to block known malware domains. This can be bad.
It is trivial to block or even reroute DNS queries to unapproved servers at the firewall. At least until DNS over HTTPS ruins that and simultaneously renders your point moot.
> A malicious or ignorant user can disable or remove antivirus protection. This can be bad.
I disagree. Anti-virus software has negative value and always causes significantly more trouble than it is worth. Case in point, the very issue we're talking about.
> A user can accidentally make a computer nonfunctional by, for example, deleting system files, perhaps because it was suggested (sarcastically) on a help forum ("LOL delete system32"). This is bad.
This is a minor inconvenience at best, that's what imaging is for.
> I'm far from a security expert, but from a sysadmin perspective, every user having local admin privs is an absolute nightmare.
If it were a nightmare, I'd have expected to see more issues arise from it throughout my career, but I've yet to see even one instance where a user having local admin has caused any significant trouble.
Many orgs haven't deployed LAPS yet. I know, lame, but it's a fact. Even fewer have migrated to use DAWs, 2FA, delegated creds, or the other top ways to secure AD. It's really complicated to do on a production environment of >1000 users.
Many networks are essentially flat, and don't make use of intra-network firewalling. So a compromised client can do MAC flooding, DNS spoofing, send SMB requests to other clients, pretend to be a printer, etc. All of these are preventable, but it just isn't in the mindset of most security orgs.
But the main reason that people don't have local admin is a psychological one: managers don't understand security and have a paranoid need to lock everything down for end users, even though they are not the main threat vector, and are the people who generate revenue.
> True enough, but unlikely to be a significant risk. If you are concerned about it then it is much easier to have a separate AD account that only has local administrator on normal (non-admin) workstations and use that.
I strongly disagree. If an attacker gains access to a system as an admin user, that is a much larger problem than a non-admin user. Designing an environment as you suggested helps compartmentalize that, but it doesn't help the fact that a SAM database from one of those machines can potentially spell an almost company-wide compromise. Such a strategy gives far too much opportunity for lateral movement in case of a single compromised machine.
> It is trivial to block or even reroute DNS queries to unapproved servers at the firewall.
That's a good point.
>I disagree. Anti-virus software has negative value and always causes significantly more trouble than it is worth. Case in point, the very issue we're talking about.
Regardless of your feelings on antivirus (I'm ambivalent myself), I don't quite see how a potential abuse of an antivirus software, which can lead to EoP, is justification for allowing all users in an enterprise to have full local admin privileges. You're cutting out the step where an attacker has to exploit an EoP vulnerability.
> This is a minor inconvenience at best, that's what imaging is for.
True, but if I can avoid the trouble by limiting user privileges, why not?
> If it were a nightmare, I'd have expected to see more issues arise from it throughout my career, but I've yet to see even one instance where a user having local admin has caused any significant trouble.
It's possible it may never be a problem in $arbitraryEnvironment, but that doesn't make it good security practice. I can imagine that in smaller environments, it may even be somewhat manageable.
There will probably be circumstances that create a need for local admin access as you described, but I don't think it should be the modus operandi.
> I strongly disagree. If an attacker gains access to a system as an admin user, that is a much larger problem than a non-admin user.
Not really. There's quite a lot you can do as a non-admin user too, you know. You can probe anything on the internal network that machine has access to and act as a relay for the attacker, for instance. Since the user does work for the company, they probably already have access to a lot of things the organization would rather not give out. Local admin gives you a few more options, but the difference isn't really worth special attention in my book.
> True, but if I can avoid the trouble by limiting user privileges, why not?
Because you're also causing a lot of friction for the users. Users do work too, you know; in aggregate hopefully a lot more than sysadmins. Every barrier you put between them and their work is a cost, and my argument is that the cost of locking everyone out of local admin pretty much always outweighs the benefits.
> I don't quite see how a potential abuse of an antivirus software, which can lead to EoP, is justification for allowing all users in an enterprise to have full local admin privileges.
It isn't, it's justification for disregarding "could disable the virus scanner" as a valid reason for denying local admin to the user.
"Good security practice" is often, in my experience, either a lot of academic crap that completely ignores the concept of risk analysis, or what someone who's trying to sell you something recommends. Every barrier you put between people and their ability to do work for the company is a cost that you have to justify against the cost of a compromise and the probability of said compromise actually occurring.
Depending on your environment, that level of cost may be worth it, but I think that's true for a much smaller segment than a lot of people who argue the point. I suspect this is because my paycheck depends on keeping the business running smoothly, not selling security consulting services.
> Not really. There's quite a lot you can do as a non-admin user too you know.
Yeah, of course. Otherwise ransomware would be much less prevalent than it is today. That said, there's quite a bit an admin user can do that a normal one cannot. I suppose we just disagree on how important the distinction is.
> Local admin gives you a few more options, but the difference isn't really worth special attention in my book.
And that's fine, but surely you can see why plenty of organizations feel that it is, right?
> Because you're also causing a lot of friction for the users...Every barrier you put between them and their work is a cost, and my argument is that the cost of locking everyone out of local admin pretty much always outweighs the benefits.
Well, I can only speak to my experiences, but given the technical knowledge of 90% of the users on systems I support... it is definitely worth the cost.
> It isn't, it's justification for disregarding "could disable the virus scanner" as a valid reason for denying local admin to the user.
Touché. That said, in environments with antivirus requirements, for whatever reason, a user being able to remove such a program is a problem.
> "Good security practice" is often, in my experience, either a lot of academic crap that completely ignores the concept of risk analysis, or what someone who's trying to sell you something recommends.
The same could be said RE: antivirus. In many situations, protecting a user from running dangerous applications (e.g. a trojan delivered by a social engineering attack) is likely more important than a hypothetical escalation of privilege by a user who calls helpdesk every time their password expires.
> Depending on your environment, that level of cost may be worth it, but I think that's true for a much smaller segment than a lot of people who argue the point. I suspect this is because my paycheck depends on keeping the business running smoothly, not selling security consulting services.
Hey, it sounds like we're in the same business! :) I agree, different environments = different requirements. But that doesn't make for good headlines, either.
> Well I can only speak to my experiences, but given the technical knowledge of 90% users on systems I support...it is definitely worth the cost.
Is this meant to mean they have high or low technical knowledge? Because we support some users with a decidedly low understanding of... well, everything really, and it hasn't been a problem. They still give help desk a lot of grief, but it's because they don't know that windows can be minimized and things like that. Accidentally installing malicious applications or other reasons you'd think of restricting them really just doesn't come up.
Well, there are two groups that worry me: those that don't know Windows can be minimized, and those who believe they're much better at using computers than they actually are. And honestly, the second group worries me more.
That group doesn't represent the majority of users by any means, of course.
> Ok, let me ask: why? What is the absolute worst case scenario here? That a user in your organization gains admin privileges on their workstation?
I think the critical bit you're missing is that if a user can do something, a virus running under their credentials can do it too. Meaning I'd expect a virus can use this to get admin access.
It's less serious for any machine you have physical access to, but it is definitely an issue if you are remotely logged into a central server. If you manage to escalate privileges in this situation you could look at the files of other users e.g. the spreadsheets that only the payroll team is supposed to have access to...
On Unix systems there are only two byte values that cannot appear in a directory or file name: 0x00 (NUL) and ASCII '/'. Note that I say "byte value": the kernel and the filesystem driver have NO idea what locale you're using in user-land, and they don't care.
On Windows both '/' and '\' cannot appear, though as we can see here, ':' and others are problematic...
In shell scripts you really have to quote every damned string. I really wish that, instead of splitting words on $IFS and interpreting glob and other shell special characters by default, the default were that $varname gets no such handling, and if you wanted it you'd have to write ${varname*} or something like that.
Because I'm talking about Windows and not Unix systems, and on Windows, literally every piece of documentation I've seen has said that * < > : / etc. are invalid name characters.
NTFS and the Native API had to support operating system personalities where these were perfectly legitimate characters to have in filenames, right from the start. It is the personalities like Win32 and 16-bit OS/2 that introduce the extra limitations.
That's just one of the many reserved file/device names in Windows!
"Do not use the following reserved names for the name of a file: CON, PRN, AUX, NUL, COM1, COM2, COM3, COM4, COM5, COM6, COM7, COM8, COM9, LPT1, LPT2, LPT3, LPT4, LPT5, LPT6, LPT7, LPT8, and LPT9." [0]
When they were added, Microsoft could have chosen to put special devices in their own directory (e.g. C:\dev on each drive, or reserving D: for devices, or any number of other schemes). Instead they decided to make these special device names implicit members of every directory on every drive.
Similar to make treating tabs as significant, the decision to be compatible with the relatively few existing users doomed many, many more users in the future to endless problems. Backwards compatibility has a non-zero cost to users too!
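The implicit-member behavior is easy to observe from cmd.exe (at least on older Windows releases; Windows 11 reportedly relaxed some of the reserved-name handling):

    C:\temp>echo hello > con.txt
    hello

    C:\temp>echo hello > nul.txt

No con.txt or nul.txt file is ever created; both writes go to the devices, no matter which directory you are in.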
It's a bit before my time, but I imagine that would have broken compatibility with some CP/M programs that wouldn't be aware of the concept of directories and would look in cwd. Given the era we're talking about, that seems like it'd actually be a pretty big deal. Until relatively recently, Microsoft was big on backward compatibility, and it was probably a pretty significant factor in their rise to dominance.
As Raymond Chen explains, the compatibility is not with CP/M but with an idiom that people still use today, namely assuming that one can redirect to/from devices like NUL without a path prefix. There's an awful lot of existing practice that does this, and documentation that says this is what to do.
Random example: Here are SuperUser answers written this year, only a few months ago, employing this idiom.
On CP/M, device names ended with ":". To me, it'd be perfectly reasonable to treat "LPT:" as a magical file present in any directory, but not "AUX" or, much worse, "aux.c", which doesn't look like a device name. Besides that, "AUX:" was never a thing in CP/M IIRC.
That doesn't convince me they actually thought this through at the time.
An old DOS trick to create a file is to run "copy con newfile.txt" and hit Enter. Then you can start typing the file's contents. I don't recommend using backspace or delete, since they don't always work like you'd think, depending on your shell. When you're done, hit F6 or Ctrl+Z followed by Enter. It still works in cmd.exe.
Similarly, you can create an empty file with "copy nul newfile.txt" or test copy a file with "copy filename.txt nul".
Con was short for console, I think. “Copy con:test.bat” was a quick way to create and fill out a batch file provided you could write it error-free in one go.
It is possible using the \\?\ prefix, e.g. \\?\C:\con.txt is a valid path. Unfortunately, such files still confuse many applications that don't use this prefix, including Explorer.
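You can try it from cmd.exe (assuming a C:\temp directory exists; the \\?\ prefix bypasses Win32 path normalization, so the reserved-name check is skipped):

    echo test > \\?\C:\temp\con.txt
    type \\?\C:\temp\con.txt
    del \\?\C:\temp\con.txt

The built-in commands accept the prefixed form, but Explorer and most other programs will choke on the file until you delete it the same way.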
Are ADS used for anything useful anywhere? It is kinda interesting how long MS has been carrying that feature around, especially as it is not really a highly advertised feature; is it even really supported? ADS does constantly pop up in these sorts of (semi-)malicious contexts, so simply dropping it seems like a sensible thing to do.
The current uses in Windows I'm aware of (not including external programs) are:
- :$I30, :$Bad, :$Info, :$SDH, and a number of others, internally used to name the stream that stores the children of directories. But the stream names seem pretty unnecessary to me; they aren't exposed to external code, so they could well be different files and it wouldn't affect any user.
- :Zone.Identifier, for denoting whether a file is from an untrusted origin (so you can get that annoying pop-up every time you run a downloaded file)
- :WofCompressedData, for the newer transparent file compression mechanism (older ones were different)
- :Win32App_1, used for I'm-not-sure-what
- :encryptable, used along with Thumbs.db, also for I'm-not-sure-what
- I think there was one more regarding extended properties or something, but I can't think of it right now.
Aside from breaking :WofCompressedData I don't think anyone outside Microsoft would shed any tears if it was removed.
Correction: It looks like I messed up the first bullet a little when editing. Only :$I30 is used for storing directory contents. :$Bad, :$Info, :$SDH, etc. have different purposes.
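Of those, :Zone.Identifier is the easiest to poke at yourself. PowerShell has had stream parameters since v3 (installer.exe here is a stand-in for any downloaded file):

    Get-Content -Path .\installer.exe -Stream Zone.Identifier
    # [ZoneTransfer]
    # ZoneId=3

    # Remove the mark-of-the-web (same as the "Unblock" checkbox in Explorer)
    Unblock-File -Path .\installer.exe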
That's a different problem though, right? I thought you were asking where OneDrive stores the information necessary to know where the file is located on your cloud storage, and it would make sense for that to be in a stream.
Yep, but it looks like most things are done server side, then everything OS-side is done through the OneDrive client and a driver. AFAIK, this is the second implementation of "online files". Sorry, I'll stop dragging this offtopic now.
If you copy indirectly from one OneDrive folder to another OneDrive folder, the file may not sync, because the ADS indicates that the data has already been uploaded.
No clue! I don't use it and I uninstall it wherever possible, so I haven't really considered it part of the OS. :P But yeah, it's totally possible sync programs use it too.
> They are used extensively in [...] Windows itself. ADS underpins pretty much every FS operation on windows, including listing directories.
I think you might be confusing alternate data streams with NTFS attributes (not to be confused with file attributes or extended attributes). i.e. Paths are formatted as \PARENT\NAME:STREAM:$ATTRIBUTE and I think you're thinking about the :$ATTRIBUTE portion (like $DATA, $INDEX_ALLOCATION, etc.) rather than the :ADS portion, because as far as I know Windows does not use ADS extensively by any means. I've only seen a handful of uses in the OS. Or maybe you mean the fact that directory $INDEX_ROOT and $INDEX_ALLOCATION attributes are used with a stream called $I30, but I don't see why that's "underpinning" anything in any way... they could just drop the ability to create any other streams and it wouldn't affect this.
Both, ADS and xattrs are extremely useful. Let's say you need to associate some metadata with a file, but you can't change its contents' format. You could use some separate file that goes with it, but now you can't atomically rename(2) the thing... But with ADS/xattrs you just attach those to the file, and then when you rename(2) the file the ADS/xattrs go with it atomically. (Yes, you could rename a directory, but it doesn't quite have the same semantics as renaming a file. In particular, rename(2) of a directory won't rm -rf the target if it exists.)
POSIX-ish systems that support ADS use openat(2), not open(2) naming conventions, to get at ADS. POSIX-ish systems that support xattrs use other system calls specifically for that purpose. It's NOT ADS that causes these problems but the naming conventions used to make them accessible without having to write programs that use new system calls instead of open(2) (POSIX) or CreateFile() (WIN32).
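A minimal PowerShell sketch of the attach-metadata-then-rename pattern (file and stream names are made up):

    # Attach metadata without touching the file's contents
    Set-Content -Path .\report.pdf -Stream review.status -Value 'approved'

    # The stream travels with the file when it is renamed or moved on the same volume
    Rename-Item -Path .\report.pdf -NewName report-final.pdf
    Get-Content -Path .\report-final.pdf -Stream review.status   # -> approved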
Oh wow, interesting, thanks for sharing! So it says:
> In addition, if a file has any file capabilities, these are stored in an alternate data stream for the file. Note that WSL currently does not allow users to modify file capabilities for a file.
I've never actually seen this used... I wonder what commands would trigger it?
Last time I checked you could create a directory with a name "like:this" from Linux or something, and Windows would display it but not allow you to access it in any way.
Interestingly, when you create a file name from the WSL shell using reserved Windows characters, those characters will get mapped to unicode codepoints from the private use area when viewed in Windows. So for example a colon (\u003A) will show up in Windows as \uF03A.
This means you can create a filename in Windows using the character \uF03A, and that character will show up in the WSL shell as a colon. You can even do the same thing with "regular" characters, e.g. using \uF061 instead of "a", and produce a filename that appears to be ASCII in the WSL shell, but is not actually accessible.
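You can reproduce the round trip from the Windows side in PowerShell (a sketch; U+F03A is the private-use codepoint WSL substitutes for ':'):

    # Create a file whose name contains U+F03A
    New-Item -ItemType File -Path ("weird" + [char]0xF03A + "name.txt")

    # From a WSL shell the same file lists as 'weird:name.txt':
    #   $ ls /mnt/c/temp
    #   weird:name.txt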
I prefer openat(2) and friends for dealing with ADS. It means you have to have specialized programs to deal with them from shell scripts, as open(2) and friends provide no naming conventions for getting at ADS. This approach is much much safer than the WIN32 approach of using :$HACK_ME_PLEASE and :$THANK_YOU_MAY_I_HAVE_ANOTHER.
Note that Linux has openat(2), but it doesn't support ADS. Solaris/Illumos does. Linux has xattrs, which, like ADS in Solaris/Illumos, requires separate system calls to access -- no open(2) naming conventions there.
As a teenager in the 90s I remember using a trick where if you created a directory on the command line with the character inserted by pressing alt-255 (I think) in the name you couldn't access it from the Windows GUI. Very useful for hiding stuff from my parents. I imagine this was FAT32 but can't remember for certain.
No, ASCII stops at 127. Everything above that is extended ASCII, specific to a code page. As you say, DOS's code page is 437 [0], and character 255 is a non-breaking space on code page 437.
The problem you're having is that you're using Windows, which uses either Windows-1252 or UTF-16-LE (Windows Unicode). On those code pages, non-breaking space is 0xA0 (160). Windows is converting extended ASCII 255 from code page 437 to either Windows-1252's [1] non-breaking space (0xA0, or 160) or UTF-16-LE's [2] non-breaking space (0x00A0). The glyph is silently translated.
However, even on Windows 10 you can still get to a place where you're using the original code page of 437.
Fire up cmd.exe. Run "chcp" and it should tell you that the active code page is 437. Run "copy con C:\text.txt" to create a new file from the console input. Type <Alt+255>. Press Enter. Hit F6 or Ctrl+Z and hit enter to finish the file. Now type "powershell.exe -Command "[io.file]::ReadAllBytes('C:\text.txt')"". Your output will be:
    255
    13
    10
That's code page 437's extended ASCII non-breaking space followed by carriage return and line feed.
Now run "type text.txt". You'll get blank output lines.
Now run "powershell -Command "Get-Content text.txt"". Your output will be "ÿ" which is Windows-1252 or Unicode character 255 or 0xFF. Even if you try "Get-Content text.txt -Encoding ASCII" you won't get the same output as you do from cmd.exe because PowerShell's ASCII encoding is actually code page 20127 (7 bit ASCII), not code page 437.
Now try to run "powershell.exe -Command "[int][char]'<Alt+255>'"". You'll get 160.
That's also why you can fire up PowerShell and type this:
    '<Alt+255>' -eq '<Alt+0160>'
And the result is true. (Note: Alt codes with a leading zero indicate a Unicode alt code.) Windows is translating the glyph from the alt code for you in the background. You have to use a program which doesn't try to do that for you.
What is there to fix? The colon has a special meaning to the Windows NTFS driver - if you use that in a filename then that file is inaccessible with that driver.
If you put a null or forward slash in a filename on an ext2 filesystem then that file becomes inaccessible on linux, should they somehow "fix" that?
Of course they should. It may be too hard to fix to be worth the disruption to stability, but it is a design error that breaks user expectations and can cause severe issues when interoperating with other filesystems.
And with NTFS, this is much more severe than your example with ext2.
For one, ext2 is a niche filesystem at this point, users practically never interact with an ext2 filesystem. It's not the main filesystem used on Linux.
And secondly, as a user, I have created files with '/', ':', '*', '?', '"', '<' or '>' in the name ('\' and '|' are also forbidden on Windows, but I admittedly have not yet needed those AFAIK).
Not being able to use these characters limits the ways I can express myself in what a file contains.
For example, I've had to rename a list with different levels of grouping from
"List of members: City > Lastname > Firstname.pdf"
to
"List of members - City, Lastname, Firstname.pdf".
And I'm still not convinced the recipients of that file actually understood that the levels of grouping were listed in the filename in the order from biggest to smallest grouping level.
Presumably, fsck-ing the NTFS volume (chkdsk, in Windows terms) should canonicalize the colon into some escaped/mangled form, since it isn't valid in a filename.
They do seem to have fixed this, at least as far as WSL is concerned. If I open a bash shell on Win10 and do `echo foo >/mnt/d/temp/like:this`, then I can open the resulting file in Notepad and see the content `foo`. `mkdir like:this` works too. The file or directory name does appear a bit mangled in Explorer, but it still works (and still looks like a colon from inside bash).
I think you are right that this used not to work in earlier versions of WSL. You could use colons within the WSL home directory, but not on the mounted NTFS drives. But it seems sorted now.
I can't view the post right now, so no idea about the content or if anything is fixed, but that colon syntax is supported by NTFS and called "alternate data streams". Granted the feature isn't exactly prominent and tools have various support for them, but it is getting better. Powershell's Get-Item has a -stream option, for example.
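A quick round trip to see it (demo.txt is a throwaway file):

    Set-Content -Path .\demo.txt -Value 'visible'
    Set-Content -Path .\demo.txt -Stream hidden -Value 'secret'

    Get-Item -Path .\demo.txt -Stream *          # lists ':$DATA' plus 'hidden'
    Get-Content -Path .\demo.txt -Stream hidden  # -> secret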
I think that was outlined as one of the very first tricks in the post. I don't have a system available at the moment to test that with. Also, it explores doing the same thing without a Linux system, and shows the various ways such files/folders can/cannot be accessed.
If you created some directories to test, you can delete them with bash (whatever bash git installs works). It does create nested directories, so you need to do a recursive delete, e.g. rm -rf on the top-level directory.
Speaking of which, my favorite stupid Windows trick I discovered: before long-path support, you could get nested folders too long for Windows to handle, thanks to some cool bugs, and you couldn't delete them with rmdir.
But I discovered I could robocopy sync a blank folder over them, and it was more than happy to oblige. robocopy is seemingly a more robust deletion tool than rmdir.
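For anyone who hasn't seen it, the trick looks roughly like this (paths are illustrative; /MIR makes the destination mirror the empty source, purging everything in it):

    mkdir C:\empty
    robocopy C:\empty C:\stuck /MIR
    rmdir C:\stuck
    rmdir C:\empty

It succeeds where rmdir gives up, presumably because robocopy handles paths past the old MAX_PATH limit internally.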
The author of the above blog copy & pasted the full article, explicitly removed the author names and references to the original source. That's pretty disappointing.
Wow. It appears that movaxbx.ru has copied several of my blog posts as well[1][2][3][4].
I wonder how hard it would be to take legal action (in particular to stop them from doing this altogether, as opposed to just taking down a single article). The site appears to be hosted in Russia[5]. From a quick glance over the Wikipedia page[6], it seems to me that Russian copyright law is quite similar to copyright law in most developed countries.
Assuming bad intentions from nationality is kind of racist. I get the tricks (and the examples given) seem to be usable maliciously, but we need to know the threats so we can protect against them.