An error message, still found in Windows 10, is a mistake from 1974 (threadreaderapp.com)
791 points by InclinedPlane 35 days ago | 252 comments



Back in the good old days of the early(er) internet, back when you could scan port 21 to find random machines with anonymous FTP access, a lot of the machines you'd find were, inevitably, Windows machines.

A 'trick' used by the file-sharing community to hide their files on these anonymous FTP servers was to create nested directories using these kinds of reserved names. The FTP server would let you create the directories as well as access them (if you knew the full path), but on Windows they would just cause errors or crashes if someone tried to access them. Combine that with the ability to create directories whose names were just spaces and you could hide quite a bit of stuff from the unsuspecting FTP server administrator.
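For anyone who never saw this in practice, here's a rough sketch of what such a session might have looked like with a stock command-line FTP client (the host, directory names and file are all made up):

    ftp> open ftp.example.com
    ftp> cd /incoming
    ftp> mkdir com1
    ftp> cd com1
    ftp> mkdir aux
    ftp> cd aux
    ftp> put release.zip

Anyone who knew the full path could cd straight back in and fetch the file, while the admin's Windows-side tools would choke on the com1 and aux components.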


In my high school’s Windows for Workgroups 3.11 lab, we used to hide local installs of Doom, Descent, and other DOS games under a folder whose name was the character Alt+255 (which looks like a space in DOS and was invisible to Windows Explorer).

The lab admin had disabled Ctrl+C and Ctrl+Break to keep folks from breaking out of the DOS-based login prompt to a C:\ prompt, but I somehow figured out that Alt+3 entered an equivalent character (ASCII 3, the same code Ctrl+C sends) and had the same effect.

I once got yelled at for being “in the lab too much” by one of the teachers, but I never got in any trouble. I suspect the lab admin (a kind older programming and math teacher) knew what we were up to.


In the early 90s in middle school (6th-8th grade) we had access to a Macintosh lab - the original Macs - and they were all networked and relatively locked down. What they didn't do, however, was lock down access to shared network drives. In the lab we had a limited number of games on 3.5" floppy disks, and if you didn't get there early enough, no games for you. In one of the classes held in the lab we were instructed on how to save our work to the network instead of a floppy. This was in the heyday of HyperCard, so that's what students usually stored on the network drives. When my father showed me how to copy data from a disk to a drive at home, I connected the dots. I went into the lab after school one day and copied all of the games I liked playing, somewhere in the neighborhood of 50, from disk to the network. At 13 years old, I thought I was slick. No more waiting for games. I enjoyed this advantage for about a week, until I was called into the principal's office and got a talking-to from the network admin and the lab teacher. I had absolutely no concept of storage size, and at ~60 MB I had taken up a considerable amount of the network's storage at the time.


Did something similar in the late 90’s / early aughts.

Figured out the default password scheme for teachers.

Found several teacher accounts that didn’t change their default password.

Teacher accounts could write to network drives when students couldn’t.

Put games like Quake 2 and C&C Red Alert 2 on the network.

Lasted about six months.

A student I had confided in ratted me out.

I was no longer permitted to touch another school computer.

I failed every class that required me to use a school computer.

Despite the fact I brought my own laptop to school, they wouldn’t let me use it.

Formal education and I never got along after that.


That's ridiculous. Maybe I'm over-involved with my kids, but I'd think your parents would go to bat for you on that one. The school making you fail other classes over that is unacceptable.


There's a reason why "The Mentor"'s Hacker Manifesto has disdain for schools. I had similar things happen to me in high school. Still, it's just an anecdote.

Unless your parents can and will sue, the public schools (SPIT) will do as they choose.


They tried. I wasn't exactly a stellar student anyway. We switched me to another school district to finish out high school.

I'm convinced that I would have had a more useful education if I had dropped out, moved to Silicon Valley, and lived out of a van working for minimum wage at a startup than if I had finished high school.


I had a similar ban from using my high school's computing equipment for an even tamer reason - sending messages to my friend in the same classroom using "net send". The school claimed it DDoS'd their network and blew the whole thing out of proportion, like I was some hardcore out-of-control hacker.


I was given 20 hours of "community service" (school punishment, total joke) in middle school after discovering the messaging feature in Novell (the admin didn't disable it).

For a couple weeks the school was absolutely convinced my friend and I were responsible for taking a few computer labs in the district offline and claimed we created a virus.


I suspect we both had the same frustrating conversation with our head teacher :-)


In a separate incident at another school system I started attending the following year, I was suspended for three days for using WinPopup to send messages.


We had a Mac lab in the late 90s/early 2000s and they didn't lock down shared network drives either. A friend and I discovered this one day, and while looking around on the network we found that we could access teachers' gradebooks, lol. We also weren't allowed to install games, but we found a way around that by scheduling a task to run the installer a minute in the future.


I had a similar experience, but amongst other things I got everyone's passwords and figured out how to bypass/control this weird bookshelf launcher whose name escapes me now. The head computer teacher found out but didn't make a big deal about it. Instead, a friend and I ended up getting hired by the school for our last year and got an office with a coffee machine, since we were easier to deal with than the school board's IT.


I wonder if you're talking about the launcher for managed classic Mac workgroups, At Ease. I have an almost identical experience to you, including the coffee machine (though more of a closet than an office).


All it took to get around At Ease was to hit the interrupt or programmers key (which on newer Macs was cmd + the power button on the keyboard). That brought up the micro debugger, and you just had to type "G FINDER" and it'd dump you right out to the Finder.

Eventually our folks replaced At Ease with some other app (Cyber something, Control something? no idea), but they set up a hotkey to disable it which was nothing less than Shift + K. That didn't last long at all, with Karla, Kyle, Keith, and Katie getting incredibly frustrated just trying to type their names.


I remember on Deep Freeze for Windows, I was able to find the password in plaintext by searching win386.swp for the deepfreeze copyright string.


I had that as well. It was some IBM product. We figured out that open-file dialogs on, well, everything still let you start Windows Explorer. From there you could get a DOS prompt and do stuff.

It was used mostly for Brood War; this was around 1998.


Yup it was IBM something and the same era.


Was this IBM School Vista by any chance? The login screen was a school entrance with yellow buses, the main screen was a student desk in a classroom, and the application launcher was a bookshelf in the classroom. One of my friends and I also had a fun time finding ways to break out of the locked-down shell into Windows Explorer -- as well as making interactive parodies of the classroom UI using PowerPoint and HyperStudio.


My friends & I used to play Netrek in the Mac lab, which was against the rules. When the lab assistant would walk into the lab, we'd all hit the reboot button. Hearing ~10 simultaneous ding noises was pretty hilarious. He knew what we were doing, but never caught us in the act, and I suspect he found it more amusing than not.


Around my branch of the University of California it was Marathon. When the lab got Power Mac 7100s, it was full at all hours with people playing, which made it impossible to get any actual coursework done.


He knew what you were doing; he was just letting you have some fun. If you work in IT now, he probably helped push you in the right direction.


You could also put ANSI codes in filenames, to set black text on a black background for fun.


Back around the turn of the millennium, this was a great way of getting the latest releases of all sorts of warez groups. Put a machine on a public network somewhere, install an FTP server, 'forget' to disable anonymous access to /pub, watch crappy cams of The Matrix in your dorm room before anyone else. And if anyone came knocking about the bandwidth use - duh, silly me, those hackers had me fooled again...

The only thing was that you had to access your own machines with an ftp client because of path tricks like this. (Or get a Linux box, and cross your fingers nobody would find yet another wuftpd buffer overflow)


> cross your fingers nobody would find yet another wuftpd buffer overflow

wu-ftpd was responsible for popularizing format-string attacks, which were magical at the time.

We all knew about buffer overflows, and once they were explained, format-string attacks were obvious, but it was the first time I'd seen a genuinely new class of software attack.

Happy memories!


Currently, there are ~700,000 FTP servers that allow anonymous access and around 12% of them are running Microsoft FTP:

https://www.shodan.io/report/oZpN8rpp


And the rest are Linux boxes.


Do they allow write access, though?


Nowadays you can do the same with random S3 buckets of various companies (:


I also remember people hacking IIS, SQL Server, or WebDAV, or exploiting one of the hundreds of RPC/DCOM/LSASS vulnerabilities in Windows; the compromised servers were then used to run private FTP servers for sharing warez. At least that was something the FXP community did: the FTP servers were then filled very quickly from other FTP servers (hence the name FXP, File eXchange Protocol). Thinking back on it, it was quite astonishing how horrific Windows security was; that people (to this day, even) seriously consider Windows a viable server operating system still makes me laugh.


Windows security has improved a lot in the last ten years.


> exploiting one of the hundreds of RPC/DCOM/LSASS vulnerabilities in Windows

And then Microsoft got tired of it, and (IIRC starting with Windows XP SP2) locked it down. Now if you have a legitimate need to use DCOM (for instance, OPC-DA), you have to jump through a series of hoops.


OPC-DA is the worst. Fortunately at least a few vendors are pushing OPC-UA (no DCOM!), but the whole industry moves like molasses.


I recently discovered the wild world of OPC and its variants and the huge community of paid middleware around it. Sure goes deep. Wrote something to pull OPC stuff into InfluxDB and then moved on with my life not having to worry about... or remember how DCOM works again. Hopefully...!


I'd forgotten all about interop with DCOM until just now! Several years ago (when I last had to do it) I recall having to horribly mess with services on my local machine just to get anything to appear to work...let alone actually knowing if it did what I wanted (consistently) and then how to get that to work at a customer site (we were told to document it and throw it over the wall to support...)


See chapter 4 of this PDF for an idea of the amount of settings you need to change to make it work: https://www.kepware.com/getattachment/04042e47-c690-467c-a93...


It all seems like a fuzzy memory now. My exposure to warez started with BBS then migrated to IRC/FTP. As a teenager during this time I remember how many enterprise networks were easily compromised. Seeing this from the outside, I just figured that security wasn’t important to them — or they didn’t know how to manage their security.

I wonder if that was really the case in the early internet at an enterprise level. Did security take a backseat to functionality?


From first hand experience, it still does.


How did the Windows FTP server create files with protected names?

Shouldn't it have crashed, like the blue-screen-causing <img src="C:/con/con">?


As far as I know, you can bypass that with a \\?\ prefix in the path. This tells the API that your path is a full NT file path; similarly, devices are mapped into \\.\ . Of course, many tools on Windows were not aware of that, so anyone trying to delete "C:/con/con" would run into the issue that they needed a tool which would delete "\\?\C:\con\con" instead.
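A minimal sketch from a modern cmd prompt (the C:\Temp paths are just examples, and behaviour varies a bit between Windows versions and tools):

    rem create a file and a directory most Windows tools will choke on
    echo test > "\\?\C:\Temp\COM1"
    mkdir "\\?\C:\Temp\aux"
    rem ...and remove them again with the same prefix
    del "\\?\C:\Temp\COM1"
    rd "\\?\C:\Temp\aux"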


To be honest, I don't know. This is nearly 20 years ago now.

I assume the FTP server created the directories using a different file-system API call than Windows Explorer.

Here's a post by someone who found one of these directory structures on his FTP server back in 2003: http://www.informit.com/articles/article.aspx?p=31278


My guess is that there are two APIs to access files, a user-mode API and a kernel-mode API, and this special-casing only exists in the user-mode one. If the FTP server uses the kernel API it bypasses the special-case file handling.


The guy forgot to try to log into the ftp server itself to delete the files. After all, if the ftpd program used the correct APIs to create the directories and files, it would probably also use the right ones to delete them.


Judging by how the article was written, that probably wouldn't have helped him. You couldn't delete these directories directly, but rather had to delete the nested "unnamed" directory, "/path/to/COM1/ /", which is also how you would create the directories in the first place. Usually, the tag directory would also come with a set of directories whose names explained that you should not create "undeletable" directories on NT-based servers, as that would annoy the admin and might cause them to remove anonymous access. You were only supposed to create large mazes of directories to prevent other random anonymous users from finding and deleting your files.


There has always been a discrepancy in capability between win32 file-related APIs and GUI tools. Even in Windows 10, it is possible to use the APIs (as simply as through, say, git bash) to create files and directories inaccessible (even for deletion) to Explorer.


Since the special filenames were handled at the OS level, any crash would have required the OS to mount the FTP directories. I don't think Windows' FTP program ever mounted anything, meaning the directories were handled by the program and not by the OS. When you copied anything to your local Windows machine, the file would automatically be copied to a directory with a safe name rather than mimicking the directory hierarchy from the remote server.


>>Back in the good old days of the early(er) internet, back when you could do a port search on 21 to find random machines with anonymous FTP access

Man, I remember doing this to find video game executables. Good times.


Memories of fxp


Ironically, I downloaded my first copy of Windows 95 from one of these FTP sites.


This was a common exploit during the heyday of the FXP scene.


Fun facts:

The reserved names are things like "COM" and "LPT" followed by a digit... or at least, a thing that Unicode recognises as a digit, so "COM²" is just as broken as "COM2".

This device remapping is done in the Win32 layer, not the NT kernel, so you can often use "verbatim path syntax" to bypass these hacks. So "C:\Temp\COM2.TXT" is a magic device, but "\\?\C:\Temp\COM2.TXT" is an ordinary file you can read or write without issue... until you try and look into C:\Temp with a program that doesn't use verbatim paths like Windows Explorer, and all hell breaks loose.


It amazes me that Windows Explorer (literally the most common way of interacting with files directly in windows) has so many limitations...


It amazes me that (a) it has an unhealthy relationship with the task bar - eg crashing it can take the task bar down too, (b) MS keeps messing about with it and removing useful things, (c) 1989's XTree was better, (d) its search is horrible, and (e) like you say, that any of these issues are a thing considering how central it is to all Windows users


Explorer == Shell. The Explorer windows you see in the shell are merely windows.


I think the point was more that Explorer is a single program doing two wildly different things with little overlap. While what should or shouldn't be divided into multiple programs is very subjective, I doubt you'd find many programmers who agree that the file explorer and the taskbar should be the same program. It's also confusing to users who try to access a non-existent network share and find their taskbar locks up [ignoring the fact that neither of the programs should lock up anyway].


There is an option to run (file) Explorer windows in a separate process from the Explorer instance that's powering the shell, although even after all these years it's not on by default.


I still use ZtreeW64 almost every day. It's a clone of the old xtree and still very useful.


Thanks for that. May I also recommend "Everything" [0] - multi-drive search is very useful for people like me with an unholy number of files.

0: https://www.voidtools.com/downloads/


Not enough people know about Everything. It's excellent, small, and faster than Windows search will ever be.


Everything _is_ really good, but my ancient CPU doesn't like it running in the background all the time. My search needs are few enough and localised enough that I've found Agent Ransack to be the best option for me.

And to be honest I haven't found any significant search performance difference, compared to the indexed-search options.


Interesting, had never heard of this before:

http://www.ztree.com/html/ztreewin.htm


As someone who hasn't used any Explorer-alternatives before - what makes it so useful, and who would you recommend it to?


If you feel the current search needs improvement, you might be happy to discover it's being worked on:

https://blogs.windows.com/windowsexperience/2018/10/24/annou...

Refer to the Enhanced Mode for Search Indexer section of that page.


> (d) its search is horrible

This one caught me by surprise a few months ago when I helped out someone using Windows. I knew the files were there, but couldn't for the life of me find them. Turns out there was some search setting that prevented search from traversing all dirs... Wasted hours. In the end I googled how to search in Windows. Absolutely awful UX.


eg crashing it can take the task bar down too

The taskbar is managed by Explorer.


... and there's the unhealthy relation. The task bar should be responsible for... the task bar, and Explorer should just be a view into the file system. There is zero sane engineering justification for the two to have any relationship whatsoever.


Isn't that what the"Start folder windows in a separate process" setting is for? More stability at the expense of more memory and less responsiveness. In the days of Windows 95 it was actually beneficial to not do that. Nowadays, not so much, I guess. Although it's been ages that Explorer crashed on me. Not having too many shell extensions probably helps here.


I think that "new process" menu option is a bit of a lie. I use it every time, but I've had supposedly separate Explorer windows (including the task bar) crash simultaneously.


Due to how the whole Windows Shell mechanism works (COM all the way down) there will still be lot of shared state between the supposedly separate processes.


Not as much in Windows 8+ since it seems that the "Windows Shell Experience Host" process started taking over increasingly more of the taskbar duties. It still seems possible for particularly bad Explorer process crashes to crash the Windows Shell Experience Host (presumably due to COM communication channels between them?), but not every Explorer crash affects the Shell Experience Host anymore. (More interesting now is when the Shell Experience Host crashes; Explorer tends to keep working just fine, and unlike before, opening a new Explorer window doesn't guarantee that the Shell Experience Host reboots.)


Sometimes I find interesting things in it. For example, you can open a terminal set to the current directory by putting cmd in the path bar. This also works with other commands and arguments, so you can do stuff like notepad [filename].


In the Ribbon version of Explorer (ie, since Windows 8), open folder in Command Prompt was added to the File menu.


Know how to get admin cmd this way?


Try Ctrl+Shift+Return instead of just Return for launching.


It's not a bug - it's a feature: reserved filenames with special functions, preserved through DOS and Windows over the years as part of Microsoft's philosophy of "compatibility at all costs".

There are many, many examples of similar (mis)features. Not sure why this particular one is getting so much attention. But yeah, if you know the history, it's not really surprising.

For anyone interested in this sort of software archeology, have a look at Raymond Chen's blog https://blogs.msdn.microsoft.com/oldnewthing/ which is full of this stuff. Very interesting reading.


"The file AUX.H is too large for the system"

That's a bug. It should say something closer to "The filename AUX.H is a reserved filename and can not be used by this system"


I just tried it on Windows 10, and the error message is:

    This file name is reserved for use by Windows. Choose another name and try again.
So at least that has been fixed.


Never used it. What’s this screenshot from?

https://pbs.twimg.com/media/DrEsBhgUUAACZga.jpg


That's Win10. I suspect in that screenshot that the file is indeed too big for the filesystem. I don't know how that could be (the "no more free space" message is different.)


9.57KBytes? That would be a very full disk!


Or a very oddly formatted one.

The disk full message is different, so my money is on an oddly formatted disk.


Apparently they did think of namespacing these device files way back in the day. According to Wikipedia:

"Versions 2.x of MS-DOS provide the AVAILDEV CONFIG.SYS parameter that, if set to FALSE, makes these special names only active if prefixed with \DEV\, thus allowing ordinary files to be created with these names."


Had they done that, it would have been fine.

It's bizarre that once it became clear that this was going to be a problem (when hard disks and directories were introduced, or with Win95, or XP at the latest), they didn't decide to deprecate the stupid way and push everybody to use the sensible way.

You may have to support both ways for a while, but once you start supporting stupid ideas like this out of a need for backwards compatibility, it should be obvious that it'll never go away unless you make it go away. Maybe allow old software to run in QDOS-compatibility mode or something. It's just insane that this is still a real limitation in Win10.


All OSs have silly legacy-based limitations. The problems caused by this compatibility feature were just never frequent enough to warrant attention. Nowadays Microsoft has less give-a-damn for user suffering than ever before, so I suspect that this bug will be in Windows right up until Microsoft finally stops pretending to care about the Desktop and abandons it.


Quoting Linus:

> If a change results in user programs breaking, it's a bug in the kernel.


But it doesn't have to break anything. Programs written for DOS or old Windows version already tend to run in compatibility mode. You can have these devices be "everywhere" when the program runs in PC-DOS compatibility mode, and in \dev\ when it's not.


So you lock cmd in compat mode for all eternity because there are batch files that use such things?


cmd is pretty crap anyway. Good excuse to phase it out together with the 'everywhere' devices, and replace it with a better command shell.


Oh look, developers of DOS/Windows software not giving one drop of thought to any new (security/compatibility) features unless things break.

Reminds me of how many things broke when Windows started any kind of basic enforcement of read/write permissions (files, registry keys) in NT 4, then in 2000, then in XP, until I guess they gave up and created an abstracted API so that programs can trash all they want.

(Yes, I'm still bitter)


[flagged]


I'm not complaining about MS, I'm complaining about the software developers (of 3rd party software)


This brings back fond, or not so fond, memories.

Back in the earliest days of Windows, I wanted a way to log some debug output, and there were not a lot of options. You could write to the AUX device, if you had a serial terminal attached.

So I figured I would write a device driver that redirected AUX output to the monochrome display, glass teletype style, while Windows ran on the color display.

Naturally, I named it AUX.SYS, with a source file called AUX.ASM.

Oops. That didn't work too well.

Eventually I figured out that I couldn't name any file AUX.anything.

I was going to call it AUXDRV.SYS instead, but then I thought it would be fun to use a name that sounded like AUX but wasn't spelled that way. So OX.SYS it was!


Being a nerd, I’m surprised you didn't go with ORCS.SYS ;-)

(Unless, in your dialect, you don’t pronounce 'orcs' and 'aux' in exactly the same way, as I do)


In my accent (a rhotic New England without the cot–caught merger) they’re all different:

• “ox” /ɒks/

• “aux” /ɔːks/

• “orcs” /ɔɹks/

But hey, close enough for a pun anyway. My favourite rhymes and puns are those that work in any accent, but it’s a bit harder to come by one that’s not overused.


Actually, you've reminded me that some of the best puns are the ones that don't quite work anyway. Part of the reason "surely you can't be serious..." is so funny is that it's tortuous to the point of absurdity!


I think I've told this story before, but several years ago I discovered this stuff for myself.

I had tidied up a script distribution that had been unloved for a while and got a bit out of hand. I'd been working at home on a Debian laptop, and committed my changes back to Subversion - one of which was to gather up and organise some auxiliary files into one neat and tidy directory. Naturally, since the script already had ./bin, ./lib and ./etc, I went with aux.

I came into work after the weekend to find my Windows-based colleagues, who were in the habit of checking out a single working copy of the entire repo, in full panic mode because "Subversion is broken, IT say we might have lost everything"...

These days I name aux directories 'etc'.


This gives a whole new meaning to source code not being portable.


I have lost count of the number of times I have checked out a large Java application which hits the 260-character MAX_PATH limit on Windows. It's a huge pain in the ass, as checkout from git or whatever usually works fine, but when you try to access the 40-deep folder tree, Explorer will error on you.


While I have as much disrespect for Windows and MS in general as any web developer who had to deal with IE, I think this issue is a bug in Git.

The Windows git executable should be aware of OS limitations and should be able to work around them. Of course, the Windows API should also fail to create invalid fs objects, so there's that...


I kind of agree and disagree at the same time.

Microsoft provides file-system APIs which are not impacted by MAX_PATH, and that is what git uses when it writes to the file system. If anything, I think Microsoft should fix Explorer, which they kind of have in Windows 10 with the long-path option.

I don't know enough about why Microsoft still have the MAX_PATH problem. I am guessing, like most things Microsoft, it is due to legacy compatibility.
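For what it's worth, on recent Windows 10 the long-path opt-in is a registry value (plus a longPathAware manifest for applications that want it), and Git for Windows has its own switch; the usual incantations, run from an elevated prompt, look something like this:

    reg add HKLM\SYSTEM\CurrentControlSet\Control\FileSystem /v LongPathsEnabled /t REG_DWORD /d 1 /f
    git config --system core.longpaths true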


An easy way around this is the `subst` command, which lets you map a folder to a drive letter; if it happens frequently, write up a batch/vbs script and put it in the right-click menu. Of course, if you extracted a prank zip file, you may want to find another method, as there are liable to be far longer paths nested in there...
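Something like this (the path is just an example), after which you work out of W:\ instead of the deep tree:

    subst W: C:\src\some\very\deeply\nested\checkout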


Oh yeah there are plenty of workarounds, just an annoyance when it catches you out. Not much of an issue if you are on Windows 10 as you can enable Microsoft's long path setting. I don't often use Windows these days though so it is even less of an issue :)


Another way for source code to be not portable is to include two files with different casing in the same directory, for example "bin" and "BIN".

You can't work properly with that on the case-insensitive file systems on Windows and Mac.


You can actually turn on case insensitivity in windows, though not all software will like it.

NT has always supported case-insensitivity (at least somewhat) via a registry key[1].

With windows 10's linux subsystem, there is now a new command to make a _directory_ case sensitive[2].

[1]: http://www.nicklowe.org/2012/02/understanding-case-sensitivi... [2]: https://blogs.msdn.microsoft.com/commandline/2018/02/28/per-...
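The per-directory flag from [2] is set with fsutil on Windows 10 1803 or later (needs an elevated prompt; the path here is just an example):

    fsutil.exe file setCaseSensitiveInfo C:\src\repo enable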


Windows is already case insensitive, no? Case insensitive but case preserving (a file named in ALL CAPS will remain ALL CAPS and can be accessed with lowercase letters as well.)


Hahaha! I think I'm going to include an aux or con file in every project I make from now on :)


For best effect, include both 'aux.\/aux.c' and 'aux.\/AUX.c', to also lance several other pustules of brain damage.


"The project that eventually became the .NET Framework was originally called COM3. I quickly realized this was a bad idea because of this bug. We then renamed it to COR (Common Object Runtime)."

https://mobile.twitter.com/ckindel/status/105881379620925440...


Read about Microsoft's never-ending list of compromises at https://blogs.msdn.microsoft.com/oldnewthing/

I don't even blame MS nowadays. My idealism is weaker than my nostalgia. Without these hacks MS would probably have died long ago. #worseisbetter


There are bugs, there are design flaws, and there are limitations. Often people are confused how to classify an issue between these categories.

This is a 44 year old limitation.


Not expecting a limitation like this could lead to undesired behavior (it did!), which would constitute a bug. Maybe 44 years from now, someone will have a nanotech brain implant, and happen to read this tweet and think "AUX.H" and their implant will crash, and they'll have a seizure and fall in front of a bus. Technically, it was a "limitation" passed on, but it turned out there was an associated bug.


Hopefully 44 years from today, these timeless words will still echo:

"it's not a bug, it's a feature!"


The real bug is the misleading "too large for destination" message. The other message "invalid device name" is at least somewhat more descriptive, even better would be "filename is a reserved device name".


This. The file name is explicitly documented as not supported by the file system. The error message is buggy.

A correctly working system would have simply said what limitation the user had hit.

I’m sure that in every file system there are names you can’t give a file - and if there aren’t then that problem is at least as big. For example if it’s possible to call files null, the empty string, “.”, the directory separator etc then that just creates more trouble. These are documented limitations and a non-buggy system will explain what limitation the user has hit.

Inconsistency, such as differences between the file system and the file explorer, I could agree is at least borderline a bug in the UX sense. The user doesn’t see the file explorer UI and the file system as different things.


AFAIK, Linux accepts any byte sequence. I never found one that couldn't be used.


0x2F and 0x00 are not allowed in Linux filenames.


Also, filenames composed solely of 0x2E bytes and with length zero, one, or two have a special meaning and can't actually be used to name files.
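A quick way to convince yourself from any Linux shell (all the names here are just illustrative):

    touch 'CON' 'aux.h' $'tab\there' '*?"<>|'   # all perfectly legal names on Linux
    touch 'a/b'     # fails unless a/ exists: / is the separator, never part of a name
    touch ''        # fails: the empty name is not allowed
    mkdir '.' '..'  # fails: . and .. are always taken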


Try using a proper forward slash in a file name.



This is why I detest talk about "quality" in software. Quality isn't an inherent property - it's a relationship between you and the software. What you might regard as a limitation I might regard as a showstopper.

It's that failure to appreciate quality as a relationship which is the confusion.


Quality is also about other relationships between the software and its environment. It's not just the user and one piece of software in a vacuum.

Imagine being a developer of some windows software, and a QA person in your team notices that the software can't save to "aux.foo". You explain that this is a limitation of the Windows OS and you can't fix this. But QA insists that this is a "showstopper", and management agrees.

So you do a bit more research and you discover the work-around with the "\\?\" path prefix. You implement the "fix". Now QA discovers that they can save to "aux.foo", but windows explorer has weird issues with it, and they can't open the file in a text editor, and the file doesn't get backed up anymore, etc.

Wouldn't it have been better to accept the limitation instead of calling it a showstopper?


I wish Microsoft would just accept that no one will use Internet Explorer and devote resources to breaking MS-DOS compatibility.


Right. It's not a bug if it's the intended behavior. It's merely insanity that this is the intended behavior.


Intended behaviour is not expected behaviour. A bug in the spec is still a bug.


I remember trying to make that point at a former employer, and getting nowhere. Once something was in the spec it was written in stone, no matter how much pain it caused the users.

Unlike Microsoft's case, this one wasn't required for backwards compatibility. It was simply a dumb decision that nobody wanted to revisit.


Does it even matter? Undesired behaviour due to backwards compatibility with a little-known (in the wider sense of computer usage) 44-year-old feature which breaks someone’s workflow will be interpreted by the user as a bug, regardless of whether it is expected behaviour to the developers who wrote the code.

If that’s not technically a “bug” then it’s still broken UX (and the error message produced certainly doesn’t help there either).


Bug? No. Works as expected, one could argue.

Limitation? Yes.

Design flaw? Hell yes.


Good point. Oh man I love coconut though.


There's one thing I don't understand. Why was the colon dropped from the special name? I personally remember typing things such as "copy CON: sys.ini" to create heredocs. Programs could have redirected their output to "AUX:.H" or "CON:.TXT" and the problem described by the author would not exist, since the colon is not a valid character for a filename. Is it because programs did not let you pick a name with a colon in it?


This addendum is now in the article:

> CP/M actually didn't do these special names as simply as I described them, which is a fact I either never learned or had since forgot.
> It actually required them to be followed by a colon, as if they were a drive name.
> So PRN: is the printer, PRN is not.


Perhaps NTFS's alternate data streams have different treatment?


There never was a colon in special device names in DOS.


I remember using colons for these, and have never used CP/M.


The colon wasn't needed in DOS. If you added it, it was just ignored.


At least we can explain it away in terms of backwards compatibility.

Here's a quirk that's harder to explain: open a powershell window in windows 10. Hit alt-enter rapidly to switch back-and-forth to full-screen mode while looking at the min/max/close buttons. You can see flashes of windows-vista-era buttons. Is the new look just painted on top of the old one? That's crazy!


I'll explain it: In the olden Windows days, the application is in charge of painting its titlebar by handling the WM_NCPAINT message. The default window handler for this will paint the Windows Vista-era buttons.

During Windows 7's development, the responsibility of painting window frames moved from the application to a separate process known as the Desktop Window Manager, "dwm.exe". DWM tells the app that it should stop drawing its own frame when the application is windowed, and tells it that it is in charge of drawing "the frame" when fullscreen.

This is communicated using an asynchronous protocol (private window messages), thus during the transition it's possible for the window's idea of its frame state to be "out of sync" with the real state because the client hasn't processed all of its messages yet.


Wow, it's great to be able to finally connect the dots on that one, so thanks for the explanation. I guess either there is a subtle reason why it would be a bad idea to update the old-style handler, or they just never got around to it. Either way, I'm just glad the titlebar isn't being drawn twice at all times.


There's one other place where you'll see Vista-era title bars in Windows 10: in the title bars of MDI subwindows, where the application itself (or, rather, the Windows theming engine) is responsible for drawing the title bar rather than the DWM. For whatever reason, that little bit of the theme was never updated (probably because most users will never see it; AFAIK the only program that ships with Windows that still uses that style of MDI window is MMC).

My hunch is that something similar's happening here: when you bring a Windows Console window full-screen, it disables its window decorations and the DWM stops drawing them. Through some quirk of timing, it looks like the console window itself starts attempting to draw its decorations, and, when that happens, you'll see Vista-style decorations very briefly, because that's what the theme tells it to draw.

(I suppose it's also possible that console windows always draw their title bars themselves underneath the DWM-provided decorations... they've always been a bit weird in Windows.)


@foone is very quickly becoming my favourite account to follow on Twitter. He posts at least one super insightful Twitter thread every day or two about some fascinating quirk about computing history. Beats hitting Wikipedia:Special:Random for sure.


Related post from Raymond Chen's "The Old New Thing" https://blogs.msdn.microsoft.com/oldnewthing/20031022-00/?p=...


Chen's blog and book are both amazing. Highly recommended if you've ever asked yourself, "Why the hell would MS even conceive of doing this?"


This is also why Windows has the backslash for file paths instead of the forward slash. When DOS 1 came out with no directories, the forward slash was used for command-line parameters. When directories were introduced in DOS 2, the forward slash was already 'taken', so they used the backslash, for backwards compatibility.


Any source for this? I see no reason why the same character can't be used in multiple places. Parser should parse the command line until it finds a command, then send the rest of the line to it, regardless of content...


https://blogs.msdn.microsoft.com/larryosterman/2005/06/24/wh...

Update: this link adds the tidbit that IBM was concerned about 3rd party utilities that made assumptions about the slash.

https://www.howtogeek.com/181774/why-windows-uses-backslashe...


Parsing would be ambiguous, though, as the built-in commands don't require a space before their first argument. So things like copy/y foo bar were (and still are) valid. With directories they would either retain their original meaning, or refer to the y program inside of the copy folder.


It’s never a good time to take a breaking change. Microsoft has had a pretty extreme view of what’s “compatible”, including what seems to be a desire to not break existing applications that rely on undocumented or buggy behavior.

I’d want to argue that removing bugs and fixing things like this should just be done with a reasonable notice period, and if some aging system in the basement of a Fortune 500 company blows up, then I’m sure that can be solved too.

This said, I find the 70s smell more present and annoying in a Unix terminal than in Windows explorer. All the big OS’es are dinosaurs and it will show up here and there.


There's no need to break the old stuff. Applications can signal compatibility level so that new applications need not support legacy cruft.


So your OS is always a bloated superset of its entire history?


Old compatibility levels can be deprecated at some point, this would qualify.


Windows already does it. The problem being there is no manifest mechanism in the MS-DOS executable format. No, it is harder to work with such a system.


So, the question would be, why would Windows not remove this limitation from the main OS and only keep it on backward compatibility mechanisms (like the DOS prompt which runs - or used to - in virtual 8086 mode?)


They probably can't remove it as long as they have Win32. Real Windows programs sometimes do things like fopen("con") or CreateFile("nul", ...) for the same reason a program might open /dev/null or /dev/tty on Linux. I guess you could ask why Microsoft didn't remove this feature when they created Win32, or why they still allow forms like "C:\dir\nul.txt".
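The same reliance shows up in plain batch files, which is a big part of why it can't simply be ripped out; for example, these are all common, legitimate uses of the device names:

    rem discard output (the DOS-era /dev/null)
    dir C:\Windows > NUL
    rem create an empty file
    copy NUL empty.txt
    rem read from the keyboard until Ctrl+Z
    copy CON notes.txt
    rem send a file to whatever printer is mapped to PRN
    type report.txt > PRN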


But even so, why hasn't Win32 supported \dev\con since forever? Then programs could use that and the old stupid way could be phased out.


You'd have to modify and recompile the old programs, which may not be possible.


No you don't. Run them in compatibility mode. You often need to do that anyway. A _lot_ changed between PC-DOS 1 and Windows 10. It's ridiculous that this didn't.


Compatibility mode applies to processes, though, which raises a problem with batch files (or scripts) that use those things, as you'd then have to make cmd always run in compatibility mode (or Python, or Perl, etc.).


I don't think Python and Perl existed back then. You do have a point with batch files. Well, maybe some sort of compatibility mode should exist for them too.


They could have moved those into NFPS root.


My guess is, because of interoperability with older Windows machines on the local network, but I'm not sure.


There is very little motivation to fix this, and I can understand that.

The fix might potentially break some apps. So in the end there could be more people pissed off after the fix than there are now unhappy about the file-naming restriction.


Images don't load in Firefox for me with trackers blocked (which is the default in private mode). Had the images had a caption or something, I would at least have known the image wasn't an ad but something relevant to the article.


There aren't image captions since this is a Twitter thread, not an actual blog post


Twitter allows people to add image captions.

https://twitter.com/_Red_Long/status/948577112860086272


That's neat. Too bad it's entirely buried and no one uses it.


Click to the left of the URL on the shield icon and you can disable the blocking for threadreaderapp


I was a CS major in the late 80’s, and we’d regularly have communications majors come in to the CS lab to write their papers (this was back before most people had their own computers). At least once a week, one of them would save their paper as “COM1”... and shut off the computer and go home, thinking they were done.


I've written about this before (https://www.bitquabit.com/post/zombie-operating-systems-and-... ); we had to work around this on Kiln back in ~2010, because it meant that Unix users couldn't even access files named things like "aux.h" that were stored in their Mercurial repositories.

That said, I had an error in that original post which this weirdly replicates: CP/M doesn't actually have quite this same error, because these names show up through the PIP command, IIRC. I really want to link to the correction, but I can't find someone telling me what the actual correction is from the last time that got posted.



A coworker got forcibly introduced to this when he was writing a script that generated several thousand files that he'd decided to name alphabetically. Eventually AAA turned into CAA, which turned into COM, and accessing the resulting file failed, and I got the immense pleasure of being able to explain this.


Now that’s what HN is all about! Great read, thank you! And if you want to hear plenty from Gary Kildall, go to YouTube and search for “Computer Chronicles”.


Some part of me hopes this bug/limitation/legacy behavior is never addressed. How crazy would it be if this behavior somehow survives for the next 44 years? I find things like this fascinating; it's like digital archaeology to me.


An even earlier bug in Unix that considers "file.txt" and "FILE.TXT" two different files is still causing user anguish to this day. I mean, honestly, can any of you raise your hand and say that's made your life better?


Especially if you want to support Unicode, the idea of "case-insensitive" becomes an absurdly complex mess, and I believe it's even led to some security vulnerabilities (filename filtering, for example, now requires special Unicode-aware comparison). It's good that it stays out of the filesystem.


For me the asymmetry of case-insensitivity bugs me. Even if the filesystem is case insensitive, "touch file.txt" is different from "touch FILE.TXT". In the same vein, what should the case of the file be after "echo lowercase file > file.txt; echo uppercase file > FILE.TXT"?

Case insensitivity would make sense to me if all files were stored and displayed as completely lowercase (although I'm not sure how this generalizes to other character sets).


Raises hand.

(1) My Buck BUILD file can exist alongside my build/ directory.

(2) Outside of a 0x00 or 0x2F byte, I love not having to think about locales, character encodings, normalizations, etc. especially as I am sharing files across computers which may use different values for these.

macOS went from case-insensitive to optionally case-sensitive for what I suspect are similar reasons.


Fair anecdote, but compare that to all of the untold millions of times someone's day has been derailed because they didn't push the shift key. It's a user interface, not an API.


Does that really happen? Both CLI and GUI utils help find the correct file name (using Tab and by sorting, respectively). Can someone raise a hand if that happens to them?


Raises hand, not with GUI but with CLI.

Autocompletion stops early because 'SomeReallyLongSourceName.h' and 'SomereallyLongSourceName.cpp' are not the same after the fifth letter. It breaks globbing.


On the contrary, the file system is an application programming interface (hence its presence in every standard library and kernel).

File explorers, etc. are user interfaces.


The world is full of these loopholes. On exFAT, create a file named test and rename it to Test: ren test Test. Guess what, no error message, but the file remains named test. You'll need to ren test test2 and then ren test2 Test, and now it works.

File system and operating system are bound together; with Linux you can put a lot of stuff on an NTFS volume which just gets booted off when you run chkdsk on a Windows machine for the first time.


The wrong idea was to have file name extensions. The .whatever should just be part of the file name and have no special meaning.

All Unix derivatives look at the first bytes of the file to understand what it is. See man 5 magic for details. The shebang #! is one of those magic values and it makes the OS read the next bytes up to an end of line to get the interpreter and arguments to pass the rest of the file to.

Maybe CP/M used extensions to gain some speed (no need to inspect the file), not that hardware running Unix was fast by today's standards. Or maybe to force users to declare the purpose of the file in the name. However, as with all unnecessary things, it eventually came back to bite people.


That's still the wrong approach. You can easily get wrong types from that (e.g. look at the output of "echo FORM > /tmp/mismatch ; file /tmp/mismatch") and that opens up annoying problems for the user in the best case and security issues in the worse case.

The data type of a file is metadata and metadata shouldn't be determined by in-band signaling.

What are you going to do if your system miscategorizes a file? File name extensions are not ideal either as they overload the file name (they are out-of-band signalling regarding the content of the file but in-band regarding the file name) but at least you can change them yourself.


What is the alternative? If you don't trust the information that is sent with the file, do you expect the user to decide on the format of each file?

The file type is inherently paired with the file content, and the two must move together. Otherwise your images and documents become only a meaningless set of bytes. The information that "this is a png image" is as much a responsibility of the machine that is sending you the file as the pixel data.

Also, yes, accepting random data from a network is a big security risk. It's up to your OS to handle that data in a secure way, but this is not done by removing the "this is an executable" mark.


The obvious alternative is to have metadata about file in a separate descriptor, which could be saved either in first N bytes of the file, or in the file allocation table itself. However implementing this in this age would be impractical as it would break interoperability with every other (still alive) OS out there.


> have metadata about file in a separate descriptor

Maybe that metadata could be stored in the file's name, like "<filename><separator><format identifier>" or something?

ducks


> However implementing this in this age would be impractical as it would break interoperability with every other (still alive) OS out there.

It's deeper than that, although this is a very good point.

Who teaches the OS what file types exist? How is that updated?

You could rely on the OS vendor, but that's a bottleneck, especially for a consumer-focused OS where most of the applications (and, therefore, most of the application file formats) are not created by the OS vendor, or entities working in concert with the OS vendor. You could have a great new file format and a great new application and be unable to use it, or get anyone else to use it, because the people behind the OS haven't caught up yet, or because the OS updates haven't percolated out yet.

You could trust the application vendors to do it, which is a security hole: The ScuzzyTech Hyper Word Processor really, really wants to open your word processing documents, so it can synergize your monetization with microtransaction-based leveraged technologies. Therefore, all of your Normalcore Text Documents are now ScuzzyTech Hyper Word Documents only to be opened by the ScuzzyTech Hyper Word Processor, because the ScuzzyTech code helpfully updated all of your filetypes and, therefore, file associations.


You and the other people in this discussion are well on the way to reinventing extended attributes, used for exactly this purpose back in the late 1980s and 1990s in OS/2.


I'm all for a metadata standard that would enable filetype independent fields to live and move with your data.

That said, this gives you no security benefit at all. It's basically the same thing we have now, but done correctly.


Well, actually magic(5) is used by the file(1) command to identify a file. So when you type `file foo.jar`, it can tell you "hey, this is a jar file". This isn't something that other applications do, and exec and friends do not look at it to try to determine what kind of file something is or how to execute it.

#! is special - and not because of magic(5) - if a file starts with `#!`, exec and friends will treat it special. Anything else will be attempted to be executed as a binary.

Now, one _could_ go and install binfmt_misc and set it up so that files with a jar's magic number will be executed a certain way (`java -jar ...`), but this is neither out of the box nor something that all Unix derivatives do.

Notably, binfmt_misc will also work with extensions instead of magic numbers.
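For the curious, an extension-based binfmt_misc rule looks roughly like this (run as root, with binfmt_misc mounted in the usual place; /usr/local/bin/run-jar is a hypothetical wrapper script that calls java -jar on its first argument):

    echo ':jarwrap:E::jar::/usr/local/bin/run-jar:' > /proc/sys/fs/binfmt_misc/register

After that, chmod +x foo.jar lets you execute ./foo.jar directly through the wrapper.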


Doesn’t that get incredibly complicated when dealing with non-binary data? How many bytes would you have to read to determine an html file, a markdown file, an Apache config file, etc. etc.


One time in the mid to late 2000s I had a PC motherboard with a BIOS configuration or hardware problem; the boot process started falling back: hdd, CD, fdd, netboot... Finally it brought up the message "ROM BASIC NOT FOUND". I wonder how long THAT error string has been in the AMI BIOS and how long since anyone has seen it!


This bug even haunts Azure Functions: https://twitter.com/troyhunt/status/1059254759293542400?s=21


So... can I still write to LPT on my Win10 machine and have my printer print something?


Yes, if it's a parallel port printer.

Or, share your modern printer, then:

NET USE LPT1 \\server\shared_printer


PRN would be the default printer; LPTn would be the various ports. It will work if you have a printer connected to LPT1, for example. LPT on its own isn't reserved.


This reminds me of the C$ trick for networked Microsoft computers.

Full read access on the entire volume!


Those were the days. You could use Windows Explorer to browse the entire HD of half the computers in your neighborhood. Boy did we find some gems...


What do you mean those were the days? I use this pretty much every day at work. c$ etc are default shares that appear if you turn on File and Printer sharing. The NTFS ACLs default to read if you are an authenticated user (RW for local admins) and the share ACL is probably local admins only (domain admins get that by default). So, \\machine_name\c$\tmp works exactly as I need it to on say Windows server 2016.


File sharing defaults to disabled nowadays, and it will only accept users authenticated on the same domain as the computer. It used to default to enabled, and granted read access to anonymous users.


Original Twitter link for those who want to avoid the broken garbage that is threadreader. https://twitter.com/Foone/status/1058676834940776450

archive.org link: https://web.archive.org/web/20181104213003/https:/twitter.co...


BTW, here's my tweet about this...

https://twitter.com/jwiechers/status/1059039735929282560

And here's my personal tweet: https://twitter.com/CrankyLinuxUser/status/10558609139858063...

I credit HN user "ageitgey" for this.


I wrote code in 1993 to deal with this in a DOS-based document-management system, PC DOCS. If users of the system created files with these names, it had to change them to some tokenised version that would then magically re-appear as the original name when requested. I don't remember the exact algorithm, but I remember it was a lot more complicated than I thought it would be when I got the assignment.


It's not a mistake! Compatibility is a feature, and quite a difficult to maintain one at that.


So using a VM running Linux I can create an 'aux.txt' on a mounted share and write to it. In Windows I cannot open/rename/delete the file. Not through Explorer and not through the command line.

This seems just wrong, and also dangerous, in my eyes.


Keyboard Error. Press F1 to Continue!


I remember that. Every single time I booted.


For those not familiar with CP/M and Gary Kildall's pioneering role in PC history, this Youtube video is educational:

https://www.youtube.com/watch?v=OVqBokd3l2E


I'm still affected by this. We checked our build of the Linux kernel into our version control server. A couple of legacy drivers have files named aux.h, which now screws up any Windows user that tries to check out that branch.



This looks like pure legacy code and not actually a bug...


Well, strictly speaking, not a bug. It's by design. :)


The more I see of these things, the more I see reason to have partitions, or real drives, with every OS from DOS 5 to the present.


I don’t see how you can “fix” it without breaking a ton of stuff. I guess they could do a compatibility shim.


Explanation? I guess it has something to do with the league of special file names like CON NUL and PRN?


The tweet is the start of a long explanation, btw. The poster makes a few corrections at the end, but the story is worry reading.


I’d worry too, trying to read a blog post spread out across 100 tweets.



Yea, it's not really a bug so much as just a very old legacy limitation. Microsoft has ripped out a lot of NT/XP compatibility in Win 7/8/10. I wonder if these special names are still in a lot of use, or if it's just low priority (or it's in a part of the code base where it's low tech debt and no one cares).


Exactly. I read this thread this morning and found it a bit frustrating that it was being called a bug. This isn't a bug; it's a documented feature that still exists for backwards compatibility. You may not like it, or may not agree that it should still exist, but it was intentionally designed and has been intentionally kept around. It's like complaining that you can't make a file name with / in it under Linux (or perhaps making a file with / in its name on a system that allows it, then saying Linux has a bug when it can't open it).


This feature has long outlived its usefulness, and should've been deprecated decades ago. Windows can run programs in "compatibility mode", so there is really no reason to keep these special files around for most programs. Maybe run CMD.COM in compatibility mode by default, so that batch files still work like 40 years ago, but other than that there is no reason to have that feature/limitation imposed on the graphical file Explorer, or pretty much any other software.

Not being able to use / in filenames under Linux is slightly different because the / has an actual legitimate widespread use with no real alternative.


Yes and no. In Linux you can use a Unicode character that looks like a slash. Candidates are U+2044, U+2215, U+29F8, U+FF0F, and U+2571.

However, a similar trick would probably work in Windows as well. Does anybody care to try? :)


Unicode tricks work on Windows too, it's just the exact \ / etc. characters that are reserved.


A comment elsewhere on this page claims that COM² and COM³ are also forbidden in Windows, due to Unicode mapping...


Also, I've just tried and on Mac OS I can create a folder with "/" in the name no problem.


The path separator on HFS is :, not /


o_O TIL.


I imagine popular ones like NUL and CON are still in use; I have machines running daily batch files containing those.

Using long UNC paths or writing to the FAT/NTFS filesystem through mechanisms other than Win32 allows you to bypass the restriction.

If you really want to be nasty to someone: echo hello > \\.\C:\Con.fess.I.know.what.you.did


I recommend reading the submission, it may answer your question.


The author of the tweet does a fantastic job explaining the bug. He wrote a bunch of follow-up tweets in reply to his original tweet.


And he spends way too long with his crappy version of storytelling before he gets to the point


He also explains that he wrote that at 5am from a hospital. From my point of view, he was just venting his frustrations while making a point about how old the error's origin is.

It was not "storytelling"; he was not writing it with the main goal of "getting to the point" so you could benefit. Calling it "crappy" sounds unfair and mean to me.


It is crappy in the sense that it is yet another person repeating the same tired trope of highlighting just how supposedly egregious some issue is in context of arbitrary events that happen to share the timeline. I am sick of this sort of temporally-oriented futurism as it betrays the writer's inexperience in actually shipping code and I am sick of amateurs being taken seriously in this field.

Yeah, its an old issue, obviously nobody cared enough to fix it until now, STFU...

edit- and finally, its twitter. stick to the character limit, or put it in a blog post.

(and maybe this should be my last beer :)


I hadn't realized that CP/M dated from 1974, that's older than the Altair!


>Babe Ruth's home run record was about to fall.

1961?


Images aren't loading.


RIP @foone's inbox


> This idea was brought into CP/M by Gary Kiddal in 1974.

> Gary Kiddal

....I just died a little inside


Yeah, if you read to the bottom I apologize for this.

I made this rant at 5am after being in the hospital for 8 hours, I could barely see the screen. It's amazing the thread doesn't have even more typos


Why?


Perhaps because of the misspelled name. His name was Gary Kildall. Btw, I can highly recommend some of the old Computer Chronicles episodes with him as co-host. They are available on archive.org.



Not a bug, a feature!


TLDR: the AUX filename is reserved.

Apparently this post is just a series of copy-pasted tweets, which makes it quite difficult to understand.


Does anyone else get a message saying "you are rate limited try again later" on Twitter? I'm not on a VPN.


This happens to me almost every time I click a link to Twitter on chrome for Android even if I haven't been to Twitter in the last month. No VPNs or adblocks.

Refresh a few times.


Yup happened to me when using Firefox Focus with tracking disabled & no cookies.

Using Chrome didn't trigger that error.


Same here with Firefox + ublock.


Yes, refreshing the page fixes the issue.


Yes, using Safari on ios with adblocking software. Can’t get it to show even with a refresh.



What is the error? Clickbait title and the error buried in the wall of text...



That doesn't say what the error is.



