One of my favorite posts was [1], specifically this bit:
> Note: I actually don't know who ordered first. I just made up that part of the story to make it funnier. (Note that I make up parts of other stories, too. I'm not a historian. I'm a storyteller.)
It really made me rethink the value everyone places on the accuracy (or lack thereof) of storytelling.
Ah! This is probably the third time I'm reading this, and it's still just as entertaining as it always was. :-)
> "You waited a few DAYS?" I interrupted, a tremor tinging my voice. "And you couldn't send email this whole time?"
> "We could send email. Just not more than--"
> "--500 miles, yes," I finished for him, "I got that. But why didn't you call earlier?"
> "Well, we hadn't collected enough data to be sure of what was going on until just now." Right. This is the chairman of statistics.
I hadn't read the FAQ before though! Thanks for posting these. This guy definitely has a lot of patience for putting up with all those FAQs about inconsistencies in his story...
I've been in the "movie industry" for decades and it still astounds me that people come out of the theater thinking what they saw, in most cases, is reality--even when the disclaimer "based on" is clearly shown ahead of time.
I can't exactly fault them as "based on" seems to be more of a marketing gimmick used to imply some form of authenticity rather than an actual disclaimer. For the longest time I fell for the fallacy that "based on" implied some semblance of factual accuracy.
> Something that is inspired by something is also based on that thing.
Inspire -- give rise to -- stimulate, motivate, spawn, engender.
Based on -- rooted in -- built upon, founded on/upon, anchored in something.
-
A hamburger inspired the design of the Millennium Falcon, but the ship is not based on hamburgers. A comic book inspired the movie 300, which is only loosely based on the comic and on historical fact.
A story that is based on specific events must build upon those events. A story that is inspired by something has no presumption of content, only that some facet of it motivated the storyteller somehow.
And directly conflicting definitions are not "nuance". You are missing the qualifier needed to make the point you think you're making (i.e. "the design of X is rooted in..."). You've also wilfully read past both examples in order to stretch this point.
Inspiration cannot be understood as "rooted in". That is not how words or sentences work. The Millennium Falcon is not rooted in hamburgers. "Imagine" is not rooted in Yoko Ono.
I used to love reading his pieces. Then Google killed Reader, I never found a replacement I liked, and blogs just died for me. Not 'rational', I know, but I'm sure I'm not alone.
Off-Topic: As with most websites, I had to visit an "About" subpage to get a description of what it does. Why?! Shouldn't this information be the first thing you read when you open the main site? It's almost the same nonsense as with so-called "landing pages": if a company's main site doesn't qualify as a landing page, why do they create a separate subpage instead of just improving their main site? And some websites don't have any newcomer-friendly subpage at all - then it's Wikipedia to the rescue.
Standalone apps might be a good replacement for the Google Reader app, but not really for Google Reader as a service. For example: I have multiple devices with different OSes; some of them had native apps, some didn't, but no matter where and how I used Reader, I was always in the same state I had left off in (read items and tags).
I've personally settled on Feedly now, but even after all these years it's not up to par with Google Reader.
> Not 'rational', I know, but I'm sure I'm not alone.
But in fact what you did was rational in its own way. You enjoyed blogs in the beginning, as long as they were easily accessible by an aggregator. When the user friendly aggregator went away, you decided the blogs didn't offer enough good content to justify the increased management and use of your time.
I've installed CommaFeed on my server (it was easy), wrote some little hack-patches to make it behave exactly the way I want it to, and it works very well :)
> While surprisingly informal, there are limits to how far the programmers go. There are no derogatory references to Microsoft or Windows themselves. Bill Gates is never mentioned. There are no racist or homophobic slurs.
You know, just because someone uses swear words doesn't mean they're racist or homophobic...
Also, I don't find those comments terrible. If anything, maybe a bit childish.
I'm an ex-MS dev and these brought a wave of nostalgia. The hypothesis that they were injected by maligners isn't needed. We wrote comments like that all the time, although after this leak they were somewhat discouraged.
There was a simple scanning system called "Policheck" that was introduced in most source trees; it checked against a list of keywords. It didn't prevent checkins IIRC, but it flagged files for attention (it's been a while, and I never personally ran into a policheck hit).
The list included the obvious "dirty" words, and milder words like "idiot" and "moron". Surprising inclusions were references to the DOJ, the Microsoft consent decree with the DOJ, and Janet Reno.
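From the outside, the core of such a scanner is easy to imagine. Here's a hypothetical sketch in Python (the word list, file filter, and report format are all invented, not Policheck's actual behavior):

```python
# Hypothetical sketch of a Policheck-style scanner: it flags files for
# attention but does not block checkins. Details here are invented.
import re
import sys
from pathlib import Path

FLAGGED_WORDS = ["idiot", "moron"]  # plus the obvious dirty words, etc.
PATTERN = re.compile("|".join(re.escape(w) for w in FLAGGED_WORDS),
                     re.IGNORECASE)

def scan_tree(root):
    for path in Path(root).rglob("*"):
        if path.suffix not in {".c", ".h", ".cpp", ".cxx"}:
            continue
        lines = path.read_text(errors="ignore").splitlines()
        for lineno, line in enumerate(lines, start=1):
            if PATTERN.search(line):
                yield path, lineno, line.strip()

if __name__ == "__main__":
    for path, lineno, line in scan_tree(sys.argv[1]):
        # Flag for attention only; a checkin would still go through.
        print(f"{path}:{lineno}: {line}")
```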
They have a certain aesthetic. Beyond their form, they're hard to miss and hard to ignore (articles get written about them...), which, in this case, is functional.
> Oh, and be careful of your messages in debug/trace statements
This is one of the easiest ways I have found to distinguish mature coders from immature coders (green, rookies, whatever): read their test data entry.
New coders write all kinds of cute things, little data-entry stories (test1, test2, test3whyisnothingworking), and frequently put in jokes or witty things... Longbeards have all, at one point or another, had a customer or manager see something inappropriate, and they know it's a bad idea. More importantly, they've worked on too many systems to spend time thinking up cute ways to manually test whether their CRUD operations are working.
In the Apple Newton source code, a cow-orker of mine wrote a comment about Sharp Electronics, our manufacturing partner. The comment was a snarky inclusion of their corporate motto, "From Sharp minds come sharp products" in a place where Sharp engineers had done something unusually boneheaded that my friend had had to work around with an ugly hack.
Then someone shared the source code in question with Sharp (as sample code or something), and they found the comment.
Many, many apologies later, $mike was able to laugh about it, though I don't believe he ever felt the comment was unjustified. This may have been the time that he decided that his unofficial job title in Newton was Official Scapegoat. "Feel free to blame me for anything that goes wrong. I won't do anything about it, but at least you can blame someone."
Teams cranking 80+ hours a week on a project for a year or two are just gonna vent. It's perfectly healthy.
My co-worker was dealing with some credit card operations on a kiosk solution. During a key demo a quick test of the system triggered some fraud detection and the system threw up a huge "Got you, sucka!!" message.
The new owners shared a look between themselves...
Copious embarrassment was the main thing. I believe the client was using it as much as possible to leverage some free consultancy, saying some of the comments obviously written by staff were unprofessional and violated good faith agreements and so on. Today I proceed with the thinking that anything I write could be read by my mother.
> Also, I don't find those comments terrible. If anything, maybe a bit childish.
I think it indicates that the people writing them felt comfortable enough with their environment/team that they could talk freely.
Stuff like:
"if this alert fires you should start an incident and page teams X Y and Z before you page $alert_author."
"if this alert fires and we're not switching between active and standby data-centers then you should update your resume and begin the process of failing over to the standby datacenter"
"this is a terrible hack because this is a terrible legacy system and the worse it is the more incentive the remaining (internal) customers have to move to the new system and the author's time is better spend building good code for the current system"
"this process is specifically designed to be a massive pain in the ass in order to encourage you to do $stuff ahead of time and not rush it through in the last minute"
> people writing them felt comfortable enough with their environment/team that they could talk freely.
There's talking freely and there's writing childish or unprofessional comments in the source code. It doesn't take much to keep code comments clean and you can still vent to your coworkers in other ways.
The example comments you wrote all seem OK to me though, except the second one, which just comes across as childish or dickish. You can say the same thing and be blunt about it without suggesting someone should be fired for doing what you don't want them to do.
"just because someone uses swear words doesn't mean they're racist or homophobic"
Where does it ever imply that? It says racist and homophobic slurs, not just swears like "fuck" or whatever. And yes, if one uses bigoted slurs, then one is a bigot.
I remember downloading the source code in 2004, and promptly receiving a letter from my ISP stating that I should delete it immediately. I was impressed.
> private\inet\wininet\urlcache\filemgr.cxx:
> // ACHTUNG!!! this is a special hack for IBM antivirus software
Why should Microsoft produce a hack for an IBM antivirus product? That IBM software might be used by a few tens of thousands of people for a few years, whereas Win2k impacted billions of people and will continue to do so.
At some point the programmer, the PM, and maybe even the bug tracker behind this code will likely have moved on, and a newer generation of contributors will have to waste time understanding the bug and deciding whether to keep supporting it. Possibly without ever knowing whether IBM has fixed their own bug, or whether anyone still uses that antivirus software.
Because Microsoft have long had a reputation (especially at the time) for maintaining compatibility over a very long period for applications. They did horrendous engineering to make DOS applications still work through Windows 95 and onwards because their customers still needed it. This was considered an important thing for the product.
Windows is still one of the most backwards-compatible systems you're ever likely to encounter. Sure, not everything works anymore, but you can still install a lot of software written for Windows 95 and expect it to still function. That's hugely impressive, given that we're not even running the same kernel anymore.
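Much of that rests on per-application compatibility shims. Stripped down, a lot of those hacks reduce to a detect-and-preserve pattern. Here's a hypothetical sketch (the app name and the preserved quirk are invented, and the real Windows app-compat machinery is far more involved):

```python
# Hypothetical sketch of the detect-and-preserve pattern behind app
# compatibility hacks. The product name and quirk are invented examples.

LEGACY_APPS = {"legacyav.exe"}  # hypothetical app known to rely on a bug

def lookup_cache_entry(caller_exe, key, cache):
    entry = cache.get(key)
    if entry is None and caller_exe.lower() in LEGACY_APPS:
        # The old (buggy) implementation returned an empty record on a
        # miss, and this app depends on that; preserve it just for them.
        return b""
    if entry is None:
        raise KeyError(key)  # the fixed behavior everyone else gets
    return entry
```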
>This was considered an important thing for the product.
Backwards compatibility used to be the norm for most stuff, didn't it? In the 80s/90s. You didn't need to get a new computer every few years. Now if there's a problem, the solution is easy - "Buy a new computer/phone/iPad."
That's true. You can supposedly run a program assembled or compiled in the '60s on a z14 that you buy today.
Seeing as most of IBM's largest customers are banks and other places that have no interest in rewriting battle tested software, it's probably the reason that their mainframe department is still alive.
This is one place where the Web continues to reign supreme. Did we re-implement an OS on top of our OS? Sure. But it's been a tremendous success for accessibility.
I mean that the web has been a tremendous success story for long-term backwards compatibility and widespread access. We've lost a few pieces of technology, like Flash, but overall you can take any modern browser, go visit ages-old websites, and find them fully functioning.
Browsers are on every platform, and for the most part different platforms require no different code for your web application to work. Mobile vs Desktop, Android vs Windows, you get it all.
Ohh right, thank you for explaining. Hmm, but e.g. a lot of new websites don't work on any available browser on my computer. The problem is that so many new sites need a new computer to work properly--as if they're only tested by people with new computers--not that new computers have a problem with old sites. Maybe I should, uh, buy a new computer. :-) But I figure there are as many people using computers as old as mine as there are using brand-new ones... or maybe not. But I've realized there are two kinds of backwards compatibility, thanks.
"For older applications that use a 16-bit stub to launch a 32-bit installation engine, 64-bit Windows recognizes specific 16-bit installer programs and substitutes a ported 32-bit version."
Yeah, that was the big break in compatibility. There are also many applications that were written for specific hardware, and many undocumented features (hacks) on 95/98 that modern Windows cannot handle.
> That's hugely impressive, given that we're not even running the same kernel anymore.
Unless you're doing shenanigans with ioctls to devices or direct access to kernel functions, the kernel does not really matter - everything goes through user32 and friends anyway, so as long as MS does a good job keeping this abstraction layer stable OS upgrades don't concern you much.
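As a trivial illustration (Windows only, naturally): the same user32 entry point that Win95-era programs called still answers today, e.g. from Python's ctypes:

```python
# Minimal illustration (Windows only): calling a user32 entry point that
# has kept the same contract across decades of kernel rewrites.
import ctypes

user32 = ctypes.windll.user32  # the stable abstraction layer in question
user32.MessageBoxW(None, "Hello from user32", "Stable API", 0)
```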
[...] as long as MS does a good job keeping this abstraction layer stable OS upgrades don't concern you much.
The problem is that you have to keep them stable to a much higher degree than you might think at first. Obviously you have to maintain the functionality as described in the API documentation, but that is far from enough. If the implementation has a bug, you cannot simply fix it, because there might be software either relying on the wrong behavior or having implemented workarounds that the fix would break. And no matter how often the API documentation states that this or that is not supported or is undefined behavior, there will be software doing it anyway, or relying on whatever the undefined behavior meant in the version they developed against.

Say you return a list of something and say nothing about the ordering in the documentation, but as an implementation detail it happens to come back in some specific order: wait just long enough and there will be a program critically depending on the ordering you chose. This is an endless nightmare, especially if your APIs have as many different consumers as the Windows APIs.
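A toy version of that ordering trap, with all names invented:

```python
# Toy illustration of an unpromised ordering becoming load-bearing: the
# documented contract of list_users() says nothing about order, but the
# implementation happens to preserve insertion order.
_users = {}

def add_user(name, uid):
    _users[name] = uid

def list_users():
    # Documented: "returns all user names". Undocumented: Python dicts
    # preserve insertion order, so callers see names in creation order.
    return list(_users)

add_user("alice", 1)
add_user("bob", 2)

# Years later, somewhere in a consumer:
oldest = list_users()[0]  # silently depends on the unpromised ordering
assert oldest == "alice"

# "Fix" list_users() to return sorted(_users) and this caller breaks,
# even though no documented behavior changed.
```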
This may answer your query. Microsoft takes (took?) user-friendliness as the most sacrosanct thing. Breaking a user's apps is the biggest anti-user thing to do. In hindsight, it has maybe turned out that taken to extremes this approach is not sustainable. But I agree with the basic logic.
Linux the kernel does something similar. I'm not sure about e.g. the layout of what various /proc files produce (though it wouldn't surprise me if that were cast in stone as well), but syscalls in particular are compatible waaay back.
glibc and a handful other libraries are also careful not to break binary compatibility, providing versioned symbols for old binaries.
Beyond that it tends to suck, with some libraries not even bothering with sonames.
I bet someone at IBM found a bug, traced it back to some quirk in Windows that was hard/impossible to work around and the ticket got the attention of an engineer who figured out how to fix it in less than an hour. They decided the change was benign so they did it, commented it for posterity, and moved on.
Microsoft actually has a reasonably close relationship with large companies that develop on their platform. E.g., their plugfests let devs come put their stuff onto the same machine and see how it all interoperates on new versions of Windows. I used to work semi-closely with someone doing Windows driver work. He knew a lot of quirks in the platform and could even escalate to the point of getting help from kernel engineers.
Windows 2000 is from a time when software distribution was still mostly in the form of physical media, and everything having an internet-enabled updater was not as common. The IBM software in the comment was probably written for a previous version of Windows, and if a new version of Windows wasn't compatible with that IBM software (as found in the box the user bought), it would present an obstacle to upgrading to the newer version of Windows for users of that IBM software. Users wouldn't be expected to upgrade every boxed software they had bought just because they upgraded Windows.
There’s also a psychological element. When a user upgrades Windows and some software breaks, they immediately think “the new version of Windows sucks!” and tell all their friends. They neither know nor care that it’s actually a bug in the software that didn’t manifest before.
Microsoft do this all the time, because big institutional customers don't upgrade otherwise. If this horrifies you wait until you hear about the Vista compatibility modes.
At least these guys commented; nowadays the paradigm "identifiers are comments" seems to prevail. Technically this might be good, but reading such code is an experience equivalent to eating unsalted fries.
On a scale from x to ListOfSinglePeopleWhoHasADogButNoCar, I'd say that "self-documenting" code tends to be quite a lot more so than some of the alternatives I've seen.
However, if someone claims to write self-documenting code, and by that thinks it's enough to have long variable names and skip comments, then I can understand the sentiment above coming into existence.
In "identifiers are comments" paradigm the goal isn't only to just skip comments, but also to structure the code in a way so that it's as clear as possible what it does and why. This helps much more than a long comment block hidden somewhere in a big ball of spaghetti code.
Comments aren't a panacea either, as they take effort to write in a helpful way, often only occurs once (so you'd have to find the place where the purpose of x is defined) and of course, often are some level of wrong/outdated in relation to the purpose/function of the code.
Sometimes, it's not possible to write the code in a way that is clear, this could be for compatibility reasons or performance reasons. That is when comments really come into their light.
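A contrived sketch of that last case, where the obvious spelling would be wrong and only a comment can stop the next reader from "fixing" it (the legacy protocol here is invented):

```python
# Contrived sketch: the comment carries the *why* that no identifier can.

def utf16_units(text):
    # NOTE: counts UTF-16 code units, not characters. The (invented)
    # legacy protocol measures string length the way Windows does, so an
    # astral-plane character must count as two; plain len(text) would
    # count it as one and desynchronize us from the other side.
    return len(text.encode("utf-16-le")) // 2

assert utf16_units("hi") == 2
assert utf16_units("\U0001F600") == 2  # one emoji, two UTF-16 units
```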
I think it is reassuring when I see code where programmers comment that they aren't doing something the best way. Even when they say it is a "hack" or because something else is moronic. It shows to me that the engineers know that not everything in a complex system is beautiful. It is when an engineer is convinced that every line of a very complex system is completely elegant that I worry dragons are hiding.
> It's noticeable that a lot of the "hacks" refer to individual applications... a Borland compiler came to depend on an existing bug, so their fix worked to preserve some of the bug's behaviour
Hyrum's law states: "With a sufficient number of users of an API, it does not matter what you promise in the contract, all observable behaviors of your system will be depended on by somebody."
I went to the nautical-themed torrent site, typed "windows 2000 source", and it returned one result uploaded in 2006 with 6 seeders, so I assume yes (but I haven't downloaded, so I don't know if it's real or a fake).
Yes it still is. I found both the NT 4 leak and the Windows 2000 leak while I was doing some research into the Windows kernel.
Lots of cool stuff in both; the NT 4 leak had a lot more of the cool kernel stuff, like a DEC Alpha HAL.
I don’t recall the links and won’t share them. I also notified Microsoft about them when I found them, so they may have been taken down already. But it’s the internet so realistically they’re still up somewhere.
You can also find the Windows Research Kernel, which is slightly sanitized and leaves out NTFS, the HAL, PnP, and power management, but is slightly newer, coming from the Win2k3 era.
People at BetaArchive managed to compile a working NT out of the leaked code, and some dude even built a working NT/2000 hybrid out of those sources with some Wine/ReactOS thrown in (OpenNT 4.5).
My understanding is that the leaks were not complete source code. The leaks came from a third-party company who was given a significant chunk (but not all) of the Windows source for development reasons (possibly driver development?). You can't compile partial code without weird hackery.
[1] https://blogs.msdn.microsoft.com/oldnewthing/