Windows: A software engineering odyssey (2000) (usenix.org)
180 points by xo5vik on July 14, 2023 | 101 comments



What's crazy about this is that it sounds like the people writing the code haven't got the slightest hope of even compiling it, let alone running it. They're just writing code, crossing their fingers, and committing it.

It's a wonder the thing ever worked at all.


There's a slide saying few developers could build a whole instance of Win2000, but when working at this scale you rarely build everything from scratch locally. You work on your module and the build system builds everything nightly. I don't read this as devs throwing completely unchecked code into source control.


I was a dev on the Visual C++ team at this time (1990s). Part of making largish compiler changes would involve throwing a test compiler at all of WinNT and seeing what broke in the build process. It was challenging to do so, to say the least. Just getting the whole mess set up was a struggle for me.

Of course, things got better as time went on thanks to process improvement. I started in 1991, and I remember driving over to the NT team's building with a large (for the time) hard drive to grab a physical copy of the source tree. This was before NT was first released - when you tried running your build and went to shut it down, you had to watch the activity LED on the drive flash a few times to be sure the cache had synced to disk and powering down was safe. Fast forward a few years, and building all of WinNT was more routine, to the point it was just another component built by the automated VC++ checkin procedure (we called it submitting code to The Gauntlet), along with Excel and other Office components.

I might be misremembering if NT was part of Gauntlet, but it was definitely something we could and would build as desired.


Certainly building everything from scratch is time consuming, but why should it be difficult? Windows 2000 was developed before I became a programmer, but not too many years later (2003) I was a teenager running Gentoo, and while it was time consuming to build everything from scratch, it was easy and reliable. It was also quite easy to switch out or hack on individual parts. In particular I remember fiddling endlessly with the kernel.

Windows 2000 was certainly complex, but was it really substantially more complex than a full Linux distribution (including compilers, desktop environment, office suite, etc)? Why was it so difficult to build from scratch?


I would think/guess you weren’t fully building from scratch, but from intermediate snapshots. If Linus pushed a commit, would your full build, minutes later, use it?

Also, the first beta of NT 5.0 shipped in September 1997 and it was renamed to Windows 2000 in October 1998 (https://en.wikipedia.org/wiki/Windows_2000#History), and 1998 vs 2003 is about 3 times 1½ years, so, at the time, about three performance doublings.

Chances are your hardware was at least 5 times as powerful as what the early Windows 2000 engineers used.


> I would think/guess you weren’t fully building from scratch, but from intermediate snapshots. If Linus pushed a commit, would your full build, minutes later, use it?

Yes, I would frequently download e.g. new kernel release tarballs (this was before Git) and slot them into the system. This didn't require recompiling anything but the kernel. Actually installing Stage 1 Gentoo required compiling everything (although it was on top of a compiler binary for bootstrapping).

My hardware was cheap 2001 era consumer hardware, so I doubt it was that much faster than what the Windows developers had available. Besides, my question is more about why Windows (or anything else) would be difficult to compile, rather than just time-consuming. The nice thing about recompiling an entire operating system from scratch is that there are no external dependencies, because you're building everything! (Except the bootstrapping compiler, but for the Windows operating system there's no reason to rebuild that.)


Back in 2005 or so I was building FreeBSD, X11 and KDE from scratch without issue. But those OSes are better engineered, as there are no managers getting in the way.


Imagine you're a teenager building gentoo, except every single component of the system is a random development snapshot instead of a tested release.


Linux distributions put together the products of a huge and very loosely organised community, and I never heard it was anywhere near as horrible as these (old) horror stories of Windows development. Is it perhaps the case that the (enforced) loose coupling of open source development prevents the aggregation of build complexity and intermingled dependencies?


> Is it perhaps the case that the (enforced) loose coupling of open source development prevents the aggregation of build complexity and intermingled dependencies?

I think that's a huge help. I think that it's also helpful that the system is intended to be compiled by a bunch of other people and the code is released with that in mind.


I have no information here, but I wouldn’t be surprised if in 2003 Windows had an order of magnitude more code in it than a Linux distribution. Don’t forget, back then, Windows NT had support for these runtimes: Win32, Win16, DOS, and POSIX. Also, Windows had drivers for a lot more hardware than Linux did. Add in all the management stuff (Active Directory came about around this time), and I think it almost had to have substantially more code than Linux.


Why would those runtimes be particularly large? I'd expect the Win32 runtime to be large, but the others should be tiny by late 90s standards. Also, I remember that Windows didn't embed that many drivers itself, but that it had a stable ABI that hardware vendors could target (I remember driver CDs). Further, while the Linux kernel is certainly smaller than all of Windows, my point of comparison was compiling an entire desktop Linux system, which also included a full set of compilers, a desktop environment like KDE, office suites, multiple browsers, etc. Someone working on Windows 2000 shouldn't have to compile all that stuff.

My main question is also not so much why compilation of Windows should be time consuming, but why it should be difficult.


> Also, Windows had drivers for a lot more hardware than Linux did.

Did it? It had more third party drivers, but did NT itself build in that many?


NT had a HAL and that's it, with VERY few drivers on its own.


The biggest reason is that you need to pull in code you don't have access to (e.g. the DRM media modules, or PatchGuard, which is restricted to only the engineers who work on it), as well as the sheer size of all of the localization content needed to build every language of the OS (remember i18n content isn't just text, it's images that contain text too).

There's no reason to build everything from scratch; it's like working on a patch to e.g. KWrite and deciding to build the kernel in order to do it. If you're working on a Windows component, you install a daily build so it's close to your equivalent of main/master, write your code, and overwrite the binaries on your test machine / test VM. Your development loop is pretty fast in practice.


In the early 2000s, there was no integration between the Windows build environment (Razzle) and Visual Studio. It was common for a dev to keep their own .sln file on their dev box so they could work on their component, while the actual build happened against an I-cannot-even-remember-what file that the Razzle environment used.


MarkL's latest move - leaving Google, after changes in the VR/AR org at Google - https://arstechnica.com/gadgets/2023/07/googles-head-of-ar-s...


Can't imagine how these Windows engineers feel about the enshittification of their baby. So much time invested into it - must be hard to see it taking its current trajectory.


The kernel is better than ever. As for the shell, it's always had junk on it for commercial reasons - remember MSN and baking IE into the OS, or how the system requirements were kept ridiculously low for marketing reasons and to placate OEMs? And how OEMs were allowed to add loads of junk software on computers wearing their "Designed for Windows" stickers?

I'm pretty sure XP came with a digital app store right at the top of the redesigned Start menu, Windows Media Player had ten music stores integrated all selling DRM'd WMA files...

The system requirements especially, must have created a lot of work right down into the kernel team.


NT 3.x, 4.0 and 5.0 (2000), which the presentation is about, were at least quite free from commercial junk. It was only when consumer and professional Windows merged into one with XP that the enshittification started -- but that being said, it's bad on an entirely different level now. It's easy to look back longingly at the days of XP and 7 now that Windows is so overwhelmingly user hostile.

Like, rhetorical dude from 2002, you're mad that Windows XP will not let you remove Internet Explorer easily and that it requires online or phone activation to work? Let me tell you about Windows 11...


> Remember MSN and baking IE into the OS

Yet another thing where Microsoft was ahead of the curve; nowadays we get Electron (aka Chrome) all over the place.

People even buy laptops where the browser turned into the OS!


Some of us wipe that and install Linux because the hardware is cheap.


And just like with Windows, contribute to the sales number.


> Windows Media Player had ten music stores integrated all selling DRM'd WMA files...

Apologies. This was me. Pretty much all 10. Most of them were just white-labels of the same code. Believe me, I hated doing it. MS didn't want to do the right thing and vertically-integrate everything like Apple was doing, which was the better solution as then you owned the entire user experience from end-to-end.

We know how that story ended.


Yes I could tell they were white label. One supermarket chain in the UK was offering everything from pet insurance and funeral insurance to phone plans and digital music. The way it was listed in the interface among so many other random brands was hilarious to me.


Well, they got slapped by the EU for forcibly including IE, and there was never an app store like that over XP's lifetime to my knowledge? Also, you could quite easily replace the whole shell. Sadly not anymore: if you do that now, stuff you need access to will just stop working, because everything "Metro" needs the Explorer shell actively running in the background.



Windows Catalog (which I totally don't remember at all... was it in all the builds or only some regions??) was apparently "the showcase for products that are designed to make optimum use of Windows XP. Go digital, and discover a great new computing experience!" [1][2] More of a catalog listing software and hardware that got the Designed for XP logo, rather than a place to buy those things, let alone an app store experience. At that time, it was much more common to purchase applications inside an actual store.

[1] https://web.archive.org/web/20011113052730/http://www.micros...

[2] https://web.archive.org/web/20020409123842/http://www.micros...


> Can't imagine how these Windows engineers feel about the enshittification of their baby.

I think people are forgetting how unreliable Windows was in its early days. If you were doing anything complex (programming, editing pictures, ...) Windows couldn't run for 2 hours without crashing every so often.

If anything, the core of the Windows operating system has only gotten better with time. Yes, they keep adding fluff to the desktop environment, but that doesn't take away from the progress they have made in stabilizing their core operating system.


> Windows couldn't run for 2 hours without crashing every so often.

I'm really curious which version of Windows you mean?

Because I don't remember this on Win 3.11, Win XP, Win 95, etc. Of course there were sometimes HW/driver issues, and sometimes a program would corrupt system files, etc. But crashing every so often... that's strange.


Windows 95 and 98 loved to crash because a fly at the other corner of the room moved an atom which hurt the OS' feelings momentarily.

Ran out of memory? BAM. An official driver from Intel or nVidia or ATI did something slightly off-time because silicon decided to wait a clock for something, BAM. You had a professional capture card with high bandwidth for that time, and you wanted to capture a video, BAM.

A blue screen caused by a spinlock access violation, a Windows-bundled driver, or any high-end software was common back in those days.


Oh yeah it did. The number of times you'd be writing something in Word, figure it had been 15 minutes and you should save, only to move the mouse to the save icon and have the entire system just stop. No error, just a total lock.

This is why you can tell if people grew up in that era: you have muscle memory of Ctrl+S every few minutes burned into your soul.


> you have muscle memory of Ctrl+S every few minutes burned into your soul.

I'm honestly afraid that my child will be born with it and do that pinky-middle-finger combo in the air, like playing air guitar, on day 1.


I didn't suffer from anything quite this severe, but Windows PCs definitely needed a restart once a day for sure. The weird one for me was installing new software requiring a restart. Some applications would insist on it, presenting you with a modal saying something like "Your computer will now be restarted, save your work and click the Ok button".


Windows NT 3.51 was a big milestone in terms of stability. Windows NT 4 got even better from a stability point of view. I can't remember the last time I got a Windows blue screen of death; it used to be common to see one, but on a decreasing basis as new versions of Windows came out.


> I think people are forgetting how unreliable Windows was in its early days.

Not to mention being as easy to attack as a house made of butter.


And some people still use WinXP, Win 7, Win 8 etc...


At least you had a decent host firewall by then. Pre-XP SP2 you'd get malware just by hooking up to the internet.


Later Windowses are full blown spyware with ads. If I had to use Windows for some reason it would be 7.


It's not difficult to de-shittify 10/11. There's a tool that automatically does it called ShutUp10.

It's arguably a bit shit from a business perspective, but has no real impact on power users' day to day.


You need to be careful with ShutUp10/11. You can easily break automated security rules or system APIs if you carelessly enable all of those settings. You can't just apply these patches and forget about them, sometimes you need to undo your work to get updates installed or to resolve problems (for example, the "disable internet check API" privacy setting can cause some applications to display "you're not connected to the WiFi" popups).

It's also an uphill battle against the ever encroaching Microsoft Edge bullshit; every time you remove part of the bullshit, Microsoft comes out with an update that adds more.

If you're stuck with Windows I'd consider the safe defaults for ShutUp1x as essential but you do need to read the notes for every setting you enable, which may require some Googling so you understand what you're doing.


Does it work without an Enterprise install?


Yes, I'd even go as far as to say it's designed for use with Home and Pro; it's setting the toggles Enterprise/LTSC users will most likely already be managing through their centralized group management software.


Was Windows NT ever that unstable? I know 95, 98, and ME were all notorious for stability issues, but was under the impression that NT was better.


Windows 98 was unstable because its drivers and usermode software components still came from a time where they controlled every aspect of the computer.

NT solved that problem by not allowing a lot of that nonsense, breaking code in the process. This incompatibility is the reason new Windows 95/98 PCs are produced to this day (https://nixsys.com/legacy-computers/windows-95-computers, https://nixsys.com/legacy-computers/windows-98-computers): back in the Win9x days, programming your computer like you would program a microcontroller today was quite a reasonable thing to do for certain applications, like controlling production lines.

There is the uptime overflow bug to deal with, but a monthly reboot is easier than reverse engineering and porting control software.


> Windows couldn't run for 2 hours without crashing every so often.

That sounds like Windows 3.1, where applications could easily take down the operating system. Windows 9x wasn't quite as bad. If I recall correctly, properly written applications could not take down the operating system, though drivers certainly could. That said, there were certainly ways for developers to break the rules, since there was little (if any) enforcement, so some applications did take down the operating system. With the Windows NT series, there was sufficient isolation, and enforcement of that isolation, that it was very reliable. Drivers could be an issue, as could bugs in Microsoft's code, but that was nothing in comparison to contemporary versions of 3.1 and 9x.

On the whole, I don't think it is reasonable to blame Microsoft for the unreliability of their operating system. There were certainly design issues that resulted in it being unreliable, especially when running third-party code. On the other hand, the operating system was basically an evolution of a product line that started on the 8088 with very limited memory (I'm speaking of PC-DOS here), and a great degree of compatibility had to be maintained. Keep in mind, the computer industry did not work at the same pace: features had to wait until processors incorporated them, processor adoption had to wait for manufacturers to build them into their systems, and then for consumers to buy those systems in sufficient numbers. For example: the 286 was introduced in early 1982, but the IBM PC AT did not come out for another 2.5 years. Microsoft was also limited by the hardware their customers owned, even when it supported particular features. Life is much harder when you cannot throw memory at the problem because people had 2 or 4 or 8 MB of RAM.

On the other hand, Windows NT was a completely different product. There was much less concern over compatibility. There was much more intent to throw away baggage to create a modern (for the time) operating system. It did not crash every two hours.


I'm honestly not sure if that was Windows' fault. In that time period we also had:

1. budget devices from OEMs that cut corners wherever they could

2. the capacitor plague, with merchants unable to guarantee good capacitors from any source


A core piece of enshittification though is that a product becomes Less Useful over time - Reddit and Twitter lose third party apps, Apple is making its desktop OS more "secure" (read: convoluted and does less stuff) every release. The things you Liked about it go away.

Windows, despite its legitimately annoying monetization strategy, has absolutely done the opposite - it does More Stuff every release, and the stuff it did before largely still works.


> Apple is making its desktop OS more "secure" (read: convoluted and does less stuff) every release.

Do you have some examples of how macOS is doing less / capable of less today, than say 1 or 2 or 3 releases ago?


Adding ads is clearly doing "More Stuff" yet becoming "Less Useful". That is the most obvious counter example imo.

Another would be fragmenting the settings between the control panel and the new settings menu. It does more stuff (you have twice as many settings apps!) but it is less useful, because you are less likely to find the setting you are looking for.

Another example of doing more and becoming less useful is requiring a TPM for Windows 11. My security should be my decision. Not letting one install Windows obviously makes Windows less useful than if it could be installed.

In general (ie, not a Windows specific issue) ever growing hardware requirements makes the software less useful over time, as it can only run on a smaller and smaller subset of hardware. As software gets better, it should run on more hardware than it did before. Not less. Windows will simply not run on hardware from 15-20 years ago that is otherwise fully functional. That means it is less useful than it was before.


> is that a product becomes Less Useful over time

> it does More Stuff every release

I wouldn't say "doing more" is better. I'd be happy if it did a lot less. I don't care about most of the big new features in windows. I'd be a lot happier if they'd rework their old antiquated stuff that keeps causing problems (drivers, registry, focus handling, etc.).

> Apple is making its desktop OS more "secure" (read: convoluted and does less stuff)

What is Apple really making less useful with time? For me, I really like many of the new features. The only reason I stick to Windows is that gaming is still horrible on macOS.


> I wouldn't say "doing more" is better. I'd be happy if it did a lot less. I don't care about most of the big new features in windows.

There are two levels of features here (maybe three) that we should consider:

- There are consumer facing features, the stuff pushed by marketing departments since it will grab the attention of customers and (perhaps) make it more desirable for customers. A lot of this is targeted towards specific groups of users, while being less useful to others, and goes out of fashion very quickly (assuming it ever went into fashion).

- There is the infrastructure. This stuff is harder to sell users on because relatively few people care about the details. It includes everything from exposing functionality to developers to improving performance and security. Sometimes it turns out this functionality is only of interest to a limited subset of developers. Sometimes it is retrospectively seen as a problem that needs to be addressed. Either way, it is very difficult to alter or remove because other software depends upon it. (Heck, even internal software depends upon it. While they may have the means to update internal software, that doesn't mean they have the resources to.)

I'm tempted to split the second category into two, but the net effect is the same so we may as well keep it simple.

As for the Apple thing, well, Apple has a more focused market. Choosing Apple also tends to be a conscious decision, while choosing Windows tends to be more a default position. For those reasons, I have no doubt that macOS is a better OS in the eyes of its users than Windows is in the eyes of its users.


Here's a few examples - software requires signing and App Store accounts (literally called "Gatekeeper") which causes problems with OSS, you can no longer write kernel drivers on Arm64, many apps now require an avalanche of Vista-style "Do you want to allow this Thing Y/n" prompts, many other apps have to walk users through clicking into security settings to e.g. enable screensharing or productivity tools that use a11y hooks, the list goes on and on. Software on the Apple platform is becoming Less Useful over time and the list of things you can Do keeps getting smaller.


That's a really interesting take. Thanks!


A few years ago when I was there, there were still remnants of that same Dave Cutler NT culture, especially around the folks who worked on minkernel/.

I agree there are definitely shitty chunks of Windows, but there are still some very solid foundations there to this day.


For example, the Windows kernel's write watch feature is useful for writing a GC. Linux lacked (and as far as I know, still lacks) this feature, so Microsoft had to rewrite the .NET runtime.

https://devblogs.microsoft.com/dotnet/working-through-things...


> Linux lacked (and as far as I know, still lacks) this feature

AFAIK it is possible to do this on Linux (either through mprotect + SIGSEGV or userfaultfd), but it's slow. There's also a work-in-progress patch that the Collabora folks (probably on a contract from Valve if I had to guess, as some games do use this) are working on, which will add a new fast way of doing this.
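For the curious, a minimal sketch of the mprotect + SIGSEGV approach on Linux (illustrative only, not the actual .NET or kernel code): write-protect the heap, and let the fault handler record which page got dirtied before re-enabling writes so the faulting store can retry.

  #include <signal.h>
  #include <stdio.h>
  #include <sys/mman.h>
  #include <unistd.h>

  #define PAGES 16

  static char *heap;
  static long  pagesz;
  static int   dirty[PAGES];              /* page index -> written since last reset? */

  static void on_fault(int sig, siginfo_t *si, void *ctx) {
      (void)sig; (void)ctx;
      long p = ((char *)si->si_addr - heap) / pagesz;
      if (p < 0 || p >= PAGES) _exit(1);  /* a genuine segfault, not our write barrier */
      dirty[p] = 1;
      /* re-enable writes so the faulting instruction can retry and succeed */
      mprotect(heap + p * pagesz, pagesz, PROT_READ | PROT_WRITE);
  }

  int main(void) {
      pagesz = sysconf(_SC_PAGESIZE);
      heap = mmap(NULL, PAGES * pagesz, PROT_READ | PROT_WRITE,
                  MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);

      struct sigaction sa = {0};
      sa.sa_flags = SA_SIGINFO;
      sa.sa_sigaction = on_fault;
      sigaction(SIGSEGV, &sa, NULL);

      mprotect(heap, PAGES * pagesz, PROT_READ);   /* "reset the write watch" */

      heap[3 * pagesz] = 1;                        /* first write to a page faults once */
      heap[7 * pagesz] = 1;

      for (int i = 0; i < PAGES; i++)
          if (dirty[i]) printf("page %d is dirty\n", i);
  }

Each page only faults on its first write after a reset, but that trap-plus-mprotect round trip is why this is so much slower than having the kernel do the bookkeeping for you, which is roughly what MEM_WRITE_WATCH / GetWriteWatch gives you on Windows.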


userfaultfd was ~2015 (https://lwn.net/Articles/636226/), so Microsoft couldn't use it at the time. It could be better, but yes, Linux is making progress.

Even in current form, userfaultfd is useful for GC, so Linux's lack of the feature in 2015 was unfortunate. Android 13 added a new GC taking advantage of userfaultfd: https://android-developers.googleblog.com/2022/08/android-13....

> A new garbage collector based on the Linux kernel feature userfaultfd is coming to ART on Android 13... The new garbage collector... leading to as much as ~10% reduction in compiled code size.


A lot of the original designs of Windows were elegant in theory but never simplified and unified. COM objects are a good example; they were just a pain to deal with from the languages of the time (and arguably still are).


Delphi and VB made COM relatively painless.


That’s indicative of poor bindings for those other languages.

The nice thing about COM is that it provides a well-defined, C-based ABI for calling object-oriented interfaces; if your language has a FFI that supports C, then you can call COM objects.

I'm a big believer that COM bindings for any language with automatic memory management should not expose refcounts directly to the programmer (at least in 90% of cases). It's not far-fetched: the original, pre-.NET Visual Basic did a very good job of this.
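To make that concrete, here is roughly what a COM interface looks like from plain C. This is a simplified sketch that mirrors IUnknown's shape rather than the real Windows headers (the real thing uses HRESULT/ULONG, a GUID type, and the stdcall convention on 32-bit x86):

  #include <stdint.h>

  typedef struct IUnknownVtbl IUnknownVtbl;
  typedef struct { const IUnknownVtbl *lpVtbl; } IUnknown;  /* object starts with a vtable pointer */

  struct IUnknownVtbl {
      /* every method takes the object ("this") as its first argument */
      int32_t  (*QueryInterface)(IUnknown *self, const void *riid, void **out);
      uint32_t (*AddRef)(IUnknown *self);    /* bump the reference count */
      uint32_t (*Release)(IUnknown *self);   /* drop it; the object frees itself at zero */
  };

  static void use(IUnknown *obj) {
      obj->lpVtbl->AddRef(obj);              /* a COM call is just an indirect call */
      /* ... use whatever interface you queried for ... */
      obj->lpVtbl->Release(obj);
  }

Any language whose FFI can lay out that struct-of-function-pointers and make an indirect call through it can consume (or implement) COM objects, which is why good bindings are possible even when a particular language's bindings happen to be bad.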


COM was still a pain even when using MS languages. The WinRT-era COM is a bit better, but the WinNT-era version was just needlessly elaborate. Everything was done with giant 128-bit GUIDs that were impossible to recognize or memorize, so they added a naming layer on top, but it wasn't used consistently. COM servers had to be registered before they could be used, and that was the _only_ way to use them, so you couldn't just export some objects from a DLL and load them directly from that DLL (or at least it wasn't well documented, my memory is getting fuzzy about this stuff). Then you had the obscure thread-safety approach in which instantiating some kinds of COM objects would create an invisible window that you were just expected to know about, and then you had to write the boilerplate to pump the message loop if you weren't doing the COM call on the UI thread. Etc.

The goals were good, and other platforms haven't really tried to achieve them (KParts and Bonobo were the closest equivalents but both were abandoned a long time ago, DBUS isn't quite the same thing). But COM was fiddly.


I switched from 98 to ME for a week before 2k, and it (2k) was rock solid for years.



Source Depot was still a thing in Office as of last year when they finally transitioned to Git. Windows started transitioning to Git 5 years ago.

Also, they have about 10k people working on Windows (and devices) and about 10k people working on ads nowadays (that paints a good story of priorities).

Source: 2nd hand from MS friends


Source Depot's largest disadvantage vs. Git was how hard it was to share changes with others.

But I miss the ability to only pull down a portion of a monorepo, and the ability to remap where folders are at, or to pull down a single folder into multiple locations.

So much bullshit with monorepos in Git land exists because Git doesn't support things that Source Depot (and Perforce, I presume) supported decades ago.

As an aside for those who don't know what I am talking about: when pulling down a repo in Source Depot you can specify which directories to pull down locally, and you can also remap directories to a different path. This is super useful for header files, or any other sort of shared dependency. Instead of making the build system get all funky and fancy, the source control system handled putting files into expected locations.

So imagine a large monorepo for a company: you can have some shared CSS styles that always end up in every project's `styles` folder, or what have you.

Or the repo keeps all shared styles in a single place, and you can then import them into your project, but instead of build system bullshit you just go to your mappings and tell it to pull the proper files and put them into a sub-directory of your project (see the sketch at the end of this comment).

It is a really damn nice feature to have. (That also got misused a ton...)
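For anyone who hasn't seen this style of tooling: the mapping lives in your client/workspace spec. I won't swear to Source Depot's exact syntax, but in Perforce (which SD descends from) it looks roughly like the below, with made-up depot paths for illustration. Only the mapped paths sync locally, and the shared folder lands wherever each project's view says it should, with no build-system glue.

  Client: my-workspace
  Root:   C:\src
  View:
      //depot/excel/...          //my-workspace/excel/...
      //depot/shared/styles/...  //my-workspace/excel/styles/shared/...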


> But I miss the ability to only pull down a portion of a monorepo, and the ability to remap where folders are at, or to pull down a single folder into multiple locations.

We have all that with git in Microsoft though. We don't check out the entire office monorepo - only the parts relevant to what you're working on (Excel in my case).

Also, sharing stuff in SourceDepot wasn't the bad part (you get links to changelists and those open in a desktop program). The bad part was the branching model, commits, no real/good CI (we had a commit queue), etc. SourceDepot was just overall a bad SCM for us.
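For those curious what a partial checkout looks like with stock git (illustrative commands only; the repo URL is made up, and the internal tooling layers more on top of this):

  git clone --filter=blob:none --sparse https://example.com/office
  cd office
  git sparse-checkout set excel shared/headers

The partial clone skips blob downloads until they're needed, and sparse-checkout limits which directories actually appear in your working tree.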


Ehhh, can't say I'm a fan of folder remapping. It gets a little too auto-magical, and since all tools access the file system directly, different users can see different things. That's just begging for bugs and "works on my machine".

I’m moderately confident the correct path is monorepo + centralization + virtual filesystem. Not every tool plays nice with VFS but at this point most do.

The D in DVCS is almost entirely a waste. Source control systems should, imho, trivially support petabytes of history and terabyte scale clones.


I haven't seen a virtual filesystem overlaid on top of a monorepo before, do you have any examples of what that looks like?

Semi-related: I try to use symlink shenanigans in git to share common files between monorepo projects w/o using 3rd party tooling; my latest attempt worked on Windows, but the symlink fell apart when the repo was pulled down on a Mac!

Not the OS that I thought would have issues. :)


> I haven't seen a virtual filesystem overlaid on top of a monorepo before, do you have any examples of what that looks like?

https://github.com/facebook/sapling


The Distributed part is definitely not a waste. Some people have different workflows from yours and depend on it heavily.


Tell me more. When is the D relevant? When is it super critical?

Working offline is distinct from distributed. In practice almost all development is de facto centralized on GitHub (or another central host).

> In software development, distributed version control (also known as distributed revision control) is a form of version control in which the complete codebase, including its full history, is mirrored on every developer's computer.

That’s a super mega anti-feature to me. Git still sucks for large binary files which is an insane limitation.


Yes, most development is de-facto centralized on GitHub/GitLab/SourceHut/BitBucket/etc.

The Linux kernel is not, and Git was designed by the creator of the Linux kernel to serve the needs of the Linux kernel developer community. And I am certain they are not the only ones with that workflow.


We agree.

Git makes fundamental design choices that are (maybe possibly but not necessarily) good for the Linux kernel. They’re objectively bad and problematic for the majority of dev work. Which makes it really fucking shitty that the industry standardized on a tool that is bad for standard workflows.


Perforce's support for that is not great these days. They don't support it in streams, and honestly if you're not using streams in p4 these days you're doing it wrong.


Am still in Microsoft, was part of that transition and can confirm it.

We did move stuff we could to other git repos inside Microsoft.

SourceDepot is still running for some stuff and is still awful but git is working great.

> Also, they have about 10k people working on Windows (and devices) and about 10k people working on ads nowadays (that paints a good story of priorities).

I'm not sure I'm privy to all information, but looking at the org chart this part is false. The ads org is much, much smaller than E+D.


If you have a moment, a tangential question. A little while ago I read a very interesting comment responding to some general "why is this and that broken [in Windows]", that said

> Windows is only $5m a year

https://news.ycombinator.com/item?id=34934946

I was very impressed to determine that was only $416k/mo. Since I read that I've been like "that can't be right." (There's certainly no qualification of scope to work with.) That's roughly 15-20 (~$250k-$333k) senior developer salaries.

I'm very curious how and where Windows practically fits into the pie chart nowadays, mostly just from the perspective of a passively curious person who likes to file away watermarks and yardsticks :)

There's probably some perfectly externally-facing info out there under a rock I'm not sure where to look for...


I'm just a software engineer working for Microsoft and I'm on HN since I worked on 3 startups (one YC funded) and do a ton of open source (in my free time, Microsoft funds none of it).

I enjoy working for Microsoft (mostly) but I have _no idea_ what our sales look like.


Appreciate the reply.


I would take that with a serious grain of salt. The kernel is also in the Xbox, the headset, the Azure deployments in some form, the server OS, etc. There are easily three other divisions neck deep in the funding of the kernel.


Very good point. It did seem a bit disjointed.


> I'm not sure I'm privy to all information, but looking at the org chart this part is false. The ads org is much, much smaller than E+D.

Look at Panos' org and compare to the WebXT org (both under E+D).


Zachary's Show Stopper covers this ground in a very readable manner, and gives a lot of useful detail on Dave Cutler's design ideas and his DEC VAX/VMS background.


Not sure why you were downvoted, but that book was excellent. Shows that people 30 years ago were having precisely the same problems on big software projects we have today.


> Shows that people 30 years ago were having precisely the same problems on big software projects we have today.

An interesting question is: why are we still having the same problems today? why haven't they been solved yet?


Because our industry has barely left its baby shoes behind. We are not even teenagers.

And why our brilliance does not make a difference: it is a human problem :)


This has certainly whetted my appetite for a deeper dive - what's a good book on the history of Windows (or even Microsoft), from the earliest days up until at least Windows 95?


G. Pascal Zachary, Show Stopper!: The Breakneck Race to Create Windows NT and the Next Generation at Microsoft

https://www.amazon.de/Show-Stopper-Cloth-BREAKNECK-GENERATIO...


There are details here no one else would know. Why would there be only a single mention of Xenix, when Microsoft bet the farm on it, only to vapor it for 4+ years ('79 to '84) and outsource it less than 3 years later ('87)?


Vapor? Xenix existed, and worked well, on the TRS-80 Model 16. Many small businesses basically ran on a Model 16 with Xenix back in the day.


Remember it well. I wrote a custom graphics card device driver for some mech eng code that would never fit in the 640K limit imposed by DOS.


Another great source is Show Stopper, a book about creating Windows NT. I read it years ago; it was the first time I saw the term "dogfooding" to describe using what you are building as you are building it.

https://www.goodreads.com/en/book/show/1416925


Plug for Dave Plummer's youtube channel: https://www.youtube.com/@DavesGarage

Dave was an engineer on NT and creator of Task Manager and zip folders. Lots of interesting stories and anecdotes from that period on that channel.


The built-in zip folders that Dave wrote aren't something to be proud of these days; they are super slow and basic.


Slide 19 doesn’t load for me.


Slide 19 content (from .ppt):

  Serialized Development

    The model from NT 3.1 -> Windows 2000
    All developers on team check-in to a single main line branch
    Master build lab synchs to main branch and builds and releases from that branch
    Checked in defect affects everyone waiting for results
Diagram:

  Developer
  Developer
  Developer
  Developer
    -> Single MainBranch -> Product Build Machine -> Product Release Server


> Source control system (Windows 2000)

> Branch capability sorely needed, tree copies used as substitutes, so merging is a nightmare

Ouch, and it looks like they only had version control with branching for the last nine months of development.


Branching is an overstatement. SourceDepot didn't _really_ do branching.

You had patches you'd float with "changelists" on top of enlistments. Each part of the org large enough (for example Excel or Word) gets a "branch", and it gets "forward integrated" and "reverse integrated" to the main "branch".

From your perspective, with the tool used to submit stuff (usubmit usually), you just push to the same branch as everyone else in your org, and if your code breaks things it gets "backed out" by an automatic process.

Using git now is so much nicer.


SD was basically P4 with syntactic sugar. You likely didn't have permissions to create branches.

Windows sources 20 years ago used to have a ridiculously complicated branching strategy, driven by middle managers and made worse by having actual devs sneak around the edges to do "buddy builds" of changes with some godawful batch file that I heard may have originated with RaymondC (who was exactly the kind of person to make ridiculous MSFT somehow bearable for the rest of us). It was Conway's Law, somehow twisted and applied to version control. With permissions SNAFUs.

I still see companies today trying to map their org chart into their branching strategy and just shake my head . . . and run away.


Did branching in SD work the Git way (where creating a branch is instantaneous and requires zero resources), or did it work the TFVC/old-style-VCS way, where a branch required creating a copy of all files and took many minutes (probably hours at NT scale)? If you are stuck with this kind of system (why would you be these days?), long lived team branches would be the only sustainable strategy.


> You likely didn't have permissions to create branches

I did, it was just a very long and complicated process. You had to set up a lot of tooling for it and you were strongly discouraged from doing it, so in my year on SourceDepot (in Office) I saw this option being used exactly once.



