Hacker News

Controversial opinion:

Unix and C are the two greatest technical debts of all time with respect to computing. We are just now beginning to pay off some of those debts with respect to permissions, isolation, hardware access, instability, memory corruption, and remote code execution.

Many of our best practices are more reflective of the mainframes before Unix than Unix.



POSIX is a standard that offers some amount of backwards compatibility. It's odd to see this as just "technical debt" and entirely ignore what it has accomplished or get anywhere near the question of whether it has been worth it or not.

I think, uncontroversially, it clearly has. Particularly, if you look at what the competing "state of the art" has been, I think POSIX has been a major win.

If you're willing to forgo the cross-platform compatibility it brings you, and write your own implementations, you're suddenly presented with all kinds of nice options for managing signals. For example, signalfd(2) is sweet, and there's no reason to think that some version of it couldn't be standardized by POSIX in the future.

Some people see Engineering as a church, where purity is the goal, I see it as a tool, where useful compromise is the goal.


I don't think this is all that controversial. Unix and Unix-like systems are basically an incremental series of good-enough hacks glued together. From a coherent-architecture perspective it's all quite terrible compared to, say, VMS or its spiritual successors like WinNT, but those developed in somewhat parallel worlds or are contemporaries of Unix.


If Unix is technical debt, then I'd hate to imagine what horrible word would describe its competitors.


Windows? Every other general purpose OS we have today can trace its lineage pretty directly to UNIX, and they have not strayed that far from their roots. Even Windows is still of the same era.

The sheer size of the undertaking has meant that we really haven't seen fundamental changes in OS architecture, despite a radically different computing environment. If that is not technical debt, I don't know what is.


My experience working with Windows is that it is saddled with more technical debt than Posix is. Take filesystem semantics: you can't safely put your auxiliary functions in aux.c, because most programs will treat that name as a device rather than an ordinary file. And the locking mechanisms on Windows are crazy, especially when you're doing something more complicated, like opening an Excel spreadsheet on a CIFS volume. Those are just a couple of illustrations. There are a few things that Windows does better than Unix, but there are a lot of things that Unix does better.

The Macintosh basically collapsed under its own technical debt, and the only way Apple got out of it was buying a Posix-like system and writing a compatibility layer for it (and this Posix-like system became OS X). The kernel, xnu, is purportedly a microkernel, but some of the most useful parts of it are the BSD systems that were Frankensteined in.


Rewriting history here: the Macintosh was not aimed at being primarily a network and I/O device. Lots of ordinary hardware that most people have never heard of has filled that specialized role very well, at every stage of tech history.


Who’s rewriting history? I don’t understand your comment.

The Macintosh had a lot of technical debt prior to the Mac OS X switch. This has nothing to do with being "primarily a network and I/O device"; for what it's worth, it was common to see networked Macs in the 1990s. Ethernet was standard early on, and before that you could use something like PhoneNET.

If you take a system that is designed to work within 128K of RAM and an 8 MHz processor, you make a lot of design decisions that just aren't appropriate for, say, a system with 256 MB of RAM and a 1 GHz processor. That's roughly the span of the classic Mac OS, in hardware terms. The original Mac operating system would only run one program at a time (not counting desk accessories), and it made sense to give that program unrestricted access to memory.

After that, how would you introduce protected memory, without breaking userland? That’s a big part of the technical debt that I’m talking about. There were several attempts to introduce protected memory to the Macintosh—A/UX, MkLinux, Copland, Taligent, and Rhapsody. Rhapsody is the one that managed to stick around.


All legacy is technical debt. If Unix and C are the greatest technical debt, it's because of their widespread use and influence.


if you'd prefer to use one of the contemporary alternative systems that unix and c outcompeted, well, some of them do survive, and in many cases you can run them on emulators

mit's its for the pdp-10 (runs in opensimh), ibm's mvs for the s/370 (runs in hercules), smalltalk-80, cp/m-80 with turbo pascal (since you wouldn't want to use bds c), openvms with bliss or pascal, f-83 or other forths, gw-basic — there are lots of options available, and some of them are even free software

there are also more recent alternatives (menuet, templeos with holy-c, reactos, symbian, oberon, risc os, symphonyos, xen, genode, sel4)

nowadays you can run any of these in emulation, or even faster on a softcore in an fpga

i think that if you try some of these alternative systems you will come to understand that although most of these systems were superior to unix and c in some way, overall unix and c were part of the solution, not part of the problem

of course, we know a lot of things now that we didn't know 50 years ago when c was born, and one of c's original designers demonstrated how he would redesign c knowing what he knows now; the result was golang



