“Unix and Windows are very, very similar and they should be bashed in unison” (c2.com)
33 points by vezzy-fnord on Aug 29, 2015 | 80 comments



I'm having a hard time dating this rant (despite its historical importance, I can't seem to convince WikiWikiWeb to cough up a page history), but I would guess that it's from ~1996. If so, it is a time capsule that represents the stagnant OS thinking in the mid-1990s: the belief was that Unix was dead (!), that Windows was going to be everywhere (!!) -- or that both of these systems were essentially useless and what we _actually_ needed was radical surgery like object-oriented operating systems or microkernels or exokernels or orthogonal persistence (or all of the above!).

If it needs to be said, this time was a terrible atmosphere in which to aspire to OS kernel development, a suffering that I expanded on (at far too much length!) in a recent BSD Now.[1] I do not miss the mid-1990s at all, and even now, I find that this ill-informed screed fills me more with resurrected anger than it does any sense of vindication or nostalgia...

[1] http://www.bsdnow.tv/episodes/2015_08_19-ubuntu_slaughters_k...


If you read the "WimpIsBroken" article linked from the same rant, you will see it reference (negatively, of course) the Windows 8 design, so it can't be all that old, or at the very least it has been updated recently.

I generally dismiss a lot of the things the author is saying for 3 reasons:

1. The author lists a lot of problems without actually explaining why they are in fact problems. For instance, he states that neither Windows nor Unix can teach you how to program. Why would my OS need to teach me how to program? That's like complaining that my car doesn't teach me how to rebuild the transmission, or that my refrigerator doesn't teach me how to generate refrigerant. That's just an obvious example that jumped out at me; there are many others.

2. The author points out a lot of problems without pointing out any solutions. People who point out problems but provide no solutions are generally useless. If you've ever worked on a project with a person like that, you know what I am talking about.

3. c2.com lacks any sense of design or taste. The author also lacks any ability to communicate his point of view in a logical and consistent manner. I wouldn't trust the author to design a Ski Free game, let alone an operating system. In reality, if tasked with writing a Ski Free game, the author would instead return a 30-page paper on the problems with modern game design.


"c2.com lacks any sense of design or taste. The author also lacks any ability to communicate his point of view in a logical and consistent manner. I wouldn't trust the author to design a Ski Free game, let alone an operating system."

The content on c2.com is not the work of any particular author - it's a wiki whose articles are collaboratively created and edited. Each paragraph may be the work of one or more anonymous or named writers.

In fact, c2.com happens to be the world's very first wiki, designed by Ward Cunningham. (The wiki's content has been frozen and is no longer editable. I seem to remember that Ward is working on a next-generation replacement for it.)


c2.com lacks any sense of design or taste.

It has the very best design! http://motherfuckingwebsite.com/


Personally I agree, but shiny pretty shit seems to sell.


There is a difference between shiny shit, minimal design, and no design. A Ford F-150 Raptor is shiny shit, a Jeep Wrangler or an Icon FJ are pretty minimal design, and a subframe and 4 wheels is just a subframe and 4 wheels. Sure, the latter can get you from point A to point B, but come on, it's not a proper car. I would argue that the design of c2.com, and the wonderful diatribe of the fellow with a few too many "fucks" in his opus, lacks design entirely rather than keeping it minimal; there is a difference. The lazy lack of design, which is similar to lazy over-designing, should not be confused with a proper balance of minimal design.


I think I agree with you. The c2 website could be less painful to read. However, the diatribe is near perfect. It fits the content very well.


I'm watching Bryan's interview; it's awesome.

For some history, the Jeff that he keeps talking about is a former student of mine at Stanford. I wasn't a big deal at Stanford; I was a TA for a Xerox PARC guy, and when he retired Stanford asked me if I'd teach the class.

This was the papers class in operating systems. If you don't know what "papers class" means: there are no textbooks; this is where you go to learn at the cutting edge. Pretty advanced class, and Stanford let me teach it. To this day I wonder what they were thinking. That said, I'm proud of how I taught that class. The students learned a lot.

Jeff Bonwick was a student majoring in statistics. Somehow he ended up in that OS class. I could tell he was sharp, and since I was working at Sun in the kernel group, I recruited him as hard as I could. He asked me why I would want him; his exact words were "I can't program in C". I told him that I could teach him how to program, but I can't teach people how to be smart. He was very, very smart. I told him that when he came to Sun he would go far higher than I ever did, and I was right, he did.

Bryan is cut from the same cloth. He's an OS geek. There aren't many of those geeks around these days. Go him.


240? Wow, indeed. I am surprised an "outsider" would teach that.


Yup, CS240, still have the class notes.


If anyone cares, I learned a lot teaching that class. Happy to share.


We care! Please do.


I don't have a blog, can I just do a new post here?

Mostly what I learned was about people, it wasn't about OS.


seconded. please share.


what we _actually_ needed was radical surgery like object-oriented operating systems or microkernels or exokernels or orthogonal persistence (or all of the above!)

Do we not?

I watched your interview, but I don't recall you saying anything about research ideas. You even made a jab against Spring, which rubbed me the wrong way.

An orthogonally persistent microkernel-based operating system, even another Unix, would be great. Which is why I'm looking at MINIX 3 and Hurd with interest. The latter has its anachronisms, but it's still a step forward.


I think we need a functional userland (like nix) and a microkernel (like minix), at least.

Sadly we will get a userland full of container-ized apps (via systemd) and a monolithic kernel (still linux). Worse is better.


Well, radical surgery was needed -- it just wasn't at the level of the operating system interface, but rather much deeper in the implementation. For example, ZFS post-dates this rant and certainly represents a radical rethink of filesystems -- but it did this without breaking extant applications. That extant applications were (and are) a constraint on the problem is something that's entirely unappreciated by mid-1990s OS research, which was hellbent on throwing out existing abstractions rather than building upon them.

Finally, in terms of Spring: based on the fact that you are implicitly defending it, I would guess that you never had to run it. As one of the very few outside of Sun inflicted with that flaming garbage barge, I can tell you that it was not at all a pleasurable experience -- and that whatever novelty it represented was more than offset by its horrifically poor implementation. If I ever harbored any illusions about Spring, they certainly didn't survive my first encounter with a machine running it...


I imagine it wasn't that pleasant being one of Jenner's cowpox inoculees, either. But that was an important step towards eradicating smallpox nine generations later. And the electricity I'm writing this with is produced by a steam engine descended from those that blanketed London in deadly smog for ten generations. Lots of beneficial innovations go through an early stage where they're counterproductive or unpleasant.


We've had the microkernel versus monolithic kernel debate _over_ and _over_. I'm not against research but I've yet to see a convincing argument that microkernels are better. If you decouple the logical bits of the kernel from each other all you'll get is greater impedance and message passing overhead. And for what? A "clean" architecture? Sometimes perfect is the enemy of the good, and I think that applies in this case. We've also got the tools to make monolithic kernel development scale now. Basically, I'll believe it when I see it.


So they can be better. QNX is the one example I know of that is better. It's also the only microkernel that is actually micro. When I was looking at it, the whole kernel, all of it, fit in a 4K instruction cache.

Most "micro"kernels aren't micro at all, they are bloated crapware. QNX was not like that, they actually had a micro kernel and it worked. I ran 4 developers on an 80286 (no VM) and it worked just fine. Far far better than the VAX 11/780 that had more memory and more CPU power. The VAX was running 4.1 BSD.


We've had the microkernel versus monolithic kernel debate _over_ and _over_. I'm not against research but I've yet to see a convincing argument that microkernels are better.

Sure, if all you do is read Linus Torvalds ranting.

If you decouple the logical bits of the kernel from each other all you'll get is greater impedance and message passing overhead.

Debunked many times, as early as 1992 in fact: http://citeseerx.ist.psu.edu/viewdoc/summary?doi=10.1.1.51.1...

Microkernels have come a long way since Mach.

And for what? A "clean" architecture?

You clearly have no idea what the advantages are.

Basically, I'll believe it when I see it.

You're seeing it in the billions of L4 deployments worldwide in wireless modem chipsets. You're seeing it in QNX being everywhere.

But then again, you might not see it, because you are willfully ignorant.


How many of those L4 deployments end up running 90% of their code in the "Linux" or "posix" process?


That might still be an argument in favor of microkernels if the Linux process can't crash the machine or cause it to miss hard-real-time deadlines. Or if you can use it to confine malicious code in the Linux process.


Fair, but when people talk about "real systems" built with micro kernels, as often as not it could also describe Linux on Xen with a watchdog restart. It's not especially compelling evidence that micro kernels are practical. Want to convince me that micro kernels are the bomb? Tell me about a system where no single service exceeds 40% of the code/runtime.


"Unikernels" is the misnomer being promoted by MirageOS for applications compiled to run on Xen, which they are currently doing on Amazon EC2 hosts alongside Linux instances. There are EC2 hosts that host a number of instances, including "unikernels" and Linux instances, and although I don't have details, the pricing on the smaller burstable instance types makes me think that some of the EC2 hosts are hosting actually quite a large number of instances, in which no instance exceeds 5% of the code or runtime. Is that what you're talking about?


No. What I mean is a micro kernel pacemaker where 40% of the code is the beep beep service, 30% is the meep meep service, and 30% is the bop bop service. As opposed to 99% of the code in the Linux service and 1% of the code in the realtime watchdog restart service.

Calling EC2 a micro kernel success story also seems like quite the definitional stretch.


Incidentally, I'd love to hear bcantrill rant on unikernels sometime, if he hasn't already in a talk or interview somewhere. I imagine he's not a fan, since they can never have the observability or performance of OS containers running on bare metal.


> you are willfully ignorant

No personal attacks, please.


Just so you know: I'm not going to debate you further, because I refuse to debate someone who calls me willfully ignorant.


What would you call someone who has strong opinions about something they refuse to research?


"Unconvinced". The assumption in your question is that anyone who has done any research can only possibly agree with you. News flash: it is possible for informed people to disagree.


The assumption in your question is that anyone who has done any research can only possibly agree with you.

I'm not assuming that. "Any amount of research" could mean no research, or close to none. People who have done very little research shouldn't opine. People who have done lots of research are more likely to be correct.

News flash: it is possible for informed people to disagree.

Well of course. Someone who's barely informed can disagree with someone who is far more informed. That would be two informed people disagreeing.

If someone has done adequate research and I'm correct, they must agree with me; otherwise the amount of research wasn't adequate or I must be wrong. I can avoid being wrong by avoiding having strong opinions on things I don't understand or know much about.


As an old guy, I'm rather unimpressed with this sort of bashing. Yeah, Ward made the first wiki. Kudos for that; he deserves it.

But if you are going to bash unix, Ritchie, plan 9, and windows all on one page, I'd like it if you had a concrete proposal for a better answer.

And saying unix and windows are the same, really? I've supported a fairly complex SCM on windows, linux, macos, all the other unices, for 18 years. I don't find unix to be the same as windows but maybe that's just me.


I'd like it if you had a concrete proposal for a better answer.

http://c2.com/cgi/wiki?BlueAbyss

Old spec: http://c2.com/cgi/wiki?BlueAbyssFramework

(By the way, it's Richard Kulisz who is saying all of this. Ward Cunningham has nothing to do with it.)


Thanks for setting me straight.

So after reading that stuff I'm still not impressed. Might be me; I worked on SunOS and we did "objects" for the VFS layer. It was like a class with all virtual methods. Worked well.

Full-on object-oriented stuff in an OS seems weird to me. But I'm not an OO person, I like really simple stuff, and SunOS made that work. I hacked vi so that when you tagged on VOP_OPEN it knew that there were ufs_open and nfs_open and tmpfs_open and you could walk all of them, but it was very simplistic. And as such it actually worked.
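To make that concrete, here is a minimal C sketch of the pattern (my own illustration, not actual SunOS source; the struct and function names are only suggestive): each vnode carries a table of function pointers, and a single VOP_OPEN call site dispatches to whichever filesystem implementation the vnode points at.

    /* Toy sketch of "a class with all virtual methods" in plain C. */
    #include <stdio.h>

    struct vnode;  /* forward declaration so the ops table can point at it */

    struct vnodeops {
        int (*vop_open)(struct vnode *vp, int flags);
        int (*vop_close)(struct vnode *vp);
    };

    struct vnode {
        const struct vnodeops *v_op;  /* the "vtable" */
        const char *v_fstype;
    };

    /* One implementation per filesystem; ufs shown, nfs/tmpfs would look alike. */
    static int ufs_open(struct vnode *vp, int flags) {
        printf("ufs_open on a %s vnode, flags=%d\n", vp->v_fstype, flags);
        return 0;
    }
    static int ufs_close(struct vnode *vp) { (void)vp; return 0; }

    static const struct vnodeops ufs_ops = { ufs_open, ufs_close };

    /* The single dispatch point the rest of the kernel would call. */
    #define VOP_OPEN(vp, flags) ((vp)->v_op->vop_open((vp), (flags)))

    int main(void) {
        struct vnode vp = { &ufs_ops, "ufs" };
        return VOP_OPEN(&vp, 0);
    }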

I like stuff that works, proven in the field, proven in the development space, we can ship this and can support it.

I may be well behind the times, but what I read didn't make me want to go work on it; it made me skeptical about it actually working in real life. I'd be happy if someone proved me wrong, it sounds cool. When is it shipping?


Yeah, but see, vfs doesn't count because... Uh, it uses macros, or something.

Hell, even the userland side is "OO". The same set of syscalls works on files, sockets (IPv4 and IPv6), pipes, etc. "Needs more OO" is a pretty nebulous complaint about an OS.
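For instance (a toy sketch of my own, not anything from the thread): the very same read() and write() calls work unchanged whether the descriptor names a regular file or a pipe.

    /* The same read()/write() syscalls against two different kinds of object. */
    #include <fcntl.h>
    #include <stdio.h>
    #include <unistd.h>

    int main(void) {
        char buf[8];

        /* A regular file... */
        int fd = open("/tmp/demo.txt", O_CREAT | O_RDWR | O_TRUNC, 0600);
        write(fd, "hello", 5);
        lseek(fd, 0, SEEK_SET);
        read(fd, buf, 5);
        close(fd);

        /* ...and a pipe answer to exactly the same calls. */
        int p[2];
        pipe(p);
        write(p[1], "hello", 5);
        read(p[0], buf, 5);
        close(p[0]);
        close(p[1]);

        puts("same syscalls, different objects");
        return 0;
    }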


Device drivers are the original OO in operating systems and they seem to work pretty well. Without any OO support in the programming language.


it's ironic that both NT and UNIX have roots in the mid-1970s and that both were influenced by many identical theoretical OS concepts and principles

Although most UNIX variants today support either the POSIX or X/OPEN standard, every UNIX vendor has tried to differentiate its offering with a proprietary interface, applications, and architecture.

http://windowsitpro.com/systems-management/nt-vsunix-one-sub... [1998]

AmigaOS is underused solely because it's superior.

http://c2.com/cgi/wiki?WimpIsBroken

My concrete proposal is to avoid using concrete.


I don't understand. How is Amiga not WIMP?


I can't help feeling that there is some sort of operating system uncertainty principle involved in purity vs practicality. The more you optimise for one, the more you lose the other.

A more contemporary version of that rant would no doubt be extolling a more FP-oriented OS instead of an OO one, though. Written by someone who drifted off while reading Lisp books too many times. They are in for a letdown when they find the stuff was "hacked together in Perl" instead.


I'm not at all convinced that Smalltalk-80 was the apotheosis of computing environments; both Windows and Unix are, cockroach-like, exceedingly good at staying alive. Neither is fun to use, but nobody's put selective pressure on "fun to use" since, oh, maybe the introduction of the MultiFinder.


Microsoft Bob.


An evolutionary dead-end if ever there was one.


Fair enough, everything is flawed. But what should we use instead? TempleOS?


>TempleOS?

I dunno, meddling in the affairs of the One True God seems like the sin of pride to me. I shudder to think of the implications of serving porn on a TempleOS server; that's asking for an IRL segmentation fault.


I like thinking about operating systems, and I am sympathetic to the notion that things could be better than UNIX, but there's some real gibberish in this article.

"Both have flat, useless WIMP GUIs;" ... this has nothing at all to do with UNIX - you don't need a GUI at all in UNIX (which is great) and there are MANY non-WIMP operating environments available - many of which are serious, sophisticated, current offerings.

"Neither has a programmable GUI." False. ion3, for instance, which is one of the many available non-WIMP environments I mentioned above, is completely programmable with lua. I am sure there are 100 other examples.

"Neither is versioned, both are irreversibly destructive." Hard to say exactly what is being pointed out here, but I think it's related to filesystems and going backwards ... and ZFS provides that just fine with snapshots. In fact, even UFS2 has filesystem snapshots. Available on FreeBSD, Solaris, and many others ... including Linux.

"Neither takes account of any modern research" ... again, previously mentioned ZFS ... that's as modern as it gets. netgraph in FreeBSD also comes to mind as something genuinely inventive, unique and modern.

"You can't learn programming from either Unix or Windows." Inasmuch as I would consider the UNIX shell as part of UNIX, you absolutely can learn to program. Further, inasmuch as DOS batch files are part of Windows, you can learn to program there as well. I certainly did.

When was this written?


In the '90s, don't know exactly when. I remember when this site was active :) Showing my age. Back when wikis were new and /. was the nerd forum of choice. Ah, those heady times.

But seriously, this predates ZFS for sure; ion3 I don't know about.


I am not sure when it was written, but it references the "WimpIsBroken" article on the same page. That article has been updated to also bash the Windows 8 design. So if it wasn't written recently, it must at least have been updated somewhat recently. So I am guessing the author still agrees with all of his points of view.


   What a quaint and flimsy ranty troll,
   Written 'ere the web got old,
   In those days nobody'd call you out,
   When you started spewing through your spout.


> You can't learn programming from either Unix or Windows. You can learn it from other applications and from tons of books, but you can't learn it from the OS.

I grew up in an era of Windows 3.1+, so I'm probably showing my youth when I ask: how can you learn to program from an OS? Is this akin to compiling your own kernel from scratch and being forced to tinker with the kernel modules to get Linux 2.x to run on your 2015 laptop...?


The ZX Spectrum 48K had a BASIC interpreter as its primary OS. When the computer was powered on, the first thing a user would see was a REPL (the 128K version introduced a GUI menu with a few shortcuts). Although users weren't required to, they were literally invited to learn some programming.

Not that this was "user-friendly" in the modern sense, but I also don't think anyone had issues with it.


To amplify that point: a lot of what were then often called home computers bootstrapped into a BASIC interpreter. To pick just a few: the Oric, the BBC Micro A/B, the Master, the Electron, the Commodore PET, the VIC-20, the TI 99/4A, and the QL.

The Jupiter Ace bootstrapped into Forth.

None of these were akin to learning to program with the first step being the giant leap from zero to compiling one's own kernel. That is not because they didn't have kernels. Several did, coming with actual operating system kernels that underlay the interpreter. QL SuperBASIC, for one example, was an applications program running on a multi-process, multi-tasking, single-user, handle-based-I/O operating system named QDOS.

Learning to program had as its first step the rather lower bar of a 2- or 3-line program that printed a string, or asked the user for input, or drew a coloured triangle on the screen...


I think that they are talking about discoverability. They are probably comparing it with Xerox Alto or similar.


The value of a particular approach to OS design depends upon what it's used for. This post seems to me to be making a number of assumptions in this regard and I found myself thinking "that doesn't apply to me" about much of what it states about UNIX.

Not a criticism per se, just stating that this definitely seems to be from a specific POV.

Having said that, I must say that a few claims are made that are head-scratchers to me, e.g. that UNIX is a "single-user system", and that an OS should be something that one can "learn programming" from.


This quote seems to sum it all up:

By your own admission, you "use" it, which makes you a "user", and as we all know, all users are clueless, therefore you are in no position to judge whether it is "usable". That's logic! -- Tweedledum


Also, doesn't xmonad (which I happen to use) count as a "programmable GUI"?


Not really. Oberon is an example of a programmable GUI, for one.


Interesting. I am a bit confused though as to how xmonad could not be considered programmable. I'm curious -- what does it lack in your opinion?


It's still just a WIMP interface. It might be a more flexible one since it's tiling, and it might support a larger number of configuration options than most.

An actual programmable GUI is one where there is no distinction between what is referenced at initialization and what is bound at runtime. You're actually live scripting the interface's own structures and moreover potentially concocting intricate programs out of the on-screen text that can be typed at any arbitrary offset, and is interpreted as a programming construct. Clicking on a piece of text can serve as an entry point or continuation for performing some form of computation, as it can point to anything, including the internal state of a system object.

You could read people's rabble, or you could just try Oberon, Bluebottle OS, or similar systems.
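As a crude sketch of the idea (mine, not Oberon's actual machinery; the command names are merely styled after Oberon's Module.Procedure convention), clicking any piece of on-screen text that names a command looks it up and runs it:

    /* Toy "text as command" dispatch: any clicked word naming a command runs it. */
    #include <stdio.h>
    #include <string.h>

    typedef void (*command_fn)(void);

    static void edit_open(void)   { puts("Edit.Open: open a viewer"); }
    static void system_time(void) { puts("System.Time: print the time"); }

    static const struct { const char *name; command_fn fn; } commands[] = {
        { "Edit.Open",   edit_open },
        { "System.Time", system_time },
    };

    /* Simulate a middle-click on a word sitting anywhere in a text. */
    static void click(const char *word) {
        for (size_t i = 0; i < sizeof commands / sizeof commands[0]; i++)
            if (strcmp(commands[i].name, word) == 0) { commands[i].fn(); return; }
        printf("no command named %s\n", word);
    }

    int main(void) {
        click("System.Time");
        click("Edit.Open");
        return 0;
    }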


Just to rabble in agreement, one recompiles xmonad (typically with mod-q), where recompiling means loading one's entire xmonad.hs file. "Reloading a file," especially when that file is typically solely used for xmonad resources (and not for any object the user desires) is not nearly as programmable as a Smalltalk or an Oberon.


Gotcha. Understood.


xmonad is a window manager, not a GUI. You use it to move windows of a GUI around and resize them. The GUI is what happens inside those windows.


I see your point and don't disagree.

Using these semantics then, this is all user-layer stuff, and even less of an argument against "UNIX" itself. :)


This rant brings to mind Stroustrup's observation about there being only two kinds of languages: the ones people complain about and the ones nobody uses.

What is BeOS's market penetration?


I agree a fair amount with the bashing. And I recall similar historical commentary.

That said, the two hangups for me on Windows are:

- the 254-character path limit
- no POSIX compliance (lack of even a shim library) -- there used to be an optional set of POSIX-compliant libs under NT

That said, outside the scope of the rant, Windows has been really good at keeping backwards compatibility. (See issue w/ 254 char path limit).


"Neither have been designed well, both are the products of random evolutionary pressures."

As is every living being.


What alternative does he propose? I don't see any. Furthermore, why doesn't he propose/create practical, incremental improvements? In the meantime, I am just going to stay compatible with all the miracles and all the horrors of the past! ;-)


I propose TempleOS for this author.


I actually find Unix and Windows to be very fundamentally different. Particularly where he talks about security: Unix security concepts are built into the system architecture, whereas in Windows they are implemented as features on top of the OS. A perfect example that he calls out: ACLs. Some ACL support does exist in the architecture of Windows, but it is only a half-hearted implementation.

Modularity is another example of something that is fundamental both in the architecture AND the philosophy of Unix, but very far behind in Windows, where many applications, such as a browser, can tie into kernel space (so a browser exploit reaches the kernel).

The communities and philosophies are also something I breezed over, but I think they are a non-trivial part of an operating system.


> I actually find Unix and Windows to be very fundamentally different.

There is a small child growing up in El Paso who thinks English and Spanish are "very fundamentally different", too, because in English there's no word for "quererte" or "sacartelo". But that's just because they've never heard Chinese, let alone Lojban or Python, and they don't know how to read yet, so they have no idea along what lines languages might vary.

Unix and Windows are both single-node monolithic multi-user non-real-time operating systems built on a hierarchical filesystem (with names that are sequences of strings) with ACLs for discretionary access control, no mandatory access control, a global user namespace controlled by a single system administration authority, and in which executables (which share code using dynamically-linked libraries) run with the full permissions of the user who invoked them. In both systems you communicate with I/O devices as if they were files. The interface they provide to user processes uses system calls to present a programming interface that is much simpler than that of the underlying machine; those processes can have multiple threads sharing memory that block in system calls independently and can pre-empt one another, and by default their memory is not shared with other processes. They are both written mostly in C with some C++, and other programming languages are more or less obliged to use C calling conventions to interoperate. Both of them use sockets for network I/O. Users typically use them via a windowing system, which provides a huge variety of complicated ways of setting pixels inside a rectangular region on a virtual screen, and accommodates up to one mouse pointer.

Compared to any of PolyForth, Pick, Genera, Oberon, VMS, KeyKOS, VM/360, SqueakNOS, MacOS up to 9, MS-DOS, Spring, Sprite, QNX, and Amoeba, Unix and Windows are as alike as two peas in a pod.


This is a good point, though also a bit curious, proposing as it does that MS-DOS is perhaps a superior alternative. Protected memory is indeed an argument that Windows and Unix are the same. I'm less inclined to agree it is a reason they should be bashed in unison.


Well, he said DOS was more different from either of them than they are from each other, not that DOS was different in any way that was better.


MS-DOS is a superior alternative only for hard real-time systems and, perhaps, for systems where security is more important than almost any functionality. And probably running Linux under a real-time operating system like RTLinux is a better alternative in the first case.

My point, though, is not that MS-DOS is better in any way; rather, it's that "a flat space of multiple processes with independent address spaces of mutable memory, separated using memory protection, each containing multiple threads, which access I/O devices through system calls" (and, although I didn't say this, with disjoint kernel and user spaces) is only one possibility among many.

You could have only one process with one thread.

You could have multiple processes, but all in the same memory space, with any of them able to overwrite the others' data. (You could call this "one process, multiple threads.")

You could have multiple processes that share an address space but have access to different parts of it, which sounds stupid but means you can pass raw memory pointers in IPC and was the basis for a whole research program called "SASOSes" a few years back.

You could reuse the same addresses for kernel space and user space, which not only gives you a full 4GiB of virtual address space on a 32-bit machine, but also ensures that your kernel code doesn't pass even the crudest testing if it dereferences a pointer passed in from user space without using the appropriate user-space-access function. (It also imposes the cost of two virtual memory context switches on every system call; I think i386 can do this cheaply with segment registers, but I'm not sure, and basically nothing else can.)

You could give user processes direct access to I/O devices, instead of mediating access through system calls, which might be sensible in an environment where you wrote all the processes.

You could virtualize the I/O devices, just as we virtualize memory, so that, for example, your gigabit network card copies packets directly into your memory space — but only if they’re your packets, not packets addressed to a different process. (This was originally called "VIA" on Linux; I think it has a different name now.)

You could separate your processes through a trusted compiler, like Erlang does, instead of with hardware; an intermediate approach would use a trusted machine-code verifier, analogous to Java's bytecode verifier, or a trusted machine-code rewriter that compiled unsafe machine code to safe machine code.

You could allow only a single thread per process, like Erlang does and like Unix did for many years, either with or without explicit shared-memory facilities like shmseg and mmap.

You could entirely decouple threads of control from memory spaces, as KeyKOS did (if you look at it funny; KeyKOS domains are an awful lot like single-threaded processes, but you could instead consider them to be locks).

You could make all memory write-once, eliminating many of the difficulties that attend sharing it (Umut Acar's Self-Adjusting Computation paper is based on an abstract machine using this model) but probably requiring a global garbage collector.

You could replace the memory abstraction with a transactional store and execute transactions rather than continuous-time processes; the transactions could either be time-limited, as in CICS, or preemptively scheduled like processes, but in either case incapable of I/O or IPC.

So, considering the enormous design space of possibilities on even this single matter, Unix and Windows are huddled together in one tiny corner of the design space, as on many other design choices. It's clearly a better corner than many other possibilities that we've explored, especially on currently-popular hardware and with compatibility with the existing applications that bcantrill was deifying upthread. But the design space is so big and multidimensional that it seems terribly unlikely that we've found an optimum.

We know that this corner has failed to ever produce a secure system against many plausible threat models, and that producing a hard-real-time system in it is possible but more difficult than with some alternative models. We know shared-mutable-memory threading is terribly bug-prone. We know that indirecting all I/O through the kernel imposes heavy performance costs, which adds complexity to user processes and raises a market barrier to high-performance I/O hardware like InfiniBand.

We know that processes separated by the use of virtual memory facilities are very heavyweight: you can't switch between them at more than a few hundred kilohertz on a single core, you can't create them at more than a few kilohertz per core, and you can't practically make them smaller than a few tens of kilobytes, all of which limit the power of the process as an abstraction facility. (Linux has actually reduced the cost of processes, both in physical memory and in context-switch time, by more than an order of magnitude; I imagine OpenBSD has too. But improving the situation much further probably requires different abstractions.)


That's a more involved response than I think I deserved. :)

No argument that winux is only a local maximum, but this thread is in response to an article that basically claimed it was a global minimum. Fwiw, your comments are, imo, far more informative and constructive than the linked post.


Flattery will get you everywhere :}


> Unix security concepts are built into the system architecture, whereas in Windows they are implemented as features on top of the OS. A perfect example that he calls out: ACLs. Some ACL support does exist in the architecture of Windows, but it is only a half-hearted implementation.

I think you have the layering completely the opposite way. NT has security descriptors on everything that has a name. Above that is Win32, originally bolted onto NT as one compatibility personality among others, which is historically an API for systems that were not very security-conscious. And most Windows programs out there don't care about the security features.

So it's more like the higher layers suck in this regard.


I absolutely may have the layers the wrong way. My working knowledge of Windows is very limited compared to Unix and I may not have fully understood how Windows is put together.


> Unix security concepts are built into the system architecture, whereas in Windows they are implemented as features on top of the OS. A perfect example that he calls out: ACLs. Some ACL support does exist in the architecture of Windows, but it is only a half-hearted implementation.

Huh?

The fundamentals of Windows NT (the object manager, the registry and NTFS) all have ACLs.
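To illustrate the point, here's a minimal sketch of my own (demo code, not anything production-grade): Win32 lets you hand a security descriptor to almost any call that creates a named object. This one attaches a NULL DACL to a file, which grants everyone full access; fine for a demo, never for real code.

    /* Attach a security descriptor to a file at creation time. */
    #include <windows.h>
    #include <stdio.h>

    int main(void) {
        SECURITY_DESCRIPTOR sd;
        SECURITY_ATTRIBUTES sa;
        HANDLE h;

        InitializeSecurityDescriptor(&sd, SECURITY_DESCRIPTOR_REVISION);
        /* NULL DACL == everyone gets full access; demo only. */
        SetSecurityDescriptorDacl(&sd, TRUE, NULL, FALSE);

        sa.nLength = sizeof sa;
        sa.lpSecurityDescriptor = &sd;
        sa.bInheritHandle = FALSE;

        h = CreateFileA("demo.txt", GENERIC_WRITE, 0, &sa,
                        CREATE_ALWAYS, FILE_ATTRIBUTE_NORMAL, NULL);
        if (h == INVALID_HANDLE_VALUE) {
            printf("CreateFileA failed: %lu\n", GetLastError());
            return 1;
        }
        CloseHandle(h);
        return 0;
    }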


[dated 2008]


> Neither is usable by someone who isn't thoroughly familiar with their limitations and weird little idioms.

vs

> You can't learn programming from either Unix or Windows.

Really?

Reading this article is an exercise in Poe's Law.



