Plan 9 as distributed by the labs continues to be LPL (not GPL and not dual licensed).
From the GPL:
You can redistribute it and/or modify it under the terms of the GNU General Public License as published by the Free Software Foundation; version 2 of the License.
I'm assuming "it" in this case refers to the Plan 9 code.
The labs still only distribute Plan 9 under its own license. Plan 9 is now also available under the GPL, but the labs don't distribute that version. It's only relevant if they continue to work on it and don't share their changes under the GPL.
At worst, there will be a fork under GPL and they will slowly diverge.
FWIW, Ron Minnich wrote the Linux v9fs. He was very involved with Plan 9, for example the Blue Gene port. He now ports Plan 9 bits into Akaros.
Now I'm tempted to do it just to prove a point.
edit, just looked it up on github -
oh, and mine now -
Thanks by the way, I'm going to have to play with it now :)
We'll speak again in one year to see your (and other forks) progress.
Bring biscuits and wear a green hat.
O Jenny is all wet, poor body,
Jenny is seldom dry:
She draggled all her petticoats,
Coming through the rye!
I don't know if the Berkeley guys just got a bag of GPL code or if this is a continuing agreement and they'll get subsequent, more recent code in the future. In the meantime, code from Bell Labs is LPL.
Are there still employees on the project? I had been under the impression that the entire core dev team left in 2003, mostly for Google, and that Alcatel-Lucent no longer staffs the project. Looking through the last-modified dates in the source tree, there are a handful of edits in the last few years, but not very many, and the only specific people I can find who've worked on plan9 in the past 5 years are all external to Bell Labs (e.g. the 9front people).
Who's using MIPS these days? Retrogeeks excepted, of course.
As I noted elsewhere, at least as of a year or two ago MIPS was likely still outselling x86 in terms of volume (but x86 is still supreme in terms of revenue because all the other architectures have far lower average unit costs).
They're having a hard time keeping up, though.
Plan 9 is a research operating system, it's a platform to do fundamental operating system research. It is not a product, and its development is not shaped by commercial interest.
That being said, Coraid hardware runs Plan 9; it's embedded, so you don't see it. They also do all their development on Plan 9.
Some of it. Some of it runs Solaris. http://www.coraid.com/products/file_storage
Wow! That's insanely cool!
EDIT: These are per year for all of them.
Either way, even if MIPS doesn't dominate anymore, it's still a huge market (probably more chips than x86), and will continue to be in the future (even though it might be shrinking).
'The clause in particular that causes it to be incompatible with the GNU GPL is "This Agreement is governed by the laws of the State of New York and the intellectual property laws of the United States of America."'
The point is the LPL license says you agree to be bound by New York law for license disputes even if you do not call New York home, which means you have to defend any license-violation suit in New York, because the odds of finding a judge capable of hearing New York law cases outside of New York are rather slim.
For what it's worth, the chance that a random US judge will apply another state's laws correctly, or even coherently, is pretty slim.
It means that you agree that all disputes will be settled in NY. Every European consultant who has done work for a US company has signed such an agreement (enforcement across continents is another beast altogether).
It means that they will be settled using NY law, not that they will be settled in NY. An example might make things clearer.
Suppose a party in the UK enters into a contract with a party in Germany. The contract has a clause that says either party can terminate the contract provided they provide 30 days written notification to the other party. Failure to provide adequate notification can result in some losses for the other party, and so could lead to litigation. The contract has a clause that says it will be governed by the laws of New York State.
The German party wants to cancel the contract, and does so by sending an email to the UK party 31 days before their proposed termination date. The UK party does not see the email for several days, and incurs some losses. They sue the German party in German court. Let's assume that the German party conducts business throughout the world, and so the plaintiff has many choices as to where to sue (they can basically sue anywhere that has personal jurisdiction over the German party).
The key issue is whether or not the German party provided 30 days written notification. Is email "written notification", or does it have to be a more traditional form of written communication, such as postal mail or telegram? Also, 30 days from whose point of view? Does the sender merely have to send the notification 30 days or more before the proposed termination date, or does the receiver have to receive it 30 days or more before that date?
Different jurisdictions might have different answers to these questions. If the contract does not define what constitutes 30 days written notification, the outcome can depend on where the plaintiff sues.
What the German court will do is look at their choice of law rules. Those rules will tell them what jurisdiction's contract law to apply when interpreting a contract between a UK party and a German party, and then use that jurisdiction's contract law to figure out what constitutes 30 days written notification. One of Germany's choice of law rules says that if the contract explicitly says what jurisdiction's rules to use, the German court will honor that.
Thus, the German court will try to figure out how New York contract law interprets "30 days written notification": whether New York allows email notification, and whether the 30 days runs from the sender's viewpoint or the recipient's.
This is one of the things that makes a judge's job interesting--just because you sit on the bench in Germany doesn't mean you only get to deal with German law. It gets especially interesting if you are trying to apply foreign law to an issue that has not been considered in the foreign jurisdiction. This would happen in this example if New York law had nothing to say about what "30 days written notification" should mean. The German court would then have to try to infer the principles behind New York contract law, and decide what they think would happen if the issue arose in New York.
It is important to note that the German court will NOT use New York law for things not related to contract interpretation. It will use German law for rules of procedure, rules of evidence, qualification of expert witnesses, and things like that.
A completely different thing is a choice of venue clause, although they are often confused with choice of law clauses. A choice of venue clause would say that the parties agree that any litigation over the contract will be brought in New York courts, and that the parties concede that the New York court has personal jurisdiction. You should always be careful if you see a choice of venue clause in a contract, especially if it is a foreign venue for you.
Choice of law clauses are generally pretty safe. They are essentially just shortcuts saving a lot of verbiage in the contract, so that both parties can know what any terms, like "30 day written notice", actually mean.
I have assumed that the FSF's doctrine on this point actually grew out of concerns over one of the licenses in the complex Python license stack, the CNRI license, which has a Virginia choice-of-law clause, close to the time of Virginia's adoption of the controversial UCITA.
http://docs.python.org/2/license.html (CNRI license apparently from 2001)
http://en.wikipedia.org/wiki/Uniform_Computer_Information_Tr... (indicating Virginia adopted UCITA in 2000)
Note that if a license does not have a choice of law clause, that just means one resorts to default rules about what jurisdiction's law applies to a given issue. (I.e., absence of a choice of law clause does not make the underlying issue go away.) Choice of law clauses are common in proprietary software licenses (and other commercial contracts) and are generally seen by lawyers as beneficial measures to reduce interpretive uncertainty (though obviously in 'form agreements' this is to the benefit of the licensor).
Of course, the real conflict is probably the LPL saying
"No party to this Agreement will bring a legal action under this Agreement more than one year after the cause of action arose. Each party waives its rights to a jury trial in any resulting litigation."
>Consequently, sharing the device across the network can be accomplished by mounting the corresponding directory tree to the target machine.
Does this mean Plan 9 natively supports sharing any device managed by the kernel over a network connection?
I've never really understood why 'import /proc' is better than 'cpu acid'. Yeah, there are cases where the remote host won't have acid installed.
More interesting, I think, would be stuff like getting /net in a VM from the host OS [surely 9vx or inferno's /net could be separated out], getting /dev/sd* from a 9P server that knows QCOW, etc.
Not mattering whether it's in the kernel or userspace, without using $LD_*, is also far more interesting than not mattering whether it's local or remote.
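For readers who haven't seen the mechanism being discussed: importing a remote machine's resources is a couple of rc one-liners. A minimal sketch (machine names like gateway and remotebox are placeholders):

```rc
# Replace this namespace's network stack with gateway's; every
# subsequent dial goes out through gateway's interfaces.
import gateway /net

# Or keep the local stack and mount the remote one alongside it.
import gateway /net /net.alt

# Mount a remote machine's /proc so local tools can inspect
# its processes.
import remotebox /proc /n/remoteproc
```

Because these are ordinary mounts in a private namespace, they affect only the process group that ran them.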
Plan 9 has acme and sam. There's also a vim port, but give Rob Pike's editors a chance. The relation between Unix and Plan 9 is the same as the relation between vim and sam.
> shell history
There's terminal (not shell) history, check out the " and "" scripts. Again, things are different in Plan 9. There's no point in doing the same stuff again. The Plan 9 mechanism is an emergent property of its design (it doesn't require special code in shells), and it's scriptable (again, see " and "").
This makes no sense; Plan 9 has private namespaces, which can do many things, including everything symlinks do.
> decent browser
The browser is probably the biggest gripe for most people. You can run a semi-recent Opera in linuxemu. I believe it wouldn't be too hard to update linuxemu so you could run a recent Firefox or Chromium, but nobody has done the work so far. Personally, when I use Plan 9, I use it to do things I can't do in Unix, so I don't spend any effort on things I can already do in Unix. For browsing I use a Mac.
From what I can read on wikipedia, both acme and sam are mouse-centered. This is a major philosophical change compared to vim, while my understanding is that Plan 9 is a pure expression of the ideas underlying Unix ("Unix without the hacks").
> There's terminal (not shell) history, check out the " and "" scripts. Again, things are different in Plan 9. There's no point in doing the same stuff again. The Plan 9 mechanism is an emergent property of its design (it doesn't require special code in shells), and it's scriptable (again, see " and "").
I'm afraid " and "" have both defeated my google-fu. Would you mind enlightening me about them?
> This makes no sense; Plan 9 has private namespaces, which can do many things, including everything symlinks do.
I see. From what I read, that's certainly a much more powerful and cleaner abstraction than symlinks.
> The browser is probably the biggest gripe for most people. You can run a semi-recent Opera in linuxemu. I believe it wouldn't be too hard to update linuxemu so you could run a recent Firefox or Chromium, but nobody has done the work so far. Personally, when I use Plan 9, I use it to do things I can't do in Unix, so I don't spend any effort on things I can already do in Unix. For browsing I use a Mac.
This sounds reasonable. Anyway, given the limited hardware support, I suppose a VM is the way to go.
" and "" are two small scripts that allow running and searching through previous commands. Here is the manual: http://swtch.com/plan9port/man/man1/wintext.html
This, along with hardware support, is why I don't run Plan 9 as my only OS. Porting a modern browser to Plan 9 would be incredibly painful, and I believe the codebase of Chromium or Firefox is larger than Plan 9 itself these days!
Someone actually ported vim, and it's pretty easy to install from contrib.
These are a deliberate omission; the ability to rearrange the filesystem with bind doesn't correspond exactly to symlinks since it's done as a property of the current namespace rather than stored in the filesystem, but with the idioms that are built around bind, I've never missed symlinks at all.
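For anyone curious, the bind idioms in question look something like this (paths are illustrative):

```rc
# Make a deep directory appear at a convenient path, like ln -s,
# but only in this process group's namespace; nothing is stored on disk.
bind /usr/glenda/src/project /tmp/project

# Union directories: add a directory to be searched after /bin ...
bind -a /usr/glenda/bin/rc /bin

# ... or before it, to override individual binaries.
bind -b /usr/glenda/overrides /bin
```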
Sorry for going into defensive-weenie mode and attacking your positive post!
No worries. One of the reasons for my post is to learn more about what's available on current Plan 9. Besides, I don't mind having my misconceptions corrected, as long as it's done politely. Thanks for taking time to do that.
if you want to keep it, run

tail -f /dev/text >> /home/shell_history

in your login script
The shell doesn't have fancy nonsense because fancy nonsense doesn't belong in the shell; it belongs in your user environment.
Personally in 15 years of using plan9 I have never ever once wished for shell history.
We have no need for command history. Mostly because we use better tools such as Sam and Acme to run arbitrary shell commands.
I shall repeat, like I end up doing on most plan9 threads, I have never had the need for command history in 15 years of using plan9. Ten years of that was full time software development and system administration in plan9.
Not using an OS because it doesn't have a shell like Bash is short-sighted. These kinds of arguments are why most of us don't even turn up to threads about plan9. It's the same 3 things over and over: no command history, doesn't run Linux programs, doesn't have a native HTML4-compliant web browser.
Just now, I was repeatedly running curl in order to test an API, occasionally tweaking URL parameters here and there. Would you really open a text editor, type your curl line inside, and execute the result?
I'm not trying to be antagonistic, but I just have a hard time fathoming the idea of a shell without history.
As for using text editors. Yes, in Plan 9 this is actually feasible and it's a widely used idiom. What makes it so much easier is that I can just middle-click or middle-swipe on anything inside the editor, and it would execute. So yeah, I can just type commands and edit them. When I B2 on them they will run.
Please see this excellent introduction to acme by Russ Cox: http://research.swtch.com/acme
Or rather I'd merely be right clicking on a URL and my environment would already know I wanted to retrieve that URL as text. If I was in Acme it would open that text in a new window, if in the shell it would just print it on stdout (if that's what I had set up).
When I say the environment knows, actually a program called "the plumber" does text matching on the strings it is sent and executes commands based upon the pattern. I could arrange for different commands to execute based upon the URL.
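A plumbing rule of the kind described lives in $home/lib/plumbing and looks roughly like this (the regexp is simplified and the details are a sketch, not the exact stock rules):

```
# send http(s) URLs to whatever is listening on the web port
type is text
data matches 'https?://[^ ]+'
plumb to web
```

Different patterns plumb to different ports, so the same click can open source files in the editor, URLs in the browser, and so on.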
If I knew I was going to experiment I would be keeping a record.
As Tesla said in response to Edison's now famous "1% inspiration, 99% perspiration" remark
"If you thought a bit more, you wouldn't have to sweat so much."
(both were in a live radio debate, curiously only Edison's has entered the lexicon)
For example, someone enquiring about using arrow keys for history navigation gets an immediate response about how bash is an abomination, plus a rant about no one having written a new shell in 20 years and whatnot.
Meanwhile, skipping entirely any discussion about how Plan 9 does things better. 4ad mentioned " but failed to explain how one navigates /down/ the history. It also entirely ignores the fact that users have voted and arrows have won. They are used to navigate in text. Navigate between email messages. Practically everywhere keyboard-based navigation is used, arrow keys have proved accessible, simple to understand, and fast to use for the level-1 needs (next, previous, ...). Are your rants enlightening? No. They merely leave the impression that Plan 9 is not a community I'd like to join.
Stop arguing for faster horses just because cars don't run on oats and hay.
I used plan9 as my primary desktop for about 10 years. As I would always respond, a Formula 1 car doesn't have an ashtray.
* all objects are either files or file systems
* communication is over a network
* private namespaces (transparent access to remote processes) 
Even more modern concepts are in the NT kernel by Dave Cutler (VMS fame). NT uses an object metaphor that is pervasive throughout the architecture of the system. Not only are all of the things in the UNIX file metaphor viewed as objects by NT, but so are things such as processes and threads, shared memory segments, the global registry database and even access rights.  You can browse the NT object tree e.g. with the ReactOS Explorer on Windows or ReactOS. 
Thankfully as of Vista, video drivers are back in userspace.
NT is still a microkernel by design. What Microsoft did was to keep the modules logically separated and use kernel internal RPC to switch between areas of responsibility in the kernel. Even if it looks like a monolithic one.
This is how hybrid microkernels tend to work.
Since Vista the support for user space drivers has been improved a lot.
Also, know how you do init in Plan 9? You add commands to an init file, and they run when the machine boots. If a service goes down for some reason, you GO RESTART IT YOURSELF.
Are you saying that lack of process monitoring is a feature? The "add commands to init" thingie has simplicity going for it, and reminds me of the little I've seen of Arch's old init scripts, but I'm not interested in figuring out the dependency order of my services by experimentation, or giving up parallel initialization.
I'm glad in your haste to snark on a typo you managed to leap over all the bits that don't correspond with your worldview.
Sometimes yes, sometimes no. Providing the system owner the ability to make that decision is a feature.
I've seen many "Windows import" sysadmins who think it's perfectly natural to reboot a server because something is not working. It's not. Automatic restart of crashed processes should be the exception (as in "we need to keep the reactor core cool") rather than the norm.
The Linux kernel is also GPLv2 (not +); I completely agree with the '+' thing as that leaves the FSF in control of the license (effectively).
Your reply amounts to "Your opinion differs from Linus's opinion, and is therefore wrong."
There are many reasons why one might wish that Linux was GPLv3: for instance, one might think that the "anti-Tivoisation" provisions are important. Linus obviously doesn't share this point of view; that doesn't make it an invalid point of view to have.
> And please, when you join that fight, use your _own_ copyrights. Not somebody elses. I absolutely hate how the FSF has tried to use my code as a weapon, just because I decided that their license was good.
and about having a poll deciding the matter
> Here's a poll for you:
> - go write your own kernel
> - poll which one is more popular
I very often dislike Linus style of discussion, but that one was pretty clear and straightforward.
Actually, I don't think it's a bullshit argument in cases where the fundamentals are here: somebody calls a vote on the license of code that was largely written by the other person. That other person is totally entitled to say "Hey, go vote on the license of your piece of code and not on mine."
More like: "Someone other respected person believes otherwise, and here is their justification. This other position may be a better one, for the reasons they give."
If linus decided tomorrow to make new software under a license that only allowed white people to use it, I would consider it wrong and a racist license. I reject your claim that the author has the right to establish such license as "right".
And you can't seriously convince me for a moment that the vast majority of the Linux community doesn't agree with Linus.
Every single kernel contributor must at least feel partially the same way, after all, they agreed to contribute under the existing license terms which (as Linus pointed out) can effectively never be changed.
You can't seriously convince me for a moment that the vast majority of the Linux community doesn't disagree with Linus.
Your task now should be obvious: Come up with an agreed-upon definition of "the Linux community", and run a valid survey.
> Every single kernel contributor must at least feel partially the same way, after all, they agreed to contribute under the existing license terms which (as Linus pointed out) can effectively never be changed.
You've established nothing more than that the kernel contributors are willing to work within that framework, not that they feel the same as Linus does.
> Your task now should be obvious: Come up with an agreed-upon definition of "the Linux community", and run a valid survey.
> You've established nothing more than that the kernel contributors are willing to work within that framework, not that they feel the same as Linus does.
As they, "actions speak louder than words" -- and your words seem very dim compared to the action of thousands of contributors.
Then it's not going to happen, and our opinions are equally valid.
> it's logical to assume that they agree with both the licensing and direction
No, it's not. It's logical to assume they accept them. There are many reasons they might do so. One is certainly because they agree with them, another is that, weighing the benefits of working within the existing framework, they conclude it outweighs their disagreement with the licensing and direction.
Some of the benefits include, by the way, continued salary. Most kernel contributors are paid for their work by employers.
I work within frameworks I disagree with all the time. We all do. Sometimes because I'm paid for it. Sometimes because I feel I can do more good within that framework than fighting it. Sometimes it's because I'm bound to by law or social contract.
That I don't shoot anyone that stands in the way of my ideals is not the same as saying I agree with theirs.
If I had some motivation to do so (something I thought needed fixing, a new driver I wanted, or an employer who paid me), I would have no issue contributing to the Linux kernel. Yet I disagree with the decision to make it GPLv2 only.
Anyone who can't hold two opposed ideas in their mind at the same time is surely too stupid to be trusted to work on a kernel, no?
That's why we have the right to fork. While we can't relicense the GPLv2 kernel code, there is nothing stopping anybody from including GPLv3 code in the kernel so long as it doesn't directly link against GPLv2 code when the fork is distributed.
Every time I look at Android phone or TV or a router I regret that BSD/MIT/Apache is so popular. Those devices run tivoized proprietary builds of what consists mostly of free software, but builds are shitty or just restricted (want a proper firewall? never mind it's there go buy next-tier device). And users can't do anything about that.
Users and GPLv3 deserve more love. They're already screwed by everyone and their pet code monkeys.
As an example, John Carmack (formerly of id Software) recently posted this on Twitter:
GPL never did really do anything for game code, and I do
wonder whether it was a fundamental cultural
When I was merely a user of free software, I too believed, as others did, that a more restrictive (or "stronger" if you prefer) copyleft license was better. However, the more years I've spent in the software industry, the more my opinion has shifted.
I think it ultimately comes down to who the primary "users" of your code are. If it's an application, it's probably ok to use more restrictive licensing. If it's a library, or code that's in any way intended to be reused by others more generally, the licensing should have very few restrictions if you want to encourage wide usage and to discourage others from duplicating the work.
>The most common case is when a free library's features are readily available for proprietary software through other alternative libraries. In that case, the library cannot give free software any particular advantage, so it is better to use the Lesser GPL for that library.
>This is why we used the Lesser GPL for the GNU C library. After all, there are plenty of other C libraries; using the GPL for ours would have driven proprietary software developers to use another—no problem for them, only for us.
>However, when a library provides a significant unique capability, like GNU Readline, that's a horse of a different color. The Readline library implements input editing and history for interactive programs, and that's a facility not generally available elsewhere. Releasing it under the GPL and limiting its use to free programs gives our community a real boost. At least one application program is free software today specifically because that was necessary for using Readline.
Public Domain or BSD-style licenses are really the best option for encouraging wide usage and discouraging code duplication.
Actual users want products that just work.
You're twisting "users" to mean "programmers that have the time and interest in spending it on hacking their gadgets."
There is a huge difference here: the vast majority of people don't mess with their routers (heck - I'm a programmer, and I code at home and at work, and I don't mess with my router). So the people you're talking about are a very, very small group.
I'm not saying that this is a group of zero size. All I mean is: compare the total number of people that are into their wrt54g routers with the total number of routers out there. Users aren't idiots; they just have other things they'd rather be doing than messing with their wireless routers.
You imply that they're paying too much for their routers, given the features they have. Who are you to say what a feature (of a product you don't have to support) is worth to another person? If you think you can do better, no one is stopping you from making and distributing routers that you'd like, either for sale or for gratis.
Nope, I meant users. Who want products to work.
I'm not willing to hack my phone myself, in fact I don't have resources to do that. But I'm willing to pay someone to do that. Yet, I can't, because the necessary extent of reverse engineering would cost me a fortune. So, I learned to live with what I have.
> the vast majority of people don't mess with their routers
Does that prove anything? See, I don't mess with my phone and my TV, too. Does that mean I'm really satisfied with those? Nope, I just tolerate them.
Imagine there's a patch that automagically improves your network experience by configuring QoS features your router already has (but exposes no UI for). Would you install it? If you had to crack your router open, solder a serial port, run a TFTP server, and risk bricking the device, you probably wouldn't. If you just had to download a file and click a button or two to get a new feature (not reflash the router with entirely new firmware), I'd guess you'd quite likely consider it.
> Who are you to say what a feature (of a product you don't have to support) is worth to another person?
Uh. Someone who heard some of those other persons complaining? I must admit this is a subjective opinion; I do not have proper statistics to make things objective.
However, you're right, routers were a bad example. The average user rarely has an opinion on them, because they don't really consciously interact with them. "Smart" TVs and phones are a much more common source of complaints. And the point was not about pricing; it was about the inability to control what you have.
And there's the sound of someone slamming down the portcullis between "us" and "them". Programmers and non-programmers. Creators and consumers.
>Every time I look at Android phone or TV or a router I regret that BSD/MIT/Apache is so popular
Android is built on the linux kernel, which is GPL. You are proving yourself wrong with that example.
> And users can't do anything about that.
Of course they can. If they do not want that, they can choose not to purchase it. It isn't rocket science. You are simply upset because actual users are proving you wrong, and showing that they don't give a shit about things being GPL. Your "but think of the users" histrionics are so transparently fake it is insulting everyone's intelligence to continue repeating it.
Proprietary software would still exist without free software. People who write software can do whatever they like with it, and it is incredibly self-absorbed and arrogant to suggest that giving away something they made is a problem because it violates your arbitrary belief system.
Did someone offer you software, with no cost attached, and promises that you can modify and share it, and then it was not enough? You wanted to also have the right to threaten your users with lawsuits if they in turn did the same thing as you just did to the program?
Would be interesting to see what would happen if a cloud provider offered an (updated) version.
But as an older unix nerd/Go programmer, I have the opposite reaction where when I hear people talking about the movie it immediately reminds me of the OS.
I suppose the extension you have in mind are the extra libraries for dealing with buffered io, unicode and concurrency. But I don't think those could be termed "PLAN 9 own standard of C".
To clarify, my idea of restricted was in the sense of a highly refined subset, much akin to how one should use C++, for instance. A careful selection of the good parts in C. Essentially, I was paying a compliment to Plan9. :)
Please see section 3.3: http://doc.cat-v.org/plan_9/4th_edition/papers/compiler
BTW, be careful around those documents. The assembler, compiler, parser and linker have seen over two decades' worth of work since that ink was put to paper. Though admittedly I haven't read through the lib9 source tree in years...
Sure they do, by definition code that uses the extensions is not standards compliant. Plan 9 C code makes heavy use of Plan 9 extensions. The Go runtime also makes use of those extensions, that's why gcc recently (2009) implemented -fplan9-extensions in order for gccgo to compile the Go standard library.
> Note that both C++ and GNU C have similar extensions but under different semantics which don't make them any less ANSI compliant.
I think what you mean is that GNU C is a strict superset of ANSI C and you assume that Plan 9 C is the same. This assumption is wrong. Plan 9 C is not a superset of ANSI C, some ANSI C things are missing; e.g. const. That being said it's not that hard to compile C89 code with the Plan 9 C compilers.
> be careful around those documents. The assembler, compiler, parser and linker have seen over two decades worth of work since those were put ink to paper.
I should know since I use Plan 9 every day and I recently ported the Go toolchain to Solaris and gave a talk on these tools. The compiler document is very accurate, the only glaring anachronism is that the extern register storage class doesn't necessarily use a register in the Go toolchain, but depends on the target operating system. It's still true on Plan 9 though. The assembly guide is more inaccurate, but the C papers are accurate, at least for now.
I don't follow. How does having non-standard-compliant code that takes advantage of the offered extensions make the compiler itself any less standards-capable? I was under the impression that being standard-compliant didn't mean forbidding extensions... But I suppose I could have been mistaken. :(
> GNU C is a strict superset of ANSI C...
In the same way that I said "Plan 9's C is a restricted ANSI C variety". Yes. It takes some good stuff from ANSI C and leaves some unnecessary stuff out by default. Still, keeping to the standard.
> some ANSI C things are missing; e.g. const.
Surprisingly irrelevant. The draft actually said "If an attempt is made to modify an object defined with a const-qualified type through use of an lvalue with non-const-qualified type, the behavior is undefined.". So, regardless of what the Plan9 doc says about giving warnings and not implementing const in a standard-conforming way, no behavior was ever expected in the first place by the standard, so the compiler is unwittingly compliant :D And volatile follows suit...
My guess to why the standard requires to implement a keyword without specifying explicit behavior, is that they wanted backward-compatibility with some vendor compiler that did implement these but didn't want to force any actual functionality.
Mind you, the standard is far more lax than most people give it credit for: even "void main()" and "exits(0);", which usually raise a few eyebrows, are implementation-specific according to the standard and thus compliant.
> The compiler document is very accurate...
I was thinking about https://docs.google.com/document/d/1xN-g6qjjWflecSP08LNgh2uF... but I'm obviously not as versed as you in the state of the tool-chain so that's that I suppose.
I personally did some work on getting the compiler to work on a MIPS router I had a few years back but my work was superseded by something better before I could even think about release so that was that... It was at a pretty late stage though with most of the assembly written down. So the impression I got at the time must have had mostly to do with the assembler. But that's ancient history I suppose.
So, from what I can tell from the standard and the actual behavior of the compiler, it's compliant regardless of what its docs say. More so, I can't think of a lot of non-conforming compilers I've come across outside embedded circles in recent years. Why, even MSVC is at most a custom header per project away from compliance, and since boilerplate was never restricted, it might as well be considered standard.
TL;DR Standards are highly overrated.
Thankfully I am not at Lucent anymore and am not privy to the tortured negotiations that ended up at the obviously inelegant compromise of "The University of California, Berkeley, has been authorised by Alcatel-Lucent to release all Plan 9 software previously governed by the Lucent Public License, Version 1.02 under the GNU General Public License, Version 2." But the odds are overwhelming that the one-word answer is "lawyers".
AT&T gave away Unix during Unix's formative years, because AT&T's government-granted monopoly disallowed it from making money on computers. They didn't make any money on Unix then. They tried later, and made some, but what a sense of loss about what might have been if only they'd been charging from the beginning! Of course, if they'd been charging from the beginning, it's hard to say Unix would have been so popular.
I don't think Plan9 has much commercial value in it left, but they may consider it would be awkward if, say, EMC or Cisco incorporated some of its technologies (Plan 9 had some interesting ideas about storage and network transparency) into their proprietary products without paying for them. It's easy to sympathize with the small garage inventor and to think it's nice for you to subsidize small innovators, but it's much harder to do the same with corporations that would patent-extort you the moment they realized they could gain some advantage from it.
It seems to me the Linux kernel is being used in many places.
There wouldn't be the whole GPL violations thingy if people were not already taking without giving under the GPL.
Plan 9 continues to be LPL.