In other words, RMS' radical position is necessary for 'moderate' LLVM to exist. Otherwise we'd still be living in the Borland/Metrowerks/Microsoft world of the 90s - proprietary toolsets developed by private companies with absolutely no intention or incentive to share their code.
In political science there is a concept called the "Overton window." If a once extremely radical position is held and promoted by any significant number of people, it shifts the entire conversation in that direction, so that the formerly radical position seems more moderate.
That's why RMS is very necessary. He shifts the Overton Window towards what most of us consider the "reasonable" position.
That's in response to the offer LLVM made to hand the copyright over to the FSF and integrate LLVM into GCC. I had long been wondering why there was an llvm-gcc on Apple machines a couple of revisions back.
Basically, he claims he didn't see the original LLVM offer:
>> "If people are seriously in favor of LLVM being a long-term part of GCC,
>> I personally believe that the LLVM community would agree to assign the
>> copyright of LLVM itself to the FSF and we can work through these
> I am stunned to see that we had this offer.
> Now, based on hindsight, I wish we had accepted it.
> If I had seen it back then, I would not have had the benefit of
> hindsight, but it would clearly have been a real possibility. Nothing
> would have ruled it out.
> I wish I had known about the offer.
In my personal experience, the less the opposing position acknowledges the values of its opponents, the more likely it is to be rejected. Most people reason by mood affiliation and use argumentation as a social tool, so this should not be a surprise.
I think the "Overton window" is confusing cause and effect; it's the nature of reasonable positions to generate unreasonable fanatics at the tail ends. But if you pay attention to the loud fanatics at a given point of time, you find most of them do not shift any window but instead fall into irrelevance. I think that as RMS's software becomes less important, people will care less about what he has to say.
Radicals have to be able to articulate their position in a way that is compelling and reasonable to a significant number of people in order for there to be a shift. But I think political progress is largely explained by this phenomenon.
The radical usually wants the world to convert to his or her own views, which is why radicals tend to become closed-minded and hard to talk with as they get older.
Some people label all undesired opinions "radical" but that is obviously not what this poster is talking about.
As far as I know regarding persuasion psychology, this is false. What are the psychological studies you are appealing to?
If you have a more substantive argument I suggest you cite the studies, but it'd be news to me.
Arguments virtually never convince anyone, so this is all a moot point.
I would tend to agree with you that when two people are arguing, there is very little chance that one will convince the other. However, one of the things I like to do on HN is read arguments between two informed people on a topic in which I myself am uninformed and unopinionated. So for my personal benefit I would urge the people of HN to keep arguing. And to cite your sources.
In-groups love nothing more than making fun of the out-group. When they can wrap themselves in the reasoning of "I'm just moving the Overton window!", they are avoiding the difficult and often painful steps of wondering whether their course of action is the correct one.
> From its name, I guess that LLDB is a noncopylefted debugger and that
> some might intend it to replace GDB. But I don't know if that is so.
This is just one example of many; in the recent arguments he has stated several times that he doesn't understand how automated code refactoring works, that he has no experience with IDEs, and the like.
Besides the ideological point, he does not seem like a person capable of steering important projects, at least when it comes to compiler technology. He just doesn't know enough anymore.
The fact that "he doesn't know enough anymore" doesn't say much about Stallman; instead it says a lot about how his goal of making sure software is libre has been shoved aside by everyone else for other priorities.
Also, RMS is one of the most humble people on the scene, and will freely admit to not knowing something until he lives and breathes it.
Yeah, actually, it does: specifically, it says a lot about his qualifications to apply theoretical ideals to real-world situations. To intelligently plan how to achieve the goals of the ideology, you need more than devotion to and deep understanding of the ideology; you need deep understanding of the existing context to understand the pragmatics of moving toward the goals of the ideology in that context.
Just because you disagree with some of the results of his principles, probably because you're focused on getting shit done in your little corner of the world (I'm typing this on a Mac, I'm just like you), doesn't mean that RMS is somehow fundamentally flawed or incapable of being the philosophical leader of a movement.
I didn't use "pragmatic" as an adjective describing RMS or a role RMS might be in at all, so I'm not really sure what you are saying. RMS is, and has long been, acting in exactly the role to which the "he doesn't know enough anymore" [about the way working developers now actually build software] claim is relevant: making specific recommendations about which software features and usage restrictions should, or should not, be present to achieve the goals of his ideology.
> Just because you disagree with some of the results of his principles
My position on Stallman's principles is orthogonal to my belief that his particular recommended policies are often counterproductive to achieving his stated principles. The post you are responding to is about the latter, not the former.
It does if you are trying to make choices about how to use management of which features to include or exclude in copyleft software targeted at software developers as a mechanism to promote the goals of an ideology with a specific view of software freedom.
> His principles are an entirely separate matter and he has stated over and over and over again that he doesn't care if his principles are inconvenient or if adherence to his principles causes technology to advance at a slower rate
But he presumably cares about whether his decisions result in a world that reflects his principles less rather than one that reflects his principles more. And that's where knowledge of the present pragmatics is important when it comes to tactical choices to advance his ideology.
No, not if it means compromising the principles themselves. That's the beauty of RMS, he really isn't pragmatic. He isn't willing to compromise, at all, ever. And that's why he is so important, because he represents an unwavering ideal, you don't have to worry about him moving the goal posts, if you hitch yourself to RMS and let out 100 feet of rope, you know that you will always be 100 feet from free software purity.
Not as such, but as long as he's doing it for so many marquee GNU/FSF projects....
I think that's a pretty striking description of failure in promoting an ideology.
I'd be really surprised if most of the people working on these marquee GNU/FSF projects weren't happy with GPL/copyleft, at least for these "complete" programs (as opposed to libraries like the GPLed GNU Scientific Library).
Are you actually saying you think the FSF have failed?
I think the FSF has failed in making popular RMS's extremist exclusionary ideology which sees the eradication of non-Free software as a moral imperative, even at the cost of technological progress and of the utility of Free software for its technical, rather than ideological, functions.
I think the FSF has succeeded in using copyleft licensing to create a critical mass of Free software, which established well the pragmatic case for Free software, and -- because that case has been so well made -- has demonstrated (entirely unintentionally) the conflict between the (larger, AFAICT) group whose goal is increased availability, utility, attractiveness, and use of Free software, and the (smaller, again AFAICT) group whose goal is RMS's: the eradication of non-Free software and the avoidance of Free software's utility in producing non-Free software.
I'd call this a deep problem with how the FSF has operated to-date. It's not that libre software isn't important, or hasn't had a huge impact on the software world. But the very idea that it should be perceived as the ultimate priority in the existence of software is wrong. That viewpoint fails to understand or acknowledge how people use software and what other important risks they perceive and face related to software. As such, our libre utopia falls apart because we didn't understand that it had to be inhabited by real humans.
he probably could be "connected with the present technology" if he wanted to.
Case in point. Understanding how people use and are affected by "present technology" is key to follow-on innovations after copyleft. IMO, a significant risk to libre software is that its social innovation has not continued to adapt to the changing software landscape. For a time, that was fine because we had the heyday of free software's expansion to worry about. But there's been this tacit (or maybe explicit) assumption in the community that the GPL and "belief" in libre software are enough. But in fact, I'll posit that the real goal is to build sustaining social infrastructure for free software and information culture.
But when you're implementing a language ecosystem, proud ignorance of other people's usage patterns is just embarrassing.
The fact that ESR doesn't use an IDE or automated refactoring is perfectly normal. It's those not able to work without using these tools who scare me.
In the game development world, embedded systems and desktops we like our IDEs.
The supplied IDE for development is Emacs.
2. Are you developing for consoles or PCs, or for some other platform?
3. Do you program in a language that benefits from IDEs, specifically C, C++, C#, or Java?
Now, while I am forced to use Visual Studio as a build system and for its integrated debugger, I too generally use other tools for actual development. And I'd love it if I didn't have to use VS for its debugger either.
Also, since everything is tightly integrated, when something crashes the whole thing goes down. This can be frustrating when your project takes almost a full minute to load in VS (it's mostly VisualAssistX being busy parsing files and the Perforce plugin syncing up).
2. I was primarily developing for consoles (PS3, X360 mostly), with some PC work. Now primarily PC and Mobile.
3. Over 70% C or C++. In industry (AAA Console) I was probably 90% C++.
Lots of people do use IDEs in the industry, but lots of people also don't. It's a choice.
Except if Windows C++ developers (most of whom use VS) don't qualify as "technically knowledgeable people".
And who is doing the judging?
I take his phrase to mean that those people are justified, or are the majority.
That is, the phrase "lots of very technically knowledgeable people shun IDEs and automated refactoring" seems to me to imply that technically knowledgeable people do/should shun IDEs and automated refactoring in general.
But, as we all know, technically knowledgeable people fall into both camps (pro- and anti-IDE).
There is no option for anything else on Windows, though. The shell isn't anywhere near as prevalent and you have no other go-to method at all. The whole ecosystem is point & click, which means you'll get nothing but headaches from trying to use a more UNIX-style toolchain in lieu of an IDE.
It's not exactly a point for IDEs when, even if you wanted to, it'd be a hassle to try to integrate another type of workflow into an OS that clearly is not made for it.
I'm not fond of esr in general but he's spot on with this post.
To be fair to RMS he has tried to consider opposing views in the past when he can see and understand the need behind the proposition.
In this case the problem is that RMS doesn't know enough about C++/Java/C#-style OO programming, and can't understand why you would need refactoring tools to rename a method, because in his C-style world a search and replace should be sufficient.
And because he doesn't see the need himself, he won't consider it a valid use case that GNU/GCC/Emacs needs to support.
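To make the rename point concrete, here's a minimal sketch of why plain textual search-and-replace falls short (the C++ snippet and all names in it are illustrative, not from any real codebase):

```python
# Illustrative C++ source with two unrelated methods that share a name.
code = """
struct Widget { void draw(); };
struct Card   { void draw(); };  // unrelated meaning: deal a card face-up
void f(Widget w, Card c) { w.draw(); c.draw(); }
"""

# Goal: rename Widget::draw to Widget::render.
# The C-style approach: plain textual search and replace.
naive = code.replace("draw", "render")

# Every occurrence gets rewritten, including Card::draw and the call
# c.draw(), which merely share the spelling with Widget::draw.
print("Card   { void render" in naive)  # True: the unrelated method was renamed too
```

A semantic refactoring tool instead parses the code, resolves `w.draw()` to `Widget::draw` through the declared type of `w`, and leaves `Card` untouched; that is the capability an IDE-style rename provides and a textual replace cannot.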
This has already led to chilling effects: people have stopped working on adding GCC-based support for auto-completion and similar functionality. RMS may be stubborn, but he's not stupid: he sees the proposed LLDB/LLVM-based functionality as a direct consequence of this, as bypassing his "authority" on the GCC issue, and thus as an "attack" on GCC and the goals of GNU/FSF itself.
In my mind, he is obviously right that people are side-stepping his judgement, but he is wrong that this is an attack on free software: People just want to make Emacs better, and he is vigorously fighting them to make it not happen.
And now we have RMS fighting to artificially limit free software, in his mind to preserve it. I'm not sure how long this has been going on now... A decade?
To me forking core GNU projects to leave RMS out just in order to get things done is increasingly looking like the only option.
Is that really the underlying reason, or are you making it up ?
From what I've seen, he doesn't want to expose internals of GPL software such that non-free software can be built on top of it, which is a fair stance to take - even if you and I might disagree with it, since as a consequence it might keep our lives from being easier/better - at least in the short term.
Dying on this hill makes him a bad steward of the projects other people have entrusted him with (and fortunately the maintainer of the Emacs debugger stuff is ignoring him), while also making everybody else's lives suck a little more.
And while I have as yet no opinion on the matter, a lot of people believe he's hypocritical for supporting non-free OSes in GCC and GNU Emacs.
They don't want their tools to be superior or have better functionality on a non-free OS.
For example, emacs wouldn't take a patch that hooked into speech recognition provided by the OS, unless that was also available on a free OS. And say Microsoft or Apple provided powerful functionality for debuggers via a system api (and this same functionality isn't on a free OS) - gdb would not take a patch that used it.
I remember Stallman saying something along the lines of "even if it weren't as good as proprietary software, it would be important for people to use free software" over a decade ago.
The fact that releasing the source code, and allowing people to modify it, leads to high quality software is a nice perk as far as he's concerned; but it's not the reason for the FSF. The FSF exists so that programmers aren't helpless when their system breaks.
Raymond's arguments, on the other hand, are all about which compiler or debugger is better, assuming the compilers and debuggers being compared are open source.
If you accept that RMS' worldview is about freedom, even if it sometimes means sacrificing some technological advantages, you'll see his position is consistent and reasonable.
People use gcc over non-copyleft compilers because they perceive it as technically superior. People use emacs over non-copyleft editors because they perceive it as technically superior.
Sacrificing technological quality to fulfill an agenda will actually have the opposite effect, because it'll drive users to non-copyleft solutions in order to get the better piece of software.
It's not just the number of users, either. When your software is dominant, you're in control. You get to have a say in the direction of the technology, and you get to prevent the lesser players from having their say.
By sacrificing technological quality for ideological purity, RMS is giving up both his userbase and his dominant position he can use to prevent non-free software from taking over.
Exactly. I respect RMS, and his position that proprietary software is immoral, but I also know that position won't be shared by many (maybe even most) of the people who care about free software.
And to write such compiler, you must as a minimum know what refactoring tools and IDEs are.
Also, I'm sure you always get naming and signatures right at the first try, foreseeing every possible evolution of the system. I don't, and since I don't like my programs to turn into huge piles of horror, I enjoy refactoring tools a lot.
It would take little effort on Apple's or Google's or Microsoft's part to take advantage of an LLVM-dominated world by closing off their own changes to it and trying to force developers to use their own proprietary LLVM distributions on their own operating systems. Nothing stops any other company in the future from taking advantage of all the great LLVM tech to implement their own CPUs in terms of LLVM IR, so that they never need to publish their ISA and can lock down their platform with a blob LLVM of their own. The only thing stopping any of that from happening is GCC remaining a competitive alternative.
If LLVM came to dominate the compiler scene to such a degree that GCC were irrelevant, it would open the floodgates to any major company forking LLVM into a proprietary, paid-for compiler that they require on their OS and profit from. That covers anything from what Apple is already doing with Swift (creating a programming language with a proprietary compiler) to going at it from the other end and implementing a proprietary backend, so that you never need to publish your ISA (something Nvidia never does) while still letting all the LLVM compilers target it.
Right now, nobody can close off their LLVM contributions, because a hobbled and fractured LLVM ecosystem is one that cannot overcome GCC. In the same way, Apple and Google had to cooperate on WebKit until it was so dominant that they were in a position to fork and do their own things with it, once Gecko was rendered effectively irrelevant. The difference is that WebKit is LGPL while Clang has its own permissive license: with WebKit they still cannot directly modify the free software parts without redistributing the changes, so it is harder to make a proprietary WebKit, though Apple has arguably succeeded, since they have their own proprietary patchset on top of trunk WebKit nowadays.
But Eric Raymond convinced me that it's irrelevant ( http://esr.ibiblio.org/?p=928 ):
"If we live in 'Type A' a universe where closed source is more efficient, markets will eventually punish people who take closed source code open. Markets will correspondingly reward people who take open source closed. In this kind of universe, open source is doomed; the GPL will be subverted or routed around by efficiency-seeking investors as surely as water flows downhill.
"If we live in a 'Type B' universe where open source is more efficient, markets will eventually punish people who take open source code closed. Markets will correspondingly reward people who take closed source open. In such a universe closed source is its own punishment; open source will capture ever-larger swathes of industry as investors chase efficiency gains.
"In a Type A universe, reciprocal licensing is futile. In a Type B universe, reciprocal licensing is unnecessary. In neither universe can the GPL’s attempts to punish what we regard as misbehavior have more than short-term, temporary effects."
Also, assuming the case that Open Source is more efficient, some of us don't want to wait around for inefficient closed-source-based companies to fold, because they can use the large savings they're sitting on to take a very long time thrashing around and doing damage before dying or adapting. If copyleft licenses can speed that process up, great.
Finally, there's a false dichotomy here: closed-source and open-source are not on a single scale of goodness measured by "efficiency", and the market does not perfectly adjust to maximize efficiency.
ESR's type-A and type-B universes both presuppose free markets and perfect market efficiency. There's a difference between wanting that and assuming that it's already the case.
Uh, yes, that's one of the reasons the license itself doesn't make much of a difference. If you want to fork the Linux kernel and not give back -- and companies do that in our world -- you'll quickly learn that the license is the least of your problems with regard to staying up to date.
> Also, assuming the case that Open Source is more efficient, some of us don't want to wait around for inefficient closed-source-based companies to fold, because they can use the large savings they're sitting on to take a very long time thrashing around and doing damage before dying or adapting.
That's a valid point. Then again, reciprocal licenses are based on the assumption that, fundamentally, they'll get faster results with legal action (or threatened legal action) than you can expect from the added costs of running a fork of an open source project. When your first cease-and-desist letter works, that's true. Then again, the IBM vs. SCO fiasco showed how long it can take to get vindication from the courts.
> ESR's type-A and type-B universes both presuppose free markets and perfect market efficiency. There's a difference between wanting that and assuming that it's already the case.
You don't have to have perfect market efficiency. Thinking about how things would work in a perfect world is often useful for understanding the imperfect world we actually do live in. The world we live in does involve a very impersonal market that will relentlessly tell you when you're in the wrong business, or trying to do something in a silly way. People have an ability to ignore the market's message, but it certainly exists.
Whether proprietary or free software makes sense is not a universal truth. In a world without copyright, the mechanisms of software profiteering and the utility of open or closed source change radically, since anything and everything is effectively permissively licensed.
On one side, it would take the profit motive out of proprietary software. Without copyright you cannot prosecute users who redistribute your binaries, and thus it is "hard" to get people to buy them from you. I would not say impossible, because I would not presume to have considered all the possibilities in such a foreign context. But in the general case, without copyright, it becomes impossible to profiteer off the false scarcity of information in the form of copies of software.
I say that because it's important to contextualize why people strive to close off and lock down their software by depriving users of software freedoms: copyright control, control of distribution, and the right to a monopoly over the idea mean profit. If you take that away, suddenly there is little incentive not to develop the software you want communally with others who want the same software, because your options are either to do all the work yourself, closed, and have everyone else use it anyway, or to open it up, have others contribute, and lessen your burden. The mechanism, though, does not change: if you are distributing your software (which is the only way the GPL even takes effect to compel source release), in a copyright-free world it makes more sense to at least release the source as an act of security. You cannot even sell visibility; i.e., if someone wants to derive from you, you cannot extort them to see the source, because if you do take money from them to see it once, you cannot legally compel them not to then release it once you have willfully given them a copy.
In that political paradigm - not a universe, not a fundamental rule of reality - software freedom just makes sense almost all the time. And the edge cases where it does not are much easier to overcome, because today free software is an uphill battle against corporate interests who use their power over software to extort their users of revenue to then fund the enhancement of their product. It is why photoshop and office are so hard to contend with, because they get so much money by using this framework of information monopolies to entrench themselves perpetually.
There are a lot of ways to frame a society that could in theory bias it towards free or proprietary software, which is in part why I don't always agree with RMS. I love the GPL and what it means in our political environment today, but I also claim it is a highly flawed economic model due to the existence of state-sponsored IP in the first place. But there are many other ways to do things, and in any of those permutations free software may or may not be economically optimal, which means it is absolutely an oversimplification to claim that "universes where free software makes sense" exist. That is literally not seeing outside the box at all.
Frankly, I want the big boys on my side when it comes to opensource code. I want them to use the code I write. Usually I indirectly benefit from them using my code anyway, and they benefit, and everyone's happy. This is true even if they decide to make proprietary changes which don't get pushed upstream. I gain influence in my community and lucrative job offers. I get invited to talk at tech conferences, and my projects (present and future) attract more attention. I honestly don't see the downside here.
Apple might have a change of leadership and decide to swim against the current and make LLVM proprietary, but if that happens can't we just fork it? As far as I can see, MariaDB is doing just fine. And until that happens (which will probably be never), we can get some huge compiler ecosystem improvements on Apple's dime. All opensource.
Am I missing something, or is this fear of corporations totally unjustified?
For example, see shader compilers for GPUs. How many open shader compilers do you see around? Is there any motivation to open them?
On the other side, during the '90s we saw many vendors come out with new CPU ISAs, extensions of existing ISAs, new SoCs, etc. Many of them didn't have the resources or the will to write a new C compiler, so they wanted to use something existing. They were willing to write a new GCC backend - and the GCC license basically forced them to be open. After that, there was no point in keeping the ISA specification secret.
LLVM/clang does not have this effect. It pretty much rewards being closed. So today we have shitty shader compilers (especially on ARM SoCs) for secret ISAs, and you aren't going to see their sources anytime soon.
I would love to hear from anyone with more expertise in this realm who might be able to dispute this claim in any way... otherwise this seems to be a smoking gun in GCC's favor!
The situation with architectures in 90s sounds about right: Those that were kept closed (and where a gcc backend was thus no option for the vendor - mostly embedded stuff) have to this day shitty compilers with unpredictable optimizers.
I'd also like to note that LLVM/clang didn't gain large non-Apple marketshare until GCC adopted GPLv3, which has more to do with its stagnation than anything else, IMO. v2 is palatable to many business needs; v3 is not.
The rubber apparently meets the road with the NVidia shader compiler.
Only if you can convince people to use your architecture. The tools being closed is a minus in that, and must be taken into account with a lot of other stuff. A closed dev environment may well drown a brand-new ecosystem that you're trying to bootstrap.
Sony recently contributed back a ton of LLVM and clang stuff from their PS4 project, by the way. Why would they do that if keeping it closed was so rewarding?
Apple and Google do not contribute code freely, nor do they contribute a significant amount of their code. The contributions are limited to areas in which an advantage exists. One only has to look at the machinations present in other development platforms to realize the threat. Consider what's happened with Java in recent years. Or look to Swift. Or to the entire Microsoft ecosystem, which was built in part on a foundation of open source. BSD-licensed code permeates the Windows environment; to ask "what's the concern" betrays a rather stunning ignorance of Microsoft's behavior over the three prior decades.
As opposed to their own ineptitude (e.g. Vista and Windows 8) or the changing of the guard from original founders?
I'm certain open source played a role, but I suspect a secondary one. Heck, if post XP Windows didn't suck so much, I and my parents (who nowadays run what I build them) would be using it instead of Linux for our desktops.
Linux is still not a significant player on desktops. Microsoft is still completely dominating that space.
Windows has the mindshare of the masses. When many upper-middle-class white Americans want to write a document, they can only fathom Word. When they want to do a spreadsheet, they can only fathom Excel. When they want to draw, they can only fathom Photoshop.
It isn't about options or features or anything; I'm talking about the supermajority of people who can no longer comprehend the existence of anything but what they know - where being presented with Linux destroys their world view. They talk about OS X like it's an Easy-Bake Oven rather than another computer, or as if it's another desktop UI for Windows that also runs Office.
Which is why Microsoft's open source efforts are pretty much all on the developer end. They know their userbase is completely ignorant of everything else, just the way they intended, and it would take years of retraining to push the public consciousness away from the mindset that Microsoft Windows is the personal computer and everything else is some gadget.
Servers are more complicated. In the mid-90s Windows NT started dropping in quality, and the much older decision to have mandatory file locking resulted in situations where creating a server with a major MS server application could require ~ 20 reboots. And many more bug and security fixes require reboots than they do on UNIX(TM) based/inspired platforms.
Then one could argue ineptitude in marketing when Microsoft didn't cut deals that could have made their software competitive for mass installations. I really wonder about that, because so many of these need source, but it's "a path not traveled", except internally with Azure.
This is simply false, actually. But of course, you have no evidence of this, only rhetoric, while I actually see literally every code contribution Google makes.
More than 10%?
More than 1%?
You are certainly aware that the vast majority of code is under strict restrictions and will be leveraged for competitive/controlling purposes rather than being shared. Employees wishing to freely contribute code in these domains will have their requests denied.
We've both been employed by large SV companies; we both know how this works. The majority of software will be used in an attempt to control the market.
None of this is true.
I mean, literally none of this.
I don't even know where to begin.
Google does not open source the VAST majority of their code -- it remains tightly restricted. Surely you agree this is an accurate statement?
As written, actually, I don't agree with it.
Google has open sourced > 100 million lines of source code, depending on how you count.
I can't tell you what percent this is, but it is quite significant.
It is the vast majority of a number of products, and not the vast majority of a number of other products.
In fact, for some subsidiaries, all of the code is open source. For some, it isn't.
So your statement depends on a lot - who are you counting, what is "their code" (Code we've written, code we've modified, or code we use), etc
If you make a detailed enough statement, I'd probably agree.
But as written, there are plenty of cases where Google open sources the vast majority of its code.
What has happened with Java in recent years?
One thing I can think of is that the 'official' Sun/Oracle JDK has gone from being closed source, to having a second-class GPL'd derivative, to being built on a GPL'd core. The amount of proprietary closed-source code has gone from 4% to 1% to nothing that doesn't have a free replacement today.
Your feigned ignorance is disingenuous. An enormous legal battle over the platform took place within the past decade.
We are very lucky that the outcome was favorable and that the platform has been able to continue to improve.
Sun Microsystems basically grabbed BSD development by hiring up a lot of good people and running with it. It took a while for the various free BSD's to come into their own.
Basically, with "bsd licensed" software, if a big company hired up all the developers, they could take it proprietary and out-compete fork efforts.
I don't think it happens often, but it's not impossible, either.
The AT&T lawsuit, and the end of DARPA funding for the BSD research group at UCB, didn't have a lot to do with that? Those came several years after Sun got in bed with AT&T and announced that BSD-based SunOS was doomed.
I don't know, but I wonder.
There's also the fact that the internet was far less available in that day and age, making it more difficult to get good people involved.
I don't think it's a big risk, but like I said, I don't think it's impossible either.
Well, look, sure: if you hire all the developers who are working on something and understand it, you can probably out-compete other implementations, even if all you have the old developers do on your proprietary project is write up specs from which a different set of developers builds a legally non-derivative, interoperable implementation.
That may be a risk with permissively-licensed Free software, but it's also a risk with copyleft Free software and proprietary software.
To resolve your confusion in this area, perhaps you should consult an Oracle.
Yet, PostgreSQL survives, and is doing well. Ditto for Apache. That is enough evidence that there's something wrong with the usual modeling... Ok, maybe not enough evidence for you to feel secure on the viability of big non copyleft licensed software, but you should at least take it into account.
(a different number simply don't patent stuff they contribute to LLVM)
In particular, I think gcc has played an essential role in providing free software to users because it is licensed under the GPL. The BSD license is great for less important things, but the moment giant corporations have engineered the whole "open source" ecosystem so that they can distribute forks of all the basic build tools without contributing changes to the public where we can see and influence them - that's the moment we've handed over the keys to the kingdom. You have to be pretty out of touch with history not to see that point.
To hear the way HN talks about RMS, he is a nerdy, smelly, arrogant, technically ignorant Emmanuel Goldstein, representing everything we hate most about the nerdy computing world that pre-existed the current startup gold rush (but which, coincidentally, entirely enabled it and us). Now a large proportion of us here secretly harbor the belief that we are the next Steve Jobs, so we pine for the good old days when people like that made bank on companies whose business models were entirely based on platform lock-in. Because so few of us actually remember how fucked up it really was for all the users and programmers. Because we don't consider ourselves users and programmers, just temporarily embarrassed millionaires and Chief Engineering Architect Engineers. So it's no wonder that HN still doesn't understand the point of GPL. Just like most of Marin county now thinks measles is something to cultivate, like acidophilus.
So when in history has this actually occurred?
> As mentioned earlier, in any case I will happily accept and install LLDB support into gud.el. So as long as I'm Emacs maintainer, your opinion on whether this might ruin the FSF's goals are not relevant.
My guess is (based on his past statements) that he doesn't browse the web, or browses it through emacs via a script that fetches pages and mails them to him (really!), and doesn't use search, so he can't really find out what LLDB is.
Do you really want to be beholden to a person who blocks software integration (into a unified debugger interface) and then can't even do the research to justify it?
Just FYI, pg described his browsing setup at one point a few years ago, and it was actually fairly similar.
(It was in a follow-up comment to "Disconnecting Distraction", if I remember correctly. I don't do it myself, but it can be a great way to force yourself to be productive and only read the things you really want to read, instead of getting sucked into aimless browsing. When you think about it, it's really just a poor man's version of Pocket or Instapaper.)
I've never heard him flat-out say "why", but it sure appears to be in order to maximize anonymity / privacy on the web.
(Obviously not at any cost, he chose his successor(s) very well, so they are trusted and are committed to Free Software)
In the linked thread RMS is a bit detached from reality, IMO, but still very reasonable. In the other thread where this whole LLDB drama started he was simply obnoxious and a bully, even insulting and driving out a contributor.
Then again I first became aware of this latest cycle when I think a message from him threatening a fork became a Hacker News topic (https://news.ycombinator.com/item?id=8861360), followed by several others with a lot of discussion.
"What does it take to get LLVM as performant as GCC" talk from 2014 LLVM Developers' Meeting discusses details.
"GCC versus LLVM performance analysis reveals the LLVM inliner 1) does not inline certain hot functions unless a high threshold is provided at -O3 2) produces larger and slower code at -Os."
The problem with LLVM's inliner has been known for a long time. One of the best discussions is the "Optimization in LLVM" talk from the 2013 European LLVM Conference.
However, I would also say that generally I've found LLVM is now producing faster code than GCC in most code I've tested both compilers with.
I think it is entirely possible that the LLVM developers are using the wrong benchmarks. The benchmarks are mostly SPEC and some large Google C++ codebases; in some sense both are quite atypical. But then, the entire problem is understanding what typical codebases look like.
Assuming we stick to x86/x64, nowadays (literally, let's say as of January 2015) GCC and LLVM are within the noise for most people on most code (i.e., within 1-2% of each other).
You can certainly find benchmarks where LLVM does badly. Some are important to some people, some aren't.
It is harder to find benchmarks where GCC does badly.
Small benchmarks can go either way, but for large codebases (especially C++) the inliner is more important than just about anything else. So GCC wins, because it has the better inliner.
Anecdotal, but I've seen similar improvements over g++ in my code.
More on this idea here: http://blog.pyston.org/2014/12/05/python-benchmark-sizes/
Except that it isn't copyleft, which is one of the most important metrics to Stallman, FSF et al, and is why they're unlikely to stop defending it.
I'm not sure what you could do, that would also be attractive for people to use and contribute to, that would add enough GPL content to make such a thing fly even in theory.
Maybe add a bunch of the GCC backends to LLVM? That's where it's most conspicuously behind GCC. There's also the precedent of GCC derived pre-Clang front ends, although I don't know how many of the non-C and C++ GCC front ends are seriously important (there's Ada, but that's got its own complexities).
To do that you need developer support for the fork, so it won't happen now. It won't happen until enough people are sufficiently upset with the current development path, and if/when that happens it will probably not be due to the license.
For my own use-case (a high-performance photorealistic renderer threaded using TBB), my clang builds outperform my gcc builds, but of course that's completely anecdotal and based on just my own use-case.
"LLVM’s license is not a “copyleft” license like the GPL."
Rights to copy the source is not what copyleft means.
Free/Libre software: yes (https://www.gnu.org/philosophy/license-list.html#GPLCompatib...)
Obsolescence happens; this is nobody's fault. It will happen to
clang/LLVM someday, too, but today is not that day.
For those who want to read it from the start: http://lists.gnu.org/archive/html/emacs-devel/2015-02/msg002...
What made it a mega-thread was when RMS weighed in: http://lists.gnu.org/archive/html/emacs-devel/2015-02/msg003...
RMS's goal seems to be to have GCC not be replaced by LLVM. From a reply downthread of ESR's post:
> This means it is more than a potential problem.
> The possible harm is to replace a copylefted GNU package
> with noncopylefted code. They must have worked for a long long time
> to replace the capabilities GDB already had.
So RMS thinks it's a "problem" and "harmful" if lldb replaces gdb. So RMS cares not just about what GCC does, but about whether other people adopt GCC as well. In terms of that goal, it does not matter what RMS considers more important. RMS is not going to convince everyone else to use GCC instead of LLVM by simply restating his arguments about copyleft forcefully again and again. That doesn't mean he has to sacrifice his feelings about copyleft. But it does mean he has to give a damn about being technically superior to LLVM and lldb if he wants to beat them.
It would be detrimental to GCC to lose developers, sure, but I think one of the reasons for the disconnect between RMS and others is that RMS's goals do not require a large user base, and so he is willing to make decisions that seem counterproductive to anyone who puts usability and user acceptance first.
The mere continued existence of GCC (and the other GNU tools) in many ways safeguards the freedoms he cares about: It allows users to jump ship if they in the future are prevented from doing what they want with the alternatives. It's not the ideal scenario, but it better serves his goals than giving in, and potentially see these freedoms slipping away at some future point.
Of course he'd be better served by GCC outcompeting LLVM. But if that isn't happening, his goals are better served by slowing developer migration than "capitulating" in a way that might affect developer mindshare by putting LLVM tools in front of more people.
As you say, of course the problem with this is that a lot of us care more about the technical superiority. Especially when the competition is a project that is as open as LLVM.
I don't understand what's with all the "GCC sucks" attitude these days. It's worked for quite some time, and yes, it's showing some age and has lagged behind others due to a lack of development, but we should all be worried when people very influential in the GNU community start talking about why GCC is bad.
I don't like this line of thinking at all.
If GCC is behind, fork it and do what needs to be done to make it competitive.
LLVM has its own debugger. It is just RMS realizing people are abandoning GCC en masse, and not liking it. Just that.
The LLVM developers wanted to develop it in ways that the GCC people did not, so they created something from scratch, after trying to modify GCC and finding it too complex.
It is not simply a fork, but a complete redesign.
LLVM main advantages are:
Instead of being a monolithic compiler like GCC, LLVM is a series of interoperable libraries and tools. This way you can program your own compilers, or parsers, or debuggers, just by including the libraries you need.
The above means you don't need to drive everything with scripts like in GCC; you can actually program a compiler very easily.
You also don't need to use the linker if you just want a parser. Or you don't need the parser if you already have a stored or machine-generated abstract syntax tree.
It uses its own cross-platform "assembly" language (LLVM IR), so you can use it with dozens of different languages, or compile dozens of different languages.
You can use it for whatever you want, even making closed source software.
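For a taste of that cross-platform "assembly" (LLVM IR), this is roughly what a front end emits for a trivial C function like `int add(int a, int b) { return a + b; }` -- a hand-written sketch, not verbatim compiler output:

```llvm
define i32 @add(i32 %a, i32 %b) {
entry:
  %sum = add nsw i32 %a, %b   ; target-independent, typed, SSA form
  ret i32 %sum
}
```

Every front end lowers to this one representation, and every optimization and back end works on it, which is what makes the library-based design composable.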
The "problem" RMS has with llvm is its non-copyleft license. And yes, the issue is that he doesn't want to support llvn with GNU tools. When LLVM started they used GCCs front end to compile C code until their own matured enough. So parts of GCC were being used to develop the middle and back ends of llvm. The LLVM ecosystem is systematically replacing the GNU toolchain with non-copyleft licensed versions and RMS does not want to support that in any way.
>> I don't understand whats with all the gcc sucks attitude these days. It's worked for quite some time, and yes it's showing some age due to the lack of development so it has lagged behind others, but we should all be worried when people very influential in the GNU community start talking about why GCC is bad.
I don't understand it either. GCC is still a great compiler. Developers seem to prefer the modular design of LLVM and they're probably right in that. Users like some of the features enabled by that design as well - IDE integration and cross compilation come to mind. GCC is starting to move, but slowly.
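That modular design is visible even from the shell: each stage of the pipeline is a separate tool you can run on its own (illustrative invocations; `hello.c` is a hypothetical input file, and these assume an LLVM toolchain on PATH):

```
clang -S -emit-llvm hello.c -o hello.ll   # front end: C -> LLVM IR
opt -O2 -S hello.ll -o hello.opt.ll       # middle end: IR-to-IR optimization
llc hello.opt.ll -o hello.s               # back end: IR -> target assembly
```

An IDE or analysis tool can link against just the front-end libraries and stop after the first step, which is much harder to do with GCC's internals.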
>> I don't like this line of thinking at all.
>> If GCC is behind, fork it and do what needs to be done to make it competitive.
You make it sound like there are lots of compiler developers with time on their hands for open source development AND who share the licensing philosophy AND are unhappy with GCC's development path. Apparently there are not.
There are lots of things that are simply very hard to do with GCC, but are easily doable with LLVM.
IDE integration is just one, because it has the ability to recompile just the changed lines (at least Apple's version does).
Things we have done with LLVM:
Rendering the 3D paths of millions of molecules.
Automatic testing of software and hardware.
Simulation of military vehicles doing all kinds of things.
Digital crash test.
Natural language (speech) understanding.
Before LLVM doing all this took years, now it takes months or weeks.
This exploits a compiler's ability to understand languages; it is not just compiling C or C++ like GCC does.
I wonder if any of the big distros will start compiling with Clang instead of GCC?
However, I don't agree that this means we should just jump on the LLVM train. The world still needs GNU. And LLVM isn't GNU.
The GNU community has historically held dominance in the compiler field, so this is an uncomfortable time. We can no longer rely on the popularity of GCC to keep GNU in the forefront. However, this doesn't mean we should just give up--I think the solution is to start again from first principles and build a better system, an alternative to GCC that is also released under the GPL.
I'm not saying we should drop support for GCC. But we need to innovate: GCC became dominant because it was innovative and it lost dominance because it stopped innovating. LLVM isn't the only non-GNU competitor. It's telling that none of the major new languages (Go, Rust, Clojure, Scala) are released under the GPL.
At the same time no developer of a fledgling language is going to worry about someone coming in and taking their permissively-licensed code without contributing back, because getting to the stage where someone cares enough to seriously fork your language already implies an enormous relative degree of success.
I think you're just perceiving a difference in how it's expressed by each of them; e.g., ESR has a very clear goal of increasing the quality of software, which for me is the big difference between "Free" and "Open" software.
Oh, he is certainly ego-driven, but not in the same way as Jobs was, for example. RMS does not put his person before everyone else; rather, he lives through his principles and tries to convince everyone why it makes sense to follow them. And he has a very solid rationale that he has developed through the years, making him very articulate.
No, he's driven by a very clear goal to prevent non-free software, even if that means preventing free software that might, potentially, in the future, be used by someone, somewhere, to create non-free software.
And I think there is a certain amount of ego in there that gets in the way of good judgement about means. He tends to take actions whose natural result is that the free software he protects from involvement in producing non-free software loses mindshare, either to non-free software or to free software not wrapped in his preferred restrictions. That is contradictory to his purpose: not only does the software that isn't crippled to prevent it from contributing to non-free software win, but the winner is itself either non-free software, or non-copyleft free software that can contribute to non-free software all the more readily.
(ESR is well known for many things that aren't true.)
I know OpenMP/Clang, but as far as I know OpenMP is not upstreamed in clang yet, is it? If not, it's not superior yet in scientific computing :).
But it will happen soon. And at the very least, LLVM's approach has created a large ecosystem that gcc did not have.
For starters, Apple's money is not the main driver of LLVM (In fact, publicly, Apple is not the #1 contributor anymore).
Second, "but merely the fact that compiler technology has advanced significantly in ways that GCC is not well positioned to exploit. " is simply false
In fact, that's exactly the problem for GCC: compiler technology has not really advanced at all.
GCC caught up to everyone else for the same reason.
Time for a history lesson.
About 14 years ago, a group of folks including Diego Novillo, Jeff Law, Richard Henderson, Andrew MacLeod, me, and Sebastian Pop (along with bug fixes/changes from a lot of others) sat around and built a "middle end" for GCC.
Prior to that, GCC had a frontend and a backend. The frontend was very high level (and had no real common AST between the frontends); the backend was very low level.
There was nothing in between.
We cherry-picked the state of the art in compilers and research, and built a production-quality IR and optimizer out of it.
This research has not really changed that much in about 10-15 years. Most of the research these days focuses not on straight compiler opts, but on things like serious loop transforms, and helping runtimes (GPU, GC, etc), or dynamic languages.
You can see all the tree ssa work here: https://github.com/gcc-mirror/gcc/blob/master/gcc/ChangeLog....
This covers only until the branch was merged. At that point, it was "not a piece of crap", but this was before people added all the stuff on top of this architecture.
On top of that architecture, it took another few years to get good, and a few years after that to get really good.
Bringing us to today.
LLVM was started around the same time, but had fewer contributors back then.
Essentially, you could view it as: "instead of building something in between two really old parts, what could we do if we just redid it all?" People thought it was a waste of time for the most part, but Chris Lattner persevered, found a bunch of crazy people to help him over the years, and here we are.
Because you see, it turns out compiler technology has not really changed at all. So, algorithmically, LLVM and GCC implement the same optimization techniques in the middle of their compilers. Because there is nothing better to do. Just slightly different engineering tradeoffs. To put it another way: outside of loop transforms, compilers for static languages targeting CPU architectures are essentially solved. We know how to do everything we want to do, and do it well. It just has to be implemented.
So given enough time/effort, LLVM and GCC will produce as good of code as each other there. The question becomes "will they keep up with each other as engineering/tuning happens" and "who can generate great code faster".
The problem for GCC on this front is threefold:
1. The backend, despite being pretty heroic at this point, really needs a complete rewrite, but people value portability over fast code.
LLVM, having started completely from scratch, has a modern, usable backend. They are not afraid to throw stuff away.
2. For any given thing you can implement, it's a lot easier to do it in LLVM than GCC, so, given time, LLVM will produce faster code because it takes less work to make it do so than it does to make GCC do so.
3. Because it was architected differently and more modernly, clang/LLVM are significantly faster at compiling than GCC. GCC can remove most if not all of the middle end time (and does), but it's still slow in other places, and that's really really hard to fix without fundamental changes (See #1)
Certainly researchers who are working on this sort of thing today are doing it in LLVM or some custom framework. I can't imagine GCC has any significant traction at least.
The rest (effective cache/register/op usage) is all subsumed by cost models for polyhedral loop transforms.
See PLUTO (http://www.ece.lsu.edu/jxr/pluto/) and PLUTO+ (http://dl.acm.org/citation.cfm?id=2688512)
As long as there are people who care about freedom, people will maintain GCC. Even if the worst case happens, and there is extreme fragmentation of proprietary patch-sets to LLVM, we can always still use GCC, or even still use the free parts of LLVM/clang.
While there is some issue with fragmentation of the developer community, I see this as a non-issue. These things generally work themselves out through the natural ebb and flow of chaotic systems (just like the economy is largely self-regulating).
We can always still use GCC. I don't really see where the issue is, am I missing something?
They have about 5-10 architectures and about 40-50 architecture variants to catch up to GCC. It's doable, but it will take about the time it took GCC to get there, and the result will be that LLVM becomes the kind of unmaintainable mess that GCC is considered to be now.
One would assume that the author of "The Cathedral and the Bazaar" would know a bit or two about the lifecycle of open source software.
Not all of those architectures and variants are important going forward (I'd actually be very interested in lists of them).
GCC was deliberately designed, and has been maintained, to be harder to reuse in pieces (one of the major points of these recent prominent debates).
C++ is more maintainable than C. (I don't know that I buy this at all, in fact, I'm about to dive into LLVM's source code to see if it could possibly prompt me to revoke my oath to never program in C++ and Perl again unless absolutely necessary :-).
The last two, plus LLVM's Bazaar model of development in part enabled by those technological differences, means it won't be an unmaintainable mess if and when it grows out like that.
I have no idea if this will be true. I'd like to hear from seasoned developers who are also seriously familiar with the LLVM architecture, development model and code base (per the above, I rather hope I won't become one of the latter).
> [[[ To any NSA and FBI agents reading my email: please consider ]]]
> [[[ whether defending the US Constitution against all enemies, ]]]
> [[[ foreign or domestic, requires you to follow Snowden's example. ]]]
He's such an adorable ideologue.