Compilers in OpenBSD (marc.info)
171 points by hebz0rl on Aug 1, 2013 | 74 comments



A decent picture of the history is given in the OP:

* "gcc 2.5 (at the time) had a few bugs, but not many"

* "schism between gcc 2.8, conservative...and the ``Pentium gcc'' group...[Pentium gcc group was] stretching the optimizer code beyond its limits"

* "These projects eventually merged as gcc 2.95...gcc [now] had bugs"

But what does this historical lesson tell us?

Stallman was conservative, slow-moving and cathedral-like with 2.8. This approach helped keep bugs out of the code.

The "pentium gcc" (Cygnus/egcs) group was quickly responding to marketplace needs. It was more bazaar-like. It committed code more freely than Gnu would - and the code while allowing for new functions was not always that well architected.

So what was the deal with this schism and the subsequent merger? What happened was that egcs (the Cygnus-oriented one, the "pentium gcc" one) began eclipsing gcc, and toward the end it really began eclipsing it. It was not as solid as gcc, but it had all the new functionality people wanted. All over, people were seriously about to abandon gcc and go with egcs. At this point Stallman threw up his hands and accepted that the egcs approach had won. They merged, and gcc became more liberal about what code it would commit, at the expense of being a solid code base. Just like the OP says.

So now what is different this time around? Why is a compiler which prioritizes stability and correctness over new functionality and optimizations going to win? The functionality-first approach won last time around, so why should the stability-first approach win this time? Especially since, in battles between cathedral/waterfall projects and bazaar/agile projects, the bazaar/agile approach seems to come out on top again and again. OpenBSD can afford to go this route if it wants because OpenBSD fills a marginal niche. It might even be interesting to watch OpenBSD go down this road. But for more mainstream OSes like Linux, this approach might not be possible.

And if anyone mentions Apple - Apple is not marginal, but it is a niche. GCC and Linux are in a multitude of environments. A company like Apple with its own ecosystem and only a handful of targets can afford to pick and choose its compiler.


Both you and Chuck see this situation as naturally arising out of some aspect of open source software. I see this as being simply a side-effect of particular compiler developers enjoying optimization more than portability. Couldn't it just be priorities on these teams that led them to this situation, without it being some kind of indirect result of some over-arching political/economic phenomenon?


I agree, and as an academic I can say that optimizations are a sure sell for an academic paper, and I think most new optimizations are first developed in academic work, whether in industry or at universities. Portability, on the other hand, seems like a hard sell, and if it sounds too much like engineering it is academic suicide.


I don't think Apple is aiming for "a compiler which prioritizes stability and correctness over new functionality and optimizations". I haven't checked, but I would not be surprised if there are cases where the compiler shipping with Mac OS X 10.q is obsoleted before they ship Mac OS X 10.q+1 a year later. Also, I think clang was the first compiler to claim to fully support C++11.

As to the subject at hand: OpenBSD aims for an OS "which prioritizes stability and correctness over new functionality and optimizations". Such an OS will want to use a similar compiler. And yes, I think one can draw an analogy between gcc/egcs and BSD/Linux here.


> Also, I think clang was the first compiler to claim to fully support C++11.

GCC 4.8.1 was the first compiler with complete C++11 support. Clang with complete C++11 support was released a bit later.


Didn't Clang/libc++ have full support before GCC/libstdc++ did? In my mind, and in the minds of most others, that is what actual full C++11 support means. GCC claimed it early, but only because the compiler supported it fully; afaict there were features that weren't supported in the standard library. Clang/libc++ fully support the language and the library, and I believe they were the first to do so.


http://gcc.gnu.org/onlinedocs/gcc-4.8.1/gcc/Standards.html#S...:

"For information regarding the C++11 features available in the experimental C++11 mode, see http://gcc.gnu.org/projects/cxx0x.html"*

That link shows that the compiler is feature complete, and refers (indirectly) to http://gcc.gnu.org/onlinedocs/libstdc++/manual/status.html#s... for library support. That has quite a few 'missing x,y,z' or outright "N" markers.

gcc 4.8.1 is from May 31, 2013; Clang/LLVM _claimed_ full C++11 support in June (http://blog.llvm.org/2013/06/llvm-33-released.html)

Does that mean that clang passed the finish line earlier? Maybe, but it could just be that the gcc project looks harder for bugs in their own project, thus placing their finish line farther out.

Frankly, it doesn't really matter who was first. It's way more important to know whether the compilers generate correct code and, if they do, whether that code is efficient.


Ok, that is a pretty compelling argument in favor of non-open source software.

I read it to say that GCC is so open source that it cannot converge on a stable release. Further, there isn't a non-commercial (aka free) incentive for making it stable, so it doesn't converge. Rather, it trundles along from one new optimization strategy to the next, constantly in a state of minor bugginess. The economics of 'sold' products use the loss of revenue as the incentive to maintain quality; without that incentive it's hard.

Google has (had?) a pretty good sized team that did nothing but maintain GCC. I'm sure it cost them easily $1M/year to keep that team going. There is no incentive for them to fund a team like that in a third party such that everyone else benefits from their work. Sure, they offer the changes back into the base product, and somewhere else there is another team working for company Y that is taking those and porting them into their effort. In this article Miod mentions himself and 5 other developers who are the "compiler people". 5 developers at $120K each is $0.6M/year before you add insurance and office space.

And those 5 have their lives made more difficult by the dozen or so folks who are committing changes that destabilize parts of the code or require side ports.

It makes me wonder how many people there are like me who would be willing to pay $100/year for a bespoke C compiler that was supported by a single source and stable.


I think the following quote from the article is more compelling:

While one would like to expect a minimum level of correctness and trustworthiness from a modern compiler, we can't, regardless of the compiler we use.

Notice he doesn't say "open source compiler" - he means all compilers will have issues. Much as it sounds like it would be nice to just pay someone for good tools and not have to worry about it, that's not the case (and never has been). The reason I look for open source solutions first is precisely because I know nothing is perfect, but at least when I have the source I have a chance of fixing it, or, if need be, of paying someone more expert to. Just paying someone up front for something I can't tinker on in no way guarantees an incentive to make it stable. I know this from experience. OTOH, I'm more than willing to pay (and have paid) for open source software. I often wonder if something like Kickstarter (without as much fanfare or pressure) might be a good way to fund LTS releases of open source software.


> Ok, that is a pretty compelling argument in favor of non-open source software. [..] I read it to say that GCC is so open source that it cannot converge on a stable release.

Nothing of the sort is true.

There are plenty of open source projects with stable releases, for example Debian, Ubuntu LTS, Firefox ESR.

Some open source projects focus more on stability than others. The same is true for closed source software.


Stable in what way? Unless you want some new C++ features or something provided by a newer compiler, use Debian's version or something.

The compilers are basically ABI stable right now.

If your complaint is support, well, yeah, nobody is going to prioritize fixing your bugs over someone else's if you don't pay them.

GCC does converge on stable releases, in the same way debian/ubuntu/everyone else does.

They declare they will not ship until there are fewer than X P1 bugs, Y P2 bugs, etc.

These bugs get fixed, and the compiler ships. Non-primary platforms and miscellaneous bugs are simply a fact of life.


> It makes me wonder how many people there are like me who would be willing to pay $100/year for a bespoke C compiler that was supported by a single source and stable.

Did you look at ICC / XLC / MSVC? They typically outperform GCC by about 20%, although I haven't checked in a while.


You should probably check again, but in any case MSVC shouldn't be compared to gcc/icc/clang - it can't even compile code written to a 14-year-old language specification (C99), so no sane person should use it for C development these days.


FWIW, Microsoft hasn't made an effort to support much of C99 because almost none of their users (Windows and Xbox developers) use C. I don't know any Windows or Xbox programmers who use C.

Your point is still valid: if you want to compile C99 code, MSVC is not even an option.


It depends on the features you use. If all you want is variadic macros, long long, __FUNCTION__, and stdint.h, it has those things (see the sketch below).

If you don't care about sticking to C, you can usually get what you want with a C++ feature anyway.

http://stackoverflow.com/questions/3879636/what-can-be-done-...
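
To make that concrete, here is a hedged sketch of that subset in one file. I'm assuming an MSVC of that era compiles it in C mode; the exact versions aren't re-verified here:

  /* Hedged sketch of the C99-ish subset listed above. Deliberately
     absent: VLAs, designated initializers, and declarations mixed
     into the middle of a block, which were the usual MSVC sticking
     points. */
  #include <stdint.h>          /* shipped with MSVC since roughly VS2010 */
  #include <stdio.h>

  /* C99-style variadic macro plus the __FUNCTION__ identifier */
  #define LOG(fmt, ...) \
      fprintf(stderr, "%s: " fmt "\n", __FUNCTION__, __VA_ARGS__)

  int main(void)
  {
      long long big = 1LL << 40;   /* long long type and LL literal */
      uint32_t n = 42;             /* fixed-width type from stdint.h */

      LOG("n=%u, big is %s 2^32", n, big > 0xffffffffLL ? "over" : "under");
      return 0;
  }

The same file should also build with gcc and clang, which is roughly why this subset got treated as the portable intersection.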


Up to some point in the past, MSVC produced better code than GCC, but most recent benchmarks have them roughly equal. MSVC has actually gotten slower over the last couple of iterations, but not by a lot.


I have ICC-ARM, which has outperformed GCC for a while. I still use arm-none-eabi-gcc for things which I expect other people to build.


Actually, more like The Portland Group: http://www.pgroup.com/


I'm not sure MSVC produces faster code than gcc. ICC does, but MSVC doesn't.

Note: our build system uses three compilers.


In my experience gcc has fewer bugs than icc or xlc.


Google funds things for all sorts of reasons. Other people having gcc is not a business risk, and improving security by maintaining a stable gcc might be worthwhile.


This isn't necessarily an open-source problem, but a politics and leadership problem.


> I read it to say that GCC is so open source that it cannot converge on a stable release. Further there isn't a non-commercial (aka free) incentive for making it stable, so it doesn't converge.

I don't see how not having a LTS version supports that argument. Lack of support for previous versions doesn't really imply anything about the stability of the past, present, or future releases. I think they're mostly orthogonal concepts.


I thought that clang was open source? I don't pay a single penny to use clang & llvm. I've been using clang in FreeBSD for a few months now and it seems to build faster than gcc.


I've recently been playing around with it, so it will be an interesting comparison. As others have pointed out, if you don't like the rate of change in open source you can always stabilize a version and just use it, which I have done in the past. (Ubuntu 10.x on an Eee PC is a good example of that.)


For an example of a recent bug, try this: http://gcc.gnu.org/bugzilla//show_bug.cgi?id=56888 . Basically, the compiler tries to recognise code that manually does memcpy or memcmp and replaces it with the built-in version, even if you are compiling libc itself, in which case you get an infinite loop.
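
A minimal sketch of the failure mode that bug describes (not the reproducer from the report; the flags are real GCC flags, but exact behaviour varies by GCC version and options):

  #include <stddef.h>

  /* A libc's own memcpy, written as a plain byte-copy loop. */
  void *memcpy(void *dst, const void *src, size_t n)
  {
      unsigned char *d = dst;
      const unsigned char *s = src;

      /* At higher optimization levels, GCC's loop idiom recognition
         (-ftree-loop-distribute-patterns) can rewrite this loop as a
         call to memcpy(), i.e. into the very function being defined,
         so libc's memcpy ends up calling itself forever. */
      while (n--)
          *d++ = *s++;
      return dst;
  }

  /* Workarounds people reached for: -fno-builtin or
     -fno-tree-loop-distribute-patterns; the complaint in the bug is
     that -ffreestanding alone was not enough to suppress this. */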


My colleagues and I often lament how gcc is just so much smarter than all of us. Usually after spending a day figuring out why our code wasn't working, only to discover that gcc was doing something "clever".


This one sounds almost like peephole optimization of sorts. How exactly is that "clever"?


I assumed it was the ironic sense—it was intended to be clever but just ended up being a bug.


It does this for a very good reason, of course. If you are trying to build a C library, you already have freestanding issues; this is nothing new.


Not sure I follow your comment. This is with -ffreestanding, so how are you supposed to compile libc?


No mention of pcc[1] except a note that it was orphaned in the early 90s. I see here[2] that it's been removed from OpenBSD's base system. What happened to pcc? I had high hopes for it. Is there no chance it'd become a viable compiler? (FWIW, it's still included in the NetBSD base system.)

  [1] http://pcc.ludd.ltu.se/

  [2] http://comments.gmane.org/gmane.os.openbsd.misc/196817


But it can't even compile NetBSD (http://blog.netbsd.org/tnf/entry/portable_c_compiler), which is disappointing (it's much easier than Linux).


I've been watching PCC development from afar for a while. It seems to have stalled some time around 2011.


>The last de-facto LTS compiler we have had was gcc 2.7.2.1

The new de-facto LTS compiler is gcc 4.2.1, the last version released under GPLv2. After gcc switched to GPLv3, Apple and FreeBSD stayed on 4.2.1.


I'm not sure all of OpenBSD's platforms have code generation support in 4.2.1 - for example, the m88k issue given in the article.

But that is definitely a good place to stop/start (depending on how you look at it)


Apple no longer ships gcc at all.


4.2.1 is from 2007 and it hasn't been S'd for a LT by anybody.


FreeBSD moved to clang. I don't know about Apple.


"...but there is something I wish would happen first.

An LTS release of an open source compiler."

Surprising that this doesn't already exist--Apple, Red Hat, Ubuntu, etc. must all maintain what is in effect an LTS version of the compilers they ship, in the same way that OpenBSD does.


There's a very stable C compiler out there: Ken Thompson's 1990s rewrite of Dennis Ritchie's 1970s original. Unfortunately it wouldn't solve OpenBSD's problem, being too stable to have caught up with C99 and C++. But I find it interesting to see where Ken and Dennis took their language while the standards committee wasn't watching.

http://man.cat-v.org/plan_9/1/2c


No, Apple's compiler group moves very fast. They cut from LLVM/Clang ToT (top of tree) for each new release.


In the case of Apple (LLVM/clang), having an in-house compiler gives you a certain level of independence from those kinds of problems.


Forgive my ignorance, but what does 'LTS' stand for?


LTS: Long Term Support. It is a version that the vendor/community will support for a long time. Here you can find the policy for Ubuntu: https://wiki.ubuntu.com/LTS .


Long Term Support


I'm curious why compiler testing appears to be so hard. It seems to me that, given this C input, this AST should be built, and on this platform, this code should be generated. This should be testable through automated scripts, and numerous test cases could be created for new features, resulting in huge regression suites. Am I missing something, or is it the effort of putting together such tests that's the problem?


gcc does have a huge test suite; a sketch of what one test entry looks like is at the end of this comment.

The problem is that if you combine all the various flags that affect the compiler, across all the architectures, across all the platforms, in all its variants (cross compiler, native, the handful of libc and barebones variants), you're looking at too many tests to run no matter how huge an infrastructure you have to run them.

Another problem is that optimization depends a lot on context: given the amount (basically infinite) of C code that could surround any other piece of C code and affect the result, it's quite a hard task.

One interesting approach is Csmith (http://embed.cs.utah.edu/csmith/), which generates random C programs and looks for bugs.
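
For a feel of what one entry in that test suite looks like, here is a rough sketch of a gcc.dg-style DejaGnu test (directive spelling from memory, contents hypothetical). Each such file pins one set of options on one target, which is part of why the combinations blow up:

  /* { dg-do run } */
  /* { dg-options "-O2" } */

  /* Hypothetical regression test: compile with the given options,
     run the result, and the harness marks the test FAIL if the
     program aborts. */
  extern void abort (void);

  __attribute__((noinline)) int add (int a, int b) { return a + b; }

  int main (void)
  {
    if (add (2, 2) != 4)
      abort ();
    return 0;
  }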


Their PLDI paper, "Finding and Understanding Bugs in C Compilers", is an amazing read: http://www.cs.utah.edu/~regehr/papers/pldi11-preprint.pdf


> The problem is that if you combine all the various flags that affect the compiler, across all the architectures, across all the platforms, in all its variants (cross compiler, native, the handful of libc and barebones variants), you're looking at too many tests to run no matter how huge an infrastructure you have to run them.

As someone who at one point in time maintained such a compiler test system, I'll say that it isn't possible to get all combinations, but you can hit a reasonable percent of them.

A good compiler test run ends up running through millions of tests. It isn't for the faint of heart, but it is perfectly doable.


Producing a compiler that is robust on a single platform is relatively straightforward. You just bash away for a while; with wide usage the corner cases will be exposed and you can converge to a finished system.

Add multiple platforms, and it gets more difficult, because each platform has quirks and peculiarities you don't know about until you get a bug report.

Add optimisation of languages that are difficult to automatically reason about, and it gets harder again.

Multiply the optimisation difficulties by multi-platform difficulties and yes: it is hard.


No, as optimisations change both the AST and the generated code...


The problem is the optimization phase. Without optimizations bugs are extremely rare.


The only software that isn't buggy is dead software with no users. All software has bugs, and that is a fact of life.

Clearly OpenBSD developers haven't been involved with the compiler engineering communities, and their wishes have been neglected over many years; this is not news. Why? Because there are no _users_. Bugs don't get fixed by bitching about them: they get fixed when you get involved with upstream and write patches.

GCC can be as "conservative" or "cathedral-like" as it wants: if it does not produce sufficiently optimized code for the big users, it will be thrown out the window. Today, GCC is in active development and has more users than anyone else. The leadership is strong: RMS and many of the GNU people are heavily involved. Those are the facts.

LLVM is the other elephant in the room: from my experience posting patches on their list, they don't give a shit about getting llvm/clang to build linux.git, or many of the projects that currently use GCC. Although it might be technically superior (the code is much more readable and maintainable), the community is much too narrow. Moreover, the leadership is gone: most of the top contributors (Chris Lattner, Evan Cheng, Reid Spencer) seem to have lost interest in the project.

The proprietary compilers like ICC are mostly useful just in research. Sure, they produce highly optimized code, but they're black boxes that cannot be studied or tinkered with. I tried compiling git.git with ICC a few years ago out of curiosity [1]: pages and pages of totally pointless warnings; gcc and clang both clean-compiled git.git at that point.

What the community needs is a compiler project with a strong leadership that cares deeply about its users, not a dead "LTS" project that nobody else gives a shit about: nobody wants to work on a project that's in maintenance-mode. Hardware, programming languages, and compilers evolve constantly, and programmers must learn to cope with these changes.

Fwiw, I'd really like to see what "bugs" this guy is talking about. If they really don't care about hardware, programming language, and compiler technology advancement, why don't they just maintain a port of an older version of GCC? Why bother with new versions at all?

[1]: https://gist.github.com/anonymous/1367335


> LLVM [...] don't give a shit about getting llvm/clang to build linux.git.

The LLVMLinux project is making good progress towards this goal. http://llvm.linuxfoundation.org/


Refreshing to know that even settled projects have these same problems at their core. Depending on upstream developers is always a big risk you have to calculate well. It was always a pain and it only gets bigger in our agile *aaS world.


Snarky comment (be prepared), but I read *aaS as Shit as a Service the first time around.


The g++ 4.7 (and/or the libstdc++ that goes with it) from their pkg_add repo is b0rked at the moment. Exceptions don't work.


If people switch from GCC to Clang/LLVM in enough numbers that Apple think they can get away with it, Apple will, in a heartbeat, close the development of Clang/LLVM and make all new versions proprietary.


You understand Apple aren't the only developers of Clang, right? It's their project, but there's nothing stopping anyone else (say Google, who are also huge contributors) from forking their own version if Apple decides to do something that would surely hurt them more than help them.


It seems like they might like CompCert. An optimizing C compiler that comes with a formal proof of correctness. Downsides: GPL, doesn't quite implement the full C language (but it's getting closer), and no support for ancient crap such as m88k.

EDIT: I misunderstood the license file; the majority of the code is non-commercial-use only, and only a small part is dual-licensed. Still a cool project, even if it's not open source...


The OP's problem is with the support for the exotic targets and the more recent additions to the C standard (C99, soon C11). CompCert targets only x86 and PowerPC, and does not support all the features of C99, not to mention the GCC extensions that everyone has come to rely on. In fact, in terms of these criteria it is uniformly worse than GCC 2.7.2.1 (although it is better at having verified, bug-free semantics).


That was only his second complaint.

  "First, compilers are fragile. While one would like to expect a minimum
  level of correctness and trustworthiness from a modern compiler, we
  can't, regardless of the compiler we use."
CompCert (had it been truly open source) would have provided that trustworthiness. And it can be ported to new architectures with the confidence that these ports won't silently break by random other changes.
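
For reference, the trustworthiness CompCert proves is roughly of this shape (paraphrased from memory; the real statement is a Coq theorem over the formal semantics, with the usual caveats about source programs whose behaviour is undefined):

  % Rough shape of CompCert's semantic preservation theorem (paraphrased)
  \forall S, C:\quad
    \mathrm{compile}(S) = \mathrm{OK}(C) \;\wedge\; S \text{ does not go wrong}
    \;\Longrightarrow\; \mathrm{behaviors}(C) \subseteq \mathrm{behaviors}(S)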


There are two fundamental problems with CompCert: 1) there is no formal, machine-readable specification of the C language, and 2) there is no formal, machine-readable specification of the target architectures. CompCert may be formally verified, but it's not necessarily a C compiler (even ignoring that it implements only a subset of C), nor does it necessarily compile for the advertised architecture.


Addressing number 1) https://code.google.com/p/c-semantics/

Not perfect, but pretty good.


""The CompCert verified compiler is distributed under the terms of the INRIA Non-Commercial License Agreement given below.""

It's some non-commercial non-free license. Only certain files are GPL? I don't think this would be redistributable...


I love seeing posts like this from OpenBSD team members. I cut my chops developing on that platform and I have to say I've never encountered a more clean and well thought-out code base.


Hopefully the OpenBSD members will reach out to the FreeBSD members, who can share a lot of knowledge about their transition from GCC in the base system, over to Clang/LLVM.


FreeBSD maintains fewer platforms than OpenBSD.


Why not just have clang installed in /usr/local? And gcc 4.2.1 is good enough.


People seeing the value of correctness. Proofs >> Tests.


I don't think that's the message here at all. The only way to prove code correct is by reference to some kind of specification. If that spec is a standard full of implementation-defined behavior (and worse, contradicted in practice by all the other vendors) a correctness proof is not really going to convey what it sounds like it would.

What Miod is really asking for is an open source compiler that puts stability and portability over compiler optimization. If it weren't for optimizations and the endless fiddling that goes with them, gcc would have remained stable. It is already the only option that meets their portability requirements.


i agree that miod isn't talking about proofs. but i don't think adultSwim was making that claim, either.

> The only way to prove code correct is by reference to some kind of specification.

there are other interesting properties to prove besides "correctness of the entire compiler".

you could prove the entire compiler version n+1 emits the same code as version n, modulo $bugfix.

you could prove that individual optimizations are, on their own, correct with regards to the AST or IR that gcc operates on.

neither of those require a formalized spec of C. the first would probably make miod pretty happy. sadly one isn't going to get gcc manhandled into a proof assistant, ever.


Those are interesting and extremely good ideas, thanks for pointing them out.

I do wonder if OpenBSD would accept a C compiler built on a small functional language amenable to these kinds of proofs. I suppose fulfilling the portability requirements is priority one, then stability. I just wonder if they would accept something not written in C at all.



