
There were other reasons too. I worked at a place where we ported our software to 20 different versions of Unix[1]! gcc tended to be a lot less buggy than the vendor compilers, and what bugs it did have tended to be the same across all platforms.

gcc's error messages were also a lot better than the vendor compilers', as were its regular diagnostics. gcc -Wall was a huge boon to improving code quality.

If you were doing C++ for cross-platform software, then gcc was the only game in town. The vendor compilers were at different versions of the C++ spec, shipped different C++ standard libraries, tended to crash a lot, and had really bad messages. Heck, gcc even added a flag (-Weffc++) to generate warnings based on Scott Meyers' Effective C++ books.

When the vendor compilers started being sold separately, they also came with licensing daemons. Every invocation of the compiler would check with the licensing daemon to make sure you didn't go over your license limit. The license daemons also tended to crash or get confused, causing even more grief when trying to compile a large code base.

At the end of the day the vendor compilers were unreliable, buggy, unpleasant, and a pain to work with. gcc was developer-friendly and worked well. That earned it a lot of enthusiasm.

[1] You can get an idea of what development was like back then: http://www.rogerbinns.com/visionfs.html




If I may add emphasis... GCC's error messages were a HUGE improvement over existing compilers. Back in 1985, C compilers might tell you what line the error was on, but they also might not. GCC went further: it reported not only the line number but also printed a caret pointing at the precise token in the source that caused the error. This was a godsend.
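
For anyone who never saw the contrast, a caret diagnostic looks roughly like this (illustrative output; the exact wording and formatting vary by gcc version):

    undeclared.c: In function 'main':
    undeclared.c:4:5: error: 'x' undeclared (first use in this function)
         x = 1;
         ^

Instead of leaving you to guess from a bare line number, the caret sits directly under the offending token.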

In addition, GCC's error messages were human-readable. Unlike Microsoft, whose compilers and interpreters had long produced only line numbers and error codes, GCC wrote its messages in English, sometimes tuned further to fit the syntax at hand.

At the time, all this was a really big advance over non-IDE compilers (an advance which then died a thousand deaths when the cfront translator, and later OOP's general lugubriousness, made error messages illegible once again).


Very true! It's easy to forget how bad things were back then ...


Oh, but at least they had a sense of humor:

http://www.ralentz.com/old/mac/humor/mpw-c-errors.html


"I worked at a place where we ported our software to 20 different versions of Unix

If you're feeling odd or out-of-place, I'd like to point out that I had the exact same experience in the mid- to late-'90s.

Gcc also had the major advantage of compiling the same C on all of the machines. Yes, C was standardized and old at that time, but each compiler somehow found room to be slightly different.
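
To give a flavor of it: one classic dance of the era was the __STDC__ guard, because some vendor compilers still accepted only K&R-style function definitions. A hedged sketch, not from any particular codebase:

    /* Both definitions are equivalent; the preprocessor picks
       whichever dialect the compiler actually understands. */
    #ifdef __STDC__
    int add(int a, int b)    /* ANSI C (C89) definition */
    #else
    int add(a, b)            /* K&R fallback for pre-ANSI cc's */
    int a;
    int b;
    #endif
    {
        return a + b;
    }

gcc accepted ANSI C everywhere it ran, which let you drop most of that noise.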

Not to mention that the code it generated was pretty competitive in general with AIX's xlc and Sun's SPARC compilers.

P.S. Thanks for the licensing daemon flashbacks, dude. Thanks.


Heh, I remember this too. For a brief time in the late 90s I was responsible for porting NeXT's NetInfo to a bunch of different Unixes. I think, though, we mainly used gcc except on SunOS, because it was free and handled modern C syntax (of course, compiling gcc in the first place without the native compiler involved hunting down a [usually old] binary version). But that's 20 years ago...


Now I feel inadequate. I only had to deal with 8 or 10 versions.


Now I'm feeling inadequate. Growing up in an Intel-dominated world, you couldn't touch a SPARC or MIPS machine if you were a mere mortal. Everything accessible to me was on Windows, or the hard-to-load versions of *BSD or Linux. Nowadays we're beginning to get access to ARM machines like the Raspberry Pi, or 8-bit things like PICs or Atmels.

Funny thing is, high-cost compilers are suddenly becoming free, such as Microsoft's Visual Studio tooling on Windows. While gcc has been the gold standard because it's free, newer technologies such as LLVM are also beginning to thrive in the free and open-source world. Amazing times we live in!


Good news if you're interested in PICs and MIPS --- the PIC32 architecture has a MIPS core! And you can run RetroBSD on it:

https://olimex.wordpress.com/2012/04/04/unix-on-pic32-meet-r...


I work in the FPGA and chip-design world, and you have described our present-day tools to a tee, including the license daemons that tend to crash and get confused. This thread depresses me, but it also gives me a little bit of hope that maybe someday a gcc-like project will free us from our current bonds :-)


There was hope 20 years ago, when NeoCAD wrote an independent partition-place-route tool for Xilinx chips, but then Xilinx bought them [1]. No action since then.

I've thought about this over the last 20 years. In my opinion, anyone considering the task of reverse engineering existing FPGAs and tracking their ongoing development would have to seriously consider designing a free FPGA from scratch and writing the requisite tools instead. It probably wouldn't be much more work for a person with the requisite knowledge, and the result would be "more free", since you wouldn't have to follow the silicon manufacturer's elephant.

Then again, buying an FPGA and spending a heap of time reverse engineering it is more accessible to the average hacker than designing and building an FPGA chip.

[1] http://www.xilinx.com/publications/archives/xcell/Xcell17.pd...



At this point I'd be happy with just a Free multi-language simulator, let alone synthesis tools. There are a couple of Free Verilog-only or VHDL-only ones, but any modern project, for better or worse, mixes Verilog, SystemVerilog, VHDL, and probably some verification-language testbench stuff (Specman/Vera).

Also, the current free simulators don't have the advantages over commercial tools that gcc had over commercial compilers. They are slower, have worse error messages (hard to believe that's possible, but it really is), and have worse language support. So sad.


Writing GPL'd synthesis stuff is the #1 item on my "things to do when independently wealthy" list. Personally I'd start with reverse engineering Lattice's FPGAs. They have the most documentation and the simplest chips. Plus, as the third horse in the FPGA business, they might be the most amenable to the advantages that being more open would provide.


The state of EDA tools really does remind me of the bad old days of proprietary vendor compilers a few decades ago. It's so incredibly frustrating.


Speaking as someone who was there (https://groups.google.com/forum/#!msg/gnu.gcc/nMF9tvS8E-w/Op...):

It's not that gcc was less buggy. There were bugs in the gcc ports, too.

In my particular case (Convex), gcc would generate faster scalar code than the for-pay 'vectorizing' compiler would. This was very popular with the technical marketing folks (who would attempt to 'win' benchmarks in order to sell computers). I pointed this out to the compiler group and suggested that perhaps some emphasis on scalar code generation would be a good thing. (Amdahl's law, anyone?) I was ignored. At one point I threatened to put a trivial instruction scheduler into gcc, as gcc in those days didn't have any instruction scheduling; the Haifa scheduler didn't come about until the late 90s. Doing so would probably have beaten the Convex compiler on all but embarrassingly vectorizable codes (and those were normally written in Fortran anyway).

gcc still doesn't have a proper control flow graph, which is essential for interlock scheduling.

After I left (for Sun), it became cause for termination at Convex to use gcc in a customer-facing benchmark. As it turns out, the manager of the compiler group took offense. I was only trying to help.

Michael Tiemann made an overture about a job at Sun (working on gcc) based on that work. Yes, Sun was supporting the gcc work even as its compiler group was "unbundling" the for-pay compiler.

gcc was basically an example of filling a need. There was a good-enough compiler available that could be ported to new architectures.

These days we have llvm/clang, which is architecturally cleaner, and which I expect to eventually eclipse gcc.


Michael Tiemann has a friend who has to have the worst job in the world:

http://www.art.net/~hopkins/Don/unix-haters/slowlaris/worst-...


Great summary of just how bad that era was. I only saw the tail end of it, but I worked at a place which had (IIRC) something like 600 different platform + version combinations to test – most Unix variants, DOS, Windows, OS/2, VMS, OS/400, etc. – combined with the fact that OS upgrades were far less frequent than we now assume, and many shops preferred specific patches to wholesale upgrades.

In addition to the compiler itself, we tended to favor the open-source libraries after having hit various hard problems with the vendor versions, or a lack of support in general.

Amusingly, most of the vendors doing this sold the "Open Systems" Unix platforms which had successfully attracted customers away from the even more expensive mainframe/minicomputer market, before themselves losing customers to PCs running BSD and Linux by making the cost/support ratio less favorable.


I got to enjoy the real meaning of writing portable C and C++ code across DG/UX, AIX, Solaris, Linux, and HP-UX using the vendor-provided compilers.

Younger generations who think UNIX == GNU/Linux, *BSD, or Mac OS X might not imagine the "wonderful" portability across UNIX systems in the golden age of the UNIX wars.

Then again, we seem to be getting into distribution wars nowadays.

At least the C and C++ compilers are the same.


The worst codebase I had to support in that period was a serial communications library which cross-compiled for a bunch of Unix flavors and 16/32-bit DOS/Windows/OS/2. If memory serves, roughly half of that code was preprocessor directives.
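
For anyone who missed that era, a hedged sketch of the shape such code took (the macro choices and fallback strategy here are illustrative, not from the actual library):

    /* Pick a serial I/O strategy per platform -- illustrative only. */
    #if defined(sun) && !defined(__SVR4)
    #  define USE_SGTTY 1        /* SunOS 4.x: BSD-style tty ioctls */
    #elif defined(__SVR4) || defined(_AIX) || defined(__hpux)
    #  define USE_TERMIOS 1      /* SysV-ish Unixes: POSIX termios */
    #elif defined(__MSDOS__)
    #  define USE_UART_DIRECT 1  /* 16-bit DOS: poke the UART directly */
    #else
    #  error "port me"
    #endif

Multiply that by every system call that differed and you reach 50% preprocessor directives fast.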


Don't see too many folks these days with stick time on DG/UX. My neckbeard and I tip our beer to you.


That Dr. Dobb's article you link to is indeed wonderful - and very relevant today. I made a separate HN submission -- maybe we can get some interesting software design advice/discussion going?:

https://news.ycombinator.com/item?id=8988358

As for VisionFS - how sad to see yet another project locked up at Oracle. On that note: there's still no decent (free) support for DavFS on Windows (8 and 10)?



