GCC 4.9 Release Series – Changes, New Features, and Fixes (gnu.org)
134 points by beatbrokedown on April 12, 2014 | 39 comments



The things that caught my eye were both related to IBM hardware features.

Firstly, apparently IBM chips have hardware transactional memory now:

  PowerPC / PowerPC64 / RS6000
    GCC now supports Power ISA 2.07, which includes support
    for Hardware Transactional Memory (HTM)
  
  S/390, System z
    Support for the Transactional Execution Facility
    included with the IBM zEnterprise zEC12 processor has
    been added.
That's pretty cool.
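
At the source level, the portable way to write a transaction with GCC is the -fgnu-tm extension; here's a minimal sketch (whether it ends up using the new HTM instructions depends on the target and on libitm's hardware support):

  /* Minimal sketch of GCC's transactional-memory extension (-fgnu-tm).
     __transaction_atomic is a GCC construct, not ISO C; whether it maps
     onto the new HTM instructions depends on the target and on libitm. */
  #include <stdio.h>

  static int counter;

  static void bump(void)
  {
      __transaction_atomic {   /* runs atomically w.r.t. other transactions */
          counter += 1;
      }
  }

  int main(void)
  {
      bump();
      printf("counter = %d\n", counter);
      return 0;
  }

  /* Build with: gcc -fgnu-tm -O2 tm.c -o tm */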

Also on the 390:

  S/390, System z
    The hotpatch features allows to prepare functions for
    hotpatching. A certain amount of bytes is reserved
    before the function entry label plus a NOP is inserted
    at its very beginning to implement a backward jump when
    applying a patch. The feature can either be enabled via
    command line option -mhotpatch for a compilation unit
    or can be enabled per function using the hotpatch
    attribute.
I guess if you're doing high availability the mainframe way, you don't get to restart your apps to patch them. That's terrifying, but again, pretty cool.
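
As a rough sketch of the per-function form (the function name here is made up, and the exact attribute arguments have varied between GCC versions):

  /* Sketch only, S/390 target: marks one function as hotpatchable instead
     of compiling the whole unit with -mhotpatch.  GCC reserves padding
     bytes before the entry label and inserts a NOP at the start, so a live
     patch can later install a backward jump there without restarting the
     program. */
  __attribute__((hotpatch))
  void handle_request(void)
  {
      /* ... normal function body ... */
  }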


Windows system DLLs have seemingly pointless "mov edi, edi" instructions in their function prologues to enable hotpatching. It's not just mainframes ;)


Which I recently read about on Raymond Chen's blog. Link: http://blogs.msdn.com/b/oldnewthing/archive/2011/09/21/10214...


Yes, MSVC does this when the /hotpatch command line option is provided.


You can find a quick slide deck on POWER8 transactional memory and a general feature comparison here:

https://www-950.ibm.com/events/wwe/grp/grp030.nsf/vLookupPDF...


One of the major design constraints of Erlang is being able to hot-patch code, given the uptime requirements of telephone switching.


> Memory usage building Firefox with debug enabled was reduced from 15GB to 3.5GB; link time from 1700 seconds to 350 seconds.

This is huge: you can now do this on a laptop.


The 15 GB was presumably with LTO enabled; you could always build it on a fairly average laptop with LTO disabled. This is great news though, as LTO is pretty awesome.


For the uninitiated: LTO stands for link-time optimization, and it happens when the compiler merges/links all the separately compiled object files into one executable or library.

Although it seems obvious that this might be a good idea, why would it

1) use exorbitant amounts of memory; and

2) be "pretty awesome" instead of, say, mildly useful?

edit: And both questions were satisfactorily answered in the time it took me to peruse the preamble of https://en.wikipedia.org/wiki/Interprocedural_optimization

Thank you!


LTO, as the name implies, means that you do another optimization pass at link time. This enables lots of optimizations that don't work when you look at one module at a time.

Functions can be inlined across module boundaries, even when they're not declared inline. You can turn virtual functions into regular functions, if you know that the virtual function is never overridden, or if you can derive the exact type. You can change calling conventions for functions. You can do better escape and aliasing analysis. If a function is only called once, then you can probably optimize it a lot better because you know exactly how it will be called.
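
A minimal sketch of the cross-module inlining case (file and function names made up; -flto is the relevant flag):

  /* square.c -- separate translation unit */
  int square(int x) { return x * x; }

  /* main.c */
  #include <stdio.h>

  int square(int x);

  int main(void)
  {
      int i, sum = 0;
      for (i = 0; i < 1000; i++)
          sum += square(i);   /* with -flto, square() can be inlined here
                                 even though its body lives in square.c */
      printf("%d\n", sum);
      return 0;
  }

  /* Build:
       gcc -O2 -flto -c square.c
       gcc -O2 -flto -c main.c
       gcc -O2 -flto square.o main.o -o demo */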

As with all optimizations, not every program will see a significant benefit. Programs with heavy inner loops, like physics simulators and graphics processors, will not gain much, since the local optimizer already does well there. Programs like compilers, interpreters, and web browsers see larger benefits, and those benefits can be high: 30% improvements in running time are not unheard of.

In short, LTO is a high-cost, high-benefit optimization.


Link-time optimization is not a very descriptive term. It's usually called whole-program or interprocedural optimization. Link-time optimization is just how it has been implemented.

It uses a lot of memory because it requires keeping a representation of more or less the entire program in memory at one time, in a format which is amenable to analysis. It is not out of the ordinary for an optimizer's internal representation to be on the order of 1000x the size of the source code.


For an example of a compiler other than GCC that does whole-program optimization see the Stalin Scheme compiler: https://github.com/barak/stalin


As a Gentoo user, I always built it on my laptop. Build time was about 30 minutes.


Firefox recently (trunk as of December) started using unified sources, so builds are now typically 7-9 minutes on laptops for non-Windows platforms.


Well, you could do it on a laptop before; you just needed a much beefier laptop. But that's really impressive. How did they cut the resource usage to a quarter of what it was?


I am very curious too. Such drastic improvements often indicate that something was wrong before (either in design or implementation) and got fixed.


I bet it is caused by these two optimizations:

> Early removal of virtual methods reduces the size of object files and improves link-time memory usage and compile time.

> Function bodies are now loaded on-demand and released early improving overall memory usage at link time.


GCC 4.9 hasn't yet been released (and won't be until after Easter). Currently there's only a release candidate. See http://permalink.gmane.org/gmane.comp.gcc.devel/135470


Interestingly, the main page http://gcc.gnu.org/ lists 4.9 as the current release rather than as a development release.


If you follow the 4.9 link, you get to http://gcc.gnu.org/gcc-4.9/ which states:

> As of this time no releases of GCC 4.9 have yet been made.


libstdc++ finally supports C++11's <regex>! My team will be able to ditch Boost.Regex once GCC 4.9 hits a stable version of Ubuntu.

It is also nice to see more parity with Clang when it comes to diagnostics and ASan/UBSan.


Still not a fully optimised DFA solution though (although neither is Boost.Regex afaik)


Seems like competition from clang led to some really practical improvements.


I use whatever compiler is defined in the makefile or called for in a README. Needless to say, I do not really follow developments in either camp closely. I am curious: which improvements are the result of "competition" from Clang?


I might be doing GCC a disservice, but I'm pretty sure the various sanitizer work came from LLVM/Clang.


Actually, the sanitizers were added to LLVM by Google, who also added them to GCC, as they are actively working on both compiler toolchains.


Did anyone notice "GCC 4.9 provides a complete implementation of the Go 1.2.1 release."? Isn't that pretty big?


Support for the CPU code names in -march is a simple but really useful change:

   -march=nehalem, westmere, sandybridge, ivybridge, haswell,
   bonnell, broadwell, silvermont
Also this:

   -mtune=intel can now be used to generate code running well
   on the most current Intel processors, which are Haswell
   and Silvermont for GCC 4.9.


> A new C extension __auto_type provides a subset of the functionality of C++11 auto in GNU C.

I wonder if anyone is working to standardize something like this? It would be way more useful than all those _s() functions they added to C11.
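
For illustration, a small sketch of __auto_type in GNU C (the MAX macro is just an example use, not something from the release notes):

  #include <stdio.h>

  /* __auto_type deduces the variable's type from its initializer, much
     like C++11 auto.  It is handy in macros that must not evaluate their
     arguments more than once. */
  #define MAX(a, b)                 \
      ({ __auto_type _a = (a);      \
         __auto_type _b = (b);      \
         _a > _b ? _a : _b; })

  int main(void)
  {
      __auto_type n = 42;           /* int */
      __auto_type x = 2.5;          /* double */
      printf("%f\n", (double)MAX(n, x));
      return 0;
  }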


Yes, bounds checking is unnecessary, you don't need those _s() functions - NSA director


The problem isn't bounds checking; the problem is that most of the _s() functions that deal with bounds checking already have pre-C11 equivalents that check bounds.

I also disagree with some of the interface choices: for example, when strlcpy() truncates, it tells you how many characters you needed, rather than simply returning "error" as strcpy_s() does. Also, the use case for memcpy_s() is extremely limited. It just seems like the _s() functions were rushed in without much regard for what makes sense.
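
To make the interface point concrete, here's a toy re-implementation (not the real BSD code, just a sketch) showing how strlcpy()'s return value reports the space that was needed, which strcpy_s()'s error-code return does not:

  #include <stdio.h>
  #include <string.h>

  /* Toy strlcpy(): copies at most dstsize-1 bytes, always NUL-terminates
     (if dstsize > 0), and returns strlen(src) so the caller can detect
     truncation and learn how big the buffer needed to be. */
  static size_t my_strlcpy(char *dst, const char *src, size_t dstsize)
  {
      size_t srclen = strlen(src);
      if (dstsize > 0) {
          size_t n = srclen < dstsize - 1 ? srclen : dstsize - 1;
          memcpy(dst, src, n);
          dst[n] = '\0';
      }
      return srclen;
  }

  int main(void)
  {
      char buf[8];
      size_t needed = my_strlcpy(buf, "hello, world", sizeof buf);
      if (needed >= sizeof buf)
          printf("truncated: needed %zu bytes, have %zu\n",
                 needed + 1, sizeof buf);
      return 0;
  }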


I wish latex/lualatex/xelatex would colorize their logs.


I wish LaTeX would just limit their logs to information I need to see.


I like the addition of color to the compile log. It might make reading compile failures easier now.


Only six months since 4.8 was released. Is GCC moving to a more regular release schedule?


4.8.0 was actually released in March 2013. GCC has been on a yearly major release schedule for several years now (since at least GCC 4.5.0 in 2010). This year 4.9.0 will be about a month late.


That was a point release. 4.8.0 was released over a year ago.


What's wrong with every 6 months? It's not like bug-fix releases aren't dropped in the meantime.


I think that was the point... every 6 months being better than every 6+dx months as seemed the case before.

I don't personally notice a change, though; IMHO, releases have come out fairly consistently since the 4.0 timeframe.



