
Newer C++ features can create a lot of system yak shaving - weinzierl
https://rachelbythebay.com/w/2018/04/02/cpp/
======
bipson
So it's not the modern C++ features that cause the yak-shaving, but wanting to
use these features on a RHEL6 instance with a compiler that does not support
these modern features.

I'm specifically stating this (obvious observation), because the headline
might lead someone to believe that the features are at fault. The features are
great and should considerably _reduce_ yak shaving.

Going a step further, we've all been there. I once spent weeks building
STLPort and boost, so I could avoid re-writing and extending an existing
project against a C++ compiler from 2006, and more importantly, the WinCE 6.0
API (with weird timers and everything - ugh), and use a kind-of portable,
recent version of C++, with extended features. I deeply questioned every
single professional decision I had made the prior years. This had nothing to
do with yak-shaving, but with how much platform-specific code you would end up
with prior to C++11, and how much _simple_, every-day functionality you had to
write again and again. C++11 and later are a huge leap in this regard IMO, and
for everyone stuck with an old compiler: Consider using boost.

~~~
eXpl0it3r
It's a real click-bait title. Using incompatible versions of anything will
sooner or later cause issues. It's not related to C++ or C++ features at
all; it's simply that in this case the version of the compiler supporting
newer C++ features wasn't compatible with the rest of the system.

However, ABI incompatibility is a real pain in C++ and one of the main reasons
why I don't judge anyone building C libraries instead. While RHEL6 is just one
example, you'll run into this issue all the time when developing on Windows.
Here, either your C++ compiled libraries match 100% with the compiler used
to compile your application or it will simply not work. 99.99% of the time,
when you get a symbol issue with std::string, you can be certain it's a
runtime library issue/compiler incompatibility.

~~~
lkrubner
It isn't clickbait because it is such a common situation. I would say in the
majority of cases when I try to use the bleeding edge of any one technology, I
end up dealing with a nightmare of version conflicts with the remaining
technology that's relevant to the system I'm working on. I think articles like
this are worth reading, because they illuminate a specific case of a very
widespread condition that we all face as software developers.

~~~
hermitdev
I agree. Until recently, at work, I was targeting 3 different versions of gcc
(Ubuntu 16, 14 & 12 so I think gcc 4.4-5.4). It's not just an issue of what
language & library features they support, it's also an issue of which compiler
flags to use. Is it --std=c++0x, --std=c++1y, something else?

Toss in there, I also have to compile a subset of libraries for Windows. Even
when you use frameworks to isolate yourself from OS nuances, you may still
have compatibility issues. Like, using the ACE library. What does the function
you pass to ACE_Thread to spawn return? On posix systems, it returns void*.
So, if you want to suppress compiler warnings (-Wzero-as-null-pointer-constant),
you return nullptr. Oops, now your code breaks on Windows, because
ACE_THR_FUNC_RETURN there is a DWORD, and there's no conversion from nullptr
to DWORD. On yet other systems, it returns int. So, how do you get your
warning-free code on all the systems? Ugh: the preprocessor.

~~~
gpderetta
it is 2018, just use std::thread. ACE belongs to the 90s.

~~~
hermitdev
I agree, and we're working towards that, but you know: legacy code.

~~~
gpderetta
Full disclosure, the code base I'm currently working on is in the same
situation...

------
rosshemsley
I actually thought this would be about yak shaving due to the huge number of
ways to achieve everything in modern C++xy.

My internal monologues when writing modern C++ look a lot like this...

So I have to return from this function... it uses a lot of resources, so I
don't want to copy it. But then there is RVO... Although is this actually
NRVO? What was that rule again? I guess I can rely on move semantics? But only
Scott Meyers himself understands those and he promptly exited the C++
community after figuring them out... And anyway, what if I want to change the
ownership explicitly? Let's just use a unique_ptr. Oh but actually I mostly
want to share it when I have returned, and that means two blocks of memory
will be allocated per pointer instead of the optimal "single" allocation with
the reference count at the beginning. Although maybe that's only in libc++?
Oh, and what if there is an error, should it return nullptr in that case?..

~~~
vinkelhake
I realize that you're mostly ranting, but on the off chance that you're
serious: check out std::make_shared. It's a helper function that can allocate
storage for the object _and_ shared_ptr's control block in one go. It comes
with its own caveats (what doesn't in C++?), but it has the answer to your
concern about allocations.

~~~
ovao
To add: according to cppreference, all known STL implementations leverage this
optimization. This used to be specific to Microsoft's STL implementation, but
apparently all the major STL vendors are now on-board with it.

~~~
gpderetta
make_shared traces its roots back to the original boost implementation.

------
jjuhl
OP could just use devtoolset-7 on RHEL6
([https://access.redhat.com/documentation/en-
us/red_hat_develo...](https://access.redhat.com/documentation/en-
us/red_hat_developer_toolset/7/html/7.0_release_notes/dts7.0_release)) and get
a modern (supported) compiler that way.

~~~
ihnorton
Only assuming you have root or a sufficiently cooperative IT department. In
situations where this is a problem, both assumptions are often untrue (managed
HPC).

~~~
hermitdev
Been there, done that. Several jobs ago. At a hedge fund, the corporate-mandated
OS at the time was RHEL5. C++11 had been finalized for about 3 years at that
point, and the latest GCC at the time had nearly full support (the only glaring
gap I remember was that regex was "there" but stubbed out; it didn't actually
work). We'd even had the likes of Scott Meyers & Herb Sutter lecturing in the
office on C++11, but could we use it? Hell, no. Shit,
some of us had even been interviewed by Scott Meyers for his Effective Modern
C++ book. Still couldn't officially use C++11, so, given the frustration, we
went rogue. It was a pain, but I built the latest stable GCC from scratch and
all of the libs we needed against it (boost, unixodbc, mqseries, intel tbb,
tibrv). It took a while (I think GCC took like 4 hours to build at the time,
even with a parallelized build), but once set up, it was liberating.

~~~
magduf
It sure seems to me that "enterprise" Linux distros not supporting modern
languages/compilers (e.g., Python 2.7 in RHEL6) only hurts Linux adoption
overall and its reputation.

~~~
bluGill
Old is the keyword you are missing there. Linux distributions, even enterprise
ones, have great C++11 support. However, RHEL is designed for those who are
thinking "I got this working, now let's not break it". There is no reason to
want C++11 on RHEL5 - the only valid reason to do anything on that system is
if you already have a working server and need to do a bug fix. If you want to
write new features, you should be targeting the latest RHEL, where you do have
all of that.

------
HelloNurse
Running freshly compiled experimental or very recent software on very old
servers that cannot be upgraded piecemeal seems very unusual: if you don't
want to destabilize the servers with updates, you shouldn't want to
destabilize them with unproven new software either.

It's also strange to be unwilling to fork in source control, if necessary, a
maintenance version for legacy servers and a current version for current C++
and current servers.

New software should probably run on more current servers and access the old
RHEL 6 servers through stable network services (shared folders, DBMS, etc.)
instead of expecting them to run bleeding edge software.

------
orbifold
HPC has the same problem (ancient bespoke Linux distributions with strange
tooling) one way to solve it is spack
[https://github.com/spack/spack](https://github.com/spack/spack). The ops
person where I work is so much in love with it that he tried to get it to work
on Mac OS; two days and several patches later, it took _only_ 8 hours to
compile all necessary transitive dependencies. In my opinion the only good way to
solve this is to ignore the distribution package manager and keep track of all
dependencies explicitly. Most build systems for C++ make that relatively easy.
To choose the compiler / standard library you can then use
[http://modules.sourceforge.net](http://modules.sourceforge.net)

~~~
gnufx
The RHEL6 compute node image I used to maintain just used rpms (maybe via
SCL). I'm inclined to think the trade-offs win out against Spack and Guix,
assuming you don't have the world telling you to use proprietary compilers and
libraries that can't be handled that way.

The mess I'm used to seeing on HPC systems with combinatorial builds, with
everything done through a confusion of environment modules, bothers me and
typically confuses users, especially when there's a system package that
provides the same thing. (At least look at TACC's Lmod instead of the
canonical modules implementation.)

------
AHTERIX5000
RHEL 6.0, the version released something like 10 years ago? Yeah, that is
going to be painful and maybe the way out isn't rebuilding everything manually
but upgrading the system ;-)

I only had to deal with RHEL6.x that had gcc 4.8 installed via packaging, I
think Red Hat had a solution for this.

~~~
cowsandmilk
> I think Red Hat had a solution for this.

Yes, Red Hat has a solution in the form of the Developer Tool Set, which will
give you gcc 7.2.1[0]

[0] [https://access.redhat.com/documentation/en-
us/red_hat_develo...](https://access.redhat.com/documentation/en-
us/red_hat_developer_toolset/7/html/7.0_release_notes/dts7.0_release)

~~~
aldanor
Ditto, just wanted to say the same thing. On RH / CentOS, devtoolset exists
exactly for the reason OP is worried about.

------
superbatfish
FWIW, I highly recommend using conda as a package manager for your own pure
C++ packages. You can compile everything with the compilers they ship, and the
only external dependency from your own system will be glibc.

Yes, it means you're using your own "mini-me environment", but you can share
that environment across all of your C++ projects. As long as you build with:

LDFLAGS="-Wl,-rpath,${PREFIX}/lib -L${PREFIX}/lib"

... then everything you compile ends up in the self-contained environment.
Furthermore, distributing your build products to another user (or another
machine) is as simple as:

tar -czf mystuff.tar.gz ${PREFIX}

Then your friends can take your tarball and unzip it in any directory of their
choosing. The whole prefix (environment) is self contained (except glibc,
which must be at least as new as the version on your build machine). It just
works.

~~~
jjnoakes
> your friends can take your tarball and unzip it in any directory of their
> choosing. The whole prefix (environment) is self contained

How does this work if you are using ${PREFIX} with -rpath? I thought one had
to use '$ORIGIN' to get a dynamic path that is relative to the executable's
location. Unless conda is doing some '$ORIGIN' magic under the covers?

~~~
RayDonnelly
If you use conda-build it will do rpath fixups for you. You can also elect to
specify that certain dependencies should come from the system. We do this for
X11 for the Anaconda Distribution. To use these, in the recipe you list `cdt`
(stands for Core Dependency Tree) packages as build dependencies. These
packages are repackaged CentOS6 library and devel packages that are never
installed on end users' machines, only used at build time.

~~~
sigjuice
Why not build with -rpath $ORIGIN/../lib etc? Wouldn’t that make the rpath
fixups unnecessary? The whole tree should remain portable.

~~~
RayDonnelly
Well, the Anaconda Distribution and conda-forge build many thousands of
packages and they use a variety of (often programmable) build systems.

So quite often the build system will be mis-programmed or will simply not care
about relocatability _at all_, hard-coding /usr/lib/libfoo.so as DT_NEEDED, or
hard-coding /usr/lib as DT_RPATH and/or DT_RUNPATH.

So conda-build runs a post-build step to make all DSO loading relative. We use
patchelf and install_name_tool for that on Linux and macOS respectively.

------
aorth
> _The only way out of this is to tear off the bandage and rebuild all of
> those with the new compiler and side-load them as well._

This industry is just one big test of how long you can shave yaks (aka pay
technical debt) before you quit and move to the forest. It's rewarding...
until it's not. :)

------
ixtli
After reading this I'm left with the feeling that the author is complaining
more about "enterprise" linux distros that never ever change (hello, many
centos users who are still running distros that require python2.7!) than about
changes to the C++ standard. And for what it's worth, I agree that I'm always
frustrated by such things, but it's not clear this is something we should
blame on attempts to keep C++ modern.

------
wycy
Can someone elucidate why one might not want to just update from RHEL 6--
especially on what I gather is a personal machine in the OP--rather than go
through this?

~~~
noselasd
Because the personal machine might be used to develop for a production machine
running RHEL 6, and that cannot be upgraded because reasons (no time, software
doesn't support RHEL 7 yet, $precious_driver doesn't support RHEL 7, plus a lot
of other reasons that make sense to an OPS person but not a developer).

Binaries created on RHEL 7 do not run on RHEL 6 - even when linking
statically appears to work, you run the risk of hitting the unforeseen.

You could develop on RHEL 7 and build for production on RHEL 6, but this
causes you enough grief and troubleshooting, getting tangled up in all kinds
of minor differences, to not make it worth it.

Best you can do if you cannot upgrade from RHEL 6 is to install
devtoolset ([https://www.softwarecollections.org/en/scls/rhscl/devtoolset...](https://www.softwarecollections.org/en/scls/rhscl/devtoolset-7/)
\- these are official packages created by Red Hat) to get a newer compiler.

~~~
thawkins
Load Fedora 27, and put all your legacy projects into a RHEL6 docker
container, with your workspace volume-mapped into a "projects" folder. That
way you also get to test your code against multiple RHEL variants.

------
hermitdev
> A couple of days ago, that's exactly what I did. It turns out that it's not
> a huge deal to compile gcc with a prefix which will put it off to the side
> and then just drop the entire mess into that directory. You can then just do
> /path/to/your/new/g++ and you'll have the new compiler working for you.
> It'll gobble up the new syntax and will be happy.

I've been there, and it's not fun. And, been down the path of options the
author enumerates after this statement.

The short of it is, on Linux (and I'm sure it's likely the same on BSDs & Mac),
if you're not using the system-supplied compiler, you're in for a world of
hurt. You will have to build everything you depend on from scratch with your
custom compiler. That said, it's possible. Just not fun, or user-friendly.

------
hellofunk
> scoped_ptr which worked just like the one at a certain company. It gives you
> a few neat things, like the ability to just forward-declare a class inside
> your .h file and not include the whole thing right there.

But all pointers do this in C++? Why would you need a smart pointer to do this
for you?

~~~
ben0x539
I think the starting point is having a by-value member of that class, which
has convenient automatic destruction semantics, but requires the actual class
definition in scope. To lift that requirement, you move to a pointer field,
and if you go with a scoped_ptr rather than a raw pointer, you don't lose the
automatic destruction.

~~~
hermitdev
scoped_ptr has the same issue, due to the implicit destructor needing to know
how to destroy the thing being held. The trick is that you need to explicitly
define your destructor (and its implementation needs to have full knowledge
of what it's pointing to). It doesn't have to do anything - ideally, it should
be empty. This is known as the PIMPL idiom, or Cheshire Cat.

~~~
ben0x539
Wow, somehow I was sure I could get out of that just by having the destructor
of the containing class in a separate compilation unit. That's unfortunate.

------
wallstprog
The author is correct that trying to get modern toolchains on older systems
(like RH6) is a real pain. But sometimes there's no real choice -- some
applications value reliability above all else, and tend to be deployed on RH
where its relative stability is a feature, not a bug.

I've written about this ([http://btorpey.github.io/blog/2015/01/02/building-
clang/](http://btorpey.github.io/blog/2015/01/02/building-clang/)),
specifically in the context of getting clang running on RH6, which required
first getting a C++11-compatible gcc running on RH6. For those who are in the
same boat, you may find the article helpful.

------
snarfy
C++ really needs a standard ABI. None of these newer features really matter
much without it. You still can't make libraries without compiling your code 10
different times with a handful of compilers and flags and shipping all of the
resulting binaries.

~~~
lallysingh
There is one. Apologies for the ugly link, it goes to a pdf and I'm on a
phone:
[https://www.google.com/url?sa=t&source=web&rct=j&url=https:/...](https://www.google.com/url?sa=t&source=web&rct=j&url=https://software.intel.com/sites/default/files/article/402129/mpx-
linux64-abi.pdf&ved=2ahUKEwiR4IimoZ7aAhVM2IMKHSlbD3gQFjAAegQIBxAB&usg=AOvVaw0MVQdLNlkOBigLu5VDb82x)

10 times? Do you mean 10 platforms vs 1 on something like Java?

~~~
vinkelhake
I don't think C++ desperately needs a standard ABI, but ignoring that for a
moment... The System V ABI doesn't really do anything for the people who ask
for a standard C++ ABI.

It's not enough to mandate how objects are laid out. People want the ability
to define a library function that uses standard types - for example
std::string - in the interface. To build a library with libstdc++ and link it
with an application that has been built with libc++, there would (among other
things) have to be a common string type that they agree on.

Herb Sutter has done some work in this area, but I don't know the current
status of it.
[https://isocpp.org/files/papers/n4028.pdf](https://isocpp.org/files/papers/n4028.pdf)

~~~
jcelerier
> To build a library with libstdc++ and link it with an application that has
> been built with libc++, there would (among other things) have to be a common
> string type that they agree on.

well, then there would be a single standard library implementation. I think
that would be the way to go: just put libc++ out of the std namespace and into
a custom namespace, `st2` for instance, and use its types from everywhere.
MSVC, GCC, whatever.

------
ksherlock
Last year I was playing with Red Hat's Open Shift. Remember when -std=c++0x
was cutting edge? RHEL does. I ended up compiling (and statically linking) my
c++11 code with Windows/WSL/Ubuntu/g++ and copying it over.

~~~
dralley
As many others have pointed out, there is an officially supported solution to
this problem....

[https://access.redhat.com/documentation/en-
us/red_hat_develo...](https://access.redhat.com/documentation/en-
us/red_hat_developer_toolset/7/html/7.0_release_notes/dts7.0_release)

------
joelthelion
Either update RHEL6 or keep using the version of C++ that is supported on your
OS. It's really that simple...

~~~
wolfgke
> Either update RHEL6 or keep using the version of C++ that is supported on
> your OS. It's really that simple...

Or switch to Windows: Here you can use about any version of C++ you want (just
e.g. install the respective version of Visual Studio that supports your
desired version of C++). No need to pay attention which version of C++ is
supported on which Windows version. :-)

~~~
jjnoakes
You still have to pay attention to what version of C++ your dependencies were
compiled with, unless you are compiling them all yourself.

And on Linux you can install newer compilers too.

The problems are still there; the root cause is C++ having no ABI, not Linux
vs Windows.

------
mark-r
I've heard the term "yak shaving" before but the meaning escapes me - can
anybody fill me in?

~~~
PeterisP
Watch this gif
[https://imgur.com/gallery/rQIb4Vw](https://imgur.com/gallery/rQIb4Vw) , and
at the scene of car with the engine taken out imagine shaving a yak.

But the original story, as far as I know, comes from Seth Godin
[http://sethgodin.typepad.com/seths_blog/2005/03/dont_shave_t...](http://sethgodin.typepad.com/seths_blog/2005/03/dont_shave_that.html)

~~~
mark-r
Ah, Malcolm in the Middle - how I miss it. I think that was about the time I
stopped watching TV.

------
makecheck
Ever since my first Linux experience, where /usr actually contained way more
useful tools by default than other systems, this has been a double-edged
sword. All the convenience of autoconf and default behaviors, etc. goes far
out the window when /usr/lib is ever so slightly too old. It’s kind of a
curse.

The virtualenv concept makes more sense: set a default on day 1 but assume
you’ll want a new foundation periodically. When the foundations themselves are
versioned, you can evolve your environment to some degree without having a
completely chaotic mix of the latest updates.

------
aphexairlines
> When it comes to C++ programs and libraries, you really do not want to try
> to mix and match across compilers. It really does not like it, and it will
> make you suffer.

Hence the Amazon build tooling for C and C++. I wish they would open source
it, write about it, make it into an AWS product, or release it in any other
form.

~~~
oblio
I don't want to be mean, but people like to make fun of Java. In this regard
life is much, much easier. Yes WORA is only partially true, but it is still
true. Life is so much easier it's not even funny. Except for the most
performance critical bits I'd never write anything in C++... Oh, and embedded.

I'm sorry you guys still have to go through all this pain, I know it's a
really hard problem with all the legacy.

~~~
banachtarski
Nah we feel sorry for you guys haha

------
chris_wot
I’ve recently been reading up on how Rust does memory management and I’ve been
wondering... whilst not a really new feature, I wonder if you could program
via unique_ptrs in C++ and use RAII to handle concurrency in much the same way
that Rust deals with it.

~~~
banachtarski
That's what many people do, but not necessarily unique_ptrs ALL the time (too
many heap allocations make the CPU a sad panda).

------
Volcacius
Had the same issue with RHEL6, solved (more or less) using Gentoo Prefix. It
comes with its own libc and the bootstrapping process was pretty
straightforward.

------
w_t_payne
I am not sure what the OP is complaining about.

Isn't this just another day in the office for a C++ dev?

------
ris
Yeah, this whole thing ain't a problem with Nix. The whole idea of only a
single file being able to exist at /usr/lib/libstdc++.so seems quite backwards
and ancient to me now.

------
lallysingh
I hate to be /that guy/, but newer systems can run nix, which takes care of
this stuff for you. I dunno about RHEL 6.

------
Kenji
_I have to admit this was a real bummer. I know better than to try to upgrade
anything that important on a Red Hat type box. There's just no sense in
attempting to diverge from the base system. You either keep it or throw it out
and go to the next major release... and I'm not ready for that. That leaves me
with one option: leave it in place and then "side load" a new compiler and/or
libraries, and then switch to using it for all of my local stuff._

Just use docker?? That is literally what docker is for. Get your favourite
flavour of debian or whatever, with a modern GCC and fire up that docker
container.

EDIT: Oh wow, docker is only available in RHEL7 and higher. Damn.

------
yuniq
I just wish 'auto' in Cpp could be changed to 'var'. Switching between
C#/JS/C++, I always end up typing var instead of auto :).

~~~
lisper
#define var auto

Not that I necessarily recommend doing this.

------
gok
> you find out that existing C++ libraries on the box are no longer happy
> being linked into your program, like, oh, say, protobuf, gmock or gtest

Or even worse, they link happily then fail weirdly at run time because all the
STL structure layouts have changed.

This is hugely exacerbated by Linux distributions perpetuating the fiction
that C++ is an appropriate language for system library APIs. So now there is a
generation of dependency-oriented developers who haven’t realized that
“apt-get install libfoo-cxx-dev” is an enormous headache waiting to happen.

