C++ Status at the end of 2015 (bfilipek.com)
79 points by ingve on Dec 31, 2015 | 60 comments


> Atomics in signal handlers - N2547 - it's listed as missing here (at MSDN), but listed as done here at the VC blog

That was me being cautious, see http://blogs.msdn.com/b/vcblog/archive/2015/04/29/c-11-14-17... - "I previously listed "Atomics in signal handlers" as No, because despite maintaining <atomic>'s implementation, I didn't know anything about signal handlers. James McNellis, our CRT maintainer, looked into this and determined that it has always worked, going back to our original implementation of <atomic> in 2012."


Good catch, thanks!


Can't wait for modules. Header files are the worst.

Also really excited for co-routines. I never liked using promises.

Wow, had no idea ranges and contracts are on the table too. With all this stuff, C# is really losing its appeal.


C# is memory-safe by default. C++ will forever let you (and your coworkers, and library code you use/misuse) access memory outside array bounds and dereference pointers to already-freed objects. This extends to C++'s advanced features (such as objects captured by reference in anonymous functions.)

This is a big benefit of C# (and every other remotely popular language except C and C++; how C compares to C++ is a separate question.) Arguably, some systems cannot afford a garbage-collected language, but your comment kinda assumes you can choose between C# and C++.


> C++ will forever let you (and your coworkers, and library code you use/misuse) access memory outside array bounds and dereference pointers to already-freed objects. This extends to C++'s advanced features (such as objects captured by reference in anonymous functions.)

Well, to be fair ISO Core C++ (mentioned in the blog post) with all safety profiles turned on is designed to fix this issue. I have my doubts about whether it'll actually succeed in still being C++ though, as it rules out so much valid modern code (and I also have doubts, or at least unanswered questions, about its soundness/expressiveness).


I hate to sound like a fanboy, but... exactamundo!

I'm sure the C++ community will gradually work to address the (huge!) security issues through heroic effort, but it's doubtful that anything meaningful can be achieved when "undefined behavior" remains an acceptable definition of the semantics of an operation.

EDIT: Don't get me wrong... I am in awe of the recent efforts of the C++ committee (and co.), and I hope they succeed in this endeavour!


> it's doubtful that anything meaningful can be achieved when "undefined behavior" remains an acceptable definition of the semantics of an operation.

Well, I should be fair. I may be wrong here, but my understanding of ISO Core C++ plus all safety profiles is that this will no longer be the case: there will be no undefined behavior in this particular dialect of C++, assuming the effort to create this dialect succeeds.


Yeah, that's my understanding as well... but I'm skeptical that this is actually possible (within reason) given the history of C++ and the sheer weight of legacy C++ code bases.


Few useful codebases are brand new. Or if they are, they don't stay new. Weight accumulates, and at some point efforts always shift to successfully iterating on existing things of value.


Which is the problem with the massive subsetting of the language that ISO Core C++ with all safety profiles turned on performs. It rules out so much existing code that porting will be very expensive.

We won't know what the delta is between "migrating to ISO Core C++ with all safety profiles" and "migrating to another, memory-safe language with a good FFI" until ISO Core C++ is complete and widely deployed. My instinct tells me that this delta will not be particularly wide--that is, that porting most existing industry C++ code to totally safe ISO Core C++ will not be much easier than rewriting that code in Go or Swift or whatever. (Look at how long the Python 2 to Python 3 transition has been, and consider that ISO Core C++ is much farther away from industry C++ than Python 3 was from Python 2.)

I could be wrong about this, though--we'll have to see.


Sure... and that's kind of what I'm both skeptical of (and intrigued by).

I'm just waiting for standards-sanctioned algebraic data type (ADT) support -- it's bound to happen at some point! :)


I work on UI for universal Windows apps where the choice is C#, C++, or JavaScript.

Modern C++ is pretty nice if your team is ready to embrace it and leave old styles behind. I still use C# for utilities and prototypes but for production stuff, users' time is more valuable than mine.


This comment fails to address the main point yosefk was making: that C++ is not memory safe, and C# is. In fact, it completely ignores the point in presupposing that C++ is for "production stuff" and C# is for "utilities and prototypes", when the point of memory safety is to make your code not fail in production.


Embracing Modern C++ (RAII, STL, range-based for loops) helps you avoid a big chunk of memory safety problems.

Contracts and some of the stuff in the new Core guidelines take it further.

Everyone's needs are different, obviously. The stuff I work on has tons of users, and startup perf and minimal memory usage are primary requirements. Most of our bugs are nullptr access violations or straight-up programming errors that C# doesn't shield you from either.


> Embracing Modern C++ (RAII, STL, range-based for loops) helps you avoid a big chunk of memory safety problems.

Not really. RAII still allows for dangling pointers/references. The STL is vulnerable to iterator invalidation. Ranges are likewise vulnerable.

> Contracts and some of the stuff in the new Core guidelines take it further.

How do contracts help memory safety?

The ISO Core C++ bounds/lifetime checker does, yes, but as I mentioned in the other comment I don't know whether it is still going to be C++ in practice.

> The stuff I work on has tons of users and start up perf and minimal memory usage are primary requirements.

Those are valid reasons to use C++, yes.

> Most of our bugs are nullptr access violations or straight up programming errors that C# doesn't shield you from either.

C# doesn't make null dereference undefined behavior :)


1) Using smart pointers makes it really hard to have dangling pointers.

2) The invalidation rules are pretty well known at this point, but I concede that it's much better to have compile-time validation. A good STL impl will have iterator debugging, which catches iterator invalidation, e.g. debug mode in GCC.

3) Regarding null, you're theoretically right, undefined behaviour and all. In practice[1], accessing a null pointer/reference will result in a crash both in C# and C++.

When developing Java apps (C#'s big brother), the biggest sources of bugs during development were null pointer exceptions and unhandled exceptions propagating to the event loop and killing the UI thread.

[1] There are strange cases such as http://blog.llvm.org/2011/05/what-every-c-programmer-should-...


> Using smart pointers makes it really hard to have dangling pointers.

No, it doesn't.

    std::vector<std::unique_ptr<int>> x;
    x.push_back(std::make_unique<int>(1));
    auto& y = *x[0];
    x.clear();
    std::cout << y; // use after free
This is, of course, a toy example. For lots and lots of real examples, search Web browser engine bug trackers for UAF vulnerabilities. (They have been using smart pointers exclusively for a decade or so now.)

> A good STL impl will have iterator debugging, which catches iterator invalidation.

Valgrind and ASan catch UAF too and have been around for years and years. Yet we still have a ton of UAF vulnerabilities.


With C++ one needs discipline to make up for the lack of type safety that one can get in other languages. This is of course not so great, because it relies on the programmer to know various idioms/patterns.

In your example, it would be a major red flag in code review to create a reference like that. Some code constructs are just asking for trouble.

I think the reasons behind still having errors despite tools like Valgrind and ASan are: a) many devs don't know about them or use them b) you need 100% coverage to remove all errors

Using said engineering practices will not result in a completely bug-free program, but it will have a significant impact on the quality. For something that doesn't have the nightmare security profile of a browser or various other exposed services, that can be just fine, even if not ideal.


Doesn't .NET Native address startup perf, particularly for UWP applications?

I'm guessing the reason why C++ yields lower memory usage than C#/.NET is because RAII and reference counting ensure that memory is always freed as soon as it's no longer needed, whereas garbage collection only ensures that memory belonging to unused objects will be freed sometime later. Is that correct?


Also stack allocation and pervasive use of value types help C++ a lot in practice. This reduces overhead of heap metadata. There's also the overhead of .NET wrapper objects for OS resources--compare a .NET window object to an HWND, the latter of which is just a word-sized opaque ID describing a kernel managed resource.

Though GC has its own advantages too--the ability to compact being the main one.


Haven't seen any benchmarks but I'm guessing the baseline working set of a .Net Native UWP app is still megabytes bigger than a C++ UWP app.

*edit: ~16 MB for .Net Native, ~10 MB for C++ using latest VS2015 targeting x86.

Not going to matter for most apps, but for system UI that has to run on multi-user servers it does ;)


Some people (like myself) just prefer C++ to C#.


That's fine.

And surely you'll agree with the GP that they're not really comparable.


They are both used for building UI applications on Windows. There's a lot of existing C++ code, .NET used to be (not sure about now) a major PITA to deploy, and it was slower to boot. C++ also allows one to expand to other platforms and operating systems in a way that C# never could, in spite of toolchains like Mono and the recent open-sourcing of .NET.

I would absolutely not choose C# for building a UI app even today.


> I would absolutely not choose C# for building a UI app even today.

Even if memory safety is important to you (the point of this subthread)?


I don't really think about "X safety", but rather about various quality attributes such as availability and resilience, perhaps also security in this case. These have to be evaluated and kept in balance with other attributes such as performance, portability, dev resources etc.

I would personally prefer to use various engineering practices such as static and dynamic analysis, unit tests, etc rather than change programming languages.

There's also the other side of this argument: my last project was Java. C++ would have allowed us to port more easily to other platforms (a current pain point) and get better performance in combination with OpenGL (another major pain point), however, the project constraints would have made it extremely difficult to bring a C++ project to market with the resources we had.


> I would personally prefer to use various engineering practices such as static and dynamic analysis, unit tests, etc rather than change programming languages.

There is no static and/or dynamic analysis out there that effectively stops programmers from writing UAFs (to name one especially problematic class of memory safety problems) in C++ over and over. This is despite decades of work on it. The language itself is fundamentally hostile to analysis.

I also dislike "one language to rule them all" thinking. Web app and network backend developers shed that mentality a long time ago, and the industry is better for it. Java, Ruby, PHP, Python, Go, JavaScript, Scala, etc. are all used on the server where their niches are strongest, and this is great! The industry would have been in much worse shape if they all had tried to stick with C++.

I agree with your other points, though--the choice of programming language has to be balanced among many factors. Sometimes security against RCE doesn't matter or isn't relevant, and the crashes caused by memory safety problems can be lived with. But I don't think we should pretend that C++ is safe, or as safe as other languages. It just isn't, and no tooling so far has been able to make it so.


There are good reasons to choose an unmanaged language, but this part strikes me as odd.

"my last project was Java. C++ would have allowed us to port more easily to other platforms"

I'm not saying that you must be wrong for your particular situation, but as soon as you've compiled Java, you've essentially ported it to a massive number of platforms. https://en.wikipedia.org/wiki/List_of_Java_virtual_machines

If anything a dependency on OpenGL would limit your choices.


Unfortunately, in our case the platforms were Android and iOS :) Likewise, OpenGL is a pretty standard tool for mobile.


RoboVM might have been an option, in that case.

Not ideal, but perhaps workable.


What are the advantages?


To me it has always been the tooling and the ecosystem around the language that matter, not some of these advanced features...

So C# might lose its appeal to you specifically but not to many people ;)


True, C# tooling is an order of magnitude better. Still hate IDisposable and events using strong references.


Lots of important stuff is being discussed/voted/cleaned up... hope modules, ranges and contracts will meet the deadlines to be in cpp17


Yeah I want modules and contracts badly


C++17:

> Single-quotation-mark as a digit separator

I have to admit to utter befuddlement about why you would ever want that. (Presumably they will also require scanf to honor this... that'll be fun.)

Just when I thought the C++ committees were getting their heads screwed on straight. Am I missing something, or is this another trigraph disaster?


I actually like this, and had decided that single quotes were a good digit separator even before C++ adopted it. I had a calculator, an HP, I think, that used single quotes as digit separators in the LCD display. There is actually a locale that uses single quotes: de_CH, the Swiss High German locale. There are no locales that use underscore as far as I know. I have a few aliases like this:

  alias dfk='LC_NUMERIC=de_CH df --block-size=\'\''1024'
  alias duk='LC_NUMERIC=de_CH du --block-size=\'\''1024'
that help me when something like du -h won't do.

Also, the underscore is used to separate words in variable names, where it is significant.

  int my_long_var_name, mylongvarname;
are different variables. But in numeric constants, the underscore has no meaning.

  186_282.397 and 186282.397
would be the same number. This is inconsistent.

Finally, in a variable width font, the single quote is much less visually intrusive than the underscore, which tends to be a wide character.

I would like to see this form used universally. In my wildest fantasies, I'd also allow either '.' or ',' as the decimal indicator. That would make some of my European friends happy.


What number is this?

100000000000000


I actually really like the Java way of doing this:

    100_000_000_000_000


That also works in Perl, Ruby, C#, Ada, D, and Julia.

According to Wikipedia [1] underscore was proposed for C++ but rejected because it conflicts with user-defined literals.

Is there any reason they could not have used space? Offhand, I can't think of anyplace where a space in a literal number would not currently be illegal, so adding that as a separator should not introduce any problems, but C++ has grown massively in complexity since I last seriously used it so I could easily have overlooked something.

[1] https://en.m.wikipedia.org/wiki/Integer_literal#Digit_separa...


A space could not be allowed in numeric values in runtime input (fscanf, <<).


That works, but even in C I'd simply write this as (100 * 1000 * 1000 * 1000 * 1000), which is perfectly clear and no slower.

(Or, rather, as ((uint64_t)100 * 1000 * 1000 * 1000 * 1000), but you'll need to do something to prevent overflow in most languages, anyway.)


It's not a language feature, but a defacto standard package manager would be a huge boon for C++ development.


Agreed. biicode (https://www.biicode.com/) might have potential.


C++ has changed so much I barely recognize anything (I used to code in C++ a bit like C with classes). Which makes me wonder: do we have any idea of the market share that 20xx C++ versions represent? I mean, how much C++ code produced today is "modern C++"?


The author may be a bit confused as to what C++17 will be. Modules will be in a TS, most likely. Transactional memory, concepts, ranges, and filesystem are already in published TSes (not planned for the IS in this round). Out of the list presented, only fold expressions and __has_include are actually already voted in for C++17 itself. The rest are in various stages of standardization (mostly TSes) and are not guaranteed to ever become part of the IS (though hopefully many will).


That's incorrect - the Ranges TS was just started, and certainly hasn't been published yet (not even as a PDTS). Filesystem was the first published TS.


Ah, that's right, ranges was a new work item planned for TS (but still not planned for IS in C++17).


My favorite quote from the C++17 specs (https://isocpp.org/files/papers/D4492.pdf):

"""

The note was written for committee members, but “escaped into the wild.” Here are a few comments from the web:

http://www.reddit.com/r/programming/comments/33us7z/what_wil... up_on_c17_goals/

http://forums.theregister.co.uk/forum/1/2015/04/27/c_daddy_b... ections_for_v17/#c_2500733

https://news.ycombinator.com/item?id=9441245

As you see, people outside the committee also have strong opinions. Those opinions can depart radically from the ones I hear in the committee and from reality.

"""


What's the best way to learn C++ nowadays?


That's a difficult question to answer without knowing your current skillset.

I'd definitely recommend CPPCast (http://cppcast.com/) for learning about the C++ community.

Pluralsight has some decent videos as well.


Clicked the link going "Please don't be a website that looks like a C++ dev made it".

It's a website that looks like a C++ dev made it.


I would argue that reading A Tour of C++ would make you ready to write a modern C++ program, if you are a programmer and are comfortable with pointers.


I read "A Tour of C++" and I found the book terse and the information too condensed. I honestly doubt that a programmer that doesn't know C++ could learn C++ from this book.


I am currently going through https://www.youtube.com/playlist?list=PLHxtyCq_WDLXryyw91lah... I have only been through 6 videos in that list. I find it to be useful, though a bit slow.


Recommended resources for learning C++ nowadays: "Jumping into C++" by Alex Allain, "C++ Primer Plus" by Stephen Prata, and Kate Gregory's C++ courses on Pluralsight. You can get a free 3 or 6 month Pluralsight subscription by signing up for Microsoft's free Dev Essentials program.


I found that diving into openFrameworks (looking at their examples, etc...) and hacking my own stuff was a great way to learn. Cinder would do the same.


This is an honest question. In what areas of the industry does C++ dominate the language landscape, other than hardware drivers? I used to think this was true for games, but I've seen most studios move to C# as the community has matured around it. All other major spaces seem like they've moved on to more managed languages.


AAA games, embedded, operating systems (err, Windows shell at least), web servers, core libraries, etc.

Still big in medical devices too, embedded or not.


Quantitative finance, scientific computing.



