> Atomics in signal handlers - N2547 - it's listed as missing here (at msdn), but listed as done here at the vc blog
That was me being cautious, see http://blogs.msdn.com/b/vcblog/archive/2015/04/29/c-11-14-17... - "I previously listed "Atomics in signal handlers" as No, because despite maintaining <atomic>'s implementation, I didn't know anything about signal handlers. James McNellis, our CRT maintainer, looked into this and determined that it has always worked, going back to our original implementation of <atomic> in 2012."
C# is memory-safe by default. C++ will forever let you (and your coworkers, and library code you use/misuse) access memory outside array bounds and dereference pointers to already-freed objects. This extends to C++'s advanced features (such as objects captured by reference in anonymous functions.)
This is a big benefit of C# (and every other remotely popular language except C and C++; how C compares to C++ is a separate question.) Arguably, some systems cannot afford a garbage-collected language, but your comment kinda assumes you can choose between C# and C++.
> C++ will forever let you (and your coworkers, and library code you use/misuse) access memory outside array bounds and dereference pointers to already-freed objects. This extends to C++'s advanced features (such as objects captured by reference in anonymous functions.)
Well, to be fair ISO Core C++ (mentioned in the blog post) with all safety profiles turned on is designed to fix this issue. I have my doubts about whether it'll actually succeed in still being C++ though, as it rules out so much valid modern code (and I also have doubts, or at least unanswered questions, about its soundness/expressiveness).
I hate to sound like a fanboy, but... exactamundo!
I'm sure the C++ community will gradually work to address the (huge!) security issues through heroic effort, but it's doubtful that anything meaningful can be achieved when "undefined behavior" remains an acceptable definition of the semantics of an operation.
EDIT: Don't get me wrong... I am in awe of the recent efforts of the C++ committee (and co.), and I hope they succeed in this endeavour!
> it's doubtful that anything meaningful can be achieved when "undefined behavior" remains an acceptable definition of the semantics of an operation.
Well, I should be fair. I may be wrong here, but my understanding of ISO Core C++ plus all safety profiles is that this will no longer be the case: there will be no undefined behavior in this particular dialect of C++, assuming the effort to create this dialect succeeds.
Yeah, that's my understanding as well... but I'm skeptical that this is actually possible (within reason) given the history of C++ and the sheer weight of legacy C++ code bases.
Few useful codebases are brand new, and those that are don't stay new for long. Weight accumulates, and at some point efforts always shift to successfully iterating on existing things of value.
Which is the problem with the massive subsetting of the language that ISO Core C++ with all safety profiles turned on performs. It rules out so much existing code that porting will be very expensive.
We won't know what the delta is between "migrating to ISO Core C++ with all safety profiles" and "migrating to another, memory-safe language with a good FFI" until ISO Core C++ is complete and widely deployed. My instinct tells me that this delta will not be particularly wide--that is, that porting most existing industry C++ code to totally safe ISO Core C++ will not be much easier than rewriting that code in Go or Swift or whatever. (Look at how long the Python 2 to Python 3 transition has been, and consider that ISO Core C++ is much farther away from industry C++ than Python 3 was from Python 2.)
I could be wrong about this, though--we'll have to see.
I work on UI for universal Windows apps where the choice is C#, C++, or JavaScript.
Modern C++ is pretty nice if your team is ready to embrace it and leave old styles behind. I still use C# for utilities and prototypes but for production stuff, users' time is more valuable than mine.
This comment fails to address the main point yosefk was making: that C++ is not memory safe, and C# is. In fact, it completely ignores the point in presupposing that C++ is for "production stuff" and C# is for "utilities and prototypes", when the point of memory safety is to make your code not fail in production.
Embracing Modern C++ (RAII, STL, range-based for loops) helps you avoid a big chunk of memory safety problems.
Contracts and some of the stuff in the new Core guidelines take it further.
Everyone's needs are different obviously. The stuff I work on has tons of users and start up perf and minimal memory usage are primary requirements. Most of our bugs are nullptr access violations or straight up programming errors that C# doesn't shield you from either.
> Embracing Modern C++ (RAII, STL, range-based for loops) helps you avoid a big chunk of memory safety problems.
Not really. RAII still allows for dangling pointers/references. The STL is vulnerable to iterator invalidation. Ranges are likewise vulnerable.
> Contracts and some of the stuff in the new Core guidelines take it further.
How do contracts help memory safety?
The ISO Core C++ bounds/lifetime checker does, yes, but as I mentioned in the other comment I don't know whether it is still going to be C++ in practice.
> The stuff I work on has tons of users and start up perf and minimal memory usage are primary requirements.
Those are valid reasons to use C++, yes.
> Most of our bugs are nullptr access violations or straight up programming errors that C# doesn't shield you from either.
C# doesn't make null dereference undefined behavior :)
1) Using smart pointers makes it really hard to have dangling pointers.
2) The invalidation rules are pretty well known at this point, but I concede that it's much better to have compile-time validation. A good STL impl will have iterator debugging, which catches iterator invalidation, e.g. the debug mode in GCC's libstdc++.
3) Regarding null, you're theoretically right, undefined behaviour and all. In practice[1], accessing a null pointer/reference will result in a crash both in C# and C++.
When developing Java apps (C#'s big brother), the biggest sources of bugs during development were null pointer exceptions and unhandled exceptions propagating to the event loop and killing the UI thread.
> Using smart pointers makes it really hard to have dangling pointers.
No, it doesn't.
std::vector<std::unique_ptr<int>> x;
x.push_back(std::make_unique<int>(1));
auto& y = *x[0];
x.clear();
std::cout << y; // use after free
This is, of course, a toy example. For lots and lots of real examples, search Web browser engine bug trackers for UAF vulnerabilities. (They have been using smart pointers exclusively for a decade or so now.)
> A good STL impl will have iterator debugging, which catches iterator invalidation.
Valgrind and ASan catch UAF too and have been around for years and years. Yet we still have a ton of UAF vulnerabilities.
With C++ one needs discipline to make up for the lack of type safety that one can get in other languages. This is of course not so great, because it relies on the programmer to know various idioms/patterns.
In your example, it would be a major red flag in code review to create a reference like that. Some code constructs are just asking for trouble.
I think the reasons behind still having errors despite tools like Valgrind and ASan are:
a) many devs don't know about them or use them
b) you need 100% coverage to remove all errors
Using said engineering practices will not result in a completely bug-free program, but it will have a significant impact on the quality. For something that doesn't have the nightmare security profile of a browser or various other exposed services, that can be just fine, even if not ideal.
Doesn't .NET Native address startup perf, particularly for UWP applications?
I'm guessing the reason why C++ yields lower memory usage than C#/.NET is because RAII and reference counting ensure that memory is always freed as soon as it's no longer needed, whereas garbage collection only ensures that memory belonging to unused objects will be freed sometime later. Is that correct?
Also stack allocation and pervasive use of value types help C++ a lot in practice. This reduces overhead of heap metadata. There's also the overhead of .NET wrapper objects for OS resources--compare a .NET window object to an HWND, the latter of which is just a word-sized opaque ID describing a kernel managed resource.
Though GC has its own advantages too--the ability to compact being the main one.
They are both used for building UI applications on Windows, because there's a lot of existing C++ code, .NET used to be (not sure now) a major PITA to deploy and was slower to boot.
It also allows one to expand to other platforms and operating system in a way that C# never could, in spite of toolchains like Mono, and the recent open sourcing of .NET.
I would absolutely not choose C# for building a UI app even today.
I don't really think about "X safety", but rather about various quality attributes such as availability and resilience, perhaps also security in this case. These have to be evaluated and kept in balance with other attributes such as performance, portability, dev resources etc.
I would personally prefer to use various engineering practices such as static and dynamic analysis, unit tests, etc rather than change programming languages.
There's also the other side of this argument: my last project was Java. C++ would have allowed us to port more easily to other platforms (a current pain point) and get better performance in combination with OpenGL (another major pain point), however, the project constraints would have made it extremely difficult to bring a C++ project to market with the resources we had.
> I would personally prefer to use various engineering practices such as static and dynamic analysis, unit tests, etc rather than change programming languages.
There is no static and/or dynamic analysis out there that effectively stops programmers from writing UAFs (to name one especially problematic class of memory safety problems) in C++ over and over. This is despite decades of work on it. The language itself is fundamentally hostile to analysis.
I also dislike "one language to rule them all" thinking. Web app and network backend developers shed that mentality a long time ago, and the industry is better for it. Java, Ruby, PHP, Python, Go, JavaScript, Scala, etc. are all used on the server where their niches are strongest, and this is great! The industry would have been in much worse shape if they all had tried to stick with C++.
I agree with your other points, though--the choice of programming language has to be balanced among many factors. Sometimes security against RCE doesn't matter or isn't relevant, and the crashes caused by memory safety problems can be lived with. But I don't think we should pretend that C++ is safe, or as safe as other languages. It just isn't, and no tooling so far has been able to make it so.
I have to admit to utter befuddlement about why you would ever want that. (Presumably they will also require scanf to honor this as well . . . that'll be fun).
Just when I thought the C++ committees were getting their heads screwed on straight. Am I missing something, or is this another trigraph disaster?
I actually like this, and had decided that single quotes were a good digit separator even before C++ adopted it. I had a calculator, an HP, I think, that used single quotes as digit separators in the LCD display. There is actually a locale that uses single quotes: de_CH, the Swiss High German locale. There are no locales that use underscore as far as I know. I have a few aliases like this:
alias dfk='LC_NUMERIC=de_CH df --block-size='\''1024'
alias duk='LC_NUMERIC=de_CH du --block-size='\''1024'
that help me when something like du -h won't do.
Also, the underscore is used to separate words in variable names, where it is significant.
int my_long_var_name, mylongvarname;
are different variables. But in numeric constants, the underscore has no meaning.
186_282.397 and 186282.397
would be the same number. This is inconsistent.
Finally, in a variable width font, the single quote is much less visually intrusive than the underscore, which tends to be a wide character.
I would like to see this form used universally. In my wildest fantasies, I'd also allow either '.' or ',' as the decimal indicator. That would make some of my European friends happy.
That also works in Perl, Ruby, C#, Ada, D, and Julia.
According to Wikipedia [1] underscore was proposed for C++ but rejected because it conflicts with user-defined literals.
Is there any reason they could not have used space? Offhand, I can't think of anyplace where a space in a literal number would not currently be illegal, so adding that as a separator should not introduce any problems, but C++ has grown massively in complexity since I last seriously used it so I could easily have overlooked something.
C++ has changed so much I barely recognize anything (I used to code C++ a bit like C with classes). Which makes me wonder: do we have any idea of the market share the 20xx C++ versions represent? I mean, how much C++ code produced today is "modern C++"?
The author may be a bit confused as to what C++17 will be. Modules will be in a TS, most likely. Transactional memory, concepts, ranges, and filesystem are already in published TSes (not planned for the IS in this round). Out of the list presented, only fold expressions and __has_include are actually already voted in for C++17 itself. The rest are in various stages of standardization (mostly TSes) and are not guaranteed to ever become part of the IS (though hopefully many will).
That's incorrect - the Ranges TS was just started, and certainly hasn't been published yet (not even as a PDTS). Filesystem was the first published TS.
As you see, people outside the committee also have strong opinions. Those opinions can depart radically from the ones I hear in the committee, and from reality.
I would argue that reading A Tour of C++ would make you ready to write a modern C++ program, if you are a programmer and are comfortable with pointers.
I read "A Tour of C++" and I found the book terse and the information too condensed. I honestly doubt that a programmer that doesn't know C++ could learn C++ from this book.
Recommended resources for learning C++ nowadays: "Jumping into C++" by Alex Allain, "C++ Primer Plus" by Stephen Prata, and Kate Gregory's C++ courses on Pluralsight. You can get a free 3 or 6 month Pluralsight subscription by signing up for Microsoft's free Dev Essentials program.
I found that diving into openFrameworks (looking at their examples, etc...) and hacking my own stuff was a great way to learn. Cinder would do the same.
This is an honest question. In what areas of the industry does C++ dominate the language landscape except hardware drivers? I used to think this was true for games, but I've seen most studios move to C# as the community has matured around it. All other major spaces seem like they've moved on to more managed languages.