We have C++14 (isocpp.org)
426 points by ingve on Aug 18, 2014 | 342 comments



When I started writing C++ around 5 years ago, I had a perception that it was a language that is "on its way out". As I learned more and more of it, I've been super impressed at how modern it is becoming, and how it is adapting to overcome its perceived flaws. It is becoming a killer language to me: blazing fast, modern, ubiquitous, stable, and expressive.


Too bad it's a mirage. No, really. "Modern C++" doesn't exist outside of blog posts, books, and tutorials.

Real-world C++ is an array of sometimes mutually exclusive dialects, patterns, rules, and sub-dialects.

Reading MFC is nothing like reading Qt which is nothing like WxWidgets which is nothing like Boost which is something like (but not quite like) the STL which is way different from Apache C++ libs which is way different from what Google's C++ libraries are...

This is what happens when a language is "un-opinionated", throws in everything including the kitchen sink, and tries to compile C and legacy C++ while introducing new features, all without being allowed to break existing code.

It's on its way out, no doubt about it. And we'll be better off as an industry for it, I'm sure of it.

Rust, Go, and others will be there to fill in the gap in a much better, saner, safer, maintainable way.


As a counterpoint, I write "Modern C++" for my job. I know several others who do, too. I'm overjoyed at how much the C++11 features have improved my code (and I'm still learning how to deploy them effectively), as well as how good the compiler support is.

MFC was the vilest, most horrible abuse of C++ from the day it was released. Anything that involves MFC should just be rewritten with C# or turned into a webapp. I wouldn't take a job that involved programming MFC.


> MFC was the vilest, most horrible abuse of C++ from the day it was released.

MFC's greatest benefit was that it made returning to plain win32 C programming without MFC a pleasurable experience.


Funny thing is that MFC is like that because of Win16/Win32 C developers.

Microsoft's team made a prototype OO framework even more OO than what OWL and others offered. MFC was the result of the feedback on said framework.


Me too. RAII ftw!


Good luck making your RAII class exception safe. It's extremely difficult, to the point that writing a simple class becomes an hours-long exercise in reasoning about exceptions.

And of course, if you admit that exceptions aren't worth worrying about, then you'll start to question the entire "C++ way," which usually ends in disillusionment.

Alternatively, instead of disillusionment, you may still kind of enjoy C++. But suddenly you find that it's been three years since you've worked with C++, and on reflection, you haven't really been missing out on much of anything at all by not using C++.

It doesn't seem like C++ solves the right problems, even with the new dialects. Programming languages should be convenient to think in. By immersing yourself in the C++ mental model, it becomes difficult to keep a large-scale architecture entirely in your head.


> Good luck making your RAII class exception safe. It's extremely difficult, to the point that writing a simple class becomes an hours-long exercise in reasoning about exceptions.

In my experience, this is literally backwards. RAII is the only way to make exception-safe classes that aren't a gigantic mess.
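
For what it's worth, the idea in a minimal sketch (a made-up FileHandle class, not from any particular codebase):

  #include <cstdio>
  #include <stdexcept>

  // Minimal RAII wrapper: the destructor releases the resource, so the file
  // is closed no matter how the scope is exited (normal return or throw).
  class FileHandle {
  public:
      explicit FileHandle(const char* path) : f_(std::fopen(path, "r")) {
          if (!f_) throw std::runtime_error("open failed");
      }
      ~FileHandle() { if (f_) std::fclose(f_); }
      FileHandle(const FileHandle&) = delete;             // non-copyable
      FileHandle& operator=(const FileHandle&) = delete;
      std::FILE* get() const { return f_; }
  private:
      std::FILE* f_;
  };

  void parse(const char* path) {
      FileHandle f(path);
      // ... work that may throw ...
  }   // the file is closed here even if an exception propagates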

> It doesn't seem like C++ solves the right problems, even with the new dialects. Programming languages should be convenient to think in.

No, some programming languages should be convenient to think in. Some need to be uncompromisingly fast. C++ is in the latter set, and nothing more convenient to think in is faster.


>Some need to be uncompromisingly fast. C++ is in the latter set and nothing more convenient to think in is faster.

Yes, and C++11/14 make significant strides toward being more convenient to think in as well, without compromising speed. In fact some improvements also improve speed in some cases.


Yes, RAII is the only way to make an exception-safe class. Good luck doing it. It's not easy. I know; I've tried. There are all kinds of corner cases and special considerations that you have to take into account. People have written huge tomes about that exact topic detailing exactly why it's a hard problem and why you're probably not going to solve it correctly by accident. You're far more likely to introduce a memory leak than to make your class exception safe on the first try. That's my point. That's why C++ is overly cumbersome unless you ignore most of its capabilities, like exceptions. And that's why you can simply ignore C++ and not miss out on much. If performance is your only reason for rigorously sticking to a mental model, it's likely that your mental model is a premature optimization.


"Good luck doing it. It's not easy. I know; I've tried" Completely unsupported assertion (no pun) without evidence. Other people have tried it, and have succeeded. I could start listing all the software written in C++ that you use in your everyday life, perhaps 100's of times a day... but that'll quickly exhaust my word limit for this post.

"That's why C++ is overly cumbersome unless you ignore most of its capabilities, like exceptions."

Complete non sequitur. Exceptions are not "most of C++ capabilities". In fact, you can refuse to catch or throw exceptions for 99% of your (millions of lines) of code and still end up with a very large, stable code base running a large portion of the internet. Ask Google.

Today, there's pretty much no viable alternative to C++ for writing large scale, high performance systems with reliable performance guarantees. You could do it in C if you have a religious anti-C++ agenda but that would be like cutting off your nose to spite your face, and you'll end up in a horrible mish-mash of macros and generated code all over the place for a project with any level of complexity. Rust or D might get there some day if they manage to attract a solid community and big corporate backing. But they are certainly not there yet.


> Exceptions are not "most of C++ capabilities".

To add to this point: There are C++ frameworks, such as Qt, that don't use exceptions at all! And what impressed me most: when writing a Qt application, you don't even miss exceptions. You don't have to resort to C-style return value checking all over your code. How is that? It took me some time to figure this out, but I believe it is because Qt applies the "Null Object" pattern consistently, down to the deepest depths of the framework.
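
Roughly like this, if anyone is curious; a toy sketch of the idea, not actual Qt code, and the names are made up:

  #include <iostream>
  #include <string>

  // Null Object pattern: a failed lookup returns a usable "null" object
  // instead of a null pointer or an error code, so call sites stay simple.
  class Widget {
  public:
      virtual ~Widget() {}
      virtual bool isNull() const { return false; }
      virtual void show() { std::cout << "showing widget\n"; }
  };

  class NullWidget : public Widget {
  public:
      bool isNull() const override { return true; }
      void show() override { /* deliberately does nothing */ }
  };

  Widget& findWidget(const std::string& name) {
      static NullWidget nullWidget;
      // ... real lookup elided; on failure: ...
      return nullWidget;
  }

  int main() {
      findWidget("does-not-exist").show();  // safe: no check, no throw
  }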


Actually that statement has a point. Have you ever tried implementing std::vector and java.util.ArrayList and comparing the two implementations? One of them is trivial, the other one is most certainly not. One of the reasons is exception-safety guarantees. I'll leave it for you to figure out which is which.


> you can refuse to catch or throw exceptions for 99% of your (millions of lines) of code and still end up with a very large, stable code base running a large portion of the internet.

The point is that these features are infectious. If you don't want to throw exceptions, how do you handle a failure in a constructor? And then if you abandon constructors, other C++ features are unusable without them. And so on.


Googler here. We avoid doing "work" in constructors as per the style guide: http://google-styleguide.googlecode.com/svn/trunk/cppguide.x...


Yeah, that's pretty much what you have to do, and go back to a C-style init method. But then you've lost the benefits of RAII, and you can't necessarily use those types with templates (which won't know about your init method).
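
For anyone who hasn't seen it, the two-phase pattern being described looks roughly like this (hypothetical sketch):

  #include <iostream>
  #include <string>

  // Two-phase initialization: the constructor cannot fail, and everything
  // that can fail moves into Init(), which reports failure via return value.
  class Connection {
  public:
      Connection() : connected_(false) {}      // trivial, cannot fail
      bool Init(const std::string& host) {     // caller must remember to check this
          connected_ = !host.empty();          // stand-in for the real connect logic
          return connected_;
      }
  private:
      bool connected_;
  };

  int main() {
      Connection c;
      if (!c.Init("example.com"))              // forgetting this check still compiles,
          std::cerr << "connect failed\n";     // which is part of the complaint
  }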


> You could do it in C if you have a religious anti-C++ agenda but that would be like cutting off your nose to spite your face, and you'll end up in a horrible mish-mash of macros and generated code all over the place for a project with any level of complexity.

Have a look at Tarsnap's source code.


What were these corner cases and special considerations you ran into?

I wouldn't call Herb Sutter's _Exceptional C++_ a huge tome and it seems to be the standard treatment of exception safety.


It's 240 pages of knowledge that you can completely do without, and it's a book called "47 engineering puzzles." The puzzles, of course, are entirely exception-related.

I'm not saying it's impossible to write exception safe code. I'm saying that you can write code without exceptions without being worse off for it.


We're talking about the same book, _Exceptional C++: 47 Engineering Puzzles, Programming Problems, and Solutions_, by Herb Sutter. The takeaway I had from reading that book (a long time ago) was just to use RAII religiously, which also tends to make other things easier.

So I'm curious about why you think RAII makes writing exception-safe code difficult?


Sorry for the confusion. I meant that writing exception-safe code is difficult, not that RAII makes it difficult to write exception-safe code. I should have been clearer about that.

The book is excellent. For anyone who wants to use C++ exceptions, it should be required reading. But as someone who has gone through that gauntlet, I feel that I probably would've been better served by spending my time on something other than mastering the finer points of exceptions.

The reason I started talking about C++ exceptions in the first place is because, in my experience, when someone is very pro-RAII (like the original commenter I responded to), they also tend to be pro-exceptions. This isn't always true, but on average, it seems like either people prefer both or prefer neither.

Writing exception-safe code is certainly one of the hardest challenges available in C++, so if you enjoy a good challenge, then it's hard to do better. But if you're just looking to design large-scale systems, it seems like exceptions are unnecessary, as Google has demonstrated by banning the use of exceptions in their codebases.

In my experience, by refraining from using exceptions, it's possible to design large codebases more quickly, with fewer bugs, and without losing any safety or extensibility. However, this is simply my personal experience, and it may be mistaken to generalize this into a claim that all codebases among all C++ teams should refrain from using exceptions.

So, that's all I meant. I was also in excruciating pain yesterday due to a certain tooth that decided to segfault in my mouth, which may have contributed to the adversarial nature of my writing style. I feel bad about how I came across, and I sincerely apologize for it. No excuses, though: I should've done better.


> either people prefer both or prefer neither.

exceptions (specifically, exceptions from constructors) are part of what RAII is, so it's only natural that people "prefer both".

As another personal anecdote, in my experience working with a large (175MLoC at last count) mostly-C++ codebase, I can't imagine how painful it would have been if the code didn't use exceptions.


Their style guide http://google-styleguide.googlecode.com/svn/trunk/cppguide.x... is sheepish about it. The authors seem to believe using exceptions would have been better, except they found themselves beyond a critical mass of bad code that would fail or leak on a throw, and now they can't fix it all or throw it away.

I can't understand how a correct init method could be easier to write than a ctor, given that the compiler is required to generate code that knows which of your base classes and fields are already initialized at any point they could fail, which you would have to do yourself after moving to an init method, as well as losing the guarantee you can't silently ignore an error by forgetting to check for it.
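
A small illustration of that point (hypothetical types):

  #include <iostream>
  #include <stdexcept>

  // If the second member's constructor throws, the compiler-generated code
  // already knows the first member was fully constructed and destroys it
  // before the exception leaves Gadget's constructor. With a hand-written
  // Init() you would have to track and roll back that partial state yourself.
  struct Buffer {
      Buffer()  { std::cout << "Buffer acquired\n"; }
      ~Buffer() { std::cout << "Buffer released\n"; }  // runs during unwinding
  };

  struct Socket {
      Socket() { throw std::runtime_error("connect failed"); }
  };

  class Gadget {
      Buffer buffer;
      Socket socket;
  public:
      Gadget() {}  // no cleanup code needed here
  };

  int main() {
      try { Gadget g; }
      catch (const std::exception& e) {
          std::cout << "caught: " << e.what() << "\n";  // printed after "Buffer released"
      }
  }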


Huh? RAII is basically the only way to write exception-safe code in C++.


> But suddenly you find that it's been three years since you've worked with C++, and on reflection, you haven't really been missing out on much of anything at all by not using C++.

True in my case; I now use the language mainly on hobby projects, although we have been very good friends since 1993.

Personally, I would have liked safer systems programming languages from the Mesa branch to have seen more use in the industry, but OS vendors decided to embrace C instead, and we ended up with the exploits-everywhere situation we enjoy nowadays.

C++ is not perfect, but it surely is way better than C in terms of safety.

Until a big OS vendor decides to push another systems programming language, we won't see any major adoption from the alternatives.


RAII is hardly a trait of modern C++.


Is your meaning that it's not a distinctive trait of modern C++?


Eh? How do you mean that?


Presumably that it's nothing new. The term RAII dates to the mid to late 80's sometime, and you could hardly find any book about C++ from the 90's onwards that wouldn't be espousing RAII as the One True Way.

RAII has been with C++ almost from the beginning.


Yes, but "modern" doesn't imply that no feature is older than Twitter. "Modern C++" refers to the way that C++ is written today, and some of that is new additions to the language, some has been there for ages.


I have furniture at home made of wood. Oddly, my table looks nothing like my broom handle, so does this mean that wood is a terrible choice for both items? "The industry would be better off without wood!!"

Surely libraries in other languages have the same problem? They don't look anything like each other.

wxWidgets and Qt look nothing like each other but it isn't a problem at all. The signalling mechanism / event handling between the two just hides function pointers, so it isn't a headache. In fact, MFC looks like wxWidgets to me. Not sure what the problem is?

EDIT: Not sure why the downvote(s)? Do people get downvoted here because someone points something out that is true on a subject/language that they like/dislike? Or was it the tone or supposed tone that the person read the comment in? Please let me know!


It's the downvote pigeon, hitting randomly and savagely.


Sure is a vicious beast! Must have got into a bit of a flap. Pigeons have tiny brains don't they?


Forget about the downvotes, if you're not trolling then know they are a red herring.


People have been saying this about C and C++ for decades. Not to say C++ will always be "on top", or even that it currently is, but your comment isn't really substantive or persuasive that "it's on its way out."

I'm also not convinced that Rust or Go are better or saner.


I can understand the few controversial design choices in Go which limit its use cases, but would be interested to hear why you claim Rust isn't saner than C++.


When it comes to Rust, there's no stable version of the language at this point. There's no stable version of the standard libraries. There's no reliable production-grade compiler available. As the Rust home page itself states, "Rust is a work-in-progress and may do anything it likes up to and including eating your laundry."

Maybe Rust will offer such stability in the future. But that's of no use to people and organizations who need to develop software today, and who need to be able to trust that the code they write now will compile and work tomorrow, a month from now, a year from now, and perhaps even decades from now.

C++ does offer stable, standardized, well-supported versions of the language. C++ does offer stable, standardized, well-supported standard libraries. There are numerous high-quality free and commercial C++ implementations available, for just about every platform imaginable. It provides a robust and predictable platform that serious and massive software systems can be built upon.

The theoretical benefits that Rust may bring are pretty much irrelevant as long as it isn't a production-ready language in the way that C++ is.


But now you aren't talking about Rust the Language but Rust the Ecosystem.

Not that I am disagreeing with your points, I am not. However, when people talk about C++'s problems, I immediately assume they are talking about C++'s problems as a language rather than its ecosystem.

Rust isn't out there to tackle C++'s ecosystem, tooling, legacy code or professional workforce, but rather Rust aims somewhere near C++ and fixes many of the language flaws which are inherent in C and C++, while still being competitive in performance and low-level control.


Ecosystem really is the whole problem, though. You can never make it big without having lots of friends.

Rust is a pretty cool language, but I can't help being reminded about another very well-designed but ultimately unsuccessful C++ challenger, D. The parallels are really hard to ignore.

D, like Rust, had great syntax and was a breath of fresh air after coding with C++98. Neither Rust nor D has a sponsor with really deep pockets to encourage adoption. Neither came out of a standards process. Both have had compiler and standard library issues. The big difference between the two at this point seems to be momentum and where the two are in their parabolic trajectories.


There are important differences: D threw away one of the most important features of C/C++ (for the target audience): usability of the language without a garbage collector.

Also, Mozilla has much deeper pockets than Digital Mars.

Still I agree with you that it is very likely that Rust will follow a parabolic trajectory. The advantages as perceived by the industry compared with C++11/14 will be too few. At the same time it does not have the ecosystem.

Go succeeded because of Google's deep pockets and, perhaps more importantly, because it filled a large niche that even the authors did not anticipate: a faster language for Python and Ruby aficionados.


> Still I agree with you that it is very likely that Rust will follow a parabolic trajectory. The advantages as perceived by the industry compared with C++11/14 will be too few.

Rust is memory- and type-safe (as in: the compiler will not let you write a dangling reference, invalidate an iterator, or write exception-unsafe code without opting into an unsafe dialect). The security benefits of that alone are enough to justify the language for our use cases, and, from what we've seen, for many others. Safe zero-cost abstractions are a niche of their own.


Could you be specific about what in D isn't useable without the GC? You can use manual memory management in D. It has Unique (unique_ptr) and RefCounted (shared_ptr). It has Array. It has malloc and free. None of these use the GC.

People complain about the GC being available to use while simultaneously complaining that to avoid it you have to manage your memory yourself. You can't have it both ways. There is no memory allocation strategy that works best in every situation. Sometimes ref counting is best, sometimes stack allocation, sometimes RAII, sometimes memory pools, sometimes it's the GC.

Rust has done some cool work with memory but even it doesn't free the programmer from having to consider and choose which memory allocation/ownership option is going to deliver the best performance on a case-by-case basis.

You're right about Rust and Go having much deeper pockets. D isn't backed by any corporation. It's 100% a community project. Maybe it won't ever gain a significant market share because of this. I don't know.


It took python (which also doesn't have a deep-pocketed company backing it) about 15-20 years before it saw widespread adoption.


I would also say that D did itself irreparable harm with the whole v1-v2 standard library debacle. Right when it was receiving the most attention, it came right out and said to anyone who might have considered it, "we have two incompatible versions of the standard library: one which we don't support anymore and one you can't use yet."


Yeah, that certainly hurt D's reputation. People still bring it up even though it's been resolved for years.


That was then and this is now, there is one standard library and the language has come on leaps and bounds. D is a great language.


I'm not arguing otherwise. But as the saying goes, you only get one chance to make a first impression, and D's first impression was of a train wreck.


>At the same time it [Rust] does not have the ecosystem

Try watching new projects pop up at Rust CI [0] for a few days. With the possible exception of Node (which is not even a PL), I've never seen a language ecosystem grow this vast, and I'm a PL aficionado.

I think in a year's time, the question of Rust's stability and ecosystem will be entirely moot. It's a tough wait meanwhile, but I'm still investing the time in learning Rust (and it's a significant investment).

0. http://www.rust-ci.org/projects/


Is Rust CI really the best evidence to use in this case?

When I last looked at it, probably 40% to 50% of the projects listed had builds that were in the "failing" or "error" statuses.

That indicates that one or more of at least a few things are happening:

1. The Rust language and its standard libraries are changing at a pace that results in previously-compiling code needing to be modified before it will compile again with a newer version of the language/implementation, perhaps a very short time after the code was initially written.

2. The Rust compiler or other tooling is crashing or failing in some way while compiling these projects.

3. The projects themselves aren't being maintained on an ongoing basis.

4. The projects themselves were never building properly in the first place.

5. The projects' developers are targeting different versions of Rust (which probably means there will be interoperability problems for anyone trying to use them in a larger project, especially when it comes to libraries).

And while there may be a lot of these projects, I've never found the quality to be very good. Many of them are extremely limited or incomplete. Many of them are little more than casual experimentation. Many of them are only developed by a single person, who often appears to have lost interest.

Those factors are disconcerting, especially for somebody who wants to use Rust for serious product development. It does no good if there are hundreds of libraries available for use, but half of them don't even build, and the ones that do are very incomplete.


>Many of them are extremely limited or incomplete. Many of them are little more than casual experimentation. Many of them are only developed by a single person, who often has appeared to have lost interest.

Yes, I completely agree, and this is an entirely normal part of the language ecosystem development cycle. Rust is at the tail end of the experimentation stage, and as it converges on 1.0, more people will undertake serious projects.

I'm not at all worried about the quality of Rust projects. I'm just happy to see so much enthusiasm. I have no doubt that all this enthusiasm will transfer into some powerful libraries as Rust continues to stabilize and reaches 1.0.

But yes, it's still too early to use Rust for production unless you're willing and able to write your own libraries.

Then again, the "batteries included" approach of Rust's standard library leaves little to be desired outside of domain-specific libraries.


Given the Mesa/Cedar system at Xerox PARC, Oberon at the Swiss Federal Institute of Technology, and Modula-3/SPIN at Olivetti, doing OS work in GC-enabled systems programming languages is quite possible.

The problem is how to move OS vendors away from C's influence.


Use micro-kernels.


Exo-kernels might be a better alternative.

https://en.wikipedia.org/wiki/Exokernel


It is a matter of willingness, not technology.


Facebook have deep pockets ... Yep, Facebook is using D for some stuff.


A large or well-known company merely using a programming language, and maybe even contributing back to it and its community, isn't the same as the company truly supporting or championing it.

What you describe is very different from, say, how Sun pushed Java, or Microsoft pushed C#, or how Apple will likely push Swift, or how a huge portion of the entire software industry pushed C and C++.

Facebook's Hack language is probably a much better example than D is of a language that they're actively supporting. It's a creation of theirs, rather than just a creation of somebody else's that they find useful in some limited cases.


> Both have had compiler and standard library issues.

What are the compiler/standard library issues with Rust? Sure, it's taken time to get to a high level of quality, but no language compiler and library is going to spring out of thin air fully complete. In particular, I'm confident that the Rust compiler is of high quality for its age, especially in the quality of the generated code.


> In particular, I'm confident that the Rust compiler is of high quality for its age

I'd like to concur. I've used the Rust 0.10 compiler and the only bug I encountered was that generating debug binaries was somehow broken. Apart from that, it was one of the smoothest experiences ever and the error messages are amazing.


Rust error messages are perhaps the best thing about the compiler. For me, it's a new experience altogether. Instead of having to Google obscure error messages, Rust error messages actually tell me how to solve the problem!


I don't think that the two can be separated in any meaningful way. Of course they're different things, and we can discuss each on its own, but when it comes to which language to use as a business decision, the line becomes blurred, if not erased entirely.


Rust is really the only language that's actually targeting the same niche that C++ primarily occupies: systems-level programming, where reasoning about memory management and performance is part of the actual work being done. Go, let alone the others, simply does not allow you to do this.

It shows a distinct lack of understanding of why people still use C or C++ to raise languages/environments like Go as viable alternatives.


Don't forget D


D is a fine language and a pleasure to program in, but with its required garbage collector, it's not really in the same niche as C/C++/Rust. I think the hallmark of this niche is the ability to run without the garbage collector's penalty and its accompanying lack of determinism.


Oh not again the GC argument.

D is the one true C++ without pretending to be a superset of C. Rust is more modern than C++ and better at programming in the large, but D is really for low-level programming and a good target for code generation. For me it is a good replacement for C++ and Delphi. It is even a good fit for areas touched by Java and C#, like desktop and scientific apps. But it is not there yet because of the lack of serious tooling and the lack of bytecode, although in exchange for the second disadvantage you gain performance. Nimrod, for example, could address this (with D as a code generation target).


You can use D without using the GC. I'm not sure why this myth keeps coming up. It has always supported manual memory management. You have to sacrifice some language features (which aren't even available in C++), and some parts of the standard library use the GC, but that is easy to avoid with yesterday's 2.066 release and the @nogc attribute. 2.067 should also have a lot of the standard library switched to using lazy algorithms where possible, which means the caller gets to determine the allocation strategy (and can often avoid allocation entirely).

void main() @nogc { /* ... */ } // this program doesn't use the GC


Ordinary C++ with good old RAII does use something akin to garbage collection: many classes, when constructed, allocate memory on the heap, then free that memory when they fall out of scope.

Malloc() and free() aren't exactly predictable. If you really want guarantees, you need to write your own custom allocator, and if you don't need those guarantees, garbage collection is probably enough for your purposes.

Basically, if you're using C++ without real manual management, with custom allocators, pools, and all, you probably didn't need C++ in the first place.


RAII is a form of automatic resource management, it's true. It is, however, a deterministic form of automatic resource management.

This is not to say that the underlying allocator is necessarily deterministic, but you're making a false dilemma here by putting it in opposition to making decisions about custom allocators and pool allocators. You can use RAII with custom allocators. You can use RAII with memory pools.

And that's where C++ really does shine. It's a niche that hasn't really even been attempted by other languages much, let alone surpassed.
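
A minimal sketch of the combination, if it helps; the allocator here just forwards to malloc, but the same shape lets you slot in a pool or arena without giving up RAII (assumes a C++11 standard library that accepts minimal allocators):

  #include <cstddef>
  #include <cstdlib>
  #include <new>
  #include <vector>

  // Custom allocation strategy plugged into an RAII container: std::vector
  // still owns and releases the storage; only where the bytes come from changes.
  template <class T>
  struct ArenaAllocator {
      using value_type = T;
      ArenaAllocator() = default;
      template <class U> ArenaAllocator(const ArenaAllocator<U>&) {}
      T* allocate(std::size_t n) {
          if (void* p = std::malloc(n * sizeof(T))) return static_cast<T*>(p);
          throw std::bad_alloc();
      }
      void deallocate(T* p, std::size_t) { std::free(p); }
  };
  template <class T, class U>
  bool operator==(const ArenaAllocator<T>&, const ArenaAllocator<U>&) { return true; }
  template <class T, class U>
  bool operator!=(const ArenaAllocator<T>&, const ArenaAllocator<U>&) { return false; }

  int main() {
      std::vector<int, ArenaAllocator<int>> v;  // RAII: storage freed on scope exit
      v.push_back(42);
  }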


With Nimrod, you can manage your own memory, or use the Boehm GC. It compiles to C/C++ with native code generation not dependent on a VM, and the Pythonic syntax makes writing such code as easy as common scripting languages [1].

1: https://github.com/def-/nimrod-unsorted


You mean like Mesa/Cedar at Xerox PARC, Oberon at ETHZ, Modula-3 at Olivetti...


> Too bad it's a mirage

I hate to spoil your worldview, but modern C++ is very alive and well in the scientific computing communities. Disney Animation implemented a new renderer using it, and our next movie is currently being rendered using the new renderer, so I highly doubt it's on its way out in these areas.


Is this a replacement for RenderMan? Interesting...


How much time/money did it take to be implemented? What is the maintenance cost? Which version of C++? What percentage was written from scratch? Does it run in a networked environment? These are the real questions.

Pharaoh Inc. built pyramids in ancient Egypt. It cost the lives of many slaves, but it was cheap for them. Nowadays we make big buildings without slaves.


It's not like C++ programmers are poorly paid.

And hey, don't knock the Pharaohs!

First off, it's been known for some time that the Pyramids were built by paid workers:

http://www.boston.com/news/world/middleeast/articles/2010/01...

(Literate too, since they left graffiti)

http://www.bbc.co.uk/history/ancient/egyptians/pyramid_build...

The conditions that migrant workers endure building World Cup facilities in Qatar seem to be worse than those endured by Egyptian construction workers:

http://www.theguardian.com/world/2014/may/14/qatar-admits-de...


I don't see how that refutes the parent's comment. The areas you listed are very small niches. Frankly, very few people care about scientific computing or even animation rendering. Generally speaking, using C++ is like wearing jeans and white sneakers: popular in the 90s, but definitely out of fashion now.


Exactly. It is out of fashion, but fashion is fleeting. Fashion is a twenty year old wearing skinny jeans; style is James Bond in a Tuxedo, and it's timeless. What's in fashion today may very well be out of fashion tomorrow. I will likely still be writing C++ to get shit done.


Oh goodness - C++ is out of fashion? But I was so productive! And my code ran so fast! Well, I guess I should go read up on node.lang or er-rust, or whatever it is those kids are into...


Very few people care about any kind of computing - they care about the end result. Between embedded systems, servers and systems programs, and infrastructure, the modern world basically runs on C and C++.


Why do you need to be fashionable to be productive? Do people really ask what your finished software product is written in?


What's your problem?


So "Modern C++" is a mirage, but Rust and Go are the surefire winners? Do you have any substance to add to your post?

I don't know, I've been writing C++ for about a decade and the code I write is "modern" as far as I can tell. I just started a new project. It is C++, it is "modern", and Rust and Go simply aren't options.

Rust is an infant and not yet ready for prime time, and Go simply wouldn't be considered by management. Not because they're ignorant, out of date curmudgeons, but because they want a larger pool of talent to draw upon and a language that is well known, well understood, and has decades of work invested around tooling. I can't say that I blame them.


Why do people keep assuming that they need an X programmer, or a Y programmer? Go is a garbage collected imperative and OO language, right? It's not like Java and C++ programmers wouldn't be able to learn the basics in 3 days, and be proficient in a couple weeks.

Why does the "pool of talent" even matter?


Companies don't want to pay for training.


So don't publicize the fact that you are 'learning' while you 'practice'. Just show the end result and it will likely be better than their language 'experts' could have done in any comparable [or infinite] time frame.


Not even 4 weeks worth of semi-productive self-learning? That sounds quite pathetic.

There must be something else, like fear of the unknown, risk aversion, "nobody got fired for choosing IBM"…


I know consulting companies that bill the customer the hours developers spend getting up to speed with a given technology.

In the cases where the customer is not willing to pay, developers get asked to learn off work hours if they want to stay on the project.


Now this is just abusive. I understand that as an individual I have to live with this, but collectively, it looks like we should rise up.


I think you could say the same thing about any language where there are big frameworks and libraries that handle a lot for you.

In the Java world, a Java EE CRUD application is a LOT different than a Swing application that does the same thing. Likewise, a Python Django application is a completely different beast than a console application.


But there is less variation: Java provides only one paradigm (OO) and frameworks use different flavors of that: beans, POJOs, heavy classes, etc.

In C++ you can have such variations, plus all the other paradigms, including:

* C with classes (people programming in this style often still use C strings, do deletes and frees in destructors).

* C++ as the full-blown OOP language, with extensive multiple inheritance hierarchies.

* Nearly functional, STL-driven C++. In this code you see nearly no loops at all; <algorithm> is the control structure (see the small sketch at the end of this comment).

* Template-driven programming, with policy classes, dozens of specializations, only header files.

Etc. Java does not have the legacy to 'embed' another language. Java does not do multiple inheritance, avoiding heavy-inheritance-based projects; Java does not claim to be a semi-functional language (though Java 8 brings Java closer in that direction); and Java does not provide a Turing-complete template language that is a world of its own.
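
For the curious, the STL-driven bullet above tends to look something like this (a toy sketch):

  #include <algorithm>
  #include <iostream>
  #include <numeric>
  #include <vector>

  // "STL-driven" style: <algorithm>/<numeric> calls replace hand-written loops.
  int main() {
      std::vector<int> xs = {5, 3, 8, 1, 9, 2};

      std::sort(xs.begin(), xs.end());
      xs.erase(std::remove_if(xs.begin(), xs.end(),
                              [](int x) { return x % 2 == 0; }),
               xs.end());                                   // keep only odd values
      int sum = std::accumulate(xs.begin(), xs.end(), 0);

      std::cout << "sum of odd values: " << sum << "\n";    // 5 + 3 + 1 + 9 = 18
  }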


That's the problem I have with C++. It is a mish-mash of paradigms, features and techniques. It is not consistent in use-cases and philosophy (unless you count "all and everything" as use cases and philosophy).

As much as I try to find "my" general-purpose language of choice, and for all of my inertia in learning yet another programming language, I believe languages should have narrower scopes. And there should be more languages, specialized for specific use cases.

And, as a pythonista, the "you can do/use it in many different ways" just irks me.


One core part of C++'s philosophy where it's very consistent is that you only pay for what you use and the performance guarantees (e.g., STL) are explicit.

Another consistent part of C++ is that it gives you absolute control over memory, both allocation and layout. It's really this control that makes C++ unbeatable in performance. The good VMs can JIT their languages to match CPU performance on small routines, but they can't match C++ when it comes to manipulating big, complicated data structures.
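
One small example of the layout point, in case it's not obvious what that control buys you (hypothetical sketch, not from any real codebase):

  #include <cstddef>
  #include <vector>

  // Structure-of-arrays layout: the hot field lives in one contiguous array,
  // something a typical VM's object model won't let you express directly.
  struct Particles {
      std::vector<float> x, y, z;    // hot data: three contiguous arrays
      std::vector<int>   metadata;   // cold data kept out of the hot path
      explicit Particles(std::size_t n) : x(n), y(n), z(n), metadata(n) {}
  };

  void integrate(Particles& p, float dt) {
      for (std::size_t i = 0; i < p.x.size(); ++i)
          p.x[i] += dt;              // streams through one contiguous array
  }

  int main() {
      Particles p(1024);
      integrate(p, 0.016f);
  }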


I thought much like that (and was a huge Python fan) until I found Scala. There are cases people cite as "more than one way to do it", but they're usually superficial syntax differences. You do have to choose between passing around objects and passing around functions, but that's a choice you make in Python as well. It can handle high-performance calculation or high-level scripting, but it never feels like these are separate languages; it has a bunch of orthogonal features but any two of them combine in exactly the way you would expect (and often that combination forms something very useful).


Scala is different from C++ in the sense that Scala at its core is a simple language compared to C++, and most abstractions are built from that. That said, there are different Scalas too: Scala the 'better Java' is a very different language from scalaz/shapeless Scala.

This seems to be the curse of any powerful core language (Scala, Haskell, ...) - on the one hand they allow you to make all kinds of useful abstractions that you can't make with e.g. Java. On the other hand, their abstractions are so powerful that they create seemingly different languages.

One could apply the same rule as C++ - stick to a subset of the language/abstractions. However, if you look at e.g. Haskell, a large number of libraries now depend on a library such as lens [1], so often it's not that much of a choice.


I think it's good that a library can become sort-of "standard" without having to be part of the language standard library (where, as the python folk say, modules go to die). We're starting to see this come together in Scala-land - if you want a Monad typeclass you use the Scalaz one, if you want a HList you use the Shapeless one, and if you want HTTP you build on top of the spray-http abstractions (I'm particularly pleased that Play intends to port to run on top of Spray). That's something that you don't really get in C++, with the possible exception of boost; there are still multiple ways to do threading, multiple ways to do event-driven, all with their own conventions.


That's a pity. My code appears to be a mix of the first three.....

any recommended reading?


> "Modern C++" doesn't exist outside of blog posts, books, and tutorials.

That you aren't used to doing greenfield development doesn't mean others are likewise impaired.


While a bit harsh sounding, this is actually spot on. What I write nowadays vs 5 years ago: hard to believe it is the same language.


Exactly. I only write greenfield C++ and I dig it a whole bunch.


Good for you. It would be harder to dig if you had to do some maintenance like less lucky lifeforms.


I have only worked on one legacy C++ code base, and it was a student job, but I've enjoyed it a lot. Most of "modern C++" can be applied to an old code base step by step, all the way up from C with classes. As long as nobody decides to introduce the project's first "const" somewhere in a core class... :)


I hope you didn't take that to mean that I don't maintain my own code. But I can choose when to refactor and when to put up with it. C++ isn't a tool to use when you can't own the project.


My recent project uses standard C++11. No dialects, no restrictions. Gcc 4.8.x. It's not a mirage, it's all up to those who want to use it or not.


You are missing the point. No project uses all features of C++ at once, only a subset, and this forms a dialect. It's not that infrequent to see the dialect change when going from one team to another even in the same organization. Some use multiple inheritance and exceptions, some don't. Some swear by STL, some won't touch it with a long pole. That's why internal coding guidelines exist and that's why they are particularly detailed and thick for C++ shops.


The commenter above claimed that "modern C++ doesn't exist" because normally only a subset of the language is used. I disputed that, and I got his point clearly. No one is using all features of the language just for the sake of it. That doesn't make used C++ not modern.


> No project uses all features of C++ at once, only a subset

Many features of C++ are targeted at library authors. Even if a given project won't allow template metaprogramming, it probably uses vector, which certainly does. And for good reason.


Could you name a book that focuses on Modern C++, that isn't one of these 1200 page tomes? I ask because sooner or later at my job I will probably have to code in C++. It will undoubtedly be a greenfield project, so I don't have to worry about anyone else's horror show (only my own).


There's no better place to start than Alexandrescu's eponymous Modern C++ Design http://www.amazon.com/Modern-Design-Generic-Programming-Patt.... Scott Meyers will soon come out with "Effective C++11" which from the table of contents looks like it will do a great job of covering the new language features.

As for coding standards, I suggest a book by the same author plus the current chair of the standard committee: http://www.amazon.com/Coding-Standards-Rules-Guidelines-Prac.... Another commenter has suggested the Google C++ Coding Standards, but despite their popularity they do not conform very well to "Modern C++" as I understand it.


Try "A Tour of C++" by Bjarne Stroustroup it has less than 200 pages.


Or just get Stroustrup's "The C++ Programming Language" and read the intro bit - it covers the language in fleeting detail but then you still have the rest of the book to look at if you are having a hard time understanding the section at the beginning.


I have both, "The C++ Programming Language" as a physical book and "A Tour of C++" as an ebook. "A Tour of C++" is self contained.


I would use the Google Coding standard. The best book to read is Effective C++ (get the latest edition, since they have changed a lot.)


Doesn't the Google coding standard use uppercase function names (like Microsoft)? Brrr. Classes are nouns -> uppercase, functions are verbs -> lowercase. Well, at least if you speak German that is the logical choice.


It also completely forbids the use of exceptions. I can understand that for their stated purpose of working with non-exception-safe legacy code, but it shouldn't be extended to other circumstances.


Exceptions have a significant performance impact, and making code exception safe distorts your interfaces. Insisting on no exceptions is reasonable for these and other reasons, even for new code.


Depends on if you need to gracefully handle malloc failures or not. Otherwise any write operation on an STL type (like appending to a string) will potentially throw an exception that you need to care about.

In the C world this is extremely arduous, since you need to bubble the allocation failure up through every level of code, adding cleanup to just about every function call. If you're using C++ and RAII, you can just catch std::exception in one place to cover a huge number of places where an allocation can fail.
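
Something like this, roughly (a toy sketch):

  #include <exception>
  #include <iostream>
  #include <string>
  #include <vector>

  // With RAII types, one catch block at the top covers every allocation
  // failure below it; nothing needs manual cleanup on the way out.
  std::vector<std::string> buildIndex() {
      std::vector<std::string> index;
      for (int i = 0; i < 1000000; ++i)
          index.push_back("entry " + std::to_string(i));  // any of these may throw
      return index;
  }

  int main() {
      try {
          auto index = buildIndex();
          std::cout << index.size() << " entries\n";
      } catch (const std::exception& e) {                  // e.g. std::bad_alloc
          std::cerr << "indexing failed: " << e.what() << "\n";
          return 1;
      }
  }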


Do you work on systems that handle this? I'd like to know if any still exist.

In modern systems I'm familiar with, malloc only reports failure on bogus inputs like -1, or address space exhaustion. Your process is likely to be killed before exhausting your address space (think iOS OOM handling, or Linux overcommit), especially on 64 bit. So checking for allocation success just isn't that useful any more.


Two examples come to mind.

The first is process limits.

The second is exhausting kernel data structure space for things like page mappings. I've seen this recently in AIX when allocating lots of memory that alternates mprotect permissions.


I worked on an embedded system that aggressively cached images in memory dedicated to the GPU. It wasn't uncommon for gl texture allocation errors to occur - and in addition to that, we needed to decompress images packed in various formats into whatever format the GPU supported (typically from png to rgba32). In low memory situations, it also wasn't uncommon to not have enough contiguous memory to perform that decompression - in which case malloc would fail.

Gotta love putting forth every effort in software to keep the BOM down :)


As @jjnoakes points out, you'll get malloc()==NULL if your ulimits are set. For a long-running program you definitely want to have a ulimit that will kick in before the OOM killer does.

Even in the absence of the OOM killer (i.e. the old days) you had to do this -- otherwise the machine might be swapping itself unresponsive for ages before you ever get malloc()==NULL


> Exceptions have a significant performance impact

Are you talking about enabling exceptions, or throwing them?


Can you elaborate on the "significant performance impact"? If used properly, they should have no runtime overhead - they are predicted "not taken" in modern compilers.


Early C++ implementations used setjmp/longjmp to support exceptions, and then each try block would lead to setting the setjmp context. So there definitely was a runtime overhead, even with no exceptions thrown.

I don't think it's very popular today. Modern implementations don't use this anymore and do not incur run-time overhead when no exceptions are thrown. However, they do incur some space overhead: the implementation needs to maintain pretty large tables to know what needs to be unwound (which destructors to call, in which order) when an exception is thrown. In most environments this space overhead shouldn't be a problem (PC, server), but for embedded development it may be. And then disabling exceptions to save space brings back the issues with error checking (or the lack of it...).

For more details, search for C++ exception implementation and you should find all the info you need. Or look into G++ documentation for example, this topic is covered somewhere.


> Rust, Go, and others will be there to fill in the gap in a much better, saner, safer, maintainable

For business applications? Sure.

For system programming?

Only if they get an OS vendor godfather that pushes the language into their SDK, like any other systems programming language that got widespread use.


And what exactly does that mean? So what if the libraries are all written differently? I hardly see how that's a negative thing. The language is flexible. I was very impressed with C++11 features and I am looking forward to C++14.


It means that the language as such - the codebases written in it - have a readability problem. If every project is written differently, with opposing coding standards and practices, then it adds a significant switching cost when moving to a different project, debugging systems that include libraries from various sources, or reusing code.

For ecosystems where there is "one way to do it", picking up a random project or library from GitHub and using or changing it is easy, as it is understandable in the same way as my own code; for C++ it is, well, different.

And this is directly caused by the language - the technical details of the language significantly influence the code ecosystem that grows around it. Lisps are another example of this - because it's simple to define commonly used stuff yourself, it results in every project doing the same things differently, adding a serious cost in readability and maintenance for anyone that's not the original author.


What experience are you speaking from?


This is totally true. The downside, of course, is just how much code exists that is written in pre-C++11 dialects, and how many programmers are trained to write that sort of code.

I like way more of C++11 than I liked of the prior version, but there's just so much that's accumulated over the years. I wonder if breaking backwards compatibility is the key, but then I look at Python 3 and think, well, probably not.


> The downside, of course, is just how much code exists that is written in pre-C++11 dialects, and how many programmers are trained to write that sort of code.

I do the same thing. Not because I can't write C++11, but because I still encounter systems with older compilers that I want to run programs on. What, is this surprising?

Let's face reality: it's a little unreasonable to expect every system you work on to have a compiler less than 3 years old.

Especially true on older Linux systems, it's sometimes a massive pain or even pretty much impossible to upgrade the existing compiler in a package-manager-friendly way (try upgrading Ubuntu 12.04's compiler to GCC 4.9, for example), so you'd have to compile the compiler from scratch, which is a pretty massive PITA and more or less requires a PhD of its own for GCC (Clang is a little better). And on Windows, VC++ doesn't always follow the standard exactly. It's not necessarily outright impossible, but it's pretty impractical.


> Let's face reality: it's a little unreasonable to expect every system you work on to have a compiler less than 3 years old.

If you are running Linux, this is one thing that Windows gets right. (Although as you say VC++ has some catching up to do still).


Linux is on all parts of the spectrum here. Ubuntu LTS and Slackware are going to remain outdated by design. Arch Linux almost always has the most recent stable releases of GCC and Clang, and you can even build development snapshots from the AUR.


You don't need to use the compiler from the distro to release for that distro. There is such a thing as cross compilation, after all. Most do it out of habit and because distros don't offer such services out of the box. But if you build the cross compiler, you can use the newest one even if the distro ships an older one.

For example I had a building machine running RHEL 5 which was building for both RHEL 5 and RHEL 6 while using gcc 4.8.2 for both (cross compilers).


You say "if you build the cross-compiler" as though it's as simple as "if you cross the street". Have you ever tried building GCC? I've wasted days on it and in the end still failed to make it work.


It is hard if you do it all manually. There are helpful tools like Crosstool-NG for example which make it easier.

See also https://en.wikipedia.org/wiki/Cross_compilers#External_links


> Have you ever tried building GCC? I've wasted days on it and in the end still failed to make it work.

I've hand-built GCC for half a dozen different target architectures over the years, and it's perhaps one of the most stable pieces of software when it comes to building it. I've only had one build failure over the years, and that was an unstable snapshot from git (even that usually works no problem).

To build GCC, follow the simple steps on the GCC manual. With a modern fast computer, it's all done in an hour or two (of mostly waiting).


I don't know how you manage it, but I can tell you it is not reasonable to expect people to compile every compiler they want to use on every system they want to use it.


No one implied people are expected to compile every compiler they want to use on every system they want to use it. Point was it's not hard. Maybe link to a message where you detailed your troubles?


> I don't know how you manage it, but I can tell you it is not reasonable to expect people to compile every compiler they want to use on every system they want to use it.

If you're dealing with bare metal projects (kernel programming or micro controllers), building binutils and gcc for cross compiling from source is pretty much required. There are scripts that help and some distros have cross compilers in their package managers.

But building GCC is not difficult and it pretty much works first time, every time.


Heh! I am stuck on VC++ 2010 at work. I gaze longingly at C++11 features but can't use them. A pity, as the same codebase is also built under XCode and Clang.


When LLVM started to require C++11 (which we use as a library), my team decided to statically link libstdc++ into our product on Linux. Now we just depend on glibc (and we use symbol versioning to keep the required version to a minimum). It sounds like you ship source though, so it may not be an option for you.


> upgrade the existing compiler in a package-manager-friendly way

This is your problem.

Build a new compiler using the old compiler and install the entire new toolchain in /opt/my-gcc-4.9-0xfoof, then rsync this new compiler to any machine you want. The package managers (people and code) are here for our suffering.


Also, how many companies are afraid of the new features and deprecate them internally (fearing compiler bugs, lack of portability, and/or unmaintainability). I still haven't had a chance to use full C++11 in a professional context.


Like Google for example? Their C++ style guide is pretty restrictive, here's an expert C++ programmer's takedown of it: https://www.linkedin.com/today/post/article/20140503193653-3...


FWIW, things like range-based for loops, "auto", and unique_ptr are pretty standard in Google these days.
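
For readers who haven't touched C++ since 2003, that combination looks something like this (toy sketch):

  #include <iostream>
  #include <memory>
  #include <string>
  #include <vector>

  // C++11 in the small: unique_ptr for explicit ownership, auto and
  // range-based for to cut the noise. No exceptions required anywhere.
  int main() {
      std::vector<std::unique_ptr<std::string>> names;
      names.push_back(std::unique_ptr<std::string>(new std::string("Ada")));
      names.push_back(std::unique_ptr<std::string>(new std::string("Bjarne")));

      for (const auto& name : names)          // range-based for + auto
          std::cout << *name << "\n";
  }   // unique_ptr frees both strings here; no delete needed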

Still no exceptions, though.

The thing is, when you have a single codebase that is maintained by thousands of engineers, you have to make concessions. There is no place for "rockstar programming": everything must be, in a sense, bland, so that it blends seamlessly into the work of thousands of others.


Not that I disagree with your real point about writing maintainable code rather than "clever" code, but if you have a single codebase that is maintained by thousands of engineers then I would argue that you're probably doing something horribly wrong already, and using exceptions is hardly exclusive to so-called rockstars.

I don't really understand organisations that still use C++ in 2014 but then knock out such fundamental parts of its functionality, but then looking at the design priorities for Go, clearly at least some people at Google have very different preferences to my own, so YMMV etc.


I agree that using exceptions is hardly noteworthy these days, but consider that in the context of a codebase that has evolved for more than a decade. I guess, ten years ago, "let's not use exceptions; they aren't worth it" was more justifiable.

In any case, the decision was made, and we have millions of LOC where every single API is built with the assumption "No exceptions here." In that context, introducing exceptions suddenly seems like a lot more trouble than it's worth. Well, YMMV, and in any case I'm not a decision maker, so what do I know.

* BTW, I think "a single codebase maintained by thousands of engineers" is the single best thing I found in Google, as it opens up infinite possibilities for learning new tricks, new APIs, and sometimes new projects. (Of course, everybody complains that every API is implemented at least three times, but imagine how much worse it would have been if every project had its own codebase.)


This is of course valid reasoning for forbidding the use of some features at this particular company. However, it renders advocating the Google C++ style guide as a ubiquitous style guide for all C++ programming rather moot.


Completely agree. I hope nobody's naive enough to say "Well, if this style guide works for Google, it sure should work for our company...!"


Not adding much value here, but it is a lot worse. You get 5+ implementations of everything, and every project is different just because.


Perhaps you would have fewer millions of LOC if you could reuse third-party libraries that happen to use exceptions?

Also, no exceptions implies no failing constructors, which is plain ugly as a constraint.


This article is a bit strange. Complex initialization in a constructor can make it more difficult to unit test a class. Copy constructors aren't prohibited; it's just recommended that you remove the implicit versions when your class shouldn't be copied. Some of the C++11 features were blacklisted because Google had its own implementations. In some cases these C++11 features are now allowed and the previous implementations have been deprecated; it's just that the conversion process took time.

In general if you're going to work with other people I believe you need to be a bit flexible on coding with them rather than trying to be hard headed about your own style. It's important that people not be surprised when they work with your code.


The guide that's publicly available says "If your class needs to be copyable, prefer providing a copy method, such as CopyFrom() or Clone(), rather than a copy constructor." I think that's insane. Or, at the very least, surprising to encounter when working with someone else's code.
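
In other words, something like this (hypothetical class, not taken from the guide itself):

  #include <string>
  #include <utility>

  // Copying as an explicit, named operation instead of an implicit one.
  class Record {
  public:
      explicit Record(std::string id) : id_(std::move(id)) {}

      Record(const Record&) = delete;             // implicit copies disallowed
      Record& operator=(const Record&) = delete;

      void CopyFrom(const Record& other) { id_ = other.id_; }  // the sanctioned way

  private:
      std::string id_;
  };

  int main() {
      Record a("a"), b("b");
      // Record c = a;   // does not compile: copying must be explicit
      b.CopyFrom(a);
  }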


Oh my goodness, yes. The HN crowd probably doesn't appreciate the extent to which people who sling C++ at BigCorp are limited by the rules the company sets.


It's not just C++ either. "What do you mean you want to use LINQ (in C#)? No one understands that rubbish".


* Don't use LINQ

* Don't use var

* Don't use dynamic

or even better,

"You've only got .NET 2/3 installed"


I can't look at code without vars anymore.


"We haven't vetted anything other than Java 1.4 on our BigAutoCorp PCs, so if you want to write a desktop app..."


And should Go or Rust achieve widespread adoption the same thing will happen to them. So the point is moot.


The main downside I can see with it is how much implicit behaviour there is. Sometimes it's very hard to figure out what's actually going to happen with a line of code.

That said it's quite expressive now, you can do a lot with a little code, which is always nice.


Yes. As much as I love C++, I sometimes get very scared by the implicit behavior. This is why I don't understand the hate for exception-free, more C-like C++. You can still reap huge benefits from std::algorithm, iterators, classes, and templates, while stepping back to a more manual approach for allocation and lifetime management.
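
Something like this, for instance (a trivial sketch of the style I mean): manual allocation, no exceptions in sight, but the standard algorithms still carry their weight.

    #include <algorithm>
    #include <cstdlib>

    int main() {
      const std::size_t n = 1000;
      int* values = static_cast<int*>(std::malloc(n * sizeof(int)));
      if (!values) return 1;                       // failure handled manually

      for (std::size_t i = 0; i < n; ++i) values[i] = static_cast<int>(n - i);

      std::sort(values, values + n);               // raw pointers are valid iterators
      int* hit = std::find(values, values + n, 42);
      bool found = (hit != values + n);

      std::free(values);
      return found ? 0 : 1;
    }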


You definitely have to pick and choose from C++ based on environment, your staff's experience, what kind of problem you're solving. Allocation in particular is problematic in every language (even GC languages) especially when latency is an issue, so that's probably what I manage 'manually' most often.


Meh, breaking backwards compatibility isn't all that bad. It will be a bit rough for a few years, but it's worth it to clean things up for a brighter future.


The weird thing is, why not just call it something new? I've always wondered if they had called Perl 6 "Smeebly" or something, would it get a better reception? C+++? My personal taste is that if you break backwards compatibility, that is when you have a 'new' language, and you should just name it as such.


People would, on the surface, see a different beast, even though it remains 99% the same language. Not worth it.

With this argument, every release that adds a new keyword (for example) would have to be given a new name, since it will very likely break some code. The question that remains is how big the changes must be before the thing deserves a new name.


The example of Python 3 is a good one. "A bit rough for a few years" might be an underestimation.


I don't think Python 3 is a good example. Unlike other cases where languages broke backwards compatibility, the Python community actively discouraged upgrading, constantly saying "don't use Python 3" for years, and Python 2 was actively maintained in parallel with Python 3, with features constantly being back-ported. It would have been much smoother overall if they'd just dropped Python 2 altogether, even if a little more painful up-front.


I believe he was being sarcastic. And breaking backwards compatibility can be really bad. Imagine if the next version of C would break the previous one. How many operating systems and critical software would have to be rewritten in order to be maintained.


Python is in a different boat than C++. If the Python language leaders decides to make a version that breaks backwards compatibility... well, they also make the most popular Python interpreter. The group responsible for the C++ standard isn't in the same position at all, they have to convince the different compiler groups (GCC,LLVM,Intel,Microsoft,etc.) to implement it for them.


Agreed, but even if there were a monopoly on the interpreter/compiler, you would see people marrying a specific version, forcing the implementer to maintain two different branches of the compiler for security reasons.


> Imagine if the next version of C would break the previous one. How many operating systems and critical software would have to be rewritten in order to be maintained.

I don't think the idea is to force existing software to be rewritten. It's to provide a clean break so that new software isn't saddled with the baggage of the old. Over time, most of the older software is retired or replaced, and the industry migrates to use the better tools with more and more projects.

This seems a very reasonable strategy, as long as your ABI or equivalent interface specifications can remain compatible for the common subset of functionality, so that programs built using the new language can still use libraries written in the old one. (See also: Every successful general purpose programming language in decades has offered a FFI for C.)

It doesn't work so well if you don't have proper formal standards for your old and new languages. It's also much harder to achieve in practice if your code is effectively running on some sort of virtual machine so you can't have that clean break at the ABI level, which might be one reason why several languages have failed to make a convincing big jump in recent years (Perl 6, Python 3, etc.).


Maybe they can use something like javascript's and perl's "strict" annotations.


I _completely_ agree. I think people need to start talking about how, when combined with the right tools and compilers, you can achieve nearly anything you can with another "modern" language, more expressively and faster, all while staying compatible with decades of massive C, C++, and Objective-C libraries.

Perhaps we should start with dispelling the old notion that one "shouldn't use" the STL.


Do people really not use the STL? It's incredibly useful, and portable!


(Sorry for the late reply.) But yes, I find that people who've made careers out of programming in non-C++ languages never revisit it when planning a project because they have this weird notion that it is "slow" or "poorly implemented." I really don't know where this comes from, but I've heard it from many people I've worked with over the years.


You should check out what's planned for C++17 https://isocpp.org/std/status if you haven't yet.


OT question: I was playing with some C over the weekend (not C++), trying to figure out how to handle Unicode in a way that would work on Mac, Linux, and Windows. Despite a couple hours googling and reading, I couldn't answer really basic stuff, like:

- Do I use char* for strings? It sounds like wchar_t is 16 bits on some systems and 32 on others, so I should avoid it?

- If I want to read & write UTF-8 files, how do I turn the bytes into 32-bit wide Unicode strings?

- Are there special functions I should use for handling Unicode strings?

More generally, where do you go to discover C libraries you can use?

I am (was) reasonably proficient in C, but I haven't used it much for over ten years. I'm surprised how many things I just don't know how to do!

Sorry this is not a C++ question. I'd like to get back into that also, but I'm trying to work my way up from the basics. :-)


C++'s support for unicode is minimal. There are unicode string literals for UTF-8, UTF-16 and UTF-32, and conversion facilities between those encodings and the host platform's native 'wide char' format.

> - Do I use char* for strings? It sounds like wchar_t is 16 bits on some systems and 32 on others, so I should avoid it?

In general you should use std::string and UTF-8, since UTF-8 is a relatively safe encoding for sub-string search and replace, and (locale insensitive) lexical comparison/ordering. Windows uses a 16 bit wchar_t (UTF-16). Mostly everyone else uses 32 bit (UTF-32)

> - If I want to read & write UTF-8 files, how do I turn the bytes into 32-bit wide Unicode strings?

http://en.cppreference.com/w/cpp/locale/wstring_convert
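
For example, something along these lines with the C++11 facility linked above (a minimal sketch; error handling mostly omitted):

    #include <codecvt>
    #include <locale>
    #include <string>

    std::u32string utf8_to_utf32(const std::string& bytes) {
      std::wstring_convert<std::codecvt_utf8<char32_t>, char32_t> conv;
      return conv.from_bytes(bytes);    // throws std::range_error on invalid input
    }

    std::string utf32_to_utf8(const std::u32string& code_points) {
      std::wstring_convert<std::codecvt_utf8<char32_t>, char32_t> conv;
      return conv.to_bytes(code_points);
    }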

> - Are there special functions I should use for handling Unicode strings?

Nope. You need a library for that. Manipulating natural language is a tricky business anyway, beyond mere mortals, and you will likely get it wrong.


Look here: http://userguide.icu-project.org

You really need a library when dealing with unicode, if you are doing anything advanced.

How do you split a sentence into words? Spaces, right? Oh, Chinese doesn't have spaces.

How about sentences? Paragraphs? Even characters are a pain to iterate over.
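
As a rough sketch of what that looks like with ICU's BreakIterator (check the user guide above for details; this is from memory, not a polished example):

    #include <memory>
    #include <vector>
    #include <unicode/brkiter.h>
    #include <unicode/unistr.h>

    std::vector<icu::UnicodeString> split_words(const icu::UnicodeString& text) {
      UErrorCode status = U_ZERO_ERROR;
      std::unique_ptr<icu::BreakIterator> it(
          icu::BreakIterator::createWordInstance(icu::Locale::getDefault(), status));
      std::vector<icu::UnicodeString> words;
      if (U_FAILURE(status)) return words;

      it->setText(text);
      for (int32_t start = it->first(), end = it->next();
           end != icu::BreakIterator::DONE;
           start = end, end = it->next()) {
        icu::UnicodeString piece;
        text.extractBetween(start, end, piece);   // includes space/punctuation runs too
        words.push_back(piece);
      }
      return words;
    }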


It really is abominable how bad the standard support for Unicode is. ICU is great, though the C API is somewhat limited compared to the full C++ API. ICU's build process, OTOH, leaves something to be desired, especially if you want to build it with a cross-compiler.


Agreed, ICU is a good place to start looking at this stuff, used it in a few large-scale commercial projects now.


C11 has the uchar.h header (say, http://www.cplusplus.com/reference/cuchar/ ), but it wasn't in Visual Studio last I checked.

Most programmers will tell you to use UTF-8 for in-memory strings. But it's not easy to figure out if a particular char* is already encoded as UTF-8, and it's common for people to forget that Unicode characters can take up to 6 bytes in UTF-8. I know I'm in the minority, but I prefer to use UTF-16 or UTF-32, because nobody makes the kind of mistakes with UTF-16/UTF-32 that they make with UTF-8. Plus, you can't accidentally pass a UTF-16-encoded string to a non-Unicode-aware function.


Beyond the issue noted with UTF-16 permitting surrogate pairs, it's also important to understand that UTF-16 does not free you from normalization form concerns.

For instance, even within the 16-bit Basic Multilingual Plane, it is not safe to reverse a UTF-16 string by simply reversing each block of 2 bytes, as a Unicode glyph (e.g. á) can be composed of a pre-combined code point which only takes 16-bits, or a base character (a) followed by a combiner (´).

The pre-combined variant would reverse just fine, but the renderer-combined á would reverse to ´a, which is probably not what was intended.
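
A tiny illustration of that trap (both strings display as "á"):

    #include <algorithm>
    #include <string>

    void demo() {
      std::u16string precomposed = u"\u00E1";    // one code unit: U+00E1
      std::u16string decomposed  = u"a\u0301";   // 'a' + U+0301 COMBINING ACUTE ACCENT

      std::reverse(precomposed.begin(), precomposed.end());  // still renders as "á"
      std::reverse(decomposed.begin(),  decomposed.end());   // accent now floats before the 'a'
    }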

If you're dealing with Unicode you need to be dealing with a good library and you need to be understanding what you're doing, once you go away from simply reading/displaying/serializing bags-o-bits.

But if the latter is all you're doing, UTF-8 is just fine, and less susceptible to mistakes with code that works correctly on C strings.


Unicode code points can only take 4 bytes in UTF-8, this has been true ever since the planes were capped at U+10FFFF.

Or did you mean UTF-8 encoded surrogate pairs?


Thanks for the correction. I don't know where I originally got the "up to six bytes" number from, but I've been using it for a while. Apparently, it's out of date (looking at the original proposal, https://en.wikipedia.org/wiki/UTF-8#Description , some code points were expected to need 6 bytes, but as you say, that's no longer true).


It was true until 2003, so in this case reality has been retconned on you.


UTF-16 has surrogate pairs as well. It's the worst of both worlds.


I prefer UTF-32, but I'm very lonely.

The benefit with UTF-16 is that you can't accidentally pass a string to, say, strlen(). But, yes, people will forget that not all Unicode code points fit in 16 bits, and won't test with the right kind of input to find out that they've done it wrong. So errors can still creep in; but I will continue to argue that those errors are less common because the fact that you're dealing with Unicode (and not ASCII or something else) is more obvious and you're more likely to need library calls (that will do the right thing) to do anything useful.


> I prefer UTF-32, but I'm very lonely.

I'm not sure why you'd prefer UTF-32, it doesn't make correct text manipulations any easier, but — much like UTF-16 — it does make incorrect assumptions and text manipulations much easier.


UTF-32 means each code point takes up one char32_t. There are no surrogate pairs. However, it is true that you still have to deal with combining characters, normalization, etc. So it doesn't solve all of your problems, but it does avoid simple goofiness ( http://blog.coverity.com/2014/04/09/why-utf-16/ ). You still have to use libraries to do anything non-trivial.
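
For a concrete taste of what UTF-32 does and doesn't buy you:

    #include <string>

    // Both render as "á", yet they differ in length even in UTF-32.
    std::u32string precomposed = U"\u00E1";    // size() == 1
    std::u32string decomposed  = U"a\u0301";   // size() == 2 (base + combining accent)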


Right. So while UTF-16 is a terrible encoding and there is very little reason to use it, UTF-32 is little better. Use UTF-8.


Like I said, I'm very lonely. My only consolation is that every project I've seen that says "use UTF-8 and be careful not to do goofy things" (like running a string through the encoder twice, or passing a string to old C string functions that aren't Unicode aware, or assuming each character takes one byte) constantly, without exception, finds those goofy mistakes in their code. Being careful is not enough.

UTF-8 for strings on disk, yes. But UTF-8 for in-memory strings has a long track record of being much harder to actually do correctly.


What I want:

A language where the "String" data type is as follows:

A rope of "logical characters" (One or more code points, such that they are logically one character. So an accent is combined with the previous character, that sort of thing.)

With the additional "restriction" (read: implementation detail) that within a single node all logical characters must have the same width. (You can, for example, store a single one-byte character in a run of two-byte characters as an overlong-encoded two-byte character, but this is just an optimization.)

Short ropes degenerate to a flat array.

(You have to do a workaround for single code points that encode multiple logical characters. You split them into N parts encoded in the private unicode range or something similar, and when displaying them recombine them if they are in the correct order, otherwise normalize them. Although I'm up in the air about this. Should reverse("st") be "st"? Or "ts"? (That's the single unicode character "st", for those that are confused.))

Ideally, you put character encoding directly within nodes.

That way most things "just work". Running a string through the encoder twice doesn't do anything, as it detects that the encoding already matches the target encoding. Reversing a string "just works". Indexing a string is sub-linear time, but gives decent results. (Indexing a string and getting invalid unicode as a result is never fun!) Concatenating strings even takes sublinear time. This works really well with immutable data structures, or quasi-immutable data structures. (There are some tricks with rewriting ropes to take maximal advantage of structure-sharing that preserve the illusion of an immutable data structure without actually being immutable.)

And if you really want you can start doing fancy things like allowing lazy generators within strings, or lazily decompressing / reading data from disk.

To store on disk? Yeah, go with UTF-8. (Or my personal favorite pet encoding: compressed UTF-32.)


> Being careful is not enough.

UTF32 will not help you much with that, save that you've got a separate type for "strings" and "bunch of bytes"

Which you can have anyway, so do that, it's a good idea which doesn't require using UTF32.

> But UTF-8 for in-memory strings has a long track record of being much harder to actually do correctly.

As if other in-memory encodings had a better track record.


Nobody's ever accidentally tried to uppercase a UTF-32 string by passing each code point to C's toupper() function; but that constantly shows up in projects using UTF-8 for in-memory strings. And it's not always caught by unit tests. People never accidentally pass a UTF-32 string to a legacy API that expects char*'s in Latin1 encoding. When I say "never" I don't mean "almost never happens"; I mean literally "anybody who tries, gets a compile time error." With UTF-32, you don't have to worry about copying less than a full code point to a destination string, but if you forget that possibility with UTF-8 or UTF-16 you can actually create security problems. And no matter how much work people do to fix the code, those mistakes creep in repeatedly.
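
Here's the kind of thing I mean, sketched out (nothing warns you at compile time):

    #include <cctype>
    #include <string>

    // Looks plausible, compiles cleanly, and silently does the wrong thing:
    // ASCII letters are uppercased, but the bytes of "é" (0xC3 0xA9) are either
    // left untouched or, under some single-byte locales, remapped into garbage.
    void broken_upper(std::string& utf8) {
      for (char& c : utf8)
        c = static_cast<char>(std::toupper(static_cast<unsigned char>(c)));
    }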

Yes, people forget to normalize their UTF-32, UTF-16 and UTF-8 strings before comparing for equality. Yes, people forget that whether a code point is a letter depends on who's asking the question (I usually don't consider Greek letter pi a letter, but Greeks do). Yes, it's true that reversing a Unicode string in any encoding requires more than simply reversing the individual elements (because of combining characters).

So, yes, it's possible to get things wrong in any encoding. But UTF-8 has more ways to screw up than the alternatives. And UTF-16 has a few more than UTF-32; but I can accept UTF-16 if there are external reasons to (e.g., working on Windows).


> With UTF-32, you don't have to worry about copying less than a full code point to a destination string

But you can still copy only parts of a grapheme cluster. If you want to do Unicode right, you have to treat even UTF-32 as a variable-length coding.


To do Unicode right, you have to know what you mean by "character," which usually means never saying "character" and instead being more precise ("code point," "grapheme cluster," "byte representation"). You also have to figure out what people expect to happen when they hit backspace (does the whole grapheme cluster disappear? the most-recently-typed code point in the cluster disappear? do you have to hit backspace twice to get rid of surrogate pairs (a la Notepad on Windows)? should you have to hit backspace a variable number of times to delete code points encoded in UTF-8 (following the Notepad behavior to its logical conclusion)?).


> Nobody's ever accidentally tried to uppercase a UTF-32 string by passing each code point to C's toupper() function

A side note, since this was all about C++: the toupper() in <locale> is a function template that takes an arbitrary character type and a locale. There are also functions for locale-sensitive collation and comparison. So, modulo bad implementation, you should be able to do basic unicode string handling in ISO C++.

The GNU stdlibc++ manual goes in to quite a lot of detail:

https://gcc.gnu.org/onlinedocs/libstdc++/manual/localization...
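
A minimal sketch of that interface (whether a locale such as "en_US.UTF-8" is available is up to the platform):

    #include <locale>

    wchar_t upper_one(wchar_t c) {
      std::locale loc("en_US.UTF-8");        // throws std::runtime_error if unsupported
      return std::toupper(c, loc);           // e.g. L'é' -> L'É' with a capable locale
    }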


> So, modulo bad implementation, you should be able to do basic unicode string handling in ISO C++.

The locale stuff has been intentionally left vague. So:

(1) Assuming that your implementation supports passing a char16_t or char32_t to the functions in <locale>, you're guaranteed it will do the right thing as far as Unicode and the C++ standard are concerned (although there are some well-known problems with what the C++ standard defines the "right thing" to be: for lowercasing epsilon -- since in Greek there are two lowercase forms of epsilon, and the C++ standard doesn't provide the function enough information to decide between those two forms -- and for uppercasing ß (LATIN SMALL LETTER SHARP S) -- because in German the uppercase form takes two letters, and the C++ standard doesn't allow for that).

The fun part is that plain char's can be encoded in several different ways, and not all have anything to do with Unicode. So you need to have a Unicode-aware locale for the functions in <locale> to do anything sensible with Unicode. Of course, that's something of a tautology, but I'm not sure what the standard requires for locales. So, yes, if your implementation is Unicode aware, you can call a standard C++ function to get what you want, but be sure to call the right one. There's another, with the same name, that isn't guaranteed to do what you want.


I will repeat, since you apparently managed to miss it last time around: using your type system does not require using UTF32 as your internal string encoding. Hell, you don't even need a type system to have separate bytes and string types, you can even do that in dynamically typed languages, regardless of the string type's internal encoding (it could even be variable, within a single string).


I didn't miss it. I simply know projects that use UTF-8 for in-memory strings, and none of the ones I'm familiar with uses a different type for it. It's all convention, and they all occasionally find that somebody flubbed the convention. Can you point me to any projects that manipulate UTF-8 encoded in-memory strings and actually use a different type for them?

I realize I won't convince you. That's what I meant in the original comment that I know many people disagree with me on this.


> Can you point me to any projects that manipulate UTF-8 encoded in-memory strings and actually use a different type for them?

Glib seems to.

https://developer.gnome.org/glib/2.37/glib-Strings.html

Though in complete fairness the type appears to be just a "bags of bytes" type and so could hold anything, it's really meant to hold UTF-8 (as you can tell from the functions that append/prepend Unicode chars).


Thanks for the pointer.


> I simply know projects that use UTF-8 for in-memory strings, and none of the ones I'm familiar with uses a different type for it.

Which is relevant… how?

> Can you point me to any projects that manipulate UTF-8 encoded in-memory strings and actually use a different type for them?

Rust does. Python 3.3 does something similar but slightly different (it switches internal representation between iso-8859-1, UCS2 and UCS4 depending on the string's codepoints).

> I realize I won't convince you. That's what I meant in the original comment that I know many people disagree with me on this.

Of course you won't convince me, your original comment is based on inane premises.


Thanks for the pointers.


>it wasn't in Visual Studio last I checked

The upcoming version has it.


Why not just write C style C++? I never understand the arguments to use C anymore - you can write C style C++ and get on with your life, and cherry pick the features you want.

In C++ unicode is as simple as: string s = u8"This is always a utf8 string.";


Weeellll, your position is defensible if you're writing an application. If you're writing a library though, let me just let you know that every time I encounter a package with a C++ API, I want to drive stakes into the author's eyes, as compensation for the pain I'm about to experience.

Like it or not, C is the lingua franca of the programming world. Pretty much every other language out there provides C bindings, not C++ bindings. Which means that when your function exports something wonderful like smart pointers, my world just becomes painful.

Furthermore, in embedded environments, it can be challenging (to use a euphemism) to get source code debugging up and running. If your code is written in C, it's not so bad, non-optimised machine code generated from C tends to match the actual C code fairly closely, to the point that you can very nearly read it as easily as the original source. This is not true for C++, which means that every time something goes wrong in C++ code, I'm probably going to lose a week of my life inserting trace into the code, compiling, deploying to target, and running the test, ad infinitum. Not fun.

Even in the case where you're writing an application, I'm personally dubious about the benefits of C++. Write the application in a higher-level language, personally I like Javascript (JavaScriptCore) for this, but I've had good success with Ruby and lua as well. When you hit a performance bottleneck, push the performance sensitive code into C, and create bindings. As a bonus, your C code is now easily available to be used as a library from any other language, without inducing homicidal tendencies in the user.


extern "C" {

}

Hey look, C API.
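
For instance, the usual shape of it (a sketch with made-up names): an opaque handle plus free functions with C linkage, with the C++ kept on the implementation side.

    // widget_c.h — consumable from C or any language with a C FFI
    #ifdef __cplusplus
    extern "C" {
    #endif
    typedef struct widget widget;              // opaque to callers
    widget* widget_create(int initial_size);
    int     widget_size(const widget* w);
    void    widget_destroy(widget* w);
    #ifdef __cplusplus
    }
    #endif

    // widget_c.cpp — ordinary C++ behind the flat API
    #include <vector>
    struct widget { std::vector<int> data; };
    extern "C" widget* widget_create(int initial_size) { return new widget{std::vector<int>(initial_size)}; }
    extern "C" int     widget_size(const widget* w)    { return static_cast<int>(w->data.size()); }
    extern "C" void    widget_destroy(widget* w)       { delete w; }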

> This is not true for C++,

Depends on the code you write. If you write C in C++, you get the same near-machine parity, since it's the same source material. If you do not use namespaces you don't even get name mangling.


Firmware development. For some microcontrollers, you'll be lucky to even have a C compiler let alone a C++ compiler. Even then, I'd rather use C than C++ for firmware. I often find myself running into odd compiler issues, having to inspect/modify assembly, deal with resource constraints, etc. and I imagine C++ would make that more difficult.


Well, there are people who really hate C++ [1], and to be honest C++03 had some flaws and inconveniences that made the language a bit awkward. Right now it feels like it is going in the right direction.

[1]http://article.gmane.org/gmane.comp.version-control.git/5791...


Unicode is much more complex than that. For example, how would you then convert that string to uppercase, or truncate it?

For Unicode strings in C++, you can either limp along with the wchar_t support, or leverage something like ICU. Neither is especially easy.


If you want string manipulation in C++ you need an external library no matter what encoding you use.

Also, std::toupper inherits the system locale (or whatever you set it to), so if it is utf8 it would do proper utf uppercase conversions. You need to parse per character though, and that depends on the locale again.


Is 'extern "C"' still a thing to avoid name mangling? Some C++ features are easier not to use than others.


Yes, extern "C" is still a thing to interoperate with the C FFI. It is required if for nothing else than to use C headers.


The main problem with using C++ as C is that everyone has different ideas about which ideas to cherry pick. So without a lot of discipline and a good coding standard, lots of (maybe subjectively) bad ideas will make it into the code as well. And once they do, you can't do anything about them any more.


You can grab and compile the very latest draft from the committees Github page:

https://github.com/cplusplus/draft

The current version is a slender 1365 pages, including the standard library.


I see a lot of discussion around C++ becoming modern. I would like to know which features are actually modern?

Is it lambdas? They have been around for literally decades. Is it type deduction? Again, it has been around for quite some time, and far better type inference has been available in the likes of Haskell for years. Is it the addition of a particular threading model to the standard? That has always been available via some form of library; it just now means the language is truly obsolete if the model changes. Allowing GC'd implementations? Again, decades.

What is modern about any of the features that have been added?


> What is modern about any of the features that have been added?

Nothing. On the other hand, which features of a “modern” language are missing in C++ by now? Type deduction is still a bit less powerful than in Haskell, yes, but apart from that I can’t really think of anything that may or may not be missing.

This in turn implies that C++ allows essentially any style of coding you may wish for at the moment while at the same time staying some orders of magnitude faster than essentially the entire competition.


"but apart from that I can’t really think of anything that may or may not be missing"

* Hygienic macro system.

* Pattern matching.

* Strong generic type system.

* Persistent containers.

* Lockless data structures.

* Memory-safety.

* Null-safety.

* Reflection.

* Dynamic/variant types.

* Modules.

* ...


C++ is no faster than any other language, it's the code that is written which is measurable.

The real wins in performance for real world applications are in terms of algorithmic complexity. A poorly implemented solution in C++ will still be slow, the language does not make it fast.

Any micro optimization you can make with C++ will be an order magnitude less significant than O(log n) vs O(n).

The likes of Janestreet will likely need high performance software yet have great success with Ocaml for example, which incidentally can be very fast.

Just saying C++ is fast does not make programs written in C++ actually fast.


> The real wins in performance for real world applications are in terms of algorithmic complexity. A poorly implemented solution in C++ will still be slow, the language does not make it fast.

Yet a well written implementation in C++ will still be faster than an equally well-written implementation in, say, Python or Haskell or whatever. Not in an asymptotic sense, but easily by a factor ranging between two and 200. For me, this can be the difference between waiting a couple of weeks or half a year for some computation to complete :)

> Just saying C++ is fast does not make programs written in C++ actually fast.

Of course not, but C++ makes it possible for programs written in C++ to be fast.


Comparing to Python or Haskell, maybe you're right, but not to CLR-based or JVM-based statically typed languages. Very often the differences between Java/C# and C++ are the same order of magnitude as differences between various C++ compilers. The mythological C++ performance is often overstated (there are some performance advantages in C++ but they are rarely big enough to justify language choice).


An order of magnitude is a hell of a lot. I did some extensive comparisons of C++, Java, and C# back when I was working on my dissertation. Admittedly, this was about 10 years ago now, but at the time, the best I could get with Java was about a factor of 2.5 slower than C++. C# was similar, but I don't recall the specifics now.

When you read a language shootout, a lot of times a factor of 2-2.5 times slower than C or C++ is considered to be "good enough". If you get your Clojure code that fast, everyone calls it a day. For me, a typical run of my experiments took about 3 weeks on (I think) a Core 2 Quad using all four cores. So in the best optimized Java I could find, that would have been about 5 extra weeks spent waiting around for results, and I had several cycles like that. All told, I imagine it would have added a year to my work.

I'm not necessarily picking on you here. You did say "rarely", not "never". But I don't think a lot of people are aware of how common it is for a factor of two to be a complete deal-breaker.


A factor of two difference is something that can be a result of using different C++ compilers or even different versions of the same compiler.

See this: http://lemire.me/blog/archives/2012/07/23/is-cc-worth-it/ Particularly the comments section contains various results obtained from different compilers / versions. Variability is higher than 2x for GCC.

Comparing with Clojure is not fair, because it is not really statically typed, and known to be much harder to optimize for speed than e.g. pure old Java.


I tossed out Clojure as a random language -- all my testing was done with the most efficient Java and C# I could write.

The link is fine, but only shows micro-benchmarks. It's much easier to get a 3x difference in execution speed between compiler versions when you have a three-line example -- you're maybe exercising 5% of the optimizer?

The current version of the C++ I was using comes in at around 50kloc. The Java and C# versions were a bit smaller, since once I stopped testing the platforms, all the new code only went into the C++ version, but it's a significant piece of work. I haven't seen any significant real application for which the theoretical benefits of Java (JIT's profile-guided optimizations, etc.) have really come to pass. You can write slow C++ code, but if you know what you're doing and instrument carefully, there's very little or no evidence that Java can be as fast on real programs.


> It's much easier to get a 3x difference in execution speed between compiler versions when you have a three-line example

Those three lines can be as well a bottleneck in a big program, responsible for 80% of its runtime. This example was to show that compiler-induced performance differences in a single language can be just as large (here: up to 4x) as differences between Java and C++ in microbenchmarks. Therefore 2x difference in a microbenchmark where Java is losing to C++ is probably not a statistically significant difference, even though it may be important to the end user. If you rerun this benchmark in 3 years from now, using newer versions, you might as well get completely different results.

As for real applications - that you don't know of any real, fast Java program, doesn't mean there exist no evidence. Sure, it is quite hard to find a pair of two programs doing exactly the same written in two different languages, but there do exist quite a few high-performance Java apps out there which are #1 in their class: Netty, Apache Cassandra, Apache Spark, LMAX disruptor (Java used for ultra low-latency app, see: http://programmers.stackexchange.com/questions/222193/why-di...). Someone also ported Quake2 to Java (Jake2) just to show it was possible, and that didn't make it slower.


I'm not saying there's never a use case for Java. It's reasonably well suited for server-side work (and most of your examples are IO-bound server processes).

When your main problem is IO, thread contention, etc., Java gives you better tools to address those problems. It's just not at all well suited for the kind of client apps that people run on their desktop computers, and to a lesser (but still visible) degree, it's not well suited for CPU-bound things like a lot of scientific computing, which is more where my expertise lies.

Looking at Jake2, the author notes: "The 0.9.2 release shows a big improvent due to the new “fastjogl” OpenGL renderer. The new renderer reduces the number of native interface calls. JNI calls produce a considerable overhead and were the main bottleneck in our application." If he's doing enough JNI to make this the main bottleneck, then clearly quite a lot of the heavy lifting isn't being done with Java.


I think C++ is justified when you're constrained by CPU, RAM, or other resources. That is, if the limiting factor is development speed, or network traffic, or waiting for user input, then C++ probably isn't going to be worth it.

But if in your problem area you can get a competitive advantage by being faster, using less memory, or tightly controlling other resources, then C++ is probably the right choice. If you really need it, you can use some inline assembly too.

For example, look at a VM's JIT compiler: it needs to capture a profile from a running application, analyze hotspots in that profile, recompile those on the fly to native code, and coordinate with the VM to swap that out on the fly. The more time and memory that is used to do all this, the slower every application on the VM will run. Since there's a very constrained environment, C++ is a good choice for JIT compilers.

And if you look at the CLR/VES or JVM, they are indeed written in C++.


Fair enough. But the number of resource-constrained systems is pretty low these days. A phone, watch, dishwasher or Xerox machine is no longer resource-constrained. Also, C++ is not really shining IMHO in very constrained environments - at least not the "modern C++" dialect. Template-heavy libraries can blow up code sizes pretty severely, and the dynamic memory allocation happening behind the scenes in the standard library is not free either. So for those very constrained systems, pure C or a very tiny subset of C++, i.e. C with classes, no exceptions, no templates, might often be a better choice.


Yeah, I realize my view on this is a bit skewed. A lot of the examples people give where C++ is still justified are things that I have done in my career - image processing, video editing, rendering software, CAD software, solving large sparse matrix systems, etc.

On the other hand, there is a reason why native apps are popular on the iPhone. Likewise, some games are programmed in high-level languages, but the performance advantage of C++ keeps it dominant in games. Between games and mobile devices, I'd say C++ isn't going anywhere any time soon.

C++ can indeed be very bloated, like you say. On the other hand, templates do allow for code reuse without the function call overhead. I believe std::sort is faster than qsort. For me, if there was another language that was as well supported as C++ but allowed higher-level structures and programming methods, I'd be very much in favor of it. For now it looks like new versions of C++ are the closest there is. Nobody really likes C++ for its elegance; it's just a very practical choice.


Is it possible to write C++ programs that are slower than their counterparts in other programming languages? Yes.

Is it possible to write programs in other languages that are just as fast as good C++ implementations? Often it is not.

That performance comes at a cost, it really depends on what your application is. Horses for courses.


> Any micro optimization you can make with C++ will be an order magnitude less significant than O(log n) vs O(n)

Wrong. See (from about 46min): http://channel9.msdn.com/Events/Build/2014/2-661


Lightweight threads are missing, as are tail calls. Both (especially the second) prevent C/C++ from being a viable target for functional compilers.


Agreed on guaranteed tail call optimisation, but it seems[0] that it is at least possible to get it in some cases on some compilers. It seems that calling dtors after the return (somewhat obviously) makes it impossible to do it, although appropriate scoping might help. I however didn’t play around with it, merely rephrasing what [0] wrote.

I’m not sure about threads (and how light/heavy those currently in C++11 are).

[0] https://stackoverflow.com/questions/34125/which-if-any-c-com...


C++ is 35 years old and predates Haskell and many other "modern" languages like Java, C#, Python, ... So yes, compared to what C++ was originally, it has become modern.


My point is that none of the features are modern.

Comparing a later version of language to an earlier one to measure modernism seems rather pointless.


>Comparing a later version of language to an earlier one to measure modernism seems rather pointless.

C++ is a production ready systems language, not a theoretical CS research paper. How do you propose we define modern in this context then? As I'm sure you're aware, it takes years to vet design features, debate whether they can be implemented, whether they affect performance, whether they have unintended side-effects, whether they break existing code, etc. 'Modern' concepts are already old by then. Also, sometimes features can't be added to a language because of practical real-world issues that have nothing to do with the language itself. (e.g. longer compile times).

I don't see anything particularly wrong with calling a language modern as it adopts features. Certainly, I would agree that one should not then claim the language to be modern in the sense of cutting edge research.


It implies that the features are somehow new and ground breaking.

Lambdas have been understood for many, many years; the same goes for GC. They were not only in the domain of academia.

Common Lisp has many of these features and more, being a multi-paradigm programming language. And has been used in industry for many many years. Yet people say CL is antiquated while it has so called 'modern' features.


They are groundbreaking in the sense that no production ready systems language has those features, except C++. I guess we'll just have to agree to disagree. It really is a big deal to design a language with such features that are implementable with zero overhead.


I think we should start using the term "modern Java" for writing Java with lambdas, or "modern PHP" to write PHP with OOP and exceptions. :D


The way all those languages are used together.

Not writing modern C++ means:

- writing C++ as if it was plain C

- doing OO programming with big object graphs

- manual memory management

- not using the standard containers and algorithms

- writing functor objects


How can you not use manual memory management in C++? Say you're writing a compiler, which uses a quite complex, possibly circular, graph structure of objects for its AST representation. You can't use SharedPtrs (cycles), can't use UniquePtrs (sharing of graph nodes), can't use RAII (unpredictable lifetime). I see no better solution than a GC.


Even Stroustrup says that there are times garbage collection is the right solution. But some people are surprised that there are times it isn't needed, and that garbage collection always brings the burden of unordered finalization (e.g., it's possible for an object's members to be finalized before the object, so finalizers must be written in a way that assumes very little about what is valid regarding data members).


When Java appeared, there were already GC-enabled systems programming languages like Modula-3 and Oberon, with stack allocation.

Modula-3 generics are good enough to even implement reference counting data structures.

The ongoing VM craziness of the last two decades pushed such languages away from the mainstream, and thus younger generations get surprised by such approaches.


Why can't you use them? Lots of compilers written in C++ use them.

Ever heard of weak_ptr?
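
i.e. something like this for the back edges (a minimal sketch):

    #include <memory>
    #include <vector>

    struct Node {
      std::vector<std::shared_ptr<Node>> children;  // owning edges
      std::weak_ptr<Node> parent;                   // non-owning back edge: breaks the cycle
    };

    // auto root  = std::make_shared<Node>();
    // auto child = std::make_shared<Node>();
    // child->parent = root;
    // root->children.push_back(child);   // everything is freed once root goes away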


I would like to know which features are actually modern?

Close to none of course, but that isn't the point AFAIK.

The point is all of them have now been added, hence the 'modern' connotation, and another part of the point is that C++ now even more is a language rather flawlessly combining all those paradigms.



Do you know where I can find a diff?

I'd like to see whats new with 14.

Edit: http://en.wikipedia.org/wiki/C%2B%2B14#New_language_features


Can anyone recommend a book on modern C++?


Scott Meyers' Effective Modern C++ will be required reading once it's released in October.

Until then (and even after), Stroustrup's The C++ Programming Language (4th edition) is the canonical resource. It's a reference but is also intended to be read.


Ironically, it was (the first edition of, I believe) Meyers' "Effective C++" that made me question "WTF? Why does anybody use this???" (given that Turbo Pascal / Delphi, or Perl, actually were much easier to understand). Maybe C++ is usable now, but back in the late 90s, nobody that I respected used C++ - all the code I saw at work in C++ was by resume-padding morons who wouldn't recognize polymorphism if it walked up and bit them in the ass. (The opinion being: either use C, or a high-level language, but not C++.)

Not fair to judge the language by the fools that used it 15+ years ago, but I have no use for operator overloading, copy memory leaks and all the other C++ obfuscation.

C++ would probably benefit from a "lessons learned" do-over with a new name (as referenced elsewhere in the Perl 6 / Python 3 debacles)


There is a digital unedited pre-release version that is available now if you pre-order:

http://shop.oreilly.com/product/0636920033707.do

I'm reading the PDF and it's quite good.


Thank you, looking forward to Meyers' release.


Bjarne Stroustrup, A Tour of C++.


Stroustrup's The C++ Programming Language 4th Edition (the blue one) is good. The beginning of it gives a tour of C++ in light detail but then you have the rest of the book to look at as a reference. It's actually quite an enjoyable read, particularly as it has a different font to the previous version (the white one?).


Yeah, this would be a god send. As someone who is writing more and more C++ for computer vision type stuff, a modern overview would be great. I am stuck working off of things I learned over the years and don't feel quite like I'm writing modern idiomatic C++


If you just want to learn about the features introduced in C++11/14, I wrote compiler-specific books about those: http://cpprocks.com


I apologize if this is ignorant, but how is C++ versioned?


Until now there have been four standardized versions of C++: C++98, C++03, C++11, and C++14. And the versions have been consistently named (to be precise, "nicknamed") as "C++YY", where YY are the last two digits of the year in which that ISO standard was adopted.

Note: C++03 was a very minor change (sort of a "bug fix"), so people sometimes refer to C++11 as the direct successor of C++98. Also in the interim (2007) there was C++ TR1 ("ISO/IEC TR 19768:2007"), which was not a formal standard per se, but instead a technical report specifying a bunch of standard library extensions (which were formally included in C++11).

For the sake of completeness: the format C++YY is the one which is commonly used almost everywhere, but the official names follow ISO's convention. Here is a mapping:

C++98: "ISO/IEC 14882:1998 Programming languages C++"

C++03: "ISO/IEC 14882:2003 Programming languages C++"

C++11: "ISO/IEC 14882:2011 Programming Language C++"

C++14: "ISO/IEC 14882:2014 Programming Language C++"

There are also language classifiers like (E), (F), etc. present at the end (e.g., "ISO/IEC 14882:2014(E) Programming Language C++"). I am not sure if the official standard is fixed in a particular language, or whether all the different language translations are equally authoritative (I suspect the latter, but it's just a guess).


what do you mean?

C++ is a language designed by committee. C++14 is meant as a "tock" release, fixing some of the things in the C++11 "tick" release ("tick" releases are larger, and "tock" releases fix some of the problems in the "tick" release).

Compilers then attempt to implement the standard, and have their own versioning system.


Traditionally, "ticks" are small, and "tocks" are big. See https://en.wikipedia.org/wiki/Intel_Tick-Tock .


By the year the standard was published.


Fairly common programming language convention as well. I believe FORTRAN 66 and ALGOL 68 were the first standards widely referred to with a revision year. More recent examples include Fortran 90 (no more caps!) and C99.


I agree but it seems like newer languages adopted the 1.0, 2.0 versioning scheme so it might throw some people off.


That's somewhat more common with languages with a reference implementation instead of a specification.


Pre-standard C++ had that scheme as well! (well, CFront releases, http://www.softwarepreservation.org/projects/c_plus_plus#cfr... ).


Not just new languages. Java is also versioned from Java 1.0 to 1.1 to 1.2 ... 1.8, with recent versions dropping the "1." because they never will have a true "2.0" release.


New features like auto and shared_ptrs.


The version number identifies the tonnage of crap that has been added since C++98.


Only 14 more tons since C++98, which, of course, had 98 tons since C.


I've started getting back into C++ after many years away and it's all coming back to me.

Now of course it's C++11, which does have some nice features, but really I think we've reached the point where we need to start again (downvote away).

Let me give you an example: I recently came across some code that was written years ago that has two size types: one 32/64 bit signed and the other 32 bit unsigned. This creates a bunch of issues when compiled on 32 and 64 bit architectures and there is a substantial amount of effort to clean it up.

I point out things like this to colleagues who are very pro-C++ and I inevitably get the same response: "well that's just bad API design".

Thing is, if you look at the history of this example it's a series of incremental changes, all well-meaning and reasoned, some of which are done by people who I could only call luminaries, and even they make significant and far-reaching mistakes.

So what hope do the rest of us have?

But my biggest problem with the C-dialects is pointers. Namely if you return or receive a pointer, it's not necessarily clear who owns it. The way this is handled is comments like "DO NOT delete this" or "you MUST delete this".

I like that a language like Rust is trying to formalize the concept of object ownership. I'd really like to see that idea mature and take hold.

Until now there hasn't really been a competitive alternative to C/C++. It's not Go (as much I love Go). Maybe it's Rust. We can but hope.

My other big problem (and this applies to Java too) is directly dealing with low-level multithreading primitives like threads, thread groups and mutexes. I really like that Go has taken a different approach here.

What I find with particularly young programmers is they don't have the appropriate fear of writing multithreaded code. It's really, really hard to write correct multithreaded code with low-level primitives. It's why (excellent) books like Java Concurrency in Practice exist.

As for the feature list of C++14 [1], I wonder what all these "auto" declarations will do to the significant work required for static analysis tools, that are an essential part of modern, large-scale C++ codebases.

The literal types (like "s" for std::string or seconds) are cute but at some point the STL was optional. I'm a little leery of embedding it directly in the language but hey I'm no expert.

[1]: http://en.wikipedia.org/wiki/C%2B%2B14


In C++ parlance ownership of pointers is dictated by how they are stored.

Raw pointers are only views of the object passed around, these confer no ownership.

std::unique_ptr owns a single unique instance of the memory.

std::shared_ptr (and std::weak_ptr) confer ownership of the memory amongst several other entities.
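
Spelled out in signatures, the convention reads roughly like this (names invented for illustration):

    #include <memory>

    struct Thing;

    void inspect(const Thing* t);               // raw pointer: just a view, no ownership
    void consume(std::unique_ptr<Thing> t);     // callee becomes the sole owner
    void retain(std::shared_ptr<Thing> t);      // callee shares ownership with others
    std::unique_ptr<Thing> make_thing();        // caller receives ownership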


The convention I use in my own code is that passing a raw pointer around, or returning one, confers ownership. Mostly I do not schlep raw pointers around between functions, though. I use shared_ptr or scoped_ptr or unique_ptr or a reference. When I have to use raw pointers with some given library, then I immediately wrap their usage in my own facade types. Sticking to these oft-repeated best practices means that crashes are rare and memory leaks are rarer still.


On pointers -

Like a lot of things with C, you need to handle ownership by convention. I generally try to make the actor that created the heap pointer the owner of it and responsible for its destruction - i.e. instead of allocating buffers and handing them back up the chain full of data, pass them down from the requester where possible. If the length is not known at that point, create another routine to calculate it. Using techniques like this, ownership can become easier.

You can do anything with it, but that's half the problem!

And personally I like threads :)


The literal types are optional— they're user-defined literals, which can be written like (say):

    QString operator""_qs(const char* c_str, size_t len)
    {
        return c_str;
    }

    void foo()
    {
         auto iama_non_stl_type_ama = "foo"_qs;
    }


> code that was written years ago that has two size types: one 32/64 bit signed and the other 32 bit unsigned. This creates a bunch of issues when compiled on 32 and 64 bit architectures and there is a substantial amount of effort to clean it up.

Such things happen whenever an invalid assumption that is commonly true becomes not so commonly true, such as the 32-bit to 64-bit migration. That said, C/C++ provides you the tools: size_t is an unsigned type meant for storing array indexes and the size of objects in memory. If you want a real integer, then (unsigned) int. If you need integers of a definite fixed size, u?int(8|16|32|64)_t. Granted, I've encountered that not enough people take the time to understand the tool they're using. Honestly, I wish the default integer type was not fixed size, and especially not platform dependent fixed size, like Python; for non-[indexes/sizes], I feel like this is usually what you want.
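
Concretely, the toolbox being referred to (a trivial illustration):

    #include <cstddef>
    #include <cstdint>

    std::size_t   index   = 0;   // array indexes and in-memory object sizes
    std::uint32_t flags   = 0;   // exactly 32 bits, unsigned, on every platform
    std::int64_t  offset  = 0;   // exactly 64 bits, signed, on every platform
    int           counter = 0;   // "just an integer"; width is platform-defined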

> "well that's just bad API design".

Well…

> But my biggest problem with the C-dialects is pointers. Namely if you return or receive a pointer, it's not necessarily clear who owns it. The way this is handled is comments like "DO NOT delete this" or "you MUST delete this".

This is true; in C++, your use of pointers should be minimal, though this can be a problem with references just as easily. In the (special) case of std::shared_ptr, ownership is clear. That said, this is a problem not unique to C++: it exists in Java, Javascript, Python, and many others as well. For example,

    some_list = [1, 2, 3]
    some_obj = MyObject(some_list)
If some_obj expects to own that list, and its constructor is:

    def __init__(self, some_list):
        self._some_list = some_list
which I see a lot, you've got the same problem. (I'm also interested to see how well Rust tackles this, as I agree it's a problem.)

> My other big problem (and this applies to Java too) is directly dealing with low-level multithreading primitives like threads, thread groups and mutexes. I really like that Go has taken a different approach here.

Honestly, I've never thought multithreading was "hard". There's a set of rules you have to hold yourself to, otherwise, yes, you can make your life very hard. If you limit shared data as much as possible, and what data is shared has a well defined locking order (preferably behind code that enforces it) — often this is just a single mutex — then I don't see the problem. Thread-safe queues ("channels", I think Go calls them) are also useful for establishing "service"-like threads.
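
A bare-bones version of the kind of thread-safe queue I mean (a sketch, not production code):

    #include <condition_variable>
    #include <mutex>
    #include <queue>

    template <typename T>
    class Channel {
     public:
      void send(T value) {
        {
          std::lock_guard<std::mutex> lock(m_);
          q_.push(std::move(value));
        }
        cv_.notify_one();
      }

      T receive() {   // blocks until a value is available
        std::unique_lock<std::mutex> lock(m_);
        cv_.wait(lock, [this] { return !q_.empty(); });
        T value = std::move(q_.front());
        q_.pop();
        return value;
      }

     private:
      std::mutex m_;
      std::condition_variable cv_;
      std::queue<T> q_;
    };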


There's no such thing as a well defined locking order - a statement that should scream the fact that locks are not composable. Good luck documenting it in the API or in making other developers pay attention.

The problems are even worse than with manual memory management, since there you often get away with RAII. Say a function call is returning some sort of list, or some other data structure. Was the producer operating on another thread? Was that data structure signaled to other consumers as well? Can you modify it without locking?

To make matters worse, locking is extremely inefficient, it limits the possible parallelism, throughput and vertical scalability and suddenly you start thinking of not protecting reads, since shared reads should be parallelizable, right? And so you need to look at alternative means of synchronization and then suddenly you end up reasoning about happens-before relationship arising from usage of memory fences, non-blocking algorithms and single producer / multiple consumers scenarios. And then you discover that C++ doesn't have a strong memory model to speak about and that at least before C++11 it was all platform dependent.

Of course, I like this state of things, since my favorite platform (not C++) has a better memory model and plenty of higher level abstractions built on top, like actors, CSP, ring buffers, futures, Rx, iteratees, immutable data-structures, etc... but yeah, people able to reason about concurrency are also avoiding it like the plague.


For memory ownership, look no further than `std::unique_ptr`. With the features introduced in C++11, it is an elegant, inexpensive way to clearly and safely denote memory ownership and to have that memory cleaned up when it goes out of scope or handed off as you see fit.

If you are ever using bare pointers instead of `unique_ptr` you are probably doing it wrong.

For more: http://herbsutter.com/2013/05/29/gotw-89-solution-smart-poin...
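
In sketch form (make_unique itself is one of the small C++14 additions):

    #include <memory>

    struct Foo {};

    std::unique_ptr<Foo> producer() { return std::make_unique<Foo>(); }
    void consumer(std::unique_ptr<Foo> foo) { /* now the sole owner */ }

    void demo() {
      auto foo = producer();        // ownership arrives here
      consumer(std::move(foo));     // explicit hand-off; foo is now empty
    }                               // anything not handed off is destroyed automatically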


> I'm also interested to see how well Rust tackles this, as I agree it's a problem.

Is this a "see how well it actually works" or a "I don't know how this works and I'm interested?"

I'm curious because this is possibly Rust's most important, interesting feature, and I want to make sure we've got some reasonable messaging here.


Number one wishlist item for next C++ is to add networking libraries to the STL. Its past time that C++ knew about the internet.


C++ is suitable for development of cross-platform mobile applications, since both the iOS and Android tool chains support C++.


Are modules already kind of unofficially supported, or can they be enabled? On what platforms?

I'm trying to make a C++ game with fat libraries like Ogre3D and bullet3D, often on a laptop, and that would really be fantastic to not wait 10s or more each time I edit a header.

I've seen a presentation, basically modules would decrease compile time from M x N to M + N.


I really like the direction C++ is moving in. I just really don't like how incredibly VERBOSE it is (though `auto` helps).


In languages like C++ "verbose" means "specific and obvious" which is something you want to have when writing certain kinds of code.

It's annoying, it's a drag on productivity, but in the long run it's arguably necessary.

Some other languages which reduce verbosity by making more things implicit make it harder to understand what's actually going on behind the scenes. You lose a lot of information.


A lot of C++ verbosity comes simply from bad defaults. Const and virtual should be the default, for example, not the other way around. That would stop the virtual-destructor-omission problem, and if you need efficiency you could make destructors non-virtual.

Also in the "hiding stuff behind the scenes" department C++ is quite bad.

    SomeClass someMethod(SomeClass a, SomeClass b) {
        ...
    }
is doing a lot behind the scenes. Code below is better but it's longer and less convenient to write.

    const SomeClass& someMethod(const SomeClass& a, const SomeClass& b) {
        ...
    }
And actually C++ code isn't easy to reason about without reading the whole program. It isn't even easy to parse.

What's going to happen when you run this?

    y = f(x);
It may be that f is a function, or a type. f may return the correct type to assign to y, or it may return something else and automatically run some conversion. There may be a copy constructor and some destructors involved if it's returning an object and not a reference. It may run an overloaded operator= and do anything at all, for example add f(x) to y. Hell, f can also be an object of a class with operator() overloaded, and you would need to track its state to see what will happen.

And we haven't even touched the subject of #define.

It's much easier to reason about Java code for example.


Uh, no, "virtual" should absolutely NOT be the default. I don't know where you got that idea from (Java?) but it's absolutely horrible.


I wouldn't call it 'horrible', but rather, inconsistent with C++ and its target area.

Zero-overhead abstraction is a key talking point of the language. Virtual-by-default would contradict this entirely.


It was the right default twenty years ago for performance reasons.

These days, the economy of a vtable pointer is not really a good reason, and all languages that have the opposite default (such as Java, as you point out) are doing quite fine.

Because of this default, I can't count the number of times where I've seen "#define private public" and other horrors that developers resort to in order to extend classes whose creators were too short-sighted to design them properly.


I'm very glad that virtual is not the default. Most of the classes I write are simply value classes and do not use inheritance at all. Once you start using virtual, you really have to embrace traditional inheritance idioms whole hog, and then you've got std::vector<std::shared_ptr<Foo>> instead of std::vector<Foo>.

If anything, the performance difference between std::vector<shared_ptr<Foo>> and std::vector<Foo> is even greater today than it was twenty years ago.
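
i.e. the difference between these two (a sketch):

    #include <memory>
    #include <vector>

    struct Foo { int value; };

    // Values stored contiguously: cache-friendly, no per-element allocation.
    std::vector<Foo> values;

    // Each element is a separate heap allocation plus a control block,
    // and iteration chases pointers all over memory.
    std::vector<std::shared_ptr<Foo>> pointers;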


On the flipside, if you are not using inheritance then you could have solved the same problem with abstract data types. The only big problem is that in C++ the method call syntax is much more convenient to use: foo.frob() vs Foo::frob(foo). IMO, the correct way to fix this is by adding syntax sugar to the language, not by making methods non virtual by default.


I don't understand. Do you mind elaborating? Not sure what you mean, specifically, by "abstract data type" (I think of ADT as just another synonym for a class), nor do I get the static method thing. If you were calling hard-coded static methods, you wouldn't have polymorphism anyway, so how would you have virtual methods?


The real problem with defaulting to virtual isn't the vtable pointer, it's the lack of inlining.

Not being able to inline a method like (from vector)

    T& operator[](size_t pos)
    { return data[pos]; }
Would kill performance.

In languages which do run-time optimisation you can inline such methods later, but in C++ that's not possible and proving when you can de-virtualise a method (which most compilers do) is very hard and often fails.


Why would it be bad? You could always add non-virtual (inline maybe? it's already a useless keyword) if you need the performance, and most code doesn't. And it's a source of errors.


Because if the compiler cannot figure out how to de-virtualize your usage of a structure, then a vtable has to be created at compile time and consulted at runtime.

The reason the previous commenter asked if you're from the Java world is because in many highly important areas, the effect on memory and speed this would cause would be unacceptable. These areas _tend_ to be left to people who understand languages like C, though, so a lot of newer languages make decisions ignoring these good use cases.


I don't know which world I'm from. I mostly program (for money) in java nowadays, but I also did C++ for money for like 6 years, and I knew C years before. And mostly I've learnt programming on turbo basic and turbo pascal. But never had to do system programming.

Also I don't think it's that big of an achievement to understand C. Despite its flaws it's a very simple language, very different from C++.

To the point - you could ensure that compiler can de-virtualize your usage of structure (or class if you will) by adding "nonvirtual" to every method it implements or derives. I don't see how it's any better than having to delete "virtual" from every method it implements or derives. Just a question of defaults, and I'd say most of modern C++ code isn't written with the performance goals that justify nonvirtual as default. You can and should profile after writing something anyway if you care about performance.

And anyway if you have derived classes it's almost always the case that you want at least some of your methods virtual, otherwise what's the point?


>I'd say most of modern C++ code isn't written with the performance goals that justify nonvirtual as default

I am using C++ right now for embedded code but otherwise I write mostly Javascript. There was a time when C++ was the standard choice of language for desktop apps etc., but that's hardly the case any more. You choose C++ if you need performance and control. And I think the "default means the least overhead" concept makes a lot of sense there.

Also, inheritance is a very central part of mostly all Java code whereas it's much less idiomatic (modern) C++. In C++, classes serve to give you RAII and you specialize with templates.


>"To the point - you could ensure that compiler can de-virtualize your usage of structure (or class if you will) by adding "nonvirtual" to every method it implements or derives."

Think of it this way. Many respectable people have been making a case to avoid inheritance[1][2]. What you are proposing would actually be an incentive to use more of it. I don't know about you, but the number of functions that I actually override in my code is not even close to 20%.

Why should we set a default for that 20%?

[1] http://channel9.msdn.com/Events/GoingNative/2013/Inheritance... [2] http://www.gotw.ca/publications/mill07.htm


I agree with "composition over inheritance", but I disagree with "inheritance is evil". Composition over inheritance means you divide your classes into parts that ALSO use inheritance and virtual methods, you just don't make the hierarchy deep and don't mix many different subdivisions into one hierarchy. The problem isn't virtual methods, it's too many divisions and responsibilities in one class hierarchy.

And you do want to refactor your virtual methods, and the extracted methods usually do need to be virtual; even if at first they don't, they may need to become virtual in the future, and IMHO it's better to just make them virtual from the start, if you don't REALLY need the performance.

You can mess up because of nonvirtual-by-default too, especially in C++ because of the difference between stack and heap objects.

    struct A {
       virtual int f(int x, int y) { return g(x,y); }
       int g(int x, int y) { return x+y; }
    };
    struct B : public A {
       int f(int x, int y) { return g(x,y)+1; }
       int g(int x, int y) { return x+y-1; }
    };
    B* b1 = new B();
    A* b2 = b1;
    B b3;

    b1->f(2,2); // 4
    b2->f(2,2); // 5
    b3.f(2,2); // 4
This bit me a few times in C++.


Those actually all return 4. Since f is virtual in the base class, it remains virtual in all derived classes, regardless of whether they say so explicitly.

The override keyword was added in C++11 to help prevent the sort of mistake you were trying to show. Any methods you mark with it will result in a compile error if they are not actually overriding anything.

For example, if B::g were marked as override, it would fail to compile because A::g is not virtual.
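
A quick sketch of that (the second override is the deliberate error the keyword catches):

    struct A {
        virtual int f(int x, int y) { return x + y; }
        int g(int x, int y) { return x + y; }            // not virtual
    };

    struct B : A {
        int f(int x, int y) override { return x - y; }   // fine: A::f is virtual
        int g(int x, int y) override { return x * y; }   // compile error: A::g is not virtual
    };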


> Many respectable people have been making a case to avoid inheritance[1][2].

Unfortunately, class inheritance is the only way to create the equivalent of an interface or (Scala-type) trait in C++. So, even if you avoid class inheritance in general (which is a good thing IMO), you still end up doing inheritance if you want run-time polymorphism in the form of abstract classes or abstract base classes.
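
For reference, the usual idiom is an abstract class with only pure virtual functions (names here are just illustrative):

    // The "interface" idiom in C++.
    struct Drawable {
        virtual void draw() const = 0;
        virtual ~Drawable() = default;
    };

    struct Circle : Drawable {
        void draw() const override { /* ... */ }
    };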


Performance is only one side of it. The bigger reason is http://stackoverflow.com/a/814939


Over 20+ years of development, I've found this objection to be vastly theoretical and with no practical consequences.

In practice, most classes are designed without inheritance in mind and yet, being able to extend and override them has proven infinitely more valuable than the occasional case where such an overriding breaks the parent class.

The practical reality is that even if a class is not designed for inheritance, inheriting from it is unlikely to break it but very likely to make its user's life much, much easier.


Seemingly off-topic, but I'm curious -- what language has the majority of your 20+ years of development been in?


Professionally, Java for most of the time. Recreationally, Scala and Haskell. I did about 5 years of C++ overlapping with Java before that.


Ok, that explains it. When you're used to Java it's easy to expect inheritance to be a big deal. The fact of the matter is that run-time polymorphism (aka virtual) is much more rarely necessary in the C++ world than the Java world. The only reason it's common in Java is that it's the only tool available for many jobs that C++ has other tools for, and it also avoids an extra composition overhead that doesn't exist in C++.

tl;dr: Just because it's "infinitely more valuable" in Java doesn't mean the same for C++.


It's a lot easier to reason about your first version of 'someMethod', which passes by value, than the second. It can actually be more performant than the second as well, since the code inside the body of the function (which may have been compiled during another invocation of the compiler on a separate file) now knows the dynamic type of the object it's dealing with, meaning all interior virtual calls can be made direct.

The first form of 'someMethod' will also accept all 4 combinations of moves and copy operations on 'SomeClass': (copy a, copy b), (copy a, move b), (move a, copy b), (move a, move b). Your second, less convenient, function results in moves degrading to copies, leaving performance on the table if 'someMethod' performs mutation.
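
For example (a sketch; `join` is just a stand-in for a someMethod-style pass-by-value signature):

    #include <string>
    #include <utility>

    // By-value parameters let the caller decide: each argument is either copied or moved.
    std::string join(std::string a, std::string b) {
        a += b;            // free to mutate the local copy
        return a;
    }

    int main() {
        std::string x = "foo", y = "bar";
        join(x, y);                        // copy a, copy b
        join(x, std::move(y));             // copy a, move b
        join(std::move(x), "baz");         // move a, construct b in place
    }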

Your 'y = f(x)' ambiguity isn't a problem in practice, since most code styles use different naming conventions for classes/structs and function names.

Conversions, construction, copy, move, and assignment semantics etc, are one of the most important, and one of the most difficult things to get right, when it comes to class design. If you make sane choices though, and put thought in to it, automatic conversions etc shouldn't be bothersome.


I find it amazing that people seem to believe it should be possible to understand a large code base by looking at single lines in isolation.


> In languages like C++ "verbose" means "specific and obvious"

Whoa, no it doesn't. C++ is far more verbose than necessary for things like creating algebraic datatypes or really creating any types.


Or looping through for loops. I remember the jaw-hitting-floor moment when my C++ book tried to pass off

    std::vector<int>::iterator it;
    for (it=arr.begin(); it!=arr.end(); it++) {
        ...
    }
as an improvement over

    for (int i=0; i<len; i++) {
        ...
    }
Not entirely unrelated: I'm currently busy verbosifying a large chunk of C++11 code into C++90 code because Reasons (or so the maintainers assure me).


In modern C++ these loops could alternatively be written as:

  std::vector<int> myVec;
  for( auto it = myVec.begin(); it != myVec.end(); ++it );
or if you prefer the external begin/end syntax:

  for( auto it = std::begin(myVec); it != std::end(myVec); ++it );
or the most succinct (replace && by & if writing):

  for( auto && item : myVec );


> or the most succinct (replace && by & if writing):

That is strictly not necessary. `auto&&` is what is now being called a universal reference: `item` resolves to `const int&` if `myVec` is const, and to `int&` if `myVec` is mutable. `auto&&` does the right thing in the majority of cases. In fact, the following extension [1] has been proposed for C++17 (and already implemented in recent clang builds):

    for( item : myVec ) { /* do stuff */ }
which is meant to be a short-hand for

    for( auto&& item : myVec ) { /* do stuff */ }
[1] http://www.open-std.org/jtc1/sc22/wg21/docs/papers/2014/n399...


Both auto& and auto&& produce const T& for a const vector<T>, and T& for a modifiable vector<T>. The difference between them is subtle: auto&& handles Humanity's Eternal Nemesis, vector<bool>.
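
A small sketch of the vector<bool> case (its proxy "reference" is returned by value, so it binds to auto&& but not to a plain auto&):

    #include <vector>

    int main() {
        std::vector<bool> v(4);
        // for (auto& b : v) {}        // does not compile: vector<bool> yields a proxy
        //                             // object by value, and auto& can't bind to it
        for (auto&& b : v) {           // fine: auto&& binds to the proxy rvalue
            b = true;
        }
    }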


Oops, you're right of course; I got reference collapsing rules confused with the regular binding rules. Proxy objects will only bind to auto const& or auto&&, though, since they're generally returned by value. I was surprised to see that this does not come up in std::bitset nor std::valarray, but only std::vector<bool>.


Yes, and it's probably good to note that the issue has been fixed so as to not unduly scare away any newcomers to C++. Thanks for doing it for me :) Still, at the time the

    for(auto& item : vec)
syntax was merely a talking point being tossed around C++0x meetings. And to this day the project I'm working on still hasn't officially dropped support for C++90 so I'm stuck with the old syntax anyway :/


It's an improvement in the sense that it's more general; you can loop through any container that implements that protocol, which most (if not all) of the STL containers do. For example,

    template <class T>
    void foo(T arr)
    {
        for (auto it = arr.begin(); it != arr.end(); ++it)
        {
            std::cout << *it << std::endl;
        }
    }
You could pass a `std::map<int, std::string>` to `foo`, or a `std::list<string>`, or a `std::vector<char>`. You could even create your own classes and give them to `foo`, as long as they implement the `begin`/`end` protocol.

It's certainly more verbose than necessary, but it can be convenient.


You want to use 'T const&' as your argument type, not 'T', otherwise foo() will take a copy of anything you pass.


I've never understood the STL's infatuation with iterators. Why do I even need to know about them when all I want is walk through the collection?

Ideally, it should be something like:

    for (auto it : arr) {
      // do something with *it
    }


You can do that. That's part of C++11 and is supported by basically every major compiler.

The advantage of iterators is that they're incredibly flexible. You can use them for sub-ranges, reversed ranges, non-ranges like input and output iterators, etc, etc. This is vastly more powerful than something like, say, Objective C's NSFastEnumeration, which just allows for the trivial case handled by the range-based for-loop.


> Why do I even need to know about them when all I want is walk through the collection?

Because there are algorithms where you may not want to walk through the whole collection. STL wasn't meant to be a container library alone, it also includes many generic algorithms that work using iterators to delimit the range of data to be operated upon.
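
For example (a sketch of what iterator-delimited ranges buy you):

    #include <algorithm>
    #include <vector>

    int main() {
        std::vector<int> v{5, 3, 1, 4, 2};

        // Iterators let an algorithm work on just part of a container...
        std::sort(v.begin() + 1, v.end() - 1);   // sorts only the middle three elements

        // ...or walk it in reverse without copying anything.
        for (auto it = v.rbegin(); it != v.rend(); ++it) {
            // *it visits the elements back to front
        }
    }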


It basically is this easy these days, thankfully. It is nice to have iterators, though, because it allows you some flexibility for swapping out the underlying data structures on existing algorithmic code. This has proved quite useful when I've written graph code.


Oh yes, and I'm hella thankful for those containers. I just wish "auto" had come along a little sooner :)


I remember reading Stroustrup's book and not having a clue what an "iterator" was. Even looking in dictionaries didn't help me - I was just after the very definition of the word!

Times have changed now of course.


Indeed!

Just to write a class in C++ with a constructor and destructor which are not inlined, you have to repeat the class name about seven times: three times in the class declaration. Then four more times in the definitions of those two functions:

    class verbose {
    public:
      verbose();
      ~verbose();
    };

    verbose::verbose()
    {
    }

    verbose::~verbose()
    {
    }


Stroustrup doesn't think so.

I don't have a link - it was in some recent conference. He stated that every time he added a new feature everyone would be up in arms and demand a really verbose implementation, which he added reluctantly. Now that everyone is used to the features they complain about how ridiculously verbose it all is.

There isn't any reason for the craziness of

    template<typename T> 
    void foo(T a)
vs

    void foo(auto a)
Certainly things can become too implicit, but C++'s heritage of verboseness is due to the conservatism of the standards committee, not because anything less would be confusing.


But the expanded form does provide you with more flexibility. Even if the latter is a synonym for the former, there is a difference between:

    template<typename T>
    void foo(T a, T b)
vs

    void foo(auto a, auto b)


It seems like in the second `foo`, the types of `a` and `b` are not hinged together. So to do it with the template syntax you actually need:

    template<typename T, typename U>
    void foo(T a, U b)
:)


I intended the two to share a type. My point was that you cannot express that constraint with only auto.


The reason for your particular example is path dependency — templates came long before (the redefinition of) auto.


I was specifically referring to boilerplate and type-level programming: algebraic data types, and type generation in general.

C++'s type system is actually pretty strong, but the amount of boilerplate that is needed to create a new type means that it is very difficult to program in a type safe way... without being incredibly verbose. Creating a type safe "Length" type is too much of a pain in the ass for anyone to actually do it.

Which is what I mean by "too verbose". It is so verbose that instead of securing the "specific and obvious" in strong compiler guarantees, we fake it with implicit conversions and typedefs.
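
To illustrate (a sketch of the boilerplate, not anyone's production code):

    // Roughly what even a bare-bones type-safe Length requires:
    class Length {
    public:
        explicit Length(double metres) : metres_(metres) {}
        double metres() const { return metres_; }

        Length& operator+=(Length other) { metres_ += other.metres_; return *this; }

    private:
        double metres_;
    };

    inline Length operator+(Length a, Length b) { a += b; return a; }
    inline bool operator==(Length a, Length b) { return a.metres() == b.metres(); }
    inline bool operator<(Length a, Length b)  { return a.metres() < b.metres(); }
    // ...plus -, *, /, !=, <=, >, >=, stream output, and so on, all by hand.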


And we have golang.


I'm a little afraid the language will go downhill from here. Too many versions - a committee dedicated to creating new versions of the language standard is going to do exactly that. Much like has happened with OpenGL.


> Too many versions - a committee dedicated to creating new versions of the language standard is going to do exactly that. Much like has happened with OpenGL.

The problem with OpenGL was that they tried to retrofit an API that assumed a certain underlying hardware model to modern (i.e. less than 15 year old) GPUs. It was impossible to do in any sane manner.


I'm assuming C++14 will have 1320 keywords and a bunch of weird operators to use so people can ignore it even more than it's being ignored.


Speaking of ignoring things, have you read anything at all about C++14?


I don't think there are any new keywords for C++14.


If only that were true; the problem is more that C++ has too many meanings for its keywords.


I don't know, it's kind of both. There are too many keywords and operators, and many of those keywords and operators have different semantics in different contexts.


um, C++ is the most widely used language in the world.


We have this notion of "trolls", but troll means someone that specifically tries to stir discussions or invite responses, by saying controversial stuff on purpose.

I think the parent's comment is just plain ignorant. Saying stuff that he thinks is "cool" because he doesn't know better.


Okay, that's incorrect. It's a bit hard to say whether it is Java, Javascript or C, but it's certainly not C++.


It's really neither here nor there, but every Java or JavaScript engine in widespread use is written in C++, so C++ code "in use" subsumes both of them.


No language exists but x86 binary code!


Um, ever hear of ARM? (Also 680X0, IBM mainframes, 8051, Itanium, Sparc...)

But then I presume the parent was sarcasm.


Yeah, it was sarcasm on my part, at least. Point being that only binary is its own language all the way down (which is of course not long at all).



