The Time Needed to Write “Effective Modern C++” (scottmeyers.blogspot.com)
171 points by ingve on Apr 22, 2015 | 149 comments

Is this guy actually writing C++ programs in the wild? He often presents himself as an apostle in the C++ church, but I am always surprised that he does not seem to apply his guidelines to any practical project of his own.

As a programmer, I'm suspicious when people have opinions and give advice but have never shown me more than snippets of code. Like a master tailor who doesn't sew. How come he gets that much recognition and respect?

I would be much more inclined to accept advice from someone like, say, Carmack, who has successful C++ projects under his belt. (Many people consider Doom 3's code beautiful C++, yet the choices made there are very controversial, and far from "modern".)

I also put Bjarne Stroustrup in the same category.

What do you think?

His past C++ projects (or lack thereof) do not tell much about the quality of his books. He teaches people how to master C++, and that's a very precise skill. He doesn't claim to teach how to organize software, or how to manage collaboration between people, or any of the other skills that are necessary for a successful software project.

It's the same way that good grammar is only one of the skills involved in writing a great novel. Some scholars really grok grammar, and are excellent at teaching it, yet they haven't written great novels. I'm thinking of William Strunk Jr (of "The Elements of Style" fame), for example.

> Some scholars really grok grammar, and are excellent at teaching it, yet they haven't written great novels. I'm thinking of William Strunk Jr (of "The Elements of Style" fame), for example.

Not sure that Strunk, or the successor Strunk and White, are good examples here; they didn't "really grok grammar", in fact, they were pretty bad at both understanding it and teaching it, and their advice consists of a lot of incorrect information and vague platitudes. E. B. White was actually a good writer, but much of his writing violates the prescriptions in The Elements of Style.

Geoff Pullum has a good rant on the subject, if you want more detail: https://chronicle.com/article/50-Years-of-Stupid-Grammar/254...

So this is exactly why it's interesting to test whether the suggestions in the "Effective C++" series are actually good suggestions, or just someone expounding rules that sound good in theory but don't actually help in practice.

His book is titled "Effective Modern C++," which necessarily contains opinions about what is effective and what is modern. Without practical experience writing software, his opinions won't be grounded, and so his advice won't be sound.

That said, I think he does a good job being even-handed. For example, his chapter on auto covers both advantages and pathological cases, and goes on to say:

First, take a deep breath and relax. auto is an option, not a mandate. If, in your professional judgment, your code will be clearer or more maintainable or in some other way better by using explicit type declarations, you’re free to continue using them. But bear in mind that C++ breaks no new ground in adopting what is generally known in the programming languages world as type inference.

So he is not excessively prescriptive.

I think some of the most solid and forward-thinking advice I've seen and used in my C++ has come from his books.

As for Carmack:

I sort of meandered into C++ with Doom 3 – I was an experienced C programmer with OOP background from NeXT’s Objective-C, so I just started writing C++ without any proper study of usage and idiom. In retrospect, I very much wish I had read Effective C++ and some other material. A couple of the other programmers had prior C++ experience, but they mostly followed the stylistic choices I set.


He gave a talk at DConf 2014, in which he said: "I'm gonna make a confession which will make many of you instantly stop listening to me, which is I don't develop software." Meyers is a teacher, and a very good one. http://www.ustream.tv/recorded/47947981

The best coaches are not necessarily the best players.

As in sport, coaches are good at pointing out particular aspects of someone's game, even if they were not necessarily great at putting it all together on the field themselves. You find there are different coaches for attacking, defending, etc.

They all tend to have played at some level, however. Even Mourinho was a senior player for a short while.

I suspect programming gurus are no different. They may get very good at shouting RAII or SFINAE at you, without being great at executing it in a large project.

Anecdotal, but a friend of a friend talked with Meyers, and Meyers said he has not worked on any large scale C++ programs in a while.

That being said, I have worked on large scale C++ projects, with a lot of people smarter than me, and the biggest criticism of Meyers' books and advice would be "necessary but not sufficient". I can't, off the top of my head, think of anything he recommends that would have been controversial amongst my peers.

His webpage: http://www.aristeia.com/ indicates that he has been a programmer since 1972. Even if he has been doing training/consulting for a couple of decades, that leaves a lot of time to get the basics down. I've never met him, though I have browsed one of his books once.

My own personal feeling is that books like these can be invaluable for people starting out in the industry. It gives you a reference point to relate to. If he can explain things well and people can understand his writing, then it can be useful even if the person isn't the best living coder on earth.

At some point in your development, you need to branch out from what people are saying in books and start to form your own opinions. You have to start questioning what the "experts" are saying and try to experiment with other techniques. It is frustrating when people hang on to certain ideas just because a famous author said it. Sometimes it holds us back. It doesn't make their contribution any less valuable, though, because at least newbies are getting to a certain level due to the well written books.

I largely agree with you, BUT, this version of Effective C++ is less opinion- and style-oriented and more a guide to avoiding easy-to-make mistakes with some of the new C++11 and C++14 features. Not following many of the guidelines will result in incorrect, broken code.

My recollection of the older books is that they covered "best practices": ignoring them may have been bad style, but the code would still be technically correct. Quite a few of the new items are still style issues, but it's important to recognize the difference before deciding to ignore them :-)

You should still read them, though. But instead of reading them like an instruction manual, read them with an eye toward when you would disagree with them, when you wouldn't, and why.

> It is frustrating when people hang on to certain ideas just because a famous author said it. Sometimes it holds us back.

What I appreciate is that most of what he says (via GotW or the books) comes with the rationale and a code snippet instead of a blind assertion that this is The True Way.

Scott Meyers mentions at the beginning of his books that his advice is important (according to him), but that what's even more important is the rationale for why he gives you that advice, and I believe this is also a reason his books are great. It is not "Follow this guideline blindly", but rather "I suggest you follow this guideline, and here's why".

Are you putting Stroustrup in the "I'm suspicious of his advice" category or the "He teaches from experience" category?

From the context of his statement it would seem to be the latter, especially given that he built C++.

Creating C++ means you're likely to know the ins and outs of the language. That's a necessary condition to being a good C++ programmer, but it's not sufficient. It's not at all obvious to me that Stroustrup would be a good C++ programmer or even a good C programmer.

Macho man! If you need to know the intricacies of an abstraction in order to use it, then it's not a very good abstraction.

Do engineers know the ins and outs of concrete and rebar? Or do they offload that knowledge onto manufacturers? But programmers can't offload anything onto the language creator or they aren't real programmers.

I'm glad for all the work the SBCL and GHC teams put into their languages so I can use their abstractions without needing to know the ins and outs of performance optimization. Maybe one day you'll find a language you like that allows you to focus more on your problem domain and less on its traps, idiosyncrasies and inadequacies.

What? You're reading way too much into my use of the term ins and outs. I didn't say good C++ programmers must know how to implement a C++ compiler. By ins and outs I meant: the syntax, the evaluation strategy, the features and facilities provided by the language and toolsets, etc. You need to know those kinds of things to be a good programmer, and every language creator is likely to know those things about the language she created. My point was that knowing those things is necessary but not sufficient to be a good programmer.

> Macho man! ... Maybe one day you'll find a language ...

Your tone is really shitty, and it's made worse by the fact that you're being shitty toward some imaginary strawman you've put in my place. For what it's worth: I greatly value good abstractions, I think appropriate high-level languages should be used whenever there's not a good reason to use low-level languages, I think leakier abstractions are shittier abstractions, I think C++ is full of the leakiest of abstractions, and I think Haskell and lisps are great. I know you're really eager to shit on others because you think you've achieved some kind of higher level consciousness, but you should probably work on your knee-jerk reactions.

It could also go the "ivory tower" direction

I'm suspicious of his advice because the c++ he wants isn't the c++ I want.

While Carmack obviously is a great programmer, the fact that a C++ compiler accepts his "C with classes" code doesn't make him the best C++ programmer out there :)

>Is this guy actually writing C++ programs in the wild?

Well, I read one of his articles which you can find here: http://www.artima.com/cppsource/top_cpp_books.html

In it, he writes about his role with C++:

"I’ll begin with what many of you will find an unredeemably damning confession: I have not written production software in over 20 years, and I have never written production software in C++. Nope, not ever. Furthermore, I’ve never even tried to write production software in C++, so not only am I not a real C++ developer, I’m not even a wannabe. Counterbalancing this slightly is the fact that I did write research software in C++ during my graduate school years (1985-1993), but even that was small (a few thousand lines) single-developer to-be-thrown-away-quickly stuff. And since striking out as a consultant over a dozen years ago, my C++ programming has been limited to toy “let’s see how this works” (or, sometimes, “let’s see how many compilers this breaks”) programs, typically programs that fit in a single file. (make? Who needs stinkin’ make?) My living is based on C++, but it’s not by virtue of the programs I write in it.

It’s not by virtue of any intimate association with the language’s standardization, either, because I’ve never been a member of the C++ standardization committee, I’ve never been on the committee’s mailing lists, and I’ve never attended any standardization meetings. My knowledge of the inner workings of the committee—including the things that have had a significant impact on it—is based on what I’ve read and heard from others. This means that I may be ignorant of important forces that shaped C++ as we know it, because those forces may have been felt only within the committee.

Given that I don’t really use C++, nor do I help specify it, you might wonder what I do do. Fundamentally, I study C++ and its application. I gather as much information as I can about the language and its use (from books, magazines, newsgroups, email and face-to-face conversations with developers and members of the standardization committee, experiments with toy programs I write, etc.), organize and analyze what I find, then I package my findings in concentrated form (e.g., books, magazine articles, technical presentations, etc.) for consumption for people like you—people who do use the language. Your job is to employ C++ as a tool to write useful software. My job is to discover and package the information you need to best apply that tool.

I like to think of myself as an outside observer, not too deeply steeped in the day-to-day travails of programmers and not too keenly focused on the minutiae of standardization, yet familiar with both. This series of articles, then, summarizes what this self-proclaimed outside observer thinks have been the most important contributions to C++ since its inception...."

Great programmers don't typically write books. They are busy writing programs. 'de Raadt on C' would be a good book, wouldn't it? But he'll never have time to write it.

His books are well researched, that is for sure. However, you cannot take his ideas as gospel. He tries to teach the most conservative way of using C++, not the best way for your purposes. I personally think there are several areas of C++, such as exceptions and constness, that are better avoided if possible. But everyone has a different way of using the language.

I could understand why some people like to avoid exceptions. But constness? Really?

Constness, when used on classes, introduces a way to create objects that behave differently in different contexts. Are you calling a member function through a const or a non-const pointer? Depending on that, you could be running different code.

In my opinion you should make up your mind about what the class does: is it mutable? Then don't use it with const. Unfortunately, you are still required to use const in many places: copy constructors and operators, for example. However, you can still choose to minimize the use of const and avoid it whenever possible.

That is an argument to avoid const overloading, not constness generally.

I would say that constness is a massive boon so I use it on the vast majority of declarations. It reduces cognitive load by being able to assume that variables are not going to change.

I agree that const is overall a good thing; however, it is tricky even in what you might think are simple cases. Take this for example:

    #include <iostream>

    void foo(const int& a, int& b) {
        std::cout << "a: " << a << std::endl;
        std::cout << "b: " << b << std::endl;
        b = 2;
        std::cout << "a: " << a << std::endl;
        std::cout << "b: " << b << std::endl;
    }

    int main() {
        int a = 1;
        foo(a, a);
    }

Here the value of a changes in the middle of the function, even though it is accessed through a const reference, because of aliasing: a and b refer to the same object. Here const doesn't mean the value won't change, only that you can't change it through that particular reference. Yes, const is useful, but if you don't know what it actually does it will bite you in the ass, just like most of C++.

This does not seem like a strong argument. Yes, table salt is useful, but if you don't know what it actually does and you put it in your eyes, it will burn them, just like most, well, everything...

A const reference guarantees to the caller that the callee won't modify the object, not to the callee that the object won't be modified.

Except that it doesn't. Thanks to const_cast, you can cast constness away from pointers, so where does your "guarantee" come from?

The guarantee is not against a malicious agent, but a reasonable programmer who may make mistakes.

A function may not reliably cast a pointer-to-const-T to a pointer-to-T because T may actually be immutable, in which case modifying it would be UB. Like all casts, const_cast exists as an escape hatch in the type system for when you know that what you are doing is safe, even if the compiler can't prove it.

You misunderstand reference-to-const, it does not imply that what "a" points to is immutable, it is a contract to the caller that foo will not use "a" in any non-const way.

That's exactly what I said; however, when you introduce people to C++ that seems rather counter-intuitive and can lead to a lot of mistakes. Yes, you can tell people "Well, you don't understand what the standard says!" but there's still something to be said for language features that don't take several pages to explain what they really do.

I just gave an explanation of reference-to-const in 1 sentence, not several pages. Also, my objection to your comment was your focus on what const means to foo() when the value is in what it means to main().

Isn't this inferable from the code itself, though? I was expecting some subtle gotcha, so I had to read your comment three times to realise there was none!

It's to be expected that we can change a through &b, but not a through &a, inside foo.

I think the objection is that b may get passed throughout the program such that it's no longer local to a. Then you have 'const Foo& a' that you expect will never change, but your function reading from that calls something that mutates a 'Foo* b' that aliases the object (probably through a member variable), and suddenly you have some very hard to track down bugs. If you're multi-threaded, you have a data race.

You either need to be very strict about creating non-const references to objects, or you need to enforce immutability at the class level. Unfortunately the latter has all sorts of other practical problems, often including efficiency trade-offs or awkward APIs.

> you have 'const Foo& a' that you expect will never change, but your function reading from that calls something that mutates a 'Foo* b' that aliases the object

And this is exactly why const means pretty close to nothing. Compilers can't enforce it in so many situations that it is almost like the "auto" keyword in pre-C++11 times.

It still documents intended behavior, though. C++ is basically a giant clusterfuck if you want the compiler to actually enforce anything - because of pointer arithmetic, there's no guarantee not only that const Foo& a is immutable, but even that it points to a valid Foo or that accessing it won't crash your computer. It's trivially easy to trigger undefined behavior that may do anything up to pwning your computer, whether you make your classes immutable or not. The point of const, access control modifiers, references, static typing, RAII, and all of the other restrictions on C++ programming is to put up giant signposts that say "Don't do that! Here be dragons!" rather than actually prevent you from doing it.

If you want the compiler to actually enforce safety properties of the language, use a language like Haskell or Rust.

This seems to me a lot of trouble for "documented" behavior. It is much better to design your classes in a reasonable way and stop worrying about const issues. If you rely on const to write good code, I guarantee you that sooner or later you will be in a lot of trouble.

Const-ness gives you a way to express a very common pattern in software, where an object is constructed piecemeal in one section of the code and then "frozen" and only const references get passed around from then on. If you were to express this at the class level, you'd need awkward circumlocutions like the Builder pattern, comments on methods, or friend member functions.

The builder pattern is the right way to go. It is stronger than const references, because once the object is created, there is no way to unfreeze it. It guarantees that the object is truly const, no matter what. On the other hand, the const keyword in C++ does not guarantee the object is really const. Your reference may be const, but something else might still hold a non-const reference and mutate the object.

Most of the time `const` is only needed for the caller to ensure that the object won't get changed when passed to another function.

In addition, to enforce immutability in C++, you have to disable copy/move constructors and assignment operators, which removes much benefit of using values instead of references.

> `const` is only needed for the caller to ensure that the object won't get changed when passed to another function.

But const cannot guarantee this, because it can be cast away so easily. This could be true just for your own code, but then it is you who is responsible for maintaining the immutability, not the compiler.

> because it can be cast away so easily

By this logic, none of the static type checks in C++ is of any use, since they can be cast away easily as well.

Types exist to help you organize your code. Const (for classes) just promotes the misguided idea that you should design objects with mutable and immutable parts at the same time. The main objection to this is that your objects should be either value objects (and therefore immutable) or reference objects. There is no need for const if you design your classes this way. On the other hand, the const approach requires you to spread "const" everywhere you might want immutability, otherwise the compiler will fight you at every line of your code. Either way, it is a lose-lose proposition.

I see this sometimes in code at work. As soon as you need to stuff one of the 'const' objects into any kind of container you start requiring unique_ptrs everywhere, and then you start thinking how much easier your life would be if you wrote Haskell for a living instead.

If that is your intent, then build an immutable class, with some friends to initialize the object if necessary. The const qualifier is so overloaded in C++ that it means close to nothing in terms of intent, and it can be easily circumvented.

Those who can't do, teach.

Those who cannot do or teach, write books.

( And use the fame derived from them to get consulting gigs. )

well, "writing books" is "doing"

I write C++ for a living and I like it, but it bothers me that we need a book that's essentially a list of "you can easily fuck this up by accident, watch out!"

for now, it seems that C is insufficiently expressive and everything else is too slow. but the complexity of C++ is troubling.

> but the complexity of C++ is troubling.

It's a multipurpose, statically compiled, standardized language. I don't think its complexity is a problem. Simplicity can't really be expected in an industrial-grade language like C++.

I'd say C++ is for a multitude of uses; it allows you to do things precisely and well, but that has a cost: learning how to use it.

1) There are many alternatives to C++ that will cover a lot of use cases. You don't always "need" C++, unless you have precise needs all the time, or don't want to have another language interact with your code.

2) You can still use C++ and avoid complex features, or just use C.

3) The language keeps evolving, and I think it's great that companies are working towards an ISO standard. Few languages have that. Maybe the language will be a little easier to use in the future.

> I don't think its complexity is a problem.

C++ is complex in many ways which are unrelated to its core functionality. For example: most vexing parse, integer promotions, conflated language features (classes provide records, polymorphism, namespacing, encapsulation), header files, vector<bool>, the grammar is insanely complicated, et cetera. C++, like Common Lisp, is a standards effort which places more value on preserving the ability to run existing code than it does on removing language warts. Tools are now appearing like clang-format, clang-modernize, ReSharper, etc., but the equivalent tools have been available for other languages for quite some time now because those languages don't have the same complexity just at the syntactic level that C++ does. For example, Python has "2to3", and this was made with far fewer resources than "clang-modernize". Sure, C++ has richer semantics. But the syntax really is a maze, and if it weren't, we could have had our tools longer ago.

Well, many of those issues are inherited from C, and Stroustrup has stated that compatibility with C was an absolute prerequisite, since otherwise C++ would have been stillborn. Given its position now, it's impossible to say that he was wrong, although I wonder if certain things could have been tidied up in a way that would only have broken compilation of really bad code (e.g. it annoys me that you can pass a floating-point value where an integer is expected).

While that's true, a lot of the complexity is self-inflicted. C++ tries to define syntaxes for its features in a C style which cripples a lot of features in unnecessary ways. For example, classes being kinda like a C struct instead of a separate syntax altogether.

Honestly, even Objective-C is better in this regard because the Objective-C parts are well separated from the C parts instead of feeling wedged-in.

>Objective-C is better in this regard

I would take issue with that "better". As a result, Objective-C feels like an alien language to a C programmer. Although, to be fair, I don't think a core C programmer would move to either C++ or Objective-C unless (s)he has to (iOS support or legacy code). I think it is the same reason why C++ developers are not moving to Go anytime soon.

I'd gladly move to a language that has the syntactic sugar of C++ (STL, lambdas, modules, and many other things) without the complexity (templates, inheritance, polymorphism).

Go and Rust seem to have weird syntax differences from C, and I don't understand the utility of those. The Go function syntax looks a little bit hairy.

C++ isn't evolving; it's just accumulating features. That's a crucial difference. The amount of harmful patterns you can accidentally use just keep on strictly increasing.

take your unfounded bias elsewhere

It's called "observation", not "bias".

MY "observation" is that your "observation" cannot be trusted because you're being "unfair" in your "characterization" of C++'s new features as being the negative phrase "accumulating features" rather than "evolving" despite the idea that "evolving" implies getting "new features".

I also enjoy using "double quotes"

> Maybe the language will be a little easier to use in the future.

this is always double-edged with C++.

On one hand, it most definitely is easier to use now than pre-C++11/14. Lots of the additions really make me enjoy using it (even more:) and often lead to less code that does the same thing, is no harder (often even easier) to understand, and has the same performance.

However, because of the backwards compatibility there is still cruft around, which does make the language harder to use: you basically need to know how to use each piece of cruft, or, in most cases, that you shouldn't use it at all. Ideally one just wouldn't need to spend time and 'brain resources' on that kind of stuff. A simple example: I can still use std::auto_ptr. I won't, because there are much better alternatives. I can still use std::tr1::shared_ptr. I don't, and I know what it is when I see it. But for a newcomer or a slow learner this is just utter nonsense: they need to figure out wtf tr1 is, why it exists, what to do with it, and so on.

Couldn't these things just be deprecated? As in, never removed, for compatibility's sake, but when compiling you would get warnings about using features that are no longer best practice, and maybe there could be a switch for 'modern C++' that would turn use of these features into errors?

Yes, that would be ideal. In practice it is also not extremely hard, as most of these things aren't baked into the compiler/linker but can be handled with (a lot of) #ifdefs. The harder part is likely getting such a thing into the standard: deciding what will be deprecated and what won't.

The price of zero-cost abstractions and expressiveness is monstrous complexity. If you want clean, expressive, easy-to-write code, there are a ton of languages where you can have just that. If you want performance and expressiveness, you will have to make do with C++'s complexity.

I think Rust is making the case that you don't.

In particular, I think one of the largest sources of complexity is C++'s backwards compatibility.

Speaking of backwards compatibility, I much prefer C++'s overzealous stance to the instability and inconstancy of Rust, which prevent it from being used in any serious work. That is okay; it's a beta. I'll reserve judgment on Rust for now.

But Rust also introduces complexity, while improving substantially in many areas. Of course it will do things better than a language initially designed decades ago. A large number of important systems rely on C++, so it cannot change overnight, if at all. Rust, on the other hand, is free to experiment. The cost, at least in this early period, is unreliability.

When, or if, Rust becomes as performant as C++, with a comparable number of libraries and platform support, then it will be a viable alternative (and I hope that day comes). For now, the only possibility for this use case is C++.

22 days till 1.0...

> as performant as C++

LLVM helps a lot here. We're generally in the same order of magnitude, sometimes faster, sometimes slower. It depends, as always with performance.

> comparable number of libraries

Yeah, this is a big one. If those C++ libraries also expose a C interface, we have zero-overhead FFI, but not all of them do. Crates.io currently has 1,893 packages, with a million and a half downloads served so far. It's a start, but always a weakness of a young language.

> platform support

LLVM helps here too, though there will always be some embedded platforms and such that ship their own C or C++ compiler.

I think Rust is doing a pretty good job of adding a similar level of complexity. I see most of this as due to the new abstractions they have added, like lifetimes and borrowing. With the old abstractions, we have kind of figured out the least confusing way to implement them.

I think initial impressions of Rust may make it seem a lot more complicated than it is. That's not to say it's simple, but it doesn't have the same kind of dark corners and 'surprising' interactions that C++ has (often due to backward-compatibility requirements); most features in Rust are pretty orthogonal and minimal.

Having the compiler on your back about lifetimes/borrowing is definitely unfamiliar, but, fwiw, a programmer usually has to be keeping track of that in C/C++ anyway (if it's complex when the computer checks it, it's complex when a human checks it).

One advantage of C++ is that it is kind of nice to know that code you wrote six weeks ago will still work.

Yeah, instability is definitely a good reason for people to not use Rust, although that's quickly being resolved, both in theory, with the stable 1.0 release in just over 3 weeks, and in practice, with the recent 1.0-beta release.

> C++'s backwards compatibility.

Yes, exactly. In particular, this can be seen in the syntax. There are lots of angle brackets and colons these days. I hope at some point, around 5 years from now, a version can be made that is nearly the same thing underneath, but looks more like python.

Everything else isn't too slow, though, especially if you hit any meaningful IO.

There are many applications that either aren't typically IO bound, or where the IO (eg. gpu<->memory) is fast enough that you'll likely need C++ to maximise performance.

Writing software that pushes bleeding edge hardware is fun, which is why many people still want to learn C++.

Oh, sure, but the vast, vast, vast majority of applications (especially the ones people are hiring for) aren't what you're referring to.

Statistically I've no doubt you're right. However there are still a bunch of jobs where performance matters. If I left my job and industry, I'm fairly certain I could find another employer that found value in having a specific piece of code run as fast as possible.

Yeah, tell that to embedders and kernel/driver programmers.

The irony is that at least one class of kernel developers (Linux) refuse to touch C++.

Yeah, but how much of that is because Torvalds has opinions, and how much of it is actual fact and preference?

And there the additional complexity of C++ is a barrier to entry but no more, so no biggie in this context.

I work on programs that are CPU and memory access bound.

Or if you use clever algorithms. I don't have benchmarks at hand, but I suspect that an FFT in Python is faster than a naive Fourier transform in C.

Not sure if this way of comparing programming languages (i.e. do A in language X and do B in language Y => Y is not slower than X) makes enough sense to draw conclusions. If you can use clever algorithms, in the majority of cases you'd do so in any language, and in the majority of cases C would lead to more performant code than Python. Then again, whether this matters in the scope of the actual application is something else.

I suspect his comment meant to imply that a level of skill exists for which a programmer could build an FFT in python, a plain DFT in C, but not a proper FFT in C. That programmer would benefit from using python.

Theoretically this is true, but in practice, those who need to write very efficient code competitively, rarely use naive algorithms. (Pure C/C++ also isn't enough nowadays though, the processor isn't competitive with the GPU in a lot of algorithms, so CUDA/OpenCL needs to be also used in most cases.)

I think the replies to this post are getting confused between the FFT/DFT, which is O(n log n), and the "naive Fourier transform", O(n²).
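For anyone unsure what that distinction looks like in code, here's a minimal sketch of the naive O(n²) transform (the function name is made up for the example); an FFT computes the same result in O(n log n):

```cpp
#include <cmath>
#include <complex>
#include <vector>

// Direct evaluation of the DFT definition: for each of the n output
// bins, sum over all n inputs -- hence O(n^2) total work.
std::vector<std::complex<double>> naive_dft(
        const std::vector<std::complex<double>>& x) {
    const std::size_t n = x.size();
    const double pi = std::acos(-1.0);
    std::vector<std::complex<double>> out(n);  // value-initialized to zero
    for (std::size_t k = 0; k < n; ++k)
        for (std::size_t t = 0; t < n; ++t)
            out[k] += x[t] * std::polar(1.0, -2.0 * pi * k * t / n);
    return out;
}
```

An FFT exploits the symmetry of those complex exponentials to reuse partial sums across bins, which is where the n² → n log n drop comes from.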

My experience with numerics in regular python is that they're generally 50-500x slower than the equivalent in C/C++, this just pushes back the point at which the asymptotics take over.

C has libraries too, you know :)

Err, why?

My point is that a good algorithm, a good strategy can give you a several orders of magnitude speedup, more than the speed differences among languages. If a higher level language makes it easier and faster to develop good algorithms, then you should use that. After that, if you have time or really need it, you can re-implement it in hand-coded assembly, or even in hardware. But you generally don't have the time and don't really need it.

Because the FFT in python is almost certainly implemented in C, and probably more cleverly done than a naive FFT that I/you/someone would whip up as part of a C program.

Is this a joke? Something written with the same algorithm in C will be faster than Python. Why not use someone else's non-naive FFT impl in C then as well?

I think the argument isn't that Python code out performs C code, it's that code written by mediocre Python programmers often outperforms code written by mediocre C programmers. C code is fast enough the mediocre programmers get used to letting the language bail them out. Python programmers know that their language is slow and that they have to work around it.

I've encountered this several times in my own career. A co-worker who writes in C will be implementing a process in parallel with my Python implementation. A week later, my O(N) Python code is outperforming my colleague's O(N^3) C code, since I chose a more complex algorithm which is trickier to get right. The C programmer then re-implements my method in C, which would completely trounce my own code, except I've spent that time leaning on BLAS and LAPACK, speeding up my operations again. The C programmer then starts using fast libraries instead of her own code, again beating my old source, only to find that I've now pushed a good chunk of the processing onto the GPU.

Eventually, I will run out of tricks. The final draft of the C code will trounce the final draft of my Python code. However, during most of the creation process, my Python usually outperforms their C. Also, a truly talented C programmer would write my colleague's final draft as her first draft, negating every advantage that I had in the process. However, that's not a situation that I'm likely to run into, because places hiring truly talented programmers aren't likely to be hiring me.

Makes literally no sense. The C developer would simply use the same C/Fortran library the Python implementation is based on. You are creating a false dichotomy for the sake of it.

The C developer should use the same C/Fortran library that the Python implementation is based on. A good C developer would use that library. In my experience, mediocre C developers will not use that library and will implement their own, naive version.

This has not been my experience.

He's making shit up to stroke his own ego, ignore him.

Yeah, I really don't see the GP's point.

"the complexity of C++ is troubling"

This comes out of the fact that C++ is being developed more as an engineering tool than as a programming toy. When you're into real engineering, you don't have free lunch.

I disagree. I see most of the complexity of C++ as being due to design mistakes which have to be maintained for backwards compatibility. I would personally like to see a breaking change where a lot of the inconsistencies and design mistakes that have been identified are rectified.

You can see this with Rust which tries to satisfy the same niche. It's possible to avoid the gotchas in C++ while preserving the advantages, you just pay a different price by working harder to satisfy the compiler.

A lot of people working with C/C++ I know are looking forward to Rust, but the problem is that currently it's nowhere near as mature and widespread as those languages. I hope that will change in the near future, as it looks like a promising systems language.

Why wait for Rust when D is already there?

Honestly, I haven't tried D, but if it hasn't gotten traction in 14 years, I'd say it was a miss. It's not enough to be a good language -- documentation, tools, libraries, community, and adoption in open source projects are, arguably, even more important.

This is the attitude that keeps us mired in subpar ideas like C, C++, Java, HTML, CSS, JS.

"I'm a tough guy and tough guys use tools that make our lives hell! We're serious! We're engineers! Your language is just a toy because our managers want to keep us fungible!"

What does an average "real engineer" C++ programmer have at their disposal? C++ has an ersatz type system (not algebraic and not connected to type theory) and an ersatz macro system (templates and preprocessing), and it offloads writing type signatures onto the programmer (no help from type inference). It also lacks a garbage collector because its followers are still afraid of non-existent "performance penalties". C++ has an ambiguous grammar that is context-dependent and requires infinite lookahead. Its creator is not a revolutionary thinker like Alan Kay, just some guy who wrote a language and became famous for writing that language. He also said that we hear a lot of complaints about C++ just because a lot of people use it, not because it's crap.

But bullies like restalis rejoice in the fact that know-nothing managers keep choosing C++ because it's the industry bandwagon and you can count on universities to supply the market with a fresh load of programmers trained in mediocre tools time and again.

You're a bandwagoner.

"What does an average «real engineer» C++ programmer have at their disposal?"

Ways to do the job. C++ is not perfect, and I contest some of its design decisions (like having private class members by default, and having to explicitly (i.e. verbosely) declare at least some of them "public" to make that given class usable), but I don't have to fight the language so much to get things done. For me the best job C++ does so far is by staying a tool. I don't want _BY_DEFAULT_ "something more", "something clever", or "something whatever" that adds more accidental complexity [1] which comes around when least expected and gets in my way! This being said, I admit that C++ suffers from feature creep like many other things do, but (as a consolation) it got here like this only after serious critique of each of the added features, and each feature had to pass serious filtering in order to "creep in".

"managers keep choosing C++"

Actually, I am the one choosing C++ for what I do, be it corporate related (and managed by managers) work or personal projects. I choose it for both low level and almost scripting-like tasks, although I used (and consider myself proficient in) other toy programming languages too.

[1] http://shaffner.us/cs/papers/tarpit.pdf

Couldn't D, Go, Nim or Rust fill that need?

All of his books are great - especially the one on STL. Herb Sutter wrote some great C++ books also. On a side note, based on that picture of him sitting at his desk, he is still using an iPod V1.

Sutter's written that we should use "async" everywhere (even though conventional threads have numerous advantages) and that we should use "auto" everywhere (even to the point of writing "auto x = int{5}" instead of "int x = 5" and "auto foo() -> void" instead of "void foo()"). I'd take his advice with a grain of salt.

> he is still using iPod V1

Are you sure? I see some color contrast between the center button and the wheel, and no ring of buttons around it.

It's not V1 - that one took a Firewire cable, not USB.

I wonder how long it took Kernighan and Ritchie to write 'The C Programming Language'? Amazon says the 2nd edition (1988) is 272 pages long but looking back, it felt like it was 80 pages :) Going by the OP's math (of the language creator being twice as productive) prolly 4 months?

Remember that that's including like 60 pages of what are basically printed man pages.

When K&R was written, it's not a given that the man pages for C functions already existed, is it?

They were, albeit potentially more terse than is available now. System III's functions section was pretty full by 1983 even [1].

[1] http://minnie.tuhs.org/cgi-bin/utree.pl?file=pdp11v/usr/man/...

> System III's functions section was pretty full by 1983

K&R was published in 1978

This is interesting, but the hours are vastly overestimated, at least compared to similar metrics (e.g. from the deliberate practice and expertise literature):

> If we figure a 40-hour work week... it wasn't my only activity. Let's knock that number down by 20% to account for my occasionally having to spend time on other things.

There's no way anyone spends 40 or even 30 hours a week writing. Most authors spend something like 3-4 hours a day writing - and that's a good day!

See for example the chapter on writers in Cambridge Expertise Performance Handbook (http://www.amazon.com/Cambridge-Expertise-Performance-Handbo...).

In general this type of reasoning (40h work week => time for 40h of writing) makes time estimates troublesome in my opinion. Another example is people who claim to write code for 40, 60 or even 80 hours a week. A look at actual RescueTime data gives a sober picture: https://news.ycombinator.com/item?id=209195

Of course, you could claim a lot of the work happens in breaks, and I would agree. But then the actual weekly number for our most beloved artists, programmers, and scientists is more like 24*7, literally. In that case, it makes more sense to talk about it on the timescale of days, weeks, months or years.

I find the hours amazing. My understanding is that a typical tech publishing cycle is three months for a first draft, with an SD of around 1.5 months, and a page count of 300-600. Not infrequently the book will be a side project and not the author's main gig.

E.g. When iPhone OS (as it was) first appeared, in-depth guides like Erica Sadun's were on the shelves almost as soon as it was released.

Even if some of those authors had limited distribution beta versions, they still worked their way through all the new features of the OS, wrote and tested sample code, and wrote all the content in a couple of months.

I understand Meyers wants to make sure the content represents industry practice, and that takes longer than just cranking out some code and making sure it works.

But even so - that's still a surprisingly long time for a tech book.

I think that the lesson here is that it is kind of silly to try and put a time estimate on these things. How do you define time spent "writing"? Does it include time spent researching? Time spent thinking of the structure? If you pause to formulate the next sentence or line of code, should you stop the clock? Maybe we should only measure time taken in each individual key press.

It makes much more sense, and is less ambiguous, to talk about "time to complete a project". Like a book, for example.

When I track my time scrupulously, the results are significantly different than what I thought they would be and also quite different from what, several weeks removed, I remembered re the project.

So, when I read an article that does back of the envelope calculations on time spent, based on rough estimates (20% on other projects, etc.), I find myself suspicious of the computed results.

With the amount of time invested into writing the book it would be interesting to see what type of financial return he was able to get for it. Certainly I would hope he was able to make as much as if he were working in private industry for the same amount of time. (though somehow I doubt this is the case)

To within a disturbingly accurate margin of error I can tell you right now his financial return for this book: zero dollars.

There isn't enough money in tech books for writers to rely on it as a day job. At best it'll get you noticed and make other jobs or speaking engagements easier to get.

I am inclined to disbelieve you.

I could agree that tech writing in general doesn't pay well, but this is an author who has sold over 150,000 physical copies, in addition to whatever virtual sales (like mine) he's had. Even if his take is 10% for a $30 book, that's still respectable income.

I had no idea where you came up with the "150,000 copies" figure, but then I ended up seeing the same thing listed on a page about "More Effective C++", which has been in print for two decades. Even if he received $3-4 per copy sold for all those books, that's over a 20-year timespan, so it only works out to $20k or so a year, although he does have other books. It's possible that Scott Meyers might be a sufficiently popular author to be able to support himself just on book income at a level comparable to having a decent day job, but at best it's a very close thing, and even so he would be one of the extremely rare few who could do so.

Actually, book writing is a booming business these days, as long as the books are meant for a general audience. Tim Ferriss, for example, has made more than $3M from his books. E L James reportedly made $100 million from Fifty Shades of Grey. There are many other authors on the millionaire list purely from writing books. The reason this is happening now more than before is the cheap electronic versions of books that can be downloaded on phones and tablets -- on more than 150M devices in the US alone. Previously, carrying around hard copies meant you needed to be a serious book lover. Now all the friction and weight is gone, and books are more in line with TV entertainment instead of something meant for people wearing glasses. In addition, publishers and authors are figuring out how to make books go viral. If your book indeed goes viral, you can bet on 1% of this population buying it, which immediately translates to $1M-3M. Adding translated editions and international markets would double your revenues.

However, this generally doesn't apply to tech books, because the number of programmers is only around 10 million, and if you estimate that 1% will buy your book (best case) you will still top out in the $300K range @ 10% royalty. Even worse, the technology will change in the next 2-3 years and your royalties will dry up quickly. Most "full time" tech book writers run training/consulting businesses and do conferences as their main source of income.

E L James is primarily a marketer. Reddit had a very interesting comment from someone who knew the backstory about her relationship to the Twilight fanfic scene.

Let's just say people still remember her there, but perhaps not very fondly.

The Amazon gold rush for new authors is pretty much over. That's mostly a good thing. The people who stick with it now will be talented and/or persistent, and quality is going to start increasing.

As for tech - book sales are actually incredibly low. The general audience "How to use an iPhone" guides can sell very well, but titles on specific niche developer subjects rarely sell more than a few K copies.

A title on - say - JavaScript that racked up sales of 10K would be considered a run-away best seller.

Also royalties are closer to 5%, because they're usually calculated from net publisher income, which is around half the jacket cost.

Publishers sell tech books because many authors accept very low advances, so the numbers still work out. But to some extent it's a legacy industry. It was big in the 80s, peaked in the 90s and 00s, but has been in decline ever since. There is so much excellent free training/example content online that it's making less and less sense to package it up in book form.

Being an author of a well regarded programming book has plenty of benefits that don't involve book royalties. You're considered an industry expert and can make money on speaking and consulting gigs.

John Resig may be a bigger name, and he didn't make much from publishing.


It doesn't seem like the reward is primarily financial, though I'm incredibly glad of both Scott's C++ books, and Resig's Javascript books.

Very interesting to reflect on the amount of time needed to write a technical book on a programming language. If I compare that to the amount of time taken to write code (and forgive me for the horrible KLOC metric), I think one could easily average about 300 lines of good C++ code (including tests) per day on a new project -- probably more if you are working alone, but let's keep it conservative. So about 37.5 lines per hour * 1350 hours = ~50 KLOC. Say half of it is tests, so that's a medium-sized app.

I don't know why, but writing a book seems waaay more effort than churning out ~25 KLOC of tested C++. I guess it is what you are used to...

* 300 lines per day is what I averaged over a multi-year period when I was a C++ programmer. But that was more than a decade ago, so I'm guessing it would take fewer lines of code with "Effective Modern C++" ;-) (and yes, I measured it for interest's sake...)

> I don't know why, but writing a book seems waaay more effort than churning out ~25 KLOC of tested C++.

It is. I wrote a programming book, and the code for each chapter was always a breeze compared to writing the prose.

I rationalize it by thinking of English as a programming language. It has a fantastically complex syntax and semantics, tons of undefined behavior, and a few billion interpreters out there, each with a large number of quirks and bugs.

Writing a program in English that does the more or less correct thing on all of those interpreters ain't easy.

I'm actually finding it hard because I feel like I need to re-write all the old chapters each time I write a new one.

The problem is as I go from A to B to C when I get to D I realize I really want the code to be in a different shape. But to do that I either need to make a C' where I take a timeout to explain why I'm refactoring the code before I move on to D. Or, I need to go re-write A, B, and C so they fit directly into D.

Making C' sucks because it's irrelevant to whatever the topic is. But re-writing is also a ton of work.

You could argue I should start at Z and work backward but I'm writing as I go so I don't know what Z is yet.

Ditto. I'm almost finished writing a book on Ansible (self-published on LeanPub), and most of the difficult sections have had the code samples ready to go for months. Getting the code organized and laid out in a logical fashion with appropriate, short intros, descriptions, and summaries requires much more effort than the code itself!

300 lines of code per day? Really? The LOC measurement is highly dependent on what kind of project you are working on, and really doesn't tell much about what is going on with the project.

Don't know what a good metric is. Maybe functionality instead of lines of code, or kilobytes in the final binary. But how do you measure functionality?

300 lines per day sustained? That seems really high to me. What were you working on? Didn't you ever spend days optimizing or refactoring with <= 0 net lines added?

300 lines is about what I averaged (I can't remember the exact number, but I remember it was about 50:50 split between test code and production code with 150 lines of production code a day). As another commenter mentioned, it was new code and it was C++ (old style). It's easy to crank out header files (as long as they don't contain templates...) Definitely spent time both optimizing and refactoring large portions. It's the average over time. So some days I might write 1000 lines and others, net negative code.

As another person mentioned it really depends on the project. If you are using big frameworks then you can't write code nearly as quickly because you are constrained by the design of the framework. Also if you are green fielding everything, then you get to write a lot of infrastructure code. I get paid to write Ruby on Rails code these days (among other things) and there is no way that I would average 300 lines of code a day even with tests. A huge part of my day is wrestling with Rails and trying to understand what the heck it is doing.

Lots of other things can slow you down. Your coworkers for instance -- constant squabbling over one issue or another can easily eat up half the day (or more). Having a constant stream of requirements is really important too. It's easy to get stalled trying to figure out what you should be doing. It's also easy to get paralysed by indecision and be afraid to write code because you don't want to fix it later. Many, many things can slow you down. If you have a situation where these things aren't getting in the way, you can write quite a lot of code.

At the time I was measuring my output, I was on a phenomenal team. We were writing Windows applications (notably without MFC ;-) ). Working with them pretty much approximated what I think I could do if working alone (i.e., the communication overhead was pretty much 0), so I thought it was a reasonable comparison.

> about 300 lines of good C++ code (including tests) per day on a new project

He wrote 'on a new project'. So you can indeed get to such an average by writing new code for some time.

Could somebody redpill me on why something like D (http://dlang.org/) is not a better alternative to C++. Is it the library support, or what exactly makes C++ still a better option in 2015? I am not sure what the best alternative to C++ is.

When you develop something you consider not only the features of the language itself, but also the availability of libraries in this particular language. There is a large amount of high-quality C++ libraries and C++ can consume C libraries seamlessly. In most other languages you need to write often unsafe bindings to connect to C.

D simply doesn't have enough libraries to make it a viable choice except maybe for some niche area. For example, there are only 6k D repos on GitHub (https://github.com/search?utf8=%E2%9C%93&q=language%3AD&type...) compared with 340k C++ repos. Also any high-performance code (at least in the area I'm working) is written in C++ or, sometimes, in C.

Thanks for the answer, this is exactly what I was looking for. It seems that C++ is in a sweet spot, and it is going to be hard to challenge its position.

D is a better alternative for many people. Speaking of Scott Meyers, C++, and D, his talk at DConf 2014 was humorous and you'll get a better sense of him.


It depends on your actual goals. Personally, though I prefer Go for everything when possible, my favorite alternative to C++ is C++. If you want a true alternative, Rust is probably a smarter investment of time compared to D, based on traction. However, I'm uninformed about the D community as of late, and certainly won't be surprised of a future surge.

Thanks, this video is just too funny. I was aware of the quirkiness of C++, but this is very insightful.

On the Go note, I like it but I think the target of Go is system programming with great concurrency support. More about Go here: http://yager.io/programming/go.html

Rust seems to incorporate all of the academic research about programming languages and also moves towards memory safety and correctness, and that is a good start. See what happens in the next 10 years! :)

Scott is hilarious. He's a bundle of insight. The slight jest about C++ being an alternative to itself sums it up: it can be a multidimensional beast. Quirky is a very kind way to describe smashing one's head on a keyboard repeatedly. That said, C++11 changed its game. It's so much easier to write in it without inducing bleeding eye syndrome.

The Go article has some fair points. Yet, most of the reasons the author discounts Go are reasons I prefer Go. Its minimalism is a virtue. I'm grateful that it ignores a lot of research overload. It's grounding. I see more and do more in it. Reading people's code and the standard library itself isn't painful. Go is versatile. Systems don't sum it up. In my view, it'll have a healthy stride in a lot of other areas outside of networking and the web, given time (e.g. desktop GUIs, mobile, games, game servers, etc.). In 10 years, we'll have exhausted all letters and be back on C.

I am familiar with C++ and not familiar with D, but I have heard from people that D is not very practical without a garbage collector, so I suspect that D is not a good choice if you want to have memory allocation/deallocation in your own hands.

I loved D the last time I looked at it, but got tired of writing bindings / FFI code for all the libs I needed. Same thing with Lisp.

I don't want to use your arbitrarily opinionated bindings, and I don't want to have to write my own just to get started.

The language itself is amazing. I'm a fan.

But try to find a MySQL library with support for DECIMAL. Something I think should be in the standard library nowadays.

No workable solution found.

Things like that are the things that kill the language.

I actually finished this book today. It's a great resource on the various pitfalls and good practices that are present in C++11/C++14, and they are presented in a very readable format that really lends itself to referencing later on. I highly recommend it.

I've read a couple of the Effective C++ series books by him, and I've found Effective STL to be the most useful book for my day-to-day C++ programming at work.

It's a must have book.

Love the book!
