
There are pitfalls to everything. I find that a lot of people who only learned C++ end up writing nice-looking, safe, well-organized code that has awful memory usage and dubious performance characteristics. It's not that they're bad programmers or made poor algorithmic choices. It's that they've been indoctrinated with the fear of "premature optimization". Moreover, they often don't recognize optimization opportunities when they do exist.

On the flip side, the C programmer who starts out learning C++ will often spit out some hideous abomination: no namespaces, obscure function names, raw pointers everywhere, and tons of callbacks.

Thing is, sometimes performance really matters. A large part of the HN audience focuses only on web-oriented programming, but in scientific computing, finance, etc., being able to squeeze a few more operations out of each CPU cycle can be incredibly important.

I think rather than being dogmatic about the issue, as programmers tend to do, it's important to introduce students to C but be very clear about why you might want to use some of its features vs. relying on the safer C++ variants.




Agree, but most of the time performance matters less than people think.

In all my years as a developer, even back on the Spectrum, MS-DOS and the Amiga, I never bothered turning off bounds checking, and it seldom mattered for the type of applications I was writing.

The very few times it mattered, I was writing big chunks of Assembly anyway.

Everyone thinks their applications have the same performance requirements as Microsoft, Apple, Google, Facebook, CERN, Wall Street, Sony ... have, but they don't.

It is like the native-code version of going web scale on day one, when all you have is a few pages to display.


Fear of premature optimization, fear of pointers, and fear of how computers work in general, yes.

Pointers should be used sparingly, and much of the time in code I've seen they could and should have been avoided. But it seems like we've swung to the opposite extreme, to the point where seeing a pointer instills an exaggerated fear of memory stomping, leaks, and a variety of other things. These are important concerns, but, like optimization, they ought not to make a person afraid of using pointers at all. Anybody who needs to do non-trivial programming at the systems level had better get over those fears and focus on learning how best to use the tools they need, in my view.


Out of interest: I never learned C but went straight to C++. What in particular about C++ makes you believe that you'll write memory-hogging programs?


Specifically, the STL has a lot of slow containers that people readily use. For example, a beginner might see std::map<Key, Value> and think "wow, a handy hash table class!" First off, it's not a hash table (it's usually a red-black tree). Second, the closest thing to a hash table in the STL, std::unordered_map<Key, Value>, is still pretty slow.
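
To make that concrete, here's a rough sketch (the key and value types are just placeholders for the example): both containers compile and "work", but their layouts and lookup costs are quite different.

    #include <map>
    #include <string>
    #include <unordered_map>

    int main() {
        // std::map is an ordered, node-based tree (typically red-black):
        // every insert allocates a node, and lookups are O(log n) pointer chases.
        std::map<std::string, int> ordered;
        ordered["apple"] = 1;

        // std::unordered_map is the actual hash table: O(1) average lookup,
        // but still node-based in common implementations, so each insert
        // allocates and each lookup chases a bucket pointer.
        std::unordered_map<std::string, int> hashed;
        hashed["apple"] = 1;

        return static_cast<int>(ordered.count("apple") + hashed.count("apple"));
    }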

Then there's std::string. Super nice in 95% of cases, but it can cripple an application if you have millions of strings to deal with (TONS of dynamic memory allocation).
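
A minimal sketch of the kind of hidden cost I mean (the loop count and string contents are arbitrary, and the small-string threshold varies by implementation):

    #include <string>
    #include <vector>

    int main() {
        std::vector<std::string> lines;
        // Each string below is long enough to defeat the small-string
        // optimization, so every push_back does a separate heap allocation,
        // on top of the vector's own growth reallocations.
        for (int i = 0; i < 1000000; ++i) {
            lines.push_back("this string is long enough to spill to the heap: " +
                            std::to_string(i));
        }
        // With millions of strings that is millions of allocations;
        // reserve(), string interning, or string views are common mitigations.
        return static_cast<int>(lines.size() % 2);
    }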

And then there's std::shared_ptr. Super convenient, but potentially a huge cost if you're creating and destroying objects with very short lifetimes in hot loops. std::unique_ptr, on the other hand, adds essentially no overhead. Sure, you can look these things up, but none of it is really obvious.
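
Roughly, the difference looks like this (the Particle struct and loop count are made up for the sketch):

    #include <memory>

    struct Particle { float x, y, z; };

    int main() {
        long sum = 0;
        for (int i = 0; i < 1000000; ++i) {
            // shared_ptr: make_shared allocates the object plus a control block,
            // and every copy/destruction updates an atomic reference count.
            std::shared_ptr<Particle> s = std::make_shared<Particle>();

            // unique_ptr: just a plain pointer with a destructor call,
            // no control block and no atomic operations.
            std::unique_ptr<Particle> u = std::make_unique<Particle>();

            sum += static_cast<long>(s->x + u->x);
        }
        return static_cast<int>(sum % 2);
    }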

C++ is an incredibly useful language. It's far from the safest or prettiest, but when used properly it's a very powerful tool. Knowing C can help you better understand when you're not getting something for free and when undefined behavior can rear its ugly head.


I see, interesting, thanks. I suppose the same problems arise when using the generic STL algorithms instead of the member functions better suited to a specific container, e.g. std::find from <algorithm> instead of std::set::find.
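
For example (the set contents here are arbitrary): the member find uses the tree's ordering, while the generic std::find just walks the iterators.

    #include <algorithm>
    #include <set>

    int main() {
        std::set<int> values = {1, 2, 3, 42, 100};

        // Member find: exploits the sorted tree, O(log n).
        auto fast = values.find(42);

        // Generic std::find from <algorithm>: linear scan, O(n),
        // because it can't assume anything about the range.
        auto slow = std::find(values.begin(), values.end(), 42);

        return (fast != values.end()) + (slow != values.end());
    }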

Being aware of what is happening and choosing the best tool for the job, I suppose! And knowing the STL inside out, with all its idiosyncrasies and foibles.


I've seen a game engine built on shared_ptr that had trouble getting a 2D game performing well on the PS3.

It's not that C++ will bloat your program; it's that many programmers don't understand the tradeoffs of the primitives they're using, or don't have the experience or vision to see what they become at scale.


That last bit of your sentence is a great point: no vision to see what they become at scale, i.e. normally monstrous.



