Does C++ still deserve a bad rap? (nibblestew.blogspot.com)
108 points by signa11 5 days ago | hide | past | favorite | 301 comments





The code for that blog post is fragile in an interesting way. Its safety and security depend on a subtle invariant: that the 'word_regex' does not match a string containing any non-ASCII character. If requirements changed so the regex could match a non-ASCII character, then one of the 'c' chars could have a negative value other than -1. https://en.cppreference.com/w/cpp/string/byte/tolower says "If the value of ch is not representable as unsigned char and does not equal EOF, the behavior is undefined", so executing std::tolower(c) would trigger undefined behavior, i.e. a safety and security bug.

This fragility is not at all obvious in the code. It is easy to imagine someone making that kind of change to word_regex and introducing a theoretical security bug that no compiler or static checker is going to pick up (AFAIK). Of course the severity of the bug in practice depends on what std::tolower does in that undefined-behavior situation (which may depend on the run-time locale setting).

I think the author's example actually illustrates the C++ safety problem pretty well. You write a program that looks safe and actually is safe, but slightly different code which looks just as safe is not. You're tiptoeing through a minefield.


In at least some locales, glibc's tolower segfaults under that condition.

(before anyone says that ‘in reality it’s probably fine’)


I would love to know an example of that.


That quotes a glibc comment "we also support negative `signed char' values for broken old programs", which means the OP code is in fact going to behave OK with glibc, provided that "support for broken old programs" is never removed.

I have nothing against C++ per se. I used it on and off for 10 years (most actively for 3). It's a fine language.

To me, using C++ means you are ready to give up a lot of time and energy in order to gain complete control over certain aspects of your program. I was very much into that at the start of my career and gradually drifted toward more immediate productivity, while reserving the right to poke under the hood when necessary. This has led me to Rust, which I find to be the better natively compiled, strongly and statically typed language, but YMMV of course.

These days I reach for dynamic languages for personal projects and experiments. I want to sketch an idea and see if it works within an interval of 15 minutes to a day. Languages like C, C++, Rust, D, Nim, Zig, Fortran, Pascal, and Haskell I found generally terrible for when I am in a flow state. Using Elixir / JS / OCaml, I found I could iterate much faster.

I will never participate in a language war (although all of us have slipped on that ice more times than we'd like to admit, let's be honest). But to me C++ is simply not productive. It feels like having to learn a 10-by-5-meter control panel with dials, blinky lights, keys, and levers. I know there are many who enjoy that -- more power to them. But I am not in that group.


I used to work with C++ almost exclusively, but I moved to more interactive languages for most of my work, mostly for faster iteration. C++ is definitely not my main choice anymore, but I still use it daily.

C++ has pretty much unmatched tooling due to the massive ecosystem. I have my own long list of gripes about its syntax and historical baggage, but it's a language that doesn't really impose a style/idiom like several other modern/newer languages do, making it applicable to anything from embedded to large-scale programs without restrictions. Modern C++ has smoothed a lot of the rough edges in terms of productivity, but it's still an overly complex language that cannot shed any weight due to backward compatibility, and that's a shame.

Contrary to many others though, I'd take a dash of C++ over "simple" C/C99 any day, especially in embedded. C++ can be vastly simpler and more readable, while retaining 100% control over memory and layout. Most of the memory/threading issues essentially disappear with very little fanfare, but few people talk about that.

I don't mean to detract from static checking, though! But I feel like newer languages such as Rust are still too young and idealistic, and haven't yet been beaten into shape by the implementation requirements of a systems programming language, where your silly requirement is somebody else's essential feature. This is how it gets ugly, and this is where C++ probably has something to say.

I still like modern languages, though. I like Rust, but I won't be able to use it effectively until they get rid of static compilation. I'd take D's metaprogramming over C++ templates any day, but the GC is again a non-starter. Nim has no static checking, but it deserves much more consideration than it currently gets: it's much more pragmatic than Rust IMHO, and it's also much easier to integrate into existing programs with less hassle.


I worked mostly in C++ for over 15 years. I was totally proficient in it, and very productive, especially after c++11. Often I would reach for C++ even in favor of Python for small throw-away tools, in the knowledge that it wouldn't bail out on a typo in the final print statement after 10 minutes of analysis.

I have now worked with Rust for a year, and I don't see myself ever going back to C++. The statement in the article that "only the final iteration would be unsafe if one would forget to check the length" made me cringe uncomfortably. I might have written similar code as the author, but it now looks clumsy and overly complicated.


Fair and accurate points. I admit Rust still needs to sort out some kinks, but it builds on the experience of C++ and I believe it's already heading in a very productive direction. I love what they're doing with their async stack, to name just one example.

I can't deny compiler and linker speed in general are better in C++ land, but Rust is gradually catching up (still too slow for my taste, but sigh). That's a very strong point against Rust to this day.

You might be right that compromises for running in embedded environments might load Rust with the same ugly historical baggage as C++. That's very possible and I can only hope it won't happen, but you do still have an excellent point about the genesis of these problems.

D and Nim I admittedly didn't evaluate very fairly -- I gave them half an afternoon each. They were quite nice, in fact, but I found the small ecosystems off-putting, especially in a situation where I want to play with an idea that might require libraries for 5+ well-established technologies. So yeah, I wasn't very fair to them, but I didn't want to dedicate much time to prototyping ideas regardless.


What is "static compilation"?

It's not just-in-time compilation. Maybe he wants a kind of REPL for Rust - I guess?

> C++ can be vastly simpler and more readable, while retaining 100% control over memory and layout.

But unlike assembly, you don't have control over CPU flag registers and special instructions. Can you explain why and when, exactly, 100% control over memory is needed? Outside of OS kernels, high-frequency trading engines, and real-time control systems? If using assembly is too costly, why is using C++ economical?

And do you really have control over what happens at the machine level? Second- and third-level caches, hyperthreading, multicore, out-of-order execution, CPU affinity, NUMA, the TLB, virtual memory, all that stuff: there is nothing in C++ which controls that.

And I would even argue that in most cases this is a good thing, because unless you are writing an OS kernel, it is none of your program's business; what it should focus on is the logical flow of the computation.

And this is what modern optimizing compilers do: they transform expressions into machine code which has an equivalent observable effect, and execute what the program specifies as fast and efficiently as possible. If you write:

  int a = 0;
  for (int i=0; i < 100; i++)
    a += i;
  printf("a = %i\n", a);
you could be tempted to believe that the CPU executes increments on a 32-bit or 64-bit register. This is not what happens with a modern optimizing compiler. Look at this link (which uses -O3):

https://godbolt.org/#g:!((g:!((g:!((h:codeEditor,i:(fontScal...

- the compiler reduces it to a single move and return instruction. This is not specific to C++ compilers - a good Lisp or D or OCaml compiler will do the same, while managing the memory for you.

Now, control is, as in psychology, a double-edged sword. It allows you to exert influence, but too much control is not a good thing, as the example of micro-management shows. In the case of the compiler, if you control too much, it interferes with the compiler's job of transforming your expressions into efficient equivalent machine code. The fine-tuned code you write today might be optimized for some of today's hardware, but while it will probably still compile, it might be far from optimal 15 years from now - and in the case of C++, without the compiler having any leeway to optimize it, because you told it too explicitly what to do.

It also interferes with your job of producing clear, valid, and readable algorithms and code. The numerous controls which C++ gives you make for a very broad and very fuzzy interface, which in turn makes it more difficult to produce good code, and hard to do so in a way which is reliable, strictly valid, and easy to understand. The issue with non-ASCII characters in words in the original article is a good example. Here is another one:

Take

  #include <vector>
  #include <algorithm>

  std::vector<bool> vb(100);

  std::fill(vb.begin(), vb.end(), false);
is this valid code?

The C++ reference says:

https://en.cppreference.com/w/cpp/container/vector_bool

"Since its representation may be optimized, std::vector<bool> does not necessarily meet all Container or SequenceContainer requirements. For example, because std::vector<bool>::iterator is implementation-defined, it may not satisfy the LegacyForwardIterator requirement. Use of algorithms such as std::search that require LegacyForwardIterators may result in either compile-time or run-time errors. "

Do you think this is funny?


You don't have direct control over caches, but if you have control over memory layout you can indirectly influence how the code performs by working with the cache sizes and cache lines. If that's your goal, you cannot focus on program flow alone, and that's the main point.

Low-level and high-performance code is not "pretty" by modern standards, as it often requires DMA and tight control over data placement in order to fully leverage instruction-level and hardware-level parallelism. The hardware dictates what the data-structure layout should be first, and we work on top of that. We abstract structs and containers that capture this layout, so that we can still have readable program flow at the end.

In this sense, C++ can still be used as a glorified assembler, without the need to drop down to ASM for simple stuff such as your for loop: nobody wants to do that (me included). But contrary to C, you can avoid a ton of preprocessor macros and get improved type checking, while still using ASM in selected spots if needed.

Your example about vector<bool> is not really surprising. Do you want the iterator to be efficient, or consistent? Tough choice, depending on the scenario. It's annoying that it's not standardized in one form or the other. I remember fretting over this detail over 10 years ago, but in actuality I have used vector<bool> exactly zero times in my career so far.

Again, please don't consider this as if I was praising C++. There are _many_ areas, including lack of standardization in the stdlib area (vector<bool> being one of many), that I don't like. I just want to say that it's an incredibly versatile tool which is very hard to replace given the same constraints.


> Low-level and high-performance code is not "pretty" by modern standards, as it often requires DMA and tight control over data placement in order to fully leverage instruction-level and hardware-level parallelism.

This is true if you look at implementations like these:

https://benchmarksgame-team.pages.debian.net/benchmarksgame/...

for example:

https://benchmarksgame-team.pages.debian.net/benchmarksgame/...

vs.

https://benchmarksgame-team.pages.debian.net/benchmarksgame/...

I'd say the C++ code in this particular case is not only much longer but also uglier.

However, what fraction of the C++ code in use really requires direct control over DMA? If you could get an at least equally fast result writing "normal" Rust instead of "normal" C++, with more safety and less debugging effort, would this not be tempting for many people using C++? Because this is what I observed in my case; and this despite my having written no more than a few hundred lines of performance-critical code in Rust, versus more than 10 years of work on performance-critical code in C++ and C.


> Your example about vector<bool> is not really surprising. Do you want the iterator to be efficient, or consistent?

The thing is, what cppreference says here is that this innocent-looking code might either fail to compile or cause undefined behavior in some arbitrary part of the program. In my case, on GCC 8, it caused memory-leak warnings with AddressSanitizer. For larger programs, and especially for programs which have hard requirements on real-time latency, safety, or robustness, this is not acceptable.

And yes, an experienced C++ developer will initially need a little more time to write the same code in Rust, compared to C++. (I do not think this is true for developers new to C++, because C++ is a very large language.) However, in any larger program, the time spent on testing and debugging is very likely larger than the time spent typing in the first version of the code. And by this metric, Rust's strict approach to correctness is far better.

And if you get to debug undefined behavior in large multi-threaded programs, the time you can spend debugging is basically unbounded if you were not both experienced and careful when writing the code in the first place.


I find it very interesting that you put Haskell in one camp and OCaml in the other. I am more of a Standard ML guy than an OCaml one, but can you elaborate a bit on why you feel you can iterate faster in OCaml than in Haskell?

OCaml is quite a lot more pragmatic and 'loose' than Haskell. But that's just my impression. They even have for-loops in OCaml.

(I would be really slow writing code in vanilla JavaScript. There's no static checking, and not much dynamic checking. So you need to watch your step very carefully---in some sense, very much like in C++.)


> OCaml is quite a lot more pragmatic and 'loose' than Haskell. But that's just my impression.

Mine too. I still dislike the lack of Unicode strings by default, and the (initially) somewhat confusing tooling, but outside of that I quite love the language. It has huge potential, and I can only hope that the arrival of Multicore in the upstream language will kick it forward and put it in the same league as Rust (although due to the GC it can never be that fast, I'd think; but who knows).


> (although due to GC it can never be that fast I'd think; but who knows).

Well, in theory GC can be made very fast. Though you sort-of have to decide whether you want maximal throughput or real-time latency guarantees.

In practice, we aren't quite there yet, alas.

The way to get really fast code is just to avoid allocation, I'd guess? OCaml allows you to use mutation, and it's relatively easy to peek under the hood to see what your code gets translated into.

The main technical innovation of Rust is the borrow-checker.

But that piece is very related to linear types / uniqueness types. There's some interesting work going on with linear types in Haskell.

And just like Generalized Algebraic Data Types also eventually made their way to OCaml, there's no fundamental reason they couldn't add linear types to OCaml, I guess?

(Linear types can basically be used to encode the FP equivalent of manual memory management.)


Check my reply in another comment that's sibling to yours.

One of the issues with this comment is that, while it aims to elaborate on certain aspects of C++ that make it unattractive, it tacitly promotes the idea that using C++ gives a person "complete control over certain aspects of your program". Given the intent of the message, this is actually kind of harmful. There's a certain class of chest-thumping programmer who sees this as vindication for the language. So perpetuating that idea is bad news, especially since it's only true under very, very loose criteria of what it means to give a programmer "complete control".

This is what I heard several seasoned C++ veterans claim and so far I didn't have a reason to doubt them.

As already stated, I'm not a fan, but apparently a lot of people out there manage just fine with C++ and claim that it's actually easy and possible to attain complete control.

I'd imagine that could be somewhat true if you invest decades in that language.


It isn't true unless that investment includes tons of bespoke and unrealistic compiler work and an open side channel (e.g. in the form of command-line flags to the compiler) to get it to lay down exactly what you want it to.

When people write those kinds of things, they're being hyperbolic. It's best not to assent, in light of the chest thumpers. Complete control just isn't possible in C++. More control than many of its contemporaries, yes.


In general I agree with your broader point about when to select a dynamic language, flow state, etc., and specifically with regard to C, C++, and Rust (although of those I haven't done C++ in years except to tweak others' code). However, I find D can be written in a very script-like manner, using the GC. Only when chasing performance do I need to go GC-free. The fast compilation means I can iterate as fast as in Python, and it has replaced a lot of Python in our group.

Side note: I would like to learn OCaml, and am interested that you drew a distinction between Haskell (former group) and OCaml (latter group).


OCaml and Haskell share a common ancestry, but I would also draw a distinction between them. (Though not the same that the other commenter did.)

OCaml is a bit more of a drop-in replacement for something like C++ or D. Mostly because it's strict by default, so it's easier to reason about performance even for a non-expert user.

OCaml's approach to side-effects has now entered the mainstream: it's considered good style to avoid them, but they are still used pervasively. That's very similar to what good modern C++, Java, Python etc style advocates.

Of course, Haskell also allows side-effects. But generally, you track that they are occurring with the type system, you generally stick them in some kind of Monad or Applicative Functor.


Check here: https://news.ycombinator.com/item?id=24809623 (sibling comment).

I love this concept! I think it's more linked to experience than to any specific language feature.

I used to think it was typing, then I became more proficient in Haskell and the type system actually saved me time.

My in-the-flow languages right now are Haskell, JavaScript, and PHP, but I'm trying to add Rust to the list.

I have a friend who reaches for C for quick prototyping. I know many more going for .NET or Java (which are not that far from C++).


I am still having a hard time adding Rust to my flow-state-coding. Sadly, like C++, it does require you to learn quite a bit before that point.

But I still find it more pleasant than C++ so I reach for Rust instead. Plus the experience really starts to pay off from one point and on.


What are the cases where you would use specifically Ocaml instead of C++ or Rust?

(EDIT: I misread your question, apologies. OCaml vs. C++ is mostly because of far fewer surprises when starting to work with it. OCaml vs. Rust is mostly because Rust makes you pay attention to a lot more up front (`Result`s being a prominent example, although using `.expect()` or `.unwrap()` actually helps a lot), and even though I absolutely love that you have to iron-proof your code, it does get in your way when you just want to try something quickly. It kills the motivation intrinsic to the flow state.)

---

(Original reply:)

I'll admit it's not a factual, clear-cut distinction.

OCaml has warts (no multicore, although that's claimed to be very close to completion now; also no UTF-8 strings by default, and somewhat surprising tooling, the first time at least), but I found Haskell to ask too much of me to even get started -- I almost immediately grokked monads and lazy evaluation, but it seemed too much effort for something you'd like to quickly play with. And when I saw the list of all the compiler variants, I almost gave up right there and then.

OCaml, as mentioned in the parentheses above, isn't without problems either, but I found that I could get to a working dev stack, and to the ability to write a quick program and run it, much more quickly than with Haskell. Also, it's very terse.

Not the most compelling or factual of reasons but this is how I arrived at that place where I prefer OCaml for sketching ideas and not Haskell.


OCaml is a much smaller language than Haskell.

Similarly, Erlang (and thus Elixir, I guess) is also comparatively small and simple.

Surprisingly, C would be in the same category, if it wasn't so primitive and likely to make you shoot yourself in the foot.

I've programmed professionally in both OCaml and Haskell (and Erlang and C etc). I find that Haskell is generally more terse than OCaml.

They are both fine languages. Subjectively, Haskell feels a bit more fun to me. But if you want to really get into it, it helps to have some people around who know what they are doing.


Interesting how different experiences with the same thing can be! :)

> it helps to have some people around who know what they are doing

I think that's the crux of my issue. OCaml wasn't the easiest to start with either but it took me 2-3 casual evening sessions to just be able to sketch code and run it immediately with almost no hassle.

But yeah, can't deny Haskell kind of rubbed me the wrong way so I am likely biased now.


It does feel fun to bust out a few Project Euler challenges with OCaml; and as a reasonably experienced programmer you should be able to do that after those 2-3 casual evening sessions.

It's been a few years since I last did OCaml. At that point, the ecosystem was a bit more mature for Haskell. More libraries, better editor support, etc.

In some platonic sense I like small languages, but I had come to appreciate all the extra comfort Haskell provides. OCaml was just too much in the uncanny valley of being almost Haskell, but then missing some nice-to-haves. I might grumble, but OCaml is a pure pleasure compared to C++.

Btw, they finally added (optional) stacktraces to Haskell a few years ago.

About all the language extensions in Haskell: I like them, but they can be overwhelming. Luckily, the language extensions your libraries use almost never have any bearing on which language extensions your client code uses.

So you can ignore the extensions, and still use your favourite libraries.


Seems likely Haskell will get stack traces by default fairly soon.

GHC already gives you stacktraces when you compile with profiling on. You say they are going to turn them on by default? Interesting!

Many thanks for the explanation!

Like every programming language out there, C++ is a tool. And like every tool out there it has its uses.

There isn't any point of using C++ to count words in a text file. Any high-level language like Python will beat you to it. However, there's one thing you can do in C++ and not in Python or JavaScript or PHP: fully control the memory layout of your data.

While you don't need it in most of the cases, it becomes a killer feature when directly dealing with large amounts of data:

* Try implementing an on-disk hash table in Python and you're stuck manually packing ints and longs into and out of arrays. In C++ it could be a simple template used with a memory-mapped file.

* Try doing anything non-trivial on a microcontroller with 32KB of RAM. You could theoretically use a higher-level language, but you will end up using >10x amount of RAM.

* Try designing an application-specific data structure in any other language. Let's say you have an ~8GB in-memory database that slowly adds records one-by-one and then invalidates them in chunks. A C++ implementation will rip anything else to shreds. You just won't get the same speed and memory efficiency.

What this means in practice is that unless you have an existing project that uses C++ anyway, you want to partition it: do the memory-critical part in C++, and communicate with it from a higher-level language via a high-level interface. You get performance where you need it, and peace of mind everywhere else.


> There isn't any point of using C++ to count words in a text file. Any high-level language like Python will beat you to it.

People using C++ usually care about programs finishing before the heat death of the universe. I had a few Python scripts that spent several seconds parsing larger files; after a rewrite to C++ that changed to almost instant.


> I had a few Python scripts that spend several seconds parsing larger files, after a rewrite to c++ that changed to almost instant.

It depends on the task. If the parsing is done once a day then several seconds is nothing, so there is no point in using C++.

If the parser is running very frequently then of course one uses a faster implementation.


In my case it led to the reverse of that reasoning. The script ended up being an explicit step to import the data when it changed, instead of running implicitly whenever the data was needed. Of course, that choice was based on the initial version, which was even slower and didn't use multiprocessing to handle the files in parallel.

It depends on what you're doing, of course. It can even go the other way [1].

Generally, languages like C++ are obviously much faster, but a lot of primitive stuff in languages like Python is also just C calls and may be optimized to the point of being faster, because doing things in low-level compiled languages often comes with its own tricks and problems.

[1]https://stackoverflow.com/questions/9371238/why-is-reading-l...


To be fair, C++'s standard I/O library is really bad. There is probably an almost universal consensus in the C++ world that the locale library is the worst standard library, immediately followed by iostreams. For small programs that's annoying, although using libc IO is usually fine for them. However, C++ is probably one of the top 3 languages for developing large applications (Java would also be there, not sure which one is the third in that pack) and for these having to roll your own IO is usually no biggie, because it probably would have been done anyway (e.g. even if iostreams wouldn't be so bad, it would likely have no place in the IO system of a content creation app or being used in HDF5).

In that sense, C++ is not batteries included, which can be annoying for small and medium projects.


In terms of large projects, C++'s role is very much shrinking. C++'s last major strength is being cross-platform, but new platforms keep excluding it. You can't program iOS apps or client-side websites with it. Microsoft actively discourages using C++ on Windows, and C# took over for large projects.

Essentially C++ and the modern ideas about computer security are at war.


For data-intensive applications, which are only increasing, C++ is the only game in town. State-of-the-art architectures typically rely on schedule-based safety models that are not productively expressible within Rust's ownership-based safety model, and the performance characteristics of working within these respective safety models is not comparable. There aren't many alternatives when a GC language is a non-starter due to the adverse impact of a GC on throughput. C++ is winning for data-intensive apps not because it is great (it isn't) but because there isn't a practical alternative.

I've seen Rust being used for the layer above C++, where it doesn't have to deal with the I/O, scheduling, etc. that don't play well with Rust, but you still want the throughput/latency and memory safety.


Twitter, which is at the outer edge of throughput, uses Scala and Java. Just about any language choice is viable for what most companies consider high-throughput computing. For companies spending 8 figures on hardware it's something to consider; otherwise it's basically irrelevant, IMO.

Hell, highly optimized numerical simulations are still being written in freaking Fortran.


What exactly do you mean by "schedule-based safety model"?

Many safety issues in complex compute environments are predicated on the notion that many different threads/contexts can manipulate the data structures in somewhat arbitrary and unpredictable orders. This is approximately correct for classic multithreaded software.

In some software architectures, there are schedulers with a global view of the entire (potential) conflict graph and they have complete control of what gets executed when. These architectures don’t even require locks because the scheduler has enough visibility and control to guarantee that execution won’t be scheduled such that there would ever be a contended lock or some other concurrency conflict. No amount of mutable references to the same memory will break these models, and the correctness of some implementations have been formally verified. The scheduler can always dynamically reorder execution to guarantee the invariants of the system. These models have the added benefit of having insanely good locality properties such that throughput is excellent.

These software architectures originated in HPC over a decade ago and eventually bled over into high-end database kernels. I learned the approach when I worked in HPC many years ago and have used it ever since, due to its unambiguous advantages.


Thanks! Sounds interesting, but it's not obvious to me that such a framework can't be captured in the Rust type/ownership system in a reasonably ergonomic way. Has anyone even tried doing that?

For example you might be able to wrap shared-mutable state in a kind of degenerate mutex and pass a "scheduler guarantee" token around that unlocks those "mutexes" without doing any runtime work.


Do you have any references to HPC systems that exemplify this approach?

What a load of nonsense...

1) Neither iOS, nor "the web" are new platforms.

2) Not only can you program with C++ on iOS, it's heavily used in certain kinds of applications.

3) Why would anyone single out a specific language and claim that it's a problem that you can't use it to build websites with it? This applies to all languages with one exception. Or it did, because now there's WebASM, which makes this argument even more bizarre.

4) C++ had and has first-class support on Windows.


C++ is 35 years old, significantly older than client-side web programming. iOS apps are only 12 years old. So they both showed up well after C++ was a mainstream language.

2) Objective-C++ may look like C++, but they're different languages. As always, beware of edge cases.

3) WebAsm is somewhat "supported" by 92% of web browsers, which sounds great but isn't enough to be viable for companies like Amazon. But hey, it's a cool toy if you don't have real work to do.

4) I have run into plenty of missing C++ documentation when C# documentation was available on Windows. It’s supported sure, but very much a second class citizen.


Modern C++ is actually encouraged on Windows, with Microsoft shipping standards-compliant compilers and supported frameworks like C++/WinRT for WinUI 3.

https://docs.microsoft.com/en-us/windows/apps/winui/winui3/x...


> Microsoft actively discourages using C++ on Windows,

Really? That's the opposite of my impression. It seems to me that they promote it as co-equal to C# as primary focus in their constellation of supported and encouraged application programming languages.


Just thought I'd post to say I think you are spot on with this comment.

> but a lot of primitive stuff in languages like python is also just C calls and may be optimised to the point of being faster

That assertion makes zero sense if we take into account that parallelization is one of the most basic performance techniques there is, and Python's GIL simply eliminates that option except for multi-process applications which then require serializing stuff back and forth.

Python is pretty much unparalleled as a glue language, and excels at putting together small exploratory scripts for data analysis and number crunching applications, but it makes no sense to present Python as a performant alternative to C++.


> That assertion makes zero sense if we take into account that parallelization is one of the most basic performance techniques there is

I would not support that stance. There are areas where parallelization is helpful, like numerical weather models, graphics, and scientific number crunching, but in by far the most cases, parallelization is anything but trivial and requires pretty deep knowledge about things like the C++ memory model, what read-write barriers, locks, and fences are, and so on. Less than 1% of C++ programmers can really handle that. And apart from that, this is an area where other languages like Rust, Clojure, and Scala have a strong point, because whether it is done correctly is completely implicit in a C++ program, while in Rust many such errors simply mean that your program does not compile. Debugging the same mistakes in C++ will make you pull your hair out.

In fact, I'd urge anyone who wants to learn principles of safe concurrent computation in C++, to learn Rust, Clojure and Scala first, they are wonderful languages with strong, highly coherent concepts, and it will make you a much better C++ programmer even if you never use them again.

Of course it is not only possible but common practice to implement parallel computation in games and such in C++, but this is not the standard application of C++.


It's funny that you mentioned games as an example of parallel computation, because I'd argue they're some of the hardest programs to parallelize effectively, since they don't generally involve that much bulk processing of read-only data.

Parallelism in C++ is most often used for scientific applications and other forms of mass number crunching. It's really easy to just throw a "#pragma omp parallel for" on a loop and call it a day, but of course that also applies to C and Fortran and is somewhat limited. Parallelism libraries like Intel TBB, which I'm most familiar with, are very easy to use and performant. I think there's a large problem in the reluctance of educators to use libraries to teach parallelism; people always dive straight into locks, threads and atomics, which are really not the way to approach parallel computing unless you're looking to implement parallel primitives yourself (i.e. a DIY tasking system or lock-free queue).

Focusing on TBB, it facilitates efficient parallelism by providing high-level canned algorithms such as parallel_invoke, parallel_reduce, parallel_for and parallel_do which anybody who claims to know C++ should be able to use easily. It also provides a task-graph which is great for more complex processing pipelines (things like join/split, fan-in/out and queueing). If you need more low level control you operate at the task level and TBB provides customization points for that. There's other libraries out there which provide similar functionality and even the STL in C++17 provides basic parallel algorithms such as transform (equivalent of map in other langs), reduce and many others.


> Parallelism in C++ is most often used for scientific applications and other forms of mass number crunching.

There are two aspects. I agree to the aspect that today, C++ is used often for such tasks.

Now, what are the reasons why C++ is used dominantly in this domain? I think more or less the only reason is performance. The good performance is what makes the authors of such libraries put up with the disadvantages of C++.

However, I think it is also true that Rust allows for a more concise and safe formulation of the computation (also Scala, for example, which serves somewhat different purposes). For scientific applications, correctness matters, and knowing that code which compiles does not have hidden memory errors and data races is extremely valuable, because it can save a ton of time.

Also, if there are still differences between Rust and C++ performance, they are minor. In many cases, Rust is faster.

Now, if authors of scientific computing libraries were to come to the conclusion that there exists an alternative which produces at least as fast code, provides better and safer support for parallelization, and is, with some learning, easier to work with, why should these library authors continue to use C++?

Of course, nobody is going to ditch a large C++ project like Eigen overnight and rewrite it in Rust. There is too much inertia for that. Also, GCC support is still lacking. However, one can expect that the number of new projects which use Rust is going to increase, and the projects which are successful there will blaze a new trail. For something like Python extension modules, users of these libraries do not need to know anything about Rust.

Also, some nitpick. C++ is used in important scientific libraries. However, many essential libraries such as Numpy are written either completely in C, or use C interfaces, because C++ does not have a stable ABI and Python uses the C ABI. This would make a switch to Rust pretty easy. In fact, I think the impulses in this domain will come first from researchers and analysts who start to write small Rust extensions for Python which use the C ABI and integrate with Numpy, for example.


The overwhelming majority of C extensions release the Python GIL during calls to them. If you're in the situation described in the parent comment then you can parallelize just fine using threads.

> The overwhelming majority of C extensions release the Python GIL during calls to them.

In that corner case you still have slow Python glue code calling fast C++ code.

It makes no sense to claim that Python is performant based on the idea that it may be used to glue together calls to performant C++ code, while ignoring the fact that not only does Python force performance restrictions on its own code, but also that C++ code is quite capable of calling performant C++ code itself.


The comment that started this discussion is about the specific case where most of your execution time is in libraries rather than your application code. In that case, Python is usually no slower to execute than C/C++. You disagreed with that by claiming that such programs cannot be multi-threaded because of the GIL, and I just corrected that because it's untrue (edit: it's untrue in general, but especially untrue in the situation we're talking about).

To be honest I'm not sure what your new comment is about. If you're saying that Python is not necessarily faster than C then I'm sure no one is going to dispute that; in the situation we're discussing, the performance of your code is totally dependent on the efficiency of the libraries that are doing all the work, not your glue code. If you're saying that Python is slower than C when it's doing a non-trivial amount of the execution, then sure, but no one was claiming that either. Or maybe you just want to simplistically categorise languages as "performant" and "non-performant" without thinking about the specific contexts they can be used in, but that wouldn't make any sense. Did I misunderstand?


Python and C++ are joined at the hip now primarily due to scientific computing and AI/ML adjacent fields. I feel the growth of Python has helped C++ grow too.

>I had a few Python scripts that spend several seconds parsing larger files, after a rewrite to c++ that changed to almost instant.

Was it worth rewriting those scripts in C++? I'm not trying to be facetious; a few seconds (or minutes) of additional runtime for scripts you hardly ever run, or run only once, doesn't really matter, but scripts you run often may well be worth the effort to convert to something faster.

I regularly find myself porting regularly used python/js scripts to C#/dotnet (because it's fast enough, has good collections + LINQ for your somewhat-functional-programming needs and I can skip manual memory management, and async/await programming is not perfect but available and easy enough, + nuget has a lot of stuff and so does the standard lib).

What made you pick C++ over e.g. go or rust or C#?


> Was it worth rewriting those scripts in C++?

Mostly for development, I ended up invoking the script quite often and had to wait for the results. In production the input it ran on only changed once a week, so there was no need to run it often.

I most likely tried to run it with PyPy first, though I am not 100% sure on that. If I ran into a similar issue today I would first try to compile it with Cython, as I already try to use type annotations in every new script I write.

> What made you pick C++ over e.g. go or rust or C#?

Most of the project was already C++ and I only had to link in an Xml library for some of the output.


You just compared C++ to the slowest popular language out there. It would be a much more fair comparison if you used pretty much any other high-level language.

Since you can get within about 2x of C++ speed while still using a high-level language, the times you really need C++ for performance reasons are quite limited. I'm not saying they don't exist, but they are rare.


You list some good use-cases for C++, but the other side of the coin is that there are languages out there nowadays that are better than C++ for a large majority of the use-cases where it fits best (Rust being the obvious example).

Yes, there are surely still a few cases where C++ beats Rust on raw merit. And of course C++ still has the advantage in the number of developers who can competently contribute to your project in short order. But both of these advantages are diminishing rapidly, and—to me at least—the writing is on the wall for C++. Its days are numbered.


> * Try implementing an on-disk hash table in Python and you're stuck manually packing ints and longs in and out of arrays. In C++ it could be a simple template used with a memory-mapped file.

This is only simpler if you don't need to handle read failures. If you want to handle read failures, memory mapping can quickly become way more complicated. You have to install signal handlers and do a complicated dance to correctly handle not having data in some range.


Indeed. Memory mapped files are the bubble sort 2.0.

They look simple and approachable on the surface, lead to elegant code, and seem to solve the problem in basic test cases, but in practice they have nasty scalability problems. In particular, trying to work with very large files leads to cache thrashing that affects the entire system, which makes mmapped files particularly unfit for implementing off-memory data structures. Moreover, the code has zero control over this behavior, because we ultimately over-allocate memory and outsource the swapping details to the OS.


> There isn't any point of using C++ to count words in a text file. Any high-level language like Python will beat you to it. However, there's one thing you can do in C++ and not in Python or JavaScript or PHP: fully control the memory layout of your data.

Any books or resources for digging deeper in these kinds of topics? I’m a Python developer and would like to learn C++ but the just the reasons you mentioned make it hard to leave Python.


you can malloc() to get a memory range, or mmap() to get a disk-backed memory range with kernel-managed paging, and then you can do whatever you want with it.

You get a pointer to the start of the range, and you know how large it is, so from there it's just pointer arithmetic and assignment. Easy peasy. Now you just have to make sure that you zero data when you're done with it, track how much of that memory is available or used (and which sections), and (optionally) implement your own defragmentation and garbage collection.

Unless you desperately need to specifically manage the memory layout of your data (e.g. for alignment purposes for massive data processing), none of this is worth the time or trouble to do for most projects, not only because it's a huge amount of work for little gain, but because you'll almost definitely screw it up and potentially corrupt your entire data set.

Note that, at least for the on-disk hashmap set, you absolutely can do this in python, so if you're interested in experimenting with this sort of data management, practice it there first.

A good place to start: https://github.com/luispedro/diskhash


> you can malloc() to get a memory range, or mmap() to get a disk-backed memory range with kernel-managed paging, and then you can do whatever you want with it.

No need for that, as C++ provides placement new().

https://en.cppreference.com/w/cpp/language/new


This doesn’t really solve the same problem, even though the two often go hand-in-hand.

It is really not that much work and there are plenty of libs that can do it for you if you do not want to.

To get a quick start, try declaring some class variables and then open the Memory window and go to "&variable_name". You can also evaluate "sizeof(variable_name)" to get its exact size.

You'll see a bunch of bytes reflecting how exactly your variable is seen by the CPU. That's exactly how many RAM bytes it occupies. It can be quickly copied to a new location, limited only by your RAM bandwidth. It can be dumped to disk and loaded later, limited only by your disk bandwidth.


How does this help?

Python developer here just switched jobs to one that does C++.

Let me warn you right now. It's not worth it. At all. Don't go down this path. Memory management is easy, but that's not the only crap you have to deal with.

Templating and the whole C++ build ecosystem, with CMake generating Makefiles, is a giant headache. It takes about a week to integrate a new C++ library into source code, and people are worried about installing or "trying out" new libraries because of memory leaks... while in Python I'll install an entire library in seconds just to try out a function.

I would say if you made an app in python and the same exact app in C++, the C++ app would perform waay better. No question. But the developer experience and effort required to make that same app is much much more painful.


> while in python I'll install an entire library in seconds just to try out a function.

But... surely you are aware that the memory-leak concern absolutely does not disappear? You just choose not to care about it in the Python case for "reasons" - likely you've never been hit by a system which appends to a dynamic array without ever emptying it, and that can happen in every single language in the world.


No it doesn't... but it's far less likely to occur. Web development practices usually avoid this problem as state is handled externally.

lol, that's not a C++ problem. C++ is a different language than Python. If you don't have experience with C++, stop complaining about it. And regarding developer experience, it doesn't matter if end consumers are going to get a slow bloated app which needs a Python runtime to be installed lol.

I do have experience. It's my main language and my current job. C++ really sucks to work with for a developer even with types.

For building cloud services where apps are bottlenecked by the database, the user experience is roughly the same as C++, but the developer experience with Python is much, much easier.

This attitude where consumers have to run apps or install a runtime on their personal machines is much more rare nowadays.


The C++ type system is pretty weird, to be honest. It's overly verbose, and at the same time too loose and too limiting.

So how do you deploy your python app to your customers?

You deploy to the cloud. The customer downloads a web page and that functions as a client that communicates with the python server. Just like how most programming is nowadays.

But if you want a customer to download an app, which is much more rare nowadays, then you can do what Eve Online or Iron Mail do. They deliver actual client-side Python apps. I don't know what they use internally or how they did it, but PyInstaller is one option you can use to build a Python executable.


It just sounds like you are working on a big messy project.

Nah it's a big unicorn startup down in Irvine, so the codebase is still relatively new. They use C++17 so pretty modern as well.

Why not just fuzz the library if that's a concern?

The effort required for this makes it not worth it.

If I want a library in python it takes about a second to install with one command, and these libraries tend to be safe so you're not going to get any resistance if you want to try out 10 libraries in the next hour or so, which is sort of impossible with c++. Also these libraries are easily uninstalled and isolated from other python apps you have. No need to use massively complex package managers like nix to handle isolation.

Fuzzing a library just isn't something that's done in python. The developer just installs it and manually tests it 5 or 6 times in a shell and it's part of the ecosystem in 5 minutes.

If you accidentally leave unused libraries in your code and it gets pushed to prod it's a non-issue.

I switched to c++ because I was ok with the verbosity and also the complexity and also dealing with memory. But literally every single corner of the c++ ecosystem is a pain in the ass to work with. It's getting to the point where I'm just annoyed all the time. If you want high performance apps go with rust or golang. C++ is just not worth it.

For a C++ dev moving to python is like night and day. Imagine a programming environment where you only have to deal with the intention of what your app should do and not all the yak shaving that comes with it on the side. I know some and they tell me they would never go back to c++.


Yeah I wouldn't use C++ for building web applications.

Which is the majority of applications nowadays.

You would use C++ for a service or very focused component that does one thing well (like a database, or crunching numbers). For those, even Rust is not always a good answer.

You shouldn't use C++ as the backend for your marketing site, or some simple crud backend where a JIT runtime with GC will do.

Hell, if you're even building a small compiler today a JVM language or Go is a better choice.

But if you need to allocate memory yourself, C/C++ is very hard to beat. Games and systems dealing with huge chunks of memory will always go this route... and just (most likely) copy the dependencies into version control and deal with things that way.

It doesn't matter if the majority of applications are web applications. That doesn't make C++ less relevant because that was never the target audience.


Kill two birds with one stone by learning more about the structure of the Python interpreter and builtins implementations by reading the source.

Not an exact reply... but in Python, dealing with binary data is very inelegant and unintuitive. Simple things like packing, forming and reading binary packets seem very convoluted in Python.

From memory, a thing I've implemented in C++ that is basically impossible in almost any other language (with few exceptions).

Represent a data table in memory where the columns are of different types, where the data is backed by an arena allocator and represented contiguously in memory.

Of course you can do a data table representation in almost any language, but once you benchmark the implementations, the languages that give you full control of memory and are good at moving bytes are the ones that lead in performance.

Lars Bak once said that C++ is a great "byte chunker" (if i recall the term correctly), explaining why its a good language to create a compiler or a VM in it.


Sounds like Rust should handle this case just fine?

Yes, Rust is one of those languages now. (C, C++ and the newcomers Rust and Zig)

> Of course you can do a data table representation in almost any language, but once you benchmark the implementations, the languages that gives you full control of memory and are good at moving bytes are the ones that lead in performance.

The thing is, once you need even more performance, you might need parallel evaluation. And in C++, there are plenty of opportunities to shoot yourself with which even very experienced programmers have sometimes extreme difficulty doing it correctly:

https://www.cs.umd.edu/~pugh/java/memoryModel/DoubleCheckedL...

https://preshing.com/20130930/double-checked-locking-is-fixe...

And this is the area which Rust, while being as performant as C++, is heavily improving on.


fyi, you can read mmap files in Python too:

  import ctypes, mmap  # Unix, Windows; YourStruct must be a ctypes.Structure

  with open('c_structs.bin', 'rb') as file, mmap.mmap(file.fileno(), 0, access=mmap.ACCESS_COPY) as buf:
      result = (YourStruct * 3).from_buffer(buf)  # without copying
https://stackoverflow.com/questions/17244488/reading-struct-...

> [...] fully control the memory layout of your data.

Is that actually guaranteed by the standard?

> * Try doing anything non-trivial on a microcontroller with 32KB of RAM. You could theoretically use a higher-level language, but you will end up using >10x amount of RAM.

It's a shame Forth never took off. Sometimes I wonder what it would have been like to live in an alternate universe where Forth had been the lingua franca instead of C.

(Ie imagine that universe's equivalent of Unix being implemented in Forth as its killer application and going from there.)


> Is that actually guaranteed by the standard?

Kinda sorta. Technically, the program runs on an abstract machine, and the compiler is free to do whatever it wants that has the same observable behavior as the code you wrote. But this is not really a problem in practice: You will get what ask for, or occasionally the optimizer will give you something even better.

If you really need to ensure that you have exactly the layout you ask for, you could use "volatile" which tells the compiler that some region of memory is "special". Storing stuff there could have side effects. The memory could change at any time. This will force the compiler to follow your instruction verbatim. But it's not really advised since it will read the data from memory every time even if it's available for quick and easy access in a register - the memory could have changed, better read it again!

One important thing that is not possible to express in C++ is that some value, e.g. a password, should be purged from memory. A stray copy of the password in some random location of memory is not observable behavior, so the compiler is free to create such a copy if it wants to, and equally free to optimize away your attempt to erase it.


What is so good about forth?

It's rather simple, can easily run on bare metal (ie no OS).

You can go very low level, but it has better higher level abstraction capabilities than C.

It's also really weird, which fascinates me. Eg most Forth code doesn't use variable names.

I've toyed around with Forth a bit. But never used it in anger.

Wikipedia says that Forth is used today in some bootloaders and outer space. I suspect the latter is a historic accident, because the inventor of Forth worked on a big telescope.

See https://en.wikipedia.org/wiki/Forth_(programming_language)


Ahh. I remember. I always get fascinated at first, but when I see 2 3 4 + * meaning 2*(3+4) my eyes glaze over.

Thanks to a good amount of time spent with Lisp, I don't mind either postfix or prefix notation that much.

Aren't you describing compiled languages with manual memory management vs interpreted languages with automatic memory management? Not merely C++ vs python or js

Of course, it's not inherent to a language whether it's compiled or interpreted. That's just an implementation detail.

You can totally write a C++ interpreter. And there are Python and JavaScript compilers.

C++'s memory management isn't even necessarily particularly manual. Nowadays people use reference counting in C++ occasionally. (They call them 'smart pointers' I think.)


Just because there's a tool in a niche doesn't mean it's the best one. I could do all those things you listed in Rust with much less headache.

plus rust type system etc are just wow. And mostly if it compiles it works :)

> There isn't any point of using C++ to count words in a text file. Any high-level language like Python will beat you to it.

Well, I think that Python will probably do that in the shortest code but it is also more or less the slowest language which is today widely in use, at frequently about 1/50 the speed of C.

You of course could write the example in the link in Rust but for a quick script with decent performance, at around 1/6th of the speed of C, I'd probably use Racket. Racket is really a pretty sweet spot between performance and expressiveness.

> C++ is a tool. And like every tool out there it has its uses. [ ... ] While you don't need it in most of the cases ...

The thing is that the space in which C++ is really useful and the best solution is shrinking more and more. If one wants to control the hardware registers and write a kernel, I think C is still best. If one aims for safety and performance equal to C, Rust is by now a good choice. For example, in the task given in the original article, it would correctly process Unicode without any extra effort. It is of course complex, but much less so than modern C++. If one wants to get things done quickly, Python has its place, but I think Racket and similar languages (babashka) have many advantages. If things get more abstract, OCaml might be an interesting choice. And I think for parallelism on the server, Clojure and Scala are pretty good.

What happened is that, once upon a time, programs in C++ or C were far far faster than anything else, and Java programs were as slow as a slug. But, compilers got better, and this has changed. And also, modern software development is source-code centric, not based on compiled Windows DLLs, much of the used code is in libraries on Github and elsewhere on the net, and it turns out that for this kind of code reuse, functional languages are often a better fit. Now, C++ is still competing with this and adopting functional features, but the result is getting more and more complex. At the same time, languages like the mentioned ones are getting better and better compilers, and they are conceptually far far simpler. And the speed advantage of C++ is shrinking steadily. In fact, by some comparisons it already does not exist any more:

https://benchmarksgame-team.pages.debian.net/benchmarksgame/...

https://benchmarksgame-team.pages.debian.net/benchmarksgame/...

https://benchmarksgame-team.pages.debian.net/benchmarksgame/...

https://benchmarksgame-team.pages.debian.net/benchmarksgame/...

> fully control the memory layout of your data.

This is, looked at closely, not more than a means toward an objective, and that objective is performance. Control over the memory layout is normally not a value in itself.

> * Try implementing an on-disk hash table in Python and you're stuck manually packing ints and longs in and out of arrays. In C++ it could be a simple template used with a memory-mapped file.

This can easily be done if you use LMDB. In fact, using it might be faster and LMDB allows for safely accessing the data from multiple processes. LMDB is written in C and of course there exists a Python library for that.

> * Try doing anything non-trivial on a microcontroller with 32KB of RAM.

You can use uLisp on the Arduino:

http://www.ulisp.com/show?1LG8

See also https://news.ycombinator.com/item?id=11777662

> * Try designing an application-specific data structure in any other language. Let's say you have an ~8GB in-memory database that slowly adds records one-by-one and then invalidates them in chunks.

Sounds perhaps like a use case for InfluxDB, which is implemented on top of LMDB, by the way.

> What it means in practice, is that unless you have an existing project that uses C++ anyway, you want to partition it: do the memory-critical part in C++, and communicate to it from a higher-level language via a high-level interface.

That's of course possible and is arguably often done that way. But there is no necessity to use C++. You can do the low-level, performance-critical part which needs some control over the memory layout in Rust, and do the rest in Racket, for example. The functional nature of Racket will make parallelism in the Rust part easier. I've tried that and it is really really neat.

Just to say, I've been using C and C++ professionally since, well, now 25 years. And this along with, for example, Python and IEC 61131-3 Structured Text, which is used to program PLCs. Of course, C++ is entrenched and will continue to be used for a long time. There is also heavy investment and interest in industry. But, it is not as essential as C for the hardware access part and there are today a number of interesting other options. I think C will be used for a loooong time from now, but I would not bet my house that C++ is being used as widely 20 years from now.

(The issue with Python is that it is easy to get started with it, and write small personal projects in it, but for the case of programming reliably in-the-large, and using a lot of dependencies, it is showing its deficiencies).


I feel like the fastest growing and most dominant area for C++ is scientific computing and AI/ML. Although many people write these programs in more user-friendly languages like Python and R, most of these programs call through to C++ frameworks (which many people use directly too), so in a way C++ has latched itself to the growth of such languages. Furthermore many major accelerated computing platforms (frameworks?) such as CUDA, Intel's oneAPI and Khronos' SYCL are focused on and committed to the C++ language and ecosystem and I don't really see much competition in this space for the time being.

To avoid double-posting the same content, please see my other reply here on about the same topic: https://news.ycombinator.com/item?id=24821043

Wouldn't C be a better fit for these tasks?

Given people (including in these comments about string splitting) often complain about C++ not having "batteries included" for everything, surely C would be even worse in that regard...

Personally, I think that the reason for these complaints is that C++ tries to look like a "batteries included" language that is suitable for tasks that high-level languages are typically used for, and falls short. C, on the other hand, is pretty obviously low-level. C is honest in what it is and what expectations you can have from it. It won't tempt you to rewrite your whole business logic with it — instead, you would write a small library or service, that would only take care of the memory and CPU intensive task that you need it for, and leave the rest of business logic written in language that really suits it.

Actually C is more batteries included (or accessible...). Since the C ABI is stable, you can easily use external libs. For C++ you need to either compile those yourself or hope they are precompiled for your exact compiler version.

>Like every programming language out there, C++ is a tool. And like every tool out there it has its uses.

And as with every kind of tool, some tools are crap and other tools are good. I'm not saying C++ is crap, but I'm definitely saying that it is unwise to use this analogy to believe that there are no crap tools out there.

There's this thing that happens among programming languages where everyone likes to think all programming languages are just apples and oranges and everything is equal.

The truth about the world is equality is rare... some things are good and others pure liquid shit. Programming languages are not immune from this property of the universe.

Now about C++... What you say about it is true. I completely agree. However, much of the good of C++ comes with a pile of crap that you have to deal with that's often not worth it for the majority of projects. In my opinion C++ is only good because there's no alternative unless you count Rust or D.


C++ fills a niche, and probably if you use it all on your own you don't even see the problems. But if you use it at scale with multiple developers you can see its layers of leaky abstraction, laid down since the 80s, that can never really be cleaned up because too much is built on this foundation.

This slightly over 100 page book on move semantics https://leanpub.com/cppmove shows some of the iceberg like complexities dotted all over the place. Some developers probably never use and have never even heard of move semantics.

I've been programming in C++ pretty much my entire career. Personal projects or small projects I'd use Ruby or Python first. I'd only ever use C++ (or a very specific subset of it) if I want to make something extremely performant.

C++ will eventually peak and die off, but the timeframe is probably measured in decades, and I'm sure there'll be places where it holds on even then, like Fortran today. I certainly wouldn't recommend it to a programmer starting out today; instead, C or Rust would be better places to get an idea of system-level programming.


> This slightly over 100 page book on move semantics https://leanpub.com/cppmove shows some of the iceberg like complexities dotted all over the place. Some developers probably never use and have never even heard of move semantics.

I think Scott Meyers' "Effective Modern C++", which is in large part a collection of caveats and descriptions of things that don't always fit together, also underlines that impression:

https://www.amazon.com/Effective-Modern-Specific-Ways-Improv...


Hard agree. I was enthusiastic about reading that book when I got it, as I'd been very late to the bandwagon of new C++11 and C++14 features.

I finished the book on a pessimistic note. It looked more like a cookbook of pitfalls _everywhere_.

Every new feature looked exciting, but came with a list of cases where the language decides to leave you on your own when it gets too uncomfortable (the "well that's undefined behaviour, I'm sorry, you suck! bye!" escape hatch). So in the end it's like you said: a new list of gotchas to learn by heart.


I think the issue in Scott Meyers' case is that he basically never used C++ in production, which in some sense is OK, since he makes his living as a language expert, but sometimes this shines through; I also remember from a D conference talk that even he got sick of it and switched to D.

Also let me use this comment to recommend John Lakos, his talks, his books. He knows how to build large-scale applications, and if you saw his talk on the 'is_working_day' function (if I remember correctly) you know you were not overthinking.


Thanks for the recommendation, I'll have a look at his work, for sure.

Also I'd be curious to see the talk you mention; do you remember anything about it that could help in searching for it?


I think same talk but different recording: https://www.youtube.com/watch?v=MZUJsCh1wOY I think around 1:08:30 onwards (maybe even before) the example I had in mind.

For me, that disillusion has been one of several steps.

Scott Meyers book was one point.

Then I started to learn Clojure, motivated by the idea that we need better concepts for the upcoming massively parallel hardware (a big influence for me was the article "The free lunch is over" by Herb Sutter: http://www.gotw.ca/publications/concurrency-ddj.htm). I now think that immutability by default is clearly the better way to go, even in close-to-the machine applications like embedded devices and industrial control applications.

Then, I saw this article on the different options and syntax for initializing variables in modern C++: http://mikelui.io/2019/01/03/seriously-bonkers.html

And I was like, no, this can't be serious. I think this was the point where I began to distance from the language (though I still use it at work when I need to).

I got also the impression that the actual language use in C++ is undergoing a serious split. Compare the C++ core guidelines with Google's C++ style guide:

https://isocpp.github.io/CppCoreGuidelines/CppCoreGuidelines

https://google.github.io/styleguide/cppguide.html

This does already look somewhat like different languages, only that they can be compiled with the same compiler.

And then, the C++17 and C++20 standard iterations, at this point it is just like "this is too much! What is this good for?"


My entry into professional C++ programming was accompanied by Scott's earlier books. The moment I gradually began to refrain from using the language as my main tool coincides (coincidence?) with his retreat.

> if you use it all on your own you don't even see the problems

100x this. If >90% of your serious coding is in one language, you'll become desensitized to its bad parts but you'll be hyper sensitive to anything that seems inconvenient in any alternative. I've seen this over the years with Fortran and C coders. C++ coders do it to C all the time. Lisp coders are notorious for it. It's Selective Perception 101, counting the hits and ignoring the misses. We're all prone to it, and have to correct for it.

If you really want to compare any two languages, you have to give both a more than casual try, and keep an open mind throughout. Otherwise your conclusions are likely to be unhelpful at best.


One can even think of long compile times as a feature (more time for coffee breaks), or that nightly builds actually sound cool.

FORTRAN is still quite popular in many fields of scientific computing and C++ hasn't become ubiquitously mainstream so there's probably quite a lot of potential here.

> C++ will eventually peak and die off, but the timeframe is probably measured in decades, and I'm sure there'll be places where it holds on even then, like Fortran today. I certainly wouldn't recommend it to a programmer starting out today; instead, C or Rust would be better places to get an idea of system-level programming.

Unlikely. The only candidates for replacement are Rust and D, none of these are anywhere near the adoption of C++.

What C++ can do is get better as a language and then maybe give developers tools to phase out outdated constructs or relics from C. It is not going away because it's the best replacement for C humanity will ever get, at least for the next century.


> Unlikely. The only candidates for replacement are Rust and D, none of these are anywhere near the adoption of C++.

Neither C nor C++ have replaced Fortran in that sense, either. Yet, as far as I can tell, Fortran has peaked a long time ago.


C++ is a very poorly designed language. A lot of the work being done by the committees is fixing Stroustrup's mistakes.

Consider horrors such as this. In the following statement what are foo and bar?

   int x = foo(2) + bar(3);
Most people will say foo and bar are functions, and that's a reasonable guess. Unless you are talking about C++. In C++ foo and bar could be functions, but they could also be lots of other things. For example, it could be the equivalent of the statement below:

   int x = (foo)2 + (bar)3;
In other words, type casting can be written as (foo)2 or as foo(2), just to make it harder to understand what is going on. As another example, consider implicit type conversions:

   Foo b = c + 3;
If 3 is not compatible with whatever type c is then C++ checks if there is a constructor that accepts an int parameter, and it automatically calls the constructor to convert 3 to the appropriate type.

Much of this craziness is being addressed by the standards committees. C++ as designed by Stroustrup is a horrible language. Just look at his C++ book: it is huge and heavy, and in fact Stroustrup takes pride in how big and complex the language is. C is an awesome language. C++ is horrible.


This story about the committees fixing Stroustrup's "mistakes" is completely made up by you and is not even remotely true. Also, Stroustrup does _not_ take pride in how big the language is; the exact opposite is true. E.g. you will never ever hear Stroustrup speaking about template metaprogramming or similar weird stuff. He always tries to teach the practical side of C++. His C++ book is not big because the language is complex, but because it is a general introduction to programming with C++. In fact, if you want to learn the esoteric features of C++, you have to buy other books. Implicit type conversions are inherited from C, the language you called "awesome". Many arcane features of C are nowadays considered anachronisms, and no modern language design would repeat them.

> Implicit type conversions are inherited from C, the language you called "awesome".

It is true that C language has some very limited implicit type conversions. For example you can say:

   long n = 3;
And the integer 3 is converted to long. This is extremely limited, and does not make the program hard to understand, which is completely different from the craziness you see in C++.

C does destructive conversions automatically, if you ask for it. E.g.

  unsigned n;
  n = 2.99;      // n = 2
  n = -1;        // n = UINT_MAX (i.e. 2^N - 1)
  n = 1.0E33;    // undefined behavior, not caught
Both C and C++ follow the same philosophy: "Trust the programmer". The programmer is expected to use his expressive freedom, to solve performance problems, and to stay away from problematic constructs without being told to do so. C is a systems programming language from the 70's, designed to solve performance problems, and C++ inherits from that. It is true that in C++ you can overload assignment and addition and throw exceptions everywhere, thus making even simple expressions like

  auto n = a + b;
completely unpredictable. But the coder has simply abused the freedom C++ gave him. To some extent, the same is possible in C, look here https://www.ioccc.org/

Either you love C++ or you hate it. To tell the truth, I am also of the opinion that C++ is fucking crazy complicated. There are YouTube videos from leading experts explaining the usage of simple language elements at length; something like that is unheard of in modern languages. E.g. you are not expected to grasp overload resolution to the fullest extent: https://en.cppreference.com/w/cpp/language/overload_resoluti... Mastery likely takes years of experience.

> Both C and C++ follow the same philosopy: "Trust the programmer"

I disagree. I think ever since its inception from C, C++ has tried to increase type safety and continuously move more of the work to the compiler (the latest example is concepts). It does empower the programmer to do what they want: OOP, FP, GP...

The philosophy of the language is mostly about abstractions: abstracting objects, types, resource management, etc. Using modern C++ features and a recent compiler makes it harder to make mistakes. Using some generally accepted guidelines and static analysis tools helps even further.

Though the syntax is at times ugly due to the age of the language, and it has a lot of inertia that makes it hard to get rid of some bad design decisions like some of the defaults, unless someone recreates C++ with the same flexibility and power, and the same powerful compilers/tools, but with better syntax and defaults, I don't see the language going anywhere. And I am not holding my breath for another language to quickly reproduce the C++ ecosystem that has taken decades to develop.


The philosophy of C++ is about _zero cost_ abstractions.

> C language has some very limited implicit type conversions

What? C will happily compile this:

    void g() {
        float f = 3.14;
        int* ip = &f;
    }
C++ has no such _craziness_. Your strawman actually has pretty good uses:

    complex c = 3i;
    ... = c + 4;
Would you rather have the last line not compile because 4 is not a complex number? Because one _could_ argue that 4 is a complex number, and C++ can represent this with an implicit constructor.

The issue with C++ is that implicit is the default, not that it exists.


One of the main gripes I have with C++ is exactly what you said, that the defaults usually are the wrong for the usual cases and for general sanity.

With that said, C programmers also seem to love their implicit behavior so it's pretty clear where this kind of thinking comes from.


> C will happily compile this

Only if you ignore your compiler output: Even TinyCC will warn about that one on default settings. With the flags I use, the code would in fact fail to compile.


The second case is easily solved by having an overload operator+(complex lhs, int rhs). No need to convert anything.

This fails when you want `4 + c` or `c + 4.f` etc. So you either have a gross number of overloads doing the same thing, or you just put an implicit constructor to `complex`.

Also, I agree that implicit conversions in general should be avoided, but this complex number example feels like the perfect use case.


    complex c = 4;
Would you rather have this line not compile because 4 is not a complex number? Because one _could_ argue that 4 is a complex number, and C++ can represent this with an implicit constructor.

The issue with C++ is that implicit is the default, not that it exists.


Haskell solves this reasonably well.

If your data type implements the Num typeclass, you can use literals like 4. (A typeclass in Haskell is similar to what they call an interface in Java.)

There's no automatic conversion happening at all. It's done via overloading literals at compile time. (You can do the same for strings.)


C++20 introduced concepts, and now it's possible to specify type constraints, like integral, in a way similar to Haskell typeclasses.

What it doesn't solve is that by default the compiler tries to find an appropriate implicit conversion. Sometimes that's convenient, sometimes it's harder to see what the code actually does.


Yes, concepts help a bit, but I'm not sure whether they would be necessary here.

Yes, the implicit conversion for all values is part of the problem. The Haskell approach only gives you magic for literals.


"Most people will say foo and bar are functions and that's a reasonable guess. Unless you are talking about C++. In C++ foo and bar could be functions but could also lots of other things."

Same as in Lisp, one of the most beloved of languages on HN.

Were Lisp's macros a mistake? Some will say yes, but overall the language is more praised for them than damned.

So highly regarded are Lisp macros that many languages will eagerly proclaim that they have macros too -- even if they're not homoiconic and so harder to work with than in Lisp. Few if any are the languages that proudly proclaim that they don't have macros because they thought Lisp macros were a big mistake.

C++ fans themselves often point out that C++ templates can be as powerful as Lisp macros.

So which is it? Is giving users the power to radically transform their language a bug or a feature?


This is perhaps a misunderstanding. Lisp macros are a part of the language. Of course you can use them to make your code unreadable. But what they are used for is to evolve and add definitions for things such as iteration, which in other languages, like Python or C++ is not possible without a language change.

For example:

https://lispcookbook.github.io/cl-cookbook/iteration.html

https://docs.racket-lang.org/reference/for.html


I don't think lisp is that beloved here. It's just that the few people who love it love to bring it up.

Hacker News is written in a Lisp, started by people deeply interested in the language, and frequented in a large part by people who feel similarly.

I think that last clause is incorrect. It might have once been correct but I suspect most people on here have never used Lisp at all.

Well, constructors are not very different from functions, from the caller's point of view. Also, Python is much worse in this regard: any call can be monkey-patched at basically any point during execution, but people do not (usually) use this as an example of poor language design. You expect the programmer to use the language's features considerately.

BTW, in C++ every call is resolved at compile time, and the compiler could tell you what you are exactly doing at each line, including whether your examples are function calls or casts. If you dump the right table in GCC the information is there, although it's a pity there is no convenient way to access it. Cppexplorer helps a little bit, but I always find it difficult to understand. It's a problem of tooling, though, the language is well defined.


> If you dump the right table in GCC the information is there, although it's a pity there is no convenient way to access it.

CLion, ctrl+B.


>In other words type casting can be written as (foo)2 or as foo(2). Just to make it harder to understand what is going on.

Or to make it easy to describe high-level behavior where you don't care about the details. In your example, foo and bar could be some complex vector or matrix types with custom allocation/deallocation logic.

So this line:

  int x = foo(2) + bar(3);
Is equivalent to these 7 lines of plain C:

  foo tmp_foo;
  bar tmp_bar;
  construct_foo_from_int(&tmp_foo, 2);
  construct_bar_from_int(&tmp_bar, 3);
  int x = add_foo_and_bar(&tmp_foo, &tmp_bar);
  free_foo(&tmp_foo);
  free_bar(&tmp_bar);
Sure, if you don't know what foo and bar are, it's confusing. But if you want to decompose what you are doing from how you are doing it, this syntax rocks.

BTW, (foo)2 syntax is preserved for backward compatibility with C and is deprecated in favor of static/dynamic/reinterpret_cast<>() (that many people don't use due to clunkiness) and foo(2) is unavoidable if foo's constructor has more than 1 argument.


This weird ambiguous syntax exists to allow among other things templates that take either primitive or class types. It makes primitive types usable as if they are classes with copy constructors.

That being said, it can certainly be ugly. A lot of the ugliness comes from trying to maintain itself as a mostly superset of C. Technically it no longer is strictly that, but most C code will compile fine in a C++ source file.


> Consider horrors such as this. In the following statement what are foo and bar?

I don't understand your point. What exactly is the problem with knowing what foo and bar are? Even if you struggle with C++, any half-decent IDE lets you check the symbol and see what's being called with a single mouse click.

So what's exactly the problem?

> As another example consider implicit type conversions:

Again, what's exactly the problem? If you write down "c+3" it's your responsibility to know what you are doing and that "c+3" does make sense. And your example can only start to become a possibility if you explicitly and quite intentionally define your constructors to allow implicit type conversions, which is a direct violation of basic C++ best practices.

Adding to that, your convoluted example is nowhere near the mess that's JavaScript or Python, and somehow people manage not to screw those up even without any type checking.

So why single out C++ although it does provide all sorts of guardrails and sanity checks?

> Many of these craziness

You failed to show any semblance of "craziness" in any of the convoluted examples you presented. In fact, you only showed you struggle with a language you barely have any experience with, let alone a grasp on rudimentary best practices such as requiring explicit constructors. Thus, why pin the blame on the language instead of where it really belongs: your lack of knowledge and experience using the language?


If you mark the constructor as explicit, the automatic cast you mention does not take place and you'll get a compile time error.

The casting style can be enforced via linting. There are also C++ cast operators that in my opinion should be preferred to C-style casts: static_cast, dynamic_cast, reinterpret_cast, const_cast, static_pointer_cast, dynamic_pointer_cast, const_pointer_cast... and you can enforce that those are preferred over C-style casts.


I mean, it's even weirder than you said. It's not impossible that Foo(int) and (int)Foo or int g = foo do completely different things. Foo could define an `operator int` that terminates the program, or does any damn thing.

Thanks to macros, foo(2) can literally be anything in awesome C.

If you dedicate a large part of your career to C++, you can be quite productive writing very robust, performant code, and these days it's getting easier and easier.

I've been coding C++ since 1996, and to this day it is still a joy. I do a lot of back-end node/typescript, and after dealing with the loosey-goosey types and stuff in that language, coming back to a concrete hard-core language that requires a much higher intellect to write in is frankly refreshing.

I've always felt writing in C++ requires much more skill, and that alone is a very rewarding experience. I can tune things like arena allocators and make string operations insanely fast, things I agree most problems simply don't require, but still, it's a craft that I feel very proud of conquering, and every day you learn something new.

So if you like a challenge, if you like learning, or maybe if you simply enjoy flexing your brain cells a bit more than most, C++ delivers on all fronts.

As for whether it deserves a bad rap, that's a little too generalized of a statement, and it reeks of dogma. Ignore dogma, dive in, learn for yourself what you think sucks or doesn't.


"I've always felt writing in C++ requires much more skill, and that alone is a very rewarding experience. I can tune things like arena allocators and make string operations insanely fast, things I agree most problems simply don't require, but still, it's a craft that I feel very proud of conquering, and every day you learn something new."

I worked with C++ for years, and while I agree with the "every day you learn something new" part, I also think that most employers and projects don't benefit all that much from time spent playing with your toy of choice, so to speak. The time you spend learning about C++ each day could go into learning about security, machine learning, graphics pipelines, and a lot of other things.

I switched to a job where I mostly had to use C or Python, and I no longer obsess over my hammer. Unlike my C++-job, we don't go to C++ conferences and spend meeting after meeting talking about the language we're using. We attend conferences on crypto, machine learning, intrusion detection, and other things that could be useful to us.

I agree that there is a time and place for C++, but I feel so much more productive pulling that tool out of my toolbox AFTER I've identified a hot region of my code, and then using it where absolutely necessary.


> I've always felt writing in C++ requires much more skill, and that alone is a very rewarding experience. I can tune things like arena allocators and make string operations insanely fast, things I agree most problems simply don't require, but still, it's a craft that I feel very proud of conquering, and every day you learn something new.

It's important to distinguish professional from hobby programming.

Challenge for the sake of challenge is certainly interesting/fun, but, no employer would pay employees to write programs in Brainfuck just because it's challenging, while it's ok in hobby programming, because the resources (=time) are virtually infinite. Of course, there's a value in an employer allowing a certain amount of extra resources for the employees' "growth for the sake of growth".

I think the article grossly misses the point of what makes C++ problematic, as it considers a programming language in the context of a single programmer solving a simple problem, while the real-world issues of C++ are in the context of a team and memory management of large programs.


If this is the best that can be said for it, then yes.

60 lines of "portable code" turns out not to be portable.

Practically every line requires some piece of C++ ceremony.

The nightmares around the potential for "undefined behavior" and miscompilations of code by C++ are ignored.

It has always been the case that there are several subsets of C++ which make reasonable languages. The problem is that when you combine them in the wrong way, you get into a world of hurt. And in the real world, C++ programmers do not agree on which subset to use.


What I often wonder is why these subsets are not named, since naming them would make communication clearer about what is wanted and needed.

As someone who writes <5 commits of C++ per year, it is very frustrating. I can’t get through a code review without reviewers referencing at least 2 blog posts about proper style for modern C++. I’m sure if I were an expert who used it every day I could remember all the gotchas and best practices, but it sure is tedious to deal with occasionally. Too many foot guns and conventions, not enough constraints in the language.

I'm sorry but why would you expect anything different from a language you rarely use? I've picked up Python the past few years and have come to love it, but I'm sure any seasoned vet would eviscerate me in a code review.

A lot of this comes down to expectations. I would treat C++ like, say, kind of like a cabinet full of chemicals, rather than like (say) a typical home toolbox. You (hopefully) don't just grab containers and mix their chemicals and expect to learn by doing that 5 times a year; you really need to start slowly and take time to learn all the dangerous pitfalls properly, or otherwise expect to be told exactly what to do down to the most minute details. Adequate training for it is simply a prerequisite. C++ is kind of similar in that respect.

> <5 commits of C++ per year,

I am sure any such developer will get equal treatment from another who is an expert, right?


So C++ doesn't have linters? Why not configure the acceptable subset?

Thankfully, one can stick to the C++ Core Guidelines now. Abseil also has some good recommendations. The rest is just bikeshedding.

I never even really got to the point where most of the stuff from the article would be relevant to me.

Whenever I'm forced to using C++ only a tiny amount of the time is spent actually implementing my ideas.

Most of the time is spent figuring out how to fix incredibly cryptic compiler error messages that usually have nothing to do with what I actually did wrong.

Then I have to figure out how to get all the dependencies in a way that will still work in a year or two, without directly committing them to my git repo, and setting up various complex build tools that were created decades before anyone even imagined package management.

Finally I hope that nobody will ever look at my code because I definitely didn't follow "best practices". The best thing I can do is sprinkle "const" wherever I can because of my religious belief that immutability should be the default. If I wanted to stay up to date with the latest developments in C++ and the accompanying proper ways of doing things I feel like I'd have to forget what little I know about the language every time I used it and learn it all over again, just to avoid doing stuff that some blog post now says is "considered harmful". I might as well try staying up to date with JavaCoffeeTypeScript using the latest AngularReactVue and whatnot.


So first, C++ in a small codebase with a small team is a very powerful language that really does produce the best code. That's what you want in a professional, top-tier code base, and that's what you get.

Unfortunately, once a project grows to many developers, many of whom have varying levels of experience, it's just too easy to write bad C++ that becomes impossible to debug. I've been using it for a decade and I swear I barely scratch the surface of its features.

Lastly, and this one is the most annoying: C++ also has cryptic error messages that almost never mean what they suggest. You'll learn to understand them over time and decode their meanings, but at first glance you'll spend long days hunting for the wrong issue.


C++ error messages have been gradually improving over time. Clang really raised the bar when it came along, and the other two compilers have been catching up. Concepts in C++20 will really help to further improve things for template errors.

> So first, C++ in a small codebase with a small team is a very powerful language that really does produce the best code.

Does it?

https://benchmarksgame-team.pages.debian.net/benchmarksgame/...


> So first, C++ in a small codebase with a small team is a very powerful language that really does produce the best code.

"Best" is such an undefined term it's word-noise here. "Best" in terms of end-user extensibility? "Best" in terms of debuggability? "Best" in terms of having a single binary which can be copied and run without regard for machine architecture and OS?


It's quite obvious that the poster was referring to codegen.

Of course not.

Modern C++ can be a joy to write, and is fine for a lot of tasks. It's hard to beat the combination of performance characteristics and higher-level constructs.

The issue with C++ is that it's unsafe by default. Foot-guns abound. The right way is to make everything safe by default and provide an escape hatch when needed, e.g. Rust's unsafe or C#'s unmanaged.

Unfortunately, there's no way to "fix" C++ to be safe-by-default. Too much legacy code out in the world and no appetite in the standards committee for backwards-incompatible changes in future versions of the standard.

It seems obvious to me that e.g. C++2X should entirely drop some older, broken, unsafe concepts, e.g. auto_ptr is the first thing that comes to mind, but there's all kinds of gnarly bits that need to go, and just allow older code to continue compiling under previous versions of the standard.

That does leave libraries in a tough place - if your project wants to move to the (never going to exist) backwards-incompatible C++2X, they would need to update or remove all libraries not compatible with the new version.

But this seems like a way to eventually move the entire ecosystem to a better place. Possibly the only way.


IMO fixing the unsafety is impossible, because doing so would mean re-evaluating the very core idea of C++: that C is a suitable substrate for building a safe, expressive language on. We know from years of experience that C has plenty of footguns of its own, and fixing them is not possible without a fundamental redesign of the language. Therefore, the foundation of C++ is already rotten.

If that foundation were to be redesigned, one would have to give up the current near-100% compatibility with C, making interop no longer an argument for using C++. Add the fact that most real-world C++ code would also be incompatible, and the strongest argument for using C++ is gone (namely, the impressive ecosystem size and maturity).

At that point there'd be no more justification for the existence of C++ over Rust (or another language that takes Rust's good ideas and supersedes it).


The raison d'être of a safer C++ would be to enable staged, tool-assisted migration for the many hundreds of millions (billions?) of lines of C++ code in the world.

I think that's plenty of reason all by itself.


I wonder whether a compile-time switch that enables 'safe by default' would be an option. At least then programs could transition to compiling fully in this safe mode, while only letting subsets (e.g. certain files in a project, or old libraries) use the unsafe-by-default mode. It would be less of a sharp shock, would give people time to transition, and would show where to spend effort to become safer.

One thing we do in the Firefox codebase is build using a clang plugin that we wrote to statically enforce additional rules.

It’s not a perfect solution, and obviously not built in, but it is a big help for avoiding footguns.


You can do a lot to make modern C++ programming safer with linters, clang-tidy, sanitizers, valgrind, etc. But it's a lot of work and a lot of compute to bolt that on top of a fundamentally unsafe language. It's out of reach for a lot of organizations and requires a substantial amount of organizational coding discipline.

It's like the old saw that gets trotted out about C and C++; you can write safe code in any language if you're disciplined about it. The point of safe languages is to make it easy and accessible to write safe code; to design into the language a "pit of success" rather than requiring a big slog to make a flawed language usable.


C++ is, quite rightly, considered an overly-complex and bloated language. A lot of newer features are papering over poor design from the beginnings of the language.

But..

There is something about the use of destructors, RAII, and even smart-pointers that just feels so...elegant? I'm not sure what the word is I'm looking for. I feel like there's a kernel of a beautiful language hiding in c++ and those features are the integral parts of that.


Because C++ has almost all the possible features of programming languages, you can find a subset to like in it. A language shines when it picks the correct subset of features, not all of them.

Rust does a good job capturing that kernel.

I've never actually had a problem with c++ grammar or syntax or logic

What I do have a problem with is trying to interface with more complex APIs and functions. For example, a while back I had to implement an HTTPS GET and POST request and it was a total nightmare. Something that takes only a few lines in Python took a few days to figure out. Maybe it's lack of experience, but I tried wrestling with libraries like POCO, which I couldn't figure out for the life of me, and in the end ended up writing a monster of a program implementing the WinHTTP API directly.

I made a similar comment earlier and someone suggested I should have just used libcurl. How do people normally wrestle with these things without going crazy?


By using libcurl. I run a distributed c++ database and it's one of the most stable components of our stack. Java and Ruby tend to break much more often. Also, you wouldn't want to use Python without any packages, so why insist on the same with c++?

Is there some sort of standard / commonly accepted way to manage packages for c++? In Python it's as simple as pip install + import, and the community is generally in agreement about the best library to use, with good documentation. Are there similar tools I can use for c++?

> Is there some sort of standard / commonly accepted way to manage packages for c++?

As others have pointed out, CMake is the de facto standard in C++. However, a combination of CMake with Conan seems to take care of most of anyone's needs. The downside is that the availability of C++ packages in Conan tends to be very limited.


Your system package manager, usually, along with something to detect/configure dependencies (autotools, CMake, etc). CMake can be set up to automatically download missing dependencies in some cases, too, and I think there are some other build systems that provide this sort of functionality. But there's not some standard packaging system a la pip or cargo or the like.

There is build2[1] and libcurl is available[2] from cppget.org, its package repository.

[1] https://build2.org

[2] https://cppget.org/libcurl


There are many. But none of them are considered the ‘default’ way to do it. vcpkg[1] by Microsoft is the newest kid on the block which is gaining some traction. Apart from that Conan[2] seems to be the (relatively) popular one. There’s also Hunter[3] which builds on top of CMake.

[1]: https://github.com/microsoft/vcpkg [2]: https://conan.io [3]: https://github.com/cpp-pm/hunter


I am piggy-backing on this to ask if there is a good networking library to handle OAuth2 (client side). I came across cpprestsdk[0] but it was a pain to setup with some OpenSSL conflicts unless I use vcpkg. But I need to edit some code in the httpclient class which won’t be straightforward if I use a package manager.

[0]: https://github.com/microsoft/cpprestsdk


Modern C++ with the libcurl API still isn’t great. I’ve found the Python and Go HTTP APIs much more intuitive.

This is just a bad library. The library I use in C++ (which I wrote) is pretty much the same as Python's requests2 library (or whichever the good one is now... I forget the names, as I honestly also rewrote the Python HTTP API layer myself over httplib, since the ones that used to come with Python sucked too).

“There are only two kinds of languages: the ones people complain about and the ones nobody uses.”

Bjarne Stroustrup


Rust is pretty widely used and in my (biased fanboy) opinion, strictly better than C++ in almost every way.

(Not quite _every_ way: lack of per-container allocators, and less advanced generic/metaprogramming facilities are two particular pain points)


Unfortunately that quote is an all-purpose way to dismiss the volume of complaints about any language.

I don't hear too many complaints from C users. Perhaps because C never overpromised?

I understand C++ is messy because, back in the day, Stroustrup never said no to a request or suggestion from a potential user. He wanted his language to be widely used. Guess he succeeded.


I don't think C++ ever really had a bad rap except among ideologues and evangelists of other languages. In my experience, people like this are very rare in the real world outside of maybe Rust conferences.

Your experience is highly incongruent with mine. I work with a lot of Rustaceans, and while they're often able to articulate why they prefer Rust over C++ for many applications, many of them would probably be reaching for C++ if not for Rust, and certainly don't spend time hating on the language. Speaking personally as a Rust fanatic, C++ is still my 3rd favorite language. (Out of a large N.)

I guess my point is, this is a weird (and IMO, flat-out wrong) dig at the Rust community.


Equally, a lot of people just don't know any different.

How would you know C++ isn't optimal if you've never tried or thought about anything else.

That and the sunk-cost of learning C++ is large.


You can't get a degree without being exposed to a number of lecturers' languages-of-choice. I'd be shocked to find someone who had only used c++

Lecturers are often fairly poor programmers, which is sort of within the point I was making. You might learn Java and C++ but you'll probably be writing one in the other no matter what you do first.

E.g. my current lecturer in C programming seemed to imply we were going to be learning about object oriented programming in plain C (specifically inheritance), which does not bode well if he didn't misspeak


I say this with a lot of love for C++, and I think it has gotten much better, but just try splitting a string via another string as the delimiter...

My favorite weapon of choice for this is absl::StrSplit. It can be very efficient because it adapts to any return type you like (including range types, so you can `for (absl::string_view sv : absl::StrSplit(...)) {...}`).

To be fair this is not a problem of the language but a library problem.

And the fact that libraries are still a pain point in C++ is a language problem.

I'm not sure whom/what this is being fair to, given both of these are specified by the C++ standard, but sure.

Why would that be more difficult in C++?

Because it doesn't even exist as provided functionality; you have to code basic stuff like this yourself. See for example https://stackoverflow.com/a/14267455

This is my annoyance with C++, it feels like the effort put into the language is disproportionate. As in, the committee is off solving problems that a handful of library implementors will use, but basic things like splitting strings or trimming whitespace is ignored. Yes these are fairly trivial but you either roll them over and over or start carrying around a little utility library.

After having used both Python and C++ for a while I concluded that most of the problems of C++ could be solved by a proper standard library.

It is possible with a regex iterator

  #include <iostream>
  #include <regex>
  #include <string>
  #include <vector>

  int main()
  {
      std::string s = "Python*C++*Java";
      std::regex regex("\\*");

      std::vector<std::string> out(
          std::sregex_token_iterator(s.begin(), s.end(), regex, -1),
          std::sregex_token_iterator());

      for (auto &word : out) {
          std::cout << word << '\n';
      }

      return 0;
  }

Hey, just curious: I have only coded C++ in school. Why don’t we use `using namespace std;` in the real world when writing C++?

Because it imports literally everything into your current scope, and the standard library types are named in such a way that they will likely conflict with code you write. Named a variable “min” or “max”? Function called “list”? You’re going to cause problems for yourself.

Plus, using std:: is generally a good reminder that you’re using the standard library types and not some homegrown facsimile of that type, as is common in many C++ projects.


Because it imports everything from the std namespace and can lead to name clashes. It also makes it harder to see where symbols come from. If you see `std::vector` you know exactly what it is. If you just see `vector` in a large code base it could be anything. Could be a hand-rolled version of std::vector from prehistoric times or people not trusting the STL for no reason. Could also be a mathematical vector or something for vector graphics or whatever else. That is why people generally always fully specify where things come from, it's not limited to just the STL.

`using namespace X;` is somewhat fine if you limit it to a scope by using it in a source file and ideally only the function body where you use it, in header files it is a really bad idea because then the namespace gets exposed to everything including that header. That can lead to the compiler calling the wrong function and you might not even notice it.


In addition to the other comments (which are the primary reasons), beware that you can actually get different behavior due to something called "argument-dependent lookup". The most well-known example being std::swap(a, b). In general it is not equivalent to using namespace std; swap(a, b), though in this case, you actually want to say something else (using std::swap; swap(a, b)) to avoid importing the entire std namespace while also ensuring you call the ADL version of swap that may have been provided.

Stop

Beh, it's one function call if you use Qt or boost

It is not difficult to do it, but it is difficult to do right and in a C++ way. That means with the least overhead, i.e. not copying anything unnecessarily and not doing any memory allocations. This is why C++ still has no string split in its standard library: people cannot agree on how it should look.

There is, however, boost::split from the famous boost library.


Do you know if they've revisited this since string_view was implemented? I feel like a vector<string_view> should be quite uncontroversial, and you could even propagate the allocator from the string to the vector.

Since they now have ranges, the discussion can start all over... But maybe I'm not completely up to date on where the discussion is.

With ranges it would be

    auto splitText = text | view::split(' ') | ranges::to<std::vector<std::string>>();

It should return a range of string views, no allocation is necessary.

That sounds like a great idea. Not sure if it might present a bit of a hurdle if the caller prefers it to be random-access, but I'm kind of tempted to try this out and see if it might present issues in practice. It might be worth suggesting it to the standards committee?

I think it should be bidirectional. Random access is expected to mean O(1) advancing of the iterator. I don't think it can be done without upfront work and allocation at construction.

If one needs random access they can always do the bidirectional range -> vector of string_views conversion.

Ranges are relatively new in the standard and not many algorithms landed in the library. I'm pretty sure there will be many new range algorithms in the upcoming standards. This particular algorithm is probably worth suggesting. I recommend floating the idea on the cpplang slack (there are #ranges and #future_standard channels) and/or the std-proposals mailing list. I never wrote a proposal, but you can chat with people who did.

https://cpplang.slack.com/

You can get an invitation here: https://cpplang-inviter.cppalliance.org/

https://lists.isocpp.org/mailman/listinfo.cgi/std-proposals


Difficult, probably not.

Verbose and clunky and needs a web search to do, yes.

I'm guessing it's analogous to Java's Regex. Like, yes, they're there and do as much as any other language, but lord it's a PITA.


In C++ you can implement it in a way that gives you much more control.

Do you want to truncate the string? return new strings? return string views? It all depends what you need.

In some languages all strings are immutable and all operations result in new strings and you have no control over that.


The trouble is often it doesn't matter and literally any of those would be useful, and yet none of them exists for you to use, so you have to waste time reinventing the wheel.

Perhaps they're not in the standard library, but they're readily available on Boost, POCO and many other well known mature libraries.

Check the Boost C++ string algorithms library.

https://www.boost.org/doc/libs/1_74_0/doc/html/string_algo/u...


Is there any language where string handling doesn't suck? (No script languages please, because they cheat by implementing the hard things in C or C++)

From memory, the least horrible experience I had was with Go, but there the runtime helps with slices, which keeps the difficult parts hidden, and there's also the batteries-included standard library, which is one of the best-designed standard libraries out there.

Buffer management is one of the things that sucks the most in any language, and in the ones where it doesn't, it's because the complexity is hidden in another, more fundamental layer.


C# and Python...

  "a, b".Split(", ")  // C#
  "a, b".split(", ")  # Python

In Rust:

    "a, b".split(", ")

I think it's pretty nice in Scala. I don't know if "no virtual machines" was implied as part of the "no script languages" request, though.

  scala> "Mary had a little lamb. The lamb was in federal witness protection.".split("lamb") 
  res0: Array[String] = Array("Mary had a little ", ". The ", " was in federal witness protection.")

No, it's ok and not part of that category.

I meant languages where the hard parts are delegated to another language, like Python and JavaScript (to C and C++ respectively).


Strings in D are pretty nice, along with the extremely expressive range algorithms.

Unicode is a slight pain (it's explicitly supported and works fine but it was designed before UTF8 dominated so there's baggage)


Ranges are an absolute pleasure to use, and besides the great template system, my favorite part of D

> implementing the hard things on C or C++

This misses the point. I don't care how the language implements the functionality if it meets my performance requirements and it is convenient to use.


> This misses the point. I don't care how the language implements the functionality if it meets my performance requirements and it is convenient to use.

I don't think it does, because it means you are working in a simple language that can only afford to be simple by delegating the harder parts to other languages that took the effort to be designed to deal with everything.

The convenience is paid for by someone else, and the languages that pay for the hard stuff are the complex ones, because they need to work well with any sort of algorithm.


I would risk mentioning Tcl as a good candidate, which might be unfair because the language is designed for strings.

  split "foo.bar" .
    => foo bar

Any software where you will need to develop serious things will suck.

Almost nobody feels miserable in languages like Python, because they rarely have to deal with the ugly stuff or have people's lives depending on it. That's 50% of the fun of writing in it, because you deal mostly with the cool things, and who would not like a language that gives you that kind of pleasure?

And the languages that are used for this kind of stuff will somehow suck, because it's hard to get things right in them; it requires a lot of discipline, and you will have to deal with big codebases, in big teams, so things get much harder to do right.

The guy that is cheering Rust today is the guy that will complain 10 years from now about how Rust sucks, because he has to deal with it every day in his big avionics codebase, where any mistake can cost millions of dollars or people's lives.

It's psychology. We need the shiny new thing to give more meaning to our lives. It will never stop, but with time you learn to navigate it a little better.

There is very little change in languages now, and once someone does a real paradigm shift, it will be impossible to ignore.


> The guy that is cheering Rust today is the guy that will complain 10 years from now about how Rust sucks, because he has to deal with it every day in his big avionics codebase, where any mistake can cost millions of dollars or people's lives.

I do believe (not as a religion, but based on the progress made with languages like Rust, Idris, Haskell, Typescript etc) that the trend towards provable/robust safety will continue in the coming 10 years. Progress is slow, but it is there, and the libraries + proof assistants (possibly AI assisted) will improve. Somehow that will find its way into one or more of the languages we are using now (because shiny new things).

Anyway; I don't have that feeling, at all, about languages and codebases I have been using for 1 or more decades: c, c#, php, java, pascal, perl (the only exception is JS; it was awful 10+ years ago and I do like to complain about that, but I maintain only 1 significant codebase in 'older' JS (about 8 years old?); it is a (callback hell) nightmare, currently being rewritten in c#). I work on significant-sized codebases in all of these and they work fine (no money lost, no people died; no avionics either, though, but still enough opportunities where people could've died because of something I wrote or was the lead on).

> Its psychology. We need the shiny new thing to give more meaning to our lives.

'We need' => maybe most people do, I don't. I actually never really liked new things. I want stability, as my work has demanded that for decades, and it crept into my non-work life. I want things to be reliable and robust 'forever', not new things where I don't know how good or solid they are (spoiler: they usually are not, in code or meatspace). I like to pay people, keep my company healthy and sleep at night; shiny new things do the opposite in most cases.


> Any software where you will need to develop serious things will suck.

Well, "serious things", that is, larger, robust programs need a lot of experience and discipline. This is true in any language and if people do not watch for it their library landscape turns into a hot mess quickly. Take Python.

But the thing is, I believe that in many applications, C++ does not make it easier to build serious things. It adds complexity and when working on large-scale stuff, you need extra complexity as much as a hole in the head.


C++ will be around for at least the next 100 years. It's an ISO standard used by business and government all over the world. Modern C++ is very safe and fast too.

100 years is a long time in technology. A safer bet would be to say "Assembler will be around in 100 years". High level languages tend to get replaced after a while.

C++ has already been around for 36 years.

And COBOL was around for ~40 years before starting to fade into oblivion in the 2000s. Your point?

I've been programming C++ for years. Trying to keep up with modern C++ took a lot of time, which I feel is better spent on other stuff, like reading "Designing Data Intensive Applications" or something similar than yet another Scott Meyers book.

Between ‘most vexing parse’, ‘static initialization order fiasco’ and ‘abominable function types’, C++ probably has more terms for miserable failures of design than any other language. That has to count for something.

Right at the beginning,

> This calls for regular expressions

Why? A regular expression is a whole new program, written in a whole new programming language (one used to program finite state machines, instead of Turing ones, but still). Why not just write a simple function to compare string suffixes? Is your programming language (C++, in this case) so anaemic that you need a whole different language to do such basic operations?

This is what people who don't like C++ complain about, in addition to the language, the self-indulgent culture around it.



That immediately jumped out at me. They don't need a regular expression here, it's probably an inappropriate fit, or else it's revealing an undisclosed design assumption.

It certainly ensures that I don't take their conclusion to be worth much.

