Hacker News

I think it's a sensible choice. I've seen way too many C codebases rewriting half of the STL or using clunky macro hacks (or worse, half-assed linked lists) when basically every platform out there has a copy of the STL available, with containers and algorithms that have excellent performance and are well tested.

It's complicated but it's the only reasonable choice. You can then write your code C-style while compiling it as C++ and nobody will bat an eye.




STL has a lot of weird pitfalls. There was std::vector<bool>. Here you can see some pitfalls of std::unordered_map: <https://youtu.be/ncHmEUmJZf4?t=2220>. The whole talk is interesting to watch. In the beginning you can also see the puzzling design of std::unordered_map (puzzling because of the number of indirections).

I'd reach for abseil first: <https://abseil.io/>.


These pitfalls really aren't that bad. An extra copy here and there only matters in tight loops. Some of the examples in the talk are really contrived. Am I going to insert the empty string as a key many times in a hash map? No, in most cases I am going to make sure that almost all of the keys I generate are different. Sure, Google can save on its electricity bill by doing micro-optimizations. For me, trying that would be a bad trade.

Every language has its pitfalls and I tend to prefer C++ pitfalls to some other pitfalls. Lately I have been doing some Python. Turns out python -m venv is different from virtualenv. Turns out pytest some/specific/test/file.py is different from python -m pytest some/specific/test/file.py. I am wishing quite hard this code base consisted of all C++. And of course mypying it is still on the TODO list, so the type annotations that are there might well lie.


> STL has a lot of weird pitfalls

everything that's old and has legacy is doomed to have several. It's just a fact of life.

The C++ committee doesn't and can't just throw away stuff like `std::vector<bool>`, despite it being a whole fractal of bad ideas, because people have already used that piece of dung in uncountable projects. It would be nice to have a way to get a _normal_ vector of booleans (even though vector<char> just suffices most of the time), but that's life I guess.


People bring this point up every time C++ is criticized, but no, not all C++ shortcomings come from its age. In fact, most of the shortcomings it is criticized for today are new. The committee just keeps on adding new ones: initialization (mentioned by another reply to my comment), std::optional & std::variant (UB), concepts[1], std::thread (good luck debugging where the std::thread which crashes your program in a destructor came from), std::random_device()() being allowed to return 0 every time (which e.g. won't show up when you compile for your own machine, but will when you cross-compile) and probably many others which I don't remember off the top of my head.

[1]: <https://news.ycombinator.com/item?id=28098153>

But back to the point, my original comment wasn't about C++ being a bad choice. It was about STL. You can use C++ without STL. I pointed to one alternative: abseil.


It does indirectly come from age, because legacy and backward compatibility mean that you can't simply change some core parts of the language at all. For instance, std::optional uses UB because _all_ of C++ does the same, and making std::optional<T>::operator* different from std::unique_ptr<T>::operator* would have caused similar concepts to have a different behaviour - something that would have definitely translated into weird bugs.

Also, C++ doesn't have features like destructive move semantics and borrow checking, so it's very hard to enforce certain things that come naturally in Rust. That's simply stating a fact. The real point here is that std::optional<T> is much less likely to cause UB than a plain old C pointer, so it's a net improvement - even though it could still lead to UB in some circumstances.


> https://youtu.be/ncHmEUmJZf4?t=2220

Some good points, but also "calling find with C strings is slow", you are using C strings, don't expect speed from code worshiping strlen of all things. Also the issue with integer hash being its identity? That is the case for every language I checked (Python, Java, C#).


I mean, if Python, Java or C# were good enough for me as a C++ user, I'd use them.

The mindset behind C++ is not a relative "thing must be good enough", but an absolute "there must not be something better possible".

Of course, this is often not realized in practice, but it's the goal.


> "there must not be something better possible"

It maps all integer values to distinct hashes which seems rather ideal, what it doesn't give you is perfect performance in an artificial benchmark written to exploit this implementation.


Tangent, but I remember reading not too long ago that there were some cases where perfect hashes weren't optimal, but I can't remember where...


after watching that video, I never want to touch C++ again


Several CppCon talks in recent years have reinforced that for me as well. https://www.youtube.com/watch?v=7DTlWPgX6zs, for example.

Edit: I originally linked to the wrong video (https://www.youtube.com/watch?v=TFMKjL38xAI), which was a different talk by the same person.


Then someone has to do it for you, because for the foreseeable future there will be plenty of critical libraries, and compiler infrastructure, that won't be available with anything else as the implementation language.

Hence why I keep my C and C++ skills sharp, even though they aren't my main languages any longer.


Or it is time to replace these critical libraries with something that has the same features, is far less convoluted, and is far harder to make serious mistakes with.

In addition to experts accidentally making mistakes with `ref ref` like in that first video, resulting in performance degradation from unwanted copies, C/C++ is a major security risk. Countless vulns come from simple mistakes and complex misunderstandings.


This is a pipe dream. There are systems still running COBOL. It’s a real hard sell to convince those with the money to let someone rewrite something that’s working and has been for some time.


That is a noble goal, first LLVM and GCC need to be rewritten in one of those languages, then CUDA, Metal Shaders, DirectX, Tensorflow, Qt, WinUI, ....

So even though managed languages and better systems programming languages are gradually replacing it, somewhere down the stack there will be C++.

And then to top it off, as long as people insist on using UNIX clones, C will be there as well.


But if that's the case couldn't there just be a library that provides all the functionality without having to switch languages? Something like boost but for plain C?


The lack of generics for C makes it difficult to write those sorts of libraries in an efficient way.


C has had generics since C11. Can't fault you for not knowing that, as they're a pain to use and I think folks would rather forget that the language feature exists.


_Generic is basically a switch on typeof(x). Despite the name, it's really a specialization/overloading mechanism.


Those aren't generics, they're more like overloading.


I wrote this (slightly crazy) generic vector for C: https://gitlab.com/nbdkit/nbdkit/-/blob/master/common/utils/... If you search for "string" in this page you can see how it is used: https://gitlab.com/nbdkit/nbdkit/-/blob/master/plugins/data/...


And it's generally impossible to write C generics that can confidently be used safely. There are reasons why C++ was necessary.


You’d use a code generator for this, similar to how you’d do it in golang.


C libraries can't be simultaneously as generic and as fast as equivalent C++ libraries without (or sometimes even with) the aforementioned "clunky macro hacks".


if "fast" was the only requirement we would be writing it in assembly


"Fast" is never the only requirement but often is a requirement. And the problem with a slow library is you don't know when rewriting the program to not use the library will be necessary to achieve the desired performance.


Nope, folks didn't invent higher-level programming languages (the first of which was arguably Fortran, see case in point [1]) because they wanted their code to run slower.

In fact, if “fast” is the most important requirement we'll probably write it in Fortran or C++. With C++ you just have to know which parts not to use.

[1] Fortran Web Framework https://news.ycombinator.com/item?id=28509333


If fast were the only requirement, we’d have vastly different processor and memory architectures and we’d probably write in higher level languages with compilers that would also generate the optimal ISA to be compiled into architecture-specific microcode along with the binary that runs on it with enough information the OS can pick which part of the heterogeneous machine is best suited to run the program.


No OS, it’s too slow. The binary needs to bootstrap the machine as well.


Yes, Gtk+ and Glib do that with the catch that C has no type polymorphism, so everything is done with casts and void pointers. It's even fully object-oriented.


The Glib approach is an immense monolith of nonsensical zealotry towards C. Reimplementing an object-oriented system in C using hacks and void pointers, when GCC already had not one but two fully object-oriented languages built in, was a completely pointless endeavor. There are a million different ways to shoot yourself in the foot with Glib, and it definitely doesn't feel like writing C at all.

I understand the historical context that led to the creation of Glib (no C++ standard in the mid-'90s, no decent open Objective-C libraries, Stallman and most of the FOSS movement at the time harbouring a deep disgust towards C++ for some reason, etc), but it can't be pointed at as a good example of how to code in C - on the contrary, it's IMHO exactly what your C project shouldn't do. You've picked C because it's simple and clear, don't butcher it with macros and weird hacks just to get a worse Objective-C.


> I understand the historical context that led to the creation of Glib

Well, yeah, that was 23 years ago. Back then there was no GNUstep and no free as in freedom C++ or Objective-C compiler. Now GCC's main development language is C++ and GNOME is slowly becoming somewhat language agnostic and also has Vala which at one point was supposed to cover up that mess.

I guess another point is that they seem happy with it[1], so why should we care anyway.

1. http://planet.gnome.org


I’d love to have a good excuse to program in Vala… it seems like a very nice language.


Both g++ and the Objective-C frontend already existed back then, so your argument isn't correct. I think the main reason back then was that Stallman and the FSF were very much against C++, and pushed very hard for everything under the GNU umbrella to be written preferably in C or Scheme. They only relaxed that after much insistence from the GCC developers, who wanted to be able to use templates and RAII.

I think that Glib was born from the same mentality which considered C the perfect language, with the end goal of getting the same functionality as modern languages without leaving it.


You're right, g++ was there by the end of the first year, but as far as I can tell libstdc++ didn't show up until around 1998. I don't know when GCC started supporting Objective-C (another commenter says maybe by 1992), but GNUstep didn't hit 1.0 until 1998 (it was fairly incomplete before that).

But yes, you're right, they could have developed Glib in either of those languages even if STL and *step didn't exist yet. I also found this e-mail[1] from Stallman that supports your claim. Do you know of more documentation? I'm interested in this history.

1. http://harmful.cat-v.org/software/c++/rms


The main reason Gtk and GNOME exist was Qt's license, and yes, C above all.

Here is one of the original versions of the GNU Coding Standards from 1994,

https://www.schwedler.com/old/HTML/standards_7.html#SEC12

> Using a language other than C is like using a non-standard feature: it will cause trouble for users. Even if GCC supports the other language, users may find it inconvenient to have to install the compiler for that other language in order to build your program. So please write in C.


I found an archived copy of the GNU guidelines from 2004 that well represents what was the GNU policy of the time:

https://web.archive.org/web/20041029162424/https://www.gnu.o...

As you can see, it basically consisted of "use C, unless you _really_ have to". It was also well known that C++ was heavily discouraged by Richard Stallman, so picking it could well end up with your project having a hard time being adopted by GNU.

The guidelines were relaxed afterwards, especially since GCC came to rely on C++ and every relevant C compiler was by then written in C++, making the whole "we don't want to force people to install g++" argument even more nonsensical. In 2004 C++ was already way too ubiquitous to not have g++ installed.


> no free as in freedom C++ or Objective-C compiler

G++ was available in 1990. I don't recall when Objective-C became available in GCC, but it was present in at least gcc-2.2.2 in 1992.


"Something like boost but for plain C" is, exactly, Standard C++.


In theory there could be such a library but in practice I don’t think there is. Maybe because maintaining something like the STL is a non-trivial task.


That’s pretty much what glib is. There are generic structures.


Does transmission use glib?


Yes, it's built on Gtk+ (which is built on Glib.)


That's (one of) the client(s)?


I wonder if Dlang's "betterC" would be a good fit, since you get real metaprogramming then.


Dlang's betterC would be a good option for just a better-than-C language, but it looks like it doesn't support D's standard library [1]. That'd mean being stuck with plain C container libraries. :/

Personally, I was impressed by an approach taken by the authors of Pixie (a fast 2D library like Cairo) [2]. They created a "library wrapper" generator for Nim code, called Genny [3], that supports creating libraries for Python, Node, C, and Nim itself. Haven't tried it, but being able to use Nim and its ref-based GC yet still export nice APIs to other languages is fantastic. Pixie's aim is to be a Cairo alternative, so it makes sense they'd need this. I hope the approach takes off. Usually writing any cross-language API is a lossy operation.

Here's a sample of the API definition:

    exportObject Matrix3:
      constructor:
         matrix3
      procs:
         mul(Matrix3, Matrix3)
1: https://dlang.org/spec/betterc.html

2: https://github.com/treeform/pixie

3: https://github.com/treeform/genny

(edited formatting)


The consequences of betterC are listed out here. https://dlang.org/spec/betterc.html#consequences

Notice that betterC does not exclude templates or destructors, so you could still use c++-style containers based around RAII, etc.

The argument against it is a smaller ecosystem and you’d end up having to roll your own containers etc.


>I wonder if Dlangs "betterC" would be a good fit,

You mean "Das C" ? :)


>"You can then write your code C-style while compiling it as C++ and nobody will bat an eye."

Very practical and reasonable choice.


C-style code, plus you get some nice things like hash maps from the standard library without having to implement or find an implementation.


You can also statically link libc++. Not much of a binary size increase from static linking because most is template headers.


> Not much of a binary size increase from static linking because most is template headers.

Unless you use anything that pulls in locale code (e.g. any stream), in which case you will get code for things like date and monetary formatting in your library even if it is never used.


Linkers are pretty good at throwing out dead code, so if you don't reference these functions there's a decent chance they won't be linked in when linking statically.


Seems like exception handling and exception safety would be quite tough to achieve if you are just sprinkling STL throughout a mostly C codebase, no?


A mostly C codebase using a bit of STL will almost certainly not use exceptions. So there's no need to worry about exception safety.


The STL can throw exceptions, but it's easy to avoid that if you use the right patterns (i.e., find() instead of at(), iterators, and so on). If you build your C code as C++, exceptions will flow through it just fine. Nothing will throw anything unless you call throw (or you run out of memory).


> If you build your C code as C++, exceptions will flow through it just fine.

Exceptions will propagate up the stack, but won't clean up resources since the C code expects to do that manually. Same is true for non-RAII C++ code that is not written to expect exceptions.

Your main point is correct though: Large parts of the C++ stdlib do not throw.


Well, you could compile everything with noexcept and not use exceptions, but that would certainly make large chunks of the STL useless.


What? Everything in the STL works fine without exceptions. The only wrinkle is if one wants to handle out of memory in some way besides crashing, although there is generally nothing better to do. Besides, the kernel will often kill processes rather than return malloc failures (see the OOM killer) anyway.


IIRC, STL constructors throw if they can’t run successfully. I suppose if you’re sure that OOM is the only thing that is going to cause constructors to fail, you can just live with allocation failures causing program termination.


> C++'s std:: tools are more useful than the bespoke tools in libtransmission's C code. Nearly every time I go in to fix or change something in libtransmission, I find myself missing some C++ tool that I've come to take for granted when working in other code, either to avoid reinventing the wheel (e.g. using std::partial_sort() instead of tr_quickfindFirstK(), or std::vector instead of tr_ptrArray) or the better typechecking (e.g. std::sort() instead of qsort()).

I think it's due more to glib development entering an ice age than a language issue.

Glib had an impressive roadmap around 8-10 years ago, copying many STL features, but of course not much materialised after GNOME 3 caused an exodus of GTK developers.


Patching up C so it can safely play where the big kids do is both pointless and doomed to failure. It is way, way less work to switch to using a better language, instead.

Any C program can be converted to build with a C++ compiler with no more than a day's work. Then you can start modernizing the active parts, deleting local, generally poorly-tested, cobbled-together utilities as you go.


I have been sounding that trumpet since 1993, but there are lots of very hard-of-hearing listeners.



