We used C++20 to eliminate a class of runtime bugs (microsoft.com)
144 points by maccard 7 days ago | 101 comments





I saw this this morning (and also Rust's support for format strings). Whenever C++ is discussed I often find myself defending modern C++ against people who want to write C++03 because it's simpler. I like this article because it shows how useful modern features really are in practice.

Complexity can solve issues but can also bring new ones.

For instance, I've been bitten recently by the new "universal" syntax to write variable definitions with only curly braces.

  #include <iostream> 
  #include <string>
  #include <vector>

  using namespace std;

  int main() {
    // Forty-two things:
    vector<string> v1 { 42, "foo" };
    vector<int>    v2 { 42, 1 };
    vector<char>   v3 { 42, 'x' };

    cout << v1.size() << " " << v2.size() << " " << v3.size() << endl;
    return 0;
  }
This prints, of course:

> 42 2 2


This was, in my opinion, the single worst thing added to C++ -- it's what made me quit the C++ standards committee, as it was clear (to me) how much of a mess things were.

Different people wanted "{ } for unified construction" and "{ } for initializer lists", and we ended up with this mess which made 0 people happy, and ruined unified construction.


I'm at the beginning of a programming career, about to lean real heavily into learning C++ because it's standard/necessary in the field I want to join.

Stuff like this terrifies me that I'm signing up for some Kafkaesque frustration.


The way I would think about it is: C++ isn't any weirder than English. As long as you are happy to accept there are things just there for weird history reasons, it's not too bad.

Thanks, this is heartening. People new to English still manage to get their point across even if it sounds funny.

C++'s initialization is utterly broken. E.g. to understand the difference between just 'struct Foo f;' and 'struct Foo f{};', you have to come across about 10 different kinds of C++ initialization terms.

Is there a way out of this (serious question)?


I'm curious about your use case for calling the vector(size_t, value_type) constructor (vector<string> v1 { 42, "foo" }).

I'm on a codebase of a few hundred kloc and the need for a vector initialized to anything but the default value pretty much never came up. It uses boost, Qt, llvm and two dozen other libraries, yet there is a grand total of eight instantiations of that constructor in it.

When I set a breakpoint on it, it turned out that all the calls to it I could trigger actually just initialize with the default value statically (e.g. vector<T*> v{x, nullptr};).


Easy: since getting bitten, I've been adding comments after those initialisations where I really intend to call a given constructor.

Three instances I could grep immediately, all from the same file:

  std::vector<std::tuple<std::optional<qreal>, QString, QColor>>
      values(  // Not list-init
          numValues, std::make_tuple(std::nullopt, QString(), QColor()));
  (...)
  std::vector<size_t> nextTupleIdx(numValues, 0);  // Not list-init here!
  std::vector<bool> done(numValues, false);        // Not list-init here!
I do not doubt a single second that there are 36 ways to write this that are simpler, more efficient and more readable. Yet I would like to hear your suggestion.

All your examples can be replaced by

    std::vector<T> vec(numValues); 
as vector value-initializes.

Personally I prefer an explicit reserve or resize, but still..


The committee loves overloading. IMO they overuse it. I think once you start thinking "this overload only participates in resolution if..." you should start considering different functions.

I feel like there was success with overloading `<<`, which is weird but very appealing for students IME, and suddenly overloading everything was on the table.

Overloading << is part of Stroustrup's iostreams toy and thus enjoyed the advantage that the language's creator was advocating it from the outset (it pre-dates the C++ standard library). I doubt that anything so ugly and dangerous would be standardised even in C++ today.

Because this is actually still a shift operator, it retains all the precedence of the shift operator even when tired humans think of it as a clever new stream out operator. So if you're in a situation where shift has precedence, the operator fires, and too bad if that wasn't what you'd intended nor what makes sense to a human reading the code.

Oops.

Some languages let you mint new operators (from some limited combinations of symbols) or allow infix function calls, or by some other means provide for you to nicely extend syntax here, but the overload of unrelated operators was IMNSHO an ongoing disaster whose popularity with programmers is unexplained, like C's decision to treat an array as just the pointer to the front of the array.

But I'm a cranky old man, I also don't like the fact that Rust's + concatenates Strings (this isn't inherent in the language since String isn't a built-in type, in Linux kernel Rust nothing like this exists, because Linus would throw a fit, but in the out-of-box runtime environment Strings implement the Add operator as a concatenate operation).


And they started writing the standard by describing the implementation (code) instead of describing high-level expectations.

Worse, even without the vectors, the initialization of single variables of these types would be different with the same syntax, e.g.

    string s;     // initialized (empty)
    int i;        // NOT initialized (garbage value)
    static int j; // initialized (to zero)
    char c;      // NOT initialized (garbage value)
...I mean, it’s always been a complete mess.

Can someone explain why it prints that?

There are two different features interfering, which were added at the same time.

The {42,1} is an "initializer list", which was a way of passing a list of things to a constructor.

However, at the same time, something called "uniform initialization" was added, which used {} to "clean up" how constructors work.

However, from looking at a piece of code it's impossible to know if it is trying to use the "initializer list" or "uniform initialization". The "initializer list" wins in this case, when the items are all things you can put in the vector.


Also, although this might be more obvious to people familiar with any C-like language, C++ considers that since 'x' is a character, and characters are fundamentally just an integer type (maybe a signed 8-bit integer, it depends), the third list is just two constant integers, which are a perfectly reasonable thing to put in a list. Even though that's obviously not what a human sees.

In a language that isn't just pretending to be strongly typed, the compiler would notice that char and whatever-integer-type-42-is aren't the same type, and it would reject your program as nonsense.

But that confusion gets much less chance if the language explicitly distinguishes initialising collections so that "I want N of X" is different from "I want N, and X". Compare Rust's vec! macro:

  let mut v = vec!['x'; 42]; // NOTE semi-colon. Compiles, vector of 42 x characters

  let mut v = vec!['x', 42]; // NOTE comma. Does not compile

  let mut v = vec![b'x', 42]; // NOTE comma. Compiles, vector containing 120 (the ASCII code for 'x') and 42 as bytes

  let mut v = vec![b'x'; 420]; // NOTE semi-colon. Compiles, vector of 420 bytes with value 120

  let mut v = vec![b'x', 420]; // NOTE comma. Does not compile unless you explicitly tell Rust that you mean the low 8-bits of 420 if you do this
I'd actually be enthusiastic about a Clippy warning for the middle one, because it feels like the odds are better than they should be that wasn't what you meant. But on the other hand, the odds of you wanting either possibility are slim, so, not a priority.

Rust is one of the greatest new mainstream languages.

But what they do with semicolons is pure brain damage. Having different complex semantics encoded purely in semicolons is nuts IMHO. I don't get how someone could create such a mess in an otherwise mostly quite sane language.

Especially as semicolons are an archaic artifact that shouldn't be used for mostly anything in a modern language, imho, as it's just useless noise (besides when writing some well readable one-liners of course ;-)).


Oh how I agree about the uniform initialization! And to make matters worse, I now work on a code base where it is the mandated form in the coding standard. At least your initial example now gives me ammunition against it, but too late for this coding standard.

It makes code look silly, too, as now initialization looks different from assignment.

     int i { a + 2 };
     i = a + 4;
     
     int j = a + 2;
     j = a + 4; // ah! consistency!

It is good for initialization to look different. It is different. Not so much so for int, but maybe hugely different for other types.

To build on the sibling's answer: that isn't the same thing.

The first one will call the initialization constructor, whereas the second form will initialize the object and then apply one form of copy construction, depending on which ones are available (user provided or compiler generated).

It just happens that for int that doesn't matter in practice.


Universal initialization is the worst feature added to modern C++; these changes are inexcusable IMO.

Generally in C++ one should use the Type name = Type( args... ) style to be the least surprising. Prior to C++20 that would be aggregates. It's really too bad that initializer_list gets special treatment, and it's universally a wart in C++. The alternative would have been vector{ {1,2,3} } and that would have been fine.

Ugh.

I'm firmly of the opinion that if anyone uses a sane programming language first, they'll absolutely do anything they can to avoid C++.

Corollary being, 1) if C++ is your first language, you'll end up learning all these rules and exceptions to the rules and assume that is just how it is (this is the way I think about Python), and 2) if you use C++ you are aiming for performance that alternative languages cannot provide. I'm glad the gap between alternatives and C++ is narrowing. If I could, I would never write another line of C++ again. Rust and Zig are FAAAARRRR better alternatives to me personally, purely because there's fewer gotchas I have to keep in my head when attempting to do anything with those languages.


Corollary 3) the platform only offers C++ in the toolbox alongside the managed languages in the SDK, and adding another language will create a more complex toolchain, complicate the build server, give an impoverished IDE experience, and rely on third-party libraries of varying quality for FFI bindings.

In my case I'm writing C++ because I want to use a library implemented in C++.

One of the gripes I have with C++ is that it has replaced C as the go-to language for general-purpose libraries, yet it's hard to build bindings for another language (whereas bindings to C are trivial).


I’m in the same boat, I’m learning game engines and so many libraries are in C++.

It feels impossible to use Rust or Zig, but I have hope I can use Nim. It can compile to C++ and has a simple-ish way of wrapping C++ code. Maybe it would be interesting for you too?


C++ is one of my favourite languages, however there is a big difference between being able to write modern C++ in side projects, and what most companies consider C++.

Enjoy a sample of AOSP source code, https://android.googlesource.com/platform/art/+/refs/heads/m...


Just because some companies write ancient C++ code doesn't mean all companies do. I write mostly C++17 these days for a company, and my previous job wasn't exactly afraid of modern C++, but they could have embraced more of it. The post I shared is from Microsoft about writing modern C++.

I know, and I can provide similar-quality references from Microsoft's own samples if you wish.

I don't get the point you're trying to make, sorry. "Big companies sometimes write crappy code" is of course going to be true.

It would be interesting to see statistics on use of newer C++ in companies known to have relatively stringent coding standards and code review regimens. I suspect that it (still) isn't so much about crappy code as about much of the value of C++ being sunk into legacy code that determines the constraints put on *new* codebases.

Most companies still write crappy C++ code, regardless of the advocacy many of us in the community try to make.

In fact, if we focus on traditional enterprise shops they are stuck in what I call the C+ mindset.

Those of us that bother to live in HN, Reddit, attend CppCon, C++Now, C++ on Sea, ACCU in whatever form, are 3lit3. A tiny minority that actually cares about quality.

That is the point.


It doesn't sound like much of a point. The same thing is going to be true for all programming languages widely used in industry. It's not a C++ issue.

In fact, see Python 2 vs 3 for probably the most impactful case of people being slow to take up new language versions/features.

There's going to be shops writing C89. Some reasons might be well founded, some reasons might be less well founded.


Correct, so now imagine Python 3 as the adoption of Core Guidelines, which is basically what modern safe C++ is all about.

The latest surveys, which are anyway only answered by people active enough to care about this stuff outside of work, place use of tooling to validate modern practices at about 20%.

Now do some statistical extrapolation and it is relatively easy to find out how much "modern C++" is actually used in practice.

Bjarne hasn't dedicated two CppCon keynotes to the subject of improving adoption of better C++ practices just by accident.


I agree. I think perhaps the one thing that does more uniquely affect C++ is the compatibility with C (at least not newer versions of C) and the strict backwards compatibility with C++ before it. Which is a little strange to say because compilers _do_ deprecate features, so it's not strict-strict, just strict enough for the standards committee to use it as an excuse.

The people who write crappy modern C++ will write crappy C++03, crappy java and crappy golang.

Well, 50% of programmers are at most mediocre—or worse. And maybe only 1% of programmers are able to write significantly better code than all the others.

So, just by the means of statistics, all code is necessarily crap. (Besides the extremely rare pearls written by some independent geniuses.)

Frankly this is unavoidable on an industrial scale of production, I guess. At least as long as we can't clone "10 x programmers".


Perhaps, but crappy Java and crappy Go won't have memory corruption concerns, which is a huge plus in and of itself.

Indeed, unfortunately they get to play with sharp knives.

>Enjoy a sample of AOSP source code, https://android.googlesource.com/platform/art/+/refs/heads/m...

What am I looking for? Is it just the use of smart pointers?


+1, it looks OK, although stylistically I would personally prefer to split this file up into smaller parts (although maybe it's all related). I'm also not a big fan of macros, but again, without knowing more it's not too unreasonable.

Personally I wouldn't like to slog through this on a moment's notice but it seems named well and there are a lot of comments, and appears to have been through clang-tidy.


Wow ... I want to pull my eyes out now. Is this really some uber-critical path in the code, or did the author(s) have too much time on their hands / were they optimizing some other metric (e.g. promotion, proving how smart they are, attempting to get job security)? I am also astounded by two things related to documentation: it is nearly non-existent, and where it exists, it seems to provide almost no context to someone other than the original authors.

You posted this for a reason .. would be good to get your thoughts.

edit: adding one more point. I went to a fairly high-ranked (top-20) university. We would get graded on how clear our code was and on comments (both for functions and inline). This code would get a zero on those metrics.


Hmm... I can't see how the code is unreasonable, given the complexity of the task at hand. Could you tell me what could be improved about this code (code itself, not docs etc)?

just opened a file and...ugh

https://android.googlesource.com/platform/art/+/refs/heads/m...

like, just this, basically C:

    if (strncmp(argv[arg_idx], "-XXlib:", strlen("-XXlib:")) == 0) {
       ...
is just so complicated and error-prone over

    if(std::string_view{argv[arg_idx]}.starts_with("-XXlib:"))  
       ...
(which I guess is the intent)

That code is older than C++11, let alone C++17. Also, your proposal isn't as efficient, since it calls strlen on construction of string_view (not that cost of parsing command line arguments is going to be measurable).

> your proposal isn't as efficient, since it calls strlen on construction of string_view

The code it replaces also had a strlen in it.


A strlen of a compile-time constant, which will certainly get turned into a constant by the compiler. It does not get the length of the input string.

Why wouldn't the string_view constructor be inlined and get the same treatment?

Certainly isn't a fact unless proven by examining the generated machine code.

This is a worthless statement. Do you examine all of your generated machine code to ensure that it generates correct code? Any compiler that you cannot trust to fold strlen("foo") to the constant 3 is not one you should trust to compile anything, let alone modern C++.

> Do you examine all of your generated machine code to ensure that it generates correct code?

(not OP). I examine the output if someone tells me it's slower, or if I care about performance. I did a very rough benchmark which says in this case they're equivalent [0], however that's not necessarily representative of the places it's called in. There's definitely a compile time overhead though [1]. On my current project there's ~20k files. On an 8 core machine (as an example), adding 150ms per file would add 7 minutes onto a clean build (before optimising the builds).

[0] https://quick-bench.com/q/LBarCDIigwpmXcwZSx-RgV2x4_4 [1] https://build-bench.com/b/OH8qf9AgEB_l-BhANV5qP_uSCys


Neither the ISO C++ abstract machine nor what a random C++ compiler generates from it guarantees your assertion at any point.

So you either validate it actually takes place or believe in fairy tales.


The code is ancient, and Google (like most) sees no value in modernizing old code, particularly for non-revenue-generating products. Furthermore, Google long ago locked themselves into an idiotically anti-RAII coding standard.

So, the example is of bad old code, at a place that enforces bad coding practice. The intent appears to be to suggest this is typical practice, which is not supported.


Google sits on the ISO C++ committee, is a major contributor to clang and LLVM, and belongs to the group of companies with the largest C++ codebases in production.

If this is what a company with deep roots in the C++ world is doing, what are the large majority of "dark matter" software factories doing, especially the sweetshop ones.


A minuscule fraction of the people coding what they call C++ at Google come to the committee meetings. Google's ancient, idiotic prohibition on RAII is not a thing the current attendees, or other employees, have a say about. Many, attending or no, would rather be coding sensibly. And, in any case, the ancient code you present is not how code is being written even at Google today. So, your point is hopelessly muddled.

There are shops where people still write new C++ like it's 2011, or 2003, or 1998, or 1992, or C. There are plenty of shops where people code the best way their current production compiler allows. There are plenty where different people do some of each of those. Vanishingly few shops make an effort to rewrite ancient code according to current best practice.

I guess a sweetshop company is one with Oompa-loompas.


One day I would appreciate knowing where that perfect C++ world of yours exists in the real world.

I guess I need to catch up with some Twilight Zone episodes.


Try "Willy Wonka and the Chocolate Factory"?

> just opened a file and...ugh

Looks just like regular PHP. So what's the point? Or did I miss something?

/s


When I was programming C++ regularly, which was C++03, I wrote a robust format string system which could catch all mismatches against the format string at run-time. Which is good enough in practice if you actually test the code paths.

What's valuable is being able to compile format strings into faster code, like the formatter macro in Common Lisp.


A new C++ standard every 3 years isn't nearly as bad as, say, the Rust situation, but it does mean that a Linux desktop distro's compiler can only compile code from "hip" devs writing in C++ for about 3 years after the distro release. It means if you choose to use these language features, the vast majority of people will not actually be able to compile it.

So balance that. If you're not just distributing binaries, if you have geek end users compiling, maybe don't always use the latest bleeding edge C++?? features.


Nah, this is also why the Linux desktop is behind. On Windows I build on win10 VMs with the very latest C++20 features and can ship back to Win7 because Windows does not idiotically tie a compiler version to an operating system.

If you have to use a VM to compile instead of doing it on your OS you've lost the game too. It is a cross-OS problem with almost all modern software.

I am quite curious when clang will catch up with C++20 (GCC is much better).

Meanwhile I am enjoying modules in VS 2022, even if there are some hiccups.


I don't write C++ for work and I've written very little ever, but it seems like C++ and PHP (which I do write daily) have a similar trend. They get shit for being old and bad, but people haven't seen the myriad of new features, safety, and solid ecosystem.

It's impossible to retrofit sanity ex post facto into something that's broken by design. The nonsense on the base layers will always creep into the higher levels through unavoidable abstraction leaks (which are usually unavoidable exactly because said nonsense on the base layers exists in the first place).

That's why repairing design mistakes is only achievable by rewrites, inevitably breaking compatibility with the old system in this process.


That is true. I have plenty of stuff to complain about in C++ and PHP, but I also have a similar list for Python, Java, C#, F#, C, and many other languages of similar age.

I recently bumped my company's fmtlib version to the one that has compile-time format strings, and it's been such a load off our minds to not have to worry about this anymore. We're reasonably early stage (12 people right now) and even with all of us being reasonably senior, I still found a few callsites where the compile-time format string caught bugs.

(FYI: for anyone holding off on this because of issues with spdlog, enough has been fixed in the latest versions of both that you can upgrade them together now.)

Also worth noting: the UX of fmtlib's compile-time format strings is actually quite interesting. The error callbacks in the consteval calls take advantage of the context compilers show in compilation failures to detail the mistake being made.

(Aside: I'd also like to note that `absl::StrFormat` has supported compile-time format strings for quite a long time. https://abseil.io/docs/cpp/guides/format)


What this is saying to me is that they have untested code paths. All the places where error() encounters bad parameters are not actually covered.

It doesn't seem incredibly valuable to know that some error() calls in untested code are well-formed, since that code could be broken. The error() could be a false positive, or the compiler could crash before reaching the error() call due to some bug. Or some of the error() calls could be in de facto unreachable code: no test case can cause them to be executed.

If you have a test suite which hits all the error() calls, and if the formatting system is robust to catch bad arguments at run-time, you don't have a problem.


As someone coming from a web background with almost no systems programming experience, Rust is appealing to me in terms of its language features and support for a functional style of programming, as well as the documentation and tooling (among other things).

But from the little I've learned about modern C++ and RAII, I wonder how systems engineers will respond to Rust in the long-term. Memory safety seems to be the primary argument in favor of using Rust, but what would be easier - rewriting everything in Rust, or refactoring existing code to take advantage of RAII?


I used to be a 100% C++ programmer in the olden days but over the years C# and Python have pushed my C++ percentage down to maybe 10% super time critical code and some embedded.

RAII is the one thing I really really miss from my C++ days. From what little I know about Rust it seems to promise a more "baked-in" version of RAII so I'm all for it. Haven't found the time to experiment but looking forward to eventually dipping my feet in.


I'd argue that RAII is pretty well baked into C++.

For Rust, it's the "Drop" trait that you add onto things which ultimately causes RAII behavior.

You don't need to do it as much with rust as a lot of the reasons for wanting RAII (memory management) are simply handled by the language.

For Rust, it's sort of the "pit of success": the easy thing to do is the right thing to do. Whereas in C++, it's pretty dang easy to new something up and fail to correctly delete it.


> Whereas in C++, it's pretty dang easy to new something up and fail to correctly delete it.

This is true, but we shouldn't be encouraging people to reach for new/delete these days; auto foo = std::make_unique<Foo>(args); does the right thing in a surprisingly large number of cases. It's not the only tool available, but it's a really damn good one.


The "Pit of success" and "Pit of failure" aren't really about what's available in a language, it's about what's easy and natural.

I'd argue that in a language with the `new` keyword, nothing is more easy or natural than invoking it. `auto foo = std::make_unique<Foo>(args);` is absolutely the right thing to do, but would you really be surprised to find `auto foo = new Foo(args);`?

I absolutely agree unique pointers are the right call. They just aren't necessarily the easy or natural call.

The contrast is Rust, where all pointers are, by default, unique pointers and the compiler validates that for you.


I've thought before about making a "dialect" of C++ that doesn't add any new fundamental capabilities, but simply changes syntax to make the more modern options easy and natural and make old C features verbose and difficult. For example, using "*" to represent unique_ptr and std::legacy_c_ptr<...> to represent "bare" pointers. Ideally, such a dialect could be fully include-compatible with all existing C++ code (although macros could be difficult), similar to how Rust "editions" or go "language versions" work. If C++ compilers offered a "#pragma modern_defaults" would people be interested in using it?

Not really, one of the reasons people keep using C++ is its legacy.

It is already hard to educate people on modern language features and adoption of static analysers that enforce Core Guidelines.

Who would write the books and teaching materials for #pragma modern_defaults, and ensure it would be accepted into ISO?


Actually in C++ the vast majority of the time, the right thing to do is

    Foo foo{args};

Which is simpler than the other cases, more correct, and more performant. The rare times you need to dynamically allocate something (other than an array) with new is when you have polymorphic types -- but then, the user code generally calls a factory function instead of new.

Can we agree that the new "best default" way of instantiating is... pretty damn weird?

    auto foo = std::make_unique<Foo>(args);
Think about explaining this to a new C++ developer.

This will feel much more natural to most developers:

    var foo = new Foo(args);

The "best default" way of instantiating is

    Foo foo(args);
Sometimes an object has to be on the heap as its own separate allocation (rather than on the stack or as a data member), but I notice that some new developers who come to C++ from some other language use the heap way too much.

> Think about explaining this to a new C++ developer.

How's this:

The code allocates and initializes a new Foo instance with args. By using make_unique you guarantee that the destructor is called when foo goes out of scope for any reason.


in my experience, it's almost inevitable that you end up calling some code you don't control that returns raw pointers. better hope you have access to the source or at least some good docs :)

Agreed. That problem existed even before unique_ptr though; there are still some APIs that I use that hand you back pointers that you may or may not need to manage yourself. If you do need to, then unique_ptr + custom deleter is the way to go!

If you are using a relatively recent version of C#, you can get quite close to it, especially now that using can make use of structural typing.

And with help of Roslyn, any type with a "destructor" (aka Dispose) that gets used without a using declaration can be turned into a compiler error.


> rewriting everything in Rust, or refactoring existing code to take advantage of RAII?

Oh, refactoring existing C++ code is easier, hands down.

You can, for the most part, keep your C++ algorithms exactly as is with little rewriting. Rewriting in rust may force you to totally rethink your approach (or use a lot of unsafe, or write really inefficient code).


I'm not sure I fully understand the question. RAII is powerful, but it is just a tool to organize what is still manual resource management. It can help with memory/thread safety, but it doesn't guarantee those things by itself. E.g. it's nice that std::vector's memory buffer will be freed when the object goes out of scope, but I can still do something stupid like hold onto its .data() pointer and try to dereference it later.

to answer the more general question, I think most engineers would prefer to refactor a c++ project over rewriting the whole thing in rust, and rightly so. you usually don't want to fuck around with delicate things that currently work in these sorts of projects. perhaps over time you end up with an FFI layer over a thoroughly tested c++ core. then new features can be written in rust or whatever friendly language is popular at the time.


I appreciate the clarification, my understanding of this stuff is pretty superficial.

Putting it that way, I can see why someone would appreciate that Rust guarantees safety by default, requiring you to opt out explicitly when needed (with `unsafe`), versus the opt-in nature of RAII - for greenfield projects, anyway.


One quibble: one of the best uses for RAII is not memory management but any external resource, like DB connections, file handles, or mutexes.

The built in smart pointers take care of much of memory management. (Although they use RAII internally)


I've been using RAII in C++ since 2002 or so -- so it's not a new thing to do.

Personally, I maintain quite a lot of open source C++ code -- I'm not rewriting that in Rust, but I'm writing new projects in Rust.


RAII was already a thing in C++ compilers for MS-DOS.

Imagine what happens when Microsoft moves to Rust, it will be a red letter day.

Well... a "Microsoft Visual Rust" (integrated with Visual Studio) could be all the motivation I'm looking for to finally learn, mmmm, not learn but to finally put Rust at serious use.

With everything integrated like, you know (and this may sound super crazy) graphically-mouse-operated "create new project", "right click -> add new file to project", resource editor with "create a window with a button, double-click the button and generate a function to handle the event"-sort of thing. F7 to build, F5 to debug. More or less the things we are used to since the past 30 years or so!!!


Maybe they can even recreate that amazing Visual Studio project settings dialog for Rust!


Microsoft have done a lot of work attempting to make every Windows C/C++ API available from Rust. This task is/was too big to do by hand (and sensitive to changes), so they have automated it. See: https://github.com/microsoft/windows-rs

There is some ongoing Rust adoption on Azure and IoT, but I doubt the WinDev C++/COM bastion will ever allow anything else, given how they have managed to keep it going over the years.

I remember using variadic templates to implement a print() function that could print any type of variable, with a variable number of arguments.

I got help to implement it, it was a bit difficult, but quite nice to have.


I haven't used C++ in years. I can remember writing C++11 and transitioning to C++14. And also lots and lots of Valgrind :)

What's it like these days? Is tooling pretty much the same - downloading packages from Linux, some header-only libs, and makefiles? Has the Language Server Protocol made inroads with C++ editors and IDEs?


C++20 is as big an advance as 11 was, and is taking as long to penetrate.

Editors all know C++ now. Generic lambdas in C++14 made coding fun. Template metaprogramming is hardly ever needed or useful anymore. Builds got faster with module support (essentially standardized precompiled headers), and also ccache and ninja. As Concepts penetrate, error messages get radically better.

Valgrind is increasingly unnecessary, as use of operator new vanishes.


One of the most interesting changes to me is how much they are pushing metaprogramming into the language itself _without_ a lot of the traditional annotations that went with TMP before.

For example, annotating a parameter with 'auto' instead of the typename T preamble.


Template metaprogramming was always known as a hack, no more so than among people obliged to do it. Most of the things it was used for are now done with core language facilities designed for the job, and coded in compiled C++ instead of interpreted template flailing, so fast.


