C++20 Concepts: The Definitive Guide (thecodepad.com)
141 points by emma-w on Aug 22, 2021 | 100 comments



I feel like one of the things a "Definitive Guide" to this feature needs to make clear, and maybe even emphasise, is a key way these are different from, say, Rust's traits.

Concepts, just like the SFINAE and constexpr hacks you should discard in their favour, are about only what will compile; you, the C++ programmer, are always responsible for shouldering the burden of deciding whether that will do what you meant, even if you have no insight into the types involved.

Example: In C++ the floating-point type "float" matches the concept std::totally_ordered. You can, in fact, compile code that treats any container of floats as totally ordered. But of course if some of them are NaN that won't work, because NaNs don't even compare equal to themselves. You, the C++ programmer, were responsible for knowing not to use this with NaNs.
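A minimal sketch of that trap, assuming a C++20 compiler (the concept check passes; the semantics don't):

    #include <concepts>
    #include <limits>

    static_assert(std::totally_ordered<float>);  // the concept is satisfied

    int main() {
        float nan = std::numeric_limits<float>::quiet_NaN();
        // none of these hold, which is exactly what breaks totality
        bool lt = nan < 0.0f, gt = nan > 0.0f, eq = nan == nan;
        return (lt || gt || eq) ? 1 : 0;  // returns 0
    }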

Whereas Rust's floating-point type f32 implements PartialOrd (saying you can try to compare these) but not Ord (a claim that all of them are totally ordered). If you know you won't use NaNs, you can construct a wrapper type and insist that it is Ord, and Rust will let you do that, because now you pointed the gun at your foot and it's clearly your fault if you pull the trigger.

This is a quite deliberate choice; it's not as though C++ could have just dropped in Rust-style traits. But I think a "Definitive Guide" ought to spell this out, so that programmers understand that the burden the concept seems to be taking on is in fact still resting firmly on their shoulders in C++.

The other side of this is: if you wrote a C++ type, say Beachball, that implements the necessary comparison operators, then Beachball is std::totally_ordered in C++20 with no further work from you to declare this fact. Your users might hope you'll document whether Beachballs are actually totally ordered or not, though...
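For instance, a sketch (Beachball and its member are made up here):

    #include <compare>
    #include <concepts>

    struct Beachball {
        int diameter;
        // defaulting <=> in C++20 also gives a defaulted ==
        auto operator<=>(const Beachball&) const = default;
    };

    // no opt-in anywhere, yet:
    static_assert(std::totally_ordered<Beachball>);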

I think this will likely prove to be a curse; obviously its proponents think it will work out OK, or even be a blessing.


> I feel like one of the things a "Definitive Guide" to this feature needs to make clear, and maybe even emphasise, is a key way these are different from, say, Rust's traits.

> Concepts, just like the SFINAE and constexpr hacks you should discard in their favour, are about only what will compile; you, the C++ programmer, are always responsible for shouldering the burden of deciding whether that will do what you meant, even if you have no insight into the types involved.

To be fair, that's not what makes them different from Rust. Unless there's something in Rust that I missed, it offers no guarantees that the implementation of the trait is consistent with its semantics.

Whether it's traits or concepts, it's still about compile-time type-checking, not actual contracts.


The trick is that in Rust the implementer chooses which traits to implement for their type, while in C++ the concept declares which properties it requires and then applies to any class with matching properties, whether that's desirable or not.

So in C++ the fact a Mouse is food::LovesCheese doesn't actually tell me whether the Mouse's programmer has any idea what it means to food::LovesCheese or whether the food::LovesCheese programmer knows about a Mouse. I need to carefully read the documentation, or the source code, or guess. It might be a complete accident, or at least an unlucky side effect.
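A hypothetical sketch of such an accidental match, with made-up names standing in for food::LovesCheese:

    // nothing here mentions cheese, yet the concept recruits Mouse anyway
    template <typename T>
    concept LovesCheese = requires(T t) { t.eat(); t.squeak(); };

    struct Mouse {
        void eat() {}     // written with no knowledge of LovesCheese
        void squeak() {}
    };

    static_assert(LovesCheese<Mouse>);  // matches on syntax alone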

In Rust the fact a Mouse has the trait food::LovesCheese always means specifically that either (1) the Mouse programmer explicitly implemented food::LovesCheese as part of a Mouse or (2) the food programmer explicitly implemented a way for Mouse to food::LovesCheese. Rust requires that nobody but them can do this: if I have a Mouse I got from somewhere, but alas it doesn't food::LovesCheese and I wish it did, I need to build my own trivial wrapper type MyCheeseLovingMouse and implement the extra trait.

Either way, as the user of any Mouse, you can be sure that the fact it has food::LovesCheese in Rust is on purpose, and not just an unfortunate behaviour that nobody specifically intended and that you need to watch out for.


Okay, now I understand what you meant. Concepts automatically "apply" to anything that satisfies them, whereas Rust traits have to be implemented deliberately.

Thank you for clarifying.


First off, we should be clear what we mean by "guarantee". When you're talking about Rust, "guarantee" means "the compiler enforces this in an ironclad way, such that the programmer cannot mess it up" (generally with the implied caveat that we assume unsafe is not used). C++ programmers generally use "guarantee" to mean "the standard says you should follow these rules", because as the language is not type-safe the compiler is in general unable to enforce much of anything that is interesting, making the Rust sense of the word "guarantee" not a particularly useful concept in that language. Confusion between these two meanings of "guarantee" (along with related concepts like "safe") has resulted in a lot of misunderstandings over the years.

Bearing this in mind, neither concepts in C++ nor traits in Rust guarantee any semantics in the Rust sense of the word. In the C++ sense of the word, C++ concepts do carry guarantees, in that the spec says what they should do. But, in this sense, so does Rust: PartialEq [1], for example, has semantic requirements spelled out in the documentation. In my mind, the difference is that Rust programmers tend to program defensively, not trusting programmers to get things right that the compiler doesn't enforce. Thus you see a lot of conversations along the lines of "what if the implementer of trait X does something weird?" in the Rust space. This may give the impression that Rust traits don't have a clear and consistent semantics associated with them. But that's not right: the implementer of PartialOrd, for example, is absolutely expected to implement a proper partial order, as explained in the documentation. A specification for Rust could specify associated semantics for those traits, just like in C++.

[1]: https://doc.rust-lang.org/std/cmp/trait.PartialEq.html


You're correct that Rust's compiler can't guarantee semantics. However because the programmer has to actually implement Rust traits, they can and they should.

That option isn't available to C++ because of how concepts work. PartialEq and Eq wouldn't be different concepts in any practical sense; they would just be a funny way to write a comment. But in Rust they are different traits, because even though Eq doesn't add any syntax, it does add semantics.
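To make that concrete, a sketch of the collapse (names made up): both "concepts" below necessarily accept exactly the same types, because Eq adds no syntax for the compiler to check:

    #include <concepts>

    template <typename T>
    concept PartialEqLike = std::equality_comparable<T>;

    template <typename T>
    concept EqLike = std::equality_comparable<T>;  // nothing more we can say

    static_assert(EqLike<float>);  // float is "EqLike" despite NaN != NaN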

There is one further trick I wasn't going to mention, but you've sort of brought it up. Rust's unsafe is partly about taking responsibility for the safe and correct operation of your code. That's a pretty alien thought in C++, and although I'd argue it shouldn't have been, it's clearly too late now. As a result Rust has the idea of unsafe traits. If you get PartialEq wrong, the programmer using your type curses and probably discards it as hopelessly broken, but their program must not have Undefined Behaviour as a result, by Rust's definition.

However, if you implement an unsafe trait like Send wrongly, maybe the program now has Undefined Behaviour; as a result (signalled by the unsafe keyword you'll need to use to implement it), you are responsible for making sure your implementation is in fact semantically correct.

This distinction wouldn't mean anything in C++, where not only are your implementations of concepts never required to obey the semantics, but the language doesn't even allow you to express that your implementation doesn't obey the semantics: the fact it's a syntactic match means it gets recruited by the concept, and too bad.


Are you sure about this? In my tests floating-point types are always considered partially ordered, not totally ordered. This page [0] even mentions this in the notes towards the bottom.

[0]: https://en.cppreference.com/w/cpp/utility/compare/partial_or...


Note that std::totally_ordered is a concept (this topic is about "C++20 Concepts: The Definitive Guide") whereas you're talking about std::partial_ordering, which is a class, also introduced in C++20.

Specifically, these ordering classes are the result type of the spaceship operator, and the concept doesn't care whether you have a spaceship operator at all.
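A small sketch of the distinction; both facts below hold at once:

    #include <compare>
    #include <concepts>

    static_assert(std::totally_ordered<double>);  // the concept: satisfied

    int main() {
        // the class: <=> on doubles yields std::partial_ordering, which can
        // report "unordered", a possibility the concept has no way to express
        std::partial_ordering r = (1.0 <=> 2.0);
        return r == std::partial_ordering::less ? 0 : 1;
    }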


A good article on the C++20 three-way comparison ("spaceship") operator <=>: https://devblogs.microsoft.com/cppblog/simplify-your-code-wi...


> I feel like one of the things a "Definitive Guide" to this feature needs to make clear, and maybe even emphasise, is a key way these are different from, say, Rust's traits.

What use would that be to a C++ developer who doesn't know Rust?


Ah, sorry, I didn't intend that it should address this as "Here's why it's different from Rust" but rather "Here's a surprising thing about what it does/doesn't do in your program", with Rust offering a contrast to show this isn't just "But that's how computers work".

As you see from the standard library concepts, the emphasis is on semantic claims like "totally ordered", but this feature does not actually provide semantics, and I think that's a trap programmers are likely to fall into.


Rust traits are opt-in, C++ concepts are opt-out; you can think of C++ concepts as having a "blanket" implementation for all types, so you'd need to opt out using negative reasoning. For example, in C++, you can declare a trait:

    template <typename T>
    struct totally_ordered : std::false_type {};
and then explicitly implement it for some types, but not for floats.

Then you can define a TotallyOrdered concept that requires the trait.
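Continuing that sketch, the gated concept might look something like this (all names made up except std::totally_ordered):

    #include <concepts>
    #include <type_traits>

    template <typename T>
    struct totally_ordered : std::false_type {};

    template <>  // explicit opt-in for int, deliberately none for float
    struct totally_ordered<int> : std::true_type {};

    template <typename T>
    concept TotallyOrdered =
        totally_ordered<T>::value &&  // the opt-in gate
        std::totally_ordered<T>;      // plus the usual syntactic checks

    static_assert(TotallyOrdered<int>);
    static_assert(!TotallyOrdered<float>);  // right syntax, never opted in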

The C++ standard library didn't do this, so if you happen to accidentally implement a type with an API that conforms to TotallyOrdered, then it becomes "accidentally" TotallyOrdered, which is a big footgun.


I think this might have been a better way to explain it than my attempt (since it looks like that confused some readers). However, you say C++ concepts are "opt-out"; how does a class opt out?

If your Delicious concept mistakenly applies to my Dessert, how do I, as the author of the Dessert, tell C++ "No, no, when people ask if a Dessert is Delicious, tell them it isn't"?


Either the writer of the concept "opts-in to making the concept opt-in" (e.g. using the approach above), and that way, classes must opt in, _or_ classes opt-out by not implementing the concept API.

There is no way for a class to both implement the concept API and opt-out. The only way to support it is for the C++ concept writer to make their concept opt-in.


How can floats be totally ordered? This isn't even a matter of NaNs. A set of floats where two or more distinct floats compare equal does not permit a total ordering.


I think that's the parent comment's point. Floats are not totally ordered but C++'s type system is weak enough that it appears they are.


IEEE 754 actually provides a total ordering algorithm, but it's not what you get when you write double a, b; ... a < b
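C++20 does expose that algorithm as std::strong_order if you ask for it explicitly; a quick sketch:

    #include <compare>
    #include <limits>

    int main() {
        double nan = std::numeric_limits<double>::quiet_NaN();
        bool partial = (nan < 1.0);                    // false: unordered under <
        bool total = std::strong_order(nan, 1.0) > 0;  // true: totalOrder sorts
                                                       // this quiet NaN last
        return (!partial && total) ? 0 : 1;            // returns 0
    }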


What do you mean? I'm pretty sure no two distinct floats compare as equal.


-0 and 0 do.


-0.0 and +0.0?


Are those floats actually distinct? I mean, you can write 3.0 + 4.0 as well as 7.0 and mean the same value. Are "-0.0" and "0.0" actually different?

(This is a serious question I honestly don't know.)


As other users mentioned, the most relevant part is that when you divide by zero, you get different infinities depending on whether you're dividing by +0.0 or -0.0. But even beyond that: the binary representation of floating point numbers [1] necessarily means that there are two values for zero with distinct bit patterns.

Floating point numbers have a dedicated sign bit, which means that you can flip the sign of any floating point number and get a float with a different bit pattern (and opposite sign). That means you necessarily get both +0.0 and -0.0, and they have different internal representations in bits.

This is one of the major advantages of two's complement notation for representing integers: it doesn't have a dedicated sign bit in the same way, so you only get one representation for 0. You can still check the sign by looking at the top bit, but if you flip it for the number 0, you don't get -0 (which doesn't really exist), you get -128 (for 8 bit signed integers).

[1]: https://en.wikipedia.org/wiki/Double-precision_floating-poin...
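A quick way to observe all of this, sketched with C++20's std::bit_cast:

    #include <bit>
    #include <cstdint>
    #include <cstdio>

    int main() {
        double pz = 0.0, nz = -0.0;
        std::printf("%d\n", pz == nz);  // 1: they compare equal
        std::printf("%016llx\n", (unsigned long long)
            std::bit_cast<std::uint64_t>(pz));  // 0000000000000000
        std::printf("%016llx\n", (unsigned long long)
            std::bit_cast<std::uint64_t>(nz));  // 8000000000000000
        std::printf("%g %g\n", 1.0 / pz, 1.0 / nz);  // inf -inf
    }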


They are distinct values but they compare equal. For almost all uses they are effectively equal, except where you are producing infinities.


I hadn't thought about the infinities part. If they had the same behavior in all operations involving other numbers, then I could see how they could nonetheless be the same from a C++ type perspective, but given they behave differently, C++ certainly couldn't consider them the same.

Thanks for clearing that up!


If you keep infinities and NaNs out of a subsystem, then float and double are totally ordered within that subsystem. So, it is a choice. I would not hesitate on that basis.

Of more moment is that floating point variables often represent imprecise values, such that e.g. 1.41 could represent a value that should equal another transcribed as 1.39. The type system will not help you there. You are obliged to code in tolerances the hard way.
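For instance, a sketch of such a tolerance comparison (the default tolerances are placeholders; the right values depend entirely on the application):

    #include <algorithm>
    #include <cmath>

    // mixed absolute/relative tolerance comparison
    bool approx_equal(double a, double b,
                      double abs_tol = 1e-12, double rel_tol = 1e-9) {
        double scale = std::max(std::fabs(a), std::fabs(b));
        return std::fabs(a - b) <= std::max(abs_tol, rel_tol * scale);
    }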


Is it just me, or have C++ errors gotten a lot better? The example below, from the guide, seems a lot more ergonomic than in years past.

  test.cpp: In instantiation of ‘T add(T, T) [with T = std::basic_string<char>]’:
  test.cpp:17:21:   required from here
  test.cpp:11:22: error: static assertion failed
   11 |   static_assert(std::is_integral_v<T>);
      |                 ~~~~~^~~~~~~~~~~~~~~~
  test.cpp:11:22: note: ‘std::is_integral_v<std::basic_string<char> >’ evaluates to false
Build finished with error(s).


One of clang’s stated goals is “expressive diagnostics”

See https://clang.llvm.org/diagnostics.html ; that page is not dated, but compares to gcc 4.9, which is from April 22, 2014.

gcc also has worked on improving its error messages (most likely because of competition with clang), so that comparison probably isn’t accurate anymore.


The link leads to a 404.


The comment markup parser mistakenly understood the link to include the following semicolon, but you couldn’t clearly see it because the semicolon’s descender hid the underline.


Thanks; HN thought the semicolon was part of the URL. Added a space.


There has been more interest from modern programming languages and modern compiler developers in good diagnostics in two ways: 1. The earlier you report the problem, the cheaper the fix and 2. The better the diagnostic the more likely the programmer's fix is appropriate to the actual problem they had.

I agree this is great news. I actually had to write MS SQL for the first time last week, and it was disappointing, though sadly expected, to have it respond to a common but not technically standard SQL syntax I'm used to with "Syntax error", as though that's somehow helpful. Such poor errors meant I spent more time reading the documentation than writing queries, even though I have years of experience across several other SQL dialects.


Yeah, and besides the focus on error messages in past years, concepts will naturally allow template errors to be a _lot_ clearer, since they can serve as the specification of templated type requirements.


One of the major motivations for concepts is to allow better error messages.


And ironically it’s questionable if it helps. Worse error messages were one of the main reasons it got blocked from 17.


It's not just you, diagnostics continuously and noticeably improve. A lot of effort is spent on this.


Yes. I had build failures related to LTO on our production code that uses gcc 7.5 (Ubuntu 18.04). I had to build it with gcc 9.1 (Ubuntu 20.04) in order to get a useful error message that saved me an hour or two of faffing about.


clang kinda started the race


And now lags behind in C++20 support, as apparently not everyone is keen to upstream whatever they are doing.

Clang concepts were implemented by one guy.

https://cppcast.com/saar-raz-clang-hacking/


Strange


Actually not that strange.

Apple doesn't really care about C++20. Their LLVM efforts are focused on Objective-C and Swift.

C++ as used on Apple platforms is mostly related to the Metal Shading Language (a C++14 subset) and IO/Driver Kit (an Embedded C++ variant).

Google has their style guide (only recently updated to allow C++17), isn't a compiler vendor, and apparently losing the ABI vote has made several employees move away from their clang involvement.

Then the other vendors like Intel, Embarcadero, IBM, Codeplay, Sony, Nintendo, ARM, NVidia, AMD have their own agendas and do not upstream everything.


That has nothing to do with C++. It's entirely a compiler thing.


Not entirely true - one of the motivations for new features and libraries C++ adds is to make better errors possible. In this case, `static_assert` was added in C++11, and `is_integral_v` was added in C++17. Concepts, the new feature this post is about, falls under that category as well.


True, although the distinction hardly matters to users.


The most important thing to say about this "definitive guide" is that it delays to the end presenting the overwhelmingly most important detail about Concepts: how to use them.

The right place to put a concept name, in production code, is in place of "typename" in a template definition, or even better in place of "T" in the function argument declaration. That is, instead of

  template<typename T>
  T add(T a, T b)
      requires addable<T>
  {
    return a + b;
  }
say

  template <addable T>
  T add(T a, T b) {
    return a + b;
  }
or

  auto add(
      addable auto a, 
      addable auto b) {
    return a + b;
  }
depending on whether you want to enforce a and b having the same type (which is another omission).

Most often it is not necessary, and not wanted, to enforce a and b having the same type.


> Most often it is not necessary, and not wanted, to enforce a and b having the same type.

That really depends on what you're trying to do. Presenting these two different declarations as somehow equivalent is very misleading and I'm glad that the author didn't do that.


They are not presented as equivalent, but as a choice.

The author's failing was in presenting neither of them until long after they were due, and presenting thoroughly inferior alternatives in the meantime.


Another language-changing paradigm added to C++. I haven't even finished learning the old ones yet.

Sometimes I think C++ would be better off if the committee stopped accepting proposals that add new features.


I think there's more to it (changing the language) than you're crediting.

First, you don't have to use the new features (though eventually you'll be reading the code of people who did, so this is only half-valuable). There is new C++11 code being written every day -- in volume (a hard-to-pin-down amount) it's sadly more than 50%. The usage surveys don't really capture this clearly (and it's not clear they could).

Second: often new features are for library writers, or are out there for library writers to use (e.g. coroutines, which probably will not be appropriate for many users before c++23, but pretty much need to be available for people to experiment with).

Third: the new features tend to be additive. For example you don't need to use most of the stuff in <algorithm> -- stick to a for loop unless you want to take advantage of some new capability (e.g. execution policies, which aren't ubiquitous). Concepts are the same way: they will improve error messages and reduce bugs, but if you don't use them your code will, 99.9% of the time, work just fine. When you see a very simple example that uses concepts, it's not surprising that the concepts don't really improve a simple add function -- the case is deliberately simple for explanatory purposes.

Languages move forward. Even go recently caved and added in generics.


I think you hit it on the head: most new features are for library writers who need to capture every edge case succinctly. As someone who has used C++98, and only a little C++11, for nearly 15 years, and writes no libraries, I've had little real need for any new features.


The less you use the new features, the less benefit you get from them. You could stick to K&R C and get no benefit at all, but that would be just as foolish as what you are doing.

The new features are there to improve your experience, and to make your code more reliable. When you have a choice between old and new, new is usually better.

Sticking to K&R, you would have as many bugs and crashes as other K&R code. Sticking to modern C++ makes most of such bugs impossible. That is progress.

You can stop learning and take up complaining at any time, as you have done.

But you can also start learning again at any time. Now is always a good time for that.


You don’t know what domain the person you are answering works in, so you cannot make such sweeping statements. Implying someone is foolish is plain rude. https://news.ycombinator.com/newsguidelines.html


Anyone can do, or choose not to do, a foolish thing. Encouraging against the foolish thing does not imply anyone is foolish.


I'm not sure how you drew a line from my discussion of c++98/11, to K&R C.


They are all points on a continuum. There is no more wisdom in stopping at 98 or 11 than in stopping at K&R.


>you don't have to use the new features

But it complicates the compiler, complicates the tools, complicates the error messages,...

Take something as "simple" as operator overloading. If a beginner does a "5"+5, the compiler is forced to respond with "no operator + for the types given, types are: int, std::string" instead of the infinitely more friendly "can't add string to int". I actually _like_ operator overloading, I wish more languages stopped the silly practice of making language-provided primitives somehow sacred and closed-to-extension. But look at how even this relatively straightforward and useful feature ruined the error message.

Part of this particular example is bad error message design. I mean, what are the chances somebody intended to do operator overloading but then forgot to implement the method? It would have been much better to say "Can't add string to int. Did you forget to implement operator+(...)?". This centers the common case first, then reminds the experienced about what could have happened in their case. But even this might confuse beginners and convince them that "5"+5 is somehow a reasonable thing to say (because it actually is in some contexts), and that they just need to get that fabled operator+ from somewhere to make it work, instead of just outright telling them "nonsense, not gonna work".

Extensions are always, inherently, abstractly, a tradeoff. Because even if you are a perfectly spherical coder who doesn't interact with other code in any way, you at least interact with the language tools. And the tools _must_ know all of the language: instead of just having the luxury to say "sorry mate, can't do that" when encountering "5"+5, the editor/IDE/compiler has to spend the cycles to see if it _can_ do that, then report a message that simultaneously says that you can but can't. As the features accumulate and interact, the tradeoff curve increasingly inflects against adding new things.

All of this in the abstract is a fairly balanced argument that applies to any language; the particular case of C++ is much, much worse. C++ has this awful way of "retconning" new features. I can never quite put my finger on it, but C#'s creators practically design a new superset every major version and call it the same name without ever making people angry or afraid that the language is spiraling out of control, yet every C++ feature gives me this dark feeling of "is this never ever going to end? how far are you willing to take it?".

Part of that is undoubtedly the syntax. Remind me again, which language class (regular, context-free, ...) does C++'s syntax belong to? The other day I was fooling around and thought of adding discriminated unions to C++, just a thin syntax layer over a more verbose idiom. The very first thing is to parse the damn thing, but that way lies madness. Because C++ has no agreed-upon, guaranteed formal grammar. There is a grammar online, but it's gargantuan and was written by one man; there is a grammar in the standard, but it's gargantuan and the standard says it's there for illustration only; there is a grammar (implicitly) in the source code of any working parser, but it's gargantuan and full of hacks. So for all practical intents and purposes, C++ has no formal grammar. Think about that for a minute: a formal language that has no formal grammar. It's the bare minimum any formal language can have, and yet you can't do it safely because of the massive, beast-like bloat that is that language.


> Take something as "simple" as operator overloading. If a beginner does a "5"+5, the compiler is forced to respond with "no operator + for the types given, types are: int, std::string" instead of the infinitely more friendly "can't add string to int".

You are not allowed to overload operators involving only built-in types so operator overloading changes nothing about the expression "5"+5. Even if you could overload operator+ here, there is nothing preventing a compiler from giving you the friendly explanation if that operator does not exist (perhaps with an additional hint that it could be defined).

(Also, the expression "5"+5 is valid C++ but Clang has -Wstring-plus-int to warn you that it might not do what you expect.)


It’s true that C++ is huge but the complaint should be about the lack of epochs that would let us simplify the language instead of about very useful new features such as concepts, modules, or constexpr. If you’re already programming in C++ and you can’t take a few days every 3 years to learn about new additions, you’ll have even more problems with other languages.


> but the complaint should be about the lack of epochs

Rather than something as coarse as epochs, C++ includes a lot of fine grained feature test preprocessor definitions such as (to pick one at random) '__cpp_lib_constexpr_algorithms'.

It's harder to make this work with breaking API changes of course. I assume non-preprocessor versions that work with modules will be available in C++23...and perhaps they will support API changes?


Epochs would be more usable. Nobody can afford to litter their code with hundreds of feature tests.

API changes that change semantics are forbidden in Standard interfaces, although APIs can be extended, backward-compatibly. To make an actual change, we need to introduce a new name. Thus, when we get around to modernizing std::vector, the fixed version will have a different name, maybe std::vec, but conceivably std2::vector.

I don't know of any plans for feature test macro analogs that would integrate with modules.

To be usable, epochs would have to apply locally to a file, not to anything #included into that file, nor to any file #including it. And, it should be possible to tie them to modules, to say "you can use this module only from code in epoch C++XX or later".


> Thus, when we get around to modernizing std::vector, the fixed version will have a different name, maybe std::vec, but conceivably std2::vector.

Let us hope that if such a name transition ever happens, the hideous plague of multi-case identifiers does not infect the standard. Leave that to (a subset of) user code.

Either std26::vector or std::vec would work for me. `std26::vec` would probably be better because of possible confusion (as to which vector implementation is desired) due to use of `using` directives.


> I don't know of any plans for feature test macro analogs that would integrate with modules.

What is the plan, if any, for deciding at compile time whether the format module is available or if I would still have to use fmtlib/fmt?


Sorry, I haven't paid any attention at all to fmt.


On one hand, I hear you; but on the other, on some level the design goes all the way back to 1994: https://programowaniezpasja.pl/wp-content/uploads/2019/05/Cp...


> Another language-changing paradigm added to C++

no, people were writing "concept-like" code since C++98 in a very bloated way with e.g. sfinae, this is just standardizing existing practice (while improving it of course)


Concepts are just nice syntactic sugar for stuff that already existed in the wild for decades, hardly "language-changing".


Concepts have existed informally since the original C++ STL. For the last 25 years people have been trying to make them a language construct. The current design is the one compromise people could agree on while being implementable and backward compatible.

I haven't used it much yet, but I would still say it is pretty good although far from a proper template type system.


> Another language-changing paradigm added to C++

I agree. Somewhere in the piled-high-and-deeper complexity of C++ there is one excellent, modern language that could be carved out.

Maybe it is only the subset since C++17.


Scott Meyers thinks similarly, I think. https://www.youtube.com/watch?v=KAWA1DuvCnQ


Scott Meyers was never a production coder. His schtick was teaching, and he got tired.

The additions since he bowed out improve the experience of production coders.


He got tired because of the insane complexity of the language, and in the video I linked he supports his complaint with extensive and damning examples.


His difficulties are characteristic of someone not seeking solutions to actual daily engineering problems, and instead getting lost in a maze of language lawyering.

If you approach features in terms of how they can be useful when coding, almost all of his difficulties never arise, or are easily sidestepped. For working coders that becomes second nature.


Just like Java, VB and C# are getting theirs.


C++23 vs C++20 seem to be less drastic. So maybe they will slow down to a slow trickle at some point ;)


C++98 was big. C++11 was big. C++20 was big. Probably expect C++29 to be big.

Nine to twelve years seems like it should be enough time to adjust.


> The above definition of add is equivalent to the one using static_assert.

They are not though - the equivalent C++17 would be

  template<typename T>
  std::enable_if_t<std::is_integral_v<T>, T> add(T a, T b)
  {
    return a + b;
  }
The difference is that these make the function unavailable for overload resolution for non-integral T but another implementation might still cover them while the static_assert version leaves the function available for overload resolution but then errors if it is selected.
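A sketch of why that matters: with the enable_if form you can supply a fallback overload for non-integral types (the fallback here is hypothetical), whereas the static_assert form would keep the asserting template in contention for those calls:

    #include <string>
    #include <type_traits>

    template <typename T>
    std::enable_if_t<std::is_integral_v<T>, T> add(T a, T b) { return a + b; }

    // fallback for everything else
    template <typename T>
    std::enable_if_t<!std::is_integral_v<T>, T> add(T a, T b) { return a + b; }

    int main() {
        add(1, 2);                                // integral overload
        add(std::string("a"), std::string("b")); // SFINAE picks the fallback
    }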


I guess this is somewhat beside the point, but checking whether something can be multiplied by -1 is not great as a definition of subtraction. You'll typically want to use the additive inverse directly.

I mean sure you can extend any commutative group into a module over the integers but you probably don't want to make this a hard requirement just to have subtraction.


Why would this be an issue? I'm under the impression that this is fine if we're operating with the standard addition operator and the standard field that everyone is used to working under. Isn't that the definition of the inverse? I understand that in different fields you have different operators but is that relevant here?


Random example: In geometry code, you might distinguish between points and vectors, where point + vector = point, point - point = vector etc.

It can then also be convenient to have a special type/value Origin, which (while functionally identical to point(0, 0)) allows you to e.g. write vector = point - origin to clearly express your intent of turning a vector into a point.

-1 * origin is not meaningful, but point - origin is (while point + origin is not).
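A hypothetical sketch of such types:

    struct Vec2   { double x, y; };
    struct Point2 { double x, y; };

    // point + vector = point; point - point = vector
    Point2 operator+(Point2 p, Vec2 v) { return {p.x + v.x, p.y + v.y}; }
    Vec2 operator-(Point2 a, Point2 b) { return {a.x - b.x, a.y - b.y}; }

    // deliberately absent: operator+(Point2, Point2) and operator*(double, Point2),
    // so -1 * p does not compile even though p - q does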


But in this case I don't think you're working on the standard field. I mean you're working in different coordinates. I do understand that the example only works in R1 space with standard addition, but that's kinda the point of my question. That the "error" isn't really an error unless you're being pretty pedantic.


If you were working with the standard field then why would you need to bother defining the general concept of subtraction?

If you want to define the concept of subtraction then you probably don't want to assume you can multiply elements with an integer. Not that it can't be done but in general it will be a lot easier to define the additive inverse directly (if one exists).


I mean but under the example that the author is doing they are working with standard integers (integrals). So this appears to me to still be the standard addition and subtraction. And with standard R1 subtraction it is the same as inverse of addition.

I mean if we move into different coordinates and different fields, then yeah things change, but I don't see what the issue is with the example given here.


This is correct. The expression tested in the concept definition should not be "-1*x", but "x-x".
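i.e. something along these lines (the concept name is made up):

    template <typename T>
    concept subtractable = requires(T a, T b) { a - b; };

    static_assert(subtractable<int>);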


Something about type constraining auto just seems funny. I can see how it's useful from an error minimization / code clarity standpoint, but it almost seems counter to the point of even using auto.
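For context, the syntax in question, as a minimal sketch:

    #include <concepts>

    std::integral auto n = 42;      // deduction still happens (n is int),
                                    // but the result must satisfy the concept
    // std::integral auto x = 4.2;  // would not compile: double is not integral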


Initially auto was not required (nor allowed); the concept name itself was enough. Adding auto is one of those compromises that please no one but was necessary to move the proposal forward.


I believe that you do not sufficiently understand the giant footgun that is unconstrained auto, especially in the context of very template-heavy code. Concepts solve the issue that liberal use of auto will allow template instantiations to succeed that are plain wrong, in that they will happily do the wrong thing just because the types involved fit the constraints by mere chance and not because they were meant to be used together like this.


Unconstrained auto is unwise in globally accessible templates that participate in overload resolution, but is fine in lambda literals.


Am I going insane, or does the following NOT work(?):

    template<typename T>
    T mul(T a, T b) { return a * b; }

    template<typename T>
    T mul(T a, int b)
    {
      std::string ret_val{std::move(a)};
      a.reserve(a.length() * b);
      auto start_pos{a.begin()};
      auto end_pos{a.end()};
      for (int i = 0; i < b; i++) {
        std::copy(start_pos, end_pos, std::back_inserter(a));
      }
      return ret_val;
    }

I knew it looked odd to me for some reason...I had to re-write it as follows:

    template<typename T>
    T mul(T a, T b) { return a * b; }

    template<typename T>
    T mul(T a, int b)
    {
      std::string ret_val{};
      ret_val.reserve(a.length() * b);
      auto start_pos{a.begin()};
      auto end_pos{a.end()};
      for (int i = 0; i < b; i++) {
        std::copy(start_pos, end_pos, std::back_inserter(ret_val));
      }
      return ret_val;
    }

Did I miss something??


Four spaces before each line will format your code so other people can read it.


You only need two on Hacker News, not four. https://news.ycombinator.com/formatdoc


TIL thanks.


Is this a scrappy version of type classes (Haskell) or traits (Rust)?


Sort of, in that they are a thing you do to constrain generic parameters.

A significant difference in my understanding is that type classes and traits are required, but concepts are not. That is, using the example from the article, a concept can tell you if you're passing something that doesn't add, but you can add inside a function without using a concept. In other words:

    fn add<T>(x: T, y: T) -> T {
        x + y
    }
This won't compile in Rust:

    error[E0369]: cannot add `T` to `T`
     --> src/lib.rs:2:7
      |
    2 |     x + y
      |     - ^ - T
      |     |
      |     T
      |
    help: consider restricting type parameter `T`
      |
    1 | fn add<T: std::ops::Add<Output = T>>(x: T, y: T) -> T {
      |         ^^^^^^^^^^^^^^^^^^^^^^^^^^^
    
This suggestion works, but you don't have to write it this way. Once the constraints get more complex than T: Foo, I personally switch to this form:

    use std::ops::Add;
    
    fn add<T>(x: T, y: T) -> T
    where
        T: Add<Output = T>,
    {
        x + y
    }
    
I find it a little easier to read. YMMV.

Whereas in C++, this does compile:

    template<typename T>
    T add(T a, T b)
    {
      return a + b;
    }
If you try to add something that doesn't have + defined:

    int main(void) {
        add("4", "5");
    }
you get this

  <source>:4:12: error: invalid operands to binary expression ('const char *' and 'const char *')
    return a + b;
           ~ ^ ~
  <source>:8:5: note: in instantiation of function template specialization 'add<const char *>' requested here
      add("4", "5");
      ^
Whereas, if you do what the article does (though I'm using char* instead of std::string, whatever)

    #include <concepts>
    template<typename _Tp>
    concept integral = std::is_integral_v<_Tp>;
    
    template<std::integral T>
    T add(T a, T b)
    {
      return a + b;
    }
    
    int main(void) {
        add("4", "5");
    }
you get

    <source>:12:5: error: no matching function for call to 'add'
        add("4", "5");
        ^~~
    <source>:6:3: note: candidate template ignored: constraints not satisfied [with T = const char *]
    T add(T a, T b)
      ^
    <source>:5:15: note: because 'const char *' does not satisfy 'integral'
    template<std::integral T>
                  ^
    /opt/compiler-explorer/gcc-11.1.0/lib/gcc/x86_64-linux-gnu/11.1.0/../../../../include/c++/11.1.0/concepts:102:24: note: because 'is_integral_v<const char *>' evaluated to false
        concept integral = is_integral_v<_Tp>;
                           ^
This doesn't feel like a huge change because add is such a small function, but if it were larger and more complicated, the error with a concept is significantly better.


Is this in Dlang?


I’m sorry for this petty criticism, but every time I gaze upon “modern” C++ I wonder if the language could get any uglier, and with every revision it somehow manages to.


Amazing that this took ~20 years to design and it still had to be rushed through to get into C++20…


You can take K&R C from my cold, dead hands.

There was no header file hell. Often you didn't need a single header to be included in your code: most functions returned int (or nothing), and if you needed something that returned double, you could just say so.

I remember being excited about function prototypes, but something was irretrievably lost at that point. The primal elegance of C as it was conceived by its creators is long forgotten now.

(If you want, you can still experience it with the Tiny C Compiler that seems to continue to understand K&R C code just fine.)


I think you're trying to say you like C but dislike C++'s complexity.

You will enjoy Go.

Go can be understood as an improved, modernized C that doesn't abandon C's simplicity.


The "Go is like C" comparison never made any sense to me.

Go has a sophisticated runtime with transparent N:M threading and built-in concurrency primitives, Interfaces, garbage collection and a large standard library.

Go is only simple when compared to the other languages that sit in a similar space, like Java and C#.


C's runtime is UNIX, that is why we got POSIX.


> Go can be understood as an improved, modernized C that doesn't abandon C's simplicity.

This is false, because Go has a garbage collector by default. That isn't an "improvement" in any way, except for those who don't care about memory management and predictable, deterministic performance.



