So it looks like people keep repeating that C++ templates are bad and Rust macros are good. Can someone tell me what the real difference is? I'm not sure I get it, but I feel like either both are good or both are bad.
C++ templates are intertwined with parsing and typechecking, while Rust has two separate features that cover the same ground: macros, which are purely parse-time, and generics, which run purely at typechecking time. This has advantages and disadvantages. The advantage of C++'s model is that it has one feature instead of two, and this allows for features like SFINAE that Rust has no analogue to. The advantages of Rust's model are that (1) the implementation is a lot simpler; (2) you can typecheck generics before they're expanded, which is a big improvement in ergonomics; (3) generics cover 90% of the use cases, so you rarely have to reach for macros.
C++ templates also aren't general purpose macros. For example, in Rust you can write a macro that takes an expression `e` and replaces it with `e + 1`.
However, I believe you can't do that with C++ templates. C++ template parameters can be types or primitive values, but not expressions. And the body of a template has to be some sort of declaration, e.g. a function definition or struct definition, but not an expression or statement.
Right, you can write a function that returns its argument plus one, but it can't e.g. evaluate `e` more than once (which both C and Rust macros can).
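A minimal sketch of that difference, using a hypothetical `add_twice!` macro (not from any real library): a function receives its argument's value once, while the macro pastes the expression in twice.

// The macro duplicates its argument expression, so any side effects
// run once per paste; no function call could replicate this.
macro_rules! add_twice {
    ($e:expr) => { $e + $e };
}

fn main() {
    let mut n = 0;
    let mut bump = || { n += 1; n };
    // Expands to `bump() + bump()`: the closure runs twice (1 + 2).
    assert_eq!(add_twice!(bump()), 3);
}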
C++ templates are bad because they are untyped generics (e.g. C++ doesn't check that the key type for a hashmap is hashable; instead you get a bunch of errors that there is no "hash" overload accepting certain arguments when you try to call a method on the hashmap).
C++ tried to add a type system for templates with "concepts" but it was scrapped.
Rust instead has properly typed generics using the trait system, resulting in proper static checking and proper error messages (although currently without higher-kinded types and const generics, which C++ does support).
Rust also has a powerful macro system, which is unrelated to templates, although you can use it to build kinds of abstraction that the language doesn't support directly (like higher-kinded types, const generics, or abstracting over mutability), at the cost of having to perform instantiation explicitly, losing inference, and carrying a higher cognitive burden.
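As a sketch of that trade-off, here is one way a macro can stand in for the missing const generics (the `Zeroed` trait is hypothetical, just for illustration): instantiation is explicit, one impl per array length.

trait Zeroed {
    fn zeroed() -> Self;
}

// Without const generics there is no `impl<const N: usize> ... for [u8; N]`,
// but a macro can stamp out an impl for each length we explicitly list.
macro_rules! impl_zeroed_for_arrays {
    ($($n:expr),*) => {
        $(
            impl Zeroed for [u8; $n] {
                fn zeroed() -> Self { [0u8; $n] }
            }
        )*
    };
}

impl_zeroed_for_arrays!(1, 2, 4, 8);

fn main() {
    let buf = <[u8; 4]>::zeroed();
    assert_eq!(buf, [0u8; 4]);
}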
Concepts are on track for C++20 and they are already available on gcc to play with.
Besides that, it is already possible to use typed templates in C++, at least with any C++14 compiler and even better in C++17, even if it requires a bit more effort from the implementer of the generic code.
Basically, by making use of type traits, static asserts, constexpr if, and enable_if.
Yes, it isn't as clean as Rust, but as shown by Andrei Alexandrescu with D, it opens the door to very powerful designs.
But the C++20 concepts are only "concepts lite". They do not check that the implementation of the template function uses only operations available through the concepts.
Suppose you have a concept Hashable that is only satisfied by types with a hash() function. You can still, in the code of your HashMap, do a comparison with the < operator, which the Hashable concept doesn't guarantee. In that case you would only get an error at instantiation time, and only for Hashable types that don't have '<'.
In Rust, the compiler makes sure that your generic function only uses operations that exist in the traits declared in the function signature.
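A minimal sketch of what that looks like (hypothetical function, not from the thread): the bound only promises Hash, so the `<` in the body is rejected when the function itself is compiled, before any instantiation.

use std::hash::Hash;

// The signature promises only `T: Hash`, so the body may not assume `<`.
fn smaller_of_hashables<T: Hash>(a: T, b: T) -> T {
    // error: binary operation `<` cannot be applied to type `T`
    // (rustc suggests adding a `T: PartialOrd` bound)
    if a < b { a } else { b }
}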
> instead you get a bunch of errors that there is no "hash" overload
Since you get some kind of compile-time diagnostic, then effectively, the type is checked; just later within static time than you'd like, and with a less relevant diagnostic.
ISO C++ doesn't dictate the wording of diagnostics; a C++ implementation could work backward from that error, or do whatever else, in order to phrase the diagnostic in terms of a problem in the template.
C++ templates are only typechecked for types that are actually instantiated. Rust generics are typechecked using the traits that the generic is declared with. That's a pretty big difference for people writing generic/templated functions though not as big of a difference for people using them.
Sure; then again, C++ code is also typechecked only for the files that you feed to the compiler and include in your image.
C++ templates being checked only for instantiated types makes them flexible. E.g. a template that accesses type::foo or instance.foo will work with anything that has a member named foo. That template argument doesn't have to have a declared relationship to any type which has foo, which would be an annoying restriction for the users.
Types that aren't instantiated don't exist. They don't correspond to anything in the executable image. Worrying about them is like a recording studio fussing over how much reverb to add to a one-handed clap.
It makes them more flexible in the same way that dynamic typing is more flexible. A type error that doesn't execute doesn't really exist either.
It's fine to not worry about types that don't get instantiated if you are the only person ever instantiating a template, but that's obviously not always the case. If my template doesn't typecheck for a type that someone else wrote, I don't find out about it until they send me a bug report. With typechecking before instantiating, I find out about it as soon as I write it.
Except it's not dynamic typing; everything that goes into the output image is type checked.
> A type error that doesn't execute doesn't really exist either.
Nope; that depends on what you mean by "doesn't execute".
Provably doesn't execute? As in: it is removable, dead code?
Or: doesn't execute because it's not covered by a test case? So that it could execute if a suitable input is found?
A template-generated type that is not instantiated doesn't execute because it doesn't exist. It's not in the image. There is no possible input to the program which can exercise it.
The code is basically not written. There is no need to diagnose bugs in code that isn't written.
Making users derive template arguments from a specific type is just stupidly "blubby". That C++ has typeless template parameters is one of the few sane things about the way it supports generic programming.
> With typechecking before instantiating, I find out about it as soon as I write it.
With type checking, that same user who would have run into a problem with the "typeless template" will simply find your template useless, and roll their own. In other words, same situation as before. The type error will still be there.
If you have a template that is intended for foo-like arguments, and the user uses it with a bar-like object, that's going to be diagnosed one way or another.
If the generic arguments are weak, it will show up as some constraint violation when the template-generated code uses the bar-like thing as if it were foo-like.
If the generic arguments are strong, the user instead gets an error that the bar-like thing is not a suitable argument to the template, which expects a foo-like.
It's the same either way; it's all static checking at the same static time and it doesn't change the circumstances under which errors occur. It just changes the amount of discipline needed to use the generic, and the accuracy/granularity of the diagnostic ("this thing is not a foo" versus "this thing here coming from a template parameter can't be used in this way in this piece of template generated code").
It's easier for the user to adjust the bar object to make it sufficiently foo-like for a given template, than to make it completely conform to the foo type.
If the template works with (is tested with) a foo (base class), there is no reason it wouldn't work with something derived from foo. (If not, then there is some LSP issue in the foo type hierarchy; the generic as such is working.)
You might intend (without actually testing) that the template works with some bar also, not related to foo; the user tries it and finds otherwise, due to a bug.
With the typed generic, you just declare that it's for foo, and it refuses to work with bar (whether or not it has a bug which would actually prevent that in the absence of the declaration).
All the declaring, checking, instantiating, and the rest of it happen at the same time: static time. There is no difference like with static versus dynamic. Static versus dynamic is due to different times: what is done before running the program versus during.
There is no verification difference between declaring that a template parameter may be of types foo | bar, and writing a typeless template which a test module then instantiates for foo and bar. Both are declarative mechanisms at work, played out statically. Just one way is more open-ended; it's not "sealed" against new instantiations.
Those are like different concepts. While Rust macros work on the AST and take AST nodes as arguments, C++ templates are a whole meta-language that also allows for code execution. I would say that Rust macros mainly do code generation, while C++ templates also allow for computation.
As a slightly facetious addition: I think Rust macros are better than either the C preprocessor or C++ templates because they're so bloody confusing.
They require someone with a great deal of energy, knowledge and will to win in order to tease anything useful out of them. This, I submit, is a feature. It dissuades the programmer who believes they are better than they are from being too clever by half.
There are worse things in the programming sphere than crap meta-programming but it does have its own circle in hell.
I appreciate that the Rust team are working hard to spoil my happy idyll but happy I remain for now.
I tend to agree with the argument that Rust macros are deliberately ugly so that people will only reach for them as a last resort (learning the hard-won lesson from Lisp, as it were), but there's no way that Rust macros are any more confusing than the C preprocessor or C++ templates. Have you ever tried to write a C macro that actually expands to what you want it to, in all possible contexts? And the less said about the horrors of C++ templates, the better. No, the fact that Rust macros have severe (deliberate) limitations is what makes them tractable; there are no shenanigans comparable to `#define true false`.
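For a concrete sense of that tractability, here is a minimal sketch of Rust macro hygiene (hypothetical `five_times!` macro); the analogous C #define would silently capture the caller's variable:

// The `x` bound inside the macro expansion is hygienically distinct
// from the caller's `x`, so there is no accidental capture.
macro_rules! five_times {
    ($e:expr) => {{
        let x = 5;
        x * $e
    }};
}

fn main() {
    let x = 10;
    // The macro's `x = 5` multiplies the caller's `x = 10`.
    assert_eq!(five_times!(x), 50);
}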
(The function under discussion is a generic `fn f<T>(s: T) { println!("{}", s); }`, called as `f(7)`.) Here, 7 is an i32, so T is i32. But we get a compilation error in Rust:
error[E0277]: the trait bound `T: std::fmt::Display` is not satisfied
--> src/main.rs:2:22
|
2 | println!("{}", s);
| ^ `T` cannot be formatted with the default formatter; try using `:?` instead if you are using a format string
|
= help: the trait `std::fmt::Display` is not implemented for `T`
= help: consider adding a `where T: std::fmt::Display` bound
= note: required by `std::fmt::Display::fmt`
error: aborting due to previous error
This illuminates a core difference between Rust's generics and C++'s templates: in the C++ case, the code is typechecked after it is expanded, so it works. However, we don't even need that `main` to cause a compilation error in Rust; it checks the body of the function on its own. In this case, Rust says "Hey, you accept any T, but you try to print it out. What if a type isn't printable? That wouldn't work!"
The fix is:
use std::fmt::Display;
fn f<T: Display>(s: T)
// or in near-future Rust, a slightly nicer syntax, still needs the 'use' line too:
fn f(s: impl Display)
This changes the signature to say "we take any type T that implements the Display trait." Now we know that the body of the function works properly.
There are advantages and disadvantages to both approaches. In some senses, the tradeoffs are the same as any duck typed vs not situation: the C++ approach is far more flexible, but the Rust approach has stronger guarantees. We also chose the style we did because we prefer the nicer errors; C++ template errors are notoriously complex, though many say they get used to how to read them over time.
Additionally, neither language's story is complete here. C++20 has a new feature called "Concepts" that gives you some degree of similar features, but it's complex and I haven't read the version that was merged into the draft of C++20, so maybe someone else can elaborate on it. Rust is also adding a 'const fn' feature that gives you access to tricks similar to constexpr in C++, as well as more features for our bounded generics.
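As a rough sketch of the 'const fn' direction (assuming the basic form under discussion at the time):

// A `const fn` can be evaluated by the compiler when used in a
// const context, much like a constexpr function in C++.
const fn square(n: usize) -> usize {
    n * n
}

const AREA: usize = square(8); // computed at compile time

fn main() {
    assert_eq!(AREA, 64);
}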
#include <iostream>
#include <string>
#include <type_traits>

class Display {
    virtual std::string fmt() = 0;
};

template<typename T>
void f(T s)
{
    static_assert(std::is_base_of<Display, T>::value, "Error: T must implement Display interface");
    std::cout << s << '\n';
}


int main()
{
    f(7);
}
Gives the following error with clang,
<source>:12:7: error: static_assert failed "Error: T must implement Display interface"
static_assert(std::is_base_of<Display, T>::value, "Error: T must implement Display interface");
^ ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
<source>:19:7: note: in instantiation of function template specialization 'f<int>' requested here
f(7);
^
1 error generated.
Compiler returned: 1
Of course the downside is that the library author has to spend extra effort to write those checks.
Totally! There's a lot of code out there that doesn't do this, even if it could. That's also a lot more code, so I imagine people do it less often as well.
Beyond that as well, you won't get an error if you never call f(7), that is, if your main is empty; that is what affects library authors.
Whereas there would be no way to get `f(7)` to compile in your C++ version, since you can't make `int` inherit from `Display`. _A_ downside is that the library author has to add these sorts of checks, but for me a bigger downside is that anyone who then wants to use the library has to inherit from the library's types.
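For contrast, a minimal Rust sketch (with a hypothetical `Pretty` trait standing in for a library trait): users implement the trait for existing types, including primitives like `i32`, with no inheritance anywhere.

// A library trait; users opt types in after the fact.
trait Pretty {
    fn pretty(&self) -> String;
}

// No inheritance needed: implement the trait for a pre-existing type.
impl Pretty for i32 {
    fn pretty(&self) -> String {
        format!("<{}>", self)
    }
}

fn show<T: Pretty>(v: T) {
    println!("{}", v.pretty());
}

fn main() {
    show(7); // prints "<7>"
}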
Concepts (lite, the version that is still around) are "one-way." They constrain what types you can use as template arguments, but they do not constrain what templates can do.
Which means the templates themselves can still impose additional requirements based only on their internals, leading to the same bad error messages for users and lack of checking for library authors that we have without concepts.
Both make debugging harder. Rust, though, keeps a lot more safety in the whole process: the resulting code is syntactically what the macro author meant, and it is still checked by the types.
And, nobody really steps through code anymore. So a lot of the difficulty argument is moot.
I don't follow. What other technique is in use, compared to stepping through code?
Sometimes I put a breakpoint in and step thru code to help me figure if something is working properly or maybe live-inspect the return value of something that comes from the outside world. Maybe I'm missing some new fundamental technique to dev/debug?
I have seen very few people actually hook up a debugger and step through code lately. This is particularly pronounced in many of the functional idioms that are making it into Java and similar languages. Stepping through a loop is now much more involved; entering the stream is a step-in, not just a step.
I don't actually have a lot of experience with the rust world, but the main complaint with macros is typically that you can't step through them easily.
With my getting downvoted, I'm assuming I'm wrong and that step debuggers are more common in rust than I was assuming. I think that makes me happy. If anyone has a good video demonstrating this level of tooling, I'd be interested.
"printf debugging" vs "debugging" is an eternal struggle. Many people do both, in both C++ and Rust. There are many, many essays written about the pros and cons of both approaches.
Rust as a project is investing heavily in working towards excellent debugger support, because people do use it.
How well does it work with heavy use of macros? Specifically, what does it mean to "step into" a heavily macro based section? I'd be very interested in seeing examples.
I think it is fair to say that my comment was all-encompassing where it should not have been. I am curious about the numbers, though. Just anecdotally, I have seen very few people use a step debugger.
Some of it may be situational. With how many microservices I'm dealing with, "stepping into" a call really needs to jump to another process quite often.
Anyone have numbers, by chance, on how many folks actually use a lot of these features?
Visual Studio also does remote debugging, including for services running on Azure, and recently introduced debugging across containers.
Usually the problem is that many developers never learned what a visual debugger is capable of, and the typical command-line interface on UNIX leaves little room for exploration if people aren't curious enough to learn about these tools.
For example, many aren't aware that gdb has a TUI that eases the experience.
I know you can do remote debugging. I mean the case where you have a breakpoint in one service, and it calls to another service. You can't really "step into" that. Best you can do is connect to both services and have cooperating breakpoints.
But, really, this is something I've never actually done. Reasoning about how the system should behave has been enough to get me by.
A scenario where I used it a lot, a couple of years ago, was debugging cooperating processes on UNIX.
When some condition took place, the problematic process would break into the debugger, while the others would be parked waiting for me to attach to them.
Is that not, like, operator precedence gone really wrong? Or is this the Rust stringifier in trace_macros messing up without anything going wrong in the actual AST?
The trace_macros! stringifier isn't inserting parens when required: due to the use of 'expr', the macro is treating 3 + 7 as a single node X, and the multiplication with 4 is then, in the AST, X * 4 (i.e. correct). The thing that is being printed there is the recursive rpn invocation in the @op branch, and specifically the '$a $op $b' part, which is a "disconnected" sequence of AST items/token trees. Once it's forced to be parsed as an expression (that is, when matched against the recursive '$... : expr' rules), the printer handles it correctly; see for instance the last 'to' line of the output in the example you quote, or, if that example is changed to 'rpn!(2 3 7 + 4 * 5)', the corresponding trace_macros output.
The original designers of the macro system absolutely had Scheme firmly in mind (source: I was there). The most telling giveaway is that the identifier for declaring macros, "macro_rules", is a reference to Scheme's "syntax-rules". As for a comparison, Rust macros have to jump through hoops to account for the fact that Rust has syntax (and types), and the Rust system isn't entirely hygienic yet (work in progress).
Nope; it makes the let bindings. The job of reducing to a constant is better left to the processing of the let rather than duplicated in the macro.
If you have a compiler which can reduce (let ((x 3) (y 4)) (+ x y)) to 7, why replicate that in a macro?
If you do not have one (as is the case here), then it's better to work on getting one than compensating for it in macros.
Yet, with the following small change to compile-expr, whereby we avoid generating temps for constant expressions, we can at least get some nicer-looking output:
My impression has always been that it probably feels right at home for people with mathematical backgrounds, and mildly line-noisy for people coming from traditional programming backgrounds.
Stuff like this:
fn map<T, U, F>(vector: &[T], function: F) -> Vec<U>
where F: for<'a> Fn(&'a T) -> U {
    // (one possible body; elided in the original)
    vector.iter().map(function).collect()
}
It's not too hard to read once you're in the ecosystem, but there's a lot of non-alphanumerics in there, and you know each one of them is super important to getting the thing working.
While I like the functional style, you actually did bring up one of the cases where I find the syntax a bit too much. Why can't we have the functional interface information on F directly in the signature?
Why not something more akin to ML?
fn map<T, U, 'a>(vector: &[T], function: &'a T -> U) -> Vec<U>
I'm there with you. Have you found something consistent/stable to debug Rust code? I find the integration with VS Code to be quite flimsy, and friends have been saying that's like the pinnacle of Rust tooling right now: a fully loaded VS Code with the Rust plugins...
I know you are. I didn't mean to take a jab at you. Rust is amazing for how young the language still is. I also don't have a CLion license, but I've also heard from multiple sources that that is right now the best IDE.
What do you mean by "Native Debug" extension for VSCode? The normal Rust plugin built on top of Racer often shows Racer as crashed, and doesn't underline errors while typing. It also doesn't really seem to know the type of a variable most of the time. I'm writing in too many languages at once. Autocompletion usually gives me enough of a hint that I remember what to type.
VSCode is my favorite editor, so I'm glad that I don't need to install another IDE right now.
EDIT: I just checked it out, and apparently you can install the Rust plugin with IntelliJ Community Edition. Which is free.
It's just the plugin called Rust. I tried the other ones, but they were not better.
>it does not include debugging support
That's a bit sad, but I haven't had debugging support in VSCode until now. I'd pick good code completion and error hints over debugging. Both would be nice though.
How do you configure that Native Debug plugin for Rust?
Yes sir! You can rest assured I'll find something to contribute to the Rust ecosystem, be it bug reports or translations! I haven't been as thrilled as I am right now with a new language since I was a kid taking my first steps into programming (the MS-DOS and Turbo Pascal days). I wanna see Rust take off and take a big slice of the 'most used languages' pizza chart :-)
Well, C macros work at the text level, while Rust macros run at the AST level (but not the type level; that is, you cannot construct syntactically incorrect stuff, but you can construct wrongly-typed stuff).
So I don't really know Lisp macros, but I guess Rust is still closer to them than to C.
They work on both - for example, if you specify $expr:expr, it will actually parse it as an expression node and won't allow you to simply pass it into another macro as $pat:pat, because of the AST node type mismatch, even if the actual tokens are still compatible.
That's a good point. What I was trying to get at is that Rust macros don't let you manipulate ASTs in any straightforward way. So they are not best thought of as AST -> AST transforms.
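A minimal sketch of that opacity (hypothetical macros, just for illustration): once tokens are captured as an `expr` fragment, a downstream macro sees one opaque node rather than the original tokens.

macro_rules! inner {
    // This rule matches the literal token sequence `1 + 1`.
    (1 + 1) => { "matched raw tokens" };
    // This rule matches any already-parsed expression node.
    ($e:expr) => { "matched an expr node" };
}

macro_rules! outer {
    ($e:expr) => { inner!($e) };
}

fn main() {
    assert_eq!(inner!(1 + 1), "matched raw tokens");
    // The same tokens, captured as `expr` and forwarded, no longer
    // match the token-level rule.
    assert_eq!(outer!(1 + 1), "matched an expr node");
}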
There is a common set of packages [0][1][2] that provide macros more similar to Lisp-style macros (when operating on structs). [2] in particular implements quasi-quoting in Rust.
By comparison to the GP's article, Factor macros are easy because, instead of the reader having to know exactly where and what the arguments are, in Factor the reader doesn't have to care.
Many Lisp dialects are not homoiconic, as defined in the Wikipedia page for "homoiconic".
Homoiconic means, more or less, that procedures are stored in the representation in which they are written.
A Lisp that compiles everything entered at the REPL isn't homoiconic. The input looks like (lambda (...) ...), but the storage is some block of x86 machine code with an environment vector.
POSIX shells are homoiconic. When you type set, you see your functions, in source code form (modulo reformatting).
The thing you want for metaprogramming is a smart internal representation of program text (not the same as the external one).