Generics and Compile-Time in Rust (pingcap.com)
268 points by Bella-Xiang 25 days ago | 86 comments



There are plenty of Rust features I'd love to see finalized, like GATs (HKTs, sort of), generators, async trait methods, custom test frameworks, ...

But there is an area that could have a big impact on certain (mostly higher level) domains, yet doesn't seem to get much attention: better trait objects.

They are severely limited in a few aspects:

* only a single trait/vtable

* casting is only available with Any and you can't cast between different traits, requiring really awkward super-traits with manual conversion methods or hacks like mopa [1]

* object safety rules are cumbersome and prevent certain important traits like Clone from being usable, leading to clone_boxed and clone_arc methods everywhere, or proc macro solutions like dyn-clone [2] (see the sketch after this list)

* ...
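
To make the Clone bullet concrete, here's a minimal sketch of the clone_boxed pattern, using a hypothetical Shape trait:

    // A hypothetical trait we want to both box and clone.
    trait Shape: ShapeClone {
        fn area(&self) -> f64;
    }

    // Helper super-trait that erases the concrete type of the clone.
    trait ShapeClone {
        fn clone_boxed(&self) -> Box<dyn Shape>;
    }

    // Blanket impl: any Sized + Clone Shape gets clone_boxed for free.
    impl<T: Shape + Clone + 'static> ShapeClone for T {
        fn clone_boxed(&self) -> Box<dyn Shape> {
            Box::new(self.clone())
        }
    }

    // Now Box<dyn Shape> itself can be Clone.
    impl Clone for Box<dyn Shape> {
        fn clone(&self) -> Self {
            self.clone_boxed()
        }
    }

This is essentially what dyn-clone automates; you can't just write `trait Shape: Clone` because Clone requires Sized, which would make the trait not object safe.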

Doing anything more fancy with them usually feels annoying. Therefore the standard library and entire ecosystem strongly favor generics and monomorphization.

This is generally fine and has worked out well for the language, but there are plenty of use cases where more advanced trait objects could reduce code size and compile times with very small impact on performance, while also enabling some interesting new patterns.

I realize there are plenty of implementation challenges that make work in this area far from trivial in the current language, but it's frustrating to miss out on part of the toolbox.

I think Swift is an interesting comparison. The languages are similar in quite a few aspects, but Swift often prioritizes small code size and dynamic dispatch over monomorphization. Its compile times aren't that great either, though...

ps: it is briefly mentioned in the post, but switching to LLD has provided noticeable build time improvements on most of the binary crates I am working on.

[1] https://github.com/chris-morgan/mopa

[2] https://github.com/dtolnay/dyn-clone


> This is generally fine and has worked out well for the language, but there are plenty of use cases where more advanced trait objects could reduce code size and compile times with very small impact on performance, while also enabling some interesting new patterns

To elaborate a bit on the part about new patterns, I've encountered cases where trait objects let me define APIs that otherwise wouldn't be possible. One example: when developing something like a database driver, you might define a trait EventHandler with the methods `handle_start_event` and `handle_completion_event`, where users pass in values of types that implement EventHandler, and you call their handling methods whenever you start or complete an operation. The most straightforward way to support this is to have your client type hold an internal vector of EventHandlers that you iterate over, calling the corresponding event handling method whenever needed. If you use generics for this, then your client type has to be generic over the type of EventHandler it contains, which means you can't mix EventHandlers with different concrete types. The best way to get around this is to use something like `Vec<Box<dyn EventHandler>>`.
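
A minimal sketch of that shape (names made up for illustration):

    // The callback interface users implement.
    trait EventHandler {
        fn handle_start_event(&self);
        fn handle_completion_event(&self);
    }

    struct Client {
        // Heterogeneous: each element may be a different concrete type.
        handlers: Vec<Box<dyn EventHandler>>,
    }

    impl Client {
        fn run_operation(&self) {
            for h in &self.handlers {
                h.handle_start_event();
            }
            // ... do the actual work ...
            for h in &self.handlers {
                h.handle_completion_event();
            }
        }
    }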

I understand the sentiment behind favoring generics over trait objects in the Rust ecosystem; strongly preferring compile time costs to runtime costs when there's a choice between them is one of the more fundamental guiding principles of the Rust ecosystem (and is one of the things I really like about Rust). But there are patterns that trait objects allow that just can't be expressed with generics, and when you need one of them, it can be frustrating to hit some of those sharp edges you mentioned.


> The best way to get around this is to use something like `Vec<Box<dyn EventHandler>>`.

That's exactly the right solution to your problem. It's not a workaround. It is what you need.

What you need is a heterogeneous collection of objects that implement a common interface. And the exact type isn't available to you because it's provided by the client. So you need to go through virtual (or dynamic) dispatch.

Just imagine how you might write this in C++: you just define an EventHandler abstract class and ask that clients inherit from it. Then you take pointers to EventHandler and store them.

Or imagine how you might write it in Haskell: simple existential types. The compiler stores the type class dictionaries in your data type to enable dynamic dispatch.

This is object-oriented polymorphism 101. I think Rust's anti-OO pendulum has swung so far that people can't even see that what they need is basic OO.


It is not even OO, as the Haskell example shows.

But I understand the mindset completely. If you think you can implement your solution with full static dispatch and no pointer indirection, having to add these "ugly workarounds" feels like surrendering.

But there is a continuum between full static dispatch and boxing and indirection everywhere. Adding a few carefully chosen "dynamic joints" can add a lot of runtime flexibility at a minimal cost.


> That's exactly the right solution to your problem. It's not a workaround. It is what you need

Yep, I agree! To clarify, the issue for me isn't having to use trait objects like this, it's the ergonomic issues that my parent comment brought up, e.g. not being able to require `EventHandler: Clone`. I'm not opposed to trait objects themselves; I just wish they were easier to use.


I'm confused. In that example, what's wrong with `Vec<Box<dyn EventHandler>>`?


Nothing is wrong with `Vec<Box<dyn EventHandler>>` itself; the issues are around the ergonomics of using trait objects in general (e.g. with regards to requiring that types that implement EventHandler also implement Clone, as mentioned in the parent comment).


I'm not an expert on this, but my guess would be that it's slow because it requires too much indirection.

Instead of the Vec holding concrete instances of EventHandler, it holds pointers to them (because of the Box). And then because of the dyn, the compiler can't generate static-dispatch code when you later inevitably want to call functions on the EventHandler, so those function calls have to be resolved at runtime.


Having dynamic dispatch of boxed structures seems like a logical consequence of wanting dispatch on arbitrary user-provided non-coherent (not the right word but I can’t remember the proper one) types?


Is it even possible to do it without indirection? The type parameter of a struct can change its shape hugely, so what you’re asking for is “vectors with variable sized entries” which is typically done with indirection.


In C++ we have things like Boost.PolyCollection to improve that specific case: https://www.boost.org/doc/libs/master/doc/html/poly_collecti...

With some metaclass / reflection magic maybe one could come up with an additional scheme that would save the per-object vtable pointer through creation of a "duplicate" structure - e.g. given

    struct Event {
      virtual ~Event();
      virtual bool whateverAPI();
      pair<double,double> globalPosition();
      pair<double,double> localPosition();
    private:
      double x{}, y{};
      Widget& widget;
    };

    struct MyEvent : Event { 
        // or whatever, just an example
        bool whateverAPI() override;

      private:
        MouseButtons button{};
    };
one would get a sub-array of things with a layout similar to

    struct synthesized_MyEvent {
      double x{}, y{};
      Widget& widget;
      MouseButtons button{};

      pair<double,double> globalPosition() { /* potentially UB magic */ }
      pair<double,double> localPosition() { /* potentially UB magic */ }
      bool whateverAPI() { /* potentially UB magic */ }
    };
("potentially UB" being something like copying all the non-vtable things into a stack-allocated concrete MyEvent and calling the virtual function on that which could be worth it for small objects in terms of cache usage if the vtable is 8 bytes and the custom data something like 16 bytes and there's 350000 objects)


In C++ you could do this with std::tuple and variadic templates, if you know all the event handlers at compile time. There would be zero indirection, although depending on how you implement the indexing into the std::tuple (i.e. by treating it as a std::variant and using std::visit) you could still inadvertently generate a vtable.


This requires the set of all possible types to be known statically at the point where the tuple is defined. True "virtual concepts" are an open set.


Sometimes you do not need the vector to be runtime sized; then you can use tuples. In C++, variadic templates make it easy to work with compile time sized collections of objects. I'm sure you could do something similar with Rust macros.
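
A sketch of the statically sized version in Rust (using the hypothetical EventHandler trait from upthread, with the handler set fixed at compile time):

    trait EventHandler {
        fn handle_start_event(&self);
    }

    struct Logger;
    struct Metrics;

    impl EventHandler for Logger {
        fn handle_start_event(&self) { println!("start (logger)"); }
    }
    impl EventHandler for Metrics {
        fn handle_start_event(&self) { println!("start (metrics)"); }
    }

    // No boxing, no vtables: each handler keeps its concrete type,
    // at the cost of fixing the set of handlers at compile time.
    fn start_all<A: EventHandler, B: EventHandler>(handlers: &(A, B)) {
        handlers.0.handle_start_event();
        handlers.1.handle_start_event();
    }

    fn main() {
        start_all(&(Logger, Metrics));
    }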


That's what a trait object would have to be anyway though? You're going to have to have a vector of virtual function tables at runtime, one way or another.


This sounds like the Existential Antipattern, from Haskell land. I don't know to what degree it translates to this setting.

https://lukepalmer.wordpress.com/2010/01/24/haskell-antipatt...


The difference is that the author's suggestion as to what to do instead doesn't work in Rust. In Haskell there's no need to use a type class/trait here at all; you can just use a record. We could try to do the same thing in Rust with a struct:

    struct EventHandler {
        handle_start_event: fn(),
        handle_completion_event: fn(),
    }

...but this differs in a key way from the Haskell example: Rust's `fn` is not a closure, it's just a pointer to some code, and you can't construct one dynamically. If you want to pass around a _closure_, you need... a trait object.

Something like:

    struct EventHandler {
        handle_start_event: Box<dyn Fn()>,
        handle_completion_event: Box<dyn Fn()>,
    }

...but that's exactly the pattern the author of that post is suggesting getting away from.

I agree with the author's point, but it doesn't really apply to Rust because there isn't actually a simpler way to do it. And I think that's part of what the OP was getting at -- trait objects are the closest thing Rust has to proper closures, and they're a bit awkward.

Note that I'm still something of a newbie at Rust, and don't feel like I can really say much about the day to day experience of using (or avoiding) trait objects due to limited experience.


Great context, thanks! My intent was not to say "this is bad in Haskell so it might be bad here", but more to hope that suggested alternatives might prove interesting - particularly if running into limitations with the current approach.


Closures are the closest thing to closures Rust has.


I was mostly talking about the type level; I certainly didn't mean to imply that Rust didn't have lambdas, if that's what you're getting at. But there is no general type of Rust lambdas that take an A and return a B other than the trait object type `dyn Fn(A) -> B` (or FnMut or FnOnce); the concrete type will depend on what variables it captures.

Also, there's a lot of disagreement about exactly what the distinctions between `lambda`, `closure`, `anonymous function` and friends mean. The interpretation of `closure` I'm interested in is basically a bundle of code and data, which can be treated abstractly in a first class way. That could be something that appears as a lambda capturing variables in the text of your program or as an object in a more OO setting which, albeit more verbose for a single function, has a similar level of power of abstraction.

Either way, part of that power is being able to write code that can work with the interface (irrespective of the shape of the enclosed data), and do things like store differing implementations in lists/vecs etc. The only mechanism Rust provides to do that is trait objects.
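
For instance (a toy illustration), this is the only way to put closures with different captures into one Vec:

    fn main() {
        let offset = 10;
        // Each closure has its own anonymous concrete type; only the
        // trait object type can unify them.
        let fs: Vec<Box<dyn Fn(i32) -> i32>> = vec![
            Box::new(|x| x + 1),
            Box::new(move |x| x * offset),
        ];
        for f in &fs {
            println!("{}", f(5));
        }
    }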


> there is no general type of Rust lambdas that take an A and return a B other than the trait object type `dyn Fn(A) -> B` (or FnMut or FnOnce);

I don't understand the issue (and I know very little of Rust), but isn't `dyn Fn(A) -> B` exactly that type? Any language that has first class function parameters and objects but type-erases them (unlike Rust and C++) would implement them pretty much like the `dyn Fn` above under the hood.


The comment I originally replied to linked to a blog post arguing that in Haskell, the practice of wrapping type classes (which Rust calls traits) in an existential is an antipattern, because rather than using fancy type system features, you could just use a record with some functions in it, which is simpler and more direct.

But `dyn Fn` is a trait wrapped in an existential. There's no concrete type `A -> B`, only a trait for types that can be 'called.' So to pass an arbitrary function around in a record/struct you need to do the very thing the post says is an antipattern (in Haskell). So the argument doesn't really apply to Rust, because there isn't a simpler, more direct way to achieve the same thing.


Right, I completely failed to understand your point. You are completely right of course.


The existential 'antipattern' is the only way to get close to a good notion of subtyping. In my experience it also tends to be faster.


FWIW, I think "antipattern" is an overstatement, but it's what the community has been calling it. I do think some of the discussion around it is interesting and possibly valuable.


Hypothetically couldn’t you implement this as a macro extension of sorts?


You can, it's how com-rs [0] works and we're co-opting it in vst3-sys [1] for non-Windows targets. It's painful, unsurprisingly. There are some people working on better abstractions, but I hand-coded a VST3 plugin using those macros just for the FFI-safe COM bindings, and it is verbose and particularly unsafe [2].

I'm in disagreement with the parent; dynamic dispatch through vtables is only a zero-cost abstraction across FFI boundaries. Personally I'd ballpark that about 33% of the value-add of traits is shared interfaces on different types.

The real money is in associated types and trait bounds. The latter is still a serious type-checking requirement at compile time, and the former requires RTTI to support dynamically in some form - both of which have costs at compile and runtime.

All that said, there may be an argument that the features GP is talking about are covered currently by enum variants (many of the use cases one would have for base classes with associated member variables are done that way, for example), and if you could show how performance and ergonomics improved by increasing the semantic complexity of dyn trait objects - you'd have a strong argument to add it to the language. Just my two cents.

[0] https://github.com/microsoft/com-rs/

[1] https://github.com/RustAudio/vst3-sys

[2] https://github.com/RustAudio/vst3-sys/blob/master/examples/p...


I love when authors spell out how to read something aloud to help newcomers, as in

  fn print<T: ToString>(v: T) {
> We say that “print is generic over type T, where T implements Stringify”


Does anyone actually call it "Stringify"? I would call it "to string".


Probably not. If they wanted people to call it Stringify then just name it stringify. Though personally, I like how instead of toString, Haskell calls it 'show'. Clear, to the point, and one word.


How is "show" clear for string conversion? You could just as well expect a GUI popup just going from that word.


How is toString, you just might expect your object to turn into a ball of yarn.


String is a very common but specific word in programming for a string of characters.

Show is a very common but generic word in programming, for showing plots, showing message boxes, showing anything really. Not knowing Haskell I would not have expected it to convert an object to a string.


I suppose Rust's direct analog to Show is Display. There's a blanket implementation of ToString for any type implementing Display, so ToString can be thought of as a super-type. "Display" is nice because it is usually uttered in the same breath as Debug.


Rust has Display: https://doc.rust-lang.org/std/fmt/trait.Display.html

On the doc of ToString you can find the following note:

>This trait is automatically implemented for any type which implements the Display trait. As such, ToString shouldn't be implemented directly: Display should be implemented instead, and you get the ToString implementation for free.

I think ToString exists mostly to provide a more convenient API for getting a String out of a display-able type; otherwise, if you used Display directly, you'd have to write a variant of this boilerplate every time: https://doc.rust-lang.org/src/alloc/string.rs.html#2185-2192
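
Roughly the shape of that blanket impl (a simplified sketch; the real one in std is more involved, and coherence means only std can actually write it):

    use std::fmt::{Display, Write};

    impl<T: Display + ?Sized> ToString for T {
        fn to_string(&self) -> String {
            let mut buf = String::new();
            write!(buf, "{}", self)
                .expect("a Display implementation returned an error");
            buf
        }
    }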


That’s not an instruction on how to read the code aloud, but a way of describing a certain fact about the function.

If you were actually reading it aloud, I would strongly avoid that use of the word “where”, because it’ll be confused with a where clause:

  fn print<T>(v: T) where T: ToString
The two are semantically identical, but they put the constraint in different places, so using the word “where” will confuse things, especially when it leads to you repeating the “T” in a way that where clauses do but the first form doesn’t.

For myself, I would probably read the original line of code as “function print, generic over type tee implementing to string, taking parameter vee of type tee”.

The where clause version I’d read as “function print, generic over type tee, taking parameter vee of type tee, where tee implements to string”.


I also greatly appreciate this. Thanks for pointing it out for me!


This is a good article, but rather misses the point on performance of monomorphization vs. dynamic dispatch. Yes, CPU indirect branch predictors are getting better, and compilers are getting smarter about identifying opportunities to turn dynamic into static dispatch. But inlining remains the optimizer’s silver bullet, enabling a host of dependent optimizations. It’s those further optimizations that make the primary performance difference for static calls.
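
A minimal sketch of the difference (illustrative, not from the article):

    // Static dispatch: each instantiation is a direct call that the
    // optimizer can inline, then constant-fold, vectorize, etc.
    fn print_static<T: ToString>(v: T) {
        println!("{}", v.to_string());
    }

    // Dynamic dispatch: one copy of the code, but `to_string` is an
    // indirect call through the vtable, which blocks inlining.
    fn print_dynamic(v: &dyn ToString) {
        println!("{}", v.to_string());
    }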


I did some investigation into the performance impacts of using the _Generic keyword in C11. It was a toy example so not necessarily broadly applicable to real world code, but a good 70 or 80 percent of the speedup from generics seemed to come from inlining.

https://abissell.com/2014/01/16/c11s-_generic-keyword-macro-...


I think an area ripe for improvement is for compilers to reason about which monomorphized variants are valuable and can benefit from that inlining, versus where dynamic dispatch is sufficient. Start with PGO, and I'm sure smarter people will investigate how to get 80% of the benefit at compile time without PGO.


Inlining is also what’s behind the classic “C++ is faster than C” example of “sort()”.


> first, modern CPUs have invested a lot of silicon into branch prediction, so if a function pointer has been called recently it will likely be predicted correctly the next time and called quickly

Huh, TIL. Branch prediction is normally about predicting which branch an `if` would take. But apparently this applies to indirect jumps as well: https://stackoverflow.com/a/26240197/1082652


This also applies to predicting the return address. You can assume any form of control flow has branch prediction. A surprising amount of silicon ends up being worth the cost if it can improve prediction just a little bit because a pipeline stall is so astronomically expensive.



For what it's worth, indirect branch prediction was added relatively recently (as in, the last 20 years) to mainstream x86 CPUs.


A true gem from the "comments on the last episode":

> The compile times we see for TiKV aren't so terrible, and are comparable to C++

So if you're already used to the terrible compile times of C++, the compile times of Rust won't seem that bad in comparison. And Mozilla, where Rust started, mostly relies on C++. That does explain a lot...


That, and the more you optimize for runtime performance, the less compile-time performance you have.


I always keep coming back to this: code gets compiled a lot more than it gets modified. In C land it's typical to run analysis tools on a codebase as a separate process. Doing the sorts of checks Rust does on every compile seems to seriously waste programmers' time.

Also a problem: for 95% of code written, speed doesn't matter at all.

Combine those two and programmers are paying a lot for something that isn't needed. AKA 99% of the time you're compiling a module that hasn't changed, and in 95% of the code in those modules speed doesn't matter at all.


Rust is a language specifically for the other 5% though. If you don't have extreme performance requirements wouldn't you just use OCaml (which compiles very quickly)?


Of late I feel like I care less about perfect languages and more about ABI compatibility. There is something to be said for the idea that most of the time you should be using a safe, mildly performant glue language with a fast write/compile/test cycle. The big monkey wrench of course is ABI incompatibility, which has always left you with C, and now Swift, as the base language.


> In general, for programming languages, there are two ways to translate a generic function:

> 1. translate the generic function for each set of instantiated type parameters

> 2. translate the generic function just once, calling each trait method through a function pointer (via a “vtable”).

The approach in Haskell might be considered a variation of 2, since it involves indirection, but it's a little different from other languages that use vtables: it's not selecting different implementations at run time, just looking up the pre-determined implementation through a new parameter.

In particular, the function is transformed into a higher-order function accepting a new parameter representing the dynamic to_string functionality; then at the call site, the appropriate concrete to_string implementation is inserted as the argument to the transformed function. This new higher-order 'print' only needs to be compiled once.
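
Hand-writing that transformation in Rust, just to illustrate the shape (Rust would of course still monomorphize `print` itself here):

    // The "dictionary": an explicit record of the trait's methods.
    struct ToStringDict<T> {
        to_string: fn(&T) -> String,
    }

    // `print` only ever touches T through the dictionary it is handed.
    fn print<T>(dict: &ToStringDict<T>, v: &T) {
        println!("{}", (dict.to_string)(v));
    }

    fn main() {
        // At the call site, the "compiler" inserts the right dictionary.
        let to_string_i32: fn(&i32) -> String = |n| n.to_string();
        let int_dict = ToStringDict { to_string: to_string_i32 };
        print(&int_dict, &42);
    }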


One thing to note is that Haskell is exceptionally good at inlining that explicit dictionary away.

Haskell isn't 0-cost-abstraction like Rust ofc, but it is definitely a minimize-the-cost-of-abstraction. And the control over said minimization is getting better with each release.


Also worth noting that Haskell's type system is too powerful for monomorphization to work as an implementation strategy for generics in all cases. In particular, polymorphic recursion [1] means that attempting to monomorphize everything wouldn't necessarily terminate.

Higher rank types also break this, as you can define things like:

    newtype GenericThing = GenericThing (forall a. [a] -> a)

    doManyGenericThings :: [GenericThing] -> [a] -> [a]
    doManyGenericThings things list = map (\(GenericThing f) -> f list) things

Here, `doManyGenericThings` takes a list of generic functions, and applies each of them to its argument. You can't just monomorphize it, because you'd have to somehow also monomorphize every argument it is ever passed, which is a dynamic property that you can't know in advance (and again, may not be a finite set).

[1]: https://en.wikipedia.org/wiki/Polymorphic_recursion


You should be able to prove that all places that call this function with a fixed list of fixed types are monomorphisable though.


It is certainly possible to still monomorphize some cases, but the point is it can't work as a comprehensive strategy like it does in Rust; it's an optimization that applies in some cases.


It can't inline the dict in general, though, and that's the default situation more often than not, unless the definition is in the same module.


Sounds like static polymorphism, done in C++ with: https://en.wikipedia.org/wiki/Curiously_recurring_template_p...


How is a Haskell type class dict different from a vtable?

It's in fact the same.


If you have

    Obj f(Obj a, Obj b);
in C++, you pass in Obj's vtable twice, and the returned value includes a vtable.

If you have

    f :: Obj a => a -> a -> a
in Haskell, you pass in one dictionary. If you have

    f :: (Obj a, Obj b) => a -> b -> a
in Haskell, you pass in two dictionaries. If you have

    f :: (Obj a, Obj b, Obj c) => a -> b -> c
you pass in three dictionaries, and the caller gets to decide what the concrete type of c is, NOT the function implementation.

    data DynObj = forall c. Obj c => DynObj c
    f :: (Obj a, Obj b) => a -> b -> DynObj
In this case, you pass in two dictionaries and the returned value has a vtable. This essentially is what C++ does.


In C++ vtable ptrs are attached to object instances (although all instances of the same type share the same vtable), while AFAIK in Haskell "vtable ptrs" (better known as function dictionaries) are attached to object pointers (also known as fat pointers). I believe Rust's `dyn Trait` objects behave similarly to Haskell. I think Go interfaces also use fat pointers.

The advantage is that "interfaces" can be attached to objects non-intrusively (and outside of that object's declaration, which is a huge win for interoperability); the disadvantages are that pointers are larger and there is no easy way to cross-cast between interfaces.

In C++, "fat pointers" (also sometimes known as virtual concepts) are often implemented manually; the most well known example is std::function. We are all waiting for compile time reflection to be able to implement virtual concepts generically (you can only approximate it up to a point currently).


One way is that it can handle functions like `return :: Monad m => a -> m a`, where m occurs only in the output. There's no input arg you could hang a vtable off of.
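
Rust hits the same wall: a trait method that mentions Self only in return position has no receiver to dispatch on, which is one reason a trait like Default is not object safe. A sketch:

    trait Factory {
        fn make() -> Self; // no receiver: nothing to look a vtable up on
    }

    // This would be rejected; Factory cannot be made into an object:
    // fn get(f: &dyn Factory) {}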


Hmm, not sure what you mean. In the body of a function there is one class dict available. On the other hand there is one vtable per object, typically.


I see I wasn't clear enough. With object-style polymorphism, the vtable is looked up on the object, using some means or other. With typeclass-style polymorphism, the "vtable" is (effectively) looked up on the type.

Perhaps gpderetta said it better: https://news.ycombinator.com/item?id=23537639


I don't know of a single language that has a vtable per object. What would even be the point?

Edit: Assuming we're talking about objects that can be grouped into some sort of 'classes'.


Isn't Python's `__dict__` pretty much a per-object vtable?


In C++ and Rust, there's a single vtable per class (trait), not per object.

(This is basically why Rust doesn't support multi-trait objects, e.g. `&(dyn Debug + Clone)`.)


You can accomplish the same behavior by having a marker trait:

    trait Foo: Debug + Clone {}
Edit: No you can't, because `Clone` requires `Sized`, which means you can't make a trait object of `Foo`.

https://play.rust-lang.org/?version=stable&mode=debug&editio...


Once all of this compile-time metaprogramming and code execution starts to happen, it always makes me ask: doesn’t this just conclude with dynamic typing? I’m currently a static typing lover. But it’s almost as if we’re just looking for the full power of a programming language. Why not just use the language itself instead of a weird, stratified compile time language?


If you follow through on this thought further you end up in the realm of the dynamic languages with very strong metaprogramming capability - the Lisps and Forths. With these, the reduction of the language is so complete that changing domains to the meta-mode is as easy as falling off a log. And nothing stops these languages from having a combination of useful static checks, high performance output, and fluidity of expression: they are powerful enough to be parsers at one instant, code generators at another, and optimizers at a third.

The main reason not to do it that way is the "you're on your own" nature of dynamism, in and of itself. If you don't come up with a good structure for checking things, the language isn't going to do it for you. And this is fine within the context of the quick hack or the home-grown type system, but less useful for proving broad, general properties about the code. Particularly when this is an industry where the majority of programmers are assumed to be hugely inexperienced, with less than 5 years under their belt, and trusted to build fairly complex systems.

Classical static type systems (your Pascal, Java, etc.) only really know the world in terms of combinations of primitives; they don't prove much beyond simple matching of input to output. But that's yesterday's approach - it's not what industry is looking towards.

What the newer static languages (of the Haskell and Rust ilk) have converged upon is to bolt a powerful constraint solver onto the type-checking system, which in turn shapes the whole language around pleasing the type-checker. The resulting minimum code quality is often higher, but also semantically torturous.


Not necessarily. The cleanest version of this sort of thing I’ve seen is zig’s comptime. Basically you can annotate loops, conditionals, function parameters, etc to be evaluated at compile time instead of runtime. You’re still writing zig - it’s the same language; just some of your code runs in the context of the compiler.

One really nice use of this is printf. In C, printf needs a remarkably large amount of code and it’s quite slow. In zig, printf marks the format string as comptime. Then the compiler runs the code looping over and evaluating the format string, then unrolls the result into the binary. The output ends up being a few terse calls to write / format with no extra machinery at runtime. And you get clean compile errors if the format string doesn’t match your arguments - all with no special casing in the compiler.

More detail, with code: https://ziglang.org/documentation/master/#Case-Study-printf-...

I think nim has something similar.

I’m surprised Rust went with macros instead of something like comptime - though I assume there are some trade-offs I’m not aware of.


> I’m surprised rust went with macros instead of something like comptime - though I assume there’s some trade offs I’m not aware of.

Rust has procedural macros, which are Rust libraries built and run at compile time that can transform raw tokens into valid Rust code (or more raw tokens, for nested macros). Procedural macros can annotate any statement or code block - although some specific ones might still be on nightly only.


crate-level inner attribute macros are still unsupported [0], so if you want those, you'll need to do the codegen before the compiler takes a pass or from within the compiler itself (using, say, an after_parsing [1] hook). They're really helpful for making a deeply embedded DSL.

[0] https://github.com/rust-lang/rust/issues/54726

[1] https://doc.rust-lang.org/nightly/nightly-rustc/rustc_driver...


What's the difference between comptime and macros? they seem pretty much the same at first glance.


Macros normally are a separate phase of compilation, separate, for example from type checking.


I agree with you that Zig comptime seems really nice.

As for the format string at comptime, it's a big improvement over C, sure (no missing parameters, yes!), but I still feel it's much less readable than Python f-strings.


The weird stratified compile time language lets you create DSLs that generate and typecheck the program you’re trying to write. I’ve written a lot of C++ (and no Rust), but if the compile time language is Turing complete, then you can push all sorts of correctness and type checks to compile time.

This is really, really powerful.

Even with unsafe C++, I’m used to writing very high level, high performance concurrent code. If it crashes after it’s passed CI testing for a week or so, it’s probably a hardware bug.

(Of course, there are exceptions, but those fall outside of the parts I have the compiler check for me.)


But what you can prove at compile time is rarely useful. You can use phantom types, smart constructors, and the like, but after a while you need a runtime to do anything interesting.

An example: this function is only valid on positive, odd integers in the range 3-91. With any static type system, this has to be a runtime check anyway.

I’m not an expert on dependent types, but that sounds like what those try to solve. But at that point, you just want compile time to be a full program, which has all of the same pitfalls as the runtime of a program.
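
For reference, the smart-constructor version of that example (check once at runtime, carry the proof in the type afterwards):

    // Invariant: odd, in 3..=91. Checked once at construction, then trusted.
    #[derive(Clone, Copy, Debug)]
    struct OddSmall(u8);

    impl OddSmall {
        fn new(n: u8) -> Option<OddSmall> {
            if (3..=91).contains(&n) && n % 2 == 1 {
                Some(OddSmall(n))
            } else {
                None
            }
        }
    }

    // Callers can't even express an invalid input here.
    fn frobnicate(n: OddSmall) {
        let _ = n; // no re-validation needed
    }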


Any static type system prevents you from doing some subset of things that you could do with a dynamic type system. But there are obviously lots of things you get in exchange for that power. Type system evolution, usually, is about trying to colonize a larger and larger portion of that possibility space and put it under the domain of "statically checkable". It will never cover the whole thing, but it's still really valuable work and great strides are still being made.


What does dynamic typing have to do with the language used for CTFE? (Have I misunderstood you?)

D has fairly permissive CTFE which uses the language itself; it remains statically typed.

Haskell is statically typed while the type system can be made Turing complete by using certain language extensions.

The Terra language (embedded in Lua) is statically typed and is metaprogrammed using Lua.

Racket is certainly dynamic, but Typed Racket is ... well it's still dynamic, but type annotations are statically checked.


Dynamic typing would be throwing the baby out with the bathwater. Look at stage polymorphism (e.g. MetaML) for the right way to do this.


Can we have the title changed to "Generics and Compile Time in Rust"? The way it's written now, I thought for sure it would be about compile-time programming using generics.


I am convinced that `rustc` should have an "optimizer lint" phase that runs all the same checks other languages perform to change the behavior of the code, but instead suggests changes that affect running code and compile time, like `Box`ing fields or variants that disproportionately affect the size of an `enum` [1], or changing generic params to trait objects (or vice versa) [2] when it makes sense.

The advantage of not doing it automatically is that the behavior of the code once compiled can be always inferred from looking at the code, no magic and sudden changes in behavior because some threshold has been passed in some optimizer.
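
To illustrate the enum-size case from [1] (sizes assume a 64-bit target):

    enum Message {
        Quit,
        Payload([u8; 1024]),      // every Message is >= 1024 bytes
    }

    enum MessageBoxed {
        Quit,
        Payload(Box<[u8; 1024]>), // 8 bytes: Box's null niche stores the tag
    }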

[1]: https://github.com/rust-lang/rust-clippy/pull/5466

[2]: https://github.com/rust-lang/rust-clippy/issues/14


Rust compile times are just really really bad.


Nim or Rust?

Some Rust syntax seems overly confusing. But Nim doesn’t really have strong corporate support backing it.

But, both are still new, and missing a lot of libraries.

Rust is annoying in that there aren’t standardized libraries for common functions - just some guy’s tweet telling you to use some random Cargo crate.


> Note that in these examples we have to use inline(never) to defeat the optimizer. Without this it would turn these simple examples into the exact same machine code. I'll explore this phenomenon further in a future episode of this series.

I’m really eager to use more Rust, but these optimizations really turn me off. Optimizing around the compiler feels like metaprogramming.


What do you mean? I think those statements are there only so that the author can provide a glimpse into the difference in code generation between the two examples. You would not want to disable it when writing code for an actual project.



