lerno's comments | Hacker News

"Incremental compilation is fast", is something people only talk about when normal compilation speeds are abysmal. Sadly, C++ set the expectations here, which made both Rust and Swift think that compilation times in the minutes is fine.

If your code compiles in 1 second from scratch then what do you need incremental compilation for?


> If your code compiles in 1 second from scratch then what do you need incremental compilation for?

That's entirely fair. But when I watch somebody like Jon Blow, with his 1-second from-scratch compilation, the result seems to be that he just iterates a lot more without being significantly more productive than I am. I can imagine that for some parts of video game design fast iteration might be crucial, yet I struggle to imagine that compilation speed is what matters there. Systems design, art direction, plotting: these don't seem like areas where you need those one-second compile times.


> If your code compiles in 1 second from scratch then what do you need incremental compilation for?

I really want incremental compilation in 100ms or less; it makes live reloading (such as with Dioxus subsecond [0]) so much more enjoyable.

I don't care about the time spent on fully recompiling the whole project from scratch though.

[0] https://www.reddit.com/r/rust/comments/1j8z3yb/media_dioxus_...


Alloca would not allow you to pass data from the current scope up to a parent scope.

I did say 10x better alloca. I'm saying that's not good enough, and seems very narrow.

You could do this in C++, with stacked RAII arena allocators. Though it's unclear to me from the blog post whether C3 would prevent returning a pointer to memory in the topmost pool. C++ probably wouldn't help you prevent that.


What destructor?

Not possible to nest, and it's possible to run out of stack memory quickly. That said, C3 has `@stack_mem(1024; Allocator mem) { ... }`, which allows you to allocate a part of the stack and use that as an allocator with fallback.
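
For illustration, a minimal sketch of how that might look; this assumes a stdlib helper named `allocator::alloc_array`, and exact names may differ between C3 versions:

    fn void example()
    {
        // Reserve 1024 bytes of the current stack frame and expose them
        // through `mem`; if the buffer runs out, allocation falls back
        // to the backing allocator.
        @stack_mem(1024; Allocator mem)
        {
            int* nums = allocator::alloc_array(mem, int, 8);
            nums[0] = 42;
        }; // Everything allocated through `mem` is released here.
    }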

> care more about binary size than speed

That does not seem to be true if you look at how string formatting is implemented.


Well, the title (which is poorly worded, as has been pointed out) refers to C3 being able to implement good handling of lifetimes for temporary allocations by baking it into the stdlib. And so it doesn't need to reach for any additional language features. (There is, for example, a C superset that implements borrowing, but C3 doesn't take that route.)

What the C3 solution DOES is provide a way to detect at runtime when an already-freed temporary allocation is used. That's of course not the level of compile-time checking that Rust does. But then Rust has a lot more in the language to support this.

Conversely, C3 does have contracts as a language feature, which Rust doesn't have, so C3 is able to do static checking with the contracts to reject contract violations at compile time, which runtime contracts, like those some Rust crates provide, can't do.


> What the C3 solution DOES is provide a way to detect at runtime when an already-freed temporary allocation is used.

The article makes no mention of this, so in the context of the article the title remains very wrong. I could also not find a page in the documentation claiming this is supported (though I have to admit I did not read all the pages), nor an explanation of how this works, especially in relation to the performance hit it would result in.

> C3 is able to do static checking with the contracts to reject contract violations at compile time

I tried searching how these contracts work on the C3 website [1] and there seems to be no guaranteed static checking of such contracts. Even worse, violating them when not using safe mode results in "unspecified behaviour", but really it's undefined behaviour (violating contracts is even in their list of undefined behaviour! [2])

[1]: https://c3-lang.org/language-common/contracts/

[2]: https://c3-lang.org/language-rules/undefined-behaviour/#list...


> The article makes no mention of this, so in the context of the article the title remains very wrong

The temp allocator implementation isn't guaranteed to detect it, and the article doesn't go into implementation details and guarantees (which is good, because capabilities will be added on the road to 1.0).

> I tried searching how these contracts work on the C3 website [1] and there seems to be no guaranteed static checking of such contracts.

No, there is no guarantee at the language level because doing so would make a conforming implementation of the compiler harder than it needs to be. In addition, setting exact limits may hamper innovation in compilers that wish to add more analysis but would hesitate to reject code that can be statically known to violate contracts.

At higher optimization levels, the compiler is allowed to assume that the contracts evaluate to true. This means that code like `assert(i == 1); if (i != 1) return false;` can be reduced to a no-op.
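
To make that concrete, a sketch (the function is made up for illustration):

    fn bool is_one(int i)
    {
        assert(i == 1);
        // When asserts/contracts become assumptions at higher optimization
        // levels, the compiler may treat this branch as dead code and
        // remove it, leaving the function as just `return true`.
        if (i != 1) return false;
        return true;
    }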

So the danger here is then if you rely on the function giving you a valid result even if the input is not one that the function should work with.

And yes, it will be optional to have those "assumes" inserted.

Already today, the current compiler catches something trivial like writing `foo(0)` for a function that requires the parameter to be > 1, at compile time. And it's not doing any real analysis yet, but that will definitely happen.
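
A sketch of what that looks like with C3's `@require` doc contracts; the function name is made up for illustration:

    <*
     @require x > 1
    *>
    fn int shrink(int x)
    {
        return x - 1;
    }

    fn void main()
    {
        shrink(0); // Constant argument visibly violates @require: rejected at compile time.
    }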


Just my opinion, but I think that having contracts that might be checked is a really, really, really dangerous approach. I think it is a much better idea to start with a plan for what sorts of things you can check soundly and only do those. "Well, we missed that one because we only have intraprocedural constant propagation" is not the sort of thing most users will understand, and it will catch people by surprise.

Safety is a spectrum. You add +1 and safety goes up.

Well, we've already tried that, and no one used it.

> The temp allocator implementation isn't guaranteed to detect it, and the article doesn't go into implementation details and guarantees

Understandable, but then why are you mentioning the borrow checker if you avoided mentioning _anything_ that could be compared to it?

> No, there is no guarantee at the language level

Then don't go around claiming they are statically checked; that's false. What you have is a basic linter, not a statically enforced contract system.


Oof that sounds incredibly dangerous and basically means it doesn't really offer much of an improvement over C imo in terms of safety.

What is "incredibly dangerous"? Having contracts that can catch errors at compile time?

I understood your comment as saying that contracts are not statically checked at compile time. That is incredibly dangerous and means I would never go to bat for the language.

> What the C3 solution DOES is provide a way to detect at runtime when an already-freed temporary allocation is used.

I looked at the allocator source code and there’s no use-after-free protection beyond zeroing on free, and that is in no way sufficient. Many UAF security exploits work by using a stale pointer to mutate a new allocation that re-uses memory that has been freed, and zeroing on free does nothing to stop these exploits.


It doesn't zero on free; that's not what the code does. But if you're looking for something to prevent exploits, then no, this is not it, nor does it try to be.

How would you want that implemented?


> But if you're looking for something to prevent exploits, then no, this is not it, nor does it try to be.

> How would you want that implemented?

Any of the usual existing ways of managing memory lifetimes (i.e. garbage collection or Rust-style borrow checking) prevents that particular kind of exploitation (subject to various caveats) by ensuring you can't have a pointer to memory that has already been freed. So one would expect something that claims to solve the same problem to solve that problem.


All of that is out of scope for a C-like though. Once you set the constraints around C, there will be trade-offs. Rust is a high level language.

C on modern hardware/compilers has all the disadvantages of a high-level language (at least to the extent that Rust does).

That was already available in languages like Modula-2 and Object Pascal; as the blog post acknowledges, the idea is quite old. It was also the common approach to managing memory originally with Objective-C on NeXTSTEP; see NSZone.

Hence why all these wannabe C replacements, the ones that don't want to be like Rust, should bring more to the table.


No, that is quite possible. You will not be able to use the memory you just returned, though. What actually happens is an implementation issue, but it ranges from having the memory overwritten (but still being writable) on platforms with the least support, to being neither readable nor writable, to throwing an exact error with ASAN on. Crashing on every use is often a good sign that there is a bug.

It might not be on every use though. The assignment could very well be conditional. If a dangling reference could escape from the arena in which it was allocated, you cannot claim to have memory safety. You can claim that the arena prevents memory leaks (if you remember to allocate everything correctly within the arena), but it doesn't provide memory safety.

Memory safety as in the full toolset that Rust provides? C3 clearly doesn't, I fully agree.

No, by memory safety people don't mean the exact mechanisms Rust has (and therefore Rust). They mean

https://en.wikipedia.org/wiki/Memory_safety


Well, the latter is covered: you can make temp allocations out of order by nesting `@pool`s. There are examples in the blog post.

It doesn't solve the case when lifetimes are indeterminate. But often they are well known. Consider `foo(bar())`, where `bar()` returns an allocated object that we wish to free after `foo` has used it. In something like C it's easy to accidentally leak such a temporary object, and doing it properly means several lines of code, which might be bad if it's intended for an `if` statement or `while`.
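
A sketch of the pattern in C3, assuming `string::tformat` (which formats into the temp allocator); `foo` and `bar` are hypothetical:

    import std::io;

    fn String bar()
    {
        // tformat allocates the resulting string from the temp allocator.
        return string::tformat("id-%d", 42);
    }

    fn void foo(String s)
    {
        io::printfn("%s", s);
    }

    fn void main()
    {
        @pool()
        {
            foo(bar()); // No manual free; the temporary dies with the pool.
        };
    }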


You can certainly do it with RAII. However, what if a language lacks RAII because it prioritizes explicit code execution? Or simply wants to retain simple C semantics?

Because that is the context. It is the constraint that C3, C, Odin, Zig etc. maintain, where RAII is out of the question.


If you want RAII to be explicit, then show an error if you fail to call the destructor. That's it.

Ok, then I understand what you mean. (I couldn't respond directly to your answer; maybe there is a limit to nesting on HN?)

Let me respond in some more detail then, to at least answer why C3 doesn't have RAII: it tries to follow the principle that data is inert. That is, data doesn't have behaviour in itself, but is acted on by functions. (Even though C3 has methods, they are more a namespacing detail, allowing methods that derive data from the value or mutate it. They are not intended as organizational units.)

To simplify what the goal is: it should be possible to create or destroy data in bulk, without executing code for each individual element. If you create 10000 objects in a single allocation, it should be as cheap to free (or create) as a single object.
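
A sketch of that with the temp allocator; this assumes the temp allocator is reachable as `tmem` (older C3 versions spelled it `allocator::temp()`), and `allocator::alloc_array` as the array helper:

    struct Point
    {
        float x;
        float y;
    }

    fn void bulk()
    {
        @pool()
        {
            // One arena allocation backs all 10_000 points; no per-object
            // constructors run.
            Point* points = allocator::alloc_array(tmem, Point, 10_000);
            points[0].x = 1.0f;
        }; // Freeing them is a single step, as cheap as freeing one object.
    }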

We can imagine things built into the type system, but then we would need unsafe constructs where a type is converted from its "unsafe" creation to its "managed" type.

I did look at various cheap ways of doing this through the type system, but it stopped resembling C and seemed to put the focus on resource management rather than the problem at hand.

So that is why it's closer to C than Rust.


You lost me there I'm afraid.

The idea is, you could have a language like Rust, but with linear rather than affine types. Such a language would have RAII-like idioms, but no implicit destructors; instead, it'd be a compile-time error to have a non-Copy local variable whose value is not always moved out of it before its scope ends (i.e., to write code that in Rust could include an implicit destructor call). So you would have explicit deallocation functions like in C, but unlike in C you could not have resource leaks from forgetting to call them, because the compiler would not let you.

To the extent that you subscribe to a principle like "invisible function calls are never okay", this solves that without undermining Rust's safety story more broadly. I have no idea whether proponents of "better C" type languages have this as their core rationale; I personally don't see the appeal of that flavor of language design.


NSAutoreleasePool keeps a list of autoreleased objects, that are given a "release" message when the pool goes out of scope.

`@pool` flushes the temp allocator and all allocations made by the temp allocator are freed when the pool goes out of scope.

There are similarities, but NSAutoreleasePool is for refcounting and an object released by the autoreleasepool might have other objects retaining it, so it's not necessarily freed.

