
I think the twitter thread I linked answers your questions?

"use after free" is an instance of a memory access pattern that is considered invalid from the program's point of view.

What happens depends on the situation:

Was the memory returned to the operating system? If so, the access will probably result in a page fault, and if you don't have a handler installed for the signal, the OS will terminate your program.

Was the memory part of an arena managed by a custom allocator that still owns it? If so, the access will simply return whatever value is currently stored at the address being dereferenced.
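
For illustration, a rough C sketch of that second case (the toy arena and the names here are mine, purely illustrative):

    #include <stdio.h>
    #include <string.h>

    // A toy bump arena whose free is a no-op: the memory stays mapped and owned.
    static char arena[1024];
    static size_t used = 0;

    static void *arena_alloc(size_t n) { void *p = arena + used; used += n; return p; }
    static void arena_free(void *p)    { (void)p; /* nothing is reclaimed */ }

    int main(void) {
        char *s = arena_alloc(8);
        strcpy(s, "hello");
        arena_free(s);
        printf("%s\n", s); // "use after free": just reads whatever is at that address
        return 0;
    }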




This is fundamentally the same thing as undefined behavior, regardless of whether Odin insists on calling it by a different name. If you don't want behavior to be undefined, you have to define it, and every part of the compiler has to respect that definition. If a use-after-free is not undefined behavior in Odin, what behavior is it defined to have?

As a basic example, if the compiler guarantees that the write will result in a deterministic segmentation fault, then that address must never be reused by future allocations (including stack allocations!), and the compiler is not allowed to perform basic optimizations like dead store elimination and register promotion for accesses to that address, because those can prevent the segfault from occurring.
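
A sketch of why (a hypothetical scenario, not a claim about any particular compiler):

    // Suppose this stack slot happens to reuse a previously freed address.
    // Register promotion plus dead store elimination mean the writes below may
    // never touch memory at all, so a dangling pointer to that address can
    // neither trigger the "guaranteed" fault nor observe the values 1 and 2.
    int reuse(void) {
        int local = 1;  // dead store: normally deleted outright
        local = 2;      // normally lives only in a register
        return local;
    }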

If the compiler guarantees that the write will result in either a segfault or a valid write to that memory location, depending on the current state of the allocator, what guarantees does the compiler make about those writes? If some other piece of code is also performing reads and writes at that location, is the write guaranteed to be visible to that code? This essentially rules out dead store elimination, register promotion, constant folding, etc. for both pieces of code, because those optimizations can prevent one piece of code from observing the other's writes. Worse, what if the two pieces of code are on different threads? And so on.

If the compiler doesn't guarantee a deterministic crash, and it doesn't guarantee whether or not the write is visible to other code using the same region of memory, and it doesn't provide any ordering or atomicity guarantees for the write if it does end up being visible to other code, and then it performs a bunch of optimizations that can affect all of those things in surprising ways: congratulations, your language has undefined behavior. You can insist on calling it something else, but you haven't changed the fundamental situation.


Your language has behavior not defined within the language, sure. What it does not now have is permission for the compiler to presume that the code never executes with input that would cause that not-defined behavior to occur.


The compiler is already doing that when it performs any of the optimizations I mentioned above. When the compiler takes a stack-allocated variable (whose address is never directly taken) and promotes it to a register, removes dead stores to it, or constant-folds it out of existence, it does so under the assumption that the program is not performing aliasing loads and stores to that location on the stack. In other words, it is leaving the behavior of a program that performs such loads and stores undefined, and in doing so it is directly enabling some of the most basic, pervasive optimizations that we expect a compiler to perform.
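
A minimal C illustration of the kind of optimization I mean:

    int sum_to(int n) {
        int sum = 0;                 // address never taken
        for (int i = 0; i < n; i++)
            sum += i;                // 'sum' lives entirely in a register; code
                                     // that scans raw stack memory mid-loop will
                                     // never observe these intermediate values
        return sum;
    }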

In a language with raw pointers, essentially all optimizations rely on this type of assumption. Forbidding the compiler from making the assumption that undefined behavior will not occur essentially amounts to forbidding the compiler from optimizing at all. If that is indeed what you want, then what you want is something closer to a macro assembler than a high-level language with an optimizing compiler like C. It's a valid thing to want, but you can't have your cake and eat it too.


When you put it like that, it's actually interesting. If they went ahead and said, "This is a language which by design can't have an optimizing compiler, it's strictly up to the programmer - or the code generator, if used as an intermediate language - to optimize" then it would at least be novel.

But as they don't, I see it more as an attempt to annoy the people who have studied these sorts of things (I guess you are the people who "suck the joy out of programming" in their eyes).


No, the compiler is not "already" doing that. Odin uses LLVM as a backend (for now), and it turns off some of those UB-driven optimizations (as mentioned in the OP).

Some things are defined by the language, some things are defined by the operating system, some by the hardware.

It would be silly for Odin to say "you can't access a freed pointer" because it would have to presume to know ahead of time how you utilize memory. It does not. In Odin, you are free to create an allocator where the `free` call is a no-op, or it just logs the information somewhere without actually reclaiming the 'freed' memory.
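
Sketched in C rather than Odin's actual allocator interface, the idea is as simple as:

    #include <stdio.h>

    // A 'free' that only logs the request and reclaims nothing; a later read
    // through the pointer still sees the old contents.
    static void logging_free(void *p) {
        fprintf(stderr, "free(%p) requested; memory deliberately kept alive\n", p);
    }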

I can't speak for gingerBill, but I think one of the reasons to create the language was to break free from the bullying of spec lawyers who get in the way of systems programming and suck all the joy out of it.

> it does so under the assumption that the program is not performing aliasing loads and stores to that location on the stack

If you write code that tries to get a pointer to the first variable on the stack, then guesses the stack size and reads everything in it, Odin does not prevent that; it also (AFAIK) does not prevent the compiler from promoting local variables to registers.

Again, go back to the twitter thread. An explicit example is mentioned:

https://twitter.com/TheGingerBill/status/1496154788194668546

If you reference a variable, the language spec guarantees that it will have an address that you can take, so there's that. But if you use that address to try to get at other stack variables indirectly, then the language does not define what happens in a strict sense, but it's not 'undefined' behavior. It's a memory access to a specific address. The behavior depends on how the OS and the hardware handle that.
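
Roughly this kind of code, written here as C (where it happens to be UB; the claim is that Odin instead defers the outcome to the OS and the hardware):

    int probe(void) {
        int first = 1;
        int second = 2;    // may not even be in memory once optimized
        int *p = &first;
        return *(p + 1);   // tries to reach 'second' indirectly; the result
    }                      // depends entirely on the stack layout the compiler chose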

The compiler does not get to look at that and say "well this looks like undefined behavior, let me get rid of this line!".


> If you write code that tries to get a pointer to the first variable on the stack, then guesses the stack size and reads everything in it, Odin does not prevent that; it also (AFAIK) does not prevent the compiler from promoting local variables to registers.

This is exactly what I described above. Odin does not define the behavior of a program which indirectly pokes at stack memory, and it is thus able to perform optimizations which exploit the fact that that behavior is left undefined.

> The compiler does not get to look at that and say "well this looks like undefined behavior, let me get rid of this line!".

This is a misleading caricature of the relationship between optimizations and undefined behavior. C compilers do not hunt for possible occurrences of undefined behavior so they can gleefully get rid of lines of code. They perform optimizing transformations which are guaranteed to preserve the behavior of valid programs. Some programs are considered invalid (those which execute invalid operations like out-of-bounds array accesses at runtime), and those same optimizing transformations are simply not required to preserve the behavior of such programs. Odin does not work fundamentally differently in this regard.

If you want to get rid of a particular source of undefined behavior entirely, you either have to catch and reject all programs which contain that behavior at compile time, or you have to actually define the behavior (possibly at some runtime cost) so that compiler optimizations can preserve it. The way Odin defines the results of integer overflow and bit shifts larger than the width of the operand is a good example of the latter.
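
Both can be given cheap definitions; here is a C sketch of the approach (Odin's exact choices may differ):

    #include <stdint.h>

    // Define signed overflow as two's-complement wrap-around...
    int32_t wrapping_add(int32_t a, int32_t b) {
        return (int32_t)((uint32_t)a + (uint32_t)b); // unsigned wrap is defined
    }

    // ...and define oversized shifts to yield zero rather than being undefined.
    uint32_t shl(uint32_t x, uint32_t n) {
        return n >= 32 ? 0 : x << n;
    }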

C does have a particularly broad and programmer-hostile set of UB-producing operations, and I applaud Odin both for entirely removing particular sources of UB (integer overflow, bit shifts) and for making it easier to avoid it in general (bounds-checked slices, an optional type). These are absolutely good things. However, I consider it misleading and false to claim that Odin has no UB whatsoever; you can insist on calling it something else, but that doesn't change the practical implications.


> They perform optimizing transformations which are guaranteed to preserve the behavior of valid programs. Some programs are considered invalid (those which execute invalid operations like out-of-bounds array accesses at runtime), and those same optimizing transformations are simply not required to preserve the behavior of such programs.

I think this is the core of the problem and it's why people don't like these optimizations and turn them off.

Again I'm not the odin designer nor a core maintainer, so I can't speak on behalf of the language, but from what I understand, Odin's stance is that the compiler may not make assumptions about what kind of code is invalid and whose behavior therefore need not be preserved by the transformations it makes.


> C compilers do not hunt for possible occurrences of undefined behavior so they can gleefully get rid of lines of code.

Yes they do: if they detect UB, they consider the result poisoned and delete any code that depends on it.
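
The canonical C example:

    int f(int *p) {
        int v = *p;      // the dereference implies p != NULL (otherwise UB)...
        if (p == NULL)   // ...so this check is provably dead...
            return -1;   // ...and the whole branch gets deleted
        return v;
    }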


> The compiler does not get to look at that and say "well this looks like undefined behavior, let me get rid of this line!".

No production compiler does that (directly). This is silly. We want to help programmers. Compilers sometimes even keep code known to be UB, just because removing it is unlikely to help optimization.

But if you are optimizing assuming something does not happen, then you have undefined behavior. And you are always assuming something does not happen when optimizing.


> The compiler is already doing that when it performs any of the optimizations I mentioned above. When the compiler takes a stack-allocated variable (whose address is never directly taken) and promotes it to a register, removes dead stores to it, or constant-folds it out of existence, it does so under the assumption that the program is not performing aliasing loads and stores to that location on the stack. In other words, it is leaving the behavior of a program that performs such loads and stores undefined, and in doing so it is directly enabling some of the most basic, pervasive optimizations that we expect a compiler to perform.

No, that's C-think. Yes, when you take a stack-allocated variable and do those transformations, you must assume away the possibility that there are aliasing accesses to its location on the stack. Thus, those are not safe optimizations for the compiler to perform on a stack-allocated variable.

It's not something you have to do. The model of treating each variable as stack-allocated until proven (potentially fallaciously) otherwise is distinctly C brain damage.

> If that is indeed what you want, then what you want is something closer to a macro assembler than a high-level language with an optimizing compiler like C. It's a valid thing to want, but you can't have your cake and eat it too.

This is a false dichotomy advanced to discredit compilers outside the nothing-must-be-faster-than-C paradigm, and frankly a pretty absurd claim. There are plenty of "high-level" but transparent language constructs that can be implemented without substantially assuming non-aliasing. It's totally possible to lexically isolate raw pointer accesses and optimize around them. There is a history of computing before C! Heck, there are C compilers with "optimization" sets that don't behave as pathologically awfully as mainstream modern compilers do when you turn the "optimizations" off; you have to set a pretty odd bar for "optimizing compiler" to make that look closer to a macro assembler.

It's okay if your compiler can't generate numerical code faster than Fortran. That's not supposed to be the minimum bar for an "optimizing" compiler.


We are talking about Odin, a language aiming to be a 'better C' the way Zig is. The only real reason anyone uses C is to write code that runs as fast as possible, whether for resource-constrained environments or CPU-bound hot paths. Odin has many features that one would consider warts if you weren't in an environment where you'd otherwise turn to C, such as manual memory freeing. If I were pre-committing to a language that runs five times slower than C, I would have no reason to select Odin over C#, a language that runs only ~2.4 times slower than C.


> The model of treating each variable as stack-allocated until proven (potentially fallaciously) otherwise is distinctly C brain damage.

OK, let's consider block-local variables to have indeterminate storage location unless their address is taken. It doesn't substantively change the situation. Sometimes the compiler will store that variable in a register, sometimes it won't store it anywhere at all (if it gets constant-folded away), and sometimes it will store it on the stack. In the last case, it will generate and optimize code under the assumption that no aliasing loads or stores are being performed at that location on the stack, so we're back where we started.


Borrowing gingerBill's terminology, your "mini spec" is still incomplete. And it is unclear how an implicit or incomplete mini spec is different from UB, except for the fact that the compiler can't take advantage of it. From the user's perspective, a gap in the mini spec is still a gap that needs to be memorized, considered, and avoided, much like UB. And if you do somehow manage to define every "mini spec", that poses another problem: your specification limits what you can do in the future; for example, you would be unable to switch the memory allocator.


The difference is that in the case of UB in C, the compiler will say "aha! I will just assume this can never happen", whereas with platform-dependent behavior the compiler can never make such an assumption, so it has to be conservative.


You can already order a C compiler not to make assumptions based on UB, e.g. with -fno-strict-aliasing, -ftrapv, and others. Did that make C users happy? Not sure.
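
For example, the classic strict-aliasing case those flags exist for:

    // Under strict aliasing the compiler assumes *ip cannot modify *fp, so it
    // may return 1.0f even when both pointers refer to the same memory;
    // -fno-strict-aliasing forces it to reload *fp instead.
    float punned(int *ip, float *fp) {
        *fp = 1.0f;
        *ip = 42;
        return *fp;
    }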


The point of the C11 memory model is that it gives formal bounds on what optimizations the compiler is and is not able to do. In particular, it is free to reorder memory operations as if the program was single-threaded, unless the memory operations are explicitly marked as atomic. My assertion is that if you do these optimizations and then have a data race, it's functionally equivalent to undefined behavior, even if you call it something different and loudly proclaim that your language doesn't have UB.
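
In C11 terms, the line being drawn looks like this (a minimal sketch):

    #include <stdatomic.h>

    int plain;            // a racing access to this is undefined behavior
    _Atomic int shared;   // marked atomic: accesses may not be invented,
                          // omitted, or freely reordered by the compiler

    int reader(void) {
        return atomic_load_explicit(&shared, memory_order_relaxed) + plain;
    }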


Obviously Odin does not use C's memory model. And in instances where LLVM applies UB-based optimizations to Odin code, that is a bug, not a feature. Odin explicitly opts out of all optimization passes that depend on or leverage UB, but that's a moving target.

For example, as mentioned in the article, Odin does not leverage LLVM's poison value optimizations, which are derived from optimizations exploiting undefined behavior.

Sure, some code is slowed. But can you point to a well-known and well-used algorithm whose runtime characteristics depend upon exploiting UB? If your code goes fast because it's doing undefined things the compiler strips away, that's a structural bug in the application, in my view.


I'll give a concrete example. It's not the most compelling optimization in the world, but I think it illustrates the tradeoffs clearly. The following is pseudocode, not any particular language, but hopefully it will be clear.

    let i = mystruct.i;
    if (i < array.length) {
        let x = expensive_computation(i);
        array[i] = x;
    }
For the purpose of this example, let's say that the expensive computation is register-intensive but doesn't write any memory (so we don't need to get into alias analysis issues). Because it is register-intensive, our optimizer would like to free up the register occupied by i, replacing the second use of i by another load from mystruct.i. In C or unsafe Rust, this would be a perfectly fine optimization.
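
Rendered as C (my rendering, with hypothetical names), the before and after of that optimization:

    #include <stddef.h>

    struct S { size_t i; };
    extern int expensive_computation(size_t); // register-intensive, writes no memory

    // Before: 'i' is loaded once and held in a register throughout.
    void before(struct S *mystruct, int *array, size_t len) {
        size_t i = mystruct->i;
        if (i < len) {
            int x = expensive_computation(i);
            array[i] = x;
        }
    }

    // After: 'i' is not kept live across the call; the second use reloads from
    // memory. A racing write to mystruct->i between the bounds check and the
    // reload defeats the check.
    void after(struct S *mystruct, int *array, size_t len) {
        size_t i = mystruct->i;
        if (i < len) {
            int x = expensive_computation(i);
            array[mystruct->i] = x;
        }
    }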

If another thread writes mystruct.i concurrently, we have a time-of-check to time-of-use (TOCTOU) error. In C or unsafe Rust, that's accounted for by the fact that a data race is undefined behavior. One of the behaviors that's allowed (because basically all behaviors are allowed) is for the two uses of i to differ, invalidating the bounds check.

Different languages deal with this in different ways. Java succeeds in its goal of avoiding UB, disallowing this optimization; the mechanism for doing so in LLVM is to consider most memory accesses to have "unordered" semantics. However, this comes with its own steep tradeoff. To avoid tearing, all pointers must be "thin," specifically disallowing slices. Go, by contrast, has slices among its fat pointer types, so incurs UB when there's a data race. It's otherwise a fairly safe language, but this is one of the gaps in that promise.

Basically, my argument is this. If you're really rigorous about avoiding UB, you essentially have to define a memory model, then make sure your use of LLVM (or whatever code generation technique) is actually consistent with that memory model. That's potentially an enormous amount of work, very easy to get subtly wrong, and at the end of the day gives you fewer optimization opportunities than C or unsafe Rust. Thus, it's certainly not a tradeoff I personally would make.


Thanks. Currently Odin would cache i on the stack for retrieval later, granting LLVM the ability to load it into a register if profitable, with the knowledge that `i` is constant after the initial read, which bypasses the RAW hazard.

My view is that undefined behavior is a trash fire and serious effort should be undertaken to fix the situation before it gets even more out of hand.


> For the purpose of this example, let's say that the expensive computation is register-intensive but doesn't write any memory (so we don't need to get into alias analysis issues). Because it is register-intensive, our optimizer would like to free up the register occupied by i, replacing the second use of i by another load from mystruct.i. In C or unsafe Rust, this would be a perfectly fine optimization.

WAT?

This is obviously not "fine".

This kind of bullshit is why I said Odin's stance on UB is what swayed me to prefer it.


> If your code goes fast because it's doing undefined things the compiler strips away, that's a structural bug in the application, in my view.

I think that's the inverse of what UB-based optimizations do. Those optimizations rely on you _never_ doing what is considered UB. That's what makes the optimization correct. The problem is that, in practice, what's usually considered UB is not something that the language can always statically ensure you're not doing, and so when you do make a mistake, you land in a situation where the optimization seems to make things worse, instead of protecting you.
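
A classic C instance of that shape:

    // The optimizer relies on you never overflowing 'i' (signed overflow is UB),
    // which lets it treat the trip count as exactly n + 1 and vectorize freely.
    // If n == INT_MAX, the assumption is violated and anything may happen.
    void fill(int *a, int n) {
        for (int i = 0; i <= n; i++)
            a[i] = 0;
    }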


> Obviously Odin does not use C's memory model.

How is that obvious? And which memory model does Odin use? Just to be clear: a "memory model" is not about memory management, it is about the behaviour of multithreaded programs: https://en.m.wikipedia.org/wiki/Memory_model_(programming)


Thanks. I was thinking of strict aliasing and the associated undefined behavior, which Odin forbids. Odin's atomic intrinsics are modeled closely after C11's (and require the programmer to emit memory barriers when cache behavior must be controlled by instructions). I believe the final memory model will be platform-defined. There is no built-in atomic data type in Odin; only atomic operations are available, and most sync primitives are implemented in the standard library, wrapping OS capabilities (WaitOnAddress, etc.).

The parent comment said:

> then have a data race, it's functionally equivalent to undefined behavior

This is a matter of interpretation but there is a categorical difference between "this read is undefined so the compiler will silently omit it" and "this read will read whatever value is in the CPU cache at this address, even if the cache is stale". The difference is a matter of undefined behavior from the language versus the application being in an undefined state.


> This is a matter of interpretation but there is a categorical difference between "this read is undefined so the compiler will silently omit it" and "this read will read whatever value is in the CPU cache at this address, even if the cache is stale". The difference is a matter of undefined behavior from the language versus the application being in an undefined state.

I totally agree.


How is that different from specifying the behavior of addition only when it won't overflow? The C spec might as well say that it is invalid to overflow and that you are responsible for checking for that (and that's essentially what it does), but that's what UB is, AFAIK.
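
For instance, this classic wrap-around check relies on exactly the overflow the C spec never defined, so the compiler may delete it:

    int add_checked(int a, int b) {
        if (a > 0 && b > 0 && a + b < 0)  // intended overflow test, but the
            return -1;                    // compiler may assume no overflow,
        return a + b;                     // prove it false, and drop the branch
    }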



