glowcoil's comments

The only physically accurate answer for where to put the far plane is "behind everything you want to be visible". It fundamentally does not make any sense to change the shape of the far plane to "more accurately reflect human visual perception" because there is no far plane involved in human visual perception, period.

You're describing a problem with a particular method of fog rendering. The correct way to address that would be to change how fog is rendered. The perspective projection and the far plane are simply not the correct place to look for a solution to this.

I disagree. This problem exists even when the fog is completely absent: objects at the sides of the screen are distorted regardless of whether there is any fog. I guess you could use fog, rendered in a particular way, to make it less noticeable, but it's still there. So the root cause is the perspective projection.

Now, I've googled a bit on my own, trying all kinds of search phrases, and apparently it is a known problem that the perspective projection, when a wide FOV (about 75 degrees and up) is used, will distort objects at the sides of the screen. One of the solutions appears to be a post-processing pass called "Panini Projection" which undoes that damage at the sides of the screen. From what I understand, it uses a cylinder (not a sphere) as the projection surface instead of a plane.


You originally described a problem where fog had a different falloff in world space at the edges of the screen compared to the center of the screen. The root cause of that is not the perspective projection; it's how the fog is being rendered.

The issue you are describing now is called perspective distortion (https://en.wikipedia.org/wiki/Perspective_distortion), and it is something that also happens with physical cameras when using a wide-angle lens. There is no single correct answer for dealing with this: similarly to the situation with map projections, every projection is a compromise between different types of distortion.

Anyway, if you're writing a ray tracer it's possible to use whatever projection you want, but if you're using the rasterizer in the GPU you're stuck with rectilinear projection and any alternate projection has to be approximated some other way (such as via post-processing, like you mention).


It's not. Why would it be?


If you assume at the start of your proof that π is rational, it's not clear that you can then still make use of concepts closely related to π. If those concepts depend in any way on π being irrational, then you can't use them to cleanly arrive at the contradiction.


This is fundamentally the same thing as undefined behavior, regardless of whether Odin insists on calling it by a different name. If you don't want behavior to be undefined, you have to define it, and every part of the compiler has to respect that definition. If a use-after-free is not undefined behavior in Odin, what behavior is it defined to have?

As a basic example, if the compiler guarantees that the write will result in a deterministic segmentation fault, then that address must never be reused by future allocations (including stack allocations!), and the compiler is not allowed to perform basic optimizations like dead store elimination and register promotion for accesses to that address, because those can prevent the segfault from occurring.
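
To make the address-reuse point concrete, here is a small Rust program (Rust rather than Odin, purely for illustration) that shows how readily an allocator can hand the same address back out; the exact behavior depends on the allocator, so treat it as a demonstration, not a guarantee:

    fn main() {
        // Allocate, record the address, then free.
        let a = Box::new(42u64);
        let addr_a = &*a as *const u64 as usize;
        drop(a);

        // Allocate again with the same size and layout.
        let b = Box::new(7u64);
        let addr_b = &*b as *const u64 as usize;

        println!("first allocation:  {:#x}", addr_a);
        println!("second allocation: {:#x}", addr_b);
        // On typical allocators these two addresses often coincide, which is
        // exactly why a dangling write can't be defined to always segfault:
        // the "freed" address may now belong to a live, unrelated object.
    }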

If the compiler guarantees that the write will result in either a segfault or a valid write to that memory location, depending on the current state of the allocator, what guarantees does the compiler make about those writes? If some other piece of code is also performing reads and writes at that location, is the write guaranteed to be visible to that code? This essentially rules out dead store elimination, register promotion, constant folding, etc. for both pieces of code, because those optimizations can prevent one piece of code from observing the other's writes. Worse, what if the two pieces of code are on different threads? And so on.

If the compiler doesn't guarantee a deterministic crash, and it doesn't guarantee whether or not the write is visible to other code using the same region of memory, and it doesn't provide any ordering or atomicity guarantees for the write if it does end up being visible to other code, and then it performs a bunch of optimizations that can affect all of those things in surprising ways: congratulations, your language has undefined behavior. You can insist on calling it something else, but you haven't changed the fundamental situation.


Your language has behavior not defined within the language, sure. What it does not now have is permission for the compiler to presume that the code never executes with input that would trigger that not-defined behavior.


The compiler is already doing that when it performs any of the optimizations I mentioned above. When the compiler takes a stack-allocated variable (whose address is never directly taken) and promotes it to a register, removes dead stores to it, or constant-folds it out of existence, it does so under the assumption that the program is not performing aliasing loads and stores to that location on the stack. In other words, it is leaving the behavior of a program that performs such loads and stores undefined, and in doing so it is directly enabling some of the most basic, pervasive optimizations that we expect a compiler to perform.
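
As a hand-written illustration (not actual compiler output), here is a hypothetical pair of Rust functions: the second is what an optimizer is effectively allowed to produce from the first, precisely because it may assume nothing else touches the local's stack slot:

    // Before: `x` conceptually lives in a stack slot and is stored to on
    // every loop iteration.
    fn before() -> i32 {
        let mut x = 0;
        for i in 0..10 {
            x += i;
        }
        x
    }

    // After: the slot, the stores, and the loop are gone entirely. This is
    // only a valid transformation if no aliasing loads or stores to that
    // slot are allowed to observe the difference.
    fn after() -> i32 {
        45 // constant-folded result of summing 0..10
    }

    fn main() {
        assert_eq!(before(), after());
    }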

In a language with raw pointers, essentially all optimizations rely on this type of assumption. Forbidding the compiler from making the assumption that undefined behavior will not occur essentially amounts to forbidding the compiler from optimizing at all. If that is indeed what you want, then what you want is something closer to a macro assembler than a high-level language with an optimizing compiler like C. It's a valid thing to want, but you can't have your cake and eat it too.


When you put it like that, it's actually interesting. If they went ahead and said, "This is a language which by design can't have an optimizing compiler, it's strictly up to the programmer - or the code generator, if used as an intermediate language - to optimize" then it would at least be novel.

But as they don't, I see it more as an attempt to annoy the people who have studied these sorts of things (I guess you are the people who "suck the joy out of programming" in their eyes).


No, the compiler is not "already" doing that. Odin uses LLVM as a backend (for now) and turns off some of those UB-driven optimizations (as mentioned in the OP).

Some things are defined by the language, some things are defined by the operating system, some by the hardware.

It would be silly for Odin to say "you can't access a freed pointer" because it would have to presume to know ahead of time how you utilize memory. It does not. In Odin, you are free to create an allocator where the `free` call is a no-op, or it just logs the information somewhere without actually reclaiming the 'freed' memory.

I can't speak for gingerBill but I think one of the reasons to create the language is to break free from the bullying of spec lawyers who get in the way of systems programming and suck all the joy out of it.

> it does so under the assumption that the program is not performing aliasing loads and stores to that location on the stack

If you write code that tries to get a pointer to the first variable on the stack, guess the stack size, and read everything in it, Odin does not prevent that; it also (AFAIK) does not prevent the compiler from promoting local variables to registers.

Again, go back to the twitter thread. An explicit example is mentioned:

https://twitter.com/TheGingerBill/status/1496154788194668546

If you reference a variable, the language spec guarantees that it will have an address that you can take, so there's that. But if you use that address to try to get at other stack variables indirectly, then the language does not define what happens in a strict sense, but it's not 'undefined' behavior. It's a memory access to a specific address. The behavior depends on how the OS and the hardware handle that.

The compiler does not get to look at that and say "well this looks like undefined behavior, let me get rid of this line!".


> If you write code that tries to get a pointer to the first variable on the stack, guess the stack size, and read everything in it, Odin does not prevent that; it also (AFAIK) does not prevent the compiler from promoting local variables to registers.

This is exactly what I described above. Odin does not define the behavior of a program which indirectly pokes at stack memory, and it is thus able to perform optimizations which exploit the fact that that behavior is left undefined.

> The compiler does not get to look at that and say "well this looks like undefined behavior, let me get rid of this line!".

This is a misleading caricature of the relationship between optimizations and undefined behavior. C compilers do not hunt for possible occurrences of undefined behavior so they can gleefully get rid of lines of code. They perform optimizing transformations which are guaranteed to preserve the behavior of valid programs. Some programs are considered invalid (those which execute invalid operations like out-of-bounds array accesses at runtime), and those same optimizing transformations are simply not required to preserve the behavior of such programs. Odin does not work fundamentally differently in this regard.

If you want to get rid of a particular source of undefined behavior entirely, you either have to catch and reject all programs which contain that behavior at compile time, or you have to actually define the behavior (possibly at some runtime cost) so that compiler optimizations can preserve it. The way Odin defines the results of integer overflow and bit shifts larger than the width of the operand is a good example of the latter.
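
For comparison, here is roughly what "define the behavior" looks like in Rust (shown in Rust rather than Odin syntax, since I can't speak to Odin's exact rules): overflow and oversized shifts become operations with specified results instead of UB:

    fn main() {
        // Two's-complement wraparound is an explicitly defined result:
        // 250 + 10 == 260 == 4 (mod 256).
        let x: u8 = 250;
        assert_eq!(x.wrapping_add(10), 4);

        // A shift by at least the operand's width returns None rather than
        // producing undefined behavior.
        let y: u32 = 1;
        assert_eq!(y.checked_shl(40), None);
    }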

C does have a particularly broad and programmer-hostile set of UB-producing operations, and I applaud Odin both for entirely removing particular sources of UB (integer overflow, bit shifts) and for making it easier to avoid it in general (bounds-checked slices, an optional type). These are absolutely good things. However, I consider it misleading and false to claim that Odin has no UB whatsoever; you can insist on calling it something else, but that doesn't change the practical implications.


> They perform optimizing transformations which are guaranteed to preserve the behavior of valid programs. Some programs are considered invalid (those which execute invalid operations like out-of-bounds array accesses at runtime), and those same optimizing transformations are simply not required to preserve the behavior of such programs.

I think this is the core of the problem and it's why people don't like these optimizations and turn them off.

Again I'm not the odin designer nor a core maintainer, so I can't speak on behalf of the language, but from what I understand, Odin's stance is that the compiler may not make assumptions about what kind of code is invalid and whose behavior therefore need not be preserved by the transformations it makes.


> C compilers do not hunt for possible occurrences of undefined behavior so they can gleefully get rid of lines of code.

Yes they do: if they detect UB they consider the result poisoned and delete any code that depends on it.


> The compiler does not get to look at that and say "well this looks like undefined behavior, let me get rid of this line!".

No production compiler does that (directly). This is silly. We want to help programmers. Compilers sometimes keep code even when it is known to be UB, just because removing it is unlikely to help optimizations.

But if you are optimizing assuming something does not happen, then you have undefined behavior. And you are always assuming something does not happen when optimizing.


> The compiler is already doing that when it performs any of the optimizations I mentioned above. When the compiler takes a stack-allocated variable (whose address is never directly taken) and promotes it to a register, removes dead stores to it, or constant-folds it out of existence, it does so under the assumption that the program is not performing aliasing loads and stores to that location on the stack. In other words, it is leaving the behavior of a program that performs such loads and stores undefined, and in doing so it is directly enabling some of the most basic, pervasive optimizations that we expect a compiler to perform.

No, that's C-think. Yes, when you take a stack-allocated variable and do those transformations, you must assume away the possibility that there are aliasing accesses to its location on the stack. Thus, those are not safe optimizations for the compiler to perform on a stack-allocated variable.

It's not something you have to do. The model of treating each variable as stack-allocated until proven (potentially fallaciously) otherwise is distinctly C brain damage.

> If that is indeed what you want, then what you want is something closer to a macro assembler than a high-level language with an optimizing compiler like C. It's a valid thing to want, but you can't have your cake and eat it too.

This is a false dichotomy advanced to discredit compilers outside the nothing-must-be-faster-than-C paradigm, and frankly a pretty absurd claim. There are plenty of "high-level" but transparent language constructs that can be implemented without substantially assuming non-aliasing. It's totally possible to lexically isolate raw pointer accesses and optimize around them. There is a history of computing before C! Heck, there are C compilers with "optimization" sets that don't behave as pathologically awfully as mainstream modern compilers do when you turn the "optimizations" off; you have to set a pretty odd bar for "optimizing compiler" to make that look closer to a macro assembler.

It's okay if your compiler can't generate numerical code faster than Fortran. That's not supposed to be the minimum bar for an "optimizing" compiler.


We are talking about Odin, a language aiming to be a 'better C' the way Zig is. The literal only reason anyone uses C is to write code that runs as fast as possible, whether for resource-constrained environments or CPU-bound hot paths. Odin has many features that one would consider warts if you weren't in an environment where you'd otherwise turn to C, such as manual memory freeing. If I were pre-committing to a language that runs five times slower than C, I would have no reason to select Odin over C#, a language that runs only ~2.4 times slower than C.


> The model of treating each variable as stack-allocated until proven (potentially fallaciously) otherwise is distinctly C brain damage.

OK, let's consider block-local variables to have indeterminate storage location unless their address is taken. It doesn't substantively change the situation. Sometimes the compiler will store that variable in a register, sometimes it won't store it anywhere at all (if it gets constant-folded away), and sometimes it will store it on the stack. In the last case, it will generate and optimize code under the assumption that no aliasing loads or stores are being performed at that location on the stack, so we're back where we started.


Frankly it seems strange to me to be comparing Vale's generational reference system and Rust's borrow checker directly. They have completely different characteristics and are not direct substitutes for one another.

First, Rust's borrow checker incurs zero runtime overhead for any pointer operations (whether dereferencing, copying, or dropping a pointer) and requires no extra storage at runtime (no reference counts or generation numbers); it's entirely a set of compile-time checks. Generational references, on the other hand, require storing an extra piece of data alongside both every heap allocation and every non-owning reference, and they incur an extra operation at every dereference.

Second, since Rust's borrow checker exists entirely at compile time, it doesn't introduce any runtime failures. If a program violates the rules of the borrow checker, it won't compile; if a program compiles successfully, the borrow checker does not insert any conditional runtime panics or aborts. Generational references, in comparison, consist entirely of a set of runtime checks; you won't find out that you violated the rules of generational references until it happens at runtime during a particular execution and your program crashes.

Finally, Rust's borrow checker applies to references of all kinds, whether they point to a heap-allocated object, a stack-allocated object, an inline field inside a larger allocation, a single entry in an array, or even an object allocated on a different heap and passed over FFI. Its checks still apply even in scenarios where there is no heap. Generational references, on the other hand, are entirely specific to heap-allocated objects. They don't work for stack-allocated objects, they don't work for foreign objects allocated on a different heap, and they don't work in a scenario with no heap at all.

All of these are fundamental differences which mean that Vale's generational reference system is not at all a replacement for Rust's borrow checker. It's not zero-overhead, it doesn't catch errors at compile time, and it's fundamentally specific to heap-allocated objects. In these ways it's more comparable to Rust's Rc, which introduces runtime overhead and is specific to heap-allocated objects, or RefCell, which performs checks at runtime that can result in aborting the program.
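
For concreteness, here's a toy sketch of the generational idea in Rust (illustrative only; Vale's actual machinery is more involved): every allocation slot carries a generation number, every non-owning reference remembers the generation it expects, and every access pays for the runtime check described above:

    struct Slot<T> {
        generation: u64,
        value: Option<T>,
    }

    // A non-owning reference: an index plus the generation it expects.
    struct GenRef {
        index: usize,
        generation: u64,
    }

    struct Heap<T> {
        slots: Vec<Slot<T>>,
    }

    impl<T> Heap<T> {
        fn alloc(&mut self, value: T) -> GenRef {
            // Reuse a freed slot if one exists; its generation was already
            // bumped, so stale references to the old occupant won't match.
            if let Some(index) = self.slots.iter().position(|s| s.value.is_none()) {
                let slot = &mut self.slots[index];
                slot.value = Some(value);
                return GenRef { index, generation: slot.generation };
            }
            self.slots.push(Slot { generation: 0, value: Some(value) });
            GenRef { index: self.slots.len() - 1, generation: 0 }
        }

        fn free(&mut self, r: &GenRef) {
            let slot = &mut self.slots[r.index];
            if slot.generation == r.generation {
                slot.value = None;
                slot.generation += 1; // invalidate all outstanding references
            }
        }

        fn get(&self, r: &GenRef) -> Option<&T> {
            let slot = self.slots.get(r.index)?;
            if slot.generation == r.generation {
                slot.value.as_ref() // the check every dereference pays for
            } else {
                None // slot was freed (and possibly reused) since `r` was made
            }
        }
    }

    fn main() {
        let mut heap = Heap { slots: Vec::new() };
        let a = heap.alloc("first");
        heap.free(&a);
        let b = heap.alloc("second"); // reuses the slot with a new generation
        assert_eq!(heap.get(&a), None); // stale reference caught at runtime
        assert_eq!(heap.get(&b), Some(&"second"));
    }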


I and some other folks in the Rust audio community have put together some low-level bindings for the CLAP API: https://github.com/glowcoil/clap-sys

They're relatively straightforward due to the fact that CLAP is a simple, pure-C ABI, and there are already some fully functional plugins making use of them (e.g. https://github.com/robbert-vdh/nih-plug).


You can't use RAII in Rust? What on earth could this possibly mean? RAII is an extremely pervasive pattern in Rust and is fundamental to many of the safe APIs in the standard library.


To clarify, I was talking about making RAII, not using RAII. And it surprised me too, when I learned that the borrow checker rejects it.

To see it in action: Have a Database object, and try to have multiple Transaction objects that might commit something to it, in their drop().

It's unfortunately not possible, because they can't all have a &mut Database as struct fields.

We can sacrifice speed (by using Cell's copying or Rc's counting) or safety (by using unsafe). Most RAII we see uses unsafe FFI under the hood, which is why it was so surprising to me.


Rust is actually right, you cannot have multiple mutable references to a Database object without things going down the drain. (This is related to the fact that, like other comments said, &mut is an exclusive reference).

However, achieving something like what you want is still more than possible in Rust. You can do this with the pattern of 'interior mutability', which in its simplest form is just a Mutex. This allows upgrading a shared reference to an exclusive reference, so that you can safely mutate an object while upholding the expectations that a mutable reference is exclusive, and a non-mutable reference does not change from under your feet.

Of course, for a database, you will probably want a more advanced implementation of interior mutability, so that you can commit multiple transactions at the same time. (Or not, it seems to work quite well for SQLite.)
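
Here's a minimal sketch of that pattern (the Database/Transaction names are just placeholders, not a real crate API): each Transaction holds a shared handle to the database and commits in its Drop impl, so no &mut Database ever needs to be stored in a struct field:

    use std::sync::{Arc, Mutex};

    struct Database {
        committed: Vec<String>,
    }

    struct Transaction {
        db: Arc<Mutex<Database>>,
        pending: Vec<String>,
    }

    impl Transaction {
        fn execute(&mut self, stmt: &str) {
            self.pending.push(stmt.to_string());
        }
    }

    impl Drop for Transaction {
        fn drop(&mut self) {
            // Briefly upgrade the shared handle to exclusive access.
            let mut db = self.db.lock().unwrap();
            db.committed.append(&mut self.pending);
        }
    }

    fn main() {
        let db = Arc::new(Mutex::new(Database { committed: Vec::new() }));

        // Multiple live transactions, all pointing at the same database.
        let mut t1 = Transaction { db: Arc::clone(&db), pending: Vec::new() };
        let mut t2 = Transaction { db: Arc::clone(&db), pending: Vec::new() };
        t1.execute("INSERT 1");
        t2.execute("INSERT 2");
        drop(t1); // commits in drop()
        drop(t2);

        assert_eq!(db.lock().unwrap().committed.len(), 2);
    }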


RAII is a general pattern for tying resource management to the lifetime of objects such that resource allocation is tied to value construction and resource deallocation is tied to value destruction. The smart pointers for allocation in the Rust standard library (Box, Rc, and Arc) are examples of the RAII pattern, since memory allocation happens at creation time (Box::new()) and memory deallocation happens when the Box goes out of scope (in drop()). Another example of RAII in the standard library is File: opening a file means creating a value of type File, and dropping that value means closing the file. Yet another example is the smart-pointer guards used for accessing RefCell and Mutex: RefCell::borrow() returns a Ref, and Mutex::lock() returns a MutexGuard; the underlying value can only be accessed while the guard exists, and access is relinquished when the guard is dropped. Given all this, it's absurd to say that Rust doesn't support RAII; RAII is fundamental to the design of many of Rust's safe APIs.
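
A couple of those in action, using only the standard library (the file name is just a placeholder):

    use std::fs::File;
    use std::io::Write;
    use std::sync::Mutex;

    fn main() -> std::io::Result<()> {
        {
            let mut file = File::create("example.txt")?; // resource acquired
            file.write_all(b"hello")?;
        } // `file` dropped here: the OS handle is closed automatically

        let counter = Mutex::new(0);
        {
            let mut guard = counter.lock().unwrap(); // lock acquired
            *guard += 1;
        } // `guard` dropped here: the lock is released automatically

        Ok(())
    }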

The very specific API design that you've described is not possible in Rust, but it is strange to equate this with the entirety of RAII. In any case, there are many alternative APIs (some with no sacrifice in speed or safety!) that are perfectly possible in Rust.


I think the real mistake was to have "exclusive references" be called "mutable references" in the language. I've gotten into the habit of reading "mut" as "mutually exclusive" for references. Of course you can't have each Statement keep an exclusive reference to a db object. They're exclusive!

You need shared references for your DB, implying you need interior mutability. This is how Statements are implemented in real-world rust database drivers such as rusqlite (any operation on a db is done through a shared reference). The fact that a very real package is doing it proves that the pattern you're talking about is, in fact, possible.


You are missing something. For a piece of Rust software to run in any widely used computing environment, it is required to interface with a large body of non-Rust software via a non-typechecked ABI. Moreover, the Rust standard library itself contains many, many instances of the unsafe keyword. The benefits of Rust safety do not come from building a hermetically isolated tower of pure safe Rust code from the ground up, and those benefits do not become null and void the moment you include one C library used via FFI.

Rust safety is about being able to take an unsafe component, encapsulate its implementation details, and encode sound usage patterns for that component in a public API which can then be statically checked by the compiler. This allows the difficult problem of determining whether an entire codebase is sound, memory-safe, and free of undefined behavior to be factored into many smaller, more tractable problems of verifying that individual components are sound given their APIs. You can even do this with wrappers and bindings to C libraries, and there are many examples of this in the Rust ecosystem.
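
A hedged sketch of what that factoring looks like in practice (a toy type, not any particular library): the raw-pointer details stay private, and the public API enforces the invariants that make the unsafe code inside sound:

    use std::alloc::{alloc_zeroed, dealloc, Layout};

    /// A toy fixed-size byte buffer backed by a raw allocation. All of the
    /// unsafe code is confined here; callers only see a safe API.
    pub struct RawBuffer {
        ptr: *mut u8,
        len: usize,
    }

    impl RawBuffer {
        pub fn new(len: usize) -> RawBuffer {
            assert!(len > 0);
            let layout = Layout::array::<u8>(len).unwrap();
            // SAFETY: `layout` has a non-zero size because `len > 0`.
            let ptr = unsafe { alloc_zeroed(layout) };
            assert!(!ptr.is_null(), "allocation failed");
            RawBuffer { ptr, len }
        }

        pub fn get(&self, i: usize) -> u8 {
            assert!(i < self.len); // bounds check keeps the deref in range
            // SAFETY: `ptr` is valid for `len` bytes and `i < len`.
            unsafe { *self.ptr.add(i) }
        }

        pub fn set(&mut self, i: usize, v: u8) {
            assert!(i < self.len);
            // SAFETY: as above; `&mut self` guarantees exclusive access.
            unsafe { *self.ptr.add(i) = v }
        }
    }

    impl Drop for RawBuffer {
        fn drop(&mut self) {
            let layout = Layout::array::<u8>(self.len).unwrap();
            // SAFETY: `ptr` was allocated with exactly this layout.
            unsafe { dealloc(self.ptr, layout) }
        }
    }

    fn main() {
        let mut buf = RawBuffer::new(16);
        buf.set(3, 42);
        assert_eq!(buf.get(3), 42);
        // buf.get(99) would panic on the bounds check instead of being UB.
    }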


> There has been an explosive growth in component software technologies since the first edition of this classic book was published. The advent of EJB, J2EE, CORBA 3, COM+ and the .NET framework are evidence of a maturing market in component software that goes 'beyond OOP'.

This book seems to be discussing distributed object systems, which is a sense of the word "component" that has little or nothing to do with the sense used by game developers in reference to the ECS architecture. Distributed object systems are designed to enable an object-oriented design philosophy to be used for a system which spans multiple address spaces or machines. The entity-component-system architecture is a methodology for organizing data layout and program functionality, usually within a single address space on a single machine, where the data associated with a given entity is spread across multiple components or subsystems and associated by index relations (much like a relational database), rather than being all grouped together in one place (as encouraged in OOP).

These two concepts (distributed object systems and ECS) are designed to solve different problems, they are generally used in different scenarios, and they apply at different levels of system organization. There is so little resemblance between the two that I have to conclude someone calling them "sides of the same coin" is either completely unfamiliar with one of them or is being deliberately misleading.


I also did not state that book was the canonical ECS model, rather that it was one of the first sources to move into discussing components instead of classes.

COM and DirectX aren't distributed object systems, nor are Objective-C protocols, for example.

Don't confuse COM with DCOM and COM+.

Then there are the component models based on traits, mixins, patterns, message passing, type classes,... plenty of variants scattered around SIGPLAN and ECOOP papers.


You are still conflating two totally unrelated things.

The book is one of the first sources to move into discussing "components," as in coding against interfaces/protocols/traits/etc.

ECS deals with "components," as in pieces of data composed using a relational model. This has nothing to do with interfaces or protocols whatsoever! It is practically the opposite thing: working directly with raw data, with no abstraction boundary.

You can't just pattern match on the word "component" and expect it to mean the same thing to everyone.


That is the next step, data oriented programming, which many confuse with ECS, as they tend to be used together.


I am not talking about data oriented programming. With or without that sort of memory layout optimization, ECS "component" still refers to un-abstracted chunks of concrete data rather than interfaces/protocols/etc.

Let's step back even further and consider Unity's pre-DOTS "entities" and "components." These do not take the data-oriented approach and are not typically even classified as ECS (e.g. because they lack the System aspect of that design). However, the components are clearly chunks of concrete data (transforms, meshes, rendering parameters, rigid bodies, etc.) rather than interfaces/protocols.

This is the sense in which ECS means "component." That book is not relevant to this sense.


The book that I referred to was clearly only one of the first ones that somehow started talking about this; I never stated it was the definition of ECS.

Apparently making the point about how relevant this specific book is, is what matters in this whole thread.


It didn't start talking about "this." It started talking about something unrelated that you mistook for "this."


There is really no way in which Rust's Send or Swift's Sendable form a monad or a functor in any of the senses of those terms.


That doesn't change the fact that Haskell and ML did it first in their type systems, regardless of how pedantic we want to get about type theory.


The things you are saying are just not true. Standard ML and OCaml are in fact single-threaded languages, and while Haskell has concurrency and parallelism support it does not have anything that looks particularly like a Send or Sendable trait.


Concurrent ML:

http://cml.cs.uchicago.edu/pages/sync-var.html

As for the rest, I am on the go and can't type a proper example right now.


Yes, Concurrent ML exists. It also, however, does not have any feature analogous to Rust's Send trait or Swift's Sendable protocol.


Of course it doesn't have a Sendable type class; that is not the point. The point is the type theory behind it and what those languages allow for, regardless of what they have in their standard libraries in 2021.


While there is no exact match between them, the ``Send`` trait, which can be thought of as preventing you from sharing an object between two threads, actually has two monadic implementations in Haskell. The ``IORef`` and ``STRef`` monads serve exactly the same purpose. This utilizes the Rank2Types extension. I invite you to read the Lazy Functional State Threads paper[1] by Simon Peyton Jones, published in 1994.

The purpose of the ST monad is to prevent you from sharing the object inside it, wrapped by an STRef, with another thread of execution.

IORef utilizes the uniqueness of the ``IO`` monad and the properties guaranteed by the Haskell runtime (i.e. only one thread can use it at any given time) to prevent more than one thread from having a reference to the object inside the IORef.

[1]:https://www.microsoft.com/en-us/research/wp-content/uploads/...



