
I wasn't aware that a logarithmic depth buffer could be implemented in WebGL, since it lacks glClipControl(). It's cool that someone eventually found a way to do it (apparently by writing to gl_FragDepth).

> apparently by writing to gl_FragDepth

If they do that, it disables the early-Z rejection optimization implemented in most GPUs. For some scenes the performance cost can be huge: when rendering opaque objects in front-to-back order, early-Z rejection sometimes saves many millions of pixel shader calls per frame.


Not to mention that floating-point numbers are already roughly logarithmically distributed. A logarithmic distribution matters most when values span many orders of magnitude, so the piecewise-linear approximation of a logarithm that floats give you is good enough for depth buffers.
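
To illustrate the piecewise-linear point: for a positive IEEE 754 float, reading the raw bits as an integer already gives you log2, scaled and offset, with linear interpolation between powers of two. A small sketch (the constants are just the single-precision layout):

  #include <math.h>
  #include <stdio.h>
  #include <stdint.h>
  #include <string.h>

  /* Exponent bits hold the integer part of log2(x); mantissa bits
     interpolate linearly between consecutive powers of two. */
  static float approx_log2(float x) {
      uint32_t bits;
      memcpy(&bits, &x, sizeof bits);
      return bits / (float)(1 << 23) - 127.0f;
  }

  int main(void) {
      for (float x = 0.5f; x <= 1024.0f; x *= 3.0f)
          printf("x=%9.2f  approx=%7.4f  exact=%7.4f\n",
                 x, approx_log2(x), log2f(x));
      return 0;
  }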

Indeed, logarithmic depth is pretty much useless on modern hardware, but it wasn’t always the case.

On Windows, support for DXGI_FORMAT_D32_FLOAT is required on feature level 10.0 and newer, but missing (not even optional) on feature level 9.3 and older. Before Windows Vista and Direct3D 10.0 GPUs, people used depth formats like D16_UNORM or D24_UNORM_S8_UINT, i.e. 16- to 24-bit integers. Logarithmic Z made a lot of sense with these integer depth formats.
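
For context, the kind of remap that made sense with those integer formats looks roughly like this (a sketch; the near/far handling and the 24-bit target are illustrative):

  #include <math.h>
  #include <stdint.h>

  /* Map view-space depth w in [znear, zfar] to a 24-bit integer
     logarithmically, spreading precision evenly across orders of
     magnitude instead of bunching it near one end of the range. */
  static uint32_t log_depth24(float w, float znear, float zfar) {
      float z = logf(w / znear) / logf(zfar / znear);  /* 0..1 */
      if (z < 0.0f) z = 0.0f;
      if (z > 1.0f) z = 1.0f;
      return (uint32_t)(z * 16777215.0f);  /* 2^24 - 1 */
  }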


Yeah, I agree, but I guess it's fine for a demo, which otherwise would not have been possible.

> which otherwise would not have been possible

I wonder if it's possible to implement logarithmic depth in the vertex shader, as opposed to the pixel shader? After gl_Position is computed, adjust the vector to apply the logarithm, preserving `xy/w` to keep the 2D screen-space position.

To be clear, I have never tried that, and there could be issues with that approach, especially with large triangles. I'm not sure it's going to work, but it might.
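
For what it's worth, the formulation of that vertex-stage trick I've seen described elsewhere (the far_plane parameter and the constants are assumptions, not from this demo) boils down to the following, written here as plain C math:

  #include <math.h>

  typedef struct { float x, y, z, w; } vec4;

  /* Pre-multiplying the logarithmic z by w cancels the hardware's
     later divide-by-w, so the depth that lands in the buffer is the
     logarithmic one while xy/w (the screen position) is untouched. */
  static vec4 apply_log_depth(vec4 clip_pos, float far_plane) {
      float log_z = 2.0f * log2f(fmaxf(1e-6f, clip_pos.w + 1.0f))
                         / log2f(far_plane + 1.0f) - 1.0f;  /* [-1,1] */
      clip_pos.z = log_z * clip_pos.w;
      return clip_pos;
  }

The large-triangle caveat is real, though: the rasterizer still interpolates z linearly in screen space, so the depth between vertices deviates from the true logarithmic value.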


I haven't really studied this either so I could be mistaken, but I think it's because OpenGL does the perspective divide between the vertex and fragment shaders, going from clip space to NDC space (which is in the [-1,1] interval and can only be changed by glClipControl()).

The value in gl_FragDepth, if written to, is final, but gl_Position is not: it will still go through the clip/NDC transformation. And since the trick for getting the extra precision is to put the depth range into [0,1] instead of [-1,1], this would fail.

So my guess is, it probably wouldn't work on WebGL/OpenGLES without also setting gl_FragDepth, which is, as you mentioned, impractical performance-wise.


I'm not the author, sorry to disappoint. I also find this project very interesting and wanted to share it.

> This particular aspect is the only feature that the authors' libriscv usage and the Lua VM have in common - but that doesn't make it scripting.

Maybe I'm nitpicking, but to me scripting is just programming/extending a fixed piece of software to execute instructions not already programmed into it, without having to modify that software itself, usually through an API. It's not unlike computer programming in general, which aims to make a piece of hardware execute instructions without altering said hardware. The main difference is conceptual: the platform being targeted (hardware or software). So if the script happens to be run by a VM emulating a real processor, because your software includes such a thing, the distinction becomes purely conceptual.

Now, the technology is obviously super cool, but what I don't quite understand yet is the best use case for it. Is it really game scripting? Or compiling C++ on the fly?

It's not exactly a simple drop-in replacement for a Lua interpreter.


It looks like the third time's a charm. I for one think it's a feature to have stories competing for the reader's attention and taking their time to bubble up to the front page.

And this is quite an interesting story. Since the Sierra games' source code was never publicly released, it makes me realize how radical id was to open source their games back in the '90s.


Is it too much to believe that both hardware (built-in) and software (flexible) parts are employed in nature?


Yes, it meanders too much to get to the point. Which is that RAII doesn't work in C because, unlike C++, which has a comprehensive type system mandated by a standard, a C program doesn't "know" at runtime that a struct is composed of other (typed) fields so it can do a proper deep field copy (or destruction). And implementing that type system in C doesn't seem feasible, for practical and political reasons.

I think the actual question should be "can C get automatic memory management like in C++ without having the equivalent of C++'s type system"?

Though I can't put my finger on it, my intuition says it can, if the interested people are willing to look deep enough.


> a C program doesn't "know" at runtime that a struct is composed of other (typed) fields so it can do a proper deep field copy (or destruction).

This doesn’t make sense: you don’t need runtime introspection to do this?


In C++, when you copy a struct instance to another instance, the runtime knows whether any fields (to whatever depth) have manually defined assignment or move operators, and will call them in the proper order. So it's a deep copy. The same information is used for calling any user-defined field constructors and destructors.

Introspection (reflection) would go even further and provide at runtime all the information that you have at compile time about an object. But that's not required for assignment and destruction operations to work.

C doesn't have any of that, so a struct copy is just a shallow copy: a bit-by-bit copy of the entire struct contents. Which works pretty well, except for pointers/references.
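
A concrete example of where the shallow copy bites (a minimal sketch):

  #include <stdio.h>

  struct buffer {
      char *data;  /* conceptually "owned", but C doesn't know that */
      int   len;
  };

  int main(void) {
      char storage[4] = "abc";
      struct buffer a = { storage, 3 };
      struct buffer b = a;         /* bitwise copy of the struct */
      b.data[0] = 'X';             /* b.data aliases a.data...   */
      printf("%s\n", a.data);      /* ...so this prints "Xbc"    */
      return 0;
  }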


No. Well, yes, in the sense that if the type of an object is dynamic, certain functions may be resolved at runtime, usually through a "virtual table". But the static type of an object is known at compile time, and all the virtual dispatch does is an indirection through the virtual table to the static constructor or destructor as required; the static special member functions always know how to construct, copy, or destroy any subobjects.

So, no, runtime introspection is not needed, but runtime dispatch may be needed.
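
Hand-rolling the equivalent in C shows what that dispatch amounts to (names are illustrative):

  #include <stdio.h>

  /* Roughly what a C++ virtual destructor compiles down to: one
     indirection through a per-type table of function pointers. */
  struct shape;
  struct shape_vtable { void (*destroy)(struct shape *); };
  struct shape { const struct shape_vtable *vt; };

  static void circle_destroy(struct shape *s) {
      (void)s;
      printf("~circle\n");
  }
  static const struct shape_vtable circle_vt = { circle_destroy };

  int main(void) {
      struct shape c = { &circle_vt };
      struct shape *p = &c;   /* static type hidden behind a pointer */
      p->vt->destroy(p);      /* runtime dispatch, no introspection  */
      return 0;
  }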


How hard can it be? It's only 30 papers.

Maybe someone should ask Chuck Norris how long it took him. Who wants to go first?


Chuck Norris ghostwrote those papers while on a motorcycle tour of Brazil.


The things I like most about C are its simplicity, its minimalism, and its role as the interface of choice for other languages to talk to each other.

You can completely understand the language since it's so small and doesn't change much, and (if you avoid complicated pointer expressions) that also helps with reading the code.

That said, I don't enjoy manual memory management, but I'd like a GC even less.

Arena allocators are useful, but since reference counting is the preferred general solution in C++, could something similar also work in C without taking away that minimalism? I think it might, but that is just my opinion.


> You can completely understand the language since it's so small

Not really. There are many dark corners to the language, and many opportunities for subtle misunderstandings.

Plenty of C programmers don't even understand the basics of how types work in C, e.g. why you shouldn't use printf's %d format specifier with an argument of type size_t.
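
For anyone wondering, the matching specifier has existed since C99:

  #include <stdio.h>
  #include <string.h>

  int main(void) {
      size_t n = strlen("hello");
      /* printf("%d\n", n) is undefined behaviour wherever size_t
         and int differ in size, e.g. 64-bit vs 32-bit. */
      printf("%zu\n", n);  /* %zu is the size_t specifier */
      return 0;
  }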


> could something similar also work in C without taking away that minimalism?

Maybe automatic reference counting? [1]

[1] https://en.wikipedia.org/wiki/Automatic_Reference_Counting
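
For reference, the bookkeeping that ARC automates looks roughly like this when done by hand in C (a sketch; all names are illustrative):

  #include <stdlib.h>

  typedef struct {
      int refs;
      void (*dtor)(void *);
  } rc_header;

  /* Allocate a payload with a hidden refcount header in front. */
  void *rc_alloc(size_t size, void (*dtor)(void *)) {
      rc_header *h = malloc(sizeof *h + size);
      if (!h) return NULL;
      h->refs = 1;
      h->dtor = dtor;
      return h + 1;
  }

  void *rc_retain(void *p) {
      ((rc_header *)p - 1)->refs++;
      return p;
  }

  void rc_release(void *p) {
      rc_header *h = (rc_header *)p - 1;
      if (--h->refs == 0) {
          if (h->dtor) h->dtor(p);
          free(h);
      }
  }

ARC's contribution in Objective-C is that the compiler inserts the retain/release calls itself, which is exactly the part plain C has no hook for.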


Yeah, I remember that. It works well for Objective-C.

But since C doesn't have objects or messages, the best equivalent I can think of is for the C compiler to emit some kind of signal on assignments and on variables going out of scope.

I don't claim to know the best solution for adding automatic memory management to C, but I'm pretty sure a good one exists. And it would need compiler support to work.
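
Something close to that signal already exists as an extension: GCC and Clang can run a function when a variable leaves scope. A sketch (the usage here is illustrative):

  #include <stdio.h>
  #include <stdlib.h>

  /* The cleanup attribute calls free_str(&buf) when buf goes out of
     scope, whatever the exit path. */
  static void free_str(char **p) { free(*p); }

  int main(void) {
      __attribute__((cleanup(free_str))) char *buf = malloc(64);
      if (!buf) return 1;
      snprintf(buf, 64, "freed automatically at scope exit");
      puts(buf);
      return 0;  /* compiler inserts free_str(&buf) here */
  }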


What I wrote about is not GC. The other comments about reference counting also unfortunately completely miss the point I made.

The code stays simple, as nothing is happening in the background like a GC or macro abuse. For example, you can clearly see where allocations can happen now in join_path(temp_arena, basedir, filepath). At the same time, I'm not calling free everywhere and can avoid a lot of the "goto cleanup" dance or the gcc-specific extension.
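
Roughly, the pattern looks like this (a minimal sketch; join_path is my example above, the rest is illustrative):

  #include <stdio.h>
  #include <stdlib.h>
  #include <string.h>

  /* A bump allocator: allocation is a pointer increment, and one
     reset releases everything allocated from the arena at once. */
  typedef struct { char *base; size_t used, cap; } Arena;

  void *arena_alloc(Arena *a, size_t size) {
      if (a->used + size > a->cap) return NULL;  /* real code would grow */
      void *p = a->base + a->used;
      a->used += size;
      return p;
  }

  char *join_path(Arena *a, const char *dir, const char *file) {
      size_t n = strlen(dir) + 1 + strlen(file) + 1;
      char *s = arena_alloc(a, n);
      if (s) snprintf(s, n, "%s/%s", dir, file);
      return s;
  }

  /* No per-string free, no goto cleanup: */
  void arena_reset(Arena *a) { a->used = 0; }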

Frankly, I feel like understanding this approach to memory management is the best way to tell the C experts from the juniors/wannabes.


> What I wrote about is not GC. The other comments about reference counting also unfortunately completely miss the point I made.

I know. I mentioned the GC because of another comment and because reference counting and GCs are the two usual general ways (that I know of) for automating memory management. That's what I had in mind.

Unless I'm misunderstanding, arena allocators are most useful for categories of objects with similar lifetimes and are not in the fully automated category. Because if I have multiple arena allocators, I still have to manually assign an allocator when creating an object and also deal with the lifetimes of the allocators themselves.

Easier than allocating/deallocating individual objects in main memory with malloc, true, and probably more efficient, but still not fully automated, so in my mind they have different use cases.

But it's possible that I haven't really understood them, so if I'm mistaken about their use, please don't leave me in the dark. If that makes me look like a junior wannabe, I think I can take it. Even the people who know their stuff started out not knowing it.



It's probably just me, but the somewhat forced laughs & smiles from the people talking to it make me feel uneasy.

But enough of that. The future looks bright. Everyone smile!

Or else...

