Minor correction/clarification: the library doesn't depend on SDL, it just writes into a 32-bit color frame buffer, but all the demos/examples use SDL for getting it to the screen since that's the best option. The test suite, for example, just writes frames to disk as PNG images to compare against expected output.
Depends on how you define "castrated". It's not like using C or C++ is really limiting, and using a C++ math library (like my own rsw_math, or the more complete glm) lets you basically write it as if in GLSL. There's nothing you can't do.
And the only math from Bellard's TinyGL that I use is his clipping code, maybe 80 lines of code give or take. Not to diminish Bellard at all, if anything I'm saying any problems with PortableGL are mine not his.
My bad, I used "castrated", and it's not what I really wanted to say. I should have said a subset of OpenGL features. Namely, it is not a software GL implementation you can run an OpenGL 4 3D game on (only those horrible C++ diarrheas built on LLVM, things like llvmpipe or the other one from Intel, are supposed to be able to do that).
Yeah, PortableGL will never be fully featured, not even for OpenGL 3.3, since I'll definitely never do the geometry shader and probably not transform feedback. More specifically, it'll never have the earlier immediate-mode stuff, or some of the big 4.0 stuff like the tessellation shaders. I have been meaning to add the DSA functions where they make sense; they'd be really simple to implement.
Actually a few days ago someone sent me a pull request adding an interesting project to my README
So now, if I were to try to sum up all the OpenGL software implementations I can think of:
TinyGL (and modern improved forks) = OpenGL 1.1-1.3 ish
osmesa = OpenGL 2.1 using Mesa 7.0.4's swrast
PortableGL = OpenGL 3.x-ish
Mesa = two software renderers still included, the Gallium-based softpipe and llvmpipe,
and I think one or both support the latest OpenGL 4.6, but I could be wrong. swrast and Intel's Gallium/LLVM-based OpenSWR have both been removed from mainline Mesa, and the latter only supports 3.3 core-ish (https://www.openswr.org/)
I'm sure there are others out there. I've actually never tried to use "standalone" Mesa. I really should, to see how it performs if nothing else, but I still say nothing beats a single-header library for ease of use.
The combination of mapping pixels to memory and rendering them individually,
and then that appearing on WebGL and looking great on a not-too-expensive
phone, where I am pinch-zooming while it flawlessly animates, sort of blows my mind with how far technology has come in 25 years. It connects the old with the new in a way I haven't seen for a while (and yeah, I've seen loads of C64 emulators and such).
The TODO section at the end of olive.c shows that there's lots more to do, and decisions to make about how much to do (e.g. what about Bézier curves?), but that's quibbling: it is refreshing (and educational) to see everything here done in such a perfectly self-contained way. Godspeed!
Hell, half the letters in the alphabet aren't implemented in that glyph list. That's a weird place to leave work half done. I wonder if he only added the letters for his hello world.
Neat... of course it seems obvious now, but that's funny. I'm Ukrainian, and I think that without an explicit explanation it would never have occurred to me to read, let alone pronounce, it this way - the dot before the extension somehow puts an insurmountable boundary after "olive".
As a long-time beginner student of several Slavic languages, I like that word play too, with ".c" being "ts". It makes me wonder what missed opportunities there are for C library names with a funny pun.
// The idea is that you can take this code and compile it to different platforms with different rendering mechanisms:
// native with SDL, WebAssembly with HTML5 canvas, etc.
It depends how fast "fast" is. PCs in the 90s could run games like Quake in lower-quality modes at lower framerates, but they ran. The CPUs we have now are many orders of magnitude more powerful, so imagine what you can do with that.
Though most games don't bother so we don't really have actual evidence.
Quake used hand-coded assembly for blitting pixels. This engine could be a bit less efficient by comparison.
I would expect multi-GHz CPUs to be able to overcome that, though, especially if you're happy with the tiny (for today) screen resolutions Quake ran at.
I'm not giving this library as an example of very fast code. I mean, it's really cool, but at the moment it doesn't even use SIMD (which can dramatically improve performance).
Also, I think Handmade Hero started with a software renderer. Not sure how far they went with that, but I remember Casey mentioning once that software rendering is viable if you do it right. Certain art styles are easier to do as well; I'm not expecting PBR to be fast, but you could pull off a cel-shaded look with simple lighting and make it look good.
It's also worth mentioning that even with portable APIs like OpenGL, drivers are terrible and porting to other operating systems (and even other GPUs!) can be a chore. Software rendering can be an asset in those cases.
I understand that it's more portable, but if it's CPU-based, I'd guess its power efficiency is terrible, since it isn't using GPU acceleration, which is a priori more efficient.
This is so nice. It feels a bit like Logo using C syntax - I love that it's totally safe to run software written in this in a browser thanks to WebAssembly.
Looks great but the API itself doesn't seem very innovative (which is probably not the goal anyway).
I'd personally be interested in a graphics API that does not try to render the whole buffer every frame, but updates it given a list of changes (e.g. a sprite moving/rotating). I believe such an approach could work very well for a 2D software renderer, given the likelihood of spatial redundancy, and possibly for video encoding without going through a clueless encoder querying pixels (and with the ability to exploit hardware efficiency to decode/render the stream).
I have been wondering what he is going to do under the recently declared partial mobilisation. I hope he can find a way to get out of the country. He should know he is always welcome in Turkiye.
Depending on your privacy settings, your anti-fingerprinting configuration may block canvas operations. I know some stringent Firefox fingerprinting protection used to do this; not sure if that has been refined since.
This is very impressive. How is this done? The executable is only a few hundred KB and it runs for me. WASM is doing the magic here, but I don't know much about how the process works.
I added -I/usr/include/SDL2 to build.sh, made sure wasm-ld is in place, and it builds and runs smoothly.
It looks like olive.c is a "single header" library. They chose a .c suffix rather than .h.
If OLIVEC_IMPLEMENTATION is defined before including "olive.c", then the full implementation is produced. Otherwise, just the declarations, so that it behaves like a header file.
It's a valid technique. If you have a library all in one source file, the requirement to have a separate .h doubles your file count.
One small reason to have a .c suffix might be that your editor can then choose a more specific syntax scheme (a .h file could be C++). Another is that it can be used as a source file: you can pop it into a Makefile project and just make sure you have -DOLIVEC_IMPLEMENTATION on the compiler command line for that file. Other files using it just include "olive.c" to get the declarations. Because it has a .c suffix, make will handle it via its .c.o rule.
Sure, but let's look at the larger picture. If you have one file, then that's your deliverable. You can host that file somewhere, send it as an e-mail body, drop it into a paste-bin or whatever. It is one self-contained unit.
If you have two files, .h and .c, the use scenario may be simpler. Not by a lot, though. And now you have two files which have to stay together somehow, yet remain distinct. If you combine them in one body of text, you have to indicate: oh, please snip out this part as the .h file and then the rest as the .c file. In e-mail you can have it as two separate MIME attachments. You can use an archive file --- have you looked into the formats? Not so simple.
It's not hard to understand the attraction to the one file deployment, even if you don't do it yourself.
Can you do something like what SQLite does and squash the .h and .c into one file, but also have multiple files for those who are doing development or just like separate files?
If I wanted to distribute a single file library, but have the two file option for users, I'd make it so that a specific Awk one-liner produces the two. For instance, the file might look like this:
#ifndef FOOBAR_LIB_H_D3B94F3C
#define FOOBAR_LIB_H_D3B94F3C
// header stuff here
#endif // FOOBAR_LIB_H_D3B94F3C
#if FOOBAR_LIB_IMPL
// impl here
#endif
Then to people who want two, I would say, just run this command in your shell and paste the content into it:
that way I wouldn't need a build step on my end to generate parallel files.
I would have the extra build step if it were a large project of multiple .c files that I wanted to deploy as a single file for the users who want that. (Not only would I build the single file out of multiple files, but I'd also have some test cases which actually use it. Things have ways of breaking when you combine files, like giving the same name to two static variables or functions in different files.)
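Purely for illustration (this is not the author's actual one-liner, and the filenames are invented), a split for the layout above might look like this. Everything before the `#if FOOBAR_LIB_IMPL` line is the header; the rest is the implementation, which you'd then compile with `-DFOOBAR_LIB_IMPL=1`:

```shell
# Create a sample single-file library with the guard layout shown above.
cat > foobar.c <<'EOF'
#ifndef FOOBAR_LIB_H_D3B94F3C
#define FOOBAR_LIB_H_D3B94F3C
int foobar_answer(void);
#endif // FOOBAR_LIB_H_D3B94F3C
#if FOOBAR_LIB_IMPL
int foobar_answer(void) { return 42; }
#endif
EOF

# Split it: lines before the impl guard go to foobar.h,
# the guard line and everything after it go to foobar_impl.c.
awk '/^#if FOOBAR_LIB_IMPL/{impl=1} {print > (impl ? "foobar_impl.c" : "foobar.h")}' foobar.c

head -n 1 foobar.h        # -> #ifndef FOOBAR_LIB_H_D3B94F3C
head -n 1 foobar_impl.c   # -> #if FOOBAR_LIB_IMPL
```

Note that foobar_impl.c keeps its `#if FOOBAR_LIB_IMPL` wrapper in this sketch, so the macro still has to be defined when compiling it; a fancier script could strip the guard lines too.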
Sorry, why does it make the project feel like a toy? I'm not trying to argue, I just don't understand. Is it because you're including a source file instead of a header file?
The function declarations and definitions are all commingled in a single file. So if you want to use this library in a project that contains multiple source files, you have to include a duplicate copy of the implementation (including e.g. the static font data) in every single compilation unit.
It's possible that your linker would be smart enough to identify and remove the duplicates, but it's still inelegant, unidiomatic and needlessly inefficient.
Ah, I missed that, but I don't think your comment is precisely correct either.
All of the function definitions are declared with OLIVECDEF, which by default is #defined as `static inline`. So if you want to only get a single copy of the implementation, you would have to choose one compilation unit that defines OLIVEC_IMPLEMENTATION, and you would have to define OLIVECDEF as something else (like the empty string) that causes the functions to be non-static.
Still kind of hacky, but not as bad as I thought.
EDIT: I just noticed that only some of the functions are marked with OLIVECDEF, so you have to do this trick if you want to reference the library in multiple compilation units, or else you would get duplicate symbols. The default behavior doesn't seem like it would ever be useful.
gcc or clang would fail at the link step, complaining about duplicate symbols. This library is only good for a project that is built as a single .c file.
That's not true in this case: if you look at the .c file you can see it actually has no definitions by default and is a relatively normal .h file when it's included.
You can #include it in any number of files as long as exactly one has #define OLIVEC_IMPLEMENTATION.
There are many single-file C "libraries" that work perfectly fine as both "header" and "implementation", and that do not require unity builds (building everything as a single translation unit, e.g., a single .c file). Here is but one famous collection of them: https://github.com/nothings/stb
Single-file libraries almost always feel like toys, because for most serious projects I'd want them integrated into a real build system. vcpkg is a good one these days for C/C++. Nothing wrong with a toy library, of course.
Just saying that these "feel like toys" is a very low-effort comment. Header-only libraries exist even if uncommon and they work fine when written properly.
If you actually use a header-only library and it fails to link properly or be used in a project with multiple compilation units, then that would be a bug in the header-only library worth discussing and fixing. That would be a good-effort and interesting comment.
SQLite has an amalgamated source too. It's used EVERYWHERE. This opinion doesn't seem to be based on anything valid or real.
Note you get some opportunities for better compiler optimizations when the entire compilation unit is the entire project. In fact, sqlite claims the code runs 5-10% faster when built as an amalgamation (https://www.sqlite.org/amalgamation.html)
Unless it contains C++ templates, it's fairly trivial to compile a "header-only" library and use it as normal, even in unusual cases where the author has made no effort to support that.
Sure, but only if the text file looks like a C string literal, i.e. starts and ends with double quotes (which would make it into a weird text file).
Doing
const char *s = "
#include "test.txt"
";
won't work, since string literals can't span lines and the preprocessor won't paste the file's contents into a string literal anyway.
In many assemblers, there is a directive called "incbin" which pastes in unstructured binary data at the point of usage. I just found a very clever C and C++ wrapper [1] for that, which gives you an INCBIN() macro. Nice!
Note that C23 will include a variant of incbin spelled #embed: Semantically, the preprocessor will insert a list of integers which you can use to initialize an array.
Also, for clarity, it is fully expected that compilers will use the as-if rule to optimize it, most likely by having a dedicated token/AST node that only decomposes to comma-separated values if it actually needs to, with common usage in initializers being handled by simply copying the data directly into a static data segment without ever creating an integer list at all.
https://github.com/rswinkle/PortableGL