Show HN: StratusGFX, my open-source real-time 3D rendering engine (github.com/ktstephano)
483 points by ktstephano on March 30, 2023 | 82 comments
It's been closed source for a long time while I worked on it on and off as a hobby research project, but yesterday the repo was made public for the first time under the MPL 2.0 license.

A feature reel showing its capabilities can be found here: https://ktstephano.github.io/rendering/stratusgfx/feature_re...

A technical breakdown of a single frame can be found here: https://ktstephano.github.io/rendering/stratusgfx/frame_anal...

It's still in a very beta state (bugs and instability expected), but I felt like it was a good time to make it public since a lot of its core features are mostly presentable. I plan to continue working on it in my spare time to try and improve the usability of the code.

Two main use cases I could see for it:

1) People using it for educational purposes.

2) People integrating it into other, more general-purpose engines they're working on, since Stratus is primarily a rendering engine. Any extensions to the rendering code that are made public would then further help others.

So I think it will remain very niche but I'm hoping it will still be helpful for people in the future.




This is the kind of work I really love to see here. Whether or not it finds footing it's clearly an amazing learning experience, and it's extremely impressive in its own right. Kudos on a great project.


Thanks! It was pretty fun but also difficult and definitely a learning experience.


Not sure why OP didn't link to this, as it appears to be the video version of the feature reel:

https://www.youtube.com/watch?v=s5aIsgzwNPE

I've been out of gaming and this kind of scene for far too long, but I was amazed by the lights reflecting off the scooter wheel arch. (I don't know if that's new or not, but it's new to me).

Either way, amazing work, best of luck!

Edit to add question: Is the video linked above rendered / captured in real-time?


Hey, cool, I'm surprised you found it. With the latest engine version I switched to using static images since it's a lot easier for people to quickly look through. Some improvements have been made since the video, and I want to redo it once better image smoothing has been implemented.

And yes! The video was captured in real time on an Nvidia GTX 1060.


(I'm an old man who prefers reading physical books over electronic ones, but I do think that video demonstrates "things" better, especially if it demonstrates the realtime rendering that's a selling point of the engine - and the "kids these days" consume video like it's a breathing-level bodily function)

Looking forward to seeing improved versions.

I'm tipping you've got a pretty bright future!


I decided to add the video link to the readme after you brought this up. Maybe it's better to have it for anyone who prefers it.


Thank you. I enjoyed the video, and it did a great job of highlighting the visual features you achieved!


Impressive! Images AND video are also an option.


This looks awesome, quality is amazing for being so early.

Regarding integrating it in other engines, would love to see an attempt to integrate it with something like Blender and/or Bevy, seems like it would fit right in with both of those, and Eevee needs some competition in real-time rendering :)


Why does it seem like Bevy has produced far less spectacular results than much newer C++ engines with a fraction of the dev team? Cherno’s engine also comes to mind.


I think Bevy is a much more ambitious 'total' framework for making games. Plus, I think by its nature Rust forces you to deal with the really hard questions up front and get them right the first time. In C++ you can punt on some of these questions and build a less-than-perfect architecture more quickly. Please don't think I'm disparaging C++; I develop in it for my day job. But I do think there are different trade-offs at different stages of a project's lifetime.


Bevy has 2 core graphics maintainers, but they're both volunteers with extremely limited time, so there are definitely cases where you submit a PR and have to wait months for it to get reviewed.

They've also been working on patching up the core infrastructure of Bevy's renderer: fixing the shader import model and reworking the CPU side to enable batching and prepare for bindless, both of which are huge optimizations. WGPU (the low-level graphics library that wraps the individual APIs) also has issues with excessive locking of resources, so multithreading draw calls wasn't any faster than single-threaded (a core part of Vulkan and DX12 performance is multithreaded draw-call submission), but that should hopefully be fixed in the next few months.

On the "pretty graphics" side, cascaded shadow maps were implemented for distant shadow rendering, there are 95%-ready PRs for SSAO (based on GTAO) and PCF (soft shadows), and TAA just got merged.

Finally, there's the desire to support all platforms, meaning there's a 10-ton weight shackled to the renderer called WebGL 2, which doesn't even support compute shaders (the cornerstone of pretty much every new graphics feature from the past decade).


The simple truth is that a single talented, motivated, unencumbered developer (as repo owner you're less bound by PR etiquette and feature direction) in open source software, especially in the game-engine/framework space where multiple disciplines come into play, is far more effective at pushing large new features than a large group of contributors will ever be.

When it comes to picking a smaller open source game engine, it becomes essential to evaluate the hero developers involved: how committed are they to the project? Is their effort sustainable?

I think Bevy will work out long-term, because they have structured themselves as a sustainable foundation with solid project management. However, the lack of a hero dev who can contribute across all the sub-crates [1] within the Bevy ecosystem also means you have odd quirks in Bevy that will remain longstanding on the issue tracker, like, for example, setting the texture wrapping mode.

For that reason, Fyrox is worth serious consideration if you're constraining yourself to the Rust ecosystem of game engines/frameworks. Just one developer, but he's got the old-school experience and dedication, and the framework looks and feels much more familiar to a C++ game developer. It's not the sophisticated beast that Bevy is, but that's a good thing to the kind of person who would even consider these game engines/frameworks in the first place.

What really saddens me is that these hero game devs are still choosing C++ instead of Rust. There are so many great flash-in-the-pan game engines/frameworks in C++, but they will never take off, for a number of reasons.

When you look at how game devs approach development in C++, you come to realise that Rust is the language we wanted all along.

[1] https://github.com/bevyengine/bevy/tree/main/crates


Bevy spends time on more things than just graphics; it's just one part of the framework and engine. The ECS system is vast too, and probably a lot of time went into it.


Thanks! I've been thinking about contributing to either Godot or Blender. I'm a big fan of both.


In the Sponza scenes there is fairly obvious dithering going on which makes the visuals look very unstable, especially in motion; I don't know what's up with that. It's most clearly visible in this screenshot: https://ktstephano.github.io/assets/portfolio/Sponza2022_3.p...

Also the lack of AA adds to the visual instability.

But still, nice work! I'd love to see what this could do with better assets/art (and maybe some extra shaders)


Yeah, most of that comes from the current implementation of basic transparency, which just uses a sort of punch-through method without any smoothing. Even though it's basic, it allows surfaces to be blended together in a deferred pipeline (Intel Sponza has a surprising amount of this), but it adds to the dithered look you mentioned. In future versions I'm going to remove it in favor of true order-independent transparency. And you're right that it does add to visual instability in its current form. I think there are a few things I can do to help it before switching over to order-independent, so I'll mess around with that.
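
For anyone unfamiliar with the punch-through (screen-door) idea, here's a toy Python sketch of the general technique; this is illustrative only, not the engine's actual shader code:

```python
# Sketch of "punch-through" (screen-door) transparency: a dithered
# alpha test where each fragment is kept or discarded based on a
# screen-position-dependent threshold. Surviving fragments are fully
# opaque, which is why it works in a deferred pipeline, and why it
# looks noisy without smoothing. Illustrative only, not Stratus code.

# 4x4 Bayer matrix, whose entries become thresholds in [0, 1).
BAYER_4X4 = [
    [0, 8, 2, 10],
    [12, 4, 14, 6],
    [3, 11, 1, 9],
    [15, 7, 13, 5],
]

def fragment_survives(x: int, y: int, alpha: float) -> bool:
    """Keep the fragment at pixel (x, y) only if its coverage `alpha`
    exceeds the dither threshold for that screen position."""
    threshold = (BAYER_4X4[y % 4][x % 4] + 0.5) / 16.0
    return alpha > threshold

# At alpha = 0.5, roughly half the pixels in any 4x4 tile survive,
# approximating 50% transparency using only opaque fragments.
kept = sum(fragment_survives(x, y, 0.5) for y in range(4) for x in range(4))
print(kept)  # prints 8
```

The noise comes from the fact that the pattern is fixed in screen space, so it shimmers as the camera moves, which is exactly the instability described above.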

Right now it's using fast approximate anti-aliasing (FXAA), which, while better than nothing, is definitely not good enough, especially in motion. I'm going to keep FXAA as a toggleable option, but I want to add TAA/TSSAA, which look like they handle AA in motion much better.
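
For context, the temporal half of TAA boils down to an exponential blend between a history buffer and the current jittered frame. A toy Python sketch of just that accumulation step (the blend factor is a hypothetical value, and real TAA also needs reprojection and history clamping):

```python
# Minimal sketch of TAA-style temporal accumulation: each frame blends
# a small fraction of the new (jittered, noisy) sample into a running
# history color. Illustrative only, not Stratus code.

def taa_accumulate(history: float, current: float, alpha: float = 0.1) -> float:
    """Exponential blend: keep 90% of the history, take 10% of the new frame."""
    return (1.0 - alpha) * history + alpha * current

# A pixel on a half-covered edge: jittered samples alternate 0.0 / 1.0,
# but the accumulated color converges toward the true coverage of 0.5.
color = 0.0
for frame in range(200):
    sample = float(frame % 2)  # alternating jittered samples
    color = taa_accumulate(color, sample)
print(round(color, 2))  # settles near 0.5 (about 0.53 after a sample of 1.0)
```

This smoothing over time is also why TAA handles motion so much better than FXAA, which only has the current frame to work with.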


Nice choice of license. MPL is much clearer and more straightforward than LGPL: existing files and changes to them must stay MPL-licensed, while new files can be under any other license.


LGPL is pretty clear, it's just not very convenient. It's only unclear when you start going through mental hoops to make it seem more laissez-faire than it is.

For instance, it's just about impossible to distribute an iOS application in a kosher way.


I only recently learned about the MPL and I'm a fan. It seems to be a really nice alternative to GPLv3 and LGPL and offers a lot of convenience while still being copyleft.


I'd recommend using lower resolution inline images on your pages... they're very slow to download.


Is it getting hugged to death? I can imagine GitHub has bandwidth limits for Pages.


The images are fine, they just need to be hosted somewhere faster, like maybe Cloudflare, or even just S3.


I'll look into both of those. I've been trying to figure out a way to move away from using GitHub as storage for images and 3D assets and one of those might be the answer.


Sorry, my suggestions are useless; the problem is in the PNG format (and indeed the raw byte size), not in the lack of CDN (see sibling thread).


Ah ok that's interesting. I'll need to convert the current images to another format and reupload.


It’s on GitHub Pages, so it's CDNed by Fastly already.


Oh snap, indeed!

An image of this size should not be so heavyweight, but the images are PNG, not JPG or WebP. This is why they are so slow.

I would consider adding JPG previews with links to PNG images for those who want to see every pixel as rendered.


I've sped up GitHub Pages-hosted sites significantly by adding Cloudflare's CDN. GitHub has something, but it isn't as good out of the box.


You might want these images hosted on a CDN so the examples load a lot faster: https://ktstephano.github.io/rendering/stratusgfx/feature_re...


Looks impressive! Especially for real-time.

What kind of hardware does it expect, and what's the FPS? (I know that performance may be far from what the architecture allows at this early stage.) What kind of API is it using: DX12? Metal? OpenGL? I did not seem to be able to find it mentioned.


Thanks! In most of the demo scenes it can get 60+ fps at 900p and 1080p on an Nvidia GTX 1060. That's currently the only hardware I can test it on. On more difficult scenes from the demos I can generally get 30-45 fps.

Its backend only supports OpenGL right now. I think for a long term goal migrating to Vulkan would be a great option since it would unlock MacOS while still allowing Windows and Linux to run it.


Do you think it would be possible to reduce the version requirements to OpenGL 4.3 / OpenGL ES 3.0? Then it might be possible to port this engine to WebGL2 and WASM and support all platforms via the browser. Maybe that's a silly idea, but I think it would be really cool, and perhaps easier than porting to Vulkan :)


You know, it actually might be. The three big things it requires are Multi-Draw Indirect (looks to be available in 4.3), shader storage buffers (also available in 4.3), and compute shaders, which 4.3 should have too.

I don't know much about WebGL2 - do you happen to know if dropping the requirement to GL 4.3 would be enough to make it compatible with WebGL2?


Sorry but I seem to have misunderstood the capabilities of WebGL 2. I had read that WebGL 2 was OpenGL ES 3.0 and that “OpenGL 4.3 provides full compatibility with OpenGL ES 3.0”. Which means OpenGL ES 3.0 can run in an OpenGL 4.3 environment. Not the other way around… whoops

The OpenGL versions and variants have always been confusing for me.

WebGL 2 is close to desktop OpenGL 3.3. So probably far too outdated for this project.


Have a look at WebGPU, the successor to WebGL. It runs on top of Vulkan.

https://developer.chrome.com/docs/web-platform/webgpu/


Unfortunately WebGL2 doesn't support multi-draw indirect, shader storage buffers, or compute shaders (there was temporary experimental support for compute shaders but it's considered obsolete now).


Great work! I really appreciate the breakdown.

> Each of the four cascades are given the scene at different levels of details. The first cascade uses the highest level of detail while the last cascade uses the lowest available level of detail.

I would expect the shadow renders would want to use the same LOD as the primary view to keep the shadow casting geometry consistent with the shadow receiving geometry. Otherwise, you might have incorrect self-shadowing.
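
As background for anyone following along, the cascade partitioning itself is commonly done with the "practical split scheme" that blends uniform and logarithmic splits of the view frustum. An illustrative Python sketch (the constants are hypothetical, not taken from this engine):

```python
# Sketch of the "practical split scheme" often used to choose the far
# distance of each shadow cascade: a blend of a logarithmic and a
# uniform partition of the view frustum. Illustrative only.

def cascade_splits(near: float, far: float, count: int, blend: float = 0.5):
    """Return the far distance of each cascade.
    blend=0 gives uniform splits, blend=1 gives logarithmic splits."""
    splits = []
    for i in range(1, count + 1):
        f = i / count
        log_split = near * (far / near) ** f
        uni_split = near + (far - near) * f
        splits.append(blend * log_split + (1.0 - blend) * uni_split)
    return splits

# Four cascades over a 0.1..1000 view range: near cascades get much
# tighter (higher-resolution) slices than far ones.
print(cascade_splits(0.1, 1000.0, 4))
```

The LOD question above is orthogonal to the split choice: whatever the split distances, mismatched LODs between the shadow pass and the main view can cause incorrect self-shadowing.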


I think this part of the cascade shadow map generation could be improved. Luckily the stuff closest to the camera looks correct, but distant stuff does look off until you get closer and higher resolution pops in.

In the future I'll probably do something similar to what you mentioned and have the cascades use the LODs selected for the main view.


Thanks for sharing your work, this kind of stuff is awesome! What resources did you use and find to be the most helpful when learning about this?


Thanks! For resources the most helpful were the OpenGL Superbible, learnopengl.com, Foundations of Game Engine Development Volumes 1&2 and the 3D Graphics Rendering Cookbook. Then there were also a lot of web sources that were very helpful such as the Google Filament PBR paper and some realtime rendering presentations from other developers.


I'm working on a custom engine myself, and one thing I'm having trouble sourcing is an efficient animated model renderer with hardware-accelerated mesh deformations. Currently I use the Autodesk FBX SDK, but its performance is disappointing since it does the deformations in software.


The readme needs to state who it is for, where it is best used, what platforms it targets, who is working on it, and the short-, medium-, and long-term goals; and, importantly, why you made it and why a developer should use it.

Some open source projects like this can become huge. You never know what might happen.


When the project includes a 3D renderer, the readme also needs to include some screenshots.


That's something I'm trying to figure out. I had an earlier version of the readme that had one, but I ran into an old issue where GitHub complains that the repo is over its storage quota. Even after downscaling the image to less than 1 MB it still complained, so I think I have an issue left over from an older version of the codebase (I used to store demo assets in the repo).


https://rtyley.github.io/bfg-repo-cleaner/ (Java) seems to be the favored tool; there's also https://github.com/xoofx/git-rocket-filter (.NET) and the built-in commands: https://stackoverflow.com/questions/2100907/how-to-remove-de...

You'll have to unprotect the branch to force push to GitHub, and anyone who has already cloned the repo may not appreciate basically having to start over, so better to get it over with ASAP!


Curious. This script [1] lists the blobs in your repo, including deleted files. It seems the deleted resources directory takes up 364 MB. That isn't tiny, but it isn't supermassive either.

I wonder if GitHub support would be helpful. I don't know if asking them to do a 're-pack' would be beneficial.

P.S. thanks for open sourcing this!

[1] https://stackoverflow.com/a/42544963


When I removed an old committed (and since deleted) node_modules folder, I had to do a git filter-branch and a force push; it's doable for your own projects if you don't object to rewriting the history a bit.


Definitely. Even if they're pretty small, with the user needing to click on them / open in new tab to view them in full glory.


That's a good idea. Later today I'll draft up a new readme with this information. I think it would also be good to reiterate it in the posts I have about the engine so I'll add it there too.


May as well provide a link back to this thread in the readme as well, for the additional context people can get from the HN crowd.


Congratulations, can you tell us about what it took to get through all this work by yourself?


I would say a big part was that I was very interested in the project so even when parts of it were difficult to figure out I kept wanting to come back to it.

The other was breaks. I didn't do this all in one stretch and instead did it in small chunks over time. I wasn't trying to ship a final game or finish as quickly as possible so the time scale was very relaxed.

Then the last was to make sure the process stayed as enjoyable as possible. If I was getting stuck on a problem, or if a particular path really didn't seem to be working, I preferred to either take a break from the problem and come back later, or find new sources that helped me see it in a different way (or both).


How do you generate all the demo scenes? Are there prebaked reference scenes to import and render? Even if that’s the case, is there a huge amount of work to integrate existing models/scenes into your engine?


The demo scenes all come from what is in the Examples/ folder. I am working on getting the 3D assets hosted somewhere (some people made suggestions in this thread that I am going to try), and once that is done people will be able to render all the demo scenes locally. All the assets are either from graphics research samples, such as Sponza or Bistro, or were CC-licensed on another site such as Sketchfab. I would have tried Quixel Megascans too, but I think you have to buy a subscription.

"Even if that’s the case, is there a huge amount of work to integrate existing models/scenes into your engine?"

It uses the Assimp library, which allows it to support formats like .obj, .fbx and .gltf. Usually what I do is export from Blender to glTF and ask it to export the textures separately rather than pack them into the binary. Most assets that were designed with metallic-roughness in mind work either immediately or with minimal changes. In my own testing, the most frequent things I had to change were inverted normal maps (fixed by flipping the green channel in Photoshop) or exporters setting all metallic values to maximum due to missing data (fixed by loading the metallic or metallic-roughness maps in Photoshop and flipping the metallic channel from max to 0).
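
Those two fixes could also be scripted instead of done by hand in Photoshop. A toy Python sketch operating on fake nested-list "pixels" rather than real image files (illustrative only, not part of any pipeline in the repo):

```python
# Sketch of the two common texture fixes described above, applied to a
# tiny fake "image" (nested lists of RGBA tuples). A real pipeline
# would use an image library, but the channel math is the same.

def flip_normal_green(pixels):
    """Flip the green channel of a normal map (converting between
    DirectX-style and OpenGL-style conventions): g -> 255 - g."""
    return [[(r, 255 - g, b, a) for (r, g, b, a) in row] for row in pixels]

def zero_metallic(pixels, channel=2):
    """glTF packs metallic into the blue channel of the
    metallic-roughness map; force it to 0 when an exporter has
    wrongly maxed it out due to missing data."""
    def fix(px):
        px = list(px)
        px[channel] = 0
        return tuple(px)
    return [[fix(px) for px in row] for row in pixels]

normal_map = [[(128, 200, 255, 255)]]
print(flip_normal_green(normal_map))  # green 200 becomes 55
```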


Would someone like to port it to Vulkan so we can try this out on mobile?


Looks great, maybe you could find a way to integrate it with Blender and make a Renderer as a plugin. Does it do SSGI for diffuse rays? Anyway I know it's a lot of work so kudos.


Currently all GI is handled using virtual point lights but SSGI is something I would like to add as a complementary method. I want to say that Godot does something like that where they have a heavyweight solution and couple it with SSGI to enhance it.


I couldn't find the repo on your page, probably I missed it somewhere while distracted by the lovely screenshots :)

https://github.com/KTStephano/StratusGFX

BTW, I saw you've released it under a fully open license (MPL 2.0). Have you considered that this allows any company with a closed-source commercial AI bot to scan your work and include it in their training data? How do you feel about that? I've been holding back on open sourcing anything recently because I'm undecided on this.


I don't really know what to feel about the current wave of AI/machine learning. It's been moving very fast and I don't think we've seen the end of the story as far as copyright concerns go.

For this project I felt like weak copyleft MPL 2.0 was a nice balance sitting in between strong copyleft GPLv3 and no restriction MIT.


Can you add an AI 'robots.txt'-style disclaimer that this may not (currently) be included in ANY AI-implemented stack, sideloaded or otherwise, within a closed codebase?

(or word the above in the appropriate manner to achieve result (AI LOCKOUT))


The commercial bots scan your code anyway; the open license just also allows well-behaved humans to use it.


Kudos! Really impressive work :-) It's long been a dream of mine to build an engine like this - much respect to you for actually doing it!


HN first post and no comments? I guess people are marveling speechless at the screenshots. Good job!

Would love to hear the backstory why you built that engine!


Hi, thanks! I've been interested in the low level tech behind engines for a while and liked experimenting with things I watched or read about. For some reason a couple years ago my focus kind of narrowed in on realtime 3D graphics as an area I really liked but didn't know much about. It was amazing to see what people were accomplishing in games year after year (still amazing to me). Eventually this interest became strong enough that I decided to try and see if I could learn how to emulate some of the modern techniques that the games were using. This morphed into the StratusGFX project over time.

It's been fun but slow and also very difficult. Global illumination was especially hard and had a few false starts.


It's definitely an impressive project. Building a real time 3d rendering engine is not trivial


This is really great work! Keeping a tab on that, congratulations. I marvelled at this.


Glad you like it!


Fantastic work. Looks amazing.


Thanks!


This is great! The images in the feature reel look amazing!


Great work, happy to see a project made for fun here!


Thanks!


Looks really well done, good job!


Thanks!


Very cool!


Thanks!


Well done creating this! Thank you so much too for opening the source after so much solo effort, I’ve read the high-level architecture document and the things you’ve built sound very interesting and make me excited to read the code.


Hopefully it helps! More engine documents are something I need to add to my todo list so that they can help the code be more understandable.


Some software is not worth paying to use, but the services around it might be.

Look at Nextcloud: they wouldn't have been anywhere near as big if they weren't open source. Very few people would pay for the product, but users now provide pull requests and improve it. Meanwhile, they make money through enterprise support and the business-specific plugins they provide.


The demo images look impressively realistic, congratulations. It would be interesting to see some FPS figures covering both ends: maximal realism on "gamer PC" hardware, but also the reduced-detail compromise on average hardware, which is relatively low end.

That being said, I suspect that the following quote would be a major hindrance for a lot of developers looking for a rendering engine:

> "This code base will not work on MacOS. Linux and Windows should both be fine so long as the graphics driver supports OpenGL 4.6 (…)"

These days, most projects/developers looking for a 3D rendering engine - including those who, like me, don't have any sympathy left for Apple and their business practices - will need a wide cross-platform compatibility including MacOS and mobile (both android and iOS). So the point "Porting the renderer to Vulkan" that you list under "Future of StratusGFX" does indeed seem like a priority.

In that same vein, I'd point out that a very large proportion of end users have relatively weak hardware, especially on the GPU side, and even more so with the ongoing shift from desktop to mobile. So while maximal realism is indeed an important goal, the engine also needs good fallback options for less powerful hardware, i.e. cutting back on the most computationally expensive parts and details while still delivering a decent enough result at a good enough speed.


Yeah, I think as far as the future goes, removing the dependency on OpenGL 4.6 and adopting Vulkan is the best way to go and needs to be near the top of the priority list (maybe even the very top). It would allow MacOS, Linux and Windows to all have a similar experience.

Then for performance on weaker hardware... yeah, this is something I've mostly neglected for now. All my testing has been on a desktop Nvidia GTX 1060, where performance is currently pretty good.

One option would be for me to add fallbacks that remove reliance on certain realtime features (such as realtime GI) and allow for baked solutions. My initial goal was to avoid all baked solutions and only target realtime everything, but now I feel I need to cycle back and add in other options on top of what's there.



