Yeah it does have issues, but I managed to pump out my own game very recently called Atlas Fury (also on Android) [1]. It runs pretty well, but I had to tamp down some of the things/effects I wanted to do. Your other option is the Defold engine.
C# is a language, but not a limiting factor in making a game. GDScript is for the most part fine to work with. Issues on other platforms are also something that will improve over time. Unity has not shown good faith in the least.
I'm not so confident that it will be fixed. It's been a known issue for nearly a year now:
"Godot 4's HTML5 exports currently cannot run on macOS and iOS due to upstream bugs with SharedArrayBuffer and WebGL 2.0. We recommend using macOS and iOS native export functionality instead, as it will also result in better performance."
What they said was that they had run Godot games on macOS in a Chromium-based browser, which...does not contradict the listed export limitation at all. Why would that mean that the games were built on Godot 3?
There were two links that were posted. The vast majority of Godot devs are well aware of the limitations around web export in Godot 4, and the Godot devs themselves have said that you are better off sticking to Godot 3 if your intent is to release an HTML5 game.
That first link is a collection of web games (ostensibly in Godot). It would therefore be reasonable to assume that most of the developers behind those WEB BASED games were savvy enough to build them in Godot 3.
I feel like you are so focused on the fact that Godot 4 has web export limitations that you haven't realised that nobody is disputing that with you.
An export limitation that affects Safari by definition does not affect Chromium-based browsers. Repeating that Safari is unsupported when someone explicitly asks/talks about Chromium-based browsers just sounds like you aren't really listening to what you're being asked.
Also if you think developers knowingly releasing web apps (of any kind, not just games) that only work in one browser and/or straight up don't work on mobile at all is somehow remarkable, you might be very new to the web.
Find me a line where I disputed whether Chromium browsers on Macs can run Godot 4 based web games. I'm not sure why you are so hung up on this; the original point still stands. And that is that Godot 4 is not currently web ready, and because the limitations are upstream, according to the Godot devs themselves, it is recommended to stick with Godot 3 until that changes.
Furthermore, juvenile personal attacks are not appreciated (such as insulting comments like "you are very new to the web") and go against the spirit of HN. I'm done with this conversation because I feel like you are deliberately being antagonistic and are acting in bad faith.
But I think the weird part was that it was working okay in 3.5. In general, C# support and cross-platform support saw regressions in 4.0, though it's slowly getting better as of 4.3.
Last time I tried Unity I felt like the web builds were awful... ran super poorly in browser even for a super basic scene and also had some compatibility issues with certain core Unity features. That was 2 or 3 years ago so maybe they've gotten better, but in general I think if you're trying to make web games specifically you should go direct to developing for web instead of using Unity or Godot.
This adds to the evidence that Vulkan / DX12 seems like a failed API design, when so many graphics engineers are reinventing the wheel by building their own render-graph API on top of it (proprietary triple-A game engines, Unreal Engine, and now Godot...). Instead of real-time graphics APIs providing all these low-level manual synchronization primitives and cumbersome PSOs, maybe they should just provide an official render-graph API instead? You'd provide all of the operations and their dependencies in an acyclic graph up front, and the driver would handle synchronization automatically in the most performant manner tailored to the hardware.
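To make the idea concrete, here's a rough sketch of what such a driver-level render-graph API could look like. Every type and function name below is hypothetical; nothing like this exists in Vulkan or DX12 today:

    /* Hypothetical driver-level render-graph API; all names are invented. */
    RgGraph *graph    = rgCreateGraph(device);
    RgPass  *gbuffer  = rgAddPass(graph, "gbuffer",  recordGBuffer,  NULL);
    RgPass  *shadows  = rgAddPass(graph, "shadows",  recordShadows,  NULL);
    RgPass  *lighting = rgAddPass(graph, "lighting", recordLighting, NULL);

    /* Declare dependencies up front as an acyclic graph; the driver would
       derive barriers, layout transitions and scheduling from this DAG. */
    rgReadsFrom(lighting, gbuffer);
    rgReadsFrom(lighting, shadows);

    rgCompile(graph);        /* driver picks the schedule best for the HW */
    rgExecute(graph, frame);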
I guess the gamedev world needed about a decade of trial and error to really nail down a graphics API design that is nice to use but also performant. Vulkan originating from AMD's Mantle API didn't help, since that was a low-level console-style API tailored to AMD's GPUs, and it never seemed like a fit for a more "general-purpose" API that spans a huge range of hardware and can stand the test of time. And with Microsoft's DX12 hastily copying AMD's initial design, it has all the same issues. (The irony is that DX11 is still the best graphics API you can use for gamedev on Windows in terms of ergonomics and even performance; I've seen many teams dauntingly build a DX12 backend only to end up performing worse than DX11...)
Nowadays I'm observing that the industry has known about these issues for a while and is experimenting with alternative API designs... there are some experimental render-graph extensions available in both DX12 and Vulkan (albeit in a limited fashion).
Calling Vulkan a failed design is selling it short. There are parts like image layout tracking that should never have been exposed; tracking that at the application level does not offer any additional gains. Exposing explicit synchronization, on the other hand, is in principle the right idea. The driver gets information through those APIs that it cannot derive any other way. I believe it enables certain practical resource access patterns that would be near impossible without that. However, there may be a better way than the current pipeline barrier API to describe data and execution dependencies.
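For anyone who hasn't touched Vulkan, here's a minimal sketch of what that explicit synchronization looks like in practice. After rendering into an image, the application itself records both the execution dependency and the layout transition before sampling the image (`cmd` and `image` are assumed to be a valid command buffer and image from the surrounding setup):

    VkImageMemoryBarrier barrier = {
        .sType = VK_STRUCTURE_TYPE_IMAGE_MEMORY_BARRIER,
        .srcAccessMask = VK_ACCESS_COLOR_ATTACHMENT_WRITE_BIT,
        .dstAccessMask = VK_ACCESS_SHADER_READ_BIT,
        /* the app, not the driver, has to know the image's current layout */
        .oldLayout = VK_IMAGE_LAYOUT_COLOR_ATTACHMENT_OPTIMAL,
        .newLayout = VK_IMAGE_LAYOUT_SHADER_READ_ONLY_OPTIMAL,
        .srcQueueFamilyIndex = VK_QUEUE_FAMILY_IGNORED,
        .dstQueueFamilyIndex = VK_QUEUE_FAMILY_IGNORED,
        .image = image,
        .subresourceRange = { VK_IMAGE_ASPECT_COLOR_BIT, 0, 1, 0, 1 },
    };
    vkCmdPipelineBarrier(cmd,
        VK_PIPELINE_STAGE_COLOR_ATTACHMENT_OUTPUT_BIT, /* wait for the writes */
        VK_PIPELINE_STAGE_FRAGMENT_SHADER_BIT,         /* before any sampling */
        0, 0, NULL, 0, NULL, 1, &barrier);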
Other parts of Vulkan are clear wins. Pipelines with their explicit state look more cumbersome than the OpenGL state machine when starting out on a toy renderer, but they are absolutely invaluable in anything slightly more complex. All OpenGL-based renderers that I've worked with ended up building a state management abstraction on top of OpenGL to work around the debugging nightmare of forgotten state variable updates. Pipelines add this state management at the driver level.
Pipelines (or in general terms, PSOs) are the most problematic aspect of Vulkan / DX12, much more so than synchronization! Large parts of the gamedev industry seem to recognize the performance issues with pipelines, and companies are therefore experimenting with newer models like the VK_EXT_shader_object extension ("Vulkan without Pipelines": https://www.khronos.org/blog/you-can-use-vulkan-without-pipe...).
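For reference, a minimal sketch of the shader-object model from that extension, assuming it has been enabled on the device and that `spirv`/`spirvSize` hold a valid SPIR-V module (descriptor set layouts and the extra dynamic-state calls a real renderer needs are omitted):

    VkShaderCreateInfoEXT info = {
        .sType = VK_STRUCTURE_TYPE_SHADER_CREATE_INFO_EXT,
        .stage = VK_SHADER_STAGE_FRAGMENT_BIT,
        .codeType = VK_SHADER_CODE_TYPE_SPIRV_EXT,
        .codeSize = spirvSize,
        .pCode = spirv,
        .pName = "main",
    };
    VkShaderEXT shader;
    vkCreateShadersEXT(device, 1, &info, NULL, &shader);

    /* At draw time: bind the stage directly; no VkPipeline object is baked,
       so state can be changed dynamically without a pipeline explosion. */
    VkShaderStageFlagBits stage = VK_SHADER_STAGE_FRAGMENT_BIT;
    vkCmdBindShadersEXT(cmd, 1, &stage, &shader);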
I think Vulkan / DX12 are explicitly designed to make building another abstraction on top of them possible? It's possible to build a full app directly on them without any assistance, but it's not the easiest way to use them. For example, you can build an OpenGL emulation layer on top of DirectX, but not the other way around. And DX12/Vulkan are designed to be the underlying layer (while hiding some hardware-specific gotchas).
The problem is... even being low-level, it isn't really a good abstraction. It mainly maps well to how AMD's hardware works, but not NVIDIA's; NVIDIA has some inherent dynamic state baked into its hardware that makes the whole pipeline object thing a bit inefficient there. But even ignoring this, it's a bit disheartening to see all the graphics engineers working on game engines building similar things on top of it in suboptimal ways (e.g. the pipeline cache), when it could have been done at the driver level from the start! The reality is that most graphics application developers really don't care too much about the intricacies of low-level device management, and the ones who do care are also building abstractions around it, since they too don't really want to deal with this. Vulkan / DX12 basically feels like the driver developers have given up and dumped the whole responsibility of managing a GPU onto application developers.
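The pipeline cache dance is a good example of that dumped responsibility. Here's roughly the boilerplate every engine reimplements just so shaders aren't recompiled on every launch; error handling and the disk I/O are omitted, and `cachedBlob`/`cachedBlobSize` are assumed to be loaded from a previous run (NULL/0 on a first run):

    VkPipelineCacheCreateInfo cacheInfo = {
        .sType = VK_STRUCTURE_TYPE_PIPELINE_CACHE_CREATE_INFO,
        .initialDataSize = cachedBlobSize,
        .pInitialData = cachedBlob,
    };
    VkPipelineCache cache;
    vkCreatePipelineCache(device, &cacheInfo, NULL, &cache);

    /* ...pass `cache` to every vkCreateGraphicsPipelines call... */

    /* On shutdown, serialize the warmed-up cache back out. */
    size_t size = 0;
    vkGetPipelineCacheData(device, cache, &size, NULL);
    void *blob = malloc(size);
    vkGetPipelineCacheData(device, cache, &size, blob);
    /* write `blob` to a file, then free(blob) */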
It's a bit like how even the best performance-oriented engineers rarely go down to directly writing assembly to optimize their C/C++ code: sure, you can do that, but outside the rare hot spots it's mostly a waste of time. It is incredibly useful to get a debugger/profiler and observe what your code compiles down to, but actually writing it is cumbersome and requires too much discipline, with too many footguns.
And reasonably so, since it will take quite some time to build an entire triple-A-grade game engine from scratch (not just in Rust, but in any language! Though building it in Rust will make things much harder, since they'd first need to build entire ecosystems from scratch...)
I think Embark Studios ultimately do want to use Rust for their projects and are pursuing small R&D projects to move in that direction (rust-gpu and also their Kajiya renderer), but they're first and foremost a game company that needs to release games for profit, so they made the pragmatic decision of using UE5 in the meantime.
It's great that Godot keeps improving. AFAIK it is still mostly for indie devs though. I don't know why Valve isn't open sourcing Source 2. They barely even make games anymore, and it would make them money in the long term if it means games could be made cheaper and better. UE5, on the other hand, is leaps and bounds ahead of both Source 2 and Godot, judging from the few games that have been released with it so far. There's also O3DE, which last I checked isn't ready.
Valve doesn't try to make general purpose engines, they make engines that suit their own games, so unless you happen to be making a game which is shaped very much like HL:Alyx or CS2 you would probably be underwhelmed with Source 2. That includes platform support, Source 2 doesn't officially support any of the consoles because Valve hasn't needed it to, having only ever shipped it on PC and (briefly) Android/iOS.
Then again there is s&box, which I would not consider close to either (well, the games that can be made with s&box, at least). But I agree that it would probably be hard to build something like Teardown inside the Source 2 engine. On the other hand, Teardown does not use Unity either.
S&box licensing Source 2 is a bit of an enigma to me, I get the impression that they're having to rewrite huge swaths of the engine so I don't know what they're getting out of it. They've gone to the lengths of integrating C# scripting from scratch rather than using the scripting facilities that Source 2 comes with. Nobody else has licensed Source 2 in the 4 years since s&box adopted it, so they're the lone outlier in any case.
Respawn's use of Source 1 was a similar story: they ended up rewriting practically everything in the course of developing Titanfall and Apex Legends, to the point that it's now almost unrecognisable as Source.
S&box has rewritten many parts of the engine in C# at this point. Gets rid of lots of legacy code and makes this easier to change in the future. Using the scripting facilities built into Source 2 would have restricted what could be written in C# and slowed the team down.
Garry has publicly expressed regret over choosing Source 2 for this reason, but they might be in too deep now. Generally, the tooling is really great, so I guess that's the primary benefit.
Do you have a source for this? S&box used UE4 in the past but jumped ship as soon as Source 2 became an option. There has been a ton of progress since then, and I don't see why he would regret switching to Source 2 when no other engine would have made as good a base for S&box.
The AAA industry is eating itself right now. Indie seems like a good place to be.
Also, indie games are a billion dollar market. Maybe it's time to stop using it as a synonym for "not serious." The consumer doesn't care that Undertale was made with Game Maker.
I don't think "indie" is synonymous with "not serious"; at least, if people use it that way, I think they're wrong, and I think there are a lot of people who use it the same way I do.
To me "indie" is usually more a reflection of budget, which heavily impacts graphical sophistication, size of the game (as in how many levels, how many hours of play, etc), and usually means the price of the game will be between $5 and $30. AAA I expect $60 and 100+ hours of playable content. Indie game I expect 5 to 6 hours and I'm pleasantly surprised when it's more. When it comes to choice of game engine, that usually means they prefer less flexibility but more simplicity and "free stuff" from the framework, whereas a AAA game might prefer maximum flexibility, but that comes with complexity. As in all things it's a budget (time & money) tradeoff, not necessarily an "are we serious" tradeoff.
> AAA I expect $60 and 100+ hours of playable content. Indie game I expect 5 to 6 hours and I'm pleasantly surprised when it's more.
My experience is much more the opposite, with plenty of indie games providing many hours of actual gameplay (including replayability) and plenty of AAA games inflating their runtime with long cutscenes and other fluff (or milking people for all they're worth through live service).
Thank you, yes, fair point. The cutscenes can get quite outrageous, and I refuse to play games that do in-app purchases and the like, so I have an unrepresentative sample in my experience.
There was a time when Unity was the 'indie game engine' as well. Give Godot some time (especially since its progress seems to be picking up steam).
> UE5 on the other hand is leaps and bounds greater
UE5 has the benefit of standing on the shoulders of giants, not to mention giant-sized pockets of cash to spend on developing it. UE has been around since 1998, so we're talking a quarter of a century of development leading up to what UE5 is today. It also has the added benefit of having been used for a huge number of different games, so there's an additional level of refinement in the toolset that it offers, since it's had to adapt to suit so many games.
Not sure how you define AAA, but a lot of popular games have been made with Unity. Some major studios have used Unity for mobile games related to bigger properties too.
Good to hear that Godot is addressing this. The Rust graphics crates are bottlenecking on buffer management. I'm seeing the render thread become compute-bound on a static scene, with the GPU under 25% busy. Stack is Rend3/WGPU/Vulkan/X11/Linux.
The latest round of WGPU improvements now has copying content into the GPU memory concurrent with rendering. That level of synchronization has been fixed. But it's still too slow.
I dread having to learn how things work down at that level.
By the way, what's the thread situation in WASM now? Previously, it was multiprocess with some shared memory, not real threads. Various real threading proposals were floating around. Did that happen yet?
> The order of execution of the recorded commands inside a command buffer is NOT guaranteed to complete in the order they were submitted: the GPU can reorder these commands in whatever order it thinks is best to complete the job as quickly as possible.
It's my understanding that commands inside of a command buffer are guaranteed to complete in order. The synchronization must happen when you are `vkQueueSubmit`ing multiple command buffers, no? I think that's what they meant to say?
Commands are guaranteed to start in the order they are inserted into the buffer but not guaranteed to complete in that order.
Per Khronos:
> Commands are also guaranteed to start in the exact order they were inserted, but because they can run in parallel, there is no guarantee that the commands will complete in that same order
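In practice that means if a later command in the same command buffer must observe the results of an earlier one, you insert a barrier between them yourself; only the start order is guaranteed. A minimal sketch with two compute dispatches, where the second reads a buffer the first writes (`cmd` and `groupsX` assumed from surrounding code):

    vkCmdDispatch(cmd, groupsX, 1, 1);  /* pass A writes a storage buffer */

    VkMemoryBarrier barrier = {
        .sType = VK_STRUCTURE_TYPE_MEMORY_BARRIER,
        .srcAccessMask = VK_ACCESS_SHADER_WRITE_BIT,
        .dstAccessMask = VK_ACCESS_SHADER_READ_BIT,
    };
    vkCmdPipelineBarrier(cmd,
        VK_PIPELINE_STAGE_COMPUTE_SHADER_BIT,  /* A must finish its writes */
        VK_PIPELINE_STAGE_COMPUTE_SHADER_BIT,  /* before B starts reading  */
        0, 1, &barrier, 0, NULL, 0, NULL);

    vkCmdDispatch(cmd, groupsX, 1, 1);  /* pass B reads A's output */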
- Can't use C# at all
- Has issues running on macOS / iOS
Meanwhile, as much as Unity irritates me, they're working on even better web platform support in Unity 6, with smaller bundled runtime deployments.
https://unity.com/solutions/web
I think if I was predominantly focused on traditional console/platform gaming it would be a different story.