This website sucks: it takes forever to load challenge pages because they're flooded with irrelevant entries, which you can't even see unless you submit your own. You can't even visualize the solutions directly in the browser.
And you never really get to learn anything. I generally come away feeling like there's knowledge locked in solutions I'll never get to see, so there's no way to improve. It seems like a good idea, but in practice it's just frustrating.
How's the performance of the resulting games? With Unity I've found that it's a mixed bag, but it's hard to know if that's the fault of the developers or the engine.
Performance is fine in most cases. While GDScript is quite slow, it's rarely the bottleneck. You can always port performance-critical code to C++ (or to basically any language in 3.0). You can also optimize a game without reaching for other languages. I've got two examples:
- My brother wrote an RTS prototype using the built-in tilemap. When zoomed out, the game ran pretty slowly (12-20 fps) because drawing that many tiles at once was the bottleneck. He was able to optimize his prototype in pure GDScript to a stable 60 fps (he cached the rendered tilemap as a texture).
- I wrote a simple "terminal emulator" control for Godot. It was quite slow at first: rendering a 125x32-character terminal took 120 ms per change. With a few simple tricks I reduced that to 40 ms for a full redraw and 1 ms for a small change like drawing a few characters (see the sketch below).
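The comment above doesn't say which tricks those were, but dirty-cell tracking is a standard one that fits those numbers (cheap small changes, pricier full redraws). A rough sketch in plain Python rather than GDScript, purely illustrative:

```python
class Terminal:
    """Toy dirty-cell terminal; tracks which cells changed since the last draw."""
    def __init__(self, cols=125, rows=32):
        self.cells = [[" "] * cols for _ in range(rows)]
        self.dirty = set()  # (row, col) pairs touched since the last redraw

    def put_char(self, row, col, ch):
        if self.cells[row][col] != ch:   # unchanged cells never get queued
            self.cells[row][col] = ch
            self.dirty.add((row, col))

    def redraw(self, draw_cell):
        # Repaint only the changed cells instead of all 125 x 32 = 4000,
        # which is how a small edit can cost ~1 ms vs. a full redraw.
        for row, col in self.dirty:
            draw_cell(row, col, self.cells[row][col])
        self.dirty.clear()

term = Terminal()
term.put_char(0, 0, "$")
term.redraw(lambda r, c, ch: print(f"blit {ch!r} at ({r},{c})"))
```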
The problem with generic engines is that each type of game has different performance requirements. An RTS can live with physics-sim updates every 0.25 seconds, which is useless for a jump & run or FPS game. Thus the most demanding requirement defines the engine, which is a complete waste for some games and limits performance.
I don't know if you can configure the frame rates and engine internals here.
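For illustration, the usual mechanism is a fixed-timestep loop where the physics rate is decoupled from rendering; whether a given engine exposes that knob varies, but the loop itself is a few lines. A generic Python sketch, not any engine's actual API:

```python
import time

PHYSICS_DT = 0.25    # an RTS might live with 4 physics ticks per second
# PHYSICS_DT = 1/60  # a jump & run or FPS wants something like this

def game_loop(update_physics, render, frames=1000):
    accumulator = 0.0
    previous = time.perf_counter()
    for _ in range(frames):
        now = time.perf_counter()
        accumulator += now - previous
        previous = now
        # Step the simulation at its own fixed rate, decoupled from
        # rendering; the per-game requirement is just this one constant.
        while accumulator >= PHYSICS_DT:
            update_physics(PHYSICS_DT)
            accumulator -= PHYSICS_DT
        render()   # draw as often as the display allows
```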
The next problem is that you need to write a lot of partially performance-intensive game logic (pathfinding, etc.), which again makes sense to write in the engine's native language.
There is a reason why game-specific engines exist. If I were developing a new game today, I would pick the open-source game that is already as close as possible to the idea and port that to a dedicated engine.
It's way too much work to fork a generic engine into a high-performance specialisation.
I think it really depends these days. You can make a lot of games in a general-purpose engine, up to and including ones you might think need a specific set of tradeoffs it can't provide adequately. At the same time, it can be easier not to have all the unwanted cruft. It's about where you want to put the time and effort, and I'm not sure that's easily generalizable right now.
I do think Unity is beginning to jump the shark a bit with what seems like a stronger focus on ancillary services rather than their core offering.
>execute AIs on clients, and send their inputs along with players' inputs
That seems strange: you'd be sending the same AI's actions to the server multiple times, in which case you'd better be certain it's deterministic, not to mention the security concerns. It seems far easier to execute the AI on the server and send its actions to each player, especially if it's a trivial AI.
A lot of games actually do this: each client has its own AI, and the games just exchange player state. By syncing player state before computing the AI's next action, you ensure that every game instance will have the AI perform the same operation.
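A minimal sketch of that lockstep idea (hypothetical names, Python for brevity; real implementations are far more involved):

```python
import random

class GameInstance:
    """One copy runs on every client; only player inputs go over the wire."""
    def __init__(self, seed=42):
        self.state = {"units": {}}        # shared, fully replicated world
        self.rng = random.Random(seed)    # same seed on every client

    def tick(self, all_player_inputs):
        # Apply every player's inputs in a canonical order first, so each
        # instance computes the AI from an identical world state...
        for player_id in sorted(all_player_inputs):
            self.apply(all_player_inputs[player_id])
        # ...then the AI decision is a pure function of state + seeded RNG,
        # so no client ever needs to send or receive an AI action.
        self.apply(self.ai_decide())

    def ai_decide(self):
        targets = sorted(self.state["units"])   # stable iteration order
        return {"attack": self.rng.choice(targets)} if targets else {}

    def apply(self, action):
        ...  # mutate self.state; must itself be deterministic
```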
Of course, for this, you need a bit of determinism, and some games need to synchronize other parameters, which can lead to "security" concerns: bullet spread, hit detection and such for FPSs, for example.
This can also lead to some hilarious de-sync issues when a player tries to cheat on their local game instance. By altering the local state, they effectively desynchronize the AI, and players can observe different outcomes on their local instances. This is often used as a punishment for cheating.
The alternative (for serverless, p2p games) is to have a central host, but it might take a lot of computing resources, and the host is then generally free to cheat.
One side effect of having deterministic AIs synced over the lobby is that the code needs to have the exact same behavior, down to the rounding errors. This can dramatically increase the complexity of cross-platform multiplayer games, and usually requires the exact same binary to be used by every client.
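A common way to catch that divergence (illustrative only, not any particular game's code) is to hash the simulation state in a canonical order every tick and compare digests across the lobby:

```python
import hashlib, struct

def state_checksum(unit_positions):
    """unit_positions: {unit_id: (x, y)}. Clients exchange these digests
    every N ticks; any mismatch means someone has de-synced."""
    h = hashlib.sha256()
    for unit_id in sorted(unit_positions):     # canonical order matters
        x, y = unit_positions[unit_id]
        # A single-ULP float difference between platforms changes the
        # digest, which is why "same binary everywhere" is the easy fix.
        h.update(struct.pack("<i2d", unit_id, x, y))
    return h.hexdigest()
```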
I think Civilization and Sins of a Solar Empire use this kind of distributed AI scheme (as do most games that don't have cross-platform multiplayer, and those where you can experience de-syncs).
I wonder if what you might actually do is execute the AI on both, and send occasional snapshots of the AI state from the server to the clients.
This gives you the benefit of low latency on the client (it doesn't have to wait for the server to tell it what the AI is doing) but also avoids security and non-determinism worries. (Although the AI might still be non-deterministic, it probably can't diverge much before the client receives the next snapshot from the server.)
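A rough sketch of that snapshot scheme (all names hypothetical; the blending is one option I'm assuming, a plain overwrite works too):

```python
def step_ai(state):
    # Stand-in for the real AI update (hypothetical).
    return {eid: (x + 1.0, y) for eid, (x, y) in state.items()}

def client_tick(ai_state, snapshot=None, blend=0.3):
    """ai_state / snapshot: {entity_id: (x, y)}."""
    ai_state = step_ai(ai_state)   # predict locally, no server round-trip
    if snapshot is not None:
        # Pull the prediction toward the authoritative state; overwriting
        # outright also works, blending just hides small corrections.
        for eid, (sx, sy) in snapshot.items():
            x, y = ai_state.get(eid, (sx, sy))
            ai_state[eid] = (x + blend * (sx - x), y + blend * (sy - y))
    return ai_state
```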
This might be more effort than it's worth, though...
You achieve the benefit by loosening consistency: to see any gain, the local AI has to react to local events before those are acknowledged by the server, and the server might come to different conclusions because it sees a different state of the game, assembled from multiple clients that are constantly out of sync.
That leads to warping. A rocket, to take an extremely simple entity as an example, might fly a few meters on your screen only to explode right in front of your face, because the server sent a message that the flight path was blocked by a networked enemy your client didn't predict.
I don't see how this is an issue. Just imagine that the client-side AI is another player that happens to share a CPU with the human one. From the server's POV there's not much difference: client inputs are client inputs, and you always want some anti-cheating mechanism in place.
It's funny how you see these articles about fancy text editor buffer representations (gap buffers, ropes), but meanwhile the editor with the best feature/performance ratio (including multiple cursors) I've found[1] "simply" represents a buffer as a vector of strings.
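For a sense of how simple that representation is, here's a toy sketch (Python; not the editor's actual code):

```python
class Buffer:
    """Toy vector-of-strings buffer: every edit is string slicing plus a list splice."""
    def __init__(self, text=""):
        self.lines = text.split("\n")

    def insert(self, row, col, text):
        line = self.lines[row]
        # Replacing one line with however many the inserted text produces.
        self.lines[row:row + 1] = (line[:col] + text + line[col:]).split("\n")

    def delete_range(self, r1, c1, r2, c2):
        # Join the kept prefix and suffix back into a single line.
        self.lines[r1:r2 + 1] = [self.lines[r1][:c1] + self.lines[r2][c2:]]

buf = Buffer("hello\nworld")
buf.insert(0, 5, ", cruel\nnew ")
assert buf.lines == ["hello, cruel", "new ", "world"]
buf.delete_range(0, 5, 2, 0)
assert buf.lines == ["helloworld"]
```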
I somewhat doubt that a hollow plastic tank is significantly more expensive than a concrete block. Factor in the added cost of shipping the much heavier concrete block, and it seems like the manufacturer would also save money in the process.
The cost isn't cited in the "invention", but concrete weighs roughly 2,200 kg per cubic meter and costs (depending on where it is produced) less than 100 US$ per cubic meter, so the 25 kg counterweight costs between 1 and 2 dollars. That is probably just barely comparable to a mass-produced injection-molded plastic tank, so I doubt there would be any actual savings for the manufacturer.
And there is no real "visible" added cost of transport.
A 70 kg washing machine is typically 0.60x0.60x0.90 = 0.324 cubic meters; call it 0.70x0.70x1.00 ≈ 0.49-0.50 cubic meters (500 liters) including packaging. That gives a very low "density" of 70/500 = 0.14 kg per liter.
On a truck with a platform of 2.40 m x 13.00 m (a normal large truck with an accepted load of around 30,000 kg) you can usually fit, in two levels, between 100 and 110 washers (2.40/0.70 ≈ 3 across, 13.00/0.70 ≈ 18 lengthwise, 2x3x18 = 108).
So you have this big truck, designed to carry 30,000 kg, and you load it with 8,000 kg instead.
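Sanity-checking those numbers in a few lines of Python (same assumptions as above):

```python
machine_kg, footprint = 70, 0.70           # packaged: ~70 kg, 0.70 m square
truck_w, truck_l, truck_cap = 2.40, 13.00, 30_000

per_level = int(truck_w / footprint) * int(truck_l / footprint)  # 3 * 18 = 54
machines = 2 * per_level                   # two levels -> 108 machines
payload = machines * machine_kg
print(machines, "machines =", payload, "kg on a", truck_cap, "kg truck")
# -> 108 machines = 7560 kg: the truck runs out of floor space long
#    before it runs out of payload, so 20 kg less per unit changes nothing.
```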
Do you think you will get a discount from the trucker?
And do you think you will get a further discount if the load is 5,000 kg instead?
As well, do you think you will get a discount from the delivery (and installation) guy because the machine weighs 20 kg less, when he now needs to remove the top cover, fill the tank, and re-assemble the cover?
From an environmental viewpoint there are undoubtedly savings but the manufacturer (or the customer) won't be able to appreciate them in practice.
Is it that much more expensive to ship something in bulk with additional weight? I honestly don't know how long-distance container shipping works, but I always thought you were simply billed based on volume.
In bulk you basically pay per 40' container, which is usually kept under ~45,000 lbs* and has 67.7 m3 of volume. So you are either volume-limited or weight-limited, but not really both.
* Note: there are a few different container sizes, but 40' is by far the most common. Weight limits also vary by location.
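A toy illustration of that volume-or-weight tradeoff, using the numbers above (the lbs-to-kg conversion is my addition):

```python
MAX_KG = 45_000 * 0.4536   # ~45,000 lbs in kg (~20,400 kg)
MAX_M3 = 67.7              # usable volume of a 40' container

def limiting_factor(unit_kg, unit_m3):
    by_weight = MAX_KG / unit_kg
    by_volume = MAX_M3 / unit_m3
    which = "weight" if by_weight < by_volume else "volume"
    return which, int(min(by_weight, by_volume))

# A packaged washer (~70 kg, ~0.5 m3) hits the volume limit first:
print(limiting_factor(70, 0.5))   # ('volume', 135)
```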
I never understood why people argue that operator overloading is a bad thing. In a language that doesn't support it, you're going to have a function "T add(T, T)", which is pretty much the same and can do anything just as well. Overloading "T operator+(T, T)" is mostly syntactic sugar.
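To make the syntactic-sugar point concrete, here's a minimal sketch (Python standing in for the C++-style signatures above): the operator and the named function can literally be the same code.

```python
class Vec:
    def __init__(self, x, y):
        self.x, self.y = x, y

    def add(self, other):              # the "T add(T, T)" spelling
        return Vec(self.x + other.x, self.y + other.y)

    __add__ = add                      # operator+ is the very same method

a, b = Vec(1, 2), Vec(3, 4)
assert (a + b).x == a.add(b).x == 4    # identical behavior, nicer syntax
```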