The combination of C++ and OpenGL really is the ultimate in portability: a language supported on almost every platform by at least one implementation (sometimes three), coupled with probably the most widely supported graphics API. Of course you lose accessibility and many platform API conveniences, but you get a more stable and performant platform to work with. In fact, it's the platform most web browsers themselves are heavily based on. For another example, just look at what Epic was able to do with Fortnite: one code base that literally runs everywhere.
Magnum [0] is a favorite of mine for creating portable OpenGL C++ applications. It also has support for compiling to WebAssembly [1] and using WebGL 2.

[0] https://magnum.graphics

[1] https://magnum.graphics/showcase/picking/
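The bootstrap is pleasantly small, too. Here's a minimal sketch along the lines of Magnum's documented application skeleton (assuming the SDL2 backend; targeting Emscripten is mostly a matter of swapping the application header):

    // Minimal Magnum app: opens a window and clears it every frame.
    #include <Magnum/GL/DefaultFramebuffer.h>
    #include <Magnum/Platform/Sdl2Application.h>

    using namespace Magnum;

    class MyApp: public Platform::Application {
        public:
            explicit MyApp(const Arguments& arguments): Platform::Application{arguments} {}

        private:
            void drawEvent() override {
                GL::defaultFramebuffer.clear(GL::FramebufferClear::Color);
                // ... issue draw calls here ...
                swapBuffers();
            }
    };

    MAGNUM_APPLICATION_MAIN(MyApp)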
Yeah, there is MoltenGL, which will probably be open sourced eventually. The real future is Vulkan, which is moving to replace OpenGL, and for Apple platforms there is MoltenVK [0] (Vulkan re-implemented on top of Apple's new Metal), which is open source. Magnum has an initial Vulkan backend [1], and many other engines have them or are building them as well.
From what I understand it was bought out by Valve and donated to the community as part of Valve's quest to promote gaming on Linux (where there is no Metal or DX12). I can only speculate that it wasn't the most successful commercial product.
OpenGL is deprecated, not removed; you can still use it in current versions of macOS as long as you are fine with the version it ships, which in general should be fine for a ton of 3D games (remember that something like Doom 3 was built on OpenGL 1.5 + extensions and Rage on OpenGL 2.0 + extensions). Apple might break it sometime in the future, but Apple might break any API in the future. I have several applications and games that stopped working when I upgraded macOS, and their developers had abandoned them; one of the reasons I decided to stick with Windows over the years is that old apps keep working. So that comes with the territory.
It is still technically more advanced and better looking than the vast majority of indie games, and my point was that if something like Doom 3 can be made using OpenGL 1.5 + extensions (or Rage using OpenGL 2.0 + extensions), then the OpenGL 4.1 + extensions that macOS provides is more than enough for a ton of games.
Yeah, it'd be nice if macOS supported OpenGL 4.6, and even better if Apple weren't braindead enough to deprecate OpenGL instead of fixing one of the worst implementations (and still the only one with the core/compatibility divide, which makes matters even worse). But even with all that taken into account, OpenGL on macOS is still capable enough for almost every game bar the more high-end ones.
Also, keep in mind that while WebGL is based on OpenGL ES, all mainstream browsers on Windows implement it by translating to Microsoft's Direct3D, via the ANGLE project (https://github.com/google/angle).
It runs on Android. They probably haven't released a desktop Linux version because of the small market share, but there's probably no technical reason why they couldn't.
> The combination of C++ and OpenGL is really the ultimate in portability.
Wouldn't the credit belong to Magnum rather than C++ plus OpenGL, given the tremendous amount of work that goes into making things seem portable to the end user of the library?
"At one point, Sony was asking developers whether they would be interested in having PSGL conform to the OpenGL ES 2.0 specs (link here). This has unfortunately never happened however, as developers seem to have mostly preferred to go with libGCM as their main graphics API of choice on PS3. This has meant that the development environment has started becoming more libGCM-centric over the years with PSGL eventually becoming a second-class citizen – in fact, new features like 3D stereo mode is not even possible unless you are using libGCM directly."
Just like everyone raves about the Switch having Vulkan support, when most studios are using Unreal, Unity, and the main 3D API, NVN.
As a rule of thumb, instead of believing what gets posted on FOSS-friendly web sites regarding 3D APIs, GDC Vault, Making Games, IGDA, Connect, EDGE, Retro Gamer, and AAA studio dev blogs are a much better source of information.
A viable alternative is Rust + OpenGL, using gfx[1]. I've had success cross-compiling one Rust codebase to Windows, Mac, and Linux native applications that can render OpenGL graphics. I even got the compilation for all three targets working from within a single docker image, meaning the build can easily be set up in any modern CI system.
"OpenTTD is a business simulation game in which players try to earn money via transporting passengers and freight by road, rail, water and air. It is an open-source[2] remake and expansion of the 1995 Chris Sawyer video game Transport Tycoon Deluxe."
    Uncaught RuntimeError: float unrepresentable in integer range
        at wasm-function[3502]:49
        at wasm-function[6940]:395
        at wasm-function[6939]:347
        at wasm-function[3103]:334
        at wasm-function[3284]:30
        at wasm-function[4902]:3105
        at wasm-function[4903]:654
        at wasm-function[8083]:442
        at wasm-function[8082]:3
        at wasm-function[9782]:13
It's a shame the online content functionality isn't working because of a missing zlib library. OpenTTD really has quite an active community of modders and asset creators.
You want the trains from your home country? They are available, and probably up to date to the current year. Just download them from within the game, enable them, and start a new game.
Porting C/C++ codebases to WebAssembly is both super fun and frustrating.
Network stuff, as mentioned in other comments, is futzy because of the limitations of running in a browser environment. A surprising amount of network code "works," in the sense that it compiles and runs translated to WebSockets. (Great work from the WebAssembly and Emscripten community.) But, you know, WebSockets.
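The other classic chore is the main loop: the browser can't be blocked, so the usual while (running) loop has to be turned inside out into a callback. Roughly the standard Emscripten pattern, as a minimal sketch (the three per-frame hooks are made-up stand-ins for the game's real functions):

    #include <emscripten.h>

    // Hypothetical per-frame hooks standing in for the game's real ones.
    static void poll_input()   { /* read keyboard/mouse state */ }
    static void update_world() { /* advance the simulation */ }
    static void render_frame() { /* issue WebGL draw calls */ }

    // One iteration of what used to be the body of `while (running) { ... }`.
    static void one_frame() {
        poll_input();
        update_world();
        render_frame();
    }

    int main() {
        // fps = 0: let the browser pace frames via requestAnimationFrame;
        // simulate_infinite_loop = 1: don't fall off the end of main().
        emscripten_set_main_loop(one_frame, 0, 1);
    }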
It looks amazing, but for some reason it's not polished on my machine. There are many small bugs that don't exist in the native version. For example, when I scroll, the pointer stays frozen until I finish scrolling, and only then "jumps" to the new location.
It can make the difference between enjoyment and suffering in games.
This is super cool; TTD has always had a special place in my heart. Any technical explanation for why level generation is so much quicker? There's not even a loading screen.
One of the coolest things to do is go on one of the many multiplayer servers and see some of the super train systems that have been built: 16-lane high-speed rail connecting thousands of nodes.
The browser doesn't provide the usual POSIX API for opening TCP connections. With some work you can write a shim library that implements it in terms of WebSockets, but it still can't connect to arbitrary hosts. Instead you have to run a proxy server for it to connect to.
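Emscripten in fact ships a shim along these lines: plain BSD socket calls are mapped onto a WebSocket connection, with a bridge like websockify relaying to the real TCP host. A minimal sketch of client code that compiles both natively and for the web (the port is OpenTTD's default; the bridge setup on the server side is assumed):

    #include <arpa/inet.h>
    #include <netinet/in.h>
    #include <sys/socket.h>
    #include <unistd.h>
    #include <cstdio>

    int main() {
        // Under Emscripten this becomes a WebSocket under the hood.
        int fd = socket(AF_INET, SOCK_STREAM, 0);
        if (fd < 0) { std::perror("socket"); return 1; }

        sockaddr_in addr{};
        addr.sin_family = AF_INET;
        addr.sin_port = htons(3979);  // OpenTTD's default server port
        inet_pton(AF_INET, "127.0.0.1", &addr.sin_addr);

        // In a browser this only succeeds if a WebSocket-to-TCP bridge
        // (e.g. websockify) is listening and forwarding to the real host.
        if (connect(fd, reinterpret_cast<sockaddr*>(&addr), sizeof addr) != 0) {
            std::perror("connect");
            close(fd);
            return 1;
        }

        const char hello[] = "hello";
        send(fd, hello, sizeof hello, 0);
        close(fd);
    }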
Yeah, you'd need a signaling server (typically reached over WebSockets) to establish WebRTC connections. Even after a WebRTC connection is established, you'd need a STUN or a TURN server (coturn supports both) to get around NATs. Please somebody correct me if I'm wrong!
You should only need a STUN or TURN server if you are attempting to connect to a machine that is behind a NAT. OpenTTD servers are usually on public IPs (similar to HTTP servers), so a STUN or TURN server should not be required.
The biggest issue is that WebRTC connections do not allow you to send raw data over TCP or UDP[1].
That being said, it shouldn't be too difficult to create a WebSocket server that translates between the native network protocol and WebSockets[2].
Actually it's quite playable as soon as you learn the UI and get used to the multi-touch controls the game uses. E.g., as far as I can recall, you just need a two-finger touch to move the camera.
How quickly it runs is, to an extent, correlated with energy usage. I know it's not the same thing, but for a single-threaded, low-memory game the correlation should be close.
Wasn't OpenTTD already ported to JS back in 2015?
https://epicport.com/en/ttd
Regardless, a WebAssembly port is perhaps more useful since improvements can be merged from upstream more easily.
Moving the cursor was really sluggish in Safari for the first two to five minutes; afterwards I didn't notice any lag. Maybe it's some JIT-compiler warm-up phase?
One reason I liked Flash: you could disable malicious ads with FlashBlock without breaking 90% of the internet. With the cancerous growth of JavaScript APIs embracing and extending all that made Flash evil, that is no longer possible.
That ship has sailed already with JS anyway, so I see no difference there. Though I do expect websites to become more obfuscated, rendered by wasm, so that ad blockers and privacy tools can't reach into them as easily.
WebAssembly in its current state still doesn't prevent internal memory corruption, due to the lack of bounds-checking enforcement within linear memory.
Yes, it does not escape the sandbox; however, triggering stack corruption of local variables opens the door to changing the behaviour of function calls, while picking up some goodies in the process.
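A minimal sketch of what that means in practice (hypothetical struct; any C/C++-to-wasm toolchain): the out-of-bounds write stays inside linear memory, so the sandbox never traps and the neighbouring field is silently overwritten:

    #include <cstdio>
    #include <cstring>

    struct Session {
        char name[8];
        int  is_admin;  // laid out right after `name` in linear memory
    };

    int main() {
        Session s{};
        // 12 bytes into an 8-byte buffer. A native build with stack
        // protectors or ASan may catch this; in wasm linear memory the
        // extra bytes simply land in `is_admin` without any trap.
        std::memcpy(s.name, "AAAAAAAAAAAA", 12);
        std::printf("is_admin = %d\n", s.is_admin);  // corrupted, no crash
    }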
It remains to be seen how secure WebAssembly is at scale, once black hats actually start turning their sights on it.
It doesn't change the fact that we are headed down the same road: the UI/UX of a web page that is nothing more than WebAssembly + WebGL will be hardly different from what Flash used to be, open source or not.
I don't care; I used to enjoy playing with Flash. It offered some sanity compared to the div/CSS soup pretending to be native UI controls.