It makes some degree of sense. You might assume a bunch of refugees hitting your product would be good, but in my experience, people who transition voluntarily/gradually are more willing to adopt a new paradigm. However, people who are moving under duress (because their previous product suddenly vanished/went out-of-support/changed licensing) are generally less flexible. They haven't had time to adjust, and just want an apples-to-apples equivalent for the thing they are used to as quickly as possible, because they're trying to get back to work.
Presumably they didn't want Godot to suddenly get an influx of help tickets or message forum posts that were all rephrasings of "This interface doesn't have a button exactly where I expect it from Unity. Godot sucks."
I have seen this same pattern in long-time Windows users approaching Linux, becoming frustrated, and then posting online about how bad [they think] it is.
Windows glasses are usually grayed out for some unknown reason, and there is no apparent way to clean them up. Sometimes taking them off and then putting them back on may suddenly clear the gray areas.
I've been using Linux for decades on workstations, servers and laptops. I love Linux. I'm super grateful for Linux. I can't imagine living in Windows. I don't want to be trapped by Apple. Even through my gratitude-colored glasses, Linux leaves a lot to be desired.
I had 2 Linux Dell laptops in a row that either had Wifi OR Bluetooth, never both. I swapped the radio in the second one and things were okay for a while.
Neither of these, nor the Asus laptop I later purchased, supported sleep/hibernate when the lid closes. Out of 4 laptops in 7 years, the only one that supported this incredible future technology was my Lenovo Carbon X1; the two Dells and the Asus all repeatedly tried to commit heat-death suicide in my backpack.
My current workstation that runs Ubuntu 22.04 and now 24.04 was going great until I got some update that swapped me back from Wayland to X11. Now things are weird and I've got to make time to fuck with it to figure out how to make this RTX 4080 work with Wayland.
It's the worst operating system in the world, except for all the others.
Long time linux user here. I often hop between my work windows machine, home linux desktop, BSD laptop and macOS laptop.
I can nitpick and find issues with all of these systems. There's a little saying that sometimes goes around: "all OSes just suck." On Windows 11, I've got ads, inconsistency between different windows provided by Windows (some have the newer UI look and some are still stuck with their Windows 7 look when adjusting print settings or advanced settings), and I can never get Bluetooth headphones with a mic to just work consistently (same with my co-workers). On macOS, the reorganization of settings has sucked for a number of settings, clicking with my mouse half the time triggers the view that shows all the applications in my virtual desktop (I don't know the proper name for this), and I've had to augment macOS with Magnet. With Linux, some of the pains you've already mentioned. And of course those same issues are even worse on desktop BSD. If I sit down I can probably think of more for all the platforms.
At the end of the day, all of them suck; it's just pick your pain. So just as you said, each one is the worst operating system in the world, except for all the others.
Believe it or not, I finally bought an all AMD machine with decent driver support, and everything still works in Debian. Even closing the lid to sleep. Haven't tested hibernate, cause sleep works great and reliably. Even has incredible battery life and performance. I can hardly believe it.
Thank Microsoft for sleep just working: it's using "modern standby", which keeps the CPU and stuff running at lowest power rather than some "special magic mode". You're essentially killing peripherals, killing all but one core and parking it at some low MHz.
Initializing hardware is hard, and reinitializing hardware is harder, so now that chips focus on power efficiency we can just skip the annoying steps :)
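For the curious: the kernel exposes which suspend variants the firmware offers in /sys/power/mem_sleep, with the active one in brackets. "deep" is classic S3 suspend-to-RAM; "s2idle" is the software half of modern standby. A small sketch of reading it (the file format is standard, but the sample string below is made up):

```python
def parse_mem_sleep(contents: str):
    """Return (supported modes, currently selected mode) from the
    /sys/power/mem_sleep format, where the active mode is bracketed."""
    tokens = contents.split()
    current = next(t.strip("[]") for t in tokens if t.startswith("["))
    return [t.strip("[]") for t in tokens], current

# On a modern-standby-only laptop this file often reads just "[s2idle]":
print(parse_mem_sleep("[s2idle] deep"))  # (['s2idle', 'deep'], 's2idle')
```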
Laptops have been successfully closed to sleep, and inserted into backpacks for literally decades.
My 2024 Lenovo X1 Carbon has not ever woken up and tried to commit heat-death suicide.
I am quite sure this isn't a widespread problem with Apple products.
I can't imagine what the hell people are even doing with laptops if they don't consider close-to-sleep, wifi and bluetooth bare minimum functional requirements.
Umm, not that well. I've had audio playing and after closing the lid, the audio continues to play. At times, I wonder if it's really asleep or not. You don't really know until you open the laptop (I guess I could measure power draw).
I totally feel you on this, and have spent a lot of time yak-shaving similar stuff, especially Linux laptops.
IMO one of the best reasons to prefer to buy/use a laptop from a Linux-first laptop company (Framework, Tuxedo, Slimbook, Starbook, System76, etc) is a better chance of sleep/suspend working properly. Even then, though, with the abandonment of S3 and the push for S0ix, it can mean that even "properly" working sleep can basically drain your battery overnight (sleep-to-hibernate can be set, but can be complex). For those interested in characterizing their laptop battery drain in Linux, I wrote a utility to help: https://github.com/lhl/batterylog
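For a rough idea of what characterizing that drain involves (this is my own sketch, not batterylog's actual code): the standard Linux power_supply sysfs interface reports energy_now in microwatt-hours, so you sample it before suspend and after resume and divide by the time asleep.

```python
def sleep_drain_watts(energy_before_uwh: int, energy_after_uwh: int,
                      hours_asleep: float) -> float:
    """Average battery drain in watts over a sleep interval, given
    energy_now readings (microwatt-hours) before and after."""
    delta_wh = (energy_before_uwh - energy_after_uwh) / 1_000_000
    return delta_wh / hours_asleep

# e.g. 52.0 Wh at lid close, 48.5 Wh eight hours later:
print(round(sleep_drain_watts(52_000_000, 48_500_000, 8.0), 4))  # 0.4375
```

Under half a watt is tolerable for s2idle; multiple watts means something (often a peripheral driver) is keeping the SoC awake.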
For Wifi + Bluetooth, I've found always swapping to an Intel card to be the best bet for stability, although BT still is always hit or miss for me.
I had some crazy intermittent crashes on suspend w/ one of my dev laptops, turns out that was a problem Nvidia introduced w/ their v550 drivers at the beginning of the year that still hasn't been fixed yet. That was pretty maddening: https://mostlyobvious.org/?link=/Reference%2FHardware%2FRaze...
if it makes you feel better, I only use windows, and none of my computers (laptops or desktops) sleeps/hibernates properly. Most of them work most of the time. Some none of the time, and none all the time.
This is the thing that annoys me most about my Windows laptop (the highest end dell "desktop replacement" that cost me more than my MacBook did). I've never had this particular issue with any Mac I've ever had (I've lost count how many these were, but surely double-digit by now).
All of these problems are Windows problems too. I don't get why Linux gets all the hate for it.
> I had 2 Linux Dell laptops in a row that either had Wifi OR bluetooth, never both. I swapped the radio in the second one and things were okay for awhile.
I had a Windows laptop that came with broken wifi and bluetooth as well as a BIOS-enforced wifi card whitelist that prevented me from swapping out the wifi card.
> Neither of these, nor the Asus laptop I later purchased supported sleep/hibernate when the lid closes.
Yeah, neither does my Windows laptop. It's why I always shut it down completely before going anywhere.
> My current workstation that runs Ubuntu 22.04 and now 24.04 was going great until I got some update that swapped me back from Wayland to X11. Now things are weird and I've got to make time to fuck with it to figure out how to make this RTX 4080 work with Wayland.
An NVIDIA update gave people with my laptop model a blackscreen. NVIDIA only fixed this half a year or so later.
It's pretty unfair to compare Linux to Windows. Companies develop their software and drivers for Microsoft Windows. They don't necessarily do the same for Linux, which means volunteers have to reverse engineer things, and sometimes they don't work right. There is just no way around this at 4% marketshare.
Agreed. But if a company is selling a laptop with Linux preinstalled on it (Dell and Asus), they damn sure ought to verify that the basic feature set works (i.e., Wifi, Bluetooth and it sleeps when the lid closes).
Windows, for all the terrible things imposed by the business side, is actually pretty nice.
* Active Directory / Group Policy has no equal to the point where FreeIPA and sssd threw in the towel and supplement AD and support group policies directly.
* RDP is the better technology full stop. X forwarding is really cool and VNC is… there but RDP got the abstraction right.
* SMB for all its warts is better at file sharing than anything Linux has to offer, and FUSE, while great, is a band-aid over the wrong permission model. Why on Linux does mounting a filesystem have security implications? Why after 30 years do we still pretend the filesystem is reliable and paper over the reality that it isn't?
* NTFS got the permission model right, where for the most part Linux is still clunking along with chmod. You don't have to deal with user ids sharing a single global namespace, or with directories only being able to be owned by one group. NFSv4 even adopted NT ACLs wholesale, as well as NT's identity model.
* Windows has undergone truly heroic efforts to make remote home directories work, whereas on Linux you will never stop dealing with issues.
I think the (deserved) Microsoft hate has blinded folks a bit to the good things Windows has going for it. I know, tragic: the worst person had some good ideas.
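A tiny sketch of the chmod limitation mentioned above: classic POSIX mode bits address exactly one owner, one group, and "other", so granting a second group access requires bolted-on POSIX ACLs (setfacl/getfacl) rather than being part of the native model.

```python
import os
import stat
import tempfile

# A file's classic POSIX permissions name exactly three principals:
# one owning uid, one owning gid, and everyone else.
fd, path = tempfile.mkstemp()
os.close(fd)
os.chmod(path, 0o640)                 # rw- owner, r-- ONE group, --- other
st = os.stat(path)
print(oct(stat.S_IMODE(st.st_mode)))  # 0o640
# st.st_uid and st.st_gid hold a single uid and a single gid; a second
# group simply can't be expressed here without reaching for ACLs.
os.unlink(path)
```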
Linux didn't support per-monitor fractional scaling until fairly recently. I only got it after I changed to Wayland about a year ago (not sure if X supports it these days), and then it turns out Wayland doesn't support color profile (or at least that's what I learned from random Reddit posts).
So I guess I could either go back to X and have uncomfortably large letters or stick with Wayland and tolerate slightly off colors. Yay.
The IME shows Korean characters in slightly wrong size whenever I type and gvim keeps throwing UI error messages in the console. At least it allows the input to go through, so I guess it's fine?
Hibernation used to work, and then stopped working for about two years, and then started working again after the latest upgrade. No idea why, I'm not gonna question the system when it works.
(BTW, I think the last time I had to worry about IME or hibernation in Windows or Mac was about twenty years ago.)
not a fan of linux font/text rendering either. doesn't look too bad on my 4k but the 1080p monitor is unusable for me. fonts also look so airy and thin, and white on black looks weird as well.
For one the Nvidia drivers are still a pain, and Nvidia has 83% discrete GPU marketshare per the Steam survey, so the majority of Windows users venturing into Linux are immediately going to run into whatever the Nvidia issue of the week happens to be. Those who are already sold on Linux know to just get an AMD card in the first place, but that requires the benefit of foresight.
I don't think the average person cares whose fault it is, they just want their computer to work, and Nvidia has incredible mindshare over on the Windows side of the fence so most people are going to keep buying their hardware and almost immediately running into problems when they give Linux a shot.
Wayland is being pushed out more and more, and it still has many problems for me on my Nvidia card. I know it is mostly Nvidia fault, but at the end of the day, I am not here to assign blame, I just want to use my PC.
So just switch to Linux from Windows and immediately start deviating from distribution-supported defaults…
This suggestion comes up implicitly in a lot of weird hardware cases. It really isn’t a selling point for a user wanting a good first (or recurring) impression.
Just to call out: X11 isn’t all rainbows and unicorns either. Wayland (for the various defects I’ve encountered) is a desperately needed improvement.
The default on virtually any computer is Windows or MacOS. If you're scared of deviating from defaults you'll not have to worry about Ubuntu.
Okay, so you're willing to change your whole operating system to something quite different from what you're used to, but you're not willing to switch off of Wayland, even though you know you don't like Wayland and know the alternative fixes your problem, because you feel somehow compelled to stick with a default? Makes no sense. You should either stick with Windows, switch to X11, buy hardware that supports Wayland better, or learn to live with Wayland as it is without feeling sorry for yourself. Using Wayland even though you don't like it is 100% self-inflicted harm. Nobody is making you do that; the "push" is in your head. Just stop doing that.
Well, I use X11, because Wayland is just not there yet, at least for my particular hardware, but it is true that more and more distros come with Wayland as the default. Linux can't become more widely used if the answer for anyone installing a wide array of distros is: if your PC has a GPU that 90% of PCs have, you should immediately change this default setting. It is fine to be a power user and mess with your system, but the default configuration should have the expectation of working well on as wide an array of hardware as possible.
For a month, I've been running an experiment (on a desktop running Sid). I removed the proprietary drivers and just used the nouveau drivers with my 2070S. Everything mostly just … works? Sharing screens on Meet, Slack, and Teams, four monitors with varying DPI, everything just works. I do not play games or do any GPU heavy stuff though, so ymmv. Still planning to replace it with an AMD card though.
Until starting to work with local LLMs, I’d used nouveau / libre drivers for years without major issues. It’s sort of amazing how much troubleshooting and wrestling is involved getting CUDA working with older, newer or middling hardware.
How do I get a drive to mount during boot (during boot, not after like FUSE does it) without having to look up the command to get the drive's UUID, look up the syntax for a new fstab entry, and edit fstab manually?
A few distros/DEs have straightforward ways to do this with a GUI but there's no universal solution, and this is something almost anyone using Linux for a home media server is going to run into.
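For reference, the manual route being described usually looks like this (device name and mount point are examples; the nofail option keeps a missing drive from blocking boot):

```shell
# 1. Find the partition's UUID (assuming the drive shows up as /dev/sdb1):
sudo blkid /dev/sdb1

# 2. Add a matching line to /etc/fstab, e.g.:
#    UUID=<uuid-from-blkid>  /mnt/media  ext4  defaults,nofail  0  2

# 3. Verify the entry mounts cleanly without rebooting:
sudo mount -a
```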
For reasons we won't discuss, I want the caps lock key on my keyboard to map to the 'end' key instead.
On Windows, it's just an AutoHotKey script. On Linux? Literally impossible. I've spent days dicking with various config files, rebooting and relogging dozens of times. Even got so desperate I asked chatgpt. Gave up after a week or two.
To anyone else reading this: this is not an invitation for fixes. Whatever you're going to suggest, I've already tried it and I'm super not interested in any opinions about it.
keyd runs in the background as a systemd service, and the config is just:
$ cat /etc/keyd/default.conf
[ids]
*
[main]
capslock = end
(I got excited that someone else likes the same keybinding I do and wanted to share, and only then saw "not an invitation for fixes". Well, maybe it'll help someone else.)
Fortunately posting about it publicly is akin to asking for more engagement.
No mention if you tried to compile the firmware, or use the custom mapping configuration available to your keyboard, what kind of keyboard you have, or what specifically hasn't worked.
This annoyed me today. Does a Linux distro highlight newly installed software? I'm not going to say it doesn't because it may in a different DE or distro.
I'd say the core difference for those migrating is that they evolved with different 'evolutionary pressures'. Broadly speaking there's equivalent functionality for a lot of things, but windows has a lot of GUI while linux has a lot of config files and terminal commands to accomplish things.
I think the friction lies in the gap between new migrants coming to Linux on glowing advertisements of it being mostly a drop-in replacement for Windows, and a GUI that doesn't offer everything and, crucially, doesn't hint where to go to find things. So people head off to do web searches and play the lottery of whether they get good/recent information appropriate to the distro and version they chose. Linux seems to be polarized between great for simple use cases (browser, gaming within the Steam walled garden, etc.) and great for those willing/able to dive into the terminal, but between those can be a wide gulf which is hard to cross.
For me, Linux is capable of doing everything productivity-wise much better than Windows, but my home PC is practically a toy for me to use Discord and play video games. Screen sharing on Discord is an open issue with lots of third party solutions, but none of them are perfect. Video games are around 95% of the way there (thanks, Valve/Proton!), but that 5% can be annoying for a device that is essentially my alternative to buying a Playstation. I think a large share of "PC enthusiasts" fall into this category. The two main use cases for a $700 graphics card are to play Elden Ring and run an LLM, and you can guess which one is more popular.
Then there are the minute ambiguities that continue to perplex me, mostly down to lack of standardization across distros. Most distros come with a bunch of premade user groups, but good luck figuring out what each of those groups gives access to. The actual purpose of various root-level folders is inconsistent, making manually installing programs confusing, and oftentimes figuring out where the package manager put an installed program is vague as well. The other day I installed openSUSE on a server and went to disable password access in /etc/ssh/sshd_config, only to find it doesn't exist. Had to google around before finding out they moved it to /usr/etc/ssh/sshd_config. I am used to googling as part of how I navigate daily life (so much so that I pay for it with Kagi), but most people don't want to be arsed with googling things.
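For anyone hitting the same wall: the setting itself is the same everywhere; only the file's location moves around by distro. A typical stanza (option names are standard OpenSSH; check your distro's docs for where overrides belong):

```shell
# In sshd_config (wherever your distro keeps it):
PasswordAuthentication no
KbdInteractiveAuthentication no
```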
Dotfiles are also annoying, and I think that is a universal complaint among users given how many dotfile management solutions there are around. The fact that you have to edit config files at all is a turnoff for many people (I prefer the benefits config files bring, but not everyone is so easily convinced). Things like this are navigable with enough experience, but people have gotten so used to Windows' quirks and error resolution process that learning a new system with a whole new set of pitfalls just isn't worth it.
The problem is: which distro? Which DE? Even which drivers? Every distro usually has some missing pieces, and when you raise that, people say: "Oh, you are using X, that's why; it works on Y."
Having used all three, macOS is the one I can't wrap my brain completely around for some reason. Maybe I got too used to Windows, but I picked up Gnome 3 quicker than I expected.
I once started a job and was handed a macbook pro, after years of using nothing but Linux.
It was one of the most frustrating experiences of my life, I felt like I had to constantly work around the OS to get things done that would be trivial on linux.
I lasted a month before making the macbook dual-boot ubuntu. Eventually they just bought me a linux laptop.
Most flavors of Linux I've tried didn't have text selection for Shift+arrow or Ctrl+Shift+arrow or Shift+Home or Shift+End keys working by default. Maybe it's been fixed since, but I remember that it annoyed me every time I installed fresh Linux.
It's many small things like that.
When I had to switch to Mac for work, it had no silly problems like that.
And then they redesign everything every time. W8, W10, W11 - W11 is the biggest offender with changing the context menu. The only positive thing I can say about Windows these days is it does work really really really well with touchscreen input.
imho as a surface pro user who doesn't use the keyboard cover, that's a significant overstatement.
it's much, much better than it used to be, but there are so many inconsistencies and annoyances that I would be hard pressed to recommend it. selecting text is painful. the on-screen keyboard is decent but swipe input won't register half the time and it routinely fails to open when it should or stay closed when it shouldn't.
with a pen, it's extremely picky with tap inputs. if the pen tip moves even a mm after touching, it's registered as a mouse drag, and unlike with mice there's no way to adjust the sensitivity or register a deadzone. handwriting to text is ok, but its constantly shifting interface regularly leads to errors and gestures like striking out or replacing a letter rarely works on the first (or even third) try.
with both pen and osk, text prediction / spell correct is utterly moronic, often reaching for unusual words or proper names instead of a more common word.
I will give them credit for getting to this point and I'm sure many of the inconsistencies come from having to support a wide variety of apps and backwards compatibility. regardless tho, it will be a long while yet before it's usable as a touchscreen-primary OS, let alone "really good".
On W10 you can use the full keyboard layout and undock it to move it where you want. On W11 this changed and once you undock the full keyboard layout it becomes a different layout. This has bitten me quite a bit as some fullscreen applications will disappear behind the keyboard.
Also it is impossible to turn off text prediction on the keyboard. I don't want and need it so it blocks screen space.
> The only positive thing I can say about Windows these days is it does work really really really well with touchscreen input.
This was always the point, wasn't it? MS wanted Windows to compete more closely with the iPad with the Surface - it just turned out that only parts of what they tried were good (the Metro style turned into Fluent).
I've been using Windows since 98 and I frankly feel like the current UX is at a high point. It's responsive, thoughtfully laid-out, and supports modern stuff like omnisearch (I have configured my Start Menu to NOT give me Bing results) and multiple desktops.
The enshittification is a real shame because I've been really enjoying the changes.
I find this hilarious because HN in general has the same complaints of Windows not being exactly like Linux. Even Windows 11 not being exactly like Windows 10. Humans truly are all the same.
You may be right, but chances are it would have pushed Godot in the right direction of avoiding Unity's mistakes, namely using a custom scripting language, which Unity ultimately gave up on. In this case, however, the forced API decisions that are contrary to performant interop patterns of C# are even more painful. I also keep getting casual reports that there is something wrong with the way Godot embeds .NET that leads to consistently worse performance versus when it is used standalone. I don't know the details, and if anyone can shed more light on this, I'd love to dig into that.
In any case, even if we skip the subject of performance, GDScript is not a particularly good language comparatively speaking.
GDScript is fine and I rarely encounter errors because of the GDScript language. The typing is optional, but if used it works well enough and the editor gives good autocompletions and will mark type errors before runtime. GDScript also hasn't screwed up any of the fundamentals, like implicit type coercion in if-statements (see JavaScript), and is thus capable of improving without breaking backwards compatibility.
I can tell you if Godot abandoned GDScript and went all in on C# then I would stop using Godot. I do not think C# is even remotely enjoyable as a language and GDScript works really well.
Why? What are the features in GDScript that don't exist in C#?
Personally I find C# to be a fantastic language with both high-level (e.g. pattern matching, dynamic) and low-level (e.g. value types, Span<T>) features, as well as amazing tooling (e.g. Roslyn Analyzers) that comes in very handy.
My dislike of C# is nothing but syntax. Godot writes their examples in both C# and GDScript and you can see that GDScript is a much more concise language with almost no boilerplate compared to C#.
I’ve used both, and personally, I find a lot of features in C# not necessarily contributing to standing up the gameplay I’m looking for. It has more functionality like you mention, but GDScript integrates with the engine nicely and doesn’t have GC pauses to worry about.
If performance is an issue, or I want to do something that GDScript doesn't lend itself well to, I'd probably just jump to C++.
> and I keep getting casual reports that there is something wrong with the way Godot embeds .NET that leads to its consistently worse performance vs when being used standalone.
This is the first time I hear that. Do you have any sources for that? Reposting unfounded "casual reports" isn't helpful.
I'm not saying that they are well-founded, only that there seems to be a consistent anomaly I keep getting told about and would like to hear if this tracks or does not track with the experience of active Godot + C# users. It's just something I would like to know more about, with the best outcome of being proven wrong.
Godot is named after the play Waiting for Godot, and is usually pronounced like in the play. Different languages have different pronunciations for Godot and we find it beautiful.
But there used to be an extra line that recommends pronouncing it like "god-oh":
For native English speakers, we recommend "GOD-oh"; the "t" is silent like in the French original.
It was removed in favor of making the pronunciation non-prescriptive:
No, they aren't. French puts the stress generally at the end of the word (vs languages like Spanish, where the stress position varies by word but is lexically fixed and can't be used by the speaker to emphasize anything).
> No, they aren't. French puts the stress generally at the end of the word
What? I'm pretty sure "no stress anywhere" is right. (I did play a few words in my head to check, but really, syllable stress is just a foreign concept in french.)
I mean... I'm honestly not sure because I indeed don't think about it, but I really don't think so?
Like, okay, let's take a random sentence:
"C'est impossible!"
There's a bunch of different ways you can pronounce it: you can pronounce all syllables in one breath (no accent), you can enunciate three syllables for heavy emphasis ("C'est im, po, ssible!"), you can put the accent on the first syllable for light emphasis ("C'est IMpossible"), but you're almost never going to put the accent on the last syllable ("C'est impoSSIBLE!") unless you're doing a bit.
> Godot's limitations are fascinating in part because, as the pair stressed multiple times in our conversation, they're something technically savvy developers can solve themselves. Because Godot is open source, developers who want to add key features can fork the engine and modify it for themselves free of cost. That isn't possible with Unity or Unreal Engine.
Yes, you can redistribute modified versions of the engine to other parties. Public offerings need to be done through the Epic marketplace, but that can be for free. The modified engine is still beholden to the original license terms.
> Yes, but can you redistribute those modifications? Under what terms?
They have a marketplace for selling or sharing stuff like that. If it's a huge chunk of code that's mostly original Unreal code, there are probably some license issues (I have no idea) with just sticking it in your github, but I actually have seen stuff like that on github, so, maybe not.
Why do you need to redistribute them if your goal is merely to solve your own problem? (That's the specific context of the quote in the parent comment.)
Huh. I have no idea what point you're trying to make.
Virtually everyone modifies Unreal C++ code. Some indie projects are blueprint only. But modifying the engine C++ source code is practically an expectation.
Can those source code modifications be easily shared with other developers? Well, they can't be publicly dumped on GitHub. Can they be shipped to consumers in a compiled binary? Absolutely.
As I understand it the sanctioned way of sharing code added to UE is to fork it on github and publish changes to your fork.
Being a fork it will only be available to other people in the Epic Games github org which is only people who have agreed to Epic's licensing terms, and your modified engine remains under that same license.
That's not really true. You can't modify it free of cost; you still have to pay the royalties to use it. You also can't modify it in any way you like, there are numerous restrictions in the EULA on how exactly you can modify/distribute snippets of code. Not only that, but you also aren't allowed to integrate it with any sort of software with a copyleft license, making it useless for any gamedevs who want to license their game with one. Even people wanting to use copyleft libraries or code with their fork are completely restricted.
Free means whatever the reader wants it to mean, including free as in beer, free as in speech, or free as in puppies. In this case, free until you have $1,000,000 in revenue is free enough outside of pedantic online arguments about the definition of free. If you made a million dollars from something, having to pay the thing that helped you get to that place doesn't seem unreasonable, but maybe if I had a million dollars I'd feel differently.
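The arithmetic being argued about, as a sketch (the 5%-over-$1M figures reflect Epic's published Unreal terms at the time of writing; the current EULA defines the exact threshold and scope, e.g. per-product lifetime gross):

```python
def unreal_style_royalty(gross_revenue: float,
                         threshold: float = 1_000_000.0,
                         rate: float = 0.05) -> float:
    """Royalty owed: a percentage of gross revenue above a lifetime
    threshold; nothing at all is owed below it."""
    return max(0.0, gross_revenue - threshold) * rate

print(unreal_style_royalty(800_000))    # 0.0 -- below the threshold
print(unreal_style_royalty(2_000_000))  # 50000.0
```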
It's pedantic to say it isn't free when...it isn't? Plenty of smaller games make over $1 million. It's news to me that the concept of free is whatever the reader wants it to be. Free as in beer or free as in freedom, Unreal is neither.
Freedom in licensing definitely requires context and specification; an absolute view of freedom has little practical use. Game engines released, for example, under a GPL license may align with an absolute view of freedom, but they are useless for the vast majority of commercial games.
What needs to be below 1m? Revenue of the whole shop? Or just that game? If you have two games how is it calculated? Can you create legal entity per app to manage limit? Is it annual revenue? Can you set publishing legal entity in front that takes most revenue as publishing cost and pays peanuts to dev shop legal entity that holds license?
Thank you. But that’s very blurry, no? Is my Mario 2 new game? Mario 3D? What about Mario vs Donkey Kong? Smash Bros with Mario? If all are new games what about Mario v1.3? etc. Wonder how they formalised it with legal language.
No that’s not very blurry at all. Mario 2 is clearly a different game than Mario 1.
Epic is not claiming any rights on your IP.
Mario v1.3 typically means Mario 1 with some patches (unless you're Kingdom Hearts), so yes, that is still the same game. I believe DLC is bundled into the parent game.
My reply had nothing to do with Unity. It was directed at the statement that Unreal being source-available is somehow equivalent to being able to freely fork and modify Godot to add specific features.
Furthermore, many of these devs have been on Unity for many years, not because it was the best, but because it was pretty much the only accessible and modern choice for smaller projects before Godot and other open-source engines were mature enough to release games with.
Probably UObject or AActor, depending on what you need it for.
There are macros-as-descriptors like in Godot but they don’t really work in the same way. But if you’re asking whether you have access to base level objects so that you can extend them, yea you can.
I would have guessed it was written by an actual journalist who used to work for a newspaper. It's mixing story reporting, interview summarization, and quotes from independent sources in a way that's very familiar to me and seems like the work of a human. Although few places can afford that journalist touch any more, so maybe you're right!
> Stretching $8 million to do $2 BILLION of product development is the hard part.
It's even worse over in Rust land. Bevy is making progress, but it's slow.
There are several parts to this. There are the run-time components - the graphics stack, the physics engine, and the 2D user interface components are the big ones. There's the gameplay programming system - Blueprints in Unreal Engine, scripts for most others. Then there's a developer user interface to provide a GUI for all this.
Open source development can do the run-time components. There's general agreement on how those are supposed to work, and many examples to look at. The developer user interface is tough. Open source development has a terrible time with graphical user interfaces. Those need serious design, not just a collection of features and menus that happen to compile together.
There are certainly indies doing well (at least 3 significant hits this year to my knowledge: Balatro (made with LOVE2D), Animal Well (C and a custom engine) and Buckshot Roulette (made with Godot)), but game developers in general have not been doing great, seeing layoffs on par with the rest of tech.
We've had upwards of 20k games industry layoffs in just the last couple years, iirc. Funding has dried up too, which is really bad for indies since they're more dependent on it than established studios.
It's more that there's an absence of investment funding. Profits are still being made and work is still being done, but it's hard to get a new project funded, let alone allocate that funding to engine dev.
Yeah, my experience is that there's actually mutually desired success and a lot of indirect collaboration. I worked on a debugging tool and a tracing platform, and in both cases there was a lot of cross-pollination with similar and competing companies through meet ups, forums, talks, etc. I'm not sure executives would have endorsed it but it definitely improved the products.
Unreal has hot-reload of code. The difference is that in Unity the scene editor tooling is active while in play mode. You can move objects, inspect values and edit game object properties and the playing simulation is updated live.
Unreal has some of this, in that you can see dynamically spawned AActors(game objects) but their properties and components are not updated and are not editable while playing.