Godot founders had desperately hoped Unity wouldn't 'blow up' (gamedeveloper.com)
160 points by kruuuder 12 days ago | 140 comments





It makes some degree of sense. You might assume a bunch of refugees hitting your product would be good, but in my experience, people who transition voluntarily/gradually are more willing to adopt a new paradigm. However, people who are moving under duress (because their previous product suddenly vanished/went out-of-support/changed licensing) are generally less flexible. They haven't had time to adjust, and just want an apples-to-apples equivalent for the thing they are used to as quickly as possible, because they're trying to get back to work.

Presumably they didn't want Godot to suddenly get an influx of help tickets or message forum posts that were all rephrasings of "This interface doesn't have a button exactly where I expect it from Unity. Godot sucks."


I have seen this same pattern in long-time Windows users approaching Linux, becoming frustrated, and then posting online about how bad [they think] it is.

I have used Linux daily for years and can provide many objective things wrong with it. No rose-tinted-Windows glasses required.

Windows glasses are usually grayed out for some unknown reason, and there is no apparent way to clean them up. Sometimes taking them off and then putting them back on may suddenly clear the gray areas.

I'm using this analogy with every instrument we have at the lab. Thank you

can you please talk about what linux refuses to do that you are trying to make it do?

or in general what is wrong with it, i am sincerely curious


I've been using Linux for decades on workstations, servers and laptops. I love Linux. I'm super grateful for Linux. I can't imagine living in Windows. I don't want to be trapped by Apple. Even through my gratitude-colored glasses, Linux leaves a lot to be desired.

I had 2 Linux Dell laptops in a row that either had Wifi OR Bluetooth, never both. I swapped the radio in the second one and things were okay for a while.

Neither of these, nor the Asus laptop I later purchased, supported sleep/hibernate when the lid closes. Out of 4 laptops in 7 years, the only one that supported this incredible future technology was my Lenovo Carbon X1; the two Dells and the Asus all repeatedly tried to commit heat-death suicide in my backpack.

My current workstation that runs Ubuntu 22.04 and now 24.04 was going great until I got some update that swapped me back from Wayland to X11. Now things are weird and I've got to make time to fuck with it to figure out how to make this RTX 4080 work with Wayland.

It's the worst operating system in the world, except for all the others.


Long time linux user here. I often hop between my work windows machine, home linux desktop, BSD laptop and macOS laptop.

I can nitpick and find issues with all of these systems. There's a little saying that sometimes goes around: "all OSes just suck." On Windows 11, I've got ads, inconsistency between different windows provided by Windows (some have the newer UI look and some are still stuck with their Windows 7 look, like when adjusting print settings or advanced settings), and I can never get Bluetooth headphones with a mic to just work consistently (same with my co-workers). On macOS, the reorganization of settings has made a number of them worse, clicking with my mouse half the time triggers the view that shows all the applications in my virtual desktop (I don't know the proper name for it), and I have to augment macOS with Magnet. With Linux, there are some of the pains you've already mentioned. And of course those are somewhat even worse on desktop BSD. If I sat down I could probably think of more for all the platforms.

At the end of the day, all of them suck; it's just a matter of picking your pain. So just as you said, each one is the worst operating system in the world, except for all the others.


Believe it or not, I finally bought an all-AMD machine with decent driver support, and everything still works in Debian. Even closing the lid to sleep. Haven't tested hibernate, because sleep works great and reliably. It even has incredible battery life and performance. I can hardly believe it.

Thank Microsoft for sleep just working: it's using "modern standby," which keeps the CPU and everything running at lowest power rather than entering some "special magic mode". You're essentially killing peripherals, killing all but one core, and parking it at some low MHz.

Initializing hardware is hard, and reinitializing hardware is harder, so now that chips are power-focused we can just skip the annoying steps :)


This is a weird argument.

Laptops have been successfully closed to sleep, and inserted into backpacks for literally decades.

My 2024 Lenovo X1 Carbon has not ever woken up and tried to commit heat-death suicide.

I am quite sure this isn't a widespread problem with Apple products.

I can't imagine what the hell people are even doing with laptops if they don't consider close-to-sleep, wifi and bluetooth bare minimum functional requirements.


>Thank Microsoft for sleep just working

Wait, what?? Just re-discussed the other day:

State of S3 – Your Laptop is no Laptop anymore – a personal Rant (2023)

https://news.ycombinator.com/item?id=41442490

> S0ix would be great if it worked. But unfortunately it does not - laptops die from overheating, draining their battery in the process.

> This issue is not limited to Linux, as Dell officially warns to power down your Laptop before placing it in a backpack.


Umm, not that well. I've had audio playing and after closing the lid, the audio continues to play. At times, I wonder if it's really asleep or not. You don't really know until you open the laptop (I guess I could measure power draw).

I totally feel you on this, and have spent a lot of time yak-shaving similar stuff, especially Linux laptops.

IMO one of the best reasons to prefer to buy/use a laptop from a Linux-first laptop company (Framework, Tuxedo, Slimbook, Starbook, System76, etc) is a better chance of sleep/suspend working properly. Even then, though, with the abandonment of S3 and the push for S0ix, it can mean that even "properly" working sleep can basically drain your battery overnight (sleep-to-hibernate can be set, but can be complex). For those interested in characterizing their laptop battery drain in Linux, I wrote a utility to help: https://github.com/lhl/batterylog

For Wifi + Bluetooth, I've found always swapping to an Intel card to be the best bet for stability, although BT still is always hit or miss for me.

I had some crazy intermittent crashes on suspend w/ one of my dev laptops; it turns out that was a problem Nvidia introduced w/ their v550 drivers at the beginning of the year that still hasn't been fixed. That was pretty maddening: https://mostlyobvious.org/?link=/Reference%2FHardware%2FRaze...


if it makes you feel better, I only use windows, and none of my computers (laptops or desktops) sleeps/hibernates properly. Most of them work most of the time. Some none of the time, and none all the time.

Hibernate worked amazingly on Windows XP.

Back then of course it was simple, closing the lid instantly wrote RAM out to the HDD and shut everything down.

In the name of copying the iPad's instant on, we've killed countless batteries and made countless laptops completely useless.

Progress!


This is the thing that annoys me most about my Windows laptop (the highest end dell "desktop replacement" that cost me more than my MacBook did). I've never had this particular issue with any Mac I've ever had (I've lost count how many these were, but surely double-digit by now).

All of these problems are Windows problems too. I don't get why Linux gets all the hate for it.

> I had 2 Linux Dell laptops in a row that either had Wifi OR bluetooth, never both. I swapped the radio in the second one and things were okay for awhile.

I had a Windows laptop that came with broken wifi and bluetooth as well as a BIOS-enforced wifi card whitelist that prevented me from swapping out the wifi card.

> Neither of these, nor the Asus laptop I later purchased supported sleep/hibernate when the lid closes.

Yes, same with my Windows laptop. It's why I always shut it down completely before going anywhere.

> My current workstation that runs Ubuntu 22.04 and now 24.04 was going great until I got some update that swapped me back from Wayland to X11. Now things are weird and I've got to make time to fuck with it to figure out how to make this RTX 4080 work with Wayland.

An NVIDIA update gave people with my laptop model a blackscreen. NVIDIA only fixed this half a year or so later.


It's pretty unfair to compare Linux to Microsoft. Companies develop their software and drivers for Microsoft Windows. They don't necessarily do the same for Linux. Which means volunteers have to reverse engineer things and sometimes they don't work right. There is just no way around this at 4% marketshare.

Agreed. But if a company is selling a laptop with Linux preinstalled on it (Dell and Asus), they damn sure ought to verify that the basic feature set works (i.e., Wifi, Bluetooth and it sleeps when the lid closes).

Windows, for all the terrible things imposed by the business side, is actually pretty nice.

* Active Directory / Group Policy has no equal to the point where FreeIPA and sssd threw in the towel and supplement AD and support group policies directly.

* RDP is the better technology full stop. X forwarding is really cool and VNC is… there but RDP got the abstraction right.

* SMB, for all its warts, is better at file sharing than anything Linux has to offer, and FUSE, while great, is a band-aid over the wrong permission model. Why does mounting a filesystem on Linux have security implications? Why, after 30 years, do we still pretend the filesystem is reliable and paper over the reality that it isn't?

* NTFS got the permission model right, where for the most part Linux is still clunking along with chmod. You don't have to deal with user IDs sharing a single global namespace, or with directories only being able to be owned by one group. NFSv4 even adopted NT ACLs wholesale, as well as NT's identity model.

* Windows has undergone truly heroic efforts to make remote home directories work, whereas on Linux you will never stop dealing with issues.

I think the (deserved) Microsoft hate has blinded folks a bit to the good things Windows has going for it. I know, tragic: the worst person had some good ideas.


Linux didn't support per-monitor fractional scaling until fairly recently. I only got it after I changed to Wayland about a year ago (not sure if X supports it these days), and then it turned out Wayland doesn't support color profiles (or at least that's what I learned from random Reddit posts).

So I guess I could either go back to X and have uncomfortably large letters or stick with Wayland and tolerate slightly off colors. Yay.

The IME shows Korean characters in slightly wrong size whenever I type and gvim keeps throwing UI error messages in the console. At least it allows the input to go through, so I guess it's fine?

Hibernation used to work, and then stopped working for about two years, and then started working again after the latest upgrade. No idea why, I'm not gonna question the system when it works.

(BTW, I think the last time I had to worry about IME or hibernation in Windows or Mac was about twenty years ago.)


not a fan of linux font/text rendering either. doesn't look too bad on my 4k but the 1080p monitor is unusable for me. fonts also look so airy and thin, and white on black looks weird as well.

For one the Nvidia drivers are still a pain, and Nvidia has 83% discrete GPU marketshare per the Steam survey, so the majority of Windows users venturing into Linux are immediately going to run into whatever the Nvidia issue of the week happens to be. Those who are already sold on Linux know to just get an AMD card in the first place, but that requires the benefit of foresight.

Blame Nvidia for that one. They have actively refused to work with the community for the longest time.

I don't think the average person cares whose fault it is, they just want their computer to work, and Nvidia has incredible mindshare over on the Windows side of the fence so most people are going to keep buying their hardware and almost immediately running into problems when they give Linux a shot.

Wayland is being pushed out more and more, and it still has many problems for me on my Nvidia card. I know it is mostly Nvidia's fault, but at the end of the day, I am not here to assign blame; I just want to use my PC.

Worth mentioning that I've had AMD blow up on Wayland as well, so I only have my one device with an Intel iGPU running Wayland. Everything else X11.

You can just... not use wayland. The "push" is easy to defy. Just don't use it.

So just switch to Linux from Windows and immediately start deviating from distribution-supported defaults…

This suggestion comes up implicitly in a lot of weird hardware cases. It really isn’t a selling point for a user wanting a good first (or recurring) impression.

Just to call out: X11 isn’t all rainbows and unicorns either. Wayland (for the various defects I’ve encountered) is a desperately needed improvement.


If you're scared to deviate from defaults you're not going to ever see wayland in the first place, so relax.

I thought it was default on Ubuntu

The default on virtually any computer is Windows or MacOS. If you're scared of deviating from defaults you'll not have to worry about Ubuntu.

Okay, so you're willing to change your whole operating system to something which is quite different from what you're used to... but you're not willing to switch off of Wayland, even though you know you don't like Wayland and know the alternative fixes your problem, because you feel somehow compelled to stick with a default? Makes no sense. You should either not use Wayland (by sticking with Windows or switching to X11), or buy hardware that supports Wayland better, or learn to live with Wayland as it is without feeling sorry for yourself. Using Wayland even though you don't like it is 100% a self-inflicted harm. Nobody is making you do that; the "push" is in your head. Just stop doing that.


Well, I use X11, because Wayland is just not there yet, at least for my particular hardware, but it is true that more and more distros come with Wayland as the default. Linux can't become more widely used if the answer for anyone installing a wide array of distros is: if your PC has a GPU that 90% of PCs have, you should immediately change this default setting. It is fine to be a power user and mess with your system, but the default configuration should be expected to work well on as wide an array of hardware as possible.

For a month, I've been running an experiment (on a desktop running Sid). I removed the proprietary drivers and just used the nouveau drivers with my 2070S. Everything mostly just … works? Sharing screens on Meet, Slack, and Teams, four monitors with varying DPI, everything just works. I do not play games or do any GPU heavy stuff though, so ymmv. Still planning to replace it with an AMD card though.

Until starting to work with local LLMs, I’d used nouveau / libre drivers for years without major issues. It’s sort of amazing how much troubleshooting and wrestling is involved getting CUDA working with older, newer or middling hardware.

I agree, but it doesn't matter who is to blame. It still doesn't work.

I would blame the people who insist that of course Linux will just work, I don't need to worry about it.

Are they really still a pain? The average user doesn't change his kernel, so the packaged driver will work.

NixOS plug: If you swap kernels it'll automatically build the nvidia bridge code for the kernel and it'll work.

I wouldn't buy nvidia for Linux still, but it's hardly the issue it was


In theory; in practice AMD also has its own share of surprises.

How do I get a drive to mount during boot (during boot, not after like FUSE does it) without having to look up the command to get the drive's UUID, look up the syntax for a new fstab entry, and edit fstab manually?

A few distros/DEs have straightforward ways to do this with a GUI but there's no universal solution, and this is something almost anyone using Linux for a home media server is going to run into.
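(For reference, the manual route being described looks roughly like the following; the device name /dev/sdb1, the example UUID, and the /mnt/media mount point are placeholders for illustration, not anything from the comment:)

    $ sudo blkid /dev/sdb1
    /dev/sdb1: UUID="3f1c2a9e-..." TYPE="ext4"

    # /etc/fstab entry; "nofail" keeps boot from hanging if the drive is absent
    UUID=3f1c2a9e-...  /mnt/media  ext4  defaults,nofail  0  2

That's three lookups (device node, UUID, fstab syntax) for something a GUI disk manager can do in one dialog, which is exactly the complaint.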


Both GNOME and KDE have GUI disk/partition managers in their control panels.

If you aren't using one of the big two, it's your own fault (or the fault of whoever sweet-talked you into trying their non-mainstream DE).


It isn't sweet-talking, it's using XFCE because it's a weak little machine that lags like hell if I use KDE or Gnome.

For reasons we won't discuss, I want the caps lock key on my keyboard to map to the 'end' key instead.

On Windows, it's just an AutoHotKey script. On Linux? Literally impossible. I've spent days dicking with various config files, rebooting and relogging dozens of times. Even got so desperate I asked chatgpt. Gave up after a week or two.

To anyone else reading this: this is not an invitation for fixes. Whatever you're going to suggest, I've already tried it and I'm super not interested in any opinions about it.


I wanted this exact thing (Caps Lock -> End), and https://github.com/rvaiya/keyd works.

It runs in the background as a systemd service, and the config is just:

    $ cat /etc/keyd/default.conf
    [ids]
    *

    [main]
    capslock = end
(I got excited that someone else likes the same keybinding I do and wanted to share, and only then saw "not an invitation for fixes". Well, maybe it'll help someone else.)

Fortunately posting about it publicly is akin to asking for more engagement.

No mention of whether you tried to compile the firmware or use the custom mapping configuration available to your keyboard, what kind of keyboard you have, or what specifically hasn't worked.


No fix, just a hope spot: I did this with multiple keys, including the caps lock key.

AutoHotKey is a godsend.

This annoyed me today. Does a Linux distro highlight newly installed software? I'm not going to say it doesn't because it may in a different DE or distro.

It doesn't have the button exactly where Windows has it.

I'd say the core difference for those migrating is that they evolved with different 'evolutionary pressures'. Broadly speaking there's equivalent functionality for a lot of things, but windows has a lot of GUI while linux has a lot of config files and terminal commands to accomplish things.

Where I think the friction lies is in the distance between new migrants coming to Linux on the strength of glowing claims that it's mostly a drop-in replacement for Windows, and a GUI that doesn't offer everything and, crucially, doesn't hint at where to go to find things. So people head off to do web searches and play the lottery of whether they get good, recent information appropriate to the distro and version they chose. Linux seems to be polarized between being great for simple use cases (browser, gaming within the Steam walled garden, etc.) and being great for those willing and able to dive into the terminal, but between those two there can be a wide gulf which is hard to cross.


For me, Linux is capable of doing everything productivity-wise much better than Windows, but my home PC is practically a toy for me to use Discord and play video games. Screen sharing on Discord is an open issue with lots of third-party solutions, but none of them are perfect. Video games are around 95% of the way there (thanks, Valve/Proton!), but that 5% can be annoying for a device that is essentially my alternative to buying a PlayStation. I think a large share of "PC enthusiasts" fall into this category. The two main use cases for a $700 graphics card are to play Elden Ring and run an LLM, and you can guess which one is more popular.

Then there are the minute ambiguities that continue to perplex me, which are mostly down to a lack of standardization across distros. Most distros come with a bunch of premade user groups, but good luck figuring out what each of those groups gives access to. The actual purpose of various root-level folders is inconsistent, making manually installing programs confusing, and oftentimes figuring out where the package manager put an installed program is vague as well. The other day I installed openSUSE on a server and went to disable password access in /etc/ssh_config, only to find it doesn't exist. I had to google around before finding out they moved it to /usr/etc/ssh_config. I am used to googling as part of how I navigate daily life (so much so that I pay for it with Kagi), but most people don't want to be arsed with googling things.

Dotfiles are also annoying, and I think that is a universal complaint among users given how many dotfile management solutions there are around. The fact that you have to edit config files at all is a turnoff for many people (I prefer the benefits config files bring, but not everyone is so easily convinced). Things like this are navigable with enough experience, but people have gotten so used to Windows' quirks and error resolution process that learning a new system with a whole new set of pitfalls just isn't worth it.


The problem is: which distro? Which DE? Even which drivers? Every distro usually has some missing pieces, and when you raise that, people say: "Oh, you are using X, that's why; it works on Y."

Having used all three, macOS is the one I can't wrap my brain completely around for some reason. Maybe I got too used to Windows, but I picked up Gnome 3 quicker than I expected.

I once started a job and was handed a macbook pro, after years of using nothing but Linux.

It was one of the most frustrating experiences of my life, I felt like I had to constantly work around the OS to get things done that would be trivial on linux.

I lasted a month before making the macbook dual-boot ubuntu. Eventually they just bought me a linux laptop.


Same here. Time spent with MacBook was the worst out of 3 for me. Luckily the project that depended on Mac was short and I never had to go back.

Happy as a clam with Windows and Linux. Both have some ugly sides, but neither stands in my way when I need to do something.


Most flavors of Linux I've tried didn't have text selection for Shift+arrow or Ctrl+Shift+arrow or Shift+Home or Shift+End keys working by default. Maybe it's been fixed since, but I remember that it annoyed me every time I installed fresh Linux.

It's many small things like that.

When I had to switch to Mac for work, it had no silly problems like that.


For every "objective" quality you bring up, I can call it subjective.

Have you considered modifying it to suit your needs?

I have used Linux since the summer of 1995, starting with Slackware 2.0, kernel 1.0.9, the first with native ELF support.

Since then I subscribed to Linux Journal until it was no more.

Got most of Walnut Creek distributions, lost count of how many I have tried in the last 30 years.

My UNIX experience started with Xenix in 1993, and I have used most well known UNIX variants.

There are plenty of reasons to still complain about Linux in 2024, especially in regards to laptops.


And then they redesign everything every time. W8, W10, W11 - W11 is the biggest offender with changing the context menu. The only positive thing I can say about Windows these days is it does work really really really well with touchscreen input.

imho as a surface pro user who doesn't use the keyboard cover, that's a significant overstatement.

it's much, much better than it used to be, but there are so many inconsistencies and annoyances that I would be hard pressed to recommend it. selecting text is painful. the on-screen keyboard is decent but swipe input won't register half the time and it routinely fails to open when it should or stay closed when it shouldn't.

with a pen, it's extremely picky with tap inputs. if the pen tip moves even a mm after touching, it's registered as a mouse drag, and unlike with mice there's no way to adjust the sensitivity or register a deadzone. handwriting to text is ok, but its constantly shifting interface regularly leads to errors, and gestures like striking out or replacing a letter rarely work on the first (or even third) try.

with both pen and osk, text prediction / spell correct is utterly moronic. often reaching for unusual words or proper names instead of a more common word.

I will give them credit for getting to this point and I'm sure many of the inconsistencies come from having to support a wide variety of apps and backwards compatibility. regardless tho, it will be a long while yet before it's usable as a touchscreen-primary OS, let alone "really good".


There was a big downgrade I forgot about.

On W10 you can use the full keyboard layout and undock it to move it where you want. On W11 this changed and once you undock the full keyboard layout it becomes a different layout. This has bitten me quite a bit as some fullscreen applications will disappear behind the keyboard.

Also, it is impossible to turn off text prediction on the keyboard. I don't want or need it, and it blocks screen space.

Surface Go 3.


> The only positive thing I can say about Windows these days is it does work really really really well with touchscreen input.

This was always the point, wasn't it? MS wanted Windows to compete more closely with the iPad with the Surface - it just turned out that only parts of what they tried were good (the Metro style turned into Fluent).

I've been using Windows since 98 and I frankly feel like the current UX is at a high point. It's responsive, thoughtfully laid-out, and supports modern stuff like omnisearch (I have configured my Start Menu to NOT give me Bing results) and multiple desktops.

The enshittification is a real shame because I've been really enjoying the changes.


I find this hilarious because HN in general has the same complaints of Windows not being exactly like Linux. Even Windows 11 not being exactly like Windows 10. Humans truly are all the same.

You may be right, but chances are it would have pushed Godot in the right direction of avoiding Unity's mistakes, namely using a custom scripting language, which Unity ultimately gave up on. In this case, however, the forced API decisions that run contrary to performant interop patterns in C# are even more painful, and I keep getting casual reports that there is something wrong with the way Godot embeds .NET that leads to consistently worse performance than when .NET is used standalone. I don't know the details, and if anyone can shed more light on this, I'd love to dig into it.

In any case, even if we skip the subject of performance, GDScript is not a particularly good language comparatively speaking.


GDScript is fine and I rarely encounter errors because of the GDScript language. The typing is optional, but if used it works well enough and the editor gives good autocompletions and will mark type errors before runtime. GDScript also hasn't screwed up any of the fundamentals, like implicit type coercion in if-statements (see JavaScript), and is thus capable of improving without breaking backwards compatibility.
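For anyone who hasn't used it, a minimal sketch of what that optional typing looks like (the names here are invented for illustration, not taken from the article):

    extends Node2D

    # Typed and untyped declarations mix freely; typed code gets editor
    # autocompletion and has type errors flagged before runtime.
    var speed = 200.0        # dynamically typed
    var health: int = 100    # statically typed

    func take_damage(amount: int) -> void:
        health -= amount
        if health <= 0:
            queue_free()     # remove this node from the scene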

I can tell you if Godot abandoned GDScript and went all in on C# then I would stop using Godot. I do not think C# is even remotely enjoyable as a language and GDScript works really well.

Why? What are the features in GDScript that don't exist in C#?

Personally I find C# to be a fantastic language with both high-level (e.g. pattern matching, dynamic) and low-level (e.g. value types, Span<T>) features, as well as amazing tooling (e.g. Roslyn Analyzers) that comes in very handy.


My dislike of C# is nothing but syntax. Godot writes their examples in both C# and GDScript and you can see that GDScript is a much more concise language with almost no boilerplate compared to C#.

I've used both, and personally, I find that a lot of features in C# don't necessarily contribute to standing up the gameplay I'm looking for. It has more functionality, like you mention, but GDScript integrates with the engine nicely and doesn't have GC pauses to worry about.

If performance is an issue, or I want to do something that GDScript doesn't lend itself well to, I'd probably just jump to C++.


Pity that it doesn't have a JIT or AOT compiler like C#, though.

> and I keep getting casual reports that there is something wrong with the way Godot embeds .NET that leads to its consistently worse performance vs when being used standalone.

This is the first time I hear that. Do you have any sources for that? Reposting unfounded "casual reports" isn't helpful.


I'm not saying that they are well-founded, only that there seems to be a consistent anomaly I keep getting told about and would like to hear if this tracks or does not track with the experience of active Godot + C# users. It's just something I would like to know more about, with the best outcome of being proven wrong.

Apparently they have done the same strange anti-pattern as Unity, regarding magic methods called via reflection.

> "Pronounce it however you like."

Godot pronunciation is documented at https://godotengine.org/press/

    Godot is named after the play Waiting for Godot, and is usually pronounced like in the play. Different languages have different pronunciations for Godot and we find it beautiful.
But there used to be an extra line that recommends pronouncing it like "god-oh":

    For native English speakers, we recommend "GOD-oh"; the "t" is silent like in the French original.
It was removed in favor of making the pronunciation non-prescriptive:

https://github.com/godotengine/godot-website/pull/638

https://github.com/godotengine/godot-website/commit/c9053182...


huh, I always imagined the emphasis was on the second syllable, not the first. Of course, in French, all syllables are equally emphasized.

No, they aren't. French generally puts the stress at the end of the word (vs. languages like Spanish, where the stress varies but is fixed and can't be used by the speaker to emphasize anything).

> No, they aren't. French puts the stress generally at the end of the word

What? I'm pretty sure "no stress anywhere" is right. (I did play a few words in my head to check, but really, syllable stress is just a foreign concept in french.)


It's a foreign concept in French because in French it's always on the last syllable, so you don't have to think about it.

I mean... I'm honestly not sure because I indeed don't think about it, but I really don't think so?

Like, okay, let's take a random sentence:

"C'est impossible!"

There's a bunch of different ways you can pronounce it: you can pronounce all syllables in one breath (no accent), you can enunciate three syllables for heavy emphasis ("C'est im, po, ssible!"), you can put the accent on the first syllable for light emphasis ("C'est IMpossible"), but you're almost never going to put the accent on the last syllable ("C'est impoSSIBLE!") unless you're doing a bit.


me too. if i were to hear someone say waiting for GOD-oh, i'd think they'd said gato, cat.

doesn't matter, for me it will always be go-dot

> Godot's limitations are fascinating in part because, as the pair stressed multiple times in our conversation, they're something technically savvy developers can solve themselves. Because Godot is open source, developers who want to add key features can fork the engine and modify it for themselves free of cost. That isn't possible with Unity or Unreal Engine.

You can modify Unreal. It's source-available.


Yes, but can you redistribute those modifications? Under what terms? Having access to the source code is, like, 10% of the battle.

Yes, you can redistribute modified versions of the engine to other parties. Public offerings need to be done through the Epic marketplace, but that can be for free. The modified engine is still beholden to the original license terms.

> Yes, but can you redistribute those modifications? Under what terms?

They have a marketplace for selling or sharing stuff like that. If it's a huge chunk of code that's mostly original Unreal code, there are probably some license issues (I have no idea) with just sticking it in your github, but I actually have seen stuff like that on github, so, maybe not.


Why do you need to redistribute them if your goal is merely to solve your own problem? (That's the specific context of the quote in the parent comment.)

>Why do you need to redistribute them if your goal is merely to solve your own problem?

Because "you" in this case is typically a game studio. "Solving your own problem" means shipping your game.


Huh. I have no idea what point you're trying to make.

Virtually everyone modifies Unreal C++ code. Some indie projects are blueprint only. But modifying the engine C++ source code is practically an expectation.

Can those source code modifications be easily shared with other developers? Well, they can't be publicly dumped on GitHub. Can they be shipped to consumers in a compiled binary? Absolutely.


Do you think it would make any sense for epic to forbid studios from shipping games that contain a modified engine?

Gotcha!

As I understand it the sanctioned way of sharing code added to UE is to fork it on github and publish changes to your fork.

Being a fork it will only be available to other people in the Epic Games github org which is only people who have agreed to Epic's licensing terms, and your modified engine remains under that same license.


That's not really true. You can't modify it free of cost; you still have to pay the royalties to use it. You also can't modify it in any way you like; there are numerous restrictions in the EULA on how exactly you can modify/distribute snippets of code. Not only that, but you also aren't allowed to integrate it with any sort of software under a copyleft license, making it useless for any gamedevs who want to license their game with one. Even people wanting to use copyleft libraries or code with their fork are completely restricted.

This is not (entirely) correct:

> Free

> Game developers (royalties apply after $1 million USD gross product revenue)

> Individuals and small businesses (with less than $1 million USD in annual gross revenue)

> For educators and schools (no revenue limits)


Yes, royalties apply after $1 million, hence, not free. Plenty of small/medium studios use Unreal.

Free for educational purposes means it is not free, since the royalties still apply if the game is released.

Last time I checked, free means free.


free means whatever the reader wants it to mean, including free as in beer, free as in speech, or free as in puppies. In this case, free until you have $1,000,000 in revenue is free enough outside of pedantic online arguments about the definition of free. If you made a million dollars from something, having to pay the thing that helped you get to that place doesn't seem unreasonable, but maybe if I had a million dollars I'd feel differently.

It's pedantic to say it isn't free when...it isn't? Plenty of smaller games make over $1 million. It's news to me that the concept of free is whatever the reader wants it to be. Free as in beer or free as in freedom, Unreal is neither.

Freedom in licensing definitely requires context and specification; an absolute view of freedom has little practical use. Game engines released, for example, under a GPL license may align with an absolute view of freedom, but they are useless for the vast majority of commercial games.

What needs to be below $1M? Revenue of the whole shop, or just that game? If you have two games, how is it calculated? Can you create a legal entity per app to manage the limit? Is it annual revenue? Can you set up a publishing legal entity in front that takes most of the revenue as publishing cost and pays peanuts to the dev-shop legal entity that holds the license?

Per game, lifetime revenue.

Thank you. But that's very blurry, no? Is my Mario 2 a new game? Mario 3D? What about Mario vs Donkey Kong? Smash Bros with Mario? If all are new games, what about Mario v1.3? Etc. I wonder how they formalised it in legal language.

No that’s not very blurry at all. Mario 2 is clearly a different game than Mario 1.

Epic is not claiming any rights on your IP.

Mario v1.3 typically means Mario 1 with some patches (unless you're Kingdom Hearts), so yes, that is still the same game. I believe DLC is bundled into the parent game.


$1M per product. So if you’re talking about releasing engine mods for free, there is no cost.

Why did all these migrating game developers pick unity in the first place if they wanted to release a copyleft game?

My reply had nothing to do with Unity. It was directed at the statement that Unreal being source-available is somehow equivalent to being able to freely fork and modify Godot to add specific features.

Furthermore, many of these devs have been on Unity for many years, not because it was the best, but because it was pretty much the only accessible and modern choice for smaller projects before Godot and other open-source engines were mature enough to release games with.


Does unreal expose an interface like GDCLASS for new types?

Probably UObject or AActor, depending on what you need it for.

There are macros-as-descriptors like in Godot but they don’t really work in the same way. But if you’re asking whether you have access to base level objects so that you can extend them, yea you can.


So is Firefox, and we all know where the community contributions there end up

I don’t know anything about this. Does Firefox reject contributions?

This article is pretty clearly LLM-generated, or at least heavily padded out by an LLM.

I would have guessed it was written by an actual journalist who used to work for a newspaper. It's mixing story reporting, interview summarization, and quotes from independent sources in a way that's very familiar to me and seems like the work of a human. Although few places can afford that journalist touch any more, so maybe you're right!

What? You read the articles?

Oh my God, it's clearly a reported story by a journalist.

Stretching $8 million to do $2 BILLION of product development is the hard part.

So is trying to make a game engine during the worst industry contraction since the collapse of Atari.

Being open source is kind of saving them right now.


> Stretching $8 million to do $2 BILLION of product development is the hard part.

It's even worse over in Rust land. Bevy is making progress, but it's slow.

There are several parts to this. There are the run-time components - the graphics stack, the physics engine, and the 2D user interface components are the big ones. There's the gameplay programming system - Blueprints in Unreal Engine, scripts for most others. Then there's a developer user interface to provide a GUI for all this.

Open source development can do the run-time components. There's general agreement on how those are supposed to work, and many examples to look at. The developer user interface is tough. Open source development has a terrible time with graphical user interfaces. Those need serious design, not just a collection of features and menus that happen to compile together.


>the worst industry contraction since the collapse of Atari.

Is that true? It feels like we've been in an indie renaissance for a while now.


There are certainly indies doing well (at least 3 significant hits this year to my knowledge: Balatro (made with LOVE2D), Animal Well (C and a custom engine), and Buckshot Roulette (made with Godot)), but game developers in general have not been doing great. They're seeing layoffs on par with the rest of tech.

We've had upwards of 20k games industry layoffs in just the last couple years, iirc. Funding has dried up too, which is really bad for indies since they're more dependent on it than established studios.

It's more that there's an absence of investment funding. Profits are still being made and work is still being done, but it's hard to get a new project funded, let alone allocate that funding to engine dev.

Founders rooting for their competitors to self-destruct is pretty rare in devtools.

Yeah, my experience is that there's actually mutually desired success and a lot of indirect collaboration. I worked on a debugging tool and a tracing platform, and in both cases there was a lot of cross-pollination with similar and competing companies through meet ups, forums, talks, etc. I'm not sure executives would have endorsed it but it definitely improved the products.

Unity's killer feature really is "edit while in Play mode." I miss it when in Unreal as well.

That said, it's really a double-edged sword. Unity can turn into a buggy mess if you try to get too clever about mucking with domain reloads and such.


Something like Live++ would help in Unreal.

Unreal has hot-reload of code. The difference is that in Unity the scene editor tooling is active while in play mode. You can move objects, inspect values and edit game object properties and the playing simulation is updated live.

Unreal has some of this, in that you can see dynamically spawned AActors(game objects) but their properties and components are not updated and are not editable while playing.



I like how it was the Godot maintainers that ended up demonstrating their Unity.

Well we gave up waiting for Godot


The engine is actually named after that reference.

The timing of it all is almost Unreal.

Please, can we not? These weak puns are not even funny, and if HN devolves into Reddit, I have nowhere left to go.

It's true but you're not supposed to say it: https://news.ycombinator.com/newsguidelines.html

Thanks, yeah I discovered that after I posted. Didn’t the guidelines also used to say something about not making puns or joke threads?

The guidelines don't say it now, which is what matters.

And yet, the votes speak for themselves.

Some puns get upvoted, some don't. I'm happy to take the downvotes when it's not a well-liked one.

You can just not read a comment, you know

You can't know what a comment is before you have read it, so no that isn't a solution.

Shock waves like a level 3 Quake inside an Arena.


