KSP2 is spamming the Windows Registry until the game stops working (kerbalspaceprogram.com)
272 points by firewire on Sept 25, 2023 | 294 comments



One of the comments mentions this:

"Alright, doing some investigating it seems that they are saving the Pqs preferences based off of the instance ID of the pqs object, which, according to unity's own documentation, changes between runs of the game, hence why its saving 10 trillion different copies of the same data"

So, somebody didn't notice a "changes every game" instance ID was in the path and/or data. They thought they were overwriting a single key.
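For anyone wondering how that plays out in code, here's a minimal sketch of the failure mode, with a plain dict standing in for the registry and made-up key names (Unity's actual instance IDs come from GetInstanceID(), which is indeed not stable across runs):

```python
import itertools

registry = {}                          # stand-in for the persistent registry hive
_instance_ids = itertools.count(1000)  # stand-in for Unity's per-run instance IDs

def save_pqs_prefs_buggy(prefs):
    # Bug: the key embeds a per-run instance ID, so every launch creates
    # a brand-new key instead of overwriting the previous one.
    instance_id = next(_instance_ids)
    registry[f"PQS_Settings_{instance_id}"] = prefs

def save_pqs_prefs_fixed(prefs):
    # Fix: key by a name that is stable across runs.
    registry["PQS_Settings"] = prefs

for _ in range(100):                   # simulate 100 launches
    save_pqs_prefs_buggy({"quality": "high"})
buggy_keys = [k for k in registry if k.startswith("PQS_Settings_")]
print(len(buggy_keys))                 # 100: one orphaned copy per launch

for _ in range(100):
    save_pqs_prefs_fixed({"quality": "high"})
print(sum(1 for k in registry if k == "PQS_Settings"))  # 1
```

It also implies the game could never have been reading the saved values back: a lookup keyed by this run's instance ID can't match any key written by a previous run.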


Somewhere a Unity exec has just figured out the new metric that determines their royalty pay rate.


Are you sure it isn't based on number of instructions retired by the CPU? or the number of pixels rendered by the game?


They will bill you by the pixel. Usage metrics are streamed in real-time. You're gonna wish 480x320 came back.


cubic pixel/seconds


That’s voxels/second, surely?


pixel displacement


bitfield component values of a texel color BGRA’s


Coming from a Linux background, what is the Windows registry and why do things need to write to it? All I ever read about it seems to be horror stories.

Can't you store stuff alongside the install? Or in some user data location?


> what is the windows registry and why do things need to write to it?

It's a centralized, high-performance small key value store that's the alternative to writing a million config files in random places.

It's arguably much better in my experience. E.g. one of the never-ending headaches I always have on Linux is updating config files when a package updates. There's no automatic file merge in general (hence .pacnew and such) so you gotta do it by hand (and not everything is available with the foo.d/ hack). The registry already operates at the value granularity so it bypasses this kind of issue.
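A toy illustration of why value granularity sidesteps the merge problem (the setting names are made up):

```python
# Package v2 ships new defaults; the user has overridden one value.
defaults_v1 = {"Theme": "light", "Autosave": "on"}
user_overrides = {"Theme": "dark"}   # the only thing the user changed
defaults_v2 = {"Theme": "light", "Autosave": "on", "Telemetry": "off"}

# In a value-granularity store, new defaults merge in underneath and user
# overrides survive untouched -- no .pacnew-style whole-file conflict to
# resolve by hand.
effective = {**defaults_v2, **user_overrides}
print(effective)  # {'Theme': 'dark', 'Autosave': 'on', 'Telemetry': 'off'}
```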

And as a developer you don't have to worry about some things you might not think about, like the trade-off between corrupting your config files with in-place updates vs. having to create a new file and replace the old one after every config change.
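The write-new-file-and-replace half of that trade-off is the standard trick; a sketch in Python, with a made-up file name:

```python
import json
import os
import tempfile

def save_config_atomic(path, config):
    # Write to a temp file in the same directory, then atomically swap it in.
    # A crash mid-write leaves the old file intact; rewriting in place could
    # leave a half-written, corrupt config behind.
    d = os.path.dirname(os.path.abspath(path))
    fd, tmp = tempfile.mkstemp(dir=d, suffix=".tmp")
    try:
        with os.fdopen(fd, "w") as f:
            json.dump(config, f)
            f.flush()
            os.fsync(f.fileno())       # make sure bytes hit the disk
        os.replace(tmp, path)          # atomic on POSIX and Windows (same volume)
    except BaseException:
        os.unlink(tmp)                 # clean up the temp file on failure
        raise

save_config_atomic("settings.json", {"resolution": "1920x1080"})
```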


What's always annoyed me with many operating systems is this need to organize based on function instead of domain. If I install an application named "Bob" then I want ALL of Bob's configuration to live within Bob. I don't want it split between different registry keys or /usr/local, /etc/, /var, /lib, etc.

I'm fine with there being conventions and categorizations, but I'd like the root node to be the application itself. Yes, I even want this for multiple users.

I know there are design arguments to / for the various different ways of organizing configurations, but IMO they are just inferior from at least this user's perspective.


You are correct, from a user's perspective this seems like unnecessary complexity. But here on the system administration side of Chesterton's fence, these splits enable more secure, more robust and more reliable systems. Among others, this allows for improved backup and recovery, better storage efficiency, easier application of security patches and providing centralized configuration.


I think for all future development, the industry agrees with you. ChromeOS, iOS, Android, and Windows Store apps all utilize packages and per-package storage.

We're just suffering from backwards compatibility for things made over 30 years ago.


This has existed for a while now, but didn't catch on:

https://gobolinux.org/at_a_glance.html


That's largely what MacOS does.


Yes exactly.


I like being able to use revision control to track config changes mostly in one place (/etc), using etckeeper.


[dead]


And Microsoft knows corporate is where the recurring revenue is.


The number one reason corporate picks Microsoft is the control over people it provides via its lush telemetry dashboards on worker efficiency.


That would be a directory called /bob right in the root dir, containing everything relating to the Bob app, including the firewall settings (both the distro defaults and the user-layered), the executables, the image resources, the db libraries, the db store... never mind that the firewall is another package that needs those same settings...

No.

Or, if you really do want something so insane, well great news, you have snap and flatpak. If those are not complete enough examples of this shining vision, go ahead and show us how it's done.


You're constructing an argument the person never made.

The binary for my app could live in a hypothetical /apps/bob.

Its config could be in there too.

There's no need to duplicate dynamically linked system libraries in there.

If the app needs a different version of a library than the one provided by the distro/OS, it could vendor it (or link statically). Optionally it can vendor and still try to use the OS version if the version requirement is satisfied, but this is just a memory optimisation (disk isn't that expensive).

There's also no need to place firewall rules in there. I'm not sure where you got that from. Firewalls are beyond the scope of a single application?

As for making things system-wide available, there are already a few solutions for this (symlinks being a very backwards-compatible way of doing this on *nixes).

For all their faults and poor UX execution (albeit maybe things improved since I last used them), Flatpak and Snap have some good ideas!


> There's no need to duplicate dynamically linked system libraries in there.

> For all its faults and poor UX execution (albeit, maybe things improved since I last used them) flat pack and snap have some good ideas!

Hum, flatpak says hello.

Unlike the kernel, Linux userland is a not-backwards-compatible hell, so the only way to make sure things don't blow up in a spectacular fashion is bundling the whole world that you need with your app, or hoping that someone bundled the things you need in a Flatpak application platform for you.

> There's also no need to place firewall rules in there.

> As for making thing system-wide available, there's already a few solutions for this (symlinks being a very backwards compatible way of doing this on nixes).

I'm pretty sure that if you're proposing an "app is a folder" specification, any system-wide rules an application needs to install or suggest or toggle need to live somewhere in the application folder. Icon customizations, firewall rules, mDNS, UPnP, those sorts of things.

If the application folder is the end-all, then it needs to have some form of "$sysconfigdir overlay" folder if it wants to modify $sysconfigdir without actually being able to modify or spill over the actual $sysconfigdir (and therefore not being contained in the folder at all).

That is in fact how flatpak works: you're never supposed to write anything at the host /etc, you overlay things in your application folder XDG_CONFIG_DIRS and flatpak-aware applications read flatpak XDG_CONFIG_DIRS folders.

That is in fact how nix works: nix packages are supposed to be read-only, self-contained folders in /nix/store/$hash-package-$version, and /etc happens to be a non-persistent cache thing that will be overridden sooner or later by a nix expression.

But that of course means that now you have several XDG_CONFIG_DIRS/$sysconfigdir lying around in your system.


The NixOS symlinked world sounds about like SCO OpenServer.

I get that the actual ugly details are hidden away behind automation in the form of the special DSL and highly specified framework/structure, and maybe it's sane enough to use, and maybe it addresses some pain point or two better than other arrangements.

Just remarking that I've seen 'everything is a symlink' before, and it's the exact opposite of a new idea.

Whether that means nix shares a bad idea with another bad system or this was one of the good ideas from an arguably successful system (for a time) I won't try to say.


> There's no automatic file merge in general (hence .pacnew and such) so you gotta do it by hand (and not everything is available with the foo.d/ hack). The registry already operates at the value granularity so it bypasses this kind of issue.

it gets around this issue by creating a gigantic systemwide or per user bucket of crap that inevitably ends up inconsistent.

since it is custom there is no good tooling for doing diffs and many developers treat it as quasi private so the config values are often not human readable.

there are management utilities that work ok, until they don't. then you have to start over because who knows, it's all just crap.

it's literally like a jerry seinfeld joke. you think configuration should be sensible, then everybody just sees this bucket and goes wooopty woopty woo and then it's just filled with crap.

bill gates probably sent emails about it. "i tried to look at the registry, but then it was just trashed."

that is the registry. a little shadow filesystem with a bizarre layout with weird and incomplete tools that is filled with crap.


"high-performance"

We might have differing opinions about that. (I've got experience for example in Windows kernel mode drivers and service development.)


What are you comparing against here that's leading you to a different opinion? Are you claiming file I/O would be faster or that you could write the equivalent functionality faster?

Also note that performance isn't just speed either.


For example SQLite is superior. That said, on Windows you kinda have to use registry, especially on the kernel driver level.


> For example SQLite is superior.

First... where are you getting this from? I just spent half an hour writing an incredibly simple benchmark and all I see is SQLite being something like 20x slower than a registry query, seemingly caused by repeated locking & I/O system calls. This is on a trivial database with just 1 table with just 1 row, vs. a registry that's on a machine that's been running for years. If you have a benchmark that can disable the locking and get comparable performance, I'd love to see it. Not that it would mean anything though, given the next point.
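For what it's worth, the SQLite half of such a micro-benchmark might look something like this (the registry half needs winreg and a Windows box, so it's omitted; absolute numbers vary wildly by machine and mean little on their own):

```python
import os
import sqlite3
import tempfile
import time

# A trivial on-disk database: one table, one row.
db_path = os.path.join(tempfile.mkdtemp(), "bench.db")
conn = sqlite3.connect(db_path)
conn.execute("CREATE TABLE config (key TEXT PRIMARY KEY, value TEXT)")
conn.execute("INSERT INTO config VALUES ('Theme', 'dark')")
conn.commit()

N = 10_000
start = time.perf_counter()
for _ in range(N):
    # In autocommit mode each statement is its own read transaction,
    # so locking overhead is paid on every lookup.
    (value,) = conn.execute(
        "SELECT value FROM config WHERE key = 'Theme'").fetchone()
elapsed = time.perf_counter() - start
print(f"{elapsed / N * 1e6:.2f} microseconds per lookup")
```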

Second, even if it were somehow faster... you'd be comparing apples to oranges. The registry has a bunch of things SQLite isn't designed for: security integration with the rest of the OS, a hierarchical structure, OS hooks for monitoring & interception, multithreaded access, etc. Have you tried doing these with SQLite before praising how fast it is?

> That said, on Windows you kinda have to use registry, especially on the kernel driver level.

Is it common for drivers to use SQLite to store configuration information in any OS? I dare say I've never seen this.


"Is it common for drivers to use SQLite to store configuration information in any OS? I dare say I've never seen this."

It's not. You're going to use your assigned registry hive for it.


[dead]


No guesstimates, but it's generally very bad.

Especially when there's some corporate "security" software in the loop. And that's the case you have to really code against.


You already realize it's your security software that's slowing your system calls. Blaming it on the registry makes no sense. It's like blaming HN for being slow because your ISP is slow.


Not mine, but the actual customers'. "Works for me" is not going too far.

Besides, registry is not a rocket even on a clean system.


The performance part is questionable, as is the discoverability: Programs write binary blobs to impossible-to-find keys and, as we see here, there's nothing preventing a severe loss of performance if the program writes too much to the registry. It can only be examined with specialized tools, and, due to the binary values written to it, it might not be meaningfully editable at all.


It's also really a pain once you realize that many strings are stored in raw UTF16 form. It makes using any tools or doing any automation on the values much harder than it should be.


There's also no mechanism for the OS to offer the user the ability to purge registry entries when a program is uninstalled, so the registry just bloats further and further.


Installers are supposed to do that.

But they have to be well-written installers (many aren't). And anyways, lots of installers are written by companies which want to leave their footprint behind even if you do uninstall.

Finding orphaned registry entries is hard - it's not always clear what application put them there, and how to determine that the application is still installed.


This is no different with most Linux distributions or macOS. When applications are deployed and create configuration files (system-wide or user), they're often not removed when deleting the application.


Registry bloat was one of the reasons why reinstalling Windows used to speed up computers in the Windows 95-2000 era.


It is one of those ideas that sounds good. "Hey, we will put all the configs into a database under one tree." Oh wow, that sounds great, sign me up. The main fundamental problem is now you have two trees, with completely different access patterns.

It is also all in the spin. What if I told you over on this side of the OS table we too keep all our configs in this great single tree database, not only that, we keep our data in the same database, everything is accessed using the same simple unified interface, the api has about five calls, it is pretty great. It also has the amazing feature that physically separate devices can be merged into the same tree.

Preposterous! some would say. All in the same tree! Why, you would get everything muddled up. You must have a separate tree for each device, and a special tree with its own special access patterns just for configs.

But for real, people keep trying to reinvent the registry over in linux land. (cough) gnome (cough) and it is terrible.


It's great and granular until someone starts to push complex serialized structures or large blobs as registry values...


The registry is far from high performance. And given the registry is basically a massive hierarchical structure, how is it different from files re: "random places"? The same dev that dumps files in "Random places" would dump values in random places in the registry.

The registry is almost never a good choice for storing...anything outside of the operating system itself, unless the data is somehow tightly coupled with a specific Windows install (e.g. an activation key). The idea of storing instance data in the registry is madness.


>one of the never-ending headaches I always have on Linux is updating config files when a package updates. There's no automatic file merge in general (hence .pacnew and such) so you gotta do it by hand (and not everything is available with the foo.d/ hack).

nix solves this


Except the part where whenever you install anything on Windows the whole OS grinds to a halt... I assume writing to the registry takes an exclusive lock. Or has to contend with decades of backwards compatibility layers and triggers, or both...


OP asked for an analogue, not a comparison.

I think it's silly to compare the two.


Imagine every setting in ~/.config and /etc is stored in a corresponding SQLite database, and you have a common interface to access both.

Now imagine this database is not a great engineering product like SQLite but really, really REALLY fucking sucks at being a database, and slows down quickly as it grows.

Now imagine there is no sensible way to get obsolete data out of it, and as every program uses same file it just gathers, and gathers, and gathers...
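To make the analogy concrete, the "common interface over one settings database" shape is roughly this (a hypothetical API, with SQLite standing in for the registry engine):

```python
import sqlite3

class SettingsStore:
    """Registry-like hierarchy flattened into key paths such as
    'Software/MyApp/Theme', all stored in a single database file."""

    def __init__(self, path):
        self.db = sqlite3.connect(path)
        self.db.execute(
            "CREATE TABLE IF NOT EXISTS settings (path TEXT PRIMARY KEY, value)")

    def set(self, path, value):
        # Upsert: overwrite the value if the path already exists.
        self.db.execute("INSERT OR REPLACE INTO settings VALUES (?, ?)",
                        (path, value))
        self.db.commit()

    def get(self, path, default=None):
        row = self.db.execute(
            "SELECT value FROM settings WHERE path = ?", (path,)).fetchone()
        return row[0] if row else default

store = SettingsStore(":memory:")
store.set("Software/KSP2/PQS/Quality", "high")
print(store.get("Software/KSP2/PQS/Quality"))  # high
```

The complaints above then map onto this directly: every program writes into the same table, and nothing ever deletes rows.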


> REALLY fucking sucks at being database, and slows down quickly with sizing up.

This is over 20 years out of date.


Then why does KSP2 spamming the registry with orphaned entries eventually cause KSP2 to hang on launch?


Likely because the app can't deal with the size of the data it's getting back when it tries to get its entries.


The database-backed configuration mechanism makes a ton of sense, and if you squint, the filesystem is really kind of a database anyway. With the registry, there’s a lot you get for free—you can set defaults system-wide, you don’t have to deal with parsing, you can get new values without re-parsing a file, etc.

A lot of the horror stories came from earlier versions of Windows which had problems with reliability.

If you spend time as a Windows sysadmin you can start to appreciate it, because it does make certain administration tasks easier. Like “I need to change 10 different registry keys on 50 different machines” is easier on Windows. On Linux, I’d do the same with, like, Ansible scripts which can be a lot more error-prone to write.


Most people aren't sysadmins. The registry is inaccessible to them.


> the filesystem is really kind of a database anyway

I thought ReFS was a failure. /s


It’s /etc but harder to browse and edit.

Or, from the contrary perspective: it’s the 500 places config lives on Linux, but in one place instead.

(Except it still has a deep hierarchical structure, so that second one is kiiiinda not entirely true, in that you can run into exactly the same issues as scattered config files on Linux)


You can search for keys and values across all loaded hives. This makes finding things in the registry much easier.

Editing, sure, you've got essentially a single GUI application and CLI or programmatic access. Your options are certainly more limited than the plethora of text editors available.


Or you can use PowerShell, which allows you to browse the registry as if it were a filesystem.

    PS> cd HKLM:\Software\Microsoft\Windows\CurrentVersion
    PS> ls
    [...bunch of stuff...]
    PS> gp Themes DesktopBackground
    DesktopBackground : c:\windows\web\wallpaper\[...default system wallpaper...]


That kind of defeats the purpose, doesn't it?


How so? I don't follow. It's still the same optimized, global, secure key-value store. It just has a different interface if you cannot possibly fathom using different tools than the ones you use to deal with the filesystem.

Pwsh is pretty neat in that regard. Besides the actual filesystems drives, many systems have "providers" that allow you to treat their objects as files: the registry, environment variables, the functions defined in your session, the variables, the aliases... Unix's "everything is a file" became "everything is an item".


> you've got essentially a single GUI application and CLI or programmatic access

You can export a registry path as a text file, make changes in your preferred editor, then import it again. It's annoying but I've done it before when needed.


I think the only two "real" differences between throwing stuff in a folder and the registry are that:

* an uncorrupted registry hive is supposed to be a directed acyclic graph (while a modern filesystem can have arbitrary cycles with symlinks and bind mounts and junctions), and;

* the registry has more limited name length limits, even compared to Windows' already somewhat low filesystem name length limits.


The windows registry is both of those things.

It has a "hive" which is part of the user profile. User preferences are stored there ("HKEY_CURRENT_USER").

System-wide preferences are stored in a "hive" for system apps ("HKEY_LOCAL_MACHINE").

It allows applications to mix and match system-wide stuff with user-specific stuff, which only differs by which "hive" it wants to query against. It has several data types for different types of records. There are conventions for how to store things (although apps can do whatever they want), so apps usually store their state and config data under HKLM\SOFTWARE\MyCompany\MyApp for system-wide stuff, and user-specific stuff will live under HKCU\SOFTWARE\MyCompany\MyApp.

All in all, my experience has been that on average its usage is a bit more standardized than config files. Of course on windows, apps (particularly C# apps) ship both with config files and a boatload of registry entries.

Lastly - by design, HKCU (the user profile hive) is assumed to sometimes have orphaned data. If a program is installed per-user, and you uninstall it as a different admin user than who installed it, the other user profiles cannot be loaded and modified by the installer, thereby orphaning that user's data.


I believe there is a Registry of sorts in Windows 3.1, but it is largely empty. It was only in the Windows 95 era that it ballooned into what we have today. Coming (at the time) from an Amiga perspective, I was bemused to realize that if 3.1's progman.ini file was damaged, the program groups in Program Manager (think of each group as a node in Start -> Programs, basically the user-oriented files and executables from a software install) could just disappear! While there's a slight performance hit, one would think it would be more robust to scan the disk for .grp files every boot and build this data dynamically in memory. I have always assumed this was done because 3.1's memory management was complete crap, necessitating that Microsoft "write everything down". Ditto for 95, and being a much more ambitious OS, there was that much more to write down. Then the third-party vendors jumped on board. Fast forward to 2023 and this legacy still persists when it doesn't have to.

I could be blowing smoke, but this has always been my thinking. An old joke: Registry was derived from the Latin word registratum, which means "put all your eggs in one basket".



Based on the follow-up questions, it sounds like you already know what it is and just wanted to advertise your allegiance to the Unix way of doing things.


It's kinda like a system-wide dconf


> Coming from a Linux background, what is the Windows registry and why do things need to write to it? All I ever read about it seems to be horror stories.

The Windows Registry is the NT Kernel's system config/preference store. The closest Linux equivalent is dconf. Like dconf it is built to be a read-mostly/read-optimized database. It's not as strongly focused on a service-bus architecture as dconf. It tries to heavily optimize for an MRU on-disk order (somewhat like redis) and optimize for strong write consistency (unlike redis which is happy to do more in memory between flushes to disk). Any writes at all generally thrash the Registry "hive" database files some. Heavy amounts of writes will murder it. It was optimized for reading not writing.

The Registry was side-ported to Windows 95 from the NT Kernel, in part because COM (it became the central spot for registering COM components), and a lot of mistakes were made in the messaging about it. It was intended to be Kernel-focused and mostly never used by user-space applications. That unfortunately wasn't made clear enough, and needing to register COM components in it certainly muddied the message. So Windows apps have a long history of storing a lot of things to the registry, much of which they should probably use user config files for.

> Can't you store stuff alongside the install? Or in some user data location?

In Windows, due to a complicated dance with anti-malware efforts and secure binary memory mapping, it is generally frowned upon to store stuff alongside installs (in the %ProgramFiles% or %ProgramFiles(x86)% directories). Many programs (especially well-written ones) don't even have write permissions at all to their install folder (similar to how many distros lock down /bin and sometimes /usr/bin and/or /usr/local/bin to superuser accounts only). For backwards-compatibility reasons with applications dating all the way back to Windows 95, some app installers are allowed to still re-ACL their install directory to give back write permissions to the application. Also, in modern Windows (since Vista), depending on a number of factors, Windows will actually lightly sandbox the app when that occurs and redirect ACL changes and writes to an "overlay" directory, allowing the app to believe that it is still writing to its own install directory when it is actually writing to a machine or user directory outside of %ProgramFiles%.

On the other hand there are plenty of user folders available for config storage: generally the suggestion is %AppData% for most such files. (This is the "Roaming" app data that may be automatically copied to other machines for some users on some networks, mostly enterprise/corporate accounts/users.) Some apps may prefer %LocalAppData%, which doesn't roam, for larger config files or more machine-specific transient things (like window positions or caches). It's also common enough to see cross-platform apps use Unix-style dotfiles under %Home% and even sometimes XDG-style dotfiles under %Home%/.config/. That's not generally recommended, and especially because of roaming behaviors the general preference is to use the appropriate %AppData% or %LocalAppData% and avoid cluttering %Home%. Though many Windows users don't even really use or see %Home% for a variety of interesting reasons, so it's not entirely frowned upon as wrong. (Unlike storing config files under %Documents%, an ancient mistake of many Windows apps, not as terrible as storing things in the Registry they shouldn't, but more visibly obnoxious to Windows users who want tidy Documents folders and feel that folder should be entirely user-controlled.)

Also, there is a machine-wide config location, somewhat akin to /etc, named %ProgramData%.
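As a rough guide, picking between those per-user locations from code looks like this (a sketch; "MyApp" is a made-up name, and the environment variables only exist on Windows, hence the home-directory fallback):

```python
import os

def config_dir(app="MyApp", roaming=True):
    # %AppData% roams with the user profile on managed networks;
    # %LocalAppData% stays on the local machine.
    var = "APPDATA" if roaming else "LOCALAPPDATA"
    base = os.environ.get(var, os.path.expanduser("~"))
    return os.path.join(base, app)

print(config_dir())               # e.g. C:\Users\me\AppData\Roaming\MyApp
print(config_dir(roaming=False))  # e.g. C:\Users\me\AppData\Local\MyApp
```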


A bit weird that cross-platform apps use dot files or dot directories directly in %Home% on Windows. If they are already cross-platform, and have thus taken the time and resources to do that, why not add a compile flag to move them into %AppData%?


That is a very good question, made all the weirder by even Microsoft's own cross-platform apps using %Home% instead of %AppData%. VS Code uses %Home%/.vscode/ and dotnet (.Net 5+) uses %Home%/.dotnet/.

One of the things that made a work computer more complicated than it needed to be was that part of the network setup redirected %Home% some of the time to a network drive (Z:) (a "Home" drive) and other times %Home% was still on the default drive in the default user folder location. I'd constantly have to copy files like npm's .npmrc from one folder to the other to make sure that every use of npm found a right copy of .npmrc.

That "home drive" setup is, in my experience, a relatively common ancient Windows corporate hack for hand-roaming some user files (sometimes including %Documents%, which was likely the real intent, but %Home% used to be easier to redirect than just %Documents%), so there is a sense of irony in some tools using %Home% as non-roaming storage only to have it haphazardly roam due to some ancient corporate policy. It's also funny that it is possible for there to be two %Home% folders on Windows in competing locations with different contents, because the %Home% redirect seems buggy.

I think part of it is that there is a growing sense on Windows that %Home% is for "non-roaming, developers might want to text-edit this" configuration and %AppData% is for more UI-driven config that is less expected to be edited. A lot of developers learn %AppData% just fine, but there is some cross-platform convenience for developer life if you can always, in PowerShell (or bash), just vim `~\.somercfile` and expect that to always work. But I still think using %AppData% more consistently is possibly a better thing for Windows apps to do.


It's just /etc


> It's just /etc

Some files in /etc have manpages.


Some registry keys also have The Old New Thing posts by Raymond Chen [1] /s

[1] https://github.com/mity/old-new-win32api#registry


It is some combination of /etc and /var: a hierarchical key-value data store shared by the whole system, used mostly for configuration management.


The Windows registry is a hierarchical data storage system. Yes, basically a filesystem.


like dconf, but strictly worse (e.g., it doesn't have schemas)


“create_or_update pattern considered harmful,” so they say.

Actually I kind of do believe this, but separating the two doesn’t always solve the whole problem if you’re creating on every startup rather than only once on, say, profile creation or new game or something.


I would say, using something that doesn't belong to you as a primary key considered harmful. Create-or-update is a good pattern for robustness (e.g. so a troubleshooting user won't brick the app by deleting or renaming the key), if you can guarantee the uniqueness of the primary key. You can't constrain some identifier from a different system.


How could they not notice this?


I can understand not noticing the extra writes; but doesn't this mean that they were never loading the proper value, always loading the default, if the new key was always unique?


It wasn't found in unit tests. /s


Even integration tests don't usually test with a sufficiently significant amount of data that this kind of regression would be noticeable.


> So, somebody didn't notice a "changes every game" instance ID was in the path and/or data. They thought they were overwriting a single key.

Because messing with the Windows registry is a recommended action. /s

What's wrong with game config files?


KSP2 is really becoming the premier example of "Nearly perfect labour of love's legacy ruined after being bought by a larger company."

More on topic: I have no idea why one would want to use the registry to store this information.


It's far worse than that. Take-Two bought Squad and KSP along with it. Soon after, they took Squad's original IP from them and gave it to Private Division to develop KSP 2. Then they poached staff from Squad. It's not like KSP 2 is developed in some innovative way; I was expecting that a game like KSP 2 would have been developed in its own engine since it's so unique among other games. But nooo. KSP 2 still uses Unity, and in a worse way. You are better off installing better graphics mods on KSP.


Incorrect on nearly all counts:

* The GAME (KSP) and the IP was purchased by Take-Two, not the company itself.

* The IP was then given to Star theory (MNC/SMNC/Planetary Annihilation guys) to develop KSP2

* Star theory failed once to deliver on the budget they promised, they got extra money

* ST failed second time,T2 didn't wanted to give them more money, so ST wanted to sell out to T2

* T2 went "fuck it", took IP from them

* Take-Two was one to poach developers from Star Theory (not Squad!) to make new studio to make KSP2. IIRC they basically made blanked statement "if you want to work on KSP we will hire you" to their devs, a lot of them came over.

I assume the last part happened because T2 went "hold on, you failed to deliver and then want us to buy you? Why would we buy a company with incompetent management in the first place? Poaching devs is cheaper".

> You are better off installing better graphics mods on KSP.

Well, at least you got that right.


You got it wrong. Star Theory didn't want to sell out to Take2. Take2 realized Star Theory couldn't fulfill the totally-not-abusive-and-predatory deadline Take2 wanted for releasing the game, so Take2 said "ok" and tried to purchase the studio. Star Theory rejected the purchase, so Take2 poached like 60% of their developers and employees instead, to the point that ST had to close its doors because Take2 poached almost their entire workforce.


> should have been developed in it's own engine since it's so unique among other games

with my limited technical knowledge about game programming and much less limited technical knowledge about software development in general this seems quite wrong

because while the game is somewhat unique on some points, on many, many(1) other technical points it's not, so by using a game engine you can save a lot of time/problems with all the points it's not unique in and just either replace or adapt the parts where it is

so while Unity might very well have been a bad choice

the general idea of using an existing game engine was not

(1): Like window handling, input handling, asset loading/packing/bundling/compression, parts of game saving, most parts related to the rendering pipeline, menus, statistics/crash reporting, installers, and probably more.

Though this is also how game engines rot: by not maintaining many of the built-in components, leading to any non-trivial game needing to replace them all the time. I think Unity had been going in that direction.


KSP is a game that routinely runs into floating point related problems and a game that needs deterministic physics simulations to be accurate and glitch free at all simulation speeds in the vacuum of space and inside the atmosphere of a planet.


yes so you need to replace the physics engine

this still leaves quite a lot of parts you do not need to replace


The "some point" is the main selling point of KSP. The "many other" technical points are beside the point. If you get the small things right but fail at the most important part, you fail overall.


I think you missed the point of my comment.

developing games is cost and time constrained

if you can use an existing engine and not waste resources on all the things which are not the main selling point, and in turn spend more resources on getting the main selling point right, why should you not do so?

AFAIK few, probably none, of the more successful established game engines force you to use their physics engine


Not sure, but isn't Unity in this case covering your need for some tools, like rendering, asset management and so on, but maybe not all, i.e. deferring the physics to you? Or so I would assume.


I heard from someone who actually works in the industry (as engine developer) that companies usually don't use Unity out-of-the-box and write their own tools and extensions (e.g. memory management) and change parts of the engine as the needs vary greatly depending on the game they are making. Not sure how common that really is though.


You may be thinking of Unreal in this instance. Unreal is open source* but you pay to license the engine, so you can make core changes, e.g. memory management, or tweak the underlying net code. Unity is closed source, so you can't really modify the very core parts of the engine.

FWIW my experience in gaming has been that Unity is exceptionally powerful and allows game developers to create games that would otherwise require an entirely separate dev team to support the engine. When I first learned game dev, the code sections were almost entirely devoted to interacting with the underlying graphics libraries (OpenGL or DirectX) and hardly any to creating powerful features in a game. Now, using something like Unreal or Unity is akin to using a web framework like Ruby on Rails or Laravel.

* The source is available to view and modify


Unreal is not open source.

I think you might mean that unreal is "source available" which doesn't confer any of the rights of open source, but does allow you to view and modify the source code subject to a commercial license (which might be free for personal, small scale use. I don't know unreal's pricing structure).


> Unreal is open source

Unreal is not open source (cf. the Open Source Definition: https://opensource.org/osd/). It's just that the licencee is allowed to view and modify the source code within the limits of the licence.


BTW, Unity does offer the full source of the engine to companies that are willing to pay enough for it. So you can effectively modify it as much as Unreal.

Although, I don't even know if the KSP2 studio is big enough to afford it, let alone whether they are actually doing it.


> open source*

> * The source is available to view and modify

Just say “source available”; then you are using common terminology in a non-controversial way and don't need a footnote that is longer than the term.


My limited experience with Unity was that, if you are planning to do something the Unity way, it's very easy to bootstrap and start just coding cool features. Which is why there's a proliferation of Unity store assets + cheap gameplay on Steam.

But if you want to do something a bit more complex, like modifying 3D objects in flight based on unit interaction, not only are you fighting an uphill battle, but the documentation is all out of date and mostly wrong. Because Unity isn't just an engine, it's a set of default addons to that engine, and each of those addons was written to be just generic enough, mostly to enable people to start making FPS and RTS games really quickly. If you want to step outside that paradigm you are basically just engine coding again.


Plenty of companies have source access to Unity and write their own custom portions of it. It's just not accessible at the hobbyist or indie budget level. It's one of those "call us for pricing" situations.


No, people do rewrite parts of Unity for their own games; they just don't rewrite it all the way down at the C++ level.

I've known people who worked on games where they hand-rolled a physics engine in C# instead of using the Unity one.


A common pattern for non-indies using Unity is something like this: write your game with the idea that Unity handles the rendering and media stuff; everything else you write/manage yourself in a proper C# codebase.


> write their own tools and extensions (e.g. memory management)

I've been using Unity since 2009, and have been a lead developer and CTO at several different companies. I have never heard of anyone writing their own memory management for Unity. Maybe you're talking about object pooling?


It's kinda what ECS is, depending on how loose you want to get with the term memory management. And studios have implemented their own ECS in Unity long before Unity's official ECS framework.


I'm assuming that the "Squad" you're talking about has nothing to do with the "Squad" multiplayer tactical FPS game developed by Offworld Industries.

As someone not familiar with the KSP ecosystem that threw me for a loop.


[flagged]


Yes.


Why do "evil overlord" always seem to fail at understanding what they bought, especially the fact that the brand isn't worth much, and what are required investments/plans to capitalize it?

Like, buying Palm to resell cheap Android phones, or buying KSP and releasing KSP2 as it is, doesn't seem to make a lot of financial sense. Or does it? It's always assumed "they" make a lot of money by ruining a brand. How, and how much?


not an expert here, but there are thousands of variations of this. The amount of effort and unique-whatever'ness that it takes to bring a product line to profitable maturity involves lots of people, places and things, and most of those are costs. If capital and their attorneys buy the entire operation, with all its layers, the focus changes to the accounts and balance sheet, where lots of happy positive things are mercilessly cut. An unpredictable second part is finding willing buyers for some of those parts. What happens from the consumer's point of view after that may or may not appear to make sense. Communicating with the customers costs money and risks reputation. Quite a lot of the decision making is business-secret, and some of the bad parts stay hidden forever. $0.02 in America


It's a great example of "Quality doesn't produce market-leading returns"

Of which the consequence should be: "Don't turn labors of love into businesses that demand market-leading returns"


Are you suggesting KSP2 is quality? It is decidedly the opposite.

KSP1 however IS, and it has created decent returns.


KSP1 took 5 years of development to get to 1.0, at a non-software company.

Take-Two then bought the game in 2017 and is responsible for ports and KSP2.

So while KSP1 likely generated Squad-magnitude returns with few resources... that doesn't mean KSP2 is capable of generating Take-Two-magnitude returns from a full team.

Dwarf Fortress sold a lot of copies too (finally), but it's also been in development for 21 years!


But their point is that for that to even be a feasible idea, KSP2 would have to be good. It's practically a downgrade from KSP1 in every way except the graphics, and entirely a downgrade if you're willing to install graphics mods in KSP1.


Dwarf Fortress is an indie game. KSP was an indie game until Take2 bought the game.

The expectations for both these games were low. It was a small team working on both of them, so we didn't expect much; and since they're indie, the developers are actually gamers like us who like to play their games.

Being bought by a AAA publisher with billions at their disposal raises the expected quality to unprecedented levels. And they are just showing that they are even worse than the indie team (HarvesteR) that started KSP, despite having the budget to get more personnel if they want.


In a way, this speaks to the power good software engineers hold.

You can have hundreds of millions and still be no match for a small number of software engineers doing what they are good at and enthusiastic about.


I think the graphics of Dwarf Fortress and Minecraft are why those games were so much cheaper to develop, rather than some romantic story about the triumph of the underdog vs some big bad empire. Turns out fun is more important to some than graphics. There are plenty of people who tried to play Dwarf Fortress and gave up because earlier versions were impenetrable.


I'm curious, for someone who's never played KSP, what is the consensus on the last "good" version of KSP? The last patch that came out before being bought out?


Stock KSP1 is superior in almost every way: less buggy, more game features, and a mature mod community. KSP2 has all of that planned, but it's still struggling with very basic gameplay. Problems that were solved a decade ago in KSP1.

Most of the interesting mods have already been integrated into the core KSP1 game. About the only ones worth mentioning are ScanSat (adds planetary observation sensors) and EngineerEverywhere (or something like it; it adds the thrust-to-weight-ratio display info without a part). The graphics enhancements are just pretty pictures that do nothing.


The first KSP game is still a stellar experience, if you pardon the pun - especially if you install a few graphics mods to make it prettier.

KSP2 has a lot of potential, and is very pretty, but unfortunately also requires a pretty beefy machine to run well - and even with a beefy machine, it can still stutter quite a bit. I honestly am not sure if I own KSP2 or not - my Steam library is like most people's, and I often treat game purchases as just a "you did a good thing, here's some money" - but I certainly can't run it on any hardware in my house.


The most recent version/patch level of KSP 1 is fine, it's been a pretty solid game for a while. There's not much reason to play older patch levels unless you want to use older mods that no longer work on newer versions.

I recommend CKAN for installing and maintaining mods for KSP 1.


> Nearly perfect labour of love

I wouldn't go that far. It's not KSP2, but KSP1 has plenty of jank and bugs.


It has jank, but I don't know anyone who stopped playing because of it.


I stopped playing because I could no longer figure out how to install mods. There's a mod manager, but it doesn't seem to work.


CKAN works for me, if it's a different mod manager you use. I've been using it for at least 5 years without issue.


Ckan is the one I tried to use, but the resulting KSP1 installation usually threw a ton of exceptions at runtime. Maybe it's just compatibility issues -- those weren't called out, though.


I see you're not a rocket scientist.


The jank was sometimes part of the fun


The jank was also forgivable when it was a one-man effort or a very small team. Plus it was a relatively novel idea for a game, being created for the first time. It also made no grandiose promises of what was to come, and continually made improvements.

Take Two and KSP2 have none of the same forgivable qualities.


The jank was acceptable because it was supposed to be a simple 2D game that ended up outgrowing its diapers as it grew, and being a Unity project from a long time ago, it was stuck on abysmal physics systems. It couldn't NOT be janky. Everyone always "wanted" a better physics system or a different game engine entirely, because KSP1 was often held back by Unity, but that change would have definitely killed the project.

Then KSP2 is, like, still on Unity? Despite a massive budget and no compelling reason to stay on Unity with the backing of a real game company.


> like no compelling reason to stay on Unity

One benefit of Unity is moddability.

But yeah, that's not really worth the drawbacks for something like KSP.


The Kraken.


Well, kinda. A big company just bought the IP and hired a small "labour of love" company to do it.

The ideas for how to progress the game were great! And players liked them!

But after missing the deadline twice (I assume they lied to Take2 about the funds required), T2 kicked them off the project and poached the developers who did the work (basically "if you still love KSP, come work for us").

And again, it looked great! The improvements they wanted and presented were received positively, as it was basically "KSP1 but MORE! And with MULTIPLAYER!"

But in the end it turned out that passion cannot replace competence and the developers of this game are not very good.


the oh-so-subtle difference between two almost equivalent but in-the-end completely different intentions:

making games (and also making a bit of money on the side)

making money (and having to make some video games so the money will come)


KSP is on GOG, but the enshittification made all reviews one-star:

https://www.gog.com/en/game/kerbal_space_program

I don't know what other companies have screwed up GOG titles, maybe Paradox (like Stellaris?)


The legacy of the game was already ruined by the greed of its owners. Remember when almost all of the original KSP development team resigned at once?


It feels like the Windows Registry is one of those well-intentioned ideas that ended up being a tremendous mess in actual implementation. "Let's use a central database to store things that the OS, drivers, and UI need to access" somehow became "half assed KV dumping ground for every process and their dog to litter with whatever while acting as a singular bottleneck".

See also: https://news.ycombinator.com/item?id=32275078


> "half assed KV dumping ground for every process and their dog to litter with whatever while acting as a singular bottleneck"

Doesn't that describe disk filesystems too? And Unix file namespace in particular (a single hierarchy unifying several block devices, just like registry is composed of several on-disk files)? What about all that junk in one's $HOME?


In particular, taking a leisurely stroll through /etc is quite a treat if you enjoy a well planted and diverse bed of configuration cultures.

Over on one end you have papersize, a file containing a single word with seemingly no relation to any installed software. In httpd.conf we have a sort of SGML that’s mostly line oriented CDATA. aliases and many other mail files are all members of the Berkeley DB lineage of key value stores. default/ feels like it’s also a key value store but one suspects that one could probably put a command in there and something would execute it. rc.d is 99% code but with semantics in the symlinks too (see also Debian’s alternatives.) A very large number of files look like braced C code; named.conf even requires terminal semicolons!

There’s no value in either consistency or diversity of the underlying implementation be it a registry — the registry, or a gconf thing — or a filesystem smorgasbord of config languages. Without the discipline (authoritarianism?) of a social structure — for example a “company” with a hierarchical leadership that can promote/fire you — you will get diversity in any system.

I just reminded myself of the time I used ansible YAML Jinja templates to control EdgeRouter config files that programmatically built config in /etc. Time for a leisurely stroll in a real garden I think, far away from a computer.


No, because the filesystem is one layer of abstraction lower than the registry. There's nothing with less bottleneck to the filesystem besides raw device access. The registry runs on top of the filesystem. If you wish to use the registry for filesystem-like purposes (eg storing startup config specific to the app or user state), just use filesystem. If you wish to use it like a database for system-wide information, that's a better use-case, but the registry isn't quite a proper database.

Windows programs proliferate $HOME junk, too. And that's an issue in its own right, which should be addressed by platform-specific application dirs (e.g. the platformdirs library for python).


> No, because the filesystem is one layer of abstraction lower than the registry.

That doesn't prevent it from becoming a KV dumping ground for every process and their dog to litter with whatever, at all. Not in the least because it's already that, which a cursory look through /tmp and /var supports.

> There's nothing with less bottleneck to the filesystem besides raw device access

I thought there was considerable effort from the Linux kernel team spent on parallelization of inode and buffer management, but if you say that the main bottleneck is the raw device speed then sure, I'll believe you. It's not like the NVMe protocol was designed with 64K command queues, each 64K commands deep, because the OS simply can't saturate the device's bandwidth otherwise, right?


It's developer error. It'd be the same as a game on Linux filling up a conf file or a database with duplicates. Does Linux have mechanisms to guard against that? Didn't think so.


No, but I imagine it's a bit easier to catch an unwanted proliferation of files vs unwanted proliferation of registry keys.


> Does Linux have mechanisms to guard against that?

Sure. ulimit or cgroups can.


I doubt any of those are applied to games or applications on Linux by default.


If you install games via Flatpak, or via Steam which you've installed via Flatpak, they are indeed isolated in `~/.var/app/*`, IIRC.

But this thread is getting distracted. That's a separate issue, and the applications in question can still pollute all they want within their container.


A central key-value store that can be programmatically accessed to persist state across users, processes, and boots, that is also strictly typed and hierarchical is quite useful.

I think it would actually be quite useful to have an /etc/conf virtual file system that could be programmatically accessed by user space processes and used as a "dumping ground" just like /etc already is.


I think the "central" part is where you lose me with this argument. What advantages does the registry have over application-specific KV stores... besides the potential to interfere with other applications and the OS itself?


Where do you store where the application-specific KV stores are located? Are you just going to hard-code it so the application cannot be moved or installed elsewhere or on external drives? Where does the OS store its own settings?

There are a lot of problems with software "shotgunning" their junk across the system. This isn't exclusive to the Registry, or even to a specific OS, unfortunately. Just go look at where configuration files are located across applications on Linux, for example; it is not consistent at all.


Linux is about as consistent as Windows these days. /etc isn’t really a “dumping ground” for everything and is quite static now. Just diffed a snapshot of /etc on my workstation from a month ago and there aren’t really any changes I didn’t put there myself. It’s reasonable to make /etc immutable on a lot of systems; something impossible with the registry.

Home is a bit more chaotic but most applications follow the XDG specs. Mostly. Less so with cache and state files (vscode, for example, dumps tons of cache/state files in .config instead of .cache and .local/state). And weird things like Flatpak shoving everything in .var.

I’d say things generally behave about as well as windows apps which often treat documents as a dumping ground for all kinds of files. I always have to go to pcgamingwiki to find game data locations without having to check half a dozen places.

For administration I really like the /usr and /etc divide. Vendor files go in /usr, and overrides for the running system in /etc. It's useful to be able to peek in /usr to see defaults. You don't really get that ability with the registry. With some more modern setups /etc is bootstrapped from /usr (for example, with systemd-tmpfiles) and you can "factory reset" a system by clearing out /etc (with some asterisks around restoring state for a few of the legacy state files still kept in /etc if the system has manually created users/groups).


>Where do you store where the application-specific KV stores are located? Are you just going to hard-code it so the application cannot be moved or installed elsewhere or on external drives?

I would put that in the same directory as the executable, so I can have more than one executable of the same program; or the path is simply passed as a command-line parameter in the system's unit file.

Compare that to hardcoding the registry key path in the executable...


> I would put that in the same directory as the executable

They're multi-user Operating Systems. Each user needs their own.


Which is how it works in Windows, if you follow Microsoft's own (admittedly confusing) guidelines. Every user has directories for both private and user-visible data, as well as for some half-thought-out feature called "roaming" that I've personally never seen an actual requirement for.

Storing .INI files in the .EXE's directory has been a no-no for many years, and nobody should be arguing for that. But the central registry concept makes even less sense. Use .INIs in your format of choice, and store them in the appropriate user-specific application data folder. Additionally, they should be created when the executable finds them missing, not by the installer.

The latter practice alone solves a multitude of problems, ranging from multiuser support ("Whaddya mean, install for everybody who uses the computer or just me? WTF?" -- your users) to the issue of requiring admin rights at installation time that are not really needed by the program itself.
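The create-on-first-run pattern described above can be sketched in Python. Everything here is invented for illustration (app name, section names, defaults); %APPDATA% is used on Windows, with XDG_CONFIG_HOME as the stand-in elsewhere:

```python
import configparser
import os
import sys

def config_path(app_name: str) -> str:
    """Resolve a per-user config file path for a hypothetical app:
    %APPDATA% on Windows, XDG_CONFIG_HOME (or ~/.config) elsewhere."""
    if sys.platform == "win32":
        base = os.environ.get("APPDATA", os.path.expanduser("~"))
    else:
        base = os.environ.get("XDG_CONFIG_HOME",
                              os.path.expanduser("~/.config"))
    return os.path.join(base, app_name, "settings.ini")

def load_settings(app_name: str) -> configparser.ConfigParser:
    """Create the .INI with defaults when the executable finds it
    missing (not at install time); otherwise just read it."""
    path = config_path(app_name)
    cfg = configparser.ConfigParser()
    if not os.path.exists(path):
        os.makedirs(os.path.dirname(path), exist_ok=True)
        cfg["graphics"] = {"width": "1920", "height": "1080"}
        with open(path, "w") as f:
            cfg.write(f)
    else:
        cfg.read(path)
    return cfg
```

Since the file is created lazily under the user's own profile, no admin rights are needed at install time and each user gets their own copy.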


macOS has a folder called Library where this stuff is supposed to go. It's not enforced by decree, and many apps do their own horrible thing. Ultimately, it's macOS's culture that mostly makes apps put their settings and other resident details in Library.

I'm not knowledgeable enough to know why a culturally enforced folder is far worse than the database that Windows has. Care to enlighten me?


Linux has ~/.local, ~/.config, and ~/.cache for user-specific program files. Sadly many programs don't respect these and pollute your home directory instead.
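For illustration, here's roughly how a well-behaved program resolves those directories: check the XDG environment variable, fall back to the spec's default dotdir (the app name "demo" and the helper are invented):

```python
import os

# XDG base-directory categories with the spec's default fallbacks.
_XDG_DEFAULTS = {
    "XDG_CONFIG_HOME": "~/.config",       # settings the user keeps/edits
    "XDG_CACHE_HOME":  "~/.cache",        # disposable, safe to delete
    "XDG_STATE_HOME":  "~/.local/state",  # logs, history, session state
    "XDG_DATA_HOME":   "~/.local/share",  # user-installed data files
}

def xdg_dir(var: str, app: str) -> str:
    """Return the per-app directory for one XDG category:
    the env var wins, otherwise the spec's default is used."""
    base = os.environ.get(var) or os.path.expanduser(_XDG_DEFAULTS[var])
    return os.path.join(base, app)
```

Putting cache and state in their own categories (instead of .config) is what lets users exclude them from backups.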


Even better are the programs which will stick cache files inside something like ~/.config/appx/mycache . Like a little middle finger from the developer to waste space in my backups.


macOS also has NSUserDefaults which is a similar application (but sometimes developer team?) scoped KV system which is actually just a plist file shoved into ~/Library/Preferences. It's the most similar thing to the Windows Registry in both its intended purpose (small amounts of configuration data) and the ways people abuse it due to its simple interface (storing absolutely everything in it).


I pressume "Program Files" or "AppData" folders are Windows analogy to Library


"AppData" is like $HOME/Library.

"ProgramData" is like /Library.

"Program Files" is like /Applications.


Those are terrible names!


How do you find it? How do you install it? Do you need root/admin permissions to create it? How do you provide access control for it?

There's a lot of value to having something like this "built in" where the plumbing isn't something you worry about. You don't have to worry about the state of your users' file systems (User deleted 'My Documents', User doesn't have a $HOME folder, you don't have write permissions for %AppData%, etc...).

I think if someone went off and redesigned a global KV store for an OS they'd probably require authentication tokens for mutations, so your app would only be able to communicate with the subtrees of the store that it has permissions to (but also, kind of like a file system?)


> I think if someone went off and redesigned a global KV store for an OS they'd probably require authentication tokens for mutations, so your app would only be able to communicate with the subtrees of the store that it has permissions to (but also, kind of like a file system?)

If I had to design an OS from scratch, this is basically what I would do. I'd provide an API for a prefix-tree based KV store with specific data types (eg bytes, utf-8 string, bool, int, float, datetime...) with some mechanism for namespaces and path access controls or the like. Processes would always have their own space to use ad-lib, and would have controlled access to system-wide or cross-process spaces.

This sort of implementation lets you optimize access patterns so that basically each process gets its own KV/db (if it wishes) which should be roughly as performant as rolling its own kv running on the filesystem. Vs all processes all competing for the same data structure, all the time.
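A toy sketch of that idea in Python (all names are invented; a real implementation would live in the kernel or a system service with real process identities, not a dict):

```python
from datetime import datetime

# The strictly-typed value set the design calls for.
ALLOWED_TYPES = (bool, int, float, str, bytes, datetime)

class ConfStore:
    """Prefix-tree KV store with typed values and per-prefix grants."""

    def __init__(self):
        self._tree = {}    # nested dicts keyed by path segment
        self._grants = {}  # process name -> set of allowed path prefixes

    def grant(self, proc: str, prefix: str):
        self._grants.setdefault(proc, set()).add(prefix)

    def _check(self, proc: str, path: str):
        # A process may only touch subtrees it was granted access to.
        if not any(path.startswith(p) for p in self._grants.get(proc, ())):
            raise PermissionError(f"{proc} may not touch {path}")

    def set(self, proc: str, path: str, value):
        self._check(proc, path)
        if not isinstance(value, ALLOWED_TYPES):
            raise TypeError(f"unsupported type {type(value).__name__}")
        node = self._tree
        *parents, leaf = path.strip("/").split("/")
        for seg in parents:
            node = node.setdefault(seg, {})
        node[leaf] = value

    def get(self, proc: str, path: str):
        self._check(proc, path)
        node = self._tree
        *parents, leaf = path.strip("/").split("/")
        for seg in parents:
            node = node[seg]
        return node[leaf]
```

Because each grant scopes a process to its own prefix, an app spamming keys can only pollute its own subtree, not the system-wide namespace.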


I agree with this. Although I would probably make it a pseudo file system and allow normal file reading (you want to dump as JSON? Read /etc/conf/app.json. TOML? /etc/conf/app.toml) as well as something like sysfs (/etc/conf/app/thing). The raw data itself can live at /mnt/conf.

The thing I'm not sure about is how you'd implement secure initialization/registration of a new app on first run.


Interestingly enough, the only thing in this design that's missing from the modern Windows Registry is the generation and usage of per-app principals. The registry already supports a rich, granular permission system and enforced sandboxing for access. In a greenfield design, yeah, I could see your design being quite useful, but adapting and enforcing such a usage pattern on current-day Windows isn't that far off.


Most games that _do_ use the registry use it as a data store that may or may not be accessed by external applications. It provides very reliable pathing (at least at the time of the game's release) for accessing typed data.

For instance, most EA games from the early 2000s standardized storing the CD key in ~HKLM:\SOFTWARE\<Game Name>\ergc. This would let one install the game to any drive and still have access to the CD key.

Games also use the registry as a way to point to where the game is installed. In the worst-case scenario, it's used as a dumping ground for the game's preferences/settings and save states. With the shift to 64-bit and the introduction of WoW64, and most recently the shift to VirtualStore, I would rather nobody ever stored anything in the registry.

I'm currently working on a compatibility shim geared towards games that will redirect winapi filesystem and registry calls to a custom location. Hopefully it'll result in being able to make more portable installs of games that may require access to the registry.


Those sound like exactly the use cases the registry should be used for. However in practice all kinds of state ends up in there that does not need to be accessed by anything other than the game process.


How is that different from accessing $HOME/.config? It does exist on Windows too, but I don't want to Google it.

I believe in Windows there's even $SHARED/.config


I think it's %LOCALAPPDATA% and %APPDATA% respectively.


Using the registry is a pretty simple API call: place X here, read X if it exists, etc.

If you need to start manipulating configuration files, then you need to deal with all the complexities of that.

The registry is great for exactly this sort of config info. And on Windows, HKLM (HKEY_LOCAL_MACHINE) is for system-wide stuff. If it were per-user config it'd go in HKCU (HKEY_CURRENT_USER).
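Roughly what that looks like with Python's stdlib winreg module; the vendor/app key path here is made up, and the calls only work on Windows, hence the platform guard:

```python
import sys

# Hypothetical vendor/app key; per-user config belongs under HKCU.
KEY_PATH = r"Software\ExampleStudio\ExampleGame"

def save_setting(name: str, value: str) -> None:
    """'Place X here' is a single API call."""
    import winreg  # Windows-only stdlib module
    with winreg.CreateKey(winreg.HKEY_CURRENT_USER, KEY_PATH) as key:
        winreg.SetValueEx(key, name, 0, winreg.REG_SZ, value)

def load_setting(name: str, default: str) -> str:
    """'Read X if it exists', else fall back to a default."""
    import winreg
    try:
        with winreg.OpenKey(winreg.HKEY_CURRENT_USER, KEY_PATH) as key:
            value, _type = winreg.QueryValueEx(key, name)
            return value
    except FileNotFoundError:
        return default

if sys.platform == "win32":
    save_setting("Quality", "High")
    print(load_setting("Quality", "Medium"))
```

Note the key path is fixed at build time; the KSP2 bug in the linked article is essentially what happens when a run-dependent instance ID sneaks into that path, so every launch writes a fresh key instead of overwriting one.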


> It feels like the Windows Registry is one of those well-intentioned ideas that ended up being a tremendous mess in actual implementation.

The real issue is that this has been known since the Win95 days, yet some people still don't get it.


As opposed to using the filesystem which is an even poorer KV store?


They are global variables, worth working very hard to block in any project. Separate microservices are the most effective way I've seen so far to stop people in a large org from taking shortcuts via global contexts. I feel bad for our frontend devs dealing with a tide of global constructs in our React codebase.


I still haven't seen a non-handwavy method for orchestrating a transaction across microservices. That means I still can't use them. Although I don't think I ever wanted to particularly.


You wouldn't ever design a microservice architecture that required transactions across them. Every microservice owns its own data.


Maybe it's just the domain, but I guess it means the scope of my microservice is just the whole damn thing anyway. I guess I was doing microservices the whole time.


That seems domain specific. I can imagine some services where transactions across everything are unavoidable, but I've also worked on lots of things where there's different databases and transactions aren't needed between them.

Some times it is a relaxation of requirements though. Some people might want/need account deletion to also remove all the content related to the account transactionally, etc, and if that's a requirement, everything account related must be in a transactionable system.

If you can be more flexible on that, separating account management from other data is pretty common in my experience.


Global objects in React are almost universally an anti-pattern. The only way to do them right is through a context, ensuring the state lives in the context. But most people hate writing the scaffolding, so a non-reactive, non-FP singleton gets put to work and fails to integrate correctly with the React ecosystem.


You still need to store variables somewhere.

The equivalent of "separate microservices" is making sure multiple programs don't share registry keys, and that's already the case 99% of the time for this kind of key.


Well, the file system, too, is a global variable. Concurrent reading/writing of a single file has about the same racy behaviour as concurrent reading/writing of a global variable.


KSP2 had all the warning signs of being a disaster ever since the initial delays and controversies with the original studio. Unfortunately, as usual, the community put on hype blinders until it became impossible to ignore.


And every time anyone mentioned any of that on the subreddit they got downvoted to oblivion, including myself. It's like everyone was (and to a smaller extent still is) on a heavy dose of weaponized hopium because the studio invested half their budget into a prerendered trailer.

Even the youtubers that got to play the prerelease version pulled out every excuse in the book to not present what they saw objectively.


Yeah, I had to unsub from the subreddit during the development years since everyone was eating up the hype videos and any attempt at questioning the lack of actual meaningful info was just downvoted and flamed.

I still check the sub every once in a while though, and it seems like nowadays the hopium/copium is mostly gone. Player numbers are embarrassingly low and even the "most of the game is already done, they just want feedback and will quickly add in the promised features after refining" argument from launch is also undeniably dead with the total lack of new content.

In terms of content creators, I very much appreciated Scott Manley having been up front with saying that he didn't recommend buying the game at launch given that he was THE original KSP-tuber.


Subreddits for corporate products are almost invariably shit. Run either by the company itself, one of their contractors, or a team of corrupt 'community' mods. In any case it becomes a corporate propaganda space. Reddit's usefulness peaked many years ago.


Watching Matt Lowne’s KSP2 videos just makes me feel better about not buying. It’s just buggier and less.

While I was interested in the surface colony stuff they promised for KSP2, the addition of another star system you could travel to was very disappointing. Not so much another system, but that you could travel to it. One thing I liked was that KSP was somewhat grounded in real physics. Practical interstellar travel just isn’t. (“bUt WiTh ALcUbIeRrE dRiVeS…” smack Shut up. They’re a fantasy.)


You can timewarp 10000x in KSP and the Kerbals are immortal. You could travel between stars as soon as you achieved third degree escape velocity.


Well, it's a natural extension to the endgame, and players already liked the mods that did that.


Generative or alternative systems are an extension. Traveling between the stars is fantasy.


I mean an orion drive multi-generational ship could probably do it with established tech, it's not so much a fantasy as it's just hilariously dangerous and expensive. Easier so with kerbals since they don't need to bring any food along and don't have nuclear test ban treaties.


I don't think FTL drives were ever going to be in KSP2. Interstellar travel was going to be via torchships. You can argue _those_ are unrealistic, but they're at least feasible without needing to break the fundamental laws of the universe.


I thought it was warp drives, and possibly stargates. (See the arches with the carvings on Mun). Regardless, we're talking travel time in hundreds of years. Even with time warp, that's pretty boring.


I don't think there was anything official about warp drives or stargates. Supposedly they're planning on having a fast enough time warp even for hundreds-of-years journeys — assuming they ever get that far.


The pitch showed they know what players want

What they did so far showed they have nowhere near the skill to pull it off


To be fair, the users didn't ask for much by and large. They wanted better graphics and good physics with more game world to explore.

That's not really a huge leap. The issue really is, as it is with many things, scope creep. In KSP2's case, it's scope creep to such an extent that they have long forgotten what people loved about the original whimsical rocket-building game.

Like, better graphics really just needed to be better textures, and to make round things actually round. Good physics just needed to be "stuff doesn't randomly explode or become unstable while at rest". They absolutely could have done this... they just... didn't.


> That's not really a huge leap. The issue really is, as it is with many things, scope creep. In KSP2's case, it's scope creep to such an extent that they have long forgotten what people loved about the original whimsical rocket-building game.

I don't think that's the case. They clearly want to re-create the KSP1 things before going into more advanced stuff, the problem is that they can't even do that.

Like, from the pitch it does seem that they know what players want, starting from the get-go with procedural wings, VAB improvements, etc., but from the results it's clear they don't have the chops


I knew it was going to be a disaster as soon it became obvious that it was a continuation of the old code base, which has a lot of fundamental issues.


This is the first time I'm hearing it uses the old code base - do you have a source? I thought in the initial marketing they were big on saying it's a full rewrite.


You'd think if they were using the old code base they'd have more of the old features.


It might not use literal code (although it might), but there's definitely signs that some things like the terrain system were cribbed from KSP1, complete with the limitations of that system.

Which means they've ended up in an awful state where, even if they were competent (which they are not), the best case might not be much better.

You're right they marketed it as a full re-write to avoid the problems ("Slay the Kraken"), but what was delivered was trash.


I'm pretty sure it's not using much if any of the old code given how many bugs it had/has that don't exist in KSP1.


KSP1 is really complete, especially if you are using CKAN and the common mods that people install.

I don't understand what they are trying to do with KSP2 to begin with. Skeptical from the start and it somehow turned out worse.


KSP1 has actually atrophied quite a bit, as many of mods that are essentially necessary for serious play no longer work with the newer versions.


The only complaint I have about modded KSP1 is loading times. I'm surprised the devs didn't properly optimize that part.


That's a reasonable trade-off for making a game that is so extensively moddable.


Not so reasonable. A load with a decent but not crazy number of mods can be 10-15 minutes.


People in both threads shit on the devs as if they manually modified the registry, with a crappy script at that, but this is the official Unity API and the idiomatic way to store preferences.

Of course, it is unfortunate to have a bug here of all places, but it looks like people don't like the KSP2 devs in general (maybe even for good reasons, I don't know) and try to exaggerate this issue as some kind of proof, when in reality there is none


Yeah, I thought it was a bit silly too, and obviously a case of "hindsight is 20/20": "they should have been more careful." They assumed an ID was static and it turned out not to be. I'm not saying the bug would have been impossible to prevent, but things like that will slip through the cracks on occasion.
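For the curious, here's a minimal Python sketch of the failure mode described in the bug report (all names here are hypothetical, not KSP2's actual code): when a per-run instance ID leaks into the save key, what looks like an overwrite silently becomes an append-only log.

```python
import uuid

class PrefsStore:
    """Toy stand-in for a registry-backed preferences store."""
    def __init__(self):
        self.data = {}

    def set(self, key, value):
        self.data[key] = value

def save_pqs_prefs_buggy(store, instance_id, prefs):
    # Bug pattern: the per-run instance ID is part of the key, so every
    # launch writes a brand-new key instead of overwriting the old one.
    store.set(f"PqsPrefs_{instance_id}", prefs)

def save_pqs_prefs_fixed(store, prefs):
    # Fix: key by a stable name; each launch overwrites the same entry.
    store.set("PqsPrefs", prefs)

# Simulate 100 game launches; each gets a fresh "instance ID", the way
# Unity object instance IDs change between runs.
buggy, fixed = PrefsStore(), PrefsStore()
for _ in range(100):
    instance_id = uuid.uuid4().hex
    save_pqs_prefs_buggy(buggy, instance_id, {"quality": "high"})
    save_pqs_prefs_fixed(fixed, {"quality": "high"})

print(len(buggy.data), len(fixed.data))  # 100 1
```

The store never shrinks, so with the buggy keying the data grows without bound on every launch, exactly like the registry hive in the report.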


Not surprised at all; KSP as a franchise is in the enshittification stage, where the aim is to invest the minimum amount of money and maximize returns. This is simply inexcusable software engineering and quality control.

Bit tangential, but I'd like to remind everyone that this is the game where the lead designer (?) said that wobbly rockets and physics bugs add to the fun of the game and are implemented deliberately. This is doomed.


I wouldn't call a game sequel bombing "enshittification." Beloved games sometimes have horrible, buggy, no good sequels.


But it's not just about the game being bad. There is a huge difference between a poor game which was made with an effort and care, and a completely soulless, effortless cashgrab.

This is the latter; that's been obvious from how Take-Two handles things, with the game even changing developers midway through development.


The individual developers do seem to care a lot about KSP2, fwiw. It seems like the problems with the game are more caused by incompetence, bad project management and probably unrealistic deadlines from the publisher.


KSP2 isn't an enshittification play, imo. It's literally just bad project management.

I'll list their sins from my armchair:

- remaking a beloved, established game that has a prodigious base of features and extremely extensive modding support. you have a HUGE hill to climb just to get to your "MVP" - you have to supplant the existing game plus its modding community, and this is already a niche audience of ["people who find orbital mechanics as a primary gameplay loop to be fun"]. They were always going to need to really fucking knock it out of the park on their first at-bat to make this work.

- doing so on a short time-frame

- trying to be all artsy about it too. not that that's a bad thing, but it does position you to take your time rather than going fast. and like I said, they already had a steep hill to climb.

- standard-issue development hell. it happens.

- special-issue development hell where TTWO did some fucky-wucky stuff where they hired away a ton of the staff from the studio they were contracting with, cancelled the contract, and brought it all in-house with the poached team. Hardly an encouraging sign.

- with all the delays adding up, I suspect they were given the ultimatum to either ship their current state into EA and start recouping money, or get scuttled. People have allegedly looked into the code and found extensive additional systems that were basically hastily commented or hacked out so they could ship some vaguely-functional core.

I was really, really looking forward to KSP2. KSP1 but with good graphics, a non-Unity engine (it was always a miracle that Squad had gotten such good large-scale physics out of Unity), and promises of official support for non-Kerbin bases and interstellar travel? Yes please, sign me up!

But honestly, my mental model for how this would be successful was "they'll reimplement the existing base game in a new engine. big task but doable for TTWO's money, it's not indie anymore, and they obviously already understand the product. Then, with the base game ported, people will be willing to buy in EA because they see the promise of 'KSP but more!!!'". And that last bit was going to be critical, they'd need people bought-in if they wanted TTWO to keep funding them / them keep funding themselves.

So when they launched this scrap heap into EA, I knew it was doomed. And look at the, what 8ish months between then and now? They've released a few quaint patches that ignored all the huge issues and done basically nothing else.

I fully expect them to now slowly wind the EA down with a skeleton crew and people will just forget it to an ignoble death. I mean, TTWO can hardly be keen on continuing to pour development funding into this EA, right?


From my armchair I'll add that throwing away the KSP1 engine and replacing it with, based upon the number of bugs, a new implementation of the same basic idea seems like a terrible idea. My understanding is that most of the original Squad team (i.e. the only people in the world with experience building a successful orbital mechanics game) weren't kept on for KSP2. Take Two should have done everything possible to keep them as core KSP2 developers.

All that said, I don't think the game will be left unfinished. All costs are sunk and Take Two has a (reportedly) somewhat functional, nicer looking copy of KSP1 with, presumably, at least base elements of interstellar travel and colony systems in place. It's probably worthwhile trying to get the project over the last few hurdles, as it's a potential goldmine if they can pull it off.


> KSP2 isn't an enshittification play, imo. It's literally just bad project management.

It is also MBA decisions (kicking the game out for full price as an early-access game that was nowhere near close to being ready for that).

The combination of PM and MBA decisions that have screwed it up are definitely enshittification-adjacent, although they may just lack enough competency.


The first point is incorrect IMO. They just needed a really solid core of unjanky physics for an MVP, which was why a sequel was needed in the first place - KSP 1 engine limitations. Couldn't even deliver that.


The root problem with KSP2 is that they needed the core of the game to perform much better than KSP1 in order to pull off the scale of what they intended to achieve. And then they shipped into EA with significantly worse performance than KSP1.


Well, I still believe it's going to be good eventually. Like KSP1, for example, or No Man's Sky. They certainly have both the resources and a guaranteed player base to make that happen.

That said, I cannot be sure of that, so I will not buy it until it is actually good.


> KSP1, for example, or No Man's Sky

Those are two very different examples, though. NMS was improved heavily but in many ways never approached the features and qualities that people were expecting. KSP2 feels more like NMS than KSP1 in that regard - people have expectations. They've been sold a specific vision which doesn't look like the game. Further, unless it's great... why not just keep playing KSP1 with extensive modding?


> NMS was improved heavily but in many ways never approached the features and qualities that people were expecting.

Asking who? From what I've seen, the typical sentiment is that they've gone far past the expectations they set.

For KSP2 I think the main expectations were eventual new content (starting with a good chunk less than KSP1 and adding more later) and better performance. And they sure haven't delivered performance.


> Asking who? From what I've seen, the typical sentiment is that they've gone far past the expectations they set.

I enjoy the game a lot more now, but the original E3 "gameplay" trailer still feels like it overpromises. Comparing the current state to that trailer there are quite a few things that have fallen short:

I've never encountered a planet which was as lush as the one shown initially. You don't find animals packed that closely together. I don't think semi-aquatic animals like the sauropod-looking things in the trailer are supported in the game either. You don't see animals running around in packs like they showed either. The closest thing you see is about five identical animals spawned into the same small area.

The video also shows you being able to fly swiftly above the surface of a planet. I may be misremembering, but I don't think I've ever been able to fly quite that quickly; the bigger difference, though, is that the game suffers from very noticeable object pop-in (it can be noticed even when moving at low speed).

I think you can fly with wingmen now, but I've not tried it.

This might not sound like much, but the existing game just looks far less interesting than the original trailer even if many rough boxes have been checked since then. Also, I'm not even looking at any other pre-release announcements or videos for reference.


Well I really hope neither of us spends too much time worrying about this today but I will respond with some points in addition to the other reply you got.

> Asking who? From what I've seen, the typical sentiment is that they've gone far past the expectations they set.

I'll note that anybody still playing the game probably likes it, and anybody who doesn't like it probably stopped playing years ago (except to perhaps check out new updates and continue to feel disappointed).

Some specific things:

- planets feel lifeless and very samey: there's nothing to really explore because the depth of geology, biology, ecology is not there. I suspect that's why they added so much base building, which was never a significant part of the early hype/marketing.

- even features they implemented feel incomplete and shallow, such as player customization and base building, compared to many other games which do those things better.

- the game was described as allowing the player to navigate an actual star system, and instead it's just a skybox densely littered with pirates and asteroids. If I want that, X4 is a hundred times better. For exploring stellar systems, Elite Dangerous or just SpaceEngine show that it's possible to do well.


> said that wobbly rockets and physics bugs are adding to the fun of the game, and they are implemented deliberately.

Claiming they did this on purpose is just a way of pretending they're good developers. They aren't, and they keep proving it day by day.

Wobbly rockets and physics bugs were fun and interesting in KSP1. What they did in KSP2 is not even close to how those bugs were in the first game. It's inherently worse. You can read a lot of reviews and people will say that even though these bugs existed in KSP1, they weren't as bad as they are in KSP2.


[flagged]


> Cory Doctorow should be imprisoned for coining that lame-ass term.

And who should we prosecute for "lame-ass"?


Cory Doctorow.


seems a bit hyperbolic, especially for a supposed free-speech bastion like HN.


I haven't done Windows native development for over 20 years, but at my time during the 90's we already used the registry rather sparingly. And at this time, cross-platform development was not even in our radars.

I wonder how, in the Year of Our Lord 2023, someone feels comfortable using the Windows Registry like some kind of blackboard.


Well, it's not like they randomly decided to use it. It's the official, recommended API in Unity. You can use it and not even know it uses the registry under the hood (although, at this caliber, you definitely should know such things)


There's the odd game that uses the registry for save files for some reason, which is stupid when you have several PCs and want to sync progress for the odd session. One example is a very solid puzzle game (Lyne; https://store.steampowered.com/app/266010/LYNE/) that used the registry. I see they now have Steam Cloud on the profile page, but back in the day I had to export registry keys to avoid replaying the same puzzles. Not fun.


Honestly, when the brains come together on Hacker News to reverse engineer and unfuck a high-profile videogame, I get unreasonably excited. The guy who fixed the GTAV loading times by identifying a pathological JSON-parsing loop in the cosmetics download was astonishing to me


People doing work on their own free time to fix a corporation's mistakes for them gets you excited?


Yes - it reminds me that despite any overbearing corporatist or draconian protections on games in this very modern "service game" era, people are still clever enough to not only reverse engineer them, but resolve issues that cost some people thousands of hours of pain.

I still game an awful lot; GTAV's loading screens are one of many examples that have just caused me collectively hours of pain. I hate the anti-consumer world in which games are now made, but individual hobbyist brilliance fills me with joy.


He did say "unreasonably"


Adding 322 MB of data to the registry isn't going to be healthy for Windows either.

Registry hives expand but don't contract again, and the registry is effectively held entirely in RAM. You are therefore effectively wasting 322 MB of RAM even when KSP isn't running.


Is there a tool like WinDirStat/WizTree but for the registry?


I'm surprised they didn't just shove it in SQLite and call it a day. It's public domain.

NIH, I guess. Or maybe some old fart in the company doesn't want to let go of their abomination.


You're surprised Microsoft doesn't replace the registry with SQLite in Windows?

If so, you might not be aware quite how many millions of entries are in the hierarchical key-value database that is the registry, and quite how critical read/write latency is.

SQLite can do clever things like multiple indexes and full-text search, but won't beat the Windows registry on the performance front in the narrow use case it was designed for.


SQLite only has a handful of storage classes: NULL, INTEGER, REAL, TEXT, and BLOB, applied per value. The registry has many explicit value types: REG_SZ, REG_DWORD, REG_QWORD, REG_BINARY, REG_MULTI_SZ, and so on.
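A quick Python sketch of what that means in practice: SQLite's typing is dynamic, so each stored value carries its own storage class even in an untyped column, and `typeof()` reveals what was actually stored.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
# The value column has no declared type; SQLite stores each value
# with whatever storage class fits it.
conn.execute("CREATE TABLE prefs (key TEXT PRIMARY KEY, value)")
conn.executemany(
    "INSERT INTO prefs VALUES (?, ?)",
    [("volume", 0.8), ("name", "Jeb"), ("launches", 42), ("raw", b"\x00\x01")],
)
rows = conn.execute(
    "SELECT key, typeof(value) FROM prefs ORDER BY key"
).fetchall()
print(rows)
# → [('launches', 'integer'), ('name', 'text'), ('raw', 'blob'), ('volume', 'real')]
```

The registry, by contrast, forces the caller to pick the value type (REG_DWORD, REG_SZ, ...) up front at write time.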


I remember the registry starting to crap out and get slow at barely a few hundred megabytes. Admittedly that was some time ago, but still.


I remember it used to be an optimization thing in Windows 9x to compact the registry on startup. I am surprised that it's not really resolved.


You can still do it manually, but you need a bootable disk to do so:

https://learn.microsoft.com/en-us/troubleshoot/windows-serve...


Somewhere, there are Microsoft devs WTFing over crash reports indicating the registry is over a gig.



Unity's default save system "PlayerPrefs" saves to the registry, but I find it difficult to see why this is a good idea for anything but global settings (e.g. graphics settings). Serialization to JSON is a fairly simple alternative.

Why would this be used for player data in a production game? It's frustrating to port PlayerPrefs data between systems for testing purposes. Is there something I'm missing?
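For illustration, a minimal sketch of the JSON-file alternative (in Python rather than C#, with made-up paths and names): write settings atomically to a user-data file and merge with defaults on load, which also makes the data trivial to copy between machines.

```python
import json
import os
import tempfile

def save_settings(path, settings):
    # Write to a temp file in the same directory, then atomically
    # rename over the old file so a crash mid-write can't corrupt it.
    fd, tmp = tempfile.mkstemp(dir=os.path.dirname(path) or ".")
    with os.fdopen(fd, "w") as f:
        json.dump(settings, f, indent=2)
    os.replace(tmp, path)

def load_settings(path, defaults):
    # A missing or corrupt file falls back to defaults; saved values
    # override defaults key by key.
    try:
        with open(path) as f:
            return {**defaults, **json.load(f)}
    except (FileNotFoundError, json.JSONDecodeError):
        return dict(defaults)

demo = os.path.join(tempfile.gettempdir(), "ksp_settings_demo.json")
save_settings(demo, {"quality": "high"})
print(load_settings(demo, {"quality": "low", "volume": 1.0}))
# → {'quality': 'high', 'volume': 1.0}
```

Unlike registry-backed PlayerPrefs, the resulting file can be diffed, versioned, or synced by something like Steam Cloud.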


As a teenager I played so much KSP that my grades suffered. I played KSP2 a few days ago and did a moon landing; the game has a long way to go, but the experience wasn’t nearly as bad as the reviews suggested.


It’s improved over the last 6 months or however long it’s been. But it’s still a shadow of KSP1, while KSP2 was sold with breathless discussions of interstellar ships, colonization, and a bunch of stuff that would really have made it the next generation of KSP. Instead it’s still a year or more from parity, with a laughable cadence of improvements: minor patches coming out every two months. This is after they repeatedly slipped their early-access deadlines.

I think the community and reviews would be different if the parent company hadn’t outright screwed the original developer of KSP, which was a beloved game. The drama around that poisoned a lot of community good will, because the original developers clearly had a love for the game and the community and made something unique, then were unceremoniously dumped and cut out of what was supposed to be a well funded effort to build KSP “right.” Then they release this as “right” and it’s a huge disappointment.


This is yet another example on how, while Windows can have its problems, 90% of them are other developers treating the system as their personal dump truck/litterbox

(as per another comment)

> having the Pqs preferences based off of the instance ID of the pqs object, which, according to unity's own documentation, changes between runs of the game

This looks like someone copy-pasted something off of Stack Overflow (wouldn't be the first time)


This is amazing. It shows that a developer team from a triple-A publisher with billions of dollars at its disposal is simply incompetent. The one-man team headed by HarvesteR, later joined by some other devs, was much more competent, and that was an indie game.

Take2 killed the KSP franchise and they continue to do so. From less and less frequent updates (KSP2 has received only 3 "major" patches since it was released in February, with 2 months between updates), to useless hotfixes that take a week to deploy to fix a single bug, to taking 6 months to try to find out why the rockets are wobbly, to releasing into Early Access a game that wasn't even ready for EA. It was an alpha prototype, completely broken, milking the franchise's fans for $40 for an INCOMPLETE PROTOTYPE (to be raised to $70 once the game leaves EA). I wish they would just abandon KSP2.

KSP1 with its modding community has produced so much and it's a much better game than its sequel. The charts don't lie, right now 1k players on KSP1, 67 on KSP2.

Now someone's going to tell me "this is not how game dev works". Well. They have billions at their disposal and a predatory publisher on their back. Their dev team includes a lot of modders from the KSP community, whom I don't particularly blame for not delivering, but Nate and the team that Take2 poached from Star Theory (because the latter didn't agree with the time frame Take2 gave them to release a working prototype) are really incompetent.

They are "building the game from the ground up" using the same engine as its predecessor, and amazingly they ran into the exact same bugs that KSP1 fixed some 8 years ago. This isn't even newbie game dev; it feels like they're just a bunch of people who have only started learning how to write software.

I have an opinionated website tracking their lies and deceptions (based on Web3 is Going Great), if anyone is interested: https://nokerbal.space


Is it common to use the registry essentially as a database? 300mb is way more than I expected that thing to support in a single key, but I admittedly know nothing about Windows. I've only ever seen it used to store booleans or simple string values.


I just exported the registry on my personal windows machine to see. It was 400 MB as a .reg file and zipped down to less than a tenth of that.

I'm not sure what's common, but clearly not many programs are dumping large data into my registry.


Makes sense. One of the simplest ways of saving data in Unity (but not the most correct) is saving values via PlayerPrefs[0]. On Windows, this saving method defaults to the registry instead of creating a separate save file in the game folder. If you don't build your own system for storing variables, or keep everything cached and then use a smarter way of saving data, it can get out of hand fast.

This was way more common in the old Unity 5 days, though

[0]https://docs.unity3d.com/ScriptReference/PlayerPrefs.html


Everyone seems to love shitting on KSP2, while the same people forget what KSP1 was like in alpha: no science, shitty graphics, excessively wobbly rockets, inconsistent orbits... (other than the graphics) sound familiar?

It took a looooong time for KSP1 to get where it is now; it will also take a while for KSP2 to be as polished.

Chill out.


KSP1 was made by a bunch of scrappy, lovable indie devs making their first game (IIRC Squad's team was a marketing company prior to this?) in a LCOL area, funded by dreams and whatever $10 early-access sales they made on Steam, with no one to tell them what to do.

KSP2 is made by TTWO, a publicly traded AAA studio of notable fame with decades of legacy and domain expertise.

If Squad sold KSP1's IP to TTWO and then went and made "KSP2" under a new name with their squad-sale money, that would be a different story. But even with that goodwill, I would still not be believing in the current KSP2 ever being a success story.


> KSP2 is made by TTWO, who is a publicly traded AAA studio of notable fame and decades of legacy and domain expertise.

KSP2 was made by indie devs (Star Theory) hired by T2

They just... failed. They negotiated a budget, failed to deliver, negotiated another, bigger one, failed to deliver again, and T2 said "fuck it", took the IP from them and created a studio to develop it.

ST actually wanted to sell itself to T2 after that disaster, but T2 basically chose to poach ST's developers instead, because why would you buy a company whose management has now failed twice to deliver on promised goals.

My guess is ST either bit off more than they could chew or purposefully lowballed T2.


But the people they poached included the management which was perhaps the worst decision that T2 made when they created Intercept Games.

Furthermore, looking at the demos presented in 2019, it's not clear that much progress was made between then and now.

The complete stall since early access release (7 months with just a handful of hotfixes) and lack of feature delivery suggests that IG don't have the competency to deliver.

That, or they moved all their staff to their other "unnamed title" they're hiring for and have KSP2 on the most minimum of developers to pretend they're making progress toward their roadmap.


> But the people they poached included the management which was perhaps the worst decision that T2 made when they created Intercept Games.

Huh, I didn't know that. I only heard that "most" people came over.

But yeah full price EA into absolute shitshow of a game doesn't bode well.

> That, or they moved all their staff to their other "unnamed title" they're hiring for and have KSP2 on the most minimum of developers to pretend they're making progress toward their roadmap.

That would be weird; what makes the Take2 management think they can deliver something else competently?


Being an original Alpha owner, prior to Steam even, I think many people forget it was ~$10 and that came without any overpromising by the development studio.

The two releases are nothing alike and if KSP 2 intended to use the same release model it should have either been dirt cheap or not released for many years. As it stands now it comes off as a massive grift and deserves the hate.


KSP2 was sold as a way to fix the mistakes and limitations of KSP1. It's a given that people would expect KSP2 to be on a better trajectory rather than repeating the same mistakes, overpromising and overcharging.

We could even just look at reentry heating, which was supposedly almost done, over 6 months later and even just that is nowhere to be seen. If that's taking them so long, how are they ever going to deliver the rest of their promises? Many of which there are serious technical questions about the feasibility of (eg multiplayer).


Except KSP was cheap (I got it for $9 IIRC, which was good value) when still in beta. KSP2 is $50 for the same level of beta, after years of development and drama. Don't chill, don't buy this crap until it gets better.


When KSP 1 shipped its alpha, there wasn’t a perfectly functional KSP 0 it was competing against. From what I’ve heard, they’re using the KSP 1 engine (restricting how much better they can make things) but are somehow shipping with many of the KSP 1 features missing. So it’s kind of a Worst Of Both Worlds situation.

Until KSP 2 achieves feature parity with KSP 1, I don’t see any reason why I should buy it.


Seems like the developers forgot too, since we have 10+ years of lessons learned and here we are starting from the same place making the same mistakes all over again.


KSP2 can't be developed forever.

KSP1 had (AFAIK) a cheaper dev team with no demanding publisher and a good reputation that brought a steady stream of money during alpha. If they progress at the speed of KSP1 from this point, they are going to get cut.


So you're saying the KSP2 devs had the benefit of being able to look at the past mistakes and issues of their predecessor, and still failed to learn from it?

Stop defending the billion dollar corporations, please.


I don't give a shit about T2, really. I only care about the concept of Kerbal Space Program, the mere fact it is getting attention at all is shocking, let alone from a AAA game studio (for better or for worse there). The heads of some of the best mods of KSP1 are in the dev teams, it seems to be getting the attention it needs to succeed in the long run.


But what's the end game for KSP2? Where KSP1 is today?

Speaking from a software development perspective, I don't see it improving beyond that point, if that's even possible at all. For all that KSP2 promised to improve on KSP1's physics engine, it has seemingly come up with a foundation that is significantly worse. I'm not sure what you can do when your underlying engine has severe limitations.


I forgot KSP2 was made by an indie studio /s


It was (although hired by Take2 to make it)

they failed


If you're contracted by a publisher to make something, you are by definition not an independent studio.


Independent means not being owned. That's all.


I went to download the attached .txt file and actually saw Firefox's download pie gradually filling. I thought, how big is this single registry key‽ 344 MB, that's how big!


I'm so grateful Unix did not implement its own registry. Files all the way down.


Isn't gsettings/gconf/dconf something like registry? Gnome tends to implement worst stuff from Windows and Mac after all...


Really? I see it as a mess in the whole house instead of a mess in one room only.


windows is a collection of thousands of rooms each with their own mess


After doing some profiling when that game released, I'm not surprised. The game lacked an LOD system, and 70% of frame time was spent on terrain rendering (IIRC it also generated the mesh from a texture every single frame).


Sorry for the language, but wow, this is incredibly stupid. How did this get allowed?


Look at the reviews for KSP2. This is not even one of the larger problems with the game. They released into EA with terrible, nearly unplayable performance... on an RTX 4090! A $1600-2000 GPU that at the time was difficult to obtain even if you had the cash.


More saliently, they did all that and also set a full retail price on the EA game.


It gets worse because they don't consider the current $50 price to be 'full price'. They claim it's the discounted EA price and full retail would be higher.


I didn't think I could be more annoyed at them, but I guess I was wrong! -_-


Should we expect $500? $50k?


Given the publisher, I think you can only expect it to cost 2K.


That kind of performance is fine for a pre-alpha.

It's pretty clear the dev team did not want to push it out so soon. The game was not ready for Early Access. But out it went anyway, and here we are.


A comment says a dev tool got included by mistake.


Is this "not corrupting the registry" really a thing that people are worried about when you're using Windows' APIs to access it?


Who still uses the Windows Registry in 2023?


Unity does, in their official API :)


that is pretty sad in 2023


“Be warned though, deleting the wrong thing can stop applications from working, or even break the entire operating system.“

You gotta love Windows.


Yes, because if I go to /etc or /var and delete some random thing, Linux is just magical and nothing will possibly break.

Cut the crap


No, you cut the crap.

/etc can be entirely read-only to most apps; many of them also either won't run as root or drop permissions right after loading.

That is not the case for the registry, so there are far more ways to fuck up.


To delete system-wide things in the registry you will be prompted to escalate to admin privilege. It's usually one click.

To delete things in /etc you will be asked to gain root privilege. It's usually one click (You might be asked to confirm your password).

Deleting things when admin/root can break things in both cases.

Let's just admit it and move on.


Hold on, you're pivoting.

The paragraph you quoted is, in context, warning the user about MANUAL intervention in the registry, which requires privileged access. So we aren't comparing it to "what can a random app do", but what a super user can do.


Read my answer below


This is the same case in any OS when you're deleting things with root privileges.


In macOS you can't delete system files, even with root privileges, unless you disable System Integrity Protection, which requires a restart.

Not the same at all.


In macOS a developer is like 90% likely to already have SIP disabled, because if at any point they had to do even one single thing that SIP didn't like, they would have had to completely turn off all security for the entire operating system.


Where did you get that percentage from? It's very likely the other way around. Almost no one, developers included, has SIP disabled.

If you need kernel extensions you should look for a different solution, at least on the Mac.


Little Snitch required a kernel extension until the Network Filter API was introduced. Homebrew required SIP to be disabled when it was first introduced but that's almost definitely fixed now. Some other software like certain hypervisors could have required it. I just think it's more likely for someone to disable SIP than find a different program that doesn't mind it, and those programs were all over the place when it was first introduced. Less so now, I suppose.

> If you need kernel extensions you should look for a different solution, at least on the Mac.

In 2023 sure.


Just because current OSes suck at protecting themselves, that doesn't mean it is acceptable.


I think the only thing worse to do with the registry would be using it as savegame storage.

Holy fuck those guys are clueless, no wonder KSP2 launched in such shit state.



