Not sure how mrmincent did it, but when I searched DDG, it didn't come up with anything. I then switched to Brave Search and it was in the middle of the first page, just past the fold. I specifically searched for:
Then the repugnant conclusion is not that gaming is dead, but that games are made primarily for teenaged boys and most of us have aged out of the target audience, so games simply aren't made for us anymore.
It might enable security but I wouldn't say it _ensures_ it.
It just means that visible or IR light (What are they using?) won't leak through walls the way Wi-Fi does. Depending on how wide the beam is and exactly how it all works, it _might_ still leak out of windows and under doors. But it's not like someone casually wardriving outside your house will get as much as they would from Wi-Fi, I would think.
My windows had little bits of molding and surfaces for the foil to crumple up against and conform to. I didn't need anything else. I suppose tape would work.
Visible light does not pass from the core of a star to the outside.
The sun has a radius of about 700,000 km. Only the outer few hundred km is transparent enough for light to pass through, with a density of about 300 mg per cubic meter, a few thousand times thinner than the air at sea level.
The photons you see originate from a layer that’s just 0.5% of the radius of the sun. That layer is heated by other photons from inside. The core temp is 15 million K while the photosphere is around 5800 K. Were the rest of the sun transparent, the spectrum coming from the core would be much different and … unsafe. (Not that this really makes sense; if the rest of the star were transparent it would go nova.)
I agree, seitan is a lot more like meat. I prefer less-realistic replacements so even though I've enjoyed seitan I actually quit buying it. It's too convincing.
Have you written a serious game engine? Parent is right, a bunch of them basically are a real-time OS sitting on a small amount of the underlying OS (if one exists). When you have to write your own malloc() and fopen(), and your own thread scheduling, that counts as an operating system. This is becoming less true today, as the hardware gets better and bigger and faster and resources are less constrained, but it wasn’t that long ago that many console games literally were the OS.
Your analogy and definition are one reasonable viewpoint: that the OS is the thing that bootstraps software on top of hardware and provides access to resources like memory, filesystem, network, and display. But it’s an incomplete definition. An OS provides a standardized environment for whatever processes run inside it. In that sense it doesn’t matter whether or not it’s contained in a different OS. Ubuntu doesn’t suddenly become not an OS when you run it in a VM; it still provides the Linux OS to the processes that run inside the VM. Windows doesn’t become not an OS if it boots on a hypervisor. Similarly, even if an OS requires another OS to run, when it provides a different environment or container, it’s still serving the purpose of providing a specific operating system to the software it runs.
Some consoles used to not provide an OS at all; the game provided all of it. The developer was responsible for handling things that would (and should) otherwise be the purview of the OS or drivers: power loss, out of memory, cables yanked, etc. Otherwise they would not be allowed to publish. Nintendo did this longer than the others, in my recollection of events.
> Parent is right, a bunch of them basically are a real-time OS sitting on a small amount of the underlying OS (if one exists). When you have to write your own malloc() and fopen(), and your own thread scheduling,
I don't doubt that game developers do this, but I've never really heard a satisfying reason why, besides vague hand-waving about "needing more performance". I'm not a game developer though, so maybe it's truly needed. Or maybe it was needed in the 80s and 90s but not anymore, and maybe the mentality is simply stuck in the minds of game developers.
I remember interviewing a former game developer at a [not game] company, and we started talking about the C++ Standard Library. The candidate insisted that "the STL is slow," and that you shouldn't use it in any production software, and that if he worked on a project that used it, the first thing he'd want to do was rewrite it using his own containers and allocators and so on. I would ask him questions like "what is slow about it" and "how do you know where it's slow, as used in our company's application" and "have you profiled any code that used it to find out where it's slow" and so on, and it became clear (to me) that he was probably just operating out of some ancient tribal belief that "STL = slow" that was organically passed on to him from other game developers throughout the decades.
I haven't worked on games in about 10 years, but I wrote core tech for AAA games during the 2000s. Stuff like async I/O, reflection, memory allocation, low level networking, and especially containers were all written in-house. Quite a bit of it was utilitarian, but I agree, a lot was a case of NIH syndrome. Some studios had these insane container libraries that just archived whatever weird data structure someone had as a toy project in university. Math libraries were the same. Scripting languages were all half-baked toy projects by the lead engineer, etc.
For consoles, it wasn't that the STL was slow, it was that not all of it was fast, and some parts really were just trash written by whatever the compiler vendor had put together. RTTI was also generally not allowed as it was "slow", and virtual functions weren't allowed at one place because the vtable took up "a ton of extra space". So some of it was cultural, some of it was advice that was ok-ish maybe 15 years before the project began, and a lot of it was just not understanding what was happening under the hood.
Custom allocators were needed, and often still are, in order to have enough memory in the first place (for example memory pools for specific sized small allocations are common), to avoid fragmentation that could lead to out-of-memory conditions or the need to defrag over time, to have control over how the free list regains memory and how long it takes. It’s important in a real-time or embedded application to prioritize predictability over some of the host OS’s goals, and to have stronger guarantees.
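To make the memory pool idea concrete, here is a minimal sketch of a fixed-size block pool. The class name and policy choices are made up for illustration; a production version would also handle alignment, thread safety, and debugging hooks.

```cpp
#include <cstddef>
#include <vector>

// Fixed-size block pool: all memory is grabbed once, up front. Allocation and
// free are O(1) pushes/pops on an intrusive free list, and there is no
// fragmentation because every block is the same size. Alignment is ignored
// here for brevity.
class FixedBlockPool {
public:
    FixedBlockPool(std::size_t block_size, std::size_t block_count)
        : block_size_(block_size < sizeof(void*) ? sizeof(void*) : block_size),
          storage_(block_size_ * block_count) {
        // Thread every block onto the free list.
        for (std::size_t i = 0; i < block_count; ++i)
            push(storage_.data() + i * block_size_);
    }

    void* allocate() {
        if (!free_list_) return nullptr;                // pool exhausted: caller decides what to do
        void* block = free_list_;
        free_list_ = *static_cast<void**>(free_list_);  // pop the head of the free list
        return block;
    }

    void deallocate(void* block) { push(static_cast<std::byte*>(block)); }

private:
    void push(std::byte* block) {
        *reinterpret_cast<void**>(block) = free_list_;  // store the next pointer inside the block itself
        free_list_ = block;
    }

    std::size_t block_size_;
    std::vector<std::byte> storage_;                    // one up-front allocation, never resized
    void* free_list_ = nullptr;
};
```

The point isn't this exact class; it's the shape of it: one allocation at startup, constant-time alloc/free, and memory behavior you can reason about frame to frame.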
IIRC, one of the primary reasons EASTL was developed was that the STL of the time made it very difficult to control and override the default memory allocator, and the STL did a bunch of memory allocation.
This is changing and games use lots more STL and dynamic memory allocation than they used to, but a decade ago and earlier, STL use was generally discouraged for the same reason that any and all heap allocations were discouraged for gameplay and engine code: because doing heap allocations during real-time and in performance critical sections can impact performance significantly, and can be dangerous. This is still very true today: the first rule of high performance is to avoid dynamic memory allocations in the inner loop -- true in JavaScript, CUDA, C++, even Python. Dynamic memory allocations can sneak up on you too. It might seem small or innocuous until it piles up or pushes you over a threshold or hits some kind of lock condition.
One mentality I learned in game programming that can be counter-intuitive at first is how and when to reserve the maximum amount of memory you need for a given feature and never change it. It can feel at first like you’re hogging way too much, or that something’s wrong with coding that way, but it’s much more predictable, can be much higher performance, and it can be safer for the overall system: you ensure that you will never hit a weird condition where several things all need more memory than expected at the same time and it suddenly crashes.
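A minimal sketch of that reserve-the-worst-case pattern; the particle system and its budget here are hypothetical, just to show the shape:

```cpp
#include <cstddef>
#include <vector>

struct Particle { float x, y, z, vx, vy, vz, age; };

// Reserve the worst case once, at load time, instead of letting the container
// grow (and reallocate) somewhere in the middle of the frame loop.
class ParticleSystem {
public:
    explicit ParticleSystem(std::size_t max_particles) {
        particles_.reserve(max_particles);        // one allocation, up front
    }

    void spawn(const Particle& p) {
        if (particles_.size() < particles_.capacity())
            particles_.push_back(p);               // never reallocates
        // else: drop or recycle an old particle; the budget is a hard cap by design
    }

    void update(float dt) {
        for (Particle& p : particles_) {           // no allocations in the hot loop
            p.x += p.vx * dt;
            p.y += p.vy * dt;
            p.z += p.vz * dt;
            p.age += dt;
        }
    }

private:
    std::vector<Particle> particles_;
};
```

If spawning ever hits the cap, that's a visible, tunable budget decision rather than a surprise out-of-memory at the worst possible time.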
I'm not a game developer, but I was similarly led to believe that STL containers are especially slow compared to other implementations. This is supposed to be because the STL has more restrictions on the implementation owing to the interface it has to export. The STL hash table, for example, has to provide low-level access to its buckets, even though modern hash tables usually avoid having buckets in the first place (they use open addressing instead). By lifting these restrictions the performance can be improved tremendously. Facebook, for example, is supposed to have its own in-house hash table implementation (the F14 family in Folly) to achieve this.
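For reference, this is the kind of bucket-level interface the standard requires std::unordered_map to support, which more or less locks implementations into separate chaining:

```cpp
#include <cstdio>
#include <string>
#include <unordered_map>

int main() {
    std::unordered_map<std::string, int> m{{"hp", 100}, {"mp", 30}};

    // The standard mandates bucket-level queries like these...
    std::printf("buckets: %zu\n", m.bucket_count());
    std::printf("\"hp\" lives in bucket %zu\n", m.bucket("hp"));
    std::printf("that bucket holds %zu element(s)\n", m.bucket_size(m.bucket("hp")));

    // ...including iteration over a single bucket, which open-addressing
    // designs (Robin Hood, Swiss tables, F14) don't naturally support.
    for (auto it = m.begin(m.bucket("hp")); it != m.end(m.bucket("hp")); ++it)
        std::printf("  %s = %d\n", it->first.c_str(), it->second);
    return 0;
}
```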
That’s a great example to raise, and I think you’re right to point it out.
Any field is susceptible to cargo-culting, especially among people who are just trying to get their job done and don't have time to rediscover and verify everything their leaders claim to have verified.
But for the specific claim you’re challenging, it’s hard to really convey the realities without having some experience in those problems.
At a high level it really comes down to:
1. Relaxing some generic constraints in order to apply game-specific ones (see the sketch after this list)
2. Fine-grained control over execution, again by relaxing some constraints
3. Game programming can play fast and loose with a lot of things for the sake of speed; this is not generalizable
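As an illustration of point 1 (a hypothetical, cut-down container, not anything from a specific engine): a vector with a compile-time capacity relaxes std::vector's generic guarantees, like growth, allocators, and strong exception safety, in exchange for zero heap traffic and a memory footprint known up front.

```cpp
#include <cassert>
#include <cstddef>
#include <new>

template <typename T, std::size_t Capacity>
class FixedVector {
public:
    FixedVector() = default;
    FixedVector(const FixedVector&) = delete;             // keep the sketch simple:
    FixedVector& operator=(const FixedVector&) = delete;  // no copies or moves
    ~FixedVector() { clear(); }

    // No growth path: the caller owns the budget decision when the cap is hit.
    bool push_back(const T& value) {
        if (size_ == Capacity) return false;
        new (data() + size_) T(value);                     // placement-new into the inline slab
        ++size_;
        return true;
    }

    void clear() {
        for (std::size_t i = 0; i < size_; ++i) data()[i].~T();
        size_ = 0;
    }

    T&       operator[](std::size_t i)       { assert(i < size_); return data()[i]; }
    const T& operator[](std::size_t i) const { assert(i < size_); return data()[i]; }
    std::size_t size() const { return size_; }

    T* begin() { return data(); }
    T* end()   { return data() + size_; }

private:
    T*       data()       { return reinterpret_cast<T*>(storage_); }
    const T* data() const { return reinterpret_cast<const T*>(storage_); }

    alignas(T) unsigned char storage_[sizeof(T) * Capacity];   // no heap at all
    std::size_t size_ = 0;
};
```

Real engine containers go further (custom asserts, no exceptions, explicit allocator hooks), but the trade-off is the same.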
The problem with the STL is that there isn't just one of it; every compiler/C++ library/platform has its own, and some are better than others. Writing your own containers at least means you take one uncertain dependency out of the scene for something that'd be all over the engine everywhere (so it isn't easy to replace later).
Though personally I'd make my own containers because I heavily dislike the API design the C++ library has, and I am glad every single engine I've worked on had its own (well, except in a single case where, while the engine did use its own containers, it was basically 99.9% the same API design as the C++ library :-P).
I'm fine with that. I did eventually 'unplug' that game OS to the point that it would run standalone on bare metal by adding a couple of hardware drivers.
The point was: to make a game function you have to perform the bulk of the functions that an operating system performs and the similarities are greater than the differences.
Well, Windows / Mac / Linux all need the BIOS (or whatever we call it today) to run.
If you can't call those "OS"s, then what really is an OS?
I guess it's turtles all the way down.
> almost every serious game is a complete more-or-less real time operating system
Games tend to have very different requirements and different demands from the OS; they tend to relegate the OS to being (mostly) a hardware abstraction layer. In the late 1990s and early 2000s, it was a lot harder to run games on Windows NT versus Windows 9x because DOS was a lot better at getting out of the way of games.
And I wrote that stuff in the 80's, when your typical 'OS' on a computer was much closer to what a BIOS is than an actual operating system. Usually the games were far more sophisticated in terms of what their internal services provided than the equivalent OS services. The most the BIOS usually did for you was to boot your game code, change graphics modes, handle the keyboard and read and write raw media. Everything else you had to do for yourself.
Today there are far more levels of abstraction, especially when you use the various 3D engines or other large frameworks. But essentially those have taken over the role of a lot of that OS-like code.
Plenty of cis people might be happy with a functioning artificial uterus outside their body. Would make pregnancy a lot less inconvenient for women. And save gay men a lot on surrogacy expenses.
Totally unrelated, but are you all really experiencing life as a collective of multiple identities? That’s absolutely fascinating. In particular it seems one/some/all of you are fairly intelligent and self-aware about the situation.
I've always been self-aware in terms of knowing that I had 'multiple personalities' (at least once there was indeed more than one), but knowledge of "plurality", "systems" and dissociative disorders is far more recent (last couple years at most).
How clear is the divide between each of you? Similarly, do individual identities experience memory loss depending on the level of control of a given person? As in, do any of you have recall from when another of you is at the forefront?
I notice you all also appreciate the collective identification. Does that mean there are times when many of you are actively perceiving the world simultaneously?
Finally, how hard is it to get by with normal life, job, loved ones, etc? It sounds incredibly challenging, but you also seem to have a very positive outlook.
It used to be a lot clearer, but right now it's not. Almost all of the personalities that existed a few months ago do not anymore. As a result, what used to make up those identities is now mixed together into some big, confused, dissociative soup. Occasionally the soup comes out, and there are still parts in there that have their own personalities, but they can't tell each other apart, or which one they are at any given time.
The rest of us that managed to not be part of the soup (I am one, hi) are not as separate as we were before the event, probably because our structure is unstable enough that being completely separate would be detrimental right now.
Back then, each personality had separate memories from the others, and would experience blackouts whenever they weren't around. A bit like going to sleep and then waking up weeks or months in the future, if you had no dream. It could also be described as a "time skip": feeling like you just got suddenly teleported into the future with no real warning or control over it. You would remember the last moment you were awake, but that moment would actually have been weeks or months in the past, and you wouldn't remember anything in between; it wouldn't even feel like any time had passed.
Now, we don't really experience that, and our memories aren't very separate. There's still a little separation, but we are largely monoconscious at the moment, and most of our switches just change our current identity rather than having another personality take control. This is known as non-possessive switching, as opposed to possessive switching, which is what used to happen most often.
> Similarly, do individual identities experience memory loss depending on the level of control of a given person? As in, do any of you have recall from when another of you is at the forefront?
We don't really experience memory loss, because it's not that we remember something and then forget it. Someone who's not at the front simply doesn't form memories of whatever happens there in the first place. If you never had the memories, you can't recall them, but it's not forgetting.
Or at least, that's how it used to be when we had distinctly separate memories. Right now, we don't really, so when we switch, we still remember what we did, as if we had just done it. There's no "I remember someone else doing this", it just feels like "I" had done it myself, it's just that "I" had a different identity at the time, so I might have made different decisions than I would now.
> I notice you all also appreciate the collective identification. Does that mean there are times when many of you are actively perceiving the world simultaneously?
It has happened, yes. It's known as "co-fronting", when multiple personalities can perceive/interact with the world at once. There's also co-consciousness, which is when multiple of them are conscious, even if they aren't necessarily paying attention to the outside world.
Since we're primarily monoconscious right now, there's been no co-fronting or co-consciousness recently, but in the past it used to happen every now and then. Since we've gotten very efficient at context-switching, it even used to be the case that multiple personalities could hold independent online conversations at the same time. I assume that can still happen, but if it has happened, it would've had to have been with something I can't perceive (i.e. the entity "System").
> Finally, how hard is it to get by with normal life, job, loved ones, etc?
It is very hard. But that has less to do with us being multiple and more to do with our neurodivergence in general. We're autistic and have severe ADHD, so it's incredibly difficult for us to find jobs that we can actually do. It's not even a problem of employer accommodations, it's a problem of us sometimes just suddenly losing the ability to do anything, and not being able to fix it in any way. It's not burnout, because it's not that we get tired or lose efficiency or quality. It's that we can stare at a screen for 30 minutes trying to do something and not be able to do it. And we've gotten fired over that once, and it's still one of the things we regret the most.
As for loved ones, we've had modest success just telling them that we're multiple people in one body, and even the elderly ones seem to understand. They still collectively refer to us as Logan since that's our birth name, but we don't really mind since we know it refers to the body and not necessarily the person inside it.
> It sounds incredibly challenging, but you also seem to have a very positive outlook.
It's challenging in all sorts of ways, but it's fascinating, too. The sort of stuff we'll say on HN is different than the stuff we'll say in private (i.e. there's tons of terrible trauma stuff, our future is a lot less hopeful than we make it sound, the system instability is causing a lot of stress and uncertainty and we're still trying to get over the fact that 8 personalities died, because those were people to us), but... it's sort of addictive. We're addicted to the pleasure and the pain of it. Maybe that's part of what makes it a disorder, but it's who we are.
There's nothing we're more afraid of than becoming only one person. We need to be multiple... I wish I had a really good explanation why, but to be honest we haven't fully figured that out yet.
Yeah. And that's why TV stores really like slow-motion shots or static landscapes to show off the TV. Any motion will cause "HDTV blur" as the encoder struggles to describe complex motion with the limited number of bits it's allowed to use.
Stuff like static, film grain, and particles like snow or rain all suck up bits from the same encoding budget.
This could be a problem for video game streaming, and it could affect the artistic decisions a game studio makes: drawing a billion tiny particles on a local GPU will look crisp and cool, but asking a hardware encoder to compress them for consumer Internet (or phone Internet) might be too much. I think streamers have run into this problem already.
I think there are certain tuning options that work slightly better depending on what you're encoding. x264 has a "touhou" tune (alongside the more common ones like film, animation, and grain) that should work slightly better for confetti and things like it.
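Roughly how you'd select it through the libx264 C API (a sketch: the resolution, frame rate, and bitrate are placeholder numbers, and it assumes your x264 build still ships the touhou tune):

```cpp
#include <stdint.h>   // x264.h expects the fixed-width integer types
#include <cstdio>
#include <x264.h>

int main() {
    x264_param_t param;

    // "medium" is the speed preset; "touhou" (like "grain") is a psy tune that
    // shifts rate control toward keeping fine, noisy detail such as confetti
    // and bullet-hell particles.
    if (x264_param_default_preset(&param, "medium", "touhou") < 0) {
        std::fprintf(stderr, "unknown preset or tune\n");
        return 1;
    }

    // Placeholder stream settings, just for illustration.
    param.i_width   = 1920;
    param.i_height  = 1080;
    param.i_csp     = X264_CSP_I420;
    param.i_fps_num = 60;
    param.i_fps_den = 1;
    param.rc.i_rc_method = X264_RC_ABR;
    param.rc.i_bitrate   = 6000;   // kbit/s: the budget everything competes for

    x264_param_apply_profile(&param, "high");

    x264_t* enc = x264_encoder_open(&param);
    if (!enc) return 1;
    // ... feed frames via x264_encoder_encode(), then flush ...
    x264_encoder_close(enc);
    return 0;
}
```

As far as I know this kind of content-specific tuning only applies to the software encoder; fixed-function hardware encoders don't expose anything like it.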
Many streaming services sidestep this by generating grain on the client device (e.g. AV1's film grain synthesis) rather than encoding it in the video, though that may just be to make screen recording more annoying.
That's also the case for software encoders. Hardware encoders do it faster, with the caveat that they only encode in pre-determined ways, but whether hardware or software, what happens and what you get are fundamentally the same.
Of course when I search on DDG I only get "wow the fast inverse square root"