It means it's calculating 6000 frames per second, but not all of them need to make it to the screen. On a 60Hz screen, only every 100th frame calculated would actually be displayed. (That's assuming a perfectly steady 6000fps; the video shows fluctuations between 4000fps and 7000fps.)
If you just mean how it can be so high in general: old, well-optimized games run really well on faster modern hardware. I remember getting over 1,000 fps in Guild Wars 1 on a GTX 1060 when looking at an area with no monsters/NPCs.
edit: (this paragraph doesn't apply here) ~~The PS1 also doesn't have floating point math; mostly everything is done in fixed point with integer math, which is obscenely fast compared to floating point (it can also simulate an FPU if the precision is absolutely necessary, but that's not suitable for realtime).~~
Just read further down the article that they converted it to float. I guess the game is just super optimized. I would have thought fixed point savings factored in here.
Nah, most people would probably not think of unseen frames being generated unless you're into studying your graphics performance.
It also technically does make a difference in some games. Even if the frames don't make it to the screen, if your mouse polls at 1000Hz then you can get more precise input tracking by rendering up to 1000fps. It's something only really noticeable to professional gamers, but pro CSGO players are positive they can feel an improvement when running at frame rates more than double their monitor's refresh rate.
Also keep in mind that games don't run at a constant FPS. The "FPS" stat that we talk about is derived from the average time each frame took to process and render (FPS = 1/average frame time). But we can also talk about deviation from that average, which is where you get "1% low" and "0.1% low" stats. If your 1%s or 0.1%s dip below your monitor refresh rate, that becomes microstutter, which is way more noticeable than a drop in the average frame rate.
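In code terms, the percentile stats work out to something like this (a rough sketch of how a capture tool might compute a "1% low" figure; the exact definition varies between tools):

    /* Sketch: compute a "1% low" FPS figure from captured per-frame times.
       This averages the worst 1% of frame times and reports the result
       as an FPS number; real tools differ in the exact definition. */
    #include <stdlib.h>

    static int cmp_desc(const void *a, const void *b) {
        double x = *(const double *)a, y = *(const double *)b;
        return (x < y) - (x > y);                /* longest frames first */
    }

    double one_percent_low_fps(double *frame_times_s, size_t n) {
        qsort(frame_times_s, n, sizeof *frame_times_s, cmp_desc);
        size_t worst = n / 100 ? n / 100 : 1;    /* worst 1% of frames */
        double sum = 0.0;
        for (size_t i = 0; i < worst; i++)
            sum += frame_times_s[i];
        return (double)worst / sum;              /* mean worst frame time -> FPS */
    }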
Keeping your FPS at 2x your monitor's refresh rate will[0] provide a significant safety margin against frames that take too long to render. I will also point out that this 2x rule came into the community before we had adaptive sync, so the alternative would be turning off vsync in the vain hope that the microstutter would get lost in the tearing.
For the record, "microstutter" wasn't really a common thing people talked about until fairly recently, so I'm kind of applying today's more scientific analysis of game performance to things that back then were largely superstitions that happened to work.
[0] Assuming a fast CPU and GPU drivers worth using
Very true. I didn't want to go too deep in one comment, but I tried to mention that the video of the modded Wipeout fluctuates between 4000fps and 6000fps, though even that doesn't cover the fact that 5999 frames may have been calculated in 0.1s while 1 frame took 0.9s.
I actually tried increasing the input handling resolution of a game of mine by running the input polling and physics system 4x for each actual rendered frame. I thought it would be a great idea: you get more precise input and physics without also doing batching, draw calls, or sending anything to the GPU.
The only problem was that, because the actual game logic and physics in my game were so simple, the visual frame would take about 5ms to calculate/render, while the 4 non-visual frames took about 0.2ms each. So even with a 1000Hz mouse it was running the non-visual series of frames faster than the mouse could update, and then stalling for 5ms while the visual frame rendered.
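In loop form it was roughly this (a simplified sketch with placeholder function names, not the actual project code):

    /* Sketch of the idea described above: run input polling and physics
       several times per rendered frame. poll_input/step_physics/render
       are placeholders for the real engine hooks. */
    #define SUBSTEPS 4

    void poll_input(void);
    void step_physics(double dt);
    void render(void);

    void game_frame(double frame_dt) {
        double sub_dt = frame_dt / SUBSTEPS;
        for (int i = 0; i < SUBSTEPS; i++) {
            poll_input();          /* grab the latest mouse/controller state */
            step_physics(sub_dt);  /* advance the simulation a quarter frame */
        }
        render();                  /* the expensive part: batching, draw calls, GPU */
    }

The problem described above falls straight out of this shape: the four sub-steps execute back-to-back in well under a millisecond, so they all see nearly the same mouse state, and then everything stalls in the render call anyway.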
When trying to run one system at a higher rate than another (whether it's render, physics, or input), the principles of DSP come into play: you're running a bunch of things that can be coordinated for maximum smoothness if you define the sample rates to be consistent and interpolate the sampled data appropriately. If you can run something very fast but don't define a set pace, you don't have a theoretically sound starting point, and that's where a lot of game timing systems (from basically every era of gaming) fall over and accidentally drop information or add timing artifacts.
So, like you, I made a system with a target physics refresh that was above render, but I did it with the goal of visually smooth operation - and therefore I derived a frame pace from time-since-boot and guided the physics refresh not by a direct multiple of vsync, but by "how many ticks do I need to issue to keep pace with real time as defined by the system clock". Doing this naively creates disturbing rubberbanding effects, since the pace naturally oscillates, but adding a low-pass filter to reduce jitter and a "dropped frames" adjustment produced motion with a very satisfying quality.
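Roughly, in code (the tick rate, filter constant, and names here are illustrative, not my actual implementation):

    /* Sketch of a clock-driven physics pace: figure out how many fixed
       ticks are needed to keep up with real time, smooth that decision
       to avoid rubberbanding, and interpolate the rendered state. */
    #define TICK_DT (1.0 / 120.0)             /* physics refresh above render */

    void step_physics(double dt);
    void render_interpolated(double alpha);   /* 0..1 between last two states */

    static double accumulator = 0.0;
    static double smoothed_dt = 1.0 / 60.0;   /* low-pass filtered frame time */

    void frame(double raw_dt) {
        /* Low-pass filter: raw frame times jitter, which would otherwise make
           the tick count oscillate (e.g. 1, 3, 1, 3 ticks per frame). */
        smoothed_dt += 0.1 * (raw_dt - smoothed_dt);

        accumulator += smoothed_dt;
        int ticks = 0;
        while (accumulator >= TICK_DT && ticks < 8) {  /* cap = dropped-frame guard */
            step_physics(TICK_DT);
            accumulator -= TICK_DT;
            ticks++;
        }
        render_interpolated(accumulator / TICK_DT);
    }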
I forget precisely what I did with the input, on the other hand, but I think I determined that "as fast as possible" was still an improvement, because the way I was issuing frames reduced the aliasing of deadlines at the margin.
It's an area where you can definitely get pretty sophisticated. Many emulators for older systems are now emulating ahead, displaying that result, and then rolling back the emulation state to create a configurable negative input latency.
Funny you mention DSP, that's a hobby of mine (for music) and I've never really connected DSP fundamentals and FPS outside of animation keyframes.
Your system sounds a lot like the current industry standard for deterministic physics engines, if the input were processed without regard to the physics or rendering speed (you just need to run the physics at a fixed tickrate and it's deterministic). Did you wait for the real time that the physics should be occurring at? Most of those don't actually run the physics ticks in realtime: if you're not rendering them, you can process a bunch of them in a row and just simulate the clock stepping forward. For physics decoupling I normally just use Unity's built-in interpolation system, which does it that way, but I was trying to get fancy here. The issue in my case is that because it depends on external input, the physics processing would need to occur at specific real times. And unless I can know the time of the next frame in advance, that's difficult and not entirely possible (I didn't want to enforce vsync). It would have been fun to go down that rabbit hole, but at that point I decided to take the easy path and tie input polling to the render rate.
And then like 6 months later Unity released the new Input system which can be completely decoupled from any kind of framerate and just gives you realtime input timing values if you want.
Very much true, just wanted to add something. In the before times, games ran on pure forward renderers, which, in turn, mitigated many of the frame timing inconsistencies of the more complex pipelines of today!
Some still do that, of course. With the exception that we aren't relying on fixed-function hardware anymore!
Wait, really? That's odd. I thought the whole point of a deferred rendering pipeline was to reduce inconsistency by doing all your lighting calculations in one pass on one quad. In forward rendering you have to worry about overdraw - i.e. if you have a model that's half-obscured by another, but you render it first, you still wind up drawing the whole model, including the expensive pixel shader material you attached to it[0]. With deferred rendering all your model is doing is drawing textures to various channels in the G-buffer, which is cheap.
I thought the major downside of deferred was memory bandwidth - i.e. you have to write and read the entire G-buffer with at least 11[1] channels in order to produce an RGB image. That's a cost you pay every frame so it wouldn't hurt frame time consistency.
Meanwhile in forward-land it was the case that your FPS was extremely viewport dependent. Like, I remember looking down at a floor would double FPS, looking at a large scene with a bunch of objects or people in it would tank FPS, etc.
[0] Unless you get lucky with drawing order or sorted everything from front-to-back so that you can rely on early depth testing to kill those pixel draws. Which would also ruin your frame time consistency.
[1] XYZ normal, depth, RGB diffuse, RGB specular, and some kind of 'shininess' parameter that controls the specular exponent. Most practical deferred implementations will also have either a "material ID" channel or some special-purpose channels for controlling various visual effects in the game.
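To make [1] concrete, the logical contents of a G-buffer pixel along those lines would be something like this (packing and bit depths vary a lot between engines; this is purely illustrative):

    /* Sketch of a per-pixel G-buffer matching the channel list above.
       Real engines pack these into a few render targets with various
       bit depths; this struct just shows the logical contents. */
    struct GBufferPixel {
        float normal_x, normal_y, normal_z;      /* surface normal */
        float depth;                             /* reconstruct position from this */
        unsigned char diffuse_r, diffuse_g, diffuse_b;
        unsigned char specular_r, specular_g, specular_b;
        unsigned char shininess;                 /* specular exponent control */
        unsigned char material_id;               /* optional: per-material effects */
    };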
This is also why Breath of the Wild has a weird column where if you stand inside of it Link stops getting toon shading.
Yes, but in practice, no. These games are usually coded as a loop that runs flat out, as fast as it can (unless capped, which the old one wasn't), using as much CPU as is available. In that case the fps is a side effect of how long the loop takes to run each pass (which is what happened here) - i.e. you don't determine the fps, the fps is a result of how complicated or (in)efficient your code is. So going from 30 fps (because the code was so poorly written and made such inefficient use of the CPU) to 6000 fps (because each loop pass now completes that much faster), the CPU usage is actually the same.
Now if your code is so optimized that it can run at 6000 fps, at that point you can say "gee, I don't need this many updates a second, let me cap it to x frames per second." But how do you do that? The GPU is grabbing finished frames out of the buffer at its own pace, whether you are generating them at 6k/sec or just 5/sec. To cap your CPU consumption you would usually say "we need a new frame every 0.015s to always have a new frame ready for the GPU so that the screen updates sixty times a second, so if we finish a frame in 0.001s instead, sleep (effectively yielding the CPU to other processes) for 0.01 seconds after we run through the loop" - but while that may work for some things, there is other stuff that needs to happen "in real-time", such as reloading the audio buffer (to avoid pauses or corrupted/garbled audio), and you also can't rely on the system to actually wake you before 0.015s even though you asked it to wake you after just 0.01s to be extra safe.
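In pseudo-real code, that cap looks something like this (a sketch; now_seconds/update_and_render/service_audio are placeholders and the margin is illustrative):

    /* Sketch of the frame cap described above: finish the frame, then
       sleep for most of the remaining budget, keeping a safety margin
       because the OS may wake us up late. */
    #include <time.h>

    #define TARGET_FRAME_S 0.015    /* aim slightly under 1/60s */
    #define WAKE_MARGIN_S  0.005    /* don't trust the scheduler to the last ms */

    double now_seconds(void);       /* placeholder high-resolution clock */
    void update_and_render(void);
    void service_audio(void);       /* the "real-time" work that can't wait */

    void run_frame(void) {
        double start = now_seconds();
        update_and_render();
        service_audio();

        double spare = TARGET_FRAME_S - (now_seconds() - start) - WAKE_MARGIN_S;
        if (spare > 0.0) {
            struct timespec ts = { 0, (long)(spare * 1e9) };
            nanosleep(&ts, NULL);   /* yield the CPU to other processes */
        }
        /* optionally busy-wait the last WAKE_MARGIN_S for tighter pacing */
    }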
Tl;dr: yes, once your code is running at 6k fps, capping it to reduce consumption is an option, but running at 6k fps doesn't actually increase CPU usage vs. inefficiently running at 30fps.
It's possible that going far above "6000fps" might be necessary someday for holographic/3D displays that need to render the scene from hundreds or thousands of different viewpoints for one single frame.
Say you need to render a scene from 1000 different angles for a 3D display: just to hit a 60Hz refresh rate you would need to render the scene 60,000 times per second.
This is the game update loop, which excludes rendering. (For some reason people still call it FPS, which is confusing.)
I'm not aware of any displays like that, but if there were, you could optimize by eye tracking each viewer and only rendering the direction they're seeing it from. The "New 3DS" (note: different from the regular 3DS) did this.
That is absolutely false. Any game you run without capping the fps will use 100% of your GPU and potentially your CPU. As soon as you cap the framerate to 60 fps it starts behaving normally.
I think this was just a case of optimized code running really fast, but sometimes a game will deliberately decouple the physics simulation from the graphics. I have seen this done in both directions: racing games where you want the physics to run faster than the graphics for nice smooth car control, and building games where you want the physics to run slower than the graphics, mainly because there is so much physics that you can't calculate it all every frame.
You need about the same impossible accuracy to play this. I'm only half joking: if you want to improve, you will need to somehow respond more accurately. Seeing someone play it well is mind-blowing if you know how hard it gets.
It's not intended to run at 6000fps. That's just how quickly it will run without any form of limiter. You can use your GPU settings to limit the framerate, or many games have a built in frame-limiter.
They're talking about the speed of the internal engine, not the display. So the display could still only be showing 60 frames to the user each second (or 120, or anything) but the internal engine is running at 6000fps.
Plenty (maybe nearly all?) of games do this because modern engines decouple the engine speed from the display speed. In older systems where you knew the engine was only going to run for a specific game on specific hardware (e.g. a SNES or GameCube or PlayStation), and you knew you were always going to be targeting 30fps, no more no less, you could pretty safely assume the game would _always_ run at 30fps and could use a "frame" as a unit of time. So if you want some in-game action like a melee attack to take 1 second, you could just count 30 frames and you would know it was 1 second long. But if somehow this game was later run at 60fps, that same attack would now only take .5 seconds, since there were twice as many frames in a second now.
So if you took a game like this meant for 30fps and ran it at 60, everything would just run twice as fast. You wouldn't actually be able to play the original game at anything higher than the original frame rate.
What they're saying here is that they decoupled the two, where originally they were coupled. So now the game can run at high fps and feel smoother than the original lower fps rate, but the gameplay is still at the original intended speed.
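The difference in code terms is basically this (an illustrative sketch, not the actual Wipeout code):

    /* Sketch contrasting the two timing styles described above. */

    /* Old style: a "frame" is the unit of time, so this only behaves
       correctly at the frame rate the game was tuned for (30fps here:
       an attack set to 30 frames lasts exactly one second). */
    void update_attack_framecount(int *frames_left) {
        if (*frames_left > 0)
            *frames_left -= 1;
    }

    /* Decoupled style: advance by measured wall-clock time, so the attack
       lasts 1 real second whether the engine runs at 30, 60, or 6000fps. */
    void update_attack_realtime(float *seconds_left, float delta_seconds) {
        if (*seconds_left > 0.0f)
            *seconds_left -= delta_seconds;
    }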
Interesting twist: Wipeout XL/2097 for PC was a terrifically bad port, and the game speed was proportional to how fast your video card could draw the 3D scene, just as you describe.
There was a patch at some point to fix this, but honestly it's just easier to load the game up in a PSX emulator these days.
No, it wasn't hyper-optimized; it just doesn't draw many triangles or use many light sources. There is simply a lot less for the renderer to do than in a modern game.
The risk of 'bad' code is that it might be hard to make the change you want. I would believe that spaghetti code is more likely to have bugs than 'well designed' code.
When code is fresh in your mind, there's more tolerance for how disorganised the code can be and still be easy to change.
If your shipping method is "one shot"... time spent cleaning the code has only a chance of providing value, but time spent adding features very likely adds value. -- So if the code is very clean, probably at least some of that time would have been better spent adding features.
You aren't wrong, but a lot of the risk of "good code" nowadays is that there is more of it. An added risk is that a lot of "good code" leans heavily on practices that are not good for performance.
I don't want to push for lax testing standards. And I generally prefer modern build practices for programming. It is hard to take a lot of criticism of gaming code seriously, though, as games were shipped at what feels like a higher success throughput than most business software.
"Bad" code also has way less abstractions. It's ridiculously easy to change something (unless it's really spaghetti/everything is intertwined)
"Good" and modern code often has many abstractions. In some cases, that gives ultimate flexibility, but in some cases, if you are fighting the abstraction or the framework, it will make it almost impossible to change.
The IRS isn’t going to get involved if your physics don’t match reality. Defining and creating your own data means you get to skip most of the headaches in business software development.
Not a small point, but I think it is more than IRS concerns that have caused web bloat. Gmail taking seconds to respond to a keystroke is definitely not explained away by business logic being hard. Is it?
The great debate is always "will I want to change this tiny thing here without impacting everything else?" vs DRY. Studio cultures differ, but I would say at least half resort to hard-coding things like the position of every item manually, with the excuse that they might want to shift something a bit later (in a hurry, crunching, etc.) without side effects.
Thankfully the emergence of more standard tooling and engines has pushed this to being more of an art resource concern, but it does lead to things like being told that taking a game that assumes a 16:9 1080P display and making it more flexible will take multiple years of person time.
Personally I cannot stand this tendency, but do get why they do it.
You can get pretty close to the end of a project before you really understand what it was you were trying to do. Especially if management is allowed to keep moving things on you. I think this is substantially the instinct that leads to Waterfall. I just want to know what I'm supposed to be building for a while before you 'ruin' it.
Definitely. I've worked on my fair share of projects where the game is essentially completely changed midway or late in development.
Stuff like "this single player game we've been working on for 2 years now has to be a multiplayer game". That was a fun one.
In other industries I've been able to reasonably strategize for various changes but in games the change can be so profound and out of left field it's pretty much impossible to anticipate the change and plan for it.
Oh damn, one of the worst was when the game was practically finished and "we just did a publishing deal, it is great, you just need to support 13 languages" "Which?" "Arabic" . . .
I think the days of that happening are mostly gone. In the days when games were one-and-done, gameplay code was like this, but in the age of sequels & live service games, game code is as good (or bad) as any other code.
To frame this: I actually worked at Sony as a software engineer on the PlayStation when Wipeout came out, so I have some firsthand insight into developing games around this time. When it comes to older games like this, there were a bunch of compromises that we had to make that introduced additional complexity, and that are entirely overlooked here. You're looking at this from the perspective of how a computer/processor works now, and what it is like to develop software without having to actually take the limitations of the hardware and processor into account as part of your code design.
For example, the PlayStation 1 had a MIPS R3000 CPU with a single instruction pipeline, so it's basically doing only one thing at a time. Multithreading doesn't exist. Our only parallel processing was that we could do all the game logic plus all the math to transform the triangles into screen space, and simultaneously the GPU would be drawing the previous frame. We had 4MB of memory total, so when we were working on games then, we would have actual discussions about whether it was worth the overhead of including a malloc/free at all, or whether to just hard-code all your memory addresses because of the space you'd save. We would compile our abstracted, "nice" versions of functions, count the instructions, and compare sizes to see where we could optimize the code to reduce the compiled output size. In return, we might be able to draw an extra triangle or two.
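For anyone who hasn't worked with that kind of budget, "hard-coding your memory" looks roughly like this (an illustrative sketch with made-up types and sizes, not actual code from back then):

    /* Illustrative sketch of static allocation instead of malloc/free:
       every buffer has a fixed size and a fixed place decided at build
       time, so no allocator code or heap bookkeeping has to fit in RAM. */
    typedef struct { int x, y, z, speed; } Ship;           /* made-up types */
    typedef struct { short verts[16][3]; } TrackSection;

    #define MAX_SHIPS          8
    #define MAX_TRACK_SECTIONS 600

    static Ship          ships[MAX_SHIPS];
    static TrackSection  track[MAX_TRACK_SECTIONS];
    static unsigned char scratch[16 * 1024];   /* one scratch area, reused by
                                                  whichever system needs it */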
The instruction and data caches were tiny and loaded based on address, so sometimes we would add code or instructions that didn't do anything, just to keep a loop from crossing a cache boundary.
So we were working under strict time pressure, with unknown hardware and badly translated Japanese print manuals (and sometimes not even translated), in small teams, without any real ability to communicate with anyone else in the industry about it (since we were all under NDA).
I'm not saying that we didn't write bad code, we wrote plenty of it, but a number of the decisions about HOW to write the code itself that we were weighing then aren't even visible to people reading the code now and making judgements on it. I knew I was writing spaghetti code some of the time, because I didn't have the memory budget to load it as data. Is there a cleaner way to do their UX, and in a data-driven fashion? Sure. But for me to get that data, I would have to write a function to load one or more blocks of data from a 1x CD with a super low transfer rate and appallingly long seek times. Making a data-driven UI is possible, but not practical: when someone hits the start button, they want the menu to come up immediately.
In many cases, we knew that we were writing bad code, but didn't have the capacity to write anything better. It literally didn't fit our budget for memory or CPU. Then you have to make decisions about where you CAN afford an abstraction, and those decisions can be quite painful.
I haven't violently disagreed with anyone in any of these threads, but did want to offer a bit of my first hand perspective on things that are often overlooked.
This reminds me of The Story of Mel[1]. It seems like you were doing Heavy Wizardry, which requires deep knowledge of technical details. The result can be Good Code in the sense that it makes the most use of available hardware and software resources, while possibly being hard or 'impossible' for people without equivalent knowledge to read or make changes. This is fine.
When you have more resources, you can write things prettier, not rely on side-effects, not "abuse" features to do things normally considered Bad™, not "cross the streams". Or perhaps I should say: when you fit easily within available resources, you have luxuries.
Don't feel bad about what you built, when you're aware of the constraints you were under and that the tradeoffs you were making were the best available choices at the time.
Always impressed how some people possess the skill to plough through these kinds of codebases. I wouldn't even know where to start, let alone find the time to execute these kinds of projects. Although maybe I should remember my life before kids: coding till 4 pm every day. Huge respect for this guy.
It goes like this: You read a function and understand what it does. You scream at the original programmer (who was probably yourself) for being an utter idiot and doing it badly. Then you rewrite it better.
Then you go to the next function, and realize it doesn't even need the previous function at all, so you get mad at yourself and delete all that wonderful code you just wrote.
Then you try a new way: instead of inside out (start at the internal functions) you start at the top - you find where the code handles some particular task, and you drill down deep into it, throwing away useless code left and right and rewrite the thing better. (Your advantage being you know exactly what it needs to do, unlike your previous self who did not know that since he was still developing it.)
You also use a tool that finds dead code - in particular functions that are never called by anything.
Breaking big problems down into small ones is part of the process, yes, but only once you are well-versed in the problem space. The guy who rewrote this:
1. Is proficient in C.
2. Understands Playstation architecture and development better than the original programmers of the game. (Although Psygnosis had far less time and hindsight on their side.)
3. Is apparently quite familiar with 3D game programming and techniques in general, and how/when to use them.
4. Already had experience reverse-engineering parts of the game previously.
5. Had the free time to undertake a project of this scale.
So it's not something any random dev can just snap their fingers and decide to do. It's a case study on the intersection of experience, preparation, and luck.
On the other hand, I think ars's point is good advice: the way you do the work is by doing the work.
So many people I work with these days are constantly looking for tricks, or secrets, magic tools, or whatever they seem to think I'm doing to consistently outpace them, just to avoid writing code (/ reading code / thinking through code). They take 2-10x longer and their result is invariably worse. When you ask why they do it, they answer "you'd have to do so much refactoring", "the algorithm would be hard to implement", "I'd have to replace dependency X", and so on as a list of things that actually aren't so hard or time-consuming if you sit down and just do the work. But my only "trick" is that I write the code.
Just sit down and do it! I'm reminded of Harlan Ellison, commenting on why he wrote in public,
I do it because I think particularly in this country people are so distanced from literature, the way it’s taught in schools, that they think that people who write are magicians on a mountaintop somewhere. And I think that’s one of the reasons why there’s so much illiteracy in this country. So by doing it in public, I show people it’s a job.
Wipeout was asses in chairs writing code. This cleanup is an ass in a chair writing code. Yes soft skills are important blah blah. But if you want to be good, to learn to do this stuff, there's no way around putting your ass in the chair and writing code.
Shipping bad code fast and with little understanding isn't really what I'm talking about. I would say the kind of frontend developers you're describing lack hubris; the ones I'm talking about (which are all over the stack but I would say somewhat concentrated in the middleware-to-commodity-backend, Spring Boot, etc. area) lack laziness.
This specific case is just a grift. The usage of is-odd isn't as legitimate as people think.
If you look through the dependents, there are other packages by the same author that are "semi-useful" and depend on it. The author got some important packages to use those.
The author could just inline is-odd (it's a single line, after all), but then he would lose the bragging rights of having a package with 500k weekly downloads.
This is how left-pad got included into create-react-app (or something like that) in the past. It was a 3rd or 4th level dependency of something that was actually useful.
The problem here is people and packages not really vetting their 3rd- and 4th-level dependencies. And you'll see a lot of people here on HN defending the practice of not caring.
I know, right? I've spent the last 15 or so minutes just marveling at how nicely cleaned up it all is, I can only imagine how overwhelming it must've been to see the mess it was. Really shows what some extra time/budget could do for major studios releasing remasters.
"Either let it be, or shut this thing down and get a real remaster going."
Solidly agree, copyright / IP shouldn't be about holding the public hostage. It should be about maximizing the mutual benefit for both the creator and, very importantly, the public.
Culture deserves love and respect and must be "accessible" (available to buy); or it should be set free (public domain).
Wipeout was great, but I always preferred playing Extreme-G on the N64. I know the source for the third game in the series was leaked, but I'm still waiting for the original!
The numbers are somewhat artificially exaggerated since it sounds like the source leak contains what are effectively multiple platform-specific copies of the same game.
That's a pretty common pattern when you're porting to a substantially different system from a bespoke base like a small launch title that was never intended to run on anything else. Especially in an era before everyone used VCS tools like git with cheap branches. We used to work on diverging whole-copy forks constantly.
Going through this mess and cleaning it all up must have felt incredibly satisfying because of all the low-hanging fruit. It's like a long overdue spring cleaning.
Is the gameplay of Wipeout actually any good? Always admired the aesthetics, and I love F-Zero GX, but…
The bonking against the sides and the short view distance induced by excessively curved tracks make it feel like the NES game RC Pro-Am, but instead of a tightly-cropped overhead view of the track, you see the next fifty feet of track ahead (at 1,000 scifimeters/sec).
The game felt brute-forceable through its brittle, unforgiving driving, but it was so frustrating I never gelled with it. Was I missing anything?
"The original menu “system” was… I don't even know. I never took a close look. It's 5000 lines of spaghetti for the main menu plus another 4000 or so for the in game menu, credits (without the actual text) and win/lose screens."
"The 5000 lines of if else that handles the menu state is a striking witness to this insanity."
I can say, quite confidently, that it was probably some junior developer who built the shell. It is always some junior developer that builds the shell. And it was a designer/director who didn't know what they wanted until just prior to putting the gold master in an envelope to send to the pressing house. Also, the shell is completely throwaway code that will never, ever be used again in any other game.
"Each time by a different set of developers that somehow had to make sense of this mess. The code still contains many of the remains of what came before. There was never any time to clean it up."
Exec Producer at publisher: "Hi, here's $25,000, can you port Wipeout from this platform to that platform? We'll give you another $10,000 if you can do it by this date, and another $25,000 three years later if the port sells so many bajillion copies."
Desperate Studio Producer: "Absolutely can do that, let me find two cheap programmers to take care of it. Hey guys, can you spend the next two months crunching like crazy with unpaid overtime to help me keep my boss happy and get that promotion? There will be a significant reward (a two-pizza pizza party) if you can hit this release date."
The two developers then spend an inordinate amount of time trying to get the damn project to build, track down some obscure missing header files, layer on more hacks, and ship in the 11th hour.
And the rest is history. And nobody ever looks at the code ever again. Until the next poor programmer has to port it to a new platform.