There's a part in the hallway to the left where you fly a little drone into a laboratory to take a picture. It's one of the traditionally hard places to emulate for emulation strategies that abstract the graphics output away from the memory pathway of the real system. That's because the security-camera effect, which these days would be a post-process fragment shader, is instead done on the main CPU, just reading and writing the framebuffer. So the GPU emulation actually needs to write out the framebuffer in the correct format/resolution, and the CPU then has to read it back from memory, rather than short-circuiting it out of the GPU directly, which is at the core of how this design gets its benefits.
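To make the CPU round-trip concrete, here's a hypothetical sketch (not the actual Perfect Dark code) of the kind of read-modify-write pass the game does over the whole framebuffer, which is exactly what an emulator can't short-circuit:

```python
import numpy as np

def security_cam_pass(framebuffer):
    """Hypothetical CPU-side post-process in the spirit of the drone-camera
    effect: read the rendered frame back, convert it to a tinted monochrome
    with darkened scanlines, and write it back into the same buffer.
    `framebuffer` stands in for the H x W x 3 uint8 frame in RDRAM."""
    luma = framebuffer.mean(axis=2)      # read: per-pixel greyscale intensity
    luma[::2] *= 0.6                     # modify: darken alternate scanlines
    framebuffer[..., 0] = (luma * 0.2).astype(np.uint8)  # write back: green tint
    framebuffer[..., 1] = luma.astype(np.uint8)
    framebuffer[..., 2] = (luma * 0.2).astype(np.uint8)
    return framebuffer
```

Because the CPU touches every pixel like this, the emulated frame has to exist in memory at the right resolution and format before the pass runs, which is why the effect breaks under emulators that keep the frame on the host GPU.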
I played that game so much as a teen. It's possible the PS version is more playable but it would feel off playing the wrong version.
The neat thing about the Perfect Dark effect is that they read-modify-write the entire framebuffer, and it represents the full scanned-out frame, so in those cases you'd need to drop the scan-out resolution back down dynamically, whereas in the Mario Kart example you can keep the higher scan-out resolution.
The RAD2X is an easy way to get started: https://www.retrogamingcables.co.uk/RAD2X-CABLES Combined with refurbishing the original controllers with some new plastic (https://store.kitsch-bent.com/product/n64-joystick-gears) and maybe an EverDrive, you can enjoy the original experience without too much work.
Still, it can't increase the original resolution like this; this is just gorgeous, and so sharp.
Maybe it made sense on the already somewhat distorted consumer TVs of the day as a kind of primitive anti-aliasing, but I think it's horrible.
Deconvolution is super cool. You can use some deconvolution algorithms in GIMP using the G'MIC plugin. There are a few different ones in the Details section under Sharpening, for example Richardson-Lucy or Gold-Meinel. You can play with blurring an image and then using the deconvolution to remove the blur - it's surprising how much of a Gaussian blur can be removed. I've used it in the past to remove blur from some deliberately blurred 'preview' images. Try the different algorithms, as some produce much better results than others, but I forget which.
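If you want to see why so much of a Gaussian blur is recoverable, Richardson-Lucy is short enough to write yourself. Here's a plain-NumPy sketch of the algorithm (assuming a known, noiseless blur kernel - real photos with noise are much less forgiving):

```python
import numpy as np
from scipy.signal import fftconvolve

def richardson_lucy(observed, psf, iterations=40):
    """Iterative Richardson-Lucy deconvolution: repeatedly compare the
    observed image against the current estimate blurred by the PSF, and
    redistribute intensity by the ratio."""
    estimate = observed.copy()
    psf_mirror = psf[::-1, ::-1]
    for _ in range(iterations):
        blurred_estimate = fftconvolve(estimate, psf, mode="same")
        ratio = observed / np.maximum(blurred_estimate, 1e-12)
        estimate *= fftconvolve(ratio, psf_mirror, mode="same")
    return estimate

# Blur a sharp synthetic image with a small Gaussian, then largely undo it:
sharp = np.zeros((32, 32))
sharp[12:20, 12:20] = 1.0
x = np.arange(-2, 3)
gauss = np.exp(-x**2 / 2.0)
psf = np.outer(gauss, gauss)
psf /= psf.sum()
blurred = fftconvolve(sharp, psf, mode="same")
restored = richardson_lucy(blurred, psf)
```

With the exact PSF and no noise, the restored image gets much closer to the original than the blurred one, which matches the experience of stripping blur from those 'preview' images.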
As a result, I subjectively find that despite having significantly weaker hardware, no perspective correction, and no subpixel precision, PSX games often end up looking a lot more impressive. And that has a lot to do with the incredible texture work these games use: https://www.mobygames.com/images/shots/l/243918-vagrant-stor...
OTOH, Vagrant Story came very late in the PS1's life (the PS2 came out like a month later), that's from an in-engine cinematic scene, the texture swimming issues aren't really noticeable in a screenshot, and it's hard to say for sure, but that may be a screenshot from an emulator.
The real experience was a bit more like https://www.youtube.com/watch?v=5GVE4a8ULww&t=912 (screenshot is from around 15:13).
Still pushing the hardware to the max, but games in a similar part of the N64's lifecycle (especially the ones requiring the Expansion Pak: Majora's Mask, Banjo-Tooie, Perfect Dark) also look pretty remarkable for the time.
The additional cost was an enormous step up in seek time and read time, which for some games manifested as load times everywhere, and for others meant a herculean effort in managing asset streaming, per the fascinating Andy Gavin talk that Ars published a few months ago:
Also, streaming assets was rather uncommon at the time; Naughty Dog was really pushing the envelope here. It was especially uncommon because most games streamed the background music in real time straight from the CD, so if you wanted to side-load assets on demand you had to be very clever about it lest the audio get interrupted.
As a result, you generally had long loading times at the start of levels, but that's about it. Some games were really bad about it, though, with long loading times all over the place (some even when you did something as trivial as opening a menu), but that could generally be attributed to shoddy programming, not a weakness of the console per se. Overall, I think in hindsight the decision to use a disc drive was the right one, and cartridges ended up being a rather severe liability for Nintendo at that time, although of course it's far from the only factor at play when comparing the successes of the two consoles.
There were a few texture formats you could use. 16-bit RGB was the lazy choice, but you could squeeze out more resolution if you used 4- or 8-bit greyscale (single channel) and put colors in the vertices.
You could also use palettized textures, with either 8- or 4-bit lookups (i.e. 256 or 16 unique colors per texture). Unfortunately that split the texture memory in half: the palette got 2KB and the lookup data 2KB. If you were doing things right, you spent a lot of time tweaking palettes by hand and writing code to best choose palette colors.
Tl;dr: 48x48 16-bit RGBA, 48x48 palettized 256-color RGBA, 64x64 palettized 16-color RGBA, or 96x96 4-bit intensity.
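A quick sketch of the arithmetic behind those sizes, assuming the commonly cited 4KB of TMEM (with half of it holding the palette in palettized modes). Note the parent's dimensions are ballpark figures; a couple land slightly over the nominal budget:

```python
def texel_bytes(width, height, bits_per_texel):
    # TMEM footprint of the texel data alone (palette stored separately).
    return width * height * bits_per_texel // 8

TMEM_TOTAL = 4096      # the N64's 4KB of on-chip texture memory
TMEM_PALETTED = 2048   # texel half, when the other half holds the palette

budgets = {
    "48x48 16-bit RGBA":      texel_bytes(48, 48, 16),  # 4608 bytes
    "48x48 8-bit palettized": texel_bytes(48, 48, 8),   # 2304 bytes
    "64x64 4-bit palettized": texel_bytes(64, 64, 4),   # 2048 bytes
    "96x96 4-bit intensity":  texel_bytes(96, 96, 4),   # 4608 bytes
}
```

The 64x64 4-bit case is the one that fits its 2KB half exactly, which is presumably why it was such a popular sweet spot.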
Edit: I actually watched the whole video and realized the HDMI cable is doing post-hoc image processing to reverse the blur, which is why the dithering is still wiped out but the blur is gone. Pretty neat.
The standard output mode was 320x240, but developers realized you could reduce the buffer sizes and play with the screen borders to render fewer pixels per frame. Dropping resolution was a quick way to get the frame rate up, and when your target TV was an NTSC CRT it didn't seem so bad.
The antialiasing (which a GameShark can disable) was what stopped the nasty pixel crawl and jaggies that PlayStation games of that era suffered from. It was cutting-edge for the time: it used the 'extra' bit in the 9-bit Rambus RAM (which would have been used for ECC in serious applications) to store coverage bits and blend edge pixels while maintaining crispness on interior edges.
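The core idea is simple enough to sketch (this is a simplification of the RDP's blender, not its actual equation): each pixel stores a few coverage bits saying how much of it the geometry actually covers, and partially covered edge pixels get mixed with what's behind them, while fully covered interior pixels pass through untouched.

```python
def coverage_blend(pixel_color, background_color, coverage, max_coverage=7):
    """Sketch of coverage-based edge antialiasing: blend a pixel with the
    color behind it in proportion to its stored coverage value
    (3 bits -> 0..7 on the N64)."""
    alpha = coverage / max_coverage
    return tuple(alpha * p + (1 - alpha) * b
                 for p, b in zip(pixel_color, background_color))
```

Fully covered pixels (coverage 7) stay crisp; it's only the partially covered silhouette pixels that get softened, which is why it avoided the all-over blur of naive filtering.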
For something like a photograph, you want as high a 'native resolution' as possible, so that there is as much information as possible in the original image. But for old video games, the assets and textures are still the same. Is this a bit like taking a high megapixel photo of a low quality print? Or is my understanding wrong?
I'm sure things like AA work better in the native renderer. Are there other advantages?
Previously, N64 emulators just used the commands sent to the RDP to tell the host system's GPU what to do via something higher-level like OpenGL or DirectX (which, of course, meant a lot of game-specific "hacks" in the emulator), rather than emulating the RDP itself and feeding its lower-level commands to the host GPU with something like Vulkan. This is so-called high-level emulation (HLE), and it's a massive shortcut compared to emulating the whole RDP -- which is why an N64 could even be "emulated" on a PC from 1999.
Lower-level emulation of the RDP itself has recently been made possible, and now it can also be "upscaled" to arbitrary resolutions -- instead of just sticking with high-level emulation and telling OpenGL or DirectX to render at a larger resolution, or, even worse, scaling the rasterized output frames by treating them as images.
In practice, Mario 64 was just telling the RDP to render triangles anyway, so it mapped nicely to OpenGL in the HLE case; but for more "accurate" emulation (this is like getting from 95% to 99%), the RDP itself needs to be emulated as well, for things like the Perfect Dark drone camera mentioned elsewhere.
Most games used one of a few Nintendo-provided RSP programs, although later in the machine's lifetime Nintendo opened up the RSP compiler and tools to developers.
I'm a (sort of) purist who has an emulation (hence the sort of) PC hooked up to a 240p CRT TV for older games, but N64 running at higher res does look pretty nice in some games, and anything looks better than native 240p output with blurry bilinear upscaling on an LCD.
For more on that: https://www.youtube.com/watch?v=Ea6tw-gulnQ
It's actually a 480i signal with the timing fiddled with so that the alternate lines still strike the same part of the screen (this is why games from that era had such noticeable scanlines - the CRT beam is only lighting up alternate horizontal lines).
This also means that a lot of more modern TVs (and even some upscalers marketed for retro gaming) do an extra terrible job of upscaling 240p signals because they run the same logic that they would if it was normal 480i, resulting in unnecessary flickering or dropped frames.
My understanding is that the timing of the vblank signalling between fields determines whether the next field is an even field or an odd field. If the vblank signalling comes in the middle of the last scanline, the next field is an even field; if the vblank comes aligned with the end of the last scanline, the next field is an odd field.
If you always start vblank signalling in the middle of a scanline, you get all even fields; if you always start it at the end of a scanline, you get all odd fields.
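As a toy model of that description (not real NTSC timing code - the 0.5/1.0 fractions are just stand-ins for mid-scanline vs. end-of-scanline vblank):

```python
def next_field(vblank_at_fraction_of_scanline):
    """Vblank beginning mid-scanline (0.5) yields an even field; vblank
    aligned with the end of the scanline (1.0) yields an odd one."""
    return "even" if vblank_at_fraction_of_scanline < 1.0 else "odd"

# A "240p" console fires vblank at the same point every frame, so the
# beam retraces the same set of scanlines each field:
fields_240p = [next_field(0.5) for _ in range(4)]

# A true 480i signal alternates the timing, interlacing the two fields:
fields_480i = [next_field(0.5 if i % 2 == 0 else 1.0) for i in range(4)]
```

That constant-parity trick is also why 240p confuses deinterlacers: the signal looks like 480i on the wire, but the fields never alternate.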
PlayStation 1 games suffer from a similar problem, and rendering at higher resolution in an emulator also often reveals a surprisingly high polygon count for eg. character models that look like badly-drawn sprites at native resolution.
> I'm sure things like AA work better in the native renderer.
Rendering at higher resolution is the highest-quality form of AA possible.
But I think the bigger advantage is for distant objects: if you zoom into the GoldenEye screenshot, you can see the face texture of the guy at the other end of the hall. If the regular native resolution had been used and upscaled afterwards, you wouldn't see the face at all, just a face-coloured blur.
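The AA point is easy to demonstrate: rendering at N times the resolution and box-filtering down is supersampling, the brute-force form of antialiasing. A minimal sketch (made-up helper name, synthetic "frame"):

```python
import numpy as np

def supersample_aa(render_highres, native_shape, factor):
    """Box-filter a frame rendered at `factor`x down to native size --
    the antialiasing you get 'for free' from a higher internal
    resolution."""
    h, w = native_shape
    return render_highres.reshape(h, factor, w, factor).mean(axis=(1, 3))

# A hard diagonal edge rendered at 4x the native 4x4 resolution:
hi = np.fromfunction(lambda i, j: (j < i).astype(float), (16, 16))
native = supersample_aa(hi, (4, 4), 4)
```

Pixels the edge cuts through come out as intermediate greys instead of hard jaggies, while fully covered pixels stay at full intensity, which is exactly what edge antialiasing should do.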
Image superscaling is a well-known problem with real-world solutions, so it's only a matter of time and interest.
ML will draw in details that never existed to begin with. It's quite amazing, but most versions of image superscaling that exist today are trained on drawn art, like from DeviantArt, so the ML makes the images look subtly pastel. It's great if you want a 4K desktop background.
Heck, I could make one if I cared enough. A friend of mine makes a popular emulator. Maybe she'd appreciate the functionality.
(Not suggesting that one should drive by signs in GT, but they're still fine as a cue.)
Even if the textures are blurry (they can be swapped too), the polygon details will be easier to see.
I wonder if someone has already thought of using one of the available pixel art upscalers to improve the texture resolutions. If not, it's probably only a matter of time.
I personally think it looks awful.
For this particular project, it's up in the air whether it actually works. Parallel RDP is not a "standard Vulkan game"; if I understand correctly, it behaves more like a compute-shader program written in Vulkan. As a result, it requires the presence of certain more niche Vulkan extensions. MoltenVK, by its nature, is not as "feature-rich" a Vulkan driver as bare-metal Vulkan drivers are, so it might be missing extensions required for Parallel to work. In Parallel RDP's first iteration, it required a Vulkan extension that allows GPUs to use system memory, which was only present in certain Windows/Linux GPU drivers but not in MoltenVK. There's already a workaround for this with a minor performance impact.
Parallel RDP now seems to work on a few mobile GPU Vulkan implementations, which is encouraging for MoltenVK, as those drivers also tend to be lower quality and have less coverage. Maybe Parallel RDP already works on MoltenVK, in fact, and just requires some testing.
Worth noting it has only been released for Windows and Linux, so this would require some building yourself: https://www.libretro.com/index.php/parallel-rdp-rewritten-fr...