Super Mario 64 has been decompiled (github.com)
445 points by sjuut 82 days ago | 170 comments

As an amusing side-effect, the team working on this effort also implemented IRIX userland support for QEMU since the original N64 toolchain ran on IRIX on the SGI Indy, and they need the original compilers to verify functional equivalence of their source: https://github.com/n64decomp/qemu-irix .

I honestly love coming to HN to see posts like this and comments like yours. It is always so neat to see the other sides of software engineering. You listed 4 acronyms and I have no idea what any of them are or how they fit into this story but all I want to do is deep dive into each one. It is also awesome to see people so interested in things that I've never even encountered before.

IRIX [1] was the version of SVR4 UNIX which ran on SGI [2] computers.

QEMU [3] is an emulator used to run programs for one machine on another.

N64 [4] is the Nintendo 64 games console.

SGI Indy [5] was a desktop SGI workstation from 1993.

[1] https://en.wikipedia.org/wiki/IRIX

[2] https://en.wikipedia.org/wiki/Silicon_Graphics

[3] https://www.qemu.org/

[4] https://en.wikipedia.org/wiki/Nintendo_64

[5] https://en.wikipedia.org/wiki/SGI_Indy

I love how you use 3 acronyms to describe the first acronym. It's acronyms all the way down!

SVR4: UNIX System V Release 4 https://en.m.wikipedia.org/wiki/UNIX_System_V

UNIX: Actually Unix. Not an acronym. https://en.m.wikipedia.org/wiki/Unix

"Unix. Not an acronym."


"In 1970, the group coined the name Unics for Uniplexed Information and Computing Service (pronounced "eunuchs"), as a pun on Multics, which stood for Multiplexed Information and Computer Services."

Uniplexed sounds like a joke word.

It was. They were poking fun at Multics being too bloated and complex.

> QEMU [3] is an emulator used to run programs for one machine on another.

More specifically, from one processor architecture to another: e.g. running, on your desktop (usually an x86-based architecture), a Linux operating system designed and compiled for a Raspberry Pi (an ARM-based, incompatible architecture). In this case they're running software built for MIPS, the same processor family the Nintendo 64 targeted, which also happened to run a Unix OS known as IRIX.

[5] Often referred to as 'the pizzabox' IIRC

edit: oh, says it right there in the Wiki

Also the Sun SPARCstation.


Thank you, manual google paster.

I wrote to SGI in high school asking for some info on their computers and they sent back a stack of beautifully printed, full-color brochures. The Indy had a webcam, which was very rare in those days. Also included was a brochure on the Indigo workstation, which Industrial Light and Magic used for Jurassic Park, etc.

Nintendo is a little mysterious when it comes to what their actual tooling was, but I remember Donkey Kong Country being the first time I read they were using SGIs (or at least the studio "Rare" was).

It's somewhat surprising they used the Indy for developing Mario 64 – I always got the sense that it was somewhat lightweight in performance compared to the Indigo, but a very cool machine either way.

I have an SGI rotting in my garage, what's amazing about them is the quality of the monitor. For CRT displays, the best damn monitor I ever experienced, just CRISP.

The Nintendo 64 had a MIPS R4300 chip. The SGI Indigo also used MIPS chips: the early models had the R4000/R4400, the later ones R8000+ chips. I can only speculate that by using an SGI, you could run some of your non-N64-specific code locally and debug faster.

The original PSX had an R3000 chip, but Sony opted for BSD: their devkit ran on FreeBSD PCs, and you built the code there and ran it on an actual PSX device. Cheaper...

Can you link more info about Sony's devkits using FreeBSD?

BSD would've been a strange choice as the Playstation 1 debuted on December 3rd, 1994. "FreeBSD 1" came out just 13 months earlier

The Playstation 2 "TOOL" machines ran Red Hat for some of them, which was a bit more mature by 2001

The Playstation 3 and 4, though, both run NetBSD and FreeBSD under the hood.

The Playstation 1 "TOOL" actually ran Windows [1]. A large success of the PS1, however, was the "twin ISA" card dev kit, which could be plugged into any PC compatible for PS1 development, drastically lowering the cost of developing for the PS1.

Also BSD != FreeBSD, BSD 4.3 Net/1 (the first BSD released under the BSD license instead of containing AT&T code) was released in 1989.


Was FreeBSD really a requirement? I used to have a Sony Net Yaroze that allowed me to build PSX executables on my PC, using Sony's custom GCC-based toolchain. It didn't require FreeBSD.

I worked at a company that did PSX dev on Windows PCs.

Those brochures are probably worth real money on eBay if you still have them, a PowerSeries brochure just sold for $200!

By the time the Indy came out, the Indigo2 had replaced the Indigo, and I suspect a midrange Indy was a good match for a midrange Indigo1 (at much lower cost). Nintendo made an N64 dev board for the Indy, essentially an N64 on a GIO board, complete with an adapter card to connect controllers.

> Indy was a good match for a midrange Indigo1

The joke always was that the Indy was the Indigo without the go :-)

But it was a decent enough machine to develop on, you didn’t need the 3D stuff if you spent all day in Emacs or compiling. Whereas an Indigo was really targeted at say CAD users.

Haha that was because the base Indy was shipping with 16MB of RAM and IRIX 5 was too bloated for that to be usable. Meanwhile everyone with Indigos kept running IRIX 4 until things got better around 5.3.

The Indy had XZ graphics available, which I believe were the same as the top Elan option available on the Indigo (4 GEs)

Rare: The Inside Story - The Retro Hour EP180 https://www.youtube.com/watch?v=ED7rX3ZIBoE

"We get the inside story on the legendary Rare with an all-star panel - David Doak (GoldenEye), Chris Marlow & Shawn Pile (Conkers Bad Fur Day, David Wise (Donkey Kong Country series) and Kevin Bayliss (Battle Toads/Killer Instinct)"

> the team working on this effort also implemented IRIX userland support for QEMU

What does this mean?

I think of QEMU as emulating hardware... What exactly is being emulated here?

QEMU is thought of as a hardware emulator, but supports "userland" emulation where the processor ISA is emulated but syscalls and memory are translated to the host OS.

I didn't even know QEMU could do that. That's insanely cool; kind of a weird combination of traditional virtualization and Wine.

One very cool thing that you can do with it is to use binfmt_misc to tell the kernel to use `qemu-arm` to run ARM binaries, then you can chroot in to an ARM device's filesystem from your x86 workstation, and all of the ARM binaries just work.

I've used this to `apt-get upgrade` a netboot Raspberry Pi installation from the server way faster than you can do it on the Pi itself.

I was just dealing with this today. QEMU was too slow on my MacBook Air though.

Do you have a link to a comprehensive guide on doing this by chance? I was thinking tomorrow I’d just launch an arm instance in AWS and figure it out but I have a dual Xeon workstation at work (windows) that I might try as well.

I don't have a guide that documents everything I did, but the process is described in pretty good detail here: https://wiki.debian.org/RaspberryPi/qemu-user-static

You can skip the first part about creating the image, since presumably you already have one. So the process for me is something like:

    # install user-mode QEMU and binfmt registration support on the host
    apt-get install qemu qemu-user-static binfmt-support
    # copy the static ARM emulator into the Pi's root filesystem
    cp /usr/bin/qemu-arm-static ~/rpi_mnt/usr/bin
    # enter the ARM root; its binaries now run through qemu-arm-static
    systemd-nspawn -D ~/rpi_mnt /bin/bash
(Some additional steps needed if you want to use regular chroot instead of nspawn.)

Sometimes qemu shows an error saying some operation isn't supported, but this hasn't broken anything yet for me, even after I did a whole Raspbian Stretch -> Buster upgrade this way.

BTW, with Debian buster and later, you won't have to copy the qemu-arm-static binary around, since the Linux kernel will now use the file from outside the chroot/container.

Awesome. Many thanks.

On a distro like debian you can even use it to build-and-run userspace binaries for an unrelated architecture (some chroot magic was required last time I checked).

You (1) use binfmt_misc to tell the kernel to use `qemu-ARCHITECTURE` to run binaries for that architecture, then (2) make sure you also have all of the libraries that the binary is linked against, then that binary executable should just run seamlessly.

Now, if your ARM binary was compiled to look for libc at /lib/libc.so, but /lib/libc.so is the host's x86 libc, then that obviously won't work; and the easiest way to get the libraries all sorted out is to use a chroot with OS install of the target architecture. If you do go the chroot route, you need to make sure that `qemu-ARCHITECTURE` is statically linked, because it won't have access to the x86 libraries it needs to run after the chroot(2) call happens (which is why most normally-dynamically-linked distros have a "qemu-user-static" package in addition to their normal "qemu-user" package).

But with a multilib scheme like Debian's, where all libraries get installed to /lib/ARCHITECTURE-TRIPLET/ instead of /lib/, then it should be possible to install all of the appropriate target libraries on the host system without a chroot! You "should" just need to configure APT to let you install packages built for that architecture. (I haven't actually tried this; I'm not a Debian user, but I am envious of their multilib).

I've used this to run some 32-bit Linux binaries under Windows Subsystem for Linux (WSL), which only natively supports 64-bit binaries. (Recompilation for 64-bit was not an option.) It wasn't ideal but it did work smoothly for the most part. I just used `dpkg --add-architecture i386 && apt update && apt install libc6:i386` rather than creating a separate chroot. I did have to edit the binfmt registration to remove the 'OC' flags set up by the qemu-user-binfmt package, since these aren't supported by WSL, and manually enable the i386 binfmt which is blocked by default on amd64 platforms. There is also a persistent SIGSEGV in one particular binary which may not be related specifically to running under QEMU.

The ISA is virtualized (and much faster) if you have KVM installed and your processor supports VT-x extensions.

It means they managed to get irix running in qemu. Presumably on an x86 cpu.

Notable because the SGI Indigo had a MIPS R3000A CPU.


> It means they managed to get irix running in qemu. Presumably on an x86 cpu.

Not quite. It means that they got qemu to emulate IRIX's syscall layer on Linux. So you can run, let's say, a MIPS IRIX binary on x86 Linux without having to emulate the entire machine.

No - qemu supports "userland" emulation where the processor ISA is emulated but syscalls and memory are translated to the host OS. The IRIX kernel and OS doesn't run in this scenario.

From my limited understanding stemming from a passing interest of such things: that sounds similar to how WINE operates, is it not?

Sort of.

Wine impersonates OS calls, (including syscalls) but does not perform emulation on the binary itself. Wine can only run windows applications written for x86, but not windows applications written for itanium.

This appears to be running both hardware emulation on the supplied binary, (which is what VMware/KVM/virtualbox etc do) as well as wine-like OS impersonation.

I made up the word "impersonates" for what wine does just to avoid confusion. It's not a word that's used in the literature afaik, although perhaps it (or a word like it) should be.

I think the usual term Wine (plus e.g. WSL1, Darling, Solaris/BSD Linux compatibility shims, etc.) uses is "translate", but "impersonate" does sound closer to what such systems actually do.

Speaking of Wine, you can actually run x86 Wine using QEMU on a Raspberry Pi and run Windows software with it. You essentially chroot into an x86 Debian environment that's running with QEMU, then install Wine in there and run it. There's a product called 'ExaGear Desktop' which makes the process pretty seamless from what I hear.


WINE is not emulating/translating instructions to a different ISA. It rather has a win32 loader and inserts shims to map some calls to Windows library functions and others to native (as in host, e.g. Linux) ones. That's how I understand it. You can, however, theoretically run x86 WINE to run an x86 Windows binary on ARM with QEMU user emulation.

Doesn’t WINE support 16-bit emulation for Win 3.11 stuff because you can’t run 16-bit code in x64 mode? Or was that a Windows limitation?

That's a Windows limitation. x64 chips are plenty capable of running 16-bit protected mode code while the OS runs in long mode. It's just that Windows didn't want to deal with translating HANDLEs back and forth between the two modes.

Wine has never run 16-bit Windows programs in 16-bit mode. They are instead translated to 32-bit at runtime using some magic, especially using 32-bit addresses to emulate 16-bit real mode.

Not to my knowledge, I believe WINE targets Windows 95 and newer. There is no ISA emulation, just DLL and other related Windows emulation.

There is some alpha generic mips support for qemu ( https://www.linux-mips.org/wiki/QEMU ), so it could be a set of patches to run IRIX on Qemu's generic MIPS machine emulator...

I wonder if it'll ever be upstreamed.

One thing I've always been curious about: is there any sort of clear continuity of architecture or design patterns between the games in the Super Mario series? Yes, they're probably all from-scratch rewrites of the engine, but could each successive engine be said to be a "descendant" of a previous one, on a design level?

One thing I know (and can be seen in this repo) is that SM64 emulates a version of the NES/SNES "Object Attribute Memory", as a pure-software ring-buffer. (I'd love to know whether that carries on to later titles like Galaxy, 3D World, NSMB(U), Mario Maker, etc.)

Super Mario 3D World's architecture goes back to Super Mario Sunshine. Some parts go back all the way to Super Mario 64, but not the object / actor management. The ring buffer isn't really emulating OAM, either.

You can trace the evolution of "LiveActor" all the way through until it ends up in Super Mario Odyssey.

Sunshine - https://github.com/shibbo/Corona/blob/master/include/actor/T...

Galaxy 1 - https://github.com/shibbo/Petari/blob/master/include/Actor/L...

Odyssey - https://github.com/shibbo/OdysseyReversed/blob/master/includ...

This architecture was so successful it ended up as the basis for all new Nintendo game development, so Breath of the Wild, Pikmin 3, Splatoon, and Mario Maker all use this new "Actor Library", or "al".

I have not looked at NSMBU, but NSMBWii uses a different core structure originally developed (as far as I know) by the Zelda team. I think it's mostly phased out these days, as is the set of "egg" libraries developed by the Mario Kart: Wii and Wii Sports teams.

> The ring buffer isn't really emulating OAM, either.

I mean, you're right, it's not a literal implementation of OAM in the sense of controlling the same things OAM controls. I was speaking kinda metaphorically.

NES/SNES OAM was useful for reading back entity physics data (because it gave objects X/Y position registers), which meant that developers (incl. Nintendo themselves) often chose to rely on the OAM-object "components" of an entity as the canonical handle for tracking the entity in the game physics (rather than having a table somewhere in work-RAM of separate "physical" components for entities). Games like SMW literally just index a table of actor behaviors off the OAM-object's name-table data; what an entity "is", from the game's perspective, is determined by what it currently looks like!

Since the OAM had a finite size, this reliance on OAM for tracking entities forced games into a structure where entities' lifetimes are coupled to the lifetime of their OAM-object representations. Which meant that every NES/SNES game relying on OAM to track entities needed an algorithm for dynamically allocating OAM-object slots to entities; and so, for evicting entities if OAM was exhausted. (Level design was done with a hard eye for avoiding OAM "thrashing" by keeping entities spaced apart, but the system still needed to be able to handle the case where mobile entities ended up following you and piling up.) Which brought into existence the common OAM LRU cache-eviction algorithm—i.e., the practice of "despawning" the oldest off-screen entities when new on-screen entities need OAM slots.

This determined a lot about the design of these NES/SNES games. It made mobs in these games into things that would lose their state whenever they were scrolled "far enough" off the screen; which in turn forced a design where—rather than a level just running a "start script" that would spawn entities at initial positions, tracking them in RAM from then on—you instead had to adopt a hybrid approach where entities had both an OAM-object representation, and also an associated "spawner" (usually existing just as static level-data in ROM, though sometimes coupled to a bitflag tracking destroyed spawns) that would trigger [re]spawning for the entity.

SM64 is essentially "emulating OAM" in the sense that it assigns entities handles in a fixed-sized buffer, and then uses a very OAM-like logic (basically, "memory pressure" on this buffer) to decide when entities should be de-spawned; and then uses spawners to recreate entities that have been de-spawned due to this memory pressure (meaning that most entities don't "exist" until you get close enough to them.)

SM64 didn't need to do things this way; the N64 has enough RAM to track all the entities in every SM64 map at once, IIRC. They chose to impose this constraint artificially, in order to continue to build SM64 levels according to the design philosophy they had "discovered" due to the original constraints of the OAM system.

Later games in the Mario series, if-and-when they choose to have this de-spawn/re-spawn tracking feature†, are essentially "pretending to have OAM", but not really emulating it the way SM64 does. For example, Mario Maker de-spawns entities when they're scrolled sufficiently far off the screen, in a way that mimics OAM sufficiently well that re-spawning and enemy spawner semantics still work—but which isn't really an OAM-like system, in that there's no static buffer with memory-pressure causing de-spawning (and in fact, as long as the entities are willing to squeeze into one visual screen, existing entities will never be forced to de-spawn.)

† You could get a very interesting analysis of the way Nintendo probably internally divides/project-manages the Mario games, by just determining which titles "emulate" OAM the way SM64 does; which titles loosely mimic OAM, like Mario Maker; and which titles don't even bother with de-spawn/re-spawn tracking at all, but instead have persistent physical entities that just "go quiescent" when they're out of sight. (IIRC there's no Mario title that uses the fourth option—pure view-frustum culling of distant models that continue to "tick" while culled.)
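The eviction pattern described above is easy to sketch. Here is a purely illustrative Python model (all names hypothetical; this is not Nintendo's actual code): a fixed-size slot table where allocating under pressure semantically destroys the oldest off-screen entity, leaving only its spawner to recreate it later.

```python
class Spawner:
    """Static level data: remembers how to recreate its entity if despawned."""
    def __init__(self, kind):
        self.kind = kind
        self.entity = None  # live slot, if currently spawned


class OAMLikePool:
    """Fixed-size slot table with an OAM-style eviction policy:
    when full, the oldest off-screen entity is destroyed to make
    room, and only its spawner remembers it existed."""

    def __init__(self, size):
        self.slots = [None] * size
        self.clock = 0  # monotonically increasing spawn counter

    def spawn(self, spawner, on_screen):
        # Prefer a free slot.
        for i, slot in enumerate(self.slots):
            if slot is None:
                return self._fill(i, spawner, on_screen)
        # Otherwise evict the oldest off-screen entity.
        victims = [i for i, s in enumerate(self.slots) if not s["on_screen"]]
        if not victims:
            return None  # every slot holds an on-screen entity: spawn fails
        oldest = min(victims, key=lambda i: self.slots[i]["born"])
        self.slots[oldest]["spawner"].entity = None  # the despawn is semantic
        return self._fill(oldest, spawner, on_screen)

    def _fill(self, i, spawner, on_screen):
        self.clock += 1
        self.slots[i] = {"spawner": spawner, "on_screen": on_screen,
                         "born": self.clock}
        spawner.entity = self.slots[i]
        return self.slots[i]
```

With a 2-slot pool, spawning a third on-screen entity evicts the off-screen one, and the evicted spawner's handle resets to None until a later spawner pass recreates it.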

> developers (incl. Nintendo themselves) often chose to rely on the OAM-object "components" of an entity as the canonical handle for tracking the entity in the game physics

I don't know how the SNES worked, but AFAIK most NES games did not track objects in this way. Instead, the game engine maintained its own buffers containing object state and copied necessary information to OAM every frame.

OAM only stored graphics state for the rendering hardware, which is not a convenient form for the game engine for a number of reasons. For instance, objects are nearly always composed of several OAM sprites placed next to each other, objects that are not visible during a given frame are not present in OAM, and a single animated object can switch between so many different graphical forms that it would be complicated to identify which object corresponds to a graphics tile from OAM. Additionally, OAM doesn't have extra room for non-graphical object state (like behavior timers or velocity information).
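A rough model of what the parent describes, with engine-owned object state copied out to hardware sprite entries every frame (hypothetical Python for illustration; real NES games did this in 6502 assembly, building a "shadow OAM" buffer that was then DMA'd to the PPU):

```python
OAM_SIZE = 64  # the NES PPU's OAM holds 64 hardware sprites

def build_shadow_oam(objects):
    """Walk the engine's own object table and emit hardware sprite entries.
    One game object usually expands into several 8x8 sprites, and objects
    that aren't visible this frame simply don't appear in OAM at all."""
    shadow = []
    for obj in objects:
        if not obj["visible"]:
            continue
        for dx, dy, tile in obj["sprite_layout"]:
            if len(shadow) >= OAM_SIZE:
                return shadow  # hardware limit: later objects drop out
            shadow.append({"x": obj["x"] + dx, "y": obj["y"] + dy,
                           "tile": tile})
    return shadow
```

Note that the non-graphical state the parent mentions (velocity, behavior timers) lives only in the engine's own `objects` table and never in the OAM copy, which is exactly why OAM alone makes a poor canonical entity table.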

Semi-offtopic, but you've clearly spent a lot of time studying Nintendo's code, from a range of eras... I'd be curious to hear, if you had to make a very broad assessment, how would you rate the quality of Nintendo's programming?

Nintendo is quite clearly second-to-none on the design/creative end, how much does that translate to the technical aspect of game development? Speaking purely in terms of software.

I find this particularly interesting in the context of a company that appears to retain many of the same programmers today as they did 30 years ago, when software development was obviously much different.

Super Paper Mario uses an extremely similar engine as Paper Mario: The Thousand Year Door, which uses a slightly modified version of the Paper Mario 64 engine.

Intelligent Systems seems to have a good head on their shoulders for code reuse. Enough so that I would suspect that their Fire Emblem and Advance Wars series—when they were releasing concurrently—were the same engine underneath.

(Side-note: I've always wondered how the mini-games in IS's WarioWare series work—whether each game is entirely custom code, or whether they've come up with some sort of DSL for specifying reflex games. If the latter, I would bet that that has a decent genealogy too.)

Well, they made a game where you can make your own microgames (D.I.Y.), and I believe an Iwata Asks revealed it was basically a dumbed down version of the internal tools they had been using, at least for the earlier DS WarioWare game (Touched.) Not sure if that quite answers your question, but I would bet it's some kind of DSL interpreted by a microgame engine.

Fun Fact: The minigames of the WarioWare series began life in Mario Studio, the 64DD Japan-only sequel to Mario Paint.[0][1]



Super Paper Mario's movement felt quite similar to Thousand Year Door, which was to its detriment as the former was a platformer and the second was an RPG.

> The former was a platformer-RPG

FTFY. I don't think the RPG elements of SPM should be ignored; the game plays very differently to any of the other Mario platformers.

It may not be to everyone's tastes but to simplify the matter for the sake of a quick jab is hugely unfair, especially given it has one of the most touching stories in the Paper Mario canon.

Ring arrays are so useful it would be unheard of if those games did not use them, regardless of whether it is the NES/SNES "Object Attribute Memory" or something equivalent. Every game, then and now, "should" have one or more ring arrays in it, but sometimes a junior dev, or one in a crunch, will use a linked list in rare situations. A notable example is when Starcraft used a linked list that caused a difficult-to-reproduce bug when certain parts of the code were threaded. http://www.codeofhonor.com/blog/tough-times-on-the-road-to-s... (Found @ https://news.ycombinator.com/item?id=5751702)

It's not that it's a ring array; it's that it's a fixed-size ring array with an eviction algorithm, and specifically one that holds representations of entities, where the entity is considered to be destroyed in a semantic sense if it gets evicted from the ring array.

Picture a background jobs system like Sidekiq/Resque. Imagine that one worker-node of this jobs system had a fixed-size ring array of jobs it had taken. Now imagine that you could push new jobs onto a specific node. And now imagine that the worker-node responded by not just overwriting one of the filled slots of the local jobs set, but actually ACKing said job to drop it from the global job-queue system. It's destroying a real entity with persistent global identity, in order to reclaim the slot that the local representation of that entity takes up.

That's what OAM is, when combined with the design pattern I'm talking about. It's a ridiculous system that'd never fly in a business; but it happens to work for games, where you control the world such that you can make the world hold "reminders" for the state you destroyed.

Zelda OoT was based on the Mario 64 engine

That makes a lot of sense, as it's not like they had a lot of 3D engines for the N64 during launch lol. Wonder if Pilot Wings (for example) also shares a similar rendering pipeline.

PilotWings 64 was made by a separate company (Paradigm), who used a very different structure for their games which feels a lot more "western" to me (the UltraVision 64 "engine" has a large structured data chunk which it reads a lot of stuff from; most Nintendo 64 games don't really have that sort of structure)

Know anything about Turok?

It’s a very simple 3D engine

I really don't think you can refer to these games as using different 3D engines. The 3D capabilities are ingrained in the N64. The SNES likewise didn't have any 2D engines (except maybe for when the extension chips were used). Perhaps what we're talking about are the game logic engines.

Ah, yeah that makes sense (I know more about SNES internals than N64).

I thought it was starfox 64 actually

It was a heavily modified version of Mario 64:

"Miyamoto: We were using the Super Mario 64 engine for Zelda, but we had to make so many modifications to it that it's a different engine now. What we have now is a very good engine, and I think we can use it for future games if we can come up with a very good concept. It took three or so years to make Zelda, and about half the time was spent on making the engine. We definitely want to make use of this engine again."[1]

[1] https://web.archive.org/web/20040619165414/http://www.miyamo...

ONLY 3?!! That game was massive! That sounds like an unbelievable feat of engineering even if the base engine was built off of SM64.

Then they made Majora's Mask in around 18 months (which uses the same engine and assets).

I wonder if Nintendo shared source code with 2nd parties, like Rareware. I know they provided design consultation on Banjo-Kazooie, but perhaps they also provided source code?

There is probably an amount of code that is copied over to the new project that isn't game-specific.

This is cool and illegal. What makes me envious of the West (or countries other than Japan in general) is that this kind of attempt is somewhat condoned and praised, while in Japan there would be a vocal outcry and finger-pointing campaign (with some media exposure) to the point where the author would be forced to shut down the project. It's a blessing that people can pursue things like this, and it's a huge shame that Japan is so anal when it comes to a marginally illegal activity in an open space. (I'm sure some people do it underground though.)

> It's a blessing that people can pursue things like this, and it's a huge shame that Japan is so anal when it comes to a marginally illegal activity in an open space.

I've noticed spillover effects into Japanese gamers as well -- people being suspicious of or derisive about mods, even when they're perfectly legal and the game has built-in mod support (looking at you Monster Hunter World).

My (Japanese) girlfriend is on the very conservative side of the spectrum there and absolutely hates it when I bring up any kind of modding, and so do her friends -- the culture of "authorial intent is king" is very strangely strong for a culture that also appreciates and enjoys doujin.

Doujin works are made with the awareness that they are parodies of the original work. They do not alter the body of the original work in any way and, as the term itself implies, are self-published. They are made without any direct affiliation with the original work.

It's only because of time. If this were done on a newer platform/game or a game not as beloved, it would be closer to what you said.

Yeah and the way the author hedged this risk is by releasing it all at once. Nintendo may shut it down or even bring the author to court but the project is already complete. As long as just one person keeps a copy it will continue to exist and Nintendo can't do anything against it.

Your use of the term “open space” is interesting. The Comic Market could probably be considered a closed space, but 600,000 annual attendees at a convention that glorifies and commercializes copyright infringement (to a good extent) suggests that there are spaces in Japan for this sort of thing.

Doujin works organically grew underground before the internet era. I think the sole reason that doujin work is now somehow tolerated is that it's not a minority anymore. They're big enough to gain public acknowledgement, but if a similar activity were attempted today by a much smaller group, they would be crushed by the public. It sucks to be a minority in Japan.

I am looking forward to the mods that this will enable. I highly recommend trying mario 64 on dolphin EMU at 1080p with a texture pack. An HD mod that added a few more polygons would really round out the experience.

> I highly recommend trying mario 64 on dolphin EMU

Wrong console?

There was a virtual console release for Mario 64, so it's still applicable

Would this help improve the Virtual Console release?

As silly as it may seem to use an emulator to run another emulator, Dolphin makes it quite easy to create and load custom textures, so it's a solid choice in this instance.

It would only help if the virtual console release is emulated.

Mupen64 for N64, with a python GUI. Was playing last night, best Mario ever.

Is a raspberry pi a good-enough platform to run N64 1080P games on?


N64 emulators are all pretty bad (inaccurate, use a decent amount of resources) and upscaling is relatively expensive. At least, way too expensive for an rpi to handle.

It will work fine as an emu at 240p though

The question is now, would it be possible for someone to make a port of Mario 64 that runs on the Pi, instead of trying to emulate it?

Usually after you get source releases to games, you get people that port them to different platforms. Like how we had Doom on iPods and Kodak digital cameras.

There's still a lot of assembly code

N64 only supported up to 240p, or in rare cases, 480i, which is basically the same thing computationally. Displaying on higher resolution just involves scaling (or up-sampling, but at that kind of resolution jump scaling is probably more appropriate).

I haven't tested N64 games on a RPi personally, but I imagine it would have no trouble with it, and there seem to be several retro-gaming projects that involve N64 games and use the Rpi.

Super Mario 64 is a 3D game. The emulator can render the polygons at any resolution. The 2D textures should be replaced or carefully scaled, though.

I got a pi 3b+ as I romanticized the idea of it, but it struggles a bit with SM64. I just use OpenEmu on my higher-powered laptop and HDMI out instead.

Pi format is still fun to tinker with and I encourage you to get one if you're at all interested. The 3b+ just wasn't the right tool for the job in my case. I haven't tried the pi 4, however.

You'll want the Raspberry Pi 4. The 3B+ is not powerful enough for plenty of games.

Not even close.

It would be great if this could be done for games where the source code was lost.

Like Panzer Dragoon Saga.

This decompile was done without the original source code, just the released game, which is effectively the same as being ‘lost’.

The reversed SM64 binary was compiled without optimizations though, IIRC.

Of course you can reverse an optimized binary: just launch IDA and start digging to get an idea of the work. Doable, but of course harder.

I meant where the source code does not exist anymore, anywhere.

Nintendo still has the source to SM64.

Maybe I misunderstood you.

I’m saying this same decompile process could indeed be done to any released game where the source code is lost, because that is effectively what happened in this case.

I think you misunderstood what the parent was saying.

On the technical level, you are correct in that in both scenarios the end result would be the same, as you are going from compiled code to decompiled code.

What I believe the parent is saying, is that applying this to Panzer Dragoon accomplishes more (on the human level), because devs of that game don't have the original source code anymore, while Super Mario 64 devs do.

Maybe the confusion could have been avoided if instead of:

> It would be great if this could be done...

codesushi42 would have said (emphasis mine):

> It would be great if this would be done...

Not that I think it was incorrect as it was, just a little ambiguous, I guess.

Not a native English speaker, but this seems like a nitpicky non-issue to me. How is it not the same as "could you please pass me a glass of water" vs. "would you please pass me a glass of water"? Both indicate a request rather than talking about actual physical ability to perform the action.

Also, I would agree more with your point if the parent said "It could be great if this could be done..." instead of "It would be great if this could be done". The first "would" seems to indicate to me pretty clearly that the parent was talking about a request rather than ability.

> Not a native english speaker, but this seems like a nitpicky non-issue to me. How is it not the same as "could you please pass me a glass of water" vs. "would you please pass me a glass of water"? Both indicate a request rather than talking about actual physical ability to perform the action.

in the case of something like the glass of water, "could" makes the sentence more indirect, and more polite.

the original post is "it would be great if [huge task undertaken by unspecified persons] could be done". this native speaker would not attempt to polite-ify a request for something like that (and i don't think other native speakers would either), so the original post can't be making a request. it is expressing a hope that the thing is possible. mburns (reasonably) then explains that it is possible. then codesushi42 sort of goes off the rails, and i can't figure out what they're attempting to convey at this point.

Huh? I wasn't making an appeal to anyone, so your point is moot.

Context is important. Why bother decompiling a game if you have the source already? Of course I meant decompiling games for which there is no source code available on any machine. Nintendo has the source for SM64.

What a ridiculous load of pedantry.

It isn't pedantry. Expressing a wish that something "could" be done is ambiguous. "Could" is both used as you originally intended and as an expression of ability. It's not pedantry to misunderstand, and it's not pedantry for GP to explain why the misunderstanding occurred.

A misunderstanding occurred. The misunderstanding was clarified, acknowledged, and explained. I'm not sure it contributes anything to make accusations of pedantry.

Yes, this thread right here officer.

> Both indicate a request rather than talking about actual physical ability to perform the action.

That's why I said I didn't think it was incorrect, but merely ambiguous. The use of "could" could also be interpreted as talking about the physical ability to perform the action. That was precisely how mburns seemed to have interpreted it. My comment was merely trying to clarify your explanation with a simpler version that tried to eliminate the ambiguity that was probably the source of the confusion.

I got an upvote for that comment. Maybe it was them and it did work.

> "It could be great if this could be done..."

That sounds like one wouldn't be sure if it would be great or not. I don't think anyone meant or interpreted that.

This and the comments below are missing the source of this disagreement. The way that conditionals are most commonly structured in English has rapidly shifted over the last 10 years. The simplest way to explain is with examples.

Old style: "If I had tried, I would have succeeded."

New style: "If I would have tried, I would have succeeded."

The extra "would" style used to be restricted only to adding strong emphasis, as in "if you would just LISTEN to me...". Slowly, this extra "would" has crept into other areas, like replacing the subjunctive as in the example:

Old style: "It would be great if this were done."

New style: "It would be great if this would be done."

The new style is "incorrect" English as of a couple of decades ago, but its usage is increasing. It still sounds terribly wrong to my ear, but what determines whether grammar is "correct" is the way in which people actually speak.

How was the source code lost? Did the company that produced the game go out of business?

Sega is still in business, but the code was lost.

Disks fail. Machines are thrown away. Who knows how it happened, but it isn't a rare event, sadly.

This is the "official" release, where someone from the team that was working on the decompilation is making it public rather than just a random person on the Discord.

But not much has changed, I guess it's hard to make progress in a month.

It's interesting there are bits of code that don't have a purpose, and may have been there to support a second player. For example here:

    > This is evidence of a removed second player, likely Luigi.
    > This variable lies in memory just after the gMarioObject and
    > has the same type of shadow that Mario does. The `isLuigi`
    > variable is never 1 in the game. Note that since this was a
    > switch-case, not an if-statement, the programmers possibly
    > intended there to be even more than 2 characters.

And more results when searching for "luigi":


I vaguely recall reading that the multiple characters in SM64DS were a feature that was cut from the original game. Am I hallucinating or did Nintendo say that somewhere?

(The additional characters in the DS remake were horribly unbalanced, so I wonder if the earlier implementation would have been better...)

"For Windows, install WSL and a distro of your choice and follow the Linux guide."

I love these instructions!

Also, I'd love to see this converted to a native executable. I wish Nintendo would actually allow that, although I'm sure they wouldn't.

If I remember correctly, some time ago I saw a video from someone who managed to build a substantial part of SM64 as a native executable and was able to verify that tool-assisted runs ran perfectly in it (hence it being accurate). The video displayed the game as a wireframe and had no audio, since those parts are surely tied to the N64 hardware.

I cannot figure out the right keywords to find it again, but you may be able to if you are interested.

EDIT: Even though I can't find the video anywhere (I promise it existed!), from https://warosu.org/vr/thread/5644072

"To answer your questions, yes: This is a full source code which can be recompiled with modern toolchains (gcc: ive done this already) and even target other platforms (PC) with quite a bit of work. There already exists some proof of concept wireframe stuff."

A native executable? It's not like the Nintendo 64 was using DirectX

You'd need to emulate/simulate/shim all the graphics calls and state changes, but that shouldn't have any bearing on the actual code architecture. In fact, given that Dolphin uses a JIT, you could argue that this already happens to some degree when you're playing Gamecube games, having the source just allows ahead-of-time compilation.

It's a decompiled result, it's incredibly unlikely the comments are from the original code, rather they'll have been placed there by the people doing the decompile.

Ah, I wasn't aware of that. I thought it might include those.

That's a shame!

Nintendo will request a takedown the second they see this, no?

pokered and its derivatives have been on github for many years. As long as it stays to a small scope Nintendo seems content to let these small projects be. That could change at any second though. Clone while you can.

What for? There's no copyrighted material in this repo.

The copyright status of explicitly decompiled source is still unproven in the US, as far as I know. SAS vs World Programming seems to indicate that decompilation followed by reimplementation over a wall is probably not infringing, but I don't think a case has been tried around the direct output of a decompiler (i.e. OpenRCT, this SM64, etc.).

There is pretty-much only copyrighted material in this repo.

Copyright is not purely literal, especially when it's copyright of computer code...

Obviously IANAL, but my understanding was that conversion and translation is deemed to be under the same copyright as the source.

The models and sounds aren't copyrighted? Because they are totally within the GitHub, there...

I didn't look through the repo, but given that the linked README talks about needing an original version of the ROM in order to extract assets, I would guess they're not in there?

The models certainly are. Don't think the textures are.

A number of assets, including textures, audio, and prerecorded demos, have been stripped out of this release.

Is there a usb compatible n64 controller?

Part of what makes this game such a watershed moment for 3D gaming is how the controller was designed to maximize its potential.

To this day Mario 64 is one of the best games ever made.

8bitdo once had an N64 controller, but it seems like it's not available anymore; maybe you can find a used one somewhere.

Also there are N64-to-USB "controller converters" available on e.g. Amazon, and iNNEXT has a Retro 64-Bit N64 Controller on Amazon as well.

Don't know if any of these will work for you, but at least the iNNEXT is mentioned in the RetroPie wiki[0].

[0] https://retropie.org.uk/docs/Nintendo-64/

What an awesome project. I would love to mess with random stuff like whirlpool strength and see what it does to the game. Efforts like this to make the decompiler output intelligible e.g. meaningful variable names make this much more approachable for a technical person like me without much of the niche platform-specific reverse engineering skills. In fact there are countless games I'd love to dive into like this.

Train a ML system on a range of parameters (whirlpool strength) until you have a decent port of the game to a neural network and/or tree-based algo. Then try to optimize the game based on people’s enjoyment.

I think it would be much easier to change the parameters directly in the original program. Just have to use a gradient-free optimizer.
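A gradient-free optimizer here can be as simple as random search over the parameter range. "Whirlpool strength" and the enjoyment score below are purely hypothetical stand-ins:

```python
import random

def random_search(score, low, high, iters=200, seed=0):
    """Gradient-free optimization of a single game parameter:
    sample candidate values uniformly and keep the best-scoring one."""
    rng = random.Random(seed)
    best_x, best_s = None, float("-inf")
    for _ in range(iters):
        x = rng.uniform(low, high)
        s = score(x)
        if s > best_s:
            best_x, best_s = x, s
    return best_x

# Hypothetical "enjoyment" score peaking at whirlpool strength 4.0:
fun = lambda strength: -(strength - 4.0) ** 2
best = random_search(fun, 0.0, 10.0)
assert abs(best - 4.0) < 0.5
```

In practice the score would come from playtesters rather than a closed-form function, but the sampling loop is the same.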

Yours is a better technical approach to the vague dream I had described.

Am I being downvoted because I don't have an optimal solution to some problem?

I don't know much about the reverse engineering field, but more than 70% of the code is assembly. Is extracting assembly still considered decompiled?

Outside of a few audio and PAL routines (see asm/non_matchings), everything that was written in C has been decompiled back into C. There are a few routines written in ASM, like the boot code and some of the SDK code.

Most of the other "assembly" files are for data, like the level scripts. It's not assembly of machine code.

You could try to put those into C, but you're not gaining much--assuming that it's even something that can be represented in C without a bunch of fancy compiler specific tricks. You'd be better off creating a DSL or a custom program suite, which is probably what Nintendo was doing 25 years ago.

The real effort here is cleaning up the assembly, and as you mention it's nowhere close to being done, but it keeps getting posted every once in a while. Here's another post from a month ago: https://www.reddit.com/r/programming/comments/cbvl6l/super_m...

So how do people do this?

Manual inspection?

They got the original compiler running and made source that compiles to the same ROM.
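The verification step reduces to a byte-for-byte comparison of the rebuilt ROM against an original dump. A sketch (the file names here are made up):

```python
import hashlib

def sha1_of(path):
    """Hash a file in chunks so large ROMs don't need to fit in memory."""
    h = hashlib.sha1()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(65536), b""):
            h.update(chunk)
    return h.hexdigest()

def matches(baserom, built):
    """True only when the rebuilt ROM is byte-identical to the original
    dump -- the project's working definition of a correct decompilation."""
    return sha1_of(baserom) == sha1_of(built)
```

Because the comparison is against the exact original bytes, the source has to compile with the original compiler and original flags, not just produce equivalent behavior.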

How does that work? They still manually reversed the disassembly right?

It's amazing the level of effort and work that has gone into this and here I'm trying to finish a 100 line side project. :-]

You can get a lot more done with a team of people who are also getting paid.

Who was funding this decompilation? Was it from the speedrunning community?

Once the inevitable native ports start appearing, I’d love to see an OpenGL 1 graphical backend so it can run natively on IRIX

pannenkoek2012 would probably find more glitches, assuming this doesn’t get taken down by Nintendo given their aggressive stance on copyright

They wrote it in assembly, nuts!

Binary executables are machine code. One step from machine code is assembly. It's easy to translate machine code to ASM (because you are just reversing the op code and adding data structures); from there it gets hard, because compilers do all sorts of tricks to create performant assembly and throw away hints about code structure (e.g. a simple overloaded function may become an ASM routine with 30 parameters depending on how it's called, or vice versa... it's like trying to recreate HD video from MPEG-1; entropy has been thrown away). So decompiled code is usually left in assembly. Sometimes an effort is made to create the C equivalent, but that's a maddening effort.
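To illustrate the easy half: decoding a 32-bit MIPS word back into a mnemonic is purely mechanical bit-slicing. A minimal sketch handling a few I-type instructions (register naming simplified; this is not a full disassembler):

```python
# Opcodes for a handful of MIPS I-type instructions (top 6 bits of the word).
I_TYPE = {0x09: "addiu", 0x23: "lw", 0x2B: "sw"}

def disasm(word):
    """Decode one 32-bit MIPS instruction word into assembly text.
    I-type layout: opcode(6) | rs(5) | rt(5) | immediate(16)."""
    op = word >> 26
    rs = (word >> 21) & 0x1F
    rt = (word >> 16) & 0x1F
    imm = word & 0xFFFF
    if imm >= 0x8000:          # sign-extend the 16-bit immediate
        imm -= 0x10000
    name = I_TYPE.get(op, ".word")
    return f"{name} r{rt}, r{rs}, {imm}"

# 0x27BDFFE0 is the classic function-prologue instruction
# "addiu $sp, $sp, -32" (register 29 is the stack pointer):
print(disasm(0x27BDFFE0))  # addiu r29, r29, -32
```

Recovering loops, structs, and function boundaries from thousands of such lines is where the real (and undecidable-in-general) work begins.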

More than likely SM64 was written in C with some critical performance parts in ASM (like mode7 and some of the OAM stuff other threads talk about).


They've gone a lot further than that: assigning names to tons of stuff, adding comments, organizing all the files, and setting up a full build process so you can recompile a new ROM with modifications.

One wonders what the point is of publicly shitting on people’s work, like you just did.

Is the problem that there's no point or that you're being flippant, dismissive, and too lazy to see the point? Have you even taken a look through the code?

Let me just say this. Although not a complete restructuring, it's a TON more readable than a bog-standard decompilation of the ROM. This is something you'd know if you spent 10 minutes reading it.

> What's the point?

Mods! Much easier to modify source code than a compiled binary.

You get all sort of interesting things to come out of it too... like an MMO version of Mario 64

A decompiler cannot name your variables correctly. This project actually recovers the meaning behind all memory.

Same as any other impractical-but-fun project that gets featured on HN. It's weird to me that people keep being surprised.

It's more an indicator that apparently either not many meaningful things happen (which I doubt) or somehow people use this as some retro-wanking where they imagine "Uh yeah, back in high school I was also working on stuff like this, those were the days".

What I am saying is that it doesn't surprise me people do these projects, what surprises me is that enough people care about them for it to make the front page of HN.

On the contrary, I feel like these kinds of projects are exactly what should be making the front page of Hacker News.

It's an indicator that different people are interested in different things. I'm not sure why you're determined to spin this into a personal failing.

I don't get it... If you acknowledge that there are reasons why people would be interested in participating in this project, then why wouldn't people be interested in reading about it?

yes, clearly a site called 'hacker news' should have no such articles.

'useless' projects like this are much easier to share than useful projects that might be tied to a company and therefore hard to release.

The point is that it's awesome!

lol i love seeing all the replies to comments on this that start with "No."
