SNES Cartridges and Enhancement Chips (twitter.com)
256 points by ilamont 24 days ago | 110 comments



A little bit meta here, but I know it's going to come up: somebody is going to complain that Twitter is the wrong way to write this content. How they loved it but were infuriated to read it in snippets. Pre-emptively, I'm copy-pasting an earlier comment of mine on this:

> Heh, from foone themself: https://threadreaderapp.com/thread/1066547670477488128.html

> Not to humblebrag or anything, but my favorite part of getting posted on hackernews or reddit is that EVERY SINGLE TIME there's one highly-ranked reply that's "jesus man, this could have been a blog post! why make 20 tweets when you can make one blog post?" CAUSE I CAN'T MAKE A BLOG POST, GOD DAMN IT.

> The short story is they are quite open about having ADHD, and that's what causes the long twitter rambles, but also what makes it very difficult to assemble it into a blog post. Every once in a while foone's wife will edit a popular thread into a blog post, examples: https://foone.wordpress.com/

> If this seems like a particularly good story to you, maybe you'd like to edit it into a draft for a blog post and gift that to foone?

https://news.ycombinator.com/item?id=19884445

And yeah, as soon as I saw "SNES Cartridges and Enhancement Chips" and the twitter domain, I felt about 90% sure it would be linking to a foone tweetstorm.


I feel quite the converse..

..I accept twitter is here to stay, but always thought it was a pointless snippet-fest good for nothing aside from millions being able to pile on praise/social-shaming.

Looking at this thread, it actually makes sense for once. It's a successive sequence of info & pictures that follows a coherent thread. Great, just like any other blog.

However, if you're interested in any particular paragraph, you can click and see a whole discussion on "just that paragraph".

Not for one moment saying twitter's won me over, but it provides a cleaner way of annotating a story than any other platform I'm aware of.


I actually agree with this. Twitter is far from my favorite way to consume content, but I really enjoyed the format this was presented in.


I hadn't yet clicked on the link, but as soon as I read your comment I knew exactly who the author was going to be.

Foone's posts are the only content I enjoy reading on Twitter. He's mastered the format.


That’s an interesting way to look at it, and it got me thinking about how one might implement a blogging platform where discussion is more like comments in the margins of a book than some footnotes at the back of the book.


> if you're interested in any particular paragraph, you can click and see a whole discussion on "just that paragraph"

Only if you're subscribed.


Medium did that. I don't know if it still works; I haven't seen a Medium post in anything other than elinks in a while.


> Not for one moment saying twitter's won me over, but it provides a cleaner way of annotating a story than any other platform I'm aware of.

Medium? It seems specially built for that.


It's not really the same. Medium turns every comment into its own full-fledged blog post. That's a lot of unbounded space if the restrictions of the format are what let you get the words out.

Mastodon does threads like Twitter with a little more room without most of the downsides. I've thought about setting up a managed Mastodon instance (with masto.host) to put under my domain to use as a sort of public outliner.

https://en.wikipedia.org/wiki/Outliner

That's what most people seem to use Twitter for anyway.


I wonder how long it will be before there's a WordPress theme that mimics these tweetstorms.


meh. Tweet Stream of Consciousness Processors as a Service are the next new thing. A little ML, some FaaS handwaviness, and voila: Utterances as an Article as a Service for all!


Why does ADHD prevent someone from making a blog post, and how does posting it on twitter change that? Sincerely curious.


I can't speak from experience, but I can try to relay how I understand it from others. Part of the pattern of behaviour some people with ADHD describe is intense focus on topics, with no sense of when or what topics this will happen to. So someone may be intending to only post one or two tweets worth of interesting notes on enhancement chips, but find themselves going on and on without realizing that they've just spent an hour talking about the topic. They never intended to create blog-length material on the topic, but there it is.

The converse happens too: they can desperately want to write a long-form article on a topic, and just can't bring themselves to start it. The blank blog page is imposing, and they don't know where they would start, and maybe they'll distract themselves for a minute with something, and whoops, now they're very focussed on that distraction for a few hours.

I'm sure there's a lot more subtlety to this than I'm describing, and there's a real diversity in people's experiences with this, but that's one way I understand it can lead to this situation.


This is a point often missed by people not afflicted by AD(H)D: it is not that people affected by it are incapable of focusing on things; it's that they are incapable of controlling what they do or don't pay attention to, for how long, or how intensely.


As an example of this: yesterday I sat down just to add a “testimonials” section to one of my websites. By the end of the work day, I’d redesigned the entire site (which I’d just done back in January), something I had no intention of doing when I first sat down.

I got nothing else done that day, all because I got a whiff of inspiration, causing me to lose all sense of time and place in an obsessive pursuit of bringing a fleeting vision to life. I didn’t eat breakfast, and I didn’t stop to eat lunch until my stomach hurt. There were several times during the day where I caught myself forgetting to breathe.

But I’m mostly happy with how the website turned out, so there’s that. There are still some things that need to be done, and it’s really, really hard for me to put the brakes on it over the weekend. Hell, here I am talking about it on Saturday morning.

Come Monday, maybe I’ll unravel a clumsily packaged mental model of what needs to be finished up in a whirlwind of keystrokes.

Or, just as likely, I’ll get annoyed by some trivial problem with a piece of code I’ve written, and go down a rabbit hole of studying alternative approaches until I find something I’m happy with, at which point I’m already waist-deep in the middle of some new project that may or may not ever see the light of day.

Equally likely is the possibility that too many sleepless nights will have sapped any semblance of focus and I’ll find myself just mindlessly going through the motions of being a semi-functional adult, forgetting all about finishing up this redesign.

Or, maybe I’ll get frustrated by a bug and decide to take a break from software development and come back to it later, like the time in 2012 when I realized I’d accidentally trashed the source control on a project while trying to roll back a crappy refactoring job I’d just done, resulting in a five-year gap on my resume and GitHub commits.

Thankfully, I’m self-employed and married to an incredibly understanding woman. ADHD is a hell of a disorder.


As someone with ADHD, this feels rather accurate. Thanks for the explanation!


As someone who loves retro tech (having started on a VIC-20), I love stuff like this! If Twitter is the method the author picked to share, I am happy that he shared. If it is not your cup of tea, do not read it, or take the time to re-create it in a format that you think works better.


From foone themself:

> they don't have the endless editing I get into with blog posts

I don't know if this is true or not; however, Twitter is one of the only platforms that doesn't have an edit button. So it makes sense that Twitter may help remove, for people with ADHD, the sentiment of "oh, maybe there is something I can still improve about this post" before publishing it.


I know with my OCD, it's easier to send a single line/short message than write a post and sit there staring at it whilst I overthink it (once I've sent it, it's not worth changing), so this may be similar.


I think it only makes it worse by adding a fear of not making it perfect the first time.


Maybe for you. Clearly not for foone, and not for me either. It's not that I have a fear of making something imperfect; it's that an unpublished blog post is an invitation to improve "just one more thing" before publishing... forever. Additionally, expectations are different on Twitter.

Deleting a single tweet also feels like a much easier decision (because of the smaller impact) than deleting a blog post.


I dunno, maybe he should work on that? I mean, you can’t let your weaknesses define you forever, right?


Maybe they should. Maybe they've got more urgent things to work on, like weaknesses that actually hold them back in life rather than producing fun-but-ultimately-unimportant information in your preferred format.

Maybe they're doing this for fun, and don't want to turn every aspect of their life into a self-improvement slog.


True, but learning how to write is pretty important in many aspects of life.


I’ve got ADHD pretty bad, and I can definitely understand why that’s an impediment to blogging.

For me at least, the “big picture” of a concept is definitely there, but it’s in the back of my mind. I can access it and talk about it, but particularly when writing, it takes serious concentration for me to convey that information to others, because in general, I’m more interested in fully understanding the mechanics of particular components of the picture.

When I attempt to write about a topic in-depth, I’m typically doing it in one of two ways: either it’s stream-of-consciousness writing, full of digressions and errors, or it’s the result of a lot of planning, in which case it often takes me so long that I just give up before I finish.

I’m not a big Twitter user at all, but I have to admit that I’m drawn to the idea of writing about my interests in a more granular manner than traditional blogging generally permits.


The medium comes with lower expectations of structure.


That's a really cool way to write. I feel like it could be a methodology for students with ADHD (and maybe an addiction to their cell phone) to flesh out ideas for a paper and then reorganize them (like Trello columns) to get their base structure.


Fun fact about the SuperFX chip: it was turned into its own commercially-licensable IP core (the "Argonaut RISC Core", or "ARC"), which became its own business, similar to ARM Holdings: https://en.wikipedia.org/wiki/Synopsys#ARC_International

Other fun fact: for a long while, Intel used an ARC core in its Management Engine.

That's right—not only can your motherboard run rootkits; it can also run StarFox ;)


It's a totally different core and ISA FWIW; they were just both designed by Argonaut. The ME one is a pretty standard RISC, and the SuperFX one would be difficult to call a RISC given that it's two-address, has pretty complex memory instructions, etc.


> Other fun fact: for a long while, Intel used an ARC core in its Management Engine.

I believe their wireless cards (at least the 3945ABG) also used it; I remember the disassembly looked right, but never had the time to go through with the rest of that project...


Interesting fun fact!

How is it that a chip used for 2D/3D operations gets used in the ME though?

I would've thought they were pretty different, but I'm no expert, so... any ideas?


The chip never did such operations... instead they ran it as both a secondary processor and a math DSP. The operations otherwise were entirely software, and conventional.

Another interesting one was the SA-1 chip used in a few games, such as Super Mario RPG. It's a SNES-compatible 65C816 that is 3x faster, has its own RAM bank, and can interrupt and be interrupted by the main CPU, among other features; it was used to offload a lot of things, including some graphics tasks.


The SuperFX was pretty dedicated to graphics. It had stuff like a "plot pixel" instruction.


For X86 at least, there are different names for the same instruction, depending on what you want to do. So this could be an instruction that has other, more general uses.

That said, digging around I haven't found a reference to Intel using the chip, and another responder claims it was a different chip by the same designers.


Intel used ARC CPUs in the Intel ME before Skylake, a descendant several generations removed from the one designed by Argonaut. It's used in a few other places in IoT-land as well. ARC is a well-understood architecture that is easy to modify for individual customers, and can be fabbed anywhere.

Another interesting CPU that made its way around was the family that the SPC700 (SNES) and SPC1000 (PS1) audio DSPs belong to. They've shown up in a few embedded devices that needed to do audio work, such as inside AVRs and other similar devices.


They aren't the same architecture at all.

SuperFX is a CISC: byte opcodes with prefixes, complex memory access instructions, two-address (the destination doubles as a source, as in r1 += r2), etc.

ARCompact is a RISC: 32-bit instructions with a 16-bit subset that expands to what you could encode with the 32-bit ones, a fairly simple load/store arch, three-address ops (r1 = r2 + r3), a huge register file, etc.

They're about as different as two archs can get; they're just made by the same people.


Not sure who told you it's CISC; the SuperFX GSU is a rather conventional RISC CPU.

https://www.eurogamer.net/articles/2013-07-04-born-slippy-th...

The best quote, from Jez San, one of Argonaut's founders, is, "At the time that it came out, it was also the world's best-selling RISC microprocessor until ARM became standardised in every cellphone and took the market by storm."


Because I've coded for it.

Name another RISC with byte opcodes with prefixes that is variable-width depending on the arguments. That's about as CISC as you get.

It was pipelined, and at the time people for some reason thought that you couldn't pipeline a CISC and therefore it had to be a RISC.


AVR32, RISC-V, PowerPC, and ARM's Thumb all have variable instruction encoding. The SuperFX came out after the Pentium, the first consumer superscalar pipelined CISC CPU, and not the first CISC to do it.


None of those use byte opcodes with prefixes.

None of those are variable width depending on what type of argument you have.

Also, PowerPC isn't even variable width at all.

And the Pentium was the first pipelined CISC microprocessor, i.e. a single chip. At the time there was a holy war going on, with one side being of the opinion that you shouldn't pipeline single-chip processors, but instead rely on Moore's law. The thought was that the mainframe-style multichip modules needed a pipeline to account for off-chip delays, but that with everything on the same die it was unnecessary and created too much unpredictability with pipeline bubbles, etc. Those people were obviously wrong and lost.

Edit: also, the Pentium was released a month after StarFox (I was looking into it), and when you account for the long lead times of hardware, their statements make sense timeline-wise. Particularly when you consider that RISC was a huge buzzword at the time and being misapplied, sort of like how now everybody doing a simple linear regression talks about all the machine learning they're doing.


The plot pixel instruction was a 'write to SNES's custom tile/attribute format as if it were a linear framebuffer' instruction. Pretty difficult to use generally.
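
To make that concrete, here's a rough C sketch (names assumed; layout per the SNES's documented 4bpp background format, not from any actual SuperFX source) of the read-modify-write gymnastics one plotted pixel implies:

    /* Plot one pixel into SNES 4bpp planar tile data. Each 8x8 tile is 32
       bytes: bitplanes 0/1 interleaved per row in the first 16 bytes,
       planes 2/3 in the second 16. */
    #include <stdint.h>

    void plot_4bpp(uint8_t *vram, int tiles_per_row, int x, int y, uint8_t color)
    {
        int tile = (y / 8) * tiles_per_row + (x / 8);
        int row  = y % 8;
        int bit  = 7 - (x % 8);            /* leftmost pixel is the high bit */
        uint8_t *t = vram + tile * 32;

        for (int plane = 0; plane < 4; plane++) {
            /* plane pairs are interleaved per tile row */
            int off = (plane / 2) * 16 + row * 2 + (plane % 2);
            if (color & (1 << plane))
                t[off] |=  (1 << bit);
            else
                t[off] &= ~(1 << bit);
        }
    }

So a "linear" horizontal line actually walks all over VRAM, touching four scattered bytes per pixel.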


Along these lines, Tom7 invented* 'reverse emulation', which effectively uses a Raspberry Pi as a co-processor, allowing him to run [spoilers ahoy!] an SNES emulator on an NES. [/spoilers]

The talk is brilliant: https://www.youtube.com/watch?v=ar9WRwCiSr0

* - as far as I know...


With regard to compression chips: if someone can substantiate this vague recollection I have, it would be great.

I seem to recall Nintendo charged a per-unit royalty based on the ROM size of the cartridge, which would make including a dedicated decompression chip reasonable, particularly since such chips are likely fairly simple to design anyway.

[edit] In addition, the two games listed as using the S-DD1 were 32 Mbit and 48 Mbit ROMs (there were very few 48 Mbit ROMs, and 32 Mbit was considered "large"), so these were already fairly expensive cartridges.

Found a pic of the Star Ocean PCB; note that the S-DD1 takes up less PCB space than even one of the two ROM chips:

https://snescentral.com/pcbboards.php?chip=SHVC-LN3B-01

[edit2] The SF Alpha PCB was much simpler by comparison:

https://snescentral.com/pcbboards.php?chip=SHVC-1N0N-01


This is true (the royalty varying with the size of the memory), but the royalty was also significantly higher if there was any custom hardware involved on the cart - specifically because you actually had to purchase the "Game Pak" from Nintendo, so they would charge you "whatever they could get away with" for enhancement chips.


Any idea if Enix was already locked into buying the Game Pak for Star Ocean because of its battery-backed SRAM, or not?


All the game paks (aka cartridges) were manufactured by Nintendo. By selling a licensed game, they were already locked in.


So it's just a question of "was the S-DD1 priced cheaper than another (or larger) ROM chip by Nintendo" right?


I remember Street Fighter 2 being something like $69 (in ~1992 USD) when it came out, and it was something like 16 megabits. A 48-megabit ROM would have easily been $100+, if not $150.


Fun fact: an inflation calculator says that a $50 video game cartridge in 1992 would be equivalent to $92 today. DLC and terribly unethical video game publishers aside, for the non-piracy crowd, we live in a golden age of affordable video games at $20 to $35 each.

If you can be content with buying xbox one and ps4 games on ebay a little while after they're released, you're unlikely to ever pay more than $20 for a game.
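
Rough sanity check on that figure, using annual-average CPI-U (roughly 140.3 in 1992 vs. 255.7 in 2019):

    $50 x (255.7 / 140.3) ≈ $91

By the same ratio, the $69 Street Fighter 2 mentioned upthread works out to roughly $126 in 2019 dollars.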


Yeah. Consider that the MSRP of the original Legend of Zelda for the NES was $49.99 in 1986. That title took a team of ~7 people ~2 years, and several team members spent 5-7 months of that time also working on Super Mario Brothers, so it would probably have taken less time if not for that.

The typical price for a modern AAA title in 2019 is still only $59.99. These games will have 30 or so people at the absolute minimum, and having well over 100 people is not at all rare. They often take 3 years or longer to make.

Even if you don't take inflation into account, such a small increase in nominal price despite enormous increases in development cost is rather remarkable. Then consider that inflation means the real price of AAA games is basically half of what it was, and that B-list games are about half of that yet again...

Honestly, I'm not surprised by DLC and microtransactions. The normal list price of AAA video games is substantially lower than it should be. My gut tells me that in most other markets, the price would have risen to at least the $80-100 range, if not more, even taking into account the much larger sales volumes these days.


Yeah, well, back then game development was hard: it required deep knowledge of the hardware and a lot of critical thinking to implement the software tricks needed to overcome the hardware's limitations. Those devs were as close to wizards as you could get.

Later, hardware resources became cheaper and less of a concern, and SDKs made development a lot more accessible to people who didn't have a master's degree in EE, so the workforce could be cheaper and more plentiful, culminating in the game industry becoming the sweatshop it is today.


I don't think it's got anything to do with game development being hard. Spectrum games were hard to develop, but games were cheap(er) and easily copyable, and one company didn't have a monopoly on production. I assume the same dynamic existed for the C64 also.

I suppose you could make the case that game development is always going to be hard; people are always going to push the abilities of the platform, etc. It isn't like today's titles are tossed out on a weekly basis.


High margins combined with more customers cause this to make economic sense, as the dev costs are spread across a larger base.


There was also a Super FX 2 which powered Doom, among a few other games. Notably Winter Gold looked quite good given the platform's limitations: https://www.youtube.com/watch?v=9uQlCZVb_Ro


For a moment I thought this was an old Amiga 500 demo (older than this game): Spaceballs' "State of the Art" [0].

Apparently the same author did the rotoscoped animations for both.

[0]: https://www.youtube.com/watch?v=89wq5EoXy-0


Wow, yeah, this is fantastic, thank you for sharing - I love seeing the development of early polygonal 3D, such as Virtua Racing and Starfox, but I’d never seen this! Very cool.


Never heard of this game. It has a totally different visual style from anything else on the platform. Very impressive.


Software implementations of these chips can be found here:

https://github.com/byuu/higan/tree/master/higan/sfc/coproces...


My favorite bragging point: the code in the necdsp folder there (and the CPU core itself at component/processor/upd96050) was used by Stephen Hawking's team when they emulated his old voice machine, which used the same chip.

http://pawozniak.com/emulator/

Pretty cool that the same NEC uPD772x chip was shared between a text-to-speech engine and Super Mario Kart.

If you want to know everything there ever was to know about the chip, I mirrored every PDF datasheet I could find on it on my site:

https://byuu.net/datasheets#uPD7720

It has one of the wildest ISAs I've ever encountered, using a strict Harvard architecture and 24-bit VLIWs (very long instruction words). Every opcode can do over a dozen different things in the same instruction at the same time. Writing code for it is quite the challenge.
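
For a rough idea of what "a dozen things at once" means, here's a sketch of decoding one 24-bit OP-type word into its parallel fields (field layout paraphrased from the datasheets; treat it as approximate):

    /* Decode one uPD7720 "OP"-type word. One word simultaneously picks an
       ALU input, an ALU operation, an accumulator, data-pointer
       adjustments, and a register-to-register move. Bits 23-22 select the
       instruction type and are not shown. */
    #include <stdint.h>

    typedef struct {
        unsigned pselect;  /* 2 bits: ALU input source        */
        unsigned alu;      /* 4 bits: ALU operation           */
        unsigned asl;      /* 1 bit : which accumulator       */
        unsigned dpl;      /* 2 bits: low data-pointer adjust */
        unsigned dphm;     /* 4 bits: high data-pointer XOR   */
        unsigned rp;       /* 1 bit : decrement ROM pointer   */
        unsigned src;      /* 4 bits: source of the move      */
        unsigned dst;      /* 4 bits: destination of the move */
    } op_fields;

    op_fields decode_op(uint32_t w)    /* low 24 bits hold the word */
    {
        op_fields f;
        f.pselect = (w >> 20) & 0x3;
        f.alu     = (w >> 16) & 0xF;
        f.asl     = (w >> 15) & 0x1;
        f.dpl     = (w >> 13) & 0x3;
        f.dphm    = (w >>  9) & 0xF;
        f.rp      = (w >>  8) & 0x1;
        f.src     = (w >>  4) & 0xF;
        f.dst     = (w >>  0) & 0xF;
        return f;
    }

Every field executes in the same cycle, which is why a single "opcode" does so much at once.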


> My favorite bragging point: the code in the necdsp folder there (and the CPU core itself at component/processor/upd96050) was used by Stephen Hawking's team when they emulated his old voice machine, which used the same chip

Wow, that really is amazing. The diversity of uses cannot be foreseen, and it's a good argument both for documentation at the time and for emulation as usable history.

I’ve followed your work for a while, spent many hours playing Super Mario 3 on your emulators and reading your dev blogs.

Thanks for all you give.


The owner of this repo also has some good blog posts about this topic, including an overview of some of the custom chips and their impact on the emulator they also maintain (bsnes):

https://byuu.net/cartridges/boards


Thanks, @byuu!


bsnes is now named higan, and does several systems besides SNES.


Nope, byuu spun bsnes back into its own project.


Here is a very detailed programming guide for the Super FX:

https://en.wikibooks.org/wiki/Super_NES_Programming/Super_FX...


This thread mentions the Super Gameboy. There's a deeply fascinating series of blog posts about the Super Gameboy that everyone should read:

https://loveconquersallgam.es/post/2350461718/fuck-the-super...


Fascinating --


This strategy was used in most cartridge-based systems that became popular enough to warrant the engineering effort. The original NES had "mappers", which accomplished similar feats, mostly expanding ROM/RAM, but also enabling interesting tricks like triggering interrupts when the PPU (Picture Processing Unit, nowadays called a GPU) took certain actions. https://wiki.nesdev.com/w/index.php/Mapper

All of this was to squeeze more performance and make more advanced games.
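
For a feel of how simple the simplest mappers are, here's a minimal C sketch of UxROM (iNES mapper 2) as an emulator sees it (names are illustrative):

    /* UxROM: $C000-$FFFF is hard-wired to the last 16KB PRG bank, while
       $8000-$BFFF is switched by writing a bank number to any ROM address. */
    #include <stdint.h>

    typedef struct {
        uint8_t *prg;     /* raw PRG ROM */
        int      banks;   /* number of 16KB banks */
        int      select;  /* bank currently mapped at $8000 */
    } uxrom;

    uint8_t uxrom_read(uxrom *m, uint16_t addr)  /* addr in $8000-$FFFF */
    {
        if (addr < 0xC000)
            return m->prg[m->select * 0x4000 + (addr - 0x8000)];
        return m->prg[(m->banks - 1) * 0x4000 + (addr - 0xC000)];
    }

    void uxrom_write(uxrom *m, uint16_t addr, uint8_t value)
    {
        (void)addr;                    /* any ROM write hits the latch */
        m->select = value % m->banks;  /* wrap; real boards just ignore
                                          the bits they don't decode */
    }

The CPU only sees 32KB of cartridge at once; writing the latch swaps which 16KB chunk shows up, which is how games outgrew the console's address space.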


One mapper even added audio channels to the NES, but it would only work on the Japanese Famicom, because the pins passing through the audio weren't present on the American NES. There's also one mapper, which I guess may have been discovered only recently, that had a microcontroller with its own ROM on it; IIRC it was an attempt to copy-protect the game.

I think you can't quite equate the PPU with GPUs, because the PPU doesn't run instructions or render things to a framebuffer. It's more like just the part of the GPU that takes memory and outputs a display signal; in VGA that's called a RAMDAC, not sure what the digital TMDS/HDMI/LVDS equivalent is.


The pins are there actually, just not on the cartridge side.

The little expansion slot under the NES has these and other pins. Some of the pins are electrical shorts with pins on the cartridge side.

So to play Japanese games with full audio all you’d need is an expansion cartridge selectively shorting the pins on that expansion port.


> One mapper

There were several that added extra audio. My favourite is the VRC7, Konami's mapper used in just one game, Lagrange Point, providing Yamaha OPL FM synthesis.


Sega had their own version for the Genesis, the SVP: https://en.m.wikipedia.org/wiki/Sega_Virtua_Processor

Virtua Racing was the only game released with it, but there were others planned for it originally. That is, until Sega decided to use the same technology for the 32X, which added a lot of power but was a terrible commercial failure.


I had that game for the 32x. You could get the car airborne, lock the brakes and leave skidmarks in the air.


In theory, could you make some ultra-modern SNES game using a modern ARM chip or such in the cart? Still some basic system limitations, but running pretty much everything on the add-on chip?


It's more or less been done on the NES - using a cartridge with a Raspberry Pi Zero in it to run SNES games: http://radar.spacebar.org/f/a/weblog/comment/1/1157

It's also roughly how the Super Game Boy worked - it had a Game Boy CPU in it: https://en.wikipedia.org/wiki/Super_Game_Boy


I am doing that for PalmOS 5.0 with reSpring actually: http://dmitry.gr/?r=05.Projects&proj=27.%20rePalm


> I settled on the 4KB part purely based on cost. Even at this measly 4KB size, this one RAM is by far the most expensive component on the board at $25. Given that the costs of putting in a 64KB part (my preferred size) were beyond my imagination (and beyond my wallet's abilities), I decided to invent a complex messaging protocol and make it work over a 4KB RAM used as a bidirectional mailbox.

Is this not the sort of memory you'd need?

https://www.digikey.com/product-detail/en/cypress-semiconduc...

or

https://www.digikey.com/product-detail/en/idt-integrated-dev...

just trying to understand the details here.


Those are similar to the ones I am using, yes. And the prices are similar too. Sadly...

Next rev I am replacing it with a small FPGA which will act as dual port memory. Believe it or not, that is much cheaper.
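
For anyone wondering what "a complex messaging protocol over a 4KB RAM used as a bidirectional mailbox" can look like in outline, here's a toy C version; one ring buffer per direction. This is just the general shape of the idea, not rePalm's actual protocol:

    #include <stdint.h>

    #define RING_SIZE 2040  /* two rings + indices fit in the 4KB window */

    typedef struct {
        volatile uint16_t head;            /* written by producer only */
        volatile uint16_t tail;            /* written by consumer only */
        volatile uint8_t  data[RING_SIZE];
    } ring;

    typedef struct {    /* this whole struct lives in the shared RAM */
        ring a_to_b;
        ring b_to_a;
    } mailbox;

    int ring_put(ring *r, uint8_t byte)    /* returns 0 if full */
    {
        uint16_t next = (r->head + 1) % RING_SIZE;
        if (next == r->tail)
            return 0;
        r->data[r->head] = byte;
        r->head = next;     /* publish only after the byte has landed;
                               real hardware also needs memory barriers */
        return 1;
    }

    int ring_get(ring *r, uint8_t *byte)   /* returns 0 if empty */
    {
        if (r->tail == r->head)
            return 0;
        *byte = r->data[r->tail];
        r->tail = (r->tail + 1) % RING_SIZE;
        return 1;
    }

Each side only ever writes its own index, so the two CPUs never fight over the same word.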


Whaaa? This, good sir, is fascinating! I am a huge fan of PalmOS and still use my Palm every day. I would love to make a custom Palm with a custom PalmOS but never even considered the plausibility or possibility. I’m gonna read through this now.


In fact, you can take it even further! tom7 (all of his pieces are amazing) put a Raspberry Pi into an NES cartridge and managed to "run" Super Mario World on an NES.

My short description really does it a disservice - I strongly recommend watching the video explanation https://www.youtube.com/watch?v=ar9WRwCiSr0


To be fair, 'Super Mario World' already runs on the NES.

https://m.youtube.com/watch?v=TEZ3vZIi71g


This guy ported Wolf3D to the Game Boy Color more or less that way (it looks like all the game logic is still on the Game Boy's CPU and just the ray-casting is done on the co-processor):

http://www.happydaze.se/wolf/


Yeah, the SNES would pretty much turn into a (crappy) AV hub.


Fascinating! Did any later gen systems besides the Nintendo 64 use these chips? Would it be possible to use a similar configuration in Switch cartridges?


32X was a similar concept, with the idea of packaging up and reusing enhancement chips in a base cartridge that would add 3D support to the Genesis. The Genesis VDP is still used in 32X games, usually to render the background while the 32X's VDP renders flat shaded polygons on top.

The Saturn uses a similar architecture of using two VDPs, with one mainly dedicated to rendering the background.


I'm surprised this concept died out. Take the next-gen consoles: they're supporting games at 120fps/120Hz. They could target 60fps (what the vast majority of users will use) and daisy-chain another console for the serious gamers who 'need' 120fps. This could be done through the HDMI cable (which supports Ethernet in addition to video & sound); it could even be extended to support 8K.

Yes, this is essentially SLI/CrossFire, something that has been around for some time, and whose support would be described as patchy at best. I believe the main issue is that it's left to game developers to implement, rather than being abstracted away in the graphics drivers/GPUs.


> daisy-chain another console for the serious gamers who 'need' 120fps

Not exactly the same idea, but Forza Motorsport 3 (and I think 4) let you use 3 Xbox 360s to display on multiple monitors: https://www.youtube.com/watch?v=2UJ-QbpFFM8


Games are often CPU limited these days: updating game state from one frame to the next is one of the hard problems of modern game development. I don't think splitting the work between two consoles connected by such a slow link would work very well.

The other hard problem is building the instruction chain (the term is escaping me at the moment) which gets sent to the GPU. This would have to be duplicated on each system.

The most workable solution would be to have one system do nothing but update game state and send it to the other system, which just does rendering. Limit yourself as much as possible to one-way communication. Hopefully the game-state system would do a lot of GPU compute for physics etc., otherwise its GPU would be idle. Furthermore, there are a lot of single-threaded bottlenecks in both game state and render, so you'd lose out on parallelism. (Many modern systems fix this problem by rendering frame N (which is read-only on state N) while simultaneously computing game state for frame N+1 (which is also read-only on state N). Since both operations are read-only, their single-thread bottlenecks are different single-thread bottlenecks, which helps parallelism dramatically.) Overall you'd be getting a very minor boost in performance from doubling the hardware commitment; certainly no better than 50%, but I'd ballpark probably closer to 25%.

It's an interesting idea, but one which is ultimately destroyed by the unrelenting iron fist of Amdahl's Law.
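
A tiny C sketch of the pipelining pattern described above (illustrative names; shown single-threaded, but the point is that both calls only read buf[cur], so a real engine can run them on separate cores):

    typedef struct { int tick; /* positions, velocities, ... */ } game_state;

    static game_state buf[2];

    static void update(const game_state *prev, game_state *next)
    {
        *next = *prev;
        next->tick = prev->tick + 1;  /* stand-in for real simulation */
    }

    static void render(const game_state *s)
    {
        (void)s;                      /* stand-in for real drawing */
    }

    int main(void)
    {
        int cur = 0;
        for (int frame = 0; frame < 3; frame++) {
            update(&buf[cur], &buf[cur ^ 1]);  /* compute state N+1 */
            render(&buf[cur]);                 /* draw state N      */
            cur ^= 1;                          /* N+1 becomes the new
                                                  read-only input */
        }
        return 0;
    }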


The one place you could potentially win is VR: each console could run the same codebase, independently, to generate the left and right eye views - whatever one console can do for one display, two can do for two; the amount of information required to keep the two game simulations in sync would be relatively small (world/view transforms and frame timing from master to slave console to let the two simulations stay in lockstep).

People would balk at having to buy two consoles for VR though.


Hindsight is always 20/20. At the time, and even on paper, the 32X did not sound like that bad of an idea. The 32X even sold well initially. I wanted one at launch, and bought one a few years ago (it truly is terrible). At launch it was a relatively cheap entry into 3D gaming. Keep in mind the only alternatives were multi-thousand-dollar PCs and the $800 3DO, not adjusted for inflation.

But the announcement of the Saturn soon after killed the 32X.

And programming for two VDPs was incredibly difficult. Especially considering games were still being written in assembly even at that point in the industry.

Plus the economics didn't make sense. It is cheaper to manufacture a single console instead of a base system and add-on separately. You end up confusing the consumer, and limiting market reach because your add-on market is a subset of your base unit's market size.

And the games were bad for the most part. Really bad. I have played the majority of home consoles. Even the likes of the 3DO, Jaguar, Virtual Boy, Sega CD, etc. The 32X is easily one of the worst consoles, ever, period.


IIRC the PlayStation VR helmet has a lot of onboard processing to help it generate two high-framerate screens. I’m not too up on its specs as I am really not at a place in my life where I’m willing to dedicate a room to VR and spend the cost of another console on the thing.

Really though I feel that the response of any console manufacturer to “hey let’s bring back the 32x” would be “do you know how much money that thing lost Sega”. Just update the specs a bit and sell a “Pro” or “Plus” model for the same cost as the original launch cost, and sell the old design for less.

You can get external Thunderbolt boxes to cram graphics cards into for your computer; my general impression of the market is that “serious gamers” who “need” 120fps are pouring lots of money into Windows machines with pricey graphics cards.


According to Sony, the PSVR breakout box isn't doing much in the way of processing. It just handles 3D audio and warping 2D video for the headset (and the opposite for TV display of what the headset displays if the game doesn't provide its own feed).

https://www.eurogamer.net/articles/digitalfoundry-2016-what-...


The N64 didn't use any chips like this AFAIK. I've seen maker prototypes using this scheme on a Game Boy, and I wouldn't be surprised to see a Game Boy Advance cart that used them, but I don't know of any off the top of my head, except as built-in peripherals (RTC, tilt sensor, some communication mechanism, etc.).

And it'd be a real pain to add chips like this to any later system. The architecture is a little different, and the cartridges aren't actually mapped into the system's main address space anymore. They sit on relatively slow buses that look more like mass storage to the rest of the system. They can't DMA directly, and the hop to reach the rest of the system obviates a lot of the benefit of having a co-processor on there.


I think Switch carts use a form of flash memory, which precludes this sort of extra-flexible expansion. You could include a chip on the cart, but it wouldn't have much throughput to do things like real-time graphics transformation. As for compression, most consoles just do regular decompression with the open source image libraries in the system BIOS, or on-GPU texture decompression. It wouldn't surprise me if the sprites in a game like River City Girls, for instance, were just PNGs.


The Switch is "just" an Nvidia Shield, albeit not running Android. The carts are your standard mask ROM on a (proprietary) bus manufactured by Macronix, and so they would not be able to extend the functionality in a meaningful way.

Texture decompression is handled using the hardware acceleration on the Tegra if being used to save VRAM, otherwise using standard libraries (like libpng, libjpeg, etc.) in the game software.


Small correction: the chips inside switch cartridges are mainly mask ROMs, not Flash.


Yes: the Game Boy Player for the GameCube has a GBA processor and some glue logic inside it. There is at least one Nintendo DS cartridge which has a Bluetooth chip inside it: Pokémon: Typing Adventure. And several Game Boy Advance cartridges in the Boktai series have a 'sun sensor' in them. I'm sure there are other examples as well.


If anyone is interested in learning a little more about the 6502, Ben Eater recently did a video about it [0]. Also just in general if you hadn't heard of his channel and are reading this comment, you'll definitely enjoy what he does.

[0] https://www.youtube.com/watch?v=LnzuMJLZRdU


Wow, I feel like gaming back in those days was the Wild West. So cool to edge out the competition with custom hardware.


Does anybody know how emulators cope with such games? Can you find them as ROM files?


Since there are only a small number of such chips, the emulator will also have a library of cartridges it supports. You emulate those chips the same way you would emulate the CPU or PPU. Sometimes it's easy: the add-on chip is just a regular SNES CPU at a higher clock speed. Sometimes it's hard, because it's some sort of obscure, finicky, twiddly bits.

The situation is significantly worse on, for instance, the NES, which had 40 or so different chips for bank switching; bank switching on the SNES is thankfully native (and therefore uniformly supported).
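
In code, the dispatch is roughly this shape (a loose C sketch; the register window below approximates the SuperFX's, and every board really has its own map):

    #include <stdint.h>

    typedef enum { CHIP_NONE, CHIP_SUPERFX, CHIP_SA1, CHIP_DSP1, CHIP_SDD1 } chip_type;

    typedef struct {
        chip_type  chip;
        uint8_t  (*chip_read)(uint32_t addr);  /* coprocessor MMIO handler */
        uint8_t   *rom;
        uint32_t   rom_mask;
    } cartridge;

    uint8_t bus_read(cartridge *cart, uint32_t addr)
    {
        uint32_t bank = addr >> 16, offset = addr & 0xFFFF;

        /* Route reads in the chip's register window to its emulation. */
        if (cart->chip == CHIP_SUPERFX &&
            bank <= 0x3F && offset >= 0x3000 && offset <= 0x34FF)
            return cart->chip_read(addr);

        return cart->rom[addr & cart->rom_mask];  /* grossly simplified */
    }

The cartridge database decides which chip_read gets plugged in, and the chip itself is stepped alongside the CPU like any other component.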



It took Star Ocean forever to be emulated properly because of that co-processor chip!


Tengai Makyou Zero (SPC7110 + RTC-4513) suffered much longer, and the fan translation of it even longer still. Absolutely worth the wait though.


> used by the Japanese-only RPG

What is the name of that game?


I believe the name left out of that message was Daikaijuu Monogatari II


> Rather than try to emulate the Game Boy on the SNES, they just included the CPU in the cart!

There’s no way you could do that anyhow


Ok - everyone should feel free to publish as they prefer. On the other hand, lashing out at critics is unproductive.


There are actually a couple of variants of Pilot Wings with different math coprocessors that act ever so slightly differently:

https://twitter.com/foone/status/1126996260026605568?lang=en


I know I'm not supposed to comment on voting, but I'm honestly curious why this deserved to be taken down to -1.

The link seems entirely related to the original article and contains interesting information.

If I'm not to do it again, I need to understand why, please.



