> Heh, from foone themself: https://threadreaderapp.com/thread/1066547670477488128.html
> Not to humblebrag or anything, but my favorite part of getting posted on hackernews or reddit is that EVERY SINGLE TIME there's one highly-ranked reply that's "jesus man, this could have been a blog post! why make 20 tweets when you can make one blog post?" CAUSE I CAN'T MAKE A BLOG POST, GOD DAMN IT.
> The short story is they are quite open about having ADHD, and that's what causes the long twitter rambles, but also what makes it very difficult to assemble it into a blog post. Every once in a while foone's wife will edit a popular thread into a blog post, examples: https://foone.wordpress.com/
> If this seems like a particularly good story to you, maybe you'd like to edit it into a draft for a blog post and gift that to foone?
And yeah, as soon as I saw "SNES Cartridges and Enhancement Chips" and the twitter domain, I felt about 90% sure it would be linking to a foone tweetstorm.
...I accept twitter is here to stay, but I always thought it was a pointless snippet-fest, good for nothing aside from letting millions pile on praise/social-shaming.
Looking at this thread, it actually makes sense for once. It's a sequence of info and pictures that follows a coherent thread. Great, just like any other blog.
However, if you're interested in any particular paragraph, you can click through and see a whole discussion on "just that paragraph".
Not for one moment saying twitter's won me over, but it provides a cleaner way of annotating a story than any other platform I'm aware of.
Foone's posts are the only content I enjoy reading on Twitter. He's mastered the format.
Only if you're subscribed.
Medium? It seems specially built for that.
Mastodon does threads like Twitter with a little more room without most of the downsides. I've thought about setting up a managed Mastodon instance (with masto.host) to put under my domain to use as a sort of public outliner.
That's what most people seem to use Twitter for anyway.
The converse happens too: they can desperately want to write a long form article on a topic, and just can't find themselves starting it. The blank blog page is imposing and they don't know where they would start and maybe they'll distract themselves for a minute with something and whoops now they're very focussed on that distraction for a few hours.
I'm sure there's a lot more subtlety to this than I'm describing, and there's a real diversity of people's experiences with this, but this is one way I understand it can lead to this situation.
I got nothing else done that day, all because I got a whiff of inspiration, causing me to lose all sense of time and place in an obsessive pursuit of bringing a fleeting vision to life. I didn't eat breakfast, and I didn't stop to eat lunch until my stomach hurt. There were several times during the day when I caught myself forgetting to breathe.
But I'm mostly happy with how the website turned out, so there's that. There are some things that still need to be done, and it's really, really hard for me to put the brakes on it over the weekend. Hell, here I am talking about it on Saturday morning.
Come Monday, maybe I’ll unravel a clumsily packaged mental model of what needs to be finished up in a whirlwind of keystrokes.
Or, just as likely, I’ll get annoyed by some trivial problem with a piece of code I’ve written, and go down a rabbit hole of studying alternative approaches until I find something I’m happy with, at which point I’m already waist-deep in the middle of some new project that may or may not ever see the light of day.
Equally likely is the possibility that too many sleepless nights will have sapped any semblance of focus and I’ll find myself just mindlessly going through the motions of being a semi-functional adult, forgetting all about finishing up this redesign.
Or, maybe I'll get frustrated by a bug and decide to take a break from software development and come back to it later, like the time in 2012 when I realized I'd accidentally trashed the source control on a project while trying to roll back a crappy refactoring job I'd just done on it, resulting in a five-year gap on my resume and GitHub commits.
Thankfully, I’m self-employed and married to an incredibly understanding woman. ADHD is a hell of a disorder.
> they don't have the endless editing I get into with blog posts
I don't know if this is true or not, but Twitter is one of the only platforms that doesn't have an edit button. So it makes sense that Twitter may help people with ADHD let go of the feeling of "oh, maybe there's something I can still improve about this post" before publishing it.
You can also delete a single tweet, which feels like a much easier decision (because of the smaller impact) than deleting a blog post.
Maybe they're doing this for fun, and don't want to turn every aspect of their life into a self-improvement slog.
For me at least, the “big picture” of a concept is definitely there, but it’s in the back of my mind. I can access it and talk about it, but particularly when writing, it takes serious concentration for me to convey that information to others, because in general, I’m more interested in fully understanding the mechanics of particular components of the picture.
When I attempt to write about a topic in depth, I'm typically doing it in one of two ways: either it's stream-of-consciousness writing, full of digressions and errors, or it's the result of a lot of planning, in which case it often takes me so long that I just give up before I finish.
I’m not a big Twitter user at all, but I have to admit that I’m drawn to the idea of writing about my interests in a more granular manner than traditional blogging generally permits.
Other fun fact: for a long while, Intel used an ARC core in its Management Engine.
That's right—not only can your motherboard run rootkits; it can also run StarFox ;)
I believe their wireless cards (at least the 3945ABG) also used it; I remember the disassembly looked right, but never had the time to go through with the rest of that project...
How is it that a chip used for 2D/3D operations gets used in the ME though?
I would've thought they were pretty different, but I'm no expert, so: any ideas?
Another interesting one was the SA-1 chip used in a few games, such as Super Mario RPG. It's a SNES-compatible 65C816 that is 3x faster, has its own RAM bank, and can interrupt and be interrupted by the main CPU, among other features. It was used to offload a lot of things, including some graphics tasks.
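A rough sketch of the kind of shared-RAM mailbox handshake that sort of coprocessor arrangement implies; the register names and layout here are invented for illustration, not the actual SA-1 interface:

```c
/* Hypothetical mailbox handshake between a main CPU and a coprocessor
 * sharing a RAM bank -- an illustration of the pattern, NOT the real
 * SA-1 register interface. */
#include <stdint.h>

enum { JOB_NONE = 0, JOB_DECOMPRESS = 1, JOB_DONE = 0xFF };

typedef struct {
    volatile uint8_t  command;   /* written by main CPU, polled by coprocessor */
    volatile uint8_t  status;    /* written by coprocessor */
    volatile uint16_t src, dst;  /* offsets into the shared RAM bank */
} mailbox_t;

/* Main CPU side: post a job, then carry on; an IRQ (or polling the
 * status byte) tells it the coprocessor has finished. */
void post_job(mailbox_t *mb, uint16_t src, uint16_t dst) {
    mb->src = src;
    mb->dst = dst;
    mb->command = JOB_DECOMPRESS;   /* coprocessor wakes up on this */
}

/* Coprocessor side: wait for work, do it, signal completion. */
void coprocessor_loop(mailbox_t *mb, uint8_t *shared_ram) {
    (void)shared_ram;               /* the offloaded work would touch this */
    for (;;) {
        while (mb->command == JOB_NONE) { /* spin */ }
        /* ... do the offloaded work on shared_ram ... */
        mb->command = JOB_NONE;
        mb->status  = JOB_DONE;     /* on real hardware: raise an IRQ */
    }
}
```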
That said, digging around I haven't found a reference to Intel using the chip, and another responder claims it was a different chip by the same manufacturer.
Another interesting CPU that made its way around was the family that the SPC700 (SNES) and SPC1000 (PS1) audio DSPs belong to. They've shown up in a few embedded devices that needed to do audio work, such as inside AVRs and other similar devices.
SuperFX is a CISC: byte opcodes with prefixes, complex memory-access instructions, two-address operations, etc.
ARCompact is a RISC: 32-bit instructions with a 16-bit subset that expands to what you could encode with the 32-bit forms, a fairly simple load/store architecture, three-address ops, a huge register file, etc.
They're about as different as two archs can get; they're just made by the same people.
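A toy decoder makes the contrast concrete. The encodings below are invented for illustration; they are not the real SuperFX or ARCompact formats:

```c
/* Toy illustration of the two encoding styles -- opcodes made up. */
#include <stdint.h>
#include <stdio.h>

/* CISC-style: byte opcodes, optional prefix, length depends on the
 * operands, so you can't know where the next instruction starts until
 * you've partially decoded this one. */
size_t cisc_insn_length(const uint8_t *p) {
    size_t len = 0;
    if (p[len] == 0x3D) len++;          /* hypothetical "alt" prefix byte */
    uint8_t op = p[len++];
    if (op & 0x80) len += 2;            /* hypothetical 16-bit immediate */
    else if (op & 0x40) len += 1;       /* hypothetical 8-bit immediate */
    return len;                         /* 1 to 4 bytes */
}

/* RISC-style: fixed 32-bit words; three-address ops fall straight out
 * of fixed register fields. */
void risc_decode(uint32_t insn) {
    unsigned rd = (insn >> 21) & 0x3F;  /* hypothetical field layout */
    unsigned ra = (insn >> 15) & 0x3F;
    unsigned rb = (insn >> 9)  & 0x3F;
    printf("op r%u, r%u, r%u\n", rd, ra, rb);
}

int main(void) {
    const uint8_t stream[] = { 0x3D, 0xC1, 0x34, 0x12 }; /* prefix+op+imm16 */
    printf("CISC insn length: %zu bytes\n", cisc_insn_length(stream));
    risc_decode(0x12345678u);
    return 0;
}
```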
The best quote, from Jez San, one of Argonaut's founders, is, "At the time that it came out, it was also the world's best-selling RISC microprocessor until ARM became standardised in every cellphone and took the market by storm."
Name another RISC with byte opcodes, prefixes, and variable instruction width depending on the arguments. That's about as CISC as you get.
It was pipelined, and at the time people for some reason thought that you couldn't pipeline a CISC and therefore it had to be a RISC.
None of those are variable width depending on what type of argument you have.
Also, PowerPC isn't even variable width at all.
And the Pentium was the first pipelined CISC microprocessor, i.e., on a single chip. At the time there was a holy war going on, with one side being of the opinion that you shouldn't pipeline single-chip processors, but instead rely on Moore's law. The thought was that mainframe-style multichip modules needed a pipeline to account for off-chip delays, but that with everything on the same die it was unnecessary and created too much unpredictability with pipeline bubbles, etc. Those people were obviously wrong and lost.
Edit: also, the Pentium was released a month after StarFox. And when you account for the long lead times of hardware, their statements make sense timeline-wise, particularly when you consider that RISC was a huge buzzword at the time and being misapplied, sort of like how now everybody doing a simple linear regression talks about all the machine learning they're doing.
The talk is brilliant:
* - as far as I know...
I seem to recall Nintendo charged a per-unit royalty based on the ROM size of the cartridge, which would make including a dedicated decompression chip reasonable, particularly since such chips are likely fairly simple to design anyway.
In addition, the two games listed as using the S-DD1 were 32 Mbit and 48 Mbit ROMs (there were very few 48 Mbit ROMs, and 32 Mbit was considered "large"), so these were already fairly expensive cartridges.
Found a pic of the Star Ocean PCB; note that the S-DD1 takes up less PCB space than even one of the two ROM chips:
[edit2] The SF Alpha PCB was much simpler by comparison:
If you can be content with buying Xbox One and PS4 games on eBay a little while after they're released, you're unlikely to ever pay more than $20 for a game.
The typical price for a modern AAA title in 2019 is still only $59.99. These games will have teams of 30 or so people at the absolute minimum, and having well over 100 people is not at all rare. They often take three years or longer to make.
Even if you don't take inflation into account, such a small increase in nominal price despite enormous increases in development cost is rather remarkable. Then consider that inflation means the real price of AAA games is basically half of what it was, with B-list games at about half of that yet again...
Honestly, I'm not surprised by DLC and microtransactions. The normal list price of AAA video games is substantially lower than it should be. My gut tells me that in most other markets, the price would have risen to at least the $80-100 range, if not more, even taking into account the much larger sales volumes these days.
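Rough arithmetic on the inflation point; the CPI factor below is approximate and the base price is an assumption (a typical early-90s cartridge list price):

```c
/* Back-of-the-envelope real-price check. The CPI factor is an
 * approximation (US CPI, 1992 -> 2019, roughly 1.8x); the $59.99
 * base price is a typical early-90s cartridge price, assumed here. */
#include <stdio.h>

int main(void) {
    double price_1992 = 59.99;      /* assumed cartridge list price */
    double cpi_factor = 1.8;        /* approx. 1992 -> 2019 inflation */
    double in_2019_dollars = price_1992 * cpi_factor;

    printf("$%.2f in 1992 is about $%.2f in 2019 dollars,\n",
           price_1992, in_2019_dollars);
    printf("so today's $59.99 AAA price is roughly %.0f%% of the old real price.\n",
           100.0 * 59.99 / in_2019_dollars);
    return 0;
}
```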
Later, hardware resources became cheaper and less of a concern, and the SDKs made development a lot more accessible to people who didn't have a master's degree in EE, so the workforce could be cheaper and more plentiful, culminating in the game industry becoming the sweatshop it is today.
I suppose you could make the case that game development is always going to be hard, people are always going to push the limits of the platform, etc.; it isn't like today's titles are tossed out on a weekly basis.
Apparently the same author did both rotoscoped animations.
Pretty cool that the same NEC uPD772x chip was shared between a text-to-speech engine and Super Mario Kart.
If you want to know everything there ever was to know about the chip, I mirrored every PDF datasheet I could find on it on my site:
It has one of the wildest ISAs I've ever encountered, using a strict Harvard architecture and 24-bit VLIWs (very long instruction words). Every opcode can do over a dozen different things in the same instruction at the same time. Writing code for it is quite the challenge.
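To illustrate the flavor of it, here's a sketch of decoding one such word; the field layout below is invented for the example, not the real uPD772x encoding:

```c
/* Why VLIW code is hard to write by hand: one 24-bit word carries
 * several independent operation fields that all execute in the same
 * cycle. Field layout INVENTED for this example -- not the actual
 * uPD772x instruction format. */
#include <stdint.h>
#include <stdio.h>

typedef struct {
    unsigned alu_op;    /* e.g. add/sub/and/or on the accumulator */
    unsigned src_bus;   /* which register drives the internal bus */
    unsigned dst_bus;   /* which register latches the bus this cycle */
    unsigned ram_op;    /* simultaneous data-RAM access */
    unsigned pc_op;     /* simultaneous branch/return control */
} vliw_fields;

vliw_fields decode24(uint32_t word) {
    vliw_fields f;
    f.alu_op  = (word >> 20) & 0xF;   /* bits 23..20 (hypothetical) */
    f.src_bus = (word >> 16) & 0xF;
    f.dst_bus = (word >> 12) & 0xF;
    f.ram_op  = (word >> 8)  & 0xF;
    f.pc_op   =  word        & 0xFF;
    return f;
}

int main(void) {
    /* One instruction = ALU op + two bus moves + RAM access + control
     * flow, all at once. */
    vliw_fields f = decode24(0xA1B2C3);
    printf("alu=%u src=%u dst=%u ram=%u pc=%u\n",
           f.alu_op, f.src_bus, f.dst_bus, f.ram_op, f.pc_op);
    return 0;
}
```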
Wow, that really is amazing. The diversity of uses cannot be foreseen, and it's a good argument both for documentation at the time and for emulation as usable history.
I’ve followed your work for a while, spent many hours playing Super Mario 3 on your emulators and reading your dev blogs.
Thanks for all you give.
All of this was to squeeze out more performance and make more advanced games.
I think you can't quite equate the PPU with GPUs, because the PPU doesn't run instructions or render things to a framebuffer. It's more like just the part of the GPU that takes memory and outputs a display signal; in VGA that's called a RAMDAC. Not sure what the digital TMDS/HDMI/LVDS equivalent is.
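The difference is easy to see in code. A minimal sketch of scanline-based output, with tile formats simplified rather than matching any specific console:

```c
/* Simplified sketch of the distinction: a PPU has no framebuffer; it
 * composes each pixel from tile memory at the moment the display
 * needs it. Layouts below are simplified, not any real console's. */
#include <stdint.h>

#define SCREEN_W 256

/* 8x8 tiles, 1 byte per pixel for simplicity (real hardware packs
 * 2-4 bitplanes instead). */
static uint8_t tile_data[256][8][8];   /* pattern memory */
static uint8_t nametable[30][32];      /* which tile goes where */
static uint8_t palette[256];           /* color lookup */

/* Called once per scanline, racing the beam -- pixels are generated
 * and immediately sent to the display, never stored in a buffer. */
void render_scanline(int y, uint8_t out[SCREEN_W]) {
    for (int x = 0; x < SCREEN_W; x++) {
        uint8_t tile  = nametable[y / 8][x / 8];
        uint8_t color = tile_data[tile][y % 8][x % 8];
        out[x] = palette[color];       /* straight to the video DAC */
    }
}
```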
The little expansion slot under the NES has these and other pins. Some of the pins are electrical shorts with pins on the cartridge side.
So to play Japanese games with full audio all you’d need is an expansion cartridge selectively shorting the pins on that expansion port.
There were several that added extra audio. My favourite is the VRC7, Konami's mapper used in just one game, Lagrange Point, providing Yamaha OPL FM synthesis.
Virtua Racing was the only game released with it, but there were others planned for it originally. That is, until Sega decided to use the same technology for the 32X, which added a lot of power but was a terrible commercial failure.
It's also roughly how the Super Game Boy worked - it had a Game Boy CPU in it: https://en.wikipedia.org/wiki/Super_Game_Boy
Is this not the sort of memory you'd need?
Just trying to understand the details here.
Next rev I am replacing it with a small FPGA which will act as dual port memory. Believe it or not, that is much cheaper.
My short description really does it a disservice - I strongly recommend watching the video explanation https://www.youtube.com/watch?v=ar9WRwCiSr0
The Saturn uses a similar two-VDP architecture, with one mainly dedicated to rendering the background.
Yes, this is essentially SLI/CrossFire, which has been around for some time, and whose support could be described as patchy at best. I believe the main issue is that it's left to game developers to implement, rather than being abstracted away in the graphics drivers/GPUs.
Not exactly the same idea, but Forza Motorsport 3 (and I think 4) let you use 3 Xbox 360s to display on multiple monitors: https://www.youtube.com/watch?v=2UJ-QbpFFM8
The other hard problem is building the instruction chain (the term is escaping me at the moment) which gets sent to the GPU. This would have to be duplicated on each system.
The most workable solution would be to have one system do nothing but update game state and send it to the other system, which just does rendering. Limit yourself as much as possible to one-way communication. Hopefully the game-state system would do a lot of GPU compute for physics etc., otherwise its GPU would be idle. Furthermore, there are a lot of single-threaded bottlenecks in both game state and rendering, so you'd lose out on parallelism. (Many modern systems fix this problem by rendering frame N, which only reads state N, while simultaneously computing game state for frame N+1, which also only reads state N. Since both operations are read-only, their single-threaded bottlenecks are different ones, which helps parallelism dramatically.) Overall you'd be getting a very minor boost in performance from doubling the hardware commitment: certainly no better than 50%, and I'd ballpark it closer to 25%.
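A minimal sketch of that double-buffer pattern; the names and structure are illustrative, not from any particular engine, and the threading is elided (imagine render and update_state running on different cores between frame boundaries):

```c
/* Render frame N from state N (read-only) while simulating state N+1
 * into a second buffer, then swap. Because both calls only READ
 * state[cur], they could run in parallel without locks. */
#include <string.h>

typedef struct { /* positions, physics, etc. */ double t; } GameState;

void update_state(const GameState *in, GameState *out) {
    *out = *in;
    out->t += 1.0 / 60.0;           /* advance the simulation one tick */
}

void render(const GameState *s) { (void)s; /* draw from s, read-only */ }

int main(void) {
    GameState state[2] = {{0}, {0}};
    int cur = 0;
    for (int frame = 0; frame < 600; frame++) {
        render(&state[cur]);                     /* frame N   */
        update_state(&state[cur], &state[!cur]); /* frame N+1 */
        cur = !cur;                              /* "swap" buffers */
    }
    return 0;
}
```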
It's an interesting idea, but one which is ultimately destroyed by the unrelenting iron fist of Amdahl's Law.
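For concreteness, Amdahl's Law gives speedup = 1 / ((1 - p) + p/n) for parallel fraction p on n units. With n = 2 consoles and some assumed parallel fractions, the numbers land right in that 25-50% ballpark:

```c
/* Amdahl's Law sanity check for the two-console idea. The parallel
 * fractions p are assumptions chosen for illustration. */
#include <stdio.h>

int main(void) {
    double n = 2.0;                        /* two "consoles" */
    double fractions[] = { 0.4, 0.5, 0.6, 0.67 };
    for (int i = 0; i < 4; i++) {
        double p = fractions[i];
        double speedup = 1.0 / ((1.0 - p) + p / n);
        printf("p = %.0f%% parallel -> %.0f%% faster\n",
               p * 100.0, (speedup - 1.0) * 100.0);
    }
    return 0;                              /* prints 25%..50% faster */
}
```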
People would balk at having to buy two consoles for VR though.
But the announcement of the Saturn soon after killed the 32X.
And programming for two VDPs was incredibly difficult. Especially considering games were still being written in assembly even at that point in the industry.
Plus the economics didn't make sense. It is cheaper to manufacture a single console instead of a base system and add-on separately. You end up confusing the consumer, and limiting market reach because your add-on market is a subset of your base unit's market size.
And the games were bad for the most part. Really bad. I have played the majority of home consoles. Even the likes of the 3DO, Jaguar, Virtual Boy, Sega CD, etc. The 32X is easily one of the worst consoles, ever, period.
Really though, I feel that the response of any console manufacturer to "hey, let's bring back the 32X" would be "do you know how much money that thing lost Sega?" Just update the specs a bit and sell a "Pro" or "Plus" model for the same cost as the original launch price, and sell the old design for less.
You can get external Thunderbolt boxes to cram graphics cards into for your computer; my general impression of the market is that "serious gamers" who "need" 120fps are pouring lots of money into Windows machines with pricey graphics cards.
And it'd be a real pain to add chips like this to any later systems. The architecture is a little different, and the cartridges aren't actually mapped into the system's main address space anymore. They sit on relatively slow buses and look more like mass storage to the rest of the system. They can't DMA directly, and the hop needed to reach the rest of the system obviates a lot of the benefit of having a coprocessor on there.
Texture decompression is handled using the hardware acceleration on the Tegra if it's being used to save VRAM; otherwise it's done with standard libraries (like libpng, libjpeg, etc.) in the game software.
The situation is significantly worse on, for instance, the NES, which had 40 or so different chips for bank switching; bank switching is thankfully natively (and therefore uniformly) supported by the SNES.
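For those unfamiliar, a sketch of roughly how a simple NES mapper in the UxROM family pulls this off: writes into ROM space latch a bank register that redirects later reads. Simplified model, not cycle-accurate:

```c
/* How a simple mapper banks a big ROM into the CPU's small window:
 * the CPU physically can't address the whole cartridge at once, so
 * writes to ROM space select which 16 KiB slice is visible. */
#include <stdint.h>

#define BANK_SIZE  0x4000              /* 16 KiB switchable window */
static uint8_t prg_rom[8 * BANK_SIZE]; /* 128 KiB cart = 8 banks */
static uint8_t bank_reg = 0;

/* A CPU write to $8000-$FFFF hits the mapper, not the ROM. */
void cart_write(uint16_t addr, uint8_t value) {
    if (addr >= 0x8000)
        bank_reg = value & 0x07;       /* select one of 8 banks */
}

/* $8000-$BFFF: currently selected bank; $C000-$FFFF: fixed last bank
 * (so reset/interrupt vectors are always reachable). */
uint8_t cart_read(uint16_t addr) {
    if (addr < 0xC000)
        return prg_rom[bank_reg * BANK_SIZE + (addr - 0x8000)];
    return prg_rom[7 * BANK_SIZE + (addr - 0xC000)];
}
```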
What is the name of that game?
There’s no way you could do that anyhow
The link seems entirely related to the original article and contains interesting information.
If I'm not to do it again, I need to understand why, please.