
This might be the most complex hardware architecture of a home console ever.



The Jaguar was pretty out there.

The thing had two custom RISC chips that could be used as CPUs (one with additional GPU capabilities and the other with DSP features), plus a 68000 that devs were not supposed to use.


Plus the custom chips were buggy!


I always wonder what the Jaguar would have been like if not crippled by buggy hardware. Probably still a failure, but I bet the games would have run better.

You know what's also funny? I've never heard anybody complaining about the Saturn being buggy.

It's legendarily hard to code for, but seems like it was at least pretty solid.

Even the Genesis was sort of "buggy." I think there was one particular design choice that crippled digital sound playback. Also the shadow/highlight functionality is kind of weird, not sure if "buggy" is the right word, but weird.


The best thing Atari could have done was release it at launch with the CD; that would have made it much cheaper for devs to launch a game, since ROM order pricing would kill many devs. Bundling an SDK would have helped massively as well.

There are lots of quirks in the hardware that point to Atari interfering with the development: the 68K was never supposed to be there (an 020 would have uncrippled the bus by allowing it to run at full speed, see the arcade board); the dual RAM buses went unused; and the object processor (Flare majored on the DSP and Blitter, while the object processor looks like a 5200/7800/Amiga/Panther throwback mandated by Atari) necessitated a 2-chip solution where 1 chip would have been faster to develop and better.

Having said that, the Jag VR looked amazing for the time.


You seem really knowledgeable. Any insight into why Atari made those fateful decisions?


Dreamcast and PS3 are also quite close in complexity.


I think you mean PS2 and PS3. Dreamcast was rather easy. One might even say, a dream to program for.


PS2 was the tail end of the bespoke hardware, and Sony gave their chips very marketing-driven names, but I don’t remember the hardware being especially strange. The SDK being ass would be a different issue.


Well, the two vector coprocessors (VU0, VU1) each talking to a different processor (the CPU or the "GPU") was quite weird, imho.


The PS2 was infamously difficult to develop for. Most of the bottlenecks came from the vector units. Middleware eventually made working around the strange hardware much easier.

When asked if they were wary of developing on the PS2, given its reputation for being tough to code for, Sega devs famously laughed and replied that they had already mastered the Saturn - how much harder could it get?


I wonder how many games utilized Windows CE (which was a thing for it) on the Dreamcast.


Not a whole lot. If you wanted to extract the full power of the system, you needed the native SDK. Sega Rally 2 was ported using CE, and it has frame rate issues it really shouldn't have; this is attributed to CE.


I guess they had a porting job to do since the arcade hardware Sega Rally ran on wasn't Naomi (which was basically a souped-up Dreamcast), and they figured that if they used Windows CE they could use DirectX and get a PC port out of the same effort.

But yeah, for a flagship title they should have gone for a port with the official SDK.


The goal was to prove to third-party developers that it was trivial to port an existing Windows game to Dreamcast. It was sort of a tech demo for the industry.

The end result though was… demonstrating the Windows CE overhead.


Naomi games ported over to the Dreamcast just fine. It was essentially a Dreamcast with more memory if I'm not mistaken. Sega Rally 2 used the Model 2 board.


Sega Rally 1 used Model 2, Sega Rally 2 used Model 3.


The PS3's Cell processor, with its 8 "Synergistic Processing Element" co-processors, was definitely weird.

https://en.wikipedia.org/wiki/Cell_(processor)


"The Race for a New Game Machine" is an interesting book on the development of Cell, if the author comes off a little annoying at times. It was a neat idea to offload certain operations to the SPEs (glorified vector processors) but they all had their own RAM and communicated via a bus, so you really needed to optimize for it.


The PS3 was incredible, not so much in what it could do as in what IBM managed to make Sony pay for.


IBM also made Microsoft pay for the Xenon in the Xbox 360.


Xenon is a pretty standard SMP design; having 3 cores is basically the height of its oddity. It was a fine early-multicore CPU for consoles.

The Cell was very much not that: the SPEs were unnecessarily difficult to use for a console and ultimately a dead end, though they made for great supercomputing elements before GPGPU really took off (some people built supercomputers out of PS3 clusters, as that was literally the cheapest way to get Cells).

The Cell supposedly cost 400 million to develop, a cost largely borne by Sony. MS got IBM to retune the PPE for Xenon, but they'd likely have made do with another IBM core as their base had that not existed.


This claim that the SPEs were difficult to use and maximize may have been true in the early years, but all of the major engines were quickly optimized to abstract them away. Naughty Dog, a key contributor to the PS3 graphics libraries, had optimized for them so much that porting PS3 games to the PS4 was "hell", in their words.

> "I wish we had a button that was like 'Turn On PS4 Mode', but no," Druckmann said. "We expected it to be hell, and it was hell. Just getting an image onscreen, even an inferior one with the shadows broken, lighting broken and with it crashing every 30 seconds... that took a long time. These engineers are some of the best in the industry and they optimized the game so much for the PS3's SPUs specifically. It was optimized on a binary level, but after shifting those things over, you have to go back to the high level, make sure the systems are intact, and optimize it again. I can't describe how difficult a task that is. And once it's running well, you're running the [versions] side by side to make sure you didn't screw something up in the process, like physics being slightly off, which throws the game off, or lighting being shifted and all of a sudden it's a drastically different look. That's not improved any more; that's different. We want to stay faithful while being better."


> This claim that the SPEs were difficult to use and maximize may have been true in the early years

The 7th gen appropriately "lasted" 7 years; "the early years" were a ton of it.

> Naughty Dog, a key contributor to the PS3 graphics libraries

ND is a first-party studio; that means access to information and efforts to push the platform that most developers can't justify, especially as third parties would usually need cross-platform support: the PS3 was no PS2.

> so much that porting PS3 games to the PS4 was "hell"

Which proves the point: the PS3's Cell was an expensive, unnecessarily complicated, and divergent dead end.


Remember that Naughty Dog was acquired by Sony, and thus had access to internal info. Until Sony released PhyreEngine, and GT demoed what was possible with the PS3, most studios were kind of lost targeting it.


And they had Nintendo paying 'em in that generation (and the one before) too.


TIL the PlayStation 3 is used (in a grid/cluster of 16) as a viable alternative to supercomputers by physics researchers; this grid calculates how black holes collapse (!). I assume this is a cost-cutting measure by the researchers, and it makes me wonder how much more expensive a 'supercomputer' is and why, if it can be suitably replaced by the console with no games.


That continued with researchers using cheap gaming GPUs for simulations. Today a single Nvidia 4090 GPU exceeds the performance of that PS3 cluster.


"Exceeds" is an understatement. The Cell CPU had a maximum performance of 200 GFLOPS of FP32. A Nvidia 4090 has 73 TFLOPS of FP32, equivalent to 360 PS3.


For physics simulations they are using double precision. They increased the cluster to 400 PS3s, so it continued to be useful into the 2010s.

https://web.uri.edu/gravity/ps3/

But I guess the cluster overhead would reduce performance to less than a 4090 in real applications.


Oh, in that case consumer GPUs absolutely suck at FP64; the vendors want you to buy the expensive data-center version. The RTX 4090 has a 1/64 rate when computing with FP64, so only about 1.2 TFLOPS!

But from what I can find, the Cell also sucked at FP64, with a rate of 1/10 for a total of 15 GFLOPS (https://en.wikipedia.org/wiki/PlayStation_3_technical_specif... , second paragraph). The 400 PS3 cluster would be about 6 TFLOPS, or roughly 5x an RTX 4090.
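Same back-of-envelope math for double precision, using the 1/64 rate and 15 GFLOPS figure quoted above:

    # back-of-envelope FP64 comparison, figures from the comments above
    rtx4090_fp64 = 73e12 / 64        # ~1.14 TFLOPS FP64 on the 4090
    ps3_cluster_fp64 = 400 * 15e9    # 400 PS3s * ~15 GFLOPS = 6 TFLOPS
    print(ps3_cluster_fp64 / rtx4090_fp64)  # -> ~5.3, i.e. roughly 5x a 4090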


I think they may have done that because Sony used the PS3 as a loss leader?


Not having developed for them, the PS2 seemed the hardest of the later generations, with the PS3 having issues but at least not being designed with hand-written assembly in mind.



