The rise and fall of the PlayStation supercomputers (theverge.com)
283 points by lelf on Dec 7, 2019 | 169 comments



The PlayStation 1, 2, and 3 are each, in their own way, Sony's take on an alternate history of computer graphics. It was Sony's intent with all three consoles to put the "special effects" (the kind that in 2019 are performed using shaders) on the CPU, not the GPU.

As a result, the PS1, but especially the PS2 and PS3, were engineered to have amazing vector capabilities in the CPU. The PS2's Emotion Engine had extremely high-performance vector processing units, and was so optimized for compute throughput that it used two different floating-point formats, neither of which is compatible with the IEEE 754 standard that every other piece of hardware you've ever touched uses. The PS3 was the culmination of that philosophy, with the Cell being explicitly developed to put as much raw number-crunching hardware as possible into something that can still be called a CPU. Its general-purpose part was just a single unimpressive POWER-compatible core, and roughly 60% of the die area went to specifically engineered number-crunching units. This is what made it "supercomputer grade".
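
To make the IEEE 754 point concrete: a standard float has infinities, NaNs, and denormals, whereas the PS2's vector units (by most accounts) saturated on overflow and flushed denormals to zero. A minimal Python/NumPy sketch of the IEEE side of that difference; the PS2-style behaviour described in the comments is an assumption about that hardware, not something emulated here:

    import numpy as np

    np.seterr(over="ignore")  # silence the overflow warning for the demo

    big = np.float32(3.0e38)
    print(big * np.float32(2.0))   # IEEE 754: inf
    # A saturating unit (PS2 VU style) would instead clamp to the largest
    # representable float32, roughly 3.4e38, and carry on.

    tiny = np.float32(1e-45)       # a denormal in IEEE 754
    print(tiny > 0)                # True: IEEE keeps it as a tiny nonzero value
    # A flush-to-zero unit would treat it as exactly 0.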

There's a persistent rumor that the Cell was so powerful that Sony had intended for the Playstation 3 to be equipped with two Cell chips, and no true GPU at all. But it seems Sony realized during development that their philosophy had no future. Sony compromised and introduced a fairly standard GeForce-derived GPU. Compared to the more conventional architecture of the Xbox 360, the compromised design still had slightly more potential, but it was a lot more difficult to work with for games, and more expensive to produce as well.

Starting with the last generation of consoles, homegrown graphics architectures aren't tried anymore. You can only get performance and especially good developer buy-in on commodity hardware. As a result, the PS4 and Xbox One have PC hardware. Their successors will too. The Nintendo Switch shares a chip with the Nvidia Shield TV and Google Pixel C tablet. Console hardware is not really interesting anymore if you ask me.


The weird PS1/PS2/PS3 were because of Japan's heavy arcade culture, where quirky hardware/software designs were the standard. The Xbox was approached from the "let's make a DirectX Box" angle, built from commodity PC hardware, and it wasn't until the PS3's huge initial cost blunder that Sony realized they needed to go the commodity route too.

> There's a persistent rumor that the Cell was so powerful that Sony had intended for the Playstation 3 to be equipped with two Cell chips, and no true GPU at all. But it seems Sony realized during development that their philosophy had no future.

The original plan was for no GPU at all, relying on the SPUs as pixel processing units. When they realized that that wasn't feasible for a few reasons, they quickly partnered with Toshiba to make a custom GPU, and that ended poorly too, so Toshiba scrambled and licensed an NVIDIA core. The final "RSX" chip was still a Toshiba design, but with an NVIDIA core at its center.

As games got more and more complicated, the amount of developer support & tooling did as well. The Xbox 360 had incredibly strong tooling (ask someone who worked on the device about 360's PIX, and they'll gush for days), and the PS3 had very little. The PS3 shader compiler infamously had a random search for an optimizer, and Sony's recommendation was to just run it continually with different random seed inputs in hopes it found a better optimization for your shader.
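
For anyone who hasn't seen that kind of workflow: it amounts to a random-restart search wrapped around the compiler. A minimal Python sketch, where compile_shader and its seed parameter are hypothetical stand-ins for Sony's actual tool, which isn't public:

    import random

    def compile_shader(source: str, seed: int) -> tuple[bytes, int]:
        """Hypothetical stand-in for the PS3 shader compiler.

        Returns (binary, estimated_cycles); the optimizer's random search
        is assumed to be driven by `seed`.
        """
        rng = random.Random(seed)
        cycles = 100 + rng.randint(0, 50)  # pretend output quality varies with the seed
        return b"...", cycles

    def best_of_n(source: str, attempts: int = 100) -> tuple[bytes, int]:
        """Recompile with fresh random seeds and keep the fastest binary."""
        best = None
        for _ in range(attempts):
            seed = random.getrandbits(32)
            candidate = compile_shader(source, seed)
            if best is None or candidate[1] < best[1]:
                best = candidate
        return best

    binary, cycles = best_of_n("float4 main() { ... }")
    print(cycles)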


> The weird PS1/PS2/PS3 were because of Japan's heavy arcade culture, where quirky hardware/software designs were the standard.

TBF, until the early aughts bespoke hardware was the only way to hit a suitable price target and get the performance and graphics snazz you wanted.

The first "consumer" PCs which were competitive with consoles in terms of gaming (late 90s, starting with the Voodoo 2 or so) had the graphics adapter alone costing as much as a current-gen console.


> The PS3 shader compiler infamously had a random search for an optimizer, and Sony's recommendation was to just run it continually with different random seed inputs in hopes it found a better optimization for your shader.

Sounds like machine learning with linear regression.


How is that like linear regression? Linear regression has a closed-form solution, and even if you're using an iterative solver, it's a convex problem.
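
The closed form being referred to is the ordinary least-squares solution, β = (XᵀX)⁻¹Xᵀy, so no random restarts or seeds are needed. A minimal NumPy sketch (using lstsq, the numerically stable route to the same answer):

    import numpy as np

    rng = np.random.default_rng(0)
    X = np.column_stack([np.ones(100), rng.normal(size=(100, 2))])  # intercept + 2 features
    true_beta = np.array([1.0, 2.0, -0.5])
    y = X @ true_beta + rng.normal(scale=0.1, size=100)

    # One-shot closed-form fit.
    beta_hat, *_ = np.linalg.lstsq(X, y, rcond=None)
    print(beta_hat)  # close to [1.0, 2.0, -0.5]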


>The weird PS1/PS2/PS3 were because of Japan's heavy arcade culture, where quirky hardware/software designs were the standard.

MSX, PC98. The NES used a standard 6502-class CPU paired with a PPU, and the Mega Drive used the well-known M68k.


I mean, the PS3's Cell uses PowerPC. We're not talking about the ISA here, but the architecture and the bizarre stuff that comes along with it, like the NES PPU.


The architecture and design of PS1/2/3 were driven from Tokyo, whereas PS4/5/+ are driven from the US. It happened because the internal corporate structure changed significantly, leading the design to be heavily influenced by the game dev community rather than by hardware engineers.

As such... it's sad, as we will likely never see any crazy new hardware architectures going forward.


It's a shame in a way. I remember the reports about the Cell architecture for the PS3. By all accounts it wasn't easy to program for, but that was a hell of an innovation only to fall back to the standard PC arch. Impressive as hell but it took game devs a long time to make the most of it; and then they did in spades. You only have to track the trajectory of Uncharted in the space of a console generation to see what that architecture could do. It must have informed Naughty Dog's design ever since because they haven't suffered from rolling back to a traditional chipset.

It was a different world though, PowerPC was still alive and strong back then.

The PS4 does continue the trend, but I think that's more a question of time and investment than pure tech, considering they have a steady trickle of top-notch exclusives.


I prototyped some physics modeling on the Cell BE. Having Linux on it was a huge help, but it definitely was an odd beast to program for.

One thing that made it practical to work with was that we could install Linux on a PS3.

When Sony retroactively killed the ability to install Linux, they did more than make PS3 irrelevant to supercomputing. They made me deeply skeptical of relying on a technology with a single-vendor gatekeeper.


For the sake of the environment, and to avoid throwing away hundreds of thousands of perfectly functional computers, I hope they will change that situation. If nothing else, it's going to be a hacker's dream to play with such a unique chipset. It shouldn't die out.

I'd love to poke into it myself. The only barrier is the language you have to use to speak with it.


I suppose there are multiple measures of environmental friendliness. I would guess that the PS3 offers pretty bad FLOPS per watt compared to newer hardware.


I'm more curious about the subsequent death of PowerPC. Back in the day we had a decent amount of hardware running on that chip.

But on benchmark levels, the Cell would be competing on different ground than an x64 box and a few PCIe slots, right? I've not a clue; I'm curious, but I know I don't really know what I'm talking about beyond this point.


I got two big takeaways from reading Naughty Dog's blog about what they learned from the Cell.

The Cell really taught them the trade-off between latency and throughput: by batching up computations they could do orders of magnitude more than with naive pipelining. But to get that good batching, they had to compromise on framerate.

Which gets to the second big takeaway: that the standard deviation of the framerate was more important than the framerate itself, and that by keeping the framerate low they could also drastically reduce that standard deviation. (Incidentally, I understand this was something the Amiga demoscene really understood; a low standard deviation was a must for any text marquee to look completely smooth.)
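
A rough illustration of that second point, with made-up frame times rather than Naughty Dog's numbers: a stream locked near 30 fps with tiny jitter has a far smaller standard deviation than an uncapped stream that averages slightly faster but swings wildly.

    import statistics

    # Hypothetical frame times in milliseconds.
    uncapped = [22, 45, 19, 50, 21, 38, 18, 40]   # faster on average (~31.6 ms), but jittery
    capped   = [33.3] * 6 + [33.4, 33.5]          # locked near 30 fps

    for name, frames in [("uncapped", uncapped), ("capped", capped)]:
        mean = statistics.mean(frames)
        sd = statistics.stdev(frames)
        print(f"{name}: mean {mean:.1f} ms, stdev {sd:.2f} ms")
    # The capped stream's tiny standard deviation is what reads as "smooth".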


The homogenization of compute hardware is indeed very depressing


The rise of ARM via mobile devices is a nice antidote.

The different ARMs don't even share a common instruction set, I think.


I remember them pushing the idea that the Cell would be used everywhere. At a game conference they brought out an engineer talking about how the Cell would be used in everything... like refrigerators... and then the price for the PS3 was $600 USD...

Meanwhile off the shelf hardware is cheap and has support from developers. And any kind of enhancement to consumer products involves cheap hardware to keep costs down.

I couldn't help but wonder whether it was functional yet still a bit of a corporate vanity project, one that had missed the boat and had nonsensical ambitions.


The PSP likewise had essentially a second CPU with some kind of programmable DSP for handling media like MP3 & MP4 playback. Some enterprising individuals figured out how to use it for general computation in the PSP homebrew heyday. I think there was a Nintendo DS emulator that used it for some kind of speedup.


> As a result, the PS4 and Xbox One have PC hardware.

Does this mean the Xbox was pretty revolutionary for showing that you really can make a console with commodity hardware?


Yeah, I recall the Xbox being super popular to pick up for the sole purpose of repurposing into a media center (typically running Xbox Media Center (XBMC), today known as Kodi).


It was as much a pragmatic choice as an inspired one; commodity PC hardware was obviously getting close to cheap enough. Others have suggested Microsoft lost money with this move; even if so, it was probably still a pragmatic choice. Microsoft was entering a market notoriously unfriendly to new entrants, and spending more on hardware to ensure a successful launch of such an ambitious brand could be argued to make sense, with consoles able to recoup hardware losses in software revenues.

It was likely more about making it easy to port existing windows directx based games quickly and cheaply. Similarly, it granted a development stack that a great many developers would already be somewhat familiar with. All good things for a new entrant to a tough marketplace.

Presumably it also allowed them to get to market faster? Much less custom silicon?


"Revolutionary" is probably not the right term, at this point it was pretty much inevitable given the progress of PC hardware, but it was ahead of the curve by necessity.


Didn't they have to kill it only a couple of years in because it wasn't cost-effective? IIRC it was sold at a huge loss that they never managed to get back.


The original Xbox had some contractual problems that prevented it from being cost-reduced. I don't think MS cared if they broke even since they were in it for the long haul. Everybody knows Xbox 3.11 for Workgroups was the first good Xbox anyway.


Very different from the Windows Phone philosophy, then.


Not necessarily. PS2 dominated its generation; Xbox did OK, but only a company Microsoft's size could have sustained it (and even Microsoft superseded the console in four years, unusually short for that generation). Xbox 360, Microsoft's second try, was hugely successful, beating PS3 everywhere outside Japan. Had it only been as successful as Xbox was, though, Microsoft might not have tried again.

No Windows Phone iteration ever had more than a few percentage points of share. iOS and Android both clearly destroyed it in the market. Had WP seen a few years of sustained success it would still be around today, sort of like how BlackBerry the brand is still around, albeit for Android-based devices.


The Xbox 360 only beat the PS3 in English-speaking countries. By the end of the generation the PS3 had outsold it world-wide, if only barely.


Why doesn't Intel make a chip specialized for gaming? Something like a dual-core CPU optimized for single-threaded performance and a variable number of GPU cores (a guaranteed minimum, up to the full count on a perfect die), binned by clock speed. Skip all the support for extra PCIe lanes (except maybe for storage).

From your comment, it seems like Sony tried exactly this but failed. Is there an inherent flaw in such a design or just poor execution?

Edit: the biggest problem I see with such a design is RAM IO, but I would think that could be mitigated. Maybe even eliminate support for DDR, instead embedding 4 or 8 GB in the package.


That's pretty much what the current console SoCs are: an integrated CPU and gaming GPU on one die, backed by unified GDDR for both, though with 8 weak cores, as that was the only viable architecture AMD had at the time for a console-type device.

From what I understand this is still different from the Sony approach, where you'd have a weaker GPU but would offload shaders etc. to the CPU, which could handle this thanks to its powerful dedicated vector processing units. Of course this had the flaw that GPUs eventually developed shader capabilities and made it pointless, and it was harder to program for if you had to target PC/Xbox etc. as well.


IDK specifically, but generally: parallelism is the way to get more compute despite limited GHz, and something like GPU architecture is a neater fit than general-purpose CPU architecture.

BTW the latest AMD GPU architecture (RDNA) is specialized for gaming, in that it gets around 40% better framerates for the same raw GFLOPS as their previous architecture (GCN).


> Console hardware is not really interesting anymore if you ask me.

and Hallelujah to that

Praise Talos

We really do not need a Wakanda with an independent history of processor technology. Maybe 50 years ago the Cambrian explosion of possibilities was relevant, but this century?

Consoles had to compromise on heat and cost while staying competitive in graphical capability, and those haven't been things the market actually cares about in 15-20 years. For the market, the graphics pale next to the actual experience; they're good enough, and they would never attract the enthusiasts who are willing to burn more electricity for bragging rights.

General processors and off the shelf graphics processors are good enough.

Interchangeable parts wins again.


> Console hardware is not really interesting anymore if you ask me.

Agreed. And that’s a good thing.

PS1, PS2, and PS3 were all underpowered. I’ll never forgive the PS2 for holding back games for 7 years. Grumble grumble.

PS3 may technically not have been slower. But if it takes 2x the effort to get a 10% improvement it is in practice.

Everything’s a PC now. Which means the focus can be on making good games instead of optimizing for specialized hardware to serve a fraction of the audience.


I disagree any of them were underpowered. The PS1 and PS3 arguably produced the nicest graphics of their generations - the PS1 because its competition was compromised in their own way, and the PS3 due to sheer brute force.

The PS2 for sure was less powerful than the more conventional architectures of the GameCube and Xbox, but it was released 18 months before either of them in a time when Moore's law was still extremely relevant. Yet it could still hold its own against the competition. All pieces of impressive hardware.


Compared to PCs of the same era they were very underpowered. And that only got worse as the generation went on. The PS3 and 360 only had 512 MB of RAM shared between the CPU and GPU, which was a pretty serious handicap.

(The lack of RAM was definitely the biggest issue, and the thing that held back PC games the most, as levels had to be made a lot smaller than they otherwise could have been. Destiny and Final Fantasy XIV are two concrete examples: both released expansions with much larger playable areas as soon as PS3/360 support was dropped.)

Compared to other consoles of the same generation they were on par though.


> Compared to PCs of the same era they were very underpowered.

Compared to top-of-the-line PCs of the same era yes. But the GPU alone of "PCs of the same era" would set you back as much as a 6th gen console, and you'd have a shiny graphics adapter sitting in a box.

The 7th gen is where consoles started really falling behind (though somewhat ironically also where MS decided to go with somewhat more bespoke hardware than the "PC in a box" Xbox).


Did you just call the PS1's wobbly fixed-point vertices, no-subpixel-precision, no-transparency, no-perspective-correction mess full of gaps the nicest graphics of its generation?


For what it’s worth, I do really enjoy the aesthetic of that sort of work in the same way that I enjoy a nice piece of pixel art with a limited colour palette.


Well, if you like textures, then yes.

The most serious competition at the time was the Nintendo 64, and it lacked storage.


PS1 didn't hold back PC or other consoles so I bear it no ill will.

PS2 dominated sixth gen sales. By being the winning console but releasing 18 months earlier it held back everyone. And since, like you said, Moore's law was very much alive this represented a significant reduction in overall quality for an entire generation. :(

PS3 at its peak was great. But it took waaaay too much effort to get there making it a very poor value proposition for developers. And 7th gen was a fully multi-platform era, so the end result is that the average game was held back and only first-party prestige titles could justify the extra cost.


> But it took waaaay too much effort to get there making it a very poor value proposition for developers.

It’s funny that this is always a criticism levelled at the CPU and not the developers.


It’s a financial decision. Building a competitive console game wasn’t cheap by that generation, and passion only goes so far. Publishers ultimately want to recoup their investment, which means developers targeting the common subset of functionality and using middleware wherever possible to support multiple platforms. Unless Sony was your publisher, it wouldn’t have made economic sense to deeply optimize for Cell. PS4 and Xbox One being so similar to each other is good for developers, which is good for games, which is good for gamers, even if it does mean fewer esoteric toys for passionate engineers to play with.


This reminds me so much of the discussions around the Amiga back in the 80s.

Powerful custom hardware can be great for the developers that invest in it, until it isn't anymore and now you've got a bunch of sunk R&D. And then there's the need to ship on more than one platform...


No, it's tragic that blaming developers is even considered an option. Developers are human and can be expected to behave as such. The value of developer time is well known, and if anything developers undervalue their own time and are far too willing to waste it micro-optimising for a gratuitously difficult CPU design rather than getting on with more valuable work.


You kinda have to work with the people who exist, whereas there were other CPUs available at the time.

And even if everyone were a super smart 10x-developer like you surely are, their time would still be better spent on a platform that is more productive. The idea that some technology is difficult but rewards proficiency with incredible productivity is almost entirely wrong, which is why nobody does web development in Assembly. LaTeX vs Word would be the only example I would have agreed with in the past, but if you search you'll find a study showing even the most proficient practitioners are slower and make more typesetting mistakes using the former.


> And even if everyone were a super smart 10x-developer like you surely are

That's not called for. They never implied they were even above average. Just that a bunch of game devs were somewhat lazy.


> Just that a bunch of game devs were somewhat lazy.

That's not called for. You don't know those developers, you don't know what work was on their plate or what constraints they were toiling under.


It is completely fine for you to say that opinion is unjustified.

But they still weren't calling themselves some kind of better-than-everyone dev.

Though if you're using "developers" to mean that entire section of the company, including management, then what was on their plate and the constraints are almost all self-inflicted.


Yes. The blame correctly lies on the processor architecture.

Assume you have a developer who is one of the best in the world at writing hyper-optimized SPU code. Imagine this dev WANTS to write micro-optimized assembly. (And there are many such devs.) This developer's time would be far more efficiently spent if they were writing "normal" code for a "normal" processor.

So yeah. The criticism belongs squarely on the processor. :)


This was the moment that Sony could have made something useful of themselves by allowing Linux to be used on a popular games console of the time, improving the accessibility of installing Linux on a computing machine (or even basing GameOS on Linux).

Instead they cowardly removed it due to a 'hack' by Geohot and locked it down for 'security purposes', and potential researchers, scientists, and those without access to supercomputers are the ones who suffer from that decision.

Sony had to appeal to their gaming community over the science community so that's that.

Also it's 'OtherOS' not 'OpenOS'.


(I'm sure I'll lose karma for explaining Sony's side, since some people just want to be mad)

Linux support was not removed due to Geohot. It was a driver and business priority issue. Geohot just sealed the deal.

Having worked there during the kerfuffle, I can confirm their reason for not supporting Linux going forward was valid. It was a non-trivial effort, and they wanted those engineers working on support for the new hardware revisions. It was a company-wide goal to hit break-even at retail pricing at that time (Sony was amazing at running hardware cost forecasting internally and IIRC hit their seven-year target within a week). This required the new hardware. So they dropped Linux. That part is real.

Then they effed up by not messaging it well and by dropping promised back support.

And then a bunch of the community responded like petulant children, attacked Sony's main business, and released products that allowed others to infringe on Sony's and other game studios' property with impunity. These actions killed any chance Sony would engage with this community for the next decade or so.

Sony was also backed into a corner technically, as I recall: the thing that got hacked (by realizing there was a null crypto key on the one processor), and that thus allowed Linux, was also the thing that did the DRM. Sony had to choose between another year or so of DRM and Linux support.

As in many cases, both parties are at fault. Contracts can be and are broken. Sony paid multiple prices for it. So did the Linux/hacker community.


I am taken aback a bit by this characterisation:

    And then a bunch of the community responded like petulant children and...

Sony supported Linux in public, people bought their hardware for it, and then, without warning, Sony yanked the rug from under them. Of course people tried to take back their hardware, for which they had paid. The blame here is 100% on Sony.

Sony has a massive internal conflict of interest by being both a content creator and a hardware provider. They sabotage their own products to the detriment of their customers again and again.

Sony worked with the community for a few years and had the longest-unhacked console to show for it. Then they slapped the community in the face and paid dearly for it, as they should. Sony should beg the community for another chance.


> Sony supported Linux in public, people bought their hardware for it, and then, without warning, Sony yanked the rug from under them. Of course people tried to take back their hardware, for which they had paid. The blame here is 100% on Sony.

I hadn't bought any Sony hardware at that moment, so IIRC I just decided right there and then that there was no reason to send money Sony's way or feel sorry for them.

First Sony console I bought was a used PS3 sometime after PS4 was released. And I was enthusiastic about PS when they first released the console with Linux support.

I guess I'm not the only one.


I have a feeling that many people who complained were not affected because they either never used Linux or they only used Linux and could just not upgrade. I understand that in theory Sony should have fixed their hypervisor bugs and kept OtherOS working, but I also understand that the world isn't perfect.


Why would many people complain if they were not affected? That seems like an odd conspiracy theory.


If you wait till you're affected, it's too late. Take secure boot for example - should we wait until there are no computers on the market where the user can unlock it, before we complain? Because that's what it would take for someone intent on buying an unlockable computer to be affected.

You can apply this to any other infringement on freedom.



When you have a legitimate beef or problem with someone, you don't take illegitimate and illegal actions against them. I'm all for reverse engineering. That's fine.

Extorting a company by threatening them with release of an exploit if you don't get your way is not legitimate, that's blackmail or extortion.

All of the hackers attacking Sony for months, disabling PSN, is not legitimate, and should never have been supported by anyone.

Just let the courts handle it like they should and did.


> illegal actions against them

What's illegal about reverse engineering a physical product you sell to someone?

> Disabling PSN

Are you kidding me? The PSN was notoriously insecure. Sony didn't give a crap until they were embarrassed.


Read my post.

Reverse engineering: legal and, to me, moral.

Extorting a company with the threat of releasing something to get what you want: extortion, and illegal. This was done by people around Hotz.

Agreed: PSN was insecure. Hacking is still illegal.

By taking these actions we lost the moral high ground as hackers. The class action by itself would have been just fine.


The class action only happened because it was hacked. Sony just threw up its hands and said "we don't care" until they were forced to.


>you don't take illegitimate and illegal actions against them.

What Sony did is illegal.


That is what courts are for, not vigilante action.


My recollection is that the PS3/Linux thing was a failed tax dodge, because that meant you could classify the system as a general purpose computer instead of a game console. When the EU decided to categorize it as a game console anyway, they killed the Linux project.


Wasn't that the PS2?

I recall they offered a "Linux" kit with mouse/keyboard, a network/HDD adapter (so limited to fat models only), and a VGA cable to use with a computer monitor with sync-on-green support...


No, this reads like you working there has just made you immune to seeing all the missteps. No one made them do all the scuffed Sony custom logic they keep putting into their devices, causing all this overhead and compromise on engineering. No one pressured them into the ever more convoluted, bound-to-fail DRM schemes.


> This required the new hardware. So they dropped Linux.

Lack of support for new hardware does not explain why they dropped support for early hardware.


It's pretty simple. You pay for something with a headline capability. Capability is removed.

You no longer have the thing you paid for.

Honestly, your attitude is rather shocking.


Yeah, that sounds about right.

As someone who was on the dev side of the fence at that time one of the areas where Sony was pretty poor was the devtools. Other than some hardware capabilities of the profiler Microsoft blew away everything else that Sony had.

While running Linux on it was pretty neat, I wonder what the opportunity cost was to other parts of the development experience on the platform.


> And then a bunch of the community responded like petulant children, attacked Sony's main business, and released products that allowed others to infringe on Sony's and other game studios' property with impunity. These actions killed any chance Sony would engage with this community for the next decade or so.

Whereas Sony regularly acts with impunity in being toxic towards its users. (One thing to note: you can never get a refund on something you "bought", or on something that was fraudulently bought on your PSN account. Do a chargeback on a PSN purchase and you lose all of the entitlements you purchased before.)


The notion that it is more reasonable to remove a feature you advertised than to lose DRM just blows my mind.


I think the thing that you are missing is that DRM is a key feature required by game publishers. Games is the main use of the platform and the reason it exists.

Selling a Linux box isn't. One can argue that selling a Linux box at a loss is a dumb idea. One can argue doing so helped bring supercomputing to the masses and was a good gesture. However, when the reason the box is subsidized is threatened, it's pretty rational to remove the thing threatening it.

This was definitely a lesser-of-two-evils business choice for Sony. They misjudged how much people cared about Linux, as others have misjudged other advertised promises, like No Man's Sky.


The game publishers are not your clients, the people buying the console are.


This doesn't take anything away from Sony's decision making, but removing OtherOS never really helped them "get good security".

The PlayStation 3 was cracked wide open [1] without OtherOS and because people were pissed at its removal. On Playstation 3s released in the first 3.5 years of its lifespan you can get full hypervisor compromise without any hardware mods, and you can yet again run OtherOS on the latest firmware. Now with less restrictions than before. Newer PS3s (starting with CECH-25xx) have 'only' been partially compromised. They can't run OtherOS, but they can still run homebrew and piracy despite Sony taking OtherOS away.

Maybe it's a bit of solace that this stuff only punished Sony. But inevitably, the lesson they learnt is "we'll never experiment with this stuff again".

[1]: This video is really worth watching if you're interested in this stuff: https://www.youtube.com/watch?v=DUGGJpn2_zY


To play the devil's advocate, Sony is in the business of selling games; their consoles were just a means to that end, and since they were selling the hardware at a loss, selling more of it to people who would never buy games was bad for their business model.


There was no reason they couldn't have sold a slightly less locked-down system to researchers at a higher price point — Microsoft did exactly that with unsold X360 Kinect units (and bundled in a USB adapter) for medical research.


The Cell processor shipped for commercial installation in the IBM QS20/QS21/QS22 blades, which the Roadrunner supercomputer was built from.

Sony also demoed a rack-mount Cell server (https://en.wikipedia.org/wiki/Zego), but it is not clear that it was ever available on the open market.


The reason was cost. Have fun convincing management to allocate resources to develop and support another SKU tailored for researchers.


You don't need to - just license it out to a third-party systems integrator.


So what? You don't need much difference in hardware; it was only a firmware change.


Firmware changes that relax the security of your hardware require a lot of red tape and raise much fear in management.

Sony is infamous for going overboard on DRM.


It's a matter of time, and it wasn't a priority.

A lot of things can be done given unlimited time and resources, but unfortunately we don’t have that.


To play devil's advocate to the devil's advocate, you can sell games without building perfectly viable general-purpose compute hardware and then locking it down with DRM to make it useless as such. Microsoft, Sony, and Nintendo are not developing game consoles out of anything but an incredibly evil greed - they are wasting immense amounts of raw materials making computers that cannot be used as computers.

The consoles that existed in the 70s and 80s were largely justified in being specialized hardware specifically for realtime rendering, which had no realistic application outside the narrow scope of fixed-function realtime interactive rendering they produced. In the 90s the PS1, N64, and Dreamcast were all computer-adjacent, with the next generation - PS2 / GameCube / Xbox - all being perfectly functional general-purpose computers that were simply crippled by proprietary software into being toys.

Since then their hardware has only gotten more generic and general-purpose, and their draconian platform lockdown has grown to an insane degree to resist emulation and unlocking. It's a shame Valve only ever saw SteamOS as a hedge against Microsoft - if they had taken it seriously and put in the admittedly ridiculous amount of money it would have taken to break out a console platform without the horrifying amount of proprietary bullshit going into the competition's products, we might have seen that incredible amount of waste become as taboo as it deserves to be.

Honestly the worst offender is the Nintendo Switch - it is verbatim just the successor device to the Nvidia Shield Tablet, just with a grotesque proprietary operating environment and an impenetrably locked bootloader. Every time I see one of those things I am struck by how absurdly wasteful it is to see someone carrying that thing and a regular tablet, when Nintendo, Sony, and Microsoft could all just be shipping their own proprietary DRM-laden store apps on top of a freedom-respecting software foundation. If you want to exploit people by depriving them of software freedoms while they pay you for it... I'd argue it's a product of propagandized deception and manipulation, but it's still people considered informed consumers buying that garbage. It's a lot less bad than entire computers, fit for a multitude of purposes, being reduced to toys.


> Microsoft, Sony, and Nintendo are not developing game consoles out of anything but an incredibly evil greed - they are wasting insurmountable amounts of raw materials making computers that cannot be used as computers.

I think this is a bit harsh. Is artwork a waste of good canvas?

(I guess to be fair to you here, at least canvas is a renewable resource)


There isn't an appropriate analogy for IP based DRM because the real world doesn't behave like intellectual property.

I was trying to come up with some convoluted analogy about making art on a gold plated easel that self destructs every decade and burns any canvas besides a specific size that is put on it, but just trying to formulate a physical analog hints at the absurdity of it all.

Actually, wait. You buy a tractor but you can only use it on certain days of the week and it permanently shuts off after 3 years and stops working if you try to attach a snow plow to it. Except that is already the reality of John Deere products being DRM locked and not owned by the buyers.


I thought the John Deere thing was only to the extent that they locked-down their users from tweaking their firmware - not that their firmware actively tries to frustrate use-cases like attaching a plow or disabling the tractor after 3 years?


Ever replaced a blown fuse? Air filter? Oil? It requires an authorized service technician with a special "let's call the mothership to reauthorize" tool.


Valve's Proton enabled me to switch fully to Linux once and for all. The few games that won't run now are far more acceptable than not being able to play virtually any mainstream titles. It's surprising to me how well it works, and it reaffirms my support for Valve and the Steam store.


Have you used a Switch? There is no commodity tablet on the market with the attachable-controller form factor and HDMI-passthrough docking that the Switch has, at the Switch price point.


Why would you make an open platform with custom hardware when you can make a proprietary, locked-down one that you can use as an app-store-cut mill, like Nintendo does?

They take the air out of the industry with how much money they make operating the way they do. That doesn't make it right, ethical, or reasonable.


If consoles were sold at a profit what you say may make sense. However building an open platform means you need to devote resources to it both from a capital perspective and opportunity cost.

Driving demand towards the hardware, instead of software and licensing costs on a negative margin product is a pretty sure way to run a company out of business.


It has been over a decade since any console hardware product was sold at a loss. That stopped with the Xbox 360 and PS3. The Wii, Wii U, PS4, Xbox One, Switch, etc. have all been sold at a marginal profit.


Why can't console hardware be sold at a profit? Make up for the increased sale price by bundling a rebate on the first X officially-published games the customer buys. Then everything after that is pure profit for the console vendor.


The more likely you are to buy the (cheap-ish) hardware, the more likely they are to sell you more games - at the regular price. Also, there is a balance unrelated to the cost of making the hardware: the buyer should want to actually use the hardware, since it wasn't bought too cheaply, yet the price of the hardware can't be so high that it scares off the potential buyer.


This doesn't matter anyways, consoles haven't been sold at a loss for almost a decade.


This used to be called ‘pack-ins’, no? Like when Sonic was included with a Genesis?


Nintendo has always sold at a profit AFAICT


Nintendo is exactly that, a toy maker. That their toys run on commoditized hardware is a side-effect, not a voluntary choice. They will never ship a general purpose computing device.

Are you saying we should force them with laws?


I would have loved SteamOS to be a general purpose Linux OS with Steam preinstalled instead of the... thing it actually was.


If the experience had been seamless I think it would have taken off.

But it wasn't, mostly due to the games. Many of them would require emulating a mouse just to get into the game, which meant emulating a desktop. Then the game would start and it would switch over to the controller mode.

Many games just didn't work well under it, and so forth.


I haven't installed it myself, but isn't it just a slightly tweaked Debian? I thought you could leave the Big Picture Mode-style UI and get to a desktop where you could install additional software.


Didn't they sell a bunch of PS3s to the US Air Force for research purposes?

I'm sure that military personnel enjoy games as much as the next person, but it's probably not why they bought the machines.

https://phys.org/news/2010-12-air-playstation-3s-supercomput...


They did. The article linked above mentions that it took them a while to convince Sony to agree to it.

Also, if you run the math in the article you linked, it may suggest they paid more than list price.

> core made of 1,760 Sony PlayStation 3 (PS3) consoles

> the PS3s for the supercomputer's core cost about $2 million

that's $1,136 per unit.


Definitely a completely different support contract than a consumer console. That sounds more like an amortization of how many consoles they’d need to support that large of a system for the life of the contract.


Could also be the rake you pay to the company who will sell you hardware on terms with a PO.

A company I worked for in the past had a significant line of business with a major post-production company in the Valley that was just buying hundreds of DVD players off eBay at a 20% markup and drop-shipping them.

There are tons of companies and government agencies that can't just send someone to Best Buy with a Visa to make purchases. It has to be a preapproved vendor, on at least net-30 terms, and purchased via PO. There's a premium to be paid to companies/vendors that can work within your company's processes to quickly get you what you need so projects aren't blocked.

I do wonder if the PS3 had GSA pricing, though.


Plus they eventually got dinged for this because they had inexplicably advertised the Linux support, so removing it turned that into false advertising. Owners of the classic PS3 got a chunk of $ from a class action settlement.


Correction: the lawyers got a chunk of money....


It was never about supporting Linux or scientific computing. They just wanted to pay less taxes. Linux support is what allowed them to argue the PS2 and PS3 were real computers rather than just an entertainment product.


> due to a 'hack' by Geohot

This is OT, but I think people here might enjoy it: Geohot doing hackerrank problems for 5 hours straight: https://www.youtube.com/watch?v=EecfVsdQMcM


I'll never buy another Sony console again after they removed the ability to run Linux on the PS3. It's one of the reasons I bought one and I used it a lot.


Do you think that Sony is having executive board meetings worrying about a few Linux users who won’t buy their consoles?


Choosing not to buy products from a company you don't like seems pretty reasonable to me. It doesn't have to be a cunning plot to destroy the company.


True. But it’s like every single random submission about “Why I chose to leave $X”. In the grand scheme of things. No one cares.


Well, obviously, some do. There is always an agenda to be pursued, but you are right in that companies do not care. Even if they say they do.


I'm pretty sure they had more than one board meeting worrying about the sudden coordinated effort by 'Linux nerds' to utterly destroy the PS3 security model.


Yes, they share the room with the Facebook executives who worry about the HN users who will quit the social network.


They didn't remove it. They just ended updates for it, so you couldn't use an updated console AND have Linux support. Software isn't free, so I think people are overentitled if they expect lifetime updates.


Which crippled the console, because you couldn't play games online without updating. And if that wasn't bad enough, soon after you couldn't even play new games offline. I used my PS3 as a backup computer when I was at school (came in handy when my laptop died), so I didn't update it. When I bought Dark Souls, I found out I couldn't play it.


This makes sense. New games could rely on API changes in software. It happens all the time for me as an iOS developer, and we have to make an educated decision on which OS versions to support and which ones not to.


Do you buy Microsoft or Nintendo consoles?

If so why?


I buy Nintendo consoles due to the quality of the software and hardware.

The Nintendo DS, at the time, was unparalleled in terms of software quality and, while not as portable as a Game Boy Advance, was compatible with my existing GBA collection, allowed me to play wirelessly (and eventually online) with friends, and offered an almost dauntingly large library of quality software.

This was before the days of the iPhone, and while I did have a Windows Phone at the time, it never could have had, and certainly did not have, the library of software or the hardware design to be considered a valid console. Being a PDA and a phone was more than enough at the time.

I played ‘Halo’, of all things, on my PowerBook G4 back in the day, saw that as the driver to XBOX, and couldn’t have cared less.


But both of these companies are hostile to Linux... Why would you buy anything from them?

Is it worse to try and fail than to never try at all?


When reading through this I was sure it was about Folding@home, but it turns out this was a different project! Strange that Folding@home isn't even mentioned; if I recall correctly, you could install Folding@home easily via the store on the PS3, and a normal user could contribute. The Verge article is more focused on customized machines (rooted to run Linux and kept on-site) instead of using the standard OS and letting the contributors keep the machines at home. Folding@home seems to still be running to this day as well! Plenty of information about it here: https://en.wikipedia.org/wiki/Folding@home#PlayStation_3


Off-topic, what's the current state of protein folding simulation? Is it still an unsolved problem? It was mentioned pretty often as a hard problem but I stopped hearing about it a couple years ago.


I had the same assumption. One of my college roommates (academic year 2008–09) got a PS3 and set up Folding@home on it. I thought that was so cool at the time. It was my first exposure to distributed computing (in this sense of the term).


Definitely kept my PS3 on during middle school hours/overnight running this; such a cool project they added to the system.


The entire interest in this came from the fact that Sony was selling PS3s at a loss, hoping to make up for it with game sales. They were willing to put up with people buying them for compute for a little bit, for the marketing, but it was never going to be a sustainable system.


"Supercomputer projects needed the original PS3, not the PS3 Slim, because Sony had removed Linux support from the console in response to hacks — which later led to a class-action settlement. This article originally stated that it was because the PS3 Slim was less powerful. We regret the error."

Wonder what made the writer originally assume that? Mid-generation console refreshes would be the one occurrence where much care would be taken to keep the performance not faster, not slower, but completely identical to the original hardware. I remember reading somewhere that the Xbox 360 refreshes had some hardware to intentionally add latency for inter-chip communications to keep performance identical.


Seriously surprised that the article didn't mention Iraq buying 4,000 PlayStation 2's in the year 2000. I think Saddam was doing some supercomputing.

https://www.theregister.co.uk/2000/12/19/iraq_buys_4000_play...


> "The Air Force had to convince Sony to..."

To be a fly on the wall during that discussion. I'm sure money wasn't an issue for the Air Force, or that Sony had some balls of steel.


Are you implying that the Air Force is in the business of making “offers that cannot be refused” especially to foreign owned companies?


Well, actually yes.

Notice how there's very few military aircraft companies in the USA?

That's because the little fish got a visit from Washington saying, "that was your last contract unless you merge."

Northrop tried to resist, for a while anyway.

Regarding foreign companies, the Avro Arrow was cut up after the US offered Canada a northern missile defense network. The Arrow was an early Mach 2 interceptor - think of the foreign sales possibilities.

Many of the designers ended up at NASA.



> the PS4 can’t easily be turned into a cog for a supercomputing machine. “There’s nothing novel about the PlayStation 4, it’s just a regular old PC,” Khanna says. “We weren’t really motivated to do anything with the PlayStation 4.”

The article doesn’t really explain in detail the difference that made the ps3 an attractive buy for making super computers over regular PCs. Can someone elaborate?


From Wikipedia about another, similar project (Folding@Home)

> At the time of its inception, its main streaming Cell processor delivered a 20 times speed increase over PCs for some calculations, processing power which could not be found on other systems such as the Xbox 360.[36][190] The PS3's high speed and efficiency introduced other opportunities for worthwhile optimizations according to Amdahl's law

> The PS3's uniform console environment made technical support easier and made Folding@home more user friendly.[36] The PS3 also had the ability to stream data quickly to its GPU, which was used for real-time atomic-level visualizing of the current protein dynamics

https://en.wikipedia.org/wiki/Folding@home#PlayStation_3


Ironically, this is the kind of massive research effort that got Sony into trouble, because it cost a ton and was hard for developers to master (SEGA felt this hard a decade prior); hence Nintendo's U-turn and Sony's/Microsoft's 90%-mainstream-hardware follow-ups.


The processor in the PS3 was very different. It was called Cell. It had 9 cores: 1 general-purpose processor and 8 stripped-down, hot-rod processors. They were tied together by a fast bus. I think the cores could even talk to cores in other boxes more efficiently than could be done with x86 processors. Making a supercomputer from PS3s was 10 times cheaper and would run 10 times cooler. --- https://www.datacenterdynamics.com/analysis/the-playstation-...

DISCLAIMER: I am not a chip expert. I threw together this summary after reading several articles.


As I recall, a big part of it was that Sony was subsidizing the powerful hardware, selling it slightly below market under the business model of making the money back by selling games to run on the systems (kind of like the printer/ink model)

This meant you could buy much more hardware for the same cost versus buying the parts individually.


The PS3's CPU was a single general-purpose core and 6 SIMD coprocessors.

It was crap for creating games (and a pain in the ass to develop for), but it was great for pure compute.

By comparison, the 360 used a trio of similar general-purpose cores without the SIMD processors on the side[1], and had significantly less pure compute power.

[1] But with better built-in SIMD: the PS3's general-purpose core had a single internal SIMD unit while each of the 360's had two; the 360's SIMD units were also improved with games-relevant operations and had 4x the number of SIMD registers.
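
For a rough sense of the "pure compute" gap, a back-of-the-envelope peak single-precision estimate. It assumes the commonly cited 3.2 GHz clock and one 4-wide fused multiply-add per SPE per cycle; the ~115 GFLOPS figure usually quoted for the 360's Xenon is likewise a textbook peak, not a measurement.

    CLOCK_GHZ = 3.2
    SIMD_LANES = 4        # 4-wide single-precision vectors
    FLOPS_PER_LANE = 2    # a fused multiply-add counts as two flops

    per_spe = CLOCK_GHZ * SIMD_LANES * FLOPS_PER_LANE   # 25.6 GFLOPS per SPE
    six_spes = 6 * per_spe                              # 153.6 GFLOPS usable by games
    eight_spes = 8 * per_spe                            # 204.8 GFLOPS with all SPEs enabled

    print(per_spe, six_spes, eight_spes)
    # Xenon's three cores are usually quoted at ~115 GFLOPS peak combined,
    # so the Cell's SPEs alone roughly doubled the theoretical ceiling.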


> The PS3's CPU was a single general-purpose core and 6 SIMD coprocessors

6 usable for games but 8 physically in the silicon, with one disabled and one reserved for the OS. So a dual thread PPU and 6 game usable SPEs for the advertised "9 thread" (sometimes just referred to as 9-core) CPU.


> 6 usable for games but 8 physically in the silicon, with one disabled and one reserved for the OS.

Indeed, I initially put in the disabled one (for yield) and one reserved (especially as IBM's server version had all 8 enabled) but I figured that was of low enough interest I could just list the actually usable ones.


Massive vector compute units.


> Just a regular old pc

It's literally there: the PS4 is an AMD-based PC (Bulldozer architecture IIRC) packaged in a fancy case with a nice GPU.


Jaguar, not Bulldozer.


Semiconductor pricing is largely determined by volume. The Cell processor had mediocre normal performance in line with other Power derivatives but really good vector performance for the target price (this was before GPU computing matured), so people with vector-amenable tasks could theoretically win by adopting a processor shipping by the million rather than using lower-volume chips targeted at traditional HPC uses. Since Cell was a Sony/IBM project, IBM was interested in reselling the same work to new customers and (IIRC) needed the volume to have their chip foundry business be healthy.

The problem was similar to Intel’s flop with IA-64: you could get a few amazing benchmarks but code which couldn’t be perfectly scheduled like that performed poorly, and that tuning was hard even if it could be done. The scientists I supported at the time were interested but the numbers made no sense: some code was inherently branch-heavy and couldn’t be tuned much and for the portions which could it was cheaper to buy more regular hardware using the money which would have gone to a high-end programmer, especially when you looked at the long-term maintenance cost of that heavily optimized code (everyone of that era remembered ripping out previous assembly optimizations which had become worse than C on newer CPU & compilers).


It was a cute way to get a Cell processor running Linux for an order of magnitude less than any other option, during the brief window where that processor looked like it was going to be the future of high-performance CPUs. (Anyone else here have the misfortune to work with LLNL's Roadrunner?) But then IBM killed it, and Nvidia came along and ate the market.


LANL had the Roadrunner. LLNL had BlueGene/L at the time.


quite so. time and autocorrect.


That wire rack shelf (sans wheels) is still how some low-budget colocation/dedicated server operators set up their hardware: mini-tower or mid-tower microATX/ATX-sized cases with low-cost motherboards, CPUs, and power supplies. A cheap microATX-sized tower case with one rear exhaust fan is $35.

Each system with one or two cat5e cables and a single power cable. In really low cost colocation facilities the limiting factor is often total kW of electricity and cooling available, not square footage of floor space.

It's been a common thing in the datacenter/dedicated server business for about 18-20 years.

The same general idea was adopted by Bitcoin miners using ASIC hardware, and later on with Ethereum mining again with systems using two or four GPUs connected to a low-cost ATX motherboard.


If you're not running 7+ cards on a mining rig you're probably doing it wrong. The higher the number of cards you run in a rig the more you can distribute the cost of the non mining hardware (HD, mobo, cpu, memory) across cards.

A mobo that supports 8 cards might be double the cost of a board that supports 4 cards, but it prevents you from having to buy a second cpu, a second hard drive, second psu, and more memory to put in your second cheapo mobo.

Now if you can buy random cheap ATX boxes for less than $27 per PCIe slot you are better off initially, but then management across all your various junk boxes becomes a PITA quickly.
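
A quick sketch of that cost-per-slot math; the dollar figures below are placeholders, not quotes for real parts.

    def overhead_per_gpu(mobo, cpu, ram, disk, psu, slots):
        """Fixed rig cost spread across the GPU slots it can feed."""
        return (mobo + cpu + ram + disk + psu) / slots

    # Hypothetical prices: one 8-slot mining board vs. a cheaper 4-slot board.
    eight_slot = overhead_per_gpu(mobo=200, cpu=60, ram=40, disk=25, psu=120, slots=8)
    four_slot = overhead_per_gpu(mobo=100, cpu=60, ram=40, disk=25, psu=120, slots=4)

    print(f"8-slot board: ${eight_slot:.0f} of overhead per card")   # ~$56
    print(f"4-slot board: ${four_slot:.0f} of overhead per card")    # ~$86
    # Even with the pricier board, sharing the CPU/RAM/disk/PSU across more
    # cards makes the per-card overhead lower on the 8-slot rig.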


The last time I spent 15 minutes doing a possible ROI calculation on a GPU based Ethereum mining system, about a year ago, I came to the conclusion that the upfront cost of buying the GPUs alone, plus the electricity (entirely ignoring the cost of a dedicated mining purpose motherboard and cpu/ram, power supply, etc) would not have a reasonable payback period. Even if the electricity was 1.5 cents a kWh, and cooling was free.

I've seen the various motherboards from Taiwanese board manufacturers which are designed for mining, with many small pci-express 3.0 slots intended for use with riser ribbon/interface cards, such as for use as you describe with 8 GPUs attached to one motherboard.

https://www.techradar.com/news/best-mining-motherboards


I did some Ethereum mining for a little while and simply couldn't believe that people were buying/building new systems. PCIe 3 is backwards compatible with PCIe 2, which has far more bandwidth than a mining GPU will ever need.

I bought surplus Core2Duo systems for $10 and slapped in a cheap SSD and a 1060 6GB. The CPU would sit there at 99% idle and the power supply never broke a sweat.


That is a very good point. The calculations I did were well after the value drop (early 2018) in Ethereum, and assumed that the case, motherboard, CPU, RAM, and power supply were zero dollars: just the cost of electricity and buying several thousand dollars of GPUs. Even then it didn't work out.


I should have paid more attention to my phrasing. If you were running less than 7 cards per rig you likely did it wrong. Now if you do it at all you're doing it wrong.

ROI is more or less impossible now. By the time you'd get close the reward will halve again. Also back then the diff kept rising increasingly quickly as well. I was doing this back in like 2016 with free power and cooling and made a grip of money.


I think for the ROI to be worth it now somebody would have to literally give you $3000 to $4000 of GPUs for zero dollars, and your electricity bill would have to be some magical $0.00 per kWh.


I would expect this to be more viable than ever, now that the cost of console hardware is so heavily subsidised by online subscription services.


The CPUs of modern consoles are pretty standard general-purpose CPUs: the PS4 and XB1 use customised AMD APUs, and the Switch uses a Tegra X1. That's not useful for supercomputers.

The PS3 was using a pretty weird CPU, which importantly had a single general-purpose core but 6 SIMD co-processors, making it very useful for compute tasks on supercomputers.

These days you'd use off-the-shelf compute-oriented FPGA or stream / GPGPU units paired with general-purpose CPUs e.g. OLCF-4 has 9216 general-purpose CPUs driving 27648 Tesla V100.


Also, console makers collect fees from third-party game publishers, whether it's on physical media or it's downloaded. I don't know how much profit comes from each source.


Most people have been saying that consoles are less subsidized now than they used to be.


I have a recollection that at the time when this sort of thing became possible, that there was some worry that unfriendly nations could use this to get around technology export restrictions (what no, we're not buying a supercomputer--we're just buying 500 game consoles for our weapons designers).


I love the plug for Person of Interest and was surprised that the original ones from the Air Force ended up there.


It would be awesome if you could install Linux on modern Xbox or PS4 consoles and use them as PCs...


I’m not sure how much of a benefit that’d be at this point — the PS3 was some custom hardware that was unique to its era, while the current consoles are 5-year-old x86 chips running custom OSes. Putting Linux into that mix only gets you a stable, but outdated, hardware platform.


I was really thinking more about the upcoming PS5 and Xbox Scarlett, which both should have Zen 2 CPUs and Navi GPUs, along with a generous helping of GDDR6 memory and SSDs.


At launch, those systems will probably be very competitive with off-the-shelf PCs. By 2022, they won't look as good.

One thing these new consoles are doing that seems interesting is that they use a huge pool of unified RAM (rather than two independent pools of main+video memory).


The hardware on a PS4 Pro or an Xbox One X is pretty anemic CPU-compute wise compared to what you could build for near the same cost.

Something like a $190 Ryzen 3600 on a $100 motherboard. Add RAM and a small low-cost SSD, or just do network boot/PXE. The total cost might be somewhat more than buying a PS4 Pro, but it would run circles around it in any CPU-intensive application.


You can install Linux on a modern PS4 - the fail0verflow team managed to get it running, with 3D support even.

See their 33c3 presentation on how they did it. It's filled with great details on how PS4 differs from PCs: https://youtu.be/-AoHGJ1g9aM


The Xbox One already runs UWP apps. But I guess not Win32 or a standard Windows desktop?


If you count using an external HDD as valid, you can install Arch or Fedora on a PS4 right now if you can still find one with a firmware not higher than 5.05/5.07...


The article neglects that the PS3 supercomputers were popular because Sony sold the consoles at a loss.

No hardware manufacturer does this any more.



