
Storage Matters: Why Xbox and Playstation SSDs Usher in a New Era of Gaming - xoa
https://www.anandtech.com/show/15848/storage-matters-xbox-ps5-new-era-of-gaming/
======
ardit33
I hate to say it, but this SSD story has been plastered all over the news by
the PR machine, mainly because the new consoles have no other innovations
whatsoever....

No, it is not going to usher in a new era of gaming, just make games have
fewer 'loading' screens, and perhaps larger open worlds.... but it is nowhere
near the innovation and changes we experienced in the 90s and early 2000s....

Let me say it in more popular language: history has demonstrated that gamers
don't give a f@ck about how fast the SSD is.... The PS1 won over the N64 even
though it had huge loading times, compared to the N64's near-instantaneous
loading screens....

I mean, personally, I think it is great, as PS4 loading times are bad, but
fast loading times alone are not going to make me buy a game.

Anyway, just a sad PR piece trying to get people excited for something that
looks like a small (but good) evolutionary improvement over the current PS4
Pro/Xbox One X.

Now if we had custom, fast ray-tracing technology, then it would have been
exciting. But it looks like, if enabled, a game will have to drop resolution
and cap frame rates at a barely playable 30fps.

But, hey your SSD will be fast.....

~~~
jimmaswell
The N64 looked way better, loaded faster, had a better controller, and to me
had a better game library as a whole. The PS1 made comparatively horrible
architectural choices, like no built-in z-buffer, texture filtering, or
floating point processing. Apparently the N64 sold fewer units per year on the
market, though.
What did everyone see in the PS1?

~~~
inertiatic
Maybe N64 had a better library of games to you, but for others PS1 had Silent
Hill, MGS, FFVII, Gran Turismo, Tomb Raider, Tekken, all games that define
their genre even 20 years later.

~~~
therockhead
Yeah, I still remember playing Tekken for the first time at my friend's house,
which was over 25 years ago. It was truly jaw-dropping for the time.

------
walrus01
One really good example of an Xbox One game that has grown to truly bloated
proportions is Destiny 2. The loading times are multiple minutes per map
section now. Total game is about 105GB on disk.

People have even resorted to using external 256GB SATA3 SSDs in USB3.0
enclosures, which are faster than the internal 5400 rpm HDD in an Xbox One S.

I do think we will still see a lot of people connecting USB3.0/USB3.1 external
spinning drives to their new-generation consoles. 1TB of SSD space is enough
for maybe 12 or 14 games, maximum. "Serious" console gaming people buy many
more games than that...

~~~
christoph
Ahhh, remember those days when you would get back from the shop with a
cartridge, rip off the shrink wrap, drop it in the slot, hit the on button,
and you were playing straight away in seconds. Good times.

Fallout 76 was something like a 45GB install and then a 50GB day-one "patch"...

~~~
tylersmith
That sounds like a lot more effort and time than just downloading 95GB.

~~~
dmonitor
Depends where you live. For me, 95GB takes a couple of hours. For my friend in
Kentucky, it takes at least a week.

------
GiorgioG
It's about time. SSDs have been around a very long time, and without a
supported way of swapping out the internal hard drive, console gamers are
penalized with long waits vs PCs where SSDs have become fairly standard.

~~~
freeone3000
PS4s have been able to swap out their hard drive and it's fully supported.
However, base model PS4s run it at SATA2 speeds no matter what's attached,
with SATA3 only being supported on the Pro.

~~~
spoopyskelly
Swapping the 5400 RPM drive for an SSD still makes sense even on SATA2. You'll
go from 80-ish MB/s reads to 300.

------
xoa
This article of course focuses primarily on the consoles with an interesting
gathering of some of the lower level details behind "fast SSD". But while it
compares it to servers what it really brought to mind to me was mainframes,
which often had not particularly impressive CPU specs at first glance but were
kings of IO and offloading. It made me wonder if over the next decade we'll
see yet another example of the cyclical nature of the tech industry in PCs. I
haven't been following storage super closely for a bit, but it hasn't been
_that_ long yet the numbers the author tosses out in terms of "by the end of
the year everyone will be doing 6-7+ GB/s" felt pretty staggering. Combined
with the sudden leap forward in system bandwidth thanks to AMD, and after a
fairly long period of stagnation around HDDs and the limits of SATA it feels
like a huge amount of progress in storage is happening at an enormous clip.
And that is leading to the discovery of bottlenecks elsewhere, like with ZFS
Issue #8381 [1] where the ARC meant to improve performance actually becomes
troublesome. AMD has really pushed available PCIe bandwidth up dramatically
with EPYC 2 doing up to 128 GB/s, yet with drive numbers like those arrays
could actually be made to get towards it already!

As the article says that kind of IO becomes a real strain on valuable general
purpose CPU time. So I wonder if we'll actually start to see regular PCs begin
offloading again, reversing the generalize-everything trend a bit with more
support for specialized silicon so that they can actually feed the beast. I
could see that
having real implications for a lot of future software and OS development too,
particularly for OSS. If things go that way I hope some good open standards
can get out ahead of it.

\----

1:
[https://github.com/openzfs/zfs/issues/8381](https://github.com/openzfs/zfs/issues/8381)

------
wltprgm
This article didn't go deep enough into the PS5's custom storage controller
chip, or I missed it.

Reference:

1\. [https://youtu.be/4ehDRCE1Z38?t=480](https://youtu.be/4ehDRCE1Z38?t=480)

2\. [https://linustechtips.com/main/topic/1205112-i%E2%80%99ve-di...](https://linustechtips.com/main/topic/1205112-i%E2%80%99ve-disappointed-and-embarrassed-myself/)

~~~
wtallis
I think you may have failed to notice that there's more than one page to this
article. Page 1 covers the SSD hardware itself, and page 2 covers the other
storage-related hardware that's not actually on the SSD itself.

~~~
wltprgm
I really failed to notice that. Yeah, they did talk about it on the other pages.

------
hyperpallium
> The PS5's SSD can supply data at 5.5 GB/s. The RAM runs at 448 GB/s, _81
> times faster_. [https://www.anandtech.com/show/15848/storage-matters-xbox-
> ps...](https://www.anandtech.com/show/15848/storage-matters-xbox-ps5-new-
> era-of-gaming/3)

The SSD's data is compressed (with "Kraken"), giving 8-9 GB/s typical.

You could compress RAM too, but not worth the latency penalty.

~~~
MikusR
Textures are kept compressed in RAM.

~~~
mcraiha
Yes, but those use special formats (ETC, PVRTC, ASTC, etc.) so the GPU can use
them directly, and the compression is lossy. PNG, for example, would not work
in this situation because the GPU cannot decode PNG textures fast enough.

------
CivBase
My understanding is that what makes these new SSDs so special is the embedded
decompression hardware. That sounds great... but I can't help but think back a
couple generations to the PS3 and its Cell architecture. Yeah, it was more
efficient than x86, but few developers really managed to capitalize on it
until the end of that generation. By that point, the PS3 was lagging far
behind the 360 in US sales and had long since been eclipsed by PCs in terms of
performance.

Will we see something similar with these SSDs? Are devs actually prepared to
take advantage of the special decompression hardware? Or will it be the stuff
of dreams and tech demos for the first few years?

~~~
wtallis
For starters, the compression hardware isn't part of the SSD.

But more importantly, transparent decompression has been implemented many
times on many systems; it's not a pipe dream. Developers don't have to do much
of anything special to use the decompression hardware. They just have to use
the platform's standard APIs for reading data from disk, and the OS or
standard library takes care of routing it through the hardware offloads. From
the application programmer's perspective, the decompression just shifts the
performance characteristics of the storage a bit—higher latency, higher
throughput, stronger preference for certain block sizes.

The bigger question is whether developers will be able to make good use of
high bandwidth storage, regardless of how that storage is provided. Replacing
everything in RAM on a ~2 second timescale is pretty wild. Lots of game
engines have systems to coordinate loading assets on the fly, and the
scalability limits of those existing solutions are going to be tested.

------
duxup
Are any of these magical load times and such going to actually happen?

I feel like gaming companies really don't emphasize load times unless they
have to and their energy goes into pretty graphics or other things.

In theory couldn't they have ultra fast load times now if they just lowered
the amount they needed to load to start? / Manage the amount of content they
need to load to start?

I wonder if they'll just focus on pushing any speed benefits into....load
times for even more assets, pretty graphics or whatever they choose.

In the meantime, console announcement promises/hype have a terrible history as
far as accuracy goes...

~~~
_kbh_
The great thing about having these ultra fast SSDs and the associated software
stacks is that you can get both faster load times and more assets. One of the
greatest benefits of these SSDs, imo, is that they shrink how big your
streaming pool has to be, freeing up a large amount of RAM and allowing for
on-demand loading of assets just before they are needed.

Additionally, the huge bandwidth means that the consoles can fill their entire
RAM in between 3-ish and 6-ish seconds, which should reduce maximum loading
times by a lot.
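A quick back-of-the-envelope check on that 3-to-6-second figure, assuming roughly 16 GB of RAM and the publicly quoted raw (uncompressed) rates of about 5.5 GB/s for the PS5 and 2.4 GB/s for the Xbox Series X:

```python
# Time to fill RAM = RAM size / raw sequential SSD throughput.
# Figures assumed: ~16 GB of RAM, PS5 ~5.5 GB/s, Xbox Series X ~2.4 GB/s (raw).
ram_gb = 16
rates = {"PS5": 5.5, "Xbox Series X": 2.4}  # GB/s

fill_seconds = {name: ram_gb / gbps for name, gbps in rates.items()}
for name, secs in fill_seconds.items():
    print(f"{name}: {secs:.1f} s to fill RAM")
# PS5: 2.9 s, Xbox Series X: 6.7 s
```

With hardware decompression pushing effective throughput higher, those numbers shrink further, which lines up with the 3-ish-to-6-ish-second range.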

------
jiggawatts
A lot of people here don't quite seem to get what this is all about.

Previously, a PC or a console might have 4-8 cores (or hyperthreads, or cell
processors, or whatever). Typically, you'd want to be doing sequential I/O
from just one of those processor cores, because it would more than likely be a
mechanical disk. At _most_ you could do asynchronous or threaded I/O so that
the CPU core could also do other computation while waiting for textures to
stream in (or whatever), but that's about it.

More than one concurrent I/O at a time (even from other programs!) would cause
the disk head to seek, killing your performance. You'd get stuttering in the
game or texture popping as the I/O fell behind. You could budget on _maybe_
20-30 MB/s if you're lucky, or less than 5 MB/s random I/O if unlucky.

 _It made no difference if you had a PC with an SSD, as zero games were
written for it._

So now, with SSDs in consoles as standard, you would automatically assume that
we can finally utilise all the CPU threads and throw 8 cores' worth of I/O at
the SSD. Maybe increase the queue depth a bit more, have some asynchronous and
overlapped I/Os or something.

But that's _crazy difficult_. The models and textures being streamed in are
actually _used_ on the GPU, not the CPU. Each texture is typically mip-mapped,
building up a "pyramid" of lower and lower resolution versions to prevent
moire when viewed at a distance. The "currently used set" of each texture is a
complicated angled 'slice' through this sort-of-3D texture 'volume'! This is
used asynchronously by the GPU, and ideally you'd somehow want to track this,
figuring out which parts are used by the GPU, which aren't, from which mip-map
layer, etc.. and then _predict ahead_ the next set of 32x32 tiles or whatever
you'd want to load. In parallel. Across 8 CPU cores. While the GPU is doing
its own thing.
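For scale, the mip pyramid described above is cheap to size: each level is a quarter of the one before it, so the whole chain adds only about a third on top of the base image. A quick sketch:

```python
# Size the mip chain of a texture: each level halves width and height
# (quartering the pixel count) down to 1x1.

def mip_chain(width, height, bytes_per_pixel=4):
    """Sizes in bytes of each mip level, largest first."""
    levels = []
    w, h = width, height
    while True:
        levels.append(w * h * bytes_per_pixel)
        if w == 1 and h == 1:
            break
        w, h = max(1, w // 2), max(1, h // 2)
    return levels

chain = mip_chain(1024, 1024)
print(len(chain))             # 11 levels (log2(1024) + 1)
print(sum(chain) / chain[0])  # ~1.333: the pyramid adds about 1/3 overhead
```

The hard part the comment describes isn't sizing the pyramid, of course; it's predicting which tiles of which levels the GPU will need next.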

Just no. That's not happening. The synchronisation alone would be a nightmare.
It's crazy talk. No human can write code like this, and certainly not
optimally _and_ efficiently.

So the brilliant thing about the GPU on the PS5 is that the GPU can make
direct I/O calls. No CPU involvement. Whatever it "needs", as and when it
decides, it can simply directly fetch.

It's not going to do this with 8 cores, or 16 threads. No, it's going to do it
with _thousands_ of cores. The I/O queue depths that used to be "exactly 1"
for literally 100% of all previous games are not going to be 8. They're going
to be tens of thousands! This is massive. This isn't "a couple of random I/Os
done a little bit better", it's _four or five orders of magnitude_.
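To illustrate the queue-depth point in ordinary CPU-side terms, here is a Python sketch using a thread pool as a stand-in for many requests in flight at once (os.pread is POSIX-only, and on the PS5 it would be the GPU, not CPU threads, issuing these requests):

```python
# "Queue depth" illustrated: instead of one blocking read at a time,
# keep many independent reads outstanding simultaneously.
import os
import tempfile
from concurrent.futures import ThreadPoolExecutor

CHUNK = 4096
data = os.urandom(CHUNK * 64)

# Write a scratch file to read back in parallel.
with tempfile.NamedTemporaryFile(delete=False) as f:
    f.write(data)
    path = f.name

fd = os.open(path, os.O_RDONLY)
try:
    # Queue depth ~16: up to sixteen reads in flight at any moment,
    # each targeting an independent offset (no shared file cursor).
    with ThreadPoolExecutor(max_workers=16) as pool:
        chunks = list(pool.map(lambda i: os.pread(fd, CHUNK, i * CHUNK),
                               range(64)))
finally:
    os.close(fd)
    os.unlink(path)

assert b"".join(chunks) == data  # order preserved despite parallel reads
```

On a spinning disk this pattern would thrash the head; on an SSD it's exactly what keeps the flash channels busy, and the GPU-issued version scales it far beyond what a thread pool can.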

------
betoharres
"all games and game engines are still designed to be playable off hard drives
because current consoles and many low-end PCs lack SSDs"

Saved you some time

~~~
gameswithgo
that is not a fair characterization at all

------
dvfjsdhgfv
> Game programmers cannot take full advantage of NVMe SSD performance without
> making their games unplayably slow on hard drives.

Well, yet. As history shows, many big game shops care very little whether
their game is unplayable on current systems and push for a more expensive
setup instead. On the other hand, this is partly the reason I can now do ML at
home doing things that were far out of my range a few years ago.

~~~
ladberg
Game developers might treat PC players like that, but never console players.
If an optimization boosted performance on PCs but was detrimental to consoles
they would never implement it.

~~~
dvfjsdhgfv
I was specifically referring to PC games. They can't do that on consoles as
the game for model X is for model X, whereas the one for PC has minimal and
recommended requirements.

------
MintelIE
Every generation has seen the various console makers hyping some magic
technical aspect of their machines. We had Blast Processing, Mode 7, polygons,
MORE polygons, the Emotion Engine, RAMBUS, hyper futuristic Cell technology,
and none of it solves the central problem for game designers - how to make a
FUN game.

I’d give up this ultra fast SSD in a heartbeat if it meant there wouldn’t be
any pay-to-win or DLC to buy for a half-completed-but-shipped-anyway game.

