
Samsung Kicks Off Mass Production of 8 TB NF1 SSDs with PCIe 4 Interface - el_duderino
https://www.anandtech.com/show/13003/samsung-kicks-off-mass-production-of-8-tb-nf1-ssds-with-pcie-4-interface
======
cazim
Samsung also demonstrated 16TB NF1 SSDs. 36x 16TB in 1U so ~24PB in a 42U
rack...
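
A quick check of that figure, assuming every 1U of the rack is packed with
drives (no room left for switches or power):

    # rough rack-density check, assuming 36 NF1 bays per 1U, 16 TB each
    drives_per_1u = 36
    tb_per_drive = 16
    total_tb = drives_per_1u * tb_per_drive * 42
    print(total_tb / 1000, "PB")  # ~24.2 PB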

[https://www.anandtech.com/show/12567/hands-on-samsung-nf1-16...](https://www.anandtech.com/show/12567/hands-on-samsung-nf1-16-tb-ssds)

[http://www.samsung.com/us/labs/pdfs/collateral/Samsung-PM983...](http://www.samsung.com/us/labs/pdfs/collateral/Samsung-PM983-NF1-Product-Brief-final.pdf)

------
iooi
> NF1 SSDs enabled an undisclosed maker of servers to install 72 of such
> drives in a 2U rack for a 576 TB capacity

That's 12PB in a 42U rack! I wonder what sort of storage densities Google,
Amazon, and Dropbox are achieving.
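
Rough math, assuming the whole 42U rack is filled with those 2U boxes:

    # hypothetical all-storage rack of the 2U / 72-drive boxes
    tb_per_2u = 72 * 8                # 576 TB per 2U server
    total_tb = tb_per_2u * (42 // 2)  # 21 such boxes in a 42U rack
    print(total_tb / 1000, "PB")      # ~12.1 PB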

~~~
_wmd
Note that a 2U box like that needs something like 250 Gbit of internal bus
bandwidth just for disk IO, assuming you wanted the full throughput of every
drive. There is a huge trade-off in terms of processing capability and
networking bandwidth when you put that much disk in a single box; the rest of
the hardware can't cope with it.
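
For reference, here is the per-drive figure implied by that 250 Gbit number
(I'm not asserting anything about the actual sustained throughput of these
drives):

    # what ~250 Gbit/s spread over 72 drives implies per drive
    total_gbit = 250
    drives = 72
    per_drive_gbyte_s = total_gbit / 8 / drives
    print(round(per_drive_gbyte_s, 2), "GB/s per drive")  # ~0.43 GB/s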

These things look useful only if a tiny fraction of the stored data is hot. I
imagine lower density, which gives more flexibility in terms of processing
room in the box and networking around the box, is probably a better option in
a lot of cases.

I've been pondering these devices for the past 10 minutes or so and can't
think of a use for them except elaborate storage servers.

~~~
paranoidrobot
I think AMD Epyc is a perfect match for this... 128 PCIe lanes per socket.

On a dual-socket Epyc system, 36 of these, each with a 4x PCIe connection,
would 'only' take up 144 lanes, leaving plenty of lanes free for even 8x
dual-QSFP+ 40GbE NICs.

Of course, I'm not sure how you'd physically pack all of that into even a 2U
form factor.

Usefulness... data warehouses. If the price was right, we'd probably move to
that to speed up various ingestion & reporting functionality. At 288TB
(assuming RAID10) capacity, it'd be a perfect fit.

~~~
Dylan16807
Epyc is 128 lanes _period_. The dual-socket configuration steals half the
lanes for the interconnect.

It's also PCIe 3 for now.
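
A quick check of the lane budget with that corrected figure, using the
36-drive, x4-per-drive configuration from the comment above:

    # 36 NF1 drives at x4 each vs. the 128 usable lanes on Epyc (1P or 2P)
    lanes_needed = 36 * 4    # 144 lanes for the drives alone
    lanes_available = 128
    print(lanes_needed - lanes_available)  # 16 lanes short before any NICs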

~~~
paranoidrobot
Ah, right. My mistake.

Still, I think it might be in a better position than Xeon for raw storage IO.

------
ggm
Death-of-the-disk stories usually have to deal with price. But if you drive
the density north far enough, I think price per GB may no longer be an issue.

Simplistic purchase price vs. power budget, speed, and retention
time/replacement time: it all gets very complicated.

My last three (company) laptops have not had spinning disks. I replaced my
@home machine with an SSD in the last pre-tax-claim spend cycle. I still buy
commodity USB drives to be the disks for small rPi devices, but I am beginning
to wonder how long that will last.

The floppy -> semi-floppy -> USB drive story had a longish arc, but at a lower
level of penetration. Once the hard-shell floppy appeared, the larger units
died out pretty quickly, and once USB boot became ubiquitous, boot floppies
ceased to inform my life pretty quickly. Now, with RAC and iDRAC cards, I
barely touch boot media either (admittedly, the RAC card has an SD card and I
write "images" to it in ISO format, but I keep wondering how long boot media
will depend on emulating a spinning CD/DVD drive).

TL;DR: This feels like the right size at the right time. If the spec for the
backplane is good, I'd like to see this baked and shipped by other vendors.
Goodbye, spinning metal?

~~~
FullyFunctional
Not for backup and long term storage. Spinning rust is far more dependable
than flash when sitting on a shelf, and much less sensitive to temperature.

I know where I'm archiving my data.

~~~
valarauca1
So you use tapes?

Seriously, tapes are way more dependable than disks: fewer moving parts and
less chance of internal electro-mechanical breakdown. And they check all the
other boxes you mention.

~~~
FullyFunctional
Updated with more detail and maybe fewer typos:

I certainly considered tape and used it in the past, but there are two huge
upsides to hard disks:

1. Obviously, access and transfer times are much better.

2. I can access hard disks from 30+ years ago. My oldest drives use SCSI and
IDE, but even SATA has been around for ages. My tapes, however, can only be
read by one particular line of tape drives, and there are an infinitude of
standards/models around, so I'd have to store the tape drive along with the
tape. I'm not sure I'd be able to restore my father's old backups, as the
tape drive was some weird PC thing.

A few years ago I finally transferred my tapes to my FreeNAS box and breathed
a sigh of relief as I don't know when I'd be able to run my DAT changer again.

So, no thanks, I'll stay with hard drives (mind you, drives _will_ die as well
so you still need redundancy).

PS: I want to emphasize that I was talking about shelf life. For data sitting
in a good NAS, like ZFS-based FreeNAS, I have little worry, as there is
redundancy and weekly scrubbing (and I do have a backup of that as well). I do
worry a great deal about any data that might sit on a USB key or an SSD in the
closet.

------
srcmap
I'd like to hear everyone's guesstimate on when/if the $/TB of SSDs will cross
over that of HDDs.

~~~
wmf
Never, because if it did cross over, flash demand would suddenly surge by 10x
and then there'd be a massive shortage, pushing the price back up.
[https://blog.dshr.org/2018/03/flash-vs-disk-again.html](https://blog.dshr.org/2018/03/flash-vs-disk-again.html)

~~~
tfha
No. If it costs less to produce, you'd just increase production until one
passed the other. The demand surge might delay the inevitable for a year or
two, but fabs would catch up and then the HDD would retire.

dshr misses the mark a lot of the time.

~~~
ksec
>No. If it costs less to produce,

The point is it doesn't, and not anytime soon.

~~~
tsenkov
It may never cost less to produce in an environment where both are
mass-manufactured. But it may happen that manufacturers concentrate on flash
storage (could be better profit margins, could be bigger demand from
corporations or IaaS operators, could be many other things) and thus, in a
cascading way, make disks more expensive to manufacture, due to limited supply
of materials, machines, and trained workers for disk manufacturing.

The only way I see the current status quo staying static by definition is if
there is indeed an (above-mentioned) shortage of some raw material that would
prevent the switch. Can someone comment on what exactly that shortage would
be? Edit: I read the linked post (didn't see it before, sorry). TL;DR: it's
just a lack of manufacturing capacity and the difficulty of building more
fabs.

~~~
ksec
Fabs are expensive. The article already points out NAND is reaching its
limits. Those fantasies about NAND stacks that could do 1024 or 2048 layers
may never appear, at least not in the next 5 years. Basically we have used up
all of the tricks (for now).

Assume 128-layer yield is good enough and the problems are solvable within the
next 3-4 years: Micron has only just started shipping 96 layers, and it may
take another 2-3 years for that to become mainstream. QLC only brings a 33%
capacity improvement over TLC (4 bits per cell vs 3) at the expense of far
fewer write cycles and higher latency. Node scaling is also much more
expensive; we no longer get density increases with the cost per transistor
halving.

Build more fabs? Well, China is pouring in $100B to brute-force this problem,
and nothing has come out of it just yet. And if it isn't China, who has the
incentive to build expensive fabs, with little to no expertise in memory and
few patents, for a possible profit margin in an industry that has a habit of
cycles and long stretches of losses? I.e. high risk.

TSMC's ex-CEO Morris Chang has said multiple times that his company will not
produce DRAM or NAND.

Meanwhile, the three DRAM and NAND manufacturers are well aware of what China
is trying to do, and are milking the market for as long as they can.

Nowhere in the NAND 5-year roadmap is there anything showing it to be
technically and economically feasible for NAND to become cheaper than HDD,
even in the 5 years after the current 5-year roadmap. I.e. nothing shows it
could happen in the next 10 years. Meanwhile, on the HDD side, Western Digital
has the roadmap and the tech to reach 40 to 80 TB per HDD in the next 8 years.

So I have no idea why I am getting downvoted. Not to mention one of the
posters is correct about how cheaper NAND and higher demand would mean prices
go back up.

It is like those people who keep talking about OLED replacing LCD: nothing in
the roadmap, under any scenario, shows OLED even reaching the same price as
LCD within the next 5 years. The most optimistic forecast has it at double the
price of LCD within 5 years, and that includes the printable OLED that Sharp
is investing in.

~~~
tsenkov
Thanks. This is very informative.

------
pmorici
Notably, the spec doesn't include OPAL encryption support, unlike many of
Samsung's other products. That's a bit disappointing.

------
locusm
How are people using NVMe drives in servers? Is soft raid the only option?

~~~
wmf
NVMe RAID now exists: [https://www.broadcom.com/products/storage/raid-controllers/m...](https://www.broadcom.com/products/storage/raid-controllers/megaraid-9460-16i)
But I suspect most people are using soft RAID or no RAID.

------
mrbonner
Is this suitable for NVM usage?

------
bluedino
Top option in the new Mac Pro?

~~~
minikites
This SSD will probably be out of date when Apple decides to care enough about
the Mac to ship the Mac Pro. Maybe 2023?

