
NVMe and an interesting technology change - stargrave
https://utcc.utoronto.ca/~cks/space/blog/tech/NVMeAndTechChange
======
evil-olive
Take a look at this teardown of a 2.5" Samsung 860 Pro:

[https://www.myfixguide.com/samsung-860-pro-ssd-
teardown/](https://www.myfixguide.com/samsung-860-pro-ssd-teardown/)

Notice how little of the case is actually occupied by circuitry:
[https://i.imgur.com/lzPTC3T.jpg](https://i.imgur.com/lzPTC3T.jpg)

I think the better question than M.2 vs. U.2 is the "plugs directly into
motherboard" form factor vs "uses a cable from the motherboard to plug into an
enclosure for a separate circuit board that needs separate mounting points in
the case" form factor.

Since forever, hard drives have come in 3.5" and 2.5" form factors, so when
SSDs hit the market it made sense for them to come in backwards-compatible
enclosures, in the same way it made sense for them to implement backwards-
compatible protocols like SATA.

NVMe, as a protocol, breaks backwards compatibility, to great benefit, so I
don't think it's surprising that breaking backwards compatibility on form
factor happens at the same time.

U.2 seems ideal if you've already got a server chassis design that accepts
2.5" hot-swap SSDs, and you want to use the NVMe command set instead of SATA.
For most consumer purposes, having M.2 storage that you just plug into the
motherboard like you do RAM and the CPU is completely worth the switch away
from the 2.5" drives.

~~~
spamizbad
You're correct. U.2 is quite popular in the server space, both for hot-swapping
and for thermal performance, as manufacturers design drive enclosures to act as
heatsinks for higher-end enterprise-grade drives.

------
kev009
U.2 is more popular in servers. M.2 is common because of laptops; in servers it
is used mainly for boot drives or where someone is gaming the cost difference
of enterprise drives. There are form factors like Intel's ruler if you really
want density that beats even custom M.2 carriers for servers.

Another point the author may not be aware of: most server SSD sales happen as
part of bigger integration deals. The SKUs for many or most U.2 devices, for
instance, might never hit something like Newegg or Amazon but might be moving
in massive volumes at Supermicro or Sanmina or the like. This is in contrast
to, say, 3.5" hard disks, which spent most of their life carried forward by
consumer demand, where the only difference in an enterprise disk may be some
firmware knobs.

~~~
vbezhenar
What's wrong with simple PCI-e slots? I want to buy an Intel Optane with a
PCI-e connection; I think it allows for better cooling.

~~~
saint_abroad
A 2U server chassis can fit perhaps 2x PCIe cards (with a riser), vs 20x 2.5"
hot-swap NVMe drives.

~~~
wtallis
Two PCIe add-in cards is normal for a 1U server. 2U servers can accommodate
quite a bit more; mine has room for 6 full-height and two half-height, for a
total of 8. And for 2.5" drives, a full front-loading 2U is usually 24 drives.

------
jinmingjian
U.2, a.k.a. SFF-8639 [1], is in fact the successor to SAS (it also allows
legacy SAS and SATA usage), so it has a stronger enterprise heritage, and
enterprise customers can even keep using the same disk cases/cages without any
problem when upgrading.

Compared to the M.2 form factor, U.2 has two benefits:

1\. Hot-swap support (an enterprise trait inherited from the SAS era)

2\. Better thermal performance/heat dissipation (thanks to the larger form
factor)

For data centers, the future will be the EDSFF family [2] (the goodness of U.2
plus the goodness of M.2). M.2 can stay on for the consumer market.

[1] [https://en.wikipedia.org/wiki/U.2](https://en.wikipedia.org/wiki/U.2)

[2] [https://www.anandtech.com/show/13218/ssd-form-factors-
prolif...](https://www.anandtech.com/show/13218/ssd-form-factors-proliferate-
at-flash-memory-summit-2018)

~~~
wtallis
There's a third major benefit to U.2: power delivery. M.2's 3.3V supply is a
problem when SSD power gets above about 8W. U.2 provides 12V in addition to
3.3V, and that's really a prerequisite to getting up to the power levels where
the thermal dissipation advantage becomes important.
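
As a rough sanity check on those figures, the trivial P = V * I arithmetic
below shows the current each rail would have to carry; the 8 W number is the
one quoted above, and 25 W is only an illustrative enterprise-class wattage,
not taken from any spec sheet:

```python
# Current implied by a given drive wattage on each supply rail (P = V * I).
# 8 W is the figure quoted above; 25 W is merely an illustrative high-end value.
for watts in (8, 25):
    for volts in (3.3, 12.0):
        print(f"{watts:>2} W at {volts:>4} V -> {watts / volts:.1f} A")
```

Pushing roughly 7-8 A through M.2's 3.3 V pins is a very different proposition
from drawing about 2 A from a 12 V rail.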

Dual-port support is also a niche benefit that U.2 carries over from SAS.

------
KamiCrit
"Most people prefer the simplicity of plugging a M.2 card into a motherboard
connector rather than mounting a separate drive and running cables to it."

I love that part so much; I'm a complete convert. I'm considering replacing my
SATA SSDs with NVMe SSDs just to reduce the cable load inside my rig. I just
wish PCIe cards with dual NVMe ports were more abundant.

~~~
frou_dh
The placement of the M.2 slot on motherboards often seems daft. On my ASUS
motherboard, it's directly above my graphics card, which dumps 300W worth of
heat.

~~~
rhinoceraptor
It seems like a lot of video cards are trending towards blower-style designs,
which don't vent hot air inside the case, so if you have one of those it's
probably not a big deal.

On my Asus Z270i, the M.2 that's right there has a heatsink too.

~~~
manderley
Seems very much like the opposite to me; even Nvidia's "Founders Edition" went
to an open design in its latest iteration. Only AMD reference cards seem to
still be blowers; AIO designs are pretty much all open.

~~~
rhinoceraptor
Now that I look, it does seem like a lot of them are open air cards. I have a
mini ITX case, so I only ever look at blower cards since I don't really have
the airflow to carry the GPU heat out.

------
llampx
It's interesting to hear that M.2 was the underdog in 2015. I recently built a
new PC and made sure to get a motherboard with two M.2 slots precisely because
of the "look ma, no cables" aspect of the connector. Even with SATA, that's a
big advantage in my book.

U.2 seems to be more popular in applications where drives may need to be
changed out more frequently, like datacenters.

~~~
julianwachholz
I bought a Z97 motherboard in late 2014 and it had an M.2 port. I don't
remember seeing any consumer boards with U.2 connectors.

~~~
manderley
Yep. But SATA Express used to be common (but mostly unused) for a while.

------
jakedata
I have had the opportunity to work with an Epyc server with a full load of
NVMe/U.2 direct connected to all those tasty PCIe lanes. Any given drive
worked wonderfully, and writing simultaneously across many drives with dd
showed very good performance. Unfortunately any attempt to use Linux native
software RAID showed performance barely in excess of a single drive, even with
a simple stripe. I don't blame the NVMe, but something in Linux RAID just
doesn't scale in performance. I spent days tweaking it, ultimately it was a
great disappointment.
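
For anyone curious what that comparison looks like in practice, here is a
minimal sketch of a parallel-write test in the spirit of the dd runs described
above (my illustration, not the commenter's actual methodology; it writes to
hypothetical scratch files under /tmp, since pointing it at raw block devices
would destroy data, and buffered writes plus fsync are only a rough stand-in
for dd with oflag=direct):

```python
# Compare aggregate write throughput: one target at a time vs. all in parallel.
import os
import time
from concurrent.futures import ThreadPoolExecutor

TARGETS = [f"/tmp/nvme_test_{i}.bin" for i in range(4)]  # hypothetical paths
BLOCK = 1024 * 1024        # 1 MiB writes, similar to dd bs=1M
BLOCKS_PER_TARGET = 256    # 256 MiB per target

def stream_writes(path: str) -> None:
    """Sequentially write BLOCKS_PER_TARGET blocks to one target, then fsync."""
    buf = os.urandom(BLOCK)
    with open(path, "wb", buffering=0) as f:
        for _ in range(BLOCKS_PER_TARGET):
            f.write(buf)
        os.fsync(f.fileno())

def run(parallel: bool) -> float:
    """Return aggregate throughput in MB/s across all targets."""
    total_bytes = BLOCK * BLOCKS_PER_TARGET * len(TARGETS)
    start = time.monotonic()
    if parallel:
        with ThreadPoolExecutor(max_workers=len(TARGETS)) as pool:
            list(pool.map(stream_writes, TARGETS))
    else:
        for target in TARGETS:
            stream_writes(target)
    return total_bytes / (time.monotonic() - start) / 1e6

if __name__ == "__main__":
    print(f"serial:   {run(parallel=False):8.1f} MB/s")
    print(f"parallel: {run(parallel=True):8.1f} MB/s")
```

If the drives themselves scale, the parallel run should approach N times the
serial per-target rate; when an md stripe over the same devices doesn't get
close to that, the bottleneck is above the drives.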

~~~
mschuster91
> I don't blame the NVMe, but something in Linux RAID just doesn't scale in
> performance. I spent days tweaking it, ultimately it was a great
> disappointment.

Yeah, it's a real mess to get it up to speed, and if it involves parity
calculations (RAID 5) then there is a CPU bottleneck. It used to be single-core
only until a couple of years ago; no idea how the situation looks today.

------
altmind
It's unfortunate that U.2 does not receive much attention. How many M.2
devices can you realistically fit onto current workstation/server
motherboards? Three? U.2 takes the place of SATA as a dedicated connection to
multiple storage devices and can fit so many more in the same chassis.

If we need some sort of new-gen SSD RAID, the unwieldiness of M.2 mounting
isn't going to cut it.

The U.2 benefits compared to M.2 mentioned on Wikipedia seem to miss the
point: U.2 is meant to matter in the server space, where you can have 12
drives per 1U, all front- or top-loaded. All the M.2 carrier cards we have are
not meant to be, and cannot be, operated at storage-system scale.

~~~
homero
What about this: Intel didn't make enough PCIe lanes available in consumer
chipsets to run enough U.2 drives to be attractive. Whether U.2 or M.2, Intel
was always only going to give you one or two.

~~~
altmind
Yes, that's also a problem. Yet even Intel's prosumer-grade X299 had 44 CPU +
24 slower chipset lanes, where each NVMe drive needs 4x. AMD, OTOH, has EPYC
with 128 total lanes. Maybe demand would've led Intel into making changes to
their server lineup.
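
To put rough numbers on that, here's a back-of-the-envelope using the lane
counts quoted in this thread (each drive assumed to take a full x4 link, and
ignoring lanes needed for the GPU, NICs, and so on):

```python
# Lane totals are the ones quoted in this thread, not checked against spec sheets.
LANES_PER_NVME = 4
platforms = {
    "mainstream Intel consumer (CPU lanes)": 16,
    "Intel X299 (44 CPU + 24 chipset lanes)": 44 + 24,
    "AMD EPYC": 128,
}
for name, lanes in platforms.items():
    print(f"{name}: {lanes} lanes -> at most {lanes // LANES_PER_NVME} x4 drives")
```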

~~~
masklinn
> Maybe demand would've led Intel into making changes to their server lineup.

Competition might, but it doesn't look like Zen's popular enough (Zen provides
24 lanes at the low end, 64 on HEDT, and as you noted 128 on EPYC; IIRC Intel's
entire consumer line is a flat 16).

~~~
dogma1138
Zen doesn’t really provide 24 lanes on the low end any more than Intel provides
“20”, as both give only 16 lanes for general-purpose use; on desktop Zen, 4
lanes are reserved for the CPU-to-chipset bus and 4 more are reserved for the
SoC I/O.

------
chx
If you want to run multiple U.2 drives, then OCuLink on the host side should
interest you: both the quad port AOC-SLG3-4E4T and the OCuLink to U.2 cables
are relatively cheap. [https://forums.servethehome.com/index.php?threads/nvme-
make-...](https://forums.servethehome.com/index.php?threads/nvme-make-the-
most-of-your-pci-e-slots-how-to-config-supermicro-boards-for-aoc-slg3-2e4t-et-
al.17651/)

Another choice is a somewhat more usual four-M.2 carrier card plus this cable:
[https://click.intel.com/u-2-to-m-2-ssd-cable-
replacement-u-2...](https://click.intel.com/u-2-to-m-2-ssd-cable-
replacement-u-2-to-m-2-cable-for-pcie-nvme-supporting-intel-solid-state-
drives.html). To me this seems much more of a hack than OCuLink.

------
ComputerGuru
The answer is - as always - compatibility. There is nothing worse than a new
interface. It limits the consumer base for a manufacturer, it limits the
lifespan of a device, it limits the upgrade options for a customer, it raises
the number of permutations (and cost) required to target a common subset, etc,
etc.

If there's an interface that solves the problems of the 80% that don't need
more than one drive in their laptop or PC (let alone more than the two most
laptops can fit if the manufacturers wanted) and it avoids all these issues...
then the alternative isn't going anywhere.

It's the same story with SAS: despite its considerable superiority over SATA,
it was never going to appear in consumer hardware.

------
PaulHoule
Note the part about Intel limiting PCIe lanes. It is a story that has been
mostly ignored, but it has shaped the PC industry in many ways, mainly to the
detriment of Intel, because it eliminates some of the reasons to buy a computer
instead of a phone.

------
moondev
U.2 all the way. It's hot swappable and much easier to manage than M.2 tucked
away under a GPU. Epyc systems support up to 24(!) of them.

~~~
masklinn
> Epyc systems support up to 24(!) of them.

Could probably do more assuming a headless storage-dedicated motherboard: IIRC
EPYC only _requires_ 4 lanes going to the chipset, so if the mobo provides only
U.2 slots and no PCIe (or alternatively lets you disable and reuse the PCIe
lanes going to PCIe slots) it should support up to 31 drives.
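
That figure checks out with the lane totals stated above; a one-liner, assuming
the 128-lane total and the 4-lane chipset link are as described:

```python
# (total EPYC lanes - lanes reserved for the chipset link) / 4 lanes per drive
print((128 - 4) // 4)  # -> 31
```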

~~~
moondev
But then there would be no lanes for 10GbE+ networking, right? I was going off
commercially available platforms such as [https://www.dell.com/en-
us/work/shop/povw/poweredge-r7415](https://www.dell.com/en-
us/work/shop/povw/poweredge-r7415)

~~~
masklinn
Correct, you'd only get the chipset's IO. IIRC each 10Gb port would be 2 PCIe3
lanes (with some waste, you can run 100Gb on PCIe3x16).
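
A quick check on that lane math, assuming PCIe 3.0's 8 GT/s per lane with
128b/130b encoding (higher-level protocol overhead ignored):

```python
# PCIe 3.0: 8 GT/s per lane with 128b/130b encoding -> usable Gb/s per lane.
lane_gbps = 8 * 128 / 130                      # ~7.88 Gb/s per lane
for lanes, link in ((2, "10GbE"), (16, "100GbE")):
    print(f"x{lanes}: {lanes * lane_gbps:.1f} Gb/s, enough for {link}")
```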

~~~
jinmingjian
corrected: 10GB -> 10Gbps, 100GB -> 100Gbps

~~~
masklinn
You're completely right, fixed the suffix.

------
patrickg_zill
NVMe is an interesting development in storage.

It does away with layers of abstraction and gives you 4x PCIe lanes pretty much
straight to the device.
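
For a sense of scale, a rough comparison of raw link bandwidth, assuming PCIe
3.0 x4 against SATA III and counting only the line-encoding overhead:

```python
# Raw link rates after line encoding, before higher-level protocol overhead.
pcie3_x4_gbps = 4 * 8 * 128 / 130   # ~31.5 Gb/s (~3.9 GB/s)
sata3_gbps = 6 * 8 / 10             # ~4.8 Gb/s  (~0.6 GB/s, 8b/10b encoding)
print(f"PCIe 3.0 x4: {pcie3_x4_gbps:.1f} Gb/s (~{pcie3_x4_gbps / 8:.1f} GB/s)")
print(f"SATA III:    {sata3_gbps:.1f} Gb/s (~{sata3_gbps / 8:.1f} GB/s)")
```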

