
We are past that now; new PCIe 4.0 SSDs have just been showcased alongside the new AMD chips, and they can do 5 GB/s read and a bit above 4 GB/s write (AMD is rumored to have invested in the R&D of the controller). You'd need 40 GbE to match one -- and EPYC Rome, also scheduled for this fall, will have 160 lanes, allowing for dozens of them. You could very easily reach 100 GByte/s read which no network will match.
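A rough sanity check of that aggregate read figure, using only the drive speed and lane count quoted above (illustrative numbers, not a spec):

    per_ssd_read = 5          # GB/s read for one PCIe 4.0 x4 SSD, as quoted above
    lanes_per_ssd = 4         # standard NVMe link width
    platform_lanes = 160      # lane count quoted for EPYC Rome above
    drives = platform_lanes // lanes_per_ssd   # 40 drives if every lane went to storage
    print(drives, drives * per_ssd_read)       # -> 40 200 (GB/s theoretical aggregate)

Even with only half the lanes devoted to storage, the 100 GB/s figure is within reach on paper.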


> You could very easily reach 100 GByte/s read which no network will match.

High end networking gear already has higher throughput:

http://www.mellanox.com/page/ethernet_cards_overview

https://www.servethehome.com/mellanox-connectx-6-brings-200g...

A SAN/NAS using the same PCIe 4.0 SSDs you mention could probably fill the pipes too.

... and it would probably need a bunch of network stack tuning. ;)


>> You could very easily reach 100 GByte/s read which no network will match.

> High end networking gear already has higher throughput:

100GB/s (800Gbps) > 200Gbps

You would need 4x 200Gbps ports to reach 100GB/s, so 2x MCX653105A-ECAT (each 2x 16-lanes) at >$700 each, and pay for 1/10th of a ~$30,000 switch. IOW 100GB/s would cost you ~$4,400, before paying for the storage.
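A quick check of that arithmetic, using only the prices quoted in this comment (not current list prices):

    target_gbps = 100 * 8             # 100 GB/s expressed in Gbps
    ports = target_gbps // 200        # four 200 Gbps ports
    nic_cost = 2 * 700                # two dual-port cards at the quoted >$700 each
    switch_share = 30_000 / 10        # 1/10th of a ~$30,000 switch
    print(ports, nic_cost + switch_share)   # -> 4 4400.0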

Sure, it could be done, but it wouldn't be cheap, and you'll have used most of the PCIe lanes.


Agreed. Higher end network gear is $$$. :(

EPYC servers (128 PCIe lanes) would probably be the way to go here, not Xeons.

This is just Imagineering though. ;)

On the specifics though, wouldn't 4 cards be needed? Each card has 2x 100Gb/s ports, so 8 ports in total.
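Assuming each card really does top out at 2x 100Gb/s (the linked ConnectX-6 parts also come in 200Gb/s-per-port variants), the count works out as:

    target_gbps = 100 * 8        # 100 GB/s on the wire
    ports = target_gbps // 100   # 8 ports at 100 Gb/s each
    cards = ports // 2           # 4 dual-port cards
    print(ports, cards)          # -> 8 4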


The 200 Gb/s network adapters that you linked are 4 times slower than 100GB/s. The parent comment explicitly wrote 100 GByte/s.


Oops. Didn't spot that, sorry. :)

That being said, after re-reading the comment, they're talking about adding multiple PCIe cards to a box to achieve 100GB/s of total local throughput.

That would be achievable over a network by adding multiple PCIe 200Gb/s network cards too. :)


Nah, a motherboard with enough M.2 connectors could easily exist. Or U.2, or OCuLink. We have already seen 1P EPYC servers with six OCuLink connectors...


Sure. My point is just that whatever bandwidth you can do locally, you can also do over the network.

As a sibling comment mentions though... the cost difference would be substantial. :(


Twin ConnectX-6 adaptors give you 800Gbps, or ~1GB/s, at an absolute theoretical max.

It's good to see that local storage has finally returned to the reasonable state of being faster than network storage. SATA / SAS was a long, slow period ...


If it's 800 Gbps then it's 100GB/s, not 1...


You're right. Brainfade.

So, even with protocol overhead from all the stack layers chewing up maybe an order of magnitude, that'd still leave 10GB/s.

So .. I guess it's still possible, if impractical, to outperform a good PCIe SSD with the latest network interface.
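Writing the conversion out, with the order-of-magnitude overhead kept as the rough guess it is in the comment above (not a measured figure):

    line_rate_gbps = 800         # twin dual-port 200 Gbps cards, theoretical max
    raw = line_rate_gbps / 8     # 100 GB/s on the wire (the corrected figure)
    usable = raw / 10            # if overhead really chewed up an order of magnitude
    print(raw, usable)           # -> 100.0 10.0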


... ~10GB/s can be done by a single 100Gb/s adapter.

More 0's needed? :)


cough 1GB/s can be done by 10GbE.

Maybe a slight typo there? Need to add a few zeros? :)


You're right, doh. See above.


PCIe 3 was not the bottleneck for SSDs. They typically use only 4 PCIe lanes, when they could go up to 16 for 4x the bandwidth.


But the standard M.2 NVMe interface happens to only have 4 lanes. PCIe 4.0 will double the available bandwidth for these very common SSDs.
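For reference, the nominal x4 link numbers behind both comments (ignoring protocol overhead above the physical encoding):

    def x4_gbytes_per_s(gt_per_s):
        # PCIe 3.0 and 4.0 both use 128b/130b encoding; 8 bits per byte, 4 lanes
        return gt_per_s * (128 / 130) / 8 * 4

    print(round(x4_gbytes_per_s(8), 2))    # PCIe 3.0 x4 -> ~3.94 GB/s
    print(round(x4_gbytes_per_s(16), 2))   # PCIe 4.0 x4 -> ~7.88 GB/s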


The new X570 motherboards will have PCIe 4.0 soon.



