We are past that now: new PCIe 4.0 SSDs have just been showcased alongside the new AMD chips, and they can do 5 GB/s read and a bit above 4 GB/s write (AMD is rumored to have invested in the R&D of the controller). You'd need 40 GbE just to match a single one -- and EPYC Rome, also scheduled for this fall, will have 160 lanes, allowing for dozens of them. You could very easily reach 100 GB/s read, which no network will match.
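Back-of-envelope in Python, if you want to play with the numbers; the ~5 GB/s per drive and the lane budget left over for storage are assumptions, not measured figures:

    # Assumptions: ~5 GB/s sequential read per PCIe 4.0 x4 NVMe drive,
    # and (say) 96 lanes left for storage after NICs, boot drive, etc.
    per_drive_gb_s  = 5
    lanes_per_drive = 4            # standard NVMe x4 link
    storage_lanes   = 96           # hypothetical lane budget

    drives    = storage_lanes // lanes_per_drive
    aggregate = drives * per_drive_gb_s       # GB/s
    as_gbps   = aggregate * 8                 # network-equivalent line rate

    print(f"{drives} drives -> ~{aggregate} GB/s (~{as_gbps} Gbps)")
    # 24 drives -> ~120 GB/s (~960 Gbps)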
>> You could very easily reach 100 GB/s read, which no network will match.
> High-end networking gear already has higher throughput:
100 GB/s is 800 Gbps, i.e. well above 200 Gbps, not below it.
You would need 4x 200 Gbps ports to reach 100 GB/s, so 2x MCX653105A-ECAT (each 2x 16 lanes) at >$700 each, plus paying for 1/10th of a ~$30,000 switch. In other words, 100 GB/s would cost you ~$4,400 before paying for the storage.
Sure, it could be done, but it wouldn't be cheap, and you'd have used up a big chunk of the PCIe lanes.
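A quick sanity check of that arithmetic in Python -- the prices are the ballpark figures from above, and the per-card lane count assumes the 2x 16-lane socket-direct layout mentioned:

    target_gb_s  = 100                            # desired throughput, GB/s
    port_gbps    = 200                            # per-port line rate
    ports_needed = target_gb_s * 8 / port_gbps    # -> 4 ports

    nic_cards    = 2                              # assumed 2 cards covering the 4 ports
    nic_price    = 700                            # USD each, rough figure
    switch_share = 30_000 / 10                    # paying for ~1/10th of the switch

    total_cost = nic_cards * nic_price + switch_share   # -> 4400.0 USD
    lanes_used = nic_cards * 2 * 16                      # -> 64 PCIe lanes for NICs alone

    print(ports_needed, total_cost, lanes_used)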
Nah, a motherboard with enough M.2 connectors could easily exist. Or U.2, or OCuLink. We have already seen 1P EPYC servers with six OCuLink connectors...
Twin ConnectX-6 adaptors give you 800 Gbps, or ~100 GB/s, at an absolute theoretical max.
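For the conversion (taking the 800 Gbps line rate as given, and ignoring encoding/protocol overhead, hence "theoretical max"):

    line_rate_gbps = 800          # two dual-port 200 Gbps adaptors
    print(line_rate_gbps / 8)     # 100.0 GB/s ceiling, before any overhead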
It's good to see that local storage has finally returned to the reasonable state of being faster than network storage. SATA / SAS was a long, slow period ...