This headline is inaccurate. Phison Electronics completed its acquisition of Nextorage in January this year by buying the remaining shares from Sony. Sony is no longer involved in this joint venture.
From their press release:
Phison acquires Nextorage to strengthen customized high-end storage market
...acquired shares of its joint-venture company Nextorage Corporation from its joint-venture partner, Sony Storage Media Solutions Corporation
"Dynamic SLC caching stores cache size up to 1/3 of the total storage area of SSD"
I want to bench this in an enterprise test. This is a lot of SLC, and it portends high random 4K writes that are very hard to achieve without something like a Pliops or GRAID array controller.
Claims are, "random read/write up to 1,000 K IOPS", which is three times that of most enterprise drives.
All this really means is that you use 100% of the SSD in 'SLC' mode, giving you 1/3 the usual space (since most SSDs are TLC these days). As you fill past 1/3 of the drive, more and more of it reverts to TLC mode and the drive slows down. Nothing innovative, as far as I can tell.
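The capacity arithmetic above can be sketched with a toy model (assuming a TLC drive where a cell holds 3 bits in TLC mode and 1 bit in SLC mode; the sizes are illustrative, not from the vendor):

```python
def slc_cache_bytes(total_tlc_bytes: int, used_bytes: int) -> int:
    """Rough model of a dynamic SLC cache on a TLC drive.

    Empty cells can run in SLC mode (1 bit/cell instead of 3), so the
    maximum SLC cache is 1/3 of the drive's TLC capacity. As user data
    fills the drive, those cells must revert to TLC and the cache shrinks.
    """
    free_tlc = max(total_tlc_bytes - used_bytes, 0)
    # each byte of SLC cache consumes 3 bytes of TLC-mode capacity
    return free_tlc // 3

TB = 10**12
print(slc_cache_bytes(2 * TB, 0))        # empty 2 TB drive: ~666 GB of SLC cache
print(slc_cache_bytes(2 * TB, 1 * TB))   # half full: ~333 GB
print(slc_cache_bytes(2 * TB, 2 * TB))   # full: no dynamic cache left
```

The "1/3 of the total storage area" claim falls out of the first line: with the drive empty, the whole TLC capacity is free, and free/3 is the SLC ceiling.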
Even in enterprise, buying drives three or four times as big to ensure consistent maximum performance is something you'll only do some of the time.
By minimum, I'm talking about how a dynamic SLC cache is by definition using spare space in the drive. If all those cells are filled up with TLC data, they can't be used as SLC cache. But one drive might guarantee 5GB of SLC cache even when it's near-full, and another drive might guarantee 100GB, and those drives will have very different performance characteristics.
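That guaranteed minimum can be modeled as a static floor under the dynamic cache (the 5GB/100GB figures are from the comment above; the model itself is just a sketch):

```python
def effective_slc_cache_gb(total_gb: float, used_gb: float,
                           guaranteed_min_gb: float) -> float:
    """Dynamic SLC cache derived from free space, but never less than
    the static SLC area the vendor carves out of over-provisioned flash."""
    dynamic = max((total_gb - used_gb) / 3, 0)
    return max(dynamic, guaranteed_min_gb)

# two near-full 1 TB drives with different guaranteed minimums
print(effective_slc_cache_gb(1000, 990, 5))    # 5.0   -> write bursts die quickly
print(effective_slc_cache_gb(1000, 990, 100))  # 100.0 -> sustains far longer bursts
```

With plenty of free space the floor is irrelevant; it only differentiates the two drives once the dynamic cache has been squeezed out, which is exactly the near-full case described above.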
The binary prefixes aren't particularly useful when talking about IOPS. They're useful for storage since a lot of structures are powers of two (e.g. 512-byte or 4096-byte sectors), but that doesn't apply to IOPS.
Yes. I was quoting from the vendor literature, and thought about changing it to 1M IOPS, but I think it's quoted this way because the competition is offering ca. 300K random 4K IOPS at best, so the comparison matters.
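To put the two figures side by side (reading the 1,000 K as decimal 10^6, per the point above that binary prefixes don't apply to IOPS, and assuming the usual 4 KiB random-I/O block size):

```python
KIB = 1024
GIB = 1024 ** 3

def iops_to_gib_s(iops: float, block_bytes: int = 4 * KIB) -> float:
    """Bandwidth implied by an IOPS figure at a given block size."""
    return iops * block_bytes / GIB

print(round(iops_to_gib_s(1_000_000), 2))  # claimed 1,000 K IOPS -> ~3.81 GiB/s
print(round(iops_to_gib_s(300_000), 2))    # typical ~300 K IOPS  -> ~1.14 GiB/s
```

Note the mixed prefixes: the IOPS count is decimal while the block size is binary, which is exactly why quoting "1,000 K" instead of "1M" changes nothing about the underlying number.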
I wonder what the power consumption of that thing is. That giant heat sink they have sitting on that M.2 gumstick doesn’t bode well. I know that power consumption depends on load, but just idle and full-power wattage would be really informative. The last thing you want is your super-fast SSD throttling to a crawl because it sits in a laptop with no space for a heatsink. Then all those performance numbers are meaningless.
IIRC the max power allowed via the M.2 connector is 25W. I suspect that heat sink might be just a gimmick - Sabrent is shipping NVMe drives with peak power draw <10W with a triple heat pipe cooler.
CXL Optane wasn't going to happen because of the necessity for drivers in either block or direct access mode. Couldn't plug and play.
CXL itself is a good reduction in bus latency, and there's been virtually nothing other than the occasional battery-backed NAND DIMM on any memory bus, so SCM on CXL isn't going to be a poor fit.
That 2016 presentation was not referring to any particular storage class memory technology, and was merely reiterating the definition of the category as something in between (and distinct from) DRAM and NAND. Western Digital still has not fully committed to a specific SCM contender or brought one to market. The server ecosystem is far more ready for a viable SCM than it was in 2016 (and CXL is part of that ecosystem readiness and maturity), but we're still missing the silver bullet technology that provides a memory cell with the durability, speed and cost necessary to squeeze between DRAM and NAND.
> CXL Optane wasn't going to happen because of the necessity for drivers in either block or direct access mode. Couldn't plug and play.
Drivers had nothing to do with that. Intel's Optane/3D XPoint could easily have adopted a CXL interface; nothing about the memory technology is tied to NVMe or Intel's DDR4-based proprietary persistent memory module interface. The only reason a CXL Optane didn't happen is because CXL wasn't going to magically make Optane profitable.
> The only reason a CXL Optane didn't happen is because CXL wasn't going to magically make Optane profitable.
What confuses me is that the PCIe Optane drives launched at $4/GB and the second generation was only $2.30/GB on the biggest model.
Those prices gave those drives plenty of price advantage over DRAM. Was that all being sold at a massive loss? Why was DIMM Optane so much more expensive, and would CXL Optane have to be the same price?
I'd like to see the distribution on that 250-5000 nanoseconds claim. The low end of that competes quite well with Optane, but Optane under moderate loads can keep up similar latency for 99.99% of requests.
Ah, I think I see where you're confused. You're probably conflating the M.2 form factor with NVMe[0], which is sorta incorrect. NVMe drives can come in a variety of physical interfaces, of which, M.2 is one, but M.2 drives can be either SATA or NVMe. (or USB?)[1]
I doubt it as long as SATA-based HDDs are still a thing. There’s gotta be some market for SATA SSDs. I’ve been considering replacing a NAS ZFS HDD array with SSDs. The HDDs are 4TB, and some 2TB 2.5” SSDs are starting to become reasonably priced. I don’t need 4TB drives.
For my NAS uses, the lower reliability of an SSD is not much of a concern. I still have a working Intel SSD that is the first or second Intel consumer drive they produced; it’s over a decade old.
I don’t want my drive to handle encryption. That’s what the operating system is for. It’s not only much more flexible, but also much more secure.
Have you seen the quality of the code that runs in most embedded devices’ firmware? It’s a huge surprise that any of the devices work at all. I don’t trust them for a second to implement any secure encryption.
> I don’t want my drive to handle encryption. That’s what the operating system is for. It’s not only much more flexible, but also much more secure.
The problem:
- an operating system by necessity must have the crypto keys in RAM (which means it's vulnerable to a kernel-level exploit, a hardware-level exploit like Thunderbolt, or to a "freezing spray" attack)
- all I/O data will have to be shuffled between the SATA controller, RAM and CPU multiple times for decryption/encryption, incurring a (significant) latency penalty
Leaving the crypto operations to the SSD controller or an intermediate FPGA/ASIC removes a lot of these problems:
- the OS can wipe the memory containing the key information after passing them to the disk
- no part of the system can retrieve the key information past that point, and the disk controller can be built in a way that automatically wipes its internal RAM / plaintext key storage upon power loss
- the performance/latency penalty disappears entirely, or is at least significantly lower, and the system gains back the ability to do DMA transfers (e.g. load GPU texture data straight from the disk into the GPU's RAM space)
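The "wipe the key after hand-off" step above can be illustrated with a toy sketch. The actual hand-off to a self-encrypting drive goes through a drive-specific TCG Opal / ATA Security command, so `pass_key_to_drive` below is a hypothetical placeholder:

```python
import secrets

def pass_key_to_drive(key: bytes) -> None:
    # hypothetical placeholder for the TCG Opal / ATA Security hand-off
    pass

# keep the key in a mutable buffer so it can be overwritten in place;
# an immutable bytes object could leave stray copies lingering in RAM
key = bytearray(secrets.token_bytes(32))  # 256-bit key
pass_key_to_drive(bytes(key))

# best-effort wipe of the OS copy: after this, a later RAM-scraping
# exploit can no longer recover the key from this buffer
for i in range(len(key)):
    key[i] = 0
assert all(b == 0 for b in key)
```

This is only the shape of the idea: Python can't guarantee no intermediate copies exist (the `bytes(key)` copy here, for one), which is why real implementations do this in C with locked, non-swappable buffers.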
You can't really trust hardware for encryption. There are too many actors opposing strong crypto. Apple can afford to do it because of scale, and by leaving a cloud backdoor, but PC hardware manufacturers can't.
Many countries, specifically authoritarian/totalitarian ones, require certification of crypto for a device to be sold on their markets. So it's easier for hardware manufacturers to either include none, or to offer just a single default option of possibly weak crypto.
From their press release:
https://www.phison.com/en/company/newsroom/press-releases/ge...