
Apple NVMe SSDs have worked fine for years in mainline. This is a myth that won't die.

The Linux driver required two new quirks (different queue entry size, and an issue with using multiple queues IIRC). That's it. That's all it was.

On the M1, NVMe is not PCIe but rather a platform device, which requires abstracting out the bus from the driver (not hard); Arnd already has a prototype implementation of this and I'm going to work on it next.




As I understand it, Apple's NVMe controllers were pretty wildly non-standards-compliant. They assume tags are allocated to commands the same way Apple's own driver does it: the controller crashes if you use the same tag at the same time in both the admin and IO queues, and it only accepts a limited range of tags. And, as you say, they use a totally different queue entry size from the one the standard requires. Also, apparently interrupts didn't work properly or something.

Oh, and it looks like the fixes only made it into mainline Linux in 5.4, less than a year and a half ago, and from there it would've taken some time to reach distros...


There's also applespi for their input devices. With MacBooks you never know what protocol Apple will change in the next iteration. Not something I would use as a daily driver (running Linux) anymore.


Maybe I'm remembering it wrong, but wasn't there an issue with a secret handshake, and if the system didn't do it in a certain time after the boot, the drive disappeared? I.e. some kind of T2-based security?


Please don't give Apple any more ideas.


Interesting... what bus does it use if not PCIe? At the driver level I'm guessing it just dumps NVMe packets into shared memory and twiddles some sort of M1-specific hardware register?


Yep.

Generally, "platform device" means that it's just a direct physical memory map. Honestly, from a driver perspective, that's sort of what you get with PCIe as well. The physical addresses is just dynamically determined during enumeration instead. Of course, there's some boilerplate core stuff to perform mappings and handle interrupts specific to PCI, but at the end of the day, you just get a memory mapped interface.

This is unlike something like USB where you need to deal with packets directly.


> Honestly, from a driver perspective, that's sort of what you get with PCIe as well.

Right, I was sort of alluding to that. I’m really just curious how the NVMe packets physically make their way to the SSD.


A network-on-chip protocol. Probably something ACE5-compatible, but Apple hasn't been public about those bits AFAIK.


Yes. It seems there is no distinct SSD in the system; the M1 SoC appears to communicate with raw flash. I tried looking up the datasheet for the flash ICs (SDRGJHI4) to see if it would leave any clues, but it's not publicly available AFAICT. It's rather interesting that Apple has custom or semi-custom IP as part of their SoC that manages raw flash. That does seem like a natural outgrowth of shipping iPhones for so many years.

The specific logical signals between separate IP blocks on the SoC are then slightly less interesting to me. It's likely something similar to ACE5, like you said, for sharing the memory bus.


Ah, yeah, it's been integrated on their SoCs for quite a while. Word on the street is that it's the (internal-only successor to the) Anobit IP they bought back in 2011, with an ARM core strapped to the front for the NVMe interface.



