
Sure, adding that functionality to NVMe would be easy; the spec has enough extension points for such support. For example, a global flag whose support is advertised by the controller and which can then be turned on by the host, causing the very same normal flush opcode to also guarantee a pipelined write barrier (while retaining the flush-write-back-caches-before-reporting-completion-of-this-submitted-IO-operation semantics).
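
A minimal sketch of what that opt-in could look like from the host driver's side. The capability bit, feature identifier, and helper functions below are hypothetical, made up purely to illustrate the negotiation; none of them exist in the actual NVMe spec:

    /* Hypothetical opt-in: controller advertises "flush is also an ordered
     * barrier" support, host enables it with a Set Features command.
     * All identifiers below are invented for illustration. */
    #include <stdbool.h>
    #include <stdint.h>

    struct nvme_ctrl;                                     /* opaque driver handle (assumed) */
    uint32_t nvme_id_ctrl_caps(struct nvme_ctrl *ctrl);   /* hypothetical: read Identify Controller caps */
    int nvme_set_features(struct nvme_ctrl *ctrl, uint8_t fid, uint32_t val);  /* hypothetical helper */

    #define CAP_ORDERED_FLUSH  (1u << 5)   /* hypothetical capability bit */
    #define FEAT_ORDERED_FLUSH 0xC0u       /* hypothetical feature identifier */

    bool enable_ordered_flush(struct nvme_ctrl *ctrl)
    {
        if (!(nvme_id_ctrl_caps(ctrl) & CAP_ORDERED_FLUSH))
            return false;  /* controller doesn't know the extension */
        /* After this, a normal Flush opcode also acts as a pipelined write
         * barrier for commands submitted behind it, on top of its usual
         * flush-caches-before-completing semantics. */
        return nvme_set_features(ctrl, FEAT_ORDERED_FLUSH, 1) == 0;
    }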

The reason it hadn't yet been supported, btw., is that they explicitly wanted to allow fully parallel processing of commands within a queue, at least for submissions that exist concurrently in the command queue. In practice I don't see why this would have to be enforced to such an extent: the only reason for out-of-order processing I can think of is that the auxiliary data of a command lives in host memory, and the DMA reads the NVMe controller issues across PCIe to fetch it can complete out of order for host DRAM controller/access-pattern reasons. So it might be something you'd not want to turn on without using a controller memory buffer (where you mmap some of the DRAM on the NVMe device into host address space, write your full-detail commands directly into it across PCIe, and spare the NVMe controller from having to first send a read request across PCIe when you ring its doorbell: instead it can read directly from its local DRAM when the doorbell rings).
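
For the controller-memory-buffer path, here's a rough sketch of what submission looks like once the SQ slots and the tail doorbell have been mmap'd from the device's BAR. The struct and function names are mine and the queue bookkeeping is simplified:

    #include <stdint.h>
    #include <string.h>

    struct nvme_sqe { uint8_t bytes[64]; };       /* one 64-byte submission queue entry */

    struct cmb_queue {
        volatile struct nvme_sqe *sq;             /* SQ slots, mmap'd from the CMB BAR */
        volatile uint32_t *sq_tail_doorbell;      /* mmap'd tail doorbell register */
        uint16_t tail, depth;
    };

    /* Submit one command: copy it into device-resident memory, then ring the
     * doorbell. The controller can fetch the entry from its local DRAM instead
     * of issuing a PCIe read back into host memory. */
    static void cmb_submit(struct cmb_queue *q, const struct nvme_sqe *cmd)
    {
        memcpy((void *)&q->sq[q->tail], cmd, sizeof(*cmd));
        /* Order the SQE write before the doorbell write; a real driver uses the
         * appropriate MMIO/write-combining barrier for the platform. */
        __atomic_thread_fence(__ATOMIC_RELEASE);
        q->tail = (uint16_t)((q->tail + 1) % q->depth);
        *q->sq_tail_doorbell = q->tail;
    }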



