Think about how CPUs access RAM (Random Access Memory).
Now think about how you would store and process the ECC (checksum / parity) data if you didn't have the optimally dispersed extra bits alongside each word.
This is very much like RAID-5/6/Z, or like the integrity codes added to other media. CDs and DVDs present the ECC-protected data as the default layer and also expose the raw blocks to the OS.
Random seeks on a CD / DVD were possible because the device could start reading from anywhere and still get back a full, corrected stream of data.
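To make the RAID comparison concrete, here's a minimal sketch of the XOR parity that RAID-5 relies on, assuming a toy stripe of three data blocks plus one parity block (real arrays rotate parity across disks and use much larger stripes):

    # Toy RAID-5-style parity: XOR the data blocks to get the parity block,
    # and XOR the survivors with the parity block to rebuild a lost one.
    def xor_blocks(blocks):
        out = bytearray(len(blocks[0]))
        for block in blocks:
            for i, byte in enumerate(block):
                out[i] ^= byte
        return bytes(out)

    data = [b"AAAA", b"BBBB", b"CCCC"]          # one stripe of data blocks
    parity = xor_blocks(data)                   # the "extra" block

    # Pretend data[1] was lost: XOR of the survivors plus parity recovers it.
    rebuilt = xor_blocks([data[0], data[2], parity])
    assert rebuilt == data[1]

RAM-style ECC uses Hamming-type codes rather than plain XOR parity, but the storage question is the same: the check data has to live somewhere.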
With RAM used the same way, the first question is where the ECC boundary is drawn: per bus word (e.g. 128 or 256 bits), per OS page (often 4 KByte on common OSes), or some other unit? The next question is where the ECC data itself is stored.
If you're keeping this efficient, smaller reads are better, and the 4 KByte OS page is a common multiple for lots of things, so for the moment that's about where I'd draw the line. Dedicating silicon to calculate parity over a page that large is probably far more expensive than the gates for a couple of extra bits per native word (a design that could also match the parity used for the internal caches and thus be reused), but it would be the logical maximum for doing things the hard way.
Virtual page alignment would also be a performance issue, and the cache would have an effectively fixed blocksize and read-ahead granularity.
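To put rough numbers on that tradeoff, here's a back-of-napkin sketch of how many Hamming SECDED check bits each candidate unit would need; the unit sizes are just the examples from above, not a claim about any real memory controller:

    # Back-of-napkin SECDED sizing: smallest r with 2**r >= data_bits + r + 1,
    # plus one overall parity bit for double-error detection.
    def secded_check_bits(data_bits):
        r = 1
        while 2 ** r < data_bits + r + 1:
            r += 1
        return r + 1

    for label, data_bits in [("64-bit word", 64),
                             ("256-bit bus burst", 256),
                             ("4 KByte page", 4096 * 8)]:
        check = secded_check_bits(data_bits)
        print(f"{label:18s} {check:3d} check bits "
              f"({100 * check / data_bits:.3f}% storage overhead)")

The 64-bit word comes out at 8 check bits, i.e. the familiar 72-bit ECC DIMM word, while the page-sized unit costs almost nothing to store but would need a very wide parity calculation and a read-modify-write on every partial update.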
Or everyone could just use the already-developed industry standard, the one optimized by engineers who weren't working on the back of a napkin.
You could even have different parity levels for different applications, depending on need.
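A sketch of what selectable levels might look like; the per-allocation request_memory knob is purely hypothetical, though the extra-bit counts per 64-bit word are the textbook ones:

    # Hypothetical per-application protection levels. The request_memory()
    # knob is invented for illustration; the bit costs are the usual ones
    # for plain parity and SECDED over a 64-bit word.
    PROTECTION_LEVELS = {
        "none":   {"extra_bits": 0, "capability": "no detection"},
        "parity": {"extra_bits": 1, "capability": "detect single-bit errors"},
        "secded": {"extra_bits": 8, "capability": "correct 1 bit, detect 2"},
    }

    def request_memory(pages, level):
        info = PROTECTION_LEVELS[level]
        overhead = info["extra_bits"] / 64
        print(f"{pages} pages at '{level}': {info['capability']}, "
              f"{overhead:.1%} extra storage")

    request_memory(1024, "parity")   # e.g. a scratch buffer that can tolerate a retry
    request_memory(1024, "secded")   # e.g. the database page cache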