"Disabling bad memory" as in DRAM isn't a thing that happens, to anybody. DRAM is made in its own fabs and goes through QA before being packaged. So whether it lands onto DIMMs or in a SoC package, it's known-good dies that are being used.
And you cannot look at the number of SKUs without also taking into account how many different die designs are being manufactured and binned to produce that product line. Intel and AMD CPUs have far more bins per die, and usually fewer different die sizes as a starting point. Apple isn't manufacturing an M3 Max and sometimes binning it down to an M3 Pro, or an M3 Pro down to an M3. You're really just seeing about two choices of enabled core count from each die, which is not any kind of red flag.
Disabling cache as a binning strategy isn't too common these days, unless it's a cache slice associated with a CPU or GPU core that's being disabled. Large SRAMs are usually manufactured with some spare rows and columns so that they can tolerate a few defects while still delivering the nominal capacity. SRAM defects are usually not the driving force behind a binning decision.
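To make the redundancy idea concrete, here's a toy sketch (the array size, spare count, and fuse-map shape are all invented for illustration, not any vendor's actual repair flow): the macro ships with a few spare rows, and per-die fuses steer accesses away from rows that tested bad, so the part still presents its full nominal capacity.

```python
# Toy model of SRAM redundancy repair. Sizes and names are illustrative only.

NOMINAL_ROWS = 1024   # rows exposed as usable capacity
SPARE_ROWS = 4        # extra rows fabricated purely for repair

def build_fuse_map(defective_rows):
    """Map each defective nominal row to a spare row, if enough spares exist.

    Returns the remap table, or None if the macro can't be repaired
    (the case where a die might actually lose capacity or be scrapped).
    """
    if len(defective_rows) > SPARE_ROWS:
        return None
    spares = iter(range(NOMINAL_ROWS, NOMINAL_ROWS + SPARE_ROWS))
    return {bad: next(spares) for bad in defective_rows}

def resolve_row(logical_row, fuse_map):
    """Redirect an access to a spare row if the logical row is marked bad."""
    return fuse_map.get(logical_row, logical_row)

# A die with two bad rows still presents the full nominal capacity:
fuses = build_fuse_map({17, 900})
assert fuses is not None
assert resolve_row(17, fuses) != 17   # remapped to a spare
assert resolve_row(42, fuses) == 42   # healthy rows untouched
```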
Back when Intel was stagnant at 4 cores for the bulk of their consumer CPU product line, they did things like sell i7 parts with 8MB of L3 cache and i5 parts with 6MB, more as a product segmentation strategy than to improve yields. (They once infamously sold a CPU with 3MB of last-level cache and later offered a paid software upgrade to enable 4MB, meaning every chip of that model had already passed binning at 4MB.) Nowadays Intel's cache capacities are pretty well correlated with the number of enabled cores. AMD usually doesn't vary L3 cache size even between parts with different numbers of enabled cores: you get 32MB per 8-core chiplet, whether 8 or 6 of those cores are enabled.
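A tiny sketch of that last point, with made-up SKU names and an oversimplified defect model, just to show that the L3 figure comes from the chiplet itself and not from how many cores are left enabled:

```python
# Toy SKU-selection rule for an 8-core chiplet with a shared 32MB L3.
# The "8 or 6" policy mirrors the comment above; the SKU names and the
# defect-count-only model are made up for illustration.

CORES_PER_CHIPLET = 8
L3_MB_PER_CHIPLET = 32   # shared L3 stays fully enabled either way

def pick_sku(bad_cores):
    """Return (enabled_cores, l3_mb, sku) for one chiplet, or None if unsellable."""
    good = CORES_PER_CHIPLET - len(bad_cores)
    if good >= 8:
        return 8, L3_MB_PER_CHIPLET, "8-core SKU"
    if good >= 6:
        return 6, L3_MB_PER_CHIPLET, "6-core SKU"  # fuse off down to 6
    return None  # too many defects for either consumer bin

print(pick_sku(set()))     # (8, 32, '8-core SKU')
print(pick_sku({2, 5}))    # (6, 32, '6-core SKU') -- L3 unchanged
```

Real binning also involves frequency, voltage, and leakage screens, not just a defect count, so treat this purely as an illustration of the capacity point.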
I don't know to what extent the cache sizes on Apple's chips vary between bins, but they probably follow the same pattern of only losing cache that's tied to some other structure being disabled.