Which means, assuming Google is running very large machines with lots of memory, one might expect a single correctable error once every 6-10 years on your average workstation or small server. That's generously assuming your workstation has 1/3 as much memory as the average Google server.
Google does not use very large or even large machines for most of their fleet. You can quickly see in the paper that this covers machines with 1, 2, and 4 GB of RAM (in 2006-2008).
Those were state of the art 13 years ago. It's not safe to extrapolate from this paper that they aren't using servers with significantly more memory today. Thirteen years is more than two full depreciation intervals.
With a single bit flip per year on 8% of DIMMs, you only need 12.5 DIMMs in your workstation to expect one bit flip every year. Not everyone has that many DIMMs, but at least 4 is pretty normal, which works out to roughly one flip every 3 years per workstation.
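To make that arithmetic explicit, here's a quick back-of-the-envelope sketch in Python, assuming the ~8% per-DIMM-per-year correctable-error figure quoted above (that rate is the only input; everything else follows from it):

```python
# Expected time between single-bit flips per machine, assuming roughly
# 8% of DIMMs see one correctable error per year (assumption from the thread).
per_dimm_rate = 0.08  # expected bit flips per DIMM per year

for dimms in (4, 8, 12.5, 16):
    flips_per_year = dimms * per_dimm_rate
    years_between = 1 / flips_per_year
    print(f"{dimms:>5} DIMMs -> {flips_per_year:.2f} flips/year, "
          f"~one every {years_between:.1f} years")
```

With 4 DIMMs that gives about 0.32 flips/year, i.e. roughly one every 3 years; at 12.5 DIMMs it crosses one per year.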
But I don't know how relevant these metrics from 2009 are. Did memory get better or worse for bit flips compared to 2009?