Apple's epsilon reset problem is real, but it's worth pointing out that they layer additional hashing-based heuristics on top, which plausibly adds another measure of privacy [1]. Plausibly, not provably -- but it's a bit more than just resetting epsilon. I believe Google and Microsoft use similarly tweaked forms of differential privacy. In particular, note that all of these companies -- again, going off their public papers -- use the "local" variant of differential privacy, in which each user's data is randomized on-device before collection, so users need to place less trust in the data collector.
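For anyone unfamiliar with local DP: the textbook mechanism is randomized response, where each user flips their true bit with some probability before reporting it, and the aggregator corrects for the noise. This is just an illustrative sketch of that classic mechanism, not what Apple/Google/Microsoft actually deploy (their systems use hashing and sketching on top):

```python
import math
import random

def randomized_response(bit: bool, epsilon: float) -> bool:
    """Report the true bit with probability e^eps / (e^eps + 1),
    otherwise flip it. Satisfies epsilon-local DP for one binary value."""
    p_truth = math.exp(epsilon) / (math.exp(epsilon) + 1.0)
    return bit if random.random() < p_truth else not bit

def estimate_true_rate(reports, epsilon):
    """Unbiased estimate of the population rate from noisy reports.
    E[observed] = q*(2p - 1) + (1 - p), so solve for q."""
    p = math.exp(epsilon) / (math.exp(epsilon) + 1.0)
    observed = sum(reports) / len(reports)
    return (observed - (1.0 - p)) / (2.0 * p - 1.0)

# Simulate 100k users, 30% of whom have the sensitive attribute, eps = 1.
random.seed(0)
true_bits = [random.random() < 0.3 for _ in range(100_000)]
reports = [randomized_response(b, 1.0) for b in true_bits]
print(estimate_true_rate(reports, 1.0))  # lands near 0.3
```

The point of the "local" variant is visible here: the server only ever sees the flipped bits, so even a malicious collector learns little about any individual, while population statistics stay recoverable.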
The question of "lifetime" differential privacy, for a single user across different computations and datasets, is still fairly open as far as I know.
[1] https://machinelearning.apple.com/docs/learning-with-privacy...