Specifically, it was recently shown that the fastest ORAM constructions (https://en.wikipedia.org/wiki/Oblivious_RAM) already match the lower bounds proven for differentially private RAMs - meaning you gain essentially no performance advantage by settling for the weaker security model. Of course, it's unclear whether this holds for other data structures and usages, but ORAM is not an insignificant research area.
See https://eprint.iacr.org/2018/1051 for more details.
However, DP is useful for much more than hiding access patterns; it was originally developed to protect individuals' information in statistical analyses, and obliviousness cannot help you in those use cases.
The purpose of differential privacy is to allow a processor to compute an approximately accurate model without ever learning the exact details of any individual source.
E.g. the canonical example for ORAM is a secure element (SE) talking to main memory, where you want to prevent an observer from learning which memory locations the SE is actually using.
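To make "hiding the access pattern" concrete, here's a deliberately naive linear-scan sketch (a toy construction of my own, not any real ORAM library): every logical access touches every slot, so an observer of the memory bus learns nothing about which address was actually wanted. Real schemes such as Path ORAM get the same guarantee with polylogarithmic rather than linear overhead.

    class LinearScanORAM:
        def __init__(self, size):
            # In a real deployment each slot would hold an encrypted block that is
            # re-encrypted on every pass; plaintext ints keep the sketch short.
            self.store = [0] * size

        def access(self, index, new_value=None):
            result = None
            # Touch every physical slot on every logical access, so the observed
            # pattern is identical regardless of which index was requested and
            # regardless of whether this was a read or a write.
            for i in range(len(self.store)):
                value = self.store[i]
                if i == index:
                    result = value
                    if new_value is not None:
                        value = new_value
                self.store[i] = value
            return result

    ram = LinearScanORAM(8)
    ram.access(3, new_value=42)   # a write looks identical to a read from outside
    print(ram.access(3))          # -> 42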
Compare that to the differential privacy use case: an endpoint wishing to process data without ever knowing what the actual source data was. For example, as deployed by Apple: many metrics are bludgeoned with noise on the customer's device before being sent to Apple's servers. That way (theoretically) Apple never knows the exact details of any one user's behaviour, but can accumulate enough information to make good approximations at scale.
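As a rough illustration of that local-noise idea, here's plain randomized response in Python (this is not Apple's actual mechanism, and P_KEEP and the function names are purely illustrative): each device flips its true answer with some probability before reporting, so the server can never be sure about any individual but can de-bias the aggregate.

    import random

    P_KEEP = 0.75  # probability a device reports its true bit (assumed parameter)

    def randomize_on_device(true_bit):
        # Runs client-side: flip the answer with probability 1 - P_KEEP.
        return true_bit if random.random() < P_KEEP else not true_bit

    def estimate_true_rate(reports):
        # Server-side de-biasing: observed = t*P_KEEP + (1 - t)*(1 - P_KEEP);
        # solve for t, the true population rate.
        observed = sum(reports) / len(reports)
        return (observed - (1 - P_KEEP)) / (2 * P_KEEP - 1)

    # Simulate 100,000 users where 30% truly exhibit some behaviour.
    reports = [randomize_on_device(random.random() < 0.3) for _ in range(100_000)]
    print(round(estimate_true_rate(reports), 3))  # ~0.30, even though every individual report is noisy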