Surfacing new content in search engines is a very challenging problem. I am guessing they use a combination of social signals (Twitter, Facebook popularity) and domain popularity, among other signals.
Google pretty much knows (or can accurately estimate) exactly when a new document appears on the web and how many people are visiting it; they don't even need to rely on secondhand social signals for this. They control the web's dominant crawler (Googlebot), browser (Chrome, which sends everything you type in the address bar to them by default), ads (AdSense) and tracking (Google Analytics) platforms.
You are on point. Recency is a challenging problem for search engines in multiple ways. It's not limited to discovering new content: how do you index it quickly? And during ranking, how do you balance things when the same query has "very new", "new", "slightly old" and "really old" results? This applies both to news and to new webpages surfacing on the web.
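To make that trade-off concrete, here is a minimal sketch of one common textbook approach: blending a text-relevance score with an exponential freshness decay. The half-life, weight, and function names are illustrative assumptions for this comment, not Cliqz's actual ranking function.

    import time

    HALF_LIFE_HOURS = 24.0   # assumed: freshness halves every 24 hours
    FRESHNESS_WEIGHT = 0.3   # assumed: weight of recency vs. text relevance

    def freshness(published_ts: float, now: float) -> float:
        """Exponential decay in (0, 1]: 1.0 for a brand-new doc, 0.5 after one half-life."""
        age_hours = max(0.0, (now - published_ts) / 3600.0)
        return 0.5 ** (age_hours / HALF_LIFE_HOURS)

    def blended_score(relevance: float, published_ts: float, now: float) -> float:
        """Linear blend of relevance (assumed normalized to [0, 1]) and freshness."""
        return (1 - FRESHNESS_WEIGHT) * relevance + FRESHNESS_WEIGHT * freshness(published_ts, now)

    now = time.time()
    # A fresher but slightly less relevant doc can outrank an older, more relevant one.
    week_old = blended_score(relevance=0.90, published_ts=now - 7 * 24 * 3600, now=now)
    two_hours = blended_score(relevance=0.80, published_ts=now - 2 * 3600, now=now)
    print(f"week-old doc: {week_old:.3f}, two-hour-old doc: {two_hours:.3f}")
    # week-old doc: 0.632, two-hour-old doc: 0.843

In practice the decay parameters would also depend on how recency-sensitive the query itself is (a breaking-news query vs. a reference lookup), which is part of what makes this hard.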
On top of this, we have to remember that this is a fully autonomous real-time system, which means solving some of the most difficult engineering challenges at scale while staying mindful of latency and quality constraints.
At the end of the day, it's all about the final user experience that we ship, and we are very mindful of that. We will be publishing more details about Cliqz search on our blog https://0x65.dev/ in the coming days, so stay tuned.