All web automation and automation prevention is a cat-and-mouse game: you never stop the scrapers, you just create more effort for them. It's like traditional and digital security in that regard, except that security usually rests on genuine hardness (cryptography, the thickness of a physical barrier), whereas stopping web scraping is about piling up trivial obstacles that make the process more tedious.
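To make "trivial obstacles" concrete, here's a minimal sketch of the kind of check sites actually deploy: client-side headless detection. The signals it reads (navigator.webdriver, an empty plugin list) are real browser properties, but they're trivially spoofed, which is exactly the point.

```ts
// A minimal sketch of one trivial obstacle: client-side headless detection.
// Both signals are real but easily faked; this adds effort, not hardness.
function looksAutomated(): boolean {
  // WebDriver-controlled browsers are required to set navigator.webdriver.
  if (navigator.webdriver) {
    return true;
  }
  // Older headless Chrome builds reported an empty plugin list.
  if (navigator.plugins.length === 0) {
    return true;
  }
  return false;
}

if (looksAutomated()) {
  // Challenge or degrade rather than hard-block; a determined scraper
  // will patch these properties anyway.
  document.body.textContent = "Please verify you are human.";
}
```

A scraper defeats this by setting a flag in its automation framework, so the site adds another check, and the cycle repeats.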
Eventually, human browsing and headless browsing converge: nobody wants to make the experience worse for humans, so anything that still works for humans keeps working for the headless browsers too.
In my opinion, if you’re running a site that is existentially threatened by someone else having your content, you need something else for your moat.
Don't worry. Thanks to the W3C and their EME (Encrypted Media Extensions) standard, scraping prevention will eventually reach the hardness of other kinds of security. I'm surprised I haven't yet seen a simple framework for serving your page not as a page but as an EME-protected blob that carries a rendering of the content. We will see just that.
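The handshake such a framework would sit on is already standard. A rough sketch, using the spec-mandated Clear Key system and a hypothetical /license endpoint standing in for a real DRM license server; the "page" would just be whatever rendering the decrypted stream paints:

```ts
// A rough sketch of the standard EME handshake. "org.w3.clearkey" is the
// key system every EME implementation must support; a real deployment
// would use something like Widevine or PlayReady. The /license endpoint
// is a hypothetical placeholder.
async function attachDrm(video: HTMLVideoElement): Promise<void> {
  const access = await navigator.requestMediaKeySystemAccess("org.w3.clearkey", [
    {
      initDataTypes: ["cenc"],
      videoCapabilities: [{ contentType: 'video/mp4; codecs="avc1.42E01E"' }],
    },
  ]);
  const mediaKeys = await access.createMediaKeys();
  await video.setMediaKeys(mediaKeys);

  video.addEventListener("encrypted", async (event) => {
    if (!event.initData) return;
    const session = mediaKeys.createSession();
    session.addEventListener("message", async (msg) => {
      // The browser emits an opaque license request; relay it to the
      // (hypothetical) license server and feed the answer back in.
      const response = await fetch("/license", {
        method: "POST",
        body: msg.message,
      });
      await session.update(await response.arrayBuffer());
    });
    await session.generateRequest(event.initDataType, event.initData);
  });
}
```

Package your "content" as an encrypted MP4, call attachDrm on the video element, and the scraper gets ciphertext while humans get pixels.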