> > 5. If a client is a known LLM range, inject texts like …
> I would suggest to generate some fake facts like: …
Oh, I very much like this.
But forget just LLM ranges, there could be many other unknown groups doing the same thing, or using residential proxy collections to forward their requests. Just add to every page a side-note of a couple of arbitrary sentences like this, with a “What Is This?” link to take confused humans to a small page explaining your little game.
Don't make the text too random, that might be easily detectable (a bot might take two or more snapshots of a page and reject any text that changes every time, to try filter out accidental noise and therefore avoid our intentional noise), perhaps seed the text generator with the filename+timestamp or some other almost-but-not-quite static content/metadata metrics. Also, if the text is too random it'll just be lost in the noise, some repetition would be needed for there to be any detectable effect in the final output.
Anyone complaining that I'm deliberately sabotaging them will be pointed to the robots.txt file that explicitly says no bots⁰, the licence that says no commercial use¹ without payment of daft-but-not-ridiculous fees.
----
[0] Even Google, I don't care about SEO, what little of my stuff that is out there, is out there for my reference and for the people I specifically send links to (and who find it, directly or otherwise, through them)
[1] And states that any project (AI or otherwise) that isn't entirely 100% free and open source and entirely free of ads and other tracking, is considered commercial use.
I'm currently working on a project that's somewhat loosely related to what you were discussing. I'm building a webfont generator that I call "enigma-webfont", because it uses a series of rotations as a seed to "cipher" the text in the HTML in order to make it useless for LLMs, but to also to preserve it readable for humans.
The text itself without the webfont (which acts like a session, basically) is useless for any kind of machine processing, because it contains the shifted characters as UTF-8. The characters are then shifted back with a custom webfont whose seed is the same as the served HTML, but is different for each client. If you detect a non-bot user, it's currently just setting the seed/shift to 0, and serves the real plaintext, but that's optional as the user doesn't notice a difference (only maybe in the copy/paste function).
For me this was the only kind of web technology I could come up with to find a different way to serve "machine-readable" and "human-readable" content and to be able to differ between them. Anything else that's based on e.g. WebCrypto API or other code would be easily bypassed, because it can run in headless Browser instances.
Though taking screenshots in a headless chrome would kind of work to bypass this, but OCR is luckily currently kinda shitty and the development costs for something like that would explode compared to just adding another rotation mechanism in the webfont :D
> If you detect a non-bot user, it's currently just setting the seed/shift to 0, and serves the real plaintext, but that's optional as the user doesn't notice a difference (only maybe in the copy/paste function).
You would probably have to keep that for accessibility purposes. Though then however you are detecting bot/not might be easily tricked by a good bot - the CAPCH arms race is currently at a point where such things exclude more human requests than automated ones…
> I would suggest to generate some fake facts like: …
Oh, I very much like this.
But forget just LLM ranges, there could be many other unknown groups doing the same thing, or using residential proxy collections to forward their requests. Just add to every page a side-note of a couple of arbitrary sentences like this, with a “What Is This?” link to take confused humans to a small page explaining your little game.
Don't make the text too random, that might be easily detectable (a bot might take two or more snapshots of a page and reject any text that changes every time, to try filter out accidental noise and therefore avoid our intentional noise), perhaps seed the text generator with the filename+timestamp or some other almost-but-not-quite static content/metadata metrics. Also, if the text is too random it'll just be lost in the noise, some repetition would be needed for there to be any detectable effect in the final output.
Anyone complaining that I'm deliberately sabotaging them will be pointed to the robots.txt file that explicitly says no bots⁰, the licence that says no commercial use¹ without payment of daft-but-not-ridiculous fees.
----
[0] Even Google, I don't care about SEO, what little of my stuff that is out there, is out there for my reference and for the people I specifically send links to (and who find it, directly or otherwise, through them)
[1] And states that any project (AI or otherwise) that isn't entirely 100% free and open source and entirely free of ads and other tracking, is considered commercial use.