> Defending copyrights as an individual isn’t an easy matter.
But the same people seem to fucking hate Article 11, which would forbid these sites from copying the entire article and would give the author of the material real options to take action.
I don't think Article 11 or 13 is particularly good, but this submission does a good job of showing why some people think they're needed.
And if you can keep the infringing copies off of Google, Bing, and other big-name search engines... then you've basically won in this day and age.
Yeah, it's an extreme measure. But it's pretty clear cut how Article 13 helps in this case. The link tax (Article 11) may also play a role, but it's less obvious IMO.
Nope. You cannot. Google is not a content-sharing platform, which is what Article 13 is about.
You can do the same as before: send a takedown request to Google. Again, literally nothing changes here.
If you ping Bing or Google before you publish, they’ll get a 404 and will take that as a sign that there is no content there. They also will wait longer before trying to reindex a page that previously returned a 404.
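The pre-publish check described above can be sketched in a few lines: verify that the draft URL really does serve a 404 before any search engine is pinged. This is a minimal illustration, not anyone's actual tooling, and the function name and URL are made up for the example.

```python
import urllib.request
import urllib.error

def is_unpublished(url):
    """Return True if the URL currently answers with HTTP 404,
    i.e. a crawler pinged now would conclude there is no content."""
    try:
        urllib.request.urlopen(url)
    except urllib.error.HTTPError as e:
        return e.code == 404
    return False

# Example (hypothetical draft URL):
# is_unpublished("https://example.com/my-unpublished-article")
```

If this returns False, the draft is already reachable and pinging a search engine would index it; if True, the crawler sees the 404 and, as noted above, will also wait longer before re-crawling that path.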
Why does Google not do the same? It seems to me that it's their responsibility as a search engine to give authors tools to identify their work.
Of course, this doesn't prove your page has the original content, at least according to https://dejanseo.com.au/hijack/ (which, according to the author, is still a problem today; see https://news.ycombinator.com/item?id=17827589).
(I didn’t attack these servers, by the by. They came to my servers and gobbled up all the auto-generated junk I served them all on their own.)
However, in recent years it has become more and more difficult to identify the right IP address, as everyone is hosting behind Cloudflare or doing the actual scraping from a short-term lease server with a unique IP.
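One way to see why the Cloudflare case is a dead end: the IP you resolve for a scraper's domain is often a Cloudflare edge address, not the origin. A quick sketch of that check, using a few of Cloudflare's published IPv4 ranges (from cloudflare.com/ips; the list changes over time, so fetch it fresh in real use; `behind_cloudflare` is a name invented for this example):

```python
import ipaddress

# Subset of Cloudflare's published IPv4 ranges; may be out of date.
CLOUDFLARE_V4 = [
    "173.245.48.0/20", "104.16.0.0/13", "172.64.0.0/13",
    "108.162.192.0/18", "141.101.64.0/18", "162.158.0.0/15",
    "188.114.96.0/20", "198.41.128.0/17",
]

def behind_cloudflare(ip):
    """Return True if the IP falls in one of the listed Cloudflare ranges,
    meaning it is an edge proxy and not the scraper's real origin server."""
    addr = ipaddress.ip_address(ip)
    return any(addr in ipaddress.ip_network(net) for net in CLOUDFLARE_V4)
```

When this returns True, a DNS lookup tells you nothing about who actually hosts the content, which is exactly the difficulty described above.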
This gibberish actually outranks legit content that refers to my content, sometimes even my own articles, especially when it is turned into a PDF.
Seems like it is easy to block ~250k webpages like:
but I guess Google keeps them there to keep the spammers in the dark? I hope so, else their new ranking signals allow for easier spam.
I think it's unfair to look at these results and say "but it's so easy to block these". Google's time is best spent on solutions which will reliably and automatically block these, without going through fairly manual steps.
I did notice a decrease in quality in gmail's spam filter though. Increase in false positives and false negatives lately. I guess it's unlikely to be related...
No doubt the recent machine learning hype has given spammers more advanced tools to avoid detection.
On the old Usenet we communicated DIRECTLY with each other, so everyone had a reputation, we knew the "original sources", and we valued them accordingly. In today's world we are disconnected: on one side only content producers, on the other side only consumers, and in the middle at best awkward platforms that are limited and that limit user-to-user communication, so it's harder.
IMVHO the medicine is going back to the communication era we lost; no other system has proved effective. Take a look at audio/video piracy as a good living example.
Second, I have no interest in seeing ads or having my data sent to third parties.
If it turns out not to work, they can go back.