Tynt: What's being copied from your website right now? (tynt.com)
25 points by epall on Feb 22, 2009 | 20 comments



From their about page: Tracer inserts a JavaScript tag into the HTML code of your website to non-invasively track users' interactions with your website content. The only change the user sees is that when copied content is pasted into an email, blog, or website, we automatically add a link back to the originating site at the end of the content.

What happens if the user just deletes the link, as I have with the above quote?


I'm pretty sure this service is not meant for the technically inclined. If someone who is technically inclined were worried about this, they would understand that they could use Google and/or a custom web crawler that searches over a subset of sites they care about[1].

[1] After all, if someone copies your work and no one sees it, who really cares? The scope of your web crawler must be limited somehow. You could simply have it check up on your competition and/or popular links on news-aggregating sites for categories related to your business. For example, if I were starting a technology blog, I could write a Python crawler in about 30 minutes that parsed through blog articles on other popular tech blogs with related tags.
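
A rough sketch of what I mean (the site list and the "fingerprint" sentences are placeholders, and a real crawler would parse the HTML properly and rate-limit itself):

    # Rough sketch of the crawler described above. URLs and fingerprint
    # sentences are made up for illustration only.
    import urllib.request

    # Distinctive sentences lifted from your own posts, unlikely to appear by chance.
    FINGERPRINTS = [
        "some sentence unlikely to appear anywhere else",
        "another distinctive phrase from a recent post",
    ]

    # Competitor blogs and aggregator category pages worth watching (hypothetical).
    SITES_TO_WATCH = [
        "http://example.com/tech/",
        "http://example.org/startups/",
    ]

    def fetch(url):
        """Download a page and return its text, ignoring undecodable bytes."""
        with urllib.request.urlopen(url, timeout=10) as resp:
            return resp.read().decode("utf-8", errors="replace")

    for url in SITES_TO_WATCH:
        try:
            page = fetch(url).lower()
        except OSError as err:
            print("skipping %s: %s" % (url, err))
            continue
        for sentence in FINGERPRINTS:
            if sentence.lower() in page:
                print("possible copy of %r on %s" % (sentence, url))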


I think that there is a difference between services not meant for technically inclined people and those built by non-technically inclined people. In its current incarnation, I do not see how this service is useful, even if you are not technically inclined.

I think there is a need for a service like this (that watches the web for people republishing your content), but Tracer is not it. Tracer is just a silly JavaScript hack.


Would combining link placement, reporting back to the server on every highlight, and searching for the content later be a good idea?


It doesn't work if JavaScript is disabled. In addition, NoScript doesn't like it (even with JavaScript enabled).


Years ago I applied a trick I learned from the Associated Press. Back in World War I, the Associated Press suspected that the Hearst newspaper chain was copying their stories from the Russian front, and they began running stories about a fictitious general Nelotsky. When the Hearst newspapers picked up the Nelotsky "story," the AP called them on it, pointing out that the first part of the general's name is just the English word "stolen" spelled in reverse. I used a similar technique on my most plagiarized webpage,

http://learninfreedom.org/colleges_4_hmsc.html

mentioning a college that doesn't actually exist but whose name comes from the Greek word for "steal." That finally got one persistent thief to acknowledge that my site was his source.



When I was a kid I had problems with one particular boy always copying my tests in school. I mentioned it to my mum and she suggested I write fake answers in pencil, wait for the boy to copy, and then go back and correct myself in pen.

Similar technique, I guess, and he took the hint after a couple of days.


Map makers (at the street level) did a similar thing.


Some of those ghost streets, though, were simply city projects gone wrong (they ran out of money or support), so only some of the errors on maps are deliberate traps.

There is one (somewhat) famous example of a trap: the fictitious towns Goblu and Beatosu, slipped onto an official Michigan highway map just across the Ohio border (this being back in the day of the Michigan–Ohio State rivalry).


These guys are not fans: http://www.ericlander.com/324.html


Looks to me like Tynt is trying to head in a new direction with Tracer. I don't see any references on their homepage to the sort of technology that these guys are concerned about.


Great idea! Now we can finally, with accuracy, determine what portion of the internet population uses the click-drag technique when reading long blog posts. Yeah!


I believe this still works with copy (apple-c) and paste (apple-v), which I do all the time, not just click-drag.


Interesting concept, but this is not designed to find where your content has been copied onto other sites/blogs. That would be a cool service.


That service is called Google.


And if you're looking for a slightly more scalable solution than "Find a sentence from Page A which is juicy, Google it, record results, repeat for all 10,000 pages on my website", you may be interested in CopyScape.

For example, if you feed them www.tynt.com, they'd tell you that feedmyapp.com has borrowed a few sentences from them. (It makes sense, as the borrowed phrases are part of a listing for tynt.com)


That service is called Attributor.com


I use http://copygator.com but am interested in how Tynt's solution differs.


Copygator actually crawls the internet looking for your blog posts. All Tracer does is sniff for a ctrl-c from someone visiting your website and inject a link into the copied text.
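
Roughly, the trick looks something like this (a minimal sketch of the general technique using the browser's copy event, not Tynt's actual script; the "Read more" wording and use of the page URL are just my own choices):

    // Listen for the copy event and append an attribution link to whatever
    // text the visitor copied, replacing the browser's default copy.
    document.addEventListener('copy', function (event) {
      var selection = window.getSelection().toString();
      if (!selection) {
        return; // nothing selected, leave the copy alone
      }
      var attribution = '\n\nRead more: ' + window.location.href;
      // Put the modified text on the clipboard instead of the plain selection.
      event.clipboardData.setData('text/plain', selection + attribution);
      event.preventDefault();
    });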



