
I'm building one of these at the moment.

It's for processing URLs, and it's just one part of what I need, but knowing that others might like such a thing means I'll see if I can open it up afterwards.

I'm looking to achieve this by two methods:

1) If the link contains the ID of the end page (e.g. the ASIN for Amazon), then reconstruct the URL of the end page from that ID.

2) Where #1 fails, follow the link and check whether the end page declares a permalink or canonical URL.
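The two methods above could be sketched like this in Go. This is a minimal sketch, not the author's actual code: `asinPattern` and `canonicalAmazonURL` are hypothetical names, and the regex only covers the common `/dp/` and `/gp/product/` URL shapes.

```go
package main

import (
	"fmt"
	"regexp"
)

// asinPattern is a hypothetical regex extracting an Amazon ASIN
// (ten alphanumeric characters) from common URL shapes like
// /dp/B001234567 or /gp/product/B001234567.
var asinPattern = regexp.MustCompile(`/(?:dp|gp/product)/([A-Z0-9]{10})`)

// canonicalAmazonURL is method #1: reconstruct the end page's URL
// from the ID embedded in the link, dropping affiliate tags and
// tracking parameters in the process. Returns false when the ID
// can't be found, in which case method #2 (fetch the page and look
// for a rel="canonical" link) would take over.
func canonicalAmazonURL(raw string) (string, bool) {
	m := asinPattern.FindStringSubmatch(raw)
	if m == nil {
		return "", false // fall through to method #2
	}
	return fmt.Sprintf("https://www.amazon.com/dp/%s", m[1]), true
}

func main() {
	u, ok := canonicalAmazonURL("https://www.amazon.com/gp/product/B001234567?tag=someaffiliate-20")
	fmt.Println(ok, u)
}
```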

Ironically, in my own use case I strip affiliate codes in order to add my own... but I'm using Go and am trying to structure it all in a flow-based way, in which stripping and adding codes are just separate steps.

So it shouldn't be too hard to then expose each side as a service by itself.
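A rough illustration of that flow-based structure, with stripping and adding as separate composable steps (a sketch under my own assumptions; the `tag` parameter and the function names are hypothetical, and real affiliate parameters vary by site):

```go
package main

import (
	"fmt"
	"net/url"
)

// step is one stage in the flow: take a URL, return a rewritten one.
type step func(string) string

// pipeline composes steps, so stripping and adding codes stay
// separate and either side can be exposed as a service on its own.
func pipeline(steps ...step) step {
	return func(u string) string {
		for _, s := range steps {
			u = s(u)
		}
		return u
	}
}

// stripAffiliate removes a hypothetical "tag" affiliate parameter.
func stripAffiliate(raw string) string {
	u, err := url.Parse(raw)
	if err != nil {
		return raw // leave unparseable input untouched
	}
	q := u.Query()
	q.Del("tag")
	u.RawQuery = q.Encode()
	return u.String()
}

// addAffiliate appends our own code as a separate, later step.
func addAffiliate(code string) step {
	return func(raw string) string {
		u, err := url.Parse(raw)
		if err != nil {
			return raw
		}
		q := u.Query()
		q.Set("tag", code)
		u.RawQuery = q.Encode()
		return u.String()
	}
}

func main() {
	clean := pipeline(stripAffiliate, addAffiliate("mytag-20"))
	fmt.Println(clean("https://www.amazon.com/dp/B001234567?tag=theirs-20"))
}
```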

2) What if I build a redirect engine that doesn't redirect to an affiliate page at first and then turn on the redirect after your engine completes the permalink check? For example, if you do the permalink check when the story is published, I would have my engine wait 5 minutes or so before changing the destination of the URL to my affiliate page.

For my own work I plan to keep a store of ALL user-submitted links.

I plan to iterate over it and do sane things with it:

* Is it an embedded image, and is Chrome reporting the domain as malware? Convert it to a plain link instead of an image.

* Is it a link that was only transiently available? Indicate that it is no longer available and suggest searching instead.

My aim is to self-heal user-submitted content that links elsewhere, as much as it is to monetise that content where possible.

I plan to visit links on a schedule and react as necessary. I hadn't really factored in affiliate spammers, but that would just be part of the self-healing now... detect a change in destination and re-run the step that strips affiliate codes.
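That scheduled re-check could look something like this. A sketch only: the resolver is injected rather than being a real HTTP client, and `heal`/`stripAffiliates` are hypothetical stand-ins for the real steps.

```go
package main

import (
	"fmt"
	"strings"
)

// resolve follows a link to its current destination. In production
// this would be an HTTP client following redirects; injecting it
// keeps the healing logic testable.
type resolve func(link string) string

// heal re-checks a stored link on schedule: if the destination has
// changed (e.g. an affiliate spammer swapped the target), re-run the
// affiliate-stripping step on the new destination.
func heal(link, storedDest string, r resolve) string {
	current := r(link)
	if current == storedDest {
		return storedDest // nothing changed, keep as-is
	}
	return stripAffiliates(current)
}

// stripAffiliates is a toy stand-in that drops a "?tag=..." suffix.
func stripAffiliates(u string) string {
	if i := strings.Index(u, "?tag="); i >= 0 {
		return u[:i]
	}
	return u
}

func main() {
	// Fake resolver simulating a destination hijacked by a spammer.
	fake := func(string) string { return "https://example.com/item?tag=spam-20" }
	fmt.Println(heal("https://short.example/abc", "https://example.com/item", fake))
}
```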
