I'm not suggesting this is trivial to implement, but in principle, wouldn't it be fairly simple for Pinterest to identify these guys based on their 'social networks'? If a group of accounts only 'pins' posts from other accounts in that same group, that suggests either a spambot farm or a very inclusive group of friends. False positives could be reduced by also looking at account sign-up dates or profile photos.
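As a rough sketch of that idea (account names and the pin-graph representation here are made up for illustration): treat repins as edges, find connected components, and flag components whose members never pin anything outside the group.

```python
from collections import defaultdict

def find_closed_clusters(pins):
    """pins: dict mapping account -> set of accounts whose posts it repins.
    Returns the groups whose members only ever pin each other -- the
    'spambot farm or very inclusive friends' pattern described above."""
    # Union-find over the pin graph to get connected components.
    parent = {a: a for a in pins}

    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]  # path halving
            x = parent[x]
        return x

    for a, targets in pins.items():
        for b in targets:
            if b in parent:
                parent[find(a)] = find(b)

    groups = defaultdict(set)
    for a in pins:
        groups[find(a)].add(a)

    # A component is "closed" if no member pins outside it.
    return [g for g in groups.values()
            if all(pins[a] <= g for a in g)]
```

A real detector would obviously need thresholds on group size and activity so it doesn't flag every pair of mutual followers, which is where the sign-up-date and profile-photo signals would come in.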
It sounds like he only has one Amazon Associates account. Identifying all his accounts would be trivial, then -- find all accounts that have posted an Amazon link with the same associate ID in the URL.
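Concretely, Amazon Associates links carry the associate ID in the `tag` query parameter, so grouping accounts by it is a few lines (the post data here is illustrative):

```python
from collections import defaultdict
from urllib.parse import urlparse, parse_qs

def group_by_associate_id(posts):
    """posts: iterable of (account, url) pairs. Groups accounts by the
    Amazon Associates ID found in their links (the 'tag' parameter)."""
    accounts_by_tag = defaultdict(set)
    for account, url in posts:
        parsed = urlparse(url)
        if "amazon." not in parsed.netloc:
            continue  # not an Amazon link
        tag = parse_qs(parsed.query).get("tag")
        if tag:
            accounts_by_tag[tag[0]].add(account)
    return dict(accounts_by_tag)
```

Any tag shared across many otherwise-unrelated accounts is a strong signal they belong to one operation.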
Most experienced spammers fake the referrer. They'll have a scrub site set up that looks like an ordinary blog. The spammer spams links to the blog page (often through a URL shortener), but the link contains an id. If you visit the page with the id, it selectively redirects you to their Amazon affiliate link; without the id, it looks like a normal blog post.
Amazon will think the traffic comes from the blog post, and the person getting spammed won't get any protection from filtering Amazon links.
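The cloaking logic being described is tiny. A minimal sketch (the parameter name `c`, the campaign id, and the URLs are all hypothetical stand-ins):

```python
# Hypothetical affiliate destination; real spammers rotate these.
AFFILIATE_URL = "https://www.amazon.com/dp/B00EXAMPLE?tag=spammer-20"

def serve_blog_page(query_params):
    """Scrub-site behavior: spammed visitors arrive with the campaign id
    and get redirected to the affiliate link; everyone else (including
    Amazon's reviewers) sees only an innocent-looking blog post."""
    if query_params.get("c") == "campaign1":
        return ("redirect", AFFILIATE_URL)
    return ("page", "<html>...an ordinary blog post...</html>")
```

This is why the referrer Amazon sees is always the clean blog page, never the spammed platform.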
That might help for sites with a small userbase. However, these are large platforms with large clusters of users who can be easily contacted. The spammers will just invest in a captcha-cracking service, which runs at about $3 per 1,000 solved captchas. If you ban datacenter IPs, use DNSBLs, and scan for proxies, they'll switch to rented DSL lines.
On the other end are the users. If you ban proxies, fingerprint their ports, and ask them to solve a captcha every time they hit the submit button, you'll create some serious animosity. Stopping spam means having to invest, coming up with complicated algorithms, and you still might accidentally ban innocent users, who will blog about it or tell their friends.
The real question is... does it matter that affiliate links are being posted if it takes a guide for everyday users to even notice them? My niece doesn't know what an affiliate link is, and neither do most users. I mean, if there's a 100% effective method to stop it, implement it. However, should you invest money and dev time into a problem that nobody has solved to date...
Manual captcha-breaking services are available for as little as $10 per 1,000 captcha breaks. Most bots use them: the bot passes the captcha image to the service, someone at the other end types in the characters, and all of this happens within seconds. Search for DeathByCaptcha etc.
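From the bot's side the whole integration is a thin wrapper. A sketch of the flow, with the vendor's HTTP API replaced by a stand-in callable since real endpoints and signatures vary by service:

```python
import base64

def solve_captcha(image_bytes, submit_to_service):
    """Upload a captcha image to a manual-solving service and block until
    a human worker at the other end types back the answer.
    'submit_to_service' is a placeholder for the vendor's API call."""
    payload = base64.b64encode(image_bytes).decode("ascii")
    # In practice this HTTP round trip, plus the human, takes seconds.
    return submit_to_service(payload)
```

The point is that from the platform's perspective, a captcha solved this way is indistinguishable from one solved by a legitimate user.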
It's fun to imagine large social networks of bots which are indistinguishable (from Pinterest's/Facebook's/Twitter's P.O.V.) from human social networks. Here, it could be a 1-man spamming operation, but you can imagine government-scale astroturfing.