Of course, some bots may use full browser engines (such as those provided by acceptance testing frameworks) to try to get around this, and there will always be cheap human labor. But ultimately, anti-spam is an arms race, and simple tactics like this get rid of most unwanted agents.
What worked in the end was a points system for spammy behavior: first post has a URL in it? +1 point. User fills out the LinkedIn field on their profile? +1 point (seriously, none of our legit users did this...). User posts a word on the blocklist (viagra, cialis, cvv2, etc.)? +1 point. User-Agent is IE? +1 point (we're a Mac site). After a certain number of points, the user was banned and all their generated content deleted; after a certain number of posts without triggering a ban, they were greenlighted. Spammers quickly noticed their posts disappeared instantly and left the site.
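A minimal sketch of how a scoring scheme like that might look; the thresholds, field names, and blocklist below are made up for illustration, not the site's actual values:

```python
# Hypothetical sketch of the points system described above; field names,
# thresholds, and the blocklist are illustrative, not the site's real values.
import re

BLOCKLIST = {"viagra", "cialis", "cvv2"}
BAN_THRESHOLD = 3       # points at which a user is banned
GREENLIGHT_POSTS = 10   # posts without a ban before a user is trusted

def spam_points(user, post_text, user_agent):
    points = 0
    if user["post_count"] == 0 and re.search(r"https?://", post_text):
        points += 1     # URL in the very first post
    if user.get("linkedin"):
        points += 1     # filled out the LinkedIn profile field
    if set(re.findall(r"[a-z0-9]+", post_text.lower())) & BLOCKLIST:
        points += 1     # post contains a blocklisted word
    if "MSIE" in user_agent or "Trident" in user_agent:
        points += 1     # IE user agent (on a Mac site)
    return points

def handle_post(user, post_text, user_agent):
    if user["post_count"] >= GREENLIGHT_POSTS:
        return "accept"                       # greenlighted: skip the checks
    user["points"] += spam_points(user, post_text, user_agent)
    user["post_count"] += 1
    if user["points"] >= BAN_THRESHOLD:
        return "ban"                          # ban and delete their content
    return "accept"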
Umh, anything that ends up in the POST request will be reproduced by a bot. When I implement screen-scraping modules I don't even look at the page itself, just at the Network tab of the Chrome Dev Tools.
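To illustrate the point: once the request has been copied out of the Network tab, replaying it, honeypot fields and all, is only a few lines. The URL and field names here are placeholders:

```python
# Replaying a form POST captured from the Network tab; the URL and field
# names are placeholders, not a real site.
import requests

payload = {
    "username": "bot",
    "comment": "spam text here",
    "website": "",   # honeypot field, sent exactly as the browser sent it
}
resp = requests.post("https://example.com/comments", data=payload)
print(resp.status_code)
```

A bot built this way never renders the page, so anything that only exists in the visual layer (hidden fields included) gets copied through untouched.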
What I think has the potential to remove the need for conscious CAPTCHA solving is what Google is supposedly doing here: machine learning on behavioral patterns in the user's interaction with the form (instead of just with the CAPTCHA).
It seems like you had that backwards -- hope that clears it up.
I think that's where the issue lies: these tools do different things. Honeypot fields defend against general-purpose bots. CAPTCHAs defend against bots written specifically for your site as well, but they add more friction, so they're only worth using when targeted bots are actually a problem.
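For anyone unfamiliar with the technique, a honeypot field is just a form input hidden from humans that naive bots fill in anyway. A minimal server-side check might look like this; the Flask route and the "website" field name are made up for illustration:

```python
# Minimal honeypot check, sketched with Flask; the "website" field name
# and the route are assumptions for the example.
from flask import Flask, request, abort

app = Flask(__name__)

# The form includes <input name="website" style="display:none">, which real
# users never see or fill in; many general-purpose bots fill every field.
@app.route("/comments", methods=["POST"])
def post_comment():
    if request.form.get("website"):
        abort(400)                 # honeypot filled in: treat as a bot
    comment = request.form.get("comment", "")
    # ... store the comment ...
    return "ok"
```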