Request rate is definitely one thing you can limit, but it's tricky when attackers potentially control large numbers of IP addresses.
There's an annoying triangle here between preserving privacy (i.e., unlinkability), machine-independence, and "working well for good traffic with limited resources, while still blocking attackers with substantially more resources". Ideally you could have all three (sacrifice zero); I'd be happy if the state of the art even managed "sacrifice only one".
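For the "limited resources" corner, per-client request limiting usually means something like a token bucket. A minimal sketch (keyed on IP address, which is exactly the part that breaks down once an attacker rotates through many addresses; the rates here are arbitrary):

```python
import time
from collections import defaultdict

class TokenBucket:
    """Per-client token bucket: refill `rate` tokens/sec, burst up to `capacity`."""

    def __init__(self, rate=5.0, capacity=10.0):
        self.rate = rate
        self.capacity = capacity
        self.tokens = defaultdict(lambda: capacity)  # client -> remaining tokens
        self.last = defaultdict(time.monotonic)      # client -> last refill time

    def allow(self, client_ip):
        """Return True if this request fits under the client's budget."""
        now = time.monotonic()
        elapsed = now - self.last[client_ip]
        self.last[client_ip] = now
        # Refill proportionally to elapsed time, capped at capacity.
        self.tokens[client_ip] = min(
            self.capacity, self.tokens[client_ip] + elapsed * self.rate
        )
        if self.tokens[client_ip] >= 1.0:
            self.tokens[client_ip] -= 1.0
            return True
        return False
```

Each new IP gets its own full bucket, which is why a botnet with thousands of addresses sails right through while a NATed office full of legitimate users gets throttled.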
Update: I attempted every contact avenue available to me (email, contact forms, and phone), and no one I spoke with seemed to know what was happening. I ended up compartmentalizing my accounts (removing my personal ID from each), set up new billing accounts, and had some moderate success. By moderate I mean:
1. I would get an email saying the restrictions had been lifted because no violations of the T&Cs had been found.
2. Lo and behold, all the apps would be running again, so I could re-set up paid accounts for the App Engine apps, etc.
3. About three hours would pass and each app would again be suspended. A little while later I started hitting the free-plan thresholds and was unable to do anything about it. I also couldn't contact customers, as I had no access to the datastore.
This repeated for a few days and seems to have resolved itself. No explanations from Google. No apologies.
As a result I have absolutely zero confidence in their competence at supporting cloud based businesses and am currently working my ass off to migrate away.
If I'm not logged into a site that I regularly use, I'm probably not logged into my email, either. To log into my favorite site with Passwordless, I have to log into my email as well. With my password. One login for the price of two, and I'm still using a password.
I found a demo via this old forum thread from August.
Obviously there are privacy concerns. That being said, this looks like a boon for anyone interested in bot detection, as you can periodically challenge your users' humanity without getting too much in their way. Nice one, Google.
From the thread:
Implemented it successfully for a website. I have to say, it works great!
It also checks whether HTML pages are changed at runtime and how many times you reload the page hosting the captcha. When it thinks you are a bot, a captcha pops up; once entered, it is checked against Google's servers and, if correct, a hidden input is filled in. When the user submits the form, the filled-in captcha code is verified again.
"Since it goes through Google's servers, they can verify a lot of things. Whether you are logged in currently to google, have you been logged in the past, verify your activity on your IP address, etc. Even if you signed in from the same ip or ip range like a year ago, they can still tell it's you based on your previous actions."
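The flow the quote describes (a hidden input filled client-side, then re-checked by your server) corresponds to reCAPTCHA's `siteverify` endpoint. A minimal server-side sketch; the endpoint and parameter names are reCAPTCHA's standard ones, but treat the surrounding code as illustrative:

```python
import json
import urllib.parse
import urllib.request

VERIFY_URL = "https://www.google.com/recaptcha/api/siteverify"

def build_payload(secret_key, captcha_response, remote_ip=None):
    """Form-encode the parameters siteverify expects."""
    params = {"secret": secret_key, "response": captcha_response}
    if remote_ip:
        params["remoteip"] = remote_ip  # optional, lets Google cross-check the IP
    return urllib.parse.urlencode(params).encode()

def verify_recaptcha(secret_key, captcha_response, remote_ip=None):
    """POST the token the browser submitted; True means Google vouches for it."""
    payload = build_payload(secret_key, captcha_response, remote_ip)
    with urllib.request.urlopen(VERIFY_URL, data=payload, timeout=10) as resp:
        return bool(json.load(resp).get("success"))
```

The `captcha_response` value is whatever the hidden form field (`g-recaptcha-response`) contained when the user submitted, which is why the verification has to happen server-side: the client-side fill-in alone proves nothing.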
The normal captchas have been getting increasingly user-hostile over time. The only limit on them is what users are willing to put up with, and now that Google's most profitable users don't get them that's less of an issue. In fact, having nearly unsolvable captchas is actually an advantage because it encourages users to let Google track them.
No, this is likely done with machine learning trained on real vs. fraudulent user data, so they are going to be watching for much more subtle features than just being in a different region. Tons of people travel all over the world; far fewer manually reset their MAC addresses or use datacenter ISPs.
Why does Google bother with so many minor script-related security enhancements in Chrome that will barely affect anyone, such as extra HTTP headers adding bonus layers of XSS protection in case a site's developers weren't smart enough to cover every injection angle? Meanwhile, they let random untrustworthy developers abuse their extension-installation API to rack up over 750,000 installs of a mysterious, shady, useless browser extension that inexplicably asks for permission to read and write the DOM on every page of every site the user will ever visit, and which very obviously exists only to do exactly the kinds of things XSS prevention was conceived to stop.
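For reference, the "extra HTTP headers" in question are defense-in-depth response headers like Content-Security-Policy. A minimal sketch using Python's http.server (the policy values are illustrative examples, not a recommended configuration):

```python
from http.server import BaseHTTPRequestHandler, HTTPServer

SECURITY_HEADERS = {
    # Restrict where scripts may load from; blocks most injected <script> tags
    # even when the markup itself was successfully injected.
    "Content-Security-Policy": "default-src 'self'; script-src 'self'",
    # Stop browsers from MIME-sniffing responses into executable types.
    "X-Content-Type-Options": "nosniff",
}

class Handler(BaseHTTPRequestHandler):
    def do_GET(self):
        self.send_response(200)
        for name, value in SECURITY_HEADERS.items():
            self.send_header(name, value)
        self.send_header("Content-Type", "text/html")
        self.end_headers()
        self.wfile.write(b"<h1>hello</h1>")

if __name__ == "__main__":
    HTTPServer(("127.0.0.1", 8000), Handler).serve_forever()
```

The irony the comment is pointing at: these headers only constrain scripts arriving through the page, while an extension with host permissions on every site bypasses them entirely.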
I'd personally love them to do that. I guess the arguments are basically the double-edged sword of dictatorship. You have a paradise if the ruler is wise, just, and benevolent, as you can escape pretty much all of the stupid coordination problems that pester democracies - but on the other hand you risk getting totally screwed over if the dictator goes evil (which can, and over time probably always will, happen when a good dictator is succeeded by a bad one).
The problem is that when every website does the same thing, it becomes impossible to use the internet anonymously, while actual spammers can just spend a few bucks on IP-rotation services. IP discrimination causes far more harm than good.
McKenna is more famous for his excessive use of psychedelics.
Terence McKenna: "There are only about 1,000 of these GBMs a year, so it's a rare disease. I never won anything before - why now?" Like everybody else, he suspected a lifetime of exotic drug use may have been to blame. 
Interesting thought: Since those bitcoins were part of the first transaction, and since the transaction history for those coins would be saved and hence documentable, is it possible that those particular coins might become "collector's items"? And perhaps "worth" more than the "face value" of 10 BTC, to the right collector?