On a related note, though, is there a way to limit CPU time on the headless chrome API?
We're running our PDF generator on Docker images, and we built https://github.com/RealImage/proxywall to run as a 'sidecar' container that sits in front like nginx / Apache would - it rejects requests that don't match certain criteria. If your business model supports it, you might want a whitelist of domains that each account can take shots of.
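To make the per-account whitelist idea concrete, here's a minimal sketch of the check such a sidecar could apply before forwarding a request. The allowlist contents and account names are made up for illustration; proxywall's actual rule format may differ.

```python
# Minimal per-account domain allowlist check (illustrative, not proxywall's API).
from urllib.parse import urlparse

# Hypothetical mapping: account -> set of hosts it may screenshot.
ALLOWLIST = {"acme": {"acme.com", "docs.acme.com"}}

def allowed(account: str, url: str) -> bool:
    """Reject any URL whose host isn't on the account's allowlist."""
    host = urlparse(url).hostname or ""
    return host in ALLOWLIST.get(account, set())
```

A request for an off-list host (say, an internal metadata endpoint) gets rejected before headless Chrome ever sees it.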
You can use cgroup limits for any process. Just set a high period and the quota you want: https://access.redhat.com/documentation/en-us/red_hat_enterp...
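As a sketch of that, with cgroup v2 the limit is a single `cpu.max` file containing "<quota_us> <period_us>". The helper below assumes a v2 hierarchy at /sys/fs/cgroup, root privileges, and an arbitrary group name; it's illustrative, not a hardened implementation.

```python
# Sketch: cap a process at a CPU percentage via cgroup v2's cpu.max.
# Assumes /sys/fs/cgroup is a cgroup v2 mount and we run as root;
# the group name "chrome-limit" is arbitrary.
import os

CGROUP_ROOT = "/sys/fs/cgroup"

def cpu_max_value(quota_pct: float, period_us: int = 100_000) -> str:
    """Format a cpu.max string: '<quota_us> <period_us>'.
    quota_pct=50 with the default period yields '50000 100000'."""
    quota_us = int(period_us * quota_pct / 100)
    return f"{quota_us} {period_us}"

def limit_pid(pid: int, quota_pct: float, group: str = "chrome-limit") -> None:
    """Create the group, set its CPU quota, and move the process into it."""
    path = os.path.join(CGROUP_ROOT, group)
    os.makedirs(path, exist_ok=True)
    with open(os.path.join(path, "cpu.max"), "w") as f:
        f.write(cpu_max_value(quota_pct))
    with open(os.path.join(path, "cgroup.procs"), "w") as f:
        f.write(str(pid))
```

Point the headless Chrome PID at `limit_pid` and it can't burn more than its quota per period, no matter what the rendered page does.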
That was my first thought when I read the chat. It just seems stupid to even remotely suggest something like this.
I'm sure there's pacific binding precedent about using an overabundance of caution in writing about any jokes you might have made to any other person, maybe especially in a legal system where even the risk of legal proceedings can cripple you for the rest of your life, but the guy literally said "No, don't. I was just joking" a few moments later.
There are lots of things you could do, but one idea is to have your service serve cached images for pages requested above a threshold in a given timeframe. That would deter this kind of abuse with minimal impact on genuine users.
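A rough sketch of that threshold-then-cache behaviour, with in-memory state and made-up constants (a real service would persist this in Redis or similar):

```python
# Sketch: once a URL crosses a request-rate threshold, serve the cached
# shot instead of re-rendering. WINDOW/THRESHOLD values are illustrative.
import time
from collections import defaultdict, deque

WINDOW = 60      # seconds of history to consider
THRESHOLD = 10   # requests per window before falling back to cache

_hits = defaultdict(deque)   # url -> recent request timestamps
_cache = {}                  # url -> last rendered result

def screenshot(url, render, now=None):
    """Render the URL, or return the cached result if the URL is 'hot'."""
    now = time.time() if now is None else now
    q = _hits[url]
    q.append(now)
    while q and now - q[0] > WINDOW:
        q.popleft()
    if len(q) > THRESHOLD and url in _cache:
        return _cache[url]          # hot URL: reuse the cached shot
    result = render(url)
    _cache[url] = result
    return result
```

Genuine users rarely hammer one URL, so they almost always get a fresh render; an abuser hitting the same page in a loop just gets the same cached bytes back.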
1. Use cgroups to limit CPU usage per process.
2. Block Coinhive.
3. Implement captchas.
4. API throttling and a one-minute cache per URL.
5. Disallow one IP from creating more than X accounts per day.
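The last item in the list above can be sketched as a sliding daily window per IP. The limit and the in-memory store are illustrative; in production you'd back this with Redis or the database.

```python
# Sketch: cap account sign-ups per IP per day (illustrative values).
import time
from collections import defaultdict, deque

MAX_ACCOUNTS_PER_DAY = 3
DAY = 86_400  # seconds

_signups = defaultdict(deque)   # ip -> recent sign-up timestamps

def may_create_account(ip: str, now=None) -> bool:
    """Allow a sign-up only if this IP is under its daily budget."""
    now = time.time() if now is None else now
    q = _signups[ip]
    while q and now - q[0] > DAY:
        q.popleft()                 # drop sign-ups older than a day
    if len(q) >= MAX_ACCOUNTS_PER_DAY:
        return False
    q.append(now)
    return True
```

The same window structure works for the throttling item too; only the key (IP vs. account) and the window length change.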
“Pacific”? Did you mean “specific” or “pacifistic”?
peaceful in character or intent.
Finding a fine line is probably what's required. Letting your tool suffer DDoS from known exploit vectors harms all the legitimate users for the sake of some bot-writer's pleasure.
Arguably, this is the basis of many laws that govern our lives. Lots of rules are in place and enforced because individuals found ways to ruin things for everyone else.