

Ask HN: How to handle external net access from freely available site? - gpjt

TL;DR: we're offering free accounts on our servers that allow people to run Python and Bash stuff.  What kind of restrictions (if any) should we put on those accounts to prevent malicious use for which we might ultimately be found liable?

The details:

We have a site (http://www.pythonanywhere.com/) that provides in-browser command-line (Python and Bash) access for paying customers and also for free users.  One of the things that people can do from the command line is access sites elsewhere on the Internet.  Over the last couple of weeks we've had some misbehaviour on the part of a few of our free users -- hammering sites for SEO purposes, packet flooding, messing around with IRC servers, that kind of thing.

Responding to these was pretty easy, as they were simple TOS violations, so we could warn or ban the offenders as appropriate.  But we're concerned about what might happen, and what our liability might be, if someone started using us for more damaging attacks.

Most other hosting sites only offer paid plans.  One advantage of this model is that in general they know who their customers are, so if someone misbehaves, they can tell any investigating authorities who it was -- and of course it's also easier to stop them from starting another account and doing the same thing again.  So as an interim measure, we've blocked external internet access for our free users except for SSH to GitHub (and a couple of similar sites) and access via a proxy with a broad whitelist for HTTP and HTTPS.

But we're worried that this might be too restrictive.  Managing a whitelist feels like it might become a big job over time, and it will be annoying for our users -- we'll never be able to keep up with the cool new APIs other people are exposing.  But dropping the whitelist would be risky too, as it would make it all too easy for people to DoS small websites.

Any advice on how we can handle this better would be very much appreciated!
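(For concreteness, the interim whitelist check might look something like the sketch below: the proxy allows a free user's request only if the target host is on an allowlist.  The host names in the allowlist are hypothetical examples, not our actual list.)

```python
# Sketch of a proxy-side whitelist check for free users' outbound
# HTTP/HTTPS requests. The allowlist contents are illustrative.
from urllib.parse import urlparse

ALLOWED_HOSTS = {
    "github.com",
    "api.github.com",
    "pypi.org",  # hypothetical entries for illustration
}

def is_allowed(url: str) -> bool:
    """Return True if a free user's outbound request should be proxied."""
    host = urlparse(url).hostname or ""
    # Allow exact matches and subdomains of whitelisted hosts.
    return any(host == h or host.endswith("." + h) for h in ALLOWED_HOSTS)
```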
======
dbarlett
What about adaptive blacklisting? Something like fail2ban or mod_security, but
on your proxy, facing inward.

~~~
gpjt
Interesting idea. I can't quite see how it would work, though -- if I
understand them correctly, fail2ban and mod_security rely on knowing what a
suspicious URL looks like. We could obviously keep an eye on someone
hammering, say, "/login" on an external site -- but across the broader
internet, there could be any number of URLs that it would be a Bad Thing
to hammer.

Am I missing something?

~~~
dbarlett
Normally, mod_security parses incoming traffic and blocks suspicious activity
based on third-party rules [1]. Since you're using Squid, you can monitor
outgoing traffic instead. Malicious traffic would get dropped instead of being
proxied. Since you control the source servers, you could tarpit and/or ban the
user.

Snort + Snortsam [2] or fwsnort [3] would do the same thing for non-HTTP
traffic.
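A minimal sketch of the inward-facing adaptive-blacklist idea, in Python: tail the proxy's access log, count each user's requests per target host in a sliding window, and flag users who hammer one host. The window size and threshold are illustrative assumptions, not mod_security or fail2ban rule syntax.

```python
# Adaptive blacklist sketch: flag users who hit one external host
# too often within a sliding time window. Limits are hypothetical.
from collections import defaultdict, deque
import time

WINDOW_SECONDS = 60
MAX_HITS = 100  # hypothetical per-user, per-host limit

hits = defaultdict(deque)  # (user, host) -> recent request timestamps
banned = set()

def record_request(user: str, host: str, now: float = None) -> bool:
    """Record one proxied request; return True if the user is now banned."""
    now = time.time() if now is None else now
    q = hits[(user, host)]
    q.append(now)
    # Drop timestamps that have aged out of the window.
    while q and now - q[0] > WINDOW_SECONDS:
        q.popleft()
    if len(q) > MAX_HITS:
        banned.add(user)
    return user in banned
```

In practice you'd feed this from the Squid access log and hook the ban set into whatever mechanism suspends the account, but the counting logic is the core of it.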

[1] [https://www.owasp.org/index.php/Category:OWASP_ModSecurity_C...](https://www.owasp.org/index.php/Category:OWASP_ModSecurity_Core_Rule_Set_Project)

[2] <http://www.snortsam.net>

[3] <http://cipherdyne.org/fwsnort/>

