I don't think you understand. If I decide, as the owner of a site, that I don't want you scraping my business, and I block you, then I am in that position. I'm automatically in that position because I can implement the blocks necessary to uphold the terms of use of my business, or I can block you for arbitrary reasons. Maybe you are hammering my server. Maybe I'm in a bad mood this morning and don't like that you're using Python.
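
To be concrete, here's a minimal sketch of what that last kind of block can look like. It assumes a Flask app purely for illustration, not anyone's actual setup:

    # Reject clients whose User-Agent looks like a default Python HTTP
    # library. Trivial to write, and (as the reply below notes) just as
    # trivial to evade.
    from flask import Flask, abort, request

    app = Flask(__name__)

    BLOCKED_UA_SUBSTRINGS = ("python-requests", "python-urllib", "aiohttp")

    @app.before_request
    def block_unwanted_clients():
        ua = request.headers.get("User-Agent", "").lower()
        if any(s in ua for s in BLOCKED_UA_SUBSTRINGS):
            abort(403)  # my site, my rules

    @app.route("/")
    def index():
        return "Welcome."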

I can unilaterally decide whether or not you use my business, in any way, shape, or form, even if I just don't like you, as long as I don't violate any laws (anti-discrimination law, etc.).


I absolutely understand; it's just not hard to make scraper traffic appear as (or actually be) legitimate browser traffic, and/or to distribute it across numerous IPs. Other technical controls all have trivial circumvention methods. There is legal precedent (at least in the US) suggesting that scraping public information may be permissible under law (see hiQ Labs v. LinkedIn). Scrapers only ever need to succeed once.
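
For a sense of how little effort that takes, here's a minimal sketch. The User-Agent string and proxy addresses are illustrative placeholders, not real infrastructure:

    # A scraper that presents a browser User-Agent and spreads its
    # requests across a pool of proxies. Both values below are
    # placeholders for illustration only.
    import random
    import requests

    BROWSER_UA = ("Mozilla/5.0 (Windows NT 10.0; Win64; x64) "
                  "AppleWebKit/537.36 (KHTML, like Gecko) "
                  "Chrome/120.0.0.0 Safari/537.36")

    PROXIES = ["http://proxy1.example:8080", "http://proxy2.example:8080"]

    def fetch(url):
        proxy = random.choice(PROXIES)  # distribute traffic across IPs
        return requests.get(
            url,
            headers={"User-Agent": BROWSER_UA},  # looks like Chrome, not python-requests
            proxies={"http": proxy, "https": proxy},
            timeout=10,
        )

And to actually *be* browser traffic, you drive a real headless browser instead of faking headers.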

Under these circumstances, how can a website operator feel any sense of practical control over scrapers?


This is kind of a silly argument. If a physical business trespasses me for shoplifting, I can just put on a disguise, go back, and shoplift more. Why do businesses think they have control over shoplifters?


This is kind of a silly argument too: for every item you shoplift, do you ask whether you can take it without paying, and then get granted permission? A scraper does exactly that with every request, and the server answers it.