It started as a volunteer project, and some projections put savings at around 10% of the total budget once it becomes mandatory in April.
It would make it more useful for flagging up potential stories, as well as researching stories journalists are already writing.
Disclosure: I work for a company that provides real-time data to journalists for story discovery, and I know we'd certainly be interested.
Carlos, great job, congratulations!! I study issues at the intersection of technology and corruption here in Berkeley, and I have interesting evidence from contacts who have experienced post-technology change in government. Peru has (so much) potential in this area. If you ever need help, I'm happy to support you!
Thank you! In Peru there are already several groups of journalists who have partnered with developers to make interesting data journalism projects. There are Ojo Publico, Convoca and IDL Reporters. But all the same, it's not enough; there's so much to do!
Or maybe that can be implemented in Manolo's GUI. It should not be difficult as it is based on Django.
Probably world changing, when considering that even semi-technical folks can cook up tools to dig into things like this.
I know this tool was built by a developer, but Scrapinghub has a web UI for making scrapers.
I have personally used Scrapy in the past, I find it to be a great tool.
Congratulations on your work!
“You can’t visit 160,000 people,” she notes. “But you can easily interrogate 160,000 records.”
Lobbyists have to follow registration procedures, and their official interactions and contributions are posted to an official database that can be downloaded as bulk XML:
Could they lie? Sure, but in the basic analysis that I've done, they generally don't feel the need to...or rather, things that I would have thought that lobbyists/causes would hide, they don't. Perhaps the consequences of getting caught (e.g. in an investigation that discovers a coverup) far outweigh the annoyance of filing the proper paperwork...having it recorded in an XML database that few people take the time to parse is probably enough obscurity for most situations.
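To give a sense of how little code "taking the time to parse" actually requires: here's a sketch using Python's standard library. The record layout below is entirely made up for illustration; the real Senate LDA bulk XML uses a different, more elaborate schema.

```python
import xml.etree.ElementTree as ET

# Hypothetical filing records standing in for the real bulk XML.
xml_data = """
<Filings>
  <Filing ID="1" Year="2015" Amount="50000">
    <Client ClientName="Acme Corp"/>
    <Registrant RegistrantName="Lobby LLC"/>
  </Filing>
  <Filing ID="2" Year="2015" Amount="120000">
    <Client ClientName="Widget Inc"/>
    <Registrant RegistrantName="Lobby LLC"/>
  </Filing>
</Filings>
"""

root = ET.fromstring(xml_data)
filings = [
    {
        "client": f.find("Client").get("ClientName"),
        "registrant": f.find("Registrant").get("RegistrantName"),
        "amount": int(f.get("Amount")),
    }
    for f in root.findall("Filing")
]

# Once it's a list of dicts, aggregation is trivial.
total = sum(f["amount"] for f in filings)
print(total)  # 170000
```

From there it's a short hop to grouping by registrant, sorting by amount, or dumping to CSV for a spreadsheet.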
There's also the White House visitor database, which does have some outright admissions, but still contains valuable information if you know how to filter the columns:
But it's also a case (as it is with most data) where having some political knowledge is almost as important as being good at data-wrangling. For example, it's trivial to discover that Rahm Emanuel had few visitors despite his key role, so you'd have to be able to notice that and then take the extra step to find out his workaround:
And then there are the many bespoke systems and logs you can find if you do a little research. The FDA, for example, has a calendar of FDA officials' contacts with outside people...again, it might not contain everything but it's difficult enough to parse that being able to mine it (and having some domain knowledge) will still yield interesting insights: http://www.fda.gov/NewsEvents/MeetingsConferencesWorkshops/P...
There's also OIRA, which I haven't ever looked at but seems to have the same potential of finding underreported links if you have the patience to parse and text mine it: https://www.whitehouse.gov/omb/oira_0910_meetings/
And of course, there's just the good ol FEC contributions database, which at least shows you individuals (and who they work for): https://github.com/datahoarder/fec_individual_donors
This is not to undermine what's described in the OP...but just to show how lucky you are if you're in the U.S. when it comes to dealing with official records. They don't contain everything perhaps but there's definitely enough (nevermind what you can obtain through FOIA by being the first person to ask for things) out there to explore influence and politics without as many technical hurdles.
Do you know what they are required to report? For example, if they have a 'social' dinner with a lobbyist, must that be reported? Are the requirements the same across the Executive Branch? All three branches?
Both the House and the Senate have gift travel databases (travel that's reimbursed by an outside group, such as a charter flight to visit an oil drilling rig).
The branches differ in how such things are reported...this was pretty obvious recently when Justice Scalia died at a ranch and people started wondering who paid for the trip...take one look at how these forms are supplied and it should be pretty obvious why we don't normally hear about SCOTUS relationships until something really weird happens.
This NYT editorial "So Who's a Lobbyist?" has a nice rundown of the ways that people who would generally be considered a lobbyist can escape disclosure requirements: http://www.nytimes.com/2012/01/27/opinion/so-whos-a-lobbyist...
Still, it's useful to be able to parse the dataset in an attempt to find what's missing...something that is difficult to do conceptually unless you're dealing with the actual dataset on your own system.
I live in the US and am privileged with the level of transparency that exists, but it's still not necessarily enough. Similar issues are present with the clunky nature of government websites and databases and so I think we're in agreement that it's not even close to the potential of what it could be.
Thanks for sharing all the links and information!
Did you mean omissions?
Web scraping is a really powerful tool for increasing transparency on the internet, especially given how transient online data is.
My own project, Transparent, has similar goals.
For developers and managers out there, do you prefer to build your own in-house scrapers or use Scrapy or tools like Mozenda instead? What about import.io and kimono?
I'm asking because a lot of developers seem adamant about not using web scraping tools they didn't develop themselves, which seems counterproductive because you're taking on technical debt for an already solved problem.
So developers, what is the perfect web scraping tool you envision?
And it's always a fine balance between people who want to scrape Linkedin to spam people, others looking to do good with the data they scrape, and website owners who get aggressive and threatening when they realize they are getting scraped.
It seems like web scraping is a really shitty business to be in and nobody really wants to pay for it.
( The data that I'm mining is published here: http://www.bcra.gov.ar/Estadisticas/estprv010000.asp )
In this case, some scripts using Beautiful Soup were enough to get the job done, but I was completely unaware of Scrapy. It seems like a fantastic tool; had I known about it, I probably would have used it.
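For anyone curious what "some scripts using Beautiful Soup" amounts to, here's a minimal sketch. The table snippet, its id, and the values are all invented; the real BCRA page is laid out differently, but the pattern is the same: parse the HTML, select the rows, pull out the cells.

```python
from bs4 import BeautifulSoup

# Hypothetical snippet of a rates table (parsed from a string here,
# so the sketch needs no network access).
html = """
<table id="rates">
  <tr><th>Entidad</th><th>Tasa</th></tr>
  <tr><td>Banco A</td><td>29.50</td></tr>
  <tr><td>Banco B</td><td>31.25</td></tr>
</table>
"""

soup = BeautifulSoup(html, "html.parser")
rows = []
for tr in soup.select("#rates tr")[1:]:  # skip the header row
    name, rate = (td.get_text(strip=True) for td in tr.find_all("td"))
    rows.append((name, float(rate)))

print(rows)  # [('Banco A', 29.5), ('Banco B', 31.25)]
```

In a real script you'd fetch the page with requests or urllib first; everything after that is the same.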
Web scraping is everywhere, even if it's not necessarily spoken openly about or acknowledged. The publicized perception of web scraping is fairly negative, but doesn't take into account the benefits of data used in machine-learning or democratized data extraction (as in the case of this article or for building public service apps like transportation notifications), or the simple realities of competitive pricing and monitoring the activities of resellers.
Researchers, academics, data scientists, marketers, the list goes on for those who use web scraping daily.
Glad you enjoyed the article! I'm hoping that more examples of ethical data extraction will start to turn the tide of public perception.
I completely accept how important scraping is as a data source, but that doesn't make it any more legal. It's in a space right now where only big companies can take unmitigated advantage of the tool, because it'd cost millions of dollars to successfully defend a CFAA suit.
I work for Scrapinghub as well and try to understand the law around this. I can help with some pointers to why I think some web scraping isn't illegal… there are of course some limits to this.
When the data scraped is "is publicly available on the Internet, without requiring any login, password, or other individualized grant of access", the Eastern District Court of Virginia in Cvent vs Eventbrite (https://casetext.com/case/cvent-inc-v-eventbrite) ruled one could not be deemed to be exceeding unauthorized access.
There are two ways, that I know of, that courts have ruled you can exceed your authorization:
- When the site owner has contacted you and removed your authorization in a written manner, as happened in Craigslist vs 3taps.
- By accepting the terms of service and agreeing against scraping. You have to do this through a "clickwrap" ToS, rather than a "browsewrap". You can read about the differences here: https://termsfeed.com/blog/browsewrap-clickwrap/
As a matter of policy, we don't scrape any site with a ToS with clear anti-scraping language and which forces us to create an account or "constructively agree" as part of the use of the site.
Any user wishing to revoke authorization for anyone using our platform can make an abuse report on our site – we tend to handle these within 24 hours and haven't had a single claim go further than this stage, as we aim to be reasonable and look for a way to provide value to both sides.
Most websites have anti-scraping boilerplate in their ToU. I'm pretty sure that's in the "customized" ToU I got from LegalZoom. So you're basically saying that if the data is not behind a login and doesn't require you to fill any forms that contain either a checkbox or nearby language that indicates submitting the form constitutes acceptance of the ToU, you'll scrape it, even if the ToU explicitly bans scraping.
What do you plan to do when you do receive a C&D from someone that claims you've agreed to their browsewrap ToU? Are you going to argue that your use is not unauthorized since the data is public?
I guess I assume you'll comply with any C&D since you state that receipt of a written removal constitutes a revocation of authorization. However, I don't believe that's enough for some. Check out QVC v. Resultly, where QVC sued on a CFAA claim based on browse-wrap agreements (they lost; Resultly asserted permission was granted by robots.txt and the Court agreed).
Beyond just CFAA claims, there are copyright and trademark claims attached to most scraping cases. Those have unfortunately succeeded most of the time. The most egregious is Ticketmaster v RMG, where it was ruled that RMG had violated Ticketmaster's copyright by downloading the page (specifically, making a copy of the page contents in RAM and extracting only non-copyrightable content, and then discarding the complete copy; in short, downloading the page). Facebook v. Power Ventures is also pretty brutal.
I hope Scrapinghub is well-funded enough to trailblaze some space in the law here, because we really need it.
I'm not sure Scrapinghub is funded externally; I couldn't find anything on their valuation or revenues, so I assume they are bootstrapped.
I do not discount any of what you wrote, but a lot of it is imagined dangers; Scrapinghub would be immune to such cases unless they sided with their customers the way 3taps did. 3taps did not stop scraping for their client PadMapper. I don't think Scrapinghub is willing to put their neck out for someone paying $20/month to scrape Craigslist. In fact, those are the shittiest segments of this market imo: the bottom feeders who demand excessively unrealistic things from scraping, treating it as a magical black box that will solve their problems.
I wasn't doing anything egregious. The product I offered did not compete with their products; it actually made it easier for the consumer to spend money with them. The data I was gathering is mere factual data and is not subject to copyright (though, as in Ticketmaster v. RMG, this alone will not protect from copyright infringement claims). Their site is the single place that this factual data can be found.
If I were to actually dispute this company's claims and refuse to comply with their C&D, they would sue me. This would've cost me millions of dollars in legal fees before the case was through, which is irrelevant to them but obviously well outside of my reach. There's a good chance they would've gotten an injunction legally forbidding me from continuing to offer my service almost immediately, so then I'd have been stopped from offering my product AND I would've had a pending lawsuit against me, which would've asserted some absurd dollar amount of damage, and, if Facebook v Power Ventures is any indication, there would've been a good chance that I would've been held personally liable for it.
It doesn't matter that their claims are all dependent on interpretation and grey area. What matters is that if you don't have $30-$40 million sitting around, you can't take the risk of a lawsuit from a big company. Gotta earmark $1-10 million for legal fees (depending on what kind of lawyers you get; the opposing party in my case has one of the most expensive law firms in the country); set aside $5-10 million in case you lose and have to pay damages; and set aside some chunk of money to continue to bear the cost of maintaining and running the business despite the legal pressure and despite the likelihood that you've been legally disallowed from selling your primary money maker pending resolution of the case, which will likely drag on for a minimum of 3-5 years, and up to 10 years is not unheard of. Gotta have the extra $20 mil+ so that you don't pour more than 50% of your net worth into something that is very possibly a losing battle.
My lawyer advises me that the various workarounds I devised could be construed as conspiracy and aiding and abetting, even though I would no longer be making any requests to the complainant's servers at all. This also wouldn't stop the complainant from suing me for past damages or to stop the practice they dislike, even if I'm doing it through means that totally obviate the need to access any of their servers.
If I were to continue operating, the only option would be to leave the U.S. entirely for a jurisdiction that doesn't enforce U.S. judgments (since I would be sued in the US and lose by default; my lawyer indicates that merely moving my company overseas is insufficient), and not return until the statute of limitations expires on the judgment that would get registered. Even this is not foolproof because the activity would have to be obviously and unequivocally legal in the new host jurisdiction so that the company's lawsuit in that jurisdiction wouldn't get anywhere, the jurisdiction would have to decline to enforce judgments on U.S. persons, and they'd have to be impervious to attempts by one of the world's largest companies to influence their legal system. I haven't found such a jurisdiction yet. Some are kinda-sorta close (but not really).
There's a distinct line between what you do with the data you scrape vs. writing the tools and code to build a script that will get you that data.
You don't arrest the kitchen knife company's CEO because someone used one of its knives to stab someone. And the fact that web scraping/crawler vendors have saturated the market is testament to the fact that you are overreacting.
I can imagine that the fear of facing a devastating legal battle might have permanently shifted your view on web scraping, but I see no valid basis for all web scraping services and vendors to shut down.
I also find it puzzling that you would act against your interests by continuing to openly talk in detail about a legal situation like this, because that PadMapper guy pretty much shut up as soon as the details were involved.
But feel free to provide more details showing that you were running a web scraping service or software company.
Big companies don't send polite emails asking you to pretty please stop. They just let their lawyers deal with the whole kit and kaboodle.
Scrapinghub's existence depends essentially on luck; first, that they won't get sued, and second, if they do get sued, that they'll get a sympathetic judge who will find that no contract was entered due to insufficient notice. That is not likely due to the nature of scrapinghub's operations (see Register.com v. Verio). The fact that some people are able to scrape and get away without being sued doesn't change the legal reality or the dubiousness of investing in a company with such a large risk profile.
The knife CEO analogy fails because CFAA claims are NOT about how the data is used. They are about the method used to obtain the data. The entity exceeding authorized computer access or accessing a computer without authorization -- in this case, that is scrapinghub, kimono, et al -- is the entity that has committed the violation of the CFAA. In your knife analogy, if the knife company had illegally acquired the metals used to manufacture the knife, it would be the culpable party, not the end user that bought its knives. The data that scrapinghub goes out, obtains, crafts and packages according to customer specifications ("make this page on craigslist a CSV file that auto-updates every 5 minutes") is the metal that the knife company goes out, obtains, crafts and packages according to customer specifications ("make this metal a sharp cutting utensil").
The person using the data that results from CFAA violations may be doing other illegal things, but in almost all of these types of cases, they're not violating the CFAA if they're not the ones accessing the computer that supplies the data.
I'm really not sure what you're arguing about anymore. The CFAA isn't a real law because the person gathering the data isn't necessarily the one putting it to use? I don't understand.
Got a source on that? Google scrapes all the time; that's how they index all the pages they discover.
The only real scenario I recall is 3taps vs Craigslist, but 3taps just kept scraping Craigslist through multiple proxies even after Craigslist banned their IP addresses.
Then there are airplane ticket websites scraping each other and getting into hot water.
Having said that, it's not as clear-cut a definition as you'd like to put it. The CFAA ruling came only because Craigslist felt directly threatened by PadMapper, which relied on 3taps.
This really is a shitty shitty business model. All that work 3taps did for you guys and they take all the heat? I don't know why 3taps didn't just comply, was PadMapper 100% of their business?
I definitely think that on the outset, it looks weird that 3Taps ended up taking PadMapper's heat, but I think that 3Taps wanted to become a generalized thing-as-a-service vendor. It's possible that PadMapper wasn't 3Taps's only customer for the CL feeds. As PadMapper wasn't contacting CL's computers without authorization, it makes sense that CL had to change the target to 3Taps. At that point, PadMapper would've seen that scraping CL meant a near-impossible legal challenge for a startup and been wise enough not to implement their own solution.
This is all just speculation, but I doubt that 3Taps stuck its neck out for the sole benefit of PadMapper.
I like this feature because it impedes the rapid nesting of conversations, and also allows the author time to edit his reply before anyone can address it.
The source is the CFAA, which makes it a crime and/or a tort to commit any "unauthorized" access to a computer system. Because authorization is not defined in the statute, it's a matter of interpretation whether or not one's use is unauthorized. Historically, judges have strongly disfavored scrapers.
Because there is a lot of grey area around what may or may not constitute "unauthorized" access to a computer system, if a company does bring a tort claim against you for accessing their system without authorization, you might actually win -- if you can afford the time and money to fight them for the minimum 3-5 years it'll take your case to resolve. This is hundreds of thousands in legal fees easy.
3taps eventually had no choice but to give up because they couldn't take the legal costs anymore, and Power Ventures tried to stick it out and ended up not only being held liable for $3 million in damages to Facebook's systems when no actual damage had occurred at all, but the veil was pierced and the founder held personally liable. It's obvious from the court documents that he was struggling to afford counsel, and companies must be represented by an attorney, so he didn't even have an option to try to represent himself.
>Google scrapes all the time. This is how they index all the pages it discovers.
Yes, Google's operations are, strictly speaking, illegal on various fronts. They depend heavily on automated access, which many sites they index explicitly forbid and thus Google is committing "unauthorized access" to these computer systems, and they also store complete copies of the site and the individual images displayed on the site, virtually all of which are protected by copyright, and all of which constitutes flagrant violation of copyright law.
If someone did bring a CFAA claim against Google for this (which no one would, because Google is one of the wealthiest companies in the world, and it'd therefore cost tens of millions to sue them), Google would likely argue that robots.txt is the only authorization it is obligated to observe, which may or may not be an effective argument. Google also makes no guarantees about the extent to which it obeys robots.txt; it's a way to signal your desires to Google, which it may or may not honor.
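Incidentally, for anyone writing a crawler, honoring robots.txt takes almost no code; Python ships a parser in the standard library. A minimal sketch (the rules and the crawler name here are made up, and parse() takes the file's lines directly, so no network access is needed):

```python
from urllib.robotparser import RobotFileParser

# Hypothetical robots.txt content for this sketch.
rp = RobotFileParser()
rp.parse([
    "User-agent: *",
    "Disallow: /private/",
])

# Ask before each fetch whether the path is allowed for your user agent.
print(rp.can_fetch("MyCrawler", "https://example.com/public/page"))   # True
print(rp.can_fetch("MyCrawler", "https://example.com/private/page"))  # False
```

Whether following robots.txt is legally sufficient authorization is exactly the open question above (cf. QVC v. Resultly), but there's little excuse for a crawler not to at least check it.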
tl;dr The very short answer to all of this is that traditionally, the legal system has been extremely suspicious of scrapers and has treated them very badly, applying concepts intended for the physical world like trespass to chattels to server access. This has been improving somewhat in recent years, but is still a very financially and legally precarious situation in which to find oneself. The people who get away with it get away with it because no one sued them before they were too big to sue.
It's illegal to break the CFAA whether the plaintiff specifically tells you that they think you're doing it or not. If they send a C&D, yes, you'd be wise to comply, but that's not going to absolve you from claims that you harmed their company by violating the CFAA before they sent it (which do happen and are usually claiming a pretty ridiculously silly amount of damages for something as innocent as downloading a web page from their server). You'd have to argue in court that your access was authorized and they'd have to argue that your access wasn't authorized. The judge and/or jury would then evaluate.
I don't think so. The CFAA states:
>Whoever intentionally accesses a computer without authorization or exceeds authorized access, and thereby obtains information from any protected computer shall be punished as provided in subsection (c) of this section. (a)(2)(C)
It defines a "protected computer" as:
>...the term "protected computer" means a computer which is used in or affecting interstate or foreign commerce or communication, including a computer located outside the United States that is used in a manner that affects interstate or foreign commerce or communication of the United States; (e)(2)(B)
As the Supreme Court has ruled that virtually anything in the United States is subject to the Commerce Clause, this comprises practically all computers, especially after you consider that usage of a computer network almost certainly takes your traffic out of state. Many states have corollary laws to the CFAA with substantially similar language, so if you can miraculously convince a judge that the computers involved are not part of interstate commerce and that the feds therefore have no jurisdiction, there's a good chance you'll have to contend with a similarly-worded state statute.
I don't see any limitations or exceptions here. If you are accessing a computer in an "unauthorized" manner and obtain information whilst doing so, you have violated the CFAA.
The reason scraping can happen is a combination of lack of technical awareness (both from lawyers about computers and from programmers about law) and the cost of pursuing a lawsuit. Even if you break the law, someone has to take issue with your law-breaking before anything happens; they have to file either a lawsuit or an indictment to get the ball rolling. That some people are able to get away with violating the CFAA without someone registering a formal complaint on the matter has nothing to do with whether or not one has violated the statute.
The only way that scrapers don't violate the CFAA is a liberal interpretation of the term "unauthorized", wherein a judge states that if a computer is advertising and allowing public access, then all members of the public are inherently authorized to access it. I know that several scrapers have taken their cases through the courts hoping that such an interpretation would be given.
If you need to build a solid web scraping stack which is going to be maintained by many people and is critical to your business, you have two options… to use Scrapy or to build something yourself.
Scrapy has been tried and tested over 6-7 years of community development, as well as being the base infrastructure for a number of >$1B businesses. Not only this, but there is a suite of tools which have been built around it – Portia for one, but also lots of other useful open source libraries: http://scrapinghub.com/opensource/
Right now most people still have to use xpath or css selectors to run their crawls or get at the data, but not for much longer.
There are more and more ways of skipping this step and getting at data automatically:
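For readers unfamiliar with the selector step being skipped: it means addressing elements by structural patterns in the page. Scrapy's Selector API is much richer, but even the stdlib's ElementTree supports a small XPath subset, enough to show the idea (the page fragment below is invented):

```python
import xml.etree.ElementTree as ET

# A tiny well-formed fragment standing in for a scraped page.
page = """
<html><body>
  <div class="person"><span class="name">Alice</span><span class="org">Ministry A</span></div>
  <div class="person"><span class="name">Bob</span><span class="org">Ministry B</span></div>
</body></html>
"""

root = ET.fromstring(page)
# XPath subset: every span whose class attribute is "name".
names = [el.text for el in root.findall(".//span[@class='name']")]
print(names)  # ['Alice', 'Bob']
```

Writing and maintaining expressions like `.//span[@class='name']` for every field, across every site layout change, is exactly the tedium that the automatic-extraction tools aim to remove.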
Scrapy (along with lots of Python tools, a majority of them likely created by people using it and BeautifulSoup) has lowered the cost of building web data harvesting systems to the point where one person can build crawlers for an entire industry in a couple of months.
Outside of that, I did often find myself building my own tools with a combination of ruby, nokogiri and mechanize. Partly out of a desire to learn something new, and partly because many of my use cases didn't require anything more complex than "Go to these pages, get the data within these elements and throw a CSV file over there".
Otherwise, I'd recommend you check out Portia (open source). We're in the middle of releasing the beta 2.0 version.
Is this even a realistic business model? Seems like this is what Scrapy is doing and what Import.io is doing. Make the tool free in order to get free marketing and then charge people willing to pay money to extract data.
Meanwhile I see Mozenda charging like 5 cents for each page extracted, do you think this is a fair model or does it not matter?
Charges come with large scale crawls (above certain limits on our platform), additional products like Crawlera (our smart downloader that routes requests from a crawl through a pool of IP addresses to avoid bans), datasets, and for us to handle complex crawls for companies outsourcing to us.
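For anyone wondering what "routing requests through a pool of IP addresses" looks like at its simplest: the core idea is just rotation, so no single IP carries the whole crawl. This toy sketch (with made-up proxy addresses, and none of the ban detection or throttling a real product like Crawlera layers on top) shows the assignment step only:

```python
import itertools

# Hypothetical proxy pool; cycle() hands them out round-robin.
proxies = itertools.cycle(["10.0.0.1:8000", "10.0.0.2:8000", "10.0.0.3:8000"])

urls = ["https://example.com/page/%d" % i for i in range(5)]
assignments = [(url, next(proxies)) for url in urls]
for url, proxy in assignments:
    print(url, "via", proxy)
```

A real downloader would then issue each request through its assigned proxy, retire proxies that get banned, and pace requests per domain.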
Our model is that there is something for everyone whether you are looking to dip your toes into web scraping (free), use it occasionally (usually journalists) or dependent on web crawling for your business.
Right. I first came across Scrapy some years ago while browsing the web for Python software tools, on the site of a company in Uruguay called Insophia. It was in the list of products they had developed and worked on. Scrapinghub came later.
Horrible industry imho when you have to give away things for free just to be competitive. I just don't get why people would expect software to be free.
We provide the best Platform (as a Service) to run Scrapy or Portia spiders, and will soon be supporting most standard web scraping technologies. This is free for light users, but we charge for people who need extra or dedicated computing or network resources.
We also provide help to startups or enterprise orgs looking to get help in building a web data harvesting system (more than just parsing pages!), either by building it ourselves or by helping our partners train their engineers in using our technologies.
This has worked so far, and we're very healthy from a revenue perspective – more than doubling every year for a few years now, and good enough to grow to become the largest fully distributed company outside of the US.
We're pretty happy with being a brand that gives to the community, it tends to get repaid 10x in the long run.
It's a hard problem to generalize.
> balance between people who want to scrape Linkedin to spam people, others looking to do good with the data they scrape, and website owners who get aggressive and threatening when they realize they are getting scraped
Agreed. No one wants to be the bad guy and most clients looking to spam people are awful clients to have anyhow. Btw scraping LinkedIn is fairly difficult/expensive and they like to sue people.
We're happy to unban accounts when people give us reason to believe they will post only civil and substantive comments in the future. You're welcome to email email@example.com if that's the case.