But the central argument to me in this piece is that the DOJ is simply criminalizing URL editing. That is, to me, a gross oversimplification of what happened. The CFAA is constructed not to criminalize accidental or reckless unauthorized access; instead, it uses a "knowing" standard. The DOJ's argument in the Auernheimer case is that the defendant knew he shouldn't have had access to information tied to ICC-IDs, just as he'd have known had he tried to loop through Social Security Numbers in some other application.
There are plenty of sane arguments (see Orin Kerr for a good survey) that what Auernheimer did shouldn't have constituted unauthorized access. I don't happen to agree with any of the ones I've heard, but, more importantly, I have a hard time believing those arguments are so dispositive that they indicate malfeasance on the part of prosecutors.
To me, the central problem with the CFAA isn't that it's easy to trip. Rather, it's that the sentencing is totally out of whack, in two ways: (1) that CFAA reacts in a particularly noxious catalytic way with other criminal statutes to accelerate minor infractions into significant felonies, and (2) that sentences scale with "damages", which have the effect of creating sentences that scale with the number of iterations in a for(;;) loop, which is nonsensical.
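The "sentences that scale with a for(;;) loop" point can be made concrete with a toy calculation. This is a hedged sketch: the per-record figure and the record count are invented for illustration, not taken from any statute or filing, but the shape of the problem (identical conduct, linearly growing "loss") is exactly what the comment describes.

```python
# Hypothetical damages model: each loop iteration fetches one record,
# and each record adds a fixed dollar amount of "loss". The figures
# below are invented for illustration only.
PER_RECORD_LOSS = 0.50  # assumed per-record "damage" in dollars

def hypothetical_loss(iterations: int) -> float:
    """Total 'loss' under a model where loss scales with loop iterations."""
    return iterations * PER_RECORD_LOSS

# The same act, run longer, produces a proportionally larger number:
print(hypothetical_loss(100))      # a short test run
print(hypothetical_loss(114000))   # a run on the scale of the AT&T incident
```

Nothing about the defendant's conduct differs between the two calls except how long the loop ran, yet a damages-driven sentence would differ by three orders of magnitude.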
The problem is not simply that once prosecuted, defendants face unjust sentences. It's worse: the oversentencing creates a perverse incentive for prosecutors, turning run-of-the-mill incidents into high-profile vanity cases that lock the DOJ into pointlessly aggressive prosecutions.
To me, it makes sense that what Auernheimer did should have been illegal, but it makes no sense at all that he's serving a custodial sentence over it.
(I did read the whole article; I didn't find the user-agent and responsible disclosure points particularly compelling, but maybe you did; I'm happy to opine about them as well. It's my judgement, not the article's overt wording, that the argument revolves around URL editing.)
If we can't trust sentencing as a process, and I'm beginning to believe we can't, maybe sensible laws can nonetheless be ultimately unreasonable in context.
Judging from the on-line information available about him, he was three years old, or thereabouts.
"Dissenting Justice Scalia believed the sentencing commission to be an unconstitutional delegation of legislative power by Congress to another agency, because the guidelines established by the Sentencing Commission have the force of law: a judge who disregards them will be reversed. Scalia noted that the guidelines were 'heavily laden (or ought to be) with value judgments and policy assessments' rather than merely technical. Scalia also disputed the majority's assertion that the sentencing commission was in the judicial branch rather than the legislative, saying the commission 'is not a court, does not exercise judicial power, and is not controlled by or accountable to members of the Judicial Branch.'"
Here's why. The prosecution details the steps Spitler took: downloading the iPad image, decrypting it, finding the URL the system used (I'm guessing by running strings), spoofing an iPad browser request via the user agent string, and providing the userid to obtain an email address.
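The last two steps amount to nothing more than constructing an ordinary GET request. A sketch of what that looks like follows; the endpoint, parameter name, and User-Agent string here are assumptions for illustration (the real ones were AT&T's), and no request is actually sent:

```python
# Sketch of the kind of request described above: a plain GET with a
# spoofed iPad User-Agent and an ICC-ID as a query parameter.
# The URL and parameter name are invented; nothing is sent over the network.
from urllib.request import Request

BASE = "https://example.att.invalid/ipad/login"  # hypothetical endpoint

def build_request(icc_id: int) -> Request:
    """Build (but do not send) the spoofed request for one ICC-ID."""
    req = Request(f"{BASE}?ICCID={icc_id}")
    req.add_header(
        "User-Agent",
        "Mozilla/5.0 (iPad; CPU OS 3_2 like Mac OS X) AppleWebKit/531.21.10",
    )
    return req

req = build_request(89014104211234567890)
print(req.full_url)
```

The point of the sketch is that there is no exploit payload anywhere in it: the only "spoofing" is a header any browser extension can set, and the only "input" is a number in the URL.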
IANAL, but it seems that they may be trying to make the case that the user agent string was equivalent to a password, or that decrypting the image was the point at which access was exceeded. If decrypting the image was the issue, then I imagine this would be filed with all the other similar cases (DeCSS, etc.), but it wouldn't constitute identity fraud. If the user agent string is seen as the password, then that is the weakest security system I've ever seen.
I haven't actually kept up with how AT&T apparently fixed it, but it seems that a rational response would be to require users to authenticate with their own password BEFORE the system spits out information like an email address. If you don someone's userid but have no password (or session token, etc.), I'd suggest that's impersonation, but not identity theft or fraud. If we're going to criminalize impersonation, I guess the Saturday Night Live cast needs to find a new career.
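The "authenticate BEFORE disclosing" fix is a few lines of server-side logic. Here is a minimal sketch, with the data, the ICC-ID, and the credential scheme all invented for illustration; the only point is that no PII leaves the function until the caller has proven they own the account:

```python
# Minimal sketch of authenticate-before-disclosure. All data is fake.
ACCOUNTS = {
    "89014104211234567890": {"email": "alice@example.com", "password": "s3cret"},
}

def lookup_email(icc_id: str, password: str) -> str:
    """Return the account email only after the caller authenticates."""
    account = ACCOUNTS.get(icc_id)
    if account is None or account["password"] != password:
        # Fail closed: no PII, and no hint whether the ICC-ID exists.
        raise PermissionError("authentication required")
    return account["email"]
```

With this in place, knowing (or guessing) an ICC-ID gets you nothing; under the original design, it got you the email address directly.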
That said, I totally understand why weev contacted reporters and not AT&T. We're in an age where contacting large corporations about security fixes typically results in a gag order on the security researcher and no fix (hi Cisco). By contacting a reporter, he increased the chance that the story would get out and AT&T would fix the issue.
Finally, a lot of people have suggested that weev "deserved" to go to jail for other things he's done. I'm not denying he's a troll, and he has done some over the top things. However, it's not illegal to be a troll, and while one might say he should be in jail for other things he has done, he is currently in jail for this. IMHO, the punishment not only outweighs the crime, but in conjunction with the other abuses of CFAA prosecution we've seen lately (such as Aaron Swartz), I think it's time we stop allowing the government to use poster children like weev as punching bags for obvious career boosting agendas.
Finally, even if it is concluded that weev committed a crime, something with which I disagree, would you say it's OK to punish it with nearly 4 years in prison, denial of medical care, and solitary confinement for using email? All of those things have happened after he was indicted.
That's all that really matters to the judge and jury. The technical aspects don't matter much to them.
Also: If ease of access to information means anyone can take it, do you mean to say the NSA should take whatever they want because people don't encrypt their data?
The point here is, if we want to nail weev to a cross, AT&T should be nailed up right next to him.
For a typical burglary, person A leaves their door unlocked, and person B walks in. The items clearly belong to person A, and when person B takes them and walks out, theft has clearly occurred.
In this case, person A walks near person B's house, and sees that person B has laid the possessions of person C all over the sidewalk. Person A brings out their duplicator machine, creates mirror images of all person C's items, takes those mirror images, and walks away.
While there is a question of whether person A should have duplicated those items, person C is sitting across town clueless as to what's going on. There's also the question of whether person B should have left things all over the sidewalk, or should have placed the things behind the door.
If we begin comparing accessing a website to opening a door, that creates a lot of legal confusion. IANAL, but IIRC, the current legal understanding is that a computer on a network falls under the jurisdiction of the network. If that's the case, and we consider the Internet to be a public place, then a web server placed on the internet becomes public, unless there's a password on it. If, instead, we consider web servers to be like doors, where you need permission to access them, then anyone who spiders a website might be considered guilty of attempted breaking and entering. For another example, does it make more sense to allow smartphone apps to have full access to your phone by default, or should permission be granted for special capabilities? AFAIK, consent in this area is not very well defined.
In the traditional sense of theft, there is an object that I once had in my possession and it has now been taken from me. That doesn't really work so well with digital media where the supply issue goes away.
There's a lot more to this discussion, but I'm curious what the next response will be :)
But, again: I think this case didn't deserve to be prosecuted, and I think CFAA's sentencing should be revised to ensure that in the future prosecutors have no incentive to push pointless cases like it.
We should be thankful that braggarts and clowns like anonymous et al exist because they bring to light many breaches and weak security systems that would have been kept secret otherwise.
I think you may be right from a legal perspective but I find it troubling that the law is so structured. I think it's important that when dealing with a system that's designed to serve some information to the public, but not other information, it's critical that there be no ambiguity about what a given person is allowed to access.
I do not mean to say that all security mechanisms must be effective, else the issue of unauthorized access would be moot, but that no reasonable, technically adept person would think the security mechanism is not a security mechanism. In the case of a website or web service, a number of well-known industry-standard mechanisms exist, and it's reasonable to expect people to use them.
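One of the well-known, industry-standard mechanisms alluded to above is HTTP Basic authentication. A minimal sketch of the server-side check follows (the username and password are invented for illustration); the point is that there exists a recognizable, deliberate gate, which a User-Agent string plainly is not:

```python
# Sketch of server-side HTTP Basic auth verification. Credentials are fake.
import base64
import hmac

VALID_USER = "subscriber"
VALID_PASS = "hunter2"

def is_authorized(authorization_header: str) -> bool:
    """Return True only for a valid 'Basic <base64(user:pass)>' header."""
    scheme, _, token = authorization_header.partition(" ")
    if scheme != "Basic":
        return False
    try:
        user, _, password = base64.b64decode(token).decode().partition(":")
    except Exception:
        return False  # malformed token: fail closed
    # Constant-time comparison to avoid leaking credential length/prefix.
    return hmac.compare_digest(user, VALID_USER) and hmac.compare_digest(
        password, VALID_PASS
    )
```

No reasonable, technically adept person could mistake a check like this for anything other than a security mechanism, which is exactly the clarity the comment is asking for.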
This is a clever trick of the prosecutors. It exploits the fact that the way the judge is going to handle this case is to give the brief to the young clerk who spends a lot of time on Facebook, where "heavy Facebook use" is the proxy for "reasonably sophisticated computer user".
HN user Rayiner is a law clerk in a US appeals court, and he's pretty handy with assembler from what I recall. This is a ridiculous straw man argument that badly misrepresents the claims in the brief.
Overall, I think this article is terribly poorly written. An inability to handle basic grammar is not a good foundation for parsing legal arguments, and much of the author's argument is predicated on the assumption that lawyers and judges do not understand computers.
Graham isn't hypothesizing an ignorant law clerk. He is responding to the prosecution's legal brief, which does so. It isn't a "straw man argument" to point out that they've constructed a hypothetical "sophisticated computer user" who doesn't know the first thing about HTTP.
Ignorance of the law is no excuse, but ignorance of everything else is fine if you are the law.
Frankly, I would trust a law clerk who knew nothing about computers to understand the subject better after study than I would a programmer who knew nothing about law.
That said, I know a lot of young, competent engineers and scientists who know next to nothing about the workings of computers and networks. They could figure out a lot if they had the time to put into it (I've seen a couple switch into development successfully), but usually they don't, and their knowledge stays surface-level. That could still help with gut checks about what's reasonable behavior online for a casual user, but it's far from the nuanced understanding needed to weigh the ramifications of things like the various applications of the CFAA in cases involving more advanced users.
Most people are not curious about technology and generally don't have a good understanding of other people's curiosity about the subject. Should they be the ones to judge whether someone was just playing around or trying to attack something? Or should it be people with that curiosity, who have experience playing around with security?
I would say that programmers' interpretation of the law via intent and current context in tech cases is frequently more consistent with what a just society needs than most judges' attempts at maintaining consistency with past rulings until a higher circuit corrects the precedent. I wouldn't dismiss the whole class as overenthusiastic amateurs.
I may just not be seeing the value in the judges' attempts at finding consistency, though, and I'm curious as to why they strive so hard for it versus trying to find the correct interpretation. My understanding is that that's just an attribute of the common law system. If someone could tell me why that's valuable (perhaps for consistency of enforcement/predictability of outcomes?), that'd be great. Sorry for the tangent, but it's something I'm curious about.
I may just not be seeing the value in the judges' attempts at finding consistency, though, and I'm curious as to why they strive so hard for it versus trying to find the correct interpretation.
This is very much an epistemological question. I'm personally a utilitarian, but as we are not granted the gift of foresight, I accept that we need to work within an established framework (i.e., maintaining consistency with precedent), because what is correct is not nearly as obvious as we would like it to be (e.g., in this article I think the assumption about what user agent strings are for is too pat by far). A good, accessible, and affordable book on this subject is Bad Acts and Guilty Minds by Leo Katz, written by a law professor but for a lay audience. I would be a good deal more utilitarian than he is, but then I'd have approached the defense of weev's case far differently too.
 There is no question about his competence with computers.
I would like to see this rhetoric about Weev stop. People are allowing themselves to be distracted by the character of the defendant rather than the stupidity of the laws involved.
Whether or not he belongs in prison is completely irrelevant to the conversation about the sentencing and laws and prosecutorial conduct involved.
Unless, of course, you want to defend bad laws so long as they apply to people who are not you.
It's akin to saying, "Alan Turing is gay, but he's done some good work in cryptography anyway..." ... that example only seems ridiculous now because social mores have changed.
Weev's character would have relevance in a discussion about whether or not he deserves a Great Justice award, not whether or not the prosecution in this case is just or not.
Weev is proud of hurting innocent people. He brags about it. He wants us to know. And I'm sure as hell not going to try to cover that up on his behalf, or tolerate those who do.
"His rise as a folk hero is a sign of how desensitized to the abuse of women online people have become. I get so angry at the tech press, the way they try to spin him as a trickster, a prankster. It’s like they feel they have to at least say he’s a jerk. Openly admitting you enjoy ‘ruining lives for lulz’ is way past being a ‘jerk’. And it wasn’t just my life. He included my kids in his work. I think he does belong in prison for crimes he has committed, but what he’s in for now is not one of those crimes. I hate supporting the Free Weev movement, but I do."
There are a myriad of things that are legal for the government to do that are illegal for a common citizen. There's no irony in that.
Weev didn't hack anything. He committed data theft and possibly attempted extortion. Those should be the basis of his trial.
Also, just to be clear, those things are actually quite illegal for the government to do both on a national and international scale.
If the government does not require a warrant to do something, then it should be legal for anyone to do. After all, the entire purpose of a warrant is to ensure oversight in the use of government power.
The government doesn't require a warrant to prevent people from entering or leaving the country.
The government doesn't require a warrant to block off city streets or do any of a number of things to public property.
The only things that the Constitution requires the government to get a warrant to do are "search and seizure", which are terms with very specific meanings in the Common Law. The NSA somehow argues that intercepting people's traffic isn't a "search" until an analyst actually looks at it, which I think is a ridiculous argument; however, the response isn't "everything you do needs a warrant", but "that's a search, and searches need warrants".
Citizens in the US have a duty to de-escalate the situation, a 'duty to retreat', unless they're backed into a metaphorical corner ('castle doctrine').
Police are presently seen as having a duty to escalate: to allow someone potentially hostile to back down and leave without handcuffs is seen as a dangerous failure, extending even to periods when the police officer is off duty. Meek compliance with 'lawful orders' is the ultimate goal, and people will be bossed around, arrested, tortured (who the fuck thought 'drive stun' mode was a good idea?), or shot for failing to show appropriate amounts of submissiveness.
Assault against a police officer is seen as a crime against the state, whereas assault against a citizen is essentially mandated for a police officer to do their job.
The rules for actual murder are only slightly less asymmetrical.
First breaking off civil relations with the citizenry via the drug war, then paramilitarizing our police force post-9/11, and finally having their behavior revealed by YouTube and smartphones has severely damaged the credibility of the police in this country, good and bad. It's going to take some severe changes to bring it back, changes explicitly designed to "make it harder for them to do their job," as they would describe it.
"A Stand-Your-Ground law is a type of self-defense law that gives individuals the right to use deadly force to defend themselves without any requirement to evade or retreat from a dangerous situation. It is law in certain jurisdictions within the United States."
This is the type of law that allowed Trayvon Martin's killer to walk away as an innocent man.
Please stop. Zimmerman's legal team never even mentioned SYG. It wouldn't have made sense, since their claim was that at the time of the shooting he was pinned on his back and unable to move. In such a situation, no one has a "duty to retreat".
I'm not claiming SYG is good or bad law, but if you'd like to argue against it please do so in a sensible manner.
"The "stand your ground law" was not used by the Zimmerman defense team during the trial, although it was considered at an earlier time. Some sources have pointed out that “Stand Your Ground” was mentioned in the Jury Instructions preceding the trial, however, this is part of the required Jury Instructions in all Florida murder trials in which the defendant claims “Justifiable Use of Deadly Force” as part of their defense."
"The police chief said that Zimmerman was released because there was no evidence to refute Zimmerman's claim of having acted in self-defense, and that under Florida's Stand Your Ground statute, the police were prohibited by law from making an arrest."
Honestly, I don't think it's unreasonable to think that SYG played a role in the jury's decision-making process. But hey, don't take my word for it, what about the reaction of the Governor of Florida (again from Wikipedia):
"Three weeks after the shooting, Florida Governor Rick Scott commissioned a 19-member task force to review the Florida statute that deals with justifiable use of force, including the Stand Your Ground provision."
If that's still too tenuous a connection for you, let's hear from one of the jurors on the case:
"An anonymous member of the jury appeared on Anderson Cooper 360 on July 15 to discuss how Florida's Stand Your Ground law provided a legal justification for Zimmerman's actions. According to the juror, neither charge against Zimmerman applied 'because of the heat of the moment and the Stand Your Ground.'"
So yeah, I really do think it's "sensible" to think that SYG helped Trayvon Martin's killer walk away as an innocent man.
Other parts of the world take the idea of excessive use of police force somewhat more seriously and are wary of it.
What weev did was quite different in that he accessed this web service in exactly the way it was intended. Even if he was not the intended consumer of this data, his access never exceeded the defined and expected parameters of the API. Furthermore, he didn't circumvent any access restrictions; rather, access restrictions were never imposed. weev had no information available to him as to AT&T's intent to disclose or not disclose customer emails; as far as he was concerned, the existence of this API could have been a purposeful, not merely negligent, disclosure on the part of AT&T.
I think that the reason that the weev case rankles is that web developers do this kind of thing all the time. What is the difference between what weev did here and what Padmapper did when it built a product on top of Craigslist's data? Despite Eric DeMenthon's protests to the contrary, a strong argument could be made that Padmapper's intent was to cause severe commercial harm to Craigslist, which is conceivably why he got sued. In spite of the civil case, however, criminal charges are almost unthinkable.
Also, how often do we read about someone's project being hampered when a private Google API is turned off?  Anyone that builds a commercial product on top of something like this would be deemed a fool, but I've never seen anyone accuse a developer who is using this kind of API of acting criminally.
What is the difference, under the law, between someone accessing a private Google API and the private AT&T API that weev accessed? As a web developer with zero documentation, zero information beyond simply knowledge of the API URL's existence, there is no apparent difference beyond what content was being served by these APIs. So, if that is the case, at what point should web developers accessing undocumented APIs begin to be concerned about their criminal liability?
Shouldn't it be circumvention, not authorization, that defines criminal access under the law?
 Just the easiest-to-find example: https://news.ycombinator.com/item?id=4441677
So is a thief who walks through a door carelessly left unlocked "accessing it exactly in the way it was intended." It's what he does afterwards that makes the difference.
> What is the difference, under the law, between someone accessing a private Google API and the private AT&T API that weev accessed? As a web developer with zero documentation, zero information beyond simply knowledge of the API URL's existence, there is no apparent difference beyond what content was being served by these APIs. So, if that is the case, at what point should web developers accessing undocumented APIs begin to be concerned about their criminal liability?
When the content you get back from a URL is other people's private data, it doesn't take a genius to figure out that maybe there's some criminal liability there.
If he takes some pictures and leaves he certainly isn't guilty of breaking and entering.
e.g. Homakov's hack of GitHub didn't deserve jail time, as it was for publicity, not malevolence.
The grandparent making the point about status 200 has a point, especially in regard to this case. If a website returns 200s for a GET request, then you are implicitly 'authorized' to see that page. The counterpoint about SQL injection is also valid, but SQL injection wasn't used here; it was just plain old GET requests.
It's difficult to draw real world comparisons to things like this. So I don't think you can simplify it down to locked/unlocked doors, or public/private property.
If I go to cia.gov/supersecretfiles and it returns something... did I just "hack" the CIA? It doesn't make sense to me.
URIs that return 200s are public resources.
There doesn't even need to be a metaphor here: the data physically existed on a private server, and weev was not authorized to access it.
What's returned is data physically on a private server. I am not authorized to access that server.
But the internet would be a pretty crap place if that was against the law.
As I said, metaphors to locked/unlocked public/private don't make sense. But happy for you to keep stretching this analogy until it fits.
Comparing it to houses that have doors, locked or otherwise, is exceptionally disingenuous.
The point about "200" status codes is sophistry. We all know that not every 200 is a deliberate authorization. If you believe otherwise, then any SQL injection attack that uses GETs and generates a 200 must be authorized.
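The SQL-injection counterexample is easy to demonstrate: a handler that splices a GET parameter straight into its SQL will happily return a 200 for an injected request, which shows that the status code says "the server produced a response," not "the client was authorized." This is a deliberately vulnerable toy (the table, data, and handler are all invented):

```python
# Toy demonstration that HTTP 200 is orthogonal to authorization:
# a vulnerable handler returns 200 for injected input just as readily
# as for normal input. All data is fake; the injection is intentional.
import sqlite3

db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE users (id INTEGER, email TEXT)")
db.execute(
    "INSERT INTO users VALUES (1, 'alice@example.com'), (2, 'bob@example.com')"
)

def handle_get(user_id_param: str):
    """VULNERABLE on purpose: the GET parameter is spliced into the SQL."""
    rows = db.execute(
        f"SELECT email FROM users WHERE id = {user_id_param}"
    ).fetchall()
    return 200, rows  # the 'server' answers 200 either way

status, rows = handle_get("1")         # normal request: one row, 200
status, rows = handle_get("1 OR 1=1")  # injected request: every row, still 200
```

Both calls come back 200; only the second dumps the whole table. (The real fix, for the record, is a parameterized query, e.g. `db.execute("SELECT email FROM users WHERE id = ?", (user_id,))`.)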
Seems the sophistry here is applying another clear cut version of hacking to say that this 'not clear cut at all' version is also wrong.
I have no problem with basing it off intent, but the focus should be on prosecuting whoever put that data out there in the first place with gross negligence.
So, if by incrementing ICC-IDs, you found random technical data about AT&T provisioning, it would be very hard to argue that you were knowingly accessing it without authorization. But when the information you find is so personal that your first instinct is to chat about selling it to spamming rings, you are on considerably less safe footing.
I am ambivalent about software liability. Vulnerable software is much more common than most people think it is, and it would be a shame if ill-conceived liability rules created a situation for startups analogous to that of medical malpractice insurance. On the other hand, liability laws would be hugely lucrative for me.
Hypothetically the police give me a Police report number that I can access at police.gov/crimes/:reportno
I discover if I increment/decrement these I can get ALL reports. I then build a cool mashup of crimes in the area on a google map.
It turns out the police didn't intend that; am I now a criminal (because of the police's intent)?
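The hypothetical mashup's entire "attack" is a counting loop. A sketch of it follows; `fetch_report` here is a stand-in for an HTTP GET against the hypothetical police.gov/crimes/:reportno endpoint, backed by fake data so the example runs on its own:

```python
# Sketch of the hypothetical report-scraping loop. fetch_report() is a
# stand-in for a GET that returns a report (200) or nothing (404).
# The report numbers and contents are invented.
REPORTS = {1001: "bike theft", 1002: "vandalism", 1003: "burglary"}

def fetch_report(report_no: int):
    """Stand-in for GET police.gov/crimes/<report_no>."""
    return REPORTS.get(report_no)

def scrape_all(start: int, stop: int) -> dict:
    """Increment through report numbers, keeping whatever exists."""
    found = {}
    for n in range(start, stop):
        report = fetch_report(n)
        if report is not None:
            found[n] = report
    return found

print(scrape_all(1000, 1010))
```

There is no technique in the loop that distinguishes the criminal from the civic-minded mashup builder; any distinction has to come from intent, which is exactly the comment's question.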
Using stolen identities to purchase goods: illegal.
Attacking a database: not illegal without CFAA.
I'm stumbling around trying to figure out what the right balance is too, but I think the existing laws we have around fraud and privacy are all that we need. That is, we don't need to criminalize accessing inadvertently public information; we just need to criminalize exploiting it.
That's a big problem since we can't really even define clearly (and rationally) what an attack is.
The law lives in a conservative analogue world and will continue to do so for years to come.
He would be eligible for a halfway house; in his case that would be within three months of his mandatory release date.
So in any case he is going to spend more than two years in a federal prison. Doing time is not easy if you fight the prison system, and according to reports this is what he has been doing.
His sentence also undoubtedly contained a supervised release provision. So if he violates the conditions of his release (probably no computer use, that's a standard one) he goes back inside for the duration of the supervised release period.
Federal prison is no joke. There are very good reasons to appeal.
Or the user agent? What if you're just curious if the app will even serve another browser?