Jail Looms for Man Who Revealed AT&T Leaked iPad User E-Mails (technologyreview.com)
148 points by robdoherty2 on Nov 20, 2012 | 112 comments



Hi there - I'm the one who put up the $50k to bail weev out of jail.[1] (Otherwise he would have had to sit in Essex County Jail during these ~2 years since this started.)

There were some others in line to assist (I live in Europe), but they all feared various forms of retribution/harassment from the FBI/DoJ, so it fell to me (someone with comparatively little to lose, stateside). This only serves to underscore the truly chilling effects of these sorts of governmental abuses of power.

I also host his website, http://freeweev.info, where you can make donations to his case via both Paypal and Bitcoin. (He has various restrictions placed on his use of technology while out on bail.)

Please feel free to contact me directly if you have questions related to his case. Contact info can be found in my profile.

5539 AD00 DE4C 42F3 AFE1 1575 0524 43F4 DF2A 55C2

[1] https://twitter.com/rabite/status/270668883172671489


I'm glad someone is taking on the task of defending Weev because the charges they have against him are ridiculous and stupid.

Sadly, Weev's nature is that of a rattlesnake, and he's basically burned every bridge he could on his way here. Thus, the EFF will have nothing to do with him.


> Sadly, Weev's nature is that of a rattlesnake, and he's basically burned every bridge he could on his way here. Thus, the EFF will have nothing to do with him.

Could you expand on that? Why doesn't the EFF want to help him?


Sorry, not going there. Suffice it to say, he burned a bridge.


From my understanding, all you had to do was pass the ICC-ID to a script on AT&T's servers to get back the user data. I can see the court interpreting the ICC-ID as a form of access control, since you'd have to guess them much as you would passwords.

What isn't quite clear to me is what they did with this data. It seems they reported the hole to AT&T who then fixed it. That's good. It also seems they passed the data off to reporters, which may be bad for their case. It seems like they acted, at least mostly, responsibly.

Assuming the data was never released to the public I don't think they should be prosecuting Auernheimer. That said, it does seem like they have a case based on the wording of the law.

It's a very real possibility that he's facing jail time, especially when you consider the volume of data. He's charged with breaking 18 USC § 1028A (aggravated ID theft laws) which carries a mandatory minimum of 2 years. Federal judges have some control over this but generally stick with sentencing guidelines. He could fight it and win, get them to reduce the charges, or do some serious work for the gov and hope the judge goes easy on him.

Note: I was convicted of violating 18 USC § 1028A (among other laws) so I have personal experience with this law, sentencing guidelines, and judge discretion but IANAL.


The ICCIDs here are sequential integers. It wasn't brute force, just incrementation. They are not secret and are certainly not access credentials. There were no access controls surrounding the web service in question.
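
(To make "just incrementation" concrete, here is a minimal sketch; the endpoint URL, parameter name, and starting value are hypothetical placeholders, not AT&T's actual interface:)

    # Sketch of sequential-ID enumeration against a hypothetical
    # unauthenticated endpoint. The URL, parameter, and IDs are made up.
    import urllib.error
    import urllib.request

    BASE = "http://example.com/lookup?icc-id={}"  # hypothetical endpoint
    start = 89014104200000000000                  # ICC-IDs are sequential integers

    for icc_id in range(start, start + 100):
        try:
            with urllib.request.urlopen(BASE.format(icc_id)) as resp:
                print(icc_id, resp.read().decode())  # e.g. an email address
        except urllib.error.HTTPError:
            pass  # unrecognized ID: the server errors, the loop moves on

Note that nothing in a loop like that guesses a credential; each request is an ordinary GET to a URL.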

The prosecution is asserting that access to any system without authorization is "access to a protected system" in the legal sense, which is obviously bogus. This would make the Googlebot's operators criminally liable if I put up a site at "johndoessocialsecuritynumber.com".

In fact, authorization is built into HTTP. There were no protections in place surrounding this data. Regardless of what he did with the data, downloading something from a public website is not criminal. (Though irrelevant legally, it's worth noting that he did nothing with the data except shame AT&T.)

(I put up weev's bail and am handling some of his PR while his computer restrictions are in place.)


No, it wouldn't expose Google to that liability, because Google would have no intent to access information of any sort, let alone information that Google would have reason to believe it shouldn't have access to. The crimes charged here aren't strict-liability; the prosecution is required to prove intent.

Also: the authorization "built into" HTTP is used by a tiny minority of all web applications. Clearly, if you break into a retail brokerage and execute trades on behalf of other people, you're going to be liable, regardless of how that retail brokerage chooses to authenticate access to the site.

(I don't have a position about this particular case; I know virtually nothing about it).


> No, it wouldn't expose Google to that liability

It wouldn't expose Google to that liability because Google is a large US corporation.

If it were a single greyhat individual with a history of unpopular speech, the argument would probably go something along the lines of "any reasonable person wouldn't spider the whole web, because they'd know they'd eventually get SOMETHING they're not authorized to access..."


I'm just stating facts. The prosecution is required to prove that the defendant not only exceeded their authorization to the application in question, but that they did so knowingly.


[deleted]


In practice, you're unlikely to be charged for good-faith testing of someone else's system without their permission, especially if you're not an idiot and you don't ransom off your findings to the company. Also, many sites explicitly offer permission to security researchers to test their sites.

However, I believe it remains black-letter illegal to "test" websites for security flaws in such a way where you actually exploit flaws and gain access to internals or sensitive data.

Either way, don't do it. If you don't have permission (Google, for instance, gives blanket permission for testing), don't fuck with other people's web apps. It's very difficult to ensure that any kind of security testing, outside of really basic stuff like CSRF, won't disrupt the site; even silly XSS vectors can get cached in backends and replayed to other customers. You're unlikely to pick up a felony charge for doing this, but you can be sued.


I'm guessing that sneak wasn't referring to HTTP Basic Auth, but rather the fact that when an HTTP client says "GET", the server effectively says "HOW MUCH?" (A crude allusion to the classic "jump/how high?") I think sneak is arguing that receiving a response to an isolated GET request implies that the GET request was "authorized."

Edit: after reading selections of IRC logs and e-mails from the indictment, my sympathy with weev has been reduced. Granted, an indictment will select the most salacious excerpts to make its case in the strongest way possible, but the conversations in the indictment are far from white hat (unless the defendants manage to convince the court that their talk of phishing and spamming was merely juvenile humor).


I understand what he's saying, but a legal defense based on the response to an HTTP GET constituting de facto permission to access whatever that URL addresses is going to be disappointed. I'm not saying the defendant here is guilty; I'm just saying, he's not not guilty simply by reason of HTTP.

As a matter of law, the onus is not on the operators of web applications to ensure that their defenses are effective. If you do something stupid in your web app, but it remains clear to me that I'm not meant to access whatever your stupid app failed to prevent me from accessing, I can still be charged with a felony for doing it.

Now, if whatever it was that you tried and failed to protect wasn't obviously protected, so that I can argue in court that a reasonable person would have believed they did have permission to interact with it, I'll probably win in court. The prosecution will no doubt collect all my emails and any other piece of evidence it can find to build a case that I knew I wasn't supposed to hit your site that way. If you were especially dumb, and built an application that not only exposed sensitive information, but did so in a way that any normal person would just have assumed that sensitive information was theirs for the taking, the prosecution will have a hard time making their case.


I wonder what one could get an expert witness to testify to regarding the meaning of 'authorization' in the HTTP protocol.


Sergey Bratus[1] (of Langsec[2] fame) testified as an expert witness for the defense in this very way within the last few days.

[1] http://www.cs.dartmouth.edu/~sergey/

[2] http://www.cs.dartmouth.edu/~sergey/langsec/


Do you really buy this line of argument? How many banking applications configure themselves so that they rely on the intended meanings of HTTP verbs and authorization headers as their primary overt security mechanism? And of those, how many do so correctly?

I get why Bratus would testify. The defendant here needs all the help he can get and is morally entitled to the best case he can possibly present. I respect and admire everyone who is trying to help him out. But presuming he's not guilty of a real conspiracy to defraud anyone, I have a hard time believing it's because AT&T's web application vulnerability entitled the public to their database.


I think it's reasonable for them to argue that AT&T's server's willingness to give them the e-mail addresses means that obtaining the addresses was not illegal, and that despite mulling over the darker possibilities available to them, by choosing not to put the e-mail addresses to illegal use they committed no crime. They could argue that they should not be convicted of conspiracy because they ultimately decided not to abuse the list of addresses.


I would like it to be harder than it seems to be to prove conspiracy to commit fraud.

I don't think I'd like it to be harder than it seems to be to prove unauthorized access.

I know that's the opposite of what most nerds like me want, but I think we're well served by a very broad definition of unauthorized access, and we're poorly served by vague conspiracy laws in more places than just online.

Note that under the US Code, you need both elements. Just plain unauthorized access isn't a federal crime; you need an intent to defraud.


> I know that's the opposite of what most nerds like me want, but I think we're well served by a very broad definition of unauthorized access, and we're poorly served by vague conspiracy laws in more places than just online.

I do agree with you regarding conspiracy, but you are right that I would in principle prefer to have every Internet-facing system as robustly secured as if it had been independently reviewed by you, cperciva, and the people who wrote the space shuttle's software. A small part of the reason I want this is so that absolutely anybody can confidently write and deploy scraping software that collects and analyzes information in new ways (e.g. IBM Watson, better search engines, or some other as yet undiscovered idea).


The CFAA does not in practice prevent search engines from scraping pages. In order to be charged under the CFAA, you must willfully access specific information on a website in furtherance of a fraudulent scheme, which in turn means you must be making specific representations as to your identity or actions that a website could reasonably rely on in order to trick that website into doing something it wouldn't have done otherwise. In CFAA cases, the prosecution must prove not only unauthorized access (which is easy) but also fraud (which is not as easy).


I think that you may be my only truly whitehat friend. <g>


I do. If there is any crime here (I don't think there is even one, FYI) it's AT&T not taking adequate measures to safeguard their customers' PII.

I don't think trafficking in any information should be a crime, though (unless it's the government - an asymmetry is necessary there), so I don't think a criminal trial is in any way justified.


> regardless of how that retail brokerage chooses to authenticate access

There was no other authentication system aside from HTTP in place in this case.

If that's the only one, and it's wide open, any reasonable person knows that the information there is public. That's how the web works. We're allowed to load URLs on planet Earth.


I understand where you're coming from, but that logic doesn't really work, does it? I can tell you that somewhere there is indeed an application that will respond to an unauthenticated GET request by transferring funds between accounts. You and I both know that. Deliberately loading that URL on planet Earth to effect funds transfers will get you charged.

So it's obviously more complicated than just "any unauthenticated URL must be fair game".


That sounds like a pretty dubious assertion to me, especially given the nature of $$$. Care to provide some evidence?


If that sounds like a dubious assertion to you, I'm guessing you don't work in application security. Me and 'sneak do.


I am currently employed in the banking industry and live in Charlotte, NC, a major banking center. I can confirm that bugs of this nature exist and are, in fact, not even uncommon.


Maybe this is the disconnect, so help me understand: why did he need to gather 110k email addresses?

If he never had any intention of using them, or just wanted to publicize the breach, why not gather two or three and then go to the press?

I guess an analogy might be - I see a bunch of boxes on the curb outside a doctor's office. I open one up and look at the first manila folder - it's someone's medical records. I open the second, it's also medical records.

If I take those two to go show a reporter, or yell at the doctor's office, I'm ethically clean. But if I take the boxes home with me, I'm in a much trickier legal and ethical situation, as why did I need to take the records home?


In computers, the only quantities that matter are zero, one, or many. My guess: weev wrote a script that incremented IDs just to see what would happen, it ran for a short time, and... whoops, there's 110k addresses there!

It's difficult to analogize this to the physical world; maybe you could say that you are the garbage collector, so your automated truck picking up one box of trash that happened to contain medical records is the same as picking up all the boxes, but that's still far from a perfect analogy.


It's worth reading the indictment, here (thanks to ssclafani for digging this up):

http://www.scribd.com/doc/113664772/46-Indictment

"The Account Slurper attached AT&T's servers for several days in or around June 2010, and was designed to harvest as many ICC-ID/email pairings as possible."

This is the indictment, so not proven, but the period for which the script ran is probably accurate. So nah, not a short time.

There's also some great IRC logs later on where they're talking about using the collected email addresses for phishing, debating whether or not it's worth the effort if they don't get passwords, and talking about selling the resulting email database.


I just glanced through the indictment linked above and the IRC log snippets are extremely damaging. Very hard to make the case that this is a purely innocent whistle blower. Maybe it's just trash talking among friends on IRC, but it will sound very, very bad when read in court.


Those are the things they DISCUSSED doing - what ACTUALLY HAPPENED was that they sent excerpts to the media and deleted their own copies.

It looks bad but I think it's more important to focus on the fact that they ended up doing the Right Thing with the data instead of fucking anyone directly - even when distinctly aware of the various opportunities available for misusing that data.

Many greyhat researchers don't have those ethics or morals.


That's why they've been charged with "Conspiracy", isn't it? The elements of federal Conspiracy:

(1) An agreement between two or more persons to commit at least one crime.

(2) A person joining the conspiracy knowing of at least one of its objects and intending to help accomplish it.

(3) One of the members of the conspiracy performed at least one overt act for the purpose of carrying out the conspiracy.

So, here:

1. Let's build a database of stolen AT&T email addresses associated with iPads and then sell them to spammers.

2. Here, run this script as I adjust it to increase the number of accounts it successfully finds.

3. Oh, look, I ran this script and got 100,000 email addresses.

Not good, right?

Look, if you read Spitler's indictment, it really doesn't seem like this particular conspiracy wanted to do anything but troll a huge company with thousands of people's personal information. I don't have a whole lot of sympathy for the defendants here, but it does seem like an injustice that could have been addressed in civil court.

It seems like a stretch to convince a jury that these people really wanted to sell the information they collected.

On the other hand, I think the idea that AT&T made this information available to the public in such a manner that you could have in good faith harvested hundreds of thousands of addresses is pretty much bunk. It's not going to help that A.A.'s first instinct was to run to the media because simply having the addresses was such a big story that they'd be on the front page of NEWS.GOOGLE.COM. He more or less immediately made clear to everyone that he knew he had no business handling that data.


It's debatable whether or not they did the right thing. Many security researchers would say that the right thing would have been telling AT&T first, giving AT&T a reasonable amount of time to respond, and only then going to the media.

This also speaks to motives. At the end of the day, I don't really care whether or not weev is a good guy. I do think it's important to be really clear about why he does the things he does, because otherwise you're not having the real conversation. IMHO, it's more important to protect jerks than it is to protect nice guys. It's harder to protect jerks.

Therefore, you're doing the community a disservice if you paint weev as an angel. Let him be who he is. Then defend him if his case merits it.

Tangentially, "many greyhat researchers don't have those ethics or morals" is irrelevant. The question at hand is whether or not weev acted morally, not whether or not he did better than average.


I don't see 'sneak doing anything but saying that A.A. was hit very hard by the Justice Department, and that he deserves the best possible defense and, in the meantime, the least possible disruption to his life. I wouldn't have coughed up bail money, but I admire the hell out of 'sneak for doing that.


He did also claim that weev did the Right Thing. I don't think that assertion is clearly accurate.

It's entirely possible that I'm focusing too much on motive; possibly the end effect (hole fixed) matters more than why weev did it in the first place.


Point of information: did weev or anyone at Goatse Security report the hole to AT&T, or did they just send the information to the media? None of the reporting I read at the time said that they reported to AT&T, but I could easily have missed something.


According to Spitler's indictment, they did not; in a captured IRC chat, A.A. told another member of his group explicitly that he had not reported the breach to AT&T.


The article states

"Weev and a fellow hacker who originally uncovered AT&T’s mistake and collected the e-mails didn’t ask the company for permission to access the Web addresses that shared iPad users’ private information"

I really don't see how this argument holds up.

From a technical point of view, the very nature of HTTP includes asking for permission. You send a request: "Please can I see the information at this URL?" If the organisation running the server does not grant that permission, it should not serve the data.
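
(In protocol terms the exchange looks like this; a minimal sketch using Python's socket module, with the host and path as placeholders:)

    # Sketch: a GET is literally a request, and the status line is the
    # server's answer to it.
    import socket

    sock = socket.create_connection(("example.com", 80))
    sock.sendall(b"GET /some-page HTTP/1.1\r\n"
                 b"Host: example.com\r\n"
                 b"Connection: close\r\n\r\n")
    reply = sock.recv(4096).decode(errors="replace")
    print(reply.splitlines()[0])  # "HTTP/1.1 200 OK" means the request was served;
                                  # a 401/403 status means permission was refused
    sock.close()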

These charges seem pretty odd to me, hopefully it gets resolved in a sensible manner.


> From a technical point of view the very nature of HTTP includes asking for permission.

A web server isn't an agent of the company and has no capacity to grant or deny permission.

Think of it as a security system you install in your home. Now, if the security system is malfunctioning and you notice that it is malfunctioning ... do you call up the owner and let them know or do you go inside and look through their stuff? If you go inside and look through their stuff, it's trespassing.

Obviously the two scenarios aren't the same, but I'd imagine that's the logic used in the argument.


As an information security professional, I see two different issues at play here. First, they got access. They were granted access by the admin who did not lock down the server. I am not a lawyer, but I see the unauthenticated web server, no matter how much of a mistake, as being implicit permission to access the site. A house, by default, implies privacy. A web server is more of a business in this metaphor. If the door is open and the lights are on, it's implied you can come in and look around. Machines accessible over the web are by default open to everyone unless permission is revoked. The "unauthorized access" charge, in my opinion, should be struck down. When a site is made accessible from the unauthenticated Internet, the admin is implicitly granting you permission to visit the site.

The second issue at play is the fact that the guy apparently collected some email conversations to use as proof. Using my business metaphor, walking into a closed business that to a layman appears open is a simple mistake. Anyone could reasonably assume the business is open. However, collecting their merchandise even just to prove they forgot to lock up would still be stealing. In this situation, it's unauthorized copying. Most reasonable people would consider this to be unacceptable.

The second situation is muddied a bit further by my wording "most". Websites accessible when unauthenticated are able to be scraped easily. What if the Googlebot crawled the site and collected the information due to a poor robots.txt? What if you walked into the business and tried some free samples (unauthenticated websites are implicitly free samples)? Data privacy comes into play on this one though, and I would argue that any reasonable person would understand these as private communications. While they are accessible to view, any reasonable person would understand it is unethical to read them and unacceptable to copy them.

The fatal flaw of the defendant was copying the emails. Up to that point, he was completely within reasonable practice in my opinion. Here's a takeaway for any startups: security isn't a joke. It's a career ender, it's a business ender, and it can hurt the customers who trusted you. Hire the best programmers, but set a little budget aside for an external penetration test, and take the results seriously. Don't lose your company and your reputation because of a caffeine-fueled oversight.


You don't live in a world governed by machines and pure logic. You live in a world governed by human beings and their nature.

You have the capacity to recognize where you should be and where you shouldn't be. What you should be seeing and what you shouldn't be seeing.

Right from wrong.

> A web server is more of a business in this metaphor. If the door is open and the lights are on, it's implied you can come in and look around.

No.

If you're inside a business and you see a door open and it is evident by the design of the building that it is their storage space ... you do not have the right to waltz on in. You damn well know through your experience in hundreds of other stores that this area is used by employees and for employees only. You do not belong there.

Are you telling me he read those emails by accident? Just stumbled on them? Or did he know exactly what he was doing?

Enough of this white hat bullshit. I do not have the right to self-deputize myself and become a vigilante on the Internet. If these clowns don't know how to secure their own damn servers, let them pay the price that will be exacted by less scrupulous individuals. That's how the free market works. Stupidity is severely punished. They will very quickly learn how to properly set permissions on their server.


The problem with metaphors is that they only resemble what they are describing. They'll always be imperfect. The problem with web servers is that anything that is public-facing is just that. Security through obscurity is no security at all.

Like I said, the guy went too far. But visiting a public-facing website is not a crime, no matter how you happen to discover the URL. There's no sign on the door saying "keep out", even though the server is more than capable of displaying one. Do you have a right to walk into any business, or walk into their storage space? No, but any reasonable person (notice I keep using this phrase? It's going to come up in court) would assume if the lights are on and the door is open, you can walk in. You might be mistaken, and a clerk might show you out. Intent is a critical factor. Like I said, the guy went too far. He didn't enter by mistake, though someone could have. He entered with the intent of making unauthorized copies of private data. Walking into a store's storage space isn't illegal, but a reasonable person would know that taking pictures of customer data is.

It's not illegal to visit any public-facing Internet site. It is illegal to make unauthorized copies of restricted data. The company is hugely to blame in this situation for leaking private information. So is the guy who broke the law by making unauthorized copies of that private information. I support him having criminal charges filed against him. My point was that there are two issues at hand, one illegal and one perfectly within the law. Implied consent at odds with intent. It should be an interesting case.


He demonstrated a proof of concept, collected data, and went to journalists. Cherry-picking IRC logs for possible uses of the data is weak; they're doing it because they have a weak case.

Arguing about methods of responsible disclosure, a very dead horse that has been beaten to dust, seems like a waste of time and not really relevant.

This is just the endgame of the chilling effect of arresting and hounding researchers, which has been going strong ever since 2001: http://news.cnet.com/2100-1001-270082.html


> Intent is a critical factor. Like I said, the guy went too far. He didn't enter by mistake, though someone could have. He entered with the intent of making unauthorized copies of private data.

We're in agreement here. I think we're both making the same point. Intent is the key here.

The problem is that if you just consider servers, configurations, permissions, and other technical aspects ... intent doesn't enter the picture. That's the wrong way to think about this.


I do agree that we're making the same point, and I wrote my response to you in the mindset that I had poorly communicated my initial conclusion. Your point complements my own. The difference we may have is that I don't place the highest importance on intent when someone visits a public server. Intent will only get you so far as long as servers, configurations, permissions, and other technical aspects are in order. The reason he was able to copy restricted data is that the technical aspects were not in order. That's where the muddiness comes in; you wouldn't need intent to make unauthorized copies in this situation. The Googlebot could have made unauthorized copies. Your browser's cache could make unauthorized copies. Archive.org could have made unauthorized copies. Googling for plaintext and valid credit card numbers might shock you with what Google is finding on public servers.

His intent comes into play only secondarily in my opinion. I might enter a store with intent to steal something, but if a security guard is standing next to me and a camera is watching, I'd walk right back out. The lack of security is what allowed him to complete his intentions of unauthorized copying. It does not absolve him of his crimes, but thinking about the potential for unintentional restricted data access tells me that his crimes sit in line with the failed (non-criminal but out-of-compliance) policies of the host.


> If these clowns don't know how to secure their own damn servers, let them pay the price that will be exacted by less scrupulous individuals.

AT&T will not be affected whatsoever by a security breach; only those people whose information is leaked will be affected. The whole point of a white hat is to show this vulnerability and have it fixed before damage is done by someone with malicious intent.

> That's how the free market works. Stupidity is severely punished. They will very quickly learn how to properly set permissions on their server.

We do not live in a free market, and corporations are disproportionately powerful compared to individual people. You are asking that individual people have their data leaked and their lives potentially affected so that AT&T can look bad and then walk away from this situation without any punishment.

Further, it is very clear that companies make mistakes all the time in configuring their servers and tools in ways that make data leaks and theft possible. We should demand that these flaws be exposed and fixed ASAP; there is nothing to be gained here by harassing those doing that exposure.


> AT&T will not be affected whatsoever by a security breach

That's naive. If my emails become public, trust me, I'll cancel my AT&T service. If AT&T becomes known for airing people's dirty laundry, they will quickly bleed customers.

> Further, it is very clear that companies make mistakes all the time in configuring their servers and tools in ways that make data leaks and theft possible.

Yes, they do. And in cases where individuals are hurt, those individuals sue the company involved. Either individually or collectively. Those companies do pay for their mistakes.

Except, of course, in cases where no actual measurable harm was done by the security breach.

> We should demand that these flaws be exposed and fixed ASAP; there is nothing to be gained here by harassing those doing that exposure.

There is a reason we vest the authority to enforce laws and pursue criminals in only a select few trained individuals. It's naive to think random teenagers have a fine grasp of the law, civil rights, and a well-tuned moral compass.


> A web server isn't an agent of the company and has no capacity to grant or deny permission.

A web server certainly can grant or deny permission, but it seems that this one didn't.


That is perhaps not the same as "capacity" in the legal sense. Is a web-server legally competent?


The blame would rest on the admin who set the permissions. You can't blame a lock for not being locked, but you can blame the night watchman for not locking it. This doesn't remove blame from the intruder though.


A webserver that doesn't restrict access (not even obfuscating the URL) is more similar to a "Free reading material!" shelf at the bookstore. If you go and take one of everything, it's not your fault if the bookstore mistakenly put things there that they'd intended not to be freely available.


> A web server isn't an agent of the company and has no capacity to grant or deny permission.

Imagine I send a company a polite letter, requesting permission. The CEO hand-writes a letter (with his quill pen) telling me that I may access the information. After doing so, some critics on the internet start complaining that "A letter isn't an agent of the company and has no capacity to grant or deny permission."

Your claim is completely bogus. A web server DOES have the capacity to grant or deny permission because it is simply the mechanism by which the granting is delivered. Those who configured the server were the ones granting the permission.

I do not believe that the judge is claiming a web server cannot grant permission, I believe the judge is claiming that having to construct the URL by hand (rather than clicking on a link) is "a security measure" that has been "bypassed". For what it's worth (not much) I disagree strongly with this interpretation.


By that reasoning, a lock has the capacity to grant or deny access to whatever is behind a locked door. And if I pick the lock, well, that just means I was sufficiently persuasive that the lock agreed to let me in, doesn't it? Clearly, by using a lock that opens in response to certain inputs, the owner is choosing to grant access to anybody who provides those inputs.

I'm not trying to argue that guessing sequential IDs in a URL is morally the same as picking a lock. I'm arguing that in both cases, there's no human in the loop, so it's not at all obvious to what extent a human should be assigned responsibility. In your example, the letter does not have agency, but the CEO certainly does; and if weev had written 110,000 letters to AT&T that were read and responded to by humans, I can't imagine how there would be any case against him.

See also: the debates surrounding Google's autonomous cars, or the Do-Not-Track header.


On the other hand, if Weev is sentenced: what does that mean for such things as Google's security flaw bounty?


>> required visiting an AT&T web address with a particular – and easy to guess – code tagged onto the end.

How is this different than a password?


A password at least makes it clear to a bystander that some access control is intended at that URL. Consider the silly case where I have a server responding to example.com/funny/ and then try to claim that it was secure simply because I had not published the link. People would be quite confused if they went to jail for visiting it.


Someone can provide you with a clickable link, as in for instance this submission, and you would never even know that the content you are accessing is supposed to be "protected".


You can format a link to be something like:

http://username:password@members.example.com

I wouldn't say that means the account in question is unprotected.


If you are going to nitpick, I will say that this is a feature that relies on browser support. It's not fundamental to the web. Query strings, however, by definition need to be supported on the server side. They are a part of the web. They are required for the web to work.

Why is "browser-support" relevant? Your example is not supported in MSIE. I also thought it was removed from Chrome (in the name of "simplicity"), but I may be wrong.

A link with query-strings is guaranteed to work for everyone.

http://support.microsoft.com/kb/834489


Huh, I had no idea that feature had been deprecated. I guess it's been a little longer since I used it than I thought.


It was used for lots of http://famous-website.com:long-token-nobody-will-ever-read@p... style attacks.

Microsoft's solution to the problem may not have been ideal, but at least that was the reasoning behind it.

Edit: And what do you see once you click post? Hacker news ironically proving Microsoft's point. It's a wonderful world we live in.


I see your point, but how does this apply to this case?


I guess this is exactly the thing that the court must decide on: whether guessing that code can be considered as a circumvention of security measures or not.


Following that logic breeds bizarre results.

What if you find this magic token because it was embedded in some client-side JavaScript login form? Are you a hacker for viewing the source?

Securing content on the internet is easy. If you don't want it accessible to just anyone, don't serve it in response to unauthenticated HTTP requests.
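
(A minimal sketch of that, using Python's standard library; the realm string and the credential check are placeholders, and a real server would validate whatever the client sends:)

    # Sketch: refuse any request that arrives without credentials.
    from http.server import BaseHTTPRequestHandler, HTTPServer

    class AuthRequired(BaseHTTPRequestHandler):
        def do_GET(self):
            if self.headers.get("Authorization") is None:
                self.send_response(401)  # no credentials offered: refuse outright
                self.send_header("WWW-Authenticate", 'Basic realm="private"')
                self.end_headers()
                return
            # Placeholder: a real server validates the credential here.
            self.send_response(200)
            self.end_headers()
            self.wfile.write(b"hello, authenticated client\n")

    HTTPServer(("", 8000), AuthRequired).serve_forever()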

Why are we putting the legal responsibility of maintaining security on that content on everyone except the ones actually in position to do so?


If I look under your doormat, and there is a key, and I use it to open your front door...


Rather, if you leave a (possibly classified) document under your doormat, am I a criminal if I find it and read it?


Depends on the document and jurisdiction. If I remember correctly, some levels of military classification here in Finland require you not to read the document and to return it to the officials. Of course, the one who left the document would also at least get reprimanded.

Using someone's password without permission is just as illegal whether you shoulder-surfed it, cracked it, or read it from a post-it note.


A house's front door implies an expectation of privacy. A web server implies an expectation of public access.


Yes, the 403 status code exists for a reason!


We live in a tech-filled world with no reliable means for responsible disclosure, no way to hold a company accountable for responding to responsible-disclosure attempts, and whistle-blowers who are immediately branded as "criminals" and "hackers".

This whole process, or lack thereof, needs some serious disruption.

Edit: My comment is intended to be a general observation and not specifically about this case


>> We live in a tech-filled world with no reliable means for responsible disclosure

There are many avenues for responsible disclosure; Google that phrase.


I guess that's why companies like Google, Facebook, Mozilla or Microsoft go to great lengths to publicize their disclosure platforms? This endless stream of avenues exists, yet these companies feel the need to create their own? Right.

CERT-CC is the closest we have, and it does little to ensure things are actually resolved, nor does it afford any sort of protection to the reporter.

Care to link to one of these "many avenues" instead of just pointing me to Google?


As far as I know, following responsible disclosure practices does not waive any legal liability. It may make you look professional, but if someone who is friends with the FBI wants to fuck you, they can still fuck you.


Following the published disclosure guidelines that Google and Facebook created will in fact prevent the FBI from "fucking you", because any lawyer in the world can read the plain meaning of the statutes and point out that a reasonable person would believe they had permission to conduct tests.

On the other hand, if you're dealing with an application for which you don't have any written permission to test, just stop. You are not entitled to conduct your own security tests of other people's applications. Leave them alone.


> You are not entitled to conduct your own security tests of other people's applications. Leave them alone.

Your attitude is part of the problem and why we need a solution.

Let's create an analogy: I drop off my daughter at daycare; I've entrusted this place of business with something vital to me, as have numerous other customers. I get off work early to go pick up my daughter and, upon arrival, I find the guardian eating lunch in an adjacent room and the children left completely unattended. This is clearly a problem on numerous levels.

So what do I do? Do I politely inform the administration of the issue and hope they will fix things immediately? What if they blow me off and do not discipline nor change policy? Do I withdraw my child's enrollment and just blindly hope that other customers know of their utter disregard for the trust we've placed in them? Do I report them to the police or child protective services? Do I picket the business and attempt to inform other people of the issues? What if informing the public simultaneously informs some criminals of the ability to do some baby snatching?

In this scenario, a parent actually has some legal recourse options to ensure the problem is fixed. The issue in web security is there is no governing authority to report the problem to, and if we take our findings public, we place other customers' information at risk and put ourselves in a position of legal risk as well. We get branded a "criminal hacker" and find ourselves off to jail.


Two things.

First, you're responding to a factual argument with an argument about my attitude. It is not "my attitude" that people "shouldn't" be entitled to test applications. It is a fact that they are not allowed to do that. Unauthorized access to computer systems, which has a broad but actually very straightforward definition, is unlawful. If you cause damages when you do it, you're liable for civil damages. If you attempt in any way to profit from having done it, you're liable for a felony charge.

Second, arguing about this stuff by analogy is perilous. You can see that here, because it's immediately obvious that observing your child's caretaker eating lunch is not comparable to breaking into a web application to steal a database over the span of several days. To attempt to equalize the analogy, imagine that instead of simply observing your child's caretaker, you instead break into that caretaker's residence or place of work. You've now committed a felony.

I am, for many obvious reasons, in favor of security testing, and I'm happy to speak glowingly of the companies that have set up policies to make it safer for researchers to test their sites. In advice I've provided on HN and to startups, I've consistently told companies to create pages to thank security researchers.

Having said all that: if you don't have permission to test someone's web app, don't do it. The law very reasonably says that if a company doesn't want to submit itself to unauthorized intrusive testing by strangers, it is entitled to treat its applications, its servers, and the data it holds as private property. Not every case in which a stranger pokes for flaws in someone's app will be open-and-shut, and that's for the best too. But the rule of thumb is very simple: leave other people's apps alone.


(To be clear, I'm speaking more broadly about the topic and absolutely not condoning weev's alleged actions. Lack of responsible disclosure, discussion of profiting from the flaw, and exploiting it far beyond simple validation tests are all going to make it very difficult for him.)

Observing sequential identifiers in a URL and validating a gaping security hole is hardly something I'd classify along the lines of "stealing a database", nor should it be considered "unauthorized access". The judicial system clearly feels differently, however, and that's exactly the sort of thing I'm arguing needs to change.

Of course, discussing legislative changes that would allow anyone to perform security testing is probably not a very popular topic with the founder of a company that provides security research and testing.


A.A. and his alleged coconspirator may be exonerated, so maybe you're right. But using a flaw like this to download 100,000 email addresses, and then select from those addresses those of prominent members of the media to do outreach for a PR campaign, all the while talking on IRC about selling the addresses to spammers --- that's something more than simply observing increasing numeric IDs.

Your last sentence is needlessly and pointlessly hostile. If I was arguing out of personal interest, I wouldn't be recommending that companies pre-consent to having strangers test their web apps by setting up thank-you pages, now would I?


That last sentence is incredibly lame. I've had problems with 'tptacek (from an incident several years ago that I'm finally over) but he argues for security policies that he believes work, not those that are in his interest.

A wild-west attitude, with every man for himself, no-holds-barred, no-legal-recourse-when-you-are-hacked, is the best possible thing for security people, who would be kingmakers in such a universe.


The security field in such a universe would be a gigantic, insanely lucrative shakedown racket. "I can help harden your systems against attacks — or I can just steal your customer database and sell their info to these nice Russian gentlemen. Whichever you prefer."


"I showed up early and saw something really bad" is hardly the same category as "I decided to test the security of someone else's webapp." If weev did nothing and randomly got someone else's private data sent to him (and people have reported similar incidents here on HN), that would be similar.

Is the list of questions in your 4th paragraph meant to say that weev had no hope to get this fixed besides writing a tool that pulled down the information of thousands of users and then taking it to the press?

> We get branded a "criminal hacker" and find ourselves off to jail.

It's not that hard to avoid landing in jail for computer crimes.


Disruption by who? Other sovereign entities?


Perhaps by the state in which a business or other entity is registered.

In some states there are strict laws about maintaining privacy of customer data. But, AFAIK, where I am there's nothing specifically addressing negligence in failing to address a security breach that can be called "cracking".

Government could, as part of its protection of its citizens, offer a service of receiving security notices from third parties and passing them on to registered entities. The government-run body could then check - and possibly advise on - the fixing of the [potential] data breach.

In the UK the Information Commissioner's Office handles data protection issues (including legislative requirements placed on organisations). They have a form [a .doc, that tells me a lot TBH!] for companies to notify the ICO (http://www.ico.gov.uk/for_organisations/data_protection/lose...); it's not much of a stretch to have a form for third parties, but there would need to be legislation to deal with enforcing responses and providing an amnesty for those making [non-malicious] reports.


Perhaps something like wikileaks for internet security? Located outside US jurisdiction?


He should totally have posted those on pastebin instead and shared on the net.

I think that if one finds an embarrassing security vulnerability, they should look up the offending company, and upon finding that the company ever took part in crap like that described in the article, publish the vulnerability wide open, goatse-style, anonymously via pastebin.

Those companies should suffer to the end of times and their clients should too.

Unless that company properly apologized, which they never do. Corporations don't seem to be capable of that. Not in their DNA.


Why should I, as an innocent party to a company's security failings, have my personal details revealed? This is exactly why we have laws protecting against identity theft and fraud.

Naming and shaming the company and the extent of their security failing is enough. But only after they have patched the hole.


Because you trusted the wrong people, and this has consequences.

The customer has to suffer from the mistakes of the companies so that:

1. She stops trusting any company blindly with her data.

2. The market demands improvements from the service providers (and no, sacrificing a scapegoat in court is not an "improvement").

3. Service companies react. Unless there's a major scandal, companies are slow to react to vulnerabilities.

About that last point, it's a bit unfair to assume that the activist tried to contact AT&T and they didn't react. I honestly don't know if that's what happened in this case, and I hope it is. However, there are several cases of companies "turning a blind eye" to the issue.

As a customer, the only entity to blame when such a breach occurs is the provider that you entrusted with your data. Not the activists (regardless of their intents) nor the hackers or whatever monster you heard about in the media.

If I entrust (and pay) someone with my bike and it gets stolen on his watch, who's to blame? The thief or the guardian?


This is an excellent point... sloppy, careless companies can operate more cheaply (until they go down in flames); insulating the consumer from these security risks would introduce the same kind of moral hazard as insulating investors from the downside on risky investments.


"Why should I, as an innocent party to a companies security failings have my personal details revealed."

Because life is pain?

But of course you can always sue the company that leaked your data and then mistreated people who warned them. Hell, you should. Probably with a class action suit.


> Because life is pain?

lol... just a touch dramatic there.


Because shaming is a whole lot of fun.

So we will come up with whatever justifications needed to say it's okay.


This is extremely dangerous. Condemning the whistle blowers results in a much less secure environment, since you'll scare away the white hats, and insecure systems will remain insecure. We need whistle blower protection laws in place, and we need them now.


What's further terrifying is that the courts have barred him from using non-Windows, non-monitored computers as part of his bail conditions - prior to a trial to determine guilt.

A person whose only marketable skill is on the Internet, completely prohibited from using ssh or virtualization, by nothing more than an error-riddled FBI complaint document. (This started prior to the grand jury indictment.) He's been without significant work for over two years as a result.

It's a scary thought that this could happen to any one of us.


>What's further terrifying is that the courts have barred him from using non-Windows, non-monitored computers as part of his bail conditions - prior to a trial to determine guilt.

How does he use a telephone? Or a suitably advanced toaster?


At first, we had to drive all over to find a phone that couldn't even MMS (because MMS can be used to send email). It was a nightmare.

Eventually some of those restrictions were relaxed.


Bah, more weev hagiography? He's not a whistle blower, he's a troll.

Look: first off, he didn't tell AT&T about the hack, he told a bunch of news organizations. See http://www.forbes.com/sites/firewall/2010/06/09/atts-ipad-ha... for details. "we did a benefit analysis and decided they could take our story viral the fastest."

One of those organizations presumably told AT&T; all AT&T has ever said is that they learned about the exploit from a business customer.

Second, weev is the same guy who got up on stage at Toorcon and lied about finding an exploit in Firefox. Just flat out lied. One of the ways in which he's a great social manipulator is that he has no qualms about the big lies, and doesn't really care if they're noticed.

Third, at the time, weev said "There is nothing in Apple’s SDK APIs that would allow an application to have this identifier [the ICC-ID] – it is a shared secret that should indicate physical proximity to the iPad." So while it's amusing to see his defenders talk about how unimportant ICC-IDs are, it's an interesting change of tune.

Fourth, even if you believe weev when he says he forced the Amazon delist of gay-themed books (I never have), that goes over the line from activism to exploiting.

Yeah, we do have to protect even unpleasant, obnoxious whistle blowers. I just don't see how weev counts as a whistle blower, and I think that any article on his situation should present the full picture rather than painting him as a misunderstood good guy.


Weev is different. That's for sure. But what's at stake here is bigger than him. Anyone can be sued for going to a damn URL. That's the real problem here (as well as a myriad of others).


Oh, weev isn't a good guy. He's a right shithead.

He's also a whistleblower.

The email addresses were not sold, pastebinned, or sent MobileSafari overflows. The media was contacted because he's a megalomaniac.

One thing he is not is a criminal.


And who at AT&T faces jail for publicly exposing these emails in the first place?


10 years in jail for incrementing an integer at the end of a URL. Wow. What a disconnect.


It's pretty scary that you can go to jail (for a long time!) for something as simple as:

> curl http://example.com/att/[1-100].html

I understand what the lawyers are saying, but this is just beyond common sense. There should be liabilities for deploying software that publicly gives out information you do not intend it to.


If you've never deployed software without security flaws, it means you have never deployed software.


Sure, but why should whoever makes the software bear no responsibility? If a building is found to have a flaw, it's the architect or an engineer who gets the questions, not whoever noticed the flaw.


The flaws in architecture are well-understood and there are rarely-changing building codes to describe exactly what should and should not happen.

Software does not exist in any such stable world. There can be two pieces of software, each perfectly legitimate and doing exactly what they intend, that when both are present format a customer's hard drive. Who does the customer sue then?

If you want to make developers responsible, I won't personally be hurt much, since I can make a shitload of money finding vulnerabilities in other people's code (and have done so in the past). The lawyers will make lots of money, too, as we have jury trials to figure out whether that SQL injection was really negligent or not.


At the same time, it's worth noting that running curl in a loop is slightly less difficult than factoring RSA keys. So if someone "hacks" you via an unauthenticated public web service, the courts should treat that less as a computer crime on the attacker's part and more as negligence on your part. But on the other hand, if someone breaks into your well-engineered system through some nearly-impossible attack, then you should not be liable.


The indictment filed against Weev by the DOJ in 2011:

http://www.scribd.com/doc/113664772/46-Indictment


The complaint cites "losses"

> To date, AT&T has spent approximately $73,000 in remedying the data breach. Those costs include, among other things, the cost of contacting all iPad 3G customers to inform them of the breach and AT&T's response to it.

BOO-#$%^ING-HOO.

As if AT&T shouldn't have had to lock down their users' info, and it's some kind of injustice to them that they have to do things the right way now. This perfectly highlights the fundamental disconnect between the corporations, the (their) legal system, and the Internet-connected world.


How about this: what if we have another section on websites called something like 'Submit a bug' next to 'About,' 'Contact Us,' etc.?

If more and more sites make it easier to report these things to them, hopefully either the error will be fixed or, if it is being ignored, the "hacker" will have some kind of proof to claim that he tried to tell the bank to fix their door.


How about this: We don't prosecute people for loading URLs on the public internet first and foremost.


> "The more concerning charge to online activists watching Weev’s case is based on the Computer Fraud and Abuse Act, which forbids “unauthorized access” to a computer."

There are HTTP codes to indicate lack of authorization. Presumably he didn't encounter those. How is this an argument?


Although we all know what 'hacker' means, I suspect that most people think the word means 'someone who does nefarious things with computers.' Calling him a hacker probably doesn't help his case, regardless of whether he's guilty or not.


He's no whistleblower; a criminal bum is all. He's a Freemason (wannabe or otherwise), enough said.



