There were some others in line to assist (I live in Europe), but they all feared various forms of retribution/harassment from the FBI/DoJ, so it fell to me (someone with comparatively little to lose, stateside). This only serves to underscore the truly chilling effects of these sorts of governmental abuses of power.
I also host his website, http://freeweev.info, where you can make donations to his case via both Paypal and Bitcoin. (He has various restrictions placed on his use of technology while out on bail.)
Please feel free to contact me directly if you have questions related to his case. Contact info can be found in my profile.
5539 AD00 DE4C 42F3 AFE1 1575 0524 43F4 DF2A 55C2
Sadly, Weev's nature is that of a rattlesnake, and he's basically burned every bridge he could on his way here. Thus, the EFF will have nothing to do with him.
Could you expand on that? Why doesn't the EFF want to help him?
What isn't quite clear to me is what they did with this data. It seems they reported the hole to AT&T who then fixed it. That's good. It also seems they passed the data off to reporters, which may be bad for their case. It seems like they acted, at least mostly, responsibly.
Assuming the data was never released to the public I don't think they should be prosecuting Auernheimer. That said, it does seem like they have a case based on the wording of the law.
It's a very real possibility that he's facing jail time, especially when you consider the volume of data. He's charged with breaking 18 USC § 1028A (aggravated ID theft laws) which carries a mandatory minimum of 2 years. Federal judges have some control over this but generally stick with sentencing guidelines. He could fight it and win, get them to reduce the charges, or do some serious work for the gov and hope the judge goes easy on him.
Note: I was convicted of violating 18 USC § 1028A (among other laws) so I have personal experience with this law, sentencing guidelines, and judge discretion but IANAL.
The prosecution is asserting that access to any system without authorization is "access to a protected system" in the legal sense, which is obviously bogus. This would make the Googlebot's operators criminally liable if I put up a site at "johndoessocialsecuritynumber.com".
In fact, authorization is built into HTTP. There were no protections in place surrounding this data. Regardless of what he did with the data, downloading something from a public website is not criminal. (Though irrelevant legally, it's worth noting that he did nothing with the data except shame AT&T.)
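The authorization "built into" HTTP referred to here is the 401 challenge/response mechanism (HTTP Basic auth). A minimal sketch of how a server uses it — hypothetical handler and made-up credentials, not AT&T's actual setup: a server that wants to restrict a resource answers a bare request with 401 and a WWW-Authenticate challenge; answering a bare GET with 200 is, at the protocol level, granting access.

```python
import base64

# Hypothetical illustration of HTTP Basic auth (the mechanism HTTP itself
# provides), not AT&T's actual configuration. Credentials are made up.
VALID = {"alice": "s3cret"}

def handle_request(headers: dict) -> tuple[int, dict]:
    auth = headers.get("Authorization", "")
    if auth.startswith("Basic "):
        try:
            user, _, pw = base64.b64decode(auth[6:]).decode().partition(":")
        except Exception:
            return 400, {}
        if VALID.get(user) == pw:
            return 200, {}  # permission granted: serve the resource
    # No valid credentials: challenge the client instead of serving the data.
    return 401, {"WWW-Authenticate": 'Basic realm="example"'}
```

The point being made in the comment is that AT&T's endpoint never issued such a challenge; it simply served the data.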
(I put up weev's bail and am handling some of his PR while his computer restrictions are in place.)
Also: the authorization "built into" HTTP is used by a tiny minority of all web applications. Clearly, if you break into a retail brokerage and execute trades on behalf of other people, you're going to be liable, regardless of how that retail brokerage chooses to authenticate access to the site.
(I don't have a position about this particular case; I know virtually nothing about it).
It wouldn't expose Google to that liability because Google is a large US corporation.
If it were a single greyhat individual with a history of unpopular speech, the argument would probably go something along the lines of "any reasonable person wouldn't spider the whole web, because they'd know they'd eventually get SOMETHING they're not authorized to access..."
However, I believe it remains black-letter illegal to "test" websites for security flaws in such a way where you actually exploit flaws and gain access to internals or sensitive data.
Either way, don't do it. If you don't have permission (Google, for instance, gives blanket permission for testing), don't fuck with other people's web apps. It's very difficult to ensure that any kind of security testing, outside of really basic stuff like CSRF, won't disrupt the site; even silly XSS vectors can get cached in backends and replayed to other customers. You're unlikely to pick up a felony charge for doing this, but you can be sued.
Edit: after reading selections of IRC logs and e-mails from the indictment, my sympathy with weev has been reduced. Granted, an indictment will select the most salacious excerpts to make its case in the strongest way possible, but the conversations in the indictment are far from white hat (unless the defendants manage to convince the court that their talk of phishing and spamming was merely juvenile humor).
As a matter of law, the onus is not on the operators of web applications to ensure that their defenses are effective. If you do something stupid in your web app, but it remains clear to me that I'm not meant to access whatever your stupid app failed to prevent me from accessing, I can still be charged with a felony for doing it.
Now, if whatever it was that you tried and failed to protect wasn't obviously protected, so that I can argue in court that a reasonable person would have believed they did have permission to interact with it, I'll probably win in court. The prosecution will no doubt collect all my emails and any other piece of evidence it can find to build a case that I knew I wasn't supposed to hit your site that way. If you were especially dumb, and built an application that not only exposed sensitive information, but did so in a way that any normal person would just have assumed that sensitive information was theirs for the taking, the prosecution will have a hard time making their case.
I get why Bratus would testify. The defendant here needs all the help he can get and is morally entitled to the best case he can possibly present. I respect and admire everyone who is trying to help him out. But presuming he's not guilty of a real conspiracy to defraud anyone, I have a hard time believing it's because AT&T's web application vulnerability entitled the public to their database.
I don't think I'd like it to be harder than it seems to be to prove unauthorized access.
I know that's the opposite of what most nerds like me want, but I think we're well served by a very broad definition of unauthorized access, and we're poorly served by vague conspiracy laws in more places than just online.
Note that under the US Code, you need both elements. Just plain unauthorized access isn't a federal crime; you need an intent to defraud.
I do agree with you regarding conspiracy, but you are right that I would in principle prefer to have every Internet-facing system as robustly secured as if it had been independently reviewed by you, cpercival, and the people who wrote the space shuttle's software. A small part of the reason I want this is so that absolutely anybody can confidently write and deploy scraping software that collects and analyzes information in new ways (e.g. IBM Watson, better search engines, or some other as yet undiscovered idea).
I don't think trafficking in any information should be a crime, though (unless it's the government - an asymmetry is necessary there), so I don't think a criminal trial is in any way justified.
There was no other authentication system aside from HTTP in place in this case.
If that's the only one, and it's wide open, any reasonable person knows that the information there is public. That's how the web works. We're allowed to load URLs on planet Earth.
So it's obviously more complicated than just "any unauthenticated URL must be fair game".
If he never had any intention of using them, or just wanted to publicize the breach, why not gather two or three and then go to the press?
I guess an analogy might be - I see a bunch of boxes on the curb outside a doctor's office. I open one up and look at the first manila folder - it's someone's medical records. I open the second, it's also medical records.
If I take those two to go show a reporter, or yell at the doctor's office, I'm ethically clean. But if I take the boxes home with me, I'm in a much trickier legal and ethical situation, as why did I need to take the records home?
It's difficult to analogize this to the physical world; maybe you could say that you are the garbage collector, so your automated truck picking up one box of trash that happened to contain medical records is the same as picking up all the boxes, but that's still far from a perfect analogy.
"The Account Slurper attacked AT&T's servers for several days in or around June 2010, and was designed to harvest as many ICC-ID/email pairings as possible."
This is the indictment, so not proven, but the period for which the script ran is probably accurate. So nah, not a short time.
There are also some great IRC logs later on where they're talking about using the collected email addresses for phishing, debating whether or not it's worth the effort if they don't get passwords, and talking about selling the resulting email database.
It looks bad but I think it's more important to focus on the fact that they ended up doing the Right Thing with the data instead of fucking anyone directly - even when distinctly aware of the various opportunities available for misusing that data.
Many greyhat researchers don't have those ethics or morals.
(1) An agreement between two or more persons to commit at least one crime.
(2) A person joining the conspiracy knowing of at least one of its objects and intending to help accomplish it.
(3) One of the members of the conspiracy performed at least one overt act for the purpose of carrying out the conspiracy.
1. Let's build a database of stolen AT&T email addresses associated with iPads and then sell them to spammers.
2. Here, run this script as I adjust it to increase the number of accounts it successfully finds.
3. Oh, look, I ran this script and got 100,000 email addresses.
Not good, right?
Look, if you read Spitler's indictment, it really doesn't seem like this particular conspiracy wanted to do anything but troll a huge company with thousands of people's personal information. I don't have a whole lot of sympathy for the defendants here, but it does seem like an injustice that could have been addressed in civil court.
It seems like a stretch to convince a jury that these people really wanted to sell the information they collected.
On the other hand, I think the idea that AT&T made this information available to the public in such a manner that you could have in good faith harvested hundreds of thousands of addresses is pretty much bunk. It's not going to help that A.A.'s first instinct was to run to the media because simply having the addresses was such a big story that they'd be on the front page of NEWS.GOOGLE.COM. He more or less immediately made clear to everyone that he knew he had no business handling that data.
This also speaks to motives. At the end of the day, I don't really care whether or not weev is a good guy. I do think it's important to be really clear about why he does the things he does, because otherwise you're not having the real conversation. IMHO, it's more important to protect jerks than it is to protect nice guys. It's harder to protect jerks.
Therefore, you're doing the community a disservice if you paint weev as an angel. Let him be who he is. Then defend him if his case merits it.
Tangentially, "many greyhat researchers don't have those ethics or morals" is irrelevant. The question at hand is whether or not weev acted morally, not whether or not he did better than average.
It's entirely possible that I'm focusing too much on motive; possibly the end effect (hole fixed) matters more than why weev did it in the first place.
"Weev and a fellow hacker who originally uncovered AT&T’s mistake and collected the e-mails didn’t ask the company for permission to access the Web addresses that shared iPad users’ private information"
I really don't see how this argument holds up.
From a technical point of view the very nature of HTTP includes asking for permission.
You send a request: "Please can I see the information at this URL?"
If the organisation running the server does not give that permission, it should not serve the data.
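That request/response exchange can be read as an ask/grant protocol: a 2xx status means the server granted the request; 401 and 403 are the protocol's built-in ways of refusing. A rough sketch of that reading (the commenter's framing, not a legal test):

```python
# Sketch of reading HTTP status codes as permission semantics.
# This is the argument's framing, not a legal standard.

def permission_granted(status: int) -> bool:
    """True if the server answered the 'please can I see this?' request
    by actually serving it (any 2xx success code)."""
    return 200 <= status < 300

def explicitly_denied(status: int) -> bool:
    """401 (authenticate first) and 403 (forbidden) are HTTP's
    built-in ways of saying 'no permission'."""
    return status in (401, 403)
```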
These charges seem pretty odd to me, hopefully it gets resolved in a sensible manner.
A web server isn't an agent of the company and has no capacity to grant or deny permission.
Think of it as a security system you install in your home. Now, if the security system is malfunctioning and you notice that it is malfunctioning ... do you call up the owner and let them know or do you go inside and look through their stuff? If you go inside and look through their stuff, it's trespassing.
Obviously the two scenarios aren't the same, but I'd imagine that's the logic used in the argument.
The second issue at play is the fact that the guy apparently collected some email conversations to use as proof. Using my business metaphor, walking into a closed business that to a layman appears open is a simple mistake. Anyone could reasonably assume the business is open. However, collecting their merchandise even just to prove they forgot to lock up would still be stealing. In this situation, it's unauthorized copying. Most reasonable people would consider this to be unacceptable.
The second situation is muddied a bit further by my wording "most". Websites accessible without authentication are easy to scrape. What if the Googlebot crawled the site and collected the information due to a poor robots.txt? What if you walked into the business and tried some free samples (unauthenticated websites are implicitly free samples)? Data privacy comes into play here, though, and I would argue that any reasonable person would understand these as private communications. While they are accessible to view, any reasonable person would understand it is unethical to read them and unacceptable to copy them.
The fatal flaw of the defendant was copying the emails. Up to that point, he was completely within reasonable practice, in my opinion. Here's a takeaway for any startup: security isn't a joke. It's a career ender, it's a business ender, and it could be a disaster for the customers who trusted you. Hire the best programmers, but set a little budget aside for an external penetration test, and take the results seriously. Don't lose your company and your reputation because of a caffeine-fueled oversight.
You have the capacity to recognize where you should be and where you shouldn't be. What you should be seeing and what you shouldn't be seeing.
Right from wrong.
> A web server is more of a business in this metaphor. If the door is open and the lights are on, it's implied you can come in and look around.
If you're inside a business and you see a door open and it is evident by the design of the building that it is their storage space ... you do not have the right to waltz on in. You damn well know through your experience in hundreds of other stores that this area is used by employees and for employees only. You do not belong there.
Are you telling me he read those emails by accident? Just stumbled on them? Or did he know exactly what he was doing?
Enough of this white hat bullshit. I do not have the right to self-deputize and become a vigilante on the Internet. If these clowns don't know how to secure their own damn servers, let them pay the price that will be exacted by less scrupulous individuals. That's how the free market works. Stupidity is severely punished. They will very quickly learn how to properly set permissions on their server.
Like I said, the guy went too far. But visiting a public-facing website is not a crime, no matter how you happen to discover the URL. There's no sign on the door saying "keep out", even though the server is more than capable of displaying one. Do you have a right to walk into any business, or walk into their storage space? No, but any reasonable person (notice I keep using this phrase? It's going to come up in court) would assume if the lights are on and the door is open, you can walk in. You might be mistaken, and a clerk might show you out. Intent is a critical factor. Like I said, the guy went too far. He didn't enter by mistake, though someone could have. He entered with the intent of making unauthorized copies of private data. Walking into a store's storage space isn't illegal, but a reasonable person would know that taking pictures of customer data is.
It's not illegal to visit any public-facing Internet site. It is illegal to make unauthorized copies of restricted data. The company is hugely to blame in this situation for leaking private information. So is the guy who broke the law by making unauthorized copies of it. I support criminal charges being filed against him. My point was that there are two issues at hand, one illegal and one perfectly within the law: implied consent at odds with intent. It should be an interesting case.
Arguing about methods of responsible disclosure, a very dead horse that has been beaten to dust, seems like a waste of time and not really relevant.
This is just the endgame of the chilling effect of arresting and hounding researchers, which has been going strong ever since 2001.
We're in agreement here. I think we're both making the same point. Intent is the key here.
The problem is that if you just consider servers, configurations, permissions, and other technical aspects ... intent doesn't enter the picture. That's the wrong way to think about this.
His intent comes into play only secondarily in my opinion. I might enter a store with intent to steal something, but if a security guard is standing next to me and a camera is watching, I'd walk right back out. The lack of security is what allowed him to complete his intentions of unauthorized copying. It does not absolve him of his crimes, but thinking about the potential for unintentional restricted data access tells me that his crimes sit in line with the failed (non-criminal but out-of-compliance) policies of the host.
AT&T will not be affected whatsoever by a security breach; only the people whose information is leaked will be. The whole point of a white hat is to expose the vulnerability and have it fixed before damage is done by someone with malicious intent.
> That's how the free market works. Stupidity is severely punished. They will very quickly learn how to properly set permissions on their server.
We do not live in a free market, and corporations are disproportionately powerful compared to individual people. You are asking that individual people have their data leaked and their lives potentially affected so that AT&T can look bad and then walk away from this situation without any punishment.
Further, it is very clear that companies make mistakes all the time in configuring their servers and tools in ways that make data leaks and theft possible. We should demand that these flaws be exposed and fixed ASAP; there is nothing to be gained by harassing those doing the exposing.
That's naive. If my emails become public, trust me, I'll cancel my AT&T service. If AT&T becomes known for airing people's dirty laundry, they will quickly bleed customers.
> Further, it is very clear that companies make mistakes all the time in configuring their servers and tools in ways that make data leaks and theft possible.
Yes, they do. And in cases where individuals are hurt, those individuals sue the company involved. Either individually or collectively. Those companies do pay for their mistakes.
Except, of course, in cases where no actual measurable harm was done by the security breach.
> We should demand that these flaws be exposed and fixed ASAP; there is nothing to be gained by harassing those doing the exposing.
There is a reason we vest the authority to enforce laws and pursue criminals in only a select few trained individuals. It's naive to think random teenagers have a fine grasp of the law, civil rights, and a well-tuned moral compass.
A web server certainly can grant or deny permission, but it seems that this one didn't.
Imagine I send a company a polite letter, requesting permission. The CEO hand-writes a letter (with his quill pen) telling me that I may access the information. After doing so, some critics on the internet start complaining that "A letter isn't an agent of the company and has no capacity to grant or deny permission."
Your claim is completely bogus. A web server DOES have the capacity to grant or deny permission because it is simply the mechanism by which the granting is delivered. Those who configured the server were the ones granting the permission.
I do not believe that the judge is claiming a web server cannot grant permission, I believe the judge is claiming that having to construct the URL by hand (rather than clicking on a link) is "a security measure" that has been "bypassed". For what it's worth (not much) I disagree strongly with this interpretation.
I'm not trying to argue that guessing sequential IDs in a URL is morally the same as picking a lock. I'm arguing that in both cases, there's no human in the loop, so it's not at all obvious to what extent a human should be assigned responsibility. In your example, the letter does not have agency, but the CEO certainly does; and if weev had written 110,000 letters to AT&T that were read and responded to by humans, I can't imagine how there would be any case against him.
See also: the debates surrounding Google's autonomous cars, or the Do-Not-Track header.
How is this different than a password?
I wouldn't say that means the account in question is unprotected.
Why is "browser-support" relevant? Your example is not supported in MSIE. I also thought it was removed from Chrome (in the name of "simplicity"), but I may be wrong.
A link with query-strings is guaranteed to work for everyone.
Microsoft's solution to the problem may not have been ideal, but at least that was the reasoning behind it.
Edit: And what do you see once you click post? Hacker news ironically proving Microsoft's point. It's a wonderful world we live in.
Securing content on the internet is easy. If you don't want it accessible to everyone, don't serve it to anyone who sends an unauthenticated HTTP request.
Why are we putting the legal responsibility of maintaining security on that content on everyone except the ones actually in position to do so?
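The fix being argued for here is a single server-side check. A minimal sketch, with a hypothetical token scheme (the token value is made up), using a constant-time comparison so the check itself doesn't leak information via timing:

```python
import hmac

# Hypothetical illustration of the server-side gate the comment says was
# missing. EXPECTED_TOKEN is a made-up shared secret for the sketch.
EXPECTED_TOKEN = "let-me-in"

def authorized(presented_token) -> bool:
    if presented_token is None:
        return False
    # compare_digest avoids leaking the token's contents via timing.
    return hmac.compare_digest(presented_token, EXPECTED_TOKEN)

def serve(token) -> int:
    # Refuse anything unauthenticated instead of handing the content out.
    return 200 if authorized(token) else 401
```

The design point: the only party that can enforce this check is the operator of the server, which is the comment's argument about where responsibility should sit.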
Using someone's password without permission is equally illegal whether you shoulder-surfed it, cracked it, or read it from a post-it note.
This whole process, or lack thereof, needs some serious disruption.
Edit: My comment is intended to be a general observation and not specifically about this case
There are many avenues for responsible disclosure, Google that phrase.
CERT-CC is the closest we have, and it does little to ensure things are actually resolved; nor does it afford any sort of protection to the reporter.
Care to link to one of these "many avenues" instead of just pointing me to Google?
On the other hand, if you're dealing with an application for which you don't have any written permission to test, just stop. You are not entitled to conduct your own security tests of other people's applications. Leave them alone.
Your attitude is part of the problem and why we need a solution.
Let's create an analogy: I drop off my daughter at daycare, I've entrusted this place of business with something vital to me, as have numerous other customers. I get off work early to go pick up my daughter, upon arrival, I find the guardian eating lunch in an adjacent room and the children are left completely unattended. This is clearly a problem on numerous levels.
So what do I do? Do I politely inform the administration of the issue and hope they will fix things immediately? What if they blow me off and do not discipline nor change policy? Do I withdraw my child's enrollment and just blindly hope that other customers know of their utter disregard for the trust we've placed in them? Do I report them to the police or child protective services? Do I picket the business and attempt to inform other people of the issues? What if informing the public simultaneously informs some criminals of the ability to do some baby snatching?
In this scenario, a parent actually has some legal recourse to ensure the problem is fixed. The issue in web security is that there is no governing authority to report the problem to, and if we take our findings public, we place other customers' information at risk and put ourselves at legal risk as well. We get branded a "criminal hacker" and find ourselves off to jail.
First, you're responding to a factual argument with an argument about my attitude. It is not "my attitude" that people "shouldn't" be entitled to test applications. It is a fact that they are not allowed to do that. Unauthorized access to computer systems, which has a broad but actually very straightforward definition, is unlawful. If you cause damages when you do it, you're liable for civil damages. If you attempt in any way to profit from having done it, you're liable for a felony charge.
Second, arguing about this stuff by analogy is perilous. You can see that here, because it's immediately obvious that observing your child's caretaker eating lunch is not comparable to breaking into a web application to steal a database over the span of several days. To attempt to equalize the analogy, imagine that instead of simply observing your child's caretaker, you instead break into that caretaker's residence or place of work. You've now committed a felony.
I am for many obvious reasons pro- security testing, and I'm happy to speak glowingly of the companies that have set up policies to make it safer for researchers to test their sites. In advice I've provided on HN and to startups, I've consistently told companies to create pages to thank security researchers.
Having said all that: if you don't have permission to test someone's web app, don't do it. The law very reasonably says that if a company doesn't want to submit itself to unauthorized intrusive testing by strangers, it is entitled to treat its applications, its servers, and the data it holds as private property. Not every case in which a stranger pokes for flaws in someone's app will be open-and-shut, and that's for the best too. But the rule of thumb is very simple: leave other people's apps alone.
Observing sequential identifiers in a URL and validating a gaping security hole is hardly something I'd classify as "stealing a database", nor should it be considered "unauthorized access". However, the judicial system clearly feels differently, and that's the sort of thing I'm advocating we change.
Of course, discussing legislation changes that would allow anyone to execute security testing is probably not a very lucrative topic with the founder of a company that provides security research & testing.
Your last sentence is needlessly and pointlessly hostile. If I was arguing out of personal interest, I wouldn't be recommending that companies pre-consent to having strangers test their web apps by setting up thank-you pages, now would I?
A wild-west attitude, with every man for himself, no-holds-barred, no-legal-resource-when-you-are-hacked is the best possible thing for security people, who would be kingmakers in such a universe.
Is the list of questions in your 4th paragraph meant to say that weev had no hope to get this fixed besides writing a tool that pulled down the information of thousands of users and then taking it to the press?
> We get branded a "criminal hacker" and find ourselves off to jail.
It's not that hard to avoid landing in jail for computer crimes.
In some states there are strict laws about maintaining privacy of customer data. But, AFAIK, where I am there's nothing specifically addressing negligence in failing to address a security breach that can be called "cracking".
Government could, as part of their protection of their citizens, offer a service of receiving security notices from third parties and passing them on to registered entities. The government-run body could then check - and possibly advise on - the fixing of the [potential] data breach.
In the UK the Information Commissioner's Office handles data protection issues (including legislative requirements placed on organisations). They have a form [a .doc, that tells me a lot TBH!] for companies to notify the ICO (http://www.ico.gov.uk/for_organisations/data_protection/lose...); it's not much of a stretch to have a form for third parties, but there would need to be legislation to deal with enforcing responses and providing an amnesty for those making [non-malicious] reports.
I think that if one finds an embarrassing security vulnerability, they should look up the offending company and, upon finding that the company ever took part in crap like that described in the article, they should publish the vulnerability wide open, goatse-style, anonymously via pastebin.
Those companies should suffer to the end of times and their clients should too.
Unless that company properly apologized, which they never do. Corporations don't seem to be capable of that. Not in their DNA.
Naming and shaming the company and the extent of their security failing is enough. But only after they have patched the hole.
The customer has to suffer from the mistakes of the companies so that:
1. She stops trusting any company blindly with her data.
2. The market demands improvements from the service providers (and no, sacrificing a scapegoat in court is not an "improvement").
3. Service companies react. Unless there's a major scandal, companies are slow to react to vulnerabilities.
About that last point, it's a bit unfair to assume that the activist tried to contact AT&T and they didn't react. I honestly don't know if that's what happened in this case, and I hope it is. However there are several cases of companies "turning a blind eye" on the issue.
As a customer, the only entity to blame when such a breach occurs is the provider you entrusted with your data. Not the activists (regardless of their intent), nor the hackers or whatever monster you heard about in the media.
If I entrust (and pay) someone with my bike and it gets stolen on his watch, who's to blame? The thief or the guardian?
Because life is pain?
But of course you can always sue the company that leaked your data and then mistreated people who warned them. Hell, you should. Probably with a class action suit.
lol... just a touch dramatic there.
So we will come up with whatever justifications needed to say it's okay.
A person whose only marketable skill is on the Internet, completely prohibited from using ssh or virtualization by nothing more than an error-riddled FBI complaint document. (This started prior to the grand jury indictment.) He's been without significant work for over two years as a result.
It's a scary thought that this could happen to any one of us.
How does he use a telephone? Or a suitably advanced toaster?
Eventually some of those restrictions were relaxed.
Look: first off, he didn't tell AT&T about the hack, he told a bunch of news organizations. See http://www.forbes.com/sites/firewall/2010/06/09/atts-ipad-ha... for details. "we did a benefit analysis and decided they could take our story viral the fastest."
One of those organizations presumably told AT&T; all AT&T has ever said is that they learned about the exploit from a business customer.
Second, weev is the same guy who got up on stage at Toorcon and lied about finding an exploit in Firefox. Just flat out lied. One of the ways in which he's a great social manipulator is that he has no qualms about the big lies, and doesn't really care if they're noticed.
Third, at the time, weev said "There is nothing in Apple’s SDK APIs that would allow an application to have this identifier [the ICC-ID] – it is a shared secret that should indicate physical proximity to the iPad." So while it's amusing to see his defenders talk about how unimportant ICC-IDs are, it's an interesting change of tune.
Fourth, even if you believe weev when he says he forced the Amazon delist of gay-themed books (I never have), that goes over the line from activism to exploiting.
Yeah, we do have to protect even unpleasant, obnoxious whistle blowers. I just don't see how weev counts as a whistle blower, and I think that any article on his situation should present the full picture rather than painting him as a misunderstood good guy.
He's also a whistleblower.
The emails were not sold or pastebinned, nor were their owners mailed MobileSafari overflows. The media was contacted because he's a megalomaniac.
One thing he is not is a criminal.
> ~curl http://example.com/att/[1-100].html
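curl expands the bracket range client-side into 100 sequential requests before anything is sent. A rough Python equivalent of that expansion (the example.com URL is the thread's placeholder; nothing is fetched here):

```python
# Sketch of curl's [1-100] URL globbing: it expands into sequential URLs
# before any request is made. The endpoint is the thread's placeholder.

def expand_range(template: str, lo: int, hi: int) -> list:
    """Mimic curl's bracket globbing for a single numeric range."""
    return [template.format(n) for n in range(lo, hi + 1)]

urls = expand_range("http://example.com/att/{}.html", 1, 100)
```

This is the crux of the dispute: nothing more exotic than incrementing a number in a URL.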
I understand what the lawyers are saying, but this is just beyond common sense. There should be liability for deploying software that publicly gives out information you did not intend to expose.
Software does not exist in any such stable world. There can be two pieces of software, each perfectly legitimate and doing exactly what they intend, that when both are present format a customer's hard drive. Who does the customer sue then?
If you want to make developers responsible, I won't personally be hurt much, since I can make a shitload of money finding vulnerabilities in other people's code (and have done so in the past). The lawyers will make lots of money, too, as we have jury trials to figure out whether that SQL injection was really negligent or not.
> To date, AT&T has spent approximately $73,000 in remedying the data breach. Those costs include, among other things, the cost of contacting all iPad 3G customers to inform them of the breach and AT&T's response to it.
As if AT&T shouldn't have had to lock down their user's info, and it's some kind of injustice to them that they have to do things the right way now. This perfectly highlights the fundamental disconnect between the corporations, the (their) legal system, and the Internet-connected world.
If more and more sites make it easier to report these things to them, hopefully either the error will be fixed or if it is being ignored, the "hacker" will have some kind of proof to claim that he tried to tell the bank to fix their door.
There are HTTP codes to indicate lack of authorization. Presumably he didn't encounter those. How is this an argument?