
Jail Looms for Man Who Revealed AT&T Leaked iPad User E-Mails - robdoherty2
http://www.technologyreview.com/view/507661/jail-looms-for-man-who-revealed-att-leaked-ipad-user-e-mails/
======
sneak
Hi there - I'm the one who put up the $50k to bail weev out of jail.[1]
(Otherwise he would have had to sit in Essex County Jail during these ~2 years
since this started.)

There were some others in line to assist (I live in Europe), but they all
feared various forms of retribution/harassment from the FBI/DoJ, so it fell to
me (someone with comparatively little to lose, stateside). This only serves to
underscore the truly chilling effects of these sorts of governmental abuses of
power.

I also host his website, <http://freeweev.info>, where you can make donations
to his case via both Paypal and Bitcoin. (He has various restrictions placed
on his use of technology while out on bail.)

Please feel free to contact me directly if you have questions related to his
case. Contact info can be found in my profile.

5539 AD00 DE4C 42F3 AFE1 1575 0524 43F4 DF2A 55C2

[1] <https://twitter.com/rabite/status/270668883172671489>

~~~
VonGuard
I'm glad someone is taking on the task of defending Weev because the charges
they have against him are ridiculous and stupid.

Sadly, Weev's nature is that of a rattlesnake, and he's basically burned every
bridge he could on his way here. Thus, the EFF will have nothing to do with
him.

~~~
olalonde
> Sadly, Weev's nature is that of a rattlesnake, and he's basically burned
> every bridge he could on his way here. Thus, the EFF will have nothing to do
> with him.

Could you expand on that? Why doesn't the EFF want to help him?

~~~
VonGuard
Sorry, not going there. Suffice to say, he burned a bridge.

------
driverdan
From my understanding, all you had to do was pass the ICC-ID to a script on
AT&T's servers to get back the user data. I can see the court interpreting the
ICC-ID as a form of access control, since you'd have to guess them much as you
would passwords.

What isn't quite clear to me is what they did with this data. It seems they
reported the hole to AT&T who then fixed it. That's good. It also seems they
passed the data off to reporters, which may be bad for their case. It seems
like they acted, at least mostly, responsibly.

Assuming the data was never released to the public I don't think they should
be prosecuting Auernheimer. That said, it does seem like they have a case
based on the wording of the law.

It's a very real possibility that he's facing jail time, especially when you
consider the volume of data. He's charged with breaking 18 USC § 1028A
(aggravated ID theft laws) which carries a mandatory minimum of 2 years.
Federal judges have _some_ control over this but generally stick with
sentencing guidelines. He could fight it and win, get them to reduce the
charges, or do some serious work for the gov and hope the judge goes easy on
him.

 _Note: I was convicted of violating 18 USC § 1028A (among other laws) so I
have personal experience with this law, sentencing guidelines, and judge
discretion but IANAL._

~~~
sneak
The ICC-IDs here are sequential integers. It wasn't brute force, just
incrementation. They are not secret and are certainly not access credentials.
There were no access controls surrounding the web service in question.
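
A minimal sketch of what "just incrementation" means in practice (hypothetical endpoint and parameter name, not AT&T's actual URL):

```python
# The IDs are consecutive integers, so the full set of request URLs is just a
# range. Hypothetical endpoint; the real AT&T URL is not reproduced here.
BASE = "http://example.com/lookup?iccid={}"

def sequential_urls(start, count):
    """Return request URLs for a run of consecutive ICC-ID-like integers."""
    return [BASE.format(i) for i in range(start, start + count)]

# Each of these is an ordinary unauthenticated GET: no password guessing,
# no credential, just the next integer in the sequence.
urls = sequential_urls(100000, 3)
```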

The prosecution is asserting that access to any system without authorization
is "access to a protected system" in the legal sense, which is obviously
bogus. This would make the Googlebot's operators criminally liable if I put up
a site at "johndoessocialsecuritynumber.com".

In fact, authorization is built into HTTP. There were no protections in place
surrounding this data. Regardless of what he did with the data, downloading
something from a public website is not criminal. (Though irrelevant legally,
it's worth noting that he did nothing with the data except shame AT&T.)

(I put up weev's bail and am handling some of his PR while his computer
restrictions are in place.)

~~~
tptacek
No, it wouldn't expose Google to that liability, because Google would have no
intent to access information of any sort, let alone information that Google
would have reason to believe it shouldn't have access to. The crimes charged
here aren't strict-liability; the prosecution is required to prove intent.

Also: the authorization "built into" HTTP is used by a tiny minority of all
web applications. Clearly, if you break into a retail brokerage and execute
trades on behalf of other people, you're going to be liable, regardless of how
that retail brokerage chooses to authenticate access to the site.

(I don't have a position about this particular case; I know virtually nothing
about it).

~~~
nitrogen
I'm guessing that sneak wasn't referring to HTTP Basic Auth, but rather the
fact that when an HTTP client says "GET", the server effectively says "HOW
MUCH?" (A crude allusion to the classic "jump/how high?") I think sneak is
arguing that receiving a response to an isolated GET request implies that the
GET request was "authorized."
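
A toy demonstration of that reading of the protocol (a throwaway local server, nothing resembling AT&T's actual setup): one path is served freely, another is refused, and the refusal is expressed in HTTP itself.

```python
import http.server
import threading
import urllib.error
import urllib.request

# Toy server: /open is served to anyone; /private demands credentials.
class Handler(http.server.BaseHTTPRequestHandler):
    def do_GET(self):
        if self.path == "/open":
            body = b"here you go"
            self.send_response(200)  # the server "says yes"
            self.send_header("Content-Length", str(len(body)))
            self.end_headers()
            self.wfile.write(body)
        else:
            self.send_response(401)  # the protocol's explicit refusal
            self.send_header("WWW-Authenticate", 'Basic realm="private"')
            self.end_headers()

    def log_message(self, *args):
        pass  # keep the demo quiet

server = http.server.HTTPServer(("127.0.0.1", 0), Handler)
threading.Thread(target=server.serve_forever, daemon=True).start()
port = server.server_address[1]

status_open = urllib.request.urlopen(f"http://127.0.0.1:{port}/open").status

try:
    urllib.request.urlopen(f"http://127.0.0.1:{port}/private")
    status_private = None
except urllib.error.HTTPError as e:
    status_private = e.code  # 401 Unauthorized

server.shutdown()
```

On this view, a 200 is the server handing over the resource of its own accord, while a 401 or 403 is how it says no.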

Edit: after reading selections of IRC logs and e-mails from the indictment, my
sympathy with weev has been reduced. Granted, an indictment will select the
most salacious excerpts to make its case in the strongest way possible, but
the conversations in the indictment are far from white hat (unless the
defendants manage to convince the court that their talk of phishing and
spamming was merely juvenile humor).

~~~
gknoy
I wonder what one could get an expert witness to testify to regarding the
meaning of 'authorization' in the HTTP protocol.

~~~
sneak
Sergey Bratus[1] (of Langsec[2] fame) testified as an expert witness for the
defense in this very way within the last few days.

[1] <http://www.cs.dartmouth.edu/~sergey/>

[2] <http://www.cs.dartmouth.edu/~sergey/langsec/>

~~~
tptacek
Do you really buy this line of argument? How many banking applications
configure themselves so that they rely on the intended meanings of HTTP verbs
and authorization headers as their primary overt security mechanism? And of
those, how many do so correctly?

I get why Bratus would testify. The defendant here needs all the help he can
get and is morally entitled to the best case he can possibly present. I
respect and admire everyone who is trying to help him out. But presuming he's
not guilty of a real conspiracy to defraud anyone, I have a hard time
believing it's because AT&T's web application vulnerability entitled the
public to their database.

~~~
nitrogen
I think it's reasonable for them to argue that AT&T's server's willingness to
give them the e-mail addresses means that obtaining the addresses was not
illegal, and that despite mulling over the darker possibilities available to
them, by choosing not to put the e-mail addresses to illegal use they
committed no crime. They could argue that they should not be convicted of
conspiracy because they ultimately decided not to abuse the list of addresses.

~~~
tptacek
I would like it to be harder than it seems to be to prove conspiracy to commit
fraud.

I don't think I'd like it to be harder than it seems to be to prove
unauthorized access.

I know that's the opposite of what most nerds like me want, but I think we're
well served by a very broad definition of unauthorized access, and we're
poorly served by vague conspiracy laws in more places than just online.

Note that under the US Code, you need _both elements_. Just plain unauthorized
access isn't a federal crime; you need an intent to defraud.

~~~
nitrogen
_I know that's the opposite of what most nerds like me want, but I think we're
well served by a very broad definition of unauthorized access, and we're
poorly served by vague conspiracy laws in more places than just online._

I do agree with you regarding conspiracy, but you are right that I would in
principle prefer to have every Internet-facing system as robustly secured as
if it had been independently reviewed by you, cpercival, and the people who
wrote the space shuttle's software. A small part of the reason I want this is
so that absolutely anybody can confidently write and deploy scraping software
that collects and analyzes information in new ways (e.g. IBM Watson, better
search engines, or some other as yet undiscovered idea).

~~~
tptacek
The CFAA does not in practice prevent search engines from scraping pages. In
order to be charged under the CFAA, you must willfully access specific
information on a website in furtherance of a fraudulent scheme, which in turn
means you must be making specific representations as to your identity or
actions that a website could reasonably rely on in order to trick that website
into doing something it wouldn't have done otherwise. In CFAA cases, the
prosecution must prove not only unauthorized access (which is easy) but also
fraud (which is not as easy).

------
rcknight
The article states:

"Weev and a fellow hacker who originally uncovered AT&T’s mistake and
collected the e-mails didn’t ask the company for permission to access the Web
addresses that shared iPad users’ private information"

I really don't see how this argument holds up.

From a technical point of view, the very nature of HTTP includes asking for
permission. You send a request: "Please can I see the information at this
URL?" If the organisation running the server does not give that permission, it
should not serve the data.

These charges seem pretty odd to me, hopefully it gets resolved in a sensible
manner.

~~~
maratd
> From a technical point of view the very nature of HTTP includes asking for
> permission.

A web server isn't an agent of the company and has no capacity to grant or
deny permission.

Think of it as a security system you install in your home. Now, if the
security system is malfunctioning and you notice that it is malfunctioning ...
do you call up the owner and let them know or do you go inside and look
through their stuff? If you go inside and look through their stuff, it's
trespassing.

Obviously the two scenarios aren't the same, but I'd imagine that's the logic
used in the argument.

~~~
freehunter
As an information security professional, I see two different issues at play
here. First, they got access. They were granted access by the admin who did
not lock down the server. I am not a lawyer, but I see the unauthenticated web
server, no matter how much of a mistake, as being implicit permission to
access the site. A house, by default, implies privacy. A web server is more of
a business in this metaphor. If the door is open and the lights are on, it's
implied you can come in and look around. Machines accessible over the web are
by default open to everyone unless permission is revoked. The "unauthorized
access" charge, in my opinion, should be struck down. When a site is made
accessible from the unauthenticated Internet, the admin is implicitly granting
you permission to visit the site.

The second issue at play is the fact that the guy apparently collected some
email conversations to use as proof. Using my business metaphor, walking into
a closed business that to a layman appears open is a simple mistake. Anyone
could reasonably assume the business is open. However, collecting their
merchandise even just to prove they forgot to lock up would still be stealing.
In this situation, it's unauthorized copying. Most reasonable people would
consider this to be unacceptable.

The second situation is muddied a bit further by my wording "most". Websites
accessible when unauthenticated are able to be scraped easily. What if the
Googlebot crawled the site and collected the information due to a poor
robots.txt? What if you walked into the business and tried some free samples
(unauthenticated websites are implicitly free samples)? Data privacy comes
into play on this one though, and I would argue that any reasonable person
would understand these as private communications. While they are accessible to
view, any reasonable person would understand it is unethical to read them and
unacceptable to copy them.
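
On the robots.txt point: exclusion rules are purely advisory, which is part of what muddies the "most reasonable people" line. A sketch using the Python stdlib parser and a made-up policy:

```python
import urllib.robotparser

# robots.txt is a request, not an access control: a polite crawler consults
# it, but nothing in HTTP enforces it. Made-up policy for illustration.
rp = urllib.robotparser.RobotFileParser()
rp.parse([
    "User-agent: *",
    "Disallow: /private/",
])

# A well-behaved bot skips the disallowed path; a misconfigured or rude one
# simply doesn't ask.
can_fetch_public = rp.can_fetch("*", "http://example.com/index.html")
can_fetch_private = rp.can_fetch("*", "http://example.com/private/emails.txt")
```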

The fatal flaw of the defendant was copying the emails. Up to that point, he
was completely within reasonable practice in my opinion. Here's a takeaway for
any startups: security isn't a joke. It's a career ender, it's a business
ender, and it could be a career ender for your customers who trusted you. You
hire the best programmers, but budget a little aside for an external
penetration test, and take the results seriously. Don't lose your company and
your reputation because of a caffeine-fueled oversight.

~~~
maratd
You don't live in a world governed by machines and pure logic. You live in a
world governed by human beings and their nature.

You have the capacity to recognize where you should be and where you shouldn't
be. What you should be seeing and what you shouldn't be seeing.

Right from wrong.

> A web server is more of a business in this metaphor. If the door is open and
> the lights are on, it's implied you can come in and look around.

No.

If you're inside a business and you see a door open and it is evident by the
design of the building that it is their storage space ... you do not have the
right to waltz on in. You damn well know through your experience in hundreds
of other stores that this area is used by employees and for employees only.
You do not belong there.

Are you telling me he read those emails by accident? Just stumbled on them? Or
did he know _exactly_ what he was doing?

Enough of this white hat bullshit. I do not have the right to self-deputize
myself and become a vigilante on the Internet. If these clowns don't know how
to secure their own damn servers, let them pay the price that will be exacted
by less scrupulous individuals. That's how the free market works. Stupidity is
severely punished. They will very quickly learn how to properly set
permissions on their server.

~~~
freehunter
The problem with metaphors is that they only resemble what they are
describing. They'll always be imperfect. The problem with web servers is that
anything that is public-facing is just that. Security through obscurity is no
security at all.

Like I said, the guy went too far. But visiting a public-facing website is not
a crime, no matter how you happen to discover the URL. There's no sign on the
door saying "keep out", even though the server is more than capable of
displaying one. Do you have a right to walk into any business, or walk into
their storage space? No, but any reasonable person (notice I keep using this
phrase? It's going to come up in court) would assume if the lights are on and
the door is open, you can walk in. You might be mistaken, and a clerk might
show you out. Intent is a critical factor. Like I said, the guy went too far.
He didn't enter by mistake, though someone could have. He entered with the
intent of making unauthorized copies of private data. Walking into a store's
storage space isn't illegal, but a reasonable person would know that taking
pictures of customer data _is_.

It's not illegal to visit any public-facing Internet site. It is illegal to
make unauthorized copies of restricted data. The company is hugely to blame in
this situation for leaking private information. So is the
guy who broke the law by making unauthorized copies of this private
information. I support him having criminal charges filed against him. My point
was that there are two issues at hand, one illegal and one perfectly within
the law. Implied consent at odds with intent. It should be an interesting
case.

~~~
maratd
> Intent is a critical factor. Like I said, the guy went too far. He didn't
> enter by mistake, though someone could have. He entered with the intent of
> making unauthorized copies of private data.

We're in agreement here. I think we're both making the same point. Intent is
the key here.

The problem is that if you just consider servers, configurations, permissions,
and other technical aspects ... intent doesn't enter the picture. That's the
wrong way to think about this.

~~~
freehunter
I do agree that we're making the same point, and I wrote my response to you in
the mindset that I had poorly communicated my initial conclusion. Your point
complements my own. The difference we may have is that I don't view intent as
being of the highest importance when someone visits a public server. Intent
will only get you so far as long as servers, configurations, permissions, and
other
technical aspects are in order. The reason he was able to copy restricted data
is because the technical aspects were not in order. That's where the muddiness
comes in; you wouldn't need intent to make unauthorized copies in this
situation. The Googlebot could have made unauthorized copies. Your browser's
cache could make unauthorized copies. Archive.org could have made unauthorized
copies. Googling for plaintext and valid credit card numbers might shock you
in what Google is finding on public servers.

His intent comes into play only secondarily in my opinion. I might enter a
store with intent to steal something, but if a security guard is standing next
to me and a camera is watching, I'd walk right back out. The lack of security
is what allowed him to complete his intentions of unauthorized copying. It
does not absolve him of his crimes, but thinking about the potential for
unintentional restricted data access tells me that his crimes sit in line with
the failed (non-criminal but out-of-compliance) policies of the host.

------
meritt
We live in a tech-filled world with no reliable means of responsible
disclosure, no way to hold a company accountable for how it responds to
attempts at responsible disclosure, and whistle-blowers who are immediately
branded as "criminals" and "hackers".

This whole process, or lack thereof, needs some serious disruption.

 _Edit: My comment is intended to be a general observation and not
specifically about this case_

~~~
jebblue
>> We live in a tech-filled world without a reliable means for responsible
disclosure

There are many avenues for responsible disclosure, Google that phrase.

~~~
jrockway
As far as I know, following responsible disclosure practices does not waive
any legal liability. It may make you look professional, but if someone who is
friends with the FBI wants to fuck you, they can still fuck you.

~~~
tptacek
Following the published disclosure guidelines that Google and Facebook created
will in fact prevent the FBI from "fucking you", because any lawyer in the
world can read the plain meaning of the statutes and point out that a
reasonable person would believe they had permission to conduct tests.

On the other hand, if you're dealing with an application for which you don't
have any written permission to test, just stop. You are not entitled to
conduct your own security tests of other people's applications. Leave them
alone.

~~~
meritt
> You are not entitled to conduct your own security tests of other people's
> applications. Leave them alone.

Your attitude is part of the problem and why we need a solution.

Let's create an analogy: I drop off my daughter at daycare, I've entrusted
this place of business with something vital to me, as have numerous other
customers. I get off work early to go pick up my daughter, upon arrival, I
find the guardian eating lunch in an adjacent room and the children are left
completely unattended. This is clearly a problem on numerous levels.

So what do I do? Do I politely inform the administration of the issue and hope
they will fix things immediately? What if they blow me off and do not
discipline nor change policy? Do I withdraw my child's enrollment and just
blindly hope that other customers know of their utter disregard for the trust
we've placed in them? Do I report them to the police or child protective
services? Do I picket the business and attempt to inform other people of the
issues? What if informing the public simultaneously informs some criminals of
the ability to do some baby snatching?

In this scenario, a parent actually has some legal recourse options to ensure
the problem is fixed. The issue in web security is there is no governing
authority to report the problem to, and if we take our findings public, we
place other customers' information at risk and put ourselves in a position of
legal risk as well. We get branded a "criminal hacker" and find ourselves off
to jail.

~~~
tptacek
Two things.

First, you're responding to a factual argument with an argument about my
attitude. It is not "my attitude" that people "shouldn't" be entitled to test
applications. It is a fact that they are not allowed to do that. Unauthorized
access to computer systems, which has a broad but actually very
straightforward definition, is unlawful. If you cause damages when you do it,
you're liable for civil damages. If you attempt in any way to profit from
having done it, you're liable for a felony charge.

Second, arguing about this stuff by analogy is perilous. You can see that
here, because it's immediately obvious that observing your child's caretaker
eating lunch is not comparable to breaking into a web application to steal a
database over the span of several days. To attempt to equalize the analogy,
imagine that instead of simply observing your child's caretaker, you instead
break into that caretaker's residence or place of work. You've now committed a
felony.

I am for many obvious reasons pro-security-testing, and I'm happy to speak
glowingly of the companies that have set up policies to make it safer for
researchers to test their sites. In advice I've provided on HN and to
startups, I've consistently told companies to create pages to thank security
researchers.

Having said all that: if you don't have permission to test someone's web app,
don't do it. The law very reasonably says that if a company doesn't want to
submit itself to unauthorized intrusive testing by strangers, it is entitled
to treat its applications, its servers, and the data it holds as private
property. Not every case in which a stranger pokes for flaws in someone's app
will be open-and-shut, and that's for the best too. But the rule of thumb is
very simple: leave other people's apps alone.

~~~
meritt
 _(To be clear, I'm speaking more broadly about the topic and absolutely not
condoning weev's alleged actions. Lack of responsible disclosure, discussion
of profiting from the flaw, and exploiting it far beyond simple validation
tests are all going to make it very difficult for him.)_

Observing sequential identifiers in a URL and validating a gaping security
hole is hardly something I'd classify along the lines of "stealing a
database", nor should it be considered "unauthorized access". The judicial
system clearly feels differently, however, and that's the sort of thing I'm
advocating needs to change.

Of course, discussing legislation changes that would allow anyone to execute
security testing is probably not a very popular topic with the founder of a
company that provides security research & testing.

~~~
danielweber
That last sentence is incredibly lame. I've had problems with 'tptacek (from
an incident several years ago that I'm finally over) but he argues for
security policies that he believes work, not those that are in his interest.

A wild-west attitude, with every man for himself, no-holds-barred, no-legal-
recourse-when-you-are-hacked, is the best possible thing for security people,
who would be kingmakers in such a universe.

~~~
chc
The security field in such a universe would be a gigantic, insanely lucrative
shakedown racket. "I can help harden your systems against attacks — or I can
just steal your customer database and sell their info to these nice Russian
gentlemen. Whichever you prefer."

------
guard-of-terra
He should totally have posted those on pastebin instead and shared on the net.

I think that if one finds an embarrassing security vulnerability, they should
look up the offending company, and upon finding that the company ever took
part in crap like that described in the article, they should publish the
vulnerability wide open, goatse-style, anonymously via pastebin.

Those companies should suffer to the end of times and their clients should
too.

Unless that company properly apologized, which they never do. Corporations
don't seem to be capable of that. Not in their DNA.

~~~
emmapersky
Why should I, as an innocent party to a company's security failings, have my
personal details revealed? This is exactly why we have laws protecting against
identity theft and fraud.

Naming and shaming the company and the extent of their security failing is
enough. But only _after_ they have patched the hole.

~~~
babarock
Because you trusted the wrong people, and this has consequences.

The customer has to suffer from the mistakes of the companies so that:

1. She stops trusting any company blindly with her data.

2. The market demands improvements from the service providers (and no,
sacrificing a scapegoat in court is not an "improvement").

3. Service companies react. Unless there's a major scandal, companies are
slow to react to vulnerabilities.

 _About that last point, it's a bit unfair to assume that the activist tried
to contact AT&T and they didn't react. I honestly don't know if that's what
happened in this case, and I hope it is. However there are several cases of
companies "turning a blind eye" on the issue._

As a customer, the only entity to blame when such a breach occurs is the
provider that you entrusted with your data. Not the activists (regardless of
their intents) nor the _hackers_ or whatever monster you heard about in the
media.

If I entrust (and pay) someone with my bike and it gets stolen on his watch,
who's to blame? The thief or the guardian?

~~~
sk5t
This is an excellent point... sloppy, careless companies can operate more
cheaply (until they go down in flames); insulating the consumer from these
security risks would introduce the same kind of moral hazard as insulating
investors from the downside on risky investments.

------
binarymax
This is extremely dangerous. Condemning the whistle blowers results in a much
less secure environment, since you'll scare away the white hats, and insecure
systems will remain insecure. We need whistle blower protection laws in place,
and we need them now.

~~~
sneak
What's further terrifying is that the courts have made restriction of his use
of non-windows non-monitored computers part of his bail conditions - prior to
a trial to determine guilt.

A person whose only marketable skill is on the Internet, completely prohibited
from using ssh or virtualization, by nothing more than an error-riddled FBI
complaint document. (This started prior to the grand jury indictment.) He's
been without significant work for over two years as a result.

It's a scary thought that this could happen to any one of us.

~~~
DanBC
>What's further terrifying is that the courts have made restriction of his use
of non-windows non-monitored computers part of his bail conditions - prior to
a trial to determine guilt.

How does he use a telephone? Or a suitably advanced toaster?

~~~
sneak
At first, we had to drive all over to find a phone that couldn't even MMS
(because MMS can be used to send email). It was a nightmare.

Eventually some of those restrictions were relaxed.

------
BryantD
Bah, more weev hagiography? He's not a whistle blower, he's a troll.

Look: first off, he didn't tell AT&T about the hack, he told a bunch of news
organizations. See [http://www.forbes.com/sites/firewall/2010/06/09/atts-ipad-
ha...](http://www.forbes.com/sites/firewall/2010/06/09/atts-ipad-hackers-
ignored-by-reuters-other-mainstream-press/) for details. "we did a benefit
analysis and decided they could take our story viral the fastest."

One of those organizations presumably told AT&T; all AT&T has ever said is
that they learned about the exploit from a business customer.

Second, weev is the same guy who got up on stage at Toorcon and lied about
finding an exploit in Firefox. Just flat out lied. One of the ways in which
he's a great social manipulator is that he has no qualms about the big lies,
and doesn't really care if they're noticed.

Third, at the time, weev said "There is nothing in Apple’s SDK APIs that would
allow an application to have this identifier [the ICC-ID] – it is a shared
secret that should indicate physical proximity to the iPad." So while it's
amusing to see his defenders talk about how unimportant ICC-IDs are, it's an
interesting change of tune.

Fourth, even if you believe weev when he says he forced the Amazon delist of
gay-themed books (I never have), that goes over the line from activism to
exploiting.

Yeah, we do have to protect even unpleasant, obnoxious whistle blowers. I just
don't see how weev counts as a whistle blower, and I think that any article on
his situation should present the full picture rather than painting him as a
misunderstood good guy.

~~~
cullend
Weev is different. That's for sure. But what's at stake here is bigger than
him. Anyone can be sued for going to a damn URL. That's the real problem here
(as well as a myriad of others).

------
moe
And who at AT&T faces jail for publicly exposing these e-mails in the first
place?

------
lawnchair_larry
10 years in jail for incrementing an integer at the end of a URL. Wow. What a
disconnect.

------
jonknee
It's pretty scary that you can go to jail (for a long time!) for something as
simple as:

> curl 'http://example.com/att/[1-100].html'

I understand what the lawyers are saying, but this is just beyond common
sense. There _should_ be liability for deploying software that publicly gives
out information you did not intend to release.

~~~
danielweber
If you've never deployed software with security flaws, it means you have never
deployed software.

~~~
jonknee
Sure, but why should whoever makes the software bear no responsibility? If a
building is found to have a flaw, it's the architect or an engineer who gets
the questions, not whoever noticed the flaw.

~~~
danielweber
The flaws in architecture are well-understood and there are rarely-changing
building codes to describe exactly what should and should not happen.

Software does not exist in any such stable world. There can be two pieces of
software, each perfectly legitimate and doing exactly what they intend, that
when both are present format a customer's hard drive. Who does the customer
sue then?

If you want to make developers responsible, I won't personally be hurt much,
since I can make a shitload of money finding vulnerabilities in other people's
code (and have done so in the past). The lawyers will make lots of money, too,
as we have jury trials to figure out whether that SQL injection was really
negligent or not.

------
ssclafani
The indictment filed against Weev by the DOJ in 2011:

<http://www.scribd.com/doc/113664772/46-Indictment>

~~~
jcromartie
The complaint cites "losses":

> To date, AT&T has spent approximately $73,000 in remedying the data breach.
> Those costs include, among other things, the cost of contacting all iPad 3G
> customers to inform them of the breach and AT&T's response to it.

BOO-#$%^ING-HOO.

As if AT&T shouldn't have had to lock down their users' info, and it's some
kind of injustice to them that they have to do things the right way now. This
perfectly highlights the fundamental disconnect between the corporations, the
(their) legal system, and the Internet-connected world.

------
manaskarekar
How about this: what if we had another section on websites, called something
like 'Submit a bug', next to 'About', 'Contact Us', etc.?

If more and more sites make it easier to report these things to them,
hopefully either the error will be fixed or if it is being ignored, the
"hacker" will have some kind of proof to claim that he tried to tell the bank
to fix their door.

~~~
sneak
How about this: We don't prosecute people for loading URLs on the public
internet first and foremost.

------
recursive
> "The more concerning charge to online activists watching Weev’s case is
> based on the Computer Fraud and Abuse Act, which forbids “unauthorized
> access” to a computer."

There are HTTP codes to indicate lack of authorization. Presumably he didn't
encounter those. How is this an argument?
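
The codes in question, sketched (illustrative status handling only, not a claim about what AT&T's server returned):

```python
# HTTP's own vocabulary for refusal: 401 means "authenticate first",
# 403 means "you may not have this regardless". A 2xx answer is the
# server choosing to serve the request.
REFUSALS = {401: "Unauthorized", 403: "Forbidden"}

def server_said_yes(status):
    """True when the server served the request rather than refusing it."""
    return 200 <= status < 300

def refusal_reason(status):
    """The standard reason phrase for an authorization refusal, if any."""
    return REFUSALS.get(status)
```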

------
Cieplak
Although we all know what 'hacker' means, I suspect that most people think the
word means 'someone who does nefarious things with computers.' Calling him a
hacker probably doesn't help his case, regardless of whether he's guilty or
not.

------
DancingDeer
He's no whistleblower; he's just a criminal bum. He's a Freemason (wannabe or
otherwise), enough said.

