

Hacker who stole Facebook source code comes clean - sytelus
http://www.computerworld.com/s/article/9226600/Facebook_hacker_comes_clean

======
babarock
> Mangham used a vulnerability to download Facebook's source code, arguably
> the company's most valued and secret intellectual property

Err... no! The company's most valued property would be the data they have and
sell. (I know this kind of comment will attract all kinds of pedantry around
the word "intellectual property", but let's agree on something: if they can
sell your data, it's their property.)

I agree that source code exposure is a security threat, but it seems to me
that we have yet another case of a journalist looking for a scoop by using big
words.

Also, this story kinda puts a whole new twist on Zuckerberg telling Wall
Street about the "Hacker Way" ...

~~~
dimitar
Neither the users nor their data are intellectual property. IP is commonly
understood to mean the things protected by copyright or patent law, plus some
other things like utility models, trademarks, etc.

~~~
dredmorbius
Customer lists _are_ intellectual and business property. Cue "who's the
customer" debate.

The _real_ assets for Facebook, though, are its user _relationships_ and the
_data stream_ this generates (where currency of data and interrelationships
among it are key) by which it can generate advertising and other revenues
through its true customer relationships -- the relations between FB and those
entities paying cash on the barrelhead (or however the cool kids are
conducting monetary transactions these days).

Brand identity, network effects, and a few other things also come into play.

Source code may weaken portions of the value or expose windows of
vulnerability. It's by _no_ means the most valuable asset though.

------
nbpoole
There was some previous discussion on HN about this individual:

<http://news.ycombinator.com/item?id=3604623>

This comment in particular was very relevant:

<http://news.ycombinator.com/item?id=3605343>

------
rollypolly

      "It is also worth mentioning that I had the source code
      for just over three weeks with absolutely nothing to
      prevent me from making copies and redistributing it,
      this was more than enough time to have caused significant
      damage to Facebook or to find a buyer, if that had ever
      actually been my intention but quite clearly it was not,"
      Mangham wrote.
    

Why would anyone be interested in buying this code anyway? It's not like you
could put your own Facebook clone into production based on stolen code.

~~~
noonespecial
Making a Facebook clone is about the last reason I can think of for buying
Facebook's source. I'm guessing any buyer interested would be looking to use
it to find exploits for snooping or worse.

~~~
Strallus
The article even stated a reason for wanting to purchase the source code.

> source code would surely have been of interest to cybercriminals who attempt
> to use Facebook to perpetuate scams

------
TheCapn
"When you consider that the only thing that stood between Facebook and
potential annihilation were my ethics"

I want to disagree with this. The source code leak wouldn't end Facebook. So
what if there were hundreds of clones? The secrets that the code holds could
be modified quickly enough to block intrusion.

To me the comparison is like saying KFC would be ruined if their "8 Secret
Spices" got leaked.... guess what, they're not that secret any more and KFC is
still running strong.

~~~
jamaicahest
To say nothing of the business execution and infrastructure management;
without either, Facebook would be no more than a bunch of PHP files on a
server.

------
zalew
> He was also hoping that even when he got caught, Facebook would let him off
> the hook. That didn't happen.

It doesn't happen. Pretty naive thinking on his part; he could have just read
some hacker/cracker stories from the 90s/00s to know that. Max Butler served
his first sentence (the one for cracking, not the current one for cc fraud)
even though his exploit actually provided a patch and his intentions were
white-hat. Why learn it the hard way?

~~~
PakG1
I can't remember who it was, but there was the one guy who managed to get
inside the code and deface a whole bunch of people's profiles. He was so
impressive that Facebook hired the guy. This was in their early days.

~~~
fsckin
Could be this one: [http://www.quora.com/How-did-Chris-Putnam-get-hired-at-
Faceb...](http://www.quora.com/How-did-Chris-Putnam-get-hired-at-Facebook)

------
MrHoltz
<http://news.ycombinator.com/item?id=3605343>

I manage Facebook's Whitehat program (<https://www.facebook.com/whitehat>). We
have taken an incredibly open stance towards security researchers and welcome
the contributions they make towards securing the internet. Our policy towards
this research is documented quite succinctly:

"If you give us a reasonable time to respond to your report before making any
information public and make a good faith effort to avoid privacy violations,
destruction of data and interruption or degradation of our service during your
research, we will not bring any lawsuit against you or ask law enforcement to
investigate you."

His attempt to access data was outside our whitehat guidelines, had clear
malicious intent, and included extensive and destructive efforts to remain
undiscovered and anonymous. In addition, he made no effort to contact Facebook
with his discoveries, and even denied involvement when initially questioned.
His attempt to claim he intended responsible disclosure only after being faced
with criminal action is false and insulting to the community of responsible
security researchers.

<http://gmangham.blogspot.co.uk/>

> [5] I think the white hat bug bounty programme is a very good idea and that
> schemes like it are a very useful way for companies, especially the big ones
> to manage their large attack surfaces. I suspect some people are wondering why
> I didn’t use it to submit my findings, well the answer to that is that the bug
> bounty programme DID NOT EXIST when I was working on my audit, therefore it
> was not an option that I could take. I am willing to bet that it became a
> higher priority afterwards though.

It's pretty obvious that somebody here is lying, when did the bug bounty
programme start?

~~~
nbpoole
<https://www.facebook.com/note.php?note_id=10150270651335766>

That's a discussion of it from back in August. It started prior to that. And
before that Facebook redid its existing responsible disclosure policy
([https://www.eff.org/deeplinks/2010/12/knowledge-power-
facebo...](https://www.eff.org/deeplinks/2010/12/knowledge-power-facebooks-
exceptional-approach) | From December 2010).

~~~
MrHoltz
Looks like they had a quiet policy on it from mid-2010. It didn't seem to get
much fanfare until about a year later, when it became common public knowledge.
I guess it's possible that both of them are lying, or neither.

~~~
nbpoole
[http://www.bbc.co.uk/news/uk-england-york-north-
yorkshire-17...](http://www.bbc.co.uk/news/uk-england-york-north-
yorkshire-17079853)

"Glenn Mangham, 26, had earlier admitted infiltrating the social networking
website between April and May 2011."

I just checked my old emails and found XSS vulnerabilities I reported to
Facebook, under their responsible disclosure policy but prior to the
introduction of the bug bounty program, from late 2010 / early 2011. His
timeline doesn't match reality.

~~~
MrHoltz
Granted, it looks like they had a policy, but it's possible that not many
people were aware of it. I don't know anyone who reads the lengthy ToS or
policy documents of the companies they deal with, and Facebook didn't seem to
give the policy much promotion until after this incident. He does specifically
say bug bounty programme and not policy, so I'm willing to give him that. If a
company has a stance, they do need to promote it, and perhaps with stronger
wording than "we might not hang you out to dry."

------
driverdan
Link to original for those who hate blogspam:

[http://gmangham.blogspot.co.uk/2012/04/facebook-hack-what-
re...](http://gmangham.blogspot.co.uk/2012/04/facebook-hack-what-really-
happened.html)

------
citricsquid
After watching the video, the problem seems to be this: while he is absolutely
a white-hat, someone Facebook should have had no problem with and would (I
assume) have loved to speak with about the vulnerability, they had absolutely
no idea he _was_ a white-hat. So they approached the situation under the
assumption that he was malicious, and his response to being caught validated
that assumption. At that point Facebook's only option was to alert the
authorities, and the consequent action is out of their hands. They can't turn
someone over to the police and then retroactively say "it's okay, never mind",
can they?

It's unfortunate, but ultimately I don't see how this is anything other than
an unfortunate turn of events that nobody can be held responsible for.
Facebook didn't chase him because they didn't like him; it was because they
believed he was a direct threat. Isn't that what any business would do? His
claim that Facebook made the bug bounty program a higher priority after the
situation with him just validates the idea that Facebook regrets this but it
was out of their control.

As for his further gripes, it seems the Metropolitan Police are at fault
there.

------
gavanwoolery
For a company that claims to be all about hacker culture (regardless of
whether this was white/black/grey hat), I am surprised that Facebook is
pressing charges rather than offering to hire this guy.

~~~
sneak
It is not Facebook that decides whether or not a criminal offense is
prosecuted, but the DA. The victim actually has no say in the matter.

~~~
TomAnthony
Not in this instance, as the court case wasn't in the US, but in London.

In the UK the Crown Prosecution Service decides whether to prosecute, and that
involves 'any views expressed by the victim' [1]. If the victim has withdrawn
their complaint, then the CPS will decide whether prosecuting is in 'the
public interest'.

1)
[http://www.cps.gov.uk/victims_witnesses/reporting_a_crime/de...](http://www.cps.gov.uk/victims_witnesses/reporting_a_crime/decision_to_charge.html)

------
Strallus
Yeah... he stashed a copy.

------
bmelton
This article is so interesting, but mostly because I feel that there's
something missing from it.

Mangham writes as if there is some reason he shouldn't be in trouble at all,
but it's never revealed -- or maybe there isn't any real reason, and he's just
delusional? I don't know.

It starts off bad enough -- "Strictly speaking what I did broke the law
because at the time and subsequently it was not authorized" -- so wait. It
wasn't EVER authorized? Or is he implying that sometime before he actually
stole the source it had been authorized? Was he hired to do this work at some
point, then let go afterward, only to keep trying? It's confusing without
context (to me at least).

Then he realizes that FB is on his tail and he "panicked because I knew how
bad it looked without sufficient context." What is the context? He seems to
imply that there is some reason for which he shouldn't be thrown in jail, but
never seems to get at exactly what it is.

There's something weird here, and practically every statement he makes hints
at some deeper cause, but always falls short of actually revealing what it is.

~~~
jasonjackson
Nothing seemed missing to me.

I think he thought that Zuckerberg/Facebook would emphasize with what it's
like to be a white-hat hacker -- to be very curious and challenge yourself --
and then let him off the hook. Other companies have been known to hire hackers
after catching them.

One thing that didn't really add up: he never turned himself in after
committing the crime. He waited three weeks? That places doubt on his
intentions. Maybe he just freaked out.

~~~
Strallus
Do white-hat hackers normally turn themselves in?

(also, I think you meant _empathize_ )

~~~
nbpoole
(Note: this post represents my own opinions, not anyone else's)

No, but they normally report the vulnerabilities they find.

I participate in a _lot_ of responsible disclosure programs (Google, Facebook,
Mozilla, Dropbox, Twitter, Etsy, etc). All of those programs dictate that you
report the security vulnerabilities you find, and that you not abuse them.

What was described in the blog post sounds a lot like real security audits
that I've seen done. However, the difference is that those audits are done by
professional security researchers who have been hired by the company for that
purpose. If you're an outside security researcher you have to abide by a very
different set of standards. Common sense would argue those standards include
abiding by the company's responsible disclosure policy.

