
‘Google’ Hackers Had Ability to Alter Source Code - phsr
http://www.wired.com/threatlevel/2010/03/source-code-hacks/
======
bruceboughton
“Additionally, due to the open nature of most SCM systems today, much of the
source code it is built to protect can be copied and managed on the endpoint
developer system,” the paper states. “It is quite common to have developers
copy source code files to their local systems, edit them locally, and then
check them back into the source code tree. . . . As a result, attackers often
don’t even need to target and hack the backend SCM systems; they can simply
target the individual developer systems to harvest large amounts of source
code rather quickly.”

---

Forgive me, but isn't this the function of an SCM? What other models are there?

~~~
roc
Locking down individual developers to only the source they absolutely need.

In that model, a single compromised workstation would only expose a few
modules to malicious changes and only a few projects to code theft.

In the more-common wide-open implementation, a single compromised workstation
exposes the entirety of code for every project to theft _and_ malicious
changes.

When you're talking about an under-the-radar hack that took place over months,
hackers intentionally slipping vulnerabilities into the code is a very real
risk.
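For what it's worth, Perforce can express exactly this kind of lockdown in its protections table (edited via `p4 protect`). A rough sketch, with the user name and depot paths invented for illustration:

```
# Grant each developer only the paths they need (names hypothetical)
write user alice * //depot/search/...
read  user alice * //depot/common-libs/...
```

With a table like this, compromising alice's workstation exposes only the search module and some shared libraries to theft or tampering, not every project in the depot.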

~~~
bruceboughton
And you can surely do this with most SCMs on the market, including Perforce.
Now all the hacker needs to do is compromise the build server...

------
MrHyde
"I sorta find it amusing that McAfee released a PDF of the white paper,
considering that Adobe’s PDF Reader is also a popular attack vector. It’s like
railing against IE6 being insecure, but using it to post the message that IE6
is insecure."

-From the comments in the original article.

I found the above interesting because it suggests to me one of the fundamental
principles of security: we must necessarily try to improve our security from
inside of systems which are already insecure.

We could say that no one should ever use an IE with a zero-day vulnerability.
And that no one should use PDFs because they can be an attack vector. Or view
JPEGs because there have been embedded-executable-code vulnerabilities. Or run
executable code because sometimes it is malicious.

Security is always a matter of trade-offs. One can never build a house which
cannot be broken into but one can build a house that is not worth breaking
into.

Sounds like there were a number of vulnerabilities here, and improving default
settings is one of the best solutions. But it's clearly not a justification
for no longer using SCMs, or PDF. Perhaps for old IE.

------
mustpax
Why are we taking McAfee's word on what happened again? They never claim to
have directly investigated Google, and from what I can tell Google isn't their
client. They are purely extrapolating based on other companies they work with.

This just sounds like they are trying to scare everyone about what happened to
Google so that they can sell them _McAfee Security_, which as we all know
will fix all your problems.

~~~
shpxnvz
Yeah, the article clearly says that McAfee "provided information to Google",
which sure doesn't sound like any direct involvement in their investigation.
In addition, from a quick scan of the linked PDF, I don't think McAfee
mentioned Google at all - which makes it look more like the Wired writer just
plastered "Google" all over the article to get a bigger headline.

I'd blame Wired here more than McAfee.

------
plaes
Duh... the headline should replace "hackers" with "crackers".

At first sight I really thought that it was about regular Google employees who
developed something new during their blessed self-time.

~~~
ajross
The headline is using a definition of "hacker" that has been common for 30
years now. This kind of linguistic purity argument is silly, and ultimately
futile. Languages change. We need to learn to speak the language as it exists,
not what we think it "should" be.

At least sites like this one preserve the original meaning of the term. That's
the best we can hope for.

~~~
bediger
Finally, someone that agrees with me on linguistic purity!

Personally, I think that purity is for monks and hockey players! I use the
word "hacker" to refer to Giant Amazonian Parrots, and the word "cracker" to
refer to small, circular, non-magnetic yet still metallic hardware.

Now, excuse me while I mambo dogface down to the banana patch.

------
yellowbkpk
If I had a local working copy of a git (or any of the dvcs's) repository and a
hacker attempted to taint the "master" copy, would that show up as a diff
between my copy and the master?

With SVN, a hacker could inject changes into the repo on the server which
would show up as an incoming change to my local repository.

In both cases, we could detect an intrusion if someone was vigilant enough to
take a look at all incoming/outgoing changes and diffs -- or am I missing
something?
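With git, yes: a fetch pulls the upstream state down without touching your working copy, so tampering upstream shows up as an incoming diff you can review before merging. A minimal sketch of that workflow (the repo layout and the injected line are invented for illustration):

```shell
set -e
tmp=$(mktemp -d)
cd "$tmp"

# Set up an "upstream" repo standing in for the shared master copy
git init -q upstream
cd upstream
git config user.email dev@example.com
git config user.name dev
echo 'int main(void) { return 0; }' > main.c
git add main.c
git commit -qm 'initial import'
cd ..

# The developer's local working copy
git clone -q upstream clone

# An attacker injects a change into the shared copy
cd upstream
echo '/* injected */ system("/bin/sh");' >> main.c
git commit -qam 'routine cleanup'   # innocuous-looking message
cd ../clone

# Fetch without merging, then review what changed upstream
git fetch -q origin
git diff HEAD FETCH_HEAD            # the injected line shows up here
```

The catch, as noted below, is that someone still has to actually look at that diff.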

~~~
lutorm
Well, if there are enough people working on a repository, you might be
inundated with changes every time you update. You'd never suspect anything
unless you knew which files specific users should be editing, and checking
that for each patch would probably be prohibitively time consuming.

But unless Perforce allows you to delete history, it shouldn't be a big deal
to fix. The hard part is finding it. I wonder if there are automated systems
that can check the patches users commit and look for "outliers" in content or
style?
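On that last point, a crude version of such an outlier check is easy to sketch: compare the directories a new commit touches against the directories its author has historically touched. Everything below (the user, paths, and the heuristic itself) is invented for illustration:

```shell
set -e
tmp=$(mktemp -d)
cd "$tmp"
git init -q repo
cd repo
git config user.email alice@example.com
git config user.name alice

# History: "alice" normally works only under frontend/
mkdir -p frontend backend
echo 'render();' > frontend/app.js
git add . && git commit -qm 'frontend work'
echo 'paint();' > frontend/ui.js
git add . && git commit -qm 'more frontend work'

# Suspicious commit: she suddenly edits backend auth code
echo 'backdoor();' > backend/auth.c
git add . && git commit -qm 'minor fix'

# Top-level dirs touched in all commits *before* HEAD...
usual=$(git log --name-only --pretty=format: HEAD~1 | grep . | cut -d/ -f1 | sort -u)

# ...then flag any file in HEAD outside those dirs
git show --name-only --pretty=format: HEAD | grep . | while read -r f; do
  echo "$usual" | grep -qx "${f%%/*}" || echo "OUTLIER: $f"
done
```

A real system would need something far less naive (per-file history, style metrics, review workflow), but the shape of the check is the same.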

~~~
fierarul
So I guess Google's style of code-reviewing every commit should keep them safe
from this...

~~~
kvs
Good point; This is something I wanted to see discussed but the paper didn't
touch on it at all. It just focused on tools and vulnerabilities in the tool
and didn't dip into the process surrounding the tool.

------
lutorm
“It is quite common to have developers copy source code files to their local
systems, edit them locally, and then check them back into the source code
tree. . . . "

Well, doh. How would it work otherwise?

------
gojomo
At the time of the initial report, I suspected that a reason Google seemed so
angry was that the hackers had tried to inject new vulnerabilities into Google
software that would then damage third-party Google users (either via browser
compromises or via downloads of Google-distributed client software, like
Desktop/Toolbar/Chrome/etc.).

Stealing 'secret algorithms' falls into one category of transgression. It's still
somewhat flattering to the target. It's driven by curiosity, not always a bad
thing. To the extent such theft-of-secrets enables competitors to increase
market share by improving their own operations, it hurts the target, yes, but
might still be net-beneficial to society in a dynamic long-term analysis. It's
theft, but not really violence. A victim will adopt countermeasures, and might
expect compensation/damages, but may not be moved to retaliate in-kind. A
rational weighing of costs and benefits tends to dominate the choice of
responses.

On the other hand, to try to corrupt a company's offerings to damage their
customers -- and possibly destroy the company's reputation as a trusted source
of downloadable software -- moves into another more serious category. It's
malicious and destructive. The victim may feel existentially threatened, and
feel obligated to retaliate using any means available. A rational weighing of
options may not matter; there is an urge to punish, even incurring large costs
in the process.

Google's initial response made me think they viewed the China breaches in the
latter category, despite the limited details.

------
aneth
Nice counter-example for anyone claiming expensive enterprise products
(Perforce) are more secure than widely used open source products
(git/subversion).

~~~
smiler
Not really. Read the article - the compromise was entirely down to user
stupidity - clicking on phishing links and the use of an insecure, outdated
web browser.

Perforce can be secured, but the administrators clearly chose not to.

The article is a joke as well. It cites the fact that all of the source code
is on a developer's machine - how else are they meant to develop, code and
test? It then also cites that the files are stored in plaintext - source code
can't be stored one-way encrypted, can it? (And there's little point in
two-way encryption if the hacker has access to the machine.)

So in summary, it was not Perforce. It was, as always, human error.

~~~
simonw
"the compromise was entirely down to user stupidity - clicking on phishing
links and the use of an insecure, outdated web browser"

It was a spear-phishing attack (great name) - you get an e-mail from someone
you know suggesting you check out a link. The browser security flaw was a
0-day, so even if you were fully patched you would still be affected. Sure,
using IE isn't brilliant, but there are certainly 0-day holes in other
browsers. I don't think it's reasonable to blame this attack on user
stupidity.

~~~
bruceboughton
Still, it's not clear how the SCM vendor is supposed to mitigate this.
If an attacker can inject code onto a machine inside your enterprise (and they
obviously can) then you're pretty much screwed whatever you're running
(because _someone_ has to be running _something_ that can affect production
systems, be they developer, administrator, etc.)

