
Facebook exposed identities of moderators to suspected terrorists - petethomas
https://theguardian.com/technology/2017/jun/16/facebook-moderators-identity-exposed-terrorist-groups
======
TheSpiceIsLife
There's a lot of "security is hard" comments here.

What I can't understand is:

 _Of the 1,000 affected workers, around 40 worked in a counter-terrorism unit
based at Facebook’s European headquarters in Dublin, Ireland._

If you work in a counter-terrorism unit, the consequences of your identity
becoming known to the wrong people are extreme. As extreme as possible death.

Why aren't these employees' identities obfuscated by multiple layers of
protection? They shouldn't be logged in to Facebook's intranet in any way that
could possibly expose them. This could mean they don't carry ID cards to
access the building they work in, and instead use manual time cards /
attendance records. Their pay could be handled by a separate entity. They
could be barred from having _personal Facebook profiles_ as a condition of
employment in that department.

------
pavement
The interesting part of this story is that if you're getting paid €15 an hour
to moderate facebook and mark/flag garbage, you probably aren't doing anything
important, and have saved no one's life. You are a cog in a machine that pays
lip service to some misplaced idea that this sort of moderation can intervene
and deflect negative outcomes into controlled results.

But even for all the nothing that this day job accomplishes, now we have an
identity spill, and "terrorists" who don't really care about who they kill so
much as how many get killed (they will literally stab random people on the
street, without warning, given the opportunity) now have a short list of
people they can make easy news with. Not because they think this will matter,
or get results, or slow down their open and flagrant use of facebook anyway,
but because it's a splashy news item they can use as a promotional device.

So nobodies, accomplishing nothing, may get squished by people who know that
these targets don't actually matter in a tactical or strategic sense, but only
incidentally, as a public relations bump.

And so, if one person from that group suffers harm, the machine grinds onward
on both sides unchanged, and this distraction evaporates as more
inconsequential crap gets dumped onto facebook and faceless "whoevers" pick
through the tin cans and banana peels looking for a smoking gun that will
never appear. The difference being that the victims would be slightly less
than random targets, but still nobodies, brought down by sacrificed,
expendable cannon fodder entrapped into suicide attacks no matter what.

------
thrwwysxs
I've locked my FB profile down quite heavily, including hiding it from search
engines. But my name leaks out to Google anyway, through its "view the
profiles of people named" user list feature. Social networks pay lip-service
to privacy, and incidents like this underscore that.

------
nkrisc
Why are their personal profiles tied to their moderation duties? Are these not
their personal profiles, but ones that are also tied to their identity in some
other way?

~~~
nerdponx
When I visited the NYC Facebook office in 2015, I was told that employees were
required to use their personal Facebook pages for work purposes.

~~~
williamscales
That's mind boggling. I wonder if you can just create a second Facebook page
and use that. Would Facebook really fire an employee for using a work
Facebook? And what if you didn't use Facebook in the first place? Any Facebook
employees want to chime in?

~~~
userbinator
 _And what if you didn't use Facebook in the first place?_

That's even more mind boggling. Why would someone who didn't use Facebook want
to work for them, much less get hired without creating an account sometime
before that?

These days, it seems that if you don't use Facebook you probably have a strong
opinion not to, and are unlikely to even consider working for them.

~~~
dsacco
I don't use a Facebook profile, but it's not because I have a strong opinion
about Facebook. It's just that nothing about its core product particularly
appeals to me. But I also don't find anything about it morally repugnant, and
I respect that many people really enjoy using it. I even respect that many
other people don't enjoy it at all (and even find it repulsive), though they
may be fatiguingly militant at times.

However, I'd probably work at Facebook. I respect and admire Facebook's
overall security organization more than just about any other large tech
company except Google, Apple or Microsoft. I haven't actually looked for
employment there, but I've heard it's nice, and there are probably really
interesting roles available for cryptography research and engineering.

------
aaron-lebo
What good is hundreds of billions of dollars if you can't do the small things
right? Or maybe too much money just allows you to be mediocre. There seems to
be an inflection point where that happens, but who knows where it is.

Greed and pride come before the fall; hopefully all you people out there
working to dethrone these companies will keep kicking.

~~~
sillysaurus3
Security is hard. You can't make this kind of statement if you haven't been a
pentester. Do a stint for a year and you'll see.

I personally found a remote code exec on one of the biggest security
companies' servers. I can't be more specific, but suffice it to say, it was a
small oversight that had big consequences. It's very hard not to screw up the
small things.

If you've written a large service, and you give me a week with it, odds are
better than 50/50 that I'll at least find an XSS. I think out of fifty or so
pentests, there were only two where I didn't find an XSS or better, and one of
those was a read-only informational pamphlet.

~~~
aaron-lebo
It might be hard, but if you've got those kind of resources available to you
and the consequence of being wrong is hundreds of your employees might not
ever be safe again, you'd better make sure you are right.

 _A bug in the software, discovered late last year, resulted in the personal
profiles of content moderators automatically appearing as notifications in the
activity log of the Facebook groups, whose administrators were removed from
the platform for breaching the terms of service. The personal details of
Facebook moderators were then viewable to the remaining admins of the group._

So nobody noticed or fixed that for two weeks? (Edit: the article actually
says the bug was active for two weeks before being discovered, wasn't fixed
for another two weeks, and had been retroactively exposing employees for more
than a year - is that security?) Thousands of employees and nobody is checking
for that?

Come on. We praise Elon sending things into space but FB can't do some basic
error checking? Something doesn't add up.

~~~
sillysaurus3
Did I mention that the other pentesters on the team completely missed that
remote code exec I found? I hate to brag, but sometimes it's the best way to
get a point across.

Security is so hard that you can't find everything. One of the dirty secrets
of the security industry that nobody likes to talk about is that when you do a
pentest, you're deemed secure, but it's overwhelmingly likely that the
pentesters didn't find all the vulnerabilities.

Security breaches are rare, which is why this system works. But it's not
malicious -- it's just very, _very_ hard to be secure all the time. Security
isn't even a priority in most cases.

I get that you're saying Facebook has resources. But with resources comes
scale. How many programmers do you suppose worked on that feature? It could be
one or two out of a dozen assigned to that arm of the project. Or it could be
a dozen involved in a multi-month refactoring of spaghetti code. Or it could
be one overworked person who threw in a "fix" before crawling home to bed. FB
hires exceptional people, and even exceptional people ship bugs.

By the way, two weeks is a _tiny_ amount of time. A real-world pentest takes
about that long to book, unless you already have pentesters on retainer. I'm
sure Facebook does, but as I said, pentesters can't catch everything.

~~~
aaron-lebo
It's frustration speaking, but you're right in what you are saying.

Good security folks are valuable for a reason. Good luck to your efforts.

------
maxharris
Is it actually a given that the prominence of terrorists is simply an
inevitable fact of life, a phenomenon that has no cause and no solution, and
that we have no choice but to simply accept it? Is it actually true that they
have no actual cause, represent no idea, have no specific aims, no ideology,
no picture of what they want the world to look like? Is it actually true that
their existence can be fully explained by purely material or historical
factors, such as economic conditions, wars, foreign influences or the ratio of
young males to young females in a given nation?

It sucks that a bug caused a number of innocent people to fear for their
lives. But this wider question can't be ignored - what could Facebook do alone
if it were specifically targeted? People would still be fearing for their
lives.

Important problems are not solved by ignoring or papering over fundamentals.

------
danjoc
Note: The dupe filter doesn't catch a missing www.

[https://news.ycombinator.com/item?id=14571426](https://news.ycombinator.com/item?id=14571426)

~~~
sillysaurus3
As it shouldn't. It's a different URL.

It's pretty annoying when dupe filters "catch" submissions with ?repost=1
appended to them, for example. It was either Reddit or HN that did that at one
point. Sometimes there are legitimate reasons to resubmit.

It's also a form of letting the community express themselves -- if a thread is
repeatedly flagkilled, having some mechanism to resubmit it is a way of
resisting the urge to cry censorship. It often gets a thread to the front page
(albeit with cement boots).

That's a bit tangential, but I'm just providing a few counterpoints to resist
the urge to tighten the dupe filter. The manual method works pretty well.

~~~
danjoc
>It's a different URL.

It seems ripe for abuse. #walawalabingbang is also able to pass.

[https://news.ycombinator.com/item?id=14573297](https://news.ycombinator.com/item?id=14573297)

That opens the door to unlimited resubmissions.
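A dupe filter could close most of these holes by canonicalizing URLs before comparing them. Here's a minimal sketch of that idea in Python (hypothetical - not HN's actual implementation; the `TRACKING_PARAMS` set and `canonicalize` helper are illustrative names):

```python
from urllib.parse import urlsplit, parse_qsl, urlencode

# Query parameters that carry no content and are often abused for resubmission
TRACKING_PARAMS = {"repost", "utm_source", "utm_medium", "utm_campaign"}

def canonicalize(url: str) -> str:
    """Normalize a URL so trivial variants compare equal for dupe detection."""
    parts = urlsplit(url.lower())
    host = parts.netloc
    if host.startswith("www."):          # www.example.com == example.com
        host = host[4:]
    # Drop the fragment entirely and strip known tracking/noise parameters
    query = urlencode([(k, v) for k, v in parse_qsl(parts.query)
                       if k not in TRACKING_PARAMS])
    path = parts.path.rstrip("/")
    return f"{host}{path}" + (f"?{query}" if query else "")
```

With this, a missing `www`, a `#walawalabingbang` fragment, or a `?repost=1` suffix all collapse to the same key. The trade-off is exactly the one raised later in this thread: stripping fragments breaks sites where the fragment is the only way to link to an entry, which is one reason a looser filter plus manual moderation may be the safer design.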

~~~
sillysaurus3
That's a rather inefficient way to get your account banned. ;)

Jokes aside, I've been consistently impressed with how well manual methods
have worked for HN. It was very unexpected that you can simply throw human
resources at problems of scale and do so well. There is lots of smart software
powering those humans, but the manual approach avoids false positives.

Stripping a hashtag has also caused problems, by the way. I remember one
submission in particular where the OP complained that the hashtag linked to a
different entry in a blog, which was set up to make that the only way to link
to it.

~~~
Houshalter
Deepmind's blog does that, which is the most annoying thing. E.g.:
[https://deepmind.com/blog/#decoupled-neural-interfaces-using...](https://deepmind.com/blog/#decoupled-neural-interfaces-using-synthetic-gradients)

It screws up URL searches when I want to find whether the link was posted
before on HN or reddit, for instance.

------
hagakure0c
Facebook is like a Bentham panopticon seen from the outside. No one wants to
get in.

