Ok. So Facebook monitors Messenger (not just your public Facebook "wall"), which has the appearance of being private person-to-person communication, and detects incitements to violence in Myanmar (written, I assume, in some language other than English).
That is some impressive operation they have going on.
No wonder the Chinese did not let them inside their country.
To paint a picture here are some headlines by BILD (largest German tabloid):
"ZUCKERBERG SCANDAL: So verdient Facebook mit Ihren Daten Geld", (ZUCKERBERG SCANDAL: This is how Facebook earns money with your data), article based on a report by German national television (ZDF)
"SCHADEN FÜR DIE DEMOKRATIE: 60 Prozent der Deutschen fürchten Facebook" ("DAMAGE TO DEMOCRACY: 60 percent of Germans fear Facebook")
Then again, you did say "propaganda" campaign, so I guess there was your assumption of dishonesty.
1) "...[E]thical to me IFF you don't make stuff up." I think you can have propaganda that uses only facts, but in a disingenuous manner: you just don't mention _all_ the facts, and/or put a lens on the wrong perspective. Is that "dishonest"? I'm not sure - maybe a misinterpretation?
As an example, you can see this all the time with the arguments against global warming: "some studies show that temperatures are not increasing." If you look hard enough, you'll find data points and/or papers that argue against global warming, but they're such a small minority of studies that citing them is not a fair perspective or lens on the argument.
2) "I find it imperative to do so." I am not a Facebook fan-person by any means, but I'm not sure I find it morally _imperative_ that folks rally against what they are doing. I have a hard time believing that the thousands of employees who work there are all evil people; more likely, the majority are well-intentioned and may just need a correction of direction. Can I prove this? No, but if they were, say, selling nukes to ISIS or something at that level, then I might agree with you a bit more.
If Facebook is the vector of incitement to violence around the world, they may be as hazardous as tobacco and firearms.
Nothing wrong with regulation and information campaigns about the dangers of tobacco and firearms, so why not Facebook?
1) While they're not perfect, I'm not sure I believe they're doing more harm than good. As an anecdotal example, my family loves Facebook, because I live pretty far from most of them, and they like the ability to know that I'm healthy + happy, and/or if I'm travelling that I got back safely. They're generally older, so not that tech-savvy, and I do give Facebook Design a lot of credit for making some really intuitive choices in a lot of areas that make it accessible to older / non-geeky audiences. Could I just text / call them? Sure, but I'm a busy person like most people here, so it's easy to just throw up a picture or post to FB and make their days a bit better.
2) A lot of the arguments that they're doing "harm" are pretty pseudo-scientific, if you look at the claims/articles closely. Many rely on "correlation over causation" views, anecdotes, and the like. It's only recently that some true studies have tried to prove this, but even then arguing for putting FB on the level of tobacco and guns is a strong claim to make off of minimal peer-reviewed evidence. We might as well toss Google + Amazon + Twitter into this same boat.
It's somewhat ironic you use this as an argument for good. I'm much like yourself in this scenario, so I'm not trying to take a moral high ground, but I can't say I truly believe this is enriching the relationships of people on either side.
Jokes aside: no one says "shut it off," but making them follow some rules, and possibly adding new ones, is not a strange thing at this point.
Or, to take another reading of it - Facebook actively monitors nothing. They get a call that people are abusing the service, and they open the hatch and proceed to take a look.
Skype does it, too, by the way, in case that's also a surprise. Probably most of the services that don't use end-to-end encryption by default do. And now that Brian Acton has left WhatsApp, you should be very careful with what you say in WhatsApp's end-to-end encrypted chats, too.
For instance: http://www.information-age.com/whatsapp-can-read-users-encry...
If there are other insecurities, please post them here - I haven’t heard of others than this one.
And this: http://www.businessinsider.com/whatsapp-security-2014-2?r=US...
The fact that it is owned by Facebook is enough to make me stay away.
That’s fair, but I think it’s important to be very explicit if that’s the criticism.
It’s true that Facebook can see your contacts list if you share it with WhatsApp but no one ever claimed that they couldn’t. This isn’t a security vulnerability as such.
The second article you linked is from before WhatsApp’s move to the Signal protocol.
Yes, it’s owned by Facebook and yes, it’s closed source, but WhatsApp has brought end-to-end encryption to over a billion people. For the vast majority of use cases it’s a huge security improvement.
Besides that, I would never touch WhatsApp. The fact that they hand your phone number over to Facebook is one of many reasons to stay far away from this messenger.
I uninstalled the FB app ages ago, and I've removed all shortcuts to FB from the start pages in my browsers. That helps a bit. I've also set Cookie Autodelete to not allow any FB cookies, so I have to log in every time, presenting a further small hurdle I have to cross. It's a very small thing, but it makes me more aware of what I'm doing, so I can stop myself.
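If you want a blunter hurdle than deleting cookies, one option on a Unix-like system is to null-route Facebook's domains in /etc/hosts so the browser can't resolve them at all. A minimal sketch (the domain list here is illustrative, not exhaustive; Facebook serves content from many more hostnames):

```shell
# Append illustrative Facebook hostnames to /etc/hosts,
# pointing them at 0.0.0.0 so connections fail immediately.
# Requires root; remove the lines again to undo.
sudo tee -a /etc/hosts > /dev/null <<'EOF'
0.0.0.0 facebook.com
0.0.0.0 www.facebook.com
0.0.0.0 m.facebook.com
0.0.0.0 connect.facebook.net
EOF
```

Unlike a cookie extension, this also blocks the `connect.facebook.net` tracking scripts embedded on third-party sites, though it is trivially reversible, which is arguably the point: it's a speed bump, not a wall.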
Why inflate what actually happened that much? I checked that statement, and even the Daily Mail article just mentioned that a security guard told the photographer to go to HQ, where he was met by two executives. No 'paramilitary force'...
I like neither the company nor the person, but please don't spread false information.
Responding to Tim Cook:
>You know, I find that argument, that if you’re not paying that somehow we can’t care about you, to be extremely glib and not at all aligned with the truth. The reality here is that if you want to build a service that helps connect everyone in the world, then there are a lot of people who can’t afford to pay. And therefore, as with a lot of media, having an advertising-supported model is the only rational model that can support building this service to reach people.
>That doesn’t mean that we’re not primarily focused on serving people. I think probably to the dissatisfaction of our sales team here, I make all of our decisions based on what’s going to matter to our community and focus much less on the advertising side of the business.
>But if you want to build a service which is not just serving rich people, then you need to have something that people can afford. I thought Jeff Bezos had an excellent saying on this in one of his Kindle launches a number of years back. He said, “There are companies that work hard to charge you more, and there are companies that work hard to charge you less.” And at Facebook, we are squarely in the camp of the companies that work hard to charge you less and provide a free service that everyone can use.
>I don’t think at all that that means that we don’t care about people. To the contrary, I think it’s important that we don’t all get Stockholm syndrome and let the companies that work hard to charge you more convince you that they actually care more about you. Because that sounds ridiculous to me.
The fact is advertising business models, like Facebook, have inherent conflicts that Apple doesn't have. Paying directly for a product that you're going to use creates a beautiful alignment of interests. Even Microsoft wasn't as beautifully aligned as Apple was/is because Windows was sold to CIOs and IT departments and not the person who was going to actually use it.
"Apple is for rich people" would have been excellent jujitsu, at least rhetorically, if not for that leaked memo. Unfortunately that memo laid bare for everyone to see exactly what Facebook as an organization and a culture prioritizes, and reveals that all of Facebook's public statements made after privacy scandals over the years, as well as Zuckerberg's protestations that he cares about the user over "connecting the world" (a euphemism for increasing engagement and thus ad growth), were complete bullshit. Ezra refers to Tristan Harris' comment that Zuckerberg couldn't do anything that decreased engagement by 50%, which is completely right and zeros right in on that inherent conflict in ad business models.
On the ethnic cleansing in Myanmar and Facebook's role (I'm going to quote Ezra because Mark's answer is garbage and a complete dodge):
>One of the scary stories I’ve read about Facebook over the past year is that it had become a real source of anti-Rohingya propaganda in Myanmar, and thus become part of an ethnic cleansing. Phil Robertson, who’s a deputy director of Human Rights Watch in Asia, made the point that Facebook is dominant for news information in Myanmar but Myanmar is not an incredibly important market for Facebook. It doesn’t get the attention we give things that go wrong in America. I doubt you have a proportionate amount of staff in Myanmar to what you have in America. And he said the result is you end up being like “an absentee landlord” in Southeast Asia.
>Is Facebook too big to manage its global scale in some of these other countries, the ones we don’t always talk about in this conversation, effectively?
This gets to the heart of the matter. Facebook (and Google with YouTube) are just too big for them to even grasp what's going on on their platforms. "Absentee landlord" is such an apt way to put it. These companies sneeze and the ripples are massively world changing, from election meddling to fucking ethnic cleansing.
The only real solution to this is regulation on a global scale. Antitrust, privacy regulation, Germany's hate speech law applying to social networks (this won't happen in the US for obvious reasons); all of it has to be on the table.
At its heart, Facebook can't be honest with what it is. Mark Zuckerberg can't admit to himself that what's good for Facebook is bad for society. That's why he says a lot about "community" and "connection" but never talks about advertising or monetization. Look at his updates - they never talk about the billions of dollars they make (money talk - that's taboo!)
Facebook doesn't make money via advertising. Advertising is a billboard, or a TV commercial. Facebook makes money by selling your private data (oh you're showing signs of depression - try some prozac!). Facebook is a surveillance operation in a way that traditional advertising isn't.
I have a friend who worked on the newsfeed team at Facebook. Nice guy. Smart guy. But he's unable to consider the broader ethical implications of what he's doing. In the Facebook paradigm more engagement = good.
Facebook has contempt for the general public, and for that reason deserves contempt from the general public.
There have been exceptions, but they are broadly speaking either from years ago, or were mistakes in the first place. Facebook is generally uninterested in giving away your information -- that is, after all, a big chunk of what makes them valuable. If everyone had the same information, they wouldn't be as able to sell ads.
If you think about it on a spectrum, at one end is traditional advertising, which lets you do some crude targeting (if I advertise on America's Next Top Model, I'll get a different audience than if I advertise in the Wall Street Journal), with audiences opting in and out based on individual choice (I don't have to watch America's Next Top Model, even if I have all the characteristics of an ideal viewer). No data is sold here.
At the other end are people who sell leads, usually to B2B customers (here are the names and contact info for 500 IT Directors). Data is explicitly being sold here.
Facebook sits somewhere in the middle, which is why we don't quite know how to talk about it. Advertisers can target as with traditional advertising, but they can now also get explicit information on their targets, in a way that hasn't really been done before.
Hmm, but ultimately the money still comes from the target audience anyway, right? The advertiser only pays to reach a given audience if that audience is going to buy enough product to pay back the cost of advertising. Maybe you could make the argument that the really poor Facebook users are essentially drafting off of imperfectly targeted advertising?
I think the real reason for the advertising model is that Facebook is more valuable the more people use it. Suppose it started charging money: maybe only a quarter of the users actually pay, and the other, very casual users just use the fee as an excuse to finally stop using Facebook. But now, with such an exodus, the paying people don't want to pay anymore, and then you have a death spiral.
Note that the really poor Facebook users can still be targeted for advertising by politicians to curry their votes.
Don't forget FB as a business tool. There it not only aligns interests, it gives back more than you put in; many businesses depend on it and on the conversion rates it enables.
In my experience that's not a popular sentiment. FB has absolutely decimated the organic reach of posts by businesses, creators, and other entities whose customers/fans have followed them. Now you have to pay Facebook to reach those customers/fans who followed you in order to see your posts.
Here's science journalist Derek Muller (of Veritasium) describing his observations on this issue in detail: https://www.youtube.com/watch?v=l9ZqXlHl65g
Depends on what outcome you want. Buying Apple/MS products, your money simply goes into some executive's pocket or into a pile of cash, promoting an even greater concentration of wealth. While that is in Apple's interest, it's not really in my interest as a consumer. I want as little of my money as possible to go into a pile of cash, and as much as possible toward the actual cost of the good.
No? I'm still out, then. Good riddance.
I think that pretty much sums it up.
When you use the word "community" to describe a user base of 2 billion people you've stretched the word beyond all sense or recognition.
Frankly, this is a weak answer. They don't have a good solution and this approach is not going to prevent it from happening in future elections.
Does anything Facebook (or third-party companies) do with your private data actually have an impact on the day-to-day lives of the vast majority of the world's population? Sure, hyper-targeted advertising and echo-chamber content curation can have a negative impact on society, but these are macro issues. People will generally follow the path of least resistance, and if you give them something of high utility at no cost and with no immediate negative impact on their daily lives, they will conveniently ignore many serious issues that should otherwise be concerning.
"Yes, I mean, that's the problem, is that they keep saying this, but, you know, there's this recidivism problem. They keep not really doing anything.
And I think that the problem is that their model depends on accumulating data and giving it to advertisers. And anything that comes close to threatening that business model, they don't really seem that interested in doing something serious about it.
You know, I understand that, but I think the time of "trust us" has got to be over.
You know, the - fundamentally, Facebook is a surveillance machine. They get as much data as they can, and they promise advertisers that they're able to manipulate us, and that is at the core. And so, you know, they started this by saying, well, this wasn't really a data breach, this is our normal business model, which I think should tell you something, and then later said, well, it's not so great, and so forth.
But they're really showing an unwillingness to do something more serious about this problem. And it keeps happening over and over again.
There is just something not right here with this company and their unwillingness to come clean. And I think that the idea, well, just trust because Zuckerberg wrote a message on Facebook, that everything is going to be fine is really something government investigators cannot trust.
And once again, I think the concern in Facebook's heart is that, at some point, this will hurt their advertising revenue and the promises they have made investors. And so they're unwilling to take serious steps.
And I think the fundamental problem is, they're all dependent on this pure advertising model, you know, nothing but trying to get as much data out of us and sell as much as they can of our time and attention to other people.
And that just leads in very dark directions."
"You know, I find that argument, that if you're not paying that somehow we can't care about you, to be extremely glib and not at all aligned with the truth.
The reality here is that if you want to build a service that helps connect everyone in the world, then there are a lot of people who can't afford to pay. And therefore, as with a lot of media, having an advertising-supported model is the only rational model that can support building this service to reach people.
I think now people are appropriately focused on some of the risks and downsides as well. And I think we were too slow in investing enough in that. It's not like we did nothing. I mean, at the beginning of last year, I think we had 10,000 people working on security. But by the end of this year, we're going to have 20,000 people working on security.
[ __% of total headcount at Facebook ]
In terms of resolving a lot of these issues, I think it's just a case where because we didn't invest enough, I think we will dig through this hole, but it will take a few years. I wish I could solve all these issues in three months or six months, but I just think the reality is that solving some of these questions is just going to take a longer period of time.
Now, the good news there is that we really started investing more, at least a year ago. So if it's going to be a three-year process, then I think we're about a year in already. And hopefully, by the end of this year, we'll have really started to turn the corner on some of these issues."