I interviewed with FB shortly after the Cambridge Analytica stuff came out (but long enough after that it was clear to everyone that FB had screwed up big time).
When it was my turn to ask questions in one particular interview, I gave a standard fallback when I have nothing else: "[despite all the blahblah positives], having worked at other technology companies, what's your least favorite thing about working at FB?"
This guy was a FB vet (maybe 5 or 10 years? Very long in FB time, I think). He gives me this spiel about how hard the teams work to maintain user privacy and how unfairly they're being treated by the public and the media, and how hard it is to work in an environment where everyone treats you so unfairly.
I was floored. Not even a hint of apology, remorse, or "we could have done X better". "Unfair. Fake news. We're doing great things, and no one thanks us enough." And from someone who had probably made a mint having been around at FB near IPO time. It was my first or second interview of the day, and while no one else was so blatant, the sentiment persisted throughout the rest of the day.
I bombed the interviews hard, they didn't want me back, but I had basically decided I'd never work there by the time lunch rolled around.
Having dug into the Cambridge Analytica story, I can see a number of reasons why Facebook workers would feel wronged by the coverage. In particular, many outlets omitted or downplayed the fact that Cambridge Analytica had lied about the purpose of its data collection, falsely claiming that it was for academic research. Dozens of other academic institutions were similarly allowed to solicit data from users, yet those go unmentioned because most people don't see users voluntarily sharing their data for academic purposes as a problem. But that's what Cambridge Analytica was, from Facebook's perspective. The picture painted by the media was one where Facebook brazenly sold people's data to nefarious actors, when in reality Facebook treated Cambridge Analytica just like other academic organizations in allowing it to solicit users for data - except it turned out that CA had lied about the purpose of its data collection.
This story makes me more empathetic towards your interviewer. If my company were so thoroughly demonized by the media that candidates discounted my own experience actually working at the company and were "floored" when I told them that the reality is different from the public coverage, I'd be pretty salty too.
For starters, most of us here don't benefit financially from putting a positive spin on disasters at Facebook. As the Upton Sinclair saying goes, it is difficult to get a man to understand something, when his salary depends upon his not understanding it.
I've not talked to Facebook employees personally, but I've witnessed the same degree of delusion at other large tech companies. Some are straight-up like a cult, with employees willing to defend just about anything their company does regardless of how hazardous it is. Which is even more baffling when you consider that these are usually salaried employees working there for only a few years, with no personal stake whatsoever.
The point isn't even to discuss some sort of internal nuance about the technical details of CA. If you are a company as large as Facebook and you allow user data to be abused to this degree, then even if there is some nuance to it, when facing the public you apologize, tuck your tail between your legs, admit that you screwed up, fix your problems, and turn the arrogance and the world-saving rhetoric down a notch.
Well I think I can solve this puzzle. It's possible some people disagree with you about how "hazardous" their company is.
They're not "cult-like" or "delusional". They're not willing to "defend just about anything" despite the fact that they are only there a few years.
They just don't agree with you on the amount of hazard their companies are doing.
Note: This is regardless of whether you are in fact right or wrong. But it kind of bugs me that someone not having the same views is branded in such a way, as if somehow clearly you know the truth, so anyone who isn't automatically on your side must be delusional/evil in some way.
That's not what I believe at all. I've had plenty of discussions in my life with people from all kinds of sectors, and in the tech industry, FAANG employees in particular stand out in this way. When I talk to someone from the Big 4 like PwC, I never get the impression that they're overly attached to their company's point of view.
Tech companies have very cleverly fostered some sort of ideological atmosphere among their employees that makes them defensive about their wrongdoings, and they have long pushed the idea that they're not just vehicles to create profit for shareholders but on world-saving missions.
As another example, remember when Uber essentially spammed Mayor de Blasio's office through their app in an attempt to undermine regulation and effectively get ahead of the law through a harassment campaign? At the time I talked to Uber employees, and a good chunk defended it.
Can you imagine any ordinary industry acting like this?
You mean proactively explaining to their workforce the reasoning behind actions likely to gain widespread attention in the press?
Yes; that's called treating your employees with respect. Everyone generally expects employees to have some level of insider knowledge, and it's polite to give your workers enough of a heads-up to not be blindsided by questions from friends and family.
The difference with Facebook and some other tech companies is that there's enough trust that employees are generally better informed about the strategic and competitive landscape the company is operating in, and that context can explain actions that may look nonsensical from the outside.
More industries should work like this, not fewer - it's treating workers as people who can think for themselves instead of simply cogs in the machine.
As I recall, the sense of unfairness that was going around was rooted mostly in it feeling like old news. CA was the ghost of a bad policy that had already been rescinded, and there was very little awareness of that in the media coverage. Instead, there were loud calls for Facebook to do something, but every reasonable thing had already been done years before.
When I worked there, nobody believed that the policy which birthed CA was a good idea, which is why it was long gone. Also, everything had played out already in the public eye (in the tech press) -- anyone who had been paying attention should have known about most of these things already.
¹ It seems silly to make all these disclaimers, but they seem necessary with the mood here.
It was swept under the rug.
Ugh, this quote. If someone is working as a SWE at Facebook, their career options are probably pretty good. They can get a salary most anywhere.
Too often people use this quote as an intellectual shortcut to ‘I’m right, you’re wrong because your job blinds you to your bias’. Sometimes, they do just know more than you about it.
I’d suggest you take your own advice and dial up the humility a notch and quit branding people who disagree with you as ‘delusional’ or ‘cult like’
With all the demonstrable shenanigans at FB, the word "salary" is the thing you're going to criticize?
Of course, we are talking about Facebook, which has literally been in the news before for running research experiments on uninformed, random users.
The reason it ends up being an issue is because of politics: it gives people an explanation and someone to blame for an election that did not play out according to the expectations and/or hopes of half of the country.
And of course, a couple months later it came out that they were storing passwords in plaintext logs.
This wasn't just about CA. There was a class of problems that FB was facing, and the guy didn't acknowledge a single thing they could have done better.
CA is not actually that bad of a situation.
There are much greater privacy and freedom of expression concerns.
Facebook is the one that collected the data. It is their responsibility to ensure that the parties they give your data to are who they say they are, and that those parties are doing only what they are supposed to do with your data.
They should have a comprehensive compliance program for third parties with access to user data, and an enforcement regime with enough bite to prevent abuse - not hide behind their policies until they get caught red-handed and then shrug off their responsibilities onto everyone but themselves.
Certainly, with the benefit of hindsight we can see that the potential for abuse for this kind of data sharing - even with the requirement that it's only used for academic purposes - is significant. But it seems to me that the primary culprit here is Kogan, who lied to Facebook about the use of the collected data, rather than Facebook whose fault was being too trusting of academics.
As though simply working at a university is sufficient reason to trust someone.
As others have said, requiring evidence of university IRB review (which it doesn't sound like they did) would have been a way to require more safeguards. Yes, bad actors could and maybe would still have abused the system, but it makes their work harder and more visible.
And what did FB do as due diligence? Very little. When FB discovered the breach, what did they do as a response and to mitigate the effects to the affected users? How did they recover damages and/or enforce specific performance of contract terms to not only remove the data in question but all of the products that resulted in the processing of the data? Again very little.
When banks gave out mortgages the way FB gave out user data, the result was a massive financial crisis. And now with FB, we got (more) idiocracy.
They revoked Cambridge Analytica's access to Facebook's data, and told Cambridge Analytica to delete the data they had gathered. And Cambridge Analytica again lied to Facebook and told them they had deleted the data. That's the extent of what Facebook could have done. If Kogan broke laws - and he probably did - that's the government's prerogative to charge and prosecute him.
Likening this scenario to banks giving out mortgages that they know debtors cannot pay off is not an effective comparison. This is more like someone securing a loan by falsifying their income. In both cases the customers were harmed. Facebook users' data was used for purposes to which they did not consent, and the bank customers' money was loaned out at excessive risk. But the culprit that is responsible for this is the one that deceived the company, not the company itself. One could reasonably argue that Facebook should have been wise enough to avoid being duped, but that's still much more generous to Facebook than the bulk of the coverage I read that attempted to assign primary blame on Facebook rather than Cambridge Analytica.
Also, Facebook did in fact breach UK data protection laws and was fined by the ICO for its role in the CA scandal. It was found that their data privacy policies and processes were insufficient. Unfortunately this was before the GDPR, hence the maximum fine that could be imposed was an insignificant £500K.
2. This demonstrates a persistent misunderstanding of what events transpired. Facebook was not breached in any way; Cambridge Analytica did not hack into Facebook's systems. This was Cambridge Analytica's subsequent misuse of data that it had collected with Facebook's consent, under stricter terms than the purposes for which it later used the data. Facebook ordered Cambridge Analytica to delete the data.
1. The ICO disagrees with you. Facebook was fined specifically for breaching data privacy laws in the UK.
From what I read, your parent comment sounds entirely logical coming from a different POV. I don't see anything illegitimate about it; even if it WAS Facebook PR, the points within are entirely valid.
I just don't see that as reasonable in discourse. I mean you can hold those beliefs, but if you do express them in public I think it's right to be called out on how unacceptable that is.
Instead of discussing what went wrong and how FB's policies and operations are deficient, the conversation is being shifted towards how much the media is biased towards it.
FB is not the victim here. Hence to have a productive discussion, being on the same page is indeed important - as in accepting first and foremost what FB's responsibilities are.
And this is why we have the schism in our society. No one is willing to find the middle ground and listen any more. Everyone simultaneously believes that listening is the other side's responsibility.
If there is an obvious problem that one side steadfastly refuses to acknowledge and insists on blaming on others, then yeah, we have a schism in our society, as one side is just ignoring reality.
Choosing to give away your property to third parties is not comparable to being robbed. Facebook collected the data on unsuspecting users and proceeded to willingly give away the data. Thus Facebook is responsible for what comes out of it. There is no way around it, and it boggles the mind how this fact is brushed aside.
This crucially omits the part where Cambridge Analytica lied about the purposes of the data that was collected, and subsequently lied again when Facebook learned of this deception and demanded that the data Cambridge Analytica collected be deleted.
You're right, this isn't like a bank robbery. This is more like someone securing a loan and then running off with the money. The bank's customers were harmed, and one could criticize the bank's scrutiny of its debtors. But the nefarious party is the one deceiving the bank.
This line of complaint is absurdly disingenuous. The problems that Facebook has created are not solved with EULAs, and Facebook's responsibility for having created this whole mess is not brushed aside by claiming that third parties did not click the right checkbox when downloading Facebook's data.
This is precisely the type of PR problem that Facebook creates for itself: this insistent, desperate, cynical, and pathetically inefficient way it tries to pin the blame on others for a problem Facebook single-handedly created. Force-feeding this nonsense through astroturfing campaigns doesn't change the problem or the responsibility that Facebook has.
"Third parties did not click on the right checkbox when downloading Facebook's data" is not even remotely close to what happened. The fact that this perspective is so common is a big part of why I doubt many people received coverage of the events that was even close to objective.
Alexandr Kogan was a senior research associate at the University of Cambridge who developed a personality-quiz app that collected data he claimed he would use for academic purposes. He subsequently used this data for commercial and political purposes, and when Facebook discovered this, it revoked the app's access and demanded that he delete the data he had collected. Kogan told Facebook that he had deleted the data when he had not. This wasn't third parties not checking the right box; this was a deliberate and involved plan to evade Facebook's data use policies.
It was precisely what happened. Facebook's excuse is that Cambridge Analytica did not comply with Facebook's terms of service.
And the fact is, Cambridge Analytica were just the stupid ones, the ones who opened their mouths and openly bragged about what they were doing.
So, enough with the bullshit.
A lot of us are evaluating the entire picture, based in large part on the track record of Facebook over the last 10+ years.
For example, everyone who takes a look at the "friendly fraud" case comes away thinking, "Boy, these Facebook employees will stop at nothing to make a buck." Now layer that on top of whatever came out during the CA scandal, and you can see why the issue looks like Facebook employees acting like a cult.
And by the way, nothing has changed. You can see this in how every time someone from Facebook does any PR at all (e.g. podcast interviews), they take a lot of care to make sure they don't go on podcasts which bring up the privacy issue.
Here is an open challenge to any Facebook employee who is reading this comment - go on an interview with a known "hostile" who is also not considered an idiot conspiracy theorist - e.g. DHH - and have them interview you. You know that you would never even consider doing it, because you do have a lot of shitty things still going on at Facebook that you wouldn't want to later contradict.
By the way, the same challenge applies to folks working at Google.
First, you may not have the right training (e.g., media studies, media economics, privacy). Second, you favor your coworkers because they are your friends. Third, positive news and rebuttals of external critics are widely shared internally, while points that are appropriately critical are shared much less. A single flawed external article can breed defensiveness, making it that much easier to dismiss many other appropriate articles.
More poetically, it's like the parable of the blind men and the elephant. The conviction of the blind men is so great, but their closeness leads to a misplaced confidence that they know the answer.
I call this effect the "truth distortion field," and liken it to the Facebook friend filter bubble.
That is really troubling. Excusing nefarious behavior because "well the user opted in" is a downright horrible way to rationalize bad behavior. Users never read the TOS. They barely understand the privacy settings (though I've heard they've gotten better ... haven't had Facebook in years). I tend to think that, yeah, Cambridge Analytica was probably overblown a bit for a number of reasons, but wow. I know a couple people who have quit Facebook (one recently, one a couple years ago before the election) and a couple who have stayed. I have to say, the ones who've stayed have drifted away from the rest of our friend group. Sad to see.
Funny how it means the company is now probably full of either "ignorance is bliss" or morally bankrupt people, because the people with good conscience have left. Not that society at large is much different... what uncomfortable truths are we ignoring?
The grand majority of folks know not to bite the hand that feeds them - a hand that, in this case, feeds them very well, better than almost anyone else could.
Between jyrkesh's tale and yours, it makes me think that things are even worse at FB than I thought they were.
She told me that people are really worried about how the controversy affected the stock price. (Facebook stock was down around that time.)
I'm not sure how many employees worry more about the stock price than about the actual effects of the CA controversy, but I was disappointed to hear that from a Facebook employee.
My company might have exposed millions of users to political propaganda/wrongthink. It is being fixed, though.
6 months' worth of base salary.
Facebook had relatively permissive APIs. The entire world knew it. Journalists, developers, Big Cos.
Nobody was screaming that there was a problem. Though there were a few issues around privacy, there wasn't really any public dialog specifically around the nature of those APIs.
CA found a sneaky way to take advantage of somewhat open APIs.
As the world started to become somewhat more concerned, and FB saw some room for potential abuse - FB rightly tightened up the APIs a little. This was long before the CA scandal blew up.
Then it became public that CA was doing some sneaky things with the data. It's actually highly debatable whether they did anything wrong given the information available at the time - or at least anything outside industry norms: what CA was doing with data from FB is what everyone else was doing with data from similar sources.
FB called on CA to erase the data, and checked that they did. It turns out, CA lied and did not.
FB did the right thing every step of the way in the CA scandal. APIs of every kind err a little bit one way or the other over time and as issues surfaced, FB moved in the right direction without coaxing.
The media absolutely misrepresented the entire issue. They didn't really make clear what happened, nor did they clarify what exactly FB did wrong. Most importantly, they misled the public with respect to a separate, secondary issue: "special API access".
The 'scandal' is that there are tons of companies like CA using private data for all sorts of reasons, it's mostly not FB.
There are definitely privacy issues around FB, and I'm no fan of them, but those are a separate issue.
No, the press shouldn't be regulated. Yes, bias is normal. No, Fox News doesn't "swing public opinion" (they reflect views that already exist, and pander to them). Life is difficult, you have to think for yourself.
The second is fundamentally a subjective view of the world, so you are neither right nor wrong. I disagree with you strongly. We should be concerned about things that harm our civic health even if we ourselves are to blame for them. I think there's a clear parallel to our physical health: when 80% of our population is overweight or obese, it's maybe time to think beyond "adults are responsible for what they eat" and towards "the world would be better if we could make progress on this issue". Ditto our past history with smoking. We know that people are subject to persuasion, psychological manipulation, and exploitation of their cognitive biases. A model of the world where everyone is solely responsible for their own destiny is a pretty useless one, in my opinion.
But, your first claim -- "Fox News doesn't 'swing public opinion'" is an empirical one. Either it does or it doesn't. You're either right or wrong.
DellaVigna and Kaplan's 2007 piece in the Quarterly Journal of Economics investigates this question. They exploit the fact that the phased roll-out of Fox News between 1996 and 2000 impacted 20% of the country. This allows for a natural experiment where some markets are "treated" by Fox News and others are not.
Okay, so, first, should we believe this design? If the entry of Fox News into a market was random, then this is an experiment and we have a treatment effect. If the entry of Fox news into a market was conditional on market characteristics that we can assess, we have "selection on observables", and we can recover a treatment effect. We need only a good selection model. It is also the case that if entry into a market was non-random, but conditionally ignorable with respect to the potential outcomes of the market (e.g. random with respect to politics), we can get a treatment effect without knowing the full selection model. So we have some different routes to the answer here.
The answer is between #2 and #3 - Fox News did enter markets in a non-random way, but based mainly on geographic considerations rather than on demographics or past voting history. Given the regulatory and infrastructural components of entering a new market, this is probably not surprising, but it does guard against "Fox entered conservative markets first" as a counterclaim.
Now, having clarified the design-based considerations, the authors find that Fox News was responsible for a ~0.5 percentage-point increase in the Republican vote share in the 2000 presidential election. Whether you consider that large or small depends on your frame of reference. My sense would be that 0.5 percent is small cosmically, but it's maybe possible that you could have an election where that kind of margin is decisive. Remind me again, was the 2000 election close?
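For readers unfamiliar with the method, the difference-in-differences logic behind an estimate like this can be sketched on synthetic data. Everything below is invented for illustration; only the ~0.5 point figure mentioned above comes from the actual paper.

```python
# Toy difference-in-differences (DiD) sketch of the natural-experiment
# logic described above. The data are simulated; TRUE_EFFECT stands in
# for a hypothesized Fox News effect on Republican vote share.
import random

random.seed(0)
TRUE_EFFECT = 0.005  # assumed +0.5 percentage-point treatment effect

def simulate_market(treated):
    base = random.gauss(0.48, 0.05)   # market-specific GOP baseline
    trend = random.gauss(0.01, 0.01)  # election-to-election drift, same
                                      # distribution for every market
    pre = base                                                # 1996 share
    post = base + trend + (TRUE_EFFECT if treated else 0.0)   # 2000 share
    return pre, post

treated = [simulate_market(True) for _ in range(5000)]
control = [simulate_market(False) for _ in range(5000)]

def mean(xs):
    xs = list(xs)
    return sum(xs) / len(xs)

# DiD: (treated post - pre) minus (control post - pre). Differencing
# within each group removes the fixed market baselines; differencing
# across groups removes the shared drift, isolating the treatment effect.
did = (mean(p for _, p in treated) - mean(p for p, _ in treated)) \
    - (mean(p for _, p in control) - mean(p for p, _ in control))

print(f"DiD estimate: {did:.4f}")
```

The within-market differencing is what lets non-random baseline levels cancel out; the design only requires that, absent treatment, treated and control markets would have drifted similarly (the "parallel trends" assumption discussed above as conditional ignorability).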
This is a fairly credible and careful design, published in a good journal and widely cited (1,486 citations, which is quite high for a non-methods economics paper). That doesn't make it true, but it does suggest that the field of economics views this as an important paper and that it has been exposed to scrutiny from a wide variety of sources.
If there's a little contempt in my reply here, it's because I think your off-hand claim that media diet, priming, and framing effects are solely demand driven and have no actual impact on viewers seems to fly in the face of like 50 years of communications, economics, and political science research. In other words, it feels like your first reply was "50 years of research doesn't matter, humans aren't influenced by any of this stuff". Maybe that's not what you meant, but when I see someone flippantly respond without evidence, I think "Welcome to hell" -- imo, that's the kind of thing we get when we don't teach people to argue.
Second, your confusion about personal choice is quite palpable. You can have concern about a topic but not take responsibility for it. Those are two very different statements, because the latter begs the question: who does take responsibility? From the tone of your answer, the suggestion is "wise people like me" (as later comments will show, modesty is not something that pervades your thinking). Great, that is the dictator's charter... and that is also not subjective (again, you are mostly confused about this topic).
Third, as you appear to understand, the study you quote is not an RCT. There is an unknown level of selection bias, and you do not mention (unsurprisingly) whether this was overcome in the study. The study used differences-in-differences, which overcomes selection bias only with some caveats.
Fourth, it says nothing about the mechanism for this process. Again, you are still assuming that persuasion works in the way you feel it works... but we know that voting behaviour is mostly determined by other factors, some of which weren't controlled for in the study. And do you believe, too, that because the election result was close, this was the deciding factor? Another extraordinary assumption with no evidence.
Fifth, citations mean nothing. The only evidence on this subject is that people do not think clearly, academics are largely left-leaning and are subject to all the same biases as the rest of us. Sorry.
Sixth, there is not fifty years of research into this. This is the kind of lazy, exaggerated claim that people who have no knowledge of the area but strong preconceptions often make. You seem to believe that you are hyper-rational and that you only believe in things with evidence, so you end up making the completely false, bombastic claim that there is fifty years of research behind your position. I will make this simple: you are talking to someone who has done postgrad work in politics; no such evidence exists. This isn't even something about which you can produce very solid research (#8). You can model voting behavior very well... but such models include no factors like exposure to Fox News.
Seventh, something that you need to understand is that there is no point arguing with people like yourself. It is actually far easier to get through to people who have no education. The absolute worst people in the world to argue with are people who believe they are educated (although often not about the relevant area) because they believe that there is evidence for their feelings. They do not believe they think irrationally but everyone does. My point about "Welcome to hell" is: bias occurs to everyone but is only applied to groups that the observer disagrees with...this kind of position never changes (and later in life, they come to think about people on the other side too).
Eighth, the view about Fox News affecting an election isn't subjective. It is something that we can produce knowledge on but it is fundamentally unknowable. It is important to think about this and understand why it is the case. Academics, particularly within economics, do not understand this and it has led to lots of pointless work.
Ninth, I would like to hear precisely how you discovered this paper. Was it something like: google "fox media bias" -> find paper confirming belief?
Tenth, you are looking at this subject in a very unproductive way (again, because you are biased...in fact, you almost entirely missed the point of what I said). Everything is biased. Other news stations in the US are biased. This has been going on for longer than the US has been a country (political pamphlets started appearing in the late 16th century). Most people are not going to think for themselves but that option has to be left open to them. I live in the UK, and we have had massive issues with this because certain media sources have started producing "fact-check" services...inevitably, we now spend as much time fact-checking the fact-checkers who largely produce news that is just as factually inaccurate (in my experience, it is often more inaccurate, ref #7). The only reason people get angry about this topic is because they view one side as wrong, and another side as right...rather than the reality which is both sides are biased equally (again, this is a fairly well-understood aspect of human nature...thinking otherwise is a sign of bias).
> It feels like finance in 2009.
> On one side, you had smart, ambitious people who ended up there simply because that's where you were told to go. On the other, you had the classic Gordon Gekko-ish types reciting Liar's Poker anecdotes ad nauseam.
> Enter the crisis and everyone was equally tarred as the bad guys. The former have slowly made their way out (mostly over to tech), while the latter remain[...]
It does feel like the tide of public opinion might be turning from "too uncritical" to "too critical".
At least, as somebody who spends too much time on both Hacker News and Twitter, this seems believable to me.
People vastly underestimate the power of data IMHO and therefore the responsibility that comes with it. We'll look back in 20 years and wonder wtf we were doing just like any other decade, cause hindsight is 20/20 (after a certain number of years out ofc).
I don't think our society is really prepared for the digitization of so much information and there will be growing pains.
It's one of the simplest rules of a State hostile to its citizens: the more you control information, the stronger the leash, the tighter your hold. And if cruelty is on the menu, the worse it can get with more, better, faster information.
So I thought, when Facebook really took off, "people do this now because there's this fad, this dopamine of novelty, but surely soon enough it'll recede" and I was damn wrong.
I also thought, "oh, but governments with half-decent minds will undoubtedly choose to increase protection for citizens, enact 'digital rights' extending the sanctity of one's 'home territory' to the internet - a 'digital democracy' in terms of infrastructure rules, so to speak," and again I was dead wrong. They're doing the exact opposite, proving that the concepts were not lunatic and very much implementable, but as it turns out in the form of "digital authoritarianism".
So yeah, I'm the usual optimist about most things, even the worst like climate change; but in this case, I know of no example in history to disprove this assumption: in the current context, if any major Western country falls into dictatorship/totalitarianism/etc., it may get very dark very fast. The powers of such a modern incarnation of extreme regimes would be, well, unprecedented, and that's kind of understating orders of magnitude in scale.
That wasn't my takeaway. My takeaway was, "Good. Pressure is finally getting through to the masses of complacent co-conspirators who still have enough of a soul to care. Maybe things will finally start changing."
I think Facebook's failure in PR is more a symptom of how far Facebook has slid down the ethical slope into outright corruption. As you do more and more heinous things, it becomes more difficult to defend those actions.
Some forms of media advertisements are unethical, you say?
I was going to ask you for a reference on this, because I was pretty sure that no such standard existed.
I was quite wrong!
"Broadcasters are responsible for selecting the broadcast material that airs on their stations, including advertisements. The FCC expects broadcasters to be responsible to the community they serve and act with reasonable care to ensure that advertisements aired on their stations are not false or misleading.
The FTC has primary responsibility for determining whether specific advertising is false or misleading, and for taking action against the sponsors of such material. You can file a complaint with the FTC online or call toll-free 1-877-FTC-HELP (1-877-382-4357)."
So we have:
"The FCC expects broadcasters to be responsible to the community they serve and act with reasonable care"
"The FTC has primary responsibility for determining"
So...maybe we should give online advertising the same treatment?
As an aside: this FCC regulation surprises me; I'm quite aware that there are numerous limits to free speech, but I didn't expect this to be one of them.
I've always been an ardent believer in expansive free speech, and I still am, though age has allowed me to accept more limits. Though it doesn't feel right to me, the negative impact of (largely?) unregulated online political advertising is big and getting bigger.
Then in some states, like Ohio, it's illegal to sell ads to businesses that aren't solvent. Facebook is currently being sued by school districts for selling ads to a charter school that later closed, which I feel is a huge overreach, since most of the process is automatic and the school itself decided to buy the ads. So I guess you have to do a financial background check before selling ads. It would probably be easier to just exclude Ohio from ad sales, but the lawsuit hasn't been settled yet. So far there's been no activity for seven months, but it's still on the docket as an open case.
Edit: FCC -> FTC
Hahahahahahaha yeah sure they do.
Then there's another show about entertainment and news, and the woman on there sometimes gets dresses provided to wear. I noticed that's mentioned in a text overlay at the very end... However, guides for YouTubers say you should mention it in the audio too, not just put it in the text description or on the screen as text.
Then Apple gives TV shows and movies free iPhones and MacBooks for promotion... I've seen it mentioned in the end screen, but I don't remember ever hearing a voiceover saying products were provided by Apple; yet if you're a YouTuber, according to one guide, you're supposed to mention you got a free MacBook every time you mention it. So it seems unequal, and there's some confusion out there too. I know some tech, RVing, and camping channels will accept free products to review, but if you use or mention them in any future videos or blog posts, people might not have seen your first post where you reviewed the item and disclosed you got it for free... So I guess you have to repeat a bunch of legal jargon in every video.
I doubt many vloggers get free Apple products, but one of the examples was that if you got a free knife as a hunting channel, you should disclose it in future mentions. That's probably a more realistic example, but it wouldn't surprise me if someone even forgets who sent them a knife if they collect them after the initial review. Then recently the FTC set more rules for videos targeted at kids on YouTube, and some lawyer mentioned he spoke with the FTC and some of these people don't even own phones or really understand what YouTube is. Also, I don't remember The Price is Right ever mentioning the products they're giving away; I think they just show the car, yet as far as I know GM or Kia gives it to them for free for the product placement. I bet if a YouTuber gave away cars, they'd have to disclose it much more. I guess the old media can probably afford more people in DC than the independent vloggers can.
Then if you got paid to go to a conference and wanted to live-tweet about it, from my understanding you have to mention it somehow in every single tweet. Not sure how you'd do that with the space limit. Maybe when I read these things, I'm taking them more literally than most do. I bet a lot of businesses are out of compliance with many things if you look for it. I was reading up on PCI recently for credit card processing; I helped a lady years ago with her shopping site and, if I remember right, she just gave people a generic admin login shared between people, but I guess that's a major no-no, as everyone should have their own separate account. Then some web hosts market their servers as being PCI compliant too, but I wonder if it's really true. It just seems like a lot out there, but I do remember hearing once that the average citizen commits three felonies a day.
I think it'd be fun to be a travel vlogger some day, so I think my personal policy would be not to accept free stuff to review. But I know some companies just send people stuff unsolicited if they post a PO Box, so I wonder if those people feel pressured to review items... The whole idea of fans sending you random stuff is creepy; there are vloggers who do mail vlogs and people send them candy and such... Mommy and daddy say not to take candy from strangers, yet you let random people from around the world mail you candy... I'd rather just use things I paid for myself and, if I liked something enough, mention it on my own. I don't want to feel obligated to some brand.
I do think the general idea of the FTC is good, though; we don't want companies lying to and scamming people. There's probably selective enforcement too, reserved for the most outrageous cases.
There's a credit for a product placement researcher and a "thanks to Thermo Fisher for lab equipment" (plus a couple of others that don't stand out as companies you'd recognise), but there's no "Promotional Consideration from ..." section in the credits that I can see (Blu-ray version, 2:04:21 length).
Of course, that requires a political system motivated by doing the right thing for the country's citizens. As opposed to being controlled (sorry, lobbied) by monopolistic corporations and, in particular, where few politicians dare take a stance against FB for fear of the misinformation campaign that would inevitably ensue. On Facebook.
They already have that power, and one of the problems posed by Facebook is that they leverage that power to disseminate false information devised to undermine democracies by unscrupulous totalitarian regimes.
1) Facebook could be broken apart from Instagram and WhatsApp.
2) Mark Zuckerberg is not beholden even to shareholders due to the rights of the shares he owns. Restructure the companies so he still owns the same % of the companies but has the same voting rights as any other shareholder.
Luckily you don't need logic when you can just throw money at lobbying. Sadly, this is incredibly effective in the US today.
Even if that's true, it might not work for Facebook. The politicians that Facebook would need to lobby may feel personally threatened by Facebook's power over the electoral process. Penny-ante campaign contributions and a few face to face meetings probably aren't enough to overcome that.
True, but a promise to use that power to benefit particular politicians probably would.
I would rather risk total garbage in the public forum than establish a ministry of truth.
If fact-checking political advertising gives them too much power, and not doing so is something they themselves decided on, then they already have that power. If no one company should have that amount of power, they should have that power taken away from them.
The bigger next step is that any alternative to breaking them up would be against their freedom of speech. I'm having trouble coming up with an alternative that wouldn't, but there might be one.
The logical conclusion is that the government steps in write laws on what social media should do.
Facebook's PR has been troubled for a very long time. To suggest that they had a stellar image before 2019 is a joke. I can list many other slip-ups where Facebook could have come out and said something (or, even better, done something), and instead, weeks later, they came up with a weak statement. If that's great PR, I'd like to offer my services to anybody who needs it.
While I'd agree that "No one ever broke rank. The messaging was crystal clear.", the message was always an awful one and now they have a relatively negative reputation despite being a remarkable success.
That confidence in the stock is obviously not just from good PR, but I don't see the evidence that the PR folks fucked things up either, given how much potential damage the CA scandal was predicted to cause.
Was Facebook having an internal debate over whether we should all have a free and fair election in the U.S.?
If Bosworth is calling for an "internal debate over whether we should have a free and fair election in the U.S.", it's exactly the same sort of debate that is occurring in newsrooms, radio and television studios all across the country. Every new media offends the old. Newspapers were offended by radio. Radio was offended by television. Now they're all ganging up on Facebook.
Hate speech law in the US is practically nonexistent. Libel is a much more difficult case to argue. Restrictions on political speech are widely held to be unconstitutional.
Compare that with other democracies, such as the UK, Germany or India, and I think you'll find that the US does have significantly more freedom of speech than other democratic nations.
Try, for example, to blog that your company should not trade with Israel whilst tendering for a federal contract.
(Just an example: I don’t have a dog in the actual fight).
i.e., there is free speech for groups as well as individuals
Saying there's a right to free speech for groups unless someone gets paid is a little silly and hard to enforce. Hence our current situation.
From another perspective, there is obviously "freedom of the press" but the Constitution does not say who is allowed to own a press and it doesn't say you have to personally own a press to make use of it. In fact, it would probably be problematic if you weren't allowed to hire a press if you had something you wanted to say.
Additionally, money is not speech. We disallow politicians directly giving people money to vote for them — why would we do that, if money were speech? After all, the politician's trade is in speech; he'd simply be delivering his stump speech in the form of a $100 bill to his potential constituents.
And groups do have voting rights. That's how Congress works. The U.S. is a republic.
Not any arbitrary group gets to directly vote, no. But different groups do have rights: races, religions, genders, ethnicities, and, yes, groups of people with common values and interests.
The Citizens United case, the most recent Supreme Court case striking down restrictions on political speech, was about a nonprofit organization opposed to Hillary Clinton. The government tried to shut down their “documentary” film about Hillary.
Corporate speech is just "speech for groups of people". Those groups may be for-profit corporations, or they may be nonprofit.
> If I want to eat sugar and die an early death that is a valid position.
Sure, if you want to make an informed choice, go for it. Nothing prevents a person from buying and consuming 10lbs of sugar everyday. However, the obvious problem which a lot of people have been focussed on is how do we prevent unhealthy amounts of sugar from being present in all food.
All of this sidestepping from Facebook and its execs reminds me of tactics used by Trump (and Republicans).
I'm going to have to contradict you there. Anyone who was following the 2016 Russian Interference campaign, and related news stories, would know that Facebook used to pay people to moderate news stories, perhaps in a manner that was like a news room... But then they fired those people because they were not promoting conspiracy theory news stories, and so conservatives claimed FB news was "biased".
This essentially created the environment where no one at FB was willing to fact check for fear of losing their jobs. When you hear about FB saying they will not fact check political content, it's specifically because if they did, they'd have no choice but to point out all the lies and inaccuracies in Trump's statements. So instead, they simply refuse to touch any of it, which leaves the door wide open to political actors to spread any disinformation they want.
A few related stories, for the interested:
As the reference says, unless FB has suddenly grown a backbone with regard to standing up for truth over dollars, any new effort will fail the same way it did last time. Facebook has never demonstrably proven it cares about election meddling, and you shouldn't believe the empty words in its press releases.
I find it somewhat sanctimonious for the traditional news media to be calling out Facebook for favoring dollars over truth when they themselves were just as key as Facebook was to normalizing and publicizing Trump. CNN executives made a conscious decision to give Trump lots of coverage, in an effort to compete with Fox News. They too presented Trump's views as being just as legitimate and valid as the views of those opposing him.
>> Was Facebook having an internal debate over whether we should all have a free and fair election in the U.S.?
> I would encourage you to read the memo itself  and find out. It's not that long. In it Bosworth acknowledges that Facebook, as a new medium for propagating information, had a role in shaping the outcome of the 2016 election. He said that it was no longer tenable for Facebook to claim that it had no effect, and that as the 2020 election approaches, Facebook should be conscious of its role and formulate specific policies proactively so that it doesn't find itself in the same position it found itself in 2016, as it reacted to candidates and third parties using the platform in ways that hadn't been anticipated.
> If Bosworth is calling for an "internal debate over whether we should have a free and fair election in the U.S.", it's exactly the same sort of debate that is occurring in newsrooms, radio and television studios all across the country.
This seems a bit illogical to me. Facebook is a platform that enables sophisticated communication for anyone who chooses to use it. I don't see how the opinion that Trump "got elected because he ran the single best digital ad campaign I’ve ever seen from any advertiser" in any way implies that the election wasn't either free or fair.
Let's rewind to 2008:
>>> Like a lot of Web innovators, the Obama campaign did not invent anything completely new. Instead, by bolting together social networking applications under the banner of a movement, they created an unforeseen force to raise money, organize locally, fight smear campaigns and get out the vote that helped them topple the Clinton machine and then John McCain and the Republicans.
>>> As a result, when he arrives at 1600 Pennsylvania, Mr. Obama will have not just a political base, but a database, millions of names of supporters who can be engaged almost instantly. And there’s every reason to believe that he will use the network not just to campaign, but to govern. His e-mail message to supporters on Tuesday night included the line, “We have a lot of work to do to get our country back on track, and I’ll be in touch soon about what comes next.” The incoming administration is already open for business on the Web at Change.gov, a digital gateway for the transition.
>>> The Bush campaign arrived at the White House with a conviction that it would continue a conservative revolution with the help of Karl Rove’s voter lists, phone banks and direct mail. But those tools were crude and expensive compared with what the Obama camp is bringing to the Oval Office.
Obama seems to have had an advantage in 2008 - was that "unfair"?
If the capabilities within the Facebook platform, that are available to everyone, are deemed to be harmful to democracy, then so be it, but claims that it is unfair, or should somehow have no effect, seem way off the mark to me.
1. The US Presidential election is decided by a small percentage of voters who have an open mind and live in the right districts.
2. Facebook’s targeted advertising allows advertisers to efficiently buy the votes of these select voters.
3. Without Facebook, this wasn’t already happening.
Personally, I believe #1 is more or less indisputable, and I’m willing to believe #2. But I’m not so sure about #3.
> Personally, I believe #1 is more or less indisputable, and I’m willing to believe #2. But I’m not so sure about #3.
I think you overstate #3 a little bit. It could have been happening without Facebook, but at a lower scale and not effectively enough to matter.
I think there's also a #4:
4. Facebook ad-targeting allows the influence to be covert, because watchdog group members are probably not part of the targeted demographics.
I think #4 is still valid:
1. Facebook may not be able to correctly identify political ads vs. other ads.
2. Their Ad Library seems to be missing important information.
3. A disclosure like the Ad Library still obscures influence campaigns by greatly reducing watchdogs' ability to passively monitor the political discourse. Instead they have to expend much more manpower to actively search the library with the right terms, and if they fail to do that, they'll miss things.
> Facebook has launched an archive of American political ads, which the company says is an alternative to ProPublica’s tool. However, Facebook’s ad archive is only available in three countries, fails to disclose important targeting data and doesn’t even include all political ads run in the U.S.
> Our tool regularly caught political ads that aren’t reflected in Facebook’s archive. Just this month, we noticed four groups running ads that haven’t been in Facebook’s archive:
It was happening, but it was done by people from political parties visiting those voters in person at their doorsteps (I know of it from documentaries), obviously way more expensive and time consuming than ads, and ineffective when done in a foreign accent or language or without other means of identification/verification, so people outside the US had it a lot harder to influence those voters.
This double negative is a bit confusing. Targeted political messaging absolutely was a thing before Facebook and even the internet. Correlate info on zip code demographics, the readership demographics of different publications and media outlets and one can provide messages that target specific groups.
So it could be argued that they disrupted that black market, except that they're a publicly listed company.
All they need to do is target adverts reminding you to vote to the people who will vote the way they want.
Facebook ran an experiment in 2010 that shows they can definitely increase the chance of individual users voting: https://www.smithsonianmag.com/science-nature/how-a-facebook...
Facebook's ad model is built to reward "virality" — meaning being over the top and salacious gets you lower costs. Even Boz acknowledges that was part of the Trump team's genius: they used the platform, as it was built, perfectly. The Democrats naturally have equal access to the same platform, so I don't think it's as much a left-right thing as a crazy-calm divide.
The other thing is disinformation and identity verification — making sure all advertisers are who they say they are (so you don't get Russians posing as Black Lives Matter, etc.).
> But he maintained that the company should not change its policies on political advertising, saying that doing so in order to avert a victory by Mr. Trump would be a misuse of power, comparing it to a scene from “The Lord of the Rings.”
There's a reason editorial departments are totally separate from ad sales.
FTC disclosure of paid content is extremely important and taken very seriously (at least where I worked and by the FTC).
IMHO Facebook doesn't have any reputation left to tarnish, but Teen Vogue screwed themselves here, badly.
Corporate shouldn't be able to buy anything from media outlets other than Ads that look.like ads and completely visually separate from actual journalistic pieces. Anything else is just unethical.
Facebook's strategy was always to outrun a trail of PR bodies via sheer size, based on "customers" (who should honestly be called products) who did not and do not care. Sure, this PR debacle is no bath in glory by any means. But I can't think of any PR campaign in response to the countless past scandals that made me think "Wow, nice catch". At least in Germany, their response to the accusation of manipulating elections was a billboard campaign advertising that Facebook has a settings page where you can click switches and thus be in full control of your privacy (:D).
Was there ever any effective response to Zuckerberg abusing his company's data to crack journalists' accounts? I can even remember an age-old incident where Facebook made all your posts visible forever on your wall or something, which was simply drowned out by Facebook's ongoing, unchallenged growth after a while...
To clarify, I am glad that media is finally elevating from lizard memes but I reject the notion that this current affair is the first visible crack of rotten foundations one could have observed.
The Social Network was largely a negative portrayal of Zuck and FB.
I remember when FB bought Instagram there was a lot of initial backlash.
Most of the positive stuff around Facebook was mainly due to the huge upward movement in its stock price post-IPO, which solidified Zuckerberg and Sandberg as business geniuses. I'd argue that their individual profiles became more positive as the stock price increased, but I would not really say Facebook the company was ever perceived in a positive light outside the business community.
That's why their PR machine is now switching to these slightly astroturfy campaigns. The next step is an advert along the lines of "They call it pollution. We call it life".
I like that analogy, or narrative; its aesthetic value appeals to me.
Seems more analogous to a tobacco company. Some people think it's bad and they use it anyway.
They have an addictive product in common. But oil seems more apt.
The sale of tobacco is highly regulated. Oil, less so; Facebook, not at all.
Oil became useful in the Industrial Revolution, and enabled downstream innovation. Facebook came about amidst the modern Internet and is likewise a platform. Tobacco is old and the end of the line; the ancillary market is limited.
Finally, both oil and Facebook are tools (for some). Tobacco could be thought of as a tool, but it's more universally hedonistic.
Facebook is not meeting a core need. It's peddling something entirely optional to society. At best it provides entertainment value, while being extremely addictive and dangerous.
Facebook is like crack.
That's interesting to think about, compared to the many companies I've seen maintaining their own URL, even if their website is all HTML tables and CSS rules from 10+ years ago.
That is a midrange restaurant in tech-savvy Singapore, opened a few months ago by people who already had another successful restaurant.
And here is their website: http://belimbingsuperstar.com
As of this writing the website is literally a GoDaddy placeholder page. They use the domain for email only, they didn't even bother to redirect it to Facebook.
I don't like the taste of this, I'm having a hard time figuring out exactly why, but it's something like:
We depend on oil, because we depend on oil. I can say I "need" my phone but I bet I'd be just like anybody else in the 60s if I didn't have my phone. I "need" the BART but only because I live in a city that exists in its current state because of the existence of public transit. Etc. Feel where I'm going on this?
Modern society would absolutely collapse if oil just suddenly vanished. Obviously. But how could that possibly happen? It's impossible. And it's also, I think, absurd to suggest there's no possible way to quickly and intelligently transition off oil dependency, barring the fight against the Oil lobby, which I'll get to.
Also, I postulate modern society could absolutely exist in an almost identical form to how it does today, if it hadn't become addicted to oil from the get go. It's feasible from an economic and engineering standpoint that we could have transitioned our entire electrical grid to nuclear, with some trickles of hydro and wind here and there, in the 1980s. Sure electric cars haven't really been entirely feasible until recently, but electrified public transit absolutely has been, and maybe we would have gotten better electrified vehicles sooner if there wasn't as much oil available.
My basic point is that oil is not "necessary" for modern society, either in its formation or in its current iteration, and that we could absolutely get by without it — EXCEPT that doing so would hurt some shareholders in a very big way, and so they will fight tooth and nail to prevent it. With them out of the way, the obstacles would be far fewer.
To elaborate: my problem with the Facebook <-> Oil Company analogy is the magnitude mismatch in the utility axis. Facebook's operating niche describes very specific solutions for social interaction, in a space saturated with stronger and very enjoyable alternatives, some as old as humanity itself.
I think the scale might work out in the toxicity axis though.
I consider human connection to be person to person, where you can see, talk to, and hear someone, gauge body language and nuance, and adjust to that in real time. Facebook is a substitute for human connection, and a poor one at that. I see a lot in the news about how people feel more depressed and alone than ever, are online more than ever, and check their devices every few minutes — so much so that even when they DO have person-to-person contact, they're still tied to their devices instead of being present in the moment with undivided attention, enjoying the other person's company. I agree that human connection is a core need. I just think we're wrong to treat Facebook as somewhat fulfilling that need. I'd argue it's doing the opposite and destroying human connections in general.
Reading these posts full of hyperbole, one would think human society couldn't exist without Facebook.
Doesn't this phenomenon merely indicate that some market forces believe that the industry Facebook works within has potential for more profits than are currently being extracted?
Back then, facebook had a TV ad series. It was a vague, "what's this ad for?" kind of ad. The gist was "friendship is great", with some sort of facebook = friendship reveal at the end.
The ad was creepy. The suggestion that facebook is providing the core human need of friendship is creepy. Facebook is incidental.
The problem is that there is a huge distance between what Facebook PR says and what Facebook does: the constant apologies for misbehaviors that are never corrected, the apparently deliberate misinterpretation of much of the criticism leveled at Facebook, the continual discovery of new misbehaviors that should have been stopped, and so forth.
That Facebook has now taken an official public stance of being antagonistic, dismissive, and condescending is bad, but it's just bad icing on an already bad cake.
The whole VPN they ran that tracked kids... that one's not even a mystery as far as being a terrible idea.
Then Apple told them to knock it off.
So Facebook renamed it (sloppily too) and put it up on the app store again until they got caught again.
Kids, users, other companies, they don't care.
How can a PR person even craft a response like "hey, we did wrong, but we're sure we won't... well, yeah, we probably will do that again, maybe immediately"?
Enforcement from Apple on FB basically came the day after they were hit with an embarrassing “Hackers can turn on your camera without you knowing it FaceTime” bug. This part is hearsay, but, IIRC, Apple only started enforcing on the dozens of other companies that were doing the same thing once legal action was threatened for uneven enforcement. (Meanwhile, the kid that I chatted with that reported things to Apple in the first place... definitely reported more than just FB.)
IMO, I wouldn't be surprised if it happened this way because Apple had decided to use FB as the fall guy for a PR media blitz.
Being willing (and encouraging) others to listen to other viewpoints should not be considered a PR issue.
The start of the response was perfectly fine and I did not feel the author took offence to it.
I have no idea why people would take offense to 'If you haven't tried it, I suggest you do'. Maybe the same reason they would be offended by talking to a person with different opinions.
On its own it’s an innocent enough statement, and even correct. But in this context, i.e. when people are accusing you of something, and it is part of your response, it comes off as very condescending and smug.
There is a reason PR departments exist: they tend to be hypersensitive to the different ways in which a piece of communication can be perceived, and to dull its sharp edges.
Facebook’s Mark Zuckerberg Is Hiring a Team Worthy of a 2020 Presidential Campaign - Facebook CEO Mark Zuckerberg's latest hire is further fueling speculation that he could be planning a 2020 presidential bid
>Zuckerberg and wife Priscilla Chan have hired Democratic pollster Joel Benenson, a former top adviser and longtime pollster to President Barack Obama and the chief strategist of Hillary Clinton’s 2016 presidential campaign, as a consultant,
>In January, Zuckerberg, 33, and Chan, 32, hired David Plouffe, campaign manager for Obama’s 2008 presidential run, as president of policy and advocacy. They also brought on Amy Dudley, a former communications adviser to Virginia Democratic Sen. Tim Kaine. Ken Mehlman, who ran President George W. Bush’s 2004 reelection campaign, is also on the charity’s board.
>And Zuckerberg’s personal photographer, Charles Ommanney, was the photographer for Bush and Obama’s presidential campaigns, Business Insider reported.
Joel Benenson and David Plouffe are working elsewhere now. Amy Dudley is still the spokesperson for the Zuckerberg initiative.
Zuckerberg has had an incredible PR army working just for him, not for Facebook. I think there has been a shift in his personal ambitions and a change in PR people.
I agree they're doing bad stuff, but I think the article makes a very good point: they used to be good at handling PR for their evil and getting away with it, and now they seem to have lost that touch.
I doubt the level of evil has changed dramatically in the last 18mo but the PR reactions have.
> something noticeably changed. They got combative. They got sloppy.
An interesting take, whatever you think of the underlying merits of the company and the complaints against it.
Oh, he better be careful. That’s pretty close to pushing a right-wing talking point /s
Facebook, etc., is no different in its power to influence elections than traditional newspapers have ever been. The only difference is a matter of scale and homogeneity. Today, instead of thousands of media publications, each pushing their own biases and perspectives (which they always have), you have a very small group of tech giants that get to decide what you see and (even more frighteningly) what you're allowed to say about it. Sure, there are more media outlets today than there have ever been, but a small handful of companies get to control their reach and (again, more frighteningly) their ability to collect revenue.
Then you have the issue that traditional media has always represented a diverse set of opinions and world views. You have conservative and liberal outlets, outlets that promote free-market ideas and those that promote socialist ideas, outlets that promote regulation and those that promote small government. The small number of organisations that control access to ideas and speech today all represent an incredibly homogeneous political world view.
The problem isn’t that those organisations have done a poor job of controlling the flow of information, it’s that they have the ability to do it in the first place. Those companies should not have the ability to act as gatekeepers who determine the credibility of information, or the moral implications of speech. In a free society, that responsibility falls on the shoulders of every individual, and to have an authority doing it on their behalf denies them the opportunity to do it themselves.
This is also a matter of values, not constitutional law (people often like to derail such conversations by claiming regulation would violate the 1st Amendment). The law that empowers these companies to moderate content is the Communications Decency Act, not the 1st Amendment.
> Oh, he better be careful. That’s pretty close to pushing a right-wing talking point /s
> Facebook etc. is no different in its power to influence elections than traditional newspapers have ever been. The only difference is a matter of scale and homogeneity.
That's an empty argument. Everything is about scale. By that reasoning, each individual has the same power as, e.g., Facebook to influence elections, it's just a matter of scale. But scale is the difference between effectively no power and a lot of power.
> Today, instead of thousands of media publications, each pushing their own biases and perspectives (which they always have), you have a very small group of tech giants that get to decide what you see (and even more frighteningly), what you’re allowed to say about it. Sure there’s more media outlets today than there’s ever been, but a small handful of companies get to control their reach, and (again more frighteningly) their ability to collect revenue.
> Then you have the issue that traditional media has always represented a diverse set of opinions and world views. You have conservative and liberal outlets, outlets that promote free-market ideas and those that promote socialist ideas, outlets that promote regulation and those that promote small government. The small number of organisations that control access to ideas and speech today all represent an incredibly homogeneous political world view.
> The problem isn’t that those organisations have done a poor job of controlling the flow of information, it’s that they have the ability to do it in the first place. Those companies should not have the ability to act as gatekeepers who determine the credibility of information, or the moral implications of speech. In a free society, that responsibility falls on the shoulders of every individual, and to have an authority doing it on their behalf denies them the opportunity to do it themselves.
> This is also a matter of values, not constitutional law (people often like to derail such conversations by claiming regulation would violate the 1st Amendment). The law that empowers these companies to moderate content is the Communications Decency Act, not the 1st Amendment.
1. Facebook has NEVER had positive PR. The tech media has always used FB as the Silicon Valley punching bag. Name me one good news article about FB. At some point you're just numb to it all.
2. Facebook has had leakers for years, and especially when Boz makes posts!
To say that Facebook PR is broken implies it wasn't before. I'd argue that FB has long needed a comms team representative of the company it actually is. FB comms feel designed for the old college-only network FB once was. When Google can commit similar misdeeds and get only a tiny slap on the wrist, I do agree it's probably a comms problem in the end.
Also, to completely write off Sandberg because of one corporate misstep reeks of "Cancel Culture" hysteria, but that's just me.
Sometimes things are just bad and it takes time to realize.
Regardless of the packaging.
Here we have a company whose business idea is to lock up as much of the web as possible on their platform. Contacts, message boards, photos, events, calendars, chat, dating, anything goes really. They then earn money from selling data.
Then the Cambridge Analytica scandal breaks, where a company has used data from Facebook unethically. How was this not expected? Would it have been better if this company had used their data a little less unethically? Would it have been better if Facebook sold PR-as-a-service directly instead of enabling an ecosystem of smaller companies offering this?
(Because the latter is what is going to happen over the long run should Facebook continue to be successful. When there is no room to grow anymore the ecosystem is going to be cannibalized.)
This is all inherent to business models dependent on data ownership. Having observed the majority of the population migrating to closed platforms, the people who care about open data must instead frame the discourse in terms of how to shape the conditions of those business models. Codifying data ownership is one way forward, GDPR being the best-known example. It may be blunt, but it at least poses the question of who owns which data.
What groups and activities do people who think this way hold up as positive role models?
They use mostly (self-deprecating) humour, both on billboards and as individual Twitter messages.
This strategy seems somewhat risky. There were some where I thought "funny, but I wouldn't want to be the one posting this".
Hard to describe, because it's mostly a function of each tweet being really good. They must have some larger advertising agency behind this, with tons of talent.
They have seen a spectacular improvement in public approval, and even on such measures as violence against employees, fare-evasion, and vandalism.
Dev blogs for a couple of projects like Github might also count.
I don't understand. How is it "absurdity"? And the linked Gizmodo article doesn't seem to call it absurd either.
Zuckerberg's "dumbfucks" quote is consistent with the content of every public communication and major steering decision FB has made to date. He is truly the embodiment of lawful evil.
No, Facebook isn't powerful enough to sway an election (I get that some people are really angry about Trump... if you aren't thinking rationally, don't have an opinion rather than starting up with conspiracy theories). No, business/life is not like LOTR. There aren't goodies and baddies (ironically, this is part of why Trump was elected).
I get that it's kind of sarcastic, but I don't really think "what a dick". It's not great for someone in his position to be sarcastic like that, but I think more "he must be hounded by people who are ready to burn him at the stake for having lunch with certain people, so..."