As for the rest of it, it's progress. It seems like a lot of good changes, but Facebook execs probably concluded that this is the least they had to do to stave off regulation or antitrust action from Congress. Any less and regulations would still be placed on them, so it's a brilliant strategy to do this and frame it as "Facebook is concerned about all the damage and is voluntarily doing this for you," instead of the truth, which is that they knew about this forever and are only acting because of the threat of regulation.
Overall, an optimal outcome for all parties currently, except for society as a whole down the line
What, exactly, are you claiming about this that was “illegal”? When you sign up for Facebook, you agree that anything you post might be shared with others on the platform.
The Facebook developer platform became so limited in 2014 that most developers (including me) left. There was no point in developing apps for the social graph that had no ability to use the social graph. But even prior to that, the sharing of this information with apps, even those authorized only by a friend, wasn’t “illegal”. You agreed to it when you signed up for Facebook and voluntarily handed them your information. Even the requirement that developers delete the information they had obtained was just a civil agreement between the company and the developers - breaking it wasn't illegal. Facebook can certainly sue them over it, but there are no violations of the law occurring here.
So what about this whole situation is "illegal"?
There was no consent, because tracking was not opt-in but opt-out.
More recently, regarding users' personal details and lack of consent, Facebook's practices were again deemed illegal in Germany: https://amp.theguardian.com/technology/2018/feb/12/facebook-...
The pattern I see is that Europe considers Facebook's methods of commoditizing users' data unreasonable and will regulate them.
The two links suggest awareness by Facebook of the illegality.
So how, exactly, are the two links irrelevant to the privacy concerns highlighted by this whole story?
Voluntarily putting your info on a free site is the consent.
Would you also cry "illegal" if HN sold your post history, which they have tied to your IP address, which they can easily link to who you are?
Now, the majority of Facebook data is visible only to a select group of individuals, and handling it cannot contradict a country's privacy laws, whatever agreement you signed. Law trumps user agreements.
As an extreme example, I can sign a document giving you the right to kill me, but if you do it you have still committed premeditated homicide. So it does not matter what Facebook made you sign; it is likely they committed (and still commit) a crime in most European countries. For the U.S. the waters appear extremely murky, from what I understand, so I cannot offer an opinion. If there is a lawyer or privacy expert who could pitch in on this discussion, that would be nice.
This sounds as if nobody can agree to voluntarily take part in a potentially fatal experiment, like testing a potent medicine or being a test pilot. I believe legal provisions exist here.
Even the opposite can be true: life-saving medication may be illegal in one country while another provides it without restriction.
Anyway, my point here is that you cannot override law (or you shouldn't be able to).
Having said this, there are some job roles that are excluded from specific parts of the legislation due to their unique nature (e.g. the Army).
Each country has its own laws covering this situation.
Giving someone personal information still restricts them legally, irrespective of what they think I've consented to or their own definition of consent. The legal system has its own opinion.
If there are companies in the UK, or people working in the UK, the sharing or retention of data may have been illegal under British law.
Almost no one reads them, so they should not be enforceable.
I mean, as developers we know when a session is established and when pages are visited, and we can easily see how long someone has been on a page.
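To make that concrete, here's a minimal sketch (every name in it is invented, not any real site's code) of how a server that does nothing but append to a log can reconstruct sessions and dwell time:

    # Hypothetical sketch (all names invented): a bare page-view log is
    # enough to derive sessions and time-on-page, no consent dialog involved.
    from datetime import datetime, timezone

    page_views = []  # in practice: a database table keyed by a session cookie

    def log_view(session_id, page):
        page_views.append((session_id, page, datetime.now(timezone.utc)))

    def dwell_times(session_id):
        # Approximate time-on-page as the gap between consecutive views.
        views = [v for v in page_views if v[0] == session_id]
        for (_, page, t0), (_, _, t1) in zip(views, views[1:]):
            yield page, (t1 - t0).total_seconds()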
No one can read the typical T&C in 10 seconds .. let alone 1 minute .. especially without even opening the page! So the options should be something like:
[x] I don't care, just whatever dude.
[ ] No. Get me out of here. Because I don't know how to close the browser window myself.
EDIT: Found this by way of proof:
It varies depending on where in the world you are, but it's actually pretty unclear whether they are enforceable. Or at least, a specific set of T&Cs with a specific user/customer may be found unenforceable for a wide range of reasons.
Of course, the company that puts the T&Cs in front of you isn't going to tell you that.
Perhaps not in the US, but I'd like to point out that it's not the same everywhere: I think the EU is moving in a different direction. There's a whole debate around whether a company has a responsibility to do due diligence around safeguarding the personal information of its users. Just because you gave them your data doesn't always mean they can now do whatever they want with it (e.g. give third parties access to detailed information in large amounts). Even if you sign an agreement, in many jurisdictions there are certain rights a company can't just make you sign away.
There's no consent on behalf of the friends when one shares information about one's friends with an app. Consent is given directly.
What matters is how criminal law views this particular data collection in the context of Facebook's working relationship with this particular client, within all of the various jurisdictions that Facebook operates.
The article "What Colour Are Your Bits", is a pretty good look at this - http://ansuz.sooke.bc.ca/entry/23
Except you CAN'T sign away your rights in many jurisdictions; there's this thing called HIPAA, for instance. So if Facebook sold people's mentions of health problems to third parties... is that a violation?
I mean, this is the exact scenario people grilled Windows 10 over as spyware. But somehow, when Facebook does it, it isn't an issue? What's the difference? I'm honestly asking.
Almost EVERY free site is trying its damnedest to collect as much info as possible and link everything together to sell it so they can, you know, make money.
I guess those people are instead willing to pay for virtually every site they use on the internet, right? Right? *crickets*
> We'll require developers to not only get approval but also sign a contract in order to ask anyone for access to their posts or other private data.
Sounds like they'll make the developer agreement legally binding so they can take legal action for violating the ToS.
Did Obama's campaign say they were scraping for academic purposes, download a bunch of data, and then lie about deleting it? There are multiple levels of B.S. when it comes to Cambridge Analytica and Facebook. Only one of them finds a comparison in the 2008 or 2012 races (on either side).
What Facebook did with the Obama campaign was far worse. No authorization was required - they gave them free, full access to the entire social graph. Didn't like Obama and didn't authorize an app to access your and your friends' info? Too bad, your information was still given to them and used to help get him elected. That wasn't a problem for anyone in the press though, because of course, Obama was a Democrat.
edit: I misspoke. There was an app required, but the Obama campaign accessed 190 MILLION profiles, versus the 50 million involved with CA. According to one account:
"The campaign boasted that more than a million people downloaded the app, which, given an average friend-list size of 190, means that as many as 190 million had at least some of their Facebook data vacuumed up by the Obama campaign — without their knowledge or consent...A former [Obama] campaign director, Carol Davidsen, tweeted that Facebook "was surprised we were able to suck out the whole social graph, but they didn't stop us once they realized that was what we were doing.""
You are doing the classic false-equivalence thing... The deliberate use of an Obama-sanctioned Facebook app that asks you to press a button to share get-out-the-vote material (probably pro-Obama) with your friends is 100% different from a Cambridge University researcher creating psych profiles of people through what was ostensibly a fun way to learn your personality profile, then selling that harvested data to a campaigning firm, Cambridge Analytica, to micro-target political ads at them without their knowledge. It's about expectations: one group expects the data to be used politically because they opted into that, and the other is not expecting it to be used to target themselves and their friends later.
So how is that a false equivalence?
Permission, for one.
> The campaign called this effort targeted sharing. And in those final weeks of the campaign, the team blitzed the supporters who had signed up for the app with requests to share specific online content with specific friends simply by clicking a button. More than 600,000 supporters followed through with more than 5 million contacts, asking their friends to register to vote, give money, vote or look at a video designed to change their mind.
Intent and deliberate action up front by the user of the Facebook app, for two. Not some shady psych profile that gets sold later and used behind the scenes to target them with no visible link to the campaign. That's absolutely a false equivalence.
Also, that 189 million number is very likely way too high, likely bullshit. Say I have 190 friends with significant overlap: say 50 of them all know each other and are likely also friends with one another. It simply doesn't follow that every single friend has 190 unique friends, so you don't get a graph of that size even though the average across all users might be 190 friends. Add a few more such clusters for different groups (business, whatever) and you get a much lower number. Couple that with the fact that you have no idea what the average friend count is among the people who actually opted into the app; if you're just using the overall average across all Facebook users, you possibly have a lower number still.
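A toy simulation makes the overlap effect concrete (all numbers invented and scaled down ~100x so it runs fast):

    import random

    random.seed(0)
    INSTALLERS  = 10_000      # scaled-down stand-in for ~1M app users
    POPULATION  = 2_000_000   # scaled-down friend pool
    COMMUNITY   = 2_000       # people per social cluster
    AVG_FRIENDS = 190

    reached = set()
    for _ in range(INSTALLERS):
        # Draw each installer's friends from one cluster, not uniformly
        # from the whole population, so friend lists overlap heavily.
        base = random.randrange(POPULATION // COMMUNITY) * COMMUNITY
        reached.update(base + random.randrange(COMMUNITY)
                       for _ in range(AVG_FRIENDS))

    print(f"naive estimate : {INSTALLERS * AVG_FRIENDS:,}")  # 1,900,000
    print(f"unique reached : {len(reached):,}")  # substantially fewer

In this toy setup the unique count lands well below the naive multiplication, and real friend graphs are far more clustered than this.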
Which were the same permissions that the users of the Kogan app gave. Remember that the issue here revolves around the fact that friends of the users of either app never gave permission for their data to be used. The only difference is that Obama did this to about 4X as many people.
> Also, that number 189 million is absolutely likely to be way too high and is likely bullshit
According to their own campaign manager, they obtained the entire US social graph, and Facebook knew about it and allowed it to happen. The 190 million number may actually be low.
You're still wrong, but at this point, with all the evidence showing just how different the situations are, I suspect you want to be, for whatever reason.
For hiring this isn't as touchy a subject, but surely there's a qualitative difference between these two interactions. One is based on an above-board interaction, the other on subterfuge and hidden motives.
I think you are the one missing the point.
> The OFA app determined who should share what with who by running all the extracted profile data (including that taken from opted-in users' oblivious, perhaps anti-Obama friends) through their psychographic models.
Yes, this was a political app, downloaded by people motivated to get Obama elected president. They shared that info willingly with the Obama campaign. The Obama campaign used that information to suggest, "Hey man, we need help in Texas and you know someone that might be able to help us. Can you do us a solid and send them some of this info?" Nothing I have seen suggests that they did anything untoward with information voluntarily and directly shared with the campaign. They used it to suggest people to share the message with, and every action was taken with the direct permission of the people using the political app.
> It's identical, except, for "the good guys".
Not even a little bit. Kogan used the data he harvested from those psych profiles, and also got friends' information that the people taking those "tests" surely didn't want him to have. If I take a silly quiz that's a Facebook app, I would not expect it to data-mine my friends and later sell that information to a political campaigning firm for a campaign they may not have supported. If Trump had used an app that did the same as Obama's, I would have no issue with it. He didn't, and I do.
Those people who were opted-in by proxy, i.e. the friends you sold out, may not have wanted the Obama campaign or Cambridge Analytica to get that info, but they never had a choice!
Both apps (well, OFA for sure, CA allegedly) took data legitimately provided to them, used it to feed predictive models, and then actioned marketing around exploiting those learnings.
>We released this tool for the Obama Campaign in August 2012. Over the next 3 months, we had over a million supporters authorize our app, giving us potential access to over 200 million unique people in their social network. For each campaign activity (persuasion, volunteering, voter registration, get-out-the-vote), we ran predictive models on this friend network to determine who to target. We then constructed an “influence graph” that matched existing supporters with potential targets. For each supporter, we provided a customized list of key friends with whom to share different types of content. 
Literally the ONLY difference, other than the political leanings, is that Kogan's app data was then "acquired" by CA, in breach of Facebook's TOS. Which, again, is something that probably happened ALL THE TIME in the pre-2014 wild west of Facebook app mining.
Bullshit, plain and simple. You are falsely making those two equivalent.
Do I have a right to be upset about this or not?
"I ran the Obama 2008 data-driven microtargeting team. How dare you! We didn’t steal private Facebook profile data from voters under false pretenses. OFA voluntarily solicited opinions of hundreds of thousands of voters. We didn’t commit theft to do our groundbreaking work."
We can all argue about whether facebook should even allow apps the kind of data they do, but the crux of what CA did was to re-sell data they collected from Facebook, right? This is where the two aren't the same, as far as I'm aware, but like I said, I'm not super in the know.
No. An independent developer had an app many years ago, recorded data from that app, and then sold that data to CA years after the fact. That developer did in fact violate Facebook's developer platform policies by selling the data, but CA had nothing to do with the app or what was represented to the people using it when they installed it.
Why didn’t Hillary win? I’m sure she had access to similar tools?
I think someone needs to take a look at the Hillary campaign and contrast it to identify exactly how effective Facebook data is
How is that not exactly the same thing, just done on a much larger scale by Obama? I get it, this is mostly a lefty crowd here on HN. But I just can't stand hypocrisy on either side. The fact that hundreds of commenters in here want to defend one guy and castigate another for doing the exact same thing based solely on the political affiliation of each is disgusting to me.
Source? All I can find is a scheme where they prompted people to sign in to a campaign site and grabbed the friend data that way. Icky as hell, but it's not obvious why that's "even worse" than CA.
What distinguishes CA and the Trump campaign is how well the data was used. CA staff apparently just did a better job of using the data to manipulate people. Their canvassing app. Using bots on Facebook and Twitter. Plus entrepreneurs from Eastern Europe or whatever. And maybe they went further, discouraging potential Clinton and Sanders supporters from voting.
But anyway, the key point isn't whether CA or Obama/Clinton got more data from Facebook, or whether Facebook willingly helped Obama/Clinton. The key point is how well voters can now be manipulated. It's another level of the problem that money buys political power. And arguably AI could do the same.
I've seen no hard data showing that any of this did Trump any good. Apparently not a single study has been done as to whether people either failed to vote or changed their vote based upon fake news or the use of this data.
I do follow fringes of online anarchism and anomie, and I was surprised to see so much support for Trump, when the polls were showing him so far down. And much of that support was so odd that I thought it ironic. But whatever.
I'd like to see such a study, for sure. But I'm not optimistic. Arguably, many who voted for Trump would be no more forthcoming to researchers than they were to those doing the election polling. There's just too much polarization, I suspect.
By the way, this is something that would have been stopped on any other app long before they ever accessed 190 million profiles. So Facebook essentially gave them the entire US social graph, that they wouldn't have allowed any other app with only 1 million users to have.
Fair point. But I wonder: did anyone ask them to delete it, and then go about verifying it later? I am guessing nobody bothered too hard, because, as the person in charge of the strategy said, Facebook "were on our side":
> They [Facebook] were very candid that they allowed us to do things they wouldn’t have allowed someone else to do because they were on our side.
It's not like academics are any morally superior to anyone else anyway so I don't think Facebook should be handing that stuff out to anyone.
"Facebook was surprised we were able to suck out the whole social graph, but they didn't stop us once they realized that was what we were doing." - Carol Davidsen, Former Obama Campaign Director
The extraction of Facebook's social graph is the element I agree "finds comparison in the 2008 or 2012 races". But Obama's campaign didn't misrepresent its identity or intentions to Facebook. Kogan and Cambridge Analytica did. Obama's campaign didn't lie about deleting its data when asked to (which, to my understanding, it wasn't). Kogan and Cambridge Analytica did. Moreover, Obama's campaign reported its backers to the FEC; Kogan and Cambridge Analytica did not.
As for Obama's campaign not being asked to delete data, all apps were required to delete data that they came into possession of under the pre-2014 policy. Those were the terms they agreed to when they deployed the app - especially one that was allowed special access to the entire US social graph (where no other app would have been). So if Kogan violated the policy, so has the Obama campaign.
Obama: "Give us a list of your Facebook friends, and we'll help you contact them about voting for Obama." While the Obama campaign did collect data about friends, this data was voluntarily given to them by users. If the Obama campaign had your Facebook data, it was because one of your friends knowingly and voluntarily gave it to the Obama campaign.
Kogan/CA: "Take a free personality quiz!" The people taking the quiz likely had no idea that they were supporting a political campaign or helping Trump win. You could have your data given to CA even if nobody in your friend graph supported it.
Also, that 190 million figure is probably inaccurate, because there is probably significant overlap in people's friends lists.
It was given to them by the 1 million users that authorized the app. Not the other 189 million (or more) users.
According to their own campaign manager, they "sucked out the whole of the social graph" with Facebook's blessing. So they may have actually had more than 190 million, since there are more US users than that on Facebook.
Citation needed. Everything I've seen indicates Obama's campaign didn't violate the rules: https://twitter.com/mbsimon/status/975231597183229953
Sure... coming right up! How about a statement from Obama's former campaign director saying that they "sucked out the whole of the social graph" with Facebook's blessing?
That doesn't prove what you are saying. "sucked out the whole of the social graph" is a non-technical statement that doesn't have an explicit meaning, nor is it specified what user data they were scraping.
You might think “well that’s egregious, but the rules at the time allowed it”. Unfortunately, even that isn’t true. Obama’s app did not play by the rules. The Facebook developer platform rules at the time stated, essentially, that you could not use the data you gained access to through your apps for any purpose outside of the operation of your apps. In other words, you weren’t allowed to create an app that claims it just sends a message to your friends about how wonderful Obama is, and then take the friend data you gain access to through that app and use it for the targeting of political advertising and/or campaign strategy, which is by all accounts precisely what they did. So they broke the same rules that Kogan did, just on a much larger scale, and with Facebook’s tacit approval.
Gleaning insights from that data and then using it for targeting on FB is/was the point.
Creating an app that collected the data under false pretenses, transferring that data to another 3rd party and then claiming you never had that data makes it a different situation.
How do I know this? I developed apps on the platform from 2007-2014 and was always reading the rules and any changes they issued because I didn’t want to violate them and have my app banned. They were exceedingly clear on this issue. Sadly, Obama’s app was allowed to violate these rules and received no ban.
Unless I'm reading this interface totally wrong
But regardless, it was still a developer TOS violation to export user IDs of people that hadn’t personally authorized your app and use those IDs in any custom audience even back when Obama did it. In other words, you weren’t supposed to grab user IDs of friends of your users and use those in custom audiences. In fact at one point, in an attempt to enforce this policy, Facebook stopped returning friend user IDs, and instead gave proxy user IDs that were meaningful only within the API, but couldn’t be used for custom audience targeting. Then they got rid of the target by ID option altogether.
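For illustration, here's roughly how per-app proxy IDs can work (my own sketch of the general idea, not Facebook's actual implementation): a keyed hash of (app ID, real user ID) is stable within one app but useless for joining across apps or for ad targeting.

    import hashlib
    import hmac

    PLATFORM_SECRET = b"held server-side, never shared with apps"  # invented

    def app_scoped_id(app_id, real_user_id):
        # Without the platform's secret key, two apps cannot correlate
        # their opaque IDs back to the same real user.
        msg = f"{app_id}:{real_user_id}".encode()
        return hmac.new(PLATFORM_SECRET, msg, hashlib.sha256).hexdigest()[:16]

    # Same user, two different apps -> unrelated opaque IDs:
    print(app_scoped_id("quiz_app", "user42"))
    print(app_scoped_id("game_app", "user42"))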
I know a lot of folks have been desperate to make a false equivalence with the Obama campaign's social media use, but it doesn't pass the sniff test:
They used the exact same technique to access data from 190 million profiles - and about 189 million of those were without explicit authorization. Exactly the same technique, just with 4X the reach that CA had.
If you know of any evidence that Obama's campaign broke either the law or FB's ToS, please let me know.
That is simply not true. Facebook rate-limited apps for exactly this reason. Additionally, although it was technically possible, the TOS did not allow mass-scraping of friend data for any purpose, much less political purposes. Facebook monitored and routinely banned apps long before they ever accessed the data of even a few million friend profiles, let alone 190 million of them. That's why, according to Obama's own campaign manager, Facebook was "surprised" but then decided not to ban them - which they would have done to any other app.
the data was used for political microtargeting (in violation of FB's TOS)
Which is precisely what Obama did.
Was it bad management?
Was it not using Facebook data effectively?
Was CA that pivotal? Was it fake news or better microtargetting?
If both sides used it why did one side lose?
At the very start of the Facebook platform, the API would anonymize the user's email address: the app would get an app-specific hashed address, e.g. email@example.com. It was a working email address, with Facebook handling the forwarding.
This proved futile as the very first thing that apps then did was to ask users for their real email address.
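The scheme looked roughly like this (a sketch; the domain and details are placeholders, not the real implementation):

    import hashlib

    forwarding = {}  # proxied address -> real address, platform-side only

    def proxy_address(app_id, real_email):
        alias = hashlib.sha256(f"{app_id}:{real_email}".encode()).hexdigest()[:12]
        proxied = f"{alias}@example.com"  # placeholder forwarding domain
        forwarding[proxied] = real_email
        return proxied  # this is all the app ever sees

    def deliver(proxied, message):
        # The platform forwards mail without revealing the real address.
        print(f"forwarding to {forwarding[proxied]}: {message}")

Of course, none of this protects anything once the app simply asks the user to type in their real address.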
How about the ones they agreed to with the FTC in 2011?
Basically, FB should introduce an App Engine-like platform where the backend of any third-party application that uses FB data has to run on FB-owned servers. Developers of these applications would then ship their code to FB (similar to Heroku) and run it in a sandboxed environment from which they are not allowed to take data out at all.
That way FB can audit how the data is being used at anytime and kick out people who are out of compliance with their terms. If a user deletes their FB account, now all their data could be deleted from any 3rd party applications automatically. This is basically similar to the way the government handles classified data.
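A rough sketch of the idea (the API here is entirely hypothetical): third-party code runs behind an egress filter that only permits calls back into the platform, and every blocked attempt is logged for the audit trail.

    ALLOWED_HOSTS = {"graph.internal.example"}  # platform services only

    class EgressBlocked(Exception):
        pass

    def sandboxed_fetch(url):
        host = url.split("/")[2]  # "https://host/path" -> "host"
        if host not in ALLOWED_HOSTS:
            print(f"AUDIT: blocked egress to {host}")  # compliance trail
            raise EgressBlocked(host)
        return b"...response from a platform-internal service..."

    sandboxed_fetch("https://graph.internal.example/me/friends")  # allowed
    try:
        sandboxed_fetch("https://evil.example/exfiltrate")  # blocked + logged
    except EgressBlocked:
        pass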
Or perhaps there could be a limited capability for developers to log in to their app as another user of it, but those accesses would be logged and periodically audited (just like the as-another-user logins of regular FB engineers are).
    +--------------+        +----------+        +--------------+
    |              |        |          |        |              |
    | other server |        |  Server  |        | other server |
    |              |        |          |        |              |
    +-------+------+        +----+-----+        +-------+------+
            ^                    |                      ^
            |                    |                      |
            |              http request                 |
            |            containing user                |
            |              data for UI                  |
    http request made            |              rogue http request
    with a legit purpose         |              sending the user data
    say fetching assets          v              to some other server
            |            +-------+-------+              |
            |            |               |              |
            +------------+  mobile app   +--------------+
1. Can Facebook do this? Apple could: they approve apps and get the binary/manifest before the app is released. But how could Facebook enforce this?
2. Would developers be OK with this? Relying 100% on Facebook?
It is an option, I'm not saying it's infeasible. An interesting idea for sure, thanks for sharing!
All of this tends to trade one problem for another, though: the user no longer has to worry that the third-party app has access to their Facebook data. But now the user has to worry that Facebook has access to their third-party app data.
2) Developers simply wouldn't have a choice in the matter - FB is so big that they can force the market in a certain direction if they want to.
This is the status quo as of the 2014 platform policy changes.
The entire debacle is around the data retrieved (and saved) prior to that, which is what Cambridge Analytica did.
> Or perhaps there could be a limited capability for developers to log in to their app as another user of it
FB Developer app has support for test users, who are marked as such.
Would that work for mobile apps?
This is even before we get into trust issues most developers would have with Facebook having access to their user data.
Most would be correct - some flows like inviting friends to an app, or showing friends who also happen to use this app (for games, etc.) are so common that they're integrated into the Facebook SDK.
For performance reasons (both on Facebook's end and for the sake of the user's experience, in case their phone moved into a zone with inferior coverage), one fat network request was preferred to a multitude of thin ones - you couldn't really predict when the user would fancy inviting their friends, saving their score, looking at their achievements, or checking out friends in the same town - so a 24-hour cache policy was instituted.
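The pattern, paraphrased (names invented, not the actual SDK internals): one fat fetch up front, then a 24-hour TTL cache serving everything locally.

    import time

    CACHE_TTL = 24 * 60 * 60   # seconds
    _cache = {}                # user_id -> (fetched_at, payload)

    def fetch_everything(user_id):
        # Stand-in for the real, single fat API request: friends,
        # scores, achievements, friends-in-same-town, and so on.
        return {"friends": [], "achievements": [], "same_town": []}

    def friends_payload(user_id):
        hit = _cache.get(user_id)
        if hit and time.time() - hit[0] < CACHE_TTL:
            return hit[1]                      # served locally, no network
        payload = fetch_everything(user_id)
        _cache[user_id] = (time.time(), payload)
        return payload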
> we immediately banned Kogan's app from our platform, and demanded that Kogan and Cambridge Analytica formally certify that they had deleted all improperly acquired data. They provided these certifications.
To generalize the issue, if you were in charge of APIs at some company A and some company B (not necessarily located in your jurisdiction and not necessarily subject to the same legislative framework as you) told you they used to have the data, but deleted it since then, what additional measures would you recommend company A pursue?
Zuck's announcement seems to have a decent outline: start with investigating apps that had access to large amounts of data, and audit apps with strange behavior. But after a year, after the CA (and other) controversies are forgotten, what I was thinking is that FB should be doing regular, random audits/investigations, and publicizing the punishments.
I don't mean identifiable shaming, e.g. "Last week, we banned Jane Smith and her Flappy Farm app for misusing the data of 3,000+ users". But maybe weekly/monthly tallies of apps that were shut down or sanctioned, with a breakdown of the reasons why, users affected, etc. Every once in a while, an app maker might post a "We Fucked Up" article on HN, which helps even more in reminding people of the TOS.
There's really no way to check if someone has made a copy of data if you've given them that data.
It's not that hard to check if an application prompts a user for an email address.
* someone uses Facebook Connect to log in to the NYTimes Web site, is curious about their Food & Recipes newsletter, and wants to sign up
* someone logs into an e-commerce shop through Facebook Connect, makes a purchase, and then decides that yes, they would like to track and manage their order on the retailer's Web site, and they will even sign up for an account with the retailer to do that
I think the only sane response would be to shut down the developer program. I doubt it contributes much to the FB bottom line, and it's clearly something FB doesn't care much about, given the breadth of this scandal.
What about the punishment of Christopher Wylie, by closing/suspending his accounts on Facebook, WhatsApp, etc.?
He was part of Cambridge Analytica at the time. So they suspended his account along with the rest of them I suppose.
The Christopher Wylie that, by his own account, was a knowing, active, and key participant in the things they punished CA for, and in fact claims to be the one who came up with the concept for it?
What about it?
It's even mentioned in that article how to get around the fix they put in so you couldn't target a group of fewer than 20 people.
I'm not sure if this particular method still works, but let's take a step back and think before we make claims about what is and is not possible in a complex system with lots of features and knobs to twiddle. Whether this was an emergent property of other features or a specifically designed behavior, at least at one point Facebook allowed very fine-grained targeting.
I assume their scraper was more in-depth. If you had friend-type permissions back then, you could perhaps see friends' posts and shares. You'd need post and share content to run text analysis, figure out their hot buttons based on language and frequency, sort them into groups, and then target them.
Affiliate marketers could also upload lists of unique user IDs (like the FB group members scraped above) for specific ad campaigns, y'know, disguised hookup sites, skin-cream credit-card rebill offers, etc.
Back then, many of the privacy options were deeply hidden, obscure, changed names a couple times a year or were unavailable.
That said, since you can target specifically, if the system did allow some way to exfiltrate user data, it would make financial sense for larger analytics companies to do so to provide value-added services where they could target much more specifically than Facebook intended (or at least intended to make obvious?).
Off-topic, but I can't help it. You get what you measure, which is why economies that only measure profit optimize for nothing but profit. When a nation-state says "it's illegal to do X" but has mandatory accounting practices that measure only profit, never X, we should not be surprised that companies like Facebook do awful things. GAAP has all but ensured that this happens. You want companies to have values? Then measure values! (Alongside profit, not in spite of it.) https://en.wikipedia.org/wiki/Generally_Accepted_Accounting_...
We don't need accountants to measure legality. That's what we have law enforcement and courts for. Investors care about profits; behaving illegally should hurt profits. Deputising a multi-billion dollar company's thousands of shareholders as its moral police is an absurd proposal.
In the 1930s, Congress realized that financial crimes were (a) prevalent, (b) serious, and (c) difficult to investigate and prosecute. So it created the SEC. Its specialists, with the budget, focus, and mandate to pursue securities-related violations, have been effective (relative to pre-1930s finance).
Regulators make rules. They also enforce them. We have no top cop for technology. The costs of that gap are becoming apparent.
How do you record company/moral values in Books of Accounts? How does one audit those morals? What category do you assign them: Asset, Liability, or Owners' Equity? Moreover, what would happen if morals change and some are deemed obsolete? Also, if you start to measure company values, then it becomes obvious that other things kept out of the purview of the Books would want equal footing too, like legal contracts with clients, employee agreements, court cases, and so on and so forth.
There is a very good reason why accounting standards (like GAAP or IFRS) decided to record only transactions in the Books and not be concerned with legalities or moralities. It's impossible to assign an amount to a company/moral value, as what is valuable to me as a shareholder may not be considered valuable to, say, the local tax authority or a would-be investor, and vice versa.
Please note that I am only replying in the context of GAAP (which you mentioned). Otherwise, I agree with all the ideas you presented and disagree only on the one aspect of adding this to the GAAP/IFRS standards, since the purpose of their creation was precisely to steer away from unknowns and record only the knowns.
Mashing value metrics in to accounting practices seems problematic at best.
On the whole I think it's a tricky subject because we want to be careful not to stifle innovation, and sometimes problems only become evident after quite a few pavers have been laid on the road of good intentions.
Goodwill, Brand (which contains value statements), IP, etc, they're given financial value. Perhaps these don't influence shareholder/public actions as much as they could, but the measures are there at least.
Intangibles can be bought and sold as they aren't attached to anything emotional. Moral behaviours/values, if embedded into the books, have to be through a transaction. How do you transact moral behaviour/values? That is the question I have.
> Goodwill, Brand (which contains value statements), IP, etc, they're given financial value. Perhaps these don't influence shareholder/public actions as much as they could, but the measures are there at least.
Accounting principles state that for anything to be recorded in the Books, a prior transaction must exist. Brand and Goodwill, by accounting principles, are only recorded after they are transacted for the first time. What that means is: say you start an enterprise. The enterprise over its lifetime acquires a Brand value. However, you cannot record that Brand value until the enterprise is sold to another entity. Only in that scenario can the buying entity record it in its Books as an Asset.
EDIT: IP, copyrights, or patents, on the other hand, can be recorded as Intangibles because you "bought" them from an issuing entity (the Government or any other body issuing you the certifications in exchange for a monetary value). Hence a transaction valuing the asset exists prior to the recordation in the Books.
EDIT: To explain better: the reason you cannot record a Brand value in the Books until it's sold/acquired is primarily that there is no way to gauge the value of Brand/Goodwill. I may consider my enterprise's Brand value to be a million dollars, but you might consider it to have no value. Unless a transaction occurs, a value cannot be arrived at, as it has no inherent fixed value. Hence the recordation in the Books happens only after a transaction takes place.
(I see now our replies crossed so will leave this and stop posting :)
We will make X and Y the value statements of our company/brand. If our actions reduce the value of statements X and Y, what will that cost us? This would at least provide a rough starting value, to be reviewed/measured against market response. Do companies already do this? I'd think it important for service/advertising-based companies (but I am guessing; I have only couch-potato knowledge).
Recommended watching the following TED video to introduce the topic:
Also check out the MSCI links in my other comments to this thread if you have a chance.
This appears to solve the issue of overly wide permissions, but it does not. In reality, it is an attempt to transfer Facebook's risk to shady app developers, while the overall lifecycle of such apps won't change.
In essence, this is a do-nothing from the standpoint of app developers who have requested additional permissions. Any developer who is told they must undergo an audit for having copied out the entire social network can simply say no and accept the account ban. It will likely have no effect, as the account will almost certainly have been suspended already at that point.
They intended for cool apps to go viral across their social graph so Facebook could be a “social utility” and the operating system of human relationships and other airy fantasies they spouted in 2012, 2013, 2014 when they built the app platform.
Their hopes of a beautiful future of joy and freedom were dashed when they discovered humans are capable of garbage behaviour.
Ironically they believed the walled garden of Facebook would clean up the cesspool of blog comments. Oops.
Why would they intend for it to happen? They didn't make any money off of this exfiltration, CA paid people via Mechanical Turk to install the application so that they could mine their data. Facebook didn't get a dime. In fact, they have a monetary interest in preventing this, because their data is worth something and these guys just got it from free usage of their API. So the insinuation that Facebook wanted this to happen, or looked away because it benefited them, makes zero sense.
What kind of safeguards are you imagining? How do you have third parties interface with Facebook without letting those applications reason about the information within a Facebook account? Tinder is valued at over a billion dollars and it's not possible to use it without a Facebook account. Should Facebook shut that down and ban the entire concept of third-party Facebook interaction?
I do not understand the anger. They did nothing wrong. I can't believe that people are legitimately arguing that users shouldn't have a right to expose their information to apps.
Are you honestly suggesting that CA wrote Facebook a check?
“First, we will investigate all apps that had access to large amounts of information before we changed our platform to dramatically reduce data access in 2014, and we will conduct a full audit of any app with suspicious activity. We will ban any developer from our platform that does not agree to a thorough audit. And if we find developers that misused personally identifiable information, we will ban them and tell everyone affected by those apps.“
Might be a too-little-too-late attempt at self-regulation, since they've apparently known about this Cambridge Analytica situation since before the election, and since then there have been Facebook employees embedded with CA to help them target ads.
MSNBC is already playing this as: 1) starts with a denial 2) admits wrongdoing 3) claims behavior will change 4) changes are not carried out and we're in the same place a year down the road.
They’re also pointing out that Zuck is fine meeting with Xi Jinping, meeting with Medvedev but refuses to appear before the US Congress and sends the company counsel instead.
Worth noting that Zuck wants something from Xi and Medvedev, has everything he needs in terms of support from the US Gov.
Facebook has almost ensured that it becomes the whipping post of the FAANG companies as the government looks to get tough on tech. I'm genuinely not sure what Zuck can do, as long as Apple, Google, and Amazon don't make mistakes in the same way.
The next 10 years are going to be a lot like the old adage about being chased out of a campsite by a bear: it's unimportant to be the fastest (best) of the group, you just can't be the slowest.
It seems that it worked out pretty well for Bill Gates.
I've said variations on this before: https://news.ycombinator.com/item?id=16438362, but FB will get serious about it when users stop using it.
Until that time, everyone can complain, but the concept of "revealed preferences" is relevant. Do people actually care? If so, they'll change their behavior, FB will likely notice, and changes will happen.
People have been complaining about FB and privacy since practically day 0. Throughout that entire period, FB has only become more popular.
I get the feeling, but you'll agree that a legally binding procedure posing an actual existential threat to your hundred-billion-dollar company that employs 25k people requires a different approach than a seduction meeting with a despot to try to loosen regulations.
This strikes me as especially interesting. I mean, I'd personally have my theoretical reservations about this Congress over and above the average Congress, but FaceMark refusing strikes me as a deeply telling datum about how he's thinking about this.
This is not a deeply telling datum. It is a boring and standard one.
More accurately, there is risk in talking to the US Congress when the topic sounds more like an inquisition than a request for opinions.
The question, then, is why it is a risk for Facebook to discuss the CA issue. Are they worried about a witch hunt or a public ethics execution?
> ...my reservations about this congress over and above the average congress...
Ugh, come on. It's not like the whole Congress votes on how you're to be treated and what questions you'll be asked. The biggest heels on both sides of the aisle are free to harangue you all they want.
Not the whole Congress, but how exactly do you think procedure and parameters for hearings are set and objections during hearings are resolved? Both the specific personalities in leadership positions and the attitudes of the majority matter a lot.
EDIT: It's true that there is a tradition of providing some semblance of balance in committee process, including hearings, with the majority (not necessarily by party) mainly controlling what items are considered and what hearings are held, and ultimately the outcome; but not only is partisanship greater than in the past, but precisely those traditions have noticeably weakened over the last couple decades and particularly in the current Congress.
Any exec hauled on the carpet is going to want a drink afterwards. That's not the concern.
The election woke people up.
I found this from 2011 when they shut it down as a "mistake." (https://newsroom.fb.com/news/2011/11/our-commitment-to-the-f...).
Unfortunately, it looks like they removed the launch release - but it would be interesting to see how it was presented in light of the recent news.
They're not a product company, they're a distraction company.
Correction: they are a surveillance company. Need I remind everyone of Google's and FB's In-Q-Tel CIA partners in crime? They just figured out a way for everyone to willingly report on themselves, and not just on themselves, on others too! FB is bad and should collapse like every other dotcom boom-bust that uses and abuses its users... but the difference is the level of monopoly over non-technical users that didn't exist in the '90s. Back then the technical community could have dropped a product like hotcakes and watched it bust... but given the rise of non-technical users who sign any EULA/TOS and don't give a crap about privacy, I expect nothing will happen until something really bad happens at a massive scale.
For those who are younger, consider this the slashdot/digg/reddit cycle. Reddit, too, will die as it gets closer to IPO.
It's the beauty of computing though. Every market is ripe for disruption if someone has a good vision and follow-through. The problem is that so many of them use the exact same model and a few years later are the ones dying due to lack of integrity.
Can you substantiate this claim of tech product companies getting large/successful and then “dying due to lack of integrity”?
I haven’t noticed the pattern but if there is one I’m sure interested in the evidence. (And in what sense do they lack integrity?)
Not exactly. In all fairness, as Zuck points out, this is a key part of the story, and in theory, why this is different:
> In 2015, we learned from journalists at The Guardian that Kogan had shared data from his app with Cambridge Analytica. It is against our policies for developers to share data without people's consent.
This raises the question of what Facebook was doing (if anything) to prevent this sort of action, and the fact that they just took CA at their word that they had deleted this ill-gotten data (of course they hadn't) makes me think they did very little. I think this is just as concerning as any other part of this story. Even if people are knowingly willing to hand over data to Facebook (or the devs of some app) in exchange for using a service, they wouldn't think it's a free-for-all where anyone can mine the data for whatever they want.
Not to mention that it's arguable whether "consent" was really given for Facebook to share the data in the first place. I'd be interested to see some polling results asking whether Facebook users knew what Facebook was up to and whether they feel OK with it.
I think this is from 2013.
This was openly covered last year by the BBC when they interviewed Theresa Hong, Trump's digital campaign manager at the time [interview linked]. The campaign spent $16M on Facebook. Understandably, Facebook gave them the white-glove treatment, and even had their own employees embedded in Project Alamo (the headquarters of the campaign's digital arm).
But today Facebook claims they had no idea who one of their multi-million-dollar clients in 2015-2016 was. That it was just some random quiz-making hacker dude selling data to some other random company.
This piece of work posted today by Facebook is what we call damage control. Don't expect the truth from it-- it will contain truths, but it will not be the truth of the matter. And don't let it set your dialogue, man.
Number of times the word "advertising" was mentioned: 0
Facebook continues to pretend its business model is unicorns and kittens, not selling user data for money.
"learn from this experience", "doesn't change what happened in the past. ", "responsibility", "going forward", "together".
You find this same boilerplate from athletes who beat their wives, do drugs, or kill people.
Yet, maybe it serves a purpose? It doesn't matter how sorry they actually feel, nobody is going to feel better from a large, public, self-flagellating apology. Nobody is going to be happier or more mollified or satisfied. All it's going to accomplish is to provide grist for the lawsuit mill - after all, why apologize if you're not guilty?
Again, you're completely right. They're clearly not sorry.
Because you get more money for a cow than milk over a long time period and can use the cash to expand the cow-selling business?
(And that original metaphor was a bit mismatched, actually.)
FB provides a platform for ads, sure, but it can be used for way more than just ads.
3P tracking can infer who (even specifically) is viewing ads and it knows the social graph by other means. It can also do effective mass psychometric tests. A/B testing infrastructure can be used for more than just optimizing ads ... all of the psychometric dimensions tested by Kogan’s 2013 app can be expressed as embedded imagery and messaging in ads, and the same types of tests can operate at the same scale by integrating 3P data and tracking (some of which certainly originates from policy-compliant FB apps) which already knows the social graph beyond what FB will allow a single app or ad to draw.
But no, we've got to pretend Facebook is a touchy-feely community. I get the feeling that Facebook is ashamed of the way they make their billions.
You seem lost in a false narrative that he is out to get you :(
I challenge you to rethink your position assuming he means well...just for the fun of it...and share with us what your conclusion would be
He said those things when he was 19 years old and Facebook was still a random side project in college.
And yes, people were indeed "dumb fucks" to give a random college kid with a side project their personal information. If you went around Walmart asking people for their SSNs with no value proposition, anyone who complied would indeed be dumb as well.
Facebook is now a $400 billion company, and the growth a person goes through over decades of being a CEO and managing people is significant.
For you to bring up something the guy said when he was a kid is disingenuous and does more to harm any point you may have had than help it.
Also, can you explain to me how people submitting personal information to a random form from a college kid is not dumb? Facebook the product didn't even exist at that point, by the way, and the information collected was more like email addresses, nothing really more nefarious than that.
Yes: most of my intake into the military were 18 and 19. As a society we consider them adult enough to send into harm's way. I was the Old Man at 31.
Your defence seems to be based on the premise that 19-year-olds are children. No aspect of law or society concurs with that.
Some others, like CA and ilk, also get easy access to user’s psychometric data, by embedding the psychometrics into A/B tested ad content.
The adage “you are the product” has only become more true as FB has advanced, whether by their intended or explicit policies or not.
Honestly, at the moment, Facebook has huge economic incentives to keep as much data to itself as possible. Facebook being the only place where you can microtarget to such an extent is a huge moat around their business.
And microtargeting can be abused (or just plain used, depending on your perspective) to infer additional data by incorporating it into the 3P analysis once users click and start loading non-FB content. Microtargeting by its nature leaks information about the target segments...
Figuring out the data FB uses to microtarget is simply a matter of buying enough ads, or getting in the middle of enough campaign-to-user relationships (as a central 3P ad exchange or tracking service).
I wouldn't argue that micro-targeting can't end up with very specific privacy concerns, but I don't think it's nearly the same scale as "you should probably assume that if you signed up to enable the Graph API on Facebook all information about you prior to 2015 is available to people you probably don't trust".
The resulting dataset over even short periods (< 1 yr) is comparable to a total data dump, including an accurate social graph. A "very specific privacy concern" it is not.
Targeted campaign products leak data about the user’s targeted attributes by their very nature.
If you want FB’s targeting data, simply buy targeted campaigns and associate the target attributes with the users who click once they are on your server. At scale, the targeting data is transparent.
Big ad analytics companies and ad exchanges can and do basically sit in the center of many campaigns and slurp up the targeting data that naturally leaks from FB by virtue of selling campaigns based on those targets.
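Mechanically, the leak is simple to exploit (a sketch with invented names): buy one campaign per targeting attribute, and every click on your landing page tells you the clicker matched that attribute.

    # Each campaign is bought against exactly one targeting attribute.
    campaigns = {
        "cmp_001": "interested_in_politics",
        "cmp_002": "age_18_24",
    }

    inferred = {}  # our own visitor id -> attributes learned from clicks

    def on_click(campaign_id, visitor_id):
        # The landing page sets our cookie, so visitor_id is ours to keep.
        inferred.setdefault(visitor_id, set()).add(campaigns[campaign_id])

    on_click("cmp_001", "visitor_9f3")
    on_click("cmp_002", "visitor_9f3")
    print(inferred)  # visitor_9f3 matched both targeted attributes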
Whether they want to or not, they are selling their data.
Edit: Care to reply instead of downvote? Is everything above not true?