Forget Equifax. Facebook and Google Have the Data That Should Worry You (bloomberg.com)
233 points by adventured 5 months ago | 114 comments



I think what we're seeing are the same sorts of things that led to building codes, fire codes, licensing of architects and civil engineers.

After enough building collapses, bridge collapses, and disastrous fires, people finally decided that "build whatever you want, however you want" is not OK if you are making something for use by the general public.

I think ultimately we're going to see legislation requiring licenses/certifications for software designers and software companies and software service providers, just as a civil engineer is licensed and personally liable for the designs he approves.

It's coming. People will only tolerate the current shitshow that is our industry for so long.


It might happen but I don't think it'll work. Codes work because they deal with known, predictable, and repeated problems. You can keep improving the fire code with each disastrous fire and fire will not actively try to outsmart you next time.

Security OTOH is anti-inductive, because you're dealing with intelligent attackers adjusting their methods. In this environment, security codes and certifications will just become another pile of papers for bureaucrats to verify after your data was leaked anyway.

But maybe it will at least force manufacturers to make their products' software updateable.


I disagree.

The attackers can't get the data if you don't store that data to begin with. Voila!

I think there should be legislation that simply restricts who can store which data, as well as legislation that forces open-sourcing of critical infrastructure software.


We can't even agree on a very weak version: if you store personally identifiable information about me, you MUST share it with me. It does not matter if you're a local corner store or the CIA. You may not store personally identifiable information about me without telling me all about what you're storing.

I bring it up and conversation shuts down. Nobody likes it. Therefore, I submit that what you're saying won't happen either.


I think it's predictable that if certain data is stored, others will want it.

Maybe this is extreme - but the rule should be not to store it at all if possible.


> But maybe it will at least force manufacturers to make their products' software updateable.

Hopefully, those updates will NOT be performed remotely by the manufacturers though. Imagine a self-driving car company rushing a quick-and-dirty fix to each of its cars remotely. I'm hoping that at least some certification institute will be sitting in between both parties to make sure everything is tested correctly before it's pushed to the consumer.


There are many ideas to make this workable.

How about this: Did you test your code for buffer overflows? If not, you're negligent.

Maybe you need to submit an affidavit that the code has been tested, a log from an outside testing service (similar to a building inspector, but for code quality), or some other step to verify that something has been done. For the record, building inspections have flaws, too, but they sure help in spite of those issues!
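
To make that concrete, here's a minimal sketch of the kind of artifact such a testing step could produce: a libFuzzer-style fuzz target for a small, deliberately buggy parser (the parse function and its bug are invented for illustration, not taken from any real codebase):

    // Build with: clang++ -g -fsanitize=fuzzer,address fuzz_parse.cpp
    #include <cstdint>
    #include <cstddef>
    #include <cstring>

    // Hypothetical parser with a classic bug: it copies into a fixed-size
    // buffer without checking the input length first.
    static int parse(const uint8_t* data, size_t size) {
        char field[16];
        if (size > 1 && data[0] == 'N') {
            std::memcpy(field, data + 1, size - 1);  // overflows once size > 17
            field[size - 2] = '\0';
            return field[0];
        }
        return 0;
    }

    // libFuzzer calls this repeatedly with mutated inputs; combined with
    // AddressSanitizer, the overflow above gets reported within seconds.
    extern "C" int LLVMFuzzerTestOneInput(const uint8_t* data, size_t size) {
        parse(data, size);
        return 0;
    }

Attaching the fuzzer's crash report (or the clean run log) to the affidavit would be one way to show the testing actually happened.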

Many other ideas, but that one is simple to communicate.


> I think ultimately we're going to see legislation requiring licenses/certifications for software designers and software companies and software service providers.

What would these licenses test us on? I have no problem with finding a software engineer guilty of negligence for writing bad code because they were too lazy (though proving it is very difficult; what is the definition of "lazy"?), but by that standard everyone is guilty! There are too many corner cases, and we depend on upstream and downstream systems to be non-faulty and 100% reliable.

Physical quality control is a lot easier to implement, so if someone fabricates lab inspection results they can go to jail, because science doesn't lie, right?

Now look at C++: even if we fix all the warning messages from the compiler, that doesn't mean someone didn't cast to the wrong type at runtime and boom (and what about underflow, overflow, not freeing memory, reusing the wrong pointer?). We lost the Mars Climate Orbiter in 1999 because of a simple imperial-to-metric unit conversion error. We don't have a formal verification system that we can all agree on.
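
A toy illustration of the point (my own example, not from the article): this compiles without a single warning on a typical compiler and can still blow up, or silently read garbage, at runtime:

    #include <cstdio>
    #include <vector>

    int main() {
        std::vector<int> v{1, 2, 3};
        const int* p = v.data();   // pointer into the vector's storage
        v.push_back(4);            // may reallocate, leaving p dangling
        std::printf("%d\n", *p);   // no compiler warning, undefined behavior at runtime
    }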


It should be coming, but we can't make any assumptions. This kind of standard won't happen on its own; people need to fight for it to make it happen. The worrying thing is that Facebook and Google have an unprecedented amount of power to resist external restrictions on what they can build and how they can use it. Simply attempting to pass legislation may not be enough.


As a FLOSS contributor/developer, I don't like this idea. Having to apply for and receive a license to write programs or modify existing ones would pretty much kill any FLOSS project not backed by a company.

In addition, how would this be enforceable? Unlike a civil engineer, where there's some amount of physical presence required in what they do, SW engineers can literally be anywhere in the world and still be productive, and it seems that this system could easily be gamed.


But there have always been security regulations: PCI DSS, SOC 2, HIPAA, ISO 27001


As a natural-born American you still have the choice to not sign up for a Google account or a Facebook account. You do not, however, have the option of refusing to apply for a social security number when your child is born.

This piece looks like little more than propaganda from the credit reporting agencies. Shame on you, Bloomberg.


> you still have the choice to not sign up for a Google account or a Facebook account

Unless you're careful to guard against it, you're loading up Facebook and Google's trackers on just about every site you visit. Registering for an account is not a prerequisite for them to build a profile on you.


Can someone open a bank account in my name or take out a loan based on data google/facebook have on my browsing history? I kind of doubt it.

I saw this post and immediately thought "propaganda" as well.


Not yet, but it's the sort of thing I can imagine happening in a couple of years. I think there have already been a couple of attempts at involving FB profiles in credit ratings.

And of course you can be refused a job based on your social media profile, or whatever comes up when people Google for your name.


> And of course you can be refused a job based on your social media profile, or whatever comes up when people Google for your name.

I used to spend some time on a car enthusiast forum back in the early 2000s, and after a few years I lost interest, so I stopped visiting. Ten years later, I decided to visit again and noticed that many posts had been made in my name (thankfully, just under my nickname), with horrible things written "by me". Of course, I had no way of knowing that their servers had been hacked years earlier, and no recourse.

I can only imagine if my real name had been used on a random server somewhere with some dirt written under it, uncovered by someone googling my name...

"If it's on the internet, it must be true" is a scary thought.


Maybe, just maybe, there's a bit more at stake than just your credit rating.


Maybe I can be concerned about both at the same time. Just because Facebook and Google have damaging information about me doesn't mean Equifax doesn't. Also, I am not a homeowner, so a hit to my credit could be devastating financially.

As others have said, google/facebook are opt-in. I have knowingly given them the information they have on me. Equifax is not opt-in. I have never used any of their services.


> As others have said, google/facebook are opt-in. I have knowingly given them the information they have on me.

Did you knowingly give them your credit card transaction data, too?

https://www.washingtonpost.com/news/the-switch/wp/2017/05/23...

Did you "opt-in" to being tracked across most of the web by Google Analytics?


Google and Facebook are not opt in. They have your data if you browse the web and don't know enough to block/avoid their tracking techniques, or with Facebook if you just have Friends who use it. That argument is bullshit and I'm sick of hearing it.

Say you never signed up for Facebook but don't block their tracking stuff, or maybe you do block trackers but also have some friends who use Facebook and have unwittingly shared a bunch of information about you. Someone can know just enough to fraudulently sign up for a Facebook profile in your name, and Facebook will helpfully make suggestions and pre-populate bits of the profile with stuff they already know about you. That person knows a lot more about you now.

It's also not a stretch to imagine Facebook or Google profiles being used as ID for credit and other financial institutions in the future; there are already many online services which allow this, and we already have things like Android Pay.

This shit's complicated. I think we're massively underestimating the risk of trusting these companies just because they haven't leaked user data. Making dismissive and oversimplified statements about it doesn't help anyone.


It's not that hard to guard against their trackers. More critical, from my point of view, is the fact that, assuming you have a normal social life, snapshots of you show up on FB. And people identify you in those snapshots. So anybody with access to that data can build a partial timeline of what you do, who your friends are, and where you've been even if you've never created an account.

It's big brother implemented through the people around you.


"This piece looks likes little more than propaganda from the credit reporting agencies. Shame on you Bloomberg."

No, it's not propaganda, it's regular journalism.

Google and FB are both 'large corps' whom journo bosses would be wary of offending, so no.

Whenever a journalist 'names names' of big companies, they're probably doing regular journalism.

The bad stuff happens when stories are suppressed to avoid upsetting advertisers.


Really, an article in the wake of the most significant data breach in the US telling me to forget about that breach and think about something else?

The first two words of this title are "Forget Equifax ..." An imperative. This is exactly propaganda.

>"Whenever a journalist 'names names' of big companies, they're probably doing regular journalism."

What? Since when is using proper nouns a defining characteristic of "regular journalism"? That is a ridiculous statement.

>"The bad stuff happens when stories are suppressed to avoid upsetting advertisers."

Do you see many ads for Google on Bloomberg? No. There is zero risk there at all. Bloomberg is not in the ad business.

The real bad stuff happens when propaganda masquerades as journalism.


There is zero possibility this is a propaganda piece for Equifax.

Bloomberg is a fairly respectable entity, they just don't do the bidding of arbitrary 'big corps' against other 'arbitrary big corps'.

As far as 'Google not advertising on Bloomberg' - my friend - Google owns the internet. Every web site on planet earth is 100% beholden to search results. Bloomberg lives and dies on Google results.

Google has been known to fiddle with search results for direct competitors, so if Bloomberg needs to be afraid of someone, it's Google. Moreover, they have nothing to gain from doing a 'pro Equifax' propaganda piece.

There is corporate influence and national propaganda in journalism - but this is not it.

This is just regular journalism.

And I for one support the premise.

Equifax is irrelevant.

Google and FB together, are almost 100% of the risk. They have 'everything' on us.


>"Bloomberg is a fairly respectable entity, they just don't do the bidding of arbitrary 'big corps' against other 'arbitrary big corps'"

Firstly, there is not one "Bloomberg." There are many different Bloomberg companies, and Bloomberg News regularly publishes questionable and fluffy pieces. See:

http://www.huffingtonpost.com/josh-nelson/bloomberg-news-ref...

You might also want to look up William Randolph Hearst and the term "yellow journalism" if you are so naive as to believe that "respectable newspapers" (whatever that means) don't publish propaganda. See also Judith Miller and how the NYTimes, another "respectable paper", sold the Iraq War, if you're still not convinced that propaganda pieces are a real thing:

https://www.mediamatters.org/blog/2014/07/01/how-the-iraq-wa...

>"As far as 'Google not advertising on Bloomberg' - my friend - Google owns the internet. Every web site on planet earth is 100% beholden to search results. Bloomberg lives and dies on Google results."

I am not your friend, and honestly you seem like a troll. Bloomberg does not need Google. Bloomberg is foremost a very successful financial services company and software company. And, as a distant third, a media company. Bloomberg L.P. does billions of dollars in revenue a year selling services to the financial industry. They do not need Google search results at all. Maybe you should read up on Bloomberg L.P. a bit. So no, they don't "live and die by Google results."


As the world around us becomes more and more saturated with sensors (not least the ones we carry around with us), and the more the transactions of our lives require submitting to these sensors, the less compelling the "well, you don't have to sign up" argument becomes.

Technology today is more like language than product. There is no law requiring that you speak, or even learn how to speak; but if you choose not to speak you consign yourself to an underclass of mutes.


You're bending the word "propaganda" to the breaking point.


I'm not particularly worried, Google and Facebook have two of the best security teams in the world and spend an amazing amount of money to protect customer data.

The amount of convenience I get from Facebook and Google outweigh the risk I'm exposed to in my opinion.

If Governments want to create data protection regulations I'll save my opinion until I see whatever regulation is proposed but I think it'll be difficult to create data protection regulation that is effective at protecting people without imposing a large expensive burden on companies.


I'm not particularly worried, Google and Facebook have two of the best security teams in the world and spend an amazing amount of money to protect customer data.

NSA begs to differ, on both counts https://www.theguardian.com/us-news/the-nsa-files

CIA too https://wikileaks.org/ciav7p1/


I've read the reporting of Snowden's leaks and I have at least as much exposure to the inner workings of the five eyes as Snowden did and I have come to a different conclusion.

I do not believe that Google is voluntarily handing over any sensitive customer data to the NSA without legal compulsion. I would not be surprised if intelligence agencies around the world including US based ones target Google's communications (probably in full accordance to each agency's authorization) but that would not be with Google's cooperation.

I believe that Google is taking steps to protect customer data from all bad actors, including Government agencies. One initiative that was underway around the time I worked at Google was to encrypt all the data-center to data-center traffic at Google in an attempt to frustrate anyone who was tapping Google's backbone links.


> I do not believe that Google is voluntarily handing over any sensitive customer data to the NSA without legal compulsion

Why does it matter if it's under legal compulsion or not? At the end of the day they are handing over your data. They could choose not to store it, thus making it impossible to do, but they don't.


"Why does it matter if it's under legal compulsion or not. "

Because we are all obliged to follow the law.


A) No we aren't. Unjust laws should be fought. I realize that this is easier said than done, but it's still true.

B) They could organize it so that even when the law comes asking, they can't comply, but they don't.

What matters is that they are acting insecurely, and providing data that they shouldn't be storing/providing in the first place. The law is irrelevant here.


"A) No we aren't. Unjust laws should be fought. "

In the court system, not by disobeying.

Your view of 'what is unjust' is likely completely different from the view of others.

Particularly in this case, I don't have any problem with Google or FB handing over data for individuals under investigation, where a judge has provided a warrant. This is 'legal' in every sense of the term and has been for some time.

As for 'mass surveillance' - well, this was a murkier area, and has been cleared up by the Supreme Court, and I don't suspect they are doing it.

If Google does not want to hand over data to officials producing warrants, they can take it up in court, and try to get an injunction against the process of handing over. If a judge feels there is merit to the case, they will grant the injunction while the case is being resolved.

"They could organize it so that even when the law comes asking, they can't comply, but they don't."

Nope. They can organize all they want, but if the Government is well within legal limits, Google et al. would face some serious pain. Again, for the 'mass surveillance' stuff (i.e. legal ambiguity a few years ago), they'd have some legal footing to fight (i.e. try for injunctions), but for other things, not so much.


> In the court system, not by disobeying.

Which cannot be done, when any issues with these laws are discussed in "secret courts" [0], and where the individuals involved cannot reach out to experts in the field, because their hands are tied by gag orders.

The strength of the warrant diminishes when you recognise that the FISA court approves almost every request it gets. The warrant is little more than a pro forma exercise.

The structure of the current laws prevents a lawful answer to the situation.

I can't advocate breaking the law, that would be going against myself.

But neither can I advocate for the law, here, because it is failing to protect the people of the nation, from the power of the nation.

[0] http://edition.cnn.com/2017/03/08/politics/fisa-court-explai...


The article you cited shows there is rather heavy oversight, and at a very high level.

And that only about 1500 FISA requests are granted a year, which is a very small number for a country of 300 million people, relating to another 7 billion.

A single case might yield 5 or 10 warrants, ergo, possibly as few as 150 serious cases.

That's small.

That 'they are almost always granted' is not so bad in and of itself. If there's a 'known process' for getting warrants, and law enforcement knows what will be approved and what won't - well - then there shouldn't be too many that are denied.

Underlying the 'warrant' is not something 'pro forma'; it's a set of expectations and requirements on the part of the oversight system in place. The 'form' requires that the applicant fulfills some very important criteria.

I do think it's fair to be suspicious and that we should be vigilant about it, but I don't think that 1500 requests a year is too out of line.

I think the big concern is the 'mass surveillance' - or when local cops are making requests to do local-yokel small cases that don't have relevance to things like actual terrorism.


> I do think it's fair to be suspicious and that we should be vigilant about it, but I don't think that 1500 requests a year is too out of line.

Those 1500 requests cover about 15 million people though, which skews the weighting. That gives me concern.

> If there's a 'known process' for getting warrants, and law enforcement knows what will be approved and what won't

Either that, or there is a culture that rejecting a warrant needs extenuating circumstances, in which case it becomes a large concern.

We can't know if the oversight is simply managerial, or actually effective. It's done with the utmost secrecy, with many punishments awaiting any who might speak out.

> I think the big concern is the 'mass surveillance' - or when local cops are making requests to do local-yokel small cases that don't have relevance to things like actual terrorism.

Unfortunately FISA enables mass surveillance. And the checks and balances seem heavily weighted against the individual, and in favour of a state they can't oppose.


I disagree with your view of fighting unjust laws, but I can agree to disagree on that one. It's a complicated issue for sure.

> Nope. They can organize all they want, but if the Government is well within legal limits

You are misunderstanding what I mean here: Google can make it so that nobody except the user can read their data, but Google chooses not to. If they did this (read: the correct/secure way), then the government can ask all they want, and Google would be unable to comply.
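
A minimal sketch of what I mean, assuming libsodium for the crypto (this is the general idea of client-side, "zero-knowledge" storage, not a claim about how Google's systems actually work): the key is generated and kept on the user's device, so the server only ever holds ciphertext it cannot read.

    // Build with: g++ client_crypt.cpp -lsodium
    #include <sodium.h>
    #include <cstdio>
    #include <string>
    #include <vector>

    int main() {
        if (sodium_init() < 0) return 1;

        // The key lives only on the user's device; it is never uploaded.
        unsigned char key[crypto_secretbox_KEYBYTES];
        crypto_secretbox_keygen(key);

        std::string msg = "data the service should never be able to read";
        unsigned char nonce[crypto_secretbox_NONCEBYTES];
        randombytes_buf(nonce, sizeof nonce);

        // Encrypt locally; only the ciphertext and nonce would be sent to the server.
        std::vector<unsigned char> ct(crypto_secretbox_MACBYTES + msg.size());
        crypto_secretbox_easy(ct.data(),
                              reinterpret_cast<const unsigned char*>(msg.data()),
                              msg.size(), nonce, key);

        // Without the client-held key, a subpoena for the server's copy yields noise.
        std::printf("server stores %zu opaque bytes\n", ct.size());
        return 0;
    }

The obvious trade-off is that features like server-side search, ad targeting, and password-free account recovery stop working, which is presumably why they choose not to do it.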


Protection through obscurity. If the NSA needs to go through the legal process to access data, the chances of them looking through your data is much lower than if they had unrestricted access. The cost of data inquiry discourages data inquiries.


That doesn't really answer my question though. The problem is that Google is handing over data, not what method is being used to try and access that data.


Yes, but if you reside in the US and NSA or CIA want to get your data, they are going to get it. Even if you have the best security team and program in the world.


You missed my point; maybe I didn't make it clear: if there's an org with brilliant math PhDs and security people with virtually unlimited budgets, it's the NSA.

Yet, their info got out. If they got hacked....


Ah sorry, I did misunderstand. Yes, no one is safe. Not Google, not Facebook, and not NSA or CIA or any agency or organization on Earth. A good defense is much harder than a good offense when it comes to infosec.


You're pointing out an orthogonal issue.

If the US government wants your data and it's being held in a US company, then IT security isn't the issue, it's the legal framework. No IT security will help when armed agents of the government legally enter your business and coerce you into handing over your data.

The US Government can equally detain you directly and coerce you into handing over all of your data if that's what the law allows.


They most likely have access on their own; it seems a bit suspicious that so many open AWS ops reqs require a security clearance.


This is why: https://aws.amazon.com/govcloud-us/

Only people with a security clearance can access government cloud facilities.


>Google and Facebook have two of the best security teams in the world and spend an amazing amount of money to protect customer data

Are you serious? Wasn't Google found to be sending data between their data centers in plain text?

https://www.theverge.com/2013/10/30/5046958/nsa-secretly-tap...


I think one can excuse them for not securing themselves against their own government physically tapping their internal network.


It's referred to as "their internal network", but these taps were not within their buildings, as far as I recall. From my understanding, these are still leased lines run externally from facility to facility. It's been generally assumed that nobody is between points A and B on that line, but the line is still largely traveling through unsupervised space, and we are not the only government to have invested significant resources into tapping lines. (I believe both the US and Russia have submarines specifically designed to facilitate tapping undersea cables.)


And don't forget that most (all?) modern lines have been funded directly by the US government. For all intents and purposes... they are government owned.


Their own government is just what we know about. You really think foreign governments weren't doing something similar? Why would you think that?

We most certainly cannot excuse them.


>I'm not particularly worried, Google and Facebook have two of the best security teams in the world and spend an amazing amount of money to protect customer data.

The fact that they have the best engineers and the best infrastructure doesn't mean that they are able to find all vulnerabilities and fix them in time. The only way to do that is to have a proof that the system is secure, which is beyond our reach today.


"...Google and Facebook have two of the best security teams in the world..."

Accepting that statement at face value, I would still caution: today they do.

What happens in 10 years when the bloom is off the rose, and they either cannot attract top talent or have ignored the need to stay on top of their game?

That data's still there.


> I'm not particularly worried, Google and Facebook have two of the best security teams in the world and spend an amazing amount of money to protect customer data.

But the Google and Facebook security teams are particularly motivated to protect Google and Facebook, not me. If our interests happen to align, then I reap the benefits; but if they conflict, or even if they don't align particularly well, then the sterling quality of those teams becomes more or less useless to me.


Those teams are pretty motivated to protect users as well, because user population and network effects are key to their business models (perhaps a bit more for Facebook than for Google but still). If users started feeling like visiting those sites exposed them to security risks, as well as the existing privacy concerns, it would be disastrous for the bottom line.


I can't say I'm that concerned about those regulations being an expensive burden on companies. I'm more concerned about them being effective.


You're a little foolish to believe that some sensitive data within Google/Facebook/XYZ does not get shared outside with FVEY countries and intelligence agencies.


As someone who has worked for a five eyes intelligence agency and at Google I can tell you that Google does not share sensitive data with the Government unless compelled to do so by law.


> unless compelled to do so by law

That's a pretty significant hedge.

Also, how effective are Google's processes against dishonest employees? It would not be hard for, say, the Chinese government to plant a mole in Google's security team. (Actually, the real nightmare scenario is a Chinese mole in Intel or Apple's hardware team.)


Honestly Google is one of the few companies I know of which really considers internal threats and has taken precautions to reduce the likelihood that internal bad actors will have access, to reduce the scope of access to any bad actors that slip through, and to ensure that illegitimate accesses will be noticed, reducing the time a bad actor will have access.


I believe you. But one still has to wonder who watches the watchmen.


Crazy libertarian SREs. :-P


> Crazy libertarian SREs. :-P

A Google search didn't immediately clarify. What is 'SRE'? The most plausible de-abbreviation I could find was "site reliability engineer".


"Site reliability engineer" is the correct de-abbreviation.

While data at Google is tightly protected, everything else at Google is fairly transparent. In particular, you can see who is using what resources. SREs, among other things, monitor and manage this resource usage. Large, abnormal usages ("Why is someone copying all this data off-site?") will be noticed, and some of the people who will notice won't take "because the NSA told us to" as an acceptable answer.
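
The monitoring side can be sketched very simply (my own toy illustration of the idea, not Google's actual tooling): keep a history of how much data a given job or user normally moves off-site and flag anything far outside that baseline.

    #include <cmath>
    #include <cstdio>
    #include <vector>

    // Toy anomaly check: flag an egress sample more than `k` standard
    // deviations above the historical mean for this job/user.
    bool abnormal_egress(const std::vector<double>& history_gb,
                         double sample_gb, double k = 4.0) {
        if (history_gb.size() < 2) return false;
        double mean = 0.0;
        for (double x : history_gb) mean += x;
        mean /= history_gb.size();
        double var = 0.0;
        for (double x : history_gb) var += (x - mean) * (x - mean);
        var /= history_gb.size() - 1;
        return sample_gb > mean + k * std::sqrt(var);
    }

    int main() {
        std::vector<double> typical = {1.2, 0.9, 1.4, 1.1, 1.3, 1.0};  // GB per day
        std::printf("1.5 GB abnormal? %s\n", abnormal_egress(typical, 1.5) ? "yes" : "no");
        std::printf("500 GB abnormal? %s\n", abnormal_egress(typical, 500.0) ? "yes" : "no");
        return 0;
    }

Real systems obviously use much richer signals (who, what data, from where), but the "someone notices and asks why" property is the important part.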



That'd be the right abbreviation.


Google fights against subpoenas and warrants all the time but sometimes they lose. At that point they have to comply or be held in contempt and I can't really blame Google for complying in that situation.

With many thousands of engineers it is totally possible for bad actors to have infiltrated Google. It's one of the reasons why there are such strict protocols for accessing customer data or production hardware. The idea is that by default no one has access to anything and that all accesses to data and production hardware are logged and audited.

I'm sure there are still opportunities for a rogue employee to do something bad, but Google is way better at protecting access to its customers' data than many of the companies I've seen.


> Google fights against subpoenas and warrants all the time but sometimes they lose. At that point they have to comply or be held in contempt and I can't really blame Google for complying in that situation.

I can and do blame them without hesitation. They could choose not to store that data, or make it zero-knowledge-based, yet they choose not to.

Just because they've been compelled to hand over data via legal means does not excuse them for doing the wrong/insecure thing in the first place.


Do they fight against all subpoenas equally or do they make calls about which to fight and which to comply with? If they choose to comply with some, how do they make that call?


>> "...that Google does not share sensitive data with the Government..."

It's possibly true that Google doesn't intentionally share data. But a nonzero fraction of Google's employees are actually working for the intelligence services of various countries.

The fraction is debatable but it's naive/misleading to pretend that the big tech Co's aren't thoroughly penetrated by every halfway respectable intelligence outfit.


Do you have the "pay grade" to be able to make this assertion credibly? One cannot, in general, see the forest for the trees. This adage is not merely a put-down.


If you're asking whether I was the director of my agency or the CEO of Google then the answer is no. I have also not worked for the Government in about 6 years or Google in 4 years so maybe everything changed the moment I left.

However, I was an Assistant Director (which is a senior tech / middle-management kind of level), first at an intelligence agency, then at a similar grade in a law enforcement organization, and later a security engineer at Google.

The idea that an intelligence agency (or law enforcement agency) would ask Google for help isn't far fetched; in fact, when I was in law enforcement we'd ask private companies for help all the time. Some of the time they'd help; the rest of the time they'd tell us to come back with a warrant.

Google was famously in the second category, to the point where we'd not even bother asking. Even if we had a warrant we'd hesitate because we'd expect Google to challenge the warrant in court and it'd be a huge expensive hassle.

When I worked at Google it was the same, the idea of sharing private information with anyone was anathema. Engineers even being able to see private information without an audit trail and alarms going off was next to impossible.

The idea of a secret extra-legal conduit of information from Google to the US Government seems so far fetched to me that I have trouble even considering it. If there is a conduit for information it's there because the US Government has some legal instrument to compel Google to create it and that Google grudgingly has accepted the instrument as valid.

But what do I know, like I said, I wasn't the CEO of Google or DIRNSA. Maybe I just was never in the right rooms or policy meetings? Maybe everything I saw was staged for my benefit.


I'm not sure if it's still the case, but there used to be ample variability of compliance in Google's transparency report across the globe. Outsiders assumed that in the US the percentage was higher because Google was in bed with the FBI and co.

In reality, the explanation was even simpler: in the US, authorities either didn't bother asking, as you said, or put the effort into crafting a proper request.

Elsewhere, they assumed they could obtain anything (or remove anything about a local figure), just because they had a badge. In some cases, the requests were so ridiculous that they had to be educated about how the law worked in their own country. Over time, compliance increased in those countries as well.


I worked for Google a long time ago and it's almost certainly true that this is still official company policy. The real question is whether all Google employees always act in accordance with that policy. That is a much dicier proposition.

It's also an open question how much data Google is compelled to share by law in the age of NSLs.


Yes they have immense amounts of personal data and that is worrying. It is worrying they share this data with government agencies, marketing agencies etc.

BUT, unlike Equifax, they haven't been completely compromised by hackers. Equifax is unique in its level of incompetence. I'm surprised the equity is holding up as well as it is. The company is grossly negligent.


To me the difference is I signed up with Facebook, even though I rarely use it any more. I created a Gmail account, and I use Google for searches. All of those were options presented to me, with alternatives - there are other ways to stay in touch with friends, to send and receive email, to search the internet.

There is no reasonable way to participate in modern society without a bank account, a cell phone, and water+electricity to my home. So I had no choice but to allow Equifax to have my data, even though if presented with any alternative I would not have.


Not only that, but the personal data Facebook and Google collect is provided voluntarily, to some extent. Equifax leaked THE personal identifier used in the U.S., which affects a person's livelihood.

I'm not worried about Facebook getting hacked and someone getting 10 years' worth of photos, shopping data, or browsing history. Even credit cards have better protection than what Equifax has for SSNs.

And again, the consumers who shoulder the burden of this catastrophic data breach had absolutely no say in who brokered the SSNs.


> It is worrying they share this data with government agencies, marketing agencies etc

Facebook and Google are marketing/advertising agencies.


30 mil US accounts on FB were scraped using Mechanical Turk in an ingenious way. No hacking was required.

I don't think that for many it matters if their data was obtained through hacking or through legal means.

https://theintercept.com/2017/03/30/facebook-failed-to-prote...


You don't have your social security number and loans on your Facebook profile (or maybe you do). However whatever you put on Facebook was your choice. Nobody "posts" to Equifax.


I agree with you, but things are changing. Facebook can create a shadow profile of you, friends can tag you in pictures without your consent, and when you delete data it will still remain in Google's caches; you will need to request removal individually for each URL, because Google won't automatically remove them even if the original page on FB is gone.


I'm sure some do.

Zuck: Yeah so if you ever need info about anyone at Harvard

Zuck: Just ask.

Zuck: I have over 4,000 emails, pictures, addresses, SNS

[Redacted Friend's Name]: What? How'd you manage that one?

Zuck: People just submitted it.

Zuck: I don't know why.

Zuck: They "trust me"

Zuck: Dumb fucks


Seems like there's a (difficulty/physical file size)/(profit or effort)*(likelihood of success) score that matters in these things: a terabyte of credit card numbers (alone) would be 62.5 billion card numbers (at 16 bytes each), but a terabyte of Facebook chat messages/private pictures covers only a small fraction of the data. It'd be much easier for O(20 MB) to move off the network than O(500 GB). The former can be profitably resold, while the latter requires a careful search to find high-value data (e.g. pictures like the Fappening, I guess?) and has no foolproof profit mechanism. And the sheer size brings other problems, like how to store it, how to exfiltrate it, and how to deal with it without being noticed. And of course, if you ARE good at all of the above, why not give in to a chill life in SF instead of a "life of crime" and all the volatility/violence/cloak-and-dagger that goes with it?

Not saying a Facebook/Google data breach isn't terrifying -- it certainly is, and the privacy implications are indeed upsetting -- but the profitability path is just not as clear.

The "Russian propagandists" angle from the article is interesting, but seems a bit separate from the "FB has a shitload of data on people" problem. It's basically using the ad/social aspects of the service as designed: changing people's perspectives on something you want them to feel differently about. (Albeit aimed at a different target!) Not sure how to solve that problem.


What's the value of a credit card number? It can quickly be changed and you would be hard pressed to exploit a billion of them at the same time.

What's the value of a psychological profile built from FB data? People say Trump won the election because Cambridge Analytica built a psychological profile of most Americans and targeted them with customized propaganda. Probably an exaggeration, but it's early days. This will only get better. Can you make your psychological profile invalid the same way you can change your credit card number?


> you would be hard pressed to exploit a billion of them at the same time.

Really? A billion automated orders for something, followed by the banking system having a metaphorical heart attack and either cancelling lots of/all cards (breaking normal usage until the card printers can catch up), or few/no cards (and risking further, directly exploitable, fraud)?

Sounds like a potent way to successfully fight a nation-state from a coffee shop.


You're thinking in terms of today. Network throughput keeps growing, and in 10 years 500GB might seem like very little. We should think ahead, not try to fix it after the disaster.


People don't understand why it's dangerous because such hacks are what Taleb calls black swans: they are really rare, but their results can be catastrophic in unforeseen ways.

Those who think that Google and Facebook have harmless data compared to Equifax: imagine what criminals might do with all your information located there to target you. Add in AI technologies like NLP and speech synthesis, and it gets really scary. Think about a massive but at the same time highly targeted social engineering attack abusing all this information.

We should not allow this to happen, and it's better to prevent it now, not after such an attack happens. Markets won't solve this problem due to the rarity of such events. Only legislation can help here.


Marc Goodman wrote a book on this: http://www.futurecrimesbook.com/bios/marc-goodman

He has a TED talk about the same topic: https://www.ted.com/talks/marc_goodman_a_vision_of_crimes_in...


I think that the US gov has not yet realized the full potential danger of data exposure. The best thing is to plan for these disaster scenarios like you would for pandemics, wars, and natural disasters. There should be strong regulations, a set of protocols, and a way to mitigate these situations.

Understandably, the government is way behind on this issue, still seeing the digital information (especially on social media) as secondary or less important.

I think it's the type of thing that slowly evolved with banks, where security got higher and higher, the FDIC was established, bank vaults improved, alarm systems improved, and money tracing was implemented. We aren't there yet for cyber crimes.


I think the people have not yet realized the full potential danger of the US gov abusing our data.


The danger is basically a nonrepudiation problem.

Look at all the finger pointing after Trump was elected. It was Hillary's fault. It was Bernie's fault. It was the Dem's fault. Trump was what people wanted. It was the Russians. It was Facebook. It was the MSM. All this is just for an election.

Assume someone does use the massive data stores of Google or FB to do something bad. How can you even identify that that happened? How can you identify malicious actors and the incompetent ones?


That's sort of like the old days when someone robbed a bank: you had no way to even identify them if they were masked. If they travelled far, you couldn't find them; there were no cross-state warrants. Also, they would rip off the traveling gold wagons and make off with untraceable gold they could melt down and split for money.

In those days, the idea of tracing or catching these criminals would have seemed nigh impossible. I'm sure with enough brain power, lost billions, corrupt elections, and casualties, we'll come up with something sooner or later to mitigate the issues.


Learn the history of the FBI.

Catching cross-state criminals came with its own set of trade-offs for society.


I don't understand why people villainize Facebook and Google so much. If anything, they've done a fantastic job at data security. They explicitly hire security engineers and even offer bug bounties for potential issues, treating security as a priority.

I mention this all the time: What scares me is companies in the wild that consider security as a burden rather than something core to their business. We see companies even actively punishing people for finding issues in their software and going through responsible disclosure. This sort of response should be outlawed.

Equifax's breach is just a drop in the bucket compared to all the other breaches less well-known companies have experienced in the past. Like, hey, some obscure pay system at Home Depot got hacked and someone stole my credit card, but fuck Google for tracking me online. We have this "too small to care" mentality about these businesses committing horrifying security transgressions until it's too late.


No, both are bad, and we should not forget either of them.


As far as I know, Google/Facebook don't have my SSN, driver's license number, etc. I would be more worried if they hacked my bank account, etc.


Are you sure that you don't have your SSN or DL number somewhere on Google or Facebook? Your accountant might have sent you your tax return as a PDF via email. You might have taken a picture of a document where your SSN appears.


Do they? Cause Equifax has my financial info. Facebook and Google know that I like cat videos.


Exactly. Also, people opt in to Google and Facebook, at least to some degree. I never consented to Equifax having my data, and I got nothing in return for it. It's the difference between a free transaction - even one you regret later - and theft.


> people opt in to Google and Facebook

What about shadow profiles? Nobody is opting into those.


Google knows everything you searched for. Hope you never looked up an AIDS test.


That's still not on the level of what Equifax has. Honestly, no one else is going to care if I did do that. Evil hackers aren't going to be able to use that info. Stuff in my credit report, however...


I'll trade my entire browsing history for your SSN


The vast majority of people have nothing very precious to lose to text message leaks, or even photo leaks, but they do have a lot to lose credit wise and financially from identity theft.

Seriously, I'm open to arguments, but the data Google has on me is definitely not as risky. Maybe I'd lose some pride, but not money and financial power.

And let's be real: the problem is that government and financial institutions don't use secret passwords, only IDs, to authenticate you. That's where the change needs to happen.


I'm becoming more and more convinced that Facebook is simply an evil company, and Google isn't far behind.


Yes and no. FB and Goog don't have your most important data. Equifax has EVERYTHING that is core to your being.


It's the other way around.


Oh man! Why Google and not your government, bank, or health care provider?


It's more than terrifying to me... I absolutely hate both.


Why does it feel like there is a barrage of propaganda being published against tech companies right now?


Probably because they've been using known and unsafe practices for a long time, and we're giving them a bunch of our data. Seems like "propaganda" is the wrong term, and "legitimate concern and skepticism" would be the correct term.


Think it is pretty clear there is. Now for what purpose and whom? Is it to create more distrust in the US? Is it to try to get us to destroy ourselves from within?


Some of it might be related to partisan politics, in which these organizations play an increasingly important role. Some of it might be competitors or would-be competitors. A pretty fair bit, especially here, is likely to be sour grapes and tall poppy syndrome. Some people feel they deserve those billions, and are extremely bitter toward those who actually succeeded.


How about "Don't you dare forget Equifax, and Google and Facebook are even worse"? Bloomberg's gotten really annoying with their clickbaity titles this year.



