After enough building collapses, bridge collapses, and disastrous fires, people finally decided that "build whatever you want, however you want" is not OK if you are making something for use by the general public.
I think ultimately we're going to see legislation requiring licenses/certifications for software designers, software companies, and software service providers, just as a civil engineer is licensed and personally liable for the designs he approves.
It's coming. People will only tolerate the current shitshow that is our industry for so long.
Security OTOH is anti-inductive because you're dealing with intelligent attackers adjusting their methods. In this environment, security code and certifications will just become another pile of papers for bureaucrats to verify after your data was leaked anyway.
But maybe it will at least force manufacturers to make their products' software updateable.
The attackers can't get the data if you don't store that data to begin with. Voila!
I think there should be legislation that simply restricts who can store which data, as well as legislation that forces open-sourcing of critical infrastructure software.
I bring it up and the conversation shuts down. Nobody likes it. Therefore, I submit that what you're saying won't happen either.
Maybe this is extreme - but the rule should be not to store it at all if possible.
Hopefully, those updates will NOT be performed remotely by the manufacturers though. Imagine a self-driving car company rushing a quick-and-dirty fix to each of its cars remotely. I'm hoping that at least some certification institute will be sitting in between both parties to make sure everything is tested correctly before it's pushed to the consumer.
How about this: Did you test your code for buffer overflows? If not, you're negligent.
Maybe you need to submit an affidavit that the code has been tested, a log from an outside testing service (similar to a building inspector, but for code quality), or some other step to verify that something has been done. For the record, building inspections have flaws, too, but they sure help in spite of those issues!
Many other ideas, but that one is simple to communicate.
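To make the "did you test for buffer overflows" question concrete, here's a minimal C++ sketch of the difference between a negligent copy and a bounded one (the function and its name are mine, purely illustrative):

```cpp
#include <cstring>
#include <cstddef>

// Negligent version: no bounds check, so a long `src` silently
// overruns `dst` -- the classic buffer overflow.
//   void copy_name(char dst[16], const char* src) { strcpy(dst, src); }

// Defensive version: refuses to write past the end of `dst`.
// Returns true only if `src` (plus its terminator) fits.
bool safe_copy(char* dst, std::size_t dst_size, const char* src) {
    if (dst == nullptr || src == nullptr || dst_size == 0) return false;
    std::size_t needed = std::strlen(src) + 1;  // include the '\0'
    if (needed > dst_size) {
        dst[0] = '\0';                          // leave dst in a known state
        return false;
    }
    std::memcpy(dst, src, needed);
    return true;
}
```

An outside inspector could demand exactly this kind of evidence: a test suite that feeds oversized inputs and asserts the copy is rejected rather than overflowing.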
What will these licenses test us on? I have no problem finding a software engineer guilty of negligence for writing bad code because they were too lazy (though proving it is very difficult: what is the definition of "lazy"?), but by that standard everyone is guilty! There are too many corner cases, and we depend on upstream and downstream systems being non-faulty and 100% reliable.
Physical quality control is a lot easier to implement, so if someone fabricates lab inspection results they can go to jail, because science doesn't lie, right?
Now look at C++: even if we fix all the warnings from the compiler, that doesn't mean someone didn't cast to the wrong type at runtime and boom (what about underflow, overflow, never freeing memory, reusing the wrong pointer?). We lost the Mars Climate Orbiter in 1999 because of a simple unit-conversion error: one team worked in pound-force seconds where the other expected newton-seconds. We don't have a formal verification system that we can all agree on.
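That said, the unit-mix-up class of bug is one place where the language can actually help: strong types make the unit part of the interface, so the mismatch fails at compile time instead of in flight. A small C++ sketch (the types and helper names are mine, using inches/meters for simplicity):

```cpp
#include <cmath>

// Strong unit types: an Inches value cannot be passed where Meters
// is expected, so the mix-up is a compile error, not a crash landing.
struct Meters { double value; };
struct Inches { double value; };

constexpr Meters to_meters(Inches in) {
    return Meters{in.value * 0.0254};  // 1 inch is exactly 0.0254 m
}

// A function that insists on metric input (illustrative).
constexpr double descent_altitude_m(Meters alt) { return alt.value; }

// descent_altitude_m(Inches{100.0});  // does not compile: wrong unit
```

This doesn't fix pointer misuse or overflow, but it shows that "negligence" could at least be defined as failing to use the safety mechanisms the language already offers.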
In addition, how would this be enforceable? Unlike a civil engineer, where there's some amount of physical presence required in what they do, SW engineers can literally be anywhere in the world and still be productive and it seems that this system could easily be gamed.
This piece looks like little more than propaganda from the credit reporting agencies. Shame on you Bloomberg.
Unless you're careful to guard against it, you're loading up Facebook and Google's trackers on just about every site you visit. Registering for an account is not a prerequisite for them to build a profile on you.
I saw this post and immediately thought "propaganda" as well.
And of course you can be refused a job based on your social media profile, or whatever comes up when people Google for your name.
I used to spend some time on a car enthusiast forum back in the early 2000s, and after a few years I lost interest, so I stopped visiting.
10 years later, I decided to visit again and noticed that many posts had been made in my name (thankfully, just with my nickname), with horrible things being written "by me". Of course I had no way of knowing that their servers had been hacked years ago, and no recourse.
I can only imagine if my real name had been used on a random server somewhere, with some dirt written "by me", uncovered by someone googling my name...
"If it's on the internet, it must be true" is a scary thought.
As others have said, google/facebook are opt-in. I have knowingly given them the information they have on me. Equifax is not opt-in. I have never used any of their services.
Did you knowingly give them your credit card transaction data, too?
Did you "opt-in" to being tracked across most of the web by Google Analytics?
Say you never signed up for Facebook but don't block their tracking stuff, or maybe you do block trackers but also have some friends who use Facebook and have unwittingly shared a bunch of information about you. Someone can know just enough to fraudulently sign up for a Facebook profile in your name, and Facebook will helpfully make suggestions and pre-populate bits of the profile with stuff they already know about you. That person knows a lot more about you now.
It's also not a stretch to imagine Facebook or Google profiles being used as ID for credit and other financial institutions in the future; there are already many online services which allow this, and we already have things like Android Pay.
This shit's complicated, I think we're massively underestimating the risk of trusting these companies just because they haven't leaked user data. Making dismissive and oversimplified statements about it doesn't help anyone.
It's big brother implemented through the people around you.
No, it's not propaganda, it's regular journalism.
Google and FB are both 'large corps' whom journo bosses would be wary of offending, so no.
Whenever a journalist 'names names' of big companies, they're probably doing regular journalism.
The bad stuff happens when stories are suppressed to avoid upsetting advertisers.
The first two words of this title are "Forget Equifax ..." An imperative. This is exactly propaganda.
>"Whenever a journalist 'names names' of big companies, they're probably doing regular journalism."
What? Since when is using proper nouns a defining characteristic of "regular journalism"? That is a ridiculous statement.
>"The bad stuff happens when stories are suppressed to avoid upsetting advertisers."
Do you see many ads for Google on Bloomberg? No. There is zero risk there at all. Bloomberg is not in the ad business.
The real bad stuff happens when propaganda masquerades as journalism.
Bloomberg is a fairly respectable entity, they just don't do the bidding of arbitrary 'big corps' against other 'arbitrary big corps'.
As far as 'Google not advertising on Bloomberg' - my friend - Google owns the internet. Every web site on planet earth is 100% beholden to search results. Bloomberg lives and dies on Google results.
Google has been known to fiddle with search results for direct competitors, so if Bloomberg needs to be afraid of someone, it's Google - moreover - they have nothing to gain from doing a 'pro Equifax' propaganda piece.
There is corporate influence and national propaganda in journalism - but this is not it.
This is just regular journalism.
And I for one support the premise.
Equifax is irrelevant.
Google and FB together, are almost 100% of the risk. They have 'everything' on us.
Firstly, there is not one "Bloomberg." There are many different Bloomberg companies, and Bloomberg News regularly publishes questionable and fluffy pieces. See:
You might also want to look up William Randolph Hearst and the term Yellow Journalism if you are so naive as to believe that "respectable newspapers" (whatever that means) don't publish propaganda. See also Judith Miller and how the NYTimes, another "respectable paper", sold the Iraq War if you're still not convinced that propaganda pieces are a real thing:
>"As far as 'Google not advertising on Bloomberg' - my friend - Google owns the internet. Every web site on planet earth is 100% beholden to search results. Bloomberg lives and dies on Google results."
I am not your friend, and honestly you seem like a troll. Bloomberg does not need Google. Bloomberg is foremost a very successful financial services and software company, and only as a distant third a media company. Bloomberg L.P. does billions of dollars in revenue a year selling services to the financial industry. They do not need Google search results at all. Maybe you should read up on Bloomberg L.P. a bit. So no, they don't "live and die by Google results."
Technology today is more like language than product. There is no law requiring that you speak, or even learn how to speak; but if you choose not to speak you consign yourself to an underclass of mutes.
The amount of convenience I get from Facebook and Google outweighs the risk I'm exposed to, in my opinion.
If governments want to create data protection regulations, I'll save my opinion until I see whatever regulation is proposed, but I think it'll be difficult to create data protection regulation that is effective at protecting people without imposing a large, expensive burden on companies.
NSA begs to differ, on both counts https://www.theguardian.com/us-news/the-nsa-files
CIA too https://wikileaks.org/ciav7p1/
I do not believe that Google is voluntarily handing over any sensitive customer data to the NSA without legal compulsion. I would not be surprised if intelligence agencies around the world including US based ones target Google's communications (probably in full accordance to each agency's authorization) but that would not be with Google's cooperation.
I believe that Google is taking steps to protect customer data from all bad actors, including Government agencies. One initiative that was underway around the time I worked at Google was to encrypt all the data-center to data-center traffic at Google in an attempt to frustrate anyone who was tapping Google's backbone links.
Why does it matter whether it's under legal compulsion or not? At the end of the day they are handing over your data. They could choose not to store it, making compliance impossible, but they don't.
Because we are all obliged to follow the law.
They could organize it so that even when the law comes asking, they can't comply, but they don't.
What matters is that they are acting insecurely, and providing data that they shouldn't be storing/providing in the first place. The law is irrelevant here.
In the court system, not by disobeying.
Your view of 'what is unjust' is likely completely different from the view of others.
Particularly in this case, I don't have any problem with Google or FB handing over data for individuals under investigation, wherein a Judge had provided a warrant. This is 'legal' in every sense of the term and has been for some time.
As for 'mass surveillance' - well, this was a murkier area, and has been cleared up by the Supreme Court, and I don't suspect they are doing it.
If Google does not want to hand over data to officials producing warrants, they can take it up in court, and try to get an injunction against the process of handing over. If a judge feels there is merit to the case, they will grant the injunction while the case is being resolved.
"They could organize it so that even when the law comes asking, they can't comply, but they don't."
Nope. They can organize all they want, but if the Government is well within legal limits, Google et al. would face some serious pain. Again, for the 'mass surveillance' stuff (i.e. legal ambiguity a few years ago), they'd have some legal footing to fight (i.e. try for injunctions), but for other things, not so much.
Which cannot be done when any issues with these laws are discussed in "secret courts", and where the individuals involved cannot reach out to experts in the field, because their hands are tied by gag orders.
The strength of the warrant becomes less when you recognise that FISA approves almost every request it gets. The warrant is little more than a pro forma.
The structure of the current laws prevents a lawful answer to the situation.
I can't advocate breaking the law, that would be going against myself.
But neither can I advocate for the law, here, because it is failing to protect the people of the nation, from the power of the nation.
And note that only about 1,500 FISA requests are granted a year, which is a very small number for 300 million people, let alone the other 7 billion.
A single case might yield 5 or 10 warrants, ergo, possibly as few as 150 serious cases.
That 'they are almost always granted' is not so bad in and of itself. If there's a 'known process' for getting warrants, and law enforcement knows what will be approved and what won't - well - then there shouldn't be too many that are denied.
Underlying the 'warrant' is not something 'pro forma' - it's a set of expectations and requirements upon the part of the overview system in place. The 'form' requires that the applicant fulfills some very important criteria.
I do think it's fair to be suspicious and that we should be vigilant about it, but I don't think that 1500 requests a year is too out of line.
I think the big concern is the 'mass surveillance' - or when local cops are making requests to do local-yocal small cases that don't have relevance to things like actual terrorism.
Those 1500 requests cover about 15 million people though, which skews the weighting. That gives me concern.
> If there's a 'known process' for getting warrants, and law enforcement knows what will be approved and what won't
Either that, or there is a culture that rejecting a warrant needs extenuating circumstances, in which case it becomes a large concern.
We can't know if the oversight is simply managerial, or actually effective. It's done with the utmost secrecy, with many punishments awaiting any who might speak out.
> I think the big concern is the 'mass surveillance' - or when local cops are making requests to do local-yocal small cases that don't have relevance to things like actual terrorism.
Unfortunately FISA enables mass surveillance. And the checks and balances seem heavily weighted against the individual, and in favour of a state they can't oppose.
> Nope. They can organize all they want, but if the Government is well within legal limits
You are misunderstanding what I mean here: Google could make it so that nobody except the user can read their data, but Google chooses not to. If they did this (read: the correct/secure way), then the government could ask all it wants, and Google would be unable to comply.
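To make the "organize it so they can't comply" idea concrete: it's client-side encryption, where the provider stores only ciphertext and the key never leaves the user's device. A toy C++ sketch of the architecture (the XOR "cipher" here is deliberately trivial and NOT secure; a real system would use authenticated encryption, and all names are illustrative):

```cpp
#include <string>
#include <cstddef>

// Toy only: repeating-key XOR is NOT real cryptography. The point is
// the architecture: encryption happens on the client, so the server
// holds bytes it cannot read and cannot be compelled to decrypt.
std::string xor_transform(const std::string& data, const std::string& key) {
    std::string out = data;
    for (std::size_t i = 0; i < out.size(); ++i)
        out[i] = static_cast<char>(out[i] ^ key[i % key.size()]);
    return out;
}

struct ServerRecord {           // everything the provider ever sees
    std::string ciphertext;
};

// Client side: `user_key` never leaves the user's device.
ServerRecord upload(const std::string& plaintext, const std::string& user_key) {
    return ServerRecord{xor_transform(plaintext, user_key)};
}

std::string download(const ServerRecord& r, const std::string& user_key) {
    return xor_transform(r.ciphertext, user_key);  // XOR is its own inverse
}
```

A warrant served on the provider yields only `ciphertext`; there is nothing readable to hand over. The trade-off, and a likely reason providers choose not to do this, is that server-side features (search, ad targeting, spam filtering) stop working on data the server can't read.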
Yet, their info got out. If they got hacked....
If the US government wants your data and it's being held in a US company, then IT security isn't the issue, it's the legal framework. No IT security will help when armed agents of the government legally enter your business and coerce you into handing over your data.
The US Government can equally detain you directly and coerce you into handing over all of your data if that's what the law allows.
Only people with security clearance can access government cloud facilities
Are you serious? Wasn't Google found to be sending data between their data centers in plain text?
We most certainly cannot excuse them.
The fact that they have the best engineers and the best infrastructure doesn't mean that they are able to find all vulnerabilities and fix them in time. The only way to do that is to have a proof that the system is secure, which is beyond our reach today.
Accepting that statement at face value, I would still caution: today they do.
What happens in 10 years, when the bloom is off the rose and they either cannot attract top talent or have ignored the need to stay on top of their game?
That data's still there.
But the Google and Facebook security teams are particularly motivated to protect Google and Facebook, not me. If our interests happen to align, then I reap the benefits; but if they conflict, or even if they don't align particularly well, then the sterling quality of those teams becomes more or less useless to me.
That's a pretty significant hedge.
Also, how effective are Google's processes against dishonest employees? It would not be hard for, say, the Chinese government to plant a mole in Google's security team. (Actually, the real nightmare scenario is a Chinese mole in Intel or Apple's hardware team.)
A Google search didn't immediately clarify. What is 'SRE'? The most plausible de-abbreviation I could find was "site reliability engineer".
While data at Google is tightly protected, everything else at Google is fairly transparent. In particular, you can see who is using what resources. SREs, among other things, monitor and manage this resource usage. Large, abnormal usages ("Why is someone copying all this data off-site?") will be noticed, and some of the people who will notice won't take "because the NSA told us to" as an acceptable answer.
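The "large, abnormal usages will be noticed" claim boils down to anomaly detection on resource metrics. A crude C++ sketch of the idea (thresholding on the historical mean and standard deviation; this is my illustration of the concept, not Google's actual tooling):

```cpp
#include <vector>
#include <cmath>
#include <cstddef>

// Flag indices whose value exceeds mean + k * stddev of the series.
// E.g. daily egress in GB: a sudden off-site copy stands out.
std::vector<std::size_t> abnormal_days(const std::vector<double>& egress_gb,
                                       double k) {
    std::vector<std::size_t> flagged;
    if (egress_gb.empty()) return flagged;

    double mean = 0.0;
    for (double v : egress_gb) mean += v;
    mean /= egress_gb.size();

    double var = 0.0;
    for (double v : egress_gb) var += (v - mean) * (v - mean);
    var /= egress_gb.size();

    const double threshold = mean + k * std::sqrt(var);
    for (std::size_t i = 0; i < egress_gb.size(); ++i)
        if (egress_gb[i] > threshold) flagged.push_back(i);
    return flagged;
}
```

Real monitoring pipelines are far richer (seasonality, per-service baselines, human review), but the principle is the same: a copy of "all this data" is statistically loud.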
With many thousands of engineers it is totally possible for bad actors to have infiltrated Google. It's one of the reasons why there are such strict protocols for accessing customer data or production hardware. The idea is that by default no one has access to anything and that all accesses to data and production hardware are logged and audited.
I'm sure there're still opportunities for a rogue employee to do something bad but Google are way better at protecting access to their customer's data than many of the companies I've seen.
I can and do blame them without hesitation. They could choose not to store that data, or make it zero-knowledge-based, yet they choose not to.
Just because they've been compelled to hand over data via legal means does not excuse them for doing the wrong/insecure thing in the first place.
It's possibly true that Google doesn't intentionally share data. But a nonzero fraction of Google's employees are actually working for the intelligence services of various countries.
The fraction is debatable but it's naive/misleading to pretend that the big tech Co's aren't thoroughly penetrated by every halfway respectable intelligence outfit.
However, I was an Assistant Director (a senior tech / middle-management kind of level), first at an intelligence agency, then at a similar grade in a law enforcement organization, and later a security engineer at Google.
The idea that an intelligence agency (or law enforcement agency) would ask Google for help isn't far fetched, in fact when I was in law enforcement we'd ask private companies for help all the time. Some of the time they'd help, the rest of the time they tell us to come back with a warrant.
Google was famously in the second category, to the point where we'd not even bother asking. Even if we had a warrant we'd hesitate because we'd expect Google to challenge the warrant in court and it'd be a huge expensive hassle.
When I worked at Google it was the same, the idea of sharing private information with anyone was anathema. Engineers even being able to see private information without an audit trail and alarms going off was next to impossible.
The idea of a secret extra-legal conduit of information from Google to the US Government seems so far fetched to me that I have trouble even considering it. If there is a conduit for information it's there because the US Government has some legal instrument to compel Google to create it and that Google grudgingly has accepted the instrument as valid.
But what do I know, like I said, I wasn't the CEO of Google or DIRNSA. Maybe I just was never in the right rooms or policy meetings? Maybe everything I saw was staged for my benefit.
In reality, the explanation was even simpler: in the US, authorities either didn't bother asking, as you said, or put the effort into crafting a proper request.
Elsewhere, they assumed they could obtain anything (or remove anything about a local figure), just because they had a badge. In some cases, the requests were so ridiculous that they had to be educated about how the law worked in their own country. Over time, compliance increased in those countries as well.
It's also an open question how much data Google is compelled to share by law in the age of NSLs.
BUT, unlike Equifax, they haven't been completely compromised by hackers. Equifax is unique in its level of incompetence. I'm surprised the equity is holding up as well as it is. The company is grossly negligent.
There is no reasonable way to participate in modern society without a bank account, a cell phone, and water+electricity to my home. So I had no choice but to allow Equifax to have my data, even though if presented with any alternative I would not have.
I'm not worried about Facebook getting hacked and someone getting 10 years worth of photos, shopping data, or browsing history. Even credit cards have better protection than what Equifax has for SSNs.
And again, the consumers who shoulder the burden of this catastrophic data breach had absolutely no say in who brokered the SSNs.
Facebook and Google are marketing/advertising agencies.
I don't think that for many it matters if their data was obtained through hacking or through legal means.
Zuck: Yeah so if you ever need info about anyone at Harvard
Zuck: Just ask.
Zuck: I have over 4,000 emails, pictures, addresses, SNS
[Redacted Friend's Name]: What? How'd you manage that one?
Zuck: People just submitted it.
Zuck: I don't know why.
Zuck: They "trust me"
Zuck: Dumb fucks
Not saying a Facebook/Google data breach isn't terrifying -- it certainly is, and the privacy implications are indeed upsetting -- but the profitability path is just not as clear.
The "Russian propagandists" angle from the article is interesting, but seems a bit separate from the "FB has a shitload of data on people" problem. It's basically using the ad/social aspects of the service as designed: changing people's perspectives on something you want them to feel differently about. (Albeit aimed at a different target!) Not sure how to solve that problem.
What's the value of a psychological profile built from FB data? People say Trump won the election because Cambridge Analytica built a psychological profile of most Americans and targeted them with customized propaganda. Probably an exaggeration, but it's early days. This will only get better. Can you make your psychological profile invalid the same way you can change your credit card number?
Really? A billion automated orders for something, followed by the banking system having a metaphorical heart attack and either cancelling lots of/all cards (breaking normal usage until the card printers can catch up), or few/no cards (and risking further, directly exploitable, fraud)?
Sounds like a potent way to successfully fight a nation-state from a coffee shop.
Those who think Google and Facebook hold harmless data compared to Equifax should imagine what criminals might do with all the information about you located there to target you. Add AI technologies like NLP and speech synthesis, and it gets really scary. Think of a massive yet highly targeted social engineering attack abusing all this information.
We should not allow this to happen, and it's better to prevent it now, not after such an attack happens. Markets won't solve this problem due to the rarity of such events. Only legislation can help here.
He has a TED talk about the same topic:
Understandably, the government is way behind on this issue, still seeing the digital information (especially on social media) as secondary or less important.
I think it's the type of thing that, as with banks, slowly evolves: security got higher and higher, the FDIC was established, bank vaults improved, alarms improved, money tracing was implemented. We aren't there yet for cyber crimes.
Look at all the finger pointing after Trump was elected. It was Hillary's fault. It was Bernie's fault. It was the Dem's fault. Trump was what people wanted. It was the Russians. It was Facebook. It was the MSM. All this is just for an election.
Assume someone does use the massive data stores of Google or FB to do something bad. How can you even identify that that happened? How can you identify malicious actors and the incompetent ones?
In those days, the idea of tracing or catching these criminals would have seemed nigh impossible. I'm sure with enough brain power, lost billions, corrupt elections, and casualties, we'll come up with something sooner or later to mitigate the issues.
Catching cross-state criminals came with its own set of trade-offs for society.
I mention this all the time: What scares me is companies in the wild that consider security as a burden rather than something core to their business. We see companies even actively punishing people for finding issues in their software and going through responsible disclosure. This sort of response should be outlawed.
Equifax's breach is just a drop in the bucket compared to all the other breaches at less well known companies in the past. Like hey, some obscure pay system at Home Depot got hacked and someone stole my credit card, but fuck Google for tracking me online. We have this "too small to care" mentality about these businesses committing horrifying security transgressions until it's too late.
What about shadow profiles? Nobody is opting into those.
Seriously, I'm open to arguments, but the data Google has on me is definitely not as risky. Maybe I'd lose some pride, but not money and financial power.
And let's be real, the problem is that governments and financial institutions don't use secret passwords, but only IDs, to authenticate you. That's where the change needs to happen.
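The identifier-vs-secret point is the crux: an SSN identifies you but can't authenticate you, because everyone you've ever dealt with knows it. A toy C++ sketch of the distinction (`std::hash` stands in for a real password hash like bcrypt or argon2; all names here are illustrative):

```cpp
#include <string>
#include <functional>
#include <cstddef>

// What the verifier stores: a salt and a digest, never the secret itself.
struct Credential {
    std::string salt;
    std::size_t digest;
};

// std::hash is a placeholder; a real system would use bcrypt/argon2.
std::size_t toy_digest(const std::string& s) {
    return std::hash<std::string>{}(s);
}

Credential enroll(const std::string& secret, const std::string& salt) {
    return Credential{salt, toy_digest(salt + secret)};
}

bool authenticate(const Credential& c, const std::string& attempt) {
    return toy_digest(c.salt + attempt) == c.digest;
}

// An SSN fails as the `secret` here because it isn't secret: Equifax,
// plus every employer, landlord, and bank you've dealt with, already
// has it, and it can never be rotated after a breach.
```

A leaked password can be changed and its hash replaced; a leaked SSN is compromised forever, which is exactly why treating it as an authenticator is the underlying design flaw.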