An error occurred during a connection to archive.is. Cannot communicate securely with peer: no common encryption algorithm(s). Error code: SSL_ERROR_NO_CYPHER_OVERLAP
SSLLabs probably has the same problem: https://www.ssllabs.com/ssltest/analyze.html?d=archive.is
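If you want to check this yourself, here's a minimal Python sketch (standard library only; archive.is is just the host under discussion, substitute any server) that attempts a TLS handshake and reports either the negotiated cipher or the handshake failure:

    import socket
    import ssl

    def check_tls(host, port=443):
        """Attempt a TLS handshake and report the negotiated cipher, if any."""
        ctx = ssl.create_default_context()
        try:
            with socket.create_connection((host, port), timeout=10) as sock:
                with ctx.wrap_socket(sock, server_hostname=host) as tls:
                    # Handshake succeeded; show protocol version and cipher suite.
                    print(host, tls.version(), tls.cipher())
        except ssl.SSLError as exc:
            # A handshake failure with no mutually supported cipher suite is
            # roughly what Firefox reports as SSL_ERROR_NO_CYPHER_OVERLAP.
            print(host, "handshake failed:", exc)

    check_tls("archive.is")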
"We delete [the] temporary logs [which include your full IP address to identify things like DDoS attacks and debug problems] within 24 to 48 hours."
"In the permanent logs, we don't keep personally identifiable information or IP information. After keeping [the data we do keep] for two weeks, we randomly sample a small subset for permanent storage."
"We don't correlate or combine information from our temporary or permanent logs with any personal information that you have provided Google for other services."
Unless you think they're lying or unable to enforce this policy, this addresses most of the common privacy concerns I've heard in this context.
(I have worked for Google in the past, but I have never been involved at all with Google Public DNS or its privacy promises.)
Adhering to these promises is a separate question from changes of policy in the future, of course, and just as separate from failures in the areas of product design or ethics. Many parts of the conglomerate that calls itself Google have gotten worse in all of those areas over the last several years, though I am still a big fan of how GCP is progressing.
But none of this makes me think that they're retaining more Google Public DNS data than they claim. Given how little of that data they retain for the long haul, the risk of bad retroactive impact from a change in policy in this area is quite low. The risk is admittedly higher for other consumer services which do retain identifiable data over a long period of time.
Conversely, the risk is lower for G Suite and GCP offerings and for European residents, given the concerns and compliance obligations of business customers and the obligations imposed by the GDPR.
Obviously, no user would be able to justify buying subscriptions to every publication linked from HN.
And if there were no paywall bypass, then HN couldn't link to it, and it would get no HN traffic and no discussion on HN at all.
Allowing paywall bypass means the publication gets the HN traffic and discussion it wouldn't otherwise get, and the possibility of converting some of that traffic to subscribers who wouldn't otherwise subscribe.
For what it's worth, the owners/operators clearly don't think it puts this forum at risk, as the sharing of paywall bypass links is explicitly allowed/encouraged according to the guidelines and moderator comments.
And the very fact that the publications themselves allow bypass via certain referrers (e.g., Facebook) suggests they don't have a problem with it.
The other way of looking at it is that it can boost publication revenue... by driving more traffic
But we’re not talking about piracy here. If the publishers thought of it that way they’d block all access to archive sites.
Anyway, rather than a snarky dismissal like this, do you have a constructive suggestion for a solution that works well for everyone?
If the answer is “no paywalled sites on HN ever” please say so.
But if you have a more nuanced suggestion that would be a huge help!
Wait, so they only keep two weeks' worth of logs, and within those logs they did not find anyone abusing this flaw. How can they be certain about any time period earlier than two weeks prior?
The wording of this is really pushing the boundary of plausibility.
I fail to understand how this would protect privacy. Access logs with no profile data logged would not compromise privacy, would they?
Can anyone confirm the timing of the google blog post? It seems the WSJ article was posted at a similar time.
This leads me to believe that the most likely reason we are hearing about this now is comment requests from the WSJ. When Google realized the story was out, they published.
Google is trying to avoid using the words "data breach," as they may get into hot water in the EU.
Love how it is buried in the article.
My guess is a similar thing has been happening with Android permissions. Data has been leaking through; they just have not admitted to it.
True, but access logs without profile data would prevent you from knowing _which_ profiles were accessed. This matches the actual claim in the article that they would be "unable to determine which users were affected".
I can see how that's private info, but would it be something like:
"client x viewed user 1234567's profile"
I am not a lawyer. You should hire an appropriately qualified lawyer to review your data hygiene practices.
> The wording of this is really pushing the boundary of plausibility.
As far as the GDPR goes, technically this breach happened right before they would have faced large repercussions from it.
I'm not surprised. They (claim to) do something similar with the logs of their DNS service: two weeks of anonymized logs after which they "randomly sample a small subset for permanent storage".
Out of curiosity, when was this policy adopted? After these security holes were discovered?
If you don't have a good business case for keeping it, you're often better off erring on the side of deletion.
They had plenty of experience to suggest to them that keeping highly-detailed logs around indefinitely could do more harm to their users than good.
It doesn't sound like the logs are detailed enough to prove the antecedent, though, so it doesn't really matter if the logic is sound.
It's not about the mistake that led to the breach. It's about what you do once you become aware of it as a company, as a team, and as an individual.
I am quite confused by how poorly this was handled by Facebook recently, and now Google follows suit.
[Honest question] Should the company announce it publicly?
PS: Keeping in mind that this is part of the Murdoch vs. Google war going on for about 10 years:
[Edit: added the "Honest Question" tag]
Edit 2: Related post by Google:
An obligation to announce findings would create a moral hazard as well, since the incentives would suddenly tilt sharply towards not looking for security vulnerabilities.
A good point. There is also the fact that the average Internet user has no clue what a vulnerability, a bug, or even a log is, or what it means. Data mining, web scraping or data harvesting--no clue.
I just saw a TV report this weekend that stated CA hacked FB. Well, on second thought, maybe that's better than trying to explain that, even though it's "thisisyourdigitallife," you really need to spend some time and effort to understand what it all actually means.
The default P0 timeline is 90 days... do we know when Google found this vulnerability in Google+? Does Google apply the P0 deadline to their own vulnerabilities? Is it fair to expect them to?
P0 "competes" on an even playing field with everyone else doing public vulnerability research and, to a reasonable approximation, has access to the same information that everyone else does. Internal security assessment teams have privileged information not available to public researchers, and rely on that information to get assessment work done in a reasonable amount of time.
When P0 discovers a bug, it has (again, to an approximation) proven that any team of researchers could reasonably find that same bug --- everyone's using roughly the same sources and methods to find them (albeit P0's are done at a much higher level of execution than most amateur teams). That's the premise under which P0 bugs are announced on a timeline: what P0 has done is spent Google engineering hours surfacing and refining public information.
If you want to go a little further into it: the 90 day release window has a long history in vulnerability research. It's the product of more than a decade of empirical results showing that if you don't create a forcing function, vulnerabilities don't get patched at all; vendors will back-burner them indefinitely. Google's internal teams don't have that problem: when Google bugs get found by internal teams (and, presumably, by external ones), they get fixed fast. There's no incentive problem to solve with an announcement window.
Another lens to look at this through is the P0 practice of announcing after the publication of patches, regardless of where the window is. That's because, again, P0 is doing public research. Typically, when a P0 bug is patched, the whole world now has access to a before/after snapshot that documents the bug in enough detail to reproduce it. At that point, not announcing does the operator community a disservice, because the bug has been disclosed publicly, just in a form that is only "available" to people motivated to exploit the bug.
And again: not at all the case with internal assessments.
This wasn't a Project Zero bug, was it? Project Zero is a very special team with a distinct and notable charter. They aren't "Google's Security Folks". Certainly Project Zero has discovered and disclosed bugs in Google products in the past.
Breaches, not vulnerabilities. The discussion is not whether or not breaches should be disclosed, but whether newly discovered and believed-to-be-unexploited vulnerabilities should be disclosed.
They should, of course, after a reasonable period in which to patch the vulnerability.
You cannot prove the negative (realistically). If you have a vulnerability, you must treat it as though it has been exploited.
You can want the world to work differently, but to do so coherently I think you should explicitly engage with the unintended consequences of such a policy.
edit: Within reason, anyway. Obviously if your vulnerability includes write access to logs or something then you're poked.
> We made Google+ with privacy in mind and therefore keep this API’s log data for only two weeks. That means we cannot confirm which users were impacted by this bug. However, we ran a detailed analysis over the two weeks prior to patching the bug, and from that analysis, the Profiles of up to 500,000 Google+ accounts were potentially affected. Our analysis showed that up to 438 applications may have used this API.
^ the above statement, but couched with this:
> We found no evidence that any developer was aware of this bug, or abusing the API, and we found no evidence that any Profile data was misused.
Devil's advocate: Do you believe that proactive security assessments would still be performed if each vulnerability found was required to be disclosed as though it had been exploited?
It is required by law to report breaches of data, though I can assure you that in practice, this does not happen nearly as often as you'd expect or hope.
There is, however, no requirement to disclose vulnerabilities for which there is no evidence of exploitation or data breach, or to disclose vulnerabilities that were provably never exploited.
In my case, it was usually not a computer issue. It was usually a case of "We sent a check or letter to the wrong address" and it was weirdly common for the reason to be "Because your dad, brother or cousin with a similar name and address also has a policy with us and you people are nigh impossible to tell apart."
And we couldn't say anything like that.
Point being that divulging the issue comes with risks of making the problem worse. So it's not as simple and straightforward as it seems.
If there is a reasonable belief that data was exposed, all of the exposed CA residents need to be notified, and if > 500, the Atty General of CA needs to additionally be notified.
> California law requires a business or state agency to notify any California resident whose unencrypted personal information, as defined, was acquired, or reasonably believed to have been acquired, by an unauthorized person.
With links to more specifics in the CA Civil code.
Is it 100% confirmed that the logs would show it?
What they said was "We found no evidence that any developer was aware of this bug, or abusing the API, and we found no evidence that any Profile data was misused."
That seems only to say they couldn't find anything. Not that it absolutely didn't happen.
The answer is clearly no, as they only had two weeks' worth of logs out of the three years during which this bug has existed. Here's what they're saying:
"We made Google+ with privacy in mind and therefore keep this API’s log data for only two weeks. That means we cannot confirm which users were impacted by this bug. However, we ran a detailed analysis over the two weeks prior to patching the bug [...]"
> This data is limited to static, optional Google+ Profile fields including name, email address, occupation, gender and age. (See the full list on our developer site.) It does not include any other data
This is such a bogus statement out front. The first time I read it, I didn't even see "the full list" mentioned. The full list is much longer than this seemingly innocuous list of properties of a person. It includes such gems as:
> A list of places where this person has lived.
> A list of email addresses that this person has,
> The hosted domain name for the user's Google Apps account.
It's a little worse than they painted it to be, maybe not much, but at least they're being transparent, I guess...
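For context, here's a hypothetical Python sketch of what one of these profile records could look like; the field names are modeled loosely on the Google+ people.get resource and the values are invented, so treat both as assumptions rather than the exact API shape:

    # Hypothetical profile record; field names modeled loosely on the Google+
    # people.get resource, values invented for illustration only.
    profile = {
        "displayName": "Jane Doe",
        "occupation": "Software engineer",
        "gender": "female",
        "ageRange": {"min": 21, "max": 30},
        "emails": [{"value": "jane@example.com", "type": "account"}],
        "placesLived": [{"value": "Springfield", "primary": True}],
        "domain": "example.com",  # hosted Google Apps domain, if any
    }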
Changes the facts of the story 0%
Given that, and the articles you pointed out regarding Murdoch's history (coupled with his unusual willingness to compromise the editorial independence of his media properties), I'm not sure the WSJ is a reliable source in this case. Not that I expect them to blatantly lie, but half-truths and misdirection can go a long way (cf. comments in this discussion that question whether the WSJ is making a mountain out of a molehill here).
Citation needed. Google's blog post wasn't very clear, but it sounds like an API that more than 400 developers use returned more data than was intended. Google thinks the developers didn't use this information, but logs wouldn't help Google come to this conclusion.
Murdoch has a vendetta, but Google isn't being transparent here either.
Some other article I saw quoted somewhere said that they only kept logs for a short time for this service. I wonder how they ruled out exploits older than the logs?
In other words, you don't have evidence that this vulnerability wasn't abused in 2 weeks, you have evidence that no one abused it in ~6 months. Still not perfect, but a more compelling argument that it wasn't abused.
>We made Google+ with privacy in mind and therefore keep this API’s log data for only two weeks. That means we cannot confirm which users were impacted by this bug.
They have no idea if it has been used.
The other parts to your question are:
1. Should the company be compelled to disclose the vulnerability?
I don't think that is reasonable. Enforcing this would be a nightmare anyway.
2. If they did not disclose the vulnerability, but it becomes public knowledge another way, should there be any recourse?
I think that should be decided and enforced by the users. Unfortunately, that is becoming steadily more difficult, as companies like Google grow, and there are fewer viable/available alternatives to their products.
3. Does a company have a moral imperative to share this information?
I think that sharing such information takes a higher moral ground than not doing so.
That's not true. The logs show that it has never been used by anyone the two weeks they had logs for. It looks like the vulnerability existed for about three years. Given this is Google+ we're talking about, it's entirely believable that someone widely exploited the bug in the past, but stopped because Google+ is dead and no one updates it anymore.
> [Honest question] Should the company announce it publicly?
Yes, and they did. They just waited for six months to do it.
Does anyone read the article anymore? Or do they just read it for what they want to see and ignore the rest?
They don't have log coverage for 90%+ of the time period, so that's not something their logs could even possibly show.
The Wall Street Journal: "Because the company kept a limited set of activity logs, it was unable to determine which users were affected and what types of data may potentially have been improperly collected, the two people briefed on the matter said. The bug existed since 2015, and it is unclear whether a larger number of users may have been affected over that time."
"Google discloses Microsoft Edge security flaw before a patch is ready"
Do you think Google acted in a fair or unfair manner?
1. There's actually nothing here to suggest this was done as part of Project Zero or any part of Tavis's job. In fact, this was before Project Zero even existed, AFAIK.
2. He published details about it in march (O(60) days), as he said. It was still a security bug then, just missing a working exploit.
3. This thread produced a working exploit.
I'm gonna go with "either unrelated or fair".
Unrelated if it wasn't done by Tavis as part of his job, and fair if it was given the timeline and disclosure policies that existed at the time.
So does that mean that every single time you patch a remotely exploitable hole in your web server, et al., you have to file a report with every single government saying that there might have been a leak of all of our data from all of our users, but we're not sure? It will be like the California Proposition 65 warning, where there are so many "could be potentially harmful to a fetus" warnings that they all fade into the noise.
Just think of all of the CVEs reported by Microsoft, Red Hat, etc. Any one of those _could_ be a vulnerability leading to the loss of user data, and there is no guarantee that you would be able to detect it via logs.
Because the reality is that this would essentially require policing every commit that ever makes it to public serving in every company.
To call that unreasonable is vastly understating matters.
Yes, because the #1 thing on the mind of someone who gained unauthorized access to a system (e.g., via a remote execution vulnerability) is to cover their tracks, which includes things like doctoring logs.
> Company finds a security vulnerability caused by a bug
Remote execution vulnerabilities do exist...
I don't think it was ever officially announced/admitted anywhere, though. But it was exactly around the time that Google+ was rolled out.
Normally we'd treat these as dupes of each other (and initially we did that), but there seem to be two stories here: one about the data breach and one about Google+. So I guess we'll leave both of them up.
This despite the fact that we've been publicly asking for more limited OAuth scopes for years (cf. my HN posting history and my tickets on the Google issue tracker), and the fact that we've had zero security incidents in over three years. All to ensure that we're not risking exposing user data that we don't even have, have never accessed, and shouldn't need access to in the first place.
Agree that this is Google pushing their security costs onto developers and effectively killing bootstrapped SAAS apps.
If attachments are going from Gmail to Google Docs, then they're probably fine; I'd imagine one audit could cover all those types of apps. For things that send email to Slack or whatever, I'd expect that Slack would need to pay to have that audited.
I, for one, experienced this personally when they invited my friend and me to discuss buying our app. They just wanted our secret sauce, and after baiting us with the promise of working together they kicked us out and said the race is on.
Now, there were no guarantees we'd be working together, but two guys with no connections get tricked by Google and told the race is on? Us vs. Google, who were later granted patents for what we met with them about.
Now everyone says that's just how Silicon Valley is... it's expected. Hmm, things change!
So you're saying it held no value to your employer, and they have every right to treat the little-guy dreamer inventors who do not have the right connections like dogs?
Also, it wasn't and isn't just an idea; rather, it's algorithms we created in 2013 (and have improved since) that just worked and work now... demo videos below...
Turn audience & their devices into a stereo system
Drive In Movie app(listen to movie's audio on your device)
We'd love to find those connections who sincerely want to guide us (who are well connected) so our next big meeting with a FAANG or another tech company is a win for all! I believe that if we had gone into the meeting well connected in the Valley, things would have been different! Not treated like dogs!
I've seen these claims so many times: people claiming some investors took their deck and gave it to their portfolio companies, people claiming another company copied them. The instances where this actually happens usually occur when an already proven market product is copied (e.g., look at FB copying Snapchat recently); I've never seen it with a pre-success product.
Think of it this way: if I had the idea for a capacitive touchscreen smartphone in 2004 and met with Apple, could I claim they stole the iPhone from me? There is so much more to an iPhone than just the 'idea'.
Now, if you have some non-obvious algorithm that a senior engineer, if asked to design it, could not come up with in a few weeks, my opinion would be different. Like if you invented a fundamentally new type of homomorphic encryption which could be explained on a single page but enables a fundamentally new type of distributed computing.
There are indeed, some ideas which are very 'dense' in value purely by their description alone, but they're few and far between.
In any case, online forums are rife with people making conspiratorial claims, in economics, in politics, everywhere, and I think a technical community like HN should demand a higher degree of evidence.
I made up this NDA I signed, https://ryanspahn.com/motorola-google-Expired-NDA2013.pdf, and this letter from when Google absorbed ATAP, https://ryanspahn.com/google.JPG.
I've got emails from the jerk who invited us out there, who baited us and then said here's the door and, by the way, the race is on... goodbye.
I met with many others too, like Samsung, and that guy was an upstanding gentleman. Google was awful... treated us like dogs!
I have no reason to lie, only to tell my story to warn others and highlight that Google no longer follows its motto, "Don't be evil."
I'm sorry to hear you had that experience. Where I part company is the added interpretation. If I were in your situation and had a negative experience, I'm pretty sure I'd be angry, feel wronged, and feel 'used'; it's only natural.
I mean, Dropbox tried to sell themselves to Apple, and Steve Jobs didn't like the price. He went on to say 'you don't have a product, you have a feature'; actually, worse than that:
"And so he started trolling us a little bit, saying we're a feature, not a product, and telling us a bunch of things like that we don't control an operating system so we're going to be disadvantaged, we're going to have to figure out distribution deals, which are risky, and sort of a bunch of business-plan critiques. But then he was like, 'Alright, well I guess we're gonna have to go kill you, basically.' Maybe not in those words, but pretty close."
But do I believe Apple "stole" Dropbox? Nah; cloud-based backups, sync, etc. are pretty straightforward, and although there is innovation at the UI and syncing-protocol layer, iCloud is not really a Dropbox clone.
BTW, I encourage you to continue to try and innovate around your idea. There may be a use case beyond 'speakers', like emergency alerts or security; think "California earthquake imminent in 10 seconds!". There could also be a use case for synchronized sound-based gaming at parties. There's loads more to do, and having had some bad experiences shouldn't discourage you; it kind of comes with the territory of presenting ideas and startups, and you really do get shit on a lot.
Indeed, we have not given up, and the recent news that Google was awarded patents for SpeakerBlast-type technology has lit a fire under us even more.
Well I'd enjoy learning what you do at Google. Are you on the Chrome Audio team ;-)
*Edit: weird, your first post here was flagged.
I'm not on the Chrome audio team, but I have had experience with Chrome audio, as I implemented an OpenAL layer for GwtQuake and the web version of Angry Birds for Chrome using it. See https://www.youtube.com/watch?v=F_sbusEUz5w for an example of PlayN, a cross-platform (Web, Android, iOS, Flash) library I worked on in 20% time, and https://www.youtube.com/watch?v=aW--Wlf9EFs, which details some early experiments with image processing and the porting of Quake to Chrome in 2010.
Looks like you lived in Maryland too at one time. I grew up in Towson (I'm now in Bel Air, in Harford County) and all my family is here. I'd move out there for a dream job: inventing/building/designing interesting tech for a FAANG company. The road there looks to be through an acquisition.
Well it was nice chatting with you!
This is the same Comcast we're talking about, right? The one that's spent millions lobbying for the right to monitor what their customers do online and sell it to advertisers?
Are you sure you are not both? AT&T gives you a discount on your monthly fee if you allow them to inject their own ads. It was opt-in, but it would not surprise me if they just do it to all users. Plus, if you are using their DNS, I'd assume they are slurping in all of that data.
Ay, there's the rub, innit?
I had two Blogger blogs connected to the account, and my Twitter was connected to my Blogger as well. I've lost those blogs too, and I couldn't get them back. At some point, I noticed some suspicious tweets in my timeline and realized that they were from my blogs! So Google freed up my blogs but didn't remove their connected accounts. Whoever got the account probably didn't have any idea that these addresses were connected to a Twitter account as well! But sharing to Twitter was on, and whatever she/he was posting ended up in my Twitter account!
Point being, Google is leaking from strange places! Add Google+ to your Blogger and you’ll risk much more I guess!
>"A memo reviewed by the Journal prepared by Google’s legal and policy staff and shared with senior executives warned that disclosing the incident would likely trigger “immediate regulatory interest” and invite comparisons to Facebook’s leak of user information to data firm Cambridge Analytica."
>"The document shows Google officials felt that disclosure could have serious ramifications. Revealing the incident would likely result “in us coming into the spotlight alongside or even instead of Facebook despite having stayed under the radar throughout the Cambridge Analytica scandal,” the memo said. It “almost guarantees Sundar will testify before Congress.”"
>"Internal lawyers advised that Google wasn’t legally required to disclose the incident to the public, the people said. Because the company didn’t know what developers may have what data, the group also didn’t believe notifying users would give any actionable benefit to the end users, the people said."
These statements and tactics seem to be taken from the same playbook that Big Pharma, Big Tobacco or any other soulless Mega Corp uses. As long as it's legal, they don't care if it's right. Did their arrogance prevent them from entertaining the idea that disclosure would have provided users with the "actionable benefit" of considering whether or not they wanted to delete their Google accounts?
> We made Google+ with privacy in mind and therefore keep this API’s log data for only two weeks. That means we cannot confirm which users were impacted by this bug. However, we ran a detailed analysis over the two weeks prior to patching the bug, and from that analysis, the Profiles of up to 500,000 Google+ accounts were potentially affected. Our analysis showed that up to 438 applications may have used this API. We found no evidence that any developer was aware of this bug, or abusing the API, and we found no evidence that any Profile data was misused.
I did consider replying to the "buried the lede" comment with "I guess I can stop occasionally wondering if I should get familiar with Google+." Then I decided to tweet it instead, because that's insubstantive, or could be interpreted as such.
I'm actually a fan of Google, but never got into Google+. I don't openly fangirl the company in part because it gets so much open hatred here.
> At the same time, we have many enterprise customers who are finding great value in using Google+ within their companies. Our review showed that Google+ is better suited as an enterprise product where co-workers can engage in internal discussions on a secure corporate social network. Enterprise customers can set common access rules, and use central controls, for their entire organization. We’ve decided to focus on our enterprise efforts and will be launching new features purpose-built for businesses.
I don't use it often, but occasionally find useful communities there, especially concerning technical subjects. Now all of that is going to disappear.
It's annoying that Google apparently prizes the opinion of enterprise customers enough to half-abort the plan to shut down Google+, while for some reason maintaining a stubborn insistence on removing its access for the rest of us. Yes, this will be another point to add on the list of reasons to never become invested in a new Google product.
If most people are visiting by accident and immediately leaving, it's probably actively causing usability problems and should be shut down.
It can let the non-corporate users enjoy the fruit of their labors at keeping Google+ running for corporate users at almost no additional cost.
Depending on the logs is the worst idea ever in terms of breach determination. I don’t know how many times we’ve had 40 IoCs, but just because there isn’t a log file (often because no one splurged for the SIEM and the syslog collector broke beyond repair months ago) management acts like they’ve won the legal liability / cyber security lottery.
Obviously it’s not as black and white as that, but the burden of proof should be on the companies to show that no malicious use happened right after they go public with a breach.
Going public with this kind of information, even if nothing happened, could have driven much better behavior across the United States if not the world by setting the example. But Google chose the path of self-protection and short-term gain.
Microsoft sends out monthly security patches. Each fix in there is fixing a vulnerability. Every Windows server has multiple vulnerabilities fixed every month. Is every company that uses Windows now required to determine whether any of those vulnerabilities were actually used? This seems like a bottomless hole.
It wasn't hard, just about 5 hours of work and then a few hours to set everything up as I like it. I pay ~$5 per month for email/calendar/contacts through Fastmail, ~$10 per year for 2 domains (each), and ~$5 for an Android app to sync my CardDAV/CalDAV accounts with my Android phone. I have almost completely deleted/disabled Google apps on Android, although I'm not ready to run LineageOS quite yet. I even use an OSM-based maps app, which doesn't work as well as Google Maps, but it is sufficient; navigation sucks compared to Google Maps, but that's the price you pay for doing this sort of thing.
I'm not super-paranoid about government surveillance and I didn't care about Google tailoring ads to me like most folks here, but after all of the data breaches and such, I decided that controlling my own data is worthwhile just to make me feel better. Now, I am able to do most of the stuff I could do before, maybe 70-80% as good as with Google for some things (like maps), but I have peace of mind.
Security is hard. There are many, many more compromises of small firms and self-maintained servers than of these big players, it's just that they don't get major media coverage in 99% of cases.
Take 100 people, and suppose 1 of them decides to stop using Gmail, replacing it with a custom setup. 99 decide to stick with Gmail. The 1 person who spent hours on a custom setup is more likely to leave a comment sharing their experience, tips and tricks, etc. The 99 won't have something noteworthy to post about.
The end result is you see disproportionately more comments from people who do something drastic and unusual compared to ones who don't.
2) Every government that has the power to compel Google to release your private information, from Chile's to the Cayman Islands
3) Every agent empowered by any of the governments from 2)
4) Every person and/or organisation that could be furnished with your data as part of some sort of "discovery" process by any of the agents listed in 3
(for the US: essentially anyone and everyone you've ever had any sort of commercial relationship with)
This means that if you, say, have a divorce case, expect your entire Gmail contents to be used against you by your significant other.
The problem is well explained here:
This has nothing to do with Google. In a divorce case, you would be compelled to disclose your email, not Google, so using another provider wouldn't matter, because you would be compelled to turn over any emails there too.
(incidentally this is exactly why most corporations these days have a pretty short email retention policy, something like 3-6 months. As long as you delete the mails before a complaint, or at least before discovery is granted, there's nothing wrong with deleting them, even if it is to avoid them being used against you later. But there are other advantages too: it helps me get organized, and it prevents me from obsessing over things that have slipped so far down the priority list they'll never happen. Which is very soothing. Plus it breeds the good habit of not storing important things in your inbox)
I would like to point out that often the criticism is leveled that you'd only do this if you're guilty. Aside from that that is the "if you've got nothing to hide ..." argument against privacy, it comes with a lot of false assumptions. For instance, that such information will not spread, that laws won't change in bad ways you can't control, that you can trust law enforcement infinitely and indefinitely, and so on and so forth.
Also, a wise man once said: "In theory, theory and practice are the same. In practice however, ...". What you control is yours, and with proper security measures no power the police or justice system has can break that control. That's as it should be.
I still haven't eliminated Google Photos; I do appreciate the free sync on my Pixel 2 XL for peace of mind. Google Translate is still super useful for offline translation and the camera translate feature, and I haven't yet been able to replace Google Maps for transit instructions. I still use a Chromecast, which I guess is next to go; maybe I'll actually set up an HTPC of sorts, or perhaps just use Kodi on my Xbox One S, as that seems to work well enough.
Centralization does have drawbacks, but in terms of security it is a major step up from servers run at home in many ways.
I've thought about this over and over, and it's hard to come to a solid conclusion about keeping personal data safe (in this context I mean emails and files you may store in the cloud, not browsing history, social media posts, etc.). There are so many options, each with downsides, and I'm not a security expert. So every time I get excited about trying a new service geared towards privacy, or about setting up my own instances, inevitably somebody points out the terrible pitfall in it and I get discouraged.
1. Don't use the internet or internet services, period. <- Not tenable for most of us.
2. Use services who market themselves as geared towards privacy. <- Can't actually trust those services, even with E2E encryption because they could be running different code from what you think they're running.
3. Use services who market themselves as geared towards privacy. Wait, that was 2; rather: Use regular cloud options, but stack stuff on top - VeraCrypt volumes or Cryptomator with Google Drive, GPG for email, etc. (see the sketch below the list) <- Really difficult to set up and to have a nice, reliable way of accessing data on mobile/desktop/etc. No security audits on a lot of the open source software.
4. Host your own services - i.e. a Nextcloud 14 instance on EC2 with an S3 backend, then use client-side E2E <- Difficult to make sure you set the service up in a safe way, and without even a fraction of the code-auditing resources of, say, a giant corporation.
5. Spread what you do out over multiple services - FastMail for email, DropBox for cloud storage, Standard Notes for notes, etc. <- A real pain.
I know there will never be a consensus on this, but I'd love to hear your thoughts on the best way to keep my personal files and notes personal to me. Let's assume I'm not a target of any spy agencies or whatnot, but I want to make it very, very difficult for anyone but me to read my personal notes and files.
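For option 3 above, here's a minimal sketch of the "stack encryption on top of a regular cloud provider" idea, assuming the third-party cryptography package is installed; it encrypts a file locally so only the ciphertext ever gets uploaded:

    # pip install cryptography  (assumed third-party dependency)
    from cryptography.fernet import Fernet

    # Generate once and keep somewhere safe (password manager, offline copy);
    # anyone holding this key can decrypt everything encrypted with it.
    key = Fernet.generate_key()
    fernet = Fernet(key)

    # Encrypt locally; upload only notes.txt.enc to Drive/Dropbox/S3/etc.
    with open("notes.txt", "rb") as f:
        ciphertext = fernet.encrypt(f.read())
    with open("notes.txt.enc", "wb") as f:
        f.write(ciphertext)

    # Later: download notes.txt.enc and decrypt locally.
    with open("notes.txt.enc", "rb") as f:
        plaintext = fernet.decrypt(f.read())

The trade-off is exactly the one mentioned in the list: key management and convenient mobile access become your problem.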
It's complete hypocrisy to affect commitment and then slink away when it comes to your own products. That raises questions about conflict of interest and credibility.
They've reported plenty of Google/Chrome bugs in the past. Your claims of hypocrisy are very off base.
>Because the company kept a limited set of activity logs, it was unable to determine which users were affected and what types of data may potentially have been improperly collected, the two people briefed on the matter said. The bug existed since 2015, and it is unclear whether a larger number of users may have been affected over that time.
Reckless endangerment deals with the possibility of something bad happening, but notice that word "reckless."
This article makes this sound like something that it appears not to be.
Did this all start when Google fired Damore? Or does it date further back?
They only kept two weeks of logs, yet this bug was accessible over a three-year window. So out of 156 weeks, they can only rule out 2 weeks during which data was not accessed. I think that's pretty pathetic for a company that stores your precise location, search history, photos, email, text messages, calendar, social network, date of birth, etc.
This is a very big story, which is why Google executives sat on it for six months.
EDIT: fixed link. thanks timvisee