This is so strange; this is exactly how you don't handle problems like this.
Write a blog post, explain what happened, explain who's affected and to what extent, explain if it can be fixed and what you're doing, and explain what you'll do to make sure it doesn't happen again.
Putting out a supposed hidden fix in the Drive for Desktop client, to see if it can recover files locally (?!) when the entire issue is files disappearing from the cloud, doesn't seem like it makes any sense.
Or if the problem really is solely with files that should have been uploaded but weren't, and nothing from the cloud ever actually got deleted, then explain and justify that as well -- because that's not what people are saying.
I don't understand what's going on at Google. If actual data loss occurred, trying to pretend it didn't happen is never the answer. As the saying goes, "the cover-up is worse than the crime". Why Google is not fully and transparently acknowledging this issue baffles me. The corporate playbook for these types of situations is well known, and it involves being transparent and accountable.
To preface, I do not intend to defend Google nor do I work with them or represent them.
That said, I have been in similar situations with large scale customers. It is hard. Some percentage of customers are pathological, and even after you fix their problem they refuse to stop spreading the rumors.
Once it’s fixed, I want all communication forward looking. Some percent of people are flat out insane, incompetent, or just assholes. Sometimes you have to lock the thread in order to stop a conversation about something that is already fixed.
Large scale customer bases are just a different beast. Once you experience it, you know what I mean. That doesn’t mean Google took the right path - only people with a comprehensive perspective can evaluate that, and I’m just some idiot on a forum who knows nothing about the specifics.
Of course. Doing the right thing in the moment is also hard. But it's still the right thing. Google is famously uncommunicative and opaque; locking a thread is par for the course.
Again, of course, their reputation loss doesn't show on their bottom line. (How would it? They let loose the whole CFO army, and we don't really have the convenience of a randomized trial.) But incidents like these are accumulating the kindling that will slowly but surely chip users away from the behemoth.
If you lose customers their data, fail to recover it, and then want all communication to be "forward looking" you are either flat out insane, incompetent or just an asshole.
Or, you realize the past cannot be changed. And the customers who only want to rehash the past, rather than saying “where do we go from here”, are pathological.
If I run a business and someone communicates to me that they are pathological and cannot be satisfied unless I invent a time machine, I am not going to be particularly concerned about their outcomes. They're just not worth it - fire your shitty customers, for the sake of your business and employees.
Only if the number of customers is significant and they are litigious, organized, and funded.
My point is that any time you have >1M customers, you will have many pathological people whom you don’t want to do business with in the first place. The right amount of “firing your customers” is nonzero. Anyone who has worked in a customer service role has experienced this.
So it's okay to screw those customers because they aren't litigious, organized, and funded?
You fucked over your customers and some of those who were harmed will be rightly furious with you. The solution here is to try and do good by them, not put your head in the sand and treat them as a percentage!
No wonder people are no longer willing to assume good faith when having to deal with corporations. It's because of people who think like you.
I do think this is an interesting conversation, but would like to request that we remove loaded language like "screwing over" customers.
To me, the primary fact seems to be that Google lost some customers' data. We can all agree on that - they should have kept it, but they didn't. They sold a product that, for some number of customers, was defective.
What is their ethical and financial culpability here? To me, if they did their best - if they have industry-leading backup/replication technology (which I think they probably do) - there really isn't much that CAN be done.
On the customer side - your data has been lost. What should you do in this situation?
My experience leads me to believe that some people, as upset as they are, understand that sometimes shit happens. The other set of these impacted customers do not accept/understand that - they want you to invent a time machine and reverse reality. Barring that, they want your first born plus 10%.
To me, it is perfectly acceptable to tell that second group of people something like this: "I am sorry this happened, despite our planning and efforts. It sucks. We cannot fix it. However, you are also a toxic customer - moving forward you should look to another company to fill those needs."
At the very least, firing those customers will help with your line-employees' quality of life. Yes, shit happens and it sucks - but there are a lot of assholes who only make a situation worse. No matter what you do, you will never make them happy, and trying to make them happy will have great cost.
Those "pathological" customers, I have no problem telling to pound sand.
So it's never OK to intentionally screw customers. But when bad things happen, are people looking for the best available resolution? If not, let them go be jagoffs somewhere else.
Google should definitely refund them some amount of money (both good and bad customers). For the pathological ones, it's a nice way of saying "it's worth paying you to go away".
Most people who know me wouldn't describe me that way. All I'm saying is that some customers are pathological and you'll never please them, so don't try. Put those resources into your healthy relationships.
Pretty sure the person you replied to is saying that Google did NOT fail to recover it. You should read the post you replied to again, I don't think you understood it.
It could be victim blaming. It could be rational. Have you never been in a situation where a frantic user didn't understand that you gave them the solution to their problem and they just continued lashing out? Sometimes victims are done right by, and they just create victims out of another party.
And someone can be a victim of data loss and simultaneously a bully for trying to harangue Google. Being a victim in one area is not a “get out of being called an asshole” card. Especially when it comes to data loss, where ultimately the data owner is the responsible party.
Victim blaming is problematic in general, I agree. But when "shit happens" (as it did here, assuming nobody thinks Google deleted the data on purpose), at some point victims become aggressors.
It sucks to lose your data. It might suck more for the Google employees who lost the data. Have a little empathy for both sides - those who don’t can eat rocks.
...I'm sorry; are you saying that it sucks more for the poor Google employees (making mad bank, by the way) because they have to deal with knowing there are angry customers out there that they don't have to interact with anyway because Google outsources that kind of support to community forums...
...than for people who lost months worth of work because they trusted, in good faith, that the platform Google promotes as being a great place to keep your data safe, would keep their data safe?
Maybe the "right" grayhat/blackhat way to handle it is to use high-quality, convincing sock puppet accounts to manufacture consensus against the "conspiracy theorists". It's not ethical but it's the more effective alternative if you're already at the point of locking threads where people continue to point out that you still haven't fixed the problem.
Great idea. Google could even use their fake AI to respond in real-time to negative YouTube videos and find the hidden positive user sentiment under a cup.
The part I quote below resonated with me. If you have an email I could reach you at, I would like to ask your opinion about how to handle a situation. It is very private.
> I have been in similar situations with large scale customers. It is hard. Some percentage of customers are pathological, and even after you fix their problem they refuse to stop spreading the rumors.
> Once it's fixed, I want all communication forward looking. Some percent of people are flat out insane, incompetent, or just assholes. Sometimes you have to lock the thread in order to stop a conversation about something that is already fixed.
> Large scale customer bases are just a different beast. Once you experience it, you know what I mean. That doesn't mean Google took the right path - only people with a comprehensive perspective can evaluate that, and I'm just some idiot on a forum who knows nothing about the specifics.
That's what any normal company that is used to dealing with customers would do, but Google isn't that. Google is entirely unacquainted with the concept of "customer relations". I'm half convinced that Google-the-business sees customers as convenient peasants who purchase whatever it deigns to sell. The idea of supporting customers is basically antithetical to them: look at all the stories of people trying to get support for GCP as a great case in point.
Google, institutionally, still seems to have not realized that many users are now customers and not (or at least, in addition to) product. If you are receiving money from somebody, they are your customer, and you have a responsibility to provide at least vaguely good service.
I feel like Google is having difficulty internalizing the concept of maintenance, of keeping what was theirs continuously theirs through recurring interventions. "Feature complete and not obsolete" seems like a common theme among some of the killed-by-Google products, in line with this.
> Google, institutionally, still seems to have not realized that many users are now customers and not (or at least, in addition to) product. If you are receiving money from somebody, they are your customer, and you have a responsibility to provide at least vaguely good service.
I still use Google reflexively, but its quality is so low these days...for most of my searches ("[local business I don't know the URL of]", "[foo] wikipedia", "[foo] reddit", "[some library] docs") I'm pretty sure Bing or DDG would work fine.
YouTube is uncontested. Let me throw in a couple of others: for desktop maps, AFAIK Google is still tops (on my phone I use Apple Maps and it's...fine). For free email, AFAIK GMail is still the standard (despite various UI changes over the years that have made it worse).
I cannot tell you how many times I've seen a video I wanted to watch on Youtube and had it disappear on reloading the page/clicking another video and going back. Even opening in another tab doesn't work sometimes? (It registers a regular click rather than ctrl-click and loads the link normally rather than in a new tab.) Don't get me started on the way it takes control of your keyboard input on video pages in a completely unintuitive way. Will the right arrow track forward or raise the volume? Depends on YT's mood, I guess.
It survives solely on network effect. I can't wait for a competitor, and several are waiting in the wings.
> for desktop maps, AFAIK Google is still tops (on my phone I use Apple Maps and it's...fine)
I use Apple Maps on desktop as well. In my area (Denver) they seem to have about the same number of unique issues, but
1. Mobile Google Maps is so, so terrible in terms of screen-space utilization, look and feel, and also its behavior doing turn-by-turn. I've never been anywhere close to as angry at my iPhone as when Google Maps confused me into a parking lot and the low-quality synthetic voice got nearly a minute behind in micromanaging my way out.
2. Desktop Google Maps is a lot less usable if you refuse to give the entirety of Google.com precise location access. The move from maps.google.com -> google.com/maps marked the last time I asked it for directions.
Not sure if it's a US vs Norway thing but Google Maps keeps confusing me by saying "keep left at the fork". Like what fork?! Oh, you mean don't exit the freeway...
Google search has basically zero objective advantage over other free alternatives. The only thing keeping it so dominant is the public's lack of reason to experiment with others.
I disagree on search: I use DDG most of the time, and it uses Bing. I'll turn to Google (using "!g" in DDG) occasionally if I'm not finding what I want, but for most searches, DDG is sufficient.
However, I'll also say that sometimes the alternatives to Google aren't very good, though they do of course exist. Anything that requires an Apple device is a no-go for me, for instance, so things like Apple Maps are out. Google Maps for me is a must-have; I use it for navigating Tokyo on a daily basis and something like OsmAnd isn't going to work here at all (Google Maps finds businesses, tells me when they're open, and tells me exactly how to get there on public transit). Google Docs is pretty useful for some things too; I don't use it for anything too important (I use LibreOffice at home for that), but for something I want to be able to access from my phone or work computer, it's great. The competition seems to be MS Office 365, and I'm not going to use that: I hate MS and I see no reason to pay for a subscription service here when Google's free offering is fine. Google Calendar is really useful too, and lets me share my calendar with others easily.
Comcast is truly evil and horrible, but since the US loves monopolies and lets companies like Comcast establish local/regional monopolies, people there are stuck with them. In better-run countries like where I now live, this doesn't happen, because there's tons of competition for internet service.
YouTube really is pretty close to a monopoly though; it's not like you can just go to Vimeo and find the same videos.
Oh they don't care, because people keep paying or they make money from the customer data they sell to others. At my employer we just switched away from Google because they decided to double our monthly bill when we didn't want to enter into a contract. We could never reach our account manager or anyone at support who was helpful; they are just too big to care now.
We are still dealing with stupid Google issues, like how any time our mail relay sends mail to Gmail servers it's 50/50 whether they decide to block it, claiming we aren't authenticated properly even though our SPF record is indeed valid.
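For what it's worth, the first thing I do when that happens is pull the TXT records myself to confirm what Gmail should be seeing. A minimal sketch, assuming dnspython is installed and using a placeholder domain (it only confirms an SPF record is published, not how Gmail evaluates it):

    import dns.resolver  # pip install dnspython

    def spf_record(domain: str):
        """Return the first TXT record that looks like SPF, or None."""
        for rdata in dns.resolver.resolve(domain, "TXT"):
            txt = b"".join(rdata.strings).decode()
            if txt.startswith("v=spf1"):
                return txt
        return None

    print(spf_record("ourdomain.example"))  # placeholder, not our real domain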
And this is why no one should have anything to do with Google. If the last fifteen years haven't taught people that, well, there's just no hope for you. For everyone else: Stay away from their offerings as if they were the plague.
It's amazing how you can say as much for almost every product they have. Especially as they get more ambitious. Google Fiber. Sidewalk Toronto. The list goes on. Why believe anything they pitch to you when they've proven time and time again that they aren't likely to deliver as promised, or even at all?
It is weird, it's off-putting, and it does come off as arrogant. Someone has convinced someone else that this lack of interaction is saving them something, or from something. I think it costs them, but I have no data, so it's pure speculation. I know I wouldn't consider them for any serious service except maybe email at this point.
I meant more morally than de facto. As in the buyer of that service, I expect support (and will move to O365 if it is unavailable.) Though in a perfect world there would be more than 2 choices...
To be honest, I have gotten pretty good support from GSuite, though it was fixing Google's own incompetence. It turns out that if you bought GSuite when prompted while registering your business' domain w/ Google Domains, that was a special GSuite locked out of certain features. Because reasons. It took contacting support to fix this. In a very Googley way, it was clear that many companies had hit this problem, and they'd built tools to fix it. They of course didn't stop the problem, but they did have a well-orchestrated process to fix it...
> I don't understand what's going on at Google. If actual data loss occurred, trying to pretend it didn't happen is never the answer.
They're not pretending it didn't happen though? As per the article they acknowledged it and published a help center article on it. They named the software versions affected (notifying the affected users seems impossible, since the entire problem was that the data had not been synced). Following the links in the help center article, during the incident they posted in a pinned article in the support forum (multiple times) on how to avoid triggering the bug and how to avoid making it worse.
That's pretty much what you wanted to see except for a blog post with an RCA, no?
> Or if the problem really is solely with files that should have been uploaded but weren't, and nothing from the cloud ever actually got deleted, then explain and justify that as well -- because that's not what people are saying.
So the suggestion is that in addition to the bug that they acknowledged, there's a totally different one that appeared at the same time affecting totally different functionality and with different symptoms, and that they're covering up despite not covering up the other bug? That seems like a complicated explanation when there's an obvious and simpler explanation around.
That's also the kind of thing that's pretty much impossible to prove categorically, let alone communicate the proof in a way that's understandable to the average user. What are you going to say? "We've checked really hard and can't confirm the reports"?
(I mean, I guess it's possible to do it. Collect 100 credible reports of files going missing that can reliably identify the supposedly missing file by name and creation date rather than say that it was probably a .doc file sometime in March. Then do an analysis on e.g. audit logs on what the reality is. How many files were never there at all? How many were explicitly deleted by the user? How many were accidentally uploaded to a user's work account rather than personal account? How many were still in the drive, and the user just couldn't find them? And yes, once you've exhausted all the possibilities, how many disappeared without a trace? Then publish the statistics. But while doing such an investigation privately to make sure whether there is a problem makes sense, publishing the results seems like a stunningly bad PR strategy even if no data was indeed lost.)
From that help center article: "If you're among the small subset of Drive for desktop users on version 84 who experienced issues accessing local files that had yet to be synced to Drive"
Is "issues accessing local files" how anyone would describe deleting a user's local files?
Your question "why" probably comes from thinking Google should know better. However, I am reminded of the post from a few weeks ago by the person who left Google after decades of working there, who claims the culture has changed and attracted more incompetent corporate, political types.
EDIT: found it; not decades, but almost two - he left after working there for 18 years.
For Google, either the problem is small enough that they're encouraging individuals to file small-claims cases they'll gladly hand a check for, or it's big enough that Google doesn't want to document shooting themselves in the foot.
I also think there's a long tail of Beavises out there, which is why you need to lock things down to stop the rumors.
Genuinely, I would love an answer from someone that believes in both "never talk to the cops" and "corporations should be open about their fuck-ups" to articulate how they reconcile both concepts. For me they're the same side of the coin, but I'd enjoy being convinced otherwise.
I don't understand the point of a Company hosting forums that aren't staffed by the Company. Well, I do. It doesn't help users at all. The only feature is the Company's censorship. It's a hostile social hack on the user base, who should be using a different forum host.
When I ask people if they have out-of-cloud backups of their data they look at me as if I'm mad. The cloud can't lose data. Until it does. And then what?
This makes me irate, in a way that’s hard to express.
They are not doing it wrong. What’s the threat model you’re saying people are not accounting for? eu-west-1 getting nuked?
“AWS” also isn’t a monolithic entity: for all intents and purposes AWS Backup is a separate vendor from AWS RDS, just with a unified billing and management pane.
I’d rather use that vendor, integrated with my AWS resources and managed with the same access controls, encryption, billing, etc that I use for everything else than ship it off to a random third party and maintain that connection.
Because the risk factor of multiple, isolated and separate AWS teams running different products with different infra having simultaneous large data loss incidents boils down to “nukes”.
So maybe people get irate in the same way as they might do with people who say stuff like “the cloud is a scam, why use it when you can host things on servers in a closet?”
Interesting comment. So, what would you do if you were called by someone whose cloud account got hacked and wiped? Or whose cloud account got closed because of 'unspecified suspicious activity on your account'? Of course that never happens, right?
Also while I have heard many horror stories about suspicious account closures for many vendors, AWS at least currently seems to be on top of their game.
I fully agree with your argument, just adding colour to it
I feel like we haven't yet come up with the right models and vocabulary to express these kinds of risks, or maybe I just haven't heard it.
Especially because your credentials to AWS are likely stored somewhere that would also store your credentials to your separate backup vendor -- so why would two cloud vendors provide any more protection than two products within one cloud vendor?
It's clear to me how to model hardware failure, or accidental data loss. It's not clear how to model "hackers gain access to one set of credentials but not another" or "provider closes your account and won't give you your data".
I'll give you an example. An issue with a single data centre can cause multiple AWS APIs in the region to fail across services (I've seen this several times).
At that point, if you're only in a single region, you're stuffed. Networking may be affected and you want to spin up in another region, but RDS APIs fail so you can't copy over your backups, for some reason AMIs won't copy between regions and the R53 control plane APIs fail too, so even if you could bring up a replacement you can't update DNS anyway...
These can be planned for and mitigations put in place in advance, but that involves similar reasoning as might lead you to decide multi-cloud to be a safer option.
I think it is generally fine to not have out-of-cloud backups of data as long as you still have the primary copy locally (as opposed to data being only in the cloud), so you're screwed only if the cloud provider loses data the same day as you happen to lose your local data.
I like the expression, "Your data doesn't exist unless it exists in three places, and at least one of those places should be under your direct control."
If a copy is physically protected (probably good to have one that is) then it could potentially be unencrypted. Restic won't let you have unencrypted backups (a reasonable design decision to prevent accidentally unencrypted backups), but Borg will.
I also keep my important passwords written down on a piece of paper in a fire safe. This includes my borg and tarsnap keys.
Encryption is a layer to prevent disclosure in cases where offsite vaulting fails to keep data confidential. Backup passwords can be split amongst trusted individuals (N-person keying) so that no one person can access the contents themselves, and there is no single point of failure when multiple people hold the same part of the password.
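To make the N-person keying idea concrete, here's a toy sketch of an XOR split (my own illustration, not any particular product's scheme): each holder gets a random-looking share, and only all N shares together reconstruct the passphrase.

    import secrets
    from functools import reduce

    def xor_bytes(a: bytes, b: bytes) -> bytes:
        return bytes(x ^ y for x, y in zip(a, b))

    def split(secret: bytes, n: int) -> list[bytes]:
        """Split `secret` into n shares; all n are required to recover it."""
        shares = [secrets.token_bytes(len(secret)) for _ in range(n - 1)]
        shares.append(reduce(xor_bytes, shares, secret))
        return shares

    def combine(shares: list[bytes]) -> bytes:
        return reduce(xor_bytes, shares)

    parts = split(b"correct horse battery staple", 3)
    assert combine(parts) == b"correct horse battery staple"

If you want k-of-n rather than all-of-n, that's Shamir secret sharing, but the all-of-n version is enough to show the idea.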
No, you haven't demonstrated that redundancy will fail. If all 4 copies had the same password, the password was not redundant, and it is the non-redundant component that failed.
I never said they used the same password. If anything, with 4 different passwords to data you access only exceptionally, you're likely to have a bad time. Do you also store the passwords 4 times in 4 separate places?
I'm currently battling their support. Tried opening a Play account to publish an app to the app store but missed an email to verify my identity. Now the link in the email no longer works because "Google couldn't verify your identity" as I've had "too many tries" and my account is restricted meaning I can't publish the app.
Support just repeats the same things back that I've had too many tries and my account is restricted and I can't get a refund.
It's woeful how bad the support is to get such a simple thing sorted out. Don't miss that email if setting up a developer play account!
(If anyone can fix this my developer play account ID is: 7827257533299144892)
I like the text at the bottom of the page if you don't have javascript enabled:
> Hey NoScript peeps, (or other users without Javascript), I dig the way you roll. That's why this page is mostly static and doesn't generate the list dynamically. The only things you're missing are a progressive descent into darkness and a happy face who gets sicker and sicker as you go on. Oh, and there's a total at the bottom, but anyone who uses NoScript can surely count for themselves.
I feel this is increasingly becoming the era of the NAS.
A lot of the time when I go looking for shows or movies, they're no longer offered on the same service, if quite literally at all. Many of my liked YouTube videos are now just [deleted].
Not to mention any data you store in the cloud vanishing when the engineer(s) behind it experience career-altering events.
Most Americans don't have enough data to warrant a dedicated NAS. And even if they do, the cost is a major dealbreaker, $400+ including drives is way more than most would ever consider spending on computer hardware.
It's equivalent to 16 years of paying for Google Drive for me. Personally my ideal setup is just using google drive, and then dumping a backup from Google Takeout to a usb ssd every now and then.
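Rough math behind that figure (assuming the ~$400 NAS number above and the ~$2/month storage tier I'm on; your tier will differ):

    nas_cost = 400.0        # NAS + drives, rough figure from the comment above
    drive_per_month = 2.08  # assumed ~$25/year storage tier
    print(nas_cost / (drive_per_month * 12))  # ~16 years to break even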
So last time I saw this discussed on HN [1], people said there were tape backups of all these files and that they would be fully recovered. Some people said they worked adjacent to this tape backup system. I'm inclined to believe they weren't lying and that the tape backup system does exist.
So why weren't they able to recover from tape? Is the tape backup more limited than people reported, and this data wasn't backed up? Was it just too difficult and expensive to scan the tapes and decide which was the canonical version of each file?
Are we sure this is limited to Drive? The Ars article mentions that some users reportedly lost data without using the desktop app at all, which seems to imply that (one of?) the bugs was inside Google's infra.
I wonder if they might have suffered some invisible data corruption issue in Colossus or whatever they use now, and the effects on Drive just happen to be the most visible. Though presumably whatever broke wasn't part of GCP or we would have noticed by now, right?
> Are we sure this is limited to Drive? The Ars article mentions that some users reportedly lost data without using the desktop app at all, which seems to imply that (one of?) the bugs was inside Google's infra.
Seems much more plausible that there's something wrong with the backend code for google drive (the product).
Hard to tell. Google does lose data, but they aren't going to own up to it.
Anecdotally, I use a different Google Drive client (Syncdocs) and haven't lost anything. However, I don't know what % of users are affected.
Seriously. The system in question is a distributed one with multiple participants and the resolution, which is gated to specific versions of the Windows sync client, strongly suggests the bug is in that client.
The Ars article mentions people losing things in Sheets where they didn't have a Windows client at all, though. Of course, we have inadequate information to determine whether it's the same issue.
I had a problem using it just yesterday. I uploaded an image using the phone app, then tried to find it and download it on my desktop. It wasn’t in the Recent list. I found it by searching, tried to download it and the website told me “You have selected a file that does not exist”.
I can confirm that some files were somehow deleted in March. I suspect the problem is related to the announcement that 5M files is no longer the limit [1]. I was thinking that some of our team members had messed up, but now I feel bad.
Haven’t used these “cloud” storage systems to store anything critical for a looong time. I was burned a long time ago when I had college assignments stored on a university backup system (I think they outsourced it to MS?).
Lost a half semester of work because a stupid sysadmin (or several) at the university nuked the primary blob storage (instead of months-old records) in an attempt to save $$$.
The more you look at Google products, the more the illusion of their "eliteness" evaporates.
Shitty branding. Shitty UI. Shitty functional design. And most of all, shitty attitude.
I had the luxury of telling a Google recruiter thanks, but NOPE. I had friends who are highly competent programmers with FAANG resumes treated like trash by Google interviewers, and another who actually worked there who gave detailed accounts of a toxic culture that pitted peers against each other.
Google sucks. They're coasting on their entrenchment and that's that.
Whether or not Google truly 'lost data', any backup is only as good as the last verification done by the data's owner.
Simply putting Syncing Apps with varying degrees of closed source in place long term, then expecting them to just keep working perfectly in all circumstances, through all possible future PEBKAC (whether by the user or the corporation) - to the point of relying on them as a person's long-term and only data backup - is a false expectation that the masses have bought into as part of the whole mobile/cloud world hype.
Turns out "Cloud" is someone else's computer, with the same possible issues. That said, I would be very surprised if the same ever happened on S3. Not saying it won't, but if the specs are to be believed, it seems pretty rock solid.
On a separate note, I really hope the industry either makes cloud compute cheaper or starts using better-priced competitors/on-premises options soon; the big 3 clouds are crazy expensive IMO.
Interestingly: based on their own figures, S3 loses some data every year. They store 100 trillion objects and they say they are 99.999999999% reliable on an annual basis. So every year they lose some objects, but the chance that these are your objects is vanishingly small.
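Back-of-the-envelope, under the naive assumption that every object independently has a 1 - 0.99999999999 chance of being lost in a given year:

    objects = 100e12            # ~100 trillion objects, per the figure above
    durability = 0.99999999999  # "eleven nines", annual
    print(objects * (1 - durability))  # ~1000 objects lost per year, in expectation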
Nitpick: they actually make a distinction between availability and durability. Availability means "you can get your data." Durability means "we still have your data." Only durability has 11 9's.
Be careful with projecting these numbers forward. As far as I can tell, there’s no SLA for durability, only for 99.9% uptime (you can get 10% off your bill if this is violated)
> Be careful with projecting these numbers forward. As far as I can tell, there’s no SLA for durability, only for 99.9% uptime (you can get 10% off your bill if this is violated)
> Designed to provide 99.999999999% durability and 99.99% availability of objects over a given year.
Statistics can do that. "If we were only X% durable, what are the odds we would have made it so far with no data loss?"
Then pick a threshold p-value you like, and that X% will be a reasonably confident estimate.
In other words, if you think you have 99.999999999% but you really have 99.9%, nature will correct you rather quickly. If reality persistently lets you get away with it, you can counterfactually deduce you must have more nines.
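To put toy numbers on that (my own sketch, assuming independent per-object losses and, say, a single bucket of a million objects):

    import math

    def p_no_loss(durability: float, n: int = 1_000_000) -> float:
        """Chance that none of n objects is lost in a year, assuming each is
        lost independently with probability (1 - durability)."""
        return math.exp(n * math.log(durability))

    print(p_no_loss(0.999))          # ~0: at three nines you'd notice almost immediately
    print(p_no_loss(0.99999999999))  # ~0.99999: at eleven nines a loss is a freak event

A fleet the size of S3's that claimed eleven nines but actually delivered three would be drowning in loss reports within days.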
You’d have to randomly sample the set of all objects to do that. But I’m not aware of a database technique that lets you pick randomly. So I’m not sure how you’d validate the objects exist in practice. Checking 100 trillion objects seems expensive and time consuming. Doing it in flight while objects are being accessed is free but also heavily biased (ie those are files you’d be more likely to repair in the first place anyway)
There's different sources of failures, and when you start claiming 11 nines of durability, you really have to go and think about all the different possible causes of failure. So it's not just a flat continuous rate, it's a sum of many different things.
11 nines is such a big number, you start modelling natural catastrophes, floods, solar flares, very unlikely discrete events as well.
Dropbox lost a shitload of my files, which were due to a client. They never figured out how, and the files reappeared a week later. In the meantime, I had to sign up for Google Drive and deliver the files late... looking unprofessional.
Don't rely on "the cloud" unless you absolutely have to.
I had an issue the other day where I deleted a file through the web interface and saved a file with the same name using the Windows client. The file disappeared and was never uploaded. Would that be a manifestation of the problem, or is the problem a "files at rest are disappearing" issue?
Working at Google, I had my CLs modified without any changes from my end.
Managers would get mad at me for not delivering things on time, when all I could say was "the cat ate my homework." It seems like whoever sabotaged my career got to the rest of Google too.
The worst thing was knowing that while my code changes were being sabotaged, everybody else's seemed to be fine.
> Working at Google, I had my CLs modified without any changes from my end.
You can't make an accusation like that without furnishing some sort of evidence/documentation. Even if you're not trying to post this for internet points, it'd still be worth the effort of collecting the evidence, because you can use it later to convince your coworkers/managers that you're not at fault.
What should I have done, take screenshots every 5 minutes?
Your response seems so inept. I'm posting on the internet so that, in case anyone has to deal with something similar, there's a trail of evidence. Downvoting would be the same as agreeing with the Nazis.
I don’t expect people to believe me, just putting my experience into the space.
> What should I have done, take screenshots every 5 minutes?
1. OBS allows you to take 60 screenshots a second (hint: a video is a sequence of images) with zero effort
2. Can't you take a single screenshot when you're done and sending it off for review? Or at the end of each day if you're working on a massive change?
> Your response seems so inept. I'm posting on the internet so that, in case anyone has to deal with something similar, there's a trail of evidence. Downvoting would be the same as agreeing with the Nazis.
A random off topic comment in a random hacker news thread isn't going to help anyone. "My dog ate my homework and btw there's this anonymous commenter on this thread about google drive data loss that says the same thing" is only marginally more credible than "my dog ate my homework". If anything the former makes you look worse. Why were you wasting time surfing hacker news rather than coding? Or at the very least gathering actual evidence that spooky things are happening with the VCS?
> CL: Stands for "changelist", which means one self-contained change that has been submitted to version control or which is undergoing code review. Other organizations often call this a "change", "patch", or "pull-request".
You should consider adding IPD specifically to the HN guidelines. It's not mentioned.
It seems you've sometimes considered it a kind of personal attack, but I certainly didn't intend it to be one here - I mentioned it out of genuine concern for the poster's well-being.
I believe you! The problem is that intent is invisible and what really matters in the threads is not intent, but effects. The IPD thing just has reliably bad effects, so it's best to avoid.
We can't add things like that to the guidelines because there are too many of them. I posted a bit about this yesterday: https://news.ycombinator.com/item?id=38620847 - if it helps at all.
> Think of it this way: two sources of persistent data (your memories, and the source control system) are in conflict. You'd want to audit both, wouldn't you?
Going to see a psychiatrist for $300+/hr seems a bit extreme. A better first step would be to document the phenomenon, so if it occurs again you can convince others that you're not at fault. For instance, having a screen recorder on 24/7 so you can rewind and see what actions you took. Or, if you're paranoid that your machine is backdoored, printing off screenshots of the diffs in the CL, reviewing the hard copy to verify they're good, and then putting it in a safe place.
that's less extreme than seeing a psychiatrist? interesting.
the paranoia was part of why I suggested the psych, though. tampering with a submitted CL would require access to the SCM's database, then to rewrite history, as well as changing the code. that requires a conspiracy. and for what? to undermine someone's career? I could see a major intelligence agency pulling it off to sneak in a backdoor or something, not some coworker trying to get ahead.