
Gosh, this was so scary... I thought someone had hacked in and deleted everything...

I hope they come back. This is still pretty scary




Same, my manager called me and said "everything is down".

So I wandered over to my Firebase console, and no database was loading. Thank god for Twitter and the other people reporting the same issue, or I would have thought for sure we'd been hacked.

I hope this is a good wake-up call for everyone. I know I'm going to think more about how we do backups and fail-safes.
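For what it's worth, here's the kind of fail-safe I mean, as a minimal Python sketch (the host, database, and bucket names are all hypothetical): dump the database straight from the instance and copy it to a multi-region bucket, so a single regional outage can't take out both the data and its backups.

    # Minimal backup sketch; all names are hypothetical.
    import datetime
    import subprocess

    DB_HOST = "10.0.0.5"        # hypothetical instance IP
    DB_NAME = "appdb"           # hypothetical database name
    BUCKET = "gs://my-backups"  # hypothetical multi-region GCS bucket

    def backup():
        stamp = datetime.datetime.utcnow().strftime("%Y%m%dT%H%M%SZ")
        dump_file = f"/tmp/{DB_NAME}-{stamp}.sql"
        # pg_dump talks to the instance directly, so it keeps working
        # even when the console's list API is down.
        subprocess.run(
            ["pg_dump", "-h", DB_HOST, "-d", DB_NAME, "-f", dump_file],
            check=True,
        )
        # Store the dump outside the instance's failure domain.
        subprocess.run(["gsutil", "cp", dump_file, BUCKET], check=True)

    if __name__ == "__main__":
        backup()

Run it from cron (or Cloud Scheduler) and your restores don't depend on any one region being healthy.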


[I am the Cloud SQL tech lead]

This is a networking issue, and your data is safe. Cloud SQL stores instance metadata regionally, so the metadata shares a failure domain with the data it describes. When the region is down or unreachable, instances are missing from list results, but that says nothing about the instance's availability from within the region.
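To illustrate the distinction with a minimal sketch (the IP and port here are hypothetical): listing instances is a control-plane call that can fail in an outage like this, while the data plane can still be probed by connecting to the instance directly:

    # Probe the data plane directly; the list API failing
    # doesn't mean this connection will.
    import socket

    INSTANCE_IP = "10.0.0.5"  # hypothetical private IP of a Cloud SQL instance
    PORT = 5432               # Postgres default port

    def instance_reachable(host, port, timeout=3.0):
        try:
            with socket.create_connection((host, port), timeout=timeout):
                return True
        except OSError:
            return False

    print(instance_reachable(INSTANCE_IP, PORT))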


That's good to know. What confuses me is why they're saying "We continue to experience high levels of network congestion in the eastern USA", when I'm in us-west2 (Los Angeles) and neither my Cloud SQL instances nor my k8s cluster is showing up or contactable...


Same. I was thinking, oh, my db cluster must be having trouble recovering. Couldn't get any response through kubectl. Logged in to the cloud console and it looks all brand new, like I have no clusters set up at all.

Of course, this is 2 weeks after switching everything over from AWS.


And here I thought I was having a bad day with Google Play not loading


My VM instances are all still there; I can even log in via SSH from the Compute Engine tab. Looks like they got rebooted 15 minutes ago. I just restarted some processes, but I lost about 12 hours of computing progress. I'm guessing it's going to be hard to get a refund...
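If it saves anyone else 12 hours next time: a rough checkpointing sketch in Python (the checkpoint path and the work loop are hypothetical stand-ins), so a surprise reboot only costs the last interval of work.

    # Periodic checkpointing so a reboot loses minutes, not hours.
    import os
    import pickle

    CHECKPOINT = "/var/tmp/job.ckpt"  # hypothetical checkpoint path

    def load_state():
        if os.path.exists(CHECKPOINT):
            with open(CHECKPOINT, "rb") as f:
                return pickle.load(f)
        return {"step": 0, "result": 0}  # fresh start

    def save_state(state):
        tmp = CHECKPOINT + ".tmp"
        with open(tmp, "wb") as f:
            pickle.dump(state, f)
        os.replace(tmp, CHECKPOINT)  # atomic swap so a crash mid-write can't corrupt it

    state = load_state()
    for step in range(state["step"], 1_000_000):
        state["result"] += step  # stand-in for the real computation
        state["step"] = step + 1
        if step % 10_000 == 0:
            save_state(state)
    save_state(state)

The temp-file-plus-os.replace dance matters: if the VM dies mid-write, you still have the previous valid checkpoint on disk.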


You'll get a 25 percent refund of the month's costs if you ask support.



