
Not sure if this is my own personal bias, but I could have sworn this issue was affecting traffic for longer.

My company wasn’t affected, so I wasn’t paying close attention to it. I was surprised to read that services were unreachable for only ~90 minutes.

Anyone else have corroborating anecdata?




As a Googler privy to the internal postmortem: as stated in the public postmortem, all traffic was restored within 33 minutes of the problem appearing. The bug was very on/off: at 09:35 PT a corrupted configuration stopped all traffic ~immediately (usually double-digit seconds of propagation delay). At 10:08 PT it was verified that the whole service was running the configuration from before the corruption.

The >1h duration refers to the inability to change your load-balancing configuration.


Maybe you're thinking of this incident? https://status.cloud.google.com/incidents/1xkAB1KmLrh5g3v9ZE.... It was a few days earlier and took almost 2 hours.


We received errors at least 45 minutes before their stated time. :-/


Then you have been hit by some other issue.


It was definitely more than the 404s they are claiming. The Go Playground was returning 503s.


Which could easily have happened because it itself received a 404 from something upstream and couldn't handle it.
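
For what it's worth, here's a minimal, hypothetical Go sketch (not the Playground's actual code; backend.example.com and the handler are made up) of how a frontend handler can surface an unexpected upstream 404 as a 503 to its own clients:

    package main

    import (
        "io"
        "log"
        "net/http"
    )

    // backendURL is a hypothetical upstream dependency.
    const backendURL = "https://backend.example.com/snippets"

    func handler(w http.ResponseWriter, r *http.Request) {
        resp, err := http.Get(backendURL)
        if err != nil {
            http.Error(w, "service unavailable", http.StatusServiceUnavailable)
            return
        }
        defer resp.Body.Close()

        // Any unexpected upstream status (including a 404 caused by a bad
        // load-balancer config) is treated as "backend unhealthy" here, so
        // the end user only ever sees a 503.
        if resp.StatusCode != http.StatusOK {
            http.Error(w, "service unavailable", http.StatusServiceUnavailable)
            return
        }
        io.Copy(w, resp.Body)
    }

    func main() {
        http.HandleFunc("/", handler)
        log.Fatal(http.ListenAndServe(":8080", nil))
    }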



