Disclosure: I work on Google Cloud (but disclaimer, I'm on vacation and so not much use to you!).

We're having what appears to be a serious networking outage. It's disrupting everything, including unfortunately the tooling we usually use to communicate across the company about outages.

There are backup plans, of course, but I wanted to at least come here to say: you're not crazy, nothing is lost (to those concerns downthread), but there is serious packet loss at the least. You'll have to wait for someone actually involved in the incident to say more.




To clarify something: this outage doesn’t appear to be global, but it is hitting us particularly hard in parts of the US. So for the folks with working VMs in Mumbai, you’re not crazy. But for everyone with sadness in us-central1, the team is on it.


It seems global to me. This is really strange compared to AWS. I don't remember an outage there (other than S3) impacting instances or networking globally.


You obviously don't recall the early years of AWS. Half of the internet would go down for hours.


Back when S3 failures would take down Reddit and parts of Twitter... Netflix survived because they had additional availability zones. I can remember when the bigger names started moving more stuff to their own data centers.

AWS tries to lock people in to specific services now, which makes it really difficult to migrate. It also takes a while before you get to the tipping point where hosting your own is more financially viable... and then if you try migrating, you're stuck using so many of their services you can't even do cost comparisons.


Netflix actually added the additional AZs because of a prior outage that did take them down.

"After a 2012 storm-related power outage at Amazon during which Netflix suffered through three hours of downtime, a Netflix engineer noted that the company had begun to work with Amazon to eliminate “single points of failure that cause region-wide outages.” They understood it was the company’s responsibility to ensure Netflix was available to entertain their customers no matter what. It would not suffice to blame their cloud provider when someone could not relax and watch a movie at the end of a long day."

https://www.networkworld.com/article/3178076/why-netflix-did...


We went multi-region as a result of the 2012 incident. Source: I now manage the team responsible for performing regional evacuations (shifting traffic and scaling the savior regions).


That sounds fascinating! How often does your team have to leap into action?


We don’t usually discuss the frequency of unplanned failovers, but I will tell you that we do a planned failover at least every two weeks. The team also uses traffic shaping to perform whole system load tests with production traffic, which happens quarterly.


Do you do any chaos testing? Seems like it would slot right in there.


I'd say yes. I heard about this tool just a week ago at a developer conference.

https://github.com/Netflix/chaosmonkey


Netflix was a pioneer of chaos testing, right? https://en.m.wikipedia.org/wiki/Chaos_engineering



They invented the term, so probably yes :)


I think some Google engineers published a free ebook on service reliability and uptime guarantees. Seemingly counterintuitively, scheduling downtime without other teams’ prior knowledge encourages teams to handle outages properly and reduce single points of failure, among other things.


Service Reliability Engineering is on O'Reilly press. It's a good book. Up there with ZeroMQ and Data-Intensive Applications as maybe the best three books from O'Reilly in the past ten years.


Derp, Site Reliability Engineering.

https://landing.google.com/sre/books/


I think you’re misremembering about Twitter, which still doesn’t use AWS except for data analytics and cold storage last I heard (2 months ago).


Avatars were hosted on S3 for a long time, IIRC.


I am not sure if a single S3 outage pushed any big names into their own "datacenter". S3 still holds the world record for reliability, and you can't match it with in-house solutions. Prove me otherwise: I would love to hear about a solution that has the same durability, availability and scalability as S3.

For the downvoters, please just link here the proof if you disagree.

Here are the S3 numbers: https://aws.amazon.com/s3/sla/


It's not so much AWS vs. in-house, but AWS (or GCP/DO/etc.) vs. multi/hybrid solutions, the latter of which would presumably have lower downtime.


I don't see why multi/hybrid would have lower downtime. All cloud providers as far as I know (though I know mostly AWS) already have their services in multiple data centers and their endpoints in multiple regions. So if you make yourself use more than one of their AZs and Regions, you would be just as multi as with your own data center.


Using a single cloud provider with a multiple region setup won't protect you from some issues in their networking infrastructure, as the subject of this thread supposedly shows.

Although I guess depending on how your own infrastructure is set up, even a multi-cloud-provider setup won't save you from a network outage like the current Google Cloud one.


Hum, I'm not an expert on Google cloud, but for AWS, regions are completely independent and run their own networking infrastructure. So if you really wanted to tolerate a region infrastructure failure, you could design your app to fail over to another region. There shouldn't be any single point of failure between the regions, at least as far as I know.


Why would you think that self-managed has lower downtime than AWS using multiple datacenters/regions?


Actually, I imagine that if you could go multi-regional then your self-managed solution may be directly competitive in terms of uptime. The idea that in-house can't be multi-regional is a bit old fashioned in 2019.


For several reasons, most notably: staff, build quality, standards, and knowledge of building extremely reliable datacenters. Most of the people who are the most knowledgeable about datacenters also happen to be working for cloud vendors. On top of that: software. Writing reliable software at scale is a challenge.


Multi/hybrid means you use both self managed and AWS datacenters.


Cannot challenge with your own inhouse solutions, you say?

Challenge Accepted... and defeated: https://blogs.dropbox.com/tech/2016/03/magic-pocket-infrastr...

but to be fair, storage is core to Dropbox's business... this is not true for most companies.

disclaimer: I work for Dropbox, though not on Magic Pocket.


> For the downvoters, please just link here the proof if you disagree.

> Here are the S3 numbers: https://aws.amazon.com/s3/sla/

99.9%

https://azure.microsoft.com/en-au/support/legal/sla/storage/...

99.99%


>> Here are the S3 numbers: https://aws.amazon.com/s3/sla/

> 99.9%

(single-region)

There doesn't seem to be an SLA on S3-cross-region-replication configurations, but I am not aware of a multi-region S3 (read) outage, ever.

> https://azure.microsoft.com/en-au/support/legal/sla/storage/....

> 99.99%

99.99% is for "Read Access-Geo Redundant Storage (RA-GRS)"

Their equivalent SLA is the same (99.9% for "Locally Redundant Storage (LRS), Zone Redundant Storage (ZRS), and Geo Redundant Storage (GRS) Accounts.").


Azure is a cloud solution. The thread is about how a random datacenter with a random solution is better than S3.


Wow, he’s comparing the storage SLAs of the two biggest cloud services in the world. Pedantic behavior should hurt.


> For the downvoters, please just link here the proof if you disagree.

https://wasabi.com/


How can they possibly guarantee eleven nines? Considering I’ve never heard of this company and they offer such crazy-sounding improvements over the big three, it feels like there should be a catch.


11 9s isn't uncommon. AWS S3 does 11 9s (up to 16 9s with cross-region replication?) for data durability, too. AFAIK, AWS published papers about their use of formal methods to ensure bugs from other parts of the system didn't creep in to affect durability/availability guarantees: https://blog.acolyer.org/2014/11/24/use-of-formal-methods-at...

This is a pretty neat and concise read on ObjectStorage in-use at BigTech, in case you're interested: https://maisonbisson.com/post/object-storage-prior-art-and-l...


You have to be kidding me. 14 9's is already microseconds a year. Surely below anybody's error bar for whether a service is down or not.

16 9's and AWS should easily last as long as the Great Pyramids without a second's worth of outage.

What a joke


The 16 9's are for durability, not availability. AWS is not saying S3 will never go down; they're saying it will rarely lose your data.


This number is still total bullshit. They could lose a few kb and be above that for centuries


It's not about losing a few kb here and there.

It's about losing entire data centers to massive natural disasters once in a century.


None of the big cloud providers have unrecoverably lost hosted data yet, despite storing vast volumes, so this doesn't seem BS to me.


AWS lost data in Australia a few years ago due to a power outage I believe.


on EBS, not on S3. EBS has much lower durability guarantees


Not losing any data yet doesn't give justification for such absurd numbers


Those numbers probably aren't as absurd as you think. 16 9s is, I think, about 100 bytes lost per exabyte-year of data storage.

There's perhaps the additional asterisk of "and we haven't suffered a catastrophic event that entirely puts us out of business". (Which is maybe only terrorist attacks). Because then you're talking about losing data only when cosmic-ray bitflips happen simultaneously in data centers on different continents, which I'd expect doesn't happen too often.
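(A rough back-of-the-envelope in Python, reading the durability figure loosely as the expected fraction of stored bytes lost per year; that's a simplification of how the providers actually state it, which is per object per year.)

    # back-of-the-envelope only: durability read as the expected
    # fraction of stored bytes lost per year (a simplification)
    EB, TB = 10**18, 10**12   # bytes in an exabyte / terabyte
    print(EB // 10**16)       # 16 nines over an exabyte: ~100 bytes/year
    print(TB // 10**11)       # 11 nines over a terabyte: ~10 bytes/year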


This is for data loss. 11 9s is like 10 bytes lost per terabyte-year or something, which isn't an unreasonable number.


This is why I linked the SLA page which you obviously have not read. There are different numbers for durability and availability.


For data durability? I believe some AWS offerings also have an SLA of eleven 9's of data durability.


11 9s of durability, barely two 9s of availability

I'm sure that's okay if you do bulk processing / time-independent analysis, but don't host production assets on wasabi.


I was asking for numbers on reliability, durability and availability for a service like S3. What does Wasabi have to do with that?


Always in Virginia, because US-east has always been cheaper.


I know a consultant who calls that region us-tirefire-1.


I and some previous coworkers call it the YOLO region.


The only regions that are more expensive than us-east-1 in the States are GovCloud and us-west-1 (Bay Area). Both us-west-2 (Oregon) and us-east-2 (Ohio) are priced the same as us-east-1.


I would probably go with US-EAST-2 just because it's isolated from anything except perhaps a freak tornado, and it's better situated in the eastern US. Latency to/from there should be near optimal for most of the eastern US/Canada population.


One caveat with us-east-2 is that it appears to get new features after us-east-1 and us-west-2. You can view the service support by region here: https://aws.amazon.com/about-aws/global-infrastructure/regio....


Fair point. It depends on what the project is.


And for those of us in GST/HST/VAT land, hosting in the USA saves us some tax expenditures.


How?

At least in EU services bought from overseas are subject to reverse charge, i.e. self-assessment of VAT (Article 196 of https://eur-lex.europa.eu/legal-content/EN/TXT/?uri=CELEX:02... ).

Though note that if you are an EU AWS customer, you are not buying from outside EU, you are buying from Amazon's EU branches regardless of AWS region. If Amazon has a local branch in your country, they charge you VAT as any local company does. Otherwise you buy from an Amazon branch in another EU country, and you again need to self-assess VAT (reverse charge) per Article 196.


My experience is with Canadian HST.

Since AWS built a DC in Canada, I’m paying HST on my Route53 expenses, but not on my S3 charges in non-Canadian DCs.

I’m not an HST registrant (small supplier, or if you’re just using services personally), so there’s nothing to self-assess.

Even if self-assessment was required, you get some deferral on paying (unless you have to remit at time of invoice?).


Makes sense.

I believe it works differently in the EU (i.e. US DCs are taxed): per Article 44, the place of supply of services is the customer's country if the customer has no establishment in the supplier's country.


AWS is registered for Australian GST - they therefore charge GST on all(ish) services[0].

IBM/Softlayer, Rackspace, Google Cloud, Microsoft and I imagine everyone else large enough to count also does, too.

For Australian businesses, at least, being charged GST isn't a problem - they can claim it as an input and get a tax credit[1].

[0] https://aws.amazon.com/tax-help/australia/

[1] https://www.ato.gov.au/Business/GST/Claiming-GST-credits/


You know, normally you still have to pay that tax - just through a reverse charge process


Not the case in Canada if you’re not an HST registrant (non-business or a small enough business where you’re exempt).

Even if you did have to self-assess, better to pay later than right away.


Mostly because those sites were never architected to work across multiple availability zones.


Years ago, when I was playing with AWS in a course on building cloud-hosted services, it was well-known that all the AWS management was hosted out of a single zone, and there were several days we had to cancel class because us-east-1 had an outage. So while technically all our VMs hosted out of other AZs were extant, all our attempts to manage them via the web UI or API were timing out or erroring out.

I understand this is long-since resolved (I haven't tried building a service on Amazon in a couple years, so this isn't personal experience), but centralized failure modes in decentralized systems can persist longer than you might expect.

(Work for Google, not on Cloud or anything related to this outage that I'm aware of, I have no knowledge other than reading the linked outage page.)


> it was well-known that all the AWS management was hosted out of a single zone, and there were several days we had to cancel class because us-east-1 had an outage

Maybe you mean region, because there is no way that AWS tools were ever hosted out of a single zone (of which there are 4 in us-east-1). In fact, as of a few years ago, the web interface wasn’t even a single tool, so it’s unlikely that there was a global outage for all the tools.

And if this was later than 2012, even more unlikely, since Amazon retail was running on EC2 among other services at that point. Any outage would be for a few hours, at most.


Quoting https://docs.aws.amazon.com/general/latest/gr/rande.html

"Some services, such as IAM, do not support Regions; therefore, their endpoints do not include a Region."

There was a partial outage maybe a month and a half ago where our typical AWS Console links didn't work but another region did. My understanding is that if that outage were in us-east-1 then making changes to IAM roles wouldn't have worked.


The original poster said that none of the AWS services are in a single AZ; the quote you referenced says that IAM does not support Regions.

Your quote could mean two things:

- that IAM services are hosted in one region (not one AZ)

And/Or

- that IAM is for the entire account not per region like other services (which is true)


Just this year an issue in us-east-1 caused the console to fail pretty globally.


Quite possibly, it has been a number of years at this point, and I didn't dig out the conversations about it for primary sourcing.


Where are you based? If you’re in the US (or route through the US) and trying to reach our APIs (like storage.googleapis.com), you’ll be having a hard time. Perhaps even if the service you’re trying to reach is, say, a VM in Mumbai.


I am in Brazil, with servers in southamerica. Right now it seems back to normal.


I have an instance in us-west-1 (Oregon) which is up, but an instance in us-west-2 (Los Angeles) which is down. Not sure if that means Oregon is unaffected though.


us-west-1 is Northern California (Bay area). us-west-2 is Oregon (Boardman).


Incorrect. GCE us-west1 is The Dalles, Oregon and us-west2 is Los Angeles.


What I said is correct for AWS. In retrospect I guess the context was a bit ambiguous.

(I will note that I was technically more right in the most obnoxiously pedantic sense since the hyphenation style you used is unique to AWS - `us-west-1` is AWS-style while `us-west1` is GCE-style :P)


EUW doesn't seem to be affected.


My instance in Belgium works fine


Some services are still impacted globally. Gmail over IMAP is unreachable for me. (Edit: gmail web is fine)


+1 - IMAP Gmail is down for me in Australia


Yes, same here in UK (for some hours now).


Quick update from Germany: both YouTube and Gmail appear to work fine.


I’m from the US and in Australia right now. Both me and my friends in the US are experiencing outages across google properties and Snapchat, so it’s pretty global.


Fiber cut? SDN bug that causes traffic to be misdirected? One or more core routers swallowing or corrupting packets?


It seemed to be congestion in the northeastern US.


> including unfortunately the tooling we usually use to communicate across the company about outages.

There's some irony in that.


Edit: and I agree!

I’m not in SRE so I don’t bother with all the backup modes (direct IRC channel, phone lines, “pagers” with backup numbers). I don’t think the networking SRE folks are as impacted in their direct communication, but they are (obviously) not able to get the word out as easily.

Still, it seems reasonable to me to use tooling for most outages that relies on “the network is fine overall”, to optimize for the common case.

Note: the status dashboard now correctly highlights (Edit: with a banner at the top) that multiple things are impacted because Networking. The Networking outage is the root cause.


> the status dashboard now correctly highlights that multiple things are impacted because Networking.

this column of green checkmarks begs to differ: https://i.imgur.com/2TPD9e9.png


This is a person who's trying to help out while on vacation...can we try being more thankful, and not nitpick everything they say?


Thanks! I’ll leave this here as evidence that I should rightfully reduce my days off by 1 :).


The banner at the top. Sorry if that wasn’t clear.


While not exactly google cloud, G suite dashboard seems accurate: https://www.google.com/appsstatus#hl=en&v=status


For me, at least, that was showing as all green for at least 30 minutes.


AWS experienced a major outage a few years ago that couldn't be communicated to customers because it took out the components needed to update the status board. One of those obvious-in-hindsight situations.

Not long after that incident, they migrated it to something that couldn't be affected by any outage. I imagine Google will probably do the same thing after this :)


The status page is the kind of thing you expect to be hosted on a competitor network. It is not dogfooding but it is sensible.

Reminds me of when I was working with a telecoms company. It was a large multinational company and the second largest network in the country I was in at the time.

I was surprised when I noticed all the senior execs were carrying two phones, of which the second was a mobile number on the main competitor (ie the largest network). After a while, I realised that it made sense, as when the shit really hit the fan they could still be reached even when our network had a total outage.


> Not long after that incident, they migrated it to something that couldn't be affected by any outage.

Like the black box on an airplane, if it has 100% uptime why don’t they build the whole thing out of that? ;)


Was just reading it, they made their status page multi-region.


Even more irony: Google+ shown as working fine: https://i.imgur.com/52ACuiY.png


G+ is alive and well for G Suite subscribers, not the general users.


> including unfortunately the tooling we usually use to communicate across the company about outages.

So memegen is down?


I'm guessing this will be part of the next DiRT exercise :-) (DiRT being the disaster recovery exercises that Google runs internally to prepare for this sort of thing)


Well, lots of revenue is lost, that's for sure.


>nothing is lost

except time


Can't use my Nest lock to let guests into my house. I'm pretty sure their infrastructure is hosted in Google Cloud. So yeah... definitely some stuff lost.


You have my honest sympathy because of the difficulties you now suffer through, but it bears emphasizing: this is what you get when you replace what should be a physical product under your control with an Internet-connected service running on third-party servers. IoT as seen on the consumer market is a Bad Idea.


It's a trade-off of risks. Leaving a key under the mat could lead to a security breach.


I am pretty sure there are smart locks that don't rely on an active connection to the cloud. The lock downloads keys when it has a connection, and a smartphone can download keys too. This means they work even if there's no active internet connection at the moment someone tries to open the lock. If the connection was dead the entire time between creating the new key and the person trying to use the lock, it still wouldn't work.

If there aren't locks that work this way, it sure seems like there should be. Using cloud services to enable cool features is great, but if those services aren't designed from the beginning with a fallback for when the internet/cloud isn't live, that's a weakness that is often unwise to leave in place, IMO.


FWIW - The Nest lock in question doesn't rely on an active internet connection to work. If it can't connect, it can still be unlocked using the sets of PINs you can set up for individual users (including setting start/end times and times of day that the codes are active). There's even a set of 9V battery terminals at the bottom in case you forget to change the batteries that power the lock.

This does mean you need to set up a code in advance of people showing up, but it's an under-30-second setup that I've found simpler than unlocking once someone shows up. The cameras dropping offline are a hot mess though, since those have no local storage option.


It may not be worth the complexity to give users the choice. If I were to issue keys to guests this way I would want my revocations to be immediately effective no matter what. Guest keys requiring a working network is a fine trade-off.


You can have this without user intervention - have the lock download an expiration time with the list of allowed guest keys, or have the guest keys public-key signed with metadata like expiration time.

If the cloud is down, revocations aren't going to happen instantly anyway. (Although you might be able to hack up a local WiFi or Bluetooth fallback.)
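A minimal sketch of the second option (guest keys signed with metadata like an expiration), assuming an Ed25519 keypair whose public half is provisioned on the lock at pairing time; the field names and the use of Python's `cryptography` package are illustrative, not any lock vendor's actual API:

    # sketch only: offline-verifiable guest credential with an expiry
    import json, time
    from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey
    from cryptography.exceptions import InvalidSignature

    # homeowner / cloud side: issue a signed guest credential
    issuer_key = Ed25519PrivateKey.generate()
    lock_pubkey = issuer_key.public_key()        # baked into the lock at pairing time

    credential = json.dumps({
        "guest": "visitor-42",                   # illustrative guest id
        "expires": int(time.time()) + 48 * 3600  # valid for 48 hours
    }).encode()
    signature = issuer_key.sign(credential)

    # lock side: works offline, only needs the public key and its own clock
    def lock_accepts(credential, signature):
        try:
            lock_pubkey.verify(signature, credential)
        except InvalidSignature:
            return False
        return json.loads(credential)["expires"] > time.time()

    print(lock_accepts(credential, signature))   # True until the expiry passes

The lock only needs the public key and a reasonably accurate clock, so verification keeps working through a cloud outage; the trade-off, as noted upthread, is that revocation can't be instant while the connection is down.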


So can a compromise of a "smart" lock.

It's a fake trade-off, because you're choosing between lo-tech solution and bad engineering. IoT would work better if you made the "I" part stand for "Intranet", and kept the whole thing a product instead of a service. Alas, this wouldn't support user exploitation.


Yeah, my dream device would be some standard app architecture that could run on consumer routers. You buy the router and it's your family file and print server, and also is the public portal to manage your IoT devices like cameras, locks, thermostats, and lights.


You can get a fair amount of this with a Synology box. Granted, a tool for the reasonably technically savvy and probably not grandma.


I love my Synology, I wish they would expand more into being the controller of the various home IoT devices.


I don't use the features, but I know my Qnap keeps touting IoT so they might be worth checking out as well.

It's also my Plex media server, file server, VPN, I run some containers on there. I used to use it as a print server but my new printer is wireless so I never bothered


Don't be ridiculous. Real alternatives would include P2P between your smart lock and your phone app, or a locally hosted hub device which controls all home automation/IoT, instead of a cloud. If the Internet can still route an "unlock" message from your phone to your lock, why do you require a cloud for it to work?


Or use one of the boxes with combination lock that you can screw onto your wall for holding a physical key. Some are even recommended by insurance companies.


At least you can isolate your security risk to something you have more control over than a random network outage.


Any key commands they have already set up will still work. Nest is pretty good at having network failures fail to a working state. Not being able to actively open the lock over the network is the only change.


One of the reasons why I personally wanted a smart-lock that had BLE support along with a keypad for backup in addition to HomeKit connectivity.


Sure you can, but you'll need to give them your code or the master code. Unless you've enabled Privacy Mode, in which case... I don't know if even the master code would work.


You should have foreseen this when you bought stuff that relies on "the cloud".


Everyone talking about security and not replacing locks with smart locks seems to forget that you can just kick the fucking door down or jimmy a window open.


Or just sawzall a hole in the side of the house...


After you've cut the power, just to be safe? ;)


Except kicking the door down is not particularly scalable or clandestine


Too bad we don't have Google cars yet.


"Cloud Automotive Collision Avoidance and Cloud Automotive Braking services are currently unavailable. Cloud Automotive Acceleration is currently accepting unauthenticated PUT requests. We apologise for any inconvenience caused."


Our algorithms have detected unusual patterns and we have terminated your account as per clause 404 in Terms And Conditions. The vehicle will now stop and you are requested to exit.


Phoenix Arizona residents think otherwise


They weren't wearing Batman t-shirts were they?

http://www.ktvu.com/news/mistaken-identity-nest-locks-out-ho...


I wonder if in the future products will advertise that they work independently (decoupling as a feature).


holy shit lmao. I'm sorry that sucks.


and a nice Sunday afternoon


And lots of sales, in my case.


And the illusion of superiority over non-cloud offerings.


I keep trying to explain to people that our customers don’t care that there is someone to blame; they just want their shit to work. There are advantages to having autonomy when things break.

There’s a fine line, or at least some subtlety, here though. This leads to some interesting conversations when people notice how hard I push back against NIH. You don’t have to be the author to understand and be able to fiddle with a tool's internals. In a pinch you can tinker with things you run yourself.


> I keep trying to explain to people that our customers don’t care that there is someone to blame; they just want their shit to work. There are advantages to having autonomy when things break.

There are also advantages to being part of the herd.

When you are hosted at some non-cloud data center, and they have a problem that takes them offline, your customers notice.

When you are hosted at a giant cloud provider, and they have a problem that takes them offline, your customers might not even notice because your business is just one of dozens of businesses and services they use that aren't working for them.


Of course customers don't care about the root cause. The point of the cloud isn't to have a convenient scapegoat to punt blame to when your business is affected. It's a calculated risk that uptime will be superior compared to running and maintaining your own infrastructure, thus allowing your business to offer an overall better customer experience. Even when big outages like this one are taken into account, it's often a pretty good bet to take.


What does NIH stand for?


Not Invented Here


How come?


The small bare metal hosting company I use for some projects hardly goes down, and when there is an issue, I can actually get a human being on the phone in 2 minutes. Plus, a bare metal server with tons of RAM costs less than a small VM on the big cloud providers.


> a bare metal server with tons of RAM costs less than a small VM on the big cloud providers

Who are you getting this steal of a deal from?


Hetzner is an example. Been using them for years and it's been a solid experience so far. OVH should be able to match them, and there's others, I'm sure.


Hetzner is pretty excellent quality service overall. OVH is a very low-quality service, especially the networking and admin panel.


hetzner.de, online.net, ovh.com, netcup.de for the EU-market.


Anywhere. Really.

Cloud costs roughly 4x more than bare metal for sustained usage (of my workload). Even with the heavy discounts we get for being a large customer it’s still much more expensive. But I guess op-ex > cap-ex.


Lots of responses, and I appreciate them, but I'm specifically looking for a bare metal server with "tons of RAM", that is at the same or lower price point as a google/microsoft/amazon "small" node.

I've never seen any of the providers listed offer "tons of ram" (unless we consider hundreds / low thousands of megabytes to be "tons") at that price point.


I've had pretty good luck with Green House Data's colo service and their cloud offerings. A couple of RUs in the data center can host 1000s of VMs in multiple regions with great connectivity between them.


Care to name names? I've been looking for a small, cheap failover for a moderately low traffic app.


In the US I use Hivelocity. If you want cheapest possible, Hetzner/OVH have deals you can get for _cheap._


I've a question that always stopped me going that route: what happens when a disk or other hardware fails on these servers? Beyond data loss, I mean: physically, what happens? Who carries out the repair, and how long does it take?


For Hetzner you have to monitor your disks and run RAID-1. As soon as you get the first SMART failures you can file a ticket and either replace ASAP or schedule a time. This happened to me a few times in the past years; it has always been just a 15-30m delay after filing the ticket and at most 5 minutes of downtime. You have to get your Linux stuff right though, i.e. booting with a new disk.

If you don't like that you can order a KVM VM with dedicated cores at similar prices and the problem is not yours anymore.
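A minimal sketch of that kind of monitoring, assuming smartmontools and Linux software RAID via mdadm; the device names are illustrative, and you'd run something like this from cron:

    # rough disk-health check for a self-managed box
    import re
    import subprocess

    DISKS = ["/dev/sda", "/dev/sdb"]   # illustrative device names for a RAID-1 pair

    def smart_healthy(disk):
        # `smartctl -H` prints the overall SMART self-assessment (PASSED/FAILED)
        out = subprocess.run(["smartctl", "-H", disk],
                             capture_output=True, text=True).stdout
        return "PASSED" in out

    def raid_degraded():
        # /proc/mdstat shows e.g. [U_] instead of [UU] when a mirror member dropped out
        with open("/proc/mdstat") as f:
            return re.search(r"\[U*_+U*\]", f.read()) is not None

    for disk in DISKS:
        if not smart_healthy(disk):
            print(f"SMART trouble on {disk}: time to file a ticket")
    if raid_degraded():
        print("RAID array degraded: schedule the disk swap")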


Most bare metal providers nowadays contact you just like AWS and say "hey, your hardware is failing, get a new box." Unless it's something exotic, it's usually not a long setup time, and in some cases, just like a VM, it's online in a minute or two.


thanks!


Thanks a million. Those prices look similar to what I've used in the past, it's just been a long time since I've gone shopping for small scale dedicated hosting.


You weren't kidding, 1:10 ratio to what we pay for similar VPS. And guaranteed worldwide lowest price on one of them. Except we get free bandwidth with ours.


There are some who'll argue that the resiliency of cloud providers beats on-prem or self-hosted, and yet they’re down just as much or more (GCP, Azure, and AWS all the same). Don’t take my word for it; search HN for “$provider is down” and observe the frequency of occurrences.

You want velocity for your dev team? You get that. You want better uptime? Your expectations are gonna have a bad time. No need for rapid dev or bursty workloads? You’re lighting money on fire.

Disclaimer: I get paid to move clients to or from the cloud, everyone’s money is green. Opinion above is my own.


Solutions based on third-party butts have essentially two modes: the usual, where everything is smooth, and the bad one, where nothing works and you're shit out of luck - you can't get to your data anymore, because it's in my butt, accessible only through that butt, and arguably not even your data.

With on-prem solutions, you can at least access the physical servers and get your data out to carry on with your day while the infrastructure gets fixed.


Any solution would be based on third parties. The robust solution is either to run your own country, with fuel sources for electricity and an army to defend the datacenters, or to rely on multiple independent infrastructures. I think the latter is less complex.


This is a ridiculous statement. Surely you realise that there is a sliding scale.

You can run your own hardware and pull in multiple power lines without establishing your own country.

I’ve run my own hardware; maybe people have genuinely forgotten what it’s like, and granted, it takes preparation and planning and it’s harder than clicking “go” in a dashboard. But it’s not the same as establishing a country, sourcing your own fuel and feeding an army. This is absurd.


Correct. Most CFOs I've run into as of late would rather spend $100 on a cloud VM than deal with capex, depreciation, and management of the infrastructure. Even though doing it yourself with the right people can go a lot further.


The GP's statement is about relying on third parties; multiple power lines with generators you don't own on the other end fall under it.

Fun related fact: my first employer's main office was in a former electronics factory in Moscow's downtown, powered by 2 thermal power stations (and no other alternatives), which had the exact same maintenance schedule.


Assuming you have data that is tiny enough to fit anywhere other than the cluster you were using. Assuming you can afford to have a second instance with enough compute just sitting around. Assuming it's not the HDDs, RAID controller, SAN, etc which is causing the outage. Assuming it's not a fire/flood/earthquake in your datacenter causing the outage.

...etc.


Ah, yes, I will never forget running a site in New Orleans, and the disaster preparedness plan included "When a named storm enters or appears in the Gulf of Mexico, transfer all services to offsite hosting outside the Gulf Coast". We weren't allowed to use Heroku in steady state, but we could in an emergency. But then we figured out they were in St. Louis, so we had to have a separate plan for flooding in the Mississippi River Valley.


Took me a second.

I didn’t know the cloud-to-butt translator worked on comments too. I forgot that was even a thing.


Oh that’s weird, because it totally worked for me with “butts” as a euphemism for “people”, as in “butt-in-seat time” — relying on a third-party service is essentially relying on third party butts (i.e. people), and your data is only accessible through those people, whom you don’t control.

And then “your data is in my butt” was just a play on that.


I keep forgetting that I have it on, my brain treats the two words as identical at this point. The translator has this property, which I also tend to forget about, that it will substitute words in your HN comment if you edit it.

But yeah, it's still a thing, and the message behind it isn't any less current.


There is a cloud I've developed that is secure and isn't a butt :P

https://hackaday.io/project/12985-multisite-homeofficehacker...

I made IoT using cheap parts (Arduino, nRF24L01+, sensors/actuators) for local device telemetry, MQTT, Node-RED, and Tor for connecting clouds of endpoints that aren't local.

Long story short, it's an IoT setup that is secure, consisting of a cloud of devices only you own.

Oh yeah, and GPL3 to boot.


And reputation. With this outage the global media socket is going to be in gCloud nine.


and reputation.


Seems to be the private network. The public network looks fine to us from all over the world?


Not on my end. Public access in us-west2 (Los Angeles) is down for me.


Hmmm... why is our monitoring network not showing that?

Edit: ah, looks like the LB is sending LA traffic to Oregon.


Our Oregon VMs are up.


> but there is serious packet loss at the least.

Can confirm with Gmail in Europe. Everything works but it's sluggish (i.e. no immediate reaction on button clicks).


We are also hosted on GCP but nothing is down for us. We are using 3 regions in the US and 2 in the EU.


What could be the reason for the outage? Could it be a cyber attack on your servers?


go/stopleaks :)


Hm, isn't releasing go links publicly also verboten? :)


This happened to Amazon S3 as well once. The "X" image they use to indicate a service outage was served by... yup, S3, which was down obviously.


One of the projects I worked on was using data URIs for critical images, and I wouldn’t trust that particular team to babysit my goldfish.

Sounds like Google and Amazon are hiring way too many optimists. I kinda blame the war on QA for part of this, but damn that’s some Pollyanna bullshit.


You're brave to jump on here when on holiday!

Shouldn't that outage system be aware when service heartbeats stop?

Could this be a solar flare?



