Oracle Cloud is unreachable (oraclecloud.com)
171 points by inopinatus on March 11, 2021 | 241 comments



Tiny print below:

> This incident affects: Network Infrastructure (South Korea North (Chuncheon), India South (Hyderabad), Australia Southeast (Melbourne), India West (Mumbai), Japan Central (Osaka), South Korea Central (Seoul), Japan East (Tokyo), Australia East (Sydney), Canada Southeast (Montreal), Canada Southeast (Toronto), Netherlands Northwest (Amsterdam), Germany Central (Frankfurt), Switzerland North (Zurich), UAE East (Dubai), Saudi Arabia West (Jeddah), Chile Central (Santiago), Brazil East (Sao Paulo), UK West (Cardiff), UK South (London), US East (Ashburn), US West (Phoenix), US West (San Jose)) and

> Console (South Korea North (Chuncheon), India South (Hyderabad), Australia Southeast (Melbourne), ap-mumbai-1, Japan Central (Osaka), ap-seoul-1, Australia East (Sydney), ap-tokyo-1, Canada Southeast (Montreal), ca-toronto-1 region, Netherlands Northwest (Amsterdam), eu-frankfurt-1 region, eu-zurich-1 region, UAE East (Dubai), Saudi Arabia West (Jeddah), Chile Central (Santiago), sa-saopaulo-1, UK West (Cardiff), uk-london-1 region, us-ashburn-1 region, us-phoenix-1 region, US West (San Jose)).

So basically the entirety of Oracle Cloud is unreachable. Oops.

Edit: If you were wondering if Oracle really has that many regions, it's actually the same list repeated twice for "Networking" and "Console". Which means both the data and the control plane seem to have fallen off the face of the earth.


Complete outage worldwide and the status page says it's a potential issue which may affect some customers lol


Should we just read that as they may actually have some customers?


Isn't TikTok now hosted on Oracle?



Taking a page out of the AWS playbook


This seems incredible to me. Shouldn't there be some sort of circuit breaker in place to limit the blast radius of an incident like this?


One would begin to think that, then remember they're discussing Oracle


When I heard about Oracle Cloud I'll admit I took a quick look to see if they had any USP for "general" cloud computing customers - or even if they'd be significantly cheaper than our Azure and AWS subs to make the cost of transitioning worthwhile (lol, of course not)...

The main reason we won't consider using Oracle Cloud is simply because Oracle has already demonstrated their sheer contempt for their own customers through their on-prem database licensing program, so I dread to think what they would do to enforce their IP rights or ToS with their cloud customers.

Another reason we disregarded them (and GCP[1]) is because they simply aren't _big and successful enough_ to give us the confidence they'll keep on running into the foreseeable future and with the kind of reliability we expect - and this outage incident demonstrates that they clearly aren't. I know that Azure and AWS have both had significant outage incidents in the past, but both have been rock-solid stable in recent years. For this to happen at Oracle suggests something is institutionally wrong: if they can't get 99.999% uptime on their own worldwide Cloud operations then I cannot trust them with our own SaaS platform because we promise 99.999% to our own customers.

[1]I'm wary about GCP because Google hasn't issued any kind of promissory statement that GCP won't end up on the [Google Graveyard](https://killedbygoogle.com/)


I'm a Googler, opinions are my own.

The terms of use for GCP do say that any product being shut down will get at least a one-year notice before it's discontinued[1]. For any given product within GCP that has been shut down, the lead time has been greater than a year as far as I'm aware.

The fact that Google is calling out GCP separately on its earnings calls is actually somewhat of a commitment to trying to grow that business. Very few products within Google are called out separately, so that was a fairly big change.

[1] https://cloud.google.com/terms/

> (d) Discontinuation of Services. Google will notify Customer at least 12 months before discontinuing any Service (or associated material functionality) unless Google replaces such discontinued Service or functionality with a materially similar Service or functionality.


A year is _crazy short_ for enterprise business, which is where most of cloud growth is right now. It's worse if that component is core to the business in any way.

Think about components you rely on for your service. How long would it take you to stand up or migrate to an alternative? Consider that you've also got all your other business to do too. You can't spend a year just spinning your wheels from a business perspective, while you're migrating stuff between services because your upstream provider decided to stop building something. You've still got to keep growing. If you're lucky, you've got staffing for meeting your business growth needs, but odds are that's barely true.

On top of having to migrate, what are your replacement's performance characteristics? Are you replacing it with your own solution instead, and if so how do you operate it, what happens under load, how do you ensure durability? etc. etc. Lots and lots of questions, and you've got a scant 12 months to figure out what you're doing and how to get off it _on top of your existing business needs_.

This criticism has been brought up so many times, and yet routinely we still see GCP saying "but we promise to give you a whole year's notice!" as if that's a good thing.

(disclaimer: I work for another cloud provider)


Does any cloud provider provide a blanket "all released features will be supported for X years" to all users without a contract (as part of the terms)? I have never looked into other cloud providers, so I honestly don't know.

Also, I'm guessing larger companies could push for certain longer deprecation policies as part of their contracts on certain features. I have no insights here, just assuming from what other people on this thread are saying.

(Googler, opinions are my own)


Well that’s hardly the problem honestly. AWS is surprisingly good at not dropping services or creating long-term deprecation plans. They don’t have to enter into a contract to guarantee the longevity of their services. It’s definitely what makes AWS so trustworthy. Amazon in general is pretty good at supporting both business and consumer services in the long run, even when some of those services are clearly money losing units.

That’s because the company is basically designed like a synergistic portfolio where even money-losing businesses have a purpose in the big picture.

And that’s where Google fails. It’s very well known how ruthless Google can be at killing businesses that don’t produce tangible results. The Alphabet re-org clearly showcases that they are playing a game of optimizing the allocation of their assets. That mentality bleeds down.

Unfortunately, you can’t run a Cloud Business like that. To Google’s merit, it seems that they are not running their cloud unit with that mentality (I don’t think I have heard about GCP services that shut down). But it’s clear there’s a generalized feeling in the industry that Google can at any point become impatient with GCP, leaving many stranded.

For Google to be a distant third in the Cloud game must be pretty hard. This is something that plays to their strengths and yet they are hardly closing the gap.

The big difference is that the companies that are winning (Amazon and Microsoft) are willing to lose in some areas to win in others. They excel at that. Google on the other hand seems to have a hard time with that shit...


> AWS is surprisingly good at not dropping services or creating long-term deprecation plans.

Case in point: AWS SimpleDB. SimpleDB hasn't been available on the AWS console for years (since 2015?) but is still accessible through the SDKs and API calls. They don't promote it or even update it (AFAIK) but it's still there and you can still use it. Amazing!


Yeah, while it's not (AFAIK) a written commitment anywhere, what I've heard is that Amazon hasn't spun down any AWS service that still has any use, just reduced visibility and stopped new work and provided a migration path for those who choose to get off.

As long as they don't visibly break it, that provides a reputation that is hard to beat without something much more than one year notice as a binding commitment.


Probably not.

But Google has an absolutely dreadful reputation for dropping products and features at the drop of a hat.

Take Microsoft as a counter-example, which went to absolutely crazy lengths to provide backward compatibility for their products.

Google just doesn't have the credibility on long-term commitments to bank your entire enterprise on its products.


This is very true. People forget that enterprises, especially large ones, will take it on the chin in a lot of other areas in exchange for long-term stability.

I think this is one of the reasons MS Teams has been so successful, even when there are better apps out there that provide all the same functionality and more.


I have no idea if any of the others have such promises, but wouldn't you agree that Google kind of stands out from the Big Three cloud providers when looking at the historical rate of service deprecation? It's not an accident that Google is rather (in)famous for that, and MS & Amazon are not.


No. Amazon and Microsoft are equally happy to kill their failed consumer products as Google. And none of them have a high rate of killing their business-focused cloud products.

But for whatever reason, when Bezos brags about how making failed products is a core part of the company culture because it shows you’re taking risks, HN shrugs. When MS kills off billion-dollar consumer bets after only a couple of years, HN just goes “lol, why would anyone have used Mixer”.

It is a clear double standard that seems to have happened since memes are self-reinforcing.


An example of this is how long it took Amazon to remove Oracle; it happened only recently. If a non-tech-focused enterprise moves to GCP and then gets a year to move everything to Azure/AWS, that is a killer.


> Think about components you rely on for your service. How long would it take you to stand up or migrate to an alternative.

Architect at a F500 here.

It would take us easily 2 years to roll out a solution, get migrated to it, and have it implemented, tested, and signed off. Probably another year (or more) to work out all of the kinks or handle the stuff that wasn't in the initial build (e.g. phase 2 expansions, etc.).

There is a reason we do 5 year TCOs, and it's not just cuz of the depreciation schedules. Also the reason we lean to things with 5-, or 10-year maintenance cycles. 1-year notice is not enough, full-stop.

Not talking about rolling our own solution here either, this is COTS being deployed in a 99.999% environment in multiple countries.


Disclaimer: So do I. Occupational hazard on HN.

A year is crazy short if your job is to architect an ideal solution, but insanely long if your job is to simply migrate your employer onto a roughly equivalent solution. If a cloud customer isn't continuously testing a migration they're naive.

General best practice is to have your current best setup on cloud provider #1 using all their first-party hosted services, and a fallback on cp#2 that self-hosts the services from cp#1 and uses cp#2's built-in services where better. It's what you should already be doing for benchmarking and testing, but also serves as a fallback if needed.

And once you've got the fallback planned you're free to move the active service around, almost at a whim. Even for a fortune 50 with petabytes of data - if your data isn't already hosted everywhere you're just begging to be wiped out with a simple account problem.
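
(As a minimal, purely illustrative sketch of what the probe side of such a fallback can look like - both endpoints below are hypothetical:)

  import urllib.request

  # Probe the active provider's endpoint and fall back to the standby when it
  # stops answering. Both URLs are placeholders.
  ENDPOINTS = {
      "primary-cloud": "https://app.primary.example.com/healthz",
      "fallback-cloud": "https://app.fallback.example.com/healthz",
  }

  def healthy(url, timeout=5.0):
      try:
          with urllib.request.urlopen(url, timeout=timeout) as resp:
              return resp.status == 200
      except OSError:
          return False

  active = next((name for name, url in ENDPOINTS.items() if healthy(url)), None)
  print("serve traffic from:", active or "nobody healthy, page someone")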


Imagine working for a cloud company and then also saying "If a cloud customer isn't continuously testing a migration they're naive.", not realizing that most users of your software don't have time to continuously fiddle with shit that supposedly works correctly at the time.


While they haven't actually explicitly said that, and certainly my employer hasn't heard that message, our AWS rep has repeatedly and loudly said that all of our planning should incorporate failure starting from the smallest (like an individual instance) up through a service in an AZ to a service in a region or a complete AZ outage, to a complete regional outage or global service outage to a global provider outage.

So, the message is implicitly there.


> shit that supposedly works correctly at the time.

I do love me a well-formed customer query.

Yeah, it's supposed to work. I get paid more if it does. But in exceptional circumstances, it won't. And in more exceptional circumstances your alerts will fail too.

As an admin what do you do with the 90% of your time that isn't actively fixing something if not planning for how to fix the next thing? We did this years before cloud meant anything other than prepare for floods.


> As an admin what do you do with the 90% of your time that isn't actively fixing something if not planning for how to fix the next thing?

Not every cloud customer has a dedicated admin team; some don't even have any dedicated developer teams. They simply wanted a website done, contracted someone, and they uploaded some HTML to a cloud instance.

Sure, if you're a large company, it makes sense to have redundancy. If you're running a small company with some esoteric webshop to serve AFK customers, it makes less sense.


If it didn't take a team to design what you use then it probably won't take a team to design a fallback. But the difficulty doesn't excuse not doing it, if you don't have a fallback for whatever you pick, pick something simpler.

Ideally we keep stuff running, but I don't want to be the kind of person who just tells you that we've got it. You need to know your airbag might fail so that you take your seatbelt seriously.


> If a cloud customer isn't continuously testing a migration they're naive

This is a dumb point of view that only cloud providers (and their employees) can advocate.

If I'm a company, being "on the cloud" doesn't automatically make me money. My business makes me money. If every two years I have to waste a year (or even six months) re-architecting, then "the cloud" is costing me people money on top of the infrastructure money.


So, you architect for multicloud once, and then in the event of a cloud provider failure (a high impact but fairly low risk event if you haven't mitigated the impact with a multicloud architecture) you just shift resources to the surviving providers and maybe throw something on the backlog to incorporate a new provider in your multicloud setup, but even if that takes some rearchitecting, the median interval isn't going to be every two years, or, most likely, even every 10.

Most enterprises won't do this, either, but it's not because of recurring rearchitecting costs.


> if ... then "the cloud" is costing me people money on top of the infrastructure money.

Yeah. Do you think I disagree? It's not for everyone.

> this is a dumb point of view, that only cloud providers (and their employers) can advocate.

No, it's the truth. Stuff fails.

And if you think that this is the company's messaging you're totally wrong.


That's insane. It's ridiculous that you have to constantly test a potential migration!


It's not insane if it's rewarded in the long run of the competition.

Like, "it's insane to grow legs just to live, if you can just crawl like a worm".


They aren't naive though. Migration testing has costs. A business needs to weigh this against many other costs they have.


Of course. But you can't just say you can't afford it. Fire doesn't care that you couldn't afford a smoke detector.

If you can't afford a recovery plan then what you can't actually afford is the service that needs the recovery plan that you can't afford to develop and test.

All costs, including the cost of switching away if it fails, have to be considered as part of the sticker price.


That's absurd; if a customer has to engineer for and be prepared to migrate to another cloud provider at any moment, that erases any possible cost advantage for using a cloud provider in the first place. They might as well just self-manage bare metal.


It's the truth. The truth cannot be absurd.

As for running your own servers, that too can fail meaning you still need a migration strategy (even just to new hardware) and you need to be testing it constantly.

And no, the benefit of a cloud provider isn't that normal stuff is easy or cheap but that otherwise impossible stuff can be attempted.


> As for running your own servers, that too can fail meaning you still need a migration strategy (even just to new hardware) and you need to be testing it constantly.

That's what DR is for. We have a main bare metal site and a secondary site. Throw a couple of spare servers / switches / PDUs / Hard Drives / whatever in that space too.

Cloud options need to be better and more effective than that.

> And no, the benefit of a cloud provider isn't that normal stuff is easy or cheap but that otherwise impossible stuff can be attempted.

A virtual server is a virtual server. A container is a container. The only thing the cloud offers me is the ability to change my CapEx spends into OpEx spends. Otherwise I have to hope that the vendor won't do me dirty, and will leave me in a stable, workable place 3+ years from now.

The bare metal colo operations will. Long track record of stability everywhere I've been. Barring act of god or otherwise unusual circumstances I know my tier 4 colo will be there next year, and the year after. Will GCP be around?


What's DR other than migrating onto what's hoped to be (but never is) an identical setup? It's like having your Amazon fallback be ... Amazon in another region.

That protects you against localized outages but not design failures or systematic outages or incompatibilities in new versions of your stack.

And it takes time to keep your DR plan up to date, patch the VMs, etc, and test it. Almost like this migration plan I'm talking about.

> The only thing the cloud offers me is the ability to change my CapEx spends into OpEx spends.

Ehh, not really. You can set up load-balancer pools larger than your entire colo, or use a globe-spanning backbone to create datasets that auto-replicate worldwide. These are usually much easier than setting the equivalent services up yourself, let alone building the multiple zones. If the cloud is just a big colo to you then you probably shouldn't use the cloud. It's frightfully expensive.


> If a cloud customer isn't continuously testing a migration they're naive.

Most cloud customers are naive, as the stream of major enterprise breaches caused by S3 buckets without security settings that have been default for many years demonstrates.
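
(To make that concrete: bucket-level Block Public Access can be enforced with a few lines of boto3 - a minimal sketch, and the bucket name here is hypothetical.)

  import boto3

  s3 = boto3.client("s3")

  # Turn on all four Block Public Access settings for one (hypothetical) bucket.
  s3.put_public_access_block(
      Bucket="example-enterprise-data",
      PublicAccessBlockConfiguration={
          "BlockPublicAcls": True,
          "IgnorePublicAcls": True,
          "BlockPublicPolicy": True,
          "RestrictPublicBuckets": True,
      },
  )
  print(s3.get_public_access_block(Bucket="example-enterprise-data"))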


Yes, it's your customers' fault.


So, pray tell, how would you assess and assign fault here? What's the actual bad action and who caused it?

You see, there's strength in the truth. It doesn't matter if drunk drivers shouldn't hit you, that's why you check both ways before crossing the street, and you'd be naive not to. (In this analogy, drunk drivers are outages...)


I don't think a year is long enough though - especially for something that's unique and foundational. A complex product built using BigTable, for example, may need more than a year to migrate to something else.


This was our conclusion with an existing GCP monolithic service.


How does this compare to the terms and the actions of Amazon towards AWS?


People simply know that AWS isn't going anywhere. It's the Microsoft Windows of the cloud world. Google has a long history of dropping stuff relatively quickly.


Amazon runs on AWS, Google does not run on GCP.

Or am I missing something?


> Amazon runs on AWS,

True, or a team is slowly migrating, so one day it will be true.

> Google does not run on GCP.

I've heard other Googlers say "GCP runs on Google" as a somewhat tongue-in-cheek reference to the internal architecture.


> Amazon runs on AWS

That's not a given.

At least in the early AWS days, internal sites had special configurations.

Source: worked there.


In the early days, they had a ton of special configurations, like w.amazon.com - the internal wiki, which was probably running in a datacenter somewhere.

But in recent years, they run entirely on AWS. They even have solutions architects to help them build things in a resilient way. They also spent years ripping out Oracle and replacing it with RDS MySQL, PostgreSQL and Aurora.

It is true that Amazon runs on AWS.

Source: worked there.


You guys also have “vending machines” for things like Ethernet cables in the office with price tags displayed. They’re free, but it’s to remind you not to waste Bezos’ money on frivolous things. And you pay market-rate for microwave reheated salmon in the cafeteria.

...then they go and buy more of some of the most expensive office space in the world per square foot.

Source: I interviewed there 3 times and every time left me with a dystopic impression of the company. To add insult to injury, their recruiter ballparked me at only slightly above half of my then-TC.


I stopped working there in 2019 and in the few years I worked there I never saw prices on the IT kit in the vending machines. The IT vending machines were one of the better ideas I've seen at companies I've worked at. Many companies would require you to open a ticket and have an IT person hand deliver the keyboard, mouse, or other accessory to your desk. At Amazon, you can just go to the vending machine, swipe your badge, and get whatever you need, no human work required.

How is that a negative? Seems like a more efficient way to operate a company.


At Microsoft, in my buildings at least, they kept the giant PC-recycle boxes around for a few months and we were encouraged to fish-out anything useful (I know other teams forbade this, it's complicated). When there was something I needed I could usually get it from my team's admin-assistant just by asking them over Lync. I never had to file a request with IT to get any parts or equipment I needed.

> Seems like a more efficient way to operate a company.

I do actually agree with you - it's just a matter of framing the company's practices.


This was a big HN thread about a year ago, when Amazon switched off their last Oracle instance and went fully AWS on the store backend.

https://news.ycombinator.com/item?id=21258127

That's what I was getting at. There's no way AWS is going anywhere if the entire store is running on it.


Runway is not enough of a pull when we need to heavily invest in the infrastructure initially, as well as over time as we solve quirks.


> Another reason we disregarded them (and GCP[1]) is because they simply aren't _big and successful enough_ to give us the confidence they'll keep on running into the forseeable future

I've got a great big bottle of schadenfreude waiting for the day Goog announces that GCP was just somebody's 20% project and they're moving to another team - complete with an "Our fantastic journey" blog post and a new entry on all the google graveyard websites...


GCP already feels too big to me to become an "incredible journey". Deprioritized and left to slowly bitrot I could maybe see, if it fails to grow, but even for Google, closing a whole cloud platform at this scale would be unprecedented and _extremely_ damaging.

Google has lost a lot of trust and goodwill, but GCP isn't Google Reader. A sudden GCP EOL would be cataclysmic :)


Google heavily bet on Google Plus, and they still left it to rot and shut it down. GCP closing would be much bigger than Reader, but not that much bigger than Plus.


I don’t see how it’s the same scale. Dropping GCP would hugely disrupt other companies’ operations, not just users’ social media habits.


Aye. I can migrate my social media activities to another platform, but migrating my company's infrastructure would take years and cost millions.

We could either do it fast and have revenue impacting events, or slow and at higher labor and opportunity costs.


Hate to be the bearer of bad news, but it was already leaked that Google attached funding strings to GCP growth targets.

https://www.theinformation.com/articles/google-brass-set-202...


The idea of giving up on GCP because it isn’t the highest-profit cloud provider is interesting, but I honestly don’t believe it. As if there would be a call: “we’re only number 2 in cloud computing so we are shutting down”. Does Google really make that kind of decision? It feels like giving up on the internet because they didn’t win.


Look at every other Google product decision ever. It's a lens through which to evaluate the "GCP 2023 leak".



Paywalled :-(


Agreed, GCP EOL is one of the few things that would piss me off enough to leave Gsuite and the rest of Google's services.



10 years from now: Azure, AWS and GCP are spun off into separate companies due to antitrust action and then promptly merge into one.


Oh come on, what do you think this is?

AT&T? Standard Oil?

Oh...


And people will argue that cloud datacenters should become a public utility and then I start to wonder if this is a problem that neither private capitalism nor state-owned industry can solve...


GCP is probably too big even for Google to do that.

Individual services under GCP though, who knows.


Fwiw, the amount of money that Google is pouring into GCP, including their own transoceanic fiber makes me believe they are committed to it more than “other” endeavors.

Honestly the biggest issue with Google is that they need to dramatically improve their open source database offerings, including 3rd party vendors.

RDS/Aurora eats their proverbial lunch. Azure at least bought Citus to improve their PG offering.

The GCP database offerings feel like something that was put there to check the box, not to be competitive.

EDIT: The non-Google database offerings.


> The GCP database offerings...

On the contrary, I'd argue that BigTable, Spanner and Firestore (for Firebase) are all tentpole products that draw people in to GCP in the first place, and they stick around because GCP offers other services that probably do save the customer time and resources and are willing to use them because they've already overcome the emotional hurdle of trusting a cloud provider with their operations.


Let me limit it to the non-Google database offerings.


Why, though? No one signs up for GCP because they want to be completely independent of Google products. Judging it based on non-Google DBs is (IMO, anyway) judging it on criteria that are irrelevant to its actual value proposition.


Plenty of people sign up for AWS without intending to use AWS proprietary DB’s. If GCP wants to compete with AWS they need to compete on general purpose workloads.


> No one signs up for GCP because they want to be completely independent of Google products.

Then they're losing a lot of potential customers.


Are you not going to use BigQuery while being on GCP simply because it's Google's proprietary offering too?

Assuming it fits your needs ofc.


For some things, sure. But first and foremost my concern is going to be Postgres.


TIL Azure bought out Citus, missed the HN headline on that one...



I need to run Postgres in production soon and have been trying to decide between AWS RDS multi-AZ (non-Aurora) and Google Cloud SQL HA. I have some experience with RDS, and from the documentation, Cloud SQL seems very similar.

What's worse about Google Cloud SQL?


> What's worse about Google Cloud SQL?

https://issuetracker.google.com/issues/35904358


(2017)

5 years is a lot of room for change. I remember Gmail having to restore a bunch of email from tape a decade or so ago.

---

As an aside, the advice that

> If you are having the issue *stopping* or restarting an instance, try adding and then removing an IP address to the Access control list. After that you should be able to stop the instance.

was particularly curious to read though. Huh.


That issue was closed in February 2017?


I'm not worried about hitting the same error they did, I'm worried about getting the same customer support they did.


Not so long ago GCP Pg didn't support PITR. It looks like they do now. Not sure if you can create a separate instance from backup or restore only to a running instance. Not sure if they allow external replicas.

I suggest you discuss the requirements with your team and then compare both offerings.


Interested in the same question. So what are the problems with AWS/Azure/GCP PostgreSQL HA databases? Is anyone better than the others? Documented problems for any of them?


I got to deploy some PaaS stuff in Azure recently.

Azure SQL Database has a zone redundant mode, but not a zonal mode. This means the network performance will always be terrible, because the database will be in an unknown zone, so you can't put your app tier in the same zone.

Conversely, App Service has only a zonal configuration or nothing. No zone-redundant option! You'd think this "app+db" PaaS combination would be well designed and tested, but you'd be wrong...

The Azure SQL network performance has two modes: insecure and acceptable OR secure and horrendously slow. To get a private 10.0.0.0/8 address they tunnel(!) through a VPN gateway they manage and then proxy(!) the traffic again. I saw latencies as high as 13 milliseconds for "SELECT 1" within a region, which is just absurd.

PS: This is for the Business Critical elastic pool tier, which guarantees 1-2ms I/O latency, but you can't actually get transaction latencies this low because of the networking issues.
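
(If anyone wants to reproduce that kind of measurement, a rough sketch with pyodbc - the connection string is a placeholder and it assumes the ODBC Driver 17 for SQL Server is installed.)

  import time
  import pyodbc

  # Placeholder connection string; substitute your own server/database/credentials.
  conn = pyodbc.connect(
      "DRIVER={ODBC Driver 17 for SQL Server};"
      "SERVER=tcp:example.database.windows.net,1433;"
      "DATABASE=exampledb;UID=user;PWD=password;Encrypt=yes"
  )
  cur = conn.cursor()

  samples = []
  for _ in range(100):
      start = time.perf_counter()
      cur.execute("SELECT 1")
      cur.fetchone()
      samples.append((time.perf_counter() - start) * 1000)  # round-trip in ms

  samples.sort()
  print(f"p50={samples[49]:.2f} ms  p99={samples[98]:.2f} ms")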

There's lots of little annoyances too, like: you can't copy databases across subscriptions. You can't create databases in an elastic pool in a different resource group. You can only have one "Azure AD administrator", because SQL Server doesn't really support IAM. (You can use a group, but not two groups.) Similarly, non-admin access is granted via SQL only, not Resource Manager, which makes it nigh impossible to automate some builds via ARM templates. The built-in firewall is a disaster: IPv4 only and no support for service tags. The SQL Elastic Jobs resource is incomplete and seems useless. Similarly, SQL Sync is a toy and was broken when I tried it.

Overall, it feels very immature and not suitable for any real enterprise use case. Small Wordpress sites? Sure. Infrequently used apps that don't need performance? Probably okay. Maybe.


  Fwiw, the amount of money that Google is pouring into GCP, 
  including their own transoceanic fiber makes me believe 
  they are committed to it more than “other” endeavors.
I thought this way too about another fiber-rollout business of Google's. It didn't turn out the way I expected.


I think google knows that if GCP were to fail that their stock would drop like a rock and take years to recover. The Cloud is the present (and the future).


Oracle OCI is one of the few cloud providers who'll actually give you two VMs for free, forever.

https://www.oracle.com/cloud/free/

Could be worth checking out.


Anyone know what "1/8 OCPU and 1 GB memory each" translates to in processing power?

Fine for a personal hobby web server to try out nodejs on?


It's working just fine. Running a MySQL and a Minecraft server without any problems.

And free 10TB of traffic...


What benchmark do you want me to run?

(Not `dd`. Something actually useful.)


None would make any sense to me. I'm working my way through freecodecamp and feel like starting to play with a server.


That's awesome.

Well, while it certainly looks (and feels) a bit like a cliché, I find myself recommending AWS for such a purpose. Common issues will be a single StackOverflow answer away. The platform (infra, byzantine pricing model) is at least well documented. The company has a reputation for making expensive honest accidents go away :).

This being said, exploration and _actually having to dive into something_ can provide insight and understanding that just doesn't get mentally registered when on the mainstream path (because it all just works).

And as another commenter said, you get 10TB of traffic. Which is hands-down insane. At 450Mbps too. (Boggles) (AWS is 1GB/mo, albeit at 1Gbit.)


I host quite a lot of stuff on those two VMs, works fine.


Thanks!


It requires giving Oracle a credit card, right?

Because I heard horror stories about Oracle suddenly discovering that you did something that qualifies for extra-special-charge.

And I am not a major corporation that can easily pay a major bill to another corporation.

("requires credit card" part is understandable to me, but I see no reason to trust Oracle given what I know about their approach)


Many years ago, Oracle bothered me by phone and email with the bills of a near-namesake on another continent, and I only had a developer account. So you don't even need to share billing details to have problems.


Credit card information is standard for fraud prevention.

It’s impossible to break out of the free tier and get charged for something.


I explicitly noted that I see why a credit card is required; it's just that I consider Oracle to be malicious and likely to be fraudulent.

> It’s impossible to break out of the free tier and get charged for something.

Impossible in which sense? If they got my credit card info they can charge - and possibly justify it by convoluted rules.

And "Oracle promised something" is sadly not very credible.


They can't; that's where your logic breaks down.


What are you basing this claim on? The reason I am not using any cloud trial that requires a credit card is the chance of them driving me to bankruptcy.


Based on me using Oracle Free Tier since it was released. It is impossible to get charged for your use of the free services.


and free databases too forever


There is a distinct lack of GCP products on https://killedbygoogle.com/

Would you avoid AWS because of Fire Phone?


Perhaps a better comparison would be, https://killedbyamazon.com/ (or killedbyaws?), which I don't think exists at all.


Genuine question: did you check that domain before you posted your comment?

Because if it didn't exist, well, it does now. It's sitting on one of those existentially-depressing parking pages. And ironically has a hilariously broken (go and see) HTTPS cert.


> Genuine question: did you check that domain before you posted your comment?

Indeed I did. It showed me some generic buy-this-domain website with no substantial/relevant info (actual products killed by the Amazon mega corporation).

Nevertheless I presented my comment as a concept of `a website that lists the products killed by Amazon`, which could have been located in https://productskilledbyamazon.com, for all I know. It would be more interesting to find out if such website actually exists.

EDIT: A quick internet search does not net me any results. Which is why I'm skeptical it actually exists.


> I'm wary about GCP because Google hasn't issued any kind of promissory statement that GCP won't end up on the [Google Graveyard](https://killedbygoogle.com/)

Isn't that literally what an SLA is, a (contractually) guaranteed level of service for some interval of time?


Google's GCP SLAs are 99.95% per month[1]. There's no mention of any guarantee that GCP as a whole, or specific GCP products (like App Engine), will be supported for, say, 10+ years from now (well, 10+ years is unreasonable, but something like "at least 3 years, and with 2 years' notice of any planned discontinuation" would be nice).

[1]: https://cloud.google.com/appengine/sla
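
(For a sense of scale, monthly availability percentages translate into allowed downtime roughly like this - a quick back-of-envelope for a 30-day month.)

  # Allowed downtime per 30-day month at a given monthly availability SLA.
  minutes_per_month = 30 * 24 * 60  # 43,200

  for sla in (0.9995, 0.9999, 0.99999):
      allowed = minutes_per_month * (1 - sla)
      print(f"{sla:.3%} -> {allowed:.1f} minutes/month")

  # 99.950% -> 21.6 minutes/month
  # 99.990% -> 4.3 minutes/month
  # 99.999% -> 0.4 minutes/month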


Those are the standard SLAs but I presume you could negotiate a better one assuming you're willing to pay? I don't know what the cost for that would be though.


Not really: an SLA for a service doesn't preclude the deprecation of the entire service.


Yes, but it's also why those are a lot shorter. If Google had been signing SLAs with 10-year horizons, that'd be big news.



That press release doesn't say anything about Google signing an SLA that commits them to 10 years of service availability.


Those do not require that the contractually guaranteed services remain available to other customers for use.


But does it generally make business sense to deprecate a system even though you then need to maintain it for 2 customers for another 7 years?


It’s one thing to be wary of experimental new consumer products like the many Google has tried and discarded, but I can assure you I have personally observed vast amounts of money poured into GCP in the enterprise realm, and even if they retire a GCP product here and there, most of the myriad products in GCP will be around making money for quite a while!


> even if they retire a GCP product here and there, most of the myriad products in GCP will be around making money for quite a while!

Sad reality if your business depended on that retired product though.


> I'm wary about GCP because Google hasn't issued any kind of promissory statement that GCP won't end up on the [Google Graveyard]

Snapchat paying them $400 MILLION per year takes care of this issue. That's 1 company. A few jaw dropping others have popped up in S-1 filings.


So then it shouldn't be impossible for Google to go out and give potential customers some sort of guarantee that they'll be around? But still doesn't seem to happen, hence people hesitate.

And even if the entire GCP might never disappear, Google is famous for shutting things down so I could imagine people get upset when GCP is closing down individual features people are still using.


>Google is famous for shutting things down

Google is famous for shutting down consumer services. It very rarely shuts down major business facing services.


Ya, I don’t worry about GCP going away. As a B2B service, they have contracts with customers that would be very expensive for them to break. Also, their internal tools track very closely with GCP offerings AFAIK. They have found financial leverage in plowing profits into cloud CapEx, so renting out that gear is all incremental profit.


The Oracle cloud is for Oracle customers. The same way the IBM cloud is for IBM customers.


I think the point though is that OP does not believe Oracle has the stomach, stamina, will (call it what you will) that IBM has demonstrated with, say, its backwards compatibility for software written for the IBM S/360...


I wonder how much computing would have advanced if in the 1980s IBM had told their big institutional bank and insurance company customers to get their program code... er... up to code because of the cost of maintaining backwards compatibility. I'd be really surprised if the incumbent legacy industries really were fine with paying IBM through the nose for the privilege of not having to hire people to stay on the upgrade treadmill (and avoid the problem of being stuck on COBOL in 2021...). If not, then that's very, very short-term thinking on their part and you'd think insurance companies of all things would know the true value of long-term thinking...

Well, IBM did kinda cheat: rather than go with Microsoft's approach of making the platform backwards-compatible (e.g. by not fixing bugs that programs depend on, or introducing shims, or redirecting/intercepting calls from known older programs), IBM introduced some pretty hefty OS virtualization technology to their platforms decades before x86 virtualisation was a thing - which I imagine makes back-compat much easier.


Absolutely agree. I'd call it innovation though, not cheating. They had a need and they found a good way to deal with it. Awesome!

P.S.: I still remember when I told my dad I had this cool new thing, VMWare now. There were a few other "yeah, we had that on the mainframe X many decades ago" get-off-my-lawn moments like that :)


Idk, Oracle RDBMS is from 1979, and Java is quite the poster child for legendary backwards compat.


I'll sort of give you Oracle RDBMS. It's also their main product and cash cow.

Java isn't Oracle's though. It was created by Sun (released in 1996) and has had a community-driven process for language changes since 1998 (jcp.org). Oracle only bought it, and OpenJDK is a thing.


GCP won't be killed anytime soon; it's the third-biggest cloud provider. But I can agree AWS is a lot safer bet.


Their SLA is 99.99% for most services. I know AWS just recently had an outage for a couple of hours. Normal for cloud computing businesses.


Not worldwide


So I have to ask, is there actually a member of this community that uses Oracle Cloud? To be completely honest I'm surprised


From what I've seen, Oracle Cloud usage is mostly to lift-and-shift legacy Oracle software packages off-prem and things that need low-latency access to them.

It's more Oracle software as SaaS, some basic IaaS (more like a hosting provider than a cloud) and hosted Oracle databases (which are just Oracle running Exadata for you).

So it's big companies struggling to get off old Oracle (and acquisitions like PeopleSoft) stuff and it's gross.

I don't know of anyone using it as "their cloud", but more of another thing to mix in with AWS/Azure/GCP to get rid of on-prem or colo facilities. It's more like IBM Cloud: A way to hang on to customers, sell services (human services like consulting and support), and keep milking those old products for big companies.


I should mention that you can't run Oracle RAC on any "real" public cloud, because it doesn't play nice with SDNs and needs shared block storage. People have done it with hacks (like VPN tunnels between hosts), but it's not performant and well, an unmaintainable hack. I've seen things like read-only single instance replicas of on-prem DBs, but uhhh.

A stand-alone Oracle server isn't a reliable solution because patching = outage. So Oracle providing managed Exadata is a win for some (again, usually big) businesses.

Oracle got a lot of companies hooked on proprietary DB features, and rewriting can be painful or require spinning up a dev team for something in "maintenance mode". Even Amazon just fully got off Oracle a couple of years ago IIRC.


Just as a counterpoint, regarding your IBM Cloud snub, my small company has been doing our cloud needs with IBM for the past 5ish years, with great pricing, stability and support. The portfolio of non-ibm-specific services is huge.


And much of IBM cloud was popular with smaller companies when it was SoftLayer, before IBM acquired it.

I know one successful PaaS which has stayed, which to me says a lot about the quality of IBM Cloud


We do. Our stack is a fairly standard Java app on an Oracle DB. Seems to work well enough. We couldn't get the app to work with Azure, and AWS is a no-go (we have retail clients).


Sorry, I'm out of the loop: why does "we have retail clients" mean "AWS is a no-go"?


Retailers like Walmart supposedly demand that their partners and suppliers not use AWS.


Can confirm, did large projects at huge retailer, AWS was cursed, anything but AWS, spoken by execs through clenched teeth.


And to clarify the above reply: it's widely suspected that Amazon snoops on data hosted on AWS, so they can make/sell competing versions of products that are selling well, based on what they learn from snooping.

So basically any business that makes or sells anything that Amazon may possibly make or sell in the future doesn't want to touch AWS.


That cannot possibly be true.

Do you imagine the fallout if it came out (and how would it not? thousands of people have passed through; one disgruntled employee is enough)? AWS would lose most customers within a few years, Amazon.com as well (disgusted buyers and sellers), and there'd be lawsuits everywhere. I really don't think anyone at Amazon is stupid enough to risk a trillion-dollar business for some potential competitive advantage on whatever is sold on Amazon.com.


Amazon has regularly launched products with their "Amazon Basics" brand, copying well-selling 3rd-party products sold on Amazon.com, always a few bucks cheaper. [1]

What makes you think it would be different with AWS?

[1] https://www.wsj.com/articles/amazon-scooped-up-data-from-its...


Are you really comparing making knockoff products with stealing their consumers' data?


I doubt that actually looking into EC2 instances (which AWS would never do, "pinkie promise") would even be necessary. Metadata about the company's deployed infrastructure would totally suffice to derive enough business insights to launch a "knockoff product", as you call it.


The fallout is already in effect, if you read the above comments.



Can also confirm, a percentage of our customers see Amazon as a fighting word. They conflate the store with AWS. The store is their competitor.


Ah I get it, because of a different part of Amazon being a competitor. Interesting!


Were you affected? Wondering whether, if your DNS is set up separately, you wouldn't be affected by whatever mishap happened to Oracle's own DNS.


We were not, oddly enough. I can't speak in too much detail since I'm not on that team but our region was included so we may just have been lucky


can you tell me what issue you were facing with Azure?


I haven't investigated in more detail but Oracle Cloud supposedly has lower bandwidth costs than AWS and Azure, making it cheaper in the long run for transfer heavy services. That's why Zoom uses it.

https://www.lastweekinaws.com/blog/why-zoom-chose-oracle-clo...


I've been using it for over a year now as my backup remote programming platform and CI server for small projects (Jenkins), and the other server as a MySQL + Minecraft machine.

Working flawlessly with ssh, out of the box. It is FAR simpler to manage than AWS in my opinion.

There is "no catch".


I do. 2 free VMs! One is a GitLab worker for when I reach my free GitLab minutes limit. The other hosts demo websites/projects.


Same, but VPS server and Grafana+InfluxDB.


Do the servers have static IP addresses?


Anyone who uses DynDNS is using Oracle Cloud.


When Dyn got acquired, I stayed on for a while because I liked Dyn. But after a bit, Oracle Cloud changed the terms and I didn't really like interacting with their systems or giving them any revenue, so I switched to Amazon, which has worked quite well for me.


And DynDNS was definitely down.


Yup!

Order of magnitude cheaper and way more predictable costs than other cloud providers.


Sometimes you have to do things you aren't proud of. $20 is $20.


Oracle Apps are on the cloud, which means at a minimum most clients who use Oracle Apps use Oracle Cloud.

Companies also use Oracle Cloud PaaS to build custom enterprise applications.


I do, running GraalVM and Node.js projects on OCI, plus Digital Assistant for a chatbot.


I am.


My condolences. Genuinely curious, how has it been?


Oracle Support: Can you please add the following trace files....


Oracle support is a contradiction in terms.


So who does Oracle sue because of this?


Their own employees, I suppose.


I’m sure they are capable of suing themselves


I didn’t realise they were hosted at OVH


Can't be, if they were hosted at OVH the stuff not running in the burned down datacenter would still be online. But since it's the cloud, all the outages they have are typically (1) systemic and (2) global.


Nothing like the redundancy of all those cloud servers and decentralization. Great job Oracle. lmao, how do you fail across the entire planet? I can't wait for the write ups on this.


I think GP was joking


GP? ____ poster?


Grand Parent

The parent post of the parent post.


The very definition of a grandparent :)


every time you reply to a comment someone else makes on one of your comments, you are in fact your own grandparent


Which is why you can't travel to the parent post and delete it from history. This keeps the universe working in correct order.


TIL a grandparent is actually an internet comment, not a person

/s


Your mind is really gonna be blown when you learn about backups.


Oracle Virtual Hosting


Off-topic, but is there a reason for small companies/projects to use Oracle Cloud? Are their prices and feature set competitive against AWS, Azure and GCP for this audience?


My understanding is that the three reasons people use Oracle Cloud are:

1. You're valuable enough of a marketing point that they're willing to give you insane discounts to reel you in (see Zoom)

2. You're locked in enough on other Oracle technologies and they've raised your license fees but coincidentally it comes with a big bundle discount if you also use Oracle cloud, or maybe you just failed an audit and Oracle said the problem will go away if you buy some cloud services.

3. For really small scale (like some dude on r/selfhosted scale, not even startup scale), they offer a free tier that doesn't have a time limit


Nice, 2 VMs for free. Might put my little side project there for a while. Currently on 2 nano vm's on AWS.


we're talking very few dollars here. In my experience, the only thing making me go above a dollar a month on AWS is the storage on the VMs.


Biased opinion: Having worked with Oracle products before, I'm not willing to touch anything from Oracle.


The only time I kind of liked their products was when the corporation spent $1.5 million on the fastest database I’ve ever seen.


What made it "fast"? Were your workloads properly optimized previously?


I forget the exact specs but it had close to 200 cores and a team of highly paid dbas that had optimized every aspect. Even the worst of queries would come back instantly.


I can second this.

We had a database growing by around 200GB of relevant data per month. While I'm not a fan of Oracle, having worked there long ago, that database was unbelievably fast. They moved to MySQL years later and it required a much, much bigger tin after being heavily optimized.


Details pls?


It was for an enterprise web app (.net) that was used to process seemingly millions of applications per month which would then pull in a whole ton of external data.


Huge mistake. I was surprised as well :)


Their prices are very competitive. Cheaper than AWS and Azure by a fairly large margin.


This is Oracle we’re talking about. The minute they get a significant foothold, prices are guaranteed to skyrocket.


I looked at traffic pricing (the most outrageous item among the big three cloud providers).

Outbound traffic from US/Europe is $0.0085 USD per GB. That's roughly an order of magnitude cheaper. Also, 10 TB free will be enough for many small use cases. The billing based on source instead of destination also makes the cost a bit more predictable.

That's just list price, of course, and it's still well above e.g. Hetzner, but it's a reasonable price.
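
(Rough back-of-envelope below. The Oracle rate and 10 TB free allowance are the ones above; the ~$0.09/GB figure for AWS's first egress tier is my assumption, so treat the numbers as indicative only.)

  # List-price egress cost for a hypothetical 50 TB/month of outbound traffic.
  egress_gb = 50 * 1000

  oracle_free_gb = 10 * 1000
  oracle = max(egress_gb - oracle_free_gb, 0) * 0.0085
  aws = egress_gb * 0.09  # assumed first-tier rate, ignoring AWS's small free allowance

  print(f"Oracle ~${oracle:,.0f}/month, AWS ~${aws:,.0f}/month")
  # Oracle ~$340/month, AWS ~$4,500/month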


Does anyone pay list price for AWS? Either you're small enough to avail of startup credits etc. or you're big enough to negotiate a personalised discount


If you know what you're doing, and know a good part of the AWS services, the discount you'll get just by knowing what you're doing is probably bigger at that point.

Cloud-native can go to either extreme cost-wise, and it's very dependent on good engineering (which, unfortunately, is the exception, not the norm).


One word: PRICE.

At the end of the day this is THE cheapest cloud available if you just want a few VMs with a database and under 10TB of traffic monthly.

Personal opinion: it is much easier to manage than AWS.


Isn't part of Zoom on Oracle? Noticed a few different Zoom issues over the past hour or so trying to connect to meetings.


Zoom chose Oracle for a lot of their cloud infra which would probably explain that. AFAIK Zoom wasn't bought by Oracle: https://www.oracle.com/corporate/pressrelease/zoom-selects-o...


It says something that one has to explicitly mention that $product wasn't bought by Oracle to avoid confusion after saying $product runs on Oracle's public servers.


Looking at that page, the total downtime was around 27 minutes. Having been through multiple AWS outages lasting many hours, this is not even a blip on the radar. Except for the usual disparaging of all Oracle products, it is a non-story.


The fact that it impacted so many regions was kind of informative.

Every single AWS post-event report (https://aws.amazon.com/premiumsupport/technology/pes/) speaks to _a_ region. They design their regions to operate independently.

A single fault taking down what sounds like _all_ of Oracle Cloud is... unexpected.


It was DNS. It's always DNS.


Cache invalidation, naming things, and a globally distributed system. What could go wrong?


Oh .. that is good. Love it.


Good luck Oracle Cloud Engineers! Been there, it’s not fun.


PCAP or it didn’t happen


DynDNS is down, I assume it's because of that?


Oracle ate Dyn in favor of their worse Oracle Cloud product and have been migrating people, so that's a yuppers.


Oracle Cloud DNS is Dyn DNS. Same product/stack. In fact, private DNS is also powered by Dyn tech.


It would be very funny if they created a circular dependency..


TIL that Oracle has a cloud offering. Is this primarily for their database customers?


They push it to them heavily, but no. It's a general use cloud for anyone. Their free tier is quite generous [0].

The first one I saw (they call it "classic", I guess) was a pile of scripts joined with glue and clothes-drying string. Now the main product is okay-ish, but last time I checked the minimum hardware sizes (VM sizes and so on) were so large that it was out of my budget.

It may be cool for people who want managed Oracle products, but it just solves problems created by Oracle's own licensing and architecture design.

[0] https://www.oracle.com/cloud/free/?source=:ow:o:s:nav:062520...


No. It’s actually a company-wide pivot for pretty much all their products. About 10 years ago they figured SaaS was the best way to extract even more money from customers and make their data beholden to Oracle beyond their already-punitive contracts. So they basically stopped developing anything on-prem and rewrote all their products to be SaaS.


> and rewrote all their products to be SaaS.

I had a PoC with DBAs from my company and Oracle to see how that works for simple Oracle DB use cases and some Exadata stuff. At the time (2-3 years ago) most of that "rewrite" was just a lot of scripts on their side that reconfigured a regular Oracle DB. Same work my DBAs did. They even allowed us to modify the managed DB's configuration, but apparently it meant we'd lose support for it and would have to start managing it by ourselves again. It was confusing at best, and their team was sometimes as surprised as we were.

Still, it's better than the "classic" they offered. The smallest WebLogic setup (we wanted two servers) that was possible to create using a wizard was around 70GB of RAM (AFAIR) and multiple servers. It just spawned a full VM with a managed Oracle DB for the WebLogic configuration, another one for DR, two WLS servers, and a load balancer. Take into account that the smallest VM was 4 or 8GB of RAM and your budget is gone right away. Not really for small-scale setups.

I wonder how it looks from the perspective of people more fluent in Oracle products and the OCI cloud currently.


I’m not familiar with every product, but I worked there and I can guarantee you that a lot of their products had to be substantially rewritten. The scenario is somewhat confused by the fact that they try to sell everything as “cloud” but some are really just old-fashioned hosting with a bunch of scripts as you said. In the worst cases it wasn’t even scripts, it was someone in Asia getting a notification to do some work...

The final objective is “real” SaaS / IaaS though. Making stuff cloud-first for years was the only thing that mattered internally, a massive undertaking that took ages and might not even be done yet for some products. It burned their partner ecosystem almost to the ground, anybody who was in infrastructure basically had to gtfo or be a dumb reseller; and a lot of on-prem customers moved to other products (because Oracle basically stopped updating on-prem, or dropped crippled bundle-patches years after deploying new features on cloud, sending a clear message that on-prem was dead. Faced with a migration to what were often significantly-different cloud versions, a lot of customers opted for a migration to better products from the competition...).


No.

I'm running some MySQL and SQLite databases locally for my apps, apart from the Oracle DB. All for free.


It's always DNS!


Was anyone here impacted by this and could kindly describe what they've experienced? Was it just actually DNS resolution issues - meaning that stuff working with IP addresses was fine? Or was it just all unreachable and a black hole?

I would imagine DNS plays quite a key role in the internal operations of literally everything in a public cloud but I could be wrong.
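
(For anyone poking at it from outside, a quick way to tell "DNS failure" apart from "total black hole" - the hostname here is just a placeholder.)

  import socket

  host, port = "console.us-ashburn-1.oraclecloud.com", 443  # placeholder endpoint

  try:
      addrs = {info[4][0] for info in socket.getaddrinfo(host, port, proto=socket.IPPROTO_TCP)}
  except socket.gaierror as exc:
      print("DNS resolution failed:", exc)
  else:
      print("resolved to:", addrs)
      for addr in addrs:
          try:
              with socket.create_connection((addr, port), timeout=5):
                  print(addr, "accepts TCP connections")
          except OSError as exc:
              print(addr, "unreachable:", exc)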


Once this actually comes back online, if you're looking for free stuff, Oracle Cloud gives you the most free resources of any cloud provider. But probably you should have a backup.


The cheapest server you can run is disused mobile hardware on your existing internet connection, e.g. an old laptop. They're designed to be energy-efficient so you'll pay pennies yet have a dedicated server.

Now make a friend do the same and back your sites up to each other. Decentralized Internet, it works!


> The cheapest server you can run is disused mobile hardware on your existing internet connection, e.g. an old laptop. They're designed to be energy-efficient so you'll pay pennies yet have a dedicated server.

If it's disused, it's probably got components that are fairly late in their service life, and it probably pulls enough power that it's not energy-efficient by current standards, either.

It's the cheapest acquisition cost, though, since that's already paid for.


Reusing instead of producing new is also very high up on the reduce-reuse-recycle order. I don't have figures on at what point it's better to upgrade to more efficient processors than to keep using older ones that were already meant for mobile devices; I'd be interested if you do.

All I was saying, though, is that you can have dedicated hardware at almost zero costs and without having to use a party like Oracle or Amazon.


Everything is ok as per latest status https://ocistatus.oraclecloud.com


I use Oracle over others for the free 10TB bandwidth :D


TIL. I was so excited to try this out, finally a cloud provider with decent pricing (especially considering I'm in South America). Definitely the best price around here by a long margin.

But then I just spent 20 minutes trying to sign up. Every time with my credit card being rejected due to "wrong billing address". Never had this happen before to me, I tried pretty much every permutation of my address. Sigh, I guess Oracle will always be Oracle.


Do they have a homepage in your country that you could start the process on? Stuff like you're saying happens to me all the time because I usually have my VPN going 24/7 and financial transaction companies (some) do NOT like VPNs. Pretty often I have to turn it off for a few minutes to use a credit card. Just thought I'd throw that out there.


I don’t think that’s an Oracle issue. Had my credit card locked out after some tries recently, maybe try another?


The odd thing is that my credit card had a successful $1 charge on each sign up attempt. I might give it another shot in the future though.


Bad news if OVH was your failover.


Interesting it happens right after that fire at OVH. They're not related right?


Makes me want to watch The Matrix again.



Unreachable, they know how to pick a pun, don't they. I was thinking it was meant to be more metaphorical.



