Google Cloud Prediction API End of Life (cloud.google.com)
119 points by dotmanish on May 15, 2017 | 120 comments

This shutdown had an incredibly healthy discussion internally. The reality is that this service had been unmaintained for a long while, but we'd previously chosen not to start this deprecation process until we had a GA service we could actually have someone migrate to (Cloud ML Engine).

Additionally, it turns out that very few people were using it. That's not an excuse, but it is the reality of ongoing investment. I fought hard for this to be our expected 1-year term, and we had hoped to have a somewhat cookie-cutter guide for "here's how you reproduce this with TensorFlow." Quite frankly, the handful of users of the Prediction API likely aren't the kind to happily port to TensorFlow (and this service has existed since the App Engine-only days, so they're mostly hobbyists, but I still care).

It's never great to "have to" turn down a service, but ultimately, forced to choose between letting the code rot into a potential security nightmare and giving the small set of users some time to retool, we went with the latter. No new features is an easy way to keep something running forever, but keeping the damn thing secure requires a team to stay on top of it.

Disclosure: I work on Google Cloud.

How much does Google Cloud spend on marketing and sales of its services? How much business does it lose from companies who believe Google could shut down their services in a few years? And how much would it have cost to keep maintaining this one?

I can tell you, from my experience at mid-market size non-tech clients looking to move to the cloud, that Google's reputation for shutting down services is known and is a negative. AWS' reputation for leaving services alive is also known, and is a positive.

When companies buy software, they don't want to have to worry about the future; they just want it to work.

"No one ever got fired for buying IBM" was a big driving force for IBM's success. "No one ever got fired for choosing AWS" seems to be true these days.

If you want that phrase to be "No one ever got fired for choosing Google Cloud," you probably shouldn't deprecate services like this. I wonder if anyone has been fired for choosing Google Cloud after Google shut down one of its services?

Trust me, I know. I do agree with the folks that argued that security compromise is worse than deprecation though.

My counter-argument is the same as yours: find someone to handle it, for as long as it makes sense (I personally jumped on a grenade for a short period of time, but that's not enough). As the commenter below argues, that's probably not indefinite, but it is likely greater than 1 year.

Quite frankly, this service should probably have never gone to GA. Plenty of products don't work, but instead of just going out into the marketplace to see what sticks, we should have what it takes to say "This isn't ready". (We have this now, we didn't then).

I appreciate you being patient with us, and understand how this affects your decisions going forward.

Totally agree. It seems Google wants to be in exploring mode and conquer-the-world mode at the same time.

It needs clear communication with its users. I like that they have some sort of focus and trim junk, but they have to really communicate that with their users.

You have to be upfront about how many of your APIs are in beta. I remember Gmail being in beta for 5 years.

Maybe this should be the new bar: if you can't get your shit working in 5 years, can it. If you can, and enough users use it, then support it for a lifetime.

Let me respectfully rephrase your argument.

Despite parent's logical argument and evidence, and despite further evidence that Google doesn't really shut down Enterprise services at industry-irregular rates, market narrative has been defined (mostly by Readergate), and you are more inclined to accept said narrative?

Alternatively, is there sufficient evidence beyond "market narrative" and Readergate that has allowed you to reach your conclusions?

(work at G)

I can't find even a single example of AWS shutting down a service.

In fact, one example showing a potentially different philosophy across companies is AWS SimpleDB. SimpleDB is no longer on the AWS console; DynamoDB has replaced it, as AWS believes DynamoDB is a superior service.

That said, SimpleDB still functions perfectly fine. In fact, I recently spoke to an AWS manager who stated that they will leave SimpleDB out there as long as people keep using it.

Just one metric: How many services has Google Cloud shut down and how many has AWS shut down? Off the top of my head, I know of two Google Cloud services, and after Googling (Thanks!) I can't find a single AWS example.

The fact remains: If Google knows that this reputation is in the populace but still doesn't believe it's worth it to fight that with more than 12 months notice with (what appears) a small investment - is the reputation undeserved? If larger Google Cloud services start losing money, why should customers count on Google to keep those services up?

AWS comes out with new services all the time, and even if they are smaller / niche, I feel comfortable recommending them to my clients. For example Data Pipeline never really seemed that popular, and AWS has recently come out with a better product, AWS Glue, which the former Data Pipeline team has transitioned to.

Will AWS force customers to switch to the new, better AWS Glue? No, they will leave Data Pipeline, and their customers' investments alone.

I think the point is that with the resources Google has, you need to appear perfect as well as be perfect. Commenting on message boards about how unfair people are to poor Google doesn't make me want to support your view. I'm not sure how important this API endpoint was, but it's definitely another data point in the discussion. I wouldn't choose to rely on Google APIs any more, and I think that quite a few people feel the same way: I might choose one of the endpoints that few others use, and it gets shut down.

I mean, look at Stripe for a counterexample: they seem to be super solid and are still on version 1 of their API, which practically never changes.

Marketing is all about confidence. Regardless of the facts, the public perception is that Google shuts down services with wild abandon; that is a perception Google needs to work to fix, and your response merely confirms it.

> ...mostly by Readergate...

Oh, come on... seriously? This argument is trotted out every single time Google shuts down something: "you are still upset about Google Reader". No: we are upset about all of those other things that get shut down, and people from Google shifting blame to Reader over and over and over again is nonsensical at this point: there are so many many examples of things being shut down, and Reader was probably the least interesting to those of us who care about this sort of thing :(.

Here is something I wrote four years ago when someone (also from Google!) tried to pull the "you are just upset about Google Reader" argument.



They've shut down much more than Reader (seriously, who said anything about Reader? was that even a service a developer could rely on?). Even APIs they don't shut down, such as their OpenID login stack, they routinely replace (dropping a lot of the maintenance and support, which leads to weird failures) with "exciting new APIs" that have no migration path, such as with G+ Login (I am thankfully safe from this issue, as I did something crazy with Portable Contacts that gave me Google user IDs that I started storing before we even knew what they meant; so, to be clear: I have very little personal axe to grind on this, but others should be wary).

They thankfully decided to just go commercial-ish with Translate, but the same can't be said about Charts (which had a long deprecation window and a replacement, but the replacement is a fundamentally different kind of API that has different browser requirements and even different charting capabilities). They also happily will just shut down things like Google Checkout and offer no replacement at all for key use cases like "sell a physical product" (if you sell digital goods, you might be able to switch to Google Wallet Objects, an "exciting new API" released a couple weeks before Checkout was deprecated).

Google Checkout was certainly also targeted at enterprises, had a clear business model behind it, and had existed since 2006: everyone who built on that one (again, not me: I avoided Checkout like the plague) got only six months to migrate to a different provider (and figure out how they are going to handle any refund requests from recent customers, which will surely be horribly irritating as they won't be able to just tell Checkout to refund the transaction anymore). There is simply a patterned lack of care for people who may have built things on their stacks.

(Yes, some services have deprecation policies, but let's not forget that those guarantees were themselves attempts to regain faith due to a previous round of services that had been axed with little warning ;P. After the anger died down from that they started reversing course, shortening or removing the policy entirely after the previous guarantees expire. I can only imagine the people who keep citing these deprecation policies don't have much memory of how this has all been going down over the past few years ;P.)


Yes: you might be able to find alternatives to migrate towards... but if, by just acknowledging this pattern, you could avoid that bullet--which could easily come at the "least convenient moment" (such as when you now have some competition out of nowhere while attempting to launch a new product and raise funding), requiring you to suddenly drop everything for a couple weeks coming up with a new implementation of key infrastructure before the clock runs out--why wouldn't you?


Two years ago someone (not from Google, this time) used that argument against someone again, and I linked to my earlier comment. A few people downvoted my link, and I responded with the following (which was upvoted, and my link was also upvoted back to 0).



Multiple people (now three!) have downvoted this, but the idea that everyone who talks about Google service closures is complaining about one specific service--Google Reader--is the "meme" that we should all be quite tired of, as it is trotted out like a broken record every single time anyone points out reservations about Google's track record: no matter the context, no matter how many services have been shut down or cut down since, no matter what announcements Google makes about limiting their policies, and no matter whether the person mentioned Google Reader or not... it is even used against people like myself, who had absolutely no specific interest in Google Reader in the first place :/.

It is nothing more than a knee-jerk way to dismiss a rather easily defended position (due to the large number of closures that have been documented, ones that are more extreme or would have been considered less likely than Google Reader) by stereotyping someone's argument down to not just a "strawman" (an argument that is easily defeated), but essentially a purposely broken and laughable version of their argument so as to purposely ill-inform other people (which I feel the need to separate from a "strawman", as the goal is not to defeat the argument but to belittle the opponent). It is frankly one of the more underhanded argumentation tactics that people seem to enjoy defending here.

The reality is that Reader is a non-issue for most people here, as it isn't something you likely built your business or infrastructure around (and to the ones who ended up indirectly relying on it, that is a stretch to blame them for), but when Google randomly shuts down, cuts down, or entirely reboots APIs and services--something they have done many times, at the extremely painful end with things like Checkout and Login, but also with things such as App Engine or Charts--the fact that people seem to seriously have just forgotten how recent these things have been is distressing, and is made all the worse by people who insist on perpetuating the "you are just whining about Reader" lie :/.

I appreciate your passion.

My argument isn't that Google doesn't shut down B2B services. It's that I have seen no evidence that Google does it at a vastly outlying rate relative to the industry.

IANAL, but both AWS [0] and Azure [1] have 12-month deprecation policies; this is industry standard, as mentioned in the Google Developers blog you link.

[0] https://aws.amazon.com/agreement/

[1] https://azure.microsoft.com/en-us/support/legal/subscription...

(work at G)

> IANAL, but Both AWS [0] and Azure [1] have 12-month deprecation policies - this is industry standard, as mentioned in the Google Developers blog you link.

Something to keep in mind, but it's very difficult to find anything AWS has removed. They have APIs and features that have existed since their launch. While they may have such a policy, exercising it is essentially unheard of.

I'm not as familiar with Azure, but after a few Google searches I also can't find an API they've killed.

There is a difference between a rate in practice and a rate in policy. I don't have the numbers to know what any of the big cloud providers' rates actually are, but as someone who has worked for large companies and organizations, one year is simply not long enough for almost any change, especially one that is necessary for non-business reasons, regardless of who's suggesting it.

I appreciate your passion.

What a shitty way to put down his well supported argument, shame on you. This isn't about his passion - he is refuting your statements with actual facts to the contrary, and you try to make him out like he is passionate about the whole thing (planting the idea that he is somehow not thinking with his head, but with his heart) as opposed to taking a well-supported position against your handwaving that this is somehow all about market perception. Do you work for the marketing department or something?

Political slimery. Ugh.

What is the shutdown rate of AWS & Azure that you've calculated, and what is Google Cloud's? If no one actually has these numbers, let's assume the perception didn't come from nowhere.

This is one of those occasions where evidence and material facts are irrelevant, what actually matters is how people feel about using Google products. Is the average developer going to feel confident about recommending a Google service to their manager?

You've picked a tricky occasion for making this argument. AWS haven't announced that they're shutting down anything today, but the same can't be said for Google Cloud.

> evidence that Google doesn't really shut down Enterprise services at industry-irregular rates

I have not seen any actual numerical evidence on this (either for or against google).

One well reasoned comment on HN won't change the entire market's perspective.

I agree wholeheartedly.

My argument is that there is not sufficient evidence to claim that "Google shuts things down willy-nilly" within the context of B2B (conflation with B2C is the real issue here). Rational thinkers should be swayed by evidence, rather than "market perspective", especially within the context of what boulos posted.

For some perspective on "market perspective", see [0]:


(work at G)

That's the problem. Humans don't respond to logic, they respond to emotion. And every time you try to argue logically, it comes across as tone deaf.

I have a personal interest in GC; however, in my previous day jobs I've shipped real products with AWS. And in my time with AWS... I can't remember anything important (or unimportant) being shut down.

I can't shake the feeling, however wrong you believe the impression may be, that Google flails around in the dark while AWS hums along quietly.

I was using Google Wallet for the web. Not a huge number of subscribers, but substantial. Google shut it down with no migration path or notification to Wallet customers. Actually, it still seems to be running, but only for use with Google (e.g. buying Google Drive storage).

I don't hold too much of a grudge about it, but it has shaped my perception of Google services. Except for the extremely established ones, the possibility of an early shutdown should be factored in before choosing to go with a Google service.

Ever? What if they had a 10 year deprecation policy?

10 years would be a reasonably good start for a deprecation policy for an API-providing infrastructure company.

It is not uncommon to have 5-year-old systems in production, and a factor of two above that sounds like a good minimum.

You (and the other commenter from Google Cloud team) do not seem to have compassion for the fact that migrating application code is very expensive.

1. Take a talented, valuable developer off whatever important task they're working on now.

2. Get them to review whatever changes have supposedly been made between the old version and the current version.

3. Get them to read the existing code and think about places where it may break. (Often not obvious.)

4. Deal with doing the upgrade, which may often have dependencies that can break other unrelated things.

5. Write whatever code needed to make changes. (Often not obvious.)

6. Test on some sort of staging setup, which was probably written by some other person a while ago and may or may not have adequate coverage for this specific feature (since it was already working!!!).

7. Deploy and test in production.

8. Hope and pray nothing breaks.

All of that work (time and $$$) must be done to achieve ZERO benefit for your customer. High cost! Zero benefit! That makes developers (and founders) angry.

1 year deprecation is insanely short for public-facing infrastructure APIs. We should be talking about 10 years.

On the other hand, that's the reality of software development and should already be priced in. WannaCry was so painful because WinXP and Server 2003 were still in operation, and I bet that was because there were applications that just couldn't be migrated away.

Sure. But are you going to choose API provider "A" with a history of keeping stable APIs working 10+ years, or API provider "B" which can and does change stuff every year? What's your expected maintenance cost over the lifetime of the service?

Fair point.

On the other hand, Google is notorious for EOLing services that don't have a billion users, even compared to other cloud/web companies. That should be priced in.

I guess it could still make sense for tech-heavy, "we rewrite 50% of our code every month"-companies, like... Google?

Maybe they're mostly running their cloud to iron out the kinks in the APIs they use themselves?

> 1 year deprecation is insanely short for public-facing infrastructure APIs. We should be talking about 10 years.

That's a little extreme.

5 years ago was 2012. Do you have code written in 2012 still in production? Almost certainly.

10 years ago was 2007. Do you have code written in 2007 still in production? Less likely, but I bet there are enough "yes" answers out there.

Most of my company's clients have code from 1997 still in production, and at least a third have code from 1987 still in production. A few have code from 1977 still in production.

I'll give you one guess what sector :D

There's a difference between the code you wrote and the tools you used. Just to compare, 2007 was the time of Python 2.5, Java 6, and Windows Vista.

I would certainly hope people have upgraded to newer tools than those.

And 2017 is the time of Windows XP and WannaCry.

What I'm saying is, those things are all still running in prod. I got an employer off Java 6 just two weeks ago. The next step is all the Java packages that were deprecated 5 years ago.

Right, and Windows XP was deprecated 3 years ago after a 12 year lifespan. That people are still using XP is, honestly, inexcusable. If you want a 10 year deprecation policy for something someone else is maintaining, you should be prepared to pay for the salaries of the engineers who will maintain it for the 5 years after everyone else has moved off the system, or just build your own api ;)

But many of the companies whose APIs you might have started using in 2007 are long dead: bankrupt; acquihired; pivoted into completely different verticals...

If you truly want API longevity at that scale, we probably need some force beyond that of quarterly-profit-driven companies to actually achieve it. Maybe some nonprofit (the Internet Archive, maybe) could offer "API insurance" packages, where—in exchange for your premiums—they'd build and maintain API-compatible replacements for any and all services the insurance-holders rely on, if said services should happen to die.

Then you'd just need to insist (with force) that all network-RPC-based "client" libraries have configuration to let you easily change their endpoints and expected X.509 trust anchors...
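The suggestion above — that client libraries should treat endpoints and trust anchors as configuration rather than hard-coded constants — can be sketched in a few lines. This is a minimal, hypothetical illustration (the class, URLs, and paths are invented, not any real Google or AWS SDK); the point is only that repointing at an API-compatible replacement becomes a config change:

```python
import ssl
from urllib.parse import urljoin
from urllib.request import Request, urlopen


class PredictionClient:
    """Hypothetical RPC client whose endpoint and trust anchors are plain
    configuration, so an API-compatible replacement service (say, one run
    by an "API insurance" nonprofit) can be swapped in without code changes."""

    def __init__(self, base_url, ca_bundle=None):
        # Normalize so urljoin treats the base as a directory.
        self.base_url = base_url.rstrip("/") + "/"
        # If ca_bundle is given, trust only those anchors instead of
        # relying solely on the system certificate store.
        self.ssl_context = ssl.create_default_context(cafile=ca_bundle)

    def url_for(self, path):
        # Build the full request URL relative to the configured endpoint.
        return urljoin(self.base_url, path.lstrip("/"))

    def predict(self, model, body):
        # Issue the RPC against whichever endpoint was configured.
        req = Request(self.url_for(f"models/{model}/predict"), data=body,
                      headers={"Content-Type": "application/json"})
        with urlopen(req, context=self.ssl_context) as resp:
            return resp.read()


# Switching providers is a configuration change, not a rewrite:
official = PredictionClient("https://prediction.example.com/v1")
mirror = PredictionClient("https://archive-mirror.example.org/v1",
                          ca_bundle=None)  # could point at a private CA bundle
```

With this shape, the "insurance" replacement only has to be wire-compatible; callers never embed the vendor's hostname or certificate assumptions.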

Kudos for giving people a year to migrate! I know you're getting drubbed by some for the forced migration, but a year is more than generous, particularly if indeed most of your customers are hobbyists anyway.

Just out of curiosity, is this API used inside Google at all?

I ask because it would probably be pretty tough to deprecate this API if it was being used widely internally too.

Before folks start comparing this to Reader or pointing to the general "Google shuts things down" narrative: the Prediction API has been superseded by the array of ML APIs and Google Cloud ML, found at [0].

[0] https://cloud.google.com/products/machine-learning/

(work at G)

I downvoted this comment, because the general "Google shuts things down" narrative remains. Companies invested real money in integrating Google's existing services.

If Google's New API is so easy to switch to, and companies will get value from the switch, then they would do it automatically.

The truth is that the old API probably worked perfectly fine for some companies, who are now forced to spend time and money migrating.

If you were a purchasing manager in the IT department of a midwestern manufacturing company (for example), wouldn't this kind of service shutdown scare you?

AWS seems to support their services forever. That's what Midwestern manufacturing-type companies want. Hopefully Google will learn this, so AWS will have legitimate competitors.

Despite generally disagreeing with the comment I upvoted the parent because I think it provides an important counterpoint to the discussion.

I think it's too common to make lazy assumptions and not even read the article when it comes to a stereotype (true or not) such as Google's countless EOL'd services.

And personally I feel like Google's endgame isn't to provide the best consumer/enterprise cloud services a la AWS. They're just subsidizing their own endeavors in mass computing with presumably a tidy profit on the side. They can afford to alienate a customer here and there because they don't need a customer here and there. Which is an entirely rational thing to do for a corporation.

But "there are other options available for the customer" is not really much of a counterpoint, as there are almost always other options available.

People dislike shutdowns because they have invested time and resources in implementing something. Now they have to spend the same if not more time again. This has a cost involved.

How much work is created in our cultures because of the disposable nature of everything? Businesses use efficiency as an excuse to squeeze every last dollar out of workers, yet on the other end the system is exactly the opposite of efficient.

I would find an alternative from another company instead of migrating to their new API. Who can be sure they won't force the same migration next year, or in two years?

Everyone deprecates APIs in favor of more evolved ones. It's pretty standard. If they didn't, innovation would falter.

If Google was shutting down the capability of doing something, like reader, that would be a different issue.

There's a rather long discussion in this very thread about how AWS doesn't.

It still might not be a great idea for Google to routinely shut down APIs in favor of other APIs if they want people to actually rely on its cloud services. It doesn't look like Google is interested in trying to make the migration particularly easy either:

> Q: What will happen with my existing models?

> A: You must recreate your existing Prediction API models using Cloud Machine Learning Engine. To learn more, please read our documentation about creating models on Cloud Machine Learning Engine.

They _are_ giving a 1 year notice.

This. In GCP, they handle these things transparently, professionally, and proactively, usually giving you a personalized heads-up, alternatives, and reminders as early as possible. Facebook, on the other hand, has shut down critical APIs under my systems with literally a day's notice.

Does Facebook have paid APIs?

Good point. Definitely not as directly as Google, but in this case I guess I could make a fair point that their reporting and campaign management APIs for accessing campaigns we as a client pay 6-7 digit amounts per month for could be counted as such.

I think we agree that Google does a better job at shutting down APIs (paid or unpaid) than Facebook does. But I don't think people are bothered about the way Google shuts down services, but rather that they shut them down at all.

Which is one thing if your product is on a yearly (or quicker) release cycle. But it's a whole different level of pain if the project is in maintenance mode and the development team has already been reassigned to other projects.

People hate migration no matter what. Maybe Google should consider how sustainable a service is going to be before selling it.

If I had built a commercial product on that API, how much money would it have cost me to move to the new one? And would the new API let me provide a better service to my customers, or would it just be a pointless upkeep task?

If you build a product on an API you don't control, you'd better always have some realistic contingency plans on hand. A year's notice is probably one of the easiest cases to handle out of the many scenarios that might force you to switch APIs.
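One common contingency plan is to isolate the third-party API behind an interface you own, so a forced migration touches one module instead of the whole codebase. A minimal sketch, with all class and label names hypothetical (this is not any real Prediction API client):

```python
from abc import ABC, abstractmethod


class Predictor(ABC):
    """The only prediction interface the rest of the codebase may see.
    All names here are hypothetical, for illustration only."""

    @abstractmethod
    def predict(self, features: dict) -> str:
        ...


class LegacyPredictionAPI(Predictor):
    """Adapter around the deprecated hosted service."""

    def predict(self, features):
        # ...a real implementation would call the vendor's API here...
        return "label-from-legacy-service"


class LocalModel(Predictor):
    """Fallback: a model served in-house, ready if the vendor pulls the plug."""

    def predict(self, features):
        # ...a real implementation would run local inference here...
        return "label-from-local-model"


def make_predictor(provider: str) -> Predictor:
    # Swapping providers is a one-line config change, not an app-wide rewrite.
    return {"legacy": LegacyPredictionAPI, "local": LocalModel}[provider]()


result = make_predictor("local").predict({"x": 1})
```

The application code depends only on `Predictor`, so a year's notice becomes time to harden the fallback rather than time to hunt down every call site.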

Or build it on AWS.

Yeah, and we should all be happy when we are forced to migrate to new APIs... like the old APIs got rusty. This is one of the reasons why proprietary "cloud" services are so bad: once they are gone, you have little or no choice. Nobody would complain much if Google shut down its MySQL "cloud" service; Amazon is right there, and if Amazon doesn't fit the bill you can use a good old metal server. You can't say the same about Google Cloud Datastore, App Engine, or this ML API.

TensorFlow is open source and apparently very popular, so I'm guessing this migration would be moving in the direction of less lock-in.

My point is that the business owner/developer should decide when to migrate to a different API, or whether it even makes sense for them to migrate at all, not Google. The lock-in was a bad idea in the first place. I may exaggerate, but what's the point of having a new/better solution (TensorFlow) if the migration kills your business?

When will Google open source the Tensor Processing Unit, or something similar for FPGA accelerated TensorFlow?

To alleviate the pattern of shutting down systems, maybe you should fix your design process for services. Instead of having a healthy discussion about when a service will be EOL'd, you should ask a different question: is this service going to be supportable in the long run?

Maybe APIs should have standard ways of accessing the information so that transitions can be seamless. Right now your API offerings make people distrust how long you will support any API.

I have had long, healthy discussions about using Google APIs, but it's becoming harder and harder to advocate for your products (which are great) when everyone perceives that you will shut things down.

If it's so simple why not support the old API in the new systems? This just reminds us all of one of the costs of trying to depend on Google.

Why can't Google offer an open source bridge/solution for easy migration of deprecated APIs?

Perhaps the title of this post should be amended to something like:

Google Cloud Prediction API deprecated in favor of Google Cloud ML

How much of the API is the same? Would the average project require a 5%, 50%, etc rewrite?

One thing that I find so amazing about AWS (Amazon Web Services) is that I'm not aware of them ever EOLing one of their APIs (I could be wrong). We still have a bunch of code that uses SimpleDB, and even though they haven't promoted SDB for a while, they haven't EOLed it.

Can confirm that the EC2 API team is still maintaining perfect backwards compatibility with the EC2 SOAP API released in 2006. It's kind of funny that the known issues are all still valid, too :P https://aws.amazon.com/releasenotes/Amazon-EC2/353

Exactly this. Companies are still running on-prem software from decades ago, because it's not economical to migrate. Why should the cloud be different?

It's very scary to a purchasing manager at a company to know that when they invest $100k+ in developer/consultant time implementing a solution, that every 3-5 years they will need to spend an additional $xk to simply keep the solution alive!

AWS is doing this right. I can't imagine recommending Google for many cloud use cases, given their propensity to decommission software that companies use!

Salesforce gets this right too. All of their APIs get built with the intention of supporting them for 20+ years. Companies can't build on quicksand.

>Why should the cloud be different?

Because you're not running the stack. Relying on any PaaS that you can't lock down to a version and deploy on commodity infrastructure means you've accepted the reality of being vulnerable to platform changes. In a lot of cases this makes sense, because the benefits outweigh the cost of staying up to date, but the scenarios you describe are not among them.

I think this can be attributed to the companies' core values. Amazon is tirelessly customer-centric, and deprecating an API is painful for customers, especially when Amazon could write a wrapper for the new API and let people continue using the old one.

I was a big fan of SimpleDB. I feel like if they had just given it some love (i.e. feature updates and moving it to all-SSD), it would have been a really friendly, scalable "almost SQL" database. I found it way more flexible than DynamoDB for most data models. Maybe I am the only one, but I think they missed the boat with SimpleDB, and now it's just been relegated to the forgotten side of AWS.

I agree. I have a little service using SimpleDB and it just works. If they end-of-life it, then the service will come down; I already have my business partner's agreement on that.

They still use it internally too: if you ever use ElasticMapReduce (EMR), you'll probably see SimpleDB charges on your bill (very small amounts), because they use it under the hood to store cluster debugging information.

It's been unavailable to newly created accounts for quite a while.

That is very different than pulling it away from existing users.

Indeed, and I think this is a smart move on their part.

More impressively, they also maintain older instance types such as m1; they rarely pull anything off the shelf completely.

EC2 is ~easy though; they're just smaller VMs on newer hosts.

I'm amazed that things like SimpleDB are still around and supported even though only a handful of engineers in my ~200 person org have heard of it.

I believe they aren't allowed to change the hardware spec, so at some point they won't be able to get supplies anymore. As for SimpleDB: yeah, believe it or not, it still exists on the AWS bill, and most support engineers from Amazon don't really have a clue why it's there. I still don't know which service we use is using SimpleDB. Only a few cents, so nobody really cares.

Is that true? They'd still need to adhere to HVM vs. PVM, but that should be it.

Well, they advertised m1 as such-and-such a hardware spec. I don't know if their ToS states they would be allowed to change the spec without notifying customers. Maybe.

I believe m1 is only available on PV.

I think this is disingenuous. AWS keeps something running forever, that's true, but in what state?


(We were well aware of this thread, and others like it, internally)

Something that exists, but has absolutely no investment and unknown support, is probably not a great thing. Given that the recent S3 outage happened because they hadn't restarted their services in a long time[1], what problems exist here? I think that Google probably should have a SUPER long deprecation policy (24 months feels right), but a year isn't terrible - and I'd rather explicitly get a statement like this than the zombie services elsewhere.

Disclosure: I used to work at AMZN and am highly biased.

[1] https://aws.amazon.com/message/41926/

As a big and long-time user of AWS, I thought it was pretty obvious that SimpleDB was deprecated when DynamoDB came out, and I'm surprised you think maintaining a service in a "zombie" state is somehow worse than killing it. The way AWS plays this allows customers using the service to decide to move to a new service because it is better or cheaper, and on their own time scale (!), rather than being forced to drop everything else they are doing to go back to some old project and plan a migration.

I am so sick of companies thinking "a year (or even two!) to migrate is long enough for anyone": do you realize how many services I use? Do you realize how many random products I have built? If every service deprecates their API once every five years and gives everyone two years to update, and you use 20 services (this is common!), that means that every few months you are having to go back through all of your projects you have ever built to rewrite them against some new API. That is absolutely insane.

See my new comment at https://news.ycombinator.com/item?id=14344415 (I'm on a plane, so late to the party). I'm disappointed in this particular service deprecation, but your broader point applies regardless of the specific service: once you start using several, there's the potential for burning a lot of your engineering budget just rewriting and retooling.

On a meta level, I've seen an incredibly high number of risk averse companies just using EC2/S3 or GCE/GCS. That's not to say I disagree with using tons of services (that's how you get value!), but it's clearly food for thought.

I'd love to see historical data on the various cloud service turndowns, though I suspect there's too little track record to draw meaningful conclusions (and for some folks, abandonware is just as bad as turndown).

[Edit for disclosure since I wasn't in this thread before]

Disclosure: I work on Google Cloud.

So, then your answer is never deprecate?

> Q: What will happen with my existing models?

> A: You must recreate your existing Prediction API models using Cloud Machine Learning Engine. To learn more, please read our documentation about creating models on Cloud Machine Learning Engine.

Slightly edited and corrected answer should be

> A: You must recreate your existing Prediction API models using Cloud Machine Learning Engine. But, you know what? You should recreate them using Amazon Machine Learning, because we might again shut down this service in favor of our next platform. So if you care about your product, move directly to Amazon Machine Learning, so next time you will not be bothered by us. And thanks for using it.

Prediction API was closed source and proprietary. Cloud Machine Learning Engine is open-source TensorFlow.

You can run these models on prem, on AWS, etc.

"Let's build our entire stack around Google Cloud. I'm sure they won't shut down an API we depend on"

This one-year-long deprecation warning comes atop what is a fairly reasonable migration path for most users. I have yet to hear of a single user of any size whose “entire stack” relied on a single API, let alone this one, and who would be left without a faster, easier, or less expensive option in this move.

But you might have info I do not!

(Work at Google Cloud)

> This one-year-long deprecation warning comes atop what is a fairly reasonable migration path for most users

After working at multiple, large companies as well as doing government contracting, it would take a year just to propose and get funding for very minor changes. Sometimes it would take months to get critical bugs or security issues approved and taken care of.

A year works for many tech companies. Maybe even most. But it's laughably tone deaf when talking about larger companies or government organizations.

This is why the majority of companies I've worked with opt for APIs from, say, AWS and only consider using a Google API if there is literally no other choice. No one wants to be focused on X and suddenly have to allocate money to change something that's working to use something else.

If that's really the official attitude within Google, then I will stay away from using Google's APIs and will advise other companies to do the same.

We should be talking about 10-year deprecation for APIs. Not 1 year.

Software development is a huge capital expense. Talented developers are a limited resource. Forcing your paying customers to go spend a bunch of valuable $$$ and time in order to just move to the latest version of your API -- not even to gain any new functionality, but to keep what was working already! -- is simply unacceptable.

theDoug, how long do you typically get internally to move off of major services to new revisions at Google? Are there services that have been deprecated for more than a year, or even been resurrected? If so, then 1 year really isn't reasonable for someone who built on a service. At a big co I worked at, I would routinely see 3-5 year deprecation plans with the possibility of paid support after. A bank I was consulting for was using Java 1.3 in 2010 and paying Oracle millions to keep it patched, as they wanted to have a standard Java across the company and a key system's team was very afraid of upgrading (I think they were being really stupid, but it's their money).

I didn't mean to suggest you guys did the wrong thing -- it was just a joke. However Google is known for shutting down services whenever they feel like it and I think that you can see from some of the other comments in this thread that traditionally people expect things to be supported for much longer if they're going to build on top of it for something serious.

Totally misread this as google cloud predict-the-end-of-your-life API.

I misread this as a service to predict the EOL date for a given Google API.

Someone actually built a model back in 2013 to predict risks of various Google APIs shutting down:


The engineering team working on that all abruptly left their jobs once it hit alpha. They are now traveling abroad and spending more time with their loved ones.

I'm tired of Google shutting down or screwing up services. I come from marketing, and while I still use AdWords, I'm more and more moving to other platforms for research and spending my money. It's not only that: the moment you have a problem, you're on your own, while other companies have a customer service that actually replies, with more or less success.

I'm also learning programming, basically because I want to get into Data Science, and I'm doing everything I can to avoid using Google Cloud. Even though I found Bigtable to be easy to use and exactly the kind of stuff I wanted for a project, I forced myself to learn how to get Postgres and CouchDB databases up. I'm also using VPSes from a local provider (clouding.io).

It's like... I can see myself in the future spending time on modifying stuff because Google made X decision, instead of doing stuff I enjoy.

Just curious, were you using the Predict API?

The discussion seems to have pretty well settled on Google's policies around service deprecation, but in case anyone's interested in chatting about the replacement API, I'm excited about the long-term prospects of their ML Engine product.

The open-source distributed tensorflow stuff is pretty nice, but it still requires a huge amount of hand coding and tuning the machinery, reminding me quite a lot of just rolling the damn thing in MPI yourself. I'm very excited to see where distributed tf will be in a year or two, but it's a chore today.

The hope is that using Google's secret sauce to auto-distribute the execution graphs and associated data ingestion makes things "just work". At the moment, the documentation and examples for that are a bit all over the place, and require writing models to conform to the newish tf.contrib.learn.Experiment API, which is also a bit underdocumented and underexampled. Using it for very large datasets (say >tens of TB) seems pretty challenging right now (to me at least).

At any rate, I've been banging around on it for a few weeks and am really hopeful. I will follow Cloud ML Engine's career with considerable interest.

tl;dr Cloud Prediction is deprecated in favor of Cloud Machine Learning (https://cloud.google.com/ml-engine/)

I think Google just made the job of some sales manager at AWS Rekognition way easier.

You may be confusing the Google Cloud Vision API with the Google Cloud Prediction API.

I know they are different, but Google needs to realize that their approach to consumer products does not transfer well to cloud computing.

From the FAQ:

Q: Why is Google Cloud Platform no longer supporting Cloud Prediction API?

A: As we've expanded our Cloud Machine Learning services, many of the use-cases supported by Cloud Prediction API can be better served by Cloud Machine Learning Engine.

Doesn't matter. To the general masses it just looks (and, well, you will have to change code) like they shut down another thing people relied on after a short time. Bad move, Google. Next time, just stop offering it to new customers and quietly keep it running for the people who already have it.

The general masses neither know nor care about this API. "Business" folks will see they have a year to transition to an improved version of the product.

Man, there are some enterprises still running mission-critical stuff on Windows XP. The idea that a year to transition something is a long time window for enterprises is absurd.

... and that they will need to allocate a budget to this migration project, and make sure it is delivered successfully. If they do not understand the benefits, "business" folks will just see it as an annoyance.

Business folks: So instead of new features we have sold to customers we have to spend X weeks on this feature which was working perfectly fine?

Since nobody here has yet admitted to using this API, these business folks are rather hypothetical so far.

(I expect some exist, but we haven't heard from them yet.)

Does Google use ML to predict the failure of one service?

Why not just make the old API be a wrapper for the new one?
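That suggestion amounts to an adapter shim: keep the old API's call shape and translate to the new service underneath, so existing callers never notice. A minimal sketch in Python (all class and method names here are invented for illustration; neither the Prediction API nor the ML Engine client actually looks like this):

```python
class MLEngineClient:
    """Stand-in for a hypothetical new-service client."""

    def predict(self, model_name, instances):
        # Real code would issue an HTTPS request to the new API;
        # here we just return a canned response per instance.
        return [{"label": "demo", "score": 0.9} for _ in instances]


class LegacyPredictionShim:
    """Exposes the old API's shape, delegating to the new client."""

    def __init__(self, new_client):
        self._client = new_client

    def trainedmodels_predict(self, model_id, csv_instance):
        # The old-style call took a single instance; adapt it to the
        # new API's batch format, then unwrap and reshape the result
        # into the response structure legacy callers expect.
        result = self._client.predict(model_id, [csv_instance])[0]
        return {
            "outputLabel": result["label"],
            "outputMulti": [{"label": result["label"],
                             "score": str(result["score"])}],
        }


shim = LegacyPredictionShim(MLEngineClient())
print(shim.trainedmodels_predict("my-model", ["5.1", "3.5"]))
```

The catch, as the parent comments note, is that someone still has to own and secure the shim forever, which is exactly the maintenance burden the shutdown was meant to avoid.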

Didn't see that coming, eh?

I initially read this title as an API to predict when Google Cloud APIs would be end of lifed.

TL;DR: if your business relies on a Google API/service, your business is at risk.

It didn't meet our needs in classifying hotdogs vs everything else.

Good on Google for giving people some lead time on deprecating this service.

There is no way I will ever build a product, or spec anything for a client, that depends in some way on a Google offering other than public search.

Now this is going to be a moneymaker, this is a sensible business move.

People misunderstand Google: they offer things for free because it lets them collect data (search, gmail, etc.) or it drives competition out (1TB free for BigQuery, for example).

For everything else, they either shut them down or charge money =)

The Predictions API costs money. The replacement also costs money. What are you trying to say?
