Additionally, it turns out that very few people were using it. That's not an excuse, but it is the reality of ongoing investment. I fought hard for this to be our expected 1-year term, and we had hoped to have a somewhat cookie-cutter guide for "Here's how you reproduce this with TensorFlow". Quite frankly, the handful of users of the Prediction API likely aren't the kind to happily port to TensorFlow (and this service has existed since the App Engine-only days, so they're mostly hobbyists, but I still care).
It's never great to "have to" turn down a service, but ultimately, when forced to choose between letting the code rot into a potential security nightmare and giving the small set of users some time to retool, the decision was made to go with the latter. "No new features" is an easy way to keep something running forever, but keeping the damn thing secure requires a team to stay on top of it.
Disclosure: I work on Google Cloud.
I can tell you, from my experience at mid-market size non-tech clients looking to move to the cloud, that Google's reputation for shutting down services is known and is a negative. AWS' reputation for leaving services alive is also known, and is a positive.
When companies buy software, they don't want to have to worry about the future- they just want it to work.
"No one ever got fired for buying IBM" was a big driving force for IBM's success. "No one ever got fired for choosing AWS" seems to be true these days.
If you want that phrase to be "No one ever got fired for choosing Google Cloud", you probably shouldn't deprecate services like this. I wonder if anyone has been fired for choosing Google Cloud after Google shut down one of its services.
My counter-argument is the same as yours: find someone to handle it, for as long as it makes sense (I personally jumped on a grenade for a short period of time, but that's not enough). As the commenter below argues, that's probably not indefinite, but it is likely greater than 1 year.
Quite frankly, this service should probably have never gone to GA. Plenty of products don't work, but instead of just going out into the marketplace to see what sticks, we should have what it takes to say "This isn't ready". (We have this now, we didn't then).
I appreciate you being patient with us, and understand how this affects your decisions going forward.
Google needs to communicate strictly with its users. I like that they have some sort of focus and trim junk, but they have to really communicate that to their users.
You have to be upfront about how many of your APIs are in beta. I remember Gmail being in beta for 5 years.
Maybe this should be the new bar: if you can't get your shit working in 5 years, then can it. If you can, and enough users use it, then support it for a lifetime.
Despite parent's logical argument and evidence, and despite further evidence that Google doesn't really shut down Enterprise services at industry-irregular rates, market narrative has been defined (mostly by Readergate), and you are more inclined to accept said narrative?
Alternatively, is there sufficient evidence beyond "market narrative" and Readergate that has allowed you to reach your conclusions?
(work at G)
In fact, one example, showing a potentially different philosophy across companies is AWS SimpleDB. SimpleDB is no longer on the AWS console - DynamoDB has replaced it, as AWS believes DynamoDB is a superior service.
That said, SimpleDB still functions perfectly fine. In fact, I recently spoke to an AWS manager who stated that they will leave SimpleDB out there as long as people keep using it.
Just one metric: How many services has Google Cloud shut down and how many has AWS shut down? Off the top of my head, I know of two Google Cloud services, and after Googling (Thanks!) I can't find a single AWS example.
The fact remains: if Google knows this reputation is out there but still doesn't believe it's worth fighting with more than 12 months' notice and what appears to be a small investment - is the reputation undeserved? If larger Google Cloud services start losing money, why should customers count on Google to keep those services up?
AWS comes out with new services all the time, and even if they are smaller / niche, I feel comfortable recommending them to my clients. For example Data Pipeline never really seemed that popular, and AWS has recently come out with a better product, AWS Glue, which the former Data Pipeline team has transitioned to.
Will AWS force customers to switch to the new, better AWS Glue? No, they will leave Data Pipeline, and their customers' investments alone.
I mean, look at Stripe for a counterexample: they seem to be super solid and are still on version 1 of their API; it literally never changes.
Oh, come on... seriously? This argument is trotted out every single time Google shuts down something: "you are still upset about Google Reader". No: we are upset about all of those other things that get shut down, and people from Google shifting blame to Reader over and over and over again is nonsensical at this point: there are so many many examples of things being shut down, and Reader was probably the least interesting to those of us who care about this sort of thing :(.
Here is something I wrote four years ago when someone (also from Google!) tried to pull the "you are just upset about Google Reader" argument.
They've shut down much more than Reader (seriously, who said anything about Reader? was that even a service a developer could rely on?). Even APIs they don't shut down, such as their OpenID login stack, they routinely replace (dropping a lot of the maintenance and support, which leads to weird failures) with "exciting new APIs" that have no migration path, such as with G+ Login. (I am thankfully safe from this issue, as I did something crazy with Portable Contacts that gave me Google user IDs I started storing before we even knew what they meant; so, to be clear: I have very little personal axe to grind here, but others should be wary.)
They thankfully decided to just go commercial-ish with Translate, but the same can't be said about Charts (which had a long deprecation window and a replacement, but the replacement is a fundamentally different kind of API that has different browser requirements and even different charting capabilities). They also happily will just shut down things like Google Checkout and offer no replacement at all for key use cases like "sell a physical product" (if you sell digital goods, you might be able to switch to Google Wallet Objects, an "exciting new API" released a couple weeks before Checkout was deprecated).
Google Checkout was certainly also targeted at enterprises, had a clear business model behind it, and had existed since 2006: everyone who built on it (again, not me: I avoided Checkout like the plague) got only six months to migrate to a different provider (and figure out how to handle any refund requests from recent customers, which will surely be horribly irritating, as they won't be able to just tell Checkout to refund the transaction anymore). There is simply a patterned lack of care for people who may have built things on their stacks.
(Yes, some services have deprecation policies, but let's not forget that those guarantees were themselves attempts to regain faith due to a previous round of services that had been axed with little warning ;P. After the anger died down from that they started reversing course, shortening or removing the policy entirely after the previous guarantees expire. I can only imagine the people who keep citing these deprecation policies don't have much memory of how this has all been going down over the past few years ;P.)
Yes: you might be able to find alternatives to migrate towards... but if, just by acknowledging this pattern, you could avoid that bullet--which could easily come at the "least convenient moment" (such as when you suddenly have competition out of nowhere while attempting to launch a new product and raise funding), requiring you to drop everything for a couple of weeks to come up with a new implementation of key infrastructure before the clock runs out--why wouldn't you?
Two years ago someone (not from Google, this time) used that argument against someone again, and I linked to my earlier comment. A few people downvoted my link, and I responded with the following (which was upvoted, and my link was also upvoted back to 0).
Multiple people (now three!) have downvoted this, but the idea that everyone who talks about Google service closures is complaining about one specific service--Google Reader--is the "meme" that we should all be quite tired of, as it is trotted out like a broken record every single time anyone points out reservations about Google's track record: no matter the context, no matter how many services have been shut down or cut down since, no matter what announcements Google makes about limiting their policies, and no matter whether the person mentioned Google Reader or not... it is even used against people like myself, who had absolutely no specific interest in Google Reader in the first place :/.
It is nothing more than a knee-jerk way to dismiss a rather easily defended position (due to the large number of closures that have been documented, ones that are more extreme or would have been considered less likely than Google Reader) by stereotyping someone's argument down to not just a "strawman" (an argument that is easily defeated), but essentially a purposely broken and laughable version of their argument so as to purposely ill-inform other people (which I feel the need to separate from a "strawman", as the goal is not to defeat the argument but to belittle the opponent). It is frankly one of the more underhanded argumentation tactics that people seem to enjoy defending here.
The reality is that Reader is a non-issue for most people here, as it isn't something you likely built your business or infrastructure around (and to the ones who ended up indirectly relying on it, that is a stretch to blame them for), but when Google randomly shuts down, cuts down, or entirely reboots APIs and services--something they have done many times, at the extremely painful end with things like Checkout and Login, but also with things such as App Engine or Charts--the fact that people seem to seriously have just forgotten how recent these things have been is distressing, and is made all the worse by people who insist on perpetuating "you are just whining about Reader" lie :/.
My argument isn't that Google doesn't shut down B2B services. It's that I have seen no evidence that Google does it at a vastly outlying rate relative to the industry.
IANAL, but both AWS and Azure have 12-month deprecation policies - this is the industry standard, as mentioned in the Google Developers blog you link.
Something to keep in mind, but it's very difficult to find anything AWS has actually removed. They have APIs and features that have existed since their launch. While they may have such a policy, exercising it is essentially unheard of.
I'm not as familiar with Azure, but after a few Google searches I also can't find an API they've killed.
There is a difference between a rate in practice and a rate in policy. I don't have the numbers to know what any of the big cloud providers' rates actually are, but as someone who has worked for large companies and organizations, one year is simply not long enough for almost any change, especially one that is necessary for non-business reasons - regardless of who's suggesting it.
What a shitty way to put down his well-supported argument; shame on you. This isn't about his passion - he is refuting your statements with actual facts to the contrary, and you try to make him out to be passionate about the whole thing (planting the idea that he is somehow thinking not with his head, but with his heart), as opposed to taking a well-supported position against your handwaving that this is somehow all about market perception. Do you work for the marketing department or something?
Political slimery. Ugh.
I have not seen any actual numerical evidence on this (either for or against google).
My argument is that there is not sufficient evidence to claim that "Google shuts things down willy-nilly" within the context of B2B (conflation with B2C is the real issue here). Rational thinkers should be swayed by evidence, rather than "market perspective", especially within the context of what boulos posted.
For some perspective on "market perspective", see:
I have a personal interest in GC, however, in my previous day jobs I've shipped real products with AWS. And in my time with AWS... I can't remember anything important (or unimportant) being shutdown.
I can't shake the feeling, however wrong you believe the impression may be, that Google flails around in the dark while AWS hums along quietly.
I don't hold too much of a grudge about it, but it has shaped my perception of Google services. Except for the extremely established ones, the possibility of an early shutdown should be factored in before choosing to go with a Google service.
It is not uncommon to have 5-year-old systems in production, and a factor of two above that sounds like a good minimum.
1. Take a talented, valuable developer off whatever important task they're working on now.
2. Get them to review whatever changes have supposedly been made between the old version and the current version.
3. Get them to read the existing code and think about places where it may break. (Often not obvious.)
4. Deal with doing the upgrade, which may often have dependencies that can break other unrelated things.
5. Write whatever code is needed to make the changes. (Often not obvious.)
6. Test on some sort of staging setup, which was probably written by some other person a while ago and may or may not have adequate coverage for this specific feature (since it was already working!!!).
7. Deploy and test in production.
8. Hope and pray nothing breaks.
All of that work (time and $$$) must be done to achieve ZERO benefit for your customer. High cost! Zero benefit! That makes developers (and founders) angry.
1 year deprecation is insanely short for public-facing infrastructure APIs. We should be talking about 10 years.
On the other hand, Google is notorious for EOLing services that don't have a billion users, even compared to other cloud/web companies. That should be priced in.
Maybe they're mostly running their cloud to iron out the kinks in the APIs they use themselves?
That's a little extreme.
10 years ago was 2007. So you have code written in 2007 still in production? Less likely, but I bet there are enough "yes" answers out there.
I'll give you one guess what sector :D
I would certainly hope people have upgraded to newer tools than those.
What I'm saying is, those things are all still running in prod. I got an employer off Java 6 just two weeks ago. The next step is all the Java packages that were deprecated 5 years ago.
If you truly want API longevity at that scale, we probably need some force beyond that of quarterly-profit-driven companies to actually achieve it. Maybe some nonprofit (the Internet Archive, maybe) could offer "API insurance" packages, where—in exchange for your premiums—they'd build and maintain API-compatible replacements for any and all services the insurance-holders rely on, if said services should happen to die.
Then you'd just need to insist (with force) that all network-RPC-based "client" libraries have configuration to let you easily change their endpoints and expected X.509 trust anchors...
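The endpoint-override idea above can be sketched as a tiny client wrapper. Everything here (class names, URLs, fields) is hypothetical and just illustrates the shape such configuration might take:

```python
from dataclasses import dataclass
from typing import Optional
from urllib.parse import urljoin

@dataclass
class ClientConfig:
    """Connection settings kept outside the client code itself."""
    base_url: str = "https://api.example.com/v1/"  # hypothetical vendor endpoint
    ca_bundle: Optional[str] = None                # alternate X.509 trust anchors

class PredictionClient:
    """Thin client whose endpoint can be redirected to a drop-in replacement."""
    def __init__(self, config: Optional[ClientConfig] = None):
        self.config = config or ClientConfig()

    def request_url(self, path: str) -> str:
        # Every request is built relative to the configurable base URL, so
        # pointing at an API-compatible replacement is a one-line change.
        return urljoin(self.config.base_url, path)

# Default: talk to the original vendor.
default_client = PredictionClient()

# After a shutdown: the same code, pointed at an API-compatible mirror
# run by the hypothetical "API insurance" provider.
mirror = PredictionClient(ClientConfig(
    base_url="https://mirror.example.org/v1/",
    ca_bundle="/etc/ssl/mirror-ca.pem",
))
```

With the trust anchors configurable as well, the replacement service's certificates wouldn't need to chain to the original vendor's CA.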
I ask because it would probably be pretty tough to deprecate this API if it was being used widely internally too.
If Google's New API is so easy to switch to, and companies will get value from the switch, then they would do it automatically.
The truth is that the old API probably worked perfectly fine for some companies, who are now forced to spend time and money migrating.
If you were a purchasing manager in the IT department of a Midwestern manufacturing company (for example), wouldn't this kind of service shutdown scare you?
AWS seems to support their services forever. That's what Midwestern manufacturing-type companies want. Hopefully Google will learn this, so AWS will have legitimate competitors.
I think it's too common to make lazy assumptions and not even read the article when it comes to a stereotype (true or not) such as Google's countless EOL'd services.
And personally I feel like Google's endgame isn't to provide the best consumer/enterprise cloud services a la AWS. They're just subsidizing their own endeavors in mass computing with presumably a tidy profit on the side. They can afford to alienate a customer here and there because they don't need a customer here and there. Which is an entirely rational thing to do for a corporation.
People dislike shutdowns because they have invested time and resources in implementing something. Now they have to spend the same if not more time again. This has a cost involved.
How much work is created in our culture because of the disposable nature of everything? Businesses use efficiency as an excuse to squeeze every last dollar out of workers, yet on the other end the system is exactly the opposite of efficient.
If Google was shutting down the capability of doing something, like reader, that would be a different issue.
> Q: What will happen with my existing models?
> A: You must recreate your existing Prediction API models using Cloud Machine Learning Engine. To learn more, please read our documentation about creating models on Cloud Machine Learning Engine.
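For the simple regression and classification models the Prediction API handled, "recreate your model" mostly means refitting on the same training data in whatever framework you move to. A minimal illustration, assuming a one-feature regression (this is plain-Python ordinary least squares, not Cloud ML Engine code, and the data is made up):

```python
def fit_ols(xs, ys):
    """Return (slope, intercept) minimizing squared error over the data."""
    n = len(xs)
    mean_x = sum(xs) / n
    mean_y = sum(ys) / n
    # Closed-form least squares for a single feature.
    cov = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
    var = sum((x - mean_x) ** 2 for x in xs)
    slope = cov / var
    return slope, mean_y - slope * mean_x

# Training rows that might previously have been uploaded to the API as CSV.
xs = [1.0, 2.0, 3.0, 4.0]
ys = [2.0, 4.0, 6.0, 8.0]
slope, intercept = fit_ols(xs, ys)

def predict(x):
    return slope * x + intercept
```

The point being: the trained model itself is not portable out of the deprecated service, but as long as you still hold the training data, the model is reproducible anywhere.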
Maybe APIs should have standard ways of accessing information so that transitions can be seamless. Right now, your API offerings make people distrust how long you will support any API.
I have had long healthy discussions about using google APIs but it's becoming harder and harder to advocate your products (which are great) when everyone perceives that you will shut things down.
Google Cloud Prediction API deprecated in favor of Google Cloud ML
It's very scary to a purchasing manager at a company to know that when they invest $100k+ in developer/consultant time implementing a solution, that every 3-5 years they will need to spend an additional $xk to simply keep the solution alive!
AWS is doing this right. I can't imagine recommending Google for many cloud use cases, given their propensity to decommission software that companies use!
Because you're not running the stack. Relying on any PaaS that you can't lock down to a version and deploy on commodity infrastructure means you've accepted the reality of being vulnerable to platform changes. In a lot of cases this makes sense, because the benefits outweigh the cost of staying up to date - but the scenarios you describe are not among them.
I'm amazed that things like SimpleDB are still around and supported even though only a handful of engineers in my ~200 person org have heard of it.
I believe m1 is only available on PV.
(We were well aware of this thread, and others like it, internally)
Something that exists, but has absolutely no investment and unknown support is probably not a great thing. Given that EC2's recent outage was because they hadn't restarted their services in a long time, what problems exist here? I think that Google probably should have a SUPER long deprecation policy (24 months feels right), but a year isn't terrible - and I'd rather explicitly get a statement like this than the zombie services elsewhere.
Disclosure: I used to work at AMZN and am highly biased.
I am so sick of companies thinking "a year (or even two!) to migrate is long enough for anyone": do you realize how many services I use? Do you realize how many random products I have built? If every service deprecates their API once every five years and gives everyone two years to update, and you use 20 services (this is common!), that means that every few months you are having to go back through all of your projects you have ever built to rewrite them against some new API. That is absolutely insane.
On a meta level, I've seen an incredibly high number of risk averse companies just using EC2/S3 or GCE/GCS. That's not to say I disagree with using tons of services (that's how you get value!), but it's clearly food for thought.
I'd love to see historical data on the various cloud service turndowns, though I suspect there's too little track record to draw meaningful conclusions (and for some folks, abandonware is just as bad as turndown).
[Edit for disclosure since I wasn't in this thread before]
Slightly edited and corrected answer should be
> A: You must recreate your existing Prediction API models using Cloud Machine Learning Engine. But, you know what? You should recreate them using Amazon Machine Learning, because we might again shut down this service in favor of our next platform. So if you care about your product, move directly to Amazon Machine Learning so that next time you will not be bothered by us. And thanks for using it!
You can run these models on prem, on AWS, etc.
But you might have info I do not!
(Work at Google Cloud)
After working at multiple, large companies as well as doing government contracting, it would take a year just to propose and get funding for very minor changes. Sometimes it would take months to get critical bugs or security issues approved and taken care of.
A year works for many tech companies. Maybe even most. But it's laughably tone deaf when talking about larger companies or government organizations.
This is why the majority of companies I've worked with opt for APIs from, say, AWS and only consider using a Google API if there is literally no other choice. No one wants to be focused on X and suddenly have to allocate money to change something that's working to use something else.
We should be talking about 10-year deprecation for APIs. Not 1 year.
Software development is a huge capital expense. Talented developers are a limited resource. Forcing your paying customers to go spend a bunch of valuable $$$ and time in order to just move to the latest version of your API -- not even to gain any new functionality, but to keep what was working already! -- is simply unacceptable.
I'm also learning programming, basically because I want to get into data science, and I'm doing everything I can to avoid using Google Cloud. Even though I found Bigtable easy to use and exactly the kind of thing I wanted for a project, I forced myself to learn how to get Postgres and CouchDB databases up instead, and I'm using VPSes from a local provider (clouding.io).
It's like... I can see myself in the future spending time on modifying stuff because Google make X decision, instead of doing stuff I enjoy.
The open-source distributed tensorflow stuff is pretty nice, but it still requires a huge amount of hand coding and tuning the machinery, reminding me quite a lot of just rolling the damn thing in MPI yourself. I'm very excited to see where distributed tf will be in a year or two, but it's a chore today.
The hope is that using Google's secret sauce to auto-distribute the execution graphs and associated data ingestion makes things "just work". At the moment, the documentation and examples for that are a bit all over the place, and they require writing models to conform to the newish tf.contrib.learn.Experiment API, which is also a bit under-documented and under-exampled. Using it for very large datasets (say, >tens of TB) seems pretty challenging right now (to me at least).
At any rate, I've been banging around on it for a few weeks and am really hopeful. I will follow Cloud ML Engine's career with considerable interest.
Q: Why is Google Cloud Platform no longer supporting Cloud Prediction API?
A: As we've expanded our Cloud Machine Learning services, many of the use-cases supported by Cloud Prediction API can be better served by Cloud Machine Learning Engine.
(I expect some exist, but we haven't heard from them yet.)
People misunderstand Google: they offer things for free because it lets them collect data (search, gmail, etc.) or it drives competition out (1TB free for BigQuery, for example).
For everything else, they either shut them down or charge money =)