Looks like the instances they use are not GPU instances, so I’m not sure what the point of this is. Also they do not take into account EC2 spot pricing, which saves a ton of money and is the main reason I use AWS over alternate clouds.
(I work for GCP)
In fact they have preconfigured Windows instances: https://cloud.google.com/compute/docs/quickstart-windows
Do you have any numbers on that take? Or just a general feel that it works out?
For specific numbers: the AllThePeople.net web scraping and profile compilation runs on Windows m1.small instances. The on-demand price is 8 cents per hour each, while a spot instance costs me only a couple of cents per hour. I set the majority of my spot requests at the minimum possible price, and leave a couple at the higher 8-cents-per-hour point, so at least some work gets done if prices go up. Those numbers are mostly from memory; it’s been a while since I last fiddled with it.
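For illustration, the mixed-bid strategy described above can be sketched as a quick cost calculation. The per-hour prices come from this comment; the fleet size and the spot-price scenarios are made up for the example:

```python
# Sketch of the mixed spot-bid strategy: most requests bid at the market
# minimum, a few bid at the on-demand price so work keeps getting done
# if the spot price spikes. Prices are from the comment above; the
# instance counts below are invented for illustration.
ON_DEMAND = 0.08  # $/hr, m1.small on-demand price
SPOT_LOW = 0.02   # $/hr, typical spot price near the minimum bid

def hourly_cost(total, high_bid_count, spot_price,
                low_bid=SPOT_LOW, high_bid=ON_DEMAND):
    """Fleet cost per hour at a given spot price.

    Low-bid instances run only while the spot price is at or below
    their bid; high-bid instances survive up to the on-demand price.
    Running instances pay the current spot price, not their bid.
    """
    running_low = total - high_bid_count if spot_price <= low_bid else 0
    running_high = high_bid_count if spot_price <= high_bid else 0
    return (running_low + running_high) * spot_price

# Quiet market: all 10 instances run at the spot price.
print(hourly_cost(10, 2, 0.02))  # 0.20/hr, vs 0.80/hr on-demand
# Price spike to 5 cents: only the 2 high-bid instances keep working.
print(hourly_cost(10, 2, 0.05))  # 0.10/hr
```

The trade-off is visible in the second case: most of the fleet drops out, but the high bids keep some work moving through the spike.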
Machine learning workloads typically run in batches; OpenAI even went to the effort of publishing a Kubernetes library for autoscaling clusters to minimize cost and manage them.
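The core idea behind batch-driven autoscaling is simple: size the cluster to the job queue and scale to zero when idle. A toy sketch (this is not OpenAI's actual library logic, just the decision it automates; the function and parameter names are invented):

```python
import math

def desired_nodes(pending_jobs, jobs_per_node, max_nodes):
    """Toy batch-autoscaling policy: provision just enough nodes for
    the queued work, capped at a budget limit, and scale the cluster
    to zero when there is nothing to do (so you pay nothing at idle).
    """
    if pending_jobs == 0:
        return 0
    return min(max_nodes, math.ceil(pending_jobs / jobs_per_node))

print(desired_nodes(0, 4, 10))    # idle -> 0 nodes, zero cost
print(desired_nodes(9, 4, 10))    # 9 jobs, 4 per node -> 3 nodes
print(desired_nodes(100, 4, 10))  # big batch -> capped at 10 nodes
```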
It is ironic that those pipelines are sometimes cheaper to build than building on fixed-cost alternatives. But I don't think that's a given. (I would be surprised, but not shocked, to find I'm wrong about that.)
Answering the comments here:
GPUs: makes no sense for word2vec (and many other ML algos).
Azure: no support from Microsoft for our program. The Docker image and all code is 100% public though, so please run it yourself & report the results. We'll be happy to update.
AWS spot: might still try that.
Thanks for the feedback!
1. "Softlayer is also the only platform that allows the provisioning of bare metal[...]": so does AWS:
2. "Softlayer and AWS charge in hourly increments." AWS offers per-second billing: https://aws.amazon.com/blogs/aws/new-per-second-billing-for-...
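To make the billing-granularity point concrete, here is a sketch comparing the two models (per-second billing with a one-minute minimum, as described in AWS's announcement, vs. rounding up to whole hours; the function name and example rate are mine):

```python
import math

def billed_cost(runtime_seconds, hourly_rate,
                per_second=True, minimum_seconds=60):
    """Cost of a run under per-second billing (with a one-minute
    minimum charge) versus legacy round-up-to-the-hour billing."""
    if per_second:
        secs = max(runtime_seconds, minimum_seconds)
        return hourly_rate * secs / 3600
    return hourly_rate * math.ceil(runtime_seconds / 3600)

# A 10-minute benchmark at an illustrative $0.08/hr rate:
print(billed_cost(600, 0.08))                    # ~ $0.0133
print(billed_cost(600, 0.08, per_second=False))  # $0.08, a full hour
```

For short-lived benchmark runs like these, the difference is roughly a factor of six, so the billing model genuinely matters for the comparison.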
3. Echoing @tedivm's comment, for a real comparison you'd really have to include GPUs. Do you plan to write a follow-up?
(Disclaimer: I work for AWS)
1. AWS launched this last week, after the benchmarks were finished & published -- can't take the blame for that :) Exciting news though.
2. Launched during the benchmarks -- sorry we missed this!
3. I disagree that CPUs are "not real", but, yes, we already ran some GPU benchmarks (alluded to in this blog post, too). Stay tuned.
That's surprising. Usually the complaint is that IBM's stuff is complicated to figure out while Google and others are "user-friendly".
> Softlayer is also the only platform that allows the provisioning of bare metal servers amongst the 3 cloud service providers,
I wonder if there's an opportunity there to deploy Kubernetes on it and manage it yourself. Cloud orchestration used to be the super-secret money-making sauce, but with some solutions now open source, maybe that's changing and bare-metal servers will become popular again. I also wonder, in general, whether Google talking about Borg and sponsoring Kubernetes was aimed at weakening Amazon's AWS position.
Amazon recently announced bare-metal servers.
Yep. IBM Cloud Private, a self-managed Kubernetes-based orchestration platform, can be deployed on bare metal.
... sounds about right
I encountered the same error with Google Cloud ML Engine. No errors or warnings when shutting down. Any idea why?
I am skeptical about using it; I've had better performance with Keras/Theano on CPU-only workloads.
(I've used Hetzner in the past -- in my experience their hosts are not very reliable, but they are super cheap.)
Look, as a consumer of GPU compute, I want there to be as much competition in this space as possible, so I'm rooting for you guys. But as it stands, you're nowhere even close.