Amazon’s cloud Macs cost $25.99 a day. 77 days of usage would buy you your own (theregister.com)
55 points by alexellisuk on Dec 2, 2020 | 79 comments



This is also why machine learning scientists should build their own GPU workstations and servers.

Someone did the calculations:

https://www.reddit.com/r/MachineLearning/comments/9iqcr3/d_w...

This guy built a 4 x 2080 Ti workhorse for $7,000, and it would probably be even cheaper now:

https://l7.curtisnorthcutt.com/the-best-4-gpu-deep-learning-...
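
Back-of-envelope break-even, with my own assumed numbers rather than anything from those links: a comparable 4-GPU cloud instance runs somewhere around $12/hour on demand (roughly AWS p3.8xlarge list price), so:

    # Hypothetical break-even: the $7,000 workstation vs renting a 4-GPU
    # cloud instance. The hourly rate is an assumption; check current pricing.
    workstation_cost = 7_000   # USD, from the linked build
    cloud_rate = 12.0          # USD/hour, assumed 4-GPU on-demand rate
    breakeven_hours = workstation_cost / cloud_rate
    print(f"~{breakeven_hours:.0f} hours (~{breakeven_hours / 24:.0f} days "
          f"of continuous training) to break even")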


Even with that build, it might still take several days to train some deep learning models. Distributed training across multiple machines would be much better. You pay the true cost either way: by waiting for your single-machine model to train, by setting up complicated infrastructure on your own (either in house or in the cloud), or by using a cloud/third-party distributed computing option.


> by setting up complicated infrastructure on your own (either in house or in the cloud), or by using a cloud/third-party distributed computing option.

The argument is/would be: if your business is heavily based upon deep learning and is that compute-heavy on an hourly basis, you may in fact save millions daily by having your own in-house infrastructure.

It seriously isn't as complicated to run as AWS and co. want to say it is to push their own sales. But it does cost capital investment and maybe 1 or 2 IT guys.

There's an inflection point where it makes sense.


What do you expect the salary is for an IT guy who knows how to set up and maintain distributed machine learning infrastructure on-prem?


Not that much. It's really no different from setting up other servers. You can also rent services instead of maintaining a full-time position.

IT has been commoditized; it's not a special domain of expensive and rare experts anymore.


Don't fool yourself, it is not hard.


For about three times the price you can get double that performance in one machine (dual-socket with 8-10 GPUs). You can get a lot done with that, and in some cases waiting days is actually acceptable. If you need more power than that, it does indeed get more complicated and the trade-off might change.


Isn't that true for most of AWS/EC2? A t2.2xlarge with 8 cores and 32 GB RAM costs $685 over 77 days. You can probably get an 8-core, 32 GB RAM Intel/AMD machine for that price. Not a Xeon, not "server-grade", but still, the 77 days are not surprising to me. You pay a premium for the machine being available on demand, being in a datacenter, and being integrated into the AWS ecosystem.
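
For what it's worth, the arithmetic checks out against the on-demand rate (assuming the us-east-1 list price of roughly $0.37/hour for a t2.2xlarge):

    # Sanity check on the $685 figure; rate is from memory, verify on AWS pricing.
    hourly = 0.3712                    # USD/hour, assumed t2.2xlarge on-demand rate
    print(f"${hourly * 24 * 77:.2f}")  # -> $685.98 over 77 days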


Thank goodness for this offering. Do you know of any other cloud provider offering Mac instances that is HIPAA compliant and will sign a BAA?


If you need to run a Mac in some kind of server role in a secure DC that has a list of certifications, this AWS product may be for you. You are also very niche.


Note that you cannot use these Mac "servers" to actually serve stuff. It's just for CI and development; that is in the ToS.


MacStadium claims their Atlanta and Las Vegas data centers are HIPAA compliant. Not sure about a BAA, you’d probably have to ask them.


Aren't we paying for the service of not having to buy and physically manage our own as well? Regardless, this seems high.


You could spin it up on demand though. Don't think many envision this as a 24/7 workhorse in the cloud.


You have to rent it for a minimum of 24 hours. I don't know if this is in 24-hour increments, but let's say you need 1 build a day; then you are already paying for it every day.

There must be some existing tooling to let you integrate a physical Mac you have lying around into your build system, so comparing rental Macs vs bought Macs isn't so strange.


> There must be some existing tooling to let you integrate a physical Mac you have lying around into your build system, so comparing rental Macs vs bought Macs isn't so strange.

At work, we use GitLab CI for this and have one of the developers' old MBPs sitting on a rack shelf for this very purpose.


Ah the old load bearing Mac Mini approach :)


It was said on launch day: they bill for the first 24 hours upfront, then in one-second increments after.


The issue with spinning it up "on demand" is that there is a minimum 24-hour billing period for each machine you start because of Apple's EULA. So, each time you want to spin up an AWS Mac, you're paying at least $25.99.
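
A minimal sketch of that billing model as I understand it; the $1.083/hour rate is just $25.99/24, and the per-second billing after the first 24 hours is from the launch-day comment above:

    # Sketch of mac1 pricing: 24-hour minimum charge, then usage-based billing
    # (modeled here at hour granularity for simplicity).
    HOURLY = 25.99 / 24  # ~$1.083/hour

    def mac1_cost(hours_used: float) -> float:
        """Cost of one allocation; the first 24 hours are billed regardless."""
        return HOURLY * max(24.0, hours_used)

    print(f"${mac1_cost(1):.2f}")   # a one-hour build still costs $25.99
    print(f"${mac1_cost(30):.2f}")  # 30 hours -> $32.49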


Azure DevOps actually includes quite a bit of Mac-based compute for free (no M1 yet). I use it to prepackage TensorFlow.


Only useful if you have AWS credits.

Holy crap this is expensive though


I think many of these cloud offerings are shockingly expensive. The first time I heard what we paid for the Ubuntu images we have on Azure, I thought my manager was joking...


When you compare raw hardware costs vs VM costs it seems ludicrous. But it's more reasonable than you expect.

First, if you commit for 1 or 3 years (like you'd do with hardware), you can get 30-70% discounts already.

Then you have to factor in the costs of replacement hardware, energy, cooling, and networking. But the big one is people. Someone has to install all that hardware, fix it if it breaks, etc. They provide all of that for you.

If you are tiny (<10 servers located in your business building) it can be worthwhile because it's just a side job of someone you need anyway. After that you need quite a large scale for it to become really attractive.
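
To put the discount point in rough numbers (taking 60% as an example from the 30-70% range, applied to the $685 t2.2xlarge figure upthread):

    # Illustrative effect of a long-term commitment discount.
    on_demand = 685.0   # 77 days of t2.2xlarge at list price, from upthread
    discount = 0.60     # assumed 3-year commitment discount
    print(f"${on_demand * (1 - discount):.2f}")  # -> $274.00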


> But it's more reasonable than you expect.

If it was reasonable, there would not be other companies providing the exact same product at a fraction of the cost:

https://www.linode.com/pricing/


Aren't Ubuntu images free to use? Or are you talking about the ones with support entitlements?


We pay a monthly cost for every Ubuntu VM that we have. Perhaps there is some "get X free VMs" deal included with a certain Office 365 license or something, but ours are certainly not free. You could probably make the exact same calculation as they have done here: how long can you rent an Ubuntu VM in Azure for the price of buying actual hardware running Ubuntu, etc.


I don't doubt that the VMs themselves cost money. I only doubt that the Ubuntu images cost money. In other words, the cost for the VMs should be the same if you installed Debian (or whatever home-rolled distro).


Yes, no additional cost to install Ubuntu on the VMs.


It's bare metal. I wonder if you can run VirtualBox on it and have a VM in your VM.


Yo dawg! But seriously, as a regular VirtualBox user on macOS: it works, but it's a bit shit and slow. It will probably work better with VMware; I've heard their equivalent, Fusion, might be free now.


Probably not. Most virtualisation doesn't allow arbitrary levels of nesting. But I'm not sure how AWS's Macs are set up so who knows.


I know AWS has certain bare metal instances, expensive as all get-out, that support virtualization.


And then run an emulated Acorn Archimedes. Emulating a BBC Master.


Imagine how much the desire for Macs would collapse if they let people compile iOS / macOS apps on Linux.

The fact people tolerate or celebrate Apple's heavily restricted ecosystem is absurd. Apple is overly greedy and resting on their laurels.


>Imagine how much the desire for Macs would collapse if they let people compile iOS / macOS apps on Linux.

Probably not much after the latest M1 release.


Hm. Looked up the M1 because all I had seen was "low power consumption, lots of battery life" which I took as "Facebook machine".

Looks cool. Apple might be turning the corner from "lol, here's the same phone from 2 years ago but now with three cameras, give me $1,100"


This seems misleading. If you needed to use anywhere close to 24 hours of cpu computation per day, wouldn’t you not use a cloud provider? And most tenants will only use a fraction of that per day, so 77 days of CPU usage could take years to use.


I thought the point is with the new announcement/EULA terms that a day was the minimum amount of time you could rent from Amazon for. I could have misunderstood of course.


As far as I know, there's a minimum tenancy period of 24 hours, but after that it's like "standard" EC2 where you're billed per second.


I believe you rent a day to buy in, but that’s a day of CPU usage measured by clock time. So it expires based on your usage, not 24 hours later.


I thought the EC2 pricing model was that, even when the instance is shut down, you continue paying, since it is still reserved.


This is true for just about any architecture. It's usually a lot cheaper to buy your own machine and put it in a closet if you need it doing work most or all of the time.

The only time that doesn't make sense is if it's customer-facing and needs better than 99.9% uptime or needs a lot of bandwidth, as those things are better done from a data center.


Yeah, but then you gotta keep it somewhere, network it, keep it powered, provision users on it, fix it when it breaks, etc.

Most of cloud computing's value is just not having to do that utterly mundane crap. Amazon can charge just a tiny sliver less than it costs in some mid-level admin's fully loaded cost in dealing with it, and then they and their customers make/save money: win-win.

I'm the last person to defend AWS (they built a custom on-prem AWS datacenter for the CIA!) but this is a silly criticism of their new service.


Honestly, they can charge quite a bit more than the amortized cost. You can't hire 10% more admins if you only have one. And there's substantial training and cognitive cost to manage a more heterogeneous network if your staff only work with Linux or Windows but you have one or two Macs for a particular pipeline, which is a common use case given the price of Apple servers.


Seconding this. The value is especially obvious if you’re not running a single server: these are targeted for things like build & test infrastructure, where these would be one more node in a larger system. Not duplicating that or having to maintain a VPN, etc. is worth a lot more than the pricing differential.


No it's not. Do the math; it is a serious rip-off, waaaaay overpriced.


Statements like that need a TCO (Total Cost of Ownership) calculation to back them up.


Both sides need to be doing this. So many uncited "AWS is cheaper overall" estimations, with hand-wavey "but you have to hire more IT staff" as their justification.

Plus, articles that do the TCO calculations (there was one recently that pointed out a 200% premium for AWS) are still derided, because people really prefer to argue with their feelings on this topic.

We get it, you don't want to run your own infrastructure. How much is that distaste worth to your company?
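
Even a crude model forces the hidden terms into the open. Something like the sketch below, where every input is a placeholder to replace with your own numbers:

    # Crude TCO skeleton: cloud vs on-prem over a fixed horizon.
    # All inputs are placeholders; the point is enumerating the terms.
    years = 3
    cloud_monthly = 10_000            # instances, storage, egress
    hw_capex = 150_000                # servers, firewall, spares
    colo_monthly = 600                # rack, power, cooling, bandwidth
    admin_yearly = 0.25 * 120_000     # slice of an admin's salary

    cloud_tco = cloud_monthly * 12 * years
    onprem_tco = hw_capex + (colo_monthly * 12 + admin_yearly) * years
    print(f"cloud ${cloud_tco:,.0f} vs on-prem ${onprem_tco:,.0f}")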


You realize you can install Parallels on a Mac and create multiple Mac VMs, right? So just run the base OS on the hardware and never screw with it and run test/build VMs on it.


Keenly. Now think about what happens as you go from being a one person show doing everything on your laptop to a larger operation. You have servers in the cloud, build in the cloud, and tests in the cloud. You can do everything for every other platform that way, nicely locked down on a secure network protected with firewalls, access controlled with ephemeral IAM credentials, nice CloudFormation/Terraform to manage everything — and someone says we need to have a build runner on one dude’s Mac because we can’t treat Apple platforms like Windows or Linux. Now you need a VPN or other network holes, you’re delayed & paying network egress pulling stuff down over the internet, the owner of that Mac has to be careful about installing system updates to avoid breaking the build, etc.

Unless everyone is working pro bono, the time you spend on ops would pay for a lot of Mac mini hours, and it takes away time that could be going towards actual product improvements.


Depends very much on how many of those you have. We have one AMD Threadripper box and one Mini for builds and some other various machines for testing, and we've had to screw with one of them maybe once a year.

Modern machines are less finicky than older hardware. Spinning disk is almost dead, and if you avoid it you've eliminated most of your moving parts that fail often. Fans are the only other thing. Solid state hardware can hum away for years and years.

This is for small scale of course. If you try to run a larger scale DC it gets to be a pain since even with rare failures they will now happen often if you have tens or hundreds of machines. I wouldn't try to run a DC these days unless there was a compelling reason or a huge cost saving.


Exactly. Put that Mac Mini in your data center with redundant power, redundant networking and complete failover to another data center. I've been there and done that - it ain't cheap. Your rack enclosure with redundant power sources and Fibre Channel Adapters is going to run you more than the Mac Mini itself. Add in the need for high availability and failover and yeah, you're looking at some real money. When you look at the total cost of putting a Mac Mini in your data center, and not just the cost of a Mac Mini itself, then you'll see this is a good deal.


The fuck are you doing that a Mac Mini needs failover? At that point, dear lord have mercy on that dev team and whatever horrible ideas management have pushed.


Paying a team of iOS developers $150/hr to create apps. The Mac Mini is their CI/CD server - if it goes down then they're toast. A team costing $1,500/hr sits idle because I'm too cheap to pay $25.99/day to keep them productive? That doesn't make any sense.


And, for what it's worth, this markup is less than on a server. When evaluating AWS and other clouds versus direct ownership and the salaries to run devops on the servers: one month of AWS purchases the entire server stack, a hardware firewall, more than enough storage, and has more than enough left over for salaries for half a dozen devops techs. The cloud is a ripoff.


Sad to see this downvoted, as it is the absolute truth (except "left over for salaries for half a dozen devops techs" is a bit exaggerated; the rest is spot-on).

AWS/cloud is extremely expensive. Seen many companies bankrupted by it when they could've run their own infrastructure for pennies on the dollar. For some workloads (very variable ones, in particular) it's totally worth it. Most of the time though, the cost premium is a killer. If you're running a business where customers are not price-sensitive and your margins are very high, AWS makes sense. That's not most of us, I think.


Actually, the "left over for salaries for half a dozen devops techs" part is correct. My total hardware purchase was $55K, versus $96K per month for an equal computational setup at AWS. That leaves over $40K in the first month alone that could be distributed to devops staff.


It depends on who you are/where you work.

Have you ever had to interact with enterprise IT, especially purchases? Also, do you know how much things cost when using enterprise IT?

I'm talking about many huge companies with a lot of money (something on the scale of hundreds or thousands of companies).


Also, paying for cloud compute is an operating expense vs the capital expense of buying hardware yourself.

I'm not going to pretend to know all of the tax implications, but it can actually make a significant difference to the company because the operating expense can be fully written off. The cost is therefore offset slightly vs depreciating the value of physical hardware over a number of years.
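
A toy illustration of that difference; the numbers are made up, straight-line depreciation over five years is just one common schedule, and real tax treatment varies:

    # Toy opex-vs-capex example: $50K of spend in year one.
    spend = 50_000
    dep_years = 5                              # assumed straight-line schedule

    cloud_deduction_y1 = spend                 # opex: fully deductible this year
    hardware_deduction_y1 = spend / dep_years  # capex: $10K/year for 5 years
    print(cloud_deduction_y1, hardware_deduction_y1)  # 50000 10000.0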


I did this math, I've got an MBA. Physical hardware wins in nearly all cases.


For startups you can't beat used hardware from eBay and some IT experience in one of the cofounders. This is especially true for development machines (CI runners, compile caching, GPU servers, etc.).


Yeah, if you're a startup you'd better have someone tech oriented and you'll be able to do a lot of things for cheap.

If you're a tech startup, then I don't even know how you'll be able to do anything without someone technical :-)


> If you're a tech startup, then I don't even know how you'll be able to do anything without someone technical :-)

I've spoken to some YC founders. You can definitely raise funds without a (real) technical founder.


It's also, comparatively, extremely cheap to get powerful dedicated servers from places like Hetzner. The price/performance absolutely dwarfs the cloud services.


(Serious question)

What do you do with that hardware? Do you go to a colocation and stand the servers up there? Run them from home? What are your options for something like this?

Edit: I realized afterwards you may not be talking about public-facing servers


I've done a few different things with these types of machines. I also know a few people who run some big companies off infra like this.

The key for a small team is finding a colo provider with really good remote hands. Essentially, you set up your rack/cage and leave good instructions and room for expansion. Then you buy a server from Dell with financing, ship it to your DC, someone installs an OS and racks it (or you PXE boot from your rack ;) ), and then you can provision anything you want. It's essentially like cloud providers, but scaling is slower.


Colo the servers. I had 17 servers + the hardware firewall and a massive data store in a single cabinet for $600 a month. The colo I selected was a former Enron data center, so it had a massive Internet pipe.


How did you deal with data redundancy / backups?


If your use case is 1/10th the monthly cost of equivalent cloud resources you can have 5x the regions and still have budget for plane tickets and remote hands.


This is probably true, thanks


I had 3 duplicate environments: dev, staging, and production. Backups were an automated process between data stores, integrated into the environments' normal operation. The entire setup was automated to the degree that it only required casual glances at resource limits about once a week, with minor devops maintenance occasionally. We're all smart engineers producing wonderful tech; automating a server stack is intern work.


Buy new hardware and just do the IT yourself; it is seriously not hard. That's the flip side of the cloud propaganda: running and maintaining servers is easy peasy.


If you're that confident, why not start a cloud yourself and rake in the money?


That's a deliberately bad headline. The vast majority of these will launch, build an iOS binary, save and quit within an hour. The other 5% will be people testing an app.


You can't rent them for less than 24 hours.


Yes, but my point is that they won't be used every day. I used to work in a dev shop that had to compile 20+ applications every month or so. Our automated Mac build box would require poking and resurrecting every other time it was used. This would have saved us a fortune.


> That's a deliberately bad headline. The vast majority of these will launch, build an iOS binary, save and quit within an hour. The other 5% will be people testing an app.

Headline is spot-on. Minimum rent is a day per Apple's terms of use.

Discussion of this can be found from yesterday: https://news.ycombinator.com/item?id=25262303


You have to pay for 24 hours on every start up.


If it's too expensive for you, you are probably not the target audience ;-)

Edit: This is downvoted, but it's the truth.

Do they really think that a company known for long-term support of its cloud services (AWS) contacted and worked with what is in many fields a competitor (Apple) to build a service they invested a ton of money in, if they didn't think there was a market for it? There is: enterprise build, test, and deployment services for companies using AWS.

So I have to repeat, if you think that's expensive, you're not the target audience.



