Hacker News
EC2 Dedicated Hosts (amazon.com)
176 points by jeffbarr on Nov 23, 2015 | 142 comments

I have been with Hetzner since 1999. I hosted with AWS for two or three years before moving back to Hetzner, and I keep wondering when AWS will offer anything to match Hetzner's price/performance.

Someone from AWS called me up out of the blue the other day to ask for input, interview me about my needs, and offer a quote. After a phone call or two and two emails, in which I mentioned the exact specs and price I get from Hetzner and expressed genuine interest in a competitive quote, I waited a while with no reply and then received the following:

"I'm sorry for my late response. I’ve been pretty busy these days…"

"I had a look at what your technical specifications below and I'd like to understand a bit more what type of business you're doing."

"One of AWS's main strengths lies in the scalability of our platform."

"How much is the % of usage of your dedicated server? What happens if it reaches the maximum usage? At AWS, you can start with smaller instances and use auto-scaling to scale up or down automatically according to your traffic."

"Is scalability a challenge for you?"

There was no quote attached. :)

Hetzner's price/performance is an elephant in the room it seems.

Hehe, I've been using various European dedicated-server providers (including Hetzner, but also server4you, OVH, Strato) for 10-15 years as well, with varying numbers of servers. All of this happened during the cloud revolution, and I've always wondered "why would anybody pay these outrageous AWS prices?" After all, the much-hyped scalability quickly becomes much less relevant if you can get 5x-10x the power and traffic in the form of dedicated servers at the same cost, plus the ability to quickly order a few more servers (they are automatically allocated to you from a pool of ready-to-use machines - standard practice at dedicated hosting providers for quite some time now) and have them installed in a matter of minutes, provided you've automated your setup process. Sure, getting rid of the additional servers is harder, since rentals are typically monthly, but - did I mention that you start with 5x-10x the power anyway, which greatly reduces the need for quick up/downscaling?

Aside from huge customers like Dropbox, which most likely get deep discounts on AWS, I don't see who else could get a better deal out of AWS (or any similar cloud provider) than they would by directly renting dedicated servers, even considering the much-hyped scalability factor - which most customers probably never end up needing anyway, because, let's face it, there's only so much attention going around on the net, so only a very limited number of things can "go viral" while many, many more services compete for that attention. That's at least true for the European market. AFAIK, the US hosting market is a bit different and generally more expensive, so AWS might have a better price point relative to the competition there.

How do you handle massive traffic spikes? Hetzner seems fine for small projects / personal / small-company websites with predictable load, run by people who don't mind setting up all the services they need themselves. It really sounds like you just aren't the customer AWS wants to attract.

"How do you handle massive traffic spikes?"

Well, how would you handle massive traffic spikes? Through a combination of vertical and horizontal scaling? Through having excess capacity? Except that I would probably want to start with something fast and inexpensive to begin with.

If you wait until the spike hits before you spin up your VM you're still too late.

I don't follow your line of reasoning. You seem to suggest that to build a scalable service you would prefer to use servers with necessarily poor price/performance?

Or are you saying that it's not possible to use dedicated machines to build a scalable service? Or that one should only use VMs, with their inefficiency and resource contention? How do you reason about disk seek performance? What happens when the spike hits, and another AWS customer on the box starts stealing CPU?

Actually, traffic spikes were the reason we moved off AWS. A single dedicated machine at Hetzner gives 10x the headroom at a fraction of the cost. That buys you time and capacity when you need it.

> If you wait until the spike hits before you spin up your VM you're still too late.

precisely. this is the other elephant in the AWS room: the only way to survive a spike without service degradation while VMs spin up on that platform is through lambdas / S3-served pages / API gateway - but even lambdas lag behind traffic.

but then you need to build your whole architecture for it

anyway, I'm running on AWS right now, but for its other advantages and services, not for its scalability or price/performance.

Baking your images and setting your scaling thresholds lower should largely deal with that.

The thing is, though, the scaling is nice but most folks just want the auto-recovery. You stick your app across 3 zones, you have autoscaling, and you can run with a couple of ops-savvy devs and largely forget about it.

You no longer need to pay for a dedicated sysadmin who knows how to manage a datacentre when you have a small number of technical staff. The extra hosting bills are less than hiring that other person.

precisely. recoverable multi-zone postgres without paying a sysadmin. we can manage basic maintenance on our own, but setting up something like that requires a skilled consultant in short bursts at each upgrade/maintenance check, and for a small operation like ours it's too expensive if we want to match what amazon gives.

we do have a custom AMI that just fetches and builds the latest snapshot release when we need auto-scaling or auto-recovery, but our real problem is that we currently depend on sticky sessions, so users accumulated on the initial instances get sucky performance. (and yes, we are currently working on fixing it; we can't outright serialize sessions to dynamo because, reasons)

This goes back to my earlier comment: you aren't AWS's target customer, in that you are fine with a bare-bones provider and handling all of this yourself.

Clearly many people want all of this outsourced for them and will pay a premium to do so. I always find it a bit silly that every time an AWS service is announced, it's compared to some bare-minimum cheaper provider out there. Of course that's cheaper, but it's really irrelevant.

No, the context here was "EC2 Dedicated Hosts" and pinned VMs vs dedicated servers, which is a fair comparison. Or were you actually speaking about things like S3 or WorkMail and traffic spikes related to those?

People pay a premium for the currently 50+ other integrated services in the AWS ecosystem they can leverage vs. spending time and money doing it themselves if they went with another provider, dedicated hosts are no different re: pricing.

Not everyone gets these massive traffic spikes. You don't have to be a small company to have easily predictable demand on production servers.

The company website can take a beating, but onboarding new customers takes weeks if not months. Plenty of time to plan and scale.

We handle traffic spikes by having massive overcapacity while still paying _much_ less than at AWS.

In 2008 we started a browser game that attracted more than 200,000 players. Until 2009 we had to rent big fat servers that could cope with the load and cost over €4,000 per month. Since then, hardware has become so fast that our PHP software, with 500k lines of code and 200GB of data in databases, can run on 3-4 dedicated servers that we can rent from almost every German provider for around €300-500 per month. Since 2014 we have everything on virtual servers hosted by Hosteurope, because we wanted managed hosting like AWS, and it was even cheaper than running our old dedicated machines.

At that time we looked at AWS and were totally shocked by the prices (we were also shocked by how difficult it is to understand the AWS universe)! We're paying less than a tenth of what even the cheapest 1-year reservations at AWS would cost, for much more performance than we need. Even if our player base tripled overnight, our virtual machines would easily cope, and upgrading them takes two clicks and a reboot.

Also: we see it as a plus that we don't have to deal with all the Amazon stuff, because they have so many APIs, names and so much complexity in their system. Nobody wants to learn all this, and nobody wants to deal with multi-location and replication just because Amazon has unreliable servers. German providers seem to have datacenters and servers that don't crash every once in a while. I'm flabbergasted by how often I read that AWS has broken disks, crashed servers, crashed datacenters, network problems and so on. Since 2008 we have had two or three problems in the datacenters of our various providers, with a total downtime of 8 hours. That works out to just a little less than 99.99% uptime over all those years. I really wanted to try the shiny new thing called AWS, but no matter how I looked at it, it is just a very, very costly solution for something with built-in unreliability.
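
The uptime claim above checks out as back-of-envelope arithmetic (assuming roughly seven years of operation, 2008 through 2015):

```python
# Uptime from the figures above; the 7-year span is an assumption.
hours_per_year = 365 * 24
years = 7
downtime_hours = 8

uptime = 1 - downtime_hours / (years * hours_per_year)
print(f"{uptime:.4%}")  # prints 99.9870%
```

So indeed just shy of "four nines" over the whole period.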

My main issue with hetzner is the lack of any sort of firewall in front of the servers. Your chosen OS is essentially directly exposed to the internet.

how do you interconnect three nodes on hetzner without a VPN, and without paying twice as much as on aws? like a private network.

Man, I'd really like to just see bare metal from them. To me that was part of the allure of containers -- so much more efficient than multiple VMs and yet you get a lot of similar benefits, as long as you don't mind some security tradeoffs.

Running ECS is cool and all, and I know they do some container-specific VM optimizations, but I still know I'm running a kernel on a kernel, even if my VMs happen to share a host. I'd love seeing the flexibility of the ECS software, but on metal.

I'm sure Amazon would love it too (think about how much money they could save / make) but there are a lot of security concerns that need to be addressed. Running code in a virtualized kernel is a lot more secure than running in the real kernel with permissions limited. Theoretically an exploit could allow someone to take over the machine, whereas if the machine is virtualized it's much harder to take over the host.

Doesn't matter if the machine is taken over if it's properly contained at the network level. Dedicated server hosting companies have been doing this for years.

With VMs the customer can share a physical host with others and not have security concerns. This cohosting lets aws pack customers more efficiently and have much better margins for the same quality service.

With containers on bare metal, they could not cohost customers.

Uhh, even with containers you really can't cohost in a multi-tenant system. Containerization doesn't securely segregate arbitrary code the way traditional virtualization does, and even virtualization still has security issues.

I'm sure AWS is improving on this, as they need to for Lambda and its underlying containerization architecture. But everyone else? It's going to be a while before you can call it "secure".

It looks like you are in violent agreement

We are! And I love that phrase :)

Amazon offers billing by the hour for these, meaning they probably move between customers a lot more than traditional dedicated servers. A bare-metal user can do a lot of bad things unless you trust your hardware manufacturing 100%, e.g. by messing with the firmware.

At the hardware purchasing level Amazon operates with for AWS hardware, I'm sure they're able to (if they required it) spec custom BIOS requirements that perform integrity verification of subsystems before permitting power up to continue. Whether they do or not is a different question (although I'm sure someone could test this theory).

They could also just spec that firmware be stored on ROM rather than flash.

Or that a hardware jumper needs to be set on the motherboard before any firmware updates can be done.

Rackspace offers a cloud cum bare metal service called OnMetal. They flash the firmware before putting a new user on a machine.

Unless “the machine is taken over” means firmware-level compromise (e.g. disk controller, NIC controller, …), in which case the next customer allocated to that machine can be compromised despite full disk erase between customers.

I'll agree with this attack vector, although it'd be trivial to have your PXE boot routine re-apply known-good firmware to the NIC and disk controller between customers and verify it with checksums (unless there's a 0-day out there for the NIC controller, in which case you're owned regardless).
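
A minimal sketch of the checksum comparison at the heart of that between-customers check. The firmware-dump mechanism is vendor-specific and assumed here, and a compromised controller can of course lie about its own flash contents, so this only helps if the dump path is trusted:

```python
import hashlib

def sha256_hex(blob: bytes) -> str:
    """SHA-256 digest of a firmware image, as hex."""
    return hashlib.sha256(blob).hexdigest()

def verify_firmware(dump: bytes, known_good: str) -> bool:
    """Compare a dumped firmware image against a checksum recorded at burn-in."""
    return sha256_hex(dump) == known_good

# Hypothetical flow: at provisioning time, dump each controller's flash
# (via vendor tooling, not shown) and refuse to hand over the box on mismatch.
golden = sha256_hex(b"\x00" * 1024)        # stand-in for the known-good image
assert verify_firmware(b"\x00" * 1024, golden)
assert not verify_firmware(b"\xff" * 1024, golden)
```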

Normally the PXE boot code is, itself, stored in firmware.

Whichever component starts first wins. If the PXE boot code is in the NIC, you can theoretically verify the rest of the system first. If it's in the BIOS, the BIOS needs to treat the NIC as untrusted and determine whether it's compromised.

Security is hard :/

ROM-based NIC? Why not? If you're going into large-scale dedicated servers, this is not exactly impractical. You could probably also just disconnect the flash-enable pins on the ICs.

Assuming the firmware has no say in the flashing process.

It definitely does matter if the host machine is taken over... an attacker could access other customers' instances, shut down the system to try to disrupt EC2, etc.

If I've paid for the bare metal machine then by definition I already have root on the host machine. Securing that is just as much my problem at that point as is securing my EC2 instances.

The security concern you mention (taking over the host) is moot if I've paid for root access to a physical machine (bare metal). By definition I've already paid for the host; there's nothing else to break out of.

SDN could easily be handled by an off-system component. I forget who, but someone who presented at ONS 2015 [1] mentioned the use of FPGAs for this.

[1] https://www.youtube.com/playlist?list=PLhigroIsbIuet0qlIGBQi...

I have run containers on bare metal and in Amazon. Performance can be noticeably slower in Amazon on machines with similar specifications. A detailed explanation why is in this informative video by Bryan Cantrill https://www.youtube.com/watch?v=coFIEH3vXPw. It is a talk about how layering containers on top of virtual machines is a recipe for poor performance and wasted resource utilization. He equates it to developers “being given fire, then proceeding to put the fire out.”

so would you say then that the next logical step would be to try out joyent baremetal container runtime?

You might want to take a look at SoftLayer; they have bare-metal servers. However, if you are okay with ARM servers, I'd suggest looking at Scaleway, which provides bare-metal ARM servers and manages it all for you as well.

If you want bare metal for performance, you're not going to get it on a tiny ARM chip

You can get managed Docker containers on bare metal with Bluemix which runs on Softlayer.

> Running ECS is cool and all, and I know they do some container-specific VM optimizations, but I still know I'm running a kernel on a kernel, even if my VMs happen to share a host.

The cloud is great for distributed systems, and for those the "kernel on a kernel" cost becomes insignificant next to the power you gain from horizontal scalability - letting multiple computers talk over a network (not to mention that network latency dwarfs it).

I'd be willing to bet that running on absolute bare metal is a sort of niche market in cloud computing that Amazon considers too small to get into.

"kernel on a kernel" is one of the worst things you can do to a distributed computing environment. This is why Google does containers.

I have double-digit thousands of machines for one application, and an Excel spreadsheet outlining the additional hardware that would need to be purchased to handle the same workload virtualized. I pull it out every time some recent Stanford grad tells me the cloud is the future.

Would you mind explaining a little about how bare metal is different from this? Isn't a dedicated host a real physical machine?

You have a dedicated host to run VMs on, but you don't really have access to the host OS and cannot run custom code on it.

Previously, your VMs were allocated to hosts as Amazon saw fit.

This is giving you control of the allocation on one specific VM host. So you have, for example, a box that is hosting x m4.larges that you can fill how you want. Plus you have information about the hardware, so software that's licensed per physical machine can be run on each of those instances.

No, I think it's still virtualised, isn't it? But on a machine that they guarantee only you are using?

Yes. The key is that you can control exactly what combination of your own VMs land on each physical machine.

Agreed. But maybe that's a later step in the roadmap. Let's hope!

What do you mean bare metal? Does that mean physical server?

This is great for (warning: potentially limited use case ahead) engineering companies* who run large analysis clusters for specialty simulation/CFD software - especially those frustrated by their vendors' current (or non-existent) cloud processing offerings.

A lot of software in this space, for instance, has a fixed-MAC-address requirement for its license servers, and you report/pay based on the number of cores. While you can sometimes get around that, it would certainly void agreements and wouldn't hold up in an audit.

In some companies I've worked, this could drastically reduce the capital costs for engineers needing overpowered workstations that are analogous to Ferraris you only take out on weekends.

*who are already using AWS in other parts of their infrastructure.

Wouldn't you still see a virtualized network interface with a random MAC address from within the VM on the dedicated host where the software is running?

An Elastic Network Interface keeps the same MAC address for its lifetime, so attaching one gives you a fixed MAC - but that also works without dedicated hosts.

Other place where this is great is enterprise cloud migration and hybrid cloud formation.

When you decide to go cloud, you need to throw away your prior investment in software licenses and hardware. While you have no alternative but to throw away the hardware, being able to reuse the investment in software greatly reduces migration costs.

Of course, one can just wait until all their licenses expire and then migrate, but such a hard migration would increase risk.

And of course it would be nice to have a price comparison between running normal instances and using a dedicated host with my own licenses, but without deeper inspection, dedicated looks cheaper.

How exactly is this different from Dedicated Instances (which have been around for years)? "Dedicated Instances are Amazon EC2 instances that run on single-tenant hardware dedicated to a single customer. They are ideal for workloads where corporate policies or industry regulations require that your EC2 instances be physically isolated at the host hardware level from instances that belong to other customers. Dedicated Instances let you take full advantage of the benefits of the AWS cloud – on-demand elastic provisioning, pay only for what you use, all while ensuring that your Amazon EC2 compute instances are isolated at the hardware level."

This basically lets you have an entire machine dedicated to a specific type of VM instance, plus have the underlying hardware information, to ease licensing requirements that certain programs and operating systems have. So you can get a box of m4.larges and know exactly what programs are running on it; for example, RedHat offers per-vm server licensing, so running 20 instances on RHEL on your dedicated box just costs you the flat rate VM host cost.

It probably allows you to run your Oracle workloads.

I'm guessing it's simply that you know exactly which hosts you are running on, which would solve the licensing issue mentioned below. Any other reasons/benefits?

Sadly there is still no IPV6 support...

This should be higher up. It's time for end-to-end IPv6 functionality on all major service providers and applications.


> each Dedicated Host can accommodate one or more instances of a particular type, all of which must be the same size

I'm surprised by the "same size" requirement. It seems like even if you ask customers to stay within a single family (m3, m4, etc.) the customer could do their own hand placement of 8 vCPUs next to a pair of 4s...

Edit: Disclaimer, I work on Compute Engine.

Is it possible that's been going on the whole time? Allowing different flavors of guests to cohabitate has few benefits (at their scale) and many drawbacks; I wouldn't be surprised if they simply never allowed it.

It probably simplifies the bin-packing if they just match a host-targeted request with a dedicated host of that type.

All the advantages of a dedicated server without the hassle of saving tons of money.

Many of our customers have asked for this feature so that they can run software that is licensed for a particular piece of actual hardware.

So it is a feature designed for a silly license scheme?

I don't mean this to sound condescending, but you have a lot to learn about software licensing.

There are many software packages that are licensed to an individual piece of hardware. Tied to that are USB authentication dongles, and even parallel-port dongles for some old-school commercial software.

Yep, you sounded condescending.

I think after 25 years in the industry I know a lot about software licensing. Enough to know that just because lots of software packages are licensed that way doesn't change the fact that they are silly.

That sounds like a silly license scheme to me.

Silly license schemes are annoyingly common and being able to deal with silly license schemes is a massive killer feature for very many people.

To think of it another way, smaller companies might save money by buying bare metal and renting colo space. But larger companies would spend more just managing the separate bill for the colo, not to mention the logistics of another datacenter.

Given that there is a 10x price difference in some cases, I would doubt this. The bigger a company is, the more sense colo or dedicated makes.

You don't have to manage a datacenter just because you don't go with Amazon. There are tons of companies that will do it for you, at various levels. You want colo space, nothing more? Fine. You want dedicated hosts with a few of your own machines in between? No problem. On top of that, a lot of dedicated-server companies keep spare inventory, so spinning up new machines is easy enough. Feel free to contact me if you need solutions like this.

I think there's one huge advantage to Amazon: management effectively doesn't get to set policy on the datacenter, so they don't get to screw it up. An internal company department would use something like VMware and wouldn't let you spin anything up without endless approvals, whereas Amazon treats you like a customer.

Could you list some examples?

Many Oracle products must be licensed for all the physical hardware they run on, irrespective of VMs (unless you're running OracleVM).

Their example dedicated host adds up to $2,226 per month. A quick calculation of the retail hardware cost for a comparable machine (2x E5-2670 v3, 12 cores each) comes to no more than $6.5k.

As I understand it so far, it physically binds a fixed number of virtual machines to a physical host - in the example "dedicated host", 22x m4.large.

Binding your virtual machines to one physical host, or to a series of hosts on the same network segment, is a service you can get from quite a few hosting providers (if you ask).

If you opt for a solution like this, you will most likely be running an enterprise-scale solution, and doing so for quite some time - at least 6 months and upwards.

Keeping that in mind, together with a lifetime of at least 2 years for such hardware, you will be paying roughly 8 times the hardware cost over a 2-year lifetime for the management layer (storage and connectivity you pay per GB with EC2).
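
The 8x figure works out like this (using the on-demand rate and hardware estimate above; colo, power and ops labour on the owned side are ignored):

```python
monthly_rate = 2226        # quoted on-demand Dedicated Host rate, $/month
hw_cost = 6500             # estimated retail cost of comparable hardware, $
lifetime_months = 24       # assumed 2-year hardware lifetime

total_rental = monthly_rate * lifetime_months
ratio = total_rental / hw_cost
print(f"${total_rental:,} rented vs ${hw_cost:,} owned -> {ratio:.1f}x")
# prints $53,424 rented vs $6,500 owned -> 8.2x
```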

I guess everybody will have to see how this fits their business model for non-volatile / predictable resource demand, or when physical iron might be the better choice (colo or rent).

You are looking at their on-demand rates, which means no commitment. You can do this for a month or even a few days. If you know you will need infra for 1 year or more, you can use reserved rates. Right now those are not yet publicly available for Dedicated Hosts, although the post mentions you can contact them for early access. With reserved rates the price should be around 70% less for a 3-year commitment (speculating based on the post). A huge difference.

But I can still for most things get lower prices on a month by month basis than the price a 3 year commitment to Amazon gets you. And that's before paying their outright extortionate bandwidth prices.

If you have to hire an extra person to manage the new hardware, then you aren't saving money.

In my experience, moving people off EC2 cuts down on the ops time they need, it doesn't increase it.

For one of my clients - I manage multiple racks in two different locations with 150+ VMs - "managing the hardware" comes out to about 1-2 days a year in aggregate to bring new hardware in and wire it up (most of that is travel), plus 20-30 minutes to investigate the very few issues we can't diagnose and fix via IPMI. I pop a server in, attach power and ethernet, check that the IPMI is reachable and that it sees the PXE server; beyond that, "managing the hardware" comes down to yanking the occasional dead hard drive and inserting a new one, and every now and again confirming a server is dead.

Meanwhile with EC2 I see most of the same non-hardware issues (e.g. kernel panics, applications occasionally spinning out of control and taking a server down), which are just as trivial to handle via IPMI as via the EC2 console - but we also have to engineer around things like the lack of solid, stable, directly attached RAID arrays, which we don't need to worry about with bare-metal servers.

And no, EBS does not count - the number of times volumes have gotten stuck in the attached state on a failed instance terrifies me. It also can't in any way match directly attached SSD RAID setups for performance, which is another reason it ends up taking more ops time: you end up with setups that simply need more VMs to compensate for platform limitations.

I absolutely think EC2 is great for things like large batch jobs where your requirements vary wildly, but most people don't even have enough daily variance for that to come anywhere near compensating for the cost of EC2. And nothing stops you from deploying hybrid approaches - in fact, I'm working on hybrid setups mixing bare-metal servers with EC2 for batch jobs and load spikes right now.

wake up

What if the license of the software you want to run is 100k per physical host? Then being able to run 2 or 4 instances on the same physical host makes the $2226 cost pretty insignificant.
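
To make that concrete (the $100k per-host licence and one-year amortisation are illustrative assumptions, not AWS figures), the per-instance cost falls sharply as you pack instances onto one licensed host:

```python
license_per_host = 100_000   # hypothetical per-physical-host licence fee, $
host_month = 2_226           # Dedicated Host on-demand rate, $/month

yearly_per_instance = {}
for n in (1, 2, 4):
    # licence + 12 months of host rental, split across n co-located instances
    yearly_per_instance[n] = (license_per_host + host_month * 12) / n

for n, cost in yearly_per_instance.items():
    print(f"{n} instance(s) per host: ${cost:,.0f} per instance per year")
```

At four instances per host, the licence dominates and the host rental is indeed a rounding error.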

If you have software that costs that much, you will want to run it on hardware optimized for that software. (Does it want lots of GPU cores? I/O? Raw computrons? Does it prefer more cores or more GHz/core?) Unless Amazon happens to have exactly what you want available, you'll be better off putting it in a lab or datacenter.

Maybe, maybe not. Just because software costs a lot does not mean it is super high performance. Some of it is awful strung-together PHP hacks, but if it does what a customer wants, it may be worth 100k to them.

Except, of course, that for anything important you'll want redundancy and failover in the face of hardware failure, so you'll need two of those $6.5k servers.

The same (kind of specious) argument applies to the AWS case.

Maybe - I have a number of services that can tolerate a small amount of downtime if they fail, during which I would spin up a replacement AWS server. 99% of the time I'm only paying for the one machine that's running. These would all need dedicated hardware failovers if they weren't hosted in AWS (the same system does have hardware components, and they either have redundant failovers that are mostly idle, or for clusters we run at 66% capacity to allow for failure. Either way, it's often not true that one hardware instance can be substituted for one AWS instance, unless you don't care about long downtime when the hardware fails).

Are there any good, dependable, modern alternatives if you don't want to deal with the hardware, and also don't need "scale on demand"? The only name I know is SoftLayer, which is certainly not cheap.

There are thousands of choices, most of them in business since before AWS existed, and almost all will give you much better (2x-16x) bang for your money (even on a small order, you can - or could - negotiate SoftLayer down by ~50%).

Ones that I'm personally familiar with: Hivelocity (Florida), ReliableSite (NY), WebNX (LA), 100TB (a SoftLayer reseller in some locations; they own their own in others), OVH (lower quality, lower price, great for various workloads, NA datacenter), Hetzner (Germany).

Yes, that's why I'm asking. There's surely tons of potential differences, in reliability, redundancy options, turnaround time for new boxes, selection of configurations (including recently released hardware and things like GPUs), networking options, backbone connectivity, software services, multiple data centers (so you only need to pay/deal with one vendor) etc.

Hetzner.de is a good place to rent a dedicated machine that is fairly configurable.

I know Hetzner, but I didn't think they had any datacenters in the US.

I don't believe they do. If you want a US datacenter, consider LeaseWeb. They're a Dutch company but offer servers in DC and San Jose.

Looks alright, but their server selection is a bit limited (and hard to navigate, just a big list that doesn't even filter properly).

For example, if I want 128GB RAM and SSD disks, their prices suddenly jump to thousands of dollars per month because those are all beefy Dells or HPs, whereas Hetzner can give me a single-processor 6-core Xeon E5-1650 3.4GHz + 128GB RAM + 960GB SSD for $123/mo. LeaseWeb has cheaper SuperMicros, but they either max out at 32GB or don't have SATA disks. The two are skewed very differently: with Hetzner you can't pick "less RAM, lots of CPUs", and LeaseWeb doesn't have "lots of RAM, fewer CPUs, SSDs".

To be honest, a vendor like this needs to offer a much wider range of specs to be worth moving one's entire infrastructure to.

I thought we were talking about "where do I get a dedicated server"? If you're serious about moving your entire infrastructure onto dedicated hosting, you have to pick up the phone and negotiate a deal. The off-the-shelf options are pretty limited, as you point out.

Is that entirely true, though? I'm too small a fish to be in the position to negotiate much. And besides, at my scale, there's no reason "negotiation" can't be expressed as simple mechanical rules (fixed-length contracts, volume discounts, usage tiers, etc.). Off the shelf is what I want, not some bullshit corporate sales thing.

Well, dedicated hardware for EC2 makes sense for certain workloads (eg when you need more predictable performance), considering that you can use them alongside other AWS services (EBS, RDS, Route 53, S3, etc). Plus allocating 'bare metal' VMs on demand via script/API is a big plus.

It would be interesting to see how much performance you can squeeze out of the dedicated hardware compared with the largest EC2 instance types, though.

This is very far from 'bare metal'.

What's the networking model? Will two instances on the same host talk over the datacenter network, or will the traffic stay local to the machine?

Given placement groups, I'd expect 10 Gbps local networking, as this is essentially the same thing.

Now how long will it take for Amazon to release a dedicated host marketplace where you can sell unused space on your dedicated hosts?

Ah yes, we run all of our services on dedicated NSA machines we leased from AWS!

This would be a security and privacy nightmare.

This isn't too far-fetched -- there's already a marketplace for buying and selling used reserved instances.


Doesn't that defeat the purpose of obtaining a dedicated host?

Technically, it's the equivalent of a leaseback. It may make sense depending on the profitability of the arbitrage you're targeting.

Not if it's primarily to accommodate bring-your-own-license for per-machine-licensed software. Needing to have control over what machine runs your instances doesn't mean you can't allow other people to run their instances on "your" hardware.

Can anyone give me the advantages of choosing an EC2 dedicated host over going with a true dedicated server?

jeffbarr from Amazon in another thread:

> Many of our customers have asked for this feature so that they can run software that is licensed for a particular piece of actual hardware.

So it's more aimed at working with/around archaic licensing schemes, rather than technical advantages.

Thank you! This answer makes total sense.

All the rest of your infrastructure is in AWS and you want the high interconnect speeds.

This shouldn't be any faster than normal EC2.

(Edit: I may have misunderstood the comment. This will certainly have a faster interconnect to EC2 compared to a non-EC2 dedicated server.)

No, but if you need a dedicated server in addition to other EC2 hosts, it will certainly be faster to be in an AWS data center.

Higher relative to getting a dedicated server from a different provider.

If it was just about network speed, it would be cheaper to get the direct 10G cross-connection to Amazon's network than to pay the premium for the dedicated hardware.

You want decent performance but your budgeting process only allows you to use AWS.

It's called Amazon. Other cloud services and bare metal hosts are not called Amazon.

Some pro/cons that would be interesting to discuss:

Other than licensing, an advantage I'm guessing is in reducing noisy-neighbor effects. In our case, we use a lot of t2.micro instances, which seem to suffer from this.

A disadvantage is that the instances within a dedicated host might all go down together (similar to putting all your instances in us-east-1e and 1e going down, or of course all of us-east going down). Although I'm not sure a dedicated host itself is more likely to go down while the datacenter remains operational. That's what I'm most interested in knowing - how do these fail?

We monitor the health of the host; take a look at http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/dedicated... to learn more.

You have to consider it a SPOF. Put nothing on the same dedicated host that you can't afford to lose at the same time. Though in AWS terms, you generally treat an availability zone as a SPOF... so a dedicated host is of course a SPOF too...

No one's mentioned this as a potential use case, so wondering if other people have a way to solve a problem I have...we spin up EC2 instances on the fly for running tests, but some of those require a dedicated IP address to deal with a vendor integration. Seems like a dedicated host might solve that, though perhaps not worth the cost. Is there some other way people solve this problem? We currently have an in-house box just for running those IP limited tests.

Yes - it's called Elastic IP. You can have a dedicated IP address and associate it on the fly with regular EC2 instances. You pay a small hourly fee if you are not actively using it.
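A minimal AWS CLI sketch of that flow (all resource IDs below are hypothetical placeholders):

```shell
# Allocate an Elastic IP in the VPC scope; the AllocationId comes back in the output
aws ec2 allocate-address --domain vpc

# Attach it to a freshly launched test instance (IDs are placeholders)
aws ec2 associate-address \
    --allocation-id eipalloc-0123456789abcdef0 \
    --instance-id i-0123456789abcdef0

# When the test run is over, detach and release to stop the idle-address charge
aws ec2 disassociate-address --association-id eipassoc-0123456789abcdef0
aws ec2 release-address --allocation-id eipalloc-0123456789abcdef0
```

This is just the manual sequence; in practice you'd script it around whatever launches the test instances.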


The problem with Elastic IP, from my understanding, was that I often need to spin up say a dozen instances for a short period of time, but otherwise don't need the IPs, so it seemed it would end up costing to have those all available but only temporarily in use... though adding up the cost, it'd only be about $43 a month (12 * $0.005 * 24 * 30 = $43.20), so perhaps that's not a big deal.
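Double-checking that arithmetic (the rate is the one quoted in the comment, not current AWS pricing):

```python
# Idle Elastic IP rate quoted above: $0.005 per address per hour
rate_per_hour = 0.005
addresses = 12
hours_per_month = 24 * 30  # 720

monthly_cost = addresses * rate_per_hour * hours_per_month
print(f"${monthly_cost:.2f}/month for {addresses} idle Elastic IPs")
```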

Apart from the elastic IPs already mentioned, two more options:

1) Have a canned VPC-based test server that's off, and only turned on for the test. VPC-based servers do not lose their IP addresses between activations (though a single server may not suit you)

2) Use AWS internal DNS - have your vendor product use hostnames rather than IPs (if possible), and when you spin up the new machine for testing, switch its IP into the hostname on the internal DNS zone. Again, you'll need to be VPC-based to use this. You can destroy the instance between runs with this method.
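Option 2 might look something like this with the AWS CLI (the zone ID, record name, and IP are all hypothetical):

```shell
# UPSERT the new instance's private IP into a record in a private hosted zone.
# The private IP would come from `aws ec2 describe-instances` after launch.
aws route53 change-resource-record-sets \
    --hosted-zone-id Z0EXAMPLEZONE \
    --change-batch '{
      "Changes": [{
        "Action": "UPSERT",
        "ResourceRecordSet": {
          "Name": "vendor-test.internal.example.",
          "Type": "A",
          "TTL": 60,
          "ResourceRecords": [{"Value": "10.0.1.23"}]
        }
      }]
    }'
```

A short TTL keeps the vendor integration from caching a stale IP between test runs.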

Alternatively you could set up a forward proxy server on its own instance with an elastic IP and proxy your network requests through it.

This has the added benefit that if you ever need to spin up more instances to run simultaneously, you don't need to get a new whitelisted IP address.

Elastic IPs?

To save a trip to the calculator:

$2.341 (m3 host / hour) * 24 hours * 30 days = $1685.52

before the "up to 70%" reservation discount.

Bear in mind that it can run up to 32 m3 instances.

Which would be `$0.067 * 32 * 24 * 30 = $1543.68` on demand
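Putting the two monthly figures side by side (prices as quoted in the surrounding comments, i.e. late-2015 rates, not current pricing):

```python
# Rates quoted in the thread
host_rate = 2.341        # dedicated m3 host, $/hour
instance_rate = 0.067    # m3 instance on demand, $/hour
slots = 32               # m3 instances one dedicated host can hold
hours = 24 * 30          # 720 hours/month

host_monthly = host_rate * hours
ondemand_monthly = instance_rate * slots * hours
premium = host_monthly / ondemand_monthly - 1

print(f"dedicated host: ${host_monthly:.2f}")
print(f"on-demand:      ${ondemand_monthly:.2f}")
print(f"premium before reservation discount: {premium:.1%}")
```

So a fully loaded host costs roughly 9% more than the equivalent on-demand instances, before any reservation discount.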

Thanks to the amazing innovation at Amazon, 2016 will be the year we can finally have a real computer in a datacenter to run all our JavaScript code on, instead of having to use an anonymous cloud service.

This isn't even bare metal. This is just pinning or affinity. (Pinning, or affinity, depending on host failure mode - do they failover to a new host, and do they all get to be on the same host again?)

Sure, your jobs aren't fighting another customer's jobs for CPU; now you're fighting your own jobs for CPU.

Maybe we could distribute the computing on these dedicated hosts, to have like, virtual dedicated hosts. And then run them from Minecraft... It's the future.

You can implement the HTTP parse logic in Minecraft using redstone. Deploy nginx or Apache Traffic Server in front of your Minecraft and you are good.

Are the hardware specs available somewhere? Is it possible I'm confusing this with an interface for launching EC2 instances at a premium disguised as being actual dedicated hardware?

Each Dedicated Host is home to a specific number of EC2 instances.

Check out the EC2 instance types (https://aws.amazon.com/ec2/instance-types/) to learn more.

If you setup a dedicated host, does that also include local SSD disks, or do you have to still use EBS?

A Dedicated Host contains EC2 instances. You would still use EBS if you need storage other than what's available on the instances.

Per the EC2 Instance Types page (https://aws.amazon.com/ec2/instance-types/), many of the instance types already include SSD storage.

Thanks Jeff for the response. Appreciate that you personally respond.

Wouldn't dedicated I/O be a big plus and selling point for dedicated instances?

That pricing is ridiculous though. I can get 15 Hetzner servers for the price of one EC2.

It's more than a bare metal server - you're still running EC2 instances atop it, except only your instances are running on that server. Looking at the price of the underlying instances, running it fully loaded seems pretty comparable to running those instances non-dedicated.

So? It's still ridiculously expensive.

So this is for IO- and CPU-bound users who for some reason are also EC2-bound.

According to their docs/examples, this is mostly for legal reasons. i.e. some software can be licensed to multiple VMs if they are all running on the same host.


For one thing, Amazon isn't spamming the crap out of multiple forums (Webhostingtalk, Lowendtalk, ...) with fake shill reviews.

And the fact that you didn't disclose your connection, considering 90% of your submission history is Cloudways, makes it even better.

And that you're one of the fake-review-shill spammers too.[0]

[0] http://www.lowendtalk.com/discussion/65774/is-there-any-serv...

As far as I can tell that offering has basically nothing to do with dedicated hosts.

So, how different is this from Vultr's[0] offering?

