And even if you need the scaling, a mixed stack with dedis and EC2 on demand shouldn't be hard to run.
Unless, in turn, you use reserved instances, which bring the price down to the point where it equals or undercuts most other options. "Dedis and EC2 on demand" is an order of magnitude more complex, probably actually more expensive, and increases your failure cases significantly.
https://www.ovh.com/us/dedicated-servers/details-servers.xml... This server will cost you about $800 for 12 months.
A somewhat comparable EC2 instance (that still has way lower specs) m3.2xlarge will have an up-front cost of $1772, and an hourly cost of $0.146. So you'd be looking at a total yearly bill of $3051, which would pay for 3 of those dedis.
I honestly don't know how anyone could describe that as competitive.
> "dedis and EC2 on demand" is an order of magnitude more complex, probably actually more expensive.
If you're one of the few people that actually need the rapid scaling that EC2 can provide, sure. But that's an extremely specific and rare use case.
I'm also familiar with exactly how misleading a specs comparison can be when you architect an application to leverage burstable instances and rapid auto-scaling, letting you spend little during slow periods while still responding gracefully to load. This isn't some deep-magic thing for sites with tens of thousands of instances, either--I've had clients with ten total instances benefit both financially and operationally from building out fairly straightforward elastic capacity (not least because the same code paths necessary for elasticity also make things like setting up development environments single-click affairs, which enables deeper and more complete testing of the application to ensure correctness).
Even were a $2200 premium a credible number, that's eleven hours of my billing rate. Reimplementing what AWS provides for even a fairly straightforward system--easy, API-driven system deployment and configuration (and while OVH's public cloud APIs can enable some of this, your "dedis" cannot), straightforward and API-controlled support services of more or less any stripe, integrated system monitoring and alerting--would be hundreds of hours of work, be a worse solution, and require ongoing maintenance. (A decent hosting provider will offer some of that, of course. But not all. And you'll pay for it.)
That's good to know. I was contemplating going with them for my next project. AWS it is then!
The only problem I see with AWS is the configuration overhead. Want storage? Well, we have a separate service for that! Now you have to understand the pricing, the API, etc. just to integrate it into your stack.
I completely understand how this modular approach helps with scaling, but when starting out it just feels like way too much overhead, especially for a single dev like me.
I do some advisory stuff for students and bootstrapping startups that are looking to work in cloud environments. Feel free to drop me an email (in my profile) if you'd like to chat.
There are plenty of places that aren't OVH; however, I figured they'd be the most relevant choice here since game servers were being discussed earlier. Their DDoS protection is a nice bonus, too, and something you really don't get on EC2.
>This isn't some deep-magic thing for sites with tens of thousands of instances, either--I've had clients with ten total instances benefit both financially and operationally from building out fairly straightforward elastic capacity (not least because the same code paths necessary for elasticity also makes things like setting up development environments single-click affairs, which enables deeper and more complete testing of the application to ensure correctness).
If for the cost of 10 instances they could've had 30 dedicated servers with better specs, did they really benefit very much? While I can certainly appreciate the part about development environments, applications that would actually benefit from such scaling are rather rare. Although, admittedly, if they hired you they probably did need it.
>Even were a $2200 premium a credible number, that's eleven hours of my billing rate.
At $2200 per instance, does someone running only a couple of boxes even need your services?
>Reimplementing what AWS provides even a fairly straightforward system--easy, API-driven system deployment and configuration (and while OVH's public cloud APIs can enable some of this, your "dedis" cannot), straightforward and API-controlled support services of more or less any stripe, integrated system monitoring and alerting--would be hundreds of hours of work, be a worse solution, and require ongoing maintenance.
And again, I believe most businesses simply don't need what AWS provides. AWS is certainly a good choice if you actually need what they offer; however, very few do.
The reinsurance company I used to work for could be run on about 5 AWS servers, compute-wise... they do about $3 trillion in in-force life insurance policies a year.
Also, once you're paying upfront for reserved instances you've lost one of the major advantages in your cashflow scaling with your usage.
Aside from that, we run 2-3 c3.xlarge API servers, two c3.medium web servers (hosting the Facebook versions of our games), and two more c3.medium servers joining for a new launch next month.
We have one large (size escapes me) ElastiCache instance.
3 ELBs sit in front of those servers, all in a VPC.
Finally, we have 11 or so S3 buckets that back Cloudfront distributions for static content for websites and whatnot.
We also pay for several TB of traffic per month, and enterprise support.
Finally, we're paying for not having to fix any hardware ever, 5 minute incremental DB backups, a VPC I can define in a JSON doc, automatic failovers, and so much more, managed by yours truly, a software engineer, not a network/server/devops/etc. engineer. Yeah, there's a premium, but so far it's paid off.
Now, instead of 15-20 c3.medium servers at peak, we run three c3.xlarge instances at peak. I can be sure that the larger server will handle more concurrent requests than process-restricted uwsgi, and thus be able to better allocate the resources given to us by larger instances.
21:37 <@deen> A third of the sys calls of a DDNet server are recvfrom and
21:37 <@deen> they always occur in large chunks, so ideal for
21:38 <@deen> the last third are mostly strange time and gettimeofday
calls, I thought I got rid of most of them
21:40 <@deen> server with 30 players causes 3000 syscalls a second
22:00 <@deen> PACKET_MMAP is very cool, reading packets with 0 syscalls,
too bad it's not for regular applications (requires root,
doesn't work with normal udp socket):
22:04 <@deen> and then the glibc version matters a lot for syscalls. new
versions of glibc don't syscall at all for gettimeofday
22:05 <@deen> (or something else is causing that, not sure yet)
22:13 <@deen> Reading the glibc implementation, that's done by vDSO,
22:32 <@deen> totally confused why some of our servers use vdso
gettimeofday, others not even though they have more recent
kernel and glibc
22:41 <@deen> ok, probably depends on the underlying clock that the vps
uses. pvclock is used with kvm and doesn't support vdso. but
looks like there's some progress being made:
I am now looking at LFE (Lisp Flavored Erlang) and Elm to create a very small online game. It also makes me want to maintain my C/C++ chops.
It's sad Apple is so walled in that you need a VM to build for OS X, and iOS doesn't even make the list. I have an iPad, but I use an Android phone for that reason, and I only program mobile for Android. Apple is getting better at supporting iOS devs of late though...
Curious to hear what the client stack was? Did you use LibGDX by chance?
SDL2, OpenGL, FreeType, pnglite, zlib, curl, md5, wavpack, opus, json-parser
So pretty much low level, keeps the performance and flexibility high.