It’s so cheap to start and stop servers on demand that I’ve decided to give away servers for free. I wrote a little proxy in Go that detects Minecraft login requests and starts a server with the corresponding world. After a dropped connection I stop it.
 For 15€/month you can have ~30 servers running in parallel and thousands of powered down worlds. https://contabo.com/en/vps/
1) Initial world creation is quite slow: 10-15 seconds on moderately powerful hardware. Since I want first joins to be as fast as possible, I keep 1000 pre-generated worlds, and one of them is chosen at random as the template on your first login.
2) In addition to login packets, Minecraft clients send a ping packet to check if the server is online. I forge a valid response because I don't want to start a server just so you can see "server is up, 0 players online".
You could take this a step further by "decaying" the bases in some way (remove torches, remove some large percent of the items in chests, add vines, weather rock, move blocks from the ceiling to the floor, etc)
Otherwise someone should make one. No need to fully reset servers any more.
Then you just need a good heuristic to guess whether or not a group of blocks matches your definition of a base to be explored.
You could take this a step further; once you have determined that a set of chunks have been modified significantly, you could apply that set of changes to the same coordinates of any map generated from that seed, meaning you can combine changes from multiple worlds into one (with the same seed).
But that's my skepticism talking.
As for the skins - servers run in "offline mode", which means no communication with the Microsoft authentication API responsible for validating accounts and (I believe) giving skins to players.
So 16GB of RAM should be enough for 20-60 servers.
> We will not use your credit/debit card information to automatically upgrade your Always Free or Free Trial to paid without first getting your explicit approval.
I gave a throwaway Google email and Google Voice number and just used my first initials. The one thing I could not use was a throwaway debit card; they really wanted a real credit card.
So far so good.
One guy complained they shut his server off after he transitioned from the $300 free credits in the first month to always free.
I didn't have that issue, although I do check the billing statements periodically.
It's a simple and pretty good proxy for usage.
Then there's an audit, you're found non-compliant, and now they own your house.
Kids played for a couple of hours last night without issue.
It’s usually an hour or two of work every time my kids want a new plug-in or MC version.
Yes, there is no point to this, but if it's free... ?
Here's the Anandtech review of the Ampere Altra, which is what Oracle is serving these VMs from: https://www.anandtech.com/show/16315/the-ampere-altra-review...
"The Altra’s strengths lie in compute-bound workloads where having 25% more cores is an advantage. The Neoverse-N1 cores clocked at 3.3GHz can more than match the per-core performance of Zen2 inside the EPYC CPUs.
There are still workloads in which the Altra doesn’t do as well – anything that puts higher cache pressure on the cores will heavily favour the EPYC, as while 1MB of L2 per core is nice to have, 32MB of L3 shared amongst 80 cores isn’t very much cache to go around."
I'm fairly sure the server GCP offers in its Always Free tier only has 0.5GB of RAM.
Opinion: They're losing badly in the Cloud Wars and need to scrape together some sort of customer base in any way they can, even if it means burning money.
Three times I've tried using OCI to move Oracle's own products from on-prem to the cloud. All three times they told me not to bother, as it wasn't supported.
Seriously, if Oracle can't figure out how to run RDBMS, WebLogic and Opera, what hope have I got?
I had never heard of this architecture before; a pretty creative way of doing Heroku-like scale-to-zero at nearly no cost on AWS.
> Fargate launches two containers, Minecraft and a watchdog
I'd love to see a cost analysis between running the "watchdog" as a Fargate container versus another Lambda function. Even a Lambda function run once every 5 minutes 24/7 would only trigger ~8,600 invocations a month (12 per hour × 24 × 30), which is in the realm of "near free".
If there were some way to trigger the scale-down event from there, it would reduce the expensive part of this setup (Fargate) even further. Though, granted, given both containers are packed into the same Fargate VM, it would really only mean freeing up some additional resources for the Minecraft server.
It looks like the watchdog is simply checking for connections on a port, which is probably too low-level to handle with lambda. But, an architecture like this could work in a ton of services, and if you had e.g. an ALB set up in front of the services, one could use the lambda to scan incoming request metrics and scale down on that.
Not at all. You could easily check that with any of Lambda's supported languages.
The person that set this up got an amazing education on use of real-world AWS services.
A lot of IT people aren't aware that things like this exist. They think moving to the cloud means sending all your virtual servers to your provider of choice and running them 24x7 like you did on-prem. In my opinion it's more about architecting solutions so that resources pop into existence for the exact # of milliseconds they're needed and then they're released. This is a clever step along that path.
The vast majority of use cases are better off with variable resource availability. Unless you're doing something akin to mining cryptocurrency 24x7x365 most workloads are variable to some degree.
So maybe instead of one giant server that processes requests you use a single small server that is available 24x7x365. Then if your workload increases at 8 am you use an autoscaling group to spin up 3 more. Then at 5 pm it goes back down to 1. And maybe you have a batch process that kicks off at 2 am every night so you spin up 4 servers to process requests. This is just one example so it's important not to focus on it and respond with, "Well what about x!" AWS has many ways to fulfill the promise of accomplishing tasks with minimal resources.
And all of this is just a step on the path to serverless computing with things like Lambda and DynamoDB or serverless RDS.
It never really took off, so I mothballed it. However, I do use it at home for our personal server and it has saved me a ton of money! It makes perfect sense, as you can have quite a high-spec machine when you are paying by the hour. You just detach the disk from the VM and pay for disk storage, which is very cheap.
It was based on the following Terraform recipe (which I wrote).
This is PHENOMENALLY DOCUMENTED. I am thoroughly impressed, @doctorray. Clear and easy to follow walkthrough and explanation of how it works, amazing troubleshooting tips, suggestions for managing it... This is an exemplar of a well-made README for a service. Bravo!
Do you just stop playing when it happens?
That said, when we exceed capacity we cannot boot any more instances, that’s definitely true.
You could just turn that message into an in-game countdown.
I always wanted to go after an auto-switch style system but never got that far.
Edit: Just saw that the GitHub includes a link to an AWS calculator. Looks like a month of continuous usage caps out at $40-ish. Not too bad, since my realistic worst case is probably more like 8 hours per day rather than the full 24.
When I played, if nobody was in the vicinity of the chunks containing your farm, they would unload and of course the farm would stop producing, so people would AFK at their farm to keep the chunks loaded.
 https://www.youtube.com/watch?v=dx5Wd28AKxQ (the video is a couple years old so the current implementation might be different, but I think the basic principles are still the same).
Looks pretty easy to build.
They do, but afaik there are no "spin up and down" plans that charge you for usage; they're all fixed "$X per month".
(Although looking at the costs these days, they're not that much higher than this would cost you for even a medium-sized world.)
Wondering if services like Google or Shodan may have tried querying it, causing your server to turn on?
In the 2 months I've been using this method before deciding to write it all down, I've not run into any issues with anyone else or any bots triggering the container to start, at least not yet...
I just wish there weren't so many steps to get this kind of thing running! Even with automation it's still a LOT - getting this running myself would take me a few hours, and I have prior relevant experience.
A regular non-software-industry-professional parent has little chance.
I really wish there were better ways to make AWS stuff like this available for people to use without requiring them to have deep knowledge of how to work with different aspects of AWS.
I wish AWS would provide some kind of interface where I can point a regular human being at easy-deploy.aws.com/?cloudformation=url-to-my-cloud-formation. They would be presented with a human-readable form that tells them what it will do, sets a hard limit on how much money it can burn through (for protection against cryptocurrency-mining scams), and lets them enter their credit card details and click "Deploy" to start using it.
But in the background, it's run on a set of Amazon services. You don't have to rent a specific server for a given time period, like monthly server rental.
You just use Amazon's on-demand services (that use whichever server resources are required at the time).
Considering all the hours I spent looking for ways to do exactly this when I was 12-15... I don't doubt I would've gone through all the trouble and even learned some AWS along the way.
Back in those days the only way I could get a free server was by hosting a phpBB forum on 000webhost and somehow convincing a VPS provider to "sponsor our forum". They'd get a massive banner ad and I'd get a free server to play around with. The good days!
But the difference between a couple bucks a month and $5, once you actually have the ability to pay for stuff online, does seem pretty negligible.
In fact, some websites even offer big discounts (like 15%) for payments in boleto since there is basically no service fee.
That is basically how my friends and I did "online" transactions.
In any case, if it gets kids learning new things under the guise of saving a very limited resource, I'm all for it!
Even after that, I was always frugal and never wanted to spend something like $15 a month for a server for my friends. Now, as an adult software developer, I wouldn't think twice about the fun to dollar ratio of paying for a Minecraft server to connect with some old friends.
000webhost, x10Hosting and SixServe (the latter two had FREE cPanel!!), and never forget those shady reseller control-panel hosts like Nazuka.
I'll admit though, the shady reseller hosts were pretty good. Terrible control panel aside, they had very generous CPU/bandwidth/storage limits compared to the free cPanel hosts that had to cut down the costs there.
The cheap VPSes absolutely do not allow you to pin the CPU at 100% usage for a significant amount of time, since that messes up the provisioning. A Minecraft server will definitely pin the CPU at 100%.
What happens is that your process will be killed repeatedly.
A $5 VPS is great for simple site hosting and a small amount of CPU workload. They do not work at all for any type of game server.
>As long as you don’t go to 100% CPU usage for a long period of time, everything will be okay. DigitalOcean are doing pro active monitoring and will see if your droplet is having 100% CPU usage all the time and may limit the CPU capacity of the droplets displaying this behavior. Since each droplet shares physical hardware with other droplets, constant 100% CPU use degrades the service quality for other users on the same node.
Note that a game server will go to 100%. It will be killed.
What you describe has never happened to me. Have I just been lucky?
I did this with a Minecraft plugin that would schedule a systemd shutdown in 30 minutes when the last player disconnects, and cancel the shutdown if a player connects.
Then a simple webpage that sent an EC2 API request to power on the instance, and a simple plugin that sends a Telegram message when the server is ready for connections.
You send the EC2 API request directly from a public facing website?
This qualifies as "serverless" now?
A lot less chance of me spending $$ that way.
Overall I personally prefer a VPS or dedicated server but I don't think comparing it like you are is 100% fair.
I don't even bother with playit.gg - just forward a couple of ports on the router and pass out my IP. The only time my dynamic IP changes is when I lose power, and if I've lost power the server is down for "maintenance" anyway.
Concerned about cost overruns?
Set up a Billing Alert! You can get an email if your bill exceeds a certain amount. Set it at $5 maybe?
However, the reason it doesn't exist is, I suspect, twofold. Firstly, it is bad business: all the cloud providers make a lot of money from mistakes and small things sapping cash. Secondly, it's hard to rationalise what to do when the budget runs out. What do they nuke?
This is an example where being data driven to the exclusion of all else can hurt a company; I suspect having this feature would pay dividends down the road (by being the first to provide a safety net for a startup with a fixed budget that doesn't have production workloads yet you offer a competitive advantage between cloud providers), but the effect is completely impossible to predict or track currently since it doesn't have an immediate impact on revenue or the satisfaction of large, paying customers.
There's a very simple explanation for this: real-time billing would increase the cost of the product they sell, in order to create something most people don't need.
If you can tell, then you can set a limit.
Besides, if they can trigger alerts at a particular spend then they should be able to create a limit.
That's not really true. The alerts fire when the billing is recalculated (periodically) and you've exceeded a predefined threshold, not when you hit that exact threshold.
>When you enable the monitoring of estimated charges for your AWS account, the estimated charges are calculated and sent several times daily to CloudWatch as metric data.
Real time billing is actually a Hard Problem to solve.
I refuse to believe there is no workaround. I can understand it is not easy to fix for corporations who need AWS to make money but that is not the use case for students.
If it were, Azure for students couldn't exist. Signing up for Azure for students does not require a credit card so they must have figured out a way to prevent / stop the bleeding?
Non-realtime limits are better than no limits at all. Besides the cloudwatch documentation seems to suggest it’s reporting on a 5 minute frequency for most of AWS.
Besides, AWS already complicates things way too much by handling VAT like other billable items instead of just adding it at the final step like any sane company would
This also solves the problem of "what to cut". If I hit my bandwidth limit AWS simply stops routing requests to my servers, if I hit my CPU limit AWS should throttle me, etc.
If threshold (x) hit then do:
- Email me
- Stop Servers XYZ
- Leave Servers ABC running.
If threshold (y) hit then do:
- Email me / Call me
- Shut everything down.
It would be really nice if I could preload $100 into the account and remove my credit card. I don't have ANYTHING sensitive behind my username and password, except my CC #.
I know they are never going to implement this because I'm small potatoes, but it would be nice
$7.00 for up to $500
I don't think there's a magical solution to this. There might be a company that sets a $1K USD/month limit, forgets about it, and suddenly the cloud provider shuts down everything a year later, while "everyone" is unavailable or something like that.
There are so many scenarios, and I honestly feel that the cloud providers have decided on the most fool-proof solution both for them and their clients.
Ok. But the Auto Scaling Groups are free, so I can keep those on, right? Oh, look! They just launched more EC2 instances, how convenient. Should I back these up to S3? With CRR enabled?
Tee hee hee
So switch off all VMs, but don’t delete the disks. Disable S3 read/write, but don’t delete the data. Etc…
Doing nothing is generally better from a legal liability point of view. The customer should be liable for turning services on and off.
We hear about this all the time from AWS customers, and it's a large reason why people connect their account to Vantage, which will help alert you if costs change intra-month. The first $2,500 in AWS costs per month are tracked for free, so I thought I'd mention this here as potentially helpful to the community.
If you don't want to remember to set up billing alerts, we provide basically a turn-key experience around this that takes less than a few minutes to set up: http://vantage.sh/
The list of permissions is a whittled down version of what's available in the AWS managed policy of "ReadOnlyAccess" and doesn't allow us to do things like read from S3 Buckets or read from RDS instances. Basically just List/Describe actions.
IAM permissions are written about more here in our documentation and are ultimately handled gracefully if you want to remove some. For example, if you just want to hand Vantage access to billing, S3 and EC2, it will do the job as best it can with just those permissions: https://docs.vantage.sh/permissions/
Finally, here's a blog post on our cross account IAM setup: https://www.vantage.sh/blog/how-vantage-uses-cross-account-i...
To everyone claiming "ohhh that's illegal/unethical" I say to you: take it in your favor for once. For every 100 clients AWS bills unexpectedly, with no controls in place to mitigate it, you can be the one who gets a free month of service. They will not pursue you for $5. Imagine making the welfare argument on behalf of a company that is worth a trillion dollars.
The rationale against doing this is as much practical as it is moral --- unless you're just doing this once for a single month and don't care if your account gets banned. AWS isn't like an auto-renewing subscription, where if the card declines, your service is cut off. They won't charge the card with a $5 limit until the end of the billing period. If you rack up more than $5 in charges in a billing period, you will be in debt to Amazon. They will certainly ban your account, so you'd have to make a new throwaway account with a new disposable CC each month.
Not advocating for mass fraud here, or even petty fraud, just making it a bit more fair to those who have 0 provisions in the platform to prevent involuntary overspending.
You are not delivering some sort of poetic justice; you are just showing your lack of a self-preservation instinct. For your own welfare, just don't poke the bear. You don't want to get blacklisted for doing some dumb crap that will come back to bite you someday.
There are enough stories running around of people getting their job accounts banned by association for pulling idiotic stunts like these, and we don't know what crap Amazon will be running in the future.
AWS bills work a lot like postpaid phone bills. When you use the service you agree to pay the bill for usage.
Your suggestion is kind of like saying “If your card declines you don’t have to pay for your meal.” Not really true.
In my experience AWS support has been good about reversing accidental/fraudulent usage charges and helping to prevent them in the future.
I was thinking it would be useful if Organizations could pre-authorize users up to $X before preventing them from doing more. Of course, the better solution is to manage releases through a pipeline that checks for created resources, does code scanning, and... whatever.
In the end, we use cost monitoring, but no AWS billing alerts
I think it is.
A few jobs ago, the boss of my boss got fired for a cloud service overage. Not a huge amount; the number on the grapevine was around $10,000. But it was enough.
For many (numerically "most," probably) companies, the IT department is a black box to upper management, and any unexpected budget overages are a serious problem.
People do ask for alerting and monitoring but that's not a hard stop.
Then you get into complex issues such as S3 and EBS: as long as there is data, you keep paying, so what do you do? Have a hard limit, but not really, since it doesn't cover them? Delete people's data?
The real reason is that if you give companies a budget feature, they will inevitably, you know, use it. They'll set a budget that seems 'reasonable', and then freak out when everything turns off when it's exceeded, and then go raise it a little bit, and repeat the cycle.
Compare that to now, where every place I've worked basically seems to forget that cloud hosting costs even exist: companies balk at paying for simple SaaS tools for developers but will happily let hosting costs grow to astronomical amounts. They're happy to do it because they just see a line item and accept it. If you give them budgets, that won't happen any more.
It's worked well enough for the entire fitness industry forever. No reason it can't work here as well, and at scale I'm sure it's pretty profitable. You're right, too, that we'd use it, but I think this is a situation where we can both be right.
At the scale of Joe's Chicken Shack, accidental revenue is not reliable. But at the scale of a Google or an Amazon, while it will fluctuate month to month, a certain minimum revenue stream should be statistically predictable.
Fortunately, I never got sucked into the 8-track club.
I prefer to be billed whatever it costs but have my service up all the time.
It's just not that relevant ...
Look... just be happy that they made it PAINFULLY obvious when you make S3 buckets public
AWS makes it clear when buckets are publicly visible. This is a good thing, and I am grateful
Those guys in finance know there are people who will pay any bill.
When I was younger I would just pay, even for mistakes, because I was concerned about my credit score.
AWS is for businesses and hard limits on spending is a liability for their pricing structure. Imagine you run a small business built on AWS and you hit your limit -- you're basically asking AWS to dismantle your business. They'd have to null-route traffic directed to you, shut down your servers, delete your data, de-allocate your IP addresses, etc. Your business won't be any better off than if you went bankrupt from a huge AWS bill.
It's a serverless server (aka nothing), and it's almost free, so you're paying money for nothing.