If your 'project' can't allocate $15 for a domain name then you have a bigger problem with your project. Especially if your project involves taking money from customers.
Hetzner rents you 42RU for €199 plus power and network. If we assume they can fill the entire rack, that's four 9RU units at about €50 each, plus power and network.
If we assume an average power draw of 20W per laptop, that's 300W for each 15-laptop unit, or about €57/month in Hetzner's Finnish DC (including aircon).
Not sure about network. A 1Gbit uplink with 10TB of traffic (and €1/TB after that) is included. Upgrading that to 10Gbit probably costs about the same as the €51/month it costs for the same uplink on dedicated servers, so another €15 or so for each 15-laptop unit. Plus around €2/month per IP, though you can probably bring your own if you find a cheaper subnet to buy.
So yeah, you are right that the math does not work out. But it is pretty close to break-even. I think you can break even on this if you find a more space-efficient way to cram them into the rack and don't pay yourself any salary.
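To make the break-even claim concrete, here is a rough back-of-envelope in Python (a sketch: the €0.26/kWh power rate and the network/IP splits are my assumptions, not Hetzner's actual pricing):

    # Back-of-envelope cost per 15-laptop unit, using the figures above.
    laptops = 15
    rack_rent = 199 / 4          # EUR/month per 9RU unit (quarter of a 42RU rack)
    power_kwh = 0.300 * 24 * 30  # 15 laptops x 20W = 300W, running 24/7
    power = power_kwh * 0.26     # ~EUR 56/month, close to the EUR 57 estimate above
    network = 15                 # EUR/month share of the 10Gbit upgrade (assumed)
    ips = 2 * laptops            # ~EUR 2/month per IP
    total = rack_rent + power + network + ips
    print(f"EUR {total:.0f}/month per unit, EUR {total / laptops:.2f}/laptop")
    # -> EUR 151/month per unit, EUR 10.06/laptop

So roughly €10/laptop/month in raw hosting costs before hardware, labour, or margin, which is why it only pencils out as break-even-ish.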
You (the person paying to co-locate hardware) don't buy the KVM that the colo facility uses. The colo facility hooks up the KVM that they own to your hardware and configures it so that you can access it. Once you stop paying to colo your hardware, you take your hardware back (or maybe pay them to dispose of it, I guess) and they keep the KVM, because it's theirs.
It's not clear who they're targeting (and it looks like they aren't really targeting anyone at all).
Just do the math: for a measly €2000 a month - a cashier's salary in Amsterdam - you'd already need 285 clients, and that's before taxes and other costs.
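Spelling that out (the ~€7/client/month price is my inference from the €2000 / 285 figures above, not something stated here):

    salary = 2000               # EUR/month, a cashier's salary in Amsterdam
    per_client = 7.0            # EUR/client/month (assumed)
    print(salary / per_client)  # ~285.7 clients, before taxes and other costs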
It's akin to remembering phone numbers. Even 20 years ago I had 10-20 of the most important ones memorized, despite some of them being used rarely, i.e. once in a few years. Nowadays I have 'me myself' in my Contacts because I can't remember my own number despite having used it for 5+ years, nor do I care to.
Not needed. All your unused/unfilled space already serves as wear-leveling space. It wasn't needed even back then, besides some corner cases. And most importantly, 10% of a drive in ~2010 was 6-12GB; nowadays it's 50-100GB at least.
But even ignoring the wear-levelling issue, the spare space still fulfils a need by providing the ballast space that is the main thing we are talking about here. Of course there are other ways to manage that issue¹, but a bit of spare space in the volume group is the one I go for.
In fact, since enlarging live ext* filesystems has been very reliable² for quite some time and is quick, I tend to leave a lot of space unallocated initially and grow volumes as needed. There used to be a potential problem with that: fragmenting filesystems across the breadth of a traditional drive's head seek meant slower performance. But the difference is barely detectable in almost all cases³, and with solid-state drives it is even more of a non-issue.
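For concreteness, a minimal sketch of that grow path, driving the standard LVM2/e2fsprogs tools from Python (the device name is made up, and lvextend's -r flag would combine the two steps):

    # Sketch: grow a logical volume and the live ext4 filesystem on it.
    # Needs root; /dev/vg0/media is a made-up example device.
    import subprocess

    def grow(lv: str, amount: str = "+10G") -> None:
        subprocess.run(["lvextend", "-L", amount, lv], check=True)  # take free VG space
        subprocess.run(["resize2fs", lv], check=True)               # ext* grows while mounted

    grow("/dev/vg0/media")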
> And most importantly, 10% […] nowadays it's 50-100GB at least.
It doesn't have to be 10%. And the space isn't lost: it can be quickly brought into service when needed, that is the point, and if there is more than one volume in the group then I'm not allocating space separately to every filesystem, as would be needed with the files approach. It is all relative: my /home at home isn't anywhere near 50GB in total⁴, nor is / anywhere I'm responsible for, even if /var/log and friends are kept in the same filesystem. But if I'm down to as little as 50GB free on a volume hosting media files then I consider it very full, and I need to either cull some content or think about enlarging the volume (or the whole array, if there isn't much slack space available) very soon.
--------
[1] The root-only reserved blocks on ext* filesystems, though those don't help if a root process has overrun; or ballast files, as already mentioned above.
[2] Shrinking them is still a process I'd handle with care: it can be resource-intensive, it has to move a lot more data around (so there is more that could go wrong), and I've just not done it enough to be as comfortable with the process as I am with enlarging.
[3] You'd have to work hard to spread things far and randomly enough to make a significant difference.
[4] though it might be if I wasn't storing 3d print files on the media array instead of in /home
Empty space is good for wear-leveling but enforcing a few percent extra helps.
> And most importantly, 10% of a drive in ~2010 was 6-12GB; nowadays it's 50-100GB at least.
Back then you were paying about $2 per gigabyte. Right now SSDs are 1/15th as expensive. If we use the prices from last year they're 1/30th, and if we also factor in inflation it's around 1/50th.
So while I would say to use a lower percentage as space increases, 50-100GB is no problem at all.
Only if you fill the drive up to 95-99% and do this often. Otherwise it's just a cargo cult.
> So while I would say to use a lower percentage as space increases
If your drive is over-provisioned (eg 960GB instead of 1024GB) then it's not needed. If not, and you fill your drive to the brim and just want to be safe, then you need the size of the biggest write you would do plus some leeway, eg if you often write 20GB video files for whatever reason then 30-40GB would be more than enough. Leaving 100GB of a 1TB drive unused is like buying sneakers but not wearing them because they would wear out.
> If your drive is over-provisioned (eg 960GB instead of 1024GB) then it's not needed.
I disagree. That much space isn't a ton when it comes to absorbing the wear of background writes. And normal use ends up with garbage sectors sprinkled around, inflating your data size, which makes write amplification get really bad as you approach 100% utilization and have to GC more and more. 6% extra is in the range where more will meaningfully help.
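As a toy illustration of that last point, here's a naive uniform-write model (my assumption for illustration, not how any particular controller behaves):

    # Toy model: reclaiming an erase block that is fraction u full rewrites u
    # valid data to free (1 - u) of space, so write amplification ~ 1 / (1 - u).
    def write_amp(u: float) -> float:
        return 1.0 / (1.0 - u)

    for u in (0.80, 0.90, 0.94, 0.99):
        print(f"{u:.0%} full -> ~{write_amp(u):.0f}x writes")
    # 80% -> 5x, 90% -> 10x, 94% -> 17x, 99% -> 100x: the last few percent hurt most.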
> Leaving 100GB of a 1TB drive unused is like buying sneakers but not wearing them because they would wear out.
50GB is like $4 of space at the prices from the last time most people bought an SSD. Babying the drive with $4 of headroom is very far from refusing to use it at all. The same goes for 100GB on a 4TB drive.
Nah, we used some consumer SSDs for write-heavy but not all that precious data, and time to live was basically directly dependent on the space left free on the device.
Of course, it doesn't matter for desktop use, as the on-drive spare area is enough. But still: if you have 24/7 write-heavy loads, making sure it's all trimmed will noticeably extend lifetime.
Yes, this is the reason a 0.3 DWPD drive is rated ten times lower than a 3 DWPD one. I know the horror stories of using Samsung EVOs for SQL loads, especially < 512GB.
But yes, without the actual use-case it's just speculation.
NB: the QVO drives I mentioned a year ago in the comments are still running, but I do make sure they are never filled more than 80%.
> Arguments like "twenty-four hours is short enough to not cause serious disruption of legitimate traffic" and "we already know that spam senders rarely use a fully compliant SMTP implementation to send their messages" are 20 years out of touch and completely void of connection with reality.
Just recently I found out that a very prominent local service's recovery emails were not being delivered to the end-user mailbox.
The reason? The email doesn't have a Message-ID. It gets generated, sent out, "my" PMG box receives it... and throws it out because there's no Message-ID. Adding insult to injury, these were password recovery emails; regular marketing ones go through fine.
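For anyone generating transactional mail, setting the header explicitly is cheap. A sketch with Python's standard library (the host and addresses are placeholders, not from the service above):

    # Always set a Message-ID when generating mail; some filters (like the
    # PMG box above) drop messages without one.
    import smtplib
    from email.message import EmailMessage
    from email.utils import make_msgid

    msg = EmailMessage()
    msg["From"] = "noreply@example.com"
    msg["To"] = "user@example.net"
    msg["Subject"] = "Password recovery"
    msg["Message-ID"] = make_msgid()   # unique, RFC 5322-style identifier
    msg.set_content("Your recovery link: ...")

    with smtplib.SMTP("mail.example.com") as s:
        s.send_message(msg)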
I feel that some people are just forgetting that the reason it's easy for them is that they learned that "swipe-based UI" ages ago.
When I'm handed an iPhone I have no clue how to even open an additional tab in Safari; the finger gestures don't do what I expect, nor is there a lick of indication of how to do anything. It's all just memorized magical incantations at this point. But hey, you are familiar with them, so it's easy to bash everyone who is not in your ecosystem.