I'm not posting to convince people they should use it, just that it's a really cool piece of open source infrastructure that I think is less well known, and I respect it. It is very configurable and tunable, with a lot of features, command-line tools, and things to learn, and that does require people with the skills and time.
That said, it doesn't need constant management; it's excellent at staying up even while damaged. As long as the cluster has enough free space, it will rebuild around any hardware failure without human intervention, so it doesn't need hot spares; and if you plan it carefully, it has no single point of failure. (The original creator explains the design choice of 'placement groups' and its tradeoffs in this video[1]).
Most of the management time I've spent has been on ageing hardware flaking out without actually failing: old disks erroring on reads, controllers failing and temporarily dropping all their disks, causing tens of seconds of read latency with knock-on effects, or the time we filled the cluster too full and it went read-only. The rest of the management work has been learning my way around it, upgrades, changing the way we use it for different projects, and onboarding and offboarding services that use it, all of which will vary with what you actually do with it.
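On that last point: a cluster that hits its full ratio stops accepting writes, so it's worth watching free space before Ceph makes the decision for you. Here's a minimal sketch of that kind of check in Go, shelling out to the real `ceph df --format json` command; the JSON field names are my assumption from memory, so verify them against what your Ceph version actually emits:

    package main

    import (
        "encoding/json"
        "fmt"
        "os"
        "os/exec"
    )

    // Subset of `ceph df --format json` output. Field names are
    // assumed from memory; check them against your cluster.
    type cephDF struct {
        Stats struct {
            TotalBytes      uint64 `json:"total_bytes"`
            TotalAvailBytes uint64 `json:"total_avail_bytes"`
        } `json:"stats"`
    }

    func main() {
        out, err := exec.Command("ceph", "df", "--format", "json").Output()
        if err != nil {
            fmt.Fprintln(os.Stderr, "ceph df failed:", err)
            os.Exit(1)
        }
        var df cephDF
        if err := json.Unmarshal(out, &df); err != nil {
            fmt.Fprintln(os.Stderr, "bad json:", err)
            os.Exit(1)
        }
        used := 1 - float64(df.Stats.TotalAvailBytes)/float64(df.Stats.TotalBytes)
        // Warn well before the default ~95% full ratio where writes stop.
        if used > 0.80 {
            fmt.Printf("WARNING: cluster is %.0f%% full, plan capacity now\n", used*100)
        }
    }

Cron it and wire it to alerting; the point is just to hear about it long before the read-only cliff.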
I've spent less time with VMware vSAN, but vSAN does a lot less: it takes your disks and gives you a VMFS datastore and maybe an iSCSI target. There can't be many alternatives that do what Ceph does, require less skill and effort, and don't involve paying a vendor to manage it for you and give you a web interface?
That was not my experience. Deploying and configuring Ceph was a nightmare due to the mountain of options and considerations, but once it was deployed, Ceph is extremely hands-off and resilient.
Cloudflare actually has this as a free-tier feature, so even if you don't want to use it for your site, you can just set up a throwaway domain on Cloudflare and periodically copy the robots.txt they generate from your scraper allow/block preferences, since they'll be keeping up to date with all the latest.
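If you want to automate the copying, a tiny sync job is enough. A minimal sketch in Go; the throwaway domain and destination path below are placeholders, and it assumes Cloudflare serves the managed file at the usual /robots.txt path:

    package main

    import (
        "io"
        "net/http"
        "os"
        "time"
    )

    // Hypothetical throwaway domain on Cloudflare and a local path
    // in your real site's docroot; both are placeholders.
    const (
        source = "https://throwaway-example.com/robots.txt"
        dest   = "/var/www/mysite/robots.txt"
    )

    func sync() error {
        resp, err := http.Get(source)
        if err != nil {
            return err
        }
        defer resp.Body.Close()
        body, err := io.ReadAll(resp.Body)
        if err != nil {
            return err
        }
        return os.WriteFile(dest, body, 0o644)
    }

    func main() {
        for {
            if err := sync(); err != nil {
                // Keep the old copy on failure; just log and retry later.
                os.Stderr.WriteString("sync failed: " + err.Error() + "\n")
            }
            time.Sleep(24 * time.Hour)
        }
    }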
Yeah I had a manager grill me like crazy about short stints on my resume while I was interviewing for DigitalOcean. He told me it looked like I wasn't dedicated or trustworthy.
He wasn't my manager, so I brushed it off, and 6 months into working at DO they started 3 rounds of enormous layoffs that were handled so poorly even the executives doing the layoffs got removed by the board.
So I left and got to add another short stint at a company run by craven morons to my resume :)
I was laid off at my last 3 positions and can really relate to this. If it's any consolation: how a company handles this is a good indication of the maturity of its management and recruiting function. I also strongly disagree with any assertion that "short stints = unreliable employee". Nobody can make that assertion without knowing what caused those stints, and the tech market from 2020 to today has been notoriously volatile.
There are plenty of great orgs out there that will hear you out before making assumptions, but as a rule most startups have fairly inexperienced management unless they were founded by a team that's been through the rodeo a few times.
We're on the forum where people are most capable of doing this for themselves.
And if your company uses Gmail, that is less than ideal for de-Googling, but it does not meaningfully impact the benefits of de-Googling your personal life.
Refusing to run all your search history, personal transactions, and correspondence through one of the fascist state's pet companies is still beneficial.
This isn't about timing the market by being clairvoyant about a madman's tariffs.
This is about making reasonable risk calculations as a small business with extremely high tariff exposure, when a president who imposed a bunch of high tariffs last time wins an election and says he'll do it again.
Sure, multi-trillion-dollar financial institutions didn't run for the hills, because they get paid when it goes up and paid when it goes down.
Yeah, I think it's better to think of Tailscale as an access control company that uses networking as the delivery vector, not a network utility company that also has access controls.
It would be nice if there were a common way for essentially feature-complete SaaS businesses to carry on and maintain some expected level of quality, security updates, and support without endless pressure to expand revenue or slash costs.
A lot of the pressure to expand revenue comes from within, thanks to the flaws of permanent employment: you hired permanent employees at a discount compared to the equivalent contractor/temporary workforce in exchange for a promise of perpetual employment. These people will thus do their best to ensure their perpetual employment (they will never say "hey, I think we've finished building your product, now you can lay us off to make profit").
You can of course sidestep that and use contractors for the initial build-out; plenty of agencies and freelancers will give you a quote with various terms. It'll cost far more in the short term, because you're essentially paying up front the years of salary they would otherwise earn building the same thing as permanent employees, but at least it's an upfront, honest transaction with no expectation of loyalty. You can then hire a permanent skeleton crew for the continuous upkeep.
Want a turnkey Vimeo you can deploy on a cloud, or truckloads of servers you can just rack up in a colo? If you have a spare $1.4B lying around, I'm sure that can be arranged.
Does RAMP or CURE offer any possibility of conditional writes with CRDTs?
I have had these papers on my list to read for months, specifically wondering if they could be applied to Garage.
I had a quick look at these two papers, and it looks like neither of them allows the implementation of compare-and-swap, which is required for If-Match / If-None-Match support. They have a weaker definition of a "transaction", which is to be expected, as they implement causal consistency at best and not consensus, whereas consensus is required for compare-and-swap.
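For context, the compare-and-swap in question is the conditional PUT that S3-compatible stores express through standard HTTP preconditions. Here's a minimal client-side sketch in Go against a hypothetical endpoint (auth/signing omitted; whether a given store honors these headers on PUT is exactly the open question):

    package main

    import (
        "bytes"
        "fmt"
        "net/http"
    )

    // Hypothetical object URL; SigV4 signing omitted for brevity.
    const objURL = "https://s3.example.com/bucket/key"

    // createIfAbsent only succeeds if no object exists yet:
    // If-None-Match: * makes the server reject the PUT with 412
    // when the key already has any ETag.
    func createIfAbsent(body []byte) (*http.Response, error) {
        req, err := http.NewRequest(http.MethodPut, objURL, bytes.NewReader(body))
        if err != nil {
            return nil, err
        }
        req.Header.Set("If-None-Match", "*")
        return http.DefaultClient.Do(req)
    }

    // compareAndSwap only succeeds if the object's current ETag still
    // matches the one we read earlier -- classic CAS over HTTP.
    func compareAndSwap(etag string, body []byte) (*http.Response, error) {
        req, err := http.NewRequest(http.MethodPut, objURL, bytes.NewReader(body))
        if err != nil {
            return nil, err
        }
        req.Header.Set("If-Match", etag)
        return http.DefaultClient.Do(req)
    }

    func main() {
        resp, err := compareAndSwap(`"abc123"`, []byte("new value"))
        if err != nil {
            panic(err)
        }
        defer resp.Body.Close()
        // 412 Precondition Failed means someone else won the race;
        // re-read the object and retry with the fresh ETag.
        fmt.Println(resp.Status)
    }

The catch is that a 412 is only meaningful if the server can agree on a single current ETag for the object, and that agreement is the consensus step these papers avoid.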
When I was there, DigitalOcean was writing a complete replacement for the Ceph S3 gateway because its performance under high concurrency was awful.
They completely swapped that service out of the stack and wrote a new one in Go, because of how much better its concurrency management was, and because Ceph's team and C++ codebase were too resistant to change.
Unrelated, but one of the more annoying aspects of whatever software they use now is the lack of IPv6 support on the CDN layer of DigitalOcean Spaces. It means I need to proxy requests myself. :(
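For anyone else stuck on this, the workaround can be as small as a dual-stack reverse proxy in front of the Spaces/CDN endpoint. A minimal sketch with Go's net/http/httputil; the bucket endpoint is a placeholder, and TLS termination and caching are left out:

    package main

    import (
        "net/http"
        "net/http/httputil"
        "net/url"
    )

    func main() {
        // Placeholder Spaces endpoint; substitute your own bucket/region.
        origin, err := url.Parse("https://example-bucket.nyc3.digitaloceanspaces.com")
        if err != nil {
            panic(err)
        }

        proxy := httputil.NewSingleHostReverseProxy(origin)
        // Rewrite the Host header so the origin routes to the right bucket.
        director := proxy.Director
        proxy.Director = func(r *http.Request) {
            director(r)
            r.Host = origin.Host
        }

        // Listening on [::] gives clients IPv6 even though the
        // upstream connection to Spaces goes out over IPv4.
        if err := http.ListenAndServe("[::]:8080", proxy); err != nil {
            panic(err)
        }
    }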