> I'm not entirely sure why buzz around "developer learns basic knowledge" has this on the front page.
The problem is that in the old days, not knowing about indexes left you with an underperforming system or downtime. But in The Cloud™ it leaves you with an unreasonably huge bill, and somehow, as an industry, we're accepting this as normal.
Which really is a head scratcher. You'd figure that, especially as a startup, a 5k oopsie isn't really acceptable. Mistakes do happen, and I don't mean to throw any shade at this particular person (they'll never make this mistake again), but across the industry the aggregate consequence is a lot of waste and poor choices that have to be cleaned up when more knowledgeable (read: highly paid) people are brought in later on.
They'll have to clean up a mess that causes real business consequences which, and I've personally seen this, directly impact the bottom line and offer no quick or easy way to wiggle out.
Maybe it's acceptable for products like this, because the trade-off between good engineering and company health probably isn't as cut and dried, but stuff like this always makes me sad because it's such low-hanging fruit: it doesn't require any real effort, just basic curiosity about your job.
I have no problem with a developer making a 5k oopsie in an area with legitimate potential for direct monetary losses, like payment processing, where a bug could allow customers to order goods without actually paying.
I have a problem with whoever looked at <insert your favorite on-prem RDBMS here> and said "nah, let's go with a cloud-based solution that charges per-query and gives us an essentially infinite financial liability".
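To put rough numbers on that "infinite financial liability", here's a back-of-the-envelope sketch in Python. Every figure (collection size, traffic, per-read price) is invented purely for illustration; the point is that an unindexed query makes the bill scale with table size times request volume.

```python
"""Why an unindexed query on a per-read-billed cloud DB explodes the bill.
All numbers below are made up for illustration; substitute your provider's
actual pricing and your real traffic."""

ROWS_IN_COLLECTION = 1_000_000   # hypothetical collection size
REQUESTS_PER_DAY = 5_000         # hypothetical traffic hitting the query
PRICE_PER_MILLION_READS = 0.50   # hypothetical $ per million row/document reads

# Without an index, every request scans the whole collection,
# and every scanned row is a billable read.
unindexed_reads = ROWS_IN_COLLECTION * REQUESTS_PER_DAY
print(f"unindexed: ${unindexed_reads / 1e6 * PRICE_PER_MILLION_READS:,.2f}/day")  # $2,500.00/day

# With an index, each request reads roughly only the rows it actually returns.
indexed_reads = 50 * REQUESTS_PER_DAY  # assume ~50 matching rows per query
print(f"indexed:   ${indexed_reads / 1e6 * PRICE_PER_MILLION_READS:,.2f}/day")    # $0.12/day
```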
It's not so clear-cut. What's the cost of losing the entire on-prem database? Do you trust a company that hired a developer who didn't know about indexes to hire a rock-solid DBA? And how much does that DBA cost?
> What's the cost of losing the entire on-prem database?
Backing up an on-prem DB doesn't require specialist DBA knowledge. Basic UNIX skills are enough. Not to mention, since you're not in the cloud, bandwidth or efficiency is not a concern - feel free to rsync your entire DB off to a backup server every 5 minutes.
> to hire a rock solid DBA? And how much does that DBA cost?
They didn't have a DBA here either, and this "cloud" didn't save them. But at least with an on-prem Postgres the worst they'd have is significantly reduced performance*, whereas here they got a 5k bill.
*Actually, the price/performance ratio of bare-metal servers is so good that a $100/month server would probably take their unindexed query and cope just fine (as a side effect of the entire working dataset fitting in the kernel's RAM-based IO cache).
Rsyncing the database files won't work in many cases because it doesn't ensure your backup is consistent. That is really, really dangerous advice, especially since you might not notice the problem if you only test the process while the database is idle.
For Postgres you either use the pg_dump command and back up the dump, or you set up WAL archiving and save base backups plus the WAL files as they are created.
This isn't rocket science, but you really should read the manual at least once before doing this. Just copying the files is not the right way to back up a database (unless you really know what you're doing and ensure consistency some other way).
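If it helps, here's a minimal sketch of the pg_dump route (database name, paths and schedule are made up; it assumes pg_dump is on the PATH and that credentials come from the usual PG* environment variables or ~/.pgpass). WAL archiving for point-in-time recovery is the other option and is configured in postgresql.conf rather than scripted like this.

```python
#!/usr/bin/env python3
"""Crude backup sketch using pg_dump. Everything here (paths, DB name) is a
placeholder; read the Postgres backup chapter before relying on it."""
import datetime
import pathlib
import subprocess

BACKUP_DIR = pathlib.Path("/var/backups/postgres")  # hypothetical location
DB_NAME = "appdb"                                    # hypothetical database name

def run_backup() -> pathlib.Path:
    BACKUP_DIR.mkdir(parents=True, exist_ok=True)
    stamp = datetime.datetime.now().strftime("%Y%m%d-%H%M%S")
    target = BACKUP_DIR / f"{DB_NAME}-{stamp}.dump"
    # Unlike copying the data directory, pg_dump gives a consistent snapshot
    # even while the database is in use; -Fc is the custom format pg_restore reads.
    subprocess.run(["pg_dump", "-Fc", "--file", str(target), DB_NAME], check=True)
    return target

if __name__ == "__main__":
    print(f"wrote {run_backup()}")
```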
I am not saying that rsync or cp is the right way to back up a DB; I was just giving a very crude example. I absolutely agree with the issues you're raising.
However, I'd still take recovering a DB that has been backed up by rsync/cp over a DB that's not been backed up at all. If you really can't be bothered to do it the right way, you're still better off doing something than running with no backups at all.
HA/clustered/replicated DB setups are not rocket science. Backups are not rocket science. Irrevocably losing an on-prem database has never happened to me in 20 years.
This is not a matter of hiring an elite DBA; it's a matter of reading the manual. Both indices and backups are right there, in chapters 7 and 8 respectively. And that's worth doing regardless of whether you are running your own database or using someone else's as-a-service.
There are also many options in between "cloud-based infinity scale" and "on-prem". You can use cloud services that abstract away many of the day-to-day operational tasks of DB management but whose price is still bounded by your monthly instance cost.
If someone came to me and tried to sell me a service that could leave me with an infinite bill, I'd look at them funny and walk away. But that's just me; maybe I just don't get it and I'm not "startuping" right.
Cool, and if this were a case of bikeshedding over something that hadn't just cost that early-stage startup 5 grand, I'd agree with you.
However, regurgitating a platitude that everyone, including myself, learned when trying to get our first business off the ground doesn't add much value here.
Had this been a database with 10 million rows, it would have cost them 50k, and this is incredibly basic programming knowledge.
Basic proficiency is a far cry from chasing the best technical talent, and it's not a particularly egregious ask.
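For a sense of how low the bar is, here's a toy sqlite3 session (table, columns and data are all invented) showing what a single CREATE INDEX does to a query plan; on a per-read-billed service the first plan is the one that turns into the surprise bill.

```python
"""Toy illustration of what one CREATE INDEX buys you, using only the
standard-library sqlite3 module. Table, columns and row counts are made up."""
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE orders (id INTEGER PRIMARY KEY, customer_id INTEGER, total REAL)")
conn.executemany(
    "INSERT INTO orders (customer_id, total) VALUES (?, ?)",
    ((i % 1000, float(i)) for i in range(100_000)),
)

query = "SELECT * FROM orders WHERE customer_id = 42"

# Without an index the planner reports a full scan: every row is read.
print(conn.execute(f"EXPLAIN QUERY PLAN {query}").fetchall())

# One line of DDL later, the same query becomes an index lookup.
conn.execute("CREATE INDEX idx_orders_customer ON orders (customer_id)")
print(conn.execute(f"EXPLAIN QUERY PLAN {query}").fetchall())
```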
I've been involved with some terrible software behind reasonably successful businesses. I complained like the best of them while having to clean up the horrible mess. But it worked: they used their limited competence and limited funds and built something profitable.
Yes, we really should not accept this. The ability to impose limits on spending is key to controlling an enterprise. The whole security-certification guacamole is based on having established controls, yet where the bit hits the fan, control is absent.
Using money to solve business problems is good business sense, but only if that’s the best way to spend that money. I agree with you that the status quo is normal, but nonsensical.