We use BQ and Metabase heavily at work. Our BQ analytics pipeline is several hundred TBs. In the beginning we had our data (engineer|analyst|person)s run amok and run up a BQ bill of around $4,000 per month.
By far the biggest wins were:
- the partition key was optional -> fix: make it required
- queries bypassed the BQ caching layer -> fix: make queries use deterministic inputs [2]
It took a few weeks to go through each query using the metadata tables [1], but it was worth it. In the end our BQ analysis pricing was down to something like $10 per day.
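For anyone wanting to do the same, here is a rough sketch of what those fixes look like in SQL. The dataset, table, and column names are made up for illustration, not our actual schema:

    -- Require a partition filter so unfiltered scans fail instead of billing for TBs.
    ALTER TABLE `analytics.events`
    SET OPTIONS (require_partition_filter = TRUE);

    -- Keep query inputs deterministic so the 24h result cache can actually be hit.
    -- A literal (or parameterised) date range is cacheable; CURRENT_TIMESTAMP() never is.
    SELECT COUNT(*) AS signups
    FROM `analytics.events`
    WHERE event_date BETWEEN '2024-11-01' AND '2024-11-30'
      AND event_type = 'signup';

    -- Find the worst offenders via the job metadata tables.
    SELECT user_email, query,
           total_bytes_billed / POW(1024, 4) AS tib_billed
    FROM `region-us`.INFORMATION_SCHEMA.JOBS_BY_PROJECT
    WHERE creation_time > TIMESTAMP_SUB(CURRENT_TIMESTAMP(), INTERVAL 30 DAY)
    ORDER BY total_bytes_billed DESC
    LIMIT 20;

(The region qualifier on INFORMATION_SCHEMA depends on where your datasets live.)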
I'm amazed that this has reached the almost-top of HN.
It's a very confused article that (IMO) is AI Slop.
1. Naming conventions are a weird one to start with, but okay. For the most part you don't need to worry about this for PKs, FKs, indexes, etc.; Postgres will generate those names for you automatically, following a consistent convention.
2. Performance optimization: yes, indexes are great. But don't name them manually; let PG name them for you. Also, where possible _always_ create indexes concurrently, which avoids blocking writes on the table. Important if you have any kind of scale (see the sketch after this list).
3. Security: a bit of a weird jump, as you've gone from app-tier concerns to PG management concerns. But I would say RLS isn't worth it. The best security is going to be tightly controlling what can read and write, and pointing your reads at a read-only replica.
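To make points 2 and 3 concrete, something along these lines; the table, database, and role names are just placeholders:

    -- Let Postgres pick the index name; CONCURRENTLY avoids blocking writes.
    -- (Note: CREATE INDEX CONCURRENTLY can't run inside a transaction block.)
    CREATE INDEX CONCURRENTLY ON orders (customer_id);
    -- Postgres will name this orders_customer_id_idx for you.

    -- Tightly control reads instead of reaching for RLS: a dedicated read-only role.
    CREATE ROLE reporting_ro NOLOGIN;
    GRANT CONNECT ON DATABASE app TO reporting_ro;
    GRANT USAGE ON SCHEMA public TO reporting_ro;
    GRANT SELECT ON ALL TABLES IN SCHEMA public TO reporting_ro;
    ALTER DEFAULT PRIVILEGES IN SCHEMA public GRANT SELECT ON TABLES TO reporting_ro;

Point reporting and other read-heavy traffic at the replica with a role like that, and give the app's write role only the tables it actually needs.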
If you have multiple unique constraints on the same table, Postgres will spit back the same error code for all of them. I want to know the name of the specific constraint that was violated.
Always name your indexes. Always name your constraints.
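e.g. something along these lines (table and column names invented for illustration):

    -- Name each unique constraint explicitly rather than relying on auto-generated names.
    ALTER TABLE users
      ADD CONSTRAINT users_email_unique UNIQUE (email);

    ALTER TABLE users
      ADD CONSTRAINT users_username_unique UNIQUE (username);

    -- Both violations raise the same SQLSTATE, 23505 (unique_violation), but the error
    -- also carries the constraint name ("users_email_unique" vs "users_username_unique"),
    -- so the app can map it to a specific user-facing message.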
I mean, if you have that many violations, that points to other problems, but sure, if you're in that situation I can see how catching by name might be helpful. It's not a panacea.
Exactly that. Fintech forced Financial Institutions into Digital Transformation. Now that they have caught up, there is no "next big thing" for Fintech. Crypto might have been it, but it killed itself with a terrible UI and a never-ending stream of scams and frauds. I believe there is an AI Agent Internet of Money that could end Ad Revenue, but I haven't found the arbitrage model yet.
Halifax is owned by Lloyds Banking Group, and their current app is just the exact same Lloyds app with a different logo. I know because I bank with both and the apps are identical.
Prior to the merging of their online services, you're correct that Halifax had its own app, and it was terrible.
But at that time Lloyds had a great app; they just hadn't unified the back-end tech of all the different bank brands they own.
It wasn't disruption from startups that caused the improvement; it was the parent company taking its time to merge the decent tech it had developed for itself.
Yeah, I feel like Monzo and Starling really forced the high street banks to up their game. A friend of mine was a PM on an app at one of the big high street banks, and they said the instruction from the top was explicitly to be like Monzo. And they did iterate, get better at app development, and ship a bunch of features that people like (spending notifications, in-app card freezing, etc.).
Bear in mind that's a measure of how backwards US banking is, not how advanced.
In the UK, I can't remember the last time I wrote or received a cheque. Maybe twice in the 17 years I've been living here, and certainly not in the last decade.
So with UK cheque usage being a tiny fraction of the US rate, there's simply no demand for it in banking apps.
I get a couple of cheques a year from family in the UK. It's an infrequent transaction but an important one, and cheque scanning is actually the only reason I maintain my legacy bank account.
> Has there been any other behavior like this in the past where a company "shut themselves down" to make a big political statement and then almost immediately undid the shut down?
That wasn't a political statement. Per your link, it was a belief that they could not continue processing credit card payments while staying in compliance with the law.
Probably not the place to post this feedback, but in general I get excited about what Cloudflare have been releasing in 2024. I'm borderline desperate to try them out in a business setting.
The only thing that really stops me is the horror stories I hear about random billing issues and, on top of that, account closures.
That is something I'm _never_ worried about with AWS.
On the off chance that someone from CF is reading this feedback.
This is pretty cool. I ran it against one of our largest customers' sites and it was very interesting to see how the pages all interconnect. I'm pretty sure it can be used to spot architecture/performance problems.
The first and immediate difference for me is the ability to recall the name. I can recall Postman/Insomnia fine, and now API Parrot too. I'm never going to be able to recall mitmproxy2swagger.
Ha! Nicely played. That was out of pure laziness. I don't like using one handle across sites, so I take the first 8 chars of (New-Guid).ToString() and then dump it in my password manager.
As someone who uses mitmproxy and swagger quite often, I actually think the name isn't so bad. I haven't even looked at the readme but I already know what it does, how to run it and what output to expect.
I often forget the names of things, sometimes even the big ones. GitHub search is one of the primary ways I rediscover them. "reverse-engineer API" returns mitmproxy2swagger as the third result, and that's how I found it the last time I needed it.
It is a bit frustrating when a project on GitHub doesn't have good tags or searchable keywords, making it harder to find.
The first year was the best year for me. It was really fun, and I think I got 22/24 days done. Since then my participation rate has been shocking; I really want to do it, but I get this weird anxiety that I'm not quick enough.
Which is weird because that is not a thought that entered my mind when I did it for the first time. It was pure fun!