Nothing exaggerated; your comment implied scale was your justification. In that case, there'd better have been some crazy high load that brought the tool you already had to its knees to justify paying for an additional closed-source platform and manually piping data to it on top of your main data store.

Of course, if I sounded incredulous, it's because I didn't think you had that scale, and it sounds like I was correct?




No, you're not correct. We have over $2M ARR, and with the amount of data we're storing it would be downright stupid to use Supabase.

We also don't "pipe" our data to Supabase; we use a couple of different data stores depending on what best fits the use case. For example, we also use R2 and Durable Objects.

Just because you have a hammer doesn't mean everything is a nail.


Well, maybe we have different definitions of scale: I think my team spends about $2M a month on compute, so we don't pride ourselves on randomly pulling in new vendors.


You are incredibly dense


When you're playing checkers, people playing by the rules of chess might seem dense.


It's pretty incredible how you know more about our needs, usage, and infrastructure than we do :D

Also, don't you think it's funny to call out somebody else's tech choices when you have zero insight into them and they've worked out perfectly for us?

By the way, how many TB of vector data are you storing in Postgres and retrieving with minimal latency?
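
For context, what's being described here is a pgvector-style nearest-neighbor lookup in Postgres. Below is a minimal sketch in Python, assuming psycopg 3 and the pgvector extension; the table, columns, and connection string are hypothetical placeholders, not anyone's real schema:

    # Minimal sketch (assumptions: psycopg 3 installed, pgvector extension enabled;
    # table/column names and the connection string are hypothetical placeholders).
    import psycopg

    SQL = """
        SELECT id, payload
        FROM embeddings                      -- hypothetical table of stored vectors
        ORDER BY embedding <-> %s::vector    -- pgvector L2 distance operator
        LIMIT 10;
    """

    def nearest_neighbors(conn_str, query_vec):
        # pgvector accepts a text literal like '[0.1,0.2,...]' cast to vector
        vec_literal = "[" + ",".join(str(x) for x in query_vec) + "]"
        with psycopg.connect(conn_str) as conn:
            return conn.execute(SQL, (vec_literal,)).fetchall()

At multi-TB scale the latency question hinges less on the query itself than on indexing (pgvector's IVFFlat or HNSW) and on keeping the working set in memory, which is where the "which store do we use" debate above comes from.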


I work on autonomous vehicles: we generate more data every day than you likely generate in 10 lifetimes of your CRUD app escapades.


Bragging about generating lots of data? Wow, cool buddy

Literally irrelevant lol



